Install and Configure Logstash

Install and Configure Logstash for Log Forwarding and Aggregation

Logstash can serve as a local, on-premise log aggregator and forwarder that sends logs to DLA running in the CloudFabrix SaaS environment. The following are instructions for installing Logstash in the customer's environment.

Prerequisites

Logstash Version - 2.x / 5.x / 6.x / 7.x

Software    Version
Java        Java version 8 or Java version 11. Ensure the JAVA_HOME environment variable is set.
Linux       CentOS 7.x, Debian 8.x, Ubuntu 18.04

Installation

Detailed installation instructions for the most popular Linux distributions are provided on the official website. An abridged version of the instructions for some common distributions is provided here.

How to install Java and Logstash on the client side
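As a sketch for a CentOS 7.x host (assuming yum and the Elastic 7.x package repository; apt-based distributions follow the analogous steps from the official documentation), Java and Logstash can be installed as follows:

```shell
# Install OpenJDK 11 and point JAVA_HOME at it (the exact path may differ per distribution)
sudo yum install -y java-11-openjdk-devel
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk

# Import the Elastic GPG key and add the Logstash 7.x yum repository
sudo rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
sudo tee /etc/yum.repos.d/logstash.repo <<'EOF'
[logstash-7.x]
name=Elastic repository for 7.x packages
baseurl=https://artifacts.elastic.co/packages/7.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
EOF

# Install Logstash and start it as a service
sudo yum install -y logstash
sudo systemctl enable --now logstash
```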

Basic Configuration

The following example tails the /var/log/messages file and forwards every line to your Logs App. To start pushing logs, create a file named /etc/logstash/conf.d/logsene.conf with the text below and restart Logstash.

input {
  file {
    path => "/var/log/messages"
    start_position => "beginning"
  }
}

output {
  elasticsearch {
    # use port 80 for plain HTTP, instead of HTTPS
    hosts => "logsene-receiver.cfxDLA.com:443"
    # set to false if you don't want to use SSL/HTTPS
    ssl => true
    index => "db0461c5-7106-4b3e-b2af-42f20fd95b0f"
    manage_template => false
  }
}
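Assuming a package-based install (paths may vary on your distribution), the new pipeline can be checked for syntax errors and Logstash restarted with:

```shell
# Validate the configuration without starting the pipeline
sudo /usr/share/logstash/bin/logstash --config.test_and_exit \
  -f /etc/logstash/conf.d/logsene.conf

# Restart Logstash so it picks up the new pipeline
sudo systemctl restart logstash
```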

Configuration File

The sample Logstash configuration file (logstash-sample.conf) is located in the /etc/logstash/ folder where Logstash is installed. Once the file is updated, copy it to the /etc/logstash/conf.d folder, as configured in the pipelines.yml file (also located in /etc/logstash). A Logstash configuration file contains three sections: input, filter, and output.
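For reference, a minimal pipelines.yml that loads every configuration file from /etc/logstash/conf.d (the stock layout shipped with package installs) looks like:

```yaml
- pipeline.id: main
  path.config: "/etc/logstash/conf.d/*.conf"
```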

Filebeat installation and configuration is covered in detail in 'Install and Configure Filebeat'.

  • Input: The input section specifies the address and port on which Logstash listens for incoming Filebeat (Beats) connections.

input {
  beats {
    client_inactivity_timeout => "300"
    host => "<Logstash Listen IP Address>"
    port => "<Logstash Listen Port>"
  }
}

Note :- “client_inactivity_timeout” is specified in seconds. The “host” field is the address on which Logstash listens for Filebeat connections and the “port” field is the listening port (5044 is the conventional Beats port); point Filebeat's output at this same address and port.

  • Filter: The filter section defines the grok pattern(s) used to parse each log line. The grok pattern must match the format of the log file configured as an input in filebeat.yml.

filter {
  grok {
    match => { "message" => ["<Grok Pattern of the log>"] }
  }
}
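As an illustration (assuming standard syslog-formatted lines such as those in /var/log/messages; adjust the pattern to your own log format), a concrete grok filter might look like:

```
filter {
  grok {
    # Parse a classic syslog line into timestamp, host, program, pid and message fields
    match => { "message" => ["%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}"] }
  }
}
```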
  • Output: The output section specifies the destination to which the parsed logs are forwarded, in this example a Kafka cluster over SSL. Replace the IP addresses with those of your Kafka brokers.

output {
  kafka {
    codec => json
    topic_id => "<Kafka Topic Id>"
    bootstrap_servers => "ip address:9093,ip address:9093,ip address:9093"
    security_protocol => "SSL"
    ssl_endpoint_identification_algorithm => ""
    ssl_truststore_location => "<Location of the Truststore>"
    ssl_truststore_password => "<Password of the Truststore>"
  }
}

Note :- Setting “ssl_endpoint_identification_algorithm” to an empty value disables server hostname verification; set it to “https” only if the hostnames in the broker certificates match the broker addresses you connect to.
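As a sketch (the CA certificate file name ca-cert.pem and the truststore path are hypothetical examples), a JKS truststore containing the Kafka brokers' CA certificate can be created with the JDK's keytool:

```shell
# Import the broker CA certificate (ca-cert.pem is an example name) into a new JKS truststore
keytool -importcert \
  -alias CARoot \
  -file ca-cert.pem \
  -keystore /etc/logstash/kafka.client.truststore.jks \
  -storepass "<Password of the Truststore>" \
  -noprompt
```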
