Sunday 10 September 2017

Set up an ELK stack (CentOS)

# -------------------------------------------------------------------------------------------------------------------------------------
# Elasticsearch
# -------------------------------------------------------------------------------------------------------------------------------------

Install Java (Elasticsearch 5.x requires Java 8):
yum install -y java-1.8.0-openjdk

Install elasticsearch: 
yum install -y elasticsearch-5.5.2.rpm
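If you don't already have the RPM locally, it can be downloaded from Elastic's artifact server first (URL assumed from the standard 5.x download path):
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-5.5.2.rpm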

Start elasticsearch: 
service elasticsearch start

Configure elasticsearch:
Edit /etc/elasticsearch/elasticsearch.yml
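At a minimum you'll probably want to set network.host so Elasticsearch listens on the address used in the examples below (a minimal sketch, adjust to your environment):

network.host: 10.1.1.100

Restart the service after changing the config:
service elasticsearch restart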

Test elasticsearch:
curl http://10.1.1.100:9200

This should return something like the following:
{
  "name" : "BT_A1W0",
  "cluster_name" : "elasticsearch",
  "cluster_uuid" : "85TTdoncTPqC2vAnEufDWQ",
  "version" : {
    "number" : "5.5.2",
    "build_hash" : "b2f0c09",
    "build_date" : "2017-08-14T12:33:14.154Z",
    "build_snapshot" : false,
    "lucene_version" : "6.6.0"
  },
  "tagline" : "You Know, for Search"
}


Create entry in elasticsearch:
curl -XPUT '10.1.1.100:9200/my_index/my_type/my_id' -H 'Content-Type: application/json' -d'
{
  "user": "bob",
  "post_date": "2009-11-15T14:12:12",
  "message": "hello"
}'

The PUT path takes the form /index/type/id. If you'd rather have Elasticsearch generate the id for you, POST to /index/type instead.
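To read the document back, GET the same path (add ?pretty for readable output):

curl "http://10.1.1.100:9200/my_index/my_type/my_id?pretty"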

List indexes:
curl "http://10.1.1.100:9200/_cat/indices"


Delete all indexes:
curl -X DELETE "http://10.1.1.100:9200/_all"
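Deleting _all removes every index, so outside of a test box you'll probably want to delete a single index instead, e.g.:

curl -X DELETE "http://10.1.1.100:9200/my_index"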



# -------------------------------------------------------------------------------------------------------------------------------------
# Logstash
# -------------------------------------------------------------------------------------------------------------------------------------

Install logstash:
yum install -y logstash-5.5.2.rpm
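As with elasticsearch, the RPM can be downloaded from Elastic's artifact server first (URL assumed from the standard 5.x download path):
wget https://artifacts.elastic.co/downloads/logstash/logstash-5.5.2.rpm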

Configure logstash: 

Assuming you have the Apache HTTP server installed, create /etc/logstash/conf.d/logstash.conf with the following contents:
input {
  # Tail the Apache access logs, starting from the end of the file
  file {
    path => "/var/log/httpd/access*"
    start_position => "end"
  }
}

filter {
  # Parse each line with the standard combined Apache log pattern
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
  # Use the timestamp from the log line as the event's @timestamp
  date {
    match => [ "timestamp" , "dd/MMM/yyyy:HH:mm:ss Z" ]
  }
}

output {
  # Print parsed events to the console (useful while testing)
  stdout {
    codec => rubydebug
  }
  # Ship events to Elasticsearch
  elasticsearch {
    hosts => ["10.1.1.100:9200"]
  }
}
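You can ask logstash to validate the config without actually starting it (the --config.test_and_exit flag is available in logstash 5.x):

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.test_and_exit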



Start logstash

To test that things are working correctly, you can start in the foreground as follows:
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf

Once you're happy you may want to create a basic init.d script, e.g.:

#!/bin/bash

function start() {
        echo "Starting logstash..."
        /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/logstash.conf --config.reload.automatic > /var/log/logstash/logstash.log 2>&1 &
}

function stop() {
        echo "Stopping logstash..."
        pkill -9 -f '.+logstash/runner.rb'
}

case $1 in
        start)
                start
                ;;

        stop)
                stop
                ;;

        restart)
                stop
                sleep 3
                start
                ;;

        *)
                echo ""
                echo "Usage $0 start|stop|restart"
                echo ""
                ;;
esac
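To use it, save the script as e.g. /etc/init.d/logstash (path assumed), make it executable and manage it in the usual way:

chmod +x /etc/init.d/logstash
service logstash start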



Once things are up and running, hit your Apache server with an HTTP request (e.g. curl http://10.1.1.100/, assuming Apache is listening on that address). Because you've set up a stdout output you'll get something similar to the following in /var/log/logstash/logstash.log:

{
        "request" => "/",
        "agent" => "\"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36\"",
        "auth" => "-",
        "ident" => "-",
        "verb" => "GET",
        "message" => "10.1.1.1 - - [31/Aug/2017:20:47:12 +0000] \"GET / HTTP/1.1\" 302 233 \"-\" \"Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9_5) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/60.0.3112.101 Safari/537.36\"",
        "path" => "/var/log/httpd/access_log",
        "referrer" => "\"-\"",
        "@timestamp" => 2017-08-31T20:47:12.000Z,
        "response" => “200",
        "bytes" => "233",
        "clientip" => "10.1.1.1",
        "@version" => "1",
        "host" => "jupiter",
        "httpversion" => "1.1",
        "timestamp" => "31/Aug/2017:20:47:12 +0000"
}



Also, because you set up an elasticsearch output, the event will have been indexed in Elasticsearch too.

Check for the index as follows:

curl "http://10.1.1.100:9200/_cat/indices/logstash-*"

Take a look at the index as follows:

curl "http://10.1.1.100:9200/logstash-*?pretty"


# -------------------------------------------------------------------------------------------------------------------------------------
# Kibana
# -------------------------------------------------------------------------------------------------------------------------------------

Install Kibana
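If you don't already have the tarball, it can be fetched from Elastic's artifact server (URL assumed from the standard 5.x download path):
wget https://artifacts.elastic.co/downloads/kibana/kibana-5.5.2-linux-x86_64.tar.gz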

tar xvzf kibana-5.5.2-linux-x86_64.tar.gz -C /opt
cd /opt
mv kibana-5.5.2-linux-x86_64 kibana

Configure Kibana
Edit /opt/kibana/config/kibana.yml, set server.host and elasticsearch.url
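For example (a minimal sketch, assuming Elasticsearch is on the same host as above):

server.host: "10.1.1.100"
elasticsearch.url: "http://10.1.1.100:9200"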

Start kibana

cd /opt/kibana/bin && ./kibana &

Test kibana
  • In a browser, navigate to http://10.1.1.100:5601
  • Navigate to dev tools
  • In the left panel type GET _cat/indices and click the green arrow to run the query
  • In the results window you should see the index created by logstash (from the request you made to Apache earlier), similar to:
    • yellow open logstash-2017.08.31 uzTssKBfTuecbgGzth-ViA 5 1 2 0 23.2kb 23.2kb
      
  • Navigate to Management
  • Select Index patterns 
  • Create an index pattern of logstash-*
  • Our grok filter that we set up in logstash should have created approx 38 fields from the Apache log; you should see that many of these are aggregatable, which means they can be used in visualisations
  • Navigate to Visualize > create a new visualisation > Vertical bar
  • Select the logstash-* index pattern you created earlier
  • You'll see the Y-axis is already set up as Count
  • Under Buckets select X-Axis
  • For Aggregation select Date Histogram with a field of @timestamp
  • Add a sub-bucket with a bucket type of Split Series
  • Select a sub-aggregation of Terms with a field of response.keyword
  • Run the visualisation by pressing the blue/white arrow (top left)

