ELK + Kafka integration

1. Because this project uses Log4j2, the Kafka appender is configured directly in log4j2.xml:
    <Kafka name="Kafka" topic="XX_log">
        <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss}||%p||%c{1}||XX_web||%m%n"/>
        <Property name="bootstrap.servers">127.0.0.1:9092</Property>
        <Property name="timeout.ms">500</Property>
    </Kafka>

The pattern in PatternLayout joins the fields with || so that Logstash can split them later. The timeout.ms property is added so that if the logging pipeline hangs, it fails fast without much impact on the business system. Kafka can of course run as a cluster, in which case the multiple bootstrap.servers addresses are separated by ",". XX_web identifies the current business platform.
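For context, a minimal log4j2.xml wiring this appender to the root logger might look like the sketch below; the Console appender, the log levels, and the org.apache.kafka logger guard are assumptions, not part of the original setup (the KafkaAppender also needs the kafka-clients jar on the classpath):

<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Kafka name="Kafka" topic="XX_log">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss}||%p||%c{1}||XX_web||%m%n"/>
            <Property name="bootstrap.servers">127.0.0.1:9092</Property>
            <Property name="timeout.ms">500</Property>
        </Kafka>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %p %c{1} %m%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <!-- keep Kafka's own client logs off the Kafka appender to avoid recursion -->
        <Logger name="org.apache.kafka" level="warn"/>
        <Root level="info">
            <AppenderRef ref="Kafka"/>
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>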
2. Build a Kafka cluster. I won't go into detail here; the official documentation covers it well.
zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183
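For reference, a minimal sketch of the broker-side server.properties (the broker ids, ports, and log paths below are assumptions; each broker in the cluster needs its own values):

# one broker of the cluster; repeat with unique broker.id/port/log.dirs for the others
broker.id=0
port=9092
log.dirs=/tmp/kafka-logs-0
zookeeper.connect=127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183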


3. Create a Logstash dynamic template
{
    "template": "*",
    "settings": {
        "index.refresh_interval": "5s",
        "number_of_replicas": "0",
        "number_of_shards": "3"
    },
    "mappings": {
        "_default_": {
            "_all": {
                "enabled": false
            },
            "dynamic_templates": [
                {
                    "message_field": {
                        "match": "message",
                        "match_mapping_type": "string",
                        "mapping": {
                            "type": "string",
                            "index": "analyzed"
                        }
                    }
                },
                {
                    "string_fields": {
                        "match": "*",
                        "match_mapping_type": "string",
                        "mapping": {
                            "type": "string",
                            "index": "not_analyzed"
                        }
                    }
                }
            ],
            "properties": {
                "dateTime": {
                    "type": "date",
                    "format": "yyy-MM-dd HH:mm:ss"
                },
                "@version": {
                    "type": "integer",
                    "index": "not_analyzed"
                },
                "context": {
                    "type": "string",
                    "index": "analyzed"
                },
                "level": {
                    "type": "string",
                    "index": "not_analyzed"
                },
                "class": {
                    "type": "string",
                    "index": "not_analyzed"
                },
                "server": {
                    "type": "string",
                    "index": "not_analyzed"
                }
            }
        }
    }
}
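Because the Logstash output below sets manage_template => true, Logstash installs this template by itself. To install or inspect it by hand against ES 2.x, something like the following works (the template name template_log is an assumption matching the file name used later):

curl -XPUT 'http://127.0.0.1:9200/_template/template_log' -d @template_log.json
curl -XGET 'http://127.0.0.1:9200/_template/template_log?pretty'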

4. Configure Logstash
input {
    kafka {
        zk_connect => "127.0.0.1:2181,127.0.0.1:2182,127.0.0.1:2183"
        group_id => "logstash"
        topic_id => "XX_log"
        reset_beginning => false
        consumer_threads => 5
        decorate_events => true
    }
}
filter {
    mutate {
        split => ["message", "||"]
        add_field => {
            "dateTime" => "%{[message][0]}"
            "level"    => "%{[message][1]}"
            "class"    => "%{[message][2]}"
            "server"   => "%{[message][3]}"
            "context"  => "%{[message][4]}"
        }
        remove_field => ["message"]
    }
    date {
        match => ["dateTime", "yyyy-MM-dd HH:mm:ss"]
    }
}
output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]
        index => "XX_log-%{+YYYY-MM}"
        codec => "json"
        manage_template => true
        template_overwrite => true
        flush_size => 50000
        idle_flush_time => 10
        workers => 2
        template => "E:\logstash\template\template_log.json"
    }
}

index => "XX_log-%{+YYYY-MM}" writes the logs into a separate ES index per year and month; Logstash reads the log records from the Kafka cluster.
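To make the filter concrete, a line produced by the PatternLayout in step 1 (the message text here is invented for illustration) is split into fields like this:

raw Kafka message:
2017-03-01 12:00:00||INFO||UserService||XX_web||user 1001 logged in

resulting event fields:
dateTime => "2017-03-01 12:00:00"   (the date filter also uses this for @timestamp)
level    => "INFO"
class    => "UserService"
server   => "XX_web"
context  => "user 1001 logged in"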

5. Build a ZooKeeper cluster. I won't introduce it here either; there is plenty of material online, e.g. http://blog.csdn.net/shirdrn/article/details/7183503
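For reference, a minimal sketch of zoo.cfg for a three-node ensemble on one host (the ports and paths are assumptions; each node also needs a matching myid file in its dataDir):

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/tmp/zookeeper-1
clientPort=2181
server.1=127.0.0.1:2888:3888
server.2=127.0.0.1:2889:3889
server.3=127.0.0.1:2890:3890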

6. Build an ES cluster. An ES cluster is relatively simple to set up and does not need many parameters. http://blog.csdn.net/xgjianstart/article/details/52192675
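As a rough sketch, the per-node elasticsearch.yml for ES 2.4 only needs a handful of settings (the cluster and node names and the hosts below are assumptions):

cluster.name: xx-es-cluster                # must be identical on every node
node.name: es-node-1                       # unique per node
network.host: 127.0.0.1
discovery.zen.ping.unicast.hosts: ["127.0.0.1:9300", "127.0.0.1:9301"]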

7. Configure Kibana
server.port: 5601                          # service port
server.host: "115.28.240.113"              # the host to bind the server to
elasticsearch.url: "http://127.0.0.1:9200" # address of any node in the ES cluster
kibana.index: "kibana"
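After Kibana starts, create an index pattern such as XX_log-* (matching the index setting in the Logstash output) and pick dateTime or @timestamp as the time field to browse the logs.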


8. Versions used: JDK 1.7, ES 2.4, Logstash 2.4, Kafka 2.10, Kibana 4.6.4
