ELK Cluster Setup

One: Download the software packages

Version information: elasticsearch-7.0.1-linux-x86_64.tar.gz, kibana-7.0.1-linux-x86_64.tar.gz, logstash-7.0.1.tar.gz

Link: https://pan.baidu.com/s/1uCfnTFqvNPeTrVfAUS2wcQ (extraction code: m0uu)

Two: Unpack and configure
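Assuming the three tarballs were downloaded to /data, unpacking them in place gives the directory layout used by the start-up commands later (a sketch; adjust the download directory to your own):

[root@kafka7 data]# tar -zxf elasticsearch-7.0.1-linux-x86_64.tar.gz
[root@kafka7 data]# tar -zxf logstash-7.0.1.tar.gz
[root@kafka7 data]# tar -zxf kibana-7.0.1-linux-x86_64.tar.gz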

1. Add host resolution entries
[root@kafka7 config]# cat /etc/hosts
::1	localhost	localhost.localdomain	localhost6	localhost6.localdomain6
127.0.0.1	localhost	localhost.localdomain	localhost4	localhost4.localdomain4

10.250.1.196 kafka000008 kafka000008

10.250.1.196 kafka7
10.250.1.197 kafka8
10.250.1.198 kafka9
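The same entries are needed on kafka8 and kafka9 as well; one way to copy them over (assuming root SSH access between the nodes):

[root@kafka7 ~]# scp /etc/hosts kafka8:/etc/hosts
[root@kafka7 ~]# scp /etc/hosts kafka9:/etc/hosts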

  

2. Configure elasticsearch
[root@kafka7 config]# cat elasticsearch.yml
cluster.name: Azerbaijani-dev-logger
node.name: kafka7
path.data: /data/es7-9-date
path.logs: /data/es7-9-logs
network.host: 10.250.1.196
http.port: 9200
discovery.seed_hosts: ["kafka7", "kafka8","kafka9"]
cluster.initial_master_nodes: ["kafka7", "kafka8","kafka9"]

  

3. Adjust jvm.options
[root@kafka7 config]# cat jvm.options
-Xms4g
-Xmx4g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Des.networkaddress.cache.ttl=60
-Des.networkaddress.cache.negative.ttl=10
-XX:+AlwaysPreTouch
-Xss1m
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djna.nosys=true
-XX:-OmitStackTraceInFastThrow
-Dio.netty.noUnsafe=true
-Dio.netty.noKeySetOptimization=true
-Dio.netty.recycler.maxCapacityPerThread=0
-Dlog4j.shutdownHookEnabled=false
-Dlog4j2.disable.jmx=true
-Djava.io.tmpdir=${ES_TMPDIR}
-XX:+HeapDumpOnOutOfMemoryError
-XX:HeapDumpPath=data
-XX:ErrorFile=logs/hs_err_pid%p.log
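# a numeric prefix makes an option conditional on the JVM major version:
# "8:" applies only on JDK 8, "9-:" applies on JDK 9 and later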
8:-XX:+PrintGCDetails
8:-XX:+PrintGCDateStamps
8:-XX:+PrintTenuringDistribution
8:-XX:+PrintGCApplicationStoppedTime
8:-Xloggc:logs/gc.log
8:-XX:+UseGCLogFileRotation
8:-XX:NumberOfGCLogFiles=32
8:-XX:GCLogFileSize=64m
9-:-Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m
9-:-Djava.locale.providers=COMPAT

  

The other two nodes use the same configuration; change the node-specific values in elasticsearch.yml (node.name and network.host) on each node, and adjust the jvm.options parameters to suit your own hardware.
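For example, on kafka8 only the node-specific lines change (a sketch based on the host table above; the path and discovery settings stay the same):

node.name: kafka8
network.host: 10.250.1.197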

 

Logstash
1. Enter the logstash directory and configure Logstash. Note the topics => ["xxx.log.live"] setting: xxx.log is a placeholder and must be replaced with your own topic name prefix.
[root@kafka7 config]# cat logstash-kafka.conf 
input {
    kafka {
        bootstrap_servers => ["10.250.1.196:9092,10.250.1.197:9092,10.250.1.198:9092"]
        client_id => "kafka_client_1"
        group_id => "logstash"
        auto_offset_reset => "latest"
        consumer_threads => 16
        topics => ["xxx.log.live"]
        type => "live"
        decorate_events => true
        codec => json
    }

    kafka {
        bootstrap_servers => ["10.250.1.196:9092,10.250.1.197:9092,10.250.1.198:9092"]
        client_id => "kafka_client_2"
        group_id => "logstash"
        auto_offset_reset => "latest"
        consumer_threads => 16
        topics => ["xxx.log.runtime"]
        type => "runtime"
        decorate_events => true
        codec => json
    }
}

filter {
    # parse the log's own time field and use it as @timestamp
    # time is a local-time field, so there is no 8-hour timezone offset to correct
    date {
        match => ["time","yyyy-MM-dd'T'HH:mm:ss.S'Z'"]
        target => "@timestamp"
    }

}


output {
    if [type] == "live" {
        elasticsearch {
            hosts => ["10.250.1.196:9200","10.250.1.197:9200","10.250.1.198:9200"]
            index => "xxxx.log.live-%{+YYYY-MM-dd}"
            #document_type => "form"
            #document_id => "%{id}"
        }
    }

    if [type] == "runtime" {
        elasticsearch {
            hosts => ["10.250.1.196:9200","10.250.1.197:9200","10.250.1.198:9200"]
            index => "xxxx.log.runtime-%{+YYYY-MM-dd}"
            #document_type => "form"
            #document_id => "%{id}"
        }
    }
}
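Before starting the pipeline, the file can be syntax-checked with Logstash's built-in test flag:

[root@kafka7 logstash-7.0.1]# ./bin/logstash -f config/logstash-kafka.conf --config.test_and_exit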

  

2. Adjust Logstash's jvm.options parameters as needed for your own environment.
[root@kafka7 config]# cat jvm.options 
-Xms4g
-Xmx4g
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
-Djava.awt.headless=true
-Dfile.encoding=UTF-8
-Djruby.compile.invokedynamic=true
-Djruby.jit.threshold=0
-XX:+HeapDumpOnOutOfMemoryError
-Djava.security.egd=file:/dev/urandom

  

Kibana
Configure Kibana; it only needs to be set up on one node. Enter the kibana config directory.
[root@kafka7 config]# cat kibana.yml 
server.port: 5601
server.host: "10.250.1.196"
elasticsearch.hosts: ["http://10.250.1.196:9200","http://10.250.1.197:9200","http://10.250.1.198:9200"]
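Once started, Kibana serves its UI at http://10.250.1.196:5601; the built-in /api/status endpoint gives a quick liveness check:

[root@kafka7 ~]# curl http://10.250.1.196:5601/api/status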

  

Three: Start

Elasticsearch must be started as an ordinary (non-root) user, so watch the ownership and permissions of the install and data directories. If startup fails with a vm.max_map_count error, raise the limit like this: sysctl -w vm.max_map_count=262144
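sysctl -w only changes the running kernel; to keep the setting across reboots it can also be persisted in /etc/sysctl.conf (a common approach, sketched here):

[root@kafka7 ~]# echo "vm.max_map_count=262144" >> /etc/sysctl.conf
[root@kafka7 ~]# sysctl -p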

#on kafka7, kafka8, and kafka9, su to an ordinary user to run the ELK processes
#start elasticsearch
#[root@kafka7 elasticsearch-7.0.1]# pwd
#/data/elasticsearch-7.0.1
#[root@kafka7 elasticsearch-7.0.1]# sh ./bin/elasticsearch &
#start logstash
#[root@kafka7 logstash-7.0.1]# pwd
#/data/logstash-7.0.1
#[root@kafka7 logstash-7.0.1]# sh ./bin/logstash -f config/logstash-kafka.conf
#start kibana
#/data/kibana-7.0.1-linux-x86_64/bin/../node/bin/node --no-warnings --max-http-header-size=65536 /data/kibana-7.0.1-linux-x86_64/bin/../src/cli
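After all three Elasticsearch nodes are started, cluster formation can be checked with the standard REST endpoints, for example:

[root@kafka7 ~]# curl http://10.250.1.196:9200/_cluster/health?pretty
[root@kafka7 ~]# curl http://10.250.1.196:9200/_cat/nodes?v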

  

 

Origin: www.cnblogs.com/dribs/p/12055357.html