Building an Enterprise Linux System Log Platform with the ELK Stack

Set the time zone and sync the clock:

cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
yum install ntpdate -y
ntpdate time.windows.com
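A quick sanity check that the time-zone copy and NTP sync took effect (output depends on your system; `hwclock -w` is optional):

```shell
date          # should now show CST (UTC+8) for Asia/Shanghai
hwclock -w    # optionally persist the corrected time to the hardware clock
```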

Configure the Elastic YUM repository; all of the installations below pull from it:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/elastic.repo
[elastic-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md

1. Install Elasticsearch

yum install elasticsearch -y
vim /etc/elasticsearch/elasticsearch.yml

# Node 1

cluster.name: elk-cluster
node.name: node-1
#node.master: true or false  # whether this node is master-eligible
path.data: /home/es/es_data
network.host: 192.168.1.195
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.195", "192.168.1.196", "192.168.1.197"]
discovery.zen.minimum_master_nodes: 2

# Node 2

cluster.name: elk-cluster
node.name: node-2
path.data: /home/es/es_data
network.host: 192.168.1.196
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.195", "192.168.1.196", "192.168.1.197"]
discovery.zen.minimum_master_nodes: 2

# Node 3

cluster.name: elk-cluster
node.name: node-3
path.data: /home/es/es_data
network.host: 192.168.1.197
http.port: 9200
discovery.zen.ping.unicast.hosts: ["192.168.1.195", "192.168.1.196", "192.168.1.197"]
discovery.zen.minimum_master_nodes: 2
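The original post does not show it, but after writing each elasticsearch.yml the service still has to be started on every node. A sketch using the systemd unit shipped by the RPM package:

```shell
# Run on each of the three nodes after editing elasticsearch.yml
systemctl daemon-reload
systemctl enable elasticsearch
systemctl start elasticsearch
# Once up, Elasticsearch listens on 9200 (HTTP) and 9300 (transport)
```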

cluster.name # cluster name
node.name # node name
path.data # data directory; multiple paths may be given, in which case data is stored across all of them

Two parameters matter most for the cluster:
discovery.zen.ping.unicast.hosts # unicast list of cluster node IPs. Nodes organize the cluster automatically, scanning ports 9300-9305 to connect to each other; no extra configuration is needed.
discovery.zen.minimum_master_nodes # minimum number of master-eligible nodes
This parameter is critical for preventing data loss: if it is not set, a network partition can split the cluster into two independent clusters (split brain). To avoid split brain, set it to a quorum of nodes: (nodes / 2) + 1.
In other words, with three cluster nodes, the minimum is (3/2) + 1, i.e. 2.
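The quorum arithmetic can be checked directly in the shell (integer division rounds down):

```shell
# Quorum for a 3-node cluster: (nodes / 2) + 1
nodes=3
quorum=$(( nodes / 2 + 1 ))
echo "$quorum"    # prints 2
# For 5 nodes: (5 / 2) + 1 = 3
```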

View cluster nodes:

curl -XGET 'http://127.0.0.1:9200/_cat/nodes?pretty'  

Viewing Cluster Health status:

curl -i -XGET http://127.0.0.1:9200/_cluster/health?pretty
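Beyond eyeballing the JSON, a script can pull out just the status field with grep and cut. A minimal sketch; the sample response below is illustrative, not captured from a live cluster:

```shell
# Extract "status" from a cluster-health JSON response.
# In practice resp would come from:
#   resp=$(curl -s 'http://127.0.0.1:9200/_cluster/health')
resp='{"cluster_name":"elk-cluster","status":"green","number_of_nodes":3}'
echo "$resp" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4   # prints: green
```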

Install the elasticsearch-head plugin

Install Node.js (which provides npm):

tar -zxvf node-v4.4.7-linux-x64.tar.gz
mv node-v4.4.7-linux-x64 /usr/local/node-v4.4
vi /etc/profile
NODE_HOME=/usr/local/node-v4.4
PATH=$NODE_HOME/bin:$PATH
export NODE_HOME PATH
source /etc/profile
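After sourcing the profile, it is worth confirming the Node.js toolchain is on PATH (output will vary with the version you installed):

```shell
node -v    # e.g. v4.4.7
npm -v
```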

Install elasticsearch-head:

git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head
vi Gruntfile.js
options: {
     port: 9100,
     base: '.',
     keepalive: true,
     hostname: '*'
}
npm install
npm run start
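One gotcha: by default Elasticsearch rejects cross-origin requests, so the head UI on port 9100 cannot talk to the cluster on port 9200. Enabling CORS in elasticsearch.yml on the ES nodes (and restarting them) fixes this:

```yaml
http.cors.enabled: true
http.cors.allow-origin: "*"
```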

2. Install Logstash

yum install logstash -y
vim /etc/logstash/conf.d/test.conf

input {
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
}
output {
    elasticsearch {
        hosts => ["192.168.1.202:9200"]
        index => "system-%{+YYYY.MM.dd}"
    }
}

/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf
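The configuration file can also be syntax-checked without starting the pipeline, via Logstash's built-in test flag:

```shell
/usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/test.conf --config.test_and_exit
```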

Here is my configuration file example:

cat yuejiaxiao.conf 
input {
   file {
        path => ["/data/docker-yuejiaxiao/logs/certification/certification-provider.info.log"]
        type => "certification-info"
        start_position => "beginning"
   }
   file {
        path => ["/data/docker-yuejiaxiao/logs/certification/certification-provider.error.log"]
        type => "certification-error"
        start_position => "beginning"
   }
}
filter {
    date {
       match => ["timestamp","yyyy-MM-dd HH:mm:ss"]
       remove_field => "timestamp"
    }  
}
output {
    if [type] == "certification-info" {
         elasticsearch {
            hosts  => ["http://172.16.86.215:9200"]
            index  => "certification-info-%{+YYYY.MM.dd}"
         }
    }
    if [type] == "certification-error" {
         elasticsearch {
            hosts  => ["http://172.16.86.215:9200"]
            index  => "certification-error-%{+YYYY.MM.dd}"
         }
    }
}

The date filter in the middle is there to resolve the conflict between the timestamp pulled from the log and the local system time.
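Note that the date filter only does anything if the event already carries a field named timestamp; typically a grok filter extracts it from the raw line first. A minimal sketch (the grok pattern is an assumption about the log format, not taken from the original logs):

```
filter {
    grok {
        match => { "message" => "%{TIMESTAMP_ISO8601:timestamp} %{GREEDYDATA:msg}" }
    }
    date {
        match => ["timestamp", "yyyy-MM-dd HH:mm:ss"]
        remove_field => "timestamp"
    }
}
```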

3. Install Kibana

yum install kibana -y
vim /etc/kibana/kibana.yml
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.url: http://localhost:9200
systemctl start kibana
systemctl enable kibana
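Kibana can take half a minute or so to come up; a quick check that it is listening (assuming ss and curl are available):

```shell
ss -lntp | grep 5601
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:5601
```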

The ELK stack is now built: Elasticsearch (search engine), Logstash (log collection), Kibana (visualization platform). The Kibana UI is at http://ip:5601

Localizing Kibana into Chinese

Upload Kibana_Hanization-master.zip to the Kibana server:

unzip Kibana_Hanization-master.zip

Copy the translations folder from this project into the src/legacy/core_plugins/kibana/ directory under your Kibana installation:

cd Kibana_Hanization-master/
cp -r translations/  /usr/share/kibana/src/legacy/core_plugins/kibana/

In the Kibana configuration file kibana.yml, set i18n.locale: "zh-CN":

vim /etc/kibana/kibana.yml
#i18n.locale: "en"
i18n.locale: "zh-CN" 

Restart Kibana and the localization is complete:

systemctl restart kibana

Origin www.cnblogs.com/xinxing1994/p/11947017.html