Getting started with ELK log analysis

ELK is an open source log analysis system

ELK is an abbreviation of three open source projects: Elasticsearch, Logstash, and Kibana. A newer addition is Filebeat, a lightweight log collection and shipping tool (agent). Filebeat uses few resources and is well suited to collecting logs on each server and forwarding them to Logstash; it is also the tool recommended officially.
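As a sketch of where Filebeat sits in the pipeline, a minimal filebeat.yml might look like the following. The log path and the Logstash address are assumptions for illustration, and field names vary between Filebeat versions (this uses the newer `filebeat.inputs` form; Filebeat 5.x used `filebeat.prospectors`):

```yaml
filebeat.inputs:
  - type: log                    # collect plain text log files
    paths:
      - /var/log/messages        # assumed log path
output.logstash:
  hosts: ["logstash-host:5044"]  # assumed Logstash address, default Beats port
```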

official documentation

Filebeat:
https://www.elastic.co/cn/products/beats/filebeat
https://www.elastic.co/guide/en/beats/filebeat/5.6/index.html

Logstash:
https://www.elastic.co/cn/products/logstash
https://www.elastic.co/guide/en/logstash/5.6/index.html

Kibana:
https://www.elastic.co/cn/products/kibana
https://www.elastic.co/guide/en/kibana/5.5/index.html

Elasticsearch:
https://www.elastic.co/cn/products/elasticsearch
https://www.elastic.co/guide/en/elasticsearch/reference/5.6/index.html

elasticsearch Chinese community:
https://elasticsearch.cn/

concepts

  • Elasticsearch: log storage and retrieval

  • Logstash: log collection, analysis, and processing

  • Kibana: visualization and display

Elasticsearch: a Lucene-based search server

Elasticsearch is an open source distributed, highly scalable, near-real-time, RESTful search and data analytics engine. Under the hood it is built on the open source library Apache Lucene (a search engine).

stand-alone installation

[root@es-0001 ~]# vim /etc/hosts
192.168.1.21	es-0001
192.168.1.22	es-0002
192.168.1.23	es-0003
192.168.1.24	es-0004
192.168.1.25	es-0005
[root@es-0001 ~]# yum install -y java-1.8.0-openjdk elasticsearch
[root@es-0001 ~]# vim /etc/elasticsearch/elasticsearch.yml
55:  network.host: 0.0.0.0
[root@es-0001 ~]# systemctl enable --now elasticsearch
[root@es-0001 ~]# curl http://127.0.0.1:9200/
{
  "name" : "War Eagle",
  "cluster_name" : "elasticsearch",
  "version" : {
    "number" : "2.3.4",
    "build_hash" : "e455fd0c13dceca8dbbdbb1665d068ae55dabe3f",
    "build_timestamp" : "2016-06-30T11:24:31Z",
    "build_snapshot" : false,
    "lucene_version" : "5.5.0"
  },
  "tagline" : "You Know, for Search"
}

Installing the cluster with Ansible

cluster.name: my-es
node.name: {{ ansible_hostname }}
path.data: /var/lib/elasticsearch
path.logs: /var/log/elasticsearch
network.host: 0.0.0.0
discovery.zen.ping.unicast.hosts: ["es-0001", "es-0002"]

---
- hosts: es
  tasks:
    - copy:
        src: hosts
        dest: /etc/hosts
        owner: root
        group: root
        mode: 0644
    - name: install elasticsearch
      yum:
        name: java-1.8.0-openjdk,elasticsearch
        state: installed
    - template:
        src: elasticsearch.yml
        dest: /etc/elasticsearch/elasticsearch.yml
        owner: root
        group: root
        mode: 0644
      notify: reload elasticsearch
      tags: esconf
    - service:
        name: elasticsearch
        enabled: yes
        state: started
  handlers:
    - name: reload elasticsearch
      service:
        name: elasticsearch
        state: restarted
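The playbook targets a host group named es, so the inventory must define it; a minimal inventory fragment (the file name `inventory` is an assumption) might look like:

```ini
[es]
es-0001
es-0002
es-0003
es-0004
es-0005
```

It can then be run with `ansible-playbook -i inventory es.yml` (es.yml being an assumed playbook file name); thanks to the `esconf` tag, configuration-only changes can be re-pushed with `--tags esconf`.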

View the cluster status from any cluster node

        curl http://127.0.0.1:9200/_cluster/health?pretty
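For reference, the health endpoint returns a small JSON document; an illustrative (not verbatim) response is shown below. `status` is green when all shards are allocated, yellow when some replicas are unassigned, and red when primary shards are missing:

```json
{
  "cluster_name" : "my-es",
  "status" : "green",
  "number_of_nodes" : 5,
  "number_of_data_nodes" : 5,
  "active_primary_shards" : 5,
  "active_shards" : 10,
  "unassigned_shards" : 0
}
```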

cluster management

API management

Plug-in management (a handy web UI)

  • Install Apache on es-0001 and deploy the head plugin

  • Map port 8080 through the ELB and publish the web service on es-0001 to the Internet

  • es-0001 access authorization

        1. Deploy Apache and serve the head plugin through it
           (separating the static plugin pages from the dynamic read/write requests to Elasticsearch)

        2. Copy the head plugin files onto the machine

[root@es-0001 ~]# yum install -y httpd
[root@es-0001 ~]# systemctl enable --now httpd
[root@es-0001 ~]# tar zxf head.tar.gz -C /var/www/html 
[root@es-0001 ~]# vim /etc/httpd/conf/httpd.conf
# append at the end of the configuration file
ProxyRequests off
ProxyPass /es/ http://127.0.0.1:9200/
ProxyPassReverse /es/ http://127.0.0.1:9200/
<Location ~ "^/es(-head)?/">
    Options None
    AuthType Basic
    AuthName "Elasticsearch Admin"
    AuthUserFile "/var/www/webauth"
    Require valid-user
</Location>
[root@es-0001 ~]# htpasswd -cm /var/www/webauth admin
New password: 
Re-type new password: 
Adding password for user admin
[root@es-0001 ~]# vim /etc/elasticsearch/elasticsearch.yml
# append at the end of the configuration file
http.cors.enabled : true
http.cors.allow-origin : "*"
http.cors.allow-methods : OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers : X-Requested-With,X-Auth-Token,Content-Type,Content-Length
[root@es-0001 ~]# systemctl restart elasticsearch httpd
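Once the services are restarted, the authenticated proxy path can be checked with curl; a minimal sketch, assuming the admin account created above (curl will prompt for the password):

```shell
# Request the cluster banner through the authenticated /es/ proxy path
curl -u admin http://127.0.0.1/es/
```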

Access the ES cluster through the head web plugin

Simple API management

The three parts of an HTTP request line:

        Method Request-URL HTTP-Version

HTTP request methods:

        GET  POST  HEAD  PUT  DELETE

curl usage:

        -X  specify the request method

        -H  add a custom request header

Cluster status query

# Query the supported keywords
[root@es-0001 ~]# curl -XGET http://127.0.0.1:9200/_cat/
# Check specific information
[root@es-0001 ~]# curl -XGET http://127.0.0.1:9200/_cat/master
# Append ?v to display detailed information
[root@es-0001 ~]# curl -XGET http://127.0.0.1:9200/_cat/master?v
# Append ?help to display help information
[root@es-0001 ~]# curl -XGET http://127.0.0.1:9200/_cat/master?help

create an index

  • Specify the index name, the number of shards, and the number of replicas

  • Create the index with the PUT method, and verify it through the head plugin once creation completes

[root@es-0001 ~]# curl -XPUT -H "Content-Type: application/json" \
http://127.0.0.1:9200/tedu -d '{
    "settings":{
       "index":{
          "number_of_shards": 5, 
          "number_of_replicas": 1
       }
    }
}'
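A quick sanity check on the settings above: with 5 primary shards and 1 replica each, the cluster stores 10 shard copies in total. The arithmetic as a tiny shell sketch:

```shell
# total shard copies = primaries * (1 + replicas)
PRIMARIES=5
REPLICAS=1
TOTAL=$((PRIMARIES * (1 + REPLICAS)))
echo "$TOTAL"
```

With one replica per primary, losing a single node does not lose data, since every shard has a copy on another node.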

add data

[root@es-0001 ~]# curl -XPUT -H "Content-Type: application/json" \
                    http://127.0.0.1:9200/tedu/teacher/1 -d '{
                      "职业": "诗人",
                      "名字": "李白",
                      "称号": "诗仙",
                      "年代": "唐"
                  }' 

Query data

[root@es-0001 ~]# curl -XGET http://127.0.0.1:9200/tedu/teacher/_search?pretty
[root@es-0001 ~]# curl -XGET http://127.0.0.1:9200/tedu/teacher/1?pretty
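For reference, fetching a single document returns its source wrapped in metadata; an illustrative (not verbatim) response:

```json
{
  "_index" : "tedu",
  "_type" : "teacher",
  "_id" : "1",
  "_version" : 1,
  "found" : true,
  "_source" : {
    "职业" : "诗人",
    "名字" : "李白",
    "称号" : "诗仙",
    "年代" : "唐"
  }
}
```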

update data

[root@es-0001 ~]# curl -XPOST -H "Content-Type: application/json" \
                    http://127.0.0.1:9200/tedu/teacher/1/_update -d '{ 
                    "doc": {"年代":"公元701"}
                  }'

delete data

# delete a single document
[root@es-0001 ~]# curl -XDELETE http://127.0.0.1:9200/tedu/teacher/1
# delete the index
[root@es-0001 ~]# curl -XDELETE http://127.0.0.1:9200/tedu

Import Data

[root@ecs-proxy ~]# gunzip logs.jsonl.gz 
[root@ecs-proxy ~]# curl -XPOST -H "Content-Type: application/json" http://192.168.1.21:9200/_bulk --data-binary @logs.jsonl 
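The _bulk endpoint expects newline-delimited JSON: an action line followed by a document source line for each record, with a trailing newline at the end of the file. A minimal sketch that builds such a file locally (the file name sample.jsonl and the document values are illustrative):

```shell
# Each record is two lines: an action ({"index":{...}}) and the document source.
cat > sample.jsonl <<'EOF'
{"index":{"_index":"tedu","_type":"teacher"}}
{"名字":"李白","称号":"诗仙"}
{"index":{"_index":"tedu","_type":"teacher"}}
{"名字":"杜甫","称号":"诗圣"}
EOF
# 4 lines total: 2 action lines, 2 documents
wc -l < sample.jsonl
```

The file could then be loaded the same way as above: `curl -XPOST -H "Content-Type: application/json" http://192.168.1.21:9200/_bulk --data-binary @sample.jsonl`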

kibana installation

[root@kibana ~]# vim /etc/hosts
192.168.1.21	es-0001
192.168.1.22	es-0002
192.168.1.23	es-0003
192.168.1.24	es-0004
192.168.1.25	es-0005
192.168.1.26	kibana
[root@kibana ~]# yum install -y kibana
[root@kibana ~]# vim /etc/kibana/kibana.yml
02  server.port: 5601
07  server.host: "0.0.0.0"
28  elasticsearch.hosts: ["http://es-0002:9200", "http://es-0003:9200"]
113 i18n.locale: "zh-CN"
[root@kibana ~]# systemctl enable --now kibana

        Publish the service through the ELB, authenticate through a web browser, and access port 5601


Origin blog.csdn.net/weixin_55000003/article/details/130151113