EFK tutorial-2: Elasticsearch high-performance and high-availability architecture

Reprinted from: https://mp.weixin.qq.com/s?__biz=MzUyNzk0NTI4MQ==&mid=2247483811&idx=1&sn=a413dea65f8f64abb24d82feea55db5b&chksm=fa769a8dcd01139b1da8794914e10989c6a39a99971d8013e9d3b26766b80d5833e2fbaf0ab8&mpshare=1&scene=1&srcid=1125tjbylqn3EdoMtaX2p73J&sharer_sharetime=1574686271229&sharer_shareid=6ec87ec9a11a0c18d61cde7663a9ef87#rd

This article explains the purpose of the data / ingest / master roles in an Elasticsearch cluster and deploys three nodes for each role, maximizing performance while ensuring high availability.

elasticsearch-data

elasticsearch-data installation

Perform the same installation steps on all three nodes:

    tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz

    mv elasticsearch-7.3.2 /opt/elasticsearch

    useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin

    mkdir -p /opt/logs/elasticsearch

    chown elasticsearch.elasticsearch /opt/elasticsearch -R

    chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R

    # The data disk must be writable by the elasticsearch user

    chown elasticsearch.elasticsearch /data/SAS -R


    # A process must be allowed more than 262144 VMAs (virtual memory areas), otherwise Elasticsearch fails with: max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]

    echo "vm.max_map_count = 655350" >> /etc/sysctl.conf

    sysctl -p
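After `sysctl -p`, the kernel setting can be verified before starting Elasticsearch. A minimal sketch; the `check_map_count` helper is hypothetical, introduced here only for illustration:

```shell
# check_map_count: compare a vm.max_map_count value against the 262144
# minimum that Elasticsearch requires (helper name is illustrative).
check_map_count() {
  if [ "$1" -ge 262144 ]; then
    echo "ok: $1"
  else
    echo "too low: $1 (need >= 262144)"
  fi
}

# In practice, feed it the live kernel value:
#   check_map_count "$(sysctl -n vm.max_map_count)"
check_map_count 655350   # → ok: 655350
```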

elasticsearch-data configuration

# 192.168.1.51 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.51

    # Data disk location; if there are multiple disks, separate the paths with ","

    path.data: /data/SAS

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.51


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Disable the master role

    node.master: false

    # Disable the ingest role

    node.ingest: false

    # Enable the data role

    node.data: true

# 192.168.1.52 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.52

    # Data disk location; if there are multiple disks, separate the paths with ","

    path.data: /data/SAS

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.52


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Disable the master role

    node.master: false

    # Disable the ingest role

    node.ingest: false

    # Enable the data role

    node.data: true

# 192.168.1.53 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.53

    # Data disk location; if there are multiple disks, separate the paths with ","

    path.data: /data/SAS

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.53


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Disable the master role

    node.master: false

    # Disable the ingest role

    node.ingest: false

    # Enable the data role

    node.data: true
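The three data-node files above differ only in the node IP. As a sketch, the per-node configs could be generated from a single template; the output directory and filenames here are illustrative, and each file would still need to be copied to the matching host:

```shell
# Generate one elasticsearch.yml per data node from a shared template.
outdir=$(mktemp -d)
for ip in 192.168.1.51 192.168.1.52 192.168.1.53; do
  cat > "$outdir/elasticsearch-$ip.yml" <<EOF
cluster.name: my-application
node.name: $ip
path.data: /data/SAS
path.logs: /opt/logs/elasticsearch
network.host: $ip
discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]
cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]
http.cors.enabled: true
http.cors.allow-origin: "*"
node.master: false
node.ingest: false
node.data: true
EOF
done
grep "^node.name" "$outdir/elasticsearch-192.168.1.52.yml"   # → node.name: 192.168.1.52
```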

elasticsearch-data start

    sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch

Elasticsearch cluster status

    curl "http://192.168.1.31:9200/_cat/health?v"

elasticsearch-data status

    curl "http://192.168.1.31:9200/_cat/nodes?v"

Elasticsearch-data parameter description

    status: green  # cluster health status

    node.total: 6  # 6 nodes form the cluster

    node.data: 6  # 6 of them store data

    node.role: d  # data role only

    node.role: i  # ingest role only

    node.role: m  # master role only

    node.role: mid  # master, ingest and data roles combined
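Given that output format, counting the nodes that carry a given role is a one-liner over the role column. A sketch using sample `_cat/nodes?h=ip,node.role` output (the sample lines are illustrative, not real cluster output):

```shell
# Count data-capable nodes: any row whose role column contains "d"
# (both dedicated "d" nodes and combined "mid" nodes qualify).
sample='192.168.1.51 d
192.168.1.52 d
192.168.1.53 d
192.168.1.31 mid
192.168.1.32 mid
192.168.1.33 mid'
data_nodes=$(echo "$sample" | awk '$2 ~ /d/ {n++} END {print n}')
echo "data-capable nodes: $data_nodes"   # → data-capable nodes: 6
```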

elasticsearch-ingest

Add three new ingest nodes to the cluster, with the master and data roles disabled.

elasticsearch-ingest installation

Perform the same installation steps on all three nodes:

    tar -zxvf elasticsearch-7.3.2-linux-x86_64.tar.gz

    mv elasticsearch-7.3.2 /opt/elasticsearch

    useradd elasticsearch -d /opt/elasticsearch -s /sbin/nologin

    mkdir -p /opt/logs/elasticsearch

    chown elasticsearch.elasticsearch /opt/elasticsearch -R

    chown elasticsearch.elasticsearch /opt/logs/elasticsearch -R


    # A process must be allowed more than 262144 VMAs (virtual memory areas), otherwise Elasticsearch fails with: max virtual memory areas vm.max_map_count [65535] is too low, increase to at least [262144]

    echo "vm.max_map_count = 655350" >> /etc/sysctl.conf

    sysctl -p

elasticsearch-ingest configuration

# 192.168.1.41 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.41

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.41


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Disable the master role

    node.master: false

    # Enable the ingest role

    node.ingest: true

    # Disable the data role

    node.data: false

# 192.168.1.42 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.42

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.42


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Disable the master role

    node.master: false

    # Enable the ingest role

    node.ingest: true

    # Disable the data role

    node.data: false

# 192.168.1.43 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.43

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.43


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Disable the master role

    node.master: false

    # Enable the ingest role

    node.ingest: true

    # Disable the data role

    node.data: false

elasticsearch-ingest start

    sudo -u elasticsearch /opt/elasticsearch/bin/elasticsearch

Elasticsearch cluster status

    curl "http://192.168.1.31:9200/_cat/health?v"

elasticsearch-ingest status

    curl "http://192.168.1.31:9200/_cat/nodes?v"

Elasticsearch-ingest parameter description

    status: green  # cluster health status

    node.total: 9  # 9 nodes form the cluster

    node.data: 6  # 6 of them store data

    node.role: d  # data role only

    node.role: i  # ingest role only

    node.role: m  # master role only

    node.role: mid  # master, ingest and data roles combined

elasticsearch-master

First, convert the three Elasticsearch nodes deployed in the previous article "EFK-1" (192.168.1.31, 192.168.1.32, 192.168.1.33) into dedicated master nodes. Before doing so, the index data on these three nodes must be migrated to the data nodes deployed above.

Index migration

This step is mandatory: it moves the existing indices onto the data nodes.

    curl -X PUT "192.168.1.31:9200/*/_settings?pretty" -H 'Content-Type: application/json' -d'
    {
      "index.routing.allocation.include._ip": "192.168.1.51,192.168.1.52,192.168.1.53"
    }'

Confirm the current index storage location

Confirm that no index shards remain on 192.168.1.31, 192.168.1.32, or 192.168.1.33:

    curl "http://192.168.1.31:9200/_cat/shards?h=n"
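To make that check scriptable, the shard list can be grepped for the master-node IPs; relocation is finished when the count reaches zero. A sketch over sample `_cat/shards?h=n` output (the sample rows are illustrative, since `node.name` is set to each node's IP):

```shell
# Count shards still hosted on the would-be master nodes 192.168.1.31-33.
shards='192.168.1.51
192.168.1.52
192.168.1.51
192.168.1.53'
leftovers=$(echo "$shards" | grep -cE '^192\.168\.1\.3[123]$') || true
echo "shards still on master nodes: $leftovers"   # → shards still on master nodes: 0
```

`grep -c` prints `0` and exits non-zero when nothing matches, hence the `|| true`.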

elasticsearch-master configuration

Note: modify the configuration and restart the nodes one at a time. Make sure each node has rejoined the cluster successfully before moving on to the next.
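One way to make "wait before the next node" concrete is to poll cluster health and only continue when it is green again. A sketch; the `is_green` helper is hypothetical, and the polling loop is shown as a comment because it needs a live cluster:

```shell
# is_green: true when a _cat/health status string is exactly "green".
is_green() {
  [ "$(echo "$1" | tr -d '[:space:]')" = "green" ]
}

# Live usage between restarts (requires a running cluster):
#   until is_green "$(curl -s 'http://192.168.1.31:9200/_cat/health?h=status')"; do
#     sleep 5
#   done
is_green "green" && echo yes   # → yes
```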

# 192.168.1.31 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.31

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.31


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Enable the master role

    node.master: true

    # Disable the ingest role

    node.ingest: false

    # Disable the data role

    node.data: false

# 192.168.1.32 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.32

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.32


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Enable the master role

    node.master: true

    # Disable the ingest role

    node.ingest: false

    # Disable the data role

    node.data: false

# 192.168.1.33 /opt/elasticsearch/config/elasticsearch.yml
    cluster.name: my-application

    node.name: 192.168.1.33

    path.logs: /opt/logs/elasticsearch

    network.host: 192.168.1.33


    discovery.seed_hosts: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    cluster.initial_master_nodes: ["192.168.1.31","192.168.1.32","192.168.1.33"]

    http.cors.enabled: true

    http.cors.allow-origin: "*"


    # Enable the master role

    node.master: true

    # Disable the ingest role

    node.ingest: false

    # Disable the data role

    node.data: false
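Three dedicated masters is the usual minimum because a master election needs a majority, floor(n/2) + 1, of the master-eligible nodes, so three masters tolerate the loss of one. The arithmetic:

```shell
# Quorum for n master-eligible nodes: floor(n/2) + 1.
masters=3
quorum=$(( masters / 2 + 1 ))
echo "quorum: $quorum, tolerated failures: $(( masters - quorum ))"
# → quorum: 2, tolerated failures: 1
```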

Elasticsearch cluster status

    curl "http://192.168.1.31:9200/_cat/health?v"

elasticsearch-master status

    curl "http://192.168.1.31:9200/_cat/nodes?v"

When node.role no longer shows "mid" on any node, the role split is complete and everything has finished successfully.
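That final condition can be checked mechanically: grep the role column and fail if "mid" is still present. A sketch over sample `_cat/nodes?h=node.role` output (the sample is illustrative):

```shell
# Fail if any node still reports the combined "mid" role.
roles='d
d
d
i
i
i
m
m
m'
if echo "$roles" | grep -qx 'mid'; then
  echo "role split incomplete"
else
  echo "all roles split"   # this branch is taken for the sample above
fi
```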
