Elasticsearch Core Technology (1) --- Running ES, Kibana, and Cerebro in Docker Containers

Running ES, Kibana, and Cerebro in Docker containers; installing Logstash and importing data into ES

I want to solidify my knowledge of ES. Ruan Yiming's course "Elasticsearch Core Technology and Practice" looked excellent and I got a lot out of it, so next I will follow it to study ES in more depth.

The purpose of this post is to deploy ES and its related auxiliary tools, and to import test data into ES through Logstash. Once that groundwork is done, we will build on it for more in-depth study.

1. Running ES, Kibana, and Cerebro in Docker containers

1.1 Required environment

Docker + docker-compose

First, get Docker and docker-compose deployed and working.

Verify the installation:

Command: docker --version

xubdeMacBook-Pro:~ xub$ docker --version
Docker version 17.03.1-ce-rc1, build 3476dbf

Command: docker-compose --version

xubdeMacBook-Pro:~ xub$ docker-compose --version
docker-compose version 1.11.2, build dfed245

1.2 docker-compose.yml

You can think of docker-compose.yml as something like a shell script: it defines the information needed to run a multi-container application.

version: '2.2'
services:
  cerebro:
    image: lmenezes/cerebro:0.8.3
    container_name: cerebro
    ports:
      - "9000:9000"
    command:
      - -Dhosts.0.host=http://elasticsearch:9200
    networks:
      - es7net
  kibana:
    image: docker.elastic.co/kibana/kibana:7.1.0
    container_name: kibana7
    environment:
      - I18N_LOCALE=zh-CN
      - XPACK_GRAPH_ENABLED=true
      - TIMELION_ENABLED=true
      - XPACK_MONITORING_COLLECTION_ENABLED=true
    ports:
      - "5601:5601"
    networks:
      - es7net
  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_01
    environment:
      - cluster.name=xiaoxiao
      - node.name=es7_01
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01,es7_02
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data1:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    networks:
      - es7net
  elasticsearch2:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.1.0
    container_name: es7_02
    environment:
      - cluster.name=xiaoxiao
      - node.name=es7_02
      - bootstrap.memory_lock=true
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
      - discovery.seed_hosts=es7_01,es7_02
      - cluster.initial_master_nodes=es7_01,es7_02
    ulimits:
      memlock:
        soft: -1
        hard: -1
    volumes:
      - es7data2:/usr/share/elasticsearch/data
    networks:
      - es7net

volumes:
  es7data1:
    driver: local
  es7data2:
    driver: local

networks:
  es7net:
    driver: bridge

Start and stop commands:

docker-compose up      # start
docker-compose down    # stop the containers
docker-compose down -v # stop the containers and remove the data volumes
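
If you prefer to keep the containers running in the background, detached mode also works; a small addition to the commands above, using standard docker-compose options:

docker-compose up -d                  # start in the background
docker-compose logs -f elasticsearch  # follow the logs of the es7_01 node
docker-compose ps                     # list the running services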

1.3 Checking whether it worked

ES access address:

localhost:9200  # the default ES port is 9200

Kibana access address:

localhost:5601  # the default Kibana port is 5601

Cerebro access address:

localhost:9000  # the default Cerebro port is 9000
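
You can also check from the command line; a minimal sketch, assuming the compose file above is running and the ports are mapped as shown:

curl "http://localhost:9200"                         # basic node and cluster info
curl "http://localhost:9200/_cluster/health?pretty"  # the cluster should report 2 nodes
curl "http://localhost:9200/_cat/nodes?v"            # es7_01 and es7_02 should both be listed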

If all three addresses respond, the installation as a whole succeeded.

Note: I deployed this successfully on my Mac. I also tried deploying it on my own Alibaba Cloud server, but that still fails because the machine does not have enough memory.


2. Importing data into ES with Logstash

Note: the Logstash and Kibana versions you download should match the version of your Elasticsearch.
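
A quick way to confirm the versions line up (assuming the ES containers above are running and Logstash 7.1.0 has been unpacked locally):

curl "http://localhost:9200"    # the version.number field should read 7.1.0
./bin/logstash --version        # run from the Logstash installation directory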

2.1 Configure movices.yml

The file name itself is completely arbitrary.

# the input block defines where data is read from; here it reads movies.csv from the data directory
input {
  file {
    path => "/Users/xub/opt/logstash-7.1.0/data/movies.csv"
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["id","content","genre"]
  }

  mutate {
    split => { "genre" => "|" }
    remove_field => ["path", "host","@timestamp","message"]
  }

  mutate {

    split => ["content", "("]
    add_field => { "title" => "%{[content][0]}"}
    add_field => { "year" => "%{[content][1]}"}
  }

  mutate {
    convert => {
      "year" => "integer"
    }
    strip => ["title"]
    remove_field => ["path", "host","@timestamp","message","content"]
  }

}
# the output block defines where data is written; here it writes to the local ES, using the index name movies
output {
   elasticsearch {
     hosts => "http://localhost:9200"
     index => "movies"
     document_id => "%{id}"
   }
  stdout {}
}

Start command: the exact command depends on where you placed movices.yml. Here it sits in the Logstash root directory, so go into the bin directory and run:

./logstash -f ../movices.yml
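
Optionally, Logstash can validate the configuration file without starting the pipeline; this uses the standard --config.test_and_exit flag, which is not mentioned in the original post:

./logstash -f ../movices.yml --config.test_and_exit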

(Screenshot: movices.yml storage location)

(Screenshot: Logstash started successfully)

At this point, if you open the Cerebro UI, you can see that an index named movies already exists. That index only appeared after Logstash successfully imported the data into ES; the screenshot is just a cropped view of it.
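
You can also confirm the import against ES directly; a minimal check, assuming the index name movies from the output section above:

curl "http://localhost:9200/movies/_count?pretty"          # number of documents imported
curl "http://localhost:9200/movies/_search?size=1&pretty"  # look at a single imported document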

Summary: the process here is quite simple. Migrating MySQL data into ES through Logstash, which I have done before, is comparatively more complex, since it requires a database driver package.
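
For reference, a minimal sketch of what such a MySQL-to-ES pipeline might look like with the logstash-input-jdbc plugin. The driver path, connection string, credentials, table, and index name below are placeholders for illustration, not values from this post:

input {
  jdbc {
    # path to the MySQL JDBC driver jar (placeholder)
    jdbc_driver_library => "/path/to/mysql-connector-java.jar"
    jdbc_driver_class => "com.mysql.cj.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/test_db"
    jdbc_user => "root"
    jdbc_password => "password"
    # SQL that selects the rows to index (placeholder table)
    statement => "SELECT * FROM movies"
    # run the query once a minute
    schedule => "* * * * *"
  }
}

output {
  elasticsearch {
    hosts => "http://localhost:9200"
    index => "movies_from_mysql"
    document_id => "%{id}"
  }
}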

The environment is now set up successfully; everything we study next will be based on this setup.


Thanks

Elasticsearch Core Technology and Practice --- Ruan Yiming (Technical Director of the eBay Pronto platform)

Course-related materials: GitHub address



I believe that no matter how rough the road ahead may be, as long as I seize today, sooner or later I will taste the sweetness of life through my efforts. Seizing every minute and second of life beats idling away months and years!



Origin www.cnblogs.com/qdhxhz/p/11432112.html