Big data-Elasticsearch

Elasticsearch

Introduction

Elasticsearch is built on Lucene. It hides Lucene's complexity and exposes an easy-to-use RESTful API as well as a Java API.

Elasticsearch: a real-time distributed search and analytics engine, used for full-text search, structured search, and analytics.

Features

  1. Scales to petabytes of data
  2. Combines full-text search, data analysis, and distributed technology
  3. Simple to operate and easy to deploy; works out of the box even for small data sets
  4. Provides capabilities a traditional database cannot, such as relevance-ranked full-text search

Index (index-database)

An index is a collection of documents that share a similar structure; an index contains many documents, and all documents in one index represent the same kind of data

Type (type-table)

Each index can have one or more types. A type is a logical classification of data within an index; documents under the same type share the same fields, and each type holds many documents. (In Elasticsearch 7.x, types are deprecated; each index has the single type _doc.)

Document (document-line)

A document is the smallest unit of data in ES; each type under an index can store many documents

Field (field-column)

A field is the smallest unit in ES; a document contains multiple fields, and each field holds one column of data

Mapping

A mapping defines how the fields of a document are stored and indexed within an index
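For illustration, here is a sketch of defining an explicit mapping when creating an index. This example is not from the original article: the index name `user` and its fields are made up, and it assumes a running node reachable at http://192.168.138.130:9200 (the master address used later in this guide).

```shell
# Create an index named "user" with an explicit mapping.
# In Elasticsearch 7.x every index has the single implicit type _doc.
curl -X PUT "http://192.168.138.130:9200/user" -H 'Content-Type: application/json' -d'
{
  "mappings": {
    "properties": {
      "name": { "type": "text"    },
      "age":  { "type": "integer" }
    }
  }
}'
```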

Comparison between Elasticsearch and database

Relational database (e.g. MySQL)    Elasticsearch
--------------------------------    -------------
Database                            Index
Table                               Type
Row of data                         Document
Data column                         Field
Constraint / schema                 Mapping
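To make the analogy in the table concrete, a sketch of how a MySQL row insert and query map onto Elasticsearch document operations. The `user` index, its fields, and the node address 192.168.138.130:9200 are illustrative and require a running cluster.

```shell
# MySQL: INSERT INTO user (name, age) VALUES ('tom', 20);
# Elasticsearch equivalent: index a document with id 1
curl -X PUT "http://192.168.138.130:9200/user/_doc/1" -H 'Content-Type: application/json' -d'
{ "name": "tom", "age": 20 }'

# MySQL: SELECT * FROM user WHERE name = 'tom';
# Elasticsearch equivalent: a match query against the name field
curl -X GET "http://192.168.138.130:9200/user/_search" -H 'Content-Type: application/json' -d'
{ "query": { "match": { "name": "tom" } } }'
```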

Install Elasticsearch

(1) Download the installation package from the Elasticsearch official website: https://www.elastic.co/cn/downloads/elasticsearch

(2) Unzip the elasticsearch-7.1.0-linux-x86_64.tar.gz compressed package

tar -xvzf elasticsearch-7.1.0-linux-x86_64.tar.gz

(3) Modify the user permissions of the elasticsearch-7.1.0 folder

chown -R destiny elasticsearch-7.1.0

(4) Create data and logs folders

mkdir -p data/data
mkdir -p data/logs

(5) Modify the user permissions of the data folder

chown -R destiny data

(6) Switch users

su destiny

(7) Modify the elasticsearch.yml file in the config folder

Master node configuration

# ---------------------------------- Cluster -----------------------------------
# cluster.name must be identical on every node in the cluster
cluster.name: elasticsearch
# ------------------------------------ Node ------------------------------------
# node.name must be unique for each node in the cluster
node.name: master
# ----------------------------------- Paths ------------------------------------
path.data: /usr/local/elasticsearch/data/data
path.logs: /usr/local/elasticsearch/data/logs
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.138.130
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["192.168.138.130","192.168.138.129","192.168.138.128"]
cluster.initial_master_nodes: ["master"]
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"
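Note: the `discovery.zen.*` settings above are legacy 6.x names. They are still accepted on 7.1 (with deprecation warnings), and `discovery.zen.minimum_master_nodes` is ignored by the 7.x cluster coordination layer. The 7.x-native equivalent of the unicast host list is:

```yaml
# 7.x replacement for discovery.zen.ping.unicast.hosts
discovery.seed_hosts: ["192.168.138.130","192.168.138.129","192.168.138.128"]
cluster.initial_master_nodes: ["master"]
```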

slave1 node configuration

# ---------------------------------- Cluster -----------------------------------
# cluster.name must be identical on every node in the cluster
cluster.name: elasticsearch
# ------------------------------------ Node ------------------------------------
# node.name must be unique for each node in the cluster
node.name: slave1
# ----------------------------------- Paths ------------------------------------
path.data: /usr/local/elasticsearch/data/data
path.logs: /usr/local/elasticsearch/data/logs
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.138.129
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["192.168.138.130","192.168.138.129","192.168.138.128"]
cluster.initial_master_nodes: ["master"]
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"

slave2 node configuration

# ---------------------------------- Cluster -----------------------------------
# cluster.name must be identical on every node in the cluster
cluster.name: elasticsearch
# ------------------------------------ Node ------------------------------------
# node.name must be unique for each node in the cluster
node.name: slave2
# ----------------------------------- Paths ------------------------------------
path.data: /usr/local/elasticsearch/data/data
path.logs: /usr/local/elasticsearch/data/logs
# ----------------------------------- Memory -----------------------------------
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
# ---------------------------------- Network -----------------------------------
network.host: 192.168.138.128
# --------------------------------- Discovery ----------------------------------
discovery.zen.ping.unicast.hosts: ["192.168.138.130","192.168.138.129","192.168.138.128"]
cluster.initial_master_nodes: ["master"]
discovery.zen.minimum_master_nodes: 2
# ---------------------------------- Various -----------------------------------
#action.destructive_requires_name: true
http.cors.enabled: true
http.cors.allow-origin: "*"

(8) Switch back to the root user and append the following configuration to the /etc/security/limits.conf file

* soft nofile 65536
* hard nofile 131072
* soft nproc 4096
* hard nproc 4096

(9) Modify the /etc/security/limits.d/20-nproc.conf file

* soft nproc 4096

(10) Modify the /etc/sysctl.conf file to add configuration

vm.max_map_count=655360

(11) Reload the kernel parameters

sysctl -p
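A quick way to confirm that steps (8) through (10) took effect. Note that limits.conf only applies to new login sessions, so re-log-in as the elasticsearch user before checking; the expected values reflect the configuration above.

```shell
# Open-file limit for the current shell; should report 65536 on a host
# configured as above (after a fresh login).
ulimit -n
# Kernel mmap-count limit; should report 655360 after sysctl -p.
cat /proc/sys/vm/max_map_count
```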

(12) Start elasticsearch

./bin/elasticsearch
# run in the background
./bin/elasticsearch -d


(13) Test elasticsearch

curl http://hadoop1:9200


curl -XGET '192.168.138.130:9200/_cat/health?v&pretty'


(14) Send Elasticsearch related files and folders to other servers

scp -r elasticsearch hadoop2:$PWD
scp /etc/security/limits.conf hadoop2:/etc/security
scp /etc/profile hadoop2:/etc/
scp /etc/security/limits.d/20-nproc.conf hadoop2:/etc/security/

Install Elasticsearch-head plugin

(1) Download the head plugin from https://github.com/mobz/elasticsearch-head/archive/master.zip

(2) Unzip the elasticsearch-head-master.zip compressed package

unzip elasticsearch-head-master.zip

(3) Install Node.js

curl -sL https://rpm.nodesource.com/setup_8.x | bash -
yum install -y nodejs

(4) Verify whether the installation is successful

node -v
npm -v

(5) Switch to the elasticsearch-head-master directory and install grunt

npm install -g grunt-cli
npm install [email protected] --ignore-scripts
npm install

(6) Modify the Gruntfile.js file

connect: {
        server: {
                options: {
                        port: 9100,
                        // '*' allows access from any address, not just localhost
                        hostname: '*',
                        base: '.',
                        keepalive: true
                }
        }
}
(7) Modify the _site/app.js file

this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://192.168.138.130:9200";

(8) Start the head plugin

grunt server &

(9) Open http://192.168.138.130:9100/ in a browser


Install the Elasticsearch-analysis-ik plugin

(1) Download the IK analysis plugin from https://github.com/medcl/elasticsearch-analysis-ik/releases/tag/v7.1.0

(2) Create elasticsearch-analysis-ik-7.1.0 under the plugins folder of elasticsearch

mkdir elasticsearch-analysis-ik-7.1.0

(3) Unzip the elasticsearch-analysis-ik-7.1.0.zip compressed package

unzip elasticsearch-analysis-ik-7.1.0.zip

(4) Restart elasticsearch
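After the restart, you can check that the plugin loaded by running a tokenization through the `_analyze` API. A sketch assuming the node address used earlier and a running cluster; the sample text is arbitrary.

```shell
# Tokenize a Chinese phrase with the ik_smart analyzer added by the plugin
curl -X GET "http://192.168.138.130:9200/_analyze" -H 'Content-Type: application/json' -d'
{ "analyzer": "ik_smart", "text": "中华人民共和国" }'
```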

Install Kibana

(1) Website https://www.elastic.co/cn/downloads/past-releases/kibana-7-1-0 download kibana

(2) Unzip the kibana-7.1.0-linux-x86_64.tar.gz compressed package

tar -xvzf kibana-7.1.0-linux-x86_64.tar.gz

(3) Modify the configuration file kibana.yml

server.port: 5601
server.host: "192.168.138.130"
elasticsearch.hosts: ["http://192.168.138.130:9200"]
kibana.index: ".kibana"

(4) Start the Elasticsearch cluster

./bin/elasticsearch

(5) Start Kibana

./bin/kibana


(6) Open http://192.168.138.130:5601 in a browser


Origin blog.csdn.net/JavaDestiny/article/details/90581792