Elasticsearch Introduction and Installation (Detailed)

1.1 Introduction

1.1.1. Elastic

Elastic official website: https://www.elastic.co/cn/

Elastic has a complete line of products and solutions: Elasticsearch, Kibana, Logstash, etc. The first three make up what we often call the ELK technology stack.

1.1.2. Elasticsearch

Elasticsearch official website: https://www.elastic.co/cn/products/elasticsearch

As described above, Elasticsearch has the following characteristics:

  • Distributed, with no manual cluster setup required (Solr needs manual configuration and uses ZooKeeper as a registry)
  • RESTful style: everything follows REST API principles, making it easy to use
  • Near-real-time search: data updated in Elasticsearch becomes searchable almost immediately

1.1.3. Version

The latest Elasticsearch version is 6.3.1; we use 6.3.0.

The virtual machine needs JDK 1.8 or above.

1.2. Installation and Configuration

To simulate a real-world scenario, we will install Elasticsearch on Linux.

1.2.1. Create a new user: leyou

For security reasons, Elasticsearch does not allow running as the default root account.

Create a user:

useradd leyou

Set a password:

passwd leyou

Switch user:

su - leyou

1.2.2. Upload and extract the installation package

We upload the installation package to the /home/leyou directory.

Extract it:

tar -zxvf elasticsearch-6.2.4.tar.gz

Delete the archive:

rm -rf elasticsearch-6.2.4.tar.gz

Rename the directory:

mv elasticsearch-6.2.4/ elasticsearch

Enter the directory to view its structure:

1.2.3. Modify the configuration

We enter the config directory:

cd config

There are two configuration files that need to be modified:

  1. jvm.options

Edit jvm.options:

vim jvm.options

The default configuration is as follows:

-Xms1g
-Xmx1g

The defaults take up too much memory, so we reduce them:

-Xms512m
-Xmx512m
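The same edit can be made non-interactively with sed. The sketch below works on a scratch copy of jvm.options, so the /tmp path is illustrative rather than your real install directory:

```shell
# Work on a scratch copy of jvm.options to illustrate the edit
# (replace /tmp/es-demo with your real config directory).
mkdir -p /tmp/es-demo
printf -- '-Xms1g\n-Xmx1g\n' > /tmp/es-demo/jvm.options

# Reduce the default 1g heap to 512m (the same change as the manual vim edit).
sed -i 's/^-Xms1g/-Xms512m/; s/^-Xmx1g/-Xmx512m/' /tmp/es-demo/jvm.options
cat /tmp/es-demo/jvm.options
```

Note that -Xms (initial heap) and -Xmx (maximum heap) should be kept equal, as Elasticsearch recommends.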
  2. elasticsearch.yml

Edit elasticsearch.yml:

vim elasticsearch.yml
  • Modify the data and log directories:
path.data: /home/leyou/elasticsearch/data # data directory
path.logs: /home/leyou/elasticsearch/logs # log directory

We point the data and logs directories at the Elasticsearch installation directory. These two directories do not exist yet, so we need to create them.

Enter the Elasticsearch root directory, then create them:

mkdir data
mkdir logs

  • Modify the IP binding:
network.host: 0.0.0.0 # bind to 0.0.0.0 and allow access from any IP

By default, only local access is allowed; after changing this to 0.0.0.0, remote access becomes possible.

Currently we are doing a stand-alone installation; to build a cluster, you only need to add the other nodes' information to the configuration file.
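For reference, a minimal cluster setup might add lines like these to each node's elasticsearch.yml; the cluster name, node name, and second address below are made-up example values, not part of this guide's setup:

```yaml
# Hypothetical two-node cluster settings (example values only)
cluster.name: leyou-es                 # must be the same on every node
node.name: node-1                      # must be unique per node
discovery.zen.ping.unicast.hosts: ["192.168.56.101", "192.168.56.102"]
```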

Other configuration options in elasticsearch.yml:

cluster.name
    The Elasticsearch cluster name; defaults to "elasticsearch". It is recommended to change it to a meaningful name.
node.name
    The node name; ES assigns a random name by default. It is recommended to specify a meaningful name for easier management.
path.conf
    The configuration file path; for tar or zip installs it defaults to the config folder under the ES root, for rpm installs it defaults to /etc/elasticsearch.
path.data
    The index data path; defaults to the data folder under the ES root. Multiple paths may be set, separated by commas.
path.logs
    The log file path; defaults to the logs folder under the ES root.
path.plugins
    The plugin path; defaults to the plugins folder under the ES root.
bootstrap.memory_lock
    Set to true to lock the memory used by ES and avoid swapping.
network.host
    Sets both bind_host and publish_host; set to 0.0.0.0 to allow external access.
http.port
    The HTTP port for external services; defaults to 9200.
transport.tcp.port
    The communication port between cluster nodes; defaults to 9300.
discovery.zen.ping.timeout
    The connection timeout for node auto-discovery; defaults to 3 seconds. Set it higher on networks with high latency.
discovery.zen.minimum_master_nodes
    The minimum number of master-eligible nodes; the formula is (master_eligible_nodes / 2) + 1. For example, with 3 master-eligible nodes, set this to 2.
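Putting the changes from this section together, a minimal single-node elasticsearch.yml for this guide might look like the following sketch (only the lines we actually changed; everything else keeps its default):

```yaml
# Minimal elasticsearch.yml for this guide's single-node setup
path.data: /home/leyou/elasticsearch/data
path.logs: /home/leyou/elasticsearch/logs
network.host: 0.0.0.0
http.port: 9200        # the default, shown here for clarity
```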

Modify the file permissions, giving the leyou user ownership of the elasticsearch folder (-R applies it recursively):

chown leyou:leyou elasticsearch/ -R

1.3. Run

Enter the elasticsearch/bin directory, where you can see the executable files:

Then enter the command:

./elasticsearch

We find that it reports errors and fails to start:

1.3.1. Error 1: Kernel version too low

Modify the elasticsearch.yml file and add the following configuration at the bottom:

bootstrap.system_call_filter: false

Then restart.

1.3.2. Error 2: Insufficient file descriptors

Starting again, another error appears:

[1]: max file descriptors [4096] for elasticsearch process likely too low, increase to at least [65536]

We are running as the leyou user rather than root, and that user's open-file limit is too low.

First, log in as the root user.

Then modify the configuration file:

vim /etc/security/limits.conf

Add the following content:

* soft nofile 65536

* hard nofile 131072

* soft nproc 4096

* hard nproc 4096
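After logging in again as leyou, you can confirm that the new limits are in effect with the shell built-in ulimit; the values in the comments are what the steps above should produce on your machine:

```shell
ulimit -n   # max open file descriptors; should now report 65536
ulimit -u   # max user processes; should now report 4096
```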

1.3.3. Error 3: Not enough threads

In the same error output, there is another line:

[1]: max number of threads [1024] for user [leyou] is too low, increase to at least [4096]

This means the thread limit for the user is too low.

Continue modifying the configuration:

vim /etc/security/limits.d/90-nproc.conf 

Change the following line:

* soft nproc 1024

to:

* soft nproc 4096

1.3.4. Error 4: Insufficient process virtual memory

[3]: max virtual memory areas vm.max_map_count [65530] likely too low, increase to at least [262144]

vm.max_map_count limits the number of VMAs (virtual memory areas) a process can have. Continue modifying the configuration file:

vim /etc/sysctl.conf 

Add the following content:

vm.max_map_count=655360

Then execute this command to apply the change:

sysctl -p
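You can verify that the new value took effect by reading the setting back from /proc (equivalent to running sysctl vm.max_map_count):

```shell
# Read the current kernel setting back; should print 655360 after sysctl -p.
cat /proc/sys/vm/max_map_count
```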

1.3.5. Restart the terminal window

After all the fixes are complete, be sure to restart your Xshell terminal, otherwise the configuration will not take effect.

1.3.6. Start

Start again; this time it finally succeeds!

You can see that it binds two ports:

  • 9300: communication port between cluster nodes
  • 9200: client access port

We visit it in the browser: http://192.168.56.101:9200 (or from a shell: curl http://192.168.56.101:9200)

1.4. Installing Kibana

1.4.1. What is Kibana?

Kibana is a Node.js-based statistics and visualization tool for data indexed in Elasticsearch. It can use Elasticsearch's aggregation functionality to generate various charts, such as bar charts, line charts, and pie charts.

It also provides a console for operating on Elasticsearch index data, with API hints that are very helpful when learning Elasticsearch syntax.

1.4.2. Installation

Kibana depends on Node.js; our virtual machine does not have Node installed, but our Windows machine does, so we choose to run Kibana on Windows.

The version must be consistent with Elasticsearch: also 6.3.0.

Extract it to any directory you like.

1.4.3. Configuration and Run

Configuration

Enter the config directory under the installation directory and modify the kibana.yml file:

Modify the address of the Elasticsearch server:

elasticsearch.url: "http://192.168.56.101:9200"
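For reference, the relevant part of kibana.yml after the change might look like this; server.port is shown with its default value and is an assumption about your setup:

```yaml
# kibana.yml (relevant settings)
server.port: 5601                                  # default Kibana port
elasticsearch.url: "http://192.168.56.101:9200"    # your Elasticsearch address
```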

Run

Go to the bin directory under the installation directory:

Double-click kibana.bat:

You will see that Kibana listens on port 5601.

We visit: http://127.0.0.1:5601

1.4.4. Console

Select the DevTools menu on the left to enter the console page:

On the right side of the page, we can enter requests to access Elasticsearch.

1.5. Installing the ik tokenizer

Lucene's IK analyzer has not been maintained since 2012. The version we want to use is maintained and updated on that basis, has been developed into an integrated Elasticsearch plugin, and keeps its version in sync with Elasticsearch; the latest version is 6.3.0.

1.5.1. Installation

Upload the zip package and extract it into the plugins directory under the Elasticsearch directory:

Use the unzip command to extract it:

unzip elasticsearch-analysis-ik-6.3.0.zip -d ik-analyzer

Then restart Elasticsearch.

1.5.2. Test

Before we learn the syntax, let's run a quick test first.

Enter the following request in the Kibana console:

POST _analyze
{
  "analyzer": "ik_max_word",
  "text":     "我是中国人"
}

Running it gives the following result:

{
  "tokens": [
    {
      "token": "我",
      "start_offset": 0,
      "end_offset": 1,
      "type": "CN_CHAR",
      "position": 0
    },
    {
      "token": "是",
      "start_offset": 1,
      "end_offset": 2,
      "type": "CN_CHAR",
      "position": 1
    },
    {
      "token": "中国人",
      "start_offset": 2,
      "end_offset": 5,
      "type": "CN_WORD",
      "position": 2
    },
    {
      "token": "中国",
      "start_offset": 2,
      "end_offset": 4,
      "type": "CN_WORD",
      "position": 3
    },
    {
      "token": "国人",
      "start_offset": 3,
      "end_offset": 5,
      "type": "CN_WORD",
      "position": 4
    }
  ]
}
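The console request maps directly onto a curl call. The sketch below only builds and validates the request body locally, so it runs without a live cluster; the commented curl line shows how it would be sent to the address used in this guide:

```shell
# Build the _analyze request body and check that it is valid JSON.
cat > /tmp/analyze.json <<'EOF'
{
  "analyzer": "ik_max_word",
  "text": "我是中国人"
}
EOF
python3 -m json.tool /tmp/analyze.json

# To send it to a running cluster (adjust the host to yours):
# curl -H 'Content-Type: application/json' -X POST \
#   'http://192.168.56.101:9200/_analyze?pretty' -d @/tmp/analyze.json
```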

Package Download:

Links: https://pan.baidu.com/s/1KeIQtkCRDlnba6L73lkXEg extraction code: see the comments on this post


Origin www.linuxidc.com/Linux/2020-01/161900.htm