Put an end to the difficulties beginners face installing ElasticSearch, Kibana, and Logstash: "Fun with ElasticSearch, Part 1"

I am going to use ElasticSearch in a project. Before this I only had a superficial, unsystematic understanding of it, so this series of articles will start from the basics and work up to in-depth use.

Kaka previously wrote a series on MySQL; now it is time for a series on playing with ElasticSearch.

This article walks you through installing ElasticSearch, Kibana, and Logstash, configuring external network access for ElasticSearch, running Kibana and ElasticSearch as daemon processes, and using Logstash to import demo data into ElasticSearch.

ElasticSearch installation

1. Install ElasticSearch

We will build an ElasticSearch environment from scratch, starting with the installation.

The official ElasticSearch website is constantly being redesigned, and many readers cannot find the download page.

After entering the site, click the circled position; do not click the link on the left to download directly.

On the next page you can see all released historical versions and download whichever one you need; version 7.1.0 is used here.

If you are on Linux, open the browser's developer tools, copy the download address, and download it directly with wget:

https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz
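For example, a minimal download-and-extract sketch, assuming you work under /usr/local/elasticsearch (an assumed location; put it wherever you like):

# create a working directory and download the package
mkdir -p /usr/local/elasticsearch && cd /usr/local/elasticsearch
wget https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.1.0-linux-x86_64.tar.gz

# extract it
tar -zxf elasticsearch-7.1.0-linux-x86_64.tar.gz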

At this point ElasticSearch has been downloaded and decompressed.

Don't rush to start it just yet: since version 5.x, ElasticSearch refuses to run as the root user for security reasons.

Add user

Execute useradd es to add the es user

As root, give the es user ownership of the ElasticSearch directory:

chown -R es ElasticSearch

Start ElasticSearch

After switching to the es user, execute ./bin/elasticsearch to start ElasticSearch.
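Put together, a minimal sketch of these steps, assuming the package was extracted to /usr/local/elasticsearch/elasticsearch-7.1.0 (the path that also appears in the error log below); adjust it to your own location:

# as root: create the es user and hand ownership of the ElasticSearch directory to it
useradd es
chown -R es /usr/local/elasticsearch/elasticsearch-7.1.0

# switch to the es user and start ElasticSearch in the foreground
su es
cd /usr/local/elasticsearch/elasticsearch-7.1.0
./bin/elasticsearch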

Problem with initializing keystore at startup

Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.nio.file.AccessDeniedException: /usr/local/elasticsearch/elasticsearch-6.6.0/config/elasticsearch.keystore
Likely root cause: java.nio.file.AccessDeniedException: /usr/local/elasticsearch/elasticsearch-7.1.0/config/elasticsearch.keystore
	at sun.nio.fs.UnixException.translateToIOException(UnixException.java:84)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:102)
	at sun.nio.fs.UnixException.rethrowAsIOException(UnixException.java:107)
	at sun.nio.fs.UnixFileSystemProvider.newByteChannel(UnixFileSystemProvider.java:214)
	at java.nio.file.Files.newByteChannel(Files.java:361)
	at java.nio.file.Files.newByteChannel(Files.java:407)
	at org.apache.lucene.store.SimpleFSDirectory.openInput(SimpleFSDirectory.java:77)
	at org.elasticsearch.common.settings.KeyStoreWrapper.load(KeyStoreWrapper.java:206)
	at org.elasticsearch.bootstrap.Bootstrap.loadSecureSettings(Bootstrap.java:224)
	at org.elasticsearch.bootstrap.Bootstrap.init(Bootstrap.java:289)
	at org.elasticsearch.bootstrap.Elasticsearch.init(Elasticsearch.java:159)
	at org.elasticsearch.bootstrap.Elasticsearch.execute(Elasticsearch.java:150)
	at org.elasticsearch.cli.EnvironmentAwareCommand.execute(EnvironmentAwareCommand.java:86)
	at org.elasticsearch.cli.Command.mainWithoutErrorHandling(Command.java:124)
	at org.elasticsearch.cli.Command.main(Command.java:90)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:115)
	at org.elasticsearch.bootstrap.Elasticsearch.main(Elasticsearch.java:92)
Refer to the log for complete error details.

This version's security features require an elasticsearch.keystore file, so create one with the following command:

./bin/elasticsearch-keystore create

Check whether the startup succeeded

Open 127.0.0.1:9200 in the browser; if you see the following interface, the installation succeeded.
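If the virtual machine has no graphical browser, the same check can be done from the command line; the exact values depend on your machine, but the response looks roughly like this:

curl http://127.0.0.1:9200

# expect a small JSON document with fields such as "name", "cluster_name",
# "cluster_uuid" and "version" : { "number" : "7.1.0", ... }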

Kaka installed everything on a CentOS virtual machine, so the next question is whether ElasticSearch can be accessed from the external network.

2. Configure external network access

The steps to configure external network access are also very simple; follow along and you can be done in about three minutes.

Problem one

max file descriptors [4096] for elasticsearch process is too low, increase to at least [65535]

Edit /etc/security/limits.conf and append the following:

es soft nofile 65536

es hard nofile 65536

Problem two

max number of threads [3782] for user [es] is too low, increase to at least [4096]

This means the maximum number of threads allowed for the es user is too low.

Edit /etc/security/limits.conf again.

Add the following two lines to the end of the file:

es soft nproc 4096

es hard nproc 4096

Problem three

max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]

Add a line at the end of /etc/sysctl.conf:

vm.max_map_count=262144
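To apply this kernel setting without rebooting, you can reload sysctl as root and check the new value:

sysctl -p

# confirm the change
sysctl vm.max_map_count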

Problem four

the default discovery settings are unsuitable for production use; at least one of [discovery.seed_hosts, discovery.seed_providers, cluster.initial_master_nodes] must be configured

This means that at least one of the following three settings must be configured: discovery.seed_hosts, discovery.seed_providers, or cluster.initial_master_nodes.

Modify the elasticsearch.yml configuration file in the config directory of elasticsearch:

node.name: node-1
cluster.initial_master_nodes: ["node-1"]
network.host: 0.0.0.0

Problem five

If you have done all of the above and still cannot access ElasticSearch from outside, check whether the firewall is turned off.

# stop the firewall
systemctl stop firewalld.service

# keep it disabled permanently
systemctl disable firewalld.service
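If you would rather keep the firewall running than disable it entirely, a sketch of an alternative using firewalld (assuming the default public zone) is to open only the ports you actually need:

# permanently open the ElasticSearch and Kibana ports
firewall-cmd --zone=public --add-port=9200/tcp --permanent
firewall-cmd --zone=public --add-port=5601/tcp --permanent

# reload so the new rules take effect
firewall-cmd --reload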

One thing to note here: the changes for problems one and two only take effect after you restart the machine (or at least log out and log back in as the es user). Remember, remember, remember!

Access ElasticSearch from the external network

You can see that the IP of the virtual machine is 192.168.253.129.

Next, on the host, try to access the virtual machine's IP on port 9200 to see whether it is reachable.

3. Install Kibana

The download method is the same as for ElasticSearch, so you already know how to download these packages. Note that the Kibana version must match the ElasticSearch version.

https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-linux-x86_64.tar.gz

So next, install Kibana version 7.1.0.
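For reference, a short download-and-extract sketch for Kibana (the directory name is the one the 7.1.0 package produces; the location is up to you):

wget https://artifacts.elastic.co/downloads/kibana/kibana-7.1.0-linux-x86_64.tar.gz
tar -zxf kibana-7.1.0-linux-x86_64.tar.gz
cd kibana-7.1.0-linux-x86_64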

Configure Kibana parameters

Append the following to the end of the config/kibana.yml file:

i18n.locale: "zh-CN"

server.host: "0.0.0.0"

elasticsearch.hosts: ["http://localhost:9200"]

Start Kibana

Go to the Kibana directory and execute ./bin/kibana

It failed to start, and the following error appears in the ElasticSearch log:

[WARN ][o.e.c.c.ClusterFormationFailureHelper] [node-1] master not discovered or elected yet, an election requires a node with id [rEq_ExihQ927BnwBy3Iz7A], have discovered [] which is not a quorum; discovery will continue using [127.0.0.1:9300, 127.0.0.1:9301, 127.0.0.1:9302, 127.0.0.1:9303, 127.0.0.1:9304, [::1]:9300, [::1]:9301, [::1]:9302, [::1]:9303, [::1]:9304] from hosts providers and [{node-1}{DtZPMDK4S3qaSQF6mRhRqw}{lBAhaMvDTKmGkysihkwAqA}{192.168.122.130}{192.168.122.130:9300}{ml.machine_memory=1907744768, xpack.installed=true, ml.max_open_jobs=20}] from last-known cluster state; node term 412, last-accepted version 16 in term 1

Solution:

Check whether there is leftover node data from a previous run in ElasticSearch's data directory, and clear it out.
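A hedged sketch of that cleanup, assuming the default data path under the ElasticSearch home directory used earlier in this article. Be aware that deleting the data directory wipes everything indexed so far, which is only acceptable on a fresh test install like this one:

# stop ElasticSearch first, then remove the stale node data
rm -rf /usr/local/elasticsearch/elasticsearch-7.1.0/data/*

# start ElasticSearch again (from the ElasticSearch directory) so it forms a fresh cluster
./bin/elasticsearch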

How to shut down a Kibana instance

Find the process listening on port 5601 by running netstat -anp | grep 5601; the process ID is at the end of the output, and you can then kill it with kill -9 followed by that process ID.
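In command form (the process ID shown is hypothetical; use whatever number netstat prints on your machine):

netstat -anp | grep 5601     # the Kibana process ID is at the end of the matching line
kill -9 12345                # replace 12345 with the actual process ID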

Then restart it.

To switch Kibana to Chinese

The parameter was already given above: it is the i18n.locale: "zh-CN" setting.

When you open Kibana you will see this page.

Click the wrench icon (Dev Tools) and you can operate on ElasticSearch data directly, which is quite nice.

4. Quick tour of Kibana interface

After opening Kibana you can load the built-in sample data sets.

There are three of them: web logs, e-commerce orders, and flight data.

Then click Dashboards and you will see the three sample data sets you just added.

Next, let's look at a very important tool: Dev Tools. It will be used a lot later; its purpose is to let you execute ElasticSearch commands conveniently from inside Kibana.
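As a quick taste, these two requests typed into the Dev Tools console return the cluster information and the list of indices (both standard ElasticSearch APIs):

GET /

GET _cat/indices?v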

5. Start Kibana and ElasticSearch as daemons

After starting ElasticSearch and Kibana this way, you will find that they run in the foreground of the current terminal: if you close the terminal, they shut down too.

Next we will use nohup to start them as daemon (background) processes. Note that this is not quite the same as the nohup usage written up all over the Internet, so don't be confused.

Install nohup

If you run the nohup command and find it is not installed, execute:

yum install -y coreutils

It is usually installed at

/usr/bin/nohup

Run which nohup to confirm the installation location.

To make the nohup command globally available, the simplest way is to edit your profile:

vi ~/.bash_profile

# add this line
PATH=$PATH:$HOME/bin:/usr/bin

export PATH

Then save and reload the profile so the change takes effect:

source ~/.bash_profile

Finally, verify: if version information appears, the installation succeeded.

nohup --version

Start Kibana as a daemon

Many tutorials online probably just run nohup ./bin/kibana directly. Although this does start Kibana, the error shown in the figure below appears, and when you press Ctrl+C, Kibana shuts down as well.

You need to redirect the output and error messages to /dev/null (the Linux "black hole") and send the process to the background:

nohup ./bin/kibana > /dev/null 2>&1 &
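A quick breakdown of what each part of that command does:

# nohup         keep the process running after the terminal is closed (ignore SIGHUP)
# > /dev/null   discard standard output
# 2>&1          send standard error to the same place as standard output
# &             run the whole command in the background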


You will see a number printed after the command runs; this is Kibana's process ID. If you forget it, you can run the following command to find it again.

ps -ef | grep node

That finds it because Kibana runs on Node.js.


Start ElasticSearch

In the same way as starting Kibana, execute

nohup ./bin/elasticsearch > /dev/null 2>&1 &


6. Install Logstash and import demo data into ElasticSearch

Download link

wget https://artifacts.elastic.co/downloads/logstash/logstash-7.1.0.zip

The demo data is the MovieLens recommendation-system dataset, so you will need to find the demo data (movies.csv) yourself.

That kind of data can't be redistributed here, so…

Decompress

unzip logstash-7.1.0.zip

Start Logstash

./bin/logstash -f logstash.conf

Running it directly will definitely fail. You first need to get the demo data, then put the logstash.conf and movies.csv files in the top level of the logstash directory.

The place circled in the figure below is the movies.csv path inside logstash.conf; change that path to your own, as in the sketch below.
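The original logstash.conf is not reproduced here, but a rough sketch of what a CSV import config for movies.csv typically looks like follows; the file path, column names, and index name are assumptions, so adjust them to your own data:

input {
  file {
    path => "/usr/local/logstash-7.1.0/movies.csv"   # change this to your own path
    start_position => "beginning"
    sincedb_path => "/dev/null"
  }
}
filter {
  csv {
    separator => ","
    columns => ["movieId", "title", "genres"]   # the standard MovieLens movies.csv columns
  }
}
output {
  elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "movies"
  }
  stdout {}
}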

Then, when you run the Logstash start command again, you will hit the first problem:

logstash could not find java; set JAVA_HOME or ensure java is in PATH

First, make sure you have Java installed:

# verify whether Java is installed
java -version

The picture below shows a successful installation. If Java is not installed, don't worry: Kaka will walk you through the whole process.

Install Java 8

Download link:

https://download.oracle.com/otn/java/jdk/8u311-b11/4d5417147a92418ea8b615e228bb6935/jdk-8u311-linux-x64.tar.gz

If you are in a hurry you might try wget directly on this URL, but that will not work: downloading the Java 8 package requires clicking to accept the license, and you cannot download it without an Oracle account, so register one first.

First download the package on the host machine, then use the handy scp command to transfer it to the server; it is quite fast.

scp jdk-8u311-linux-x64.tar.gz root@192.168.17.128:/

Create the java directory under /usr/local

cd /usr/local

mkdir java

Move the java archive to /usr/local/java

mv jdk-8u311-linux-x64.tar.gz /usr/local/java/

Decompress the archive and rename the extracted directory to jdk8 so it matches the JAVA_HOME below:

tar -zxf jdk-8u311-linux-x64.tar.gz
mv jdk1.8.0_311 jdk8

Configure environment variables

vim /etc/profile

Add the following at the end of the file (if your directory layout is the same as Kaka's, you don't need to change anything):

export JAVA_HOME=/usr/local/java/jdk8

export PATH=$PATH:$JAVA_HOME/bin

export CLASSPATH=.:$JAVA_HOME/lib/tools.jar:$JAVA_HOME/lib/dt.jar

Reload the profile so it takes effect:

source /etc/profile

Final verification

# verify whether Java is installed
java -version

Execute ./bin/logstash -f logstash.conf again

The following error still appears, even though the Java environment has just been set up.

Open /logstash-7.1.0/bin/logstash.lib.sh with vim and you will find this is where the error is printed; the cause is that JAVACMD has no value.

Searching the file shows JAVACMD is used in many places, so the simplest fix is to assign it a value directly near the top; pay attention to the circled place, and see the sketch below.
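For example, a line like the following near the top of logstash.lib.sh points it at the JDK installed earlier in this article (adjust the path if yours differs):

# give JAVACMD an explicit value so the script stops complaining
JAVACMD=/usr/local/java/jdk8/bin/java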

Save, exit, and execute again:

./bin/logstash -f logstash.conf

That finally does it: the data is imported into ElasticSearch.

Check in Kibana whether a movies index now exists.
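One quick way to check is from the Dev Tools console; assuming the index is named movies, as in the config sketch above:

GET _cat/indices/movies?v

GET movies/_count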

7. Install Cerebro

Download

wget https://github.com/lmenezes/cerebro/releases/download/v0.9.4/cerebro-0.9.4.tgz

Decompress

tar -zxf cerebro-0.9.4.tgz

Modify the configuration file (conf/application.conf): you only need to uncomment a host entry and fill in your own ElasticSearch address.

hosts = [
  #{
  #  host = "http://192.168.17.128:9100"
  #  name = "Localhost cluster"
  #  headers-whitelist = [ "x-proxy-user", "x-proxy-roles", "X-Forwarded-For" ]
  #}
  # Example of host with authentication
  {
    host = "http://192.168.17.128:9200"
  #  name = "Secured Cluster"
  #  auth = {
  #    username = "admin"
  #    password = "admin"
  #  }
  }
]

Start it up

cd cerebro

./bin/cerebro

Then open the server's IP address on port 9000 in a browser.

cerebro installation

8. Summary

By following this article you can install and start ElasticSearch and Kibana successfully. Making ElasticSearch accessible from the external network requires a few extra settings, all of which were covered above.

We also took a quick look at the Kibana interface; most of the later exercises will be done in Kibana.

Finally, we covered the most interesting part: running ElasticSearch and Kibana as daemons. A lot of the information on the Internet is incomplete and runs into problems in practice, so Kaka has tried to give you a complete tutorial you can follow step by step.

"Deadline MySQL" series: full table of contents

Persistence in learning, persistence in writing, and persistence in sharing are the beliefs Kaka has upheld throughout her career. I hope this article brings you a little help on the vast Internet. I am Kaka, see you in the next issue.


Origin: blog.csdn.net/fangkang7/article/details/121458307