Build an ELK log-analysis cluster

Base environment: four servers. The first has 3 GB of RAM, the other three have 2 GB each. Disable the firewall and verify that every machine can ping the external network.

Machine 1: 192.168.10.1

Machine 2: 192.168.10.2

Machine 3: 192.168.10.3

Machine 4: 192.168.10.4

 

Time source: the China National Time Service Center.

Synchronize the clocks over the network: ntpdate ntp.ntsc.ac.cn

If the build fails later, inconsistent clocks across the machines are a likely cause.

 

Machine 1:

Create the es group, add a user named es to it, then create the es directory and grant it the proper permissions.
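A minimal sketch of this step; the install path /usr/local/es is an assumption, not taken from the original walkthrough:

```shell
# Sketch: create the es group/user and a directory the es user can own.
# /usr/local/es is an assumed path - adjust to your layout.
groupadd es                    # create the es group
useradd -g es -m es            # create user es inside the es group
mkdir -p /usr/local/es         # directory for the Elasticsearch installation
chown -R es:es /usr/local/es   # ES refuses to run as root, so hand ownership to es
```

A dedicated user is required because Elasticsearch will not start as root.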

 

Move the unpacked directory to /usr/local

Grant permissions (chown it to the es user)

Import the data

 

Reboot machine 1.

Machine 2:

Unzip

 

Edit the Kibana configuration file

 

Modify and add:
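A hedged example of what the kibana.yml changes typically look like, assuming Kibana runs on machine 2 and Elasticsearch on machine 1 (recent Kibana versions use elasticsearch.hosts; older ones use elasticsearch.url):

```yaml
# kibana.yml - illustrative values, adjust to your layout
server.port: 5601                                  # Kibana's default HTTP port
server.host: "0.0.0.0"                             # listen on all interfaces
elasticsearch.hosts: ["http://192.168.10.1:9200"]  # ES on machine 1
```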

Back on machine 1

Edit the ES configuration file

Modify and add:
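What the elasticsearch.yml additions usually look like; the cluster name, node name, and paths below are assumptions:

```yaml
# elasticsearch.yml - illustrative values
cluster.name: elk-cluster         # assumed cluster name
node.name: node-1                 # assumed node name
path.data: /usr/local/es/data     # assumed data path
path.logs: /usr/local/es/logs     # assumed log path
network.host: 0.0.0.0             # listen on all interfaces
http.port: 9200                   # default REST port
# On ES 7.x, a lone node also needs one of the discovery settings, e.g.:
# discovery.type: single-node
```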

Install Kafka on machine 2

Modify and add:

Create the data directory manually and write 1 into its myid file
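A sketch of Kafka's bundled ZooKeeper config (config/zookeeper.properties); the dataDir path is an assumption, and the file myid inside dataDir must contain 1, matching the server.1 line:

```properties
# config/zookeeper.properties - illustrative
dataDir=/usr/local/kafka/zookeeper-data   # must contain a file "myid" holding 1
clientPort=2181                           # ZooKeeper's default client port
server.1=192.168.10.2:2888:3888           # id 1 = this machine
```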

Start the zookeeper service (default port 2181)

Check the port number:

Install Kafka and modify its configuration file
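The usual server.properties edits, hedged; the broker id, listener address, and log path below are assumptions:

```properties
# config/server.properties - illustrative
broker.id=1                               # unique per broker
listeners=PLAINTEXT://192.168.10.2:9092   # Kafka's default port
log.dirs=/usr/local/kafka/kafka-logs      # assumed data path
zookeeper.connect=localhost:2181          # the ZooKeeper started above
```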

Start the service

Check the port number:

Verify that it works

Create a topic

Simulate a producer sending messages

Open another terminal and simulate a consumer receiving the messages
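The verification steps above can be sketched with Kafka's console tools (assuming Kafka 2.x or newer under /usr/local/kafka and a topic named test; both are assumptions). The producer and consumer each block their own terminal:

```shell
cd /usr/local/kafka

# create a test topic
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 \
  --topic test --partitions 1 --replication-factor 1

# terminal 1: producer - type messages and press Enter
bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

# terminal 2: consumer - should print every message the producer sends
bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic test --from-beginning
```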

If the consumer receives the messages, Kafka is working.

Machine 3

Install Logstash

Modify and add:

Machine 4

Unzip the filebeat installation package

Install nginx

Apply optimizations

ELK offers two pipeline options:

  1. Log -> Filebeat -> Logstash -> Elasticsearch <- Kibana
  2. Log -> Filebeat -> Kafka -> Logstash -> Elasticsearch <- Kibana

Option 1

Edit configuration file

Modify:

Delete selected content

Add:

Do not use the Tab key to indent; the YAML file requires spaces.

The default port for Logstash is 5044
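A hedged filebeat.yml for option 1, assuming Filebeat ships nginx's access log (the log path is an assumption; indent with spaces, never tabs):

```yaml
# filebeat.yml - illustrative
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log   # assumed nginx log location

output.logstash:
  hosts: ["192.168.10.3:5044"]    # Logstash on machine 3, default port 5044
```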

Machine 1

Start ES

If startup fails, grant the permissions to the es user again and restart.

Machine 3

Edit configuration file

Add:

input: Logstash's input section

beats: the data source is Filebeat

port: receive data on port 5044 of this machine

output: the data output section

elasticsearch: the output destination is Elasticsearch

hosts: tells Logstash where the ES host is

index: the index to create
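Putting the fields described above together, a minimal Logstash pipeline for option 1 might look like this (the index name is an assumption):

```conf
# Logstash pipeline config - illustrative
input {
  beats {
    port => 5044                              # receive from Filebeat on port 5044
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.10.1:9200"]     # ES on machine 1
    index => "nginx-access-%{+YYYY.MM.dd}"    # assumed daily index name
  }
}
```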

Machine 4

Edit nginx configuration file

Modify:

Start nginx

Starting the Filebeat service will block the terminal.

Machine 3

Starting logstash will block the terminal

Machine 2

Starting the kibana service will block the terminal

On machine 4, access the local IP (nginx) and refresh several times to generate access data.

Machine 3

Access machine 2's IP on port 5601 and refresh several times

This opens the Kibana UI.

  1. Create an index pattern and match it to the index (set the time field)

  2. Create visualization charts

  3. View the log analysis data

Option 2

Introduce a message queue (Kafka)

Machine 4

Stop the Filebeat service

Modify:

enabled: whether this configuration section is enabled
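For option 2 the Logstash output is swapped for a Kafka output; a hedged sketch (the topic name is an assumption):

```yaml
# filebeat.yml - option 2, illustrative
filebeat.inputs:
- type: log
  enabled: true                   # the "enabled" switch described above
  paths:
    - /var/log/nginx/access.log   # assumed nginx log location

output.kafka:
  hosts: ["192.168.10.2:9092"]    # Kafka broker on machine 2
  topic: "nginx-log"              # assumed topic name
```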

Machine 3

Stop the Logstash service

Modify:

decorate_events: attach Kafka metadata such as topic and offset to each event

auto_offset_reset: where to begin when there is no committed offset; latest means start from the newest messages
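The matching Logstash change replaces the beats input with the kafka input plugin; the values mirror the settings explained above (the bootstrap server, topic, and index name are assumptions):

```conf
# Logstash pipeline config - option 2, illustrative
input {
  kafka {
    bootstrap_servers => "192.168.10.2:9092"  # Kafka broker on machine 2
    topics => ["nginx-log"]                   # assumed topic name
    codec => "json"                           # Filebeat writes JSON to Kafka
    decorate_events => true                   # attach topic/offset metadata to events
    auto_offset_reset => "latest"             # start from the newest messages
  }
}
output {
  elasticsearch {
    hosts => ["http://192.168.10.1:9200"]
    index => "nginx-kafka-%{+YYYY.MM.dd}"     # assumed index name
  }
}
```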

Start the service on machine 3

Start the service; if it reports an error, open logstash.yml in the config directory and find path.data. It defaults to the data directory under the installation directory. Enter that data directory, delete the .lock file, and restart the service.
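The fix above boils down to deleting the stale lock file under path.data. A sketch, assuming Logstash lives in /usr/local/logstash (the mkdir/touch lines only simulate the stale state so the snippet is self-contained):

```shell
LS_DATA=/usr/local/logstash/data   # path.data from config/logstash.yml (assumed install path)
mkdir -p "$LS_DATA"                # simulate the data directory for this demo
touch "$LS_DATA/.lock"             # simulate the lock left by an unclean shutdown
rm -f "$LS_DATA/.lock"             # delete it so Logstash can start again
```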

Start the service on machine 4

On machine 4, access the local IP (nginx) and refresh several times to generate access data.

From machine 3, access machine 2's IP on port 5601.

The remaining operations are the same as in option 1.


Origin blog.csdn.net/weixin_53053517/article/details/130340987