Building an ELK Log Analysis Platform on Ubuntu 16.04

The ELK Stack architecture we are going to build:

(Figure: ELK Stack architecture diagram)

 

Recommended ELK server specs:

  • Memory: at least 4 GB
  • CPU: 2 cores
  • Ubuntu 16.04

# 1 Install the Java JDK

Elasticsearch and Logstash are written in Java, so we need to install Java first. Elasticsearch recommends Oracle Java 8 (OpenJDK should also work):
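The original post does not list the exact commands; a minimal sketch using the OpenJDK 8 packages from the standard Ubuntu 16.04 repositories (Oracle Java 8 via a PPA is an alternative) would be:

$ sudo apt-get update
$ sudo apt-get install openjdk-8-jdk
$ java -version   # verify the installation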

# 2 Install Elasticsearch

Import the Elasticsearch GPG public key:

 


$ wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

Add the Elasticsearch repository source:

 


$ echo "deb http://packages.elastic.co/elasticsearch/2.x/debian stable main" | sudo tee -a /etc/apt/sources.list.d/elasticsearch-2.x.list

Install Elasticsearch:

 


$ sudo apt-get update

$ sudo apt-get install elasticsearch

After the installation completes, configure Elasticsearch:

 


$ sudo vim /etc/elasticsearch/elasticsearch.yml

For security, we need to restrict external network access to Elasticsearch (port 9200). Uncomment the following line and set its value to localhost:

 


network.host: localhost

Start the Elasticsearch service:

 


$ sudo systemctl start elasticsearch

 

Enable Elasticsearch to start on boot:

 

 


$ sudo systemctl enable elasticsearch
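To confirm Elasticsearch is running, you can query it locally (a quick sanity check; 9200 is the default HTTP port):

$ curl -XGET 'http://localhost:9200'

A JSON response containing the node name and version indicates the service is up.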

 

# 3 Install Kibana

Add the Kibana repository source:

 


$ echo "deb http://packages.elastic.co/kibana/4.5/debian stable main" | sudo tee -a /etc/apt/sources.list

Install Kibana:

 


$ sudo apt-get update

$ sudo apt-get install kibana

Configure Kibana:

 


$ sudo vim /opt/kibana/config/kibana.yml

Uncomment the server.host line and set its value to localhost:

 


server.host: "localhost"

With this configuration Kibana can only be accessed locally; that is intentional, because we will put it behind an Nginx reverse proxy.

Start the Kibana service:

 


$ sudo systemctl enable kibana

$ sudo systemctl start kibana
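Optionally, confirm that the service came up (a quick check, using the same systemctl management as above):

$ sudo systemctl status kibana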

 

# 4 Installing Nginx

Install Nginx:

 


$ sudo apt-get install nginx

Use openssl to create an admin user, which will be used to log in to the Kibana web interface:

 

$ sudo -v

$ echo "admin:`openssl passwd -apr1`" | sudo tee -a /etc/nginx/htpasswd.users

Follow the prompts to set the admin user's password.

Edit the Nginx configuration file:

 


$ sudo vim /etc/nginx/sites-available/default

Replace the contents of the file with:

 


server {

    listen 80;

 

    server_name your_domain_or_IP;

 

    auth_basic "Restricted Access";

    auth_basic_user_file /etc/nginx/htpasswd.users;

 

    location / {

        proxy_pass http://localhost:5601;

        proxy_http_version 1.1;

        proxy_set_header Upgrade $http_upgrade;

        proxy_set_header Connection 'upgrade';

        proxy_set_header Host $host;

        proxy_cache_bypass $http_upgrade;        

    }

}

Replace your_domain_or_IP with your server's IP address or domain name.

Check the Nginx configuration syntax:

 


$ sudo nginx -t

Restart Nginx:

 


$ sudo systemctl restart nginx

If the firewall is enabled, allow Nginx through:

 


$ sudo ufw allow 'Nginx Full'
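If you are not sure whether the firewall is active in the first place, check it with:

$ sudo ufw status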

Test: open http://your_domain_or_IP in a browser and enter the username and password you set earlier:

(Screenshot: HTTP basic authentication prompt)

If everything is configured correctly, you should see a page like this:

(Screenshot: Kibana welcome page)

# 5 Install Logstash

Add the Logstash repository source:

 


$ echo "deb http://packages.elastic.co/logstash/2.3/debian stable main" | sudo tee -a /etc/apt/sources.list

Install Logstash:

 


$ sudo apt-get update

$ sudo apt-get install logstash

Since client servers will use Filebeat to ship logs to the ELK server, we can use SSL encryption to secure the log transport. First create directories to hold the SSL certificate and key:

 


$ sudo mkdir -p /etc/pki/tls/certs

$ sudo mkdir /etc/pki/tls/private

There are two options for creating the SSL certificate:

  • Use the server's IP address directly
  • Use a domain name, if you have DNS set up to resolve it to the server's IP

Option 1) Use an IP address:

Before generating the SSL certificate, configure openssl:

 


$ sudo vim /etc/ssl/openssl.cnf

In the [ v3_ca ] section, add the following line:

 


subjectAltName = IP: ELK_server_IP

Replace ELK_server_IP with the ELK server's IP address.

Generate the SSL certificate:

 


$ cd /etc/pki/tls

$ sudo openssl req -config /etc/ssl/openssl.cnf -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

The logstash-forwarder.crt file must be copied to every server that will send logs to the ELK server.
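For example, a sketch using scp (the user and client hostname here are placeholders):

$ scp /etc/pki/tls/certs/logstash-forwarder.crt user@client_server_IP:/tmp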

Option 2) Use a domain name:

Make sure the domain's DNS A record points to the ELK server's IP.

Generate the SSL certificate:

 


$ cd /etc/pki/tls

$ sudo openssl req -subj '/CN=ELK_server_domain/' -x509 -days 3650 -batch -nodes -newkey rsa:2048 -keyout private/logstash-forwarder.key -out certs/logstash-forwarder.crt

Replace ELK_server_domain with your domain name.

Configure Logstash:

Create the input configuration file:

 


$ sudo vim /etc/logstash/conf.d/02-beats-input.conf

Add the following content:

 


input {

  beats {

    port => 5044

    ssl => true

    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"

    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"

  }

}

This input listens on port 5044 for incoming logs. Configure the firewall to open port 5044:

 


$ sudo ufw allow 5044

Create the filter configuration file:

 


$ sudo vim /etc/logstash/conf.d/10-syslog-filter.conf

Add the following content:

 


filter {

  if [type] == "syslog" {

    grok {

      match => { "message" => "%{SYSLOGTIMESTAMP:syslog_timestamp} %{SYSLOGHOST:syslog_hostname} %{DATA:syslog_program}(?:\[%{POSINT:syslog_pid}\])?: %{GREEDYDATA:syslog_message}" }

      add_field => [ "received_at", "%{@timestamp}" ]

      add_field => [ "received_from", "%{host}" ]

    }

    syslog_pri { }

    date {

      match => [ "syslog_timestamp", "MMM  d HH:mm:ss", "MMM dd HH:mm:ss" ]

    }

  }

}

This filter parses the format and fields of incoming syslog-type logs.
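As an illustration (a hypothetical log line), an entry such as:

Feb  3 12:00:01 webserver sshd[1234]: Accepted password for ubuntu from 10.0.0.5

would be matched by the grok pattern above into syslog_timestamp "Feb  3 12:00:01", syslog_hostname "webserver", syslog_program "sshd", syslog_pid "1234", and syslog_message holding the rest of the line.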

Create the output configuration file:

 


$ sudo vim /etc/logstash/conf.d/30-elasticsearch-output.conf

Add the following content:

 


output {

  elasticsearch {

    hosts => ["localhost:9200"]

    sniffing => true

    manage_template => false

    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"

    document_type => "%{[@metadata][type]}"

  }

}

 

Elasticsearch listens on port 9200.

Logstash documentation: https://www.elastic.co/guide/en/logstash/current/getting-started-with-logstash.html
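The original post does not show restarting Logstash explicitly; a hedged sketch for checking the configuration and (re)starting the service on the Logstash 2.x package would be:

$ sudo /opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/
$ sudo systemctl restart logstash
$ sudo systemctl enable logstash

If systemctl does not manage the generated unit cleanly on your setup, sudo service logstash restart is the SysV equivalent.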

The ELK server is now ready to receive logs.


# 6 Install and Configure Filebeat on the Client Servers

Transfer the logstash-forwarder.crt certificate generated on the ELK server to the client server (scp works well for this); then copy the certificate into the certs directory:

 


$ sudo mkdir -p /etc/pki/tls/certs

$ sudo cp logstash-forwarder.crt /etc/pki/tls/certs/

 

Install Filebeat; first add the repository source and GPG key:

 

 


$ echo "deb https://packages.elastic.co/beats/apt stable main" |  sudo tee -a /etc/apt/sources.list.d/beats.list

$ wget -qO - https://packages.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -

 

 


$ sudo apt-get update

$ sudo apt-get install filebeat

Configure Filebeat:

 


$ sudo vim /etc/filebeat/filebeat.yml

In the prospectors section, configure the log files to ship (the original post shows this in a screenshot):

(Screenshot: prospectors section of filebeat.yml)

Find document_type and set its value to syslog:

(Screenshot: document_type setting)

In the output section, delete or comment out the entire elasticsearch output block.

Uncomment the logstash output block, change its hosts value to ELK_server_IP:5044, and add the line shown in the screenshot:

(Screenshot: logstash output section)

Find the tls section and point it at the certificate copied earlier:

(Screenshot: tls section)
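Since the screenshots are not reproduced here, a minimal sketch of the relevant parts of /etc/filebeat/filebeat.yml for the Filebeat 1.x package in this repository might look like the following (key names vary between Filebeat versions; ELK_server_IP is a placeholder):

filebeat:
  prospectors:
    -
      paths:
        - /var/log/auth.log
        - /var/log/syslog
      document_type: syslog
output:
  logstash:
    hosts: ["ELK_server_IP:5044"]
    tls:
      certificate_authorities: ["/etc/pki/tls/certs/logstash-forwarder.crt"]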

Start Filebeat:

 


$ sudo systemctl restart filebeat

$ sudo systemctl enable filebeat

Filebeat will now ship syslog and auth.log entries to the ELK server.
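To verify that logs are actually arriving, query Elasticsearch on the ELK server for the Filebeat index (the filebeat-* name follows the index pattern configured in the Logstash output above):

$ curl -XGET 'http://localhost:9200/filebeat-*/_search?pretty'

If documents are returned, the pipeline works and you can create the filebeat-* index pattern in Kibana.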

 

Reposted from: http://blog.topspeedsnail.com/archives/4825


Origin: https://blog.csdn.net/jsd2honey/article/details/93174019