ELK + Filebeat + Redis Logging System Deployment

Log System Architecture

Filebeat on the nginx hosts ships logs into Redis; Logstash pulls them from Redis, filters them, and writes them to Elasticsearch; Kibana is used to query and visualize the data.

Software versions:

filebeat-7.5.1
elasticsearch-7.5.1
logstash-7.5.1
kibana-7.5.1
redis-5.0.8

Hosts:

System: CentOS 7.7
Host IPs: 192.168.1.105  runs elasticsearch, kibana, logstash
          192.168.1.103  runs redis
          192.168.1.111  runs filebeat, nginx
          192.168.1.79   runs filebeat, nginx
JDK: jdk-8u231-linux-x64

Installation:
1. Install Elasticsearch (single node)

Adjust the system resource limits (maximum open files and processes). Edit /etc/security/limits.conf and add:

*         hard    nofile          65536
*         soft    nofile          65536

*         soft    nproc           2048
*         hard    nproc           4096

Modify /etc/sysctl.conf and add:

vim /etc/sysctl.conf

vm.max_map_count=262144
Run sysctl -p to apply the change.
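
To verify the kernel setting and the new limits (the limits.conf values take effect in a new login session), for example:

sysctl vm.max_map_count    --- should print vm.max_map_count = 262144
ulimit -Hn                 --- hard limit on open files, should be 65536 after re-login
ulimit -Hu                 --- hard limit on user processes, should be 4096 after re-login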

Add non-root users

groupadd elsearch   --- add the elsearch group
useradd elsearch -g elsearch  --- add the elsearch user and put it in the elsearch group
groupadd kibana
useradd kibana -g kibana

Modify the Elasticsearch configuration file

vim /software/elasticsearch/config/elasticsearch.yml --- modify the following parameters

cluster.name: es-cluster   --- cluster name
node.name: node-1   --- node name; in a cluster, name the Elasticsearch master node master and standby nodes slave
path.data: /software/elasticsearch/data    --- data directory
path.logs: /software/elasticsearch/logs    --- log directory
network.host: 192.168.1.105     --- the local IP, or 0.0.0.0
http.port: 9200       --- default port 9200; just uncomment it
cluster.initial_master_nodes: ["192.168.1.105"]   --- IP(s) of the cluster's master-eligible hosts

Add Java environment variables

vim /software/elasticsearch/bin/elasticsearch-env  --- add the Java environment variables at the top of the file

JAVA_HOME=/software/jdk1.8.0_231
JRE_HOME=/software/jdk1.8.0_231/jre
PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib
export JAVA_HOME JRE_HOME PATH CLASSPATH

Change the ownership (Elasticsearch and Kibana cannot be started as root; they can only run as ordinary users)

chown -R  elsearch.elsearch elasticsearch/

Start the service

su - elsearch
/software/elasticsearch/bin/elasticsearch -d     --- the -d option runs the program in the background

Access:
http://IPaddr:9200


Check the cluster status

Description of the cluster health parameters:

In the URL, _cat means "show information" and health means the response is cluster health information; ?v adds a header row to the output, just as ?pretty makes JSON responses easier to read. It is purely for readability and can be omitted; the header is often unnecessary when the response is parsed by code or by a shell script, where extra information usually has to be stripped out anyway.
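
For example, using the standard _cat API of the node configured above:

curl "http://192.168.1.105:9200/_cat/health?v"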

The request returns information that includes:

Cluster status (status): red means the cluster is unavailable and something is wrong; yellow means the cluster is usable but not fully reliable (a single-node cluster is normally in this state); green means everything is normal.

Number of nodes (node.total): the number of nodes in the cluster, here 2, meaning this cluster has two nodes.

Number of data nodes (node.data): the number of nodes that store data, here 2. Data nodes are covered in the Elasticsearch concepts introduction.

Number of shards (shards): the total number of pieces the data is split into for storage.

Number of primary shards (pri): primary shards, here 6. The total shard count is twice this value because each primary has one replica; with two replicas it would be three times. This ties in with the per-index shard settings later; here it is just an overall total.

Percentage of active shards (active_shards_percent): roughly, the proportion of shards that have been loaded. The cluster is only considered fully started once all shards are active; if you keep refreshing this page during startup, you will see the percentage keep increasing.

1.1 Install the elasticsearch-head plugin

elasticsearch-head is a web front end for managing Elasticsearch. Since ES 5.x it is no longer installed as a plugin but runs as a standalone service, so nodejs and npm are required:

yum -y install nodejs npm

If git is not installed, install it first:

yum -y install git

Then install elasticsearch-head:

git clone https://github.com/mobz/elasticsearch-head.git
cd elasticsearch-head/
npm install

When npm install runs, it may fail with an error like the following:

npm ERR! phantomjs-prebuilt install: `node install.js`
npm ERR! Exit status 1
npm ERR! 
npm ERR! Failed at the phantomjs-prebuilt install script 'node install.js'.
npm ERR! Make sure you have the latest version of node.js and npm installed.
npm ERR! If you do, this is most likely a problem with the phantomjs-prebuilt package,
npm ERR! not with npm itself.
npm ERR! Tell the author that this fails on your system:
npm ERR!     node install.js
npm ERR! You can get information on how to open an issue for this project with:
npm ERR!     npm bugs phantomjs-prebuilt
npm ERR! Or if that isn't available, you can get their info via:
npm ERR!     npm owner ls phantomjs-prebuilt
npm ERR! There is likely additional logging output above.

npm ERR! Please include the following file with any support request:
npm ERR!     /software/elasticsearch-6.2.2/elasticsearch-head/npm-debug.log

Install phantomjs-prebuilt with its install scripts skipped:

npm install phantomjs-prebuilt --ignore-scripts

Then execute:

npm install

npm WARN deprecated coffee-script: CoffeeScript on NPM has moved to "coffeescript" (no hyphen)
npm WARN deprecated http2: Use the built-in module in node 9.0.0 or newer, instead
npm WARN deprecated [email protected]: this package is now deprecated
npm WARN deprecated json3: Please use the native JSON object instead of JSON 3
npm WARN deprecated json3: Please use the native JSON object instead of JSON 3
npm WARN prefer global coffee-script should be installed with -g

> phantomjs-prebuilt install /software/elasticsearch-head/node_modules/phantomjs-prebuilt
> node install.js

PhantomJS not found on PATH
Downloading https://github.com/Medium/phantomjs/releases/download/v2.1.1/phantomjs-2.1.1-linux-x86_64.tar.bz2
Saving to /tmp/phantomjs/phantomjs-2.1.1-linux-x86_64.tar.bz2
Receiving...
[=======---------------------------------] 19%

The plugin installation is relatively slow...

Configure the plugin. First stop Elasticsearch:

ps -ef | grep java | grep elsearch
kill -9 PID

Modify:

vim /software/elasticsearch/config/elasticsearch.yml
Add the following parameters:
http.cors.enabled: true
http.cors.allow-origin: "*"

Start elasticsearch

/software/elasticsearch/bin/elasticsearch -d

Start the elasticsearch-head plugin (in the background)

nohup npm run start &
[1] 11047
nohup: ignoring input and appending output to "/home/elsearch/nohup.out"
netstat -anlp | grep 9100
tcp        0      0 0.0.0.0:9100            0.0.0.0:*               LISTEN      11058/grunt

Use a browser to open the plugin on port 9100 and interact with ES.

2. Install Kibana

tar xf kibana-7.5.1-linux-x86_64.tar.gz
mv kibana-7.5.1-linux-x86_64 kibana
cd kibana

vim /software/kibana/config/kibana.yml

server.port: 5601
server.host: "192.168.1.105"
elasticsearch.hosts: ["http://192.168.1.105:9200"]   --- the locally installed Elasticsearch. Only one address can be listed here; multiple nodes are not supported yet. To connect Kibana to an Elasticsearch cluster, you need to set up a coordinating-only Elasticsearch node that does not take part in master election and does not store data.

It only handles incoming HTTP requests, forwards operations to the other nodes in the cluster, then gathers and returns the results. By nature, such a coordinating node also acts as a load balancer.
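
As a sketch only (such a node is not part of this single-node setup), a coordinating-only node in Elasticsearch 7.x is created by disabling the other roles in its elasticsearch.yml:

node.master: false   --- does not take part in master election
node.data: false     --- does not store data
node.ingest: false   --- does not run ingest pipelines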

Create a startup/stop script for Kibana:

#!/bin/sh
RETVAL=
PID=`ps -ef | grep "kibana" | grep -v grep | awk -F ' ' '{print $2}'`
echo $PID
KIBANA_DIR=/software/kibana
KIBANA=$KIBANA_DIR/bin/kibana
PROG=$(basename $KIBANA)
CONF=$KIBANA_DIR/config/kibana.yml
if [ ! -x $KIBANA ]; then
		echo -n $"$KIBANA not exist.";echo
		exit 0
fi

start(){
		echo -n $"Starting $PROG: "
		nohup $KIBANA >/dev/null 2>&1 &
		RETVAL=$?
		if [ $RETVAL -eq 0 ]; then
				echo "start OK"
		else
				echo "start failure"
		fi
		return $RETVAL
}

stop(){
		echo -n $"Stopping $PROG: "
		kill -TERM $PID >/dev/null 2>&1
		RETVAL=$?
		echo "stop OK"
		return $RETVAL
}

restart(){
		stop
		sleep 2
		start
}

case "$1" in
		start)
		start
		;;
		stop)
		stop
		;;
		restart)
		restart
		;;
		status)
		ps -ef|grep $PID|grep kibana
		RETVAL=$?
		;;
		*)
		echo $"Usage: $0 {start|stop|status|restart}"
		RETVAL=1
esac
exit $RETVAL
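
Assuming the script above is saved as /software/kibana/kibana.sh (the path and filename here are only an example), make it executable and use it to manage Kibana:

chmod +x /software/kibana/kibana.sh
/software/kibana/kibana.sh start     --- also supports stop, restart and status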

Access: http://192.168.1.105:5601

3. Install Redis

cd /software/ &&  tar xf redis-5.0.8.tar.gz && mkdir redis 
cd redis-5.0.8

make && cd src/
make install PREFIX=/software/redis/     --- install Redis into /software/redis/

cd ../ && mkdir /software/redis/conf && cp redis.conf /software/redis/conf/
vim /software/redis/conf/redis.conf

Modify the following parameters:

bind 192.168.1.103      --- change 127.0.0.1 here to 192.168.1.103; otherwise only the local loopback address can connect and remote connections are impossible
protected-mode no    --- change yes to no, to avoid errors caused by protected mode
port 6379    --- uncomment this line
daemonize yes    --- change no to yes so that Redis runs in the background
pidfile /software/redis/redis.pid   --- location of the redis.pid file
logfile "/software/redis/logs/redis.log"    --- location of the redis.log file

Connection Test:
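
For example, start Redis with the configuration above and check the connection with redis-cli (paths follow the PREFIX used during installation; the log directory must exist first):

mkdir -p /software/redis/logs
/software/redis/bin/redis-server /software/redis/conf/redis.conf
/software/redis/bin/redis-cli -h 192.168.1.103 -p 6379 ping    --- should reply PONG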

4. Install logstash-server

vim /software/logstash/config/logstash.yml

Modify the following parameters:
node.name: logstash-server    --- node name, usually the hostname
path.data: /software/logstash/data   --- persistent directory used by logstash and its plugins
config.reload.automatic: true    --- enable automatic reloading of the pipeline configuration
config.reload.interval: 10s      --- how often to check the configuration for changes
http.host: "192.168.1.105"        --- host to listen on, usually a domain name or IP
http.port: 9600-9700             --- uncomment the logstash port range

vim /software/logstash/config/logstash_server.conf

input {
        redis {
                port => "6379"
                host => "192.168.1.103"
                data_type => "list"
                batch_count => "1"
                key => "filebeat"
        }
}

filter {
        grok {
                match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
}

output {
        elasticsearch {
                hosts => ["192.168.1.105:9200"]
                index => "nginx-accesslog-%{+YYYY.MM.dd}"
        }
}
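
Optionally, the pipeline file can be checked for syntax errors before it is used (a standard Logstash option):

/software/logstash/bin/logstash -f /software/logstash/config/logstash_server.conf --config.test_and_exit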

Edit the startup script

#!/bin/sh
RETVAL=
PID=`ps -ef | grep java | grep "logstash_server\.conf" | awk -F ' ' '{print $2}'`
LOGSTASH_DIR=/software/logstash
LOGSTASH=$LOGSTASH_DIR/bin/logstash
PROG=$(basename $LOGSTASH)
CONF=$LOGSTASH_DIR/config/logstash_server.conf
LOG_DIR=$LOGSTASH_DIR/logs

if [ ! -x $LOGSTASH ]; then
	echo -n $"$LOGSTASH not exist.";echo
	exit 0
fi
start(){
	echo -n $"Starting $PROG: "
	nohup $LOGSTASH -f $CONF --path.logs $LOG_DIR >/dev/null 2>&1 &
	RETVAL=$?
	if [ $RETVAL -eq 0 ]; then
	echo "start OK"
	else
		echo "start failure"
	fi
	return $RETVAL
}
stop(){
	echo -n $"Stopping $PROG: "
	kill -TERM $PID >/dev/null 2>&1
	RETVAL=$?
	echo "stop OK"
	return $RETVAL
}
restart(){
	stop
	sleep 2
	start
}
case "$1" in
	start)
		start
		;;
	stop)
		stop
		;;
	restart)
		restart
		;;
	status)
		ps -ef|grep $PID|grep logstash_server\.conf
		RETVAL=$?
		;;
	*)
		echo $"Usage: $0 {start|stop|status|restart}"
		RETVAL=1
esac
exit $RETVAL

Modify the logstash-server JVM options
vim /software/logstash/config/jvm.options

-Xms1g    change to    -Xms500m      --- adjust to your environment
-Xmx1g    change to    -Xmx500m      --- adjust to your environment

Start Logstash

Check that the program is up and running.
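
Assuming the startup script above is saved as /software/logstash/logstash.sh (the filename is only an example):

chmod +x /software/logstash/logstash.sh
/software/logstash/logstash.sh start
ps -ef | grep logstash_server.conf | grep -v grep    --- confirm the process is running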

5. Install Filebeat
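
A minimal sketch of the Filebeat side, assuming Filebeat 7.5.1 is unpacked to /software/filebeat on the two nginx hosts and that nginx writes its access log to the default /var/log/nginx/access.log (both paths are assumptions); it ships log lines to the Redis list key filebeat that the Logstash input above reads:

vim /software/filebeat/filebeat.yml

filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log      # assumed nginx access log path

output.redis:
  hosts: ["192.168.1.103:6379"]
  key: "filebeat"                    # must match the key in logstash_server.conf
  db: 0
  timeout: 5

Only one output may be enabled at a time, so the default output.elasticsearch section must be commented out. Then start Filebeat in the background:

nohup /software/filebeat/filebeat -c /software/filebeat/filebeat.yml >/dev/null 2>&1 &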
