Using the ELK stack to sync data from MySQL to Elasticsearch

1. Install elasticsearch

First install JDK 8 and configure the JDK 8 environment variables.
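A minimal sketch of what that might look like, assuming the JDK was unpacked to /usr/local/jdk1.8.0_211 (the path is an assumption, adjust it to your install); append the export lines to /etc/profile:

export JAVA_HOME=/usr/local/jdk1.8.0_211    # assumed install path, change to wherever you unpacked JDK 8
export PATH=$JAVA_HOME/bin:$PATH

source /etc/profile    # reload the profile
java -version          # confirm the JDK is picked up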

Download URL:

https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-7.5.1-linux-x86_64.tar.gz

Extract the archive: tar -zxvf elasticsearch-7.5.1-linux-x86_64.tar.gz

Edit the config file config/elasticsearch.yml:

# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: myes
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node-1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
path.data: /path/to/data
#
# Path to log files:
#
path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
bootstrap.memory_lock: false
bootstrap.system_call_filter: false
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.0.117
xpack.ml.enabled: false
#
# Set a custom port for HTTP:
#
http.port: 9200
transport.tcp.port: 9300
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
cluster.initial_master_nodes: ["node-1"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true

The settings that matter here:

cluster.name: myes                           # the name of the cluster
node.name: node-1                            # the name of this node
path.data: /path/to/data                     # where the data is stored
path.logs: /path/to/logs                     # where the logs are written

bootstrap.memory_lock: false
bootstrap.system_call_filter: false

network.host: 192.168.0.117                  # the server's IP address
xpack.ml.enabled: false

http.port: 9200                              # the HTTP port, i.e. the one used for REST/browser access
transport.tcp.port: 9300                     # the transport port, used by client code when connecting

cluster.initial_master_nodes: ["node-1"]     # specifies the master-eligible node

vi /etc/sysctl.conf
Add the line:

vm.max_map_count=655360

Save, then apply it:
sysctl -p
vi /etc/security/limits.conf
Append the following lines at the end of the file:
* hard nofile 131072
* soft nofile 65536
* soft nproc 4096
* hard nproc 4096
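
After re-logging in, you can sanity-check that the kernel and limits settings took effect, for example:

sysctl vm.max_map_count    # should print vm.max_map_count = 655360
ulimit -Hn                 # hard open-file limit, should be 131072
ulimit -Sn                 # soft open-file limit, should be 65536
ulimit -u                  # max user processes, should be 4096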

Errors you may run into when starting:

JVM is using the client VM [Java HotSpot(TM) Client VM] but should be using a server VM for the best performance

If you hit this error, switch the JVM to the Server VM: open the jvm.cfg file under the JRE install directory (lib/i386/jvm.cfg). The JVM defaults to the client VM; swap the first and second lines of the file, since whichever VM is listed first is the one that gets used, so the Server VM should end up on top.
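For reference only, the swap looks roughly like this; the exact contents of jvm.cfg differ between JDK builds, so treat it as an illustration rather than the real file:

Before (the client VM is listed first, so it is the default):
-client IF_SERVER_CLASS_MACHINE
-server KNOWN

After (the server VM is listed first and becomes the default):
-server KNOWN
-client IF_SERVER_CLASS_MACHINE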


Go into the es bin directory and run ./elasticsearch -d to start it in the background.
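
Once it is up, you can verify it from another shell, for example:

curl http://192.168.0.117:9200                          # basic node info, should return JSON containing the cluster name myes
curl http://192.168.0.117:9200/_cluster/health?pretty   # cluster health, status should be green or yellow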

Download kibana:
https://artifacts.elastic.co/downloads/kibana/kibana-7.5.1-linux-x86_64.tar.gz
Go into the config directory under kibana and edit the kibana.yml file:
server.port: 5601
server.host: "192.168.0.117"                          # the IP of the current machine
elasticsearch.hosts: ["http://192.168.0.117:9200"]    # the address of es
Kibana does not support being started as the root user, so create a new user to run it:
groupadd es                          # create the group
useradd liuchao -g es -p 123456      # create the user and assign it to the group
chown -R liuchao:es elasticsearch    # give the new user ownership of the whole es directory
chown -R liuchao:es kibana           # give the new user ownership of the whole kibana directory
su liuchao                           # switch to the new user

nohup ../bin/kibana &                # start kibana in the background
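
Kibana takes a moment to come up; once it does, open http://192.168.0.117:5601 in a browser, or check it from the shell, for example:

curl http://192.168.0.117:5601/api/status    # returns Kibana's status as JSON once it is ready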

2. Install and configure logstash
Download URL:
https://artifacts.elastic.co/downloads/logstash/logstash-7.5.1.tar.gz
Just extract the archive and go into the bin directory, then create the sync config:

vim /usr/local/logstash-7.5.1/bin/mysqltoes.conf
 
 

input {
    stdin { }
    jdbc {
        # the database to sync from
        jdbc_connection_string => "jdbc:mysql://192.168.0.23:3306/demo"
        jdbc_user => "root"
        jdbc_password => "123456"
        # local JDBC driver jar
        jdbc_driver_library => "/usr/local/es/mysql-connector-java-3.1.12-bin.jar"
        jdbc_driver_class => "com.mysql.jdbc.Driver"
        jdbc_paging_enabled => "true"
        jdbc_page_size => "50000"
        # the SQL query that fetches the records
        statement => "SELECT * FROM user"
        # cron-style schedule, fields from left to right: minute, hour, day, month, day of week; all * means it runs every minute
        schedule => "* * * * *"
    }
}

output {
    stdout {
        codec => json_lines
    }
    elasticsearch {
        # ES IP address and port
        hosts => ["192.168.0.117:9200"]
        # ES index name (pick your own)
        index => "logdemo"
        # document type
        document_type => "user"
        # document id; %{id} takes the id column from the query result and maps it to the _id field in es
        document_id => "%{id}"
    }
}
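
The statement above assumes a user table in the demo database. The original post does not show its schema, so the columns below are purely a hypothetical example, just to have something to sync; only the database name (demo) and table name (user) come from mysqltoes.conf:

mysql -h 192.168.0.23 -u root -p123456 demo -e "
CREATE TABLE IF NOT EXISTS user (
  id   INT PRIMARY KEY AUTO_INCREMENT,  # id feeds document_id => %{id} in the output section
  name VARCHAR(64),
  age  INT
);
INSERT INTO user (name, age) VALUES ('zhangsan', 20);"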
Start it (from the bin directory): ./logstash -f mysqltoes.conf
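
After a minute or two (the schedule fires every minute) you can check that the rows have landed in the logdemo index, for example:

curl "http://192.168.0.117:9200/_cat/indices?v"            # logdemo should appear in the index list
curl "http://192.168.0.117:9200/logdemo/_search?pretty"    # should return the user rows as documents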





Reposted from www.cnblogs.com/dkws/p/12162916.html