Confluent local installation and use

Confluent local single node installation

0. Preface

[Click here to see how to create connectors and use ksqlDB with Confluent]

My environment

Name              Version
Operating system  CentOS 7
Confluent         5.5.1 (Commercial Edition)
JDK               1.8

1. Install JDK [recommended]

Upload the JDK archive to the /usr/local/ directory on the Linux server and extract it

tar -zxvf jdk1.8.0_144.tar.gz

Configure the environment variables by editing the /etc/profile file

vi /etc/profile

Add the following

#java
export JAVA_HOME=/usr/local/jdk1.8.0_144/
export PATH=$JAVA_HOME/bin:$PATH

Make environment variables take effect

source /etc/profile

Run java -version to check whether the installation succeeded

[root@hadoop89]# java -version
java version "1.8.0_144"
Java(TM) SE Runtime Environment (build 1.8.0_144-b01)
Java HotSpot(TM) 64-Bit Server VM (build 25.144-b01, mixed mode)

2. Download confluent

Click here to view the official website

Click here to download the commercial version of V5.5.1 from Baidu Netdisk, extraction code: 6x48

Click here to download each version

The Confluent Community edition is open source and free. The Commercial edition is free with a single Kafka broker; with two or more brokers there is a 30-day trial period, after which you must purchase a license key to keep using it. Here I install the Commercial edition with a single broker; see the figure below for details.

3. Install confluent

Upload confluent-5.5.1-2.12.tar.gz to the /usr/local/ directory and extract it

tar -zxvf confluent-5.5.1-2.12.tar.gz

Upload the required JDBC driver packages (MySQL, SQL Server, Oracle, etc.) to the share/java/kafka-connect-jdbc directory under the extracted Confluent directory (see step 4 below)

Configure the environment variables by editing the /etc/profile file

vi /etc/profile

Add the following

#confluent
export CONFLUENT_HOME=/usr/local/confluent-5.5.1
export PATH=$CONFLUENT_HOME/bin:$PATH

Make environment variables take effect

source /etc/profile

Run confluent to check whether the installation succeeded.
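
For example, a quick sanity check (assuming /etc/profile has been re-sourced so that $CONFLUENT_HOME/bin is on the PATH):

# confirm the binary is found on the PATH
which confluent

# print the CLI help; it should list the "local" subcommands used in step 5
confluent --help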

4. Upload the driver package

Confluent can replicate data between different databases in real time; to do this you need to upload the JDBC driver package for each database involved

Click here to download the driver packages; if the link fails, leave a comment and I will send them to you

Upload them to the $CONFLUENT_HOME/share/java/kafka-connect-jdbc directory

cd $CONFLUENT_HOME/share/java/kafka-connect-jdbc
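
As a sketch, assuming the MySQL driver jar has already been downloaded to /tmp (the jar file name below is illustrative; use the version that matches your database):

# copy the driver into the kafka-connect-jdbc plugin directory and confirm it is there
cp /tmp/mysql-connector-java-5.1.49.jar $CONFLUENT_HOME/share/java/kafka-connect-jdbc/
ls $CONFLUENT_HOME/share/java/kafka-connect-jdbc/*.jar

Kafka Connect only loads jars from this directory at startup, so copy the drivers before running confluent local start, or restart Connect afterwards.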

5. Start and stop confluent

Start Confluent. The services may not all come up on the first attempt, so you may need to run the start command more than once.

confluent local start

Close confluent

confluent local stop

View the status of each component of confluent

confluent local status

After startup there may be a short delay before the Control Center web page can be opened in a browser:

http://<ip>:9021
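
If the page does not load right away, a simple reachability check from the server itself (replace <ip> with the host address, or use localhost) is:

# a 200 or 3xx code means Control Center is answering; "connection refused" usually means it is still starting
curl -s -o /dev/null -w "%{http_code}\n" http://<ip>:9021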

To check the logs, go to the temporary directory created at startup (the suffix, WGkeIh47 here, is randomly generated) and look under the directory of the component that is reporting errors.

cd /tmp/confluent.WGkeIh47

# Example: view the connector error log
cd connect
cat connect.stdout
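
To follow a log while reproducing a problem, plain tail works on the same files (the directory suffix is the one printed by confluent local start, WGkeIh47 in this example):

# follow the Kafka Connect log in real time
tail -f /tmp/confluent.WGkeIh47/connect/connect.stdout

# the other services (zookeeper, kafka, schema-registry, control-center, ...) each have their own subdirectory
ls /tmp/confluent.WGkeIh47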

6. Prevent the Confluent temporary files from being cleared

Confluent's startup state and part of its data live in temporary files, so the Confluent files under /tmp must be excluded from automatic cleanup

vi /usr/lib/tmpfiles.d/tmp.conf

Add the line x /tmp/confluent* as follows

#  This file is part of systemd.
#
#  systemd is free software; you can redistribute it and/or modify it
#  under the terms of the GNU Lesser General Public License as published by
#  the Free Software Foundation; either version 2.1 of the License, or
#  (at your option) any later version.

# See tmpfiles.d(5) for details

# Clear tmp directories separately, to make them easier to override
v /tmp 1777 root root 10d
v /var/tmp 1777 root root 30d

# Exclude namespace mountpoints created with PrivateTmp=yes
x /tmp/systemd-private-%b-*
X /tmp/systemd-private-%b-*/tmp
x /var/tmp/systemd-private-%b-*
X /var/tmp/systemd-private-%b-*/tmp
x /tmp/confluent*
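
An alternative that leaves the stock tmp.conf untouched is a separate drop-in file under /etc/tmpfiles.d/, which systemd reads together with the files in /usr/lib/tmpfiles.d/ (a minimal sketch; the file name confluent.conf is my own choice):

cat > /etc/tmpfiles.d/confluent.conf <<'EOF'
# exclude Confluent's local data in /tmp from periodic cleanup
x /tmp/confluent*
EOF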

Origin blog.csdn.net/qq_43853055/article/details/114639873