StarRocks Cluster Installation and Deployment Documentation

The following table shows the planned allocation of cluster components:

Hostname     Components
starrocks1   MySQL, FE (follower), BE1, datax-executor, datax
starrocks2   FE (leader), BE2, datax-executor, datax
starrocks3   FE (follower), BE3, datax-admin, datax-executor, datax

1. Server configuration

1.1 Set the hostname

hostnamectl set-hostname starrocks1

hostnamectl set-hostname starrocks2

hostnamectl set-hostname starrocks3

1.2 Create users and groups

groupadd starrocks

useradd -g starrocks starrocks

passwd starrocks

1.3 ssh-keygen

SSH keys must be generated on starrocks1, starrocks2, and starrocks3.
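A minimal sketch, run as the starrocks user on every node (the RSA key type and empty passphrase are assumptions; adjust as needed):

su - starrocks
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa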

1.4 Configure hosts
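For example, assuming the node IPs used elsewhere in this document, append the following mappings to /etc/hosts on every node:

cat >> /etc/hosts << EOF
192.168.10.21 starrocks1
192.168.10.22 starrocks2
192.168.10.23 starrocks3
EOF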

1.5 ssh-copy-id
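For example, run the following as the starrocks user on each node so that every node can reach the others without a password (hostnames are taken from the plan above):

ssh-copy-id starrocks1
ssh-copy-id starrocks2
ssh-copy-id starrocks3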

1.6 Disable SELinux
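One way to disable SELinux immediately and across reboots (a sketch; check the current SELINUX value in /etc/selinux/config first):

setenforce 0
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config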

1.7 Turn off transparent huge pages
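A typical way to turn off transparent huge pages at runtime (add the same commands to /etc/rc.local if the setting should survive a reboot):

echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag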

1.8 Setting swappiness
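For example, to set swappiness to 0 both immediately and persistently (0 is a common choice for StarRocks nodes; pick the value that suits your environment):

echo "vm.swappiness = 0" >> /etc/sysctl.conf
sysctl -p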

1.9 Setting file descriptors

echo "* soft nofile 65535" >> /etc/security/limits.conf
echo "* hard nofile 65535" >> /etc/security/limits.conf
ulimit -n 65535

1.10 Install NTP
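A minimal sketch on CentOS 7, assuming the default repositories (a chrony setup works just as well):

yum install -y ntp
systemctl enable ntpd
systemctl start ntpd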

1.11 Install JDK
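For example, installing OpenJDK 8 from the system repository (a tarball JDK is equally fine; make sure JAVA_HOME points to it later when configuring the FE):

yum install -y java-1.8.0-openjdk java-1.8.0-openjdk-devel
java -version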


2. Install MySQL

2.1 Query and uninstall the MariaDB that ships with the system

rpm -qa | grep mariadb

rpm -e --nodeps <package name>

2.2 Create the MySQL user and group

To make database administration easier, create a dedicated mysql user and mysql group for the MySQL installation:

# Add the mysql group
groupadd mysql
# Add the mysql user
useradd -g mysql mysql -d /home/mysql
# Set the mysql user's login password
passwd mysql

2.3 Upload to the server

Upload the mysql-5.7.40-linux-glibc2.12-x86_64.tar.gz installation package to the /usr/local directory.

# Extract the archive

tar -zxvf mysql-5.7.40-linux-glibc2.12-x86_64.tar.gz

# Create a symbolic link to make future version upgrades easier

ln -s mysql-5.7.40-linux-glibc2.12-x86_64 mysql

# Change the owner and group of the mysql user's home directory

chown -R mysql:mysql /home/mysql/

2.4 Create a configuration file

# Create the configuration file

cd /etc

# Add the following configuration items to my.cnf (a default my.cnf is provided at the end of the original article)

Note: set the terminal (e.g. SecureCRT) encoding to UTF-8.

vi my.cnf
[client]  # Client settings, i.e. default connection parameters for clients

port = 3306  # Default connection port
socket = /home/mysql/3306/tmp/mysql.sock  # Socket file for local connections, created by the mysqld daemon

[mysqld]  # Basic server settings

# Basic settings
server-id = 1  # Unique ID of this MySQL service; each instance needs its own ID
port = 3306  # Port MySQL listens on
basedir = /usr/local/mysql  # MySQL installation root directory
datadir = /home/mysql/3306/data  # Location of the MySQL data files
tmpdir = /home/mysql/3306/tmp  # Temporary directory, used for example by LOAD DATA INFILE
socket = /home/mysql/3306/tmp/mysql.sock  # Socket file for local communication between client programs and the server
pid-file = /home/mysql/3306/log/mysql.pid  # Location of the pid file
skip_name_resolve = 1  # Check client logins by IP address only, not by hostname
character-set-server = utf8mb4  # Default server character set; utf8mb4 also supports emoji (4-byte characters)
transaction_isolation = READ-COMMITTED  # Transaction isolation level; MySQL's default is REPEATABLE READ
collation-server = utf8mb4_general_ci  # Server collation; must match character-set-server
init_connect='SET NAMES utf8mb4'  # Character set used when clients connect, to avoid garbled text
lower_case_table_names = 1  # Whether SQL identifiers are case-sensitive; 1 means case-insensitive
max_connections = 400  # Maximum number of connections
max_connect_errors = 1000  # Maximum number of failed connection attempts
explicit_defaults_for_timestamp = true  # Allow NULL in TIMESTAMP columns not explicitly declared NOT NULL
max_allowed_packet = 128M  # Maximum SQL packet size; consider 1G if BLOB columns are used
interactive_timeout = 1800  # Idle MySQL connections are closed after this many seconds
wait_timeout = 1800  # MySQL's default wait_timeout is 8 hours; interactive_timeout must be set as well for this to take effect
tmp_table_size = 16M  # Maximum size of internal in-memory temporary tables (used e.g. for large GROUP BY / ORDER BY); larger results are written to disk, increasing I/O pressure
max_heap_table_size = 128M  # Maximum size of user-created MEMORY tables
query_cache_size = 0  # Disable the query result cache; whether to enable it later should be decided by testing against the actual workload (usually both settings stay off)
query_cache_type = 0

# Per-session memory settings; each session is allocated buffers of these sizes
read_buffer_size = 2M  # Read buffer allocated for sequential table scans
read_rnd_buffer_size = 8M  # Random read buffer size
sort_buffer_size = 8M  # Buffer used by MySQL for sorting
binlog_cache_size = 1M  # Cache for the binlog of an uncommitted transaction; the log is persisted to disk when the transaction commits (default is 32K)
back_log = 130  # Number of connection requests that can be queued while MySQL briefly stops answering new requests; the suggested value is back_log = 50 + (max_connections / 5), capped at 900

# Logging
log_error = /home/mysql/3306/log/error.log  # Error log file
slow_query_log = 1  # Enable the slow query log
long_query_time = 1  # Queries taking longer than 1 second are considered slow
slow_query_log_file = /home/mysql/3306/log/slow.log  # Slow query log file
log_queries_not_using_indexes = 1  # Log queries that do not use an index
log_throttle_queries_not_using_indexes = 5  # Number of such queries that may be written to the slow log per minute; the default 0 means no limit
min_examined_row_limit = 100  # A query must examine at least this many rows to be logged as slow; queries examining fewer rows are not written to the slow query log
expire_logs_days = 5  # Binlog files older than this many days are deleted automatically

# Replication
log-bin = mysql-bin  # Enable the binlog
binlog_format = ROW  # Binlog format; ROW records every affected row
binlog_row_image = minimal  # With binlog_format = ROW, log only the affected columns to reduce log volume

# InnoDB settings
innodb_open_files = 500  # Limits the number of tables InnoDB can keep open; increase this if the database has many tables (default is 300)
innodb_buffer_pool_size = 64M  # Buffer pool InnoDB uses for indexes and data, usually 60%-70% of physical memory; the larger it is, the less disk I/O is needed to access table data
innodb_log_buffer_size = 2M  # Memory (in MB) used for writing the log file; a larger buffer improves performance, but more data is lost on an unexpected crash; 1-8M is recommended
innodb_flush_method = O_DIRECT  # O_DIRECT reduces contention between the OS-level VFS cache and InnoDB's own buffer pool
innodb_write_io_threads = 4  # I/O threads; tune according to CPU cores and the read/write ratio
innodb_read_io_threads = 4
innodb_lock_wait_timeout = 120  # Seconds an InnoDB transaction waits for a row lock before being rolled back; InnoDB detects deadlocks in its own lock table automatically and rolls the transaction back (default is 50)
innodb_log_file_size = 32M  # Redo log file size; a larger value improves performance but increases crash recovery time
# Create directories

mkdir -p /home/mysql/3306/data
mkdir -p /home/mysql/3306/tmp
mkdir -p /home/mysql/3306/log
chown -R mysql:mysql /home/mysql/

2.5 Install the database

cd /usr/local/mysql/bin


# Initialize the database and specify the user that will run mysql

./mysqld --initialize --user=mysql
# Specify the user that starts mysql here, otherwise you will hit permission problems when starting MySQL
# After initialization, the /home/mysql/3306/log/error.log file configured in my.cnf records the random password generated for the root user
cat /home/mysql/3306/log/error.log | grep pass

2.6 Configure mysqld to start at boot

# Copy the startup script into init.d

cp /usr/local/mysql-5.7.40-linux-glibc2.12-x86_64/support-files/mysql.server /etc/rc.d/init.d/mysqld

# Make the mysqld service control script executable

chmod +x /etc/rc.d/init.d/mysqld

# Register mysqld as a system service

chkconfig --add mysqld

# Verify that the mysqld service has been registered

chkconfig --list mysqld

# Switch to the mysql user, then start|stop|restart|status the service

service mysqld start|stop|restart|status

2.7 Configure environment variables

Configure environment variables so the mysql client can be run directly.

# Switch to the mysql user

su - mysql

# Edit the profile

vi .bash_profile

MYSQL_HOME=/usr/local/mysql

PATH=$MYSQL_HOME/bin:$PATH

export MYSQL_HOME PATH

# Apply the changes immediately

source .bash_profile

You can then log in as the root user with mysql -uroot -p, using the temporary password recorded in the error log.

2.8 Log in and change the password

# Log in to mysql

mysql -u root -p

# Change the root user's password

set password for root@localhost=password("1qaz@WSX");

# Allow access from any IP address

GRANT ALL ON *.* to root@'%' IDENTIFIED BY '1qaz@WSX';

FLUSH PRIVILEGES;
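To verify that remote access works, you can log in from one of the other nodes (starrocks1 hosts MySQL in this plan, so 192.168.10.21 is used as the host):

mysql -h 192.168.10.21 -P 3306 -uroot -p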

3. Install StarRocks

3.1 Upload and decompress the installation package

Upload StarRocks and extract the binary installation package.

tar -xzvf StarRocks-x.x.x.tar.gz

Note: Change the above file name to the downloaded binary installation package name. After the upload is complete, distribute the installation package to each node.

3.2 Deploying FE nodes

This section describes how to configure and deploy Frontend (FE) nodes. FE is the front-end node of StarRocks, responsible for managing metadata, managing client connections, query planning, query scheduling, etc.

3.2.1 Configuring FE nodes

# Enter the StarRocks-x.x.x/fe directory.
cd /opt/starrocks/fe

# Edit the FE configuration file conf/fe.conf.
LOG_DIR = /var/log/starrocks/fe
meta_dir = /hdisk1/starrocks/fe/meta
priority_networks = 192.168.10.21/24
sys_log_dir = /var/log/starrocks/fe
audit_log_dir = /var/log/starrocks/fe

Note: set JAVA_HOME to the path of the local Java installation.

3.2.2 Create metadata path

Create the metadata path meta in the FE node.

mkdir -p meta

Note: This path needs to be consistent with the configuration path in the conf/fe.conf file.

tar -zxvf StarRocks-2.4.2.tar.gz -C /opt/

cd /opt

ln -s StarRocks-2.4.2/ starrocks

chown -R starrocks:starrocks /opt/

chmod -R 755 /opt/



mkdir -p /hdisk1/starrocks/fe/meta
mkdir -p /var/log/starrocks/fe
mkdir -p /hdisk1/starrocks/be/storage
mkdir -p /hdisk2/starrocks/be/storage
mkdir -p /hdisk3/starrocks/be/storage
mkdir -p /var/log/starrocks/be
chown -R starrocks:starrocks /hdisk1/starrocks
chown -R starrocks:starrocks /hdisk2/starrocks
chown -R starrocks:starrocks /hdisk3/starrocks
chown -R starrocks:starrocks /var/log/starrocks
chmod -R 755 /hdisk1/starrocks
chmod -R 755 /hdisk2/starrocks
chmod -R 755 /hdisk3/starrocks
chmod -R 755 /var/log/starrocks
chown -R starrocks:starrocks /var/log

3.2.3 Start FE node

Run the following command to start the FE node.

bin/start_fe.sh --daemon

3.2.4 Confirm that FE starts successfully

Verify whether the FE node is started successfully by the following methods:

  • Check the log/fe.log to confirm whether FE starts successfully.
2020-03-16 20:32:14,686 INFO 1 [FeServer.start():46] thrift server started.  // FE node started successfully.
2020-03-16 20:32:14,696 INFO 1 [NMysqlServer.start():71] Open mysql server success on 9030  // The FE can be reached with a MySQL client on port 9030.
2020-03-16 20:32:14,696 INFO 1 [QeService.start():60] QE service start.
2020-03-16 20:32:14,825 INFO 76 [HttpServer$HttpServerThread.run():210] HttpServer started with port 8030
  • Check the Java process by running the jps command to confirm whether the StarRocksFE process exists.
  • Access the StarRocks WebUI by opening FE_ip:http_port (the default http_port is 8030) in a browser; the user name is root and the password is blank.

Note: If the FE fails to start due to the port being occupied, you can modify the port number http_port in the configuration file conf/fe.conf.

3.2.5 Add FE node

You can connect to StarRocks through MySQL client to add FE nodes.
After the FE process starts, use the MySQL client to connect to the FE instance.

mysql -h 127.0.0.1 -P9030 -uroot

Note: root is the default built-in user of StarRocks, the password is empty, the port is the query_port configuration item in fe/conf/fe.conf, and the default value is 9030.

Change the root password:

set password=PASSWORD('1qaz@WSX');

View the FE status:

SHOW PROC '/frontends'\G

Example:

MySQL [(none)]> SHOW PROC '/frontends'\G

*************************** 1. row ***************************
             Name: 172.26.xxx.xx_9010_1652926508967
               IP: 172.26.xxx.xx
         HostName: iZ8vb61k11tstgnvrmrdfdZ
      EditLogPort: 9010
         HttpPort: 8030
        QueryPort: 9030
          RpcPort: 9020
             Role: LEADER
        ClusterId: 1160043595
             Join: true
            Alive: true
ReplayedJournalId: 1303
    LastHeartbeat: 2022-10-19 11:27:16
         IsHelper: true
           ErrMsg:
        StartTime: 2022-10-19 10:15:21
          Version: 2.4.0-c0fa2bb
1 row in set (0.02 sec)

• When Role is LEADER, the current FE node is the elected leader.
• When Role is FOLLOWER, the current node is an FE node that can take part in leader election.
If the MySQL client fails to connect, check the log/fe.warn.log log file to locate the problem.
If you run into unexpected issues during the initial deployment, you can delete and recreate the FE's metadata directory and then redeploy.

3.2.6 Deploying a highly available cluster of FE nodes

FE nodes of StarRocks support HA model deployment to ensure high availability of the cluster.

3.2.7 Add new FE node

Use the MySQL client to connect to the existing FE node, and add the information of the new FE node, including role, IP address, and Port.

Note: the new FE node must be added to the cluster with the statements below before it is started for the first time.

• Add Follower FE nodes.

ALTER SYSTEM ADD FOLLOWER "host:port";

• Add Observer FE node.

ALTER SYSTEM ADD OBSERVER "host:port";

Parameters:
• host: the IP address of the machine. If the machine has multiple IP addresses, use the communication IP selected by the priority_networks setting.
• port: the port set by the edit_log_port setting; the default is 9010.
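For example, with the node plan in this document (leader on starrocks2, followers on starrocks1 and starrocks3), the two followers could be added like this; adjust the IPs and ports to your environment:

ALTER SYSTEM ADD FOLLOWER "192.168.10.21:9010";
ALTER SYSTEM ADD FOLLOWER "192.168.10.23:9010";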

For security reasons, StarRocks FE nodes and BE nodes will only listen to one IP address for communication. If a machine has multiple network cards, StarRocks may not be able to find the correct IP address automatically. For example, through the ifconfig command, the IP address of eth0 is 192.168.1.1, and the IP address of docker0 is 172.17.0.1. You can set the 192.168.1.0/24 subnet to use eth0 as the communication IP. Here, CIDR is used to specify the range of the subnet where the IP resides, so that the same configuration can be used on all BE and FE nodes.

If an error occurs, you can delete the corresponding FE node through the command.

• Delete the Follower FE node.

ALTER SYSTEM DROP FOLLOWER "host:port";

• Delete the Observer FE node.

ALTER SYSTEM DROP OBSERVER "host:port";

3.2.8 Connecting FE nodes

FE nodes need to establish a communication connection between each other to realize functions such as master selection, voting, log submission and replication of the replication protocol. When a new FE node is added to an existing cluster for the first time and started, you need to designate an existing node in the cluster as a helper node, and obtain the configuration information of all FE nodes in the cluster from this node in order to establish a communication connection. Therefore, when starting a new FE node for the first time, you need to specify the --helper parameter via the command line.

./bin/start_fe.sh --helper host:port --daemon

Parameters:
• host: IP address of the machine. If the machine has multiple IP addresses, this item is the only communication IP address set under the priority_networks setting item.
• port: The port set under the edit_log_port setting item, the default is 9010.

For example:

/opt/starrocks/fe/bin/start_fe.sh --helper 192.168.10.22:9010 --daemon

3.2.9 Confirm that the FE cluster is deployed successfully

Check the cluster status to confirm that the deployment is successful.

mysql> SHOW PROC '/frontends'\G
*************************** 1. row ***************************
             Name: 192.168.10.21_9010_1672903151744
               IP: 192.168.10.21
      EditLogPort: 9010
         HttpPort: 8030
        QueryPort: 9030
          RpcPort: 9020
             Role: FOLLOWER
        ClusterId: 114599321
             Join: true
            Alive: true
ReplayedJournalId: 443
    LastHeartbeat: 2023-01-05 15:26:42
         IsHelper: true
           ErrMsg: 
        StartTime: 2023-01-05 15:23:56
          Version: 2.4.2-3994421
*************************** 2. row ***************************
             Name: 192.168.10.23_9010_1672903157548
               IP: 192.168.10.23
      EditLogPort: 9010
         HttpPort: 8030
        QueryPort: 9030
          RpcPort: 9020
             Role: FOLLOWER
        ClusterId: 114599321
             Join: true
            Alive: true
ReplayedJournalId: 443
    LastHeartbeat: 2023-01-05 15:26:42
         IsHelper: true
           ErrMsg: 
        StartTime: 2023-01-05 15:26:21
          Version: 2.4.2-3994421
*************************** 3. row ***************************
             Name: 192.168.10.22_9010_1672902152503
               IP: 192.168.10.22
      EditLogPort: 9010
         HttpPort: 8030
        QueryPort: 9030
          RpcPort: 9020
             Role: LEADER
        ClusterId: 114599321
             Join: true
            Alive: true
ReplayedJournalId: 444
    LastHeartbeat: 2023-01-05 15:26:41
         IsHelper: true
           ErrMsg: 
        StartTime: 2023-01-05 15:02:44
          Version: 2.4.2-3994421
3 rows in set (0.10 sec)

When the Alive item of the node is true, the node is added successfully.

3.3 Deploy BE nodes

This section describes how to configure and deploy Backend (BE) nodes. BE is the backend node of StarRocks, responsible for data storage and SQL execution. The following example deploys only one BE node. You can add multiple BE nodes by repeating the following steps.

3.3.1 Configuring BE nodes

Enter the StarRocks-xxx/be path.

cd /opt/starrocks/be/

Modify the BE node configuration file conf/be.conf.
Note: when a machine has multiple IP addresses, set priority_networks in conf/be.conf so that a unique IP is selected for this node.

priority_networks = 192.168.10.21/24
storage_root_path = /hdisk1/starrocks/be/storage;/hdisk2/starrocks/be/storage;/hdisk3/starrocks/be/storage
sys_log_dir = /var/log/starrocks/be

3.3.2 Add BE node

Add the BE node to the StarRocks cluster through the MySQL client.

mysql> ALTER SYSTEM ADD BACKEND "host:port";

For example:

ALTER SYSTEM ADD BACKEND "192.168.10.21:9050";
ALTER SYSTEM ADD BACKEND "192.168.10.22:9050";
ALTER SYSTEM ADD BACKEND "192.168.10.23:9050";

Note: host needs to match priority_networks, port needs to be the same as heartbeat_service_port set in be.conf file, the default is 9050.

If an error occurs during the adding process, the BE node needs to be removed from the cluster through the following command.

mysql> ALTER SYSTEM DECOMMISSION BACKEND "host:port";

Note: The host and port are consistent with the added BE node.

3.3.3 Start the BE node

Run the following command to start the BE node.

bin/start_be.sh --daemon

3.3.4 Confirm that the BE startup is successful

Confirm whether the BE node starts successfully through the MySQL client.

SHOW PROC '/backends'\G

Example:

MySQL [(none)]> SHOW PROC '/backends'\G

*************************** 1. row ***************************
            BackendId: 10003
              Cluster: default_cluster
                   IP: 172.26.xxx.xx
             HostName: sandbox-pdtw02
        HeartbeatPort: 9050
               BePort: 9060
             HttpPort: 8040
             BrpcPort: 8060
        LastStartTime: 2022-05-19 11:15:00
        LastHeartbeat: 2022-05-19 11:27:36
                Alive: true
 SystemDecommissioned: false
ClusterDecommissioned: false
            TabletNum: 10
     DataUsedCapacity: .000
        AvailCapacity: 1.865 TB
        TotalCapacity: 1.968 TB
              UsedPct: 5.23 %
       MaxDiskUsedPct: 5.23 %
               ErrMsg:
              Version: 2.2.0-RC02-2ab1482
               Status: {
    
    "lastSuccessReportTabletsTime":"2022-05-19 11:27:01"}
    DataTotalCapacity: 1.865 TB
          DataUsedPct: 0.00 %
1 row in set (0.01 sec)

When Alive is true, the current BE node is connected to the cluster normally.
If the BE node is not connected to the cluster normally, you can check the log/be.WARNING log file to troubleshoot the problem.
If information similar to the following appears in the log, it indicates that there is a problem with the configuration of priority_networks.

W0708 17:16:27.308156 11473 heartbeat_server.cpp:82] backend ip saved in master does not equal to backend local ip: 127.0.0.1 vs. 172.16.xxx.xx

If you encounter this problem, DROP the incorrectly registered BE node and then re-add it with the correct IP.

ALTER SYSTEM DROP BACKEND "172.16.xxx.xx:9050";

If you encounter any unexpected issues during the initial deployment, you can restart the deployment after deleting and recreating the BE's data path.

3.4 View the web UI

http://192.168.10.22:8030

The following components are installed to make StarRocks data migration easier.

4. Install DataX and datax-web

4.1 Upload and decompress

Upload the two installation packages.

Run the following as the starrocks user:

tar -zxvf datax.tar.gz -C /opt/

tar -zxvf datax-web-2.1.2.tar.gz -C /opt/

4.2 Generate the directory structure

Run datax-web's install.sh, answering the prompts y, y, n, n so that it only generates the directory structure without performing the installation; see the command sketch below.
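A sketch of this step, assuming install.sh lives under the extracted package's bin directory (adjust the path if your package layout differs):

su - starrocks
cd /opt/datax-web-2.1.2/bin
./install.sh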

4.3 Modify the configuration file

Modify /opt/datax-web-2.1.2/modules/datax-admin/conf/bootstrap.properties

DB_HOST=192.168.10.21
DB_PORT=3306
DB_USERNAME=root
DB_PASSWORD=1qaz@WSX
DB_DATABASE=dataxweb

Modify the application.yml file of datax-executor:

# web port
server:
  port: ${server.port}
  #port: 8081

# log config
logging:
  config: classpath:logback.xml
  path: ${data.path}/applogs/executor/jobhandler
  #path: ./data/applogs/executor/jobhandler

datax:
  job:
    admin:
      ### datax admin address list, such as "http://address" or "http://address01,http://address02"
      #addresses: http://127.0.0.1:8080
      addresses: http://192.168.10.23:${datax.admin.port}
    executor:
      appname: datax-executor
      ip: 192.168.10.23
      #port: 9999
      port: ${executor.port:9999}
      ### job log path
      #logpath: ./data/applogs/executor/jobhandler
      logpath: ${data.path}/applogs/executor/jobhandler
      ### job log retention days
      logretentiondays: 30
    ### job, access token
    accessToken:

  executor:
    #jsonpath: D:\\temp\\executor\\json\\
    jsonpath: ${json.path}

  #pypath: F:\tools\datax\bin\datax.py
  pypath: ${python.path}

4.4 Synchronization

cd /opt

scp -r datax starrocks1:/opt/

scp -r datax starrocks2:/opt/

scp -r datax-web-2.1.2/ starrocks1:/opt/

scp -r datax-web-2.1.2/ starrocks2:/opt/

4.5 Create the database and install

Connect to MySQL on starrocks1 (for example with Navicat) and execute the following SQL:

create database dataxweb;

Run datax-web's install.sh again, this time answering the prompts n, n, y, y so that only the installation is performed.

Note: if the installation fails, connect to MySQL on starrocks1 and manually execute the SQL in the datax-web.sql file.

4.6 Configure DATAX_HOME

Modify the .bash_profile file

DATAX_HOME=/opt/datax
export PATH DATAX_HOME
source .bash_profile

4.7 Modify log configuration

Modify the logback.xml of admin

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false" scan="true" scanPeriod="1 seconds">

    <contextName>admin</contextName>
    <property name="LOG_PATH"
              value="/var/log/datax-web"/>

    <!-- Console appender -->
    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <!-- Output format: %d date, %thread thread name, %-5level level left-padded to 5 characters, %msg log message, %n newline -->
            <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{5} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/datax-admin.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}.%d{yyyy-MM-dd}.zip</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>%date %level [%thread] %logger{36} [%file : %line] %msg%n
            </pattern>
        </encoder>
    </appender>

    <!--mybatis log configure-->
    <logger name="com.apache.ibatis" level="TRACE"/>
    <logger name="java.sql.Connection" level="DEBUG"/>
    <logger name="java.sql.Statement" level="DEBUG"/>
    <logger name="java.sql.PreparedStatement" level="DEBUG"/>

    <root level="info">
        <appender-ref ref="console"/>
        <appender-ref ref="file"/>
    </root>

</configuration>
scp logback.xml starrocks1:/opt/datax-web-2.1.2/modules/datax-admin/conf/

scp logback.xml starrocks2:/opt/datax-web-2.1.2/modules/datax-admin/conf/

Modify the logback.xml of the executor

<?xml version="1.0" encoding="UTF-8"?>
<configuration debug="false" scan="true" scanPeriod="1 seconds">

    <contextName>exe</contextName>
    <property name="LOG_PATH"
              value="/var/log/datax-web" />

    <appender name="console" class="ch.qos.logback.core.ConsoleAppender">
        <encoder>
            <pattern>%d{HH:mm:ss.SSS} %contextName [%thread] %-5level %logger{10} - %msg%n</pattern>
        </encoder>
    </appender>

    <appender name="file" class="ch.qos.logback.core.rolling.RollingFileAppender">
        <file>${LOG_PATH}/datax-executor.log</file>
        <rollingPolicy class="ch.qos.logback.core.rolling.TimeBasedRollingPolicy">
            <fileNamePattern>${LOG_PATH}.%d{yyyy-MM-dd}.zip</fileNamePattern>
        </rollingPolicy>
        <encoder>
            <pattern>%date %level [%thread] %logger{20} [%file : %line] %msg%n
            </pattern>
        </encoder>
    </appender>

    <root level="info">
        <appender-ref ref="console"/>
        <appender-ref ref="file"/>
    </root>

</configuration>
scp logback.xml starrocks1:/opt/datax-web-2.1.2/modules/datax-executor/conf/

scp logback.xml starrocks2:/opt/datax-web-2.1.2/modules/datax-executor/conf/

4.8 Start

su - starrocks

On starrocks3, execute ./start-all.sh

On starrocks1 and starrocks2, execute ./start.sh -m datax-executor
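A sketch of the start commands with full paths, assuming the scripts are under the datax-web bin directory on each node:

# on starrocks3
cd /opt/datax-web-2.1.2/bin && ./start-all.sh
# on starrocks1 and starrocks2
cd /opt/datax-web-2.1.2/bin && ./start.sh -m datax-executor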

4.9 View the results in the web UI

http://192.168.10.23:9527/index.html#/dashboard
