Orchestrator in Single-Node Mode

1. Environment Description

1.1 Overview of the three-VM environment:

Operating system on all three VMs:

[root@mgr01 ~]# cat /etc/redhat-release 
CentOS Linux release 7.2.1511 (Core)

On all three CentOS VMs, iptables is stopped and SELinux is disabled.
Time is synchronized across the three VMs:
ntpdate ntp1.aliyun.com
Orchestrator and MySQL are installed on each of the three VMs.

Orchestrator version: orchestrator-3.1.4-linux-amd64.tar.gz
Download:
https://github.com/github/orchestrator/releases

The installed MySQL version is the binary distribution of MySQL 5.7.24 GA.

IP addresses of the three machines:

10.0.0.130    172.16.0.130
10.0.0.131    172.16.0.131
10.0.0.132    172.16.0.132

Hostname bindings on the three VMs:

[root@mgr01 bin]# cat /etc/hosts
172.16.0.130 mgr01
172.16.0.131 mgr03
172.16.0.132 mgr02
[root@mgr02 ~]# cat /etc/hosts
172.16.0.132 mgr02
172.16.0.131 mgr03
172.16.0.130 mgr01
[root@mgr03 bin]# cat /etc/hosts
172.16.0.132 mgr02
172.16.0.131 mgr03
172.16.0.130 mgr01

Tip: Orchestrator recommends managing MySQL instances by hostname rather than by IP. If you specify an IP in CHANGE MASTER TO ... MASTER_HOST, a master-slave switchover or failover may run into problems,
so it is best to bind the hosts entries and use hostnames.
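A quick way to act on this tip is to verify each VM's /etc/hosts against the expected map. The check_hosts helper below is a hypothetical sketch, not part of the original setup:

```shell
# Hypothetical helper: verify that a hosts file contains every expected
# ip/hostname pair from the topology above. Run it on each VM.
check_hosts() {
  local hosts_file=$1 rc=0
  while read -r ip name; do
    if ! grep -qE "^${ip}[[:space:]]+${name}([[:space:]]|\$)" "$hosts_file"; then
      echo "missing entry: $ip $name"
      rc=1
    fi
  done <<'EOF'
172.16.0.130 mgr01
172.16.0.131 mgr03
172.16.0.132 mgr02
EOF
  return $rc
}
# Usage: check_hosts /etc/hosts && echo "hosts OK"
```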

1.2 Install MySQL on the three VMs

The MySQL installation steps are omitted; install MySQL in the usual way.

1.3 Configure GTID-based filtered MySQL replication (one master, two slaves) in advance

    172.16.0.131  master
    172.16.0.130  slave 
    172.16.0.132  slave

The MySQL configuration file on all three VMs must enable the following parameters.
Note: GTID is enabled, and only tables under the test database test001 are replicated; all other databases are ignored.

[root@mgr01 orchestrator]# egrep -i 'gtid|replicate_wild' /data/mysql/mysql3306/my3306.cnf
####: for gtid
#gtid_executed_compression_period    =1000                          #   1000
gtid_mode                           =on                            #    off
enforce_gtid_consistency            =on                            #    off
replicate_wild_do_table=test001.%
replicate_wild_ignore_table=information_schema.%
replicate_wild_ignore_table=performance_schema.%
replicate_wild_ignore_table=mysql.%
replicate_wild_ignore_table=orchestrator.%  

Operations on the master (172.16.0.131):


mysql -uroot -p'123456' -e "reset master;"
mysql -e "grant replication slave on *.* to repuser@'172.16.0.%' identified by 'JuwoSdk21TbUser'; flush privileges;"
mysqldump -uroot -p'123456' -B -A -F --set-gtid-purged=OFF  --master-data=2 --single-transaction  --events|gzip >/opt/test_$(date +%F).sql.gz

Operations on slave 172.16.0.130 (copy /opt/test_$(date +%F).sql.gz over from the master first):

zcat /opt/test_$(date +%F).sql.gz | mysql
mysql  -e "CHANGE MASTER TO MASTER_HOST='mgr03',MASTER_PORT=3306,MASTER_USER='repuser',MASTER_PASSWORD='JuwoSdk21TbUser',MASTER_AUTO_POSITION = 1;start slave;show slave status\G" |grep -i "yes"

Operations on slave 172.16.0.132 (copy /opt/test_$(date +%F).sql.gz over from the master first):

zcat /opt/test_$(date +%F).sql.gz | mysql
mysql  -e "CHANGE MASTER TO MASTER_HOST='mgr03',MASTER_PORT=3306,MASTER_USER='repuser',MASTER_PASSWORD='JuwoSdk21TbUser',MASTER_AUTO_POSITION = 1;start slave;show slave status\G" |grep -i "yes"
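With both slaves attached, a quick end-to-end check confirms that writes under test001 replicate. The helper below is a hypothetical sketch; the master and slave client commands (for example `mysql -h mgr03 -uroot -p123456`) are passed in as parameters so it is not tied to one host:

```shell
# Hypothetical verification helper: write a row on the master inside the
# replicated database and read it back on a slave. The client commands are
# injected so the same function works against either slave.
verify_replication() {
  local master=$1 slave=$2
  # This table lives in test001, so it matches replicate_wild_do_table=test001.%
  $master -e "CREATE TABLE IF NOT EXISTS test001.repl_check (id INT PRIMARY KEY);"
  $master -e "REPLACE INTO test001.repl_check VALUES (1);"
  sleep 1   # give the slave a moment to apply the GTID transaction
  $slave -e "SELECT COUNT(*) FROM test001.repl_check;"
}
# Usage: verify_replication "mysql -h mgr03 -uroot -p123456" "mysql -h mgr01 -uroot -p123456"
```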

2. Install Orchestrator on the three VMs

Note!!!!: This post describes installing and using Orchestrator on a single node.

2.1 Machine roles:

Orchestrator machines: 172.16.0.130, 172.16.0.131, 172.16.0.132
Orchestrator metadata backend MySQL: 172.16.0.131
Monitored target databases: 172.16.0.130, 172.16.0.131, 172.16.0.132

2.2 Run the following commands on each VM

Install orchestrator:
Download the orchestrator installation package, orchestrator-3.1.4-linux-amd64.tar.gz, from
https://github.com/github/orchestrator/releases

Extract the orchestrator installation package:
tar -xf orchestrator-3.1.4-linux-amd64.tar.gz
This produces two additional directories, usr and etc:
[root@mgr01 ~]# ls -lrt /root/
drwxr-xr-x. 3 root root 4096 Jan 26 22:05 usr
drwxr-xr-x. 3 root root 4096 Jan 26 22:05 etc

Copy usr/local/orchestrator/orchestrator-sample.conf.json to /etc and rename it orchestrator.conf.json:

cp /root/usr/local/orchestrator/orchestrator-sample.conf.json /etc/orchestrator.conf.json

After the installation is complete, create the database and user that orchestrator needs:

CREATE DATABASE orchestrator;
CREATE USER 'orchestrator'@'127.0.0.1' IDENTIFIED BY 'orchestrator';
GRANT ALL PRIVILEGES ON `orchestrator`.* TO 'orchestrator'@'127.0.0.1';
Here the metadata MySQL and orchestrator run on the same machine, so '127.0.0.1' is used when creating the account.
If they are on different machines, replace the IP with the IP of the machine running orchestrator.
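Before starting orchestrator it can be worth confirming that the backend account above actually works. The helper below is a hypothetical sketch; the client command is injectable so the function can be exercised without a live server:

```shell
# Hypothetical sanity check: log in as the backend account created above
# and run a trivial query against the orchestrator database.
check_backend() {
  local client=${1:-mysql}   # injectable for testing; defaults to the mysql CLI
  $client -h127.0.0.1 -P3306 -uorchestrator -porchestrator orchestrator -e "SELECT 1;"
}
# Usage: check_backend && echo "backend reachable"
```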

Grant privileges on the monitored target databases:

Run the following grants on each target database that needs to be monitored:
CREATE USER 'orchestrator'@'172.16.0.%'  IDENTIFIED BY 'orchestrator';
GRANT SUPER, PROCESS, REPLICATION SLAVE, RELOAD ON *.* TO 'orchestrator'@'172.16.0.%';
GRANT SELECT ON mysql.slave_master_info TO 'orchestrator'@'172.16.0.%';
Note:
The MySQLTopologyUser account should be granted SUPER, PROCESS, RELOAD, SELECT, and REPLICATION SLAVE.
The official documentation omits the SELECT privilege. During a switchover, orchestrator reads the slave's mysql.slave_master_info table to obtain the replication account and password; without SELECT this read fails, and no error message is reported.

2.3 Modify the orchestrator configuration file on each VM

Modify /etc/orchestrator.conf.json as follows:

#### Orchestrator backend metadata database configuration

"MySQLOrchestratorHost": "127.0.0.1",
"MySQLOrchestratorPort": 3306,
"MySQLOrchestratorDatabase": "orchestrator",
"MySQLOrchestratorUser": "orchestrator",
"MySQLOrchestratorPassword": "orchestrator",

### Configuration for the target databases orchestrator monitors

"MySQLTopologyUser": "orchestrator",
"MySQLTopologyPassword": "orchestrator",

2.4 Start the orchestrator service on a single node only

Start the standalone orchestrator service on machine 172.16.0.131; the default listening port is 3000.
Startup command:

cd /root/usr/local/orchestrator && ./orchestrator --config=/etc/orchestrator.conf.json http &
[root@mgr01 ~]# ps -ef|grep orc
root       3478   3477  6 23:47 pts/3    00:00:02 ./orchestrator --config=/etc/orchestrator.conf.json http
root       3489   2648  0 23:48 pts/2    00:00:00 grep --color=auto orc
[root@mgr01 ~]# ss -lntup|grep orc
tcp    LISTEN     0      128      :::3000                 :::*                   users:(("orchestrator",pid=3478,fd=5))
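Once the listener is up, it can help to wait for the HTTP endpoint to answer before opening the browser. The helper below is a hypothetical sketch; it polls the /web/status page used later in this post, and the fetch command is injectable for testing:

```shell
# Hypothetical helper: poll the orchestrator HTTP endpoint until it answers,
# or give up after 10 attempts (~10 seconds). Defaults to curl.
wait_for_orchestrator() {
  local url=$1 fetch=${2:-"curl -sf -o /dev/null"}
  local i
  for i in $(seq 1 10); do
    if $fetch "$url"; then
      return 0
    fi
    sleep 1
  done
  return 1
}
# Usage: wait_for_orchestrator http://127.0.0.1:3000/web/status && echo "orchestrator up"
```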

The log contains errors:

2020-02-20 23:47:40 ERROR ReadTopologyInstance(mgr01:3306) show slave hosts: ReadTopologyInstance(mgr01:3306) 'show slave hosts' returned row with <host,port>: <,3306>
2020-02-20 23:47:41 DEBUG Waiting for 15 seconds to pass before running failure detection/recovery
2020-02-20 23:47:41 ERROR ReadTopologyInstance(mgr02:3306) show slave hosts: ReadTopologyInstance(mgr02:3306) 'show slave hosts' returned row with <host,port>: <,3306>

Fixing the error:
Set the report_host parameter in the MySQL configuration file my.cnf.
report_host is a read-only parameter; the mysql service must be restarted for it to take effect.
report_host = xxxx   # the server's own IP
Tip: a short description of MySQL's report-* series of parameters:
# The report-* series is configured on the slave and comprises four parameters: report-[host|port|user|password].
When report-host is set in my.cnf, the slave sends report-host and report-port (default 3306) to the master when `start slave` is executed, and the master records them in a global hash structure named slave_list.
Note also that MySQL limits report_host to at most 60 bytes (60 non-Chinese characters), so the MySQL server hostname should be kept under 60 characters; otherwise the slave will report an error during replication.
Reference: https://www.jianshu.com/p/9a5b7d30b0ae

Cause: without report_host in my.cnf, `show slave hosts` does not return the host value, which makes the orchestrator program report the error above.
Alternatively, set the DiscoverByShowSlaveHosts parameter in /etc/orchestrator.conf.json to false and restart the orchestrator service; this removes the need to set report_host.
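A sketch of the second fix, assuming the key appears in the file literally as `"DiscoverByShowSlaveHosts": true` (plain sed is used here instead of a JSON-aware tool):

```shell
# Sketch: flip DiscoverByShowSlaveHosts to false in the orchestrator config.
# Assumes the key is present and currently set to true.
disable_show_slave_hosts() {
  local conf=$1
  sed -i 's/"DiscoverByShowSlaveHosts": *true/"DiscoverByShowSlaveHosts": false/' "$conf"
}
# Usage: disable_show_slave_hosts /etc/orchestrator.conf.json   # then restart orchestrator
```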

2.5 A brief tour of the web pages

http://10.0.0.130:3000/web/status

When the web page is first opened, no MySQL cluster name is visible; you need to click Discover to discover an instance.
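Discovery can also be triggered without the browser: orchestrator exposes an HTTP discovery API (and an equivalent `-c discover` CLI command). The helper below is a hypothetical sketch that submits the discovery URL for one instance, after which orchestrator crawls the rest of the topology; the fetch command is injectable for testing:

```shell
# Hypothetical helper: ask the running orchestrator daemon to discover an
# instance via its HTTP API (/api/discover/:host/:port).
discover_instance() {
  local host=$1 port=$2 fetch=${3:-"curl -s"}
  $fetch "http://127.0.0.1:3000/api/discover/${host}/${port}"
}
# Usage: discover_instance mgr03 3306
# CLI alternative: ./orchestrator --config=/etc/orchestrator.conf.json -c discover -i mgr03:3306
```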

Click Clusters again; the cluster alias and instance are displayed.

Select Status on the home page to see the current health of the nodes.

View the detailed replication topology.

View the analysis of replication failures.

Notes on replication failure diagnosis.

View detailed replication information.

Adjust the replication relationship online: change one master with two slaves into cascading replication.

Then change the cascading replication back to one master with two slaves.
The above is a simple web-page introduction to managing a MySQL replication cluster with the orchestrator service started on a single node. You are welcome to exchange ideas and learn together.

Origin: blog.51cto.com/wujianwei/2472610