Setting up MySQL Cluster

This article describes how to set up MySQL Cluster inside a single virtual machine. A MySQL Cluster environment normally calls for around five machines; since that many are not available here, the whole cluster is built on one VM. The steps are as follows.

Preparation

A separate partition was allocated in the VM for the cluster and mounted at /disk4. The details of extending the disk are not covered here; plenty of guides for that can be found online. All installation and deployment work is done under /disk4.
Download the Linux package: https://cdn.mysql.com//Downloads/MySQL-Cluster-7.5/mysql-cluster-gpl-7.5.12-linux-glibc2.12-x86_64.tar.gz

Copy the package to /disk4, extract it there, and rename the extracted directory to mysql-1. Then make four more copies named mysql-2, mysql-3, mysql-4, and mysql-5 (see the sketch after this list).
mysql-1 is the management node;
mysql-2 and mysql-3 are the data nodes;
mysql-4 and mysql-5 are the SQL nodes.
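
A minimal sketch of the extract-and-copy step, assuming the downloaded tarball already sits in /disk4 (the exact commands are mine, not the original author's):

cd /disk4
tar -zxf mysql-cluster-gpl-7.5.12-linux-glibc2.12-x86_64.tar.gz
mv mysql-cluster-gpl-7.5.12-linux-glibc2.12-x86_64 mysql-1    # management node
for i in 2 3 4 5; do cp -r mysql-1 mysql-$i; done             # data and SQL nodes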

Management node configuration

  1. Configuration

Create a conf directory under mysql-1 and add a config.ini file in it as the management node's configuration file.
The contents of the file:

[ndbd default]
NoOfReplicas=2
DataMemory=512M
IndexMemory=18M
ServerPort=3310 # port

[ndb_mgmd]
NodeId=1
HostName=192.168.17.128
PortNumber=3310
DataDir=/disk4/mysql-1/data

[ndbd]
NodeId=2
HostName=192.168.17.128
ServerPort=3311
DataDir=/disk4/mysql-2/data

[ndbd]
NodeId=3
HostName=192.168.17.128
ServerPort=3312
DataDir=/disk4/mysql-3/data

[mysqld]
[mysqld]
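
Two notes on this file: the two empty [mysqld] sections reserve one API node id for each of the two SQL nodes, and the DataDir paths should exist before the daemons are started. If the data directories have not been created yet, one way to do so (my addition, not a step from the original post) is:

mkdir -p /disk4/mysql-1/data /disk4/mysql-2/data /disk4/mysql-3/data /disk4/mysql-4/data /disk4/mysql-5/data
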
  2. Initialization

Go into the bin directory under mysql-1 and run the following command:
./ndb_mgmd -f /disk4/mysql-1/conf/config.ini --configdir=/disk4/mysql-1/conf/ --initial

  3. Start the management node

Run the following command:

./ndb_mgmd -f /disk4/mysql-1/conf/config.ini --configdir=/disk4/mysql-1/conf/
  4. Startup result

After a successful start, the following text is printed:

MySQL Cluster Management Server mysql-5.7.24 ndb-7.5.12

Data node configuration

  1. Configuration
    Create a conf directory under mysql-2 and edit my.cnf in it:
[MYSQLD]
user=root
character_set_server=utf8
port=3311
pid-file=/disk4/mysql-2/mysql.pid
log-error=/disk4/mysql-2/mysql.err
language=/disk4/mysql-2/share/english
datadir=/disk4/mysql-2/data
basedir=/disk4/mysql-2/
ndbcluster
ndb-connectstring=192.168.17.128
default-storage-engine=ndbcluster
[MYSQL_CLUSTER]
ndb-connectstring=192.168.17.128:3310 # management node address
  2. Startup
    Go into the bin directory and run the following command:
./ndbd --defaults-file=../conf/my.cnf --initial
  3. Startup result
    When the node starts successfully, the console prints:
2018-12-20 14:04:07 [ndbd] INFO     -- Angel connected to '192.168.17.128:3310' 
2018-12-20 14:04:07 [ndbd] INFO     -- Angel allocated nodeid: 2

In the ndb_1_cluster.log file under the management node's data directory, there will be output like this:

2018-12-19 19:34:52 [MgmtSrvr] INFO     -- Node 1: Node 3 Connected
2018-12-19 19:34:54 [MgmtSrvr] INFO     -- Node 2: Initial start, waiting for 3 to connect,  nodes [ all: 2 and 3 connected: 2 no-wait:  ]
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 3: Buffering maximum epochs 100
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 3: Start phase 0 completed
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 3: Communication to Node 2 opened
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 3: Initial start, waiting for 2 to connect,  nodes [ all: 2 and 3 connected: 3 no-wait:  ]
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 3: Node 2 Connected
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 2: Node 3 Connected
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 2: Initial start with nodes 2 and 3 [ missing:  no-wait:  ]
2018-12-19 19:34:57 [MgmtSrvr] INFO     -- Node 2: CM_REGCONF president = 2, own Node = 2, our dynamic id = 0/1
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 3: CM_REGCONF president = 2, own Node = 3, our dynamic id = 0/2
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 2: Node 3: API mysql-5.7.24 ndb-7.5.12
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 3: Node 2: API mysql-5.7.24 ndb-7.5.12
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 3: Start phase 1 completed
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 2: Start phase 1 completed
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 2: System Restart: master node: 2, num starting: 2, gci: 0
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 2: CNTR_START_CONF: started: 0000000000000000
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 2: CNTR_START_CONF: starting: 000000000000000c
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 2: Local redo log file initialization status:
#Total files: 64, Completed: 0
#Total MBytes: 1024, Completed: 0
2018-12-19 19:35:00 [MgmtSrvr] INFO     -- Node 3: Local redo log file initialization status:
#Total files: 64, Completed: 0
#Total MBytes: 1024, Completed: 0
2018-12-19 19:35:01 [MgmtSrvr] INFO     -- Node 3: Local redo log file initialization completed:
#Total files: 64, Completed: 64
#Total MBytes: 1024, Completed: 1024
2018-12-19 19:35:01 [MgmtSrvr] INFO     -- Node 3: Start phase 2 completed (initial start)
2018-12-19 19:35:01 [MgmtSrvr] INFO     -- Node 2: Local redo log file initialization completed:
................................

In this file you can watch the connection status of each node change over time.

  4. Notes
    At least two data nodes are needed; otherwise, when an SQL node tries to connect, the following error appears:
2018-12-19 19:32:13 [MgmtSrvr] WARNING  -- Failed to allocate nodeid for API at 192.168.17.128. Returned error: 'No free node id found for mysqld(API).'

mysql-3 is configured the same way as mysql-2; only the port and the paths need to change, as sketched below.
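
A rough sketch of that adjustment, assuming mysql-3 uses port 3312 as in config.ini above (the sed-based copy is illustrative, not the author's exact procedure):

# derive mysql-3's my.cnf from mysql-2's, swapping the port and every path
mkdir -p /disk4/mysql-3/conf
sed -e 's/mysql-2/mysql-3/g' -e 's/port=3311/port=3312/' /disk4/mysql-2/conf/my.cnf > /disk4/mysql-3/conf/my.cnf
# first start only; later restarts should omit --initial so existing data is kept
cd /disk4/mysql-3/bin
./ndbd --defaults-file=../conf/my.cnf --initial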

SQL node configuration

  1. Configuration
    Create a conf directory under mysql-4 and edit my.cnf in it:
[MYSQLD]
user=root
character_set_server=utf8
port=3313
pid-file=/disk4/mysql-4/mysql.pid
log-error=/disk4/mysql-4/mysql.err
language=/disk4/mysql-4/share/english
datadir=/disk4/mysql-4/data
basedir=/disk4/mysql-4/
socket=/disk4/mysql-4/mysql.sock
ndbcluster
ndb-connectstring=192.168.17.128
default-storage-engine=ndbcluster
[MYSQL_CLUSTER]
ndb-connectstring=192.168.17.128:3310
  2. Database initialization
    Go into the bin directory and run the following command:
./mysqld --defaults-file=../conf/my.cnf --initialize

The initial root password is at the end of the mysql.err file.
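On MySQL 5.7, mysqld --initialize writes the generated password to the error log on a line containing "A temporary password is generated", so it can be located with, for example:
grep 'temporary password' /disk4/mysql-4/mysql.err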

  3. Start MySQL in the background
./mysqld --defaults-file=../conf/my.cnf --user=root &

After a successful start, the management node's log contains:

Alloc node id 4 succeeded
2018-12-19 21:50:49 [MgmtSrvr] INFO     -- Nodeid 4 allocated for API at 192.168.17.128
2018-12-19 21:50:49 [MgmtSrvr] INFO     -- Node 4: mysqld --server-id=0
2018-12-19 21:50:49 [MgmtSrvr] INFO     -- Node 2: Node 4 Connected
2018-12-19 21:50:49 [MgmtSrvr] INFO     -- Node 3: Node 4 Connected
2018-12-19 21:50:49 [MgmtSrvr] INFO     -- Node 3: Node 4: API mysql-5.7.24 ndb-7.5.12
2018-12-19 21:50:49 [MgmtSrvr] INFO     -- Node 2: Node 4: API mysql-5.7.24 ndb-7.5.12
  4. Log in to MySQL
./mysql -S../mysql.sock -uroot -p

Enter the initial password at the prompt; the login should succeed.
Then change the initial password.
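MySQL 5.7 marks the generated password as expired, so a new one has to be set right after logging in; a minimal example (the new password here is just a placeholder):
mysql> ALTER USER 'root'@'localhost' IDENTIFIED BY 'NewPassw0rd!';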

  5. Notes
    mysql-5 is configured the same way as mysql-4; just change the port and the file paths (see the sketch after this list).
    The SQL nodes' passwords need to be kept the same.
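For completeness, a hedged sketch of bringing up mysql-5 the same way; port 3314 and the paths follow the pattern above but are assumptions, not values from the original:

# derive mysql-5's my.cnf from mysql-4's, swapping the port, socket and paths
mkdir -p /disk4/mysql-5/conf
sed -e 's/mysql-4/mysql-5/g' -e 's/port=3313/port=3314/' /disk4/mysql-4/conf/my.cnf > /disk4/mysql-5/conf/my.cnf
cd /disk4/mysql-5/bin
./mysqld --defaults-file=../conf/my.cnf --initialize      # temporary password ends up in /disk4/mysql-5/mysql.err
./mysqld --defaults-file=../conf/my.cnf --user=root &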

Cluster verification

Run the following command in the bin directory to connect to the management node:

./ndb_mgm -c 192.168.17.128:3310

Once connected, enter the show command to print the details of the cluster nodes:

Connected to Management Server at: 192.168.17.128:3310
Cluster Configuration
---------------------
[ndbd(NDB)]	2 node(s)
id=2	@192.168.17.128  (mysql-5.7.24 ndb-7.5.12, Nodegroup: 0, *)
id=3	@192.168.17.128  (mysql-5.7.24 ndb-7.5.12, Nodegroup: 0)

[ndb_mgmd(MGM)]	1 node(s)
id=1	@192.168.17.128  (mysql-5.7.24 ndb-7.5.12)

[mysqld(API)]	2 node(s)
id=4	@192.168.17.128  (mysql-5.7.24 ndb-7.5.12)
id=5	@192.168.17.128  (mysql-5.7.24 ndb-7.5.12)
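
As an optional end-to-end check (my addition, not part of the original post), create an NDB table through one SQL node and read it back through the other; if the row shows up on both, the data nodes are serving both API nodes. The database and table names here are arbitrary examples:

On mysql-4:
mysql> CREATE DATABASE IF NOT EXISTS clustertest;
mysql> CREATE TABLE clustertest.t1 (id INT PRIMARY KEY, note VARCHAR(32)) ENGINE=NDBCLUSTER;
mysql> INSERT INTO clustertest.t1 VALUES (1, 'hello from mysql-4');

On mysql-5:
mysql> SELECT * FROM clustertest.t1;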

The MySQL Cluster setup is complete.

Reposted from blog.csdn.net/MLTR1/article/details/85120725