Build a MySQL 8 cluster with one master and two slaves on Kubernetes (k8s)

Environment preparation

A video tutorial accompanying this article: https://www.bilibili.com/video/BV1iw411e7ZE/

First, you need to prepare a Kubernetes cluster and an NFS server. For convenience I use k8s-master as the NFS server. My servers' IP addresses and purposes are as follows:

IP address      Hostname     Purpose
192.168.1.160   k8s-master   Kubernetes master node, doubling as the NFS server
192.168.1.161   k8s-node01   Kubernetes first worker node
192.168.1.162   k8s-node02   Kubernetes second worker node

The Kubernetes cluster is ready.

If you don’t have a Kubernetes cluster yet, please refer to the article I wrote: https://blog.csdn.net/m0_51510236/article/details/130842122

I have previously written articles about setting up MySQL master-slave replication on physical machines and about installing a single MySQL node on Kubernetes; you can refer to those as well.

Build the NFS server

Install NFS

I plan to run the NFS server on the master. Because the two worker nodes need to mount NFS, the nfs-utils package must be installed on them as well:

yum install -y nfs-utils

nfs-utils must be installed on all three servers.

Expose the NFS directories

Because we want to install three mysql servers (one master and two slaves), we need to create three directories for these three servers.

I have written about dynamic storage before; if you want to use dynamic storage instead, refer to https://blog.csdn.net/m0_51510236/article/details/132641343 (the dynamic storage installation steps are in the first half of that article).

We create these three directories directly on the nfs server (k8s-master) and write them into the /etc/exports file (change the directories if you created different ones):

mkdir -p /data/nfs/{mysql-master,mysql-slaver-01,mysql-slaver-02}
cat >> /etc/exports << EOF
/data/nfs/mysql-master *(rw,sync,no_root_squash)
/data/nfs/mysql-slaver-01 *(rw,sync,no_root_squash)
/data/nfs/mysql-slaver-02 *(rw,sync,no_root_squash)
EOF
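As a side note, the brace expansion in the mkdir command can be sanity-checked in a throwaway directory first; this is purely illustrative and the temporary path is not part of the setup:

```shell
# Illustrative only: exercise the same brace expansion in a temp directory
tmp=$(mktemp -d)
mkdir -p "$tmp"/{mysql-master,mysql-slaver-01,mysql-slaver-02}
ls "$tmp"
```

A single mkdir -p call creates all three directories at once.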


Start the NFS server

The exports are configured, so now start the nfs server on the master with this command:

systemctl enable --now nfs-server

After it starts, check whether the directories are exported successfully with:

# Note: replace with your own NFS server address
showmount -e 192.168.1.160

All three servers can see the three exported directories.

Install MySQL Cluster

Create namespace

Create a namespace in which to deploy the MySQL cluster. You could also use the default namespace; here we use a deploy-test namespace. First, create it:

  • Create it with a command:
kubectl create namespace deploy-test
  • Create it from a yaml resource manifest (recommended):
apiVersion: v1
kind: Namespace
metadata:
  name: deploy-test
spec: {}
status: {}


Create a MySQL password secret

To create a Secret that stores the MySQL root password, we can generate its resource manifest directly with this command:

# Note: adjust the root password and the namespace; I set the root password to root
kubectl create secret generic mysql-password --namespace=deploy-test --from-literal=mysql_root_password=root --dry-run=client -o=yaml
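Secret values end up base64-encoded in the generated manifest. You can reproduce the encoding yourself to recognize the value (the password root is the one chosen above):

```shell
# The Secret manifest stores the password base64-encoded; reproduce it:
encoded=$(printf '%s' 'root' | base64)
echo "$encoded"   # prints cm9vdA==
```

Note that base64 is an encoding, not encryption; anyone with read access to the Secret can decode it.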


Save the output to a yaml file and apply it with kubectl.

Install MySQL master node

Create pv and pvc

We installed nfs earlier; now we can create a pv and pvc on top of those directories. (pv and pvc are covered in detail in a previous article of mine: https://blog.csdn.net/m0_51510236/article/details/132482351) The resource manifest:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: deploy-mysql-master-nfs-pv
  namespace: deploy-test
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    # Note: change the IP address and exported path if yours differ
    server: 192.168.1.160
    path: /data/nfs/mysql-master
  storageClassName: "nfs"

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: deploy-mysql-master-nfs-pvc
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "nfs"
  resources:
    requests:
      storage: 1Gi
  volumeName: deploy-mysql-master-nfs-pv

After creation, we can use this command to view the created pv and pvc:

kubectl get pv,pvc -n deploy-test

If creation succeeded, both the pv and pvc will be listed with status Bound.

Master node configuration file

We need to prepare a my.cnf configuration file for the master node. The content of the file is as follows:

[mysqld]
skip-host-cache
skip-name-resolve
datadir          = /var/lib/mysql
socket           = /var/run/mysqld/mysqld.sock
secure-file-priv = /var/lib/mysql-files
pid-file         = /var/run/mysqld/mysqld.pid
user             = mysql
secure-file-priv = NULL
server-id        = 1
log-bin          = master-bin
log_bin_index    = master-bin.index
binlog_do_db     = xiaohh_user
binlog_ignore_db = information_schema
binlog_ignore_db = mysql
binlog_ignore_db = performance_schema
binlog_ignore_db = sys
binlog-format    = ROW

[client]
socket           = /var/run/mysqld/mysqld.sock

!includedir /etc/mysql/conf.d/

A few settings deserve attention:

# server id; must be unique across the mysql nodes
server-id        = 1
# base name of the generated binlog files
log-bin          = master-bin
log_bin_index    = master-bin.index
# which database to replicate; for this test only xiaohh_user is synchronized
binlog_do_db     = xiaohh_user
# which databases to exclude (one per line); excluded databases are not
# replicated. MySQL's built-in system databases are listed here
binlog_ignore_db = information_schema
... # several similar lines omitted
# binlog format
binlog-format    = ROW

Next, a ConfigMap will be created to store this configuration file. You can generate its yaml resource manifest with the following command:

kubectl create configmap mysql-master-cm -n deploy-test --from-file=my.cnf --dry-run=client -o yaml


We will create it in the next step.

Deploy the mysql master node

Here is the complete yaml resource manifest for the mysql master node:

apiVersion: v1
data:
  my.cnf: |
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir          = /var/lib/mysql
    socket           = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file         = /var/run/mysqld/mysqld.pid
    user             = mysql
    secure-file-priv = NULL
    server-id        = 1
    log-bin          = master-bin
    log_bin_index    = master-bin.index
    binlog_do_db     = xiaohh_user
    binlog_ignore_db = information_schema
    binlog_ignore_db = mysql
    binlog_ignore_db = performance_schema
    binlog_ignore_db = sys
    binlog-format    = ROW

    [client]
    socket           = /var/run/mysqld/mysqld.sock

    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  name: mysql-master-cm
  namespace: deploy-test

---

apiVersion: v1
kind: Service
metadata:
  name: deploy-mysql-master-svc
  namespace: deploy-test
  labels:
    app: mysql-master
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: 3306
    nodePort: 30306
  selector:
    app: mysql-master
  type: NodePort
  sessionAffinity: ClientIP

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: deploy-mysql-master
  namespace: deploy-test
spec:
  selector:
    matchLabels:
      app: mysql-master
  serviceName: "deploy-mysql-master-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-master
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - args:
        - --character-set-server=utf8mb4
        - --collation-server=utf8mb4_unicode_ci
        - --lower_case_table_names=1
        - --default-time_zone=+8:00
        name: mysql
        # image: docker.io/library/mysql:8.0.34
        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/mysql:8.0.34
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        - name: mysql-conf
          mountPath: /etc/my.cnf
          readOnly: true
          subPath: my.cnf
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql_root_password
              name: mysql-password
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: deploy-mysql-master-nfs-pvc
      - name: mysql-conf
        configMap:
          name: mysql-master-cm
          items:
          - key: my.cnf
            mode: 0644
            path: my.cnf

I covered the contents of a StatefulSet-based mysql resource manifest in a previous article (https://blog.csdn.net/m0_51510236/article/details/132482351), so I won't explain it all again here. A few places deserve special mention, starting with the image:

To help readers who cannot pull images from Docker Hub, I pushed the same mysql image to Alibaba Cloud's registry in China; either image line works. Next, pay attention to how the config file is mounted: my.cnf is mounted read-only via subPath, so it replaces /etc/my.cnf without hiding anything else.

Then execute the following command to deploy this yaml file:

kubectl apply -f mysql-master.yaml

You should see that the mysql pod is already running.

Next, check the exported NFS directory; the log-bin files have appeared there.

Install the first MySQL Slave node

We just installed the MySQL master node, and next we install the first MySQL slave node.

Create pv and pvc

The steps were explained when setting up the master node, so the yaml manifest is provided directly here:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: deploy-mysql-slave-01-nfs-pv
  namespace: deploy-test
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.160
    path: /data/nfs/mysql-slaver-01
  storageClassName: "nfs"

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: deploy-mysql-slave-01-nfs-pvc
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "nfs"
  resources:
    requests:
      storage: 1Gi
  volumeName: deploy-mysql-slave-01-nfs-pv

Apply it directly to create them.

First slave node configuration file

We need to prepare a my.cnf configuration file for the first slave node. The content of the file is as follows:

[mysqld]
skip-host-cache
skip-name-resolve
datadir          = /var/lib/mysql
socket           = /var/run/mysqld/mysqld.sock
secure-file-priv = /var/lib/mysql-files
pid-file         = /var/run/mysqld/mysqld.pid
user             = mysql
secure-file-priv = NULL
server-id        = 2
log-bin          = slave-bin
relay-log        = slave-relay-bin
relay-log-index  = slave-relay-bin.index

[client]
socket           = /var/run/mysqld/mysqld.sock

!includedir /etc/mysql/conf.d/

A few settings deserve attention:

# server id; note it must differ between nodes
server-id        = 2
# the slave's binlog and relay log files
log-bin          = slave-bin
relay-log        = slave-relay-bin
relay-log-index  = slave-relay-bin.index

Next, a ConfigMap will be created to store this configuration file. You can generate its yaml resource manifest with the following command:

kubectl create configmap mysql-slave-01-cm -n deploy-test --from-file=my.cnf --dry-run=client -o yaml


We will create it in the next step.

Deploy mysql slave node

The yaml manifest is similar to the master node's, and is provided directly here:

apiVersion: v1
data:
  my.cnf: |
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir          = /var/lib/mysql
    socket           = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file         = /var/run/mysqld/mysqld.pid
    user             = mysql
    secure-file-priv = NULL
    server-id        = 2
    log-bin          = slave-bin
    relay-log        = slave-relay-bin
    relay-log-index  = slave-relay-bin.index

    [client]
    socket           = /var/run/mysqld/mysqld.sock

    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  name: mysql-slave-01-cm
  namespace: deploy-test

---

apiVersion: v1
kind: Service
metadata:
  name: deploy-mysql-slave-svc
  namespace: deploy-test
  labels:
    app: mysql-slave
spec:
  ports:
  - port: 3306
    name: mysql
    targetPort: 3306
    nodePort: 30308
  selector:
    app: mysql-slave
  type: NodePort
  sessionAffinity: ClientIP

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: deploy-mysql-slave-01
  namespace: deploy-test
spec:
  selector:
    matchLabels:
      app: mysql-slave
  serviceName: "deploy-mysql-slave-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-slave
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - args:
        - --character-set-server=utf8mb4
        - --collation-server=utf8mb4_unicode_ci
        - --lower_case_table_names=1
        - --default-time_zone=+8:00
        name: mysql
        # image: docker.io/library/mysql:8.0.34
        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/mysql:8.0.34
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        - name: mysql-conf
          mountPath: /etc/my.cnf
          readOnly: true
          subPath: my.cnf
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql_root_password
              name: mysql-password
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: deploy-mysql-slave-01-nfs-pvc
      - name: mysql-conf
        configMap:
          name: mysql-slave-01-cm
          items:
          - key: my.cnf
            mode: 0644
            path: my.cnf

Apply it directly to create them.

Install the second MySQL Slave node

Create pv and pvc

Likewise, only the corresponding yaml resource manifest is provided here:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: deploy-mysql-slave-02-nfs-pv
  namespace: deploy-test
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.1.160
    path: /data/nfs/mysql-slaver-02
  storageClassName: "nfs"

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: deploy-mysql-slave-02-nfs-pvc
  namespace: deploy-test
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: "nfs"
  resources:
    requests:
      storage: 1Gi
  volumeName: deploy-mysql-slave-02-nfs-pv

Apply it directly; you can see the third pv and pvc get created.

Second slave node configuration file

We need to prepare a my.cnf configuration file for the second slave node. The content of the file is as follows:

[mysqld]
skip-host-cache
skip-name-resolve
datadir          = /var/lib/mysql
socket           = /var/run/mysqld/mysqld.sock
secure-file-priv = /var/lib/mysql-files
pid-file         = /var/run/mysqld/mysqld.pid
user             = mysql
secure-file-priv = NULL
server-id        = 3
log-bin          = slave-bin
relay-log        = slave-relay-bin
relay-log-index  = slave-relay-bin.index

[client]
socket           = /var/run/mysqld/mysqld.sock

!includedir /etc/mysql/conf.d/

A few settings deserve attention:

# server id; note it must differ between nodes
server-id        = 3
# the slave's binlog and relay log files
log-bin          = slave-bin
relay-log        = slave-relay-bin
relay-log-index  = slave-relay-bin.index

Next, a ConfigMap will be created to store this configuration file. You can generate its yaml resource manifest with the following command:

kubectl create configmap mysql-slave-02-cm -n deploy-test --from-file=my.cnf --dry-run=client -o yaml


We will create it in the next step.

Deploy the second slave node of mysql

Note that this manifest contains no Service, because the second slave shares deploy-mysql-slave-svc with the first slave. The yaml manifest:

apiVersion: v1
data:
  my.cnf: |
    [mysqld]
    skip-host-cache
    skip-name-resolve
    datadir          = /var/lib/mysql
    socket           = /var/run/mysqld/mysqld.sock
    secure-file-priv = /var/lib/mysql-files
    pid-file         = /var/run/mysqld/mysqld.pid
    user             = mysql
    secure-file-priv = NULL
    server-id        = 3
    log-bin          = slave-bin
    relay-log        = slave-relay-bin
    relay-log-index  = slave-relay-bin.index

    [client]
    socket           = /var/run/mysqld/mysqld.sock

    !includedir /etc/mysql/conf.d/
kind: ConfigMap
metadata:
  name: mysql-slave-02-cm
  namespace: deploy-test

---

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: deploy-mysql-slave-02
  namespace: deploy-test
spec:
  selector:
    matchLabels:
      app: mysql-slave-02
  serviceName: "deploy-mysql-slave-svc"
  replicas: 1
  template:
    metadata:
      labels:
        app: mysql-slave-02
    spec:
      terminationGracePeriodSeconds: 10
      containers:
      - args:
        - --character-set-server=utf8mb4
        - --collation-server=utf8mb4_unicode_ci
        - --lower_case_table_names=1
        - --default-time_zone=+8:00
        name: mysql
        # image: docker.io/library/mysql:8.0.34
        image: registry.cn-shenzhen.aliyuncs.com/xiaohh-docker/mysql:8.0.34
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql
        - name: mysql-conf
          mountPath: /etc/my.cnf
          readOnly: true
          subPath: my.cnf
        env:
        - name: MYSQL_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              key: mysql_root_password
              name: mysql-password
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: deploy-mysql-slave-02-nfs-pvc
      - name: mysql-conf
        configMap:
          name: mysql-slave-02-cm
          items:
          - key: my.cnf
            mode: 0644
            path: my.cnf

Apply it directly to create it.

Make three servers form a cluster

Check the status of the master node

First, open a mysql shell on the master by running:

kubectl exec -itn deploy-test pod/deploy-mysql-master-0 -- mysql -uroot -p

After entering the password you get a mysql prompt.

Run this command to view the master's status:

show master status;

From the output, note the File and Position values; the slaves will need them.
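This walkthrough connects the slaves as root for simplicity. A common hardening step, not part of the original setup, is a dedicated least-privilege replication account on the master; a sketch (the user name repl and its password are placeholders I chose):

```sql
-- Hypothetical hardening step: a dedicated replication account on the master
CREATE USER 'repl'@'%' IDENTIFIED BY 'ChangeMe123!';
GRANT REPLICATION SLAVE ON *.* TO 'repl'@'%';
FLUSH PRIVILEGES;
```

If you use it, substitute master_user='repl' and the matching password in the change master to command below.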

Next, execute this command on each of the two slave nodes:

change master to master_host='deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local', master_port=3306, master_user='root', master_password='root', master_log_file='master-bin.000003', master_log_pos=157, master_connect_retry=30, get_master_public_key=1;

You need to pay attention to the following parameters:

  • master_host: the address of the master. Kubernetes resolves pod names of the form pod-name.service-name.namespace.svc.cluster.local, so the master's mysql address is deploy-mysql-master-0.deploy-mysql-master-svc.deploy-test.svc.cluster.local
  • master_port: the master's mysql port; we kept the default 3306
  • master_user: the mysql user used to log in to the master node
  • master_password: the password for that user
  • master_log_file: the File field from the master status we checked earlier
  • master_log_pos: the Position field from the master status we checked earlier
  • master_connect_retry: the reconnect interval, in seconds
  • get_master_public_key: how to obtain the master's public key for the connection

Adjust the parameters above to match your own environment.
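The pod DNS name used for master_host can be assembled mechanically from its parts; a tiny shell illustration of the naming rule, using the names from the manifests above:

```shell
# Assemble the in-cluster DNS name for the master pod
pod="deploy-mysql-master-0"
svc="deploy-mysql-master-svc"
ns="deploy-test"
host="${pod}.${svc}.${ns}.svc.cluster.local"
echo "$host"
```

For a StatefulSet the pod name is always the StatefulSet name plus an ordinal suffix, so replica 0 of deploy-mysql-master is deploy-mysql-master-0.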

Connect the first Slave

We can open a mysql shell on the first slave node with the following command:

kubectl exec -itn deploy-test pod/deploy-mysql-slave-01-0 -- mysql -uroot -p


First, execute the change master to command prepared above.

Then start replication on the slave:

start slave;


Then view the slave status with:

show slave status\G

The slave status is healthy: both Slave_IO_Running and Slave_SQL_Running should show Yes.
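When scripting health checks, those two flags can be filtered out of the \G output. A sketch using a canned sample; the file here is a stand-in, and in practice you would pipe real mysql output instead:

```shell
# Extract the two replication health flags from a saved SHOW SLAVE STATUS\G dump.
# slave_status.txt is a stand-in sample, not real output.
cat > /tmp/slave_status.txt <<'EOF'
             Slave_IO_Running: Yes
            Slave_SQL_Running: Yes
EOF
grep -E 'Slave_(IO|SQL)_Running' /tmp/slave_status.txt
```

Replication is only healthy when both lines say Yes.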

The first slave node has been added successfully.

Connect the second Slave

We can open a mysql shell on the second slave node with the following command:

kubectl exec -itn deploy-test pod/deploy-mysql-slave-02-0 -- mysql -uroot -p


Execute the same change master to and start slave commands used on the first slave.

Check Slave status:

show slave status\G

The second slave's status should likewise be normal.

Test the master-slave cluster

First, create a database and a table on the master node:

CREATE DATABASE `xiaohh_user`;
USE `xiaohh_user`;

CREATE TABLE `user` (
  `user_id` BIGINT UNSIGNED PRIMARY KEY AUTO_INCREMENT COMMENT 'user id',
  `username` VARCHAR(50) NOT NULL COMMENT 'username',
  `age` TINYINT UNSIGNED DEFAULT 18 COMMENT 'age',
  `gender` TINYINT UNSIGNED DEFAULT 2 COMMENT 'gender; 0=male, 1=female, 2=unknown'
) COMMENT 'user table';


Verify with SHOW DATABASES; that the database exists on the master node.

Then check the slave nodes; the xiaohh_user database should have been replicated to both.

Now insert a row of data on the master:

INSERT INTO `user` (`username`, `age`, `gender`) VALUES ('XiaoHH', '18', '0');


Query the two slave databases; the row should be synchronized to both as well.

Okay, setting up the MySQL master-slave cluster on Kubernetes is complete. Happy coding~

Origin: blog.csdn.net/m0_51510236/article/details/133145221