Kubernetes Cluster Backup and Restore

I. Backup
 
Approach:
① Back up the running cluster's etcd data to disk.
② For a cluster created with kubeasz, also back up the CA certificate files and the ansible hosts file.
 
[On the deploy node]
1: Create a directory for the backup files
[root@master ~]# mkdir -p /backup/k8s1
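If backups are taken more than once, a date-stamped directory keeps older snapshots from being overwritten; a minimal variation (the naming scheme is just an example):
[root@master ~]# mkdir -p /backup/k8s_$(date +%Y%m%d)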
 
2: Save the etcd data into the backup directory
[root@master ~]# ETCDCTL_API=3 etcdctl snapshot save /backup/k8s1/snapshot.db
Snapshot saved at /backup/k8s1/snapshot.db
[root@master ~]# du -h /backup/k8s1/snapshot.db
1.6M    /backup/k8s1/snapshot.db
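Note: if etcd serves clients over TLS (the kubeasz default), etcdctl may need explicit endpoint and certificate flags. A sketch; the certificate paths below are assumptions and should be taken from the --trusted-ca-file/--cert-file/--key-file flags in the etcd unit file:
[root@master ~]# ETCDCTL_API=3 etcdctl --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/kubernetes.pem \
  --key=/etc/kubernetes/ssl/kubernetes-key.pem \
  snapshot save /backup/k8s1/snapshot.db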
 
3: Copy the ssl files under the kubernetes directory
[root@master ~]# cp /etc/kubernetes/ssl/* /backup/k8s1/
[root@master ~]# ll /backup/k8s1/
total 1628
-rw-r--r--. 1 root root    1675 Dec 10 21:21 admin-key.pem
-rw-r--r--. 1 root root    1391 Dec 10 21:21 admin.pem
-rw-r--r--. 1 root root     997 Dec 10 21:21 aggregator-proxy.csr
-rw-r--r--. 1 root root     219 Dec 10 21:21 aggregator-proxy-csr.json
-rw-------. 1 root root    1675 Dec 10 21:21 aggregator-proxy-key.pem
-rw-r--r--. 1 root root    1383 Dec 10 21:21 aggregator-proxy.pem
-rw-r--r--. 1 root root     294 Dec 10 21:21 ca-config.json
-rw-r--r--. 1 root root    1675 Dec 10 21:21 ca-key.pem
-rw-r--r--. 1 root root    1350 Dec 10 21:21 ca.pem
-rw-r--r--. 1 root root    1082 Dec 10 21:21 kubelet.csr
-rw-r--r--. 1 root root     283 Dec 10 21:21 kubelet-csr.json
-rw-------. 1 root root    1675 Dec 10 21:21 kubelet-key.pem
-rw-r--r--. 1 root root    1452 Dec 10 21:21 kubelet.pem
-rw-r--r--. 1 root root    1273 Dec 10 21:21 kubernetes.csr
-rw-r--r--. 1 root root     488 Dec 10 21:21 kubernetes-csr.json
-rw-------. 1 root root    1679 Dec 10 21:21 kubernetes-key.pem
-rw-r--r--. 1 root root    1639 Dec 10 21:21 kubernetes.pem
-rw-r--r--. 1 root root 1593376 Dec 10 21:32 snapshot.db
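Item ② of the approach also calls for backing up the ansible hosts file; assuming kubeasz's standard /etc/ansible layout:
[root@master ~]# cp /etc/ansible/hosts /backup/k8s1/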
4: Simulate a cluster crash by running the clean.yml playbook
[root@master ~]# cd /etc/ansible/
[root@master ansible]# ansible-playbook 99.clean.yml
 
 
II. Restore
 
[On the deploy node]
1: Restore the CA certificates
[root@master ansible]# mkdir -p /etc/kubernetes/ssl
[root@master ansible]# cp /backup/k8s1/ca* /etc/kubernetes/ssl/
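Before rebuilding, it is worth confirming that the restored CA is the expected one; openssl can print its subject and validity period:
[root@master ansible]# openssl x509 -in /etc/kubernetes/ssl/ca.pem -noout -subject -dates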
 
2: Rebuild the cluster step by step
[root@master ansible]# ansible-playbook 01.prepare.yml
[root@master ansible]# ansible-playbook 02.etcd.yml
[root@master ansible]# ansible-playbook 03.docker.yml
[root@master ansible]# ansible-playbook 04.kube-master.yml
[root@master ansible]# ansible-playbook 05.kube-node.yml
 
3: Stop the etcd service
[root@master ansible]# ansible etcd -m service -a 'name=etcd state=stopped'
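To confirm the service is really down on every member (the `|| true` keeps ansible from treating the inactive state as a failure):
[root@master ansible]# ansible etcd -m shell -a 'systemctl is-active etcd || true'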
 
4: Clear the old etcd data
[root@master ansible]# ansible etcd -m file -a 'name=/var/lib/etcd/member/ state=absent'
[DEPRECATION WARNING]: The TRANSFORM_INVALID_GROUP_CHARS settings is set to allow bad characters in group names by default, this will change, but still be user
configurable on deprecation. This feature will be removed in version 2.10. Deprecation warnings can be disabled by setting deprecation_warnings=False in ansible.cfg.
[WARNING]: Invalid characters were found in group names but not replaced, use -vvvv to see details

192.168.1.203 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "path": "/var/lib/etcd/member/",
    "state": "absent"
}
192.168.1.202 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "path": "/var/lib/etcd/member/",
    "state": "absent"
}
192.168.1.200 | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    },
    "changed": true,
    "path": "/var/lib/etcd/member/",
    "state": "absent"
}
5: Sync the backed-up etcd data file to every etcd node
[root@master ansible]# for i in 202 203; do rsync -av /backup/k8s1 192.168.1.$i:/backup/; done
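Since the etcd group already exists in the ansible inventory, the same sync can be done with ad-hoc ansible instead of rsync; a sketch:
[root@master ansible]# ansible etcd -m file -a 'path=/backup/k8s1 state=directory'
[root@master ansible]# ansible etcd -m copy -a 'src=/backup/k8s1/snapshot.db dest=/backup/k8s1/snapshot.db'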
 
6: Run the following restore on each etcd node, then restart etcd
 
## Note: In /etc/systemd/system/etcd.service, find --initial-cluster etcd1=https://xxxx:2380,etcd2=https://xxxx:2380,etcd3=https://xxxx:2380 and use it as the --initial-cluster value in the restore command below; set --name to the current etcd node's name, and set --initial-advertise-peer-urls to the current node's IP:2380.
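A quick way to pull those values out of the unit file on any etcd node:
[root@master ansible]# grep initial-cluster /etc/systemd/system/etcd.service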
 
① [On the deploy node]
[root@master ansible]# cd /backup/k8s1/
[root@master k8s1]# ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --name etcd1 --initial-cluster etcd1=https://192.168.1.200:2380,etcd2=https://192.168.1.202:2380,etcd3=https://192.168.1.203:2380 --initial-cluster-token etcd-cluster-0 --initial-advertise-peer-urls https://192.168.1.200:2380
2019-12-10 22:26:50.037127 I | mvcc: restore compact to 46505
2019-12-10 22:26:50.052409 I | etcdserver/membership: added member 12229714d8728d0e [https://192.168.1.200:2380] to cluster b8ef796b710cde7d
2019-12-10 22:26:50.052451 I | etcdserver/membership: added member 552fb05951af50c9 [https://192.168.1.203:2380] to cluster b8ef796b710cde7d
2019-12-10 22:26:50.052474 I | etcdserver/membership: added member 8b4f4a6559bf7c2c [https://192.168.1.202:2380] to cluster b8ef796b710cde7d
 
After the step above, a directory named [node-name].etcd is generated under the current directory:
[root@master k8s1]# tree etcd1.etcd/
etcd1.etcd/
└── member
    ├── snap
    │   ├── 0000000000000001-0000000000000003.snap
    │   └── db
    └── wal
        └── 0000000000000000-0000000000000000.wal
[root@master k8s1]# cp -r etcd1.etcd/member /var/lib/etcd/
[root@master k8s1]# systemctl restart etcd
 
② [On the etcd2 node]
[root@node1 ~]# cd /backup/k8s1/
[root@node1 k8s1]# ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --name etcd2 --initial-cluster etcd1=https://192.168.1.200:2380,etcd2=https://192.168.1.202:2380,etcd3=https://192.168.1.203:2380 --initial-cluster-token etcd-cluster-0 --initial-advertise-peer-urls https://192.168.1.202:2380
2019-12-10 22:28:35.175032 I | mvcc: restore compact to 46505
2019-12-10 22:28:35.232386 I | etcdserver/membership: added member 12229714d8728d0e [https://192.168.1.200:2380] to cluster b8ef796b710cde7d
2019-12-10 22:28:35.232507 I | etcdserver/membership: added member 552fb05951af50c9 [https://192.168.1.203:2380] to cluster b8ef796b710cde7d
2019-12-10 22:28:35.232541 I | etcdserver/membership: added member 8b4f4a6559bf7c2c [https://192.168.1.202:2380] to cluster b8ef796b710cde7d
[root@node1 k8s1]# tree etcd2.etcd/
etcd2.etcd/
└── member
    ├── snap
    │   ├── 0000000000000001-0000000000000003.snap
    │   └── db
    └── wal
        └── 0000000000000000-0000000000000000.wal
[root@node1 k8s1]# cp -r etcd2.etcd/member /var/lib/etcd/
[root@node1 k8s1]# systemctl restart etcd
 
③ [On the etcd3 node]
[root@node2 ~]# cd /backup/k8s1/
[root@node2 k8s1]# ETCDCTL_API=3 etcdctl snapshot restore snapshot.db --name etcd3 --initial-cluster etcd1=https://192.168.1.200:2380,etcd2=https://192.168.1.202:2380,etcd3=https://192.168.1.203:2380 --initial-cluster-token etcd-cluster-0 --initial-advertise-peer-urls https://192.168.1.203:2380
2019-12-10 22:28:55.943364 I | mvcc: restore compact to 46505
2019-12-10 22:28:55.988674 I | etcdserver/membership: added member 12229714d8728d0e [https://192.168.1.200:2380] to cluster b8ef796b710cde7d
2019-12-10 22:28:55.988726 I | etcdserver/membership: added member 552fb05951af50c9 [https://192.168.1.203:2380] to cluster b8ef796b710cde7d
2019-12-10 22:28:55.988754 I | etcdserver/membership: added member 8b4f4a6559bf7c2c [https://192.168.1.202:2380] to cluster b8ef796b710cde7d
[root@node2 k8s1]# tree etcd3.etcd/
etcd3.etcd/
└── member
    ├── snap
    │   ├── 0000000000000001-0000000000000003.snap
    │   └── db
    └── wal
        └── 0000000000000000-0000000000000000.wal
 
[root@node2 k8s1]# cp -r etcd3.etcd/member /var/lib/etcd/
[root@node2 k8s1]# systemctl restart etcd
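With all three members restarted, cluster health can be verified from any node; the certificate paths below are assumptions and should match the flags in etcd.service:
[root@node2 k8s1]# ETCDCTL_API=3 etcdctl \
  --endpoints=https://192.168.1.200:2379,https://192.168.1.202:2379,https://192.168.1.203:2379 \
  --cacert=/etc/kubernetes/ssl/ca.pem \
  --cert=/etc/kubernetes/ssl/kubernetes.pem \
  --key=/etc/kubernetes/ssl/kubernetes-key.pem \
  endpoint health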
 
7: Rebuild the cluster network from the deploy node
[root@master ansible]# cd /etc/ansible/
[root@master ansible]# ansible-playbook tools/change_k8s_network.yml
 
8: Check whether the pods and services were restored successfully
[root@master ansible]# kubectl get svc
NAME         TYPE        CLUSTER-IP      EXTERNAL-IP   PORT(S)    AGE
kubernetes   ClusterIP   10.68.0.1       <none>        443/TCP    5d5h
nginx        ClusterIP   10.68.241.175   <none>        80/TCP     5d4h
tomcat       ClusterIP   10.68.235.35    <none>        8080/TCP   76m

[root@master ansible]# kubectl get pods

NAME                     READY   STATUS    RESTARTS   AGE
nginx-7c45b84548-4998z   1/1     Running   0          5d4h
tomcat-8fc9f5995-9kl5b   1/1     Running   0          77m
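As a final smoke test, the services above can be hit directly on their ClusterIPs from any node inside the cluster network:
[root@master ansible]# curl -sI http://10.68.241.175
[root@master ansible]# curl -sI http://10.68.235.35:8080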
 
III. Automated Backup and Restore
 
1: One-command backup
[root@master ansible]# ansible-playbook /etc/ansible/23.backup.yml
 
2: Simulate a failure
[root@master ansible]# ansible-playbook /etc/ansible/99.clean.yml
 
Edit /etc/ansible/roles/cluster-restore/defaults/main.yml to specify which etcd snapshot backup to restore from; if left unmodified, the most recent snapshot is used.
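For example (the variable name below is taken from the kubeasz cluster-restore role of this era and may differ between versions; the snapshot filename is hypothetical):
# /etc/ansible/roles/cluster-restore/defaults/main.yml
db_to_restore: "snapshot_201912102200.db"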
 
3: Run the automated restore
[root@master ansible]# ansible-playbook /etc/ansible/24.restore.yml
[root@master ansible]# ansible-playbook /etc/ansible/tools/change_k8s_network.yml
 

Reprinted from www.cnblogs.com/douyi/p/12019807.html