ceph cluster change IP address

Modify the IP entries in /etc/hosts (every node host in the cluster must be updated)

vi /etc/hosts

192.168.1.185 node1
192.168.1.186 node2
192.168.1.187 node3
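
To avoid editing the file on every host by hand, one convenience (not part of the original steps, assuming root SSH access between nodes) is to edit it once on node1 and copy it out:

scp /etc/hosts root@node2:/etc/hosts
scp /etc/hosts root@node3:/etc/hosts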

Modify the IP addresses in ceph.conf

vi ceph.conf

[global]
fsid = b679e017-68bd-4f06-83f3-f03be36d97fe
mon_initial_members = node1, node2, node3
mon_host = 192.168.1.185,192.168.1.186,192.168.1.187
auth_cluster_required = cephx
auth_service_required = cephx
auth_client_required = cephx

After the modification is complete, push the updated config to all nodes with ceph-deploy --overwrite-conf config push node1 node2 node3
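
To confirm the push landed, an optional spot check (assuming passwordless SSH to the nodes):

ssh node2 grep mon_host /etc/ceph/ceph.conf
ssh node3 grep mon_host /etc/ceph/ceph.conf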

Get the monmap

Run monmaptool --create --generate -c /etc/ceph/ceph.conf ./monmap to generate a monmap file in the current directory.
If the cluster IPs have not been changed yet, you can instead export the live monmap with ceph mon getmap -o ./monmap.
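
Since this walkthrough has already changed the IPs in ceph.conf, the --create --generate path is the one used here; both options for reference:

# IPs already changed in ceph.conf: build a fresh monmap from it
monmaptool --create --generate -c /etc/ceph/ceph.conf ./monmap
# IPs not changed yet and the mons still reachable: export the live monmap instead
ceph mon getmap -o ./monmap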

Use monmaptool --print monmap to view the cluster monmap as follows

[root@node1 ~]# monmaptool --print monmap 
monmaptool: monmap file monmap
epoch 0
fsid b679e017-68bd-4f06-83f3-f03be36d97fe
last_changed 2021-04-02 15:21:23.495691
created 2021-04-02 15:21:23.495691
0: 192.168.1.185:6789/0 mon.node1 # this is the part to change: write the new cluster network IPs here
1: 192.168.1.186:6789/0 mon.node2
2: 192.168.1.187:6789/0 mon.node3
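
An optional sanity check: the fsid printed in the monmap should match the fsid in ceph.conf, e.g. (assuming the default config path):

grep fsid /etc/ceph/ceph.conf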

Delete the entries corresponding to each mon ID

Use monmaptool --rm node1 --rm node2 --rm node3 ./monmap to delete the entry for each mon ID (node1, node2, node3 are the IDs taken from mon.node1, mon.node2, mon.node3 above)

[root@node1 ~]# monmaptool --rm node1 --rm node2 --rm node3 monmap 
monmaptool: monmap file monmap
monmaptool: removing node1
monmaptool: removing node2
monmaptool: removing node3
monmaptool: writing epoch 0 to monmap (0 monitors)

Check the monmap content again as follows

[root@node1 ~]# monmaptool --print monmap 
monmaptool: monmap file monmap
epoch 0
fsid b679e017-68bd-4f06-83f3-f03be36d97fe
last_changed 2021-04-02 15:21:23.495691
created 2021-04-02 15:21:23.495691

Now add the new IPs of our nodes back into the monmap:

[root@node1 ~]# monmaptool --add node1 192.168.1.185:6789 --add node2 192.168.1.186:6789 --add node3 192.168.1.187:6789 monmap
monmaptool: monmap file monmap
monmaptool: writing epoch 0 to monmap (3 monitors)
[root@node1 ~]# monmaptool --print monmap 
epoch 0
fsid b679e017-68bd-4f06-83f3-f03be36d97fe
last_changed 2021-04-02 15:21:23.495691
created 2021-04-02 15:21:23.495691
0: 192.168.1.185:6789/0 mon.node1
1: 192.168.1.186:6789/0 mon.node2
2: 192.168.1.187:6789/0 mon.node3

At this point the monmap has been modified successfully. Next, copy the modified monmap to every node in the cluster with scp.
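
For example (user and destination path are illustrative; adjust to your setup):

scp ./monmap root@node2:~/
scp ./monmap root@node3:~/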

Inject the monmap into the cluster

Stop all Ceph services in the cluster before injecting (systemctl stop ceph.service on each node), because all components need to re-read the configuration file; in particular, the mons must reload the monmap so the cluster communicates over the new IPs.

# Run this on every node; replace {node1} with the local host's name
systemctl stop ceph

ceph-mon -i {node1} --inject-monmap ./monmap

# If ceph-mon -i fails with a LOCK error like the one below, try the following
# rocksdb: IO error: While lock file: /var/lib/ceph/mon/ceph-node1/store.db/LOCK: Resource temporarily unavailable
ps -ef|grep ceph
# Find the ceph processes with ps, then kill -9 each pid to stop everything related to ceph
kill -9 1379
# Some of these processes may restart after being killed; that is fine, just run ceph-mon -i {node1} --inject-monmap ./monmap again
ceph-mon -i {node1} --inject-monmap ./monmap

After the injection is complete, restart the Ceph services:

systemctl restart ceph.service
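
Once the monitors are back up, an optional check (not part of the original steps) is to confirm they now report the new addresses:

ceph mon dump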

Restart all machines

init 6

After restarting all machines, check the ceph cluster status

root@node1:~# ceph -s
  cluster:
    id:     b679e017-68bd-4f06-83f3-f03be36d97fe
    health: HEALTH_WARN
            Degraded data redundancy: 1862/5586 objects degraded (33.333%), 64 pgs degraded, 64 pgs undersized
            application not enabled on 1 pool(s)
            15 slow ops, oldest one blocked for 115 sec, osd.1 has slow ops

  services:
    mon: 3 daemons, quorum node1,node2,node3
    mgr: node3(active), standbys: node1, node2
    osd: 3 osds: 2 up, 2 in

  data:
    pools:   1 pools, 64 pgs
    objects: 1.86 k objects, 6.5 GiB
    usage:   15 GiB used, 1.8 TiB / 1.8 TiB avail
    pgs:     1862/5586 objects degraded (33.333%)
             64 active+undersized+degraded

  io:
    client:   7.3 KiB/s wr, 0 op/s rd, 0 op/s wr
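
The HEALTH_WARN above is expected right after the change while one OSD is still down; if it persists, a couple of standard commands to investigate (a suggestion beyond the original walkthrough):

ceph health detail
ceph osd tree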

Origin blog.csdn.net/qq_36607860/article/details/115413824