keepalived high availability | Deploy Ceph distributed storage


Preface article: https://blog.csdn.net/shengweiit/article/details/135168233

keepalived high availability

Deploy two proxy servers to achieve the following effects:

  • Use keepalived to achieve high availability of two proxy servers
  • Configure the VIP as 192.168.4.80
  • Modify the corresponding domain name resolution record

1. Configure the second haproxy proxy server

Deploy HAProxy

Because HAProxy was already deployed on 192.168.4.5, after installing HAProxy on 192.168.4.6 we can simply copy the configuration file from 4.5 to 4.6.

[root@proxy2 ~]# yum -y install haproxy
[root@proxy2 ~]# scp 192.168.4.5:/etc/haproxy/haproxy.cfg /etc/haproxy/
[root@proxy2 ~]# systemctl start haproxy
[root@proxy2 ~]# systemctl enable haproxy
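
Optionally, the copied configuration can be validated and the listener checked before relying on it. This is a minimal sketch; it assumes the frontend port copied from 192.168.4.5 is port 80, so adjust the grep to your actual setup.

[root@proxy2 ~]# haproxy -c -f /etc/haproxy/haproxy.cfg    # -c only validates the configuration file, it does not start anything
[root@proxy2 ~]# ss -ltnp | grep haproxy                   # confirm haproxy is listening on the expected frontend port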


2. Configure keepalived for two proxy servers

Configure the first proxy server proxy (192.168.4.5)

Install the package --> Modify the configuration file --> Start the service

[root@proxy ~]# yum install -y keepalived  # install the package
[root@proxy ~]# sed -i '36,$d' /etc/keepalived/keepalived.conf # delete line 36 through the end of the file
[root@proxy ~]# vim /etc/keepalived/keepalived.conf
global_defs {
  router_id  proxy1                #set the router ID
  vrrp_iptables                    #configure firewall rules (add this line manually)
}
vrrp_instance VI_1 {
  state MASTER                     #the primary server is MASTER (change this to BACKUP on the backup server)
  interface eth0                   #NIC name (use your actual interface name, do not copy blindly)
  virtual_router_id 51
  priority 100                     #server priority; the server with the higher priority takes the VIP
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111                 #the password must be identical on master and backup
  }
  virtual_ipaddress {
    192.168.4.80                   #the server that becomes master holds this VIP (this would be the public IP later on)
  }
}
[root@proxy ~]# systemctl start keepalived
[root@proxy ~]# systemctl enable keepalived

Configure the second proxy server proxy2 (192.168.4.6)

[root@proxy2 ~]# yum install -y keepalived
[root@proxy2 ~]# scp 192.168.4.5:/etc/keepalived/keepalived.conf /etc/keepalived/
[root@proxy2 ~]# vim /etc/keepalived/keepalived.conf
global_defs {
  router_id  proxy2                #set the router ID
  vrrp_iptables                    #configure firewall rules (add this line manually)
}
vrrp_instance VI_1 {
  state BACKUP                     #the primary server is MASTER (on the backup server change this to BACKUP)
  interface eth0                   #NIC name (use your actual interface name, do not copy blindly)
  virtual_router_id 51
  priority 50                      #server priority; the server with the higher priority takes the VIP
  advert_int 1
  authentication {
    auth_type PASS
    auth_pass 1111                 #the password must be identical on master and backup
  }
  virtual_ipaddress {
    192.168.4.80                   #the server that becomes master holds this VIP
  }
}
[root@proxy2 ~]# systemctl start keepalived
[root@proxy2 ~]# systemctl enable keepalived
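
A quick failover check (a minimal sketch, assuming eth0 is the interface named in keepalived.conf): stop keepalived on the master, watch the VIP move to the backup, then restore the master.

[root@proxy ~]# ip a s eth0 | grep 192.168.4.80     # the MASTER (priority 100) should hold the VIP
[root@proxy ~]# systemctl stop keepalived           # simulate a failure of the master
[root@proxy2 ~]# ip a s eth0 | grep 192.168.4.80    # within a few seconds the VIP should appear on the backup
[root@proxy ~]# systemctl start keepalived          # with the higher priority, the master takes the VIP back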


3. Modify the DNS server

Resolve the host name www.lab.com to the VIP 192.168.4.80. The DNS server is 192.168.4.5.

[root@proxy ~]# vim /var/named/lab.com.zone
$TTL 1D
@       IN SOA  @ rname.invalid. (
                                        0       ; serial
                                        1D      ; refresh
                                        1H      ; retry
                                        1W      ; expire
                                        3H )    ; minimum
@       NS      dns.lab.com.
dns     A       192.168.4.5
www     A       192.168.4.80

Restart DNS service

[root@proxy ~]# systemctl restart named

Test:

# On a client
host www.lab.com    # the resolved IP address should be 192.168.4.80
ping 192.168.4.80   # a successful ping shows that 192.168.4.80 is reachable on the network
# On the keepalived server with the higher priority, check that it holds the VIP
ip a s | grep 192

Deploy Ceph distributed storage

Deploy ceph distributed storage to achieve the following effects:

  • Deploy Ceph distributed storage using three servers
  • Provide shared storage through the Ceph file system
  • Migrate the website data from NFS to Ceph storage


Prepare hardware

Clone 3 virtual machines.
Add two 20 GB disks to each virtual machine.
Add a CD-ROM drive to each virtual machine.
Check that the disks and CD-ROM drive were added correctly (a quick check is sketched below).
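
A minimal way to confirm the hardware on each cloned virtual machine (the device names sdb, sdc and sr0 are assumptions and may differ in your environment):

lsblk          # expect two extra 20G disks (e.g. sdb and sdc) and the CD-ROM device (sr0)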


Experimental environment preparation

1. Configure local yum repositories for the three hosts, from which the Ceph packages will be installed

  • Put ceph10.iso into the CD-ROM drive
  • Mount the optical drive and configure it to be mounted automatically at boot (a verification is sketched after the commands).
mkdir /ceph
vim /etc/fstab
/dev/sr0 /ceph iso9660 defaults 0 0
mount -a 
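
To verify the mount (a small sketch; the MON/OSD/Tools directory names are the ones referenced by the repo file configured in the next step):

df -h /ceph        # the ISO should appear mounted on /ceph
ls /ceph           # expect the MON, OSD and Tools directories used by the yum repo file below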


  • Configure the local yum repositories used to install the Ceph packages
vim  /etc/yum.repos.d/ceph.repo
[mon]
name=mon
baseurl=file:///ceph/MON
enabled=1
gpgcheck=0
[osd]
name=osd
baseurl=file:///ceph/OSD
enabled=1
gpgcheck=0
[tools]
name=tools
baseurl=file:///ceph/Tools
enabled=1
gpgcheck=0

yum repolist                #verify the number of packages provided by the YUM repositories


  • Configure an SSH key on node1 so that node1 can log in to node1, node2, and node3 without a password. node1 serves both as the management host of the cluster and as one of the storage servers (a quick connectivity check follows the commands below).
[root@node1 ~]# ssh-keygen  -f /root/.ssh/id_rsa  -N  ''
#-f specifies the file the key is created in
#-N '' sets an empty passphrase (do not protect the key with a passphrase)

#use ssh-copy-id to distribute the key to node1, node2 and node3
[root@node1 ~]# for i in   41  42  43
do
ssh-copy-id  192.168.2.$i
done
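
A quick connectivity check (a minimal sketch): each command should print the remote host name without prompting for a password.

[root@node1 ~]# for i in 41 42 43
do
ssh 192.168.2.$i hostname
done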
  • Add host name mappings on all three machines (do not delete the existing contents of the file). A resolution check is sketched after the copy loop below.
[root@node1 ~]# vim /etc/hosts      #edit the file and manually add the following lines (do not delete the existing data)
192.168.2.41    node1
192.168.2.42    node2
192.168.2.43    node3

[root@node1 ~]# for i in 41 42 43
do
     scp /etc/hosts 192.168.2.$i:/etc
done
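
To confirm that the names resolve on node1 (a minimal sketch; repeat on node2 and node3 if you like):

[root@node1 ~]# getent hosts node1 node2 node3      # each name should map to its 192.168.2.x address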
  • Configure the NTP service to synchronize time.
    Note: node1 acts as the NTP server.
  1. Modify the NTP (chrony) configuration file on node1 and restart the service
[root@node1 ~]# vim /etc/chrony.conf
allow 192.168.2.0/24        #modify line 26
local stratum 10            #modify line 29 (just remove the leading comment mark)
[root@node1 ~]# systemctl restart chronyd
  2. Configure node2 and node3 as clients
[root@node2 ~]# vim /etc/chrony.conf
server 192.168.2.41   iburst              #add this line manually as the second line of the configuration file
[root@node2 ~]# systemctl restart chronyd
[root@node2 ~]# chronyc sources -v # verify that node1 is listed as the time source


[root@node3 ~]# vim /etc/chrony.conf
server 192.168.2.41   iburst              #add this line manually as the second line of the configuration file
[root@node3 ~]# systemctl restart chronyd
[root@node3 ~]# chronyc sources -v


2. Deploy the Ceph cluster

  • Install the management tool ceph-deploy on the node1 host
[root@node1 ~]# yum -y install ceph-deploy
[root@node1 ~]# mkdir ceph-cluster
[root@node1 ~]# cd ceph-cluster
  • Install the Ceph software packages on all Ceph nodes (a verification loop is sketched after the commands below)
[root@node1 ceph-cluster]# for i in node1 node2 node3
do
     ssh $i "yum -y install ceph-mon ceph-osd ceph-mds"
done
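
A quick check that the packages landed on every node (a minimal sketch):

[root@node1 ceph-cluster]# for i in node1 node2 node3
do
ssh $i "rpm -q ceph-mon ceph-osd ceph-mds"
done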
  • Initialize the mon service; this must be done from inside the ceph-cluster directory.
#generate the ceph configuration file
[root@node1 ceph-cluster]# ceph-deploy new node1 node2 node3
#copy the ceph configuration file to node1, node2 and node3 and start the mon service on every host
[root@node1 ceph-cluster]# ceph-deploy mon create-initial

[root@node1 ceph-cluster]# ceph -s                    #check the status (an error at this point is normal)
    cluster 9f3e04b8-7dbb-43da-abe6-b9e3f5e46d2e
     health HEALTH_ERR   # no storage disks have been added yet
     monmap e2: 3 mons at {node1=192.168.2.41:6789/0,node2=192.168.2.42:6789/0,node3=192.168.2.43:6789/0}
     osdmap e45: 0 osds: 0 up, 0 in
  • Use the ceph-deploy tool to initialize (zap) the data disks (run on node1 only). Fill in the disk names according to your actual environment; do not copy them blindly. A quick disk check is sketched after the zap commands below.
ceph-deploy disk zap <hostname>:<disk> <hostname>:<disk>


[root@node1 ceph-cluster]# ceph-deploy disk  zap  node1:sdb  node1:sdc    
[root@node1 ceph-cluster]# ceph-deploy disk  zap  node2:sdb  node2:sdc
[root@node1 ceph-cluster]# ceph-deploy disk  zap  node3:sdb  node3:sdc
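
Before creating the OSDs, the disks can be inspected (a minimal sketch; it assumes the data disks really are sdb and sdc on every node):

[root@node1 ceph-cluster]# for i in node1 node2 node3
do
ssh $i "lsblk /dev/sdb /dev/sdc"        # each disk should be 20G and hold no data you still need
done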
  • Create the OSDs; again, fill in the disk names according to your actual environment.
#each disk is automatically split into two partitions: a fixed 5G partition and one taking all of the remaining capacity
#the 5G partition is the journal cache; the remaining space holds the data
[root@node1 ceph-cluster]# ceph-deploy osd create  node1:sdb  node1:sdc  
[root@node1 ceph-cluster]# ceph-deploy osd create  node2:sdb  node2:sdc
[root@node1 ceph-cluster]# ceph-deploy osd create  node3:sdb  node3:sdc 
[root@node1 ceph-cluster]# ceph -s                 #check the cluster status; it should now be HEALTH_OK

Effective space:
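
With two 20G disks per node the raw capacity is roughly 120G minus the 5G journal partitions; because Ceph keeps three replicas by default, the usable space is roughly one third of that. A minimal sketch for checking it from node1:

[root@node1 ceph-cluster]# ceph df          # global raw/available space and per-pool usage
[root@node1 ceph-cluster]# ceph osd tree    # all six OSDs should be up, grouped under node1, node2 and node3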
If you need to start over (a catch-all remedy when the deployment goes wrong),
clear the current configuration from the management host node1
and remove all of the installed software.

[root@node1 ceph-cluster]# ceph-deploy purge node1
[root@node1 ceph-cluster]# ceph-deploy purge node2
[root@node1 ceph-cluster]# ceph-deploy purge node3

Delete all configuration files and data

[root@node1 ceph-cluster]# ceph-deploy purgedata node1
[root@node1 ceph-cluster]# ceph-deploy purgedata node2
[root@node1 ceph-cluster]# ceph-deploy purgedata node3

Then check the cluster environment before redeploying:
1. yum repositories
2. NTP service
3. SSH keys
4. Host name resolution

3. Deploy ceph file system

Use the disk space provided by the Ceph cluster to store the web page files of the three website servers.

  • Start the MDS service (it can be started on node1, node2, or node3, and mds can also run on several hosts at the same time)
[root@node1 ceph-cluster]# ceph-deploy mds create node3
  • Create the storage pools. A file system consists of inodes (which store metadata about the data) and blocks (which store the data itself), so
    separate pools are created for the inode/metadata information and for the block data.
[root@node1 ceph-cluster]# ceph osd pool create cephfs_data 64
[root@node1 ceph-cluster]# ceph osd pool create cephfs_metadata 64
[root@node1 ceph-cluster]# ceph osd lspools      #list the shared pools
0 rbd,1 cephfs_data,2 cephfs_metadata
  • Create file system
[root@node1 ceph-cluster]# ceph fs new myfs1 cephfs_metadata cephfs_data  # cephfs_metadata stores the inode/metadata information, cephfs_data stores the file data
[root@node1 ceph-cluster]# ceph fs ls
name: myfs1, metadata pool: cephfs_metadata, data pools: [cephfs_data ]
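
A quick status check (a minimal sketch):

[root@node1 ceph-cluster]# ceph mds stat        # expect one MDS reported as up:active
[root@node1 ceph-cluster]# ceph -s              # the cluster should still report HEALTH_OK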

4. Migrate the website data to the Ceph cluster

The web page files of the current website cluster are stored on the NFS server (192.168.2.31).
Migrating the website data means moving the pages stored on that NFS server into the file system created on the Ceph cluster.
The specific steps are as follows:

  • Unmount the NFS share on web1, web2, and web3.
    Stop the web service first so that nobody reads or writes the files while they are being migrated.
[root@web1 ~]# /usr/local/nginx/sbin/nginx -s stop
[root@web2 ~]# /usr/local/nginx/sbin/nginx -s stop
[root@web3 ~]# /usr/local/nginx/sbin/nginx -s stop
[root@web1 ~]# umount /usr/local/nginx/html
[root@web2 ~]# umount /usr/local/nginx/html
[root@web3 ~]# umount /usr/local/nginx/html
[root@web1 ~]# vim /etc/fstab
#192.168.2.31:/web_share/html /usr/local/nginx/html/ nfs defaults 0 0
[root@web2 ~]# vim /etc/fstab
#192.168.2.31:/web_share/html /usr/local/nginx/html/ nfs defaults 0 0
[root@web3 ~]# vim /etc/fstab
#192.168.2.31:/web_share/html /usr/local/nginx/html/ nfs defaults 0 0
  • There are three ways to mount the Ceph file system permanently on the web servers (the operation is required on web1, web2, and web3);
    any one of them can be used. Whichever method you choose, a user name and key are required, and the ceph-common client software must be installed in advance. The user name and key can be obtained from any server in the Ceph cluster; here they are viewed on the node1 host. (A mount check is sketched after the first solution below.)
[root@node1 ~]# cat /etc/ceph/ceph.client.admin.keyring 
[client.admin]
    key = AQA0KtlcRGz5JxAA/K0AD/uNuLI1RqPsNGC7zg==
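
If you only need the key string (the value used as secret= in the mount commands below), it can also be printed directly with the standard ceph auth command; a small sketch:

[root@node1 ~]# ceph auth get-key client.admin      # prints just the key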

/etc/rc.local is the boot script; any command placed in this file runs automatically at boot.
ceph-common is the Ceph client software.

[root@web1 ~]# yum -y install ceph-common
[root@web2 ~]# yum -y install ceph-common
[root@web3 ~]# yum -y install ceph-common
[root@web1 ~]#  mount -t ceph 192.168.2.41:6789:/ /usr/local/nginx/html/ \
-o name=admin,secret=AQA0KtlcRGz5JxAA/K0AD/uNuLI1RqPsNGC7zg==
# keep the mount across reboots
[root@web1 ~]# echo 'mount -t ceph 192.168.2.41:6789:/ /usr/local/nginx/html/ \
-o name=admin,secret=AQA0KtlcRGz5JxAA/K0AD/uNuLI1RqPsNGC7zg==' >> /etc/rc.local 
[root@web1 ~]# chmod +x /etc/rc.local

[root@web2 ~]#  mount -t ceph 192.168.2.41:6789:/ /usr/local/nginx/html/ \
-o name=admin,secret=AQA0KtlcRGz5JxAA/K0AD/uNuLI1RqPsNGC7zg==
[root@web2 ~]# echo 'mount -t ceph 192.168.2.41:6789:/ /usr/local/nginx/html/ \
-o name=admin,secret=AQA0KtlcRGz5JxAA/K0AD/uNuLI1RqPsNGC7zg==' >> /etc/rc.local 
[root@web2 ~]# chmod +x /etc/rc.local
[root@web3 ~]#  mount -t ceph 192.168.2.41:6789:/ /usr/local/nginx/html/ \
-o name=admin,secret=AQA0KtlcRGz5JxAA/K0AD/uNuLI1RqPsNGC7zg==
[root@web3 ~]# echo 'mount -t ceph 192.168.2.41:6789:/ /usr/local/nginx/html/ \
-o name=admin,secret=AQA0KtlcRGz5JxAA/K0AD/uNuLI1RqPsNGC7zg==' >> /etc/rc.local 
[root@web3 ~]# chmod +x /etc/rc.local
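
To confirm the mount on each web server (a minimal sketch shown for web1; repeat on web2 and web3):

[root@web1 ~]# df -h /usr/local/nginx/html      # the filesystem column should show 192.168.2.41:6789:/
[root@web1 ~]# mount | grep ceph                # the mount options should include name=admin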

The second solution is to make the mount permanent through /etc/fstab.
Tip: to use fstab for the permanent mount, the client additionally needs the libcephfs1 package.

[root@web1 ~]# yum -y install libcephfs1
[root@web1 ~]# vim /etc/fstab
… …
192.168.2.41:6789:/ /usr/local/nginx/html/    ceph   defaults,_netdev,name=admin,secret=AQCVcu9cWXkgKhAAWSa7qCFnFVbNCTB2DwGIOA== 0 0

The third solution: for high availability, several monitor IPs can be listed in the mount command.

[root@web1 ~]# mount -t ceph  \
192.168.2.41:6789,192.168.2.42:6789,192.168.2.43:6789:/ /usr/local/nginx/html  \
-o name=admin,secret=<the admin key>
Permanent version via fstab (the entry must stay on a single line):
[root@web1 ~]# vim /etc/fstab
192.168.2.41:6789,192.168.2.42:6789,192.168.2.43:6789:/ /usr/local/nginx/html/ ceph defaults,_netdev,name=admin,secret=<the admin key> 0 0
  • Migrate the data from the NFS server to Ceph:
    back up the web page files on the NFS server, copy the backup to any one of the three website servers, and extract it there (a cross-server check is sketched after the commands below).
[root@nfs ~]# cd /web_share/html/
[root@nfs html]# tar -czpf /root/html.tar.gz ./*
[root@nfs html]# scp /root/html.tar.gz 192.168.2.11:/usr/local/nginx/html/
Log in to web1 and restore the data into the Ceph shared directory:
[root@web1 html]# tar -xf html.tar.gz
[root@web1 html]# rm -rf html.tar.gz
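
Because all three web servers mount the same Ceph file system, the restored pages should immediately be visible everywhere (a minimal sketch):

[root@web2 ~]# ls /usr/local/nginx/html         # the extracted web pages should be listed here
[root@web3 ~]# ls /usr/local/nginx/html         # ... and here, without copying anything again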
  • Start the website services (an end-to-end test is sketched after the commands below)
[root@web1 ~]# /usr/local/nginx/sbin/nginx
[root@web2 ~]# /usr/local/nginx/sbin/nginx
[root@web3 ~]# /usr/local/nginx/sbin/nginx
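
Finally, an end-to-end check from a client (a minimal sketch; it assumes the client uses 192.168.4.5 as its DNS server, as configured earlier, and that the HAProxy frontend listens on port 80):

# On a client that uses 192.168.4.5 for DNS
host www.lab.com          # should resolve to the VIP 192.168.4.80
curl http://www.lab.com   # the request reaches the VIP, is balanced by HAProxy and served from the Ceph-backed pages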

Original article: https://blog.csdn.net/shengweiit/article/details/135193074