### RHCS: Building a High-Availability Cluster with Red Hat Cluster Suite ###

#### Deployment environment ####
1. Prepare three RHEL 6.5 virtual machines from a snapshot.

2. Import the virtual machines; on each one set the hostname, configure the network, add address resolution, and turn off SELinux and the firewall:
vim /etc/sysconfig/network                     # set the hostname
vim /etc/sysconfig/network-scripts/ifcfg-eth0  # configure the network
vim /etc/hosts                                 # add address resolution (see the sketch below)
getenforce                                     # should report Disabled (set SELINUX=disabled in /etc/selinux/config if not)
/etc/init.d/iptables stop                      # stop the firewall
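A minimal sketch of the /etc/hosts entries, using the node addresses that appear later in this deployment (adjust to your network):

172.25.40.110   generic1
172.25.40.111   generic2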
3. On generic1 and generic2, configure the full ("advanced") yum repositories, including the cluster add-on channels shipped on the RHEL 6.5 media, e.g. as sketched below.
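A minimal sketch of the repo file, assuming the RHEL 6.5 install tree is served over http at 172.25.40.250/rhel6.5 (a hypothetical URL; substitute your own):

[root@generic1 ~]# vim /etc/yum.repos.d/rhel-source.repo
[Source]
name=Red Hat Enterprise Linux 6.5
baseurl=http://172.25.40.250/rhel6.5          # hypothetical URL; point at your install tree
gpgcheck=0

[HighAvailability]                            # the add-on channel that provides ricci, luci, rgmanager
name=HighAvailability
baseurl=http://172.25.40.250/rhel6.5/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.40.250/rhel6.5/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.40.250/rhel6.5/ResilientStorage
gpgcheck=0

[root@generic1 ~]# yum repolist               # confirm the channels are visible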
4. On generic1, install ricci (the cluster management agent) and luci (the web GUI), set a password for the ricci user, start both services, and enable them at boot:

[root@generic1 ~]# yum install -y luci ricci  ## install ricci and luci
[root@generic1 ~]# getenforce  # check SELinux
Disabled   # off
[root@generic1 ~]# /etc/init.d/iptables status  # check the firewall status
iptables: Firewall is not running.  # off
[root@generic1 ~]# id ricci  # installing ricci automatically creates its user
uid=140(ricci) gid=140(ricci) groups=140(ricci)
[root@generic1 ~]# passwd ricci  # set ricci's password
Changing password for user ricci.
New password:   # enter the password
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password:   # enter the password again
passwd: all authentication tokens updated successfully.
[root@generic1 ~]# /etc/init.d/ricci start   # start ricci
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@generic1 ~]# /etc/init.d/luci start   # start luci
Adding following auto-detected host IDs (IP addresses/domain names), corresponding to `generic1' address, to the configuration of self-managed certificate `/var/lib/luci/etc/cacert.config' (you can change them by editing `/var/lib/luci/etc/cacert.config', removing the generated certificate `/var/lib/luci/certs/host.pem' and restarting luci):
	(none suitable found, you can still do it manually as mentioned above)

Generating a 2048 bit RSA private key
writing new private key to '/var/lib/luci/certs/host.pem'
Start luci...                                              [  OK  ]
Point your web browser to https://generic1:8084 (or equivalent) to access luci
[root@generic1 ~]# chkconfig ricci on   # enable at boot
[root@generic1 ~]# chkconfig luci on   ## enable at boot
[root@generic1 ~]# netstat -antlp  ## check the listening ports
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   
tcp        0      0 0.0.0.0:8084                0.0.0.0:*                   LISTEN       ## luci listens on port 8084

On generic2, install only ricci, set its password, start it, and enable it at boot:

[root@generic2 ~]# yum install -y ricci 
[root@generic2 ~]# id ricci
uid=140(ricci) gid=140(ricci) groups=140(ricci)
[root@generic2 ~]# passwd ricci   # set ricci's password
Changing password for user ricci.
New password: 
BAD PASSWORD: it is based on a dictionary word
BAD PASSWORD: is too simple
Retype new password: 
passwd: all authentication tokens updated successfully.
[root@generic2 ~]# /etc/init.d/ricci start
Starting system message bus:                               [  OK  ]
Starting oddjobd:                                          [  OK  ]
generating SSL certificates...  done
Generating NSS database...  done
Starting ricci:                                            [  OK  ]
[root@generic2 ~]# chkconfig ricci on

#### Create a cluster from the web interface ####
1. Open https://172.25.40.110:8084 in a browser. Since the site uses https with a self-signed certificate, the browser asks you to trust it: click Advanced -> Add Exception..., then Confirm Security Exception.
Log in with generic1's root user and password.
Click Manage Clusters -> Create and fill in the cluster creation form (cluster name, both node names, and the ricci password).
After filling in the form, click Create Cluster. luci automatically installs the required packages and reboots generic1 and generic2; once both nodes have been added successfully they appear in the cluster view.
At this point the cluster information can also be viewed from both nodes; a command-line check is sketched below.
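A sketch of the check from either node (clustat, cman_tool, and the generated /etc/cluster/cluster.conf are all standard parts of the RHEL 6 cluster stack):

[root@generic1 ~]# clustat                         # both nodes should show Online, with Member Status: Quorate
[root@generic1 ~]# cman_tool status                # quorum and membership details
[root@generic1 ~]# cat /etc/cluster/cluster.conf   # the configuration luci generated and synced through ricci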
#### Configure fence ####
How fence works: when a node fails, the surviving node asks the fence device to power-cycle it before taking over its services, so a half-dead node can never keep writing to shared resources (this prevents split-brain). With virtual machines, the physical host runs fence_virtd, which listens for multicast fencing requests sent by fence_xvm on the cluster nodes and power-cycles the corresponding KVM guest.

Configuration:
1. On the physical host, install the fence packages and create the fence configuration:
[root@foundation40 ~]# yum search fence
[root@foundation40 ~]# yum install -y fence-virtd.x86_64 fence-virtd-libvirt.x86_64 fence-virtd-multicast.x86_64
[root@foundation40 ~]# fence_virtd -c
Module search path [/usr/lib64/fence-virt]:

Available backends:
libvirt 0.1
Available listeners:
multicast 1.2

Listener modules are responsible for accepting requests
from fencing clients.

Listener module [multicast]:

The multicast listener module is designed for use environments
where the guests and hosts may communicate over a network using
multicast.

The multicast address is the address that a client will use to
send fencing requests to fence_virtd.

Multicast IP Address [225.0.0.12]:

Using ipv4 as family.

Multicast IP Port [1229]:

Setting a preferred interface causes fence_virtd to listen only
on that interface. Normally, it listens on all interfaces.
In environments where the virtual machines are using the host
machine as a gateway, this must be set (typically to virbr0).
Set to 'none' for no interface.

Interface [virbr0]: br0    ## change only this prompt, to br0; for everything else just press Enter

The key file is the shared key information which is used to
authenticate fencing requests.  The contents of this file must
be distributed to each physical host and virtual machine within
a cluster.

Key File [/etc/cluster/fence_xvm.key]:

Backend modules are responsible for routing requests to
the appropriate hypervisor or management layer.

Backend module [libvirt]:

Configuration complete.

=== Begin Configuration ===
fence_virtd {
	listener = "multicast";
	backend = "libvirt";
	module_path = "/usr/lib64/fence-virt";
}

listeners {
	multicast {
		key_file = "/etc/cluster/fence_xvm.key";
		address = "225.0.0.12";
		interface = "br0";
		family = "ipv4";
		port = "1229";
	}

}

backends {
	libvirt {
		uri = "qemu:///system";
	}

}

=== End Configuration ===
Replace /etc/fence_virt.conf with the above [y/N]? y

After finishing, open /etc/fence_virt.conf to confirm the interface was changed to br0; a quick check is sketched below.
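A one-line sketch of the check:

[root@foundation40 ~]# grep interface /etc/fence_virt.conf   # should print: interface = "br0";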
Create the key directory, generate the key file, and send it to the generic1 and generic2 nodes (both nodes use the same key):

[root@foundation40 ~]# mkdir /etc/cluster/    ## create the key directory
[root@foundation40 ~]# dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1  # generate a 128-byte random key
1+0 records in
1+0 records out
128 bytes (128 B) copied, 0.000226116 s, 566 kB/s
[root@foundation40 cluster]# scp /etc/cluster/fence_xvm.key  root@generic1:/etc/cluster/  # send the key to generic1
[root@foundation40 cluster]# scp /etc/cluster/fence_xvm.key  root@generic2:/etc/cluster/   # send the key to generic2
[root@foundation40 cluster]# systemctl start fence_virtd.service   ## start the fence service
[root@foundation40 cluster]# systemctl status fence_virtd.service 
● fence_virtd.service - Fence-Virt system host daemon
   Loaded: loaded (/usr/lib/systemd/system/fence_virtd.service; disabled; vendor preset: disabled)
   Active: active (running) since Thu 2019-06-20 11:31:35 CST; 6s ago
  Process: 7962 ExecStart=/usr/sbin/fence_virtd $FENCE_VIRTD_ARGS (code=exited, status=0/SUCCESS)
 Main PID: 7967 (fence_virtd)
   CGroup: /system.slice/fence_virtd.service
           └─7967 /usr/sbin/fence_virtd -w

Both generic1 and generic2 can now see the key; a way to verify that all copies match is sketched below.
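One way to confirm all three machines hold an identical key, sketched with md5sum:

[root@foundation40 ~]# md5sum /etc/cluster/fence_xvm.key   # run the same command on the host...
[root@generic1 ~]# md5sum /etc/cluster/fence_xvm.key       # ...and on both nodes; all three checksums must match
[root@generic2 ~]# md5sum /etc/cluster/fence_xvm.key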
Add a fence device
1. Click Fence Devices -> Add.
2. Select the fence-virt multicast mode and give the device a name.

Bind the device to the nodes (both nodes must be bound):
Click Nodes -> generic1.
Click Add Fence Method and enter a method name.
Click Add Fence Instance.
In the Domain field, enter the virtual machine's UUID (as shown in the virtual machine manager, or read it with virsh as sketched below).
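A sketch of reading the UUID on the physical host, where vm1 stands for the libvirt domain name of the guest running generic1 (a hypothetical name; use your own):

[root@foundation40 ~]# virsh list --all    # find the domain names of the guests
[root@foundation40 ~]# virsh domuuid vm1   # vm1 is a hypothetical domain name; prints the UUID to paste into luci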
After binding succeeds, the method and instance are listed under the node.
Repeat the same steps for generic2.
3. Once both nodes are bound, the fence entries appear in the configuration file (/etc/cluster/cluster.conf).
Test:
From generic1, fence generic2. If generic2 is power-cycled and generic1 takes over its services, fencing works; the test can be driven from the command line as sketched below.
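A sketch of the same test from the command line (fence_node uses the fence method just bound in cluster.conf):

[root@generic1 ~]# fence_node generic2   # generic2 should be power-cycled and its services taken over by generic1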
#### Configure the high-availability service (httpd) ####
1. Install httpd on both generic1 and generic2 and write a default test page:

[root@generic1 ~]# yum install -y httpd
[root@generic1 ~]# cd /var/www/html
[root@generic1 html]# vim index.html
[root@generic1 html]# cat index.html
generic1
[root@generic1 ~]# vim /etc/init.d/httpd  ## httpd's init script, which the cluster will use to control the service
[root@generic1 ~]# /etc/init.d/httpd status
httpd is stopped
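Note that rgmanager will start and stop httpd through this init script, so the service itself should stay stopped and disabled at boot; a sketch:

[root@generic1 ~]# chkconfig httpd off   # the cluster, not init, decides where httpd runs
[root@generic2 ~]# chkconfig httpd off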

2. Add a failover domain
Click Failover Domains -> Add and include generic1 and generic2.
When one node fails, the service automatically switches to a healthy node; when the cluster starts the service, it lands on the node with the higher priority (the lower the number, the higher the priority).
The domain is listed once it has been added successfully; it ends up in cluster.conf roughly as sketched below.
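For reference, a prioritized failover domain lands in /etc/cluster/cluster.conf roughly as follows; a sketch, assuming the domain was named webfail with the ordered and restricted options ticked (names and flags depend on what you enter in luci):

<rm>
    <failoverdomains>
        <failoverdomain name="webfail" ordered="1" restricted="1">
            <failoverdomainnode name="generic1" priority="1"/>  <!-- lower number = higher priority -->
            <failoverdomainnode name="generic2" priority="2"/>
        </failoverdomain>
    </failoverdomains>
</rm>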
3. Add the resources the service uses: Resources -> Add.

Add an IP Address resource (the cluster's external VIP; 172.25.40.150 in this deployment).
Then add a Script resource: Resources -> Add -> Script, pointing at /etc/init.d/httpd.
4. Create the service group (Service Groups -> Add) and attach the IP address and script resources to it, using the failover domain created above.
Test:
From the real machine, access the VIP 172.25.40.150 (the address configured above).
The page is served by generic1, and on generic1 the service shows as running:

[root@generic1 html]# /etc/init.d/httpd status  # httpd is running
httpd (pid  19375) is running...
[root@generic1 html]# clustat
Cluster Status for generic_dd @ Thu Jun 20 14:17:04 2019
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 generic1                                     1 Online, Local, rgmanager
 generic2                                     2 Online, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:apache                  generic1                        started       ## running on generic1
[root@generic1 html]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:44:3b:8c brd ff:ff:ff:ff:ff:ff
    inet 172.25.40.110/24 brd 172.25.40.255 scope global eth0
    inet 172.25.40.150/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe44:3b8c/64 scope link 
       valid_lft forever preferred_lft forever
[root@generic1 html]# echo c > /proc/sysrq-trigger   # crash the kernel to simulate a sudden node failure
Write failed: Broken pipe
[kiosk@foundation40 ~]$    # the ssh session drops as generic1 goes down and is fenced
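Crashing the kernel is the brute-force test; the service can also be moved or stopped gracefully with clusvcadm, sketched below:

[root@generic1 ~]# clusvcadm -r apache -m generic2   # relocate service:apache to generic2
[root@generic1 ~]# clusvcadm -d apache               # disable (stop) the service
[root@generic1 ~]# clusvcadm -e apache               # enable it again; it starts on the preferred node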

Test from the real machine: generic2 has taken over the service.
On generic2:

[root@generic2 html]# clustat 
Cluster Status for generic_dd @ Thu Jun 20 14:24:16 2019
Member Status: Quorate

 Member Name                              ID   Status
 ------ ----                              ---- ------
 generic1                                     1 Online, rgmanager
 generic2                                     2 Online, Local, rgmanager

 Service Name                    Owner (Last)                    State         
 ------- ----                    ----- ------                    -----         
 service:apache                  generic2                        started       
[root@generic2 html]# ip addr show
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 16436 qdisc noqueue state UNKNOWN 
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP qlen 1000
    link/ether 52:54:00:5f:fa:68 brd ff:ff:ff:ff:ff:ff
    inet 172.25.40.111/24 brd 172.25.40.255 scope global eth0
    inet 172.25.40.150/24 scope global secondary eth0
    inet6 fe80::5054:ff:fe5f:fa68/64 scope link 
       valid_lft forever preferred_lft forever
[root@generic2 html]# /etc/init.d/httpd status
httpd (pid  10362) is running...
[root@generic2 html]# /etc/init.d/httpd stop
Stopping httpd:                            

Test: with httpd stopped by hand on generic2, rgmanager detects the failed service and recovers it according to the service's recovery policy (restarting it, or relocating it back to generic1).

Origin blog.csdn.net/weixin_44821839/article/details/92976539