Red Hat Linux high availability with RHCS [6]

# The difference between virt-manager, libvirtd, QEMU, KVM and virsh
# kvm: the underlying virtualization
# qemu: provides the virtual peripheral devices, such as IO devices (the top command shows that each virtual machine process is qemu-kvm)
# libvirtd: the management interface on top of the underlying virtualization (stopping libvirtd does not affect running virtual machines, but virt-manager can no longer see them)
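A quick way to see all three layers on the physical host (a small sketch; any running guest will do):
virsh list --all ## libvirtd layer: list the defined virtual machines
ps ax | grep qemu-kvm ## qemu layer: every running guest shows up as a qemu-kvm process
lsmod | grep kvm ## kvm layer: the kvm and kvm_intel/kvm_amd kernel modules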

#### RHCS suite (Red Hat High Availability) ####
1. Start two virtual machines and configure the yum source, adding the HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem repositories (the storage and file-system repos are used later; HA is what we need first)
# Disable firewalld and selinux
# Add local host resolution for every node in /etc/hosts
[HighAvailability]
name=HighAvailability
baseurl=http://172.25.0.250/rhel6.5/x86_64/dvd/HighAvailability
gpgcheck=0

[LoadBalancer]
name=LoadBalancer
baseurl=http://172.25.0.250/rhel6.5/x86_64/dvd/LoadBalancer
gpgcheck=0

[ResilientStorage]
name=ResilientStorage
baseurl=http://172.25.0.250/rhel6.5/x86_64/dvd/ResilientStorage
gpgcheck=0

[ScalableFileSystem]
name=ScalableFileSystem
baseurl=http://172.25.0.250/rhel6.5/x86_64/dvd/ScalableFileSystem
gpgcheck=0
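After saving the repo file it is worth checking that all four repositories are actually picked up (a quick sanity check; the repo file name is whatever you chose under /etc/yum.repos.d/):
yum clean all
yum repolist ## HighAvailability, LoadBalancer, ResilientStorage and ScalableFileSystem should all be listed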

2. Install ricci on all HA nodes; here server2 acts as both the management server and an HA node
# luci is the web management interface tool
1) server2:
yum install -y luci ricci

2)server5:
yum install -y ricci

3) The installation creates a ricci user; set a password for it (here: RedHat)
passwd ricci

4) Start ricci on both nodes (and luci on server2) and enable them at boot
/etc/init.d/ricci start
/etc/init.d/luci start
chkconfig ricci on
chkconfig luci on

5) Log in to the luci web interface
https://172.25.0.2:8084
root
RedHat

Add server2 and server5 to the cluster ## pay attention to hostname resolution

6) Check whether the nodes were added successfully
cat /etc/cluster/cluster.conf

<?xml version="1.0"?>
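For orientation, a freshly created two-node cluster configuration looks roughly like the sketch below; this is illustrative only, and the real file written by luci will differ in config_version and extra attributes:
<cluster config_version="1" name="westos_ha">
    <clusternodes>
        <clusternode name="server2" nodeid="1"/>
        <clusternode name="server5" nodeid="2"/>
    </clusternodes>
    <cman expected_votes="1" two_node="1"/>
</cluster>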

# View the cluster status
[root@server2 ~]# clustat
Cluster Status for westos_ha @ Sat Nov 17 11:00:01 2018
Member Status: Quorate

 Member Name        ID   Status
 ------ ----        ---- ------
 server2             1   Online, Local
 server5             2   Online

#########################################################
cman          distributed cluster manager
rgmanager     resource manager, responsible for taking over resources
modclusterd   cluster status monitoring
clvmd         clustered logical volumes / shared storage
#########################################################
7) Add a fence device
Fence Devices -> Add -> Fence virt (Multicast Mode) -> vmfence

8) Install the fence packages (on the physical machine)
yum search fence

fence-virtd.x86_64
fence-virtd-libvirt.x86_64
fence-virtd-multicast.x86_64

9) Configure fence
fence_virtd -c ## the resulting configuration file /etc/fence_virt.conf can be inspected with vim

Interface [virbr0]: br0 ## choose the br0 device; accept the defaults for the other prompts

# Generate fence_xvm.key
dd if=/dev/urandom of=/etc/cluster/fence_xvm.key bs=128 count=1

10) Distribute fence_xvm.key to the HA nodes; the nodes are fenced by means of this key
scp fence_xvm.key root@server2:/etc/cluster/
scp fence_xvm.key root@server5:/etc/cluster/
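To make sure every copy of the key is identical, the checksums can be compared from the physical machine (a quick check, nothing more):
md5sum /etc/cluster/fence_xvm.key
ssh root@server2 md5sum /etc/cluster/fence_xvm.key
ssh root@server5 md5sum /etc/cluster/fence_xvm.key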

11) Configure fencing for each node in the web interface
Nodes -> server2 -> Add Fence Method -> vmfence-1 -> Add Fence Instance -> vmfence (xvm Virtual Machine Fencing) -> Domain (fill in the UUID of server2)

Do the same for server5.

12) Start fence_virtd and test it
systemctl start fence_virtd.service

[root@server2 cluster]# fence_node server5 ## server5 will be rebooted
fence server5 success

# netstat -antulp | grep 1229 ## fence_virtd uses UDP port 1229

13) Add a failover domain with failback and priority settings
Failover Domains -> Add -> webfail -> Prioritized (fail services over to nodes in order of priority) -> Restricted (the service may only run on the specified nodes) -> No Failback (when the failed node becomes available again, do not move the service back to the higher-priority node; if this option is left unchecked, the service is moved back to the recovered node according to priority)
# server2 has priority 1 and server5 has priority 10; the lower the number, the higher the priority

14) Add a VIP resource
# Resources -> Add -> IP Address
172.25.0.100
24
Monitor Link (checked): monitor the link state
Number of Seconds to Sleep After Removing an IP Address: 5

# Add the service script
# Resources -> Add -> Script
Name: httpd
Full Path to Script File: /etc/init.d/httpd

Install and start httpd on both nodes and write a default publishing page

15) add a service group to the cluster
Service Groups -> Add

Service Name: apache
Automatically Start This Service (checked)
Run Exclusive (check it for now)
Failover Domain: webfail
Recovery Policy: Relocate (move the service to another node when it fails)

# Then add the resources
Add Resource -> 172.25.0.100/24 -> Script ## add the IP address and the startup script

3. Test access
172.25.0.100 ## by default this is served by server2
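One simple way to watch the failover by hand, assuming the two nodes publish different test pages: run curl from the physical machine and relocate the service from one of the cluster nodes.
curl 172.25.0.100 ## returns server2's page
clusvcadm -r apache -m server5 ## run on a cluster node: relocate the service
curl 172.25.0.100 ## now returns server5's page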

4. Crash the kernel on server2 to simulate a split brain and test fencing
echo c > /proc/sysrq-trigger

Fencing succeeds: server2 is rebooted automatically after the crash, and once it comes back up httpd returns to server2, because No Failback was not selected

Attach a storage device to the cluster
# 1. Start another virtual machine, server3, to provide shared storage over iSCSI
On server3 (the server side) install:
yum install -y scsi-*

On the HA nodes (the client side) install:
yum install -y iscsi-*

# 2. Configure the storage
On server3:
vim /etc/tgt/targets.conf
line 38: define the target here (a sketch is shown below)
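The target definition itself is not reproduced above; around that line it would look something like the sketch below. The IQN matches the discovery output shown later, while the backing device /dev/vdb and the initiator addresses are assumptions for illustration:
<target iqn.2018-11.com.example:server.target1>
    backing-store /dev/vdb
    initiator-address 172.25.0.2
    initiator-address 172.25.0.5
</target>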

Start it:
/etc/init.d/tgtd start

tgt-admin -s ## view the exported storage
ps ax ## there should be two tgtd processes (check this; if there are four, something went wrong)

Discover the shared storage on the HA nodes:
[root@server2 ~]# iscsiadm -m discovery -t st -p 172.25.0.3
Starting iscsid: [  OK  ]
172.25.0.3:3260,1 iqn.2018-11.com.example:server.target1

[root@server2 ~]# iscsiadm -m node -l ## log in to the target so the storage appears locally

fdisk -cu /dev/sdb ## create only one partition, which makes recovery easier if the partition table gets damaged

Sharing the storage over iscsi:
On the cluster management machine:
yum install -y scsi-target-utils ## version 1.0.24-10.el6 here
vim /etc/tgt/targets.conf

vim /etc/lvm/lvm.conf
line 462: locking_type = 3 ## switch LVM to cluster-aware locking
[root@server2 html]# /etc/init.d/clvmd status
clvmd (pid 1235) is running...
Clustered Volume Groups: cluster_vg
Active clustered Logical Volumes: demo
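Instead of editing lvm.conf by hand, the lvmconf helper can make the same change; either way clvmd has to be restarted afterwards (run on both HA nodes):
lvmconf --enable-cluster ## sets locking_type = 3 in /etc/lvm/lvm.conf
/etc/init.d/clvmd restart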

// ext4 is a local file system (it does not stay in sync across nodes) //
mount:
prerequisite: /etc/init.d/clvmd status shows it is running
              locking_type = 3
1. On the two cluster virtual machines
dd2:
pvcreate /dev/sdb1
vgcreate dangdang /dev/sdb1
lvcreate -L 4G -n dd dangdang
dd3:
pvs
  PV         VG       Fmt  Attr PSize  PFree
  /dev/sda2  VolGroup lvm2 a--  19.51g     0
  /dev/sdb1  dangdang lvm2 a--   8.00g  8.00g

vgs
  VG       #PV #LV #SN Attr   VSize  VFree
  VolGroup   1   2   0 wz--n- 19.51g     0
  dangdang   1   0   0 wz--nc  8.00g  8.00g
lvs
  LV      VG       Attr       LSize   Pool Origin Data%  Move Log Cpy%Sync Convert
  lv_root VolGroup -wi-ao----  18.54g
  lv_swap VolGroup -wi-ao---- 992.00m
  dd      dangdang -wi-a-----   4.00g

dd2:
mkfs.ext4 /dev/dangdang/dd
mount /dev/dangdang/dd /mnt (if both virtual machines mount it at the same time and, for example, one of them copies /etc/passwd into /mnt, the other machine cannot see the file unless it unmounts and remounts, because ext4 is a local file system: it is not kept in sync and does not support simultaneous writes)
cd /mnt
vim index.html
umount /dev/dangdang/dd

Graphical operation: add the storage to the service group
Device, FS Label, or UUID
Force Unmount
Use Quick Status Checks
Reboot Host Node if Unmount Fails
// clusvcadm -d apache (disable the apache service: this tells the cluster that the apache service is not needed on either machine; the command line is equivalent to the web interface operation)
clusvcadm -r apache -m dd3.example.com (relocate the service to dd3.example.com)
/etc/init.d/httpd stop (test: stop the service by hand on the node that is running it and check that the cluster notices the stopped service)

Multiple nodes mounting and writing at the same time (gfs2: a shared cluster file system)
mkfs.gfs2 -p lock_dlm -j 2 -t westos_ha:mygfs2 /dev/dangdang/dd
mkfs.gfs2 creates a gfs2 file system; its commonly used options are:
-b BlockSize: specify the file system block size; the minimum is 512 and the default is 4096;
-J MegaBytes: specify the size of each gfs2 journal area; the default is 128MB and the minimum is 8MB;
-j Number: specify how many journal areas to create in the gfs2 file system; one journal area is normally needed for every client that will mount it;
-p LockProtoName: the name of the locking protocol to use, usually lock_dlm or lock_nolock;
-t LockTableName: the lock table name. A cluster file system needs a lock table name so that, when a file lock is requested, the cluster knows which cluster file system it belongs to. The format is clustername:fsname, where clustername must match the cluster name in the cluster configuration file; as a result, only nodes in that cluster can access this cluster file system. In addition, within the same cluster each file system name (fsname) must be unique;

Test: dd2: mount /dev/dangdang/dd /mnt
cd /mnt
cp /etc/passwd .
dd3: mount /dev/dangdang/dd /mnt
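Because gfs2 is cluster-aware, the copied file is visible on both nodes straight away, with no remount needed (in contrast to the ext4 test above):
dd3: ls /mnt ## passwd already shows up here
dd2: rm -f /mnt/passwd ## deletions propagate the other way just as quickly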

3. vim /etc/fstab (do this on both nodes)
/dev/dangdang/dd /var/www/html gfs2 _netdev 0 0 ## _netdev marks it as a network device
mount -a
In the web interface, remove the filesystem from the Service Group and then delete the webdate resource under Resources
4. clusvcadm -e apache

The service group mounts the file system itself, so you do not have to mount it by hand; likewise you do not have to start the service manually

gfs2_tool sb /dev/dangdang/dd all

gfs2_tool journals /dev/dangdang/dd (there are as many journals as there are mount points)

gfs2_jadd -j 3 /dev/dangdang/dd

Expansion is supported, and so is shrinking, but shrinking is risky (the bottom layer is LVM)
lvextend -L +1G /dev/dangdang/dd (extend the logical volume)
gfs2_grow /dev/dangdang/dd (grow the file system)

9. The cluster name in the lock table must match the actual cluster name (otherwise the file system will not mount)
mkfs.gfs2 -p lock_dlm -j 3 -t westos_dd:mygfs2 /dev/dangdang/dd
gfs2_tool sb /dev/dangdang/dd table <clustername>:mygfs2 ## reset the lock table to use the correct cluster name

# Demo
dd if=/dev/sdb of=mbr bs=512 count=1 ## back up the MBR
dd if=/dev/zero of=/dev/sdb bs=512 count=1 ## destroy the MBR
'the dd command does not go through the file system; it writes directly to the underlying device'

After rebooting, /proc/partitions no longer shows the sdb1 partition created earlier
mount /dev/sdb /mnt ## the mount complains because the partition table is gone

dd if=mbr of=/dev/sdb ## restore the MBR
fdisk -cu /dev/sdb ## enter fdisk, then save and exit with w; after that the partition mounts normally and the previously saved content is still there
'Alternatively, leave the disk unpartitioned; with no partition table this problem cannot occur. Delete /dev/sdb1 and do not partition the disk for the following experiments.'

# iscsi storage (with a local file system) can only be written from a single node at a time, otherwise the data will not stay in sync

6. Add a back-end database
# Install the database on both HA nodes
yum install -y mysql-server

mount /dev/sdb /var/lib/mysql ## put the mysql data on the shared device

chown mysql.mysql /var/lib/mysql ## make the directory writable by mysql

umount /var/lib/mysql
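Before handing control over to the cluster, it is worth checking by hand on one node that mysql really starts with its data directory on the shared device; stop and unmount again afterwards so the cluster can take over (a minimal check):
mount /dev/sdb /var/lib/mysql
/etc/init.d/mysqld start
mysql -e 'show databases;'
/etc/init.d/mysqld stop
umount /var/lib/mysql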

7. Add the back-end database and its storage in the web interface
1) Disable the apache service and uncheck Run Exclusive

2) Add Resources
# 1. File System
dbdata
ext4
/var/lib/mysql
/dev/sdb
Force Unmount
Use Quick Status Checks
Reboot Host Node if Unmount Fails

# 2. Script
mysqld
/etc/init.d/mysqld

# 3. IP Address
172.25.0.200
24

3) Add a Failover Domain
dbfail
Prioritized
Restricted
server2: priority 10
server5: priority 1

4) Add a Service Group
Add
Service Name : sql
Automatically Start This Service
Failover Domain : dbfail

Add Resource:
Filesystem --> IP Address --> Script

# Start apache from the command line
[root@server5 ~]# clusvcadm -e apache
Local machine trying to enable service:apache…Success
service:apache is now running on server5
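The sql service group can be enabled and checked the same way; clustat shows which node owns each service, and relocation works just as it does for apache (service and node names as configured above):
clusvcadm -e sql ## start the sql service group
clustat ## sql should come up on server5, which has priority 1 in dbfail
clusvcadm -r sql -m server2 ## relocate the database to server2 if needed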

Origin blog.csdn.net/qq_36016375/article/details/94914985