1. Experiment
Architecture diagram
2. Experimental environment
Host name | IP address | Installed services
master01 | 192.168.40.10 | NFS, MySQL
slave01 | 192.168.40.20 | nginx-1, tomcat-1, MySQL
slave02 | 192.168.40.30 | nginx-2, tomcat-2, MySQL
ha01 | 192.168.40.40 | lvs-master
ha02 | 192.168.40.50 | lvs-slave
3. Experimental requirements
The experiment requires that when a user accesses the virtual IP, LVS provides highly available load balancing, static resources are handled by nginx, dynamic resources are handled by tomcat, the resulting data is written to the database, and MHA provides high availability for the database.
4. Experimental steps
1. Deploy static pages
slave01 (nginx-1) and slave02 (nginx-2)
systemctl stop firewalld
setenforce 0
#Turn off the firewall and SELinux enforcement
yum install epel-release -y
#Install epel source
yum install nginx -y
#Install nginx
systemctl start nginx
#Start nginx
echo nginx01 test >/usr/share/nginx/html/index.html    #on slave01
echo nginx02 test >/usr/share/nginx/html/index.html    #on slave02
#Set the nginx homepage content: "nginx01 test" on slave01, "nginx02 test" on slave02
curl 192.168.40.20
curl 192.168.40.30
#Test whether the two hosts can access each other's static content
[root@slave01 opt]# systemctl start nginx
[root@slave01 opt]# echo nginx01 test >/usr/share/nginx/html/index.html
[root@slave01 opt]# curl 192.168.40.20
nginx01 test
[root@slave01 opt]# curl 192.168.40.30
nginx02 test
The slave02 node is the same
Add a virtual network card on both nodes, configure the virtual IP on it, and add a host route through the virtual interface.
ifconfig ens33:1 192.168.40.244 netmask 255.255.255.0
route add -host 192.168.40.244 dev ens33:1
Edit the kernel parameters and add the following content
vim /etc/sysctl.conf
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.all.arp_announce = 2
net.ipv4.conf.default.arp_ignore = 1
net.ipv4.conf.default.arp_announce = 2
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
sysctl -p
#Apply the parameters and check that the configuration took effect
2. Deploy load balancing
ha01-lvs ha02-lvs
yum install ipvsadm -y
#Install ipvsadm with yum; ipvsadm is the management tool for LVS load balancing
systemctl start ipvsadm.service
#This step reports an error: the /etc/sysconfig/ipvsadm file does not exist. The fix is to
#save the current rules to that file first:
ipvsadm-save > /etc/sysconfig/ipvsadm
#(To capture the startup error for inspection: systemctl status ipvsadm.service > 1; cat 1)
systemctl restart ipvsadm.service
#Restart ipvsadm successfully
yum install keepalived.x86_64 -y
# Install keepalived.x86_64
vim /etc/sysctl.conf
#Edit kernel file
Add the following content:
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.conf.ens33.send_redirects = 0
#Save and exit
sysctl -p
#Check whether the added kernel parameters took effect
vim /etc/keepalived/keepalived.conf
#Edit the configuration file
Changes in the global_defs module:
smtp_server 127.0.0.1    #Change to 127.0.0.1, i.e. the local machine
router_id LVS_01         #The name of the first LVS
#vrrp_skip_check_adv_addr
#vrrp_strict
#vrrp_garp_interval 0
#vrrp_gna_interval 0
#Comment out the above 4 lines of the security mechanism with # signs
vrrp_instance VI_1 {
    state MASTER
    interface ens33                #Modify the network card name
    virtual_ipaddress {
        192.168.40.244
    }
    #Change the virtual address to the virtual IP to be configured. Multiple IPs can be
    #listed; if only one is needed, delete the other two from the template.
}
virtual_server 192.168.40.244 80 {    #The configured virtual IP and web service port number
    delay_loop 6
    lb_algo rr
    lb_kind DR                     #Change the mode to DR, i.e. direct routing
    persistence_timeout 50
    protocol TCP
    real_server 192.168.40.20 80 {    #The first real IP address and web service port number
        weight 1
        TCP_CHECK {                #TCP_CHECK probes port 80 on the back-end server;
                                   #if it is abnormal, traffic goes to the other server
            connect_port 80        #Add connection port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 192.168.40.30 80 {    #The second real IP address and web service port number
        weight 1
        TCP_CHECK {
            connect_port 80
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
#Nothing else needs to be modified; save and exit.
scp keepalived.conf 192.168.40.50:/etc/keepalived/
#Copy the finished configuration file to the LVS_02 server
LVS_02 configuration file modifications:
#Step 1: change router_id to LVS_02
#Step 2: in the vrrp_instance VI_1 module, change state to BACKUP, set interface to ens33,
#and lower priority to 90, one level below the master LVS priority
#Delete all content after the two real_server blocks, then save and exit
systemctl restart ipvsadm keepalived
#Restart the services
ipvsadm -ln
Display content:
TCP 192.168.40.244:80 rr persistent 50
-> 192.168.40.20:80 Route 1 0 0
-> 192.168.40.30:80 Route 1 0 0
#Both real server IPs should be listed. Note that port 80 must be open on both real servers for them to be detected.
3. Build dynamic web pages
slave01 (tomcat01) and slave02 (tomcat02)
yum install tomcat -y
systemctl start tomcat
#Install and start the tomcat service
cd /var/lib/tomcat/webapps
mkdir test
cd test
vim index.jsp
Add Dynamic page content:
<%@ page language="java" import="java.util.*" pageEncoding="UTF-8"%>
<html>
<head>
<title>test</title>
</head>
<body>
<% out.println("Dynamic page:test tomcat01");%>
#The output content of the page is test tomcat01. On the tomcat02 server, change 01 to 02 to facilitate viewing the experimental results
</body>
</html>
4. nginx reverse proxy
slave01-nginx01 slave02-nginx02
vim /etc/nginx/conf.d/upstream.conf
#Create a new sub-configuration file directly under the nginx conf.d directory; a yum-installed nginx already includes conf.d from the main configuration file, so nothing needs to be added there
File content:
upstream nginxtest {
    server 192.168.40.20:8080;
    server 192.168.40.30:8080;
}
#Reverse proxy upstream named nginxtest, holding the IP and port numbers of the two tomcats

server {
    location / {
        root html;
        index index.html index.htm;
    }
    #The first location module in the server block: the root is html and three homepage
    #file types are supported. Pay attention to the semicolon endings.

    location ~ .*\.(gif|jpg|jpeg|png|bmp|swf|css)$ {
        root /usr/share/nginx/html;
    }
    #The second location module: any URI ending with one of the suffixes in the brackets
    #is treated as a static resource and served from /usr/share/nginx/html

    location ~ .*\.jsp$ {
        proxy_pass http://nginxtest;
        proxy_set_header HOST $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
    #The third location module: any URI ending with .jsp is handed to the reverse proxy
    #upstream nginxtest
}
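To make the split between the three location blocks concrete, here is a rough shell sketch (an illustration only, not nginx itself; it also ignores the case-sensitivity details of the `~` modifier) of which block a given URI would hit:

```shell
route() {
  # Mirror the routing rules of the three location blocks above
  case "$1" in
    *.jsp) echo "proxy -> nginxtest" ;;
    *.gif|*.jpg|*.jpeg|*.png|*.bmp|*.swf|*.css) echo "static -> /usr/share/nginx/html" ;;
    *) echo "default root" ;;
  esac
}

route /test/index.jsp    # proxy -> nginxtest
route /logo.png          # static -> /usr/share/nginx/html
route /index.html        # default root
```

In real nginx, regular-expression locations are evaluated in the order they appear in the file, but since the `.jsp` pattern and the static-suffix pattern are disjoint, the outcome matches the sketch.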
5. Deploy NFS
master01
yum install nfs-utils.x86_64 rpcbind -y
#Install software package
mkdir /share
# Create shared directory
chmod -R 777 /share/
#Modify permissions
vim /etc/exports
#Edit the configuration file, add:
/share 192.168.40.0/24(rw,sync,no_root_squash)
#Shared directory, allowed network segment; read-write, synchronous writes, no root squashing
systemctl start rpcbind
systemctl start nfs
exportfs -v
showmount -e
#View the NFS shared directory published by this machine
On the slave nodes, mount the shared directory locally:
vim /etc/fstab
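The original does not show the fstab line itself; a hypothetical entry, assuming the share is mounted at /mnt/share (adjust the mount point to your layout), might look like:

```
# NFS server is master01 (192.168.40.10); /mnt/share is a hypothetical mount point
192.168.40.10:/share  /mnt/share  nfs  defaults,_netdev  0 0
```

After saving, `mount -a` mounts it and `df -h` confirms the mount.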
6. Install mysql
tee /etc/yum.repos.d/mysql.repo <<EOF
[mysql]
name=mysql5.7
baseurl=https://mirrors.tuna.tsinghua.edu.cn/mysql/yum/mysql-5.7-community-el7-x86_64/
gpgcheck=0
EOF
#Tsinghua University mirror, MySQL 5.7.41
Check
cat /etc/yum.repos.d/mysql.repo
Install
yum -y install mysql-community-server
systemctl start mysqld
ss -ntap |grep 3306
Log in to mysql
grep password /var/log/mysqld.log
#Filter out the initial mysql password from the log
mysql -u root -p'password'
#Use single quotes when the password contains special symbols
mysql> alter user root@'localhost' identified by 'Admin@123';
#Change the password after logging in
vim /etc/my.cnf
#Modify the character set: under [mysqld] add
character-set-server=utf8mb4
7. Install MHA
yum install epel-release.noarch -y
On the master node, install both packages:
yum install -y mha4mysql-node-0.58-0.el7.centos.noarch.rpm
yum install -y mha4mysql-manager-0.58-0.el7.centos.noarch.rpm
#Install node first and then manager
On the slave nodes, install:
yum install mha4mysql-node-0.58-0.el7.centos.noarch.rpm -y
On the master node, set up SSH key-based authentication
ssh-keygen
cd
ssh-copy-id 127.0.0.1
#Copy the key to the local machine itself
rsync -a .ssh 192.168.40.20:/root/
rsync -a .ssh 192.168.40.30:/root/
#Note: do not add a trailing / after .ssh
Create MHA folder and configuration file
mkdir /etc/mastermha
vim /etc/mastermha/app1.cnf
[server default]
user=mhauser
password=Admin@123
manager_workdir=/data/mastermha/app1/
manager_log=/data/mastermha/app1/manager.log
remote_workdir=/data/mastermha/app1/
ssh_user=root
repl_user=test
repl_password=Admin@123
ping_interval=1
master_ip_failover_script=/usr/local/bin/master_ip_failover
#report_script=/usr/local/bin/sendmail.sh    (optional, can be left out)
check_repl_delay=0
master_binlog_dir=/data/mysql/
[server1]
hostname=192.168.40.10
candidate_master=1
[server2]
hostname=192.168.40.20
candidate_master=1
[server3]
hostname=192.168.40.30
vim master_ip_failover
#!/usr/bin/env perl
use strict;
use warnings FATAL => 'all';
use Getopt::Long;
my (
$command, $ssh_user, $orig_master_host, $orig_master_ip,
$orig_master_port, $new_master_host, $new_master_ip, $new_master_port
);
my $vip = '192.168.247.188/24'; # virtual IP; delete this comment in the real script *****
my $gateway = '192.168.247.2'; # gateway IP; delete this comment in the real script *****
my $interface = 'ens33';
my $key = "1";
my $ssh_start_vip = "/sbin/ifconfig $interface:$key $vip;/sbin/arping -I $interface -c 3 -s $vip $gateway >/dev/null 2>&1";
my $ssh_stop_vip = "/sbin/ifconfig $interface:$key down";
GetOptions(
'command=s' => \$command,
'ssh_user=s' => \$ssh_user,
'orig_master_host=s' => \$orig_master_host,
'orig_master_ip=s' => \$orig_master_ip,
'orig_master_port=i' => \$orig_master_port,
'new_master_host=s' => \$new_master_host,
'new_master_ip=s' => \$new_master_ip,
'new_master_port=i' => \$new_master_port,
);
exit &main();
sub main {
print "\n\nIN SCRIPT TEST====$ssh_stop_vip==$ssh_start_vip===\n\n";
if ( $command eq "stop" || $command eq "stopssh" ) {
# $orig_master_host, $orig_master_ip, $orig_master_port are passed.
# If you manage master ip address at global catalog database,
# invalidate orig_master_ip here.
my $exit_code = 1;
eval {
print "Disabling the VIP on old master: $orig_master_host \n";
&stop_vip();
$exit_code = 0;
};
if ($@) {
warn "Got Error: $@\n";
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "start" ) {
# all arguments are passed.
# If you manage master ip address at global catalog database,
# activate new_master_ip here.
# You can also grant write access (create user, set read_only=0, etc) here.
my $exit_code = 10;
eval {
print "Enabling the VIP - $vip on the new master - $new_master_host \n";
&start_vip();
$exit_code = 0;
};
if ($@) {
warn $@;
exit $exit_code;
}
exit $exit_code;
}
elsif ( $command eq "status" ) {
print "Checking the Status of the script.. OK \n";
`ssh $ssh_user\@$orig_master_host \" $ssh_start_vip \"`;
exit 0;
}
else {
&usage();
exit 1;
}
}
# A simple system call that enable the VIP on the new master
sub start_vip() {
`ssh $ssh_user\@$new_master_host \" $ssh_start_vip \"`;
}
# A simple system call that disable the VIP on the old_master
sub stop_vip() {
`ssh $ssh_user\@$orig_master_host \" $ssh_stop_vip \"`;
}
sub usage {
print
"Usage: master_ip_failover --command=start|stop|stopssh|status --orig_master_host=host --orig_master_ip=ip --orig_master_port=port --new_master_host=host --new_master_ip=ip --new_master_port=port\n";
}
mv master_ip_failover /usr/local/bin/
#Cut the script file to the /usr/local/bin/ directory
chmod +x /usr/local/bin/master_ip_failover
#Add executable permissions to make it executable
ifconfig ens33:1 192.168.247.188/24
#Configure the MHA virtual IP by hand the first time; after the primary fails, MHA moves the virtual IP to the backup primary
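A quick way to confirm the VIP landed where expected is to grep the interface addresses. The sketch below inlines a sample `ip addr` line so it can run anywhere; on a real node you would pipe the output of `ip addr show ens33` instead:

```shell
# Sample line imitating `ip addr show ens33` output on the node holding the VIP
sample='    inet 192.168.247.188/24 scope global secondary ens33:1'
if printf '%s\n' "$sample" | grep -q '192\.168\.247\.188'; then
  echo "VIP present"
fi
```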
8. Prepare for master-slave replication
Master execution:
vim /etc/my.cnf
[mysqld] #Add content under this line to enable master-slave replication and binary logs
server_id=99
log-bin=/data/mysql/mysql-bin
skip_name_resolve=1
general_log
#After adding, save and exit
mkdir /data/mysql/ -p
#Create the binary log directory
chown mysql.mysql /data/ -R
#Fix the folder ownership
systemctl restart mysqld
#Restart the mysql service
mysql -uroot -pAdmin@123
#Log in to the database
show master status;
#Note the binary log file and position; they are used in the slave configuration later
grant replication slave on *.* to test@'192.168.40.%' identified by 'Admin@123';
#Create the replication user
grant all on *.* to mhauser@'192.168.40.%' identified by 'Admin@123';
#Create the mha management account
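The file/position pair printed by `show master status;` has to be copied into the slaves' CHANGE MASTER statement. A small hedged helper for pulling the two values apart (the sample line is inlined here; on a live server you would capture the output of `mysql -Ne 'show master status'` instead):

```shell
# Sample line imitating `mysql -Ne "show master status"` output: file, then position
sample='mysql-bin.000001   154'
file=$(printf '%s\n' "$sample" | awk '{print $1}')
pos=$(printf '%s\n' "$sample" | awk '{print $2}')
# Emit the fragment to paste into the CHANGE MASTER statement
echo "MASTER_LOG_FILE='$file', MASTER_LOG_POS=$pos;"
```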
On the two slaves:
vim /etc/my.cnf
[mysqld] #Add the following under this line to enable master-slave replication and binary logs
server_id=100 #Use a different server_id on each slave
log-bin=/data/mysql/mysql-bin
relay-log=relay-log-bin
relay-log-index=slave-relay-bin.index
skip_name_resolve=1
general_log
mkdir /data/mysql/ -p
#Create the binary log directory
chown mysql.mysql /data/ -R
#Fix the folder ownership
systemctl restart mysqld
#Restart the mysql service
mysql -uroot -pAdmin@123
#Log in to the database
CHANGE MASTER TO
  MASTER_HOST='192.168.40.10',
  MASTER_USER='test',
  MASTER_PASSWORD='Admin@123',
  MASTER_PORT=3306,
  MASTER_LOG_FILE='mysql-bin.000001',
  MASTER_LOG_POS=154;
#Configure the master information
start slave;
#Start master-slave replication
show slave status\G
#Check whether the configuration succeeded
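`show slave status\G` should report both Slave_IO_Running and Slave_SQL_Running as Yes. A minimal sketch of an automated check, with sample output inlined in place of a live query (on a real slave you would pipe `mysql -e 'show slave status\G'` instead):

```shell
# Sample text standing in for the two relevant lines of `show slave status\G`
status='Slave_IO_Running: Yes
Slave_SQL_Running: Yes'
# Count how many of the two replication threads report Yes
ok=$(printf '%s\n' "$status" | grep -c 'Running: Yes')
[ "$ok" -eq 2 ] && echo "replication healthy"
```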
9. Check the MHA environment on the MHA server and start MHA
masterha_check_ssh --conf=/etc/mastermha/app1.cnf
#Check mha's ssh password-free login environment
masterha_check_repl --conf=/etc/mastermha/app1.cnf
#Check the mha master-slave environment
MySQL Replication Health is OK.
#If this line appears at the end of the output, the environment is fine
Turn on MHA
masterha_check_status --conf=/etc/mastermha/app1.cnf
#Check the mha status; by default it is stopped
#Enable MHA; it runs in the foreground by default, but production environments generally run it in the background:
nohup masterha_manager --conf=/etc/mastermha/app1.cnf &> /dev/null &
#Foreground start (takes a while and occupies the terminal, so open another window to check the status):
masterha_manager --conf=/etc/mastermha/app1.cnf
#Check status
masterha_check_status --conf=/etc/mastermha/app1.cnf
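Once MHA is running, `masterha_check_status` can be wrapped in a simple probe. The sample line below imitates typical "running" output (the exact format may vary by MHA version, so on a real manager you would grep the command's actual output):

```shell
# Sample line imitating masterha_check_status output when the manager is healthy
sample='app1 (pid:12345) is running(0:PING_OK), master:192.168.40.10'
if printf '%s\n' "$sample" | grep -q 'PING_OK'; then
  echo "MHA manager healthy"
fi
```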