LVS/DR + vsftpd + nginx source code compilation

VS/DR
Configure the yum source: go to the mount point directory to check the available repositories.


1. Install ipvsadm
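A minimal sketch of the install, assuming the package is available from the configured yum source:

yum install -y ipvsadm    # userspace tool for managing the LVS (IPVS) rule table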

2. Bind the virtual IP (VIP) to server1 (its real IP is 172.25.40.1):

ip addr add 172.25.40.100/24 dev eth0

This adds the VIP to the eth0 interface.

3. Install apache on server2 and write "server2" into index.html in its default document root; do the same on server3, writing "server3" (a sketch follows).
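A sketch of the realserver setup, assuming the stock RHEL 6 httpd layout (repeat on server3 with its own name):

yum install -y httpd
echo server2 > /var/www/html/index.html    # default document root
/etc/init.d/httpd start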

4. Load the rules on server1 and save them:

ipvsadm -A -t 172.25.40.100:80 -s rr
ipvsadm -a -t 172.25.40.100:80 -r 172.25.40.2:80 -g
ipvsadm -a -t 172.25.40.100:80 -r 172.25.40.3:80 -g
/etc/init.d/ipvsadm save   # save the rules
(the saved rules can be viewed in /etc/sysconfig/ipvsadm)
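To confirm the table just loaded (a quick check, assuming the commands above succeeded):

ipvsadm -ln    # expect one rr virtual service on 172.25.40.100:80 with two Route (DR) realservers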

5. Add the VIP to server2 and server3 respectively:

ip addr add 172.25.40.100/32 dev lo
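A quick verification, run on each realserver:

ip addr show lo    # 172.25.40.100/32 should be listed on the loopback interface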

6. On top of the above, add arptables_jf rules so that server2 and server3 never answer ARP requests for the VIP (172.25.40.100) themselves:

yum install -y arptables_jf
arptables -A IN -d 172.25.40.100 -j DROP
arptables -A OUT -s 172.25.40.100 -j mangle --mangle-ip-s 172.25.40.2/3   # .2 on server2, .3 on server3
/etc/init.d/arptables_jf save
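Spelled out per host (the ".2/3" above is shorthand), e.g. on server2:

arptables -A IN -d 172.25.40.100 -j DROP
arptables -A OUT -s 172.25.40.100 -j mangle --mangle-ip-s 172.25.40.2
/etc/init.d/arptables_jf save

On server3 the mangle rule uses 172.25.40.3 instead.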

7. From the client machine, run curl 172.25.40.100; requests should reach server2 and server3 in round-robin fashion through the server1 scheduler. For example:
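A simple round-robin check from the client, assuming the index pages written in step 3:

for i in 1 2 3 4; do curl -s 172.25.40.100; done    # output should alternate: server2 / server3 / server2 / server3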

Useful diagnostics:

arp -an | grep 100      # on the client: check which MAC is answering for the VIP
ipvsadm -l              # on server1: view the scheduler's virtual service and connection counts
arp -d 172.25.40.100    # on the client: clear the cached ARP entry for the VIP
arptables -L            # on server2/server3: list the arptables rules
tcpdump -vv -p arp      # monitor ARP traffic

Health checks with ldirectord

1. Install the ldirectord service on server1:

rpm -ivh ldirectord-3.9.5-3.1.x86_64.rpm
rpm -ql ldirectord-3.9.5-3.1    # list the package files to locate the configuration file

2. Modify the configuration file

cp /usr/share/doc/ldirectord-3.9.5/ldirectord.cf /etc/ha.d/
vim /etc/ha.d/ldirectord.cf
...
# Sample for an http virtual service
virtual=172.25.40.100:80
        real=172.25.40.2:80 gate
        real=172.25.40.3:80 gate
        fallback=127.0.0.1:80 gate
        service=http
        scheduler=rr
        #persistent=600
        #netmask=255.255.255.255
        protocol=tcp
        checktype=negotiate
        checkport=80
        request="index.html"
        #receive="Test Page"
        #virtualhost=www.x.y.z
...

3. To avoid interference, stop the ipvsadm service on server1:

/etc/init.d/ipvsadm stop

4. Start the ldirectord service and check the rules with ipvsadm -l. For the fallback to work, change the local apache on server1 from port 8080 to port 80 and put "the website is under maintenance" in its default directory (sketch below):

/etc/init.d/ldirectord start
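A sketch of the fallback setup on server1, assuming the stock httpd paths:

vim /etc/httpd/conf/httpd.conf    # change "Listen 8080" to "Listen 80"
echo "the website is under maintenance" > /var/www/html/index.html
/etc/init.d/httpd restart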

5. Stop the apache service on server2 or server3 and test from the client machine to verify the health check, for example:
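A sketch of the failure test:

/etc/init.d/httpd stop    # on server2: take one realserver down
curl 172.25.40.100        # on the client: only server3 answers now
/etc/init.d/httpd stop    # on server3 as well: both realservers down
curl 172.25.40.100        # the fallback answers: "the website is under maintenance"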

Health checking and high availability with keepalived

(Download website: www.keepalived.org)
1. Install keepalived

tar zxf keepalived-1.4.3.tar.gz
yum install -y openssl-devel libnl3-devel ipset-devel iptables-devel libnfnetlink-devel   # install the build dependencies
rpm -ivh libnfnetlink-devel-1.0.0-1.el6.x86_64.rpm    # if libnfnetlink-devel is not in the repo, install the local rpm
# or, equivalently: yum install -y libnfnetlink-devel-1.0.0-1.el6.x86_64

cd keepalived-1.4.3
./configure --prefix=/usr/local/keepalived --with-init=SYSV

(If configure reports anything missing, find the corresponding package online, install it, and re-run configure until it succeeds.)

make && make install   # complete the installation

To make keepalived easy to use, create soft links:

ln -s /usr/local/keepalived/etc/rc.d/init.d/keepalived /etc/init.d/
ln -s /usr/local/keepalived/etc/sysconfig/keepalived /etc/sysconfig/
ln -s /usr/local/keepalived/etc/keepalived /etc/
ln -s /usr/local/keepalived/sbin/keepalived /bin/
chmod +x /usr/local/keepalived/etc/rc.d/init.d/keepalived   # make the init script executable

Copy the installed keepalived tree directly to server4 (run from /usr/local):
scp -r keepalived/ server4:/usr/local/
Then repeat the symlink and chmod steps above on server4.

Edit the keepalived configuration file
vim /usr/local/keepalived/etc/keepalived/keepalived.conf

! Configuration File for keepalived

global_defs {
    notification_email {
        root@localhost                  # email address(es) to receive alerts; multiple allowed, one per line
    }
    notification_email_from keepalived@server88.example.com   # sender address for alert mail
    smtp_server 127.0.0.1               # SMTP server address
    smtp_connect_timeout 30             # timeout for connecting to the SMTP server
    router_id LVS_DEVEL                 # load balancer ID, used in email alerts
}

vrrp_instance VI_1 {
   *state MASTER                        # change to BACKUP on the standby; the actual state is
                                        # decided by priority: if this node's priority falls below
                                        # the backup's, it loses the MASTER state
    interface eth0                      # network interface monitored for HA
    virtual_router_id 51                # must be the same on master and backup, range 0-255
   *priority 100                        # master priority; change to 50 on the backup; the master's
                                        # priority must be greater than the backup's
    advert_int 1                        # advertisement interval (seconds) between master and backup
    authentication {                    # authentication for master/backup failover
        auth_type PASS                  # authentication type: PASS or AH
        auth_pass 1111                  # password; within one vrrp_instance, MASTER and BACKUP
                                        # must use the same password to communicate
    }
    virtual_ipaddress {
        172.25.40.100                   # virtual IP address(es); multiple allowed, one per line
    }
}

virtual_server 172.25.40.100 80 {       # define the virtual server
    delay_loop 6                        # query realserver state every 6 seconds
    lb_algo rr                          # LVS scheduling algorithm, round-robin here
    lb_kind DR                          # LVS runs in DR mode
    persistence_timeout 50              # session persistence time in seconds; very useful for
                                        # dynamic pages and a good solution for session sharing in
                                        # the cluster: a user's requests keep going to one service
                                        # node until the timeout expires. Note this is a maximum
                                        # idle timeout: if the user does nothing for 50 seconds,
                                        # following requests may go to another node, but continuous
                                        # activity is not bound by the 50-second limit.
    protocol TCP                        # forwarding protocol, TCP or UDP
    real_server 172.25.40.2 80 {        # a service node
        weight 1                        # node weight; larger numbers mean higher weight. Weights
                                        # let servers of different performance carry different
                                        # load: give high-performance servers a higher weight and
                                        # weaker ones a lower one, using system resources sensibly.
        TCP_CHECK {                     # realserver health-check settings, in seconds
            connect_timeout 3           # no-response timeout
            nb_get_retry 3              # number of retries
            delay_before_retry 3        # delay between retries
        }
    }
    real_server 172.25.40.3 80 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}

Send the modified keepalived configuration file to the server4 standby machine and adjust the marked entries there. Stop the ldirectord service that did the health checking earlier, remove the VIP from server1's eth0 (ip addr del 172.25.40.100/24 dev eth0; keepalived manages it now), install ipvsadm on the standby machine, and inspect the rules with ipvsadm -l/-L.

Note: in the standby machine's keepalived configuration, only the entries marked with * change (state BACKUP, priority 50)!
ifdown eth0 can be used to simulate a network failure.
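With the configuration in place on both nodes, a sketch of bringing the cluster up:

/etc/init.d/keepalived start    # on server1 (MASTER) and server4 (BACKUP)
ip addr show eth0               # the VIP 172.25.40.100 should appear only on the current MASTER
ipvsadm -ln                     # the virtual server and both realservers should be loaded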

Test:
1. High availability: stop the keepalived service on the master and check that the backup takes over.
2. Load balancing: visit http://172.25.40.100 and confirm the page alternates between the two realservers. You can also view detailed connection state with ipvsadm -Lnc.
3. Failover: stop the httpd service on any realserver and confirm that keepalived's health-check module detects it promptly, removes the faulty node, and shifts the traffic to the healthy node.
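A command-line sketch of tests 1 and 3:

/etc/init.d/keepalived stop               # on server1 (test 1): the VIP should move to server4
ip addr show eth0 | grep 172.25.40.100    # on server4: confirm the takeover
/etc/init.d/httpd stop                    # on server2 (test 3): the node should leave the ipvsadm table
ipvsadm -ln                               # on the active director: only server3 remains listed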

vsftpd

Install and enable vsftpd on server2 and server3:
yum install -y vsftpd
/etc/init.d/vsftpd start
On server2, create a marker file so tests show which node answered: cd /var/ftp/ && touch server2
On server1, modify the keepalived configuration file:

...
virtual_server 172.25.40.100 21 {
    delay_loop 6
    lb_algo rr
    lb_kind DR                  # DR mode, as for the http service
    persistence_timeout 50
    protocol TCP
    real_server 172.25.40.2 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
    real_server 172.25.40.3 21 {
        weight 1
        TCP_CHECK {
            connect_timeout 3
            nb_get_retry 3
            delay_before_retry 3
        }
    }
}
...

Restart keepalived.
ipvsadm -l should now show two virtual services, one ftp and one http.
From the client, lftp 172.25.40.100 (the VIP) should fetch the server2 marker file.
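A quick client-side check (a sketch; note that with persistence_timeout 50 set, repeated connections from one client stick to the same realserver for up to 50 seconds):

lftp -e 'ls; quit' 172.25.40.100    # the listing should contain the marker file (server2 or server3)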

nginx source code compilation

On server4, stop keepalived; on server2 and server3, stop the arptables_jf service.
On server3: cd /var/ftp/ && touch server3
Remove the VIP from server2 and server3 (ip addr del 172.25.40.100/32 dev lo).

tar zxf nginx-1.14.0.tar.gz
cd nginx-1.14.0/
vim src/core/nginx.h    # hide the version number
vim auto/cc/gcc         # comment out the debug flag
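Concretely, in the nginx 1.14.0 tree the relevant lines look like this:

src/core/nginx.h:    change  #define NGINX_VER "nginx/" NGINX_VERSION  to  #define NGINX_VER "nginx"
auto/cc/gcc:         comment out the line  CFLAGS="$CFLAGS -g"  under the "# debug" section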
Compile from source (install the build dependencies first; see the note below):

./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-http_stub_status_module
make && make install
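On RHEL 6 the dependencies for this configure line are typically the following (an assumption; install whatever configure actually complains about):

yum install -y gcc pcre-devel zlib-devel openssl-devel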

To reduce the occupied space you can run make clean inside nginx-1.14.0/, or rm -fr nginx-1.14.0 and re-extract with tar zxf nginx-1.14.0.tar.gz before recompiling.
du -sh shows the space a directory occupies.

For convenience, create a soft link:
ln -s /usr/local/nginx/sbin/nginx /usr/local/sbin/
nginx -t    # test the configuration

Once started, nginx listens on port 80.
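A sketch of starting it and checking the result:

nginx                         # start nginx via the symlinked binary
curl -I http://172.25.40.1    # the Server: response header should read just "nginx", with no version number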

Test: if a browser pointed at server1's IP (172.25.40.1) returns the "Welcome to nginx!" page, the build works.

Other LVS forwarding modes, not covered here: VS/NAT and VS/TUN.
