Written in front
- Some notes on LVS, organized here.
- If anything is insufficient or wrong, please help me correct it.
In the evening you sit under the eaves, watching the sky slowly darken, feeling lonely and desolate, feeling that the life in your days is being taken from you. I was a young man then, but I was afraid of living like that, of growing old like that. In my eyes, that was something more terrible than death. -------- Wang Xiaobo
A brief introduction to LVS & Keepalived
About LVS
LVS, that is, Linux Virtual Server, is an open-source load-balancing project led by Dr. Zhang Wensong that has been integrated into the Linux kernel as a module; only the user-space management tool needs to be installed to use it.
Here ansible is used to perform the related operations. The following is the inventory file; master acts as the proxy (director) node and node as the load (real server) nodes.
┌──[[email protected]]-[~]
└─$cat inventory
[master]
192.168.26.152
[node]
192.168.26.153
192.168.26.154
We need to install the ipvsadm toolkit before using it:
┌──[[email protected]]-[~]
└─$ansible master -m yum -a 'name=ipvsadm state=installed'
An LVS load-balancing environment consists of one or more VIPs and multiple real servers; managing multiple VIPs requires using LVS together with Keepalived.
The working principle of LVS: the user requests the VIP address of LVS, and LVS forwards the request to a back-end server according to the configured forwarding mode and load-balancing algorithm.
LVS implements load-balancing forwarding in three ways, the NAT, DR, and TUN modes, and supports a relatively large number of load-balancing algorithms.
- NAT: network address translation. Simply put, it allows two different network segments to communicate: traffic between the user-facing public network and the private network of the real servers passes through the LVS server in both directions.
- TUN: the idea of TUN is to separate requests from responses. Requests still go through the LVS machine, but responses return through a separate channel, so TUN mode requires that the real servers be directly reachable from the external network; after receiving a request packet, the real server responds to the client directly.
- DR: also called direct routing mode. DR mode requires that the LVS director and the back-end servers be on the same LAN.
LVS load-balancing algorithms:
- RR algorithm: Round-Robin Scheduling
- WRR algorithm: Weighted Round-Robin Scheduling
- LC algorithm: Least-Connection Scheduling
- WLC algorithm: Weighted Least-Connection Scheduling
- LBLC algorithm: Locality-Based Least-Connection Scheduling
- LBLCR algorithm: Locality-Based Least Connections with Replication Scheduling
- DH algorithm: Destination Hashing Scheduling
- SH algorithm: Source Hashing Scheduling
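To get a feel for how a weighted algorithm spreads requests, here is a toy shell sketch of weighted round-robin. This is only an illustration of the resulting distribution ratio, not LVS's actual implementation, which lives in the ip_vs kernel module; the pick_server helper is invented for this example.

```shell
# Toy weighted round-robin: one slot per unit of weight, cycled in order.
# Server A has weight 2, server B has weight 1, so A gets 2/3 of requests.
pick_server() {             # $1 = 0-based request number
  n=$1
  set -- A A B              # one slot per unit of weight
  shift $(( n % 3 ))
  echo "$1"
}

for i in 0 1 2 3 4 5; do
  echo "request $i -> server $(pick_server "$i")"
done
```

With weights 2 and 1, two out of every three consecutive requests land on A, matching the -w ratios passed to ipvsadm below.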
Common ipvsadm options:
- -A: add a virtual server (VIP) address;
- -t: the virtual server provides a TCP service;
- -s: the scheduling algorithm to use;
- -a: add a back-end real server to a virtual server;
- -r: specify the real server address;
- -w: the weight of the back-end real server;
- -m: set the forwarding mode to NAT (masquerading);
- -g: direct routing (DR) mode;
- -i: tunnel (TUN) mode.
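Combining these options, a DR-mode variant of the NAT demo that follows could look like this sketch. The dr_setup helper is hypothetical: it only prints the ipvsadm commands (a dry run); pipe its output to sh as root on a host with ipvsadm installed to actually apply them. The addresses are the lab addresses used in this article.

```shell
# Dry-run: print the ipvsadm commands for a DR-mode virtual service.
# -g selects direct routing instead of NAT (-m).
dr_setup() {                # $1 = VIP, remaining args = real server IPs
  vip=$1; shift
  echo "ipvsadm -C"                      # flush existing rules
  echo "ipvsadm -A -t $vip:80 -s wrr"    # add virtual server, weighted RR
  for rs in "$@"; do
    echo "ipvsadm -a -t $vip:80 -r $rs:80 -g -w 2"
  done
  echo "ipvsadm -Ln"                     # show the resulting table
}

dr_setup 192.168.26.200 192.168.26.153 192.168.26.154
```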
Simple use via the command line
Here the ipvsadm tool is used to build a simple LVS load-balancing demo.
The ipvsadm -C command clears all IPVS virtual server entries, meaning all virtual servers and their associated connection entries are deleted.
┌──[[email protected]]-[~]
└─$ipvsadm -C
The following command adds a new virtual service to the IP Virtual Server (IPVS) table, where:
- The -A option specifies adding a new virtual service,
- The -t option specifies the IP address and port number of the virtual service, in this case 192.168.26.200:80.
- The -s option specifies the scheduling algorithm used by the virtual service, in this case it is specified as rr, which is "round robin". This means that connections will be evenly distributed among the real servers in the IPVS table.
┌──[[email protected]]-[~]
└─$ipvsadm -A -t 192.168.26.200:80 -s rr
Add a new real server to the virtual service you just created
- -a: This option specifies that we want to add a new real server to the virtual service.
- -t 192.168.26.200:80: This option specifies the virtual service to add the new real server to. In this example, the virtual service is on IP address 192.168.26.200 and port 80.
- -r 192.168.26.153:80: This option specifies the IP address and port of the new real server we want to add to the virtual service. In this example, the IP address of the new real server is 192.168.26.153 and the port is 80.
- -m: This option specifies NAT mode (masquerading) for this real server: the director rewrites the destination address of incoming packets to the real server's address and translates the replies back so that they appear to come from the VIP.
- -w 2: This option specifies the weight of the new real server. Weights are used to distribute traffic among the real servers of a virtual service. In this example, the weight of the new real server is set to 2.
┌──[[email protected]]-[~]
└─$ipvsadm -a -t 192.168.26.200:80 -r 192.168.26.153:80 -m -w 2
Add another real server in the same way:
┌──[[email protected]]-[~]
└─$ipvsadm -a -t 192.168.26.200:80 -r 192.168.26.154:80 -m -w 2
Start the httpd service on the load nodes:
┌──[[email protected]]-[~]
└─$ansible node -m service -a 'name=httpd state=started'
A simple test: responses alternate in round-robin fashion.
┌──[[email protected]]-[~]
└─$curl 192.168.26.200:80
vms154.liruilongs.github.io
┌──[[email protected]]-[~]
└─$curl 192.168.26.200:80
vms153.liruilongs.github.io
┌──[[email protected]]-[~]
└─$curl 192.168.26.200:80
vms154.liruilongs.github.io
┌──[[email protected]]-[~]
└─$
Viewing with the ipvsadm -Ln command, all virtual services and real servers in IPVS are listed in numeric format:
- -L: This option specifies that we want to list all virtual services and real servers in IPVS.
- -n: This option specifies that we want to display IP addresses and port numbers in numeric format.
┌──[[email protected]]-[~]
└─$ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.26.200:80 rr
-> 192.168.26.153:80 Masq 2 0 0
-> 192.168.26.154:80 Masq 2 0 0
ipvsadm -C cleans up the IPVS table:
┌──[[email protected]]-[~]
└─$ipvsadm -C
┌──[[email protected]]-[~]
└─$ipvsadm -Ln
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
┌──[[email protected]]-[~]
└─$
LVS by itself cannot monitor the real load machines, that is, it cannot judge whether the load nodes are healthy and able to serve, so keepalived is introduced:
Keepalived automatically detects the running state of each server according to the configured rules and removes or re-adds servers accordingly, so users never notice whether a back-end server is down. All of this is completed automatically as servers join or leave the server group, without manual intervention; all that needs to be done manually is to repair the faulty web server.
In addition, keepalived can provide HA failover: a backup LVS stands by, and when the master LVS goes down, the LVS VIP automatically switches to the backup. LVS+Keepalived thus provides both load balancing and high availability, meeting the requirement of stable and efficient 7x24 operation of a website.
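What keepalived's health checks automate can be sketched as a loop that probes each real server and removes or re-adds it in the IPVS table. The decide helper below is hypothetical: it only prints the ipvsadm command a checker would run for a given probe result; a real probe would use curl or nc against the real server.

```shell
# Sketch of the remove/re-add decision keepalived makes per real server.
VIP=192.168.26.200

decide() {                  # $1 = real server IP, $2 = probe result (up|down)
  if [ "$2" = "up" ]; then
    echo "ipvsadm -a -t $VIP:80 -r $1:80 -g -w 3"   # re-add healthy server
  else
    echo "ipvsadm -d -t $VIP:80 -r $1:80"           # remove failed server
  fi
}

decide 192.168.26.155 up
decide 192.168.26.156 down
```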
Keepalived performs health checks at three layers: the IP layer, the TCP transport layer, and the application layer. Note that when keepalived.conf is used, there is no need to run ipvsadm commands to add the virtual server and the balanced real servers; everything can be set in keepalived.conf, and keepalived automatically adds the real servers to IPVS. Of course, the virtual service and the real servers do need to be configured in the keepalived.conf file.
keepalived+LVS
Automated deployment through an ansible playbook
Playbook writing
┌──[[email protected]]-[~]
└─$cat keepalived_lvs.yaml
---
# Initialization: install keepalived and ipvsadm, disable the firewall
- name: ipvsadm keepalived init
  hosts: node
  tasks:
    - name: install
      yum:
        name:
          - keepalived
          - ipvsadm
        state: installed
    - name: firewall close
      shell: firewall-cmd --set-default-zone=trusted
    - name: ipvsadm clean
      shell: ipvsadm -C
# Configure keepalived on the master node
- name: vms153.liruilongs.github.io config
  hosts: 192.168.26.153
  tags:
    - master
  vars:
    role: MASTER
    priority: 100
  tasks:
    - name: copy keepalived config
      template:
        src: keepalived.conf.j2
        dest: /etc/keepalived/keepalived.conf
    - name: restart keepalived
      service:
        name: keepalived
        state: restarted
# Configure keepalived on the backup node
- name: vms154.liruilongs.github.io config
  hosts: 192.168.26.154
  tags:
    - backup
  vars:
    role: BACKUP
    priority: 50
  tasks:
    - name: copy keepalived config
      template:
        src: keepalived.conf.j2
        dest: /etc/keepalived/keepalived.conf
    - name: restart keepalived
      service:
        name: keepalived
        state: restarted
┌──[[email protected]]-[~]
└─$
The corresponding keepalived.conf.j2 template configuration file uses the variables defined in the playbook. Note the format of the configuration file here; it differs slightly between keepalived versions.
┌──[[email protected]]-[~]
└─$cat keepalived.conf.j2
! Configuration File for keepalived
global_defs {
router_id LVS_DEVEL
vrrp_iptables
}
vrrp_instance VI_1 {
state {{ role }}
interface ens32
virtual_router_id 51
priority {{ priority }}
advert_int 1
authentication {
auth_type PASS
auth_pass 1111
}
virtual_ipaddress {
192.168.26.200
}
}
virtual_server 192.168.26.200 80 {
delay_loop 1
lb_algo rr
lb_kind DR
protocol TCP
real_server 192.168.26.155 80 {
weight 3
TCP_CHECK {
connect_timeout 3
retry 3
delay_before_retry 3
connect_port 80
}
}
real_server 192.168.26.156 80 {
weight 3
TCP_CHECK {
connect_timeout 3
retry 3
delay_before_retry 3
connect_port 80
}
}
}
Web service configuration: kernel parameters and the local loopback interface
The ifcfg-lo:0 interface is a virtual loopback interface used to assign the virtual IP address to the system. In the context of keepalived and LVS, this virtual IP is the VIP shared between the active and standby nodes in the cluster and is used to route traffic to the node actively running the httpd service.
At first I did not understand this: when LVS is combined with keepalived, the VIP becomes truly virtual. The VIP used in the demo above was an ordinary IP address; once it becomes a virtual VIP, the local loopback interface on the real servers needs to be modified.
┌──[[email protected]]-[~]
└─$cat deploy_web.yaml
---
- name: web init
  hosts: web
  tasks:
    - name: NIC configuration
      copy:
        dest: /etc/sysconfig/network-scripts/ifcfg-lo:0
        src: ifcfg-lo
        force: yes
    - name: modify kernel parameters
      copy:
        dest: /etc/sysctl.conf
        src: sysctl.conf
        force: yes
    - name: sysctl
      shell: sysctl -p
    - name: install httpd
      yum:
        name: httpd
        state: installed
    - name: restart network
      service:
        name: network
        state: restarted
    - name: httpd content
      shell: "echo `hostname` > /var/www/html/index.html"
    - name: Restart service httpd, in all cases
      service:
        name: httpd
        state: restarted
    - name: firewall close
      shell: firewall-cmd --set-default-zone=trusted
    - name: iptables
      shell: "iptables -F"
┌──[[email protected]]-[~]
└─$
The configuration files involved
┌──[[email protected]]-[~]
└─$cat ifcfg-lo
DEVICE=lo:0
IPADDR=192.168.26.200
NETMASK=255.255.255.255
NETWORK=192.168.26.200
BROADCAST=192.168.26.200
ONBOOT=yes
NAME=lo:0
Why should the VIP be bound to the local loopback interface?
The VIP is the address that clients use to access the service; the load balancer then distributes incoming traffic across the real servers.
In DR mode the real servers receive packets whose destination address is still the VIP, so each real server must own the VIP locally in order to accept that traffic; binding it to the loopback interface (together with the ARP settings below) achieves this without the real server answering ARP requests for the VIP. The real server then replies to the client directly with the VIP as the source address; if it replied from its own address instead, the client would discard the response.
┌──[[email protected]]-[~]
└─$cat sysctl.conf
# sysctl settings are defined through files in
# /usr/lib/sysctl.d/, /run/sysctl.d/, and /etc/sysctl.d/.
#
# Vendors settings live in /usr/lib/sysctl.d/.
# To override a whole file, create a new file with the same in
# /etc/sysctl.d/ and put new settings there. To override
# only specific settings, add a file with a lexically later
# name in /etc/sysctl.d/ and put new settings there.
#
# For more information, see sysctl.conf(5) and sysctl.d(5).
net.ipv4.ip_forward = 1
net.ipv4.conf.all.arp_ignore = 1
net.ipv4.conf.lo.arp_ignore = 1
net.ipv4.conf.lo.arp_announce = 2
net.ipv4.conf.all.arp_announce = 2
┌──[[email protected]]-[~]
└─$
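The two files above can also be applied ad hoc for a quick test (non-persistent, lost on reboot). The realserver_vip helper below is invented for this sketch: it only prints the equivalent commands for the loopback VIP and the ARP-related sysctl settings; run its output as root on each real server.

```shell
# Print the ad-hoc (non-persistent) equivalent of ifcfg-lo:0 plus the
# ARP-related sysctl settings for an LVS-DR real server.
realserver_vip() {          # $1 = VIP
  echo "ip addr add $1/32 dev lo"
  for dev in all lo; do
    echo "sysctl -w net.ipv4.conf.$dev.arp_ignore=1"
    echo "sysctl -w net.ipv4.conf.$dev.arp_announce=2"
  done
}

realserver_vip 192.168.26.200
```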
Deployment test
After the playbook runs, check with the following command:
┌──[[email protected]]-[~]
└─$ansible node -a 'ipvsadm -Ln'
192.168.26.153 | CHANGED | rc=0 >>
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.26.200:80 rr persistent 2
-> 192.168.26.155:80 Route 3 0 0
-> 192.168.26.156:80 Route 3 0 0
192.168.26.154 | CHANGED | rc=0 >>
IP Virtual Server version 1.2.1 (size=4096)
Prot LocalAddress:Port Scheduler Flags
-> RemoteAddress:Port Forward Weight ActiveConn InActConn
TCP 192.168.26.200:80 rr persistent 2
-> 192.168.26.155:80 Route 3 0 0
-> 192.168.26.156:80 Route 3 0 0
┌──[[email protected]]-[~]
└─$
Check the local loopback address
┌──[[email protected]]-[~]
└─$ansible web -m shell -a 'ip a | grep lo:'
192.168.26.156 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
inet 192.168.26.200/32 brd 192.168.26.200 scope global lo:0
192.168.26.155 | CHANGED | rc=0 >>
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN qlen 1
inet 192.168.26.200/32 brd 192.168.26.200 scope global lo:0
┌──[[email protected]]-[~]
└─$
┌──[[email protected]]-[~]
└─$curl 192.168.26.200
vms156.liruilongs.github.io
┌──[[email protected]]-[~]
└─$curl 192.168.26.200
vms155.liruilongs.github.io
┌──[[email protected]]-[~]
└─$
Troubleshooting
- If you run into problems, consult the official documentation; most of the guides on the Internet are somewhat dated.
- Configuration files differ slightly between versions; rely mainly on the official documentation, and refer to the sample configuration templates shipped with the package:
┌──[[email protected]]-[~]
└─$rpm -ql keepalived | grep doc | grep conf
/usr/share/doc/keepalived-1.3.5/keepalived.conf.SYNOPSIS
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.HTTP_GET.port
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.IPv6
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.SMTP_CHECK
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.SSL_GET
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.fwmark
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.inhibit
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.misc_check
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.misc_check_arg
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.quorum
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.sample
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.status_code
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.track_interface
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.virtual_server_group
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.virtualhost
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp.localchec
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp.lvs_syncd
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp.routes
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp.rules
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp.scripts
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp.static_ipaddress
/usr/share/doc/keepalived-1.3.5/samples/keepalived.conf.vrrp.sync
┌──[[email protected]]-[~]
└─$
- If the configuration does not take effect even though the service starts normally, it may be caused by stray whitespace in the configuration file, which makes some settings silently fall back to their defaults; open the file in vim and use :set list to make the invisible characters visible, then fix the configuration.
Part of the content of this blog post references the following:
© The copyright of the content of the reference link in the article belongs to the original author, if there is any infringement, please inform
https://blog.csdn.net/weixin_42808782/article/details/115671278
© 2018-2023 [email protected], All rights reserved. Attribution-Non-Commercial-Share Alike (CC BY-NC-SA 4.0)