Building an LVS Load-Balancing Cluster in NAT Mode, Hands-On!

For the concepts behind LVS load-balancing clusters, see the post: LVS Load-Balancing Cluster Explained.

I. Case Overview

LVS NAT mode: NAT was originally a way to cope with the shortage of public IP addresses, mapping private internal addresses to a reserved public address (source-address NAT) so internal hosts can reach the outside. With a small change, the same mechanism becomes a means of load balancing. The principle: the director (DIR) rewrites the destination IP of each packet arriving from a client into the IP address of one of the Web server nodes and forwards the packet to that node; after the node processes the request, its reply goes back through the director, which rewrites the source IP of the reply to the address on its own interface (the VIP) before sending it on to the client. Both inbound and outbound traffic must therefore pass through the director.

Advantages and disadvantages of NAT mode:
Advantages: simple to implement and easy to understand.
Disadvantages: every packet in both directions passes through the LVS director, so the director easily becomes the bottleneck; depending on its performance, one director can typically serve only about 10 to 20 back-end Web server nodes. If the director fails, the consequences are serious, and NAT mode does not support remote disaster recovery.

II. Case Environment

Since this is a lab environment, there is no need for a large topology: two Web server nodes demonstrate exactly the same thing as ten, and the configuration method is identical, so this experiment deploys two Web server nodes. The experimental topology is as follows:

[Topology diagram: LVS NAT-mode load-balancing cluster]

Characteristics of LVS NAT mode:

  1. The Web server nodes and the internal NIC of the LVS director must be on the same IP network; the nodes should use private IP addresses, with their default gateway pointing at the director;
  2. Both request and reply traffic must be forwarded through the director, so in extremely high-load scenarios the director may become the system bottleneck;
  3. Port mapping is supported;
  4. The Web server nodes can run any operating system (OS);
  5. The director needs two NICs (typically one LAN-facing and one WAN-facing), and its internal NIC must be on the same IP network as the Web server nodes;
  6. The VIP must be configured on the NIC of the director that accepts client requests and serves them directly.
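As a concrete sketch of points 5 and 6, the director's two NICs might be addressed as follows. The device names ens33/ens36 are hypothetical, and the addresses (VIP 192.168.1.1 outside, 192.168.2.1 inside) are assumptions taken from the addresses used later in this case; adjust them to your environment.

```shell
# External NIC: carries the VIP 192.168.1.1 that clients connect to.
ip addr add 192.168.1.1/24 dev ens33
# Internal NIC: 192.168.2.1, which the Web nodes use as their gateway.
ip addr add 192.168.2.1/24 dev ens36
ip link set ens33 up
ip link set ens36 up
```

On each Web server node, the default gateway must then point at the director's internal address, e.g. `ip route add default via 192.168.2.1` (or the equivalent setting in the node's network scripts).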

III. Case Implementation

Principle of the case

  1. The client sends a request to the VIP of the LVS director. The director selects a Web server node according to the scheduling algorithm, records the connection in its hash table, rewrites the destination IP address of the client's request to the address of that node, and forwards the request to the Web server node;
  2. The Web server node receives the request packet, finds that the destination IP is its own, processes the request, and sends the reply back to the director;
  3. The director receives the reply packet, rewrites its source address to the VIP, and sends it on to the client.
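The per-connection logic above can be illustrated with a toy shell sketch. This is not LVS itself, just the idea: each new client is mapped to a real server picked in round-robin order (mirroring the `-s rr` scheduler configured below), and the printed mapping is what the director would record in its hash table. The client addresses are made up; the server IPs match this case.

```shell
#!/bin/sh
# Toy illustration of NAT-mode scheduling: rewrite each client's
# destination from the VIP to a real server chosen round-robin.
servers="192.168.2.2 192.168.2.3"
i=0
for client in 10.0.0.5 10.0.0.6 10.0.0.7 10.0.0.8; do
  set -- $servers          # split the server list into $1 $2
  shift $(( i % 2 ))       # round-robin: skip 0 or 1 entries
  rs=$1                    # the real server picked for this client
  i=$(( i + 1 ))
  echo "client $client -> VIP 192.168.1.1 DNAT to $rs"
done
```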

1. Configure the load balancer

1) Enable IP forwarding

[root@localhost ~]# vim /etc/sysctl.conf
                 …………               //part of the file omitted
net.ipv4.ip_forward = 1
[root@localhost ~]# sysctl -p
net.ipv4.ip_forward = 1
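Whether forwarding is actually active can be confirmed at any time by reading the live kernel value (no reboot is needed; `sysctl -p` applies the setting immediately):

```shell
# "1" means the kernel will forward packets between interfaces.
cat /proc/sys/net/ipv4/ip_forward
```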

2) Configure load allocation policy

[root@localhost ~]# yum -y install ipvsadm
//ipvsadm is not installed by default, so it must be installed manually
[root@localhost ~]# ipvsadm -C
//Clear any existing rules
[root@localhost ~]# ipvsadm -A -t 192.168.1.1:80 -s rr
[root@localhost ~]# ipvsadm -a -t 192.168.1.1:80 -r 192.168.2.2:80 -m -w 1
[root@localhost ~]# ipvsadm -a -t 192.168.1.1:80 -r 192.168.2.3:80 -m -w 1
[root@localhost ~]# ipvsadm-save 
-A -t localhost.localdomain:http -s rr
-a -t localhost.localdomain:http -r 192.168.2.2:http -m -w 1
-a -t localhost.localdomain:http -r 192.168.2.3:http -m -w 1
//Confirm the VIP and the added Web server nodes
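The rules created with `ipvsadm -A`/`-a` live only in the kernel and vanish on reboot. On CentOS 7 they can be saved and restored through the ipvsadm service, as a sketch (assuming the service unit shipped with the ipvsadm package):

```shell
# Save the current rules where the ipvsadm unit expects them,
# then have the service restore them at boot.
ipvsadm-save -n > /etc/sysconfig/ipvsadm
systemctl enable ipvsadm
```

The `-n` flag prints numeric addresses instead of the resolved hostnames seen in the output above.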

A detailed explanation of these commands can be found in the post: LVS Load-Balancing Cluster Explained.

2. Configure the Web server nodes

All Web server nodes use the same configuration, including the httpd service port and the website document content. In practice, each node's website documents can be stored on a shared storage device, which eliminates the need to synchronize them. In this experiment, however, each node serves a different page during debugging so that the load-balancing effect can be observed.

The first Web server node

[root@localhost ~]# yum -y install httpd
[root@localhost ~]# echo "qqqqqqqq" > /var/www/html/index.html
[root@localhost ~]# systemctl start httpd

The second Web server node

[root@localhost ~]# yum -y install httpd
[root@localhost ~]# echo "oooooo" > /var/www/html/index.html
[root@localhost ~]# systemctl start httpd

For firewall configuration, refer to the post: A Detailed Introduction to firewalld, the Firewall That Secures the Linux System. In this experiment, the firewall is turned off and SELinux is disabled.

When the client visits and refreshes several times, it will see different pages! In a real production environment the page must not keep changing, so an NFS shared storage server needs to be built.
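The alternation can be observed from the client with a short loop against the VIP (assuming `curl` is available on the client and the VIP 192.168.1.1 configured above):

```shell
# Under rr scheduling, each request should hit the other Web node in turn,
# so the two test pages alternate in the output.
for i in 1 2 3 4; do
  curl -s http://192.168.1.1/
done
```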

3. Set up the NFS shared storage server

[root@localhost ~]# mkdir -p /var/www/html
[root@localhost ~]# echo "welcome to beijing" > /var/www/html/index.html
[root@localhost ~]# vim /etc/exports
/var/www/html   192.168.2.0/24(rw,sync,no_root_squash)
[root@localhost ~]# systemctl start nfs
[root@localhost ~]# systemctl start rpcbind

Each Web server node then needs to mount it:

[root@localhost ~]# mount 192.168.2.4:/var/www/html /var/www/html
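To make the mount persist across reboots, each Web node can carry an /etc/fstab entry instead of the manual `mount` (a sketch, using the same server address 192.168.2.4; `_netdev` delays mounting until the network is up):

```shell
# /etc/fstab line on each Web server node:
192.168.2.4:/var/www/html  /var/www/html  nfs  defaults,_netdev  0 0
```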

Now the page the client sees never changes again, no matter how many times it refreshes!

-------- End of this article; thanks for reading --------


Origin blog.51cto.com/14157628/2437699