[Linux] F5 load balancer

The F5 load balancer is a hardware load-balancing appliance.


In the simplest F5 load-balancing configuration, the objects to configure are the Node, the Pool (resource pool), and the Virtual Server (VS). Their relationship is as follows: Nodes are configured first, then the Pool, and finally the Virtual Server. The Node is the most basic object; each Node corresponds to one back-end server. A Pool is a load-balancing group of Nodes that receives and processes traffic, for example a web server cluster. The BIG-IP system sends each client request to one member of the Pool (i.e. one Node) for servicing. The Pool, in turn, is associated with a Virtual Server on the BIG-IP system, so traffic arriving at the Virtual Server is distributed by the BIG-IP system to the Pool members and then passed on to the Nodes.
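To make the flow described above concrete, here is a minimal Python sketch (illustrative only, not F5 code or the iControl API) in which a Virtual Server receives client requests, hands them to its associated Pool, and the Pool forwards each request to one of its member Nodes using a simple round-robin choice. All class and attribute names here are invented for the example.

```python
import itertools


class Node:
    """A single back-end server -- the most basic BIG-IP object."""

    def __init__(self, address):
        self.address = address

    def handle(self, request):
        return f"{self.address} handled {request}"


class Pool:
    """A load-balancing group of Nodes, e.g. a web server cluster."""

    def __init__(self, nodes):
        self.nodes = nodes
        self._rr = itertools.cycle(nodes)  # simple round-robin member selection

    def dispatch(self, request):
        member = next(self._rr)            # pick one Pool member (a Node)
        return member.handle(request)


class VirtualServer:
    """The entry point: client traffic arrives here and is sent on to the Pool."""

    def __init__(self, vip, pool):
        self.vip = vip
        self.pool = pool

    def receive(self, request):
        return self.pool.dispatch(request)


# Configure in the same order as described above: Nodes first, then the Pool,
# then the Virtual Server that fronts the Pool.
nodes = [Node("10.0.0.11:80"), Node("10.0.0.12:80")]
web_pool = Pool(nodes)
vs = VirtualServer("192.168.1.100:80", web_pool)

for i in range(4):
    print(vs.receive(f"request-{i}"))
```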


Load-balancing algorithms:
The load-balancing device works on the basis of load-balancing algorithms, which fall into two categories: static load-balancing algorithms and dynamic load-balancing algorithms. The main algorithms are listed below (a sketch of several of them follows the list).

· Round Robin: requests are distributed to each server in turn, cycling through the list. If one of the servers has a failure at layer 2 through layer 7, BIG-IP takes it out of the rotation and excludes it from polling until it returns to normal.
· Ratio: each server is assigned a weighted ratio, and user requests are distributed to the servers according to that ratio. If one of the servers has a failure at layer 2 through layer 7, BIG-IP takes it out of the server queue and excludes it from subsequent request allocation until it returns to normal.
· Priority: all servers are divided into groups and each group is assigned a priority. BIG-IP sends user requests to the server group with the highest priority (within a group, requests are distributed using the round-robin or ratio algorithm). Only when all servers in the highest-priority group fail does BIG-IP send requests to the group with the next-highest priority. In practice, this gives users a form of hot standby.
· Least Connection: new connections are sent to the server currently handling the fewest connections. If one of the servers has a failure at layer 2 through layer 7, BIG-IP takes it out of the server queue and excludes it from subsequent request allocation until it returns to normal.
· Fastest: new connections are sent to the server with the fastest response time. If one of the servers has a failure at layer 2 through layer 7, BIG-IP takes it out of the server queue and excludes it from subsequent request allocation until it returns to normal.
· Observed: new requests are sent to the server with the best balance of connection count and response time. If one of the servers has a failure at layer 2 through layer 7, BIG-IP takes it out of the server queue and excludes it from subsequent request allocation until it returns to normal.
· Predictive: BIG-IP uses the current performance metrics it has collected to perform predictive analysis and, for the next time slice, sends user requests to the server expected to deliver the best performance. (The servers are monitored by BIG-IP.)
· Dynamic performance allocation (Dynamic Ratio-APM): BIG-IP collects performance parameters from the applications and application servers and dynamically adjusts the traffic allocation.
· Dynamic server supplement (Dynamic Server Act.): when the number of servers in the primary group is reduced because of failures, backup servers are dynamically added to the primary server group.
· Quality of Service (QoS): data flows are allocated according to different priority levels.
· Type of Service (ToS): data flows are allocated according to the type of service (identified in the Type of Service field).
· Rule mode: steering rules are defined for different data flows; users can edit the traffic-allocation rules themselves, and BIG-IP uses these rules to steer the data flows passing through it.
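As a rough illustration of how some of the static algorithms above differ, the sketch below implements simplified round-robin, ratio (weighted), least-connection, and priority-group selection in Python. It is a conceptual model only: the Server class, its fields, and the health handling are assumptions made for this example and do not reflect BIG-IP internals.

```python
import itertools
import random


class Server:
    def __init__(self, name, weight=1, priority=0):
        self.name = name
        self.weight = weight      # used by the Ratio algorithm
        self.priority = priority  # used by Priority grouping
        self.connections = 0      # used by Least Connection
        self.healthy = True       # a failed server is skipped until it recovers


def available(servers):
    # A server that fails a layer 2-7 check is taken out of rotation.
    return [s for s in servers if s.healthy]


def round_robin(servers):
    # Cycle through the currently healthy servers in order.
    for server in itertools.cycle(available(servers)):
        yield server


def ratio(servers):
    # Pick a server at random, in proportion to the configured weights.
    pool = available(servers)
    return random.choices(pool, weights=[s.weight for s in pool], k=1)[0]


def least_connection(servers):
    # Send the new connection to the server handling the fewest connections.
    return min(available(servers), key=lambda s: s.connections)


def priority_group(servers):
    # Use only the highest-priority group that still has healthy members;
    # within that group, fall back to least-connection selection.
    pool = available(servers)
    top = max(s.priority for s in pool)
    return least_connection([s for s in pool if s.priority == top])


servers = [
    Server("web1", weight=3, priority=10),
    Server("web2", weight=1, priority=10),
    Server("backup1", weight=1, priority=5),  # lower priority: hot standby
]

rr = round_robin(servers)
print("round robin:", [next(rr).name for _ in range(4)])
print("ratio:", ratio(servers).name)
chosen = least_connection(servers)
chosen.connections += 1
print("least connection:", chosen.name)
print("priority group:", priority_group(servers).name)
```

In this sketch, if both web1 and web2 were marked unhealthy, priority_group would fall through to backup1, which mirrors the hot-standby behaviour described for the Priority algorithm above.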


Source: www.cnblogs.com/taoshihan/p/11311361.html