The latest version of Linux operation and maintenance interview questions in 2023 (2)

  • About the author: a cloud computing and network operations engineer, sharing networking and O&M techniques and useful information every day.

  • Public account: Netdou Cloud Computing School

  • Motto: Keep your head down and be respectful

  • Personal homepage: Internet Bean's homepage

Table of contents

Write in front

11. Comparison of three load balancing modes of LVS

12. Load scheduling algorithm of LVS

13. The difference between LVS and nginx

Comparison between nginx and LVS

14. What are the functions of load balancing?

15. nginx implements load balancing distribution strategy


write in front

Hello everyone, I am Wangdou, a blogger focusing on the field of operation and maintenance. Today I bring you a special topic: operation and maintenance interview questions. As the IT industry continues to develop, interviews for operations positions are no longer limited to testing basic knowledge; they pay more attention to a candidate's practical experience, problem-solving ability, and attitude toward continuous learning. This article therefore shares some common operation and maintenance interview questions to help you better prepare for interviews and improve your competitiveness.

With the popularization of cloud computing, big data and other technologies, operation and maintenance positions are becoming more and more important in the IT field. An excellent operation and maintenance engineer must not only have a solid technical foundation, but also need to have good problem-solving skills, teamwork spirit and learning ability. Therefore, the interview is a key step in selecting excellent operation and maintenance engineers.

During the interview process, the interviewer usually examines aspects such as basic knowledge, practical experience, teamwork, and learning ability. Below, I will introduce the interview questions in these aspects one by one, and give corresponding answer ideas and techniques. I hope that this article can help you better prepare for the operation and maintenance interview and get your favorite position.

Please note that these are only a sample of common interview questions; a real interview may cover other areas as well. When preparing, in addition to mastering these questions, you should also focus on comprehensively improving your technical skills and overall quality.


 11. Comparison of three load balancing modes of LVS

Three load balancing modes: NAT, TUN (tunneling), DR

| Category | NAT | TUN | DR |
|--|--|--|--|
| Operating system | Any | Must support tunneling | Most (must support non-ARP) |
| Server network | Private network | LAN/WAN | LAN |
| Number of servers | 10-20 | 100 | More than 100 |
| Server gateway | Load balancer | Own router | Own router |
| Efficiency | Average | High | Highest |
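As a rough sketch, the three modes map directly to `ipvsadm` flags when adding real servers. The VIP and real-server addresses below are hypothetical, root privileges and the ip_vs kernel module are required, and in practice a single virtual service normally uses one mode for all of its real servers:

```shell
# Create a virtual service on the VIP with round-robin scheduling (assumed VIP 10.0.0.100).
ipvsadm -A -t 10.0.0.100:80 -s rr

# Add real servers; the final flag selects the forwarding mode:
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.11:80 -m   # -m: NAT (masquerading)
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.12:80 -i   # -i: TUN (IP tunneling)
ipvsadm -a -t 10.0.0.100:80 -r 192.168.1.13:80 -g   # -g: DR (direct routing)
```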

 12. Load scheduling algorithm of LVS

- Round-robin scheduling (rr)
- Weighted round-robin scheduling (wrr)
- Least-connection scheduling (lc)
- Weighted least-connection scheduling (wlc)
- Locality-based least-connection scheduling (lblc)
- Locality-based least-connection scheduling with replication (lblcr)
- Destination address hash scheduling (dh)
- Source address hash scheduling (sh)
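As an illustration only, here is a toy Python sketch (not the kernel's IPVS implementation) of the simplest and the last of these algorithms, using made-up server addresses:

```python
import hashlib
from itertools import cycle

# Hypothetical pool of real servers.
servers = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]

# Round-robin: hand out servers in turn, 1:1.
_rr = cycle(servers)

def round_robin():
    return next(_rr)

# Source address hashing: the same client IP always maps to the
# same real server, keeping a client "sticky" to one backend.
def source_hash(client_ip):
    digest = hashlib.md5(client_ip.encode()).hexdigest()
    return servers[int(digest, 16) % len(servers)]
```

The weighted variants work the same way, except each server appears in the rotation in proportion to its weight.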

13. The difference between LVS and nginx

Advantages of LVS:

1. Strong load capacity. LVS's working logic is very simple and it operates at layer 4 of the network, doing nothing but request distribution; no application traffic passes through it, so efficiency is rarely a concern. LVS itself seldom fails, and when problems do occur they are usually caused by something else (memory, CPU, etc.).

2. Low configurability. This is usually a major disadvantage, but here it is also a major advantage: since there are few configurable options, you rarely need to touch it except to add or remove servers, which greatly reduces the chance of human error.

3. Stable operation. Its strong load capacity naturally brings high stability, and LVS has mature dual-machine hot backup solutions, so there is little need to worry about the balancer itself. If a node fails, LVS identifies it automatically, so the overall system is very stable.

4. No application traffic. LVS only distributes requests; response traffic does not pass back through it, so it can be used for line diversion, and the balancer's I/O performance is not affected by heavy traffic.

5. Broad applicability. Because LVS works at layer 4, it can load balance almost any application: HTTP, databases, chat services, and so on.

 Comparison between nginx and LVS:

nginx works at layer 7 of the network, so it can apply routing strategies based on the HTTP application itself, such as domain names or directory structures. LVS has no such capability, so nginx alone fits far more scenarios than LVS. However, these useful features also make nginx far more adjustable than LVS, so you will need to touch it often; and as noted in LVS's second advantage above, the more you touch a system, the greater the probability of human-induced problems.

nginx depends less on the network: in theory, as long as the machine is reachable by ping and web access works, nginx can connect to it. nginx can also straddle internal and external networks; a node with both internal and external interfaces effectively has a backup line on a single machine. LVS depends more heavily on the network environment: it works best when the servers are in the same network segment and LVS uses direct routing (DR) mode to divert traffic. Also note that LVS needs at least one IP from the hosting provider to use as the virtual IP (VIP).

nginx is relatively simple to install and configure, and convenient to test, since errors can basically be printed to its logs. Installing, configuring, and testing LVS takes considerably longer because, as mentioned above, LVS depends heavily on the network; in many cases a failed setup is caused by network problems rather than configuration problems, and tracking those down is much more troublesome.

nginx can also withstand high loads and is stable, but its load capacity and stability are several levels below those of LVS: all traffic passes through nginx, so it is limited by machine I/O and configuration, and its own bugs are unavoidable. nginx also has no ready-made dual-machine hot backup solution, so running it on a single machine carries risk, and a single machine is always a single point of failure.

nginx can detect back-end server failures, such as error status codes or timeouts returned while a server processes a page, and will resubmit a failed request to another node. ldirectord in the LVS ecosystem can also monitor the internal state of servers, but LVS's principle of operation prevents it from resending requests. For example, if a user is uploading a file and the node handling the upload fails mid-upload, nginx will switch the upload to another server for reprocessing, whereas with LVS the connection is simply broken.

Use both together:

nginx acts as a reverse proxy for HTTP and can use upstream to balance HTTP requests across servers in several ways. Because it uses asynchronous forwarding, if a request to one server fails it immediately switches to another server, until the request succeeds or the last server also fails. This maximizes the system's request success rate.

LVS adopts a synchronous request forwarding strategy. To explain the difference: with synchronous forwarding, after the LVS server receives a request it immediately redirects it to a back-end server, and the client then communicates directly with that server. With asynchronous forwarding, nginx keeps the client connection open while initiating a new request with the same content to the backend; after the backend returns a result, nginx returns it to the client.

Going further: when nginx handles a request, all request and response traffic passes through nginx; with LVS, only the request traffic passes through LVS, and the response traffic is returned directly over the back-end server's own network.

In other words, when the back-end server pool is large, nginx's network bandwidth becomes a serious bottleneck.

But if you use LVS alone for load balancing, a problem on the back-end server that receives a request means the request fails outright. If instead you place a layer of nginx instances (several of them) behind LVS, each fronting several application servers, you combine the advantages of both: you avoid the traffic-concentration bottleneck of a single nginx, and you avoid the all-or-nothing failure behavior of bare LVS.
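A minimal sketch of the nginx layer in such a setup might look like the following (addresses and ports are hypothetical). The `proxy_next_upstream` directive is what gives the retry-on-failure behavior described above:

```nginx
upstream app_servers {
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://app_servers;
        # If one server errors or times out, retry the request on the
        # next upstream server instead of failing the client request.
        proxy_next_upstream error timeout http_502;
    }
}
```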

14. What are the functions of load balancing?

1. Forwarding: according to some algorithm (weighted, round-robin), client requests are forwarded to different application servers, reducing the pressure on any single server and increasing system concurrency.
2. Fault removal: heartbeat checks determine whether an application server is currently working; if a server goes down, requests are automatically sent to other application servers.
3. Recovery addition: if a failed application server is detected to have recovered, it is automatically added back to the pool handling user requests.
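In nginx, for example, fault removal and recovery addition correspond to the `max_fails`/`fail_timeout` parameters on upstream servers (a sketch with hypothetical addresses; note that open-source nginx performs these checks passively on live requests rather than with an active heartbeat):

```nginx
upstream backend {
    # After 3 failed attempts within 30s, a server is marked down (fault removal);
    # once fail_timeout elapses nginx tries it again and, if it responds,
    # returns it to rotation (recovery addition).
    server 192.168.1.11:8080 max_fails=3 fail_timeout=30s;
    server 192.168.1.12:8080 max_fails=3 fail_timeout=30s;
}
```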

15. nginx implements load balancing distribution strategy 

Allocation algorithms currently supported by nginx's upstream:

1) Round-robin: handle requests in turn, 1:1 (the default). Each request is assigned to a different application server in chronological order; if a server goes down it is automatically removed, and the remaining servers continue to be polled.

2) Weight: you can configure a weight to set the polling probability; the weight is proportional to the share of traffic, which is useful when application server performance is uneven.

3) ip_hash: each request is assigned according to a hash of the client IP, so each visitor always reaches the same application server, which solves the session-sharing problem.
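The three strategies above can be sketched as upstream blocks (backend addresses are hypothetical):

```nginx
# 1) Round-robin (default): requests are handed out in turn.
upstream pool_rr {
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}

# 2) Weight: the weight=3 server receives roughly three times as many requests.
upstream pool_weighted {
    server 192.168.1.11:8080 weight=3;
    server 192.168.1.12:8080 weight=1;
}

# 3) ip_hash: each client IP is hashed to a fixed server (session stickiness).
upstream pool_iphash {
    ip_hash;
    server 192.168.1.11:8080;
    server 192.168.1.12:8080;
}
```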


Finally, I hope this article can help you achieve good results in the operation and maintenance interview and achieve your career goals. If you have any other questions or need more help, feel free to ask me. I wish you greater success in the field of operation and maintenance!

Will continue to update!

Origin blog.csdn.net/yj11290301/article/details/135205106