Linux operation and maintenance engineer interview questions (2)

I wish you all the best of luck in finding the job you want.
Keep learning and you won't be left behind.
As long as the earth keeps turning, we keep working.
Opportunities are always reserved for those who are prepared.
Keep at it, fellow workers!

1 The process of visiting a website

  1. Open the browser and enter the URL. The browser first checks its local cache; if the page is cached it is opened directly, otherwise DNS is used for domain name resolution;
  2. The browser issues a DNS request and the IP address is looked up recursively: HOSTS file --> local DNS --> upstream DNS (up to the root DNS servers);
  3. After DNS resolution converts the domain name to an IP address, the browser locates the web server by that IP and establishes a TCP connection with a three-way handshake;
  4. After the handshake succeeds, the browser sends an HTTP request; by default it requests index.html;
  5. After receiving the request, the server sends the web page file back to the browser;
  6. The browser parses the returned HTML file and presents the rendered page to the user;
  7. The browser stores the returned data and status code in its cache for faster subsequent access.
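
As a rough illustration (www.example.com stands in for a real site), steps 2 through 5 can be reproduced from a shell:

# Step 2: resolve the domain name to an IP address via DNS
dig +short www.example.com

# Steps 3-5: open a TCP connection, send an HTTP request and print the
# request and response headers (-v shows the connection details)
curl -v http://www.example.com/index.html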

2 TCP three-way handshake and four-way wave

Three-way handshake

  1. The client sends a connection request to the server. The segment carries a randomly generated sequence number seq (x), and the SYN flag is set to 1, indicating a request to establish a TCP connection;
  2. After receiving the request, the server replies with a segment of its own, carrying a randomly generated sequence number seq (y) and with SYN set to 1, since the connection must be confirmed in both directions. It also sets the ACK flag to 1 and fills the acknowledgment field with the client's seq (x) plus 1, i.e. ack (x+1). When the client receives this reply, it knows its connection request has been accepted; the ack value can also be read as the seq the server expects from the client next;
  3. After receiving the server's reply, the client sends a segment with sequence number seq (x+1), sets the ACK flag to 1, and acknowledges the server's seq (y) with ack (y+1). When the server receives this, it knows its own connection request has been confirmed; likewise, the ack value is the seq the client expects from the server next.

The purpose of the three-way handshake is to synchronize the sequence numbers at both ends and to confirm that both parties can send and receive data. If the first handshake fails, the client retransmits the SYN packet; if the second fails, the server retransmits the SYN+ACK packet; if the third fails, the client retransmits the ACK packet.
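
A sketch of how to watch the handshake in practice with tcpdump (the interface name and address are assumptions); the flags column prints SYN as [S], SYN+ACK as [S.] and the final ACK as [.]:

# Capture handshake traffic to one host on port 80
tcpdump -nn -i eth0 'tcp port 80 and host 192.0.2.10'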

Four-way wave

  1. The client actively sends a close request to the server, with sequence number seq = u and the FIN flag set to 1, indicating that it wants to close the TCP connection;
  2. After receiving it, the server sends an ACK segment for confirmation: ACK set to 1, acknowledgment field ack (u+1), and its own sequence number seq = v. This segment indicates that the close request has been received;
  3. The server may still have data to send, so the client has to wait a while. When the server finishes sending, it sends its own FIN segment, with FIN set to 1, ACK set to 1, sequence number seq = w and acknowledgment number ack = u+1, indicating that its side of the connection is done too;
  4. After receiving the server's FIN, the client replies with an ACK segment: ACK set to 1, sequence number seq = u+1, acknowledgment number ack = w+1, and enters the TIME_WAIT state. After waiting for a period of time the client closes the connection; the server closes as soon as it receives this ACK.

The purpose of the four-way wave is to make sure both ends know the other side has closed the connection and that no more data will be sent. If the client receives no ACK after sending its FIN, it can retransmit the FIN; the server can do the same for its own FIN. If the client receives a duplicate FIN before TIME_WAIT ends, it simply acknowledges it again.
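
The closing states can be inspected directly; a sketch using ss (the state filter requires a reasonably recent ss):

# List connections currently sitting in TIME_WAIT
ss -tan state time-wait

# Count them quickly
ss -tan state time-wait | wc -l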

3 What kinds of virtual hosts do Apache and Nginx support

  • IP-based virtual hosts
  • Domain-name-based virtual hosts
  • Port-based virtual hosts
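
A minimal nginx sketch of all three styles (the addresses, names and paths are made up):

# Domain-name-based: one IP and port, distinguished by server_name
server {
    listen 80;
    server_name www.a.example.com;
    root /var/www/a;
}

# IP-based: the site is bound to a specific address
server {
    listen 192.0.2.10:80;
    root /var/www/b;
}

# Port-based: same address, different port
server {
    listen 8080;
    root /var/www/c;
}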

4 The difference between TCP and UDP

  1. TCP is a connection-oriented protocol, while UDP is connectionless. TCP is one-to-one transmission; UDP supports one-to-one, one-to-many, many-to-one and many-to-many communication;
  2. TCP provides reliable transmission, ensuring that data reaches its destination complete and in order; UDP offers no reliability guarantee, and data may be lost, duplicated or arrive out of order;
  3. TCP splits data into segments, each with a sequence number, which preserves order and integrity. UDP encapsulates data into independent datagrams, each of which may be lost, duplicated or reordered;
  4. TCP is byte-stream-oriented: it treats application-layer messages as a byte stream, splits the stream into blocks of varying size and adds a TCP header. UDP is message-oriented: packets handed down from the application layer are neither split nor merged, only a UDP header is added;
  5. UDP is faster than TCP because it lacks TCP's reliability guarantees and its complex flow-control and congestion-control mechanisms;
  6. UDP suits applications that transfer little data but demand speed, such as video, audio and games; TCP suits applications that transfer large amounts of data and demand reliability and integrity, such as file transfer and mail;
  7. TCP supports a range of reliability measures, including ordered delivery, retransmission, flow control and congestion control; UDP provides only the most basic transport.
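
A quick way to feel the difference is netcat (the ports are arbitrary, and flag syntax varies slightly between netcat variants): the TCP pair must complete a handshake before any data flows, while the UDP pair simply fires datagrams with no connection at all.

# TCP: run in two terminals; the connection is established before data moves
nc -l 9000               # terminal 1: listen
nc 127.0.0.1 9000        # terminal 2: connect, then type

# UDP: no handshake; datagrams are just sent
nc -u -l 9001            # terminal 1
nc -u 127.0.0.1 9001     # terminal 2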

5 The difference between nginx and apache

Nginx:

  • Lightweight and written in C; serving the same site it uses less memory and fewer resources;
  • Strong under concurrency: nginx uses epoll and kqueue as its event models, handling requests asynchronously and without blocking, so its load capacity is far higher than Apache's blocking model. Under high concurrency nginx keeps resource consumption low and performance high, while Apache is prone to runaway process counts and denial of service when PHP processing is slow or front-end pressure is high;
  • nginx handles static files well; its static performance is more than three times Apache's;
  • nginx's design is highly modular, and writing modules is relatively simple;
  • nginx configuration is simple, and its regex support makes many things easier. After changing the configuration you can run a test with -t to check it for problems (see the example at the end of this question). Apache configuration is complicated, and if you only discover a mistake on restart, the service goes down;
  • As a load-balancing server, nginx supports both layer 4 and layer 7 load balancing, and layer 7 balancing can help defend against DDoS attacks;
  • nginx is itself a reverse proxy server and can also serve as a mail proxy server.

Apache:

  • Apache's rewrite is more powerful than nginx's; when rewrites are frequent, use Apache;
  • Apache has been developed for so long that there is a module for almost anything you can think of;
  • Apache is more mature and has fewer bugs, while nginx has comparatively more;
  • Apache's PHP support is simple and direct, while nginx must hand PHP off to a separate backend;
  • Apache has the advantage for dynamic requests, where nginx is much weaker; typically dynamic requests go to Apache, while nginx suits static content and reverse proxying;
  • Apache is still the current mainstream, with rich features, mature technology and an established development community.

The core difference is that Apache uses a synchronous multi-process model, one connection per process, while nginx is asynchronous, so one process can serve many connections (on the order of tens of thousands).

Choose Apache when you need stability, nginx when you need high performance.
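
The config-test workflow mentioned above, as a concrete sketch:

# Check the nginx configuration for errors without touching the running service
nginx -t

# Reload workers gracefully only if the check passes
nginx -t && nginx -s reload

# Apache's equivalent check
apachectl configtest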

6 What is a reverse proxy, what is a forward proxy, and what is the difference?

Reverse and forward proxies are both applications of a proxy server; the main difference between them is the direction of proxying and which party is being proxied.

A forward proxy sits between the client and the target server and sends requests to the target on the client's behalf; the client does not access the target directly. Forward proxies are generally used for encryption, caching, access control and content filtering. Circumvention tools are a common forward-proxy application.

A reverse proxy sits between the client and the target servers and provides service to clients on the servers' behalf; clients cannot reach the target servers directly, and every request passes through the reverse proxy. The reverse proxy can route each request to a different backend, achieving load balancing and high availability. Many large websites and applications use reverse proxies for load balancing and caching.

The key distinction is whom the proxy represents: a forward proxy represents the client and forwards its requests to the target server; a reverse proxy represents the server, accepting client requests and returning the server's responses.
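
A minimal nginx reverse-proxy sketch, assuming two hypothetical backends on a private network:

# Clients talk only to this proxy; the backends stay hidden behind it
upstream backend {
    server 10.0.0.11:8080;
    server 10.0.0.12:8080;
}

server {
    listen 80;
    location / {
        proxy_pass http://backend;                # pick a backend per request
        proxy_set_header Host $host;              # preserve the original Host header
        proxy_set_header X-Real-IP $remote_addr;  # pass the real client address
    }
}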

7 The difference between cookie and session

  1. Storage location: a cookie is saved in the client browser, while a session is saved on the server side.
  2. Security: because cookies are stored in the browser they can be stolen or tampered with, which carries some risk; sessions are stored on the server and are relatively more secure.
  3. Storage capacity: a cookie is limited, generally to about 4KB of data; a session has no such fixed limit (it is bounded only by server resources) and can hold more data.
  4. Scope: a cookie is visible to every page under its domain, so data can be shared across pages; a session is scoped to the conversation between one user and the server, so data is shared only within that session.
  5. Lifetime: a cookie can be given an expiry date and persist in the browser for a long time; a session generally expires when the user closes the browser or is idle beyond a timeout.

To sum up, cookies suit non-sensitive data such as user preferences and shopping-cart contents, while sessions suit sensitive data such as login state and permission information. In actual development, the mechanism is chosen according to the specific scenario and requirements.
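
The division of labor shows up in the HTTP headers; a sketch with curl (the URL and the SESSIONID cookie name are assumptions, since the name varies by framework):

# Log in; -c saves the cookie the server sets, typically just a session id,
# e.g. "Set-Cookie: SESSIONID=abc123; HttpOnly" -- the data stays server-side
curl -v -c cookies.txt http://www.example.com/login

# Later requests send the cookie back so the server can look up the session
curl -b cookies.txt http://www.example.com/profile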

8 nginx tuning

Bind worker processes to CPUs; set the maximum upload file size; optimize the event-driven model; raise file descriptor limits; configure anti-hotlinking; hide the software name and version number; customize default error pages such as 404 and 502 so they are user-friendly; block malicious domain-name resolution and forbid access to the site by bare IP address; set strict permissions on the web site directories; use the robots protocol and HTTP_USER_AGENT checks against crawlers; limit per-IP concurrent connections to mitigate DDoS; enable the status monitoring module; and so on. limit_rate caps the transfer rate per connection; client_max_body_size sets the maximum size of files users may upload.

chatgpt:

  1. Adjust the number of worker processes: by default Nginx starts one worker per CPU core; this can be tuned to the server's performance and load. In general the worker count can be set to two or three times the number of cores.
  2. Adjust the file handle limit: Nginx's default limit is 1024, which can be raised to match the actual load, either by changing the system limits or by setting the "worker_rlimit_nofile" parameter in the Nginx configuration file.
  3. Enable TCP optimization parameters: Nginx provides TCP-related directives such as "tcp_nopush", "tcp_nodelay" and "keepalive_timeout" that improve connection performance and reliability.
  4. Enable Gzip compression: Gzip reduces the amount of data transmitted over the network and speeds up page loads; enable it with the "gzip" directive in the Nginx configuration file.
  5. Enable HTTP caching: caching reduces server load and page load time; set the "proxy_cache_path" directive in the Nginx configuration file to enable it.
  6. Adjust Nginx resource usage: tune parameters such as "worker_processes" and "worker_connections" to raise capacity and improve performance and reliability.
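
Pulling several of these together, a hedged nginx.conf sketch (the values are illustrative, not recommendations):

worker_processes auto;              # one worker per CPU core
worker_rlimit_nofile 65535;         # raise the per-worker file handle limit

events {
    use epoll;                      # event-driven model on Linux
    worker_connections 10240;       # connections per worker
}

http {
    sendfile             on;
    tcp_nopush           on;        # send headers and file start together
    tcp_nodelay          on;        # do not delay small packets on keepalive
    keepalive_timeout    65;
    server_tokens        off;       # hide the nginx version number
    client_max_body_size 20m;       # cap upload size
    gzip                 on;        # compress responses
}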

9 How to solve a large number of TIME_WAIT connections in the system

In one incident our servers came under a DDoS attack, which affected us badly. Although we closed the front-end user entry point and the attack stopped, the servers were left with a huge number of TIME_WAIT connections, so new connections failed to establish and a lot of system resources were tied up. Our temporary fix was to add the following parameters to the system configuration file sysctl.conf:

# fast recycling of TIME_WAIT sockets (unsafe behind NAT; removed in Linux 4.12+)
net.ipv4.tcp_tw_recycle = 1
# allow reusing TIME_WAIT sockets for new outbound connections
net.ipv4.tcp_tw_reuse = 1
# seconds a socket may stay in FIN-WAIT-2 before being closed
net.ipv4.tcp_fin_timeout = 300
# enable SYN cookies to mitigate SYN-flood attacks
net.ipv4.tcp_syncookies = 1

After making the change, run the sysctl -p command for it to take effect immediately; the number of TIME_WAIT connections then gradually comes down.

chatgpt:

TIME_WAIT is a state in the TCP protocol indicating that a connection has been closed but must still wait a while for delayed packets to arrive. A large number of connections in the TIME_WAIT state can waste system resources and degrade performance. Some solutions:

  1. Adjust TCP parameters: the number and duration of TIME_WAIT sockets can be controlled through parameters such as tcp_tw_recycle and tcp_tw_reuse. Note, however, that adjusting these in a production environment is not generally recommended because of the potential risks.
  2. Increase server resources: the number of TIME_WAIT connections is related to system resources; adding memory, CPU and network bandwidth can reduce their impact on the system.
  3. Use a connection pool: a pool reuses established TCP connections and avoids frequently opening and closing them, which reduces the number of TIME_WAIT connections and improves performance.
  4. Use TCP fast recycling: a timestamp-based mechanism that lets the system recycle TIME_WAIT connections quickly; it relies on timestamps being enabled via net.ipv4.tcp_timestamps=1.

In short, solving a TIME_WAIT problem requires weighing the specific application scenario against the system's resources to improve performance and reliability, while taking care to avoid incorrect parameters and configurations that introduce new risks.
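
Before touching any parameter it helps to size the problem; a sketch (output details vary slightly across ss versions):

# Count sockets per TCP state; a huge TIME-WAIT figure confirms the diagnosis
ss -tan | awk 'NR>1 {++s[$1]} END {for (k in s) print k, s[k]}'

# Apply sysctl.conf changes such as the ones above without rebooting
sysctl -p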

10 What to do with a new server, and what system optimization to perform

Change the SSH port; on machines exposed to the public network, disable password login and choose key-based login instead (a sketch follows this list);

Prohibit root login; log in as an ordinary user wherever possible and grant sudo permissions;

Configure time synchronization; rename the network interface to the traditional eth0 naming scheme;

Turn on the firewall and configure the firewall rules;

Optimize kernel parameters and adjust resource limits, such as tuning TCP parameters and raising the number of file descriptors;

Configure a local yum repository or a domestic mirror, and install common software packages, such as tcpdump and other tools.
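
A sketch of the SSH items above in /etc/ssh/sshd_config (the port number is only an example):

# /etc/ssh/sshd_config (sketch)
# Move off the default port 22
Port 2222
# Forbid direct root login; use an ordinary user plus sudo
PermitRootLogin no
# Keys only on public-facing machines
PasswordAuthentication no
PubkeyAuthentication yes

# Validate the configuration, then reload sshd only if it parses cleanly
sshd -t && systemctl reload sshd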

chatgpt:

  1. Update the system and software packages: first, bring the operating system and packages up to the latest versions to get the newest security patches and features.
  2. Configure the firewall and security policies: these protect the server from network attacks and malware.
  3. Configure remote access: remote access lets administrators manage the server conveniently, but security must be considered, for example by using SSH encrypted connections.
  4. Install necessary software: editors, log-analysis tools and monitoring tools make the server easier to manage and maintain.
  5. Configure system resource limits: limits prevent individual processes from occupying too many resources and causing crashes or performance degradation.
  6. Configure system parameters: tuning the number of file handles, memory sizes, network buffers and so on improves performance and reliability.
  7. Configure system logging: logs let administrators check the system's running state and error messages and find and fix problems promptly.
  8. Configure system backups: backups allow data and services to be restored when problems occur, avoiding data loss and service interruption.

These are basic optimization suggestions for a new server; the concrete methods and steps should be weighed against the specific application scenario and system configuration.

The interview questions above are just a personal summary, written down as they came to mind and in no particular order. If anything is wrong, please leave a comment and I will correct it promptly.

Original link: blog.csdn.net/qq_45520116/article/details/129113854
