Common computer network questions in front-end interviews

1. What is the difference between Post and Get?

POST and GET are two HTTP request methods.

(1) In terms of application scenarios, GET is an idempotent request and is generally used in scenarios that do not affect server resources, such as requesting a web page. POST is not idempotent and is generally used in scenarios that do modify server resources, such as registering a user.

(2) Because of these different application scenarios, browsers generally cache GET requests but rarely cache POST requests.

(3) In terms of message format, the entity body of a GET request is usually empty, while the entity body of a POST request generally carries the data sent to the server.

(4) A GET request can also place its parameters in the URL and send them to the server. Compared with a POST request, this is less secure in one respect: the requested URL is kept in the browser history. In addition, because browsers limit URL length, the amount of data a GET request can send is limited; this restriction comes from browsers, not from the RFC. POST also supports more data types when passing parameters.
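As a concrete illustration (the endpoints and field names below are hypothetical), a GET request carries its parameters in the URL query string and has no body, while a POST request carries its parameters in the request body:

```typescript
// GET: parameters travel in the URL query string; no request body.
const listUrl = new URL("https://example.com/api/articles"); // hypothetical endpoint
listUrl.searchParams.set("page", "1");
const listResponse = await fetch(listUrl, { method: "GET" });

// POST: parameters travel in the request body, typically as JSON or form data.
const createResponse = await fetch("https://example.com/api/users", {
  method: "POST",
  headers: { "Content-Type": "application/json" },
  body: JSON.stringify({ name: "alice", password: "secret" }),
});
```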


2. Why must three random numbers be used in TLS/SSL to generate the "session key"?

Both the client and the server need to generate random numbers to ensure that the key generated for each session is different. Three random numbers are used because the SSL protocol does not assume by default that every host can generate a truly random number. If only one pseudo-random number were used to generate the key, it would be easy to crack. Using three random numbers increases the degrees of freedom: one pseudo-random number may be predictable, but three pseudo-random numbers combined are very close to truly random, so this approach maintains the randomness and security of the generated key.
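A simplified sketch of the idea follows. This is not the actual TLS key schedule or PRF; it only illustrates mixing the client random, the server random, and the pre-master secret into one session key, here using Node's HKDF:

```typescript
import { hkdfSync, randomBytes } from "node:crypto";

// The three random values exchanged during the handshake (illustrative only).
const clientRandom = randomBytes(32);    // sent in ClientHello
const serverRandom = randomBytes(32);    // sent in ServerHello
const preMasterSecret = randomBytes(48); // generated by the client, sent encrypted with the server's public key

// Mix all three into a session key. Real TLS uses its own PRF / key schedule,
// but the principle is the same: the key depends on randomness from both sides.
const sessionKey = hkdfSync(
  "sha256",
  preMasterSecret,                             // input keying material
  Buffer.concat([clientRandom, serverRandom]), // salt
  "illustrative master secret",                // info / context label
  32                                           // derive a 256-bit key
);
console.log(Buffer.from(sessionKey).toString("hex"));
```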


3. How to restore the SSL connection after it is disconnected?

There are two methods to resume a broken SSL connection: one uses a session ID, the other uses a session ticket.

With a session ID, each session has a number. When the session is interrupted and the client reconnects, it presents this number; if the server has a record of it, both parties can continue to use the previous key without regenerating one. All current browsers support this method. One disadvantage is that the session ID exists on only one server: if the request is routed to another server by load balancing, the session cannot be resumed.

The other way is the session ticket. The session ticket is sent by the server to the client during the previous session. It is encrypted so that only the server can decrypt it, and it contains the information of that session, such as the session key and the encryption method. In this way, even if the request is routed to a different server, as long as the server can decrypt the ticket it can recover the information of the previous session, and there is no need to regenerate the session key.
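A hedged sketch of resumption from the client's point of view, using Node's tls module (the host is just an example): the client saves the session data from the first connection and offers it back on the next one.

```typescript
import { connect } from "node:tls";

// First connection: perform a full handshake and remember the session data.
const first = connect({ host: "example.com", port: 443, servername: "example.com" }, () => {
  // Opaque session data (session ID / ticket); for TLS 1.3 the ticket is
  // delivered via the socket's 'session' event instead of getSession().
  const savedSession = first.getSession();
  first.end();

  // Later connection: offer the saved session so the server can resume it
  // without a full key exchange (the server may still decline and do a full handshake).
  const second = connect(
    { host: "example.com", port: 443, servername: "example.com", session: savedSession },
    () => {
      console.log("resumed:", second.isSessionReused());
      second.end();
    }
  );
});
```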


4. What guarantees the security of the RSA algorithm?
The difficulty of factoring extremely large integers is what determines the reliability of the RSA algorithm. In other words, the harder it is to factor a very large integer, the more reliable RSA is. Today a 1024-bit RSA key is no longer considered sufficiently secure, while a 2048-bit key is currently considered secure.
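For reference, a minimal sketch of generating and using a 2048-bit RSA key pair with Node's crypto module (not a production configuration):

```typescript
import { generateKeyPairSync, publicEncrypt, privateDecrypt } from "node:crypto";

// The modulus n = p * q is 2048 bits; recovering p and q from n is exactly
// the factoring problem an attacker would have to solve to break the key.
const { publicKey, privateKey } = generateKeyPairSync("rsa", {
  modulusLength: 2048,
});

// Encrypt with the public key, decrypt with the private key.
const ciphertext = publicEncrypt(publicKey, Buffer.from("hello"));
const plaintext = privateDecrypt(privateKey, ciphertext);
console.log(plaintext.toString()); // "hello"
```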


5. Why does DNS use UDP as the transport layer protocol?

The main reason DNS uses UDP as its transport-layer protocol is to avoid the connection-setup delay of TCP. To obtain the IP address of a domain name, it is often necessary to query several name servers in turn; if TCP were used, every query would incur a connection-setup delay, which would make the DNS service very slow. Most address lookups are triggered when the browser requests a page, so this would make page load times too long.
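For example, a lookup through Node's resolver (the domain is just an example) is a single UDP query/response round trip to each name server, with no handshake beforehand:

```typescript
import { promises as dns } from "node:dns";

// Each query the resolver sends travels in one UDP datagram, and the answer
// comes back in one datagram as well, so there is no connection-setup cost.
const addresses = await dns.resolve4("example.com");
console.log(addresses); // e.g. ["93.184.216.34"]
```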

Using UDP as the DNS transport does have a problem. For historical reasons, the minimum MTU that links are required to support is 576 bytes, so to keep the whole message within 576 bytes, the UDP payload of a DNS message is limited to 512 bytes. Once a DNS query or response exceeds 512 bytes, the UDP-based DNS message is truncated to 512 bytes, and the response the user receives may be incomplete. Unlike TCP, the message is not split into multiple segments for transmission: because UDP does not maintain connection state, there is no way to tell that several segments belong to the same piece of data, so the extra data is simply cut off. To solve this problem, the query can be retried over TCP.

Another problem with DNS is security: we cannot be sure that the response we receive is genuine, because responses can be forged. DNS over HTTPS now exists to address this problem.


6. What happens when you enter Google.com in the browser and press Enter?

(1) First, the URL is parsed to determine the transport protocol and the path of the requested resource. If the protocol or the host name in the entered URL is invalid, the content entered in the address bar is passed to a search engine instead. If there is no problem, the browser checks the URL for illegal characters and escapes them before continuing.
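The parsing step can be seen directly with the standard URL API, which mirrors what the browser does (a small sketch):

```typescript
// Valid input: protocol, host and path can all be extracted.
const url = new URL("https://www.google.com/search?q=hello world");
console.log(url.protocol); // "https:"
console.log(url.hostname); // "www.google.com"
console.log(url.pathname); // "/search"
console.log(url.search);   // "?q=hello%20world"  <- illegal characters are escaped

// Invalid input: no protocol/host can be recognized, so a browser would
// treat the text as a search query instead.
try {
  new URL("what is a cdn");
} catch {
  console.log("not a URL, hand it to the search engine");
}
```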

(2) The browser then checks whether the requested resource is in the cache. If it is cached and has not expired, it is used directly; otherwise a new request is sent to the server.

(3) Next, the browser needs the IP address of the domain name in the URL. It first checks whether the IP address is cached locally; if so, it is used, and if not, a request is sent to the local DNS server. The local DNS server first checks its own cache. If there is no cached record, it queries a root name server to obtain the address of the responsible top-level domain name server, then queries that top-level domain server to obtain the address of the responsible authoritative name server, and finally queries the authoritative name server to obtain the IP address of the domain, which the local DNS server returns to the requesting host. The request from the user to the local DNS server is a recursive query, while the requests from the local DNS server to the name servers at each level are iterative queries.

(4) After the browser obtains the IP address, data transmission also requires the MAC address of the destination host. When the application layer hands the data to the transport layer, TCP adds the source and destination port numbers and passes the segment to the network layer. The network layer uses the local IP address as the source address and the obtained IP address as the destination address, then passes the packet to the data link layer, which needs the MAC addresses of both communicating parties. The local machine's MAC address is the source MAC address; the destination MAC address depends on the situation. Using the local machine's subnet mask, we can determine whether the destination host is in the same subnet. If it is, the ARP protocol can be used to obtain the destination host's MAC address. If it is not, the request must be forwarded to the gateway, which forwards it on our behalf; the gateway's MAC address can also be obtained via ARP, and in that case the destination MAC address is the gateway's address.

(5) Next comes the TCP three-way handshake to establish the connection. First, the client sends a SYN segment with a random initial sequence number to the server. After receiving it, the server replies to the client with a SYN-ACK segment that acknowledges the connection request and carries its own random initial sequence number. When the client receives the server's acknowledgement, it enters the established state and sends an ACK segment back to the server. After the server receives this acknowledgement, it also enters the established state, and the connection between the two parties is established.
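From application code, the handshake happens when a socket connects; the connect callback only runs after it completes. A sketch using Node's net module (the host is just an example):

```typescript
import { createConnection } from "node:net";

// createConnection() triggers SYN -> SYN/ACK -> ACK under the hood.
const socket = createConnection({ host: "example.com", port: 80 }, () => {
  // Reaching this callback means the three-way handshake succeeded
  // and both sides are in the ESTABLISHED state.
  console.log("connected from port", socket.localPort, "to", socket.remoteAddress);
  socket.end(); // this starts the four-step close (FIN/ACK exchanges)
});
```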

(6) If HTTPS is used, there is a TLS four-way handshake before communication. First, the client sends the server the protocol version it uses, a random number, and the encryption methods it supports. The server confirms the encryption method and sends back its own random number and its digital certificate. The client checks whether the certificate is valid; if it is, it generates another random number, encrypts it with the public key in the certificate, sends it to the server, and also sends a hash of all the previous handshake content for the server to verify. The server decrypts the data with its own private key and likewise sends a hash of all the previous content for the client to verify. At this point both parties hold three random numbers, and each uses them, together with the previously agreed encryption method, to generate the session key. From then on, the two parties encrypt their data with this key before transmission.

(7) When the page request reaches the server, the server returns an HTML file as the response. After the browser receives the response, it parses the HTML file and starts the page rendering process.

(8) The browser first builds a DOM tree from the HTML file and a CSSOM tree from the parsed CSS files. If it encounters a script tag, it checks whether the tag has the defer or async attribute; otherwise, loading and executing the script blocks page rendering. When the DOM tree and the CSSOM tree are ready, a render tree is built from them. The page is then laid out according to the render tree, and finally the browser's UI backend paints the page. At this point the whole page is displayed.

(9) The last step is the four-way wave that closes the TCP connection.


7. Talk about CDN services.

A CDN is a content delivery network. It uses multiple servers located in different regions and with different operators to cache the origin site's resources, giving users nearby access. In other words, the user's request is not sent directly to the origin site but to a CDN server, and the CDN directs the request to the nearest server that holds the resource. This helps improve the access speed of the website and also reduces the load on the origin server.


8. What are forward proxy and reverse proxy?

The proxy we usually talk about is the forward proxy. A forward proxy hides the real client: the server does not know who the real client is, and the proxy server requests the service on the client's behalf.

A reverse proxy hides the real server. When we request a website, there may be thousands of servers behind it serving us, but we do not know, and do not need to know, which one it is; we only need to know the reverse proxy server, which forwards our request to the real server. Reverse proxies are commonly used to achieve load balancing.


9. What are two ways to achieve load balancing?

  • One way is to use a reverse proxy: all user requests are sent to the reverse proxy server, which forwards them to the real servers, achieving load balancing across the cluster (a minimal sketch follows after this answer).
  • The other is DNS, which can be used to achieve load balancing across redundant servers.

Because most large websites now use multiple servers to provide service, one domain name may correspond to multiple server addresses. When a user requests the website's domain name, the DNS server returns the set of IP addresses corresponding to that domain name, but it rotates the order of those addresses in each response, and users generally pick the first address to send the request to. In this way, user requests are distributed evenly across different servers, achieving load balancing. One drawback of this method is that, because of caching in DNS servers, a failed server's IP address may still be returned by domain name resolution for a while, causing access problems.
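A minimal sketch of the reverse-proxy approach from the first bullet above, rotating requests across a hypothetical pool of backend servers with Node's http module:

```typescript
import { createServer, request } from "node:http";

// Hypothetical backend pool; in practice these would be real application servers.
const backends = [
  { host: "10.0.0.1", port: 8080 },
  { host: "10.0.0.2", port: 8080 },
];
let next = 0;

// The reverse proxy is the only address clients see; it forwards each
// incoming request to the backends in turn (simple round robin).
createServer((clientReq, clientRes) => {
  const target = backends[next];
  next = (next + 1) % backends.length;

  const proxyReq = request(
    {
      host: target.host,
      port: target.port,
      path: clientReq.url,
      method: clientReq.method,
      headers: clientReq.headers,
    },
    (proxyRes) => {
      clientRes.writeHead(proxyRes.statusCode ?? 502, proxyRes.headers);
      proxyRes.pipe(clientRes); // stream the backend's response back to the client
    }
  );
  clientReq.pipe(proxyReq); // stream the client's request body to the backend
}).listen(80);
```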


10. What is the use of the OPTIONS method in HTTP?

The OPTIONS request is similar to HEAD and is generally used by the client to probe the server's capabilities. It asks the server to return all HTTP request methods supported by a resource. It can also use '*' instead of a specific resource name to send an OPTIONS request that simply tests whether the server is functioning normally. When JavaScript's XMLHttpRequest makes a "complex" CORS (cross-origin resource sharing) request, the browser first sends a preflight OPTIONS request to determine whether it has permission to access the specified resource.
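For example, asking a server which methods a resource supports (the endpoint is hypothetical; servers that support this advertise their methods in the Allow response header):

```typescript
// Ask the server which HTTP methods this resource supports.
const response = await fetch("https://example.com/api/articles", { method: "OPTIONS" });
console.log(response.headers.get("Allow")); // e.g. "GET, POST, HEAD, OPTIONS"

// In CORS, the browser itself sends a similar preflight OPTIONS request for
// complex cross-origin requests and checks headers such as
// Access-Control-Allow-Methods before sending the real request.
```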


11. What are the differences between http1.1 and http1.0?

There are several differences between http1.1 and http1.0:

(1) The difference in connections: HTTP/1.1 uses persistent connections by default, while HTTP/1.0 uses non-persistent connections by default. With persistent connections, multiple HTTP requests can reuse the same TCP connection, avoiding the connection-setup delay incurred for each request with non-persistent connections.

(2) The difference in resource requests. In HTTP/1.0 there is some waste of bandwidth: for example, the client may only need part of an object, but the server sends the entire object, and resumable downloads are not supported. HTTP/1.1 introduces the Range header field in the request, which allows only a part of a resource to be requested (the response code is then 206 Partial Content), making it easier for developers to make full use of bandwidth and connections.
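A hedged example of the range mechanism: requesting only the first kilobyte of a resource and expecting a 206 response (the URL is illustrative):

```typescript
// Request only bytes 0-1023 of the resource instead of the whole file.
const partial = await fetch("https://example.com/videos/movie.mp4", {
  headers: { Range: "bytes=0-1023" },
});
console.log(partial.status);                       // 206 (Partial Content) if ranges are supported
console.log(partial.headers.get("Content-Range")); // e.g. "bytes 0-1023/104857600"
```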

(3) The difference in caching. In HTTP/1.0, the If-Modified-Since and Expires headers are the main criteria for cache validation, while HTTP/1.1 introduces more cache-control options, such as ETag, If-Unmodified-Since, If-Match, If-None-Match and other optional cache headers, to control the caching strategy.

(4) A Host field has been added in HTTP/1.1 to specify the domain name of the server. In HTTP/1.0, each server was assumed to be bound to a unique IP address, so the URL in the request message did not carry a hostname. But with the development of virtual-host technology, a single physical server can host multiple virtual hosts that share one IP address. With the Host field, requests can be directed to different websites on the same server.

(5) Compared with HTTP/1.0, HTTP/1.1 also adds many new methods, such as PUT, DELETE, OPTIONS, etc.


12. What is the difference between the website domain name with www and without www?

Domestic users are accustomed to typing www, but the bare domain without www is the parent: the version with www is a subdomain of the one without it. The bare domain is generally considered to carry more weight in search engines.

Apart from that, the two domains differ only in whether the www prefix is present; everything else is the same.


13. Implementing instant messaging: what are the differences between short polling, long polling, SSE and WebSocket?

The purpose of both short polling and long polling is to achieve instant communication between the client and the server.

Short polling: the basic idea is that the browser sends an HTTP request to the server at fixed intervals, and the server responds immediately upon receiving the request, regardless of whether there is any data update. The instant communication implemented this way is essentially just the browser sending requests and the server answering them; by having the client request continuously, it simulates receiving real-time data changes from the server. The advantage of this method is that it is simple and easy to understand. The disadvantage is that it seriously wastes server and client resources, because HTTP connections must be established over and over; as the number of users grows, the load on the server increases, which is very inefficient.
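A minimal client-side sketch of short polling (the endpoint is hypothetical, and render() is a stand-in for updating the UI):

```typescript
function render(data: unknown): void {
  console.log("new data:", data); // stand-in for updating the page
}

// Short polling: ask the server on a fixed timer, whether or not anything changed.
setInterval(async () => {
  const response = await fetch("/api/messages/latest"); // hypothetical endpoint
  render(await response.json());
}, 3000); // one request every 3 seconds, even when there is nothing new
```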

Long polling: the basic idea is that the client first sends a request to the server. When the server receives it, it does not respond immediately but holds the request open and checks whether the server-side data has been updated. If there is an update, it responds; if there is no data, it still returns after a certain time limit. After the client-side JavaScript handler processes the server's response, it issues the request again to re-establish the connection. Compared with short polling, the advantage of long polling is that it significantly reduces the number of unnecessary HTTP requests and therefore saves resources. Its disadvantage is that the suspended connections themselves also consume resources.
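A minimal client-side sketch of long polling (the endpoint is hypothetical; the server is assumed to respond only when it has data or after a timeout):

```typescript
function render(data: unknown): void {
  console.log("new data:", data); // stand-in for updating the page
}

// Long polling: the server holds the request open until it has data (or times out);
// the client immediately re-issues the request afterwards.
async function poll(): Promise<void> {
  try {
    const response = await fetch("/api/messages/wait"); // hypothetical endpoint
    if (response.status === 200) {
      render(await response.json());
    }
  } finally {
    void poll(); // re-establish the "hanging" request
  }
}
void poll();
```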

SSE: the basic idea is that the server pushes information to the client as a stream. Strictly speaking, the HTTP protocol does not allow the server to push information actively, but there is a workaround: the server declares to the client that what it is about to send is stream data. In other words, it sends not a one-off data packet but a data stream that keeps flowing; the client does not close the connection and keeps waiting for the new data the server sends (video streaming is an example of this kind of transfer). SSE uses this mechanism to push information to the browser as a stream. It is based on the HTTP protocol and is currently supported by all browsers except IE/Edge. Compared with the previous two methods, it does not need to establish many HTTP requests, which saves resources.
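A minimal client-side SSE sketch using the browser's EventSource API (the endpoint is hypothetical):

```typescript
// SSE: the browser keeps one HTTP connection open and the server pushes
// events down it as a stream.
const source = new EventSource("/api/stream"); // hypothetical endpoint

source.onmessage = (event) => {
  console.log("server pushed:", event.data);
};

source.onerror = () => {
  // EventSource reconnects automatically; this fires on connection problems.
  console.log("connection lost, retrying...");
};
```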

The three methods above are all essentially based on the HTTP protocol; we can also use the WebSocket protocol. WebSocket is a new protocol defined in HTML5. Unlike the traditional HTTP protocol, it allows the server to push information to the client actively. A disadvantage of WebSocket is that the server-side configuration is more complex. WebSocket is a full-duplex protocol: both parties in the communication are equal and can send messages to each other. SSE, by contrast, is one-way: only the server can push information to the client, and if the client needs to send information, that is done through a separate HTTP request.
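A minimal client-side WebSocket sketch showing the full-duplex nature (the endpoint and message shape are hypothetical):

```typescript
// WebSocket: a full-duplex connection, so both sides can send at any time.
const socket = new WebSocket("wss://example.com/chat"); // hypothetical endpoint

socket.onopen = () => {
  socket.send(JSON.stringify({ type: "hello", user: "alice" })); // client -> server
};

socket.onmessage = (event) => {
  console.log("server -> client:", event.data); // server can push at any time
};
```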


14. How to share login status between multiple websites?

Sharing login status between multiple websites refers to single sign-on. In multiple application systems, users only need to log in once to access all mutually trusted application systems.

Single sign-on can be implemented as follows. First, user authentication is separated out into an independent verification center. The verification center's job is to check the correctness of the account and password sent by the client, return the corresponding user information to the client, and also return a token containing the login information encrypted with a server-side key. The token has a certain validity period. When one application system jumps to another, the token is passed along as a URL parameter; the receiving application sends it to the verification center, which decrypts and verifies it. If the user information is still valid, it returns the corresponding user information to the client; if verification fails, the page is redirected to the single sign-on page.
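A rough sketch of the redirect-and-verify flow described above. All URLs, parameter names, and response shapes below are hypothetical placeholders, not a real SSO API:

```typescript
// Hypothetical verification center.
const SSO_CENTER = "https://sso.example.com";

// App A wants to send the user to App B and carry the login state along.
function jumpToOtherApp(token: string): void {
  const target = new URL("https://app-b.example.com/home"); // hypothetical app B
  target.searchParams.set("token", token); // token issued earlier by the verification center
  window.location.href = target.toString();
}

// App B reads the token from the URL and asks the verification center to check it.
async function verifyIncomingToken(): Promise<void> {
  const token = new URL(window.location.href).searchParams.get("token");
  const response = await fetch(`${SSO_CENTER}/verify?token=${token}`); // hypothetical endpoint
  if (response.ok) {
    const user = await response.json(); // user info returned by the center
    console.log("logged in as", user);
  } else {
    window.location.href = `${SSO_CENTER}/login`; // token invalid or expired: back to the sign-on page
  }
}
```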
