Common interview questions of major network security companies

   The following interview questions cover the main areas of network security; the more frequently a topic is starred, the more likely it is to come up. I hope they help everyone find a satisfying job.

Note: This set of interview questions has been compiled into a PDF document, but the content is still being updated, because it cannot possibly cover every interview question; contributions that fill in the gaps are welcome.


Summary of network security interview questions:

Defense against common Web attacks
Which layer each of the important protocols belongs to
How the ARP protocol works
What is the RIP protocol? How RIP works
What is RARP? How it works
What is the OSPF protocol? How OSPF works
Summary of the differences between TCP and UDP
What are the three-way handshake and the four-way wave?
Why does TCP need a three-way handshake?
What is DNS? How DNS works
A complete HTTP request process
The difference between Cookies and Session
The difference between GET and POST
The difference between HTTPS and HTTP
How does Session work?
The difference between HTTP long connections and short connections
The OSI seven-layer model
What is TCP sticky packet/unpacking? Causes? Solutions
How does TCP ensure reliable transmission?
The difference between URI and URL
What is SSL? How does HTTPS ensure the security of data transmission (how does SSL work to ensure security)
The application layer protocols corresponding to TCP and UDP
What are the common status codes?

Prevent Common Web Attacks
What is SQL Injection Attack
An attacker injects malicious SQL code into an HTTP request. When the server uses the request parameters to construct a database SQL command, the malicious SQL is concatenated into the statement and executed by the database.
For example, a user logs in with the username lianggzone and the password ' or '1'='1. If the query is built by concatenating the parameters, the statement becomes select * from user where name = 'lianggzone' and password = '' or '1'='1', and the queried user list is not empty no matter what username and password are entered.
How to prevent SQL injection attacks: the key measure is to use precompiled statements (PreparedStatement), but in practice we usually work on the following two aspects at the same time.

1. Web side

1) Validity checks on input.
2) Limit the length of string input.

2. Server side

1) Do not concatenate SQL strings.
2) Use precompiled PreparedStatement.
3) Validity checks. (Why does the server still need to check validity? Rule number one: the outside world is untrustworthy, and attackers can bypass the web-side checks entirely.)
4) Filter the special characters in the parameters used to build SQL, such as single quotes and double quotes.
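
A minimal Java sketch of the PreparedStatement approach, assuming a user table like the one in the example above (the JDBC URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class SafeLogin {
    // The SQL text is precompiled with placeholders; user input is bound as data,
    // so a password like ' or '1'='1 cannot change the structure of the query.
    static boolean login(String name, String password) throws SQLException {
        String sql = "select * from user where name = ? and password = ?";
        try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/demo", "app", "secret");
             PreparedStatement ps = conn.prepareStatement(sql)) {
            ps.setString(1, name);
            ps.setString(2, password);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next(); // true only if a genuinely matching user exists
            }
        }
    }
}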

What is an XSS attack


Cross-site scripting (XSS) is an attack in which the attacker tampers with a web page to embed a malicious script; when a user browses the page, the script runs in the user's browser, allowing the attacker to control the browser and perform malicious operations.

How to prevent XSS attacks


1) Both the front end and the server require a length limit on string input.
2) Both the front end and the server need to HTML-escape output; special characters such as "<" and ">" are escaped and encoded.
The core of XSS defense is to filter the input data.
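
A minimal sketch of server-side HTML escaping (the escape method is hand-rolled here for illustration; in practice a well-tested library or a template engine's auto-escaping would normally be used):

public class HtmlEscaper {
    // Escape the characters that let an attacker break out of the HTML text context.
    static String escapeHtml(String input) {
        StringBuilder sb = new StringBuilder(input.length());
        for (char c : input.toCharArray()) {
            switch (c) {
                case '&':  sb.append("&amp;");  break;
                case '<':  sb.append("&lt;");   break;
                case '>':  sb.append("&gt;");   break;
                case '"':  sb.append("&quot;"); break;
                case '\'': sb.append("&#x27;"); break;
                default:   sb.append(c);
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        String payload = "<script>alert('xss')</script>";
        System.out.println(escapeHtml(payload));
        // Output: &lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;
    }
}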

What is a CSRF attack


Cross-site request forgery (CSRF) refers to an attacker performing illegal operations as a legitimate user through a cross-site request. A CSRF attack can be understood this way: the attacker steals your identity and sends malicious requests to a third-party website in your name. CSRF can be used to send emails or text messages in your name, make transfers, and even steal account information.

How to Prevent CSRF Attacks


Use a security framework such as Spring Security.
Token mechanism. Perform token verification on HTTP requests; if a request carries no token, or the token content is wrong, treat the request as a CSRF attack and reject it.
Captcha. A verification code can usually prevent CSRF very well, but for user-experience reasons it is often only an auxiliary measure rather than the main solution.
Referer check. The HTTP header contains a Referer field that records the source address of the request; if the Referer points to another website, the request may be a CSRF attack and can be rejected. However, not every server can obtain the Referer: many users restrict it for privacy reasons, and in some cases the browser does not send it at all, for example when jumping from HTTPS to HTTP.
In short:
1) Verify the source address of the request;
2) Add a verification code for key operations;
3) Add a token to the request and verify it on the server.
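
A minimal sketch of the token mechanism (class and method names are illustrative; a framework such as Spring Security provides this out of the box):

import java.math.BigInteger;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.security.SecureRandom;

public class CsrfToken {
    private static final SecureRandom RANDOM = new SecureRandom();

    // Generate an unguessable token, store it in the user's session and embed it in the form.
    public static String issueToken() {
        return new BigInteger(130, RANDOM).toString(32);
    }

    // On each state-changing request, compare the token from the request with the one in the session.
    public static boolean isValid(String tokenFromRequest, String tokenInSession) {
        if (tokenFromRequest == null || tokenInSession == null) {
            return false; // no token -> treat the request as a possible CSRF attack
        }
        return MessageDigest.isEqual(
                tokenFromRequest.getBytes(StandardCharsets.UTF_8),
                tokenInSession.getBytes(StandardCharsets.UTF_8)); // constant-time comparison
    }
}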

What is a file upload vulnerability


The file upload vulnerability refers to the user uploading an executable script file and gaining the ability to execute server commands through this script file.

Many third-party frameworks and services have had file upload vulnerabilities exposed, for example Struts2 a long time ago, as well as various rich-text editors; an attacker can upload malicious code through them and the server may be compromised.

How to Prevent File Upload Vulnerabilities


1) Set the directory to which files are uploaded to be non-executable.
2) Check the file type. When judging the file type, combine MIME type checks, suffix checks and other methods; the type cannot be judged by the suffix alone, because an attacker can rename an executable file with an image or other suffix to trick users into executing it.
3) Whitelist the uploaded file types; only trusted types are allowed to be uploaded.
4) Rename uploaded files so that the attacker cannot guess the access path of the uploaded file, which greatly increases the cost of the attack; renaming also defeats files such as shell.php.rar.ara.
5) Limit the size of uploaded files.
6) Serve the file server from a separate domain name.
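
A minimal sketch of the whitelist-plus-rename idea (the allowed extensions and upload directory are example values; a real check would also inspect the file content, not just the name):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Locale;
import java.util.Set;
import java.util.UUID;

public class UploadHandler {
    private static final Set<String> ALLOWED_EXTENSIONS = Set.of("jpg", "jpeg", "png", "gif");
    private static final Path UPLOAD_DIR = Path.of("/data/uploads"); // served as static, non-executable content

    public static Path save(String originalName, InputStream content) throws IOException {
        int dot = originalName.lastIndexOf('.');
        String ext = dot < 0 ? "" : originalName.substring(dot + 1).toLowerCase(Locale.ROOT);
        if (!ALLOWED_EXTENSIONS.contains(ext)) {
            throw new IOException("file type not allowed: " + ext); // whitelist check
        }
        // Rename so the attacker cannot guess or control the stored path.
        Path target = UPLOAD_DIR.resolve(UUID.randomUUID() + "." + ext);
        Files.copy(content, target);
        return target;
    }
}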

DDoS attack


The typical case is a SYN flood: the client sends a connection-request (SYN) packet to the server, the server replies with a confirmation (SYN+ACK) packet, but the client never sends the final confirmation (ACK), so the server is left waiting for it. There is no complete cure short of not using TCP at all.

DDoS prevention:


1) Limit the number of simultaneously open half-open SYN connections
2) Shorten the timeout of half-open SYN connections
3) Close unnecessary services
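
On Linux, the first two measures map to kernel parameters; a hedged example of /etc/sysctl.conf settings (the exact values depend on the workload):

# enable SYN cookies so half-open connections cannot exhaust the queue
net.ipv4.tcp_syncookies = 1
# cap the number of half-open (SYN_RECV) connections the kernel keeps
net.ipv4.tcp_max_syn_backlog = 2048
# retransmit SYN+ACK fewer times, which shortens the half-open timeout
net.ipv4.tcp_synack_retries = 2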

How the ARP protocol works


ARP (Address Resolution Protocol) is a TCP/IP protocol that obtains a physical (MAC) address from an IP address.
1. The Ethernet frame carrying the ARP request is broadcast to every host on the Ethernet; the ARP request contains the IP address of the destination host.
2. After receiving the ARP request, the destination host sends back an ARP reply containing its own MAC address.

The working principle of ARP protocol:


Each host maintains an ARP table in its own ARP cache recording the correspondence between IP addresses and MAC addresses.
When a host (network interface) newly joins the network (or only its MAC address changes, the interface restarts, and so on), it broadcasts a gratuitous ARP message announcing the mapping between its own IP address and MAC address to the other hosts.
When a host on the network receives a gratuitous ARP packet, it updates its own ARP cache, writing the new mapping into its ARP table.
When a host needs to send a packet, it first checks whether its ARP table already contains the MAC address for the destination IP address. If it does, the data is sent directly; if not, the host sends an ARP request to all hosts on the network. The request contains the source host's IP address, the source host's MAC address, the destination host's IP address, and so on.

When all hosts on this network receive the ARP packet:


(A) Each host first checks whether the destination IP address in the packet is its own; if not, it ignores the packet.
(B) If it is, the host takes the source host's IP and MAC addresses from the packet and writes them into its ARP table, overwriting any existing entry.
(C) It then writes its own MAC address into an ARP reply, telling the source host that this is the MAC address it is looking for.
After the source host receives the ARP reply, it writes the destination host's IP and MAC addresses into its ARP table and uses this information to send the data. If the source host never receives an ARP reply, the ARP query has failed. The ARP cache (that is, the ARP table) is the key to the efficient operation of the ARP address resolution protocol.

What is RARP? How it works


Summary: RARP (Reverse Address Resolution Protocol) is a network layer protocol that works in the opposite direction to ARP: it allows a host that knows only its own hardware address to learn its IP address. The host broadcasts its physical address and expects its IP address in return; the reply comes from a RARP server that holds the required mapping.
Principle:
(1) Every device on the network has a unique hardware address, usually the MAC address assigned by the manufacturer. The host reads the MAC address from its network card and broadcasts a RARP request on the network, asking a RARP server to reply with the host's IP address.

(2) The RARP server receives the RARP request, assigns the host an IP address, and sends a RARP reply to the host.

(3) After receiving the RARP reply, the host uses the obtained IP address to communicate.

What is DNS? How DNS works


DNS converts a host domain name into an IP address. It is an application layer protocol and uses UDP for transport.

Summary of the caches involved: browser cache, operating system cache, router cache, ISP DNS server cache, root name server, top-level domain name server, authoritative name server.
1. The query from the host to the local DNS server is generally a recursive query.
2. The queries from the local DNS server to the root name server (and onward) are iterative queries.
1) When the user enters a domain name, the browser first checks its own cache for the IP address mapped to that domain name; on a hit, resolution ends.
2) If there is no hit, the operating system cache (for example, the Windows hosts file) is checked for a resolved result; on a hit, resolution ends.
3) If there is still no hit, the local DNS server (LDNS) is asked to resolve the name.
4) If the LDNS has no hit either, it queries a root name server, which returns to the LDNS the address of a gTLD (generic top-level domain) server.
5) The LDNS then sends a request to the gTLD server returned in the previous step, and the gTLD server looks up and returns the address of the Name Server responsible for the domain.
6) That Name Server finds the target IP in its mapping table and returns it to the LDNS.
7) The LDNS caches the domain name together with the corresponding IP and returns the resolution result to the user, who caches it in the local system cache according to the TTL value. The domain name resolution process ends here.
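
From application code this whole chain is hidden behind the operating system's stub resolver; a minimal Java sketch (the host name is just an example):

import java.net.InetAddress;
import java.net.UnknownHostException;

public class DnsLookup {
    public static void main(String[] args) throws UnknownHostException {
        // The OS stub resolver walks the cache / LDNS chain described above.
        for (InetAddress addr : InetAddress.getAllByName("www.example.com")) {
            System.out.println(addr.getHostAddress());
        }
    }
}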

What is the RIP protocol? How RIP works


RIP dynamic routing protocol (network layer protocol)
RIP is a protocol based on the distance vector (Distance-Vector) algorithm, which uses the hop count (Hop Count) as a metric to measure the routing distance to the destination network. RIP exchanges routing information through UDP packets, and the port number used is 520.

Working principle:
RIP exchanges information using two message types, "update (UPDATE)" and "request (REQUEST)". Every RIP-enabled router broadcasts update information on UDP port 520 to its directly connected neighbors every 30 seconds, using the "distance" (that is, the hop count) as the measure of network distance. When a router passes routing information on to an adjacent router, it adds its own distance to each path.
The convergence mechanism of the router:


A problem with any distance-vector routing protocol (such as RIP) is that a router does not know the overall topology of the network; it must rely on its neighbors for reachability information. Because routing updates propagate slowly through the network, distance-vector algorithms converge slowly, which can lead to inconsistent routing tables.
Mechanisms RIP uses to limit the slow-convergence problem:
1) Counting to infinity: RIP allows a maximum hop count of 15; destinations more than 15 hops away are considered unreachable, and when the hop count of a path exceeds 15 the path is deleted from the routing table.
2) Split horizon: a router does not advertise a path back in the direction from which it was learned. When the router learns a path, it records the interface it arrived on and does not advertise the path back out of that interface.
3) Split horizon with poison reverse: paths learned from a router are advertised back to that router during updates, but marked as unreachable.
4) Hold-down timer: after a path becomes unreachable, the router waits for a period of time before accepting new routing information for it, ensuring that every router has received the unreachable-path information.
5) Triggered updates: when the hop count of a path changes, the router sends an update message immediately, regardless of whether the regular update interval has elapsed.

Disadvantages of RIP


1. Since 15 hops is the maximum value, RIP can only be applied to small-scale networks;
2. The convergence speed is slow;
3. The route selected according to the number of hops may not be the optimal route.
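
A tiny illustrative sketch of the distance-vector update rule implied by the hop-count metric (data structures and names are mine, not part of any real RIP implementation):

import java.util.HashMap;
import java.util.Map;

public class DistanceVectorUpdate {
    static final int INFINITY = 16; // RIP treats 16 hops as unreachable

    // routes: destination -> current hop count; neighborRoutes: routes advertised by a neighbor
    static void update(Map<String, Integer> routes, Map<String, Integer> neighborRoutes) {
        for (Map.Entry<String, Integer> adv : neighborRoutes.entrySet()) {
            // add one hop for the link to the neighbor, capped at "infinity"
            int candidate = Math.min(adv.getValue() + 1, INFINITY);
            int current = routes.getOrDefault(adv.getKey(), INFINITY);
            if (candidate < current) {
                routes.put(adv.getKey(), candidate);
            }
        }
    }

    public static void main(String[] args) {
        Map<String, Integer> routes = new HashMap<>();
        Map<String, Integer> fromNeighbor = new HashMap<>();
        fromNeighbor.put("10.0.2.0/24", 3);   // neighbor reaches this net in 3 hops
        fromNeighbor.put("10.0.3.0/24", 15);  // 15 + 1 = 16 -> unreachable, not installed
        update(routes, fromNeighbor);
        System.out.println(routes); // {10.0.2.0/24=4}
    }
}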

What is the OSPF protocol? How OSPF works
OSPF (Open Shortest Path First) is one of the most widely used interior gateway protocols and is a link-state protocol. (It is a network layer protocol.)
Principle:
OSPF multicasts Hello packets on all OSPF-enabled interfaces to discover OSPF neighbors. When a neighbor is found, an OSPF neighbor relationship is established and recorded in the neighbor table; the routers then exchange LSAs (Link State Advertisements) to advertise routes to each other and build the LSDB (Link State Database). Finally, the SPF algorithm computes the best (minimum-cost) path to each destination and installs it in the routing table.

Summary of the difference between TCP and UDP?


1. TCP is connection-oriented and provides a reliable service (like dialing to establish a connection before making a call); UDP is connectionless, that is, no connection is needed before sending data, and UDP delivers on a best-effort basis, i.e. it does not guarantee reliable delivery. (Because UDP needs no connection, it introduces no connection-setup delay. TCP must maintain connection state in the end systems, such as send and receive buffers, congestion-control parameters, and sequence and acknowledgment numbers, so TCP is slower than UDP.)

2. UDP has better real-time performance, and its work efficiency is higher than that of TCP. It is suitable for high-speed transmission and real-time communication or broadcast communication.

3. Each TCP connection can only be one-to-one; UDP supports one-to-one, one-to-many, many-to-one and many-to-many interactive communications

4. TCP header overhead is 20 bytes; UDP header overhead is small, only 8 bytes.

5. TCP is byte-stream-oriented: it treats data as an unstructured stream of bytes. UDP is message-oriented: a complete message is delivered at a time, the message is indivisible, and a UDP datagram is the smallest unit of processing.

6. UDP is suitable for network applications that transmit small data at one time, such as DNS, SNMP, etc.

What are the three-way handshake and the four-way wave? Why does TCP need a three-way handshake?


To prevent a stale, invalid connection request segment from suddenly arriving at the server and causing an error.

First handshake: to establish the connection, the client sends a SYN packet (syn=j) to the server and enters the SYN_SENT state, waiting for the server's confirmation.
Second handshake: the server receives the SYN packet and must acknowledge the client's SYN (ack=j+1); at the same time it sends its own SYN packet (syn=k), i.e. a SYN+ACK packet, and enters the SYN_RECV state.
Third handshake: the client receives the server's SYN+ACK packet and sends an acknowledgment packet ACK (ack=k+1) to the server. Once this packet is sent, both client and server enter the ESTABLISHED state and the three-way handshake is complete.
After the three-way handshake, the client and server begin to transmit data.

The client first sends a FIN and enters the FIN_WAIT_1 state, closing the data transfer from client to server.
The server receives the FIN, sends an ACK, and enters the CLOSE_WAIT state; the client receives the ACK and enters the FIN_WAIT_2 state.
The server then sends its own FIN and enters the LAST_ACK state, closing the data transfer from server to client.
The client receives the FIN, sends an ACK, and enters the TIME_WAIT state; the server receives the ACK and enters the CLOSED state. (The client waits for 2MSL, about 4 minutes, mainly to guard against loss of the last ACK.)

First wave: the actively closing side sends a FIN to close data transfer from itself to the passively closing side; in other words it tells the other side that it will send no more data (data sent before the FIN for which no ACK has been received will still be retransmitted). The actively closing side can, however, still receive data at this point.
Second wave: after receiving the FIN, the passively closing side sends an ACK whose acknowledgment number is the received sequence number + 1 (like a SYN, a FIN consumes one sequence number).
Third wave: the passively closing side sends its own FIN to close data transfer from itself to the actively closing side, telling the other side that it has finished sending data.
Fourth wave: after receiving this FIN, the actively closing side sends an ACK with acknowledgment number equal to the received sequence number + 1. At this point the four-way wave is complete.

The difference between GET and POST


GET is used to obtain data; POST is used to modify data.
GET puts the request data in the URL: the URL and the data are separated by "?" and parameters are joined with "&", so GET is not very safe; POST puts the data in the HTTP body (request body).
The data submitted via GET is at most about 2 KB (the limit actually depends on the browser); POST in theory has no limit.
GET generates one TCP packet: the browser sends the HTTP headers and data together, and the server responds with 200 (returning the data). POST generates two TCP packets: the browser first sends the headers, the server responds with 100 Continue, the browser then sends the data, and the server responds with 200 OK (returning the data).
GET requests are actively cached by the browser, while POST requests are not, unless set manually.
GET is idempotent while POST is not.
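
A minimal Java 11+ sketch of issuing the two kinds of request (the URL and parameters are placeholders):

import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class GetVsPost {
    public static void main(String[] args) throws Exception {
        HttpClient client = HttpClient.newHttpClient();

        // GET: the parameters travel in the URL's query string
        HttpRequest get = HttpRequest.newBuilder(
                URI.create("https://example.com/user?name=lianggzone")).GET().build();

        // POST: the payload travels in the request body
        HttpRequest post = HttpRequest.newBuilder(URI.create("https://example.com/user"))
                .header("Content-Type", "application/x-www-form-urlencoded")
                .POST(HttpRequest.BodyPublishers.ofString("name=lianggzone"))
                .build();

        System.out.println(client.send(get, HttpResponse.BodyHandlers.ofString()).statusCode());
        System.out.println(client.send(post, HttpResponse.BodyHandlers.ofString()).statusCode());
    }
}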

The difference between cookies and sessions


Cookies and Sessions are both mechanisms for maintaining state between the client and the server.
1. Storage location: a cookie is stored on the client side, a session is stored on the server side, so data kept in the session is relatively safer.
2. Type of stored data: both are key-value structures, but the value types differ. Cookie: the value can only be a string. Session: the value is an Object.
3. Size limit: cookies are limited in size by the browser, often to about 4 KB; a session is in theory limited only by the available memory.
4. Life-cycle control: a cookie with no expiry time is destroyed when the browser is closed.
(1) A cookie's life cycle is cumulative: counting from the moment it is created, the cookie expires once its lifetime (for example 20 minutes) has elapsed.
(2) A session's life cycle is an idle interval: counting from the moment it is created (or last accessed), if the session is not accessed for, say, 20 minutes, it is destroyed.

How does session work?


After the client logs in, the server creates a corresponding session. Once the session is created, its session id is sent to the client, which stores it in the browser (typically in a cookie). On every subsequent request the client sends the session id along; the server uses it to find the corresponding session in memory, and can thus keep working with the same logged-in user.
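
A minimal sketch of that idea on the server side (class and method names are illustrative; real containers such as servlet engines implement this for you):

import java.math.BigInteger;
import java.security.SecureRandom;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class SessionStore {
    private final Map<String, Map<String, Object>> sessions = new ConcurrentHashMap<>();
    private final SecureRandom random = new SecureRandom();

    // Called after a successful login: create a session and return its id to the client.
    public String create(String username) {
        String sessionId = new BigInteger(130, random).toString(32); // unguessable random id
        Map<String, Object> data = new ConcurrentHashMap<>();
        data.put("user", username);
        sessions.put(sessionId, data);
        return sessionId; // sent to the browser, e.g. in a Set-Cookie header
    }

    // Called on every subsequent request: the client presents the id, the server looks up the session.
    public Map<String, Object> lookup(String sessionId) {
        return sessions.get(sessionId);
    }
}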

A complete HTTP request process


Domain name resolution --> Initiate a TCP 3-way handshake --> Initiate an http request after establishing a TCP connection --> The server responds to the http request, and the browser gets the html code --> The browser parses the html code and requests the resources in the html code (such as js, css, pictures, etc.) --> The browser renders the page and presents it to the user.

The difference between HTTPS and HTTP


1. The data transmitted by the HTTP protocol is unencrypted plain text, so it is very unsafe to transmit private information over HTTP. HTTPS is built from SSL + HTTP: it provides encrypted transmission and identity authentication and is therefore safer than HTTP.
2. HTTPS requires applying for a certificate from a CA. Free certificates are relatively rare, so a certain fee is usually required.
3. http and https use completely different connection methods and different ports. The former is 80 and the latter is 443.
https://www.cnblogs.com/wqhwe/p/5407468.html

What are the seven layers of the OSI model?


Physical layer: uses the transmission medium to provide a physical connection for the data link layer and achieves transparent transmission of the bit stream.
Data link layer: receives data in the form of a bit stream from the physical layer, encapsulates it into frames and delivers them to the layer above.
Network layer: translates logical network addresses into physical addresses and uses routing algorithms to choose the most appropriate path for packets through the communication subnet.
Transport layer: provides reliable, transparent data transfer between source and destination.
Session layer: responsible for establishing, maintaining and terminating communication between two nodes in the network.
Presentation layer: handles the representation of user information: data encoding, compression and decompression, encryption and decryption.
Application layer: provides network communication services to user application processes.

The difference between http long connection and short connection


Short connections are used by default in HTTP/1.0: each time the client and server perform an HTTP operation, a connection is established and torn down when the task ends. Since HTTP/1.1, long (persistent) connections are used by default to keep the connection alive across requests.

What is TCP sticky packet/unpacking? Causes? Solutions


A complete application message may be split by TCP into several packets for transmission, or several small messages may be packed together into one larger packet: this is the TCP unpacking / sticky packet problem.
Causes:
1. The data written by the application is larger than the socket send buffer.
2. TCP segmentation at MSS size (MSS = TCP segment length - TCP header length).
3. IP fragmentation when the Ethernet payload exceeds the MTU. (MTU: the maximum packet size that can pass through a given layer of a communication protocol.)
Solutions:
1. Fixed-length messages.
2. Append special delimiter characters, such as a carriage return or a space, at the end of each packet.
3. Split the message into a message header and a message body, with the header carrying the body length.
4. Use a more complex protocol, such as RTMP.
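
A minimal Java sketch of solution 3, header plus body: each message is preceded by a 4-byte length field, so the receiver always knows where one message ends and the next begins (an in-memory byte stream stands in for the socket here):

import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.nio.charset.StandardCharsets;

public class LengthPrefixedFraming {
    // Sender: write a 4-byte length header followed by the message body.
    static void writeMessage(DataOutputStream out, String msg) throws IOException {
        byte[] body = msg.getBytes(StandardCharsets.UTF_8);
        out.writeInt(body.length);
        out.write(body);
    }

    // Receiver: read the length header first, then exactly that many bytes.
    static String readMessage(DataInputStream in) throws IOException {
        int length = in.readInt();
        byte[] body = new byte[length];
        in.readFully(body);
        return new String(body, StandardCharsets.UTF_8);
    }

    public static void main(String[] args) throws IOException {
        ByteArrayOutputStream buffer = new ByteArrayOutputStream();
        DataOutputStream out = new DataOutputStream(buffer);
        writeMessage(out, "hello");
        writeMessage(out, "world"); // two messages "stuck" together in one byte stream

        DataInputStream in = new DataInputStream(new ByteArrayInputStream(buffer.toByteArray()));
        System.out.println(readMessage(in)); // hello
        System.out.println(readMessage(in)); // world
    }
}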

How does TCP ensure reliable transmission?


Three-way handshake (connection management).
Splitting data into reasonable lengths: application data is divided into the blocks TCP considers most suitable for sending (numbered by byte, reasonably segmented).
Retransmission on timeout: when TCP sends a segment it starts a timer; if no acknowledgment is received in time, the segment is resent.
Acknowledgments: every received request is confirmed with an acknowledgment response.
Checksum: each segment is checked for errors; a damaged segment is discarded and not acknowledged.
Sequence numbers: out-of-order data is reordered before being handed to the application layer.
Discarding duplicates: duplicate data is simply dropped.
Flow control: each side of a TCP connection has a fixed-size buffer, and the receiver only allows the sender to send as much data as its buffer can hold, preventing a fast host from overflowing a slower host's buffers.
Congestion control: when the network is congested, the amount of data sent is reduced.


What are the common status codes?


200 OK // the client request succeeded
403 Forbidden // the server received the request but refuses to provide the service
404 Not Found // the requested resource does not exist, e.g. a wrong URL was entered
500 Internal Server Error // an unexpected error occurred on the server

The difference between URI and URL


A URI (Uniform Resource Identifier) is used to uniquely identify a resource. A URL not only identifies a resource but also indicates how to locate it.

What is SSL? How does https ensure the security of data transmission (how does SSL work to ensure security)


SSL stands for Secure Sockets Layer. It is a protocol for encrypting and authenticating data sent between an application (such as a browser) and a web server: authentication plus encryption.
HTTPS uses a hybrid encryption mechanism that combines shared-key (symmetric) encryption with public-key (asymmetric) encryption. The functions of the SSL/TLS protocol are to authenticate users and services, encrypt data, and maintain data integrity.
Encryption that uses two different keys for encryption and decryption is called asymmetric encryption; encryption that uses the same key for both is symmetric encryption, whose advantage is that encryption and decryption are usually efficient. HTTPS builds on asymmetric encryption, in which the public key is public.
(1) The client initiates an SSL connection request to the server.
(2) The server sends its public key to the client.
(3) The client uses the public key to encrypt the symmetric key that the two sides will use for communication and sends it to the server.
(4) The server uses its own private key to decrypt the symmetric key sent by the client.
(5) Data transmission then proceeds with both server and client using the same shared symmetric key to encrypt and decrypt the data, which keeps the data safe in transit: even if a third party captures the packets, it cannot decrypt or tamper with them.
Digital signatures and digests are the key weapons against certificate forgery. A "digest" is a fixed-length string computed over the transmitted content with a hash algorithm. The digest is then encrypted with the CA's private key, and the result of that encryption is the "digital signature".

The basic idea of the SSL/TLS protocol is to use public key encryption: the client first asks the server for its public key and encrypts information with that public key; after receiving the ciphertext, the server decrypts it with its own private key.

How to ensure that the public key is not tampered with?


Put the public key in the digital certificate. As long as the certificate is trusted, the public key is trusted.
Public-key encryption is computationally expensive. How is the time it consumes reduced?
For each conversation (session), the client and server generate a "session key" (session key), which is used to encrypt information. Since the "session key" is symmetrically encrypted, the operation speed is very fast, and the server public key is only used to encrypt the "session key" itself, which reduces the time consumed for encryption operations.
(1) The client requests and verifies the server's public key.
(2) The two parties negotiate and generate a "session key".
(3) The two parties use the "session key" for encrypted communication. The first two steps of this process are also called the "handshake" phase.

"Computer Network" book:
How SSL works (A: client, B: server):
1. Negotiate the encryption algorithm: A sends its SSL version number and the encryption algorithms it supports to B; B chooses an algorithm it supports and informs A.
2. Server authentication: B sends A a digital certificate containing its public key; A verifies the certificate with the CA's public key.
3. Session key computation: A generates a random secret number, encrypts it with B's public key and sends it to B; both sides then derive the shared symmetric session key from it according to the negotiated algorithm.
4. Secure data transfer: both sides use the session key to encrypt and decrypt the data they exchange and to verify its integrity.
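
A minimal Java sketch of opening a TLS connection and inspecting the negotiated session (the host name is a placeholder; the JVM's default trust store performs the certificate verification described above):

import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;

public class TlsHandshakeDemo {
    public static void main(String[] args) throws Exception {
        SSLSocketFactory factory = (SSLSocketFactory) SSLSocketFactory.getDefault();
        try (SSLSocket socket = (SSLSocket) factory.createSocket("example.com", 443)) {
            socket.startHandshake(); // algorithm negotiation, certificate check, key exchange
            System.out.println("Protocol:     " + socket.getSession().getProtocol());
            System.out.println("Cipher suite: " + socket.getSession().getCipherSuite());
        }
    }
}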

The application layer protocol corresponding to TCP


FTP: the File Transfer Protocol, using port 21.
Telnet: a protocol for remote login, using port 23.
SMTP: the Simple Mail Transfer Protocol; the server opens port 25.
POP3: the counterpart of SMTP, used to receive mail, on port 110.
HTTP: the Hypertext Transfer Protocol, using port 80.

Application layer protocol corresponding to UDP


DNS: used for domain name resolution, on port 53.
SNMP: the Simple Network Management Protocol, using port 161.
TFTP (Trivial File Transfer Protocol): a simple file transfer protocol, on port 69.
 


Origin blog.csdn.net/zxcvbnmasdflzl/article/details/130705971