Network programming interview questions, part 3

1: What are concurrency and parallelism?

Concurrency means one processor handles multiple tasks by interleaving them; parallelism means multiple processors, or one multi-core processor, handle multiple tasks at the same time. Concurrency is simultaneity at the logical level, while parallelism is simultaneity at the physical level.
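
For instance, a minimal Python sketch of the distinction (assuming CPython, where the GIL keeps CPU-bound threads merely concurrent, while separate processes can run in parallel):

```python
from concurrent.futures import ThreadPoolExecutor, ProcessPoolExecutor

def work(n):
    return sum(range(n))

if __name__ == "__main__":
    # Concurrency: one interpreter interleaves the four tasks; in CPython the
    # GIL means CPU-bound threads take turns (logical simultaneity).
    with ThreadPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(work, [10**6] * 4)))

    # Parallelism: four processes can run on four cores at the same physical
    # instant (physical simultaneity).
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(work, [10**6] * 4)))
```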

 

2: What are thread locks and process locks for?

Thread lock: the familiar one. It is mainly used to lock a code block or method. When a code block or method is protected by a lock, at most one thread can execute that code at a time. When multiple threads access a locked method or code block of the same object, only one thread executes it at any moment; the remaining threads must wait until the current thread finishes the code segment. The remaining threads can, however, still access the object's unlocked code blocks.

Process lock: likewise used to control access by multiple processes to a shared operating-system resource. Because processes are independent of each other, one process cannot directly control another process's access to a resource, but the local system can coordinate them with a semaphore (basic OS knowledge).
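
A minimal Python sketch of both kinds of lock (the counter workload and the process names are only illustrative):

```python
import threading
from multiprocessing import Lock, Process

counter = 0
tlock = threading.Lock()                 # thread lock

def add():
    global counter
    for _ in range(100_000):
        with tlock:                      # at most one thread runs this block at a time
            counter += 1

def report(plock, name):
    with plock:                          # process lock: an OS-level semaphore shared across processes
        print(f"{name} holds the resource")

if __name__ == "__main__":
    threads = [threading.Thread(target=add) for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(counter)                       # 400000 with the lock; unpredictable without it

    plock = Lock()
    procs = [Process(target=report, args=(plock, f"process {i}")) for i in range(3)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```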

 

3: Explain asynchronous and non-blocking.

First of all, you need to know what synchronous and asynchronous mean:

Synchronous and asynchronous describe the interaction between an application and the kernel. Synchronous means the user process triggers an IO operation and then waits for it, or polls to see whether the IO operation is ready. Asynchronous means that after triggering the IO operation the user process goes on doing its own work, and is notified when the IO operation completes. Once a synchronous method is called, the caller must wait for it to finish executing before continuing with subsequent methods. An asynchronous method returns immediately once started; the caller does not wait for it to finish and can continue with subsequent methods right away. The methods we usually write are synchronous: execution is serialized, within one thread.

Take a bank analogy:

Synchronous: you need money to buy a phone at the mall, so you take your bank card to the bank, queue up to withdraw the money, and only after getting the money do you go to the mall to look at phones (with synchronous IO, Java handles the reads and writes itself);

Asynchronous: you get in line, go to the mall to look at phones in the meantime, and come back to withdraw the money once your turn comes up. (With asynchronous IO, Java entrusts the reads and writes to the OS; you must pass the OS the buffer address and size, like the bank card and PIN, and the OS must provide asynchronous IO APIs.)

Blocking and non-blocking describe the different ways a process accesses data depending on the readiness of the IO operation; put plainly, they describe how the read and write methods are implemented. In blocking mode, a read or write call waits until the operation completes; in non-blocking mode, a read or write call returns a status value immediately.

The bank analogy again:

Blocking: you queue at the ATM to withdraw money and have to wait your turn (with blocking IO, a Java call blocks and does not return until the read or write has completed);

Non-blocking: at the teller counter, you take a number and then sit in a chair doing something else; a broadcast notifies you when your number comes up. You cannot go up before your number is called, but you can keep asking the lobby manager whether it is your turn yet, and the manager keeps saying no until it is (with non-blocking IO, a Java call returns immediately if it cannot read or write; when an IO event notifies the dispatcher that reading or writing is possible, it reads or writes again, looping until the operation completes).

Another example: a man queuing at the bank suddenly craves a cigarette and needs to step outside, so he tells the lobby manager to call him when his number comes up. He is then not blocked waiting on the operation at all; this is naturally the asynchronous + non-blocking approach.
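
A minimal sketch of the asynchronous, non-blocking style using Python's asyncio (the sleep stands in for a real IO operation, and the names are illustrative):

```python
import asyncio

async def withdraw(card: str) -> str:
    # The "bank" works on its own; awaiting yields control instead of blocking.
    await asyncio.sleep(1)               # stands in for a slow IO operation
    return f"cash for {card}"

async def main():
    task = asyncio.create_task(withdraw("my-card"))   # trigger the IO and move on
    print("browsing phones at the mall...")           # not blocked: other work proceeds
    print(await task)                                 # notified when the IO completes

asyncio.run(main())
```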
 

4: What is the difference between a router and a switch?

(1) They work at different layers. The earliest switches worked at the data link layer of the OSI/RM open architecture, i.e. layer two, while routers were designed from the start for the network layer of the OSI model. Because a switch works at OSI layer two (the data link layer), the way it works is relatively simple; a router works at OSI layer three (the network layer), where it can obtain more protocol information, so it can make more intelligent forwarding decisions.

(2) They forward data based on different objects. A switch uses the physical address, i.e. the MAC address, to determine where to forward data. A router uses the ID numbers of different networks (IP addresses) to determine the forwarding address. The IP address is implemented in software and describes where on the network a device is; it is sometimes called a layer-three address, protocol address, or network address. The MAC address usually comes with the hardware: it is assigned by the NIC manufacturer, burned into the card, and generally cannot be changed. The IP address is usually assigned by the network administrator or automatically by the system.

(3) A conventional switch can split collision domains but cannot split broadcast domains, while a router can split broadcast domains. Network segments connected by a switch still belong to the same broadcast domain, and broadcast packets propagate across every segment connected to the switch, which in some cases causes congestion and security holes. Segments connected to a router are assigned to different broadcast domains, and broadcast data does not cross the router. Layer-three and higher switches do have VLAN functionality and can divide broadcast domains, but the resulting sub-broadcast-domains cannot talk to each other; communication between them still requires a router.

(4) A router provides firewall services. A router forwards only packets with specific addresses; it does not forward packets of unsupported routing protocols or packets destined for unknown target networks, which helps prevent broadcast storms.

 

5: What is DNS?

DNS maps a domain name to the IP address of the site's host, so that people can conveniently reach a site through its registered domain name. The IP address is the numeric address that identifies a site on the network; to make it easier to remember, a domain name is used in place of the IP address to identify the site. DNS is the process of converting the domain name into its IP address, and the resolution work is done by DNS servers.
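
A quick illustration with Python's standard-library resolver (the domain is just an example):

```python
import socket

# Ask the system resolver (which in turn queries DNS servers) for the IP.
print(socket.gethostbyname("www.example.com"))        # e.g. "93.184.216.34"

# getaddrinfo is the more general form and also covers IPv6.
for family, _, _, _, sockaddr in socket.getaddrinfo("www.example.com", 80):
    print(family, sockaddr)
```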

 

6: What are the use cases and advantages of the producer-consumer model?

The producer-consumer model is a process-oriented programming model. In real software development you often meet the following scene: one module is responsible for generating data, and another module is responsible for processing it (module here is broad: it can be a class, function, thread, process, etc.). The data-generating module is aptly called the producer, and the data-processing module the consumer. Abstracting only producers and consumers is not yet enough for the producer/consumer model: the model also needs a buffer between producers and consumers as an intermediary. The producer puts data into the buffer, and the consumer takes data out of the buffer.

Why set up a buffer? Suppose the producer and consumer are two classes. If the producer directly calls a method of the consumer, the producer depends on the consumer (i.e. they are coupled), and future changes to the consumer's code may affect the producer. If both depend on a buffer instead, there is no direct dependency between them and the coupling is correspondingly reduced. Direct calls have another drawback: since a function call is synchronous (blocking), the producer has to wait until the consumer's method returns, and if the consumer processes data slowly, the producer wastes its time. With the producer/consumer model, producers and consumers can be two independent concurrent entities: the producer drops its data into the buffer and can go produce the next item, largely independent of the consumer's processing speed. In fact, this model is mainly used to handle concurrency problems. The buffer has yet another advantage: when production and consumption speeds fluctuate, the buffer absorbs the difference. When data is produced faster than the consumer can process it, the unprocessed data is stored temporarily in the buffer; when the producer slows down, the consumer catches up.

The producer-consumer model resolves the strong coupling between producers and consumers through a container: producers and consumers do not communicate directly but through a blocking queue. The producer throws the data it generates into the blocking queue, and the consumer takes data from the blocking queue. In practice, the model mainly resolves the rate mismatch between production and consumption, balancing the capacities of producers and consumers, with the blocking queue acting as the buffer. A typical example is logging: multiple threads all produce log records, but the log file must be written exclusively, so rather than writing from many threads, each thread pushes its record onto a queue and a dedicated logging thread reads from the queue and writes the log.
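
A minimal sketch with Python's thread-safe blocking queue as the buffer (the DONE sentinel is an illustrative shutdown convention, not part of the pattern itself):

```python
import queue
import threading

buf = queue.Queue(maxsize=5)             # the buffer: a thread-safe blocking queue
DONE = object()                          # shutdown sentinel (an illustrative convention)

def producer():
    for i in range(10):
        buf.put(f"item {i}")             # blocks while the buffer is full
    buf.put(DONE)

def consumer():
    while True:
        item = buf.get()                 # blocks while the buffer is empty
        if item is DONE:
            break
        print("consumed", item)

threading.Thread(target=producer).start()
c = threading.Thread(target=consumer)
c.start()
c.join()
```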

 

7: What is a CDN?

CDN stands for Content Delivery Network. The basic idea is to avoid, as far as possible, the bottlenecks and links on the Internet that may affect the speed and stability of data transmission, so that content is delivered faster and more reliably. By placing a layer of node servers throughout the network on top of the existing Internet, the CDN forms an intelligent virtual network that, based on comprehensive real-time information such as each node's traffic and load, the distance to the user, and response times, redirects the user's request to the service node closest to the user. Its purpose is to let users fetch the content they need from the nearest node, relieving Internet congestion and improving the response speed of site access.

 

8: What is LVS and what does it do?

LVS stands for Linux Virtual Server: a virtual server built as a cluster of multiple real machines behind a load-balanced IP. LVS is mainly used to load-balance across multiple servers. It works at the network layer and implements high-performance, highly available server clustering. It is cheap: it combines several low-performance servers into one super server. It is easy to use, simple to configure, and supports a variety of load-balancing methods. It is stable and reliable: even if one server in the cluster stops working, the overall service is unaffected. Its scalability is also very good.

An LVS cluster has a three-tier structure:

Load balancer: the core of LVS, playing a role like the Controller in a site's MVC model. It distributes client requests to the different servers in the next tier according to some algorithm, and does not handle the concrete business itself. It can also monitor the state of the layer below: if a server in the next tier stops working properly, the balancer automatically removes it, and adds it back once it becomes available again. This tier consists of one or several Director Servers.

Server pool: the group of servers that actually execute the client requests, usually web servers; besides web there can also be FTP, MAIL, and DNS servers.

Shared storage: provides a shared storage area for the server pool, making it easy for all servers in the pool to hold the same content and provide the same service; it mainly keeps the data served to the upper tiers consistent.

 

9: What is Nginx and what does it do?

First, Nginx is an HTTP server, the same kind of web server as Apache: it can serve static files on the server (such as HTML and images) to clients over the HTTP protocol. A server is shaped by the constraints of its original environment, such as the user scale and network bandwidth of the time, and by its own product positioning and evolution, which is why the various web servers each have distinctive features.

Apache has a very long history and was the undisputed number-one web server in the world. It has many advantages: stable, open source, cross-platform, and so on. But it appeared so long ago that the Internet industry was then far smaller than it is now, and Apache was designed as a heavyweight server that does not handle high concurrency well. Under tens of thousands of concurrent accesses, Apache makes the server consume a great deal of memory, and the operating system's switching between processes or threads consumes a great deal of CPU, lowering the average response rate to HTTP requests. All this determined that Apache could not become a high-performance web server, and the lightweight, highly concurrent Nginx came into being. Nginx uses an event-driven architecture, which lets it support millions of TCP connections. Its high modularity and free software license have attracted third-party modules one after another (such is the open-source era). Nginx is also cross-platform and runs on Linux, Windows, FreeBSD, Solaris, AIX, Mac OS, and other operating systems.

So: Nginx is a free, open-source, high-performance HTTP server and reverse proxy server, and also an IMAP/POP3/SMTP proxy server. As an HTTP server, Nginx can publish and serve a website; as a reverse proxy, it can additionally implement load balancing.

What Nginx is used for:

1. Static HTTP server.

2. Reverse proxy server. Clients would otherwise access the application server directly over HTTP; the site administrator can put an Nginx in between, so the client requests Nginx, Nginx requests the application server, and the result is returned to the client; Nginx is then a reverse proxy server. (Speaking of proxies, the concept needs to be clear: a proxy is a representative, a channel. Two roles are involved, the proxy role and the target role, and the process in which the proxy accesses the target role to complete some task on a client's behalf is called proxying. Like a shop in real life: a customer buys a pair of adidas shoes at the shop; the shop is the proxy, the proxied role is the adidas manufacturer, and the target role is the user.)

3. Load balancing. When a site's traffic gets very large, the webmaster is happy to be making money but also has a problem on his hands: the site gets slower and slower, and one server is no longer enough. So the same application is deployed on multiple servers, and the large volume of user requests is distributed across the machines. A side benefit is that if one of the servers fails, users are unaffected as long as the other servers keep running.

4. Virtual host. Some sites have so much traffic that they need load balancing. Not every site is that successful, though: some need to save costs because their traffic is too small, so several sites are deployed on the same server. For example, www.aaa.com and www.bbb.com are two sites deployed on one server; the two domain names resolve to the same IP address, yet users open two completely different websites through the two domain names, without affecting each other, as if they were on two servers, hence the name virtual hosts.

5. FastCGI. Nginx itself does not run PHP or other languages, but it can hand requests over FastCGI to language runtimes or frameworks for processing (e.g. PHP, Python, Perl).

 

10: What is keepalived and what does it do?

keepalived is software with a switching mechanism similar to layer 2, 4, and 7 switching. It is cluster-management software that guarantees high availability of services in a Linux cluster, and its function is to prevent single points of failure.

How keepalived works: the keepalived service guarantees cluster high availability on the basis of the VRRP protocol; its main functions are failover between load balancers and fault isolation of the real servers, preventing single points of failure. Before going into keepalived's principle, first look at the VRRP protocol.

VRRP: Virtual Router Redundancy Protocol. It is a fault-tolerance protocol which guarantees that when a host's next-hop router fails, another router takes over the failed router's work, maintaining the continuity and reliability of network communication.

The detailed workings of keepalived can be found in related blog posts on the Keepalived principle.

 

11: What is HAProxy and what does it do?

HAProxy is free, open-source software written in C that provides high availability, load balancing, and proxying for TCP- and HTTP-based applications. HAProxy is especially suited to heavily loaded web sites, which usually need session persistence or layer-7 processing. Running on current hardware, HAProxy can support tens of thousands of concurrent connections, and its mode of operation makes it simple and safe to integrate into your current architecture while protecting your web servers from exposure to the network. HAProxy implements an event-driven, single-process model, which supports very large numbers of concurrent connections. Multi-process or multi-threaded models are limited by memory, by the system scheduler, and by ubiquitous locking, and can rarely handle thousands of concurrent connections; the event-driven model has none of these problems, because it implements all these tasks in user space, with better resource and time management. A disadvantage of the model is that such programs usually scale poorly on multi-core systems, which is why they must be optimized to get more work done per CPU cycle.
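
To make the event-driven, single-process idea concrete, here is a toy echo server in Python using the standard selectors module (a sketch of the model, not HAProxy's actual C implementation): one process multiplexes many connections over a single readiness loop.

```python
import selectors
import socket

sel = selectors.DefaultSelector()        # epoll/kqueue/select under the hood

def accept(server):
    conn, _ = server.accept()
    conn.setblocking(False)
    sel.register(conn, selectors.EVENT_READ, echo)

def echo(conn):
    data = conn.recv(1024)
    if data:
        conn.sendall(data)               # echo the bytes back
    else:                                # peer closed the connection
        sel.unregister(conn)
        conn.close()

server = socket.socket()
server.bind(("127.0.0.1", 8080))
server.listen()
server.setblocking(False)
sel.register(server, selectors.EVENT_READ, accept)

while True:                              # one process, one loop, many connections
    for key, _ in sel.select():
        key.data(key.fileobj)            # dispatch the callback registered above
```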

 

12: What is load balancing?

Load balancing, also known as load sharing, refers to dynamically adjusting the system's load to eliminate or reduce load imbalance among its nodes. The concrete approach is to transfer tasks from overloaded nodes to lightly loaded nodes, balancing the load across the system's nodes and thereby improving the system's throughput. Load sharing helps manage the resources of a distributed system as a whole, conveniently using a shared-information service mechanism to expand the system's processing capacity. A dynamic load-sharing strategy takes the current load of each node in the system as reference information and, during operation, adjusts the distribution of load according to each node's state, keeping the nodes as balanced as possible.
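
A tiny Python sketch of the two flavors of strategy (the backend addresses and load figures are made up): a static round robin spreads requests evenly, while a dynamic strategy consults each node's current load.

```python
import itertools

backends = ["10.0.0.1", "10.0.0.2", "10.0.0.3"]       # hypothetical nodes
rr = itertools.cycle(backends)

def pick_round_robin():
    # Static strategy: spread requests evenly, ignoring actual load.
    return next(rr)

def pick_least_loaded(load):
    # Dynamic strategy: use each node's current load as reference
    # information and send the task to the lightest node.
    return min(load, key=load.get)

print([pick_round_robin() for _ in range(6)])
print(pick_least_loaded({"10.0.0.1": 7, "10.0.0.2": 2, "10.0.0.3": 5}))
```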


Source: https://blog.csdn.net/qq_33204444/article/details/94043190