Interview: Chapter XIII: Senior Programmer Interview

For the question of causing a memory overflow within one second, I offered four options, but none of them seemed to fully meet the requirement: there is no way to guarantee that the overflow happens within exactly one second. Personally, I think memory allocation is the direction worth exploring. A sketch of the first two options follows the list.

First: change the configuration to tune the heap size, and create objects in a loop.

Second: do not let the GC threads reclaim the objects, by holding strong references to everything that is created.

Third: compute the factorial of an integer with a recursive algorithm, using a very large integer so the computation takes a very long time.

Fourth: have multiple threads create strongly referenced objects at the same time.
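A minimal sketch of the first two options, assuming a deliberately small heap (for example -Xmx16m) so the error appears quickly; the class name is made up and the exact timing still depends on the JVM:

```java
// Keep strong references to every allocation so the GC cannot reclaim anything;
// with a small heap this throws java.lang.OutOfMemoryError: Java heap space
// almost immediately. Run with e.g. -Xmx16m.
import java.util.ArrayList;
import java.util.List;

public class HeapOverflowDemo {
    public static void main(String[] args) {
        List<byte[]> strongRefs = new ArrayList<>(); // strong references block collection
        while (true) {
            strongRefs.add(new byte[1024 * 1024]);   // allocate 1 MB per iteration
        }
    }
}
```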

 

What are the properties of transactions and their isolation levels, and what problems can arise?

The basic properties of transactions (ACID)

  • Atomicity: all the operations in a transaction form one atomic operation; they either all succeed or all fail.
  • Consistency: before and after a transaction executes, the data in the database is consistent with respect to the intent of the transaction itself. Beyond the database's atomicity, consistency is mostly guaranteed by the code the programmer writes; the classic example is a transfer between accounts A and B.
  • Isolation: transaction isolation describes the visibility of data between concurrent transactions, which is the part this question focuses on. Databases define several isolation levels to trade off concurrency against data integrity.
  • Durability: once a transaction completes, all of its data has been persisted to the database and will not be lost for other reasons.

Without transaction concurrency control, four kinds of anomalies can occur:

  • Phantom read: a transaction's second query returns rows that did not appear in the first query, which means another transaction has inserted data in between. Note that this is the same query issued twice inside one transaction.
  • Non-repeatable read: the same read repeated twice within one transaction returns different results, i.e. the read is not repeatable.
  • Dirty read: a transaction reads modifications that another transaction has not yet committed; that is, one transaction sees changes the other has made but not committed.
  • Lost update: concurrent writes cause some modifications to be lost.

To resolve the anomalies that concurrency can produce, databases define transaction isolation levels for concurrency control. The SQL standard defines four of them (from low to high); a minimal JDBC sketch of selecting a level follows the list.

  • Read uncommitted: a transaction can read changes that other transactions have not committed. Problem: if one transaction modifies data without committing and another transaction reads the modified data, and the first transaction is then rolled back for some reason, the second transaction has performed a dirty read.
  • Read committed: a transaction can only read data that has already been committed. Problem: a transaction queries some data, another transaction modifies it and commits, and the first transaction queries again; the two queries return different results, which is a non-repeatable read.
  • Repeatable read: the same query within one transaction always returns the same result; this is the default level implemented by MySQL InnoDB. Problem: the set of rows returned by two queries in one transaction can still differ. For example, a transaction queries some rows, another transaction inserts a few rows in the meantime, and the next query in the first transaction sees rows that were not there before, producing a phantom read.
  • Serializable: transactions execute completely serially. This is the highest isolation level and the least efficient; it is roughly equivalent to locking the whole database while one transaction executes, so the next transaction can only run after the current one finishes. By forcing transactions into an order so they cannot conflict with each other, it also solves the phantom read problem.
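A minimal JDBC sketch of selecting an isolation level on a connection; the MySQL URL, user name, and password are placeholders, and a suitable JDBC driver is assumed to be on the classpath. The Connection.TRANSACTION_* constants are standard JDBC:

```java
import java.sql.Connection;
import java.sql.DriverManager;

public class IsolationDemo {
    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:mysql://localhost:3306/test", "user", "password")) {
            conn.setAutoCommit(false);
            // REPEATABLE_READ is InnoDB's default; SERIALIZABLE also removes
            // phantom reads, at the cost of concurrency.
            conn.setTransactionIsolation(Connection.TRANSACTION_REPEATABLE_READ);
            // ... run the transaction's queries and updates here ...
            conn.commit();
        }
    }
}
```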

What are the transaction propagation properties, and the seven transaction propagation behaviors in Spring?

Transaction propagation properties

  • Required (required): the current method must run inside a transaction. If no transaction exists on the current thread, a new one is opened; if a transaction already exists, the method joins the current transaction.
  • RequiresNew (requires new): the current method must open a new transaction. If a transaction context already exists on the current thread, that transaction is suspended until the new transaction finishes, and is then resumed.
  • Mandatory (mandatory): the current method requires an existing transaction. If no transaction exists on the current thread, an exception is thrown; if one exists, the method joins it.
  • Supports (supports): the current method supports transactions. If a transaction exists on the current thread, the method joins it; if not, nothing special is done.
  • NotSupported (not supported): the current method does not support transactions. If a transaction exists on the current thread, it is suspended, the current method is executed, and the transaction is then resumed.
  • Never (never): the current method must not run inside a transaction. If a transaction exists on the current thread, an exception is thrown. This one is rarely used.
  • Nested (nested): if a transaction exists on the current thread, the method runs in a nested transaction (a savepoint inside the outer transaction); if none exists, it behaves like Required.
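A minimal sketch of the two most common behaviors using Spring's @Transactional; the service and method names are made up, and the propagation setting only takes effect when the call goes through the Spring proxy (i.e. is invoked from another bean, not from inside the same class):

```java
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Propagation;
import org.springframework.transaction.annotation.Transactional;

@Service
public class OrderService { // hypothetical service

    // REQUIRED (the default): join the caller's transaction,
    // or start a new one if none exists on the current thread.
    @Transactional(propagation = Propagation.REQUIRED)
    public void placeOrder() {
        // business logic ...
    }

    // REQUIRES_NEW: suspend the caller's transaction and run in a fresh one,
    // so this work commits or rolls back independently of the caller.
    @Transactional(propagation = Propagation.REQUIRES_NEW)
    public void writeAuditLog() {
        // audit logic ...
    }
}
```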

 

 

What are Dubbo's underlying principles and its underlying protocol?

Dubbo has five roles / core components: Container (the service container), Provider (the service provider), Registry (the registration center), Consumer (the service consumer), and Monitor (the monitoring center).

Container: mainly responsible for starting, loading, and running the service provider.

Registry: only responsible for registering services and looking up their addresses.

Monitor: responsible for counting the number of calls to each service and the call times.

Dubbo's core is a distributed RPC framework, and the core of an RPC framework is its RPC protocol. Dubbo supports multiple RPC protocols; the default dubbo protocol uses a single long-lived connection with NIO asynchronous communication, is suited to transmitting small amounts of data, and processes requests concurrently with a thread pool, which increases concurrency efficiency and reduces handshakes.
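A rough sketch of exposing a provider through Dubbo's API configuration, just to make the roles above concrete; GreetingService and GreetingServiceImpl are hypothetical, the ZooKeeper address and port are placeholders, and the package name assumes Dubbo 2.7+ (older versions use com.alibaba.dubbo.config). In practice the XML or annotation styles are more common:

```java
import org.apache.dubbo.config.ApplicationConfig;
import org.apache.dubbo.config.ProtocolConfig;
import org.apache.dubbo.config.RegistryConfig;
import org.apache.dubbo.config.ServiceConfig;

public class ProviderBootstrap {
    public static void main(String[] args) throws Exception {
        // Expose the service over the default "dubbo" protocol
        // (single long-lived connection, NIO asynchronous communication).
        ProtocolConfig protocol = new ProtocolConfig();
        protocol.setName("dubbo");
        protocol.setPort(20880);

        ServiceConfig<GreetingService> service = new ServiceConfig<>();
        service.setApplication(new ApplicationConfig("demo-provider"));          // the Provider application
        service.setRegistry(new RegistryConfig("zookeeper://127.0.0.1:2181"));   // the Registry
        service.setProtocol(protocol);
        service.setInterface(GreetingService.class);   // hypothetical service interface
        service.setRef(new GreetingServiceImpl());     // hypothetical implementation
        service.export();                              // register the address with the Registry

        System.in.read(); // keep the provider (Container) alive
    }
}
```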

Load balancing approaches and algorithms?

  • HTTP redirect load balancing: the user ends up issuing two HTTP requests. The first request is intercepted by the cluster's scheduling server; according to some allocation strategy, the scheduler selects a server, puts the selected server's address into the Location field of the HTTP response header, sets the response status code to 302, and returns the response to the browser. The second request goes to the back-end server: when the browser receives the response, it parses the Location field and sends a request to that URL, so the designated server processes the user's request and produces the result, which is finally returned to the user.
  • DNS load balancing: before we can visit a web site by its domain name, the name must first be resolved into an IP address; this is done by the DNS (domain name server), which resolves the name into an IP address and returns it to us, and we then send our request to that IP. If a domain name points to multiple IP addresses, the DNS only needs to pick one IP to return for each resolution, which achieves load balancing across the server cluster.
  • Reverse proxy load balancing: every request first passes through the reverse proxy server, which either returns a result to the user directly or forwards the request to a back-end server for processing and then returns the result to the user. The reverse proxy server can act as the scheduler of the server cluster: it forwards each request to a suitable back-end server based on the current load and returns the result to the user.
  • IP load balancing (at the network layer).
  • Data link layer load balancing.

Load balancing algorithms

  • Round robin: requests are assigned to the back-end servers in turn. Every server is treated equally, without regard to each server's actual number of connections or current system load.
  • Random: using the system's random algorithm, one server is picked at random from the back-end server list (according to its size) and the request is sent to that server.
  • Source address hashing: the idea of source address hashing is to take the client's IP address, compute a hash value from it with a hash function, and take that value modulo the size of the server list; the result is the index of the server the client will access. With source address hash load balancing, as long as the back-end server list stays the same, requests from a client with the same IP address are always mapped to the same back-end server.
  • Weighted round robin: back-end servers may differ in machine configuration and current system load, so their capacity differs. Machines with high configuration and low load are given a higher weight and asked to process more requests; machines with low configuration and high load are given a lower weight, which reduces their load. Weighted round robin handles this well, distributing requests in order according to the weights.
  • Weighted random: like weighted round robin, weighted random assigns weights according to the back-end machines' configuration and system load. The difference is that it distributes requests randomly according to the weights rather than in sequence.
  • Least connections: the least-connections algorithm is more flexible and intelligent. Because the back-end servers differ in configuration, they process requests at different speeds; this algorithm uses each back-end server's current number of connections and dynamically selects the server with the smallest backlog of connections to handle the current request, improving the efficiency of the back-end services as much as possible and distributing the load sensibly across the servers.
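A minimal, self-contained Java sketch of three of these algorithms (round robin, source address hashing, weighted random); the server addresses and weights are made-up placeholders:

```java
import java.util.List;
import java.util.concurrent.ThreadLocalRandom;
import java.util.concurrent.atomic.AtomicInteger;

public class LoadBalancer {
    private final List<String> servers; // e.g. "10.0.0.1:8080", "10.0.0.2:8080", ...
    private final AtomicInteger counter = new AtomicInteger();

    public LoadBalancer(List<String> servers) {
        this.servers = servers;
    }

    // Round robin: hand requests to the servers in turn, ignoring their load.
    public String roundRobin() {
        int index = Math.floorMod(counter.getAndIncrement(), servers.size());
        return servers.get(index);
    }

    // Source address hashing: the same client IP always maps to the same server,
    // as long as the server list does not change.
    public String sourceAddressHash(String clientIp) {
        int index = Math.floorMod(clientIp.hashCode(), servers.size());
        return servers.get(index);
    }

    // Weighted random: pick a server with probability proportional to its weight
    // (weights.length is assumed to equal servers.size()).
    public String weightedRandom(int[] weights) {
        int total = 0;
        for (int w : weights) {
            total += w;
        }
        int r = ThreadLocalRandom.current().nextInt(total);
        for (int i = 0; i < weights.length; i++) {
            if (r < weights[i]) {
                return servers.get(i);
            }
            r -= weights[i];
        }
        return servers.get(servers.size() - 1); // not reached when all weights are positive
    }
}
```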
     

What load balancing does Dubbo provide, and what about Nginx load balancing?

Load balancing strategies provided by Dubbo:

  • Random: random selection, with the probability set by weight.
  • Weighted round robin: the servers are polled in proportion to the agreed weights.
  • Least active calls (weighted): the active-call count is the difference between the counts before and after a call (calls started minus calls finished); the provider with the lowest active count is called first, and providers with the same active count are chosen at random. As a result, fast-responding providers receive more requests and slow-responding providers receive fewer.
  • Consistent hash: a hash ring is built from the service providers' IPs, and calls carrying the same parameters are always sent to the same provider. If a provider goes down, its requests are spread evenly over the other providers based on virtual nodes.

Note: Dubbo's load balancing works at the service (RPC) level, whereas Nginx's load balancing works at the HTTP request level; they are completely different.
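A hedged sketch of selecting one of these strategies on the consumer side, assuming Dubbo's annotation-driven configuration (Dubbo 2.7.7+ @DubboReference; GreetingService and sayHello are hypothetical). The strategy keys Dubbo recognizes include "random" (the default), "roundrobin", "leastactive", and "consistenthash":

```java
import org.apache.dubbo.config.annotation.DubboReference;
import org.springframework.stereotype.Component;

@Component
public class GreetingClient {

    // Ask Dubbo to use the least-active-calls strategy for this reference.
    @DubboReference(loadbalance = "leastactive")
    private GreetingService greetingService; // hypothetical remote interface

    public String greet(String name) {
        return greetingService.sayHello(name); // hypothetical method
    }
}
```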

Nginx load balancing:

  • Round robin (default): each request is assigned to a different back-end server one by one in chronological order; if a back-end server goes down, it can be removed automatically.
  • weight: round robin with probabilities, where the weight is proportional to the access ratio; used when back-end server performance is uneven.
  • ip_hash: each request is assigned according to the hash of the client IP, so each visitor always reaches the same back-end server, which can solve session problems.
  • fair (third party): requests are assigned according to the back-end servers' response times, with shorter response times given priority.
  • url_hash (third party): requests are assigned according to the hash of the requested URL, so each URL is directed to the same back-end server; this is effective when the back-end servers are caches.

What operations happen when an object is created with new?

Loading and initializing the class, then creating the object

Loading order (when inheritance is involved): static before non-static, parent class before subclass, code blocks before constructors.

Initialization:

  • When an object is created with new, the JVM allocates memory on the heap to store the object.
  • Heap memory is created for the parent class's member variables and the subclass's member variables, all initially holding their default values (e.g. null); the parent class's member variables are then explicitly initialized (if initializers exist).
  • The parent class's instance code block runs (the parent class's member variables are initialized here).
  • The parent class's constructor runs.
  • The subclass's member variables are explicitly initialized (if initializers exist).
  • The subclass's instance code block runs (the subclass's member variables are initialized here).
  • The subclass's constructor runs.
     

Create an object:

1. Memory for the object is allocated on the heap.

  The allocated memory includes all instance variables of this class and of its parent classes, but does not include any static variables.

2. Default values are assigned to all instance variables.

  The instance variable definitions (held in the method area as part of the class) are copied into the heap, and each variable is then assigned its default value.

3. The instance initialization code is executed.

  The initialization order is: the parent class is initialized first, then the subclass; when an object is initialized, the instance code block executes first and then the constructor. A small sketch that prints this order follows.
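The class and field names below are made up for illustration; the numbers in the output show the sequence on the first new Child():

```java
class Parent {
    static { System.out.println("1. parent static block"); }          // class initialization, parent first
    private int parentField = print("3. parent field initializer");
    { System.out.println("4. parent instance block"); }
    Parent() { System.out.println("5. parent constructor"); }

    static int print(String msg) { System.out.println(msg); return 0; }
}

class Child extends Parent {
    static { System.out.println("2. child static block"); }           // then the subclass
    private int childField = print("6. child field initializer");
    { System.out.println("7. child instance block"); }
    Child() { System.out.println("8. child constructor"); }
}

public class InitOrderDemo {
    public static void main(String[] args) {
        new Child(); // prints 1..8 in order on the first instantiation
    }
}
```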

 



Origin blog.csdn.net/java_wxid/article/details/100550582