Tomcat startup problem: SessionIdGeneratorBase.createSecureRandom took 5 minutes and Tomcat started extremely slowly

Original article: https://www.cnblogs.com/devilwind/p/6902037.html

Today I deployed Tomcat in a new environment. Shortly after the first start, I stopped it and started it again, and then the startup log got no further than:

00:25:14.144 [localhost-startStop-1] INFO  o.s.web.context.ContextLoader - Root WebApplicationContext: initialization completed in 6287 ms

It just hung there and the application could not be accessed. I assumed some configuration in the application was wrong and searched for a long time; I even removed the Spring configuration loading entirely and started again, with no luck, even though the same application ran along briskly in the development environment.

A few minutes later, checking the log again, I found that Tomcat had actually finished starting. So it was not stuck at all, just slow.

Watching the startup threads with jstack, I found that the C2 CompilerThread was using a lot of CPU, and that org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom was blocked reading a file while CPU usage stayed high. After searching on Baidu, I am reposting two other articles below.

Jitter when restarting a service or deploying a release, and how to fix it: http://www.cnblogs.com/LBSer/p/3703967.html

I. Problem description

      When we deploy a release or restart a service (Jetty 8 as the server), we often find that the load on some machines spikes to a very high level (as high as 70) and only comes back down after a long time (about five minutes) (Figure 1). The response time curve (Figure 2) matches the load curve. Note: at the moment the load starts to spike, the application's service port is already open and traffic is already coming in (for what "load" means exactly, see http://www.cnblogs.com/amsun/p/3155246.html).

 

 

Figure 1: Load spike during release

 

 

Figure 2: Response time spike during release

II. Troubleshooting

     We monitored resource usage at release time.

1) Find the threads with high CPU usage via top -H -p <pid>; threads 2129 and 2130 turned out to have high CPU usage.

 

 

Figure 3: Finding the threads with high CPU usage

 

2) Dump the thread stacks with jstack and convert thread IDs 2129 and 2130 to hexadecimal (printf "%x\n" 2129), which gives 851 and 852; both turned out to be compiler threads (Table 1). Moreover, as soon as the CPU usage of these two threads dropped, the load and response times immediately returned to normal; the timing matched very closely. (The command sequence is sketched after Table 1.)

 

Table 1: Details of the two threads with high CPU usage

"C2 CompilerThread1" daemon prio=10 tid=0x00007fce48125800 nid=0x852 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"C2 CompilerThread0" daemon prio=10 tid=0x00007fce48123000 nid=0x851 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
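
The correlation workflow in step 2) looks roughly like this (a sketch; the process ID 2128 is hypothetical, substitute your own JVM's pid):

top -H -p 2128                        # per-thread CPU usage for the JVM process
printf "%x\n" 2129                    # convert the hot thread id to hex -> 851
jstack 2128 | grep -A 3 "nid=0x851"   # locate that thread in the stack dump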

III. Explaining the phenomenon

      Why is the C2 CompilerThread's CPU usage so high early in the application's life? What is it actually doing?

      When a Java program starts, all of its code runs in interpreted mode. Only after it has run for a while, once a method's invocation count or loop execution count crosses a threshold, is that code compiled into machine code, and compiled code runs far more efficiently. As execution time lengthens, the JVM gradually applies more advanced compiler optimizations, such as profiling branch outcomes and escape analysis. The C2 CompilerThread threads are the ones doing this compilation and optimization work.
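
As a standalone illustration (not from the original article), the following toy program crosses the JIT compilation thresholds; running it with java -XX:+PrintCompilation JitWarmupDemo prints a line each time a method such as JitWarmupDemo::sum is compiled.

public class JitWarmupDemo {
    // A small method that becomes "hot" after enough invocations,
    // at which point the JIT compiles it to native code.
    static long sum(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) {
            s += i;
        }
        return s;
    }

    public static void main(String[] args) {
        long total = 0;
        // Enough calls to cross the default compilation thresholds.
        for (int i = 0; i < 20000; i++) {
            total += sum(1000);
        }
        System.out.println(total); // keep the result live so the work is not optimized away
    }
}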

     With that, the earlier observations make sense.

     Right after the program starts, Java is still running in interpreted mode, so the service is inefficient and response times are slow; because each request takes too long, the load naturally climbs. As traffic keeps flowing in, the execution counts of many code paths keep increasing, so the C2 CompilerThread threads keep gathering profiling information and start compiling the hot code into native machine code, which is why their CPU usage rises. Once the initial round of compilation and optimization is finished, the C2 CompilerThread CPU usage starts to drop, and since the optimized service performs much better, response times shorten dramatically and the load falls.

     So the crux of the problem is that the compilation and optimization phase lasts too long, which is what causes the jitter. How can we shorten it?

IV. Solutions

1) Warm-up

      If the compilation and optimization work finishes before the service starts accepting online requests, the jitter can be avoided. The usual approach is warming up, and there are two ways to do it:

      a) Application-driven warm-up: after startup, the application actively exercises its own hot code paths, so that the code on the main request paths has already been compiled to machine code before live traffic arrives; this can be confirmed with -XX:+PrintCompilation (a sketch follows this list).

      b) Traffic-replay warm-up: use a tool such as tcpcopy to copy live traffic from the online nginx tier to the new instance for warm-up, and only route real traffic to it once warm-up completes.
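
A minimal sketch of approach a), assuming a Servlet 3.0 container such as Jetty 8 or Tomcat; the hot-path work here is a placeholder that you would replace with calls into your own main request path:

import javax.servlet.ServletContextEvent;
import javax.servlet.ServletContextListener;
import javax.servlet.annotation.WebListener;

// Runs once when the web application starts and exercises hot code paths
// so that the JIT has already compiled them before real traffic arrives.
@WebListener
public class WarmUpListener implements ServletContextListener {

    @Override
    public void contextInitialized(ServletContextEvent sce) {
        long checksum = 0;
        for (int i = 0; i < 20000; i++) {
            // Placeholder: call the services, serializers, templates, etc.
            // that your real request path uses.
            checksum += exerciseHotPath(i);
        }
        sce.getServletContext().log("Warm-up finished, checksum=" + checksum);
        // Start the JVM with -XX:+PrintCompilation to confirm the warmed
        // methods were compiled before the instance takes live traffic.
    }

    @Override
    public void contextDestroyed(ServletContextEvent sce) {
        // Nothing to clean up.
    }

    // Stand-in for real hot code; any non-trivial work will do for the sketch.
    private long exerciseHotPath(int seed) {
        long acc = seed;
        for (int i = 0; i < 1000; i++) {
            acc = 31 * acc + i;
        }
        return acc;
    }
}

In practice the warm-up should exercise the same code as production requests (for example by replaying a few representative requests against localhost); a synthetic loop like this only warms what it actually calls.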

2) Use more compiler threads

     If we can speed up compilation and optimization, we can also shorten the jitter caused by the interpreted phase. So we can run more compiler threads and reach peak performance sooner.

     The number of compiler threads can be set with the -XX:CICompilerCount flag. The default is 2 (which matches the two compiler threads seen earlier in the stack dump); we raised it to 4.
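
For example, the flag could be passed on the launch command line roughly like this (a generic sketch, not the article's actual startup script; your-app.jar is a placeholder):

java -XX:CICompilerCount=4 -jar your-app.jar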

3) Tiered compilation

      There are three compilation modes: 1) Client mode; 2) Server mode; 3) Tiered mode. Our service used the default, Server mode.

      Server mode uses the heavyweight C2 compiler, so compilation is more expensive and only triggers after the code has been running for a while; its advantage is that the compiled code is more efficient.

      Client mode uses the lighter C1 compiler and triggers sooner (faster than Server mode), but the compiled code is not as efficient as Server mode's.

      Tiered mode is a compromise between Client and Server mode: it starts with Client mode so that code reaches the compiled stage sooner after startup, and later applies Server mode to the hot code to get the maximum optimization.

      Starting with Oracle JDK 7, the HotSpot VM has decent support for tiered compilation; it can be enabled with the -XX:+TieredCompilation flag, and in Java 8 Tiered mode is the default.
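
Combining options 2 and 3 on JDK 7 would then look something like the line below (on JDK 8 and later, -XX:+TieredCompilation is already the default, so the flag is redundant there):

java -XX:+TieredCompilation -XX:CICompilerCount=4 -jar your-app.jar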

      Figure 4 compares the performance of the different compilation modes (taken from http://www.javaworld.com/article/2078635/enterprise-middleware/jvm-performance-optimization--part-2--compilers.html); the x-axis is time and the y-axis is performance. You can see that Tiered mode matches C1's performance at the beginning and, after some time, approaches the performance of C2.

 

 

Figure 4: Performance comparison of the compilation modes

     

V. Results

       For simplicity we applied options 2 and 3.

       After applying options 2 and 3, over several releases, apart from a few machines whose load reached 10, the load during release generally no longer spiked to an excessive level (it stayed in the range of 2 to 4), and within a short time (about 2 minutes) it came back down to a more reasonable level (around 2). Judging by the load around release time, this is much better than before the optimization.

      Options 2 and 3 only reduce the duration and magnitude of the jitter; they cannot avoid it completely. To truly avoid the jitter you need option 1, implemented through pre-release warm-up or a graceful rolling restart.

 

 

 

 

##########################################################################################################

Tomcat startup problem: SessionIdGeneratorBase.createSecureRandom took 5 minutes     http://www.cnblogs.com/chyg/p/6844737.html

 

Normally Tomcat takes only 2 to 3 seconds to start, but one day it suddenly became very slow, taking 5 to 6 minutes. After investigating for a long time, we finally found the solution in the article below; the blogger really knows his stuff.

See the original: http://blog.csdn.net/chszs/article/details/49494701

 

Tomcat 8 was starting very slowly with no errors in the log; the log showed the following:

Log4j:[2015-10-29 15:47:11]  INFO ReadProperty:172 - Loading properties file from class path resource [resources/jdbc.properties]
Log4j:[2015-10-29 15:47:11]  INFO ReadProperty:172 - Loading properties file from class path resource [resources/common.properties]
29-Oct-2015 15:52:53.587 INFO [localhost-startStop-1] org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom Creation of SecureRandom instance for session ID generation using [SHA1PRNG] took [342,445] milliseconds.

The reason

Tomcat 7 and 8 use org.apache.catalina.util.SessionIdGeneratorBase.createSecureRandom to create the SecureRandom instance used for generating session IDs, and here that step took 342 seconds, i.e. close to 6 minutes.

SHA1PRNG is a cryptographically strong pseudo-random number generator based on the SHA-1 algorithm.

In SHA1PRNG there is a seed generator whose behavior depends on configuration:

1) If the java.security.egd system property or the securerandom.source property is set to "file:/dev/random" or "file:/dev/urandom", the JVM uses the native seed generator NativeSeedGenerator, which calls super(), which in turn initializes SeedGenerator.URLSeedGenerator(/dev/random).

2) If java.security.egd or securerandom.source points to some other existing URL, SeedGenerator.URLSeedGenerator(url) is used for initialization.

That is why setting the value to "file:///dev/urandom" or to "file:/dev/./urandom" has an effect.
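
To see the effect for yourself, here is a minimal sketch (not from the original article) that mimics what Tomcat's createSecureRandom does; run it once with -Djava.security.egd=file:/dev/random and once with -Djava.security.egd=file:/dev/./urandom and compare the reported times (on an entropy-starved server the first run can block for a long time, while on a desktop with plenty of entropy both may be fast).

import java.security.SecureRandom;

// Mimics the essence of Tomcat's SessionIdGeneratorBase.createSecureRandom:
// create a SHA1PRNG SecureRandom and force it to self-seed from the entropy
// source selected via java.security.egd / securerandom.source.
public class SecureRandomSeedTimer {
    public static void main(String[] args) throws Exception {
        long start = System.currentTimeMillis();
        SecureRandom random = SecureRandom.getInstance("SHA1PRNG");
        byte[] sessionIdBytes = new byte[16];
        random.nextBytes(sessionIdBytes); // first use triggers self-seeding
        System.out.println("Seeding and first random bytes took "
                + (System.currentTimeMillis() - start) + " ms");
    }
}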

In this implementation, the generator keeps an estimate of the amount of noise in the entropy pool, and random numbers are created from that pool. When read, /dev/random returns random bytes only from the noise currently in the entropy pool. /dev/random is best suited to scenarios that need very high-quality randomness, such as one-time pads or key generation.

When the entropy pool is empty, reads from /dev/random block until enough environmental noise has been collected. The goal is to act as a cryptographically secure pseudo-random number generator backed by as much entropy as possible. For generating high-quality encryption keys, or for scenarios that need long-term protection, that is exactly what you want.

So what is this environmental noise?

The random number generator gathers environmental noise from device drivers and other sources and puts it into the entropy pool, estimating the amount of noise as it does so. When the entropy pool is empty, collecting more noise takes considerable time. This means that when Tomcat relies on the entropy pool in a production environment, it can block for a long time.

The solution

There are two solutions:

1) Fix it at the Tomcat level

Configure the JRE to use a non-blocking entropy source.

Add a line like this in catalina.sh: -Djava.security.egd=file:/dev/./urandom

After adding it and restarting Tomcat, the total startup time dropped back to "Server startup in 2912 ms".
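
Concretely, this might be a line like the following (a sketch; Tomcat's bin/setenv.sh is the conventional place for such options, but appending to JAVA_OPTS inside catalina.sh works too):

JAVA_OPTS="$JAVA_OPTS -Djava.security.egd=file:/dev/./urandom"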

2) Fix it at the JVM level

Open the $JAVA_HOME/jre/lib/security/java.security file and find the line:

securerandom.source=file:/dev/urandom

and replace it with:

securerandom.source=file:/dev/./urandom


Source: www.cnblogs.com/jackcui/p/11504295.html