Chapter X: JMH Performance Testing

JMH, the Java Microbenchmark Harness, is a toolkit for writing micro-benchmarks of Java code. What is a micro-benchmark? Simply put, it is a benchmark at the method level, with precision down to the microsecond. Once you have located a hot spot and want to optimize that method further, you can use JMH to quantify the effect of your optimizations.

Typical application scenarios for JMH include:

  • Finding out exactly how long a method takes to execute, and how the execution time correlates with the input;
  • Comparing different implementations of an interface under the same conditions to find the best one;
  • Finding out what percentage of requests complete within a given time.

1. Setting up the JMH environment

<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-core</artifactId>
    <version>1.20</version>
</dependency>
<dependency>
    <groupId>org.openjdk.jmh</groupId>
    <artifactId>jmh-generator-annprocess</artifactId>
    <version>1.20</version>
    <scope>provided</scope>
</dependency>

Usage example (this case compares the performance of DateTime and Calendar under high concurrency); see the sketch below.
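The original example code is not reproduced here, so the following is only a minimal sketch of such a comparison, assuming a hypothetical DateTimeBenchmark class that benchmarks java.time.LocalDateTime.now() against Calendar.getInstance() with several concurrent threads:

import java.time.LocalDateTime;
import java.util.Calendar;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.Threads;

// Hypothetical benchmark: LocalDateTime vs. Calendar under concurrent load.
@BenchmarkMode(Mode.Throughput)
@OutputTimeUnit(TimeUnit.MILLISECONDS)
@State(Scope.Benchmark)
@Threads(8) // simulate eight concurrent callers
public class DateTimeBenchmark {

    @Benchmark
    public LocalDateTime localDateTimeNow() {
        // java.time API: creates an immutable date-time object
        return LocalDateTime.now();
    }

    @Benchmark
    public Calendar calendarInstance() {
        // legacy API: creates a mutable Calendar object
        return Calendar.getInstance();
    }
}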

2. Running the benchmark

You may encounter an error:

Exception in thread "main" java.lang.RuntimeException: ERROR: Unable to find the resource: /META-INF/BenchmarkList 

Solution: install the m2e-apt Eclipse plug-in and enable the "Automatically configure JDT APT" option.
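Benchmarks are usually launched from a plain main method through the JMH Runner and OptionsBuilder. A minimal sketch, reusing the hypothetical DateTimeBenchmark class from the earlier example (the warmup, iteration and fork counts are arbitrary illustrative values):

import org.openjdk.jmh.runner.Runner;
import org.openjdk.jmh.runner.RunnerException;
import org.openjdk.jmh.runner.options.Options;
import org.openjdk.jmh.runner.options.OptionsBuilder;

public class BenchmarkMain {
    public static void main(String[] args) throws RunnerException {
        // Select the benchmark class and configure warmup, measurement and forking.
        Options options = new OptionsBuilder()
                .include(DateTimeBenchmark.class.getSimpleName())
                .warmupIterations(3)
                .measurementIterations(5)
                .forks(1)
                .build();
        // Run the benchmarks and print the report to the console.
        new Runner(options).run();
    }
}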

3. Basic concepts

Mode: Mode indicates how JMH runs the benchmark. The modes usually correspond to different measurement dimensions or different ways of measuring.

JMH currently has four modes:

  • Throughput: overall throughput, e.g. "how many calls can be executed in one second".
  • AverageTime: average call time, e.g. "each call takes an average of xxx milliseconds".
  • SampleTime: random sampling, with uniformly sampled results in the final output, e.g. "99% of calls complete within xxx ms, 99.99% of calls within xxx ms".
  • SingleShotTime: while the other modes run iterations that last 1 s by default, SingleShotTime runs the method only once. It is often combined with a warmup count of zero to test cold-start performance.

Two further concepts are needed to read the results (see the annotated sketch after this list):

  • Iteration: the smallest unit of a JMH test. In most modes one iteration lasts one second; JMH keeps calling the benchmarked method during that second and then, depending on the sampling mode, computes the throughput, average execution time, and so on.
  • Warmup: pre-heating performed before the actual benchmark. Why is warm-up needed? Because of the JVM's JIT mechanism: if a method is called many times, the JVM will compile it to machine code to speed up execution. Warming up makes the benchmark results closer to the real situation.
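How the mode, the warmup iterations, and the measurement iterations fit together can be expressed directly with annotations. A minimal sketch (the iteration counts and the logarithm workload are arbitrary choices for illustration):

import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Measurement;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Warmup;

public class ModeExampleBenchmark {

    // Average call time plus sampled percentiles, after 3 warmup iterations
    // of 1 second each and 5 measurement iterations of 1 second each.
    @Benchmark
    @BenchmarkMode({Mode.AverageTime, Mode.SampleTime})
    @OutputTimeUnit(TimeUnit.MICROSECONDS)
    @Warmup(iterations = 3, time = 1, timeUnit = TimeUnit.SECONDS)
    @Measurement(iterations = 5, time = 1, timeUnit = TimeUnit.SECONDS)
    public double compute() {
        return Math.log(System.nanoTime());
    }
}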

4. Annotations and options

  • @BenchmarkMode: corresponds to the Mode option; can be used on a class or a method. Note that the annotation value is an array, so several modes can be run together, or Mode.All can be given to run every mode.
  • @State: class annotation. A JMH test class must use the @State annotation. State defines the lifecycle of a class instance and can be compared to a Spring bean's Scope.

Since JMH allows multiple threads to run the test simultaneously, the different Scope options have the following meanings:

  • Scope.Thread: the default State; each thread gets its own instance of the test class.
  • Scope.Benchmark: all threads share a single instance; used to test the performance of a stateful object shared across threads.
  • Scope.Group: each thread group shares one instance.
  • @OutputTimeUnit: the time unit used for the benchmark results; can be used on a class or a method; takes a standard java.util.concurrent.TimeUnit.
  • @Benchmark: method annotation indicating that the method is a benchmark target.
  • @Setup: the annotated method is executed before the benchmark runs; as the name suggests, it is mainly used for initialization.
  • @TearDown: correspondingly, the annotated method is executed after all benchmark executions finish; it is mainly used for releasing resources.
  • @Param: member annotation used to specify several values for a parameter; particularly suitable for testing a function's performance with various inputs. @Param takes a String array, and the values are converted to the member's data type before the @Setup method runs. Multiple @Param-annotated members form a Cartesian product: for example, with two @Param fields, the first holding 5 values and the second 2, each test method runs 5 * 2 = 10 times. A sketch combining @State, @Param, @Setup and @TearDown follows this list.
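To make these annotations concrete, here is a minimal sketch (the ParamBenchmark class name, the list sizes and the summation workload are illustrative assumptions, not from the original article):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.TimeUnit;

import org.openjdk.jmh.annotations.Benchmark;
import org.openjdk.jmh.annotations.BenchmarkMode;
import org.openjdk.jmh.annotations.Level;
import org.openjdk.jmh.annotations.Mode;
import org.openjdk.jmh.annotations.OutputTimeUnit;
import org.openjdk.jmh.annotations.Param;
import org.openjdk.jmh.annotations.Scope;
import org.openjdk.jmh.annotations.Setup;
import org.openjdk.jmh.annotations.State;
import org.openjdk.jmh.annotations.TearDown;

@State(Scope.Benchmark)            // one shared instance across all benchmark threads
@BenchmarkMode(Mode.AverageTime)
@OutputTimeUnit(TimeUnit.MICROSECONDS)
public class ParamBenchmark {

    // JMH converts the String values to int before @Setup runs;
    // the benchmark is executed once for each value.
    @Param({"10", "1000", "100000"})
    private int size;

    private List<Integer> data;

    @Setup(Level.Trial)
    public void prepare() {
        // initialization: build the input list for the current size
        data = new ArrayList<>(size);
        for (int i = 0; i < size; i++) {
            data.add(i);
        }
    }

    @TearDown(Level.Trial)
    public void cleanup() {
        // release resources after all measurements for this trial finish
        data = null;
    }

    @Benchmark
    public int sum() {
        int total = 0;
        for (int value : data) {
            total += value;
        }
        return total;
    }
}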

Origin blog.csdn.net/m0_37661458/article/details/90707777