Native Memory Tracking in JVM

1. Overview

Ever wondered why Java applications consume much more memory than the amount specified via the well-known -Xms and -Xmx tuning flags? For a variety of reasons and possible optimizations, the JVM may allocate additional native memory. These extra allocations can eventually push the consumed memory beyond the -Xmx limit.

In this tutorial, we're going to list a few common sources of native memory allocation in the JVM, along with their sizing flags, and then learn how to use Native Memory Tracking to monitor them.

2. Native Allocations

The heap is usually the biggest memory consumer in a Java application, but there are others. Besides the heap, the JVM allocates fairly large chunks of native memory to maintain class metadata, application code, code generated by the JIT, internal data structures, and more. In the following sections, we'll explore some of those allocations.

2.1. Metaspace

To maintain some metadata about the loaded classes, the JVM uses a dedicated non-heap area called Metaspace. Before Java 8, the equivalent was called PermGen or Permanent Generation. Metaspace or PermGen contains the metadata about the loaded classes rather than their instances, which are kept inside the heap.

The important thing here is that heap size configurations won't affect the Metaspace size, since Metaspace is an off-heap data area. To limit the Metaspace size, we use other tuning flags (a usage example follows the list):

  • -XX:MetaspaceSize and -XX:MaxMetaspaceSize set the minimum and maximum Metaspace size
  • Before Java 8, -XX:PermSize and -XX:MaxPermSize set the minimum and maximum PermGen size
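
For instance, assuming a runnable jar named app.jar (as used later in this article), the Metaspace limits could be set like this; the sizes below are just placeholders:

    $ java -XX:MetaspaceSize=64m -XX:MaxMetaspaceSize=256m -jar app.jar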

2.2. Threads

One of the most memory-intensive data areas in the JVM is the stack, created at the same time as each thread. The stack stores local variables and partial results, and plays an important role in method invocation.

The default thread stack size is platform-dependent, but on most modern 64-bit operating systems it's around 1 MB. This size is configurable via the -Xss tuning flag.
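
For example, a smaller per-thread stack can be requested at startup; the 512 KB value below is just an illustration:

    $ java -Xss512k -jar app.jar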

In contrast to other data areas, the total memory allocated to stacks is practically unbounded when there is no limit on the number of threads. It's also worth mentioning that the JVM itself needs a few threads to perform its internal operations, such as GC or just-in-time compilation.

2.3. Code Cache

To run JVM bytecode on different platforms, it needs to be converted to machine instructions. The JIT compiler is responsible for this compilation as the program executes.

When the JVM compiles bytecode to assembly instructions, it stores those instructions in a special non-heap data area called the code cache. The code cache can be managed just like the other data areas in the JVM. The -XX:InitialCodeCacheSize and -XX:ReservedCodeCacheSize tuning flags determine the initial and maximum possible size of the code cache.
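
For instance, both sizes could be tuned at startup; the values below are arbitrary examples:

    $ java -XX:InitialCodeCacheSize=32m -XX:ReservedCodeCacheSize=128m -jar app.jar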

2.4. Garbage Collection

The JVM ships with a handful of GC algorithms, each suitable for different use cases. All of those GC algorithms share one common trait: they need to use some off-heap data structures to perform their tasks. These internal data structures consume more native memory.

2.5. Symbols

Let's start with Strings, one of the most commonly used data types in application and library code. Because they're everywhere, they usually occupy a large portion of the heap. If a large number of those strings contain the same content, then a significant part of the heap is wasted.

To save some heap space, we can store just one version of each String and make the other occurrences refer to that stored version. This process is called String Interning. Since the JVM can only intern compile-time string constants automatically, we can manually call the intern() method on strings we intend to intern.
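
As a minimal sketch of manual interning (the class and variable names below are made up for illustration):

    public class InternExample {

        public static void main(String[] args) {
            // Built at runtime, so it's not interned automatically
            String dynamic = new StringBuilder("na").append("tive").toString();
            // A compile-time constant, already placed in the string pool
            String literal = "native";

            System.out.println(dynamic == literal);          // false: two distinct objects
            System.out.println(dynamic.intern() == literal); // true: both refer to the pooled copy
        }
    }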

The JVM stores interned strings in a special native fixed-sized hashtable called the string table, also known as the string pool. We can configure the table size (i.e. the number of buckets) with the -XX:StringTableSize tuning flag.
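
For example, the number of buckets could be raised at startup; the value below is arbitrary:

    $ java -XX:StringTableSize=120121 -jar app.jar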

In addition to the string table, there's another native data area called the runtime constant pool. The JVM uses this pool to store constants such as compile-time numeric literals, or method and field references that must be resolved at runtime.

2.6. Native Byte Buffers

The JVM is the usual suspect for a significant amount of native allocation, but sometimes developers can allocate native memory directly, too. The most common approaches are the malloc call from JNI and NIO's direct ByteBuffers.
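
As a minimal sketch, allocating a direct buffer from NIO looks like this; the 1 MB size is arbitrary:

    import java.nio.ByteBuffer;

    public class DirectBufferExample {

        public static void main(String[] args) {
            // Backed by native memory rather than the Java heap
            ByteBuffer buffer = ByteBuffer.allocateDirect(1024 * 1024);
            System.out.println("Direct buffer? " + buffer.isDirect()); // prints: Direct buffer? true
        }
    }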

2.7. Additional Tuning Flags

In this section, we've used a handful of JVM tuning flags for different optimization scenarios. Using the following tip, we can find almost all the tuning flags related to a particular concept:

    $ java -XX:+PrintFlagsFinal -version | grep <concept>

PrintFlagsFinal prints all the -XX options in the JVM. For example, to find all the flags related to Metaspace:

    $ java -XX:+PrintFlagsFinal -version | grep Metaspace
        // truncated
        uintx MaxMetaspaceSize                          = 18446744073709547520                    {product}
        uintx MetaspaceSize                             = 21807104                                {pd product}
        // truncated

3. Native Memory Tracking (NMT)

Now that we know the common sources of native memory allocations in the JVM, it's time to find out how to monitor them. First, we have to enable native memory tracking with yet another JVM tuning flag: -XX:NativeMemoryTracking=off|summary|detail. By default, NMT is off, but we can enable it to see either a summary or a detailed view of its observations.

Let's suppose we want to track the native allocations of a typical Spring Boot application:

    $ java -XX:NativeMemoryTracking=summary -Xms300m -Xmx300m -XX:+UseG1GC -jar app.jar

Here, we're enabling NMT while allocating 300 MB of heap space, with G1 as our GC algorithm.

3.1. Instance Snapshots

When NMT is enabled, we can get the native memory information at any time using the jcmd command:

    $ jcmd <pid> VM.native_memory

In order to find the PID of a JVM application, we can use the jps command:

    $ jps -l                    
    7858 app.jar // This is our app
    7899 sun.tools.jps.Jps

Now, if we use jcmd with the appropriate pid, VM.native_memory makes the JVM print out the information about native allocations:

    $ jcmd 7858 VM.native_memory

Let's analyze the NMT output section by section.

3.2. Total Allocations

NMT reports the total reserved and committed memory as follows:

    Native Memory Tracking:
    Total: reserved=1731124KB, committed=448152KB

Reserved memory represents the total amount of memory our app can potentially use. Conversely, committed memory is the amount of memory our app is using right now.

Despite allocating 300 MB of heap, the total reserved memory for our app is almost 1.7 GB, much more than that. Similarly, the committed memory is around 440 MB, which is, again, much more than 300 MB.

After the total section, NMT reports the memory allocations per allocation source. So, let's dig into each source in depth.

3.3. Heap

NMT reports the heap allocations as we expected:

    Java Heap (reserved=307200KB, committed=307200KB)
            (mmap: reserved=307200KB, committed=307200KB)

There are 300 MB of both reserved and committed memory, which matches our heap size settings.

3.4. Metaspace

Here's what NMT reports about the class metadata for the loaded classes:

    Class (reserved=1091407KB, committed=45815KB)
        (classes #6566)
        (malloc=10063KB #8519) 
        (mmap: reserved=1081344KB, committed=35752KB)

Almost 1 GB is reserved and about 45 MB is committed to load 6566 classes.

3.5. Thread

Here's the NMT report on thread allocations:

    Thread (reserved=37018KB, committed=37018KB)
         (thread #37)
         (stack: reserved=36864KB, committed=36864KB)
         (malloc=112KB #190) 
         (arena=42KB #72)

In total, 36 MB of memory is allocated to the stacks of 37 threads, which is almost 1 MB per stack. The JVM allocates the memory to threads at the time of creation, so the reserved and committed allocations are equal.

3.6. Code Cache

Let's see what NMT reports about the assembly instructions generated and cached by the JIT:

    Code (reserved=251549KB, committed=14169KB)
       (malloc=1949KB #3424) 
       (mmap: reserved=249600KB, committed=12220KB)

Currently, almost 13 MB of code is being cached, and this amount can potentially go up to approximately 245 MB.

3.7. GC

Here's the NMT report on G1's memory usage:

    GC (reserved=61771KB, committed=61771KB)
     (malloc=17603KB #4501) 
     (mmap: reserved=44168KB, committed=44168KB)

As we can see, almost 60 MB is reserved and committed just to help G1.

Let's see how the memory usage looks for a simpler GC, say the Serial GC:

    $ java -XX:NativeMemoryTracking=summary -Xms300m -Xmx300m -XX:+UseSerialGC -jar app.jar

The Serial GC barely uses 1 MB:

    GC (reserved=1034KB, committed=1034KB)
     (malloc=26KB #158) 
     (mmap: reserved=1008KB, committed=1008KB)

Obviously, we shouldn't pick a GC algorithm just because of its memory usage, since the stop-the-world nature of the Serial GC's collections may cause performance degradation. There are, however, several GCs to choose from, each balancing memory and performance differently.

3.8. Symbol

Here is the NMT report on symbol allocations, such as the string table and constant pool:

    Symbol (reserved=10148KB, committed=10148KB)
         (malloc=7295KB #66194) 
         (arena=2853KB #1)

Almost 10 MB is allocated to symbols.

3.9. NMT Over Time

NMT allows us to track how memory allocations change over time. First, we should mark the current state of our application as a baseline:

    $ jcmd <pid> VM.native_memory baseline
    Baseline succeeded

Then, after a while, we can compare the current memory usage with that baseline:

    $ jcmd <pid> VM.native_memory summary.diff

Using the + and - signs, NMT tells us how the memory usage changed over that period:

    
    Total: reserved=1771487KB +3373KB, committed=491491KB +6873KB

    -  Java Heap (reserved=307200KB, committed=307200KB)
                 (mmap: reserved=307200KB, committed=307200KB)

    -  Class (reserved=1084300KB +2103KB, committed=39356KB +2871KB)
    // Truncated

The total reserved and committed memory increased by 3 MB and 6 MB, respectively. Other fluctuations in memory allocations can be spotted just as easily.
    
3.10. Detailed NMT
    
NMT can also provide very detailed information about a map of the entire memory space. To enable this detailed report, we should use the -XX:NativeMemoryTracking=detail tuning flag.
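
Reusing the Spring Boot launch command from earlier, detailed tracking can be enabled like this:

    $ java -XX:NativeMemoryTracking=detail -Xms300m -Xmx300m -XX:+UseG1GC -jar app.jar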
    
4. Conclusion
    
In this article, we enumerated the different consumers of native memory allocations in the JVM. Then, we learned how to inspect a running application to monitor its native allocations. With these insights, we can more effectively size our applications and the runtime environment.
    
> Original: <https://www.baeldung.com/native-memory-tracking-in-jvm>
>
> Author: [Ali Dehghani](https://www.baeldung.com/author/ali-dehghani/)
>
> Translator: Emma
