【Reproduced】Android OOM full analysis

I think this is a good article; it can solve the OOM problem caused by loading overly large images: http://www.cnblogs.com/manuosex/p/3661762.html

Android OOM sometimes appears very frequently. This is generally not a problem with Android's design; it is generally our own problem.

  In my experience, OOM appears mainly for the following reasons:

  1. The loaded object is too large;

  2. There are too many corresponding resources, and they are not released in time.

  To solve such problems, there are the following approaches:

  1: Handle memory references properly, most commonly with soft references (SoftReference), weak references (WeakReference), and phantom references (PhantomReference);
  2: When loading pictures into memory, process them directly in memory, for example with boundary compression;
  3: Dynamically reclaim memory;
  4: Optimize the heap memory allocation of the Dalvik virtual machine;
  5: Customize the heap memory size.

  Is it really that simple? Not necessarily. Let me explain:

  Soft references (SoftReference), phantom references (PhantomReference), and weak references (WeakReference): these three classes are references to Java objects in the heap, and through them you can interact with the GC in a simple way. Besides these three, there is also the most commonly used kind, the strong reference.

  Strong references, for example, look like the following code:

  Object o=new Object();      
  Object o1=o;  

  The first line above creates a new Object in the heap and refers to it through o; the second line makes o1 refer, via o, to that same Object in the heap. Both are strong references. As long as an object in the heap has a strong reference, the GC will not collect it. Now suppose the following code runs:

  o=null;         

  o1=null;

  Once both references are set to null, the Object has no strong reference left and becomes eligible for collection. Objects in the heap are classified as strongly reachable, softly reachable, weakly reachable, phantom reachable, or unreachable. The order of reference strength is strong, soft, weak, phantom. Which reachability category an object falls into is determined by its strongest reference, as follows:

String abc = new String("abc");                                     // 1
SoftReference<String> abcSoftRef = new SoftReference<String>(abc);  // 2
WeakReference<String> abcWeakRef = new WeakReference<String>(abc);  // 3
abc = null;                                                         // 4
abcSoftRef.clear();                                                 // 5

  In this example, the object pointed to by a Reference can be obtained through get(); if the return value is null, the object has already been cleared. Such techniques are often used when writing programs such as optimizers or debuggers, because these programs need to obtain information about an object without affecting its garbage collection.
  Phantom reference (PhantomReference)
  It is "phantom" in the sense that it gives you nothing: once a phantom reference is established, its get() method always returns null. If you look at the source code, you will find that the phantom reference does store the referenced object in its referent field, yet get() still returns null. Its real role shows up in how it interacts with the GC.
     Instead of setting the referent to null, the GC directly marks the new String("abc") object in the heap as finalizable.
      Unlike soft and weak references, the GC first adds the PhantomReference object to its ReferenceQueue and only then releases the phantom-reachable object. So before the new String("abc") object in the heap is actually collected, you get a chance to do some other work. The sketch below illustrates this role.
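
  A minimal sketch of that interaction (the variable names and the 2-second wait are assumptions, not from the original article): the PhantomReference is enqueued on its ReferenceQueue once the object becomes phantom reachable, which is the hook for doing cleanup work before the memory is reclaimed.

```java
import java.lang.ref.PhantomReference;
import java.lang.ref.Reference;
import java.lang.ref.ReferenceQueue;

public class PhantomDemo {
    public static void main(String[] args) throws InterruptedException {
        ReferenceQueue<String> queue = new ReferenceQueue<>();
        String abc = new String("abc");
        PhantomReference<String> abcPhantomRef = new PhantomReference<>(abc, queue);

        System.out.println(abcPhantomRef.get()); // always null for a phantom reference

        abc = null;      // drop the strong reference
        System.gc();     // only a suggestion; collection is not guaranteed

        // wait up to 2 seconds for the reference to be enqueued by the GC
        Reference<? extends String> enqueued = queue.remove(2000);
        if (enqueued != null) {
            System.out.println("phantom reference enqueued; do cleanup before the memory goes away");
        }
    }
}
```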

  Although these reference types let the GC reclaim the objects they point to, the GC is not that intelligent, so OOM still occurs from time to time.
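
  One common way to put soft references to work against image OOM is a memory cache whose entries the GC is allowed to clear under memory pressure. A minimal sketch (the class and method names are assumptions, not from the original article):

```java
import java.lang.ref.SoftReference;
import java.util.HashMap;
import java.util.Map;

import android.graphics.Bitmap;

public class SoftBitmapCache {
    private final Map<String, SoftReference<Bitmap>> cache = new HashMap<>();

    public void put(String key, Bitmap bitmap) {
        cache.put(key, new SoftReference<Bitmap>(bitmap));
    }

    /** Returns the cached bitmap, or null if it was never cached or the GC has reclaimed it. */
    public Bitmap get(String key) {
        SoftReference<Bitmap> ref = cache.get(key);
        return (ref == null) ? null : ref.get();
    }
}
```

  When get() returns null, the caller simply decodes the image again; the point is that the GC may reclaim cached bitmaps instead of throwing OOM.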

  Two: process images directly in memory when loading them.

  Try not to use setImageBitmap, setImageResource, or BitmapFactory.decodeResource to set a large image, because after decoding these functions ultimately go through createBitmap in the Java layer, which consumes more memory.

  Instead, use the BitmapFactory.decodeStream method to create the bitmap first and then set it as the source of the ImageView. The biggest secret of decodeStream is that it calls JNI >> nativeDecodeAsset() directly to complete the decoding, without going through createBitmap in the Java layer, thus saving Java-heap space.

  If you also pass a Config parameter when reading the image, you can effectively reduce the memory used for loading and thereby prevent the OutOfMemory exception from being thrown. Note that decodeStream reads the raw bytes of the image directly and does not automatically adapt to the machine's various resolutions. After using decodeStream, you need to provide the corresponding image resources in hdpi, mdpi, and ldpi; otherwise the image will have the same pixel size on machines of different resolutions and the displayed size will be wrong.
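
  As a concrete sketch of the "boundary compression" idea (the class name, method name, and target sizes are assumptions, not from the original article): decode only the bounds first with inJustDecodeBounds, choose an inSampleSize for the target size, then decode for real.

```java
import java.io.InputStream;

import android.content.Context;
import android.graphics.Bitmap;
import android.graphics.BitmapFactory;

public final class BitmapBounds {

    /** Decode resId down-sampled so it is no larger than reqWidth x reqHeight. */
    public static Bitmap decodeSampled(Context context, int resId, int reqWidth, int reqHeight) {
        BitmapFactory.Options options = new BitmapFactory.Options();
        options.inJustDecodeBounds = true;   // first pass: read width/height only, no pixel memory
        InputStream boundsStream = context.getResources().openRawResource(resId);
        BitmapFactory.decodeStream(boundsStream, null, options);

        int inSampleSize = 1;
        while (options.outWidth / (inSampleSize * 2) >= reqWidth
                && options.outHeight / (inSampleSize * 2) >= reqHeight) {
            inSampleSize *= 2;               // keep the SDK-recommended power of two
        }

        options.inJustDecodeBounds = false;  // second pass: decode the down-sampled bitmap
        options.inSampleSize = inSampleSize;
        options.inPreferredConfig = Bitmap.Config.RGB_565;
        InputStream decodeStream = context.getResources().openRawResource(resId); // reopen the stream
        return BitmapFactory.decodeStream(decodeStream, null, options);
    }
}
```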

  Also, the following approach helps a lot:

     InputStream is = this.getResources().openRawResource(R.drawable.pic1);
     BitmapFactory.Options options = new BitmapFactory.Options();
     options.inJustDecodeBounds = false;
     options.inSampleSize = 10;   // width and height become one tenth of the original
     Bitmap btp = BitmapFactory.decodeStream(is, null, options);

     if (!btp.isRecycled()) {
         btp.recycle();   // free the memory occupied by the bitmap
         System.gc();     // remind the system to collect promptly
     }
 
  Here is a method to read a local resource image in the most memory-saving way:

/**
 * Read a local resource image in a memory-saving way.
 * @param context
 * @param resId
 * @return the decoded bitmap
 */
public static Bitmap readBitMap(Context context, int resId){
    BitmapFactory.Options opt = new BitmapFactory.Options();
    // RGB_565 uses 2 bytes per pixel instead of the 4 used by ARGB_8888
    opt.inPreferredConfig = Bitmap.Config.RGB_565;
    // let the system purge and re-decode the pixel data under memory pressure
    opt.inPurgeable = true;
    opt.inInputShareable = true;
    // get the resource image as a raw stream and decode it natively
    InputStream is = context.getResources().openRawResource(resId);
    return BitmapFactory.decodeStream(is, null, opt);
}

  When I put pictures into the gallery on the server, a java.lang.OutOfMemoryError: bitmap size exceeds VM budget exception occurred; the decoded image size exceeded the available RAM. The emulator's RAM is relatively small, only 8 MB, and the problem appeared as soon as I put in a large number of pictures (more than 100 KB each).
  Since each picture was compressed beforehand, it expands when decoded into a Bitmap, which ends up exceeding the available RAM. The specific solution is as follows:


```java
// Solve the memory overflow problem when loading pictures
BitmapFactory.Options opts = new BitmapFactory.Options();
// inSampleSize is the scaling factor; the SDK recommends a power of two.
// The larger the value, the smaller and less sharp the decoded picture.
opts.inSampleSize = 4;
Bitmap bmp = null;
bmp = BitmapFactory.decodeResource(getResources(), mImageIds[position], opts);

// ...

// recycle the bitmap when it is no longer needed
bmp.recycle();
```

The problem was solved by the above method, but this is not the most perfect solution.


Processing the bitmap and limiting the decoded image size is how we avoid this memory overflow. Optimization approaches for the OutOfMemoryError:


★It is easy to run out of memory when using bitmaps in Android, reporting the following error: java.lang.OutOfMemoryError: bitmap size exceeds VM budget

● The main thing is to add this snippet:

BitmapFactory.Options options = new BitmapFactory.Options();
                options.inSampleSize = 2;
 
● eg1: (load the picture through a Uri)

private ImageView preview;
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2; // width and height are halved, so the picture is a quarter of the original
Bitmap bitmap = BitmapFactory.decodeStream(cr.openInputStream(uri), null, options);
preview.setImageBitmap(bitmap);

This still cannot completely solve the memory overflow.

● eg2: (load the picture through a file path)
 







private ImageView preview;
private String fileName = "/sdcard/DCIM/Camera/2010-05-14 16.01.44.jpg";
BitmapFactory.Options options = new BitmapFactory.Options();
options.inSampleSize = 2; // width and height are halved, so the picture is a quarter of the original
Bitmap b = BitmapFactory.decodeFile(fileName, options);
preview.setImageBitmap(b);
filePath.setText(fileName);

    This can compress by a large enough ratio, but memory overflow is still hard to avoid on phones with little memory, especially those with a 16 MB heap limit.
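
  A rough back-of-the-envelope calculation shows why (the photo dimensions below are an assumed example, not from the original article): a full-size camera photo decoded at ARGB_8888 already exceeds a 16 MB heap on its own, and inSampleSize plus RGB_565 is what brings it back under budget.

```java
public class BitmapBudget {
    public static void main(String[] args) {
        int w = 2592, h = 1936;                         // an assumed ~5-megapixel camera photo
        long argb8888 = (long) w * h * 4;               // ~20 MB at 4 bytes per pixel
        long sampled  = (long) (w / 2) * (h / 2) * 4;   // ~5 MB with inSampleSize = 2
        long rgb565   = (long) (w / 2) * (h / 2) * 2;   // ~2.5 MB with RGB_565 on top of that

        System.out.println(argb8888 + " / " + sampled + " / " + rgb565 + " bytes");
    }
}
```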

  3. Dynamic memory allocation

  Dynamic Memory Management (DMM) allocates and reclaims memory directly from the heap.

  There are two ways to implement dynamic memory management.

  One is Explicit Memory Management (EMM).
In EMM, memory is allocated from the heap and manually reclaimed once it is no longer needed; for example, a C program allocates an integer array with malloc() and frees the allocated memory with free().

  The second is Automatic Memory Management (AMM).
AMM is also known as garbage collection (GC). The Java programming language implements AMM: unlike EMM, the run-time system keeps track of allocated memory and reclaims it once it is no longer used.

  Whether EMM or AMM, all heap-management schemes face some common problems and defects:
  1) Internal fragmentation
Internal fragmentation happens when memory is wasted inside an allocation, because a request can cause the allocated block to be larger than needed. For example, 128 bytes of storage are requested but the run-time system allocates 512 bytes.

  2) External fragmentation
External fragmentation happens when a series of requests leaves several free memory blocks, but none of them is large enough to satisfy a new request.

  3) Location-based latency
Latency problems occur when two related data values are stored far apart, increasing access time.

  EMMs tend to be faster than AMMs.
  EMM vs. AMM comparison chart:

| | EMM | AMM |
| --- | --- | --- |
| Benefits | Smaller size, faster speed, easier control | Lets you stay focused on domain issues |
| Costs | Complexity, bookkeeping, memory leaks, dangling pointers | Performance |

The early garbage collectors were very slow, often taking 50% of the execution time.

The theory of the garbage collector originated in 1959, when Dan Edwards implemented the first garbage collector during the development of the Lisp programming language.

The garbage collector has three basic classic algorithms:

1) Reference counting
The basic idea: when an object is created and assigned, its reference counter is set to 1; whenever the object is assigned to another variable, the count is incremented; whenever a reference goes out of scope, the count is decremented. Once the count reaches 0, the object can be garbage collected.
Reference counting has its advantages: each operation takes only a small slice of execution time, which is a natural fit for real-time systems that cannot be interrupted for long.
But it also has drawbacks: it cannot detect cycles (two objects referencing each other), and incrementing or decrementing the count on every assignment costs time.
In modern garbage collection algorithms, reference counting is no longer used.
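
A minimal sketch of the idea in Java terms (the class and method names are assumptions; Java's own collectors do not work this way):

```java
public class RefCounted {
    private int refCount = 1;        // the object starts with one reference when created

    public synchronized void retain() {
        refCount++;                  // another variable now refers to this object
    }

    public synchronized void release() {
        if (--refCount == 0) {       // the last reference went out of scope
            dispose();               // count hit zero: reclaim immediately
        }
    }

    protected void dispose() {
        // free the underlying resource here
    }
}
```

Note that two such objects holding references to each other would keep each other's count above zero forever, which is exactly the cycle problem described above.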

2) Mark-sweep
The basic idea: each collection traces all references from the root set (finding the live objects) and marks each one found. When tracing is complete, every unmarked object is garbage that needs to be collected.
It is also called a tracing algorithm, based on mark-and-sweep. Collection happens in two phases: in the mark phase, the collector traverses the whole reference graph and marks every object it reaches; in the sweep phase, unmarked objects are freed and their memory becomes available again.

3) Copying collection
The basic idea: divide memory into two halves, one currently in use and one currently unused. Allocation always happens in the half currently in use; when it runs out, the live objects in that half are marked and copied into the unused half, and then the roles of the two halves are swapped (the current half becomes unused, the unused half becomes current), and the algorithm continues.
The copying algorithm has to stop all program activity and perform a long, busy copying job; that is its downside.

There are two algorithms in recent years:

1) Generational garbage collection (generational)
The idea is based on two observations:
  (1) Most objects created by most programs have very short lifetimes.
  (2) Some objects created by most programs have very long lifetimes.
The main disadvantage of a simple copying algorithm is that it spends extra time copying the long-lived objects over and over.
The basic idea of the generational algorithm is to divide memory into two (or more) regions, one representing the young generation and the other the old generation. Because of their different characteristics, the young generation is collected frequently and the old generation far less often. Each young-generation collection leaves some surviving objects, and these survivors gain "maturity" with every collection they live through; once their maturity reaches a certain level, they are promoted into the old-generation region.
The generational algorithm captures the dynamic behaviour of garbage collection well and avoids memory fragmentation; it is currently the algorithm used by many JVMs.

2) Conservative garbage collection (conservative)

Which algorithm is the best? The answer is there is no best.

  As a very commonly used memory-management approach, EMM has five basic methods:
  1) Table-driven algorithms
  Table-driven algorithms divide memory into a set of fixed-size blocks indexed by an abstract data structure, for example a bitmap in which one bit corresponds to one block and 0/1 indicates whether it is allocated. Disadvantages: the size of the bitmap depends on the size of the memory blocks, and finding a run of free blocks may require searching the entire bitmap, which hurts performance.

  2) Sequential fit
  The sequential-fit algorithm allows memory to be divided into blocks of different sizes. It keeps track of allocated and free blocks in the heap, marking the start and end addresses of the free blocks. It has three sub-variants:
  (1) First fit - allocate the first block found that satisfies the memory request
  (2) Best fit - allocate the block that most closely fits the memory request
  (3) Worst fit (least fit) - allocate the largest block to the memory request

3) Buddy systems
  The main purpose of the buddy-system algorithm is to speed up the merging of freed memory blocks. It has been shown that EMMs using the buddy-system algorithm can suffer from internal fragmentation.

4) Segregated storage
  Segregated storage divides the heap into multiple zones and applies a different memory-management plan to each zone. This is a very effective method.

5) Sub-allocators
  The sub-allocator technique tackles the allocation problem by taking one large block of memory from the run-time system and then managing it itself. In other words, the program becomes completely responsible for allocating and reclaiming memory from its own private heap (stockpile), without help from the run-time system. This may add complexity, but it can significantly improve performance. In his 1990 book "Compiler Design in C", Allen Holub made excellent use of sub-allocators to speed up his compiler implementation.
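
  In Java terms, the closest everyday analogue of a sub-allocator is an object pool that hands out and takes back its own buffers instead of asking the runtime (and the GC) for a fresh allocation every time. A minimal sketch (the class name and buffer size are assumptions, not from the original article):

```java
import java.util.ArrayDeque;

public class BufferPool {
    private static final int BUFFER_SIZE = 16 * 1024;        // assumed fixed buffer size
    private final ArrayDeque<byte[]> free = new ArrayDeque<byte[]>();

    /** Hand out a recycled buffer if one is available, otherwise allocate a new one. */
    public byte[] obtain() {
        byte[] buf = free.poll();
        return (buf != null) ? buf : new byte[BUFFER_SIZE];
    }

    /** Return a buffer to the pool so later requests skip the allocator and the GC. */
    public void recycle(byte[] buf) {
        free.push(buf);
    }
}
```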

  Note that the explicit memory management EMM must be flexible and able to respond to several different types of requests.

  Finally, should you use EMM or AMM? This is almost a religious question that comes down to personal preference: EMM buys speed and control at the cost of complexity; AMM sacrifices some performance for simplicity.

  Whether memory is allocated with EMM or AMM, OOM still occurs; it is inevitable when what you load is simply too large.

  Four. Optimize the heap memory allocation of the Dalvik virtual machine.

  For the Android platform, the Dalvik Java VM used by its hosting layer still has many places where processing can be optimized. For example, when developing large games or resource-hungry applications, we may consider intervening manually in GC processing and using the setTargetHeapUtilization method provided by the dalvik.system.VMRuntime class to improve the efficiency of the program's heap memory. For the underlying principle you can refer to the open-source project; here we only cover how to use it: private final static float TARGET_HEAP_UTILIZATION = 0.75f; VMRuntime.getRuntime().setTargetHeapUtilization(TARGET_HEAP_UTILIZATION); this can be called in the program's onCreate().

       The Android heap memory size can also be defined by the application itself.

       For some Android projects, the main performance bottleneck is Android's own memory management mechanism. Mobile phone manufacturers are currently stingy with RAM, and the amount of RAM has a very noticeable impact on the fluency of software. In addition to optimizing the Dalvik virtual machine's heap memory allocation, we can also forcibly define the heap memory size of our own software. Using the dalvik.system.VMRuntime class provided by Dalvik, we set the minimum heap size as an example:

       private final static int CWJ_HEAP_SIZE = 6 * 1024 * 1024;

       VMRuntime.getRuntime().setMinimumHeapSize(CWJ_HEAP_SIZE); // set the minimum heap memory to 6 MB; of course, when memory is tight, you can also intervene in the GC manually
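
  As a hedged sketch of how the two calls above would sit together in an application's onCreate (the class name TunedApplication is an assumption; as noted just below, this legacy Dalvik tuning no longer works on Android 4.0 and later):

```java
import dalvik.system.VMRuntime;

public class TunedApplication extends android.app.Application {
    private static final float TARGET_HEAP_UTILIZATION = 0.75f;
    private static final int CWJ_HEAP_SIZE = 6 * 1024 * 1024;  // 6 MB minimum heap

    @Override
    public void onCreate() {
        super.onCreate();
        // Legacy Dalvik-only tuning, as described in the text above;
        // per the note below, it is ignored on Android 4.0+.
        VMRuntime.getRuntime().setTargetHeapUtilization(TARGET_HEAP_UTILIZATION);
        VMRuntime.getRuntime().setMinimumHeapSize(CWJ_HEAP_SIZE);
    }
}
```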

  Note that this way of configuring the Dalvik virtual machine no longer works on Android 4.0 and later.

  Here's my little take on OOM in Android.
