[Android Performance Optimization Interview Questions] How does the app limit memory? How should memory be used reasonably?

What does this question examine?

Mastery of Android memory management to avoid writing code that causes memory issues

Knowledge points examined

Memory management, memory optimization

How should candidates answer

Memory management overview

The Android Runtime (ART) and the Dalvik virtual machine use paging and memory mapping to manage memory. This means that any memory an app modifies, whether by allocating new objects or touching memory-mapped pages, remains resident in RAM and cannot be paged out. The only way to release memory from an app is to release the object references the app holds, making the memory available to the garbage collector. There is one exception: any files memory-mapped without modification, such as code, can be paged out of RAM if the system wants to use that memory elsewhere.

Garbage collection

A managed memory environment like the ART or Dalvik virtual machine keeps track of every memory allocation. Once it determines that the program is no longer using a piece of memory, it releases the memory back to the heap without any intervention from the programmer. This mechanism for reclaiming unused memory in a managed memory environment is called garbage collection. Garbage collection has two goals: to find data objects in a program that will no longer be accessible and to reclaim the resources used by these objects.

Android's memory heap is generational, which means it keeps track of different allocation buckets based on the expected lifespan and size of the allocated objects. For example, recently allocated objects belong to the "young generation". When an object remains alive long enough, it can be promoted to the older generation and then to the permanent generation.

Each generation of the heap has its own dedicated upper limit on the amount of memory that a corresponding object can occupy. Whenever a generation begins to fill up, the system performs a garbage collection event to free up memory. The duration of a garbage collection depends on which generation of objects it collects and how many live objects there are in each generation.

Although garbage collection is very fast, it still affects the performance of your application. Typically, there is no way to control from code when a garbage collection event occurs. The system has a specific set of criteria for determining when to perform garbage collection. When the conditions are met, the system stops the execution process and starts garbage collection. If garbage collection occurs during an intensive processing loop such as animation or music playback, it may increase processing time, which may cause code execution in the application to exceed the recommended 16ms threshold for efficient and smooth frame rendering.

Additionally, the work your code performs may force garbage collection events to occur more often or last longer than normal. For example, if you allocate multiple objects in the innermost part of a for loop during each frame of an alpha-blending animation, you may flood the memory heap with a large number of objects. In that case, the garbage collector performs multiple garbage collection events, which can degrade your app's performance.
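The effect described above is easy to demonstrate in plain Java (this example is illustrative and not from the Android framework): the first loop below allocates a new temporary String, and a hidden StringBuilder, on every iteration, producing exactly the kind of churn that triggers extra garbage collection events, while the second reuses a single builder:

```java
public class ChurnDemo {

    // Allocates a new String (and a hidden StringBuilder) on every
    // iteration, creating many short-lived objects for the GC to collect.
    static String concatNaive(int n) {
        String s = "";
        for (int i = 0; i < n; i++) {
            s = s + i; // new temporary objects on each pass
        }
        return s;
    }

    // Reuses one StringBuilder, so the loop itself allocates almost nothing.
    static String concatReused(int n) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < n; i++) {
            sb.append(i);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(concatNaive(5));  // prints "01234"
        System.out.println(concatReused(5)); // prints "01234"
    }
}
```

On Android the same principle applies to any per-frame code path such as onDraw(): hoist allocations out of the loop so each frame allocates as little as possible.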

Shared memory

To fit everything it needs in RAM, Android attempts to share RAM pages across processes. It can achieve this by:

  • Each application process is forked from an existing process named Zygote. The Zygote process starts when the system boots and loads common framework code and resources, such as activity themes. To launch a new application process, the system forks the Zygote process and then loads and runs the application code in the new process. This approach enables the majority of RAM pages allocated for framework code and resources to be shared among all application processes.
  • Most static data is memory-mapped into a process. This technique allows data not only to be shared between processes but also to be swapped out when needed. Examples of static data include: Dalvik code (memory-mapped directly by placing it in a pre-linked .odex file), app resources (by designing the resource table to be a memory-mappable structure and by aligning the zip entries of the APK), and traditional project elements such as native code in .so files.
  • In many places, Android uses explicitly allocated shared memory regions (via ashmem or gralloc) to share the same dynamic RAM between processes. For example, the window surface uses memory shared between the app and the screen compositor, while the cursor buffer uses memory shared between the content provider and the client.

Due to the widespread use of shared memory, care needs to be taken when determining the amount of memory used by an application.

Allocate and deallocate application memory

The Dalvik heap is limited to a single virtual memory range per application process. This defines the logical heap size, which can grow as needed, but cannot exceed the system-defined upper limit for each application.

The logical size of the heap is not the same as the amount of physical memory the heap uses. When examining an app's heap, Android computes a proportional set size (PSS) value, which accounts for both dirty and clean pages shared with other processes, in proportion to how many apps share that RAM. This (PSS) total is what the system considers the app's physical memory footprint.

The Dalvik heap does not compact the logical size of the heap, which means Android does not defragment the heap to reduce space. Android can reduce the logical heap size only when unused space exists at the end of the heap. However, the system can still reduce the physical memory used by the heap. After garbage collection, Dalvik walks the heap, finds unused pages, and returns those pages to the kernel using madvise. Therefore, paired allocations and deallocations of large data blocks should result in reclaiming all (or nearly all) of the physical memory used. However, reclaiming memory from small allocations is much less efficient, because a page used for a small allocation may still be shared with another block of data that has not yet been freed.

Limit application memory

To maintain a multitasking environment, Android sets a hard cap on the heap size of each application. The exact maximum heap size for different devices depends on the overall available RAM size of the device. If an app attempts to allocate more memory after reaching the heap capacity limit, it may receive OutOfMemoryError.

In some cases, for example, to determine how much data it is safe to keep in the cache, it may be necessary to query the system to determine the exact amount of heap space currently available on the device. This value can be queried from the system by calling getMemoryClass(). This method returns an integer representing the number of megabytes available for the application heap.
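getMemoryClass() is an Android API and cannot run off-device, but the same idea can be sketched on any JVM with java.lang.Runtime, which exposes the heap's hard cap, its current logical size, and the unused space within it (on Android, maxMemory() roughly corresponds to the per-app heap limit that getMemoryClass() reports in megabytes):

```java
public class HeapLimits {

    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        long max = rt.maxMemory();     // hard upper limit of the heap
        long total = rt.totalMemory(); // current logical heap size
        long free = rt.freeMemory();   // unused space within the logical heap

        System.out.println("max   = " + (max / (1024 * 1024)) + " MB");
        System.out.println("total = " + (total / (1024 * 1024)) + " MB");
        System.out.println("free  = " + (free / (1024 * 1024)) + " MB");
    }
}
```

A cache sizing policy can then be expressed against these numbers, for example allocating at most a fixed fraction of maxMemory().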

Switch app

Android keeps non-foreground apps in a cache when the user switches between apps. Non-foreground apps are apps the user cannot see and that are not running a foreground service (such as music playback). For example, when the user first launches an app, the system creates a process for it; when the user leaves the app, the process does not exit. The system keeps the process cached, and if the user later returns to the app, the process is reused, making app switching faster.

If a cached process holds resources that are not currently needed, it can affect overall system performance even when the user is not using the app. When system resources such as memory run low, the system terminates cached processes; it also considers terminating the processes that consume the most memory in order to free up RAM.

Memory allocation between processes

The Android platform does not waste available memory while running. It will always try to utilize all available memory. For example, the system retains apps in memory after they are closed so users can quickly switch back to them. So, typically, Android devices run with little to no memory available. Memory management is critical to correctly allocating memory among important system processes and many user applications.

Memory types

Android devices contain three different types of memory: RAM, zRAM, and storage. Note that the CPU and GPU access the same RAM.


Figure 1. Memory Types - RAM, zRAM, and Storage

  • RAM is the fastest type of memory, but its size is usually limited. High-end devices usually have the largest RAM capacity.
  • zRAM is the RAM partition used for swap space. All data is compressed when placed into zRAM and then decompressed when copied out of zRAM. This portion of RAM grows or shrinks as pages are moved in and out of zRAM. Device manufacturers can set a maximum zRAM size limit.
  • Storage contains all persistent data (such as the file system), plus the object code for all apps, libraries, and the platform. Storage has much greater capacity than the other two types of memory. On Android, storage is not used for swap space as it is on other Linux implementations, because frequent writes can wear out this memory and shorten the life of the storage medium.

Memory pages

RAM is divided into "pages". Typically, each page is 4KB of memory.

The system considers pages either "free" or "used." Free pages are unused RAM. Used pages are RAM the system is actively using, divided into the following categories:

  • Cached pages: memory backed by a file in storage (for example, code or memory-mapped files). There are two types of cached memory:
    • Private pages: owned by one process and not shared
      • Clean pages: unmodified copies of a file in storage; can be deleted by kswapd to increase free memory
      • Dirty pages: modified copies of a file in storage; can be moved to zRAM or compressed in zRAM by kswapd to increase free memory
    • Shared pages: used by multiple processes
      • Clean pages: unmodified copies of a file in storage; can be deleted by kswapd to increase free memory
      • Dirty pages: modified copies of a file in storage; the changes can be written back to the file in storage by kswapd, or explicitly with msync() or munmap(), to increase free space
  • Anonymous pages: memory not backed by a file in storage (for example, allocated by mmap() with the MAP_ANONYMOUS flag set)
    • Dirty pages: can be moved to zRAM and compressed by kswapd to increase free memory

As the system actively manages RAM, the ratio of free to used pages changes continuously.

Low-memory management

Android has two main mechanisms for handling low-memory conditions: the kernel swap daemon and the low-memory termination daemon.

Kernel swap daemon

The kernel swap daemon (kswapd) is part of the Linux kernel and is used to convert used memory into available memory. This daemon becomes active when there is insufficient memory available on the device. The Linux kernel has upper and lower thresholds for available memory. When available memory drops below the lower threshold, kswapd begins reclaiming memory. When the available memory reaches the upper threshold, kswapd stops reclaiming memory.
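This watermark behavior can be sketched as a toy simulation (the thresholds and page counts below are invented for illustration; the real values are the kernel's per-zone watermarks):

```java
public class KswapdSketch {

    // Toy model: kswapd wakes when free pages drop below `low`
    // and reclaims pages until the `high` watermark is reached.
    static int reclaimIfNeeded(int freePages, int low, int high) {
        if (freePages >= low) {
            return freePages; // above the low watermark: kswapd stays asleep
        }
        // Below the low watermark: reclaim until free pages reach `high`.
        return high;
    }

    public static void main(String[] args) {
        System.out.println(reclaimIfNeeded(900, 500, 800)); // prints 900
        System.out.println(reclaimIfNeeded(300, 500, 800)); // prints 800
    }
}
```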

kswapd can reclaim clean pages by deleting them, because they are backed by storage and have not been modified. If a process tries to access a clean page that has been deleted, the page is copied from storage back into RAM. This operation is known as demand paging.


Figure 2. A storage-backed clean page is deleted

kswapd can move cached private dirty pages and anonymous dirty pages to zRAM, where they are compressed. This frees up available memory (free pages) in RAM. If a process tries to access a dirty page that is in zRAM, the page is decompressed and moved back into RAM. If the process associated with a compressed page is killed, the page is deleted from zRAM.

If the amount of available memory falls below a certain threshold, the system begins terminating processes.


Figure 3. Dirty pages are moved to zRAM and compressed

Low-memory kill daemon

Often, kswapd cannot free enough memory for the system. In this case, the system uses onTrimMemory() to notify apps that memory is low and that they should reduce their allocations. If that is not enough, the kernel starts terminating processes to free memory, using the low-memory kill daemon (LMK) to do so.

LMK uses an "out of memory" score called oom_adj_score to prioritize the running processes and determine which to terminate. Processes with the highest score are killed first. Background apps are killed first and system processes last. The table below lists the LMK scoring categories from high to low. The highest-scoring category, the items in the first row, will be killed first:


Figure 4. Android processes, with high scores at the top and low scores at the bottom

Here are descriptions of the various categories in the table above:

  • Background apps: Apps that have run before and are not currently active. LMK will kill background apps first, starting with the app with the highest oom_adj_score.
  • Previous application: Recently used background application. The previous app has a higher priority (lower score) than the background app because the user is more likely to switch to the previous app than to a background app.
  • Home screen app: This is the launcher app. Terminating the app will make the wallpaper disappear.
  • Services: services are started by apps and may include work such as syncing or uploading to the cloud.
  • Perceivable apps: Non-foreground apps that are visible to the user in some way, such as running a search process that displays a small interface or listening to music.
  • Foreground application: The application currently in use. Terminating a foreground app looks like the app has crashed and may indicate to the user that something is wrong with the device.
  • Persistent (services): core services for the device, such as telephony and Wi-Fi.
  • System: System process. After these processes are killed, the phone may appear to be about to restart.
  • Native: A very low-level process used by the system (for example, kswapd).

Device manufacturers can change the behavior of LMK.
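As a toy illustration of the selection order only (the process names and scores below are invented; real scores are visible in /proc/&lt;pid&gt;/oom_score_adj), LMK's rule reduces to "kill the process with the highest score first":

```java
import java.util.ArrayList;
import java.util.List;

public class LmkSketch {

    static class Proc {
        final String name;
        final int oomAdjScore; // higher score = killed sooner
        Proc(String name, int oomAdjScore) {
            this.name = name;
            this.oomAdjScore = oomAdjScore;
        }
    }

    // LMK terminates the process with the highest oom_adj_score first.
    static String pickVictim(List<Proc> procs) {
        Proc victim = procs.get(0);
        for (Proc p : procs) {
            if (p.oomAdjScore > victim.oomAdjScore) {
                victim = p;
            }
        }
        return victim.name;
    }

    public static void main(String[] args) {
        List<Proc> procs = new ArrayList<>();
        procs.add(new Proc("system_server", -900));        // system process
        procs.add(new Proc("foreground app", 0));          // currently in use
        procs.add(new Proc("previous app", 700));          // recently used
        procs.add(new Proc("cached background app", 950)); // first to go
        System.out.println(pickVictim(procs)); // prints "cached background app"
    }
}
```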

Calculate memory usage

The kernel keeps track of all memory pages in the system.


Figure 5. Pages used by different processes

The system must consider shared pages when determining the amount of memory used by an application. Applications accessing the same service or library will share memory pages. For example, Google Play services and a gaming app might share location services. This makes it difficult to determine how much memory belongs to the service as a whole and to each application.


Figure 6. Page shared by two applications (middle)

To determine your app's memory footprint, you can use any of the following metrics:

  • Resident Set Size (RSS): the number of shared and non-shared pages used by the app
  • Proportional Set Size (PSS): the number of non-shared pages used by the app plus an even split of the shared pages (for example, if three processes share 3 MB, each process gets 1 MB in its PSS)
  • Unique Set Size (USS): the number of non-shared pages used by the app (shared pages excluded)

PSS is useful if the operating system wants to know how much memory is used by all processes, since pages are only counted once. Calculating PSS takes a long time because the system needs to determine the pages that are shared and the number of processes sharing the pages. RSS does not distinguish between shared and unshared pages (and therefore is faster to calculate) and is better suited for tracking changes in memory allocations.
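The arithmetic behind the three metrics can be written out directly (the page counts below are invented to mirror the 3 MB three-process example above):

```java
public class MemMetrics {

    // privateKb: pages used only by this process
    // sharedKb:  pages shared among `sharers` processes in total (including this one)
    static long rss(long privateKb, long sharedKb)              { return privateKb + sharedKb; }
    static long pss(long privateKb, long sharedKb, int sharers) { return privateKb + sharedKb / sharers; }
    static long uss(long privateKb)                             { return privateKb; }

    public static void main(String[] args) {
        // 2 MB private, 3 MB shared among 3 processes
        System.out.println(rss(2048, 3072));    // prints 5120: counts shared pages fully
        System.out.println(pss(2048, 3072, 3)); // prints 3072: 2048 + 3072/3
        System.out.println(uss(2048));          // prints 2048: private pages only
    }
}
```

This is why summing RSS across processes over-counts shared pages, while summing PSS gives a sensible system-wide total.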

Managing application memory (how to use memory rationally)

Random access memory (RAM) is a valuable resource in any software development environment, but in mobile operating systems, where physical memory is often limited, RAM is even more valuable. Although both the Android Runtime (ART) and the Dalvik virtual machine perform routine garbage collection, this does not mean you can ignore where and when your app allocates and frees memory. You still need to avoid introducing memory leaks (usually caused by holding object references in static member variables) and to release object references at the appropriate time (as defined by lifecycle callbacks).

Monitor available memory and memory usage

We need to find memory usage issues in our app before we can fix them. The Memory Profiler in Android Studio can help find and diagnose memory issues by letting you:

  1. Understand how your application allocates memory over time. The Memory Profiler displays a real-time graph of your app's memory usage, the number of allocated Java objects, and when garbage collection events occur.
  2. Initiate garbage collection events and take a snapshot of the Java heap while the application is running.
  3. Record your app's memory allocations, then inspect all allocated objects, view the stack trace for each allocation, and jump to the corresponding code in the Android Studio editor.

Free memory in response to events

As described in the Android memory management overview above, Android can reclaim memory from an app in several ways, or terminate the app entirely if necessary, freeing up memory for critical tasks. To help balance system memory and avoid the system needing to terminate your app's process, you can implement the ComponentCallbacks2 interface in your Activity classes. The provided onTrimMemory() callback method allows your app to listen for memory-related events while in the foreground or background, and then release objects in response to app lifecycle events or system events that indicate the system needs to reclaim memory.

For example, you can implement onTrimMemory() callbacks in response to different memory-related events as follows:

    import android.content.ComponentCallbacks2;
    // Other import statements ...

    public class MainActivity extends AppCompatActivity
            implements ComponentCallbacks2 {

        // Other activity code ...

        /**
         * Release memory when the UI becomes hidden or when system
         * memory is running low.
         * @param level the memory-related event that was raised
         */
        @Override
        public void onTrimMemory(int level) {
            switch (level) {

                case ComponentCallbacks2.TRIM_MEMORY_UI_HIDDEN:
                    /*
                       The UI is hidden: release any UI objects that
                       currently hold memory.
                    */
                    break;

                case ComponentCallbacks2.TRIM_MEMORY_RUNNING_MODERATE:
                case ComponentCallbacks2.TRIM_MEMORY_RUNNING_LOW:
                case ComponentCallbacks2.TRIM_MEMORY_RUNNING_CRITICAL:
                    /*
                       The device is running low on memory while the app is
                       running. At TRIM_MEMORY_RUNNING_CRITICAL the system
                       will begin killing background processes.
                    */
                    break;

                case ComponentCallbacks2.TRIM_MEMORY_BACKGROUND:
                case ComponentCallbacks2.TRIM_MEMORY_MODERATE:
                case ComponentCallbacks2.TRIM_MEMORY_COMPLETE:
                    /*
                       The app is on the LRU list. At TRIM_MEMORY_COMPLETE
                       this process will be one of the first to be terminated.
                    */
                    break;

                default:
                    /*
                       The app received an unrecognized memory level value.
                       Treat this as a generic low-memory message and release
                       non-critical data.
                    */
                    break;
            }
        }
    }

The onTrimMemory() callback was added in Android 4.0 (API level 14). For earlier versions, you can use onLowMemory(), which is roughly equivalent to the TRIM_MEMORY_COMPLETE event.

How much memory should be used

To allow multiple processes to run simultaneously, Android sets a hard limit on the heap size allocated for each application. The exact heap size limit for a device varies depending on how much RAM is available overall to the device. If the application has reached the heap capacity limit and attempts to allocate more memory, the system throws OutOfMemoryError.

To avoid running out of memory, you can query the system to determine how much heap space is currently available on the device by calling getMemoryInfo(). It returns an ActivityManager.MemoryInfo object that provides information about the device's current memory state, including available memory, total memory, and a memory threshold (if memory falls below this level, the system begins terminating processes). The ActivityManager.MemoryInfo object also provides a simple boolean field, lowMemory, that indicates whether the device is running low on memory.

    public void doSomethingMemoryIntensive() {
        // Before doing anything memory intensive, check whether the
        // device is already in a low-memory state.
        ActivityManager.MemoryInfo memoryInfo = getAvailableMemory();

        if (!memoryInfo.lowMemory) {
            // Safe to do memory-intensive work ...
        }
    }

    // Get a MemoryInfo object describing the device's current memory status.
    private ActivityManager.MemoryInfo getAvailableMemory() {
        ActivityManager activityManager = (ActivityManager) this.getSystemService(ACTIVITY_SERVICE);
        ActivityManager.MemoryInfo memoryInfo = new ActivityManager.MemoryInfo();
        activityManager.getMemoryInfo(memoryInfo);
        return memoryInfo;
    }

Use more memory-efficient code structures

Certain Android features, Java classes, and code structures tend to use more memory than others. You can choose more efficient alternatives in your code to keep your app's memory usage as low as possible.

Use services with caution

Leaving a service running when it is not needed is one of the worst memory-management mistakes an Android app can make. If your app needs a service to perform work in the background, do not keep it running unless it has a job to do, and be careful to stop the service when its task is complete. Otherwise you may inadvertently cause a memory leak.

After starting a service, the system prefers to keep the service's process running. This behavior can be very expensive for service processes because once a portion of RAM is used by a service, it is no longer available to other processes. This reduces the number of cached processes the system can keep in the LRU cache, making application switching less efficient. This can even lead to system thrashing when memory is tight and the system cannot maintain enough processes to host all the services currently running.

Persistence services should generally be avoided because they place persistent demands on available memory. It is recommended to use alternative implementations such as JobScheduler.

If a service must be used, the best way to limit its lifespan is to use an IntentService, which ends itself as soon as it finishes handling the intent that started it.

Use optimized data containers

Some classes provided by the programming language are not optimized for mobile devices. For example, a regular HashMap implementation can be very memory inefficient because each mapping requires a separate entry object.

The Android framework includes several optimized data containers, including SparseArray, SparseBooleanArray, and LongSparseArray. For example, the SparseArray classes are more efficient because they avoid the system's need to autobox the keys (and sometimes the values), which creates an additional one or two objects per entry.
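To see why this helps, here is a deliberately stripped-down sketch of the idea behind SparseArray (not the real Android implementation): keys live in a sorted primitive int[] and are located by binary search, so no Integer object is boxed per entry:

```java
import java.util.Arrays;

public class IntMapSketch {
    private int[] keys = new int[4];
    private Object[] values = new Object[4];
    private int size = 0;

    // Insert or replace. Keys stay sorted so lookups can binary-search.
    public void put(int key, Object value) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        if (i >= 0) { values[i] = value; return; }
        i = ~i; // binarySearch encodes the insertion point as ~index
        if (size == keys.length) {
            keys = Arrays.copyOf(keys, size * 2);
            values = Arrays.copyOf(values, size * 2);
        }
        System.arraycopy(keys, i, keys, i + 1, size - i);
        System.arraycopy(values, i, values, i + 1, size - i);
        keys[i] = key;
        values[i] = value;
        size++;
    }

    public Object get(int key) {
        int i = Arrays.binarySearch(keys, 0, size, key);
        return i >= 0 ? values[i] : null;
    }

    public static void main(String[] args) {
        IntMapSketch m = new IntMapSketch();
        m.put(42, "answer");
        System.out.println(m.get(42)); // prints "answer"
    }
}
```

The trade-off is the same as in the real class: lookups are O(log n) instead of O(1), which is usually a fine price for small maps with primitive keys.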

Be careful with code abstractions

Developers often think of abstraction simply as a good programming practice because abstraction increases code flexibility and maintainability. However, abstractions come at a high cost: typically they require more code to execute, more time and more RAM to map the code into memory. Therefore, abstraction should be avoided if it does not bring significant benefits.

Use a lite version of Protobuf for serialized data

Protobuf is a language- and platform-independent and extensible mechanism designed by Google for serializing structured data. The mechanism is similar to XML, but smaller, faster, and simpler. If you decide to use Protobuf for your data, you should always use the lite version of Protobuf in your client code. Regular Protobuf generates extremely verbose code, which can cause problems for your app, such as increased RAM usage, significantly larger APK sizes, and slower execution.

Avoid memory thrashing

Garbage collection events generally do not affect application performance. However, if many garbage collection events occur in a short period of time, frame time can be exhausted quickly. The more time the system spends on garbage collection, the less time it has available to handle other tasks.

Often, "memory churn" causes a large number of garbage collection events. Memory churn describes the number of temporary objects allocated within a given period of time.

For example, you might allocate multiple temporary objects inside a for loop, or create a new Paint or Bitmap object inside a view's onDraw() function. In both cases, the app creates many objects quickly. These allocations can rapidly exhaust the available memory in the young generation, forcing a garbage collection event.

Of course, we have to find the places in the code where memory churn is high before we can fix them. To do this, use the Memory Profiler in Android Studio.

Once you've identified problem areas in your code, try reducing the number of allocations in performance-critical areas. Consider moving some code logic out of the inner loop, or designing an object pool.
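A minimal generic object pool along those lines might look like this (an illustrative sketch, not a production-ready or thread-safe pool):

```java
import java.util.ArrayDeque;
import java.util.function.Supplier;

public class ObjectPool<T> {
    private final ArrayDeque<T> free = new ArrayDeque<>();
    private final Supplier<T> factory;

    public ObjectPool(Supplier<T> factory) {
        this.factory = factory;
    }

    // Reuse a pooled instance when one is available instead of allocating.
    public T acquire() {
        T obj = free.poll();
        return obj != null ? obj : factory.get();
    }

    // Return an instance to the pool so later acquire() calls can reuse it.
    public void release(T obj) {
        free.push(obj);
    }

    public static void main(String[] args) {
        ObjectPool<StringBuilder> pool = new ObjectPool<>(StringBuilder::new);
        StringBuilder sb = pool.acquire(); // allocated: the pool was empty
        pool.release(sb);
        StringBuilder again = pool.acquire(); // same instance, no new allocation
        System.out.println(sb == again); // prints "true"
    }
}
```

On Android, androidx.core.util.Pools offers ready-made simple pools with the same acquire/release pattern.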

Remove resources and libraries that take up a lot of memory

Certain resources and libraries in your code can eat up memory. The overall size of your APK, including third-party libraries or embedded resources, may affect your app's memory consumption. Reduce your app's memory consumption by removing any redundant, unnecessary, or bloated components, resources, or libraries from your code.

Reduce overall APK size

You can significantly reduce your app's memory usage by reducing its overall size. Bitmap size, resources, animation frames, and third-party libraries all affect APK size. Android Studio and the Android SDK provide several tools to help reduce the size of resources and external dependencies. These tools support modern code shrinking methods such as R8 compilation. (Android Studio 3.3 and lower use ProGuard rather than R8.)

Avoid reflection-based dependency injection

Dependency injection frameworks simplify code and provide an adaptive environment for testing and other configuration changes.

If you plan to use a dependency injection framework in your app, avoid frameworks that rely on reflection. Reflection-based initialization can consume significant CPU cycles and RAM, and can cause a noticeable delay when the app starts.

Hilt/Dagger does not use reflection to scan the application's code, but adopts a static compile-time implementation, which means that it can be used in Android applications without unnecessary runtime penalty or memory consumption.

Use external libraries with caution

External library code is often not written for mobile environments and may be inefficient when running on mobile clients. If you decide to use an external library, you may need to optimize the library for mobile devices. Before deciding whether to use the library, plan ahead and analyze the library in terms of code size and RAM consumption.

Even some libraries that are optimized for mobile devices can cause problems due to differences in implementation. For example, one library might use a lite version of Protobuf, while another library uses Micro Protobuf, resulting in two different Protobuf implementations for your application. Different implementations of logging, profiling, image loading frameworks and caching, and many other unexpected features can cause this.

While ProGuard can remove unused APIs and resources with the appropriate rules, it cannot remove a library's large internal dependencies. The features your app needs from such libraries may require lower-level dependencies. This becomes particularly problematic if you use an Activity subclass from a library (which tends to pull in many dependencies), or if the library uses reflection (which is common and means you must spend a lot of time manually tuning ProGuard to make it work).

Also, avoid using shared libraries for just one or two functions out of dozens. This can result in a lot of code and overhead that is never used. When considering whether to use a library, look for an implementation that closely matches your needs. Otherwise, you can decide to create the implementation yourself.

Finally

I have compiled a collection of Android interview questions. In addition to the questions above, it also covers Java basics, collections, multithreading, virtual machines, reflection, generics, concurrent programming, Android's four major components, asynchronous tasks and the message mechanism, UI drawing, performance tuning, SDN, third-party frameworks, design patterns, Kotlin, computer networks, the system startup process, Dart, Flutter, algorithms and data structures, NDK, H.264, H.265, audio codecs, FFmpeg, OpenMax, OpenCV, and OpenGL ES.

Origin blog.csdn.net/datian1234/article/details/134813551