[Android Framework Series] Chapter 2 Binder Mechanism Encyclopedia

1 Introduction to Binder

1.1 What is Binder

  Binder is the primary cross-process communication mechanism in Android. In the Android system, every application is composed of one or more of the four major components: Activity, Service, BroadcastReceiver, and ContentProvider. Whenever these components communicate across processes, the underlying transport is the Binder IPC mechanism; for example, when an Activity in process A wants to talk to a Service in process B, it must rely on Binder IPC. Beyond that, the Android system architecture as a whole uses Binder extensively as its IPC (inter-process communication) solution. Of course, a few other IPC methods are also used, such as sockets for Zygote communication.
  The Binder driver and ServiceManager are roughly analogous to the router and DNS of a network protocol. Binder implements IPC on top of mmap, so data only needs to be copied once during a transfer. The Binder framework includes Binder entities such as BinderProxy, BpBinder, and BBinder, wrappers around the Binder driver operations such as ProcessState and IPCThreadState, plus the internal structures and command handling of the Binder driver itself. It runs through the Java and native layers as a whole and involves both user mode and kernel mode. Together with Service, AIDL, mmap, and the Binder driver device, it is a fairly large and cumbersome mechanism.

1.2 Why use Binder

  Why use Binder? Good question. Binder is a cross-process communication method unique to Android, but the bottom layer of the Android system is Linux, so Linux's cross-process communication methods can also be used in Android. What other ways does Linux offer to communicate across processes?
  Traditional Linux inter-process communication methods include pipes, semaphores, sockets, and shared memory; Binder is a communication method unique to the Android system.

  1. Binder: only one copy is needed; based on a C/S architecture; easy to use; the system assigns a UID to each app and supports both real-name and anonymous services, so it is more secure.
  2. Shared memory: no copy is needed, but control is complex and usability is poor; it relies on upper-layer protocols, and its access points are open and therefore insecure.
  3. Socket: two copies are needed. Based on a C/S architecture; as a general-purpose interface its transmission efficiency is low and overhead is high; it relies on upper-layer protocols, and its access points are open and therefore insecure.
  4. Pipe: two copies are needed. Not a C/S architecture; it is a one-to-one communication model in which the output of one program is linked directly to the input of another. Linux has two main kinds of pipes: unnamed pipes (pipe) and named pipes (FIFO). Unnamed pipes can only be used between related processes (parent-child or sibling processes); they are half-duplex, slow, and limited in capacity. Named pipes are an improvement on unnamed pipes that enable communication between two unrelated processes; a named pipe is visible in the file system, but its size is always 0.
  5. Semaphore: unlike the other inter-process communication methods, it mainly provides an access-control mechanism for resources shared between processes. A process decides whether it may access a shared resource based on the semaphore, and can also modify it. Besides access control, it can be used for process synchronization; it is mainly a mechanism for solving synchronization and mutual-exclusion problems between processes and threads.

The Android system needs an IPC method that is both efficient and secure, so Binder fits best: it requires only one copy, its efficiency is second only to shared memory, and it adopts the traditional C/S structure.

2 Binder Implementation Mechanism

2.1 Memory concept

First, you need to be clear about several memory-related concepts in the Linux system: virtual memory, user space, kernel space, and mmap.

2.1.1 Virtual memory

Simply put, virtual memory is a memory-management technique: a virtual, logically existing storage space that presents fragmented, discontinuous physical memory as a logically continuous, complete address space. The process of initializing virtual memory is called memory mapping. The memory our user programs operate on is actually the real physical memory, accessed through virtual memory.
[figure: virtual addresses mapped through virtual memory to physical memory]
A simple way to understand it: a virtual address points into virtual memory, and virtual memory points to the corresponding physical memory. Following the principle of program locality, the active parts of a program on disk are loaded into physical memory, and the MMU translates the virtual addresses used by the CPU into physical memory addresses. The mapping between virtual memory and physical memory is done in units of pages, and a common page size is 4 KB.
The valid bit and disk address in a virtual memory page table entry combine into three states:

  1. Unallocated (valid bit 0, disk address null): does not yet exist on disk.
  2. Uncached (valid bit 0, disk address set): exists on disk but not in memory.
  3. Cached (valid bit 1, disk address set): exists both on disk and in memory.
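As a concrete illustration of the 4 KB paging described above, the sketch below (plain Java, not Android code; the class name `PageMath` is invented for illustration) splits a virtual address into a virtual page number and a page offset:

```java
public class PageMath {
    public static final int PAGE_SIZE = 4096; // common 4 KB page
    public static final int OFFSET_BITS = 12; // log2(4096)

    // Virtual page number: which page the address falls in.
    public static long pageNumber(long virtualAddress) {
        return virtualAddress >>> OFFSET_BITS;
    }

    // Offset within the page; this part is unchanged by address translation.
    public static long pageOffset(long virtualAddress) {
        return virtualAddress & (PAGE_SIZE - 1);
    }

    public static void main(String[] args) {
        long va = 0x12345L; // 74565 = 18 * 4096 + 837
        System.out.println(pageNumber(va)); // 18
        System.out.println(pageOffset(va)); // 837
    }
}
```

The MMU's page table maps only the page number; the offset is carried over unchanged into the physical address.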

2.1.1.1 MMU

MMU is the abbreviation of Memory Management Unit, sometimes called the paged memory management unit (PMMU). It is a piece of computer hardware responsible for handling memory-access requests from the central processing unit (CPU). Its functions include translating virtual addresses to physical addresses (that is, virtual memory management), memory protection, and control of the CPU cache. In simpler computer architectures it is also responsible for bus arbitration and bank switching (especially on 8-bit systems).
[figure: the MMU between the CPU and physical memory/disk]
As shown in the figure above, the MMU is effectively a bridge between the CPU and physical memory/disk; its main responsibility is to translate the virtual addresses issued by the CPU into physical addresses. Pages move frequently between physical memory and disk, driven by the principle of program locality.

2.1.2 User Space, Kernel Space

The operating system divides virtual memory into two parts: user space and kernel space. User space is where user program code runs, and kernel space is where kernel code runs. For safety they are isolated from each other, so even if a user program crashes, the kernel is not affected.
On a 32-bit system the addressable space is 2^32 = 4 GB in total, with 1 GB of kernel space and 3 GB of user space. On a 64-bit system, the low bits 0-47 form the valid variable address; addresses whose high bits 48-63 are all 0 belong to user space, and addresses whose high bits are all 1 belong to kernel space.

2.1.3 MMap (Memory Mapping) memory mapping

Linux initializes the contents of a virtual memory area by associating it with an object on disk, a process called memory mapping. Specifically, a file or other object is mapped into the process's address space, establishing a correspondence between the file's disk address and the process's virtual address space. Once this mapping exists, the process can read and write that section of memory through pointers, and the system automatically writes the changes back to the corresponding file on disk.

Because user-space virtual memory is mapped to the disk file in this way, operations on the mapped region of virtual memory are automatically written back to the disk file.
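The same idea is available from standard Java via FileChannel.map(). The sketch below (plain Java NIO, not Android-specific) maps a temporary file into the process's address space, writes through the mapping without any explicit write() call, and reads the bytes back through ordinary file I/O:

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

public class MmapDemo {
    public static String roundTrip() throws IOException {
        Path file = Files.createTempFile("mmap-demo", ".bin");
        byte[] payload = "hello binder".getBytes(StandardCharsets.UTF_8);
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            // Map the file into this process's virtual address space.
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, payload.length);
            buf.put(payload); // write through the mapping, no explicit write() call
            buf.force();      // flush dirty pages back to the file
        }
        String result = new String(Files.readAllBytes(file), StandardCharsets.UTF_8);
        Files.delete(file);
        return result;
    }

    public static void main(String[] args) throws IOException {
        System.out.println(roundTrip()); // prints "hello binder"
    }
}
```

Writes land in the mapped pages and are flushed back to the file, which is exactly the write-back behavior described above.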

2.2 Binder principle

Binder communication adopts a C/S architecture. From the component point of view it includes the Client, the Server, the ServiceManager, and the Binder driver, where the ServiceManager manages the various services in the system. The architecture diagram is as follows:
[figure: Binder C/S architecture: Client, Server, ServiceManager, and Binder driver]
As you can see, both registering a service and obtaining a service go through the ServiceManager. Note that the ServiceManager here is the native-layer ServiceManager (C++), not the framework-layer ServiceManager (Java). The ServiceManager is the steward of the whole Binder communication mechanism and the daemon process of Binder, Android's inter-process communication mechanism. To master the Binder mechanism you first need to understand how the system starts the ServiceManager; after the ServiceManager starts, the Client and Server must each obtain the ServiceManager interface before they can communicate with services.
The communication among Client, Server, and ServiceManager in the diagram is itself based on the Binder mechanism, and since it is based on Binder it is also a C/S architecture; each of the three major steps in the diagram has its own client and server ends.

  1. Registering a service (addService): a Server process first registers its Service with the ServiceManager. In this step the Server is the client and the ServiceManager is the server.
  2. Obtaining a service (getService): before using a Service, a Client process first obtains the corresponding Service from the ServiceManager. In this step the Client is the client and the ServiceManager is the server.
  3. Using a service: the Client establishes a communication path with the Server process hosting the Service according to the obtained Service information, and can then interact with the Service directly. In this step the Client is the client and the Server is the server.
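The three steps above can be sketched with a toy, in-process registry (plain Java; the name `ToyServiceManager` and its shape are invented for illustration and are not Android APIs; real Binder crosses process boundaries through the driver):

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;

public class ToyServiceManager {
    // ServiceManager's core job: name -> service handle bookkeeping.
    private final Map<String, Object> services = new ConcurrentHashMap<>();

    // Step 1 (addService): the Server registers; ServiceManager acts as server.
    public void addService(String name, Object service) {
        services.put(name, service);
    }

    // Step 2 (getService): the Client looks the service up by name.
    public Object getService(String name) {
        return services.get(name); // null if never registered
    }

    public static void main(String[] args) {
        ToyServiceManager sm = new ToyServiceManager();
        sm.addService("clock", (java.util.function.Supplier<Long>) System::currentTimeMillis);
        // Step 3: the Client talks to the "Server" directly via the handle it got back.
        @SuppressWarnings("unchecked")
        java.util.function.Supplier<Long> clock =
                (java.util.function.Supplier<Long>) sm.getService("clock");
        System.out.println(clock.get() > 0); // prints "true"
    }
}
```

In real Binder the "handle" returned in step 2 is not a direct object reference but a BinderProxy/BpBinder pair routed through the driver.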

The interactions between Client, Server, and ServiceManager in the figure are drawn with dotted lines because they do not interact with each other directly; instead, each interacts with the Binder driver to realize IPC. The Binder driver lives in kernel space, while the Client, Server, and ServiceManager live in user space. The Binder driver and ServiceManager can be regarded as the infrastructure of the Android platform, whereas Client and Server belong to the Android application layer: developers only need to implement their own Client and Server and can carry out IPC directly on top of Android's basic platform architecture.

3 Binder Overall Architecture

[figure: Binder overall architecture across Java, native, and kernel layers]

  1. Client: the calling (client) process.
  2. Server: the called (server) process.
  3. BpBinder and BinderProxy: the remote Binder proxies, at the native layer and the Java layer respectively. BpBinder internally holds a Binder handle value, handle.
  4. BBinder: the server side receives the message from the client side through IPCThreadState and then calls back into the onTransact() implementation.
  5. ProcessState: a process singleton, responsible for opening the Binder driver device and performing mmap.
  6. IPCThreadState: a thread singleton, responsible for the concrete command communication with the Binder driver.

Communication process :

  1. The Proxy initiates a transact() call; the data is packed into a Parcel and passed down layer by layer to BpBinder, which calls IPCThreadState's transact() method with its handle value; IPCThreadState then executes the concrete Binder driver commands.
  2. On the Server side, roughly: after IPCThreadState receives the Client's request from the Binder driver, it passes it up layer by layer and finally calls back into the Stub's onTransact() method.

The remote Binder entities that a Client obtains through the ServiceManager or AMS are generally wrapped in a layer of Proxy, such as ServiceManagerProxy or the Proxy class generated by AIDL; the wrapped remote Binder entity is a BinderProxy.
​Of course, this does not describe every IPC path. For example, when the ServiceManager acts as a Server there is no upper-layer wrapper, nor does it use IPCThreadState: after initialization it communicates with the Binder driver directly through the binder_loop() method.

4 Binder four-layer structure

[figure: Binder four-layer structure]
Binder plays a pivotal role in the entire Android system. At the native layer there is a complete C/S architecture for Binder communication, with BpBinder as the client and BBinder as the server. The Java layer mirrors this with its own Binder C/S architecture: the Binder class represents the server end and the BinderProxy class the client end. It corresponds to the native-layer binder through JNI, and the Java-layer binder functionality is ultimately carried out by the native binder. The class diagrams below show all the classes and methods involved in the Binder architecture from the kernel through the native, JNI, and framework layers.
[figure: Binder class diagram (kernel, native, JNI, and framework layers)]


4.1 Binder Framework layer

At the framework layer, the binder uses JNI to call into the binder architecture of the native (C/C++) layer, thereby providing services to upper-layer applications. We know that at the native layer the binder is a C/S architecture, divided into the Bn end (server) and the Bp end (client). The Java layer is very similar in naming and architecture, and likewise implements a set of IPC communication machinery.

4.1.1 Stub and Proxy mechanism

Stub is the server side and receives data.
Proxy is the client side and sends data.
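A minimal sketch of this pattern in plain Java (the `Transactor` interface, `EchoService`, and the transaction code below are invented for illustration; real Android code marshals arguments through Parcel and routes transact() through the Binder driver):

```java
public class StubProxyDemo {
    // Stands in for IBinder.transact(): an opaque, code-based message channel.
    public interface Transactor {
        String transact(int code, String data);
    }

    public static final int TRANSACTION_echo = 1;

    // Stub: server side; unpacks the request and dispatches, like onTransact().
    public static abstract class Stub implements Transactor {
        @Override
        public String transact(int code, String data) {
            if (code == TRANSACTION_echo) {
                return echo(data);
            }
            throw new IllegalArgumentException("unknown code " + code);
        }
        public abstract String echo(String msg); // the "service" implementation
    }

    // A concrete service, playing the role of the Server.
    public static class EchoService extends Stub {
        @Override
        public String echo(String msg) { return "echo: " + msg; }
    }

    // Proxy: client side; packs the request and sends it to the remote end.
    public static class Proxy {
        private final Transactor mRemote; // plays the role of BinderProxy's mRemote
        public Proxy(Transactor remote) { mRemote = remote; }
        public String echo(String msg) {
            return mRemote.transact(TRANSACTION_echo, msg);
        }
    }

    public static void main(String[] args) {
        // In-process for illustration; real Binder crosses process boundaries.
        Proxy client = new Proxy(new EchoService());
        System.out.println(client.echo("hi")); // prints "echo: hi"
    }
}
```

The caller only sees the Proxy's typed method; the code-plus-payload dispatch inside Stub.transact() is what onTransact() does in the generated AIDL classes shown later.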

4.1.2 How does the ServiceManager Java object manage services

The ServiceManager is the steward of the Binder mechanism, managing the various services of the Android system. A Service registers with the ServiceManager; when a Client needs to call the Service, it first looks the Service up through the ServiceManager and then communicates with the Service directly. These services exist at both the Java layer and the native layer. The native layer realizes the interaction between a Service and the ServiceManager through BpServiceManager/BnServiceManager.

4.1.3 Communication mechanism between Client and Service

Similar to the native layer, at the Java layer the AIDL tool generates the IServiceManager interface class from IServiceManager.aidl, including the subclasses IServiceManager.Stub and IServiceManager.Stub.Proxy. Both implement the IServiceManager interface; the former represents the server end and the latter the client end.

    public interface IServiceManager extends android.os.IInterface
    {
        public static abstract class Stub extends android.os.Binder implements android.os.IServiceManager
        {
            public static android.os.IServiceManager asInterface(android.os.IBinder obj)
            {
                if (obj == null) {
                    return null;
                }
                android.os.IInterface iin = obj.queryLocalInterface(DESCRIPTOR);
                if (iin != null && iin instanceof IServiceManager) {
                    return (IServiceManager) iin;
                }
                return new IServiceManager.Stub.Proxy(obj);
            }

            public boolean onTransact(int code, Parcel data, Parcel reply, int flags) throws RemoteException
            {
                String descriptor = DESCRIPTOR;
                ...
                switch (code) {
                    case TRANSACTION_getService:
                    {
                        String _arg0;
                        _arg0 = data.readString();
                        data.enforceNoDataAvail();
                        IBinder _result = getService(_arg0);
                        reply.writeNoException();
                        reply.writeStrongBinder(_result);
                        break;
                    }
                    ...
                }
            }
        }

        private static class Proxy implements IServiceManager
        {
            private IBinder mRemote;

            Proxy(IBinder remote) {
                mRemote = remote;
            }

            @Override
            public IBinder getService(String name) throws RemoteException
            {
                Parcel _data = Parcel.obtain();
                Parcel _reply = Parcel.obtain();
                IBinder _result;
                try {
                    _data.writeInterfaceToken(DESCRIPTOR);
                    _data.writeString(name);
                    boolean _status = mRemote.transact(Stub.TRANSACTION_getService, _data, _reply, 0);
                    _reply.readException();
                    _result = _reply.readStrongBinder();
                } finally {
                    _reply.recycle();
                    _data.recycle();
                }
                return _result;
            }

            @Override
            public IBinder checkService(String name) throws RemoteException
            {
                Parcel _data = Parcel.obtain();
                Parcel _reply = Parcel.obtain();
                IBinder _result;
                try {
                    _data.writeInterfaceToken(DESCRIPTOR);
                    _data.writeString(name);
                    boolean _status = mRemote.transact(Stub.TRANSACTION_checkService, _data, _reply, 0);
                    _reply.readException();
                    _result = _reply.readStrongBinder();
                } finally {
                    _reply.recycle();
                    _data.recycle();
                }
                return _result;
            }
        }
    }

The Java layer uses the ServiceManager class to implement the client side of servicemanager. It obtains the proxy class for servicemanager through getIServiceManager(), and BinderInternal.getContextObject() creates a BinderProxy object through JNI.
BinderProxy is a Java class, so why is it created through JNI? The purpose is to create a BpBinder object at the same time as the BinderProxy object: BinderProxy and BpBinder are in one-to-one correspondence. Likewise, when a Java-layer Binder object is created, a BBinder object is also created through JNI. We can think of BinderProxy/Binder as wrappers around BpBinder/BBinder; it is the latter that do the actual work.

public final class ServiceManager {

    private static IServiceManager sServiceManager;

    private static IServiceManager getIServiceManager() {
        if (sServiceManager != null) {
            return sServiceManager;
        }
        sServiceManager = ServiceManagerNative
                .asInterface(Binder.allowBlocking(BinderInternal.getContextObject()));
        return sServiceManager;
    }

    public static IBinder getService(String name) {
        try {
            IBinder service = sCache.get(name);
            if (service != null) {
                return service;
            } else {
                return Binder.allowBlocking(rawGetService(name));
            }
        } catch (RemoteException e) {
            ...
        }
        return null;
    }

    private static IBinder rawGetService(String name) throws RemoteException {
        final IBinder binder = getIServiceManager().getService(name);
        ...
        return binder;
    }
}

At this point, the proxy class obtained for servicemanager is ServiceManagerProxy (wrapping a BinderProxy).

public final class ServiceManagerNative {

    private ServiceManagerNative() {
    }

    public static IServiceManager asInterface(IBinder obj) {
        if (obj == null) {
            return null;
        }
        return new ServiceManagerProxy(obj);
    }

    class ServiceManagerProxy implements IServiceManager {

        public ServiceManagerProxy(IBinder remote) {
            mRemote = remote;
            // the actual proxy class for servicemanager
            mServiceManager = IServiceManager.Stub.asInterface(remote);
        }

        public IBinder getService(String name) throws RemoteException {
            return mServiceManager.checkService(name);
        }
        ...
        private IBinder mRemote;
        private IServiceManager mServiceManager;
    }
}

In this way, when we call methods such as ServiceManager.addService()/getService(), we go through BinderProxy: transact() -> transactNative().

public boolean transact(int code, Parcel data, Parcel reply, int flags) throws RemoteException {
    ...
    try {
        return transactNative(code, data, reply, flags);
    } finally {
        ...
    }
}

public native boolean transactNative(int code, Parcel data, Parcel reply, int flags) throws RemoteException;

Finally we enter the native layer through JNI: BpBinder::transact() -> IPCThreadState::transact() -> writeTransactionData() -> waitForResponse(), entering the binder driver layer to communicate with the servicemanager process and wait for the result to be returned.

Note that the server side of servicemanager here is the native-layer BnServiceManager; the Java-layer IServiceManager.Stub is not used.

After servicemanager receives the request, it performs the corresponding operations: IPCThreadState::executeCommand() -> BBinder::transact() -> BnServiceManager::onTransact(). The concrete implementation is in ServiceManager.cpp, which will not be described in detail here.

::android::status_t BnServiceManager::onTransact(uint32_t _aidl_code, const ::android::Parcel& _aidl_data, ::android::Parcel* _aidl_reply, uint32_t _aidl_flags) {
    ::android::status_t _aidl_ret_status = ::android::OK;
    switch (_aidl_code) {
    case BnServiceManager::TRANSACTION_getService: {
        ::std::string in_name;
        ::android::sp<::android::IBinder> _aidl_return;
        ...
        _aidl_ret_status = _aidl_data.readUtf8FromUtf16(&in_name);
        ...
        ::android::binder::Status _aidl_status(getService(in_name, &_aidl_return));
        ...
        _aidl_ret_status = _aidl_reply->writeStrongBinder(_aidl_return);
        break;
    }
    case BnServiceManager::TRANSACTION_checkService:
    ...
    }
}

After the reply data is assembled, control returns to IPCThreadState::executeCommand(), which executes sendReply():

status_t IPCThreadState::sendReply(const Parcel& reply, uint32_t flags) {
    status_t err;
    status_t statusBuffer;
    err = writeTransactionData(BC_REPLY, flags, -1, 0, reply, &statusBuffer);
    if (err < NO_ERROR) return err;

    return waitForResponse(nullptr, nullptr);
}

The result of the request is returned to the client through the binder driver, as shown below, and the returned result is written into Parcel.

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult) {
    ...
    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break;
        ...
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        ...
        case BR_REPLY:
            binder_transaction_data tr;
            err = mIn.read(&tr, sizeof(tr));
            ...
            if (reply) {
                if ((tr.flags & TF_STATUS_CODE) == 0) {
                    reply->ipcSetDataReference(reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer), tr.data_size, reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets), tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
                } else {
                    ...
                }
            } else {
                ...
            }
            goto finish;
        }
    }
finish:
    if (err != NO_ERROR) {
        ...
    }
    return err;
}

Control then returns to the IServiceManager.Stub.Proxy method, which obtains the result of the communication with servicemanager via Parcel.readStrongBinder().

Summary: when servicemanager manages Java-layer services, it currently uses only IServiceManager.Stub.Proxy as the proxy class and does not use IServiceManager.Stub as the service class; the service class is still the native-layer BnServiceManager.

4.2 Binder jni layer

4.2.1 How does the android_util_binder jni interface implement the native method

service registration

int register_android_os_Binder(JNIEnv* env)
{
    if (int_register_android_os_Binder(env) < 0)
        return -1;
    if (int_register_android_os_BinderInternal(env) < 0)
        return -1;
    if (int_register_android_os_BinderProxy(env) < 0)
        return -1;

    jclass clazz = FindClassOrDie(env, "android/util/Log");
    gLogOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gLogOffsets.mLogE = GetStaticMethodIDOrDie(env, clazz, "e",
            "(Ljava/lang/String;Ljava/lang/String;Ljava/lang/Throwable;)I");

    clazz = FindClassOrDie(env, "android/os/ParcelFileDescriptor");
    gParcelFileDescriptorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gParcelFileDescriptorOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>",
                                                                 "(Ljava/io/FileDescriptor;)V");

    clazz = FindClassOrDie(env, "android/os/StrictMode");
    gStrictModeCallbackOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gStrictModeCallbackOffsets.mCallback = GetStaticMethodIDOrDie(env, clazz,
            "onBinderStrictModePolicyChange", "(I)V");

    return 0;
}

static int int_register_android_os_Binder(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, kBinderPathName);

    gBinderOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
    gBinderOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");

    return RegisterMethodsOrDie(
        env, kBinderPathName,
        gBinderMethods, NELEM(gBinderMethods));
}

static int int_register_android_os_BinderInternal(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, kBinderInternalPathName);

    gBinderInternalOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderInternalOffsets.mForceGc = GetStaticMethodIDOrDie(env, clazz, "forceBinderGc", "()V");

    return RegisterMethodsOrDie(
        env, kBinderInternalPathName,
        gBinderInternalMethods, NELEM(gBinderInternalMethods));
}

static int int_register_android_os_BinderProxy(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, "java/lang/Error");
    gErrorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);

    clazz = FindClassOrDie(env, kBinderProxyPathName);
    gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderProxyOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
    gBinderProxyOffsets.mSendDeathNotice = GetStaticMethodIDOrDie(env, clazz, "sendDeathNotice",
            "(Landroid/os/IBinder$DeathRecipient;)V");

    gBinderProxyOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
    gBinderProxyOffsets.mSelf = GetFieldIDOrDie(env, clazz, "mSelf",
                                                "Ljava/lang/ref/WeakReference;");
    gBinderProxyOffsets.mOrgue = GetFieldIDOrDie(env, clazz, "mOrgue", "J");

    clazz = FindClassOrDie(env, "java/lang/Class");
    gClassOffsets.mGetName = GetMethodIDOrDie(env, clazz, "getName", "()Ljava/lang/String;");

    return RegisterMethodsOrDie(
        env, kBinderProxyPathName,
        gBinderProxyMethods, NELEM(gBinderProxyMethods));
}

The main functions of the int_register_android_os_Binder method:
Through gBinderOffsets, it saves information about the Java-layer Binder class, providing a channel for the JNI layer to access the Java layer.
Through RegisterMethodsOrDie, it registers the gBinderMethods array to complete the mapping, providing a channel for the Java layer to access the JNI layer.
In other words, this process builds the bridge that lets the Binder classes call each other between the native layer and the framework layer.

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}

BinderInternal.java has a native method getContextObject(); JNI dispatches it to the function above. ProcessState::self()->getContextObject() is equivalent to new BpBinder(0). ProcessState::self() invokes the ProcessState constructor, which does several things:

  1. Opens the binder device and sets the maximum number of binder threads for the service to 15.
  2. Uses mmap to map memory (of size 1 MB - 8 KB).

jobject javaObjectForIBinder(JNIEnv* env, const sp<IBinder>& val)
{
    if (val == NULL) return NULL;

    if (val->checkSubclass(&gBinderOffsets)) {
        // One of our own!
        jobject object = static_cast<JavaBBinder*>(val.get())->object();
        LOGDEATH("objectForBinder %p: it's our own %p!\n", val.get(), object);
        return object;
    }

    // For the rest of the function we will hold this lock, to serialize
    // looking/creation/destruction of Java proxies for native Binder proxies.
    AutoMutex _l(mProxyLock);

    // Someone else's...  do we know about it?
    jobject object = (jobject)val->findObject(&gBinderProxyOffsets);
    if (object != NULL) {
        jobject res = jniGetReferent(env, object);
        if (res != NULL) {
            ALOGV("objectForBinder %p: found existing %p!\n", val.get(), res);
            return res;
        }
        LOGDEATH("Proxy object %p of IBinder %p no longer in working set!!!", object, val.get());
        android_atomic_dec(&gNumProxyRefs);
        val->detachObject(&gBinderProxyOffsets);
        env->DeleteGlobalRef(object);
    }

    object = env->NewObject(gBinderProxyOffsets.mClass, gBinderProxyOffsets.mConstructor);
    if (object != NULL) {
        LOGDEATH("objectForBinder %p: created new proxy %p !\n", val.get(), object);
        // The proxy holds a reference to the native object.
        env->SetLongField(object, gBinderProxyOffsets.mObject, (jlong)val.get());
        val->incStrong((void*)javaObjectForIBinder);

        // The native object needs to hold a weak reference back to the
        // proxy, so we can retrieve the same proxy if it is still active.
        jobject refObject = env->NewGlobalRef(
                env->GetObjectField(object, gBinderProxyOffsets.mSelf));
        val->attachObject(&gBinderProxyOffsets, refObject,
                jnienv_to_javavm(env), proxy_cleanup);

        // Also remember the death recipients registered on this proxy
        sp<DeathRecipientList> drl = new DeathRecipientList;
        drl->incStrong((void*)javaObjectForIBinder);
        env->SetLongField(object, gBinderProxyOffsets.mOrgue, reinterpret_cast<jlong>(drl.get()));

        // Note that a new object reference has been created.
        android_atomic_inc(&gNumProxyRefs);
        incRefsCreated(env);
    }

    return object;
}

This generates a BinderProxy (Java) object from the BpBinder (C++) object. The main job is to create a BinderProxy object and save the address of the BpBinder object into the BinderProxy.mObject member variable. So ServiceManagerNative.asInterface(BinderInternal.getContextObject()) is equivalent to ServiceManagerNative.asInterface(new BinderProxy()).

sp<IBinder> ibinderForJavaObject(JNIEnv* env, jobject obj)
{
    if (obj == NULL) return NULL;

    if (env->IsInstanceOf(obj, gBinderOffsets.mClass)) {
        JavaBBinderHolder* jbh = (JavaBBinderHolder*)
            env->GetLongField(obj, gBinderOffsets.mObject);
        return jbh != NULL ? jbh->get(env, obj) : NULL;
    }

    if (env->IsInstanceOf(obj, gBinderProxyOffsets.mClass)) {
        return (IBinder*)
            env->GetLongField(obj, gBinderProxyOffsets.mObject);
    }

    ALOGW("ibinderForJavaObject: %p is not a Binder object", obj);
    return NULL;
}

Generate a JavaBBinderHolder (C++) object from a Binder (Java) object. The main job is to create a JavaBBinderHolder object and save its address into the Binder.mObject member variable.

class JavaBBinderHolder : public RefBase
{
public:
    sp<JavaBBinder> get(JNIEnv* env, jobject obj)
    {
        AutoMutex _l(mLock);
        sp<JavaBBinder> b = mBinder.promote();
        if (b == NULL) {
            b = new JavaBBinder(env, obj);
            mBinder = b;
            ALOGV("Creating JavaBinder %p (refs %p) for Object %p, weakCount=%" PRId32 "\n",
                 b.get(), b->getWeakRefs(), obj, b->getWeakRefs()->getWeakCount());
        }

        return b;
    }

    sp<JavaBBinder> getExisting()
    {
        AutoMutex _l(mLock);
        return mBinder.promote();
    }

private:
    Mutex           mLock;
    wp<JavaBBinder> mBinder;
};

JavaBBinderHolder has a member variable mBinder that holds the currently created JavaBBinder object. It is a wp (weak pointer), so the referenced object may already have been destroyed; before each use it must be promoted and checked for existence.

    JavaBBinder(JNIEnv* env, jobject object)
        : mVM(jnienv_to_javavm(env)), mObject(env->NewGlobalRef(object))
    {
        ALOGV("Creating JavaBBinder %p\n", this);
        android_atomic_inc(&gNumLocalRefs);
        incRefsCreated(env);
    }

This creates a JavaBBinder, which inherits from BBinder. data.writeStrongBinder(service) is ultimately equivalent to parcel->writeStrongBinder(new JavaBBinder(env, obj)).

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    if (dataObj == NULL) {
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }

    Parcel* data = parcelForJavaObject(env, dataObj);
    if (data == NULL) {
        return JNI_FALSE;
    }
    Parcel* reply = parcelForJavaObject(env, replyObj);
    if (reply == NULL && replyObj != NULL) {
        return JNI_FALSE;
    }

    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }

    ALOGV("Java code calling transact on %p in Java object %p with code %" PRId32 "\n",
            target, obj, code);


    bool time_binder_calls;
    int64_t start_millis;
    if (kEnableBinderSample) {
        // Only log the binder call duration for things on the Java-level main thread.
        // But if we don't
        time_binder_calls = should_time_binder_calls();

        if (time_binder_calls) {
            start_millis = uptimeMillis();
        }
    }

    //printf("Transact from Java code to %p sending: ", target); data->print();
    status_t err = target->transact(code, *data, reply, flags);
    //if (reply) printf("Transact from Java code to %p received: ", target); reply->print();

    if (kEnableBinderSample) {
        if (time_binder_calls) {
            conditionally_log_binder_call(start_millis, target, code);
        }
    }

    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }

    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}

The Java-layer BinderProxy.transact() is ultimately completed by the native-layer BpBinder::transact(). The BpBinder execution flow is described in detail in the registration service (addService) section of Native Binder. Note that this method may throw RemoteException.

4.2.2 AndroidRuntime JNI dispatch mechanism

4.3 Binder native layer

4.3.1 IPC vs RPC

  IPC (Inter-Process Communication) refers to the exchange of data between processes. Android's foundation is Linux, and for security reasons Linux does not allow one process to directly manipulate another's data — this is process isolation.

  RPC (Remote Procedure Call) is a request for a service from a program on a remote computer over the network, made under a protocol that does not require understanding the underlying network technology. RPC makes it easier to build applications, including networked, distributed, multi-program ones. In short: the client sends procedure requests to the server, and the server returns the corresponding results based on the parameters sent. RPC — the client calling the server's interface — is interface-oriented programming.

  Android provides an interprocess communication (IPC) mechanism using Remote Procedure Calls (RPC), whereby methods called by an Activity or other application component are executed remotely (in another process) and all results are returned to the caller. This requires breaking down the method call and its data to the extent that the operating system can understand it, transferring it from the local process and address space to the remote process and address space, where the call is then reassembled and executed. The return value is transmitted back in the opposite direction. Android provides all the code needed to perform these IPC transactions, so you only need to concentrate on defining and implementing the RPC programming interface. To perform IPC, the application must bind to the service using bindService(). In other words, the concrete embodiment of RPC in Android is the server returning its computed result to the client (an Activity or other component) through bindService() and the service's onBind().

4.3.2 How the service_manager service starts along with the system

The ServiceManager process is created in the init process, so we start the analysis from the main() of the init process:

// File: system/core/init/main.cpp

int main(int argc, char** argv) {
    ...
    if (!strcmp(argv[1], "second_stage")) {
        // TODO: this branch is taken when booting the second stage
        return SecondStageMain(argc, argv);
    }
}
 
 
int SecondStageMain(int argc, char** argv) {
    ...
    // Holds the parsed contents
    ActionManager& am = ActionManager::GetInstance();
    ServiceList& sm = ServiceList::GetInstance();
    // Parses the /system/core/rootdir/init.rc script file
    LoadBootScripts(am, sm);

    // Loop over the commands in the init.rc script; once done, wait
    while (true) {
        if (!(prop_waiter_state.MightBeWaiting() || Service::is_exec_service_running()))
        {
            // Walk each action and run the handler for each of its commands
            am.ExecuteOneCommand();
        }
    }
}
 
 
static void LoadBootScripts(ActionManager& action_manager, ServiceList& service_list) {
    // Create the parser
    Parser parser = CreateParser(action_manager, service_list);

    std::string bootscript = GetProperty("ro.boot.init_rc", "");
    if (bootscript.empty()) {
        // Parse init.rc; this is the on-device path, the same file as
        // system/core/init/init.rc in the source tree
        parser.ParseConfig("/system/etc/init/hw/init.rc");
        if (!parser.ParseConfig("/system/etc/init")) {
            late_import_paths.emplace_back("/system/etc/init");
        }
        // late_import is available only in Q and earlier release. As we don't
        // have system_ext in those versions, skip late_import for system_ext.
        parser.ParseConfig("/system_ext/etc/init");
        if (!parser.ParseConfig("/product/etc/init")) {
            late_import_paths.emplace_back("/product/etc/init");
        }
        if (!parser.ParseConfig("/odm/etc/init")) {
            late_import_paths.emplace_back("/odm/etc/init");
        }
        if (!parser.ParseConfig("/vendor/etc/init")) {
            late_import_paths.emplace_back("/vendor/etc/init");
        }
    } else {
        parser.ParseConfig(bootscript);
    }
}

The following is the part related to starting the servicemanager process in init.rc:

on init
    # Start essential services.
    start servicemanager  # start the servicemanager process
    start hwservicemanager
    start vndservicemanager

The detailed configuration for starting the servicemanager process lives in frameworks/native/cmds/servicemanager/servicemanager.rc:

# This script describes the details of starting the servicemanager process.
# "service" tells the init process to create a process named servicemanager
# whose executable is /system/bin/servicemanager
# (this file can be found on the device)
service servicemanager /system/bin/servicemanager
    class core animation
    # the process runs as the system user
    user system
    group system readproc
    # servicemanager is a critical service of the system: critical services do
    # not exit; if one exits, it is restarted, and a system reboot restarts it.
    # The processes listed under onrestart below depend on servicemanager.
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart audioserver
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart inputflinger
    onrestart restart drm
    onrestart restart cameraserver
    onrestart restart keystore
    onrestart restart gatekeeperd
    onrestart restart thermalservice
    writepid /dev/cpuset/system-background/tasks
    shutdown critical

Summary: ServiceManager is an independent process; it is created by the init process, and it is created before the zygote process.

4.3.3 loop (main method of ServiceManager)

The main method of ServiceManager:

int main(int argc, char** argv)
{
    struct binder_state *bs;
    union selinux_callback cb;
    char *driver;

    if (argc > 1) {
        driver = argv[1];
    } else {
        driver = "/dev/binder";
    }
    // Open /dev/binder and initialize; internally binder_open() creates a
    // memory mapping via mmap() with mapsize = 128*1024, i.e. 128 KB
    bs = binder_open(driver, 128*1024);
    if (!bs) {
        return -1;
    }
    // Calls binder_become_context_manager(bs) in binder.c.
    // Purpose: make this process the manager process of the Binder framework.
    // Internally it is very simple: it sends BINDER_SET_CONTEXT_MGR to the driver via ioctl.
    // Once binder_become_context_manager returns, the manager is ready.
    if (binder_become_context_manager(bs)) {
        return -1;
    }

    cb.func_audit = audit_callback;
    selinux_set_callback(SELINUX_CB_AUDIT, cb);
    cb.func_log = selinux_log_callback;
    selinux_set_callback(SELINUX_CB_LOG, cb);

#ifdef VENDORSERVICEMANAGER
    sehandle = selinux_android_vendor_service_context_handle();
#else
    sehandle = selinux_android_service_context_handle();
#endif
    selinux_status_open(true);

    if (sehandle == NULL) {
        ALOGE("SELinux: Failed to acquire sehandle. Aborting.\n");
        abort();
    }

    if (getcon(&service_manager_context) != 0) {
        ALOGE("SELinux: Failed to acquire service_manager context. Aborting.\n");
        abort();
    }

    // Finally, call binder_loop to start the message loop, passing in the
    // svcmgr_handler function pointer. binder_loop() reads commands from the
    // driver, parses them, and dispatches them to svcmgr_handler.
    binder_loop(bs, svcmgr_handler);

    return 0;
}

During the startup of the ServiceManager process, four main things are done:

1) Open the binder driver and complete the memory mapping
2) Add itself as the "manager" to the map collection in servicemanager
3) Call binder_become_context_manager to register as the binder driver's context manager
4) Call binder_loop to wait for client requests in a loop (it sets the callback, enters an infinite loop, and processes requests from clients)

4.3.4 Principles of Kernel Space and User Space

In Linux, the operating system and drivers run in kernel space, while applications run in user space. The two cannot simply pass data via pointers: because of Linux's virtual memory mechanism, when kernel space dereferences a user-space pointer, the corresponding data may not be in memory (it may have been swapped out). User-space memory uses paged segmentation, and kernel space has its own rules.
Simply put, kernel space is where the Linux kernel runs and user space is where user programs run. For safety they are isolated, so even if a user program crashes, the kernel is not affected. Kernel space can execute arbitrary instructions and access all of the system's resources; user space can only perform simple computations and cannot directly access system resources — it must issue requests to the kernel through the system interface (system calls).

4.3.5 misc_register

The misc_register function is used to register a misc device along with the operations it exposes.
A driver file is a special kind of file: opening an ordinary file just opens it, but in Linux, operating on a driver file invokes the corresponding method of the driver. For example, calling open("/dev/binder") ends up executing the binder_open method in the driver source code.

4.4 Binder driver layer

4.4.1 Basics

Handles and callback functions in detail
Data structures: binder_node, binder_ref, binder_proc, binder_thread

4.4.2 Data interaction

In Binder IPC, the sending process first issues a BC_XXX command to the Binder driver; the driver does some processing and then forwards the command to the target process as the corresponding BR_XXX command.
If there is a return value, the replying process likewise first sends the result to the Binder driver as BC_REPLY, and the driver forwards it as BR_REPLY.
PS: commands issued by the Driver start with BR_, while commands sent to the Driver start with BC_.

4.4.3 Three Logical Processes

Service registration, service discovery, and service invocation — let's look at these three major Binder flows at the C layer:

Service registration checking and service discovery
service_manager.c

int svcmgr_handler(struct binder_state *bs,
                   struct binder_transaction_data *txn,
                   struct binder_io *msg,
                   struct binder_io *reply)
{
    struct svcinfo *si;
    uint16_t *s;
    size_t len;
    uint32_t handle;
    uint32_t strict_policy;
    int allow_isolated;

    //ALOGI("target=%p code=%d pid=%d uid=%d\n",
    //      (void*) txn->target.ptr, txn->code, txn->sender_pid, txn->sender_euid);

    if (txn->target.ptr != BINDER_SERVICE_MANAGER)
        return -1;

    if (txn->code == PING_TRANSACTION)
        return 0;

    // Equivalent to Parcel::enforceInterface(), reading the RPC
    // header with the strict mode policy mask and the interface name.
    // Note that we ignore the strict_policy and don't propagate it
    // further (since we do no outbound RPCs anyway).
    strict_policy = bio_get_uint32(msg);
    s = bio_get_string16(msg, &len);
    if (s == NULL) {
        return -1;
    }

    if ((len != (sizeof(svcmgr_id) / 2)) ||
        memcmp(svcmgr_id, s, sizeof(svcmgr_id))) {
        fprintf(stderr,"invalid id %s\n", str8(s, len));
        return -1;
    }

    if (sehandle && selinux_status_updated() > 0) {
        struct selabel_handle *tmp_sehandle = selinux_android_service_context_handle();
        if (tmp_sehandle) {
            selabel_close(sehandle);
            sehandle = tmp_sehandle;
        }
    }

    switch(txn->code) {
    case SVC_MGR_GET_SERVICE:
    case SVC_MGR_CHECK_SERVICE:
        // service name
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        // look up the corresponding service by name
        handle = do_find_service(s, len, txn->sender_euid, txn->sender_pid);
        if (!handle)
            break;
        bio_put_ref(reply, handle);
        return 0;

    case SVC_MGR_ADD_SERVICE:
        // service name
        s = bio_get_string16(msg, &len);
        if (s == NULL) {
            return -1;
        }
        handle = bio_get_ref(msg);
        allow_isolated = bio_get_uint32(msg) ? 1 : 0;
        // register the given service
        if (do_add_service(bs, s, len, handle, txn->sender_euid,
            allow_isolated, txn->sender_pid))
            return -1;
        break;

    case SVC_MGR_LIST_SERVICES: {
        uint32_t n = bio_get_uint32(msg);

        if (!svc_can_list(txn->sender_pid, txn->sender_euid)) {
            ALOGE("list_service() uid=%d - PERMISSION DENIED\n",
                    txn->sender_euid);
            return -1;
        }
        si = svclist;
        while ((n-- > 0) && si)
            si = si->next;
        if (si) {
            bio_put_string16(reply, si->name);
            return 0;
        }
        return -1;
    }
    default:
        ALOGE("unknown code %d\n", txn->code);
        return -1;
    }

    bio_put_uint32(reply, 0);
    return 0;
}

struct svcinfo
{
    struct svcinfo *next;
    // the service's handle value
    uint32_t handle;
    struct binder_death death;
    int allow_isolated;
    // length of the name
    size_t len;
    // service name
    uint16_t name[0];
};

This method's functions: registering a service, querying a service, and listing all services. Each service is represented by an svcinfo structure, and its handle value is determined by the service's own process side during service registration.

Register service
service_manager.c

int do_add_service(struct binder_state *bs,
                   const uint16_t *s, size_t len,
                   uint32_t handle, uid_t uid, int allow_isolated,
                   pid_t spid)
{
    struct svcinfo *si;

    //ALOGI("add_service('%s',%x,%s) uid=%d\n", str8(s, len), handle,
    //        allow_isolated ? "allow_isolated" : "!allow_isolated", uid);

    if (!handle || (len == 0) || (len > 127))
        return -1;

    // permission check
    if (!svc_can_register(s, len, spid, uid)) {
        ALOGE("add_service('%s',%x) uid=%d - PERMISSION DENIED\n",
             str8(s, len), handle, uid);
        return -1;
    }

    // service lookup
    si = find_svc(s, len);
    if (si) {
        if (si->handle) {
            ALOGE("add_service('%s',%x) uid=%d - ALREADY REGISTERED, OVERRIDE\n",
                 str8(s, len), handle, uid);
            // the service is already registered: release it first
            svcinfo_death(bs, si);
        }
        si->handle = handle;
    } else {
        si = malloc(sizeof(*si) + (len + 1) * sizeof(uint16_t));
        if (!si) {
            // out of memory: cannot allocate enough for the record
            ALOGE("add_service('%s',%x) uid=%d - OUT OF MEMORY\n",
                 str8(s, len), handle, uid);
            return -1;
        }
        si->handle = handle;
        si->len = len;
        // copy the service information
        memcpy(si->name, s, (len + 1) * sizeof(uint16_t));
        si->name[len] = '\0';
        si->death.func = (void*) svcinfo_death;
        si->death.ptr = si;
        si->allow_isolated = allow_isolated;
        // svclist holds all registered services
        si->next = svclist;
        svclist = si;
    }

    // send a BC_ACQUIRE command targeting this handle to the binder driver via ioctl
    binder_acquire(bs, handle);
    // send a BC_REQUEST_DEATH_NOTIFICATION command to the binder driver via ioctl,
    // mainly used for cleanup work such as freeing memory
    binder_link_to_death(bs, handle, &si->death);
    return 0;
}

static int svc_can_register(const uint16_t *name, size_t name_len, pid_t spid, uid_t uid)
{
    const char *perm = "add";

    if (multiuser_get_app_id(uid) >= AID_APP) {
        return 0; /* Don't allow apps to register services */
    }

    // check that the selinux permission is satisfied
    return check_mac_perms_from_lookup(spid, uid, perm, str8(name, name_len)) ? 1 : 0;
}

void svcinfo_death(struct binder_state *bs, void *ptr)
{
    struct svcinfo *si = (struct svcinfo* ) ptr;

    ALOGI("service '%s' died\n", str8(si->name, si->len));
    if (si->handle) {
        binder_release(bs, si->handle);
        si->handle = 0;
    }
}

binder.c

uint32_t bio_get_ref(struct binder_io *bio)
{
    struct flat_binder_object *obj;

    obj = _bio_get_obj(bio);
    if (!obj)
        return 0;

    if (obj->type == BINDER_TYPE_HANDLE)
        return obj->handle;

    return 0;
}

void binder_link_to_death(struct binder_state *bs, uint32_t target, struct binder_death *death)
{
    struct {
        uint32_t cmd;
        struct binder_handle_cookie payload;
    } __attribute__((packed)) data;

    data.cmd = BC_REQUEST_DEATH_NOTIFICATION;
    data.payload.handle = target;
    data.payload.cookie = (uintptr_t) death;
    binder_write(bs, &data, sizeof(data));
}

Service registration:
Registering a service is divided into three parts:
svc_can_register: permission check — verifies that the selinux permission is satisfied;
find_svc: service lookup — finds a matching service by name;
svcinfo_death: service release — when a registered service with the same name is found, its record is cleared first; the current service is then added to the service list svclist.

After binder_write enters the Binder driver, it goes directly into binder_thread_write to process the BC_REQUEST_DEATH_NOTIFICATION command.

Service discovery
service_manager.c

uint32_t do_find_service(const uint16_t *s, size_t len, uid_t uid, pid_t spid)
{
    struct svcinfo *si = find_svc(s, len);

    if (!si || !si->handle) {
        return 0;
    }

    if (!si->allow_isolated) {
        // If this service doesn't allow access from isolated processes,
        // then check the uid to see if it is isolated.
        uid_t appid = uid % AID_USER;
        if (appid >= AID_ISOLATED_START && appid <= AID_ISOLATED_END) {
            return 0;
        }
    }

    if (!svc_can_find(s, len, spid, uid)) {
        return 0;
    }

    return si->handle;
}

struct svcinfo *find_svc(const uint16_t *s16, size_t len)
{
    struct svcinfo *si;

    for (si = svclist; si; si = si->next) {
        if ((len == si->len) &&
            !memcmp(s16, si->name, len * sizeof(uint16_t))) {
            return si;
        }
    }
    return NULL;
}

Service discovery: query the target service and return its handle.
find_svc traverses the svclist service list by service name to see whether the service has been registered. If the service exists in svclist, the corresponding svcinfo is returned; otherwise NULL is returned.
Once the service's handle is found, bio_put_ref(reply, handle) is called to flatten the handle into the reply.

binder.c

void bio_put_ref(struct binder_io *bio, uint32_t handle)
{
    struct flat_binder_object *obj;

    if (handle)
        obj = bio_alloc_obj(bio);
    else
        obj = bio_alloc(bio, sizeof(*obj));

    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_HANDLE;
    obj->handle = handle;
    obj->cookie = 0;
}

void bio_put_obj(struct binder_io *bio, void *ptr)
{
    struct flat_binder_object *obj;

    obj = bio_alloc_obj(bio);
    if (!obj)
        return;

    obj->flags = 0x7f | FLAT_BINDER_FLAG_ACCEPTS_FDS;
    obj->type = BINDER_TYPE_BINDER;
    obj->binder = (uintptr_t)ptr;
    obj->cookie = 0;
}

static void *bio_alloc(struct binder_io *bio, size_t size)
{
    size = (size + 3) & (~3);
    if (size > bio->data_avail) {
        bio->flags |= BIO_F_OVERFLOW;
        return NULL;
    } else {
        void *ptr = bio->data;
        bio->data += size;
        bio->data_avail -= size;
        return ptr;
    }
}

Service call
The upper layer obtains a service via checkService() or getService(); invoking one of the service's methods ultimately goes through the underlying transact() call.

BpBinder.cpp

status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

The Binder proxy class's transact() method hands the real work over to IPCThreadState::transact().

IPCThreadState.cpp

static pthread_mutex_t gTLSMutex = PTHREAD_MUTEX_INITIALIZER;
static bool gHaveTLS = false;
static pthread_key_t gTLS = 0;
static bool gShutdown = false;
static bool gDisableBackgroundScheduling = false;

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }

    if (gShutdown) return NULL;

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS) {
        if (pthread_key_create(&gTLS, threadDestructor) != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            return NULL;
        }
        gHaveTLS = true;
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mMyThreadId(androidGetTid()),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}

TLS stands for thread-local storage. Each thread has its own TLS — a private space not shared between threads — whose contents are read and written via the pthread_getspecific/pthread_setspecific functions. self() retrieves the IPCThreadState object stored in the calling thread's TLS.

Each thread has an IPCThreadState, and each IPCThreadState has an mIn and an mOut. The member variable mProcess saves the ProcessState variable (only one per process).
mIn is used to receive data from the Binder device; its default capacity is 256 bytes.
mOut is used to store data to be sent to the Binder device; its default capacity is also 256 bytes.

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    // check the data for errors
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }
    
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        // transmit the data
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    
    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            // wait for the response
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif
        
        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        // a one-way binder call that needs no reply takes this branch
        err = waitForResponse(NULL, NULL);
    }
    
    return err;
}

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    // data is the Parcel object recording the Media service information
    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    // cmd = BC_TRANSACTION
    mOut.writeInt32(cmd);
    // write the binder_transaction_data payload
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        
        cmd = mIn.readInt32();
        
        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;
        
        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;
        
        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;
        
        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }
    
    return err;
}

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }
    
    binder_write_read bwr;
    
    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();
    
    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;
    
    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
    	// Fill in the receive buffer info; any data received later lands directly in mIn.
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }
    
    // Return immediately if there is nothing to do.
    // If both the read buffer and the write buffer are empty, return immediately.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(HAVE_ANDROID_OS)
		// Communicate with the Binder driver via repeated ioctl read/write calls.
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    // If interrupted by a signal (EINTR), retry.
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
                        << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }
    
    return err;
}

The value of handle identifies the destination. For service registration the destination process is the servicemanager: handle = 0 corresponds to the binder_context_mgr_node object, i.e. the binder entity object of the servicemanager. The binder_transaction_data structure is the data structure of binder-driver communication; the process writes the Binder request code BC_TRANSACTION followed by a binder_transaction_data structure into mOut.

The binder_write_read structure is used to exchange data with the binder device: talkWithDriver() calls ioctl(mDriverFD, BINDER_WRITE_READ, &bwr), which is where the actual read/write interaction with the Binder driver takes place. The query-service request is first delivered to the servicemanager process (arriving there as BR_TRANSACTION).
When the servicemanager receives the command, it executes do_find_service() to look up the handle of the requested service, then responds to the initiator through binder_send_reply(), sending the BC_REPLY protocol; the driver's binder_transaction() then inserts the reply transaction into the todo queue of the service requester.
Next, look at the binder_transaction process.
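The mechanics of this exchange can be sketched in plain C with a mock in place of the real ioctl. The field names follow the kernel's binder_write_read; the "driver" behavior below is an assumption for illustration only. After the call, talkWithDriver()'s cleanup is mirrored: consumed commands are dropped from mOut, and mIn is sized to what was received.

```c
#include <stdint.h>
#include <string.h>

/* Field names follow the kernel UAPI header; the "driver" below is a mock. */
struct binder_write_read {
    uint64_t write_size;     /* bytes of commands available in write_buffer */
    uint64_t write_consumed; /* bytes the driver actually consumed */
    uint64_t write_buffer;   /* address of mOut's data */
    uint64_t read_size;      /* capacity of read_buffer (mIn) */
    uint64_t read_consumed;  /* bytes of return commands the driver produced */
    uint64_t read_buffer;    /* address of mIn's data */
};

/* Mock driver: consume every written byte, produce 4 bytes of reply. */
static void mock_ioctl_write_read(struct binder_write_read *bwr) {
    bwr->write_consumed = bwr->write_size;
    bwr->read_consumed = bwr->read_size >= 4 ? 4 : 0;
}

/* Mirrors the tail of talkWithDriver(): after the ioctl, drop consumed
 * commands from mOut and size mIn to the received return commands. */
static size_t exchange(char *m_out, size_t out_size,
                       char *m_in, size_t in_cap, size_t *in_size) {
    struct binder_write_read bwr = {
        .write_size = out_size, .write_buffer = (uintptr_t)m_out,
        .read_size = in_cap,    .read_buffer = (uintptr_t)m_in,
    };
    mock_ioctl_write_read(&bwr);
    size_t remaining = out_size - (size_t)bwr.write_consumed;
    memmove(m_out, m_out + bwr.write_consumed, remaining); /* mOut.remove() */
    *in_size = (size_t)bwr.read_consumed;                  /* mIn.setDataSize() */
    return remaining;
}
```

The real driver may consume only part of the write buffer (for example when the target's buffer space runs out), which is why the partial-consume branch exists in talkWithDriver().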

binder.c

static void binder_transaction(struct binder_proc *proc,
				   struct binder_thread *thread,
				   struct binder_transaction_data *tr, int reply)
{
	// Resolved below from the transaction data:
	struct binder_transaction *t;
	struct binder_work *tcomplete;
	binder_size_t *offp, *off_end;
	// target process
	struct binder_proc *target_proc;
	// target thread
	struct binder_thread *target_thread = NULL;
	// target binder node
	struct binder_node *target_node = NULL;
	// target todo queue
	struct list_head *target_list;
	// target wait queue
	wait_queue_head_t *target_wait;
	struct binder_transaction *in_reply_to = NULL;
	struct binder_transaction_log_entry *e;
	uint32_t return_error;

	e = binder_transaction_log_add(&binder_transaction_log);
	e->call_type = reply ? 2 : !!(tr->flags & TF_ONE_WAY);
	e->from_proc = proc->pid;
	e->from_thread = thread->pid;
	e->target_handle = tr->target.handle;
	e->data_size = tr->data_size;
	e->offsets_size = tr->offsets_size;

	if (reply) {
		in_reply_to = thread->transaction_stack;
		if (in_reply_to == NULL) {
			binder_user_error("%d:%d got reply transaction with no transaction stack\n",
					  proc->pid, thread->pid);
			return_error = BR_FAILED_REPLY;
			goto err_empty_call_stack;
		}
		binder_set_nice(in_reply_to->saved_priority);
		if (in_reply_to->to_thread != thread) {
			binder_user_error("%d:%d got reply transaction with bad transaction stack, transaction %d has target %d:%d\n",
				proc->pid, thread->pid, in_reply_to->debug_id,
				in_reply_to->to_proc ?
				in_reply_to->to_proc->pid : 0,
				in_reply_to->to_thread ?
				in_reply_to->to_thread->pid : 0);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			goto err_bad_call_stack;
		}
		thread->transaction_stack = in_reply_to->to_parent;
		target_thread = in_reply_to->from;
		if (target_thread == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		if (target_thread->transaction_stack != in_reply_to) {
			binder_user_error("%d:%d got reply transaction with bad target transaction stack %d, expected %d\n",
				proc->pid, thread->pid,
				target_thread->transaction_stack ?
				target_thread->transaction_stack->debug_id : 0,
				in_reply_to->debug_id);
			return_error = BR_FAILED_REPLY;
			in_reply_to = NULL;
			target_thread = NULL;
			goto err_dead_binder;
		}
		target_proc = target_thread->proc;
	} else {
		if (tr->target.handle) {
			struct binder_ref *ref;

			ref = binder_get_ref(proc, tr->target.handle);
			if (ref == NULL) {
				binder_user_error("%d:%d got transaction to invalid handle\n",
					proc->pid, thread->pid);
				return_error = BR_FAILED_REPLY;
				goto err_invalid_target_handle;
			}

			target_node = ref->node;
		} else {
			target_node = binder_context_mgr_node;
			if (target_node == NULL) {
				return_error = BR_DEAD_REPLY;
				goto err_no_context_mgr_node;
			}
		}
		e->to_node = target_node->debug_id;
		target_proc = target_node->proc;
		if (target_proc == NULL) {
			return_error = BR_DEAD_REPLY;
			goto err_dead_binder;
		}
		if (!(tr->flags & TF_ONE_WAY) && thread->transaction_stack) {
			struct binder_transaction *tmp;

			tmp = thread->transaction_stack;
			if (tmp->to_thread != thread) {
				binder_user_error("%d:%d got new transaction with bad transaction stack, transaction %d has target %d:%d\n",
					proc->pid, thread->pid, tmp->debug_id,
					tmp->to_proc ? tmp->to_proc->pid : 0,
					tmp->to_thread ?
					tmp->to_thread->pid : 0);
				return_error = BR_FAILED_REPLY;
				goto err_bad_call_stack;
			}
			while (tmp) {
				if (tmp->from && tmp->from->proc == target_proc)
					target_thread = tmp->from;
				tmp = tmp->from_parent;
			}
		}
	}
	if (target_thread) {
		e->to_thread = target_thread->pid;
		target_list = &target_thread->todo;
		target_wait = &target_thread->wait;
	} else {
		target_list = &target_proc->todo;
		target_wait = &target_proc->wait;
	}
	e->to_proc = target_proc->pid;

	/* TODO: reuse incoming transaction for reply */
	// Allocate the binder_transaction structure.
	t = kzalloc(sizeof(*t), GFP_KERNEL);
	if (t == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_t_failed;
	}
	binder_stats_created(BINDER_STAT_TRANSACTION);
	// Allocate the binder_work completion entry.
	tcomplete = kzalloc(sizeof(*tcomplete), GFP_KERNEL);
	if (tcomplete == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_alloc_tcomplete_failed;
	}
	binder_stats_created(BINDER_STAT_TRANSACTION_COMPLETE);

	t->debug_id = ++binder_last_id;
	e->debug_id = t->debug_id;

	if (reply)
		binder_debug(BINDER_DEBUG_TRANSACTION,
				 "%d:%d BC_REPLY %d -> %d:%d, data %016llx-%016llx size %lld-%lld\n",
				 proc->pid, thread->pid, t->debug_id,
				 target_proc->pid, target_thread->pid,
				 (u64)tr->data.ptr.buffer,
				 (u64)tr->data.ptr.offsets,
				 (u64)tr->data_size, (u64)tr->offsets_size);
	else
		binder_debug(BINDER_DEBUG_TRANSACTION,
				 "%d:%d BC_TRANSACTION %d -> %d - node %d, data %016llx-%016llx size %lld-%lld\n",
				 proc->pid, thread->pid, t->debug_id,
				 target_proc->pid, target_node->debug_id,
				 (u64)tr->data.ptr.buffer,
				 (u64)tr->data.ptr.offsets,
				 (u64)tr->data_size, (u64)tr->offsets_size);

	if (!reply && !(tr->flags & TF_ONE_WAY))
		t->from = thread;
	else
		t->from = NULL;
	t->sender_euid = task_euid(proc->tsk);
	t->to_proc = target_proc;
	t->to_thread = target_thread;
	t->code = tr->code;
	t->flags = tr->flags;
	t->priority = task_nice(current);

	trace_binder_transaction(reply, t, target_node);
// Allocate a buffer from target_proc's mapped memory area.
	t->buffer = binder_alloc_buf(target_proc, tr->data_size,
		tr->offsets_size, !reply && (t->flags & TF_ONE_WAY));
	if (t->buffer == NULL) {
		return_error = BR_FAILED_REPLY;
		goto err_binder_alloc_buf_failed;
	}
	t->buffer->allow_user_free = 0;
	t->buffer->debug_id = t->debug_id;
	t->buffer->transaction = t;
	t->buffer->target_node = target_node;
	trace_binder_transaction_alloc_buf(t->buffer);
	if (target_node)
		binder_inc_node(target_node, 1, 0, NULL);

	offp = (binder_size_t *)(t->buffer->data +
				 ALIGN(tr->data_size, sizeof(void *)));

	if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
			   tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("%d:%d got transaction with invalid data ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, (const void __user *)(uintptr_t)
			   tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (!IS_ALIGNED(tr->offsets_size, sizeof(binder_size_t))) {
		binder_user_error("%d:%d got transaction with invalid offsets size, %lld\n",
				proc->pid, thread->pid, (u64)tr->offsets_size);
		return_error = BR_FAILED_REPLY;
		goto err_bad_offset;
	}
	off_end = (void *)offp + tr->offsets_size;
	for (; offp < off_end; offp++) {
		struct flat_binder_object *fp;

		if (*offp > t->buffer->data_size - sizeof(*fp) ||
			t->buffer->data_size < sizeof(*fp) ||
			!IS_ALIGNED(*offp, sizeof(u32))) {
			binder_user_error("%d:%d got transaction with invalid offset, %lld\n",
					  proc->pid, thread->pid, (u64)*offp);
			return_error = BR_FAILED_REPLY;
			goto err_bad_offset;
		}
		fp = (struct flat_binder_object *)(t->buffer->data + *offp);
		switch (fp->type) {
		case BINDER_TYPE_BINDER:
		case BINDER_TYPE_WEAK_BINDER: {
			struct binder_ref *ref;
			struct binder_node *node = binder_get_node(proc, fp->binder);

			if (node == NULL) {
				node = binder_new_node(proc, fp->binder, fp->cookie);
				if (node == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_new_node_failed;
				}
				node->min_priority = fp->flags & FLAT_BINDER_FLAG_PRIORITY_MASK;
				node->accept_fds = !!(fp->flags & FLAT_BINDER_FLAG_ACCEPTS_FDS);
			}
			if (fp->cookie != node->cookie) {
				binder_user_error("%d:%d sending u%016llx node %d, cookie mismatch %016llx != %016llx\n",
					proc->pid, thread->pid,
					(u64)fp->binder, node->debug_id,
					(u64)fp->cookie, (u64)node->cookie);
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			ref = binder_get_ref_for_node(target_proc, node);
			if (ref == NULL) {
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_for_node_failed;
			}
			if (fp->type == BINDER_TYPE_BINDER)
				fp->type = BINDER_TYPE_HANDLE;
			else
				fp->type = BINDER_TYPE_WEAK_HANDLE;
			fp->handle = ref->desc;
			binder_inc_ref(ref, fp->type == BINDER_TYPE_HANDLE,
					   &thread->todo);

			trace_binder_transaction_node_to_ref(t, node, ref);
			binder_debug(BINDER_DEBUG_TRANSACTION,
					 "        node %d u%016llx -> ref %d desc %d\n",
					 node->debug_id, (u64)node->ptr,
					 ref->debug_id, ref->desc);
		} break;
		case BINDER_TYPE_HANDLE:
		case BINDER_TYPE_WEAK_HANDLE: {
			struct binder_ref *ref = binder_get_ref(proc, fp->handle);

			if (ref == NULL) {
				binder_user_error("%d:%d got transaction with invalid handle, %d\n",
						proc->pid,
						thread->pid, fp->handle);
				return_error = BR_FAILED_REPLY;
				goto err_binder_get_ref_failed;
			}
			if (ref->node->proc == target_proc) {
				if (fp->type == BINDER_TYPE_HANDLE)
					fp->type = BINDER_TYPE_BINDER;
				else
					fp->type = BINDER_TYPE_WEAK_BINDER;
				fp->binder = ref->node->ptr;
				fp->cookie = ref->node->cookie;
				binder_inc_node(ref->node, fp->type == BINDER_TYPE_BINDER, 0, NULL);
				trace_binder_transaction_ref_to_node(t, ref);
				binder_debug(BINDER_DEBUG_TRANSACTION,
						 "        ref %d desc %d -> node %d u%016llx\n",
						 ref->debug_id, ref->desc, ref->node->debug_id,
						 (u64)ref->node->ptr);
			} else {
				struct binder_ref *new_ref;

				new_ref = binder_get_ref_for_node(target_proc, ref->node);
				if (new_ref == NULL) {
					return_error = BR_FAILED_REPLY;
					goto err_binder_get_ref_for_node_failed;
				}
				fp->handle = new_ref->desc;
				binder_inc_ref(new_ref, fp->type == BINDER_TYPE_HANDLE, NULL);
				trace_binder_transaction_ref_to_ref(t, ref,
									new_ref);
				binder_debug(BINDER_DEBUG_TRANSACTION,
						 "        ref %d desc %d -> ref %d desc %d (node %d)\n",
						 ref->debug_id, ref->desc, new_ref->debug_id,
						 new_ref->desc, ref->node->debug_id);
			}
		} break;

		case BINDER_TYPE_FD: {
			int target_fd;
			struct file *file;

			if (reply) {
				if (!(in_reply_to->flags & TF_ACCEPT_FDS)) {
					binder_user_error("%d:%d got reply with fd, %d, but target does not allow fds\n",
						proc->pid, thread->pid, fp->handle);
					return_error = BR_FAILED_REPLY;
					goto err_fd_not_allowed;
				}
			} else if (!target_node->accept_fds) {
				binder_user_error("%d:%d got transaction with fd, %d, but target does not allow fds\n",
					proc->pid, thread->pid, fp->handle);
				return_error = BR_FAILED_REPLY;
				goto err_fd_not_allowed;
			}

			file = fget(fp->handle);
			if (file == NULL) {
				binder_user_error("%d:%d got transaction with invalid fd, %d\n",
					proc->pid, thread->pid, fp->handle);
				return_error = BR_FAILED_REPLY;
				goto err_fget_failed;
			}
			target_fd = task_get_unused_fd_flags(target_proc, O_CLOEXEC);
			if (target_fd < 0) {
				fput(file);
				return_error = BR_FAILED_REPLY;
				goto err_get_unused_fd_failed;
			}
			task_fd_install(target_proc, target_fd, file);
			trace_binder_transaction_fd(t, fp->handle, target_fd);
			binder_debug(BINDER_DEBUG_TRANSACTION,
					 "        fd %d -> %d\n", fp->handle, target_fd);
			/* TODO: fput? */
			fp->handle = target_fd;
		} break;

		default:
			binder_user_error("%d:%d got transaction with invalid object type, %x\n",
				proc->pid, thread->pid, fp->type);
			return_error = BR_FAILED_REPLY;
			goto err_bad_object_type;
		}
	}
	if (reply) {
		BUG_ON(t->buffer->async_transaction != 0);
		binder_pop_transaction(target_thread, in_reply_to);
	} else if (!(t->flags & TF_ONE_WAY)) {
		BUG_ON(t->buffer->async_transaction != 0);
		t->need_reply = 1;
		t->from_parent = thread->transaction_stack;
		thread->transaction_stack = t;
	} else {
		BUG_ON(target_node == NULL);
		BUG_ON(t->buffer->async_transaction != 1);
		if (target_node->has_async_transaction) {
			target_list = &target_node->async_todo;
			target_wait = NULL;
		} else
			target_node->has_async_transaction = 1;
	}
	t->work.type = BINDER_WORK_TRANSACTION;
	list_add_tail(&t->work.entry, target_list);
	tcomplete->type = BINDER_WORK_TRANSACTION_COMPLETE;
	list_add_tail(&tcomplete->entry, &thread->todo);
	if (target_wait)
		wake_up_interruptible(target_wait);
	return;

err_get_unused_fd_failed:
err_fget_failed:
err_fd_not_allowed:
err_binder_get_ref_for_node_failed:
err_binder_get_ref_failed:
err_binder_new_node_failed:
err_bad_object_type:
err_bad_offset:
err_copy_data_failed:
	trace_binder_transaction_failed_buffer_release(t->buffer);
	binder_transaction_buffer_release(target_proc, t->buffer, offp);
	t->buffer->transaction = NULL;
	binder_free_buf(target_proc, t->buffer);
err_binder_alloc_buf_failed:
	kfree(tcomplete);
	binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
err_alloc_tcomplete_failed:
	kfree(t);
	binder_stats_deleted(BINDER_STAT_TRANSACTION);
err_alloc_t_failed:
err_bad_call_stack:
err_empty_call_stack:
err_dead_binder:
err_invalid_target_handle:
err_no_context_mgr_node:
	binder_debug(BINDER_DEBUG_FAILED_TRANSACTION,
			 "%d:%d transaction failed %d, size %lld-%lld\n",
			 proc->pid, thread->pid, return_error,
			 (u64)tr->data_size, (u64)tr->offsets_size);

	{
		struct binder_transaction_log_entry *fe;

		fe = binder_transaction_log_add(&binder_transaction_log_failed);
		*fe = *e;
	}

	BUG_ON(thread->return_error != BR_OK);
	if (in_reply_to) {
		thread->return_error = BR_TRANSACTION_COMPLETE;
		binder_send_failed_reply(in_reply_to, return_error);
	} else
		thread->return_error = return_error;
}

This step is very important; there are two cases:

  1. When the process requesting the service and the service live in different processes, a binder_ref object is created for the requesting process, pointing to the binder_node in the service process;
  2. When the process requesting the service lives in the same process as the service, no new object is created; the reference count is simply incremented, and the type is changed (back) to BINDER_TYPE_BINDER or BINDER_TYPE_WEAK_BINDER.
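The translation rule for the two cases above can be condensed into a small decision function (a sketch only; the real driver also manages reference counts, rb-trees, and error paths). The type constants are simplified stand-ins for the flat_binder_object types:

```c
/* Simplified stand-ins for the flat_binder_object type constants. */
enum { TYPE_BINDER, TYPE_WEAK_BINDER, TYPE_HANDLE, TYPE_WEAK_HANDLE };

/* What binder_transaction() does to an object's type when it crosses into
 * target_proc: a binder entity leaving its owning process becomes a handle;
 * a handle arriving back at the process that owns the node becomes an
 * entity again, otherwise it stays a handle (with a new ref/desc). */
static int translate_type(int type, int object_owner_is_target) {
    switch (type) {
    case TYPE_BINDER:      return TYPE_HANDLE;
    case TYPE_WEAK_BINDER: return TYPE_WEAK_HANDLE;
    case TYPE_HANDLE:
        return object_owner_is_target ? TYPE_BINDER : TYPE_HANDLE;
    case TYPE_WEAK_HANDLE:
        return object_owner_is_target ? TYPE_WEAK_BINDER : TYPE_WEAK_HANDLE;
    }
    return -1; /* invalid object type -> BR_FAILED_REPLY in the driver */
}
```

This is why a service registered with the servicemanager is stored there as a handle, and why passing that handle back to the service's own process yields the original Binder entity again.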

4.4.4 Data Copy

The single data copy happens when the driver copies the client's buffers in with copy_from_user:
binder.c

	if (copy_from_user(t->buffer->data, (const void __user *)(uintptr_t)
			   tr->data.ptr.buffer, tr->data_size)) {
		binder_user_error("%d:%d got transaction with invalid data ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}
	if (copy_from_user(offp, (const void __user *)(uintptr_t)
			   tr->data.ptr.offsets, tr->offsets_size)) {
		binder_user_error("%d:%d got transaction with invalid offsets ptr\n",
				proc->pid, thread->pid);
		return_error = BR_FAILED_REPLY;
		goto err_copy_data_failed;
	}

To be precise, the "one copy" is split into two operations: a copy of the metadata (like a request header), and a second copy of the data itself.

4.4.5 transaction_task

(Figures: sending end and receiving end; transmission process; service usage)

4.4.6 Server Thread

When multiple clients send requests at the same time, a single server thread would be overwhelmed, which is why multiple threads are created. When a client request arrives, the driver puts the work on the todo list and wakes up a thread waiting on the wait queue. If a thread is waiting there, it handles the request; if none is waiting, the server is too busy, and the driver reports back to the application that it should create more threads to cope.

The driver asks the APP to spawn a new thread (BR_SPAWN_LOOPER) when all of the following hold:

proc->requested_threads == 0: no outstanding spawn requests;
proc->ready_threads == 0: no idle threads;
proc->requested_threads_started < proc->max_threads: fewer threads started than the configured maximum.

	*consumed = ptr - buffer;
	if (proc->requested_threads + proc->ready_threads == 0 &&
	    proc->requested_threads_started < proc->max_threads &&
	    (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
	     BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
	     /*spawn a new thread if we leave this out */) {
		proc->requested_threads++;
		binder_debug(BINDER_DEBUG_THREADS,
			     "binder: %d:%d BR_SPAWN_LOOPER\n",
			     proc->pid, thread->pid);
		
		SAMPLE_INFO("%s (%d, %d) [%s]", proc->tsk->comm, proc->pid, thread->pid, binder_cmd_name(BR_SPAWN_LOOPER));
		if (put_user(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
			return -EFAULT;
	}
	return 0;

When the new thread is created, it executes an ioctl to tell the driver it has entered the loop:
binder_thread_write
case BC_REGISTER_LOOPER:
proc->requested_threads_started++; // the started-thread count is incremented here

How a new thread is created

  1. Set max_threads.
    proc->max_threads defaults to 0.
    binder_ioctl
    case BINDER_SET_MAX_THREADS:
    copy_from_user(&proc->max_threads, ubuf, sizeof(proc->max_threads))
  2. Receive BR_SPAWN_LOOPER.
    Why create threads with pthread_create() rather than fork()? Because fork() would copy the address space, and the Binder mmap region is marked non-copyable.
  3. The new thread then issues ioctl BC_REGISTER_LOOPER to indicate it is running normally.
    The driver updates two variables:
    switch ... case BC_REGISTER_LOOPER:
    proc->requested_threads--;
    proc->requested_threads_started++;
  4. The thread enters its main loop, reading from the driver and processing commands.
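The driver-side spawn condition from the snippet above can be captured as a predicate. The struct below is a trimmed stand-in for the relevant binder_proc fields (an illustration, not driver code):

```c
#include <stdbool.h>

/* Trimmed stand-in for the relevant binder_proc / binder_thread fields. */
struct proc_state {
    int requested_threads;         /* spawn requests not yet registered */
    int ready_threads;             /* idle threads waiting in the driver */
    int requested_threads_started; /* threads that sent BC_REGISTER_LOOPER */
    int max_threads;               /* set via BINDER_SET_MAX_THREADS */
    bool looper_registered_or_entered; /* BINDER_LOOPER_STATE_* flags */
};

/* True when the driver would queue BR_SPAWN_LOOPER for this process:
 * nothing pending, nobody idle, and the configured cap not yet reached. */
static bool should_spawn_looper(const struct proc_state *p) {
    return p->requested_threads + p->ready_threads == 0 &&
           p->requested_threads_started < p->max_threads &&
           p->looper_registered_or_entered;
}
```

Note how requested_threads keeps the driver from issuing a second spawn request before the previous thread has registered with BC_REGISTER_LOOPER.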

5 Interview Questions

1. Advantages and disadvantages of Binder

Advantages:

  1. Performance
    Shared memory: 0 data copies
    Binder: 1 data copy
    Socket/pipe/message queue: 2 data copies
  2. Stability
    Binder is based on a C/S architecture: anything the client (Client) needs can be handed off to the server (Server) to complete.
    The structure is clear, the responsibilities are well separated and independent, so stability is naturally better.
    Shared memory requires no copying, but it is complex to control and hard to use correctly.
    From the stability perspective, the Binder mechanism is superior to shared memory.
  3. Security
    Traditional IPC has no security measures of its own; security depends entirely on upper-layer protocols.
    Traditional IPC cannot obtain a reliable user ID/process ID (UID/PID) for the peer, so it cannot verify the peer's identity.
    With traditional IPC, users can only fill the UID/PID into the data packet themselves, which is easy for malicious programs to exploit.
    Traditional IPC access points are open, so a malicious program can obtain a connection by guessing the receiver's address.
    Binder supports both real-name and anonymous binders, giving it high security.
  4. The language level
    As everyone knows, Linux is built on C (a procedural language) while Android is built on Java (an object-oriented language), and Binder fits the object-oriented mindset: inter-process communication becomes invoking a method of a Binder object through a reference to it. Binder's distinctive feature is that a Binder object can be referenced across processes: its entity lives in one process, while references to it are spread across the processes of the system. A reference can be passed from one process to another, so every process can reach the same Server, just like assigning an object or a reference to another reference. Binder blurs process boundaries and downplays the mechanics of inter-process communication; the whole system appears to run within a single object-oriented program. At the language level, Binder is a better fit for the object-oriented Android system, though for Linux itself it may feel a little "out of place".
  5. Company strategy
    As is well known, the Linux kernel is open source, protected by the GPL, a license with "viral" reach. How so? The GPL-protected Linux kernel runs in kernel space, and any library, service, or application running in user space must, once it calls into the kernel via a SysCall (system call), in principle also follow the GPL.
    Andy Rubin, the father of Android, clearly could not accept the GPL. Google therefore cleverly confined the GPL to kernel space and adopted the Apache-2.0 license for user space (allowing developers building on Android not to contribute their source back to the community), while the Lib layer sitting between the GPL and Apache-2.0 code uses BSD licensing, effectively cutting off the GPL's contagion. There is still plenty of controversy, but for now at least Android can breathe easy: the GPL stops at kernel space. It is a successful example of Google making open source and commercialization coexist on top of GPL Linux.

Disadvantages:

  1. Binder shared-memory exhaustion
    Binder's performance (one copy_to_user saved) and security are its biggest advantages, but since Binder limits the amount of data transferred in both kernel and user space, transferring large amounts of data through Binder must be avoided. The shared memory a process uses to receive Binder data is 1M-8K, halved for oneway calls, and the Binder driver also enforces a 4M upper limit. Multiple threads share this memory; once the driver finds the receiver's shared memory is insufficient, it returns an error.
    Impact of Binder shared-memory exhaustion:
    Binder calls take a long time, or even fail;
    jank, if this happens on a critical user-facing path.
    Optimization suggestions:
    release data (Server) or replies (Client) promptly; the AIDL framework is recommended;
    design Binder interfaces to avoid large data parameters, using Ashmem for large transfers when necessary;
    avoid many threads calling one Server in parallel within a short period; apply peak-shaving if necessary.
  2. Binder thread-pool exhaustion
    The server side keeps a Binder thread pool of limited size to respond to client calls; by default a process has at most 16 Binder threads (1 main thread plus 15 non-main threads). Requests beyond that block waiting for an idle Binder thread, so once the pool is exhausted, subsequent calls are blocked.
    Impact of Binder thread-pool exhaustion:
    the client side of a synchronous call blocks, causing jank on critical user-facing paths;
    if the Binder thread pool of a key system process such as system_server is exhausted, the whole device freezes.
    Optimization suggestions:
    avoid many threads calling one server in parallel within a short period; apply peak-shaving if necessary;
    improve the execution efficiency of Binder interface implementations.
  3. Creating large numbers of BpBinder or Binder objects
    BpBinder is the client-side reference to a Binder: it stores the handle of the target service, i.e. the reference information of the server-side Binder entity, and is used to look up the Binder node in the kernel and communicate with the Binder entity.
    The system does not limit how many Binder objects a server creates; in theory a large number can be created, but a Service should create only one Binder object. BpBinder objects are capped at 6000/2500; exceeding the cap gets the process killed, so BpBinders for the same Service should be shared within the process.
    Impact of creating large numbers of BpBinder or Binder objects:
    memory pressure and frequent GC, possibly even an OOM crash;
    whole-device jank.
    Optimization suggestions:
    keep one Binder object instance per Service, merging services by usage scenario and lifecycle;
    release BpBinders that are no longer used promptly.
  4. Binding the same Service with multiple ServiceConnection objects
    A ServiceConnection is backed by a binder object handed to AMS, which AMS maintains in order to manage the callbacks for the target Service. The same ServiceConnection object can manage multiple Services: the client multiplexes different Services while AMS maintains only one IServiceConnection. Different ServiceConnection objects, however, are never multiplexed, even for the same Service.
    Impact of binding the same Service with multiple ServiceConnection objects:
    a heavier maintenance burden on AMS: when the Service starts/exits, AMS traverses the ServiceConnections while holding the AMS lock;
    holding the AMS lock for a long time causes whole-device jank.
    Optimization suggestions:
    reuse the same ServiceConnection object, especially for the same Service;
    listen for the Service's death callback and re-bind promptly when necessary;
    unbind Services that are no longer used.
  1. Obtain system services as you use them.
    System services are maintained by ServiceManager. The process of obtaining system service IBinder is also a cross-process Binder call. The system services commonly used by applications are brought over by ATMS when the application process is started, while other system service systems use the hiden interface to restrict common applications. When the checkService interface is frequently called, it is mainly a system service or system application.
    The impact of obtaining system services as you use them:
    Unnecessary cross-process synchronization calls take about 1ms, and some even reach the S level (logcat -b events -s service_manager_slow);
    increase system load.
    Optimization suggestions:
    cache the IBinder locally after the first successful acquisition;
    use IBinder's linkToDeath mechanism to detect the service's exit, clear the local IBinder cache when it dies, and fetch the service from ServiceManager again the next time it is needed.
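The cache-plus-death-callback pattern above can be sketched without Android APIs. In this sketch (`ServiceCache` and its methods are hypothetical names), a `Supplier` stands in for the expensive cross-process `ServiceManager.getService()` lookup, and `invalidate()` stands in for what a real linkToDeath recipient would do when the service dies.

```java
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.function.Supplier;

// Hypothetical sketch of caching system-service handles. On Android the
// cached value would be an IBinder fetched from ServiceManager, and
// invalidate() would be driven by IBinder.linkToDeath(); here a Supplier
// stands in for the cross-process lookup.
public class ServiceCache {
    private final Map<String, Object> cache = new ConcurrentHashMap<>();
    private final Supplier<Object> lookup; // simulates ServiceManager.getService()
    public int lookups = 0;                // counts the expensive cross-process calls

    public ServiceCache(Supplier<Object> lookup) { this.lookup = lookup; }

    // First call does the lookup; later calls hit the local cache.
    public Object get(String name) {
        return cache.computeIfAbsent(name, n -> { lookups++; return lookup.get(); });
    }

    // Called from the (simulated) death notification: drop the stale handle
    // so the next get() re-queries ServiceManager.
    public void invalidate(String name) { cache.remove(name); }
}
```

Repeated `get()` calls for the same name then cost one lookup until the handle is invalidated.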

2. How does Binder achieve a single copy (how does mmap enable one-copy transfer)

The main reason is that Linux uses virtual memory addressing mode, which has the following characteristics:

  1. The virtual memory address of user space is mapped to physical memory
  2. Reading and writing to virtual memory is actually reading and writing to physical memory. This process is memory mapping
  3. This memory mapping process is implemented through the system call mmap().
    Binder uses memory mapping to map the kernel buffer and the data buffer in the receiver's user space onto the same physical memory, so copying data into the kernel buffer is equivalent to copying it directly into the receiver's user-space buffer, which eliminates one data copy

Client and Server live in different processes with different virtual address spaces, so they cannot exchange data directly.
A physical page frame, however, can be mapped into multiple virtual pages, so one piece of physical memory can be mapped into the virtual address spaces of both the Client and the Server.
As shown in the figure, the client only needs one copy_from_user copy, and the server process can then read the data.
In addition, the size of the mapped virtual memory block is close to **1M (1M − 8K)**, so the amount of data one IPC transaction can carry is also limited to this value.
(Note: the single copy actually consists of two operations, a header-information copy and a data copy)
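The shared-mapping idea can be sketched with java.nio's mmap wrapper, `FileChannel.map`. Two independent mappings of one temporary file play the roles of the kernel buffer and the receiver's user-space buffer: a write through one mapping is visible through the other without a further copy. (On Linux both mappings back onto the same page-cache pages, which is what makes the write immediately visible; the Java specification itself leaves the propagation timing loose.)

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of the one-copy idea using FileChannel.map (java.nio's mmap).
// Two mappings of the same file stand in for the kernel buffer and the
// receiver's buffer: one write, no second copy on the reading side.
public class MmapDemo {
    public static String demo() throws IOException {
        Path file = Files.createTempFile("binder-demo", ".bin");
        try (FileChannel sender = FileChannel.open(file,
                     StandardOpenOption.READ, StandardOpenOption.WRITE);
             FileChannel receiver = FileChannel.open(file,
                     StandardOpenOption.READ)) {
            // Mapping with READ_WRITE extends the file and gives a shared mapping.
            MappedByteBuffer out = sender.map(FileChannel.MapMode.READ_WRITE, 0, 32);
            MappedByteBuffer in  = receiver.map(FileChannel.MapMode.READ_ONLY, 0, 32);
            out.put("hello".getBytes(StandardCharsets.UTF_8)); // the one copy
            byte[] seen = new byte[5];
            in.get(seen);                                      // reads the same pages
            return new String(seen, StandardCharsets.UTF_8);
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```

This is only an analogy: real Binder maps kernel pages into the receiver's address space via the Binder driver, not through a file, but the "two views of one physical buffer" mechanism is the same.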

3. Do you understand the memory mapping principle of MMAP?

The implementation process of MMAP memory mapping can be generally divided into three stages:
(1) The process starts the mapping process and creates a virtual mapping area for mapping in the virtual address space

  1. The process calls the library function mmap in user space; prototype: void *mmap(void *start, size_t length, int prot, int flags, int fd, off_t offset);
  2. In the virtual address space of the current process, find a free continuous virtual address that meets the requirements
  3. A vm_area_struct structure is allocated for this virtual area, and then the fields of this structure are initialized
  4. Insert the newly created virtual area structure (vm_area_struct) into the process's virtual address area list or tree

(2) Call the system call function mmap of the kernel space (different from the user space function) to realize the
one-to-one mapping relationship between the physical address of the file and the virtual address of the process

  1. After allocating a new virtual address area for the mapping, the kernel uses the file descriptor of the file to be
    mapped to locate the corresponding file structure (struct file) in the kernel's open-file table; each file structure
    maintains the information related to that opened file.
  2. Through this file structure, the kernel reaches the file_operations table and calls the kernel function mmap, whose prototype
    is int mmap(struct file *filp, struct vm_area_struct *vma), which is different from the user-space library function.
  3. The kernel mmap function locates the physical address of the file disk through the virtual file system inode module.
  4. The page table is established through the remap_pfn_range function, which sets up the mapping between the file address and the
    virtual address area. At this point, the virtual addresses are not yet associated with any data in main memory.

(3) The process accesses the mapped space, triggering a page fault exception, which copies the file content into physical memory (main memory).
Note: the first two stages only create the virtual area and establish the address mapping; they do not copy any file data into main memory.
The real file read happens when the process first initiates a read or write operation.
That read or write accesses the mapped addresses in the virtual address space. The page-table lookup finds that the
address is not backed by a physical page: only the address mapping has been established so far, and the actual disk data has
not yet been copied into memory, so a page fault exception is raised.

  1. A series of judgments are made for page fault exceptions. After confirming that there is no illegal operation, the kernel initiates a request paging process.
  2. The paging process first looks for the memory page that needs to be accessed in the swap cache space (swap cache), if there is no page,
    call the nopage function to load the missing page from the disk into the main memory.
  3. Afterwards, the process can read or write this piece of main memory. If a write changes its content, the system will, after some
    time, automatically write the dirty page back to the corresponding disk address, completing the write to the file.

Note: The modified dirty pages will not be updated back to the file immediately, but there will be a delay for a period of time. You can call
msync() to force synchronization, so that the written content can be saved to the file immediately.
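The msync() note can be tried out from Java, where `MappedByteBuffer.force()` plays the same role: it flushes dirty mapped pages to the file instead of waiting for the kernel's delayed writeback. This is a minimal sketch; the file name and method are illustrative, not part of any API under discussion.

```java
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Sketch of msync() via java.nio: force() flushes the dirty mapped page
// so an ordinary file read sees the data written through the mapping.
public class MsyncDemo {
    public static byte demo() throws IOException {
        Path file = Files.createTempFile("msync-demo", ".bin");
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.READ, StandardOpenOption.WRITE)) {
            MappedByteBuffer buf = ch.map(FileChannel.MapMode.READ_WRITE, 0, 8);
            buf.put(0, (byte) 42); // write through the mapping: page is now dirty
            buf.force();           // msync() equivalent: flush dirty pages now
            return Files.readAllBytes(file)[0]; // plain read sees the flushed byte
        } finally {
            Files.deleteIfExists(file);
        }
    }
}
```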

4. How does the Binder mechanism work across processes

1. The Binder driver creates a receive buffer in kernel space
and sets up an address mapping: the kernel buffer and the user space of the receiving process are mapped onto the same receive buffer.
2. The sending process copies its data into the kernel buffer through a system call (copy_from_user). Because
the kernel buffer and the receiving process's user space share a mapping, this is equivalent to delivering the data directly into the user
space of the receiving process, which realizes cross-process communication.

5. Talk about the communication mechanism of the four major components

  1. activity
    (1) An Activity is usually a separate screen (window).
    (2) Activities communicate through Intents.
    (3) Every Activity in the android application must be
    declared in the AndroidManifest.xml configuration file, otherwise the system will neither recognize nor execute the Activity.
  2. service
    (1) service is used to complete user-specified operations in the background. Service is divided into two types:
    Started (start): When the application component (such as activity) calls the startService() method to start the service,
    the service is in the started state.
    Bound (binding): When the application component calls the bindService() method to bind to the service, the service is in the bound
    state.
    (2) The difference between startService() and bindService():
    The started service (start service) is started by other components calling the startService() method, which causes the
    onStartCommand() method of the service to be called.
    When the service is in the started state, its life cycle is independent of the component that started it, and it can run in the background indefinitely, even if the component that started it has been destroyed. Therefore,
    the service should stop itself by calling stopSelf() when its task is done, or be stopped by another component calling the stopService()
    method.
    When the service is started with bindService(), the caller and the service are bound together: once the caller exits, the service also
    terminates, showing the characteristic of "they need not start together, but they die together".
    (3) Developers need to declare all services in the application configuration file, using
    the <service> tag.
    (4) A Service usually runs in the background and generally does not interact with the user, so the Service component has no
    GUI. A Service component must inherit the Service base class. Service components are usually used to provide background services for other
    components or to monitor their running status.
  3. content provider
    (1) the android platform provides a Content Provider to make the specified data set of an application available to
    other applications. Other applications can obtain or store data from this content provider through the ContentResolver class
    .
    (2) A content provider is only required if data needs to be shared between multiple applications. For example, address book
    data is used by multiple applications and must be stored in a content provider. Its benefit is to unify
    data access methods.
    (3) ContentProvider realizes data sharing. ContentProvider is used to save and get data and
    make it visible to all applications. This is the only way to share data between different applications, because
    android does not provide a common storage area that all applications can access.
    (4) Developers do not use ContentProvider objects directly; most
    operations on a ContentProvider go through a ContentResolver object.
    (5) ContentProvider uses URI to uniquely identify its data set, where the URI is
    prefixed with content://, indicating that the data is managed by ContentProvider.
  4. broadcast receiver
    (1) Your application can use it to filter external events and receive and respond
    only to external events of interest (such as an incoming call, or the data network becoming available).
    Broadcast receivers have no user interface.
    However, they can start an activity or service in response to the information they receive, or use
    the NotificationManager to notify the user. Notifications can attract the user's attention in many ways,
    such as flashing the backlight, vibrating, or playing a sound. The usual approach is to put a persistent icon on the status bar that
    the user can open to read the message.
    (2) There are two ways to register the broadcast receiver, namely the dynamic registration of the program and the
    static registration in the AndroidManifest file.
    (3) A dynamically registered broadcast receiver stops working once the Activity that registered it is closed.
    Static registration does not depend on whether a component is running; as long as the device is on, the broadcast receiver
    is active. That is, even if the app itself has not been started, a broadcast the app subscribed to will still reach it when
    triggered.

6. Why Intent cannot transfer large data

The size of the data an Intent can carry is actually limited by Binder. The data is stored in the Binder transaction buffer in the form of Parcel
objects. If the data or the return value is larger than the transaction buffer, the call fails and
a TransactionTooLargeException is thrown.
The Binder transaction buffer has a limited size, usually 1 MB, and all transactions in the same process share
this buffer. When several transmissions are in flight at once, a TransactionTooLargeException may be thrown even if each individual
payload is below the limit. So 1 MB is not a safe upper bound when passing data with an Intent,
because Binder may be handling other transmissions at the same time.
The exact limit may also differ across device models and system versions.
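One defensive pattern this suggests is estimating a payload's serialized size before putting it in an Intent. The sketch below is hypothetical (`IntentSizeGuard` and its 200 KB budget are illustrative choices, not documented Android limits), and it uses plain Java serialization rather than a real Parcel, which would produce different byte counts; only the order of magnitude matters for the check.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.ObjectOutputStream;
import java.io.Serializable;

// Hypothetical guard against TransactionTooLargeException: estimate the
// serialized size of a payload and stay well under the shared 1 MB
// Binder buffer. The 200 KB budget is an arbitrary conservative choice.
public class IntentSizeGuard {
    public static final int SAFE_BUDGET = 200 * 1024;

    // Rough size estimate via Java serialization (a real Parcel differs,
    // but the order of magnitude is what the check needs).
    public static int estimateSize(Serializable payload) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(bytes)) {
            out.writeObject(payload);
        }
        return bytes.size();
    }

    public static boolean fitsInIntent(Serializable payload) throws IOException {
        return estimateSize(payload) <= SAFE_BUDGET;
    }
}
```

Payloads that fail the check are better passed through a file, a database, or a ContentProvider URI rather than inline in the Intent.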

Origin blog.csdn.net/u010687761/article/details/131057192