Analysis of Binder communication mechanism in Android system (6) - Detailed explanation of Binder communication mechanism

1 The Binder library

In the Android system, Binder-related implementations exist at several levels. The main Binder library is implemented in native C++ code. The core header files of the Binder library are as follows:

  • Parcel.h : defines the class Parcel, the container for the data transmitted in IPC.
  • IBinder.h : defines the class IBinder, the abstract interface of a Binder object.
  • Binder.h : defines the classes BBinder and BpRefBase; BBinder derives from IBinder and is the base class used on the server side, while BpRefBase is the base class used by proxy objects.
  • BpBinder.h : defines the class BpBinder, the client-side proxy base class.
  • IInterface.h : defines the class IInterface and the class templates BnInterface and BpInterface, which are what a concrete Binder interface implementation builds on.
  • ProcessState.h : defines the class ProcessState, which represents the per-process state of Binder.
  • IPCThreadState.h : defines the class IPCThreadState, which represents the per-thread state of Binder IPC.

  From the perspective of the external interface of the Binder system, IInterface represents an "interface"; the BnInterface<> class template represents the local server side and is (through BBinder) an IBinder; the BpInterface<> class template represents the client-side proxy, which holds a pointer to an IBinder.
  From the perspective of the Binder implementation, the core consists of two classes, BBinder and BpBinder, which represent the most basic server side and client side respectively. In typical use, a function call and its arguments are converted into a data stream tagged with a transaction code; the stream is delivered through BpBinder::transact() to the remote BBinder::onTransact(), which performs the actual operation and writes the return value back.
  ProcessState and IPCThreadState form Binder's adaptation layer: they talk to the Binder driver in the Linux kernel and provide the underlying support for the Binder framework.
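  As a concrete illustration of how this adaptation layer is typically used, the following is a minimal sketch of a native server process entering the Binder thread pool. The service name "hello" and the HelloService class are hypothetical; only the ProcessState/IPCThreadState calls follow the pattern used throughout the framework.

//Minimal sketch (assumptions noted above): a native server joins the Binder thread pool.
#include <binder/ProcessState.h>
#include <binder/IPCThreadState.h>
#include <binder/IServiceManager.h>

int main()
{
    //Open /dev/binder and mmap the transaction buffer for this process.
    android::sp<android::ProcessState> proc(android::ProcessState::self());

    //Register the (hypothetical) service with servicemanager.
    //android::defaultServiceManager()->addService(
    //        android::String16("hello"), new HelloService());

    //Spawn the pooled Binder threads and let the main thread join the pool;
    //from now on incoming transactions are dispatched to BBinder::transact().
    android::ProcessState::self()->startThreadPool();
    android::IPCThreadState::self()->joinThreadPool();
    return 0;
}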

1.1 Binder's 3-layer structure

In the Android system, both the Java layer and the C++ layer define Binder interfaces with equivalent functionality for applications to use; both ultimately call into the implementation in the native Binder library.

  1. Binder driver part
    The driver part is located at the bottom of the Binder structure. It implements the Binder device driver and mainly provides the following functions.
    • Organize Binder's service nodes.
    • Dispatch the threads that process Binder work.
    • Perform the actual Binder data transfer.
  2. Binder adapter layer
    The Binder adapter layer is the encapsulation of the Binder driver, and its main function is to operate the Binder driver so that application code does not have to deal with the driver directly; the files involved include parts of IPCThreadState.cpp, ProcessState.cpp and Parcel.cpp. The Binder core library is the core implementation of the Binder framework, mainly including IBinder, BBinder (server side) and BpBinder (client side).
  3. Top-level Binder framework
    The top-level Binder framework and the concrete client/server code have both Java and C++ implementations. They are mainly used by applications such as the camera and multimedia services, and are realized by calling the Binder core library.

  The IBinder class is defined in the file frameworks/native/include/binder/IBinder.h, the basic structure is as follows:

class IBinder : public virtual RefBase
{
public:
    enum {
        FIRST_CALL_TRANSACTION  = 0x00000001,
        LAST_CALL_TRANSACTION   = 0x00ffffff,

        PING_TRANSACTION        = B_PACK_CHARS('_','P','N','G'),
        DUMP_TRANSACTION        = B_PACK_CHARS('_','D','M','P'),
        SHELL_COMMAND_TRANSACTION = B_PACK_CHARS('_','C','M','D'),
        INTERFACE_TRANSACTION   = B_PACK_CHARS('_', 'N', 'T', 'F'),
        SYSPROPS_TRANSACTION    = B_PACK_CHARS('_', 'S', 'P', 'R'),

        // Corresponds to TF_ONE_WAY -- an asynchronous call.
        FLAG_ONEWAY             = 0x00000001
    };

                          IBinder();

    /**
     * Check if this IBinder implements the interface named by
     * @a descriptor.  If it does, the base pointer to it is returned,
     * which you can safely static_cast<> to the concrete C++ interface.
     */
    virtual sp<IInterface>  queryLocalInterface(const String16& descriptor);

    /**
     * Return the canonical name of the interface provided by this IBinder
     * object.
     */
    virtual const String16& getInterfaceDescriptor() const = 0;

    virtual bool            isBinderAlive() const = 0;
    virtual status_t        pingBinder() = 0;
    virtual status_t        dump(int fd, const Vector<String16>& args) = 0;
    static  status_t        shellCommand(const sp<IBinder>& target, int in, int out, int err,
                                         Vector<String16>& args,
                                         const sp<IResultReceiver>& resultReceiver);

    virtual status_t        transact(   uint32_t code,
                                        const Parcel& data,
                                        Parcel* reply,
                                        uint32_t flags = 0) = 0;
......
    class DeathRecipient : public virtual RefBase
    {
    public:
        virtual void binderDied(const wp<IBinder>& who) = 0;
    };
};
......

  IBinder can be regarded as the transport vehicle of Binder communication. The transact() function is used for communication: calling it performs a remote call, and the data and reply parameters carry the call's arguments and return value respectively. Most of the functions in IBinder are pure virtual functions (virtual ... = 0) that are not implemented here and must be implemented by subclasses.
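  The nested DeathRecipient class shown above is how a client observes the death of the process hosting a remote Binder. A minimal, hypothetical usage sketch follows; the recipient class and the "binder" variable are assumptions, while linkToDeath()/unlinkToDeath()/binderDied() are the real IBinder interfaces.

//Hypothetical client-side death-notification sketch.
#include <binder/IBinder.h>
#include <utils/StrongPointer.h>

class MyDeathRecipient : public android::IBinder::DeathRecipient {
public:
    //Called by the framework when the process hosting the remote BBinder dies.
    virtual void binderDied(const android::wp<android::IBinder>& who) {
        //Release proxies, reconnect, or report the error here.
        (void)who;
    }
};

//Assuming "binder" is an sp<IBinder> obtained from the service manager:
//  android::sp<MyDeathRecipient> recipient = new MyDeathRecipient();
//  binder->linkToDeath(recipient);
//  ...
//  binder->unlinkToDeath(recipient);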

  The file frameworks/native/include/binder/IInterface.h defines the class IInterface and the class templates BnInterface and BpInterface. The class templates BnInterface and BpInterface are used to implement the Service component and the Client component respectively.

class IInterface : public virtual RefBase
{
public:
            IInterface();
            static sp<IBinder>  asBinder(const IInterface*);
            static sp<IBinder>  asBinder(const sp<IInterface>&);

protected:
    virtual                     ~IInterface();
    virtual IBinder*            onAsBinder() = 0;
};

template<typename INTERFACE>
class BnInterface : public INTERFACE, public BBinder
{
public:
    virtual sp<IInterface>      queryLocalInterface(const String16& _descriptor);
    virtual const String16&     getInterfaceDescriptor() const;

protected:
    virtual IBinder*            onAsBinder();
};

template<typename INTERFACE>
class BpInterface : public INTERFACE, public BpRefBase
{
public:
                                BpInterface(const sp<IBinder>& remote);

protected:
    virtual IBinder*            onAsBinder();
};

  When these two templates are used, they effectively provide multiple inheritance: the user defines an interface IXXX and then combines it with the templates BnInterface and BpInterface to build the two classes BnXXX and BpXXX, as sketched below.
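  The following is a minimal sketch of this pattern for a hypothetical IHelloService interface. Only the class structure and the DECLARE_META_INTERFACE helper follow the framework; the HELLO transaction code and the sayHello() method are illustrative assumptions, and IMPLEMENT_META_INTERFACE(HelloService, "...") would still have to appear in the corresponding .cpp file.

//Hypothetical interface built on IInterface/BnInterface/BpInterface.
#include <binder/IInterface.h>
#include <binder/Parcel.h>

using namespace android;

class IHelloService : public IInterface {
public:
    DECLARE_META_INTERFACE(HelloService);   //declares descriptor, asInterface(), ...
    enum { HELLO = IBinder::FIRST_CALL_TRANSACTION };
    virtual status_t sayHello(const String16& msg) = 0;
};

//Server side: BnInterface<IHelloService> inherits both IHelloService and BBinder.
class BnHelloService : public BnInterface<IHelloService> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0);
};

//Client side: BpInterface<IHelloService> inherits both IHelloService and BpRefBase.
class BpHelloService : public BpInterface<IHelloService> {
public:
    BpHelloService(const sp<IBinder>& impl) : BpInterface<IHelloService>(impl) {}
    virtual status_t sayHello(const String16& msg);
};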

1.2 BBinder class

  The class template BnInterface inherits from the class BBinder. BBinder is the carrier of a service: it works together with the Binder driver to ensure that a client's request ultimately becomes a call on a Binder local object (a BBinder). From the driver's point of view, every service corresponds to a BBinder; the driver finds the BBinder that corresponds to the requested service and hands it to IPCThreadState, which calls BBinder::transact(). BBinder::transact() then calls onTransact(); since BBinder::onTransact() is a virtual function, what actually runs is BnXXXService::onTransact(), where the concrete service function is invoked. The class diagram of BnXXXService is shown below.

Implementation principle of the Server component: (class diagram figure)
Implementation principle of the Client component: (class diagram figure)
As can be seen from the figure, BnXXXService includes the following two parts:

  • IXXXService: The interface of the main body of the service.
  • BBinder: It is the carrier of the service and works together with the Binder driver to ensure that the client's request is ultimately a call to a Binder object (BBinder class).

  Each service corresponds to a BBinder; the Binder driver is responsible for locating the BBinder that corresponds to the service and handing it to IPCThreadState, which calls BBinder::transact(). BBinder::transact() in turn calls onTransact(); because BBinder::onTransact() is a virtual function, the call actually lands in BnXXXService::onTransact(), where the concrete service function is carried out. Code location: frameworks/native/include/binder/Binder.h

class BBinder : public IBinder
{
public:
                        BBinder();

    virtual const String16& getInterfaceDescriptor() const;
    virtual bool        isBinderAlive() const;
    virtual status_t    pingBinder();
    virtual status_t    dump(int fd, const Vector<String16>& args);

    virtual status_t    transact(   uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);

    virtual status_t    linkToDeath(const sp<DeathRecipient>& recipient,
                                    void* cookie = NULL,
                                    uint32_t flags = 0);

    virtual status_t    unlinkToDeath(  const wp<DeathRecipient>& recipient,
                                        void* cookie = NULL,
                                        uint32_t flags = 0,
                                        wp<DeathRecipient>* outRecipient = NULL);

    virtual void        attachObject(   const void* objectID,
                                        void* object,
                                        void* cleanupCookie,
                                        object_cleanup_func func);
    virtual void*       findObject(const void* objectID) const;
    virtual void        detachObject(const void* objectID);

    virtual BBinder*    localBinder();

protected:
    virtual             ~BBinder();

    virtual status_t    onTransact( uint32_t code,
                                    const Parcel& data,
                                    Parcel* reply,
                                    uint32_t flags = 0);

private:
                        BBinder(const BBinder& o);
            BBinder&    operator=(const BBinder& o);

    class Extras;

    std::atomic<Extras*> mExtras;
            void*       mReserved0;
};

  In the class BBinder, when a Binder proxy object sends an inter-process communication request to a Binder local object through the Binder driver, the request is ultimately dispatched to the local object's member function transact(). The function transact() is implemented in the file frameworks/native/libs/binder/Binder.cpp; the code is as follows.

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    data.setDataPosition(0);

    status_t err = NO_ERROR;
    switch (code) {
        case PING_TRANSACTION:
            reply->writeInt32(pingBinder());
            break;
        default:
            err = onTransact(code, data, reply, flags);
            break;
    }

    if (reply != NULL) {
        reply->setDataPosition(0);
    }

    return err;
}

  In the above code, the PING_TRANSACTION request is used to check whether the object is still alive; the return value of pingBinder() is simply written back to the caller, and all other requests are handed over to onTransact() for processing. onTransact() is a protected virtual function declared in BBinder and implemented in its subclasses; its job is to dispatch the business-related inter-process communication requests. The default implementation is as follows:

status_t BBinder::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t /*flags*/)
{
    switch (code) {
        case INTERFACE_TRANSACTION:
            reply->writeString16(getInterfaceDescriptor());
            return NO_ERROR;

        case DUMP_TRANSACTION: {
            int fd = data.readFileDescriptor();
            int argc = data.readInt32();
            Vector<String16> args;
            for (int i = 0; i < argc && data.dataAvail() > 0; i++) {
               args.add(data.readString16());
            }
            return dump(fd, args);
        }

        case SHELL_COMMAND_TRANSACTION: {
            int in = data.readFileDescriptor();
            int out = data.readFileDescriptor();
            int err = data.readFileDescriptor();
            int argc = data.readInt32();
            Vector<String16> args;
            for (int i = 0; i < argc && data.dataAvail() > 0; i++) {
               args.add(data.readString16());
            }
            sp<IResultReceiver> resultReceiver = IResultReceiver::asInterface(
                    data.readStrongBinder());

            // XXX can't add virtuals until binaries are updated.
            //return shellCommand(in, out, err, args, resultReceiver);
            if (resultReceiver != NULL) {
                resultReceiver->send(INVALID_OPERATION);
            }
        }

        case SYSPROPS_TRANSACTION: {
            report_sysprop_change();
            return NO_ERROR;
        }

        default:
            return UNKNOWN_TRANSACTION;
    }
}
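  For comparison with the default implementation above, a concrete BnXXXService typically overrides onTransact() along the following lines, continuing the hypothetical IHelloService example; the HELLO code and parameter layout are assumptions, while CHECK_INTERFACE and the fall-through to BBinder::onTransact() follow the usual framework pattern.

//Hypothetical server-side dispatcher for the IHelloService sketch.
status_t BnHelloService::onTransact(uint32_t code, const Parcel& data,
                                    Parcel* reply, uint32_t flags)
{
    switch (code) {
        case HELLO: {
            //Verify that the caller marshalled against the same interface descriptor.
            CHECK_INTERFACE(IHelloService, data, reply);
            String16 msg = data.readString16();
            status_t result = sayHello(msg);      //the actual service function
            reply->writeInt32(result);
            return NO_ERROR;
        }
        default:
            //Unknown codes fall back to BBinder::onTransact(), which handles
            //INTERFACE_TRANSACTION, DUMP_TRANSACTION and so on, as shown above.
            return BBinder::onTransact(code, data, reply, flags);
    }
}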

1.3 Class BpRefBase

  The class template BpInterface inherits from the class BpRefBase and acts as a proxy. Above BpRefBase sits the business logic (what function to perform); below BpRefBase sits the data transport (how the function is carried out through Binder). The class BpRefBase is defined in the file frameworks/native/include/binder/Binder.h.

class BpRefBase : public virtual RefBase
{
protected:
                            BpRefBase(const sp<IBinder>& o);
    virtual                 ~BpRefBase();
    virtual void            onFirstRef();
    virtual void            onLastStrongRef(const void* id);
    virtual bool            onIncStrongAttempted(uint32_t flags, const void* id);
    
	//The remote() function in BpRefBase points to an IBinder. BpRefBase is used by the proxy side, which obtains the IBinder and then uses it to transmit data.
    inline  IBinder*        remote()                { return mRemote; }
    inline  IBinder*        remote() const          { return mRemote; }

private:
                            BpRefBase(const BpRefBase& o);
    BpRefBase&              operator=(const BpRefBase& o);

    IBinder* const          mRemote;
    RefBase::weakref_type*  mRefs;
    std::atomic<int32_t>    mState;
};

  The member function transact() of the class BpBinder (the proxy that BpRefBase's mRemote typically points to) is used to send inter-process communication requests to the Service component running in the Server process; this is done indirectly through the Binder driver. The implementation of transact() is as follows.

/*
	code:  the ID number of the request
	data:  the parameters of the request
	reply: the returned result
	flags: extra flags, e.g. FLAG_ONEWAY; usually 0
*/
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
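  On top of BpBinder::transact(), a concrete proxy method for the hypothetical IHelloService example would look roughly as follows; the transaction code and parameters are assumptions, while writeInterfaceToken() and remote()->transact() are the standard framework calls.

//Hypothetical client-side proxy method for the IHelloService sketch.
status_t BpHelloService::sayHello(const String16& msg)
{
    Parcel data, reply;
    //Write the interface token first so the server side can CHECK_INTERFACE().
    data.writeInterfaceToken(IHelloService::getInterfaceDescriptor());
    data.writeString16(msg);
    //remote() is the IBinder held by BpRefBase; for a remote service it is a
    //BpBinder, so this call ends up in IPCThreadState::self()->transact().
    status_t err = remote()->transact(HELLO, data, &reply, 0);
    if (err != NO_ERROR) return err;
    return reply.readInt32();
}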

1.4 Class IPCThreadState

  Both the server side (BBinder) and the proxy side (BpBinder behind BpRefBase) introduced above ultimately interact with the Binder driver through the class IPCThreadState. The class IPCThreadState is declared in the file frameworks/native/include/binder/IPCThreadState.h as follows.

namespace android {

class IPCThreadState
{
public:
    static  IPCThreadState*     self();
    static  IPCThreadState*     selfOrNull();  // self(), but won't instantiate

            sp<ProcessState>    process();

            status_t            clearLastError();

            pid_t               getCallingPid() const;
            uid_t               getCallingUid() const;

            void                setStrictModePolicy(int32_t policy);
            int32_t             getStrictModePolicy() const;

            void                setLastTransactionBinderFlags(int32_t flags);
            int32_t             getLastTransactionBinderFlags() const;

            int64_t             clearCallingIdentity();
            void                restoreCallingIdentity(int64_t token);

            int                 setupPolling(int* fd);
            status_t            handlePolledCommands();
            void                flushCommands();

            void                joinThreadPool(bool isMain = true);

            // Stop the local process.
            void                stopProcess(bool immediate = true);

            status_t            transact(int32_t handle,
                                         uint32_t code, const Parcel& data,
                                         Parcel* reply, uint32_t flags);

            void                incStrongHandle(int32_t handle);
            void                decStrongHandle(int32_t handle);
            void                incWeakHandle(int32_t handle);
            void                decWeakHandle(int32_t handle);
            status_t            attemptIncStrongHandle(int32_t handle);
    static  void                expungeHandle(int32_t handle, IBinder* binder);
            status_t            requestDeathNotification(   int32_t handle,
                                                            BpBinder* proxy);
            status_t            clearDeathNotification( int32_t handle,
                                                        BpBinder* proxy);

    static  void                shutdown();

    // Call this to disable switching threads to background scheduling when
    // receiving incoming IPC calls.  This is specifically here for the
    // Android system process, since it expects to have background apps calling
    // in to it but doesn't want to acquire locks in its services while in
    // the background.
    static  void                disableBackgroundScheduling(bool disable);

            // Call blocks until the number of executing binder threads is less than
            // the maximum number of binder threads threads allowed for this process.
            void                blockUntilThreadAvailable();

private:
                                IPCThreadState();
                                ~IPCThreadState();

            status_t            sendReply(const Parcel& reply, uint32_t flags);
            status_t            waitForResponse(Parcel *reply,
                                                status_t *acquireResult=NULL);
            status_t            talkWithDriver(bool doReceive=true);
            status_t            writeTransactionData(int32_t cmd,
                                                     uint32_t binderFlags,
                                                     int32_t handle,
                                                     uint32_t code,
                                                     const Parcel& data,
                                                     status_t* statusBuffer);
            status_t            getAndExecuteCommand();
            status_t            executeCommand(int32_t command);
            void                processPendingDerefs();

            void                clearCaller();

    static  void                threadDestructor(void *st);
    static  void                freeBuffer(Parcel* parcel,
                                           const uint8_t* data, size_t dataSize,
                                           const binder_size_t* objects, size_t objectsSize,
                                           void* cookie);

    const   sp<ProcessState>    mProcess;
            Vector<BBinder*>    mPendingStrongDerefs;
            Vector<RefBase::weakref_type*> mPendingWeakDerefs;

            Parcel              mIn;
            Parcel              mOut;
            status_t            mLastError;
            pid_t               mCallingPid;
            uid_t               mCallingUid;
            int32_t             mStrictModePolicy;
            int32_t             mLastTransactionBinderFlags;
};

  The member functions of IPCThreadState implement the actual data handling. In a transact() request, the request data is sent to the Service through the Binder device; after the Service has processed the request, the result is returned to the client along the same path. The function transact() is defined in the file frameworks/native/libs/binder/IPCThreadState.cpp as follows.

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
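  After writeTransactionData() has queued a BC_TRANSACTION command in mOut, the function above blocks in waitForResponse(), which repeatedly exchanges data with the driver and dispatches the commands that come back. The following is a greatly simplified sketch of that loop under those assumptions (error handling, logging and several commands omitted); the real implementation lives next to transact() in IPCThreadState.cpp.

//Greatly simplified sketch of the IPCThreadState::waitForResponse() loop.
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    status_t err;
    while (1) {
        //talkWithDriver() performs the actual ioctl(BINDER_WRITE_READ) exchange,
        //writing mOut to the driver and filling mIn with returned commands.
        if ((err = talkWithDriver()) < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;
        cmd = (uint32_t)mIn.readInt32();
        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            //For a oneway call (no reply expected) this already finishes the request.
            if (!reply && !acquireResult) return err;
            break;
        case BR_REPLY:
            //The real code reads a binder_transaction_data here and hands its
            //buffer to *reply via ipcSetDataReference(); then the call is done.
            return err;
        default:
            //All other commands (BR_TRANSACTION, BR_SPAWN_LOOPER, ...) are
            //handled by executeCommand(), shown in section 5.2.
            err = executeCommand(cmd);
            if (err != NO_ERROR) return err;
            break;
        }
    }
    return err;
}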

1.5 IPermissionController

1.6 IServiceManager

1.7 Other parts of the Binder library

3 Initialize the Java layer Binder framework

  Although the Java layer Binder system is a mirror image of the native layer Binder system, this mirror still relies on the native layer Binder system to do its work; the two are inextricably linked, so the relationship between them must be established before the Java layer Binder can start working. This section explains the initialization process of the Java layer Binder framework.

3.1 Building the interaction

  In the Android system, the function register_android_os_Binder() is responsible for building the interaction between the Java Binder and the native Binder. This function is implemented in the file frameworks/base/core/jni/android_util_Binder.cpp.

int register_android_os_Binder(JNIEnv* env)
{
	//Initialize the relationship between the Java Binder class and the native layer
    if (int_register_android_os_Binder(env) < 0)
        return -1;
    //Initialize the relationship between the Java BinderInternal class and the native layer
    if (int_register_android_os_BinderInternal(env) < 0)
        return -1;
    //Initialize the relationship between the Java BinderProxy class and the native layer
    if (int_register_android_os_BinderProxy(env) < 0)
        return -1;

    jclass clazz = FindClassOrDie(env, "android/util/Log");
    gLogOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gLogOffsets.mLogE = GetStaticMethodIDOrDie(env, clazz, "e",
            "(Ljava/lang/String;Ljava/lang/String;Ljava/lang/Throwable;)I");

    clazz = FindClassOrDie(env, "android/os/ParcelFileDescriptor");
    gParcelFileDescriptorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gParcelFileDescriptorOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>",
                                                                 "(Ljava/io/FileDescriptor;)V");

    clazz = FindClassOrDie(env, "android/os/StrictMode");
    gStrictModeCallbackOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gStrictModeCallbackOffsets.mCallback = GetStaticMethodIDOrDie(env, clazz,
            "onBinderStrictModePolicyChange", "(I)V");

    return 0;
}

  According to the above code, the function register_android_os_Binder() completes the initialization of the three most important classes in the Java layer Binder architecture. The initialization process of the Binder class will be analyzed in detail below.

3.2 Initializing the Binder class

  The function int_register_android_os_Binder() implements the initialization of the Binder class . This function is implemented in the file android_util_Binder.cpp. The specific implementation code is as follows.

static int int_register_android_os_Binder(JNIEnv* env)
{
	//kBinderPathName is the full path name of the Binder class in the Java layer: android/os/Binder
    jclass clazz = FindClassOrDie(env, kBinderPathName);

	/*
		gBinderOffsets is a static object that stores information about the Binder class that is
		used in the JNI layer, such as the methodID of the member function execTransact() and the
		fieldID of the Binder member mObject.
	*/
    gBinderOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderOffsets.mExecTransact = GetMethodIDOrDie(env, clazz, "execTransact", "(IJJI)Z");
    gBinderOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
	//Register the implementations of the native methods of the Binder class
    return RegisterMethodsOrDie(
        env, kBinderPathName,
        gBinderMethods, NELEM(gBinderMethods));
}

  As can be seen from the above code, the gBinderOffsets object stores the information about the Binder class that the JNI layer needs. The next class to be initialized is BinderInternal, whose initialization code is in the function int_register_android_os_BinderInternal(). This function is also implemented in the file android_util_Binder.cpp, as follows.

static int int_register_android_os_BinderInternal(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, kBinderInternalPathName);

    gBinderInternalOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderInternalOffsets.mForceGc = GetStaticMethodIDOrDie(env, clazz, "forceBinderGc", "()V");

    return RegisterMethodsOrDie(
        env, kBinderInternalPathName,
        gBinderInternalMethods, NELEM(gBinderInternalMethods));
}

  It can be seen that int_register_android_os_BinderInternal() works in the same way as int_register_android_os_Binder(), and both do two things (see the sketch after this list):

  • Obtain some useful methodIDs and fieldIDs, which shows that the JNI layer will call back into Java layer functions.
  • Register the implementations of the native methods of the corresponding class.
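  The registration itself follows the standard JNI pattern: a static table of JNINativeMethod entries maps Java native method signatures to C++ functions, and RegisterMethodsOrDie() hands that table to the JNI environment. The concrete entries below are illustrative assumptions about the shape of gBinderMethods, not a copy of the real table in android_util_Binder.cpp.

//Illustrative sketch of the JNI method-table registration pattern (entries are assumptions).
static jint android_os_Binder_getCallingPid(JNIEnv* env, jobject clazz)
{
    //Forward the Java-level Binder.getCallingPid() to the native IPC thread state.
    return IPCThreadState::self()->getCallingPid();
}

static const JNINativeMethod gBinderMethods[] = {
    /* name, signature, funcPtr */
    { "getCallingPid", "()I", (void*)android_os_Binder_getCallingPid },
    //... one entry per native method declared in android.os.Binder ...
};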

3.3 Initializing the BinderProxy class

  The function int_register_android_os_BinderProxy() completes the initialization of the BinderProxy class . This function is implemented in the file android_util_Binder.cpp. The specific implementation code is as follows.

static int int_register_android_os_BinderProxy(JNIEnv* env)
{
    jclass clazz = FindClassOrDie(env, "java/lang/Error");
    //gErrorOffsets is used to interact with the Error class
    gErrorOffsets.mClass = MakeGlobalRefOrDie(env, clazz);

    clazz = FindClassOrDie(env, kBinderProxyPathName);
    //gBinderProxyOffsets is used to interact with the BinderProxy class
    gBinderProxyOffsets.mClass = MakeGlobalRefOrDie(env, clazz);
    gBinderProxyOffsets.mConstructor = GetMethodIDOrDie(env, clazz, "<init>", "()V");
    gBinderProxyOffsets.mSendDeathNotice = GetStaticMethodIDOrDie(env, clazz, "sendDeathNotice",
            "(Landroid/os/IBinder$DeathRecipient;)V");
	//Obtain some information about BinderProxy
    gBinderProxyOffsets.mObject = GetFieldIDOrDie(env, clazz, "mObject", "J");
    gBinderProxyOffsets.mSelf = GetFieldIDOrDie(env, clazz, "mSelf",
                                                "Ljava/lang/ref/WeakReference;");
    gBinderProxyOffsets.mOrgue = GetFieldIDOrDie(env, clazz, "mOrgue", "J");

	clazz = FindClassOrDie(env, "java/lang/Class");
    //gClassOffsets is used to interact with the Class class
    gClassOffsets.mGetName = GetMethodIDOrDie(env, clazz, "getName", "()Ljava/lang/String;");
    
	//Register the implementations of the BinderProxy native methods
    return RegisterMethodsOrDie(
        env, kBinderProxyPathName,
        gBinderProxyMethods, NELEM(gBinderProxyMethods));
}

  According to the above code, int_register_android_os_BinderProxy() not only initializes the BinderProxy class but also obtains some information about the Error class. At this point the initialization of the important members of the Java Binder framework is complete, and the code has filled in several global static objects: gBinderOffsets, gBinderInternalOffsets and gBinderProxyOffsets.
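  These offset tables are what later let the JNI code move between the two worlds. For example, when a native IBinder has to be handed to Java, the framework's javaObjectForIBinder() creates a BinderProxy through gBinderProxyOffsets and stores the native pointer in its mObject field. The fragment below is a simplified sketch of that idea, not the complete function; "val" stands for the sp<IBinder> being wrapped.

//Simplified sketch: wrap a native IBinder in a Java BinderProxy object.
jobject object = env->NewObject(gBinderProxyOffsets.mClass,
                                gBinderProxyOffsets.mConstructor);
if (object != NULL) {
    //BinderProxy.mObject keeps the address of the native BpBinder/IBinder, so
    //later transactions issued from Java can find their native counterpart.
    env->SetLongField(object, gBinderProxyOffsets.mObject,
                      reinterpret_cast<jlong>(val.get()));
    //Hold a reference on behalf of the Java object (released when it is destroyed).
    val->incStrong((void*)javaObjectForIBinder);
}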

4 Entity object binder_node

  In the Android system, binder_node is used to describe a Binder entity object. Each Service component corresponds to a Binder entity object in the Binder driver, which describes the component's state inside the kernel. The Binder communication framework of the Android system is shown in Figure 5-2.
(Figure 5-2: the Binder communication framework of the Android system)

4.1 Define Entity Objects

Binder entity object binder_node is defined in the kernel code binder.c.

struct binder_node {
    int debug_id;							//debug id
    struct binder_work work;				//describes a pending work item
    union {
        struct rb_node rb_node;				//node used to mount this object in the nodes red-black tree of the host process binder_proc
        struct hlist_node dead_node;		//when the host process dies, the Binder entity object is moved onto the global binder_dead_nodes list
    };
    struct binder_proc *proc;				//points to the host process of this Binder entity object
    struct hlist_head refs;					//holds all Binder reference objects that refer to this Binder entity object
    int internal_strong_refs;				//strong reference count of the Binder entity object
    int local_weak_refs;					//weak reference count of the Binder entity object
    int local_strong_refs;
    binder_uintptr_t ptr;
    binder_uintptr_t cookie;
    unsigned has_strong_ref:1;
    unsigned pending_strong_ref:1;
    unsigned has_weak_ref:1;
    unsigned pending_weak_ref:1;
    unsigned has_async_transaction:1;		//indicates whether this Binder entity object is currently processing an asynchronous transaction
    unsigned accept_fds:1;					//controls whether this Binder entity object may receive IPC data containing file descriptors
    unsigned min_priority:8;				//minimum thread priority required of the thread that processes requests for this Binder entity object
    struct list_head async_todo;			//asynchronous transaction queue
};

  From the above code we can see that in the Binder driver, each Binder local object in user space corresponds to one Binder entity object. The members are described as follows:

  • proc : points to the host process of the Binder entity object; the host process uses a red-black tree to maintain all of the Binder entity objects created inside it.
  • rb_node : the node used to mount this object in the red-black tree of Binder entity objects of the host process proc.
  • dead_node : if the host process of the Binder entity object has died, the entity object is kept on the global linked list binder_dead_nodes through the member dead_node.
  • refs : a Binder entity object can be referenced by multiple clients; the member refs holds all Binder reference objects that refer to this entity object.
  • internal_strong_refs and local_strong_refs : both describe the strong reference count of the Binder entity object.
  • local_weak_refs : describes the weak reference count of the Binder entity object.
  • ptr and cookie : both point into user space; cookie holds the address of the BBinder object, and ptr holds the address of the reference-count object of that BBinder.
  • has_async_transaction : indicates whether the Binder entity object is processing an asynchronous transaction. When the Binder driver assigns a transaction to a thread, it normally places the transaction in that thread's todo queue, meaning the thread will process it; if the transaction is asynchronous, the driver instead places it in the entity object's asynchronous queue async_todo (see the sketch after this list).
  • min_priority : the minimum thread priority that the Binder entity object requires of the thread that processes requests from client processes.
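  The role of has_async_transaction and async_todo can be seen in the way binder_transaction() picks the queue for a new transaction. The fragment below is a condensed illustration of that logic as it appears in kernels of this era, not a verbatim copy; target_list and target_wait are local variables of binder_transaction().

		//Condensed illustration (not verbatim): choosing the queue for a one-way transaction.
		if (t->flags & TF_ONE_WAY) {
			if (target_node->has_async_transaction) {
				//An asynchronous transaction is already in flight for this node, so the
				//new one is parked on the node's own async_todo queue instead of waking
				//another thread of the target process.
				target_list = &target_node->async_todo;
				target_wait = NULL;
			} else {
				target_node->has_async_transaction = 1;
			}
		}
		t->work.type = BINDER_WORK_TRANSACTION;
		list_add_tail(&t->work.entry, target_list);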

4.2 Increase the reference count

  In the Binder driver, the function binder_inc_node() is used to increase the reference count of a Binder entity object. It is defined in the kernel source file binder.c, as follows.

/**
	The parameters are as follows:
	node : the Binder entity object whose reference count is to be increased.
	strong : whether to increase the strong reference count or the weak reference count.
	internal : distinguishes between increasing an internal reference count and an external one.
	target_list : points to the todo queue of a target process or target thread; when it is not NULL, it indicates that after the reference count of the Binder entity object is increased, the reference count of the Binder local object it refers to must be increased correspondingly.
*/
static int binder_inc_node(struct binder_node *node, int strong, int internal,
               struct list_head *target_list)
{
    if (strong) {
        if (internal) {
            if (target_list == NULL &&
                node->internal_strong_refs == 0 &&
                !(node == binder_context_mgr_node &&
                node->has_strong_ref)) {
                pr_err("invalid inc strong node for %d\n",
                    node->debug_id);
                return -EINVAL;
            }
            node->internal_strong_refs++;
        } else
            node->local_strong_refs++;
        if (!node->has_strong_ref && target_list) {
            list_del_init(&node->work.entry);
            list_add_tail(&node->work.entry, target_list);
        }
    } else {
        if (!internal)
            node->local_weak_refs++;
        if (!node->has_weak_ref && list_empty(&node->work.entry)) {
            if (target_list == NULL) {
                pr_err("invalid inc weak node for %d\n",
                    node->debug_id);
                return -EINVAL;
            }
            list_add_tail(&node->work.entry, target_list);
        }
    }
    return 0;
}

4.3 Reduce reference count

  Opposite to binder_inc_node(), the Binder driver uses the function binder_dec_node() to decrease the reference count of a Binder entity object. binder_dec_node() decrements internal_strong_refs, local_strong_refs or local_weak_refs and, once the object is no longer referenced, removes its work.entry from its list and frees the node.

static int binder_dec_node(struct binder_node *node, int strong, int internal)
{
    if (strong) {
        if (internal)
            node->internal_strong_refs--;
        else
            node->local_strong_refs--;
        if (node->local_strong_refs || node->internal_strong_refs)
            return 0;
    } else {
        if (!internal)
            node->local_weak_refs--;
        if (node->local_weak_refs || !hlist_empty(&node->refs))
            return 0;
    }
    if (node->proc && (node->has_strong_ref || node->has_weak_ref)) {
        if (list_empty(&node->work.entry)) {
            list_add_tail(&node->work.entry, &node->proc->todo);
            wake_up_interruptible(&node->proc->wait);
        }
    } else {
        if (hlist_empty(&node->refs) && !node->local_strong_refs &&
            !node->local_weak_refs) {
            list_del_init(&node->work.entry);
            if (node->proc) {
                rb_erase(&node->rb_node, &node->proc->nodes);
                binder_debug(BINDER_DEBUG_INTERNAL_REFS,
                         "refless node %d deleted\n",
                         node->debug_id);
            } else {
                hlist_del(&node->dead_node);
                binder_debug(BINDER_DEBUG_INTERNAL_REFS,
                         "dead node %d deleted\n",
                         node->debug_id);
            }
            kfree(node);
            binder_stats_deleted(BINDER_STAT_NODE);
        }
    }

    return 0;
}

5 Native object BBinder

  Because the purpose of Binder is to execute functions of other processes as if they were local, the Binder mechanism is not only a complete IPC mechanism in the Android system but also Android's remote procedure call (RPC) mechanism. When a process obtains, through Binder, the service it wants to call, that service can be either a local object or a reference to a remote service. In other words, Binder can communicate not only with local processes but also with remote processes; the local case is the local object explained in this section, and the remote case is a reference to the remote service.

5.1 References to running local objects

  In the Android system, the Binder driver references the Binder local objects running in a Server process through the function binder_thread_read(), which is defined in the kernel source file binder.c. While a server such as servicemanager is running, this function waits until a request arrives.

static int binder_thread_read(struct binder_proc *proc,
			      struct binder_thread *thread,
			      binder_uintptr_t binder_buffer, size_t size,
			      binder_size_t *consumed, int non_block)
{
    
    
	void __user *buffer = (void __user *)(uintptr_t)binder_buffer;
	void __user *ptr = buffer + *consumed;
	void __user *end = buffer + size;

	int ret = 0;
	int wait_for_proc_work;

	if (*consumed == 0) {
    
    
		if (put_user_preempt_disabled(BR_NOOP, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
	}

retry:
	wait_for_proc_work = thread->transaction_stack == NULL &&
				list_empty(&thread->todo);

	if (thread->return_error != BR_OK && ptr < end) {
    
    
		if (thread->return_error2 != BR_OK) {
    
    
			if (put_user_preempt_disabled(thread->return_error2, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			binder_stat_br(proc, thread, thread->return_error2);
			if (ptr == end)
				goto done;
			thread->return_error2 = BR_OK;
		}
		if (put_user_preempt_disabled(thread->return_error, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		binder_stat_br(proc, thread, thread->return_error);
		thread->return_error = BR_OK;
		goto done;
	}


	thread->looper |= BINDER_LOOPER_STATE_WAITING;
	if (wait_for_proc_work)
		proc->ready_threads++;

	binder_unlock(__func__);

	trace_binder_wait_for_work(wait_for_proc_work,
				   !!thread->transaction_stack,
				   !list_empty(&thread->todo));
	if (wait_for_proc_work) {
    
    
		if (!(thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
					BINDER_LOOPER_STATE_ENTERED))) {
    
    
			binder_user_error("%d:%d ERROR: Thread waiting for process work before calling BC_REGISTER_LOOPER or BC_ENTER_LOOPER (state %x)\n",
				proc->pid, thread->pid, thread->looper);
			wait_event_interruptible(binder_user_error_wait,
						 binder_stop_on_user_error < 2);
		}
		binder_set_nice(proc->default_priority);
		if (non_block) {
    
    
			if (!binder_has_proc_work(proc, thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable_exclusive(proc->wait, binder_has_proc_work(proc, thread));
	} else {
    
    
		if (non_block) {
    
    
			if (!binder_has_thread_work(thread))
				ret = -EAGAIN;
		} else
			ret = wait_event_freezable(thread->wait, binder_has_thread_work(thread));
	}

	binder_lock(__func__);

	if (wait_for_proc_work)
		proc->ready_threads--;
	thread->looper &= ~BINDER_LOOPER_STATE_WAITING;

	if (ret)
		return ret;

	while (1) {
    
    
		uint32_t cmd;
		//the transact parameters passed in from user space are copied into the local variable struct binder_transaction_data tr
		struct binder_transaction_data tr;
		struct binder_work *w;
		struct binder_transaction *t = NULL;
		//since thread->todo is not empty, the following statement is executed
		if (!list_empty(&thread->todo))
			w = list_first_entry(&thread->todo, struct binder_work, entry);
		else if (!list_empty(&proc->todo) && wait_for_proc_work)
			//after the ServiceManager is woken up, it enters the while loop and starts processing transactions
			//here wait_for_proc_work equals 1 and proc->todo is not empty, so the first work item is taken from the proc->todo list
			w = list_first_entry(&proc->todo, struct binder_work, entry);
		else {
    
    
			if (ptr - buffer == 4 && !(thread->looper & BINDER_LOOPER_STATE_NEED_RETURN)) /* no data added */
				goto retry;
			break;
		}

		if (end - ptr < sizeof(tr) + 4)
			break;

		switch (w->type) {
    
    
		//further processing of the work queued by binder_transaction()
		case BINDER_WORK_TRANSACTION: {
    
    
			//because the type of this work item is BINDER_WORK_TRANSACTION, the transaction item is obtained with the statement below
			t = container_of(w, struct binder_transaction, work);
		} break;
		//when w->type is BINDER_WORK_TRANSACTION_COMPLETE
		//(set in the binder_transaction() function mentioned above), the following code is executed
		case BINDER_WORK_TRANSACTION_COMPLETE: {
    
    
			cmd = BR_TRANSACTION_COMPLETE;
			if (put_user_preempt_disabled(cmd, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);

			binder_stat_br(proc, thread, cmd);
			binder_debug(BINDER_DEBUG_TRANSACTION_COMPLETE,
				     "%d:%d BR_TRANSACTION_COMPLETE\n",
				     proc->pid, thread->pid);

			//remove w from thread->todo
			list_del(&w->entry);
			kfree(w);
			binder_stats_deleted(BINDER_STAT_TRANSACTION_COMPLETE);
		} break;
		case BINDER_WORK_NODE: {
    
    
			struct binder_node *node = container_of(w, struct binder_node, work);
			uint32_t cmd = BR_NOOP;
			const char *cmd_name;
			int strong = node->internal_strong_refs || node->local_strong_refs;
			int weak = !hlist_empty(&node->refs) || node->local_weak_refs || strong;
			if (weak && !node->has_weak_ref) {
    
    
				cmd = BR_INCREFS;
				cmd_name = "BR_INCREFS";
				node->has_weak_ref = 1;
				node->pending_weak_ref = 1;
				node->local_weak_refs++;
			} else if (strong && !node->has_strong_ref) {
    
    
				cmd = BR_ACQUIRE;
				cmd_name = "BR_ACQUIRE";
				node->has_strong_ref = 1;
				node->pending_strong_ref = 1;
				node->local_strong_refs++;
			} else if (!strong && node->has_strong_ref) {
    
    
				cmd = BR_RELEASE;
				cmd_name = "BR_RELEASE";
				node->has_strong_ref = 0;
			} else if (!weak && node->has_weak_ref) {
    
    
				cmd = BR_DECREFS;
				cmd_name = "BR_DECREFS";
				node->has_weak_ref = 0;
			}
			if (cmd != BR_NOOP) {
    
    
				if (put_user_preempt_disabled(cmd, (uint32_t __user *)ptr))
					return -EFAULT;
				ptr += sizeof(uint32_t);
				if (put_user_preempt_disabled(node->ptr,
					     (binder_uintptr_t __user *)ptr))
					return -EFAULT;
				ptr += sizeof(binder_uintptr_t);
				if (put_user_preempt_disabled(node->cookie,
					     (binder_uintptr_t __user *)ptr))
					return -EFAULT;
				ptr += sizeof(binder_uintptr_t);

				binder_stat_br(proc, thread, cmd);
				binder_debug(BINDER_DEBUG_USER_REFS,
					     "%d:%d %s %d u%016llx c%016llx\n",
					     proc->pid, thread->pid, cmd_name,
					     node->debug_id,
					     (u64)node->ptr, (u64)node->cookie);
			} else {
    
    
				list_del_init(&w->entry);
				if (!weak && !strong) {
    
    
					binder_debug(BINDER_DEBUG_INTERNAL_REFS,
						     "%d:%d node %d u%016llx c%016llx deleted\n",
						     proc->pid, thread->pid, node->debug_id,
						     (u64)node->ptr, (u64)node->cookie);
					rb_erase(&node->rb_node, &proc->nodes);
					kfree(node);
					binder_stats_deleted(BINDER_STAT_NODE);
				} else {
    
    
					binder_debug(BINDER_DEBUG_INTERNAL_REFS,
						     "%d:%d node %d u%016llx c%016llx state unchanged\n",
						     proc->pid, thread->pid, node->debug_id,
						     (u64)node->ptr, (u64)node->cookie);
				}
			}
		} break;
		case BINDER_WORK_DEAD_BINDER:
		case BINDER_WORK_DEAD_BINDER_AND_CLEAR:
		case BINDER_WORK_CLEAR_DEATH_NOTIFICATION: {
    
    
			struct binder_ref_death *death;
			uint32_t cmd;

			death = container_of(w, struct binder_ref_death, work);
			if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION)
				cmd = BR_CLEAR_DEATH_NOTIFICATION_DONE;
			else
				cmd = BR_DEAD_BINDER;
			if (put_user_preempt_disabled(cmd, (uint32_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(uint32_t);
			if (put_user_preempt_disabled(death->cookie,
				     (binder_uintptr_t __user *)ptr))
				return -EFAULT;
			ptr += sizeof(binder_uintptr_t);
			binder_stat_br(proc, thread, cmd);
			binder_debug(BINDER_DEBUG_DEATH_NOTIFICATION,
				     "%d:%d %s %016llx\n",
				      proc->pid, thread->pid,
				      cmd == BR_DEAD_BINDER ?
				      "BR_DEAD_BINDER" :
				      "BR_CLEAR_DEATH_NOTIFICATION_DONE",
				      (u64)death->cookie);

			if (w->type == BINDER_WORK_CLEAR_DEATH_NOTIFICATION) {
    
    
				list_del(&w->entry);
				kfree(death);
				binder_stats_deleted(BINDER_STAT_DEATH);
			} else
				list_move(&w->entry, &proc->delivered_death);
			if (cmd == BR_DEAD_BINDER)
				goto done; /* DEAD_BINDER notifications can cause transactions */
		} break;
		}

		if (!t)
			continue;

		BUG_ON(t->buffer == NULL);
		//copy the data in transaction item t into the local variable struct binder_transaction_data tr
		if (t->buffer->target_node) {
    
    
			struct binder_node *target_node = t->buffer->target_node;
			tr.target.ptr = target_node->ptr;
			tr.cookie =  target_node->cookie;
			t->saved_priority = task_nice(current);
			if (t->priority < target_node->min_priority &&
			    !(t->flags & TF_ONE_WAY))
				binder_set_nice(t->priority);
			else if (!(t->flags & TF_ONE_WAY) ||
				 t->saved_priority > target_node->min_priority)
				binder_set_nice(target_node->min_priority);
			cmd = BR_TRANSACTION;
		} else {
    
    
			tr.target.ptr = 0;
			tr.cookie = 0;
			cmd = BR_REPLY;
		}
		tr.code = t->code;
		tr.flags = t->flags;
		tr.sender_euid = from_kuid(current_user_ns(), t->sender_euid);

		if (t->from) {
    
    
			struct task_struct *sender = t->from->proc->tsk;
			tr.sender_pid = task_tgid_nr_ns(sender,
							task_active_pid_ns(current));
		} else {
    
    
			tr.sender_pid = 0;
		}

		tr.data_size = t->buffer->data_size;
		tr.offsets_size = t->buffer->offsets_size;
		/*
			The address t->buffer->data points to lies in kernel space. To return the data to the
			user space of the Service Manager process, which cannot access kernel-space data directly,
			further processing is needed. The Binder mechanism uses something like a shallow copy:
			a user-space virtual address is arranged so that it and the kernel-space virtual address
			t->buffer->data refer to the same physical memory. Therefore it is enough to add the
			offset proc->user_buffer_offset to t->buffer->data to obtain the corresponding
			user-space virtual address. After adjusting tr.data.ptr.buffer, the value of
			tr.data.ptr.offsets must be adjusted along with it.
		*/
		tr.data.ptr.buffer = (binder_uintptr_t)(
					(uintptr_t)t->buffer->data +
					proc->user_buffer_offset);
		tr.data.ptr.offsets = tr.data.ptr.buffer +
					ALIGN(t->buffer->data_size,
					    sizeof(void *));
		
		//copy the contents of tr into the buffer passed in from user space; the pointer ptr points into that user buffer
		if (put_user_preempt_disabled(cmd, (uint32_t __user *)ptr))
			return -EFAULT;
		ptr += sizeof(uint32_t);
		if (copy_to_user_preempt_disabled(ptr, &tr, sizeof(tr)))
			return -EFAULT;
		ptr += sizeof(tr);
		
		//the code above only performs a shallow copy of the data referenced by tr.data.ptr.buffer and tr.data.ptr.offsets
		trace_binder_transaction_received(t);
		binder_stat_br(proc, thread, cmd);
		binder_debug(BINDER_DEBUG_TRANSACTION,
			     "%d:%d %s %d %d:%d, cmd %d size %zd-%zd ptr %016llx-%016llx\n",
			     proc->pid, thread->pid,
			     (cmd == BR_TRANSACTION) ? "BR_TRANSACTION" :
			     "BR_REPLY",
			     t->debug_id, t->from ? t->from->proc->pid : 0,
			     t->from ? t->from->pid : 0, cmd,
			     t->buffer->data_size, t->buffer->offsets_size,
			     (u64)tr.data.ptr.buffer, (u64)tr.data.ptr.offsets);
			     
		//remove the transaction from the todo list, since it has been handled
		list_del(&t->work.entry);
		t->buffer->allow_user_free = 1;
		//if cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY) is true,
		//then although the driver has finished processing this transaction, it still has to wait for a reply confirmation after the Service Manager completes the request
		if (cmd == BR_TRANSACTION && !(t->flags & TF_ONE_WAY)) {
    
    
			//put the current transaction t at the head of the thread->transaction_stack stack
			t->to_parent = thread->transaction_stack;
			t->to_thread = thread;
			thread->transaction_stack = t;
		} else {
			//if it is false, there is no need to wait for a reply, and transaction t is deleted directly
			t->buffer->transaction = NULL;
			kfree(t);
			binder_stats_deleted(BINDER_STAT_TRANSACTION);
		}
		break;
	}

done:

	*consumed = ptr - buffer;
	if (proc->requested_threads + proc->ready_threads == 0 &&
	    proc->requested_threads_started < proc->max_threads &&
	    (thread->looper & (BINDER_LOOPER_STATE_REGISTERED |
	     BINDER_LOOPER_STATE_ENTERED)) /* the user-space code fails to */
	     /*spawn a new thread if we leave this out */) {
    
    
		proc->requested_threads++;
		binder_debug(BINDER_DEBUG_THREADS,
			     "%d:%d BR_SPAWN_LOOPER\n",
			     proc->pid, thread->pid);
		if (put_user_preempt_disabled(BR_SPAWN_LOOPER, (uint32_t __user *)buffer))
			return -EFAULT;
		binder_stat_br(proc, thread, BR_SPAWN_LOOPER);
	}
	return 0;
}

It can be seen that the Binder driver refers to the Binder local object running in the Server process through the following four protocols:

  • BR_INCREFS
  • BR_ACQUIRE
  • BR_DECREFS
  • BR_RELEASE

5.2 Handling Interface Protocols

  In the file frameworks/native/libs/binder/IPCThreadState.cpp, the four protocols listed in the previous section are handled by the member function executeCommand().

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    
    
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;
    
    switch ((uint32_t)cmd) {
    
    
    case BR_ERROR:
        result = mIn.readInt32();
        break;
        
    case BR_OK:
        break;
        
    case BR_ACQUIRE:
        refs = (RefBase::weakref_type*)mIn.readPointer();
        obj = (BBinder*)mIn.readPointer();
        ALOG_ASSERT(refs->refBase() == obj,
                   "BR_ACQUIRE: object %p does not match cookie %p (expected %p)",
                   refs, obj, refs->refBase());
        obj->incStrong(mProcess.get());
        IF_LOG_REMOTEREFS() {
    
    
            LOG_REMOTEREFS("BR_ACQUIRE from driver on %p", obj);
            obj->printRefs();
        }
        mOut.writeInt32(BC_ACQUIRE_DONE);
        mOut.writePointer((uintptr_t)refs);
        mOut.writePointer((uintptr_t)obj);
        break;
        
    case BR_RELEASE:
        refs = (RefBase::weakref_type*)mIn.readPointer();
        obj = (BBinder*)mIn.readPointer();
        ALOG_ASSERT(refs->refBase() == obj,
                   "BR_RELEASE: object %p does not match cookie %p (expected %p)",
                   refs, obj, refs->refBase());
        IF_LOG_REMOTEREFS() {
    
    
            LOG_REMOTEREFS("BR_RELEASE from driver on %p", obj);
            obj->printRefs();
        }
        mPendingStrongDerefs.push(obj);
        break;
        
    case BR_INCREFS:
        refs = (RefBase::weakref_type*)mIn.readPointer();
        obj = (BBinder*)mIn.readPointer();
        refs->incWeak(mProcess.get());
        mOut.writeInt32(BC_INCREFS_DONE);
        mOut.writePointer((uintptr_t)refs);
        mOut.writePointer((uintptr_t)obj);
        break;
        
    case BR_DECREFS:
        refs = (RefBase::weakref_type*)mIn.readPointer();
        obj = (BBinder*)mIn.readPointer();
        // NOTE: This assertion is not valid, because the object may no
        // longer exist (thus the (BBinder*)cast above resulting in a different
        // memory address).
        //ALOG_ASSERT(refs->refBase() == obj,
        //           "BR_DECREFS: object %p does not match cookie %p (expected %p)",
        //           refs, obj, refs->refBase());
        mPendingWeakDerefs.push(refs);
        break;
        
    case BR_ATTEMPT_ACQUIRE:
        refs = (RefBase::weakref_type*)mIn.readPointer();
        obj = (BBinder*)mIn.readPointer();
         
        {
    
    
            const bool success = refs->attemptIncStrong(mProcess.get());
            ALOG_ASSERT(success && refs->refBase() == obj,
                       "BR_ATTEMPT_ACQUIRE: object %p does not match cookie %p (expected %p)",
                       refs, obj, refs->refBase());
            
            mOut.writeInt32(BC_ACQUIRE_RESULT);
            mOut.writeInt32((int32_t)success);
        }
        break;
    
    case BR_TRANSACTION:
        {
    
    
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            ALOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;
            
            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(binder_size_t), freeBuffer, this);
            
            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            const int32_t origStrictModePolicy = mStrictModePolicy;
            const int32_t origTransactionBinderFlags = mLastTransactionBinderFlags;

            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            mLastTransactionBinderFlags = tr.flags;

            int curPrio = getpriority(PRIO_PROCESS, mMyThreadId);
            if (gDisableBackgroundScheduling) {
    
    
                if (curPrio > ANDROID_PRIORITY_NORMAL) {
    
    
                    // We have inherited a reduced priority from the caller, but do not
                    // want to run in that state in this process.  The driver set our
                    // priority already (though not our scheduling class), so bounce
                    // it back to the default before invoking the transaction.
                    setpriority(PRIO_PROCESS, mMyThreadId, ANDROID_PRIORITY_NORMAL);
                }
            } else {
    
    
                if (curPrio >= ANDROID_PRIORITY_BACKGROUND) {
    
    
                    // We want to use the inherited priority from the caller.
                    // Ensure this thread is in the background scheduling class,
                    // since the driver won't modify scheduling classes for us.
                    // The scheduling group is reset to default by the caller
                    // once this method returns after the transaction is complete.
                    set_sched_policy(mMyThreadId, SP_BACKGROUND);
                }
            }

            //ALOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            status_t error;
            IF_LOG_TRANSACTIONS() {
    
    
                TextOutput::Bundle _b(alog);
                alog << "BR_TRANSACTION thr " << (void*)pthread_self()
                    << " / obj " << tr.target.ptr << " / code "
                    << TypeCode(tr.code) << ": " << indent << buffer
                    << dedent << endl
                    << "Data addr = "
                    << reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer)
                    << ", offsets addr="
                    << reinterpret_cast<const size_t*>(tr.data.ptr.offsets) << endl;
            }
            if (tr.target.ptr) {
    
    
                // We only have a weak reference on the target object, so we must first try to
                // safely acquire a strong reference before doing anything else with it.
                if (reinterpret_cast<RefBase::weakref_type*>(
                        tr.target.ptr)->attemptIncStrong(this)) {
    
    
                    error = reinterpret_cast<BBinder*>(tr.cookie)->transact(tr.code, buffer,
                            &reply, tr.flags);
                    reinterpret_cast<BBinder*>(tr.cookie)->decStrong(this);
                } else {
    
    
                    error = UNKNOWN_TRANSACTION;
                }

            } else {
    
    
                error = the_context_object->transact(tr.code, buffer, &reply, tr.flags);
            }

            //ALOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);
            
            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                if (error < NO_ERROR) reply.setError(error);
                sendReply(reply, 0);
            } else {
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;
            mStrictModePolicy = origStrictModePolicy;
            mLastTransactionBinderFlags = origTransactionBinderFlags;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                    << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
            
        }
        break;
    
    case BR_DEAD_BINDER:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->sendObituary();
            mOut.writeInt32(BC_DEAD_BINDER_DONE);
            mOut.writePointer((uintptr_t)proxy);
        } break;

    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        {
            BpBinder *proxy = (BpBinder*)mIn.readPointer();
            proxy->getWeakRefs()->decWeak(proxy);
        } break;
        
    case BR_FINISHED:
        result = TIMED_OUT;
        break;
        
    case BR_NOOP:
        break;
        
    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;
        
    default:
        printf("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }
    
    return result;
}

  It can be seen from the above code that executeCommand() calls BBinder::transact() to process the client's request. When more threads are needed to provide the service, the driver asks the process to create new ones: as the handling of BR_SPAWN_LOOPER above shows, executeCommand() responds by calling mProcess->spawnPooledThread(false) to spawn another pooled thread.
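
To make the server-side dispatch concrete, here is a minimal sketch of a hypothetical service interface IFoo with a single made-up transaction code GET_VALUE (both names are assumptions for illustration, not part of the framework). BBinder::transact(), which executeCommand() calls above, ends up in the onTransact() of such a BnInterface subclass.

// Hypothetical interface, used only to illustrate the dispatch path:
// executeCommand() -> BBinder::transact() -> BnFoo::onTransact().
class IFoo : public IInterface {
public:
    DECLARE_META_INTERFACE(Foo);
    enum { GET_VALUE = IBinder::FIRST_CALL_TRANSACTION };
    virtual int32_t getValue() = 0;
};

class BnFoo : public BnInterface<IFoo> {
public:
    virtual status_t onTransact(uint32_t code, const Parcel& data,
                                Parcel* reply, uint32_t flags = 0)
    {
        switch (code) {
        case GET_VALUE:
            // Unmarshal arguments from data (none here), do the work,
            // and marshal the result into reply.
            CHECK_INTERFACE(IFoo, data, reply);
            reply->writeInt32(getValue());
            return NO_ERROR;
        default:
            // Let BBinder handle standard codes such as PING_TRANSACTION.
            return BBinder::onTransact(code, data, reply, flags);
        }
    }
};

A concrete service would derive from BnFoo and implement getValue(); the dispatch path from the driver stays the same.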

6 Reference object binder_ref

struct binder_ref {
	/* Lookups needed: */
	/*   node + proc => ref (transaction) */
	/*   desc + proc => ref (transaction, inc/dec ref) */
	/*   node => refs + procs (proc exit) */
	int debug_id;						// debugging id
	struct rb_node rb_node_desc;		// node in the host binder_proc's red-black tree refs_by_desc
	struct rb_node rb_node_node;		// node in the host binder_proc's red-black tree refs_by_node
	struct hlist_node node_entry;		// node in the refs list of the referenced Binder entity object
	struct binder_proc *proc;			// host process (binder_proc) of this Binder reference object
	struct binder_node *node;			// Binder entity object referenced by this reference object
	uint32_t desc;						// handle value of this Binder reference object
	int strong;							// strong reference count
	int weak;							// weak reference count
	struct binder_ref_death *death;		// registered death notification
};

  In the Binder mechanism, binder_ref describes a Binder reference object; for every Binder proxy a client holds, there is a corresponding Binder reference object inside the Binder driver. The member variables are described as follows.

  • node: the Binder entity object (binder_node) referenced by this Binder reference object. The entity object in turn keeps a linked list of all the Binder reference objects that refer to it.
  • node_entry: the node that links this reference object into the refs list of the referenced entity object.
  • desc: the handle value that identifies this Binder reference object. When a Client process accesses a Service by handle value, the Binder driver first finds the corresponding Binder reference object through the handle, then finds the Binder entity object through the reference object's node member, and finally reaches the Service through that entity object (see the sketch after this list).
  • proc: the host process (binder_proc) of this Binder reference object.
  • rb_node_desc and rb_node_node: the nodes that link this reference object into the red-black trees refs_by_desc and refs_by_node of its host binder_proc.
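
As mentioned in the desc item, the handle is all a client process ever sees of a Service. The sketch below is an illustrative simplification, not the literal framework code: the real logic lives in ProcessState::getStrongProxyForHandle() in ProcessState.cpp, which additionally handles the context-manager handle 0 and weak-reference races.

// Illustrative sketch only: map a driver handle (the desc of a binder_ref)
// to a per-process proxy object. ProcessState keeps one BpBinder per handle.
#include <map>

sp<IBinder> getProxyForHandle(int32_t handle)
{
    static std::map<int32_t, sp<IBinder> > sProxies;   // simplified cache
    sp<IBinder>& proxy = sProxies[handle];
    if (proxy == NULL) {
        // First use of this handle in the process: create the proxy.
        // The BpBinder constructor (shown in section 7.1) records the handle.
        proxy = new BpBinder(handle);
    }
    return proxy;
}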

The Binder driver defines four important commands for increasing and decreasing the strong and weak reference counts of Binder reference objects:

  • BC_INCREFS
  • BC_ACQUIRE
  • BC_DECREFS
  • BC_RELEASE

  These commands are handled by the function binder_thread_write(), which is defined in the kernel driver binder.c. BC_INCREFS and BC_ACQUIRE increase the weak and strong reference counts of a Binder reference object respectively; both are implemented by calling the function binder_inc_ref(). The specific code is as follows.

static int binder_inc_ref(struct binder_ref *ref, int strong,
			  struct list_head *target_list)
{
	int ret;
	if (strong) {
		if (ref->strong == 0) {
			ret = binder_inc_node(ref->node, 1, 1, target_list);
			if (ret)
				return ret;
		}
		ref->strong++;
	} else {
		if (ref->weak == 0) {
			ret = binder_inc_node(ref->node, 0, 1, target_list);
			if (ret)
				return ret;
		}
		ref->weak++;
	}
	return 0;
}

  The commands BC_RELEASE and BC_DECREFS decrease the strong and weak reference counts of a Binder reference object respectively; both are implemented by calling the function binder_dec_ref(). The specific code is as follows:

static int binder_dec_ref(struct binder_ref **ptr_to_ref, int strong)
{
	struct binder_ref *ref = *ptr_to_ref;
	if (strong) {
		if (ref->strong == 0) {
			binder_user_error("%d invalid dec strong, ref %d desc %d s %d w %d\n",
					  ref->proc->pid, ref->debug_id,
					  ref->desc, ref->strong, ref->weak);
			return -EINVAL;
		}
		ref->strong--;
		if (ref->strong == 0) {
			int ret;
			ret = binder_dec_node(ref->node, strong, 1);
			if (ret)
				return ret;
		}
	} else {
		if (ref->weak == 0) {
			binder_user_error("%d invalid dec weak, ref %d desc %d s %d w %d\n",
					  ref->proc->pid, ref->debug_id,
					  ref->desc, ref->strong, ref->weak);
			return -EINVAL;
		}
		ref->weak--;
	}
	if (ref->strong == 0 && ref->weak == 0) {
		binder_delete_ref(ref);
		*ptr_to_ref = NULL;
	}
	return 0;
}

Next, look at the function binder_delete_ref(), which destroys a Binder reference object.

static void binder_delete_ref(struct binder_ref *ref)
{
	binder_debug(BINDER_DEBUG_INTERNAL_REFS,
		     "%d delete ref %d desc %d for node %d\n",
		      ref->proc->pid, ref->debug_id, ref->desc,
		      ref->node->debug_id);

	rb_erase(&ref->rb_node_desc, &ref->proc->refs_by_desc);
	rb_erase(&ref->rb_node_node, &ref->proc->refs_by_node);
	if (ref->strong)
		binder_dec_node(ref->node, 1, 1);
	hlist_del(&ref->node_entry);
	binder_dec_node(ref->node, 0, 1);
	if (ref->death) {
		binder_debug(BINDER_DEBUG_DEAD_BINDER,
			     "%d delete ref %d desc %d has death notification\n",
			      ref->proc->pid, ref->debug_id, ref->desc);
		list_del(&ref->death->work.entry);
		kfree(ref->death);
		binder_stats_deleted(BINDER_STAT_DEATH);
	}
	kfree(ref);
	binder_stats_deleted(BINDER_STAT_REF);
}
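
These BC_ commands are not written by applications directly; on the user-space side IPCThreadState queues them into its mOut buffer and talkWithDriver() later hands them to binder_thread_write(). In the versions of IPCThreadState.cpp matching the code quoted in this article, the strong-reference helpers look roughly as follows (incWeakHandle() and decWeakHandle(), shown in section 7, do the same for BC_INCREFS and BC_DECREFS):

// Queue BC_ACQUIRE / BC_RELEASE for a handle; in the driver these end up in
// binder_inc_ref() / binder_dec_ref() on the matching binder_ref.
void IPCThreadState::incStrongHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incStrongHandle(%d)\n", handle);
    mOut.writeInt32(BC_ACQUIRE);
    mOut.writeInt32(handle);
}

void IPCThreadState::decStrongHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::decStrongHandle(%d)\n", handle);
    mOut.writeInt32(BC_RELEASE);
    mOut.writeInt32(handle);
}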

7 Proxy object BpBinder

  In the Android system, the proxy object BpBinder is the in-process proxy of a remote object, and it implements the IBinder interface. Beginners often confuse BBinder with BpBinder. **In fact, the two are easy to tell apart: a Service inherits BBinder (through BnInterface), because BBinder provides the message-handling function onTransact(); a Client communicating with that Service uses BpBinder (through BpInterface), because BpBinder provides the message-sending function transact().**
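
Continuing the hypothetical IFoo example sketched after executeCommand(), the client-side counterpart is a thin wrapper whose methods package their arguments and hand them to BpBinder::transact() through remote(). Again this is an illustrative sketch, not framework code:

// Hypothetical proxy matching the earlier BnFoo sketch. remote() returns the
// BpBinder that stores the handle of the remote service.
class BpFoo : public BpInterface<IFoo> {
public:
    explicit BpFoo(const sp<IBinder>& impl) : BpInterface<IFoo>(impl) {}

    virtual int32_t getValue()
    {
        Parcel data, reply;
        // Marshal the call, send it through BpBinder::transact(), and
        // unmarshal the result from the reply Parcel.
        data.writeInterfaceToken(IFoo::getInterfaceDescriptor());
        remote()->transact(GET_VALUE, data, &reply);
        return reply.readInt32();
    }
};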

7.1 Create Binder proxy object

  Taking the client of CameraService as an example, the function getCameraService() in the file Camera.cpp obtains the IBinder object of the remote CameraService, which is then wrapped by the following code to produce a BpCameraService object.

mCameraService = interface_cast<ICameraService>(binder);
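
interface_cast<ICameraService>() itself is only a thin template that forwards to ICameraService::asInterface(); the IMPLEMENT_META_INTERFACE macro expands asInterface() to roughly the following (simplified here for readability, with the descriptor string elided):

// Approximate expansion of IMPLEMENT_META_INTERFACE(CameraService, "...") --
// this is what interface_cast<ICameraService>(binder) effectively runs.
sp<ICameraService> ICameraService::asInterface(const sp<IBinder>& obj)
{
    sp<ICameraService> intr;
    if (obj != NULL) {
        // If obj is a local BnCameraService living in the same process, use it directly.
        intr = static_cast<ICameraService*>(
                obj->queryLocalInterface(ICameraService::descriptor).get());
        if (intr == NULL) {
            // Otherwise obj is a BpBinder referring to a remote object: wrap it in a proxy.
            intr = new BpCameraService(obj);
        }
    }
    return intr;
}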

BpCameraService inherits BpInterface, and the IBinder obtained above is passed into it. On the server side, CameraService itself (a BBinder) is registered with ServiceManager:

cameraService:
	defaultServiceManager()->addService(
		String16("media.camera"), new CameraService());

  In an IPC transfer the IBinder pointer is indispensable; like a socket descriptor, it is unique within a process. Whether this IBinder is actually a BBinder or a BpBinder, it is always passed in as an IBinder when the corresponding BpBinder or BBinder wrapper is constructed. In the Android system, the Binder proxy object is created in the file frameworks/native/libs/binder/BpBinder.cpp; the specific code is as follows.

BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    ALOGV("Creating BpBinder %p handle %d\n", this, mHandle);
    // The lifetime of the Binder proxy object is governed by its weak reference count
    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    // Call incWeakHandle() on the current thread's IPCThreadState to increase the
    // weak reference count of the corresponding Binder reference object
    IPCThreadState::self()->incWeakHandle(handle);
}

The member function incWeakHandle() is defined in the file frameworks/native/libs/binder/IPCThreadState.cpp as follows.

void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}

7.2 Destroy the Binder proxy object

  When a Binder proxy object is destroyed, the thread calls the member function decWeakHandle() of its internal IPCThreadState object to decrease the weak reference count of the corresponding Binder reference object. The implementation of decWeakHandle() is shown below.

void IPCThreadState::decWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::decWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_DECREFS);
    mOut.writeInt32(handle);
}

The implementation of the Binder proxy object's transact() function is as follows.

/*
	The parameters are as follows:
	code:  the ID of the requested operation.
	data:  the request parameters.
	reply: the returned result.
	flags: extra flags such as FLAG_ONEWAY; usually 0.
*/
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

The above transact() function simply calls transact() on IPCThreadState::self(); the code of IPCThreadState::transact() is as follows.

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

  In the above code, the request is sent to the server through the kernel driver by transact(). When the server finishes processing the request, the result is returned to the caller along the original path. It can be seen that the request is a synchronous operation: the caller waits until the result comes back. In this way, with a simple wrapper on top of BpBinder, the client obtains the same interface as the service object, and the caller does not need to care whether the called object is remote or local.
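
As a final illustration, once a proxy such as the hypothetical IFoo from the earlier sketches has been obtained, the calling code is indistinguishable from calling a local object (the service name "foo.service" is made up for this example):

// Hypothetical client code; "foo.service" is an assumed registration name.
sp<IBinder> binder = defaultServiceManager()->getService(String16("foo.service"));
sp<IFoo> foo = interface_cast<IFoo>(binder);   // wraps the BpBinder in a BpFoo
int32_t value = foo->getValue();               // blocks until the server's reply arrives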
