Source code analysis of the Android IPC (Binder) communication mechanism

Introduction to Binder communication: 
    Linux offers several inter-process communication mechanisms: sockets, named pipes, message queues, signals, and shared memory. Java likewise provides IPC facilities such as sockets and pipes, and Android applications could in principle use these Java IPC mechanisms. Yet when reading the Android source code, these traditional IPC mechanisms are almost nowhere to be found for communication between applications on the same device; they are replaced by Binder communication. Google adopted this approach because of its high efficiency. Binder communication is implemented by the Linux binder driver, and a Binder operation resembles thread migration: an IPC between two processes looks as if one process enters the other, executes some code there, and returns with the result. In user space, Binder maintains a pool of available threads for each process; the pool handles incoming IPC requests as well as messages local to the process. Binder communication is synchronous, not asynchronous.
    Binder communication in Android is organized around Services and Clients. Every process that wants to communicate over Binder must expose an IBinder interface. One system process manages all system services, and Android does not allow users to add unauthorized system services; of course, now that the source code is open, we can modify some of it to add our own low-level system services. User programs likewise need to create a server (service) for inter-process communication. At the Java application layer, ActivityManagerService manages the creation, connection, and disconnection of all services, and all Activities are also started and loaded through it. ActivityManagerService itself is loaded inside the System Server.
    Before the Android virtual machine starts, the system first launches the service manager process. The service manager opens the binder driver, notifies the binder kernel driver that this process will act as the system service manager, and then enters a loop, waiting to handle data from other processes. After creating a system service, a user obtains a remote IServiceManager interface via defaultServiceManager and calls addService through it to register the service with the Service Manager process. A client can then call getService to obtain the IBinder object of the destination service it needs to connect to. In the binder kernel, this IBinder is a reference to the service's BBinder, so a given service never has two distinct IBinder objects in the kernel. Each client process must also open the binder driver. Once a user program holds this object, it can invoke the service object's methods through the binder kernel even though client and service live in different processes, realizing a communication style similar to migration between threads: when the user program calls a method through the IBinder interface returned by the service, it looks just like calling its own function.
The following figure is a schematic diagram of the connection between the client and the Service


We begin with the ServiceManager registration process and analyze step by step how the flow above is implemented.

Source code analysis of the ServiceManager registration process:
Service Manager process (service_manager.c):
    service_manager manages the Services of all other processes. This program must be running before the Android runtime comes up; otherwise ActivityManagerService in the Android Java VM cannot register itself.
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024); // open the /dev/binder driver

    if (binder_become_context_manager(bs)) { // register as service manager in the binder kernel
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
It first opens the binder driver, then binder_become_context_manager issues an ioctl to tell the binder kernel driver that this is the service-management process, and finally binder_loop waits for data from other processes. BINDER_SERVICE_MANAGER is the handle of the service-management process; it is defined as:
/* the one magic object */
#define BINDER_SERVICE_MANAGER ((void*) 0)
If the handle a client uses when requesting a service does not match this value, the Service Manager will not accept the client's request. How the client sets this handle is described below.

The native media services are registered from a server process (mediaserver) whose main() looks like this:
int main(int argc, char** argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();              // Audio service
    MediaPlayerService::instantiate();        // MediaPlayer service
    CameraService::instantiate();             // Camera service
    ProcessState::self()->startThreadPool();  // start the process's thread pool
    IPCThreadState::self()->joinThreadPool(); // join the calling thread to the pool
}

void CameraService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.camera"), new CameraService());
}

How the client obtains the remote IServiceManager IBinder interface:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;
    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }
    return gDefaultServiceManager;
}
The first time any process calls defaultServiceManager, gDefaultServiceManager is NULL, so the process obtains a ProcessState instance via ProcessState::self. ProcessState opens the binder driver.
sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;
    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}

In the ProcessState constructor's initializer list, the driver is opened:
    : mDriverFD(open_driver()) // open the /dev/binder driver

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);
    } else {
        return getContextObject(String16("default"), caller);
    }
}
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;
    AutoMutex _l(mLock);
    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder; // b is NULL the first time this function is called
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }
    return result;
}
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}


void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}

IServiceManager::asInterface (generated by the IMPLEMENT_META_INTERFACE macro) first checks for a local implementation and otherwise wraps the binder in a proxy:
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}

const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while(true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService==0, "no CameraService!?");
    return mCameraService;
}
From the preceding analysis, sm is a BpServiceManager object:
    virtual sp<IBinder> getService(const String16& name) const
    {
        unsigned n;
        for (n = 0; n < 5; n++){
            sp<IBinder> svc = checkService(name);
            if (svc != NULL) return svc;
            LOGI("Waiting for service %s...\n", String8(name).string());
            sleep(1);
        }
        return NULL;
    }
    virtual sp<IBinder> checkService( const String16& name) const
    {
        Parcel data, reply;
        data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
        data.writeString16(name);
        remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
        return reply.readStrongBinder();
    }
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }
    return DEAD_OBJECT;
}
mHandle is 0 here; BpBinder then calls IPCThreadState::transact to send the data to the Service Manager process associated with mHandle.
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }
    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }
    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }
    return err;
}

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle; // this handle is passed on to service_manager
    tr.code = code;
    tr.flags = binderFlags;
waitForResponse calls talkWithDriver to read from and write to the binder kernel. When the binder kernel receives the data, the thread pool of the service_manager process wakes up; service_manager looks up the CameraService and calls binder_send_reply to write the returned data back into the binder kernel.
status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
#if defined(HAVE_ANDROID_OS)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
The BINDER_WRITE_READ ioctl shown above performs the actual read and write against the binder kernel driver.
