Android Binder communication principle (2): servicemanager startup

Source code based on: Android R

0. Preface

The figure below shows the binder software framework before Android 8.0. The underlying driver device is /dev/binder, and the four elements of the binder mechanism are the client, the server, the servicemanager and the binder driver.

For binder and vndbinder after Android 8.0 this framework stays the same, but /dev/vndbinder is added as a driver device.

This article mainly analyzes the servicemanager process; hwservicemanager will be covered in a later article.

2. servicemanager generation

First look at how the bin files are generated:

frameworks/native/cmds/servicemanager/Android.bp

cc_binary {
    name: "servicemanager",
    defaults: ["servicemanager_defaults"],
    init_rc: ["servicemanager.rc"],
    srcs: ["main.cpp"],
}

cc_binary {
    name: "vndservicemanager",
    defaults: ["servicemanager_defaults"],
    init_rc: ["vndservicemanager.rc"],
    vendor: true,
    cflags: [
        "-DVENDORSERVICEMANAGER=1",
    ],
    srcs: ["main.cpp"],
}

In this directory, two bin files are compiled from the same main.cpp: servicemanager and vndservicemanager, each with its own *.rc file.

frameworks/native/cmds/servicemanager/servicemanager.rc

service servicemanager /system/bin/servicemanager
    class core animation
    user system
    group system readproc
    critical
    onrestart restart healthd
    onrestart restart zygote
    onrestart restart audioserver
    onrestart restart media
    onrestart restart surfaceflinger
    onrestart restart inputflinger
    onrestart restart drm
    onrestart restart cameraserver
    onrestart restart keystore
    onrestart restart gatekeeperd
    onrestart restart thermalservice
    writepid /dev/cpuset/system-background/tasks
    shutdown critical

frameworks/native/cmds/servicemanager/vndservicemanager.rc

service vndservicemanager /vendor/bin/vndservicemanager /dev/vndbinder
    class core
    user system
    group system readproc
    writepid /dev/cpuset/system-background/tasks
    shutdown critical

3. main() of servicemanager

frameworks/native/cmds/servicemanager/main.cpp

int main(int argc, char** argv) {
    if (argc > 2) {
        LOG(FATAL) << "usage: " << argv[0] << " [binder driver]";
    }

    // Determine from the argument whether the device is binder or vndbinder
    const char* driver = argc == 2 ? argv[1] : "/dev/binder";

    // Initialize the driver device; see Section 3.1 below for details
    sp<ProcessState> ps = ProcessState::initWithDriver(driver);

    // Tell the driver the maximum thread count; servicemanager's is set to 0
    ps->setThreadPoolMaxThreadCount(0);
    // How restricted calls are handled: as an error or as fatal; here non-oneway calls are fatal
    ps->setCallRestriction(ProcessState::CallRestriction::FATAL_IF_NOT_ONEWAY);

    // The core interfaces all live here
    sp<ServiceManager> manager = new ServiceManager(std::make_unique<Access>());

    // Add servicemanager itself as a special service
    if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {
        LOG(ERROR) << "Could not self register servicemanager";
    }

    // Save the process's context object
    IPCThreadState::self()->setTheContextObject(manager);
    // Tell the driver that this is the context manager
    ps->becomeContextManager(nullptr, nullptr);

    // Create a Looper in servicemanager to handle binder messages
    sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);

    // Tell the driver we entered the looper, start listening for driver messages,
    // and install callbacks to handle them
    BinderCallback::setupTo(looper);
    ClientCallbackCallback::setupTo(looper, manager);

    // Start the looper and poll forever; the process never exits unless an abort occurs
    while(true) {
        looper->pollAll(-1);
    }

    // should not be reached
    return EXIT_FAILURE;
}

3.1 initWithDriver()

frameworks/native/libs/binder/ProcessState.cpp

sp<ProcessState> ProcessState::initWithDriver(const char* driver)
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != nullptr) {
        // Allow for initWithDriver to be called repeatedly with the same
        // driver.
        if (!strcmp(gProcess->getDriverName().c_str(), driver)) {
            return gProcess;
        }
        LOG_ALWAYS_FATAL("ProcessState was already initialized.");
    }

    if (access(driver, R_OK) == -1) {
        ALOGE("Binder driver %s is unavailable. Using /dev/binder instead.", driver);
        driver = "/dev/binder";
    }

    gProcess = new ProcessState(driver);
    return gProcess;
}

The function's parameter is the driver device name, either /dev/binder or /dev/vndbinder.

The rest of the logic is straightforward. ProcessState manages the "process state"; each process using binder has exactly one ProcessState instance (the gProcess singleton). If the instance is not null, check whether the driver it opened matches the name of the driver that currently needs init; if the instance does not exist, create one with new.

ProcessState is covered in detail in Section 4.

3.2 new ServiceManager()

The core interfaces are all located in ServiceManager; a new Access object is passed in during construction, as follows:

frameworks/native/cmds/servicemanager/ServiceManager.cpp

ServiceManager::ServiceManager(std::unique_ptr<Access>&& access) : mAccess(std::move(access)) {
    ...
}

mAccess is move-constructed from the Access instance passed in via std::move; servicemanager uses it to check callers' permissions through SELinux.
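
Conceptually, these checks boil down to an SELinux query through libselinux. A rough sketch of the shape of such a check (the real Access.cpp additionally caches servicemanager's own context and wires in audit callbacks; mayAddService and its parameters are illustrative names):

#include <selinux/selinux.h>

// "service_manager" / "add" are the SELinux class and permission names used
// for addService-style checks; callerCtx and serviceLabel are illustrative.
static bool mayAddService(const char* callerCtx, const char* serviceLabel) {
    return selinux_check_access(callerCtx, serviceLabel,
                                "service_manager", "add",
                                nullptr /*auditdata*/) == 0;
}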

3.3 addService()

if (!manager->addService("manager", manager, false /*allowIsolated*/, IServiceManager::DUMP_FLAG_PRIORITY_DEFAULT).isOk()) {
    LOG(ERROR) << "Could not self register servicemanager";
}

Other services that register with servicemanager must reach addService() in ServiceManager over binder, through IServiceManager; here the registration is done directly on the ServiceManager object.
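
For contrast, a minimal sketch of such a remote registration from an ordinary native service (MyService and "myservice" are hypothetical names):

#include <binder/Binder.h>
#include <binder/IServiceManager.h>

using namespace android;

// Hypothetical trivial service; a real one would override onTransact().
class MyService : public BBinder {};

int main() {
    sp<IBinder> service = new MyService();
    // defaultServiceManager() returns a proxy to servicemanager; this call
    // becomes a binder transaction that lands in the method shown below.
    defaultServiceManager()->addService(String16("myservice"), service);
    return 0;
}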

frameworks/native/cmds/servicemanager/ServiceManager.cpp

Status ServiceManager::addService(const std::string& name, const sp<IBinder>& binder, bool allowIsolated, int32_t dumpPriority) {
    auto ctx = mAccess->getCallingContext();

    // App processes are not allowed to register services
    if (multiuser_get_app_id(ctx.uid) >= AID_APP) {
        return Status::fromExceptionCode(Status::EX_SECURITY);
    }

    // SELinux check: is registering as SELABEL_CTX_ANDROID_SERVICE allowed?
    if (!mAccess->canAdd(ctx, name)) {
        return Status::fromExceptionCode(Status::EX_SECURITY);
    }

    // The IBinder passed in must not be nullptr
    if (binder == nullptr) {
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }

    // The service name must be valid: characters 0-9, a-z, A-Z, underscore,
    // hyphen, dot and slash, and at most 127 characters long
    if (!isValidServiceName(name)) {
        LOG(ERROR) << "Invalid service name: " << name;
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }

    // This presumably requires ordinary (non-vendor) services to make a VINTF declaration
#ifndef VENDORSERVICEMANAGER
    if (!meetsDeclarationRequirements(binder, name)) {
        // already logged
        return Status::fromExceptionCode(Status::EX_ILLEGAL_ARGUMENT);
    }
#endif  // !VENDORSERVICEMANAGER

    // Register linkToDeath to monitor the service's status
    if (binder->remoteBinder() != nullptr && binder->linkToDeath(this) != OK) {
        LOG(ERROR) << "Could not linkToDeath when adding " << name;
        return Status::fromExceptionCode(Status::EX_ILLEGAL_STATE);
    }

    // Add it to the map
    auto entry = mNameToService.emplace(name, Service {
        .binder = binder,
        .allowIsolated = allowIsolated,
        .dumpPriority = dumpPriority,
        .debugPid = ctx.debugPid,
    });

    // If registration callbacks were registered for this name, invoke them
    auto it = mNameToRegistrationCallback.find(name);
    if (it != mNameToRegistrationCallback.end()) {
        for (const sp<IServiceCallback>& cb : it->second) {
            entry.first->second.guaranteeClient = true;
            // permission checked in registerForNotifications
            cb->onRegistration(name, binder);
        }
    }

    return Status::ok();
}

3.4 setTheContextObject()

    IPCThreadState::self()->setTheContextObject(manager);
    ps->becomeContextManager(nullptr, nullptr);

The first line creates the IPCThreadState and stores the servicemanager object in it as the context object, for use by later transactions.

The second line informs the driver of the context manager through the BINDER_SET_CONTEXT_MGR_EXT command:

frameworks/native/libs/binder/ProcessState.cpp

bool ProcessState::becomeContextManager(context_check_func checkFunc, void* userData)
{
    AutoMutex _l(mLock);
    mBinderContextCheckFunc = checkFunc;
    mBinderContextUserData = userData;

    flat_binder_object obj {
        .flags = FLAT_BINDER_FLAG_TXN_SECURITY_CTX,
    };

    int result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR_EXT, &obj);

    // fallback to original method
    if (result != 0) {
        android_errorWriteLog(0x534e4554, "121035042");

        int dummy = 0;
        result = ioctl(mDriverFD, BINDER_SET_CONTEXT_MGR, &dummy);
    }

    if (result == -1) {
        mBinderContextCheckFunc = nullptr;
        mBinderContextUserData = nullptr;
        ALOGE("Binder ioctl to become context manager failed: %s\n", strerror(errno));
    }

    return result == 0;
}

This notifies the driver to create the context_mgr_node; the following is the driver-layer code:

drivers/android/binder.c

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    ...
	case BINDER_SET_CONTEXT_MGR_EXT: {
		struct flat_binder_object fbo;

		if (copy_from_user(&fbo, ubuf, sizeof(fbo))) {
			ret = -EINVAL;
			goto err;
		}
		ret = binder_ioctl_set_ctx_mgr(filp, &fbo);
		if (ret)
			goto err;
		break;
	}
	case BINDER_SET_CONTEXT_MGR:
		ret = binder_ioctl_set_ctx_mgr(filp, NULL);
		if (ret)
			goto err;
		break;
    ...
}

The main difference between the two cmds is whether a flat_binder_object is passed; both end up calling binder_ioctl_set_ctx_mgr():

drivers/android/binder.c

static int binder_ioctl_set_ctx_mgr(struct file *filp,
				    struct flat_binder_object *fbo)
{
	int ret = 0;
	// The process's binder_proc -- here ServiceManager's, created earlier
	// when it called open("/dev/binder")
	struct binder_proc *proc = filp->private_data;
	struct binder_context *context = proc->context;
	struct binder_node *new_node;
	kuid_t curr_euid = current_euid();   // the calling thread's euid

	mutex_lock(&context->context_mgr_node_lock);
	// Normally NULL on the first call; if non-NULL a context manager has
	// already been set for this context, so bail out
	if (context->binder_context_mgr_node) {
		pr_err("BINDER_SET_CONTEXT_MGR already set\n");
		ret = -EBUSY;
		goto out;
	}

	// Check whether the current process has the SEAndroid permission to
	// register as the Context Manager
	ret = security_binder_set_context_mgr(proc->tsk);
	if (ret < 0)
		goto out;
	if (uid_valid(context->binder_context_mgr_uid)) {
		// Compare binder_context_mgr_uid with the current euid;
		// error out if they differ
		if (!uid_eq(context->binder_context_mgr_uid, curr_euid)) {
			pr_err("BINDER_SET_CONTEXT_MGR bad uid %d != %d\n",
			       from_kuid(&init_user_ns, curr_euid),
			       from_kuid(&init_user_ns,
					 context->binder_context_mgr_uid));
			ret = -EPERM;
			goto out;
		}
	} else {
		context->binder_context_mgr_uid = curr_euid;
	}

	// Create the binder_node object
	new_node = binder_new_node(proc, fbo);
	if (!new_node) {
		ret = -ENOMEM;
		goto out;
	}
	binder_node_lock(new_node);
	new_node->local_weak_refs++;
	new_node->local_strong_refs++;
	new_node->has_strong_ref = 1;
	new_node->has_weak_ref = 1;
	// Record the new node in context->binder_context_mgr_node; it becomes
	// serviceManager's binder management entity
	context->binder_context_mgr_node = new_node;
	binder_node_unlock(new_node);
	binder_put_node(new_node);
out:
	mutex_unlock(&context->context_mgr_node_lock);
	return ret;
}

The flow of binder_ioctl_set_ctx_mgr() is also fairly simple:

  • First check whether the current process has the SEAndroid permission to register as the Context Manager.
  • If the SELinux check passes, a dedicated binder_node is created for the context manager of the whole system, and its strong and weak reference counts are incremented.
  • The newly created binder_node is recorded in context->binder_context_mgr_node; it is the context binder node of the ServiceManager process, making it the binder management entity of servicemanager.

3.5 Looper::prepare()

sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);

The detailed code is not listed here; it mainly adds fd monitoring through epoll.
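
For reference, the pattern boils down to something like this (myFd and MyCallback are illustrative; see BinderCallback below for the real callback):

#include <utils/Looper.h>

using namespace android;

// Hypothetical callback; handleEvent() is invoked when the fd is readable.
class MyCallback : public LooperCallback {
    int handleEvent(int /*fd*/, int /*events*/, void* /*data*/) override {
        return 1;  // keep receiving callbacks
    }
};

void runLoop(int myFd) {  // myFd: any pollable fd
    sp<Looper> looper = Looper::prepare(false /*allowNonCallbacks*/);
    looper->addFd(myFd, Looper::POLL_CALLBACK, Looper::EVENT_INPUT,
                  new MyCallback(), nullptr /*data*/);
    while (true) {
        looper->pollAll(-1);  // blocks in epoll_wait() and dispatches handleEvent()
    }
}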

3.6 BinderCallback::setupTo(looper)

frameworks/native/cmds/servicemanager/main.cpp

class BinderCallback : public LooperCallback {
public:
    static sp<BinderCallback> setupTo(const sp<Looper>& looper) {
        sp<BinderCallback> cb = new BinderCallback;

        int binder_fd = -1;
        // Get the main thread's binder fd and queue BC_ENTER_LOOPER for the driver
        IPCThreadState::self()->setupPolling(&binder_fd);
        LOG_ALWAYS_FATAL_IF(binder_fd < 0, "Failed to setupPolling: %d", binder_fd);

        // Flush the thread's queued commands to the driver -- here that is BC_ENTER_LOOPER
        IPCThreadState::self()->flushCommands();

        // Add binder_fd to the looper's epoll set and register the callback;
        // handleEvent() will be invoked on events
        int ret = looper->addFd(binder_fd,
                                Looper::POLL_CALLBACK,
                                Looper::EVENT_INPUT,
                                cb,
                                nullptr /*data*/);
        LOG_ALWAYS_FATAL_IF(ret != 1, "Failed to add binder FD to Looper");

        return cb;
    }

    // Called back when epoll reports an event on this fd
    int handleEvent(int /* fd */, int /* events */, void* /* data */) override {
        IPCThreadState::self()->handlePolledCommands();
        return 1;  // Continue receiving callbacks.
    }
};

In fact, after an ordinary service is created it calls ProcessState::startThreadPool() to spawn a main IPC thread, which then spawns further IPCThreadState threads through IPCThreadState::joinThreadPool(). servicemanager needs no extra threads, so it simply uses a Looper on the main thread for its monitoring.

The core of each IPCThreadState is to receive and process messages from the binder driver; these operations all happen in getAndExecuteCommand(), see Section 5.4 for details.

3.7 ClientCallbackCallback::setupTo(looper, manager)

The purpose here is not entirely clear. From the code, a client can call registerClientCallback on ServiceManager before addService, so that a successful addService() triggers a callback notification.

As for ClientCallbackCallback, it sets up a timer that fires once every 5 s, which feels like a heartbeat.
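
The underlying mechanism is a timerfd registered with the Looper; a generic sketch of a 5 s tick (illustrative only, not the exact main.cpp code):

#include <sys/timerfd.h>
#include <unistd.h>
#include <cstdint>

int makeFiveSecondTimerFd() {
    int fd = timerfd_create(CLOCK_MONOTONIC, 0 /*flags*/);
    itimerspec spec{};
    spec.it_value.tv_sec = 5;     // first expiry after 5 s
    spec.it_interval.tv_sec = 5;  // then every 5 s
    timerfd_settime(fd, 0 /*flags*/, &spec, nullptr);
    return fd;  // once added to the Looper, the fd becomes readable on each expiry
}

// In the handler, read() the 8-byte expiry count so the fd is re-armed:
// uint64_t expirations; read(fd, &expirations, sizeof(expirations));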

3.8 looper->pollAll(-1)

Enter the infinite loop: pollAll(-1) blocks in epoll until a registered fd becomes ready, dispatches its callback, and loops again.

4. ProcessState class

ProcessState manages the "process state"; each process using binder has exactly one ProcessState instance (the gProcess singleton). This object is used to:

  • Initialize the driver device;
  • Record the driver's name and FD;
  • Record the upper limit on the process's thread count;
  • Record the binder's context obj;
  • Start the binder threads;

4.1 ProcessState Construction

frameworks/native/libs/binder/ProcessState.cpp

ProcessState::ProcessState(const char *driver)
    : mDriverName(String8(driver))
    , mDriverFD(open_driver(driver))
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mBinderContextCheckFunc(nullptr)
    , mBinderContextUserData(nullptr)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
    , mCallRestriction(CallRestriction::NONE)
{

// TODO(b/139016109): enforce in build system
#if defined(__ANDROID_APEX__)
    LOG_ALWAYS_FATAL("Cannot use libbinder in APEX (only system.img libbinder) since it is not stable.");
#endif

    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using %s failed: unable to mmap transaction memory.\n", mDriverName.c_str());
            close(mDriverFD);
            mDriverFD = -1;
            mDriverName.clear();
        }
    }

#ifdef __ANDROID__
    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver '%s' could not be opened.  Terminating.", driver);
#endif
}

This function mainly does the following:

In the initialization list, open_driver() is called to open the driver device, see Section 4.3 below for details;

If the driver opens successfully and mDriverFD is assigned, mmap() creates a buffer of size BINDER_VM_SIZE to receive transaction data.
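
For reference, in Android R BINDER_VM_SIZE is defined in ProcessState.cpp as just under 1 MB:

#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)

That is 1 MB minus two pages (8 KB with 4 KB pages), which matches the 0xFE000-byte mapping shown below.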

This size can be confirmed from the command line. Assuming servicemanager's PID is 510, cat /proc/510/maps shows:

748c323000-748c421000 r--p 00000000 00:1f 4                              /dev/binderfs/binder

Don't be puzzled that it is not /dev/binder; /dev/binder is just a soft link:

lrwxrwxrwx  1 root      root                  20 1970-01-01 05:43 binder -> /dev/binderfs/binder
lrwxrwxrwx  1 root      root                  22 1970-01-01 05:43 hwbinder -> /dev/binderfs/hwbinder
lrwxrwxrwx  1 root      root                  22 1970-01-01 05:43 vndbinder -> /dev/binderfs/vndbinder

4.2 ProcessState singleton

ProcessState uses the self() function to obtain the object. Because vndbinder and binder share the same code, a process that wants vndbinder must call initWithDriver() with the driver device name before the first call to self(). If self() is called first, the singleton obtained is bound to the driver device kDefaultDriver.

frameworks/native/libs/binder/ProcessState.cpp

#ifdef __ANDROID_VNDK__
const char* kDefaultDriver = "/dev/vndbinder";
#else
const char* kDefaultDriver = "/dev/binder";
#endif

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != nullptr) {
        return gProcess;
    }
    gProcess = new ProcessState(kDefaultDriver);
    return gProcess;
}
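
For example, a vendor process that wants vndbinder must pin the driver before anything in the process calls self(); a minimal sketch:

#include <binder/ProcessState.h>

using namespace android;

int main() {
    // Must run before the first ProcessState::self(); otherwise the
    // singleton is already bound to kDefaultDriver.
    sp<ProcessState> ps = ProcessState::initWithDriver("/dev/vndbinder");
    // From here on, self() returns the same vndbinder-backed instance.
    return 0;
}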

4.3 open_driver()

For binder and vndbinder devices, when ProcessState is constructed, open_driver() will be called in the initialization list to open and initialize the device.

frameworks/native/libs/binder/ProcessState.cpp

static int open_driver(const char *driver)
{
    // Open the device via the open() syscall
    int fd = open(driver, O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;

        // If open succeeded, query whether the binder version matches
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }

        // If the version does not match, why is there still no early return here?
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
          ALOGE("Binder driver protocol(%d) does not match user space protocol(%d)! ioctl() return value: %d",
                vers, BINDER_CURRENT_PROTOCOL_VERSION, result);
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        // If the binder version matches, tell the driver the maximum thread count
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '%s' failed: %s\n", driver, strerror(errno));
    }
    return fd;
}

Note that the default maximum number of binder threads per process is DEFAULT_MAX_BINDER_THREADS:

frameworks/native/libs/binder/ProcessState.cpp

#define DEFAULT_MAX_BINDER_THREADS 15

For servicemanager, the main function sets it to 0, meaning servicemanager handles binder work on its main thread directly; ordinary services keep the limit of 15 binder threads. The driver-side handling will be analyzed in detail later.

As a side note, the system_server process raises the maximum number of binder threads to 31:

frameworks/base/services/java/com/android/server/SystemServer.java

    private static final int sMaxBinderThreads = 31;

    private void run() {
            ...
            BinderInternal.setMaxThreads(sMaxBinderThreads);
            ...
    }

4.4 makeBinderThreadName()

frameworks/native/libs/binder/ProcessState.cpp

String8 ProcessState::makeBinderThreadName() {
    int32_t s = android_atomic_add(1, &mThreadPoolSeq);
    pid_t pid = getpid();
    String8 name;
    name.appendFormat("Binder:%d_%X", pid, s);
    return name;
}

This function generates binder thread names. The sequence number comes from mThreadPoolSeq; the resulting name looks like Binder:1234_F.

The maximum number of threads is bounded by DEFAULT_MAX_BINDER_THREADS (15 by default), as mentioned above.

Of course, the maximum can also be set through the setThreadPoolMaxThreadCount() function; servicemanager uses it to set max threads to 0. See Section 3 for details.

4.4.1 setThreadPoolMaxThreadCount()

frameworks/native/libs/binder/ProcessState.cpp

status_t ProcessState::setThreadPoolMaxThreadCount(size_t maxThreads) {
    status_t result = NO_ERROR;
    if (ioctl(mDriverFD, BINDER_SET_MAX_THREADS, &maxThreads) != -1) {
        mMaxThreads = maxThreads;
    } else {
        result = -errno;
        ALOGE("Binder ioctl to set max threads failed: %s", strerror(-result));
    }
    return result;
}

As explained in Section 4.3, when ProcessState is constructed, open_driver() notifies the binder driver of the default maximum number of binder threads, which is 15.

A process can also set its own binder thread MAX through ProcessState; for example, system_server sets it to 31 and servicemanager sets it to 0.
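
The count is conventionally configured before the thread pool starts. A minimal sketch of a typical native daemon startup (the value 4 is arbitrary):

#include <binder/IPCThreadState.h>
#include <binder/ProcessState.h>

using namespace android;

int main() {
    ProcessState::self()->setThreadPoolMaxThreadCount(4);  // tell the driver the cap
    ProcessState::self()->startThreadPool();               // spawn the main binder thread
    IPCThreadState::self()->joinThreadPool();              // lend this thread to the pool too
    return 0;
}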

4.5 startThreadPool()

frameworks/native/libs/binder/ProcessState.cpp

void ProcessState::startThreadPool()
{
    AutoMutex _l(mLock);
    if (!mThreadPoolStarted) {
        mThreadPoolStarted = true;
        spawnPooledThread(true);
    }
}

Every process that communicates over binder needs to call this function.

It can be said to mark the beginning of binder communication and is an unavoidable step, for roughly two reasons:

  • Each process's binder communication keeps a ProcessState singleton, guarded by the mThreadPoolStarted flag here. Any later binder thread spawn must go through spawnPooledThread(), whose precondition is that mThreadPoolStarted is true;
  • From the binder driver's point of view, each process needs one main binder thread; all other binder threads are non-main (see the abridged joinThreadPool() after this list);
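
For reference, joinThreadPool() is where the main/non-main distinction is reported to the driver; abridged from IPCThreadState.cpp (error and timeout handling elided):

void IPCThreadState::joinThreadPool(bool isMain)
{
    // Main threads announce BC_ENTER_LOOPER; driver-requested ones BC_REGISTER_LOOPER
    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        processPendingDerefs();
        // Block in the driver, then handle one returned command
        result = getAndExecuteCommand();
        ...
    } while (result != -ECONNREFUSED && result != -EBADF);

    mOut.writeInt32(BC_EXIT_LOOPER);
    talkWithDriver(false);
}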

4.6 spawnPooledThread()

frameworks/native/libs/binder/ProcessState.cpp

void ProcessState::spawnPooledThread(bool isMain)
{
    if (mThreadPoolStarted) {
        String8 name = makeBinderThreadName();
        ALOGV("Spawning new pooled thread, name=%s\n", name.string());
        sp<Thread> t = new PoolThread(isMain);
        t->run(name.string());
    }
}

This spawns a new binder thread. PoolThread inherits from Thread and carries the binder thread name when it runs, so the number of binder threads can be seen when dumping thread stacks. Each binder thread is managed by an IPCThreadState:

frameworks/native/libs/binder/ProcessState.cpp

class PoolThread : public Thread
{
public:
    explicit PoolThread(bool isMain)
        : mIsMain(isMain)
    {
    }

protected:
    virtual bool threadLoop()
    {
        IPCThreadState::self()->joinThreadPool(mIsMain);
        return false;
    }

    const bool mIsMain;
};

5. IPCThreadState class

While ProcessState records the per-process state, each of the process's many threads has its own "thread state" to record. After every BINDER_WRITE_READ call, the driver decides whether more threads should be spawned; every PoolThread created (see ProcessState above) is paired with and managed by an IPCThreadState, and all operations on a binder thread go through its IPCThreadState.

5.1 IPCThreadState Construction

frameworks/native/libs/binder/IPCThreadState.cpp

IPCThreadState::IPCThreadState()
    : mProcess(ProcessState::self()),
      mServingStackPointer(nullptr),
      mWorkSource(kUnsetWorkSource),
      mPropagateWorkSource(false),
      mStrictModePolicy(0),
      mLastTransactionBinderFlags(0),
      mCallRestriction(mProcess->mCallRestriction)
{
    pthread_setspecific(gTLS, this);
    clearCaller();
    mIn.setDataCapacity(256);
    mOut.setDataCapacity(256);
}

5.2 self()

frameworks/native/libs/binder/IPCThreadState.cpp

IPCThreadState* IPCThreadState::self()
{
    if (gHaveTLS.load(std::memory_order_acquire)) {
restart:
        const pthread_key_t k = gTLS;
        IPCThreadState* st = (IPCThreadState*)pthread_getspecific(k);
        if (st) return st;
        return new IPCThreadState;
    }

    // Racey, heuristic test for simultaneous shutdown.
    if (gShutdown.load(std::memory_order_relaxed)) {
        ALOGW("Calling IPCThreadState::self() during shutdown is dangerous, expect a crash.\n");
        return nullptr;
    }

    pthread_mutex_lock(&gTLSMutex);
    if (!gHaveTLS.load(std::memory_order_relaxed)) {
        int key_create_value = pthread_key_create(&gTLS, threadDestructor);
        if (key_create_value != 0) {
            pthread_mutex_unlock(&gTLSMutex);
            ALOGW("IPCThreadState::self() unable to create TLS key, expect a crash: %s\n",
                    strerror(key_create_value));
            return nullptr;
        }
        gHaveTLS.store(true, std::memory_order_release);
    }
    pthread_mutex_unlock(&gTLSMutex);
    goto restart;
}

When a thread first calls self(), pthread_getspecific() checks whether an IPCThreadState already exists in its TLS; if so it is returned directly, otherwise a new one is created.
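
The same pattern in a generic form (not Android code), for readers unfamiliar with pthread TLS:

#include <pthread.h>

static pthread_key_t gKey;
static pthread_once_t gOnce = PTHREAD_ONCE_INIT;

struct ThreadState { int callCount = 0; };  // stand-in for IPCThreadState

static void destroyState(void* p) { delete static_cast<ThreadState*>(p); }
static void makeKey() { pthread_key_create(&gKey, destroyState); }

ThreadState* selfState() {
    pthread_once(&gOnce, makeKey);          // create the key exactly once
    if (auto* st = static_cast<ThreadState*>(pthread_getspecific(gKey)))
        return st;                          // this thread already has one
    auto* st = new ThreadState;             // first call on this thread
    pthread_setspecific(gKey, st);          // IPCThreadState does this in its ctor
    return st;
}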

5.3 setupPolling()

frameworks/native/libs/binder/IPCThreadState.cpp

int IPCThreadState::setupPolling(int* fd)
{
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }

    mOut.writeInt32(BC_ENTER_LOOPER);
    *fd = mProcess->mDriverFD;
    return 0;
}

This mainly does two things: queue BC_ENTER_LOOPER in mOut (flushed to the driver later, telling it the thread entered the looper) and return the driver fd.

5.4 getAndExecuteCommand()

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::getAndExecuteCommand()
{
    status_t result;
    int32_t cmd;

    // step 1: talk to the binder driver and wait for it to return
    result = talkWithDriver();
    if (result >= NO_ERROR) {
        size_t IN = mIn.dataAvail();
        if (IN < sizeof(int32_t)) return result;

        // step 2: parse the reply command from the binder driver
        cmd = mIn.readInt32();

        // step 3: track the number of executing binder threads
        // system_server feeds a watchdog with this; when the executing count reaches
        // the maximum, the monitor blocks until enough threads are free
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount++;
        if (mProcess->mExecutingThreadsCount >= mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs == 0) {
            mProcess->mStarvationStartTimeMs = uptimeMillis();
        }
        pthread_mutex_unlock(&mProcess->mThreadCountLock);

        // step 4: the userspace core handler -- dispatch on the reply command
        result = executeCommand(cmd);

        // step 5: after executeCommand() each thread decrements the count and
        // broadcasts the condition variable
        pthread_mutex_lock(&mProcess->mThreadCountLock);
        mProcess->mExecutingThreadsCount--;
        if (mProcess->mExecutingThreadsCount < mProcess->mMaxThreads &&
                mProcess->mStarvationStartTimeMs != 0) {
            int64_t starvationTimeMs = uptimeMillis() - mProcess->mStarvationStartTimeMs;
            if (starvationTimeMs > 100) {
                ALOGE("binder thread pool (%zu threads) starved for %" PRId64 " ms",
                      mProcess->mMaxThreads, starvationTimeMs);
            }
            mProcess->mStarvationStartTimeMs = 0;
        }
        pthread_cond_broadcast(&mProcess->mThreadCountDecrement);
        pthread_mutex_unlock(&mProcess->mThreadCountLock);
    }

    return result;
}

The logic here is fairly simple and consists of three parts:

  • talkWithDriver() interacts with the binder driver and checks the return value for errors;
  • the executing-thread count is tracked; system_server uses it to feed its watchdog;
  • executeCommand() does the core processing;

The processing logic of the two key functions is relatively complex, so let’s briefly analyze them below.

5.4.1 talkWithDriver()

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD < 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    ...

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;

        if (mProcess->mDriverFD < 0) {
            err = -EBADF;
        }
    } while (err == -EINTR);

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                LOG_ALWAYS_FATAL(...);
            else {
                mOut.setDataSize(0);
                processPostWriteDerefs();
            }
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }

        return NO_ERROR;
    }

    return err;
}

IPCThreadState keeps two buffers: mIn and mOut. mIn holds data read from the driver; mOut holds data to be written to the driver.

The core is the do...while loop, which talks to the driver through the BINDER_WRITE_READ ioctl; unless the ioctl is interrupted (EINTR), the loop returns after one pass.

The driver-side handling of BINDER_WRITE_READ will be analyzed in detail later.
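
Stripped of the Parcel bookkeeping, the interaction is a single ioctl on a binder_write_read struct; a minimal sketch (error handling omitted, assumes the UAPI header):

#include <sys/ioctl.h>
#include <linux/android/binder.h>

void rawWriteRead(int binderFd, void* out, binder_size_t outSize,
                  void* in, binder_size_t inCapacity) {
    struct binder_write_read bwr = {};
    bwr.write_buffer = (binder_uintptr_t)out;  // BC_* commands for the driver
    bwr.write_size   = outSize;
    bwr.read_buffer  = (binder_uintptr_t)in;   // filled with BR_* replies
    bwr.read_size    = inCapacity;
    // One syscall both sends the queued commands and blocks for incoming work
    ioctl(binderFd, BINDER_WRITE_READ, &bwr);
    // bwr.write_consumed / bwr.read_consumed report how much was processed
}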

5.4.2 executeCommand()

This is the core processing of a binder thread, handling the result after talkWithDriver():

frameworks/native/libs/binder/IPCThreadState.cpp

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch ((uint32_t)cmd) {
    case BR_ERROR:
        result = mIn.readInt32();
        break;

    case BR_OK:
        break;

    case BR_ACQUIRE:
        ...
        break;

    case BR_RELEASE:
        ...
        break;

    case BR_INCREFS:
        ...
        break;

    case BR_DECREFS:
        ...
        break;

    case BR_ATTEMPT_ACQUIRE:
        ...
        break;

    case BR_TRANSACTION_SEC_CTX:
    case BR_TRANSACTION:
        ...
        break;

    case BR_DEAD_BINDER:
        ...

    case BR_CLEAR_DEATH_NOTIFICATION_DONE:
        ...

    case BR_FINISHED:
        result = TIMED_OUT;
        break;

    case BR_NOOP:
        break;

    case BR_SPAWN_LOOPER:
        mProcess->spawnPooledThread(false);
        break;

    default:
        ALOGE("*** BAD COMMAND %d received from Binder driver\n", cmd);
        result = UNKNOWN_ERROR;
        break;
    }

    if (result != NO_ERROR) {
        mLastError = result;
    }

    return result;
}

The analysis stops here for now; it will be covered in detail in the later native client/server articles.

So far the startup process of servicemanager has been sorted out; the basic flow is:

  • According to the command-line argument, open the device binder or vndbinder;
  • Open and initialize the driver device through ProcessState::initWithDriver(), and set the process's maximum binder thread count to 0;
  • Instantiate ServiceManager and register it into mNameToService as a special service;
  • Store the special context obj in IPCThreadState, and notify the driver of the context mgr through ProcessState;
  • Through BinderCallback, notify the driver that servicemanager is ready by sending BC_ENTER_LOOPER;
  • Add the driver device fd to the Looper's epoll monitoring, with handleEvent() as the callback;
  • In handleEvent(), process the polled commands and handle all messages;

Origin: blog.csdn.net/jingerppp/article/details/131327393