Get ServiceManager
hongxi.zhu 2023-7-1
Taking SurfaceFlinger as an example, this article analyzes how a client process obtains the ServiceManager proxy object.
Main flow
Obtaining the SM service in SurfaceFlinger
frameworks/native/services/surfaceflinger/main_surfaceflinger.cpp
// publish surface flinger
sp<IServiceManager> sm(defaultServiceManager());
sm->addService(String16(SurfaceFlinger::getServiceName()), flinger, false,
IServiceManager::DUMP_FLAG_PRIORITY_CRITICAL | IServiceManager::DUMP_FLAG_PROTO);
In SurfaceFlinger's main() function, the first step is to obtain the IServiceManager object (which is actually a BpServiceManager proxy), and then call BpServiceManager->addService() to register the two SurfaceFlinger-related services. Now let's look at how the IServiceManager object is obtained: as the code above shows, it comes from defaultServiceManager().
defaultServiceManager()
frameworks/native/libs/binder/IServiceManager.cpp
using AidlServiceManager = android::os::IServiceManager;
...
[[clang::no_destroy]] static std::once_flag gSmOnce;
[[clang::no_destroy]] static sp<IServiceManager> gDefaultServiceManager;
sp<IServiceManager> defaultServiceManager()
{
std::call_once(gSmOnce, []() {
sp<AidlServiceManager> sm = nullptr;
while (sm == nullptr) {
sm = interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr));
if (sm == nullptr) {
ALOGE("Waiting 1s on context object on %s.", ProcessState::self()->getDriverName().c_str());
sleep(1); //loop and wait until SM is ready
}
}
gDefaultServiceManager = sp<ServiceManagerShim>::make(sm);
});
return gDefaultServiceManager;
}
This method is implemented in libbinder's IServiceManager.cpp. It is a call_once-guarded singleton: it obtains an IServiceManager object via interface_cast<AidlServiceManager>(ProcessState::self()->getContextObject(nullptr)), then constructs a ServiceManagerShim (a subclass of IServiceManager) wrapping it and returns that object for the client to use. (ServiceManagerShim is an intermediate class introduced when Google converted ServiceManager to AIDL; it is responsible for SM-specific functionality.)
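The call_once singleton pattern used by defaultServiceManager() can be modeled with a small stand-alone sketch. FakeServiceManager and fakeDefaultServiceManager are illustrative names, not libbinder APIs:

```cpp
#include <memory>
#include <mutex>
#include <string>

// Toy model of the call_once singleton in defaultServiceManager().
struct FakeServiceManager {
    std::string name = "servicemanager";
};

static std::once_flag gSmOnce;
static std::shared_ptr<FakeServiceManager> gDefaultServiceManager;

std::shared_ptr<FakeServiceManager> fakeDefaultServiceManager() {
    std::call_once(gSmOnce, [] {
        // In libbinder this is where the process loops, sleeping 1s at a
        // time, until the ServiceManager context object becomes available.
        gDefaultServiceManager = std::make_shared<FakeServiceManager>();
    });
    return gDefaultServiceManager;  // every caller gets the same instance
}
```

The lambda body runs exactly once per process, so even concurrent first callers all end up sharing one proxy object.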
ProcessState::self()
frameworks/native/libs/binder/ProcessState.cpp
sp<ProcessState> ProcessState::self()
{
return init(kDefaultDriver, false /*requireDefault*/); //kDefaultDriver = "/dev/binder";
}
sp<ProcessState> ProcessState::init(const char *driver, bool requireDefault)
{
...
[[clang::no_destroy]] static std::once_flag gProcessOnce;
std::call_once(gProcessOnce, [&](){
//call_once singleton: ensures each process has only one ProcessState
if (access(driver, R_OK) == -1) {
//check whether the binder device node is readable
ALOGE("Binder driver %s is unavailable. Using /dev/binder instead.", driver);
driver = "/dev/binder";
}
...
std::lock_guard<std::mutex> l(gProcessMutex);
gProcess = sp<ProcessState>::make(driver); //construct the ProcessState instance
});
...
return gProcess;
}
ProcessState::self()
obtains the process-unique ProcessState object via a singleton; on first use, the constructor runs.
#define BINDER_VM_SIZE ((1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2)
#define DEFAULT_MAX_BINDER_THREADS 15
#define DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION 1
...
ProcessState::ProcessState(const char* driver)
: mDriverName(String8(driver)),
mDriverFD(-1),
mVMStart(MAP_FAILED),
mThreadCountLock(PTHREAD_MUTEX_INITIALIZER),
mThreadCountDecrement(PTHREAD_COND_INITIALIZER),
mExecutingThreadsCount(0),
mWaitingForThreads(0),
mMaxThreads(DEFAULT_MAX_BINDER_THREADS),
mStarvationStartTimeMs(0),
mForked(false),
mThreadPoolStarted(false),
mThreadPoolSeq(1),
mCallRestriction(CallRestriction::NONE) {
base::Result<int> opened = open_driver(driver);
if (opened.ok()) {
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE,
opened.value(), 0);
if (mVMStart == MAP_FAILED) {
close(opened.value());
// *sigh*
opened = base::Error()
<< "Using " << driver << " failed: unable to mmap transaction memory."; //out of memory
mDriverName.clear();
}
}
...
}
The two most important things the constructor does are calling open_driver() and mmap().
open_driver
static base::Result<int> open_driver(const char* driver) {
int fd = open(driver, O_RDWR | O_CLOEXEC); //open the binder node via open() and obtain the binder device driver fd
...
int vers = 0;
status_t result = ioctl(fd, BINDER_VERSION, &vers); //query the binder driver's version via ioctl; the driver's binder version must match the user-space binder protocol version, otherwise they cannot work together
...
size_t maxThreads = DEFAULT_MAX_BINDER_THREADS; //DEFAULT_MAX_BINDER_THREADS = 15
result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads); //tell the binder driver via ioctl the maximum number of binder worker threads this process supports; the default is 15 + 1 = 16 (including the main thread)
...
uint32_t enable = DEFAULT_ENABLE_ONEWAY_SPAM_DETECTION;
result = ioctl(fd, BINDER_ENABLE_ONEWAY_SPAM_DETECTION, &enable); //enable detection of oneway spam requests (similar to junk-mail detection)
...
return fd; //return the driver fd
}
open_driver
mainly does four things:
- Opens the binder device node via the open() system call and obtains the device driver fd
- Queries the driver's binder version via the ioctl() system call
- Tells the driver the maximum number of binder threads the user process supports via ioctl() (the default is 15 + 1; for the SystemServer process it is 31 + 1)
- Enables oneway spam detection via ioctl()
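The open step can be illustrated with a runnable sketch. Since the real BINDER_* ioctls require the binder driver, /dev/null stands in for /dev/binder here, and open_driver_like/has_cloexec are hypothetical helper names:

```cpp
#include <fcntl.h>
#include <unistd.h>

// Sketch of the open step of open_driver(). O_CLOEXEC ensures the
// driver fd is not leaked across exec() into child processes.
int open_driver_like(const char* node) {
    int fd = open(node, O_RDWR | O_CLOEXEC);
    return fd;  // -1 on failure, as in open_driver()
}

// Verify the close-on-exec flag really is set on the fd.
bool has_cloexec(int fd) {
    return (fcntl(fd, F_GETFD) & FD_CLOEXEC) != 0;
}
```

After this, the real code issues the BINDER_VERSION, BINDER_SET_MAX_THREADS, and BINDER_ENABLE_ONEWAY_SPAM_DETECTION ioctls against the returned fd.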
mmap
...
if (opened.ok()) {
// mmap the binder, providing a chunk of virtual address space to receive transactions.
mVMStart = mmap(nullptr, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE,
opened.value(), 0); //BINDER_VM_SIZE = 1MB-8KB
if (mVMStart == MAP_FAILED) {
close(opened.value());
// *sigh*
opened = base::Error()
<< "Using " << driver << " failed: unable to mmap transaction memory."; //out of memory (no suitable contiguous region available)
mDriverName.clear();
}
}
...
The ProcessState constructor then calls mmap to map a chunk of the current process's virtual memory (a virtual address range allocated by the kernel, within the process's address space). This system call is ultimately implemented in the binder driver: in the kernel, binder_mmap() also reserves a kernel-space region and points it at physical pages. Note that this buffer is only used when this process acts as a server receiving binder messages; no I/O copy occurs during this mapping.
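The size and flags of this mapping can be reproduced in a minimal sketch. MAP_ANONYMOUS replaces the real binder fd so the example runs without the driver, and map_transaction_buffer is an illustrative name:

```cpp
#include <sys/mman.h>
#include <unistd.h>
#include <cstddef>

// Sketch of the ProcessState mapping: reserve a read-only region of
// BINDER_VM_SIZE = 1MB - 2 pages, the same size the real code maps
// against the binder fd to receive transactions.
void* map_transaction_buffer(size_t* out_size) {
    size_t size = (1 * 1024 * 1024) - sysconf(_SC_PAGE_SIZE) * 2;
    void* start = mmap(nullptr, size, PROT_READ,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
    if (out_size) *out_size = size;
    return start;  // MAP_FAILED on error
}
```

PROT_READ matters: the process only ever reads transactions out of this buffer; the binder driver is the one that writes into the backing pages.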
Returning to the analysis above (we are in the middle of registering a service): ProcessState::self() obtains the ProcessState object, and its getContextObject() method is then called.
getContextObject(nullptr)
sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
sp<IBinder> context = getStrongProxyForHandle(0); //BpServiceManager's handle is special and fixed (handle = 0)
...
return context;
}
getStrongProxyForHandle(0)
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
sp<IBinder> result;
AutoMutex _l(mLock);
handle_entry* e = lookupHandleLocked(handle); //look up the handle_entry for this handle
if (e != nullptr) {
IBinder* b = e->binder; //handle_entry stores the BpBinder object for this handle
if (b == nullptr || !e->refs->attemptIncWeak(this)) {
if (handle == 0) {
//ServiceManager is special: its handle is fixed at 0
IPCThreadState* ipc = IPCThreadState::self(); //get the per-thread IPCThreadState instance, which performs the actual cross-process communication
CallRestriction originalCallRestriction = ipc->getCallRestriction(); //permission check
ipc->setCallRestriction(CallRestriction::NONE);
Parcel data;
status_t status = ipc->transact(
0, IBinder::PING_TRANSACTION, data, nullptr, 0); //send a PING_TRANSACTION to handle 0 to check that binder is working; the peer's BBinder handles the request and returns NO_ERROR
ipc->setCallRestriction(originalCallRestriction);
if (status == DEAD_OBJECT)
return nullptr;
}
sp<BpBinder> b = BpBinder::PrivateAccessor::create(handle); //create the BpBinder for this handle
e->binder = b.get();
if (b) e->refs = b->getWeakRefs();
result = b;
} else {
...
}
}
return result;
}
This looks up the BpBinder object corresponding to handle 0. On the first call, IPCThreadState::transact() first sends a PING_TRANSACTION through the binder driver to the peer process to check whether the binder link to the SM side is working; it then creates the BpBinder(0) corresponding to handle 0 and saves it into the handle_entry. What is actually returned is new BpBinder(0).
lookupHandleLocked(0)
ProcessState::handle_entry* ProcessState::lookupHandleLocked(int32_t handle)
{
const size_t N=mHandleToObject.size();
if (N <= (size_t)handle) {
//if handle >= mHandleToObject.size(), create and insert handle+1-N new handle_entry items. On the first call N = 0 and handle = 0, so handle 0 is always the first element. This also shows that, apart from handle 0, the handle value for a given BpBinder is not necessarily the same across processes
handle_entry e;
e.binder = nullptr; //binder is nullptr when first created
e.refs = nullptr;
status_t err = mHandleToObject.insertAt(e, N, handle+1-N);
if (err < NO_ERROR) return nullptr;
}
return &mHandleToObject.editItemAt(handle); //return the address of the corresponding element
}
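The lazy growth of mHandleToObject can be modeled with a small stand-alone sketch. HandleTable/HandleEntry are illustrative names; AOSP uses Vector<handle_entry>::insertAt rather than std::vector::resize, but the behavior is the same:

```cpp
#include <vector>
#include <cstdint>
#include <cstddef>

// Toy model of ProcessState::mHandleToObject / lookupHandleLocked():
// a vector indexed by handle, grown lazily with empty entries the
// first time a handle is looked up.
struct HandleEntry {
    void* binder = nullptr;  // would be IBinder* in libbinder
};

class HandleTable {
public:
    HandleEntry* lookup(int32_t handle) {
        size_t n = mEntries.size();
        if (n <= static_cast<size_t>(handle)) {
            // equivalent to inserting handle+1-N default entries
            mEntries.resize(handle + 1);
        }
        return &mEntries[handle];
    }
private:
    std::vector<HandleEntry> mEntries;
};
```

The first lookup of any handle yields an entry whose binder is null, which is exactly the condition getStrongProxyForHandle() uses to decide it must create a new BpBinder.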
handle_entry
is a structure that records the handle-to-BpBinder mapping within the user process: mHandleToObject can be indexed by handle to look up the corresponding BpBinder. So what ProcessState::self()->getContextObject(nullptr) actually returns is new BpBinder(0), and interface_cast<android::os::IServiceManager> then converts this BpBinder object into the interface implementation object, BpServiceManager.
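The final interface_cast conversion can be sketched with toy types. IServiceManagerLike/BpServiceManagerLike are illustrative; in libbinder, asInterface() is generated by the AIDL/IMPLEMENT_META_INTERFACE machinery, and interface_cast simply forwards to it:

```cpp
#include <memory>
#include <cstdint>

// Toy IBinder hierarchy standing in for libbinder's.
struct IBinder { virtual ~IBinder() = default; };
struct BpBinder : IBinder {
    int32_t handle;
    explicit BpBinder(int32_t h) : handle(h) {}
};

struct IServiceManagerLike {
    virtual ~IServiceManagerLike() = default;
    // In libbinder, asInterface() is generated; it wraps a remote
    // IBinder in the Bp proxy class (or returns the local object).
    static std::shared_ptr<IServiceManagerLike> asInterface(
            const std::shared_ptr<IBinder>& b);
};

struct BpServiceManagerLike : IServiceManagerLike {
    std::shared_ptr<IBinder> remote;  // the underlying BpBinder
    explicit BpServiceManagerLike(std::shared_ptr<IBinder> b)
        : remote(std::move(b)) {}
};

std::shared_ptr<IServiceManagerLike> IServiceManagerLike::asInterface(
        const std::shared_ptr<IBinder>& b) {
    return b ? std::make_shared<BpServiceManagerLike>(b) : nullptr;
}

// Same shape as libbinder's interface_cast template.
template <typename INTERFACE>
std::shared_ptr<INTERFACE> interface_cast(const std::shared_ptr<IBinder>& obj) {
    return INTERFACE::asInterface(obj);
}
```

So interface_cast<IServiceManager>(getContextObject(nullptr)) takes the BpBinder(0) and hands back a BpServiceManager whose transactions all travel through handle 0.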