Reading the Android System Source Code: The Binder IPC Mechanism

The Binder Inter-Process Communication Mechanism

Linux Memory Management

Segmented Memory Layout

A process is one running instance of a program. The program's executable code, the resources it uses (images, audio, video, and so on), and the user's input all occupy memory. Depending on how the data is used, the program stores it in different segments; every program has five of them: the code segment, the data segment, the BSS segment, the heap, and the stack (a small sketch follows the list):

1) Code segment: the code segment holds the executable's instructions; it is the in-memory image of the executable. Because the instructions must not be modified at run time, the segment allows reads only and forbids writes: it is not writable.

2) Data segment: the data segment holds the executable's initialized global variables, in other words the program's statically allocated variables and globals.

3) BSS segment: the BSS segment contains the program's uninitialized global variables; in memory the whole BSS segment is filled with zeros.

4) Heap: the heap is the segment holding memory that the process allocates dynamically while it runs.

5) Stack: the stack holds the temporary local variables the program creates, that is, the variables defined inside a function's braces "{}".
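
To make the list concrete, here is a tiny C++ sketch annotating where each kind of variable typically lives. The exact layout is decided by the compiler, linker, and loader, so treat the comments as the usual case rather than a guarantee.

int initialized_global = 42;   // data segment: initialized globals and statics
int uninitialized_global;      // BSS segment: zero-filled when the program is loaded
// the machine code of main() itself sits in the read-only code (text) segment

int main() {
    int local_variable = 7;                   // stack: locals defined inside "{}"
    int* dynamic = new int(local_variable);   // heap: memory allocated at run time
    int sum = *dynamic + initialized_global + uninitialized_global;
    delete dynamic;
    return sum;
}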

Reference: http://www.cnblogs.com/clover-toeic/p/3754433.html

User Space and Kernel Space

When a process starts, the system gives it a 4GB virtual address space. A 32-bit system has only 4GB of physical address space, and the processes running on it at the same time obviously cannot each monopolize all of it, so the 4GB virtual address space is really a 4GB blueprint the system hands to the process: the process can lay things out on the blueprint however it likes, and whenever physical memory is actually used the operating system translates virtual addresses to physical addresses through address mapping.

As described above, a program's data is stored in segments. When the program is executed, i.e. when the process starts, the addresses of its segments are mapped into the process's virtual address space; in other words, segment addresses are mapped to linear addresses.

Because the operating system itself and the various drivers also need memory to run, the 4GB virtual address space is divided into kernel space and user space, and the logical addresses a user process works with actually belong to the user-space portion.

There is plenty of material online about how the Linux operating system allocates memory; here I only record my own understanding with an analogy. The system's memory is like a warehouse that a merchant rents out. Every tenant insists on renting the whole warehouse, yet no tenant can actually fill the whole floor (they never have that many goods). To earn more money (that is, to improve resource utilization), the merchant rents the warehouse to several tenants at once, tells each of them "the entire warehouse is yours", and hands each one a floor plan: you may plan your storage on paper, but you may not enter the warehouse; I will move the goods for you. The merchant then quietly divides the floor into sections and maps each tenant's plan onto a different section (on the assumption that the tenants never fill the whole floor at the same time). Since the merchant's own work also needs space (forklifts, fire-prevention gear, a warehouse keeper, and so on), he further tells every tenant: although the whole warehouse is "rented" to you, this part is off limits; I keep it for warehouse management. Now all the necessary concepts are in place: the real warehouse is physical memory, the floor plan each tenant gets is the virtual address space, the merchant's management area is kernel space, and everything else is user space.

Kernel Mode and User Mode

Linux Inter-Process Communication Mechanisms

Because each process has its own virtual address space, independently mapped onto physical memory, address addr1 in process A and the numerically identical address addr2 in process B do not point to the same location, so process A cannot simply operate on process B's virtual addresses and expect correct results. (Threads inside one process, by contrast, share the address space and a single addressing scheme, so different threads can manipulate the same object through the same address.) Two processes therefore need a communication mechanism to exchange data.

The main IPC mechanisms are pipes, semaphores, signals, message queues, shared memory, and sockets.

Pipes, semaphores, and message queues all rely on kernel space (I am not entirely clear on how signals are implemented). Shared memory maps a block of each process's virtual address space onto the same physical memory. Sockets are generally used for IPC across machines and are comparatively inefficient. A minimal shared-memory sketch follows.
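
As a concrete illustration of the shared-memory approach, here is a minimal POSIX sketch (the object name "/ipc_demo" and the one-page size are arbitrary choices for this example; link with -lrt on older glibc). Parent and child each get their own virtual address for the mapping, but both addresses resolve to the same physical page, so the parent immediately sees what the child wrote.

#include <fcntl.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>
#include <cstdio>
#include <cstring>

int main() {
    // Create a named shared-memory object and size it to one page.
    int fd = shm_open("/ipc_demo", O_CREAT | O_RDWR, 0600);
    ftruncate(fd, 4096);

    // Map it; the returned virtual address differs per process,
    // but the backing physical memory is shared.
    char* shared = static_cast<char*>(
            mmap(nullptr, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0));

    if (fork() == 0) {                 // child process
        strcpy(shared, "hello from the child");
        _exit(0);
    }

    wait(nullptr);                     // parent waits, then reads the child's write
    printf("parent sees: %s\n", shared);

    munmap(shared, 4096);
    shm_unlink("/ipc_demo");
    return 0;
}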

Android performs IPC with the Binder mechanism in the form of RPC: the client obtains a reference to the server's Binder object, invokes the server's methods through it, and receives the return values.

The Binder Communication Mechanism

The Android System's C/S Architecture

In the Android architecture diagram, the framework layer provides fundamental services such as ActivityManager, PackageManager, and WindowManager, on which applications depend at run time. As a whole, Android uses a client/server architecture: the processes hosting those services act as the server side, and app processes act as the client side. When the system boots it starts the SystemServer process, which launches the various services and uses SystemServiceManager to register and manage them. This resembles the network C/S structure we are familiar with, except that instead of sockets, the server and client processes communicate through the Binder mechanism.

Principle


Binder IPC takes the form of remote procedure call (RPC) and is implemented with proxies: the server and the client implement the same service interface. The server holds the actual Binder object (the Binder entity) and implements the interface's functionality, while the client obtains a Binder reference through ServiceManager and invokes the server's methods through a proxy.
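
Below is a deliberately simplified, self-contained C++ sketch of that proxy structure. The names ICalc, CalcService, and CalcProxy, and the in-process transport lambda, are invented for illustration; in the real mechanism the hop inside the lambda is replaced by BpBinder::transact, the Binder driver, and the server-side onTransact dispatch.

#include <cstdint>
#include <functional>
#include <iostream>

// The shared service interface, implemented on both sides.
struct ICalc {
    virtual int add(int a, int b) = 0;
    virtual ~ICalc() = default;
};

enum : uint32_t { ADD_TRANSACTION = 1 };

// Server side: the "Binder entity" holding the real implementation.
struct CalcService : ICalc {
    int add(int a, int b) override { return a + b; }
    // Server-side dispatch, playing the role of onTransact():
    int onTransact(uint32_t code, int a, int b) {
        return code == ADD_TRANSACTION ? add(a, b) : -1;
    }
};

// Client side: the proxy only knows how to send a coded transaction.
struct CalcProxy : ICalc {
    using Transport = std::function<int(uint32_t, int, int)>;
    explicit CalcProxy(Transport t) : transact(std::move(t)) {}
    int add(int a, int b) override { return transact(ADD_TRANSACTION, a, b); }
    Transport transact;   // stands in for BpBinder::transact + the Binder driver
};

int main() {
    CalcService service;                      // lives in the "server" process
    CalcProxy proxy([&service](uint32_t code, int a, int b) {
        return service.onTransact(code, a, b);    // in reality this hop crosses the kernel
    });
    std::cout << proxy.add(2, 3) << std::endl;    // prints 5
    return 0;
}

The point to notice is that the client only ever programs against the shared interface; whether a call is served locally or forwarded to another process is hidden inside the proxy, which is exactly the role BinderProxy/BpBinder play.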

At the Java layer, the Binder and BinderProxy objects talk to the C++ layer through the JNI glue in android_util_Binder.cpp. Their C++ counterparts are BBinder and BpBinder, and BpBinder/BBinder in turn interact with the Binder driver through ProcessState and IPCThreadState.

Much like setting up a socket connection, the server listens for client requests; when a request arrives, a handler thread is created and added to the thread pool with joinThreadPool, and IPCThreadState runs a loop that receives and processes client requests. The client sends its request and data through BpBinder, which calls IPCThreadState's transact method.

Binder IPC also goes through the kernel, but unlike the ordinary user space -> kernel space -> user space path, Binder shares memory between kernel space and user space: the receive buffer of the server process and the corresponding kernel buffer are mapped onto the same physical memory, so the transaction data only needs to be copied once. (See the references.)

Android Binder Design and Implementation, Design Part: http://blog.csdn.net/universus/article/details/6211589

Binder memory mapping: http://blog.csdn.net/xiaojsj111/article/details/31422175

Implementation


Java-Side Implementation

At the Java layer, the IBinder interface plus Binder and BinderProxy form the basic structure. For a concrete service such as AMS, a unified service interface is abstracted for both sides to implement: the server-side service extends Binder, and the client wraps the Binder reference and exposes a proxy API.

Activity.startActivity ends up calling Instrumentation's execStartActivity method:

public ActivityResult execStartActivity(
        Context who, IBinder contextThread, IBinder token, Activity target,
        Intent intent, int requestCode, Bundle options) {
    IApplicationThread whoThread = (IApplicationThread) contextThread;
    Uri referrer = target != null ? target.onProvideReferrer() : null;
    if (referrer != null) {
        intent.putExtra(Intent.EXTRA_REFERRER, referrer);
    }
    if (mActivityMonitors != null) {
        synchronized (mSync) {
            final int N = mActivityMonitors.size();
            for (int i=0; i<N; i++) {
                final ActivityMonitor am = mActivityMonitors.get(i);
                if (am.match(who, null, intent)) {
                    am.mHits++;
                    if (am.isBlocking()) {
                        return requestCode >= 0 ? am.getResult() : null;
                    }
                    break;
                }
            }
        }
    }
    try {
        intent.migrateExtraStreamToClipData();
        intent.prepareToLeaveProcess(who);
        int result = ActivityManagerNative.getDefault()
            .startActivity(whoThread, who.getBasePackageName(), intent,
                    intent.resolveTypeIfNeeded(who.getContentResolver()),
                    token, target != null ? target.mEmbeddedID : null,
                    requestCode, 0, null, options);
        checkStartActivityResult(result, intent);
    } catch (RemoteException e) {
        throw new RuntimeException("Failure from system", e);
    }
    return null;
}
private static final Singleton<IActivityManager> gDefault = new Singleton<IActivityManager>() {
    protected IActivityManager create() {
        IBinder b = ServiceManager.getService("activity");
        if (false) {
            Log.v("ActivityManager", "default service binder = " + b);
        }
        IActivityManager am = asInterface(b);
        if (false) {
            Log.v("ActivityManager", "default service = " + am);
        }
        return am;
    }
};
static public IActivityManager asInterface(IBinder obj) {
    if (obj == null) {
        return null;
    }
    IActivityManager in =
        (IActivityManager)obj.queryLocalInterface(descriptor);
    if (in != null) {
        return in;
    }

    return new ActivityManagerProxy(obj);
}

Let's first look at the call on the proxy object:

public int startActivity(IApplicationThread caller, String callingPackage, Intent intent,
        String resolvedType, IBinder resultTo, String resultWho, int requestCode,
        int startFlags, ProfilerInfo profilerInfo, Bundle options) throws RemoteException {
    Parcel data = Parcel.obtain();
    Parcel reply = Parcel.obtain();
    data.writeInterfaceToken(IActivityManager.descriptor);
    data.writeStrongBinder(caller != null ? caller.asBinder() : null);
    data.writeString(callingPackage);
    intent.writeToParcel(data, 0);
    data.writeString(resolvedType);
    data.writeStrongBinder(resultTo);
    data.writeString(resultWho);
    data.writeInt(requestCode);
    data.writeInt(startFlags);
    if (profilerInfo != null) {
        data.writeInt(1);
        profilerInfo.writeToParcel(data, Parcelable.PARCELABLE_WRITE_RETURN_VALUE);
    } else {
        data.writeInt(0);
    }
    if (options != null) {
        data.writeInt(1);
        options.writeToParcel(data, 0);
    } else {
        data.writeInt(0);
    }
    mRemote.transact(START_ACTIVITY_TRANSACTION, data, reply, 0);
    reply.readException();
    int result = reply.readInt();
    reply.recycle();
    data.recycle();
    return result;
}

mRemote is an IBinder, i.e. the client-held reference to the server-side Binder entity, and transact is invoked on it here. Binder's default transact implementation simply rewinds the parcels and calls onTransact, which subclasses override; ActivityManagerNative provides that implementation. (When the call really crosses processes, mRemote is a BinderProxy, so the transaction first travels through the native layer described below before the server-side onTransact runs.)

/**
 * Default implementation rewinds the parcels and calls onTransact.  On
 * the remote side, transact calls into the binder to do the IPC.
 */
public final boolean transact(int code, Parcel data, Parcel reply,
        int flags) throws RemoteException {
    if (false) Log.v("Binder", "Transact: " + code + " to " + this);
    if (data != null) {
        data.setDataPosition(0);
    }
    boolean r = onTransact(code, data, reply, flags);
    if (reply != null) {
        reply.setDataPosition(0);
    }
    return r;
}

The startActivity handling in ActivityManagerNative's onTransact:

@Override
public boolean onTransact(int code, Parcel data, Parcel reply, int flags)
        throws RemoteException {
    switch (code) {
    case START_ACTIVITY_TRANSACTION:
    {
        data.enforceInterface(IActivityManager.descriptor);
        IBinder b = data.readStrongBinder();
        IApplicationThread app = ApplicationThreadNative.asInterface(b);
        String callingPackage = data.readString();
        Intent intent = Intent.CREATOR.createFromParcel(data);
        String resolvedType = data.readString();
        IBinder resultTo = data.readStrongBinder();
        String resultWho = data.readString();
        int requestCode = data.readInt();
        int startFlags = data.readInt();
        ProfilerInfo profilerInfo = data.readInt() != 0
                ? ProfilerInfo.CREATOR.createFromParcel(data) : null;
        Bundle options = data.readInt() != 0
                ? Bundle.CREATOR.createFromParcel(data) : null;
        int result = startActivity(app, callingPackage, intent, resolvedType,
                resultTo, resultWho, requestCode, startFlags, profilerInfo, options);
        reply.writeNoException();
        reply.writeInt(result);
        return true;
    }

This in turn calls the startActivity method of AMS (ActivityManagerService).

C++-Layer Implementation

The gDefault object in ActivityManagerNative contains this line:

IBinder b = ServiceManager.getService("activity");

This IBinder is the Binder reference the client holds. But how is that reference actually obtained and implemented?

public static IBinder getService(String name) {
    try {
        IBinder service = sCache.get(name);
        if (service != null) {
            return service;
        } else {
            return getIServiceManager().getService(name);
        }
    } catch (RemoteException e) {
        Log.e(TAG, "error in getService", e);
    }
    return null;
}
private static IServiceManager getIServiceManager() {
    if (sServiceManager != null) {
        return sServiceManager;
    }

    // Find the service manager
    sServiceManager = ServiceManagerNative.asInterface(BinderInternal.getContextObject());
    return sServiceManager;
}

The sServiceManager here is the client-side proxy of the ServiceManager service.

BinderInternal.getContextObject() is a native method:

/**
 * Return the global "context object" of the system.  This is usually
 * an implementation of IServiceManager, which you can use to find
 * other services.
 */
public static final native IBinder getContextObject();

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& /*caller*/)
{
    return getStrongProxyForHandle(0);
}

sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one.  See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder;
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            if (handle == 0) {
                // Special case for context manager...
                // The context manager is the only object for which we create
                // a BpBinder proxy without already holding a reference.
                // Perform a dummy transaction to ensure the context manager
                // is registered before we create the first local reference
                // to it (which will occur when creating the BpBinder).
                // If a local reference is created for the BpBinder when the
                // context manager is not present, the driver will fail to
                // provide a reference to the context manager, but the
                // driver API does not return status.
                //
                // Note that this is not race-free if the context manager
                // dies while this code runs.
                //
                // TODO: add a driver API to wait for context manager, or
                // stop special casing handle 0 for context manager and add
                // a driver API to get a handle to the context manager with
                // proper reference counting.

                Parcel data;
                status_t status = IPCThreadState::self()->transact(
                        0, IBinder::PING_TRANSACTION, data, NULL, 0);
                if (status == DEAD_OBJECT)
                    return NULL;
            }

            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}

The above is how the Binder reference to the system's ServiceManager is obtained. The argument 0 in getStrongProxyForHandle(0) is the fixed, well-known handle of the ServiceManager service, and as you can see, the Binder reference the client gets corresponds to a BpBinder object at the C++ layer.
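
For comparison, a native client can walk through the same door directly. The sketch below uses the libbinder C++ API of this era (it has to be built inside the Android source tree; querying the "activity" service here is just for illustration):

#include <binder/IServiceManager.h>
#include <binder/IBinder.h>
#include <utils/String16.h>

using namespace android;

int main() {
    // defaultServiceManager() wraps handle 0, i.e. the BpBinder that
    // getStrongProxyForHandle(0) above creates for the context manager.
    sp<IServiceManager> sm = defaultServiceManager();

    // Asking the context manager for AMS yields another BpBinder, this one
    // wrapping whatever handle the driver assigned to the "activity" service.
    sp<IBinder> am = sm->getService(String16("activity"));

    // A raw client could now marshal arguments into a Parcel and call
    // am->transact(...), which is what the Java BinderProxy ends up doing.
    return am != NULL ? 0 : 1;
}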

The following section draws on: http://www.cnblogs.com/Doing-what-I-love/p/5530173.html

The Java Interface to Binder IPC

Every Java-layer Binder local object (Binder) has a corresponding JavaBBinder object at the C++ layer, which derives from the C++ BBinder.

Every Java-layer Binder proxy object (BinderProxy) has a corresponding BpBinder object at the C++ layer.

So Java-layer Binder IPC is in fact carried out through the C++ layer's BpBinder and BBinder, exactly like native C++ Binder IPC.

 

IPCThreadState and ProcessState

ProcessState is responsible for opening the Binder driver (open_driver) and setting up the shared memory mapping:

static jobject android_os_BinderInternal_getContextObject(JNIEnv* env, jobject clazz)
{
    sp<IBinder> b = ProcessState::self()->getContextObject(NULL);
    return javaObjectForIBinder(env, b);
}

sp<ProcessState> ProcessState::self()
{
    Mutex::Autolock _l(gProcessMutex);
    if (gProcess != NULL) {
        return gProcess;
    }
    gProcess = new ProcessState;
    return gProcess;
}

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd >= 0) {
        int vers = 0;
        status_t result = ioctl(fd, BINDER_VERSION, &vers);
        if (result == -1) {
            ALOGE("Binder ioctl to obtain version failed: %s", strerror(errno));
            close(fd);
            fd = -1;
        }
        if (result != 0 || vers != BINDER_CURRENT_PROTOCOL_VERSION) {
            ALOGE("Binder driver protocol does not match user space protocol!");
            close(fd);
            fd = -1;
        }
        size_t maxThreads = DEFAULT_MAX_BINDER_THREADS;
        result = ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
        if (result == -1) {
            ALOGE("Binder ioctl to set max threads failed: %s", strerror(errno));
        }
    } else {
        ALOGW("Opening '/dev/binder' failed: %s\n", strerror(errno));
    }
    return fd;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver())
    , mVMStart(MAP_FAILED)
    , mThreadCountLock(PTHREAD_MUTEX_INITIALIZER)
    , mThreadCountDecrement(PTHREAD_COND_INITIALIZER)
    , mExecutingThreadsCount(0)
    , mMaxThreads(DEFAULT_MAX_BINDER_THREADS)
    , mStarvationStartTimeMs(0)
    , mManagesContexts(false)
    , mBinderContextCheckFunc(NULL)
    , mBinderContextUserData(NULL)
    , mThreadPoolStarted(false)
    , mThreadPoolSeq(1)
{
    if (mDriverFD >= 0) {
        // mmap the binder, providing a chunk of virtual address space to receive transactions.
        mVMStart = mmap(0, BINDER_VM_SIZE, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, mDriverFD, 0);
        if (mVMStart == MAP_FAILED) {
            // *sigh*
            ALOGE("Using /dev/binder failed: unable to mmap transaction memory.\n");
            close(mDriverFD);
            mDriverFD = -1;
        }
    }

    LOG_ALWAYS_FATAL_IF(mDriverFD < 0, "Binder driver could not be opened.  Terminating.");
}

IPCThreadState is responsible for the actual interaction with the Binder driver:

static jboolean android_os_BinderProxy_transact(JNIEnv* env, jobject obj,
        jint code, jobject dataObj, jobject replyObj, jint flags) // throws RemoteException
{
    if (dataObj == NULL) {
        jniThrowNullPointerException(env, NULL);
        return JNI_FALSE;
    }

    Parcel* data = parcelForJavaObject(env, dataObj);
    if (data == NULL) {
        return JNI_FALSE;
    }
    Parcel* reply = parcelForJavaObject(env, replyObj);
    if (reply == NULL && replyObj != NULL) {
        return JNI_FALSE;
    }

    IBinder* target = (IBinder*)
        env->GetLongField(obj, gBinderProxyOffsets.mObject);
    if (target == NULL) {
        jniThrowException(env, "java/lang/IllegalStateException", "Binder has been finalized!");
        return JNI_FALSE;
    }

    // ...

    //printf("Transact from Java code to %p sending: ", target); data->print();
    status_t err = target->transact(code, *data, reply, flags);
    //if (reply) printf("Transact from Java code to %p received: ", target); reply->print();

    if (kEnableBinderSample) {
        if (time_binder_calls) {
            conditionally_log_binder_call(start_millis, target, code);
        }
    }

    if (err == NO_ERROR) {
        return JNI_TRUE;
    } else if (err == UNKNOWN_TRANSACTION) {
        return JNI_FALSE;
    }

    signalExceptionForError(env, obj, err, true /*canThrowRemoteException*/, data->dataSize());
    return JNI_FALSE;
}
 
 
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}

In IPCThreadState.cpp:

status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    status_t err = data.errorCheck();

    flags |= TF_ACCEPT_FDS;

    IF_LOG_TRANSACTIONS() {
        TextOutput::Bundle _b(alog);
        alog << "BC_TRANSACTION thr " << (void*)pthread_self() << " / hand "
            << handle << " / code " << TypeCode(code) << ": "
            << indent << data << dedent << endl;
    }

    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        #if 0
        if (code == 4) { // relayout
            ALOGI(">>>>>> CALLING transaction 4");
        } else {
            ALOGI(">>>>>> CALLING transaction %d", code);
        }
        #endif
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
        #if 0
        if (code == 4) { // relayout
            ALOGI("<<<<<< RETURNING transaction 4");
        } else {
            ALOGI("<<<<<< RETURNING transaction %d", code);
        }
        #endif

        IF_LOG_TRANSACTIONS() {
            TextOutput::Bundle _b(alog);
            alog << "BR_REPLY thr " << (void*)pthread_self() << " / hand "
                << handle << ": ";
            if (reply) alog << indent << *reply << dedent << endl;
            else alog << "(none requested)" << endl;
        }
    } else {
        err = waitForResponse(NULL, NULL);
    }

    return err;
}
630

Preparing the transaction data:

status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.ptr = 0; /* Don't pass uninitialized stack data to a remote process */
    tr.target.handle = handle;
    tr.code = code;
    tr.flags = binderFlags;
    tr.cookie = 0;
    tr.sender_pid = 0;
    tr.sender_euid = 0;

    const status_t err = data.errorCheck();
    if (err == NO_ERROR) {
        tr.data_size = data.ipcDataSize();
        tr.data.ptr.buffer = data.ipcData();
        tr.offsets_size = data.ipcObjectsCount()*sizeof(binder_size_t);
        tr.data.ptr.offsets = data.ipcObjects();
    } else if (statusBuffer) {
        tr.flags |= TF_STATUS_CODE;
        *statusBuffer = err;
        tr.data_size = sizeof(status_t);
        tr.data.ptr.buffer = reinterpret_cast<uintptr_t>(statusBuffer);
        tr.offsets_size = 0;
        tr.data.ptr.offsets = 0;
    } else {
        return (mLastError = err);
    }

    mOut.writeInt32(cmd);
    mOut.write(&tr, sizeof(tr));

    return NO_ERROR;
}

waitForResponse exchanges commands and data with the Binder driver and processes the returned results:

status_t IPCThreadState::waitForResponse(Parcel *reply, status_t *acquireResult)
{
    uint32_t cmd;
    int32_t err;

    while (1) {
        if ((err=talkWithDriver()) < NO_ERROR) break;
        err = mIn.errorCheck();
        if (err < NO_ERROR) break;
        if (mIn.dataAvail() == 0) continue;

        cmd = (uint32_t)mIn.readInt32();

        IF_LOG_COMMANDS() {
            alog << "Processing waitForResponse Command: "
                << getReturnString(cmd) << endl;
        }

        switch (cmd) {
        case BR_TRANSACTION_COMPLETE:
            if (!reply && !acquireResult) goto finish;
            break;

        case BR_DEAD_REPLY:
            err = DEAD_OBJECT;
            goto finish;

        case BR_FAILED_REPLY:
            err = FAILED_TRANSACTION;
            goto finish;

        case BR_ACQUIRE_RESULT:
            {
                ALOG_ASSERT(acquireResult != NULL, "Unexpected brACQUIRE_RESULT");
                const int32_t result = mIn.readInt32();
                if (!acquireResult) continue;
                *acquireResult = result ? NO_ERROR : INVALID_OPERATION;
            }
            goto finish;

        case BR_REPLY:
            {
                binder_transaction_data tr;
                err = mIn.read(&tr, sizeof(tr));
                ALOG_ASSERT(err == NO_ERROR, "Not enough command data for brREPLY");
                if (err != NO_ERROR) goto finish;

                if (reply) {
                    if ((tr.flags & TF_STATUS_CODE) == 0) {
                        reply->ipcSetDataReference(
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t),
                            freeBuffer, this);
                    } else {
                        err = *reinterpret_cast<const status_t*>(tr.data.ptr.buffer);
                        freeBuffer(NULL,
                            reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                            tr.data_size,
                            reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                            tr.offsets_size/sizeof(binder_size_t), this);
                    }
                } else {
                    freeBuffer(NULL,
                        reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                        tr.data_size,
                        reinterpret_cast<const binder_size_t*>(tr.data.ptr.offsets),
                        tr.offsets_size/sizeof(binder_size_t), this);
                    continue;
                }
            }
            goto finish;

        default:
            err = executeCommand(cmd);
            if (err != NO_ERROR) goto finish;
            break;
        }
    }

finish:
    if (err != NO_ERROR) {
        if (acquireResult) *acquireResult = err;
        if (reply) reply->setError(err);
        mLastError = err;
    }

    return err;
}

talkWithDriver interacts with the driver through ioctl:

status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    if (mProcess->mDriverFD <= 0) {
        return -EBADF;
    }

    binder_write_read bwr;

    // Is the read buffer empty?
    const bool needRead = mIn.dataPosition() >= mIn.dataSize();

    // We don't want to write anything if we are still reading
    // from data left in the input buffer and the caller
    // has requested to read the next data.
    const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

    bwr.write_size = outAvail;
    bwr.write_buffer = (uintptr_t)mOut.data();

    // This is what we'll read.
    if (doReceive && needRead) {
        bwr.read_size = mIn.dataCapacity();
        bwr.read_buffer = (uintptr_t)mIn.data();
    } else {
        bwr.read_size = 0;
        bwr.read_buffer = 0;
    }

    IF_LOG_COMMANDS() {
        TextOutput::Bundle _b(alog);
        if (outAvail != 0) {
            alog << "Sending commands to driver: " << indent;
            const void* cmds = (const void*)bwr.write_buffer;
            const void* end = ((const uint8_t*)cmds)+bwr.write_size;
            alog << HexDump(cmds, bwr.write_size) << endl;
            while (cmds < end) cmds = printCommand(alog, cmds);
            alog << dedent;
        }
        alog << "Size of receive buffer: " << bwr.read_size
            << ", needRead: " << needRead << ", doReceive: " << doReceive << endl;
    }

    // Return immediately if there is nothing to do.
    if ((bwr.write_size == 0) && (bwr.read_size == 0)) return NO_ERROR;

    bwr.write_consumed = 0;
    bwr.read_consumed = 0;
    status_t err;
    do {
        IF_LOG_COMMANDS() {
            alog << "About to read/write, write size = " << mOut.dataSize() << endl;
        }
#if defined(__ANDROID__)
        if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
            err = NO_ERROR;
        else
            err = -errno;
#else
        err = INVALID_OPERATION;
#endif
        if (mProcess->mDriverFD <= 0) {
            err = -EBADF;
        }
        IF_LOG_COMMANDS() {
            alog << "Finished read/write, write size = " << mOut.dataSize() << endl;
        }
    } while (err == -EINTR);

    IF_LOG_COMMANDS() {
        alog << "Our err: " << (void*)(intptr_t)err << ", write consumed: "
            << bwr.write_consumed << " (of " << mOut.dataSize()
            << "), read consumed: " << bwr.read_consumed << endl;
    }

    if (err >= NO_ERROR) {
        if (bwr.write_consumed > 0) {
            if (bwr.write_consumed < mOut.dataSize())
                mOut.remove(0, bwr.write_consumed);
            else
                mOut.setDataSize(0);
        }
        if (bwr.read_consumed > 0) {
            mIn.setDataSize(bwr.read_consumed);
            mIn.setDataPosition(0);
        }
        IF_LOG_COMMANDS() {
            TextOutput::Bundle _b(alog);
            alog << "Remaining data size: " << mOut.dataSize() << endl;
            alog << "Received commands from driver: " << indent;
            const void* cmds = mIn.data();
            const void* end = mIn.data() + mIn.dataSize();
            alog << HexDump(cmds, mIn.dataSize()) << endl;
            while (cmds < end) cmds = printReturnCommand(alog, cmds);
            alog << dedent;
        }
        return NO_ERROR;
    }

    return err;
}
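
Putting ProcessState and IPCThreadState together, the user-space half of the protocol boils down to open + mmap + ioctl(BINDER_WRITE_READ). The sketch below strips everything else away; it relies on the kernel UAPI header linux/android/binder.h, it would need to run on a device with permission to open /dev/binder, and the 1 MB buffer size is an assumption that only roughly mirrors BINDER_VM_SIZE.

#include <fcntl.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <unistd.h>
#include <linux/android/binder.h>
#include <cstdio>
#include <cstring>

int main() {
    // What ProcessState::open_driver() does: open the device and check the protocol.
    int fd = open("/dev/binder", O_RDWR | O_CLOEXEC);
    if (fd < 0) { perror("open /dev/binder"); return 1; }

    binder_version vers = {};
    ioctl(fd, BINDER_VERSION, &vers);

    // What the ProcessState constructor does: map a receive buffer for transactions.
    size_t vm_size = 1 * 1024 * 1024;
    void* vm = mmap(nullptr, vm_size, PROT_READ, MAP_PRIVATE | MAP_NORESERVE, fd, 0);
    if (vm == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

    // What talkWithDriver() loops on: a binder_write_read describing what to send
    // (write_buffer) and where to receive (read_buffer). Both are empty here, so
    // the driver simply returns.
    binder_write_read bwr;
    memset(&bwr, 0, sizeof(bwr));
    ioctl(fd, BINDER_WRITE_READ, &bwr);

    munmap(vm, vm_size);
    close(fd);
    return 0;
}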

References:

The Binder driver: http://www.cnblogs.com/Doing-what-I-love/p/5530173.html

Service and Android System Design (7), the Binder driver: http://blog.csdn.net/21cnbao/article/details/8087354

Understanding the Android Binder Mechanism (1/3), the driver: http://blog.csdn.net/zdy0_2004/article/details/54708127


Reposted from blog.csdn.net/yuanjw2014/article/details/74332350