Android IPC Communication Mechanism: A Source Code Analysis
Introduction to Binder communication:
Linux offers several inter-process communication mechanisms: sockets, named pipes, message queues, signals, and shared memory. Java likewise supports IPC via sockets, named pipes, and so on, so Android applications could in principle use the standard Java IPC mechanisms. Reading the Android source, however, you will find almost none of these mechanisms used between applications on the same device; they are replaced by Binder communication. Why did Google choose this approach? Because of Binder's efficiency. Binder communication is implemented by the Linux binder driver and behaves much like thread migration: an IPC between two processes looks as if one process enters the other, executes code there, and returns with the result. Binder's user space maintains an available thread pool for each process; the pool handles incoming IPC requests as well as messages local to the process. Binder communication is synchronous, not asynchronous.
Binder communication in Android is based on a Service/Client model. Every process that needs IBinder communication must create an IBinder interface. One process in the system manages all system services, and Android does not allow users to add unauthorized system services; of course, now that the source is open, we can modify some code to add our own low-level system services. User programs, too, must create a server, or Service, for inter-process communication. At the Java application layer there is a single ActivityManagerService that manages the creation, connection (connect), and disconnection (disconnect) of all services, and every Activity is also started and loaded through this service. ActivityManagerService itself is hosted in the SystemServer process.
Before the Android virtual machine starts, the system launches the servicemanager process. servicemanager opens the binder driver and tells the binder kernel driver that this process will act as the system ServiceManager; the process then enters a loop, waiting to handle data from other processes. After creating a system service, a user obtains a remote ServiceManager interface via defaultServiceManager, through which it can call addService to register the system service with the ServiceManager process. A client can then call getService to obtain the IBinder object of the Service it wants to connect to. This IBinder is a reference, inside the binder kernel, to the Service's BBinder, so two identical IBinder objects for the same service never exist in the binder kernel. Each client process must likewise open the binder driver. From the user program's point of view, once we hold this object we can invoke the Service's methods through the binder kernel. Client and Service live in different processes, yet this mechanism achieves communication that resembles thread migration: once the caller holds the IBinder interface returned for the Service, invoking the Service's methods feels like calling its own functions.
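Expressed as code, the client side of this flow is roughly the following sketch, using the libbinder calls analyzed in the rest of this article; the service name "my.demo.service" and the transaction code are made-up placeholders:

#include <binder/IServiceManager.h>
#include <binder/Parcel.h>

using namespace android;

int callRemote()
{
    // Handle-0 proxy to the servicemanager process.
    sp<IServiceManager> sm = defaultServiceManager();
    // Returns the IBinder registered earlier with addService().
    sp<IBinder> binder = sm->getService(String16("my.demo.service"));
    if (binder == NULL) return -1;

    // A raw transact(): marshal arguments, cross into the service process,
    // and block until the reply comes back (Binder IPC is synchronous).
    Parcel data, reply;
    data.writeInt32(42);
    binder->transact(1 /* placeholder transaction code */, data, &reply);
    return reply.readInt32();
}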
The figure below illustrates how a client establishes a connection with a Service. We begin with the ServiceManager registration process and analyze, step by step, how the above is implemented.
Source analysis of the ServiceManager registration process:
ServiceManager process (Service_manager.c):
Service_manager provides management for other processes' Services. This program must be running before the Android runtime comes up, otherwise the ActivityManagerService in the Android Java VM cannot register itself.
int main(int argc, char **argv)
{
    struct binder_state *bs;
    void *svcmgr = BINDER_SERVICE_MANAGER;

    bs = binder_open(128*1024); // open the /dev/binder driver

    if (binder_become_context_manager(bs)) { // register as service manager in binder kernel
        LOGE("cannot become context manager (%s)\n", strerror(errno));
        return -1;
    }

    svcmgr_handle = svcmgr;
    binder_loop(bs, svcmgr_handler);
    return 0;
}
It first opens the binder driver, then binder_become_context_manager calls ioctl to tell the binder kernel driver that this is a service-management process, and finally binder_loop waits for data from other processes. BINDER_SERVICE_MANAGER is the service-management process's handle, defined as:
/* the one magic object */
#define BINDER_SERVICE_MANAGER ((void*) 0)
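For reference, binder_become_context_manager in the same userspace helper is essentially a thin wrapper over an ioctl; a sketch of the froyo-era implementation:

int binder_become_context_manager(struct binder_state *bs)
{
    /* Tell the binder driver that this process is the context manager,
     * i.e. the owner of handle 0 (BINDER_SERVICE_MANAGER). */
    return ioctl(bs->fd, BINDER_SET_CONTEXT_MGR, 0);
}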
If the handle a client process uses when looking up a Service does not match this value, the Service Manager will not accept the client's request. How the client sets this handle is described below.

Registration of the CameraService (Main_mediaservice.c):
int main(int argc, char **argv)
{
    sp<ProcessState> proc(ProcessState::self());
    sp<IServiceManager> sm = defaultServiceManager();
    LOGI("ServiceManager: %p", sm.get());
    AudioFlinger::instantiate();               // Audio service
    MediaPlayerService::instantiate();         // MediaPlayer service
    CameraService::instantiate();              // Camera service
    ProcessState::self()->startThreadPool();   // start this process's thread pool
    IPCThreadState::self()->joinThreadPool();  // join the current thread to the pool
}

CameraService.cpp
void CameraService::instantiate() {
    defaultServiceManager()->addService(
            String16("media.camera"), new CameraService());
}
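On the proxy side, addService presumably just marshals the name and the service's IBinder into a Parcel and transacts; a hedged sketch along the lines of the froyo-era IServiceManager.cpp:

virtual status_t addService(const String16& name, const sp<IBinder>& service)
{
    Parcel data, reply;
    // Marshal the interface token, the service name, and the service's
    // IBinder; the binder kernel creates a node for this BBinder.
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    data.writeStrongBinder(service);
    status_t err = remote()->transact(ADD_SERVICE_TRANSACTION, data, &reply);
    return err == NO_ERROR ? reply.readInt32() : err;
}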
This creates the CameraService object and adds it to the ServiceManager process. How the client obtains the remote IServiceManager IBinder interface:
sp<IServiceManager> defaultServiceManager()
{
    if (gDefaultServiceManager != NULL) return gDefaultServiceManager;

    {
        AutoMutex _l(gDefaultServiceManagerLock);
        if (gDefaultServiceManager == NULL) {
            gDefaultServiceManager = interface_cast<IServiceManager>(
                ProcessState::self()->getContextObject(NULL));
        }
    }

    return gDefaultServiceManager;
}
The first time any process calls defaultServiceManager, gDefaultServiceManager is NULL, so the process obtains a ProcessState instance via ProcessState::self. The ProcessState constructor opens the binder driver.
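The open_driver() used by the constructor below presumably reduces to opening /dev/binder and configuring the driver; a hedged sketch (names per the froyo-era ProcessState.cpp):

static int open_driver()
{
    int fd = open("/dev/binder", O_RDWR);
    if (fd >= 0) {
        fcntl(fd, F_SETFD, FD_CLOEXEC);
        // Cap the number of binder threads the driver may ask this
        // process to spawn for incoming transactions.
        size_t maxThreads = 15;
        ioctl(fd, BINDER_SET_MAX_THREADS, &maxThreads);
    }
    return fd;
}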
ProcessState.cpp
sp<ProcessState> ProcessState::self()
{
    if (gProcess != NULL) return gProcess;
    AutoMutex _l(gProcessMutex);
    if (gProcess == NULL) gProcess = new ProcessState;
    return gProcess;
}

ProcessState::ProcessState()
    : mDriverFD(open_driver()) // open the /dev/binder driver
    ...........................
{
}

sp<IBinder> ProcessState::getContextObject(const sp<IBinder>& caller)
{
    if (supportsProcesses()) {
        return getStrongProxyForHandle(0);
    } else {
        return getContextObject(String16("default"), caller);
    }
}
Android supports the binder driver, so the program calls getStrongProxyForHandle. The handle here is 0, matching BINDER_SERVICE_MANAGER in Service_manager.c.
sp<IBinder> ProcessState::getStrongProxyForHandle(int32_t handle)
{
    sp<IBinder> result;

    AutoMutex _l(mLock);

    handle_entry* e = lookupHandleLocked(handle);

    if (e != NULL) {
        // We need to create a new BpBinder if there isn't currently one, OR we
        // are unable to acquire a weak reference on this current one. See comment
        // in getWeakProxyForHandle() for more info about this.
        IBinder* b = e->binder; // on the first call, b is NULL
        if (b == NULL || !e->refs->attemptIncWeak(this)) {
            b = new BpBinder(handle);
            e->binder = b;
            if (b) e->refs = b->getWeakRefs();
            result = b;
        } else {
            // This little bit of nastyness is to allow us to add a primary
            // reference to the remote proxy when this team doesn't have one
            // but another team is sending the handle to us.
            result.force_set(b);
            e->refs->decWeak(this);
        }
    }

    return result;
}
On the first call b is NULL, so a BpBinder object is created for it:
BpBinder::BpBinder(int32_t handle)
    : mHandle(handle)
    , mAlive(1)
    , mObitsSent(0)
    , mObituaries(NULL)
{
    LOGV("Creating BpBinder %p handle %d\n", this, mHandle);

    extendObjectLifetime(OBJECT_LIFETIME_WEAK);
    IPCThreadState::self()->incWeakHandle(handle);
}

void IPCThreadState::incWeakHandle(int32_t handle)
{
    LOG_REMOTEREFS("IPCThreadState::incWeakHandle(%d)\n", handle);
    mOut.writeInt32(BC_INCREFS);
    mOut.writeInt32(handle);
}
getContextObject thus returns a BpBinder object.
interface_cast<IServiceManager>(
        ProcessState::self()->getContextObject(NULL));

template<typename INTERFACE>
inline sp<INTERFACE> interface_cast(const sp<IBinder>& obj)
{
    return INTERFACE::asInterface(obj);
}
Expanding this macro ultimately yields:
sp<IServiceManager> IServiceManager::asInterface(const sp<IBinder>& obj)
{
    sp<IServiceManager> intr;
    if (obj != NULL) {
        intr = static_cast<IServiceManager*>(
            obj->queryLocalInterface(
                IServiceManager::descriptor).get());
        if (intr == NULL) {
            intr = new BpServiceManager(obj);
        }
    }
    return intr;
}
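This asInterface is not written by hand; it is what the IMPLEMENT_META_INTERFACE macro expands to for each interface. A sketch of that macro, assuming the froyo-era IInterface.h (abridged, android:: qualifiers omitted):

#define IMPLEMENT_META_INTERFACE(INTERFACE, NAME)                       \
    const String16 I##INTERFACE::descriptor(NAME);                      \
    const String16& I##INTERFACE::getInterfaceDescriptor() const {      \
        return I##INTERFACE::descriptor;                                \
    }                                                                   \
    sp<I##INTERFACE> I##INTERFACE::asInterface(const sp<IBinder>& obj)  \
    {                                                                   \
        sp<I##INTERFACE> intr;                                          \
        if (obj != NULL) {                                              \
            /* a local BBinder answers queryLocalInterface itself; */   \
            /* a remote BpBinder returns NULL, so wrap it in a proxy */ \
            intr = static_cast<I##INTERFACE*>(                          \
                obj->queryLocalInterface(                               \
                        I##INTERFACE::descriptor).get());               \
            if (intr == NULL) {                                         \
                intr = new Bp##INTERFACE(obj);                          \
            }                                                           \
        }                                                               \
        return intr;                                                    \
    }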
So a BpServiceManager object is returned; the obj here is the BpBinder object we created earlier.

How the client obtains a Service's remote IBinder interface:
Take CameraService as an example (Camera.cpp):
const sp<ICameraService>& Camera::getCameraService()
{
    Mutex::Autolock _l(mLock);
    if (mCameraService.get() == 0) {
        sp<IServiceManager> sm = defaultServiceManager();
        sp<IBinder> binder;
        do {
            binder = sm->getService(String16("media.camera"));
            if (binder != 0)
                break;
            LOGW("CameraService not published, waiting...");
            usleep(500000); // 0.5 s
        } while (true);
        if (mDeathNotifier == NULL) {
            mDeathNotifier = new DeathNotifier();
        }
        binder->linkToDeath(mDeathNotifier);
        mCameraService = interface_cast<ICameraService>(binder);
    }
    LOGE_IF(mCameraService == 0, "no CameraService!?");
    return mCameraService;
}
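linkToDeath registers an IBinder::DeathRecipient so the client is notified when the service's hosting process dies; Camera's DeathNotifier is presumably along these lines (a hedged sketch, assuming mLock and mCameraService are static members of Camera as the code above suggests):

class DeathNotifier : public IBinder::DeathRecipient
{
public:
    DeathNotifier() {}
    // Invoked by libbinder on a binder thread when the remote
    // CameraService's hosting process dies.
    virtual void binderDied(const wp<IBinder>& who) {
        LOGW("ICameraService died");
        // Drop the cached proxy so the next getCameraService() reconnects.
        Mutex::Autolock _l(Camera::mLock);
        Camera::mCameraService.clear();
    }
};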
From the preceding analysis we know that sm is a BpServiceManager object:
virtual sp<IBinder> getService(const String16& name) const
{
    unsigned n;
    for (n = 0; n < 5; n++) {
        sp<IBinder> svc = checkService(name);
        if (svc != NULL) return svc;
        LOGI("Waiting for service %s...\n", String8(name).string());
        sleep(1);
    }
    return NULL;
}

virtual sp<IBinder> checkService(const String16& name) const
{
    Parcel data, reply;
    data.writeInterfaceToken(IServiceManager::getInterfaceDescriptor());
    data.writeString16(name);
    remote()->transact(CHECK_SERVICE_TRANSACTION, data, &reply);
    return reply.readStrongBinder();
}
The remote() here is the BpBinder object we obtained earlier, so checkService will call the transact function in BpBinder:
status_t BpBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    // Once a binder has died, it will never come back to life.
    if (mAlive) {
        status_t status = IPCThreadState::self()->transact(
            mHandle, code, data, reply, flags);
        if (status == DEAD_OBJECT) mAlive = 0;
        return status;
    }

    return DEAD_OBJECT;
}
mHandle is 0 here. BpBinder calls down into IPCThreadState::transact, which sends the data to the ServiceManager process associated with mHandle.
status_t IPCThreadState::transact(int32_t handle,
                                  uint32_t code, const Parcel& data,
                                  Parcel* reply, uint32_t flags)
{
    ............................................................
    if (err == NO_ERROR) {
        LOG_ONEWAY(">>>> SEND from pid %d uid %d %s", getpid(), getuid(),
            (flags & TF_ONE_WAY) == 0 ? "READ REPLY" : "ONE WAY");
        err = writeTransactionData(BC_TRANSACTION, flags, handle, code, data, NULL);
    }

    if (err != NO_ERROR) {
        if (reply) reply->setError(err);
        return (mLastError = err);
    }

    if ((flags & TF_ONE_WAY) == 0) {
        if (reply) {
            err = waitForResponse(reply);
        } else {
            Parcel fakeReply;
            err = waitForResponse(&fakeReply);
        }
    ..............................
    return err;
}

The data to be sent is assembled by writeTransactionData:
status_t IPCThreadState::writeTransactionData(int32_t cmd, uint32_t binderFlags,
    int32_t handle, uint32_t code, const Parcel& data, status_t* statusBuffer)
{
    binder_transaction_data tr;

    tr.target.handle = handle; // this handle is passed through to service_manager
    tr.code = code;
    tr.flags = binderFlags;
    ..............
}
waitForResponse then calls talkWithDriver to read from and write to the binder kernel. Once the binder kernel receives the data, a thread in service_manager's thread pool starts running; after service_manager finds the CameraService it calls binder_send_reply, writing the returned data back into the binder kernel.
status_t IPCThreadState::waitForResponse(Parcel* reply, status_t* acquireResult)
{
    int32_t cmd;
    int32_t err;

    while (1) {
        if ((err = talkWithDriver()) < NO_ERROR) break;
        ..............................................
}
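Elided below, talkWithDriver first packs the pending commands in mOut and the reply buffer mIn into a single binder_write_read before issuing the ioctl; presumably something like this sketch of the froyo-era setup:

binder_write_read bwr;

// Only read if there is no unconsumed data already sitting in mIn.
const bool needRead = mIn.dataPosition() >= mIn.dataSize();
// Only write the queued commands when we may also block in a read.
const size_t outAvail = (!doReceive || needRead) ? mOut.dataSize() : 0;

bwr.write_size   = outAvail;
bwr.write_buffer = (long unsigned int)mOut.data();

if (doReceive && needRead) {
    bwr.read_size   = mIn.dataCapacity();
    bwr.read_buffer = (long unsigned int)mIn.data();
} else {
    bwr.read_size   = 0;
    bwr.read_buffer = 0;
}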
status_t IPCThreadState::talkWithDriver(bool doReceive)
{
    ............................................
#if defined(HAVE_ANDROID_OS)
    if (ioctl(mProcess->mDriverFD, BINDER_WRITE_READ, &bwr) >= 0)
        err = NO_ERROR;
    else
        err = -errno;
#else
    err = INVALID_OPERATION;
#endif
    ...................................................
}
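The bwr passed to the ioctl is a binder_write_read, which simply describes one outbound command buffer and one inbound reply buffer (per the froyo-era kernel binder.h):

struct binder_write_read {
    signed long   write_size;     /* bytes of commands to consume */
    signed long   write_consumed; /* bytes actually consumed by the driver */
    unsigned long write_buffer;   /* user-space address of the command buffer */
    signed long   read_size;      /* capacity of the return buffer */
    signed long   read_consumed;  /* bytes the driver actually wrote back */
    unsigned long read_buffer;    /* user-space address of the return buffer */
};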
The BINDER_WRITE_READ command of the ioctl system call above is how user space reads from and writes to the binder kernel. Client A communicating with the binder kernel (kernel/drivers/android/Binder.c):

static int binder_open(struct inode *nodp, struct file *filp)
{
    struct binder_proc *proc;

    if (binder_debug_mask & BINDER_DEBUG_OPEN_CLOSE)
        printk(KERN_INFO "binder_open: %d:%d\n",
               current->group_leader->pid, current->pid);

    proc = kzalloc(sizeof(*proc), GFP_KERNEL);
    if (proc == NULL)
        return -ENOMEM;
    get_task_struct(current);
    proc->tsk = current; // save the task struct of the process opening /dev/binder
    INIT_LIST_HEAD(&proc->todo);
    init_waitqueue_head(&proc->wait);
    proc->default_priority = task_nice(current);
    mutex_lock(&binder_lock);
    binder_stats.obj_created[BINDER_STAT_PROC]++;
    hlist_add_head(&proc->proc_node, &binder_procs);
    proc->pid = current->group_leader->pid;
    INIT_LIST_HEAD(&proc->delivered_death);
    filp->private_data = proc;
    mutex_unlock(&binder_lock);

    if (binder_proc_dir_entry_proc) {
        char strbuf[11];
        snprintf(strbuf, sizeof(strbuf), "%u", proc->pid);
        create_proc_read_entry(strbuf, S_IRUGO, binder_proc_dir_entry_proc,
                               binder_read_proc_proc, proc); // create a proc entry for this process
    }
    return 0;
}

From this we can see that information about every process that opens /dev/binder is kept inside the binder kernel, so when a process later calls ioctl to talk to the kernel binder, the driver can look up the caller's information. BINDER_WRITE_READ is a very important command for a process communicating with the binder kernel via ioctl; as shown earlier, the command that talkWithDriver sends from IPCThreadState's transact is exactly BINDER_WRITE_READ.

static long binder_ioctl(struct file *filp, unsigned int cmd, unsigned long arg)
{
    int ret;
    struct binder_proc *proc = filp->private_data;
    struct binder_thread *thread;
    unsigned int size = _IOC_SIZE(cmd);
    void __user *ubuf = (void __user *)arg;

    /*printk(KERN_INFO "binder_ioctl: %d:%d %x %lx\n", proc->pid, current->pid, cmd, arg);*/

    // suspend the process calling ioctl; the caller sleeps until the service returns
    ret = wait_event_interruptible(binder_user_error_wait, binder_stop_on_user_error < 2);
    if (ret)
        return ret;

    mutex_lock(&binder_lock);
    thread = binder_get_thread(proc); // get the caller's thread-pool data structure
    if (thread == NULL) {
        ret = -ENOMEM;
        goto err;
    }

    switch (cmd) {
    case BINDER_WRITE_READ: { // the CMD set by talkWithDriver in IPCThreadState
        struct binder_write_read bwr;
        if (size != sizeof(struct binder_write_read)) {
            ret = -EINVAL;
            goto err;
        }
        if (copy_from_user(&bwr, ubuf, sizeof(bwr))) {
            ret = -EFAULT;
            goto err;
        }
        if (binder_debug_mask & BINDER_DEBUG_READ_WRITE)
            printk(KERN_INFO "binder: %d:%d write %ld at %08lx, read %ld at %08lx\n",
                   proc->pid, thread->pid, bwr.write_size, bwr.write_buffer,
                   bwr.read_size, bwr.read_buffer);
        if (bwr.write_size > 0) {
            ret = binder_thread_write(proc, thread, (void __user *)bwr.write_buffer,
                                      bwr.write_size, &bwr.write_consumed);
            if (ret < 0) {
                bwr.read_consumed = 0;
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        if (bwr.read_size > 0) { // data is written back to the caller process
            ret = binder_thread_read(proc, thread, (void __user *)bwr.read_buffer,
                                     bwr.read_size, &bwr.read_consumed,
                                     filp->f_flags & O_NONBLOCK);
            if (!list_empty(&proc->todo))
                wake_up_interruptible(&proc->wait); // wake the suspended caller process
            if (ret < 0) {
                if (copy_to_user(ubuf, &bwr, sizeof(bwr)))
                    ret = -EFAULT;
                goto err;
            }
        }
        .........................................
    }

int binder_thread_write(struct binder_proc *proc, struct binder_thread *thread,
                        void __user *buffer, int size, signed long *consumed)
{
    uint32_t cmd;
    void __user *ptr = buffer + *consumed;
    void __user *end = buffer + size;

    while (ptr < end && thread->return_error == BR_OK) {
        if (get_user(cmd, (uint32_t __user *)ptr)) // copy cmd from user space into the kernel
            return -EFAULT;
        ptr += sizeof(uint32_t);
        if (_IOC_NR(cmd) < ARRAY_SIZE(binder_stats.bc)) {
            binder_stats.bc[_IOC_NR(cmd)]++;
            proc->stats.bc[_IOC_NR(cmd)]++;
            thread->stats.bc[_IOC_NR(cmd)]++;
        }
        switch (cmd) {
        case BC_INCREFS:
        .........................................
        case BC_TRANSACTION: // the cmd set by IPCThreadState::writeTransactionData
        case BC_REPLY: {
            struct binder_transaction_data tr;

            if (copy_from_user(&tr, ptr, sizeof(tr)))
                return -EFAULT;
            ptr += sizeof(tr);
            binder_transaction(proc, thread, &tr, cmd == BC_REPLY);
            break;
        }
        ........................................
}

static void binder_transaction(struct binder_proc *proc, struct binder_thread *thread,
                               struct binder_transaction_data *tr, int reply)
{
    ..............................................
    if (reply) { // cmd != BC_REPLY, so this branch is not taken
        ......................................
    } else {
        if (tr->target.handle) { // not satisfied for service_manager (handle == 0)
            .......................................
        } else { // here we obtain the process info that service_manager registered with the binder kernel
            target_node = binder_context_mgr_node; // BINDER_SET_CONTEXT_MGR registered the service manager
            if (target_node == NULL) {
                return_error = BR_DEAD_REPLY;
                goto err_no_context_mgr_node;
            }
        }
        e->to_node = target_node->debug_id;
        target_proc = target_node->proc; // the target process structure: service_manager
        if (target_proc == NULL) {
            return_error = BR_DEAD_REPLY;
            goto err_dead_binder;
        }
        ....................
    }
    if (target_thread) {
        e->to_thread = target_thread->pid;
        target_list = &target_thread->todo;
        target_wait = &target_thread->wait; // the suspended service_manager thread
    } else {
        target_list = &target_proc->todo;
        target_wait = &target_proc->wait;
    }
    ............................................
        case BINDER_TYPE_BINDER:
        case BINDER_TYPE_WEAK_BINDER: {
            ..........................
            ref = binder_get_ref_for_node(target_proc, node); // create, in the binder kernel,
            ..........................                        // a reference to the service found
        }
        break;
    ............................................
    if (target_wait)
        wake_up_interruptible(target_wait); // wake the suspended thread to handle the caller's request;
    ............................................ // for the command handling, see svcmgr_handler
}

At this point getService has reached the service_manager process; when service_manager receives the request, it is woken up if it was suspended. Now look at the binder_loop function in service_manager.

Service_manager.c

void binder_loop(struct binder_state *bs, binder_handler func)
{
    .................................
    binder_write(bs, readbuf, sizeof(unsigned));

    for (;;) {
        bwr.read_size = sizeof(readbuf);
        bwr.read_consumed = 0;
        bwr.read_buffer = (unsigned)readbuf;

        res = ioctl(bs->fd, BINDER_WRITE_READ, &bwr); // the process sleeps here if there are no pending requests
        if (res < 0) {
            LOGE("binder_loop: ioctl failed (%s)\n", strerror(errno));
            break;
        }
        res = binder_parse(bs, 0, readbuf, bwr.read_consumed, func); // func here is
        ...................................                          // svcmgr_handler
    }
}

Having received a request to process, binder_parse decodes it and invokes the callback registered earlier to look up the service the caller asked for:

int binder_parse(struct binder_state *bs, struct binder_io *bio,
                 uint32_t *ptr, uint32_t size, binder_handler func)
{
    ....................................
    switch (cmd) {
    ......
    case BR_TRANSACTION: {
        struct binder_txn *txn = (void *)ptr;
        if ((end - ptr) * sizeof(uint32_t) < sizeof(struct binder_txn)) {
            LOGE("parse: txn too small!\n");
            return -1;
        }
        binder_dump_txn(txn);
        if (func) {
            unsigned rdata[256/4];
            struct binder_io msg;
            struct binder_io reply;
            int res;

            bio_init(&reply, rdata, sizeof(rdata), 4);
            bio_init_from_txn(&msg, txn);
            res = func(bs, txn, &msg, &reply);             // find the service the caller requested
            binder_send_reply(bs, &reply, txn->data, res); // return the service found to the caller
        }
        ptr += sizeof(*txn) / sizeof(uint32_t);
        break;
    ........
    }
}

void binder_send_reply(struct binder_state *bs,
                       struct binder_io *reply,
                       void *buffer_to_free,
                       int status)
{
    struct {
        uint32_t cmd_free;
        void *buffer;
        uint32_t cmd_reply;
        struct binder_txn txn;
    } __attribute__((packed)) data;

    data.cmd_free = BC_FREE_BUFFER;
    data.buffer = buffer_to_free;
    data.cmd_reply = BC_REPLY; // substitute BC_REPLY for the cmd in binder_thread_write above and
    data.txn.target = 0;       // you can see how service_manager returns the found service to the caller
    ..........................
    binder_write(bs, &data, sizeof(data)); // talk to the binder kernel via ioctl
}

Coming back out of this path, the caller is woken up: the client process now holds a reference, in the binder kernel, to the requested service's IBinder object, which is a remote BBinder object. Client-to-Service communication after the connection is established:

virtual sp<ICamera> connect(const sp<ICameraClient>& cameraClient)
{
    Parcel data, reply;
    data.writeInterfaceToken(ICameraService::getInterfaceDescriptor());
    data.writeStrongBinder(cameraClient->asBinder());
    remote()->transact(BnCameraService::CONNECT, data, &reply);
    return interface_cast<ICamera>(reply.readStrongBinder());
}

As analyzed before, the remote() here is the CameraService proxy we obtained, and the caller's request crosses over into CameraService. Every Android process creates a thread pool, which is used to handle requests from other processes. When there is no data the threads sleep, and the binder kernel wakes one of them up:

IPCThreadState::joinThreadPool(bool isMain)
{
    LOG_THREADPOOL("**** THREAD %p (PID %d) IS JOINING THE THREAD POOL\n",
        (void*)pthread_self(), getpid());

    mOut.writeInt32(isMain ? BC_ENTER_LOOPER : BC_REGISTER_LOOPER);

    status_t result;
    do {
        int32_t cmd;
        result = talkWithDriver();
        if (result >= NO_ERROR) {
            size_t IN = mIn.dataAvail(); // the binder kernel has delivered data to the service
            if (IN < sizeof(int32_t)) continue;
            cmd = mIn.readInt32();
            IF_LOG_COMMANDS() {
                alog << "Processing top-level Command: "
                     << getReturnString(cmd) << endl;
            }
            result = executeCommand(cmd); // the service executes the command requested via the binder kernel
        }

        // Let this thread exit the thread pool if it is no longer
        // needed and it is not the main process thread.
        if (result == TIMED_OUT && !isMain) {
            break;
        }
    } while (result != -ECONNREFUSED && result != -EBADF);
    .......................
}

status_t IPCThreadState::executeCommand(int32_t cmd)
{
    BBinder* obj;
    RefBase::weakref_type* refs;
    status_t result = NO_ERROR;

    switch (cmd) {
    .........................
    case BR_TRANSACTION:
        {
            binder_transaction_data tr;
            result = mIn.read(&tr, sizeof(tr));
            LOG_ASSERT(result == NO_ERROR,
                "Not enough command data for brTRANSACTION");
            if (result != NO_ERROR) break;

            Parcel buffer;
            buffer.ipcSetDataReference(
                reinterpret_cast<const uint8_t*>(tr.data.ptr.buffer),
                tr.data_size,
                reinterpret_cast<const size_t*>(tr.data.ptr.offsets),
                tr.offsets_size/sizeof(size_t), freeBuffer, this);

            const pid_t origPid = mCallingPid;
            const uid_t origUid = mCallingUid;
            mCallingPid = tr.sender_pid;
            mCallingUid = tr.sender_euid;
            //LOGI(">>>> TRANSACT from pid %d uid %d\n", mCallingPid, mCallingUid);

            Parcel reply;
            .........................
            if (tr.target.ptr) {
                sp<BBinder> b((BBinder*)tr.cookie); // the service's Binder object, i.e. CameraService
                const status_t error = b->transact(tr.code, buffer, &reply, 0); // this ends up calling
                if (error < NO_ERROR) reply.setError(error);                    // CameraService's onTransact
            } else {
                const status_t error = the_context_object->transact(tr.code, buffer, &reply, 0);
                if (error < NO_ERROR) reply.setError(error);
            }

            //LOGI("<<<< TRANSACT from pid %d restore pid %d uid %d\n",
            //     mCallingPid, origPid, origUid);

            if ((tr.flags & TF_ONE_WAY) == 0) {
                LOG_ONEWAY("Sending reply to %d!", mCallingPid);
                sendReply(reply, 0); // return the data to the caller process through the binder
            } else {                 // kernel; trace this path yourself following the earlier analysis
                LOG_ONEWAY("NOT sending reply to %d!", mCallingPid);
            }

            mCallingPid = origPid;
            mCallingUid = origUid;

            IF_LOG_TRANSACTIONS() {
                TextOutput::Bundle _b(alog);
                alog << "BC_REPLY thr " << (void*)pthread_self() << " / obj "
                     << tr.target.ptr << ": " << indent << reply << dedent << endl;
            }
            ..................................
        }
        break;
    ..................................
    if (result != NO_ERROR) {
        mLastError = result;
    }
    return result;
}

This calls the transact function of CameraService's BBinder object:

status_t BBinder::transact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    .....................
    switch (code) {
    case PING_TRANSACTION:
        reply->writeInt32(pingBinder());
        break;
    default:
        err = onTransact(code, data, reply, flags);
        break;
    }
    ...................
    return err;
}

This in turn calls CameraService's onTransact function; CameraService inherits from BBinder.

status_t BnCameraService::onTransact(
    uint32_t code, const Parcel& data, Parcel* reply, uint32_t flags)
{
    switch (code) {
    case CONNECT: {
        CHECK_INTERFACE(ICameraService, data, reply);
        sp<ICameraClient> cameraClient = interface_cast<ICameraClient>(data.readStrongBinder());
        sp<ICamera> camera = connect(cameraClient); // the real handler
        reply->writeStrongBinder(camera->asBinder());
        return NO_ERROR;
    } break;
    default:
        return BBinder::onTransact(code, data, reply, flags);
    }
}

This completes one round of communication from client to service.

Designing a Service for multiple clients:

A Service can be connected to by different clients. By "multiple clients" we mean creating a distinct IClient interface in the Service for each client. If you have done AIDL programming, you will know that a Service exposes an IService interface to its clients; through defaultServiceManager->getService we obtain a BpBinder interface to the service, and calling transact through it lets us talk to the service. That already gives us a simple service/client pair, but it has a drawback: the single IService is shared by all clients. If we want to distinguish clients, every client could pass some identifying attribute when the connection is established; that works, but it is clumsy. For a camera, for example, there may be more than one sensor, each with different capabilities, and this approach becomes awkward. Instead we can borrow the multi-client design used in Qt: create an IClient interface in the Service for each client, and use the IService interface only for establishing the connection. For the camera, with multiple sensors, the Service can then open a different device for each client.

import android.os.IBinder;
import android.os.RemoteException;

public class TestServerServer extends android.app.testServer.ITestServer.Stub
{
    int mClientCount = 0;
    testServerClient mClient[] = new testServerClient[32]; // fixed capacity for this demo

    @Override
    public android.app.testServer.ITestClient.Stub connect(ITestClient client) throws RemoteException
    {
        testServerClient tClient = new testServerClient(this, client); // create a distinct
        mClient[mClientCount] = tClient;                               // IClient per client
        mClientCount++;
        System.out.printf("*** Server connect client is %s", client.asBinder());
        return tClient;
    }

    @Override
    public void receivedData(int count) throws RemoteException
    {
    }

    public static class testServerClient extends android.app.testServer.ITestClient.Stub
    {
        public android.app.testServer.ITestClient mClient;
        public TestServerServer mServer;

        public testServerClient(TestServerServer tServer, android.app.testServer.ITestClient tClient)
        {
            mServer = tServer;
            mClient = tClient;
        }

        public IBinder asBinder()
        {
            return this;
        }
    }
}

This is only a demo of the Service; to add it as a system service you would still have to modify the Android code to avoid the permission check!

Summary:

Suppose a client process A wants to establish IPC with a service process B. From the preceding analysis, the flow is:

1. Service B opens the binder driver, registering its process information with the kernel, and a binder reference is created for the Service.
2. Service B adds its service information to the service_manager process via addService.
3. Service B's thread pool sleeps, waiting for client requests.
4. Client A calls open_driver to open the binder driver, registering its own process information with the kernel.
5. Client A calls defaultServiceManager's getService to obtain Service B's IBinder object as known to the kernel.
6. Client A communicates with the binder kernel through transact; the binder kernel suspends client A.
7. The binder kernel resumes a thread in Service B's thread pool, which handles the client's request inside joinThreadPool.
8. The binder kernel suspends Service B and writes the data returned by Service B over to client A.
9. The binder kernel resumes client A.

The binder kernel driver acts as an intermediary between client A and service B. Every IBinder object passed through transact gets a unique, associated Binder object created in the binder kernel, which is how different clients are distinguished.