A Brief Look at the ZooKeeper Source Code (Part 1)


1. Basic Architecture

2. The ZAB Protocol

   ZooKeeper does not adopt the Paxos algorithm wholesale; instead it uses a protocol called ZooKeeper Atomic Broadcast (ZAB) as the core algorithm for data consistency.

    2.1 Leader election succeeds only when more than half of the servers vote for the candidate, and the elected leader can begin serving only after more than half of the machines in the cluster have completed state synchronization (data synchronization) with it.

    2.2 All transaction requests must be coordinated by a single, globally unique server, called the Leader; the remaining servers act as Followers. The leader converts each client transaction request into a transaction Proposal and distributes the proposal to all followers in the cluster. The leader then waits for the followers' feedback; once more than half of the followers have responded correctly, the leader sends a COMMIT message to all followers, instructing them to commit the preceding proposal.
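    To make the majority rule in 2.2 concrete, here is a toy sketch of ACK counting. This is an illustrative model only, not ZooKeeper's implementation; in the real code the Leader class tracks ACKs per outstanding proposal and counts its own ACK as well.

Java code
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Toy model of the ZAB commit rule: a proposal identified by its zxid may be
// committed once strictly more than half of the ensemble has ACKed it.
public class QuorumAckTracker {
    private final int ensembleSize;                        // total servers in the cluster
    private final Map<Long, Set<Long>> acksByZxid = new HashMap<>();

    public QuorumAckTracker(int ensembleSize) {
        this.ensembleSize = ensembleSize;
    }

    /** Records an ACK from server sid for proposal zxid and reports whether
     *  the proposal now has a quorum (more than half of the ensemble). */
    public synchronized boolean ack(long zxid, long sid) {
        Set<Long> acks = acksByZxid.computeIfAbsent(zxid, k -> new HashSet<>());
        acks.add(sid);
        return acks.size() > ensembleSize / 2;             // e.g. 3 of 5, or 2 of 3
    }
}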

3. Leader and Follower Startup Process

4. Request Processing

   4.1 Request processing chains

      4.1.1 Leader request processing chain: in the 3.4.x codebase this is PrepRequestProcessor → ProposalRequestProcessor → CommitProcessor → ToBeAppliedRequestProcessor → FinalRequestProcessor, with ProposalRequestProcessor additionally forwarding transactions to SyncRequestProcessor → AckRequestProcessor.

      4.1.2 Follower request processing chain: FollowerRequestProcessor → CommitProcessor → FinalRequestProcessor, with transactions arriving from the leader also passing through SyncRequestProcessor → SendAckRequestProcessor. Both chains follow the same wiring pattern, sketched below.
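      Each stage in these chains implements RequestProcessor and holds a reference to the next stage, so wiring a chain is just nesting constructors. A simplified sketch of the pattern (the real org.apache.zookeeper.server.RequestProcessor also declares shutdown() and a checked RequestProcessorException):

Java code
// Simplified sketch of the processor-chain pattern behind 4.1.1 and 4.1.2.
interface RequestProcessor {
    void processRequest(Object request);
}

final class Stage implements RequestProcessor {
    private final String name;
    private final RequestProcessor next;   // null for the final stage

    Stage(String name, RequestProcessor next) {
        this.name = name;
        this.next = next;
    }

    public void processRequest(Object request) {
        System.out.println(name + " handling " + request);   // this stage's work
        if (next != null) {
            next.processRequest(request);                     // hand off down the chain
        }
    }
}

// Wiring mirrors setupRequestProcessors(): constructors nest back to front, e.g.
//   new Stage("Prep", new Stage("Proposal", new Stage("Commit", new Stage("Final", null))))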

    4.2 Processing flow

    Taking a create request served by the leader as an example, the flow is as follows.

     The difference between the FollowerZooKeeperServer and LeaderZooKeeperServer processing flows is that FollowerRequestProcessor forwards transaction requests to the leader, and SendAckRequestProcessor returns the follower's ACK for a transaction proposal to the leader; the rest of the chain is identical. The difference between SendAckRequestProcessor and AckRequestProcessor is that AckRequestProcessor is a local call on the leader. FollowerRequestProcessor's transaction-forwarding code is as follows:

Java code
public void run() {
    try {
        while (!finished) {
            Request request = queuedRequests.take();
            if (LOG.isTraceEnabled()) {
                ZooTrace.logRequest(LOG, ZooTrace.CLIENT_REQUEST_TRACE_MASK,
                        'F', request, "");
            }
            if (request == Request.requestOfDeath) {
                break;
            }
            // We want to queue the request to be processed before we submit
            // the request to the leader so that we are ready to receive
            // the response
            nextProcessor.processRequest(request);

            // We now ship the request to the leader. As with all
            // other quorum operations, sync also follows this code
            // path, but different from others, we need to keep track
            // of the sync operations this follower has pending, so we
            // add it to pendingSyncs.
            switch (request.type) {
            case OpCode.sync:
                zks.pendingSyncs.add(request);
                zks.getFollower().request(request);
                break;
            case OpCode.create:
            case OpCode.delete:
            case OpCode.setData:
            case OpCode.setACL:
            case OpCode.createSession:
            case OpCode.closeSession:
            case OpCode.multi:
                zks.getFollower().request(request);
                break;
            }
        }
    } catch (Exception e) {
        LOG.error("Unexpected exception causing exit", e);
    }
    LOG.info("FollowerRequestProcessor exited loop!");
}

5. Data Synchronization

    ZooKeeper cluster data synchronization falls into four categories: direct differential sync (DIFF), rollback followed by differential sync (TRUNC+DIFF), rollback-only sync (TRUNC), and full sync (SNAP). Before synchronizing, the leader initializes peerLastZxid (the last ZXID processed by the learner, i.e. the follower being synchronized), minCommittedLog (the smallest ZXID in the leader's proposal cache committedLog), and maxCommittedLog (the largest ZXID in committedLog), then uses these three ZXID values to decide the sync type and carry out the sync. See LearnerHandler's run method:

Java code
.....
long peerLastZxid;
StateSummary ss = null;
long zxid = qp.getZxid();
long newEpoch = leader.getEpochToPropose(this.getSid(), lastAcceptedEpoch);

if (this.getVersion() < 0x10000) {
    // we are going to have to extrapolate the epoch information
    long epoch = ZxidUtils.getEpochFromZxid(zxid);
    ss = new StateSummary(epoch, zxid);
    // fake the message
    leader.waitForEpochAck(this.getSid(), ss);
} else {
    byte ver[] = new byte[4];
    ByteBuffer.wrap(ver).putInt(0x10000);
    QuorumPacket newEpochPacket = new QuorumPacket(Leader.LEADERINFO, ZxidUtils.makeZxid(newEpoch, 0), ver, null);
    oa.writeRecord(newEpochPacket, "packet");
    bufferedOutput.flush();
    QuorumPacket ackEpochPacket = new QuorumPacket();
    ia.readRecord(ackEpochPacket, "packet");
    if (ackEpochPacket.getType() != Leader.ACKEPOCH) {
        LOG.error(ackEpochPacket.toString()
                + " is not ACKEPOCH");
        return;
    }
    ByteBuffer bbepoch = ByteBuffer.wrap(ackEpochPacket.getData());
    ss = new StateSummary(bbepoch.getInt(), ackEpochPacket.getZxid());
    leader.waitForEpochAck(this.getSid(), ss);
}
peerLastZxid = ss.getLastZxid();

/* the default to send to the follower */
int packetToSend = Leader.SNAP;
long zxidToSend = 0;
long leaderLastZxid = 0;
/** the packets that the follower needs to get updates from **/
long updates = peerLastZxid;

/* we are sending the diff check if we have proposals in memory to be able to
 * send a diff to the
 */
ReentrantReadWriteLock lock = leader.zk.getZKDatabase().getLogLock();
ReadLock rl = lock.readLock();
try {
    rl.lock();
    final long maxCommittedLog = leader.zk.getZKDatabase().getmaxCommittedLog();
    final long minCommittedLog = leader.zk.getZKDatabase().getminCommittedLog();
    LOG.info("Synchronizing with Follower sid: " + sid
            +" maxCommittedLog=0x"+Long.toHexString(maxCommittedLog)
            +" minCommittedLog=0x"+Long.toHexString(minCommittedLog)
            +" peerLastZxid=0x"+Long.toHexString(peerLastZxid));

    LinkedList<Proposal> proposals = leader.zk.getZKDatabase().getCommittedLog();

    if (proposals.size() != 0) {
        LOG.debug("proposal size is {}", proposals.size());
        if ((maxCommittedLog >= peerLastZxid)
                && (minCommittedLog <= peerLastZxid)) {
            LOG.debug("Sending proposals to follower");

            // as we look through proposals, this variable keeps track of previous
            // proposal Id.
            long prevProposalZxid = minCommittedLog;

            // Keep track of whether we are about to send the first packet.
            // Before sending the first packet, we have to tell the learner
            // whether to expect a trunc or a diff
            boolean firstPacket=true;

            for (Proposal propose: proposals) {
                // skip the proposals the peer already has
                if (propose.packet.getZxid() <= peerLastZxid) {
                    prevProposalZxid = propose.packet.getZxid();
                    continue;
                } else {
                    // If we are sending the first packet, figure out whether to trunc
                    // in case the follower has some proposals that the leader doesn't
                    if (firstPacket) {
                        firstPacket = false;
                        // Does the peer have some proposals that the leader hasn't seen yet
                        if (prevProposalZxid < peerLastZxid) {
                            // send a trunc message before sending the diff
                            packetToSend = Leader.TRUNC;
                            LOG.info("Sending TRUNC");
                            zxidToSend = prevProposalZxid;
                            updates = zxidToSend;
                        }
                        else {
                            // Just send the diff
                            packetToSend = Leader.DIFF;
                            LOG.info("Sending diff");
                            zxidToSend = maxCommittedLog;
                        }

                    }
                    queuePacket(propose.packet);
                    QuorumPacket qcommit = new QuorumPacket(Leader.COMMIT, propose.packet.getZxid(),
                            null, null);
                    queuePacket(qcommit);
                }
            }
        } else if (peerLastZxid > maxCommittedLog) {
            LOG.debug("Sending TRUNC to follower zxidToSend=0x{} updates=0x{}",
                    Long.toHexString(maxCommittedLog),
                    Long.toHexString(updates));

            packetToSend = Leader.TRUNC;
            zxidToSend = maxCommittedLog;
            updates = zxidToSend;
        } else {
            LOG.warn("Unhandled proposal scenario");
        }
    } else {
        // just let the state transfer happen
        LOG.debug("proposals is empty");
    }

    leaderLastZxid = leader.startForwarding(this, updates);
    if (peerLastZxid == leaderLastZxid) {
        LOG.debug("Leader and follower are in sync, sending empty diff. zxid=0x{}",
                Long.toHexString(leaderLastZxid));
        // We are in sync so we'll do an empty diff
        packetToSend = Leader.DIFF;
        zxidToSend = leaderLastZxid;
    }
} finally {
    rl.unlock();
}

QuorumPacket newLeaderQP = new QuorumPacket(Leader.NEWLEADER,
        ZxidUtils.makeZxid(newEpoch, 0), null, null);
if (getVersion() < 0x10000) {
    oa.writeRecord(newLeaderQP, "packet");
} else {
    queuedPackets.add(newLeaderQP);
}
bufferedOutput.flush();
//Need to set the zxidToSend to the latest zxid
if (packetToSend == Leader.SNAP) {
    zxidToSend = leader.zk.getZKDatabase().getDataTreeLastProcessedZxid();
}
oa.writeRecord(new QuorumPacket(packetToSend, zxidToSend, null, null), "packet");
bufferedOutput.flush();

/* if we are not truncating or sending a diff just send a snapshot */
if (packetToSend == Leader.SNAP) {
    LOG.info("Sending snapshot last zxid of peer is 0x"
            + Long.toHexString(peerLastZxid) + " "
            + " zxid of leader is 0x"
            + Long.toHexString(leaderLastZxid)
            + "sent zxid of db as 0x"
            + Long.toHexString(zxidToSend));
    // Dump data to peer
    leader.zk.getZKDatabase().serializeSnapshot(oa);
    oa.writeString("BenWasHere", "signature");
}
bufferedOutput.flush();

// Start sending packets
new Thread() {
    public void run() {
        Thread.currentThread().setName(
                "Sender-" + sock.getRemoteSocketAddress());
        try {
            sendPackets();
        } catch (InterruptedException e) {
            LOG.warn("Unexpected interruption",e);
        }
    }
}.start();

/*
 * Have to wait for the first ACK, wait until
 * the leader is ready, and only then we can
 * start processing messages.
 */
qp = new QuorumPacket();
ia.readRecord(qp, "packet");
if(qp.getType() != Leader.ACK){
    LOG.error("Next packet was supposed to be an ACK");
    return;
}
LOG.info("Received NEWLEADER-ACK message from " + getSid());
leader.waitForNewLeaderAck(getSid(), qp.getZxid(), getLearnerType());
.....
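    Condensed, the branching above reduces to the following decision. This is a sketch under assumed names, not a method in the codebase; the real run() interleaves this decision with queuing the DIFF proposals and commits, and re-checks for an empty diff after startForwarding().

Java code
// Sketch of the sync-type decision only; SyncType stands in for the
// Leader.DIFF/TRUNC/SNAP packet constants used in the real code.
public class SyncDecision {
    enum SyncType { DIFF, TRUNC, SNAP }

    static SyncType choose(long peerLastZxid, long minCommittedLog,
                           long maxCommittedLog, boolean committedLogEmpty) {
        if (committedLogEmpty) {
            return SyncType.SNAP;        // no cached proposals: full snapshot
        }
        if (peerLastZxid > maxCommittedLog) {
            return SyncType.TRUNC;       // peer is ahead of the leader: roll back
        }
        if (peerLastZxid >= minCommittedLog) {
            return SyncType.DIFF;        // inside the committedLog window: DIFF, or
                                         // TRUNC+DIFF when the peer holds a proposal
                                         // the leader never saw (prevProposalZxid check)
        }
        return SyncType.SNAP;            // behind the window: full snapshot
    }
}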

6. Watch

    6.1 Server side

    At the tail of the request processing chain, FinalRequestProcessor.processRequest() determines whether a watch needs to be handled.

    Registering a watch calls DataTree.getData(), which records the current ServerCnxn and path in dataWatches or childWatches. Taking getData as an example, the code is as follows:

Java code
case OpCode.getData: {
    ...
    byte b[] = zks.getZKDatabase().getData(getDataRequest.getPath(), stat,
            getDataRequest.getWatch() ? cnxn : null);
    rsp = new GetDataResponse(b, stat);
    break;
}
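    On the registration side, DataTree.getData() hands the ServerCnxn to a WatchManager, which maintains the two maps that triggerWatch() below operates on (watchTable: path → watchers, watch2Paths: watcher → paths). A simplified sketch of what registration amounts to (the real class is org.apache.zookeeper.server.WatchManager):

Java code
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Simplified sketch of WatchManager's registration bookkeeping; W stands in
// for the Watcher interface, which ServerCnxn implements on the server.
public class WatchTableSketch<W> {
    private final Map<String, Set<W>> watchTable = new HashMap<>();  // path -> watchers
    private final Map<W, Set<String>> watch2Paths = new HashMap<>(); // watcher -> paths

    public synchronized void addWatch(String path, W watcher) {
        watchTable.computeIfAbsent(path, k -> new HashSet<>()).add(watcher);
        watch2Paths.computeIfAbsent(watcher, k -> new HashSet<>()).add(path);
    }
}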

    A watch is triggered when DataTree.processTxn is invoked while a transaction is persisted; WatchManager.triggerWatch() then calls process() on the registered ServerCnxn, which sends the notification back to the client. Taking create as an example, the code is as follows:

DataTree.createNode() code
public String createNode(String path, byte data[], List<ACL> acl,
        long ephemeralOwner, int parentCVersion, long zxid, long time)
        throws KeeperException.NoNodeException,
        KeeperException.NodeExistsException {
    .....
    dataWatches.triggerWatch(path, Event.EventType.NodeCreated);
    childWatches.triggerWatch(parentName.equals("") ? "/" : parentName,
            Event.EventType.NodeChildrenChanged);
    return path;
}
WatchManager.triggerWatch() code
public Set<Watcher> triggerWatch(String path, EventType type, Set<Watcher> supress) {
    WatchedEvent e = new WatchedEvent(type,
            KeeperState.SyncConnected, path);
    HashSet<Watcher> watchers;
    synchronized (this) {
        // a watch is removed as soon as it has been triggered once
        watchers = watchTable.remove(path);
        if (watchers == null || watchers.isEmpty()) {
            if (LOG.isTraceEnabled()) {
                ZooTrace.logTraceMessage(LOG,
                        ZooTrace.EVENT_DELIVERY_TRACE_MASK,
                        "No watchers for " + path);
            }
            return null;
        }
        for (Watcher w : watchers) {
            HashSet<String> paths = watch2Paths.get(w);
            if (paths != null) {
                paths.remove(path);
            }
        }
    }
    for (Watcher w : watchers) {
        if (supress != null && supress.contains(w)) {
            continue;
        }
        w.process(e);
    }
    return watchers;
}
NIOServerCnxn.process() code
@Override
synchronized public void process(WatchedEvent event) {
    ReplyHeader h = new ReplyHeader(-1, -1L, 0);
    if (LOG.isTraceEnabled()) {
        ZooTrace.logTraceMessage(LOG, ZooTrace.EVENT_DELIVERY_TRACE_MASK,
                                 "Deliver event " + event + " to 0x"
                                 + Long.toHexString(this.sessionId)
                                 + " through " + this);
    }

    // Convert WatchedEvent to a type that can be sent over the wire
    WatcherEvent e = event.getWrapper();

    sendResponse(h, e, "notification");
}

    6.2 Client side

    When registering a watch, the client marks the outgoing request as watched and wraps the path-to-watcher mapping in a DataWatchRegistration. Taking getData as an example, the code is as follows:

ZooKeeper.getData() code
public byte[] getData(final String path, Watcher watcher, Stat stat)
    throws KeeperException, InterruptedException
 {
    .....
    // the watch contains the un-chroot path
    WatchRegistration wcb = null;
    if (watcher != null) {
        wcb = new DataWatchRegistration(watcher, clientPath);
    }
    ......
    RequestHeader h = new RequestHeader();
    h.setType(ZooDefs.OpCode.getData);
    GetDataRequest request = new GetDataRequest();
    request.setPath(serverPath);
    request.setWatch(watcher != null);
    GetDataResponse response = new GetDataResponse();
    ReplyHeader r = cnxn.submitRequest(h, request, response, wcb);
    .....
}
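    For orientation, a minimal usage sketch that exercises this registration path (the connect string and znode path are placeholders; it assumes a running server):

Java code
import org.apache.zookeeper.Watcher;
import org.apache.zookeeper.ZooKeeper;
import org.apache.zookeeper.data.Stat;

// Placeholder connect string and path; requires a reachable ZooKeeper server.
public class GetDataWatchDemo {
    public static void main(String[] args) throws Exception {
        ZooKeeper zk = new ZooKeeper("127.0.0.1:2181", 30000, event -> {});
        Stat stat = new Stat();
        // A non-null Watcher makes the client set request.setWatch(true) and
        // wrap (watcher, path) in the DataWatchRegistration shown above.
        byte[] data = zk.getData("/app/config", event -> {
            if (event.getType() == Watcher.Event.EventType.NodeDataChanged) {
                System.out.println("data changed at " + event.getPath());
            }
        }, stat);
        System.out.println("read " + data.length + " bytes, version " + stat.getVersion());
    }
}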
   ClientCnxn.submitRequest() puts the request on the outgoingQueue. ClientCnxn's SendThread loops calling ClientCnxnSocketNIO.doTransport(), which eventually calls ClientCnxnSocketNIO.doIO() to take requests off the outgoingQueue, send them to the server, and move them onto the pendingQueue. When the server's response arrives, ClientCnxnSocketNIO.doIO() calls ClientCnxn.SendThread.readResponse(), which ultimately calls ClientCnxn.finishPacket(); that in turn calls WatchRegistration.register(), which stores the mapping in ZooKeeper.ZKWatchManager's dataWatches, existWatches, or childWatches, completing the registration.
ClientCnxn.finishPacket() code
private void finishPacket(Packet p) {
    if (p.watchRegistration != null) {
        p.watchRegistration.register(p.replyHeader.getErr());
    }

    if (p.cb == null) {
        synchronized (p) {
            p.finished = true;
            p.notifyAll();
        }
    } else {
        p.finished = true;
        eventThread.queuePacket(p);
    }
}
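   The outgoingQueue → pendingQueue handoff described above can be pictured with a toy model (an assumed simplification; the real ClientCnxnSocketNIO multiplexes reads and writes on an NIO selector):

Java code
import java.util.concurrent.LinkedBlockingQueue;

// Toy model of the client's two queues: requests wait in outgoingQueue until
// written to the socket, then move to pendingQueue until their response arrives.
class ToyClientCnxn {
    static class Packet { /* request, response, watchRegistration, ... */ }

    final LinkedBlockingQueue<Packet> outgoingQueue = new LinkedBlockingQueue<>();
    final LinkedBlockingQueue<Packet> pendingQueue = new LinkedBlockingQueue<>();

    void doWrite() throws InterruptedException {
        Packet p = outgoingQueue.take();   // next request to send
        // ... serialize p and write it to the socket here ...
        pendingQueue.add(p);               // now awaiting the server's response
    }

    void onResponse() {
        Packet p = pendingQueue.poll();    // responses come back in request order
        // ... deserialize into p, then finishPacket(p) completes registration ...
    }
}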
   During registration the Watcher object itself is not shipped to the server with the request; only the watch flag is serialized. See ClientCnxn.Packet.createBB():
ClientCnxn.Packet.createBB() code
public void createBB() {
    try {
        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        BinaryOutputArchive boa = BinaryOutputArchive.getArchive(baos);
        boa.writeInt(-1, "len"); // We'll fill this in later
        if (requestHeader != null) {
            requestHeader.serialize(boa, "header");
        }
        if (request instanceof ConnectRequest) {
            request.serialize(boa, "connect");
            // append "am-I-allowed-to-be-readonly" flag
            boa.writeBool(readOnly, "readOnly");
        } else if (request != null) {
            request.serialize(boa, "request");
        }
        baos.close();
        this.bb = ByteBuffer.wrap(baos.toByteArray());
        this.bb.putInt(this.bb.capacity() - 4);
        this.bb.rewind();
    } catch (IOException e) {
        LOG.warn("Ignoring unexpected exception", e);
    }
}
      Watches are triggered on the client in ClientCnxn.SendThread.readResponse(): replyHdr.getXid() == -1 marks a watch notification from the server. The notification is deserialized into a WatcherEvent, wrapped in a WatchedEvent, and passed to ClientCnxn.EventThread.queueEvent(), which puts it on the waitingEvents queue. ClientCnxn.EventThread loops over that queue and dispatches each event via ClientCnxn.EventThread.processEvent().
ClientCnxn.SendThread.readResponse() code
void readResponse(ByteBuffer incomingBuffer) throws IOException {
    .....
    if (replyHdr.getXid() == -1) {
        // -1 means notification
        if (LOG.isDebugEnabled()) {
            LOG.debug("Got notification sessionid:0x"
                + Long.toHexString(sessionId));
        }
        WatcherEvent event = new WatcherEvent();
        event.deserialize(bbia, "response");

        // convert from a server path to a client path
        if (chrootPath != null) {
            String serverPath = event.getPath();
            if(serverPath.compareTo(chrootPath)==0)
                event.setPath("/");
            else if (serverPath.length() > chrootPath.length())
                event.setPath(serverPath.substring(chrootPath.length()));
            else {
                LOG.warn("Got server path " + event.getPath()
                        + " which is too short for chroot path "
                        + chrootPath);
            }
        }

        WatchedEvent we = new WatchedEvent(event);
        if (LOG.isDebugEnabled()) {
            LOG.debug("Got " + we + " for sessionid 0x"
                    + Long.toHexString(sessionId));
        }

        eventThread.queueEvent( we );
        return;
    }
    ...
    /*
     * Since requests are processed in order, we better get a response
     * to the first request!
     */
    try {
        ...
    } finally {
        finishPacket(packet);
    }
}
ClientCnxn.EventThread.queueEvent() code
public void queueEvent(WatchedEvent event) {
    if (event.getType() == EventType.None
            && sessionState == event.getState()) {
        return;
    }
    sessionState = event.getState();

    // materialize the watchers based on the event
    WatcherSetEventPair pair = new WatcherSetEventPair(
            watcher.materialize(event.getState(), event.getType(),
                    event.getPath()),
                    event);
    // queue the pair (watch set & event) for later processing
    waitingEvents.add(pair);
}
ZooKeeper.ZKWatchManager.materialize() code
public Set<Watcher> materialize(Watcher.Event.KeeperState state,
                                Watcher.Event.EventType type,
                                String clientPath)
{
    Set<Watcher> result = new HashSet<Watcher>();

    switch (type) {
    case None:
        result.add(defaultWatcher);
        boolean clear = ClientCnxn.getDisableAutoResetWatch() &&
                state != Watcher.Event.KeeperState.SyncConnected;

        synchronized(dataWatches) {
            for(Set<Watcher> ws: dataWatches.values()) {
                result.addAll(ws);
            }
            if (clear) {
                dataWatches.clear();
            }
        }

        synchronized(existWatches) {
            for(Set<Watcher> ws: existWatches.values()) {
                result.addAll(ws);
            }
            if (clear) {
                existWatches.clear();
            }
        }

        synchronized(childWatches) {
            for(Set<Watcher> ws: childWatches.values()) {
                result.addAll(ws);
            }
            if (clear) {
                childWatches.clear();
            }
        }

        return result;
    // the watcher is fetched with remove() from dataWatches/existWatches/childWatches,
    // showing that client-side watches are also one-shot: registered once, removed once triggered
    case NodeDataChanged:
    case NodeCreated:
        synchronized (dataWatches) {
            addTo(dataWatches.remove(clientPath), result);
        }
        synchronized (existWatches) {
            addTo(existWatches.remove(clientPath), result);
        }
        break;
    case NodeChildrenChanged:
        synchronized (childWatches) {
            addTo(childWatches.remove(clientPath), result);
        }
        break;
    case NodeDeleted:
        synchronized (dataWatches) {
            addTo(dataWatches.remove(clientPath), result);
        }
        // XXX This shouldn't be needed, but just in case
        synchronized (existWatches) {
            Set<Watcher> list = existWatches.remove(clientPath);
            if (list != null) {
                addTo(existWatches.remove(clientPath), result);
                LOG.warn("We are triggering an exists watch for delete! Shouldn't happen!");
            }
        }
        synchronized (childWatches) {
            addTo(childWatches.remove(clientPath), result);
        }
        break;
    default:
        String msg = "Unhandled watch event type " + type
            + " with state " + state + " on path " + clientPath;
        LOG.error(msg);
        throw new RuntimeException(msg);
    }

    return result;
}
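    Since the server-side watchTable.remove(path) in triggerWatch() and the client-side remove() calls above both make watches one-shot, an application that wants continuous notifications must re-register after every event. A common pattern, as a sketch (the path is a placeholder and errors are merely logged):

Java code
import org.apache.zookeeper.ZooKeeper;

// Sketch of the re-registration pattern implied by the one-shot semantics above.
public class RepeatingWatch {
    public static void watchData(final ZooKeeper zk, final String path) {
        try {
            // Re-arm on every delivered event; each getData() registers a fresh watch.
            zk.getData(path, event -> watchData(zk, path), null);
        } catch (Exception e) {
            e.printStackTrace();   // placeholder error handling
        }
    }
}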
 
