ZooKeeper Learning: ZooKeeper Source Code Analysis

One, macro analysis of the ZooKeeper source structure

  A macro view of the ZooKeeper source structure is shown below:

  [Figure: macro view of the ZooKeeper source structure]

  To analyze the source code, you first need a macro view of the whole ZooKeeper structure. The key point is that ZooKeeper is divided into two parts: the server cluster and the client.

  On the server side:

  • Each ZooKeeper server moves through three states: initialization, running, and shutdown. Once the servers are running and have formed a ZooKeeper cluster, they can provide services to the outside (a single machine can also run standalone);
  • After a server starts its service, it goes through initialization to join an available cluster;

  On the client side:

  • The client encapsulates an API layer, so that every access goes through the same API;
  • The client API must follow a fixed protocol when packaging messages;
  • The network layer must implement serialization, deserialization, and connection establishment;

  Naturally, the protocol packaging, serialization/deserialization, and connection handling done by the client must have counterparts on the server. We can intercept a request and inspect it by writing a pseudo server, as in the following code:

import java.net.ServerSocket;
import java.net.Socket;
import java.nio.ByteBuffer;

import org.apache.jute.BinaryInputArchive;
import org.apache.zookeeper.proto.RequestHeader;
import org.apache.zookeeper.server.ByteBufferInputStream;

public class SocketListener {

    public static void main(String[] args) throws Exception {
        // Listen on ZooKeeper's default client port and accept a single connection
        ServerSocket serverSocket = new ServerSocket(2181);
        Socket accept = serverSocket.accept();
        byte[] result = new byte[2048];
        accept.getInputStream().read(result);

        // Deserialize the raw bytes with ZooKeeper's Jute archive classes
        ByteBuffer bb = ByteBuffer.wrap(result);
        ByteBufferInputStream bbis = new ByteBufferInputStream(bb);
        BinaryInputArchive bia = BinaryInputArchive.getArchive(bbis);
        RequestHeader header2 = new RequestHeader();
        header2.deserialize(bia, "header");
        System.out.println(header2);
        bbis.close();
    }
}
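  Run the pseudo server first, then the client below. Because the pseudo server never replies, the session handshake cannot complete and the create() call will not succeed, but the first packet the client sends is already enough to inspect the protocol.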

  Then connect to it through the client:

import java.io.IOException;

import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.KeeperException;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class ZooKeeperTest {

    private ZooKeeper zooKeeper;

    public ZooKeeperTest() {
        try {
            // connect string, session timeout (ms), watcher, canBeReadOnly
            zooKeeper = new ZooKeeper("localhost:2181", 5000, null, false);
        } catch (IOException e) {
            e.printStackTrace();
        }
    }

    public void add(String path, String data) {
        try {
            String newPath = zooKeeper.create(path, data.getBytes(), ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
            System.out.println("created: " + newPath);
        } catch (KeeperException e) {
            e.printStackTrace();
        } catch (InterruptedException e) {
            e.printStackTrace();
        }
    }

    public static void main(String[] args) {
        ZooKeeperTest zooKeeperTest = new ZooKeeperTest();
        zooKeeperTest.add("/monkey2", "2019");
    }
}

  The pseudo server then receives and prints the request:

RequestHeader{protocolVersion=45, lastZxidSeen=0, timeOut=0, sessionId=21474836480000, passwd=[]}

  These fields are simply the protocol packaging of the request message.
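  To make the framing concrete, here is a minimal sketch of the other direction: serializing a request header with Jute's BinaryOutputArchive, the same mechanism the client uses before writing to the socket (the class name below is just an example for illustration):

import java.io.ByteArrayOutputStream;

import org.apache.jute.BinaryOutputArchive;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.proto.RequestHeader;

public class RequestHeaderSerializeDemo {

    public static void main(String[] args) throws Exception {
        // xid identifies the request, type is the operation code (here: create)
        RequestHeader header = new RequestHeader(1, ZooDefs.OpCode.create);

        ByteArrayOutputStream baos = new ByteArrayOutputStream();
        BinaryOutputArchive boa = BinaryOutputArchive.getArchive(baos);
        header.serialize(boa, "header");   // Jute writes the fields in a fixed binary layout

        System.out.println("serialized header length = " + baos.toByteArray().length + " bytes");
    }
}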

Two, server source code analysis

  1. Server initialization

  Starting from the ZooKeeper startup script ./zkServer.sh start, opening the script reveals the server startup entry point: org.apache.zookeeper.server.quorum.QuorumPeerMain.

  Note: the server's data storage structure is org.apache.zookeeper.server.DataTree, and the DataTree is held inside ZKDatabase.

  After the server starts, it loads the configuration file zoo.cfg, loads the data, establishes network communication, and runs the leader election, in that order. The code is as follows:

@Override
public synchronized void start() {
    if (!getView().containsKey(myid)) {
        throw new RuntimeException("My id " + myid + " not in the peer list");
    }
    loadDataBase();               // load znode data: read the snapshot files on disk (under the data directory)
    startServerCnxnFactory();     // establish network communication
    try {
        adminServer.start();
    } catch (AdminServerException e) {
        LOG.warn("Problem starting AdminServer", e);
        System.out.println(e);
    }
    startLeaderElection();        // leader election
    startJvmPauseMonitor();
    super.start();                // only now is the thread's run() method actually called
}

  Note: the configuration has already been loaded before start() is called, as in the following code:

public void runFromConfig(QuorumPeerConfig config) throws IOException, AdminServerException {
    try {
        ManagedUtil.registerLog4jMBeans();
    } catch (JMException e) {
        LOG.warn("Unable to register log4j JMX control", e);
    }

    LOG.info("Starting quorum peer");
    MetricsProvider metricsProvider;
    try {
        metricsProvider = MetricsProviderBootstrap.startMetricsProvider(
            config.getMetricsProviderClassName(),
            config.getMetricsProviderConfiguration());
    } catch (MetricsProviderLifeCycleException error) {
        throw new IOException("Cannot boot MetricsProvider " + config.getMetricsProviderClassName(), error);
    }
    try {
        ServerMetrics.metricsProviderInitialized(metricsProvider);
        ServerCnxnFactory cnxnFactory = null;
        ServerCnxnFactory secureCnxnFactory = null;

        if (config.getClientPortAddress() != null) {
            cnxnFactory = ServerCnxnFactory.createFactory();
            cnxnFactory.configure(config.getClientPortAddress(), config.getMaxClientCnxns(), config.getClientPortListenBacklog(), false);
        }

        if (config.getSecureClientPortAddress() != null) {
            secureCnxnFactory = ServerCnxnFactory.createFactory();
            secureCnxnFactory.configure(config.getSecureClientPortAddress(), config.getMaxClientCnxns(), config.getClientPortListenBacklog(), true);
        }

        quorumPeer = getQuorumPeer();
        quorumPeer.setTxnFactory(new FileTxnSnapLog(config.getDataLogDir(), config.getDataDir()));
        quorumPeer.enableLocalSessions(config.areLocalSessionsEnabled());
        quorumPeer.enableLocalSessionsUpgrading(config.isLocalSessionsUpgradingEnabled());
        //quorumPeer.setQuorumPeers(config.getAllMembers());
        quorumPeer.setElectionType(config.getElectionAlg());
        quorumPeer.setMyid(config.getServerId());
        quorumPeer.setTickTime(config.getTickTime());
        quorumPeer.setMinSessionTimeout(config.getMinSessionTimeout());
        quorumPeer.setMaxSessionTimeout(config.getMaxSessionTimeout());
        quorumPeer.setInitLimit(config.getInitLimit());
        quorumPeer.setSyncLimit(config.getSyncLimit());
        quorumPeer.setConnectToLearnerMasterLimit(config.getConnectToLearnerMasterLimit());
        quorumPeer.setObserverMasterPort(config.getObserverMasterPort());
        quorumPeer.setConfigFileName(config.getConfigFilename());
        quorumPeer.setClientPortListenBacklog(config.getClientPortListenBacklog());
        quorumPeer.setZKDatabase(new ZKDatabase(quorumPeer.getTxnFactory()));
        quorumPeer.setQuorumVerifier(config.getQuorumVerifier(), false);
        if (config.getLastSeenQuorumVerifier() != null) {
            quorumPeer.setLastSeenQuorumVerifier(config.getLastSeenQuorumVerifier(), false);
        }
        quorumPeer.initConfigInZKDatabase();
        quorumPeer.setCnxnFactory(cnxnFactory);
        quorumPeer.setSecureCnxnFactory(secureCnxnFactory);
        quorumPeer.setSslQuorum(config.isSslQuorum());
        quorumPeer.setUsePortUnification(config.shouldUsePortUnification());
        quorumPeer.setLearnerType(config.getPeerType());
        quorumPeer.setSyncEnabled(config.getSyncEnabled());
        quorumPeer.setQuorumListenOnAllIPs(config.getQuorumListenOnAllIPs());
        if (config.sslQuorumReloadCertFiles) {
            quorumPeer.getX509Util().enableCertFileReloading();
        }

        // sets quorum sasl authentication configurations
        quorumPeer.setQuorumSaslEnabled(config.quorumEnableSasl);
        if (quorumPeer.isQuorumSaslAuthEnabled()) {
            quorumPeer.setQuorumServerSaslRequired(config.quorumServerRequireSasl);
            quorumPeer.setQuorumLearnerSaslRequired(config.quorumLearnerRequireSasl);
            quorumPeer.setQuorumServicePrincipal(config.quorumServicePrincipal);
            quorumPeer.setQuorumServerLoginContext(config.quorumServerLoginContext);
            quorumPeer.setQuorumLearnerLoginContext(config.quorumLearnerLoginContext);
        }
        quorumPeer.setQuorumCnxnThreadsSize(config.quorumCnxnThreadsSize);
        quorumPeer.initialize();

        if (config.jvmPauseMonitorToRun) {
            quorumPeer.setJvmPauseMonitor(new JvmPauseMonitor(config));
        }

        quorumPeer.start();    // this calls QuorumPeer's overridden start() (shown above); the actual thread is started by super.start() inside it
        quorumPeer.join();     // wait for the quorum peer thread to finish
    } catch (InterruptedException e) {
        // warn, but generally this is ok
        LOG.warn("Quorum Peer interrupted", e);
    } finally {
        if (metricsProvider != null) {
            try {
                metricsProvider.stop();
            } catch (Throwable error) {
                LOG.warn("Error while stopping metrics", error);
            }
        }
    }
}
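  For reference, here is a minimal sketch of how that QuorumPeerConfig gets populated before runFromConfig() is invoked, mirroring what QuorumPeerMain does when it parses the configuration (the zoo.cfg path below is only an example):

import org.apache.zookeeper.server.quorum.QuorumPeerConfig;

public class ConfigLoadSketch {

    public static void main(String[] args) throws Exception {
        QuorumPeerConfig config = new QuorumPeerConfig();
        config.parse("/opt/zookeeper/conf/zoo.cfg");   // reads tickTime, dataDir, clientPort, server.N entries, ...

        System.out.println("myid        = " + config.getServerId());
        System.out.println("clientPort  = " + config.getClientPortAddress());
        System.out.println("electionAlg = " + config.getElectionAlg());
    }
}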

  The detailed process of server startup is shown in the following figure:

  [Figure: detailed server startup flow]

  2. Server request response

  Once running, the server responds to external requests, as shown in the following figure (handling a write request):

  [Figure: how the server handles a write request]

  The process above follows ZooKeeper's Zab consistency protocol. Zab is short for ZooKeeper Atomic Broadcast; ZooKeeper uses it to guarantee the eventual consistency of distributed transactions.
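  At the heart of Zab is the majority (quorum) rule: a proposal is committed, and a leader is elected, only after more than half of the ensemble has acknowledged it. The snippet below is a simplified illustration of that rule, not ZooKeeper's own QuorumVerifier:

public class QuorumMathDemo {

    // an ensemble of n servers needs a strict majority of acknowledgements
    static boolean hasQuorum(int ensembleSize, int acks) {
        return acks > ensembleSize / 2;
    }

    public static void main(String[] args) {
        System.out.println(hasQuorum(3, 2));  // true  -> a 3-node ensemble tolerates 1 failure
        System.out.println(hasQuorum(5, 2));  // false -> 2 acks are not a majority of 5
        System.out.println(hasQuorum(5, 3));  // true  -> a 5-node ensemble tolerates 2 failures
    }
}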

  For the details of the Zab protocol and the election rules, please refer to a dedicated article on Zab.

Three, client source code analysis

  1. Client initialization

  The client startup process is as follows:

  [Figure: client startup flow]

  On startup, the client parses the connect string and initializes the network layer (the ClientCnxn object). ClientCnxn in turn creates two threads, SendThread and EventThread, which manage request/response traffic and watcher events respectively. The code is as follows:

public ClientCnxn(
    String chrootPath,
    HostProvider hostProvider,
    int sessionTimeout,
    ZooKeeper zooKeeper,
    ClientWatchManager watcher,
    ClientCnxnSocket clientCnxnSocket,
    long sessionId,
    byte[] sessionPasswd,
    boolean canBeReadOnly) {
    this.zooKeeper = zooKeeper;
    this.watcher = watcher;
    this.sessionId = sessionId;
    this.sessionPasswd = sessionPasswd;
    this.sessionTimeout = sessionTimeout;
    this.hostProvider = hostProvider;
    this.chrootPath = chrootPath;

    connectTimeout = sessionTimeout / hostProvider.size();
    readTimeout = sessionTimeout * 2 / 3;
    readOnly = canBeReadOnly;

    sendThread = new SendThread(clientCnxnSocket);
    eventThread = new EventThread();
    this.clientConfig = zooKeeper.getClientConfig();
    initRequestTimeout();
}

public void start() {
    sendThread.start();
    eventThread.start();
}
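  Note how the timeouts are derived in the constructor: with the 5000 ms session timeout used in the earlier client and a single address in the connect string, connectTimeout works out to 5000 / 1 = 5000 ms and readTimeout to 5000 * 2 / 3 ≈ 3333 ms.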

  2. Client request management

  The process for the client to access the server is as follows:

  [Figure: client request flow]

  As the figure shows, ClientCnxn starts two threads: SendThread handles sending requests and processing the server's responses, while EventThread handles watcher (listening) events.

  Both threads manage requests through queues: outgoingQueue holds requests waiting to be sent, pendingQueue holds requests that have already been sent and are awaiting a server response (so that each response can be matched to its request when it arrives), and waitingEvents temporarily holds the watcher events that need to be triggered. This queue-based design is a key reason for ZooKeeper's high performance.

  In short, the client relies on a few core techniques: queues for request management, dedicated threads that drain those queues, NIO as the default communication layer, and synchronized locks on the queues.
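  The following is a deliberately simplified model of that queue pattern (plain Java, not ZooKeeper's actual Packet/ClientCnxn classes): an application thread enqueues requests, a send thread moves them from the outgoing queue to the pending queue as it writes them, and responses are matched back off the pending queue in order.

import java.util.concurrent.LinkedBlockingQueue;

public class QueueModel {

    static class Packet {
        final int xid;               // request id used to match the response
        Packet(int xid) { this.xid = xid; }
    }

    private final LinkedBlockingQueue<Packet> outgoingQueue = new LinkedBlockingQueue<>();
    private final LinkedBlockingQueue<Packet> pendingQueue  = new LinkedBlockingQueue<>();

    // called by the application thread
    void submit(Packet p) {
        outgoingQueue.add(p);
    }

    // what a send thread does in its loop: take a request, write it, remember it
    void sendOne() throws InterruptedException {
        Packet p = outgoingQueue.take();   // blocks until a request is queued
        // ... write p to the socket here ...
        pendingQueue.add(p);               // keep it until the server answers
    }

    // called when a response with the given xid is read from the socket
    Packet complete(int xid) throws InterruptedException {
        Packet p = pendingQueue.take();    // responses arrive in send order
        if (p.xid != xid) {
            throw new IllegalStateException("out-of-order response: " + xid);
        }
        return p;
    }
}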

  Both the client and the server use the Jute serialization component and ZooKeeper's own communication protocol, as shown in the pseudo server example at the beginning of this article.

Four, ZooKeeper operation and maintenance

  For day-to-day operations on Linux, ZooKeeper commands can be sent with echo <zk command> | nc <ip> <port>, for example: echo mntr | nc 192.168.0.31 2181.

  The nc command on Linux is a powerful network tool whose full name is netcat; it can be installed with yum install -y nc. Common zk commands are listed below:

  [Table: common ZooKeeper four-letter commands]

  Of course, you can also write your own code on top of these commands to build operations and maintenance tooling with a user interface.
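  As a minimal sketch of that idea, the snippet below sends a four-letter command from Java instead of nc by opening a plain socket to the client port and printing the reply (the host and port are the example values used above; note that recent ZooKeeper versions require the command to be whitelisted via 4lw.commands.whitelist):

import java.io.InputStream;
import java.io.OutputStream;
import java.net.Socket;
import java.nio.charset.StandardCharsets;

public class FourLetterWordClient {

    public static void main(String[] args) throws Exception {
        try (Socket socket = new Socket("192.168.0.31", 2181)) {
            OutputStream out = socket.getOutputStream();
            out.write("mntr".getBytes(StandardCharsets.US_ASCII));
            out.flush();

            InputStream in = socket.getInputStream();
            byte[] buffer = new byte[4096];
            int len;
            while ((len = in.read(buffer)) != -1) {   // the server closes the connection when it is done
                System.out.print(new String(buffer, 0, len, StandardCharsets.US_ASCII));
            }
        }
    }
}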


Origin www.cnblogs.com/jing99/p/12722430.html