ZooKeeper leader election

Paxos is at the heart of solving synchronization problems in distributed applications. As application engineers, we always lean toward a relatively simple way of implementing a complex algorithm, and the ZooKeeper leader election implementation is an excellent reference for that.

Its implementation is simpler than standard Paxos. The basic flow is (a sketch of the vote-comparison rule follows the note below):

1. Receive a vote ->
2. Decide which round the vote belongs to: if the current round, start tallying; if a newer round, clear the current tally; if an older round, discard it ->
3. Update the highest leader id and proposal id seen so far; if nothing is updated, stay silent ->
4. Notify the other peers ->
5. Check whether votes have arrived from all voters / from a majority of voters ->
6. Check whether this peer itself has been elected leader

(In the code the voting round is n.epoch + "|" + logicalclock; in the log it is called n.round. round reads more clearly than epoch.)
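
The comparison in steps 2-3 is a lexicographic order on (round, zxid, server id). Here is a minimal sketch of that rule in Java, loosely paraphrasing FastLeaderElection.totalOrderPredicate; the names are simplified and this is not the exact ZooKeeper source:

static boolean proposalWins(long newEpoch, long newZxid, long newLeader,
                            long curEpoch, long curZxid, long curLeader) {
    // A proposal wins if it carries a higher round; within the same round,
    // a higher zxid wins; on equal zxids, the higher server id breaks the tie.
    if (newEpoch != curEpoch) return newEpoch > curEpoch;
    if (newZxid != curZxid) return newZxid > curZxid;
    return newLeader > curLeader;
}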

That is roughly the whole flow; the ZooKeeper leader election code is written very cleanly. Below is an election state diagram; read together with the six steps above, it makes the process easy to follow.

[Figure: election state diagram]
Next is a sequence diagram. It mainly shows the notification send/receive logic, which is basic socket communication rather than the election itself.

[Figure: notification send/receive sequence diagram]
Step 3 sends an election notification to every server listed in the configuration file; the default proposed leader id is always the sender itself.
Step 12 handles an incoming notification according to the local state and the notification's state:
self.getPeerState() == QuorumPeer.ServerState.LOOKING -> continue the election
(self.getPeerState() == QuorumPeer.ServerState.LOOKING && notification.state == QuorumPeer.ServerState.LOOKING && my round is higher) || (self.getPeerState() != QuorumPeer.ServerState.LOOKING && notification.state == QuorumPeer.ServerState.LOOKING)
-> send a notification back
In short: if we are both looking, I speak up only when my round is higher, otherwise I stay silent; if only the other peer is looking, I reply directly with my current state (see the sketch below).
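
A minimal sketch of that receive-side rule, using the Notification structure shown further down; process, sendNotification and myRound are illustrative placeholders, not the actual ZooKeeper methods:

static void onNotification(Notification n, QuorumPeer self, long myRound) {
    if (self.getPeerState() == QuorumPeer.ServerState.LOOKING) {
        process(n);                      // feed the vote into my own election loop
        if (n.state == QuorumPeer.ServerState.LOOKING && myRound > n.epoch) {
            sendNotification(n.addr);    // my round is higher: speak up
        }                                // otherwise stay silent
    } else if (n.state == QuorumPeer.ServerState.LOOKING) {
        sendNotification(n.addr);        // I already lead/follow: report my state
    }
}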



Finally, taking the peer who is looking as an example, let us walk through the Fast Paxos process.

[Figure: Fast Paxos flow for the LOOKING peer]
The Notification data structure is:

static public class Notification {
    long leader;                    // id of the server proposed as leader
    long zxid;                      // zxid (ZooKeeper transaction id) of the proposed server
    long epoch;                     // election round; tracks whether the leader has changed (each server starts with logicalclock = 0)
    QuorumPeer.ServerState state;   // current state of the sender
    InetSocketAddress addr;         // IP address of the sender
}
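
Step 5, the majority check, can be sketched over this structure as follows; haveQuorum and tally are illustrative names, and the real code delegates this decision to a QuorumVerifier:

// tally maps voter server id -> that voter's latest notification
static boolean haveQuorum(java.util.Map<Long, Notification> tally,
                          long proposedLeader, long round, int voterCount) {
    int votes = 0;
    for (Notification n : tally.values()) {
        if (n.leader == proposedLeader && n.epoch == round) {
            votes++;                     // a vote for the same leader in the same round
        }
    }
    return votes > voterCount / 2;       // strict majority of all voters
}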

The related ZooKeeper log is:



2011-07-07 21:39:46,591 - INFO  [QuorumPeer:/0:0:0:0:0:0:0:0:2181:FastLeaderElection@663] - New election. My id =  3, Proposed zxid = 64424509440
2011-07-07 21:39:46,593 - DEBUG [WorkerSender Thread:QuorumCnxManager@367] - Opening channel to server 2
2011-07-07 21:39:46,598 - DEBUG [WorkerSender Thread:QuorumCnxManager$SendWorker@541] - Address of remote peer: 2
2011-07-07 21:39:46,601 - DEBUG [WorkerSender Thread:QuorumCnxManager@367] - Opening channel to server 4
...
2011-07-07 21:39:46,602 - INFO  [WorkerReceiver Thread:FastLeaderElection@496] - Notification: 3 (n.leader), 64424509440 (n.zxid), 1 (n.round), LOOKING (n.state), 3 (n.sid), LOOKING (my state)
2011-07-07 21:39:46,606 - INFO  [WorkerReceiver Thread:FastLeaderElection@496] - Notification: 4 (n.leader), 60129542144 (n.zxid), 16 (n.round), FOLLOWING (n.state), 2 (n.sid), LOOKING (my state)
2011-07-07 21:39:46,607 - INFO  [WorkerReceiver Thread:FastLeaderElection@496] - Notification: 4 (n.leader), 60129542144 (n.zxid), 16 (n.round), FOLLOWING (n.state), 2 (n.sid), LOOKING (my state)
2011-07-07 21:39:46,608 - INFO  [WorkerReceiver Thread:FastLeaderElection@496] - Notification: 4 (n.leader), 60129542144 (n.zxid), 16 (n.round), LEADING (n.state), 4 (n.sid), LOOKING (my state)
2011-07-07 21:39:46,609 - INFO  [WorkerReceiver Thread:FastLeaderElection@496] - Notification: 4 (n.leader), 60129542144 (n.zxid), 16 (n.round), LEADING (n.state), 4 (n.sid), LOOKING (my state)
2011-07-07 21:39:46,611 - INFO  [QuorumPeer:/0:0:0:0:0:0:0:0:2181:QuorumPeer@643] - FOLLOWING

Reading the log: server 3 starts in round 1 proposing itself, then receives round-16 notifications saying server 4 is already LEADING with server 2 FOLLOWING; it adopts that result and transitions to FOLLOWING.


Using a leader node is a good design. I am very much in favor of having a single leader node handle all write requests; it makes allocating globally unique resources such as session IDs much easier. Leader election keeps the cluster's leader role robust. In practice, though, should the leader node run on a more powerful machine than the follower nodes?
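
As a toy illustration of why unique server ids make global allocation easy, here is one simple scheme in Java: embed the server id in the high bits so ids from different servers can never collide. This is similar in spirit to how ZooKeeper seeds session ids from the server id, but it is not the actual SessionTrackerImpl code:

// Toy globally-unique id allocator: the high byte carries the server id,
// the low 56 bits a local counter, so two servers never hand out the same id.
final class SessionIdAllocator {
    private final long serverId;     // unique per server, e.g. the myid value

    private long counter = 0;

    SessionIdAllocator(long serverId) { this.serverId = serverId; }

    synchronized long next() {
        return (serverId << 56) | (++counter & 0x00FFFFFFFFFFFFFFL);
    }
}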

Another good design is dividing nodes into roles. Almost every cluster design has to make a trade-off between peer equality and role differentiation. If all nodes are equal peers, the cluster is maximally robust, but state may have to be synchronized across the whole cluster, which hurts performance. If a single node carries a role, robustness drops. Differentiating by role, and electing a small group of nodes inside the cluster to carry one role, is a very smart middle ground.

Concretely in ZooKeeper, every participant node keeps a socket connection to every other participant, which clearly becomes painful as the cluster grows. In a 500-node cluster, for example, each participant would have to maintain 499 sockets just for leader election. With role assignment, only, say, 10% of the nodes are configured as participants in leader election (ZooKeeper calls the non-voting members observers), which solves the problem effectively.
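
ZooKeeper's Observer role captures exactly this idea in configuration. Per the ZooKeeper Observers guide it looks roughly like the snippet below; the hostnames and ports here are made up:

# zoo.cfg on the non-voting node itself
peerType=observer

# server list shared by the whole ensemble: servers 1-3 vote in
# leader election, server 4 only observes
server.1=zkhost1:2888:3888
server.2=zkhost2:2888:3888
server.3=zkhost3:2888:3888
server.4=zkhost4:2888:3888:observer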

References:
A clear and readable description of the ZK Paxos algorithm: http://www.spnguru.com/?p=232
Another detailed write-up of the code: http://rdc.taobao.com/blog/cs/?p=162


Reposted from aoyouzi.iteye.com/blog/2030781