Distributed Consistency Model

  1. Consistency Model

    • Weak consistency
      • Eventual consistency
        • DNS (Domain Name System)
        • Gossip (Cassandra's inter-node communication protocol; see the sketch after this list)
    • Strong consistency
      • Synchronous replication

      • Paxos

      • Raft (a Multi-Paxos variant)

      • ZAB (a Multi-Paxos variant)
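    To make the eventual-consistency entries above concrete, here is a minimal Go sketch of gossip-style anti-entropy, loosely in the spirit of Cassandra's protocol but not its actual API: each node periodically pushes its versioned key/value state to a random peer, the higher version wins, and all replicas converge over a few rounds. The names (`Node`, `gossipOnce`) are illustrative only.

```go
package main

import (
	"fmt"
	"math/rand"
)

// entry is a versioned value: the higher version wins on merge.
type entry struct {
	value   string
	version int
}

// Node holds a local replica of the key/value state.
type Node struct {
	id    int
	state map[string]entry
}

// merge keeps the newer version of every key received from a peer.
func (n *Node) merge(remote map[string]entry) {
	for k, e := range remote {
		if cur, ok := n.state[k]; !ok || e.version > cur.version {
			n.state[k] = e
		}
	}
}

// gossipOnce: every node pushes its state to one random peer.
// Repeating this round drives all replicas toward the same state
// (eventual consistency), with no coordination and no strong guarantee
// about when convergence happens.
func gossipOnce(nodes []*Node) {
	for _, n := range nodes {
		peer := nodes[rand.Intn(len(nodes))]
		if peer != n {
			peer.merge(n.state)
		}
	}
}

func main() {
	nodes := []*Node{
		{id: 0, state: map[string]entry{"x": {"1", 1}}},
		{id: 1, state: map[string]entry{}},
		{id: 2, state: map[string]entry{"x": {"2", 2}}},
	}
	for round := 0; round < 5; round++ {
		gossipOnce(nodes)
	}
	for _, n := range nodes {
		// after a few rounds every replica usually holds version 2 of "x"
		fmt.Println(n.id, n.state["x"])
	}
}
```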

  2. Problems that strong consistency must solve

    • Data cannot reside on just a single node (safety)

    • The general solution for fault tolerance in distributed systems is state machine replication

    • State machine replication is built on a consensus algorithm

    • Paxos is, in fact, a consensus algorithm

    • Whether the system is ultimately consistent depends not only on reaching consensus, but also on the client's behavior.

      • Let X be the command

      • Client -(X)-> Consensus Module 1
        The receiving server stores X in its own log
        Consensus Module 1 -(X)-> other servers
        Each server records the command X in its own log
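      A minimal Go sketch of the flow just described, under the assumption that replication simply succeeds: a consensus module appends the command X to its own server's log and forwards it to the other servers, which append it to theirs. The hard part that real consensus algorithms (Paxos, Raft, ZAB) solve, agreeing on a single order despite failures and concurrent proposers, is deliberately omitted; `ConsensusModule` and `Propose` are hypothetical names.

```go
package main

import "fmt"

// Server holds a replicated command log; applying the log in order
// drives every server's state machine through the same states.
type Server struct {
	name string
	log  []string
}

func (s *Server) appendToLog(cmd string) {
	s.log = append(s.log, cmd)
}

// ConsensusModule sketches the flow above: the receiving server appends
// the command to its own log, then replicates it to its peers.
type ConsensusModule struct {
	local *Server
	peers []*Server
}

func (c *ConsensusModule) Propose(cmd string) {
	c.local.appendToLog(cmd) // X stored in the server's own log
	for _, p := range c.peers { // Consensus Module -(X)-> other servers
		p.appendToLog(cmd) // each server records X in its own log
	}
}

func main() {
	s1, s2, s3 := &Server{name: "s1"}, &Server{name: "s2"}, &Server{name: "s3"}
	cm := &ConsensusModule{local: s1, peers: []*Server{s2, s3}}
	cm.Propose("X") // Client -(X)-> Consensus Module 1
	fmt.Println(s1.log, s2.log, s3.log)
}
```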

  3. Strong consistency algorithm: master-slave synchronization

    • The master accepts the write request
    • The master replicates the log to the slaves
    • The master waits until every slave has returned
    • Potential problem: if one slave node fails, the master blocks and the whole cluster becomes unavailable; consistency is preserved, but availability (A) drops sharply
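    The availability problem can be seen in a small Go sketch, assuming a hypothetical `replicate` call standing in for the network round trip: because the master waits for every slave, a single unreachable slave blocks the write.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// replicate sends the log entry to one slave and waits for its ack.
// A hypothetical stand-in for a network call.
func replicate(slave string, entry string) error {
	if slave == "slave-2" { // simulate a failed node
		time.Sleep(50 * time.Millisecond)
		return errors.New("no response")
	}
	return nil
}

// write on the master: succeed only after EVERY slave acknowledges.
// One unreachable slave blocks the whole write, so consistency is kept
// but availability drops sharply, as noted above.
func write(slaves []string, entry string) error {
	for _, s := range slaves {
		if err := replicate(s, entry); err != nil {
			return fmt.Errorf("write blocked: %s: %w", s, err)
		}
	}
	return nil // all slaves returned: the write is durable everywhere
}

func main() {
	slaves := []string{"slave-1", "slave-2", "slave-3"}
	fmt.Println(write(slaves, "x=1"))
}
```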
  4. Strong consistency algorithm: majority (quorum)

    • The basic idea: every write must be acknowledged by more than N/2 nodes, and every read must query more than N/2 nodes (N is the total number of nodes), so any read overlaps the latest write.

    • However, in a concurrent environment this alone cannot guarantee correctness: the order of operations matters (sketched below).

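    A minimal sketch of the quorum arithmetic in Go: with more than N/2 acknowledgements for writes and more than N/2 replies for reads, any read set overlaps any write set in at least one node. The helper names are illustrative; the ordering problem noted above is exactly what this sketch does not solve.

```go
package main

import "fmt"

// With N replicas, requiring every write to reach more than N/2 nodes
// and every read to contact more than N/2 nodes guarantees that any
// read quorum overlaps any write quorum in at least one node.
func quorum(n int) int { return n/2 + 1 }

// writeOK / readOK check whether enough replicas acknowledged.
func writeOK(acks, n int) bool    { return acks >= quorum(n) }
func readOK(replies, n int) bool  { return replies >= quorum(n) }

func main() {
	n := 5
	fmt.Println("quorum size:", quorum(n))   // 3 of 5
	fmt.Println(writeOK(3, n), readOK(2, n)) // true false
	// Overlap alone is not enough: concurrent writes still need an
	// ordering rule (e.g. proposal numbers), which is the gap Paxos fills.
}
```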

  5. Strong consistency algorithm: Paxos

    • Inventor: Leslie Lamport, who is also the creator of LaTeX

    • Paxos classification

      • Basic Paxos

        • Roles:

          • Client: the role outside the system that initiates requests; analogous to the general public.
          • Proposer: accepts requests from Clients and puts forward proposals (propose) to the cluster; when conflicts occur it acts as the mediator; analogous to a member of parliament.
          • Acceptor (Voter): votes on and receives proposals; a proposal is finally accepted only when a quorum (usually a majority) of Acceptors accepts it; analogous to the parliament.
          • Learner: receives accepted proposals for backup; it has no effect on cluster consistency; analogous to a recorder.
        • Phases:

          • Phase 1a: Prepare

            • The Proposer puts forward a proposal numbered N, where N is greater than any proposal number this Proposer has used before, and asks a quorum of Acceptors to accept it.
          • Phase 1b: Promise

            • If N is larger than the number of every proposal this Acceptor has previously accepted, the Acceptor accepts (promises); otherwise it rejects.
          • Phase 2a: Accept

            • If a majority of Acceptors promise, the Proposer issues an Accept request, which includes the proposal number N and the content of the proposal.
          • Phase 2b: Accepted

            • If the Acceptor has not received any proposal with a number greater than N during this period, it accepts the proposal content; otherwise it ignores the request.

            • This means that if a higher-numbered proposal shows up after a promise has been made, the earlier proposal is ignored; so even a proposal that gathered a majority of promises can still fail at this stage.
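        The Acceptor side of these four phases can be summarized in a short Go sketch, assuming a single process and ignoring networking and persistence; `Prepare` and `Accept` mirror Phases 1a/1b and 2a/2b respectively.

```go
package main

import "fmt"

// A minimal, single-process sketch of the Acceptor side of Basic Paxos.
type Acceptor struct {
	promisedN int    // highest proposal number promised in Phase 1b
	acceptedN int    // number of the proposal accepted in Phase 2b, if any
	acceptedV string // value of that accepted proposal
}

// Prepare handles Phase 1a/1b: promise N if it is higher than any
// proposal number seen so far, and report any previously accepted value
// so the proposer must adopt it.
func (a *Acceptor) Prepare(n int) (ok bool, prevN int, prevV string) {
	if n > a.promisedN {
		a.promisedN = n
		return true, a.acceptedN, a.acceptedV
	}
	return false, 0, ""
}

// Accept handles Phase 2a/2b: accept unless a higher-numbered Prepare
// arrived in the meantime, which is exactly how a promised proposal can
// still fail at this stage.
func (a *Acceptor) Accept(n int, v string) bool {
	if n >= a.promisedN {
		a.promisedN = n
		a.acceptedN = n
		a.acceptedV = v
		return true
	}
	return false
}

func main() {
	acc := &Acceptor{}
	fmt.Println(acc.Prepare(1))     // true 0 ""  : promise proposal 1
	fmt.Println(acc.Prepare(2))     // true 0 ""  : a higher proposal preempts it
	fmt.Println(acc.Accept(1, "X")) // false      : proposal 1 is now ignored
	fmt.Println(acc.Accept(2, "Y")) // true       : proposal 2's value is chosen
}
```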

        • The basic process: (figure omitted)

        • Proposer failure: (figure omitted)

        • Potential problem: livelock (a liveness issue), also called duelling proposers (figure omitted)

          • Simple definition:
            • Transaction T1 locks data item R; transaction T2 then requests R and waits. T3 also requests R. When T1 releases R, the system grants the lock to T3 first, and T2 keeps waiting. Then T4 requests R; when T3 releases R, the system grants the lock to T4, and so on, so T2 may wait forever.
            • T2 keeps retrying but never acquires the lock; this phenomenon is called livelock (it can sometimes resolve itself).
          • Common industry remedy:
            • Add a random timeout (delay) before retrying
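        A minimal Go sketch of the random-timeout remedy, with a hypothetical `tryRound` callback standing in for one full Prepare/Accept round: after being preempted, the proposer sleeps for a random delay before retrying, which breaks the symmetric duel between competing proposers.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// proposeWithBackoff retries a round until it succeeds. After a failed
// round (a higher proposal number preempted ours), it waits a random
// delay so duelling proposers stop preempting each other in lockstep.
// This is a hypothetical retry loop, not a full Paxos proposer.
func proposeWithBackoff(tryRound func(attempt int) bool) int {
	for attempt := 1; ; attempt++ {
		if tryRound(attempt) {
			return attempt
		}
		// random timeout: 0-100ms breaks the symmetric retry pattern
		time.Sleep(time.Duration(rand.Intn(100)) * time.Millisecond)
	}
}

func main() {
	attempts := proposeWithBackoff(func(attempt int) bool {
		return attempt >= 3 // pretend the first two rounds are preempted
	})
	fmt.Println("succeeded after", attempts, "attempts")
}
```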
        • Other issues with Basic Paxos:

          • Hard to implement, and relatively inefficient (each decision needs 2 RPC round trips)
      • Multi Paxos

        • Leader: the only Proposer; all requests must go through the Leader

        • The basic process: (figure omitted)

        • Further simplification by reducing the roles: (figure omitted)
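        One way to read these diagrams, sketched loosely in Go under the assumption of a stable Leader: the Prepare/Promise phase is established once for the Leader's proposal number, after which each client command needs only the Accept round for the next log slot. This is a simplified illustration, not the full Multi Paxos protocol.

```go
package main

import "fmt"

// With a stable Leader (the only proposer), Phase 1 is run once for the
// leader's proposal number; each client command then needs only the
// Accept round, written into the next free log slot.
type Leader struct {
	proposalN int            // established once via Prepare/Promise
	nextSlot  int            // next free position in the replicated log
	log       map[int]string // chosen value per slot
}

func (l *Leader) Submit(cmd string) int {
	slot := l.nextSlot
	l.nextSlot++
	// Single Accept round for (proposalN, slot, cmd); replication to a
	// quorum of acceptors is omitted in this sketch.
	l.log[slot] = cmd
	return slot
}

func main() {
	l := &Leader{proposalN: 7, log: map[int]string{}}
	fmt.Println(l.Submit("x=1")) // slot 0
	fmt.Println(l.Submit("y=2")) // slot 1
}
```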

  6. Strong consistency algorithm: Raft (a simplification and optimization of Multi Paxos)

    • Divided into three sub-problems:
      • Leader Election
      • Log Replication
      • Safety
    • Redefined roles:
      • Leader
      • Follower
      • Candidate
    • The entire cluster has only one Leader

    • While a Leader exists, all other nodes are Followers

    • When a node detects that no Leader exists, it enters the Candidate state and sends vote requests to the other nodes; once a majority of the other nodes agree, the election completes and a new Leader is chosen (a minimal election sketch appears at the end of this section)

      • An odd number of nodes is chosen to reduce the impact of network partitions: when a partition happens, the side holding the majority keeps the cluster running, and after the network recovers the minority side detects that it is stale and resets its own state.

      • Log commit process (see the sketch after these steps):

        • The Client sends a log-commit request to the Leader
        • The Leader sends the log entry to the Followers
        • Each Follower acknowledges after receiving the log entry (it has not yet committed it)
        • After receiving the acknowledgements, the Leader commits the log entry and sends commit messages
        • The Leader may reply to the Client that the commit succeeded right away (asynchronously) while sending commit messages to the Followers, or it may wait until the Followers have committed and responded before replying to the Client (analogous to the ack = 1 and ack = all acknowledgement modes)
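      A minimal Go sketch of the commit rule implied by these steps, with acknowledgements passed in as a plain count rather than real RPCs: the Leader counts its own copy plus the Followers' acks and commits once more than half of the cluster has stored the entry.

```go
package main

import "fmt"

// RaftLeader sketches the commit rule: the Leader appends the entry,
// replicates it, and commits once a majority of the cluster (itself
// included) has stored it.
type RaftLeader struct {
	clusterSize int
	log         []string
	commitIndex int // highest log index known to be committed (1-based)
}

// Replicate simulates sending the entry to followers; acks counts how
// many followers stored (but did not yet commit) the entry.
func (l *RaftLeader) Replicate(entry string, acks int) bool {
	l.log = append(l.log, entry)
	stored := acks + 1 // +1 for the leader's own copy
	if stored > l.clusterSize/2 {
		l.commitIndex = len(l.log) // majority reached: commit and reply to client
		return true
	}
	return false // not enough followers responded; entry stays uncommitted
}

func main() {
	l := &RaftLeader{clusterSize: 5}
	fmt.Println(l.Replicate("x=1", 2)) // 3 of 5 copies -> committed
	fmt.Println(l.Replicate("y=2", 1)) // 2 of 5 copies -> not committed yet
}
```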
    • For a concrete walk-through, try the interactive illustration on the official Raft website

    • Practical system that applies the algorithm: etcd
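    For the election part described earlier, here is a minimal single-node Go sketch, assuming a hypothetical `tick` callback fired by the election timer: a Follower that hears no heartbeat within a randomized timeout becomes a Candidate and starts a new term; the randomized timeout is what keeps candidates from repeatedly colliding.

```go
package main

import (
	"fmt"
	"math/rand"
	"time"
)

// A hypothetical single-node view of Raft's election trigger: a Follower
// that hears no heartbeat from the Leader within a randomized election
// timeout becomes a Candidate, increments its term, and requests votes.
type State int

const (
	Follower State = iota
	Candidate
	Leader
)

type RaftNode struct {
	state State
	term  int
}

func electionTimeout() time.Duration {
	// randomized (here 150-300ms) so nodes rarely time out simultaneously
	return time.Duration(150+rand.Intn(150)) * time.Millisecond
}

// tick is called when the election timer fires.
func (n *RaftNode) tick(heardFromLeader bool) {
	if heardFromLeader {
		n.state = Follower
		return
	}
	n.state = Candidate
	n.term++ // start a new term and (in a real node) send RequestVote RPCs
}

func main() {
	n := &RaftNode{state: Follower, term: 3}
	fmt.Println("timeout:", electionTimeout())
	n.tick(false) // no heartbeat -> become Candidate in term 4
	fmt.Println(n.state == Candidate, n.term)
}
```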

  7. Strong consistency algorithm: ZAB (a simplification and optimization of Multi Paxos)

    • Essentially the same as Raft
    • Some terminology differs: for example, in ZAB a leader's period in charge is called an epoch, while in Raft it is called a term
    • The implementation differs slightly: for example, Raft guarantees log continuity and sends heartbeats from the leader to the followers; in ZAB the direction is the opposite
    • Practical system that applies the algorithm: Zookeeper
