How does HDFS ensure data consistency?

1 Metadata consistency between the NameNode and the SecondaryNameNode: the SecondaryNameNode periodically saves (checkpoints) the metadata held on the NameNode, so a consistent copy of the metadata always exists.
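How often that checkpoint happens is configurable. A sketch of the relevant properties in `hdfs-site.xml` (property names and the commonly documented defaults from Hadoop; check your version's docs for exact values):

```xml
<!-- hdfs-site.xml: how often NameNode metadata is checkpointed -->
<property>
  <name>dfs.namenode.checkpoint.period</name>
  <value>3600</value> <!-- checkpoint at least every hour (seconds) -->
</property>
<property>
  <name>dfs.namenode.checkpoint.txns</name>
  <value>1000000</value> <!-- or after this many uncheckpointed transactions -->
</property>
```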

2 The heartbeat mechanism between the NameNode and DataNodes guarantees replica re-creation: if a DataNode dies, the replicas that were stored on that machine are re-created on other machines.
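The NameNode declares a DataNode dead only after a timeout derived from the heartbeat settings. A minimal sketch of that calculation, using the commonly cited defaults (3-second heartbeat, 5-minute recheck interval; both are configurable, so treat the numbers as illustrative):

```java
public class DeadNodeTimeout {
    // HDFS marks a DataNode dead after roughly:
    //   2 * dfs.namenode.heartbeat.recheck-interval + 10 * dfs.heartbeat.interval
    static long timeoutMillis(long heartbeatIntervalMs, long recheckIntervalMs) {
        return 2 * recheckIntervalMs + 10 * heartbeatIntervalMs;
    }

    public static void main(String[] args) {
        // Defaults: 3 s heartbeat, 300 s (5 min) recheck interval
        long t = timeoutMillis(3_000, 300_000);
        System.out.println(t + " ms"); // 630000 ms = 10 minutes 30 seconds
    }
}
```

Only after this timeout does the NameNode schedule re-replication of the lost replicas, which avoids a storm of copying on a brief network hiccup.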

3 Checksums verify that the data written to a DataNode and its replicas are identical, guarding against corruption during network transfer.
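The idea can be sketched with a plain CRC32 (HDFS itself uses CRC32C over fixed-size chunks, e.g. 512 bytes per checksum; the simplified whole-buffer version below is just for illustration):

```java
import java.util.zip.CRC32;

public class ChecksumCheck {
    // Compute a CRC32 checksum over a block of bytes.
    static long checksum(byte[] data) {
        CRC32 crc = new CRC32();
        crc.update(data);
        return crc.getValue();
    }

    public static void main(String[] args) {
        byte[] block = "hello hdfs".getBytes();
        long sent = checksum(block);     // computed by the writer before sending
        long received = checksum(block); // recomputed by the receiving DataNode
        // If the two values differ, the transfer corrupted the data
        // and the block must be re-sent.
        System.out.println(sent == received);
    }
}
```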

4 The lease mechanism ensures that only one client at a time is allowed to write to a file; the lease is granted by the NameNode to the client.
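A heavily simplified sketch of the lease idea (the real NameNode `LeaseManager` also tracks soft/hard expiry limits and lease renewal; the class and method names here are illustrative only):

```java
import java.util.HashMap;
import java.util.Map;

public class LeaseManagerSketch {
    // Maps file path -> client currently holding the write lease.
    private final Map<String, String> leases = new HashMap<>();

    // Grant the lease only if no other client holds it.
    synchronized boolean tryAcquire(String path, String client) {
        String holder = leases.putIfAbsent(path, client);
        return holder == null || holder.equals(client);
    }

    // Release the lease when the client finishes (or its lease expires).
    synchronized void release(String path, String client) {
        leases.remove(path, client);
    }

    public static void main(String[] args) {
        LeaseManagerSketch lm = new LeaseManagerSketch();
        System.out.println(lm.tryAcquire("/a.txt", "client-1")); // true
        System.out.println(lm.tryAcquire("/a.txt", "client-2")); // false: already held
        lm.release("/a.txt", "client-1");
        System.out.println(lm.tryAcquire("/a.txt", "client-2")); // true
    }
}
```

Because only the lease holder may write, two clients can never interleave writes to the same file, which is what keeps a single file's contents consistent.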

5 The rollback mechanism applies mainly during a Hadoop upgrade: if the upgrade fails, the cluster can be restored to its previous state.


Origin blog.csdn.net/u013963379/article/details/106567687