On data validation and error-correction algorithms

Recently I looked into how data transmission copes with noise and data loss, i.e. how the transmitted information is checked for errors.

So here is a summary:

  Data Corruption

    Data can go wrong because, say, a mouse chews through a cable or a device takes a hard fall.

 

    In fact, data corruption is not limited to network transmission; it touches every place data is handled, such as file extraction, signature verification, network communication, confidential data, and so on.

  Error checking

    That is, testing whether a piece of data is incorrect.

    Since the data itself cannot tell you whether it is wrong, some additional verification information has to be added.

    Of course, the simplest approach is to send the data a second time as the verification data and compare it with the original copy item by item... But that is far too wasteful, and even if you did it, the second copy of the data cannot be guaranteed to be error-free either.

    On the other hand, the shorter a piece of data is, the smaller the probability that it contains an error, because the bit error rate (BER) is roughly constant. So if the verification data (the check code) can be compressed into something short, the chance of the check code itself being corrupted is reduced.

    For example, a parity check (Parity Check), i.e. counting the number of 1 bits.

      Parity sets up a convention: the number of 1 bits in the data must be even.

      If the original data contains an odd number of 1s, a 1 is prepended; otherwise a 0 is prepended.

      For example, 0 110101 and 1 1010010 both conform. The leading bit is the check code.

      So if one bit of the data is corrupted (a 0 flips to 1 or a 1 flips to 0), the receiver counts the 1s again, finds the count is no longer even, and knows the data is wrong.

    Of course the weakness of parity is obvious: if two bits (or any even number of bits) are flipped, the error goes undetected...
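
    A minimal Python sketch of this even-parity scheme (the function names are mine, just for illustration):

```python
def add_parity_bit(bits: str) -> str:
    """Prepend an even-parity bit so the total number of 1s is even."""
    parity = bits.count("1") % 2          # 1 exactly when the data has an odd number of 1s
    return str(parity) + bits

def check_parity(codeword: str) -> bool:
    """A codeword is valid if its total number of 1s is even."""
    return codeword.count("1") % 2 == 0

print(add_parity_bit("110101"))   # '0110101': four 1s already, so a 0 is prepended
print(check_parity("0110101"))    # True
print(check_parity("0111101"))    # False: one flipped bit is detected
print(check_parity("0111111"))    # True:  two flipped bits slip through undetected
```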

    So we need stronger checksum algorithms: keep the check code as short and simple as possible while making the chance of an undetected error as small as possible.

 

 

    The concrete approach is to introduce a hash algorithm: each piece of data corresponds to one hash value, and that hash value acts as a fingerprint of the original data. Using it as the check code gives a (practically) one-to-one correspondence.

      For example, take the data modulo 10086 and then modulo 23333; the result is the hash, which is appended to the data as the check code. The receiver performs the same hashing on what it received and gets its own hash value.

 

      If the two hash values are the same, the fingerprints of the two copies of the data agree, and the data is considered verified.

    (Essentially, parity is also a hash.)
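
    A toy Python sketch of the modulo-style check described above (this is just my reading of the example; 10086 and 23333 are the two moduli it mentions, and the function names are made up):

```python
def toy_hash(data: bytes) -> int:
    """Toy check code: treat the data as one big integer and combine its
    remainders modulo 10086 and modulo 23333 into a single short value."""
    n = int.from_bytes(data, "big")
    return (n % 10086) * 23333 + (n % 23333)

def send(data: bytes):
    return data, toy_hash(data)            # sender appends the check code

def verify(data: bytes, check: int) -> bool:
    return toy_hash(data) == check         # receiver hashes again and compares

packet, check = send(b"hello, world")
print(verify(packet, check))               # True
print(verify(b"hellp, world", check))      # False: the corrupted byte is caught
```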

    Of course, on the other hand, since it is a hash, mistakes are inevitable: the data may be corrupted yet still produce exactly the hash value the receiver expects, or the data and the check code may both be corrupted in a way that happens to match. So the hash function has to be strong enough.

    Note, however, that if a checksum mismatch is detected, all the receiver can do is ask for the packet to be retransmitted...

    The MD5 algorithm, still in use today, is one such hash-based check; it is widely used in banking systems, for verifying confidential documents, transaction data, and so on.
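
    For a concrete taste, MD5 is available directly from Python's standard hashlib module:

```python
import hashlib

message = b"transfer 100 yuan to account 12345"      # made-up example data
digest = hashlib.md5(message).hexdigest()             # sender attaches this 128-bit digest

received = b"transfer 100 yuan to account 12345"
print(hashlib.md5(received).hexdigest() == digest)    # True only if the data is unchanged
```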

    The similar CRC (cyclic redundancy check) algorithm is also widely used, for example in the TCP/IP protocol suite.
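
    Likewise, CRC32 (one common CRC variant) ships with Python's zlib module:

```python
import zlib

frame = b"\x01\x02\x03\x04payload"
crc = zlib.crc32(frame)                                 # sender appends this 32-bit value

print(zlib.crc32(frame) == crc)                         # True: frame arrived intact
print(zlib.crc32(b"\x01\x02\x03\x05payload") == crc)    # False: the error is detected
```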

    In addition, LRC, BCC, SHA, etc. can also be used for error checking, but they are all essentially hash functions and similar in nature. There is also a NAND ECC algorithm, which I have not quite figured out yet...

    

  Error correction

    That is, correcting the error in a piece of data.

    The error-correction approaches I have seen so far are the following:

      Retransmission after detection (backward error correction): the error check (the check code not matching the data) tells the receiver the data is wrong, and the data is then requested again. This covers both the case where the data itself is wrong and the case where the check code is wrong. (Can this even be called an algorithm??) (It is basically brute force.)

      Forward error correction (FEC): before sending, the transmitter preprocesses the data and attaches some extra (redundant) data; when an error occurs, the receiver can work backwards from the redundancy and correct it.

        Zero-order redundancy: no redundancy at all; the original data is sent as-is.

        First-order redundancy: one redundant packet is added after the N data packets; it is generated from the data according to an equation (I will not bother typing out the exact equation, you can look it up on Baidu) and appended to the tail of the data block. (A sketch using XOR as the equation appears after this list.)

              (Note: each data packet carries its own checksum, i.e. each packet is one self-contained unit.)

                If one packet goes bad, the checksums identify which one it is, and the remaining good packets are plugged back into the equation, so the bad packet can be solved for and recovered.

        Second-order redundancy: two redundant packets are added after the N data packets, generated in the same way.

              The receiver can then recover a block even when two of its packets are bad (by analogy with solving simultaneous equations: two unknowns need two equations).

        Third-order redundancy: ... and so on, in the same pattern.

      In general, going up to third-order redundancy is already enough to guarantee fairly strong robustness.
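
      As promised above, here is a minimal sketch of first-order redundancy. The source does not spell out the actual equation, so plain XOR (the classic RAID-5-style choice) stands in for it here; it can rebuild exactly one bad packet per block:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def add_redundancy(packets):
    """First-order redundancy: one extra packet, the XOR of all N data packets."""
    return reduce(xor_bytes, packets)

def recover(packets, redundant):
    """Rebuild the single bad/missing packet (marked None) from all the others."""
    missing = packets.index(None)
    rebuilt = redundant
    for i, p in enumerate(packets):
        if i != missing:
            rebuilt = xor_bytes(rebuilt, p)
    packets[missing] = rebuilt
    return packets

data = [b"AAAA", b"BBBB", b"CCCC"]           # N = 3 equal-sized data packets
r = add_redundancy(data)                     # sender appends 1 redundant packet
print(recover([b"AAAA", None, b"CCCC"], r))  # [b'AAAA', b'BBBB', b'CCCC']
```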

 

      Hamming code: https://blog.csdn.net/Yonggie/article/details/83186280 — no need for me to ramble on; the author of that post explains it very clearly.

        Its disadvantage is that it can only tolerate a single-bit error.

        Apart from that it is all advantages... and it is the only algorithm among those I have seen so far that does both error checking and error correction.
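
        To make the idea concrete, here is a hand-rolled Hamming(7,4) sketch (standard position-1..7 bit layout, not code from the linked post) that corrects a single flipped bit:

```python
def hamming74_encode(d):
    """Encode 4 data bits into 7 code bits (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4                      # parity over positions 1,3,5,7
    p2 = d1 ^ d3 ^ d4                      # parity over positions 2,3,6,7
    p4 = d2 ^ d3 ^ d4                      # parity over positions 4,5,6,7
    return [p1, p2, d1, p4, d2, d3, d4]

def hamming74_decode(c):
    """Return (corrected 4 data bits, error position or 0 if no error)."""
    c = c[:]                               # don't mutate the caller's list
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]         # recheck positions 1,3,5,7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]         # recheck positions 2,3,6,7
    s4 = c[3] ^ c[4] ^ c[5] ^ c[6]         # recheck positions 4,5,6,7
    pos = s1 + 2 * s2 + 4 * s4             # syndrome = 1-based error position
    if pos:
        c[pos - 1] ^= 1                    # flip the single bad bit back
    return [c[2], c[4], c[5], c[6]], pos

code = hamming74_encode([1, 0, 1, 1])
code[5] ^= 1                               # simulate one bit flipped in transit
print(hamming74_decode(code))              # ([1, 0, 1, 1], 6): corrected, error at position 6
```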

 

      

 

    


Origin www.cnblogs.com/euphoria-eden/p/11374646.html