The clearest explanation of the original code, the inverse code, and the complement code, and of how negative numbers are represented in the computer

From: http://blog.chinaunix.net/uid-26495963-id-3074603.html
 
 
Original code: the original code of an integer is its sign bit (0 for positive, 1 for negative) followed by the binary form of its absolute value.
               For example, the original code of single-byte 5 is 0000 0101; the original code of -5 is 1000 0101.

 Inverse code: the inverse code of a positive number is its original code; the inverse code of a negative number is obtained by inverting every bit of the original code except the sign bit.
               For example, the inverse code of single-byte 5 is 0000 0101; the inverse code of -5 is 1111 1010.

 Complement code: the complement code of a positive number is its original code; the complement code of a negative number is its inverse code plus 1.
               For example, the complement code of single-byte 5 is 0000 0101; the complement code of -5 is 1111 1011.

  In the computer, positive numbers are represented directly by their original code; for example, single-byte 5 is stored as 0000 0101.
                   Negative numbers are represented by their complement code (two's complement); for example, single-byte -5 is stored as 1111 1011.
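
  To make this concrete, here is a small C sketch (my own illustration, not from the original post; it assumes an 8-bit char on a two's-complement machine) that prints the bit pattern actually stored for 5 and -5:

    #include <stdio.h>

    /* print the 8 bits of one byte, most significant bit first */
    static void print_bits(unsigned char b)
    {
        for (int i = 7; i >= 0; i--)
            putchar(((b >> i) & 1) ? '1' : '0');
        putchar('\n');
    }

    int main(void)
    {
        signed char pos = 5, neg = -5;
        print_bits((unsigned char)pos);   /* prints 00000101 */
        print_bits((unsigned char)neg);   /* prints 11111011, the complement code of -5 */
        return 0;
    }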

Here is a question: why are negative numbers represented by their complement code in the computer? Why not use the original code directly, e.g. single-byte -5 as 1000 0101?
  
    Let me think about it first from the software side; there are two reasons.

  1. Representation range
  Take a single-byte integer as an example. The unsigned type has the range [0, 255], a total of 256 values; the signed type has the range [-128, 127], also 256 values.
  Look at unsigned first: 0 is represented as 0000 0000 and 255 as 1111 1111, which exactly covers the 256 values.
  Now look at signed. If it is represented by the original code, 0 is represented as 0000 0000; but since we have a sign bit, there is also a negative 0 (still 0 in value): 1000 0000.

  Then let's see whether this can still meet our requirement of representing 256 values.
  Positive numbers are no problem: 127 is 0111 1111, 1 is 0000 0001, and everything in between works as well.
  For negative numbers, -1 is 1000 0001; setting the sign bit aside, the largest magnitude is 111 1111, which is 127, so the smallest value that can be represented is -127.
  That does not seem right - how do we represent -128? It apparently cannot be represented directly by the original code, but we do have two zeros.
  If we designate one of the two zeros as -128, would that work? It is an idea, but it has two problems: first, that pattern is a long jump away from -127; second, it is inconvenient for the hardware to compute with.
  Therefore, in the computer, negative numbers are represented by their complement code.
For example, for single-byte -1 the original code is 1000 0001, the inverse code is 1111 1110, and the complement code is 1111 1111, so single-byte -1 is represented in the computer as 1111 1111.

  For single-byte -127, the original code is 1111 1111, the inverse code is 1000 0000, and the complement code is 1000 0001, so single-byte -127 is represented in the computer as 1000 0001.
  
       Single-byte -128 has no original code at all - apart from the sign bit, the largest magnitude we can write is 127 - and it is represented in the computer as 1000 0000.
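
  A quick way to check these representations (again my own sketch, assuming an 8-bit two's-complement char) is to print each value's stored byte in hexadecimal:

    #include <stdio.h>

    int main(void)
    {
        signed char vals[] = { -1, -127, -128 };
        for (int i = 0; i < 3; i++)
            /* the hex digits show the stored pattern:
               FF = 1111 1111, 81 = 1000 0001, 80 = 1000 0000 */
            printf("%4d is stored as 0x%02X\n",
                   vals[i], (unsigned)(unsigned char)vals[i]);
        return 0;
    }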

  2. Ordering habit (personal opinion)
  It can also be understood from how the values are ordered. Take single-byte data as an example. Among the signed numbers, positive values range over [1, 127]; the largest is 127, whose magnitude part is 111 1111, and the smallest is 1, whose magnitude part is 000 0001.
  Among the negative numbers, the largest is -1, and we use 111 1111 for its magnitude part. Each subsequent value decrements the magnitude by 1; when it reaches 000 0001 we have arrived at -127. Subtract 1 again and it becomes 000 0000. Fortunately we have the sign bit, so there are two zeros; taking the "negative zero" pattern to represent -128 exactly fills out the representation range.
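
  If it helps, this ordering can be seen directly by walking the bit patterns from 1111 1111 down to 1000 0000 and printing the signed value each one stands for (a sketch of mine, assuming a two's-complement signed char):

    #include <stdio.h>

    int main(void)
    {
        /* 0xFF reads as -1, 0xFE as -2, ..., 0x81 as -127, 0x80 as -128 */
        for (unsigned p = 0xFF; p >= 0x80; p--)
            printf("0x%02X -> %d\n", p, (int)(signed char)p);
        return 0;
    }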

   The above is analyzed only from the software perspective. Of course, there are also hardware reasons why negative numbers are represented by the complement code; after all, in the computer it is the hardware that ultimately carries out the operations.
There are three main reasons.
  1>. Converting between the representation of a number and that of its opposite, whether the number is positive or negative, can be done in the same way - the complement operation (invert all the bits, then add 1) - which simplifies the hardware.
  For example (left: the stored bit pattern; middle: the pattern after inverting every bit; right: the pattern after adding 1, i.e. the opposite value's representation):

                    stored bits        invert bits        add 1
  -127  ->   127    1000 0001    ->    0111 1110    ->    0111 1111
   127  ->  -127    0111 1111    ->    1000 0000    ->    1000 0001
  -128  ->   128    1000 0000    ->    0111 1111    ->    1000 0000
   128  ->  -128    1000 0000    ->    0111 1111    ->    1000 0000

  (+128 does not fit in a signed single byte, so negating -128 simply gives back 1000 0000.)
  It can be seen that the same procedure negates positive and negative numbers alike.
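
  The following C sketch (my own, not from the post; it relies on a two's-complement signed char) checks that "invert all bits, then add 1" negates positive and negative values alike:

    #include <stdio.h>

    int main(void)
    {
        signed char x[] = { 127, -127, -128 };
        for (int i = 0; i < 3; i++) {
            unsigned char neg = (unsigned char)(~x[i] + 1);  /* invert, then add 1 */
            /* -128 comes back as 0x80 because +128 does not fit in one signed byte */
            printf("-(%4d) -> 0x%02X, i.e. %d\n",
                   x[i], (unsigned)neg, (int)(signed char)neg);
        }
        return 0;
    }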

  2>. Subtraction can be changed into addition, eliminating the need for a subtractor.
  As the table above shows, applying the complement operation to a number's stored bits yields the representation of its opposite, and this holds for negative numbers as well.
  In arithmetic, subtracting a number is the same as adding its opposite - we all learned that in elementary school. Since the complement gives us the opposite, we can simply add the complement instead of subtracting.
  Such as: A-127,
  It is equivalent to: A + (-127),
  And because negative numbers are stored in complement form - that is, what is actually stored for a negative number is its complement code - whenever we want to subtract a number we can simply take its complement and add it. We can safely say goodbye to the subtractor!
  Of course, this also involves type conversion. For example, single-byte 128 has the bit pattern 1000 0000, and the complement representation of -128 is also 1000 0000, so whether we add 128 or subtract 128 we end up adding 1000 0000 - doesn't that get confusing? Fortunately, the compiler of each programming language imposes the relevant type-conversion restrictions.
  For example (assuming all the constants are single-byte):
  1 + 128: the bit patterns added are 0000 0001 + 1000 0000. If you assign the result to a single-byte signed variable, the compiler will warn that it exceeds the representable range: both operands are unsigned, so the result is the unsigned value 129, while the maximum a signed single-byte variable can hold is 127.
  1 - 128: the bit patterns added are 0000 0001 + 1000 0000. Because -128 is signed, the result is also signed: 1000 0001, which is exactly how -127 is represented in the computer.
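
  As a concrete check, here is a short C sketch (my own illustration; unsigned char arithmetic wraps modulo 256, which matches the single-byte behaviour discussed here) showing that subtracting 127 and adding the complement of 127 leave the same byte behind:

    #include <stdio.h>

    int main(void)
    {
        unsigned char a = 1, b = 127;
        unsigned char diff = (unsigned char)(a - b);                        /* ordinary subtraction  */
        unsigned char sum  = (unsigned char)(a + (unsigned char)(~b + 1));  /* add the complement    */
        printf("a - b        : 0x%02X (%d)\n", (unsigned)diff, (int)(signed char)diff);
        printf("a + (~b + 1) : 0x%02X (%d)\n", (unsigned)sum,  (int)(signed char)sum);
        return 0;
    }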

  3>. Unsigned and signed addition can be done with the same circuit.
  Signed and unsigned addition and subtraction both come down to adding the stored values. The stored value is simply the binary pattern a number has in the computer: for a positive number it is the original code, for a negative number it is the complement code. Whether a variable is signed or unsigned is something the compiler keeps track of; all the computer has to do is add the two bit patterns together.
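
  A last sketch (mine, under the same two's-complement assumption) shows that one and the same byte addition serves both interpretations; only how we read the result differs:

    #include <stdio.h>

    int main(void)
    {
        unsigned char ua = 200, ub = 200;     /* read the two bytes as unsigned */
        signed char   sa = (signed char)ua;   /* the same bits read as -56      */
        signed char   sb = (signed char)ub;

        unsigned char usum = (unsigned char)(ua + ub);
        signed char   ssum = (signed char)(sa + sb);

        /* both additions leave the byte 0x90: 144 when read as unsigned,
           -112 when read as signed */
        printf("bits 0x%02X, read as unsigned: %u, read as signed: %d\n",
               (unsigned)usum, (unsigned)usum, (int)ssum);
        return 0;
    }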
