Why the single-precision floating-point type float cannot exactly represent real numbers

How a float is stored in memory

From the high address to the low address (i.e. from left to right):
the sign bit (bit 31), the exponent (bits 30-23), the mantissa (bits 22-0).
A float is 32 bits, 4 bytes in total:

  1. Sign bit (1 bit total)
    The leftmost bit is the sign: 0 for positive, 1 for negative.
  2. Exponent (8 bits)
    The exponent portion, i.e. the power of two in the binary scientific notation.
  3. Mantissa (23 bits total)
    The mantissa is a 23-bit fraction + 1 implicit bit (explained later; see the sketch after this list).
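To see these three fields concretely, here is a minimal C sketch. It assumes IEEE 754 single precision and that float and uint32_t have the same size and byte order (true on common platforms), and extracts the fields by reinterpreting the float's 32 bits:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the sign, exponent, and mantissa fields of a float.
   A sketch: assumes IEEE 754 single precision and that float
   and uint32_t share size and byte order. */
static void dump_float(float f)
{
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);           /* reinterpret the 32 bits */

    uint32_t sign     = bits >> 31;           /* bit 31 */
    uint32_t exponent = (bits >> 23) & 0xFFu; /* bits 30-23, biased by 127 */
    uint32_t mantissa = bits & 0x7FFFFFu;     /* bits 22-0 */

    printf("%g: sign=%u exponent=%u (2^%d) mantissa=0x%06X\n",
           f, sign, exponent, (int)exponent - 127, mantissa);
}

int main(void)
{
    dump_float(8.25f);   /* sign=0 exponent=130 (2^3) mantissa=0x040000 */
    dump_float(-8.25f);  /* only the sign bit changes */
    return 0;
}
```

For 8.25f this prints a biased exponent of 130 (i.e. 2^3) and a mantissa field of 0x040000, matching the discussion of 8.25 below.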

About the mantissa

The mantissa stores the significant digits of the floating-point number. Take 8.25 as an example: 8.25 in binary is 1000.01, so the significant digits of 8.25 are 1000.01. Written in scientific notation this is 1.00001 * 2^3, so the exponent is 3 and the mantissa is 1.00001. However, the mantissa actually stored in the float structure in memory is not 1.00001 but 00001; that is, the leading "1." is not stored. The IEEE reasoning is: since everyone has agreed to move the decimal point to just after the first significant digit, there is always exactly one digit before the point, and for a normalized binary number that digit can only be 1; storing it would be a waste, so it is simply omitted. This is why the mantissa was described above as 23 bits + 1 bit. So in the computer, the mantissa field the float actually stores in memory is 00001 (padded with trailing 0s to fill 23 bits), but the true mantissa is 1.00001 itself; the leading 1 is not lost.
As for why it is called the "mantissa" (literally "tail number"): I think it is because it occupies the lowest bits of the 32, i.e. it comes at the end, hence the name; but the term is easily tangled up with the concept of a fraction. 8.25 converted to binary scientific notation is 1.00001 * 2^3: its mantissa is 1.00001, and what is actually stored is 00001. The fractional part of the number is .01, which has nothing whatsoever to do with the mantissa. Personally I think it would be simpler to just call it the significant digits, because this field really does denote the significant digits of the real number (note that the real number here means its scientific-notation form).
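To confirm that the implicit leading 1 is really there, a small sketch can rebuild the value from the three stored fields. This handles normalized numbers only; zero, subnormals, infinities, and NaN are ignored:

```c
#include <stdio.h>
#include <stdint.h>
#include <string.h>
#include <math.h>

/* Rebuild a float's value from its three fields, restoring the
   implicit leading 1. A sketch for normalized numbers only. */
int main(void)
{
    float f = 8.25f;
    uint32_t bits;
    memcpy(&bits, &f, sizeof bits);

    uint32_t sign     = bits >> 31;
    int      exponent = (int)((bits >> 23) & 0xFFu) - 127; /* remove the bias */
    uint32_t mantissa = bits & 0x7FFFFFu;                  /* the stored 00001... */

    /* 1.mantissa: the omitted leading 1 is put back here */
    double significand = 1.0 + mantissa / (double)(1u << 23);
    double value = (sign ? -1.0 : 1.0) * significand * pow(2.0, exponent);

    printf("stored mantissa = 0x%06X, rebuilt value = %g\n", mantissa, value);
    /* prints: stored mantissa = 0x040000, rebuilt value = 8.25 */
    return 0;
}
```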

The real reason a float cannot exactly represent some decimal fractions in binary

First look at an example: find the binary representation of 0.9.
The method for converting a decimal fraction to binary: multiply the fractional part by 2 and take the integer part of the product; repeat with the remaining fraction until the product is 0; finally read off the digits taken, from top to bottom, in order (a code sketch of this procedure follows the worked steps below).

0.9 * 2 = 1.8, take 1
0.8 * 2 = 1.6, take 1
0.6 * 2 = 1.2, take 1
0.2 * 2 = 0.4, take 0
0.4 * 2 = 0.8, take 0
0.8 * 2 = 1.6, take 1
0.6 * 2 = 1.2, take 1
...
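Here is the same multiply-by-2 procedure as a minimal C sketch. The digit count is capped because, as the steps show, 0.9 never terminates; note too that the input x is itself already a double approximation, though its first few dozen binary digits still match the true expansion:

```c
#include <stdio.h>

/* The multiply-by-2 conversion from the worked steps above.
   A sketch: the loop is capped at n digits since 0.9 never terminates. */
static void frac_to_binary(double x, int n)
{
    printf("0.");
    for (int i = 0; i < n && x != 0.0; i++) {
        x *= 2.0;             /* shift one binary digit to the left */
        int digit = (int)x;   /* the integer part is the next bit */
        printf("%d", digit);
        x -= digit;           /* keep only the fractional part */
    }
    printf("...\n");
}

int main(void)
{
    frac_to_binary(0.9, 28);  /* prints 0.111001100110011... (repeating) */
    return 0;
}
```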

So the binary representation of 0.9 is 0.111001100110011... with the digits repeating forever.
However far we carry the multiplication, it never terminates. For the algorithm to stop, the fraction must be exactly expressible as a sum of 0.5, 0.25, 0.125, and so on, i.e. of negative powers of 2. Most decimal fractions do not satisfy this, so their binary representations are infinite repeating expansions. But the mantissa field of a float is only 23 bits and can hold only 23 binary digits of the fraction, meaning every digit after the 23rd is dropped. This is why a float cannot represent such numbers exactly.
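A quick demonstration of the consequence (a sketch; the exact digits printed may vary slightly by platform):

```c
#include <stdio.h>

/* Show the rounding: 0.9 cannot be stored exactly in a float. */
int main(void)
{
    float f = 0.9f;
    printf("%.10f\n", f);       /* e.g. 0.8999999762, not exactly 0.9 */

    /* A common consequence: naive equality tests on floats fail,
       because f keeps its 23-bit rounding when promoted to double. */
    if (f == 0.9)
        printf("equal\n");
    else
        printf("not equal\n");  /* this branch is taken */
    return 0;
}
```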

Origin blog.csdn.net/weixin_44395686/article/details/98498306