[Java] Why can't floating-point numbers be accurately represented?

We know that when writing a program, two floating-point numbers (float or double) should not be compared for equality directly with ==.
Of course, we all know the reason: floating-point numbers cannot be represented exactly inside the computer. But why can't floating-point numbers be represented exactly inside a computer?
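For instance (a minimal, self-contained sketch; the class name and the 1e-9 tolerance are illustrative choices, not fixed rules):

```java
public class FloatCompare {
    public static void main(String[] args) {
        double a = 0.1 + 0.2;
        double b = 0.3;

        // Direct equality fails: neither 0.1, 0.2, nor 0.3 is stored exactly.
        System.out.println(a == b); // false
        System.out.println(a);      // 0.30000000000000004

        // The usual workaround: compare against a small tolerance (epsilon).
        final double EPSILON = 1e-9;
        System.out.println(Math.abs(a - b) < EPSILON); // true
    }
}
```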

This has to start with the IEEE 754 standard.

According to the IEEE 754 standard, a floating-point number is stored inside the computer as three fields: a sign bit (sign), an exponent part (exponent), and a fraction part (fraction). The value such a number represents is:

$$(-1)^{\text{sign}} \times 2^{\text{exponent}} \times 1.\text{fraction}$$

For example, 0.5 in decimal can be expressed as $(-1)^0 \times 2^{-1} \times 1.0$.

Take a 32-bit single-precision floating-point number (the common float) as an example: its sign bit occupies 1 bit, the exponent part occupies 8 bits, and the mantissa (fraction) part occupies 23 bits.
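We can peek at these three fields with the standard library method Float.floatToIntBits, which returns the raw IEEE 754 encoding as an int (a small sketch; the class name is made up for illustration):

```java
public class FloatBits {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(0.5f); // raw IEEE 754 encoding of 0.5f

        int sign     = (bits >>> 31) & 0x1;  // 1 sign bit
        int exponent = (bits >>> 23) & 0xFF; // 8 exponent bits, biased by 127
        int fraction = bits & 0x7FFFFF;      // 23 fraction (mantissa) bits

        System.out.println(sign);            // 0
        System.out.println(exponent - 127);  // -1
        System.out.println(fraction);        // 0, i.e. a mantissa of 1.0
    }
}
```

The output matches the 0.5 example above: sign 0, unbiased exponent -1, fraction 0.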

Now suppose we want to store the decimal number 0.1 as a float. First, 0.1 has to be converted to binary, and here we run into trouble: decimal 0.1 converts to an infinitely recurring binary fraction: 0.0001100110011001100...

However, the mantissa has only 23 bits, so only the leading bits of this infinite binary fraction can be kept (the stored value is rounded to the nearest representable number). This is where the error arises.
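We can watch the cut happen by printing the 23 stored fraction bits of 0.1f (a sketch using the same Float.floatToIntBits approach as above; note that IEEE 754 rounds to the nearest representable value rather than simply chopping, which is why the last bit below breaks the repeating pattern):

```java
public class TruncatedMantissa {
    public static void main(String[] args) {
        int bits = Float.floatToIntBits(0.1f);

        // Extract the 23 fraction bits and left-pad with zeros to full width.
        String fraction = String.format("%23s", Integer.toBinaryString(bits & 0x7FFFFF))
                                .replace(' ', '0');
        // The infinite pattern ...1100110011... is cut off and rounded up:
        System.out.println(fraction); // 10011001100110011001101
    }
}
```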

When that floating-point number is converted back to a decimal number, the result naturally differs from the original, because some binary digits have been lost.
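The difference is easy to make visible: the BigDecimal(double) constructor from the standard library prints the exact value the stored bits represent, without the friendly rounding that toString applies (class name again illustrative):

```java
import java.math.BigDecimal;

public class ExactValue {
    public static void main(String[] args) {
        // The exact value actually stored for the double literal 0.1:
        System.out.println(new BigDecimal(0.1));
        // 0.1000000000000000055511151231257827021181583404541015625

        // The exact value actually stored for the float literal 0.1f:
        System.out.println(new BigDecimal((double) 0.1f));
        // 0.100000001490116119384765625
    }
}
```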

Origin: blog.csdn.net/Mr_zhang66/article/details/107835782