The JS precision loss problem (and a novice's reasoning process)

(1) The question: why do Java and JS both compute 0.1 + 0.2 = 0.30000000000000004?
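
A quick check in any JS console (Node.js or the browser devtools) reproduces the problem:

```js
console.log(0.1 + 0.2);          // 0.30000000000000004
console.log(0.1 + 0.2 === 0.3);  // false
```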


First of all, we have to understand that computers store every number in binary; at the bottom there are only the two digits 0 and 1.
The most basic way to see the problem: converting decimal 0.1 to binary gives 0.000110011001100110011..., with the group 0011 repeating forever.
Likewise, decimal 0.2 in binary is 0.00110011001100110011..., also infinitely repeating.
So: repeating expansion + repeating expansion = repeating expansion. Converting these decimals to binary therefore inevitably introduces error, because in JavaScript a Number is a 64-bit double-precision floating-point value in the IEEE 754 format: it has only a finite number of bits (53 significant bits of precision) and cannot hold an infinitely repeating expansion, so both operands are already rounded before the addition even happens.
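To see that the two literals are already rounded before any arithmetic happens, you can ask JS to print more digits than it normally shows:

```js
// The stored doubles are only approximations of 0.1, 0.2 and 0.3
console.log((0.1).toPrecision(20)); // 0.10000000000000000555
console.log((0.2).toPrecision(20)); // 0.20000000000000001110
console.log((0.3).toPrecision(20)); // 0.29999999999999998890
```

The literal 0.3 is stored as a value slightly below 0.3, while the sum 0.1 + 0.2 rounds to a value slightly above it, which is why the two doubles compare unequal.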

(2) But then I noticed a problem...

0.4 (decimal) = 0.0110011001100110…(binary)
0.5 (decimal) = 0.1 (binary)
Shouldn't it then be: infinitely repeating number + finite number = infinitely repeating number? By that logic, 0.5 + 0.4 should not come out as exactly 0.9; it should produce something like 0.900000000003.
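
Yet running it in the console shows no visible error at all:

```js
console.log(0.5 + 0.4);          // 0.9
console.log(0.5 + 0.4 === 0.9);  // true
```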

(3) Verify the facts
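
The binary expansions actually stored can be read directly in JS via `Number.prototype.toString` with radix 2:

```js
console.log((0.1).toString(2)); // 0.0001100110011001100110011001100110011001100110011001101
console.log((0.2).toString(2)); // 0.001100110011001100110011001100110011001100110011001101
console.log((0.4).toString(2)); // 0.01100110011001100110011001100110011001100110011001101
console.log((0.5).toString(2)); // 0.1
```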

// Convert 0.1 and 0.2 into the binary values actually stored, then add:
  0.0001100110011001100110011001100110011001100110011001101  // 0.1 as stored
+ 0.001100110011001100110011001100110011001100110011001101   // 0.2 as stored
= 0.0100110011001100110011001100110011001100110011001100111  // exact sum: 54 significant bits

// A double holds only 53 significant bits, so the exact sum is rounded
// (to nearest, ties to even), which bumps the last kept bit up:
  0.0100110011001100110011001100110011001100110011001101

// Converted back to decimal, that is exactly 0.30000000000000004

And 0.5 + 0.4, using the stored binary values:

  0.01100110011001100110011001100110011001100110011001101  // 0.4 as stored
+ 0.1                                                      // 0.5 is exact in binary
= 0.11100110011001100110011001100110011001100110011001101  // 53 significant bits: fits exactly, no rounding

// Converted to decimal this is not exactly 0.9 either; it is the double closest to 0.9,
// bit-for-bit the same value the literal 0.9 parses to. So JS displays it as 0.9,
// 0.5 + 0.4 === 0.9 is true, and no error is ever visible. That's all there is to it.
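
You can confirm in the console that the sum and the literal land on the identical double:

```js
console.log((0.5 + 0.4).toPrecision(20)); // 0.90000000000000002220
console.log((0.9).toPrecision(20));       // 0.90000000000000002220 -- same stored value
```

So 0.5 + 0.4 carries the same kind of representation error as 0.1 + 0.2; it just happens to round to the same double as the literal 0.9, so the error never becomes visible.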

Origin: blog.csdn.net/qq_44646982/article/details/113250349