Python's decimal module is used to fix calculations whose results come out as long runs of 9s (e.g. ...99999), i.e. results with inaccurate floating-point precision

This happens because Python stores decimals as double-precision floating-point numbers. In the IEEE 754 format Python uses (52 mantissa bits / 11 exponent bits / 1 sign bit), the 8-byte, 64-bit storage space is divided into 52 bits for the significant digits of the number, 11 bits for the exponent, and 1 bit for the sign; it is essentially a binary version of scientific notation.

Although 52 significant bits sound like plenty, the trouble is that binary fractions easily become infinitely repeating when representing rational numbers, including many that terminate as decimals. For example, 1/10 is written simply as 0.1 in decimal, but in binary it has to be written as 0.0001100110011001100110011001100110011001100110011001... (the group 1001 repeats forever). Because a floating-point number keeps only 52 significant bits, the value is rounded starting from the 53rd bit. This is what causes the "loss of floating-point precision" problem mentioned in the title. The rounding rule is "discard when the next bit is 0, carry when it is 1", so the stored result is sometimes a little larger and sometimes a little smaller than the true value.
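A quick way to see this is to convert the float 0.1 to a Decimal, which exposes the exact binary value Python actually stored (a minimal sketch; the digits shown are what a standard IEEE 754 double gives for 0.1):

import decimal
# Constructing a Decimal from the float 0.1 (not the string '0.1')
# shows the value that was stored after rounding at the 53rd bit
print(decimal.Decimal(0.1))

The result is: 0.1000000000000000055511151231257827021181583404541015625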


For example:

print(350*1.4)

The result is: 489.99999999999994

Improved code using the decimal module:

import decimal

# Create the operands from strings so they are stored exactly,
# with no intermediate binary floating-point rounding
a = decimal.Decimal('350')
b = decimal.Decimal('1.4')
c = a * b  # exact decimal multiplication
print(c)

The result is: 490.0
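One point worth noting about the improved code (a general property of the decimal module rather than something specific to this example): the operands are created from strings. Constructing a Decimal directly from the float literal 1.4 would copy in the already-rounded binary value, and the error would come along with it:

import decimal
# The float 1.4 has already been rounded to binary, so converting it to
# Decimal preserves that error; the string '1.4' is stored exactly
print(decimal.Decimal(1.4) == decimal.Decimal('1.4'))

The result is: False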


Source: blog.csdn.net/weixin_43283397/article/details/108362940