Floating-point arithmetic is unavoidable in development, and JavaScript's floating-point precision issues can cause real problems.
JavaScript has only one numeric type: Number.
JavaScript stores numbers in the IEEE 754 double-precision (64-bit) floating-point format: 1 bit for the sign, 11 bits for the exponent, and 52 bits for the mantissa (fraction).
Some decimal fractions are infinitely repeating in binary, so the value is rounded at the 53rd significant bit (roughly: a 0 is dropped, a 1 rounds up). This produces the "floating-point precision problem", where the rounding sometimes makes the result slightly larger and sometimes slightly smaller than expected.
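This rounding is easy to see directly: printing a literal like 0.1 with more significant digits than the default reveals the nearest double that is actually stored. A minimal sketch:

```javascript
// 0.1 looks exact when printed normally, but the stored double is
// slightly larger; asking for 21 significant digits exposes it.
console.log((0.1).toPrecision(21)); // "0.100000000000000005551"
console.log((0.2).toPrecision(21)); // "0.200000000000000011102"
```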
Take a look at the following examples.
JavaScript math operations
Addition
Example: 0.1 + 0.2
Expected result: 0.3
Actual result: 0.30000000000000004
Subtraction
Example: 1.0 - 0.7
Expected result: 0.3
Actual result: 0.30000000000000004
Multiplication
Example: 1.01 * 1.003
Expected result: 1.01303
Actual result: 1.0130299999999999
Division
Example: 0.029 / 10
Expected result: 0.0029
Actual result: 0.0029000000000000002
Note: the addition, subtraction, multiplication, and division examples above show JavaScript's actual results (which, of course, are not what we want). The reason for these results was explained in the introduction above ^_^!
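The four examples above can be reproduced in plain JavaScript:

```javascript
// Native JavaScript arithmetic on IEEE 754 doubles.
console.log(0.1 + 0.2);    // 0.30000000000000004
console.log(1.0 - 0.7);    // 0.30000000000000004
console.log(1.01 * 1.003); // 1.0130299999999999
console.log(0.029 / 10);   // 0.0029000000000000002

// A direct comparison against the expected decimal value fails:
console.log(0.1 + 0.2 === 0.3); // false
```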
Arithmetic with decimal.js
An arbitrary-precision Decimal type for JavaScript
GITHUB: https://github.com/MikeMcl/decimal.js
API: http://mikemcl.github.io/decimal.js/
NPM: https://www.npmjs.com/package/decimal.js
First, install decimal.js:
npm install --save decimal.js
Now run the examples above through decimal.js and compare the results.
Reposted from: https://blog.csdn.net/qq3401247010/article/details/78784788