Translated from: decimal vs double! - Which one should I use and when? [duplicate]
This question already has an answer here:
I keep seeing people using doubles in C#. I know I read somewhere that doubles sometimes lose precision. My question is: when should I use a double and when should I use a decimal type? Which type is suitable for money computations? (i.e. greater than $100 million)
#1st Floor
Reference: https://stackoom.com/question/4tGb/ (decimal vs double - which one should I use and when - duplicate)
#2nd Floor
For money: decimal.
It costs a little more memory, but you won't have rounding troubles like double sometimes has.
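The rounding trouble this answer alludes to can be sketched quickly in Python (the question is about C#, but a Python float is the same IEEE-754 double, and Python's decimal module plays the role of C#'s decimal):

```python
from decimal import Decimal

# double (a Python float is an IEEE-754 double) cannot represent 0.1
# exactly as a binary fraction, so sums drift:
double_sum = 0.1 + 0.2
print(double_sum)          # 0.30000000000000004
print(double_sum == 0.3)   # False

# decimal stores base-10 digits exactly:
decimal_sum = Decimal("0.1") + Decimal("0.2")
print(decimal_sum)                    # 0.3
print(decimal_sum == Decimal("0.3"))  # True
```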
#3rd Floor
For money, always decimal. It's why it was created.
If numbers must add up correctly or balance, use decimal. This includes any financial storage or calculations, scores, or other numbers that people might do by hand.
If the exact value of numbers is not important, use double for speed. This includes graphics, physics, or other physical-science computations where there is already a "number of significant digits".
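The "must add up correctly or balance" rule can be illustrated with a small sketch (Python standing in for C#; Decimal here corresponds to C#'s decimal): one hundred charges of $0.10 should total exactly $10.00.

```python
from decimal import Decimal

# 100 charges of $0.10 each
total_double = sum(0.1 for _ in range(100))
total_decimal = sum(Decimal("0.10") for _ in range(100))

print(total_double == 10.0)               # False: binary rounding error accumulates
print(total_decimal)                      # 10.00
print(total_decimal == Decimal("10.00"))  # True: the books balance
```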
#4th Floor
Definitely use integer types for your money computations. This cannot be emphasized enough, since at first glance it might seem that a floating-point type is adequate.
Here is an example in Python code:
>>> amount = float(100.00)  # one hundred dollars
>>> print(amount)
100.0
>>> new_amount = amount + 1
>>> print(new_amount)
101.0
>>> print(new_amount - amount)
1.0
It looks pretty normal.
Now try this again with 10^20 Zimbabwe dollars:
>>> amount = float(1e20)
>>> print(amount)
1e+20
>>> new_amount = amount + 1
>>> print(new_amount)
1e+20
>>> print(new_amount - amount)
0.0
As you can see, the dollar disappeared.
If you use the integer type, it works fine:
>>> amount = int(1e20)
>>> print(amount)
100000000000000000000
>>> new_amount = amount + 1
>>> print(new_amount)
100000000000000000001
>>> print(new_amount - amount)
1
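The integer approach this answer recommends is usually implemented by storing money as an integer count of the smallest unit (cents) and formatting only for display. A minimal sketch, with illustrative helper names not taken from the original answer:

```python
# Store money as an integer number of cents; format only at the edges.
# Helper names (to_cents, format_cents) are illustrative.
def to_cents(dollars: int, cents: int) -> int:
    return dollars * 100 + cents

def format_cents(amount: int) -> str:
    return f"${amount // 100}.{amount % 100:02d}"

balance = to_cents(100, 0)       # $100.00
balance += 25                    # add 25 cents
print(format_cents(balance))     # $100.25

# Python ints are arbitrary precision, so even the Zimbabwe case stays exact:
amount = 10**20
print((amount + 1) - amount)     # 1
```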
#5th Floor
Decimal is for exact values. Double is for approximate values.
USD: $12,345.67 (Decimal)
CAD: $13,617.27 (Decimal)
Exchange Rate: 1.102932 (Double)
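One way to apply this split in code (a Python sketch; the boundary conversion and the quantize call are my assumptions, not part of the original answer): keep the amount as decimal, carry the approximate rate as a double, and convert the rate to decimal at the boundary before rounding the result to cents. Note the figures above are illustrative, so the computed result need not match the CAD amount quoted.

```python
from decimal import Decimal, ROUND_HALF_UP

usd = Decimal("12345.67")   # exact amount: decimal
rate = 1.102932             # approximate market rate: double

# Convert the rate to Decimal via str() at the boundary, then round to cents.
cad = (usd * Decimal(str(rate))).quantize(Decimal("0.01"),
                                          rounding=ROUND_HALF_UP)
print(cad)                  # 13616.43
```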
#6th Floor
My question is: when should I use a double and when should I use a decimal type?
decimal
for when you work with values in the range of 10^(+/-28) and where you have expectations about the behaviour based on base-10 representations - basically money.
double
for when you need relative accuracy (i.e. losing precision in the trailing digits on large values is not a problem) across wildly different magnitudes - double covers more than 10^(+/-300). Scientific calculations are the best example here.
Which type is suitable for money computations?
decimal, decimal, decimal.
Accept no substitutes.
The most important factor is that double, being implemented as a binary fraction, cannot accurately represent many decimal fractions (like 0.1) at all, and its overall number of digits is smaller, since it is 64 bits wide vs. 128 bits for decimal. Finally, financial applications often have to follow specific rounding modes (sometimes mandated by law). decimal supports these; double does not.
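The rounding-mode point can be sketched with Python's decimal module (Python names here; C# exposes the analogous choice through Math.Round's MidpointRounding argument): the same midpoint value rounds differently under half-up vs banker's rounding, while a double cannot even hold the midpoint exactly.

```python
from decimal import Decimal, ROUND_HALF_EVEN, ROUND_HALF_UP

amount = Decimal("2.665")  # exactly halfway between 2.66 and 2.67

half_up = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)
half_even = amount.quantize(Decimal("0.01"), rounding=ROUND_HALF_EVEN)
print(half_up)    # 2.67
print(half_even)  # 2.66  (banker's rounding: ties go to the even digit)

# A double cannot hold the midpoint: 2.675 is stored as 2.67499999...,
# so float rounding silently goes the "wrong" way:
print(round(2.675, 2))  # 2.67, not 2.68
```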