Misunderstandings about float, double, and decimal in C# (reproduced)

floating point

Name     CTS Type        Description                               Significant Figures   Range (approximate)
float    System.Single   32-bit single-precision floating point    7                     ±1.5 × 10^−45 to ±3.4 × 10^38
double   System.Double   64-bit double-precision floating point    15/16                 ±5.0 × 10^−324 to ±1.7 × 10^308

If we write 12.3 in code, the compiler automatically treats the literal as a double. So if we want to specify 12.3 as a float, we have to add the F/f suffix after the number:

float f = 12.3F;
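(Without the suffix, float f = 12.3; does not compile; the compiler reports error CS0664, telling you to use an 'F' suffix.)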

 

decimal type

As a supplement, the decimal type is used to represent high-precision floating-point numbers:

Name      CTS Type         Description                            Significant Figures   Range (approximate)
decimal   System.Decimal   128-bit high-precision decimal type    28                    ±1.0 × 10^−28 to ±7.9 × 10^28

As the table shows, decimal has many significant digits, reaching 28, but the range it can represent is smaller than that of the float and double types. In addition, decimal arithmetic is not implemented in hardware the way float and double are, so using it affects calculation performance.

We can define a decimal floating point number as follows:

decimal d = 12.30M;

Misunderstandings about decimal, float, and double

 

Using floating-point numbers in exact calculations is very dangerous. Although C# goes to great lengths to make floating-point arithmetic look normal, if you use floating-point numbers without understanding their characteristics, you create very serious hidden bugs. Consider the following statements:

double dd = 10000000000000000000000d;

dd += 1;

Console.WriteLine("{0:G50}", dd);

What is the output?

The output is: 10000000000000000000000 — the += 1 has no effect at all.

This is floating-point precision loss, and the most important thing is that when precision is lost, no error is reported and no exception is thrown. Precision loss can occur in many places: d * g / g is not necessarily equal to d, and d / g * g is not necessarily equal to d.
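A minimal sketch that makes this silent loss visible (the booleans are printed rather than asserted, because no exception flags them either way):

using System;

class PrecisionLossDemo
{
    static void Main()
    {
        // 0.1 and 0.2 have no exact binary representation, so the sum is off.
        Console.WriteLine(0.1 + 0.2 == 0.3);            // False
        Console.WriteLine((0.1 + 0.2).ToString("R"));   // 0.30000000000000004

        // d * g / g does not necessarily round-trip back to d.
        double d = 0.1, g = 3.0;
        Console.WriteLine(d * g / g == d);              // False for these operands; no error is raised
    }
}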

    

There are two very dangerous misconceptions!

1. "decimal is not a floating-point type, so there is no loss of precision with decimal."

Run the program below and see the result for yourself. Remember: every floating-point variable has the precision-loss problem, and decimal is an out-and-out floating-point type; no matter how high its precision, the loss of precision still exists!

decimal dd = 10000000000000000000000000000m;

dd += 0.1m;

Console.WriteLine("{0:G50}", dd);
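If you run this, you should find that the printed value is still 10000000000000000000000000000: the 0.1m is silently dropped, because keeping it would require 30 significant digits while decimal holds at most 28 or 29.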

 

2. "decimal can store larger numbers than double, so converting from double to decimal will never cause a problem."

Microsoft really should rethink the help text for decimal. In fact, only the conversions from the integer types to decimal are widening conversions. decimal has greater precision than double, but the largest number it can store is much smaller than double's maximum.
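A quick sketch of what actually happens (decimal.MaxValue is roughly 7.9 × 10^28, while double can reach about 1.7 × 10^308):

using System;

class DoubleToDecimalDemo
{
    static void Main()
    {
        Console.WriteLine(decimal.MaxValue);   // 79228162514264337593543950335

        double big = 1e300;                    // comfortably within double's range
        try
        {
            decimal d = (decimal)big;          // explicit cast required, and it overflows
            Console.WriteLine(d);
        }
        catch (OverflowException)
        {
            Console.WriteLine("double -> decimal cast overflowed");
        }
    }
}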

 

Application scenarios of decimal


 

"The decimal type is a 128-bit data type suitable for financial and monetary calculations."

Of course, decimal is safe in most cases, but no floating-point type is theoretically safe.

As for display problems caused by precision error, they are easy to fix. The real problem that floating-point numbers bring, and that integer (and decimal) arithmetic avoids, is this:

For example, suppose a transfer from account A to account B computes to 3.788888888888888 yuan. We deduct that much from account A and add that much to account B. But account A may not be debited by the exact value: if account A holds 100000000000, the exact result of 100000000000 - 3.788888888888888 is 99999999996.211111111111112, which has more digits than a double can hold, so A's new balance is a rounded value. Account B, starting at 0, very likely does receive the exact 3.788888888888888. The tiny difference between what A lost and what B gained simply disappears, and over time the discrepancy grows larger and larger.
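A minimal sketch of this ledger drift (account names and amounts are illustrative), comparing double with decimal:

using System;

class LedgerDemo
{
    static void Main()
    {
        double balanceA = 100000000000d;
        double transfer = 3.788888888888888d;

        double newA = balanceA - transfer;           // rounded to double precision
        double actuallyDebited = balanceA - newA;    // what A really lost

        Console.WriteLine(actuallyDebited.ToString("R"));  // not exactly 3.788888888888888
        Console.WriteLine(actuallyDebited == transfer);    // False: A lost a different amount than B gained

        // The same bookkeeping in decimal stays exact (26 significant digits fit in 28).
        decimal mA = 100000000000m, mT = 3.788888888888888m;
        decimal mNewA = mA - mT;                     // exactly 99999999996.211111111111112
        Console.WriteLine(mA - mNewA == mT);         // True
    }
}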

double is 64-bit, with higher precision than the 32-bit single. decimal is a 128-bit high-precision floating-point type; because it works in base 10, it is far less prone to the representation errors of binary floating point, which is why it is commonly used in financial calculations. decimal trades a smaller range for higher precision, and that trade-off is exactly what financial and monetary calculations need.

 

An example


 

As soon as I arrived at the office in the morning, I was called to the pilot test room. During testing they had found a small problem: the value the software read was 0.01 smaller than the value displayed on the equipment's LCD.

How could this happen? I had used the double type for the data, the entire value is only 6 digits long, and double provides 15 to 16 significant digits, which is more than enough. I did not understand, so I went back and set a breakpoint to trace it.

The double calculation itself was fine: the value was 66.24. But when I multiplied 66.24 by 100, the result went wrong: 66.24 * 100.0d = 6623.9999...91, and there lay the problem. Checking MSDN: the Double value type represents a double-precision 64-bit number between -1.79769313486232e308 and +1.79769313486232e308; floating-point numbers can only approximate decimal values, and the precision of the type determines how closely. By default a Double value carries 15 decimal digits of precision, although a maximum of 17 digits is maintained internally. So after multiplying by one hundred, the representation error surfaced; and because we were not allowed to round when processing the data, after the unit conversion the software displayed 66.23, which is 0.01 smaller than the 66.24 on the LCD.

Therefore, after this episode, I decided to use the higher-precision decimal type instead.
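A minimal sketch of that bug (the variable names are illustrative, and the exact printed digits depend on the runtime's formatting, but the truncated result is the point):

using System;

class UnitConversionDemo
{
    static void Main()
    {
        double reading = 66.24;
        double scaled = reading * 100.0;                  // binary double: slightly below 6624
        Console.WriteLine(scaled.ToString("R"));          // something like 6623.9999999999991

        // No rounding allowed, so truncation drops a whole 0.01.
        Console.WriteLine(Math.Truncate(scaled) / 100.0); // 66.23

        // The same computation in decimal is exact.
        decimal readingM = 66.24m;
        decimal scaledM = readingM * 100m;                // exactly 6624.00
        Console.WriteLine(Math.Truncate(scaledM) / 100m); // 66.24
    }
}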


When declaring decimal data, you can write (a) decimal myData = 100; and the compiler implicitly converts the integer 100 to 100.0m. You can also write (b) decimal myData = 100.0m;. But decimal myData = 100.0d; or decimal myData = 100.0f; will not compile, because the compiler treats 100.0d and 100.0f as floating-point numbers, and there is no implicit conversion between the floating-point types and decimal; between those types you must use an explicit cast, otherwise the compiler reports an error. This is why financial software generally uses the decimal type.
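In code (a sketch; the variable names are only for illustration):

using System;

class DecimalDeclarationDemo
{
    static void Main()
    {
        decimal a = 100;              // OK: int -> decimal is an implicit (widening) conversion
        decimal b = 100.0m;           // OK: a decimal literal
        // decimal c = 100.0d;        // compile error: no implicit double -> decimal conversion
        decimal c = (decimal)100.0d;  // OK: explicit cast
        Console.WriteLine(a + b + c);
    }
}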

Well, after switching to the decimal type everything was fine, and the result displayed exactly 66.24.

 

 
