How are float/double literals stored in compiled .NET DLL files?

kaalus:

My application must produce exactly the same numeric results on all machines. I understand that float/double math in C# is not deterministic, but what about the binary representation of literal values?

float x = 1.23f;

How will 1.23f be stored in the compiled DLL file? As the binary representation of a 32-bit IEEE-754 float? Or as some intermediate representation that the JIT compiler on the target machine will need to convert to IEEE-754 floating point (a potential source of non-determinism)?

I understand why floating point operations in C# are not deterministic. I am asking ONLY whether the binary representation of literals is deterministic. Please, no answers about floating point determinism in general.

Matthew Watson:

They are stored in IEEE 754 format.

You can verify this by using ILDASM.EXE to disassemble the generated code and inspect the binary values.
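For instance, if the compiled assembly were named Example.dll (a placeholder name), this command would dump the IL to the console with the raw instruction bytes shown in hex as comments:

ildasm /text /bytes Example.dll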

For example, float x = 1.23f; will generate:

IL_0000:  /* 22   | A4709D3F         */ ldc.r4     1.23

(Note that the bytes are stored in the assembly in little-endian order, so A4709D3F corresponds to the 32-bit value 0x3F9D70A4.)
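If you also want to confirm the bit pattern from managed code, here is a minimal sketch using the standard BitConverter API (the class name LiteralBits is just for illustration); it prints the same bytes that appear in the ldc.r4 instruction:

using System;

class LiteralBits
{
    static void Main()
    {
        float x = 1.23f;

        // GetBytes returns the raw IEEE-754 bytes of the float.
        // On a little-endian machine this prints A4-70-9D-3F, the same
        // bytes embedded in the ldc.r4 instruction above.
        byte[] bytes = BitConverter.GetBytes(x);
        Console.WriteLine(BitConverter.ToString(bytes));

        // The same value viewed as a single 32-bit pattern: 0x3F9D70A4.
        Console.WriteLine("0x{0:X8}", BitConverter.ToInt32(bytes, 0));
    }
}

This only inspects the value at runtime; the point of the answer is that the compiler has already baked these exact IEEE-754 bytes into the DLL, so there is no literal conversion left for the JIT to perform.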
