Go language data types: integers and floating-point numbers

First published on the WeChat public account "Go programming time".

1. Integer

In Go, the integer types can be subdivided into 10 kinds. To make them easier to learn, I have organized these types into a table:

  Type    Size       Range
  int8    1 byte     -128 to 127
  uint8   1 byte     0 to 255
  int16   2 bytes    -32768 to 32767
  uint16  2 bytes    0 to 65535
  int32   4 bytes    -2^31 to 2^31 - 1
  uint32  4 bytes    0 to 2^32 - 1
  int64   8 bytes    -2^63 to 2^63 - 1
  uint64  8 bytes    0 to 2^64 - 1
  int     4 or 8 bytes, depending on the platform
  uint    4 or 8 bytes, depending on the platform

The difference between int and uint is the letter u: with a u the type is unsigned; without it, the type is signed.

Let me explain what difference the sign makes.

Take int8 and uint8 as examples. The 8 means 8 bits, so the number of distinct values that can be represented is 2^8 = 256.

uint8 is unsigned, so every representable value is non-negative: 0 to 255, exactly 256 numbers.

int8 is signed, so values can be positive or negative. How is the range split? Half and half: -128 to 127, which is also exactly 256 numbers.

int8, int16, int32, and int64 all end in a number, which means their bit width, and therefore the set of values they can represent, is fixed.

int, by contrast, has no number after it, meaning its size is not fixed; it can change. What does it change with?

  • On a 32-bit system, int and uint occupy 4 bytes, i.e. 32 bits.
  • On a 64-bit system, int and uint occupy 8 bytes, i.e. 64 bits.

For this reason, in some scenarios you should avoid int and uint in favor of the more precise int32 and int64: for example, in binary transmission, or when reading and writing file format descriptions, so that the layout of the file is not affected by the byte size of int on the target platform.

Representing integers in different bases

Out of habit, when initializing an integer variable we use decimal notation, because it is the most intuitive. For example, this is the integer 10:

var num int = 10

However, you should know that an integer can also be written in other bases; the most common are binary, octal, and hexadecimal.

Binary: prefixed with 0b or 0B

var num01 int = 0b1100

Octal: prefixed with 0o or 0O

var num02 int = 0o14

Hexadecimal: prefixed with 0x or 0X

var num03 int = 0xC

The following code uses binary, octal, and hexadecimal to represent the same decimal value, 12:

package main

import (
	"fmt"
)

func main() {
	var num01 int = 0b1100
	var num02 int = 0o14
	var num03 int = 0xC

	fmt.Printf("The binary number %b represents: %d \n", num01, num01)
	fmt.Printf("The octal number %o represents: %d \n", num02, num02)
	fmt.Printf("The hexadecimal number %X represents: %d \n", num03, num03)
}

The output is as follows:

The binary number 1100 represents: 12 
The octal number 14 represents: 12 
The hexadecimal number C represents: 12 

The code above uses the formatting functions of the fmt package. The format verbs used, along with a few related ones, are listed here for reference:

%b    base-2 (binary)
%c    the character corresponding to the Unicode code point
%d    base-10 (decimal)
%o    base-8 (octal)
%q    a single-quoted Go character literal, safely escaped where necessary
%x    base-16 (hexadecimal), using a-f
%X    base-16 (hexadecimal), using A-F
%U    Unicode format: U+1234; equivalent to "U+%04X"
%E    scientific notation
%f    decimal-point floating-point notation

2. Float

A floating-point number generally consists of an integer part, a decimal point ".", and a fractional part.

The integer and fractional parts are both written in decimal. But there is a second notation: one that adds an exponent part, consisting of "E" or "e" followed by a decimal number with an optional sign. For example, 3.7E-2 represents the floating-point number 0.037, and 3.7E+1 represents the floating-point number 37.

Sometimes a floating-point value can also be written in simplified form. For example, 37.0 can be simplified to 37, and 0.037 can be simplified to .037.

One thing to note: in Go, a leading zero does not turn a floating-point literal into octal, so 03.7 must represent the floating-point number 3.7. (The classic floating-point literal form is decimal; since Go 1.13 hexadecimal floating-point literals such as 0x1p-2 are also supported, but decimal remains the usual notation.)
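The literal forms described above can be sketched in a few lines:

```go
package main

import "fmt"

func main() {
	// Exponent notation: "e"/"E" plus a decimal exponent with optional sign.
	fmt.Println(3.7E-2) // 0.037
	fmt.Println(3.7e+1) // 37

	// Simplified forms: the fractional or integer part may be omitted.
	fmt.Println(37.)  // 37
	fmt.Println(.037) // 0.037

	// A leading 0 does not make a float literal octal.
	fmt.Println(03.7) // 3.7
}
```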

float32 and float64

The Go language provides two precisions of floating-point numbers: float32 and float64.

float32, commonly called single precision, is stored in 4 bytes, i.e. 4 × 8 = 32 bits: 1 bit for the sign, 8 bits for the exponent, and the remaining 23 bits for the mantissa.


float64, the double precision we are familiar with, is stored in 8 bytes, i.e. 8 × 8 = 64 bits: 1 bit for the sign, 11 bits for the exponent, and the remaining 52 bits for the mantissa.


So what does this say about precision? How many significant digits are there?

Precision depends on the number of bits in the mantissa.

For float32 (single precision), the mantissa has 23 bits. The smallest increment it can express is 2^-23, which is approximately 1.19 × 10^-7, so a float32 is only accurate to about 6 digits after the decimal point (in scientific notation); adding the digit before the decimal point gives about 7 significant digits.

Similarly, float64 (double precision) has a 52-bit mantissa; its smallest increment is 2^-52, about 2.22 × 10^-16, so it is accurate to about 15 digits after the decimal point, and with the digit before the decimal point has about 16 significant digits.

From the above, we can draw the following conclusions:

First, float32 and float64 can represent a great many values

Floating-point values range from the very tiny to the very huge. The limits of the floating-point ranges can be found in the math package:

  • The constant math.MaxFloat32 is the largest value a float32 can take, about 3.4e38;
  • The constant math.MaxFloat64 is the largest value a float64 can take, about 1.8e308;
  • The smallest positive values representable by float32 and float64 are 1.4e-45 and 4.9e-324, respectively.

Second, the precision is large but finite

Although floating-point numbers can express very large values, their precision is not equally generous.

  • float32 provides roughly 6 decimal digits of precision (digits after the decimal point, once the value is written in scientific notation)
  • float64 provides roughly 15 decimal digits of precision (on the same scientific-notation basis)

What exactly does precision mean here?

Take the number 10000018, represented as a float32. Written in scientific notation it is 1.0000018 × 10^7, with 7 digits after the decimal point, which is just within float32's precision.

What does it mean that this just meets the precision requirement? It means that if you perform arithmetic on this number, such as +1 or -1, the result is still guaranteed to be accurate.

package main

import "fmt"

var myfloat float32 = 10000018

func main() {
	fmt.Println("myfloat: ", myfloat)
	fmt.Println("myfloat: ", myfloat+1)
}

The output is as follows:

myfloat:  1.0000018e+07
myfloat:  1.0000019e+07

The example above is a critical case that just meets the precision requirement. For comparison, here is an example that does not meet it; all we have to do is make the number one digit longer.

Replace the number with 100000187, still using float32. Written in scientific notation it has 8 digits after the decimal point; since the precision is finite, only the first 7 are accurate when the value is merely stored. Because the 8th digit cannot be represented, a mathematical operation can leave even the 7th digit inaccurate.

Let's verify this with code. By our ordinary understanding, myfloat01 = 100000182 plus 5 should equal myfloat02 = 100000187:

package main

import "fmt"

var myfloat01 float32 = 100000182
var myfloat02 float32 = 100000187

func main() {
	fmt.Println("myfloat: ", myfloat01)
	fmt.Println("myfloat: ", myfloat01+5)
	fmt.Println(myfloat02 == myfloat01+5)
}

But because the type is float32 and its precision is insufficient, the final comparison comes out unequal (the digits are inaccurate from the 7th decimal place on):

myfloat:  1.00000184e+08
myfloat:  1.0000019e+08
false

Because of this precision problem, very strange things can happen; for example, myfloat == myfloat+1 can return true.
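A quick sketch of this strange phenomenon: at a magnitude of 1e8, the gap between adjacent float32 values is already 8, so adding 1 changes nothing.

```go
package main

import "fmt"

func main() {
	// At 1e8 the spacing between adjacent float32 values is 8,
	// so adding 1 rounds back to the same value.
	var myfloat float32 = 100000000
	fmt.Println(myfloat == myfloat+1) // true
}
```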

Reference article:

https://www.zhihu.com/question/26022206



Origin www.cnblogs.com/wongbingming/p/12580606.html