Go language - binary and bit operations


I. Introduction to number bases
1. Binary: digits 0 and 1. Go source has no binary integer literal (the 0b prefix only arrived in Go 1.13), but a value can be printed in binary with the %b verb:
package main
import "fmt"
func main() {
    var i int = 5
    fmt.Printf("%b", i) // prints 5 in binary --->> 101
}
2. Decimal: digits 0-9

3. Octal: digits 0-7, written with a leading 0:
package main
import "fmt"
func main() {
    var a int = 011
    fmt.Println("a=", a) // a= 9
}
4. Hexadecimal: digits 0-9 and A-F, written with a leading 0x; both the digits and the prefix are case insensitive:
package main
import "fmt"
func main() {
    var j = 0x11
    fmt.Println("j=", j) // j= 17
}
II. Base conversion - other bases to decimal (multiply each digit by the base raised to the digit's position, then sum)
1. Binary to decimal: 1011 = 1*2^3 + 0*2^2 + 1*2^1 + 1*2^0 = 11
2. Octal to decimal: 0123 = 1*8^2 + 2*8^1 + 3*8^0 = 83
3. Hexadecimal to decimal: 0x34A = 3*16^2 + 4*16^1 + 10*16^0 = 842

III. Base conversion - decimal to other bases (divide repeatedly by the target base and collect the remainders)
1. Decimal to binary: divide by 2 repeatedly and read the remainders in reverse: 10 = 1010
2. Decimal to octal: divide by 8 repeatedly and read the remainders in reverse: 156 = 0234
3. Decimal to hexadecimal: divide by 16 repeatedly and read the remainders in reverse: 356 = 0x164

IV. Base conversion - binary to other bases (group 3 bits for octal, 4 bits for hexadecimal)
1. Binary to octal: split the bits into groups of 3 and convert each group: 11010101 = 11 010 101 = 0325
2. Binary to hexadecimal: split the bits into groups of 4 and convert each group: 11010101 = 1101 0101 = 0xD5

V. Base conversion - octal/hexadecimal to binary (the reverse of the grouping above)
1. Octal to binary: expand each octal digit into 3 binary bits: 0237 = 10011111
2. Hexadecimal to binary: expand each hexadecimal digit into 4 binary bits: 0x237 = 1000110111

VI. Bit operations: &, |, ^, <<, >>. Operands are converted to their complement (two's complement) form before the operation; if the result is negative, convert the complement back to the inverse code and then to the original code to read its value.
1. Introduction to original code (sign-magnitude), inverse code (ones' complement), and complement (two's complement)
2. The most significant bit of a binary number is the sign bit: 0 means positive, 1 means negative --->> 1 = 0000 0001, -1 = 1000 0001
3. For positive numbers, the original, inverse, and complement codes are all the same.
4. For negative numbers: inverse code = original code with the sign bit unchanged and every other bit flipped; complement = inverse code + 1
   1 --->> original: 0000 0001 --> inverse: 0000 0001 --> complement: 0000 0001
   -1 --->> original: 1000 0001 --> inverse: 1111 1110 --> complement: 1111 1111
5. For 0, the inverse and complement codes are all 0s.
6. Computers perform arithmetic on the complement representation.
7. Reading a negative result: complement - 1 gives the inverse code; flipping the inverse code's non-sign bits gives the original code.

8. & bitwise AND: a bit is 1 only if both bits are 1, otherwise 0
   --->> 2 & 3 = 2: 2 = 0000 0010, 3 = 0000 0011; the AND is performed on the two's complements of 2 and 3
9. | bitwise OR: a bit is 1 if either bit is 1, otherwise 0 --->> 2 | 3 = 3
10. ^ bitwise XOR: a bit is 1 if the two bits differ, otherwise 0
   --->> 2 ^ 3 = 1
   --->> -2 ^ 2 = -4: -2 (original: 1000 0010 --> inverse: 1111 1101 --> complement: 1111 1110), 2 = 0000 0010
   --->> the result in complement form is 1111 1100 --> inverse: 1111 1011 --> original: 1000 0100 = -4
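All four worked examples above can be run directly; note that Go evaluates -2 ^ 2 as (-2) ^ 2 because unary minus binds tighter than ^:

```go
package main

import "fmt"

func main() {
	fmt.Println(2 & 3)  // 2: 0000 0010 & 0000 0011 = 0000 0010
	fmt.Println(2 | 3)  // 3: 0000 0010 | 0000 0011 = 0000 0011
	fmt.Println(2 ^ 3)  // 1: 0000 0010 ^ 0000 0011 = 0000 0001
	fmt.Println(-2 ^ 2) // -4: 1111 1110 ^ 0000 0010 = 1111 1100
}
```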

11. >> right shift: the low bits are shifted out, the sign bit is unchanged, and copies of the sign bit fill the vacated high bits
12. << left shift: bits shifted out of the high end are discarded, and the vacated low bits are filled with 0
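A quick check of both shift directions; in Go, right-shifting a negative signed integer copies the sign bit into the high bits (arithmetic shift):

```go
package main

import "fmt"

func main() {
	fmt.Println(1 << 3)  // 8: low bits fill with 0
	fmt.Println(8 >> 1)  // 4
	fmt.Println(-8 >> 1) // -4: the sign bit is copied into the high bits
}
```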


Origin www.cnblogs.com/puti306/p/11415015.html