Principles of Computer Organization (2) - Data Representation and Operation

Series Article Directory

        Chapter 1 Computer System Overview

        Chapter 2 Representation and Operation of Data

        Chapter 3 Storage System

        Chapter 4 Instruction System

        Chapter 5 CPU

        Chapter 6 Bus

        Chapter 7 Input/Output System


Chapter 2 Representation and Operation of Data


Introduction

1. Number system and coding

1.1 Carry counting system

1.1.1 Base and conversion

1.1.2 True value and machine number

1.2 BCD code

1.3 ASCII code

1.4 Check code

1.4.1 Parity code

1.4.2 Hamming Code

1.4.3 CRC Cyclic Redundancy Code

2. Representation and operation of fixed-point numbers

2.1 Representation of fixed-point numbers

2.1.1 Unsigned

2.1.2 Signed numbers (sign-magnitude, one's complement, two's complement, excess code)

2.2 Operation of fixed-point numbers

2.2.1 Shift operation

2.2.2 Addition and subtraction (overflow judgment)

2.2.3 Multiplication

2.2.4 Division operation

2.3 Forced type conversion

2.4 Data Storage and Arrangement

3. Representation and operation of floating point numbers

3.1 Representation of floating point numbers 

3.1.1 The role and basic principles of floating point numbers

3.1.2 Floating point normalization

3.2 Floating point standard IEEE 754

3.3 Operations on floating point numbers

3.3.1 Addition and subtraction

3.3.2 Forced type conversion

4. Arithmetic logic unit ALU

4.1 Basic principle of the circuit

4.2 Adder design


Introduction

In this article we will discuss the following two points:

  1. How is data represented in a computer?
  2. How does the arithmetic unit in the CPU implement arithmetic and logic operations on data? This includes a discussion of fixed-point and floating-point numbers.

1. Number system and coding

1.1 Carry counting system

1.1.1 Base and conversion

The so-called carry counting system is a positional way of counting. Decimal is the most commonly used; in addition there are binary, octal, and hexadecimal. A few terms need to be introduced here:

  • Base (radix): decimal, octal, and hexadecimal have bases 10 (digits 0~9), 8 (0~7), and 16 (0~15) respectively
  • Digit positions: the binary number 1010 has 4 digit positions; from high to low the digit values are 1, 0, 1, 0
  • Digit set: in octal, for example, each digit takes a value in the range 0~7
  • Bit weight: the value each digit contributes equals the digit value multiplied by a constant that depends on its position; this constant is called the bit weight
  • Relationship between base and digits: the base is the number of distinct digit values each position can use
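The bit-weight expansion above can be sketched in a short Python snippet (the helper names are illustrative, not from the original text):

```python
# A minimal sketch: expanding a numeral by bit weights, and the reverse.
def to_decimal(digits, base):
    """Sum digit * base**position, counting positions from the low end."""
    return sum(d * base**i for i, d in enumerate(reversed(digits)))

def from_decimal(value, base):
    """Repeated division by the base yields the digits low-to-high."""
    digits = []
    while value:
        value, r = divmod(value, base)
        digits.append(r)
    return list(reversed(digits)) or [0]

# Binary 1010 expands as 1*8 + 0*4 + 1*2 + 0*1 = 10
print(to_decimal([1, 0, 1, 0], 2))   # 10
print(from_decimal(10, 2))           # [1, 0, 1, 0]
```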

1.1.2 True value and machine number

  • True value: the signed numbers we write every day with "+" and "-" are true values, such as +15 and -8. The true value is the actual value a machine number represents, and is generally decimal.
  • Machine number: the number after its sign has been "digitized". It is generally binary; for example, in the 4-bit signed binary number 0110, the highest bit is the sign bit, and the true value is +6. Usually 0 represents the "+" sign and 1 represents the "-" sign.

1.2 BCD code

BCD code uses 4 binary bits to represent 1 decimal digit. It is a binary-coded form of decimal, in which binary codes stand in for decimal digits.

  • 8421 code (weighted) and its arithmetic
  • Excess-3 code (unweighted)
  • 2421 code (weighted)
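As a sketch of the 8421 code (helper names are illustrative), each decimal digit maps independently to its own 4-bit binary value:

```python
# 8421 BCD sketch: each decimal digit becomes its own 4-bit group.
def to_bcd(n):
    """Encode a non-negative decimal integer as an 8421-BCD bit string."""
    return ''.join(format(int(d), '04b') for d in str(n))

def from_bcd(bits):
    """Decode 4-bit groups back into decimal digits."""
    return int(''.join(str(int(bits[i:i+4], 2)) for i in range(0, len(bits), 4)))

print(to_bcd(29))            # '00101001'  (2 -> 0010, 9 -> 1001)
print(from_bcd('00101001'))  # 29
```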

1.3 ASCII code

The characters we see every day, such as the digits (0~9), symbols like #, %, +, @, and the English letters (A~Z) and (a~z), are all represented and stored in binary inside a computer. Any binary encoding could represent them, but for everyone to communicate with each other normally, a unified standard is needed, and thus the ASCII code was born: everyone follows this one standard for data communication and exchange. For the details of the ASCII code, please refer to Wikipedia.

Summary (to sort out the knowledge points in this section!!!):

1.4 Check code

1.4.1 Parity code

The code distance of the parity check code is d = 2, so it can only detect errors in an odd number of bits, and it has no error-correction capability.

1.4.2 Hamming Code

A Hamming code can detect 2-bit errors and correct 1-bit errors: when a 1-bit error occurs it can be corrected, while a 2-bit error can only be detected, not corrected.

To distinguish a 1-bit error from a 2-bit error when an error is found, an extra "overall parity bit" is added that performs even parity over the whole codeword. The judgment rules are as follows:
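The classic Hamming(7,4) layout makes single-error correction concrete. This is a sketch with illustrative function names: check bits sit at positions 1, 2, 4, each covering the positions whose index has that bit set, so the syndrome of a single-bit error equals the error position.

```python
# Hamming(7,4) sketch: parity bits at positions 1, 2, 4; data at 3, 5, 6, 7.
def hamming74_encode(d):                 # d = 4 data bits
    code = [0] * 8                       # indices 1..7 are used
    code[3], code[5], code[6], code[7] = d
    for p in (1, 2, 4):                  # each parity bit covers positions i with i & p
        code[p] = sum(code[i] for i in range(1, 8) if i & p) % 2
    return code[1:]

def hamming74_syndrome(code7):
    code = [0] + code7
    return sum(p for p in (1, 2, 4)
               if sum(code[i] for i in range(1, 8) if i & p) % 2)

word = hamming74_encode([1, 0, 1, 1])
print(hamming74_syndrome(word))          # 0: no error

word[4] ^= 1                             # flip position 5 (0-based index 4)
pos = hamming74_syndrome(word)
print(pos)                               # 5: the syndrome points at the error
word[pos - 1] ^= 1                       # correct it
print(hamming74_syndrome(word))          # 0 again
```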

1.4.3 CRC Cyclic Redundancy Code

Note: it is imprecise to say that the remainder 010 obtained during error detection means bit C2 is in error; the remainder is not a strict binary-to-decimal position number. The explanation uses the example above:

Let's look at another example with shorter information bits:

In theory, the error detection capability of the cyclic redundancy check code has the following characteristics:

  • Detects all errors affecting an odd number of bits
  • Detects all 2-bit errors
  • Detects all burst errors whose length is no greater than the number of check bits

The CRC cyclic redundancy code is generally only used for "error detection" in practical applications, not for "error correction".
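A sketch of CRC check-bit generation by mod-2 (XOR) polynomial division. The generator G(x) = x³ + x² + 1 (binary 1101) and the 6-bit information word here are only an assumed example, not taken from the article's own figures:

```python
# CRC sketch: append check_len zeros, then divide by the generator
# using mod-2 (XOR, no-borrow) division; the remainder is the check code.
def crc_remainder(message_bits, generator, check_len):
    rem = 0
    for b in message_bits + [0] * check_len:
        rem = (rem << 1) | b
        if rem >> check_len:          # leading bit set: "subtract" by XOR
            rem ^= generator
    return rem

info = [1, 0, 1, 0, 0, 1]
r = crc_remainder(info, 0b1101, 3)
print(format(r, '03b'))               # '001' -> codeword 101001 001
```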


2. Representation and operation of fixed-point numbers

2.1 Representation of fixed-point numbers

2.1.1 Unsigned

All binary bits of the machine word are value bits; there is no sign bit, so the number equals its absolute value. If the machine word length is 8 bits, the representable range is 0~255. (Usually only unsigned integers are used; there are no unsigned fixed-point fractions.)

2.1.2 Signed numbers (sign-magnitude, one's complement, two's complement, excess code)

A machine cannot recognize a sign such as "+" or "-" directly, so the sign itself is encoded in binary: generally '0' means positive and '1' means negative, and the sign bit is placed in front of the significant digits.

Fixed-point representation of signed numbers

Note: fixed-point integers and fixed-point fractions can all be represented in three codes: sign-magnitude (original code), one's complement (inverse code), and two's complement (complement code); fixed-point integers can additionally be represented in excess code (shift code).

① Sign-magnitude: the value bits hold the absolute value of the true value, and the sign bit "0/1" corresponds to "positive/negative".

② One's complement: if the sign bit is 0, the one's complement is identical to the sign-magnitude form; if the sign bit is 1, all value bits are inverted. (The range of one's complement is the same as sign-magnitude.)

③ Two's complement: the two's complement of a positive number equals its sign-magnitude form; the two's complement of a negative number is its one's complement plus 1 in the lowest bit (carries must be propagated).

The role of two's complement: it converts subtraction into an equivalent addition, which saves hardware.

④ Excess code: obtained from the two's complement by inverting the sign bit. Excess code can only be used to represent integers. (The range of excess code is the same as two's complement.)

What is excess code for? Integers represented in excess code compare the way a computer naturally compares bit strings (scan from the front; the first 1 wins), which makes magnitude comparison easy.
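The four encodings can be compared side by side in a short sketch (an 8-bit word length is assumed; function and key names are illustrative):

```python
# Compute the four 8-bit machine codes of a signed integer.
# Excess code is obtained from two's complement by flipping the sign bit.
def machine_codes(x, n=8):
    mask = (1 << n) - 1
    if x >= 0:
        sm = om = tc = x                  # all three agree for positives
    else:
        sm = (1 << (n - 1)) | -x          # sign-magnitude: sign bit + |x|
        om = (~(-x)) & mask               # one's complement: invert the bits
        tc = (om + 1) & mask              # two's complement: one's comp + 1
    ex = tc ^ (1 << (n - 1))              # excess code: flip the sign bit
    return {k: format(v, f'0{n}b') for k, v in
            dict(sign_magnitude=sm, ones=om, twos=tc, excess=ex).items()}

for name, bits in machine_codes(-5).items():
    print(name, bits)
```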

Summary (to sort out the knowledge points in this section!!!): 

2.2 Operation of fixed-point numbers

2.2.1 Shift operation

Shift: change the bit weight of each digit by changing its position relative to the radix point. Multiplication and division by powers of the base can be realized with shift operations.

Arithmetic shift:

① Arithmetic shift of sign-magnitude numbers: the sign bit stays unchanged; only the value bits are shifted.

  • Right shift: fill the high bit with 0 and discard the low bit. If the discarded bit is 0, the shift is exactly ÷2; if the discarded bit is not 0, precision is lost
  • Left shift: fill the low bit with 0 and discard the high bit. If the discarded bit is 0, the shift is exactly ×2; if the discarded bit is not 0, a serious error occurs

② Arithmetic shift of one's-complement numbers:

③ Arithmetic shift of two's-complement numbers:

Note: because the number of bits is limited, an arithmetic shift sometimes cannot be exactly equivalent to multiplication or division

Logical shift:

  • Logical right shift: add 0 to the high bit, discard the low bit
  • Logical left shift: add 0 to the low bit, and discard the high bit

A logical shift can be regarded as an arithmetic shift of an "unsigned number".
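The difference between the two right shifts can be sketched on an 8-bit two's-complement value (helper names are illustrative): an arithmetic right shift copies the sign bit into the vacated high end, while a logical right shift fills it with 0.

```python
N = 8
MASK = (1 << N) - 1

def asr(x, k):          # arithmetic shift right on an 8-bit two's-complement value
    signed = x if x < (1 << (N - 1)) else x - (1 << N)   # reinterpret as signed
    return (signed >> k) & MASK                          # Python >> sign-extends

def lsr(x, k):          # logical shift right: high bits filled with 0
    return (x & MASK) >> k

x = 0b11110100          # two's complement for -12
print(format(asr(x, 1), '08b'))   # 11111010 = -12 ÷ 2 = -6
print(format(lsr(x, 1), '08b'))   # 01111010 = 122 as an unsigned value
```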

Cyclic shift:

Note: the following addition, subtraction, multiplication, and division operations are all discussed for sign-magnitude and two's-complement representations

2.2.2 Addition and subtraction (overflow judgment)

① Addition and subtraction in sign-magnitude

② Addition and subtraction in two's complement

In two's complement, both addition and subtraction are ultimately transformed into addition and carried out by the adder, with the sign bit participating in the operation.
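That idea can be sketched on 8-bit words, where subtraction reuses the adder via "invert and add one" (helper names are illustrative):

```python
N = 8
MASK = (1 << N) - 1

def tc_add(a, b):
    return (a + b) & MASK               # the adder just adds, sign bit included

def tc_sub(a, b):
    return tc_add(a, (~b + 1) & MASK)   # A - B = A + (-B): negate B, then add

a, b = 0b00000110, 0b00001011           # +6 and +11
print(format(tc_sub(a, b), '08b'))      # 11111011 = -5 in two's complement
```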

Overflow judgment

Method ①: use one sign bit

Method ②: use one sign bit, and judge overflow from the carries into and out of the sign bit of the data

Method ③: use a double sign bit
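Method ② can be sketched directly: compute the carry into and the carry out of the sign bit and compare them; overflow occurs exactly when they differ (8-bit words assumed; names are illustrative).

```python
def add_with_overflow(a, b, n=8):
    low = (a & ((1 << (n - 1)) - 1)) + (b & ((1 << (n - 1)) - 1))
    carry_in = low >> (n - 1)            # carry into the sign bit
    carry_out = (a + b) >> n             # carry out of the sign bit
    result = (a + b) & ((1 << n) - 1)
    return result, carry_in != carry_out

print(add_with_overflow(0b01111111, 0b00000001))  # +127 + 1 overflows to -128
print(add_with_overflow(3, 4))                    # (7, False): no overflow
```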

2.2.3 Multiplication

① Sign-magnitude multiplication

② Two's-complement multiplication

Comparison of sign-magnitude multiplication and two's-complement multiplication:

2.2.4 Division operation

① Sign-magnitude division (restoring-remainder method)

Is it possible not to restore the remainder after a trial quotient bit of 1 yields a negative remainder? The answer is of course yes!

The improved method is the non-restoring method (alternating addition and subtraction).
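The restoring-remainder idea from ① can be sketched on unsigned operands (names are illustrative); the non-restoring variant then replaces the restore-then-subtract pair with alternating addition and subtraction:

```python
# Restoring division: try subtracting the divisor at each step; if the
# partial remainder goes negative, the quotient bit is 0 and the remainder
# is restored by adding the divisor back.
def restoring_divide(dividend, divisor, n_bits):
    rem, quo = 0, 0
    for i in range(n_bits - 1, -1, -1):
        rem = (rem << 1) | ((dividend >> i) & 1)  # bring down the next bit
        rem -= divisor                            # trial subtraction
        if rem < 0:
            rem += divisor                        # restore: quotient bit 0
            quo = quo << 1
        else:
            quo = (quo << 1) | 1                  # quotient bit 1
    return quo, rem

print(restoring_divide(13, 3, 4))   # (4, 1): 13 = 3*4 + 1
```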

② Two's-complement division

Comparison of sign-magnitude division and two's-complement division:

2.3 Forced type conversion

2.4 Data Storage and Arrangement


3. Representation and operation of floating point numbers

3.1 Representation of floating point numbers 

3.1.1 The role and basic principles of floating point numbers

The range of numbers a fixed-point format can represent is limited: short (2 bytes), then int (4 bytes), then long (8 bytes). Since we cannot increase the length of the data without limit, how can we enlarge the range of representable values without changing the number of bits? The solution is the floating-point number introduced next.

3.1.2 Floating point normalization

How do we solve the precision-loss problem in the example question above? The answer is to normalize the mantissa of the floating-point number.

Summary (to sort out the knowledge points in this section!!!): 

3.2 Floating point standard IEEE 754

What code is used for the exponent and mantissa of floating-point numbers? How many bits are appropriate for each? In this section we will discuss this issue.

Before introducing the content of this section, let's review excess code (shift code):

Let's look at two examples:
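The single-precision fields of a float (1 sign bit, 8 exponent bits in excess-127, 23 fraction bits) can also be inspected mechanically; this sketch uses Python's struct module, with an illustrative helper name:

```python
import struct

def ieee754_fields(x):
    """Return (sign, exponent field, fraction field) of a binary32 float."""
    bits = int.from_bytes(struct.pack('>f', x), 'big')
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF        # stored in excess-127 code
    fraction = bits & 0x7FFFFF            # 23 bits after the implicit leading 1
    return sign, exponent, fraction

s, e, f = ieee754_fields(-0.75)           # -0.75 = -1.1 (binary) * 2^-1
print(s, e - 127, format(f, '023b'))      # sign 1, true exponent -1
```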

3.3 Operations on floating point numbers

3.3.1 Addition and subtraction

3.3.2 Forced type conversion

Summary (to sort out the knowledge points in this section!!!): 


4. Arithmetic logic unit ALU

4.1 Basic principle of the circuit

4.2 Adder design

Adder optimization process:

Serial adder ----> Parallel adder with serial carry ----> Parallel adder within a group and serial carry between groups ----> Parallel adder within a group and parallel carry between groups
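The final step, parallel carry between and within groups, rests on carry lookahead: generate (g) and propagate (p) signals let every carry be computed from c0 in two gate levels instead of rippling. A 4-bit software model of that logic (an illustrative sketch, not hardware):

```python
# 4-bit carry lookahead model: g_i = a_i & b_i, p_i = a_i ^ b_i,
# c_{i+1} = g_i | (p_i & c_i); in hardware each c_i is fully expanded
# in terms of c0, so all carries appear in parallel.
def cla_add4(a, b, c0=0):
    g = [(a >> i & 1) & (b >> i & 1) for i in range(4)]   # generate
    p = [(a >> i & 1) ^ (b >> i & 1) for i in range(4)]   # propagate
    c = [c0]
    for i in range(4):
        c.append(g[i] | (p[i] & c[i]))
    s = sum((p[i] ^ c[i]) << i for i in range(4))
    return s, c[4]                        # 4-bit sum and carry out

print(cla_add4(0b0101, 0b0111))           # (12, 0): 5 + 7 = 12, no carry out
```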


Origin blog.csdn.net/weixin_52850476/article/details/125200583