Computer Architecture

  • Chapter One

  1. Basic Concepts of Computer System Architecture 

    1.1.1. The Hierarchy of Computer Systems

      A computer system is composed of hardware and software, and can be divided into seven levels by function:


      • Level 0 is the hardware machine, implemented directly in hardware; it forms the hardware kernel of the machine.
      • Level 1 is the microprogram machine, implemented in firmware: a set of microinstructions is provided according to the control timing that each machine instruction requires; microprograms written with them control the transfer of information between registers.
      • Level 2 is the traditional machine-language machine. The machine language of this level is the machine's instruction set. Programs written by machine-language programmers are interpreted by the Level 1 microprograms.
      • Level 3 is the operating-system machine.
      • Level 4 is the assembly-language machine. A program written in assembly language is first translated into a Level 3 or Level 2 language and then interpreted by the corresponding machine. The program that performs the translation is called the assembler.
      • Level 5 is the high-level-language machine.
      • Level 6 is the application-language machine, whose languages allow non-computer professionals to use computers directly.

      Adjacent levels are connected by translation or interpretation. Translation converts a program written in a higher-level language into an equivalent program in a lower-level language via a compiler, and the translated program is then run; interpretation executes each higher-level statement directly using lower-level facilities, without producing a complete translated program first.
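
      A minimal sketch in Python, using a made-up toy "language" of assignment statements, to contrast the two approaches (here the lower level happens to be Python itself, so the translation step is deliberately trivial):

        # Toy high-level program: one assignment statement per line (hypothetical format).
        program = ["x = 2 + 3", "y = x * 4"]

        def interpret(lines):
            # Interpretation: execute each high-level statement directly,
            # one at a time, without producing a lower-level program first.
            env = {}
            for line in lines:
                var, expr = line.split("=", 1)
                env[var.strip()] = eval(expr.strip(), {}, env)
            return env

        def translate(lines):
            # Translation: emit a complete, equivalent lower-level program
            # first; the result is then run as a whole (here via exec()).
            return "\n".join(line.strip() for line in lines)

        print(interpret(program))           # {'x': 5, 'y': 20}
        env = {}
        exec(translate(program), {}, env)   # run the translated program
        print(env)                          # {'x': 5, 'y': 20}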

      Firmware refers to hardware endowed with software functionality, for example microprograms stored in read-only memory.

    1.1.2. Definition of computer system architecture

      System structure (architecture): the attributes of a computer as seen by the programmer, that is, its conceptual structure and functional behavior.

      These attributes include:

      Data representation: The type of data that the hardware can directly recognize and process.

      Addressing technology: address encoding, addressing modes, and relocation methods, etc.

      Register organization: the definition, number, and usage rules of operand registers, index registers, control registers, and special registers.

      Instruction system: the operation types, formats, and sequencing control of instructions, etc.

      Interrupt system: interrupt types, interrupt priority levels, and interrupt response methods, etc.

      Storage system: addressing space, virtual memory, cache memory, etc.

      Processor working states: their definition and switching methods, e.g., supervisor state and user state.

      Input/output system: data exchange methods and control of the exchange process, etc.

      Information protection: information protection methods and hardware support for information protection, etc.

    1.1.3 Computer Organization and Implementation

      Computer organization (composition) refers to the logical implementation of the computer's architecture.

      Computer implementation refers to the physical realization of the computer organization.

      The relationship among architecture, organization, and implementation:

      • Computer organization is the logical implementation of the computer's architecture, and computer implementation is the physical realization of the computer organization. The three cover different content, but they are closely related.
      • One architecture can be realized by multiple organizations, and likewise one organization can have multiple physical implementations.
      • For example, the multiply function: whether to provide a multiply instruction is an architecture decision; whether to implement it with a dedicated multiplier or with an adder plus shifting is an organization decision (see the sketch after this list).
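
      For illustration, a minimal Python sketch of the "adder plus shifting" organization choice, assuming unsigned operands:

        def shift_add_multiply(a, b):
            # Multiply using only addition and shifts, the way a simple
            # ALU-based organization would realize a multiply instruction.
            product = 0
            while b:
                if b & 1:            # low multiplier bit set:
                    product += a     #   add the (shifted) multiplicand
                a <<= 1              # shift the multiplicand one place left
                b >>= 1              # consume one multiplier bit
            return product

        print(shift_add_multiply(6, 7))   # 42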

    1.1.4 Classification of Computer System Architectures

    1. Flynn's taxonomy: classifies systems by the multiplicity of instruction streams and data streams, giving four classes: SISD, SIMD, MISD, and MIMD.
    2. Feng's classification: classifies systems by their maximum degree of parallelism.
    3. Händler's classification: classifies systems by their degree of parallelism and pipelining: T(system) = (k, d, w), where k is the number of program control units in the pipeline, d is the number of arithmetic-logic units controlled by each program control unit, and w is the number of sets of basic logic circuits contained in each arithmetic pipeline.

  2. Design of Computer Systems

    1.2.1 Quantitative principles of computer system design

       Speed up frequently occurring events (make the common case fast).

       Amdahl's Law


       Amdahl's Law states that the overall performance gain obtained by improving one component of a system is limited by the fraction of the total execution time during which that component is used.

          Overall speedup = T_old / T_new = 1 / ((1 - Fe) + Fe / Se)

          where Fe is the fraction of the original execution time affected by the improvement, and Se is the speedup of the improved portion.
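
       A minimal sketch of the law in Python; Fe = 0.4 and Se = 10 are made-up example values:

        def amdahl_speedup(fe, se):
            # Overall speedup = 1 / ((1 - Fe) + Fe / Se)
            return 1.0 / ((1.0 - fe) + fe / se)

        # Speeding up 40% of the workload by a factor of 10 yields only
        # 1 / (0.6 + 0.04) = 1.5625 overall.
        print(amdahl_speedup(0.4, 10))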

       CPU Performance Formula

          The execution time of the CPU depends on three factors:

        • Instruction count (IC)
        • Clock cycles per instruction (CPI)
        • Clock frequency f (or clock cycle time t)

       Derivation of the formula:

          CPU time (T) = number of clock cycles needed to run the program × clock cycle time (t)

          Average number of clock cycles per instruction: CPI = (number of clock cycles needed to run the program) / IC

          Therefore: CPU time T = IC × CPI × t = IC × CPI / f

          where:

        • Clock cycle time t: depends on the hardware implementation technology (device process)
        • CPI: depends on the computer organization and the instruction set architecture
        • IC: depends on the instruction set architecture and the compiler technology
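
       A minimal sketch of the formula in Python; the instruction count, CPI, and clock rate are illustrative numbers only:

        def cpu_time(ic, cpi, clock_rate_hz):
            # T = IC * CPI * t = IC * CPI / f
            return ic * cpi / clock_rate_hz

        # 1e9 instructions at an average CPI of 1.5 on a 2 GHz clock:
        print(cpu_time(1e9, 1.5, 2e9))   # 0.75 seconds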

       Locality of reference

          Locality is divided into temporal locality and spatial locality.

        • Temporal locality means that an item that has been referenced recently is likely to be referenced again in the near future.
        • Spatial locality means that items whose addresses are close to one another tend to be referenced close together in time.

          The memory hierarchy (storage system) is organized according to the principle of locality of reference.
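
          A small Python illustration of both kinds of locality; it assumes the rows are stored contiguously as in a C array (Python lists do not guarantee this), so only the access pattern matters here:

            def sum_matrix(matrix):
                total = 0                # `total` is re-referenced on every
                for row in matrix:       # iteration: temporal locality
                    for x in row:        # consecutive elements of a row are
                        total += x       # touched in order: spatial locality
                return total

            print(sum_matrix([[1, 2], [3, 4]]))   # 10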

       Employing parallelism is an important way to improve computer performance.

          System-level parallelism, such as multiprocessor and multi-disk technology, yields greater throughput.

          At the level of a single processor, instruction-level parallelism is generally used to improve performance; its representative technique is pipelining.

          Parallelism can also be exploited at the digital-circuit design level. For example, a set-associative cache can access multiple banks at the same time, and a carry-lookahead chain speeds up the summation process (see the sketch below).
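
          A minimal software model of the carry-lookahead idea, with made-up 4-bit inputs: each carry is expressed directly in the generate/propagate signals rather than waiting for the previous carry, and in hardware all of these expressions are evaluated in parallel:

            def cla_add(a, b, width=4):
                abits = [(a >> i) & 1 for i in range(width)]
                bbits = [(b >> i) & 1 for i in range(width)]
                g = [abits[i] & bbits[i] for i in range(width)]  # generate
                p = [abits[i] ^ bbits[i] for i in range(width)]  # propagate
                c = [0] * (width + 1)                            # carry-in c[0] = 0
                for i in range(width):
                    # c[i+1] = g_i + p_i*g_(i-1) + p_i*p_(i-1)*g_(i-2) + ...
                    # (only g and p appear, never the previous carry c[i])
                    carry = g[i]
                    for j in range(i):
                        term = g[j]
                        for k in range(j + 1, i + 1):
                            term &= p[k]
                        carry |= term
                    c[i + 1] = carry
                s = [p[i] ^ c[i] for i in range(width)]          # sum bits
                return sum(bit << i for i, bit in enumerate(s)) | (c[width] << width)

            print(cla_add(5, 3))   # 8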

     1.2.2 The main tasks of the computer system designer

      • Requirements analysis based on users' needs
      • Balancing hardware and software
      • Designing an architecture that matches future development trends

     1.2.3 The main methods of computer system design

      • Top-down design
      • Bottom-up design
      • Middle-out design (starting from the middle)

  3. Performance Evaluation Criteria for Computer Systems

     1.3.1 Performance

       Key Performance Metrics

       Whatever metric is used, performance comparison must be based on the execution time of the same workload: the total time to execute the program, including the operating-system instructions it causes to run.

      • MIPS (Million Instructions Per Second); see the formulas after this list
      • MFLOPS (Million Floating-point Operations Per Second)
      • Benchmark programs
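
       The text does not give the formulas, but the usual definitions, stated in terms of the IC, CPI, and f of the CPU performance formula above (with Te the total execution time), are:

       \[ \mathrm{MIPS} = \frac{IC}{T_e \times 10^6} = \frac{f}{\mathrm{CPI} \times 10^6} \qquad \mathrm{MFLOPS} = \frac{\text{number of floating-point operations}}{T_e \times 10^6} \]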

       Analysis of performance evaluation results

      • Execution times of different programs on different machines
      • Arithmetic mean of execution times
      • Weighted mean of execution times (see the sketch after this list)
      • Normalization to a reference machine
      • Standard deviation
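
       A minimal Python sketch of these averaging schemes; the program names, times, and weights are made-up example data, and combining reference-machine ratios with the geometric mean is the usual convention rather than something stated above:

        from math import prod

        times   = {"prog1": 1.0, "prog2": 10.0}   # seconds on the machine under test
        weights = {"prog1": 0.8, "prog2": 0.2}    # relative frequency of each program
        ref     = {"prog1": 2.0, "prog2": 5.0}    # seconds on the reference machine

        arithmetic_mean = sum(times.values()) / len(times)
        weighted_mean   = sum(weights[p] * times[p] for p in times)
        geometric_mean  = prod(times[p] / ref[p] for p in times) ** (1 / len(times))
        print(arithmetic_mean, weighted_mean, geometric_mean)   # 5.5 2.8 1.0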

     1.3.2 Cost

       The cost of a computer system includes the cost of both hardware and software.

  4. Development of Computer System Architecture

      • The von Neumann architecture and its improvements
      • The influence of software on the architecture
      • The influence of device technology on the architecture
      • The influence of applications on the architecture

 

  5. Development of Parallelism in Computer System Architecture

     1.5.1 The concept of parallelism

       Parallelism means that a computer system performs two or more operations or computations at the same time or within the same time interval; as long as the activities overlap in time, there is parallelism. It covers two notions: simultaneity and concurrency.

        Simultaneity: Two or more events occur at the same time.

        Concurrency: two or more events occur within the same time interval.

       There are different levels of parallelism in computer systems. From the perspective of data processing, it can be divided into:

      • Word-serial, bit-serial
      • Word-serial, bit-parallel
      • Word-parallel, bit-serial
      • Fully parallel

       From the perspective of program execution, the parallelism level is divided into:

      • Intra-instruction parallelism (parallelism among the micro-operations within one instruction)
      • Instruction-level parallelism
      • Task-level or process-level parallelism
      • Job-level or program-level parallelism

       Technical Approaches to Improve Parallelism

      • Time overlap: multiple processing stages are staggered in time, so that the parts of the same set of hardware are used in turn, overlapped in time.
      • Resource duplication: introduces the space dimension into parallelism; by replicating hardware resources ("winning by numbers"), the system's performance can be greatly improved.
      • Resource sharing: a software method that lets multiple tasks use the same set of hardware in turn according to a certain schedule.

  (All hand-typed by Jaiken Wong)
