Main performance indicators of computer evaluation

1. Clock frequency (main frequency) The clock frequency, or main frequency, is one of a computer's principal performance indicators and largely determines its computing speed. The CPU's working rhythm is controlled by the main clock, which continuously generates clock pulses at a fixed frequency; this frequency is the CPU's main frequency. The higher the main frequency, the faster the CPU's working rhythm and the faster its computing speed. However, since IBM released the first dual-core processor (the POWER4, in 2001), multi-core has become an important direction of CPU development, and judging performance by clock frequency alone is no longer appropriate: the number of cores in a single CPU also matters. Most current mainstream server CPUs have eight or twelve cores, and counts of 32, 96, or even more are likely in the future.

2. Cache The cache improves the CPU's operating efficiency. At present two-level cache technology is generally used, and some CPUs use three levels. Cache memory is built from static RAM (SRAM, static random access memory) and its structure is relatively complex; because the CPU die area cannot be too large, the L1 cache capacity cannot be made very large. A cache with a write-back (WriteBack) structure provides caching for both read and write operations, whereas a write-through cache is effective only for reads. L2 and L3 cache capacity also affects CPU performance; in principle, the bigger the better.
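The difference between write-back and write-through can be seen by counting how many stores actually reach main memory. Below is a minimal sketch, assuming a hypothetical direct-mapped cache with one word per line (all names and the cache geometry are illustrative, not any real CPU's design):

```python
def memory_writes_write_through(addresses):
    # Write-through: every store is propagated to main memory immediately,
    # so the number of memory writes equals the number of CPU stores.
    return len(addresses)

def memory_writes_write_back(addresses, num_lines=4):
    # Write-back: a store only dirties the cache line; main memory is
    # updated when a dirty line is evicted (or flushed at the end).
    lines = {}        # line index -> (tag, dirty)
    mem_writes = 0
    for addr in addresses:
        idx, tag = addr % num_lines, addr // num_lines
        if idx in lines and lines[idx][0] != tag and lines[idx][1]:
            mem_writes += 1           # evicting a dirty line costs a write
        lines[idx] = (tag, True)      # write hit or allocate: mark dirty
    mem_writes += sum(1 for _, dirty in lines.values() if dirty)  # flush
    return mem_writes
```

Ten consecutive stores to the same address cost ten memory writes under write-through but only one under write-back, which is why write-back caches help write-heavy workloads.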

3. Computing speed Computing speed is the main characterization of a computer's working ability and productivity. It depends on the amount of data the CPU can process in a given time and on the CPU's main frequency. It is generally measured in MIPS (millions of instructions per second) and MFLOPS (millions of floating-point operations per second): MIPS describes a computer's fixed-point computing capability, while MFLOPS describes its floating-point computing capability.

4. Operation precision Operation precision refers to the number of binary digits the computer can process directly when handling information; the more digits, the higher the precision. The number of bits in the data involved in an operation is usually expressed by the basic word length. The word length of the PC (Personal Computer) has developed from the 8088's quasi-16 bits (16 bits for calculation, 8 bits for I/O) to today's 32 and 64 bits. Medium and large computers are generally 32-bit or 64-bit, and mainframes are generally 64-bit. Single-chip microcomputers mainly use 8-bit and 16-bit word lengths at present.
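The word length directly fixes the range of integers a machine can represent in one word. A quick sketch of the unsigned and two's-complement ranges for a given word length:

```python
def unsigned_range(word_bits):
    # An n-bit unsigned word represents 0 .. 2**n - 1.
    return (0, 2**word_bits - 1)

def signed_range(word_bits):
    # An n-bit two's-complement word represents -2**(n-1) .. 2**(n-1) - 1.
    return (-(2**(word_bits - 1)), 2**(word_bits - 1) - 1)
```

For example, an 8-bit word covers 0..255 unsigned, and a 16-bit word covers -32768..32767 in two's complement.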

5. Memory storage capacity The memory stores data and programs and exchanges information directly with the CPU. The larger its capacity, the more data and programs it can hold, which reduces the number of information exchanges with the disk and improves operating efficiency. Storage capacity is generally measured in bytes. PC memory has grown from the 1MB configured on 286 machines to the 1GB or more that is now mainstream. Servers generally carry 2~8GB, and mainframes such as those used by provincial settlement centers in the banking system have memory of up to hundreds of GB. Larger memory capacity is necessary for running large-scale software, especially large database applications; the emergence of in-memory databases has pushed the use of memory to the extreme.

6. Memory access cycle The time the memory needs to complete one read (fetch) or write (store) operation is called the memory access time. The minimum time required between two consecutive reads (or writes) is called the storage cycle. The shorter the storage cycle, the less time it takes to access information in memory and the better the system performs. Current memory access cycles are on the order of a few to tens of ns (10⁻⁹ seconds). The I/O speed between memory and the host depends on the design of the I/O bus; this matters little for slow devices (e.g. keyboards, printers) but can be significant for high-speed devices. For example, the external transfer rate of current hard disks can reach 100MB/s to 133MB/s or more.
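At a quoted transfer rate, the time to move a block of data is just size divided by rate. A hypothetical helper, assuming the vendor convention of 10⁶ bytes per MB:

```python
def transfer_seconds(size_bytes, rate_mb_per_s):
    # Time to move size_bytes at rate_mb_per_s (1 MB/s = 10**6 bytes/s,
    # the decimal convention vendors usually quote for transfer rates).
    return size_bytes / (rate_mb_per_s * 1e6)

# Illustrative: moving 133,000,000 bytes at 133 MB/s takes 1 second.
```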

7. Data processing rate The data processing rate (Processing Data Rate, PDR) is calculated as PDR = L/R, where:
L = 0.85G + 0.15H + 0.4J + 0.15K
R = 0.85M + 0.09N + 0.06P

Here G is the number of bits per fixed-point instruction, H the number of bits per floating-point instruction, J the number of bits per fixed-point operand, and K the number of bits per floating-point operand; M is the average fixed-point addition time, N the average floating-point addition time, and P the average floating-point multiplication time. It is further stipulated that G > 20 bits and H > 30 bits; that fetching an instruction from main memory takes the same time as fetching a word; that instructions and operands are stored in the same main memory, with no indexing or indirect addressing; and that prefetching or parallel instruction fetch is allowed, in which case the average fetch time is used. PDR mainly measures the speed of the CPU and main memory; it does not take the cache, multifunction units, and so on into account, so it does not measure the overall speed of the machine.
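The PDR formula translates directly into code; the parameter values in the comment are hypothetical, chosen only to exercise the calculation:

```python
def pdr(G, H, J, K, M, N, P):
    """Processing Data Rate: PDR = L / R.

    L weighs instruction and operand lengths (bits);
    R weighs average operation times.
    """
    L = 0.85 * G + 0.15 * H + 0.4 * J + 0.15 * K
    R = 0.85 * M + 0.09 * N + 0.06 * P
    return L / R

# Illustrative: 32-bit fixed-point and 64-bit floating-point formats,
# with unit operation times, give L = 59.2 and R = 1.0.
```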

8. Response time Response time is the period from the occurrence of an event to its completion; its precise meaning varies with the application. A response time can be atomic or composed of several response times. As early as 1968, Robert B. Miller gave three classic thresholds for response time. 0.1 seconds: the user perceives no delay at all. 1.0 second: the limit for the user to feel the system is responding immediately; that is, when effective feedback arrives within 0.1 to 1 second, the user accepts it readily, and beyond that a delay is felt, though it remains acceptable as long as it does not exceed 10 seconds. 10 seconds: the limit for keeping the user's attention on the task. Beyond this, if no effective feedback arrives, the user will turn to other tasks while waiting for the computer to complete the current operation.
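Miller's three thresholds amount to a simple classification of a measured response time; the band labels below are illustrative wording, not Miller's own:

```python
def perceived_delay(seconds):
    """Classify a response time against Miller's (1968) thresholds."""
    if seconds <= 0.1:
        return "instantaneous"              # no perceptible delay
    if seconds <= 1.0:
        return "immediate response"         # accepted without complaint
    if seconds <= 10.0:
        return "delay felt but tolerated"   # attention still on the task
    return "attention lost"                 # user switches to other tasks
```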

9. RASIS features RASIS is the general term for Reliability, Availability, Serviceability, Integrity, and Security. Reliability is the probability that a computer system continues to operate correctly under specified working conditions for a specified working time; it is generally measured by Mean Time To Failure (MTTF) or Mean Time Between Failures (MTBF). Serviceability, or maintainability, is the ability of the system to be repaired as soon as possible after a failure occurs, generally expressed by the Mean Time To Repair (MTTR); it depends on the skill of the maintenance personnel and their familiarity with the system, and is also closely related to the system's own maintainability. Details about these features are given in Section 17.5.
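Availability is commonly derived from the two measures just mentioned, as the fraction of time the system is up. A minimal sketch of this standard steady-state formula:

```python
def availability(mtbf_hours, mttr_hours):
    # Steady-state availability: up-time fraction = MTBF / (MTBF + MTTR).
    return mtbf_hours / (mtbf_hours + mttr_hours)

# Illustrative: MTBF = 999 h and MTTR = 1 h give 99.9% ("three nines").
```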

10. Mean fault response time The mean fault response time (TAT, Turn Around Time) is the period from when a fault occurs to when the fault is confirmed repaired. This indicator reflects the service level: the shorter the mean fault response time, the smaller the impact on the user's system.

11. Compatibility Compatibility refers to the degree to which the hardware or software of one system can work with the hardware or software of another system, or with multiple operating systems: the coexistence of certain aspects between systems, that is, a certain degree of interchangeability. Compatibility is a broad concept covering data and file compatibility, program- and language-level compatibility, system-program compatibility, device compatibility, and upward and backward compatibility.

In addition to the above, there are other performance indicators: comprehensive indicators such as throughput and utilization; qualitative indicators such as confidentiality and scalability; and functional indicators such as word-processing capability, online transaction processing capability, I/O bus characteristics, and network characteristics.


Origin blog.csdn.net/Dove_Knowledge/article/details/125077485