SystemVerilog Verification Testbench Writing Guide, Chapter 1: Introduction to Verification

As a verification engineer, you should examine the design in as much detail as possible and root out every bug you can. Every bug found before tape-out is one less bug in the customer's hands.
SystemVerilog is a hardware verification language (HVL). Compared with a hardware description language (HDL), an HVL typically offers:
(1) Constrained-random test (CRT) stimulus generation.
(2) Functional coverage.
(3) Higher-level constructs, especially object-oriented programming (OOP).
(4) Multi-threading and communication between threads.
(5) Support for HDL data types.
(6) Tight integration with the event simulator, making it easy to control the design.
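As a small illustration of features (4) and (5), the sketch below runs two concurrent threads that exchange HDL-typed values through a mailbox. All names here are made up for illustration, not taken from any particular testbench.

```systemverilog
// Hypothetical sketch: a producer thread and a consumer thread
// communicating through a parameterized mailbox.
program automatic feature_demo;
  mailbox #(int) mbx = new();

  initial begin
    fork
      // Producer thread: generate five random values
      repeat (5) begin
        int v = $urandom_range(0, 255);
        mbx.put(v);
      end
      // Consumer thread: receive and display them
      repeat (5) begin
        int v;
        mbx.get(v);
        $display("received %0d", v);
      end
    join
  end
endprogram
```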
1.1 The verification process
What is verification? As a verification engineer, your goal is to make sure the device can accomplish its intended task, that is, that the design is an accurate representation of the specification.
The verification process runs in parallel with the design process. As a verification engineer, you must read the hardware specification, write a verification plan, and then create tests to check that the RTL code accurately implements all the features.
1.1.1 Testing at different levels
Bugs in a design show up at different levels. The following categorizes them from the bottom up.
Block level: bugs inside a single module, in the code the designer wrote.
Block boundaries: different designers may interpret the same specification differently, producing inconsistencies in the hardware logic where their blocks meet.
To simulate a single block, you need to create a testbench that models the surrounding blocks and generates stimulus. Block-level simulations run quickly, but creating the stimulus is tedious. When multiple blocks are integrated and the whole system is tested at the top level of the design under test, the blocks stimulate each other: simulation runs more slowly, but generating stimulus is much simpler. Your tests should try to keep all blocks active concurrently — all input and output ports busy, the processor crunching data, the caches filling — so that bugs in data movement and timing are sure to surface.
Error injection and handling. Once you have verified that the design under test performs all its intended functions, you also need to see how it behaves when an error occurs.
We can never prove that no bugs remain, so we need to keep trying new verification tactics.
1.1.2 The verification plan
The verification plan mainly covers the following methods: directed tests, random tests, assertions, hardware/software co-verification, hardware emulation, formal verification, the use of verification IP, and so on.
1.3 Functions of a basic testbench
The purpose of the testbench is to determine the correctness of the design under test (DUT). This involves the following steps:
(1) Generate stimulus.
(2) Apply the stimulus to the DUT.
(3) Capture the response.
(4) Check for correctness.
(5) Measure progress against the overall verification goals.
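The steps above can be sketched in a minimal testbench. The DUT here is a hypothetical combinational adder with inputs `a`, `b` and output `sum`; the module and port names are assumptions for illustration.

```systemverilog
// Minimal sketch of steps (1)-(4), assuming a hypothetical DUT "adder"
module tb;
  logic [7:0] a, b;
  logic [8:0] sum;

  adder dut (.a(a), .b(b), .sum(sum));   // the design under test

  initial begin
    repeat (10) begin
      // (1) Generate stimulus
      a = $urandom;
      b = $urandom;
      // (2) Apply the stimulus to the DUT, let it settle
      #1;
      // (3) Capture the response and (4) check correctness
      if (sum !== a + b)
        $error("mismatch: %0d + %0d != %0d", a, b, sum);
    end
    // (5) Progress against the verification goals is usually measured
    //     separately, with functional coverage (see section 1.8)
  end
endmodule
```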
1.4 Directed tests
A directed test applies specific, hand-written stimulus vectors to the design under test and simulates with them. After the simulation, you review the result files and waveforms by hand to make sure the design's behavior matches expectations. If the test passes, you move on to the next one. Given enough time and manpower, directed testing can complete all the tests required by the verification plan and reach 100% coverage of it.
1.5 Fundamentals of the methodology
The principles of the methodology are as follows:
(1) Constrained-random stimulus.
Directed tests find the bugs you expect in the design; random stimulus finds the bugs you did not expect. For complex designs, random stimulus is essential.
(2) Functional coverage.
When you use random stimulus, you need functional coverage to measure verification progress. Using automatically generated stimulus also requires a way to automatically predict the results, usually a scoreboard or a reference model. Building a testbench infrastructure that includes self-checking is a substantial amount of work.
(3) A layered testbench built from transactors.
A layered testbench breaks the problem down into small, manageable pieces, which helps control complexity.
(4) A common testbench shared by all tests.
The testbench infrastructure is common to all tests and should not need frequent modification. You only need to leave "hooks" in certain places so that an individual test can perform specific actions there, such as shaping the stimulus or injecting errors.
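One common way to implement such a hook is a virtual method in a shared class that individual tests override. The sketch below is a hedged illustration of that pattern; the class and method names (`Driver`, `pre_send`) are made up here, not part of any standard library.

```systemverilog
// A shared driver exposes a hook; most tests leave it alone
class Driver;
  // Hook: default implementation does nothing
  virtual task pre_send(ref logic [7:0] payload);
  endtask

  task send(logic [7:0] payload);
    pre_send(payload);      // test-specific behavior plugs in here
    // ... drive payload onto the bus ...
  endtask
endclass

// One particular test overrides the hook to inject a bit error
class ErrDriver extends Driver;
  virtual task pre_send(ref logic [7:0] payload);
    payload ^= (8'h1 << $urandom_range(0, 7)); // flip a random bit
  endtask
endclass
```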
(5) Test-specific code kept separate from the testbench.
Code specific to a single test must be kept out of the shared testbench; otherwise it would keep adding complexity to the infrastructure.
In short, building a testbench in this style takes much longer than building a traditional directed-test environment, especially the self-checking part, so it may be a long time before the first test runs. But every random test can then share the testbench, and a constrained-random testbench finds bugs far faster than a series of directed tests can.
As the rate of bug discovery drops, you should create new random constraints to explore new areas. The last few bugs may only be found with directed tests, but most bugs should turn up in random testing.
1.6 Constrained-random stimulus
We want the simulator to generate random stimulus, but not completely random stimulus. SystemVerilog lets you describe the format of the stimulus and then has the simulator produce values that satisfy the constraints. These values are sent to the design, and also to a high-level model responsible for predicting the expected results. The design's actual output is then compared with the predicted output.
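The sketch below shows the basic mechanism: `rand` fields plus a `constraint` block describe the legal stimulus, and `randomize()` asks the solver for values that satisfy it. The transaction fields and the legal address range are hypothetical choices for illustration.

```systemverilog
// A bus transaction whose format is described by constraints
class BusTran;
  rand bit [31:0] addr;
  rand bit [31:0] data;
  rand bit        write;

  // Assumed rule: only the lower 1 KB, word-aligned, is legal
  constraint c_addr { addr < 32'h400; addr[1:0] == 2'b00; }
endclass

module crt_demo;
  initial begin
    BusTran tr = new();
    repeat (5) begin
      if (!tr.randomize())
        $fatal(1, "randomization failed");
      $display("addr=%h data=%h write=%b", tr.addr, tr.data, tr.write);
    end
  end
endmodule
```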
1.7 What to randomize
To someone new to verification, randomization means the data fields. That kind of stimulus is the easiest to create: just call $random(). But random data gives a poor return when hunting for bugs. The bugs it finds are usually in the datapath, and are likely to be mere bit-level errors. What we really need to find are bugs in the control logic, for example in the following areas:
(1) Device and environment configuration
Many tests exercise the design fresh out of reset, or apply a fixed initialization vector set to bring it to a known state. In a real application environment, the longer the device runs, the more random its configuration becomes. You should randomize the configuration of the entire environment, including the length of the simulation, the number of devices, and how they are configured.
(2) Input data
When you hear "random stimulus", you probably think of picking a bus-write transaction or an ATM cell and filling its data fields with random values.
(3) Protocol exceptions, errors, and violations
The most likely reason a device crashes in the field is that some logic inside it hits an error it cannot recover from, so the device stops working. We want to simulate, as thoroughly as possible, the errors that can occur in real hardware, try those conditions one by one, and make sure the device keeps operating correctly.
Stimulate the hardware with illegal commands as well, and watch carefully for problems.
(4) Delays and synchronization
A block may work correctly for every possible stimulus arriving on a single interface, yet hidden bugs may surface when several inputs arrive at the same time.
(5) Parallel random tests
A random test consists of the testbench code plus a random seed. If you run the same test 50 times, each time with a different seed, you get 50 different sets of stimulus. Running the same test with multiple seeds increases coverage while reducing your workload.
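How the seed is passed in is simulator-specific (each tool has its own command-line option for it), so the sketch below instead reads a seed from a made-up `+seed=<n>` plusarg and seeds the root thread's random number generator directly.

```systemverilog
// Hedged sketch: +seed=<n> is an illustrative plusarg, not a
// standard simulator option
module seed_demo;
  initial begin
    int seed;
    if (!$value$plusargs("seed=%d", seed))
      seed = 1;                       // default seed if none given
    process::self().srandom(seed);    // seed this thread's RNG
    repeat (3) $display("value = %0d", $urandom_range(0, 99));
  end
endmodule
```

Running the same test with `+seed=1`, `+seed=2`, ... then produces a different stimulus stream per run.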
1.8 Functional coverage
The previous sections described how to create stimulus that can explore the entire space of possible inputs. With this approach your testbench will visit some areas frequently but take a long time to reach all possible states, and states that are unreachable will never be visited no matter how long you simulate. At this point you need to know which parts of the design have been exercised, so that you can check off the items in the verification plan.
Measuring and using functional coverage involves the following steps:
(1) Add code to the testbench to monitor the stimulus entering the device and the device's response to it, and from that determine which functionality has been exercised.
(2) Run the simulation several times, each time with a different seed.
(3) Merge the results of these runs into a single report.
(4) Analyze the results and decide how to generate new stimulus to reach the conditions and logic that have not yet been tested.
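Step (1) is usually done with covergroups. The sketch below samples a hypothetical 3-bit opcode and reports how many of its values were exercised; the signal and group names are made up for illustration.

```systemverilog
// Minimal functional-coverage sketch for a hypothetical opcode
module cov_demo;
  logic [2:0] opcode;

  covergroup cg_op;
    coverpoint opcode;     // automatic bins, one per opcode value
  endgroup
  cg_op cg = new();

  initial begin
    repeat (20) begin
      opcode = $urandom_range(0, 7);
      cg.sample();         // monitor the stimulus as it is applied
    end
    $display("opcode coverage = %0.1f%%", cg.get_coverage());
  end
endmodule
```

Merging the coverage databases from multiple seeded runs (step 3) is done with simulator-specific tools outside the testbench itself.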


Origin blog.csdn.net/weixin_45270982/article/details/96622379