What Does a Deep Understanding of IC Verification Look Like? A Ten-Year Veteran Engineer Explains (2)

Q: Should gate-level simulation be done?

A: If you use a design-service company, I cannot say whether you need to run the final netlist simulation with SDF yourself. If you do, it is best to run it on the netlist released after integration (conditions permitting, also run it after scan-chain insertion and after CTS is completed). And if you need VCD files for power analysis or to guide the P&R tool, then gate-level simulation is a must.
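As a rough sketch of what that gate-level run looks like (the file names and instance paths below are hypothetical), the gate-level tb back-annotates the SDF onto the netlist and dumps a VCD for the power-analysis tool:

```verilog
// Minimal sketch of a gate-level tb top: back-annotate the SDF and dump a
// VCD of the DUT for power analysis. File names and paths are hypothetical.
module tb_gate;
  reg clk = 1'b0;
  always #5 clk = ~clk;                       // example clock

  chip_top u_dut (.clk(clk) /* , ... remaining ports ... */);

  initial begin
    // Annotate post-route delays onto the netlist instance.
    $sdf_annotate("chip_top_postroute.sdf", u_dut, , "sdf_annotate.log", "MAXIMUM");
    // Dump the DUT activity for the power-analysis tool.
    $dumpfile("chip_top_power.vcd");
    $dumpvars(0, u_dut);
  end
endmodule
```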

If the design-service company is not responsible for bringing up the production test patterns, then gate-level simulation of the ATPG patterns and the like has to be done by you. Gate-level simulation is not a sign-off criterion, but it is recommended, because it frequently exposes problems. If you run gate-level simulation with SDF back-annotation, you need to disable timing checks on the asynchronous multi-stage DFF synchronizers (both VCS and NC have options for this; see "+optconfigfile" in the VCS manual and "+nctfile" for NC).

For SDF back-annotated simulation I recommend the three-step progression notimingcheck → no_notify → timing checks enabled, combined with an optconfigfile. When evaluating IP early on, individual modules may need their own gate-level environment; for example, for a CPU IP delivered as RTL where you run the flow yourself, gate-level simulation is usually needed (mainly to produce VCD or SAIF for power analysis). Testbench modification: synthesis and CTS change the clock and signal names, so the tb may need to be modified.

In addition, I recommend that the probe files in the tb sample on the opposite clock edge, again so that after SDF back-annotation the clock edge does not land right on a changing data vector. I also personally do not recommend reusing the automated tb for gate-level simulation: many of the internal signals your tb taps may have been renamed (or optimized away), which makes the tb painful to maintain at gate level, and some signals may be inverted even though the name stays the same, which will make your checker report false errors.
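A minimal sketch of what sampling on the opposite edge looks like for a tb probe (the hierarchy and signal names are hypothetical):

```verilog
// Sketch: sample DUT outputs on the falling edge, so that after SDF
// back-annotation the probe never samples right on the rising-edge data
// transition. Hierarchical paths and signal names are hypothetical.
module gate_probe;
  reg [31:0] probe_data;
  reg        probe_valid;

  always @(negedge tb_gate.u_dut.clk) begin
    probe_data  <= tb_gate.u_dut.data_out;
    probe_valid <= tb_gate.u_dut.data_valid;
  end
endmodule
```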

After all, you do not need to run many testcases at gate level; a handful of simulations that correspond one-to-one with the RTL simulations is enough, since gate-level simulation is not about function but about checking timing. If your design contains DFFs described without reset, X-propagation from their unknown initial state may make the gate-level simulation fail. The method I personally recommend: if there are too many of them, use a script to find all such DFFs in the relevant modules and generate a force-release file (note: this can greatly increase compile time, so avoid it if you can).
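The script-generated force-release file might look roughly like this (an `include` file for the tb; the flop instance paths are hypothetical). It pins the reset-less flops to a known value for the first cycles so X does not propagate:

```verilog
// Sketch of a generated force-release file for DFFs without reset.
// The flop instance paths would come from a script; these are hypothetical.
initial begin
  force tb_gate.u_dut.u_core.cnt_reg_0_.Q = 1'b0;
  force tb_gate.u_dut.u_core.cnt_reg_1_.Q = 1'b0;
  // ... one force per reset-less flop ...
  #100;                                     // hold through the first cycles
  release tb_gate.u_dut.u_core.cnt_reg_0_.Q;
  release tb_gate.u_dut.u_core.cnt_reg_1_.Q;
end
```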

Q: How should FPGA prototyping and simulation be ordered?

A: Schedule comes first, capability second. In principle, though, simulate first and then go to FPGA. Simulation can quickly clear out the basic bugs. If the schedule leaves enough time for simulation, push the simulation as far as you can and test more before turning to the FPGA (although if there are not many cases, the FPGA is, after all, faster to test on). Even when the FPGA is urgent, at least let simulation pass the most basic functions with a few direct testcases first. In the first FPGA build, logic may get optimized away because of mis-connections, floating signals, or inverted signals; simulation cannot always find all of these, so combine it with Leda or other lint tools.

Q: How do you reproduce in simulation a bug found on the FPGA?

A: First, guarantee that the configuration is consistent; consider building some internal tools for this. The simulation side needs to drive the register accesses through probes on the ports, and the FPGA side must be able to convert the firmware's configuration sequence into a text file. If the configuration is identical and the bug still cannot be reproduced, go to the logic analyzer and debug the timing. Of course, CDC problems are invisible in simulation. Personally I do not recommend FPGA netlist simulation; it is not worth the effort.
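As an illustration of "convert the firmware configuration into text", the tb could replay such a dump with a small task; the "addr data" file format and the apb_write() bus task below are hypothetical:

```verilog
// Sketch: replay a register-configuration dump captured from the FPGA
// firmware so the simulation uses exactly the same configuration.
// The text format and the apb_write() bus task are hypothetical.
task automatic replay_cfg(input string fname);
  integer fd, code;
  reg [31:0] addr, data;
  fd = $fopen(fname, "r");
  if (fd == 0) $fatal(1, "cannot open %s", fname);
  while (!$feof(fd)) begin
    code = $fscanf(fd, "%h %h\n", addr, data);
    if (code == 2) apb_write(addr, data);   // same write sequence as firmware
  end
  $fclose(fd);
endtask
```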

Q: How do you verify the parts that the FPGA cannot cover?

A: These are the functions related to PAD_Mux (Test_mux), Clkrst, the Power-management-unit, and the high frequencies the FPGA cannot reach. The Clkrst part is mainly PLL configuration, clock-gating, dividers, and soft/hard reset; seen from the test points it is quite clear-cut. If the RTL is only slightly modified, you can consider skipping overly complicated verification (but the clkrst module may contain control logic or state machines, e.g. SDRAM frequency switching generally needs a state machine to control it, and that requires very careful verification). For PAD_mux I personally recommend an automated flow: the code style is very fixed, so scripts can generate both the RTL and the testcases (such testcases are generally just a pile of forces). For the PMU I recommend reading the VCS MVSIM documentation, which is very clear (it still has to be used together with the static verification tool MVRC). If MVSIM is not available, you can consider using VCS's $power / $isolate.
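The "pile of forces" style of PAD_mux testcase might look roughly like this (the mux-select register, pad name, and select encoding are all hypothetical):

```verilog
// Sketch of a script-generated PAD_Mux testcase: force the mux select,
// drive the internal function signal, and check that it reaches the pad.
// All paths, names, and the select encoding are hypothetical.
initial begin
  force tb.u_dut.u_pad_mux.gpio5_sel = 2'b01;   // route uart0_txd to PAD_GPIO5
  force tb.u_dut.u_uart0.txd = 1'b1;
  #10;
  if (tb.PAD_GPIO5 !== 1'b1) $error("PAD_GPIO5: expected 1, got %b", tb.PAD_GPIO5);
  force tb.u_dut.u_uart0.txd = 1'b0;
  #10;
  if (tb.PAD_GPIO5 !== 1'b0) $error("PAD_GPIO5: expected 0, got %b", tb.PAD_GPIO5);
  release tb.u_dut.u_uart0.txd;
end
```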

Q: How do you verify firmware that has been frozen into ROM?

A: Personally I do not recommend having simulation cover the firmware itself, but covering the differences between FPGA and ASIC is important. The major flows need to be covered; the remaining details can be guaranteed by the FPGA.

Q: Architecture evaluation?

A: I do not have much experience here, so just a few examples. Is your bus topology reasonable? Is the efficiency (arbitration mechanism) of the memory controller adequate for your application? Which type of cache should be used, and at what size? Is each module's FIFO depth sufficient (error injection can measure this)? How many MIPS does the algorithm need (instruction-set simulators such as the one shipped with RVDS can give an answer, but the simulator must account for memory-access latency)? If the software contains a lot of memcpy, you need to simulate memcpy efficiency once the system is running. If there is no manpower for ESL tools (such as Carbon's CMS), it is advisable to do this on the verification platform (of course, once a big problem surfaces, overturning the architecture is very painful).
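For the FIFO-depth question, one way to measure it on the verification platform is a small checker that asserts no overflow and records the high-water mark during an error-injection run (signal names are hypothetical):

```verilog
// Sketch: flag any push into a full FIFO and track the high-water mark so
// the required depth can be read out after the run. Names are hypothetical.
module fifo_depth_check #(parameter DEPTH = 16) (
  input clk, input rst_n, input push,
  input [$clog2(DEPTH):0] level
);
  int unsigned high_water = 0;

  always @(posedge clk) if (level > high_water) high_water = level;

  a_no_overflow: assert property (@(posedge clk) disable iff (!rst_n)
                                  (level == DEPTH) |-> !push)
    else $error("FIFO overflow: push while full");

  final $display("FIFO high-water mark = %0d of %0d", high_water, DEPTH);
endmodule
```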

Q: Which resources should be saved?

A: The first thing to save is, of course, headcount: people cost far more than compute and storage resources. The only way to save on headcount is to improve the technology and raise the level of automation (low-balling salary packages is bad karma, not a legitimate method). To reduce disk requirements (when necessary): share simv/simv.daidir and csrc (including automatic disk-space cleanup during regression); consider whether stimulus data can be generated on the fly rather than all at once (this matters more for communications designs, whose floating-point stimulus data often eats gigabytes in a short time); and set per-person and per-project disk quotas so that a few individuals cannot exhaust the space. To reduce the number of compiles (necessary in SoC projects, where testcases are firmware-based): use parallel or separate compilation, VMM-style tests, cover multiple function points in one testcase, and do not recompile just to change the fsdb/vpd dump level (fsdb has commands for this; vpd may require UCLI).
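One way to keep waveform dumping entirely a run-time decision (so no recompile is needed) is to gate it with plusargs; the $fsdbDump* tasks are the usual Verdi ones, and the plusarg names here are hypothetical:

```verilog
// Sketch (placed in the tb top): dumping is switched on and scoped purely
// at run time, e.g.  ./simv +dump_fsdb +dump_depth=2
// +dump_fsdb and +dump_depth are hypothetical plusarg names.
integer dump_depth;
initial begin
  if ($test$plusargs("dump_fsdb")) begin
    if (!$value$plusargs("dump_depth=%d", dump_depth)) dump_depth = 0; // 0 = all
    $fsdbDumpfile("waves.fsdb");
    $fsdbDumpvars(dump_depth, tb.u_dut);
  end
end
```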

Q: What should I do if the design scale is large and the compilation is slow?

A: Sometimes the design is so large that compilation is very slow, but in many SoC projects the functional modules are isolated from one another by the bus (even where there are direct hardware connections between modules, you can consider replacing them with simulation models). When simulating one functional module, you can stub the irrelevant modules out as dummies. This introduces a new problem, though: maintaining the file lists. With this dummy approach, file-list maintenance becomes a key point of the tb; try to avoid maintaining too many file lists, and I personally recommend using scripts to generate the required file lists automatically. I also recommend using absolute paths in the simulation file lists (this avoids pulling in the wrong files when someone else is debugging, and lets you use different working directories). Keep relative paths in CVS, and let the script convert them to absolute paths automatically.
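A "dummy" is just an empty shell with the same port list as the real module, and the file list decides which one gets compiled; a hypothetical example:

```verilog
// Sketch: an empty-shell stand-in for an irrelevant functional module.
// The ports match the real RTL so the SoC top still elaborates, but there
// is no logic inside. Module and port names are hypothetical.
module video_dec (
  input         clk,
  input         rst_n,
  input  [31:0] cfg_wdata,
  output [31:0] cfg_rdata,
  output        irq
);
  assign cfg_rdata = 32'h0;   // tie outputs to quiet values
  assign irq       = 1'b0;
endmodule
```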

Q: Compile-time option or run-time option?

A: To reduce the number of compiles, use a run-time option wherever a run-time option will do, for example $value$plusargs and $test$plusargs.
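A minimal sketch of pushing such choices from compile time to run time (the plusarg names are hypothetical):

```verilog
// Sketch (in the tb top): pick the testcase and a loop count at run time
// instead of with compile-time defines, e.g.
//   ./simv +testname=dma_basic +loop_num=50 +verbose
// +testname, +loop_num and +verbose are hypothetical plusargs.
string  testname;
integer loop_num;

initial begin
  if (!$value$plusargs("testname=%s", testname)) testname = "sanity";
  if (!$value$plusargs("loop_num=%d", loop_num)) loop_num = 10;
  if ($test$plusargs("verbose"))
    $display("running %s for %0d loops", testname, loop_num);
end
```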

Q: Who should write the assertions?

A: It is recommended that both the RTL designer and the IC verification engineer write them. Assertions describing internal implementation details are written by the RTL designer; the timing between modules is written by the IC verification engineer.
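As an example of the second kind, here is the sort of inter-module timing assertion the verification engineer might own; the signal names and the 8-cycle bound are hypothetical:

```verilog
// Sketch of an inter-module handshake assertion: every request must be
// acknowledged within 1 to 8 cycles. Names and the bound are hypothetical.
module req_ack_check (input clk, input rst_n, input req, input ack);

  a_req_gets_ack: assert property (
    @(posedge clk) disable iff (!rst_n)
      $rose(req) |-> ##[1:8] ack
  ) else $error("req not acknowledged within 8 cycles");

endmodule
```

Such a checker can be attached to the design with bind, so neither side has to edit the RTL files themselves.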

If you want to know more, follow IC Xiuzhenyuan; the series will continue in the next installment. You can also get the full document directly: A Deep Understanding of IC Verification.
