In a traditional verification methodology, the verification engineer goes through the following steps to verify a DUT (block, chip, or system):
1. Create a test plan that contains directed tests for the DUT based on the engineer's knowledge of the design specification.
2. Write the directed tests. The engineer typically spends a great deal of manual effort writing these tests. Because the DUT is still evolving at this point, it is impossible to predict where the bugs might be, so a given directed test may not uncover any bugs at all. Moreover, many directed tests overlap with one another, testing the same functionality.
3. Run the directed tests to find bugs in the DUT. Since the directed tests verify specific scenarios, only bugs pertaining to those scenarios are detected; other scenarios are left uncovered.
4. Add more directed tests if necessary to cover new scenarios. The engineer spends more manual effort thinking up new scenarios that need to be tested.
5. Run these additional directed tests to find more bugs in the DUT. Steps 4 and 5 are repeated until the engineer is convinced that enough directed testing has been done. However, the measure of adequacy is still very ad hoc.
6. Initiate random testing with some form of random stimulus generator after multiple iterations of steps 4 and 5 (a sketch contrasting constrained-random stimulus with a directed test follows this list).
7. Random testing uncovers bugs that were not detected by the directed tests, often catching corner cases that the verification engineer missed. These bugs are therefore fixed at a very late stage of the verification process.
8. Initiate functional coverage after multiple iterations of steps 6 and 7. Functional coverage is run mainly in post-processing mode and reports the values of interesting items and state transitions. Random simulations are run until the desired functional coverage levels are achieved.
9. After steps 1 through 8 are performed and the coverage results are satisfactory, the verification is considered complete.
Figure 14-1 shows the traditional verification process outlined above.
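To illustrate the difference between steps 2 and 6, the following minimal SystemVerilog sketch contrasts a hand-written directed test with constrained-random stimulus. The transaction fields, the constraint, and the drive_txn() task are assumptions made for this sketch; they are not taken from the text or from Figure 14-1.

```systemverilog
// Hypothetical bus-write transaction; fields and constraint are illustrative only.
class bus_txn;
  rand bit [7:0]  addr;
  rand bit [31:0] data;
  rand bit [3:0]  burst_len;
  constraint legal_c { burst_len inside {[1:8]}; }  // keep stimulus legal
endclass

module tb;
  // Stand-in for the code that would actually drive the DUT interface.
  task automatic drive_txn(bus_txn t);
    $display("addr=%0h data=%0h len=%0d", t.addr, t.data, t.burst_len);
  endtask

  initial begin
    bus_txn t = new();

    // Step 2 style: every field hand-picked for one specific scenario.
    t.addr      = 8'h10;
    t.data      = 32'hDEAD_BEEF;
    t.burst_len = 4;
    drive_txn(t);

    // Step 6 style: the constraint solver picks legal values, reaching
    // scenarios the engineer never thought to write by hand.
    repeat (100) begin
      if (!t.randomize()) $fatal(1, "randomize() failed");
      drive_txn(t);
    end
  end
endmodule
```

The directed portion exercises exactly one scenario per block of hand-written code, while the constrained-random loop produces many legal scenarios from a single description, which is the productivity gap the rest of this section discusses.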
The traditional verification methodology shown in Figure 14-1 has key productivity and quality issues:
The verification effort starts with directed testing. Directed tests require intense manual effort, are time-consuming to create, and are extremely difficult to maintain. Thus, rather than focusing on what areas to verify, the engineer spends most of the time figuring out how to write the directed tests.
At this stage the DUT is rapidly changing in both functionality and implementation. Therefore, the verification engineer might focus on writing directed tests for areas that are not prone to bugs and hence waste valuable verification time.
There is no quantifiable metric of how much directed testing is enough. Verification engineers typically stop when they have written a large number of tests and believe they have adequate coverage, or when they run out of time. This process is unstructured and ad hoc, and it reduces the quality of the DUT verification.
Random simulation uncovers bugs that are not caught by directed tests. However, even if the directed tests had not been run, random simulation would also have caught most of the bugs they found. The verification engineer therefore spends a great deal of manual effort finding bugs that random simulation would have caught easily.
At some point, the engineer is satisfied that random simulation is finding no more bugs. Often, this decision is based simply on the fact that random simulation has run for a certain number of days without hitting a bug. Although this means the DUT is reasonably stable, it does not guarantee that all interesting corner cases have been covered.
It is very hard to control random simulation. A lot of time is spent tuning the random simulator to catch corner cases.
Coverage is run towards the end of the verification cycle. It is treated mainly as a checklist item to confirm that decent coverage has been achieved, rather than used as a feedback mechanism to tune the quality of the tests (see the coverage sketch after this list).
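In contrast to coverage used only as an end-of-cycle checklist, the sketch below shows functional coverage collected while the simulation runs, so intermediate results (values of interesting items and state transitions, as in step 8) can be queried and used to steer further stimulus. The covergroup, the state machine, and the bins are hypothetical examples, not taken from the text or from Figure 14-1.

```systemverilog
// Hypothetical coverage monitor; states, address bins, and the transition
// bin are assumptions made for this sketch.
typedef enum {IDLE, REQ, GRANT, XFER} state_e;

class cov_monitor;
  bit [7:0] addr;
  state_e   state;

  covergroup cg;
    // Values of interesting items: which address ranges have been exercised.
    cp_addr : coverpoint addr {
      bins low  = {[8'h00:8'h3F]};
      bins high = {[8'hC0:8'hFF]};
    }
    // State transitions: has the REQ -> GRANT -> XFER path been seen?
    cp_state : coverpoint state {
      bins grant_path = (REQ => GRANT => XFER);
    }
  endgroup

  function new();
    cg = new();
  endfunction

  // Called by a monitor on every observed transaction. Because coverage is
  // sampled during the run, cg.get_coverage() can be checked mid-simulation
  // and used to retarget stimulus, instead of waiting for a post-processed
  // end-of-run report.
  function void sample_txn(bit [7:0] a, state_e s);
    addr  = a;
    state = s;
    cg.sample();
  endfunction
endclass
```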
Thus, it is clear that many aspects of the traditional verification methodology are inefficient and very manual in nature. The problems with this approach lead to a great deal of wasted verification time and suboptimal verification quality.
Although there are obvious problems with the traditional verification methodology, verification projects have persisted with this approach over the years. There are some important reasons for this dependence:
Commercial random simulation tools were not easily available. Moreover, these tools were not easily customizable.
Random simulation tools were often developed in-house. However, these tools required a massive maintenance effort and were therefore affordable only to large companies.
Random simulation tools were not integrated into the verification environment. Therefore, it took a lot of time and effort for verification engineers to integrate these tools into their environment.
Random simulation tools did not afford a high degree of control. Directed-random (controlled-random) simulation tools were not available (a sketch of this kind of control follows this list).
Coverage tools were not integrated into the environment. A verification engineer needed to spend a lot of time and effort integrating the coverage tool.
Coverage tools provided only post-processed reports. The process of taking a coverage report and using that feedback to generate new tests was not smooth.
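By contrast, modern constrained-random (directed-random) environments let the engineer bias random stimulus toward suspected corner cases without writing a directed test for each one. The packet fields and the distribution weights below are purely illustrative assumptions.

```systemverilog
// Hypothetical packet class; field names and weights are assumptions chosen
// only to illustrate directed-random control.
class pkt;
  rand bit [7:0] len;
  rand bit       parity_err;

  // Bias the solver toward boundary lengths and occasional parity errors,
  // while still covering the ordinary cases most of the time.
  constraint corner_c {
    len dist { 0 := 5, 255 := 5, [1:254] :/ 90 };
    parity_err dist { 1 := 10, 0 := 90 };
  }
endclass
```

A test can further tighten or relax such constraints, for example by turning them on and off with constraint_mode() or by layering additional constraints in a derived class; this is the kind of control the early in-house tools lacked.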
Due to these problems, verification engineers could never take advantage of a coverage-driven functional verification approach that can provide greater productivity and higher verification quality. Most of their time was spent integrating tools and working around inefficient verification flows rather than focusing on DUT verification.