The research in this project focuses on diagnostic test generation methods for non-classical faults, such as transition delay and bridging faults, which represent the fault behaviors of modern VLSI devices. Although the existing tools for VLSI circuit testing incorporate years of research, they deal only with classical stuck-at faults. A stuck-at fault is detectable by a single input pattern; detection of a transition fault is more complex because it requires a sequence of two patterns. Moreover, the existing tools find tests that detect faults but may not diagnose them, i.e., identify the exact cause of a failure. The traditional metric used in testing is fault coverage. This research investigates a new metric, termed diagnostic coverage, to measure the effectiveness of tests in their role of fault diagnosis. For example, to distinguish between two faults one must use a test that detects one fault but not the other; such a test is called an exclusive test. This research provides new algorithms to generate tests for diagnosis of non-classical faults while allowing the use of the available testing tools.
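The distinction between fault coverage and diagnostic coverage can be illustrated with a small sketch. This is not the project's tool, only a minimal illustration under an assumed definition: a fault is counted as diagnosed only if no other fault shares its detection signature (the set of tests that detect it), and the fault dictionary below is hypothetical.

```python
# Illustrative sketch (hypothetical data, not the project's simulator):
# fault coverage vs. a simple diagnostic coverage from a fault dictionary.

def coverage_metrics(detects):
    """detects[f] = set of tests that detect fault f."""
    total = len(detects)
    detected = [f for f, tests in detects.items() if tests]
    fault_coverage = len(detected) / total
    # Group detected faults by detection signature; a fault is uniquely
    # diagnosed only if no other fault shares its signature.
    groups = {}
    for f in detected:
        groups.setdefault(frozenset(detects[f]), []).append(f)
    unique = sum(1 for g in groups.values() if len(g) == 1)
    diagnostic_coverage = unique / total
    return fault_coverage, diagnostic_coverage

# Here t1 is an exclusive test for the pair (f1, f2): it detects f1 but not f2.
# f2 and f3 share the signature {t2}, so neither is uniquely identified.
faults = {"f1": {"t1", "t2"}, "f2": {"t2"}, "f3": {"t2"}, "f4": set()}
fc, dc = coverage_metrics(faults)
print(fc, dc)  # 0.75 0.25
```

The example shows why high fault coverage does not imply high diagnostic coverage: three of four faults are detected, but only one is uniquely identifiable by the given tests.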
Moore's law prediction of the number of devices on a VLSI chip doubling every eighteen months follows the trend of minimum cost per transistor. The enabling technology driver is the shrinking transistor geometry, allowing higher density and speed. Nanometer geometries, however, lead to greater process variations. Several characteristics separate the testing of modern VLSI technologies from those of the past. First, the complex fault mechanisms are no longer represented by classical stuck-at faults. Second, increased process variation impacts yield, requiring testing to be diagnostic-oriented; faults must be identified and their causes eliminated. Third, increasing transistor density aggravates test power, time, and cost issues. This project addresses fault modeling, diagnosis, and test time versus power issues. Besides a large number of journal and refereed conference papers, the project produced five doctoral and three master's dissertations; six of the participating students are now contributing to test-related activities in the industry. Principal outcomes and their respective intellectual merits and broader impacts are listed below under three subareas.

Diagnostic Testing

In traditional pass/fail testing, if a manufactured very large scale integration (VLSI) circuit is found to be bad, it may not be tested further to determine what exactly is wrong. Thus, existing test algorithms and tools focus on detection (finding whether any fault occurred) rather than identification or diagnosis (finding which fault occurred). In present-day nanometer technologies, device geometries are not much larger than the manufacturing tolerances. Hence, it is important to diagnose the exact faults so that their causes can be eliminated to enhance the production yield of devices.
This research advances the test methodology from detection to diagnosis. Prevailing industry tools support test generation for detecting a given fault, and fault simulation to determine the coverage of a fault set by given tests. This research provided new algorithms to generate tests that distinguish between any pair of faults, and diagnostic fault simulation to determine how well the faults in a given set can be uniquely identified by given tests. Similar to fault coverage, which has long been used as a test quality metric, we define a new diagnostic coverage metric that is measured by diagnostic fault simulation.

Non-Classical Faults

A stuck-at fault in a digital circuit means that a faulty signal is permanently fixed at a logic (0 or 1) state. This is the industry-standard classical fault model. Many actual defects in nanometer devices, characterized by high density and speed, do not conform to this fault model, and the semiconductor industry recognizes the need for a change. A transition fault, in which a faulty signal is either too slow to rise from 0 to 1 or too slow to fall from 1 to 0, is a non-classical fault model that can represent the faulty timing behavior. Although the transition fault model has gained popularity, algorithms and tools for its diagnosis are lacking. The present research attempts to satisfy this need. 3D-stacked VLSI devices are an emerging technology: chips containing processors and memories are bonded on top of each other, allowing significant benefits of compactness, high speed, and low cost. Signals pass between chips by through-silicon via (TSV) conductors, and a typical 3D stack may contain ten thousand or more TSVs. Testing for their defects, which are resistive in nature, is a critical problem. Our research provides algorithms for optimized TSV testing; specifically, tests are selected and sequenced through mathematical optimization using integer linear programming (ILP).
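The core of such test selection is a covering problem: choose the fewest tests whose combined detection capability covers every targeted defect. The project's formulation is an ILP; the sketch below only illustrates the underlying problem on a tiny hypothetical fault-detection table, solved by exhaustive search instead of an ILP solver.

```python
# Illustrative sketch only: minimum-size TSV test-set selection. The research
# formulates this as an ILP; this toy instance is solved by brute force.
from itertools import combinations

# Hypothetical table: covers[t] = set of TSV defects that test t detects.
covers = {
    "t1": {"d1", "d2"},
    "t2": {"d2", "d3"},
    "t3": {"d1", "d3", "d4"},
    "t4": {"d4"},
}
all_defects = set().union(*covers.values())

def min_test_set(covers, targets):
    """Smallest subset of tests whose combined coverage includes all targets."""
    tests = list(covers)
    for k in range(1, len(tests) + 1):          # try subsets of growing size
        for subset in combinations(tests, k):
            if set().union(*(covers[t] for t in subset)) >= targets:
                return set(subset)
    return None

print(sorted(min_test_set(covers, all_defects)))  # ['t1', 't3']
```

An ILP solver reaches the same answer on large instances by minimizing the sum of 0/1 test-selection variables subject to one covering constraint per defect; the brute-force search here is exponential and serves only to make the objective concrete.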
Test Power and Time

Circuits are designed to consume up to a certain maximum power. A test that exceeds that maximum must be run at a slower clock, which increases the test time and hence the test cost. Power consumption in a digital circuit rises linearly with clock frequency and quadratically with supply voltage. In our research, the voltage is reduced such that the power-constrained test clock is sped up in spite of the increased delay in the circuit. However, a large reduction in voltage makes the circuit too slow, requiring the clock to slow down. We determine an optimum supply voltage allowing the fastest clock speed for a given power limit. New test scheduling algorithms use dynamic voltage and frequency scaling (DVFS) for significantly reduced test time of large system-on-chip (SoC) devices. Another outcome of this research is an entirely new aperiodic test method: the clock period during test is dynamically controlled by the automatic test equipment (ATE) to run the test at the fastest possible speed without exceeding the power limit of the circuit. Our test program dynamically varies the clock rate using pre-stored simulation data on power consumption. Test time reductions of up to 50% are demonstrated, and greater reductions may be possible.
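The trade-off behind the optimum supply voltage can be sketched numerically. This is not the project's algorithm, only an illustration under common assumptions: dynamic power modeled as P = C·V²·f, gate speed by the alpha-power law, and all constants (C, k, Vt, alpha, the power limit) hypothetical. The achievable test clock is the smaller of what the logic delay allows and what the power budget allows; lowering V relaxes the power constraint but slows the logic, so the maximum of the minimum sits near their crossover.

```python
# Illustrative sketch (hypothetical constants): find the supply voltage that
# maximizes the power-constrained test clock frequency.

def best_voltage(P_limit, C=1.0, k=2.0, Vt=0.3, alpha=1.3):
    best = (0.0, 0.0)  # (frequency, voltage)
    v = Vt + 0.01
    while v <= 1.2:                               # sweep supply voltage
        f_circuit = k * (v - Vt) ** alpha / v     # fastest the logic can run
        f_power = P_limit / (C * v * v)           # fastest the power limit allows
        f = min(f_circuit, f_power)               # achievable test clock
        if f > best[0]:
            best = (f, v)
        v += 0.001
    return best

f, v = best_voltage(P_limit=0.5)
# Below the optimum voltage the circuit delay limits the clock; above it the
# power budget does. The best frequency occurs where the two curves cross.
```

The aperiodic test method exploits the same power model cycle by cycle: where pre-stored simulation data show a low-activity cycle, the ATE shortens that clock period, rather than holding the whole test to the period dictated by the worst-case cycle.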