Inferring Input Grammars from Code with Symbolic Parsing
Generating effective test inputs for a software system requires that these inputs be valid, as they will otherwise be rejected without reaching actual functionality. In the absence of a specification for the input language, common test generation techniques rely on sample inputs, which are abstracted into matching grammars and/or evolved guided by test coverage. However, if sample inputs miss features of the input language, the chances of generating these features randomly are slim. In this work, we present the first technique for symbolically and automatically mining input grammars from the code of recursive descent parsers. So far, the complexity of parsers has made such a symbolic analysis challenging to impossible. Our realization of the symbolic parsing technique overcomes these challenges by (1) associating each parser function parse_ELEM() with a nonterminal <ELEM>; (2) limiting recursive calls and loop iterations, such that a symbolic analysis of parse_ELEM() needs to consider only a finite number of paths; and (3) creating, for each path, an expansion alternative for <ELEM>. Being purely static, symbolic parsing does not require seed inputs; as it mitigates path explosion, it scales to complex parsers. Our evaluation shows symbolic parsing to be highly accurate. Applied to parsers for complex languages such as TINY-C or JSON, our STALAGMITE implementation extracts grammars with an accuracy of 99–100%, widely improving over the state of the art despite requiring only the program code and no input samples. The resulting grammars cover the entire input space, allowing for comprehensive and effective test generation, reverse engineering, and documentation.
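As a toy illustration of the mapping the abstract describes, consider the hand-written sketch below: a two-function recursive descent parser, and the grammar that a symbolic path enumeration of its parse functions would yield (one nonterminal per parse function, one expansion alternative per bounded path). The parser, grammar, and all names are ours for illustration, not STALAGMITE output.

```python
import random

# Toy recursive-descent parser: one parse_ELEM() function per construct.
# Language: <expr> ::= <num> | <num> "+" <expr>; <num> ::= digit+
def parse_expr(s, i=0):
    i = parse_num(s, i)
    if i < len(s) and s[i] == '+':
        i = parse_expr(s, i + 1)
    return i

def parse_num(s, i):
    if i >= len(s) or not s[i].isdigit():
        raise ValueError("expected digit at %d" % i)
    while i < len(s) and s[i].isdigit():
        i += 1
    return i

# Grammar that a symbolic path enumeration of the two functions above
# would produce: one nonterminal per parse function, one alternative per
# feasible path, with the digit loop bounded to at most two iterations.
GRAMMAR = {
    "<expr>": [["<num>"], ["<num>", "+", "<expr>"]],
    "<num>": [["<digit>"], ["<digit>", "<digit>"]],
    "<digit>": [[d] for d in "0123456789"],
}

def produce(symbol, depth=0):
    """Randomly expand `symbol`, forcing short expansions at depth."""
    if symbol not in GRAMMAR:
        return symbol  # terminal
    alts = GRAMMAR[symbol]
    alt = random.choice(alts if depth < 4 else alts[:1])
    return "".join(produce(s, depth + 1) for s in alt)

random.seed(0)
samples = [produce("<expr>") for _ in range(20)]
# Every string derived from the mined grammar is fully consumed
# (and thus accepted) by the parser.
assert all(parse_expr(s) == len(s) for s in samples)
```

Since every alternative corresponds to a feasible path through a parse function, strings derived from the mined grammar are accepted by the parser by construction, which is what makes such grammars useful for test generation.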
- Conference Article
- 10.5555/304032.304167
- Nov 8, 1992
We address the problem of generating tests for delay faults in non-scan and partial scan synchronous sequential circuits. A recently proposed transition fault model for sequential circuits [1] is considered. In this fault model, a transition fault is characterized by the fault site, the fault type and the fault size. The fault type is either slow-to-rise or slow-to-fall. The fault size is specified in units of clock cycles. It was observed that neither a comprehensive functional verification sequence nor a sequence with a high stuck-at fault coverage gives a high transition fault coverage for sequential circuits. Deterministic test generation for delay faults is required to raise the coverage to a reasonable level. In this paper, we present a test generation algorithm for this fault model. With the use of a novel fault injection technique, tests for transition faults can be generated by using a stuck-at fault test generation algorithm with some modifications. The new test generator DATEST (Delay fault Automatic TEST generator for sequential circuits) has been integrated with our sequential circuit delay fault simulator, TFSIM. Experimental results for ISCAS-89 benchmark circuits and some AT&T designs are presented. For partial scan circuits, we first describe a test application scheme for detecting transition faults. Modifications on test generation and fault simulation algorithms required for partial scan circuits are presented. Experimental results on large benchmark circuits show that a high transition fault coverage can be achieved for partial scan circuits designed using the cycle-breaking approach. A high stuck-at fault coverage does not imply a high coverage for transition faults, let alone other delay faults. Therefore, we believe that deterministic delay test generation is required even if a set of comprehensive functional verification vectors is available. In this paper, we address the problem of generating tests for delay faults in non-scan or partial scan sequential circuits.
We assume that the sequential circuit under test is synchronous. Multiple clocks and multiple phases are allowed. The entire input sequence is applied at a rated speed. At the end of each vector and clock application, the primary outputs are observed. Therefore, the outputs are also observed at a fixed rate. Under this test application scheme, which is a typical scheme used for applying functional vectors, fault sizes of delay defects must be considered. Different excess delays caused by a fault will result in completely different logic behaviors. A recently proposed transition fault model for sequential circuits [1] that takes the fault size into account is used in this work. It is a generalization of the combinational circuit transition fault model [5]. A brief description of the model will be given in the next section. In this work, we consider only non-robust tests for three reasons: (1) Fault simulation and test generation methods for non-robust gate delay faults can be made fully compatible with existing stuck-fault testing methods. With a novel fault injection technique, we transform the fault simulation/test generation process for non-robust delay faults into a fault simulation/test generation process for stuck-at faults. (2) The computational complexity is lower, such that very large circuits can be handled. (3) For many sequential circuits, a large number of faults are not robustly testable under the normal test application scheme. Therefore, the robust delay fault coverage may be very low and too pessimistic.
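To make the notion of a fault size in clock cycles concrete, the sketch below models a slow-to-rise transition fault on a single signal observed once per clock: each rising transition is delayed by the fault size, while falling transitions are unaffected. This is our simplified reading of the fault model, not the paper's exact formulation.

```python
def slow_to_rise(values, size):
    """Apply a slow-to-rise transition fault of `size` clock cycles to a
    cycle-by-cycle signal (list of 0/1 values): every 0->1 transition is
    delayed by `size` cycles; 1->0 transitions pass through unchanged.
    Illustrative model only; names and encoding are ours."""
    faulty = []
    rise_at = None  # cycle index of the pending rising transition
    prev = values[0]
    for t, v in enumerate(values):
        if t > 0 and prev == 0 and v == 1:
            rise_at = t
        if v == 1 and rise_at is not None and t - rise_at < size:
            faulty.append(0)  # the rise has not propagated yet
        else:
            faulty.append(v)
            if v == 0:
                rise_at = None  # signal fell; no pending rise
        prev = v
    return faulty

# A fault of size 2 delays the rise by two observed clock cycles:
assert slow_to_rise([0, 1, 1, 1, 0], 2) == [0, 0, 0, 1, 0]
```

A pulse shorter than the fault size disappears entirely in the faulty circuit, which is why different fault sizes yield completely different logic behaviors, as the abstract notes.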
- Research Article
79
- 10.1007/bf00971937
- Feb 1, 1993
- Journal of Electronic Testing
A synchronizing sequence drives a circuit from an arbitrary power-up state into a unique state. Test generation on a circuit without a reset state can be much simplified if the circuit has a synchronizing sequence. In this article, a framework and algorithms for test generation based on the multiple observation time strategy are developed by taking advantage of synchronizing sequences. Though it has been shown that the multiple observation time strategy can provide a higher fault coverage than the conventional single observation time strategy, until now the multiple observation time strategy has required a very complex tester operation model (referred to as the Multiple Observation time-Multiple Reference (MOMR) strategy in the sequel) compared to the conventional tester operation model. The overhead of MOMR, exponential in the worst case, has prevented widespread use of the method. However, when a circuit is synchronizable, test generation can employ the multiple observation time strategy and provide better fault coverages without resorting to MOMR. This testing strategy is referred to as the Multiple Observation time-Single Reference (MOSR) strategy. We prove in this article that the same fault coverage that could be achieved under MOMR can be obtained under MOSR if the circuit under test generation is synchronizable. We investigate how a synchronizing sequence simplifies test generation and allows the use of MOSR under the multiple observation time strategy. The experimental results show that higher fault coverages and large savings in CPU time can be achieved by the proposed framework and algorithms over both existing single observation time strategy methods and other multiple observation time strategy methods.
- Conference Article
4
- 10.1109/icccnt.2013.6726500
- Jul 1, 2013
The test generation problem for circuits is known to be NP-hard. Efficient techniques for test generation are essential in order to reduce the test generation time. Test patterns were generated using ATPG (Automatic Test Pattern Generation) and faults were inserted in the netlist file generated using DFT (Design for Test). Here ATPG is achieved using the combination of Design Compiler and Tetramax. Fault coverage and test patterns were generated. It was observed that neither a comprehensive functional verification sequence nor a sequence with high stuck-at fault coverage gives high transition fault coverage for sequential circuits. A customized LFSR algorithm is used to find the fault coverage and the patterns used to detect the faults. The LFSR technique compares favorably with the ATPG tool for small and medium circuits: it yields 100% fault coverage, whereas Tetramax gives about 97% fault coverage.
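For readers unfamiliar with LFSR-based pattern generation, a generic Fibonacci LFSR can be sketched as follows. This is the standard textbook construction, not the customized algorithm the paper evaluates; the tap choice and widths below are our example.

```python
def lfsr_patterns(seed, taps, width, count):
    """Fibonacci LFSR producing `count` pseudo-random test patterns of
    `width` bits. `taps` are the bit positions XORed into the feedback
    bit that is shifted in. Generic sketch, not the paper's algorithm."""
    state = seed
    patterns = []
    for _ in range(count):
        patterns.append(state)
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1  # XOR the tapped bits
        state = ((state << 1) | fb) & ((1 << width) - 1)
    return patterns

# A maximal-length 4-bit LFSR (taps at bits 3 and 2) cycles through all
# 15 nonzero states before repeating, giving broad pattern coverage:
pats = lfsr_patterns(0b0001, taps=(3, 2), width=4, count=15)
assert len(set(pats)) == 15 and 0 not in pats
```

With maximal-length taps the hardware cost is a shift register plus a few XOR gates, which is why LFSRs are attractive for built-in pattern generation on small and medium circuits.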
- Conference Article
9
- 10.1109/edac.1993.386426
- Feb 22, 1993
A synchronizing sequence drives a circuit from an arbitrary power-up state into a unique state. A framework and algorithms for test generation based on the multiple observation time strategy are developed by taking advantage of synchronizing sequences. Though it has been shown that the multiple observation time strategy can provide a higher fault coverage than the conventional single observation time strategy, until now the multiple observation time strategy has required a very complex tester operation model with substantial overhead. However, when a circuit is synchronizable, test generation can employ the multiple observation time strategy and provide better fault coverages while using the conventional tester operation model. It is shown that the same fault coverage can be achieved in both tester operation models if the circuit under test generation is synchronizable. The authors investigate how a synchronizing sequence simplifies test generation and allows one to use the simpler tester operation model.
- Conference Article
88
- 10.1109/issre.1994.341373
- Nov 6, 1994
Models the relationship between testing effort, coverage and reliability, and presents a logarithmic model that relates testing effort to test coverage: statement (or block) coverage, branch (or decision) coverage, computation use (c-use) coverage, or predicate use (p-use) coverage. The model is based on the hypothesis that the enumerables (like branches or blocks) for any coverage measure have different detectability, just like the individual defects. This model allows us to relate a test coverage measure directly to the defect coverage. Data sets for programs with real defects are used to validate the model. The results are consistent with the known inclusion relationships among block, branch and p-use coverage measures. We show how the defect density controls the time-to-next-failure. The model can eliminate variables like the test application strategy from consideration. It is suitable for high-reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested.
- Research Article
189
- 10.1109/tr.2002.804489
- Dec 1, 2002
- IEEE Transactions on Reliability
"Software test-coverage measures" quantify the degree of thoroughness of testing. Tools are now available that measure test-coverage in terms of blocks, branches, computation-uses, predicate-uses, etc. that are covered. This paper models the relations among testing time, coverage, and reliability. An LE (logarithmic-exponential) model is presented that relates testing effort to test coverage (block, branch, computation-use, or predicate-use). The model is based on the hypothesis that the enumerable elements (like branches or blocks) for any coverage measure have various probabilities of being exercised, just like defects have various probabilities of being encountered. This model allows relating a test-coverage measure directly to defect-coverage. The model is fitted to 4 data-sets for programs with real defects. In the model, defect coverage can predict the time to next failure. The LE model can eliminate variables like test-application strategy from consideration. It is suitable for high reliability applications where automatic (or manual) test generation is used to cover enumerables which have not yet been tested. The data-sets used suggest the potential of the proposed model. The model is simple and easily explained, and thus can be suitable for industrial use. The LE model is based on the time-based logarithmic software-reliability growth model. It considers that at 100% coverage for a given enumerable, all defects might not yet have been found.
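The abstract notes that the LE model builds on the time-based logarithmic software-reliability growth model; the sketch below states that underlying model concretely in its standard Musa-Okumoto form. The parameter names are the textbook ones, not necessarily the paper's.

```python
import math

def expected_failures(t, lam0, theta):
    """Musa-Okumoto logarithmic growth model: expected number of
    failures observed by testing time t, with initial failure intensity
    lam0 and intensity decay parameter theta."""
    return (1.0 / theta) * math.log(lam0 * theta * t + 1.0)

def failure_intensity(t, lam0, theta):
    """Current failure intensity: the derivative of expected_failures
    with respect to t."""
    return lam0 / (lam0 * theta * t + 1.0)

# No failures before testing starts, and the intensity decays as
# testing proceeds, so cumulative failures grow only logarithmically:
assert expected_failures(0, lam0=5.0, theta=0.1) == 0.0
assert failure_intensity(100, 5.0, 0.1) < failure_intensity(10, 5.0, 0.1)
```

The hyperbolic decay of intensity captures the same intuition as the coverage hypothesis above: easily exercised enumerables (and easily encountered defects) are consumed first, making the remainder progressively more expensive to reach.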
- Research Article
3
- 10.1016/j.infsof.2014.09.007
- Sep 28, 2014
- Information and Software Technology
Generating semantically valid test inputs using constrained input grammars
- Conference Article
- 10.1109/test.1999.805860
- Sep 28, 1999
We describe the application of the following tools to ITC-99 benchmark circuits. Deterministic test generation: The test generation procedure MIX [1], and its extension MIX+ [2], combine several test generation approaches to derive test sequences exhibiting very high fault coverages at relatively low CPU times. It includes a simulation-based test generation procedure based on LOCSTEP [3], a deterministic test generation procedure, and a test generation procedure based on genetic optimization. It assumes fault detection under the restricted multiple observation time approach [4]. Property-based test generation: The test generation procedure PROPTEST [5], [6] uses several simulation-based techniques to generate test sequences without resorting to branch-and-bound procedures. The techniques include static compaction based on vector restoration [7] to capture the most effective test subsequences of the test sequence, holding of test vectors [8] and perturbation of test vectors. PROPTEST achieves very high fault coverages at very low test generation times in spite of its relatively low complexity. Identification of undetectable faults and removal of redundant faults: Identification of undetectable faults prior to test generation can save the potentially wasted effort in targeting undetectable faults. Redundant faults can be removed from the circuit as a way to simplify the circuit and/or the test generation process. Procedures to identify undetectable faults and remove redundant faults were described in [9]-[11]. The definitions from [12] and [13] are used in these procedures. Selection of partial scan flip-flops: The partial scan selection procedure of [14] achieves the same fault coverage as full scan design by eliminating sequentially undetectable faults. The efficiency of the procedure is due to the use of several scan selection phases based on difficult to control flip-flops, and fast identification of undetectable faults.
Sequential ATPG is invoked only in the last phase, when several flip-flops are already scanned, and the circuit is easier to handle by ATPG. The tools can currently handle single clock designs consisting of basic gates and D flip-flops. We are extending the tools to handle multiple clock designs and other primitives.
- Research Article
16
- 10.1109/43.658570
- Jan 1, 1997
- IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
For sequential circuit test pattern generation incorporating backward justification, we need to justify the values on flip-flops to activate and propagate fault effects. This takes much time when the values to be justified on flip-flops turn out to be invalid states. Hence, it is desirable to know invalid states, either dynamically during the justification process or statically before proceeding to test generation. This paper proposes algorithms to identify, before test generation, invalid states for sequential circuits without reset states. The first algorithm explores all valid states from an unknown initial state to search the complete set of invalid states. The second algorithm finds the complete set of invalid states by searching the reachable states for each state. The third algorithm searches for the invalid states that are required for test generation to help stop justification early, by analyzing dependency among flip-flops to simulate each partial circuit. Experimental results on ISCAS benchmark circuits show that the algorithms can identify invalid states in a short time. The obtained invalid states were also used in test generation, and it was shown that they improved test generation significantly in test generation time, fault coverage, and detection efficiency, especially for larger circuits and for those for which tests were difficult to generate.
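The flavor of the first algorithm (exploring valid states from an unknown power-up state) can be sketched as a simple image-computation fixpoint: starting from the set of all power-up states, repeatedly apply the transition function; states outside the fixpoint can never occur once the circuit has been clocked enough and are therefore invalid. The state encoding, the single-bit input, and the example circuit below are ours, and exhaustive enumeration only scales to tiny examples.

```python
def invalid_states(next_state, n_ffs):
    """Invalid states of a circuit with n_ffs flip-flops, computed as
    the complement of the image fixpoint over all power-up states.
    `next_state` maps (state, input_bit) -> state. Illustrative sketch
    of the reachability idea, not the paper's algorithms."""
    states = set(range(2 ** n_ffs))  # any power-up state is possible
    while True:
        image = {next_state(s, x) for s in states for x in (0, 1)}
        if image == states:  # fixpoint reached
            return set(range(2 ** n_ffs)) - states
        states = image  # the image shrinks monotonically

# Hypothetical 2-flip-flop circuit: a modulo-3 counter that escapes
# state 3 immediately, so state 3 can never recur after one clock.
def count_mod3(s, x):
    return (s + 1) % 3 if s < 3 else 0

assert invalid_states(count_mod3, 2) == {3}
```

Knowing that state 3 is invalid lets a justification procedure abandon any search branch that requires the flip-flops to hold that value, which is exactly the early-termination benefit the paper measures.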
- Research Article
1
- 10.1016/0167-9260(89)90007-2
- Sep 1, 1989
- Integration, the VLSI Journal
Test generation and fault detection for VLSI PPL circuits
- Conference Article
- 10.1109/cadsm.2003.1255109
- Feb 18, 2003
Deterministic test generation algorithms are highly complex and time-consuming. New approaches are needed to reduce execution time and to improve fault coverage. In this work, genetic algorithms for sequential circuit test generation are proposed. The genetic algorithm builds candidate test vectors and sequences, using a deductive-parallel fault simulator to compute the fitness of each candidate test. The deductive-parallel method is proposed to significantly improve the fault coverage percentage and to speed up test generators. A new hardware deductive-parallel fault simulator is developed. It combines the advantages of deductive and parallel fault simulation algorithms for digital circuits described at gate, functional and RTL levels. Experimental results show high fault coverage for most of the ISCAS'89 sequential benchmark circuits, and execution times were significantly lower than in a deterministic test generator and in test generators using random selection of the initial population.
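The general scheme of genetic-algorithm test generation can be sketched as below, with the fault simulator stubbed out as a fitness function that counts detected faults. The paper's deductive-parallel simulator is far more involved; every name and parameter here is illustrative only.

```python
import random

def evolve_tests(fitness, seq_len, pop_size=20, generations=30):
    """Minimal GA for test generation: candidate test sequences are bit
    strings scored by a fault-simulation fitness (faults detected).
    Elitism, single-point crossover, one-bit mutation. A sketch of the
    generic scheme, not the paper's method."""
    rng = random.Random(1)
    pop = [[rng.randint(0, 1) for _ in range(seq_len)]
           for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: pop_size // 2]  # keep the fittest half
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, seq_len)
            child = a[:cut] + b[cut:]       # single-point crossover
            child[rng.randrange(seq_len)] ^= 1  # bit-flip mutation
            children.append(child)
        pop = elite + children
    return max(pop, key=fitness)

# Stub "fault simulator": each of 8 faults is detected iff its trigger
# bit in the sequence is 1, so fitness is simply the number of ones.
def detected_faults(seq):
    return sum(seq)

best = evolve_tests(detected_faults, seq_len=8)
```

In a real flow the fitness call dominates the run time, which is why the paper pairs the GA with a fast deductive-parallel fault simulator rather than simulating each candidate from scratch.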
- Conference Article
46
- 10.1109/vtest.1996.510860
- Apr 28, 1996
A fault-oriented sequential circuit test generator is described in which various types of distinguishing sequences are derived, both statically and dynamically, to aid the test generation process. A two-phase algorithm is used during test generation. The first phase activates the target fault, and the second phase propagates the fault effects (FE's) from the flip-flops with assistance from the distinguishing sequences. This strategy improves the propagation of FE's to the primary outputs, and the overall fault coverage is greatly increased. In our new test generator, DIGATE, genetic algorithms are used to derive both activating and distinguishing sequences during test generation. Our results show very high fault coverages for the ISCAS89 sequential benchmark circuits and several synthesized circuits.
- Conference Article
12
- 10.1109/ats.1995.485360
- Nov 23, 1995
Many test generation algorithms for path delay faults assume a special methodology for application of the test sequence. The two-vector test sequences are valid under the assumption that the combinational logic reaches a steady state following the first vector before the second vector is applied. While such vectors may be acceptable for combinational circuits, their use for testing a non-scan sequential circuit, where it is difficult to vary the clock rate, is virtually impossible. Most multi-valued algebras for combinational circuits are rendered invalid when vectors are applied at the rated speed. We present a new multi-valued algebra and a test generation algorithm to derive tests for a uniform rated speed test application methodology. The main ideas in the paper include (1) an algebra that derives three-vector test sequences for combinational logic and (2) a value propagation rule for latches, resulting in more realistic fault coverages in sequential circuits when all vectors are applied at the rated speed. The test generator uses Boolean functions to reason about state transitions in sequential machines. These Boolean functions are stored and manipulated as Binary Decision Diagrams (BDDs). Experimental data on moderate size ISCAS89 benchmarks are included.
- Conference Article
109
- 10.1145/1542476.1542517
- Jun 15, 2009
Symbolic analysis shows promise as a foundation for bug-finding, specification inference, verification, and test generation. This paper addresses demand-driven symbolic analysis for object-oriented programs and frameworks. Many such codes comprise large, partial programs with highly dynamic behaviors--polymorphism, reflection, and so on--posing significant scalability challenges for any static analysis. We present an approach based on interprocedural backwards propagation of weakest preconditions. We present several novel techniques to improve the efficiency of such analysis. First, we present directed call graph construction, where call graph construction and symbolic analysis are interleaved. With this technique, call graph construction is guided by constraints discovered during symbolic analysis, obviating the need for exhaustively exploring a large, conservative call graph. Second, we describe generalization, a technique that greatly increases the reusability of procedure summaries computed during interprocedural analysis. Instead of tabulating how a procedure transforms a symbolic state in its entirety, our technique tabulates how the procedure transforms only the pertinent portion of the symbolic state. Additionally, we show how integrating an inexpensive, custom logic simplifier with weakest precondition computation dramatically improves performance. We have implemented the analysis in a tool called Snugglebug and evaluated it as a bug-report feasibility checker. Our results show that the algorithmic techniques were critical for successfully analyzing large Java applications.
- Research Article
24
- 10.1145/1543135.1542517
- May 28, 2009
- ACM SIGPLAN Notices
Symbolic analysis shows promise as a foundation for bug-finding, specification inference, verification, and test generation. This paper addresses demand-driven symbolic analysis for object-oriented programs and frameworks. Many such codes comprise large, partial programs with highly dynamic behaviors--polymorphism, reflection, and so on--posing significant scalability challenges for any static analysis. We present an approach based on interprocedural backwards propagation of weakest preconditions. We present several novel techniques to improve the efficiency of such analysis. First, we present directed call graph construction, where call graph construction and symbolic analysis are interleaved. With this technique, call graph construction is guided by constraints discovered during symbolic analysis, obviating the need for exhaustively exploring a large, conservative call graph. Second, we describe generalization, a technique that greatly increases the reusability of procedure summaries computed during interprocedural analysis. Instead of tabulating how a procedure transforms a symbolic state in its entirety, our technique tabulates how the procedure transforms only the pertinent portion of the symbolic state. Additionally, we show how integrating an inexpensive, custom logic simplifier with weakest precondition computation dramatically improves performance. We have implemented the analysis in a tool called Snugglebug and evaluated it as a bug-report feasibility checker. Our results show that the algorithmic techniques were critical for successfully analyzing large Java applications.
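The backwards propagation of weakest preconditions at the heart of this analysis can be sketched for straight-line assignments, using the textbook rule wp(x := e, Q) = Q[e/x]. The tuple encoding below is ours and omits everything that makes Snugglebug interesting (interprocedural summaries, directed call graphs, the logic simplifier).

```python
# Expressions and predicates as nested tuples:
#   ('var', name) | ('const', n) | ('add', e1, e2)  for expressions,
#   ('le', e1, e2) | ('and', p1, p2)                for predicates.
def subst(term, name, repl):
    """Substitute expression `repl` for variable `name` in a term."""
    if term[0] == 'var':
        return repl if term[1] == name else term
    if term[0] == 'const':
        return term
    return (term[0],) + tuple(subst(t, name, repl) for t in term[1:])

def wp(stmts, post):
    """Weakest precondition of a list of ('assign', name, expr)
    statements w.r.t. predicate `post`, propagated backwards:
    wp(x := e, Q) = Q[e/x]. Textbook sketch, not Snugglebug's engine."""
    for op, name, expr in reversed(stmts):
        assert op == 'assign'
        post = subst(post, name, expr)
    return post

# wp of { y := x + 1; z := y } w.r.t. (z <= 5) is (x + 1 <= 5):
prog = [('assign', 'y', ('add', ('var', 'x'), ('const', 1))),
        ('assign', 'z', ('var', 'y'))]
pre = wp(prog, ('le', ('var', 'z'), ('const', 5)))
assert pre == ('le', ('add', ('var', 'x'), ('const', 1)), ('const', 5))
```

Propagating the postcondition backwards yields a constraint purely over the inputs, which is what lets a demand-driven analysis ask an SMT-style feasibility question about a bug report without executing the program.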