Model-based testing
Model-based testing is the application of model-based design for designing, and optionally also executing, the artifacts needed to perform software testing. Models can be used to represent the desired behavior of the System Under Test (SUT), or to represent the desired testing strategies and testing environment.
A model describing the SUT is usually an abstract, partial representation of its desired behavior. The test cases derived from such a model are functional tests on the same level of abstraction as the model. These test cases are collectively known as the abstract test suite. The abstract test suite cannot be executed directly against the SUT because it is on the wrong level of abstraction. Therefore, an executable test suite that can communicate with the SUT must be derived from the abstract test suite. This is done by mapping the abstract test cases to concrete test cases suitable for execution. In some model-based testing tools, the model contains enough information to generate an executable test suite directly. In the case of online testing (see below), the abstract test suite exists only as a concept but not as an explicit artifact.
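As a minimal illustration of this mapping step, the following Python sketch binds abstract test steps from a model to concrete calls against the SUT through an adapter layer. All names here (AbstractStep, the adapter functions, the printed requests) are hypothetical and not tied to any particular tool.

```python
# Minimal sketch: mapping an abstract test case to an executable one.
# All identifiers are illustrative, not part of any specific MBT tool.
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class AbstractStep:
    action: str              # abstract action from the model, e.g. "login"
    params: Dict[str, str]   # abstract parameters from the model

# Adapter layer: one concrete function per abstract action.
def login(params: Dict[str, str]) -> None:
    print(f"POST /session user={params['user']}")   # would call the real SUT API

def add_item(params: Dict[str, str]) -> None:
    print(f"POST /cart item={params['item']}")

ADAPTERS: Dict[str, Callable[[Dict[str, str]], None]] = {
    "login": login,
    "add_item": add_item,
}

def execute(abstract_test: List[AbstractStep]) -> None:
    """Turn an abstract test case into concrete SUT interactions."""
    for step in abstract_test:
        ADAPTERS[step.action](step.params)

# One abstract test case derived from the model:
execute([AbstractStep("login", {"user": "alice"}),
         AbstractStep("add_item", {"item": "book"})])
```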
There are many different ways to "derive" tests from a model. Because testing is usually experimental and based on heuristics, there is no single best way to do this. It is common to consolidate all test-derivation-related design decisions into a package that is often known as "test requirements", "test purpose" or even "use case". This package can contain, for example, information about the part of the model that should be the focus for testing, or about the conditions under which it is acceptable to stop testing (test stopping criteria).
Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing. In some aspects, this is not completely accurate. Model-based testing can be combined with source-code level test coverage measurement, and functional models can be based on existing source code in the first place.
Model-based testing for complex software systems is still an evolving field.
Models
Especially in Model Driven Engineering or in OMG's model-driven architecture, the model is built before or in parallel with the development process of the system under test. The model can also be constructed from the completed system. At present, models are mostly created manually, but there are also attempts to create them automatically, for instance from source code. One important way to create new models is by model transformation, using languages like ATL, a QVT-like Domain Specific Language.
Model-based testing inherits the complexity of the domain or, more particularly, of the related domain models.
Deploying model-based testing
There are various known ways to deploy model-based testing, which include online testing, offline generation of executable tests, and offline generation of manually deployable tests.[1]
Online testing means that a model-based testing tool connects “directly” to a system under test and tests it dynamically.
Offline generation of executable tests means that a model-based testing tool generates test cases as a computer-readable asset that can be later deployed automatically. This asset can be, for instance, a collection of Python classes that embodies the generated testing logic.
Offline generation of manually deployable tests means that a model-based testing tool generates test cases as a human-readable asset that can be later deployed manually. This asset can be, for instance, a PDF document in English that describes the generated test steps.
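As an illustration of the second option above, the sketch below shows roughly what an offline-generated, directly executable test asset could look like as Python unittest code. The class, method, and adapter names are hypothetical; actual tools emit code in their own style.

```python
# Hypothetical example of an offline-generated executable test artifact:
# a Python unittest class emitted by an MBT tool. Names are illustrative.
import unittest

class GeneratedLoginTests(unittest.TestCase):
    """Test cases generated from a model of the login behaviour."""

    def test_path_001_valid_login(self):
        # Path through the model: start -> enter_credentials -> logged_in
        session = fake_sut_login("alice", "correct-password")   # adapter to the SUT
        self.assertTrue(session.authenticated)

    def test_path_002_invalid_password(self):
        # Path through the model: start -> enter_credentials -> rejected
        session = fake_sut_login("alice", "wrong-password")
        self.assertFalse(session.authenticated)

# fake_sut_login stands in for the real adapter that talks to the system under test.
class _Session:
    def __init__(self, ok: bool):
        self.authenticated = ok

def fake_sut_login(user: str, password: str) -> _Session:
    return _Session(password == "correct-password")

if __name__ == "__main__":
    unittest.main()
```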
Deriving tests algorithmically
The effectiveness of model-based testing is primarily due to the potential for automation it offers. If the model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically.
Often the model is translated to or interpreted as a finite state automaton or a state transition system. This automaton represents the possible configurations of the system under test. To find test cases, the automaton is searched for executable paths; a possible execution path can serve as a test case. This method works if the model is deterministic or can be transformed into a deterministic one. Valuable off-nominal test cases may be obtained by leveraging unspecified transitions in these models.
Depending on the complexity of the system under test and the corresponding model, the number of paths can be very large because of the huge number of possible configurations of the system. To find appropriate test cases, i.e. paths that exercise a certain requirement to be verified, the search for paths has to be guided. Multiple techniques for test case generation have been applied and are surveyed in the literature.[2]
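A minimal sketch of this idea, assuming a toy vending-machine automaton and a simple path-length bound as the stopping criterion, is the following breadth-first enumeration of paths, each of which is a candidate test case:

```python
# Sketch: deriving test cases as paths through a finite state automaton.
# The vending-machine model and the length bound are illustrative assumptions.
from collections import deque

# transitions: state -> list of (event, next_state)
MODEL = {
    "idle":       [("insert_coin", "paid")],
    "paid":       [("select_item", "dispensing"), ("refund", "idle")],
    "dispensing": [("take_item", "idle")],
}

def derive_paths(start: str, max_len: int = 4):
    """Breadth-first search over the automaton; every path is a candidate test case."""
    paths = []
    queue = deque([(start, [])])
    while queue:
        state, events = queue.popleft()
        if events:
            paths.append(events)
        if len(events) >= max_len:
            continue
        for event, nxt in MODEL.get(state, []):
            queue.append((nxt, events + [event]))
    return paths

for test_case in derive_paths("idle"):
    print(" -> ".join(test_case))
```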
Test case generation by theorem proving
Theorem proving was originally used for the automated proving of logical formulas. For model-based testing approaches, the system is modeled by a set of logical expressions (predicates) specifying the system's behavior. To select test cases, the model is partitioned into equivalence classes over the valid interpretations of the set of logical expressions describing the system under development. Each class represents a certain system behavior and can therefore serve as a test case. The simplest partitioning is done with the disjunctive normal form approach: the logical expressions describing the system's behavior are transformed into disjunctive normal form.
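As a rough illustration, the following sketch uses the sympy library (an assumed dependency) to bring a small, hypothetical access-control predicate into disjunctive normal form; each disjunct then corresponds to one equivalence class and hence one test case.

```python
# Sketch of the disjunctive-normal-form partitioning idea, using sympy.
# The access-control predicate is illustrative; each DNF disjunct is one
# equivalence class of system behavior, i.e. one candidate test case.
from sympy import symbols
from sympy.logic.boolalg import to_dnf, Or

admin, owner, locked = symbols("admin owner locked")

# "Write access is granted to admins, or to owners when the record is not locked."
spec = admin | (owner & ~locked)

dnf = to_dnf(spec, simplify=True)
classes = dnf.args if isinstance(dnf, Or) else (dnf,)

for i, clause in enumerate(classes, 1):
    print(f"equivalence class {i}: {clause}")   # e.g. "admin", "owner & ~locked"
```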
Test case generation by constraint logic programming and symbolic execution
Constraint programming can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by means of constraints.[3] Solving the set of constraints can be done by Boolean solvers (e.g. SAT solvers based on the Boolean satisfiability problem) or by numerical analysis, such as Gaussian elimination. A solution found by solving the set of constraint formulas can serve as a test case for the corresponding system.
Constraint programming can be combined with symbolic execution. In this approach, a system model is executed symbolically, i.e. data constraints are collected over the different control paths, and the constraint programming method is then used to solve the constraints and produce test cases.
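A minimal sketch of this combination, assuming the z3-solver package and a toy program with three control paths, collects the path conditions and asks the solver for concrete test data satisfying each one:

```python
# Sketch: symbolic execution plus constraint solving with z3 (assumed dependency).
# The toy program and its path conditions are illustrative; any model the solver
# returns is concrete test data exercising that path.
from z3 import Int, Solver, sat

x = Int("x")   # symbolic input

# Path conditions collected from a toy program:
#   if x > 10:    -> path A
#   elif x == 0:  -> path B
#   else:         -> path C
path_conditions = {
    "path_A": [x > 10],
    "path_B": [x <= 10, x == 0],
    "path_C": [x <= 10, x != 0],
}

for name, constraints in path_conditions.items():
    solver = Solver()
    solver.add(*constraints)
    if solver.check() == sat:
        model = solver.model()
        print(f"{name}: test input x = {model[x]}")
    else:
        print(f"{name}: infeasible path")
```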
Test case generation by model checking
Model checkers can also be used for test case generation.[4] Originally, model checking was developed as a technique to check whether a property of a specification is valid in a model. When used for testing, a model of the system under test and a property to be checked are provided to the model checker. While checking whether the property holds in the model, the model checker produces witnesses and counterexamples. A witness is a path on which the property is satisfied, whereas a counterexample is a path in the execution of the model on which the property is violated. These paths can again be used as test cases.
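The following sketch illustrates the underlying idea with a hand-rolled explicit-state search rather than a real model checker: a toy traffic-light model, an illustrative safety property ("never green in both directions"), and a breadth-first search whose counterexample path can be replayed as a test case.

```python
# Sketch: explicit-state search for a property violation; the counterexample
# path doubles as a test case. Model and property are illustrative assumptions.
from collections import deque

def successors(state):
    ns, ew = state              # phases of the north-south and east-west lights
    step = {"red": "green", "green": "yellow", "yellow": "red"}
    yield (step[ns], ew)        # advance one light at a time
    yield (ns, step[ew])

def violates(state):
    return state == ("green", "green")     # the safety property to check

def find_counterexample(initial):
    """Breadth-first search; returns the shortest path to a violating state, if any."""
    queue = deque([(initial, [initial])])
    seen = {initial}
    while queue:
        state, path = queue.popleft()
        if violates(state):
            return path
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + [nxt]))
    return None

print(find_counterexample(("red", "green")))   # the printed path can be replayed as a test
```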
Test case generation by using an event-flow model
A popular model that has recently been used extensively for testing software with a graphical user interface (GUI) front-end is the event-flow model, which represents events and event interactions. In much the same way as a control-flow model represents all possible execution paths in a program, and a data-flow model represents all possible definitions and uses of a memory location, the event-flow model represents all possible sequences of events that can be executed on the GUI. More specifically, a GUI is decomposed into a hierarchy of modal dialogs; this hierarchy is represented as an integration tree; each modal dialog is represented as an event-flow graph that shows all possible event execution paths in the dialog; and individual events are represented by their preconditions and effects. An overview of the event-flow model, with associated algorithms to semi-automatically reverse engineer the model from executing GUI software, is presented in a 2007 paper.[5] Because the event-flow model is not tied to a specific aspect of the GUI testing process, it may be used to perform a wide variety of testing tasks by defining specialized model-based techniques called event-space exploration strategies (ESES). These ESES use the event-flow model in a number of ways to develop an end-to-end GUI testing process, namely checking the model, generating test cases, and creating test oracles. See the GUI testing page for more details.
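As a simplified illustration of the event-flow idea, the sketch below models a few hypothetical GUI events by their preconditions and effects on an abstract GUI state, and enumerates the legal event sequences of a fixed length as test cases:

```python
# Sketch: event-flow-style test generation. Events carry preconditions and
# effects over an abstract GUI state; legal event sequences become test cases.
# Event names and the two-step bound are illustrative assumptions.
EVENTS = {
    "open_file_dialog": {"pre": lambda s: not s["dialog_open"],
                         "eff": lambda s: {**s, "dialog_open": True}},
    "cancel_dialog":    {"pre": lambda s: s["dialog_open"],
                         "eff": lambda s: {**s, "dialog_open": False}},
    "type_filename":    {"pre": lambda s: s["dialog_open"],
                         "eff": lambda s: {**s, "filename_set": True}},
}

def sequences(state, length):
    """All event sequences of the given length whose preconditions hold along the way."""
    if length == 0:
        return [[]]
    result = []
    for name, ev in EVENTS.items():
        if ev["pre"](state):
            for tail in sequences(ev["eff"](state), length - 1):
                result.append([name] + tail)
    return result

initial = {"dialog_open": False, "filename_set": False}
for seq in sequences(initial, 2):
    print(" -> ".join(seq))
```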
Test case generation by using a Markov chain model
Markov chains are an efficient way to handle model-based testing. A test model realized as a Markov chain can be understood as a usage model, so one speaks of usage/statistical model-based testing. Usage models, i.e. Markov chains, are mainly constructed from two artifacts: a finite-state machine (FSM), which represents all possible usage scenarios of the system, and operational profiles (OP), which qualify the FSM to represent how the system is or will be used statistically. The first (the FSM) helps to know what can be or has been tested, and the second (the OP) helps to derive operational test cases. Usage/statistical model-based testing starts from the facts that it is not possible to exhaustively test a system and that failures can appear at a very low rate.[6] This approach offers a pragmatic way to statistically derive test cases focused on improving the reliability of the system under test as quickly as possible.
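A minimal sketch of such a usage model, with an illustrative finite-state machine and made-up operational-profile probabilities, is the following weighted random walk that yields usage-representative test sequences:

```python
# Sketch: usage/statistical model-based testing. Outgoing transitions carry
# operational-profile probabilities; a weighted random walk over the FSM
# produces statistically representative test sequences. All values are illustrative.
import random

# state -> list of (probability, event, next_state); probabilities sum to 1 per state
USAGE_MODEL = {
    "start":     [(0.9, "login", "logged_in"), (0.1, "browse_help", "start")],
    "logged_in": [(0.7, "search", "logged_in"), (0.2, "checkout", "done"),
                  (0.1, "logout", "done")],
}

def generate_test(max_steps: int = 10, seed=None):
    rng = random.Random(seed)
    state, events = "start", []
    for _ in range(max_steps):
        if state not in USAGE_MODEL:
            break
        transitions = USAGE_MODEL[state]
        weights = [p for p, _, _ in transitions]
        _, event, state = rng.choices(transitions, weights=weights)[0]
        events.append(event)
    return events

print(generate_test(seed=42))   # one usage-weighted test case
```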
See also
- Domain Specific Language (DSL)
- Domain Specific Modeling (DSM)
- Model Driven Architecture (MDA)
- Model Driven Engineering (MDE)
- Object Oriented Analysis and Design (OOAD)
References
- ^ Practical Model-Based Testing: A Tools Approach, Mark Utting and Bruno Legeard, ISBN 978-0-12-372501-1, Morgan-Kaufmann 2007
- ^ John Rushby. Automated Test Generation and Verified Software. Verified Software: Theories, Tools, Experiments: First IFIP TC 2/WG 2.3 Conference, VSTTE 2005, Zurich, Switzerland, October 10–13. pp. 161-172, Springer-Verlag
- ^ Jefferson Offutt. Constraint-Based Automatic Test Data Generation. IEEE Transactions on Software Engineering, 17:900-910, 1991
- ^ Gordon Fraser, Franz Wotawa, and Paul E. Ammann. Testing with model checkers: a survey. Software Testing, Verification and Reliability, 19(3):215–261, 2009. URL: http://www3.interscience.wiley.com/journal/121560421/abstract
- ^ Atif M. Memon. An event-flow model of GUI-based applications for testing Software Testing, Verification and Reliability, vol. 17, no. 3, 2007, pp. 137-157, John Wiley and Sons Ltd. URL: http://www.cs.umd.edu/%7Eatif/papers/MemonSTVR2007.pdf
- ^ Hélène Le Guen. Validation d'un logiciel par le test statistique d'usage: de la modélisation de la décision à la livraison, 2005. URL: ftp://ftp.irisa.fr/techreports/theses/2005/leguen.pdf
Further reading
- OMG UML 2 Testing Profile
- Eckard Bringmann, Andreas Krämer; Model-based Testing of Automotive Systems In: ICST, pp. 485–493, 2008 International Conference on Software Testing, Verification, and Validation, 2008.
- Practical Model-Based Testing: A Tools Approach, Mark Utting and Bruno Legeard, ISBN 978-0-12-372501-1, Morgan-Kaufmann 2007.
- Model-Based Software Testing and Analysis with C#, Jonathan Jacky, Margus Veanes, Colin Campbell, and Wolfram Schulte, ISBN 978-0-521-68761-4, Cambridge University Press 2008.
- Model-Based Testing of Reactive Systems Advanced Lecture Series, LNCS 3472, Springer-Verlag, 2005.
- Hong Zhu et al. (2008). AST '08: Proceedings of the 3rd International Workshop on Automation of Software Test. ACM Press. ISBN 978-1-60558-030-2. http://portal.acm.org/citation.cfm?id=1370042#.
- Requirements for information systems model-based testing
- Model-Based Testing Adds Value, Ewald Roodenrijs, Methods & Tools, Spring 2010.
- A Systematic Review of Model Based Testing Tool Support, Muhammad Shafique, Yvan Labiche, Carleton University, Technical Report, May 2010.
- Model-Based Testing for Embedded Systems (Computational Analysis, Synthesis, and Design of Dynamic Systems), Justyna Zander, Ina Schieferdecker, Pieter J. Mosterman, 592 pages, CRC Press, ISBN 1439818452, September 15, 2011.
- Online Community for Model-based Testing