GUI software testing

In computer science, GUI software testing is the process of testing a product that uses a graphical user interface, to ensure it meets its written specifications. This is normally done through the use of a variety of test cases.

Testing GUI Applications

Most clients in client/server and web-based systems deliver system functionality through a GUI. When testing complete systems, the tester must grapple with the additional functionality provided by the GUI. GUIs make systems more difficult to test for many reasons: the event-driven nature of GUIs, unsolicited events, the many ways into and out of any given state, and the effectively infinite input domain all make it likely that the programmer has introduced errors, because not every path could be tested.

Available literature on testing GUIs tends to focus almost exclusively on tools as the solution to the GUI testing problem. With few exceptions, papers on this topic pay little attention to GUI test design and concentrate instead on how automated regression-test suites can be built and maintained.

Test Case Generation

To generate a ‘good’ set of test cases, the test designer must be certain that the suite covers all the functionality of the system and also fully exercises the GUI itself. The difficulty in accomplishing this task is twofold: one has to deal with domain size, and then one has to deal with sequences. In addition, the tester faces further difficulty when regression testing is required.

The size problem can be easily illustrated. Unlike a CLI (command line interface) system, a GUI has many operations that need to be tested. A very small program such as Microsoft WordPad has 325 possible GUI operations.[1] In a large program, the number of operations can easily be an order of magnitude larger.

The second problem is the sequencing problem. Some functionality of the system may only be accomplishable by following some complex sequence of GUI events. For example, to open a file a user may have to click on the File Menu and then select the Open operation, and then use a dialog box to specify the file name, and then focus the application on the newly opened window. Obviously, increasing the number of possible operations increases the sequencing problem exponentially. This can become a serious issue when the tester is creating test cases manually.

Regression testing becomes a problem with GUIs as well. This is because the GUI may change significantly across versions of the application, even though the underlying application may not. A test designed to follow a certain path through the GUI may not be able to follow that path since a button, menu item, or dialog may have changed location or appearance.

These issues have driven the GUI testing problem domain towards automation. Many different techniques have been proposed to automatically generate test suites that are complete and that simulate user behavior.

Most of the techniques used to test GUIs attempt to build on techniques previously used to test CLI programs. However, most of these have scaling problems when applied to GUIs. For example, in finite state machine-based modeling,[2][3] a system is modeled as a finite state machine and a program is used to generate test cases that exercise all states. This can work well on a system that has a limited number of states, but may become overly complex and unwieldy for a GUI (see also model-based testing).
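
To make the idea concrete, the sketch below models a tiny, invented dialog as a finite state machine and generates one event sequence per transition; the states, events and coverage criterion are illustrative assumptions, not taken from the cited work.

```python
# Minimal sketch of FSM-based test generation with an invented dialog model.
# Each transition is (state, event) -> next_state; for every transition we
# build one event sequence that reaches and then exercises it.
from collections import deque

TRANSITIONS = {
    ("Closed", "open_dialog"): "Empty",
    ("Empty", "type_name"): "Filled",
    ("Filled", "clear"): "Empty",
    ("Filled", "press_ok"): "Closed",
    ("Empty", "press_cancel"): "Closed",
}

def generate_test_sequences(start="Closed"):
    """Return one event sequence per transition, reaching each transition's
    source state from the start state via a shortest event path."""
    sequences = []
    for (src, event), _dst in TRANSITIONS.items():
        queue = deque([(start, [])])
        seen = {start}
        while queue:
            state, path = queue.popleft()
            if state == src:
                sequences.append(path + [event])
                break
            for (s, e), nxt in TRANSITIONS.items():
                if s == state and nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, path + [e]))
    return sequences

for sequence in generate_test_sequences():
    print(" -> ".join(sequence))
```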

Planning and artificial intelligence

A novel approach to test suite generation, adapted from a CLI technique,[4] involves using a planning system.[5] Planning is a well-studied technique from the artificial intelligence (AI) domain that attempts to solve problems that involve four parameters:

*an initial state,
*a goal state,
*a set of operators, and
*a set of objects to operate on.

Planning systems determine a path from the initial state to the goal state by applying the operators. An extremely simple planning problem would be one in which you have two words and a single operation, ‘change a letter’, that replaces one letter in a word with another; the goal of the problem is to change one word into the other.
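
The toy problem can be sketched in a few lines; a breadth-first search is used here as a stand-in for a real planner, and the small word list is an illustrative assumption.

```python
# Toy planner for the 'change a letter' problem: breadth-first search for a
# sequence of single-letter changes from an initial word to a goal word,
# where every intermediate word must be in a (small, illustrative) dictionary.
from collections import deque
import string

DICTIONARY = {"cold", "cord", "card", "ward", "warm", "word", "worm"}

def plan(initial, goal):
    """Return the word sequence (the 'plan') from initial to goal, or None
    if no plan exists."""
    queue = deque([[initial]])
    seen = {initial}
    while queue:
        path = queue.popleft()
        word = path[-1]
        if word == goal:
            return path
        for i in range(len(word)):
            for letter in string.ascii_lowercase:
                candidate = word[:i] + letter + word[i + 1:]
                if candidate in DICTIONARY and candidate not in seen:
                    seen.add(candidate)
                    queue.append(path + [candidate])
    return None  # mirrors the 'valid plan or no plan at all' property

print(plan("cold", "warm"))  # e.g. ['cold', 'cord', 'card', 'ward', 'warm']
```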

For GUI testing, the problem is a bit more complex. In [1], the authors used a planner called IPP[6] to demonstrate this technique. The method is straightforward to understand. First, the system's UI is analyzed to determine what operations are possible; these operations become the operators used in the planning problem. Next, an initial system state is determined, and a goal state is chosen that the tester feels would exercise the system. Lastly, the planning system is used to determine a path from the initial state to the goal state, and this path becomes the test plan.
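
The sketch below shows the general shape of this mapping, not the actual IPP input or the method of the cited paper: the GUI operations are written as invented STRIPS-style operators with preconditions and effects, and a naive forward search plays the role of the planner. The resulting plan reads directly as a test case.

```python
# Invented STRIPS-style operators for a tiny 'open a file' GUI (not IPP input
# syntax). Each operator has preconditions, added facts and deleted facts;
# a naive forward search stands in for the planner.
OPERATORS = {
    "click_file_menu": {"pre": {"main_window"},       "add": {"file_menu_open"},    "del": set()},
    "click_open_item": {"pre": {"file_menu_open"},    "add": {"open_dialog_shown"}, "del": {"file_menu_open"}},
    "type_filename":   {"pre": {"open_dialog_shown"}, "add": {"filename_entered"},  "del": set()},
    "press_ok":        {"pre": {"filename_entered"},  "add": {"file_opened"},       "del": {"open_dialog_shown"}},
}

def plan(initial, goal):
    """Depth-first forward search over operator applications."""
    def search(state, path, visited):
        if goal <= state:
            return path
        for name, op in OPERATORS.items():
            if op["pre"] <= state:
                new_state = frozenset((state - op["del"]) | op["add"])
                if new_state not in visited:
                    result = search(new_state, path + [name], visited | {new_state})
                    if result is not None:
                        return result
        return None

    return search(frozenset(initial), [], {frozenset(initial)})

# The resulting plan reads directly as a GUI test case:
print(plan({"main_window"}, {"file_opened"}))
# ['click_file_menu', 'click_open_item', 'type_filename', 'press_ok']
```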

Using a planner to generate the test cases has some specific advantages over manual generation. A planning system, by its very nature, generates solutions to planning problems in a way that is very beneficial to the tester:
#The plans are always valid: the output of the system is either a valid and correct plan that uses the operators to attain the goal state, or no plan at all. This is beneficial because much time can be wasted when manually creating a test suite due to invalid test cases that the tester thought would work but didn’t.
#A planning system pays attention to order. Often, to test a certain function, the test case must be complex and follow a path through the GUI where the operations are performed in a specific order. When done manually, this can lead to errors and can also be quite difficult and time-consuming.
#Finally, and most importantly, a planning system is goal-oriented: the tester focuses test suite generation on what is most important, testing the functionality of the system.

When manually creating a test suite, the tester is more focused on how to test a function (i.e. the specific path through the GUI). By using a planning system, the path is taken care of and the tester can focus on what function to test. An additional benefit is that a planning system is not restricted in any way when generating the path and may often find a path that was never anticipated by the tester. This is an important problem to combat.[7]

Another interesting method of generating GUI test cases uses the theory that good GUI test coverage can be attained by simulating a novice user. One can speculate that an expert user of a system will follow a very direct and predictable path through a GUI, whereas a novice user will follow a more random path. If an expert were used to test the GUI, many possible system states would therefore never be reached. A novice user, however, follows a much more varied, meandering and unexpected path to achieve the same goal, so it is more desirable to create test suites that simulate novice usage because they will test more of the system.

The difficulty lies in generating test suites that simulate ‘novice’ system usage. Using genetic algorithms is one proposed way to solve this problem.[7] Novice paths through the system are not random paths: first, a novice user learns over time and generally won’t make the same mistakes repeatedly; secondly, a novice user is not analogous to a group of monkeys trying to type Hamlet, but is someone following a plan who probably has some domain or system knowledge.

Genetic algorithms work as follows: a set of ‘genes’ is created randomly and then subjected to some task. The genes that complete the task best are kept and the ones that don’t are discarded. The process is repeated, with the surviving genes being replicated and the rest of the set filled in with new random genes. Eventually one gene (or a small set of genes, if some threshold is set) will remain, and it is naturally the best fit for the given problem.
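
A bare-bones version of that loop is sketched below; the population sizes are arbitrary, the fitness function is left abstract, and crossover is omitted for brevity.

```python
# Bare-bones genetic-algorithm loop: score a random population, keep the best
# genes, refill with mutated survivors and fresh random genes, repeat. The
# fitness function is supplied by the caller; sizes are arbitrary.
import random

GENE_LENGTH = 10   # number of integer alleles per gene
ALLELE_RANGE = 20  # each allele is an integer in [0, ALLELE_RANGE)
POPULATION = 30
GENERATIONS = 50
KEEP = 10          # survivors per generation

def random_gene():
    return [random.randrange(ALLELE_RANGE) for _ in range(GENE_LENGTH)]

def mutate(gene):
    copy = list(gene)
    copy[random.randrange(GENE_LENGTH)] = random.randrange(ALLELE_RANGE)
    return copy

def evolve(fitness):
    population = [random_gene() for _ in range(POPULATION)]
    for _ in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        survivors = population[:KEEP]
        mutants = [mutate(random.choice(survivors)) for _ in range(KEEP)]
        fresh = [random_gene() for _ in range(POPULATION - len(survivors) - len(mutants))]
        population = survivors + mutants + fresh
    return max(population, key=fitness)
```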

For the purposes of GUI testing, the method works as follows. Each gene is essentially a list of random integer values of some fixed length. Each of these genes represents a path through the GUI. For example, for a given tree of widgets, the first value in the gene (each value is called an allele) would select the widget to operate on, and the following alleles would then fill in input to the widget, depending on the number of possible inputs to the widget (for example, a pull-down list box would have one input: the selected list value). The success of the genes is scored by a criterion that rewards the best ‘novice’ behavior.
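
The sketch below illustrates this encoding with an invented widget list and a deliberately crude stand-in for the ‘novice’ scoring criterion; neither is taken from the cited work. Such a decode/fitness pair could be plugged into the genetic-algorithm loop sketched earlier.

```python
# Decode a gene (list of integers) into a GUI interaction: the first allele
# selects a widget, the following alleles fill in that widget's inputs. The
# widget list and the 'novice-ness' score are invented for illustration.
WIDGETS = [
    {"name": "open_button",  "inputs": 0},
    {"name": "font_listbox", "inputs": 1},  # one input: the selected entry
    {"name": "find_dialog",  "inputs": 2},  # search text and direction
]

def decode(gene):
    widget = WIDGETS[gene[0] % len(WIDGETS)]
    inputs = [allele % 10 for allele in gene[1:1 + widget["inputs"]]]
    return widget["name"], inputs

def novice_fitness(gene):
    # Crude placeholder criterion: reward interactions that stray from the
    # 'expert' widget (index 0) and supply more inputs.
    widget_index = gene[0] % len(WIDGETS)
    _name, inputs = decode(gene)
    return widget_index + len(inputs)

# best_gene = evolve(novice_fitness)  # using the loop sketched earlier
```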

The system described in [7] to do this testing can be extended to any windowing system, but is described in terms of the X Window System. The X Window System provides functionality (via its network protocol) to dynamically send GUI input to and get GUI output from the program without directly using the GUI. For example, one can call XSendEvent() to simulate a click on a pull-down menu, and so forth. This allows researchers to automate the gene creation and testing, so that for any given application under test, a set of novice-user test cases can be created.
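
As an illustration of this kind of programmatic GUI driving, the sketch below sends synthetic pointer input to an X display using the python-xlib binding's XTEST extension (a common alternative to calling XSendEvent() directly); the coordinates are arbitrary and the snippet is not part of the cited system.

```python
# Sketch of sending synthetic input to an X display from a test harness,
# using the python-xlib binding's XTEST extension. The pointer coordinates
# are arbitrary and the snippet is not part of the system described above.
from Xlib import X, display
from Xlib.ext import xtest

d = display.Display()

def click_at(x, y, button=1):
    """Move the pointer to (x, y) and click the given mouse button."""
    xtest.fake_input(d, X.MotionNotify, x=x, y=y)
    xtest.fake_input(d, X.ButtonPress, button)
    xtest.fake_input(d, X.ButtonRelease, button)
    d.sync()  # flush the requests so the application actually receives them

# e.g. click where the application's pull-down menu is known to be drawn
click_at(40, 12)
```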

Event Flow Graphs

A recent breakthrough in the field of automated GUI testing is a graph model called the event-flow graph, which represents events and event interactions.[8] In much the same way as a control-flow graph represents all possible execution paths in a program, and a data-flow graph represents all possible definitions and uses of a memory location, the event-flow model represents all possible sequences of events that can be executed on the GUI. A GUI is decomposed into a hierarchy of modal dialogues; this hierarchy is represented as an integration tree, and each modal dialogue is represented as an event-flow graph that shows all possible event execution paths in the dialogue.
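
A minimal illustration of the idea, with invented events: the graph maps each event to the events that may follow it, and bounded-length paths through the graph are candidate test cases.

```python
# Tiny illustration of an event-flow graph: an edge e1 -> e2 means event e2
# may be executed immediately after event e1. Bounded-length paths through
# the graph are candidate test cases. The events are invented.
EVENT_FLOW = {
    "open_file_menu": ["select_open", "select_exit"],
    "select_open":    ["type_filename"],
    "type_filename":  ["press_ok", "press_cancel"],
    "press_ok":       [],
    "press_cancel":   ["open_file_menu"],
    "select_exit":    [],
}

def event_sequences(start, max_length):
    """Yield every event sequence of at most max_length events that starts
    with the given event and respects the event-flow edges."""
    def walk(path):
        yield path
        if len(path) < max_length:
            for nxt in EVENT_FLOW[path[-1]]:
                yield from walk(path + [nxt])
    yield from walk([start])

for sequence in event_sequences("open_file_menu", 3):
    print(sequence)
```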

Running the test cases

At first, the strategies were migrated and adapted from CLI testing strategies. A popular method used in the CLI environment is capture/playback. Capture/playback is a technique in which the system screen is “captured” as a bitmapped graphic at various times during system testing. This capturing allows the tester to “play back” the testing process and compare the screens at the output phase of the test with expected screens. This validation can be automated, since the screens will be identical if the case passes and different if the case fails.
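
The validation step can be pictured as a straightforward file comparison; the checkpoint file names below are hypothetical, and a real tool would capture the bitmaps itself.

```python
# Minimal sketch of capture/playback validation: each screen bitmap captured
# during a test run is compared byte-for-byte against a previously recorded
# reference bitmap. The file names are hypothetical.
import filecmp

CHECKPOINTS = [
    ("run/step1.bmp", "expected/step1.bmp"),
    ("run/step2.bmp", "expected/step2.bmp"),
]

def validate_run(checkpoints=CHECKPOINTS):
    """Return the list of checkpoints whose captured screen differs from the
    expected screen; an empty list means the test case passed."""
    return [(actual, expected)
            for actual, expected in checkpoints
            if not filecmp.cmp(actual, expected, shallow=False)]
```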

Capture/playback worked quite well in the CLI world, but there are significant problems when one tries to implement it on a GUI-based system.[9] The most obvious problem is that the screen in a GUI system may look different while the state of the underlying system is the same, making automated validation extremely difficult. This is because a GUI allows graphical objects to vary in appearance and placement on the screen. Fonts may be different, and window colors or sizes may vary, but the system output is basically the same. This would be obvious to a user, but not to an automated validation system.

To combat this and other problems, testers have gone ‘under the hood’ and collected GUI interaction data from the underlying windowing system.[10] By capturing the window ‘events’ into logs, the interactions with the system are recorded in a format that is decoupled from the appearance of the GUI: only the event streams are captured. Some filtering of the event streams is necessary, since the streams of events are usually very detailed and most events are not directly relevant to the problem. This approach can be made easier by using, for example, an MVC architecture and making the view (i.e. the GUI here) as simple as possible, while the model and the controller hold all the logic. Another approach is to use an HTML interface or a three-tier architecture, which also makes it possible to better separate the user interface from the rest of the application.
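
A sketch of the filtering step, with an invented event-record format: raw window events are reduced to the handful of types that actually describe user interactions.

```python
# Sketch of filtering a captured window-event log: raw logs contain many
# low-level events (pointer motion, redraws) that are irrelevant to a test,
# so only a whitelist of meaningful event types is kept. The record format
# is invented for illustration.
RELEVANT_TYPES = {"ButtonPress", "KeyPress", "FocusIn"}

raw_log = [
    {"type": "MotionNotify", "widget": "canvas"},
    {"type": "ButtonPress",  "widget": "file_menu"},
    {"type": "Expose",       "widget": "main_window"},
    {"type": "KeyPress",     "widget": "filename_entry", "key": "a"},
]

filtered = [event for event in raw_log if event["type"] in RELEVANT_TYPES]
# 'filtered' now describes what the user did, independent of how the GUI
# happened to be drawn.
```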

Another way to run tests on a GUI is to build a driver into the GUI so that commands or events can be sent to the software from another program.[7] This method of directly sending events to and receiving events from a system is highly desirable when testing, since the input and output testing can be fully automated and user error is eliminated.
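
One possible shape for such a driver is sketched below: the application registers named commands and listens on a local socket so that a separate test harness can invoke GUI actions directly. The port number and command names are invented for illustration.

```python
# Illustration of a driver built into the application: named commands are
# mapped to GUI actions and exposed over a local socket so another program
# can drive the software directly. Port and command names are invented.
import socketserver

COMMANDS = {}

def command(name):
    def register(func):
        COMMANDS[name] = func
        return func
    return register

@command("open_file_menu")
def open_file_menu():
    # In the real application this would call the same handler that a
    # click on the File menu would trigger.
    print("file menu opened")

class DriverHandler(socketserver.StreamRequestHandler):
    def handle(self):
        for line in self.rfile:
            name = line.decode().strip()
            if name in COMMANDS:
                COMMANDS[name]()
                self.wfile.write(b"ok\n")
            else:
                self.wfile.write(b"unknown command\n")

if __name__ == "__main__":
    # A test harness connects to localhost:9999 and sends one command per line.
    socketserver.TCPServer(("localhost", 9999), DriverHandler).serve_forever()
```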

References

#Atif M. Memon, M.E. Pollack and M.L. Soffa. Using a Goal-driven Approach to Generate Test Cases for GUIs.
#J.M. Clarke. Automated test generation from a behavioral model. In Proceedings of the Pacific Northwest Software Quality Conference. IEEE Press, May 1998.
#S. Esmelioglu and L. Apfelbaum. Automated test generation, execution and reporting. In Proceedings of the Pacific Northwest Software Quality Conference. IEEE Press, October 1997.
#A. Howe, A. von Mayrhauser and R.T. Mraz. Test case generation as an AI planning problem. Automated Software Engineering, 4:77-106, 1997.
#Atif M. Memon, Martha E. Pollack and Mary Lou Soffa. Hierarchical GUI test case generation using automated planning. IEEE Transactions on Software Engineering, 27(2):144-155, 2001. IEEE Press.
#J. Koehler, B. Nebel, J. Hoffman and Y. Dimopoulos. Extending planning graphs to an ADL subset. Lecture Notes in Computer Science, 1348:273, 1997.
#D.J. Kasik and H.G. George. Toward automatic generation of novice user test scripts. In M.J. Tauber, V. Bellotti, R. Jeffries, J.D. Mackinlay and J. Nielsen, editors, Proceedings of the Conference on Human Factors in Computing Systems: Common Ground, pages 244-251, New York, 13-18 Apr. 1996. ACM Press.
#Atif M. Memon. An event-flow model of GUI-based applications for testing. Software Testing, Verification and Reliability, 17(3):137-157, 2007. John Wiley and Sons. [http://www.cs.umd.edu/~atif/papers/MemonSTVR2007-abstract.html]
#L.R. Kepple. The black art of GUI testing. Dr. Dobb’s Journal of Software Tools, 19(2):40, Feb. 1994.
#M.L. Hammontree, J.J. Hendrickson and B.W. Hensley. Integrated data capture and analysis tools for research and testing on graphical user interfaces. In P. Bauersfeld, J. Bennett and G. Lynch, editors, Proceedings of the Conference on Human Factors in Computing Systems, pages 431-432, New York, May 1992. ACM Press.

External links

* Article [http://www.methodsandtools.com/archive/archive.php?id=37 GUI Testing Checklist]

