When looking at all the available testing tools [for automation] and test-management tools [for control], you can conclude that there is a lot of help for the automation and monitoring of testing, but not much that is really helpful in the ALM.
ALM and Automation.
One of the important parts of ALM is the connection between the different lifecycle tools. For example, design artifacts can / must work together with coding tools: they must give the developer a kind of skeleton and set boundaries within which they can start building the solution [the application diagram together with the Service Factory, for example].
Forrester describes ALM in this paper [The Changing Face Of Application Life-Cycle Management]:
ALM doesn’t support specific life-cycle activities; rather, it keeps them all in sync.
A development effort can still fail miserably even if analysts document business requirements
perfectly, architects build flawless models, developers write defect-free code, and testers execute thousands of tests. ALM ensures the coordination of these activities, which keeps practitioners’ efforts directed at delivering applications that meet business needs.
Traceability of relationships between artifacts.
More information about ALM can be found in the post "ALM Definitions". Also a good start to get comfortable with ALM is the presentation "ALM foundational concepts" by Clementino de Mendonça, Senior Development Consultant from Microsoft Services.
He also talks about automation in the slide "Why Automate ALM" and refers to the Forrester paper.
So, for the tools that need to support the application life-cycle, integration with other ALM tools is one of the main capabilities they must provide. Next to this capability come reporting and traceability, something most test suites already deliver. But without integration with other ALM tooling and without a good approach, testing stays a manual process; the tools just offer repeatability through automation.
Available Testing Tools.
I divided the tools into "automation" and "management" tools. The automation tools can be further divided into "GUI / functional" and "load and performance" tools, and the management tools into "bug-tracking" and "test-case management" tools.
Below is a small [incomplete] list of test tools currently out there.
GUI Test Tools / Functional test Automation
[a must-read, if you are planning to select a GUI test tool, is this paper [PDF] by Elisabeth Hendrickson]
- WinRunner by Mercury
- Astra Quick Test and QuickTest Pro by Mercury
- SilkTest (formerly QAPartner) by Segue Software
- QARun by Compuware
- e-Test Suite by Empirix
- QA Wizard by Seapine Software
- Eggplant by Redstone Software
Load Test / Performance Tools
- LoadRunner by Mercury
- SilkPerformer by Segue Software [now Borland]
- QALoad by Compuware
- e-Load by Empirix
- WebLoad by RadView Software
- WebFT by Radview
Bugtracking and Test Case management Tools
- Bugzilla by Mozilla [open source]
- TestDirector by Mercury
- Visual Studio Test Edition by Microsoft [see table below]
- SQA Suite by Rational
- TestComplete by AutomatedQA
And there are many, many more look-alike tools.
But most of them do the same thing. There are some differences in the details, but most of them solve the same problem: the GUI / load test tools help make testing repeatable, and the reporting / management tools help monitor the state of the project. Helpful in the ALM, but not that exciting.
New Testing Features in VSTS 2008, Test Types. [PPT]
Test Maturity Levels
The four IO levels of Microsoft's Infrastructure Optimization Model can also be used for testing. They describe maturity in terms of "the outcome of testing / reporting", not HOW to do it.
For the different levels I used the TPI® Model [Test Process Improvement].
The Test Process Improvement (TPI) model has been developed based on the practical knowledge and experiences of test process development. TPI offers a viewpoint in the maturity of the test processes within the organization. Based on this understanding the model helps to define gradual and controllable test process improvement steps.
When comparing these levels with the capabilities of test tools, you can conclude that the GUI, functional, and load tooling offers the "basic" level. They automate the tests; defects are found and reported. Those tools don't report on the progress of the tests for the overall system [is everything tested?] and give no recommendations about what should be tested and in what way. The user [tester / developer-tester] still has to figure out what to do. There is no integration and no automation of the high-level design / test-case creation process.
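To make the "basic" level concrete, here is a minimal sketch [in Python, with a made-up `add` function standing in for the system under test]: the scripted cases are repeatable and defects get reported, but the tool says nothing about whether the input domain is covered or what should be tested next.

```python
# "Basic"-level test automation: repeatable execution and defect
# reporting, nothing more. `add` is a hypothetical system under test.

def add(a, b):
    return a + b  # the system under test (stand-in)

def run_tests(cases):
    """Execute the scripted cases and collect any defects found."""
    defects = []
    for name, args, expected in cases:
        actual = add(*args)
        if actual != expected:
            defects.append((name, args, expected, actual))
    return defects

cases = [
    ("two positives", (2, 3), 5),
    ("with zero", (0, 7), 7),
    ("negatives", (-1, -1), -2),
]

defects = run_tests(cases)
print(f"{len(cases)} tests run, {len(defects)} defects found")
# The tool can tell you these three cases pass, but not whether
# everything is tested or which tests matter most.
```

This is exactly repeatability through automation: rerunning the script costs nothing, but deciding which cases belong in `cases` is still entirely manual.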
The test suites do a better job: they offer more reports and integration, so I put them at the "standardized" level. Also because the "advanced" level needs to do something with risk, and not only through documents or work items but through automation..!
These two quotes point precisely to the pain points of these kinds of test tools:
However, automated execution of tests does not address the problems of costly test development and uncertain coverage of the input domain.
Model-Based Testing in Practice [PDF]
How the software [test automation software] can be automated is a technologically interesting problem. But this can lose sight of whether the result meets the testing need.
"Seven Steps to Test Automation Success" by Bret Pettichord
I like this last one, because it points directly to a tricky aspect of test tools: they are made by developers [not by testers], and often they aren't focused on the problems that need to be solved.
Looking at the ALM Assessment in the "Quality & Test" section, you can pick out some interesting questions about what testing really needs.
Are test cases and designs created in line with the design?
Have the areas of greatest risk been identified and tests prioritized accordingly?
Just two, but two important ones: "in line with design" and "identify risks". Which actually means: did you think about what needs to be tested, and in what manner are you going to test it? When you put this into the IO maturity model, you get something like this [still working on this model].
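The "identify risks and prioritize tests accordingly" question can be sketched very simply. A minimal illustration [in Python, with entirely made-up areas and scores]: rate each area's failure likelihood and business impact, and test the riskiest areas first.

```python
# Risk-based test prioritization, sketched with invented numbers:
# risk = failure likelihood x business impact, both on a 1-5 scale.

areas = [
    # (area, likelihood, impact)  -- hypothetical assessment data
    ("payment processing", 4, 5),
    ("report layout",      2, 2),
    ("user login",         3, 5),
    ("help pages",         1, 1),
]

# Sort so the highest-risk area is tested first.
prioritized = sorted(areas, key=lambda a: a[1] * a[2], reverse=True)

for area, likelihood, impact in prioritized:
    print(f"{area}: risk {likelihood * impact}")
# payment processing comes out on top, help pages last.
```

The point is not the arithmetic but the discipline: the prioritization comes from an explicit risk assessment, not from whichever test happened to be scripted first.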
I like the term "monkey testing"; everybody immediately understands what it means ;-). Looking at the test tools, all of them offer only the basic level: no intelligence, not in line with the design, no defined quality, and no generation.
They don't stay in sync with other life-cycle activities [see the Forrester paper]. There is still a lot of hand-crafting and manual process.
Test Case Generation.
There are many papers on test case generation that focus on model-driven testing with UML.
For example, IBM:
Model-driven Testing is a new and promising approach for the automation of software testing. This approach can significantly reduce the most painstaking cycle of all software development efforts—testing. Testing currently comprises between 30% and 70% of all software development projects. This new methodology and toolset will enable software developers and testers to become far more productive and reduce the time-to-market, while maintaining high standards of software quality.
And I'm curious about the status of Spec Explorer from Microsoft Research.
Spec Explorer is a software development tool for advanced model-based specification and conformance testing. Spec Explorer can help software development teams detect errors in the design, specification and implementation of their systems. The tool is intended to be used by software testers, designers and implementers.
Testing is one of the costliest aspects of commercial software development. Model-based testing is a promising approach addressing these deficits. At Microsoft, model-based testing technology developed by the Foundations of Software Engineering group in Microsoft Research has been used since 2003. The second generation of this tool set, Spec Explorer, deployed in 2004, is now used on a daily basis by Microsoft product groups for testing operating system components, .NET framework components and other areas. This paper provides a comprehensive survey of the concepts of the tool and their foundations.
This is a list of papers about test case generation.
- Using UML Collaboration Diagrams for Static Checking and Test Generation [PDF]
Abstract. Software testing can only be formalized and quantified when a solid basis for test generation can be defined. Tests are commonly generated from program source code, graphical models of software (such as control flow graphs), and specifications/requirements. UML collaboration diagrams represent a significant opportunity for testing because they precisely describe how the functions the software provides are connected in a form that can be easily manipulated by automated means.
- Generating Test Sequences from UML Sequence Diagrams and State Diagrams [PDF].
Abstract: UML models offer a lot of information that should not be ignored in testing. By combining different UML components, different views of the program under test are used. The paper concentrates on a technique for generating test cases from a combination of UML sequence and state diagrams.
- Test Case Generation from Message Sequence Charts [PDF].
black-box and specific white-box testing for communication protocols and distributed systems. UML models provide scenario descriptions by sequence diagrams respectively MSCs. Thus, the combination of TTCN-3, as test description language, and UML by MSC to specify and automatically generate test cases has to be considered.
- Model-Driven Testing with UML 2.0
Abstract. The UML 2.0 Testing Profile provides support for UML 2.0 based model-driven testing. This paper introduces a methodology of how to use the profile in order to transform an existing UML system design model for tests.
Anyway, with Rosario, Team Architect [see the CTP 10 Nov '07 download] gets an extra [dynamic] view: the sequence diagram, beside the static application diagram and class diagrams. So it's going to be interesting to see whether it's possible to generate some "high-level" test cases from those diagrams, and I think it should be possible [and maybe even easier] to define / generate test cases for a Service Factory implementation.
So, maybe we can get to the advanced level with test case generation, Rosario, and different models / viewpoints...
More to come...