Thursday, December 01, 2011

Test Planning


Testing and Test Planning
Are you finding that you are shipping too many bugs with your releases? Do you run out of time and find yourself unable to adequately test your products before they need to ship? Could you have benefited from more thorough test planning?

It can be difficult to produce complete test plans and test cases, particularly when your schedule is tight. When we create these test plans for you, you realize the immediate benefit of these resources and the long-term benefit of a lasting tool you can use and reuse.

When it comes to test execution, more isn't always better. Let us help you test smarter - by using technology-savvy experienced testers to achieve a maximum return on your testing dollars.

As part of our comprehensive service package we give you the option of using QA Labs’ web-enabled relational database product, TIQS™, to provide an easy way to monitor defect status remotely via the Internet.

A summary report is delivered at the end of the contract period, which includes information such as test effort, test areas, areas not tested, and defects that were detected. All defects are delivered to you in your chosen format (Word, HTML, Excel, TAB, Access Table).

Test Automation

Improving the testability of your product will make testing more effective and efficient. Many of our clients need to run extensive test suites for each release cycle. Our support for test automation can cut the required time for regression testing dramatically. In other cases projects have components or subsystems that are difficult to test by traditional black-box testing methods. Sometimes these areas are the riskiest of your project.

We can reduce the risk and increase the testability of your product by shouldering some of your load and developing the custom tools that you need. We provide everything from test data automation suites and unit-testers to full-blown applications in C++ or Java.
If you own an off-the-shelf automation tool, we can create test scripts using that tool to add to your library. QA Labs can also create the framework to pull these automated scripts together to form a cohesive asset.

Quality Improvement Roadmap Series

2. Requirements
Most companies would benefit greatly from assigning the necessary Development and QA resources to completing their requirements to a much higher degree of formality, accuracy, and reusability. Improving the requirements allows for better design, implementation, test planning, and test execution. Every defect not corrected in the requirements phase is estimated to cost 15 to 100 times as much time and resources ($) to correct once it reaches the testing phase. Once the requirements have been rewritten in a complete and correct manner, extending the set of requirements for future versions of the product should be fairly straightforward.

One good method of analyzing requirements is NASA’s free Automated Requirements Measurement (ARM)[1] tool. This tool examines requirement documents for specific problem areas, such as "TBD"s and vague or ambiguous wording, and counts the number of requirements (the number of "shall"s) against the number of lines of text (i.e.: is the document overly verbose?). Refer to a sample output in Appendix A.
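To give a flavour of the kind of checks ARM performs, the short sketch below counts "shall"s, "TBD"s, and a few weak phrases in a plain-text requirements document. This is only an illustration of the idea, not the ARM tool itself; the file name and word lists are assumptions.

import re

# Hypothetical plain-text export of the requirements document.
TEXT = open("requirements.txt", encoding="utf-8").read()

# Illustrative list of weak or ambiguous phrases to flag.
WEAK_PHRASES = ["as appropriate", "if possible", "and/or", "user friendly", "etc."]

lines = TEXT.splitlines()
shall_count = len(re.findall(r"\bshall\b", TEXT, flags=re.IGNORECASE))
tbd_count = len(re.findall(r"\bTBD\b", TEXT))
weak_count = sum(TEXT.lower().count(phrase) for phrase in WEAK_PHRASES)

print(f"Lines of text:        {len(lines)}")
print(f"Requirements (shall): {shall_count}")
print(f"TBDs:                 {tbd_count}")
print(f"Weak phrases:         {weak_count}")
# A high lines-per-requirement ratio suggests the document may be overly verbose.
if shall_count:
    print(f"Lines per requirement: {len(lines) / shall_count:.1f}")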
Other requirement tools are available, most of which focus on requirements management. A simple database (such as one created in a database application like MS Access or in a test management tool like Mercury’s Test Director) would probably be adequate for most situations. The requirements could then be stored in such a database and exported into a Word table to create a requirements document (such as would be sent to a client). The database would allow Project Management to track changes to the requirements and other related metrics.

Implementation Issues
Finding resources to complete this task may be difficult.
QA should peer review the requirements as they are being written. (Product) Management, Development, and QA should be in agreement on all requirements before the project moves to the next part of the development lifecycle.
Avoid requirement management tools that do not allow for easy import or export of data, as this reduces the reusability of the data.

3. Modular Delivery of Components
Each component should be testable on its own. This may require harnesses/stubs written to allow for stand-alone testing. The most critical modules should be delivered first to allow for the early identification and removal of any serious (Priority = High; Severity = Inoperable or Major) defects discovered by QA.
First we need to identify the modules themselves. Then we need to determine the criticality of each module, based on various risk factors (the number of requirements, the complexity of those requirements, whether new technologies are involved in the module, the number of corrected defects in the module, etc.). Lastly, we need to look at each module in relation to the others, to take into account any relationships or interdependencies between the modules that may affect a delivery schedule. Once we rank the modules in terms of effort, criticality, and interdependencies, we can schedule their development and resource allocation accordingly.
Module Criteria:
  • Group the project requirements into logical areas and sub-areas where possible
  • Once the high-level design has been completed:
    • Will there be/is there a single executable that contains much of the requirements of one area?
    • Will there be/are there several executables that must be operated together in order to be tested (separately or together)?
    • Will the code be grouped in any particular fashion (i.e.: packages, sub-projects)?
  • Once the first draft of the Project Plan has been completed:
    • Can the deliverables be grouped together in any particular fashion?
    • Can the deliverables be delivered separately or must several of them be delivered simultaneously?
The high-level design itself and the answers to these questions will help identify modules. Project personnel should immediately begin the process of defining the modules contained in each of the projects, and then estimate the risk factors associated with each of those modules.

Proposed List of Risk Factors:
  • The number of requirements (the number of "shall"s in the requirement document)
  • The complexity of the requirements (decision nodes, options, and list items are a good indication of requirement complexity)
  • The use of new technologies (technologies new to the industry or new to the team, such as C++ programmers writing an application in Java, a language that is new to the project members and relatively immature)
  • The type of work (i.e.: taking a project team that has worked on mostly graphics filters in the past and assigning them to create a client-server application; in these sorts of cases, the project members will have to learn new techniques for design, implementation, and testing)
  • If a high-level design has been created, the estimated testability (how much UI there is, and how easily QA will be able to exercise all of the logical paths and all of the functionality)
  • The number of defects corrected in the module since the last calculation (this is a good indication of the flux, or the degree of change, of the module's code base)
  • The importance of the module relative to the remainder of the project (this may be determined by Product Management)

Once the modules have been ranked according to a list of risk factors such as the one above (refer to the matrix in Appendix B), we should be able to determine which modules should be scheduled for early delivery. Modules with little or no UI should be designed with an accompanying test harness (to allow for complete verification and validation of the module prior to integration). Critical modules that do not lend themselves to early delivery should be mocked up (i.e.: a "stubbed interface" should be created) for use by other modules, or they may be candidates for further decomposition into sub-modules with different delivery timelines.
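To make the ranking concrete, a weighted score over factors like those above can be computed in a few lines. The sketch below is illustrative only; the factor names, weights, and module names are assumptions, and the weighting should be tuned over time as described under Implementation Issues below.

# Illustrative risk factors and weights; tune these with project experience.
WEIGHTS = {
    "requirement_count": 1.0,
    "requirement_complexity": 2.0,
    "new_technology": 3.0,
    "type_of_work": 2.0,
    "testability": 1.5,
    "recent_defect_corrections": 1.0,
    "business_importance": 2.5,
}

# Each module is scored 1 (low risk) to 5 (high risk) on every factor.
modules = {
    "Scheduler": {"requirement_count": 4, "requirement_complexity": 5, "new_technology": 2,
                  "type_of_work": 3, "testability": 4, "recent_defect_corrections": 3,
                  "business_importance": 5},
    "Phone Entry UI": {"requirement_count": 2, "requirement_complexity": 1, "new_technology": 1,
                       "type_of_work": 1, "testability": 2, "recent_defect_corrections": 1,
                       "business_importance": 2},
}

def risk_score(scores):
    return sum(WEIGHTS[factor] * value for factor, value in scores.items())

# Rank modules from highest to lowest risk to drive the delivery schedule.
for name, scores in sorted(modules.items(), key=lambda kv: risk_score(kv[1]), reverse=True):
    print(f"{name}: {risk_score(scores):.1f}")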
Implementation Issues
Learning how to rate the risk factors will take time. Start with what you know, and then estimate as best you can. The team’s estimation skills in this area will improve over time. Statistical analysis and experience will help find the best weighting for all of the risk factors.
Assigning Development resources to create the harnesses/stubs used by QA can also be an issue. One method of allocating resources for this type of task is to make the developers assigned to a module that requires a harness responsible for creating that harness.

4. Test Planning
A test plan is a document that collects and organizes test cases in a form that can be presented to project personnel, project management, and external clients. A solid, well-written test plan should allow a new tester to step in and easily execute the test cases by simply following the test steps. Writing a test plan early in the project lifecycle, and having it peer reviewed by Development, generally reduces the workload later in the project. This allows QA to quickly and unambiguously complete the majority of the testing required, which provides more time for "Ad Hoc", "Real World", and User Scenario testing of the product.
Once the modules have been defined, QA should create a test plan for each project, with well-defined groups of test cases that correspond to the modules of the project. Using a testcase database (refer to the Test Case Management section) to export the appropriate test cases for each module on a per-project basis will make this task much simpler.
A test plan should include:
  • An overview of the project
  • Any assumptions made in the course of creating the test cases
  • A description of how Build Verification Test cases (BVTs) are distinguished
  • Test cases required for testing the build, core, user interface, error handling, system, load, stress and performance functionality
  • Expected results for each testcase
  • Prioritization of test cases, if any
A test plan may include:
  • Unique testcase IDs for each testcase (these may be generated by a testcase database)
  • A listing of requirement IDs that correspond to the requirement being tested by the testcase

In general, the project requirements can be used as the starting point for the creation of a project’s test plan. The majority of the test cases can be created using the requirements, with the remainder created incrementally as more information is received from Development during the design phase. As the module begins to be testable, the test plan itself will be tested, and any incomplete or under-tested areas can, at that point, have more test cases created.

A test plan should NOT include descriptions of methodologies, practices, standards, defect reporting methods, corrective action methods, tools or quality requirements (refer to the section below outlining Quality Assurance Plans). Refer to the sample test plan outline in Appendix C.

5. General Testing Strategy for Individual Modules
All effort should be made to complete as much testing of the basic functionality of each module as early as possible. Once the basic functionality has been established as acceptable, QA/Testing can then move on to perform stress (load, file sizes, show sizes, etc.), error handling ("bad user"), performance (timing), and system (operating systems, languages/locales, hardware, etc.) testing of each module. Once this is completed, with no serious defects outstanding, the source code for the module should be "frozen" (see the Code Freezes section below).

The basic strategy when testing individual modules is to begin by verifying that the module itself is testable. A small sub-set of the test cases for the module should be performed with each new build, to quickly and efficiently determine that in-depth testing of the module is possible. This testcase sub-set is commonly referred to as a Build Verification Test (BVT). Once the module is determined to be testable, QA/Testing then proceeds to test as much of the core functionality as possible. As much of the input and output interfaces as possible are tested (either via the UI or via the use of a test harness or stub), and the user interface is fully exercised. All defects discovered are reported, and as many as possible are corrected within a reasonable timeframe (based on the project schedule).
Once the functionality has been fully tested and all serious defects corrected, QA/Testing can then go on to perform stress testing and negative (error handling, or "bad user") testing. This allows QA/Testing to identify serious problem areas early in the project lifecycle, when they are simpler to detect, isolate, and correct.

After stress and negative testing have been completed and all serious defects corrected, QA/Testing will move on to system and performance testing. Problems with hardware, operating systems, cards, drivers, and other peripherals should be identified and corrected as early in the project as possible, and modular system testing will allow QA/Testing to do just that. Preliminary performance testing should be broad and shallow, that is, it should focus on the performance of the basic functions. If problems are detected in the performance of a particular module, then more comprehensive performance testing should be performed.

Once the module shows acceptable performance, load and stress testcase results with no serious defects remaining, a regression test pass should be performed. In this testing phase, all defects identified in the module that have been closed (whether closed as fixed, not reproducible or won't fix) should be retested to ensure that all closed issues are still closed, and that the corrective actions (if any) have not caused collateral defects elsewhere in the module.
Implementation Issues
QA/Testing needs incremental and scheduled delivery of each module from Development. QA/Testing would also benefit from having complete (robust) testcases that are accessible, usable, easily added to, and that can display the results of testing.

6. General Testing Strategy for Integrated Modules
QA needs to be able to integrate the "accepted" modules incrementally. If the product follows a somewhat linear workflow, this should be easy to implement for the next project lifecycle. The kind of testing that will yield the best results at this point is something akin to User Scenarios (Use Cases), where the workflow is tested, as opposed to the unit functionality of each distinct module. One caveat here is whether designed/planned module interfaces have undergone any changes prior to integration. This can be problematic if that information is not communicated to all users and implementers of the interfaces, and the interfaces are not kept consistent between modules (i.e.: module APIs should be implemented as per their design; any deviations from the expected APIs could cause serious problems).

The basic strategy when testing integrated modules is to begin by testing as much of the complete workflow as possible. The testcases for this type of testing will take the form of User Scenarios/Use Cases, where the entire system is treated as one large black box. Linking together several testcases creates primitive User Scenarios as a starting point, and over time, more sophisticated User Scenarios can be developed. All defects discovered are reported, and as many as possible are corrected within a reasonable timeframe (based on the project schedule).

Once the basic workflow has been fully tested and all serious defects corrected, QA can begin system testing of the integrated product, completing testing on all hardware and software configurations, as well as collecting performance data. Corrections to serious defects in this phase of testing are usually of moderate to high risk, therefore risk analysis should be performed by Development and QA prior to the implementation of any correction. This sort of risk analysis must include all stakeholders, including Support, Product Management, and other domain experts to allow for an accurate assessment of the impact of correcting or of not correcting the defect.

Next, QA can go on to perform stress testing and negative (error handling, or "bad user") testing. This will involve creating extremely large and complex files ("Real World" shows), creating large numbers of files, passing them through a complex scheduling routine, and transmitting the information correctly to a large number of players. This type of testing should be followed up by negative or "bad user" testing, where the tester seeks to abuse the functionality of the product in an effort to create odd program states. The majority of serious defects discovered in this last phase will require risk analysis, as they will likely have a large impact on the entire product. Please note that system testing is performed before stress and performance testing when testing integrated modules. This is a particularly useful approach for testing distributed products, where different modules are often located on physically separate machines.

Lastly, a complete regression test pass should be performed. In this testing phase, all defects identified in the integrated product should be retested to ensure that all closed issues are still closed, and that the corrective actions (if any) have not caused collateral defects elsewhere in the product.
Implementation Issues
One issue is training QA staff on how to design and document effective User Scenarios. Another issue is scheduling time for the creation of the User Scenarios. Since the majority of test planning is completed early in the project lifecycle but the testing does not occur until much later in the project lifecycle, there is a risk that the User Scenarios will not be developed until during the test phase of the project. This is precisely when QA resources have little or no additional bandwidth to take on this task.

7. Reusability of Data
Increasing the amount of data QA can reuse during the test/fix/retest cycle is one way to reduce the time required during the planning and testing stages. One area that lends itself to increased reusability is project documentation. Requirements should be stored in such a manner that the information is easy to access, simple to update (the data does not change often, but once a change is approved it must be easy to make), and reusable by other departments. For example, QA should be able to grab a table full of requirements from a requirements document, export the table to a tab-delimited file, import the data into Excel or Access (or any database), and use that data to help generate and track functional testcases. Going one step further, using two relational databases, one containing the requirements and another containing the related testcases, increases the reusability of the information for both QA and Development.

The requirements, if written in a certain manner or following a certain structure, can be reused by QA to aid in testcase generation. For example, the following requirement, written as such, could be reused quite easily to generate testcases:
Requirement ID:     XXX-1234
Version:            1.1a
Date-Phone Format:  For all phone number entry fields the user will have a pre-formatted text box to help the user in correctly entering phone number information.
This table could be partially reused in a Test Plan, as follows (copy, paste, add one column, modify the headers, add the testcase steps, clarify the expected results):
Requirement ID:      XXX-1234
Version:             1.1a
Date-Phone Format:   [Enter test case steps here]
  1. Launch X
  2. Bring up the Phone entry panel by clicking on the "phone" icon
  3. Enter a valid telephone number, e.g. "1234567", in the phone number field
  4. Click "OK" to accept the phone number.
Expected Result(s):  [Edit the following to yield the expected results for the associated test case]
  The text field should reformat the data to read "123 4567".
This is far simpler and will save a considerable amount of time compared to creating a 100-page test plan from scratch. The test plan will also have the information structured in a manner similar to the requirements, which generally increases its readability. Another way to achieve significant savings in time spent generating testcases is to use Excel. Simply export the Word requirements table to a CSV (comma separated) or TAB (tab delimited) text file, then open the file in Excel. Use the fill and sort functions in Excel to create the remainder of the testcases and their expected values, save the Excel file as a CSV or TAB file, then open it in Word and convert the text to a table (Table > Convert > Text to Table). Select the same delimiter as used when saving the file in Excel, and the testcase table should be created correctly.
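If you prefer to script the transformation rather than do it by hand in Excel, a few lines of code can read the exported requirements CSV and write out a testcase skeleton with the extra columns. The file and column names below are assumptions based on the tables above.

import csv

# Assumed export of the Word requirements table: Requirement ID, Version, requirement text.
with open("requirements.csv", newline="", encoding="utf-8") as src, \
     open("testcase_skeleton.csv", "w", newline="", encoding="utf-8") as dst:
    reader = csv.DictReader(src)
    writer = csv.DictWriter(dst, fieldnames=["Requirement ID", "Version",
                                             "Test Case Steps", "Expected Result(s)"])
    writer.writeheader()
    for row in reader:
        writer.writerow({
            "Requirement ID": row["Requirement ID"],
            "Version": row["Version"],
            # Placeholders to be filled in by the test planner, as in the table above.
            "Test Case Steps": "[Enter test case steps here]",
            "Expected Result(s)": "[Edit to yield the expected results]",
        })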

During modular testing, much of the output of many of the modules can be reused for future re-testing of the module, or as input for the following module (the next module in the product workflow). Retention of these testing assets will, over time, result in the creation of a large test suite that can be reused for future versions of the product. During final performance testing, all data should be retained for use as a "baseline" for future versions of the product. This can be done in an Excel spreadsheet, which allows for easy graphical representation of the data.
After the release of the product, the closed defects should be analyzed for reusability as testcases for the next version of the product. Any defect that is not directly associated with a previously existing testcase can be seen as a new testcase, one that wasn’t included in the original planning, and should be added as a testcase for future versions.
Implementation Issues
Companies need to avoid tools that do not support import and/or export of data, as this reduces the reusability of data and can force them to be "locked" into one particular tool or data format. Changing to another data format requires changing tools, which can be costly. Custom databases that are scalable, modifiable and provide import and export capabilities are more than sufficient.

8. Codebase Stability (a.k.a. "Daily/Weekly Builds")
QA and/or Development should be producing incremental, scheduled builds as soon as possible in the implementation phase of the project lifecycle. This allows QA and Development to track when and where a defect was introduced into the source code.

Builds should be done on a daily or otherwise scheduled basis, while still allowing for "on-demand" builds. Moving to a daily build situation allows QA to test very incremental changes, particularly defect corrections, very quickly, as there will not be many other changes made to the codebase between successive builds (i.e.: one day). Doing daily builds also allows Development to quickly isolate the introduction point of defects (determine which build introduced the defect, then analyze the code changes that contributed to the defect itself).

On-demand builds should be done only if there is a clear objective, which must be communicated to all parties. The easiest manner of doing this would be the creation of a "Build Request" form. This form would be filled out well before beginning each build, and would contain:
  • Proposed Date of Build
  • Proposed Build Number
  • Build Objective: why is this build being done, and for which product?
  • Code Modifications: what areas of the codebase, what modules will be modified in this build?
  • Risk Areas: where are the risky areas of the codebase, the risky modules?
  • (Re)Testing Required: a set of checkboxes for each project assigned to help QA focus their testing efforts on each product appropriately
These requests, after their initial creation, would then be "owned" by the creator of the request. The request should then be e-mailed to all affected parties (project personnel on all three projects), which allows them a chance to add any information specific to their project. The build cannot be done unless all parties "sign off" on the request. The completed Build Request forms should be retained in VSS for future reference and for use in the project Post Mortem. This form could be something as simple as a Word template, which could then evolve into a more sophisticated build tracking system, perhaps via a form on an Intranet that puts the entered data into an Access database. As the process becomes more robust, the form could be used as a means of collecting metrics such as lines of code changed in each build, the number of defects addressed in each build, etc. Refer to Appendix E for an explanation of the build-from-VSS workflow.
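Even before any Intranet form exists, the request can be represented as a simple data structure so it can be validated and later mined for metrics. The sketch below simply mirrors the fields listed above; the names and the sign-off check are illustrative assumptions.

from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class BuildRequest:
    # Fields mirror the Build Request form described above.
    proposed_date: date
    proposed_build_number: str
    objective: str                    # why is this build being done, and for which product?
    code_modifications: List[str]     # areas of the codebase / modules modified in this build
    risk_areas: List[str]             # risky areas of the codebase, risky modules
    retesting_required: List[str]     # products/projects QA should focus testing on
    owner: str = ""                   # the creator of the request "owns" it
    sign_offs: List[str] = field(default_factory=list)

    def approved_by_all(self, affected_parties: List[str]) -> bool:
        # The build cannot be done unless all affected parties have signed off.
        return all(party in self.sign_offs for party in affected_parties)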

Automation of the build procedure is one way to reduce the amount of resources spent on performing regular and on-demand builds. The Source Safe archive can be easily accessed via the command line, which offers a simple and effective option: scripting. Using scripts, a company could:
  • Set a network time to start the script
  • Launch SS
  • Check-out, Modify and Check-in any files that display build number information (i.e.: increment the build number)
  • Place a Label on the appropriate project in SS
  • Perform a recursive Get on that label
  • Launch the compiler
  • Compile the source files for each project
  • Create the executables and associated files (library files, DLLs, etc)
  • Create the Installable files
  • Copy the Installable files and the binary files (executables and associated files) to the Distribution server
  • Archive and compress the uncompiled source code
  • Copy the archived source code to a "back-up" server or other location for retention.
This could then be run every morning at a very early time, when it is likely that no one will have files that are critical to the project checked out. Alternatively, a staggered build schedule devised on a per module basis, one which is responsive to changes in the overall project and individual module statuses, would help alleviate some of the difficulties encountered by QA.
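A rough sketch of such a script is shown below, driving the ss.exe and compiler command lines from Python. Every command string here is a placeholder (switches vary between VSS and compiler versions), so treat this as an outline of the steps rather than a working build system.

import datetime
import shutil
import subprocess

# Scheduled to run early each morning, e.g. via the Windows Task Scheduler.
BUILD_NUMBER = datetime.date.today().strftime("%Y%m%d")
LABEL = f"Build_{BUILD_NUMBER}"

def run(cmd):
    # Run one command line and stop the build on any failure.
    print(">", cmd)
    subprocess.run(cmd, shell=True, check=True)

# 1. Label the project in SourceSafe and get the labelled sources.
#    (Placeholder switches; consult the ss.exe documentation for your version.)
run(f'ss Label "$/MyProject" -L"{LABEL}" -C"Automated nightly build"')
run(f'ss Get "$/MyProject" -R -VL"{LABEL}"')
# (Checking out and incrementing the files that display the build number is omitted here.)

# 2. Compile the source and create the installable files (placeholder commands).
run('msdev MyProject.dsw /MAKE "MyProject - Win32 Release"')
run("makeinstall.bat")

# 3. Copy binaries and installables to the distribution server and archive the source.
shutil.copytree("Release", rf"\\distserver\builds\{LABEL}")
shutil.make_archive(rf"\\backupserver\source\{LABEL}", "zip", root_dir="src")

print(f"Build {LABEL} complete.")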
Implementation Issues
There is often no requirement that the source code must compile and be as testable as possible. Consequently, QA may be "forced" to continue testing an older build of the product rather than the latest, merely because the newer build is not functionally testable (i.e.: a "bad build"). The amount of time required to determine whether a build is testable can be quite lengthy. This may be reduced through scheduled delivery of builds combined with automation of Build Verification Testcases (BVTs).
Automating the build requires some domain knowledge of scripting, network transactions, SS command line interfaces, VB command line interfaces, and archival and compression techniques. This knowledge may exist in-house, but there may not be adequate resources to perform this work in parallel with the current development cycle. This may be a good candidate for external (contract) work. All issues with third-party products will need resolution before the implementation of any automation can begin.
Designing a usable Build Request form will also take some effort.

9. Automated Testing
Product companies that have scheduled incremental releases can benefit enormously from adding Automation to their testing arsenal. The ability to reduce the amount of manual testing and re-testing is appealing, yet must be approached with some caution -- it is not the "silver bullet" (magic solution) that it appears to be at first glance.

The fatal flaws in implementing automation are trying to automate too much at once and trying to automate indiscriminately. Although automation can be a time and effort saving mechanism, if it is poorly or partially implemented its usability (and thus its time-saving properties) decreases. Automating modules of the project that are subject to frequent re-designs or re-implementations will require constant updating and re-automating.

Ideally, we want to start with a small set of automated testcases, and with each release, add to this set. Identifying a core set of testcases that should pass with every build (Build Verification Testcases, or BVTs) is the logical first step. At that point, we should automate those BVTs. The automated BVTs could be performed as part of the automated build process, while still being available for use for "on-demand" builds.

After automation of BVTs, the automation of the UI testcases is the next step. Since the User Interface of an application is generally one of the first areas to be "frozen" or less subject to design changes after initial implementation, the automation of the UI testing is one way to reduce manual testing and increase efficiency. UI testing involves using the mouse and the keyboard to navigate to all panes and windows, testing all keyboard mnemonics and accelerators, changing window properties (size, position, minimize, maximize, close). Once much of the UI testing is automated, any further automation should target those areas, components and modules that have a high degree of commonality between them.

Many third party tools exist to assist with and manage automated testing. A good "capture-playback" tool that provides scripting capabilities and a command-line interface will more than likely suffice for immediate to medium term needs, provided it can gracefully recover and continue testing after a serious failure. Other tools offer more functionality, including integration with IDEs and version control software, but in general these tools are quite expensive, so some analysis of the requirements of a testing tool should be done prior to purchasing. Some care must be taken to ensure that, if an off-the-shelf tool is chosen for future use, the project personnel, particularly QA personnel, have adequate training in the use of the tool.
Implementation Issues
Third party tools can be very expensive, and often offer limited functionality, particularly if the users are not sufficiently trained in the use of the tool.
Creating automation scripts that are maintainable and scalable is critical. QA must be able to easily modify the automation scripts to allow for variations from release to release, and ensure that all automation scripts are well documented for future users.

10. Test Case Management
Management of testcases will allow QA to consistently test the product in a reproducible manner, while generating useful metrics that can be used for tracking project test coverage and results or for predicting emerging trends. One means of reducing duplicate testcases (and thus duplicate effort!) is the use of a testcase database.

A simple database in MS Access should be created for use in the short and medium term. Each record should contain the following information:
  • Title: a brief descriptive title
  • Testcase Number: a unique number that can be used for identification of the testcase in other records (defects, test reports)
  • Description: a brief one-line description of what the testcase tests
  • Reproduction Steps: a listing of precise steps that test one feature in a particular fashion
  • Module(s): this kind of categorization allows QA to focus their efforts on specific modules of the product, which is particularly useful for testing defect corrections for collateral damage
  • Version(s): for which versions of the product is this testcase applicable? As products evolve, certain testcases will no longer be relevant or useful
  • Priority: prioritizing the testcases allows QA to better focus their efforts when testing time is limited
  • Type: is this a BVT testcase? A functional testcase? Performance? Error? Stress? Regression?
  • Associated Requirement(s): a listing of requirements that this testcase tests
Once the database has been created, QA should begin populating it with functional testcases, then focus on identifying BVT testcases from that set. The database and the record form must be extremely accessible and usable; otherwise they will not be used consistently. One key piece of functionality that will enhance usability of the testcase database is to ensure that users can easily import data from other sources to populate the database quickly and efficiently.
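A minimal sketch of such a testcase record layout, using sqlite3 here purely as a stand-in for an Access database (field names and types are illustrative):

import sqlite3

conn = sqlite3.connect("testcases.db")   # stand-in for the Access database
conn.executescript("""
CREATE TABLE IF NOT EXISTS testcase (
    testcase_number    INTEGER PRIMARY KEY,  -- unique ID referenced by defects and reports
    title              TEXT NOT NULL,        -- brief descriptive title
    description        TEXT,                 -- brief one-line description
    reproduction_steps TEXT,                 -- precise steps that test one feature
    modules            TEXT,                 -- module(s) the testcase covers
    versions           TEXT,                 -- product versions the testcase applies to
    priority           TEXT,                 -- High / Medium / Low
    type               TEXT,                 -- BVT, Functional, Performance, Error, Stress, Regression
    requirements       TEXT                  -- associated requirement ID(s)
);
""")
conn.commit()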

Once populated to an acceptable degree, QA can record the results of their test passes in the database, including the date each testcase was performed, the build it was performed on, the result (pass or fail), and the IDs of any defects recorded against that testcase. This will allow queries to be run against the database records to determine such metrics as: % testcases passed; % testcases failed; % testcases of High, Medium, and Low priority; % testcases that overlap between multiple projects; and so forth. QA will also be able to generate reports that show, in varying degrees of detail, the amount of testing completed and the results of that testing, including associated defect IDs. This information can be valuable internally, as a means to collect quantitative data that can be analyzed and used to drive process improvement, as well as externally, as a means of demonstrating to a client precisely what tests were performed to determine the quality of the product (this can be extremely important for mission-critical/life-critical products).
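A results table and a couple of the metrics queries mentioned above could look like the following sketch (again using sqlite3 as a stand-in; adapt the schema and queries to whatever database is actually used):

import sqlite3

conn = sqlite3.connect("testcases.db")
conn.executescript("""
CREATE TABLE IF NOT EXISTS test_result (
    testcase_number INTEGER,   -- references testcase.testcase_number
    build           TEXT,      -- build the testcase was run against
    run_date        TEXT,
    result          TEXT,      -- 'Pass' or 'Fail'
    defect_ids      TEXT       -- defects recorded against this run, if any
);
""")

# Percentage of testcase runs passing and failing.
for row in conn.execute("""
    SELECT result,
           ROUND(100.0 * COUNT(*) / (SELECT COUNT(*) FROM test_result), 1) AS pct
    FROM test_result
    GROUP BY result
"""):
    print(row)

# Breakdown of testcases by priority.
for row in conn.execute("SELECT priority, COUNT(*) FROM testcase GROUP BY priority"):
    print(row)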

11. Quality Assurance Plans
A Quality Assurance Plan (sometimes referred to as simply a Quality Plan or a QA Plan) lists in detail what the QA team’s responsibilities will be for the project. This document also includes what tasks the QA team will not perform and why. The main reason for writing a QA Plan is that it clearly outlines the boundaries of QA’s responsibilities on the project to the rest of the project personnel, including any external clients, sub-contractors, and co-contractors.

QA should write a QA Plan for each project while the requirements are being written. Here is a listing of the first-level contents of the IEEE Quality Assurance Plan:
    1. Purpose
    2. References
    3. Management
    4. Documentation
    5. Standards, Practices, Conventions and Metrics
    6. Reviews and Audits
    7. Test
    8. Problem Reporting and Corrective Action
    9. Tools, Techniques and Methodologies
    10. Code Control
    11. Media Control
    12. Supplier Control
    13. Records Collection, Maintenance and Retention
    14. Training
    15. Risk Management
A QA Plan that covers the key items of the list above (modified where appropriate) should be written for each project. A sample outline, with a brief description of each item, is included in Appendix F. A copy of the QA Plan should be circulated to all stakeholders, including Development (Project Lead), Product Management, external clients, Support, and any sub- or co-contractors. All parties should sign off on the QA Plan once any issues have been resolved. QA should control the document; changes to the QA Plan should not be commonplace, and should not be made without agreement from all stakeholders.
Implementation Issues
Lack of training and technical expertise will make this a difficult document to create. However, having an incomplete plan is probably of greater benefit than having no plan, particularly when dealing with external parties, co-, or sub-contractors.

12. Incremental Testing combined with Slowed Feature Addition (a.k.a. "Negative Feedback" Development)
Logic dictates that fixing a problem in a simple system should be easier than fixing the same problem in a complex system. When serious defects are discovered in early testing, these defects should be addressed prior to the completion and integration of new code/functionality. This requires that Development be able to pause incremental development of new code and return to correct any serious defects found in older code, while QA must be aware of incomplete functionality and avoid testing those areas of a module currently under initial development.

In the next project lifecycle, the modules that have been identified as both complex and critical should adopt a negative feedback development approach. After the implementation of any new functionality, that functionality should be tested and all serious (Priority = High; Severity = Inoperable or Major) defects relating to that functionality corrected prior to the implementation of any additional functionality to that module, and in particular, any additions to the defective functionality. Metrics can then be collected at the end of the project lifecycle and the rates and distributions of defects in the modules that used the negative feedback development approach can be compared against those that did not (i.e.: what is the distribution of serious defects? do more of them occur in the modules that used a negative feedback development approach or in those that did not?).
Implementation Issues
As the codebase increases in size from version to version, this becomes a more difficult task.

13. Code Freezes
As a module becomes feature complete, and once a module has been validated and verified against the requirements, the source code should not change, except in the case where a serious defect is discovered. In that case, the defect may be corrected once analyses of possible solutions and their associated risks have been completed and all parties (Product Management, Development and QA) are in agreement that the defect should be corrected.

In conjunction with modular delivery and testing, once a module has completed modular testing with no serious defects remaining unresolved, the module should enter a quasi code freeze, or a "code slush". At this point, no new functionality should be added to the module codebase, so that integrated testing can begin. The module codebase at this point is determined to be "functionally complete". Integrated testing will uncover more defects in the "slushy" module, and once they have been resolved in some manner, the module codebase should be frozen. At this point, no further changes to the module codebase should occur, with the exception of correction of serious defects. As each module becomes frozen, QA can perform full regression test passes on the frozen module. Once no more corrections are required, the module codebase is then determined to be "code complete". Each frozen module will no longer require modular (functional) testing, and only those "slushy" modules that still have outstanding serious defects will require modular testing. At the point when all modules are frozen, QA can begin a regression test pass on the frozen (integrated) product. Once no further corrections are required for the integrated product, the product is then determined to be "code complete" and the product is ready for release.
Implementation Issues
QA should have knowledge of the Source Safe Administrator password and have the authority to remove check-in/out/add/delete permissions on modules that are ready to be frozen. If this is not feasible/palatable, then QA should have the authority to direct the SS Administrator to remove the related permissions.
Implementation of a Build Request system would allow QA to preview planned changes to modules that may be in a frozen state, and allow time for a response to the Build Request.

14. Improved Defect Reporting and Tracking
Defect reports have many uses: to record defects and their solutions, to provide quantitative data for post-project analysis and project trend predictions, to increase the effectiveness of regression testing, and to simplify the investigation of defects for Development. QA must begin to add more detail to the defect reports to allow for their use and reuse in the manners described above.

The "Description" field should always include a complete list of "repro steps" which, if followed, will allow any person to reproduce the defect. These repro steps are nearly identical to the test case steps listed for each test case and are critical when performing a regression test pass; without them, QA cannot verify that they are testing the same functionality that contained the corrected defect. A template that is automatically generated for the "Description" field of the defect form would help QA to fill out the repro steps for each defect.
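Such a template does not need to be elaborate. A minimal sketch, here just a string the defect-tracking tool could pre-fill into the "Description" field (the exact headings are assumptions):

# Hypothetical skeleton pre-filled into the "Description" field of every new defect report.
DESCRIPTION_TEMPLATE = """\
Build found in:
Configuration (OS, hardware, locale):
Repro Steps:
  1.
  2.
  3.
Expected Result:
Actual Result:
Reproducibility (always / intermittent / once):
"""

print(DESCRIPTION_TEMPLATE)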

QA should present the defect statistics at project status meetings, or perhaps publish them on the project Intranet. The data presented should include: the number of open defects; a breakdown of open defects by priority; the number found in the last week; the number resolved in the last week; and the "Bug of the Week" (the most serious, strange, or interesting defect discovered during the last week). This would give the entire project team a good overview of project stability, QA efforts, and outstanding defect issues. These statistics should be saved to allow for statistical analysis as part of the Post Mortem phase. A defect "glide path" should also be created by QA for distribution to project personnel as the project enters the "code slush" phase. This glide path would show an ideal defect correction rate to achieve a state where all serious defects are corrected and thus the project is deemed acceptable for external release.
Implementation Issues
Adding the template to the "Description" field will have to be done by IT (most likely). QA should know how to make changes to the defect forms.

15. Improved Communication between Development and QA
The greater the flow of information between the various team members, the easier it is to resolve issues that arise during the course of a project. At many companies, QA, although they have access to project documentation and attend project status meetings, do not receive enough information to properly perform their assigned tasks. Much of the information they do receive is either beyond their technical skill level or so far below it as to be nearly useless.
QA should use these status meetings to update the rest of the project team about such issues as current defect counts and their distribution by severity or priority, current risks that QA feels may crop up or need addressing, or even resource issues (hardware, software, people, etc.).

A way to collect project-specific information in a highly usable manner is via a project Intranet. Information such as how to set-up test environments, how to build the project, latest revisions of project requirements documents, project design and architecture documents, project quality assurance plans, project test plans, and so forth, can all be linked together through a single Intranet. VSS allows for "shadowing" of files from the archive to an administrator-specified location. The shadowed files could easily be linked via a central Home page. On the central Home page, other important project information could be listed, such as the availability of updated documents, module status, build requests, et cetera. Over time, the Intranet could grow to include user documentation in HTML format, particularly since on-line (HTML format) help files and web-downloadable applications are becoming the norm. QA could then create as many pages as they want to convey project-specific and QA-specific information to the rest of the project team. Limited access could also be given to external clients to connect and review documents on-line.
Implementation Issues
The hardware, software, and resources to create and maintain a project Intranet may be difficult to find.

16. Independence and Authority of QA
QA must be independent and have the necessary authority to perform the tasks associated with their role(s) properly. QA must be independent of Development to allow for impartial validation and verification of the code produced by Development. QA must have authority to raise issues at meetings, to reopen uncorrected defects, to enforce adherence to the QA Plan by all parties, and to escalate issues that QA feels are not being addressed appropriately.

The granting of authority to QA must come from management, and it must be in place for all aspects of the project lifecycle. The QA Plan should be the reference document to which QA can refer Development, Project Management, and external clients for the enforceable guidelines for the project. QA has the authority to inform members of the project team when they are not complying with these guidelines. QA also has the authority to inform project management when non-compliance continues. Examples of common guidelines: buildable and testable source code at all times; impact analysis prior to correction of a high-risk defect, particularly late in the project lifecycle; monitoring for adherence to requirements and design documents (watching for "feature creep"); prioritizing (with other stakeholders) defects for correction, particularly late in the project lifecycle ("bug triage").

QA should write QA Plans for each project as soon as it is appropriate. QA should become the VSS Administrator and the defect tracking database/software Administrator. QA should be in charge of scheduled and on-demand builds.
QA must have the authority to make the "ship/no ship" decision. Features can always be removed or scaled back in order to meet ship deadlines.
Implementation Issues
This issue depends on the technical and professional competency of the members of the QA department. Training and management support (buy-in) are mandatory.

17. QA Resource Levels and Organization
The ratio of QA to Development varies from company to company, but in general, the target is to have a low ratio. Microsoft adopts a 1:1 ratio of QA to developers; Adobe is more along the lines of 1:2 or 1:3. Once a project reaches a ratio of 1:6 or 7, the amount of "catch-up" testing (testing required to keep up with daily changes to the codebase) begins to dominate the QA persons' time. The QA resources then have no extra time to work on process improvement, additional test planning and emergency situations that can arise, particularly near the end of the project lifecycle, let alone perform adequate testing of the product!

The QA team, as it grows, should be structured to allow for flexibility in times of high-testing demands. One attractive solution is to have one QA/Test Lead for each project, a person who is responsible for:
  • overseeing the testing and reporting all defects for their project;
  • reporting the project status to the project lead, including escalating unresolved issues and possible problem areas and risks;
  • reading and understanding the requirements, design documents, and any specifications relating to their project;
  • completing the test planning for their project;
  • writing a QA Plan for their project; and
  • performing additional testing on other projects during times of high-testing demands.
This structure allows each QA Lead to develop domain expertise in the technical area(s) that their projects utilize, as well as acting as extra test help on other projects when the need arises (i.e.: prior to release). A strategy such as this also allows staff to feel ownership of a product, which can be a great motivational tool, and which ensures the QA Lead for the project is always focussed on the "best interests" of the project.
In the long term, as the QA team increases in size (beyond four [4] QA personnel), one person should be nominated to act as QA/Test Manager. This person will be responsible for:
  • overseeing the testing for all projects and adjusting resource levels as required;
  • reporting the project statuses to the client, Product Management, and the Director of Development/Engineering, including escalating unresolved issues and possible problem areas and risks;
  • reading and understanding the requirements, design documents, and any specifications relating to all projects at a high level;
  • reviewing the test planning for all projects;
  • reviewing (and possibly writing, if required) the QA Plans for all projects;
  • performing additional testing on other projects during times of high-testing demands; and
  • proposing, introducing, and creating new processes to allow for continuous improvement in the QA team.
This structure provides enough resources to begin working on processes and procedures to incorporate continuous improvements in Development and QA. It also allows for the addition of new personnel quite easily.
Implementation Issues
Finding experienced QA personnel is difficult; training an inexperienced person is an acceptable, although more expensive, solution in the short term (the long-term value is greater). As the team grows, senior team members will be able to take on the majority of the training of new personnel.
Management must buy in to long-term quality improvement, and must commit the required finances to ensure a successful transition from an ad-hoc QA and Development cycle to a repeatable, continuously improving cycle.

Appendix C - Test Plan Outline
1. Introduction
1.1. Purpose
1.2. Scope
1.3. References
1.4. Assumptions
2. Test Overview
2.1. Test Environment
2.2. Hardware Issues (if any)
2.3. Software Issues (if any)
2.4. Acceptance Criteria
3. Test Cases
3.1 Unit Testing (if any)
3.2 Integration Testing (if any)
3.3 Build Verification Test Cases
3.4 Functional Test Cases
3.5 User Interface Test Cases
3.6 Negative Testing
3.7 User Scenarios
3.7.1 User Scenario 1
3.7.2 User Scenario 2
3.7.3 User Scenario 3
3.7.4 User Scenario 4
3.7.5 User Scenario 5
3.8 System Testing
3.9 Stress Testing
3.10 Performance Testing
3.11 Reliability Testing (if any)
3.12 Installation Testing

Data Driven Testing
One of the risks on any project is not being able to do enough testing in the time available.  Another common risk is not being able to get at those critical components behind the GUI to ensure that they are as bug free as possible before and after integration with the rest of the system.
Black box testing can tell you whether the GUI layer is functioning as it should, by taking in user input and responding with a result.  But what if there is an error? How do we know which layer (GUI, application, database, etc.) is creating the error? What if our black box tests don't turn up any database or application layer bugs?  Does that mean there aren't any? What if the GUI layer is masking the errors?
The tester needs to avoid the risk of automating a fluctuating GUI and at the same time provide the volume of tests needed to ensure that the application and database layers are stable and functioning.

Test Data Creation
In order to begin performing data driven testing you first need to create good test data.  Sometimes sources for this data are available from previous testing efforts, from users of the previous versions of the application, or perhaps obtainable by purchasing test suites from a vendor.  More often than not, this test data needs to be created from scratch in the specific context of the application to be tested and the test plans to be executed against it.

Sending the Message
Once the database is populated with your initial data, you will want to take that data and construct a message, command, or request to send to the application that you are testing.
Constructing the message can be as simple as extracting the information from the database and putting it together in the order you need. Or there could be additional, more complex, logic required to construct the message that the application needs to receive.  Working with scripts and databases together can give you a very versatile and useful solution.
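As a sketch of what "constructing the message" can look like, the snippet below pulls one row of test data from a database and wraps it in a simple XML request. The database, table, field, and element names are all assumptions.

import sqlite3
import xml.etree.ElementTree as ET

conn = sqlite3.connect("testdata.db")   # assumed test data database
row = conn.execute(
    "SELECT customer_name, phone, amount FROM test_data WHERE test_id = ?", (1,)
).fetchone()

# Wrap the row in a simple XML request understood by the (hypothetical) application.
request = ET.Element("request", attrib={"type": "createOrder"})
for tag, value in zip(("customerName", "phone", "amount"), row):
    ET.SubElement(request, tag).text = str(value)

message = ET.tostring(request, encoding="unicode")
print(message)   # e.g. <request type="createOrder"><customerName>...</customerName>...</request>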

Getting Your Results
To get the results of sending your message to the application, you need to be able to collect the result codes or expect your application to send back a response.  Assuming the latter is the case, your test tool would wait for that response, process it as necessary and then compare aspects of the response to the value in an Expected Results column in your tool's database.  Depending on the result, a Pass or Fail can be determined and a log entry created with as much detail as you care to capture.

Example Testing Problem
To pull all of what we have talked about so far into a tangible example, let us imagine a certain application that we want to test.
The application is a multi-tier web application that passes information from its User Interface layer to its Application Layer, via XML messaging.  These XML messages are made up of a number of fields of all different types.  The application takes these messages, processes them, and decides what the appropriate response is - return the information requested, perform the required action, etc.

If we assume that a typical message contains 20 data elements or fields, and that each data element needs to be tested with four different values (valid, error, upper field length boundary, and lower field length boundary), then we need to perform over ONE TRILLION tests to cover every combination:
4 to the 20th power = 1,099,511,627,776
This number of fields is not unreasonable, and the different values we mentioned above do not include every case that may need to be considered depending on the constraints for the message's data elements.  Are all the fields required?  Can they be blank?  What about surrounding whitespace characters, are they stripped away for each field?  What about fields that take codes like ON/OFF or RED/GREEN/BLUE/BLACK/WHITE?  Each field will need a certain number of tests based on its constraints.

This is where you need to make use of Equivalence Classes and other test planning techniques to determine the minimum number of tests for each field.  For example, if we can assume that we only need to vary the input of 15 fields out of the 20, and that each field contains only either valid data or invalid data, the number of combinations of test data we now need is much less.
2 to the 15th power = 32,768  <-- still a lot of tests
This is obviously a coarse example, but it makes the point. What if you had 30 potential data elements in your message? Generating this amount of data, executing the tests, and reviewing the results takes a huge amount of time.
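A few lines of code make the arithmetic concrete: generating one test input per combination of equivalence classes is just a cross product over the per-field value lists. The field names and sample values here are invented.

from itertools import product

# Two equivalence classes per field: one representative valid value, one invalid value.
equivalence_classes = {
    "customer_name": ["Alice Smith", ""],            # valid, invalid (blank)
    "phone":         ["123 4567", "not-a-phone"],
    "quantity":      ["10", "-1"],
    # ... 12 more fields would bring this to the 2 to the 15th = 32,768 combinations above
}

fields = list(equivalence_classes)
combinations = list(product(*equivalence_classes.values()))
print(f"{len(combinations)} combinations of {len(fields)} fields")

for values in combinations[:3]:                      # show the first few generated test inputs
    print(dict(zip(fields, values)))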
Always make sure that you have well designed test cases before undertaking your automated testing. A little upfront analysis and planning can save you a lot of work.

Defining Your Test Tool
Before creating or purchasing any software application or test tool, the first thing one must do is to collect and enumerate the requirements.
In general the tool needs to:
* Create and/or populate a database, based on a predefined schema, that includes an "Expected Results" column for each row of test data created
* Create a 'message' or formatted string, delimited by tabs or in XML, for sending to a network location where the application under test is waiting for connections and incoming requests or data
* Send the message to the network location monitored by the application being tested
* Listen for a response from the application
* Compare the response to the value in the appropriate row in the data table.
* Report on the success or failure of the test based on the result of the comparison of the actual response received and the corresponding value in the expected result column.
Before declaring the basic requirements complete, consider that the tool may need to execute thousands or millions of tests.  The time for these tests to complete could be extensive.  It may also be that the tests should be run at times of the day when there is low usage of the application or low network traffic.  Both of these needs require that the tool be able to run unattended and perhaps start at a scheduled time.
But remember, if you are building your own tool or even buying one, keep it simple. After all, this isn't the product you are trying to ship.  The first priority is to make the test process work better than it is working right now.  You can add in other bells and whistles later.
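Pulling the requirements listed above together, a bare-bones sketch of such a tool might look like this: read a row of test data, send the message to the application's network location, wait for the response, and compare it against the Expected Results column. The host, port, and schema are placeholders, and a real tool would need more robust error handling and reporting.

import socket
import sqlite3

HOST, PORT = "app-under-test.example.com", 9000      # placeholder network location

conn = sqlite3.connect("testdata.db")
rows = conn.execute("SELECT test_id, message, expected_result FROM test_data").fetchall()

results = []
for test_id, message, expected in rows:
    try:
        with socket.create_connection((HOST, PORT), timeout=10) as sock:
            sock.sendall(message.encode("utf-8"))        # send the message
            response = sock.recv(65536).decode("utf-8")  # listen for the response
    except OSError as exc:
        results.append((test_id, "FAIL", f"connection error: {exc}"))
        continue
    # Compare the response with the Expected Results column for this row.
    verdict = "PASS" if expected in response else "FAIL"
    results.append((test_id, verdict, response[:200]))

# Simple report: one log line per test plus a summary.
with open("results.log", "w", encoding="utf-8") as log:
    for test_id, verdict, detail in results:
        log.write(f"{test_id}\t{verdict}\t{detail}\n")
passed = sum(1 for _, verdict, _ in results if verdict == "PASS")
print(f"{passed}/{len(results)} tests passed")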
Summary
The framework required for any basic data driven testing tool is made up of three main components: File Input/Output for reading from configuration files and writing reports, a database for storing the test data, and an engine with which to extract the data from the database and make meaningful directives and requests of the system under test.
And that's that.  With a tool comprised of these core components, you can send potentially trillions of messages to an application for testing purposes - far more than you could, or would ever want to do manually.  Good hunting...

Performance Testing and the World Wide Web
Today's client/server systems are expected to perform reliably under loads ranging from hundreds to thousands of simultaneous users. The fast growing number of mission critical applications (e-commerce, e-business, content management, etc.) accessible through the Internet makes web site performance an important factor for success in the market.
According to a broad statement in the white paper Web Performance Testing and Measurement: a complete approach, by G. Cassone, G. Elia, D. Gotta, F. Mola, A. Pinnola: "...a survey [in the US] has found that a user typically waits just 8 seconds for a page to download completely before leaving the site!"
Organizations need to perform repeatable load testing on an ongoing basis to determine the ultimate performance and potential limits of a system. Poor performance can have direct negative consequences on a company's ability to attract and retain customers. Controlling the performance of the web site and the back-end systems (where e-business transactions run) is a key factor for every on-line business.
"Performance Testing" is the name given to a number of non-functional tests carried out against an application. There are three main elements that often comprise what is called Performance Testing. These are:
  • Performance Testing - Concentrates on testing and measuring the efficiency of the system.
  • Load Testing - Simulates business use with multiple users in typical business scenarios, looking for weaknesses of design with respect to performance.
  • Stress Testing - Sets out to push the system to its limits so that potential problems can be detected before the system goes live.
A difference between performance and load testing is that performance testing generally provides benchmarking data for marketing purposes, whereas load testing provides data for the developers and system engineers to fine-tune the system and determine its scalability.
With load testing, you can simulate the load generated by hundreds or thousands of users on your application - without requiring the involvement of the end users or their equipment. You can easily repeat load tests with varied system configurations to determine the settings for optimum performance. Load testing is also particularly useful for identifying performance bottlenecks in high-traffic web sites.
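To illustrate the idea, the following sketch simulates a handful of concurrent users requesting a single URL and reports simple timing figures. The URL, user count, and timeout are placeholder assumptions; a real load test would use a dedicated tool with realistic scenarios, ramp-up profiles, and server-side monitoring.

```python
# A bare-bones load-test sketch: a number of simulated users each request
# one URL while we record response times. The URL, user count, and timeout
# are placeholders; a real load test would use a dedicated tool, realistic
# scenarios, ramp-up profiles, and server-side resource monitoring.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "http://localhost:8080/"   # hypothetical system under test
SIMULATED_USERS = 50

def one_request(_):
    start = time.perf_counter()
    with urllib.request.urlopen(URL, timeout=30) as resp:
        resp.read()
    return time.perf_counter() - start

if __name__ == "__main__":
    with ThreadPoolExecutor(max_workers=SIMULATED_USERS) as pool:
        timings = sorted(pool.map(one_request, range(SIMULATED_USERS)))
    print(f"median: {timings[len(timings) // 2]:.3f}s  worst: {timings[-1]:.3f}s")
```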
Top five questions to ask yourself when considering load testing:
  • Do you experience problems with performance in production?
  • What is the cost of downtime, including monetary, person hours, opportunity cost, customer satisfaction, and reputation?
  • Does your application scale with an increase of users?
  • Do you have a method for obtaining real performance metrics?
  • How do you repeat/reproduce a performance problem?
Defining exactly what you want to get from this type of testing is fundamental. In a comprehensive approach, there are some major questions that have to be considered:
  • Who are your end users?
  • How can you monitor their experience with the system?
  • How can you translate these measurements into solutions?
  • What tools and methods can help?
With the answers to the above questions you get started with:
  • Strategy and Planning
    • Define your specific performance objective.
    • Specify the types of users to generate the necessary load.
    • Define the scenarios that simulate the work and data flow.
    • Define how the scenarios will be measured and tested.
    • Define the repository for storing the data to be collected.
    • Plan your test environment.
    • Identify appropriate tools.
  • Development
    • Develop/customize test scripts which simulate your user’s behaviour.
    • Configure your test environment.
  • Execution
    • Execute your test scripts to simulate user load.
    • Monitor the system resources.
  • Result Analysis
    • Analyze and interpret the results.
    • Isolate and address issues.
    • Tune your implementation.
    • Plan for future marketing requests.
QA Labs can help you assess what you need, provide you information on the tools available, and work with your staff to develop a strategy in accordance with your specific needs. We can develop test scripts to simulate the user behaviour defined in the test plan, test them, document them for use in subsequent performance and load test cycles, and transfer them to your staff.

Testing Without Requirements
A typical software project lifecycle includes such phases as requirements definition, design, code, and fix. But are you shipping software applications with minimal to no requirements and little time for testing because of time-to-market pressures? Build it, ship it, then document and patch things later.

‘Time-to-market’ products can lack detailed requirements for use by testing because of the huge pressure for quick turnaround. You don’t want to slow down development, or testing, by having to create detailed documentation. At the same time, the test effort needs to be useful and measurable. So, how can such a product be tested to achieve adequate, effective coverage and overall stability of its functionality?
A Starting Point
Effective ad hoc testing relies on a combination of tester experience, intuition, and some luck to find the critical defects. Adequate test coverage involves a systematic approach that includes analyzing the available documentation for use in test planning, execution, and review. Ad hoc testers can greatly benefit from up-front information gathering, even when they don’t have time for formal testing processes and procedures. They must still understand how the software is intended to work and in which situations.
Ask developers, testers, project managers, end users, and other stakeholders these basic questions to assist with clarifying the product’s undoubtedly complex tasks:
  Why is the system being built?
  What are the tasks to be performed?
  Who are the end users of the system?
  When must the system be delivered?
  Where must the system be deployed?
  How is the system being built?
Also, the risks of the system need to be identified. (See our Risk Based Testing article, #11, for more on this topic.) Correlate these risks against the time available to prioritize the areas of test focus.
With this information you are well on your way to being able to define an applicable strategy for your upcoming test effort.
User Scenarios
User Scenarios (sometimes called Use Cases) define a sequence of actions, completed by a system or user, that provides a recognizable result to the user. A user scenario is written in natural language and draws on a common glossary of terms. The user scenario will have a basic or typical flow of events (the ‘must have’ functionality) and alternate flows. Creating user scenarios/use cases can be kick-started by simply drawing a flowchart of the basic and alternate flows through the system. This exercise rapidly identifies the areas for testing, including outstanding questions or design issues, before you start.
Benefits of creating user scenarios:
  Easy for the owner of the functionality to tell/draw the story about how it is supposed to work
  System entities and user types are identified
  Allows for easy review and ability to fill in the gaps or update as things change
  Provides early ‘testing’ or validation of architecture, design, and working demos
  Provides a systematic, step-by-step description of the system's services
  Easy to expand the steps into individual test cases as time permits
User scenarios quickly provide a clearer picture of what the customer is expecting the product to accomplish. Employing these use cases can reduce ambiguity and vagueness in the development process and can, in turn, be used to create very specific test cases to validate the functionality, boundaries, and error handling of a program.
Checklists
Are there common types of tasks that can be performed on the application? Checklists are useful tools to ensure test coverage of these common tasks. There may be a:
  User Interface checklist
  Error and Boundary checklist
  Checklist for certain features (e.g., Searching)
Benefits of creating checklists:
  Easy to maintain as things change
  Easy to improve as time goes by
  Captures the tests being performed in a central location
Checklists used in conjunction with User Scenarios make a powerful combination of light-weight test planning.
Matrices
A test matrix is used to track the execution of a series of tests over a number of configurations or versions of the application. Test matrices are ideal when there are multiple environments, configurations, or versions (builds) of the application.
Benefits of using test matrices:
  Easy to maintain as priorities and functionality change
  Simple to order the functional areas and the tests in each area by priority
  Clear progress monitoring of the test effort
  Easy to identify problem areas or environments as testing proceeds
Test matrices provide a clear picture of what you have done and how much you have left to do.
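As a small illustration, a test matrix can live in a spreadsheet or in a simple structure like the one sketched below; the test names and configurations shown are invented placeholders.

```python
# A tiny illustration of a test matrix: rows are tests, columns are
# configurations, and each cell holds a status. The test names and
# configurations shown here are invented placeholders.
tests = ["Login", "Search", "Checkout"]
configs = ["Win/IE", "Win/Firefox", "Mac/Safari"]
matrix = {(test, config): "not run" for test in tests for config in configs}

matrix[("Login", "Win/IE")] = "pass"
matrix[("Search", "Win/IE")] = "fail"

executed = sum(1 for status in matrix.values() if status != "not run")
print(f"progress: {executed}/{len(matrix)} cells executed")
print("problem areas:", [cell for cell, status in matrix.items() if status == "fail"])
```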
Summary
If you have minimal to no requirements there are still ways that effective testing can be achieved with a methodical approach. You can quickly outline a methodology for yourself that considers the basics of:
  Describing the application in terms of intended purpose
  Identifying the risks of the application
  Identifying the functionality of the application with basic and alternate flows
  Identifying and grouping common tests with checklists
  Identifying how testing records will be traced
  Revisiting and refining each of the above as the project and testing effort proceeds
For more information...
A recent study of over 8,300 IT projects found that more than 50% were "challenged" by reduced functionality, being over budget and going beyond the original schedule. The main reasons were a lack of user input and incomplete and changing requirements.
QA Labs' expert analysts can provide you with a proven light-weight solution tailored to your timeframe and resource realities that allows you to capture your already existing requirements without slowing development and make that document serve as your test plan as well - saving you significant rework, confusion, and money. Two critical artifacts in one!
Our clients hire us knowing they have hired a company, not an individual or group of individuals. We bring our comprehensive experience to the table and can provide the exact resource skill sets needed at exactly the right time. QA Labs offers a clear alternative to individual hires.
Toolkit Testing - A Lightweight Alternative
  • Are you ready to take on the challenges of the new project?
  • Are you knowledgeable of the latest in testing tools and techniques?
  • What will be the ramification of not testing the product adequately?
  • How will this impact your future business?
  • Can you afford not to test?
Exhaustive software testing standards, frameworks, and techniques promote the notion that a robust variety of testing techniques and structures will increase the likelihood that defects will be uncovered. But with tight project budgets and short timelines, the accompanying bureaucracy and documentation can greatly reduce the interest in formalized testing, structured or otherwise.
It is well understood that a higher quality product demands a higher upfront price, and compromises regarding quality versus cost are made every day. However, the best approach is not necessarily the most structured or complicated one. Rather, a sophisticated approach is required that maximizes the value of the resources available both within the organization and outside it. A good starting place for developing this sophisticated approach is to examine the fundamental skills that make great testers great, enabling them to draw upon their almost innate ability to find the crucial defects quickly.
From this examination it is very probable that you will generate ideas for:
  • Managing iterative development and test cycles
  • Creating reusable sets of tests and test data
  • Compressed delivery schedules
  • Newly refactored architectures
  • Standardized project metrics
  • Migrating from simple to complex deployments
  • Changing customer expectations
A few simple principles can provide the framework from which to grow and adapt the test approach. Keeping things simple will make it easy for the benefits and costs to drive further evolution. While there are continuously new development tools and programming languages, many testing requirements remain the same, and simply require additional emphasis within and by the test team.
  • Examples - having previous projects from which "best of breed" examples can be drawn for each type of process, document, or test technique is invaluable in giving the next project a giant jump start in how to approach the test effort. Asking "how did we do it last time and what can we improve?" will steadily drive improvements to your testing toolkit, and with frequent project iterations, it will do so rapidly.
From these previous project examples you can begin to derive reusable tools for your toolkit:
  • Guidance Process - generic process frameworks and best practices that can be applied to most project types and be ingrained as habit as much as in any documentation.
  • Templates - light-weight documents focused on capturing the critical information and not on keeping resources busy with technical writing.
  • Checklists - lists of tests, lists of tasks, and matrices of test configurations that allow you to rapidly document and check off what has been done and see what is left to do.
Note: If you do not have your own history of previous examples, there are many resources on the web where others share their experiences and advice, such as www.stickyminds.com.
In one such example, James Bach of Satisfice.com provides a number of whitepapers and articles on "Exploratory Testing" wherein he describes a set of mnemonics and heuristics in his toolkit. One of these mnemonics is SFDPO, where the letters stand for Structure, Function, Data, Platform, and Operations.
  • Structure - what the product is
  • Function - what the product does
  • Data - what the product processes
  • Platform - what the product depends upon
  • Operations - how the product will be used
Using rules and checklists such as these allows you to quickly focus your test idea generation and ensure that you have systematically visited the major aspects of the product.
"SFDPO is not a template or a test plan, it’s just a way to bring important ideas into your conscious mind while you’re testing. It’s part of your intellectual toolkit. The key thing if you want to become an excellent and reliable exploratory tester is to begin collecting and creating an inventory of heuristics that work for you. Meanwhile, remember that there is no wisdom in heuristics. The wisdom is in you. Heuristics wake you up to ideas, like a sort of cognitive alarm clock, but can’t tell you for sure what the right course of action is here and now. That’s where skill and experience come in. Good testing is a subtle craft. You should have good tools for the job." - James Bach
However, even with the best tools and techniques, a test team can't create the kind of return on investment managers require as long as the test efforts don't start early and don't involve all appropriate stakeholders and participants. When developing your testing processes, look for those improvements where:
  • Errors are detected and corrected as early as possible in the software life cycle
  • Project risk, cost, and schedule effects are lessened
  • Software quality and reliability are enhanced
  • Management visibility into the software process is improved
  • Proposed changes and their consequences can be quickly assessed
"... when all the pieces come together - the right people, the right processes, the right time, the right techniques, the right focus - then we can achieve truly impressive returns on our testing investment. Significant reductions in post-release costs are ours for the taking with good testing. In cost of quality parlance, we invest in upfront costs of conformance (testing and quality assurance) to reduce the downstream costs of nonconformance (maintenance costs and other intangibles associated with field failures)." - Investing in Software Testing by Rex Black.
For more information…
QA Labs' test experts construct effective test strategies tailored to your timeframe and resource realities. A well-scoped and actionable testing strategy is critical to ensuring the continued survival of a given product line, or the successful launch of a new application to the market. QA Labs paves the way for immediate and on-going quality improvements by establishing critical artifacts, practical automation, and integrating quality best practices - all while getting those bugs logged.
Our clients hire us knowing they have hired a company, not an individual or group of individuals. We bring our comprehensive experience to the table and can provide the exact resource skill sets needed at exactly the right time. QA Labs offers a clear alternative to hiring individual contractors, and even to hiring new permanent full-time employees.

Components of a Test Strategy
A Test Strategy is a documented approach to testing where the test effort, test domain, test configurations, and test tools employed to verify and validate a set of functionality are defined. It also includes information on schedules, resource allocations, and staff utilization. This information is crucial to allow the test team (Test) to be as organized and effective as possible.
A Test Strategy is not the same as a Test Plan, which is a document that collects and organizes test cases by functional areas and/or types of testing in a form that can be presented to the other teams and/or customer.
Both are important pieces of the Quality Assurance process since they help communicate the test approach scope and ensure test coverage while improving the efficiency of the testing effort.

What is in the Test Strategy?
The following is a list of some of the sections that are typically included in the Test Strategy document.
* Introduction - contains an overview of the project, lists related documents and references, document conventions, and assumptions made in the course of creating the strategy.
* Scope - describes the scope of the test team's involvement in the project; describes the test areas Test is responsible for and why. It also defines the areas for which Test is not responsible.
* Resources & Schedule for Test Activities - describes the resources available and their roles. Includes a schedule overview for the project, making sure the estimated time for the testing activities and milestone dates are present. The build schedule can also be included if available.
* Acceptance Criteria - defines the minimum criteria that a product must achieve before it is shipped.
* Test environment - describes the hardware and software platforms that are used for testing, including client/server configuration, network, etc., and what will be tested on each platform.
* Tools - describes the tools used for test case management, defect reporting and test automation.
* Test Priorities - describes the priorities of the test effort during the test planning, test automation, test data creation, and test execution phases.
* Test Planning - describes such activities as requirements review and test analysis to determine a list of suitable tests required for verification of the product. It also describes how the tests are expanded into full test cases, complete with descriptions, reproduction steps, and expected results.
* Executing a Test Pass - describes how the test pass execution is performed, and when the testing is executed, in accordance with the types of testing to be performed. For example, test cases that are critical are tested first to ensure the build has the minimum functionality required before further testing.
* Types of testing to be performed - defines the different types of testing to be performed, and the extent to which Test will be carrying out each type. The most common types of testing are:
  • Build Verification Tests
  • Functionality Testing
  • User Interface Testing
  • Usability Testing
  • Error Handling
  • System Platform
  • Stress Testing
  • Performance Testing
  • Installation Testing
  • Print Testing
  • Localization Testing
  • Regression testing
* Risks and Issues - lists of outstanding risks and issues related to the test effort.

Benefiting from the Test Strategy
The main groups that benefit from having a Test Strategy are the test team, development, and project management, but other groups such as user education and marketing can also benefit from the information contained in the Test Strategy.
* Test team
The Test team will follow the Test Strategy and make sure testing is performed in accordance with the plan. They will also analyze the results and make recommendations on the quality of the functionality. The Test Strategy document should help the Test Team answer the questions below:
  • Do I have all documentation I need to start the test planning?
  • Is the time scheduled for test planning adequate?
  • Do I have the tools to develop the test cases? To log defects?
  • Who is going to review the test analysis/ test planning and when?
  • Do I have all I need to start testing (equipment/tools)?
  • Do I have all the data/files I need to start testing?
  • Do I know the functionality I will test on each build?
  • Is all functionality being covered during all phases?
  • What are the procedures if I find a serious defect?
* Development
Development will understand the functionality being tested and the conditions under which these tests are to be performed. The questions that the Test Strategy document should answer are:
  • What is the overall approach to testing on this project?
  • Who is responsible for the different types of testing, particularly Unit and Integration testing?
  • Do I have time scheduled for reviewing the test plans for depth of coverage?
  • Is the test environment adequate relative to the intended production environment?
* Project Management
Project Management will understand the information regarding configurations (hardware and software) under which the product was verified and validated, and the procedure for assessing the quality of the product based on the type of testing being performed. They are also informed about the testing schedule and its impact on the deadlines. The Test Strategy should help with the following questions:
  • Do I need to hire more people during the test planning or testing phase?
  • Do we have all the hardware and software required for testing?
  • Do we have the tools required for test planning and defect reporting?
  • If a new tool is required, is the time needed for training the testing team scheduled?
  • Are all types of testing defined as required?
  • Are all the testing tasks well defined?
  • Are the testing priorities clear for each phase of the project?
  • Are there enough test execution passes for each phase of the project?
  • What are the issues and risks identified by Test that are still outstanding?
There is another important document whose purpose is very often confused with the Test Strategy or Test Plan: the QA Plan. The QA Plan is intended to be a high-level document that clearly outlines the boundaries of QA's responsibilities on the project relative to the rest of the project personnel, including any clients, sub-contractors, and co-contractors.
The QA Plan includes descriptions of methodologies, practices, standards, quality requirements, metrics, reviews and audits, code control, media control, etc., in addition to outlining the basics of the responsibilities of the Test Team.
The Test Strategy draws upon this parent document and its information, if available, and further details the responsibilities of the Test Team and its approach to testing.

Error Messages and How to Improve Them
Error messages are displayed by applications in response to unusual or exceptional conditions that can't be rectified within the application itself.
The need for "useful error messages" can be defined, in the simplistic case, to be a need for some form of error handling and reporting that enables the user to understand what has happened in the case of an error and what must be done to remedy the situation.
Most testers are no doubt familiar with the feeling of reluctance to log usability issues, fearing that they may be misunderstanding the functionality or that they are "wasting valuable time reporting trivial bugs". The project team can reinforce this feeling by tending to postpone or ignore such issues under the premise that "at least there is some feedback, isn't there?", or "there isn't going to be time to address those kinds of issues", and besides, "the user wouldn't do that."

Issues with Error Messages
"Error messages are often less than helpful or useful because they're written by people who have an intimate knowledge of the program. Those people often fail to recognize that the program will be run by other people, who don't have that knowledge." Michael Bolton, 1999.
Furthermore, Byron Reeves and Clifford Nass suggest in 'The Media Equation', that even text-only interfaces are felt by users as having some "personality" and that "people respond socially and naturally to media."
As noted by Julianne Chatelaine in 'Polite, Personable Error Messages', Byron Reeves and Clifford Nass determined that if the application does not have the ability to assess each user's personality and adapt to it, the next best thing is to select one personality or tone and be consistent, to avoid contributing to confusion and even dislike. The published findings of The Media Equation were underscored by Nass' remarks at UPA '97, where he said that when an application's textual messages were written by a variety of different people, using different styles and degrees of strength or dominance, it made the product seem "psychotic."

Guidelines for Error Messages
"You may design the perfect system but eventually, your system will fail. How it does so, however, can make all the difference in the world in terms of usability." Tristan Louis, 'Usability 101: Errors'.
"The guidelines for creating effective error messages have been the same for 20 years." Jakob Nielsen, 'Error Message Guidelines'.
The following checklist, compiled from several of the referenced sources, will help you confirm that your application meets basic usability requirements with respect to error messages; a short code sketch applying these points follows the list.
  • Message Exists: the problem with an error is often that no message is actually attached to it. Notify the user when the error happens, every time it happens. The error may be due to a flaw in the software or a flaw in the way the user is using the software, but if the user doesn't know about the error, they will assume that the problem is with the software.
  • Polite Phrasing: the message should not blame users or imply that they are either stupid or doing something wrong, such as "illegal command."
  • Visible and Human-readable: the message should be highly noticeable and expressed clearly in plain language using words, phrases, and concepts familiar to the user rather than in system-related terms.
  • Precise Descriptions: the message should identify the application that is posting the error and alert the user to the specific problem, rather than a vague generality such as "syntax error".
  • Clear Next Steps: error messages should provide clear solution steps and/or exit points. An application should never capture users in situations that have no visible or reasonable escape.
  • Consistent: users should not have to wonder whether words, icons, colors, or choices mean the same thing or not in different instances.
  • Helpful: the message should provide some specific indication of how the problem may be resolved and, if possible, let users pick from a small list of possible solutions. Links can also be used to connect a concise error message to a page with additional background material or a detailed explanation of the problem and possible solutions. Finally, the message should provide extra information, such as an identifying code, so that if technical support is helping the end-user they can better analyze and remedy the problem.
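To make these points concrete, here is a small before/after sketch. The application name, wording, error codes, and suggested next steps are invented examples of applying the guidelines, not text from any standard.

```python
# A before/after sketch of applying the checklist above. The application
# name, wording, error codes, and suggested next steps are invented
# examples of the guidelines, not text taken from any standard.

# Unhelpful: vague, system-oriented, and offers the user no way forward.
def save_unhelpful(path, data):
    raise IOError("Error 0x80004005: operation failed")

# More helpful: plain language, names the application and the specific
# problem, suggests next steps, and carries a code for support staff.
class SaveError(Exception):
    def __init__(self, path, reason, code):
        super().__init__(
            f"AcmeEditor could not save '{path}' because {reason}. "
            f"Try saving to a different folder or freeing disk space. "
            f"If the problem continues, contact support and quote code {code}."
        )
        self.code = code

def save_helpful(path, data):
    try:
        with open(path, "w") as fh:
            fh.write(data)
    except PermissionError as exc:
        raise SaveError(path, "the file is read-only or access was denied",
                        code="SAVE-102") from exc
    except OSError as exc:
        raise SaveError(path, "the disk is full or unavailable",
                        code="SAVE-103") from exc
```

The improved version touches several checklist items at once: plain language, a precise description of the problem, clear next steps, and an identifying code that support staff can use.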
Error Message Presentation
When deciding on the style your error messages will adhere to, you should consider the presentation of your error message:
  • Tone: be firm and authoritative, stating the facts of the situation in a neutral and business-like manner.
  • Color: an error message printed in red may call attention to itself, but using color as the sole way to present an error message is generally a poor idea. People who are color blind, for example, will not take any additional meaning from the text.
  • Language: if your application is used by people in different countries, consider that your error messages will have to be translated and need to be presented in a format flexible enough to accommodate the translated text.
  • Icons: if you use icons to present your error messages make sure they are intuitive to the end-users and that they are appropriate to the circumstance of the error message.
To highlight how careful you should be when considering icons, Tristan Louis cites in 'Usability 101: Errors' the case of the Apple Macintosh which, when it crashed, used to show an icon of a little bomb with a burning fuse alongside the message in the error dialog. He comments that users in many countries were terrified by this icon and would not touch the computer for fear that it would actually explode.
Summary
Remember that errors will happen, but what will make all the difference is whether they are handled properly. Unclear and unhelpful error messages tend to mean that errors will recur, or take longer to resolve. The resultant frustration can lead users to mistrust the interface or even abort the task in question.
Your error message must convey useful information -- useful information saves time, and not just for the end-user. The message will also need to be understood by and useful to the technical support person who handles the call, the quality assurance analyst who helps to track down or replicate the problem, and the maintenance programmer who is charged with fixing the problem in the code. Each person in this process represents a cost to your company, a cost that could be greatly mitigated by a small investment made now.

