Preface – About Team Work.
Once upon a time a tortoise and a hare had an argument about who was faster. They decided to settle the argument with a race. They agreed on a route and started off the race.
The hare shot ahead and ran briskly for some time. Then seeing that he was far ahead of the tortoise, he thought he'd sit under a tree for some time and relax before continuing the race. He sat under the tree and soon fell asleep. The tortoise plodding on overtook him and soon finished the race, emerging as the undisputed champ.
The hare woke up and realised that he'd lost the race. The moral of the story is that slow and steady wins the race. This is the version of the story that we've all grown up with.
But then recently, someone told me a more interesting version of this story. It continues the hare was disappointed at losing the race and he did some soul-searching. He realised that he'd lost the race only because he had been overconfident, careless and lax. If he had not taken things for granted, there's no way the tortoise could have beaten him. So he challenged the tortoise to another race. The tortoise agreed. This time, the hare went all out and ran without stopping from start to finish. He won by several miles.
What’s the moral of the story? Fast and consistent will always beat the slow and steady. If you have two people in your organisation, one slow, methodical and reliable, and the other fast and still reliable at what he does, the fast and reliable chap will consistently climb the organisational ladder faster than the slow, methodical chap. It's good to be slow and steady; but it's better to be fast and reliable.
But the story doesn't end here. The tortoise did some thinking this time, and realised that there's no way he could beat the hare in a race the way it was currently formatted. He thought for a while, and then challenged the hare to another race, but on a slightly different route.
The hare agreed. They started off. In keeping with his self-made commitment to be consistently fast, the hare took off and ran at top speed until he came to a broad river. The finishing line was a couple of kilometers on the other side of the river. The hare sat there wondering what to do. In the meantime the tortoise trundled along, got into the river, swam to the opposite bank, continued walking and finished the race.
What’s the moral of the story? First identify your core competency and then change the playing field to suit your core competency. In an organisation, if you are a good speaker, make sure you create opportunities to give presentations that enable the senior management to notice you. If your strength is analysis, make sure you do some sort of research, make a report and send it upstairs. Working to your strengths will not only get you noticed, but will also create opportunities for growth and advancement.
The story still hasn't ended.
The hare and the tortoise, by this time, had become pretty good friends and they did some thinking together. Both realised that the last race could have been run much better.
So they decided to do the last race again, but to run as a team this time.
They started off, and this time the hare carried the tortoise till the riverbank. There, the tortoise took over and swam across with the hare on his back. On the opposite bank, the hare again carried the tortoise and they reached the finishing line together. They both felt a greater sense of satisfaction than they'd felt earlier.
What’s the moral of the story? It's good to be individually brilliant and to have strong core competencies; but unless you're able to work in a team and harness each other's core competencies, you'll always perform below par because there will always be situations at which you'll do poorly and someone else does well.
Teamwork is mainly about situational leadership, letting the person with the relevant core competency for a situation take leadership.
There are more lessons to be learnt from this story.
Note that neither the hare nor the tortoise gave up after failures. The hare decided to work harder and put in more effort after his failure.
The tortoise changed his strategy because he was already working as hard as he could. In life, when faced with failure, sometimes it is appropriate to work harder and put in more effort. Sometimes it is appropriate to change strategy and try something different. And sometimes it is appropriate to do both.
The hare and the tortoise also learnt another vital lesson. When we stop competing against a rival and instead start competing against the situation, we perform far better.
To sum up, the story of the hare and the tortoise teaches us many things. Chief among them: fast and consistent will always beat slow and steady; work to your competencies; pooling resources and working as a team will always beat individual performers; never give up when faced with failure; and finally, compete against the situation, not against a rival.
1. Testing Introduction. |
Why do we Test:
Because of the fallibility of its human designers and its own abstract, complex nature, software development must be accompanied by quality assurance activities. It is not unusual for developers to spend 40% of the total project time on testing. That’s why we do testing.
What is the Relation between QA, QC and Testing?
Many people and organizations are confused about the difference between quality assurance (QA), quality control (QC), and testing. They are closely related, but they are different concepts. Since all three are necessary to effectively manage the risks of developing and maintaining software, it is important for software managers to understand the differences. They are defined below:
Quality Assurance: A set of activities designed to ensure that the development and/or maintenance process is adequate to ensure a system will meet its objectives.
Quality Control: A set of activities designed to evaluate a developed work product.
Testing: The process of executing a system with the intent of finding defects. (Note that the "process of executing a system" includes test planning prior to the execution of the test cases.)
QA activities ensure that the process is defined and appropriate. Methodology and standards development are examples of QA activities. A QA review would focus on the process elements of a project - e.g., are requirements being defined at the proper level of detail.
In contrast, QC activities focus on finding defects in specific deliverables - e.g., are the defined requirements the right requirements.
Testing is one example of a QC activity, but there are others such as inspections.
Both QA and QC activities are generally required for successful software development.
QA – process-oriented; this is where verification of the process is carried out.
QC – product-oriented; this is where the quality of the product is maintained.
The basic function of testing is to detect errors in the software. Testing has to uncover not only errors introduced during coding, but also errors introduced during the previous phases.
Testing is vital to the success of the system. It is a process of establishing confidence that a program or system does what it is proposed to do. Testing is one of the most effective ways to assure the quality of the software, and it is a safeguarding activity rather than a separate phase.
Testing Objectives:
1. Testing is a process of executing a program with the intent of finding an error.
2. A good test case is one that has a high probability of finding an as-yet undiscovered error.
3. A successful test is one that uncovers an as-yet undiscovered error.
Testing Process:
Testing is the major quality control measure employed during software development. Its basic function is to detect errors in the software. Testing is necessary for the proper functioning of the system. The levels of testing are:
1. Unit testing
2. Integration testing
3. Functional testing
4. Acceptance testing
5. System Level testing
UNIT TESTING:
Unit testing focuses verification effort on the smallest unit of software design: the module. Here, using the detailed design as a guide, important control paths are tested to uncover errors within the boundary of the module. Unit testing is always white-box oriented, and the step can be conducted in parallel for multiple modules.
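The material above describes unit testing only in prose, so the following is a minimal illustrative sketch (not part of the original material): a white-box unit test written in Python with the standard unittest module. The calculate_discount function and its rules are hypothetical, chosen so that each test exercises one control path within the unit. Running the file with python -m unittest executes every test and reports any failures.

import unittest

def calculate_discount(order_total):
    """Hypothetical unit under test: returns a discount rate for an order."""
    if order_total < 0:
        raise ValueError("order total cannot be negative")
    if order_total >= 1000:
        return 0.10
    if order_total >= 500:
        return 0.05
    return 0.0

class CalculateDiscountTest(unittest.TestCase):
    # Each test exercises one control path through the unit (a white-box view).
    def test_negative_total_raises(self):
        with self.assertRaises(ValueError):
            calculate_discount(-1)

    def test_no_discount_below_500(self):
        self.assertEqual(calculate_discount(499.99), 0.0)

    def test_five_percent_from_500(self):
        self.assertEqual(calculate_discount(500), 0.05)

    def test_ten_percent_from_1000(self):
        self.assertEqual(calculate_discount(1000), 0.10)

if __name__ == "__main__":
    unittest.main()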
INTEGRATION TESTING:
Integration testing is actually a series of different tests whose primary purpose is to fully exercise the computer-based system. Although each test has a different purpose, all work to verify that the system elements have been properly integrated and perform their allocated functions.
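As a small illustration of how this differs from unit testing (again a hypothetical sketch, not taken from the source), an integration-level test exercises two units together, without stubs or drivers, to confirm that they have been wired together correctly:

import unittest

# Two hypothetical modules tested together rather than in isolation.
def calculate_discount(order_total):
    """Pricing module."""
    return 0.10 if order_total >= 1000 else 0.0

def price_order(items):
    """Order module: depends on the pricing module above."""
    total = sum(price * qty for price, qty in items)
    return round(total * (1 - calculate_discount(total)), 2)

class OrderPricingIntegrationTest(unittest.TestCase):
    def test_discount_applied_across_modules(self):
        # 2 x 600 = 1200, which should drive the 10% discount path end to end.
        self.assertEqual(price_order([(600.0, 2)]), 1080.0)

if __name__ == "__main__":
    unittest.main()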
FUNCTIONAL TESTING:
Functional testing is actually a series of different tests whose primary purpose is to fully exercise the functionality of the system elements and all the modules of the system.
ACCEPTANCE TESTING:
This is normally performed with realistic client data to demonstrate that the software is working satisfactorily. Here the testing focuses on the external behavior of the system.
Test-Case Specification
Purpose. To define a test case identified by a test-design specification.
A test-case specification shall have the following structure:
(1) Test-case-specification identifier
(2) Test items
(3) Input specifications
(4) Output specifications
(5) Environmental needs
(6) Special procedural requirements
(7) Intercase dependencies
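As an illustration only, the seven parts listed above can be captured in a simple structure; the field names follow the list, while the example values are invented.

from dataclasses import dataclass, field
from typing import List

@dataclass
class TestCaseSpecification:
    """Mirrors the seven parts of a test-case specification."""
    identifier: str                       # (1) test-case-specification identifier
    test_items: List[str]                 # (2) items/features exercised by this case
    input_specification: str              # (3) inputs and preconditions
    output_specification: str             # (4) expected outputs and results
    environmental_needs: str              # (5) hardware/software environment
    special_procedural_requirements: str  # (6) constraints on how the test is run
    intercase_dependencies: List[str] = field(default_factory=list)  # (7)

# Hypothetical example instance:
tc_login_001 = TestCaseSpecification(
    identifier="TC-LOGIN-001",
    test_items=["Login screen"],
    input_specification="Valid user name and valid password",
    output_specification="User is taken to the home page",
    environmental_needs="Test server with seeded user accounts",
    special_procedural_requirements="Run after the nightly build is deployed",
)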
Testing Types:
1. Functionality Testing – WinRunner and SilkTest tools
2. Performance Testing – LoadRunner and LoadTest tools
3. Load Testing – a subset of performance testing
4. Stress Testing – the load is kept constant, but resources are degraded
5. Smoke Testing – daily build test
6. Regression Testing – testing according to requirement changes
Black Box – functionality check
White Box – code review, inner logic, etc.
White Box Testing = Black + White + Web-Based Testing.
Unit Testing – checks the logic of the programmer: flow graphs, cyclomatic complexity
Functionality – 1. Equivalence Partitioning
2. Boundary Value Analysis
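To make the two functionality techniques concrete, here is a small hypothetical sketch: assume an age field that accepts whole numbers from 18 to 60. Equivalence partitioning picks one representative value from each partition, while boundary value analysis tests at and immediately around each boundary.

# Hypothetical requirement: an "age" field accepts whole numbers from 18 to 60.
VALID_RANGE = (18, 60)

def is_valid_age(age):
    return VALID_RANGE[0] <= age <= VALID_RANGE[1]

# Equivalence partitioning: one representative value per partition is enough.
partitions = {
    "invalid_below": 10,   # any value < 18
    "valid": 35,           # any value in 18..60
    "invalid_above": 75,   # any value > 60
}

# Boundary value analysis: test at and immediately around each boundary.
low, high = VALID_RANGE
boundary_values = [low - 1, low, low + 1, high - 1, high, high + 1]

for name, value in partitions.items():
    print(f"partition {name:14} value {value:3} -> accepted: {is_valid_age(value)}")
for value in boundary_values:
    print(f"boundary value {value:3} -> accepted: {is_valid_age(value)}")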
The other testing types are:
Performance Testing
Verifies and validates that the performance requirements have been achieved; measures response times, transaction rates, and other time-sensitive requirements.
Security Testing
Evaluates the presence and appropriate functioning of the security of the application to ensure the integrity and confidentiality of the data.
Volume Testing
Subjects the application to heavy volumes of data to determine if it can handle the volume of data.
Stress Testing
Investigates the behavior of the system under conditions that overload its resources. Of particular interest is the impact that this has on system processing time.
Compatibility Testing
Tests the compatibility of the application with other applications or on different browsers and systems.
Conversion Testing
Verifies the conversion of existing data and the loading of the new database.
Usability Testing
Determines how well the user will be able to use and understand the application.
Documentation Testing
Verifies that the user documentation is accurate and ensures that the manual procedures work correctly.
Backup Testing
Verifies the ability of the system to back up its data in the event of a software or hardware failure.
Recovery Testing
Verifies the system’s ability to recover from a software or hardware failure.
Installation Testing
Verifies the ability to install the system successfully.
Configuration Testing
Tests the application on different hardware configurations.
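As a rough sketch of what performance testing measures (the transaction, sample size, and response-time requirement below are all assumed purely for illustration), repeated transactions can be timed and the results compared against the stated requirement:

import time

def process_transaction():
    """Hypothetical transaction under test; replace with a real call."""
    time.sleep(0.05)  # simulate work

REQUIRED_MAX_SECONDS = 0.2   # assumed requirement, for illustration only
SAMPLE_SIZE = 50

timings = []
for _ in range(SAMPLE_SIZE):
    start = time.perf_counter()
    process_transaction()
    timings.append(time.perf_counter() - start)

average = sum(timings) / len(timings)
worst = max(timings)
print(f"average response: {average:.3f}s, worst: {worst:.3f}s")
print("requirement met" if worst <= REQUIRED_MAX_SECONDS else "requirement NOT met")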
According to IEEE 829-1998, the inputs for test planning are the SRS and the project plan. A test plan contains the following sections:
1. Objectives
2. Scope
3. Strategy
4. Features to be tested
5. Features that cannot be tested
6. Pass/Fail criteria
7. Exit/Resumption criteria
8. Test Deliverables
9. Test Schedules
10. Risks
11. Environment Requirements
12. Staffing and Training
13. Conclusion
List of Topics to Concentrate On: |
- What is “Software Testing”
- Quality Assurance
- Quality Control
- Verification & Validation
- Software Development Life Cycle
Spiral Model
‘V’ Model
- When does testing start?
- Role of Test Engineer
- Test Plans
Contents of Test Plan
Preparation of Test Plans
Inputs to a Test Plan
“Features to be Tested” in a test plan
Risks in test plan
Test deliverables
- Test Cases
Structure of test cases
Inputs to test cases
What is a “Good Test Case”
Complexity in test cases
Test Reports
- Web Architecture
- Challenges in web based testing
- Difference between Web and Client-Server Architecture
- Test Concepts
Unit Test-WHITE BOX
Functionality Testing –BLACK BOX
System Testing
Performance Testing
Parameters to be verified in performance testing
Actions to suggest when performance degrades
Load Testing
Stress Testing
Security Testing
Beta Testing
User Acceptance Testing
- What is Code Optimizing?
- Database Testing
What is Query optimizing?
Optimizing of stored procedures
General cases when a query fails.
Advantage of indexes in database.
- Bug
What is a “Bug”.
Life cycle of a bug
Bug Tracking
Severities of Bug
Priorities of Bug
Bug Reports
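Before moving on to the tools, the bug-related topics above (life cycle, severity, priority, tracking) can be pictured with a tiny sketch; the states and values shown are typical examples only, not a prescription from the source.

from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = 1   # crash, data loss
    MAJOR = 2      # function broken, workaround exists
    MINOR = 3      # cosmetic or low impact

class Priority(Enum):
    HIGH = 1       # fix before the next build
    MEDIUM = 2
    LOW = 3

class Status(Enum):
    NEW = "new"
    OPEN = "open"
    FIXED = "fixed"
    RETESTED = "retested"
    CLOSED = "closed"
    REOPENED = "reopened"

@dataclass
class Bug:
    bug_id: str
    summary: str
    severity: Severity
    priority: Priority
    status: Status = Status.NEW

    def move_to(self, new_status):
        # A real tracker would validate transitions against the bug life cycle.
        self.status = new_status

bug = Bug("BUG-101", "Login fails for passwords longer than 16 characters",
          Severity.MAJOR, Priority.HIGH)
bug.move_to(Status.OPEN)
print(bug)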
2. WinRunner Tool |
It has an advantage over other tools in the market when the focus is on functionality testing.
The features of the WinRunner tool are:
1. Introducing WinRunner compares automated and manual testing methods. It introduces the WinRunner testing process and familiarizes you with the WinRunner user interface.
2. Setting Up the GUI Map explains how WinRunner identifies GUI (Graphical User Interface) objects in an application and describes the two modes for organizing GUI map files.
3. Recording Tests teaches you how to record a test script and explains the basics of Test Script Language (TSL)—Mercury Interactive’s C-like programming language designed for creating scripts.
4. Synchronizing Tests shows you how to synchronize a test so that it can run successfully even when an application responds slowly to input.
5. Checking GUI Objects shows you how to create a test that checks GUI objects. You will use the test to compare the behavior of GUI objects in different versions of the sample application.
6. Checking Bitmaps shows you how to create and run a test that checks bitmaps in your application. You will run the test on different versions of the sample application and examine any differences, pixel by pixel.
7. Programming Tests with TSL shows you how to use visual programming to add functions and logic to your recorded test scripts.
List of Topics to Concentrate On: |
- Purpose of WinRunner
- Recording of scripts and running
- Recording notes
- Run modes
- GUI Map editor
- GUI Checkpoints
- Bitmap Checkpoints
- Database Checkpoints
- Reading of texts from browsers
- Exception handling
- Data Driven Testing
- Data parameterizations
- Connecting to Test Director
- Function Generator
- Compiled Modules
- Test Script Language (TSL)
- Sample scripts on web applications using TSL.
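Data-driven testing and data parameterization, listed above, mean driving one script with many rows of externally stored test data. The sketch below shows the idea in plain Python with an inline CSV table rather than WinRunner's TSL data tables; the column names and the attempt_login stand-in are hypothetical.

import csv
import io

# Hypothetical data table; in WinRunner this would live in an external data file.
DATA_TABLE = """username,password,expected
valid_user,correct_pw,success
valid_user,wrong_pw,failure
,correct_pw,failure
"""

def attempt_login(username, password):
    """Stand-in for the application under test."""
    return "success" if username == "valid_user" and password == "correct_pw" else "failure"

# One parameterized "script" executed once per data row.
for row in csv.DictReader(io.StringIO(DATA_TABLE)):
    actual = attempt_login(row["username"], row["password"])
    status = "PASS" if actual == row["expected"] else "FAIL"
    print(f"{status}: login({row['username']!r}, {row['password']!r}) -> {actual}")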
3. TestDirector Tool |
This is a product of Mercury Interactive.
TestDirector includes all the features you need to organize and manage the testing process. It enables you to create a database of tests, execute tests, and report and track defects detected in the software.
TestDirector is generally categorized into three parts:
- Planning Tests.
- Running Tests.
- Defects Tracking.
1. Planning Tests: Develop a test plan and create tests. This includes defining goals and strategy, designing tests, automating tests where beneficial, and analyzing the plan.
2. Running Test: Execute the tests created in the Planning Tests phase, and analyze the test results.
3. Defects Tracking: Monitor software quality. This includes reporting defects, determining repair priorities, assigning tasks, and tracking repair progress.
Administrator Roles in TestDirector:
To maintain the integrity of a project and to ensure thorough and effective testing, only a TestDirector administrator is given full privileges to a project database.
As an administrator you can perform the following tasks:
1. Control access to a project by defining the users who can access the database and by determining the types of tasks each user can perform.
2. Customize a project database to meet the specific requirements of a testing environment.
3. Create a database for storing project information. TestDirector supports file-based databases (Microsoft Access) and client/server based databases (Oracle, Sybase, and MS SQL).
4. Manage activities such as upgrading a database, ensuring database connectivity, repairing corrupted files, querying a database, and deleting a database.
List of Topics to be Concentrated: |
- Purpose of TestDirector
- Creating a Test Plan in TestDirector
Creating Folder/Subfolders
Creating manual test cases
Saving TSL Scripts
- Running of tests
Creation of Test sets
Running of test sets
Running WinRunner scripts from TestDirector
- Defect Tracking
Adding of bug
Modifying a Bug Cycle
Mailing of a Bug to Remote user
- Report Generation
Test runs
Test steps
Customize reports
Bug Reports
- Analyzing using graphs
Bug “severity versus priority”
Bug “status versus assigned bug”
Other graphs
- Customizing the Test Director
Creation of a Database
Creation of new users
Creation of new lists
4. LoadRunner Tool |
LoadRunner is Mercury Interactive's tool for testing the performance of client/server systems. LoadRunner stresses your entire client/server system to isolate and identify potential client, network, and server bottlenecks. LoadRunner enables you to test your system under controlled and peak load conditions. To generate load, LoadRunner runs thousands of Virtual Users that are distributed over a network. Using a minimum of hardware resources, these Virtual Users provide consistent, repeatable, and measurable load to exercise your client/server system just as real users would. LoadRunner's in-depth reports and graphs provide the information that you need to evaluate the performance of your client/server system.
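As a conceptual sketch of the Virtual User idea only (plain Python threading, not LoadRunner itself; the transaction, user count, and iteration count are assumed for illustration), a load test starts many concurrent users, has each repeat a transaction, and aggregates the response times:

import statistics
import threading
import time

def one_transaction():
    """Stand-in for the business transaction a Virtual User performs."""
    time.sleep(0.05)  # replace with a real request to the system under test

def virtual_user(iterations, timings, lock):
    for _ in range(iterations):
        start = time.perf_counter()
        one_transaction()
        elapsed = time.perf_counter() - start
        with lock:
            timings.append(elapsed)

VIRTUAL_USERS = 20         # LoadRunner would distribute thousands across machines
ITERATIONS_PER_USER = 10
timings, lock = [], threading.Lock()

threads = [threading.Thread(target=virtual_user, args=(ITERATIONS_PER_USER, timings, lock))
           for _ in range(VIRTUAL_USERS)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

print(f"{len(timings)} transactions in {elapsed:.2f}s "
      f"({len(timings) / elapsed:.1f} per second)")
print(f"average response {statistics.mean(timings):.3f}s, "
      f"95th percentile {statistics.quantiles(timings, n=20)[18]:.3f}s")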
List of Topics to Concentrate On: |
- Purpose of LoadRunner
- Components of LoadRunner
- Recording of LoadRunner
- Remote Command Launcher
- Agent
- Virtual User concept
- Virtual user generator
Generating scripts using VuGen
Compiling in Stand alone mode
Parameterize scripts
Changing Run time settings
- Controller
Connecting to Hosts
Adding scripts
Changing Run Time Settings
Adding virtual users
Online Monitors
Memory utilization
CPU utilization
Server Response
- Analysis
Generating performance Reports
Analyzing using graphs
Response Time
Hit Rate
Throughput
Analyze the logs
- Testing Client-Server systems.
5. SilkTest Tool |
This is a product of Segue Software.
SilkTest and QA DBTester (a companion product to SilkTest) provide powerful support for testing client/server applications and/or databases in a networked environment. Testing multiple remote applications raises the level of complexity of QA engineering above that required for stand-alone application testing.
List of Topics to Concentrate On: |
- What is SilkTest?
- Architecture of SilkTest
Host scripts
Agent
- Extension enabler
- Components of SilkTest
Test plan
Frame
Test scripts
Run test
- Recording of scripts and running
- Understanding test plan
- Understanding Frames
Identifiers
Tags
- Verifying of GUI objects
- Verifying of Bitmap objects
- Verifying methods
- Creation of test cases
- Recording of application states
- Default base state
- Data driven testing
- 4Test Language
Scripting
Sample scripts
Functions
Error Printing
6. Communication Skills. |
7. Glossary For General Testing Terms. |
This testing glossary is intended to provide a set of common terms and definitions as used in testing methodology. These definitions originate in many different industry standards and sources, such as the British Standards Institute and the IEEE. Many of these terms are in common use and therefore may have a slightly different meaning elsewhere. If more than one definition is in common use, the alternatives have been included where appropriate.
Acceptance Criteria | The definition of the results expected from the test cases used for acceptance testing. The product must meet these criteria before implementation can be approved. |
Acceptance Testing | (1) Formal testing conducted to determine whether or not a system satisfies its acceptance criteria and to enable the client to determine whether or not to accept the system. (2) Formal testing conducted to enable a user, client, or other authorized entity to determine whether to accept a system or component. |
Acceptance Test Plan | Describes the steps the client will use to verify that the constructed system meets the acceptance criteria. It defines the approach to be taken for acceptance testing activities. The plan identifies the items to be tested, the test objectives, the acceptance criteria, the testing to be performed, test schedules, entry/exit criteria, staff requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. |
Audit and Controls Testing | A functional type of test that verifies the adequacy and effectiveness of controls and completeness of data processing results. |
Auditability | A test focus area defined as the ability to provide supporting evidence to trace processing of data. |
Backup and Recovery Testing | A structural type of test that verifies the capability of the application to be restarted after a failure. |
Black Box Testing | Evaluation techniques that are executed without knowledge of the program’s implementation. The tests are based on an analysis of the specification of the component without reference to its internal workings. |
Bottom-up Testing | Approach to integration testing where the lowest level components are tested first then used to facilitate the testing of higher-level components. This process is repeated until the component at the top of the hierarchy is tested. See "Top-down". |
Boundary Value Analysis | A test case selection technique that selects test data that lies along "boundaries" or extremes of input and output possibilities. Boundary Value Analysis can apply to parameters, classes, data structures, variables, loops, etc. |
Branch Testing | A white box testing technique that requires each branch or decision point to be taken once. |
Build | (1) An operational version of a system or component that incorporates a specified subset of the capabilities that the final product will provide. Builds are defined whenever the complete system cannot be developed and delivered in a single increment. (2) A collection of programs within a system that are functionally independent. A build can be tested as a unit and can be installed independently of the rest of the system. |
Business Function | A set of related activities that comprise a stand-alone unit of business. It may be defined as a process that results in the achievement of a business objective. It is characterized by well-defined start and finish activities and a workflow or pattern. |
Causal Analysis | The evaluation of the cause of major errors, to determine actions that will prevent reoccurrence of similar errors. |
Change Control | The process, by which a change is proposed, evaluated, approved or rejected, scheduled, and tracked. |
Change Management | A process methodology to identify the configuration of a release and to manage all changes through change control, data recording, and updating of baselines. |
Change Request | A documented proposal for a change of one or more work items or work item parts. |
Condition Testing | A white box test method that requires all decision conditions be executed once for true and once for false. |
Configuration Management | (1) The process of identifying and defining the configuration items in a system, controlling the release and change of these items throughout the system life cycle, recording and reporting the status of configuration items and change requests, and verifying the completeness and correctness of configuration items. (2) A discipline applying technical and administrative direction and surveillance to (a) identify and document the functional and physical characteristics of a configuration item, (b) control changes to those characteristics, and (c) record and report change processing and implementation status. |
Conversion testing | A functional type of test that verifies the compatibility of converted programs, data and procedures with the “old” ones that are being converted or replaced. |
Coverage | The extent to which test data tests a program’s functions, parameters, inputs, paths, branches, statements, conditions, modules or data flow paths. |
Coverage Matrix | Documentation procedure to indicate the testing coverage of test cases compared to possible elements of a program environment (i.e. inputs, outputs, parameters, paths, cause-effects, equivalence partitioning, etc.). |
Continuity of Processing | A test focus area defined as the ability to continue processing if problems occur. Included is the ability to backup and recover after a failure. |
Correctness | A test focus area defined as the ability to process data according to prescribed rules. Controls over transactions and data field edits provide an assurance on accuracy and completeness of data. |
Data flow Testing | Testing in which test cases are designed based on variable usage within the code. |
Debugging | The process of locating, analyzing, and correcting suspected faults. Compare with testing. |
Decision Coverage | Percentage of decision outcomes that have been exercised through (white box) testing. |
Defect | A variance from expectations. See also Fault. |
Defect Management | A set of processes to manage the tracking and fixing of defects found during testing and to perform causal analysis. |
Documentation and Procedures Testing | A functional type of test that verifies that the interface between the system and the people works and is usable. It also verifies that the instruction guides are helpful and accurate. |
Design Review | (1) A formal meeting at which the preliminary or detailed design of a system is presented to the user, customer or other interested parties for comment and approval. (2) The formal review of an existing or proposed design for the purpose of detection and remedy of design deficiencies that could affect fitness-for-use and environmental aspects of the product, process or service, and/or for identification of potential improvements of performance, safety and economic aspects. |
Desk Check | Testing of software by the manual simulation of its execution. It is one of the static testing techniques. |
Detailed Test Plan | The detailed plan for a specific level of dynamic testing. It defines what is to be tested and how it is to be tested. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. It also includes the testing tools and techniques, test environment set up, entry and exit criteria, and administrative procedures and controls. |
Driver | A program that exercises a system or system component by simulating the activity of a higher-level component. |
Dynamic Testing | Testing that is carried out by executing the code. Dynamic testing is a process of validation by exercising a work product and observing the behavior of its logic and its response to inputs. |
Entry Criteria | A checklist of activities or work items that must be complete or exist, respectively, before the start of a given task within an activity or sub-activity. |
Environment | See Test Environment. |
Equivalence Partitioning | Portion of the component’s input or output domains for which the component’s behavior is assumed to be the same from the component’s specification. |
Error | (1) A discrepancy between a computed, observed or measured value or condition and the true specified or theoretically correct value or condition. (2) A human action that results in software containing a fault. This includes omissions or misinterpretations, etc. See Variance. |
Error Guessing | A test case selection process that identifies test cases based on the knowledge and ability of the individual to anticipate probable errors. |
Error Handling Testing | A functional type of test that verifies the system function for detecting and responding to exception conditions. Completeness of error handling determines the usability of a system and insures that incorrect transactions are properly handled. |
Execution Procedure | A sequence of manual or automated steps required to carry out part or all of a test design or execute a set of test cases. |
Exit Criteria | (1) Actions that must happen before an activity is considered complete. (2) A checklist of activities or work items that must be complete or exist, respectively, prior to the end of a given process stage, activity, or sub-activity. |
Expected Results | Predicted output data and file conditions associated with a particular test case. Expected results, if achieved, will indicate whether the test was successful or not. Generated and documented with the test case prior to execution of the test. |
Fault | (1) An accidental condition that causes a functional unit to fail to perform its required functions. (2) A manifestation of an error in software. A fault if encountered may cause a failure. Synonymous with bug. |
Full Lifecycle Testing | The process of verifying the consistency, completeness, and correctness of software and related work products (such as documents and processes) at each stage of the development life cycle. |
Function | (1) A specific purpose of an entity or its characteristic action. (2) A set of related control statements that perform a related operation. Functions are sub-units of modules. |
Function Testing | A functional type of test which verifies that each business function operates according to the detailed requirements and the external and internal design specifications. |
Functional Testing | Selecting and executing test cases based on specified function requirements without knowledge or regard of the program structure. Also known as black box testing. See "Black Box Testing". |
Functional Test Types | Those kinds of tests used to assure that the system meets the business requirements, including business functions, interfaces, usability, audit & controls, and error handling etc. See also Structural Test Types. |
Implementation | (1) A realization of an abstraction in more concrete terms; in particular, in terms of hardware, software, or both. (2) The process by which software release is installed in production and made available to end users. |
Inspection | (1) A group review quality improvement process for written material, consisting of two aspects: product (document itself) improvement and process improvement (of both document production and inspection). (2) A formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems. Contrast with walk-through. |
Installation Testing | A functional type of test that verifies that the hardware, software and applications can be easily installed and run in the target environment. |
Integration Testing | A level of dynamic testing, which verifies the proper execution of application components and does not require that the application under test interface with other applications. |
Interface / Inter-system Testing | A functional type of test, which verifies that the interconnection between applications and systems functions correctly. |
Level of Testing | Refers to the progression of software testing through static and dynamic testing. Examples of static testing levels are Project Objectives Review, Requirements Walkthrough, Design (External and Internal) Review, and Code Inspection. Examples of dynamic testing levels are Unit Testing, Integration Testing, System Testing, Acceptance Testing, Systems Integration Testing and Operability Testing. Also known as a test level. |
Lifecycle | The software development process stages. Requirements, Design, Construction (Code/Program, Test), and Implementation. |
Logical Path | A path that begins at an entry or decision statement and ends at a decision statement or exit. |
Maintainability | A test focus area defined as the ability to locate and fix an error in the system. Can also be the ability to make dynamic changes to the system environment without making system changes. |
Master Test Plan | A plan that addresses testing from a high-level system viewpoint. It ties together all levels of testing (unit test, integration test, system test, acceptance test, systems integration, and operability). It includes test objectives, test team organization and responsibilities, high-level schedule, test scope, test focus, test levels and types, test facility requirements, and test management procedures and controls. |
Operability | A test focus area defined as the effort required of support personnel to learn and operate a manual or automated system. Contrast with Usability. |
Operability Testing | A level of dynamic testing in which the operations of the system are validated in the real or closely simulated production environment. This includes verification of production settings, installation procedures and operations procedures. Operability Testing considers such factors as performance, resource consumption, adherence to standards, etc. Operability Testing is normally performed by Operations to assess the readiness of the system for implementation in the production environment. |
Operational Testing | A structural type of test that verifies the ability of the application to operate at an acceptable level of service in the production-like environment. |
Parallel Testing | A functional type of test which verifies that the same input on “old” and “new” systems produces the same results. It is more of an implementation strategy than a testing strategy. |
Path Testing | A white box testing technique that requires all code or logic paths to be executed once. Complete path testing is usually impractical and often uneconomical. |
Performance | A test focus area defined as the ability of the system to perform certain functions within a prescribed time. |
Performance Testing | A structural type of test which verifies that the application meets the expected level of performance in a production-like environment. |
Portability | A test focus area defined as ability for a system to operate in multiple operating environments. |
Problem | (1) A call or report from a user. The call or report may or may not be defect oriented. (2) A software or process deficiency found during development. (3) The inhibitors and other factors that hinder an organization's ability to achieve its goals and critical success factors. (4) An issue that a project manager has the authority to resolve without escalation. Compare to ‘defect’ or ‘error’. |
Quality Plan | A document which describes the organization, activities, and project factors that have been put in place to achieve the target level of quality for all work products in the application domain. It defines the approach to be taken when planning and tracking the quality of the application development work products to insure conformance to specified requirements and to insure the client’s expectations are met. |
Regression Testing | A functional type of test, which verifies that changes to one part of the system have not caused unintended adverse effects to other parts. |
Reliability | A test focus area defined as the extent to which the system will provide the intended function without failing. |
Requirement | (1) A condition or capability needed by the user to solve a problem or achieve an objective. (2) A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. The set of all requirements forms the basis for subsequent development of the system or system component. |
Review | A process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users or other interested parties for comment or approval. |
Root Cause Analysis | See Causal Analysis. |
Scaffolding | Temporary programs that may be needed to create data for, or receive data from, the specific program under test. This approach is called scaffolding. |
Security | A test focus area defined as the assurance that the system/data resources will be protected against accidental and/or intentional modification or misuse. |
Security Testing | A structural type of test which verifies that the application provides an adequate level of protection for confidential information and data belonging to other systems. |
Software Quality | (1) The totality of features and characteristics of a software product that bear on its ability to satisfy given needs; for example, conform to specifications. (2) The degree to which software possesses a desired combination of attributes. (3) The degree to which a customer or user perceives that software meets his or her composite expectations. (4) The composite characteristics of software that determine the degree to which the software in use will meet the expectations of the customer. |
Software Reliability | (1) The probability that software will not cause the failure of a system for a specified time under specified conditions. The probability is a function of the inputs to and use of the system as well as a function of the existence of faults in the software. The inputs to the system determine if existing faults are encountered. (2) The ability of a program to perform a required function under stated conditions for a stated period of time. |
Statement Testing | A white box testing technique that requires all code or logic statements to be executed at least once. |
Static Testing | (1) The detailed examination of a work product's characteristics to an expected set of attributes, experiences and standards. The product under scrutiny is static and not exercised and therefore its behaviour to changing inputs and environments cannot be assessed. (2) The process of evaluating a program without executing the program. See desk checking, inspection, walk-through. |
Stress / Volume Testing | A structural type of test that verifies that the application has acceptable performance characteristics under peak load conditions. |
Structural Function | Structural functions describe the technical attributes of a system. |
Structural Test Types | Those kinds of tests that may be used to assure that the system is technically sound. |
Stub | (1) A dummy program element or module used during the development and testing of a higher-level element or module. (2) A program statement substituting for the body of a program unit and indicating that the unit is or will be defined elsewhere. The inverse of Scaffolding. |
Sub-system | (1) A group of assemblies or components or both combined to perform a single function. (2) A group of functionally related components that are defined as elements of a system but not separately packaged. |
System | A collection of components organized to accomplish a specific function or set of functions. |
Systems Integration Testing | A dynamic level of testing which insures that the systems integration activities appropriately address the integration of application subsystems, integration of applications with the infrastructure, and impact of change on the current live environment. |
System Testing | A dynamic level of testing in which all the components that comprise a system are tested to verify that the system functions together as a whole. |
Test Bed | (1) A test environment containing the hardware, instrumentation tools, simulators, and other support software necessary for testing a system or system component. (2) A set of test files, (including databases and reference files), in a known state, used with input test data to test one or more test conditions, measuring against expected results. |
Test Case | (1) A set of test inputs, execution conditions, and expected results developed for a particular objective, such as to exercise a particular program path or to verify compliance with a specific requirement. (2) The detailed objectives, data, procedures and expected results to conduct a test or part of a test. |
Test Condition | A functional or structural attribute of an application, system, network, or component thereof to be tested. |
Test Conditions Matrix | A worksheet used to formulate the test conditions that, if met, will produce the expected result. It is a tool used to assist in the design of test cases. |
Test Conditions Coverage Matrix | A worksheet that is used for planning and for illustrating that all test conditions are covered by one or more test cases. Each test set has a Test Conditions Coverage Matrix. Rows are used to list the test conditions and columns are used to list all test cases in the test set. |
Test Coverage Matrix | A worksheet used to plan and cross check to insure all requirements and functions are covered adequately by test cases. |
Test Data | The input data and file conditions associated with a specific test case. |
Test Environment | The external conditions or factors that can directly or indirectly influence the execution and results of a test. This includes the physical as well as the operational environments. Examples of what is included in a test environment are: I/O and storage devices, data files, programs, JCL, communication lines, access control and security, databases, reference tables and files (version controlled), etc. |
Test Focus Areas | Those attributes of an application that must be tested in order to assure that the business and structural requirements are satisfied. |
Test Level | See Level of Testing. |
Test Log | A chronological record of all relevant details of a testing activity. |
Test Matrices | A collection of tables and matrices used to relate functions to be tested with the test cases that do so. Worksheets used to assist in the design and verification of test cases. |
Test Objectives | The tangible goals for assuring that the Test Focus areas previously selected as being relevant to a particular Business or Structural Function are being validated by the test. |
Test Plan | A document prescribing the approach to be taken for intended testing activities. The plan typically identifies the items to be tested, the test objectives, the testing to be performed, test schedules, entry / exit criteria, personnel requirements, reporting requirements, evaluation criteria, and any risks requiring contingency planning. |
Test Procedure | Detailed instructions for the setup, operation, and evaluation of results for a given test. A set of associated procedures is often combined to form a test procedures document. |
Test Report | A document describing the conduct and results of the testing carried out for a system or system component. |
Test Run | A dated, time-stamped execution of a set of test cases. |
Test Scenario | A high-level description of how a given business or technical requirement will be tested, including the expected outcome; later decomposed into sets of test conditions, each in turn, containing test cases. |
Test Script | A sequence of actions that executes a test case. Test scripts include detailed instructions for set up, execution, and evaluation of results for a given test case. |
Test Set | A collection of test conditions; test sets are created for purposes of test execution only. A test set is created such that its size is manageable to run and its grouping of test conditions facilitates testing. The grouping reflects the application build strategy. |
Test Sets Matrix | A worksheet that relates the test conditions to the test set in which the condition is to be tested. Rows list the test conditions and columns list the test sets. A checkmark in a cell indicates the test set will be used for the corresponding test condition. |
Test Specification | A set of documents that define and describe the actual test architecture, elements, approach, data and expected results. Test Specification uses the various functional and non-functional requirement documents along with the quality and test plans. It provides the complete set of test cases and all supporting detail to achieve the objectives documented in the detailed test plan. |
Test Strategy | A high level description of major system-wide activities which collectively achieve the overall desired result as expressed by the testing objectives, given the constraints of time and money and the target level of quality. It outlines the approach to be used to insure that the critical attributes of the system are tested adequately. |
Test Type | See Type of Testing. |
Testability | (1) The extent to which software facilitates both the establishment of test criteria and the evaluation of the software with respect to those criteria. (2) The extent to which the definition of requirements facilitates analysis of the requirements to establish test criteria. |
Testing | The process of exercising or evaluating a program, product, or system, by manual or automated means, to verify that it satisfies specified requirements, to identify differences between expected and actual results. |
Testware | The elements that are produced as part of the testing process. Testware includes plans, designs, test cases, test logs, test reports, etc. |
Top-down | Approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. |
Transaction Flow Testing | A functional type of test that verifies the proper and complete processing of a transaction from the time it enters the system to the time of its completion or exit from the system. |
Type of Testing | Tests a functional or structural attribute of the system. E.g. Error Handling, Usability. (Also known as test type.) |
Unit Testing | The first level of dynamic testing and is the verification of new or changed code in a module to determine whether all new or modified paths function correctly. |
Usability | A test focus area defined as the end-user effort required to learn and use the system. Contrast with Operability. |
Usability Testing | A functional type of test which verifies that the final product is user-friendly and easy to use. |
User Acceptance Testing | See Acceptance Testing. |
Validation | (1) The act of demonstrating that a work item is in compliance with the original requirement. For example, the code of a module would be validated against the input requirements it is intended to implement. Validation answers the question "Is the right system being built?” (2) Confirmation by examination and provision of objective evidence that the particular requirements for a specific intended use have been fulfilled. See "Verification". |
Variance | A mismatch between the actual and expected results occurring in testing. It may result from errors in the item being tested, incorrect expected results, invalid test data, etc. See "Error". |
Verification | (1) The act of demonstrating that a work item is satisfactory by using its predecessor work item. For example, code is verified against module level design. Verification answers the question "Is the system being built right?” (2) Confirmation by examination and provision of objective evidence that specified requirements have been fulfilled. See "Validation". |
Walkthrough | A review technique characterized by the author of the object under review guiding the progression of the review. Observations made in the review are documented and addressed. Less formal evaluation technique than an inspection. |
White Box Testing | Evaluation techniques that are executed with the knowledge of the implementation of the program. The objective of white box testing is to test the program's statements, code paths, conditions, or data flow paths. |
Work Item | A software development lifecycle work product. |
Work Product | (1) The result produced by performing a single task or many tasks. A work product, also known as a project artifact, is part of a major deliverable that is visible to the client. Work products may be internal or external. An internal work product may be produced as an intermediate step for future use within the project, while an external work product is produced for use outside the project as part of a major deliverable. (2) As related to test, software deliverable that is the object of a test, a test work item. |