FOREWORD
Beginners Guide To Software Testing introduces a practical approach to testing software. It bridges the gap between theoretical knowledge and real-world implementation. This article helps you gain insight into Software Testing: understand the technical aspects and the processes followed in a real working environment.
Who will benefit?
Beginners. For those of you who wish to mould your theoretical software engineering knowledge into a practical approach to working in the real world.
Those who wish to take up Software Testing as a profession.
Developers! This is an era where you need to be an "all-rounder". It is advantageous for developers to possess testing capabilities and to test applications beforehand. This helps reduce the overhead on the testing team.
Already a Tester! You can refresh all your testing basics and techniques and gear up for certifications in Software Testing.
An earnest suggestion: no matter which profession you choose, it is advisable that you possess the following skills:
- Good communication skills – oratory and writing
- Fluency in English
- Good Typing skills
By the time you finish reading this article, you will be aware of the techniques and processes that improve your efficiency, skills and confidence to jump-start into the field of Software Testing.
Table Of Contents
- 1. OVERVIEW
- 1.1. THE BIG PICTURE
- 1.2. WHAT IS SOFTWARE? WHY SHOULD IT BE TESTED?
- 1.3. WHAT IS QUALITY? HOW IMPORTANT IS IT?
- 1.4. WHAT EXACTLY DOES A SOFTWARE TESTER DO?
- 1.5. WHAT MAKES A GOOD TESTER?
- 1.6. GUIDELINES FOR NEW TESTERS
- 2. INTRODUCTION
- 2.1. SOFTWARE LIFE CYCLE
- 2.1.1. VARIOUS LIFE CYCLE MODELS
- 2.2. SOFTWARE TESTING LIFE CYCLE
- 2.3. WHAT IS A BUG? WHY DO BUGS OCCUR?
- 2.4. BUG LIFE CYCLE
- 2.5. COST OF FIXING BUGS
- 2.6. WHEN CAN TESTING BE STOPPED/REDUCED?
- 3. SOFTWARE TESTING LEVELS, TYPES, TERMS AND DEFINITIONS
- 3.1. TESTING LEVELS AND TYPES
- 3.2. TESTING TERMS
- 4. MOST COMMON SOFTWARE ERRORS
- 5. THE TEST PLANNING PROCESS
- 5.1. WHAT IS A TEST STRATEGY? WHAT ARE ITS COMPONENTS?
- 5.2. TEST PLANNING – SAMPLE STRUCTURE
- 5.3. MAJOR TEST PLANNING TASKS
- 6. TEST CASE DEVELOPMENT
- 6.1. GENERAL GUIDELINES
- 6.2. TEST CASE – SAMPLE STRUCTURE
- 6.3. TEST CASE DESIGN TECHNIQUES
- 6.3.1. SPECIFICATION DERIVED TESTS
- 6.3.2. EQUIVALENCE PARTITIONING
- 6.3.3. BOUNDARY VALUE ANALYSIS
- 6.3.4. STATE TRANSITION TESTING
- 6.3.5. BRANCH TESTING
- 6.3.6. CONDITION TESTING
- 6.3.7. DATA DEFINITION – USE TESTING
- 6.3.8. INTERNAL BOUNDARY VALUE TESTING
- 6.3.9. ERROR GUESSING
- 6.4. USE CASES
- 7. DEFECT TRACKING
- 7.1. WHAT IS A DEFECT?
- 7.2. WHAT ARE THE DEFECT CATEGORIES?
- 7.3. HOW IS A DEFECT REPORTED?
- 7.4. HOW DESCRIPTIVE SHOULD YOUR BUG/DEFECT REPORT BE?
- 7.5. WHAT DOES THE TESTER DO WHEN THE DEFECT IS FIXED?
- 8. TYPES OF TEST REPORTS
- 8.1. TEST ITEM TRANSMITTAL REPORT
- 8.2. TEST LOG
- 8.3. TEST INCIDENT REPORT
- 8.4. TEST SUMMARY REPORT
- 9. SOFTWARE TEST AUTOMATION
- 9.1. FACTORS DETERMINING TEST AUTOMATION
- 9.2. APPROACHES TO AUTOMATION
- 9.3. CHOOSING THE RIGHT TOOL
- 9.4. TOP TEN CHALLENGES OF SOFTWARE TEST AUTOMATION
- 10. INTRODUCTION TO SOFTWARE STANDARDS
- 10.1. CMM
- 10.2. SIX SIGMA
- 10.3. ISO
- 11. SOFTWARE TESTING CERTIFICATIONS
- 12. FACTS ABOUT SOFTWARE ENGINEERING
- REFERENCES
- INTERNET LINKS
What is Quality? How important is it?
Quality can briefly be defined as "a degree of excellence". High-quality software usually conforms to the user requirements. A customer's idea of quality may cover a breadth of features: conformance to specifications, good performance on the target platform(s)/configurations, completely meeting operational requirements (even if not specified!), compatibility with all the end-user equipment, and no negative impact on the existing end-user base at introduction time.
Quality software saves a good amount of time and money. Because it has fewer defects, it saves time during the testing and maintenance phases. Greater reliability contributes to an immeasurable increase in customer satisfaction as well as lower maintenance costs. Because maintenance represents a large portion of all software costs, the overall cost of the project will most likely be lower than that of similar projects.
Following are two cases that demonstrate the importance of software quality:
Ariane 5 crash June 4, 1996
- Maiden flight of the European Ariane 5 launcher crashed about 40 seconds after takeoff
- Loss was about half a billion dollars
- Explosion was the result of a software error
- Uncaught exception due to floating-point error: conversion from a 64-bit integer to a 16-bit signed integer applied to a larger than expected number
- Module was re-used from Ariane 4 without proper re-testing
- The error could not occur within Ariane 4's flight parameters
- No exception handler
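The root cause is easy to reproduce in miniature. Below is a minimal sketch in Python (the flight software itself was Ada; the variable name and values here are purely illustrative) of an unchecked 64-bit to 16-bit narrowing conversion:

```python
import struct

def to_int16(value: int) -> int:
    # struct.pack raises struct.error when the value exceeds the signed
    # 16-bit range (-32768..32767), analogous to the uncaught exception.
    return struct.unpack("<h", struct.pack("<h", value))[0]

horizontal_bias = 30_000     # within range: converts silently
print(to_int16(horizontal_bias))

horizontal_bias = 70_000     # out of range: raises struct.error
print(to_int16(horizontal_bias))
```

A unit test exercising values in Ariane 5's larger velocity range would likely have caught the fault on the ground.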
Mars Climate Orbiter - September 23, 1999
- The Mars Climate Orbiter disappeared as it began to orbit Mars
- The loss cost about US$125 million
- The failure was due to an error in the transfer of information between a team in Colorado and a team in California
- One team used English units (e.g., inches, feet and pounds) while the other used metric units for a key spacecraft operation
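As a sketch of how cheaply such a unit mismatch can be caught: the function below is illustrative only (the name, constant and figures are hypothetical, not NASA's actual code). Making the unit explicit in the interface, plus one cross-team consistency check, exposes the roughly 4.45x discrepancy:

```python
LBF_S_TO_N_S = 4.448222   # one pound-force second in newton-seconds

def impulse_si(impulse_lbf_s: float) -> float:
    """Convert thruster impulse from pound-force seconds to newton-seconds."""
    return impulse_lbf_s * LBF_S_TO_N_S

# If one team reports raw pound-force seconds and the other assumes SI,
# every figure is silently off by a factor of about 4.45.
assert abs(impulse_si(1.0) - 4.448222) < 1e-6
```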
What exactly does a software tester do?
Apart from exposing faults ("bugs") in a software product and confirming that the program meets its specification, as a test engineer you need to create test cases, procedures and scripts, and generate test data. You execute test procedures and scripts, analyze standards, and evaluate the results of system/integration/regression testing. You also...
- Speed up development process by identifying bugs at an early stage (e.g. specifications stage)
- Reduce the organization's risk of legal liability
- Maximize the value of the software
- Assure successful launch of the product, save money, time and reputation of the company by discovering bugs and design flaws at an early stage before failures occur in production, or in the field
- Promote continual improvement
What makes a good tester?
As software engineering is now considered a technical engineering profession, it is important that software test engineers possess certain traits, coupled with a relentless attitude, that make them stand out. Here are a few.
- Know the technology. Knowledge of the technology in which the application is developed is an added advantage for any tester. It helps in designing better, more powerful test cases based on the weaknesses or flaws of the technology. Good testers know what the technology supports and what it doesn't, so concentrating along these lines helps them break the application quickly.
- Perfectionist and a realist. Being a perfectionist will help testers spot the problem and being a realist helps know at the end of the day which problems are really important problems. You will know which ones require a fix and which ones don’t.
- Tactful, diplomatic and persuasive. Good software testers are tactful and know how to break the news to developers. They are diplomatic in convincing developers of bugs and persuade them when necessary to have the bugs fixed. It is important to be critical of the issue, not of the person who developed the application.
- An explorer. A bit of creativity and a willingness to take risks help testers venture into unknown situations and find bugs that would otherwise be overlooked.
- A troubleshooter. Troubleshooting and figuring out why something doesn't work helps testers be confident and clear in communicating defects to developers.
- Possesses people skills and tenacity. Testers can face a lot of resistance from programmers. Being socially smart and diplomatic doesn't mean being indecisive. The best testers are both: socially adept, and tenacious where it matters.
- Organized. The best testers realize that they too can make mistakes and don't take chances. They are well organized; they keep checklists, use files, facts and figures to support their findings as evidence, and double-check those findings.
- Objective and accurate. They are objective in what they report, conveying impartial and meaningful information that keeps politics and emotions out of the message. Reporting inaccurate information even once costs credibility, so good testers make sure their findings are accurate and reproducible.
- Defects are valuable. Good testers learn from them. Each defect is an opportunity to learn and improve. A defect found early costs substantially less to fix than one found at a later stage, and defects can cause serious problems if not managed properly. Learning from defects helps prevent future problems, track improvements, and improve prediction and estimation.
Guidelines for new testers
- Testing can’t show that bugs don’t exist. An important reason for testing is to prevent defects. You can perform your tests, find and report bugs, but at no point can you guarantee that there are no bugs.
- It is impossible to test a program completely. Unfortunately this is not possible even with the simplest program, because the number of inputs is very large, the number of outputs is very large, the number of paths through the software is very large, and the specification is subject to frequent changes (see the short arithmetic sketch after this list).
- You can't guarantee quality. As a software tester, you cannot test everything and are not responsible for the quality of the product. The main way a tester can fail is to fail to report accurately a defect you have observed. It is important to remember that testers seldom have direct control over quality.
- Target environment and intended end user. Anticipating and testing the application in the environment the user is expected to use is one of the major factors that should be considered. Also, considering whether the application is a single-user or a multi-user system is important. The error case of Disney's Lion King Animated Storybook CD-ROM, which failed on many customers' home computers because it had been tested on only a narrow range of configurations, illustrates the point.
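To see why complete testing is impossible, consider the arithmetic. The sketch below is illustrative; the assumed test throughput is invented:

```python
# Even a trivial function taking two 32-bit integers has 2**64 input pairs.
pairs = (2 ** 32) ** 2
tests_per_second = 1_000_000_000   # an optimistic assumed throughput
years = pairs / tests_per_second / (60 * 60 * 24 * 365)
print(f"{pairs:.3e} input pairs -> about {years:,.0f} years at 1e9 tests/sec")
```

Exhaustive input coverage alone would take centuries, before even considering paths and timing.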
2. Introduction
Software Life Cycle
The software life cycle typically includes the following phases: requirements analysis, design, coding, testing, installation and maintenance. In between, there can also be a need to provide operations and support activities for the product.
Requirements Analysis. Software organizations provide solutions to customer requirements by developing appropriate software that best suits the customer's specifications. Thus, the life of software starts with the origin of requirements. Very often, these requirements are vague, emergent and subject to change.
Analysis is performed to: conduct an in-depth analysis of the proposed project, evaluate technical feasibility, discover how to partition the system, identify which areas of the requirements need to be elaborated with the customer, identify the impact of changes to the requirements, and identify which requirements should be allocated to which components.
Design and Specifications. The outcome of requirements analysis is the requirements specification. Using this, the overall design for the intended software is developed.
Activities in this phase - Perform Architectural Design for the software, Design Database (If applicable), Design User Interfaces, Select or Develop Algorithms (If Applicable), Perform Detailed Design.
Coding. The development process tends to run iteratively through these phases rather than linearly; several models (spiral, waterfall etc.) have been proposed to describe this process.
Activities in this phase - Create Test Data, Create Source Code, Generate Object Code, Create Operating Documentation, Plan Integration, Perform Integration
Testing. The process of using the developed system with the intent of finding errors. Defects/flaws/bugs found at this stage are sent back to the developer for a fix and have to be re-tested. This phase iterates until the bugs are fixed and the software meets the requirements.
Activities in this phase - Plan Verification and Validation, Execute Verification and Validation Tasks, Collect and Analyze Metric Data, Plan Testing, Develop Test Requirements, Execute Tests
Installation. The developed and tested software finally has to be installed at the client's site. Careful planning has to be done to avoid problems for the user after installation.
Activities in this phase - Plan Installation, Distribution of Software, Installation of Software, Accept Software in Operational Environment.
Operation and Support. Support activities are usually performed by the organization that developed the software. Both parties usually agree on these activities before the system is developed.
Activities in this phase - Operate the System, Provide Technical Assistance and Consulting, Maintain Support Request Log.
Maintenance. The process does not stop once the software is completely implemented and installed at the user's site; this phase undertakes the development of new features, enhancements, etc.
Activities in this phase - Reapplying Software Life Cycle.
Various Life Cycle Models
The way you approach a particular application for testing greatly depends on the life cycle model it follows, because each life cycle model places emphasis on different aspects of the software: certain models provide good scope and time for testing, whereas others don't. So the number of test cases developed, the features covered, and the time spent on each issue depend on the life cycle model the application follows. No matter what the life cycle model is, every application undergoes the same phases described above as its life cycle. Following are a few software life cycle models, with their strengths and weaknesses.
Waterfall Model
Strengths:
- Emphasizes completion of one phase before moving on
- Emphasizes early planning, customer input, and design
- Emphasizes testing as an integral part of the life cycle
- Provides quality gates at each life cycle phase
Weaknesses:
- Depends on capturing and freezing requirements early in the life cycle
- Depends on separating requirements from design
- Feedback is only from the testing phase to previous stages
- Not feasible in some organizations
- Emphasizes products rather than processes

Prototyping Model
Strengths:
- Requirements can be set earlier and more reliably
- Requirements can be communicated more clearly and completely between developers and clients
- Requirements and design options can be investigated quickly and at low cost
- More requirements and design faults are caught early
Weaknesses:
- Requires a prototyping tool and expertise in using it – a cost for the development organization
- The prototype may become the production system

Spiral Model
Strengths:
- Promotes reuse of existing software in the early stages of development
- Allows quality objectives to be formulated during development
- Provides preparation for the eventual evolution of the software product
- Eliminates errors and unattractive alternatives early
- Balances resource expenditure
- Doesn't involve separate approaches for software development and software maintenance
- Provides a viable framework for integrated hardware-software system development
Weaknesses:
- Needs, or is usually associated with, Rapid Application Development, which is very difficult in practice
- Is more difficult to manage and needs a very different approach from the waterfall model (the waterfall model can be managed with techniques such as Gantt charts)
Software Testing Life Cycle
The Software Testing Life Cycle consists of seven (generic) phases: 1) Planning, 2) Analysis, 3) Design, 4) Construction, 5) Testing Cycles, 6) Final Testing and Implementation, and 7) Post Implementation. Each phase in the life cycle is described with its respective activities.
Planning. Plan the high-level test plan and the QA plan (quality goals); identify reporting procedures, problem classification, acceptance criteria, databases for testing, measurement criteria (defect quantities/severity levels and defect origins) and project metrics; and begin the schedule for project testing. Also, plan to maintain all test cases (manual or automated) in a database.
- Developers are very often subject to pressure related to timelines, frequently changing requirements, an increase in the number of bugs, etc.
- Designing and re-designing, UI interfaces, integration of modules, database management – all these add to the complexity of the software and the system as a whole.
- Fundamental problems with software design and architecture can cause problems in programming. Developed software is prone to error, as programmers can make mistakes too. As a tester you can check for data reference/declaration errors, control flow errors, parameter errors, input/output errors, etc. (a small example of such a coding slip appears after this list).
- Rescheduling of resources, re-doing or discarding already completed work, and changes in hardware/software requirements can affect the software too. Assigning a new developer to the project midway can cause bugs, particularly if proper coding standards were not followed, code documentation is poor, or knowledge transfer is ineffective. Discarding a portion of the existing code might leave its trail behind in other parts of the software; overlooking or failing to eliminate such code can cause bugs. Serious bugs can especially occur in larger projects, as it gets tougher to identify the problem area.
- Programmers usually tend to rush as the deadline approaches. This is when most bugs occur. It is possible that you will be able to spot bugs of all types and severities.
- The complexity of keeping track of all the bugs can itself cause bugs. This gets harder when a bug has a very complex life cycle, i.e., when the number of times it has been closed, re-opened, not accepted, ignored, etc. goes on increasing.
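As a flavor of the coding slips listed above, here is a classic off-by-one control-flow bug; the function and data are hypothetical:

```python
def last_n_items(items: list, n: int) -> list:
    # Bug: the slice end of -1 silently drops the final element.
    return items[-n:-1]            # correct version: items[-n:]

# A simple boundary-value test exposes it immediately:
assert last_n_items([1, 2, 3, 4], 2) == [3, 4]   # fails: returns [3]
```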
Bug Life Cycle
The Bug Life Cycle starts when an unintended software behavior is found and ends when the assigned developer fixes the bug. A bug, when found, should be communicated and assigned to a developer who can fix it. Once fixed, the problem area should be re-tested, and it should be confirmed that the fix did not create problems elsewhere. In most cases the life cycle gets very complicated and difficult to track, making it imperative to have a bug/defect tracking system in place.
See Chapter 7 – Defect Tracking
Following are the different phases of a Bug Life Cycle (a small state-machine sketch follows the list):
Open: A bug is in the Open state when a tester identifies a problem area.
Accepted: The bug is assigned to a developer for a fix; the developer accepts it if it is valid.
Not Accepted/Won't Fix: If the developer considers the bug low-level or does not accept it as a bug, it is pushed into the Not Accepted/Won't Fix state. Such bugs are assigned to the project manager, who decides whether the bug needs a fix. If it does, it is assigned back to the developer; if it doesn't, it is assigned back to the tester, who closes the bug.
Pending: A bug accepted by the developer may not be fixed immediately. In such cases, it can be put in the Pending state.
Fixed: The programmer fixes the bug and resolves it as Fixed.
Close: The fixed bug is assigned back to the tester, who puts it in the Close state.
Re-Open: Fixed bugs can be re-opened by testers if the fix produces problems elsewhere.
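The life cycle above can be summarized as a state machine. The sketch below is illustrative; the transition map is an assumption, and real bug tracking tools define their own:

```python
# Valid moves between the bug states described above (illustrative).
VALID_TRANSITIONS = {
    "Open":         {"Accepted", "Not Accepted"},
    "Not Accepted": {"Accepted", "Close"},   # project manager decides
    "Accepted":     {"Pending", "Fixed"},
    "Pending":      {"Fixed"},
    "Fixed":        {"Close", "Re-Open"},    # tester verifies the fix
    "Re-Open":      {"Accepted", "Not Accepted"},
    "Close":        set(),
}

def transition(current: str, new: str) -> str:
    """Move a bug to a new state, rejecting moves the process forbids."""
    if new not in VALID_TRANSITIONS[current]:
        raise ValueError(f"illegal transition: {current} -> {new}")
    return new

state = transition("Open", "Accepted")   # OK
state = transition(state, "Fixed")       # OK
# transition("Close", "Fixed") would raise ValueError
```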
A Bug Life Cycle diagram typically shows these states and how a bug can be tracked through them using a Bug Tracking Tool (BTT).
Cost of fixing bugs
The cost of fixing a bug grows roughly tenfold with each stage of development it survives. A bug found and fixed during the early stages – the requirements or product specification stage – can be fixed by a brief interaction with the people concerned and might cost next to nothing.
During coding, a swiftly spotted mistake may take very little effort to fix. During integration testing, it costs the paperwork of a bug report and a formally documented fix, as well as the delay and expense of a re-test.
During system testing it costs even more time and may delay delivery. Finally, during operations it may cause anything from a nuisance to a system failure, possibly with catastrophic consequences in a safety-critical system such as an aircraft or an emergency service.
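To make the tenfold rule concrete, here is a small illustration; the cost units are invented, not measured figures:

```python
# Illustrative only: one cost unit at the requirements stage, growing
# tenfold at each later stage the bug survives into.
stages = ["requirements", "design", "coding", "system test", "operation"]
for i, stage in enumerate(stages):
    print(f"{stage:>12}: {10 ** i:>6} cost units")
```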
When can testing be stopped/reduced?
It is difficult to determine when exactly to stop testing. Here are a few common factors that help you decide when you can stop or reduce testing:
- Deadlines (release deadlines, testing deadlines, etc.)
- Test cases completed with certain percentage passed
- Functional Testing: Verifying that a module functions as stated in the specification and establishing confidence that a program does what it is supposed to do.
- End-to-end Testing: Similar to system testing - testing a complete application in a situation that mimics real world use, such as interacting with a database, using network communication, or interacting with other hardware, application, or system.
- Sanity Testing: Sanity testing is performed whenever cursory testing is sufficient to prove the application is functioning according to specifications. This level of testing is a subset of regression testing. It normally includes testing basic GUI functionality to demonstrate connectivity to the database, application servers, printers, etc.
- Regression Testing: Testing with the intent of determining if bug fixes have been successful and have not created any new problems (a short sketch follows this list).
- Acceptance Testing: Testing the system with the intent of confirming readiness of the product and customer acceptance. Also known as User Acceptance Testing.
- Ad hoc Testing: Testing without a formal test plan or outside of a test plan. On some projects this type of testing is carried out as an addition to formal testing. Sometimes, if testing occurs very late in the development cycle, this will be the only kind of testing that can be performed; it is usually done by skilled testers. Ad hoc testing is sometimes referred to as exploratory testing.
- Configuration Testing: Testing to determine how well the product works with a broad range of hardware/peripheral equipment configurations as well as on different operating systems and software.
- Load Testing: Testing with the intent of determining how well the product handles competition for system resources. The competition may come in the form of network traffic, CPU utilization or memory allocation.
- Stress Testing: Testing done to evaluate the behavior when the system is pushed beyond the breaking point. The goal is to expose the weak links and to determine if the system manages to recover gracefully.
- Performance Testing: Testing with the intent of determining how efficiently a product handles a variety of events. Automated test tools geared specifically to test and fine-tune performance are used most often for this type of testing.
- Usability Testing: Usability testing is testing for 'user-friendliness'. A way to evaluate and measure how users interact with a software product or site. Tasks are given to users and observations are made.
- Installation Testing: Testing with the intent of determining if the product is compatible with a variety of platforms and how easily it installs.
- Recovery/Error Testing: Testing how well a system recovers from crashes, hardware failures, or other catastrophic problems.
- Security Testing: Testing of database and network software in order to keep company data and resources secure from mistaken/accidental users, hackers, and other malevolent attackers.
- Penetration Testing: Penetration testing is testing how well the system is protected against unauthorized internal or external access, or willful damage. This type of testing usually requires sophisticated testing techniques.
- Compatibility Testing: Testing used to determine whether other system software components such as browsers, utilities, and competing software will conflict with the software being tested.
- Exploratory Testing: Any testing in which the tester dynamically changes what they're doing for test execution, based on information they learn as they're executing their tests.
- Comparison Testing: Testing that compares software weaknesses and strengths to those of competitors' products.
- Alpha Testing: Testing after code is mostly complete or contains most of the functionality and prior to reaching customers. Sometimes a selected group of users are involved. More often this testing will be performed in-house or by an outside testing firm in close cooperation with the software engineering department.
- Beta Testing: Testing after the product is code complete. Betas are often widely distributed or even distributed to the public at large.
- Gamma Testing: Testing of software that has all the required features but has not gone through all the in-house quality checks.
- Mutation Testing: A method of determining test suite thoroughness by measuring the extent to which the test cases can discriminate the program from slight variants (mutants) of the program.
- Independent Verification and Validation (IV&V): The process of exercising software with the intent of ensuring that the software system meets its requirements and user expectations and doesn't fail in an unacceptable manner. The individual or group doing this work is not part of the group or organization that developed the software.
- Pilot Testing: Testing that involves the users just before actual release to ensure that users become familiar with the release contents and ultimately accept it. Typically involves many users, is conducted over a short period of time and is tightly controlled. (See beta testing)
- Parallel/Audit Testing: Testing where the user reconciles the output of the new system to the output of the current system to verify the new system performs the operations correctly.
- Glass Box/Open Box Testing: Glass box testing is the same as white box testing. It is a testing approach that examines the application's program structure, and derives test cases from the application's program logic.
- Closed Box Testing: Closed box testing is the same as black box testing: a type of testing that considers only the functionality of the application.
- Bottom-up Testing: A technique for integration testing in which low-level components are tested first. A test engineer creates and uses test drivers to stand in for the higher-level components that have not yet been developed, so that the low-level components can be called and tested.
- Smoke Testing: A quick, non-exhaustive test of a new build's major functions, conducted to establish that the build is stable enough for further testing.
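The regression testing entry above refers to this sketch, written in pytest style; the discount() function, the bug number and the expected values are all hypothetical:

```python
# A minimal regression check: re-run the exact scenario from a previously
# fixed bug report to confirm the fix has not regressed.
def discount(price: float, rate: float) -> float:
    return round(price * (1 - rate), 2)

def test_bug_1234_stays_fixed():
    assert discount(100.0, 0.15) == 85.0
```

Run on every build, such tests turn old bug reports into a permanent safety net.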
Testing Terms
- Bug: A software bug may be defined as a coding error that causes an unexpected defect, fault or flaw. In other words, if a program does not perform as intended, it is most likely a bug.
- Error: A mismatch between the program and its specification is an error in the program.
- Defect: A defect is a variance from a desired product attribute (it can be wrong, missing or extra data). It can be of two types – a defect in the product or a variance from customer/user expectations. It is a flaw in the software system that has no impact until it affects the user/customer or the operational system. By some estimates, 90% of all defects are caused by process problems.
- Failure: A defect that causes an error in operation or negatively impacts a user/ customer.
- Quality Assurance: Oriented towards preventing defects. Quality Assurance ensures that all parties concerned with the project adhere to the processes and procedures, standards and templates, and test readiness reviews.
- Quality Control: Quality control, or quality engineering, is a set of measures taken to ensure that defective products or services are not produced and that the design meets performance requirements.
- Verification: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
- Validation: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verifications are completed.
5. The Test Planning Process
What is a Test Strategy? What are its Components?
Test Policy - A document characterizing the organization’s philosophy towards software testing.
Test Strategy - A high-level document defining the test phases to be performed and the testing within those phases for a programme. It defines the process to be followed in each project, setting the standards for the processes, documents and activities that every project should follow.
For example, if a product is given for testing, you should decide if it is better to use black-box testing or white-box testing and if you decide to use both, when will you apply each and to which part of the software? All these details need to be specified in the Test Strategy.
Project Test Plan - a document defining the test phases to be performed and the testing within those phases for a particular project.
A Test Strategy should cover more than one project and should address the following issues: an approach to testing high-risk areas first, planning for testing, how to improve the process based on previous testing, the environments/data used, test management (configuration management and problem management), which metrics are followed, whether the tests will be automated and, if so, which tools will be used, what the testing stages and testing methods are, the post-testing review process, and templates.
Test planning needs to start as soon as the project requirements are known. The first document that needs to be produced then is the Test Strategy/Testing Approach that sets the high level approach for testing and covers all the other elements mentioned above.
Test Planning – Sample Structure
Once the approach is understood, a detailed test plan can be written. Such a plan can be written in different styles; test plans can differ completely from project to project within the same organization.
IEEE SOFTWARE TEST DOCUMENTATION Std 829-1998 - TEST PLAN
Purpose To describe the scope, approach, resources, and schedule of the testing activities. To identify the items being tested, the features to be tested, the testing tasks to be performed, the personnel responsible for each task, and the risks associated with this plan.
OUTLINE
A test plan shall have the following structure:
- Test plan identifier: A unique identifier assigned to the test plan.
- Introduction: Summarizes the software items and features to be tested and the need for them to be included.
- Test items: Identify the test items, including their version/revision level and transmittal media.
- Features to be tested
- Features not to be tested
- Approach
- Item pass/fail criteria
- Suspension criteria and resumption requirements
- Test deliverables
- Testing tasks
- Environmental needs
- Responsibilities
- Staffing and training needs
- Schedule
- Risks and contingencies
- Approvals
Major Test Planning Tasks
Like any other process in software testing, the major tasks in test planning are to: develop the test strategy, define critical success factors, define test objectives, identify needed test resources, plan the test environment, define test procedures, identify functions to be tested, identify interfaces with other systems or components, write test scripts, define test cases, design test data, build the test matrix, determine test schedules, assemble information, and finalize the plan.