Guideline: Important Decisions in Test
This guideline describes important things to consider when tailoring the Test aspects of the process.
Relationships
Related Elements
Tailor the Development Process for the Project
Test
Main Description
Decide How to Use Work Products
Decide which work products will be used and how they will be used. In addition to identifying the work products, it is important to tailor each one to fit the needs of the project.
The list below specifies which Test work products are recommended and which are considered optional (that is, they may only be used in certain cases). Each entry gives the work product, its purpose, and its tailoring guidance (Optional or Recommended). For additional tailoring considerations, see the tailoring section of the work product description page.
Work Product: Test Evaluation Summary
Purpose: Summarizes the Test Results for use primarily by the management team and other stakeholders external to the test team.
Tailoring: Recommended for most projects. Where the project culture is relatively informal, it may be appropriate simply to record test results and not create formal evaluation summaries. In other cases, Test Evaluation Summaries can be included as a section within other assessment work products, such as the Iteration Assessment or Review Record.
Work Product: Test Results
Purpose: The analyzed result determined from the raw data in one or more Test Logs.
Tailoring: Recommended. Most test teams retain some form of reasonably detailed record of the results of testing. Manual testing results are usually recorded directly here and combined with the distilled Test Logs from automated tests. In some cases, test teams go directly from the Test Logs to producing the Test Evaluation Summary.
Work Product: Test Log
Purpose: The raw data output during test execution, typically produced by automated tests.
Tailoring: Optional. Many projects that perform automated testing will have some form of Test Log. Where projects differ is whether the Test Logs are retained or discarded after the Test Results have been determined. You might retain Test Logs to satisfy audit requirements, to analyze how the raw test output data changes over time, or because you are uncertain at the outset of all the analysis you may be required to provide.
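As a concrete illustration, here is a minimal Java sketch of distilling a raw Test Log into Test Results. The log format (one line per test, ending in PASS or FAIL), the file name, and the class name are assumptions made for the example, not part of this guidance.

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class TestLogDistiller {
        public static void main(String[] args) throws IOException {
            // Read the raw Test Log produced by an automated run (hypothetical format).
            List<String> rawLog = Files.readAllLines(Path.of("test.log"));
            long passed = rawLog.stream().filter(line -> line.endsWith("PASS")).count();
            long failed = rawLog.stream().filter(line -> line.endsWith("FAIL")).count();
            // The distilled counts are what would be recorded as Test Results; the raw
            // log itself can then be retained (for audit or trend analysis) or discarded.
            System.out.printf("Test Results: %d passed, %d failed, %d executed%n",
                    passed, failed, passed + failed);
        }
    }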
Work Product: Test Suite
Purpose: Used to group individual related tests (Test Scripts) together in meaningful subsets.
Tailoring: Recommended for most projects. Also needed to define any Test Script execution sequences that are required for tests to work correctly.
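A minimal sketch of one way to realize a Test Suite, assuming JUnit 4 (any comparable framework would do); the test class names are invented.

    import org.junit.runner.RunWith;
    import org.junit.runners.Suite;

    // The suite both groups the related Test Scripts and fixes their execution
    // sequence: JUnit 4 runs the listed classes in the order given.
    @RunWith(Suite.class)
    @Suite.SuiteClasses({
        CreateAccountTest.class,   // must run before deposits can be tested
        DepositFundsTest.class,
        CloseAccountTest.class
    })
    public class AccountLifecycleSuite {
        // Intentionally empty: the annotations above define the suite.
    }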
Work Product: Test-Ideas List
Purpose: An enumerated list of ideas, often partially formed, to be considered as useful tests to conduct.
Tailoring: Recommended for most projects. In some cases these lists are informally defined and discarded once Test Scripts or Test Cases have been defined from them.
Work Product: Test Strategy
Purpose: Defines the strategic plan for how the test effort will be conducted against one or more aspects of the target system.
Tailoring: Recommended for most projects. A single Test Strategy per project, or per phase within a project, is recommended in most cases. Optionally, you might reuse existing strategies where appropriate, or further subdivide the Test Strategies based on the type of testing being conducted.
Work Product: Test Plan (Iteration)
Purpose: Defines finer-grained testing goals, objectives, motivations, approach, resources, schedule, and work products that govern an iteration.
Tailoring: Recommended for most projects. A separate Test Plan per iteration is recommended to define the specific, fine-grained test strategy. Optionally, you can include the Test Plan as a section within the Iteration Plan.
Work Product: Test Plan (Master)
Purpose: Defines high-level testing goals, objectives, approach, resources, schedule, and work products that govern a phase or the entire lifecycle.
Tailoring: Optional, but useful for most projects. A Master Test Plan defines the high-level strategy for the test effort over large parts of the software development lifecycle. Optionally, you can include it as a section within the Software Development Plan. Consider whether to maintain a "Master" Test Plan in addition to the "Iteration" Test Plans: the Master Test Plan covers mainly logistic and process enactment information that typically relates to the entire project lifecycle, and it is therefore unlikely to change between iterations.
Work Product: Test Script, Test Data
Purpose: The realization or implementation of the test: the Test Script embodies the procedural aspects, and the Test Data the defining characteristics.
Tailoring: Recommended for most projects. Where projects differ is how formally these work products are treated. In some cases they are informal and transitory, and the test team is judged on other criteria. In other cases, especially with automated tests, the Test Scripts and associated Test Data (or some subset thereof) are regarded as major deliverables of the test effort.
Work Product: Test Case
Purpose: Defines a specific set of test inputs, execution conditions, and expected results. Documenting Test Cases allows them to be reviewed for completeness and correctness, and considered before implementation effort is planned and expended. This is most useful where the inputs, execution conditions, and expected results are particularly complex.
Tailoring: On most projects, we recommend defining Test Cases where the conditions required to conduct a specific test are complex or extensive. You will also need to document Test Cases where they are a contractually required deliverable. In most other cases we recommend maintaining the Test-Ideas List and the implemented Test Scripts instead of detailed textual Test Cases. Some projects simply outline Test Cases at a high level and defer the details to the Test Scripts. Another common style is to document the Test Case information as comments within the Test Scripts.
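For illustration, here is a sketch of that comment-based style, assuming JUnit 4; the Account class and the scenario are invented.

    import static org.junit.Assert.assertEquals;

    import org.junit.Test;

    public class WithdrawalTest {
        /**
         * Test Case: withdrawal within the available balance.
         * Inputs: opening balance 100.00, withdrawal amount 40.00.
         * Execution conditions: account is open and not frozen.
         * Expected results: withdrawal accepted; closing balance 60.00.
         */
        @Test
        public void withdrawalWithinBalanceReducesBalance() {
            Account account = new Account(100.00);   // hypothetical class under test
            account.withdraw(40.00);
            assertEquals(60.00, account.getBalance(), 0.001);
        }
    }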
Work Product: Workload Analysis Model
Purpose: A specialized type of Test Case, used to define a representative workload so that quality risks associated with the system operating under load can be assessed.
Tailoring: Recommended for most systems, especially those where system performance under load must be evaluated or where there are other significant quality risks associated with system operation under load. Not usually required for systems that will be deployed on a standalone target system.
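A Workload Analysis Model is normally a document, but its core content can be sketched as data. The following Java sketch is purely illustrative; every name and figure in it is an assumption.

    import java.util.Map;

    public class OrderSystemWorkload {
        // Representative concurrent actors by type.
        static final Map<String, Integer> CONCURRENT_USERS =
                Map.of("browser", 400, "buyer", 80, "administrator", 5);

        // Operation mix: relative frequency of each operation under load.
        static final Map<String, Double> OPERATION_MIX =
                Map.of("searchCatalog", 0.60, "placeOrder", 0.30, "generateReport", 0.10);

        // Acceptance threshold the load test is evaluated against.
        static final double MAX_MEAN_RESPONSE_SECONDS = 2.0;
    }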
Work Product: Testability Classes in the Design Model; Testability Elements in the Implementation Model
Purpose: If the project has to develop significant additional specialized behavior to accommodate and support testing, these concerns are represented by including Testability Classes in the Design Model and Testability Elements in the Implementation Model.
Tailoring: Where required. Stubs are a common category of Test Classes and Test Components.
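A minimal sketch of a stub as a testability element follows; the PaymentGateway interface and its stub are invented for illustration.

    // Production interface implemented by the real component.
    public interface PaymentGateway {
        boolean authorize(String cardNumber, double amount);
    }

    // Testability element: a stub with a canned response, so the classes under
    // test can be exercised without a live payment service.
    class StubPaymentGateway implements PaymentGateway {
        @Override
        public boolean authorize(String cardNumber, double amount) {
            return true;   // always authorizes; real gateway behavior is out of scope
        }
    }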
Work Product: Test Automation Architecture
Purpose: Provides an architectural overview of the test automation system, using a number of different architectural views to depict different aspects of the system.
Tailoring: Optional. Recommended on projects where the test architecture is relatively complex, where a large number of staff will collaborate on building automated tests, or where the test automation system is expected to be maintained over a long period of time. In some cases this might simply be a whiteboard diagram that is recorded centrally for interested parties to consult.
Work Product: Test Interface Specification
Purpose: Defines a set of behaviors required of a classifier (specifically, a Class, Subsystem, or Component) for the purposes of testing (testability). Common types include test access, stubbed behavior, diagnostic logging, and test oracles.
Tailoring: Optional. On many projects there is sufficient accessibility for testing in the visible operations on classes, user interfaces, and so on, making dedicated test interfaces unnecessary. Common reasons to create Test Interface Specifications include UI extensions that allow GUI test tools to interact with the application, and diagnostic message-logging routines, especially for batch processes.
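As an illustration of the diagnostic-logging type, here is a sketch of what such a specification might require a component to expose; the interface and method names are invented.

    // What a batch component must expose so tests can observe internal progress.
    public interface DiagnosticLogging {
        /** Enable or disable verbose diagnostic output for a test run. */
        void setDiagnosticsEnabled(boolean enabled);

        /** Report internal state, for example records processed in a batch step. */
        String diagnosticSnapshot();
    }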
Decide How to Review Work Products
This section gives some guidelines to help you decide how you should review the test work products. For general
guidance, see Guideline: Review Levels.
Defects
The treatment of Defect reviews depends very much on context; however, defects are generally treated as Informal, Formal-Internal, or Formal-External. This review process is often enforced, or at least assisted, by workflow management in a defect-tracking system. As a general comment, the level of review formality often relates to the perceived severity or impact of the defect, although factors such as project culture and level of ceremony also affect the choice of review handling.
In some cases you may need to consider separating the handling of defects (also known as symptoms or failures) from faults, the actual sources of the errors. For small projects, you can typically manage by tracking only the defects and handling the faults implicitly. However, as the system grows in complexity, it may be beneficial to separate the management of defects from faults. For example, several defects may be caused by the same fault; when that fault is fixed, you need to find the reported defects and inform the users who submitted them, which is only possible if defects and faults can be identified separately. A sketch of this separation follows.
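To make the relationship concrete, here is an illustrative Java sketch (not the schema of any real defect-tracking tool) in which several Defect records reference one Fault.

    import java.util.List;

    // A reported symptom, as submitted by a user.
    record Defect(int id, String submitter, String symptom, int faultId) {}

    // The underlying source of the error; one fault may explain many defects.
    record Fault(int id, String rootCause) {}

    class DefectTracking {
        // When a fault is fixed, locate every defect it caused so that each
        // submitter can be informed of the fix.
        static List<Defect> defectsCausedBy(Fault fault, List<Defect> allDefects) {
            return allDefects.stream()
                    .filter(d -> d.faultId() == fault.id())
                    .toList();
        }
    }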
Test Plan and Test Strategy
In any project where the testing is nontrivial, you will need some form of Test Plan or Strategy. Generally you'll need
a Test Plan for each iteration and some form of governing Test Strategy. Optionally you might create and maintain a
Master Test Plan. In many cases, these work products are reviewed as Informal; that is, they are reviewed, but
not formally approved. Where testing visibility is important to stakeholders external to the test team, these work products should be treated as Formal-Internal or even Formal-External.
Test Scripts
Test Scripts are usually treated as Informal; that is, they are approved by someone within the test team. If the
Test Scripts are to be used by many testers, and shared or reused for many different tests, they should be treated as
Formal-Internal.
Test Cases
Test Cases are created by the test team and, depending on context, are typically reviewed using an Informal process or simply not reviewed at all. Where appropriate, Test Cases might be approved by other team members, in which case they can be treated as Formal-Internal, or by external stakeholders, in which case they are Formal-External.
As a general heuristic, we recommend formally reviewing only those Test Cases that require it, which will generally be a small subset representing the most significant tests. For example, where a customer wants to validate a product before it is released, some subset of the Test Cases could be selected as the basis for that validation. These Test Cases should be treated as Formal-External.
Test work products in design and implementation
Testability Classes are found in the Design Model, and Testability Elements in the Implementation Model. There are also
two other related (although not specific to test) work products: Packages in the Design Model, and Subsystems in the
Implementation Model.
Although these are design and implementation work products, they are created to support testing of the software. The natural place to keep them is with the other design and implementation work products. Remember to name or otherwise label them so that they are clearly separated from the design and implementation of the core system. Review these work products by following the review procedures for Design and Implementation work products.
Decide on Iteration Approval Criteria
As you enter each iteration, strive to define clearly, up front, how the test effort will be judged to have been sufficient and on what basis that judgment will be made. Do this in discussion with the individual or group responsible for making the approval decision.
The following are examples of ways to handle iteration approval:
The project management team approves the iteration and assesses the testing effort by reviewing the test evaluation
summaries.
The customer approves the iteration by reviewing the test evaluation summaries.
The customer approves the iteration based on the results of a demonstration that exercises a certain subset of the total tests. This subset of tests should be defined and agreed beforehand, preferably early in the iteration. These tests are treated as Formal-External and are often referred to as acceptance tests.
The customer approves the system quality by conducting their own independent tests. Again, the nature of these tests should be clearly defined and agreed beforehand, preferably early in the iteration. These tests are treated as Formal-External and are often referred to as acceptance tests. A sketch of marking such a subset follows this list.
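One way to mark an agreed acceptance subset is sketched below, assuming JUnit 4 categories; the marker interface and test names are invented.

    import org.junit.Test;
    import org.junit.experimental.categories.Category;

    interface AcceptanceTest {}   // marker for the agreed Formal-External subset

    public class OrderProcessingTest {
        @Test
        @Category(AcceptanceTest.class)
        public void customerCanPlaceAndTrackOrder() {
            // Part of the subset agreed with the customer early in the iteration;
            // exercised in the approval demonstration.
        }

        @Test
        public void internalCacheIsInvalidatedOnUpdate() {
            // Ordinary test: not part of the acceptance subset.
        }
    }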
This is an important decision: you cannot reach a goal if you don't know what it is.
© Copyright IBM Corp. 1987, 2006. All Rights Reserved.