Task: Determine Test Results
This task describes how to accurately record the test findings and what kind of follow-up is needed.
Purpose
The purpose of this task is to:
Make ongoing summary evaluations of the perceived quality of the product
Identify and capture the detailed Test Results
Propose appropriate corrective actions to resolve failures in quality
Relationships
Roles
Main:
Test Analyst
Additional:
Assisting:
Inputs
Mandatory:
Test-Ideas List
Test Log
Test Strategy
Optional:
None
External:
None
Outputs
Test Evaluation Summary
Test Results
Steps
Examine all test incidents and failures
Purpose:
To investigate each incident and obtain detailed understanding of the resulting problems.
In this task, the Test Logs are analyzed to determine the meaningful Test Results, based on the differences between
the expected and actual results of each test. Identify and analyze each incident and failure in turn. Learn
as much as you can about each occurrence.
Check for duplicate incidents, common symptoms and other relationships between incidents. These conditions often
provide valuable insight into the root cause of a group of incidents.
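As a minimal illustration of this grouping step (not part of the RUP task itself), incidents extracted from the Test Logs could be bucketed by their reported symptom to reveal duplicates and shared root causes; the record fields below are hypothetical:

```python
from collections import defaultdict

# Hypothetical incident records extracted from the Test Logs.
incidents = [
    {"id": 101, "symptom": "login page times out", "test_case": "TC-12"},
    {"id": 102, "symptom": "login page times out", "test_case": "TC-27"},
    {"id": 103, "symptom": "report totals incorrect", "test_case": "TC-40"},
]

# Group incidents that share the same symptom; large groups often point
# to a single underlying root cause.
by_symptom = defaultdict(list)
for incident in incidents:
    by_symptom[incident["symptom"]].append(incident["id"])

for symptom, ids in by_symptom.items():
    if len(ids) > 1:
        print(f"Possible duplicates / common root cause: {symptom} -> {ids}")
```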
Create and maintain Change Requests
Purpose:
To enter change request information into a tracking tool for assessment, management, and resolution.
Differences indicate potential defects in the Target Test Items and should be entered into a tracking system as
incidents or Change Requests, with an indication of the appropriate corrective actions that could be taken.
Sub-topics:
Verify incident facts
Clarify Change Request details
Indicate relative impact severity and resolution priority
Log additional Change Requests separately
Verify incident facts
Verify that there is accurate, supporting data available. Collate the data for attachment directly to the Change
Request, or reference where the data can be obtained separately.
Whenever possible, verify that the problem is reproducible. Reproducible problems are much more likely to receive
developer attention and be subsequently fixed; a problem that cannot be reproduced both frustrates development staff
and wastes valuable programming resources in fruitless research. We recommend that you still log these incidents, but
that you consider identifying unreproducible incidents separately from the reproducible ones.
Clarify Change Request details
It's important for Change Requests to be understandable, especially the headline. Make sure the headline is crisp and
concise, articulating clearly the specific issue. A brief headline is useful for summary defect listings and discussion
in CCB status meetings.
It's important that the detailed description of the Change Request is unambiguous and can be easily interpreted. It's a
good idea to log your Change Requests as soon as possible, but take time to go back and improve and expand on your
descriptions before they are viewed by development staff.
Provide as many candidate solutions as practical. This helps to reduce any remaining ambiguity in the description and
increases the likelihood that the solution will be close to your expectations.
Furthermore, it shows that the test team is not only prepared to find the problems, but also to help identify
appropriate solutions.
Other details to include are screen image captures, Test Data files, automated Test Scripts, output from diagnostic
utilities and any other information that would be useful to the developers in isolating and correcting the underlying
fault.
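As an illustration only, a Change Request captured along these lines might carry fields like the following; none of these names come from a specific tracking tool, they simply mirror the details described above:

```python
# Illustrative structure for a well-formed Change Request.
change_request = {
    "headline": "Login page times out after 30 s under normal load",
    "description": (
        "Steps to reproduce: 1) open the login page, 2) submit valid "
        "credentials, 3) observe the request time out after 30 seconds. "
        "Expected: the user is logged in within 2 seconds."
    ),
    "reproducible": True,  # flag unreproducible incidents separately
    "candidate_solutions": [
        "Increase the connection-pool size for the authentication service",
        "Add an index on the USER.LOGIN column",
    ],
    "attachments": ["login_timeout.png", "test_data_users.csv", "diag_output.log"],
}

print(change_request["headline"])
```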
Indicate relative impact severity and resolution priority
Provide an indication to the management and development staff of the severity of the problem. In larger teams the
actual resolution priority is normally left for the management team to determine; however, you might allow individuals
to indicate their preferred resolution priority and adjust it subsequently as necessary. As a general rule, we recommend
you assign Change Requests an average resolution priority by default, and raise or lower that priority on a
case-by-case basis as necessary.
You may need to differentiate between the impact the Change Request will have on the production environment if it isn't
addressed and the impact it will have on the test effort if it isn't addressed; it's just as important
for the management team to know when a defect is impacting the testing effort as it is to be aware of the severity to
users.
Sometimes it's difficult to see in advance why you need both attributes. It's possible that an incident may be really
severe, such as a system crash, but the actions required to reproduce it may be very unlikely to occur. In this case the
team may indicate its severity as high, but assign a very low resolution priority.
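A small sketch of keeping severity and resolution priority as separate attributes, using the crash example above; all field names and values are hypothetical:

```python
# Severity describes the impact of the problem; resolution priority is decided
# (or adjusted) by the management team. The two need not agree.
crash_on_obscure_path = {
    "severity": "high",            # a system crash when it occurs
    "production_impact": "low",    # the triggering actions are very unlikely
    "test_effort_impact": "low",   # does not block other testing
    "resolution_priority": "low",  # default is 'average'; lowered case by case
}

print(crash_on_obscure_path)
```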
Log additional Change Requests separately
Incidents often bear out the old adage "Where there's smoke, there's fire"; as you identify and log one Change Request,
you quite often identify other issues that need to be addressed. Avoid the temptation to simply add these additional
findings to the existing Change Request: if the information is directly related and helps to solve the existing issue
better, then that's OK. If the other issues are different, identifying them against an existing CR may result in those
issues not being actioned, not getting appropriate priority in their own right, or impacting the speed at which other
issues are addressed.
Analyze and evaluate status
Purpose:
To calculate and deliver the key measures and indicators of test.
Sub-topics:
Incident distribution
Test execution coverage
Change Requests statistics
Incident distribution
Analyze the incidents based on where they are distributed, such as functional area, quality risk, assigned tester and
assigned developer.
Look for patterns in the distribution, such as functional areas that appear to have an above-average defect count. Also
look for developers and testers who may be overworked and whose quality of work is slipping.
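A minimal sketch of one such distribution analysis, assuming each incident record carries a functional area; the field names and counts are illustrative:

```python
from collections import Counter

# Hypothetical incident records with their functional area.
incidents = [
    {"id": 201, "area": "billing", "tester": "avb"},
    {"id": 202, "area": "billing", "tester": "avb"},
    {"id": 203, "area": "reports", "tester": "jmc"},
]

# Count incidents per functional area to spot above-average defect counts.
per_area = Counter(i["area"] for i in incidents)
average = sum(per_area.values()) / len(per_area)
hot_spots = [area for area, count in per_area.items() if count > average]

print("Functional areas with above-average defect counts:", hot_spots)
```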
Test execution coverage
To evaluate test execution coverage, you need to review the Test Logs and determine:
The ratio between how many tests (Test Scripts or Test Cases) have been performed in this Test Cycle and the total
number of tests for all intended Target Test Items.
The ratio of successfully performed test cases.
The objective is to ensure that a sufficient number of the tests targeted for this Test Cycle have been executed
usefully. If this is not possible, or to augment that execution data, one or more additional test coverage criteria can
be identified, based upon:
Quality Risk or priority
Specification-based coverage (Requirements etc.)
Business need or priority
Code-based coverage
See Concept: Key Measures of Test, Requirements-based test coverage.
Record and present the Test Results in a Test Evaluation Report for this Test Cycle; a minimal calculation of the coverage ratios described above is sketched below.
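A minimal sketch of the two coverage ratios, assuming simple counts are available from the Test Logs; the variable names and figures are illustrative, and the success ratio here is taken against the tests actually executed:

```python
# Counts taken from the Test Logs for this Test Cycle (example values).
tests_planned_total = 120   # tests intended for all Target Test Items
tests_executed = 96         # tests actually performed this cycle
tests_passed = 84           # tests that completed successfully

execution_coverage = tests_executed / tests_planned_total  # 0.80
success_ratio = tests_passed / tests_executed              # 0.875

print(f"Execution coverage: {execution_coverage:.0%}")
print(f"Successful executions: {success_ratio:.0%}")
```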
Change Requests statistics
To analyze defects, you need to review and analyze the measures chosen as part of your defect analysis strategy. The
most commonly used defect measures include the following (often displayed in the form of a graph):
Defect Density - the number of defects is shown as a function of one or two defect attributes (such as
distribution over functional area or quality risk compared to status or severity).
Defect Trend - the defect count is shown as a function over time.
Defect Aging - a special defect density report in which the defect counts are shown as a function of the
length of time a defect remained in a given status (open, new, waiting-for-verification, etc.)
Compare the measures from this Test Cycle to the running totals for the current Iteration and those from the analysis
of previous iterations, to better understand the emerging trends over time.
It is recommended that you present the results in diagram form, with supporting findings available on request; one of these measures, defect aging, is sketched below.
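A sketch of the defect-aging measure computed from hypothetical per-defect records; the field names and dates are illustrative:

```python
from datetime import date

# Hypothetical defect records: current status and the date that status was entered.
defects = [
    {"id": 301, "status": "open", "status_since": date(2006, 3, 1)},
    {"id": 302, "status": "open", "status_since": date(2006, 3, 20)},
    {"id": 303, "status": "waiting-for-verification", "status_since": date(2006, 3, 25)},
]

today = date(2006, 4, 1)

# Defect aging: how long each defect has remained in its current status.
for d in defects:
    age_days = (today - d["status_since"]).days
    print(f"Defect {d['id']} has been '{d['status']}' for {age_days} days")
```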
Make an assessment of the current quality experience
Purpose:
To give feedback on the current perceived or experienced quality in the software product.
Formulate a summary of the current quality experience, highlighting both good and bad aspects of the software
product's quality.
Make an assessment of outstanding quality risks
Purpose:
To provide feedback on what remaining areas of risk provide the most potential exposure to the
project.
Identify and explain those areas that have not yet been addressed in terms of quality risks, and indicate what impact
and exposure this leaves for the team.
Provide an indication of what priority you consider each outstanding quality risk to have, and use the priority to
indicate the order in which these issues should be addressed.
Make an assessment of test coverage
Purpose:
To make a summary assessment of the test coverage analysis.
Based on the work in the Test execution coverage step above, provide a brief summary
statement of the status and of the information the data represents.
Draft the Test Evaluation Summary
Purpose:
To communicate the results of testing to stakeholders and make an objective assessment of quality and test
status.
Present the Test Results for this Test Cycle in a Test Evaluation Summary. This step is to develop the initial draft of
the summary. This is accomplished by assembling the previous information that has been gathered into a readable summary
report. Depending on the stakeholder audience and project context, the actual format and content of the summary will
differ.
Often it is a good idea to distribute the initial draft to a subset of stakeholders to obtain feedback that you can
incorporate before publishing to a broader audience.
Advise stakeholders of key findings
Purpose:
To promote and publicize the Evaluation Summary as appropriate.
Using whatever means is appropriate, publicize this information. We recommend you consider posting these summaries on a
centralized project site, or presenting them in regularly held status meetings, to enable feedback to be gathered and next
actions to be determined.
Be aware that making evaluation summaries publicly available can sometimes be a sensitive political issue. Negotiate
with the development manager to present results in such a manner that they reflect an honest and accurate summary of
your findings, yet respect the work of the developers.
Evaluate and verify your results
Purpose:
To verify that the task has been completed appropriately and that the resulting work products are
acceptable.
Now that you have completed the work, it is beneficial to verify that the work was of sufficient value, and that you
did not simply consume vast quantities of paper. You should evaluate whether your work is of appropriate quality, and
that it is complete enough to be useful to those team members who will make subsequent use of it as input to their
work. Where possible, use the checklists provided in RUP to verify that quality and completeness are "good enough".
Have the people performing the downstream tasks that rely on your work as input take part in reviewing your interim
work. Do this while you still have time available to take action to address their concerns. You should also evaluate
your work against the key input work products to make sure you have represented them accurately and sufficiently. It
may be useful to have the author of the input work product review your work on this basis.
Try to remember that RUP is an iterative delivery process and that in many cases work products evolve over time.
As such, it is not usually necessary (and is often counterproductive) to fully form a work product that will only be
partially used or will not be used at all in immediately subsequent work. This is because there is a high probability
that the situation surrounding the work product will change (and the assumptions made when the work product was created
will prove incorrect) before the work product is used, resulting in wasted effort and costly rework. Also avoid the trap of
spending too many cycles on presentation to the detriment of content value. In project environments where presentation
has importance and economic value as a project deliverable, you might want to consider using an administrative resource
to perform presentation tasks.
Properties
Multiple Occurrences
Event Driven
Ongoing
Optional
Planned
Repeatable
More Information
Tool Mentors
Analyzing Test Failures using Rational TestManager and TestFactory
Evaluating Test Coverage Using Rational TestFactory
Evaluating the Results of Executing a Test Suite Using Rational TestFactory
Reporting Defect Trends and Status Using Rational ClearQuest
Submitting Change Requests Using Rational ClearQuest
Using Rational TestFactory to Measure and Evaluate Code-based Test Coverage on Rational Robot Test Scripts
© Copyright IBM Corp. 1987, 2006. All Rights Reserved.