Tool Mentor: Implementing an Automated Component Test using Rational QualityArchitect
This tool mentor provides an overview of unit testing tasks performed with Rational QualityArchitect.
Tool: Rational QualityArchitect
Relationships
Related Elements
Execute Developer Tests
Implement Developer Test
Main Description
Overview
This tool mentor provides an overview of the four primary unit testing tasks performed with Rational QualityArchitect:
Unit Testing
Scenario Testing
Stub Generation
EJB Session Recording
A development process that puts off testing until all components can be assembled into a completed system is a risky
proposition. Defects found so late in the lifecycle will be more difficult to fix and more likely to cause serious
schedule delays, particularly if they are architectural problems that may require an extensive redesign to correct.
Even if a team has reasonably high confidence in the quality of its system's components, the overall confidence in the
system can still be unacceptably low. For example, consider a simple system composed of five components, each of which
is rated (either by test coverage metrics or by less quantitative methods) to be 95% reliable. Because the reliability
of the system is the product of its components' reliabilities, the overall rating is 95% x 95% x 95% x 95% x 95%, or
just over 77%. Whereas the potential for problems in any one component may be just 1 in 20, for the overall system it
approaches 1 in 4, and that's for a system with relatively few components.
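The arithmetic generalizes to any number of components. The following minimal Java sketch (an illustration only, assuming independent component failures) shows how quickly composite reliability erodes as component count grows:

public class CompositeReliability {
    public static void main(String[] args) {
        double perComponent = 0.95;   // each component individually rated 95% reliable
        for (int n = 5; n <= 20; n += 5) {
            double composite = Math.pow(perComponent, n);   // reliabilities multiply, assuming independence
            System.out.printf("%2d components: %.0f%% overall reliability%n", n, composite * 100);
        }
        // Prints roughly: 5 components -> 77%, 10 -> 60%, 15 -> 46%, 20 -> 36%
    }
}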
In contrast, a development process that incorporates component testing throughout an iterative development process
offers several significant advantages:
Problems can be found and fixed in an isolated context, making them not only easier to repair, but also easier to
detect and diagnose.
Because testing and development are tightly coupled through the lifecycle, progress measurements are more
believable: progress can now be viewed in terms of how much of the project is coded and working, not just coded.
Disruptions to the schedule caused by unforeseen problems are minimized, which makes the overall schedule more
realistic and reduces project risk.
Although there are tremendous benefits to early testing, the practice is far from commonplace, especially when it comes
to testing middle-tier, GUI-less components.
Why? Because it's time-consuming and tedious, and in the past the costs of overcoming these practical issues have
frequently outweighed the benefits. Also, since most tests are tailored for a particular component, there's little
opportunity for re-use. Many organizations recognize the wastefulness of building test harnesses and stubs from
scratch, using them, and then throwing them away project after project. They prefer to focus their limited resources on
other areas.
With QualityArchitect, early testing truly becomes feasible because test harnesses and stubs are generated
automatically: not just once, but incrementally as the model evolves throughout development. The entire development
process becomes more structured, measured, and visible as results from component tests facilitate stronger entry
criteria to prevent premature system testing. QualityArchitect enables developers to focus on the creative aspects of
defining tests, so they can spend time thinking about the best way to exercise a component, instead of writing and
debugging test drivers and stubs. Developers and architects work closely together with the shared visual models, so
they naturally develop a more productive relationship with each other.
This tool mentor is applicable when running Windows 98/2000/NT 4.0.
Tool Steps
This tool mentor covers these main tasks associated with implementing an automated component test using
QualityArchitect:
Prerequisite steps for unit testing
Implement a unit test
Implement a scenario test
Create a stub component
Use EJB Session Recorder
1. Prerequisite steps for unit testing
To generate any tests using QualityArchitect, whether they're for COM or EJB components, a Rational Project must be
created and configured using the Rational Administrator. This project must contain a Test Datastore to hold all of the
testing assets, such as test results and datapools. This is described in Tool Mentor: Configuring Projects Using Rational Administrator.
2. Implement a unit test
The objective of a unit test is to validate that a given operation on a given component provides the correct return
value for a given set of inputs. Unit tests are created from the class specification in the Logical View. The process
of creating and executing a unit test consists of three steps:
Generating unit test code
Generating unit test data
Executing the test and examining the results
Generating unit test code
The unit test code contains all instructions necessary to instantiate the component, call the operation under test, and
examine the returned result against a baseline.
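Conceptually, the generated driver has the following shape, shown here in the Java form used for EJB components (the COM path generates Visual Basic code with the same structure). This is a simplified, self-contained sketch only: the class and operation names are hypothetical, and the code QualityArchitect actually generates also reads its inputs and expected values from a datapool.

public class UnitTestSketch {
    // Stand-in for the component under test (illustrative only).
    static class Account {
        double getBalance(String customerID) { return 1250.00; }
    }

    public static void main(String[] args) {
        Account target = new Account();                 // instantiate the component
        double actual = target.getBalance("BBryson");   // call the operation under test
        double expected = 1250.00;                      // baseline value (normally supplied by a datapool row)
        System.out.println(actual == expected ? "PASS" : "FAIL");
    }
}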
For COM components
Select the operation to test under the component interface in the Logical View.
Right-click on the operation listed under the component's interface and select Rational Test > Generate Unit
Test. You may be prompted to log into a Rational Project during this process.
QualityArchitect generates Visual Basic 6 compatible code as output from this process.
From Visual Basic, you need to first attempt to compile the code. Any compilation errors need to be examined. Under
certain circumstances, QualityArchitect will not be able to generate code to test operations that make extensive use of
complex datatypes. When this is the case, QualityArchitect will insert invalid code, which at compile time will
highlight the segments of code where manual coding is required. Once the code compiles, you can proceed to the next
step, Generating unit test data.
For EJB components
Select the operation to test from the remote interface in the Logical View.
Right-click on the operation and select Rational Test > Select Unit Test Template.
Navigate to the appropriate template for your EJB server. For WebSphere, select the websphere_remote.template in
the EJB > WebSphere > Business Methods folder. For WebLogic, select the weblogic_remote.template in the
EJB > WebLogic > Business Methods folder.
Select Rational Test > Generate Unit Test. If prompted during this process, you may have to log into a
Rational Project.
QualityArchitect will generate Java code as the output from this process.
You can use the IDE or editor of your choice to examine the Java code. Rational Rose ships with the R2 editor,
which can be used for this purpose.
Once in your editor, you can first attempt to compile the code. Any compilation errors need to be examined. Under
certain circumstances, QualityArchitect will not be able to generate code to test operations that make extensive use of
complex datatypes. When this is the case, QualityArchitect will insert invalid code that will not compile to flag lines
of code where manual coding will be required. Once the code compiles, you can proceed to the next step, Generating unit test data.
Generating unit test data
The true measure of a successful unit test is the test data. The test code itself is completely disposable, as
QualityArchitect can regenerate the code at any point in time. While QualityArchitect can create the test code, it
cannot create meaningful test data. This is the responsibility of the analyst or the implementer. Care should be taken
to create test data that validates representative positive and negative tests. Test data that exercises the boundary
conditions of the component's logic is an excellent candidate for unit test data.
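For example, a datapool for a hypothetical getTransactions(customerID : String, maxRows : Integer) operation might contain rows such as the following (all values invented for illustration), mixing typical inputs with boundary and error cases:

customerID    maxRows    Expected return    Expected error
"BBryson"     10         10
"BBryson"     1          1
"BBryson"     0          0
"BBryson"     -1                            maxRows must not be negative
"" (empty)    10                            unknown customer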
For COM components
Select the operation to test under the component's interface in the Logical View.
Right-click on the operation and select Rational Test > Create Datapool.
Once you've selected Create Datapool, a Datapool Properties dialog displays. At this point, you can either
select Edit Datapool Data to begin entering data or select Define Datapool Fields to have
QualityArchitect generate test data for you.
For EJB components
Select the operation to test from the remote interface in the Logical View.
Right-click on the operation listed under the remote interface and select Rational Test > Create Datapool.
Once you've selected Create Datapool, a Datapool Properties dialog displays. At this point, you can either
select Edit Datapool Data to begin entering data or select Define Datapool Fields to have
QualityArchitect generate test data for you.
Working with Datapools
If you select Define Datapool Fields, you'll have the ability to use QualityArchitect's test data generation
capabilities. QualityArchitect can generate various types of generic data, which are specified in the datatypes
drop-down list in the Type field.
When you've selected the appropriate types, select the number of rows to generate and click Generate Data. It's
quite likely that QualityArchitect will not be able to generate all of the data for you. As an example,
QualityArchitect will be able to generate a generic list of U.S. cities, but will not have the ability to generate a
list of valid, system-specific invoice numbers for an ordering system. This data must be manually entered as a datatype
or directly entered into a datapool. The value of creating a datatype with custom data is that QualityArchitect, from
that point on, will be able to generate this type of data from the Define Datapool Fields interface. If you enter the
data directly into the datapool, it will only be available to that specific datapool.
When you select Edit Datapool Data, you'll enter meaningful test data directly. There is one field for each
argument, as well as one field for an expected return and one field for an expected error. When you specify an error,
both error numbers and textual error messages are valid entries. If your operation requires a complex object as an
argument, or if it should return a complex object, you won't be able to insert that object reference in the datapool.
Instead, break the object down to the simple argument types required to construct an instance of the object. Use the
Insert Before and Insert After buttons to add fields to the datapool for this purpose. You'll have to
modify the test code to construct an instance of the object with the data provided.
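As a hypothetical illustration, if an operation took an Address object, you might add datapool fields such as addrStreet, addrCity, and addrZip, then edit the generated test code along these lines. Every name below is invented for this sketch; the real generated code and datapool field names will differ.

public class ComplexArgumentSketch {
    // Stand-ins for the complex argument and the component under test (illustrative only).
    static class Address {
        Address(String street, String city, String zip) { }
    }
    static class ShippingCalculator {
        double quote(Address address) { return 9.95; }
    }

    public static void main(String[] args) {
        // Values that would come from the extra datapool fields added with Insert Before / Insert After.
        String addrStreet = "1 Main St", addrCity = "Lexington", addrZip = "02421";
        Address address = new Address(addrStreet, addrCity, addrZip);  // rebuild the complex argument
        double actual = new ShippingCalculator().quote(address);
        double expected = 9.95;                                        // expected return from the datapool
        System.out.println(actual == expected ? "PASS" : "FAIL");
    }
}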
Executing the test and examining the results
Once you've created both the test code and the test data, you're ready to run your test. You can run your test from the
IDE or schedule the test in a TestManager Suite. See Tool Mentor: Executing a Test Suite Using Rational TestManager for
more information on this topic.
As the test begins to run, you are prompted to provide a location for the test log results. Once you specify a
location, TestManager places the results of the test run there.
At the end of the run, TestManager displays the Test Log. To view the results of your test, select the Detailed
View tab of the Log Viewer window. Expand the tree view of the results to see the details of the test run.
Further information can be accessed by right-clicking on any line and selecting Properties.
3. Implement a scenario test
The objective of a scenario test is to validate that a given series of operations across a given series of components
combine to correctly perform a collective task. Scenario tests are created from interaction diagrams, specifically
sequence and collaboration diagrams. The process of creating and executing a scenario test consists of these three
steps:
Generating scenario test code
Generating scenario test data
Executing the test and examining the results
Generating scenario test code
The scenario test code will comprise all of the test driver code necessary to instantiate the components, call the
operations under test, and evaluate the results of these operations using verification points. Verification points are
a mechanism by which the test code can run SQL statements against a database to verify proper manipulation of the
underlying data.
For EJB components
Select the collaboration diagram in the browser.
Right-click on the diagram and select Rational Test > Select Scenario Test Template.
Navigate to the appropriate template for your EJB server. For WebSphere, select the
websphere_scenario.template in the EJB > WebSphere > Scenario folder. For WebLogic, select the
weblogic_scenario.template in the EJB > WebLogic > Scenario folder.
Open the sequence or collaboration diagram that models the scenario under test. It's important that messages
be specified for the components on the diagram that will be tested. Messages are specified by double-clicking
on the message line and selecting a name in the drop-down list box on the General tab. The name needs to
correspond to the operation being tested. Further, these specifications can be modified to include test case
data.
As an example, by default, Rose will expose the message specification as:
getTransactions(customerID : String)
This specification can be modified to include a single data case as follows:
getTransactions(customerID : String="BBryson")
For every scenario test, QualityArchitect automatically generates a datapool for test case data. The data
in the diagram will be populated in the first row. You can add additional rows from this point on.
To generate the test, right-click on the diagram in the browser and select Rational Test > Generate Scenario
Test. If you're prompted to log into your project, do so.
A dialog displays to prompt you to select the scenario test targets. Select all of the components on the diagram
that will take part in the test. For each component selected, the corresponding operation specified in that
component's message will be invoked.
For COM components
Open the sequence or collaboration diagram that models the scenario under test. It's important that messages
be specified for the components on the diagram that will be tested. Messages are specified by double-clicking
on the message line and selecting a name in the drop-down list box on the General tab. The name needs to
correspond to the operation being tested. Further, these specifications can be modified to include test case
data.
As an example, by default, Rose will expose the message specification as:
getTransactions(customerID : String)
This specification can be modified to include a single data case as follows:
getTransactions(customerID : String="BBryson")
For every scenario test, QualityArchitect automatically generates a datapool for test case data. The data
in the diagram will be populated in the first row. You can add additional rows from this point on.
To generate the test, right-click on the diagram in the browser and select Rational Test > Generate Scenario
Test. If you're prompted to log into your project, do so.
A dialog displays to prompt you to select the scenario test targets. Select all of the components on the diagram
that will take part in the test. For each component selected, the corresponding operation specified in that
component's message will be invoked.
Verification points
For each operation that will be invoked and again at the end of the test, you'll be prompted to insert a verification
point. Verification points are used by QualityArchitect to validate that the operations took place correctly. Although
the verification point architecture is open and extensible, currently only the database verification point is
implemented. The database verification point allows you to enter a SQL query, which is executed at test time to
validate that the component manipulated the database correctly.
You can implement your own
verification points, using the steps found in QualityArchitect online Help.
Select Yes to insert a verification point.
Select the appropriate type of verification point to insert. Unless you've implemented your own verification
points, you must select the Database VP.
You are presented with a Query Builder, which you'll use to establish the connection parameters to your database
and build the query that will be executed to validate the correct functioning of the operation being invoked. Basic
knowledge of the underlying database and SQL syntax is necessary to establish this connection and to create this
query.
The code necessary to instantiate all components, call all operations, and run the inserted verification points is
output at this stage.
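As a hedged illustration only (the table and column names are invented for this sketch), a database verification point for a scenario that adds a transaction for customer BBryson might run a query like the following, whose results are compared against the captured baseline on later runs:

SELECT CUSTOMER_ID, TRANSACTION_ID, AMOUNT
FROM TRANSACTIONS
WHERE CUSTOMER_ID = 'BBryson'
ORDER BY TRANSACTION_ID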
Generating scenario test data
For every scenario test generated, QualityArchitect automatically creates a datapool to contain the test data. If there
was data specified in the diagram, then the first row of this datapool will already be populated with that information,
as well as the information relating to any inserted verification points. If not, the datapool will contain only
information relating to verification points.
To view and edit this information, follow these steps:
From Rose, select Tools > Rational Test > Toolbar.
On the Toolbar, select the second toolbar item to edit your datapool. QualityArchitect will have created a datapool
whose name contains the name of the scenario diagram and ends with _D. The naming algorithm is sufficiently complex
that this documentation cannot predict every datapool's name.
To edit this data, follow the same basic steps outlined in Working with
datapools.
Executing the test and examining the results
Once you've created both the test code and the test data, you're ready to run your test. You can run your test from the
IDE or schedule the test in a TestManager Suite. See Tool Mentor: Executing a Test Suite Using Rational TestManager for
more information on this topic.
As the test begins to run, you are prompted to provide a location for the test log results. Once you specify a
location, TestManager places the results of the test run there.
At the end of the run, TestManager displays the Test Log. To view the results of your test, select the Detailed
View tab of the Log Viewer window. Expand the tree view of the results to see the details of the test run.
Further information can be accessed by right-clicking on any line and selecting Properties.
For verification points, no Pass or Fail indication is given on the first run, which is used to capture a
snapshot of the query results to be used as baseline data for future test runs.
Double-click on the verification points to display a comparator that presents the results of the query. These results
can be edited, so if the query didn't return the correct results, you can modify this data. All subsequent runs of this
test will compare their query results to those captured in this first run.
4. Create a stub component
Often the components being tested in a unit or scenario test rely on other components to complete their tasks. Problems
arise when these secondary components are not operational. Sometimes they're still in development; sometimes they're
buggy. Regardless, testing the primary component doesn't have to be halted until the secondary components become
available. Instead, a stub, or temporary component, can replace any non-operational components for testing purposes.
The stub doesn't implement the functionality of the real component; it merely reacts to inputs. Stubs return a
programmed response for a given set of values without implementing any logic. It's a simple stimulus-response
relationship.
QualityArchitect can easily create stubs for both COM and EJB components. These stubs rely on lookup tables to
replicate the business logic of the components they're replacing. The table, implemented as a datapool, determines what
the returned value should be for a given set of inputs.
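The following self-contained Java sketch shows the kind of behavior such a stub reproduces. It is an illustration only: the generated stubs actually read their lookup table from a datapool, and the class, operation, and values here are invented.

import java.util.HashMap;
import java.util.Map;

public class CreditCheckStub {
    // In a generated stub, these rows would live in the lookup-table datapool.
    private static final Map<String, String> LOOKUP = new HashMap<String, String>();
    static {
        LOOKUP.put("BBryson|5000", "APPROVED");
        LOOKUP.put("BBryson|50000", "DECLINED");
    }

    // No business logic: just return the programmed response for the given inputs.
    public String checkCredit(String customerID, int amount) {
        String response = LOOKUP.get(customerID + "|" + amount);
        if (response == null) {
            throw new IllegalStateException("No lookup-table row for these inputs");
        }
        return response;
    }

    public static void main(String[] args) {
        System.out.println(new CreditCheckStub().checkCredit("BBryson", 5000)); // prints APPROVED
    }
}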
The process of creating and deploying a stub is made up of these three steps:
Generating a stub component
Generating a stub lookup table
Deploying the stub
Generating a stub component
When you generate a stub, you must generate a complete component. Then, for the operations being stubbed, you need to
create a lookup table. A stubbed component, which contains stub code for all operations of that component, is the
output of the stub generation process. You cannot stub a single operation.
For COM components
Select the component interface in the Logical View.
Right-click on the interface and select Rational Test > Generate Stub. You are prompted for the location
of where you want to place the generated stub code. Select this location and the code will be generated.
For EJB components
Select the bean implementation class in the Logical View.
Right-click on the class and select Rational Test > Generate Stub. You are prompted for the location of
where you want to place the generated stub code. Select this location and the code will be generated.
Generating a stub lookup table
To replicate the logic of the real component, the stub must know how the real component would react when given a set of
arguments. This logic is maintained in a lookup table, which specifies what value or error to return for a given set of
arguments. You create one lookup table for each operation on the component that is being stubbed.
For COM components
Select the operation below the component interface in the Logical View.
Right-click on the operation and select Rational Test > Create Lookup Table. This displays the Datapool
Properties dialog.
To create this lookup table, follow the same basic steps outlined in Working
with datapools. You'll use the table to specify the values or exceptions to return for a given set of
arguments.
For EJB components
Select the operation under the bean implementation class in the Logical View.
Right-click on the operation and select
Rational Test > Create Lookup Table. This displays the Datapool Properties dialog.
To create this lookup table, follow the same basic steps outlined in Working
with datapools. You'll use the table to specify the values or exceptions to return for a given set of
arguments.
Deploying the stub
When the stub and lookup table have been generated, the stub must be deployed in place of the existing component. This
process is environment-specific; guidance for this task is provided in the QualityArchitect online Help.
5. Use the EJB Session Recorder
The EJB session recorder is a Java application that allows you to interact with live, deployed EJB components. This
functionality is only available for Enterprise JavaBeans, not for COM components.
The process for using the EJB session recorder involves these steps:
Starting an XML recording session
Connecting to the EJB server
Creating an instance of the bean under test
Invoking operations on the bean
Inserting verification points and Java code
Generating test code from the EJB session recording
The EJB session recorder can be used in two modes: recording and non-recording. When recording, all actions taken are
recorded to an XML log that the EJB session recorder converts into executable Java code. The code contains all
method calls, any inserted Java code, and verification points. When operating in non-recording mode, the tool is
limited to creating instances of EJBs and invoking their operations.
To connect to the EJB server, you need to provide the Provider URL and the InitialContextFactory. This
information should be the same as that used by your client code to connect to the server. Default
connection information for WebSphere and WebLogic can be found in the online product documentation. A sketch
of the equivalent client code appears after these steps.
When you've supplied your connection information, select Connect and you're presented with a list of beans
deployed on that server. You can interact with any number of beans during a session, and you need to select the first
bean to interact with at this point.
Here you create an instance of the first bean under test. Select the appropriate creation method from the top half
of the Methods window. If the create method requires specific parameters, specify them in the Parameters
section. Once complete, select Invoke to create an instance of the bean.
With the instance of the bean created, the EJB session recorder presents you with the various operations available
on that bean. You'll see the bean's own operations in the top half of the Methods window, inherited operations in
the bottom half. As a general rule, you won't be testing the inherited operations. Once you've selected the
operation to test, you can supply the required parameters for this operation in the Parameters window.
If the parameter is a complex object, there will be a button called New. This opens a dialog that allows you to
create an instance of the required object; it shows all constructors and the arguments required to construct an
instance of the object. When you've supplied the constructor information, you need to name the object so it can be
referenced later during the recording, if necessary.
There is value in assigning names to parameters if these values will be used again during the session recording. If
you provide a name, QualityArchitect will be able to populate the value in any parameter field when you right-click
that field.
When you click Invoke, the operation is called with the provided parameters. The return value is shown in
the Last Return Value field. If this value is required as the input to a subsequent call, it can be dragged
and dropped into the required field. You can also right-click the parameter field where the value will be inserted.
To determine which values to present on the right-click menu, the EJB session recorder matches the type of the
parameter to the types that have previously been used during testing.
At any point in the session, you can insert Java code or verification points from the Insert menu. The
verification points are the same as those used when generating scenario test code. Similarly, Java code can be
inserted to perform any additional processing.
If you are in record mode, you can convert the XML-based recording to Java code when all steps of your test are
complete. Click Stop to perform this action. You are prompted to convert the XML code to Java code, and
you'll need to provide a session name and a script name. Java code, which you can execute to replicate the steps
taken during your recording, is the output of this process.
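For reference, the Provider URL and InitialContextFactory requested when connecting to the EJB server correspond to a standard JNDI lookup. The following self-contained sketch shows the equivalent client code; the provider URL, context factory class, and JNDI name below are placeholders that must be replaced with the values for your own server.

import java.util.Hashtable;
import javax.naming.Context;
import javax.naming.InitialContext;

public class EjbConnectionSketch {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<String, String>();
        env.put(Context.PROVIDER_URL, "iiop://localhost:900");             // Provider URL (placeholder)
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "com.ibm.websphere.naming.WsnInitialContextFactory");      // InitialContextFactory (placeholder)

        Context ctx = new InitialContext(env);                             // connect to the server's naming service
        Object homeRef = ctx.lookup("ejb/AccountHome");                    // JNDI name of the bean's home (placeholder)
        System.out.println("Looked up home interface: " + homeRef);
    }
}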
© Copyright IBM Corp. 1987, 2006. All Rights Reserved.