

Pathfinder Solutions


web: www.pathfindersol.com

90 Oak Point, Wrentham, Massachusetts 02093 U.S.A.

voice: +01 508-384-1392

fax: +01 508-384-7906

Model Based Software Engineering Process

2/9/00

Peter J. Fontana

version 2.02

Copyright entire contents 1995-2000 Pathfinder Solutions. All rights reserved.


1. INTRODUCTION
1.1 Goals
1.2 References
2. SOFTWARE ENGINEERING WITH MBSE
2.1 MBSE - Review of Development Process Mechanics
2.2 System-Level Detailed Requirements
2.2.1 Entry Criteria
2.2.2 Goals of the System-Level Requirements Document
2.2.3 Requirements Changes
2.2.4 Exit Criteria
2.3 Domain Modeling
2.3.1 Domain Modeling Goals
2.3.2 Entry Criteria
2.3.3 Domain Chart - a Living Document
2.3.4 Roles of the Domain Chart
2.3.5 Partitioning the System into Domains
2.3.6 When to Analyze a Domain
2.3.7 Domain Model Validation
2.3.8 Exit Criteria
2.4 Detailed Development Plan With Schedule
2.4.1 Entry Criteria
2.4.2 Object Blitz
2.4.3 Estimation Guidelines
2.4.4 Activity Sequencing
2.4.5 Revision Points
2.4.6 Metrics Gathering
2.4.7 Exit Criteria
2.5 Analyzing Each Domain
2.5.1 Iterative Development
2.5.2 Entry Criteria
2.5.3 Domain Requirements Matrix
2.5.4 Bridge Definition
2.5.5 Information Modeling
2.5.6 Scenario Modeling
2.5.7 State Modeling
2.5.8 Action Modeling
2.5.9 Dynamic Verification
2.5.10 Exit Criteria
2.6 Software Integration
2.6.1 Entry Criteria
2.6.2 Exit Criteria


1. INTRODUCTION

This document outlines a rigorous and effective software engineering process using Analysis models developed with the Unified Modeling Language (UML). The process identified here is a synthesis of the most effective current processes in use, taking advantage of the general long-term convergence of methodologists, practitioners, and vendors, especially in the embedded software arena.

We have coined the term Model Based Software Engineering (MBSE) to refer to this convergent process: any software development approach that effectively applies rigorous and complete Analysis models.

It is assumed that the reader of this document has recently read (and has available for reference) the Pathfinder Solutions paper "Model-Based Software Engineering - An Overview of Rigorous and Effective Software Development using UML". This is available from www.pathfindersol.com.

1.1 Goals:

The goals of this document are to help the reader:

- apply MBSE in a consistent manner, leading to high quality models and resulting product
- reduce the overall software engineering development time
- be best positioned to respond to future product requirements
- improve the quality of their documentation to better support internal and external review and debugging requirements

1.2 References

For more information on Model Based Software Engineering, please call Pathfinder Solutions at 888-MBSE-PATH (888-662-7248) or +01-508-384-1392, email us at info@pathfindersol.com, or visit us at www.pathfindersol.com.

You may wish to refer to the following sources:

on MBSE:

" Model -Based Software Engineering - An Overview of Rigorous and Effective Software
Development using UML

", Pathfinder Solutions, 1998; (this paper is available from

www.pathfindersol .com)

"Reviewing MBSE Work Products", Pathfinder Solutions, 1997; (this paper is available
from www.pathfindersol.com)

"The Costs and Benefits of MBSE", Pathfinder Solutions, 1999; (this Powerpoint
presentation is available from www.pathfindersol.com)

on the UML

:

"The Unified Modeling Language User Guide ", Grady Booch, James Rumbaugh, Ivar
Jacobson, Addison Wesley, 1999; ISBN 0 -201-57168-4

"UML Distilled ", Martin Fowler, Addison Wesley, 1997; ISBN 0 -201-32563-2

"UML Summary Version 1.1", Object Managem ent Group, Inc. 1997 (this paper is
available from www.omg.org)

on Shlaer -Mellor OOA/RD:


"Object Lifecycles ", Sally Shlaer and Stephen Mellor, Prentice -Hall, 1992; ISBN 0 -13-
629940-7

UML is a trademark of Object Management Group, Inc. in the U.S. and other countries.


2. SOFTWARE ENGINEERING WITH MBSE:

As we look at the application of UML modeling in software engineering, we quickly recognize that Analysis, Design, and translation represent only a few of the steps in the overall software engineering process. In an effort to understand how to best apply MBSE, we must establish a sound software engineering process context in which the MBSE portions can fit.

To best limit the scope of this document, we will concentrate on the application of MBSE within such a context. For the purpose of this review, we begin consideration at a point in the process where some form of Product Definition Document has been completed and is approved by Engineering Management and its customer (usually Marketing). At this point, work on the detailed System-Level Requirements Document can begin. Our review of MBSE processes continues through the completion of Software Integration, at which point we're ready to integrate and validate system processing on target hardware.


2.1 MBSE - Review of Development Process Mechanics

The overall software development process is broken into four major phases:

- Domain Separation: partition the entire system at the highest level into domains of separate subject matter.
- Domain Development: model each analyzed domain and import each realized (non-analyzed) domain developed by hand-coding or generated from other environments (like a GUI/RAD facility).
- Design: develop a strategy for mapping analysis to an implementation and for assembling system components. Design development and preliminary validation are parallel to and independent from the analysis conducted during Domain Development, and a Design is often available commercially.
- Integration: assemble all system components and verify that they work together using a controlled set of iterative development cycles.

Please refer to figure 1 for the high-level flow of a single build iteration of this process.

[figure 1: high-level flow of a single build iteration - Application-Specific and Execution-Specific Requirements drive Analysis and Design; the Analysis models are translated, using the Design Policies and Base Mechanisms, into an implementation of the models; the Build step combines that implementation with realized code and implementation libraries to produce the Deliverable System]


terms from figure 1:

Analysis: the process of developing UML Analysis Models and their Dynamic Verification, for each analyzed domain in the system. This typically is conducted largely in parallel with Design.

Analysis Models: a complete set of UML Analysis models, including the Domain Model (for the entire system), and for each analyzed domain an Information Model, Scenario Models, State Models, and Action Models.

Application-Specific Requirements: all requirements that define the system under development in terms of features, specific capabilities, and all aspects of system operation and behavior that are not exclusively Execution-Specific.

Base Mechanisms: the set of C++ base and utility classes that provide the operating infrastructure of the system, including event queuing and dispatch, inter-task and inter-process communication, basic analysis operation support, memory management, and general software primitives such as lists and strings.

Build: the process of compiling and linking the translated implementation code, realized code and implementation libraries into the Deliverable System.

Deliverable System: the set of executable elements that constitute the software product to be verified and/or delivered.

Design: the process of defining and deploying a strategy for deriving an implementation from the Analysis, including Structural Architecture, Design Templates, and Base Mechanisms. This typically is conducted largely in parallel with Analysis.

Design Policies: a set of Design Patterns that define how the C++ implementation code for the Analysis will be translated from the Analysis models. These are captured as template files in the specific notation of the Pathfinder Solutions Springboard translation engine.

Execution-Specific Requirements: all requirements that define how the system under development will execute in its specific deployment environment, including task and processor topology and allocation, general capacities, performance, operating system interfaces, and application-independent capabilities.

realized elements: system components that have not been analyzed, and are typically hand-written code, generated from a specific environment (like a GUI builder or math algorithm environment), or purchased from a third party.

implementation libraries: realized system components supporting a specific compiler, language or operating system environment.

Translation: the process of executing the Springboard translation engine to generate the complete implementation code for all Analysis Models.


2.2 System-Level Detailed Requirements

Due to the rigor and detail involved in MBSE - even at the level of Information Modeling - we find that software organizations using MBSE expose the need for detailed, consistent and firm requirements definition earlier than less formal approaches do. This is a good thing, because poor requirements health translates directly into poor project health, and early detection leads to less painful cures. No approach alleviates the need for solid requirements - some just hide the problem longer.

2.2.1 Entry Criteria

An approved product concept document must be available before system-level detailed requirements can be started in earnest.

2.2.2 Goals of the System-Level Requirements Document

- requirements broken down to the atomic level (from external perspective)
- sufficiently detailed to support Information Modeling
- don't specify solutions - just bound the problem
- properly identified - to support external reference

2.2.3 Requirements Changes

Changes in requirements after "freeze" are inevitable, and any process that addresses software engineering in the "real world" must address the possibility. However, this does not mean that we should not strive to reduce the scope of such changes, or even defer them where possible. A change of any real scope will always have a negative impact on schedule, cost, and quality (resulting in future schedule and cost impacts).

- firmly identify baseline requirements versions
- document all changes, and accept no changes without a diligent impact assessment
- for all "official" requirements changes, accept them in an "official" requirements review, including a re-release of the s/w schedule.
- requirements defects are a manifestation of our humanness - make your requirements decision makers as ashamed of requirements "bugs" as we are of software bugs.

2.2.4 Exit Criteria

Once the System-Level Requirements Document has been approved, it's time to stop this work - until a requirements change is imposed.


2.3 Domain Modeling

Domain Modeling is the most powerful of all MBSE elements. However, it is also the least mature and least covered (in terms of published papers and texts). Proper separation of subject matter supports powerful and simple constructs within domains, minimizes bridge complexity, and provides the only technically sound basis of significant reuse in the software industry.

The domain chart is a UML class diagram of all software components in the system separated into "domains". These domains are directionally connected with "bridges" showing the flow of requirements from the higher level domains to subordinates which provide the required lower-level services. The domain model is a domain chart with descriptions for all domains and their interconnecting bridges.

One of the first tasks in domain modeling is to identify the bounds of the system under construction. This can be an easy task when you are building a simple system with one program and do not expect significant growth across releases. It can be somewhat more difficult when your "system" involves many programs on many processors, with significant increases in system complexity across releases. We recommend that a single, large system perspective be taken to as high a level as is practical. (One system - not necessarily one program/library, etc.) This contrasts with alternatives that may bound systems on processor or executable boundaries, or by other partitioning approaches.

The benefits of making the system larger instead of smaller come from the ability to place all elements of the problem into a single conceptual space and exercise the relationships between them. On the constraining side, the upper limit on system size will primarily be bounded by the abilities and authority of the system architect - the system cannot span beyond what this person can understand and control (or at least influence).

2.3.1 Domain Modeling Goals

- identify the boundaries of the system under construction
- identify the separate subject matters in the system
- partition the system into manageable components
- recognize what you will analyze, what you will "realize" (hand-code), and what you will buy
- establish a flow of requirements from domains containing higher-level abstractions down to lower-level domains

2.3.2 Entry Criteria

Domain Modeling can start if an approved product concept document is available, and this document is felt to contain sufficient detail to have illuminated the major subject matter areas in the system. However, if it is believed that detailed System-Level Requirements work will uncover significant system structure not otherwise apparent, then the bulk of this work should be completed before Domain Modeling.

2.3.3 Domain Chart - a Living Document

Domain modeling is one of the most difficult parts of the MBSE process to manage properly. This is true for a number of reasons:

- it is the least mature of all the different elements of MBSE.
- far more than any other single aspect of MBSE, domain modeling has the highest strategic impact on your organization's effectiveness, productivity, and flexibility.
- it is the most subjective area of MBSE to apply - the guidelines for proper domain modeling are difficult to apply in a group setting, requiring a high degree of professionalism and an effective leader.
- it presents somewhat of a chicken-and-egg problem: you cannot effectively start information modeling without a domain model, and some of the best feedback on a particular domain breakout is a set of preliminary IMs.
- since it is the foundation conceptual layer upon which all other analysis rests, any major changes in your domain model have a potentially significant rework impact. On the other hand, it is extremely difficult to work around a domain modeling problem of any real significance.

Given this difficulty, why not separate based on other criteria? Because there are no generally applicable separation schemes that are easier to apply, or that yield beneficial and repeatable results. As imperfect as Domain Modeling is, it is still much better than the alternatives.

2.3.3.1 Managing Domain Model Development

Given all of the above, it is clear that project leadership must consider the domain model to be an area where the most rigorous development process must be applied, including:

- clearly appointing a single leader for the domain model (typically the overall project technical leader), with responsibility/authority to make decisions - even in the absence of consensus.
- identifying a core subset of the project technical staff (typically no more than 4 people total) to participate in domain modeling.
- clearly identifying the goals for the effort (see section 2.3.1 Domain Modeling Goals).
- providing a bounded (2 weeks or less), dedicated timeframe to ensure focus and retain momentum.
- applying the highest degree of professionalism, to ensure the proper balance of discussion of alternatives and cooperative forward motion.
- once the domain model is completed, updates should be made as necessary, and only after a reasonable review process.

2.3.3.2 Group Portrait

In the case where a development organization is formed to support one consistent family of closely related products, a proper domain chart is a self portrait of the development organization itself. This chart reflects the relationships between all significant software efforts underway or planned in the near term. In this case, the application domain should reflect the identity of this group - what constitutes the essence of the group.

2.3.3.3 Describing Both the Short and Long Term Views

Partly due to this self-image aspect of the domain model, but mostly due to the strategic nature of domain modeling, this is the only modeling area where releases beyond the *current* effort are considered. It is recommended that two explicit versions of the domain chart be kept, if possible: a "master plan" domain model should reflect the organization's best perspective on what the system will look like in the medium time frame - usually one full release cycle forward.

Once the initial domain model work is done, a second domain model is done as a subset of the "master" - to reflect the specific composition of the system in the first release. Once the organization is ready to start work on the second release, the "master" is revisited, updated as necessary, and a release-specific domain model is again carved out - to define this next release version of the system.


2.3.4 Roles of the Domain Chart

The domain chart can seem very familiar to people used to dealing with system-wide issues and high-level design. This typically leads some people to derive unintended meaning from a domain chart. One of the significant benefits of MBSE is the rigor of the definition of the method - including the core meaning of the domain chart. A domain chart tells us:

- the population of domains in the system
- how domains are related through the hierarchical flow of requirements

It does not specify:

- a development organization chart
- allocation of software to tasks, processes, processors, networks, etc.
- run-time flow of data or control

2.3.5 Partitioning the System into Domains

The genesis forces of a domain fall into two basic categories:

- methodologically defensible universes of software, with purity of subject matter, broken out by careful study and conceptual exercise
- clumps of legacy code, off-the-shelf software and other "realized" code, usually hammered into a set of realized domains

2.3.5.1 Separation of Subject Matter

Domains should be considered canonical conceptual universes - defined by the domain description or mission statement. When a domain is deemed to contain a particular capability or abstraction, or when any other subject matter "fragment" has been allocated to a domain, that element cannot appear in any other domain.

This separation does not prevent the manifestation of elements of a server domain in a client through the presence of some sort of a "handle" or "magic cookie". The handle must be atomic in the client, and the client must not know any more about the handle than what is published in the server bridge.
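
To make the "handle" idea concrete, the following minimal C++ sketch shows a client holding an opaque handle published by a server bridge. It borrows the Vehicle Speed Control / Speed Detector Monitoring example from section 2.3.5.4, and every name, type, and signature in it is a hypothetical illustration - not part of the MBSE method or of any Pathfinder library.

#include <iostream>

// Hypothetical server bridge, standing in for whatever a "Speed Detector
// Monitoring" style server would actually publish.
namespace DetectorServer
{
    // Opaque to clients: a value they hold and pass back, never interpret.
    typedef unsigned long DetectorHandle;

    // Trivial stub bodies so the sketch compiles standalone.
    inline DetectorHandle RegisterDetector(int channel)
    {
        return static_cast<DetectorHandle>(channel);
    }

    inline double ReadLatestSample(DetectorHandle /*handle*/)
    {
        return 0.0;
    }
}

// Client domain ("Vehicle Speed Control"): stores the handle but never looks inside it.
namespace VehicleSpeedControl
{
    class SpeedMonitor
    {
    public:
        explicit SpeedMonitor(int channel)
            : detector_(DetectorServer::RegisterDetector(channel)) {}

        double CurrentSpeed() const
        {
            // The handle is atomic here; only the server bridge gives it meaning.
            return DetectorServer::ReadLatestSample(detector_);
        }

    private:
        DetectorServer::DetectorHandle detector_;
    };
}

int main()
{
    VehicleSpeedControl::SpeedMonitor monitor(3);
    std::cout << "current speed: " << monitor.CurrentSpeed() << std::endl;
    return 0;
}

The point is simply that the client compiles against nothing but the handle type and the published bridge services, so the server's subject matter never leaks upward into the client.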

If there appears to be a case where a certain subset of abstractions appears in one domain, and also seems to be called for in another domain, then these abstractions must somehow be removed from these domains and allocated to a common server. See section 2.3.5.7 Subject Matter Algebra for more on this.

Realized domains representing yet-to-be-developed code are subject to the same rules as MBSE domains regarding purity of subject matter. They should not implement abstractions present in other domains, and the domain should stand consistent by itself.

When setting up domains for packages/modules of existing code, remember it is not necessary to map all of the capabilities of one package into a single domain. For instance, the MFC as delivered with Microsoft's Visual C++ can populate several realized domains: GUI Foundation, DB Engine, File System Utilities, Task and Process Controls.

2.3.5.2 What is *The Application Domain*

The application domain should reflect the highest level abstractions in the system. These should be the identity concepts of the system, and most should seem familiar to anyone knowledgeable about the product. The application also bears the brunt of most of the system-level requirements.

Some practitioners may have on the order of 50% of the total system object population in the application domain. For a simple system, this may be OK, but for a system of any real complexity, this should not be allowed to happen. Any domain should be kept to a manageable size. Experience shows over 80 objects can indicate too much complexity in a single domain. The key to managing domain complexity is also the secret of a successful executive: delegation. See section 2.3.5.6 Delegating to Servers on how to break out pieces of a client and move them to a server.
2.3.5.3 Levels of Abstraction

The conceptual level of abstraction at the top level domains should be the highest, with more detailed/mechanical concepts closer to the bottom. This should work well with the overall flow of requirements downward, and the notion of a client domain delegating tasks downward.

2.3.5.4 Naming Domains

Maintaining a high degree of conceptual purity in a domain keeps the concepts more simple and powerful. This pursuit of purity can be a difficult, continuous struggle, and every reasonable aid should be employed to keep things in order. An effective name for a domain can do a lot to help enforce the proper level of abstraction and conceptual purity. While a well-crafted domain description can do a lot to identify a domain's conceptual space during domain model development, it is rarely consulted as often as the domain name itself.

As an example, your top-level domain should not be called "Application" - this is a waste of space (don't laugh - we've seen this). However, good application domain names can be difficult to arrive at. Frequently, the first name for an application domain may come from the most prominent visible aspect of a product, or the product name itself. For instance, an application domain for an air traffic control radar system might be called "ATC Radar", but a more complete analysis may show the highest-level domain is more appropriately termed "Aircraft Traffic Management", with a "Radar Tracking" server domain.

When naming server domains, a frequent mistake is to apply a name from the client's perspective. The key is to convey the capabilities offered by the server without restricting how they may be used. This will help keep the client's subject matter from leaking down into the server. For instance, if a "Vehicle Speed Control" domain relies on a server for "Speed Detector Monitoring", we may find more clients for these abstractions if we can name the server "Asynchronous Incident Buffering" and keep the subject matter free from the specifics of the type of device if possible.

2.3.5.5 Testing a Domain Definition

The first test of a domain is to read the domain description out loud. Is it defensible? Does it meet the requirements imposed upon it from the system-level and/or all of its clients? Does it provide usable boundaries on the constituent abstractions - including a conceptual lower bound? Can you construct a core domain mission statement without including vocabulary or concepts from other domains? (However it is helpful to augment this "pure" mission statement with "system-level" descriptions and examples, using perspectives from clients and other domains for clarification if necessary.)

If a domain definition cannot be readily written, perhaps additional effort is needed to identify the major players (objects) envisioned to be in the domain. There is no need to apply details such as attributes or even relationships - simply identify the domain core. This may provide a sufficiently concrete context from which to define a domain.

A conceptual exercise to test the integrity of a domain is to "transplant" it to another system. For a high level domain, envision how well the integrity of the subject matter survives the replacement of all server domains with different but equivalent substitutes. For instance, how well does your application domain survive if you move to a different O/S, GUI, Database, and underlying hardware?

For server domains, envision their reuse on a different system. Can they satisfy similar requirements imposed from a different application?

2.3.5.6 Delegating to Servers

The relationship between a client and a server should mirror a supervisor/skilled worker arrangement: the supervisor needs to know something about what the worker is doing, but does not need to know everything. Basically, as long as the worker gets the required job done, the supervisor should not care how this is done.

Performing the analysis of a domain may illuminate the need for capabilities at a lower level of abstraction than the current domain. This presents an opportunity to delegate these capabilities to a server domain. Typically scope issues influence the decision to create another domain - if the subordinate capabilities represent a sufficiently large "mass" of effort, the overhead in creating and managing another domain may be justified. The size and manageability of the current domain may also affect this decision.

Through modeling, we may also realize that a certain set of abstractions modeled here seems to be somewhat loosely connected to the rest of the domain, and quite tightly coupled within itself. This may simply be a valid aspect of this domain, essential to the core subject matter and inseparable though loosely connected. Or it may indicate a separate "sub-subject matter" within the domain. If this separate "sub-subject matter" is identified as already allocated to another domain, then obviously it should be moved there. However, if another domain is not a likely choice as a new home, then we are faced with the new server domain decision detailed above.

To determine if it is appropriate to move a "sub-subject matter" cluster to a new domain, the domain modeler must confirm that:

- the cluster to be moved to the server is coherent standing alone, and not dependent on its former context
- the set of abstractions represented by the cluster is at a lower level than the client
- the client can easily be changed to eliminate any former dependence on the cluster

2.3.5.7 Subject Matter Algebra

When two or more domains appear to have a need for a common set of abstractions, it is often desirable to move these objects/relationships to a common server domain. The domain modeler must determine if the needs of each of the potential clients are sufficiently similar to allow a single consistent set of requirements for the server. Once the "common factor" subject matter has been identified, it can be "divided out" of each client and moved to the server.

2.3.6 When to Analyze a Domain

2.3.6.1 I Knew That

UML Analysis is a very effective, general purpose approach for developing software. Therefore we generally use Analysis in any domain where we are developing software. To be more specific, consider Analysis for any domain where you can quickly imagine two or more objects as forming a sound basis for understanding of that subject matter.


2.3.6.2 Realized Code - Off-The-Shelf or Minor Changes

In the case where an existing package is being used, and only "minor" (or no) changes are required, this code must be conceptually allocated into separate subject matters, and a bridge constructed to abstract each of these realized domains to the rest of the system.

2.3.6.3 Legacy Code - Major Changes

If a block of existing code must be significantly changed, a common inclination - especially by managers and developers intimately familiar with this code - is to expect that some economy will be realized by trying to save "big pieces" of it, and just rearrange things. This inclination should be identified as a false hope, and any significant restructuring of an existing system is most economically achieved by starting from day 1 by laying a sound foundation in Analysis.

The implementation layer (code) of a system should be considered like a concrete casting. A bit of grinding here and there is fine from release to release. However any significant restructuring of this layer fundamentally weakens the overall structure of the system - the "mold" needs to be changed, and a new piece should be cast. Consider the Analysis of a domain to be the "mold", and the process of translating out code from the Analysis as "casting". The "mold" is what's important - "casting" is relatively cheap.

The intrinsic high cost of going from analysis or design concepts to code (the "casting" process) in ad-hoc or elaborative approaches should not be repeated. Or were you planning on not designing your major changes at all...?

Old code should be treated like old underwear - if it starts to wear out or need alterations, just chuck it.

2.3.6.4 When MBSE Isn’t Appropriate

There are many cases where packages or environments are available that provide very specific and effective support to develop code for certain specialized domains. An example of this is Microsoft's Visual Workbench. This is a Rapid Application Development environment supporting the quick generation of GUI-specific code. There are many examples of these environments, ranging from database and GUI realms to specialized numeric algorithm support.

Another case where a domain may not use Analysis for new code is when the project's Design Templates will not provide a satisfactory implementation layer. This could be due to space or time performance requirements, or other issues. The first response a project should make to this condition is to attempt to adjust the Design, or try a different translation approach for this domain. For example, not all domains in a system need to use queued asynchronous events. Some domains may not even have active objects - domain and object services could do all that is needed. However some cases will remain where Analysis is simply not the most effective way to solve a problem.

In the case of a realized domain, it may occasionally be necessary to create an Analyzed interface domain to provide the realized capabilities at a level and in a form more compatible with the rest of the system.

2.3.7 Domain Model Validation

Once the initial domain model has been completed, there are a number of steps to take in validating your subject matter separation. While the actual separation process may appear somewhat subjective at first, the validation steps are more objective, and their early and iterative application can help steady the Domain Modeling process. This evaluation process includes the assessment of the system overall, and/or each domain, in the following areas:

- conceptual clarity
- level of abstraction
- subject matter purity
- reusability
- analyze or realize
- scenario testing

In addition to the above modeling-specific evaluation criteria, the top level partitioning of the system can be evaluated by more traditional Structured Design criteria: coupling and cohesion. Coupling is the amount of undesirable interaction between domains and their constituent elements - something to be reduced. Cohesion is the degree to which elements within a single domain rely on each other and belong together - something to be increased. Low coupling between things in different domains is good. High cohesion within a domain is good.

2.3.8 Exit Criteria

While the domain model must be regarded as a living "document", it should be considered essentially complete once the authoring team considers the diagram and descriptions to be complete and self-consistent, and all major review items to be resolved.


2.4 Detailed Development Plan With Schedule

2.4.1 Entry Criteria

In order to estimate the overall size of the project, and to understand the breakout and
sequencing dependencies of the work, the initial domain modeling effort must be complete.

2.4.2 Object Blitz

The purpose of the object blitz at this point is to gain an understanding of the scope of effort in a domain. Restrict blitz activities to a single session (1-2 hours) per domain, only identifying candidate objects - do not delve into descriptions, relationships or attributes. Once a blitz identifies possible objects for a domain, this list must then be examined to eliminate all those that are not valid objects. Quickly eliminate all that are not obviously objects, looking for:

- attributes
- objects in another domain
- supporting requirements not in the current release

Once the first level cut has been made, the number of remaining objects is the Blitz Count.

2.4.3 Estimation Guidelines

At the time the initial effort is made to create a development plan and schedule, two basic facts must be accepted by the plan authors and their customers (management):

- at this point in the schedule, it is generally not reasonable to expect estimation accuracy any better than +30%/-0%
- once the application domain Information Modeling is complete, its servers should be re-blitzed, and a schedule revision will be released based on the new information. (OK - now who thinks this new schedule will be shorter than the original?)

Based on the above, it is not advisable to expend considerable effort on an object blitz to achieve greater accuracy. Only *real* information modeling will provide true illumination into the composition of a domain, so let's make our best cut at the initial schedule and get on with things.

First, determine your hours-per-object multiplier. This is a subjectively fudged collection of historical data from previous projects of similar size, staff and approach. Do not bother to separate active and inactive object statistics at this level. Now apply the following formula:

domainLaborHours = futureDiscoveryFactor * BlitzCount * hoursPerObject

where futureDiscoveryFactor initially = 2
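
As a worked illustration of this formula (the numbers are purely hypothetical, not calibration data from any project): a domain with a Blitz Count of 20 objects and a historical multiplier of 25 hours per object would be planned at domainLaborHours = 2 * 20 * 25 = 1,000 labor hours in the initial schedule.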

2.4.4 Activity Sequencing

The basic constraint for starting any domain is the detailed understanding of all requirements and constraints that bear on that domain. This basically means the system-level requirements that bear on the domain must be known, and all bridge services into the domain from clients must be defined. Typically completing the State Models for all clients will be sufficient to flesh out all bridge service needs. However, if greater overlap is needed between client and server domain development, then the results of Scenario Modeling can be used to establish server bridges.

The risk of overlapping client and server domain development is rework. You make the call.


2.4.5 Revision Points

As identified in section 2.4.3 Estimation Guidelines, the development plan should be reissued after the Information Modeling for the application domain is completed. In large systems where there are a number of high-level domains constituting the "application", these should also be completed. In any case, use a guideline of 25% - once the highest level domains constituting 25% or more of the initially blitzed objects are completed through Information Modeling, you are in a position to revise the schedule with much greater certainty.

Once the second revision of the schedule has been published, use your metrics tracking to help identify one of the following crisis indicators:

- a completed domain consumed significantly more hours per object than the initial estimate multiplier, possibly indicating this foundation coefficient is incorrect
- a domain is consuming time in one phase significantly out of proportion with what is expected, possibly indicating a domain with core issues such as subject matter, requirements, technique, etc.

After a trouble area is investigated and a likely root cause identified, update the schedule as necessary to account for the new information.

2.4.6 Metrics Gathering

The gathering of metrics on an MBSE project is necessary to be able to:

- identify trouble spots in the current effort
- provide an objective basis for status determination and reporting
- build estimates for future efforts
- share data with other organizations using MBSE

Each organization comes at this task with a varied history, culture, and infrastructure to support metrics collection. Any metrics collection system must consider the following:

- the system must be free of any compensatory, punitive, incentive or other distorting influence (anonymous reports can be quite accurate; report administrative "timecard" hours through a different system.)
- very low overhead, highly automated procedures ensure accuracy
- simple effort categorization increases accuracy - only break things down to the required level
- quickly make information available to all, helping to show the system is useful and likely to be used.

A very natural environment for the capture of MBSE effort metrics can be the CASE facility itself. Some are quite extensible, and provide a logical place to capture such information.

A fairly simple but reasonably complete set of metrics would collect the following effort-hour information at the end of each day or week:

- system-level detailed requirements
- domain modeling
- project plan/schedule and misc. admin
- information modeling on domain <X>
- Scenario Modeling on domain <X>
- state modeling on domain <X>
- action modeling on domain <X>
- dynamic verification of domain <X>
- software system integration
- hardware system integration
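
To make the "low overhead, simple categorization" advice concrete, here is a minimal C++ sketch of what a collected record could look like; the field names, date format, and storage choice are illustrative assumptions only, not a prescribed MBSE format.

#include <string>
#include <vector>

// One effort report, submitted at the end of a day or week.
struct EffortRecord
{
    std::string periodEnding;   // e.g. "2000-02-11" (hypothetical format)
    std::string category;       // one of the categories listed above,
                                // e.g. "information modeling on domain <X>"
    double      hours;          // effort hours, kept apart from timecard hours
};

// The whole project log is just a flat list; hours-per-object figures and the
// per-phase totals used at the revision points can be summed from it.
typedef std::vector<EffortRecord> ProjectEffortLog;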

2.4.7 Exit Criteria

In the same manner as the domain model, the development plan must be regarded as a living "document", and should be considered essentially complete once technical management considers the task list, interrelationships, and descriptions to be complete and self-consistent, and all major review items to be resolved.


2.5 Analyzing Each Domain

The process of analyzing a domain is typically the most discussed and published of all MBSE
topics. Instead of repeating what is well covered elsewhere, we will provide a process context,
and fill in any gaps.

2.5.1 Iterative Development

To help manage the complexity of a full release, and to gain competence with the MBSE development lifecycle through practice, the development timeline should be broken up into iterative builds within the overall release development. Each build should span a reasonably short period - generally 2-3 months - and be fully completed through verification on target hardware.

2.5.2 Entry Criteria

Analysis on a domain can start as soon as the System-Level Detailed Requirements Document is approved, and state modeling is completed in all of its client domains. If a high degree of parallelism is required in the allocation of development resources, analysis can be started once Scenario Models are complete in all of its client domains.

2.5.3 Domain Requirements Matrix

The Domain Requirements Matrix is a collection of references to all requirements that bear on a domain. It has 3 primary goals:

- provide a list of all system-level requirements that bear directly on this domain
- identify and describe all bridges into this domain
- provide a place for the analyst to record all assumptions and issues identified during the analysis phase

This document should be considered to be the "development contract" for the domain. However, it should not attempt to duplicate information that can simply be referenced from other sources. The form should meet the individual needs of each project, but can initially be considered a set of tables augmented with some prose (issues and assumptions).

2.5.4 Bridge Definition

From a very mechanical perspective, the externally published domain services can be considered the functional interface to a domain. The population of this bridge is determined by:

- the system-level requirements that bear directly on the domain, as referenced in the Domain Requirements Matrix
- the set of services required by the domain's clients, as outlined in the clients' Scenario and State Models

These bridge services should be defined before the start of information modeling to place a mechanical context on the requirements. The description of each service outlines the service action at a high level without duplicating the detail of the service's Action Model.
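
As a hedged illustration only (the service names and the client needs cited for them are invented here, continuing the Aircraft Traffic Management / Radar Tracking example from section 2.3.5.4), an initial bridge definition for a Radar Tracking server domain might list:

- StartTrack(aircraft id): begin maintaining a track for the identified aircraft; called for by the client's "aircraft enters controlled airspace" scenario.
- GetCurrentPosition(track id): return the most recent position estimate for a track; traces to the system-level requirement to display aircraft positions.
- DropTrack(track id): stop tracking and release the track; called for by the client's "aircraft leaves controlled airspace" scenario.

Each entry names the service, its parameters at a conceptual level, and the client need or system-level requirement that put it in the bridge.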

2.5.5 Information Modeling

The IM is the highest-level work product of a domain, and as such should "tell the story" of the domain. Once the domain mission has been reviewed, the natural abstractions of the domain should be captured. Do not immediately fret over mechanical issues, response time optimization, and other distracting concerns. Make your first trip around the landscape at a conceptually pure level.

Once it is felt that the population of the domain is reasonably complete and consistent from the first pass, then quickly cobble together a couple of rough scenario outlines - just enough to exercise the new objects. Informally walk through these scenarios and begin to review the IM in a more critical light. Review each bridge service into the domain and further refine the IM.

Don't over polish the IM - once things seem reasonably complete, and all object, attribute and relationship descriptions are accurate, move on. You should always allow for changes from later modeling phases - even substantial restructuring if necessary, so don't wax the bodywork yet.

As in all MBSE phases, use defensible, accurate and concise names. Take time to be sure descriptions convey what an external reviewer or absent-minded developer needs to gain the context required to understand the abstractions. Only model to satisfy the requirements of the immediate release - consider any structuring to help future work to be a cheat. Sometimes this can be a good cheat, and sometimes our foresight is not as clear as we'd like.

2.5.6 Scenario Modeling

The purpose of the Scenario Model phase is to establish a foundation for state modeling. An effective Scenario Model phase can significantly raise the productivity and quality of later phases.

If a high degree of parallelism is required in the allocation of development resources, the analysis of server domains can be started once the initial Scenario Models for all clients are completed.

2.5.7 State Modeling

The Scenario Models set the overall strategy for state modeling. However, once actual State Models are constructed, a more detailed understanding of the object behavior may require Scenario Model adjustments. Initially, State Models should lay out the positive processing steps, leaving error processing to be added in a subsequent pass.

Once positive actions have been modeled, review them with their corresponding scenarios. Following this, add in the error processing and other fringe cases, and review those. Go back and update the Scenario Models as necessary - they will be very valuable during integration.

The State Transition Tables should be completed to account for ignored or deferred events, and to help structure another form of error analysis - handling "untimely" events.
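
As a small illustrative sketch (the object, its states, and its events are invented and come from no particular model), a fragment of a State Transition Table might look like:

State      start_requested   stop_requested   fault_detected
Idle       -> Running        ignore           ignore
Running    can't happen      -> Idle          -> Fault Recovery

The "ignore" and "can't happen" entries are where untimely events are confronted explicitly: each cell records a deliberate analysis decision rather than an accident left to the code.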

Complete all state modeling in a domain before doing any process modeling in that domain - this will help avoid rework. Once the state modeling is complete for a domain, server domains can be started.

2.5.8 Action Modeling

Action modeling is an analysis step, and the action language should only deal in the abstractions of its domain. Leave manipulations of other domains in those domains. Perform low-level operations (below the level of analysis) in the Software Mechanisms domain.

2.5.9 Dynamic Verification

Your MBSE development environment's static model analyzer should be employed throughout the MBSE modeling process to ensure correct MBSE syntax and consistency. Dynamic Verification is used once the analysis is complete to verify the correctness of your behavioral analysis. This is a technique in which the actual behavior of your analysis is executed (not "simulated", as it is commonly referred to) in your development environment.

Dynamic Verification is a form of testing - likened to "Unit Testing". Like any other form of testing, requirements are used to define the desired behavior, a test plan is used to define and structure the execution of scenarios, tests are run, output is evaluated, and the test results are determined and recorded. The Domain Requirements Matrix (and its background documents) is used to define the expected behavior. Scenarios from the Scenario Model work should provide a good basis for the Dynamic Verification work.

2.5.10 Exit Criteria

The analysis of a domain is considered complete once the dynamic verification tests for all scenarios are passed. At this point, an inventory of all scenarios that pertain to the domain under verification should be referenced against the Domain Requirements Matrix to ensure sufficient test coverage.

Now the domain is ready for Software Integration.

2.6 Software Integration

2.6.1 Entry Criteria

Any competent software development organization has developed techniques for Software Integration. Once two or more domains have completed Dynamic Verification (for analyzed domains), Unit Test (for realized domains), or Acceptance Testing (for off-the-shelf domains), then Integration can start.

2.6.2 Exit Criteria

Software Integration is considered complete once the tests for all scenarios are passed.


Acknowledgments:

We would like to thank the people at the Teradyne/ATB Tester Software Group who participated in the development of this document.

