





Special Edition Using Visual C++ 6







- 24 -

Improving Your Application's Performance



Preventing Errors with ASSERT and TRACE

ASSERT: Detecting Logic Errors
TRACE: Isolating Problem Areas in Your Program

Adding Debug-Only Features
Sealing Memory Leaks

Common Causes of Memory Leaks
Debug new and delete
Automatic Pointers

Using Optimization to Make Efficient Code
Finding Bottlenecks by Profiling








When you develop a new application, you face several challenges: the application
must compile, it must run without blowing up, and it must do what you want it to
do. On some projects, there is also time to determine whether it can run faster,
use less memory, or ship as a smaller executable file. The performance improvement
techniques discussed in this chapter can prevent your program from blowing up and
eliminate the kind of thinkos that result in a program calculating or reporting
the wrong numbers. These improvements are not merely final tweaks and touch-ups
on a finished product.
You should form the habit of adding an ounce of prevention to your code as you
write and the habit of using the debugging capabilities that Developer Studio provides
you to confirm what's going on in your program. If you save all your testing to the
end, both the testing and the bug-fixing will be much harder than if you had been
testing all along. Also, of course, any bug you manage to prevent will never have
to be fixed at all!

Preventing Errors with ASSERT and TRACE
The developers of Visual C++ did not invent the concepts of asserting and tracing.
Other languages support these ideas, and they are taught in many computer science
courses. What is exciting about the Visual C++ implementation of these concepts is
the clear way in which your results are presented and the ease with which you can
suppress assertions and TRACE statements in release versions of your application.

ASSERT: Detecting Logic Errors
The ASSERT macro enables you to check a condition that you logically believe should
always be TRUE. For example, imagine you are about to access an array like this:

array[i] = 5;

You want to be sure that the index, i, isn't less than zero and doesn't run past
the last element allocated for the array. Presumably you have already written code
to calculate i, and if that code has been written properly, i must be at least 0
and less than the array size. An ASSERT statement will verify that:

ASSERT( i >= 0 && i < ARRAYSIZE)






NOTE: There is no semicolon (;) at the end of the line because ASSERT is
a macro, not a function. Older C programs may call a function named assert(), but
you should replace these calls with the ASSERT macro because ASSERT disappears during
a release build, as discussed later in this section. 





You can check your own logic with ASSERT statements. They should never be used
to check for user input errors or bad data in a file. Whenever the condition inside
an ASSERT statement is FALSE in a debug build, program execution halts with a message
telling you which assertion failed. At this point, you know you have a logic error,
or a developer error, that you need to correct. Here's another example:

// Calling code must pass a non-null pointer
void ProcessObject( Foo * fooObject )
{
    ASSERT( fooObject )
    // process object
}

This code can dereference the pointer in confidence, knowing execution will be
halted if the pointer is NULL.
You probably already know that Developer Studio makes it simple to build debug
and release versions of your programs. The debug version #defines a constant, _DEBUG,
and macros and other pre-processor code can check this constant to determine the
build type. When _DEBUG isn't defined, the ASSERT macro does nothing. This means
there is no speed penalty in the final code, as there would be if you added if
statements yourself to test for logic errors. There is no need for you to go through
your code, removing ASSERT statements when you release your application, and, in
fact, it's better to leave them there to help the developers who work on version
2. They document your assumptions, and they'll be there when the debugging work starts
again. In addition, ASSERT can't help you if there is a problem with the release
version of your code because it is used to find logic and design errors before you
release version 1.0 of your product.

TRACE: Isolating Problem Areas in Your Program
As discussed in Appendix D, "Debugging," the power of the Developer
Studio debugger is considerable. You can step through your code one line at a time
or run to a breakpoint, and you can see any of your variables' values in watch windows
as you move through the code. This can be slow, however, and many developers use
TRACE statements as a way of speeding up this process and zeroing in on the problem
area. Then they turn to more traditional step-by-step debugging to isolate the bad
code.
In the old days, isolating bad code meant adding lots of print statements to your
program, which is problematic in a Windows application. Before you start to think
up workarounds, such as printing to a file, relax. The TRACE macro does everything
you want, and like ASSERT, it magically goes away in release builds.
There are several TRACE macros: TRACE, TRACE0, TRACE1, TRACE2, and TRACE3. The
numeric suffix indicates the number of arguments the macro takes beyond the format
string, working much like printf. The different versions of TRACE were implemented
to save data segment space.
When you generate an application with AppWizard, many ASSERT and TRACE statements
are added for you. Here's a TRACE example:

if (!m_wndToolBar.Create(this)
    || !m_wndToolBar.LoadToolBar(IDR_MAINFRAME))
{
    TRACE0("Failed to create toolbar\n");
    return -1;      // fail to create
}

If the creation of the toolbar fails, this routine will return -1, which signals
to the calling program that something is wrong. This will happen in both debug and
release builds. In debug builds, though, a trace output will be sent to help the
programmer understand what went wrong.
All the TRACE macros write to afxDump, which is usually the debug window, but
can be set to stderr for console applications. The numeric suffix indicates the
argument count, and you use printf-style format specifiers within the string to
indicate the type of each passed argument--for example, to send a TRACE statement
that includes the value of an integer variable:

TRACE1("Error Number: %d\n", -1 );

or to pass two arguments, maybe a string and an integer:

TRACE2("File Error %s, error number: %d\n", __FILE__, -1 );

The most difficult part of tracing is making it a habit. Sprinkle TRACE statements
anywhere you return error values, before ASSERT statements, and in areas where you
are unsure that you constructed your code correctly. When confronted with unexpected
behavior, add TRACE statements first so that you better understand what is going
on before you start debugging.

Adding Debug-Only Features
If the idea of code that isn't included in a release build appeals to you, you
may want to arrange for some of your own code to be included in debug builds but
not in release builds. It's easy. Just wrap the code in a test of the _DEBUG constant,
like this:

#ifdef _DEBUG
// debug code here
#endif

In release builds, this code will not be compiled at all.
All the settings and configurations of the compiler and linker are kept separately
for debug and release builds and can be changed independently. For example, many
developers use different compiler warning levels. To bump your warning level to 4
for debug builds only, follow these steps:




1. Choose Project, Settings, which opens the Project Settings dialog box,
shown in Figure 24.1.


2. Choose Debug or Release from the drop-down list box at the upper left.
If you choose All Configurations, you'll change debug and release settings simultaneously.


3. Click the C/C++ tab and set the Warning Level to Level 4, as shown
in Figure 24.2. The default is Level 3, which you will use for the release version
(see Figure 24.3).



Warning level 4 will generate a lot more warnings than level 3. Some of those warnings
will probably come from code you didn't even write, such as MFC functions. You'll
just have to ignore those warnings.
FIG. 24.1 The Project
Settings dialog box enables you to set configuration items for different phases of
development.

FIG. 24.2 Warning
levels can be set higher during development.

FIG. 24.3 Warning
levels are usually lower in a production release.


Sealing Memory Leaks
A memory leak can be the most pernicious of errors. Small leaks may not cause
any execution errors in your program until it is run for an exceptionally long time
or with a larger-than-usual data file. Because most programmers test with tiny data
files or run the program for only a few minutes when they are experimenting with
parts of it, memory leaks may not reveal themselves in everyday testing. Alas, memory
leaks may well reveal themselves to your users when the program crashes or otherwise
misbehaves.

Common Causes of Memory Leaks
What does it mean when your program has a memory leak? It means that your program
allocated memory and never released it. One very simple cause is calling new to allocate
an object or an array of objects on the heap and never calling delete. Another cause
is changing the pointer kept in a variable without deleting the memory the pointer
was pointing to. More subtle memory leaks arise when a class with a pointer as a
member variable calls new to assign the pointer but doesn't have a copy constructor,
assignment operator, or destructor. Listing 24.1 illustrates some ways that memory
leaks are caused.

Listing 24.1  Causing Memory Leaks

// simple pointer leaving scope
{
    int * one = new int;
    *one = 1;
}   // one is out of scope now, and wasn't deleted

// mismatched new and delete: new uses delete and new[] uses delete[]
{
    float * f = new float[10];
    // use array
    delete f;   // Oops! Undefined behavior; the correct version is delete [] f;
}

// pointer to new memory goes out of scope before delete
{
    const char * DeleteP = "Don't forget P";
    char * p = new char[strlen(DeleteP) + 1];
    strcpy( p, DeleteP );
}   // scope ended before delete[]

class A
{
public:
    A();
    int * pi;
};

A::A()
{
    pi = new int();
    *pi = 3;
}

// ..later on, some code using this class..
A firsta;    // allocates an int for firsta.pi to point to
A seconda;   // allocates another int for seconda.pi
seconda = firsta;   // performs a bitwise (shallow) copy. Both objects
                    // now have a pi that points to the first int
                    // allocated. The pointer to the second int
                    // allocated is gone forever.

The code fragments all represent ways in which memory can be allocated and the
pointer to that memory lost before deallocation. After the pointer goes out of scope,
you can't reclaim the memory, and no one else can use it either. It's even worse
when you consider exceptions, discussed in Chapter 26, "Exceptions and Templates,"
because if an exception is thrown, your flow of execution may leave a function before
reaching the delete at the bottom of the code. Because destructors are called for
objects that are going out of scope as the stack unwinds, you can prevent some of
these problems by putting delete calls in destructors. This, too, is discussed in
more detail in Chapter 26, in the "Placing the catch Block" section.
Like all bugs, the secret to dealing with memory leaks is to prevent them--or
to detect them as soon as possible when they occur. You can develop some good habits
to help you:



If a class contains a pointer and allocates that pointer with new, be sure to
code a destructor that deletes the memory. Also, code a copy constructor and an
assignment operator (operator=).

If a function will allocate memory and return something to let the calling program
access that memory, it must return a pointer instead of a reference. You can't delete
a reference.

If a function will allocate memory and then delete it later in the same function,
allocate the memory on the stack, if at all possible, so that you don't forget to
delete it.

Never change a pointer's value unless you have first deleted the object or array
it was pointing to. Never increment a pointer that was returned by new.


Debug new and delete
MFC has a lot to offer the programmer who is looking for memory leaks. In debug
builds, whenever you use new and delete, you are actually using special debug versions
that track the filename and line number on which each allocation occurred and match
each delete with its corresponding new. If memory is left over as the program ends, you get a
warning message in the output section, as shown in Figure 24.4.
FIG. 24.4 Memory
leaks are detected automatically in debug builds.

To see this for yourself, create an AppWizard MDI application called Leak, accepting
all the defaults. In the InitInstance() function of the application class (CLeakApp
in this example), add this line:

int* pi = new int[20];

Build a debug version of the application and run it by choosing Build, Start Debug,
Go, or by clicking the Go button on the Build minibar. You will see output like that
shown in Figure 24.4. Notice that the filename (Leak.cpp) and the line number where the memory was allocated
are provided in the error message. Double-click that line in the output window, and
the editor window displays Leak.cpp with the cursor on line 54. (The coordinates
in the lower-right corner always remind you what line number you are on.) If you
were writing a real application, you would now know what the problem is. Now you
must determine where to fix it (more specifically, where to put the delete).

Automatic Pointers
When a program is executing within a particular scope, like a function, all variables
allocated in that function are allocated on the stack. The stack is a temporary
storage space that shrinks and grows, like an accordion. The stack is used to store
the current execution address before a function call, the arguments passed to the
function, and the local function objects and variables.
When the function returns, the stack pointer is reset to that location
where the prior execution point was stored. This makes the stack space after the
reset location available to whatever else needs it, which means those elements allocated
on the stack in the function are gone. This process is referred to as stack unwinding.





NOTE: Objects or variables defined with the keyword static are not allocated
on the stack. 





Stack unwinding also happens when an exception occurs. To reliably restore the
program to its state before an exception occurred in the function, the stack is unwound.
Stack-wise variables are gone, and the destructors for stack-wise objects are called.
Unfortunately, the same is not true for dynamic objects. The handles (for example,
pointers) are unwound, but the unwinding process doesn't call delete. This causes
a memory leak.
In some cases, the solution is to add delete statements to the destructors of
objects that you know will be destructed as part of the unwinding, so the memory
is released before the pointers to it are lost. A more general approach is to replace
simple pointers with a C++ class that can be used just like a pointer but contains
a destructor that deletes any memory at the location where it points. Don't worry,
you don't have to write such a class: One is included in the Standard Template Library,
which comes with Visual C++. Listing 24.2 is a heavily edited version of the auto_ptr
class definition, presented to demonstrate the key concepts.





TIP: If you haven't seen template code before, it's explained in Chapter
26.





Listing 24.2  A Scaled-Down Version of the auto_ptr Class
// This class is not complete. Use the complete definition in
// the Standard Template Library.
template <class T>
class auto_ptr
{
public:
    auto_ptr( T *p = 0) : rep(p) {}   // store pointer in the class
    ~auto_ptr() { delete rep; }       // delete internal rep
    // include pointer conversion members
    inline T* operator->() const { return rep; }
    inline T& operator*() const { return *rep; }
private:
    T * rep;
};

The class has one member variable, a pointer to whatever type you want a
pointer to. It has a one-argument constructor to build an auto_ptr from an int* or
a Truck* or any other pointer type. The destructor deletes the memory pointed to
by the internal member variable. Finally, the class overrides -> and *, the dereferencing
operators, so that dereferencing an auto_ptr feels just like dereferencing an ordinary
pointer.
If there is some class C to which you want to make an automatic pointer called
p, all you do is this:

auto_ptr<C> p(new C());

Now you can use p as though it were a C*--for example:

p->Method(); // calls C::Method()

You never have to delete the C object that p points to, even in the event of an
exception, because p was allocated on the stack. When it goes out of scope, its destructor
is called, and the destructor calls delete on the C object that was allocated in
the new statement.
You can read more about managed pointers and exceptions in Chapter 26.

Using Optimization to Make Efficient Code
There was a time when programmers were expected to optimize their code themselves.
Many a night was spent arguing about the order in which to test conditions or about
which variables should be register instead of automatic storage. These days, compilers
come with optimizers that can speed execution or shrink program size far beyond what
a typical programmer can accomplish by hand.
Here's a simple example of how optimizers work. Imagine you have written a piece
of code like this:

for (i = 0; i < 10; i++)
{
    y = 2;
    x[i] = 5;
}
for (i = 0; i < 10; i++)
{
    total += x[i];
}

Your code will run faster, with no effect on the final results, if you move the
y=2 in front of the first loop. In addition, you can easily combine the two loops
into a single loop. If you do that, it's faster to add 5 to total each time than
it is to calculate the address of x[i] to retrieve the value just stored in it. Really
bright optimizers may even realize that total can be calculated outside the loop
as well. The revised code might look like this:

y = 2;
for (i = 0; i < 10; i++)
{
    x[i] = 5;
}
total += 50;

Optimizers do far more than this, of course, but this example gives you an idea
of what's going on behind the scenes. It's up to you whether the optimizer focuses
on speed, occasionally at the expense of memory usage, or tries to minimize memory
usage, perhaps at a slightly lower speed.
To set the optimization options for your project, select the Project, Settings
command from Developer Studio's menu bar. The Project Settings property sheet, first
shown in Figure 24.1, appears. Click the C/C++ tab and make sure you are looking
at the Release settings; then select Optimizations in the Category box. Keep optimization
turned off for debug builds because the code in your source files and the code being
executed won't match line for line, which will confuse you and the debugger. You
should turn on some kind of optimization for release builds. Choose from the drop-down
list box, as shown in Figure 24.5.
FIG. 24.5 Select
the type of optimization you want.

If you select the Customize option in the Optimizations box, you can select from
the list of individual optimizations, including Assume No Aliasing, Global Optimizations,
Favor Fast Code, Generate Intrinsic Functions, Frame-Pointer Omission, and more.
However, as you can tell from these names, you really have to know what you're doing
before you set up a custom optimization scheme. For now, accept one of the schemes
that have been laid out for you.

Finding Bottlenecks by Profiling
Profiling an application lets you discover bottlenecks, pieces of code
that are slowing your application's execution and deserve special attention. It's
pointless to hand-optimize a routine unless you know that the routine is called often
enough for its speed to matter.
Another use of a profiler is to see whether the test cases you have put together
result in every one of your functions being called or in each line of your code being
executed. You may think you have selected test inputs that guarantee this; however,
the profiler can confirm it for you.
Visual C++ includes a profiler integrated with the IDE: All you need to do is
use it. First, adjust your project settings to include profiler information. Bring
up the Project Settings property sheet as you did in the preceding section and click
the Link tab. Check the Enable Profiling check box. Click OK and rebuild your project.
Links will be slower now because you can't do an incremental link when you are planning
to profile, but you can go back to your old settings after you've learned a little
about the way your program runs. Choose Build, Profile, and the Profile dialog box,
shown in Figure 24.6, appears.
FIG. 24.6 A profiler
can gather many kinds of information.

If you aren't sure what any of the radio buttons on this dialog box mean, click
the question mark in the upper-right corner and then click the radio button. You'll
receive a short explanation of the option. (If you would like to add this kind of
context-sensitive Help to your own applications, be sure to read Chapter 11, "Help.")

You don't profile as a method to catch bugs, but it can help to validate your
testing or show you the parts of your application that need work, which makes it
a vital part of the developer's toolbox. Get in the habit of profiling all your applications
at least once in the development cycle.








© Copyright, Macmillan Computer Publishing. All
rights reserved.







