Application and Protocol Efficiency
Let's consider the example of a driver who stops every few minutes to
go to the bathroom. This isn't exactly an efficient way to drive, but
some folks do it. Similarly, some applications don't take advantage of
certain protocol efficiencies. What protocol efficiencies might these
be?
Let's take a look at a concrete example. Novell's "packet burst mode"
for IPX/SPX is a way for the software at one end of a connection to
avoid a "ping-pong" effect with the end station. On a totally
unreliable network, it's sometimes necessary to get an acknowledgment
for each packet transmitted; on a reasonably reliable network, that
just creates unnecessary traffic. Take a look at Figure 23.1. The
workstation on the right gets all the data it needs in six
transmissions; the workstation on the left needs many more to get the
same data, because the application on that workstation insists on an
acknowledgment for each packet. In large quantities, this is extremely
inefficient, and Novell's burst mode is a way of avoiding it. However,
older Novell networks using older clients don't take advantage of this
feature.
[Figure 23.1 Unnecessary network traffic.]
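The difference shown in Figure 23.1 comes down to simple arithmetic. Here's a sketch of the frame counts, under simplifying assumptions (one acknowledgment frame per packet in "ping-pong" mode, one per burst in burst mode; the packet total and burst size are made up for illustration, not Novell's actual parameters):

```python
# Simplified model: count frames on the wire (data + acks) needed to
# deliver a fixed number of packets, for two acknowledgment strategies.

def frames_on_wire(total_packets: int, burst_size: int) -> int:
    """Data frames plus one ack frame per burst of burst_size packets."""
    bursts = -(-total_packets // burst_size)  # ceiling division
    return total_packets + bursts

per_packet = frames_on_wire(100, burst_size=1)    # ack every packet
burst_mode = frames_on_wire(100, burst_size=10)   # ack every 10 packets

print(per_packet)   # 200 frames: 100 data + 100 acks
print(burst_mode)   # 110 frames: 100 data + 10 acks
```

Nearly half the traffic in "ping-pong" mode is acknowledgments; burst mode cuts that overhead to a tenth.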
______________________________________________________________
A way of seriously upgrading the speed of an older Novell
network, without a new investment in hardware, is to get the newest
Client32 for your DOS or Windows workstations. Make sure that
PBURST.NLM is loaded on your 3.x server (no action is necessary on
a 4.x server). You'll be amazed at how much faster your network
seems to run.
______________________________________________________________
TCP/IP also has a "burst mode" of sorts, called sliding windows. Why
"sliding"? Because when conditions are good, the TCP "window" (the
amount of data that may be sent without an acknowledgment) is large,
but when conditions get bad, that amount "slides" down to compensate.
When conditions improve, it slides back up.
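A toy sketch of that sliding behavior follows. Real TCP congestion control is considerably more involved (slow start, fast recovery, and so on); the halve-on-loss, grow-by-one rules and the 64-segment cap here are simplifying assumptions for illustration only:

```python
# Toy model of a TCP-style window: shrink sharply when conditions are
# bad (a loss), grow gradually when they're good.

def next_window(window: int, loss: bool, max_window: int = 64) -> int:
    if loss:
        return max(1, window // 2)         # conditions bad: slide down
    return min(max_window, window + 1)     # conditions good: slide back up

window = 32
window = next_window(window, loss=True)    # a packet was lost -> 16
window = next_window(window, loss=False)   # delivery succeeded -> 17
print(window)                              # 17
```

The key idea survives the simplification: the amount of unacknowledged data in flight adapts continuously to what the network can handle.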
When testing two similar applications' network efficiency, you can
simply perform the same operation with each, measure how much data was
transmitted and how long it took, and compute the throughput for each.
I did this once with two thin-client applications and found that one
client was almost four times as efficient as the other. Wow!
Application efficiency typically doesn't change over time, unless the
application is designed to handle things differently past certain
thresholds (more than x indexes? more than x users?). Unfortunately,
there's no formula for dealing with this; you'll have to rely on
instinct and then test your theory.
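The throughput comparison described above amounts to a simple division. A sketch with made-up numbers (the same 2 MB operation performed by two hypothetical applications at different elapsed times):

```python
# Compare two applications' network efficiency: same operation, same
# amount of useful data, different time on the wire.

def throughput(bytes_moved: float, seconds: float) -> float:
    """Bytes per second for one captured operation."""
    return bytes_moved / seconds

app_a = throughput(2_000_000, 4.0)    # 500,000 bytes/sec
app_b = throughput(2_000_000, 15.0)   # roughly 133,000 bytes/sec

print(f"App A is {app_a / app_b:.1f}x as efficient as App B")
```

In practice the byte counts come from your analyzer's capture statistics, filtered to each application's conversation.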
Server Limitations
Every server can run out of resources. Once you identify which server
a slow application is running on, you'll want to check the following
items:
o CPU utilization
o CPU "waiting for I/O" statistic
o Swapping statistics/RAM available
Sure, every CPU can "run out of gas." But most of the time, you'll see
less than 100 percent CPU utilization. Even when a lot of CPU cycles
are available, you can still be slow. Why? It all boils down to this:
hard drives are slow as dirt, while physical memory runs at something
like the speed of light. Hard drives don't even begin to approach the
speed of memory. Consider these analogies:
o Using cache memory is like reaching up to your cabinet to get a
can of beans.
o Using regular memory is like walking to your pantry to get
a can of beans.
o Using "swap" (or virtual memory) is like getting in your
car and driving to the grocery store for a can of beans.
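To put rough numbers on the analogy, here's a sketch. These latencies are order-of-magnitude assumptions (typical of a machine with a spinning disk), not measurements from any particular system:

```python
# Order-of-magnitude access latencies for the three "can of beans" tiers.
# Assumed round numbers, chosen to show the ratios, not exact figures.

LATENCY_NS = {
    "cache (cabinet)": 10,           # ~tens of nanoseconds
    "RAM (pantry)":    100,          # ~a hundred nanoseconds
    "disk (store)":    10_000_000,   # ~10 ms for a spinning-disk seek
}

ram = LATENCY_NS["RAM (pantry)"]
for tier, ns in LATENCY_NS.items():
    print(f"{tier}: {ns:>11,} ns ({ns / ram:,.0f}x RAM)")
```

With those assumptions, a trip to swap costs on the order of 100,000 times a trip to RAM, which is why the grocery-store analogy is fair.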
Is it that bad? Pretty close. Even when the CPU is not busy, if your
program has to be "paged" back into physical memory from virtual
(hard drive) memory, it takes a long time, and your performance is
going to suffer. How much swapping is acceptable? For an answer to
that, see the "Baselining" section later in this chapter.
Finally, your application may be disk intensive, regardless of whether
there's enough physical memory to go around. Database programs, no
matter how well indexed, will suffer performance problems if they're
on nonoptimized disks. This typically isn't a "new" problem; you'll
see it from the first installation of an application. However, if
index or database files grow large enough, taking up more disk space
(and thus taking longer to load), performance may start to degrade.
You can see whether your applications are "disk bound" by checking
your "waiting for I/O" CPU statistic. If it's a large percentage of
the total CPU utilization, you probably have problems.
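On Linux, the "waiting for I/O" figure can be read from /proc/stat, whose first line carries cumulative CPU time split into user, nice, system, idle, and iowait fields; on other systems, the same statistic comes from tools such as iostat or sar. This sketch parses a sample line with made-up numbers:

```python
# Estimate the "waiting for I/O" share of busy CPU time from a
# /proc/stat "cpu" line (fields: user nice system idle iowait ...).

def iowait_fraction(stat_line: str) -> float:
    """Fraction of non-idle CPU time spent waiting for I/O."""
    fields = [int(f) for f in stat_line.split()[1:]]
    user, nice, system, idle, iowait = fields[:5]
    busy = user + nice + system + iowait
    return iowait / busy if busy else 0.0

# Sample line (invented numbers): lots of waiting, little real work.
sample = "cpu 1000 0 500 80000 3500 0 0"
print(f"{iowait_fraction(sample):.0%} of busy time waiting on disk")  # 70%
```

A figure like that, dwarfing the user and system time, is the signature of a disk-bound application.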
Measure Twice, Cut Once
The only way to know for sure whether you're running out of
anything (bandwidth, server resources, and so on) is to measure.
Everything else is guesswork. What do you measure first? It depends on
your theory. Remember, you'll be applying good black box
troubleshooting measures when someone tells you that "the network is
slow." You'll identify all of the pieces that constitute the whole
connection and then rule out one item at a time as the cause of the
slowness. If you rule out the local segment for the moment (other
people are working fine on this segment) and the route (other people
who use that route for different applications are also working fine),
you might suspect the server. If the server is working fine for two
other applications, but you don't know what's happening on the network
segment where the user is complaining, it's time to take measurements
on the segment in question.
How do you measure? It depends. For long-term monitoring, distributed
network analyzers or management probes are probably best; for
short-term problem determination, you can rely on your trusty
standalone network analyzer. In other words, intermittent problems are
better suited to probes, while a problem happening right now is best
caught with a standalone analyzer.