White Paper
Gigabit Ethernet and ATM
A Technology Perspective
Bursty, high-bandwidth applications are driving
the need for similarly high-bandwidth campus
backbone infrastructures. Today, there are two choices
for the high-speed campus backbone: ATM or Gigabit
Ethernet. For many reasons, both business and technical,
Gigabit Ethernet is most often selected as the technology of choice.
This paper briefly presents, from a technical perspective,
why Gigabit Ethernet is favored for most enterprise LANs.
In the past, most campuses used shared-media backbones
such as 16/32 Mbps Token-Ring and 100 Mbps FDDI
that are only slightly higher in speed than the LANs
and end stations they interconnect. This has caused
severe congestion in the campus backbones when these
backbones interconnect a number of access LANs.
A high capacity, high performance, and highly resilient
backbone is needed, one that can be scaled as end stations
grow in number or demand more bandwidth. Also
needed is the ability to support differentiated service
levels (Quality of Service or QoS), so that high priority,
time-sensitive, and mission-critical applications can
share the same network infrastructure as those that
require only best-effort service.
Until recently, Asynchronous Transfer Mode (ATM) was the only switching technology able to deliver high capacity and scalable bandwidth, with the promise of end-to-end Quality of Service. ATM offered seamless integration from the desktop, across the campus, and over the metropolitan/wide area network. It was thought that users would massively deploy connection-oriented, cell-based ATM to the desktop to enable new native ATM applications to leverage ATM's rich functionality (such as QoS). However, this did not come to pass. The Internet Protocol (IP), aided and abetted by the exploding growth of the Internet, rode roughshod over ATM deployment and marched relentlessly to world dominance.

When no other gigabit technology existed, ATM provided much needed relief as a high bandwidth backbone to interconnect numerous connectionless, frame-based campus LANs. But with the massive proliferation of IP applications, new native ATM applications did not appear. Even 25 Mbps and 155 Mbps ATM to the desktop did not appeal to the vast majority of users, because of their complexity, small bandwidth increase, and high costs when compared with the very simple and inexpensive 100 Mbps Fast Ethernet.

On the other hand, Fast Ethernet, with its auto-sensing, auto-negotiation capabilities, integrated seamlessly with the millions of installed 10 Mbps Ethernet clients and servers. Although relatively simple and elegant in concept, the actual implementation of ATM is complicated by a multitude of protocol standards and specifications (for instance, LAN Emulation, Private Network Node Interface, and Multiprotocol over ATM). This additional complexity is required in order to adapt ATM to the connectionless, frame-based world of the campus LAN.

Meanwhile, the very successful Fast Ethernet experience spurred the development of Gigabit Ethernet standards. Within two years of their conception (June 1996), Gigabit Ethernet over fiber (1000BASE-X) and copper (1000BASE-T) standards were approved, developed, and in operation. Gigabit Ethernet not only provides a massive scaling of bandwidth to 1000 Mbps (1 Gbps), but also shares a natural affinity with the vast installed base of Ethernet and Fast Ethernet campus LANs running IP applications.

Enhanced by additional protocols already common to Ethernet (such as IEEE 802.1Q Virtual LAN tagging, IEEE 802.1p prioritization, IETF Differentiated Services, and Common Open Policy Services), Gigabit Ethernet is now able to provide the differential qualities of service that previously only ATM could provide. One key difference with Gigabit Ethernet is that additional functionality can be incrementally added in a non-disruptive way as required, compared with the rather revolutionary approach of ATM. Further developments in bandwidth and distance scalability will see 10 Gbps Ethernet over local (10G-BASE-T) and wide area (10G-BASE-WX) networks. Thus, the promise of end-to-end seamless integration, once only the province of ATM, will be possible with Ethernet and all its derivations.

Today, there are two technology choices for the high-speed campus backbone: ATM and Gigabit Ethernet. While both seek to provide high bandwidth and differentiated QoS within enterprise LANs, these are very different technologies. Which is the better technology is no longer a subject of heated industry debate: Gigabit Ethernet is an appropriate choice for most campus backbones. Many business users have chosen Gigabit Ethernet as the backbone technology for their campus networks. An Infonetics Research survey (March 1999) records that 91 percent of respondents believe that Gigabit Ethernet is suitable for LAN backbone connection, compared with 66 percent for ATM. ATM continues to be a good option where its unique, rich, and complex functionality can be exploited by its deployment, most commonly in metropolitan and wide area networks.

Whether Gigabit Ethernet or ATM is deployed as the campus backbone technology of choice, the ultimate decision is one of economics and sound business sense, rather than pure technical considerations.

The next two sections provide a brief description of each technology.

Asynchronous Transfer Mode (ATM)
Asynchronous Transfer Mode (ATM) has been used as a campus backbone technology since its introduction in the early 1990s. ATM is specifically designed to transport multiple traffic types (data, voice and video, real-time or non-real-time), with inherent QoS for each traffic category.

To enable this and other capabilities, additional functions and protocols are added to the basic ATM technology. Private Network Node Interface (PNNI) provides OSPF-like functions to signal and route QoS requests through a hierarchical ATM network.
Multiprotocol over ATM (MPOA) allows the establishment of short-cut routes between communicating end systems on different subnets, bypassing the performance bottlenecks of intervening routers. There have been and continue to be enhancements in the areas of physical connectivity, bandwidth scalability, signaling, routing and addressing, security, and management.

While rich in features, this functionality has come with a fairly heavy price tag in complexity and cost. To provide backbone connectivity for today's legacy access networks, ATM, a connection-oriented technology, has to emulate capabilities inherently available in the predominantly connectionless Ethernet LANs, including broadcast, multicast, and unicast transmissions. ATM must also manipulate the predominantly frame-based traffic on these LANs, segmenting all frames into cells prior to transport, and then reassembling cells into frames prior to final delivery. Many of the complexity and interoperability issues are the result of this LAN Emulation, as well as the need to provide resiliency in these emulated LANs. There are many components required to make this workable; these include the LAN Emulation Configuration Server(s), LAN Emulation Servers, Broadcast and Unknown Servers, Selective Multicast Servers, Server Cache Synchronization Protocol, LAN Emulation User Network Interface, LAN Emulation Network-Network Interface, and a multitude of additional protocols, signaling controls, and connections (point-to-point, point-to-multipoint, multipoint-to-point, and multipoint-to-multipoint).

Until recently, ATM was the only technology able to promise the benefits of QoS from the desktop, across the LAN and campus, and right across the world. However, the deployment of ATM to the desktop, or even in the campus backbone LANs, has not been as widespread as predicted. Nor have there been many native applications available or able to benefit from the inherent QoS capabilities provided by an end-to-end ATM solution. Thus, the benefits of end-to-end QoS have been more imagined than realized.

Gigabit Ethernet as the campus backbone technology of choice is now surpassing ATM. This is due to the complexity and the much higher pricing of ATM components such as network interface cards, switches, system software, management software, troubleshooting tools, and staff skill sets. There are also interoperability issues, and a lack of suitable exploiters of ATM technology.

Gigabit Ethernet
Today, Gigabit Ethernet is a very viable and attractive solution as a campus backbone LAN infrastructure. Although relatively new, Gigabit Ethernet is derived from a simple technology and a large and well-tested Ethernet and Fast Ethernet installed base. Since its introduction, Gigabit Ethernet has been vigorously adopted as a campus backbone technology, with possible use as a high-capacity connection for high-performance servers and workstations to the backbone switches.

The main reason for this success is that Gigabit Ethernet provides the functionality that meets today's immediate needs at an affordable price, without undue complexity and cost. Gigabit Ethernet is complemented by a superset of functions and capabilities that can be added as needed, with the promise of further functional enhancements and bandwidth scalability (for example, IEEE 802.3ad Link Aggregation, and 10 Gbps Ethernet) in the near future. Thus, Gigabit Ethernet provides a simple scaling-up in bandwidth from the 10/100 Mbps Ethernet and Fast Ethernet LANs that are already massively deployed. Simply put, Gigabit Ethernet is Ethernet, but 100 times faster!

Since Gigabit Ethernet uses the same frame format as today's legacy installed LANs, it does not need the segmentation and reassembly function that ATM requires to provide cell-to-frame and frame-to-cell transitions. As a connectionless technology, Gigabit Ethernet does not require the added complexity of signaling and control protocols and connections that ATM requires. Finally, because QoS-capable desktops are not readily available, Gigabit Ethernet is at no particular disadvantage in providing QoS. New methods have been developed to incrementally deliver QoS and other needed capabilities that lend themselves to much more pragmatic and cost-effective adoption and deployment.

To complement the high-bandwidth capacity of Gigabit Ethernet as a campus backbone technology, higher-layer functions and protocols are available, or are being defined by standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE) and the Internet
Engineering Task Force (IETF). Many of these capabilities recognize the desire for convergence upon the ubiquitous Internet Protocol (IP). IP applications and transport protocols are being enhanced or developed to address the needs of high speed, multimedia networking that benefit Gigabit Ethernet. The Differentiated Services (DiffServ) standard provides differential QoS that can be deployed from the Ethernet and Fast Ethernet desktops across the Gigabit Ethernet campus backbones. The use of IEEE 802.1Q VLAN Tagging and 802.1p User Priority settings allows different traffic types to be accorded the appropriate forwarding priority and service.
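As a rough illustration of the 802.1p marking just mentioned, the sketch below builds the 4-byte IEEE 802.1Q tag that carries the 3-bit user priority alongside the 12-bit VLAN ID. The field layout follows the standard; the function name and the example values are purely illustrative.

    # Minimal sketch: how an IEEE 802.1Q tag carries the 802.1p user priority.
    # The 4-byte tag is inserted after the source address of an Ethernet frame:
    # TPID (0x8100, 2 bytes) + TCI (2 bytes: 3-bit priority, 1-bit CFI, 12-bit VLAN ID).
    # Function and variable names here are illustrative, not from any product API.

    import struct

    TPID_8021Q = 0x8100

    def build_vlan_tag(priority: int, vlan_id: int, cfi: int = 0) -> bytes:
        """Return the 4-byte 802.1Q tag for a given 802.1p priority (0-7) and VLAN ID (0-4095)."""
        if not 0 <= priority <= 7:
            raise ValueError("802.1p user priority is a 3-bit field (0-7)")
        if not 0 <= vlan_id <= 0xFFF:
            raise ValueError("VLAN ID is a 12-bit field (0-4095)")
        tci = (priority << 13) | (cfi << 12) | vlan_id
        return struct.pack("!HH", TPID_8021Q, tci)

    # Example: traffic marked with priority 6 on VLAN 20.
    tag = build_vlan_tag(priority=6, vlan_id=20)
    print(tag.hex())  # 8100c014

Switches that understand the tag can then map the priority value to a forwarding queue, which is all the mechanism needs to deliver differentiated service.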
When combined with policy-enabled networks, DiffServ provides powerful, secure, and flexible QoS capabilities for Gigabit Ethernet campus LANs by using protocols such as Common Open Policy Services (COPS), Lightweight Directory Access Protocol (LDAP), Dynamic Host Configuration Protocol (DHCP), and Domain Name System (DNS). Further developments, such as Resource Reservation Protocol, multicasting, real-time multimedia, audio and video transport, and IP telephony, will add functionality to a Gigabit Ethernet campus, using a gradual and manageable approach when users need these functions.

There are major technical differences between Gigabit Ethernet and ATM. A companion white paper, Gigabit Ethernet and ATM: A Business Perspective, provides a comparative view of the two technologies from a managerial perspective.

Technological Aspects
Aspects of a technology are important because they must meet some minimum requirements to be acceptable to users. Value-added capabilities will be used where desirable or affordable. If these additional capabilities are not used, whether for reasons of complexity or lack of exploiters of those capabilities, then users are paying for them for no reason (a common example is that many of the advanced features of a VCR are rarely exploited by most users). If features are too expensive, relative to the benefits that can be derived, then the technology is not likely to find widespread acceptance. Technology choices are ultimately business decisions.

The fundamental requirements for LAN campus networks are very much different from those of the WAN. It is thus necessary to identify the minimum requirements of a network, as well as the value-added capabilities that are "nice to have".

In the sections that follow, various terms are used with the following meanings:

• Ethernet is used to refer to all current variations of the Ethernet technology: traditional 10 Mbps Ethernet, 100 Mbps Fast Ethernet, and 1000 Mbps Gigabit Ethernet.

• Frame and packet are used interchangeably, although this is not absolutely correct from a technical purist point of view.

Quality of Service
Until recently, Quality of Service (QoS) was a key differentiator between ATM and Gigabit Ethernet. ATM was the only technology that promised QoS for voice, video, and data traffic. The Internet Engineering Task Force (IETF) and various vendors have since developed protocol specifications and standards that enhance the frame-switched world with QoS and QoS-like capabilities. These efforts are accelerating and, in certain cases, have evolved for use in both the ATM and frame-based worlds.

The difference between ATM and Gigabit Ethernet in the delivery of QoS is that ATM is connection-oriented, whereas Ethernet is connectionless. With ATM, QoS is requested via signaling before communication can begin. The connection is only accepted if it is without detriment to existing connections (especially for reserved bandwidth applications). Network resources are then reserved as required, and the accepted QoS service is guaranteed to be delivered end-to-end. By contrast, QoS for Ethernet is mainly delivered hop-by-hop, with standards in progress for signaling, connection admission control, and resource reservation.
ATM QoS
From its inception, ATM has been designed with QoS for voice, video and data applications. Each of these has different timing bounds, delay, delay variation sensitivities (jitter), and bandwidth requirements.

In ATM, QoS has very specific meanings that are the subject of ATM Forum and other standards specifications. Defined at the ATM layer (OSI Layer 2), the service architecture provides five categories of services that relate traffic characteristics and QoS requirements to network behavior:

• CBR: Constant Bit Rate, for applications that are sensitive to delay and delay variations, and need a fixed but continuously available amount of bandwidth for the duration of a connection. The amount of bandwidth required is characterized by the Peak Cell Rate. An example of this is circuit emulation.

• rt-VBR: Real-time Variable Bit Rate, for applications that need varying amounts of bandwidth with tightly regulated delay and delay variation, and whose traffic is bursty in nature. The amount of bandwidth is characterized by the Peak Cell Rate and Sustainable Cell Rate; burstiness is defined by the Maximum Burst Size. Example applications include real-time voice and video conferencing.

• nrt-VBR: Non-real-time Variable Bit Rate, for applications with similar needs as rt-VBR, requiring low cell loss, varying amounts of bandwidth, and with no critical delay and delay variation requirements. Example applications include non-real-time voice and video.

• ABR: Available Bit Rate, for applications requiring low cell loss, guaranteed minimum and maximum bandwidths, and with no critical delay or delay variation requirements. The minimum and maximum bandwidths are characterized by the Minimum Cell Rate and Peak Cell Rate respectively.

• UBR: Unspecified Bit Rate, for applications that can use the network on a best-effort basis, with no service guarantees for cell loss, delay and delay variations. Example applications are e-mail and file transfer.

Depending on the QoS requested, ATM provides a specific level of service. At one extreme, ATM provides a best-effort service for the lowest QoS (UBR), with no bandwidth reserved for the traffic. At the other extreme, ATM provides a guaranteed level of service for the higher QoS (that is, CBR and VBR) traffic. Between these extremes, ABR is able to use whatever bandwidth is available with proper traffic management and controls.

Because ATM is connection-oriented, requests for a particular QoS, admission control, and resource allocation are an integral part of the call signaling and connection setup process. The call is admitted and the connection established between communicating end systems only if the resources exist to meet a requested QoS, without jeopardizing services to already established connections. Once established, traffic from the end systems is policed and shaped for conformance with the agreed traffic contract. Flow and congestion are managed in order to ensure the proper QoS delivery.
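As a loose illustration of how the service categories tie traffic descriptors to admission decisions, the sketch below models a traffic contract and a deliberately simplified admission check. It is not the ATM Forum CAC or GCAC procedure; the class layout and the reservation rules are assumptions made only for this example.

    # Illustrative sketch only: a toy connection admission check in the spirit of
    # the ATM service categories described above. Real CAC algorithms are vendor-
    # and standard-specific; the bandwidth accounting below is deliberately simple.

    from dataclasses import dataclass

    CELL_BITS = 53 * 8  # every ATM cell occupies 53 bytes on the wire

    @dataclass
    class TrafficContract:
        category: str    # "CBR", "rt-VBR", "nrt-VBR", "ABR", or "UBR"
        pcr: int         # Peak Cell Rate, cells/second
        scr: int = 0     # Sustainable Cell Rate (VBR), cells/second
        mcr: int = 0     # Minimum Cell Rate (ABR), cells/second

        def reserved_bps(self) -> int:
            """Bandwidth a toy CAC would set aside for this connection."""
            if self.category == "CBR":
                return self.pcr * CELL_BITS       # fixed bandwidth at the peak rate
            if self.category in ("rt-VBR", "nrt-VBR"):
                return self.scr * CELL_BITS       # reserve the sustainable rate
            if self.category == "ABR":
                return self.mcr * CELL_BITS       # only the guaranteed minimum
            return 0                              # UBR: best effort, nothing reserved

    def admit(link_capacity_bps: int, reserved_bps: int, request: TrafficContract) -> bool:
        """Accept the call only if it does not jeopardize established connections."""
        return reserved_bps + request.reserved_bps() <= link_capacity_bps

    # Example: a CBR circuit-emulation request on a link with 149.76 Mbps of payload capacity.
    voice = TrafficContract("CBR", pcr=4000)
    print(admit(149_760_000, reserved_bps=100_000_000, request=voice))  # True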
Gigabit Ethernet QoS
One simple strategy for solving the backbone congestion problem is to over-provision bandwidth in the backbone. This is especially attractive if the initial investment is relatively inexpensive and the ongoing maintenance is virtually costless during its operational life. Gigabit Ethernet is an enabler of just such a strategy in the LAN. Gigabit Ethernet, and soon 10-Gigabit Ethernet, will provide all the bandwidth that is ever needed for many application types, eliminating the need for complex QoS schemes in many environments. However, some applications are bursty in nature and will consume all available bandwidth, to the detriment of other applications that may have time-critical requirements. The solution is to provide a priority mechanism that ensures bandwidth, buffer space, and processor power are allocated to the different types of traffic.

With Gigabit Ethernet, QoS has a broader interpretation than with ATM. But it is just as able, albeit with different mechanisms, to meet the requirements of voice, video and data applications. In general, Ethernet QoS is delivered at a high layer of the OSI model. Frames are typically classified individually by a filtering scheme. Different priorities are assigned to each class of traffic, either explicitly by means of priority bit settings in the frame header, or implicitly in the priority level of the queue or VLAN to which they are assigned. Resources are then provided in a preferentially prioritized (unequal or unfair) way to service the queues. In this manner, QoS is delivered by providing differential services to the differentiated traffic through this mechanism of classification, priority setting, prioritized queue assignment, and prioritized queue servicing. (For further information on QoS in Frame-Switched Networks, see WP3510-A/5-99, a Nortel Networks white paper available on the Web at www.nortelnetworks.com.)

Differentiated Services
Chief among the mechanisms available for Ethernet QoS is Differentiated Services (DiffServ). The IETF DiffServ Working Group proposed DiffServ as a simple means to provide scalable differentiated services in an IP network. DiffServ redefines the IP Precedence/Type of Service field in the IPv4 header and the Traffic Class field in the IPv6 header as the new DS Field (see Figure 1). An IP packet's DS Field is then marked with a specific bit pattern, so the packet will receive the desired differentiated service (that is, the desired forwarding priority), also known as per-hop behavior (PHB), at each network node along the path from source to destination.

Figure 1: Differentiated Services Field (RFC 2474).
  Byte 1:     IP Version | IP Header Length
  Byte 2:     Differentiated Services Code Point (DSCP, six bits) | Currently Unused (two bits)
  Bytes 3-20: remainder of the IP header

To provide a common use and interpretation of the possible DSCP bit patterns, RFC 2474 and RFC 2475 define the architecture, format, and general use of these bits within the DSCP Field. These definitions are required in order to guarantee the consistency of expected service when a packet crosses from one network's administrative domain to another, or for multi-vendor interoperability. The Working Group also standardized the following specific per-hop behaviors and recommended bit patterns (also known as code points or DSCPs) of the DS Field for each PHB:

• Expedited Forwarding (EF-PHB), sometimes described as Premium Service, uses a DSCP of b'101110'. The EF-PHB provides the equivalent service of a low loss, low latency, low jitter, assured bandwidth point-to-point connection (a virtual leased line). EF-PHB frames are assigned to a high priority queue where the arrival rate of frames at a node is shaped to be always less than the configured departure rate at that node.

• Assured Forwarding (AF-PHB) uses 12 DSCPs to identify four forwarding classes, each with three levels of drop precedence (12 PHBs). Frames are assigned by the user to the different classes and drop precedence depending on the desired degree of assured, but not guaranteed, delivery. When allocated resources (buffers and bandwidth) are insufficient to meet demand, frames with the high drop precedence are discarded first. If resources are still restricted, medium precedence frames are discarded next, and low precedence frames are dropped only in the most extreme lack of resource conditions.

• A recommended Default PHB with a DSCP of b'000000' (six zeros) that equates to today's best-effort service when no explicit DS marking exists.

In essence, DiffServ operates as follows:

• Each frame entering a network is analyzed and classified to determine the appropriate service desired by the application.

• Once classified, the frame is marked in the DS field with the assigned DSCP value to indicate the appropriate PHB.

• Within the core of the network, frames are forwarded according to the PHB indicated.

• Analysis, classification, marking, policing, and shaping operations need only be carried out at the host or network boundary node. Intervening nodes need only examine the short fixed-length DS Field to determine the appropriate PHB to be given to the frame. This architecture is the key to DiffServ scalability. In contrast, other models such as RSVP/Integrated Services are severely limited by signaling, application flow, and forwarding state maintenance at each and every node along the path.

• Policies govern how frames are marked and traffic conditioned upon entry to the network; they also govern the allocation of network resources to the traffic streams, and how the traffic is forwarded within that network.
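The sketch below illustrates the boundary-node behavior just described: classify a frame, choose a DSCP, and write it into the six high-order bits of the DS Field. The EF and Default code points are the ones quoted above; the classification rules and helper names are invented for illustration.

    # Sketch of DiffServ marking at a network boundary node. The DSCP occupies the
    # six high-order bits of the DS Field (the former IPv4 TOS byte); the EF and
    # Default code points below are the ones quoted in the text. The classification
    # rules themselves are invented purely for illustration.

    DSCP_DEFAULT = 0b000000   # best-effort Default PHB
    DSCP_EF      = 0b101110   # Expedited Forwarding ("Premium Service")
    DSCP_AF11    = 0b001010   # one of the twelve Assured Forwarding code points

    def classify(frame: dict) -> int:
        """Toy classifier: pick a DSCP from (hypothetical) frame metadata."""
        if frame.get("udp_port") == 5004:        # e.g. a real-time media stream
            return DSCP_EF
        if frame.get("vlan_priority", 0) >= 4:   # honour an 802.1p marking from the edge
            return DSCP_AF11
        return DSCP_DEFAULT

    def mark_ds_field(old_tos: int, dscp: int) -> int:
        """Write the 6-bit DSCP into the DS Field, keeping the two currently unused bits."""
        return (dscp << 2) | (old_tos & 0b11)

    ds_field = mark_ds_field(0x00, classify({"udp_port": 5004}))
    print(f"DS Field = 0x{ds_field:02x}")  # 0xb8 for EF

A core node repeats only the last step in reverse: read the six DSCP bits and select the corresponding per-hop behavior, which is what keeps the architecture scalable.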
DiffServ allows nodes that are not DS-capable, or even DS-aware, to continue to use the network in the same way as they have previously by simply using the Default PHB, which is best-effort forwarding. Thus, without requiring end-to-end deployment, DiffServ provides Gigabit Ethernet with a powerful, yet simple and scalable, means to provide differential QoS services to support various types of application traffic.

Common Open Policy Services
To enable a Policy Based Networking capability, the Common Open Policy Services (COPS) protocol can be used to complement DiffServ-capable devices. COPS provides an architecture and a request-response protocol for communicating admission control requests, policy-based decisions, and policy information between a network policy server and the set of clients it serves.

With Gigabit Ethernet, the switches at the network ingress may act as COPS clients. COPS clients examine frames as they enter the network, communicate with a central COPS server to decide if the traffic should be admitted to the network, and enforce the policies. These policies include any QoS forwarding treatment to be applied during transport. Once this is determined, the DiffServ-capable Gigabit Ethernet switches can mark the frames using the selected DSCP bit pattern, apply the appropriate PHB, and forward the frames to the next node. The next node need only examine the DiffServ markings to apply the appropriate PHB. Thus, frames are forwarded hop-by-hop through a Gigabit Ethernet campus with the desired QoS.

In Nortel Networks Passport* Campus Solution, COPS will be used by Optivity* Policy Services (COPS server) and the Passport Enterprise and Routing Switches (COPS clients) to communicate QoS policies defined at the policy server to the switches for enforcement (see Figure 2).

Figure 2: Passport Campus Solution and Optivity Policy Services.
  (Diagram: an end station can set the 802.1p or DSCP field; a Passport 1000 Routing Switch validates the marking against the policy server and sets/resets the DSCP using Express Classification; Optivity Policy Services communicates filter and queuing rules to the switches using Common Open Policy Services; Passport 8000 Routing Switches police, shape and forward the classified frames; a Passport 700 Server Switch in the server farm ensures the most appropriate server is used, depending on loads and response times.)

Connection-oriented vs. Connectionless
ATM is a connection-oriented protocol. Most enterprise LAN networks are connectionless Ethernet networks, whether Ethernet, Fast Ethernet or Gigabit Ethernet.

Note: Because of Ethernet's predominance, it greatly simplifies the discussion to not refer to the comparatively sparse Token-Ring technology; this avoids complicating the comparison with qualifications for Token-Ring LANs and ELANs, Route Descriptors instead of MAC addresses as LAN destinations, and so forth.

An ATM network may be used as a high-speed backbone to connect Ethernet LAN switches and end stations together. However, a connection-oriented ATM backbone requires ATM Forum LAN Emulation (LANE) protocols to emulate the operation of connectionless legacy LANs. In contrast with simple Gigabit Ethernet backbones, much of the complexity of ATM backbones arises from the need for LANE.

ATM LAN Emulation v1
LANE version 1 was approved in January 1995. Whereas a Gigabit Ethernet backbone is very simple to implement, each ATM emulated LAN (ELAN) needs several logical components and protocols that add to ATM's complexity. These components are:

• LAN Emulation Configuration Server(s) (LECS) to, among other duties, provide configuration data to an end system, and assign it to an ELAN (although the same LECS may serve more than one ELAN).

• Only one LAN Emulation Server (LES) per ELAN to resolve 6-byte LAN MAC addresses to 20-byte ATM addresses and vice versa.
• Only one Broadcast and Unknown Server (BUS) per ELAN to forward broadcast frames, multicast frames, and frames for destinations whose LAN or ATM address is as yet unknown.

• One or more LAN Emulation Clients (LEC) to represent the end systems. This is further complicated by whether the end system is a LAN switch to which other Ethernet end stations are attached, or whether it is an ATM-directly attached end station. A LAN switch requires a proxy LEC, whereas an ATM-attached end station requires a non-proxy LEC.

Collectively, the LECS, LES, and BUS are known as the LAN Emulation Services. Each LEC (proxy or non-proxy) communicates with the LAN Emulation Services using different virtual channel connections (VCCs) and LAN Emulation User Network Interface (LUNI) protocols. Figure 3 shows the VCCs used in LANE v1.

Figure 3: LAN Emulation v1 Connections and Functions.
  Configuration Direct VCC  - bi-directional, point-to-point:       between an LECS and an LEC
  Control Direct VCC        - bi-directional, point-to-point:       between an LES and its LECs**
  Control Distribute VCC    - uni-directional, point-to-multipoint: from an LES to its LECs
  Multicast Send VCC        - bi-directional, point-to-point:       between a BUS and an LEC
  Multicast Forward VCC     - uni-directional, point-to-multipoint: from a BUS to its LECs
  Data Direct VCC           - bi-directional, point-to-point:       between an LEC and another LEC
  **Note: There is a difference between LECS with an uppercase S (meaning LAN Emulation Configuration Server) and LECs with a lowercase s (meaning LAN Emulation Clients, or more than one LEC) at the end of the acronym.

Some VCCs are mandatory: once established, they must be maintained if the LEC is to participate in the ELAN. Other VCCs are optional: they may or may not be established and, if established, they may or may not be released thereafter. Unintended release of a required VCC may trigger the setup process. In certain circumstances, this can lead to instability in the network.

The most critical components of the LAN Emulation Service are the LES and BUS, without which an ELAN cannot function. Because each ELAN can only be served by a single LES and BUS, these components need to be backed up by other LESs and BUSs to prevent any single point of failure stopping communication between the possibly hundreds or even thousands of end stations attached to an ELAN. In addition, the single LES or BUS represents a potential performance bottleneck.

Thus, it became necessary for the LAN Emulation Service components to be replicated for redundancy and elimination of single points of failure, and distributed for performance.

ATM LAN Emulation v2
To enable communication between the redundant and distributed LAN Emulation Service components, as well as other functional enhancements, LANE v1 was re-specified as LANE v2; it now comprises two separate protocols:

• LUNI: LAN Emulation User Network Interface (approved July 1997)

• LNNI: LAN Emulation Network-Network Interface (approved February 1999).

LUNI, among other enhancements, added the Selective Multicast Server (SMS), to provide a more efficient means of forwarding multicast traffic, which was previously performed by the BUS. SMS thus offloads much of the multicast processing from the BUS, allowing the BUS to focus more on the forwarding of broadcast traffic and traffic with yet-to-be-resolved LAN destinations.

LNNI provides for the exchange of configuration, status, control coordination, and database synchronization between redundant and distributed components of the LAN Emulation Service.

However, each improvement adds new complexity. Additional protocols are required and additional VCCs need to be established, maintained, and monitored for communication between the new LAN Emulation Service components and LECs. For example, all LESs serving an ELAN communicate control messages to each other through a full mesh of Control Coordinate VCCs. These LESs must also synchronize their LAN-ATM address databases, using the Server Cache Synchronization Protocol (SCSP, RFC 2334), across the Cache Synchronization VCC. Similarly, all BUSs serving an ELAN must be fully connected by a mesh of Multicast Forward VCCs used to forward data.
Unicast traffic from a sending LEC is initially forwarded to a receiving LEC via the BUS. When a Data Direct VCC has been established between the two LECs, the unicast traffic is then forwarded via the direct path. During the switchover from the initial to the direct path, it is possible for frames to be delivered out of order. To prevent this possibility, LANE requires an LEC to either implement the Flush protocol, or for the sending LEC to delay transmission at some latency cost.

The forwarding of multicast traffic from an LEC depends on the availability of an SMS:

• If an SMS is not available, the LEC establishes the Default Multicast Send VCC to the BUS that, in turn, will add the LEC as a leaf to its Default Multicast Forward VCC. The BUS is then used for the forwarding of multicast traffic.

• If an SMS is available, the LEC can establish, in addition to the Default Multicast Send VCC to the BUS, a Selective Multicast Send VCC to the SMS. In this case, the BUS will add the LEC as a leaf to its Default Multicast Forward VCC and the SMS will add the LEC as a leaf to its Selective Multicast Forward VCC. The BUS is then used initially to forward multicast traffic until the multicast destination is resolved to an ATM address, at which time the SMS is used. The SMS also synchronizes its LAN-ATM multicast address database with its LES using SCSP across Cache Synchronization VCCs.

Figure 4 shows the additional connections required by LANE v2.

Figure 4: LAN Emulation v2 Additional Connections and/or Functions.
  LECS Synchronization VCC         - bi-directional, point-to-point:       between LECSs
  Configuration Direct VCC         - bi-directional, point-to-point:       between an LECS and an LEC, LES or BUS
  Control Coordinate VCC           - bi-directional, point-to-point:       between LESs
  Cache Synchronization VCC        - bi-directional, point-to-point:       between an LES and its SMSs
  Default Multicast Send VCC       - bi-directional, point-to-point:       between a BUS and an LEC (as in v1)
  Default Multicast Forward VCC    - uni-directional, point-to-multipoint: from a BUS to its LECs and other BUSs
  Selective Multicast Send VCC     - bi-directional, point-to-point:       between an SMS and an LEC
  Selective Multicast Forward VCC  - uni-directional, point-to-multipoint: from an SMS to its LECs

This multitude of control and coordination connections, as well as the exchange of control frames, consumes memory, processing power, and bandwidth, just so that a Data Direct VCC can finally be established for persistent communication between two end systems. The complexity can be seen in Figure 5.
Figure 5: Complexity of ATM LAN Emulation.
  (Diagram showing the full mesh of connections among LECSs, LESs, BUSs, SMSs, and LECs: 1 Configuration Direct VCC; 2 Control Direct VCC; 3 Control Distribute VCC; 4 Default Multicast Send VCC; 5 Default Multicast Forward VCC; 6 Data Direct VCC; 7 Selective Multicast VCC; 8 Selective Multicast Forward VCC; 9 Cache Sync-only VCC; 10 Control Coordinate-only VCC; 11 LECS Sync VCC. The Cache Sync-only and Control Coordinate-only VCCs may be combined into one dual-function VCC between two neighbor LESs.)
AAL-5 Encapsulation
In addition to the complexity of connections and protocols, the data carried over LANE uses ATM Adaptation Layer-5 (AAL-5) encapsulation, which adds overhead to the Ethernet frame. The Ethernet frame is stripped of its Frame Check Sequence (FCS); the remaining fields are copied to the payload portion of the CPCS-PDU, and a 2-byte LANE header (LEH) is added to the front, with an 8-byte trailer at the end. Up to 47 pad bytes may be added, to produce a CPCS-PDU that is a multiple of 48, the size of an ATM cell payload.

The CPCS-PDU also has to be segmented into 53-byte ATM cells before being transmitted onto the network. At the receiving end, the 53-byte ATM cells have to be decapsulated and reassembled into the original Ethernet frame. Figure 6 shows the CPCS-PDU that is used to transport Ethernet frames over LANE.

Figure 6: AAL-5 CPCS-PDU.
  LEH (2 bytes) | CPCS-PDU Payload (1-65535 bytes) | Pad (0-47 bytes) | CPCS-PDU Trailer: CPCS-UU (1) | CPI (1) | Length (2) | CRC (4)
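The byte counts involved in this encapsulation can be reproduced with a few lines of arithmetic. The sketch below applies the rules just described (strip the FCS, add the 2-byte LEH and 8-byte trailer, pad to a multiple of 48, segment into 53-byte cells); the function name is illustrative.

    # Arithmetic sketch of the AAL-5 encapsulation described above.
    LEH = 2          # LANE header
    TRAILER = 8      # CPCS-UU + CPI + Length + CRC
    CELL_PAYLOAD = 48
    CELL_SIZE = 53

    def lane_cells(ethernet_frame_len: int) -> tuple[int, int, int]:
        """Return (pad bytes, number of cells, bytes on the wire) for one LANE data frame."""
        payload = ethernet_frame_len - 4          # FCS is stripped before encapsulation
        unpadded = LEH + payload + TRAILER
        pad = (-unpadded) % CELL_PAYLOAD          # 0..47 pad bytes
        cells = (unpadded + pad) // CELL_PAYLOAD
        return pad, cells, cells * CELL_SIZE

    print(lane_cells(1518))  # (12, 32, 1696): the maximum-sized frame of Figure 9
    print(lane_cells(64))    # (26, 2, 106): a minimum-sized frame needs two cells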
Gigabit Ethernet LAN
In contrast, a Gigabit Ethernet LAN backbone does not have the complexity and overhead of control functions, data encapsulation and decapsulation, segmentation and reassembly, and control and data connections required by an ATM backbone.

As originally intended, at least for initial deployment in the LAN environment, Gigabit Ethernet uses full-duplex transmission between switches, or between a switch and a server in a server farm, in other words, in the LAN backbone. Full-duplex Gigabit Ethernet is much simpler, and does not suffer from the complexities and deficiencies of half-duplex Gigabit Ethernet, which uses the CSMA/CD protocol, Carrier Extension, and frame bursting.

Frame Format (Full-Duplex)
Full-duplex Gigabit Ethernet uses the same frame format as Ethernet and Fast Ethernet, with a minimum frame length of 64 bytes and a maximum of 1518 bytes (including the FCS but excluding the Preamble/SFD). If the data portion is less than 46 bytes, pad bytes are added to produce a minimum frame size of 64 bytes. Figure 7 shows the same frame format for Ethernet, Fast Ethernet and full-duplex Gigabit Ethernet that enables the seamless integration of Gigabit Ethernet campus backbones with the Ethernet and Fast Ethernet desktops and servers they interconnect.

Figure 7: Full-Duplex Gigabit Ethernet Frame Format (no Carrier Extension).
  Preamble/SFD (8 bytes) | Destination Address (6) | Source Address (6) | Length/Type (2) | Data/Pad (46 to 1500) | FCS (4)
  64 bytes minimum to 1518 bytes maximum (excluding Preamble/SFD)
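A short sketch of the padding rule shown in Figure 7, assuming nothing beyond the field sizes listed there:

    # Frame-size rule of Figure 7: DA (6) + SA (6) + Length/Type (2) + Data/Pad (46-1500) + FCS (4),
    # i.e. 64 to 1518 bytes, so data portions shorter than 46 bytes are padded.
    HEADER = 6 + 6 + 2
    FCS = 4

    def frame_length(data_len: int) -> int:
        if data_len > 1500:
            raise ValueError("payload larger than 1500 bytes is not valid Ethernet")
        padded = max(data_len, 46)
        return HEADER + padded + FCS

    print(frame_length(1))     # 64: 45 pad bytes are added
    print(frame_length(1500))  # 1518: the maximum frame size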
Frame Format (Half-Duplex)
Because of the greatly increased speed of propagation and the need to support practical network distances, half-duplex Gigabit Ethernet requires the use of the Carrier Extension. The Carrier Extension provides a minimum transmission length of 512 bytes. This allows collisions to be detected without increasing the minimum frame length of 64 bytes; thus, no changes are required to higher layer software, such as network interface card (NIC) drivers and protocol stacks.

With half-duplex transmission, if the data portion is less than 46 bytes, pad bytes are added in the Pad field to increase the minimum (non-extended) frame to 64 bytes. In addition, bytes are added in the Carrier Extension field so that a minimum of 512 bytes for transmission is generated. For example, with 46 bytes of data, no bytes are needed in the Pad field, and 448 bytes are added to the Carrier Extension field. On the other hand, with 494 or more (up to 1500) bytes of data, no pad or Carrier Extension is needed.

Figure 8: Half-Duplex Gigabit Ethernet Frame Format (with Carrier Extension).
  Preamble/SFD (8 bytes) | Destination Address (6) | Source Address (6) | Length/Type (2) | Data/Pad (46 to 493) | FCS (4) | Carrier Extension (448 to 1)
  64 bytes minimum (non-extended); 512 bytes minimum transmission

Goodput Efficiency
With full-duplex Gigabit Ethernet, the good throughput ("goodput") in a predominantly 64-byte frame size environment, where no Carrier Extension is needed, is calculated as follows (where SFD = start frame delimiter, and IFG = interframe gap):

  64 bytes (frame) / [64 bytes (frame) + 8 bytes (SFD) + 12 bytes (IFG)] = approx. 76%

This goodput translates to a forwarding rate of 1.488 million packets per second (Mpps), known as the wirespeed rate.

With Carrier Extension, the resulting goodput is very much reduced:

  64 bytes (frame) / [512 bytes (frame with CE) + 8 bytes (SFD) + 12 bytes (IFG)] = approx. 12%

In ATM and Gigabit Ethernet comparisons, this 12 percent figure is sometimes quoted as evidence of Gigabit Ethernet's inefficiency. However, this calculation is only applicable to half-duplex (as opposed to full-duplex) Gigabit Ethernet. In the backbone and server-farm connections, the vast majority (if not all) of the Gigabit Ethernet deployed will be full-duplex.

Mapping Ethernet Frames into ATM LANE Cells
As mentioned previously, using ATM LAN Emulation as the campus backbone for Ethernet desktops requires AAL-5 encapsulation and subsequent segmentation and reassembly.

Figure 9 shows a maximum-sized 1518-byte Ethernet frame mapped into a CPCS-PDU and segmented into 32 53-byte ATM cells, using AAL-5; this translates into a goodput efficiency of:

  1514 bytes (frame without FCS) / [32 ATM cells x 53 bytes per ATM cell] = approx. 89%

For a minimum size 64-byte Ethernet frame, two ATM cells will be required; this translates into a goodput efficiency of:

  60 bytes (frame without FCS) / [2 ATM cells x 53 bytes per ATM cell] = approx. 57%

Figure 9: Mapping Ethernet Frame into ATM Cells.
  Ethernet frame: Preamble/SFD (8) | Destination Address (6) | Source Address (6) | Length/Type (2) | Data/Pad (1500) | FCS (4); the 1514 bytes excluding Preamble/SFD and FCS are carried in the CPCS-PDU
  CPCS-PDU: LEH (2) | Ethernet frame (1514) | Pad (12) | CPCS-UU (1) | CPI (1) | Length (2) | CRC (4)
  Segmented into ATM cells 1 through 32: 1696 bytes on the wire
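The four efficiency figures above can be checked with the following small calculation, which uses only the byte counts quoted in the text:

    # The goodput figures quoted above, reproduced as a small calculation.
    # SFD stands for the 8 preamble/start-frame-delimiter bytes and IFG for the
    # 12-byte (96-bit) interframe gap, as in the text.
    SFD, IFG = 8, 12
    CELL = 53

    def ethernet_goodput(frame: int, transmitted: int) -> float:
        """Useful frame bytes divided by the bytes occupied on the wire."""
        return frame / (transmitted + SFD + IFG)

    def lane_goodput(frame: int, cells: int) -> float:
        """Frame bytes (without FCS) divided by the bytes of the ATM cells carrying it."""
        return (frame - 4) / (cells * CELL)

    print(f"full duplex, 64-byte frames:     {ethernet_goodput(64, 64):.0%}")   # ~76%
    print(f"half duplex with Carrier Ext.:   {ethernet_goodput(64, 512):.0%}")  # ~12%
    print(f"LANE, 1518-byte frame, 32 cells: {lane_goodput(1518, 32):.0%}")     # ~89%
    print(f"LANE, 64-byte frame, 2 cells:    {lane_goodput(64, 2):.0%}")        # ~57%

    # The 76 percent figure corresponds to the 1.488 Mpps wirespeed rate:
    print(1_000_000_000 / ((64 + SFD + IFG) * 8))  # about 1,488,095 frames per second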
Frame Bursting
The Carrier Extension is an overhead, especially if short frames are the predominant traffic size. To enhance goodput, half-duplex Gigabit Ethernet allows frame bursting. Frame bursting allows an end station to send multiple frames in one access (that is, without contending for channel access for each frame) up to the burstLength parameter. If a frame is being transmitted when the burstLength threshold is exceeded, the sender is allowed to complete the transmission. Thus, the maximum duration of a frame burst is 9710 bytes; this is the burstLength (8192 bytes) plus the maximum frame size (1518 bytes). Only the first frame is extended if required. Each frame is spaced from the previous by a 96-bit interframe gap. Both sender and receiver must be able to process frame bursting.

Figure 10: Frame Bursting.
  Preamble/SFD (8) | MAC Frame-1 (52-1518) | Extension (if needed) | IFG (12) | Preamble/SFD (8) | MAC Frame-2 (64-1518) | IFG (12) | ... | Preamble/SFD (8) | MAC Frame-n (64-1518) | IFG (12)
  One frame burst
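As a rough sketch of the burst-length rule, assuming only the parameters quoted above (an 8192-byte burstLength and the 96-bit interframe gap):

    # Sketch of the half-duplex frame-bursting rule described above: frames are sent
    # back to back until the 8192-byte burstLength threshold is crossed; a frame
    # already in progress may complete, which gives the 9710-byte maximum burst.
    BURST_LENGTH = 8192
    IFG = 12

    def burst(frames: list[int]) -> list[int]:
        """Return the frame sizes that fit into one burst, per the rule above."""
        sent, used = [], 0
        for size in frames:
            if used >= BURST_LENGTH:   # may not start a new frame past the threshold
                break
            sent.append(size)          # a frame already started is always completed
            used += size + IFG
        return sent

    print(len(burst([512] * 32)))   # 16 frames of 512 bytes fit in one burst
    print(BURST_LENGTH + 1518)      # 9710: the maximum burst duration in bytes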
CSMA/CD Protocol
Full-duplex Gigabit Ethernet does not use or need the CSMA/CD protocol. Because of the dedicated, simultaneous, and separate send and receive channels, it is very much simplified without the need for carrier sensing, collision detection, backoff and retry, carrier extension, and frame bursting.

Flow Control and Congestion Management
In both ATM and Gigabit Ethernet, flow control and congestion management are necessary to ensure that the network elements, individually and collectively, are able to meet QoS objectives required by applications using that network. Sustained congestion in a switch, whether ATM or Gigabit Ethernet, will eventually result in frames being discarded. Various techniques are employed to minimize or prevent buffer overflows, especially under transient overload conditions. The difference between ATM and Gigabit Ethernet is in the availability, reach, and complexity (functionality and granularity) of these techniques.

ATM Traffic and Congestion Management
In an ATM network, the means employed to manage traffic flow and congestion are based on the traffic contract: the ATM Service Category and the traffic descriptor parameters agreed upon for a connection. These means may include:

• Connection Admission Control (CAC): accepting or rejecting connections being requested at the call setup stage, depending upon availability of network resources (this is the first point of control and takes into account connections already established).

• Traffic Policing: monitoring and controlling the stream of cells entering the network for connections accepted, and marking out-of-profile traffic for possible discard using Usage Parameter Control (UPC) and the Generic Cell Rate Algorithm (GCRA).

• Backpressure: exerting on the source to decrease cell transmission rate when congestion appears likely or imminent.

• Congestion Notification: notifying the source and intervening nodes of current or impending congestion by setting the Explicit Forward Congestion Indication (EFCI) bit in the cell header (Payload Type Indicator) or using Relative Rate (RR) or Explicit Rate (ER) bits in Resource Management (RM) cells to provide feedback both in the forward and backward directions, so that remedial action can be taken.

• Cell Discard: employing various discard strategies to avoid or relieve congestion:

  - Selective Cell Discard: dropping cells that are non-compliant with traffic contracts or have their Cell Loss Priority (CLP) bit marked for possible discard if necessary
  - Early Packet Discard (EPD): dropping all the cells belonging to a frame that is queued, but for which transmission has not been started

  - Partial Packet Discard (PPD): dropping all the cells belonging to a frame that is being transmitted (a more drastic action than EPD)

  - Random Early Detection (RED): dropping all the cells of randomly selected frames (from different sources) when traffic arrival algorithms indicate impending congestion (thus avoiding congestion), and preventing waves of synchronized re-transmission precipitating congestion collapse. A further refinement is offered using Weighted RED (WRED).

• Traffic Shaping: modifying the stream of cells leaving a switch (to enter or transit a network) so as to ensure conformance with contracted profiles and services. Shaping may include reducing the Peak Cell Rate, limiting the duration of bursting traffic, and spacing cells more uniformly to reduce the Cell Delay Variation.
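The Usage Parameter Control mentioned under Traffic Policing is commonly described in terms of the Generic Cell Rate Algorithm. The sketch below is the textbook virtual-scheduling form of GCRA, not any particular switch's implementation; the rate and tolerance values are examples only.

    # Generic Cell Rate Algorithm (GCRA) in its virtual-scheduling form: a cell
    # arriving earlier than its theoretical arrival time minus the limit tau is
    # non-conforming and may be tagged (CLP=1) or discarded.
    class Gcra:
        def __init__(self, increment: float, limit: float):
            self.t = increment     # T = 1 / contracted cell rate (seconds per cell)
            self.tau = limit       # tolerance (e.g. CDVT or burst tolerance)
            self.tat = 0.0         # theoretical arrival time of the next cell

        def conforming(self, arrival: float) -> bool:
            if arrival < self.tat - self.tau:
                return False                        # too early: out of profile
            self.tat = max(arrival, self.tat) + self.t
            return True

    # Police a 10,000 cell/s contract with a 0.2 ms tolerance.
    police = Gcra(increment=1e-4, limit=2e-4)
    arrivals = [0.0, 0.0, 0.0, 0.0, 0.0005]
    print([police.conforming(t) for t in arrivals])  # [True, True, True, False, True]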
Gigabit Ethernet Flow Control
For half-duplex operation, Gigabit Ethernet uses the CSMA/CD protocol to provide implicit flow control by backpressuring the sender from transmitting in two simple ways:

• Forcing collisions with the incoming traffic, which forces the sender to back off and retry as a result of the collision, in conformance with the CSMA/CD protocol.

• Asserting carrier sense to provide a channel busy signal, which prevents the sender from accessing the medium to transmit, again in conformance with the protocol.

With full-duplex operation, Gigabit Ethernet uses explicit flow control to throttle the sender. The IEEE 802.3x Task Force defined a MAC Control architecture, which adds an optional MAC Control sub-layer above the MAC sub-layer, and uses MAC Control frames to control the flow. To date, only one MAC Control frame has been defined; this is for the PAUSE operation.

A switch or an end station can send a PAUSE frame to stop a sender from transmitting data frames for a specified length of time. Upon expiration of the period indicated, the sender may resume transmission. The sender may also resume transmission when it receives a PAUSE frame with a zero time specified, indicating the waiting period has been cancelled. On the other hand, the waiting period may be extended if the sender receives a PAUSE frame with a longer period than previously received.

Using this simple start-stop mechanism, Gigabit Ethernet prevents frame discards when input buffers are temporarily depleted by transient overloads. It is only effective when used on a single full-duplex link between two switches, or between a switch and an end station (server). Because of its simplicity, the PAUSE function does not provide flow control across multiple links, or from end-to-end across (or through) intervening switches. It also requires both ends of a link (the sending and receiving partners) to be MAC Control-capable.
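The PAUSE operation itself is compact enough to sketch. The frame layout below (reserved multicast destination 01-80-C2-00-00-01, MAC Control Length/Type 0x8808, opcode 0x0001, and a 16-bit pause time counted in 512-bit-time quanta) follows IEEE 802.3x; the source address and helper name are placeholders.

    # Sketch of the one MAC Control frame defined to date, the 802.3x PAUSE frame.
    # The frame is padded to the 60-byte minimum (FCS excluded here).
    import struct

    def pause_frame(source_mac: bytes, quanta: int) -> bytes:
        """Build an untagged PAUSE frame body (without preamble/SFD and FCS)."""
        dst = bytes.fromhex("0180c2000001")
        frame = dst + source_mac + struct.pack("!HHH", 0x8808, 0x0001, quanta)
        return frame + bytes(60 - len(frame))   # pad with zeros to the minimum size

    # Ask the partner to stop sending for 0xFFFF quanta (about 33.6 ms at 1 Gbps);
    # a later PAUSE frame with zero quanta cancels the waiting period.
    frame = pause_frame(bytes.fromhex("00000a0b0c0d"), 0xFFFF)
    print(len(frame), frame[:20].hex())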
Bandwidth Scalability
Advances in computing technology have fueled the explosion of visually and aurally exciting applications for e-commerce, whether Internet, intranet or extranet. These applications require exponential increases in bandwidth. As a business grows, increases in bandwidth are also required to meet the greater number of users without degrading performance. Therefore, bandwidth scalability in the network infrastructure is critical to supporting incremental or quantum increases in bandwidth capacity, which is frequently required by many businesses.

ATM and Gigabit Ethernet both provide bandwidth scalability. Whereas ATM's bandwidth scalability is more granular and extends from the desktop and over the MAN/WAN, Gigabit Ethernet has focused on scalability in campus networking from the desktop to the MAN/WAN edge. Therefore, Gigabit Ethernet provides quantum leaps in bandwidth from 10 Mbps, through 100 Mbps, 1000 Mbps (1 Gbps), and even 10,000 Mbps (10 Gbps) without a corresponding quantum leap in costs.
ATM Bandwidth
ATM is scalable from 1.544 Mbps through to 2.4 Gbps and even higher speeds. Approved ATM Forum specifications for the physical layer include the following bandwidths:

• 1.544 Mbps DS1
• 2.048 Mbps E1
• 25.6 Mbps over shielded and unshielded twisted pair copper cabling (the bandwidth that was originally envisioned for ATM to the desktop)
• 34.368 Mbps E3
• 44.736 Mbps DS3
• 100 Mbps over multimode fiber cabling
• 155.52 Mbps SONET/SDH over UTP and single and multimode fiber cabling
• 622.08 Mbps SONET/SDH over single and multimode fiber cabling
• 622.08 Mbps and 2.4 Gbps cell-based physical layer (without any frame structure).

Work is also in progress (as of October 1999) on 1 Gbps cell-based physical layer, 2.4 Gbps SONET/SDH, and 10 Gbps SONET/SDH interfaces.

Inverse Multiplexing over ATM
In addition, the ATM Forum's Inverse Multiplexing over ATM (IMA) standard allows several lower-speed DS1/E1 physical links to be grouped together as a single higher speed logical link, over which cells from an ATM cell stream are individually multiplexed. The original cell stream is recovered in correct sequence from the multiple physical links at the receiving end. Loss and recovery of individual links in an IMA group are transparent to the users. This capability allows users to:

• Interconnect ATM campus networks over the WAN, where ATM WAN facilities are not available, by using existing DS1/E1 facilities

• Incrementally subscribe to more DS1/E1 physical links as needed

• Protect against single link failures when interconnecting ATM campus networks across the WAN

• Use multiple DS1/E1 links that are typically lower cost than a single DS3/E3 (or higher speed) ATM WAN link for normal operation or as backup links.

Gigabit Ethernet Bandwidth
Ethernet is scalable from the traditional 10 Mbps Ethernet, through 100 Mbps Fast Ethernet, and 1000 Mbps Gigabit Ethernet. Now that the Gigabit Ethernet standards have been completed, the next evolutionary step is 10 Gbps Ethernet. The IEEE P802.3 Higher Speed Study Group has been created to work on 10 Gbps Ethernet, with Project Authorization Request and formation of a Task Force targeted for November 1999 and a standard expected by 2002.

Bandwidth scalability is also possible through link aggregation (that is, grouping multiple Gigabit Ethernet links together to provide greater bandwidth and resiliency). Work in this area of standardization is proceeding through the IEEE 802.3ad Link Aggregation Task Force (see the Trunking and Link Aggregation section of this paper).

Distance Scalability
Distance scalability is important because of the need to extend the network across widely dispersed campuses, and within large multi-storied buildings, while making use of existing UTP-5 copper cabling and common single and multimode fiber cabling, and without the need for additional devices such as repeaters, extenders, and amplifiers.

Both ATM and Gigabit Ethernet (IEEE 802.3ab) can operate easily within the limit of 100 meters from a wiring closet switch to the desktop using UTP-5 copper cabling. Longer distances are typically achieved using multimode (50/125 or 62.5/125 µm) or single mode (9-10/125 µm) fiber cabling.
Gigabit Ethernet Distances
Figure 11 shows the maximum distances supported by Ethernet and Fast Ethernet, using various media.

Figure 11: Ethernet and Fast Ethernet Supported Distances.
  10BASE-T   (IEEE 802.3, 10 Mbps):    Cat 5 UTP 100 m; STP/coax 500 m; fiber N/A
  10BASE-FL  (IEEE 802.3, 10 Mbps):    multimode fiber 2 km; singlemode fiber 25 km
  100BASE-TX (IEEE 802.3u, 100 Mbps):  Cat 5 UTP 100 m; STP 100 m; fiber N/A
  100BASE-FX (IEEE 802.3u, 100 Mbps):  multimode fiber 412 m (half duplex) or 2 km (full duplex); singlemode fiber 20 km

IEEE 802.3z Gigabit Ethernet Fiber Cabling
IEEE 802.3u-1995 (Fast Ethernet) extended the operating speed of CSMA/CD networks to 100 Mbps over both UTP-5 copper and fiber cabling. The IEEE P802.3z Gigabit Ethernet Task Force was formed in July 1996 to develop a Gigabit Ethernet standard. This work was completed in July 1998 when the IEEE Standards Board approved the IEEE 802.3z-1998 standard.

The IEEE 802.3z standard specifies the operation of Gigabit Ethernet over existing single and multimode fiber cabling. It also supports short (up to 25 m) copper jumper cables for interconnecting switches, routers, or other devices (servers) in a single computer room or wiring closet. Collectively, the three designations 1000BASE-SX, 1000BASE-LX and 1000BASE-CX are referred to as 1000BASE-X.

Figure 12 shows the maximum distances supported by Gigabit Ethernet, using various media.

Figure 12: Gigabit Ethernet Supported Distances.
  1000BASE-SX (IEEE 802.3z, 1000 Mbps, 850 nm shortwave, 2 fibers, duplex SC connector): multimode fiber 525 m (50 µm) or 260 m (62.5 µm)
  1000BASE-LX (IEEE 802.3z, 1000 Mbps, 1300 nm longwave, 2 fibers, duplex SC connector): multimode fiber 550 m (50 or 62.5 µm); singlemode fiber (10 µm) 3 km
  1000BASE-CX (IEEE 802.3z, 1000 Mbps, 2 pairs, Fibre Channel-2 or DB-9 connector): 150 ohm STP 25 m
  1000BASE-T  (IEEE 802.3ab, 1000 Mbps, 4 pairs, RJ-45 connector): 100 ohm UTP-5 100 m
  Note: distances are for full duplex, the expected mode of operation in most cases.

1000BASE-X Gigabit Ethernet is capable of auto-negotiation for half- and full-duplex operation. For full-duplex operation, auto-negotiation of flow control includes both the direction and symmetry of operation, symmetrical and asymmetrical.

IEEE 802.3ab Gigabit Ethernet Copper Cabling
For Gigabit Ethernet over copper cabling, an IEEE Task Force started developing a specification in 1997. A very stable draft specification, with no significant technical changes, had been available since July 1998. This specification, known as IEEE 802.3ab, is now approved (as of June 1999) as an IEEE standard by the IEEE Standards Board.

The IEEE 802.3ab standard specifies the operation of Gigabit Ethernet over distances up to 100 m using 4-pair 100 ohm Category 5 balanced unshielded twisted pair copper cabling. This standard is also known as the 1000BASE-T specification; it allows deployment of Gigabit Ethernet in the wiring closets, and even to the desktops if needed, without change to the UTP-5 copper cabling that is installed in many buildings today.

Trunking and Link Aggregation
Trunking provides switch-to-switch connectivity for ATM and Gigabit Ethernet. Link Aggregation allows multiple parallel links between switches, or between a switch and a server, to provide greater resiliency and bandwidth. While switch-to-switch connectivity for ATM is well-defined through the NNI and PNNI specifications, several vendor-specific protocols are used for Gigabit Ethernet, with standards-based connectivity to be provided once the IEEE 802.3ad Link Aggregation standard is complete.
Nortel Networks is actively involved in this standards effort, while providing highly resilient and higher bandwidth Multi-Link Trunking (MLT) and Gigabit LinkSafe technology in the interim.

ATM PNNI
ATM trunking is provided through NNI (Network Node Interface or Network-to-Network Interface) using the Private NNI (PNNI) v1.0 protocols, an ATM Forum specification approved in March 1996.

To provide resiliency, load distribution and balancing, and scalability in bandwidth, multiple PNNI links may be installed between a pair of ATM switches. Depending on the implementation, these parallel links may be treated for Connection Admission Control (CAC) procedures as a single logical aggregated link. The individual links within a set of paralleled links may be any combination of the supported ATM speeds. As more bandwidth is needed, more PNNI links may be added between switches as necessary without concern for the possibility of loops in the traffic path.

By using source routing to establish a path (VCC) between any source and destination end systems, PNNI automatically eliminates the forming of loops. The end-to-end path, computed at the ingress ATM switch using Generic Connection Admission Control (GCAC) procedures, is specified by a list of ATM nodes known as a Designated Transit List (DTL). Computation based on default parameters will result in the shortest path meeting the requirements, although preference may be given to certain paths by assigning lower Administrative Weight to preferred links. This DTL is then validated by local CAC procedures at each ATM node in the list. If an intervening node finds the path is invalid, maybe as a result of topology or link state changes in the meantime, that node is able to automatically crank the list back to the ingress switch for recomputation of a new path. An ATM switch may perform path computation as a background task before calls are received (to reduce latency during call setups), or when a call request is received (for real-time optimized paths at the cost of some setup delay), or both (for certain QoS categories), depending on user configuration.

PNNI also provides performance scalability when routing traffic through an ATM network, using the hierarchical structure of ATM addresses. An individual ATM end system in a PNNI peer group can be reached using the summary address for that peer group, similar to using the network and subnet ID portions of an IP address. A node whose address does not match the summary address (the non-matching address is known as a foreign address) can be explicitly set to be reachable and advertised.
Gigabit Ethernet and ATM: A Technology Perspective White Paper 17
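To make the source-routed path computation described above concrete, the sketch below is a hypothetical Python illustration, not taken from any PNNI implementation: it prunes links that fail a GCAC-style bandwidth check, then runs a shortest-path search over administrative weights and returns an ordered node list playing the role of a Designated Transit List. The node names, weights, and rates are invented for the example.

```python
import heapq

# Illustrative sketch of PNNI-style source routing at an ingress switch:
# drop links that cannot carry the requested bandwidth (a GCAC-like check),
# then choose the path with the lowest total administrative weight and
# return it as an ordered node list (the role a DTL plays in PNNI).

# (node_a, node_b) -> (administrative_weight, available_cell_rate)
LINKS = {
    ("A", "B"): (10, 150_000),
    ("B", "D"): (10, 40_000),   # cheapest hops, but short on capacity
    ("A", "C"): (20, 150_000),
    ("C", "D"): (20, 150_000),
}

def compute_dtl(src, dst, required_rate):
    # Build an adjacency map from the links that pass the admission check.
    adj = {}
    for (a, b), (weight, rate) in LINKS.items():
        if rate >= required_rate:
            adj.setdefault(a, []).append((b, weight))
            adj.setdefault(b, []).append((a, weight))

    # Dijkstra over administrative weight; each heap entry carries its path.
    heap = [(0, src, [src])]
    visited = set()
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path                      # the "DTL": ordered node list
        if node in visited:
            continue
        visited.add(node)
        for nxt, weight in adj.get(node, []):
            if nxt not in visited:
                heapq.heappush(heap, (cost + weight, nxt, path + [nxt]))
    return None                              # no path satisfies the request

print(compute_dtl("A", "D", required_rate=100_000))  # ['A', 'C', 'D']
print(compute_dtl("A", "D", required_rate=20_000))   # ['A', 'B', 'D']
```

A real PNNI implementation layers crankback, hierarchical summarization, and per-QoS metrics on top of this basic idea; the sketch only shows the flavor of the computation.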
A Peer Group Leader (PGL) may represent the nodes in the peer group at a higher level. These PGLs are logical group nodes (LGNs) that form higher-level peer groups, which allow even shorter summary addresses. These higher-level peer groups can be represented in even higher peer groups, thus forming a hierarchy. By using this multi-level hierarchical routing, less address, topology, and link state information needs to be advertised across an ATM network, allowing scalability as the number of nodes grows.

However, this rich functionality comes with a price. PNNI requires memory, processing power, and bandwidth from the ATM switches for maintaining state information, topology and link state update exchanges, and path computation. PNNI also results in greater complexity in hardware design, software algorithms, switch configuration, deployment, and operational support, and ultimately much higher costs.

ATM UNI Uplinks versus NNI Risers

PNNI provides many benefits with regard to resiliency and scalability when connecting ATM switches in the campus backbone. However, these advantages are not available in most ATM installations, where the LAN switches in the wiring closets are connected to the backbone switches using ATM UNI uplinks. In such connections, the end stations attached to the LAN switch are associated, directly or indirectly (through VLANs), with specific proxy LECs located in the uplinks. An end station cannot be associated with more than one proxy LEC active in separate uplinks at any one time. Hence, no redundant path is available if the proxy LEC (meaning the uplink or uplink path) representing the end stations should fail.

While it is possible to have one uplink active and another on standby, connected to the backbone via a different path and ready to take over in case of failure, very few ATM installations have implemented this design, for reasons of cost, complexity, and lack of this capability from the switch vendor.

One solution is provided by the Nortel Networks Centillion* 50/100 and System 5000BH/BHC LAN-ATM Switches. These switches provide Token-Ring and Ethernet end station connectivity on the one (desktop) side and NNI riser uplinks to the core ATM switches on the other (backbone) side. Because these NNI risers are PNNI uplinks, the LAN-to-ATM connectivity enjoys all the benefits of PNNI.

Gigabit Ethernet Link Aggregation

With Gigabit Ethernet, multiple physical links may be installed between two switches, or between a switch and a server, to provide greater bandwidth and resiliency. Typically, the IEEE 802.1d Spanning Tree Protocol (STP) is used to prevent loops forming between these parallel links, by blocking certain ports and forwarding on others so that there is only one path between any pair of source-destination end stations. In doing so, STP incurs some performance penalty when converging to a new spanning tree structure after a network topology change.

Although most switches are plug-and-play with default STP parameters, erroneous configuration of these parameters can lead to looping, which is difficult to resolve. In addition, by blocking certain ports, STP will allow only one link of several parallel links between a pair of switches to carry traffic. Hence, scalability of bandwidth between switches cannot be increased by adding more parallel links as required, although resiliency is thus improved.

To overcome the deficiencies of STP, various vendor-specific capabilities are offered to increase the resiliency, load distribution and balancing, and scalability in bandwidth for parallel links between Gigabit Ethernet switches. For example, the Nortel Networks Passport Campus Solution offers Multi-Link Trunking and Gigabit Ethernet LinkSafe:

• Multi-Link Trunking (MLT) allows up to four physical connections between two Passport 1000 Routing Switches, or
between a BayStack* 450 Ethernet Switch and a Passport 1000 Routing Switch, to be grouped together as a single logical link with much greater resiliency and bandwidth than is possible with several individual connections. Each MLT group may be made up of Ethernet, Fast Ethernet, or Gigabit Ethernet physical interfaces; all links within a group must be of the same media type (copper or fiber), have the same speed and half- or full-duplex settings, and belong to the same Spanning Tree group, although they need not be from the same interface module within a switch. Loads are automatically balanced across the MLT links, based on source and destination MAC addresses (bridged traffic) or source and destination IP addresses (routed traffic). Up to eight MLT groups may be configured in a Passport 1000 Routing Switch.

• Gigabit Ethernet LinkSafe provides two Gigabit Ethernet ports on a Passport 1000 Routing Switch interface module to connect to another similar module on another switch, with one port active and the other on standby, ready to take over automatically should the active port or link fail. LinkSafe is used for riser and backbone connections, with each link routed through separate physical paths to provide a high degree of resiliency and protection against a port or link failure.

An important capability is that virtual LANs (VLANs) distributed across multiple switches can be interconnected, with or without IEEE 802.1Q VLAN Tagging, using MLT and Gigabit Ethernet trunks.

With MLT and Gigabit Ethernet LinkSafe redundant trunking and link aggregation, the BayStack 450 Ethernet Switch and Passport 1000 Routing Switch provide a solution that is comparable to ATM PNNI in its resilience and incremental scalability, and is superior in its simplicity.

IEEE P802.3ad Link Aggregation

In recognition of the need for open standards and interoperability, Nortel Networks actively leads in the IEEE P802.3ad Link Aggregation Task Force, authorized by the IEEE 802.3 Trunking Study Group in June 1998, to define a link aggregation standard for use on switch-to-switch and switch-to-server parallel connections. This standard is currently targeted for availability in early 2000.

IEEE P802.3ad Link Aggregation is an important full-duplex, point-to-point technology for the core LAN infrastructure and provides several benefits:

• Greater bandwidth capacity, allowing parallel links between two switches, or a switch and a server, to be aggregated together as a single logical pipe with multi-Gigabit capacity (if necessary); traffic is automatically distributed and balanced over this pipe for high performance.

• Incremental bandwidth scalability, allowing more links to be added between two switches, or a switch and a server, only when needed for greater performance, from a minimal initial hardware investment, and with minimal disruption to the network.

• Greater resiliency and fault tolerance, where traffic is automatically reassigned to remaining operative links, thus maintaining communication if individual links between two switches, or a switch and a server, fail.

• Flexible and simple migration vehicle, where Ethernet and Fast Ethernet switches at the LAN edges can have multiple lower-speed links aggregated to provide higher-bandwidth transport into the Gigabit Ethernet core.

A brief description of the IEEE P802.3ad Link Aggregation standard (which may change as it is still fairly early in the standards process) follows.

A physical connection between two switches, or a switch and a server, is known as a link segment. Individual link segments of the same medium type and speed may make up a Link Aggregation Group (LAG), with a link segment belonging to only one LAG at any one time. Each LAG is associated with a single MAC address.
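The address-based load distribution described above for MLT, and the per-conversation distribution that IEEE P802.3ad formalizes, can be illustrated with a small sketch. The Python below is hypothetical and uses an arbitrary hash; the point is simply that every frame of a given source/destination pair is pinned to one physical link of the trunk, preserving frame order, while different conversations spread across the aggregate.

```python
import zlib

# Illustrative only: assign each conversation (here keyed by an address pair,
# MAC or IP) to one physical link of an aggregate by hashing the pair. All
# frames of that conversation then use the same link, so ordering is kept,
# while different conversations are spread across the trunk. No product is
# claimed to use this particular hash.

def pick_link(src_addr: str, dst_addr: str, num_links: int) -> int:
    key = f"{src_addr}->{dst_addr}".encode()
    return zlib.crc32(key) % num_links

trunk_links = 4  # e.g. an MLT group of four parallel Gigabit Ethernet links
conversations = [
    ("00:00:5e:00:53:01", "00:00:5e:00:53:10"),   # bridged traffic: MAC pair
    ("00:00:5e:00:53:02", "00:00:5e:00:53:10"),
    ("10.1.1.5", "10.2.2.9"),                     # routed traffic: IP pair
]
for src, dst in conversations:
    print(f"{src} -> {dst} uses link {pick_link(src, dst, trunk_links)}")
```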
Frames that belong logically together (for example, frames belonging to an application in use at a given instant, flowing in sequence between a pair of end stations) are treated as a conversation (similar to the concept of a "flow"). Individual conversations are aggregated together to form an Aggregated Conversation, according to user-specified Conversation Aggregation Rules, which may specify aggregation, for example, on the basis of source/destination address pairs, VLAN ID, IP subnet, or protocol type. Frames that are part of a given conversation are transmitted on a single link segment within a LAG to ensure in-sequence delivery.

A Link Aggregation Control Protocol is used to exchange link configuration, capability, and state information between adjacent switches, with the objective of forming LAGs dynamically. A Flush protocol, similar to that in ATM LAN Emulation, is used to flush frames in transit when links are added to or removed from a LAG.

Among the objectives of the IEEE P802.3ad standard are automatic configuration, low protocol overheads, rapid and deterministic convergence when link states change, and accommodation of aggregation-unaware links.

Integration of Layer 3 and Above Functions

Both ATM and Gigabit Ethernet provide the underlying internetwork over which IP packets are transported. Although initially a Layer 2 technology, ATM functionality is creeping upwards in the OSI Reference Model. ATM Private Network Node Interface (PNNI) provides signaling and OSPF-like best-route determination when setting up the path from a source to a destination end system. Multiprotocol Over ATM (MPOA) allows short-cut routes to be established between two communicating ATM end systems located in different IP subnets, completely bypassing intervening routers along the path.

In contrast, Gigabit Ethernet is strictly a Layer 2 technology, with much of the other needed functionality added above it. To a large extent, this separation of functions is an advantage because changes to one function do not disrupt another if there is clear modularity of functions. This decoupling was a key motivation in the original development of the 7-layer OSI Reference Model. In fact, the complexity of ATM may be due to its rich functionality all being provided in one hit, unlike the relative simplicity of Gigabit Ethernet, where higher layer functionality is kept separate from, and added one at a time to, the basic Physical and Data Link functions.

Technology Complexity and Cost

Two of the most critical criteria in the technology decision are the complexity and cost of that technology. In both aspects, simple and inexpensive Gigabit Ethernet wins hands down over complex and expensive ATM, at least in enterprise networks.

ATM is fairly complex because it is a connection-oriented technology that has to emulate the operation of connectionless LANs. As a result, additional physical and logical components, connections, and protocols have to be added, with the attendant need for understanding, configuration, and operational support. Unlike Gigabit Ethernet (which is largely plug-and-play), there is a steep learning curve associated with ATM, in product development as well as product usage. ATM also suffers from a greater number of interoperability and compatibility issues than does Gigabit Ethernet, because of the different options vendors implement in their ATM products. Although interoperability testing does improve the situation, it also adds time and cost to ATM product development.

Because of the greater complexity, the result is also greater costs in:

• Education and training
• Implementation and deployment
• Problem determination and resolution
• Ongoing operational support
• Test and analysis equipment, and other management tools.
MPOA and NHRP

A traditional router provides two basic Layer 3 functions: determining the best possible path to a destination using routing control protocols such as RIP and OSPF (this is known as the routing function), and then forwarding the frames over that path (this is known as the forwarding function). Multi-Protocol Over ATM (MPOA) enhances Layer 3 functionality over ATM in three ways:

• MPOA uses a Virtual Router model to provide greater performance scalability by allowing the typically centralized routing control function to be divorced from the data frame forwarding function, and distributing the data forwarding function to access switches on the periphery of the network. This separation of powers allows routing capability and forwarding capability to be distributed to where each is most effective, and allows each to be scaled when needed without interference from the other.

• MPOA enables paths (known as short-cut VCCs) to be directly established between a source and its destination, without the hop-by-hop, frame-by-frame processing and forwarding that is necessary in traditional router networks. Intervening routers, which are potentially performance bottlenecks, are completely bypassed, thereby enhancing forwarding performance.

• MPOA uses fewer resources in the form of VCCs. When traditional routers are used in an ATM network, one Data Direct VCC (DDVCC) must be established between a source end station and its gateway router, one DDVCC between a destination end station and its gateway router, and several DDVCCs between intervening routers along the path. With MPOA, only one DDVCC is needed between the source and destination end stations.

Gigabit Ethernet can also leverage a similar capability for IP traffic using the Next Hop Resolution Protocol (NHRP). In fact, MPOA uses NHRP as part of the process to resolve MPOA destination addresses. MPOA Resolution Requests are converted to NHRP Resolution Requests by the ingress MPOA server before forwarding the requests towards the intended destination. NHRP Resolution Responses received by the ingress MPOA server are converted to MPOA Resolution Responses before being forwarded to the requesting source. Just as MPOA shortcuts can be established for ATM networks, NHRP shortcuts can also be established to provide the same performance enhancement in a frame-switched network.

Gateway Redundancy

For routing between subnets in an ATM or Gigabit Ethernet network, end stations typically are configured with the static IP address of a Layer 3 default gateway router. Because this gateway is a single point of failure, sometimes with catastrophic consequences, various techniques have been deployed to ensure that an alternate backs up the default gateway when it fails.

With ATM, redundant and distributed Layer 3 gateways are currently vendor-specific. Even if a standard should emerge, it is likely that more logical components, protocols, and connections will need to be implemented to provide redundant and/or distributed gateway functionality.

Virtual Router Redundancy Protocol

For Gigabit Ethernet, the IETF RFC 2338 Virtual Router Redundancy Protocol (VRRP) is available for deploying interoperable and highly resilient default gateway routers. VRRP allows a group of routers to provide redundant and distributed gateway functions to end stations through the mechanism of a virtual IP address, the address that is configured in end stations as the default gateway router.

At any one time, the virtual IP address is mapped to one physical router, known as the Master. Should the Master fail, another router within the group is elected as the new Master with the same virtual IP address. The new Master automatically takes over as the new default gateway, without requiring configuration changes in the end stations. In addition, each router may be Master for a set of end stations in one subnet while providing backup functions for another, thus distributing the load across multiple routers.
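The VRRP behavior just described can be sketched in a few lines. The Python below is a hypothetical illustration of the election idea only; the timers, advertisement packets, and preemption rules defined in RFC 2338 are omitted, and the router names and addresses are invented.

```python
from dataclasses import dataclass

# Illustrative sketch of the VRRP idea: several routers back one virtual
# gateway address, and the highest-priority router still advertising acts as
# Master for that address, so end stations never change their configured
# default gateway. Protocol details of RFC 2338 are deliberately left out.

@dataclass
class VrrpRouter:
    name: str
    priority: int      # higher priority wins the election
    alive: bool = True

VIRTUAL_GATEWAY_IP = "192.168.1.1"   # the address end stations point at

def elect_master(routers):
    candidates = [r for r in routers if r.alive]
    return max(candidates, key=lambda r: r.priority) if candidates else None

group = [VrrpRouter("router-a", priority=200), VrrpRouter("router-b", priority=100)]

print("Master for", VIRTUAL_GATEWAY_IP, "is", elect_master(group).name)  # router-a
group[0].alive = False                        # router-a stops advertising
print("Master after failure is", elect_master(group).name)               # router-b
```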
LAN Integration

The requirements of the LAN are very different from those of the WAN. In the LAN, bandwidth is practically free once installed, as there are no ongoing usage costs. As long as sufficient bandwidth capacity is provisioned (or even over-provisioned) to meet the demand, there may not be a need for complex techniques to control bandwidth usage. If sufficient bandwidth exists to meet all demand, then complex traffic management and congestion control schemes may not be needed at all. For the user, other issues assume greater importance; these include ease of integration, manageability, flexibility (moves, adds, and changes), simplicity, scalability, and performance.

Seamless Integration

ATM has often been touted as the technology that provides seamless integration from the desktop, over the campus and enterprise, right through to the WAN and across the world. The same technology and protocols are used throughout. Deployment and ongoing operational support are much easier because of the opportunity to learn once, do many. One important assumption in this scenario is that ATM would be widely deployed at the desktops. This assumption does not meet with reality.

ATM deployment at the desktop is almost negligible, while Ethernet and Fast Ethernet are very widely installed in millions of desktop workstations and servers. In fact, many PC vendors include Ethernet, Fast Ethernet, and (increasingly) Gigabit Ethernet NIC cards on the motherboards of their workstation or server offerings. Given this huge installed base and the common technology that it evolved from, Gigabit Ethernet provides seamless integration from the desktops to the campus and enterprise backbone networks.

If ATM were to be deployed as the campus backbone for all the Ethernet desktops, then there would be a need for frame-to-cell and cell-to-frame conversion, that is, the Segmentation and Reassembly (SAR) overhead. With Gigabit Ethernet in the campus backbone and Ethernet to the desktops, no cell-to-frame or frame-to-cell conversion is needed. Not even frame-to-frame conversion is required from one form of Ethernet to another! Hence, Gigabit Ethernet provides a more seamless integration in the LAN environment.

Broadcast and Multicast

Broadcasts and multicasts are very natural means of sending traffic from one source to multiple recipients in a connectionless LAN. Gigabit Ethernet is designed for just such an environment. The higher-layer IP multicast address is easily mapped to a hardware MAC address. Using the Internet Group Management Protocol (IGMP), receiving end stations report group membership to (and respond to queries from) a multicast router, so as to receive multicast traffic from networks beyond the local attachment. Source end stations need not belong to a multicast group in order to send to members of that group.

By contrast, broadcasts and multicasts in an ATM LAN present a few challenges because of the connection-oriented nature of ATM. In each emulated LAN (ELAN), ATM needs the services of a LAN Emulation Server (LES) and a Broadcast and Unknown Server (BUS) to translate from MAC addresses to ATM addresses. These additional components require additional resources and complexity to signal, set up, maintain, and tear down Control Direct, Control Distribute, Multicast Send, and Multicast Forward VCCs. Complexity is further increased because an ELAN can only have a single LES/BUS, which must be backed up by another LES/BUS to eliminate any single points of failure. Communication between active and backup LES/BUS nodes requires more virtual connections and protocols for synchronization, failure detection, and takeover (SCSP and LNNI). With all broadcast traffic going through the BUS, the BUS poses a potential bottleneck.
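By way of contrast with the ATM machinery above, the mapping that lets Gigabit Ethernet carry IP multicast natively, mentioned at the start of this section, is simple enough to show in a few lines. The hypothetical Python sketch below applies the standard IPv4 rule of copying the low-order 23 bits of the group address into the 01-00-5E MAC prefix; it is included purely for illustration.

```python
import ipaddress

# Illustrative sketch: an IPv4 multicast group address maps onto an Ethernet
# multicast MAC address by placing the low-order 23 bits of the IP address
# into the fixed 01-00-5E prefix. No servers or connections are needed, which
# is why a frame-based LAN handles multicast so naturally.

def multicast_ip_to_mac(group: str) -> str:
    ip = ipaddress.IPv4Address(group)
    if not ip.is_multicast:
        raise ValueError(f"{group} is not an IPv4 multicast address")
    low23 = int(ip) & 0x7FFFFF                     # keep the bottom 23 bits
    mac = (0x01005E << 24) | low23                 # 01-00-5E prefix + 23 bits
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

print(multicast_ip_to_mac("224.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_ip_to_mac("239.129.1.1"))  # 01:00:5e:01:01:01 (addresses overlap 32:1)
```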
For IP multicasting in a LANE network, ATM needs the services of the BUS and, if available (with LUNI v2), an SMS. For IP multicasting in a Classical IP over ATM network, ATM needs the services of a Multicast Address Resolution Server (MARS), a Multicast Connection Server (MCS), and the Cluster Control VCCs. These components require additional resources and complexity for connection signaling, setup, maintenance, and tear-down.

With UNI 3.0/3.1, the source must first resolve the target multicast address to the ATM addresses of the group members, and then construct a point-to-multipoint tree, with the source itself as the root to the multiple destinations, before multicast traffic may be distributed. With UNI 4.0, end stations may join as leaves to a point-to-multipoint distribution tree, with or without intervention from the root. Issues of interoperability between the different UNI versions are raised in either case.

Multi-LAN Integration

As a backbone technology, ATM can interconnect physical LAN segments using Ethernet, Fast Ethernet, Gigabit Ethernet, and Token Ring. These are the main MAC layer protocols in use on campus networks today. Using ATM as the common uplink technology and with translational bridging functionality, the Ethernet and Token-Ring LANs can interoperate relatively easily. With Gigabit Ethernet, interoperation between Ethernet and Token-Ring LANs requires translational bridges that transform the frame format of one type to the other.

MAN/WAN Integration

It is relatively easy to interconnect ATM campus backbones across the MAN or WAN. Most ATM switches are offered with DS1/E1, DS3/E3, SONET OC-3c/SDH STM-1, and SONET OC-12c/SDH STM-4 ATM interfaces that connect directly to the ATM MAN or WAN facilities. Some switches are offered with DS1/E1 Circuit Emulation, DS1/E1 Inverse Multiplexing over ATM, and Frame Relay Network and Service Interworking capabilities that connect to existing non-ATM MAN or WAN facilities. All these interfaces allow ATM campus switches direct connections to the MAN or WAN, without the need for additional devices at the LAN-WAN edge.

At this time, many Gigabit Ethernet switches do not offer MAN/WAN interfaces. Connecting Gigabit Ethernet campus networks across the MAN or WAN typically requires the use of additional devices to access MAN/WAN facilities such as Frame Relay, leased lines, and even ATM networks. These interconnect devices are typically routers or other multiservice switches that add to the total complexity and cost. With the rapid acceptance of Gigabit Ethernet as the campus backbone of choice, however, many vendors are now offering MAN/WAN interfaces such as ATM, SONET OC-3c/SDH STM-1, SONET OC-12c/SDH STM-4, and Packet-over-SONET/SDH in their Gigabit Ethernet switches.

While an ATM LAN does offer seamless integration with the ATM MAN or WAN through direct connectivity, the MAN/WAN for the most part will continue to be heterogeneous, not homogeneous ATM. This is due to the installed non-ATM equipment, geographical coverage, and the time needed to change. This situation will persist more so than in the LAN, where there is greater control by the enterprise and, therefore, greater ease of convergence. Even in the LAN, the convergence is towards Ethernet, and not ATM, as the underlying technology. Technologies other than ATM will be needed for interconnecting locations, and even entire regions, because of difficult geographical terrain or uneconomic reach. Thus, there will continue to be a need for technology conversion from the LAN to the WAN, except where ATM has been implemented.
While all these technologies are evolving, businesses seek to minimize risks by investing in the lower-cost Gigabit Ethernet, rather than the higher-cost ATM.

Another development, the widespread deployment of fiber optic technology, may enable the LAN to be extended over the WAN using the seemingly boundless optical bandwidth for LAN traffic. This means that Gigabit Ethernet campuses can be extended across the WAN just as easily, perhaps even more easily and with less cost, than ATM over the WAN. Among the possibilities are access to dark fiber with long-haul, extended-distance Gigabit Ethernet (50 km or more), Packet-over-SONET/SDH, and IP over Optical Dense Wave Division Multiplexing.

One simple yet powerful way of extending high-performance Gigabit Ethernet campus networks across the WAN, especially in the metropolitan area, is the use of Packet-over-SONET/SDH (POS, also known as IP over SONET/SDH). SONET is emerging as a competitive service to ATM over the MAN/WAN. With POS, IP packets are directly encapsulated into SONET frames, thereby eliminating the additional overhead of the ATM layer (see column C in Figure 13). To extend this a step further, IP packets can be transported over raw fiber without the overhead of SONET/SDH framing; this is called IP over Optical (see column D in Figure 13). Optical networking can transport very high volumes of data, voice, and video traffic over different light wavelengths.

Figure 13: Interconnection Technologies over the MAN/WAN.
(A) B-ISDN:            IP / ATM / SONET/SDH / Optical
(B) IP over ATM:       IP / ATM / Optical
(C) IP over SONET/SDH: IP / SONET/SDH / Optical
(D) IP over Optical:   IP / Optical

The pattern of traffic has also been rapidly changing, with more than 80 percent of the network traffic expected to traverse the MAN/WAN, versus only 20 percent remaining on the local campus. Given the changing pattern of traffic, and the emergence of IP as the dominant network protocol, the total elimination of layers of communication for IP over the MAN/WAN means reduced bandwidth usage costs and greater application performance for the users.
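The bandwidth cost of those extra layers can be approximated with a quick calculation. The hypothetical Python sketch below compares ATM overhead (an 8-byte AAL5 trailer, padding to whole 48-byte cell payloads, and a 5-byte header on every 53-byte cell) with a POS-style encapsulation that adds a small, flat per-packet overhead; the 9-byte POS figure is an assumption chosen for illustration, not a measured value.

```python
import math

# Rough, illustrative comparison of bytes on the wire for IP over ATM (AAL5)
# versus a POS-style encapsulation, using the well-known 53-byte ATM cell
# format (48 bytes of payload plus a 5-byte header) and an assumed flat
# framing overhead for POS.

def atm_bytes_on_wire(ip_packet_len: int) -> int:
    aal5_len = ip_packet_len + 8             # payload plus AAL5 trailer
    cells = math.ceil(aal5_len / 48)         # pad up to whole cell payloads
    return cells * 53                        # every cell adds a 5-byte header

def pos_bytes_on_wire(ip_packet_len: int, framing_overhead: int = 9) -> int:
    return ip_packet_len + framing_overhead  # assumed flat per-packet overhead

for size in (64, 576, 1500):                 # typical IP packet sizes
    atm, pos = atm_bytes_on_wire(size), pos_bytes_on_wire(size)
    print(f"{size:>5}-byte packet: ATM {atm} bytes ({atm / size:.2f}x), "
          f"POS {pos} bytes ({pos / size:.2f}x)")
```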
Management Aspects

Because businesses need to be increasingly dynamic to respond to opportunities and challenges, the campus networking environment is constantly in a state of flux. There are continual moves, adds, and changes; users and workstations form and re-form workgroups; road warriors take the battle to the streets; and highly mobile users work from homes and hotels to increase productivity.

With all these constant changes, manageability of the campus network is a very important selection criterion. The more homogeneous and simpler the network elements are, the easier they are to manage. Given the ubiquity of Ethernet and Fast Ethernet, Gigabit Ethernet presents a more seamless integration with existing network elements than ATM. Therefore, Gigabit Ethernet is easier to manage. Gigabit Ethernet is also easier to manage because of its innate simplicity and the wealth of experience and tools available with its predecessor technologies.
By contrast, ATM is significantly different from the predominant Ethernet desktops it interconnects. Because of this difference and its relative newness, there are few tools and skills available to manage ATM network elements. ATM is also more difficult to manage because of the complexity of logical components and connections, and the multitude of protocols needed to make ATM workable. On top of the physical network topology lie a number of logical layers, such as PNNI, LUNI, LNNI, MPOA, QoS, signaling, SVCs, PVCs, and soft PVCs. Logical components are more difficult to troubleshoot than physical elements when problems do occur.

Standards and Interoperability

Like all technologies, ATM and Gigabit Ethernet standards and functions mature and stabilize over time. Evolved from a common technology, frame-based Gigabit Ethernet backbones interoperate seamlessly with the millions of connectionless, frame-based Ethernet and Fast Ethernet desktops and servers in today's enterprise campus networks. By contrast, connection-oriented, cell-based ATM backbones need additional functions and capabilities that require standardization, and can easily lead to interoperability issues.

ATM Standards

Although relatively new, ATM standards have been in development since 1984 as part of B-ISDN, designed to support private and public networks. Since the formation of the ATM Forum in 1991, many ATM specifications were completed, especially between 1993 and 1996. Because of the fast pace of development efforts during this period, a stable environment was felt to be needed for consolidation, implementation, and interoperability. In April 1996, the Anchorage Accord agreed on a collection of some 60 ATM Forum specifications that provided a basis for stable implementation. Besides designating a set of foundational and expanded feature specifications, the Accord also established criteria to ensure interoperability of ATM products and services between current and future specifications. This Accord provided the assurance needed for the adoption of ATM and a checkpoint for further standards development. As of July 1999, there are more than 40 ATM Forum specifications in various stages of development.

To promote interoperability, the ATM Consortium was formed in October 1993, one of several consortiums at the University of New Hampshire InterOperability Lab (IOL). The ATM Consortium is a grouping of ATM product vendors interested in testing interoperability and conformance of their ATM products in a cooperative atmosphere, without adverse competitive publicity.

Gigabit Ethernet Standards

By contrast, Gigabit Ethernet has evolved from the tried and trusted Ethernet and Fast Ethernet technologies, which have been in use for more than 20 years. Being relatively simple compared to ATM, much of the development was completed within a relatively short time. The Gigabit Ethernet Alliance, a group of networking vendors including Nortel Networks, promotes the development, demonstration, and interoperability of Gigabit Ethernet standards. Since its formation in 1996, the Alliance has been very successful in helping to introduce the IEEE 802.3z 1000BASE-X and IEEE 802.3ab 1000BASE-T Gigabit Ethernet standards.

Similar to the ATM Consortium, the Gigabit Ethernet Consortium was formed in April 1997 at the University of New Hampshire InterOperability Lab as a cooperative effort among Gigabit Ethernet product vendors. The objective of the Gigabit Ethernet Consortium is the ongoing testing of Gigabit Ethernet products and software from both an interoperability and conformance perspective.
Passport Campus Solution

In response to the market requirements and demand for Ethernet, Fast Ethernet, and Gigabit Ethernet, Nortel Networks offers the Passport Campus Solution as the best-of-breed technology for campus access and backbone LANs. The Passport Campus Solution (see Figure 14) comprises the Passport 8000 Enterprise Switch, with its edge switching and routing capabilities, the Passport 1000 Routing Switch family, the Passport 700 Server Switch family, and the BayStack 450 Stackable Switch, complemented by Optivity Policy Services for policy-enabled networking.

The following highlights key features of the Passport 8000 Enterprise Switch, the winner of the Best Network Hardware award from the 1999 Annual SI Impact Awards, sponsored by IDG's Solutions Integrator magazine:

• High port density, scalability, and performance
• Switch capacity of 50 Gbps, scalable to 256 Gbps
• Aggregate throughput of 3 million packets per second
• Less than 9 microseconds of latency
• Up to 372 Ethernet 10/100BASE-T auto-sensing, auto-negotiating ports
• Up to 160 Fast Ethernet 100BASE-FX ports
• Up to 64 Gigabit Ethernet 1000BASE-SX or -LX ports
• Wirespeed switching for Ethernet, Fast Ethernet, and Gigabit Ethernet
• High resiliency through Gigabit LinkSafe and Multi-Link Trunking
• High availability through fully distributed switching and management architectures, redundant and load-sharing power supplies and cooling fans, and the ability to hot-swap all modules
• Rich functionality through support of:
  • Port- and protocol-based VLANs for broadcast containment, logical workgroups, and easy moves, adds, and changes
  • IEEE 802.1Q VLAN Tagging for carrying traffic from multiple VLANs over a single trunk
  • IEEE 802.1p traffic prioritization for key business applications
  • IGMP, broadcast, and multicast rate limiting for efficient broadcast containment
  • Spanning Tree Protocol FastStart for faster network convergence and recovery
  • Remote Network Monitoring (RMON), port mirroring, and Remote Traffic Monitoring (RTM) for network management and problem determination.

Figure 14: Passport Campus Solution and Optivity Policy Services. (Diagram: a Passport 8000 Enterprise Switch backbone with Optivity Policy Services and management, Passport 1000 Routing Switches, a Passport 700 Server Switch serving a server farm with server redundancy and load balancing, a BayStack 450 Ethernet Switch, Centillion 100 and System 5000BH Multi-LAN Switches, a System 390 mainframe server, and a BN Router providing redundant gateways to the WAN via OSPF; resiliency is provided by 10/100 Ethernet MLT and Gigabit Ethernet LinkSafe, and policy is applied through Common Open Policy Services, Differentiated Services, IP Precedence/Type of Service, IEEE 802.1Q VLAN Tags, IEEE 802.1p User Priority, and Express Classification across voice, video, and data traffic.)
For users with investments in Centillion 50/100 and System 5000BH LAN-ATM Switches, evolution to a Gigabit Ethernet environment will be possible once Gigabit Ethernet switch modules are offered in the future.

Information on the other award-winning members of the Passport Campus Solution is available on the Nortel Networks website: http://www.nortelnetworks.com

Conclusion and Recommendation

In enterprise networks, either ATM or Gigabit Ethernet may be deployed in the campus backbone. The key difference is in the complexity and much higher cost of ATM, versus the simplicity and much lower cost of Gigabit Ethernet. While it may be argued that ATM is richer in functionality, pure technical consideration is only one of the decision criteria, albeit a very important one.

Of utmost importance is functionality that meets today's immediate needs at a price that is realistic. There is no point in paying for more functionality and complexity than is necessary, that may or may not be needed, and may even be obsolete in the future. The rate of technology change and competitive pressures demand that the solution be available today, before the next paradigm shift, and before new solutions introduce another set of completely new challenges.

Gigabit Ethernet provides a pragmatic, viable, and relatively inexpensive (and therefore, lower risk) campus backbone solution that meets today's needs and integrates seamlessly with the omnipresence of connectionless, frame-based Ethernet and Fast Ethernet LANs. Enhanced by routing switch technology such as the Nortel Networks Passport 8000 Enterprise Switches, and policy-enabled networking capabilities in the Nortel Networks Optivity Policy Services, Gigabit Ethernet provides enterprise businesses with the bandwidth, functionality, scalability, and performance they need, at a much lower cost than ATM.

By contrast, ATM provides a campus backbone solution that has the disadvantages of undue complexity, unused functionality, and much higher cost of ownership in the enterprise LAN. Much of the complexity results from the multitude of additional components, protocols, control, and data connections required by connection-oriented, cell-based ATM to emulate broadcast-centric, connectionless, frame-based LANs. While Quality of Service (QoS) is an increasingly important requirement in enterprise networks, there are other solutions to the problem that are simpler, incremental, and less expensive.

For these reasons, Nortel Networks recommends Gigabit Ethernet as the technology of choice for most campus backbone LANs. ATM was, and continues to be, a good option where its unique and complex functionality can be exploited by its deployment, for example, in the metropolitan and wide area network. This recommendation is supported by many market research surveys that show users overwhelmingly favor Gigabit Ethernet over ATM, including User Plans for High Performance LANs by Infonetics Research Inc. (March 1999), and Hub and Switch 5-Year Forecast by the Dell'Oro Group (July 1999).
For more sales and product information, please call 1-800-822-9638.
Author: Tony Tan, Portfolio Marketing, Commercial Marketing
United States
Nortel Networks
4401 Great America Parkway
Santa Clara, CA 95054
1-800-822-9638

Canada
Nortel Networks
8200 Dixie Road
Brampton, Ontario
L6T 5P6, Canada
1-800-466-7835

Europe, Middle East, and Africa
Nortel Networks
Les Cyclades - Immeuble Naxos
25 Allée Pierre Ziller
06560 Valbonne, France
33-4-92-96-69-66

Asia Pacific
Nortel Networks
151 Lorong Chuan
#02-01 New Tech Park
Singapore 556741
65-287-2877

Caribbean and Latin America
Nortel Networks
1500 Concord Terrace
Sunrise, Florida
33323-2815 U.S.A.
954-851-8000
http://www.nortelnetworks.com
*Nortel Networks, the Nortel Networks logo, the Globemark, How the World Shares Ideas, Unified Networks, BayStack, Centillion, Optivity,
and Passport are trademarks of Nortel Networks. All other trademarks are the property of their owners.
© 2000 Nortel Networks. All rights reserved. Information in this document is subject to change without notice.
Nortel Networks assumes no responsibility for any errors that may appear in this document. Printed in USA.
WP3740-B / 04-00