Experimental Characterization of QoS in Commercial Ethernet Switches
for Statistically Bounded Latency in Aircraft Networks

A. Jacobs, J. Wernicke, S. Oral, B. Gordon, A. George
{jacobs, wernicke, oral, gordon, george}@hcs.ufl.edu

High-performance Computing and Simulation (HCS) Research Lab, Department of Electrical and Computer Engineering,
University of Florida, P.O. Box 116200, Gainesville, Florida 32611, USA


Abstract: Aircraft networks are used to service mission-critical
avionics systems as well as cabin systems such as in-flight
entertainment. These networks require that the switches used
offer line-rate switching as well as bounded latency and jitter.
Gigabit Ethernet offers an interesting replacement to traditional
proprietary networks because of its high performance and low
cost. In this paper we develop a framework for analyzing the
abilities of Gigabit Ethernet switches to provide probabilistic
guarantees for reliably low latency using Quality of Service
(QoS) controls. From the perspective of control capabilities,
management complexity, and implementation success we
compare the strengths and weaknesses of three modern
Ethernet switches. These switches exhibit a broad range of
configuration options and are representative of the different
levels of QoS implementation available today. We use the
performance metrics of latency, jitter, and packet loss to
characterize the success of a service policy. The results from a
variety of network, load, and traffic scenarios are presented in
terms of these metrics. Through the results, we are able to gain
insight into the individual switch implementations of QoS.
1. INTRODUCTION
Key applications for future cockpit and cabin avionics
systems promise to drive the performance and reliability
requirements of their integrated networking infrastructure to
ever-increasing levels. Avionics networks are used to
connect mission-critical systems, and therefore have strict
performance requirements [1]. The switches used in these
networks must be able to operate at line-rate while providing
bounded latency and jitter, as well as offer a robust QoS
scheme. Specifically, reliably low latencies are essential to
handle the growing amounts of critical data. The need for
higher bandwidth combined with the importance of cost
makes commercial off-the-shelf (COTS) networking
technologies desirable. Among the most promising network
protocols and technologies to provide these capabilities are
those of Ethernet. The economy of scale advantages of
Ethernet have led to low equipment costs and ever-increasing
line rates. While Gigabit Ethernet is currently under
consideration, 10-Gigabit and 40-Gigabit Ethernet are also
possible future choices. As the link rates increase, new
switches and network interface cards will be required, but the
network will remain interoperable with older equipment due
to protocol backward compatibility.
The standard implementation of Ethernet makes no
provision for treating individual traffic groups in distinct
ways. Using QoS, Ethernet switches can identify certain


types of traffic and perform actions on the traffic if necessary.
The development of Gigabit Ethernet has allowed
significantly increased levels of performance with much
higher data rates compared to the previous generation of
Ethernet and an increasing degree of support for QoS.
However, most QoS mechanisms have been developed to
assure bandwidth and reduce packet loss instead of explicitly
providing reliably low latency [2-3]. Further, the choice of
implementation algorithms and management features changes
the granularity and success of the quality of service
implementation. Although committed bandwidth and low
packet loss are important, avionics networks also require
consistently low latency. Mission-critical systems such as
navigation may require low latency, while other systems such
as in-flight entertainment may require high bandwidth.
Our study analyzes three QoS-enabled Ethernet switches
from the perspective of control capabilities, management
complexity, and service implementation success. These
switches were selected because they represent a range of
prices and supported QoS abilities. The control capabilities
and management complexity were examined qualitatively.
The effectiveness of QoS controls is analyzed statistically in
a variety of specific transmission scenarios. Combining these
three perspectives, conclusions about the ability of modern
COTS Ethernet switches to deliver desired performance
guarantees for critical traffic, especially in an avionics
setting, will be drawn.
The BayStack 5510-48T from Nortel Networks is an
inexpensive 48-port Gigabit Ethernet switch. The switch
supports several QoS features including 802.1p user priority
and DiffServ. The BayStack can also be stacked to provide
up to 384 ports of Gigabit Ethernet.
The Catalyst 2970 from Cisco Systems offers 24 ports of
inexpensive Gigabit Ethernet. This switch represents the
entry-level Gigabit Ethernet switch from Cisco. It has
support for all of the QoS features that will be tested in this
study. The Catalyst also implements the shaped round-robin
(SRR) queuing method, a modified form of weighted round-
robin (WRR) queuing that most switches implement.
The E300 switch from Force10 Networks is the most
versatile switch that was tested. The switch has 6 removable
line cards that can be replaced to upgrade the network
hardware. The switch has 400 Gbps of non-blocking
backplane bandwidth, and both 802.1p priority and DiffServ
are supported. This switch is designed to be used in high-
bandwidth, mission-critical systems such as internet
exchanges or campus backbones.
The organization of the paper is as follows. Section 2 is an
overview of QoS terminology and standards for Ethernet.
Section 3 describes related work in QoS guarantees. Section
4 provides an overview of our experimental framework and
procedure. Section 5 presents the results and analysis from
our experiments. Finally, Section 6 contains the conclusions
drawn from the study as well as future work.
2. BACKGROUND
Typically, Ethernet provides no performance guarantees
and therefore operates on a best-effort basis. Although QoS
can have many definitions, our key interest is in the study of
performance-centric network QoS [2]. That is, this paper is
interested in the ability of a network to provide specific
performance guarantees. In fact, the primary focus of this
study is to examine the ability of a network to provide
reliably low latency to critical data.
In a physical (non-theoretical) network, performance
guarantees are traditionally specified probabilistically [2]. In
order to accomplish this task, the metrics of latency, jitter,
and packet loss are used. Jitter is defined as the standard
deviation of measured latencies.
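As a concrete illustration (not part of the original measurement harness; the function name and sample data below are hypothetical), these three metrics can be computed from per-packet measurements as follows, with jitter taken as the standard deviation of the latencies of packets that actually arrived:

from statistics import mean, pstdev

def summarize(latencies_us, packets_sent):
    """Summarize one stream: mean latency, jitter (standard deviation of
    latency), and packet loss, given per-packet latencies in microseconds
    for the packets that were successfully received."""
    received = len(latencies_us)
    return {
        "mean_latency_us": mean(latencies_us),
        "jitter_us": pstdev(latencies_us),             # jitter = std. dev. of latency
        "packet_loss": 1.0 - received / packets_sent,  # dropped packets carry no latency
    }

# Hypothetical example: 6 of 8 transmitted packets were received.
print(summarize([21.3, 20.9, 22.1, 25.4, 21.0, 23.7], packets_sent=8))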
In QoS, control mechanisms are specified and
implemented to provide performance guarantees. In general,
QoS mechanisms are categorized according to several key
classifications: time scale, granularity, control carrier, and
location of control [2]. QoS can work on multiple time
scales. A switch can implement QoS on a per-packet basis,
or QoS could be provided on the round-trip time scale by
providing flow control. The granularity of QoS can be used
to assign priorities based upon varying levels of information.
QoS could be classified using a low-layer MAC address or
using higher-layer information such as a destination port for a
TCP packet. Finally, the control can be stored by a switch, or
the information can be embedded inside of a packet header.
This paper focuses on packet-level QoS: mechanisms such
as classifiers, markers, and shapers that improve packet
transfer performance. Classifiers are elements of a system
that determine what level of service should be given to a
specific packet. Markers use the results of the classifier to
mark the packet header permanently to pass the classification
to the next switch. Finally, shapers moderate the packet to
provide the proper level of service inside the switch. Fig. 1
shows the process that a switch uses to implement QoS.
There are two general levels of granularity of the control
state of a QoS control mechanism: per-flow and aggregate. A
per-flow control state provides different service for each flow.
A flow is defined by an IP source and destination address,
source and destination port, and protocol field. Aggregate
control states combine several flows together and provide
each flow in the group equal service to match the desired
group profile [4-5]. Thus, the level of flow aggregation
greatly affects the fidelity and complexity of the QoS
implementation.


Fig. 1. Classification, marking, and shaping. (Classifier: the packet is
identified using header parameters such as the IP source/destination
address and is assigned a traffic class. Marker: the packet is marked
with its assigned traffic class in the Layer-2 802.1p tag or in the IP
ToS field. Shaper: traffic classes are serviced according to their
relative priorities.)
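To make the classifier and marker roles concrete, the following sketch (our illustration only; the flows, policy table, and priority-to-DSCP mapping are hypothetical and not taken from any of the switches studied) classifies a packet by its 5-tuple flow and produces the corresponding 802.1p and DiffServ markings:

from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """Per-flow key: the 5-tuple used above to define a flow."""
    src_ip: str
    dst_ip: str
    src_port: int
    dst_port: int
    protocol: int  # e.g. 6 = TCP, 17 = UDP

# Hypothetical aggregate policy: individual flows are folded into a small
# number of traffic classes (802.1p priorities 0-7).
POLICY = {
    Flow("10.0.0.1", "10.0.0.100", 5000, 6000, 17): 7,  # platinum
    Flow("10.0.0.2", "10.0.0.100", 5001, 6001, 17): 6,  # gold
}

def classify(flow: Flow) -> int:
    """Classifier: choose a traffic class, defaulting to best effort (0)."""
    return POLICY.get(flow, 0)

def mark(priority: int) -> tuple[int, int]:
    """Marker: encode the class as a 3-bit 802.1p value and an equivalent
    DSCP value (here simply priority * 8, one common class-selector mapping)."""
    return priority & 0x7, (priority << 3) & 0x3F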
The final category of QoS classification is the location of
the control. Almost all current Ethernet QoS
implementations use the switch to store control state
information [3, 6]. For this study, the switch is the sole
location of control.
A number of Ethernet QoS standards have been developed
to provide sophisticated control over switching performance.
First, a relatively simple system, the IEEE 802.1p standard,
was created as part of IEEE 802.1D [6]. 802.1p uses three
bits from the Layer-2 tag to differentiate 8 levels of service.
Therefore, in a complex network many different types of
traffic must be aggregated into a single 802.1p service class.
The differentiated services (DiffServ) standard extends the
three-bit 802.1p marking to six bits to provide 64 different
classes of service [3]. DiffServ also specifies certain per-
hop behaviors that can be implemented to assure service to
each class. While the Class of Service field for an 802.1p
packet is part of the MAC header, the DiffServ classification
uses the Type of Service (ToS) field located in the IP header.
DiffServ stores the QoS information inside of the packet
header, allowing QoS packets to pass through network
components that do not implement QoS.
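Since both markings occupy only a few bits in fixed header positions, they are easy to show directly. The sketch below (ours, for illustration) extracts the 3-bit 802.1p priority from the top of the 16-bit 802.1Q tag control field and the 6-bit DSCP from the top of the IPv4 ToS byte:

def pcp_from_tci(tci: int) -> int:
    """802.1p user priority: the top 3 bits of the 16-bit 802.1Q tag control field."""
    return (tci >> 13) & 0x7

def dscp_from_tos(tos: int) -> int:
    """DiffServ code point: the top 6 bits of the IPv4 ToS byte."""
    return (tos >> 2) & 0x3F

assert pcp_from_tci(0xE001) == 7        # priority 7, VLAN ID 1
assert dscp_from_tos(0b10111000) == 46  # DSCP 46 (expedited forwarding)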
In general, most switches implement both 802.1p and
DiffServ and treat traffic of both types in a consistent
manner. Most switches implement some variation of a
priority queuing system to provide quality of service control.
Ingress and egress queues are serviced according to internal
priority mappings and queuing schemes. The priority
mapping is used to map priority bits from any supported type
of QoS to the proper destination queue for an incoming
packet. All three of the switches analyzed in this study
follow this pattern. Fig. 2 is a graphical representation of the
generic implementation of switch-based QoS. Traffic is
mapped into the appropriate priority queue and then shaped
based on the priority scheme to empty the queues.
Fig. 2. Generic packet-based QoS prioritization.

3. RELATED WORK
A number of recent studies have explored QoS guarantees.
Most of these papers concentrate on absolute guarantees in
theoretical networks, but a few also discuss probabilistic
guarantees in a real network. A. Jarraya et al. developed an
integrated services protocol in [7] and evaluated the
importance of resource allocation and scheduling. Using a
reservation-based protocol, their study measures the effects of
different scheduling protocols, such as weighted fair queuing
or strict priority queuing, on different types of traffic.
Several studies have also focused on using simulation or
experimental analysis on performance guarantees. V. Laatu,
et al., experimentally analyzed the effects of a specific
DiffServ mechanism on flows of similar priorities using
latency and throughput as metrics [8]. Certain types of traffic
were found to be more sensitive to the QoS policies than
others. C. Bouras, et al., used estimation to provide
theoretical performance guarantees and then used simulative
results to assess the accuracy of their predictions [9]. In [10],
V. Firoiu, et al., provided a framework for evaluating traffic
engineering using modeling and then validated the model
using simulation.
The use and development of performance metrics is a
considerably large area of study. T. Chahed discussed in
more detail performance metrics in [11]. Also, for a more
complete study, refer to the Internet Protocol Performance
Metrics (IPPM) RFCs. A good starting point is a
"Framework for IP Performance Metrics" [12].
Previous work [1] compared several Gigabit Ethernet
switches using latency and jitter as metrics. It was found that
best-effort service between under-subscribed nodes exhibits
low latency and line-rate switching. A very simple priority
system was introduced, but latency was only measured on a
single switch.
4. EXPERIMENTAL FRAMEWORK
In order to evaluate the capability of a switch to provide
probabilistic performance guarantees, both the management
features and performance of each switch are evaluated. The
fidelity and complexity of control mechanisms can vary
considerably from switch to switch. For each switch, a short
description of the management capabilities and the
granularity of control is provided in Section 5. In particular,
we evaluate to what degree the switch can be configured to
match our chosen scenarios and then generalize our
conclusions to how well the switch configuration can solve
any arbitrary QoS problem.
In our experiments we concentrate on the goal of
evaluating the ability of COTS Ethernet switches to provide
reliably low latency for high-priority streams in a congested
network. Specifically, our experiments attempt to develop a
better understanding of under what conditions a switch can
provide the requisite performance of statistically low latency
and jitter. The metrics of mean latency and jitter will be
critical to identifying expectations of performance.
Furthermore, we use the variation of measured mean
latencies to analyze the reliability of the observed results.
Fig. 3 shows the general setup for the experimental case
studies in this paper. Several key many-to-one contention
scenarios are used to study the ability of QoS controls in the
switches to provide performance guarantees to specific flows.

Fig. 3. Conceptual diagram of case studies.
To analyze latency and jitter at the packet level, an Ixia
400T Traffic generator with 12 Gigabit Ethernet ports was
used [4]. The Ixia 400T is capable of measuring received
latency to the nearest 20 nanoseconds. The Ixia chassis and
ports were configured using a custom Tool Control Language
(Tcl) script designed to accurately measure real-time latency
and packet loss. This approach also permitted the automation
of tests to facilitate data collection. The latency was
measured from the time the first bit of the packet leaves the
Ixia transmission port until the time that same bit reaches the
Ixia receive port.
These scenarios are abstract models of more complex real
QoS problems where many different flows compete for a
limited number of resources. In the first two cases, the traffic
is formed in a many-to-one configuration where seven source
nodes are simultaneously sending traffic at some prescribed
load and priority to a single destination node.
First, each switch under test was configured to give all
flows best-effort service to provide a control set for our later
measurements. This approach provides a baseline so that the
results with QoS enabled can be put into a proper context.
This test case will be referred to as the best-effort case.
For consistency, streams with an 802.1p priority of 7 will
be referred to as platinum streams for the remainder of this
paper. Streams with a priority of 6 will be called gold
streams. Streams with a priority of 5 will be called silver
streams. Streams with a priority of zero will be referred to as
best-effort streams.
In the next case, a single platinum stream was given
highest priority while leaving the other six streams at lowest
priority. The 802.1p priority was set by the switch based on
the source address of the incoming packets. The platinum
stream models a single flow of critical data competing with
less critical data for access to an egress port. This test case
will be referred to as the single platinum stream case.
In the second two cases, a larger scale experiment was
conducted using 24 ports. For this experiment, only the
BayStack 5510 and Catalyst 2970 were used. The 24-port
test was designed using information about different traffic
patterns in avionics networks included in a previous report to
Rockwell Collins [13]. There are two high-bandwidth
streams which are given platinum priority. In addition, there
are four gold streams which transmit at a much lower rate.
There are also five silver streams that transmit at an even
lower rate. Twelve PCs are used to generate additional traffic with
best-effort priority. These computers are separated into two
groups of six. Each computer in a group communicates to
another computer in the same group, and one computer
transmits to the receive port on the Ixia traffic generator.
The test was conducted using two different sets of traffic
patterns. The Average Traffic scenario is meant to show the
network during normal conditions. The receive port on the
Ixia chassis is approximately 90% utilized. In this case, the
two platinum streams each transmit at 250 Mbps, the four
gold streams each transmit at 20 Mbps, and the five silver
streams each transmit at 10 Mbps. PCs are used to generate
an additional twelve ports of best-effort traffic, transmitting
at 140 Mbps each.
In the Peak Traffic scenario, the line rates of all streams are
increased. This is meant to simulate points in time when
there is an unexpected jump in the amount of data being
transferred. The Peak Traffic scenario causes the receive port
to be over-utilized, but the high-priority traffic is less than 1
Gbps. In this scenario, platinum streams each transmit at 300
Mbps, gold streams each transmit at 50 Mbps, and silver
streams each transmit at 25 Mbps. PCs are again used to
generate an additional twelve ports of best-effort traffic,
transmitting at 250 Mbps each.
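(Assuming all eleven prioritized streams and the one best-effort PC port converge on the Ixia receive port, the offered load is 2 × 300 + 4 × 50 + 5 × 25 + 250 = 1175 Mbps, which exceeds the 1 Gbps line rate, while the prioritized traffic alone totals 925 Mbps.)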
Latencies were measured for successfully received packets
only; dropped packets do not affect latency measurements.
For the 24-port tests, traffic for twelve ports was generated
using PCs with Intel PRO/1000 MT Gigabit Ethernet
adapters, while the Ixia 400T chassis was used to generate 11
additional ports of traffic and 1 port was used to receive. The
computers used were 1.33 GHz Athlon-based machines with
256 MB of DDR PC2100 RAM. Packet sizes of 128, 512,
and 1518 bytes were used although only the results of 128-
and 1518-byte runs are shown in Section 5 to conserve space.
Only 128-byte packet data is shown for the 24-port tests. The
line rate of the transmitting ports was varied from 0% to 30%.
Trials were conducted 3 times for each data point. There
were no significant variations between trials except where
noted in Section 5.
5. RESULTS AND ANALYSIS
The following sections analyze the performance and
abilities of three COTS Gigabit Ethernet switches. In Section
5.1, we analyze each switch qualitatively for management
capabilities. Section 5.2 analyzes the performance
differences between the separate test beds.
5.1 Switch analysis
Intended as an edge switch rather than a core router, the
BayStack 5510 offers fairly straightforward QoS control [5].
The 5510 supports both 802.1p and DiffServ classification
and marking. The switch enforces policies using 8 egress
queues which can be configured to have a strict priority or
WRR queue emptying scheme. Strict priority will always


favor packets with the highest 802.1p value, while the WRR
approach will give some access to the lower-priority queues.
Each queue can be assigned a specific amount of bandwidth
to be given in order to provide more fidelity to the QoS
implementation. Depending on the packet's 802.1p/DiffServ
priority, the packet is mapped to one of the queues by a
configurable mapping.
The switch disables QoS by default but keeps a simple
priority mapping in memory that makes setting up a WRR
priority scheme straightforward. A WRR scheme is usually preferred
so that lower priority streams do not starve under heavy
loading conditions. Since the hypothetical platinum stream
represents critical data in an avionics network, the highest
priority is allocated to the stream. At the same time, the low-
priority traffic should not starve whenever the platinum
stream is transmitting. Based on the requirements of the
platinum stream, network engineers could adjust the WRR
queuing. Since the 5510 only has 8 queues, all traffic must
be grouped into 8 classes of service for shaping.
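As a simple illustration of the difference between the two emptying schemes, the following sketch models WRR service of eight egress queues; the weights are hypothetical and the code is an abstract model rather than the BayStack's actual scheduler:

from collections import deque

# Illustrative WRR model: each of the 8 priority queues is assigned a
# weight, and a scheduling round sends up to `weight` packets from every
# backlogged queue, highest-priority queue first, so lower-priority
# queues still make progress under heavy load.
WEIGHTS = [1, 1, 2, 2, 4, 8, 16, 32]        # hypothetical weights for queues 0..7
queues = [deque() for _ in range(8)]

def wrr_round(transmit):
    """Serve one WRR round, calling transmit(packet) for each packet sent."""
    for prio in range(7, -1, -1):           # start with the highest-priority queue
        for _ in range(WEIGHTS[prio]):
            if not queues[prio]:
                break
            transmit(queues[prio].popleft())

Under strict priority, by contrast, the loop would drain queue 7 completely before ever visiting queue 6.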
The Catalyst 2970 features a slightly more complicated
QoS control than the BayStack 5510 in terms of possible
configurations. Like the 5510, the Catalyst 2970 features
802.1p and DiffServ classification and marking. The switch
offers the ability to queue the packets at ingress according to
priority as well as at the egress port. Two ingress queues and
four egress queues are available at each port. Using the
packet's 802.1p/DiffServ priority, the packet is mapped to
one of the queues based on the current priority mapping.
The Catalyst 2970 has a configurable SRR priority scheme
to empty each buffer at both the ingress and egress queues.
The SRR algorithm specifies a maximum amount of
bandwidth that a queue may use. However, if other queues
are empty, the bandwidth may be shared. The control over
the queues is more complicated than the configuration of the
BayStack 5510, but also has greater fidelity since it is
possible to specify both the SRR scheme and also the size of
each queue. Multiple ingress queues would be useful for
distinguishing between different flows that arrive on the same
port, but with only 4 egress queues, multiple flows must be
aggregated into a single class of service for shaping.
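The "shared" behavior can be modeled abstractly as follows; this sketch is our illustration of the idea that the bandwidth of idle queues is redistributed among backlogged queues, not a description of Cisco's SRR implementation:

def shared_bandwidth(weights, backlogged, line_rate_mbps):
    """Abstract shared-mode model: each queue is entitled to a weighted
    share of the line rate, and the shares of idle queues are
    redistributed among the queues that still have traffic."""
    active_total = sum(w for w, busy in zip(weights, backlogged) if busy)
    if active_total == 0:
        return [0.0] * len(weights)
    return [line_rate_mbps * w / active_total if busy else 0.0
            for w, busy in zip(weights, backlogged)]

# Hypothetical example: queue 3 is idle, so its share is spread over the rest.
print(shared_bandwidth([10, 20, 30, 40], [True, True, True, False], 1000))
# -> approximately [166.7, 333.3, 500.0, 0.0]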
The Force10 E300 is quite different from the other two
switches in our study. Intended as a core switch with
multiple 10-Gigabit Ethernet ports, the E300 has large buffers
capable of storing up to 200 milliseconds worth of data [14].
As in the Catalyst 2970, after classification, the switch
chooses which packets to send from the ingress queues based
on priority and congestion avoidance. Congestion avoidance
consists of Random Early Drop (RED) or Weighted RED
(WRED). These methods eliminate packets before a port
reaches saturation in order to protect high-priority streams
from packet loss. Then, the packet is placed in one of 8
egress queues where egress traffic is also shaped. All of the
mappings are configurable with a default that satisfies simple
QoS problems.
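For reference, the generic RED/WRED decision can be sketched as below; the thresholds and drop probabilities shown are hypothetical, and the E300's actual profiles may differ:

import random

def wred_drop(avg_queue_depth, min_th, max_th, max_p):
    """Generic (W)RED model: no drops below min_th, all packets dropped
    above max_th, and in between the drop probability rises linearly
    toward max_p. WRED simply applies different thresholds per class."""
    if avg_queue_depth <= min_th:
        return False
    if avg_queue_depth >= max_th:
        return True
    p = max_p * (avg_queue_depth - min_th) / (max_th - min_th)
    return random.random() < p

# Hypothetical per-class profiles: high-priority traffic is dropped later
# and less aggressively than best-effort traffic.
PROFILES = {"platinum": (80, 100, 0.05), "best_effort": (20, 60, 0.20)}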
The E300 allows the bandwidth percentages assigned to
each egress queue to be set in the configuration of the switch.
The granularity is 1%, so the bandwidth-percentage command
gives the network engineer only relatively coarse control.
Also, it is possible to specify the committed, peak,
and burst rates allowed for each ingress and egress class so
that a more complicated scheme can be created.
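A committed rate with a burst allowance is commonly realized with a token bucket; the minimal sketch below illustrates the idea (a peak rate would add a second, faster bucket) and is not taken from the E300 configuration:

import time

class TokenBucket:
    """Minimal token-bucket policer: tokens accumulate at the committed
    rate up to the burst size; a packet conforms only if enough tokens
    are available when it arrives."""
    def __init__(self, committed_bps, burst_bytes):
        self.rate = committed_bps / 8.0       # refill rate in bytes per second
        self.burst = burst_bytes
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def conforms(self, packet_bytes):
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= packet_bytes:
            self.tokens -= packet_bytes
            return True
        return False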
5.2 Performance comparisons
This section presents the QoS performance comparisons
between the Nortel Networks BayStack 5510, the Cisco
Systems Catalyst 2970, and the Force10 Networks E300.
The switches are compared on their performance in terms of
average latency, jitter, and packet loss. Sections 5.2.1 and
5.2.2 compare best-effort performance below saturation and
above saturation, respectively. Section 5.2.3 evaluates the
performance when a single platinum stream is used. Section
5.2.4 evaluates performance in a complex, 24-port test. By
comparing the performance of the switches, insight into the
algorithms that control QoS can be gained.
5.2.1 8-port best-effort below saturation

Most often, a switch operates below saturation. Therefore,
the ability of a switch to provide reliably low latencies and
low jitter below saturation are of particular interest. Fig. 4
and 5 show the best-effort average latencies for small and
large packets for all three switches in this study.
Notice that the BayStack 5510 and the Catalyst 2970 have
similar latencies but the E300 has consistently higher
latencies. Although the E300 has the highest latencies
measured, the difference between the E300 and the other
switches is less for larger packet sizes, indicating that the
Force10 switch favors larger packet sizes.
Comparing the jitter below saturation under best-effort
conditions shows that all three switches provide relatively
consistent latencies. Fig. 6 and 7 show that the E300 jitter is
slightly higher than the other two switches but not by the
same relative differences as found in latency given in Fig. 4
and Fig. 5. The BayStack 5510 typically has at least 5
microseconds lower jitter than the other switches for large
packet sizes. The exception is at 10% line rate where the
Catalyst 2970 has a slight edge. Without QoS controls
enabled, the BayStack 5510 is able to provide the lowest jitter
compared to the other switches, although the Catalyst 2970 is
only slightly behind.
5.2.2 8-port best-effort behavior after saturation

As each switch reaches saturation, packet loss becomes a
problem and thus latency can no longer be measured. Packet
loss on each switch occurred when the line rate of each of the
seven transmitting streams was greater than 143 Mbps (i.e.
14.3% line rate). The packet losses for the BayStack 5510
and E300 were roughly equivalent, with the E300 having
slightly lower packet loss.
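(This threshold is consistent with the capacity of the shared egress link: seven streams at 143 Mbps offer roughly 1.0 Gbps in aggregate, so any higher per-stream rate over-subscribes the receive port.)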



Fig. 4. Best-effort 128-byte packet average latency.

Fig. 5. Best-effort 1518-byte packet average latency.
Fig. 6. Best-effort 128-byte packet jitter.

Fig. 7. Best-effort 1518-byte packet jitter.


5.2.3 8-port single platinum stream behavior

The platinum stream latencies are shown for all three
switches in Fig. 8 and Fig. 9. After QoS is enabled, the
below-saturation latencies of the BayStack and Catalyst are 1
microsecond less than the best-effort case for small packets.
However, the latency of the E300 is approximately 10
microseconds lower for small packets at 5% and 10% line
rate. Large packet sizes saw considerable reductions in
latency from using QoS except in the Catalyst 2970.
The BayStack 5510 latency was reduced by an average of
7 microseconds for the large packet sizes. While the Force10
switch has higher latencies, it also benefits more from the use
of QoS below saturation.
The results in Fig. 8 indicate that the BayStack and Catalyst
vastly outperform the E300 in latency above saturation for
small packets. The BayStack and Catalyst latencies increase
slightly after saturation while the E300 nearly triples. Fig. 9
shows that the relative difference is not quite as serious in
larger packets, but small packets can be important to an
avionics network [1]. The Catalyst performs slightly better
above saturation at reducing the latency.
The single platinum stream jitter results are shown in Fig.
10 and Fig. 11. For 1518-byte packets, the jitter is reduced
greatly after applying QoS for the Catalyst and E300. The
BayStack jitter is also reduced by 5 microseconds, but
exhibits the highest jitter observed.

Fig. 8. Single platinum stream 128-byte packet latency.

Fig. 9. Single platinum stream 1518-byte packet latency.


Fig. 10. Single platinum stream 128-byte packet jitter.

Fig. 11. Single platinum stream 1518-byte packet jitter.


Above saturation, the small-packet jitter is very low for the
BayStack and Catalyst, and larger but still manageable for the
E300. For large packets, however, all three switches exhibit
roughly the same jitter after saturation, and the E300 has
lower and more consistent jitter below saturation. The E300
was designed with 10-Gigabit Ethernet operation in mind and
may therefore be optimized for larger packet sizes, which
make better use of the faster line rate given the effects of the
inter-packet gap.
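(On the wire, each frame is preceded by an 8-byte preamble and followed by a 12-byte inter-packet gap, so a 128-byte frame occupies roughly 148 byte-times, about 86% efficiency, while a 1518-byte frame occupies about 1538 byte-times, nearly 99% efficiency.)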
The packet loss for all three switches was 0% for the
platinum stream. Thus, all three are capable of properly
preventing packet drop with QoS for critical data.

5.2.4 24-Port experimental results

Without QoS enabled, both of the tested switches show
similar results for the 24-port experiment. The latency results
of the Average Traffic case for 128-byte packets are shown in
Fig. 12. The platinum streams receive the lowest average
latency on both switches, and the gold streams receive higher
latency. The silver streams receive lower latency on the
BayStack, but similar latency to the gold stream on the
Catalyst.
With QoS enabled, the BayStack 5510 decreases average
latency for the platinum streams, increases average latency
for the gold streams, and does not affect the average latency
of the silver streams. The platinum streams experience a
lower maximum latency when QoS is enabled, resulting in a
lower amount of jitter, as seen in Fig. 13.
The Catalyst 2970 decreases average latency for the
platinum and silver streams, but increases the average latency
for the gold streams. All of the streams have reduced
maximum latency when QoS is enabled. The silver streams
benefit the most from the use of QoS controls on the Catalyst
2970, due to the significantly lowered maximum latency.
In the Peak Traffic case, packet loss will occur on all
streams if QoS is not enabled. We will only look at the QoS-
enabled case. The latency of every stream is higher than in
the Average Traffic case on both switches. However, no
packet loss is experienced. The average latency values for
each stream are shown in Fig. 14.
Fig. 15 shows the jitter values for the various streams on
the BayStack and Catalyst switches. The QoS controls of the
BayStack 5510 are able to control jitter in a predictable way.
Higher priority streams have less jitter than lower priority
streams when the receive port is over-saturated. The Catalyst
2970 jitter is lowest for the silver streams, at 1.7 microseconds. This
behavior is due to the way the switch treats streams with a
priority level of 5.
The silver streams on the Catalyst 2970 demonstrate that
the switch treats packets with an 802.1p or IP precedence
value of 5 as the highest priority. This switch is configured
by default to provide QoS for Voice over IP (VoIP) streams.
VoIP streams usually have a priority level of 5. This
behavior could be changed by remapping the priority levels
to specific high-priority buffers [15].

Fig. 12. 24-Port Average Traffic latency results.
Fig. 13. 24-Port Average Traffic jitter results.
Fig. 14. 24-Port Peak Traffic latency results.
Fig. 15. 24-Port Peak Traffic jitter results.
The Catalyst 2970 has only four egress queues and must
therefore combine certain priorities into specific queues.
Streams with a priority level of 6 or 7 are combined into the
same queue, while streams with a priority level of 5 are given
their own independent queue intended for VoIP traffic. The
silver streams have lower jitter caused by lower amounts of
traffic and a dedicated queue, while the platinum and gold
streams generate a significantly greater amount of traffic and
have a shared queue.

6. CONCLUSIONS

Aircraft networks demand both high bandwidth and low
latency within bounds. For this reason, switches built for
these networks were usually custom-designed for each
generation of network. Current Gigabit Ethernet switches
offer high throughput and low latency in over-provisioned
cases. With the use of QoS-enabled switches, these benefits
can be extended to cases where the network is slightly under-
provisioned. Therefore, COTS switches with QoS controls
offer a low-cost solution for avionics networks. This paper
presents a comparative performance evaluation of three such
switches: the Nortel Networks BayStack 5510, the Cisco
Systems Catalyst 2970, and the Force10 Networks E300.


Each switch provides a reasonable amount of QoS control
with options for creating a full solution to match desired
performance capabilities. Latencies and jitter for all switches
decreased dramatically for critical data as long as it was given
sufficient priority. However, these switches have not yet
reached their potential in terms of matching the versatility of
QoS standards. Although each switch implements DiffServ
classification and tagging, none of the three switches
examined had the ability to differentiate between 64 separate
traffic classes for purposes of queue shaping. Thus, network
engineers are given fewer options to implement complicated
priority schemes.
The BayStack 5510 from Nortel Networks featured very
impressive latency and jitter performance, especially for
smaller packets. The switch was capable of keeping jitter for
critical data below 2 microseconds for small packet sizes.
Further, the BayStack configuration utility was easy to use
while providing a good amount of QoS control to create more
powerful QoS solutions. The Nortel Networks switch has 8 egress
queues, which allows for a variety of traffic profiles.
The Catalyst 2970 from Cisco Systems also featured low
latency and jitter for small packets. For large packet sizes,
QoS controls decrease large-packet jitter by at least 10
microseconds compared to the best-effort case. The QoS
configuration of this switch allows the user to have control
over all aspects of the QoS policies. The ability to shape at
the ingress queue is a significant difference, which would
possibly help packets that have already been classified by a
previous switch. This switch also implements WRED and
other congestion avoidance techniques that are useful for
protecting critical data.
The E300 from Force10 Networks also featured low jitter,
but its latency was much higher than the other two switches.
For low traffic loads, the E300 has jitter of approximately 3
microseconds for large packet sizes. The E300 is intended as
a core switch for high-end applications. The large amount of
QoS control, however, means that the E300 can implement the
most diverse set of policies of any of the switches analyzed.
As future work, simulation models will be built and used to
extend the experimentally gathered data. The simulations
will investigate the use of new QoS services that are beyond
the capabilities of the current experimental testbed. Various
network, traffic, and load scenarios will be analyzed for the
QoS mechanisms under study. The simulations will provide
probability distributions of arrival latencies for different QoS
algorithms. The models will be verified against the
experimental data that has already been gathered. This data
will provide a comprehensive analysis of the relation between
current and emerging QoS services and switch technologies
in terms of statistically bounded latencies.

7. ACKNOWLEDGMENTS

This research was sponsored by Rockwell Collins, Inc. in
Cedar Rapids, Iowa. Further support, in the form of equipment
and technical assistance, was gratefully received from Cisco
Systems, Force10 Networks, Ixia, and Nortel Networks.


Finally, we would also like to thank the other members of the
high-performance networking group in the HCS Research
Lab at the University of Florida for providing support and
insight during this study.

8. REFERENCES

[1] J. Meier, S. Kim, A. George, and S. Oral, "Gigabit COTS Ethernet
Switch Evaluation for Avionics," Proc. of 27th IEEE Conference on
Local Computer Networks (LCN) for the IEEE Workshop on High-Speed
Local Networks (HSLN), Tampa, FL, Nov. 2002, pp. 739-740.
[2] V. Firoiu, J. Y. Le Boudec, D. Towsley, and Z. L. Zhang, "Theories
and Models for Internet Quality of Service," Proc. of the IEEE, Vol.
90, No. 9, Sept. 2002, pp. 1565-1591.
[3] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss,
"An Architecture for Differentiated Services," Internet Engineering
Task Force (IETF) specification, RFC 2475, Dec. 1998.
[4] Ixia Hardware Test Platforms Specifications, 2004,
http://www.ixiacom.com/datasheets/pdfs/ch_1600t 400t 100.pdf.
[5] Nortel Networks, BayStack 5510 Product Brief, 2004,
http://www.nortelnetworks.com/products/02/bstk/switches/baystack
5510/collateral/nn104742-012804.pdf.
[6] IEEE Standard 802.1D, "IEEE Standard for Information
Technology-Telecommunications and Information Exchange
Between Systems-Local and Metropolitan Area Networks
Common Specifications. Part 3: Media Access Control (MAC)
Bridges," IEEE, 1998.
[7] A. Koubaa, A. Jarraya, and Y. Q. Song, "SBM Protocol for
Providing Real-time QoS in Ethernet LANs," Proc. of the 1st Intl.
Workshop on Real-Time LANs in the Internet Age (RTLIA'02),
Austria, Jun. 2002, pp. 45-49.
[8] V. Laatu, J. Harju, and P. Loula, "Evaluating Performance among
Different TCP Flows in a Differentiated Services Enabled Network,"
Proc. of the ICT'2003 International Conference on
Telecommunications, Papeete, Tahiti, French Polynesia, Feb. 2003,
pp. 701-715.
[9] C. Bouras and A. Sevasti, "Analytical Approach and Verification of
a DiffServ-based Priority Service," Proc. of High Speed Networks
and Multimedia Communications HSNMC '03, Portugal, Jul. 2003,
pp. 11-20.
[10] V. Firoiu, I. Yeom, and X. Zhang, "A Framework for Practical
Performance Evaluation and Traffic Engineering in IP Networks,"
Proc. of IEEE International Conf. on Telecommunications, Jun.
2001.
[11] T. Chahed, "IP QoS Parameters," TF-NGN, Nov. 2000,
http://axgarr.dir.garr.it/-cmp/tf-ngn/QoS_parameters.ps.
[12] V. Paxson, G. Almes, J. Mahdavi and M. Mathis, "Framework for IP
Performance Metrics," Internet Engineering Task Force (IETF)
specification, RFC 2330, May 1998.
[13] I. Troxel, R. Balasubramanian, C. Catoe, J. Wills, and A. George,
"Virtual Prototyping of High-Performance Optical Networks for
Advanced Avionics Systems," Final Project Report, HCS Research
Lab, University of Florida, October 2003.
[14] Force10 Networks, Gigabit Ethernet Performance Evaluation, 2002,
http://www.force10networks.com/products/pdf/336GE-ports-v3.1.pdf.
[15] Catalyst 2970 Switch Software Configuration Guide,
http://www.cisco.com/en/US/products/hw/switches/ps5206/prodconfiguration_guides_list.html.



