
Buffer Management in Tone Allocated Multiple Access Protocol



BUFFER MANAGEMENT IN TONE ALLOCATED MULTIPLE ACCESS PROTOCOL

By

USHA SURYADEVARA

A THESIS PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF MASTER OF SCIENCE

UNIVERSITY OF FLORIDA

2001


ACKNOWLEDGMENTS

I extend my special thanks to my advisor, Dr. Richard E. Newman, for being my committee chair for the entire project and for his valuable guidance throughout my studies here. I also thank Dr. Randy Chow and Dr. Haniph Latchman for being my committee members. I would also like to thank Mr. Srinivas Katar for his help and guidance at various stages of my work. I thank my parents, Dr. S. Sambasiva Rao and S. Basaveswari, and my sister, Dr. S. Uma, for their support and encouragement throughout my studies. I am grateful to my roommates and friends for their support.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
ABSTRACT

CHAPTERS

1. INTRODUCTION
   1.1 Power Lines
       1.1.1 Low Bandwidth Digital Devices
       1.1.2 High Bandwidth Digital Devices
   1.2 TAMA Protocol Overview
   1.3 Buffers
   1.4 Thesis Motivation
   1.5 Thesis Objective
   1.6 Chapter Summary

2. POWER LINE PROTOCOLS AND BUFFER MANAGEMENT
   2.1 History
       2.1.1 Other Power Line Protocols
             2.1.1.1 X-10
             2.1.1.2 CEBus
             2.1.1.3 LonWorks
       2.1.2 Buffer Management
   2.2 Comparative Summary
   2.3 Chapter Summary

3. OFDM AND TAMA PROTOCOL
   3.1 OFDM Modulation
       3.1.1 Theory of Operation
       3.1.2 Advantages of OFDM
   3.2 Protocol Overview
       3.2.1 PHY Overview
       3.2.2 MAC Overview
             3.2.2.1 Channel access mechanism
             3.2.2.2 Segmentation and reassembly
             3.2.2.3 Privacy
   3.3 Summary

4. SIMULATION DESIGN AND DESCRIPTION
   4.1 Design
       4.1.1 Basic Simulator Design
       4.1.2 Multiple Buffer Protocol
             4.1.2.1 Based on FCFS
             4.1.2.2 Based on source
             4.1.2.3 Based on priority
             4.1.2.4 Preemptive approach
   4.2 Simulation Model
       4.2.1 Assumptions
       4.2.2 Simulation Configuration
   4.3 Performance Measures
   4.4 Summary

5. RESULTS AND DISCUSSION
   5.1 Results
       5.1.1 General Traffic Scenario
             5.1.1.1 Based on FCFS
             5.1.1.2 Based on source
             5.1.1.3 Based on priority
             5.1.1.4 Effect on the number of FAILs
             5.1.1.5 Reserve a high priority buffer
       5.1.2 Multimedia Scenario
       5.1.3 Hot Spots
   5.2 Comparative Summary

6. CONCLUSIONS AND FUTURE WORK
   6.1 Conclusions
   6.2 Future Work

APPENDIX: TABLES
LIST OF REFERENCES
BIOGRAPHICAL SKETCH


Abstract of Thesis Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Master of Science

BUFFER MANAGEMENT IN TONE ALLOCATED MULTIPLE ACCESS PROTOCOL

By

Usha Suryadevara

December 2001

Chairman: Dr. Richard E. Newman
Major Department: Computer and Information Science and Engineering

Home networking, an emerging technology, has gained much attention because it can use power lines for data transmission. Power line networks have a very extensive infrastructure in nearly every building. A Tone Allocated Multiple Access (TAMA) protocol, which uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), is considered.

The segmentation and reassembly (SAR) process of the TAMA protocol rejects an initial segment if there is insufficient buffer space at the receiver due to a prior incomplete transmission. The SAR process then issues a FAIL (or resource busy) signal to the sender, asking for retransmission. An attempt is made in this thesis to find the optimal number of reassembly buffers required at the receiver side, and also to reduce the number of FAILs in the network by increasing the number of buffers at the receiver end. Three buffer allocation schemes are considered: allocation based on first come first serve (FCFS), allocation based on source address, and allocation based on priority. Throughput analysis for various kinds of traffic has also been carried out. Results show that there is a considerable increase in throughput and a decrease in the number of FAILs as the number of buffers increases.


CHAPTER 1
INTRODUCTION

"It was technologically feasible, but the economics never proved (itself)" [1].

The prospect of homes where the refrigerator can be used to surf the Internet is coming closer [2], because major computer and electronics manufacturers are coming up with a common standard for turning a home's electrical wiring into a data network. Home networking may be one of the most exciting markets that doesn't yet really exist. All that homeowners need is a technology that provides a connection to the broadband pipe from anywhere in the home with minimal inconvenience. Until now, the options for high-speed in-home access were phone line and RF technologies. One option that people never thought about earlier is using a power line for this purpose. The power line is certainly the most difficult medium of these, but it has two appealing attributes. First, as in the case of phone lines, no RF conversion hardware is needed, so the cost can be low compared to wireless solutions. Second, the power line network has a very extensive infrastructure in almost every house.

1.1 Power Lines

Since power lines were originally devised for transmission of power at 50-60 Hz, and at most 400 Hz, the use of the same medium for data transmission posed a lot of technical problems.


The power line medium is a harsh environment for communication, especially because of the large attenuation. A channel between any two nodes (outlets) in a home has the transfer function of an extremely complicated transmission line network. Amplitude and phase response in such a network will vary widely with frequency. At some frequencies the transmitted signal may arrive at the receiver with relatively little loss, while at other frequencies it is way below the noise floor. In addition, worse things, such as a change of the transfer function, might occur with time. This might happen, say, whenever the homeowner plugs a new device into the power line, or during the switching of power supplies or motors. The nature of the channel between outlet pairs may also vary over a wide range. One other problem is interference.

Power lines connect the power generation station to a variety of customers dispersed over a wide region. Power transmission is done using varying voltage levels and power line cables. Depending on the voltage levels at which they transfer power, lines can be categorized as follows:

1. High-tension lines;
2. Medium-tension lines;
3. Low-tension lines.

High-tension lines connect electricity generation stations to distribution stations. The voltage levels on these lines are usually on the order of hundreds of kilovolts, and they run over distances on the order of tens of kilometers.


Medium-tension lines connect distribution stations to pole-mounted transformers. The voltage levels are on the order of a few kilovolts, and the distances are on the order of a few kilometers.

Low-tension lines connect pole-mounted transformers to individual households. Voltage levels on these are on the order of a few hundred volts, and these run over distances on the order of a few hundred meters. Data communication over low-tension lines has recently gained a lot of attention.

Digital devices using these low-tension power lines can be categorized, based on the bandwidth they use, as follows [3]:

1. Low bandwidth digital devices;
2. High bandwidth digital devices.

1.1.1 Low Bandwidth Digital Devices

These devices use carrier frequencies in the range 0-500 KHz and are primarily used for building automation. Frequencies used by these devices are restricted by the regulatory agencies. The restrictions are imposed to ensure the harmonious coexistence of various electromagnetic devices in the same environment. The frequency restrictions are imposed in two main markets, North America and Europe; these are shown in Figure 1.1 [3]. The Federal Communications Commission (FCC) and the European Committee for Electrotechnical Standardization (CENELEC) govern regulatory rules in North America and Europe, respectively. In North America a frequency band from 0 to 500 KHz can be used for power line communications. The frequency band in Europe is further divided into five bands based on the rules:

1. 3 – 9 KHz frequency band;


2. 9 – 95 KHz frequency band;
3. 95 – 125 KHz frequency band;
4. 125 – 140 KHz frequency band;
5. 140 – 148.5 KHz frequency band.

Figure 1.1. FCC and CENELEC frequency band allocation ((a) FCC, North America: general use band from 0 to 540 KHz, prohibited above; (b) CENELEC, Europe: A, B, C and D bands with edges at 3, 9, 95, 125, 140 and 148.5 KHz, prohibited above)

The use of the frequency band from 3 KHz to 9 KHz is limited to energy providers; however, with their permission it may also be used by other parties inside a consumer's premises.

The use of the frequency band from 9 KHz to 95 KHz is limited to energy providers and their concession-holders. This frequency band is often referred to as the "A-Band".

The use of the frequency band from 95 KHz to 125 KHz is limited to the energy provider's customers; no access protocol is defined for this frequency band. This frequency band is often called the "B-Band".


The use of the frequency band from 125 KHz to 140 KHz is limited to the energy provider's customers. In order to make simultaneous operation of several systems within this frequency band possible, a carrier sense multiple access protocol using a center frequency of 132.5 KHz was defined. This frequency band is referred to as the "C-Band".

The use of the frequency band from 140 KHz to 148.5 KHz is limited to the energy provider's customers. No access protocol is defined for this frequency band. This frequency band is often referred to as the "D-Band".

Thus in Europe power line communication is restricted to operate in the frequency range 95 – 148.5 KHz. The various protocols that have been developed for use by low bandwidth digital devices for communication on power lines are discussed in the next chapter.

1.1.2 High Bandwidth Digital Devices

High-speed data communication over low-tension power lines has recently gained a lot of attention. These devices use the existing power line infrastructure within the apartment, office or school building to provide a local area network (LAN) interconnecting various digital devices. High bandwidth digital devices for communication on power lines use the frequency band between 1 MHz and 30 MHz. In contrast to low bandwidth devices, no regulatory standards have been developed for this region of the spectrum. High bandwidth digital devices communicating on power lines need powerful error correction coding along with appropriate modulation techniques to overcome the channel impairments described above.


1.2 TAMA Protocol Overview

High bandwidth digital devices operating on power lines share a common medium. Efficient use of this medium requires both a robust physical layer (PHY) and an efficient media access control (MAC) protocol. Also, the choice of a MAC protocol is very much dependent on the physical layer. The MAC controls the sharing of the medium, while the PHY specifies the modulation, coding, and basic packet formats.

PowerPacket technology, by Intellon Corporation of Ocala, Florida, includes an effective and reliable method for achieving high rates on typical channels [2]. The PowerPacket PHY uses orthogonal frequency division multiplexing (OFDM) as the basic transmission technique. For historical reasons this protocol is called the Tone Allocated Multiple Access (TAMA) protocol. The name reflects the use of adaptive bit loading in OFDM symbols. In contrast to continuous-mode OFDM systems, however, TAMA uses OFDM in a burst mode. This technology also uses concatenated Viterbi and Reed-Solomon FEC with interleaving for payload data, and turbo product coding (TPC) for sensitive control data fields [4]. The TAMA MAC is modeled after the IEEE 802.11 MAC, adapted to the OFDM PHY layer. TAMA uses the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) [5] technique as the basic channel access mechanism.

1.3 Buffers

Buffers are often used to improve throughput and reduce loss of data. Buffers may be dedicated or shared; dedicated buffer management is straightforward but its utilization is low, whereas shared buffering offers higher buffer utilization at the cost of complex management. Dedicated buffers, especially input buffers, are affected by head-of-line blocking, a phenomenon that occurs quite literally when the head of the buffer queue is


blocked. Shared buffers suffer from buffer hogging in the case of non-uniform traffic. The combinations of traffic patterns and buffer allocation mechanisms are numerous, and many studies have been conducted on them.

1.4 Thesis Motivation

Unfortunately, while offering good throughput, this protocol suffers from large delays. This delay turns out to be a potential problem in power line LANs, as they should be able to support a large variety of traffic, some of which may be delay sensitive.

As of now, the way this protocol works is that the destination always acknowledges unicast packets at the MAC layer by transmitting the response delimiter. The TAMA protocol sends the rest of the segments only if it gets the ACK for the first segment. If the source fails to receive an acknowledgment, it assumes that a collision has caused the failure. The destination may also choose to signal FAIL if it has insufficient resources to process the frame, or it can signal NACK to indicate that the packet was received with errors that could not be corrected by the FEC.

The basic TAMA protocol described above has just one buffer at the receiver end. For this reason, some packets arriving at the same destination at the same time are answered with a FAIL (or resource busy) signal, resulting in retransmissions. Retransmissions bring down the data throughput and do not guarantee timely delivery of multimedia traffic. This means that if we increase the number of buffers at the receiver end, then there should be a definite increase in the throughput and a considerable reduction in the number of retransmissions. This is our main area of study in this thesis.


1.5 Thesis Objective

This thesis aims to maximize the throughput by considering buffer allocation at the destination. The analysis was focused on finding the optimal number of buffers required at the destination node in the TAMA protocol. Three buffer allocation schemes were considered: allocation based on FCFS (first come first serve), allocation based on priority, and allocation based on source. Also, preemption in the buffer (removing a partly received packet of lower priority when no buffer resources are available for a currently received higher priority packet) was considered, and the variation in throughput was observed. The change in throughput when a buffer is reserved for high priority traffic (for example, VoIP) was also noted.

1.6 Chapter Summary

This chapter discussed power line communications. Problems faced when a power line is used as a transmission medium, like extreme noise levels, interference, and unpredictability of the channel, were discussed in brief. In the US, the frequency band from 0 to 500 KHz can be used for power line communications.

This chapter also gave a brief introduction to the protocol used in this thesis, the TAMA protocol. The TAMA PHY uses OFDM as the basic transmission technique, and the TAMA MAC uses CSMA/CA as the basic channel access mechanism. An introduction to the prior work done in the field of buffer management was also given, along with a description of the nature of the work done in this thesis and the motivation behind it.

Chapter 2 describes the prior work done in both power line communications and the area of buffer management. Chapter 3 gives a detailed description of the OFDM modulation used in the TAMA protocol and of the protocol itself. Chapter 4 discusses the details


of the simulation and all the different scenarios that were considered. A detailed discussion of the results collected and an analysis of these results are given in Chapter 5, and finally the conclusions and future scope are presented in Chapter 6.


CHAPTER 2
POWER LINE PROTOCOLS AND BUFFER MANAGEMENT

In this chapter we give a brief description of previous work done in power line communications and also in the area of buffer management.

2.1 History

Even though the concept of using power lines for data transmission is very recent, there have been several studies and attempts to develop protocols or come up with a common standard for home networking as a whole. Some of them are discussed briefly in the next sub-section.

2.1.1 Other Power Line Protocols

Various other protocols were developed, even before PowerPacket technology, for communication on power lines. These protocols differ in the modulation technique, channel access mechanism and frequency band they use. Various products based on these protocols are available in the market and are mainly used for home automation purposes. A brief overview of these protocols is presented here.

2.1.1.1 X-10

The X-10 technology is one of the oldest power line communication protocols. Although it was originally unidirectional (from controller to controlled modules), recently some bi-directional products have been implemented. X-10 controllers send their signals over the power lines to simple receivers that are used mainly to control lighting


and other appliances. Some controllers available today implement gateways between the power line and other media such as RF and infrared.

A 120 KHz amplitude modulated carrier, a 0.5 watt signal, is superimposed onto the AC power line at zero crossings to minimize noise interference. Information is coded by way of bursts of high frequency signals. To increase reliability, one bit of information is transmitted per cycle, limiting the transmission rate to 60 bits per second. This represents poor bandwidth utilization, while the reliability of transmission is severely compromised in a noisy environment. These are the main reasons why this technology is considered unreliable by many installers.

2.1.1.2 CEBus

CEBus is an industry standard for transmitting control signals via a home's power lines. CEBus uses spread spectrum technology to overcome noise and other communication impediments found within power lines. Spread-spectrum signaling works by spreading a transmitted signal over a range of frequencies, rather than using a single frequency. CEBus-compliant devices such as light switches are highly reliable but remain expensive compared with "ordinary" devices.

This technology uses a peer-to-peer communication model. To avoid collisions, Carrier Sense Multiple Access with Collision Resolution and Collision Detection (CSMA/CRCD) is used. The power line physical layer of CEBus communication is based on the spread spectrum technology patented by Intellon Corporation. Unlike traditional spread spectrum techniques (which use frequency hopping, time hopping or direct sequence), the CEBus power line carrier sweeps through a range of frequencies as it is transmitted. A single sweep covers the frequency band from 10-400 KHz. This frequency sweep is called a chirp. Chirps are used for synchronization,


collision resolution and data transmission. Using this chirp technology, a data rate of about 10 Kbps can be obtained.

2.1.1.3 LonWorks

LonWorks is a technology developed by Echelon Corporation that provides a peer-to-peer communication protocol, implemented using the Carrier Sense Multiple Access (CSMA) technique. Unlike CEBus, LonWorks uses a narrowband spread spectrum modulation technique in the frequency band from 125 KHz to 140 KHz. It uses a multi-bit correlator intended to preserve data in the presence of noise, with a patented impulse noise cancellation.

All the above-mentioned protocols deal with low-bandwidth digital devices and hence provide low data rates. They are essentially used for home automation. The high data rates required by modern applications can only be achieved using higher frequencies. The next section deals with the prior work done in the area of buffer management.

2.1.2 Buffer Management

Buffer allocation is a process that determines how the total buffer space (memory) will be used when packets need to be queued. The selection and implementation of this policy is usually referred to as buffer management [6].

A lot of study has been done in the area of buffers: observing buffer utilization in a network, calculating the optimum size of a buffer, evaluating the performance of a network when buffering is included, and so on.

An exact model to evaluate the performance of Multistage Interconnection Networks using shared internal buffering is proposed by Saleh and Atiquzzaman [7]. The model is based on a general output distribution, and simulation based results have been


collected for uniform, hot spot and favorite distributions. Favorite output has been found to have less impact on throughput than the hot spot distribution. Buffer occupancy for hot spot traffic has been found to reach total capacity even at low input loads, and the rate of hot occupancy has been found to increase in a non-linear fashion, whereas as the favorite value increases the rate of occupancy increases linearly.

Saleh and Atiquzzaman have also analyzed the effect of having one hot spot on shared buffer strategies [8]. It was found that though the throughput of the hot output increases as the hot spot value increases, the overall throughput decreases because of the monopolizing effect of the hot spot traffic. The advantage of the shared buffer scheme, which is better buffer utilization, is negated by the fact that hot spot traffic hogs buffer space, affecting cold spot throughput. For such traffic, output buffering, i.e., dedicated buffer allocation based on destination, is better.

Zhou and Atiquzzaman have proposed a model for output-buffered nodes under non-uniform traffic, which provides more accurate results than the other models because it considers blocking [9]. The model can also be used for uniform traffic as a special case. The results are also validated with simulation. This model is compared to a model that does not consider blocking, and it is shown that this model provides more accurate results. The effect of buffer size on throughput is also studied. Under uniform traffic, an increase in buffer size results in throughput increasing from a minimum to an asymptotic value. Under hot spot traffic, the asymptotic value reached is not as high as under uniform traffic. It was also found that increasing buffer size beyond a certain value ceased to affect the throughput; that threshold was found and used for further simulation.


The output-buffered strategy is studied by Zhou and Atiquzzaman [10]. They consider finite output-buffered models under non-uniform traffic. The various buffer strategies, such as input-based, output-based and shared, are characterized there, and references to research done in these fields are also indicated.

Buffer sharing can be done in various ways [6]:

1. Complete Partitioning and Complete Sharing – In the complete partitioning (CP) scheme, the entire buffer space is permanently partitioned among the total number of servers present. This scheme does not actually provide any sharing. At the other extreme is the complete sharing (CS) scheme, where all buffer space is shared among all incoming data regardless of source and destination. Here an arriving packet is accepted if any space is available in the switch memory, independent of the server to which the packet is directed. Under the CP policy, buffer space allocated to a port is wasted if that port is inactive, since it cannot be used by other possibly active links. On the other hand, under the CS policy one of the ports may monopolize most of the storage space if it is highly utilized [11].

2. Sharing with Maximum Queue Lengths (SMXQ) – After observing the undesirable behavior of the two extreme policies of CS and CP, and in order to obtain maximum efficiency of buffer sharing while avoiding the possible monopolization of the buffer space by one heavily loaded link, a "restricted buffer sharing" with a


limit on the maximum number of spaces that can be taken up for any destination was proposed by Irland [12].

3. Sharing with Minimum Allocation (SMA) and Sharing with a Maximum Queue and Minimum Allocation (SMQMA) – In "Analysis of Shared Finite Storage in a Computer Network Node Environment under General Traffic Conditions" [13], two other policies, SMA and SMQMA, are proposed. In SMA, a minimum number of spaces is reserved for each destination. SMQMA is the integration of SMA and SMXQ; each destination always has access to a minimum allocated space, but no destination can have an arbitrarily long queue.

4. Complete sharing with push-out from the longest queue – Approach (4) is analyzed in "Analytical Modeling of Shared Buffer ATM Switches with Hot-Spot Push-out under Bursty Traffic" [14]. Shared buffering is best in terms of buffer utilization and loss of data because of the maximum degree of buffer sharing; however, its performance degrades under non-uniform traffic. Hot spot traffic, which is a particular kind of non-uniform traffic, is analyzed in this paper under the Hot-Spot Push-out scheme, where packets directed towards hot spots that are making unfair use of the buffer sharing scheme are purged. This approach is shown to be consistently better than the other schemes.
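As an illustration, the admission rules of these policies can be written as simple predicates. The following Python sketch is ours, not from [6]; the function and parameter names are assumptions, and the reservation bookkeeping for SMA and SMQMA is simplified.

    # Illustrative admission predicates for the buffer-sharing policies
    # above. All names and parameters are assumptions for this sketch.

    def admit_cp(queue_len, per_port_quota):
        # Complete Partitioning: each port may only use its own fixed quota.
        return queue_len < per_port_quota

    def admit_cs(total_used, buffer_size):
        # Complete Sharing: accept whenever any space remains in the pool.
        return total_used < buffer_size

    def admit_smxq(queue_len, total_used, buffer_size, max_queue):
        # SMXQ: shared pool, but no single queue may exceed max_queue spaces.
        return total_used < buffer_size and queue_len < max_queue

    def admit_sma(queue_len, total_used, buffer_size, min_alloc):
        # SMA: min_alloc spaces per destination are always available.
        return queue_len < min_alloc or total_used < buffer_size

    def admit_smqma(queue_len, total_used, buffer_size, max_queue, min_alloc):
        # SMQMA: SMA and SMXQ combined; subtracting the reserved region
        # from the shared pool is elided here for brevity.
        return (queue_len < min_alloc or
                admit_smxq(queue_len, total_used, buffer_size, max_queue))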


The majority of the analyses in the literature involve connectionless networks, and remain within packet-level performance issues. Later, most of the work done in the area of buffer analysis was done for high speed switching in ATM, where the usage of buffers in the switching elements was crucial for high throughput. Additional buffers at the receiver end were used essentially to improve traffic management in ATM networks. The central issue here is buffer allocation and efficient utilization for obtaining high throughput. This is our area of study.

2.2 Comparative Summary

The basic TAMA protocol has just one buffer at the receiver end. For this reason, some senders transmitting packets to the same destination at the same time are sent a FAIL (or resource busy) signal, thus triggering a retransmission procedure. Retransmissions bring down the data throughput and do not guarantee timely delivery of multimedia traffic.

This thesis proposes to solve some of these problems by changing the existing TAMA protocol. Doing some buffer management at the receiver end of the TAMA protocol brings about these changes: introducing multiple buffers at the receiver end, and introducing preemption in the buffers. We expect to reduce the number of FAILs issued by increasing the number of reception buffers. However, the number of reassembly buffers


should be small to minimize the buffer management problems and chip cost.

All of the work presented so far is based on FIFO queues at the output ports or destinations. No one approach is good for all traffic patterns. This thesis will show that development of a buffer management policy that works together with a desired service discipline results in more efficient allocation of the two critical network resources, buffer space and bandwidth. To our knowledge, there is also no prior work on buffer management in power lines.

The situation in the TAMA protocol is a bit different from the above-discussed cases because of the reassembly process considered here. The receiver end of this protocol receives a packet as a series of segments (because of the segmentation process at the transmitter end). As soon as the destination receives the first segment of any packet, it assigns a buffer, if one is available. The rest of the segments belonging to the same packet are assigned the same buffer in order for the reassembly to be done. Only after the last segment of the packet is received is it possible to free that buffer or accept packets from any other source.

This thesis considers two buffer allocation schemes, allocation based on source and allocation based on priority, in addition to the baseline First Come First Serve (FCFS) policy. Each of these schemes takes care that all the segments that belong to the same packet are allocated the same buffer so that they are reassembled properly. Allocation based on FCFS is considered the baseline case; here a buffer is allocated to the segment that first arrives at the receiver. Source-based allocation allocates a buffer to a segment depending on its source address, which means that for achieving maximum throughput the number of reassembly buffers at the receiver end should be equal to the total number of senders in the network. Similarly, in the case of priority-based allocation, maximum throughput should be achieved when the number of buffers per destination is equal to the number of priorities.

An attempt is made in this thesis to find the optimal number of buffers required to maximize the throughput by keeping the number of FAILs to a minimum. It also examines the effect of preemption on throughput.
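To make the three schemes concrete, the following is a minimal Python sketch of a receiver-side reassembly pool. It is our illustration, not the simulator code described in Chapter 4; in particular, keying a buffer by source or priority alone simplifies the per-packet state a real receiver would track.

    class ReassemblyBuffers:
        # Receiver-side reassembly pool; illustrative sketch only.
        def __init__(self, num_buffers, scheme="fcfs"):
            self.num_buffers = num_buffers
            self.scheme = scheme          # "fcfs", "source", or "priority"
            self.active = {}              # buffer key -> segments so far

        def _key(self, seg):
            # All segments of one packet map to the same key, so they are
            # reassembled in the same buffer once the first is admitted.
            if self.scheme == "source":
                return seg.source                  # one buffer per sender
            if self.scheme == "priority":
                return seg.priority                # one buffer per priority
            return (seg.source, seg.packet_id)     # FCFS baseline

        def receive(self, seg):
            key = self._key(seg)
            if key in self.active:                 # continuation segment
                self.active[key].append(seg)
            elif len(self.active) < self.num_buffers:
                self.active[key] = [seg]           # first segment admitted
            else:
                # A preemptive variant would evict a partially received
                # lower-priority packet here instead of failing.
                return "FAIL"                      # resource busy
            if seg.is_last:
                del self.active[key]               # reassembly complete
            return "ACK"

Under the source scheme, FAILs should vanish once num_buffers equals the number of senders; under the priority scheme, once it equals the number of priority levels. This is the behavior examined in Chapter 5.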


2.3 Chapter Summary

This chapter discussed in detail the work done in power line communications, including some protocols developed for low bandwidth digital devices, like X-10, CEBus and LonWorks. These protocols are essentially used for home automation. Research done in the area of buffer management was also discussed in detail, and various buffering schemes were summarized. A much more detailed description of the protocol is given in the next chapter.


CHAPTER 3
OFDM AND TAMA PROTOCOL

In this chapter we will discuss in detail the modulation used by the TAMA PHY, OFDM (orthogonal frequency division multiplexing), and the protocol itself. TAMA stands for Tone Allocated Multiple Access, a name retained for historical reasons.

3.1 OFDM Modulation

The choice of modulation depends on the nature of the physical medium on which it has to operate. Any modulation scheme selected for use on a power line should be able to do the following [15]:

1. Overcome non-linear channel characteristics.
2. Overcome multi-path spread.
3. Adjust dynamically.
4. Mask certain frequencies.

Power lines generally have non-linear characteristics. This makes equalization very complex and expensive for data rates above 10 Mbps with single carrier modulation.

Impedance mismatches on power lines result in echo signals causing delay spread on the order of 1 ms. The modulation technique used for power lines should have the inherent ability to overcome this multi-path effect.

Power line channel characteristics change dynamically as the power supply varies. The modulation technique for use on power lines should have the ability to track such changes without involving large overhead or complexity.


Power line communications equipment uses an unlicensed frequency band. However, it is likely that in the near future various regulatory rules might be developed for these frequency bands too, so we should be able to mask certain frequency bands.

A modulation scheme with all of the above desirable properties is Orthogonal Frequency Division Multiplexing (OFDM). OFDM is generally viewed as a collection of transmission techniques. OFDM is currently used in the European Digital Audio Broadcast (DAB) standards. In addition, several DAB systems proposed for North America are based on OFDM [16].

3.1.1 Theory of Operation

OFDM divides the high-speed data stream to be transmitted into multiple parallel bit streams, each of which has a relatively low bit rate. Each bit stream then modulates one of a series of closely spaced carriers. To obtain high spectral efficiency, the frequency responses of the sub-carriers are overlapping and orthogonal, hence the name OFDM. The practical consequence of orthogonality is that, if we perform a Fast Fourier Transform (FFT) of the received waveform over a time span equal to the bit period on an individual carrier, the value of each point in the FFT output is a function only of the bit (or bits) that modulated the corresponding carrier, and is not impacted by the data modulating any other carrier. Each narrowband sub-carrier can be modulated using various modulation formats like BPSK, QPSK and QAM.

When the carrier spacing is low enough for the channel response to be relatively constant across the band occupied by the carrier, channel equalization becomes easy. The need for equalization is completely eliminated by using differential modulation. Differential modulation improves performance in an environment where rapid changes in


phase are possible [4]. Implemented in the frequency domain, equalization can be achieved by a simple weighting of the symbol recovered from each carrier by a complex valued constant. A schematic block diagram is shown in Figure 3.1.

Figure 3.1. Block diagram of an OFDM system (MUX, IFFT, cyclic prefix insertion, channel, cyclic prefix removal, FFT, DEMUX) [15]

OFDM modulation is generated using a Fast Fourier Transform (FFT) process as mentioned above. M bits of data are encoded in the frequency domain onto N sub-carriers (M = N x B, where B is the number of bits per modulation symbol, e.g., B = 2 for QPSK). An inverse FFT (IFFT) is performed on the set of frequency carriers, producing a single time domain OFDM "symbol" for transmission over a communication channel. The length of time for the OFDM symbol is equal to the reciprocal of the sub-carrier spacing and is generally long compared to the bit period of the original data stream. Simply copying the last part of the time domain waveform and pre-pending it at the start of the waveform inserts a cyclic prefix. The reasons for the use of the cyclic prefix are twofold: it makes the Inter Carrier Interference (ICI) zero even in the presence of time dispersion, by maintaining orthogonality, and it acts like a guard interval removing Inter Symbol Interference (ISI).

OFDM signals are demodulated by removing the cyclic prefix from the time domain signal and then performing an FFT on each symbol to convert it back to the frequency domain. Data is decoded by examining the phase and amplitude of the sub-carriers.
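This modulate/demodulate cycle can be illustrated in a few lines of Python with numpy. The carrier count and prefix length below are illustrative assumptions, not TAMA's actual parameters.

    import numpy as np

    N_CARRIERS = 64        # illustrative carrier count, not TAMA's
    CP_LEN = 16            # cyclic prefix length in samples

    def ofdm_modulate(symbols):
        # The IFFT maps one frequency-domain symbol per carrier to a time
        # domain OFDM symbol; the last CP_LEN samples are copied to the
        # front as the cyclic prefix.
        time_domain = np.fft.ifft(symbols, n=N_CARRIERS)
        return np.concatenate([time_domain[-CP_LEN:], time_domain])

    def ofdm_demodulate(samples):
        # Strip the cyclic prefix and FFT back to the frequency domain.
        return np.fft.fft(samples[CP_LEN:], n=N_CARRIERS)

    # QPSK: B = 2 bits per carrier, so M = N x B bits per OFDM symbol.
    bits = np.random.randint(0, 2, size=2 * N_CARRIERS)
    qpsk = ((1 - 2.0 * bits[0::2]) + 1j * (1 - 2.0 * bits[1::2])) / np.sqrt(2)
    recovered = ofdm_demodulate(ofdm_modulate(qpsk))
    assert np.allclose(recovered, qpsk)   # ideal channel: exact recovery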


3.1.2 Advantages of OFDM

OFDM is a modulation scheme that has all the desirable properties. Some advantages of OFDM include that it

1. is very good at mitigating the effects of time dispersion;
2. is very good at mitigating the effects of in-band narrowband interference;
3. has high bandwidth efficiency;
4. is scalable to high data rates;
5. is flexible and can be made adaptive (different modulation schemes for sub-carriers, bit loading, and adaptable bandwidth/data rates are possible);
6. has excellent ICI performance;
7. does not require channel equalization;
8. does not require phase lock of the local oscillators.

In spite of all these advantages, and in spite of OFDM being able to eliminate ISI and ICI, there remains the problem of fading, caused by multi-path reflection. Fading occurs when the reflected signal arrives such that it attenuates or even cancels the original signal. This usually occurs only at certain frequencies. It can be overcome by using interleaving and error correction coding techniques. For a complete discussion of OFDM, refer to OFDM Wireless Multimedia Communications [17].

3.2 Protocol Overview

High bandwidth digital devices operating on power lines share a common medium. Efficient use of this medium requires both a robust physical layer (PHY) and an efficient medium access control (MAC) protocol. Also, the choice of a MAC protocol is very much dependent on the physical layer. The MAC controls the sharing of the


communication medium, while the PHY specifies the modulation, coding, and basic packet formats.

The TAMA PHY uses orthogonal frequency division multiplexing (OFDM) as the basic transmission technique. Many different MAC protocols have been used in various LANs. However, the choice of MAC protocol is dependent on how the PHY layer works. Consider the IEEE 802.3 Ethernet MAC protocol, which uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD). It cannot be used on power lines because the large dynamic range of signals and noise makes collision detection highly unreliable. A collision on power lines can be inferred only if the sender fails to receive an acknowledgment. High attenuation on the power line medium could also lead to a problem of hidden nodes. This problem of hidden nodes is dealt with in the IEEE 802.11 MAC protocol for wireless LANs by using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Even though this protocol gives good throughput, it suffers from large delays.

3.2.1 PHY Overview

The need for equalization in TAMA is completely eliminated by using differential quadrature phase shift keying (DQPSK) modulation, where the data is encoded as the difference in phase between the present and previous symbol in time on the same sub-carrier (see Figure 3.2). Differential modulation improves performance in environments where rapid changes in phase are possible.
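The principle can be sketched in a few lines of numpy. This is an illustration of differential encoding on one sub-carrier, not Intellon's implementation.

    import numpy as np

    def dqpsk_encode(phase_steps, initial=1 + 0j):
        # Each transmitted symbol is the previous one rotated by the data
        # phase, so the receiver needs no absolute phase reference.
        out = [initial]
        for dphi in phase_steps:
            out.append(out[-1] * np.exp(1j * dphi))
        return np.array(out)

    def dqpsk_decode(rx):
        # Recover each phase step by comparing a symbol to its predecessor.
        return np.angle(rx[1:] / rx[:-1])

    # Two bits select one of four phase steps per symbol interval.
    steps = np.array([0, np.pi / 2, np.pi, 3 * np.pi / 2])
    decoded = dqpsk_decode(dqpsk_encode(steps))
    assert np.allclose(np.exp(1j * decoded), np.exp(1j * steps))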


Figure 3.2. Differential phase encoding across symbols (phase differences dP1 ... dPm between corresponding carriers of OFDM symbol n and OFDM symbol n+1)

The TAMA PHY occupies the band from about 4.5 to 21 MHz. The PHY includes reduced transmitter power spectral density in the amateur radio bands to minimize the risk of radiated energy from the power lines interfering with these systems. The raw bit rate using DQPSK modulation with all carriers active is 20 Mbps. The bit rate delivered to the MAC by the PHY layer is about 14 Mbps.

The PHY packet structure consists of a preamble sequence followed by a TPC encoded frame control field. The preamble sequence is chosen to provide good correlation properties so that each receiver can reliably detect the delimiter, even with substantial interference and a lack of knowledge of the transfer function that exists between the receiver and the transmitter. The frame control contains MAC layer management information (for example, packet lengths and response status). All three delimiter types have the same structure, but the data carried in the delimiter varies depending on the delimiter function.


Unlike the delimiters, the payload portion of the packet is intended only for the destination receiver. Payload data are carried only on a set of carriers that have been previously agreed upon by the transmitter and intended receiver during a channel adaptation procedure (whence the name Tone Allocated).

Since only carriers in the "good" part of the channel transfer function are used, it is not necessary to use such heavy error correcting coding as is required for transmissions intended for all receivers. This combination of channel adaptation and lightening of the coding for unicast payloads allows TAMA to achieve high data rates over power lines. The adaptation has three degrees of freedom:

1. de-selection of carriers at badly impaired frequencies;
2. selection of modulation on individual carriers (DBPSK or DQPSK);
3. selection of convolutional code rate (1/2 or 3/4).

In addition to these options, the payload can be sent using ROBO mode: a highly robust mode that uses all carriers with DBPSK modulation and executes heavy error correcting code with bit repetition and interleaving on each of them. ROBO mode does not use carrier de-selection and thus can generally be received by any receiver. The mode is used for initial communication between devices that have not performed channel adaptation, for multicast transmission, or for unicast transmission in cases where the channel is so poor that ROBO mode provides greater throughput than de-selection of carriers with lighter coding.
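The first two degrees of freedom, plus the ROBO fallback, can be summarized in a sketch. The SNR thresholds and the fallback test below are invented for illustration (the actual TAMA adaptation rules are not given in this form, and code rate selection is omitted).

    def choose_tone_map(carrier_snr_db):
        # Per-carrier choice among de-selection, DBPSK and DQPSK,
        # mirroring the degrees of freedom listed above. The thresholds
        # are illustrative assumptions.
        tone_map = []
        for snr in carrier_snr_db:
            if snr < 3.0:
                tone_map.append(None)       # de-select impaired carrier
            elif snr < 9.0:
                tone_map.append("DBPSK")    # 1 bit per carrier
            else:
                tone_map.append("DQPSK")    # 2 bits per carrier
        usable = sum(1 for m in tone_map if m is not None)
        # Fall back to ROBO when so few carriers survive that the heavily
        # coded all-carrier mode would carry more data (schematic test).
        if usable < 0.25 * len(tone_map):
            return "ROBO"
        return tone_map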


With relatively short packets, the overhead required for channel assessment and estimation of gain and of carrier phase creates a capacity penalty that more than offsets any potential gain from the modulation efficiency.

Formed from a series of OFDM symbols, the TAMA data-bearing packet consists of a start-of-frame delimiter, a payload, and an end-of-frame delimiter (see Figure 3.3). A PPDU is a collection of OFDM symbols which contains SYNC, Header and Data fields. The SYNC field is used to indicate the start of the packet; it basically contains a known group of OFDM symbols. The Header field contains some relevant physical layer information, for example the PPDU type, the modulation scheme used, etc. The Data field contains a part of a TAMA segment. We also assume that the data field is either a 20 or a 40-symbol packet.

Figure 3.3. TAMA protocol transmission format (start-of-frame delimiter: preamble plus 25-bit frame control in 4 OFDM symbols, indicating start of frame, contention control, frame length and tone map index; payload: 17-byte frame header, frame body with pad, and 2-byte frame check sequence, carried in 20-160 OFDM symbols with adapted modulation and tones; end-of-frame delimiter: preamble plus frame control indicating end of frame and contention control; response delimiter on all tones: ACK for a good packet, NACK when errors are detected, FAIL when the receiver is busy; followed by the priority resolution slots PRS0 and PRS1 and the contention resolution window)


The acknowledgment contains only the SYNC and Header fields. Each PPDU is followed by an acknowledgment. For unicast transmissions, the destination station responds by transmitting a response delimiter indicating the status of the reception (ACK, NACK, or FAIL). Reception of an ACK, or positive acknowledgement, is considered a success, and the next segment in the queue is transmitted. A NACK indicates a negative acknowledgement, and the same segment is retransmitted after a back-off procedure; this process of retransmission is carried on until the retransmission limit, i.e., 16, is reached, after which the packet is dropped and the transmission is considered a failure. A FAIL, on the other hand, is a resource busy signal, which indicates that the receiver has all its resources busy. In this case the sender has to wait for 20 milliseconds and then undergo the back-off procedure before retransmission.
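The sender's handling of the three responses can be sketched as follows. The channel object and the slot timing are assumptions made for illustration; only the 16-retransmission limit and the 20 ms wait after a FAIL come from the text above.

    import random
    import time

    RETRY_LIMIT = 16         # retransmission limit from the text above
    FAIL_WAIT_S = 0.020      # 20 ms wait after a FAIL (resource busy)

    def send_segment(segment, channel):
        # channel.transmit is assumed to return "ACK", "NACK", "FAIL",
        # or None when no response arrives (treated as a collision).
        for attempt in range(RETRY_LIMIT):
            response = channel.transmit(segment)
            if response == "ACK":
                return True              # success: send the next segment
            if response == "FAIL":
                time.sleep(FAIL_WAIT_S)  # receiver busy: mandatory wait
            backoff(attempt)             # NACK or no response: back off
        return False                     # give up; the segment is dropped

    def backoff(attempt):
        # Simplified random backoff; the slot time is illustrative.
        slots = random.randrange(2 ** min(attempt + 2, 8))
        time.sleep(slots * 35e-6)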


3.2.2 MAC Overview

The MAC uses a virtual carrier sense (VCS) mechanism and contention resolution to minimize the number of collisions. Upon receipt of a preamble, the receiver attempts to recover the frame control. The frame control indicates whether the delimiter is a start-of-frame, end-of-frame, or response delimiter. Start-of-frame delimiters specify the duration of the payload to follow, while the other delimiters implicitly define where the end of transmission lies. Thus, if a receiver can decode the frame control in the delimiter, it can determine the duration for which the channel will be occupied by this transmission, and it sets its VCS until this time ends.

If it cannot decode the frame control, the receiver must assume that a maximum-length packet is being transmitted and must set the VCS accordingly. In this case it may subsequently receive an end-of-frame delimiter and thus be able to correct its VCS.

The destination always acknowledges unicast packets at the MAC layer by transmitting the response delimiter. If the source fails to receive an acknowledgment, it assumes that a collision has caused the failure. The destination may also choose to signal FAIL if it has insufficient resources to process the frame, or it can signal NACK to indicate that the packet was received with errors that could not be corrected by the FEC.

3.2.2.1 Channel access mechanism

Medium sharing in the TAMA protocol is accomplished by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) mechanism, with priorities and a random back-off time following busy conditions on the channel. The TAMA protocol allows prioritized channel access with low probability of collision and minimum throughput. Each node that has one or more TAMA segments to send will contend for the channel if the channel is busy. Note that the cost of collisions is very high. The contention resolution protocol includes a random back-off algorithm to disperse the transmission times of frames queued (or being retransmitted due to collision) while the channel has been busy, and it also provides a way to ensure that clients obtain access to the channel in the order of their priorities.

3.2.2.1.1 Basic access procedure

If the channel has been idle for X msec after the last transmission, where X is the channel access period, data is sent directly without participating in any kind of collision


resolution. The channel access mechanism used by TAMA is shown in Figure 3.5. If the medium has been busy, then a two-step process is followed for channel access.

Figure 3.5. Basic access procedure (end of the last transmission, priority resolution slots PRS0 and PRS1, then contention resolution slots CRS0 through CRSk, then the new transmission)

The first step involves signaling the intention to contend at a particular priority. After this step, a contending node will defer if it senses a higher priority node transmitting. The second step involves the actual process of contention. A priority resolution symbol is transmitted in the priority resolution slots. Priority level encoding in bits is shown in Figure 3.4.

Figure 3.4. Priority level encoding in priority bits (11 = highest priority 3, 10 = priority 2, 01 = priority 1, 00 = lowest priority 0, i.e., best effort)

When one node completes a transmission, other nodes with packets queued to transmit signal their priority in the priority resolution interval (indicated by PRS0 and PRS1 in Figure 3.5).
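Because the PRS signals use on/off keying (described below), the medium carries the logical OR of everything transmitted, so every station observes the bit pattern of the highest queued priority. The following Python sketch of this rule is ours, with invented names.

    def wins_priority_resolution(own_priority, queued_priorities):
        # Priorities use the 2-bit encoding of Figure 3.4 (3 = highest,
        # 0 = best effort). With on/off keying, a slot reads as busy if
        # any station signals in it.
        prs0 = any(p & 0b10 for p in queued_priorities)
        # Only stations not already beaten in PRS0 signal in PRS1.
        survivors = [p for p in queued_priorities
                     if (p & 0b10) == (0b10 if prs0 else 0)]
        prs1 = any(p & 0b01 for p in survivors)
        winning = (0b10 if prs0 else 0) | (0b01 if prs1 else 0)
        return own_priority == winning   # True: contend in the CRS window

    # With priorities [1, 3, 2] queued, PRS0 and PRS1 both read busy,
    # so only the priority 3 station proceeds to the contention window.
    assert wins_priority_resolution(3, [1, 3, 2])
    assert not wins_priority_resolution(2, [1, 3, 2])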


The signals for this purpose use on/off keying and are designed so that the priority of the highest priority user can be easily extracted, even when multiple users signal different priorities at the same time. During the priority resolution period, the highest priority of the data waiting to be sent is identified, and only the stations having data of this priority contend. Each of these stations generates a random backoff time according to the value of its local backoff timer. The backoff procedure is also invoked when the transmitter retransmits due to the lack of an ACK.

A station will not transmit in the remaining PRS slots or in the contention window if a higher priority PRS symbol is detected. The stations that had indicated their intention to contend in the PRS and were not preempted by any higher priority then compete for access in the contention window according to the backoff procedure. Also, PRS symbols will not be transmitted if the end-of-frame or response delimiter has its contention bit set and the priority to be signaled is equal to or less than that of the preceding frame.

3.2.2.1.2 Slot choices

Nodes with queued frames having priority equal to the highest priority signaled choose a slot in a contention resolution window in which they will initiate transmission, if no other node begins transmission in an earlier slot. Each node chooses its slot at random over an interval that grows with increasing numbers of unsuccessful attempts to access the channel. If a node was preempted in a previous contention resolution window, it continues counting slots from where it left off rather than choosing a new random value. This approach improves the fairness of the access scheme.

Collisions can occur if a node wishing to transmit fails to recognize a preamble from another node, or if the earliest chosen slot in the contention resolution window is


selected by more than one node. The preamble design is robust enough to ensure that the missed preamble rate is so low that this source of collisions has only a minor impact, leaving the latter cause to produce the majority of collisions.

3.2.2.1.3 Channel adaptation

Channel adaptation occurs when clients first join a logical network, and occasionally thereafter, based either on a timeout or on a detected variation in the channel transfer function (which might be either an improving or a degrading condition, evidenced by a reduction or an increase in errors or signal strength). Any node can initiate a channel adaptation session with any other node in its logical network. The adaptation is a bidirectional process that causes either node to specify to the other the tone map, i.e., the set of tones, modulation, and FEC coding to use in subsequent payload transmissions.

3.2.2.2 Segmentation and reassembly

Segmentation and reassembly is provided to improve fairness and reliability, and to reduce latency. The MAC also includes features that allow the transmission of multiple segments with minimal delay in cases where there are no higher priority frames queued with other nodes, and it provides a capability for contentionless access in which access to the channel may be passed from node to node. Under this protocol, each packet arriving from the higher layer is divided into multiple segments for transmission on the physical layer. Segment size depends on the data rate between the transmitter and the receiver. Also, there is a maximum channel occupancy time for each node.


Segmentation has two advantages:

1. Segmentation improves the chances of frame delivery over harsh channels because it reduces the cost of a collision or of errors, as an acknowledgment is done for each segment.

2. It contributes to better latency characteristics for all stations because it puts a limit on the maximum amount of time a node can keep the channel busy. This is especially important for meeting real-time quality of service requirements.

Each TAMA segment has a header field which contains information like the segment count, the number of segments, etc. Each TAMA segment is transmitted over the physical medium by one or more physical protocol data units (PPDUs). This is necessary since the TAMA MAC PDU must be able to encapsulate an entire IEEE 802.3 PDU (1500 bytes), but a TAMA PDU can hold a payload of at most 40 OFDM symbols.

Figure 3.6. TAMA segmentation process (an Ethernet frame, consisting of DA, SA, Ethertype and data, is split at the maximum segment length into segments, each carried with its own frame control, segment control and FCS)
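A minimal sketch of the transmitter-side segmentation follows. In the real protocol the segment size follows from the tone map (the negotiated data rate) and the 40-symbol PPDU limit; here it is simply a parameter, and the field names are our assumptions.

    def segment_frame(payload, seg_bytes):
        # Split one Ethernet frame into TAMA-style segments; seg_bytes
        # stands in for the tone-map-dependent segment capacity.
        n_segments = (len(payload) + seg_bytes - 1) // seg_bytes
        for i in range(n_segments):
            yield {
                "segment_count": i,            # position within the frame
                "num_segments": n_segments,    # carried in segment control
                "last": i == n_segments - 1,   # receiver frees buffer here
                "data": payload[i * seg_bytes:(i + 1) * seg_bytes],
            }

    segments = list(segment_frame(b"\x00" * 1500, seg_bytes=400))
    assert len(segments) == 4 and segments[-1]["last"]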


A common misconception is that contention-based access schemes have potentially unbounded latency. In the TAMA protocol, latency is bounded by the overhead of discarding packets that cannot be delivered in the time required by the application. It has been shown [18] that the percentage of TAMA packets discarded through this approach is low enough to be tolerated by low latency applications such as Voice over IP (VoIP) or streaming media. The combination of this feature and priority classes makes TAMA well suited to applications requiring QoS.

3.2.2.3 Privacy

Privacy is provided through the use of the 56-bit data encryption standard (DES) applied at the MAC layer. All nodes on a given logical network share a common encryption key. The key management system includes features that enable the distribution of keys to nodes that lack an I/O capability.

3.3 Summary

Efficient use of a medium requires both a robust PHY and an efficient MAC. The choice of a particular modulation depends on the physical medium on which it has to operate. OFDM modulation is a good choice for power line data transmission due to the various advantages it offers over other schemes, such as high bandwidth efficiency, scalability to high data rates, and flexibility. Moreover, it eliminates the problems of Inter Carrier Interference and Inter Symbol Interference.

The TAMA PHY uses OFDM as the basic transmission technique. Formed from a series of OFDM symbols, the TAMA data-bearing packet consists of a start-of-frame delimiter, a payload, and an end-of-frame delimiter. The start-of-frame delimiter consists of a SYNC; the SYNC field is used to indicate the start of the packet and basically contains a known group


of OFDM symbols. The header field contains relevant physical layer information, for example, the PPDU type and the modulation scheme used. The data field contains a part of a TAMA segment. Different modes of encoding the payload in the PHY were also discussed.

The channel access mechanism, the TAMA MAC, uses CSMA/CA. It uses a VCS mechanism for carrier sensing, followed by a contention resolution process to minimize the number of collisions, including two PRS slots that permit four priority levels to be encoded. The TAMA MAC protocol allows prioritized channel access with a low probability of collision and high throughput. Each node that has one or more TAMA segments to send contends for the channel once the busy channel becomes free. The contention resolution protocol includes a random back-off algorithm to disperse the transmission times of frames queued (or being retransmitted due to collision) while the channel has been busy, and it also provides a way to ensure that clients obtain access to the channel in the order of their priorities.

The TAMA MAC also implements a segmentation and reassembly process, used to improve fairness and reliability and to reduce the latency of MAC transmissions. Each packet arriving from the higher layer is divided into multiple segments for transmission on the physical layer. Segment size depends on the data rate between the transmitter and the receiver. There is also a maximum channel occupancy time for each node.

In the next chapter we discuss the simulation design, the simulator, and the various traffic conditions and network parameters that are considered.


CHAPTER 4
SIMULATION DESIGN AND DESCRIPTION

This chapter explains the simulation design and implementation of the basic TAMA protocol, the multiple buffer protocol, and the preemption protocol. The designs include the basic simulator design and the design of the receiver of a node with multiple buffers and its variations: buffer allocation based on source or priority, introducing preemption, and reserving a buffer for high priority traffic. A detailed description of the parameters used, their values, and the assumptions made during the simulation is given here.

4.1 Design

Simulation is one of the ways in which a system can be modeled. It can be more accurate and typically requires fewer assumptions than analytical modeling. We use an event-based simulation technique for modeling the TAMA protocol and its variants.

4.1.1 Basic Simulator Design

In an event-based simulation, each execution of the main loop processes a single event. The simulation clock simply advances to the time of the event after the last event. Events are processed serially, even if they have the same event time.


The pseudocode for the main loop is presented below:

    Initialize all nodes in the network;
    Populate event queue with initial events;
    current time = 0;
    while (current time < simulation time) {
        remove next event from the head of the event queue;
        handle the event removed from the queue;
        insert event(s) resulting from the current event into the event queue;
        update statistics;
    }
    Compute final statistics;

Simulator operation overview: Any event can cause the simulation to change its state and/or cause new events to occur. Every node maintains an event queue that determines which event is to be scheduled next. Once an event is scheduled, the remaining events are reordered. The general structure of a node is shown in Figure 4.1.

Higher layer traffic is modeled by different traffic sources generating packets with selectable packet arrival times and packet sizes. Generated packets are stored in the prioritized packet queue. Packets in the queue are transmitted to their respective destinations by the MAC process. A successful or unsuccessful transmission results in


the MAC process removing one more packet from the queue. Packet transmission and reception between nodes is achieved by a series of MAC and PHY interactions at each node. A separate event queue is maintained at the MAC and the PHY to store the events generated. Each node maintains a list of transmission events in a buffer.

Figure 4.1. Node structure: traffic sources 1 through n feed the device driver queue, which feeds the MAC and PHY onto the power line medium.


• Device Driver Queue: The DDQ is a prioritized queue and has the capacity to store a certain number of packets. Packets generated by the higher layers are stored in this queue.

• MAC: The MAC maintains an event queue and also has a single buffer to store packets. The MAC process extracts one packet at a time from the device driver queue for transmission over the medium.

• PHY: The PHY also maintains an event queue of its own and has a single buffer.

A simulation mimics a stochastic process in time, so the outputs of the simulation are themselves random variables [19]. This is a disadvantage if we want to make a statement about the behavior of any case based on just a single observation of the simulation. Thus we need to make repeated runs with different starting values for the variables, including the seed for the pseudorandom number generator (rand() and srand() are used in our simulation of this protocol). The protocol is implemented in C.
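As a minimal C sketch of this event-driven main loop, assuming simplified stand-ins for the simulator's own event and queue structures (the type and function names here are illustrative, not the simulator's actual code):

    #include <stdio.h>
    #include <stdlib.h>

    /* Simplified stand-in for the simulator's event records. */
    typedef struct event {
        double        time;   /* simulation time at which the event fires */
        int           type;   /* e.g., packet arrival, MAC tx, PHY rx     */
        struct event *next;   /* singly linked, kept sorted by time       */
    } event_t;

    static event_t *queue_head = NULL;

    /* Insert an event in time order. */
    void enqueue(event_t *e)
    {
        event_t **p = &queue_head;
        while (*p && (*p)->time <= e->time)
            p = &(*p)->next;
        e->next = *p;
        *p = e;
    }

    int main(void)
    {
        double now = 0.0;
        const double sim_time = 75.0;   /* one run, as in Section 4.2.2 */

        srand(12345);                   /* a different seed for each run */
        /* ... populate the queue with initial arrival events here ...  */

        while (queue_head && queue_head->time < sim_time) {
            event_t *e = queue_head;    /* remove the event at the head  */
            queue_head = e->next;
            now = e->time;              /* clock jumps to the event time */
            /* handle e here: handling may enqueue() follow-on events
             * and update statistics, as in the pseudocode above */
            free(e);
        }
        printf("run ended at t = %f\n", now);  /* final statistics here */
        return 0;
    }

Running this loop several times, each run seeded differently through srand(), yields the repeated observations called for above.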


4.1.2 Multiple Buffer Protocol

In this section we describe the multiple buffer protocol proposed to improve the performance of the basic protocol. The operation remains the same as before, except that now the receiver can receive from more than one source at the same time. The packet transmission and retransmission algorithm also remains the same. The maximum sustainable load is now a function of the number of reception (or destination) buffers at each node; it increases as the number of buffers increases.

Initially a node can accept segments from any node. When a valid header segment arrives, the destination node accepts the segment and assigns a buffer. This buffer allocation can be done in many ways. In this analysis we consider one baseline case, buffer allocation based on First Come First Serve (FCFS), and two dedicated buffer allocation schemes, allocation based on source address and allocation based on priority.

4.1.2.1 Based on FCFS

FCFS-based buffer allocation is the simplest. An available buffer is allocated based on the arrival time of a packet. When a valid header segment arrives, the receiver checks whether a buffer is already allocated for this packet. If not, it assigns a buffer, if one is available. Otherwise, the receiver sends a FAIL.

4.1.2.2 Based on source

Here, buffers are allocated based on the source address. When a valid header segment arrives, a buffer is assigned unless the source already has a buffer assigned to it or all buffers are in use. Subsequent segments from a source already assigned a buffer are inserted into that buffer for reassembly. However, no source is assigned more than one buffer. This way a single receiver can receive from more than one source at the same time and still not send a resource-busy signal (FAIL).


4.1.2.3 Based on priority

In priority-based buffer allocation, a buffer is allocated based on the priority of the arriving packet. When a valid header segment of a given priority arrives, a FAIL is returned if that particular priority buffer is in use; otherwise, if a buffer is available, the priority buffer is assigned to this packet and the packet is accepted. Each receiver may simultaneously receive one packet at each priority level.

In both cases buffer allocation is done only if the receiver has a free buffer; otherwise a busy signal is sent. There may be fewer buffers than possible sources or priority levels.

A buffer is not freed until all the segments of that particular packet have been received successfully. Whenever a segment is retransmitted too many times, or a segment is lost, or the source node itself drops a segment for whatever reason, the receiver buffer would otherwise wait indefinitely for the next segment. Hence the receiver runs a timer on each buffer; if the timer expires for a buffer, the receiver automatically frees that buffer.

4.1.2.4 Preemptive approach

In the preemptive version of each of the disciplines above, preemption based on priority is possible when allocating buffers. If all of the buffers are occupied and the first segment of a new packet arrives with higher priority, the receiver preempts the lower priority packet and allocates the buffer to the new packet. Otherwise it sends a FAIL.

Reserve a high priority buffer. Here, every destination has one of its buffers reserved for high priority packets, i.e., there is a dedicated buffer at every destination for a high priority packet. There is not much change in the design other than reserving a buffer; the packet transmission process and the responses sent remain the same.
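The following C sketch illustrates the allocation decisions described in this section for the source-based and priority-based disciplines, together with a preemptive variant of the source-based scheme; the data structures and function names are assumptions made for this sketch, not the simulator's actual code.

    #define NUM_BUFS   2    /* reassembly buffers per receiver (varied 1..N-1) */
    #define NUM_PRIOS  3    /* low, medium, high, as in Section 4.2.2          */
    #define FREE      -1

    /* Hypothetical per-buffer state: just enough for the allocation decision. */
    struct reasm_buf { int src; int prio; };

    static struct reasm_buf src_bufs[NUM_BUFS];    /* source-based discipline */
    static struct reasm_buf prio_bufs[NUM_PRIOS];  /* one buffer per priority */

    void bufs_init(void)
    {
        for (int i = 0; i < NUM_BUFS; i++)  src_bufs[i].src = FREE;
        for (int i = 0; i < NUM_PRIOS; i++) prio_bufs[i].src = FREE;
    }

    /* Source-based: at most one buffer per sender.  Returns a buffer
     * index, or -1, meaning the receiver must respond with a FAIL. */
    int alloc_by_source(int src, int prio)
    {
        int free_i = -1;
        for (int i = 0; i < NUM_BUFS; i++) {
            if (src_bufs[i].src == src) return i;  /* reassembly continues */
            if (src_bufs[i].src == FREE && free_i < 0) free_i = i;
        }
        if (free_i >= 0) {                         /* new reassembly starts */
            src_bufs[free_i].src  = src;
            src_bufs[free_i].prio = prio;
        }
        return free_i;                             /* -1 -> send FAIL */
    }

    /* Priority-based: one buffer per priority level; FAIL if that level's
     * buffer is busy with a packet from a different source. */
    int alloc_by_priority(int src, int prio)
    {
        if (prio_bufs[prio].src != FREE && prio_bufs[prio].src != src)
            return -1;                             /* busy -> send FAIL */
        prio_bufs[prio].src = src;
        return prio;
    }

    /* Preemptive variant of the source-based discipline: when every
     * buffer is busy, the first segment of a higher priority packet
     * evicts the lowest priority packet, whose segments are dropped. */
    int alloc_by_source_preempt(int src, int prio)
    {
        int i = alloc_by_source(src, prio);
        if (i >= 0) return i;
        int victim = -1;
        for (int j = 0; j < NUM_BUFS; j++)
            if (src_bufs[j].prio < prio &&
                (victim < 0 || src_bufs[j].prio < src_bufs[victim].prio))
                victim = j;
        if (victim < 0) return -1;                 /* nothing lower -> FAIL */
        src_bufs[victim].src  = src;               /* evicted packet is lost */
        src_bufs[victim].prio = prio;
        return victim;
    }

The per-buffer timer that reclaims a buffer whose remaining segments never arrive, described above, is omitted from the sketch.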


4.2 Simulation Model

The model of the TAMA protocol simulates N nodes, each transmitting data according to the TAMA protocol.

4.2.1 Assumptions

Some assumptions made during this analysis are listed below.

• The network is considered to be always busy, i.e., the network is always saturated, so that the exact behavior of the network under worst-case traffic conditions can be studied.

• Noise at every node is -20.0 dB and the signal-to-noise ratio between nodes is 0. This is done so that we will not have a case where we lose a response due to noise in the channel. Moreover, this helps us get an accurate understanding of the reduction in the number of FAILs due to the introduction of multiple buffers.

The first assumption was made to get an accurate reflection of network behavior in worst-case conditions. Even though a worst-case condition of the network is not a common real-time scenario, it gives an idea of how badly a network could perform under certain conditions.

4.2.2 Simulation Configuration

The simulated configuration is specified by the parameters shown below. Some of these values are input to the simulation through a high-level trace file (supporting traffic generators).

• Number of nodes = variable; the maximum number of nodes is 20.

• Simulation duration = 75 sec (each run).


• Number of runs = 4.

• Number of traffic sources = variable; it should be less than the number of nodes in that particular scenario.

• Maximum number of buffers = 19.

• Priorities used = 1, 2, and 3, i.e., low, medium, and high priority, respectively. Traffic of priority 2 and 3 is isochronous, and traffic of priority 1 is asynchronous.

• Size of a low priority packet = 16450 bytes; this size fills eight segments. The number of segments a packet is divided into depends on the modulation used by the TAMA protocol.

• Size of a medium or high priority packet = 1500 bytes; this value depends on the type of modulation used and also the code rate. This causes the TAMA layer to send such a packet in a single segment.

• Traffic generation: low priority packets are generated exponentially (Poisson model), and medium and high priority packets are generated at periodic intervals. Exponential interarrival time = 500 µs. Isochronous periodic interval = 20000 µs.

The simulation model that was developed used all the above-specified values for traffic generation. The modulation used in this simulation was QPSK 3/4. The number of asynchronous nodes in a network was also varied and the behavior studied. However, the detailed analysis was done for the case where the number of asynchronous nodes in the network equals two.
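A minimal C sketch of the two generators follows, assuming the interarrival parameters above are expressed in the simulator's clock units; the exponential interarrival times of the Poisson model are drawn by inverse-transform sampling from a uniform variate.

    #include <math.h>
    #include <stdlib.h>

    /* Uniform variate in (0, 1]; the shift by one avoids log(0) below. */
    static double uniform01(void)
    {
        return (rand() + 1.0) / ((double)RAND_MAX + 1.0);
    }

    /* Asynchronous (low priority) traffic, Poisson model: the next
     * arrival follows after an exponentially distributed interarrival
     * time, obtained by inverse-transform sampling: X = -mean * ln(U). */
    double next_async_arrival(double now, double mean_interarrival)
    {
        return now - mean_interarrival * log(uniform01());
    }

    /* Isochronous (VoIP-like) traffic: arrivals are strictly periodic. */
    double next_isoch_arrival(double now, double period)
    {
        return now + period;
    }

Each call schedules the next packet-generation event for the corresponding traffic source in the event queue of Section 4.1.1.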


Traffic models: In order to simulate a real-time scenario, at any given time we have two asynchronous nodes for any N, where N is the total number of nodes in the network. The main point of this analysis is to see the effect of introducing multiple buffers at the receiver end on the performance and the throughput when VoIP traffic is present. Thus our scenarios are divided into three cases.

Case (i): Uniform destination. Here the traffic generated by each node is uniformly distributed over all the other nodes in the network.

Case (ii): Multimedia traffic. Here, instead of having every node generate traffic to all the other nodes in the network with equal probability, we have one isochronous node and one asynchronous node send all their packets to the same destination. This is done to simulate a real-time multimedia environment, where a person is listening to radio on the Internet and at the same time is also transferring some files.

Case (iii): Hot spots. Here a hot receiver was modeled, and different fractions of non-uniformity were considered: 50, 70, 80, and 90. These fractions decide the percentage of the total packets to be distributed non-uniformly. For example, with a fraction of 50, 50% of the total packets generated in the network are directed to a single destination.


The uniform destination case simulates a balanced traffic flow in a network, and case (ii) and case (iii) simulate a non-balanced traffic flow in a network.

4.3 Performance Measures

The simulation program measures the performance of the basic TAMA protocol, with or without multiple buffering capabilities or preemption. Performance is measured in terms of average transmission delay, number of FAILs, and throughput.

Network throughput is defined as the mean number of bits successfully transferred through the channel per unit time. This throughput considers only the bits successfully transmitted across the network; it does not include rejected or FAILed packets. Its units are megabits per second:

    Throughput = (TotalBytesTx × 8) / (SimTime × 10^6),

where SimTime is the total simulation time measured in seconds.

Transmission delay of a voice/data packet is defined as the time interval between the formation of the packet and its arrival time at the destination. The transmission delay is measured as the difference between the net delay and the queuing delay. In this simulation it is measured as the time from when the MAC gets the packet from the device driver queue until all the segments are successfully sent.

Other output parameters, such as the number of FAILs and the voice throughput (in the multimedia scenario), are analyzed for all the above-discussed cases. Some supporting results, such as the probability of collision and the mean number of contention resolution slots required by an asynchronous or an isochronous node, are also observed.
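Expressed as C helpers (a direct transcription of the definitions above; the function names are ours):

    /* Network throughput in Mbps: only successfully delivered bytes
     * count, so total_bytes_tx excludes rejected and FAILed packets. */
    double throughput_mbps(double total_bytes_tx, double sim_time_sec)
    {
        return (total_bytes_tx * 8.0) / (sim_time_sec * 1e6);
    }

    /* Transmission delay of one packet: from the instant the MAC gets
     * it from the device driver queue until all segments are sent. */
    double tx_delay(double mac_dequeue_time, double last_seg_done_time)
    {
        return last_seg_done_time - mac_dequeue_time;
    }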


4.4 Summary

Chapter 4 discussed the basic simulation design. In this event-based simulation model, higher layer traffic is modeled by different traffic sources generating packets with selectable packet arrival times and packet sizes. Generated packets are stored in the prioritized packet queue. Packets in the queue are transmitted to their respective destinations by the MAC process. A successful or unsuccessful transmission results in the MAC process removing one more packet from the queue. A separate event queue is maintained at the MAC and the PHY to store the events generated. Each node maintains a list of transmission events in a buffer.

The variations of the protocol for each of the buffer allocation schemes, based on FCFS, on source, and on priority, were described. The assumptions made and the different design strategies used to simulate a real-time case were also discussed. The design strategies include the different traffic models considered (the general traffic scenario, the multimedia scenario, and the hotspot cases), the number of isochronous and asynchronous nodes in a network, the modulation used, etc. Finally, the parameters to be considered for performance analysis, such as throughput, number of FAILs, mean contention resolution slots, and probability of collision, were defined. The next chapter gives the simulation results.


CHAPTER 5
RESULTS AND DISCUSSION

In this chapter, we analyze the results obtained from the simulation of the TAMA protocol with multiple buffers and different allocation schemes. Both balanced and non-balanced traffic flows were considered for the analysis. The general traffic scenario simulates a balanced traffic flow in a network, and non-balanced traffic flow is simulated by the multimedia scenario and the different hotspot (non-uniform traffic) cases. The simulation measures throughput, number of FAILs issued, mean contention resolution slots (mean CR slots), and probability of collision for the different cases. Results were analyzed using graphs plotted between the various network parameters discussed in Section 4.2.2.

5.1 Results

In a network of at most N nodes, the variation in throughput is observed by gradually increasing the number of buffers from 1 to N-1 for N = 2, 5, 7, 9, 11, 13, 15, and 20. This was done for every traffic case mentioned in Section 4.2.2. However, for the discussion of hotspot scenarios, the analysis was based on one case (5 nodes), for the reasons discussed in Section 5.1.3.

Increasing the number of buffers increased the maximum throughput. However, beyond a certain value, there was not much improvement in the throughput. Different sets of results were collected for all the necessary cases. The set of results considered for our discussion comes from the simulations that were run for a duration of 75


seconds. The number of asynchronous nodes in the network at a given time was also varied; in the discussion that follows, the number of asynchronous nodes in the network is equal to 2.

There was an increase of at least 10% in throughput when the number of buffers was increased from 1 to 2. For example, the throughput for a single buffer with 5 nodes, allocation based on source, and the general traffic generator is 7.44 Mbps, and that of the 2 buffer case is 8.14 Mbps. Also, the total number of FAILs in the network decreased by at least 60 percent; in the above case of 5 nodes and allocation based on source, the total number of FAILs in the network was reduced by 93%. The variation of the number of FAILs with the number of buffers is discussed in detail in Section 5.1.1.

When there are just two nodes in the network (one transmitting and the other receiving), then for all the sub-cases, i.e., allocation based on source, allocation based on priority (with or without preemption), reserving a buffer for the high priority packets, and the traffic pattern being general or multimedia, the throughput achieved was found to be at its maximum in the single buffer case and constant thereafter, as expected. This can be considered an ideal case, because with only one sender there is never any contention for reassembly buffers at the receiver.

The effect of the introduction of multiple buffers on the performance of the TAMA protocol depends on the buffer management scheme used. The following sections review the variation in throughput in each traffic scenario for all the buffer allocation schemes.

5.1.1 General Traffic Scenario

Here the destination was chosen randomly using a uniform distribution. The behavior of the network under all the buffer allocation schemes was observed.


5.1.1.1 Based on FCFS

Buffer allocation based on first come first serve (FCFS) is taken as the baseline case, and a detailed analysis is done only for this particular scenario.

Figure 5.1. Variation of throughput with the number of buffers for all the FCFS cases (asynchronous, isochronous, and total throughput, with and without preemption, for 5 to 20 nodes).


The numbers in the graphs can be verified from Tables A.1 and A.2 of the Appendix. Figure 5.1 shows how throughput varies with the number of buffers for the different cases of FCFS, with and without preemption, as labeled in the respective graphs. It can be seen from the graphs that the throughput increases with the number of buffers.

In either case, FCFS (no preemption) or FCFS with preemption, the throughput from the asynchronous nodes is highest in the 5 node case and is reduced as the number of nodes increases. This happens because as the number of nodes in the network increases, the number of isochronous nodes increases, and therefore so does the probability of collision for the asynchronous nodes. The reverse effect is seen in the throughput from the isochronous nodes: throughput from the isochronous nodes increases as the number of isochronous nodes in the network increases.

Also, considering the graphs showing the variation of isochronous throughput, we see that there is not much difference in the behavior of the isochronous nodes in the network with or without preemption in the buffers. However, there is a considerable difference in the throughput from the asynchronous nodes, depending on preemption, when the number of buffers increases from one to two, because the number of retransmissions is reduced. Overall throughput is seen to increase as the number of buffers is increased from one to two in both cases, as shown in the graphs. Also, the throughput achieved in the two buffer case is almost the same for FCFS and FCFS with preemption.

5.1.1.2 Based on Source

Table A.3 of the Appendix gives the throughput and number of FAILs for the asynchronous and isochronous nodes in the network. Table A.6 shows the mean contention resolution slots and the probability of collision, separately for the asynchronous and isochronous nodes in the network.


Figure 5.2. Variation of throughput from asynchronous nodes with the number of buffers, based on source case.

Figure 5.3. Variation of throughput from isochronous nodes with the number of buffers, based on source case.

Figure 5.4. Variation of total throughput with the number of buffers, based on source case.


The graphs in Figures 5.2 through 5.4 show the variation of throughput with the number of buffers for different values of N. The graph in Figure 5.2 shows how the throughput from the asynchronous nodes varies with the increase in the number of buffers. We see that there is an increase in throughput as the number of buffers goes from one to two, and it is almost constant from then on. In the case of five nodes there is a slight decrease in throughput, from 6.4 Mbps to 6.2 Mbps. Such unexpected variations are due to the number of collisions suffered before a successful transmission; the probability of collision, as shown in Table A.6, is slightly higher in the two buffer case than in the single buffer case.

There is a decrease in throughput, in the graph in Figure 5.2, as the number of nodes increases. This is because as the number of nodes in the network increases, our number of asynchronous nodes being constant, there is an increase in the number of isochronous nodes. As the traffic generated by the isochronous nodes is either multimedia or VoIP traffic, it has a higher priority and can therefore gain access to the channel easily (through the priority resolution process). On the other hand, asynchronous traffic, being low priority traffic, has to back off some number of times before it wins the priority resolution process and contends for the channel. So, as the number of isochronous nodes in the network increases, there is a considerable decrease in the throughput of the asynchronous nodes. In the cases of 13, 15, and 20 nodes we see that the throughput drops to 0, i.e., the asynchronous nodes did not get to transmit at all due to blocking.


The graph in Figure 5.3 shows the variation of throughput from the isochronous nodes with the number of buffers for different values of N (number of nodes). Here we see the opposite trend: as the number of nodes increases, isochronous throughput increases. It is also seen that throughput increases with an increase in the number of reassembly buffers. Here again we have the exceptional cases of 13, 15, and 20 nodes, where we do not see any difference in throughput with an increase in the number of reassembly buffers. This is because of head-on blocking, where the high priority packets block the low priority packets, so only the high priority packets gain access to the channel. Moreover, the high priority packets are just one segment long and therefore do not require any reassembly; irrespective of the number of reassembly buffers, every such segment will find a free buffer. This applies to both allocation techniques. The graph in Figure 5.4 shows the total throughput, which is the sum of the isochronous throughput and the asynchronous throughput.

5.1.1.3 Based on Priority

The graphs in Figure 5.5 are for buffer allocation based on priority, with and without preemption. The behavior of the network in either case, allocation based on source or allocation based on priority, is found to be the same; the transition in throughput for both asynchronous and isochronous nodes is essentially identical.

The variation of throughput with the number of buffers for allocation based on priority with preemption, however, is slightly different from the other schemes. There is a sudden increase in the asynchronous throughput when the number of buffers increases from one to two for N = 5, 7, 9, and 11. This is because previously the low priority packets were always preempted and all their segments were dropped (in the single


Figure 5.5. Variation of throughput with the number of buffers, based on priority, all cases (asynchronous, isochronous, and total throughput, with and without preemption).


buffer case). An increase in the number of buffers basically provides these asynchronous nodes with a reassembly buffer. When the number of isochronous nodes in the network increases further, i.e., in any case where N > 11, blocking occurs: the asynchronous nodes never get to access the channel, and the asynchronous throughput for these nodes is always 0.

5.1.1.4 Effect on the number of FAILs

A sender gets a FAIL from the receiver in two situations: first, when there are no resources available, i.e., no buffer is available for the packet that just came in; and second, when the resources are busy, i.e., when the buffer for that particular source or priority is already in use. It is seen that as the number of reassembly buffers is increased from one to two, there is a sudden fall in the number of FAILs issued. This is evident from Tables A.1, A.2, A.3, and A.4, irrespective of the allocation scheme used. Figure 5.6 shows the variation of the number of FAILs with the number of buffers.

Figure 5.6. Variation of the number of FAILs with the number of buffers, based on priority, with and without preemption.

It is also seen that the variation in the number of FAILs is small once two or more buffers are used. This effect is seen because when there is just one


reassembly buffer, a receiver can receive from only one sender; when a new node tries to send to the same receiver, it gets a FAIL (resource not available signal), unlike in the two buffer case.

It is also seen that there is not much difference in the total number of FAILs in the network between a two buffer case and any x buffer case, where x lies between 2 and N-1, when there are N nodes in the network. This is because at any given time only a certain number of packets manage to reach the same destination at the same time, because of the way the TAMA protocol operates. So the total number of FAILs in the network saturates at some point. This behavior is seen when the number of asynchronous nodes in the network is increased. However, for most of the cases we have seen here (where the number of asynchronous nodes is 2), the number of FAILs reduces to 0.

5.1.1.5 Reserve a high priority buffer

It is seen that by reserving a small buffer for VoIP traffic, optimal throughput can be achieved even in a single large (low priority) buffer case. The results obtained for a two buffer case with allocation based on priority and for the reserved high priority buffer case are exactly the same, as shown in the graph in Figure 5.7.


Figure 5.7. Comparison between a 2 buffer case and reserving a high priority buffer.

This option of reserving a high priority buffer in the TAMA protocol, along with multiple buffers, would be worth implementing if the size of the buffers is taken into consideration. If size is not a concern, there is hardly any difference, in throughput or in the total number of buffers used, between using two buffers with allocation based on priority and using a single FCFS buffer with another buffer reserved for high priority traffic.

5.1.2 Multimedia Scenario

The behavior of the network is found to be exactly the same as in our general traffic scenario. The variation of throughput with the number of buffers and nodes is also found to be the same. The only difference is in the throughput itself: the total throughput obtained in the multimedia scenario is slightly lower than the throughput obtained in the general traffic scenario, for all cases. This is because there is not much difference in the number of packets successfully getting across the network between a general traffic scenario and a multimedia scenario, even though there is a change in the traffic that is generated. This is essentially because of the way the protocol operates. The detailed variation of throughput with the number of buffers and the number of nodes is shown in Tables A.9, A.10, and A.11 of the Appendix.

5.1.3 Hot Spots

The general behavior of the network under non-uniform traffic is no different from the uniform traffic case. Here the case of 5 nodes under hotspot conditions is analyzed in detail. Simulations were run using different fractions of non-uniformity. In the first case we had 50% of the packets generated at every node directed towards the hot receiver.


The total throughput of the network was reduced by 6% compared to the uniform case, and the number of FAILs issued in the network increased by 30%. The throughput reduces further for the remaining fractions, 70% and 90%. Similar sets of results were collected when the number of asynchronous nodes in the network was varied. The differences in the results can be checked from Tables A.12, A.13, and A.14 of the Appendix.

Buffer allocation based on priority showed the same kind of variation as in the other traffic scenarios. This means that for any kind of traffic scenario, allocation based on priority achieves its optimal throughput when the number of reassembly buffers used is equal to two.

Figure 5.8. Variation of total throughput with the number of buffers when the number of asynchronous nodes in the network is 1, based on source case, for the hotspot scenario.


Figure 5.9. Variation of total throughput with the number of buffers when the number of asynchronous nodes in the network is 2, based on source case, for the hotspot scenario.

Figure 5.10. Variation of total throughput with the number of buffers when the number of asynchronous nodes in the network is 3, based on source case, for the hotspot scenario.

However, a change was observed in the trend of variation for buffer allocation based on source, as can be seen from the graphs in Figures 5.8 through 5.10. When the number of asynchronous nodes in the network was varied, the saturation point changed: for buffer allocation based on source with two or three asynchronous nodes in the network, optimal throughput is obtained when the number of reassembly buffers used is three.

It can be seen that with this kind of buffer management in the TAMA protocol, the throughput obtained in a two buffer case is at or near optimal under either uniform or non-uniform traffic conditions. This is seen across the allocation techniques.

5.2 Comparative Summary

All the allocation schemes discussed here showed the same general behavior. The throughput obtained in a two buffer case is plotted against the number of nodes for each of the allocation schemes in Figure 5.11; the throughput considered in the graph is the total throughput obtained using two reassembly buffers.


It is clear from the graph that FCFS offers the least throughput in either case, with or without preemption. This allocation technique also resulted in the maximum number of FAILs. Between allocation based on source and allocation based on priority there is hardly any difference in the general traffic scenario considered. However, when we consider unbalanced traffic, or hotspots, we see that for buffer allocation based on source, optimal throughput is reached in a three buffer case.

Figure 5.11. Variation of throughput with the number of nodes for all the allocation schemes (based on source, based on priority with and without preemption, and based on FCFS with and without preemption).

For the same hotspot scenario and for buffer allocation based on priority, optimal throughput is obtained in a two buffer case. Also, both cases show almost the same throughput. This is, however, not the case when there is only one asynchronous node in the network. When there is just one asynchronous node in the network, the variation of throughput with the number of buffers for buffer allocation based on source is the same as for buffer allocation based on priority in the hotspot scenario; that is, we achieve optimal throughput in a two buffer case. At any given time, though, it is likely that there will be more than one asynchronous node in the network.


For allocation based on priority we again have two cases, with and without preemption, showing almost the same results. The percentage difference between the cases is given in Figure 5.12, which shows the percentage change in throughput between the preemptive and non-preemptive methods of buffer allocation based on priority.

Figure 5.12. Percentage difference in throughput between preemptive and non-preemptive methods of buffer allocation based on priority, with 1 buffer and with 2 buffers.

It is clear from the graph that there is not much difference in throughput when two reassembly buffers are used by a receiver. However, throughput suffers badly in the single buffer case when there is preemption in the reassembly buffers. This is because the asynchronous nodes never get to transmit as the number of isochronous nodes in the network increases.

The selection of an allocation technique depends on the kind of traffic flowing in the network. If there is a lot of VoIP or streaming media traffic, then using buffer allocation based on priority with preemption makes a considerable difference to the overall performance of the network.


CHAPTER 6
CONCLUSIONS AND FUTURE WORK

6.1 Conclusion

The objective of this thesis was to maximize throughput by introducing multiple buffers at the receiver side for reassembly. This report describes the different schemes considered for managing buffer allocation in the TAMA protocol. The two main schemes were allocation based on source and allocation based on priority; preemption was also considered. In this thesis the effect of varying the number of buffers on the throughput was analyzed for the different allocation schemes under various traffic conditions, as the number of nodes in the network varied. The variation of the number of FAILs with the number of buffers was also studied while the number of nodes in the network varied.

The introduction of multiple buffers at the receiver side increased the overall throughput. The total throughput, when the number of reassembly buffers was increased from one to two, was found to increase by at least 11%, and the total number of FAILs in the network was found to be reduced by at least 60 percent.

Buffer allocation based on source and buffer allocation based on priority, with or without preemption, showed similar results under the general and multimedia traffic conditions. The throughput increased as the number of buffers was increased from one to two and was constant from there on. The minimal number of buffers required for the network to reach its saturation point was found to be two under the general and multimedia


scenarios for both allocation techniques. The same transition was seen even for hotspot traffic conditions when allocation was based on priority. However, there was a slight increase in the minimal number of buffers required to obtain optimal throughput under hotspot traffic conditions when buffer allocation was based on source: the optimal throughput was obtained when the number of buffers was equal to three.

Preemption in buffers gives voice stations more opportunities to transmit by preempting a low priority packet. Depending on the quality of service requirements of the network, buffer allocation based on priority with or without preemption can be used; the difference in throughput between the preemptive and non-preemptive methods of the priority-based allocation scheme was found to be at most 2% in the two buffer case.

Reserving a buffer for VoIP traffic, along with multiple buffers, at each destination is worth implementing only if buffer size is taken into consideration. Otherwise, it was the same as using two buffers at the receiver side with buffer allocation based on priority.

6.2 Future Work

In this thesis the area of study was mostly finding the optimal number of buffers required to obtain optimal throughput. Each of the buffers used was of infinite size. Further research can be done on optimizing the size of each buffer.

In the buffer allocation based on priority with preemption scheme, a low priority packet undergoing reassembly was preempted and all its segments were dropped when a high priority packet arrived. One more direction for future work would be modifying this allocation technique to accommodate the already received segments of the preempted


packet without dropping them. Segments that were already received would be stored somewhere so that the reassembly process can be resumed as soon as the high priority packet gets through. The complexity involved in such a buffer management technique should be weighed against the percentage increase in throughput gained from saving the already received segments of the preempted packet. The same analysis can also be repeated for a larger number of asynchronous sources and much worse channel conditions. One more interesting case would be when the higher priority packets also need reassembly.


APPENDIX: TABLES

The main sets of results used in this analysis were shown as graphs. The rest of the values are shown in the form of tables in this appendix.

Table A.1: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on FCFS"; general traffic scenario.

Nodes  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1        6.278272  1.243499  7.521771  25              141
5      2        6.190464  1.740091  7.930555  0               19
5      3        6.366080  1.804688  8.17076   0               0
7      1        4.873344  2.353765  7.227109  26              167
7      2        4.653824  2.818059  7.471883  1               17
7      3        4.895056  2.82800   7.720356  0               0
9      1        3.600128  3.399435  6.999563  7               207
9      2        3.644032  4.190752  7.834784  0               11
9      3        3.731840  4.231125  7.962765  0               0
11     1        2.195200  4.772128  6.967328  9               168
11     2        2.414720  5.434251  7.848971  0               4
11     3        2.283008  5.442325  7.72536   0               0
13     1        0.0       7.604864  7.604864  0               36
13     2        0.0       7.518933  7.518933  0               0
13     3        0.0       7.578011  7.578011  0               0
15     1        0.0       7.035573  7.035573  0               15
15     2        0.0       7.137616  7.137616  0               0
15     3        0.0       7.051685  7.051685  0               0
20     1        0.0       6.348128  6.348128  0               14
20     2        0.0       6.471653  6.471653  0               0
20     3        0.0       6.297749  6.297749  0               0


Table A.2: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on FCFS with preemption"; general traffic scenario.

Nodes  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1        3.336704  1.816800  5.153504  93              0
5      2        6.278272  1.816800  8.095072  4               0
5      3        6.366080  1.816800  8.18288   0               0
7      1        2.019584  3.023963  5.043547  103             0
7      2        4.829440  3.023963  7.853403  7               0
7      3        4.741632  3.023963  7.765595  0               0
9      1        0.702464  4.235163  4.937627  109             0
9      2        3.512320  4.239200  7.750512  7               0
9      3        3.556224  4.237893  7.794117  0               0
11     1        0.131712  4.929584  5.061296  79              0
11     2        2.283008  5.159712  7.44272   1               0
11     3        2.283008  5.335288  7.518296  0               0
13     1        0.0       7.427632  7.427632  0               1
13     2        0.0       7.475968  7.475968  0               0
13     3        0.0       7.465879  7.465879  0               0
15     1        0.0       7.014091  7.014091  0               0
15     2        0.0       7.094651  7.094651  0               0
15     3        0.0       7.084096  7.084096  0               0
20     1        0.0       6.122560  6.122560  0               0
20     2        0.0       6.133301  6.133301  0               0
20     3        0.0       6.133232  6.133232  0               0


Table A.3: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on Source"; general traffic scenario.

Nodes  Buffers   Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1         6.409984  1.279835  7.689819  13              132
5      2         6.190464  1.780464  7.970928  0               9
5      3         6.322176  1.816800  8.138976  0               0
5      4 (n-1)   6.409984  1.812763  8.232747  0               0
7      1         4.917248  2.349728  7.267976  13              147
7      2         5.048960  2.939179  7.989139  0               20
7      3         4.741632  2.882656  7.624988  0               0
7      4         5.005056  3.019925  8.024981  0               0
7      5         4.873344  3.007813  7.881157  0               0
7      6 (n-1)   5.005056  3.028000  8.033056  0               0
9      1         3.644032  3.403472  7.047504  7               177
9      2         3.644032  4.178640  7.822672  0               13
9      3         3.292800  3.932363  7.225163  0               0
9      4         3.687936  4.235163  7.913099  0               0
9      5         3.687936  4.239200  7.927136  0               0
9      8 (n-1)   3.600128  4.182677  7.782805  0               0
11     1         0.658560  6.133301  6.791861  2               168
11     2         0.790272  7.105392  7.895664  0               12
11     3         0.658560  7.191323  7.839883  0               0
11     4         0.921984  7.207435  8.129419  0               0
11     5         0.965888  7.202064  8.167952  0               0
11     10 (n-1)  0.658560  7.110763  7.769323  0               0
13     1         0.0       7.379296  7.379296  0               29
13     2         0.0       7.610235  7.610235  0               0
13     3         0.0       7.540416  7.540416  0               0
13     4         0.0       7.749872  7.749872  0               0
13     5         0.0       7.583381  7.583381  0               0
13     12 (n-1)  0.0       7.572640  7.572640  0               0
15     1         0.0       7.030203  7.030203  0               17
15     2         0.0       7.030203  7.030203  0               0
15     3         0.0       7.320219  7.320219  0               0
15     4         0.0       7.083909  7.083909  0               0
15     5         0.0       7.073168  7.073168  0               0
15     14 (n-1)  0.0       7.019461  7.019461  0               0
20     1         0.0       6.396464  6.396464  0               4
20     2         0.0       6.315904  6.315904  0               0
20     3         0.0       6.262197  6.262197  0               0
20     4         0.0       6.203120  6.203120  0               0
20     5         0.0       6.240715  6.240715  0               0
20     19 (n-1)  0.0       6.267190  6.267190  0               0


Table A.4: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on Priority"; general traffic scenario.

Nodes  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1        6.146560  1.300021  7.446581  26              128
5      2        6.322176  1.816800  8.138976  20              0
7      1        4.741632  3.015888  7.757520  22              0
7      2        4.917248  3.023963  7.94121   9               0
9      1        3.600128  3.415584  7.015712  5               204
9      2        3.468416  4.045408  7.513824  0               0
11     1        0.746368  5.972181  6.718549  10              218
11     2        0.834176  7.185952  8.020128  3               0
13     1        0.0       7.653200  7.653200  0               18
13     2        0.0       7.513563  7.513563  0               0
15     1        0.0       7.019461  7.019461  0               12
15     2        0.0       7.239659  7.239659  0               0
20     1        0.0       6.111819  6.111819  0               13
20     2        0.0       6.197749  6.197749  0               0

Table A.5: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on Priority, with preemption"; general traffic scenario.

Nodes  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1        3.424512  1.816800  5.241312  102             0
5      2        6.190464  1.812763  8.003227  36              0
7      1        2.063488  3.028000  5.091488  105             0
7      2        4.961152  3.023963  7.985115  11              0
9      1        1.097600  4.239200  5.336800  100             0
9      2        3.556224  4.239200  7.795425  0               0
11     1        0.0       7.196693  7.196693  63              0
11     2        0.702464  7.191323  7.893787  10              0
13     1        0.0       7.422261  7.422261  1               0
13     2        0.0       7.52965   7.52965   0               0
15     1        0.0       7.218176  7.218176  0               0
15     2        0.0       7.003349  7.003349  0               0
20     1        0.0       6.332016  6.332016  0               0
20     2        0.0       6.095707  6.095707  0               0


Table A.6: Mean contention resolution (CR) slots and probability of collision for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on source"; general traffic scenario.

Nodes  Buffers   CR slots (asynch)  CR slots (isch)  P(coll) asynch  P(coll) isch
5      1         3.21               3.77             0.034752        0.004608
5      2         3.3                4.17             0.021978        0.058571
5      3         3.26               3.89             0.050177        0.0
5      4 (n-1)   3.2                3.88             0.042867        0.006818
7      1         3.6                4.14             0.050757        0.022339
7      2         3.08               4.23             0.039858        0.035387
7      3         4.2                4.86             0.08840         0.096591
7      4         3.35               3.69             0.043592        0.041397
7      5         3.5                3.91             0.062999        0.058752
7      6 (n-1)   3.3                4.05             0.061419        0.009211
9      1         3.79               3.66             0.066992        0.099517
9      2         3.24               4.14             0.065960        0.030584
9      3         4.50               5.18             0.104803        0.128009
9      4         3.34               4.09             0.050985        0.026490
9      5         3.37               3.84             0.054893        0.042435
9      8 (n-1)   3.56               4.20             0.063981        0.045536
11     1         3.40               3.04             0.048387        0.103774
11     2         3.43               3.18             0.050459        0.086088
11     3         3.61               3.66             0.080808        0.082012
11     4         3.10               3.86             0.053097        0.067227
11     5         3.24               3.16             0.051383        0.062544
11     10 (n-1)  2.87               2.89             0.035928        0.119174
13     1         2.50               2.00             0.250000        0.161791
13     2         2.00               1.97             0.0             0.156454
13     3         3.00               2.02             0.33333         0.162194
13     4         1.50               2.17             0.0             0.138425
13     5         0.00               2.05             0.0             0.158428
13     12 (n-1)  3.33               1.96             0.0             0.159714
15     1         0.00               1.29             0.0             0.217699
15     2         3.00               1.35             0.0             0.225444
15     3         0.00               1.33             0.0             0.195870
15     4         1.00               1.37             0.0             0.219397
15     5         0.00               1.41             0.0             0.218731
15     14 (n-1)  6.00               1.34             0.0             0.225711
20     1         1.00               0.80             0.0             0.295227
20     2         0.00               0.81             0.0             0.305195
20     3         0.00               0.85             0.0             0.312084
20     4         6.00               0.85             0.0             0.316568
20     5         1.00               0.83             0.0             0.312833
20     19 (n-1)  1.00               0.85             0.0             0.308102


Table A.7: Mean contention resolution (CR) slots and probability of collision for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on priority, no preemption"; general traffic scenario.

Nodes  Buffers  CR slots (asynch)  CR slots (isch)  P(coll) asynch  P(coll) isch
5      1        3.61               4.10             0.042355        0.033113
5      2        3.30               4.0              0.035386        0.002304
7      1        3.68               4.22             0.057710        0.052375
7      2        3.25               4.38             0.039747        0.045278
9      1        3.86               3.46             0.042579        0.052055
9      2        4.26               4.40             0.089151        0.091413
11     1        3.38               3.69             0.058824        0.074461
11     2        3.36               3.45             0.070796        0.076442
13     1        0.00               2.07             0.0             0.141582
13     2        2.02               1.98             0.50000         0.167461
15     1        4.00               1.31             0.0             0.220319
15     2        1.00               1.40             0.0             0.200829
20     1        3.00               0.85             0.0             0.318343
20     2        0.50               0.84             0.0             0.316538

Table A.8: Mean contention resolution (CR) slots and probability of collision for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on priority, with preemption"; general traffic scenario.

Nodes  Buffers  CR slots (asynch)  CR slots (isch)  P(coll) asynch  P(coll) isch
5      1        3.78               4.24             0.33563         0.32333
5      2        3.40               4.07             0.040874        0.002257
7      1        3.90               3.98             0.027807        0.011905
7      2        3.36               4.14             0.051709        0.014667
9      1        4.21               4.08             0.040639        0.039354
9      2        3.38               4.28             0.052758        0.042357
11     1        3.84               3.59             0.080000        0.071124
11     2        3.05               3.09             0.060748        0.089973
13     1        2.80               1.92             0.0             0.175909
13     2        2.13               2.02             0.0             0.160885
15     1        0.00               1.34             0.0             0.206143
15     2        0.00               1.39             0.0             0.226896
20     1        0.00               0.82             0.0             0.304425
20     2        0.00               0.82             0.0             0.328208


Table A.9: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on Source"; multimedia scenario.

Nodes  Buffers   Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1         6.102656  1.053744  7.1564    53              189
5      2         6.278272  1.562448  7.84072   58              0
5      3         6.322176  1.816800  8.138976  0               0
5      4 (n-1)   6.278272  1.816800  8.095072  0               0
7      1         4.697728  1.865248  6.562976  30              256
7      2         5.048960  2.826133  7.875093  0               49
7      3         5.005056  3.028000  8.033056  0               0
7      4         5.092864  3.023963  8.116824  0               0
7      5         4.961152  3.028000  7.899152  0               0
7      6 (n-1)   5.092864  3.028000  8.120864  0               0
9      1         3.687936  2.995701  6.683637  15              278
9      2         3.687936  4.154416  7.842352  0               21
9      3         3.292800  3.980811  7.273611  0               0
9      4         3.600128  4.239200  7.839328  0               0
9      5         3.775744  4.239200  8.014944  0               0
9      8 (n-1)   3.775744  4.235163  8.010907  0               0
11     1         2.195200  4.178640  6.37384   7               309
11     2         2.370816  5.313131  7.683947  32              0
11     3         2.239104  5.014368  7.253472  0               0
11     4         2.239104  5.438288  7.677392  0               0
11     5         2.195200  5.127413  7.323941  30              0
11     10 (n-1)  2.195200  5.283611  7.478811  0               0
13     1         0.0       7.239659  7.239659  0               55
13     2         0.0       7.535045  7.535045  0               0
13     3         0.0       7.706907  7.706907  0               0
13     4         0.0       7.551157  7.551157  0               0
13     5         0.0       7.620976  7.620976  0               0
13     12 (n-1)  0.0       7.653200  7.653200  0               0
15     1         0.0       6.756299  6.756299  0               41
15     2         0.0       6.777781  6.777781  0               0
15     3         0.0       7.137616  7.137616  0               0
15     4         0.0       7.153728  7.153728  0               0
15     5         0.0       7.185952  7.185952  0               0
15     14 (n-1)  0.0       7.100021  7.100021  0               0
20     1         0.0       6.101077  6.101077  0               22
20     2         0.0       6.187008  6.187008  0               0
20     3         0.0       6.133301  6.133301  0               0
20     4         0.0       6.165525  6.165525  0               0
20     5         0.0       6.407205  6.407205  0               0
20     19 (n-1)  0.0       6.117189  6.117819  0               0


Table A.10: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on Priority"; multimedia scenario.

Nodes  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1        6.014848  1.037595  7.052443  64              193
5      2        6.058752  1.816800  7.875552  65              0
7      1        4.785536  1.982331  6.767867  55              259
7      2        4.873344  3.028000  7.901344  51              0
9      1        3.600128  3.003776  6.603904  29              316
9      2        3.600128  4.235163  7.835291  37              0
11     1        2.195200  4.283611  6.478811  20              288
11     2        2.151296  5.438288  7.589584  26              0
13     1        0.0       6.820747  6.820747  0               123
13     2        0.0       7.647829  7.647829  0               0
15     1        0.0       6.965755  6.965755  0               17
15     2        0.0       7.105392  7.105392  0               0
20     1        0.0       6.117189  6.117189  0               22
20     2        0.0       6.369611  6.369611  0               0

Table A.11: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes when buffer allocation is "Based on Priority with preemption"; multimedia scenario.

Nodes  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
5      1        0.658560  1.816800  2.475360  141             0
5      2        6.014848  1.812763  7.827611  63              0
7      1        0.614656  3.023963  3.638619  147             0
7      2        4.390400  2.793835  7.184235  52              0
9      1        0.570752  3.439808  4.010560  96              0
9      2        3.380608  4.235163  7.615771  70              0
11     1        0.175616  5.426176  5.601792  96              0
11     2        2.063488  5.442325  7.505813  9               0
13     1        0.0       7.502821  7.502821  0               0
13     2        0.0       7.572640  7.572640  0               0
15     1        0.0       7.110763  7.110763  0               0
15     2        0.0       7.180581  7.180581  0               0
20     1        0.0       6.444800  6.444800  0               0
20     2        0.0       6.369611  6.369611  0               0


Table A.12: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes for different fractions of non-uniformity, for the 5 node case, when the number of asynchronous nodes in the network is 1; hotspot scenario.

Buffer allocation based on source:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        5.444096  2.395317  7.839413  0               154
50        2        5.400192  3.217029  8.617221  0               0
50        3        5.356288  3.222400  8.578688  0               0
50        4        5.312384  3.217029  8.529413  0               0
70        1        5.312384  2.056905  7.369289  0               217
70        2        5.400192  3.217029  8.617221  0               0
70        3        5.268480  3.21169   8.480139  0               0
70        4        5.356288  3.217029  8.573317  0               0
80        1        5.400192  1.745467  7.145659  0               275
80        2        5.444096  3.217029  8.661125  0               0
80        3        5.400192  3.222400  8.622592  0               0
80        4        5.400192  3.222400  8.622592  0               0
90        1        5.356288  1.385632  6.74192   0               341
90        2        5.356288  3.222400  8.578688  0               185
90        3        5.400192  3.222400  8.622592  0               0
90        4        5.400192  3.217029  8.617221  0               0

Buffer allocation based on priority:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        5.356288  2.465136  7.821424  0               141
50        2        5.356288  3.222400  8.578688  0               0
70        1        5.312384  1.922699  7.235083  0               213
70        2        5.444096  3.222400  8.66336   0               0
80        1        5.356288  1.681019  7.037307  0               277
80        2        5.400192  3.222400  8.622592  0               0
90        1        5.356288  1.288960  6.645248  0               360
90        2        5.224576  3.222400  8.446976  0               0

Buffer allocation based on priority with preemption:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        0.746368  3.206288  3.952656  77              0
50        2        5.312384  3.222400  8.534784  0               0
70        1        0.219520  3.217029  3.43904   91              0
70        2        5.356288  3.211659  8.567947  0               0
80        1        0.087808  3.222400  3.310208  97              0
80        2        5.400192  3.222400  8.622592  0               0
90        1        0.087808  3.222400  3.310208  100             0
90        2        5.400192  3.222400  8.622592  0               0


Table A.13: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes for different fractions of non-uniformity, for the 5 node case, when the number of asynchronous nodes in the network is 2; hotspot scenario.

Buffer allocation based on source:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        5.707520  1.562864  7.270384  26              159
50        2        5.970944  2.207344  8.178288  0               38
50        3        5.839232  2.416800  8.256032  0               0
50        4        5.795328  2.411429  8.206757  0               0
70        1        5.663616  1.353408  7.017024  61              198
70        2        5.839232  2.083819  7.923051  0               60
70        3        5.883136  2.416800  8.299936  0               0
70        4        5.970944  2.416800  8.387744  0               0
80        1        5.488000  1.256736  6.744736  78              216
80        2        5.839232  1.621941  7.461173  0               147
80        3        6.014848  2.411429  8.426277  0               0
80        4        5.927040  2.411429  8.338469  0               0
90        1        5.575808  1.025797  6.601605  90              258
90        2        6.014848  1.423227  7.438075  0               185
90        3        5.883136  2.411429  8.294565  0               0
90        4        5.970944  2.416800  8.387744  0               0

Buffer allocation based on priority:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        5.707520  1.562864  7.270384  33              159
50        2        5.883136  2.416800  8.29936   33              0
70        1        5.663616  1.358779  7.022395  53              197
70        2        5.619712  2.411429  8.031141  78              0
80        1        5.663616  1.213771  6.877387  75              224
80        2        5.663616  2.411429  8.075045  73              0
90        1        5.575808  1.031168  6.606976  97              258
90        2        5.619712  2.416800  8.036512  84              0

Buffer allocation based on priority with preemption:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        2.019584  2.411429  4.431013  129             0
50        2        5.839232  2.416800  8.256032  32              0
70        1        1.097600  2.416800  3.5144    156             0
70        2        5.751424  2.411429  8.162853  67              0
80        1        0.395136  2.416800  2.811936  177             0
80        2        5.619712  2.406059  8.025771  73              0
90        1        0.087808  2.411429  2.499237  154             0
90        2        5.575808  2.411429  7.987237  94              0


Table A.14: Throughput (Mbps) and number of FAILs for asynchronous (asynch) and isochronous (isch) nodes for different fractions of non-uniformity, for the 5 node case, when the number of asynchronous nodes in the network is 3; hotspot scenario.

Buffer allocation based on source:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        6.058752  0.966720  7.025472  84              120
50        2        6.497792  1.348037  7.845829  10              48
50        3        6.453888  1.562864  8.016752  0               9
50        4        6.453888  1.611200  8.065088  0               0
70        1        5.970944  0.982832  6.953776  143             117
70        2        6.366080  1.305072  7.671152  57              27
70        3        6.366080  1.476933  7.843013  0               25
70        4        6.409984  1.605829  8.015813  0               0
80        1        5.707520  0.886160  6.59368   188             135
80        2        6.102656  0.982832  7.085488  68              117
80        3        6.497792  1.294331  7.792123  0               59
80        4        6.409984  1.611200  8.021184  0               0
90        1        5.839232  0.918384  6.757616  158             128
90        2        6.322176  1.186917  7.509093  41              79
90        3        6.453888  1.380261  7.834149  0               43
90        4        6.497792  1.611200  8.108992  0               0

Buffer allocation based on priority:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        6.058752  0.945237  7.003989  109             123
50        2        5.927040  1.605829  7.532869  100             0
70        1        5.883136  0.945237  6.828373  165             123
70        2        5.883136  1.611200  7.494336  126             0
80        1        5.795328  0.918384  6.713712  166             129
80        2        5.839232  1.611200  7.450432  152             0
90        1        5.707520  0.950608  6.658128  190             122
90        2        5.663616  1.611200  7.274816  180             0

Buffer allocation based on priority with preemption:
Fraction  Buffers  Asynch    Isch      Total     FAILs (asynch)  FAILs (isch)
50        1        3.731840  1.611200  5.34204   138             0
50        2        6.190464  1.611200  7.801664  91              0
70        1        2.414720  1.611200  4.02592   201             0
70        2        5.839232  1.600459  7.439691  138             0
80        1        1.668532  1.611200  3.279732  213             0
80        2        5.927040  1.605829  7.532869  159             0
90        1        0.746368  1.611200  2.357568  232             0
90        2        5.795328  1.600459  7.395787  178             0


LIST OF REFERENCES

[1] Powerline Networking Moves Ahead, allNetDevices, June 28, 2000. www.intellon.com/press/mediacoverage.asp

[2] Electrical Wiring to Create Home Network, Orlando Sentinel, June 6, 2000.

[3] Dostert, Klaus, Telecommunications over the Power Distribution Grid; Possibilities and Limitations, Proceedings of the International Symposium on Power-line Communications and its Applications, Germany, 1997.

[4] Gardener, S., Markwalter, B. and Yonge, L., HomePlug Standard Brings Networking to the Home, CSD, Dec. 2000 feature. www.csdmag.com/main/2000/12/0012feat5.htm

[5] Tanenbaum, Andrew, Computer Networks, Third Edition, Prentice-Hall, Upper Saddle River, NJ, 1996.

[6] Arpaci, Mutlu and Copeland, John, Buffer Management for Shared-Memory ATM Switches, IEEE Communications Surveys, First Quarter 2000.

[7] Saleh, Mahmoud and Atiquzzaman, Mohammed, An Exact Model for Analysis of Shared Buffer Delta Networks with Arbitrary Output Distribution, IEEE Second International Conference on Algorithms and Architectures for Parallel Processing, 11-13 June 1996, Singapore, pp. 147-154.

[8] Saleh, Mahmoud and Atiquzzaman, Mohammed, Analysis of Shared Buffer Multistage Networks with Hot Spot, ICA3PP: IEEE First International Conference on Algorithms and Architectures for Parallel Processing, 19-21 April 1995, Brisbane, Australia, pp. 799-808.


[9] Zhou, Bin and Atiquzzaman, Mohammed, Performance Modeling of Multistage ATM Switching Fabrics, ATNAC: Australian Telecommunication Networks and Applications Conference, 5-7 Dec. 1994, Melbourne, Australia, pp. 657-662.

[10] Zhou, Bin and Atiquzzaman, Mohammed, Efficient Analysis of Multistage Interconnection Networks Using Finite Output-Buffered Switching Elements, Tech. Rep. 15/94, Department of Computer Science, La Trobe University, Melbourne, July 1994; IEEE INFOCOM, Boston, MA.

[11] Latouche, G., Exponential Servers Sharing a Finite Storage: Comparison of Space Allocation Policies, IEEE Trans. Commun., vol. COM-28, no. 6, June 1980, pp. 910-915.

[12] Irland, M., Buffer Management in a Packet Switch, IEEE Trans. Commun., vol. COM-26, no. 3, Mar. 1978, pp. 328-337.

[13] Kamoun, F. and Kleinrock, L., Analysis of Shared Finite Storage in a Computer Network Node Environment under General Traffic Conditions, IEEE Trans. Commun., vol. 41, no. 1, Jan. 1993, pp. 237-245.

[14] Fong, S. and Singh, S., Analytical Modeling of Shared Buffer ATM Switches with Hot-Spot Push-out under Bursty Traffic, Proc. IEEE GLOBECOM, vol. 2, Nov. 1996, pp. 835-839.

[15] Kaizawa, Yasuhito and Marubayashi, Gen, Needs for the Power Line Communications and its Applications, Proceedings of the International Symposium on Power-line Communications and its Applications, U.K., 1998.

[16] Brackmann, Ludwig, Power Line Applications with European Home Systems, Proceedings of the International Symposium on Power-line Communications and its Applications, Germany, 1997.

[17] Prasad, Ramjee and van Nee, Richard, OFDM for Wireless Multimedia Communications, Artech House, Boston, 2000.

[18] Katar, Srinivas, Analysis of Tone Allocated Multiple Access Protocol, Master's Thesis, University of Florida, Gainesville, Spring 2000.


[19] Molloy, M. K., Fundamentals of Performance Modeling, Macmillan Publishing Company, New York, 1989.


BIOGRAPHICAL SKETCH

Usha Suryadevara was born in Guntur, India, on September 25, 1978. She graduated from Utkal University, India, in May 1999 with a B.Tech. in computer engineering. She then joined the University of Florida in August 1999 and received her M.S. in computer and information science, under the guidance of Dr. Richard E. Newman, in December 2001. She plans to work in the area of computer networks.


Abstract of Thesis Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Master of Science



BUFFER MANAGEMENT IN TONE ALLOCATED MULTIPLE ACCESS
PROTOCOL



By

Usha Suryadevara

December 2001



Chairman: Dr. Richard E. Newman
Major Department: Computer and Information Science and Engineering

Home networking, an emerging technology, has gained much attention because it can use power lines for data transmission. Power line networks have a very extensive infrastructure in nearly every building. A Tone Allocated Multiple Access (TAMA) protocol, which uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA), is considered.

The segmentation and reassembly (SAR) process of the TAMA protocol rejects an initial segment if there is insufficient buffer space at the receiver end, for example due to a prior incomplete transmission. The SAR process then issues a FAIL (or "resource busy") signal to the sender, asking for retransmission. An attempt is made in this thesis to find the optimal number of reassembly buffers required at the receiver side, and to reduce the number of FAILs in the network by increasing the number of buffers at the receiver end. Three buffer allocation schemes are considered: allocation based on first come first serve (FCFS), allocation based on source address, and allocation based on priority. Throughput analysis for various kinds of traffic has also been carried out. Results show that there is a considerable increase in throughput and a decrease in the number of FAILs as the number of buffers increases.














CHAPTER 1
INTRODUCTION



"It was technologically feasible, but the economics never proved (itself)" [1].


The prospect of homes where the refrigerator can be used to surf the Internet is coming closer [2], because major computer and electronics manufacturers are coming up with a common standard for turning a home's electrical wiring into a data network. Home networking may be one of the most exciting markets that doesn't yet really exist. All that homeowners need is a technology that provides a connection to the broadband pipe from anywhere in the home with minimal inconvenience. Until now, the options for high-speed in-home access were phone line and RF technologies. One option that people never considered earlier is using the power line for this purpose. The power line is certainly the most difficult of these media, but it has two appealing attributes. First, as in the case of phone lines, no RF conversion hardware is needed, so the cost can be low compared to wireless solutions. Second, the power line network has a very extensive infrastructure in almost every house.


1.1 Power Lines

Since power lines were originally devised for transmission of power at 50-60 Hz (and at most 400 Hz), the use of the same medium for data transmission poses a number of technical problems.











The power line medium is a harsh environment for communication, especially because of the large attenuation. A channel between any two nodes (outlets) in a home has the transfer function of an extremely complicated transmission line network. Amplitude and phase response in such a network vary widely with frequency. At some frequencies the transmitted signal may arrive at the receiver with relatively little loss, while at other frequencies it is well below the noise floor. In addition, the transfer function itself may change with time; this might happen, say, whenever the homeowner plugs a new device into the power line, or when power supplies or motors switch. Also, the nature of the channel between outlet pairs may vary over a wide range. One other problem is interference.

Power lines connect the power generation station to a variety of customers dispersed over a wide region. Power transmission is done using varying voltage levels and power line cables. Depending on the voltage levels at which they transfer power, lines can be categorized as follows:

1. High-tension lines;

2. Medium-tension power lines;

3. Low-tension lines.

High-tension lines connect electricity generation stations to distribution stations. The voltage levels on these lines are usually on the order of hundreds of kilovolts, and they run over distances on the order of tens of kilometers.









Medium-tension lines connect distribution stations to pole-mounted transformers. The voltage levels are on the order of a few kilovolts, and the distances are on the order of a few kilometers.

Low-tension lines connect pole-mounted transformers to individual households. Voltage levels on these are on the order of a few hundred volts, and these lines run over distances on the order of a few hundred meters.

Recently, data communication over low-tension lines has gained a lot of attention. Digital devices using these low-tension power lines can be categorized, based on the bandwidth they use, as follows [3]:

1. Low bandwidth digital devices;

2. High bandwidth digital devices.

1.1.1 Low Bandwidth Digital Devices

These devices use carrier frequencies in the range 0-500 KHz and are primarily used for building automation. Frequencies used by these devices are restricted by the regulatory agencies. The restrictions are imposed to ensure the harmonious coexistence of various electromagnetic devices in the same environment. The frequency restrictions are imposed in two main markets, North America and Europe, and are shown in Figure 1.1 [3]. The Federal Communications Commission (FCC) and the European Committee for Electrotechnical Standardization (CENELEC) govern regulatory rules in North America and Europe, respectively.

In North America a frequency band from 0 to 500 KHz can be used for power line communications. The frequency band in Europe is further divided into five bands as follows:

1. 3-9 KHz frequency band;

2. 9-95 KHz frequency band;

3. 95-125 KHz frequency band;

4. 125-140 KHz frequency band;

5. 140-148.5 KHz frequency band.




Figure 1.1. FCC and CENELEC frequency band allocation. (a) FCC frequency band allocation for North America: a general use band from 0 to 540 KHz, with higher frequencies prohibited. (b) CENELEC frequency band allocation for Europe: the A, B, C and D bands with boundaries at 3, 9, 95, 125, 140 and 148.5 KHz, with higher frequencies prohibited.

The use of the frequency band from 3 KHz to 9 KHz is limited to energy providers; however, with their permission it may also be used by other parties inside a consumer's premises.

The use of the frequency band from 9 KHz to 95 KHz is limited to energy providers and their concession-holders. This frequency band is often referred to as the "A-Band".

The use of the frequency band from 95 KHz to 125 KHz is limited to the energy provider's customers; no access protocol is defined for this frequency band. This frequency band is often called the "B-Band".

The use of the frequency band from 125 KHz to 140 KHz is limited to the energy provider's customers. In order to make simultaneous operation of several systems within this frequency band possible, a carrier sense multiple access protocol using a center frequency of 132.5 KHz was defined. This frequency band is referred to as the "C-Band".

The use of the frequency band from 140 KHz to 148.5 KHz is limited to the energy provider's customers. No access protocol is defined for this frequency band. This frequency band is often referred to as the "D-Band".

Thus in Europe power line communications is restricted to operate in the frequency range 95-148.5 KHz. The various protocols that have been developed for use by low bandwidth digital devices for communication on power lines are discussed in the next chapter.
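To make these allocations concrete, the following minimal Python sketch (a hypothetical helper written for this discussion, not part of any standard or library) maps a carrier frequency to the CENELEC band described above:

    # Hypothetical helper: classify a carrier frequency (in KHz) into the
    # CENELEC bands described in the text. Band edges follow the ranges
    # quoted above; names and the API are illustrative only.
    CENELEC_BANDS = [
        (3.0, 9.0, "energy providers"),   # 3-9 KHz
        (9.0, 95.0, "A-Band"),            # energy providers and concession-holders
        (95.0, 125.0, "B-Band"),          # consumers, no access protocol
        (125.0, 140.0, "C-Band"),         # consumers, CSMA at 132.5 KHz
        (140.0, 148.5, "D-Band"),         # consumers, no access protocol
    ]

    def cenelec_band(freq_khz):
        """Return the CENELEC band label for a carrier frequency, or None."""
        for lo, hi, label in CENELEC_BANDS:
            if lo <= freq_khz < hi:
                return label
        return None

    print(cenelec_band(132.5))  # -> 'C-Band'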



1.1.2 High Bandwidth Digital Devices

High-speed data communication over low-tension power lines has recently gained a lot of attention. These devices use the existing power line infrastructure within an apartment, office or school building to provide a local area network (LAN) interconnecting various digital devices. High bandwidth digital devices for communication on power lines use the frequency band between 1 MHz and 30 MHz. In contrast to low bandwidth devices, no regulatory standards have been developed for this region of the spectrum.

High bandwidth digital devices communicating on power lines need powerful error correction coding along with appropriate modulation techniques to overcome the channel impairments described earlier.









1.2 TAMA Protocol Overview

High bandwidth digital devices operating on the power line share a common medium. Efficient use of this medium requires both a robust physical layer (PHY) and an efficient media access control (MAC) protocol. Also, the choice of a MAC protocol is very much dependent on the physical layer. The MAC controls the sharing of the medium, while the PHY specifies the modulation, coding, and basic packet formats.

PowerPacket technology, by Intellon Corporation of Ocala, includes an effective and reliable method for achieving high rates on typical channels [2]. The PowerPacket PHY uses orthogonal frequency division multiplexing (OFDM) as the basic transmission technique. For historical reasons this protocol is called the Tone Allocated Multiple Access (TAMA) protocol; the name reflects the use of adaptive bit loading in OFDM symbols. In contrast to technologies that use OFDM in continuous mode, TAMA uses OFDM in a burst mode. This technology also uses concatenated Viterbi and Reed-Solomon FEC with interleaving for payload data, and turbo product coding (TPC) for sensitive control data fields [4]. The TAMA MAC is modeled after the IEEE 802.11 MAC, adapted to the OFDM PHY layer. TAMA uses the CSMA/CA (Carrier Sense Multiple Access with Collision Avoidance) [5] technique as the basic channel access mechanism.


1.3 Buffers

Buffers are often used to improve throughput and reduce loss of data. Buffers may be dedicated or shared; dedicated buffer management is straightforward but yields low utilization, whereas shared buffering offers higher buffer utilization at the cost of more complex management. Dedicated buffers, especially input buffers, are affected by head-of-line blocking, a phenomenon that occurs quite literally when the head of the buffer queue is blocked. Shared buffers suffer from buffer hogging in the case of non-uniform traffic. The combinations of traffic patterns and buffer allocation mechanisms are numerous, and many studies have been conducted on them.
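As a small illustration of head-of-line blocking, consider the following Python sketch; the queue contents and the notion of a "busy output" are illustrative assumptions, not a model of any particular device:

    # A minimal sketch of head-of-line blocking in a dedicated FIFO input
    # buffer: one blocked packet at the head stalls deliverable packets
    # queued behind it.
    from collections import deque

    queue = deque(["pkt->B", "pkt->C", "pkt->C"])  # one FIFO input buffer
    busy_outputs = {"B"}                           # output B cannot accept data now

    delivered = []
    while queue:
        head = queue[0]
        dest = head.split("->")[1]
        if dest in busy_outputs:
            # The head packet is blocked, so the deliverable packets
            # behind it (both destined to C) are blocked too.
            break
        delivered.append(queue.popleft())

    print(delivered)  # -> [] : nothing gets through while B is busy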


1.4 Thesis Motivation

Unfortunately, while offering good throughput, this protocol suffers from large delays. This delay turns out to be a potential problem in power line LANs, as they should be able to support a large variety of traffic, some of which may be delay sensitive. As the protocol now works, the destination always acknowledges unicast packets at the MAC layer by transmitting the response delimiter. The TAMA protocol sends the rest of the segments only if it gets the ACK for the first segment. If the source fails to receive an acknowledgment, it assumes that a collision has caused the failure. The destination may also choose to signal FAIL if it has insufficient resources to process the frame, or it can signal NACK to indicate that the packet was received with errors that could not be corrected by the FEC.

The basic TAMA protocol described above has just one buffer at the receiver end. For this reason, when packets arrive at the same destination at the same time, some of them are sent a FAIL (or "resource busy") signal, resulting in retransmissions. Retransmissions bring down the data throughput and do not guarantee timely delivery of multimedia traffic. This means that if we increase the number of buffers at the receiver end, there should be a definite increase in throughput and a considerable reduction in the number of retransmissions. This is our main area of study in this thesis.









1.5 Thesis Objective

This thesis aims to maximize throughput by considering buffer allocation at the destination. The analysis focused on finding the optimal number of buffers required at the destination node in the TAMA protocol. Three buffer allocation schemes were considered: allocation based on FCFS (first come first serve), allocation based on priority, and allocation based on source. Also, preemption in the buffer (removing a partly received packet of lower priority when no buffer resources are available for a currently arriving higher priority packet) was considered, and the variation in throughput was observed. The change in throughput when a buffer is reserved for high priority traffic (for example, VoIP) was also noted.


1.6 Chapter Summary

This chapter discussed power line communications. Problems faced when a power line is used as a transmission medium, such as extreme noise levels, interference, and unpredictability of the channel, were discussed in brief. In the US, the frequency band from 0 to 500 KHz can be used for power line communications.

This chapter also gave a brief introduction to the protocol used in this thesis, the TAMA protocol. The TAMA PHY uses OFDM as the basic transmission technique, and the TAMA MAC uses CSMA/CA as the basic channel access mechanism. An introduction to the prior work done in the field of buffer management was also given, along with the nature of the work done in this thesis and the motivation behind it.

Chapter 2 describes the prior work done in both power line communications and the area of buffer management. Chapter 3 gives a detailed description of the OFDM modulation used in the TAMA protocol and of the protocol itself. Chapter 4 discusses the details of the simulation and all the different scenarios that were considered. A detailed discussion of the results collected and an analysis of these results are given in chapter 5, and finally the conclusions and future scope are presented in chapter 6.














CHAPTER 2
POWER LINE PROTOCOLS AND BUFFER MANAGEMENT

In this chapter we give a brief description of previous work done in power line

communications and also in the area of buffer management.


2.1 History

Even though the concept of using power lines for data transmission is very recent,

there have been several studies and attempts to develop protocols or come up with a

common standard for home networking as a whole. Some of them are discussed in brief

in the next sub-section.



2.1.1 Other Power Line Protocols

Various other protocols have been developed, even before PowerPacket

technology, for communication on power lines. These protocols differ in the modulation

technique, channel access mechanism and frequency band they use. Various products

based on these protocols are available in the market and are mainly used for home

automation purposes. A brief overview of these protocols is presented here.

2.1.1.1 X-10

The X-10 technology is one of the oldest power line communication protocols. Although it was originally unidirectional (from controller to controlled modules), recently some bi-directional products have been implemented. X-10 controllers send their signals over the power lines to simple receivers that are used mainly to control lighting and other appliances. Some controllers available today implement gateways between the power line and other media such as RF and infrared.

A 120 KHz amplitude modulated carrier (a 0.5 watt signal) is superimposed onto the AC power line at zero crossings to minimize noise interference. Information is coded by way of bursts of high frequency signals. To increase reliability, one bit of information is transmitted per cycle, limiting the transmission rate to 60 bits per second. This represents poor bandwidth utilization, and the reliability of transmission is severely compromised in a noisy environment. These are the main reasons why this technology is considered unreliable by many installers.

2.1.1.2 CEBus

CEBus is an industry standard for transmitting control signals via a home's power lines. CEBus uses spread spectrum technology to overcome noise and other communication impediments found within power lines. Spread-spectrum signaling works by spreading a transmitted signal over a range of frequencies, rather than using a single frequency. CEBus-compliant devices such as light switches are highly reliable but remain expensive compared with "ordinary" devices.

This technology uses a peer-to-peer communication model. To avoid collisions, Carrier Sense Multiple Access with Collision Resolution and Collision Detection (CSMA/CRCD) is used. The power line physical layer of CEBus communication is based on the spread spectrum technology patented by Intellon Corporation. Unlike traditional spread spectrum techniques (which use frequency hopping, time hopping or direct sequence), the CEBus power line carrier sweeps through a range of frequencies as it is transmitted. A single sweep covers the frequency band from 10-400 KHz. This frequency sweep is called a chirp. Chirps are used for synchronization, collision resolution and data transmission. Using this chirp technology a data rate of about 10 Kbps can be obtained.

2.1.1.3 LonWorks

LonWorks is a technology developed by Echelon Corporation that provides a peer-to-peer communication protocol, implemented using a Carrier Sense Multiple Access (CSMA) technique. Unlike CEBus, LonWorks uses a narrowband spread spectrum modulation technique in the frequency band from 125 KHz to 140 KHz. It uses a multi-bit correlator intended to preserve data in the presence of noise, with a patented impulse noise cancellation scheme.

All of the above-mentioned protocols deal with low-bandwidth digital devices and hence provide low data rates. They are essentially used for home automation. The high data rates required by modern applications can only be achieved using higher frequencies. The next section deals with the prior work done in the area of buffer management.

2.1.2 Buffer Management

Buffer allocation is the process that determines how the total buffer space (memory) will be used when packets need to be queued. The selection and implementation of this policy is usually referred to as buffer management [6].

Much study has been done in the area of buffers: observing buffer utilization in a network, calculating the optimum size of a buffer, evaluating the performance of a network when buffering is included, and so on.

An exact model to evaluate the performance of multistage interconnection networks using shared internal buffering is proposed by Saleh and Atiquzzaman [7]. The model is based on a general output distribution, and simulation-based results have been collected for uniform, hot spot and favorite distributions. Favorite output distribution has been found to have less impact on throughput than hot spot distribution. Buffer occupancy under hot spot traffic has been found to reach total capacity even at low input loads, with the rate of occupancy increasing in a non-linear fashion; as the favorite value increases, the rate of occupancy increases linearly.

Saleh and Atiquzzaman have also analyzed the effect of having one hot spot on shared buffer strategies [8]. It was found that though the throughput of the hot output increases as the hot spot value increases, the overall throughput decreases because of the monopolizing effect of the hot spot traffic. The advantage of the shared buffer scheme, namely better buffer utilization, is negated by the fact that hot spot traffic hogs buffer space, affecting cold spot throughput. For such traffic, output buffering, i.e., dedicated buffer allocation based on destination, is better.

Zhou and Atiquzzaman have proposed a model for output-buffered nodes under non-uniform traffic which provides more accurate results than other models because it considers blocking [9]. The model can also be used for uniform traffic as a special case, and its results are validated with simulation. The model is compared to a model that does not consider blocking, and it is shown to provide more accurate results. The effect of buffer size on throughput is also studied. Under uniform traffic an increase in buffer size results in throughput increasing from a minimum to an asymptotic value. Under hot spot traffic the asymptotic value reached is not as high as in uniform traffic. It was also found that increasing buffer size beyond a certain value ceased to affect the throughput; that threshold was found and used for further simulation.









The output-buffered strategy is studied by Zhou and Atiquzzaman [10], who consider finite output-buffered models under non-uniform traffic. The various buffer strategies, such as input-based, output-based and shared, are characterized there, and references to research done in these fields are also indicated.

Buffer sharing can be done in various ways [6]; the main policies are listed below (a sketch of their admission rules follows this list).

1. Complete Partitioning (CP) and Complete Sharing (CS) -

In the complete partitioning scheme, the entire buffer space is permanently partitioned among the total number of servers present. This scheme does not actually provide any sharing.

At the other extreme is the complete sharing scheme, where all buffer space is shared among all incoming data regardless of source and destination. Here an arriving packet is accepted if any space is available in the switch memory, independent of the server to which the packet is directed.

Under the CP policy, buffer space allocated to a port is wasted if that port is inactive, since it cannot be used by other, possibly active, links. On the other hand, under the CS policy one of the ports may monopolize most of the storage space if it is highly utilized [11].

2. Sharing with Maximum Queue Lengths (SMXQ) -

Observing the undesirable behavior of the two extreme policies of CS and CP, Irland [12] proposed a "restricted buffer sharing" policy that retains most of the efficiency of buffer sharing while avoiding the possible monopolization of the buffer space by one heavily loaded link, by placing a limit on the maximum number of spaces that can be taken up for any destination.

3. Sharing with Minimum Allocation (SMA) and Sharing with a Maximum Queue and Minimum Allocation (SMQMA) -

Two further policies, SMA and SMQMA, are proposed in "Analysis of Shared Finite Storage in a Computer Network Node Environment under General Traffic Conditions" [13]. In SMA a minimum number of spaces is reserved for each destination. SMQMA is the integration of SMA and SMXQ: each destination always has access to a minimum allocated space, but no destination can have an arbitrarily long queue.

4. Complete sharing with push-out from the longest queue -

This approach is analyzed in "Analytical Modeling of Shared Buffer ATM Switches with Hot-Spot Push-out under Bursty Traffic" [14]. Shared buffering is best in terms of buffer utilization and loss of data because of the maximum degree of buffer sharing; however, its performance degrades under non-uniform traffic. Hot spot traffic, a particular kind of non-uniform traffic, is analyzed in that paper under the Hot-Spot Push-out scheme, where packets directed towards hot spots that are making unfair use of the buffer sharing are purged. This approach is shown to be consistently better than the other schemes.
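The admission decisions behind these policies can be summarized in a few lines. The following Python sketch is a simplified illustration under assumed parameters (window sizes, reserved space); it is not taken from the cited papers:

    # A minimal sketch of the shared-buffer admission policies described
    # above (CP, CS, SMXQ, SMA, SMQMA). All names and parameters are
    # illustrative assumptions.
    def admit(policy, queue_len, total_used, capacity,
              max_q=None, min_alloc=None, reserved_free=0):
        """Decide whether to accept a packet for one destination queue.

        queue_len     - current length of this destination's queue
        total_used    - buffers in use across all destinations
        capacity      - total (shared) buffer capacity
        max_q         - per-destination cap (SMXQ, SMQMA)
        min_alloc     - guaranteed per-destination space (SMA, SMQMA)
        reserved_free - space held back for other destinations' minimum
                        allocations (SMA, SMQMA)
        """
        if policy == "CP":                      # fixed private partition;
            return queue_len < capacity         # capacity = partition size here
        if policy == "CS":                      # fully shared pool
            return total_used < capacity
        if policy == "SMXQ":                    # shared, but capped per queue
            return total_used < capacity and queue_len < max_q
        if policy == "SMA":                     # guaranteed minimum per queue
            return (queue_len < min_alloc or
                    total_used < capacity - reserved_free)
        if policy == "SMQMA":                   # minimum guarantee plus cap
            return (queue_len < max_q and
                    (queue_len < min_alloc or
                     total_used < capacity - reserved_free))
        raise ValueError(policy)

    # Example: SMXQ with 16 shared buffers and a per-destination cap of 6.
    print(admit("SMXQ", queue_len=6, total_used=10, capacity=16, max_q=6))  # False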

The majority of the analyses in the literature involve connectionless networks and remain within packet-level performance issues. Later, most of the work done in the area of buffer analysis was done for high speed switching in ATM, where the use of buffers in the switching elements was crucial for high throughput. The use of additional buffers at the receiver end was essentially to improve traffic management in ATM networks. The central issue there, as here, is buffer allocation and efficient utilization for obtaining high throughput. This is our area of study.


2.2 Comparative Summary

The basic TAMA protocol has just one buffer at the receiver end. For this reason, some senders transmitting packets to the same destination at the same time are sent a FAIL (or "resource busy") signal, thus triggering a retransmission procedure. Retransmissions bring down the data throughput and do not guarantee timely delivery of multimedia traffic.

This thesis proposes to solve some of these problems by changing the existing TAMA protocol. The changes are brought about by doing buffer management at the receiver end: introducing multiple buffers at the receiver and introducing preemption in the buffers. We expect to reduce the number of FAILs issued by increasing the number of reception buffers. However, the number of reassembly buffers should be small to minimize buffer management problems and chip cost.

All of the work presented so far is based on FIFO queues at the output ports or destinations. No one approach is good for all traffic patterns. This thesis will show that development of a buffer management policy that works together with a desired service discipline results in more efficient allocation of the two critical network resources, buffer space and bandwidth. To our knowledge, no prior work has been done on buffer management in power line networks.

The situation in the TAMA protocol is a bit different from the above-discussed cases because of the reassembly process considered here. The receiver end of this protocol receives a packet as a series of segments (because of the segmentation process at the transmitter end). As soon as the destination receives the first segment of any packet, it assigns a buffer, if one is available. The rest of the segments belonging to the same packet are assigned the same buffer in order for reassembly to be done. Only after the last segment of the packet is received is it possible to free that buffer or accept packets from another source.

This thesis considers two buffer allocation schemes, allocation based on source and allocation based on priority, in addition to the baseline first come first serve (FCFS) policy. Each of these schemes takes care that all the segments that belong to the same packet are allocated the same buffer so that they are reassembled properly.

Allocation based on FCFS is considered as the baseline case. Here a buffer is allocated to whichever segment arrives first at the receiver. Source-based allocation allocates a buffer to a segment depending on the source address, which means that for achieving maximum throughput the number of reassembly buffers at the receiver end should equal the total number of senders in the network. Similarly, in the case of priority-based allocation, maximum throughput should be achieved when the number of buffers per destination equals the number of priorities.
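A minimal sketch of how such receiver-side allocation might look is given below, assuming a simple segment format with source, packet id, priority, sequence number and last-segment flag; the class and field names are illustrative, not the TAMA implementation:

    # A minimal sketch of receiver-side reassembly buffer allocation under
    # the three schemes discussed above. The segment abstraction and the
    # return values are illustrative assumptions.
    class ReassemblyBuffers:
        def __init__(self, n_buffers, scheme="fcfs"):
            self.free = n_buffers
            self.scheme = scheme
            self.active = {}        # allocation key -> buffer in use

        def _key(self, seg):
            # Segments of one packet must map to the same key so that
            # they land in the same buffer and reassemble correctly.
            if self.scheme == "source":
                return seg["src"]
            if self.scheme == "priority":
                return seg["priority"]
            return (seg["src"], seg["pkt_id"])   # FCFS: keyed per packet

        def receive(self, seg):
            """Return 'ACK' or 'FAIL' for one incoming segment."""
            key = self._key(seg)
            if key not in self.active:
                if seg["seq"] != 0 or self.free == 0:
                    return "FAIL"               # resource busy at receiver
                self.free -= 1                  # first segment claims a buffer
                self.active[key] = []
            self.active[key].append(seg["seq"])
            if seg["last"]:                     # final segment frees the buffer
                del self.active[key]
                self.free += 1
            return "ACK"

    rx = ReassemblyBuffers(n_buffers=2, scheme="source")
    seg = {"src": 1, "pkt_id": 7, "priority": 3, "seq": 0, "last": False}
    print(rx.receive(seg))  # -> 'ACK'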

An attempt is made in this thesis to find the optimal number of buffers required to maximize throughput while keeping the number of FAILs to a minimum. It also examines the effect of preemption on throughput.


2.3 Chapter Summary

This chapter discussed in detail the work done in power line communications. We discussed some protocols developed for low bandwidth digital devices, such as X-10, CEBus and LonWorks; these protocols are essentially used for home automation. Research done in the area of buffer management was discussed in detail, and the various buffering schemes were summarized in the previous section.

A much more detailed description of the protocol is given in the next chapter.














CHAPTER 3
OFDM AND TAMA PROTOCOL

In this chapter we will discuss in detail the modulation used by the TAMA PHY, OFDM (orthogonal frequency division multiplexing), and the protocol itself. TAMA stands for Tone Allocated Multiple Access, a name retained for historical reasons.


3.1 OFDM Modulation

The choice of modulation depends on the nature of the physical medium on which it has to operate. Any modulation scheme selected for use on a power line should be able to do the following [15]:

1. Overcome non-linear channel characteristics.

2. Overcome multi-path spread.

3. Adjust dynamically.

4. Mask certain frequencies.

Power lines generally have non-linear characteristics. This makes equalization very complex and expensive for data rates above 10 Mbps with single carrier modulation.

Impedance mismatches on power lines result in echo signals causing delay spread on the order of 1 ms. The modulation technique used for power lines should have the inherent ability to overcome this multi-path effect.

Power line channel characteristics change dynamically as the power supply varies. The modulation technique for use on power lines should have the ability to track such changes without involving large overhead or complexity.









Power line communications equipment uses an unlicensed frequency band. However, it is likely that in the near future various regulatory rules will be developed for these frequency bands too, so the scheme should be able to mask certain frequency bands.

A modulation scheme with all of the above desirable properties is Orthogonal Frequency Division Multiplexing (OFDM). OFDM is generally viewed as a collection of transmission techniques. It is currently used in the European Digital Audio Broadcast (DAB) standards, and several DAB systems proposed for North America are also based on OFDM [16].

3.1.1 Theory of Operation

OFDM divides the high-speed data stream to be transmitted into multiple parallel bit streams, each of which has a relatively low bit rate. Each bit stream then modulates one of a series of closely spaced carriers. To obtain high spectral efficiency, the frequency responses of the sub-carriers are overlapping and orthogonal, hence the name OFDM. The practical consequence of orthogonality is that, if we perform a Fast Fourier Transform (FFT) of the received waveform over a time span equal to the OFDM symbol period, the value of each point in the FFT output is a function only of the bit (or bits) that modulated the corresponding carrier, and is not impacted by the data modulating any other carrier. Each narrowband sub-carrier can be modulated using various modulation formats such as BPSK, QPSK and QAM.

When the carrier spacing is low enough for the channel response to be relatively constant across the band occupied by each carrier, channel equalization becomes easy. The need for equalization can be completely eliminated by using differential modulation, which improves performance in an environment where rapid changes in phase are possible [4]. Implemented in the frequency domain, equalization can be achieved by a simple weighting of the symbol recovered from each carrier by a complex valued constant. A schematic block diagram is shown in Figure 3.1.



Figure 3.1. Block diagram of an OFDM system [15]: data passes through an IFFT and a multiplexer, a cyclic prefix (CP) is added before the channel and removed after it, and a demultiplexer and FFT recover the data.

OFDM modulation is generated using a Fast Fourier Transform (FFT) process as mentioned above. M bits of data are encoded in the frequency domain onto N sub-carriers (M = N x B, where B is the number of bits per modulation symbol, e.g., B = 2 for QPSK). An inverse FFT (IFFT) is performed on the set of frequency carriers, producing a single time domain OFDM "symbol" for transmission over the communication channel. The duration of the OFDM symbol is equal to the reciprocal of the sub-carrier spacing and is generally long compared to the reciprocal of the data rate. A cyclic prefix is inserted by simply copying the last part of the time domain waveform and pre-pending it at the start of the waveform. The reasons for using a cyclic prefix are twofold: it makes the Inter Carrier Interference (ICI) zero even in the presence of time dispersion, by maintaining orthogonality, and it acts as a guard interval removing Inter Symbol Interference (ISI).

OFDM signals are demodulated by removing the cyclic prefix from the time domain signal and then performing an FFT on each symbol to convert it back to the frequency domain. Data is decoded by examining the phase and amplitude of the sub-carriers.
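The modulation and demodulation path just described can be illustrated with a short numpy sketch; the parameter values (8 sub-carriers, QPSK, a 2-sample cyclic prefix) are illustrative assumptions, and the channel is taken to be ideal:

    # A minimal numpy sketch of the OFDM modulate/demodulate path described
    # above (IFFT, cyclic prefix, FFT). These are not the TAMA PHY numbers.
    import numpy as np

    N = 8            # sub-carriers
    B = 2            # bits per modulation symbol (QPSK), so M = N * B bits
    CP = 2           # cyclic prefix length in samples

    rng = np.random.default_rng(0)
    bits = rng.integers(0, 2, N * B)

    # Map bit pairs to QPSK constellation points, one per sub-carrier.
    symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)

    tx = np.fft.ifft(symbols)                 # frequency domain -> one OFDM symbol
    tx_cp = np.concatenate([tx[-CP:], tx])    # prepend last CP samples (cyclic prefix)

    rx = tx_cp[CP:]                           # receiver strips the cyclic prefix
    rx_symbols = np.fft.fft(rx)               # back to the frequency domain

    assert np.allclose(rx_symbols, symbols)   # ideal channel: carriers recovered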









3.1.2 Advantages of OFDM

OFDM is a modulation scheme that has all the desirable properties listed above. Some advantages of OFDM include that it

1. is very good at mitigating the effects of time dispersion;

2. is very good at mitigating the effects of in-band narrowband interference;

3. has high bandwidth efficiency;

4. is scalable to high data rates;

5. is flexible and can be made adaptive (different modulation schemes for sub-carriers, bit loading, and adaptable bandwidth/data rates are possible);

6. has excellent ICI performance;

7. does not require channel equalization;

8. does not require phase lock of the local oscillators.

In spite of all these advantages, and in spite of OFDM's ability to eliminate ISI and ICI, there remains the problem of fading caused by multi-path reflection. Fading occurs when the reflected signal arrives such that it attenuates or even cancels the original signal. This usually occurs only at certain frequencies. It can be overcome by using interleaving and error correction coding techniques. For a complete discussion of OFDM refer to OFDM for Wireless Multimedia Communications [17].


3.2 Protocol Overview

High bandwidth digital devices operating on power lines share a common medium. Efficient use of this medium requires both a robust physical layer (PHY) and an efficient medium access control (MAC) protocol. Also, the choice of a MAC protocol is very much dependent on the physical layer. The MAC controls the sharing of the communication medium, while the PHY specifies the modulation, coding, and basic packet formats.

The TAMA PHY uses orthogonal frequency division multiplexing (OFDM) as the basic transmission technique. Many different MAC protocols have been used in various LANs; however, the choice of MAC protocol depends on how the PHY layer works. Consider the IEEE 802.3 Ethernet MAC protocol, which uses Carrier Sense Multiple Access with Collision Detection (CSMA/CD). It cannot be used on power lines because the large dynamic range of signals and noise makes collision detection highly unreliable. A collision on power lines can be inferred only if the sender fails to receive an acknowledgment. High attenuation on the power line medium can also lead to the problem of hidden nodes. This problem of hidden nodes is dealt with in the IEEE 802.11 MAC protocol for wireless LANs by using Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA). Even though this protocol gives good throughput, it suffers from large delays.

3.2.1 PHY Overview

The need for equalization in TAMA is completely eliminated by using differential quadrature phase shift keying (DQPSK) modulation, where the data is encoded as the difference in phase between the present and previous symbol in time on the same sub-carrier (see Figure 3.2). Differential modulation improves performance in environments where rapid changes in phase are possible.







Figure 3.2. Differential phase encoding across symbols: on each sub-carrier m, the phase difference dPm is encoded between OFDM symbol n and OFDM symbol n+1.
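A minimal numpy sketch of this differential encoding, under the assumption of a known reference symbol and an ideal channel, is shown below; the array layout and phase mapping are illustrative, not the exact TAMA design:

    # A minimal numpy sketch of differential phase encoding across OFDM
    # symbols, as in Figure 3.2.
    import numpy as np

    rng = np.random.default_rng(1)
    n_symbols, n_carriers = 4, 8
    dibits = rng.integers(0, 4, (n_symbols, n_carriers))   # 2 bits per carrier

    # DQPSK: each dibit selects a phase *increment* of 0, 90, 180 or 270
    # degrees relative to the previous symbol on the same sub-carrier.
    increments = np.exp(1j * np.pi / 2 * dibits)
    reference = np.ones(n_carriers)                        # known first symbol
    tx = np.vstack([reference, np.cumprod(increments, axis=0)])

    # The receiver recovers each dibit from the phase difference between
    # consecutive symbols; a slowly varying channel phase cancels in the
    # product, which is why no equalization is needed.
    diff = tx[1:] * np.conj(tx[:-1])
    recovered = np.round(np.angle(diff) / (np.pi / 2)) % 4

    assert np.array_equal(recovered, dibits)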




The TAMA PHY occupies the band from about 4.5 to 21 MHz. The PHY includes reduced transmitter power spectral density in the amateur radio bands to minimize the risk of radiated energy from the power lines interfering with these systems. The raw bit rate using DQPSK modulation with all carriers active is 20 Mbps, and the bit rate delivered to the MAC by the PHY layer is about 14 Mbps.

The PHY packet structure consists of a preamble sequence followed by a TPC-encoded frame control field. The preamble sequence is chosen to provide good correlation properties so that each receiver can reliably detect the delimiter, even with substantial interference and a lack of knowledge of the transfer function that exists between the transmitter and the receiver.

The frame control contains MAC layer management information (for example,

packet lengths and response status). All three delimiter types have the same structure, but

the data carried in the delimiter varies depending on the delimiter function.









Unlike the delimiters, the payload portion of the packet is intended only for the destination receiver. Payload data are carried only on a set of carriers that have been previously agreed upon by the transmitter and intended receiver during a channel adaptation procedure (whence "Tone Allocation").

Since only carriers in the "good" part of the channel transfer function are used, it is not necessary to use such heavy error correcting coding as is required for transmissions intended for all receivers. This combination of channel adaptation and lightening of the coding for unicast payloads allows TAMA to achieve high data rates over power lines.

The adaptation has three degrees of freedom:

1. de-selection of carriers at badly impaired frequencies;

2. selection of modulation on individual carriers (DBPSK or DQPSK);

3. selection of convolutional code rate (1/2 or 3/4).

In addition to these options, the payload can be sent using ROBO mode: a highly robust mode that uses all carriers with DBPSK modulation and applies heavy error correcting code with bit repetition and interleaving on each of them. ROBO mode does not use carrier de-selection and thus can generally be received by any receiver. The mode is used for initial communication between devices that have not performed channel adaptation, for multicast transmission, or for unicast transmission in cases where the channel is so poor that ROBO mode provides greater throughput than de-selection of carriers with lighter coding.











Figure 3.3. TAMA protocol transmission format. A start-of-frame delimiter (a 4-OFDM-symbol preamble plus a 25-bit frame control) is followed by a frame of 20-160 OFDM symbols containing the 17-byte frame header, the frame body, PAD and a 2-byte frame check sequence (FCS), then an end-of-frame delimiter and a response delimiter (each also a 4-symbol preamble plus 25-bit frame control, sent on all tones), followed by the contention resolution window.


With relatively short packets, the overhead required for channel assessment and estimation of gain and of carrier phase creates a capacity penalty that more than offsets any potential gain in modulation efficiency.

Formed from a series of OFDM symbols, the TAMA data-bearing packet consists of a start-of-frame delimiter, a payload, and an end-of-frame delimiter (see Figure 3.3). A PPDU is a collection of OFDM symbols containing SYNC, Header and Data fields. The SYNC field is used to indicate the start of the packet; it basically contains a known group of OFDM symbols. The Header field contains relevant physical layer information, for example the PPDU type and the modulation scheme used. The Data field contains a part of a TAMA segment. We also assume that the data field is either a 20- or a 40-symbol packet.


In Figure 3.3, the start-of-frame control indicates contention control, the length of the frame and the tone map index; the payload is sent at up to 13.5 Mbps (PHY rate) using the adapted modulation and tones, decoded based on the tone map and extensible to higher rates; the end-of-frame control indicates contention control and channel access priority; and the response delimiter indicates ACK (good packet), NACK (errors detected) or FAIL (receiver busy). The priority resolution signals PRS0 and PRS1 encode 11 for the highest priority (3), 10 for priority 2, 01 for priority 1, and 00 for the lowest priority (0).









An acknowledgment contains only the SYNC and Header fields. Each PPDU is followed by an acknowledgment. For unicast transmissions, the destination station responds by transmitting a response delimiter indicating the status of the reception (ACK, NACK, or FAIL).

Reception of an ACK (a positive acknowledgment) is considered a success, and the next segment in the queue is transmitted. A NACK indicates a negative acknowledgment, and the same segment is retransmitted after a back-off procedure. This retransmission process is repeated until the retransmission limit (16) is reached, at which point the packet is dropped and the transmission is considered a failure.

A FAIL, on the other hand, is a resource busy signal, which indicates that the receiver has all its resources busy. In this case the sender has to wait for 20 milliseconds and then undergo the back-off procedure before retransmission.
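The sender-side handling of these responses might be sketched as follows; the function names and segment abstraction are hypothetical, while the retransmission limit of 16 and the 20 ms wait after a FAIL come from the text above:

    # A minimal sketch of the sender's response handling described above.
    import time

    RETRY_LIMIT = 16
    FAIL_WAIT_S = 0.020     # 20 ms wait after a FAIL (resource busy)

    def send_segment(segment, transmit, backoff):
        """Try one segment; return True on ACK, False if it is dropped."""
        for _ in range(RETRY_LIMIT):
            response = transmit(segment)   # returns 'ACK', 'NACK' or 'FAIL'
            if response == "ACK":
                return True                # success: send the next segment
            if response == "FAIL":
                time.sleep(FAIL_WAIT_S)    # receiver busy: mandatory wait
            backoff()                      # back off before retransmitting
        return False                       # retry limit reached: drop packet

    # Example with a channel that reports busy once, errors once, then succeeds.
    responses = iter(["FAIL", "NACK", "ACK"])
    print(send_segment("seg-0", lambda s: next(responses), lambda: None))  # True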

3.2.2 MAC Overview

The MAC uses a virtual carrier sense (VCS) mechanism and contention resolution to minimize the number of collisions. Upon receipt of a preamble, the receiver attempts to recover the frame control.

The frame control indicates whether the delimiter is a start-of-frame, end-of-frame, or response delimiter. Start-of-frame delimiters specify the duration of the payload to follow, while the other delimiters implicitly define where the end of the transmission lies. Thus, if a receiver can decode the frame control in the delimiter, it can determine the duration for which the channel will be occupied by this transmission, and it sets its VCS until this time ends.

If it cannot decode the frame control, the receiver must assume that a maximum-length packet is being transmitted and must set the VCS accordingly. In this case it may subsequently receive an end-of-frame delimiter and thus be able to correct its VCS.
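A minimal sketch of this VCS rule, with an assumed maximum-frame time and a simplified delimiter representation, is:

    # A minimal sketch of the virtual carrier sense rule described above.
    # The delimiter fields and MAX_FRAME_TIME value are illustrative.
    MAX_FRAME_TIME = 2.5e-3      # assumed maximum-length frame time, seconds

    def update_vcs(now, delimiter):
        """Return the time until which the channel is assumed busy."""
        if delimiter is None:                      # frame control not decodable:
            return now + MAX_FRAME_TIME            # assume a maximum-length packet
        if delimiter["type"] == "start":           # start of frame announces the
            return now + delimiter["duration"]     # duration of the payload
        return now                                 # end-of-frame or response:
                                                   # the transmission ends here

    vcs = update_vcs(0.0, None)                    # undecodable: worst case
    vcs = update_vcs(0.001, {"type": "end"})       # corrected by end-of-frame
    print(vcs)  # -> 0.001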

The destination always acknowledges unicast packets at the MAC layer by

transmitting the response delimiter. If the source fails to receive an acknowledgment, it

assumes that a collision has caused the failure. The destination may also choose to signal

FAIL if it has insufficient resources to process the frame, or it can signal NACK to

indicate that the packet was received with errors that could not be corrected by the FEC.

3.2.2.1 Channel access mechanism

Medium sharing in the TAMA protocol is accomplished by the Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) mechanism, with priorities and a random back-off time following busy conditions on the channel. The TAMA protocol allows prioritized channel access with a low probability of collision and a guaranteed minimum throughput. Each node that has one or more TAMA segments to send will contend for the channel if the channel is busy. Note that the cost of collisions is very high. The contention resolution protocol includes a random back-off algorithm to disperse the transmission times of frames queued (or being retransmitted due to collision) while the channel has been busy, and it also provides a way to ensure that clients obtain access to the channel in the order of their priorities.

3.2.2.1.1 Basic access procedure

If the channel has been idle for X msec after the last transmission, where X is the channel access period, data is sent directly without participating in any kind of collision resolution. The channel access mechanism used by TAMA is shown in Figure 3.5. If the medium has been busy, then a two-step process is followed for channel access.


Figure 3.5. Basic access procedure: after the end of the last transmission come the two priority resolution slots (PRS0 and PRS1), followed by the contention window of contention resolution slots (CRS0, CRS1, ..., CRSk), in one of which the new transmission begins.



The first step involves signaling the intention to contend at a particular priority.

After this step, a contending node will defer if it senses a higher priority node

transmitting. The second step involves the actual process of contention. A priority

resolution symbol is transmitted in the priority resolution slots. Priority level encoding in

bits is shown in Figure 3.4.


Figure 3.4. Priority level encoding in priority bits P1 and P2: (P1, P2) = (1, 1) signals high priority, (1, 0) medium priority, (0, 1) low priority, and (0, 0) best effort.


When one node completes a transmission, other nodes with packets queued to transmit signal their priority in the priority resolution interval (indicated by PRS0 and PRS1 in Figure 3.5).









The signals for this purpose use on/off keying and are designed so that the priority of the highest priority user can be easily extracted, even when multiple users signal different priorities at the same time. During the priority resolution period the highest priority of the data waiting to be sent is identified, and only the stations having data of this priority contend. Each of these stations generates a random back-off time according to the value of its local back-off timer. The back-off procedure is also invoked when the transmitter retransmits due to the lack of an ACK.

A station will not transmit in the remaining PRS or in the contention window if a higher priority PRS symbol is detected. The stations that had indicated their intention to contend in the PRS and were not preempted by any higher priority then compete for access in the contention window according to the back-off procedure. Also, PRS symbols will not be transmitted if the end-of-frame or response delimiter has its contention bit set and the priority to be signaled is equal to or less than that of the preceding frame.
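Because the priority signals use on/off keying, the medium effectively carries the logical OR of all signaled bits, so every station observes the highest signaled priority. The following Python sketch illustrates the two-slot resolution under that assumption; the encoding follows Figure 3.4, but the function itself is an illustration, not the TAMA specification:

    # A minimal sketch of the two-slot priority resolution described above.
    def resolve_priority(priorities):
        """Return the winning priority among contenders (2-bit encoding)."""
        heard_p1 = any(p & 0b10 for p in priorities)        # slot PRS0
        # A node signals in PRS1 only if it was not already preempted by
        # hearing P1 while not signaling P1 itself.
        heard_p2 = any((p & 0b01) and not (heard_p1 and not (p & 0b10))
                       for p in priorities)
        return (heard_p1 << 1) | heard_p2

    # Nodes with priorities 2 (10) and 1 (01): the 01 node defers after
    # hearing PRS0, so the resolved priority is 2.
    print(resolve_priority([0b10, 0b01]))  # -> 2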

3.2.2.1.2 Slot choices

Nodes with queued frames having priority equal to the highest priority signaled

choose a slot in a contention resolution window in which they will initiate transmission,

if no other node begins transmission in an earlier slot. Each node chooses its slot at

random over an interval that grows with increasing numbers of unsuccessful attempts to

access the channel. If a node were preempted in a previous contention resolution window,

it continues counting slots from where it left off rather than choosing a new random

value. This approach improves the fairness of the access scheme.
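The persistent slot count can be sketched as follows, again in C; the contention window sizes, field names, and entry points are our assumptions for illustration rather than values taken from the TAMA specification.

    #include <stdbool.h>
    #include <stdio.h>
    #include <stdlib.h>

    /* Per-node contention state: the backoff count persists across
       contention windows in which the node was preempted. */
    typedef struct {
        int backoff;    /* slots still to wait before transmitting */
        int attempts;   /* unsuccessful channel-access attempts so far */
    } contender_t;

    /* Assumed contention window sizes, growing with unsuccessful attempts. */
    static int cw_size(int attempts) {
        static const int cw[] = { 8, 16, 32, 64 };
        return cw[attempts < 3 ? attempts : 3];
    }

    /* At the start of a window, draw a fresh random slot unless the node was
       preempted last time, in which case it keeps its residual count. */
    static void enter_window(contender_t *c, bool was_preempted) {
        if (!was_preempted)
            c->backoff = rand() % cw_size(c->attempts);
    }

    /* Called once per contention slot; returns true when this node transmits.
       If another node seizes the channel first, the residual count is kept
       for the next window, which is what improves fairness. */
    static bool slot_elapsed(contender_t *c, bool seized_by_other) {
        if (seized_by_other) return false;   /* preempted: keep the count */
        return c->backoff-- == 0;
    }

    int main(void) {
        srand(7);
        contender_t node = { 0, 0 };
        enter_window(&node, false);
        int slot = 0;
        while (!slot_elapsed(&node, false)) slot++;
        printf("node transmitted in slot %d\n", slot);
        return 0;
    }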

Collisions can occur if a node wishing to transmit fails to recognize a preamble

from another node, or if the earliest chosen slot in the contention resolution window is









selected by more than one node. The preamble design is robust enough to ensure that the

missed preamble rate is so low that this source of collisions has only a minor impact,

leaving the latter cause to produce the majority of collisions.

3.2.2.1.3 Channel adaptation

Channel adaptation occurs when clients first join a logical network, and

occasionally thereafter, based either on a timeout or on a detected variation in the channel

transfer function (which might be either an improving or degrading condition evidenced

by a reduction or an increase in errors or signal strength). Any node can initiate a channel

adaptation session with any other node in its logical network. The adaptation is a bi-directional process in which each node specifies to the other the tone map, i.e., the set of tones, modulation, and FEC coding to use in subsequent payload transmissions.

3.2.2.2 Segmentation and reassembly

Segmentation and reassembly is provided to improve fairness and reliability, and

to reduce latency. The MAC also includes features that allow the transmission of multiple

segments with minimal delay in cases where there are no higher priority frames queued

with other nodes, and it provides a capability for contentionless access in which access to

the channel may be passed from node to node. Under this protocol each packet arriving

from the higher layer is divided into multiple segments for transmission on the physical

layer. Segment size depends on the data rate between the transmitter and the receiver.

Also there is a maximum channel occupancy time for each node.









Segmentation has two advantages.

1. Segmentation improves the chances of frame delivery over harsh channels because it reduces the cost of a collision or of errors, as each segment is acknowledged individually.

2. It contributes to better latency characteristics for all stations because it

puts a limit on the maximum amount of time a node can keep the

channel busy. This is especially important for meeting real time quality

of service requirements.

Each TAMA segment has a header field which contains information like segment

count, number of segments, etc. Each TAMA segment is transmitted over the physical

medium by one or more physical protocol data units (PPDUs). This is necessary since the

TAMA MAC PDU must be able to encapsulate an entire IEEE 802.3 PDU (up to 1500 bytes),

but a TAMA PDU can hold a payload of at most 40 OFDM symbols.
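As a quick illustration of the segmentation arithmetic: for a fixed per-segment payload the segment count is a ceiling division. The 2057-byte payload below is only our back-calculation from the 16450-byte, eight-segment example given in section 4.2.2; the real payload depends on the negotiated tone map and code rate.

    #include <stdio.h>

    /* Ceiling division: segments needed to carry a packet. */
    static unsigned num_segments(unsigned packet_bytes, unsigned seg_payload) {
        return (packet_bytes + seg_payload - 1) / seg_payload;
    }

    int main(void) {
        /* 16450-byte low priority packet, assumed ~2057-byte segment payload. */
        printf("%u segments\n", num_segments(16450, 2057));  /* prints 8 */
        return 0;
    }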


[Figure: a MAC frame of maximum length divided into multiple TAMA segments, each carried in its own PPDU.]

Figure 3.6. TAMA segmentation process









A common misconception is that contention-based access schemes have

potentially unbounded latency. In the TAMA protocol, latency is bounded at the cost of discarding packets that cannot be delivered in the time required by the application. It has been shown [18] that the percentage of TAMA packets discarded

through this approach is low enough to be tolerated by low latency applications such as

Voice over IP (VoIP) or streaming media. The combination of this feature and priority

classes makes TAMA well suited to applications requiring QoS.

3.2.2.3 Privacy

Privacy is provided through the use of 56-bit data encryption standard (DES)

applied at the MAC layer. All nodes on a given logical network share a common

encryption key. The key management system includes features that enable the

distribution of keys to nodes that lack an I/O capability.


3.3 Summary

Efficient use of a medium requires both a robust PHY and an efficient MAC.

Choice of a particular modulation depends on the physical medium on which it has to

operate. OFDM modulation is a good choice for power line data transmission due to the various advantages it offers over other schemes, such as high bandwidth efficiency, scalability to high data rates, and flexibility. Moreover, it eliminates the problems of Inter-Carrier Interference and Inter-Symbol Interference.

TAMA PHY uses OFDM as the basic transmission technique. Formed from a

series of OFDM symbols, the TAMA data-bearing packet consists of a start-of-frame

delimiter, a payload, and an end-of-frame delimiter. The start-of-frame delimiter contains a SYNC field, which indicates the start of the packet and consists of a known group of OFDM symbols. The header field contains relevant physical layer information, for example the PPDU type and the modulation scheme used. The data field contains a part of a TAMA segment. Different modes of encoding the payload in the PHY were also discussed.

The TAMA MAC channel access mechanism uses CSMA/CA. It uses a virtual carrier sense (VCS) mechanism for carrier sensing, followed by a contention resolution process that minimizes the number of collisions and includes two PRS slots to permit four priority levels to be encoded. The TAMA MAC protocol allows prioritized channel access with a low probability of collision and a guaranteed minimum throughput. Each node that has one or more TAMA segments to

send will contend for the channel if the channel is busy. The contention resolution

protocol includes a random back-off algorithm to disperse the transmission times of

frames queued (or being retransmitted due to collision) while the channel has been busy,

and also provides a way to ensure that clients obtain access to the channel in the order of

their priorities.

The TAMA MAC also implements a segmentation and reassembly process, used to improve fairness and reliability and to reduce the latency of MAC transmissions. Each

packet arriving from the higher layer is divided into multiple segments for transmission

on the physical layer. Segment size depends on the data rate between the transmitter and

the receiver. Also there is a maximum channel occupancy time for each node.

The next chapter discusses the simulation design, the simulator, and the various traffic conditions and network parameters that are considered.














CHAPTER 4
SIMULATION DESIGN AND DESCRIPTION

This chapter explains the simulation design and implementation of the basic TAMA protocol, the multiple buffer protocol, and the preemption protocol. The designs include the basic simulator design and the design of a node's receiver with multiple buffers, along with different variations: buffer allocation based on source or on priority, introducing preemption, and reserving a buffer for high priority traffic. A detailed description of the parameters used, their values, and the assumptions made during the simulation is also given here.


4.1 Design

Simulation is one of the ways in which a system can be modeled. It can be more

accurate and typically has fewer assumptions than analytical modeling. We use an event-

based simulation technique for modeling the TAMA protocol and its variants.

4.1.1 Basic Simulator Design

In an event-based simulation, each iteration of the main loop processes a single event. The simulation clock simply advances to the time of the next event. Events are processed serially, even if they have the same event time.









The pseudocode for the main loop is presented below:



Initialize all nodes in the network;
Populate event queue with initial events;
Current time = 0;
While (current time < simulation time)
{
    remove next event from the head of the event queue;
    handle the event removed from the queue;
    insert event(s) resulting from the current event into the event queue;
    update statistics;
}
Compute final statistics;
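A minimal, self-contained C rendering of this loop is given below. The event structure, the sorted-list queue, and the spawned "retransmission timer" event are hypothetical stand-ins for the simulator's real structures; the point is only the clock-advance and event-spawning pattern.

    #include <stdio.h>
    #include <stdlib.h>

    /* Hypothetical event type; the real simulator's structures differ. */
    typedef struct event { double time; const char *name; struct event *next; } event_t;

    /* Keep the queue sorted by time so its head is always the next event;
       ties keep insertion order, matching serial processing of equal times. */
    static void insert_event(event_t **q, double t, const char *name) {
        event_t *ev = malloc(sizeof *ev);
        ev->time = t; ev->name = name;
        while (*q && (*q)->time <= t) q = &(*q)->next;
        ev->next = *q; *q = ev;
    }

    int main(void) {
        event_t *queue = NULL;
        double now = 0.0, sim_time = 10.0;

        insert_event(&queue, 1.5, "packet arrival");   /* initial events */
        insert_event(&queue, 4.0, "packet arrival");

        while (now < sim_time && queue) {
            event_t *ev = queue; queue = ev->next;     /* remove head event */
            now = ev->time;                            /* clock jumps forward */
            printf("t=%.1f  handling %s\n", now, ev->name);
            if (now + 3.0 < sim_time)                  /* handling may spawn events */
                insert_event(&queue, now + 3.0, "retransmission timer");
            free(ev);
            /* update running statistics here */
        }
        /* compute final statistics here */
        return 0;
    }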



Simulator operation overview:

Any event could cause the simulation to change its state and/or cause new events

to occur. Every node maintains an event queue that determines which event is to be

scheduled next. Once an event is scheduled, the other events are rearranged.

The general structure of a node is shown in Figure 4.1.

Higher layer traffic is modeled by different traffic sources, generating packets,

with selectable packet arrival time and packet size. Packets generated are stored in the

prioritized packet queue. Packets in the queue are transmitted to their respective

destinations by the MAC process. Successful or unsuccessful transmission will result in












the MAC process removing one more packet from the queue. Packet transmission and


reception between various nodes is achieved by a series of MAC and PHY interactions at


each node. A separate event queue is maintained at MAC and PHY to store the events


generated. Each node maintains a list of transmission events in a buffer.


[Figure: node structure. Traffic sources 1 through n generate packets into the node's prioritized device driver queue; the MAC and PHY below it connect the node to the medium.]

Figure 4.1. Node structure














Device Driver Queue

The DDQ is a prioritized queue with the capacity to store a certain number of packets. Packets generated by the higher layers are stored in this queue.

MAC

The MAC maintains an event queue and also has a single buffer to store packets. The MAC process extracts one packet at a time from the device driver queue for transmission over the medium.

PHY

The PHY also maintains an event queue of its own and has a single buffer.

A simulation mimics a stochastic process in time, so the outputs of the simulation are themselves random variables [19]. This is a disadvantage if we want to make a statement about the behavior of any case based on just a single observation of the simulation. Thus we need to make repeated runs with different starting values for the variables, including the seed for the pseudorandom number generator (rand() and srand() are used in our simulation of this protocol). This protocol is implemented in C.
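The per-run reseeding can be as simple as the sketch below; the seed values are arbitrary assumptions, and run_simulation() stands in for the main loop above.

    #include <stdio.h>
    #include <stdlib.h>

    /* Each run reseeds the PRNG so that the repeated runs observe
       independent sample paths of the stochastic process. */
    int main(void) {
        const unsigned seeds[] = { 1, 42, 1234, 98765 };  /* assumed values */
        for (int run = 0; run < 4; run++) {               /* 4 runs, as in 4.2.2 */
            srand(seeds[run]);
            /* run_simulation(...); collect per-run statistics here */
            printf("run %d: first random draw = %d\n", run, rand());
        }
        return 0;
    }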



4.1.2 Multiple Buffer Protocol

In this section we describe the Multiple Buffer protocol proposed to improve the

performance of the basic protocol. The operation will still be the same as before except









that now the receiver can receive from more than one source at the same time. The packet

transmission and retransmission algorithm will remain the same. So now, the maximum

sustainable load is a function of the number of reception (or destination) buffers at each

node. It increases as the number of buffers increases.

Initially the node can accept segments from any node. When a valid header segment arrives, the destination node accepts it and assigns a buffer. This buffer allocation can be done in many ways. In this analysis we consider one baseline case, buffer allocation based on First Come First Serve (FCFS), and two dedicated buffer allocation schemes: allocation based on source address and allocation based on priority.

4.1.2.1 Based on FCFS

FCFS based buffer allocation is the simplest. An available buffer is allocated based on the arrival time of a packet. When a valid header segment arrives, the receiver checks whether a buffer is already allocated for this packet. If not, it assigns a buffer if one is available; otherwise, the receiver sends a FAIL.

4.1.2.2 Based on source

Here, buffers are allocated based on the source address. When a valid header

segment arrives a buffer is assigned unless the source already has a buffer assigned to it

or all buffers are in use. Subsequent segments from a source already assigned a buffer are

inserted into that buffer for reassembly. However, no source is assigned more than one

buffer. This way a single receiver can now receive from more than one source at the same

time and still not send a resource busy signal (FAIL).









4.1.2.3 Based on priority

In priority-based buffer allocation, a buffer is allocated based on the priority of

the arriving packet. When a valid header segment of a given priority arrives, a FAIL is returned if the buffer for that priority is in use; otherwise, that priority's buffer is assigned to the packet and it is accepted. Each receiver may thus simultaneously receive one packet at each priority level.

In both cases buffer allocation is done only if the receiver has a free buffer; otherwise a busy signal is sent. There may be fewer buffers than possible sources or priority levels. A buffer is not freed until all the segments of that particular packet have been received successfully. Whenever a segment is retransmitted too many times, a segment is lost, or the source node itself drops a segment for whatever reason, the receiving buffer will wait indefinitely for a next segment that never arrives. Hence the receiver runs a timer on each buffer; if the timer expires, the receiver automatically frees that buffer.

4.1.2.4 Preemptive approach

In the preemptive version of each of the disciplines above, buffers may be preempted based on priority. If all of the buffers are occupied and the first segment of a new packet arrives with higher priority, the receiver preempts a lower priority packet and allocates its buffer to the new packet. Otherwise it sends a FAIL.
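The decision logic shared by these disciplines can be collected into one function, sketched below in C. Everything here (the four-buffer count, the names, the struct layout) is our illustration of the rules described above, not the real code; it handles only the header segment of a new packet, since later segments simply go to the buffer already assigned.

    #include <stdbool.h>
    #include <stdio.h>

    #define NBUF 4   /* number of reassembly buffers (assumed) */

    typedef enum { ALLOC_FCFS, ALLOC_SOURCE, ALLOC_PRIORITY } scheme_t;

    typedef struct {
        bool in_use;
        int  source;     /* sender of the packet being reassembled */
        int  priority;   /* 1 = low, 2 = medium, 3 = high */
    } rbuf_t;

    /* Buffer index for an arriving header segment, or -1 to signal FAIL. */
    static int allocate(rbuf_t buf[], scheme_t scheme, bool preempt,
                        int src, int prio) {
        int free_idx = -1, victim = -1;
        for (int i = 0; i < NBUF; i++) {
            if (!buf[i].in_use) {
                if (free_idx < 0) free_idx = i;
                continue;
            }
            /* One buffer per source / per priority: resource busy -> FAIL. */
            if (scheme == ALLOC_SOURCE && buf[i].source == src) return -1;
            if (scheme == ALLOC_PRIORITY && buf[i].priority == prio) return -1;
            if (buf[i].priority < prio) victim = i;  /* preemption candidate */
        }
        if (free_idx < 0 && preempt && victim >= 0)
            free_idx = victim;           /* drop the lower priority packet */
        if (free_idx < 0) return -1;     /* no resources -> FAIL */
        buf[free_idx] = (rbuf_t){ true, src, prio };
        return free_idx;
    }

    int main(void) {
        rbuf_t buf[NBUF] = { 0 };
        printf("%d\n", allocate(buf, ALLOC_SOURCE, false, 7, 1));  /* 0 */
        printf("%d\n", allocate(buf, ALLOC_SOURCE, false, 7, 2));  /* -1: source busy */
        return 0;
    }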

4.1.2.5 Reserve a high priority buffer

Here, every destination has one of its buffers reserved for high priority packets,

i.e., there is a dedicated buffer at every destination for a high priority packet. There is not

much change in the design other than reserving a buffer. The packet transmission process

and responses sent still remain the same.












4.2 Simulation Model

The model of the TAMA protocol simulates N nodes each transmitting data

according to the TAMA protocol.

4.2.1 Assumptions

Some assumptions made during this analysis are listed below:

* The network is considered to be busy always, i.e., the network is always saturated, so that the exact behavior of the network under worst-case traffic conditions can be studied.

* Noise at every node is -20.0 dB and the signal to noise ratio between nodes is 0. This is done so that we will not lose a response due to noise in the channel. Moreover, this helps us get an accurate understanding of the reduction in the number of FAILs due to the introduction of multiple buffers.

The first assumption was made to get an accurate reflection of the network behavior under worst-case conditions. Even though a saturated network is not a common real-world scenario, it gives an idea of how badly a network could perform under such conditions.

4.2.2 Simulation Configuration

The simulated configuration is specified by the parameters shown below. Some of these values are input to the simulation through a high-level trace file (supporting traffic generators).

* Number of nodes = variable; the maximum number of nodes is 20.

* Simulation duration = 75 sec (each run).

* Number of runs = 4.

* Number of traffic sources = variable; it should be less than the number of nodes in that particular scenario.

* Maximum number of buffers = 19.

* Priorities used = 1, 2 & 3, i.e., low, medium and high priorities respectively. Traffic of priorities 2 and 3 is isochronous, and traffic of priority 1 is asynchronous.

* Size of low priority packet = 16450 bytes; this size fills eight segments. The number of segments a packet can be divided into depends on the modulation used by the TAMA protocol.

* Size of medium and high priority packet = 1500 bytes; this value depends on the type of modulation used and also the code rate. This causes the TAMA layer to send this packet in a single segment.

* Traffic generation: low priority packets are generated exponentially (Poisson model) and medium and high priority packets are generated at periodic intervals. Exponential interarrival time = 500 μsec; isochronous periodic interval = 20000 μsec.

The simulation model that was developed used all the above-specified values for traffic generation. The modulation used in this simulation was QPSK 3/4. The number of asynchronous nodes in a network was also varied and the behavior studied; however, the detailed analysis was done for the case where the number of asynchronous nodes in the network equals two.
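For reference, the configuration above can be gathered into a single structure; this is a hypothetical C sketch (the field names are ours, not the simulator's):

    /* Hypothetical container for the parameters of section 4.2.2. */
    typedef struct {
        int      num_nodes;            /* variable, at most 20 */
        int      num_runs;             /* 4 */
        double   run_duration_s;       /* 75 seconds per run */
        int      max_buffers;          /* 19 */
        unsigned low_prio_bytes;       /* 16450: fills eight segments */
        unsigned med_high_prio_bytes;  /* 1500: a single segment */
        double   exp_interarrival_us;  /* 500 us, Poisson (low priority) */
        double   iso_period_us;        /* 20000 us (medium/high priority) */
    } sim_config_t;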











Traffic Models:

In order to simulate a real-time scenario, at any given time we have two

asynchronous nodes for any N, where N is the total number of nodes in the network. The

main point of this analysis is to see the effect of introducing multiple buffers at the receiver end on the performance and throughput when VoIP traffic is present. Thus our scenarios are divided mainly into three cases.

Case (i): Uniform destination

Here we have traffic generated from one node uniformly distributed over all other

nodes in the network.

Case (ii): Multimedia traffic

Here, instead of having every node generate traffic to all the other nodes in the

network with equal probability, we have one isochronous node and one asynchronous

node send all their packets to the same destination. This is done to simulate a real-time multimedia environment, where a person is listening to radio on the Internet and at the same time transferring some files.

Case (iii): Hot Spots

Here a hot receiver was modeled. In this case different fractions of non-

uniformity were considered. Fractions considered were 50, 70, 80 and 90. These fractions

decide the percentage of the total packets to be distributed non-uniformly. For example,

in case of a fraction of 50, 50% of the total packets generated in the network are directed

to a single destination.









The uniform destination case simulates a balanced traffic flow in a network, while cases (ii) and (iii) simulate a non-balanced traffic flow.





4.3 Performance Measures

The simulation program measures the performance of the basic TAMA

protocol, with or without buffering capabilities or preemption. Performance is measured

in terms of average transmission delay, number of FAILs and throughput.

Network throughput is defined as the mean number of bits successfully

transferred through the channel per unit time. This throughput just considers the total

number of bits transmitted across the network. This does not include the rejected or FAIL

packets. Its units are megabits per second.

Throughput = (TotalBytesTx × 8) / (SimTime × 10^6) Mbps,

where SimTime is the total simulation time measured in seconds.

Transmission delay of a voice/data packet is defined as the time interval between

the formation of the packet and its arrival at the destination. The transmission delay is measured as the difference between the net delay and the queuing delay. In this simulation it is measured from the time the MAC gets the packet from the device driver queue until all the segments are successfully sent.

Other output parameters like number of FAILs, voice throughput (multimedia

scenario case) for all the above-discussed cases are analyzed.

Also some supporting results like probability of collision and mean contention

resolution slots required either by an asynchronous or an isochronous node are observed.









4.4 Summary

Chapter 4 discussed the basic simulation design. In this event-based

simulation model the higher layer traffic is modeled by different traffic sources,

generating packets, with selectable packet arrival time and packet size. Packets generated

are stored in the prioritized packet queue. Packets in the queue are transmitted to their

respective destinations by the MAC process. A successful or unsuccessful transmission will result in the MAC process removing one more packet from the queue. A separate event

queue is maintained at MAC and PHY to store the events generated. Each node maintains

a list of transmission events in a buffer.

Different variations of the protocol were then described for each of the buffer allocation schemes: based on FCFS, based on source, and based on priority.

The assumptions made and the design strategies used to simulate a real-time case were also discussed. The design strategies include the different traffic models considered (the general traffic scenario, the multimedia scenario and the hotspot cases), the number of isochronous and asynchronous nodes in a network, the modulation used, etc.

Finally, the parameters considered for performance analysis were presented: throughput, number of FAILs, mean contention resolution slots, probability of collision, etc.

The next chapter gives the simulation results.














CHAPTER 5
RESULTS AND DISCUSSION

In this chapter, we analyze the results obtained from the simulation of the TAMA

protocol with multiple buffers and different allocation schemes. For analysis both

balanced and non-balanced traffic flows were considered. The general traffic scenario simulates a balanced traffic flow in a network, while non-balanced traffic flow is simulated by the multimedia scenario and the different hotspot (non-uniform traffic) cases. The

simulation measures throughput, number of FAILs issued, mean contention resolution

slots (mean CR slots), and probability of collision, for different cases of the simulations.

Results were analyzed using graphs of these measures against the various network parameters discussed in section 4.2.2.


5.1 Results

In a network of maximum N nodes, variation in the throughput is observed by

gradually increasing the number of buffers from 1 to N-1 for N = 2, 5, 7, 9, 11, 13, 15 and 20. This was done for every traffic case mentioned in section 4.2.2. However, for the

discussion of hotspot scenarios, the analysis was based on one case (5 nodes) for the

reasons discussed in section 5.1.3.

Increasing the number of buffers increased the maximum throughput. However, after a certain value, there was not much further improvement in the throughput.

Different sets of results were collected for all the necessary cases. The set of results

considered for our discussion includes the simulations that were run for a duration of 75









seconds. Also, the number of asynchronous nodes in the network at a given time is

varied. The number of asynchronous nodes in the network for the discussion that follows

is equal to 2. There was an increase of at least 10% when the number of buffers was

increased from 1 to 2. For example, the throughput for a single buffer, 5 nodes, allocation

based on source and using the general traffic generator case, is 7.44 Mbps and that of a 2

buffer case is found to be 8.14 Mbps. Also, the total number of FAILs in the network decreased by at least 60 percent; in the above case of 5 nodes and allocation based on source, the total number of FAILs in the network was reduced by 93%.

The variation of the number of FAILs with the number of buffers is discussed in detail in section 5.1.1.

When we have just two nodes in the network (one transmitting and the other receiving), then for all the sub-cases, i.e., allocation based on source, allocation based on

priority (with or without preemption), reserving a buffer for the high priority packets,

traffic pattern being general or multimedia, the throughput achieved was found to be the

maximum for a single buffer case and constant there onwards as expected. This can be

considered as an ideal case because with only one sender to the receiver there is never a

possibility of contention for reassembly buffers.

The effect of introducing multiple buffers on the performance of the TAMA protocol depends on the buffer management scheme used. The following sections review the variation in throughput for each traffic scenario under all the buffer allocation schemes.

5.1.1 General Traffic Scenario

Here the destination was chosen randomly using a uniform distribution. The

behavior of the network for all the buffer allocation schemes was observed.















5.1.1.1 Based on FCFS



Buffer allocation based on first come first serve (FCFS) is taken as a baseline case



and a detailed analysis is done only for this particular scenario.


[Figure: panels plotting throughput (Mbps) against the number of buffers for 5, 7, 9, 11, 13, 15 and 20 nodes, for FCFS allocation with and without preemption.]

Figure 5.1. Graphs showing the variation of throughput with the number of buffers for all the cases of FCFS.









The numbers in the graphs can be verified from Tables A.1 and A.2 of the

Appendix. Figure 5.1 shows how throughput varies with number of buffers for different

cases of FCFS, with and without preemption, as labeled in the respective graphs. It can be seen from the graphs that the throughput increases with the number of buffers.

In either case, FCFS (no preemption) or FCFS with preemption, the throughput

from the asynchronous nodes is highest for the 5 node case and falls as the number of nodes increases. This happens because as the number of nodes in the network grows, the number of isochronous nodes grows, thereby increasing the probability of collision for asynchronous nodes. A reverse effect is seen in the throughput from isochronous

nodes. Throughput from the isochronous nodes increases as the number of isochronous

nodes in the network increases.

Also, considering graphs showing the variation of isochronous throughput, we see

that there is not much difference in the behavior of isochronous nodes in the network

with or without preemption in the buffers. However, there is a considerable difference in

the throughput from the asynchronous nodes, depending on preemption, when the

number of buffers increases from one to two because the number of retransmissions

reduces.

Overall throughput, however, is seen to increase as the number of buffers is increased from one to two in both cases, as shown in the graphs. Also, the throughput achieved in the two buffer case is almost the same for FCFS and FCFS with preemption.

5.1.1.2 Based on Source

Table A.3 of the Appendix gives the throughput and number of FAILs for

asynchronous and isochronous nodes in the network. Table A.6 shows the mean















[Figure: throughput (Mbps) from asynchronous nodes versus the number of buffers for 5 to 20 nodes.]

Figure 5.2. Variation of throughput from asynchronous nodes with buffers, based on source case.

[Figure: throughput (Mbps) from isochronous nodes versus the number of buffers for 5 to 20 nodes.]

Figure 5.3. Variation of throughput from isochronous nodes with buffers, based on source case.

[Figure: total throughput (Mbps) versus the number of buffers for 5 to 20 nodes.]

Figure 5.4. Variation of total throughput with buffers, based on source case.











contention resolution slots and the probability of collision separately for asynchronous

and isochronous nodes in the network. Graphs from figure 5.2 to figure 5.4 show the

variation of throughput with the number of buffers for different cases of N.

The graph in figure 5.2 shows how the throughput from the asynchronous nodes varies with the number of buffers. We see that the throughput increases as the number of buffers goes from one to two and is almost constant from then on. In the case of five nodes there is a slight decrease in the throughput, from 6.4 Mbps to 6.2 Mbps. Such unexpected variations are due to the number of collisions suffered before a successful transmission. The probability of collision, as shown in Table A.6, is slightly higher for the two buffer case than for the single buffer case.

In the graph in figure 5.2, throughput decreases as the number of nodes increases. This is because, with the number of asynchronous nodes held constant, an increase in the number of nodes in the network means an increase in the number of isochronous nodes. As the traffic generated by isochronous nodes is either multimedia or VoIP traffic, it has a higher priority and can therefore gain access to the channel (through the priority resolution process) easily. On the other hand, asynchronous traffic, being low priority traffic, has to back off some number of times before it wins the priority resolution process and contends for the channel. So, as the number of isochronous nodes in the network increases, there is a considerable decrease in the throughput of the asynchronous nodes. In the cases of 13, 15 and 20 nodes we see that the throughput drops to 0, i.e., the asynchronous nodes did not get to transmit at all due to blocking.









The graph in figure 5.3 shows the variation of throughput from the isochronous nodes with the number of buffers for different values of N (the number of nodes). Here we see the opposite trend: as the number of nodes increases, the isochronous throughput increases. It is also seen that throughput increases with the number of reassembly buffers. Here again we have the exceptional cases of 13, 15 and 20 nodes, where we see no difference in throughput as the number of reassembly buffers increases, because of head-on blocking, where the high priority packets block the low priority packets. So only the high priority packets gain access to the channel. Also, the high priority packets are just one segment long and therefore do not require any reassembly. This means that, irrespective of the number of reassembly buffers, every segment will find a free buffer. This applies to both allocation techniques. The graph in figure 5.4 shows the total throughput, which is the sum of the isochronous and asynchronous throughput.

5.1.1.3 Based on Priority

The graphs in figure 5.5 are for buffer allocation based on priority, with and without preemption. The behavior of the network in either case, allocation based on source or allocation based on priority, is found to be the same; the transition in throughput for both asynchronous and isochronous nodes is essentially identical.

The variation of throughput with the number of buffers for allocation based on priority with preemption, however, is slightly different from the other schemes. There is a sudden increase in the asynchronous throughput when the number of buffers increases from one to two for N = 5, 7, 9 and 11. This is because previously the low priority packets were always preempted and all of their segments were dropped (in a single

















[Figure: panels plotting throughput (Mbps) against the number of buffers for 5 to 20 nodes, for priority-based allocation with and without preemption.]

Figure 5.5. Variation of throughput with number of buffers, based on priority, all cases.











buffer case). So an increase in the number of buffers basically provides these asynchronous nodes with a reassembly buffer. But when the number of isochronous nodes in the network increases further, i.e., in any case where N > 11, blocking occurs, the asynchronous nodes never get access to the channel, and the asynchronous throughput for these nodes is always 0.

5.1.1.4 Effect on the number of FAILs

A sender gets a FAIL from the receiver in two situations: first, when there are no resources available, i.e., no buffer is available for the packet that just came in; and second, when the resources are busy, i.e., when the buffer for that particular source or priority is already in use. It is seen that as the number of reassembly buffers is increased from one to two there is a sudden fall in the number of FAILs issued. This is quite evident from Tables A.1, A.2, A.3 and A.4, irrespective of the allocation scheme used. Figure 5.6 shows the variation of the number of FAILs with the number of buffers.


[Figure: two panels plotting the number of FAILs against the number of buffers for 5 to 20 nodes, for priority-based allocation without and with preemption.]

Figure 5.6. Variation of number of FAILs with number of buffers, based on priority, with and without preemption.

It is also seen that the number of FAILs varies little once two or more buffers are used. This effect is seen because when there is just one reassembly buffer, a receiver can receive from only one sender, so when a new node tries to send to the same receiver it gets a FAIL, or resource not available signal, unlike in the two buffer case.

It is also seen that there is not much difference in the total number of FAILs in the network between a two buffer case and any x buffer case, where x lies between 2 and N-1, when there are N nodes in the network. This is because, at any given time, only a certain number of packets manage to reach the same destination simultaneously, owing to the way the TAMA protocol operates. So the total number of FAILs in the network saturates at some point. This behavior is seen when the number of asynchronous nodes in the network is increased. However, for most of the cases we have seen here (where the number of asynchronous nodes is 2), the number of FAILs reduces to 0.




5.1.1.5 Reserve a high priority buffer

It is seen that, by reserving a small buffer for VoIP traffic, optimal throughput can be achieved even with a single large (low priority) buffer. The results obtained with a two buffer case when allocation is based on priority and with the reserved high priority buffer case are essentially the same, as shown in the graph in figure 5.7.


[Figure: total throughput (Mbps) versus the number of nodes, comparing the reserved high priority buffer case with the two buffer, priority-based case.]

Figure 5.7. Comparison between a 2 buffer case and reserving a high priority buffer case.

This option of reserving a high priority buffer in the TAMA protocol, along with multiple buffers, would be worth implementing only if the size of the buffers is taken into consideration. Setting size aside, there is hardly any difference, in either the throughput or the total number of buffers used, between using two buffers with priority-based allocation and using a single FCFS buffer with another buffer reserved for high priority traffic.



5.1.2 Multimedia Scenario

The behavior of the network is found to be exactly the same as in our general traffic scenario. The variation of the throughput with the number of buffers and nodes is also essentially the same. The only difference is in the throughput itself: the total throughput obtained in the multimedia scenario is slightly lower than that obtained in the general traffic scenario, for all the cases. This is because there is not much difference in the number of packets successfully getting across the network between a general traffic scenario and a multimedia scenario, even though there is a change in the traffic that is being generated. This is essentially because of the way the protocol operates.

The detailed variation of throughput with the number of buffers and the number of nodes is shown in Tables A.9, A.10 and A.11 of the Appendix.

5.1.3 Hot Spots

The general behavior of the network under non-uniform traffic is no different from that under uniform traffic. Here the case of 5 nodes under hotspot conditions is analyzed in detail. Simulations were run using different fractions of non-uniformity. In the first case, 50% of the packets generated at every node were directed towards the hot receiver.











The total throughput of the network was reduced by 6% compared to the uniform case, and the


number of FAILs issued in the network increased by 30%.


The throughput reduces further for the larger fractions of non-uniformity. Similar sets of results were collected when the number of asynchronous nodes in the network was varied; the differences can be checked in Tables A.12, A.13 and A.14 of the Appendix. Buffer allocation based on priority showed the same kind of variation as in the other traffic scenarios. This means that, for any kind of traffic scenario, allocation based on priority achieves its optimal throughput when the number of reassembly buffers used is equal to two.


[Figure: total throughput (Mbps) versus the number of buffers for fractions of non-uniformity f50, f70, f80 and f90, and for the uniform case.]

Figure 5.8. Variation of total throughput with number of buffers when the number of asynchronous nodes in the network is 1, based on source case for the hotspot scenario.

[Figure: as above, with two asynchronous nodes.]

Figure 5.9. Variation of total throughput with number of buffers when the number of asynchronous nodes in the network is 2, based on source case for the hotspot scenario.

[Figure: as above, with three asynchronous nodes.]

Figure 5.10. Variation of total throughput with number of buffers when the number of asynchronous nodes in the network is 3, based on source case for the hotspot scenario.



However, a change was observed in this trend for buffer allocation based on source, as can be seen from the graphs in figures 5.8 through 5.10. When the number of asynchronous nodes in the network was varied, the saturation point changed: for buffer allocation based on source with two or three asynchronous nodes in the network, optimal throughput was obtained when the number of reassembly buffers used was three.

Here it can be seen that, with this kind of buffer management in the TAMA protocol, the throughput obtained in a two buffer case is optimal under both uniform and non-uniform traffic conditions. This is seen across the allocation techniques.


5.2 Comparative Summary

All the allocation schemes discussed here showed the same behavior. Throughput

obtained in a two buffer case is plotted against the number of nodes, for each of the

allocation schemes. The throughput considered in the graph in figure 5.11 is the total throughput obtained using two reassembly buffers.










It is clear from the graph that FCFS offers us the least throughput in either case,

with or without preemption. Also, this allocation technique resulted in the maximum number of FAILs.

Between allocation based on source and allocation based on priority there is

hardly any difference in the general traffic scenario considered. However, when we

consider unbalanced traffic or hotspots we see that for buffer allocation based on source,

optimal throughput is reached in a three buffer case. For the same hotspot

[Figure: total throughput (Mbps) with two reassembly buffers versus the number of nodes, for allocation based on source, based on priority, based on priority with preemption, and based on FCFS.]


Figure 5.11. Variation of throughput with number of nodes for all the allocation schemes.




scenario and for buffer allocation based on priority, optimal throughput is obtained in a

two buffer case. Also, both the cases almost show the same throughput.

This, however, is not the case when there is only one asynchronous node in the network. With just one asynchronous node, the variation of throughput with the number of buffers for buffer allocation based on source is the same as for buffer allocation based on priority in the hotspot scenario; that is, optimal throughput is achieved in a two buffer case. At any given time, though, it is likely that there would be more than one asynchronous node in the network.










For allocation based on priority we again have two cases, with and without preemption, showing almost the same results. The percentage difference between the cases is given in the graph in figure 5.12, which shows the percentage change in throughput between the preemptive and non-preemptive methods of buffer allocation based on priority. It is clear from the graph that there is not much difference in throughput when two

[Figure: percentage difference in throughput versus the number of nodes, with one buffer and with two buffers.]

Figure 5.12. Percentage difference in throughput between preemptive and non-preemptive methods of buffer allocation based on priority.




reassembly buffers are used by a receiver. However, throughput suffers badly in a single

buffer case when there is preemption in reassembly buffers. This is because the

asynchronous nodes would never get to transmit as the number of isochronous nodes in

the network increases.

Selection of an allocation technique depends on the kind of traffic flowing in the network. If there is a lot of VoIP or streaming media traffic, then using buffer allocation based on priority with preemption makes a considerable difference to the overall performance of the network.














CHAPTER 6
CONCLUSIONS AND FUTURE WORK




6.1 Conclusion

The objective of the thesis was to maximize the throughput by introducing

multiple buffers at the receiver side for reassembly. This report describes different

schemes considered to manage buffer allocation in the TAMA protocol. Two main

schemes were allocation based on source and allocation based on priority. Preemption

was also considered. In this thesis the effect of varying the number of buffers on the

throughput for different allocation schemes under various traffic conditions was analyzed

as the number of nodes in the network varied. The variation of number of FAILs with

buffers was also studied while the number of nodes in the network varied.

The introduction of multiple buffers at the receiver side increased the overall throughput. When the number of reassembly buffers increased from one to two, the total throughput was found to increase by at least 11%, and the total number of FAILs in the network was found to be reduced by at least 60%.

Buffer allocation based on source and buffer allocation based on priority, with or without preemption, showed similar results under general and multimedia traffic

conditions. The throughput increased as the number of buffers was increased from one to

two and was constant from there on. The minimal number of buffers required for the

network to reach its saturation point was found to be two under general and multimedia









scenarios for both the allocation techniques. The same transition was seen even for

hotspot traffic conditions when allocation was based on priority. However, there was a

slight increase in the minimal number of buffers required to obtain optimal throughput

under hotspot traffic conditions when buffer allocation was based on source. The optimal

throughput was obtained when the number of buffers was equal to three.

Preemption in buffers gives more opportunities for voice stations to transmit by

preempting a low priority packet. Depending on the Quality of Service requirements of

the network, buffer allocation based on priority, with or without preemption, can be used. This is because the difference in throughput between the preemptive and non-preemptive variants of the priority-based allocation scheme was found to be at most 2% in the two buffer case.

Reserving a buffer for VoIP traffic at each destination, along with multiple buffers, is worth implementing only if buffer size is taken into consideration. Otherwise, it was the same as using two buffers at the receiver side with buffer allocation based on priority.


6.2 Future Work

In this thesis the area of study was mostly finding the number of buffers required to obtain optimal throughput. Each of the buffers used was of infinite size. Further research could be done on optimizing the size of each buffer.

In the priority-based allocation scheme with preemption, a low priority packet undergoing reassembly was preempted and all of its segments were dropped when a high priority packet arrived. One direction of future work would be modifying this allocation technique to accommodate the already received segments of the preempted









packet without dropping them. Segments that were already received should be stored at

some place so that the reassembly process can be resumed as soon as the high priority

packet gets through. The complexity involved in such a buffer management technique should be weighed against the percentage increase in throughput gained by saving the already received segments of the preempted packet.

Also, the same analysis can be repeated for a larger number of asynchronous sources and for much worse channel conditions. One more interesting case would be when higher priority packets themselves need reassembly.















APPENDIX:
TABLES



The main sets of results used in this analysis were shown as graphs. The rest of the values are shown in the form of tables in this appendix.

Table A.1: Throughput and number (#) of FAILs for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on FCFS"; for a general traffic scenario.


Nodes: Throughput (Mbps) #of FAILs
Buffers Asynch Isch Total Asynch Isch
5: 1 6.278272 1.243499 7.521771 25 141
2 6.190464 1.740091 7.930555 0 19
3 6.366080 1.804688 8.17076 0 0
7: 1 4.873344 2.353765 7.227109 26 167
2 4.653824 2.818059 7.471883 1 17
3 4.895056 2.82800 7.720356 0 0
9: 1 3.600128 3.399435 6.999563 7 207
2 3.644032 4.190752 7.834784 0 11
3 3.731840 4.231125 7.962765 0 0
11: 1 2.195200 4.772128 6.967328 9 168
2 2.414720 5.434251 7.848971 0 4
3 2.283008 5.442325 7.72536 0 0
13: 1 0.000000 7.604864 7.604864 0 36
2 0.000000 7.518933 7.518933 0 0
3 0.000000 7.578011 7.578011 0 0
15: 1 0.000000 7.035573 7.035573 0 15
2 0.000000 7.137616 7.137616 0 0
3 0.000000 7.051685 7.051685 0 0
20: 1 0.000000 6.348128 6.348128 0 14
2 0.000000 6.471653 6.471653 0 0
3 0.000000 6.297749 6.297749 0 0












Table A.2: Throughput and number (#) of FAILs for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on FCFS with preemption"; for a general traffic scenario.

Nodes: Throughput (Mbps) #of FAILs
Buffers Asynch Isch Total Asynch Isch
5: 1
2 6.278272 1.816800 8.095072 4 0
3 6.366080 1.816800 8.18288 0 0
7: 1 2.019584 3.023963 5.043547 103 0
2 4.829440 3.023963 7.853403 7 0
3 4.741632 3.023963 7.765595 0 0
9: 1 0.702464 4.235163 4.937627 109 0
2 3.512320 4.239200 7.750512 7 0
3 3.556224 4.237893 7.794117 0 0
11: 1 0.131712 4.929584 5.061296 79 0
2 2.283008 5.159712 7.44272 1 0
3 2.283008 5.335288 7.518296 0 0
13: 1 0.000000 7.427632 7.427632 0 1
2 0.000000 7.475968 7.475968 0 0
3 0.000000 7.465879 7.465879 0 0
15: 1 0.000000 7.014091 7.014091 0 0
2 0.000000 7.094651 7.094651 0 0
3 0.000000 7.084096 7.084096 0 0
20: 1 0.000000 6.122560 6.122560 0 0
2 0.000000 6.133301 6.133301 0 0
3 0.000000 6.133232 6.133232 0 0










Table A.3: Throughput and number (#) of FAILs for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on Source"; for a general traffic scenario.

Nodes: Throughput (Mbps) # of FAILs
Buffers Asynch Isch Total Asynch Isch

5: 1 6.409984 1.279835 7.689819 13 132
2 6.190464 1.780464 7.970928 0 9
3 6.322176 1.816800 8.138976 0 0
4(n-1) 6.409984 1.812763 8.232747 0 0
7: 1 4.917248 2.349728 7.267976 13 147
2 5.048960 2.939179 7.989139 0 20
3 4.741632 2.882656 7.624988 0 0
4 5.005056 3.019925 8.024981 0 0
5 4.873344 3.007813 7.881157 0 0
6(n-1) 5.005056 3.028000 8.033056 0 0
9: 1 3.644032 3.403472 7.047504 7 177
2 3.644032 4.178640 7.822672 0 13
3 3.292800 3.932363 7.225163 0 0
4 3.687936 4.235163 7.913099 0 0
5 3.687936 4.239200 7.927136 0 0
8(n-1) 3.600128 4.182677 7.782805 0 0
11:1 0.658560 6.133301 6.791861 2 168
2 0.790272 7.105392 7.895664 0 12
3 0.658560 7.191323 7.839883 0 0
4 0.921984 7.207435 8.129419 0 0
5 0.965888 7.202064 8.167952 0 0
10(n-1) 0.658560 7.110763 7.769323 0 0
13: 1 0.0 7.379296 7.379296 0 29
2 0.0 7.610235 7.610235 0 0
3 0.0 7.540416 7.540416 0 0
4 0.0 7.749872 7.749872 0 0
5 0.0 7.583381 7.583381 0 0
12(n-1) 0.0 7.572640 7.572640 0 0
15: 1 0.0 7.030203 7.030203 0 17
2 0.0 7.030203 7.030203 0 0
3 0.0 7.320219 7.320219 0 0
4 0.0 7.083909 7.083909 0 0
5 0.0 7.073168 7.073168 0 0
14(n-1) 0.0 7.019461 7.019461 0 0
20: 1 0.0 6.396464 6.396464 0 4
2 0.0 6.315904 6.315904 0 0
3 0.0 6.262197 6.262197 0 0
4 0.0 6.203120 6.203120 0 0
5 0.0 6.240715 6.240715 0 0
19(n-1) 0.0 6.267190 6.267190 0 0









Table A.4: Throughput and number (#) of FAILs for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on Priority"; for a general traffic scenario.

Nodes: Throughput (Mbps) # of FAILs
Buffers Asynch Isch Total Asynch Isch

5: 1 6.146560 1.300021 7.446581 26 128
2 6.322176 1.816800 8.138976 20 0
7: 1 4.741632 3.015888 7.757520 22 0
2 4.917248 3.023963 7.94121 9 0
9: 1 3.600128 3.415584 7.015712 5 204
2 3.468416 4.045408 7.513824 0 0
11: 1 0.746368 5.972181 6.718549 10 218
2 0.834176 7.185952 8.020128 3 0
13: 1 0.0 7.653200 7.653200 0 18
2 0.0 7.513563 7.513563 0 0
15 : 1 0.0 7.019461 7.019461 0 12
2 0.0 7.239659 7.239659 0 0
20: 1 0.0 6.111819 6.111819 0 13
2 0.0 6.197749 6.197749 0 0

Table A.5: Throughput and number (#) of FAILs for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on Priority with preemption"; for a general traffic scenario.


Nodes: Throughput (Mbps) # of FAILs
Buffers Asynch Isch Total Asynch Isch

5: 1 3.424512 1.816800 5.241312 102 0
2 6.190464 1.812763 8.003227 36 0
7: 1 2.063488 3.028000 5.091488 105 0
2 4.961152 3.023963 7.985115 11 0
9: 1 1.097600 4.239200 5.336800 100 0
2 3.556224 4.239200 7.795425 0 0
11: 1 0.0 7.196693 7.196693 63 0
2 0.702464 7.191323 7.893787 10 0
13: 1 0.0 7.422261 7.422261 1 0
2 0.0 7.52965 7.52965 0 0
15: 1 0.0 7.218176 7.218176 0 0
2 0.0 7.003349 7.003349 0 0
20: 1 0.0 6.332016 6.332016 0 0
2 0.0 6.095707 6.095707 0 0









Table A.6: Mean CR (contention resolution) slots and probability of collision for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on Source"; for a general traffic scenario.
Nodes: Mean CR slots Prob of collision
Buffers Asynch Isch Asynch Isch
5: 1 3.21 3.77 0.034752 0.004608
2 3.3 4.17 0.021978 0.058571
3 3.26 3.89 0.050177 0.00000
4(n-1) 3.2 3.88 0.042867 0.006818
7: 1 3.6 4.14 0.050757 0.022339
2 3.08 4.23 0.039858 0.035387
3 4.2 4.86 0.08840 0.096591
4 3.35 3.69 0.043592 0.041397
5 3.5 3.91 0.062999 0.058752
6(n-1) 3.3 4.05 0.061419 0.009211
9: 1 3.79 3.66 0.066992 0.099517
2 3.24 4.14 0.065960 0.030584
3 4.50 5.18 0.104803 0.128009
4 3.34 4.09 0.050985 0.026490
5 3.37 3.84 0.054893 0.042435
8(n-1) 3.56 4.20 0.063981 0.045536
11: 1 3.40 3.04 0.048387 0.103774
2 3.43 3.18 0.050459 0.086088
3 3.61 3.66 0.080808 0.082012
4 3.10 3.86 0.053097 0.067227
5 3.24 3.16 0.051383 0.062544
10(n-1) 2.87 2.89 0.035928 0.119174
13: 1 2.50 2.00 0.250000 0.161791
2 2.00 1.97 0.0000 0.156454
3 3.00 2.02 0.33333 0.162194
4 1.50 2.17 0.00000 0.138425
5 0.00 2.05 0.00000 0.158428
12(n-1) 3.33 1.96 0.00000 0.159714
15: 1 0.00 1.29 0.0000 0.217699
2 3.00 1.35 0.000 0.225444
3 0.00 1.33 0.0000 0.195870
4 1.00 1.37 0.0000 0.219397
5 0.00 1.41 0.0000 0.218731
14(n-1) 6.00 1.34 0.000 0.225711
20: 1 1.00 0.80 0.0 0.295227
2 0.00 0.81 0.0 0.305195
3 0.00 0.85 0.0 0.312084
4 6.00 0.85 0.0 0.316568
5 1.00 0.83 0.0 0.312833
19(n-1) 1.00 0.85 0.0 0.308102









Table A.7: Mean CR (contention resolution) slots and probability of collision for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on priority, no preemption"; for a general traffic scenario.

Mean CR slots Prob of collision
Nodes: Buffers Asynch Isch Asynch Isch
5: 1 3.61 4.10 0.042355 0.033113
2 3.30 4.0 0.035386 0.002304
7: 1 3.68 4.22 0.057710 0.052375
2 3.25 4.38 0.039747 0.045278
9: 1 3.86 3.46 0.042579 0.052055
2 4.26 4.40 0.089151 0.091413
11: 1 3.38 3.69 0.058824 0.074461
2 3.36 3.45 0.070796 0.076442
13: 1 0.00 2.07 0.0 0.141582
2 2.02 1.98 0.50000 0.167461
15: 1 4.00 1.31 0.0 0.220319
2 1.00 1.40 0.0 0.200829
20: 1 3.00 0.85 0.0 0.318343
2 0.50 0.84 0.0 0.316538


Table A.8: Mean CR (contention resolution) slots and probability of collision for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on priority with preemption"; for a general traffic scenario.

Mean CR slots Prob of collision
Nodes: Buffers Asynch Isch Asynch Isch
5: 1 3.78 4.24 0.033563 0.032333
2 3.40 4.07 0.040874 0.002257
7: 1 3.90 3.98 0.027807 0.011905
2 3.36 4.14 0.051709 0.014667
9: 1 4.21 4.08 0.040639 0.039354
2 3.38 4.28 0.052758 0.042357
11: 1 3.84 3.59 0.080000 0.071124
2 3.05 3.09 0.060748 0.089973
13: 1 2.80 1.92 0.0 0.175909
2 2.13 2.02 0.0 0.160885
15: 1 0.00 1.34 0.0 0.206143
2 0.00 1.39 0.0 0.226896
20: 1 0.00 0.82 0.0 0.304425
2 0.00 0.82 0.0 0.328208










Table A.9: Throughput and number (#) of FAILs for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on Source"; for the multimedia scenario.
Nodes: Throughput (Mbps) # of FAILs
Buffers Asynch Isch Total Asynch Isch
5: 1 6.102656 1.053744 7.1564 53 189
2 6.278272 1.562448 7.84072 58 0
3 6.322176 1.816800 8.138976 0 0
4(n-1) 6.278272 1.816800 8.095072 0 0
7: 1 4.697728 1.865248 6.562976 30 256
2 5.048960 2.826133 7.875093 0 49
3 5.005056 3.028000 8.033056 0 0
4 5.092864 3.023963 8.116824 0 0
5 4.961152 3.028000 7.899152 0 0
6(n-1) 5.092864 3.028000 8.120864 0 0
9: 1 3.687936 2.995701 6.683637 15 278
2 3.687936 4.154416 7.842352 0 21
3 3.292800 3.980811 7.273611 0 0
4 3.600128 4.239200 7.839328 0 0
5 3.775744 4.239200 8.014944 0 0
8(n-1) 3.775744 4.235163 8.010907 0 0
11:1 2.195200 4.178640 6.37384 7 309
2 2.370816 5.313131 7.683947 32 0
3 2.239104 5.014368 7.253472 0 0
4 2.239104 5.438288 7.677392 0 0
5 2.195200 5.127413 7.3239413 0 0
10(n-1) 2.195200 5.283611 7.478811 0 0
13: 1 0.0 7.239659 7.239659 0 55
2 0.0 7.535045 7.535045 0 0
3 0.0 7.706907 7.706907 0 0
4 0.0 7.551157 7.551157 0 0
5 0.0 7.620976 7.620976 0 0
12(n-1) 0.0 7.653200 7.653200 0 0
15: 1 0.000 6.756299 6.756299 0 41
2 0.000 6.777781 6.777781 0 0
3 0.0000 7.137616 7.137616 0 0
4 0.0000 7.153728 7.153728 0 0
5 0.0000 7.185952 7.185952 0 0
14(n-1) 0.0000 7.100021 7.100021 0 0
20: 1 0.0 6.101077 6.101077 0 22
2 0.0 6.187008 6.187008 0 0
3 0.0 6.133301 6.133301 0 0
4 0.0 6.165525 6.165525 0 0
5 0.0 6.407205 6.407205 0 0
19(n-1) 0.0 6.117189 6.117189 0 0












Table A.10: Throughput and number (#) of FAILs for asynchronous (asynch) and isochronous (isch) nodes, respectively, when buffer allocation is "Based on Priority"; for the multimedia scenario.

Nodes: Throughput (Mbps) # of FAILs
Buffers Asynch Isch Total Asynch Isch
5: 1 6.014848 1.037595 7.052443 64 193
2 6.058752 1.816800 7.875552 65 0
7: 1 4.785536 1.982331 6.767867 55 259
2 4.873344 3.028000 7.901344 51 0
9: 1 3.600128 3.003776 6.603904 29 316
2 3.600128 4.235163 7.835291 37 0
11: 1 2.195200 4.283611 6.478811 20 288
2 2.151296 5.438288 7.589584 26 0
13: 1 0.0 6.820747 6.820747 0 123
2 0.0 7.647829 7.647829 0 0
15: 1 0.0 6.965755 6.965755 0 17
2 0.0 7.105392 7.105392 0 0
20: 1 0.0 6.117189 6.117189 0 22
2 0.0 6.369611 6.369611 0 0


Table A.11: Throughput and number of FAILs for asynchronous (asynch) and
isochronous (isch) nodes when buffer allocation is "Based on priority with
preemption"; multimedia scenario.

                    Throughput (Mbps)                   # of FAILs
Nodes: Buffers    Asynch      Isch        Total       Asynch   Isch
5:     1          0.658560    1.816800    2.475360    141      0
       2          6.014848    1.812763    7.827611    63       0
7:     1          0.614656    3.023963    3.638619    147      0
       2          4.390400    2.793835    7.184235    52       0
9:     1          0.570752    3.439808    4.010560    96       0
       2          3.380608    4.235163    7.615771    70       0
11:    1          0.175616    5.426176    5.601792    96       0
       2          2.063488    5.442325    7.505813    9        0
13:    1          0.0         7.502821    7.502821    0        0
       2          0.0         7.572640    7.572640    0        0
15:    1          0.0         7.110763    7.110763    0        0
       2          0.0         7.180581    7.180581    0        0
20:    1          0.0         6.444800    6.444800    0        0
       2          0.0         6.369611    6.369611    0        0


Table A.12: Throughput and number of FAILs for asynchronous (asynch) and
isochronous (isch) nodes for different fractions of non-uniformity, for a
five-node network with one asynchronous node; hotspot scenario.

                      Throughput (Mbps)                   # of FAILs
Fraction: Buffers   Asynch      Isch        Total       Asynch   Isch

Buffer Allocation Based on Source
F50:      1         5.444096    2.395317    7.839413    0        154
          2         5.400192    3.217029    8.617221    0        0
          3         5.356288    3.222400    8.578688    0        0
          4         5.312384    3.217029    8.529413    0        0
F70:      1         5.312384    2.056905    7.369289    0        217
          2         5.400192    3.217029    8.617221    0        0
          3         5.268480    3.211659    8.480139    0        0
          4         5.356288    3.217029    8.573317    0        0
F80:      1         5.400192    1.745467    7.145659    0        275
          2         5.444096    3.217029    8.661125    0        0
          3         5.400192    3.222400    8.622592    0        0
          4         5.400192    3.222400    8.622592    0        0
F90:      1         5.356288    1.385632    6.741920    0        341
          2         5.356288    3.222400    8.578688    0        185
          3         5.400192    3.222400    8.622592    0        0
          4         5.400192    3.217029    8.617221    0        0

Buffer Allocation Based on Priority
F50:      1         5.356288    2.465136    7.821424    0        141
          2         5.356288    3.222400    8.578688    0        0
F70:      1         5.312384    1.922699    7.235083    0        213
          2         5.444096    3.222400    8.66336     0        0
F80:      1         5.356288    1.681019    7.037307    0        277
          2         5.400192    3.222400    8.622592    0        0
F90:      1         5.356288    1.288960    6.645248    0        360
          2         5.224576    3.222400    8.446976    0        0

Buffer Allocation Based on Priority with Preemption
F50:      1         0.746368    3.206288    3.952656    77       0
          2         5.312384    3.222400    8.534784    0        0
F70:      1         0.219520    3.217029    3.43904     91       0
          2         5.356288    3.211659    8.567947    0        0
F80:      1         0.087808    3.222400    3.310208    97       0
          2         5.400192    3.222400    8.622592    0        0
F90:      1         0.087808    3.222400    3.310208    100      0
          2         5.400192    3.222400    8.622592    0        0
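
The labels F50 through F90 give the fraction of traffic concentrated on the
hotspot node. As a rough sketch, assuming that a fraction f of packets is
addressed to the hotspot and the remainder is spread uniformly over the other
nodes (this interpretation and the function below are illustrative assumptions,
not the simulator's actual traffic generator):

    import random

    # Sketch: choose a packet destination under hotspot fraction f
    # (e.g., f = 0.5 for F50, f = 0.9 for F90).
    def pick_destination(nodes, hotspot, f):
        if random.random() < f:
            return hotspot
        others = [n for n in nodes if n != hotspot]
        return random.choice(others)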


Table A.13: Throughput and number of FAILs for asynchronous (asynch) and
isochronous (isch) nodes for different fractions of non-uniformity, for a
five-node network with two asynchronous nodes; hotspot scenario.

                      Throughput (Mbps)                   # of FAILs
Fraction: Buffers   Asynch      Isch        Total       Asynch   Isch

Buffer Allocation Based on Source
F50:      1         5.707520    1.562864    7.270384    26       159
          2         5.970944    2.207344    8.178288    0        38
          3         5.839232    2.416800    8.256032    0        0
          4         5.795328    2.411429    8.206757    0        0
F70:      1         5.663616    1.353408    7.017024    61       198
          2         5.839232    2.083819    7.923051    0        60
          3         5.883136    2.416800    8.299936    0        0
          4         5.970944    2.416800    8.387744    0        0
F80:      1         5.488000    1.256736    6.744736    78       216
          2         5.839232    1.621941    7.461173    0        147
          3         6.014848    2.411429    8.426277    0        0
          4         5.927040    2.411429    8.338469    0        0
F90:      1         5.575808    1.025797    6.601605    90       258
          2         6.014848    1.423227    7.447118    0        185
          3         5.883136    2.411429    8.294565    0        0
          4         5.970944    2.416800    8.387744    0        0

Buffer Allocation Based on Priority
F50:      1         5.707520    1.562864    7.270384    33       159
          2         5.883136    2.416800    8.299936    33       0
F70:      1         5.663616    1.358779    7.022395    53       197
          2         5.619712    2.411429    8.031141    78       0
F80:      1         5.663616    1.213771    6.877387    75       224
          2         5.663616    2.411429    8.075045    73       0
F90:      1         5.575808    1.031168    6.606976    97       258
          2         5.619712    2.416800    8.036512    84       0

Buffer Allocation Based on Priority with Preemption
F50:      1         2.019584    2.411429    4.431013    129      0
          2         5.839232    2.416800    8.256032    32       0
F70:      1         1.097600    2.416800    3.514400    156      0
          2         5.751424    2.411429    8.162853    67       0
F80:      1         0.395136    2.416800    2.811936    177      0
          2         5.619712    2.406059    8.025771    73       0
F90:      1         0.087808    2.411429    2.499237    154      0
          2         5.575808    2.411429    7.987237    94       0


Table A.14: Throughput and number of FAILs for asynchronous (asynch) and
isochronous (isch) nodes for different fractions of non-uniformity, for a
five-node network with three asynchronous nodes; hotspot scenario.

                      Throughput (Mbps)                   # of FAILs
Fraction: Buffers   Asynch      Isch        Total       Asynch   Isch

Buffer Allocation Based on Source
F50:      1         6.058752    0.966720    7.025472    84       120
          2         6.497792    1.348037    7.845829    10       48
          3         6.453888    1.562864    8.016752    0        9
          4         6.453888    1.611200    8.065088    0        0
F70:      1         5.970944    0.982832    6.953776    143      117
          2         6.366080    1.305072    7.671152    57       27
          3         6.366080    1.476933    7.843013    0        25
          4         6.409984    1.605829    8.015813    0        0
F80:      1         5.707520    0.886160    6.593680    188      135
          2         6.102656    0.982832    7.085488    68       117
          3         6.497792    1.294331    7.792123    0        59
          4         6.409984    1.611200    8.021184    0        0
F90:      1         5.839232    0.918384    6.757616    158      128
          2         6.322176    1.186917    7.509093    41       79
          3         6.453888    1.380261    7.834149    0        43
          4         6.497792    1.611200    8.108992    0        0

Buffer Allocation Based on Priority
F50:      1         6.058752    0.945237    7.003989    109      123
          2         5.927040    1.605829    7.532869    100      0
F70:      1         5.883136    0.945237    6.828373    165      123
          2         5.883136    1.611200    7.494336    126      0
F80:      1         5.795328    0.918384    6.713712    166      129
          2         5.839232    1.611200    7.450432    152      0
F90:      1         5.707520    0.950608    6.658128    190      122
          2         5.663616    1.611200    7.274816    180      0

Buffer Allocation Based on Priority with Preemption
F50:      1         3.731840    1.611200    5.343040    138      0
          2         6.190464    1.611200    7.801664    91       0
F70:      1         2.414720    1.611200    4.025920    201      0
          2         5.839232    1.600459    7.439691    138      0
F80:      1         1.668532    1.611200    3.279732    213      0
          2         5.927040    1.605829    7.532869    159      0
F90:      1         0.746368    1.611200    2.357568    232      0
          2         5.795328    1.600459    7.395787    178      0


LIST OF REFERENCES


[1] "Powerline Networking Moves Ahead," all Net Devices, June 28 2000.
www.intellon.com/press/mediacoverage. asp.


[2] "Electrical Wiring to Create Home Network,"Orlando Sentinel June 6, 2000.


[3] Dostert, Klaus, "Telecommunications over the Power Distribution Grid;
Possibilities and Limitations," Proceedings of International Symposium on
Power-line Communications and its Applications, Germany, 1997.


[4] Gardner, S., Markwalter, B. and Yonge, L., "HomePlug Standard Brings
Networking to the Home," CSD, December 2000 feature.
www.csdmag.com/main/2000/12/0012feat5.htm.


[5] Tanenbaum, Andrew, Computer Networks, Third Edition, Prentice-Hall, Upper
Saddle River, NJ, 1996.


[6] Arpaci, Mutlu and Copeland, John, "Buffer Management For Shared-Memory ATM
Switches," IEEE Communications Surveys, First Quarter 2000.


[7] Saleh, Mahmoud and Atiquzzaman, Mohammed, "An Exact Model For Analysis
of Shared Buffer Delta Networks With Arbitrary Output Distribution," IEEE
Second International Conference on Algorithms and Architectures for Parallel
Processing, 11-13 June 1996, Singapore, pp. 147-154.


[8] Saleh, Mahmoud and Atiquzzaman, Mohammed, "Analysis of Shared Buffer
Multistage Networks with Hot Spot," ICA3PP'95: IEEE First International
Conference on Algorithms and Architectures for Parallel Processing. 19-21 April
1995, Brisbane, Australia, pp.799-808.


[9] Zhou, Bin and Atiquzzaman, Mohammed, "Performance Modeling of Multistage
ATM Switching Fabrics," ATNAC '94: Australian Telecommunication Networks
and Application Conference, 5-7 Dec. 1994, Melbourne, Australia, pp. 657-662.


[10] Zhou, Bin and Atiquzzaman, Mohammed, "Efficient Analysis of Multistage
Interconnection Networks Using Finite Output-Buffered Switching Elements,"
Tech. Rep. 15/94, La Trobe University, Melbourne, Department of Computer
Science, July 1994, IEEE INFOCOM '95, Boston, MA.


[11] Latouche, G., "Exponential Servers Sharing a Finite Storage: Comparison of Space
Allocation Policies," IEEE Trans. Commun., vol. COM-28, no. 6, June 1980, pp.
910-15.


[12] Irland, M., "Buffer Management in a Packet Switch," IEEE Trans. Commun., vol.
COM-26, no. 3, Mar. 1978, pp. 328-37.


[13] Kamoun, F. and Kleinrock, L. "Analysis of Shared Finite Storage in a Computer
Network Node Environment under General Traffic Conditions," IEEE Trans.
Commun., vol. 41, no. 1, Jan. 1993, pp. 237-45.


[14] Fong, S. and Singh, S., "Analytical Modeling of Shared Buffer ATM Switches
with Hot-Spot Push-out under Bursty Traffic," Proc. IEEE GLOBECOM '96,
vol. 2, Nov. 1996, pp. 835-9.


[15] Kaizawa, Yasuhito and Marubayashi, Gen, "Needs for the Power Line
Communications and its Applications," Proceedings of International Symposium
on Power-line Communications and its Applications, U.K., 1998.


[16] Brackmann, Ludwig, "Power Line Applications with European Home Systems,"
Proceedings of International Symposium on Power-line Communications and its
Applications, Germany, 1997.


[17] Prasad, Ramjee and van Nee, Richard, OFDM Wireless Multimedia
Communications, Artech House, Boston, 2000.


[18] Katar, Srinivas, "Analysis of Tone Allocated Multiple Access Protocol," Master's
Thesis, University of Florida, Gainesville, Spring 2000.


[19] Molloy, M. K., Fundamentals of Performance Modeling, Macmillan Publishing
Company, New York, 1989.


BIOGRAPHICAL SKETCH

Usha Suryadevara was born in Guntur, India, on September 25, 1978. She graduated
from Utkal University, India, in May 1999 with a B.Tech. in computer engineering. She
joined the University of Florida in August 1999 and received her M.S. in computer and
information science, under the guidance of Dr. Richard E. Newman, in December 2001.
She plans to work in the area of computer networks.