
Traffic control in TCP/IP networks

University of Florida Institutional Repository


First and foremost, I would like to express my sincere gratitude to my advisor, Prof. Yuguang Fang, for his invaluable advice and encouragement within the past several years when I have been with the Wireless Networks Laboratory (WINET). My work would not have been completed if I had not had his guidance and support. I would like to acknowledge my other committee members, Prof. Shigang Chen, Prof. Tan Wong, and Prof. Janise McNair, for serving on my supervisory committee and for their helpful suggestions and constructive criticism. My thanks also go to Prof. John Shea, Prof. Dapeng Wu and Prof. Jianbo Gao, for their expert advice. I would like to extend my thanks to my colleagues in WINET for creating a friendly environment and for offering great help during my research. They are Dr. Younggoo Kwon, Dr. Wenjing Lou, Dr. Wenchao Ma, Dr. Wei Liu, Dr. Xiang Chen, Dr. Byung-Seo Kim, Dr. Xuejun Tian, Dr. Hongqiang Zhai, Dr. Yanchao Zhang, Dr. Jianfeng Wang, Yu Zheng, Xiaoxia Huang, Yun Zhou, Chi Zhang, Frank Goergen, Pan Li, Rongsheng Huang, and many others who have offered help with this work. I owe a special debt of gratitude to my family. Without their selfless care, constant support and unwavering trust, I would never imagine what I have achieved. I would also like to thank the U.S. Office of Naval Research and the U.S. National Science Foundation for providing the grants.


                                                                        page

ACKNOWLEDGMENTS .............................................................. iv
LIST OF TABLES ............................................................... vii
LIST OF FIGURES .............................................................. viii
ABSTRACT ..................................................................... x

CHAPTER

1  INTRODUCTION .............................................................. 1

2  DIFFERENTIATED BANDWIDTH ALLOCATION AND TCP PROTECTION IN CORE NETWORKS .. 11

   2.1  Introduction ......................................................... 11
        2.1.1  TCP Protection ................................................ 12
        2.1.2  Bandwidth Differentiation ..................................... 14
   2.2  The CHOKeW Algorithm ................................................. 16
   2.3  Model ................................................................ 23
        2.3.1  Some Useful Probabilities ..................................... 23
        2.3.2  Steady-State Features of CHOKeW ............................... 27
        2.3.3  Fairness ...................................................... 29
        2.3.4  Bandwidth Differentiation ..................................... 33
   2.4  Performance Evaluation ............................................... 34
        2.4.1  Two Priority Levels with the Same Number of Flows ............. 35
        2.4.2  Two Priority Levels with Different Number of Flows ............ 38
        2.4.3  Three or More Priority Levels ................................. 38
        2.4.4  TCP Protection ................................................ 39
        2.4.5  Fairness ...................................................... 42
        2.4.6  CHOKeW versus CHOKeW-RED ...................................... 43
        2.4.7  CHOKeW versus CHOKeW-avg ...................................... 45
        2.4.8  TCP Reno in CHOKeW ............................................ 47
   2.5  Implementation Considerations ........................................ 48
        2.5.1  Buffer for Flow IDs ........................................... 48
        2.5.2  Parallelizing the Drawing Process ............................. 50
   2.6  Conclusion ........................................................... 51


3  CONTAX: AN ADMISSION CONTROL AND ... ..................................... 52

   3.1  Introduction ......................................................... 52
   3.2  The ConTax Scheme .................................................... 54
        3.2.1  The ConTax-CHOKeW Framework ................................... 54
        3.2.2  The Pricing Model of ConTax ................................... 56
        3.2.3  The Demand Model of Users ..................................... 58
   3.3  Simulations .......................................................... 60
        3.3.1  Two Priority Classes .......................................... 61
        3.3.2  Three Priority Classes ........................................ 61
        3.3.3  Higher Arriving Rate .......................................... 65
   3.4  Conclusion ........................................................... 68

4  A GROUP-BASED PRICING AND ADMISSION CONTROL STRATEGY FOR WIRELESS
   MESH NETWORKS ............................................................ 72

   4.1  Introduction ......................................................... 72
   4.2  Groups in WMNs ....................................................... 75
   4.3  The Pricing Model for APRIL .......................................... 77
   4.4  The APRIL Algorithm .................................................. 78
        4.4.1  Competition Type for APRIL .................................... 78
        4.4.2  Maximum Benefit Principle for Users ........................... 80
        4.4.3  Nonnegative Benefit Principle for Providers ................... 82
        4.4.4  Algorithm Operations .......................................... 88
   4.5  Performance Evaluation ............................................... 88
   4.6  Conclusion ........................................................... 92

5  FUTURE WORK ............................................................... 93

REFERENCES ................................................................... 95

BIOGRAPHICAL SKETCH .......................................................... 106


Table                                                                       page

2-1  The State of CHOKeW vs. the Range of L ................................. 22


Figure                                                                      page

1-1   Buffer management and scheduling modules in a router .................. 3
2-1   CHOKeW algorithm ...................................................... 19
2-2   Algorithm of updating p0 .............................................. 20
2-3   Network topology ...................................................... 20
2-4   RCF of RIO and CHOKeW under a scenario of two priority levels ......... 36
2-5   Aggregate TCP goodput vs. the number of TCP flows under a scenario
      of two priority levels ................................................ 37
2-6   Average per-flow TCP goodput vs. w(2)/w(1) when 25 flows are
      assigned w(1) = 1 and 75 flows w(2) ................................... 37
2-7   Aggregate goodput vs. the number of TCP flows under a scenario of
      three priority levels ................................................. 39
2-8   Aggregate goodput vs. the number of TCP flows under a scenario of
      four priority levels .................................................. 40
2-9   Aggregate goodput vs. the number of UDP flows under a scenario to
      investigate TCP protection ............................................ 41
2-10  Basic drawing factor p0 vs. the number of UDP flows under a scenario
      to investigate TCP protection ......................................... 41
2-11  Fairness index vs. the number of flows for CHOKeW, RED and BLUE ....... 43
2-12  Link utilization of CHOKeW and CHOKeW-RED ............................. 43
2-13  Average queue length of CHOKeW and CHOKeW-RED ......................... 45
2-14  Aggregate TCP goodput of CHOKeW and CHOKeW-RED ........................ 46
2-15  Average queue length of CHOKeW and CHOKeW-avg ......................... 46
2-16  Aggregate TCP goodput of CHOKeW and CHOKeW-avg ........................ 47
2-17  Link utilization, aggregate goodput (in Mb/s), and the ratio of
      minimum goodput to average goodput of TCP Reno ........................ 48
2-18  Extended matched drop algorithm with ID buffer ........................ 49


3-1   ....................................................................... 54
3-2   ConTax algorithm ...................................................... 59
3-3   Supply-demand relationship when ConTax is used. The left graph shows
      the price-supply curves, and the right graph the price-demand curves
      for each class ........................................................ 60
3-4   Dynamics of network load (i.e., Σ_{i=1}^{I} i n_i) in the case of two
      priority classes ...................................................... 62
3-5   Number of users that are admitted into the network in the case of
      two priority classes .................................................. 63
3-6   Demand of users in the case of two priority classes ................... 64
3-7   Aggregate price in the case of two priority classes ................... 64
3-8   Dynamics of network load in the case of three priority classes ........ 65
3-9   Number of users that are admitted into the network in the case of
      three priority classes ................................................ 66
3-10  Demand of users in the case of three priority classes ................. 67
3-11  Aggregate price in the case of three priority classes ................. 67
3-12  Dynamics of network load when the arriving rate λ = 6 users/min ....... 68
3-13  Number of users that are admitted into the network when the arriving
      rate λ = 6 users/min .................................................. 69
3-14  Demand of users when the arriving rate λ = 6 users/min ................ 70
3-15  Aggregate price when the arriving rate λ = 6 users/min ................ 70
4-1   Tree topology formed by groups in a WMN ............................... 76
4-2   Prices vs. bandwidth of BellSouth DSL service plans ................... 78
4-3   Utility u_i and price p_i vs. resources x_i ........................... 82
4-4   Three possible shapes of b_i^{(k)}(x_{i+1}^{(k)}) ..................... 86
4-5   Determining X_{i+1}^{(k)} when (x_i − Σ_{j=1}^{k−1} x_{i+1}^{(j)}) ∧ x̂_i > 0 .. 87
4-6   Simulation network for APRIL .......................................... 89
4-7   Available bandwidth, bandwidth allocation, and benefit ................ 90



... of core routers must trade off the ability of per-flow control for the low complexity that ensures the forwarding speed. Since routers buffer those packets that cannot be forwarded immediately, and bottleneck routers are the devices that are prone to network congestion, buffer management is one of the crucial technologies for traffic control. Buffer management schemes usually use packet dropping or packet marking to control the traffic. For the best compatibility with TCP, we only discuss packet-dropping based buffer management in this work.

Buffer management mainly focuses on when, where, and how to drop packets from the buffer. Traditional buffer management drops packets only when the buffer is full (i.e., tail drop), which causes problems such as low link utilization and global synchronization, i.e., all TCP flows decrease and increase their sending rates at the same time. If packet drops happen before the buffer is full, the buffer management is also called Active Queue Management (AQM). Random Early Detection (RED) [60] was one of the pioneering works in AQM. It predefines two queue length thresholds, min_th and max_th. When the current queue length is smaller than min_th, the network is not regarded as congested, and no packet drop happens. When the current queue length is larger than min_th but smaller than max_th, the network is regarded as congested, and arriving packets are dropped according to a dropping probability between 0 and p_max, where p_max (p_max < 1) is a predefined parameter for adjusting the dropping probability. The larger the queue length is, the higher the dropping probability will be. If the current queue length is larger than max_th, all arriving packets are dropped due to the heavy network congestion.
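The RED drop decision described above can be sketched in a few lines. This is a minimal illustration, assuming the instantaneous queue length stands in for the averaged one; real RED tracks an exponentially weighted moving average of the queue length.

```python
import random

def red_drop(queue_len, min_th, max_th, p_max):
    """Sketch of RED's drop decision as described in the text.

    Simplification: uses the instantaneous queue length, whereas real
    RED computes an EWMA-averaged queue length.
    """
    if queue_len < min_th:
        return False          # below min_th: no congestion, never drop
    if queue_len > max_th:
        return True           # above max_th: heavy congestion, drop all
    # Between the thresholds, the dropping probability rises linearly
    # from 0 at min_th to p_max at max_th.
    p = p_max * (queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```

For example, with min_th = 10, max_th = 30 and p_max = 0.1, an arriving packet is never dropped while the queue holds fewer than 10 packets and is always dropped beyond 30.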


Figure 1-1: Buffer management and scheduling modules in a router

Many AQM algorithms were proposed afterwards. Generally, they were aimed at improving implementation efficiency (e.g., Random Exponential Marking (REM) [12]), increasing network stability (e.g., BLUE [59] and REM), protecting standard TCP flows (e.g., RED with Preferential Dropping (RED-PD) [100] and Flow-Valve [42]), or supporting multiple priority classes (e.g., RED with In/Out bit (RIO) [47]).

Here we need to clarify the functional difference between buffer management and scheduling. The relationship of buffer management and scheduling is illustrated in Fig. 1-1. In a router, the total buffer capacity is considered a buffer pool. Logically, the buffer pool can be shared by multiple queues. When a packet arrives, buffer management is the module that decides whether to let the arriving packet enter a queue, and which queue should be used to hold this packet if multiple queues are available. Buffer management also controls the queue lengths as well as the buffer occupancy by discarding packets from the buffer. Thus, a buffer management scheme determines when and from where a packet should be dropped. On the other hand, scheduling is the module that decides when to forward a packet and which packet should be forwarded if there is more than one packet in the buffer. In other words, a scheduler controls the packet forwarding order. Usually, when a buffer pool consists of multiple logical queues, the scheduling algorithm selects a packet at one of the heads of the queues to forward.


The simplest scheduling scheme is First Come First Served (FCFS). If all arriving packets enter the same queue from the tail and packets are sent out one by one from the head of the queue, FCFS is the scheduler. Generalized Processor Sharing (GPS) [113, 114] was considered an ideal scheduling discipline, designed to let the flows share the bandwidth in proportion to their weights. However, the fluid model of GPS is not amenable to a practical implementation. One popular class of implementations is scheduling schemes with round robin features, including Round Robin [106], Deficit Round Robin (DRR) [126], Stratified Round Robin [120], Class-Based Queueing (CBQ) [61], etc. They are able to achieve per-packet complexity O(1), but they tend to have poor burstiness and memory-requirement complexity O(N). Another popular class is timestamp based schedulers, such as Weighted Fair Queueing (WFQ) [52], WF2Q [23] and Self-Clocked Fair Queueing [71]. They have performance that is closer to the ideal GPS model, but the per-packet-processing complexity is usually larger than O(log N) and the memory-requirement complexity is O(N).

Recent research on buffer management and scheduling has also been extended to the following areas. In order to control the bandwidth and CPU resource consumption of in-network processing, Shin, Chong and Rhee proposed Dual-Resource Queue (DRQ) for approximating proportional fairness [125]. With respect to optical networks and photonic packet switches, Harai and Murata proposed a scheme as an expansion of a simple sequential scheduling [75]. Their scheme uses optical fiber delay lines to construct optical buffers, and the supported data rate is improved due to a parallel and pipeline processing architecture. In the wireless networking area, Alcaraz, Cerdan and García-Haro presented an AQM algorithm for Radio Link Control (RLC) in order to improve the performance of TCP connections over the Third Generation Partnership Project (3GPP) radio access network [6]; Chen and Yang developed a buffer management scheme for congestion avoidance in sensor networks [41]; Chou and Shin used a Last Come First Drop (LCFD) buffer management policy and post-handoff acknowledgement suppression to enhance the performance of smooth


handoff in wireless mobile networks [43]. By contrast, our design of buffer management focuses on the traffic control in DiffServ core networks. In addition, when we evaluate an AQM scheme, the performance has to be investigated with TCP, taking account of the fact that the dynamics of TCP have significant interactions with the dropping scheme. Therefore, bandwidth differentiation and TCP protection are two goals that we want to achieve.

When we evaluate the performance of buffer management, we also need to consider the combination of the buffer management and a scheduler. Conventionally, RED, BLUE, RIO, etc., work with FCFS in order to keep the simplicity of implementation and the low complexity of operations. On the other hand, some AQM schemes, such as Longest Queue Dropping (LQD) [131], work with WFQ, so that they are able to obtain good isolation among flows.

As TCP uses packet drops as the network congestion signal, some research [53, 129, 132, 143] considered loss ratio the measurement of resource differentiation to support priority service. These schemes simply assign a higher dropping rate to arriving packets from a flow in higher priority. When they are used in core routers, these schemes are faced with the dilemma of choosing between per-flow control and class-based control (i.e., all flows in the same priority class are treated as a single super-flow). If per-flow control is selected, the memory-requirement complexity will become O(N), which is unacceptable for a router working in high speed networks with a myriad of flows. If the class-based strategy is used, all flows in the same class, no matter whether a flow is a TCP flow or a non-TCP-friendly flow, will have the same loss ratio, and hence this type of scheme cannot protect TCP flows.

During the evolution of the Internet, the variety of applications has brought greatly heterogeneous requirements to the network. The goal of network design is not to provide perfect QoS to all users, but rather to give the different categories of applications a level of service commensurate with their particular needs [122]. In order to let the performance of our buffer management scheme meet the speed requirement of core routers, we combine the buffer management with an FCFS scheduler.
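To make the complexity comparison among schedulers concrete, here is a minimal sketch of one member of the O(1) round-robin class discussed above, Deficit Round Robin. The quantum bookkeeping is the textbook DRR mechanism; all parameter names are illustrative, not from the cited implementations.

```python
from collections import deque

def drr_schedule(queues, quanta, rounds):
    """Minimal Deficit Round Robin sketch.

    `queues` maps a flow id to a deque of packet sizes; `quanta` maps a
    flow id to its per-round quantum. Each round, a flow's deficit grows
    by its quantum and the flow may send head packets that fit, giving
    O(1) work per forwarded packet. Returns the forwarding order.
    """
    deficit = {f: 0 for f in queues}
    order = []
    for _ in range(rounds):
        for f, q in queues.items():
            if not q:
                deficit[f] = 0            # empty flows do not bank credit
                continue
            deficit[f] += quanta[f]
            while q and q[0] <= deficit[f]:
                deficit[f] -= q[0]
                order.append((f, q.popleft()))
    return order
```

With equal quanta, a flow sending 600-byte packets simply waits one extra round relative to a flow sending 300-byte packets, so the long-run shares stay proportional to the quanta.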


With regard to incorporating the priority services of DiffServ into TCP, two problems must be solved: TCP protection and bandwidth differentiation. We design a buffer management scheme, CHOKeW, to solve these two problems together. In previous work, schemes either focus only on TCP protection [42, 112] or on bandwidth differentiation [23, 47, 52]. To the best of our knowledge, no other scheme prior to CHOKeW has reached both goals. We discuss CHOKeW in detail in Chapter 2.

In addition to using buffer management and scheduling techniques to conduct resource allocation in core networks, a practical DiffServ solution also needs to include pricing and admission control. Pricing is an effective way to assign priority, especially in a gigantic open environment such as the Internet. As everybody wants to acquire the highest priority if the costs are the same, it is hard to imagine that a practically prioritized network has no pricing policies. When pricing is applied, users who are willing to pay a higher price are able to get better service.

It is known that in a classical economic system, consumers select the amount of resource consumption that results in the maximum benefit for themselves [50, 117]. When users have different utility functions (which correspond to the different network applications that are being used by the users), the optimal resource consumption for these users is also different from each other. In other words, a good pricing scheme can let the users adjust their network resource consumption based on their own utility functions. In this way, the limited network resources can be shared among those users based on their own choices.

On the other hand, we believe that admission control is also essential to maintaining network services well. Generally speaking, an admission control scheme is designed for maintaining the delivered QoS to different users (or connections, sessions, calls, etc.) at the target level by limiting the amount of ongoing traffic in the system [110]. The lack of admission control would strongly devalue the benefit that DiffServ can produce, and the deterioration of service quality resulting from network congestion cannot be solved only by devices working in core networks.
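The classical consumer-choice rule sketched above, where each user maximizes benefit = utility minus payment, can be illustrated in a few lines. The logarithmic utility below is only an assumed example of a concave utility function, not a function taken from this work.

```python
import math

def best_consumption(utility, price, amounts):
    """A user picks the amount x among candidate `amounts` that
    maximizes benefit = utility(x) - price * x (classical consumer
    choice; the search is brute force for clarity)."""
    return max(amounts, key=lambda x: utility(x) - price * x)

# An assumed concave utility, purely illustrative.
u_log = lambda x: 10 * math.log(1 + x)
```

With this utility, raising the unit price from 1.0 to 2.0 moves the optimal consumption from 9 units down to 4, showing how a price signal shapes demand differently for each utility function.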


Admission control can be centralized or distributed. Early research mainly discussed centralized admission control [9, 38, 134]. Centralized admission control, like any other centralized technique, has a single point of failure, and the admission requests may overload the admission control center when it is used in large networks. Distributed admission control can be further classified into two categories: collaborative schemes and local schemes. The design of collaborative schemes, similar to that of centralized schemes, needs to take account of network communication overhead. It is possible that the network congestion will further deteriorate due to the control packets carrying the information for collaboration when the network is already congested. By contrast, for local schemes, information collection and decision making are done locally. The challenge of designing a local scheme is to find the appropriate measurement of the network congestion status.

In the research area of admission control in wireless networks, study traditionally focused on the trade-off between the new call blocking probability and the change of handoff blocking probability due to the admission of new users [37, 39, 80, 81, 88]. For systems with hard capacity, that is, Time-Division Multiple Access (TDMA) and Frequency-Division Multiple Access (FDMA) systems, this type of admission control scheme works very well. However, for systems with soft capacity, such as Code-Division Multiple Access (CDMA), Orthogonal Frequency-Division Multiplexing (OFDM), or systems with a contention-based Medium Access Control (MAC) layer, the relationship between the number of users and the available capacity is much more complicated. A scheme that only focuses on blocking probability is not enough, and this type of scheme cannot alleviate network congestion efficiently. Hou, Yang and Papavassiliou proposed a pricing based admission control algorithm [82]. Their study attempted to find the optimal point between utility and revenue in terms of the new call arrival rate, which was affected by the price that was adjusted dynamically according to the network condition.


Admission control can also be conducted by other techniques. The scheme proposed by Xia et al. [138] aimed at reducing the response delay of admission decisions for multimedia service. It was experience-based and belonged to aggressive admission control, where each agent of the admission control system admitted more requests than the allocated bandwidth. Jeon and Jeong [84] combined admission control with packet scheduling. The scheduler assigned higher priority to real-time packets over best-effort traffic when the real-time packets were about to meet their deadline; thus the admission control scheme acted as a congestion controller. Cross-layer design was also used in this scheme. Qian, Hu and Chen [119] focused on admission control for Voice over IP (VoIP) applications in Wireless LANs. The interactions of the WLAN voice manager, Medium Access Control (MAC) layer protocols, soft-switches, routers and other network devices were discussed. Ganesh et al. [69] developed an admission control scheme that is conducted at end users (i.e., endpoint admission control) by probing the network and collecting congestion notifications. This scheme requires close cooperation of end users, which is questionable in open networks such as the Internet.

One of the critical design considerations of admission control is how to let the admission controller know the network congestion status. Some approaches used a metric to evaluate a congestion index at each network element to admit new sessions (e.g., Monteiro, Quadros and Boavida [105]). Some others employed packet probing [30, 55] and aggregation of RSVP messages in the admission controller [16, 24].

The ideal location to conduct pricing and admission control is edge networks. Edge routers do not have to support a great many flows, which enables edge routers to keep per-flow states without losing much performance. By monitoring the dynamics of the flows, edge routers can charge a higher price when network congestion occurs, and lower the price when congestion is alleviated. The higher price reduces the demand of users to use the network, and new users are unlikely to request admission if the network is congested, which in turn gives better service quality to the flows that enter the network.
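The congestion-sensitive edge pricing just described, raising the price when congestion occurs and lowering it when congestion is alleviated, can be sketched as a simple linear rule. This previews the idea behind ConTax in Chapter 3; the threshold and the proportionality coefficient `alpha` are assumed illustrative parameters, not values from this work.

```python
def congestion_price(base_price, load, load_threshold, alpha):
    """Sketch of congestion-sensitive pricing at an edge router.

    Below the load threshold the basic price applies; above it, a
    congestion surcharge proportional to the excess load is added
    (alpha is an assumed proportionality coefficient).
    """
    surcharge = alpha * max(0, load - load_threshold)
    return base_price + surcharge
```

Because the surcharge grows with the measured load, new users see a higher price exactly when admission should be discouraged, and the price falls back to the basic level once congestion clears.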


Based on this strategy, in Chapter 3 we propose a pricing and admission control scheme that works with CHOKeW. When the network congestion is heavier, our pricing scheme will increase the price by a value that is proportional to the congestion measurement, which is equivalent to charging a tax due to network congestion; thus we name our pricing scheme ConTax (Congestion Tax).

The ConTax-CHOKeW framework is a cost-effective DiffServ network solution that includes pricing and admission control (provided by ConTax) plus bandwidth differentiation and TCP protection (supported by CHOKeW). By using the sum of the priority classes of all admitted users as the network load measurement in ConTax, edge routers can work independently. This saves network resource consumption as well as the management cost of periodically sending control messages with updates of the network congestion status from core routers to edge routers.

ConTax adjusts the prices for all priority classes when the network load for an edge router is greater than a threshold. The heavier the load is, the higher the prices will be. The extra price above the line of the basic price, i.e., the congestion tax, is proved to be able to effectively control the number of users that are admitted into the network. By using simulations, we also show that when more priority classes are supported, the network provider can earn more profit due to a higher aggregate price. On the other hand, a network with a variety of priority services provides users greater flexibility to meet the specific needs of their applications.

In addition to buffer management in core networks and pricing and admission control in edge networks, our work with respect to traffic control also includes pricing and admission control in Wireless Mesh Networks (WMNs), which have some specific features that do not exist in wired networks.

One of the main purposes of using WMNs is to swiftly extend Internet coverage in a cost-effective manner. The configurations of WMNs, determined by the user locations and the application features, however, are greatly dynamic and flexible. As a result, it is highly possible for a flow to go through a wireless path consisting of multiple parties


before it reaches the hotspot that is connected to the wired Internet. This feature results in a significant difference between admission control for WMNs and for traditional networks. It is inefficient and infeasible to ask for confirmation from each hop along the route in WMNs. A group-based one-hop admission control is more realistic than the traditional end-to-end admission control.

In our research that is discussed in Chapter 4, we propose a group-based pricing and admission control scheme, in which only two parties are involved in the operations upon each admission request, which minimizes the number of involved parties and simplifies the operations. In this scheme, the determination criteria for network admission are the available resources and the requested resources, which correspond to supply and demand in an economic system, respectively. The involved parties use the knowledge of utility, cost, and benefit to calculate the available and requested resources. Therefore, our scheme is named APRIL (Admission control with PRIcing Leverage). Since the operations are conducted in a distributed manner, there is no need for a single control center. By using APRIL, the admission of new groups leads to benefit increases for both involved groups, and the total benefit of the whole system also increases. This characteristic can be used as an incentive to expand Internet coverage by WMNs.

Finally, in Chapter 5, we discuss some future research issues.



based on their per-hop behaviors (PHBs) [25]. Because of its simplicity and scalability, DiffServ has caught the most attention nowadays.

In general, routers in the DiffServ architecture, similar to those proposed in Core-Stateless Fair Queueing (CSFQ) [128], are divided into two categories: edge (boundary) routers and core (interior) routers. Sophisticated operations, such as per-flow classification and marking, are implemented at edge routers. In other words, core routers do not necessarily maintain per-flow states; instead, they only need to forward the packets according to the indexed PHB values that are predefined. These values are marked in the Differentiated Services fields (DS fields) in the packet headers [25, 109]. For example, Assured Forwarding [78] defined a PHB group in which each packet is assigned a level of drop precedence, so that packets of primary importance based on their PHB values encounter a relatively low dropping probability. The implementation of an Active Queue Management (AQM) scheme to conduct the dropping, however, is not specified in the framework of Assured Forwarding.

When we design an AQM scheme, the performance has to be investigated along with TCP, taking account of the fact that almost all error-sensitive data in the Internet are transmitted by TCP and the dynamics of TCP have unavoidable interactions with the dropping scheme. In order to incorporate the priority services ...


In the worst case, the routers would be consumed with forwarding packets even though no packet is useful for receivers. In the meantime, the bandwidth would be completely occupied by unresponsive senders that do not reduce their sending rates even after their packets are dropped by the congested routers [63].

Conventional Active Queue Management (AQM) algorithms such as Random Early Detection (RED) [60] and BLUE [59] cannot protect TCP flows. It is strongly suggested that novel AQM schemes be designed for TCP protection in routers [29, 63]. Cho [42] proposed a mechanism which uses a flow-valve filter for RED to punish non-TCP-friendly flows. However, this approach has to reserve three parameters for each flow, which significantly increases the memory requirement. Mahajan and Floyd [100] described a simpler scheme, known as RED with Preferential Dropping (RED-PD), in which the drop history of RED is used to help identify non-TCP-friendly flows, based on the observation that flows at higher speeds usually have more packet drops in RED. RED-PD is also a per-flow scheme, and at least one parameter needs to be reserved for each flow to record the number of drops.

Compared with previous methods including conventional per-flow schemes, the implementation design of CHOKe, proposed by Pan et al. [112], is simple and it does not require per-flow state maintenance. CHOKe serves as an enhancement filter for RED in which a buffered packet is drawn at random and compared with an arriving packet. If both packets come from the same flow, they are dropped as a pair (hence, we call these matched drops); otherwise, the arriving packet is delivered to RED. Note that a packet that has passed CHOKe may still be dropped by RED. The validity of CHOKe has been explained using an analytical model by Tang et al. [133].

CHOKe is simple and effective for TCP protection, but it only supports best-effort traffic. In DiffServ networks where flows have different priority, TCP protection is still an imperative task. In this chapter, we use the concept of matched drops to design another


scheme called CHOKeW. The letter W represents a function that supports multiple weights for bandwidth differentiation.

TCP protection in DiffServ networks has three scenarios: first, protecting TCP flows in higher priority from high-speed unresponsive flows in lower priority; second, protecting TCP flows from high-speed unresponsive flows in the same priority; and third, protecting TCP flows in lower priority from high-speed unresponsive flows in higher priority. Since CHOKeW is designed for allocating a greater bandwidth share to higher priority flows, if TCP protection is effective in the third scenario, it should also be effective in the first and second scenarios. Here we report the results of the third scenario in Subsection 2.4.4 to demonstrate the effectiveness of TCP protection of CHOKeW.
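The matched-drop operation that CHOKe introduced, and that CHOKeW builds on, can be sketched as follows. Only the draw-and-compare step is shown; CHOKe's RED stage, which may still drop a surviving packet, is omitted.

```python
import random

def choke_arrival(queue, arriving_flow):
    """Sketch of one matched-drop test on a packet arrival.

    `queue` is a list of flow ids of buffered packets. One buffered
    packet is drawn at random; if it belongs to the arriving packet's
    flow, both are dropped as a pair (a matched drop). Returns True if
    the arrival survives (in real CHOKe it would then be handed to RED).
    """
    if queue:
        k = random.randrange(len(queue))
        if queue[k] == arriving_flow:     # same flow: drop the pair
            del queue[k]
            return False
    queue.append(arriving_flow)           # survived the random draw
    return True
```

A high-speed unresponsive flow dominates the backlog, so its arrivals are matched (and punished) with high probability, while a flow with a tiny backlog is almost never matched.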


bandwidth share for low-priority traffic [26], which is a disadvantage of RIO. Our scheme uses matched drops to control the bandwidth share. When a low-priority TCP flow only has a small bandwidth share, the responsiveness of TCP can lead to a small backlog for this flow in the buffer. The packets from this flow will be unlikely to be dropped, so this flow will not be starved. Our model explains this feature in Subsection 2.3.1 (Equation (2.10)).

In fact, some scheduling schemes, such as Weighted Fair Queueing (WFQ) [52] and other packet approximations of the Generalized Processor Sharing (GPS) model [113], may also support differentiated bandwidth allocation. However, the main disadvantage of these schemes is that they require constant per-flow state maintenance, which is not cost-effective in core networks, as it causes memory-requirement complexity O(N) and per-packet-processing complexity usually larger than O(1).


CHOKeW, we expect better fairness among the flows with the same priority. To the best of our knowledge, no other stateless scheme has achieved this goal.

The rest of the chapter is organized as follows. Section 2.2 describes the CHOKeW algorithm. Section 2.3 derives the equations for the steady state, and explains the features and effectiveness of CHOKeW, such as fairness and bandwidth differentiation. Section 2.4 presents and discusses the simulation results, including the effect of supporting two priority levels and multiple priority levels, TCP protection, the performance of TCP Reno in CHOKeW, a comparison with CHOKeW-RED (CHOKeW with a RED module), and a comparison with CHOKeW-avg (CHOKeW with a module to calculate the average queue length by EWMA). Section 2.5 discusses the issues involving implementation considerations, and gives a suggestion of the extended matched drop algorithm for CHOKeW designed for some special scenarios. We conclude this chapter in Section 2.6.


number of draws is not only used for restricting the bandwidth share of high-speed unresponsive flows, but also used as a signal to inform TCP of the congestion status. In order to avoid functional redundancy, CHOKeW is not combined with RED, since RED is also designed to inform TCP of congestion. Thus we say that CHOKeW is an independent AQM scheme, instead of an enhancement filter for RED. To demonstrate that RED is not an essential component for the effectiveness of CHOKeW, the comparison between the performance of CHOKeW and that of CHOKeW-RED (i.e., CHOKeW with RED) is shown in Subsection 2.4.6.

In order to determine when to draw a packet (or packets) and how many packets are possibly drawn from the buffer, we introduce a variable, called the drawing factor, to control the maximum number of draws. For a flow at priority level i (i = 1, 2, ..., M, where M is the number of priority levels supported by the router), the drawing factor is denoted by p_i (p_i ≥ 0). The value of p_i may alter with time due to the change of congestion status, but at a particular moment, all flows at priority level i are handled by a CHOKeW router using the same p_i. This is how CHOKeW provides better fairness among flows with the same priority than other conventional stateless schemes such as RED and BLUE. A CHOKeW router keeps the values of p_i instead of per-flow states; thus CHOKeW precludes the memory requirement from rocketing up when more flows go through the router. Roughly speaking, we may interpret p_i as the maximum number of random draws from the buffer upon an arrival from a flow at priority level i. The precise meaning is discussed below.

Assume that the number of active flows served by a CHOKeW router is N, and the number of priority levels supported by the router is M. Let w_i (w_i ≥ 1) be the priority weight of flow i (i = 1, 2, ..., N), and w(k) (k = 1, 2, ..., M) be the weight of priority level k. If flow i is at priority level k, then w_i = w(k). All flows at the same priority level have the same priority weight. If w(k) > w(l), we say that flows at priority level k


have higher priority than flows at priority level l, or simply, priority level k is higher than priority level l.

Let p_0 denote the basic drawing factor. The drawing factor used for flow i is calculated from p_0 and the flow's priority weight as given by Equation (2.1) (see also Subsection 2.3.4). The precise meaning of the drawing factor p_i depends upon its value. It can be categorized into two cases:

Case 1. When 0 ≤ p_i < 1, p_i represents the probability of drawing one packet from the buffer at random for comparison.

Case 2. When p_i ≥ 1, p_i consists of two parts, and we may rewrite p_i as p_i = m_i + f_i, where m_i is the integer part and f_i (0 ≤ f_i < 1) the fractional part.
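Under the two cases above, turning a drawing factor p_i into an actual number of random draws can be sketched as follows. This reflects only the draw-count semantics described in the text, not the full CHOKeW algorithm.

```python
import random

def num_draws(p_i):
    """Number of random draws implied by CHOKeW's drawing factor p_i.

    For 0 <= p_i < 1, one packet is drawn with probability p_i; for
    p_i >= 1, p_i = m + f splits into integer and fractional parts:
    m certain draws plus one extra draw with probability f.
    """
    m = int(p_i)                # integer part (0 when p_i < 1)
    f = p_i - m                 # fractional part
    return m + (1 if random.random() < f else 0)
```

For example, p_i = 2.3 yields two draws always and a third draw 30% of the time, so the expected number of draws equals p_i itself in both cases.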


Figure 2-1: CHOKeW algorithm


Figure 2-2: Algorithm of updating p0

Figure 2-3: Network topology


The algorithm of drawing packets is described in Fig. 2-1. As CHOKeW does not require per-flow states, in this figure m represents the value of m_i (before Step (4)) and d_i (after Step (4)); we also use p and f to represent p_i and f_i, respectively.

The congestion status of a router may become either heavier or lighter after a period of time, since circumstances (e.g., the number of users, the application types, and the traffic priority) constantly change. In order to cooperate with TCP and to improve the system performance, an AQM scheme such as RED [60] needs to inform TCP senders to lower their sending rates by dropping more packets when the network congestion becomes worse. Unlike CHOKe [112], CHOKeW does not have to work with RED in order to function properly. Instead, CHOKeW can adaptively update p_0 based on the congestion status. The updating process is shown in Fig. 2-2, which details Step (2) of Fig. 2-1. The combination of Fig. 2-1 and Fig. 2-2 provides a complete description of the CHOKeW algorithm.

CHOKeW updates p_0 upon each packet arrival, but activates matched drops only when the queue length L is longer than the threshold L_th (Step (5) in Fig. 2-1). Three queue length thresholds are applied to CHOKeW: L_th is the threshold for activating matched drops, L+ for increasing p_0, and L- for decreasing p_0. As the buffer is used to absorb bursty traffic [29], we set L_th > 0, so that short bursty traffic can enter the buffer without suffering any packet drops when the queue length L is less than L_th (although p_0 may be larger than 0 for historical reasons). When L ∈ [L-, L+], the network congestion status is considered to have been stable and p_0 maintains the same value as before (i.e., the algorithm shown in Fig. 2-2 does not adjust the value of p_0). Only when L > L+ is the congestion considered to be heavy, and p_0 is then increased by p+ each time. The alleviation of network congestion is represented by L

Table 2-1: The State of CHOKeW vs. the Range of L

  Range of L      [0, L_th]    (L_th, L-)   [L-, L+]     (L+, L_lim]
  Matched drops   Inactive     Active       Active       Active

At any time, CHOKeW works in one of the following states:

1. inactive matched drops and decreasing p_0 (unless p_0 = 0), when 0 ≤ L ≤ L_th;
2. active matched drops and decreasing p_0 (unless p_0 = 0), when L_th < L < L-;
3. active matched drops with p_0 unchanged, when L- ≤ L ≤ L+;
4. active matched drops and increasing p_0, when L+ < L ≤ L_lim.
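The threshold-driven update of p_0 described above can be sketched as follows. This is a simplified version, assuming specific step sizes p+ and p- as configuration parameters and updating on every call, without CHOKeW's exact per-arrival bookkeeping.

```python
def update_p0(p0, L, L_th, L_minus, L_plus, p_plus, p_minus):
    """Sketch of CHOKeW's adaptive update of the basic drawing factor.

    Above L+ congestion is heavy and p0 grows by p+; below L- (which
    also covers L <= L_th) congestion has eased and p0 shrinks by p-
    toward zero; within [L-, L+] p0 is left unchanged. Also returns
    whether matched drops are active (L > L_th).
    """
    if L > L_plus:
        p0 += p_plus                      # heavy congestion: draw more
    elif L < L_minus:
        p0 = max(0.0, p0 - p_minus)       # congestion eased: draw less
    return p0, L > L_th
```

Because p_0 only ever moves by small increments, the drawing pressure ramps up and down smoothly with the queue length instead of oscillating.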
Now we discuss the complexity of CHOKeW. On the basis of the above description, we know that CHOKeW needs to remember only w(k) for each predefined priority level k (k = 1, 2, ..., M), instead of some variables for each flow i (i = 1, 2, ..., N). The complexity of CHOKeW is only affected by M. In DiffServ networks, it is reasonable to expect that M will never be a large value in the foreseeable future, i.e., M ≪ N. Thus, with respect to N, the memory-requirement complexity as well as the per-packet-processing complexity of CHOKeW is O(1), while for conventional per-flow schemes, the memory-requirement complexity is O(N) and the per-packet-processing complexity is usually larger than O(1) [120].

The network topology shown in Fig. 2 is used for our model. In this figure, two routers, R1 and R2, are connected to N source nodes (Si, i = 1, 2, ..., N) and N destination nodes (Di), respectively. The R1-R2 link, with bandwidth B0 and propagation delay τ0, allows all flows to go through. Bi and τi denote the bandwidth and the propagation delay of each link connected to Si or Di, respectively. As we are interested in the network performance under a heavy load, we always let B0 be small enough that the R1-R2 link is the bottleneck.
The difference between (2.2) and (2.5) is that (2.2) uses m and f rather than mi and fi. When CHOKeW is implemented, two variables m and f are adequate for all flows because they can be reused for each arrival. In (2.4), (1 − ri)^mi is the probability of no matched drops in the first mi draws. After the completion of the first mi draws, the value of fi stochastically determines whether one more packet is drawn. The probability of no further draw is (1 − fi), and the probability that one more packet is drawn but no matched drops occur is fi(1 − ri). Therefore the probability that no matched drops occur is (1 − fi) + fi(1 − ri) = 1 − fi·ri. We rewrite (1 − ri)^fi as its Maclaurin series:

    (1 − ri)^fi = 1 − fi·ri + o(ri²).

Assuming the core network serves a vast number of flows, it is reasonable to say ri ≪ 1 for each responsive flow i. Then the no-matched-drop probability in (2.4) can be rewritten as

    αi = (1 − ri)^(mi + fi),

or, using (2.6) in it,

    αi = (1 − ri)^pi.    (2.7)

After a packet enters the queue, one and only one of the following possibilities will happen: 1) it will be dropped from the queue due to matched drops, or 2) it will pass through the queue successfully. The passing probability si satisfies si = θi′/λi′. Using (2.6) and (2.9) in it, we get

    si = 2 − 1/(1 − ri)^pi.

For TCP protection, CHOKeW requires 0 ≤ qi < 1 if flow i uses TCP. Equation (2.8) shows that as long as pi does not exceed a certain range, CHOKeW can guarantee that flow i will not be starved, even if it only has low priority. This feature offers CHOKeW advantages over RIO [26], which neither protects TCP flows nor prevents the starvation of low-priority flows. The algorithm for updating p0 illustrated in Fig. 2 ensures p0 ≥ 0 after the update. From Step (3) in Fig. 2, pi ≥ 0. Using it in (2.8), we get qi ≥ 0, which means that in CHOKeW the lower bound of qi is satisfied automatically. Now we discuss the range of pi that satisfies the upper bound of qi (i.e., qi < 1). From (2.8), we have

    pi < −1 / log2(1 − ri).    (2.10)

From (2.3), ri can also be interpreted as the normalized backlog from flow i. Equation (2.10) gives the upper bound of pi, which is a fairly large value if ri is small. For instance, when ri equals 0.01 (imagine 100 flows sharing the queue length evenly), the upper bound of pi is 68.9; in other words, the algorithm may draw up to 69 packets before a flow is starved, but such a large pi is rarely observed due to the control of unresponsive flows. Formula (2.10) also explains why a flow in CHOKe (where pi ≡ 1) that is not starved must have a backlog shorter than half of the queue length. This result is consistent with the conclusion of Tang et al. [133]. In CHOKeW, for flow i with a certain priority weight wi and a corresponding drawing factor pi (see Equation (2.1)), the higher the arrival rate, the larger the backlog, and hence the higher the dropping probability. When the backlog of a

high-rate unresponsive flow reaches the upper bound determined by (2.10), this flow will be completely blocked by CHOKeW.
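The bound in (2.10) can be evaluated directly; the sketch below also covers the CHOKe special case pi = 1, for which the bound implies a backlog of at most half the queue (ri < 0.5):

```python
import math

def max_drawing_factor(r_i):
    """Upper bound on the drawing factor p_i that keeps flow i from
    starving, from Eq. (2.10): p_i < -1 / log2(1 - r_i), where r_i is
    flow i's normalized backlog."""
    return -1.0 / math.log2(1.0 - r_i)
```

For ri = 0.01 this gives roughly 68.97 draws, matching the value 68.9 quoted in the text; for ri = 0.5 the bound is exactly 1, the CHOKe case.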

Based on the PASTA (Poisson Arrivals See Time Averages) property of Poisson arrivals, packets from all flows have the same average waiting time in the queue (i.e., Di = D, i = 1, 2, ..., N). Using Little's Theorem again, for flow i, we get (2.11). Substituting (2.11) into (2.13), and then using (2.8), (2.12), (2.13), and (2.14) in it, we obtain (2.15). Using (2.6), we rewrite (2.15) as (2.16), and (2.3) as ri = λi′ / Σ_{j=1}^N λj′. Equation (2.16) can be interpreted as Little's Theorem applied to a leaky buffer, in which packets may be dropped before they reach the front of the queue and which is therefore not a classical queueing system. From (2.16) we get an interesting result: even in a leaky buffer, the average waiting time is determined by the average queue length for flow i (or the average aggregate

queue length) and the average arrival rate from flow i (or the average aggregate arrival rate) that only counts the packets entering the queue. The average waiting time is meaningful exclusively to the packets surviving the queue, whereas packets that are dropped after entering the queue still contribute to the queue length. The following group of formulas describes the steady-state features of CHOKeW:

    ri = λi′ / Σ_{j=1}^N λj′,         (2.17a)
    qi = 2[1 − (1 − ri)^pi],          (2.17b)
    λi′ = λi (1 − ri)^pi,             (2.17c)
    θi′ = λi (1 − qi).                (2.17d)

From (2.17a), ri/rj = λi′/λj′. Using (2.17c) in it, we get

where qi is the dropping probability, and φi (φi > 0) denotes the combination of other factors. A solution to (2.18) is λi = λj and ri = rj. From (2.17b) and (2.17d) we then have θi′ = θj′, i.e., flow i and flow j get the same amount of bandwidth share. We will show that this is the only solution. Let

    G = λi (1 − ri)^pi − ri Σ_{j=1}^N λj′.

A solution to (2.18) is also a solution to G = 0. In the core network, a router usually has to support a great number of flows, and the downstream link bandwidth is shared among them. It is reasonable to assume that fluctuations in the backlog of a TCP flow do not significantly affect the backlog of other flows (i.e., ∂rj/∂ri ≈ 0, i ≠ j). Then we have ∂G/∂ri < 0 (combining (2.19) and (2.20)); for any value of ri (0 < ri < 1), G is a strictly
decreasing function with respect to ri. As a result, if there is a value of ri satisfying G = 0, it must be the only solution to G = 0. Thus the only steady state is the one maintained by λi = λj and ri = rj, when TCP flows i and j have the same priority and the same factor. This indicates that CHOKeW is capable of providing good fairness to flow i and flow j.

Case 2. φi ≠ φj. Let (θi′/θj′)C and (θi′/θj′)R denote the ratio of the average throughput of flow i to that of flow j for CHOKeW and for conventional stateless AQM schemes, respectively. By comparing (θi′/θj′)C to (θi′/θj′)R, we will show that CHOKeW is able to provide better fairness when φi ≠ φj. Among conventional stateless AQM schemes, RED determines the dropping probability according to the average queue length, and BLUE calculates the dropping probability from packet loss and link idle events. In a steady state, for AQM schemes such as RED and BLUE, every flow has a similar dropping probability. Let q denote this dropping probability; for all flows, qi = q (i = 1, 2, ..., N). Therefore, flow i has an average throughput of

    θi′ = λi(1 − q) = (1 − q) φi R(q).

Similarly, flow j has an average throughput of θj′ = (1 − q) φj R(q). Thus, for RED and BLUE,

    (θi′/θj′)R = φi/φj.    (2.21)

From (2.17b), R(qi) in (2.19) can be rewritten as

    R(qi) = R(2 − 2(1 − ri)^pi).

When flow i and flow j have the same priority, p, we define

    Ψ(rk) ≜ R(2 − 2(1 − rk)^p),  for k ∈ {i, j}.

Then (2.19) can be rewritten as

    λk = φk Ψ(rk),  for k ∈ {i, j}.

From (2.17b) and (2.17d), we obtain the CHOKeW throughput ratio (2.22); we will show that (2.22) is less than φi/φj if φi > φj. From (2.17a) and (2.17c), ri = φi Ψ(ri)(1 − ri)^p / Σ_{j=1}^N λj′. Since

    ∂Ψ/∂ri = (∂Ψ/∂qi)(∂qi/∂ri),  with  ∂qi/∂ri = 2p(1 − ri)^(p−1) > 0  and  ∂Ψ/∂qi < 0,

we see ∂ri/∂φi > 0, which means that when φi > φj, we have ri > rj and Ψ(ri) < Ψ(rj). Using these results in (2.22), for CHOKeW,

A comparison between (2.21) and (2.23) proves that CHOKeW provides better fairness than RED and BLUE. When flow i has lower priority than flow j (so that, by (2.1), pi > pj), CHOKeW allocates a smaller bandwidth share to flow i than to flow j, i.e., θi′ < θj′. This seems to be an intuitive strategy, but we also noticed that the interaction among pi, ri and qi may cause some confusion. The dropping probability of flow i in CHOKeW, qi, is determined not only by pi, but also by ri. Furthermore, the effects of ri and pi are inverse: a larger value of pi results in a larger qi, but at the same time it leads to a smaller ri, which may produce a smaller qi. To clear up the confusion, we only need to show that a larger value of pi leads to a smaller bandwidth share θi′, which is equivalent to showing ∂θi′/∂pi < 0. From (2.17d), (2.19), and (2.20), we get ∂θi′/∂qi < 0. From the Chain Rule we obtain (2.24), where u = pi. We introduce the symbol u to distinguish ∂qi/∂u from ∂qi/∂pi: ri is treated as a constant in ∂qi/∂u but not in ∂qi/∂pi. From (2.17b),

According to (2.17a) and (2.17c), we have (2.26) and (2.27). Using (2.25), (2.26) and (2.27) in (2.24), we get ∂qi/∂pi > 0, and hence ∂θi′/∂pi < 0.

The simulation network is the one shown in Fig. 2, where B0 = 1 Mb/s and Bi = 10 Mb/s (i = 1, 2, ..., N). Unless specified otherwise, the link propagation delays are τ0 = τi = 1 ms. The buffer limit is 500 packets, and the mean packet size is 1000 bytes. TCP flows are driven by FTP applications, and UDP flows are driven by CBR traffic. All TCP flows use TCP SACK, except in Subsection 2.4.8, where the performance of TCP Reno flows going through a CHOKeW router is investigated. Each simulation runs for 500 seconds. The parameters of CHOKeW are set as follows: Lth = 100 packets, L− = 125 packets, L+ = 175 packets, p+ = 0.002, and p− = 0.001. The parameters of RED are set as follows: minth = 100 packets, maxth = 200 packets, gentle = true, the EWMA weight is set to 0.002, and pmax = 0.02 (except in Subsection 2.4.6, where different values of pmax are tested and compared with CHOKeW).

The parameters of RIO include those for out traffic and those for in traffic. For out traffic, minth_out = 100 packets, maxth_out = 200 packets, and pmax_out = 0.02. For in traffic, minth_in = 110 packets, maxth_in = 210 packets, and pmax_in = 0.01 (except in Subsection 2.4.1, where different parameters are tested). Both gentle_out and gentle_in are set to true. For the parameters of BLUE, we set δ1 = 0.0025 (the step length for increasing the dropping probability), δ2 = 0.00025 (the step length for decreasing the dropping probability), and freeze_time = 100 ms.

As mentioned in Subsection 2.1.2, flow starvation often happens in RIO but is avoidable in CHOKeW. In order to quantify and compare the severity of flow starvation among different schemes, we record the Relative Cumulative Frequency (RCF) of goodput for flows at each priority level. For a scheme, the RCF of goodput g for flows at a specific priority level represents the number of flows that have goodput lower than or equal to g, divided by the total number of flows in this priority. We simulate 200 TCP flows. When CHOKeW is used, w(1) = 1 and w(2) = 2 are assigned to equal numbers of flows. When RIO is used, the number of out flows is also equal to the number of in flows. Fig. 2 illustrates the RCF of goodput for flows at each priority level of CHOKeW and RIO. Here we show three sets of results from RIO, denoted by RIO_1, RIO_2 and RIO_3, respectively. For RIO_1, we set minth_in = 150 packets and maxth_in = 250 packets; for RIO_2, minth_in = 130 packets and maxth_in = 230 packets; for RIO_3, minth_in = 110 packets and maxth_in = 210 packets.
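The RCF defined above is simply an empirical cumulative distribution over the flows of one priority level; a sketch:

```python
def rcf(goodputs, g):
    """Relative Cumulative Frequency (RCF) of goodput g for one priority
    level: the fraction of that level's flows whose goodput is <= g.
    An RCF above 0 at g = 0 means some flows are starved."""
    return sum(1 for x in goodputs if x <= g) / len(goodputs)
```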

Figure 2: RCF of RIO and CHOKeW under a scenario of two priority levels

From Fig. 2, we see that the RCF of goodput zero for the out traffic of RIO_1 is 0.1; in other words, 10 of the 100 out flows are starved. Similarly, for RIO_2 and RIO_3, 15 and 6 flows are starved, respectively. Moreover, it is observed that some in flows of RIO may also have very low goodput (e.g., the lowest goodput of the in flows of RIO_2 is only 0.00015 Mb/s) due to a lack of TCP protection. Flow starvation is very common in RIO, but it rarely happens in CHOKeW.

Now we investigate the relationship between the number of TCP flows and the aggregate TCP goodput for each priority level. The results are shown in Fig. 2, where the curves of w(1) = 1 and w(2) = 2 correspond to the two priority levels. Half of the flows are assigned w(1) and the other half are assigned w(2). As more flows go through the CHOKeW router, the goodput difference between the higher-priority flows and the lower-priority flows changes owing to the network dynamics, but high-priority flows get higher goodput no matter how many flows exist.

Figure 2: Aggregate TCP goodput vs. the number of TCP flows under a scenario of two priority levels

Figure 2: Average per-flow TCP goodput vs. w(2)/w(1) when 25 flows are assigned w(1) = 1 and 75 flows w(2)

The results are compared with those of WFQ working in an aggregate-flow mode; i.e., in order to circumvent the per-flow complexity, flows at the same priority level are merged into an aggregate flow before entering WFQ, and WFQ buffers packets in the same queue if they have the same priority, instead of using strict per-flow queueing. In WFQ, the buffer pool of 500 packets is split into two queues: the queue for w(1) has a capacity of 125 packets and the queue for w(2) has a capacity of 375 packets. In Fig. 2, it is readily seen that the goodput of flows assigned w(2) increases with the value of w(2) for both CHOKeW and WFQ, and accordingly the goodput of flows assigned w(1) decreases. However, when w(2)/w(1) < 3, the average per-flow goodput with w(2) is even lower than that with w(1) for WFQ. In other words, when aggregate flows are used, WFQ does not guarantee higher per-flow goodput to a higher priority level if that level is taken by more flows. For CHOKeW, bandwidth differentiation works effectively over the whole range of w(2), even though all packets are mixed in one single queue in a stateless way. This feature stems from the fact that CHOKeW does not require multiple queues to isolate flows; by contrast, conventional packet approximations of GPS, such as WFQ, cannot avoid the complexity caused by their per-flow nature while at the same time giving satisfactory bandwidth differentiation on a flow basis.

Figure 2: Aggregate goodput vs. the number of TCP flows under a scenario of three priority levels

Since RIO only supports two priority levels, the results are not compared with those of RIO in this subsection. Fig. 2 and Fig. 2 demonstrate the aggregate TCP goodput for each priority level versus the number of TCP flows, for three priority levels and for four priority levels respectively. At each level, the number of TCP flows ranges from 25 to 100. In Fig. 2, three priority levels are configured using w(1) = 1.0, w(2) = 1.5, and w(3) = 2.0; w(4) = 2.5 is added to the simulations corresponding to Fig. 2 for the fourth priority level. Even though the goodput fluctuates when the number of TCP flows changes, the flows in higher priority are still able to obtain higher goodput. Furthermore, no flow starvation is observed.

Figure 2: Aggregate goodput vs. the number of TCP flows under a scenario of four priority levels

TCP protection is validated provided that the high-speed misbehaving flows are blocked successfully. The goodput versus the number of UDP flows is shown in Fig. 2, where CHOKeW is compared with RIO. Since no retransmission is provided by UDP flows, goodput is equal to throughput for UDP. For CHOKeW, even as the number of UDP flows increases from 1 to 10, the TCP goodput in each priority level (and hence the aggregate goodput of all TCP flows) is quite stable. In other words, the link bandwidth is shared by these TCP flows, and the high-speed UDP flows are completely blocked by CHOKeW. By contrast, the bandwidth share for TCP flows in a RIO router is nearly zero, as the high-speed UDP flows occupy almost all the bandwidth. Fig. 2 illustrates the relationship between p0 and the number of UDP flows recorded in the simulations of CHOKeW. As more UDP flows start, p0 increases, but p0 rarely reaches a value high enough to start blocking TCP flows before the high-speed UDP flows are blocked. In this experiment, we also find that few packets of TCP flows are dropped due to buffer overflow. In fact, when edge routers cooperate with core routers, the high-speed misbehaving flows will be marked with lower priority at the edge routers. Therefore,

Figure 2: Aggregate goodput vs. the number of UDP flows under a scenario to investigate TCP protection

Figure 2: Basic drawing factor p0 vs. the number of UDP flows under a scenario to investigate TCP protection

CHOKeW should be able to block even more misbehaving flows than shown in Fig. 2, and p0 should also be smaller than shown in Fig. 2.

In Subsection 2.3.3, we used the analytical model to explain how CHOKeW can provide better fairness among the flows in the same priority than conventional stateless AQM schemes such as RED and BLUE. We validate this attribute with simulations in this subsection. Since RED and BLUE do not support multiple priority levels and are only used in best-effort networks, we let CHOKeW work in the one-priority state (i.e., w(1) = 1 for all flows) in this subsection. In the simulation network illustrated in Fig. 2, the end-to-end propagation delay of a flow is set to one of 6, 60, 100, or 150 ms; each of the four values is assigned to 25% of the total number of flows (in the network of Fig. 2, the propagation delay can be assigned a desired value given an appropriate τi). From (2.28), we know F ∈ (0, 1]. The closer the value of F is to 1, the better the fairness is. In this chapter, we use gi as goodput instead of throughput so that the TCP performance evaluation can reflect the successful delivery rate
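The stated properties of the fairness index (F ∈ (0, 1], with values closer to 1 meaning better fairness) match Jain's fairness index over the per-flow goodputs gi, namely F = (Σ gi)² / (N Σ gi²); assuming (2.28) is this standard index, a sketch:

```python
def fairness_index(goodputs):
    """Jain's fairness index over per-flow goodputs g_i (assumed form of
    Eq. (2.28)): F = (sum g_i)^2 / (N * sum g_i^2). F lies in (0, 1] and
    equals 1 exactly when all flows receive identical goodput."""
    n = len(goodputs)
    total = sum(goodputs)
    return total * total / (n * sum(g * g for g in goodputs))
```

For instance, four flows with equal goodput give F = 1, while one flow taking everything gives F = 1/4.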

Figure 2: Fairness index vs. the number of flows for CHOKeW, RED and BLUE

Figure 2: Link utilization of CHOKeW and CHOKeW-RED

more accurately. Fig. 2 shows the fairness index of CHOKeW, RED, and BLUE versus the number of TCP flows, ranging from 160 to 280. Even though the fairness decreases as the number of flows increases for all schemes, CHOKeW still provides better fairness than both RED and BLUE.

control by matched drops. As a result, the RED module is no longer required in CHOKeW. In this subsection, we compare the average queue length, link utilization, and TCP goodput of CHOKeW with those of CHOKeW-RED (i.e., CHOKeW working with the RED module). In RED, pmax is the marginal dropping probability under healthy circumstances and should not be set to a value greater than 0.1 [62]. For these simulations, we investigate the performance of CHOKeW-RED with pmax ranging from 0.02 to 0.1. The relationship between the number of TCP flows and the values of link utilization, the average queue length, and the aggregate TCP goodput is shown in Fig. 2, Fig. 2, and Fig. 2, respectively. In each figure, the performance results of CHOKeW-RED are indicated by three curves, each corresponding to one of the three values for pmax (0.02, 0.05, and 0.1). Fig. 2 shows that all schemes maintain an approximate link utilization of 96% (shown by the curves overlapping each other), which is considered sufficient for the Internet. From Fig. 2, we can see that the average queue length for CHOKeW-RED increases as the number of TCP flows increases. In contrast, the average queue length can be maintained at a steady value within the normal range between L− (125 packets) and L+ (175 packets) for CHOKeW. In situations where the number of TCP flows is larger than 100, CHOKeW has the shortest queue length. Since FCFS (First-Come-First-Served) is used,

Figure 2: Average queue length of CHOKeW and CHOKeW-RED

from RED (for example, this may happen when all flows use TCP), p0 does not have an opportunity to increase its value (p0 is initialized to 0, and p0 ← p0 + p+ only when L > L+), which causes a longer queue in CHOKeW-RED. Besides the link utilization and the average queue length, the aggregate TCP goodput is always of interest when evaluating TCP performance. The comparison of TCP goodput between CHOKeW and CHOKeW-RED is shown in Fig. 2. In this figure, all of the schemes have similar results. In addition, when the number of TCP flows is larger than 100, CHOKeW rivals the best of CHOKeW-RED (i.e., pmax = 0.1). In a special environment, if the network has not experienced heavy congestion and the queue length L
Figure 2: Aggregate TCP goodput of CHOKeW and CHOKeW-RED

Figure 2: Average queue length of CHOKeW and CHOKeW-avg

while the queue length fluctuates due to transient bursty traffic. For the purpose of smoothing the traffic measurement, the combination of p+ and p− is equivalent to the EWMA average queue length avg in RED. In this subsection, we compare the average queue length and aggregate TCP goodput of CHOKeW with those of CHOKeW-avg (i.e., CHOKeW working with avg). The results are shown in Fig. 2 and Fig. 2, respectively. CHOKeW has an average queue length ranging from 147.7 to 150.7 packets and an aggregate TCP goodput from 0.923 to 0.942 Mb/s; CHOKeW-avg has an average queue

Figure 2: Aggregate TCP goodput of CHOKeW and CHOKeW-avg

length ranging from 148.5 to 152.2 packets and an aggregate TCP goodput from 0.919 to 0.944 Mb/s. CHOKeW and CHOKeW-avg have similar results. Considering that avg does not improve the performance, it is not used as an essential parameter for CHOKeW. A TCP Reno flow that has recently suffered matched drops is unlikely to

Figure 2: Link utilization, aggregate goodput (in Mb/s), and the ratio of minimum goodput to average goodput for TCP Reno

encounter more matched drops in the near future; the sending rate of this flow may increase for a longer period of time than other flows. For this simulation, all TCP flows use TCP Reno. We study the link utilization, the aggregate TCP goodput, and the ratio of minimum per-flow TCP goodput to average per-flow TCP goodput (the goodput ratio, for short). Since all the values of the link utilization, the aggregate goodput (in Mb/s), and the goodput ratio are in the same range of [0, 1], they are illustrated in a single diagram, Fig. 2. Comparing Fig. 2 and Fig. 2, we notice that the link utilization of TCP Reno is comparable to that of TCP SACK. The aggregate TCP goodput in Fig. 2 is larger than 0.9 Mb/s (the full link bandwidth is 1 Mb/s), which is comparable to the goodput of TCP SACK in Fig. 2. The goodput ratio decreases when more TCP flows share the link, as the possibility that one or two flows get a small bandwidth share is higher when more flows exist. Nonetheless, positive goodput is always maintained and no flows are starved.

2.5.1 Buffer for Flow IDs

One of the implementation considerations is the buffer size. As discussed in Braden et al. [29], the objective of using buffers in the Internet is to absorb data bursts and transmit

Figure 2: Extended matched-drop algorithm with the ID buffer

them during subsequent silence. Maintaining normally-small queues does not necessarily generate poor throughput if appropriate queue management is used; instead, it may help produce good throughput as well as lower end-to-end delay. When used in CHOKeW, however, this strategy may cause a problem in which no two packets in the buffer are from the same flow; this is an extreme case and unlikely to happen often, owing to the bursty nature of flows. In this case, no matter how large pi is, packets drawn from the buffer will never match an arriving packet from flow i. In order to improve the effectiveness of matched drops, we consider a method that uses a FIFO buffer for storing the flow IDs of packets forwarded in the past.

require additional processing time or large memory space. We generalize matched drops by drawing flow IDs from a unified buffer, which includes the ID buffer and the packet buffer. This modification is illustrated in Fig. 2, interpreted as a step inserted between Step (4) and Step (5) in Fig. 2. Let LID denote the number of IDs in the buffer when a new packet arrives. Draws can happen either in the regular packet buffer or in the ID buffer. The probabilities that the draws happen in the ID buffer and in the packet buffer are LID/(LID + L) and L/(LID + L), respectively. If the draws are from the ID buffer, only one packet (i.e., the new arrival) is dropped each time, and hence the maximum number of draws is set to 2pi, implemented by m ← 2m in Fig. 2.

In Subsection 2.2, we use a serial drawing process for the description (i.e., packets are drawn one at a time) to let the algorithm be easily understood. If this process does not meet the timing requirement of packet forwarding in the router, a parallel method can be introduced. Let a be the flow ID of the arriving packet, and bi (i = 1, 2, ..., m) the flow IDs of the packets drawn from the buffer. The logical operation of matched drops can be represented by bitwise XOR (⊕) and bitwise AND (∧) as follows: if

    ∧_{i=1}^m (a ⊕ bi) = 0 (false),

then conduct matched drops. Note that the above condition is satisfied if any term a ⊕ bi is false, i.e., any bi drawn from the buffer can provoke matched drops if it equals a. When the drawing process is applied to the packet buffer, matched drops happen in pairs: besides the arriving packet, we can simply drop any one of the buffered packets whose flow ID bi makes a ⊕ bi = 0.
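In software, the per-ID comparison can be written directly with XOR: a drawn ID b equals the arrival's ID a exactly when a ⊕ b = 0. The sketch below (illustrative names, not the original code) checks each drawn ID individually and reports every match; any match triggers matched drops:

```python
def matched_drop_candidates(a, drawn_ids):
    """Parallel matched-drop test: compare the arriving packet's flow ID
    `a` against each drawn flow ID with bitwise XOR. Returns the indices
    of the drawn packets whose ID equals `a`; a non-empty result means
    matched drops are conducted."""
    return [k for k, b in enumerate(drawn_ids) if a ^ b == 0]
```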

As shown in Fig. 2, when the priority-weight ratio w(2)/w(1) is higher, the bandwidth share allocated to the higher-priority flows will be greater. In the meantime, considering that the total available bandwidth does not change, the bandwidth share allocated to the lower-priority flows will be smaller. The value of w(2)/w(1) should be tailored to the needs of the applications, the network environments, and the users' demands. This research can also be incorporated with price-based DiffServ networks to provide differentiated bandwidth allocation as well as TCP protection.


where p0 is the basic drawing factor that is adjusted according to the severity of network congestion. The heavier the network congestion is, the larger the value of p0 will be. On the other hand, a higher priority class corresponds to a larger wi and a smaller pi. Thus pi carries the information of the network congestion status as well as the priority of the flow. When a packet arrives at a core router, CHOKeW draws some packets from the buffer of the core router at random and compares them with the arriving packet. If a packet drawn from the buffer is from the same flow as the new arrival, both of them are dropped. In CHOKeW, the maximum number of packets that will be drawn from the buffer upon each arrival is pi, as mentioned above. The strategy of matched drops, i.e., dropping packets from the same flow in pairs, was designed for CHOKe by Pan et al. [112] to protect TCP flows. CHOKe works in traditional best-effort networks, while CHOKeW was designed for DiffServ networks that are able to support multiple priority classes. Other differences between CHOKe and CHOKeW can be found in our previous work [135].

In DiffServ networks, besides a buffer management scheme for core routers, an admission control strategy for edge routers is also necessary. Otherwise, whenever users arrive, they can start to send packets into the network, even if the network is already heavily congested. The lack of admission control strongly devalues the benefit that DiffServ can produce, and the deterioration of service quality resulting from network congestion cannot be solved by CHOKeW alone. Pricing is a straightforward and efficient strategy for assigning priority classes to different flows, and for alleviating network congestion by raising the price when the network load becomes heavier. When a network is modeled as an economic system, a user who is willing to pay a higher price will go to a higher priority class and thus will be able to enjoy higher-quality service. Moreover, by charging a higher price, a network provider can control the number of users who are willing to pay the price to use the network, which, in return, becomes a method of protecting the service quality of existing users.

Figure 3: The ConTax-CHOKeW framework. ConTax is in edge routers, while CHOKeW is in core routers.

We present a pricing scheme for CHOKeW in this chapter. Our pricing scheme works in edge networks and assigns higher priority to users who are willing to pay more. When the network congestion becomes heavier, our pricing scheme increases the price by a value that is proportional to the congestion measurement, which is equivalent to charging a tax due to network congestion; thus we name our pricing scheme ConTax (Congestion Tax).

The chapter is organized as follows. Our scheme is introduced in Section 3.2, including the ConTax-CHOKeW framework in Subsection 3.2.1, the pricing model in Subsection 3.2.2, and the user demand model in Subsection 3.2.3. We use simulations to evaluate the performance of our scheme in Section 3.3, which covers the experiments for investigating the control of the number of users that are admitted, the regulation of the network load, and the gain of aggregate profit for the network service provider. Finally, the chapter is concluded in Section 3.4.

3.2.1 The ConTax-CHOKeW Framework

ConTax is a combination of a pricing scheme and an admission control scheme. It can be implemented in edge routers, gateways, AAA (authentication, authorization and accounting) servers, or any devices that are able to control network access. Without loss of generality, in this chapter we assume that edge routers are the devices that have a ConTax module.

The ConTax-CHOKeW framework is illustrated in Fig. 3. In this figure, hexagons represent core routers, circles are edge routers, and diamonds denote users. When users try to obtain network access, they contact the neighboring edge routers and look up the price for a desired priority class, which is provided by ConTax. If the price is within budget, they pay the price and get the network access; otherwise, they do not request network access. After user U obtains network access from edge router E by paying π(i) credits, corresponding to priority i, U can send packets into the network via E. Each packet from U is marked with priority i by E before it enters the core network. When a packet from U arrives at core router C, C uses CHOKeW to decide whether to drop this packet together with another packet belonging to the same flow from the buffer, i.e., to conduct matched drops. If the arriving packet is not dropped, it will enter the buffer. However, this packet may still be dropped by CHOKeW before it is forwarded to the next hop if the sending rate of this flow is much faster than that of other flows, since a faster sending rate causes more arrivals during the same period of time. Thus CHOKeW is able to provide better fairness among the flows in the same priority class than conventional AQM schemes such as RED [60] and BLUE [59].

In addition to marking arriving packets with priority values, edge router E is responsible for adjusting prices according to the network congestion. A higher price has a higher potential to exceed the budget of more users. If the price rises when congestion happens, fewer users are willing to pay the price to use the network, and consequently the network congestion is alleviated. By using the pricing scheme, edge routers can effectively restrict the traffic that enters the core networks to a reasonable volume so that it will not cause significant congestion. On the other hand, when the network is less congested, the price should be reduced appropriately to avoid low link utilization. The pricing function will be discussed in the following section.

Formula (3.2) shows the relationship between the quantity of resources and the price. The price increasing rate is determined by the slope, α (α > 0), and the initial price is controlled by the y-intercept, c (c > 0). In particular, if a flow in priority i gets i times the bandwidth of a flow in priority 1, formula (3.2) is equivalent to the following continuous function:

    π0(x) = αx + c.    (3.3)

Formula (3.3) matches many current pricing strategies of ISPs (for example, the DSL Internet services provided by BellSouth [22]). One of the features of our pricing model is that the more resources a user obtains, the lower the unit-resource price will be. The unit-resource price is denoted by r = π0/x. From (3.3), we have ∂r/∂x = −c/x². Since c > 0, ∂r/∂x < 0. Another feature of our pricing model is that the unit-resource price is a convex function of resources, as ∂²r/∂x² > 0. In other words, when a user obtains more resources from the provider, the unit-resource price will decrease, but the decreasing speed will slow down, which guarantees that the unit price will never reach zero.

When we have a congestion measurement, we can use it to build a congestion tax, represented by t. The final price is then determined by (3.4).

In the ConTax-CHOKeW framework of Fig. 3, a thicker line represents a link with higher bandwidth. A core router becomes a bottleneck only because many flows go through it; therefore, the congestion in core networks results from the traffic generated by many senders. If the congested core router itself had to report congestion, it would have to send messages (or many copies of the same message) to different edge routers, and the control messages could worsen the congestion, since more bandwidth is required to transmit them. We notice instead that an edge router is able to record the number of users in each priority class, as these users receive network admission from the edge router. For an edge router, let ni denote the number of users in priority class i.

A simple method to measure the network congestion, from the viewpoint of the edge router, is to use Σ_{i=1}^I i·ni, where the positive integer I is the highest priority class supported by the network. In the rest of this chapter, we also call Σ_{i=1}^I i·ni the network load for the edge router, and it will be used to charge the congestion tax. However, the congestion tax should not be charged when Σ_{i=1}^I i·ni is very small. We introduce a threshold to indicate the beginning of congestion, denoted by M (M > 0), and the congestion measurement becomes (3.5). Using (3.2) and (3.5) in (3.4), we get (3.6).

In this scheme, when user U keeps using the network, the price πU that U needs to pay does not change; it is determined at the moment when U is admitted into the network. The philosophy here is to consider πU a commitment made by the network provider. Some other pricing-based admission control protocols, such as that proposed by Li, Iraqi and Boutaba [93], use class promotion for existing users to maintain the service quality without charging a higher price. In our scheme, if the network becomes more congested, new users will be charged higher prices, which prevents further deterioration of the congestion and thus maintains the service quality for existing users to some extent. As shown in Fig. 3, an edge router updates the price π(j) for every priority class j (j = 1, 2, ..., I) only when a new user is admitted or an existing user completes the communication.
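The pieces above can be combined into a price computation. The base price π0(i) = αi + c follows (3.3) and the congestion measurement max(Σ j·nj − M, 0) follows the description of (3.5); how exactly the tax enters the final price in (3.6) is not recoverable here, so the proportional form below (tax scaled by the base price and a sensitivity β) is an assumption:

```python
def contax_price(i, n, alpha, c, beta, M):
    """ConTax price for priority class i (sketch). `n` lists the number
    of admitted users per class, n[0] for class 1. Base price follows
    Eq. (3.3); the way the congestion tax is combined with it is an
    assumed reading of Eq. (3.6)."""
    load = sum(j * n_j for j, n_j in enumerate(n, start=1))
    x = max(load - M, 0)              # congestion measurement (Eq. (3.5))
    base = alpha * i + c              # basic price for class i (Eq. (3.3))
    return base * (1.0 + beta * x)    # base price plus congestion tax
```

With the load below the threshold M, the tax vanishes and the basic price is charged unchanged.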

Figure 3: The ConTax algorithm

The demand of users is usually modeled as a decreasing function of the price [82, 93]. In ConTax, for priority class i, the user demand d(i) can be modeled as (3.7), where the parameter determines the sensitivity of demand to the price change. From (3.6) and (3.7), Fig. 3 illustrates the method of determining the value of demand based on the network load Σ_{i=1}^I i·ni in an edge router. In the market, the network load can be interpreted as supply; i.e., by charging price π(i) for each priority class i, the network provider is willing to provide service that is equal to Σ_{i=1}^I i·ni. The heavier the load is, the higher the price will be. We do not draw the supply-demand relationship in one single graph, because in ConTax the supply is the sum of the load of all priority classes, while the demand takes effect on each class individually. By using the two graphs in Fig. 3, given network load Σ_{i=1}^I i·ni, we can find the price for a priority class according to the supply curve corresponding to that class in the left graph. Then, in the right graph, the same price maps onto a demand value that is determined by the corresponding demand curve. This process is illustrated in Fig. 3 by dashed arrows.
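The exact form of the demand function (3.7) is not recoverable here; the sketch below uses one decreasing form consistent with the later calibration that willingness drops to about one half when the price is doubled (the parameter ρ = 0.5), and should be read as an assumption, not the original equation:

```python
def demand(price, base_price, rho=0.5):
    """User demand model (assumed form of Eq. (3.7)): admission
    probability decays as the price rises above the base price, and
    equals rho when the price is doubled. d = 1 when no tax is charged."""
    return rho ** ((price - base_price) / base_price)
```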

Figure 3: The supply-demand relationship when ConTax is used. The left graph shows the price-supply curves, and the right graph the price-demand curves for each class.

In the simulations for the network of Fig. 3, we assume that user arrivals comply with a Poisson distribution. Even though the validity of the Poisson model for traffic in Wide Area Networks (WANs) has been questioned, investigation has shown that user-initiated TCP sessions can still be well modeled by Poisson arrivals [116]. In the simulations, we let the average arrival rate be λ = 3 users/min unless specified otherwise. An arriving user is admitted into the network with the probability given by Eq. (3.7). If the new arrival is admitted, the data transmission will last for a period of time, which is simulated by a random variable ξ whose Cumulative Distribution Function (CDF) satisfies a Pareto distribution.
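The arrival-and-admission process described above can be sketched as follows; the demand function is left as a parameter, and the load/price feedback loop and the Pareto session durations are omitted for brevity (illustrative names, not the original simulator):

```python
import random

def simulate_admissions(rate_per_min, minutes, demand_fn, seed=0):
    """Poisson user arrivals (exponential inter-arrival gaps at
    `rate_per_min`), each admitted with probability demand_fn(admitted),
    where `admitted` is the current count of admitted users. Returns
    (arrivals, admitted)."""
    rng = random.Random(seed)
    arrivals = admitted = 0
    t = 0.0
    while True:
        t += rng.expovariate(rate_per_min)  # next inter-arrival gap
        if t > minutes:
            break
        arrivals += 1
        if rng.random() < demand_fn(admitted):
            admitted += 1
    return arrivals, admitted
```

With a constant demand of 1 (no congestion tax), every arrival is admitted, matching the baseline behavior described in the evaluation.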

The tax sensitivity parameter is set to 5.0×10⁻³ and the threshold for charging a congestion tax to M = 20. The demand of users is simulated with parameter ρ = 0.5, which is based on the assumption that the willingness to use the network is close to half when the price is doubled (i.e., π(i) = 2π0(i)). The two parameters for calculating the basic price, α and c, are assigned the values 3.0×10⁻⁴ dollars/min and 7.0×10⁻⁴ dollars/min, respectively.

The results are shown in Fig. 3 through Fig. 3, respectively. When the congestion tax is not charged, i.e., π(i) = π0(i) at all times, an arriving user is always admitted since the demand is the constant 1. So the number of admitted users is equal to the number of arrivals, which rises whenever a user arrives and decreases when a user completes the communication and leaves. By contrast, when ConTax is applied, after the time reaches 306 sec the demand becomes less than 1, due to more users being admitted and a load exceeding M (Fig. 3). Some users choose not to obtain admission when they arrive. In each class, the number of users admitted into the network is smaller than the number of arrivals, as illustrated by a sub-figure in Fig. 3. Accordingly, in Fig. 3, the network load when ConTax is used is also lower than that without the congestion tax. The aggregate price concerns the network provider not only for alleviating network congestion, but also for earning profit. From Fig. 3, we see that ConTax can bring a higher aggregate price and therefore more profit to the network provider. On the other hand, by paying a price that is slightly higher than the basic price, a user can enjoy network service with better quality due to less congestion. As in Fig. 3, the number of users

Figure 3: Dynamics of the network load (i.e., Σ_{i=1}^I i·ni) in the case of two priority classes

admitted for each priority class in Fig. 3, as well as the demand in Fig. 3, is lower when ConTax is employed than without the congestion tax, while the aggregate price is higher (Fig. 3). By comparing Fig. 3 with Fig. 3, we see that the network load is heavier in the case of three priority classes than in that of two classes, since the users in the third priority class consume more resources. The demand curve in Fig. 3 is lower than the curve in Fig. 3, resulting from the higher load in the network that supports more priorities. The curve of the aggregate price in Fig. 3 is higher than the curve in Fig. 3. In other words, by supporting more priority classes, the network provider can make more profit, and users are better served by having more options for their applications.


Figure 3: Number of users that are admitted into the network in the case of two priority classes. (Subfigure (b): priority class 2.)


Figure 3: Demand of users in the case of two priority classes
Figure 3: Aggregate price in the case of two priority classes


Figure 3: Dynamics of network load in the case of three priority classes

In Subsections 3.3.1 and 3.3.2, the average arriving rate of users is 3 users/min. Since the traffic volume in the network varies with time and location, we are also interested in the performance of ConTax when the arriving rate is different. In this subsection, we let the arriving rate be 6 users/min and repeat the simulations in Subsection 3.3.1. The results are shown from Fig. 3 to Fig. 3. First of all, we can see that the difference between the number of admitted users and the number of arrivals is more significant in Fig. 3 than in Fig. 3, by comparing the results for the same priority class. The reason is that the network load tends to be heavier when more users arrive during the same period of time, which leads to a smaller demand, as demonstrated by the comparison between Fig. 3 and Fig. 3. Correspondingly, the advantage of charging a congestion tax with respect to the network load, as shown by the difference between the two curves in Fig. 3, is more significant than that in Fig. 3. The aggregate price curve (Fig. 3) has a similar shape to the curve in Fig. 37, but it rises faster when the arriving rate is 6 users/min, with an increasing speed that is also roughly doubled.


Figure 3: Number of users that are admitted into the network in the case of three priority classes. (Subfigures (b) and (c): priority classes 2 and 3.)


Figure 3: Demand of users in the case of three priority classes
Figure 3: Aggregate price in the case of three priority classes


Figure 3: Dynamics of network load when the arriving rate is 6 users/min


Figure 3: Number of users that are admitted into the network when the arriving rate is 6 users/min. (Subfigure (b): priority class 2.)


Figure 3: Demand of users when the arriving rate is 6 users/min
Figure 3: Aggregate price when the arriving rate is 6 users/min


When the arriving rate of users rises, the network load also increases, and the demand decreases accordingly. This may result in a more noticeable performance difference between ConTax and a pricing scheme that does not charge a congestion tax.




a flow will follow. For example, in ATM networks, an admission decision (for example, Courcoubetis et al. [49]) is typically made after the connection request completes a round trip and shows that resources are available at each hop.


provides network access to other users, and only contributors can obtain network access from other contributors when they are mobile. We notice that their scheme works as a barter market from an economic viewpoint, since all users trade their network resources for other users' network resources. Since a monetary system is more efficient and more flexible, we aim to design an admission control scheme combined with pricing, where users can make payments in any form of credit that they are used to in their daily lives. In industry, an admission control scheme combined with pricing, for IEEE 802.11 based networks, was devised by Fon [65]. The scheme requires all parties to be registered with a control center before they share network resources with each other, and the price, which still uses a flat rate and is unrelated to the service quality, is also determined by the control center. This causes inflexibility for parties who prefer to make their own admission decisions according to the currently available resources and the requested service quality. Therefore, a distributed admission control scheme is more appropriate for WMNs. In order to meet these requirements, we propose a group-based admission control scheme, where only two parties are involved in the operations upon each admission request. The determination criteria are the available resources and the requested resources, which correspond to supply and demand in an economic system. The involved parties use the economic notions of utility, cost, and benefit to calculate the available and requested resources. Therefore, our scheme is named APRIL (Admission control with PRIcing Leverage). Since the operations of our scheme are conducted in a distributed manner, there is no need for a single control center. The chapter is organized as follows. We introduce the idea of groups in WMNs in Section 4.2. The pricing model is discussed in Section 4.3. Based on the pricing model, Section 4.4 provides the procedure of our admission control scheme, followed by performance evaluation in Section 4.5. The chapter is concluded in Section 4.6.


Figure 4: Tree topology formed by groups in a WMN

traffic routes, the topology of a WMN can be illustrated by a tree or multiple trees, each having a hotspot as the root, mesh users as the leaves, and mesh routers or hybrid devices as branches. If we substitute groups for devices, the root, the branches, and the leaves are all formed by groups. In the tree topology illustrated in Fig. 4, let G_0 represent the group that includes the hotspot. Several other groups are connected to G_0 directly to get network access. They are represented by G_1^(1), G_1^(2), ..., respectively. In general, we denote a group in the tree topology by G_i, and denote by G_{i-1} the group that provides network access to G_i. G_i also provides network access to groups G_{i+1}^(1), G_{i+1}^(2), ..., G_{i+1}^(k). After G_i obtains the permit of network access from G_{i-1} by paying some credits, it can further share the resources with G_{i+1}^(j), j = 1, 2, ..., by accepting payments from them.
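The provider/customer relations described above can be captured by a minimal tree structure. The class below is a sketch (the names are ours, not APRIL's): each group records the group it buys access from and the groups it resells access to.

```python
class Group:
    """A group in the WMN tree: buys access from `provider`, resells to `customers`."""

    def __init__(self, name, provider=None):
        self.name = name
        self.provider = provider   # G_{i-1}: the group providing access upstream
        self.customers = []        # G_{i+1}^(1), G_{i+1}^(2), ...: downstream groups
        if provider is not None:
            provider.customers.append(self)

    def path_to_root(self):
        """Groups traversed to reach the hotspot group at the root."""
        node, path = self, []
        while node is not None:
            path.append(node.name)
            node = node.provider
        return path

# The hotspot group G0 is the root; other groups attach below it.
g0 = Group("G0")
g1 = Group("G1", provider=g0)
g2 = Group("G2", provider=g0)
g3 = Group("G3", provider=g1)
```

Traffic and payments both follow the `provider` links toward the root, which is why an admission request only ever involves two parties: a group and its prospective provider.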


In special cases, the admission control may need to be conducted between different devices within a group (this may happen even though these devices belong to the same party); we can always further categorize the devices of the same group into subgroups. The grouping process goes forward until all admission control operations occur between groups. In this manner, we ensure that only two subgroups are involved in the admission control upon each request, and the number of parties involved in the admission control is minimized.

(4.1), which means that the user cannot get any resources from the provider. This assumption matches many current pricing strategies of ISPs. For example, the DSL Internet services provided by BellSouth have a different monthly price for each plan [22]. By collecting the values of the uplink and downlink bandwidth, we illustrate the prices and the corresponding bandwidth in Fig. 4. A linear approximation compliant with (4.1) is also added to each subfigure. We see that the downlink and uplink prices have the approximations p = x/300 + 27 and p = x/16 + 15.5, respectively. In reality, the values of uplink bandwidth and downlink bandwidth are both included in a service plan, so we only have to focus on one link direction. In our model, we use the general term resources, which


Figure 4: Prices vs. bandwidth of BellSouth DSL service plans. (Subfigure (b): uplink.)

can denote the combination of uplink and downlink bandwidth, or other types of resources, depending on the specific requirements. One of the features of our pricing model is that the more resources a user gets, the lower the unit-resource price will be. The unit-resource price is denoted by r = p/x. From (4.1), we have ∂r/∂x = -c/x^2. When x > 0, ∂r/∂x < 0. Another feature of our pricing model is that the unit-resource price is a convex function of the resources, since ∂^2 r/∂x^2 > 0. In other words, when a user obtains more resources from the provider, the unit-resource price will decrease, but the decrease will slow down, so that the unit price never reaches zero.

4.4.1 Competition Type for APRIL

The APRIL algorithm is based on knowledge of the economic systems of WMNs. We need to find the competition type of the market corresponding to these systems. Traditionally, a market can be modeled by one of three competition types: 1) a monopoly, if a single provider controls the amount of goods (i.e., resources in the network access market of WMNs herein) and determines the market price, 2) a competitive market, if there are many


providers and users but none of them can dictate the price, and 3) an oligopoly, where only a few providers are available [50]. (see Subsection 4.4.2 for details). On the other


hand, we assume that each user obtains network access through only one provider at a time, so that the complexity of the network configuration (such as routing algorithms, address allocation, load balancing, traffic aggregation, etc.) will not be too high. This circumstance is different from conventional competitive markets. We apply the nonnegative benefit principle to providers, which will be discussed in detail in Subsection 4.4.3.

The utility function u_i(x_i) is increasing and concave, which matches the features of elastic traffic [124]. Compared with some other utility models such as Kelly et al. [85], our utility model (4.2b) has a term x̂_i. This term represents the minimum resource requirement of the applications. Many network applications generate utility only when x_i > x̂_i; otherwise, the communication quality would be too poor to be useful. Parameter θ_i denotes the willingness of the applications to obtain more resources. A user having applications with a higher value of θ_i will be willing to pay more for more resources. The pricing model (4.2a) is from (4.1). Parameters θ_i and x̂_i are application-dependent, while β and c, determined by the pricing scheme, are application-independent. In other words, network resources can be


used by any applications of the user's choice, and the provider who announces the pricing plan does not have to know which applications use the resources. This also reflects the fact that the IP based network architecture, with the same resources, has been and will be used/shared by a variety of applications. In APRIL, we use a uniform pricing scheme for all groups, i.e., β and c do not have a subscript i. This results from the competition of service providers in the market, and thus the nonnegative benefit principle is applicable to providers (see Subsection 4.4.3). Under competitive circumstances, all pricing schemes tend to be similar. By paying p_i to get u_i, group G_i generates a benefit. Based on (4.4), let b̃_i represent θ_i log(x_i - x̂_i + 1) - (βx_i + c). Then the possible cases are illustrated in Fig. 4. In Subfigure 4(a), x*_i > X_i and b_i(X_i) > 0. Since 0 <

Figure 4: Utility u_i and price p_i vs. resources x_i

when x*_i > X_i and b_i(X_i) ≤ 0, x_i = 0, i.e., G_i will not request any resources since no positive benefit is available. If 0 < x*_i ≤ X_i and b_i(x*_i) > 0, as shown in Subfigure 4(c), x_i = x*_i is the best choice. When x*_i ≤ 0 and b_i(x*_i) > 0 (Subfigure 4(d)), x_i = 0, and no positive benefit exists in the available resource range. In Subfigure 4(e), x*_i ≤ X_i and b_i(x*_i) ≤ 0, so x_i = 0 again. By merging all cases that result in x_i = 0, we get


Since each user obtains network access through only one provider at a time, unlike the case in a conventional competitive market, it may not be a good strategy for a provider to simply supply the amount of resources that maximizes the benefit when it has more available resources. As described in the previous subsection, a user always seeks the maximum benefit. When this user has more than one choice for its provider (this is exactly the case that happens in a competitive market), it will select the provider who supplies more resources, if the supply from other providers is less than the demand that maximizes the user's benefit. Because the user can only choose one provider, when the nonnegative benefit principle is considered, the provider selected by this user will obtain a nonnegative benefit gain, while all other providers have zero benefit gain. This circumstance forces all providers to adopt the nonnegative benefit principle. This principle has also been used in previous work in economics [91]. When group G_i obtains access from group G_{i-1}, it can decide whether to further share the resources with groups G_{i+1}^(j), j = 1, 2, .... The further resource sharing will change the utility function of G_i. In this subsection, we will first discuss the new utility function of G_i. Then we will give more details of the nonnegative benefit principle based on the analytical results. After G_i shares the resources with G_{i+1}^(1) and receives the payment from G_{i+1}^(1), the resources used by G_{i+1}^(1) will be deducted from the utility, but the payment will be added to the utility. The utility function will become


Here x_i becomes the total resources that G_i and G_{i+1}^(1) receive, which is determined by (4.7). The resources used only by G_i become x_i - x_{i+1}^(1), since x_i is shared by G_i and G_{i+1}^(1). On the other hand, the price function p_i is still the same as (4.2a), which represents the price that G_i needs to pay G_{i-1} for the use of x_i. G_i allows the admission of G_{i+1}^(1) only when it gains benefit from the resource sharing. Let b_i^(1) be the benefit for G_i after the resource sharing with G_{i+1}^(1). The benefit gain is


Similarly, after G_i shares the resources with G_{i+1}^(2), the utility function becomes. Based on (4.9), the curve corresponding to b_i^(k)(x_{i+1}^(k)) has three possible shapes, as shown in Fig. 4. As it is required that x_{i+1}^(k) > 0, we first discuss the shape in the range of 0 <

Figure 4: Three possible shapes of b_i^(k)(x_{i+1}^(k))

as shown in Fig. 4(a). If θ_i/β ≤ x_i - Σ_{j=1}^{k-1} x_{i+1}^(j) - x̂_i + 1, b_i^(k) is decreasing in the range of 0 < x_{i+1}^(k). If 0 < X_i ≤ x̂_i when G_i sends the admission request to G_{i-1}, then from (4.2a), (4.2b) and (4.3), ∂b_i/∂x_i = θ_i/(x_i - x̂_i + 1) - β < 0. We also know that b_i(x_i)|_{x_i = x̂_i} = -(βx̂_i + c) < 0. Therefore, b_i(x_i) < 0 when x_i ≤ x̂_i. In other words, no resources can provide a positive benefit for G_i, and hence 0 <

Figure 4: Determining X_{i+1}^(k) when x_i - Σ_{j=1}^{k-1} x_{i+1}^(j) - x̂_i > 0

as shown in Fig. 4(c). When x_i - Σ_{j=1}^{k-1} x_{i+1}^(j) - x̂_i > 0, the discussion falls into two cases, categorized by the ranges of b_i^(k)(x_i - Σ_{j=1}^{k-1} x_{i+1}^(j) - x̂_i). Without loss of generality, we use the shape of Fig. 4(a) to illustrate these two cases in Fig. 4. Case 1. When b_i^(k)(x_i - Σ_{j=1}^{k-1} x_{i+1}^(j) - x̂_i) < 0, as shown in Fig. 4(a), X_{i+1}^(k) is the x-coordinate of the intersection of the curve of b_i^(k)(x_{i+1}^(k)) and the x-axis in the range of 0 <

Here we do not use the value of x_{i+1}^(k) that maximizes b_i^(k), because of the competition among providers. If G_i provides fewer resources than another provider, G_{i+1}^(k) may choose that provider for network access as long as it gets a greater benefit (see Subsection 4.4.2). On the other hand, because of the competition, the price will be adjusted by the market to a marginal value p_i, which is determined only by x_i. No service provider can set a price significantly higher than p_i and win customers at the same time. By using the nonnegative benefit principle, G_i shows the available resources X_{i+1}^(k) to group G_{i+1}^(k) for admission control.

1. When group G_i has available resources, i.e., X_{i+1}^(k) > 0, it will publicize the value of X_{i+1}^(k), as well as β and c. As discussed before, β and c tend to be the same for all groups.

2. When group G_j requests network access, it scans the neighboring groups that have available resources. After treating the available resources X_{i+1}^(k) > 0 publicized by G_i as X_j, group G_j uses (4.7) to calculate x_j. If x_j > 0, G_j uses x_j as the requested resources and sends the admission request to G_i. If several neighboring groups are available, G_j only sends the admission request to the group that gives the largest x_j.

3. After receiving the request from G_j, G_i shares the resources with G_j using a resource allocation scheme, which can be a buffer management scheme, a scheduling scheme, or any effective scheme that is capable of allocating resources to different groups.

We use the network in Fig. 4 to simulate a wireless mesh network. For simplicity, in the simulations, link capacity is the only resource we consider. It is possible to include more types of resources in the future.
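Steps 1 and 2 of the procedure above can be sketched as follows. This is a sketch under an assumption: we take the benefit to have the form θ·log(x - x̂ + 1) - (βx + c) discussed in this chapter, and the closed-form maximizer x* = x̂ - 1 + θ/β and all function names are ours, not the dissertation's exact (4.7).

```python
import math

BETA = 7.716e-8   # price slope, dollars per kb/s per minute (Section 4.5 values)
C = 6.944e-4      # flat price component, dollars per minute

def benefit(x, theta, xhat):
    # b(x) = theta*log(x - xhat + 1) - (BETA*x + C), valid for x > xhat - 1
    return theta * math.log(x - xhat + 1) - (BETA * x + C)

def requested(theta, xhat, available):
    """Step 2: resources a group requests, given a provider's publicized X."""
    x_star = xhat - 1 + theta / BETA      # unconstrained maximizer of the benefit
    x_req = min(x_star, available)        # cannot exceed what is offered
    if x_req <= xhat or benefit(x_req, theta, xhat) <= 0:
        return 0.0                        # no positive benefit: request nothing
    return x_req

def select_provider(offers, theta, xhat):
    """Step 2 with several neighbors: request from the one giving the largest x_j."""
    best_name, best_x = None, 0.0
    for name, available in offers.items():
        x_req = requested(theta, xhat, available)
        if x_req > best_x:
            best_name, best_x = name, x_req
    return best_name
```

For the parameters used later for G4 (θ = 6.0×10^-4, x̂ = 20 kb/s) and the offers narrated in the evaluation (G3: 2227 kb/s, G2: 2580 kb/s), this sketch picks G2, matching the simulation outcome.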


Figure 4: Simulation network for APRIL

At the beginning, only G_0 is connected to the Internet. The bandwidth of the wired backhaul is 6000 kb/s. The topology is shown in Fig. 4. G_1 and G_2 are neighboring groups of G_0. G_3 is a neighboring group of G_1. G_4 is a neighboring group of G_2 as well as of G_3. Assume the same price as described in Section 4.3 is used, i.e., a monthly fee of p = x/300 + 27 dollars, which is equivalent to a per-minute fee of p = 7.716×10^-8 x + 6.944×10^-4 dollars. In other words, β = 7.716×10^-8 dollars/kb/s and c = 6.944×10^-4 dollars. At the beginning, G_0 uses the maximum benefit principle to decide its amount of bandwidth, x_0 = 6000 kb/s. From (4.6), θ_0 = 4.167×10^-4 dollars. We also set x̂_0 = 10 kb/s, which is an empirical bandwidth requirement for surfing text-based web pages. G_1 and G_2 have the same value of θ_i = 2.308×10^-4 dollars, which corresponds to x*_i = 3000 kb/s. The same as G_0, x̂_i = 10 kb/s. G_3 and G_4 are more sensitive to the bandwidth. For them, θ_i = 6.0×10^-4 dollars and x̂_i = 20 kb/s.
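As a numerical cross-check of these parameter values (under the same reconstructed benefit form as above, which is our assumption), a brute-force scan of b(x) = θ log(x - x̂ + 1) - (βx + c) with θ = 2.308×10^-4 and x̂ = 10 peaks near 3000 kb/s:

```python
import math

BETA = 7.716e-8    # dollars per kb/s per minute
C = 6.944e-4       # dollars per minute
THETA = 2.308e-4   # willingness parameter of G1 and G2
XHAT = 10.0        # minimum bandwidth requirement, kb/s

def b(x):
    # Benefit: log-utility above the minimum requirement minus the linear price.
    return THETA * math.log(x - XHAT + 1) - (BETA * x + C)

# Scan integer bandwidths up to the backhaul capacity for the maximizer.
best_x = max(range(11, 6001), key=b)
```

The scan lands within a few kb/s of 3000, the x*_i quoted in the text, and the benefit there is positive, so the request is worth making.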


Figure 4: Available bandwidth, bandwidth allocation, and benefit. (Subfigures: (b) bandwidth allocation; (c) benefit.)


We simulate the available bandwidth, the bandwidth allocation, and the benefit, which change with time due to the new admissions. The results are illustrated in Fig. 4. The bandwidth and benefit are measured each time a new group obtains network access. Each result is shown as a mark in the figures. The marks corresponding to the same group are connected by a solid line for ease of observation. Note that the value related to the previous mark does not change until the curve reaches the next dot. At time 0, G_0 publicizes the available bandwidth X_{i+1}^(1) = 5558 kb/s based on (4.11). At the same time, it uses all the bandwidth by itself while waiting for an admission request. The bandwidth is not wasted even before other groups can share it. At time 10, the admission of G_1 changes the bandwidth allocation. The bandwidth shares of G_0 and G_1 are both 3000 kb/s, which leads to the maximum benefit x*_1 as well as approximately double the benefit in total. After G_1 is admitted, the available bandwidth of G_0 changes to X_{i+1}^(2) = 2644 kb/s. Group G_1 also publicizes its available bandwidth, 2935 kb/s. At time 20, group G_2 sends an admission request to its neighboring group, G_0, asking for bandwidth 2644 kb/s, calculated from (4.7). After that, the available bandwidth of G_0 changes to X_{i+1}^(3) = 356 kb/s, and the available bandwidth of G_2 is 2580 kb/s. The total benefit increases to 0.4944. At time 30, with the admission of G_3 via G_1, the available bandwidth and bandwidth allocations of G_1 and G_3 change, but those of G_0 are not affected, even though the traffic of G_3 needs to go through G_0 to reach the Internet. In APRIL, G_0 treats the traffic from G_3 as a part of the traffic from G_1. Thus the admission control scheme has good scalability. When G_4 sends its admission request at time 50, two neighboring groups are considered. If G_4 connected to G_3, the available bandwidth provided by G_3 (2227 kb/s) would generate a smaller benefit than that generated by G_2's available bandwidth (2580 kb/s). Therefore, G_4 selects G_2 as its service provider. After its admission, the total benefit reaches five times the benefit when the network had only the single group G_0.
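The provider-side decision (a group admits a child only when the resource sharing yields a positive benefit gain) can be sketched under the same reconstructed forms. The gain expression below (log-utility lost on the shared bandwidth, plus the payment received from the child) is our assumption for illustration, not the dissertation's exact equation.

```python
import math

BETA = 7.716e-8   # dollars per kb/s per minute
C = 6.944e-4      # dollars per minute

def sharing_gain(x_i, x_child, theta, xhat):
    """Benefit gain for provider G_i from granting x_child of its x_i to a child.

    Sketch: G_i loses log-utility on the bandwidth it gives up but receives
    the child's payment BETA*x_child + C; its own payment to G_{i-1} for x_i
    is unchanged and therefore cancels out of the gain.
    """
    utility_before = theta * math.log(x_i - xhat + 1)
    utility_after = theta * math.log(x_i - x_child - xhat + 1)
    payment_received = BETA * x_child + C
    return (utility_after - utility_before) + payment_received
```

With G_0's parameters (θ = 4.167×10^-4, x̂ = 10 kb/s), sharing 3000 of 6000 kb/s yields a positive gain, while sharing nearly everything (5900 kb/s) does not, which is consistent with the provider keeping a share of the resources for itself.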


We notice that each time a new group is admitted, the direct service provider (i.e., the neighboring group that shares the resources with the new group) will have fewer resources available. This makes it more likely that the next new group will select a neighboring group that has fewer hops to the hotspot, unless such groups do not have enough resources available. This self-organizing style is economical with respect to the use of network resources, and is thus helpful for improving network performance for a given investment.




Although TCP/IP will still be the common infrastructure in the foreseeable future, it is evident that the future networking environment will be strongly characterized by the heterogeneity of networks. The concept of traffic control is not limited to packet transfer; it also imposes additional requirements on mobility, QoS, and media control signaling. The development of end-to-end QoS faces challenges such as designing a QoS signaling protocol that can be interpreted in each domain without performance conflicts or compatibility problems. The QoS requirements must be translated not only into IP-based (network layer) service level semantics, but also into those of various link layers, due to the popularity of wireless access. We believe that this problem cannot be completely solved without the effort of standardization organizations.


[1] A. A. Abouzeid, S. Roy, and M. Azizoglu, "Comprehensive performance analysis of a TCP session over a wireless fading link with queueing," IEEE Transactions on Wireless Communications, vol. 2, no. 2, pp. 344, March 2003.
[2] D. Aguayo, J. Bicket, S. Biswas, G. Judd, and R. Morris, "Link-level measurements from an 802.11b mesh network," presented at ACM SIGCOMM'04, Portland, OR, August/September 2004.
[3] A. Akella, S. Seshan, R. Karp, and S. Shenker, "Selfish behavior and stability of the Internet: A game-theoretic analysis of TCP," presented at ACM SIGCOMM'02, Pittsburgh, PA, July/August 2002.
[4] I. F. Akyildiz, W. Su, Y. Sankarasubramaniam, and E. Cayirci, "A survey on sensor networks," IEEE Communications Magazine, vol. 40, no. 8, pp. 102, August 2002.
[5] I. F. Akyildiz, X. Wang, and W. Wang, "Wireless mesh networks: A survey," Computer Networks, vol. 47, no. 4, pp. 445, March 2005.
[6] J. J. Alcaraz, F. Cerdan, and J. García-Haro, "Optimizing TCP and RLC interaction in the UMTS radio access network," IEEE Network, vol. 20, no. 2, pp. 56, March/April 2006.
[7] M. Allman, V. Paxson, and W. Stevens, "TCP congestion control," IETF RFC 2581, April 1999.
[8] P. Almquist, "Type of service in the Internet protocol suite," IETF RFC 1349, July 1992.
[9] D. Anderson, Y. Osawa, and R. Govindan, "A file system for continuous media," ACM Transactions on Computer Systems, vol. 10, no. 4, pp. 311, November 1992.
[10] ANSI, "DSSI core aspects of frame relay," ANSI T1S1, March 1990.
[11] G. Appenzeller, I. Keslassy, and N. McKeown, "Sizing router buffers," presented at ACM SIGCOMM'04, Portland, OR, August/September 2004.
[12] S. Athuraliya, V. H. Li, S. H. Low, and Q. Yin, "REM: Active queue management," IEEE Network, vol. 15, no. 3, pp. 48, May/June 2001.


[13] B. Atkin and K. P. Birman, "Sizing router buffers," presented at IEEE INFOCOM'03, San Francisco, CA, March/April 2003.
[14] ATM Forum, "ATM traffic management specification version 4.0," AF-TM-0056.000, June 1996.
[15] A. C. Auge, J. L. Magnet, and J. P. Aspas, "Window prediction mechanism for improving TCP in wireless asymmetric links," presented at IEEE Globecom'98, Sydney, Australia, November 1998.
[16] F. Baker, C. Iturralde, F. L. Faucheur, and B. Davie, "Aggregation of RSVP for IPv4 and IPv6 reservations," IETF RFC 3175, September 2001.
[17] A. Bakre and B. R. Badrinath, "I-TCP: indirect TCP for mobile hosts," presented at ICDCS'95, Sydney, Australia, May 1995.
[18] A. V. Bakre and B. R. Badrinath, "Implementation and performance evaluation of indirect TCP," IEEE Transactions on Computers, vol. 46, no. 3, pp. 260, March 1997.
[19] B. S. Bakshi, P. Krishna, N. H. Vaidya, and D. K. Pradhan, "Improving performance of TCP over wireless networks," presented at ICDCS'97, Baltimore, MD, May 1997.
[20] H. Balakrishnan and V. N. Padmanabhan, "How network asymmetry affects TCP," IEEE Communications Magazine, vol. 39, no. 4, pp. 60, April 2001.
[21] H. Balakrishnan, V. N. Padmanabhan, S. Seshan, and R. H. Katz, "A comparison of mechanisms for improving TCP performance over wireless links," IEEE/ACM Transactions on Networking, vol. 5, no. 6, pp. 756, December 1997.
[22] BellSouth, "BellSouth DSL Internet service," last accessed: September 2006. [Online]. Available: http://www.bellsouth.com/consumer/inetsrvcs/index.html
[23] J. Bennet and H. Zhang, "WF2Q: Worst case fair weighted fair queuing," presented at IEEE INFOCOM'96, San Francisco, CA, March 1996.
[24] Y. Bernet, "Format of the RSVP DCLASS object," IETF RFC 2996, November 2000.
[25] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, "An architecture for differentiated service," IETF RFC 2475, December 1998.
[26] U. Bodin, O. Schelen, and S. Pink, "Load-tolerant differentiation with active queue management," ACM SIGCOMM Computer Communication Review, vol. 30, no. 3, pp. 4, July 2000.
[27] R. Braden, D. Clark, and S. Shenker, "Integrated services in the Internet architecture: An overview," IETF RFC 1633, July 1994.


[28] R. Braden, L. Zhang, S. Berson, S. Herzog, and S. Jamin, "Resource reservation protocol (RSVP): Version 1 functional specification," IETF RFC 2205, September 1997.
[29] R. Braden, D. Clark, J. Crowcroft, B. Davie, S. Deering, and D. Estrin, "Recommendations on queue management and congestion avoidance in the Internet," IETF RFC 2309, April 1998.
[30] L. Breslau, E. W. Knightly, S. Shenker, I. Stoica, and H. Zhang, "Endpoint admission control: Architectural issues and performance," presented at ACM SIGCOMM'00, Stockholm, Sweden, August 2000.
[31] B. Bruno, M. Conti, and E. Gregori, "Mesh networks: Commodity multihop ad hoc networks," IEEE Communications Magazine, vol. 43, no. 3, pp. 123, March 2005.
[32] L. Bui, A. Eryilmaz, R. Srikant, and X. Wu, "Joint asynchronous congestion control and distributed scheduling for multi-hop wireless networks," presented at IEEE INFOCOM'06, Barcelona, Spain, May 2006.
[33] G. M. Butler and K. H. Grace, "Adapting wireless links to support standard network protocols," presented at ICCCN'98, Lafayette, LA, October 1998.
[34] J. Byers, J. Considine, M. Mitzenmacher, and S. Rost, "Informed content delivery across adaptive overlay networks," presented at ACM SIGCOMM'02, Pittsburgh, PA, October 2002.
[35] K. L. Calvert, J. Griffioen, and S. Wen, "Lightweight network support for scalable end-to-end services," presented at ACM SIGCOMM'02, Pittsburgh, PA, October 2002.
[36] K. Chandran, S. Raghunathan, S. Venkatesan, and R. Prakash, "A feedback-based scheme for improving TCP performance in ad hoc wireless networks," IEEE Personal Communications, vol. 8, no. 1, pp. 34, February 2001.
[37] C. J. Chang, T. T. Su, and Y. Y. Chiang, "Analysis of a cutoff priority cellular radio system with finite queueing and reneging/dropping," IEEE/ACM Transactions on Networking, vol. 2, no. 2, pp. 166, April 1994.
[38] E. Chang and A. Zakhor, "Cost analyses for VBR video servers," IEEE MultiMedia, vol. 3, no. 4, pp. 56, December 1996.
[39] K. N. Chang, J. T. Kim, and S. Kim, "An efficient borrowing channel assignment scheme for cellular mobile systems," IEEE Transactions on Vehicular Technology, vol. 47, no. 2, pp. 602, May 1998.
[40] H. Chaskar, T. V. Lakshman, and U. Madhow, "On the design of interfaces for TCP/IP over wireless," presented at Milcom'96, Jordan, MA, October 1996.


[41] S. Chen and N. Yang, "Congestion avoidance based on lightweight buffer management in sensor networks," IEEE Transactions on Parallel and Distributed Systems, vol. 17, no. 9, pp. 934, September 2006.
[42] K. Cho, "Flow-valve: Embedding a safety-valve in RED," presented at IEEE GLOBECOM'99, Rio de Janeiro, Brazil, December 1999.
[43] C.-T. Chou and K. G. Shin, "Smooth handoff with enhanced packet buffering-and-forwarding in wireless/mobile networks," presented at IEEE QShine'05, Orlando, FL, August 2005.
[44] C.-T. Chou, S. N. Shankar, and K. G. Shin, "Achieving per-stream QoS with distributed airtime allocation and admission control in IEEE 802.11e wireless LANs," presented at IEEE INFOCOM'05, Miami, FL, March 2005.
[45] J. Chung and M. Claypool, "Aggregate rate control for TCP traffic management," presented at ACM SIGCOMM'04, Portland, OR, August/September 2004.
[46] D. D. Clark, S. Shenker, and L. Zhang, "Supporting real-time applications in an integrated services packet network: Architecture and mechanism," presented at ACM SIGCOMM'02, Pittsburgh, PA, August/September 2002.
[47] D. D. Clark and W. Fang, "Explicit allocation of best effort packet delivery service," IEEE/ACM Transactions on Networking, vol. 6, no. 4, pp. 362, August 1998.
[48] R. B. Cooper, Introduction to Queueing Theory, 2nd ed. Holland: Elsevier North, 1981.
[49] C. Courcoubetis, G. Kesidis, A. Ridder, and J. Walrand, "Admission control and routing in ATM networks using inferences from measured buffer occupancy," IEEE Transactions on Communications, vol. 43, no. 2/3/4, pp. 1778, 1995.
[50] C. Courcoubetis and R. Weber, Pricing Communication Networks: Economics, Technology and Modelling. New York: Wiley, 2003.
[51] M. Crovella and A. Bestavros, "Self-similarity in World Wide Web traffic: Evidence and possible causes," presented at SIGMETRICS'96, Philadelphia, PA, May 1996.
[52] A. Demers, S. Keshav, and S. Shenker, "Analysis and simulations of a fair queueing algorithm," presented at ACM SIGCOMM'89, Austin, TX, September 1989.
[53] C. Dovrolis and P. Ramanathan, "Proportional differentiated services, part II: Loss rate differentiation and packet dropping," presented at IWQoS'00, Pittsburgh, PA, June 2000.
[54] E. C. Efstathiou, P. A. Frangoudis, and G. C. Polyzos, "Stimulating participation in wireless community networks," presented at IEEE INFOCOM'06, Barcelona, Spain, April 2006.


[55] V. Elek, G. Karlsson, and R. Ronngren, "Admission control based on end-to-end measurements," presented at IEEE INFOCOM'00, Tel Aviv, Israel, March 2000.
[56] K. Fall and S. Floyd, "Simulation-based comparisons of Tahoe, Reno, and SACK TCP," Computer Communication Review, vol. 26, no. 3, pp. 5, July 1996.
[57] F. L. Faucheur, "Protocol extensions for support of diffserv-aware MPLS traffic engineering," IETF RFC 4124, June 2005.
[58] W. Feng, "Network traffic characterization of TCP," presented at MILCOM'00, Los Angeles, CA, October 2000.
[59] W. Feng, K. Shin, D. Kandlur, and D. Saha, "The BLUE active queue management algorithm," IEEE/ACM Transactions on Networking, vol. 10, no. 4, pp. 513, August 2002.
[60] S. Floyd and V. Jacobson, "Random early detection gateways for congestion avoidance," IEEE/ACM Transactions on Networking, vol. 1, no. 4, pp. 397, August 1993.
[61] S. Floyd and V. Jacobson, "Link-share and resource management models for packet networks," IEEE/ACM Transactions on Networking, vol. 3, no. 4, pp. 365, August 1995.
[62] S. Floyd, "RED: Discussions of setting parameters," last accessed: September 2006. [Online]. Available: http://www.icir.org/floyd/REDparameters.txt
[63] S. Floyd and K. Fall, "Promoting the use of end-to-end congestion control in the Internet," IEEE/ACM Transactions on Networking, vol. 7, no. 4, pp. 458, August 1999.
[64] S. Floyd, "Recommendation on using the gentle variant of RED," last accessed: September 2006. [Online]. Available: http://www.icir.org/floyd/red/gentle.html
[65] Fon, "Fon: WiFi everywhere," last accessed: September 2006. [Online]. Available: http://en.fon.com/
[66] Z. Fu, H. Luo, P. Zerfos, S. Lu, L. Zhang, and M. Gerla, "The impact of multihop wireless channel on TCP performance," IEEE Transactions on Mobile Computing, vol. 4, no. 2, pp. 209, March/April 2005.
[67] A. E. Gamal, J. Mammen, B. Prabhakar, and D. Shah, "Throughput-delay trade-off in wireless networks," presented at IEEE INFOCOM'04, Hong Kong, March 2004.
[68] V. Gambiroza, B. Sadeghi, and E. Knightly, "End-to-end performance and fairness in multihop wireless backhaul networks," presented at ACM MobiCom'04, New York, NY, September 2004.


[69] A. J. Ganesh, P. B. Key, D. Polis, and R. Srikant, "Congestion notification and probing mechanisms for endpoint admission control," IEEE/ACM Transactions on Networking, vol. 14, no. 3, pp. 568, June 2006.
[70] R. J. Gibbens and F. P. Kelly, "On packet marking at priority queues," IEEE Transactions on Automatic Control, vol. 47, no. 6, pp. 1016, June 2002.
[71] S. Golestani, "A self-clocked fair queueing scheme for broadband applications," presented at IEEE INFOCOM'94, Toronto, Canada, June 1994.
[72] J. Gronkvist, "Assignment methods for spatial reuse," presented at ACM MobiHoc'00, Boston, MA, August 2000.
[73] P. Gupta and P. R. Kumar, "The capacity of wireless networks," IEEE Transactions on Information Theory, vol. 46, no. 2, pp. 209, March 2000.
[74] H. Han, C. V. Hollot, Y. Chait, and V. Misra, "TCP networks stabilized by buffer-based AQMs," presented at IEEE INFOCOM'04, Hong Kong, March 2004.
[75] H. Harai and M. Murata, "High-speed buffer management for 40 Gb/s-based photonic packet switches," IEEE/ACM Transactions on Networking, vol. 14, no. 1, pp. 191, February 2006.
[76] H. Harai and M. Murata, "Optical fiber-delay-line buffer management in output-buffered photonic packet switch to support service differentiation," IEEE Journal on Selected Areas in Communications, vol. 24, no. 8, pp. 108, August 2006.
[77] Y. Hayel, D. Ros, and B. Tuffin, "Less-than-best-effort services: Pricing and scheduling," presented at IEEE INFOCOM'04, Hong Kong, March 2004.
[78] J. Heinanen, F. Baker, W. Weiss, and J. Wroclawski, "Assured forwarding PHB group," IETF RFC 2597, June 1999.
[79] C. V. Hollot, Y. Liu, V. Misra, and D. Towsley, "Unresponsive flows and AQM performance," presented at IEEE INFOCOM'03, San Francisco, CA, March/April 2003.
[80] D. Hong and S. S. Rappaport, "Traffic model and performance analysis for cellular mobile radio telephone systems with prioritized and nonprioritized handoff procedures," IEEE Transactions on Vehicular Technology, vol. 35, no. 3, pp. 77, August 1986.
[81] J. Hou and Y. Yang, "Mobility-based channel reservation scheme for wireless mobile networks," presented at IEEE WCNC'00, Chicago, IL, September 2000.
[82] J. Hou, J. Yang, and S. Papavassiliou, "Integration of pricing with call admission control to meet QoS requirements in cellular networks," IEEE Transactions on Parallel and Distributed Systems, vol. 13, no. 9, pp. 898, September 2002.


[83] P. R. Jelenkovic, P. Momcilovic, and M. S. Squillante, "Buffer scalability of wireless networks," presented at IEEE INFOCOM'06, Barcelona, Spain, May 2006.
[84] W. S. Jeon and D. G. Jeong, "Rate control for communication networks: Shadow prices, proportional fairness and stability," IEEE Transactions on Vehicular Technology, vol. 55, no. 5, pp. 1582, September 2006.
[85] F. Kelly, A. Maulloo, and D. Tan, "Rate control for communication networks: Shadow prices, proportional fairness and stability," Journal of the Operational Research Society, vol. 49, pp. 237, March 1998.
[86] F. Kelly, "Fairness and stability of end-to-end congestion control," European Journal of Control, vol. 9, pp. 159, March 2003.
[87] H. Kim and J. Hou, "Network calculus based simulation for TCP congestion control: Theorems, implementation and evaluation," presented at IEEE INFOCOM'04, Hong Kong, March 2004.
[88] M. D. Kulavaratharasah and A. H. Aghvami, "Teletraffic performance evaluation of microcellular personal communication networks (PCN's) with prioritized handoff procedures," IEEE Transactions on Vehicular Technology, vol. 48, no. 1, pp. 137-152, January 1999.
[89] L. Lamport, "Password authentication with insecure communication," Communications of the ACM, vol. 24, no. 11, pp. 770, November 1981.
[90] L. Le, J. Aikat, K. Jeffay, and F. D. Smith, "Network calculus based simulation for TCP congestion control: Theorems, implementation and evaluation," presented at ACM SIGCOMM'03, Karlsruhe, Germany, August 2003.
[91] Y. Lee and D. Brown, "Competition, consumer welfare, and the social cost of monopoly," Cowles Foundation Discussion Paper No. 1528, last accessed: September 2006. [Online]. Available: http://cowles.econ.yale.edu/P/cd/d15a/d1528.pdf
[92] S. Lee, G. Narlikar, M. Pal, G. Wilfong, and L. Zhang, "Admission control for multihop wireless backhaul networks with QoS support," presented at IEEE WCNC'06, Las Vegas, NV, April 2006.
[93] T. Li, Y. Iraqi, and R. Boutaba, "Pricing and admission control for QoS-enabled Internet," Computer Networks, vol. 46, no. 1, pp. 87, September 2004.
[94] R. R.-F. Liao and A. T. Campbell, "Dynamic core provisioning for quantitative differentiated services," IEEE/ACM Transactions on Networking, vol. 12, no. 3, pp. 429, June 2004.
[95] Y. Liu and E. Knightly, "Opportunistic fair scheduling over multiple wireless channels," presented at IEEE INFOCOM'03, San Francisco, CA, March/April 2003.


[96] S. H. Low, L. L. Peterson, and L. Wang, "Understanding Vegas: a duality model," Journal of the ACM, vol. 49, no. 2, pp. 207, March 2002.
[97] S. H. Low, F. Paganini, J. Wang, S. Adlakha, and J. C. Doyle, "Dynamics of TCP/RED and a scalable control," presented at IEEE INFOCOM'02, New York, NY, June 2002.
[98] S. H. Low, F. Paganini, and J. C. Doyle, "Internet congestion control," IEEE Control Systems Magazine, vol. 22, no. 1, pp. 28, February 2002.
[99] S. H. Low, "A duality model of TCP and queue management algorithm," IEEE/ACM Transactions on Networking, vol. 11, no. 4, pp. 525, August 2003.
[100] R. Mahajan and S. Floyd, "Controlling high-bandwidth flows at the congested router," ICSI Tech Report TR-01-001, last accessed: September 2006. [Online]. Available: http://www.icir.org/red-pd/
[101] S. I. Maniatis, E. G. Nikolouzou, and I. S. Venieris, "End-to-end QoS specification issues in the converged all-IP wired and wireless environment," IEEE Communications Magazine, vol. 42, no. 6, pp. 80, June 2004.
[102] P. Marbach, "Pricing differentiated services networks: Bursty traffic," presented at IEEE INFOCOM'01, Anchorage, AK, April 2001.
[103] S. McCanne and S. Floyd, "ns-2 (network simulator version 2)," last accessed: September 2006. [Online]. Available: http://www.isi.edu/nsnam/ns/
[104] P. McKenney, "Stochastic fairness queueing," presented at IEEE INFOCOM'90, San Francisco, CA, June 1990.
[105] E. Monteiro, G. Quadros, and F. Boavida, "A scheme for the quantification of congestion in communication services and systems," presented at IEEE SDNE'96, Whistler, BC, Canada, June 1996.
[106] J. Nagle, "On packet switches with infinite storage," IEEE Transactions on Communications, vol. 35, no. 4, pp. 435, April 1987.
[107] M. J. Neely, E. Modiano, and C.-P. Li, "Fairness and optimal stochastic control for heterogeneous networks," presented at IEEE INFOCOM'05, Miami, FL, March 2005.
[108] S. Nelson and L. Kleinrock, "Spatial TDMA: A collision-free multihop channel access protocol," IEEE Transactions on Communications, vol. 33, no. 9, pp. 934-944, September 1985.
[109] K. Nichols, S. Blake, F. Baker, and D. Black, "Definition of the differentiated services field (DS field) in the IPv4 and IPv6 headers," IETF RFC 2474, December 1998.

PAGE 114

[110] D.NiyatoandE.Hossain,Calladmissioncontrolforqosprovisioningin4gwire-lessnetworks:Issuesandapproaches,IEEENetwork,vol.19,no.5,pp.5,September/October2005. [111] J.Padhye,V.Firoiu,D.Towsley,andJ.Kurose,ModelingTCPthroughput:Asimplemodelanditsempiricalvalidation,presentedatACMSIGCOMM'98,Van-couver,Canada,August/September1998. [112] R.Pan,B.Prabhakar,andK.Psounis,CHOKe:Astatelessactivequeueman-agementschemeforapproximatingfairbandwidthallocation,presentedatIEEEINCOCOM'01,Anchorage,AK,April2001. [113] A.ParekhandR.Gallager,Ageneralizedprocessorsharingapproachtoowcon-trolinintegratedservicesnetworks:thesingle-nodecase,IEEE/ACMTransactionsonNetworking,vol.1,no.3,pp.344,June1993. [114] ,Ageneralizedprocessorsharingapproachtoowcontrolinintegratedser-vicesnetworks:themultiple-nodecase,IEEE/ACMTransactionsonNetworking,vol.2,no.2,pp.137,April1994. [115] E.-C.ParkandC.-H.Choi,Proportionalbandwidthallocationindiffservnet-works,presentedatIEEEINFOCOM'04,HongKong,March2004. [116] V.PaxsonandS.Floyd,Wideareatrafc:thefailureofPoissonmodeling,IEEE/ACMTransactionsonNetworking,vol.3,no.3,pp.226,June1995. [117] R.PindyckandD.Rubinfeld,Microeconomics,6thed.UpperSaddleRiver,NJ:PrenticeHall,2004. [118] J.Postel,Internetprotocol,IETFRFC791,September1981. [119] Y.Qian,R.Q.-Y.Hu,andH.-H.Chen,AcalladmissioncontrolframeworkforvoiceoverWLANs,IEEEWirelessCommunications,vol.13,no.1,pp.44,February2006. [120] S.RamabhadranandJ.Pasquale,Stratiedroundrobin:Alowcomplexitypacketschedulerwithbandwidthfairnessandboundeddelay,presentedatACMSIG-COMM'03,Karlsruhe,Germany,August2003. [121] E.Rosen,A.Viswanathan,andR.Callon,Multiprotocollabelswitchingarchitec-ture,IETFRFC3031,January2001. [122] A.A.SalehandJ.M.Simmons,Evolutiontowardthenext-generationcoreopticalnetwork,IEEEJournalofLightwaveTechnology,vol.24,no.9,pp.3303,September2006. [123] N.B.SalemandJ.-P.Hubaux,Afairschedulingforwirelessmeshnetworks,pre-sentedatWiMesh'05,SantaClara,CA,September2005.

PAGE 115

[124] S.Shenker,FundamentaldesignissuesforthefutureInternet,IEEEJournalonSelectedAreasinCommunications,vol.13,no.7,pp.1176,September1995. [125] M.Shin,S.Chong,andI.Rhee,Dual-resourceTCP/AQMforprocessing-constrainednetworks,presentedatIEEEINFOCOM'06,Barcelona,Spain,May2006. [126] M.ShreedharandG.Varghese,Efcientfairqueuingusingdecitroundrobin,IEEE/ACMTransactionsonNetworking,vol.4,no.3,pp.375,June1996. [127] W.Stevens,TCP/IPIllustrated,Volume1:TheProtocols.Boston:Addison-Wesley,1994. [128] I.Stoica,S.Shenker,andH.Zhang,Core-statelessfairqueueing:Achievingap-proximatelyfairbandwidthallocationinhighspeednetworks,presentedatACMSIGCOMM'98,Vancouver,Canada,August/September1998. [129] A.StriegelandG.Manimaran,Packetschedulingwithdelayandlossdifferentia-tion,ComputerCommunications,vol.25,no.1,pp.21,January2002. [130] S.Suri,G.Varghese,andG.Chandramenon,Leapforwardvirtualclock:Anewfairqueueingschemewithguaranteeddelayandthroughputfairness,presentedatIEEEINFOCOM'97,Kobe,Japan,April1997. [131] B.Suter,T.Lakshman,D.Stiliadis,andA.Choudhury,BuffermanagementschemesforsupportingTCPingigabitrouterswithper-owqueueing,IEEEJour-nalonSelectedAreasinCommunications,vol.17,no.6,pp.1159,June1999. [132] C.Tan,M.Gurusamy,andJ.Lui,Achievingproportionallossopticalburstswich-ingwdmnetworks,presentedatIEEEGLOBECOM'04,Dallas,TX,November2004. [133] A.Tang,J.Wang,andS.Low,UnderstandingCHOKe,presentedatIEEEINFO-COM'03,SanFrancisco,CA,March/April2003. [134] H.Vin,P.Goyal,andA.Goyal,Astatisticaladmissioncontrolalgorithmformul-timediaservers,presentedatACMMultiMedia'94,SanFrancisco,CA,October1994. [135] S.Wen,Y.Fang,andH.Sun,CHOKeW:BandwidthdifferentiationandTCPpro-tectionincorenetworks,presentedatMILCOM'05,AtlanticCity,NJ,October2005. [136] S.WenandY.Fang,CHOKeWpatchonns,lastaccessed:September2006.[Online].Available: http://www.ecel.u.edu/wen/chokew.zip [137] ,ConTax:ApricingschemeforCHOKeW,presentedatMILCOM'06,Washington,DC,October2006.

PAGE 116

[138] Z.Xia,W.Hao,I.-L.Yen,andP.Li,AdistributedadmissioncontrolmodelforQoSassuranceinlarge-scalemediadeliverysystems,IEEETransactionsonParallelandDistributedSystems,vol.16,no.12,pp.1143,December2005. [139] J.XuandR.J.Lipton,Onfundamentaltradeoffsbetweendelayboundsandcom-putationalcomplexityinpacketschedulingalgorithms,presentedatACMSIG-COMM'02,Pittsburgh,PA,August2002. [140] G.Xylomernons,G.C.Polyzos,P.Mahonen,andM.Saaranen,TCPperformanceissueoverwirelesslinks,IEEECommunicationsMagazine,vol.39,no.4,pp.5258,April2001. [141] L.Yang,W.Seah,andQ.Yin,ImprovingfairnessamongTCPowscrossingwire-lessadhocandwirednetworks,presentedatACMMobiCom'03,SanDiego,CA,September2003. [142] I.YeomandA.L.N.Reddy,ModelingTCPbehaviorinadifferentiatedservicesnetwork,IEEE/ACMTransactionsonNetworking,vol.9,no.1,pp.31,Febru-ary2001. [143] J.Zeng,L.Zakrevski,andN.Ansari,Computingthelossdifferentiationparame-tersoftheproportionaldifferentiationservicemodel,IEEProceedingsCommuni-cations,vol.153,no.2,pp.177,April2006. [144] Y.ZhangandY.Fang,Asecureauthenticationandbillingarchitectureforwirelessmeshnetworks,toappearinACMWirelessNetworks. [145] D.Zhao,J.Zou,andT.D.Todd,AdmissioncontrolwithloadbalancinginIEEE802.11-basedESSmeshnetworks,presentedatQSHINE'05,Orlando,FL,August2005.


Shushan Wen received the B.E. degree from the Department of Electronic Engineering, University of Electronic Science and Technology of China (UESTC), Chengdu, China, in 1996, and the Ph.D. degree from the Institute of Communication and Information Engineering at the same university in 2002. Currently he is pursuing the Ph.D. degree in the Department of Electrical and Computer Engineering, University of Florida. His research interests include TCP/IP performance analysis, congestion control, admission control, pricing, fairness and quality of service. He is a student member of IEEE.


Permanent Link: http://ufdc.ufl.edu/UFE0016860/00001

Material Information

Title: Traffic control in TCP/IP networks
Physical Description: Mixed Material
Language: English
Creator: Wen, Shushan ( Dissertant )
Fang, Yuguang ( Thesis advisor )
Chen, Shigang ( Reviewer )
Wong, Tan ( Reviewer )
McNair, Janise ( Reviewer )
Publisher: University of Florida
Place of Publication: Gainesville, Fla.
Publication Date: 2006
Copyright Date: 2006

Subjects

Subjects / Keywords: Electrical and Computer Engineering thesis, Ph. D.
Dissertations, Academic -- UF -- Electrical and Computer Engineering

Notes

Abstract: TCP/IP networks have been widely used for wired and wireless communications and will continue to be used in the foreseeable future. In our research, we study the traffic control schemes for wired and wireless networks using the TCP/IP protocol suite. Traffic control is tightly related to Quality of Service (QoS), for which the Differentiated Services (DiffServ) architecture is employed in our research. For core networks, we present a stateless Active Queue Management (AQM) scheme called CHOKeW. With respect to the number of flows being supported, both the memory-requirement complexity and the per-packet-processing complexity for CHOKeW are O(1), which is a significant improvement compared with conventional per-flow schemes. CHOKeW is able to conduct bandwidth differentiation and TCP protection. We combine pricing, admission control, buffer management and scheduling in DiffServ networks by the ConTax-CHOKeW framework. ConTax is a distributed admission controller that works in edge networks. ConTax uses the sum of priority values for all admitted users to measure the network load. By charging a higher price when the network load is heavier, ConTax is capable of controlling the network load efficiently. In addition, network providers can gain more profit, and users have greater flexibility that in turn meets the specific performance requirements of their applications. In Wireless Mesh Networks (WMNs), in order to minimize the number of parties that are involved in the admission control, we categorize WMN devices into groups. In our admission control strategy, Admission control with PRIcing Leverage (APRIL), a group can be a resource user and a resource provider simultaneously. The maximum benefit principle and the nonnegative benefit principle are applied to users and providers, respectively. The resource sharing is transparent to other groups, which gives our scheme good scalability.
APRIL also increases the benefit of each involved group as well as the total benefit for the whole system when more groups are admitted into the network, which becomes an incentive for expanding WMNs.
Subject: active, admission, april, buffer, chokew, congestion, contax, control, ip, management, mesh, network, pricing, queue, tcp, traffic, wireless
General Note: Title from title page of source document.
General Note: Document formatted into pages; contains 117 pages.
General Note: Includes vita.
Thesis: Thesis (Ph. D.)--University of Florida, 2006.
Bibliography: Includes bibliographical references.
General Note: Text (Electronic thesis) in PDF format.

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0016860:00001














TRAFFIC CONTROL IN TCP/IP NETWORKS


By

SHUSHAN WEN















A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2006

































Copyright 2006

by

Shushan Wen

















To my family.















ACKNOWLEDGMENTS

First and foremost, I would like to express my sincere gratitude to my advisor, Prof.

Yuguang Fang, for his invaluable advice and encouragement during the past several years

that I have been with the Wireless Networks Laboratory (WINET). My work would not have

been completed if I had not had his guidance and support.

I would like to acknowledge my other committee members, Prof. Shigang Chen,

Prof. Tan Wong, and Prof. Janise McNair, for serving on my supervisory committee and for

their helpful suggestions and constructive criticism. My thanks also go to Prof. John Shea,

Prof. Dapeng Wu and Prof. Jianbo Gao, for their expert advice.

I would like to extend my thanks to my colleagues in WINET for creating a friendly

environment and for offering great help during my research. They are Dr. Younggoo Kwon,

Dr. Wenjing Lou, Dr. Wenchao Ma, Dr. Wei Liu, Dr. Xiang Chen, Dr. Byung-Seo Kim,

Dr. Xuejun Tian, Dr. Hongqiang Zhai, Dr. Yanchao Zhang, Dr. Jianfeng Wang, Yu Zheng,

Xiaoxia Huang, Yun Zhou, Chi Zhang, Frank Goergen, Pan Li, Rongsheng Huang, and

many others who have offered help with this work.

I owe a special debt of gratitude to my family. Without their selfless care, constant

support and unwavering trust, I would never have achieved what I have.

I would also like to thank the U.S. Office of Naval Research and the U.S. National

Science Foundation for providing the grants.















TABLE OF CONTENTS
                                                                        page

ACKNOWLEDGMENTS ................................ iv

LIST OF TABLES ................................... vii

LIST OF FIGURES ................................... viii

ABSTRACT ....................................... x

CHAPTER

1 INTRODUCTION ................................ 1

2 DIFFERENTIATED BANDWIDTH ALLOCATION AND TCP PROTECTION
  IN CORE NETWORKS ............................ 11

    2.1 Introduction ................................ 11
        2.1.1 TCP Protection .......................... 12
        2.1.2 Bandwidth Differentiation ..................... 14
    2.2 The CHOKeW Algorithm ......................... 16
    2.3 Model ................................... 23
        2.3.1 Some Useful Probabilities ..................... 23
        2.3.2 Steady-State Features of CHOKeW ................ 27
        2.3.3 Fairness .............................. 29
        2.3.4 Bandwidth Differentiation ..................... 33
    2.4 Performance Evaluation .......................... 34
        2.4.1 Two Priority Levels with the Same Number of Flows ...... 35
        2.4.2 Two Priority Levels with Different Number of Flows ...... 38
        2.4.3 Three or More Priority Levels ................... 38
        2.4.4 TCP Protection .......................... 39
        2.4.5 Fairness .............................. 42
        2.4.6 CHOKeW versus CHOKeW-RED ................ 43
        2.4.7 CHOKeW versus CHOKeW-avg ................. 45
        2.4.8 TCP Reno in CHOKeW ...................... 47
    2.5 Implementation Considerations ...................... 48
        2.5.1 Buffer for Flow IDs ........................ 48
        2.5.2 Parallelizing the Drawing Process ................. 50
    2.6 Conclusion ................................. 51

3 CONTAX: AN ADMISSION CONTROL AND PRICING SCHEME FOR
  CHOKEW ................................... 52

    3.1 Introduction ................................ 52
    3.2 The ConTax Scheme ........................... 54
        3.2.1 The ConTax-CHOKeW Framework ............... 54
        3.2.2 The Pricing Model of ConTax .................. 56
        3.2.3 The Demand Model of Users .................. 58
    3.3 Simulations ................................ 60
        3.3.1 Two Priority Classes ....................... 61
        3.3.2 Three Priority Classes ...................... 61
        3.3.3 Higher Arriving Rate ....................... 65
    3.4 Conclusion ................................. 68

4 A GROUP-BASED PRICING AND ADMISSION CONTROL STRATEGY
  FOR WIRELESS MESH NETWORKS ..................... 72

    4.1 Introduction
    4.2 Groups in WMNs
    4.3 The Pricing Model for APRIL
    4.4 The APRIL Algorithm
        4.4.1 Competition Type for APRIL
        4.4.2 Maximum Benefit Principle for Users
        4.4.3 Nonnegative Benefit Principle for Providers
        4.4.4 Algorithm Operations
    4.5 Performance Evaluation
    4.6 Conclusion

5 FUTURE WORK ................................. 93

REFERENCES .................................... 95

BIOGRAPHICAL SKETCH .............................. 106















LIST OF TABLES
Table page

2-1 The State of CHOKeW vs. the Range of L ..................... 22















LIST OF FIGURES
Figure page

1-1 Buffer management and scheduling modules in a router ............. 3

2-1 CHOKeW algorithm ................................ 19

2-2 Algorithm of updating p0 ............................. 20

2-3 Network topology ................................. 20

2-4 RCF of RIO and CHOKeW under a scenario of two priority levels ....... 36

2-5 Aggregate TCP goodput vs. the number of TCP flows under a scenario of
    two priority levels ................................ 37

2-6 Average per-flow TCP goodput vs. w(2)/w(1) when 25 flows are assigned
    w(1) = 1 and 75 flows w(2) ........................... 37

2-7 Aggregate goodput vs. the number of TCP flows under a scenario of three
    priority levels ................................... 39

2-8 Aggregate goodput vs. the number of TCP flows under a scenario of four
    priority levels ................................... 40

2-9 Aggregate goodput vs. the number of UDP flows under a scenario to
    investigate TCP protection ............................ 41

2-10 Basic drawing factor p0 vs. the number of UDP flows under a scenario to
    investigate TCP protection ............................ 41

2-11 Fairness index vs. the number of flows for CHOKeW, RED and BLUE .... 43

2-12 Link utilization of CHOKeW and CHOKeW-RED ................ 43

2-13 Average queue length of CHOKeW and CHOKeW-RED ............ 45

2-14 Aggregate TCP goodput of CHOKeW and CHOKeW-RED ........... 46

2-15 Average queue length of CHOKeW and CHOKeW-avg ............. 46

2-16 Aggregate TCP goodput of CHOKeW and CHOKeW-avg ............ 47

2-17 Link utilization, aggregate goodput (in Mb/s), and the ratio of minimum
    goodput to average goodput of TCP Reno .................... 48

2-18 Extended matched drop algorithm with ID buffer ................ 49

3-1 ConTax-CHOKeW framework. ConTax is in edge routers, while CHOKeW
    is in core routers. ................................. 54

3-2 ConTax algorithm ................................. 59

3-3 Supply-demand relationship when ConTax is used. The left graph is price-
    supply curves, and the right graph price-demand curves for each class. .... 60

3-4 Dynamics of network load (i.e., the sum of priority values of all admitted
    users) in the case of two priority classes ..................... 62

3-5 Number of users that are admitted into the network in the case of two
    priority classes .................................. 63

3-6 Demand of users in the case of two priority classes ............... 64

3-7 Aggregate price in the case of two priority classes ................ 64

3-8 Dynamics of network load in the case of three priority classes ......... 65

3-9 Number of users that are admitted into the network in the case of three
    priority classes .................................. 66

3-10 Demand of users in the case of three priority classes .............. 67

3-11 Aggregate price in the case of three priority classes ............... 67

3-12 Dynamics of network load when arriving rate λ = 6 users/min ......... 68

3-13 Number of users that are admitted into the network when arriving rate
    λ = 6 users/min .................................. 69

3-14 Demand of users when arriving rate λ = 6 users/min .............. 70

3-15 Aggregate price when arriving rate λ = 6 users/min ............... 70

4-1 Tree topology formed by groups in a WMN ................... 76

4-2 Prices vs. bandwidth of BellSouth DSL service plans .............. 78

4-3 Utility ui and price pi vs. resources x ...................... 82

4-4 Three possible shapes of Δb(k)(x) ........................ 86

4-5 Determining x(i) when x(i-1) > 0 ........................ 87

4-6 Simulation network for APRIL .......................... 89

4-7 Available bandwidth, bandwidth allocation, and benefit ............. 90















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

TRAFFIC CONTROL IN TCP/IP NETWORKS

By

Shushan Wen

December 2006

Chair: Yuguang "Michael" Fang
Major Department: Electrical and Computer Engineering

TCP/IP networks have been widely used for wired and wireless communications and

will continue to be used in the foreseeable future. In our research, we study the traffic control

schemes for wired and wireless networks using the TCP/IP protocol suite. Traffic control is

tightly related to Quality of Service (QoS), for which Differentiated Services (DiffServ)

architecture is employed in our research.

For core networks, we present a stateless Active Queue Management (AQM) scheme

called CHOKeW. With respect to the number of flows being supported, both the memory-

requirement complexity and the per-packet-processing complexity for CHOKeW are O(1),

which is a significant improvement compared with conventional per-flow schemes. CHOKeW

is able to conduct bandwidth differentiation and TCP protection.

We combine pricing, admission control, buffer management and scheduling in Diff-

Serv networks by the ConTax-CHOKeW framework. ConTax is a distributed admission

controller that works in edge networks. ConTax uses the sum of priority values for all ad-

mitted users to measure the network load. By charging a higher price when the network

load is heavier, ConTax is capable of controlling the network load efficiently. In addition,









network providers can gain more profit, and users have greater flexibility that in turn meets

the specific performance requirements of their applications.

In Wireless Mesh Networks (WMNs), in order to minimize the number of parties that

are involved in the admission control, we categorize WMN devices into groups. In our ad-

mission control strategy, Admission control with PRIcing Leverage (APRIL), a group can

be a resource user and a resource provider simultaneously. The maximum benefit principle

and the nonnegative benefit principle are applied to users and providers, respectively. The

resource sharing is transparent to other groups, which gives our scheme good scalability.

APRIL also increases the benefit of each involved group as well as the total benefit for

the whole system when more groups are admitted into the network, which becomes an

incentive for expanding WMNs.















CHAPTER 1
INTRODUCTION

Transmission Control Protocol/Internet Protocol (TCP/IP) networks have been widely

used for wired and wireless communications. This is due to their simplicity, scalability, and

robustness. Moreover, in terms of protecting existing investment, it is also

reasonable to expect that the TCP/IP protocol suite will continue to be used in the foresee-

able future. Instead of designing an alternative network infrastructure for new applications,

merging those applications into current TCP/IP networks is more likely to be a practical

strategy of research and development for both academia and industry. In this work, we

study the traffic control schemes for wired and wireless networks based on TCP/IP infras-

tructure.

Traffic control is tightly related to Quality of Service (QoS). The effectiveness of a

traffic control scheme needs to be investigated with a certain QoS architecture. In recent

years, many QoS models, such as Service Marking [8], Label Switching [10, 14], Inte-

grated Services/RSVP [27,28], and Differentiated Services (DiffServ) [25,109] have been

proposed. Among them, DiffServ is able to provide a variety of services for IP packets

based on their per-hop behaviors (PHBs) [25]. Its capability to balance the work-

load between the devices inside a network and those on its edge gives DiffServ great

scalability. We select DiffServ as our QoS architecture model.

In general, routers in DiffServ networks are divided into two categories: edge (bound-

ary) routers and core (interior) routers [128]. Operations that need to maintain per-flow state,

such as packet classification and priority marking, are implemented at edge routers. In the

core networks, packet forwarding speed has to match packet arrival rate in the long run;

otherwise the service quality would deteriorate due to packet drops. Therefore, the design









of core routers must trade off the ability of per-flow control for low complexity that ensures

the forwarding speed.

Since routers buffer those packets that cannot be forwarded immediately, and bottle-

neck routers are the devices that are prone to network congestion, buffer management is

one of the crucial technologies for traffic control.

Buffer management schemes usually use packet dropping or packet marking to control

the traffic. For the best compatibility with TCP, we only discuss packet-dropping based

buffer management in this work. Buffer management mainly focuses on when, where, and

how to drop packets from the buffer. Traditional buffer management drops packets only

when the buffer is full (i.e., tail drop), which causes problems such as low link utilization

and global synchronization; i.e., all TCP flows decrease and increase the sending rates at

the same time. If packet drops happen before the buffer is full, the buffer management is

also called Active Queue Management (AQM).

Random Early Detection (RED) [60] was one of the pioneering works in AQM. It pre-

defines two queue length thresholds, minth and maxth. When the current queue length is

smaller than minth, the network is not regarded as congested, and no packet drop happens.

When the current queue length is larger than minth but smaller than maxth, the network

is regarded as congested, and arriving packets are dropped according to a dropping prob-

ability between 0 and pmax, where pmax (pmax < 1) is a predefined parameter for

adjusting the dropping probability. The larger the queue length is, the higher the dropping

probability will be. If the current queue length is larger than maxth, all arriving pack-

ets are dropped due to the heavy network congestion.1 With fairly low computation costs,

RED can avoid global synchronization and maintain better TCP goodput than traditional

tail drop schemes.



1 Some modifications may set the completely dropping threshold to other values, such
as 2maxth [64], but the basic dropping strategy is still the same.
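The dropping rule described above can be sketched as follows. This is a simplified illustration that uses the instantaneous queue length; RED as specified in [60] instead maintains an exponentially weighted moving average of the queue length and spaces drops using a count of packets accepted since the last drop. The threshold values in the usage note are illustrative.

```python
import random

def red_drop(queue_len, min_th, max_th, p_max):
    """Return True if the arriving packet should be dropped (simplified RED)."""
    if queue_len < min_th:
        return False                      # no congestion: never drop
    if queue_len >= max_th:
        return True                       # heavy congestion: drop every arrival
    # Linear ramp from 0 at min_th up to p_max just below max_th.
    p = p_max * (queue_len - min_th) / (max_th - min_th)
    return random.random() < p
```

For example, with minth = 10, maxth = 30 and pmax = 0.1, an arrival that finds 20 queued packets is dropped with probability 0.05.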














Figure 1-1: Buffer management and scheduling modules in a router


Many AQM algorithms were proposed afterwards. Generally, they were aimed at

improving implementation efficiency (e.g., Random Exponential Marking (REM) [12]),

increasing network stability (e.g., BLUE [59] and REM), protecting standard TCP flows

(e.g., RED with Preferential Dropping (RED-PD) [100] and Flow-Valve [42]), or support-

ing multiple priority classes (e.g., RED with In/Out bit (RIO) [47]).

Here we need to clarify the functional difference between buffer management and

scheduling. The relationship of buffer management and scheduling is illustrated in Fig.

1-1. In a router, the total buffer capacity is considered a buffer pool. Logically, the buffer

pool can be shared by multiple queues. When a packet arrives, buffer management is

the module to decide whether to let the arriving packet enter a queue, and which queue

should be used to hold this packet if multiple queues are available. Buffer management

also controls the queue lengths as well as the buffer occupancy by discarding packets from

the buffer. Thus, a buffer management scheme determines when and from where a packet

should be dropped. On the other hand, scheduling is the module to decide when to forward

a packet and which packet should be forwarded if there is more than one packet in the

buffer. In other words, a scheduler controls the packet forwarding order. Usually, when a

buffer pool consists of multiple logical queues, the scheduling algorithm selects a packet

at one of the heads of the queues to forward.
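The division of labor described above can be made concrete with a minimal sketch: one method plays the buffer-management role (admit or drop an arriving packet), and the other plays the scheduling role (choose the next packet to forward). The single shared queue, tail-drop admission, and FCFS forwarding here are the simplest possible choices for illustration, not the schemes discussed in this work.

```python
from collections import deque

class Router:
    def __init__(self, capacity):
        self.capacity = capacity          # shared buffer pool size, in packets
        self.queue = deque()

    def enqueue(self, packet):
        """Buffer management: decide whether the arrival may enter (tail drop)."""
        if len(self.queue) >= self.capacity:
            return False                  # buffer full: packet is dropped
        self.queue.append(packet)
        return True

    def dequeue(self):
        """Scheduling: decide which packet leaves next (FCFS, head of queue)."""
        return self.queue.popleft() if self.queue else None
```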









The simplest scheduling scheme is First Come First Served (FCFS). If all arriving

packets enter the same queue from the tail and packets are sent out one by one from the

head of the queue, FCFS is the scheduler. Generalized Processor Sharing (GPS) [113,114]

was considered an ideal scheduling discipline which is designed to let the flows share the

bandwidth in proportion to the weights. However, the fluid model of GPS is not amenable

to a practical implementation. One of the popular classes of implementation is schedul-

ing schemes with round robin features, including Round robin [106], Deficit Round Robin

(DRR) [126], Stratified Round Robin [120], Class-Based Queueing (CBQ) [61], etc. They

are able to achieve per-packet complexity O(1), but they tend to produce bursty output and have

memory-requirement complexity O(N). Another popular class is timestamp based sched-

ulers, such as Weighted Fair Queueing (WFQ) [52], WF2Q [23] and Self-Clocked Fair

Queueing [71]. They have performance that is closer to the ideal GPS model, but the per-

packet-processing complexity is usually larger than O(log N) and memory-requirement

complexity is O(N).
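As an example of the round-robin class, the following is a rough sketch in the spirit of Deficit Round Robin [126]: each flow's queue accumulates a quantum of credit per round and may send its head packet only while its deficit covers the packet size, which keeps per-packet work at O(1) while still requiring O(N) state for the per-flow queues and deficit counters. The flow names, quantum, and packet sizes are illustrative.

```python
from collections import deque

def drr_schedule(queues, quantum, rounds):
    """queues: {flow: deque of packet sizes}. Returns the forwarding order."""
    deficit = {f: 0 for f in queues}
    order = []
    for _ in range(rounds):
        for f, q in queues.items():
            deficit[f] += quantum         # each flow earns one quantum per round
            while q and q[0] <= deficit[f]:
                size = q.popleft()
                deficit[f] -= size        # spend credit to send the head packet
                order.append((f, size))
            if not q:
                deficit[f] = 0            # idle flows carry no credit forward
    return order
```

With a quantum of 500 bytes, a flow holding two 300-byte packets and a flow holding one 600-byte packet each get to send over two rounds, roughly in proportion to their shares.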

Recent research of buffer management and scheduling has also been extended to the

following areas. In order to control the bandwidth and CPU resource consumption of in-

network processing, Shin, Chong and Rhee proposed Dual-Resource Queue (DRQ) for

approximating proportional fairness [125]. With respect to optical networks and photonic

packet switches, Harai and Murata proposed a scheme as an expansion of a simple sequen-

tial scheduling [75]. Their scheme uses optical fiber delay lines to construct optical buffers

and the supported data rate is improved due to a parallel and pipelined processing architecture. In the wireless networking area, Alcaraz, Cerdan and Garcia-Haro presented an AQM

algorithm for Radio Link Control (RLC) in order to improve the performance of TCP con-

nections over the Third Generation Partnership Project (3GPP) radio access network [6];

Chen and Yang developed a buffer management scheme for congestion avoidance in sen-

sor networks [41]; Chou and Shin used Last Come First Drop (LCFD) buffer management

policy and post-handoff acknowledgement suppression to enhance performance of smooth









handoff in wireless mobile networks [43]. By contrast, our design of buffer management

focuses on traffic control in DiffServ core networks. In addition, when we evaluate an AQM scheme, its performance has to be investigated together with TCP, taking into account the fact

that the dynamics of TCP have significant interactions with the dropping scheme. There-

fore, bandwidth differentiation and TCP protection are two goals that we want to achieve.

When we evaluate the performance of buffer management, we also need to consider

the combination of the buffer management and a scheduler. Conventionally, RED, BLUE,

RIO, etc., work with FCFS in order to keep the simplicity of implementation and the low

complexity of operations. On the other hand, some AQM schemes, such as Longest Queue

Dropping (LQD) [131], work with WFQ, so that they are able to obtain good isolation

among flows.

As TCP uses packet drops as the network congestion signal, some research [53, 129, 132, 143] took the loss ratio as the measure of resource differentiation to support priority service. These schemes simply assign a higher dropping rate to arriving packets from a flow of lower priority. When they are used in core routers, these schemes face the dilemma of choosing between per-flow control and class-based control (i.e., all flows in the same priority class are treated as a single super-flow). If per-flow control is selected, the memory-requirement complexity becomes O(N), which is unacceptable for a router working in high-speed networks with a myriad of flows. If the class-based strategy is used, all flows in the same class, no matter whether a flow is TCP or non-TCP-friendly, have the same loss ratio, and hence this type of scheme cannot protect TCP flows.

During the evolution of the Internet, the variety of applications has brought greatly heterogeneous requirements to the network. The goal of network design is not to provide perfect QoS to all users, but rather to give different categories of applications a level of service commensurate with their particular needs [122]. In order to let the performance of our buffer management scheme meet the speed requirement of core routers, we combine the buffer management with an FCFS scheduler.









With regard to incorporating the priority services of DiffServ into TCP, two problems

must be solved: TCP protection and bandwidth differentiation. We design a buffer manage-

ment scheme, CHOKeW, to solve these two problems together. In previous work, schemes

either focus only on TCP protection [42, 112] or only on bandwidth differentiation [23, 47, 52]. To

the best of our knowledge, no other scheme prior to CHOKeW has reached both goals. We

discuss CHOKeW in detail in Chapter 2.

In addition to using buffer management and scheduling techniques to conduct resource

allocation in core networks, a practical DiffServ solution also needs to include pricing and

admission control. Pricing is an effective way to assign priority, especially in a gigantic

open environment such as the Internet. As everybody wants to acquire the highest priority

if the costs are the same, it is hard to imagine that a practically prioritized network has no

pricing policies. When pricing is applied, users who are willing to pay a higher price are

able to get better service.

It is known that in a classical economic system, consumers select the amount of re-

source consumption that results in the maximum benefit for themselves [50, 117]. When

users have different utility functions (which correspond to different network applications

that are being used by the users), the optimal resource consumption for these users is also

different from each other. In other words, a good pricing scheme can let the users adjust

their network resource consumption based on their own utility functions. In this way, the

limited network resources can be shared among those users based on their own choices.

On the other hand, we believe that admission control is also essential to maintaining good network service. Generally speaking, an admission control scheme is designed for

maintaining the delivered QoS to different users (or connections, sessions, calls, etc.) at

the target level by limiting the amount of ongoing traffic in the system [110]. The lack of

admission control would strongly devalue the benefit that DiffServ can produce, and the

deterioration of service quality resulting from network congestion cannot be solved only

by devices working in core networks.









Admission control can be centralized or distributed. Early research mainly discussed

centralized admission control [9, 38, 134]. Centralized admission control, like any other centralized technique, has a single point of failure, and the admission requests may overload the

admission control center when it is used in large networks. Distributed admission control

can be further classified into two categories: collaborative schemes and local schemes. The

design of collaborative schemes, similar to that of centralized schemes, needs to take into account network communication overhead. It is possible that the network congestion will

further deteriorate due to the control packets carrying the information for collaboration

when the network is already congested. By contrast, for local schemes, information collec-

tion and decision making are done locally. The challenge of designing a local scheme is to

find the appropriate measurement of the network congestion status.

In the research area of admission control in wireless networks, studies traditionally focused mainly on the trade-off between the new call blocking probability and the change

of handoff blocking probability due to the admission of new users [37, 39, 80, 81, 88].

For systems with hard capacity, i.e., Time-Division Multiple Access (TDMA) and Frequency-Division Multiple Access (FDMA) systems, this type of admission control scheme works very well. However, in systems with soft capacity, such as Code-Division Multiple Access (CDMA), Orthogonal Frequency-Division Multiple Access (OFDM), or systems with a contention-based Medium Access Control (MAC) layer, the relationship between the number of users and the available capacity is much more complicated. A scheme that focuses only on blocking probability is not enough, and this type of scheme cannot alleviate network congestion efficiently. Hou, Yang and Papavassiliou proposed

a pricing based admission control algorithm [82]. Their study attempted to find the optimal

point between utility and revenue in terms of the new call arrival rate, which was affected

by the price that was adjusted dynamically according to the network condition.









Admission control can also be conducted by other techniques. The scheme proposed

by Xia et al. [138] aimed at reducing the response delay of admission decisions for mul-

timedia service. It was experience-based and belonged to aggressive admission control,

where each agent of the admission control system admitted more requests than allocated

bandwidth. Jeon and Jeong [84] combined admission control with packet scheduling. The

scheduler assigned higher priority to real-time packets over best-effort traffic when the real-time packets were approaching their deadlines. Thus the admission control scheme

acted as a congestion controller. Cross-layer design was also used in this scheme. Qian, Hu

and Chen [119] focused on admission control for Voice over IP (VoIP) applications in Wireless LANs. The interactions of the WLAN voice manager, Medium Access Control (MAC)

layer protocols, soft-switches, routers and other network devices were discussed. Ganesh

et al. [69] developed an admission control scheme that was conducted in end users (i.e.,

endpoint admission control) by probing the network and collecting congestion notification.

This scheme requires close cooperation of end users, which is questionable in open networks

such as the Internet.

One of the critical design considerations of admission control is how to let the ad-

mission controller know the network congestion status. Some approaches used a metric

to evaluate a congestion index at each network element to admit new sessions (e.g., Mon-

teiro, Quadros and Boavida [105]). Some others employed packet probing [30, 55] and

aggregation of RSVP messages in the admission controller [16,24].

The ideal location to conduct pricing and admission control is edge networks. Edge

routers do not have to support a great many flows, which enables edge routers to keep

per-flow states without losing much performance. By monitoring the dynamics of the

flows, edge routers can charge a higher price when network congestion occurs, and lower

the price when congestion is alleviated. The higher price reduces the demand of users

to use the network, and new users are unlikely to request the admission if the network is

congested, which in turn gives better service quality to the flows that enter the network.









Based on this strategy, in Chapter 3 we propose a pricing and admission control

scheme that works with CHOKeW. When the network congestion is heavier, our pricing

scheme will increase the price by a value proportional to the congestion measurement, which is equivalent to charging a tax due to network congestion; thus we name our pricing scheme ConTax (Congestion Tax).

The ConTax-CHOKeW framework is a cost-effective DiffServ network solution that includes pricing and admission control (provided by ConTax) plus bandwidth differentiation and TCP protection (supported by CHOKeW). By using the sum of the priority classes of all admitted users as the network load measurement in ConTax, edge routers can work independently. This saves network resources as well as the management cost of periodically sending control messages from core routers to edge routers to update the network congestion status. ConTax adjusts the prices for all priority classes when the network load at an edge router is greater than a threshold. The heavier the load is, the higher the prices will be. The extra price above the basic price, i.e., the congestion tax, is shown to effectively control the number of users admitted into the network. By using simulations, we also show that when more priority classes are supported, the network provider can earn more profit due to a higher aggregate price. On the other hand, a network with a variety of priority services gives users greater flexibility to meet the specific needs of their applications.
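The congestion-tax idea just described can be sketched in a few lines; the linear tax and the parameter names (base_price, threshold, alpha) are simplifying assumptions for illustration, not the exact ConTax formulas of Chapter 3.

```python
def contax_price(base_price, load, threshold, alpha):
    """Illustrative congestion-tax pricing: charge only the basic price
    while the load (sum of priority classes of all admitted users) stays
    at or below the threshold, and add a tax proportional to the excess
    load otherwise."""
    congestion_tax = alpha * max(0, load - threshold)
    return base_price + congestion_tax

# Light load: no tax; heavy load: the price grows with the excess load,
# discouraging new admission requests while the network is congested.
assert contax_price(10.0, load=40, threshold=50, alpha=0.5) == 10.0
assert contax_price(10.0, load=60, threshold=50, alpha=0.5) == 15.0
```

The key property is that the tax vanishes below the threshold, so pricing is neutral under light load and only acts as an admission brake during congestion.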

In addition to buffer management in core networks and pricing and admission control

in edge networks, our work with respect to traffic control also includes pricing and admis-

sion control in Wireless Mesh Networks (WMNs), which has some specific features that

do not exist in wired networks.

One of the main purposes of using WMNs is to swiftly extend the Internet coverage

in a cost-effective manner. The configurations of WMNs, determined by user locations and application features, however, are highly dynamic and flexible. As a result, it

is highly possible for a flow to go through a wireless path consisting of multiple parties









before it reaches the hot spot that is connected to the wired Internet. This feature results

in significant differences between admission control in WMNs and in traditional networks. It is inefficient and infeasible to ask for confirmation from each hop along the route in

WMNs. A group-based one-hop admission control is more realistic than the traditional

end-to-end admission control.

In our research that is discussed in Chapter 4, we propose a group-based pricing and

admission control scheme, in which only two parties are involved in the operations upon

each admission request, which minimizes the number of involved parties and simplifies

the operations. In this scheme, the determination criteria for network admission are the

available resources and the requested resources, which correspond to supply and demand

in an economic system, respectively. The involved parties use the knowledge of utility,

cost, and benefit to calculate the available and requested resources. Therefore, our scheme

is named APRIL (Admission control with PRIcing Leverage). Since the operations are conducted in a distributed manner, there is no need for a single control center. By using APRIL, the

admission of new groups leads to benefit increases of both involved groups, and the total

benefit of the whole system also increases. This characteristic can be used as an incentive

to expand Internet coverage by WMNs. Finally, in Chapter 5, we discuss some future

research issues.















CHAPTER 2
DIFFERENTIATED BANDWIDTH ALLOCATION AND TCP PROTECTION IN
CORE NETWORKS

2.1 Introduction

Problems associated with Quality of Service (QoS) in the Internet have been investigated for years but have not been solved completely. One of the technological challenges

is to introduce a reliable as well as a cost-effective method to support multiple services at

different priority levels within core networks that can support thousands of flows.

In recent years, many QoS models, such as Service Marking [8], Label Switching

[10, 14], Integrated Services/RSVP [27, 28], and Differentiated Services (DiffServ) [25]

have been proposed. Each of these models has its own unique features and flaws.

In Service Marking, a method called "precedence marking" is used to record the pri-

ority value within a packet header. However, the service request is only associated with

each individual packet, and does not consider the aggregate forwarding behavior of a flow. The flow behavior is nevertheless critical to implementing QoS. The second model, Label Switching, including Multi-Protocol Label Switching (MPLS) [121], is designed in a way that supports packet delivery. In this model, finer-grained resource allocation is available, but scalability becomes a problem in large networks. In the worst scenario, it scales in proportion to the square of the number of edge routers. In addition, the basic infrastructure of Label Switching is built on Asynchronous Transfer Mode (ATM) and Frame

Relay technology, and hence it is not straightforward to upgrade current IP routers to Label

Switching routers. Integrated Services/RSVP relies upon traditional datagram networks,

but it also has a scalability problem due to the necessity to establish packet classification

and to maintain the forwarding state of the concurrent reservations on each router. Diff-

Serv is a refinement to Service Marking, and it provides a variety of services for IP packets









based on their per-hop behaviors (PHBs) [25]. Because of its simplicity and scalability,

DiffServ has attracted the most attention.

In general, routers in the DiffServ architecture, similar to those proposed in Core-

Stateless Fair Queueing (CSFQ) [128], are divided into two categories: edge (boundary)

routers and core (interior) routers. Sophisticated operations, such as per-flow classification

and marking, are implemented at edge routers. In other words, core routers do not neces-

sarily maintain per-flow states; instead, they only need to forward the packets according to

the indexed PHB values that are predefined. These values are marked in the Differentiated

Services fields (DS fields) in the packet headers [25, 109]. For example, Assured Forwarding [78] defined a PHB group in which each packet is assigned a level of drop precedence; thus packets of primary importance, as indicated by their PHB values, encounter a relatively low dropping probability. The implementation of an Active Queue Management (AQM) scheme to conduct the dropping, however, is not specified in the framework of Assured Forwarding.

When we design an AQM scheme, the performance has to be investigated along with

TCP, taking into account the fact that almost all error-sensitive data in the Internet are transmitted by TCP and that the dynamics of TCP have unavoidable interactions with the dropping scheme.

In order to incorporate the priority services1 of DiffServ into TCP, the following tech-

nical problems must be solved: (1) TCP protection and (2) bandwidth differentiation. We

discuss them in the following two subsections.

2.1.1 TCP Protection

The importance of TCP protection has been discussed by Floyd and Fall [63]. They

predicted that the Internet would collapse if there was no mechanism to protect TCP flows.



1 As Marbach [102] proposed, a set of priority services can be applied to modeling
and analyzing DiffServ, by mapping the PHBs that receive better services into the higher
priority levels. In the rest of this chapter, we use "priority levels" to represent PHBs for
general purposes.









In the worst case, the routers would be consumed with forwarding packets even though

no packet is useful for receivers. In the meantime, the bandwidth would be completely

occupied by unresponsive senders that do not reduce the sending rates even after their

packets are dropped by the congested routers [63].

Conventional Active Queue Management (AQM) algorithms such as Random Early

Detection (RED) [60] and BLUE [59] cannot protect TCP flows. It is strongly suggested

that novel AQM schemes be designed for TCP protection in routers [29,63]. Cho [42] pro-

posed a mechanism which uses a "flow-valve" filter for RED to punish non-TCP-friendly

flows. However, this approach has to reserve three parameters for each flow, which signif-

icantly increases the memory requirement. Mahajan and Floyd [100] described a simpler

scheme, known as RED with Preferential Dropping (RED-PD), in which the drop history

of RED is used to help identify non-TCP-friendly flows, based on the observation that

flows at higher speeds usually have more packet drops in RED. RED-PD is also a per-flow

scheme and at least one parameter needs to be reserved for each flow to record the number

of drops.

When compared with previous methods including conventional per-flow schemes, the

implementation design of CHOKe, proposed by Pan et al. [112], is simple and it does not

require per-flow state maintenance. CHOKe serves as an enhancement filter for RED in

which a buffered packet is drawn at random and compared with an arriving packet. If both

packets come from the same flow, they are dropped as a pair (hence, we call this "matched

drops"); otherwise, the arriving packet is delivered to RED. Note that a packet that has

passed CHOKe may still be dropped by RED. The validity of CHOKe has been explained

using an analytical model by Tang et al. [133].
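A minimal sketch of CHOKe's comparison step may help; here the RED stage is abstracted into a fixed drop probability and the buffer is modeled as a list of flow IDs, both simplifications of ours rather than details from [112].

```python
import random

def choke_admit(buffer, arriving_flow, red_drop_prob=0.0, rng=random):
    """Sketch of CHOKe's matched-drop filter in front of RED.

    buffer: list of flow IDs, one entry per queued packet.
    Returns True if the arriving packet is admitted to the queue.
    """
    if buffer:
        victim = rng.randrange(len(buffer))
        if buffer[victim] == arriving_flow:
            # matched drop: discard both the drawn packet and the arrival
            del buffer[victim]
            return False
    # otherwise hand the packet to RED (abstracted as a coin flip here)
    if rng.random() < red_drop_prob:
        return False
    buffer.append(arriving_flow)
    return True

# An unresponsive flow that dominates the buffer is very likely to match
# itself, so its arrivals are punished without any per-flow state.
buf = ["A"] * 10
assert choke_admit(buf, "A") is False and len(buf) == 9
```

The more a flow backlogs the buffer, the more often its arrivals match a drawn packet, which is exactly the self-penalizing behavior that protects responsive TCP flows.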

CHOKe is simple and effective for TCP protection, but it only supports best-effort

traffic. In DiffServ networks where flows have different priorities, TCP protection is still an

imperative task. In this chapter, we use the concept of matched drops to design another









scheme called CHOKeW. The letter W represents a function that supports multiple weights

for bandwidth differentiation.

The TCP protection in DiffServ networks has three scenarios: first, protecting TCP

flows in higher priority from high-speed unresponsive flows in lower priority; second, pro-

tecting TCP flows from high-speed unresponsive flows in the same priority; and third,

protecting TCP flows in lower priority from high-speed unresponsive flows in higher prior-

ity. Since CHOKeW is designed for allocating a greater bandwidth share to higher priority

flows, if TCP protection is effective in the third scenario, it should also be effective in the

first and second scenarios. Here we report the results of the third scenario in Subsection

2.4.4 to demonstrate the effectiveness of TCP protection of CHOKeW.

2.1.2 Bandwidth Differentiation

TCP is the most popular transport protocol in the Internet. It is used to transmit error-

sensitive data from applications such as Simple Mail Transfer Protocol (SMTP), HyperText

Transfer Protocol (HTTP), File Transfer Protocol (FTP), or Telnet. For TCP traffic, goodput is a well-known criterion to measure performance. Here we use the same definition

for "goodput" as described by the work of Floyd [63] (p. 459), i.e., "the bandwidth deliv-

ered to the receiver, excluding duplicate packets." There is good evidence that the more

congested the traffic in a TCP flow becomes, the higher the dropping rate of packets will

be and the more duplicate packets produced [7,127]. Since TCP decreases the sending rate

when packet drops are detected, it is reasonable to believe that a TCP flow with a larger

bandwidth share also has higher goodput.

Some research has investigated the relationship between the priority of flows and the

magnitude of bandwidth differentiation. RED with In/Out bit (RIO), presented by Clark

and Fang [47], uses two sets of RED parameters to differentiate high priority traffic (marked

as "in") from low priority traffic (marked as "out"). The parameter set for "in" traffic

usually includes higher queue thresholds, which result in a smaller dropping probability.

In RIO an "out" flow may be starved because there is no mechanism to guarantee the









bandwidth share for low-priority traffic [26], which is a disadvantage of RIO. Our scheme

uses matched drops to control the bandwidth share. When a low-priority TCP flow only

has a small bandwidth share, the responsiveness of TCP can lead to a small backlog for this

flow in the buffer. The packets from this flow are unlikely to be dropped, so this flow will not be starved. Our model explains this feature in Subsection 2.3.1 (Equation (2.10)).

In fact, some scheduling schemes, such as Weighted Fair Queueing (WFQ) [52] and

other packet approximation of the Generalized Processor Sharing (GPS) model [113],

may also support differentiated bandwidth allocation. However, the main disadvantage

of these schemes is that they require constant per-flow state maintenance, which is not

cost-effective in core networks as it causes memory-requirement complexity O(N) and per-

packet-processing complexity usually larger than O(1).2 Our scheme is a stateless scheme,

and the packet processing time is independent of N. Both the memory-requirement com-

plexity and the per-packet-processing complexity of CHOKeW is O(1).

Moreover, CHOKeW uses First-Come-First-Served (FCFS) scheduling, which shortens the tail of the delay distribution [46] and lets packets arriving in a small burst be transmitted in a burst. Many applications in the Internet, such as TELNET, benefit from this

feature. Schedulers similar to WFQ or DRR, however, interweave the packets from differ-

ent queues in the forwarding process, which diminishes this feature.

In this chapter, we focus on differentiated bandwidth allocation as well as TCP pro-

tection in the core networks. Our goal is to use a cost-effective method to provide a flow at

a higher priority level with a larger bandwidth share, and to guarantee that no low-priority

TCP flow is starved even if some high-speed unresponsive flows exist. In addition, by using



2 For example, according to Ramabhadran and Pasquale [120], the per-packet-
processing complexity is O(N) for WFQ, O(log N) for WF2Q [23], and O(log log N)
for Leap Forward Virtual Clock [130]. Deficit Round Robin (DRR) [126] reduces the
per-packet-processing complexity to O(1), but its memory-requirement complexity is still
O(N) when the number of logic queues is comparable to the number of active flows, in
order to obtain desired performance.









CHOKeW, we expect better fairness among the flows with the same priority. To the best of

our knowledge, no other stateless scheme has achieved this goal.

The rest of the chapter is organized as follows. Section 2.2 describes the CHOKeW algorithm. Section 2.3 derives the equations for the steady state, and explains

the features and effectiveness of CHOKeW, such as fairness and bandwidth differentiation.

Section 2.4 presents and discusses the simulation results, including the effect of supporting

two priority levels and multiple priority levels, TCP protection, the performance of TCP

Reno in CHOKeW, a comparison with CHOKeW-RED (CHOKeW with RED module),

and a comparison with CHOKeW-avg (CHOKeW with a module to calculate the aver-

age queue length by EWMA). Section 2.5.1 discusses the issues involving consideration

of implementation, and gives a suggestion of the extended matched drop algorithm for

CHOKeW designed for some special scenarios. We conclude this chapter in Section 2.6.

2.2 The CHOKeW Algorithm

CHOKeW uses the strategy of matched drops presented by CHOKe [112] to protect

TCP flows. Like CHOKe, CHOKeW is a stateless algorithm that is capable of working in

core networks where a myriad of flows are served.

More importantly, CHOKeW supports differentiated bandwidth allocation for traffic

with different priority weights. Each priority weight corresponds to one of the priority

levels; a heavier priority weight represents a higher priority level.

Although CHOKeW borrows the idea of matched drops from CHOKe for TCP pro-

tection, there are significant differences between these two algorithms. First of all, the goal

of CHOKe is to block high-speed unresponsive flows with the help of RED to inform TCP

flows of network congestion, whereas CHOKeW is designed for supporting differentiated

bandwidth allocation with the assistance of matched drops that are also able to protect TCP

flows.

While Pan et al. [112] suggested drawing more than one packet if there are multiple

unresponsive flows, they did not provide further solutions. In CHOKeW, the adjustable









number of draws is not only used for restricting the bandwidth share of high-speed unresponsive flows, but is also used as a signal to inform TCP of the congestion status. In order

to avoid functional redundancy, CHOKeW is not combined with RED since RED is also

designed to inform TCP of congestion. Thus we say that CHOKeW is an independent

AQM scheme, instead of an enhancement filter for RED. To demonstrate that RED is not

an essential component for the effectiveness of CHOKeW, the comparison between the per-

formance of CHOKeW and that of CHOKeW-RED (i.e., CHOKeW with RED) is shown

in Subsection 2.4.6.

In order to determine when to draw a packet (or packets) and how many packets are

possibly drawn from the buffer, we introduce a variable, called the drawing factor, to control the maximum number of draws. For a flow at priority level i (i = 1, 2, ..., M, where M is the number of priority levels supported by the router), the drawing factor is denoted by pi (pi > 0). The value of pi may change with time due to the change of the congestion

status, but at a particular moment, all flows at priority level i are handled by a CHOKeW

router using the same pi. This is how CHOKeW provides better fairness among flows with

the same priority than other conventional stateless schemes such as RED and BLUE. A

CHOKeW router keeps the values of pi instead of per-flow states. Thus CHOKeW pre-

cludes the memory requirement from rocketing up when more flows go through the router.

Roughly speaking, we may interpret pi as the maximum number of random draws

from the buffer upon an arrival from a flow at priority level i. The precise meaning is

discussed below.

Assume that the number of active flows served by a CHOKeW router is N, and the number of priority levels supported by the router is M. Let wi (wi ≥ 1) be the priority weight of flow i (i = 1, 2, ..., N), and w(k) (k = 1, 2, ..., M) be the weight of priority level k. If flow i is at priority level k, then wi = w(k). All flows at the same priority level have the same priority weight. If w(k) > w(l), we say that flows at priority level k have higher priority than flows at priority level l, or simply, priority level k is higher than priority level l.

Let po denote the basic drawing factor. The drawing factor used for flow i is calculated as follows:

pi = po / wi. (2.1)

Since wi ≥ 1, we have po ≥ pi, which means that po also represents the upper bound of the drawing factors.

If wi > wj, then pi < pj, i.e., a flow with higher priority has a smaller drawing factor, and hence has a lower possibility of becoming the victim of matched drops. This is the basic mechanism for supporting bandwidth differentiation in CHOKeW (further explained in Subsection 2.3.4).

The precise meaning of the drawing factor pi depends upon its value. It can be categorized into two cases:

Case 1. When 0 < pi < 1, pi represents the probability of drawing one packet from the buffer at random for comparison.

Case 2. When pi ≥ 1, pi consists of two parts, and we may rewrite pi as

pi = mi + fi, (2.2)

where mi ∈ Z* (the set of nonnegative integers) represents the integral part with the value ⌊pi⌋ (the largest integer ≤ pi), and fi the fractional part of pi. In this case, at most mi or mi + 1 packets in the buffer may be drawn for comparison. Let di denote the maximum number of random draws. We have

Prob[di = mi + 1] = fi,
Prob[di = mi] = 1 - fi.
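The two cases can be folded into one randomized rule for the number of draws di; the Python sketch below is a direct transcription of Equation (2.2) and the probabilities above (the function name is ours).

```python
import math
import random

def max_draws(p_i, rng=random):
    """Number of random draws d_i for drawing factor p_i (p_i > 0):
    with m_i = floor(p_i) and f_i = p_i - m_i, return m_i + 1 with
    probability f_i and m_i with probability 1 - f_i.  For 0 < p_i < 1
    this reduces to a single draw with probability p_i (Case 1)."""
    m_i = math.floor(p_i)
    f_i = p_i - m_i
    return m_i + 1 if rng.random() < f_i else m_i

# An integral drawing factor always yields exactly that many draws,
# and on average the draw count equals p_i.
assert max_draws(2.0) == 2
```

Randomizing the fractional part lets a non-integer pi take effect on average without ever requiring fractional draws.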






















Initialization:
    po ← 0

For each packet pkt arrival:
(1) L ← L + la
(2) Update po (see Fig. 2-2)
(3) IF pkt is at priority level k
        THEN p ← po/w(k), m ← ⌊p⌋, f ← p - m
(4) Generate a random number v ∈ [0, 1)
        IF v < f THEN m ← m + 1
(5) IF L > Lth
        THEN
            WHILE m > 0
                m ← m - 1
                Draw a packet (pkt') from the buffer at random
                IF φa = φb
                    THEN
                        L ← L - la - lb, drop pkt' and pkt
                        RETURN  /* wait for the next arrival */
                    ELSE keep pkt' intact
(6) IF L > Llim  /* buffer is full */
        THEN L ← L - la, drop pkt
        ELSE let pkt enter the buffer

Parameters:
    φa: Flow ID of the arriving packet
    φb: Flow ID of the packet drawn from the buffer
    la: Size of the arriving packet
    lb: Size of the packet drawn from the buffer
    L: Queue length
    Llim: Buffer limit
    Lth: Queue length threshold for activating matched drops

Figure 2-1: CHOKeW algorithm
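For readers who prefer running code, the per-arrival procedure of Fig. 2-1 can be transcribed into Python roughly as follows; the state dictionary, the string return labels, and the guard for an empty buffer are our own conventions, not part of the published algorithm.

```python
import math
import random

def chokew_arrival(state, pkt_flow, pkt_len, weight, update_p0, rng=random):
    """One packet arrival under CHOKeW (transcription of Fig. 2-1).

    state: dict with 'buffer' (list of (flow_id, length) pairs), 'L'
    (queue length in bytes), 'p0', and thresholds 'Lth' and 'Llim'.
    update_p0: callable implementing Fig. 2-2, applied to state.
    Returns 'enter', 'matched_drop', or 'overflow_drop'.
    """
    state['L'] += pkt_len                      # (1)
    update_p0(state)                           # (2) see Fig. 2-2
    p = state['p0'] / weight                   # (3) p = p0 / w(k)
    m = math.floor(p)
    f = p - m
    if rng.random() < f:                       # (4)
        m += 1
    if state['L'] > state['Lth']:              # (5) matched drops active
        while m > 0 and state['buffer']:
            m -= 1
            j = rng.randrange(len(state['buffer']))
            flow_b, len_b = state['buffer'][j]
            if flow_b == pkt_flow:             # matched drop: drop both
                del state['buffer'][j]
                state['L'] -= pkt_len + len_b
                return 'matched_drop'
    if state['L'] > state['Llim']:             # (6) buffer is full
        state['L'] -= pkt_len
        return 'overflow_drop'
    state['buffer'].append((pkt_flow, pkt_len))
    return 'enter'
```

With p0 = 0 every packet enters untouched; as congestion pushes p0 up, a flow with a heavier weight sees a smaller p and hence fewer draws, which is the bandwidth-differentiation mechanism in action.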


















IF L < L-
    THEN
        po ← po - p-
        IF po < 0
            THEN po ← 0
IF L > L+
    THEN po ← po + p+

Parameters:
    L: Queue length
    L+: Queue length threshold for increasing po
    L-: Queue length threshold for decreasing po
        (Lth < L- < L+)
    po: Basic drawing factor
    p+: Step length for increasing po
    p-: Step length for decreasing po

Figure 2-2: Algorithm of updating po
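The update rule of Fig. 2-2 translates into a small pure function (Python; the parameter spellings L_minus/L_plus and the chosen numeric values are ours, purely for illustration):

```python
def update_p0(p0, L, L_minus, L_plus, p_minus, p_plus):
    """Transcription of Fig. 2-2: decrease p0 (never below 0) when the
    queue is short (L < L-), increase it when the queue is long (L > L+),
    and leave it unchanged in the stable region [L-, L+]."""
    if L < L_minus:
        p0 = max(0.0, p0 - p_minus)
    elif L > L_plus:
        p0 = p0 + p_plus
    return p0

# Short queue clamps p0 at 0, the stable region leaves it alone,
# and a long queue raises it by one step.
assert update_p0(0.1, L=50, L_minus=100, L_plus=200, p_minus=0.2, p_plus=0.5) == 0.0
assert update_p0(1.0, L=150, L_minus=100, L_plus=200, p_minus=0.2, p_plus=0.5) == 1.0
assert update_p0(1.0, L=250, L_minus=100, L_plus=200, p_minus=0.2, p_plus=0.5) == 1.5
```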




Figure 2-3: Network topology









The algorithm of drawing packets is described in Fig. 2-1. As CHOKeW does not require per-flow states, in this figure m represents the value of mi (before Step (4)) and di (after Step (4)); we also use p and f to represent pi and fi, respectively.

The congestion status of a router may become either heavier or lighter after a period of

time, since circumstances (e.g., the number of users, the application types, and the traffic

priority) constantly change. In order to cooperate with TCP and to improve the system

performance, an AQM scheme such as RED [60] needs to inform TCP senders to lower

their sending rates by dropping more packets when the network congestion becomes worse.

Unlike CHOKe [112], CHOKeW does not have to work with RED in order to function

properly. Instead, CHOKeW can adaptively update po based on the congestion status. The

updating process is shown in Fig. 2-2, which details Step (2) of Fig. 2-1. The combination

of Fig. 2-1 and Fig. 2-2 provides a complete description of the CHOKeW algorithm.
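The per-packet procedure of Fig. 2-1 can be condensed into a short sketch. This is an illustrative model only, not the ns-2 implementation evaluated later; the function and variable names (choke_w_arrival, and a queue kept as a Python list of (flow_id, size) pairs) are our own.

```python
import random

def choke_w_arrival(queue, flow_id, pkt_size, po, w, L_th, L_lim):
    """One packet arrival in a CHOKeW buffer (sketch of Fig. 2-1).

    queue: list of (flow_id, size) tuples; po: basic drawing factor;
    w: priority weight of the arriving packet's level.
    Returns (admitted, updated queue).
    """
    p = po / w                          # Step (3): drawing factor of this level
    m, f = int(p), p - int(p)
    if random.random() < f:             # Step (4): stochastic rounding of p
        m += 1
    L = sum(size for _, size in queue) + pkt_size   # Step (1)
    if L > L_th:                        # Step (5): matched drops above Lth only
        while m > 0 and queue:
            m -= 1
            j = random.randrange(len(queue))
            if queue[j][0] == flow_id:  # same flow: drop both packets
                del queue[j]
                return False, queue
    if L > L_lim:                       # Step (6): tail drop when buffer is full
        return False, queue
    queue.append((flow_id, pkt_size))
    return True, queue
```

With po = 0 no draws occur, so CHOKeW degenerates to plain tail-drop; once the queue holds only packets of the arriving flow, the first draw always matches and the arrival is dropped together with a buffered packet.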

CHOKeW updates po upon each packet arrival, but activates matched drops only when

the queue length L is longer than the threshold Lth (Step (5) in Fig. 2-1). Three queue

length thresholds are applied to CHOKeW: Lth is the threshold of activating matched drops,

L+ for increasing po, and L− for decreasing po. As the buffer is used to absorb bursty traffic [29],

we set Lth > 0, so that the short bursty traffic can enter the buffer without suffering any

packet drops when the queue length L is less than Lth (although po may be larger than 0

for historical reasons). When L ∈ [L−, L+], the network congestion status is considered

to have been stable and po maintains the same value as before (i.e., the algorithm shown in

Fig. 2-2 does not adjust the value of po). Only when L > L+ is the congestion considered

to be heavy, and po is increased by p+ each time. The alleviation of network congestion

is represented by L < L−, and as an adaptation, po is reduced by p− each time. We keep

Lth < L- so that the matched drops are still active when po starts becoming smaller,

which prevents matched drops from being completely turned off suddenly and endows the

algorithm with higher stability.








Table 2-1: The state of CHOKeW vs. the range of L

Range of L       [0, Lth]          (Lth, L−)         [L−, L+]   (L+, Llim]
Matched drops    Inactive          Active            Active     Active
po               max{0, po − p−}   max{0, po − p−}   po         po + p+


The state of CHOKeW can be described by the activation of matched drops and the

process of updating po, which is further determined by the range the current queue length

L falls into, as shown in Table 2-1. At any time, CHOKeW works in one of the following states:

1. inactive matched drops and decreasing po (unless po = 0), when 0 ≤ L ≤ Lth;

2. active matched drops and decreasing po (unless po = 0), when Lth < L < L−;

3. active matched drops and constant po, when L− ≤ L ≤ L+;

4. active matched drops and increasing po, when L+ < L ≤ Llim.

According to the above explanation, 0 < Lth < L− < L+ < Llim, where Llim is the buffer limit. In the simulations (Section 2.4), we set Lth = 100 packets, L− = 125 packets, L+ = 175 packets, and Llim = 500 packets. The guideline here is similar to RED when

gentle = true [64]; i.e., let the dropping probability increase smoothly, so the queue can

have some time to absorb small bursty traffic.
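The threshold logic of Fig. 2-2 amounts to a three-band controller on the queue length. A minimal sketch, with the simulation values above (L− = 125, L+ = 175, p− = 0.001, p+ = 0.002) as assumed defaults:

```python
def update_po(po, L, L_minus=125, L_plus=175, p_minus=0.001, p_plus=0.002):
    """Adapt the basic drawing factor po (Fig. 2-2): below L- congestion has
    eased, so po shrinks (never below 0); above L+ congestion is heavy, so
    po grows; within [L-, L+] po is left unchanged."""
    if L < L_minus:
        po = max(0.0, po - p_minus)
    elif L > L_plus:
        po = po + p_plus
    return po
```

Because p+ > p−, po ramps up faster under congestion than it decays afterwards, which matches the conservative behavior described above.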

One advantage of using CHOKeW is that it is easily able to prioritize each packet

based on the value of the DS field, without the aid of the flow ID.3 Therefore, when

CHOKeW is applied in core routers, priority becomes a packet feature. In terms of ser-

vice qualities in the core network, packets from different flows shall equally be served if

they have the same priority; on the other hand, packets from the same flow may be treated

differently if their priority is different (e.g., some packets are remarked by edge routers).



3 In CHOKeW, the flow ID is only used to check whether two packets are from the same
flow. This operation (XOR) can be executed efficiently by hardware.









Now we discuss the complexity of CHOKeW. On the basis of the above description,

we know that CHOKeW needs to remember only W(k) for each predefined priority level

k (k = 1, 2, ..., M), instead of per-flow variables for each flow i (i = 1, 2, ..., N). The

complexity of CHOKeW is only affected by M. In DiffServ networks, it is reasonable to

expect that M will never be a large value in the foreseeable future, i.e., M ≪ N. Thus with

respect to N, the memory-requirement complexity as well as the per-packet-processing

complexity of CHOKeW is O(1), while for conventional per-flow schemes, the memory-

requirement complexity is O(N) and the per-packet-processing complexity is usually larger

than O(1) [120].

2.3 Model

In previous work, Tang et al. [133] proposed a model to explain the effectiveness of

CHOKe. Using matched drops, CHOKe produces a "leaky buffer" where packets may be

dropped when they move forward in the queue, which may result in a flow that maintains

many packets in the queue but can obtain only a small portion of bandwidth share. In this

way TCP protection takes effect on high-speed unresponsive flows [133].

For CHOKeW, we need a model to explain not only how to protect TCP flows (as

shown by Tang et al. [133]), but also how to differentiate the bandwidth share.

The network topology shown in Fig. 2-3 is used for our model. In this figure, two

routers, R1 and R2, are connected to N source nodes (Si, i = 1, 2, ..., N) and N destination nodes (Di), respectively. The R1-R2 link, with bandwidth B0 and propagation delay τ0, allows all flows to go through. Bi and τi denote the bandwidth and the propagation delay of each link connected to Si or Di, respectively. As we are interested in the network performance under a heavy load, we always let B0 < Bi, so that the link between the two

routers becomes a bottleneck.

2.3.1 Some Useful Probabilities

In the CHOKeW router, for flow i, let ri be the probability that matched drops occur

at one draw (matching probability for short), which is dependent on the current queue length









L and the number of packets from flow i in the queue (i.e., the packet backlog from flow i,

denoted by Li). The following equation [133] can be also used for CHOKeW:

    ri = Li/L.  (2.3)

Now we focus on the features of matched drops. Assuming that the buffer has an

unlimited size and thus packet dropping is due to matched drops instead of overflow, we

can calculate the probability that a packet from flow i is allowed to enter the queue upon

its arrival, denoted by ηi:

    ηi = (1 − ri)^mi (1 − fi ri),  (2.4)

where

    mi = ⌊pi⌋,
    fi = pi − mi.  (2.5)

The difference between (2.2) and (2.5) is that (2.2) uses m and f rather than mi and fi. When CHOKeW is implemented, the two variables m and f are adequate for all flows because they can be reused for each arrival. In (2.4), (1 − ri)^mi is the probability of no matched drops in the first mi draws. After the completion of the first mi draws, the value of fi stochastically determines whether one more packet is drawn. The probability of no further draw is (1 − fi), and the probability that one more packet is drawn but no matched drop occurs is fi(1 − ri). Therefore the probability that no matched drops occur is (1 − fi) + fi(1 − ri) = 1 − fi ri.

We rewrite (1 − ri)^fi as its Maclaurin series:

    (1 − ri)^fi = 1 − fi ri + O(ri²).









Assuming the core network serves a vast number of flows, it is reasonable to say ri ≪ 1 for each responsive flow i.4 We have (1 − ri)^fi ≈ 1 − fi ri, and (2.4) can be rewritten as ηi = (1 − ri)^(mi + fi), or

    ηi = (1 − ri)^pi.  (2.6)
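The approximation in (2.6) can be checked numerically by simulating the draw process of Fig. 2-1 directly. The Monte-Carlo helper below (our own construction, not part of the model) estimates the admission probability for given ri and pi:

```python
import random

def admit_prob(ri, pi, trials=200_000, seed=42):
    """Estimate the probability that an arriving packet survives the random
    draws of Fig. 2-1: floor(pi) draws, plus one more with probability
    frac(pi); each draw matches the arriving flow with probability ri."""
    rng = random.Random(seed)
    m, f = int(pi), pi - int(pi)
    admitted = 0
    for _ in range(trials):
        draws = m + (1 if rng.random() < f else 0)
        if all(rng.random() >= ri for _ in range(draws)):
            admitted += 1
    return admitted / trials

# For ri << 1 the estimate is close to (1 - ri)**pi, as Equation (2.6) claims.
```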

For a packet from flow i, let si denote the probability that it survives the queue, and qi the dropping probability (either before or after the packet enters the queue).5 We have

    qi = 1 − si.  (2.7)


For each packet arrival from flow i, the probability that it is dropped before entering the queue is 1 − ηi. According to the rule of matched drops, a packet from the same flow, which is already in the buffer, should also be dropped if the arriving packet is dropped. Thus in a steady state we obtain qi = 2(1 − ηi), where 0.5 ≤ ηi ≤ 1. When qi = 1, ηi = 0.5. In other words, when flow i is starved, the router still needs to let half of the arriving packets from flow i enter the queue, and the packets in the queue will be used to match the new arrivals in the future. On the other hand, if ηi < 0.5 temporarily, the number of packets entering the queue cannot compensate the backlog for the packet losses from the queue, which causes the reduction of ri until ri = 1 − 2^(−1/pi) and accordingly ηi = 0.5.

By using (2.6) in it, we get

    qi = 2 − 2(1 − ri)^pi,  (2.8)

and from (2.7),

    si = 2(1 − ri)^pi − 1.  (2.9)



4 We call a flow responsive if it avoids sending data at a high rate when the network is
congested. A TCP flow is responsive, while a UDP flow is not.

5 Note that it is possible for a packet to become a matched-drop victim even after it has
entered the queue.









After a packet enters the queue, one and only one of the following possibilities will happen: 1) it will be dropped from the queue due to matched drops, or 2) it will pass the queue successfully. The passing probability ψi satisfies si = ηi ψi. Using (2.6) and (2.9) in it, we get

    ψi = 2 − (1 − ri)^(−pi).

In order to achieve TCP protection, CHOKeW requires 0 < qi < 1 if flow i uses TCP. Equation (2.8) shows that as long as pi does not exceed a certain range, CHOKeW can guarantee that flow i will not be starved, even if it may only have low priority. This feature offers CHOKeW an advantage over RIO [26], which neither protects TCP flows nor prevents the starvation of low-priority flows.

The algorithm for updating po illustrated in Fig. 2-2 ensures po > 0 after the update. From Step (3) in Fig. 2-1, pi > 0. Using this in (2.8), we get qi > 0, which means that in CHOKeW the lower bound of qi is satisfied automatically.

Now we discuss the range of pi that satisfies the upper bound of qi (i.e., qi < 1). From (2.8), we have

    pi < −1/log2(1 − ri).  (2.10)

From (2.3), ri can also be interpreted as the normalized backlog from flow i. Equation

(2.10) gives the upper bound of pi, which is a fairly large value if ri is small. For instance,

when ri equals 0.01 (imagine 100 flows share the queue length evenly), the upper bound

of pi is 68.9; in other words, the algorithm may draw up to 69 packets before a flow is

starved, but such a large pi is rarely observed due to the control of unresponsive flows.

Formula (2.10) also explains why a flow in CHOKe (where pi = 1) that is not starved must have a backlog shorter than half of the queue length. This result is consistent with the conclusion of Tang et al. [133]. In CHOKeW, for flow i with a certain priority weight wi

and a corresponding drawing factor pi (see Equation (2.1)), the higher the arrival rate, the

larger the backlog, and hence the higher the dropping probability. When the backlog of a









high-rate unresponsive flow reaches the upper bound determined by (2.10), this flow will

be completely blocked by CHOKeW.
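The bound in (2.10) is easy to evaluate; a one-line helper (our own) reproduces the numbers quoted above:

```python
import math

def pi_upper_bound(ri):
    """Largest drawing factor before flow i is starved, Eq. (2.10):
    qi < 1  <=>  (1 - ri)^pi > 1/2  <=>  pi < -1 / log2(1 - ri)."""
    return -1.0 / math.log2(1.0 - ri)

# ri = 0.01 (100 flows sharing the queue evenly) gives a bound of about 68.9;
# ri = 0.5 gives exactly 1, the CHOKe case where a non-starved flow must hold
# less than half of the queue.
```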

2.3.2 Steady-State Features of CHOKeW

In this subsection, we assume that there are N independent flows going through the CHOKeW router, and that the packet arrivals of each flow are Poisson.6

The packets arriving at the router can be categorized into two groups: 1) those that will

be dropped and 2) those that will pass the queue. Let λ denote the average aggregate arrival rate of all flows, and λ′ the average aggregate arrival rate of the packets that will pass the queue.7 Similarly, L denotes the queue length as mentioned above, and L′ the queue length counting only the survivable packets. Compared to the time that it takes to transmit (serve) a packet, the delay to drop a packet is negligible. Little's Theorem [48] shows


    D = L′/λ′,  (2.11)


where D is the average waiting time for packets in the queue.

For each flow i (i = 1, 2, ..., N), let λi be the average arrival rate, and λ′i the average arrival rate counting only the packets that will survive the queue. Then

    λ′i = λi(1 − qi).  (2.12)


As mentioned above, Li denotes the backlog from flow i. Let L′i be the backlog of the survivable packets from flow i. Then these per-flow measurements have the following relationship with their aggregate counterparts: λ = Σi λi, λ′ = Σi λ′i, L = Σi Li, and L′ = Σi L′i.



6 Strictly speaking, the Poisson distribution is not a perfect representation of Internet
traffic; nevertheless, it can provide some insights into the features of our algorithm.
7 The average arrival rate for the packets that will be dropped is equal to λ − λ′.









Based on the PASTA (Poisson Arrivals See Time Averages) property of Poisson arrivals, packets from all flows have the same average waiting time in the queue (i.e., Di = D, i = 1, 2, ..., N). Using Little's Theorem again, for flow i, we get

    D = L′i/λ′i.  (2.13)

Using (2.11) in (2.13),

    L′i/λ′i = L′/λ′.  (2.14)

The average number of packet drops from flow i during a period D is Dλi qi. As packets from a flow are dropped in pairs (one before entering the queue and one after entering the queue), flow i has Dλi qi/2 packets dropped after entering the queue on average. Thus we have

    Li = L′i + Dλi qi/2,
    L = Σ_{j=1}^{N} Lj.

Using (2.8), (2.12), (2.13), and (2.14) in it, we obtain

    Li = Dλi(1 − ri)^pi,
    L = D Σ_{j=1}^{N} λj(1 − rj)^pj.  (2.15)

For flow i, let μi denote the average arrival rate counting only the packets entering the queue. Then μi is determined by λi and ηi, i.e., μi = ηi λi. Considering (2.6), we rewrite (2.15) as

    Li = Dμi,
    L = D Σ_{j=1}^{N} μj,  (2.16)

and rewrite (2.3) as ri = μi / Σ_{j=1}^{N} μj.

Equations (2.16) can be interpreted as Little's Theorem applied to a leaky buffer, where packets may be dropped before reaching the front of the queue, which is not a classical queueing system. From (2.16) we get an interesting result: even in a leaky buffer, the average waiting time is determined by the average queue length of flow i (or the average aggregate









queue length) and the average arrival rate from flow i (or the average aggregate arrival rate)

that only counts the packets entering the queue. The average waiting time is meaningful to

the packets surviving the queue exclusively, whereas packets that are dropped after entering

the queue still contribute to the queue length.

Below is the group of formulas that describe the steady-state features of CHOKeW:

    ri = μi / Σ_{j=1}^{N} μj,  (2.17a)

    qi = 2 − 2(1 − ri)^pi,  (2.17b)

    μi = λi(1 − ri)^pi,  (2.17c)

    λ′i = λi(1 − qi).  (2.17d)
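Given the arrival rates λi and drawing factors pi, (2.17a) and (2.17c) form a fixed point in the backlog fractions ri, which can be found by damped iteration. The sketch below (names are ours) treats the λi as given constants, i.e., it ignores the TCP feedback analyzed in Section 2.3.3:

```python
def steady_state(lams, ps, iters=2000):
    """Solve ri = mu_i / sum(mu), mu_i = lam_i * (1 - ri)**p_i (Eqs. 2.17a,
    2.17c) by damped fixed-point iteration; return (r, q) with
    q_i = 2 - 2*(1 - r_i)**p_i from Eq. (2.17b)."""
    n = len(lams)
    r = [1.0 / n] * n
    for _ in range(iters):
        mu = [lam * (1 - ri) ** p for lam, ri, p in zip(lams, r, ps)]
        s = sum(mu)
        r = [0.5 * ri + 0.5 * m / s for ri, m in zip(r, mu)]  # damped update
    q = [2 - 2 * (1 - ri) ** p for ri, p in zip(r, ps)]
    return r, q
```

For two identical flows with p = 0.5 the iteration returns ri = 0.5 and qi = 2 − 2·0.5^0.5 ≈ 0.586; raising one flow's drawing factor lowers its backlog fraction and raises its dropping probability, previewing the differentiation result of Section 2.3.4.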

2.3.3 Fairness

To demonstrate the fairness of our scheme, we study two TCP flows, i and j (i ≠ j, and i, j ∈ {1, 2, ..., N}). In this subsection, we analyze the fairness under circumstances where flows have the same priority and hence the same drawing factor, denoted by p. The discussion of multiple priority levels is left to the next subsection.

From (2.17a), ri/rj = μi/μj. Using (2.17c) in it, we get

    ri/rj = [λi(1 − ri)^p] / [λj(1 − rj)^p].  (2.18)

Previous research (for example, Floyd and Fall [63] and Padhye et al. [111]) has shown

that the approximate sending rate of TCP is affected by the dropping probability, packet

size, Round Trip Time (RTT), and other parameters such as the TCP version and the speed

of users' computers. In this chapter, we describe TCP sending rate as


    λi = ai R(qi),  (2.19)









where qi is the dropping probability, and ai (ai > 0) denotes the combination of other factors.8 Because the sending rate of TCP decreases as network congestion worsens (indicated

by a higher dropping probability), we have

    ∂R/∂qi < 0.  (2.20)

When flow i and flow j have the same priority, our discussion covers two cases, distinguished by whether ai and aj are equal.

Case 1. ai = aj

When ai = aj, an intuitive solution to (2.18) is λi = λj and ri = rj. From (2.17b) and (2.17d) we have λ′i = λ′j, i.e., flow i and flow j get the same amount of bandwidth share. We will show that this is the only solution.

Let

    G = [λi(1 − ri)^p] / [λj(1 − rj)^p] − ri/rj.

Then any solution to (2.18) is also a solution to G = 0.

In the core network, a router usually has to support a great number of flows and the

downstream link bandwidth is shared among them. It is reasonable to assume that fluctuations in the backlog of a TCP flow do not significantly affect the backlogs of other flows (i.e., ∂rj/∂ri ≈ 0, i ≠ j). Then we have

    ∂G/∂ri = [1/(λj(1 − rj)^p)] [(∂λi/∂ri)(1 − ri)^p − λi p(1 − ri)^(p−1)] − 1/rj.

Because ∂qi/∂ri = 2p(1 − ri)^(p−1) > 0, and ∂λi/∂ri < 0 (derived from (2.19) and (2.20)), for any value of ri (0 < ri < 1), ∂G/∂ri < 0. In other words, G is a monotone



8 The construction of Equation (2.19) results from previous work. In the work of Floyd and Fall [63], for instance, when a TCP session works in the non-delay mode, the sender's rate can be estimated by λs = 1.22 Ps/(Ts √qs), where Ps and Ts denote the packet size and the RTT of this flow, respectively.









decreasing function with respect to ri. As a result, if there is a value of ri satisfying G = 0,

it must be the only solution to G = 0. Thus the only steady state is maintained by λi = λj and ri = rj, when TCP flows i and j have the same priority and the same factor a. This

indicates that CHOKeW is capable of providing good fairness to flow i and flow j.

Case 2. ai ≠ aj

Let (λ′i/λ′j)C and (λ′i/λ′j)R denote the ratio of the average throughput of flow i to that of flow j for CHOKeW and for conventional stateless AQM schemes, respectively. By comparing (λ′i/λ′j)C with (λ′i/λ′j)R, we will show that CHOKeW is able to provide better fairness when ai ≠ aj.

Among conventional stateless AQM schemes, RED determines the dropping probability according to the average queue length, and BLUE calculates the dropping probability from packet loss and link idle events. In a steady state, for AQM schemes such as RED and BLUE, every flow has a similar dropping probability. Let q denote this dropping probability. For all flows, qi = q (i = 1, 2, ..., N). Therefore, flow i has an average throughput of

    λ′i = λi(1 − q) = (1 − q) ai R(q).

Similarly, flow j has an average throughput of

    λ′j = (1 − q) aj R(q).

Thus for RED and BLUE,

    (λ′i/λ′j)R = ai/aj.  (2.21)

If ai > aj, then (λ′i/λ′j)R > 1. Given an AQM scheme, the closer λ′i/λ′j is to 1, the better the fairness. For CHOKeW, if (λ′i/λ′j)C is closer to 1 than (λ′i/λ′j)R, i.e., (λ′i/λ′j)R > (λ′i/λ′j)C > 1, we say CHOKeW is capable of providing better fairness.

From (2.17b), R(qi) in (2.19) can be rewritten as

    R(qi) = R(2 − 2(1 − ri)^p).

When flow i and flow j have the same priority p, we define

    Λ(rk) = R(2 − 2(1 − rk)^p), for k ∈ {i, j}.

Then (2.19) can be rewritten as

    λk = ak Λ(rk), for k ∈ {i, j}.

From (2.17b) and (2.17d),

    (λ′i/λ′j)C = [ai Λ(ri)(2(1 − ri)^p − 1)] / [aj Λ(rj)(2(1 − rj)^p − 1)].  (2.22)


Our goal is to show that the right side of (2.22) is less than ai/aj if ai > aj. From (2.17a) and (2.17c),

    ri = [ai Λ(ri)(1 − ri)^p] / [ai Λ(ri)(1 − ri)^p + Σ_{k=1, k≠i}^{N} μk],

and hence ri increases with ai. More precisely, from

    ∂Λ/∂ri = (∂R/∂qi)(∂qi/∂ri) = (∂R/∂qi) · 2p(1 − ri)^(p−1) < 0,

we see ∂ri/∂ai > 0, which means that when ai > aj, we have ri > rj and Λ(ri) < Λ(rj). Using these results in (2.22) (note also that 2(1 − ri)^p − 1 < 2(1 − rj)^p − 1 when ri > rj), for CHOKeW,

    (λ′i/λ′j)C < ai/aj.  (2.23)









A comparison between (2.21) and (2.23) proves that CHOKeW provides better fair-

ness than RED and BLUE.

2.3.4 Bandwidth Differentiation

For any two TCP flows i and j (i ≠ j), if ai = aj and wi < wj (from (2.1), pi > pj), CHOKeW allocates a smaller bandwidth share to flow i than to flow j, i.e., λ′i < λ′j. This seems to be an intuitive strategy, but we also noticed that the interaction among pi, ri, and qi may cause some confusion. The dropping probability of flow i in CHOKeW, qi, is not only determined by pi but also by ri. Furthermore, the effects of ri and pi are inverse: a larger value of pi results in a larger qi, but at the same time it leads to a smaller ri, which may produce a smaller qi. To clear up the confusion, we only need to show that a larger

value of pi leads to a smaller bandwidth share, λ′i, which is equivalent to showing ∂λ′i/∂pi < 0. From (2.17d), (2.19), and (2.20), we get ∂λ′i/∂qi < 0. From the Chain Rule

    ∂λ′i/∂pi = (∂λ′i/∂qi)(∂qi/∂pi),

we only need to show

    ∂qi/∂pi > 0.
Proof

According to the Chain Rule, we know

    ∂qi/∂pi = (∂qi/∂ri)(∂ri/∂pi) + (∂qi/∂u)(∂u/∂pi),  (2.24)

where u = pi. We introduce the symbol u to distinguish ∂qi/∂u from ∂qi/∂pi: ri is treated as a constant in ∂qi/∂u but not in ∂qi/∂pi. From (2.17b),

    ∂qi/∂u = −2(1 − ri)^pi ln(1 − ri)  (2.25)

and

    ∂qi/∂ri = 2pi(1 − ri)^(pi−1).  (2.26)

According to (2.17a) and (2.17c), we have

    ∂ri/∂pi = γ1 / [Σ_{k=1}^{N} μk + λi pi(1 − ri)^(pi−1)],  (2.27)

where

    γ1 = (1 − ri)^pi [∂λi/∂pi + λi ln(1 − ri)].

Using (2.25), (2.26), and (2.27) in (2.24), we get

    ∂qi/∂pi = [−2(1 − ri)^pi ln(1 − ri) Σ_{k=1}^{N} μk] / γ2 > 0,

where

    γ2 = Σ_{k=1}^{N} μk + λi pi(1 − ri)^(pi−1) − 2(∂λi/∂qi) pi(1 − ri)^(2pi−1).

Since ln(1 − ri) < 0, the numerator is positive, and since ∂λi/∂qi < 0, γ2 is positive as well, which completes the proof.
k=i


2.4 Performance Evaluation

To evaluate CHOKeW in various scenarios and to compare it with some other schemes,

we implemented CHOKeW using ns simulator version 2 [103].

The network topology is shown in Fig. 2-3, where B0 = 1 Mb/s and Bi = 10 Mb/s (i = 1, 2, ..., N). Unless specified otherwise, the link propagation delays τ0 = τi = 1 ms.

The buffer limit is 500 packets, and the mean packet size is 1000 bytes. TCP flows are

driven by FTP applications, and UDP flows are driven by CBR traffic. All TCPs are TCP

SACK except in Subsection 2.4.8 where the performance of TCP Reno flows going through

a CHOKeW router is investigated. Each simulation runs for 500 seconds.

Parameters of CHOKeW are set as follows: Lth = 100 packets, L− = 125 packets, L+ = 175 packets, p+ = 0.002, and p− = 0.001.

Parameters of RED are set as follows: minth = 100 packets, maxth = 200 packets,

gentle = true, the EWMA weight is set to 0.002, and pmax = 0.02 (except in Subsection

2.4.6 where different values of pmax are tested to be compared with CHOKeW).









Parameters of RIO include those for "out" traffic and those for "in" traffic. For "out" traffic, minth_out = 100 packets, maxth_out = 200 packets, and pmax_out = 0.02. For "in" traffic, minth_in = 110 packets, maxth_in = 210 packets, and pmax_in = 0.01 (except in Subsection 2.4.1 where different parameters are tested). Both gentle_out and gentle_in are set to true.

For parameters of BLUE, we set δ1 = 0.0025 (the step length for increasing the dropping probability), δ2 = 0.00025 (the step length for decreasing the dropping probability), and freeze_time = 100 ms.

2.4.1 Two Priority Levels with the Same Number of Flows

One of the main tasks of CHOKeW is supporting bandwidth differentiation for multiple priority levels while working in a stateless manner. We validate the effect of supporting

two priority levels with the same number of flows in this subsection, two priority levels

with different number of flows in the next subsection, and three or more priority levels in

Subsection 2.4.3.

As mentioned in Subsection 2.1.2, flow starvation often happens in RIO but is avoid-

able in CHOKeW. In order to quantify and compare the severity of flow starvation among

different schemes, we record the Relative Cumulative Frequency (RCF) of goodput for

flows at each priority level. For a scheme, the RCF of goodput g for flows at a specific

priority level represents the number of flows that have goodput lower than or equal to g

divided by the total number of flows in this priority.
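The RCF is simply an empirical cumulative distribution over the flows of one priority level; as a sketch (the function name is ours):

```python
def rcf(goodputs, g):
    """Relative Cumulative Frequency: the fraction of flows (in one priority
    level) whose goodput is lower than or equal to g."""
    return sum(1 for x in goodputs if x <= g) / len(goodputs)
```

With 10 starved flows out of 100, rcf(goodputs, 0.0) returns 0.1, the value read off Fig. 2-4 for the "out" traffic of RIO_1.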

We simulate 200 TCP flows. When CHOKeW is used, w(1) = 1 and w(2) = 2 are assigned to equal numbers of flows. When RIO is used, the number of "out" flows is also equal to the number of "in" flows. Fig. 2-4 illustrates the RCF of goodput for flows at each

priority level of CHOKeW and RIO. Here we show three sets of results from RIO, denoted

by RIO_1, RIO_2 and RIO_3, respectively. For RIO_1, we set minth_in = 150 packets and maxth_in = 250 packets; for RIO_2, minth_in = 130 packets and maxth_in = 230 packets; for RIO_3, minth_in = 110 packets and maxth_in = 210 packets.














Figure 2-4: RCF of RIO and CHOKeW under a scenario of two priority levels


From Fig. 2-4, we see that the RCF of goodput zero for "out" traffic of RIO_1 is 0.1.

In other words, 10 of the 100 "out" flows are starved. Similarly, for RIO_2 and RIO_3, 15

and 6 flows are starved respectively. Moreover, it is observed that some "in" flows of RIO

may also have very low goodput (e.g., the lowest goodput of "in" flows of RIO_2 is only

0.00015 Mb/s) due to a lack of TCP protection. Flow starvation is very common in RIO,

but it rarely happens in CHOKeW.

Now we investigate the relationship between the number of TCP flows and the ag-

gregate TCP goodput for each priority level. The results are shown in Fig. 2-5, where

the curves of w(1) = 1 and w(2) = 2 correspond to the two priority levels. Half of the flows are assigned w(1) and the other half assigned w(2). As more flows are going through

the CHOKeW router, the goodput difference between the higher-priority flows and the

lower-priority flows changes owing to the network dynamics, but high-priority flows can

get higher goodput no matter how many flows exist.







































Figure 2-5: Aggregate TCP goodput vs. the number of TCP flows under a scenario of two priority levels

Figure 2-6: Average per-flow TCP goodput vs. w(2)/w(1) when 25 flows are assigned w(1) = 1 and 75 flows w(2)

2.4.2 Two Priority Levels with Different Number of Flows

When the number of flows at each priority level is different, CHOKeW is still capable of differentiating bandwidth allocation on a flow basis. In the following experiment, among the total of 100 TCP flows, 25 flows are assigned the fixed priority weight w(1) = 1.0, and 75 flows are assigned w(2). As w(2) varies from 1.5 to 4.0, the average per-flow goodput is

collected in each priority level and shown in Fig. 2-6. The results are compared with

those of WFQ working in an aggregate flow mode, i.e., in order to circumvent the per-

flow complexity, flows at the same priority level are merged into an aggregate flow before

entering WFQ, and WFQ buffers packets in the same queue if they have the same priority,

instead of using strict per-flow queueing. In WFQ, the buffer pool of 500 packets is split

into two queues: the queue for w(1) has a capacity of 125 packets and the queue for W(2)

has a capacity of 375 packets.

In Fig. 2-6, it is easy to see that the goodput of flows assigned w(2) increases with the value of w(2) for both CHOKeW and WFQ, and accordingly the goodput of flows assigned w(1) decreases. However, when w(2)/w(1) < 3, the average per-flow goodput with w(2) is even lower than that with w(1) for WFQ. In other words, when aggregate flows are used, WFQ does not guarantee higher per-flow goodput for a higher priority level if that level is taken by more flows. For CHOKeW, bandwidth differentiation

works effectively in the whole range of W(2), even though all packets are mixed in one

single queue in a stateless way.

This feature is based on the fact that CHOKeW does not require multiple queues to isolate flows; by contrast, conventional packet approximations of GPS, such as WFQ, cannot avoid the complexity caused by their per-flow nature and at the same time give satisfactory bandwidth differentiation on a flow basis.
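The crossover at w(2)/w(1) = 3 follows from simple arithmetic on the aggregate-mode WFQ model: each level receives bandwidth in proportion to its weight and splits it evenly among its flows. A sketch of that idealized model (our own, not the ns-2 WFQ module):

```python
def wfq_per_flow(w1, w2, n1=25, n2=75, B=1.0):
    """Idealized aggregate-mode WFQ: level k gets B*wk/(w1+w2) of the
    bottleneck, shared evenly by its nk flows; returns per-flow goodput."""
    g1 = B * w1 / (w1 + w2) / n1
    g2 = B * w2 / (w1 + w2) / n2
    return g1, g2

# With n2/n1 = 3, flows at w2 overtake flows at w1 only once w2/w1 > 3,
# which is why the w2 curve in Fig. 2-6 starts below the w1 curve.
```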

2.4.3 Three or More Priority Levels

In situations where multiple priority levels are used, the results are similar to those

of two priority levels, i.e., the flows with higher priority achieve higher goodput. Since











Figure 2-7: Aggregate goodput vs. the number of TCP flows under a scenario of three priority levels


RIO only supports two priority levels, the results are not compared with those of RIO

in this subsection. Fig. 2-7 and Fig. 2-8 demonstrate the aggregate TCP goodput for

each priority level versus the number of TCP flows for three priority levels and for four

priority levels respectively. At each level, the number of TCP flows ranges from 25 to

100. In Fig. 2-7, three priority levels are configured using w(1) = 1.0, w(2) = 1.5, and w(3) = 2.0; w(4) = 2.5 is added to the simulations corresponding to Fig. 2-8 for the fourth priority level. Even though the goodput fluctuates when the number of TCP flows changes,

the flows in higher priority are still able to obtain higher goodput. Furthermore, no flow

starvation is observed.

2.4.4 TCP Protection

TCP protection is another task of CHOKeW. We use UDP flows with a sending rate of 10 Mb/s to simulate misbehaving flows. A total of 100 TCP flows are generated in the simulations. Priority weights w(1) = 1 and w(2) = 2 are assigned to equal numbers of flows. In order to evaluate the performance of TCP protection, the UDP flows are assigned the high priority weight w(2) = 2. As discussed before, if TCP protection works well in a situation where misbehaving flows are in the priority level with w(2), it should also work well when misbehaving flows only have priority lower than w(2). Hence the effectiveness of







Figure 2-8: Aggregate goodput vs. the number of TCP flows under a scenario of four priority levels


TCP protection is validated provided that the high-priority misbehaving flows are blocked

successfully.

The goodput versus the number of UDP flows is shown in Fig. 2-9, where CHOKeW

is compared with RIO. Since no retransmission is provided by UDP flows, goodput is

equal to throughput for UDP. For CHOKeW, even if the number of UDP flows increases

from 1 to 10, the TCP goodput in each priority level (and hence the aggregate goodput of

all TCP flows) is quite stable. In other words, the link bandwidth is shared by these TCP

flows, and the high-speed UDP flows are completely blocked by CHOKeW. By contrast,

the bandwidth share for TCP flows in a RIO router is nearly zero, as high-speed UDP flows

occupy almost all the bandwidth.

Fig. 2-10 illustrates the relationship between po and the number of UDP flows recorded in the simulations of CHOKeW. As more UDP flows start, po increases, but po rarely reaches a value high enough to begin blocking TCP flows before the high-speed UDP flows are

blocked. In this experiment, we also find that few packets of TCP flows are dropped due to

buffer overflow. In fact, when edge routers cooperate with core routers, the high-speed

misbehaving flows will be marked with lower priority at the edge routers. Therefore,




































Figure 2-9: Aggregate goodput vs. the number of UDP flows under a scenario to investigate TCP protection

Figure 2-10: The basic drawing factor po vs. the number of UDP flows under a scenario to investigate TCP protection









CHOKeW should be able to block even more misbehaving flows than shown in Fig. 2-

9, and po should also be smaller than shown in Fig. 2-10.

2.4.5 Fairness

In Subsection 2.3.3, we use the analytical model to explain how CHOKeW can pro-

vide better fairness among the flows in the same priority than conventional stateless AQM

schemes such as RED and BLUE. We validate this attribute by showing simulations in this

subsection. Since RED and BLUE do not support multiple priority levels, and are only

used in best-effort networks, we let CHOKeW work in one priority state (i.e., w(1) = 1 for

all flows) in this subsection.

In the simulation network illustrated in Fig. 2-3, the end-to-end propagation delay of

a flow is set to one of 6, 60, 100, or 150 ms. Each of the four values is assigned to 25% of

the total number of flows.9

When there are only a few (e.g., no more than three) flows under consideration, the

fairness can be evaluated by directly observing the closeness of the goodput or throughput

of different flows. In situations where many flows are active, however, it is hard to measure

the fairness by direct observation; in this case, we introduce the fairness index:

    F = (Σ_{i=1}^{N} gi)² / (N Σ_{i=1}^{N} gi²),  (2.28)

where N is the number of active flows during the observation period, and gi (i = 1, 2, ..., N) represents the goodput of flow i. From (2.28), we know F ∈ (0, 1]. The closer the value

of F is to 1, the better the fairness is. In this chapter, we use gi as goodput instead of

throughput so that the TCP performance evaluation can reflect the successful delivery rate




9 For flow i, the end-to-end propagation delay is 4τi + 2τ0. Since τ0 is constant for
all flows in Fig. 2-3, the propagation delay can be assigned a desired value given an
appropriate τi.












Figure 2-11: Fairness index vs. the number of flows for CHOKeW, RED and BLUE


Figure 2-12: Link utilization of CHOKeW and CHOKeW-RED



more accurately. Fig. 2-11 shows the fairness index of CHOKeW, RED, and BLUE versus

the number of TCP flows ranging from 160 to 280. Even though the fairness decreases as

the number of flows increases for all schemes, CHOKeW still provides better fairness than

both RED and BLUE.
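The fairness index of Eq. (2.28) is straightforward to compute; a minimal sketch (the function name is ours):

```python
def fairness_index(goodputs):
    """Jain's fairness index, Eq. (2.28): F = (sum g_i)^2 / (N * sum g_i^2).
    F is in (0, 1]; F = 1 means all flows receive identical goodput."""
    n = len(goodputs)
    total = sum(goodputs)
    sum_sq = sum(g * g for g in goodputs)
    return (total * total) / (n * sum_sq)

print(fairness_index([1.0, 1.0, 1.0, 1.0]))  # 1.0: perfectly fair
print(fairness_index([4.0, 0.0, 0.0, 0.0]))  # 0.25 = 1/N: one flow starves the rest
```

As the second call shows, the index bottoms out at 1/N when a single flow captures all the goodput, which is why F decreases as bandwidth becomes more skewed.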

2.4.6 CHOKeW versus CHOKeW-RED


An adaptive drawing algorithm has been incorporated into the design of CHOKeW,

where TCP flows can get network congestion notifications from matched drops. Simul-

taneously, the bandwidth share of high-speed unresponsive flows is also brought under











control by matched drops. As a result, the RED module is no longer required in CHOKeW.

In this subsection, we compare the average queue length, link utilization, and TCP good-

put of CHOKeW with those of CHOKeW-RED (i.e., CHOKeW working with the RED

module).

In RED, pmax is the maximum dropping probability under normal circumstances and

should not be set to a value greater than 0.1 [62]. For these simulations, we investigate the

performance of CHOKeW-RED with pmax ranging from 0.02 to 0.1.

The relationship between the number of TCP flows and the values of link utilization,

the average queue length, and the aggregate TCP goodput is shown in Fig. 2-12, Fig. 2-13,

and Fig. 2-14 respectively. In each figure, the performance results of CHOKeW-RED are

indicated by three curves, each corresponding to one of the three values for pmax (0.02,

0.05, and 0.1).

Fig. 2-12 shows that all schemes maintain an approximate link utilization of 96%

(shown by the curves overlapping each other), which is considered sufficient for the In-

ternet. From Fig. 2-13, we can see that the average queue length for CHOKeW-RED

increases as the number of TCP flows increases. In contrast, the average queue length can

be maintained at a steady value within the normal range between L- (125 packets) and L+

(175 packets) for CHOKeW. In situations where the number of TCP flows is larger than

100, CHOKeW has the shortest queue length. Since FCFS (First-Come-First-Served) is

used,10 the shorter the average queue length, the less the average waiting time. Among the

above schemes, CHOKeW also provides the shortest average waiting time for packets in

the queue in most cases. In CHOKeW-RED, if L < L+ is maintained by random drops



10 Logically, FCFS is a scheduling strategy and it can be combined with any buffer man-
agement scheme, such as TD (Tail Drop), RED, BLUE, or CHOKeW. FCFS is the simplest
scheduling algorithm. The original RED [60], for instance, works with FCFS; it may also
work with FQ, but the performance is uncertain. CHOKeW uses FCFS to minimize the
complexity.











Figure 2-13: Average queue length of CHOKeW and CHOKeW-RED


from RED (for example, this may happen when all flows use TCP), po does not have

an opportunity to increase its value (po is initialized to 0, and po ← po + p+ only when

L > L+), which causes a longer queue in CHOKeW-RED.
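The adaptive behavior of po described here can be sketched as follows; the step sizes p_plus and p_minus are illustrative assumptions, not the thesis's tuned values:

```python
def update_p0(p0, L, L_minus=125, L_plus=175, p_plus=0.002, p_minus=0.001):
    """Adapt the basic drawing factor: p0 grows by p_plus only when the
    queue length L exceeds L+, shrinks by p_minus when L falls below L-,
    and never goes negative. Between the thresholds p0 is left alone."""
    if L > L_plus:
        p0 += p_plus
    elif L < L_minus:
        p0 = max(0.0, p0 - p_minus)
    return p0
```

This is why CHOKeW holds the queue between L− and L+: any excursion above L+ raises po and triggers more matched drops, while a drained queue lets po relax back toward zero.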

Besides the link utilization and the average queue length, the aggregate TCP goodput

is always of interest when evaluating TCP performance. The comparison of TCP goodput

between CHOKeW and CHOKeW-RED is shown in Fig. 2-14. In this figure, all of the

schemes have similar results. In addition, when the number of TCP flows is larger than

100, CHOKeW rivals the best of CHOKeW-RED (i.e., pmax = 0.1).

In a special environment, if the network has not experienced heavy congestion and the

queue length L < L+ has been maintained by random drops of RED since the beginning,

CHOKeW-RED cannot achieve the goal of bandwidth differentiation, as po = 0 and thus

pi = pj = 0 even if wi ≠ wj. In other words, CHOKeW works best when it operates independently of RED.

2.4.7 CHOKeW versus CHOKeW-avg

CHOKeW employs an adaptive mechanism to adjust the basic drawing factor po. The

speed of increase and decrease for po is controlled by the step lengths p+ and p−, respec-

tively. Based on the process illustrated in Fig. 2-2, if p+ and p- are set to appropriate val-

ues, po neither responds to network congestion too slowly nor oscillates too dramatically


















Figure 2-14: Aggregate TCP goodput of CHOKeW and CHOKeW-RED




Figure 2-15: Average queue length of CHOKeW and CHOKeW-avg



while queue length fluctuates due to transient bursty traffic. For the purpose of smoothing


the traffic measurement, the combination of p+ and p− is equivalent to the EWMA average


queue length avg in RED.


In this subsection, we compare the average queue length and aggregate TCP goodput


of CHOKeW with that of CHOKeW-avg (i.e., CHOKeW working with avg). The results


are shown in Fig. 2-15 and Fig. 2-16 respectively.


CHOKeW has an average queue length ranging from 147.7 to 150.7 packets and an


aggregate TCP goodput from 0.923 to 0.942 Mb/s; CHOKeW-avg has an average queue













Figure 2-16: Aggregate TCP goodput of CHOKeW and CHOKeW-avg


length ranging from 148.5 to 152.2 packets and an aggregate TCP goodput from 0.919 to

0.944 Mb/s. CHOKeW and CHOKeW-avg have similar results. Considering avg does not

improve the performance, it is not used as an essential parameter for CHOKeW.

2.4.8 TCP Reno in CHOKeW

It is known that if two or more packets are dropped in one TCP window, the sending

rate of TCP Reno recovers more slowly than other TCP versions such as New Reno, Tahoe,

or SACK. In CHOKeW, matched drops always occur in pairs within a flow, resulting in two

packet drops per TCP window.

In this subsection, we show that although a network may have TCP Reno flows,

CHOKeW can still yield good average performance over time. When a TCP-

Reno flow reduces its sending rate after experiencing matched drops, the bandwidth share

deducted from this flow is automatically reallocated to other TCP flows. Thus CHOKeW

can still maintain good link utilization.

On the other hand, when flow i (i = 1, 2, ..., N) has only a small backlog in the

buffer, both the matching probability ri and the dropping probability qi are low (see Eq. (2.3)

and (2.17b)). A TCP-Reno flow that has recently suffered matched drops is unlikely to















Figure 2-17: Link utilization, aggregate goodput (in Mb/s), and the ratio of minimum
goodput to average goodput of TCP Reno


encounter more matched drops in the near future; the sending rate of this flow may increase

for a longer period of time than other flows.

For this simulation, all TCP flows use TCP Reno. We study the link utilization, the

aggregate TCP goodput and the ratio of minimum per-flow TCP goodput to the average

per-flow TCP goodput (goodput ratio in short). Since all the values of the link utilization,

the aggregate goodput (in Mb/s), and the goodput ratio are in the same range of [0, 1], they

are illustrated in a single diagram, i.e., Fig. 2-17.

Comparing Fig. 2-12 and Fig. 2-17, we notice that the link utilization of TCP Reno

is comparable to TCP SACK. The aggregate TCP goodput in Fig. 2-17 is larger than

0.9 Mb/s (the full link bandwidth is 1 Mb/s), which is comparable to the goodput of TCP

SACK in Fig. 2-14. The goodput ratio decreases when more TCP flows share the link, as

the probability that one or two flows receive only a small bandwidth share is higher when more flows exist.

Nonetheless, positive goodput is always maintained and no flows are starved.

2.5 Implementation Considerations

2.5.1 Buffer for Flow IDs

One of the implementation considerations is the buffer size. As discussed in Braden

et al. [29], the objective of using buffers in the Internet is to absorb data bursts and transmit










IF L > Lth
THEN
    Generate a random number v' ∈ [0, 1)
    IF v' < LID/(LID + L)
    THEN
        m ← 2 × m
        WHILE m > 0
            m ← m − 1
            Draw φb from ID buffer at random
            IF φa = φb
            THEN
                L ← L − la, drop pkt
                RETURN /* wait for the next arrival */
GOTO Step (6) in Fig. 2-1

Parameters:
φa: Flow ID of arriving packet
φb: Flow ID drawn from ID buffer at random
la: Size of the arriving packet
LID: Queue length of ID buffer

Figure 2-18: Extended matched drop algorithm with ID buffer


them during subsequent silence. Maintaining normally-small queues does not necessarily

generate poor throughput if appropriate queue management is used; instead, it may help

result in good throughput as well as lower end-to-end delay.

When used in CHOKeW, however, this strategy may cause a problem in which no two

packets in the buffer are from the same flow; this is an extreme case and is unlikely

to happen often, due to the bursty nature of flows. In this case, no matter how large pi is,

packets drawn from the buffer will never match an arriving packet from flow i. In order

to improve the effectiveness of matched drops, we consider a method that uses a FIFO

buffer for storing the flow IDs of forwarded packets in the history.11 When the packets are

forwarded to the downstream link, their flow IDs are also copied into the ID buffer. If the

ID buffer is full, the oldest ID is deleted and its space is reallocated to a new ID. Since the

size of flow IDs is constant and much smaller than packet size, the implementation does not




11 IPv6 has defined a flow-ID field; for an IPv4 packet, the combination of source and
destination addresses can be used as the flow ID.









require additional processing time or large memory space. We generalize matched drops

by drawing flow IDs from a "unified buffer", which includes the ID buffer and the packet

buffer. This modification is illustrated in Fig. 2-18, interpreted as a step inserted between

Step (4) and Step (5) in Fig. 2-1.

Let LID denote the number of IDs in the buffer when a new packet arrives. Draws

can happen either in the regular packet buffer or in the ID buffer. The probabilities that the

draws happen in the ID buffer and the packet buffer are LID/(LID + L) and L/(LID + L), respectively.

If the draws are from the ID buffer, only one packet (i.e., the new arrival) is dropped

each time, and hence the maximum number of draws is set to 2 × pi, implemented by

m ← 2 × m in Fig. 2-18.
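The ID-buffer branch of Fig. 2-18 can be sketched in Python as follows; the function name and the deque-based FIFO are our illustration, not the thesis's implementation:

```python
import random
from collections import deque

def id_buffer_matched_drop(phi_a, id_buffer, m):
    """Draw up to 2*m flow IDs at random from the ID buffer; if any equals
    the arriving packet's flow ID phi_a, the arrival is dropped (True).
    The draw limit is doubled because a match here drops only one packet,
    not a pair as in the packet buffer."""
    ids = list(id_buffer)
    for _ in range(2 * m):
        if ids and random.choice(ids) == phi_a:
            return True
    return False

# FIFO buffer of forwarded packets' flow IDs; with maxlen set, the oldest
# ID is evicted automatically when a new ID is appended to a full buffer.
id_buf = deque([1, 2, 3, 1, 2], maxlen=100)
```

The `deque(maxlen=...)` eviction mirrors the text: when the ID buffer is full, the oldest ID is deleted and its space reused for the new ID.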

2.5.2 Parallelizing the Drawing Process

Another implementation consideration is how to shorten the time of the drawing pro-

cess. When po > 1, CHOKeW may draw more than one packets for comparison upon each

arrival. In Section 2.2, we use a serial drawing process for the description (i.e., packets are

drawn one at a time), to let the algorithm be easily understood. If this process does not

meet the time requirement of the packet forwarding in the router, a parallel method can be

introduced.

Let φa be the flow ID of the arriving packet, and φb(i) (i = 1, 2, ..., m) the flow IDs of the

packets drawn from the buffer. The logical operation of matched drops can be represented

by bitwise XOR (⊕) and bitwise AND (∧) as follows: if

∧_{i=1}^{m} (φa ⊕ φb(i)) = 0 (false),

then conduct matched drops. Note that the above equation is satisfied if any term φa ⊕ φb(i)

is 0 (false), i.e., any φb(i) drawn from the buffer can provoke matched drops if it equals φa.

When the drawing process is applied to the packet buffer, matched drops happen in

pairs. Besides the arriving packet, we can simply drop any one of the buffered packets

whose flow ID φb(i) makes φa ⊕ φb(i) = 0.
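The test above amounts to checking whether any drawn ID XORs with φa to zero; a minimal sketch (function name ours):

```python
def provokes_matched_drop(phi_a, drawn_ids):
    """True when some drawn flow ID equals phi_a, i.e. the term
    (phi_a XOR phi_b) evaluates to 0 for that ID, forcing the overall
    bitwise AND to 0. Each XOR term depends only on one drawn ID, so
    the m comparisons can be evaluated in parallel in hardware."""
    return any((phi_a ^ phi_b) == 0 for phi_b in drawn_ids)

assert provokes_matched_drop(5, [9, 5, 12])        # 5 ^ 5 == 0: match
assert not provokes_matched_drop(5, [9, 6, 12])    # no drawn ID equals 5
```

The parallelism comes from the independence of the XOR terms; only the final AND (or an OR of per-term zero flags) needs to combine them.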









2.6 Conclusion

In this chapter, we proposed a stateless, cost-effective AQM scheme called CHOKeW

that provides bandwidth differentiation among flows at multiple priority levels. Both the

analytical model and the simulations showed that CHOKeW is capable of providing higher

bandwidth share to flows in higher priority, maintaining good fairness among flows in the

same priority, and protecting TCP against high-speed unresponsive flows when network

congestion occurs. The simulations also demonstrated that CHOKeW is able to achieve

efficient link utilization with a shorter queue length than conventional AQM schemes.

Our analytical model was designed to provide insights into the behavior of CHOKeW

and gave a qualitative explanation of its effectiveness. Further understanding of network

dynamics affected by CHOKeW needs more comprehensive models in the future.

Parameter tuning is another area of exploration for future work on CHOKeW. As

indicated in Fig. 2-6, when the priority-weight ratio w(2)/w(1) is higher, the bandwidth

share being allocated to the higher-priority flows will be greater. In the meantime, con-

sidering that the total available bandwidth does not change, the bandwidth share allocated

to the lower-priority flows will be smaller. The value of w(2)/w(1) should be tailored to

the needs of the applications, the network environments, and the users' demands. This

research can also be incorporated with price-based DiffServ networks to provide differen-

tiated bandwidth allocation as well as TCP protection.















CHAPTER 3
CONTAX: AN ADMISSION CONTROL AND PRICING SCHEME FOR CHOKEW

3.1 Introduction

In differentiated service (DiffServ) networks, flows are assigned a Per-Hop Behavior

(PHB) value, packets from a flow carry the value in the header, and routers along the

path handle the packets according to the value [25,78, 109]. In order to model DiffServ

networks, a PHB that corresponds to better service can be mapped into a higher priority

class [102]. Then the PHB value is considered the class ID, which determines the service

quality that routers provide to packets of this class.

In DiffServ networks, similar to the architecture proposed in Core-Stateless Fair Queue-

ing (CSFQ) [128], routers are divided into two categories: edge (boundary) routers and core

(interior) routers. The number of flows going through an edge router is much fewer than

that going through a core router. Sophisticated operations, such as per-flow classification

and marking, are implemented in edge routers. By contrast, core routers do not require

per-flow-state maintenance so that they can serve packets as fast as possible.

Our previous work, CHOKeW [135], is an Active Queue Management (AQM) scheme

designed for core routers. It is a stateless algorithm but able to provide bandwidth differ-

entiation and TCP protection. It is also able to maintain good fairness among flows of the

same priority class.

For readers' convenience, we give a brief introduction of CHOKeW before starting the

discussion of our pricing scheme that will be mainly focused on in this chapter. CHOKeW

reads the priority value of an arriving packet from the packet header. Assuming the arriving

packet has priority i, CHOKeW uses a priority weight wi to calculate the drawing factor pi

from

pi = po/wi.    (3.1)









where po is the basic drawing factor that is adjusted according to the severity of network

congestion. The heavier the network congestion is, the larger the value of po will be. On

the other hand, a higher priority class corresponds to a larger wi and a smaller pi. Thus pi

carries the information of network congestion status as well as the priority of the flow.
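Eq. (3.1) in code form (a sketch; the weight values in the usage lines are examples):

```python
def drawing_factor(p0, w_i):
    """Eq. (3.1): p_i = p0 / w_i. Heavier congestion raises p0; a higher
    priority class has a larger weight w_i and hence a smaller p_i,
    i.e. fewer matched-drop draws for its packets."""
    return p0 / w_i

# With p0 = 2.0, a weight-2 class faces half as many draws as a weight-1 class.
assert drawing_factor(2.0, 2.0) == 1.0
assert drawing_factor(2.0, 1.0) == 2.0
```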

When a packet arrives at a core router, CHOKeW will draw some packets from the

buffer of the core router at random, and compare them with the arriving packet. If a packet

drawn from the buffer is from the same flow as the new arrival, both of them will be

dropped. In CHOKeW, the maximum number of packets that will be drawn from the buffer

upon each arrival is pi, which is mentioned above. The strategy of "matched drops", i.e.,

dropping packets from the same flow in pairs, was designed for CHOKe by Pan et al. [112],

to protect TCP flows. CHOKe works in traditional best effort networks, while CHOKeW

was designed for DiffServ networks that are able to support multiple priority classes. Other

differences between CHOKe and CHOKeW can be found in our previous work [135].

In DiffServ networks, besides a buffer management scheme for core routers, an ad-

mission control strategy for edge routers is also necessary. Otherwise, whenever users

arrive, they can start to send packets into the network, even if the network has been heavily

congested. The lack of admission control strongly devalues the benefit that DiffServ can

produce, and the deterioration of service quality resulting from network congestion cannot

be solved only by CHOKeW.

Pricing is a straightforward and efficient strategy to assign priority classes to different

flows, and to alleviate network congestion by raising the price when the network load

becomes heavier.

When a network is modeled as an economic system, a user who is willing to pay a

higher price will go to a higher priority class and thus will be able to enjoy higher-quality

service. Moreover, by charging a higher price, a network provider can control the number

of users who are willing to pay the price to use the network, which, in return, becomes a

method to protect the service quality of existing users.




















Figure 3-1: ConTax-CHOKeW framework. ConTax is in edge routers, while CHOKeW is
in core routers.

We present a pricing scheme for CHOKeW in this chapter. Our pricing scheme works

in edge networks, which assign higher priority to users who are willing to pay more. When

the network congestion is heavier, our pricing scheme will increase the price by a value

that is proportional to the congestion measurement, which is equivalent to charging a tax

due to network congestion; thus we name our pricing scheme ConTax (Congestion Tax).

The chapter is organized as follows. Our scheme is introduced in Section 3.2, includ-

ing the ConTax-CHOKeW framework in Subsection 3.2.1, the pricing model in Subsection

3.2.2, and the user demand model in Subsection 3.2.3. We use simulations to evaluate the

performance of our scheme in Section 3.3, which covers the experiments for investigating

the control of the number of users that are admitted, the regulation of the network load,

and the gain of aggregate profit for the network service provider. Finally, the chapter is

concluded in Section 3.4.

3.2 The ConTax Scheme

3.2.1 The ConTax-CHOKeW Framework

ConTax is a combination of a pricing scheme and an admission control scheme. It

can be implemented in edge routers, gateways, AAA (authentication, authorization and

accounting) servers, or any devices that are able to control the network access. Without

loss of generality, in this chapter we assume that edge routers are the devices that have a

ConTax module.









The ConTax-CHOKeW framework is illustrated in Fig. 3-1. In this figure, hexagons

represent core routers, circles are edge routers, and diamonds denote users. When users try

to obtain network access, they connect to the neighboring edge routers and look up the price

for a desired priority class, which is provided by ConTax. If the price is under the budget,

they pay the price and get the network access; otherwise, they do not request the network

access.

After user U obtains the network access from edge router E by paying credits p(i)

that corresponds to priority i, U can send packets into the network via E. Each packet

from U is marked with priority i by E before it enters the core network. When a packet

from U arrives at core router C, C uses CHOKeW to decide whether to drop this packet

and another packet belonging to the same flow from the buffer, i.e., to conduct matched

drops. If the arriving packet is not dropped, it will enter the buffer. However, this packet

may still be dropped by CHOKeW before it is forwarded to the next hop, if the sending rate

of this flow is much faster than other flows, since a faster sending rate causes more arrivals

during the same period of time. Thus CHOKeW is able to provide better fairness among

the flows in the same priority class than conventional AQM schemes such as RED [60] and

BLUE [59].

In addition to marking arriving packets with priority values, edge router E is respon-

sible for adjusting prices according to the network congestion. A higher price has a higher

potential to exceed the budget of more users. If the price rises when congestion happens,

fewer users are willing to pay the price to use the network, and consequently, the network

congestion is alleviated. By using the pricing scheme, edge routers can effectively restrict

the traffic that enters the core networks to a reasonable volume so that it will not cause

significant congestion. On the other hand, when the network is less congested, the price

should be reduced appropriately to avoid low link utilization. The pricing function will be

discussed in the following section.









3.2.2 The Pricing Model of ConTax

In ConTax, each priority class has a different price (in credits/unit time, e.g., dol-

lars/minute). For a priority class, the price is composed of two parts, a basic price for each

priority class, and an additional price that reflects the severity of network congestion. We

first look at the basic price,

po(i) = βi + c,    (3.2)

where i denotes the priority of the service. Bear in mind that a flow in a higher priority class

can get more resources (in CHOKeW, we mainly focus on bandwidth). Eq.(3.2) shows the

relationship between the quantity of resources and the price. The price increasing rate is

determined by the slope, β (β > 0), and the initial price is controlled by the y-intercept, c

(c > 0).

In particular, if a flow in priority i gets i times the bandwidth of a flow in priority 1,

formula (3.2) is equivalent to the following continuous function

po(x) = βx + c,    (3.3)

where x (x > 0) represents the resource quantity allocated to this flow.

Eq.(3.3) matches many current pricing strategies of ISPs (for example, the DSL Inter-

net services provided by BellSouth [22]).

One of the features of our pricing model is that the more resources a user obtains, the

less the unit-resource price will be. The unit-resource price is denoted by r = po/x. From

(3.3), we have ∂r/∂x = −c/x². Since c > 0, ∂r/∂x < 0.

Another feature of our pricing model is that the unit-resource price is a convex func-

tion of resources, as ∂²r/∂x² = 2c/x³ > 0. In other words, when a user obtains more resources

from the provider, the unit-resource price will decrease, but the decreasing speed will slow

down, which guarantees that the unit price will never reach zero.
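Both properties can be checked numerically; β and c take the values used later in Section 3.3:

```python
def unit_price(x, beta=3.0e-4, c=7.0e-4):
    """Unit-resource price r = p0(x)/x = beta + c/x for x > 0.
    Decreasing (dr/dx = -c/x^2 < 0) and convex (d2r/dx2 = 2c/x^3 > 0),
    so r falls as x grows but never drops below beta."""
    return beta + c / x

# Decreasing in x, and the decrease itself slows down (convexity):
assert unit_price(1) > unit_price(2) > unit_price(4)
assert (unit_price(1) - unit_price(2)) > (unit_price(2) - unit_price(4))
```

The floor at β is the numerical counterpart of the guarantee that the unit price never reaches zero.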









When we have a congestion measurement, we can use it to build a "congestion tax",

represented by t. Then the final price p is determined by


p(i) = po(i)(1 + γt),    (3.4)

where γ (γ > 0) is a constant that reflects the sensitivity of price to network congestion.

Now one may question how to measure the congestion of core routers appropriately

in an edge router. One solution is to let core router C send a control message to E if C

is congested. The control message informs E of the current congestion status, which may be

determined by queue length in C, such as the congestion measurement used in RED [60].

However, we argue that this is not a cost-effective solution, since C needs to track the

sources of the packets that cause the congestion before sending the message to the corre-

sponding edge routers. The link capacity in core networks is usually larger than that on the

edges, as illustrated in Fig. 3-1, where a thicker line represents a link with higher band-

width. A core router becomes a bottleneck only because many flows go through it. There-

fore, the congestion in core networks results from the traffic generated by many senders. If

this strategy is used, the core router being congested has to send messages (or many copies

of the same message) to different edge routers, and the control messages could worsen the

congestion, since more bandwidth is required to transmit the messages.

We notice that an edge router is able to record the number of users in each priority

class, as these users receive the network admission from the edge router. For an edge

router, let ni denote the number of users in priority class i.1 As a user in higher priority

tends to consume more network resources in the core network and thus likely contributes

more to the network congestion that is happening now or may happen in the future, a



1 Here ni only counts the users sending packets to a core network. Traffic that does not
enter the core network will not cause any congestion in core routers, and hence is neglected
when ni is calculated.









simple method to measure the network congestion, from the viewpoint of the edge router,

is to use Σ_{i=1}^{I} i·n_i, where positive integer I is the highest priority class supported by the

network. In the rest of this chapter, we also call Σ_{i=1}^{I} i·n_i the network load for the edge

router, and it will be used to charge the congestion tax. However, congestion tax should not

be charged when Σ_{i=1}^{I} i·n_i is very small. We introduce a threshold to indicate the beginning

of congestion, denoted by M (M > 0), and the congestion measurement becomes

t = max(0, Σ_{i=1}^{I} i·n_i − M).    (3.5)

Bringing (3.2) and (3.5) into (3.4), we get

p(i) = (βi + c)[1 + γ max(0, Σ_{i=1}^{I} i·n_i − M)].    (3.6)

When Σ_{i=1}^{I} i·n_i < M, no congestion tax is added to the final price, and p(i) = po(i).
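Eq. (3.6) as executable code, using the parameter values given in Section 3.3 (γ = 5.0 × 10⁻³, M = 20, β = 3.0 × 10⁻⁴, c = 7.0 × 10⁻⁴); the dict-based load bookkeeping is our illustration:

```python
def contax_price(i, users, beta=3.0e-4, c=7.0e-4, gamma=5.0e-3, M=20):
    """Eq. (3.6): p(i) = (beta*i + c) * (1 + gamma * max(0, load - M)),
    where load = sum_j j * n_j and `users` maps class j -> user count n_j."""
    load = sum(j * n_j for j, n_j in users.items())
    return (beta * i + c) * (1.0 + gamma * max(0, load - M))

# Below the threshold M the price is just the basic price p0(i) = beta*i + c.
assert abs(contax_price(1, {1: 10}) - 1.0e-3) < 1e-12
# A load of 40 adds a congestion tax of gamma * (40 - 20) = 10% on top of p0(i).
assert abs(contax_price(1, {1: 40}) - 1.1e-3) < 1e-12
```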

Based on the above discussions, the ConTax algorithm is described in Fig. 3-2. In

this scheme, when user U keeps using the network, the price Pu that U needs to pay does

not change, which is determined at the moment when U is admitted into the network. The

philosophy here is to consider Pu a commitment made by the network provider. Some

other pricing based admission control protocols, such as that proposed by Li, Iraqi and

Boutaba [93], use class promotion for existing users to maintain the service quality without

charging a higher price. In our scheme, if the network becomes more congested, new users

will be charged higher prices, which prevents further deterioration of the congestion, and

thus maintains the service quality for existing users to some extent. From Fig. 3-2, an edge

router updates price p(j) for all priority classes j (j = 1, 2, ..., I) only when a new user is

admitted or an existing user completes the communication.

3.2.3 The Demand Model of Users

When the price alters, the user's demand to obtain the network access also changes.

A popular demand model is to regard the demand function as a probability of network

access which is determined by the price difference between the current price and the basic









(1) When user U arrives at edge router E
    U selects a priority class i
    IF p(i) is less than the budget of U
    THEN
        E charges U price Pu = p(i)
        U starts to use the network
        E updates p(j) for all j = 1, 2, ..., I
    ELSE U is not admitted into the network

(2) When user U stops using the network
    E stops charging U price Pu
    E updates p(j) for all j = 1, 2, ..., I

Figure 3-2: ConTax algorithm

price [82, 93]. In ConTax, for priority class i, the user demand d(i) can be modeled as

d(i) = exp(−α(p(i)/po(i) − 1)).    (3.7)

In (3.7), α is the parameter that determines the sensitivity of demand to the price change.
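With (3.7) in the reconstructed form above, admission reduces to a Bernoulli trial with success probability d(i); a sketch (function names ours):

```python
import math
import random

def demand(p, p0, alpha=0.5):
    """Reconstructed Eq. (3.7): d = exp(-alpha * (p/p0 - 1)).
    At the basic price (p = p0) demand is 1; with alpha = 0.5 and a
    doubled price (p = 2*p0), d = exp(-0.5) ~ 0.61, 'close to half'."""
    return math.exp(-alpha * (p / p0 - 1.0))

def admitted(p, p0, alpha=0.5):
    """An arriving user requests and obtains admission with probability d(i)."""
    return random.random() < demand(p, p0, alpha)
```

The α = 0.5 default matches the simulation setup in Section 3.3, where willingness to pay is assumed close to half when the price is doubled.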

From (3.6) and (3.7), we illustrate in Fig. 3-3 the method to determine the value of demand based

on the network load Σ_{i=1}^{I} i·n_i in an edge router. In the market, the network

load can be interpreted as supply, i.e., by charging price p(i) for each priority class

i, the network provider is willing to provide service equal to Σ_{i=1}^{I} i·n_i. The heavier

the load is, the higher the price will be.

We do not draw the supply-demand relationship in one single graph, because in Con-

Tax, the supply is the sum of the load of all priority classes, while the demand takes effect

on each class individually. By using two graphs in Fig. 3-3, given network load Σ_{i=1}^{I} i·n_i,

we can find the price for a priority class according to the supply curve corresponding to the

class in the left graph. Then, in the right graph, the same price maps onto a demand value

that is determined by the corresponding demand curve. The above process is illustrated in

Fig. 3-3 by dashed arrows.








Figure 3-3: Supply-demand relationship when ConTax is used. The left graph is price-
supply curves, and the right graph price-demand curves for each class.


3.3 Simulations

Our simulations are based on ns-2 simulator (version 2.29) [103], and focus on the per-

formance of ConTax in edge router E upon random arrivals of users. In the network shown

in Fig. 3-1, we assume that user arrivals follow a Poisson process. Even though the

validity of the Poisson model for traffic in Wide Area Networks (WANs) has been questioned,

investigation has shown that user-initiated TCP sessions can still be well modeled as

Poisson [116]. In simulations, we let the average arrival rate be λ = 3 users/min un-

less specified otherwise. An arriving user is admitted into the network according to the

probability that is equal to Eq.(3.7). If the new arrival is admitted, the data transmission

will last for a period of time, which is simulated by a random variable, 7. The Cumulative

Distribution Function (CDF) of 7 satisfies Pareto distribution, i.e.,


F (T) 1 ( (3.8)


where k (k > 0) is the shape of Pareto and To is the minimum possible value of T. When

1 < k < 2, which is most frequently used, Pareto distribution has finite mean value

Tok/(k 1) but an infinite variance. Previous study has shown that WWW traffic complies

with Pareto distribution with k c (1.16, 1.5) [51]. We set k = 1.4, and To = 5.714 min

that corresponds to E(r) = 20 min. The simulations are stopped when the number of user

arrivals reaches 1000. The simulation results are compared with that of pricing without

congestion tax. To determine the price given a traffic load (i.e., Σ_{i=1}^{I} i·n_i), we set the

sensitivity parameter γ = 5.0 × 10⁻³ and the threshold of charging a congestion tax M =

20. The demand of users is simulated with parameter α = 0.5, which is based on the

assumption that the willingness to use the network is close to half when the price is doubled

(i.e., p(i) = 2po(i)). Two parameters for calculating the basic price, β and c, are assigned

values 3.0 × 10⁻⁴ dollars/min and 7.0 × 10⁻⁴ dollars/min, respectively.
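The Pareto-distributed holding times can be drawn by inverting (3.8); a sketch with the parameter values above (inverse-CDF sampling is our choice of method):

```python
import random

def pareto_duration(k=1.4, tau0=5.714):
    """Inverse-CDF sample from F(tau) = 1 - (tau0/tau)^k:
    with u uniform on [0, 1), tau = tau0 * (1 - u)**(-1/k).
    For these values the mean tau0*k/(k-1) is 20 min, matching E(tau)
    in the text; the variance is infinite since k < 2."""
    u = random.random()
    return tau0 * (1.0 - u) ** (-1.0 / k)
```

Every sample is at least τ0, and the heavy tail means occasional very long sessions, which is exactly why the sample mean converges slowly in such simulations.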

3.3.1 Two Priority Classes

When two priority classes are supported, each user randomly selects one of the classes.

The network load for edge router E, the number of users admitted, the demand of users,

and the aggregate price changing with time are shown in Fig. 3-4, 3-5, 3-6 and 3-7,

respectively.

When congestion tax is not charged, i.e., p(i) = p_0(i) at all times, an arriving user is always admitted, since the demand is constantly 1. Hence the number of admitted users equals the number of arrivals, which rises whenever a user arrives and decreases when a user completes the communication and leaves. By contrast, when ConTax is applied, after the time reaches 306 sec, the demand drops below 1 because more users have been admitted and the load exceeds M (Fig. 3-6). Some users then choose not to request admission when they arrive. In each class, the number of users admitted into the network is smaller than the number of arrivals, as illustrated by the sub-figures in Fig. 3-5. Accordingly, in Fig. 3-4, the network load under ConTax is also lower than that without congestion tax.

The aggregate price is of concern to the network provider, not only for alleviating network congestion but also for earning profit. From Fig. 3-7, we see that ConTax brings a higher aggregate price and therefore more profit to the network provider. On the other hand, by paying a price slightly higher than the basic price, a user can enjoy network service of better quality due to less congestion.

3.3.2 Three Priority Classes

The results under the scenario of three priority classes are similar to those of two priority classes, i.e., the network load for the edge router shown in Fig. 3-8, the number of users










Figure 3-4: Dynamics of network load in the case of two priority classes


admitted for each priority class in Fig. 3-9, and the demand in Fig. 3-10 are all lower when ConTax is employed than when no congestion tax is used, while the aggregate price is higher (Fig. 3-11).

By comparing Fig. 3-8 with Fig. 3-4, we see that the network load is heavier in the case of three priority classes than in the case of two, since the users in the third priority class consume more resources.

The demand curve in Fig. 3-10 is lower than the curve in Fig. 3-6, resulting from the

higher load in the network that supports more priorities.

The curve of aggregate price in Fig. 3-11 is higher than the curve in Fig. 3-7. In other

words, by supporting more priority classes, the network providers can make more profit,

and users are better served by having more options for their applications.2




2 Generally, a higher quality of service can produce higher utility for users, even though
they pay a higher price. When multiple priority classes are available, users are more likely
to find the optimal point that leads to the highest benefit. At this point the difference
between the utility and the price is maximized [50].







































Figure 3-5: Number of users that are admitted into the network in the case of two priority classes: (a) priority class 1; (b) priority class 2


Figure 3-6: Demand of users in the case of two priority classes












Figure 3-7: Aggregate price in the case of two priority classes










Figure 3-8: Dynamics of network load in the case of three priority classes


3.3.3 Higher Arriving Rate

In Subsections 3.3.1 and 3.3.2, the average arrival rate of users is 3 users/min. Since the traffic volume in a network varies with time and location, we are also interested in the performance of ConTax under a different arrival rate. In this subsection, we let λ = 6 users/min and repeat the simulations of Subsection 3.3.1. The results are shown in Figs. 3-12 to 3-15.

First of all, by comparing the results for the same priority class, we can see that the difference between the number of admitted users and the number of arrivals is more significant in Fig. 3-13 than in Fig. 3-5. The reason is that the network load tends to be heavier when more users arrive during the same period of time, which leads to a smaller demand, as demonstrated by the comparison between Fig. 3-14 and Fig. 3-6. Correspondingly, the advantage of using a congestion tax with respect to the network load, shown by the difference between the two curves in Fig. 3-12, is more significant than that in Fig. 3-4.

The aggregate price curve (Fig. 3-15) has a shape similar to the curve in Fig. 3-7, but it rises faster when λ = 6 users/min, with an increasing speed that is also roughly doubled.














Figure 3-9: Number of users that are admitted into the network in the case of three priority classes: (a) priority class 1; (b) priority class 2; (c) priority class 3



































Figure 3-10: Demand of users in the case of three priority classes












Figure 3-11: Aggregate price in the case of three priority classes










Figure 3-12: Dynamics of network load when arrival rate λ = 6 users/min


3.4 Conclusion

The ConTax-CHOKeW framework is a cost-effective DiffServ network solution that includes pricing and admission control (provided by ConTax) plus bandwidth differentiation and TCP protection (supported by CHOKeW). By using the sum of the priority classes of all admitted users as the network-load measurement in ConTax, edge routers can work independently. This saves network resources as well as the management cost of periodically sending control messages from core routers to edge routers to update the network congestion status.

ConTax adjusts the prices for all priority classes when the network load at an edge router exceeds a threshold. The heavier the load, the higher the prices. The extra charge above the basic price, i.e., the congestion tax, proves effective in controlling the number of users that are admitted into the network.

By using simulations, we also show that when more priority classes are supported, the network provider can earn more profit due to a higher aggregate price. On the other hand, a network with a variety of priority services gives users greater flexibility, which in turn meets the specific needs of their applications.





















Figure 3-13: Number of users that are admitted into the network when arrival rate λ = 6 users/min: (a) priority class 1; (b) priority class 2
















Figure 3-14: Demand of users when arrival rate λ = 6 users/min


Figure 3-15: Aggregate price when arrival rate λ = 6 users/min








When the arrival rate of users rises, the network load also increases, and the demand decreases accordingly. This may result in a more noticeable performance difference between ConTax and a pricing scheme that does not charge congestion tax.















CHAPTER 4
A GROUP-BASED PRICING AND ADMISSION CONTROL STRATEGY FOR
WIRELESS MESH NETWORKS

4.1 Introduction

Wireless Mesh Networks (WMNs) have attracted a great deal of attention of both

academia and industry. One of the main purposes of using WMNs is to swiftly extend the

Internet coverage in a cost-effective manner. The configurations of WMNs, determined

by the user locations and application features, however, are highly dynamic and flexible. As a result, it is quite possible for a flow to traverse a wireless path consisting of multiple parties before it reaches the hot spot that is connected to the wired Internet. This feature makes admission control for WMNs significantly different from that for traditional networks.

Admission control is closely related to Quality of Service (QoS) profiles and pricing

schemes. In traditional networks, if only best-effort traffic exists, flat-rate pricing is nor-

mally the most straightforward and practical choice. Under this scenario, users are charged

a constant monthly fee or a constant number of credits for each time period determined by

a similar billing plan, based on a contract agreed by the users and the service provider. We

also notice that only two parties are involved in the admission control procedure in conventional networks, i.e., an ISP who provides the network access and a user who submits the admission request.

Even though flat-rate pricing is easy to use for traditional best-effort traffic, if the

network is designed to support multiple priority levels, such as Label Switching [10, 14], Integrated Services/Resource ReSerVation Protocol (IntServ/RSVP) [27, 28], or Differentiated Services (DiffServ) [25], it is necessary to differentiate priorities in the pricing policy.

Accordingly, admission control needs to consider the available resources along the path that









a flow will follow. For example, in ATM networks, an admission decision (see, e.g., Courcoubetis et al. [49]) is typically made after the connection request completes a round trip and shows that resources are available at each hop.1

Because multiple parties may be on the path, the design of an admission control scheme for WMNs differs from that of traditional admission control schemes. It is inefficient, and often infeasible, to ask for confirmation from each hop along the route in a WMN. As the network structures of WMNs are highly dynamic, group-based one-hop admission control is more realistic than traditional end-to-end admission control.

On the other hand, compared with wireless ad-hoc networks, the mobility of mesh routers is usually minimal, which enables inexpensive maintenance, reliable service coverage, and a nonstop power supply without energy-consumption constraints [5]. In this chapter, we assume the physical locations of the groups do not change dramatically during the observation period.

Some previous research has touched on admission control in WMNs [54, 92, 145]. Lee et al. focused on path selection to satisfy the rate and delay requirements of an admission request [92]. Zhao et al. incorporated load balancing into admission control to select a mesh path [145]. Both require that the devices in a WMN cooperate tightly with each other. However, we believe it is impractical to let the devices of one party be deeply involved in the operations of another party. An admission control scheme that minimizes the number of involved parties is necessary for WMNs. In addition, when admission control happens between two parties, the economic benefit gain becomes a main reason to share resources, and thus a pricing mechanism should also be considered.

Efstathiou et al. proposed a public-private key based admission control scheme [54],

where a reciprocity algorithm is employed to identify a contributor, i.e., a user who also



1 A minor exception is for Unspecified Bit Rate (UBR) traffic that does not require a
QoS guarantee, and therefore works in a best-effort manner.









provides network access to other users, and only contributors can obtain network access from other contributors when they are mobile. We notice that their scheme works as a barter market from an economic viewpoint, since all users trade their network resources for other users' network resources. As a monetary system is more efficient and more flexible, we aim to design an admission control scheme combined with pricing, where users can make payments in any form of credit that they are accustomed to using in their daily lives.

In industry, an admission control scheme combined with pricing for IEEE 802.11 based networks was devised by Fon [65]. The scheme requires all parties to be registered with a control center before they share network resources with each other, and the price, which is still flat-rate and unrelated to the service quality, is also determined by the control center. This is inflexible for parties who prefer to make their own admission decisions according to the currently available resources and the requested service quality. Therefore, a distributed admission control scheme is more appropriate for WMNs.

To meet these requirements, we propose a group-based admission control scheme in which only two parties are involved in the operations upon each admission request. The decision criteria are the available resources and the requested resources, which correspond to supply and demand in an economic system. The involved parties use the economic concepts of utility, cost, and benefit to calculate the available and requested resources. Therefore, our scheme is named APRIL (Admission control with PRIcing Leverage). Since the operations of our scheme are conducted in a distributed manner, there is no need for a single control center.

The chapter is organized as follows. We introduce the idea of groups in WMNs in

Section 4.2. The pricing model is discussed in Section 4.3. Based on the pricing model,

Section 4.4 provides the procedure of our admission control scheme, followed by perfor-

mance evaluation in Section 4.5. The chapter is concluded in Section 4.6.









4.2 Groups in WMNs

In order to provide ubiquitous Internet access, a mesh network can be an open system that allows the devices of one party to obtain network access from another party that already has it, and thereafter to further share resources with other parties that need network access.

WMNs are composed of hot spots, mesh routers, mesh users, and hybrid devices [5].

A hot spot is the Internet access point of the WMN.2 A mesh router is a device that forwards

packets for other devices. By contrast, a mesh user is a device that sends/receives packets for itself. A hybrid device functions as a mesh router and a mesh user at the same time.

For a general design, we model a set of devices as a "group", within which admission

control is not considered. A group includes at least one device; but in most cases, it is

a set of multiple devices. The reason to use "groups" instead of physical devices for the

operation and maintenance of WMNs is as follows.

When some devices of the same party are close to each other in physical positions,

they can form a group, and each device becomes a group member. Within the group,

the resources are shared by all group members using some mechanisms or protocols de-

termined by the group administrator or controlled under an agreement of group members,

using either centralized management or distributed management. It is reasonable to assume

that the devices within a group cooperate with each other and work as a single system, and

admission control among these devices is not as important as admission control among

devices belonging to different groups. By splitting devices into groups, we only focus on

the admission control among different groups.

Traffic in WMNs usually goes from mesh users to the Internet through mesh routers

and hot spots, which creates a "parking-lot" scenario [68]. In most cases, based on the



2 It is possible that a WMN has multiple hot spots. In this chapter, we only discuss the
case with one hot spot for simplicity.









Figure 4-1: Tree topology formed by groups in a WMN

traffic routes, the topology of a WMN can be illustrated by a tree or multiple trees, each

having a hot spot as the root, mesh users as the leaves, and mesh routers or hybrid devices

as branches. If we substitute groups for devices, the root, the branches, and the leaves

are all formed by groups. In the tree topology illustrated in Fig. 4-1, let G_0 represent the group that includes the hot spot. Several other groups are connected to G_0 directly to get network access; they are represented by G_1^(1), G_1^(2), …, respectively.

For general purposes, we denote a group in the tree topology by G_i, and denote by G_{i-1} the group that provides network access to G_i. G_i also provides network access to groups G_{i+1}^(1), G_{i+1}^(2), …. After G_i obtains the permit of network access from G_{i-1} by paying some credits, it can further share the resources with G_{i+1}^(j), j = 1, 2, …, by accepting payments from them.3 The details of resource sharing between G_i and G_{i+1}^(j) are transparent to G_{i-1}. Using this method, only two groups, resource provider G_i and resource user G_{i+1}^(j), are involved in the admission control operations when G_{i+1}^(j) sends the request for network access.



3 The payment transactions need the protection of reliable network security. The design of WMNs with secure payment transactions is beyond the scope of this chapter. Interested readers may refer to other literature, such as Zhang and Fang [144], for more information.









In special cases, if admission control needs to be conducted between different devices within a group (this may happen even though these devices belong to the same party), we can always further categorize the devices of the same group into subgroups. The grouping process continues until all admission control operations occur between groups. In this manner, we ensure that only two subgroups are involved in the admission control upon each request, and the number of parties involved in the admission control is minimized.

4.3 The Pricing Model for APRIL

APRIL treats WMNs as economic systems. For ease of explanation, when a group requests network access, it is called a user; when a group provides network access, it is called a provider. Due to the characteristics of WMNs, a user can also be a provider when it gives network access to other users.

Let p denote the price that a user needs to pay the provider in order to get resources x. We assume

    p = βx + c,  s.t. 0 < x ≤ X,    (4.1)

where β and c (β > 0, c > 0) are constants. The price increasing rate is determined by the slope β, the initial price is controlled by the y-intercept c, and X (X ≥ 0) is the amount of available resources. If X = 0, we say that no resources are available; accordingly, no value of x can satisfy (4.1), which means that the user cannot get any resources from the provider.

This assumption matches many current pricing strategies of ISPs. For example, the

DSL Internet services provided by BellSouth have a different monthly price for each plan

[22]. By collecting the values of the uplink and downlink bandwidth, we illustrate the

prices and the corresponding bandwidth in Fig. 4-2. A linear approximation compliant with (4.1) is also added to each subfigure. We see that the downlink and uplink prices have the approximations p = x/300 + 27 and p = x/16 + 15.5, respectively. In reality, the values of uplink bandwidth and downlink bandwidth are bundled in a service plan, so we only have to focus on one link direction. In our model, we use the general term "resources", which








Figure 4-2: Prices vs. bandwidth of BellSouth DSL service plans: (a) downlink; (b) uplink


can denote the combination of uplink and downlink bandwidth, or other types of resources,

depending on the specific requirements.

One feature of our pricing model is that the more resources a user gets, the lower the unit-resource price will be. The unit-resource price is denoted by r = p/x = β + c/x. From (4.1), we have ∂r/∂x = −c/x²; when x > 0, ∂r/∂x < 0.

Another feature of our pricing model is that the unit-resource price is a convex function of the resources, since ∂²r/∂x² = 2c/x³ > 0. In other words, when a user obtains more resources from the provider, the unit-resource price decreases, but the decrease slows down, so that the unit price never reaches zero.
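These two properties can be checked concretely. The short Python sketch below uses the downlink approximation p = x/300 + 27 read from Fig. 4-2; the sample bandwidth values are arbitrary choices for illustration.

```python
BETA, C = 1.0 / 300.0, 27.0   # downlink approximation p = x/300 + 27 (Fig. 4-2)

def price(x):
    """Linear pricing model (4.1): p = beta*x + c, for 0 < x <= X."""
    return BETA * x + C

def unit_price(x):
    """Unit-resource price r = p/x = beta + c/x."""
    return price(x) / x

# Decreasing: more resources -> lower unit price, approaching BETA from above
rates = [unit_price(x) for x in (300, 1500, 3000, 6000)]
print(rates)

# Convex: positive second difference on an evenly spaced grid
d2 = unit_price(300) - 2 * unit_price(600) + unit_price(900)
print(d2 > 0)
```

The printed rates fall monotonically toward β = 1/300 without reaching it, matching the claim that the unit price decreases but never reaches zero (here, never drops below the slope β).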

4.4 The APRIL Algorithm

4.4.1 Competition Type for APRIL

The APRIL algorithm is based on the knowledge of economic systems of WMNs. We

need to find the competition type of the market corresponding to systems. Traditionally, a

market can be modeled by one of the three competition types: 1) a monopoly, if a single

provider controls the amount of goods (i.e., resources in the network access market of

WMNs herein) and determines the market price, 2) a competitive market, if there are many










providers and users but none of them can dictate the price, and 3) an oligopoly, where only

a few providers are available [50].4

APRIL is designed to let groups that already have network access share that access (measured by available resources) with new groups swiftly and conveniently. If a group has available resources to share, it will become a provider. Therefore, it is reasonable to

assume that many providers exist, and thus the economic systems are competitive markets,

instead of monopolies or oligopolies. The competition type may change in the long run,

due to possible lower prices resulting from the growth of one or a few providers that bring

production economies of scale into effect [50, 117]. However, since APRIL is mainly used

for short-term network access where no direct connection to a hot spot of an ISP is available

and for the fast extension of the Internet coverage, the evolution of market competition is

out of the scope of this chapter.

In a competitive market, none of the participants (including the providers and the

users) is capable of controlling the prices, but they can adjust the supply and the demand

in order to obtain maximum benefit.5 A competitive market usually has the supply-demand

constraint,
constraint

    Σ_{i=1}^{m} x_i ≤ Σ_{j=1}^{n} X_j,

where Σ_{i=1}^{m} x_i represents the aggregate demand of all users, Σ_{j=1}^{n} X_j the aggregate supply of all providers, m the number of users, and n the number of providers [50]. The same

constraint still holds in a resource market of a WMN, where a user adjusts the demand to

maximize the benefit under this constraint (see Subsection 4.4.2 for details). On the other



4 A duopoly, where only two providers exist in the market, is considered to be a special
case of an oligopoly.

5 In some of the literature, the benefit of providers is also called profit. We use the word "benefit" for providers as well as users in this chapter, since a user may also be a provider at the same time in a WMN.









hand, we assume that each user only obtains network access through one provider at a time, so that the complexity of network configurations (such as routing algorithms, address allocations, load balancing, traffic aggregation, etc.) will not be too high. This circumstance differs from conventional competitive markets. We apply the nonnegative benefit principle to providers, which will be discussed in detail in Subsection 4.4.3.

4.4.2 Maximum Benefit Principle for Users

For a user, the benefit is the difference between the utility generated by using a certain

amount of resources and the price charged by a provider who supplies the resources to

the user. The maximum benefit principle requires the user to seek the amount of resource

demand that maximizes the benefit. The demand is subject to the supply-demand constraint

mentioned above.

Assume group G_i can generate utility u_i by using resources x_i provided by group G_{i-1}. On the other hand, G_i has to pay G_{i-1} price p_i in order to use the resources x_i. The values of p_i, u_i and x_i satisfy

    p_i = βx_i + c,    (4.2a)

    u_i = max(0, α_i log(x_i − x̄_i + 1)),
      s.t. 0 ≤ x_i ≤ X_i,    (4.2b)

The utility function u_i(x_i) is increasing and concave, which matches the features of elastic traffic [124]. Compared with some other utility models, such as that of Kelly et al. [85], our utility model (4.2b) has a term x̄_i. This term represents the minimum resource requirement of the applications. Many network applications generate utility only when x_i > x̄_i; otherwise, the communication quality would be too poor to be useful. Parameter α_i denotes the "willingness" of the applications to obtain more resources. A user having applications with a higher value of α_i will be willing to pay more for more resources. The pricing model (4.2a) is from (4.1).

Parameters α_i and x̄_i are application-dependent, while β and c, determined by the pricing scheme, are application-independent. In other words, network resources can be used by any applications of the user's choice, and the provider who announces the pricing plan does not need to know which applications use the resources. This also reflects the fact that the IP-based network architecture, with the same resources, has been and will be used and shared by a variety of applications.

In APRIL, we use a uniform pricing scheme for all groups, i.e., β and c do not carry the subscript i. This results from the competition of service providers in the market, and thus the nonnegative benefit principle is applicable to providers (see Subsection 4.4.3). Under competitive circumstances, all pricing schemes tend to be similar.

By paying p_i to get u_i, group G_i generates benefit

    b_i = u_i − p_i.    (4.3)

Note that b_i, u_i and p_i are all functions of x_i. Some previous work (for example, Courcoubetis and Weber [50]) calls the difference between the utility and the price the "net benefit".

Group G_i requests resources x_i based on the maximum benefit principle:

    max_{0 ≤ x_i ≤ X_i} b_i.    (4.4)
In order to derive the value of x_i that satisfies (4.4), let b̃_i represent α_i log(x_i − x̄_i + 1) − (βx_i + c). Then

    ∂b̃_i/∂x_i = α_i/(x_i − x̄_i + 1) − β.    (4.5)

Let x* denote the solution of ∂b̃_i/∂x_i = 0, i.e.,

    x* = α_i/β + x̄_i − 1.    (4.6)

When x_i > x*, ∂b̃_i/∂x_i < 0 and b̃_i is a decreasing function. On the other hand, when x̄_i − 1 < x_i < x*, ∂b̃_i/∂x_i > 0, i.e., b̃_i is increasing. Thus x* = arg max b̃_i.

Bearing in mind that b̃_i = b_i only when x̄_i ≤ x_i ≤ X_i, we discuss the calculation of x_i in five cases, which are illustrated in Fig. 4-3. In Subfigure 4-3(a), x* > X_i and b_i(X_i) > 0. Since 0 ≤ x_i ≤ X_i, the largest benefit is reached when x_i = X_i. In Subfigure











Figure 4-3: Utility u_i and price p_i vs. resources x_i: (a) x* > X_i and b_i(X_i) > 0; (b) x* > X_i and b_i(X_i) < 0; (c) 0 < x* < X_i and b_i(x*) > 0; (d) x* < 0 and b_i(x*) > 0; (e) x* < X_i and b_i(x*) < 0

4-3(b), when x* > X_i and b_i(X_i) < 0, x_i = 0, i.e., G_i will not request any resources, since no positive benefit is available. If 0 < x* < X_i and b_i(x*) > 0, as shown in Subfigure 4-3(c), x_i = x* is the best choice. When x* < 0 and b_i(x*) > 0 (Subfigure 4-3(d)), x_i = 0, as no positive benefit exists in the available resource range. In Subfigure 4-3(e), x* < X_i and b_i(x*) < 0, so x_i = 0 again. By merging all cases that result in x_i = 0, we get

    x_i = X_i,  for x* > X_i and b_i(X_i) > 0,
          x*,   for 0 < x* < X_i and b_i(x*) > 0,    (4.7)
          0,    otherwise.
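The case analysis in (4.6)-(4.7) translates directly into a small decision routine. The sketch below follows the reconstructed notation (the minimum requirement x̄_i written as x_min); the numeric parameters in the usage line are hypothetical, chosen only to exercise the interior-optimum branch.

```python
import math

def user_demand(alpha, beta, c, x_min, X):
    """Resource request x_i chosen by a user, per (4.6)-(4.7).

    alpha   : willingness parameter alpha_i
    beta, c : pricing constants from (4.1)
    x_min   : minimum useful amount of resources (x-bar_i)
    X       : resources available from the provider (X_i)
    """
    def benefit(x):
        # b_i = u_i - p_i, valid for x_min <= x <= X
        return alpha * math.log(x - x_min + 1.0) - (beta * x + c)

    x_star = alpha / beta + x_min - 1.0       # unconstrained maximizer (4.6)
    if x_star > X and benefit(X) > 0:
        return X                               # take all available resources
    if 0 < x_star <= X and benefit(x_star) > 0:
        return x_star                          # interior optimum
    return 0.0                                 # no positive benefit: request nothing

# Hypothetical parameters: x* = 2/0.01 + 10 - 1 = 209, inside (0, 500]
print(user_demand(2.0, 0.01, 0.5, 10.0, 500.0))  # 209.0
```

Shrinking X below x* (e.g., X = 100) moves the result to the boundary case x_i = X_i, and a small α with a large c lands in the "otherwise" branch, mirroring Subfigures 4-3(a), (c), and (e).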


4.4.3 Nonnegative Benefit Principle for Providers

For a provider, the benefit is equal to the difference between the aggregate price paid

by all users who obtain resources from the provider and the cost of maintaining those

resources. As a provider may also be a user at the same time in a WMN, the benefit of a

group is the sum of the benefit of being a user and that of being a provider.









Since each user obtains network access through only one provider at a time, unlike in a conventional competitive market, it may not be a good strategy for a provider simply to supply the amount of resources that maximizes its benefit when it has more available resources.

As described in the previous subsection, a user always seeks the maximum benefit. When this user has more than one choice for its provider, which is exactly what happens in a competitive market, it will select the provider who supplies more resources if the supply from the other providers is less than the demand that maximizes the user's benefit. Because the user can only choose one provider, when the nonnegative benefit principle is considered, the provider selected by this user obtains a nonnegative benefit gain, while all other providers have zero benefit gain. This circumstance forces all providers to adopt the nonnegative benefit principle. This principle has also been used in previous work in economics [91].

When group G_i obtains access from group G_{i-1}, it can decide whether to further share the resources with groups G_{i+1}^(j), j = 1, 2, …. Further resource sharing will change the utility function of G_i. In this subsection, we first discuss the new utility function of G_i, and then give more details of the nonnegative benefit principle based on the analytical results.

After G_i shares the resources with G_{i+1}^(1) and receives the payment from G_{i+1}^(1), the resources used by G_{i+1}^(1) are deducted from the utility, but the payment is added to it. The utility function becomes

    u_i^(1) = max(0, α_i log(x_i − x_{i+1}^(1) − x̄_i + 1)) + β x_{i+1}^(1) + c,
      s.t. 0 ≤ x_{i+1}^(1) ≤ x_i.    (4.8)









Here x_i is the total amount of resources that G_i and G_{i+1}^(1) receive, which is determined by (4.7). The resources used only by G_i become x_i − x_{i+1}^(1), since x_i is shared by G_i and G_{i+1}^(1). On the other hand, the price function p_i is still the same as (4.2a), which represents the price that G_i needs to pay G_{i-1} for the use of x_i.

G_i allows the admission of G_{i+1}^(1) only when it gains benefit from the resource sharing. Let b_i^(1) be the benefit for G_i after the resource sharing with G_{i+1}^(1). The benefit gain is

    Δb_i^(1) = b_i^(1) − b_i
             = α_i log[ max(1, x_i − x_{i+1}^(1) − x̄_i + 1) / (x_i − x̄_i + 1) ] + β x_{i+1}^(1) + c,
      s.t. 0 ≤ x_{i+1}^(1) ≤ x_i.

If x_{i+1}^(1) ≤ x_i − x̄_i,

    Δb_i^(1) = α_i log[ (x_i − x_{i+1}^(1) − x̄_i + 1) / (x_i − x̄_i + 1) ] + β x_{i+1}^(1) + c.

If x_{i+1}^(1) > x_i − x̄_i,

    Δb_i^(1) = −α_i log(x_i − x̄_i + 1) + β x_{i+1}^(1) + c.

Since b_i(x_i) > 0, i.e., α_i log(x_i − x̄_i + 1) − βx_i − c > 0, and considering x_{i+1}^(1) ≤ x_i, we know that Δb_i^(1) < 0 when x_{i+1}^(1) > x_i − x̄_i. G_i has available resources for G_{i+1}^(1) only when Δb_i^(1) > 0, which corresponds to

    α_i log[ (x_i − x_{i+1}^(1) − x̄_i + 1) / (x_i − x̄_i + 1) ] + β x_{i+1}^(1) + c > 0.










Similarly, after G_i shares the resources with G_{i+1}^(2), the utility function becomes

    u_i^(2) = max(0, α_i log(x_i − Σ_{j=1}^{2} x_{i+1}^(j) − x̄_i + 1)) + β Σ_{j=1}^{2} x_{i+1}^(j) + 2c,
      s.t. x_{i+1}^(j) ≥ 0 and 0 ≤ Σ_{j=1}^{2} x_{i+1}^(j) ≤ x_i, j = 1, 2.


The benefit gain for G_i after it further shares the resources with G_{i+1}^(2) is

    Δb_i^(2) = α_i log[ max(1, x_i − Σ_{j=1}^{2} x_{i+1}^(j) − x̄_i + 1) / max(1, x_i − x_{i+1}^(1) − x̄_i + 1) ] + β x_{i+1}^(2) + c,
      s.t. x_{i+1}^(j) ≥ 0 and 0 ≤ Σ_{j=1}^{2} x_{i+1}^(j) ≤ x_i, j = 1, 2.


In general, the benefit gain after G_i further shares the resources with G_{i+1}^{(k)} (k = 1, 2, ...) is

\Delta b_i^{(k)} = a_i \log \frac{\max\left(1,\; x_i - \sum_{j=1}^{k} x_{i+1}^{(j)} - X_i + 1\right)}{\max\left(1,\; x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i + 1\right)} + \beta x_{i+1}^{(k)} + c,
\text{s.t. } x_{i+1}^{(j)} \ge 0 \text{ and } 0 \le \sum_{j=1}^{k} x_{i+1}^{(j)} \le x_i, \quad j = 1, 2, \ldots, k.    (4.9)
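The general gain (4.9) reduces to a few lines of code. A sketch in Python; the function name and sample values are illustrative, not from the chapter:

```python
import math

def delta_b_k(a_i, beta, c, x_i, X_i, shares, x_new):
    """Benefit gain (4.9) when G_i, already granting `shares` =
    [x_{i+1}^{(1)}, ..., x_{i+1}^{(k-1)}], further grants x_new = x_{i+1}^{(k)}."""
    prev = sum(shares)
    assert x_new >= 0 and prev + x_new <= x_i     # constraints of (4.9)
    num = max(1.0, x_i - prev - x_new - X_i + 1)  # log argument after the new grant
    den = max(1.0, x_i - prev - X_i + 1)          # log argument before it
    return a_i * math.log(num / den) + beta * x_new + c

# Granting nothing changes nothing except collecting the flat fee c:
gain_at_zero = delta_b_k(4.0e-4, 7.716e-8, 6.944e-4, 5000.0, 10.0, [1000.0], 0.0)
```

With x_new = 0 the log term vanishes, so the gain is exactly c, which is why the curve of \Delta b_i^{(k)} always starts at c > 0.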


Based on (4.9), the curve corresponding to \Delta b_i^{(k)}(x_{i+1}^{(k)}) has three possible shapes, as shown in Fig. 4-4. As it is required that x_{i+1}^{(k)} \ge 0 and 0 \le \sum_{j=1}^{k} x_{i+1}^{(j)} \le x_i, we only need to consider the range of x_{i+1}^{(k)} from 0 to x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)}.

If x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i > 0, we first discuss the shape in the range 0 \le x_{i+1}^{(k)} < x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i. In this range,

\frac{\partial \left(\Delta b_i^{(k)}\right)}{\partial x_{i+1}^{(k)}} = -\frac{a_i}{x_i - \sum_{j=1}^{k} x_{i+1}^{(j)} - X_i + 1} + \beta.    (4.10)








Figure 4-4: Three possible shapes of \Delta b_i^{(k)}(x_{i+1}^{(k)}): (a) x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i > 0 and 1 < a_i/\beta \le x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i + 1; (b) x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i > 0 and a_i/\beta > x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i + 1; (c) x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i \le 0.


If 1 < a_i/\beta \le x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i + 1, \Delta b_i^{(k)} is increasing when 0 \le x_{i+1}^{(k)} \le x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i - a_i/\beta + 1, and decreasing when x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i - a_i/\beta + 1 < x_{i+1}^{(k)} \le x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i, as illustrated in Fig. 4-4(a). If a_i/\beta > x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i + 1, \Delta b_i^{(k)} is decreasing in the range of 0 \le x_{i+1}^{(k)} < x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i, which is illustrated in Fig. 4-4(b). In both subfigures, when x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i \le x_{i+1}^{(k)} \le x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)}, \Delta b_i^{(k)} is always linearly increasing because \partial(\Delta b_i^{(k)})/\partial x_{i+1}^{(k)} = \beta > 0. If 0 < a_i/\beta \le 1 (not shown in Fig. 4-4), since G_i is only interested in x_i > X_i when it sends the admission request to G_{i-1}, from (4.2a), (4.2b) and (4.3), \partial b_i/\partial x_i = a_i/(x_i - X_i + 1) - \beta < 0. We also know that b_i(x_i)|_{x_i = X_i} = -\beta X_i - c < 0. Therefore, b_i(x_i) < 0 when x_i > X_i. In other words, no resources are able to provide positive benefit for G_i, and hence 0 < a_i/\beta \le 1 will never happen when G_i further tries to share resources with any G_{i+1}^{(k)}.

Another possibility is that x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i \le 0. In this case, the curve of \Delta b_i^{(k)}(x_{i+1}^{(k)}) is increasing from 0 to x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)}, as shown in Fig. 4-4(c).

Let X_{i+1}^{(k)} denote the maximum available resources that G_i can provide to G_{i+1}^{(k)}. The nonnegative benefit principle requires that 1) X_{i+1}^{(k)} is as large as possible, and 2) \Delta b_i^{(k)}(x_{i+1}^{(k)}) \ge 0 for all x_{i+1}^{(k)} \le X_{i+1}^{(k)}, i.e., G_i does not lose benefit by further sharing resources with G_{i+1}^{(k)}, as long as the quantity of resources used by G_{i+1}^{(k)} is no more than X_{i+1}^{(k)}.









Figure 4-5: Determining X_{i+1}^{(k)} when x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i > 0: (a) \Delta b_i^{(k)}\left(x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i\right) < 0; (b) \Delta b_i^{(k)}\left(x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i\right) \ge 0.


Now we discuss how to determine X_{i+1}^{(k)}. When x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i \le 0, X_{i+1}^{(k)} = x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)}, based on Fig. 4-4(c). When x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i > 0, the discussion falls into two cases, categorized by the sign of \Delta b_i^{(k)}\left(x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i\right). Without loss of generality, we use the shape of Fig. 4-4(a) to illustrate these two cases in Fig. 4-5.

Case 1. When \Delta b_i^{(k)}\left(x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i\right) < 0, as shown in Fig. 4-5(a), X_{i+1}^{(k)} is the x-coordinate of the intersection of the curve of \Delta b_i^{(k)}(x_{i+1}^{(k)}) with the x-axis in the range of 0 \le x_{i+1}^{(k)} < x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i.

Case 2. When \Delta b_i^{(k)}\left(x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i\right) \ge 0, as illustrated in Fig. 4-5(b), we have \Delta b_i^{(k)} \ge 0 in the whole range of 0 \le x_{i+1}^{(k)} \le x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)}, since x_{i+1}^{(k)} = x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i gives the smallest value of \Delta b_i^{(k)}. Based on the nonnegative benefit principle described above, X_{i+1}^{(k)} = x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)}.

Therefore, we have

X_{i+1}^{(k)} =
\begin{cases}
x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)}, & \text{if } x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} \le X_i, \text{ or } x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} > X_i \text{ and } \Delta b_i^{(k)}\left(x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i\right) \ge 0, \\
\hat{X}_{i+1}^{(k)}, & \text{if } x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} > X_i \text{ and } \Delta b_i^{(k)}\left(x_i - \sum_{j=1}^{k-1} x_{i+1}^{(j)} - X_i\right) < 0, \\
0, & \text{otherwise},
\end{cases}    (4.11)

where \hat{X}_{i+1}^{(k)} denotes the x-coordinate of the intersection described in Case 1.









Here we do not use the value of x_{i+1}^{(k)} that maximizes \Delta b_i^{(k)}, because of the competition among providers: if G_i provides less resources than another provider, G_{i+1}^{(k)} may choose that provider for network access as long as it gets greater benefit (see Subsection 4.4.2). On the other hand, because of the competition, the price will be adjusted by the market to a marginal value p_i, which is determined only by x_i. No service provider can set a price significantly higher than p_i and win customers at the same time. By using the nonnegative benefit principle, G_i shows the available resources X_{i+1}^{(k)} to group G_{i+1}^{(k)} for admission control.
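Following the case analysis, (4.11) can be implemented as a small procedure: return the unshared remainder in the Fig. 4-4(c) and Case 2 situations, and otherwise bisect for the Case 1 zero crossing (the gain starts at c > 0 and ends negative in that range, so there is a sign change to find). A Python sketch with hypothetical parameters:

```python
import math

# Hypothetical parameters (dollars, kb/s).
a_i, beta, c, X_i = 4.0e-4, 7.716e-8, 6.944e-4, 10.0

def delta_b(x_i, prev, x):
    """Gain (4.9); prev = sum_{j=1}^{k-1} x_{i+1}^{(j)}, x = x_{i+1}^{(k)}."""
    num = max(1.0, x_i - prev - x - X_i + 1)
    den = max(1.0, x_i - prev - X_i + 1)
    return a_i * math.log(num / den) + beta * x + c

def available(x_i, prev):
    """Maximum resources X_{i+1}^{(k)} that G_i can offer, following (4.11)."""
    left = x_i - prev                  # resources not yet shared
    if left - X_i <= 0:                # Fig. 4-4(c): gain increases on the whole range
        return left
    if delta_b(x_i, prev, left - X_i) >= 0:
        return left                    # Case 2: gain nonnegative everywhere
    # Case 1: the gain goes from delta_b(0) = c > 0 to a negative value at
    # left - X_i; bisect for the zero crossing.
    lo, hi = 0.0, left - X_i
    for _ in range(60):
        mid = (lo + hi) / 2
        if delta_b(x_i, prev, mid) >= 0:
            lo = mid
        else:
            hi = mid
    return lo
```

For example, available(5000.0, 0.0) lands on the zero crossing of the gain strictly inside (0, 4990), while available(5000.0, 4995.0) simply returns the remaining 5 kb/s.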

4.4.4 Algorithm Operations

After the discussion in the above two subsections, we can describe the APRIL algorithm as follows.

1. When group G_i has available resources, i.e., X_{i+1}^{(k)} > 0, it publicizes the value of X_{i+1}^{(k)}, as well as \beta and c. As discussed before, \beta and c tend to be the same for all groups.

2. When group G_j requests network access, it scans the neighboring groups that have available resources. Treating the available resources X_{i+1}^{(k)} > 0 publicized by G_i as X_j, group G_j uses (4.7) to calculate x_j. If x_j > 0, G_j uses x_j as the requested resources and sends the admission request to G_i. If several neighboring groups are available, G_j only sends the admission request to the group that gives the largest x_j.

3. After receiving the request from G_j, G_i shares the resources with G_j using a resource allocation scheme, which can be a buffer management scheme, a scheduling scheme, or any effective scheme that is capable of allocating resources to different groups.
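The request-and-select logic of step 2 can be sketched as follows. Since (4.7) itself is not reproduced in this section, the closed form used for x_j below (the unconstrained benefit maximizer a_j/\beta + x_min - 1, capped by the publicized availability) is an assumed reading of it, and the helper names are illustrative.

```python
import math

beta, c = 7.716e-8, 6.944e-4   # per-minute price coefficients from Section 4.5

def requested_resources(a_j, x_min, offered):
    """Bandwidth x_j that G_j would request from a provider offering `offered`
    (assumed form of (4.7): benefit maximizer capped by the offer)."""
    x_j = min(a_j / beta + x_min - 1, offered)
    if x_j <= x_min:
        return 0.0                               # below G_j's minimum requirement
    benefit = a_j * math.log(x_j - x_min + 1) - beta * x_j - c
    return x_j if benefit > 0 else 0.0

def choose_provider(a_j, x_min, offers):
    """Step 2: send the admission request only to the group giving the largest x_j."""
    best = max(offers, key=lambda g: requested_resources(a_j, x_min, offers[g]))
    x_j = requested_resources(a_j, x_min, offers[best])
    return (best, x_j) if x_j > 0 else (None, 0.0)

# G_3's parameters from Section 4.5 (a_3 = 6.0e-4 dollars, minimum 20 kb/s)
# against two hypothetical offers:
provider, x_3 = choose_provider(6.0e-4, 20.0, {"G1": 3000.0, "G2": 1500.0})
```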

4.5 Performance Evaluation

In this section, we use the network topology shown in Fig. 4-6 to simulate a wireless mesh network. For simplicity, link capacity is the only resource considered in the simulations; more types of resources can be included in the future.









Figure 4-6: Simulation network for APRIL (wired and wireless links; the time at which each group requests access is shown beside it)


At the beginning, only G_0 is connected to the Internet. The bandwidth of the wired backhaul is 6000 kb/s.^6 Four other groups, G_1, G_2, G_3 and G_4, start to request network access at times 10 min, 20 min, 30 min and 50 min, respectively, which are shown beside the groups in Fig. 4-6. G_1 and G_2 are neighboring groups of G_0. G_3 is a neighboring group of G_1. G_4 is a neighboring group of G_2 as well as G_3.

Assume the same price as described in Section 4.3 is used, i.e., a monthly fee of p = x/300 + 30 dollars, which is equivalent to a per-minute fee of p = 7.716 \times 10^{-8} x + 6.944 \times 10^{-4} dollars. In other words, \beta = 7.716 \times 10^{-8} dollars per kb/s and c = 6.944 \times 10^{-4} dollars.

At the beginning, G_0 uses the maximum benefit principle to decide the amount of bandwidth x^* = 6000 kb/s. From (4.6), a_0 = 4.167 \times 10^{-4} dollars. We also set X_0 = 10 kb/s, which is an empirical bandwidth requirement for surfing text-based web pages.

G_1 and G_2 have the same value of a_i = 2.308 \times 10^{-4} dollars, which corresponds to x^* = 3000 kb/s. Same as G_0, X_i = 10 kb/s. G_3 and G_4 are more sensitive to the bandwidth. For them, a_i = 6.0 \times 10^{-4} dollars and X_i = 20 kb/s.
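The per-minute coefficients follow from spreading the monthly fee over a 30-day month, i.e., 43200 minutes; the last assertion below also checks that the stated a_i for G_1 and G_2 agrees with the first-order condition a_i = \beta(x^* - X_i + 1), our reading of (4.6).

```python
MIN_PER_MONTH = 30 * 24 * 60          # 43200 minutes in a 30-day month

beta = (1 / 300) / MIN_PER_MONTH      # dollars per kb/s per minute
c = 30 / MIN_PER_MONTH                # dollars per minute

assert abs(beta - 7.716e-8) < 1e-11   # 7.716 x 10^-8, as stated
assert abs(c - 6.944e-4) < 1e-7       # 6.944 x 10^-4, as stated

# First-order condition a_i = beta * (x* - X_i + 1) (our reading of (4.6))
# reproduces a_i = 2.308e-4 dollars for x* = 3000 kb/s, X_i = 10 kb/s:
assert abs(beta * (3000 - 10 + 1) - 2.308e-4) < 1e-7
```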



6 This corresponds to the download speed of the fastest DSL service plan of BellSouth [22]. When DSL is used as the Internet connection, the upload and download speeds are often different, but since they coexist in a service plan, we focus only on the download speed in this chapter for ease of discussion.