
Collaborative Decoding and Its Performance Analysis



COLLABORATIVE DECODING AND ITS PERFORMANCE ANALYSIS

By

XIN LI

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2006


Copyright 2006 by Xin Li


To my wife, Li, and my parents.


ACKNOWLEDGMENTS

First of all, I would like to give my sincere gratitude to my advisor, Dr. Tan F. Wong, for his advice during my doctorate endeavors. Without his patience, guidance and encouragement, none of this work would have been possible. As a mentor, his wisdom, kindness and enthusiasm benefited me greatly in both my research work and my life. I would also like to give special thanks to the co-chair of my supervisory committee, Dr. John Shea, for many fruitful discussions and great help. His valuable insights and ideas directly and significantly contributed to the work in this dissertation.

Many thanks go to my fellow graduate students in the laboratory. In particular, I appreciate Arun Avudainayagam for many helpful discussions and collaboration, which directly contributed to part of the work in Chapter 4 of this dissertation. I also extend my gratitude to Dr. Jose A. B. Fortes and Dr. Shigang Chen for being my committee members, and I also thank Dr. Sanjay Ranka for having served on my committee. I appreciate their constructive suggestions and precious time. Meanwhile, great appreciation must go to the University of Florida for awarding me the Alumni Graduate Fellowship, which provided me a full four years of support during my graduate study.

Last, but not least, my sincere thanks go to my family for their endless love, continuous support and encouragement throughout my life. This work is dedicated to all of them.


TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTER

1 INTRODUCTION
    1.1 Motivation
    1.2 Multi-antenna Diversity Techniques
    1.3 Distributed Array
    1.4 Collaborative Decoding
    1.5 Scope of This Work

2 COLLABORATIVE DECODING IN A TWO-NODE DISTRIBUTED ARRAY
    2.1 System Model
    2.2 Collaborative Decoding for Rectangular Parity-Check Code
    2.3 Collaborative Decoding for Convolutional Code
    2.4 Summary

3 COLLABORATIVE DECODING FOR CODED MODULATION
    3.1 System Model
    3.2 Iterative Demodulation and Decoding for BICM
        3.2.1 Iterative Demodulation and Decoding Algorithm
        3.2.2 Effect of Mapping in BICM-ID
    3.3 Collaborative Decoding for BICM-ID with Rectangular Parity-Check Code
    3.4 Performance Evaluation
    3.5 Summary

4 COLLABORATIVE DECODING FOR DISTRIBUTED ARRAY WITH TWO OR MORE NODES
    4.1 System Model for Distributed Array with Two or More Nodes


    4.2 Collaborative Decoding and Information Exchange Schemes
        4.2.1 Information Exchange with Memory
        4.2.2 Least-Reliable-Bit Information Exchange
        4.2.3 Most-Reliable-Bit Information Exchange
    4.3 Performance Evaluation
    4.4 Summary

5 PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH LEAST-RELIABLE-BIT EXCHANGE ON AWGN CHANNELS
    5.1 Gaussian-Approximated Density Evolution for Nonrecursive Convolutional Codes
    5.2 Error Performance Analysis
        5.2.1 BER Upper Bound for M ≥ 3
        5.2.2 Union Bound for Max-log-MAP Decoding
        5.2.3 Applying Max-log-MAP Decoding Union Bound to Collaborative Decoding
        5.2.4 BER Upper Bound for M = 2
    5.3 Numerical Results
    5.4 Summary

6 PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH MOST-RELIABLE-BIT EXCHANGE ON AWGN AND RAYLEIGH FADING CHANNELS
    6.1 Statistical Approximation for Extrinsic Information
        6.1.1 AWGN Channel
        6.1.2 Independent Rayleigh Fading Channel
    6.2 Density Evolution Model
        6.2.1 Additional Information Generation
        6.2.2 Finding Parameters in Density Evolution Model
    6.3 A General Upper Bound for BER
    6.4 Error Events and Probabilities Analysis
        6.4.1 Union Bound for Collaborative Decoding
        6.4.2 Analysis for MRB Information Exchange on Error Events
        6.4.3 Upper Bounds for Probabilities Involving Λ_j
    6.5 Evaluation of BER Upper Bound
        6.5.1 AWGN Channel
        6.5.2 Independent Rayleigh Fading Channel
    6.6 Numerical Results
    6.7 Summary

7 CONCLUSIONS AND FUTURE WORK

APPENDIX

A RECTANGULAR PARITY-CHECK ENCODING AND DECODING


    A.1 Multidimensional Parity-Check Encoding
    A.2 Iterative Multidimensional Parity-Check Decoding

B PROOF OF EQUATION (5-28)
    B.1 System Model
    B.2 Extrinsic Information in Max-log-MAP Decoding
        B.2.1 Optimal Log-MAP Decoding
        B.2.2 Max-log-MAP Decoding

C PROOF OF THEOREM 6.4.4

D NUMERICAL EVALUATION OF GAL CDF

REFERENCES

BIOGRAPHICAL SKETCH


LIST OF TABLES

5-1  Different choices of {p_j} and the corresponding average information exchange amount with M = 8 for the rate-1/2 CC(5,7) code. The exchange amount is calculated with respect to the information exchange amount of MRC.

6-1  Different choices of {p_j} and the corresponding average information exchange amount with M = 6 for the rate-1/2 CC(5,7) code. The exchange amount is calculated with respect to the information exchange amount of MRC.


LIST OF FIGURES

1-1  Linear combiner for a SIMO system

1-2  Distributed array

1-3  Iterative decoding

1-4  Collaborative decoding

2-1  A cluster of two nodes forming a distributed antenna array.

2-2  Example of rectangular parity-check code for a packet of nine information bits.

2-3  Conditional probability of the event that an error occurs in the bits whose soft-output magnitudes rank in the lowest percentiles.

2-4  Performance of collaborative decoding for the 32x32 RPCC with information exchange between two receiving nodes over an AWGN channel.

2-5  Performance of collaborative decoding for the 32x32 RPCC with information exchange between two receiving nodes over a Rayleigh fading channel.

2-6  Performance of collaborative decoding for the CC(5,7) with information exchange between two receiving nodes over an AWGN channel.

2-7  Performance of collaborative decoding for the CC(5,7) with information exchange between two receiving nodes over a Rayleigh fading channel.

3-1  System model of BICM-ID with RPCC

3-2  BER for BICM-ID with 32x32 RPCC and 8PSK in the two-node distributed array over an AWGN channel.

3-3  BER for BICM-ID with 32x32 RPCC and 8PSK in the two-node distributed array over a Rayleigh fading channel.

3-4  Average SNR at 10^-5 BER versus spectral efficiency for BICM-ID with 32x32 RPCC in a two-node distributed array over AWGN channels.

3-5  Average SNR at 10^-5 BER versus spectral efficiency for BICM-ID with 32x32 RPCC in a two-node distributed array over Rayleigh fading channels.


4-1  Distributed array with multiple nodes.

4-2  Typical probability distribution functions of soft output for convolutional codes

4-3  BER performance of collaborative decoding with LRB exchange for the cases of M = 2, 3, 4, 6 and 8 on AWGN channels, where CC(5,7) and {p_j} = {0.1, 0.15, 0.25} are used.

4-4  BER performance of collaborative decoding with LRB exchange on Rayleigh fading channels; parameter settings are the same as in Fig. 4-3.

4-5  BER performance of collaborative decoding with MRB exchange on AWGN channels, where {p_j} = {0.1, 0.2, 1} are used.

4-6  BER performance of collaborative decoding with MRB exchange on Rayleigh fading channels; parameter settings are the same as in Fig. 4-5.

4-7  Comparison of information exchange amount with respect to MRC for the LRB and MRB exchange schemes

5-1  System model for the collaborative decoding process.

5-2  Empirical pdfs of extrinsic information generated by the MAP decoders in successive iterations in collaborative decoding with the LRB exchange for M = 6 and Eb/N0 = 3 dB on AWGN channels, where the maximum free distance 4-state nonrecursive convolutional code is used.

5-3  Comparison of mean and variance of the extrinsic information from the density evolution model and that from the actual collaborative decoding process

5-4  Comparison of threshold estimated from the density evolution model and that from the actual collaborative decoding process

5-5  Comparison of the proposed bounds and simulation results for the cases of M = 2 and 6 on AWGN channels, where CC(5,7) and {p_j} = {0.1, 0.15, 0.25} are used.

5-6  Comparison of the proposed bounds and simulation results in the last iteration for the cases of M = 2, 3, 4, 6 and 8 on AWGN channels, where CC(15,17) and {p_j} = {0.1, 0.15, 0.25} are used.

5-7  Comparison of performance for M = 8 and CC(5,7) on AWGN channels with different choices of {p_j} in Table 5-1

5-8  Asymptotic performance for M = 8 and CC(5,7) on AWGN channels with different choices of {p_j} in Table 5-1


6-1  Empirical pdfs of extrinsic information generated by the MAP decoder at successive iterations in collaborative decoding with MRB exchange for M = 6 on an AWGN channel with Eb/N0 = 5 dB, for which the maximum free distance 4-state nonrecursive convolutional code is used.

6-2  Empirical pdfs of extrinsic information generated by the MAP decoders in successive iterations in collaborative decoding with the MRB exchange for M = 8 and Eb/N0 = 8 dB on independent Rayleigh fading channels, where the maximum free distance 4-state nonrecursive convolutional code is used.

6-3  Density evolution model for the collaborative decoding process

6-4  Comparison of mean and variance of the extrinsic information obtained from the density evolution model and that from the actual collaborative decoding process.

6-5  Comparison of threshold estimated from the density evolution model and that from the actual collaborative decoding process.

6-6  Comparison of the bounds and simulation results for the case of M = 6 on an AWGN channel, where CC(5,7) and {p_j} = {0.1, 0.2, 0.6}, {0.1, 0.2, 0.8} and {0.1, 0.2, 1} are used.

6-7  Comparison of the bounds and simulation results for the case of M = 6 on an AWGN channel, where CC(15,17) and {p_j} = {0.1, 0.2, 0.6}, {0.1, 0.2, 0.8} and {0.1, 0.2, 1} are used.

6-8  Comparison of the bounds and simulation results for the case of M = 6 on an independent Rayleigh fading channel, where CC(5,7) and {p_j} = {0.1, 0.2, 0.6}, {0.1, 0.2, 0.8} and {0.1, 0.2, 1} are used.

6-9  Comparison of the bounds and simulation results for the case of M = 6 on an independent Rayleigh fading channel, where CC(15,17) and {p_j} = {0.1, 0.2, 0.6}, {0.1, 0.2, 0.8} and {0.1, 0.2, 1} are used.

6-10 Comparison of the proposed bounds and simulation results in the last iteration for the cases of M = 2, 3, 4 and 8 on an AWGN channel, where CC(5,7) and {p_j} = {0.1, 0.2, 1} are used.

6-11 Comparison of the proposed bounds and simulation results in the last iteration for the cases of M = 2, 3 and 4 on independent Rayleigh fading channels, where CC(5,7) and {p_j} = {0.1, 0.2, 1} are used.

6-12 Comparison of performance for M = 6 and CC(5,7) on an AWGN channel with the different choices of {p_j} in Table 6-1

6-13 Comparison of performance for M = 6 and CC(5,7) on Rayleigh fading channels with the different choices of {p_j} in Table 6-1


C-1  Summation order switch procedure for {l, m_l}, l = 0, 1, ..., j-1.


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

COLLABORATIVE DECODING AND ITS PERFORMANCE ANALYSIS

By

Xin Li

May 2006

Chair: Tan F. Wong
Major Department: Electrical and Computer Engineering

Antenna array processing is a common technique that utilizes spatial diversity to enhance the robustness of a digital communication system against deteriorative wireless transmission effects such as channel noise and fading. In order to obtain spatial diversity effectively, the physically connected antenna array elements in a traditional array are required to be separated far apart so that the received signals at different antennas are independent of each other. This situation usually makes the size of the antenna array too large to be feasible in many practical scenarios. Unlike traditional arrays, distributed arrays alleviate the array size constraint by utilizing a cluster of independent, physically separated receivers in a wireless network as the array elements. All the array elements (receiving nodes) can communicate with each other through a wireless broadcast channel. By exchanging information among these receiving nodes during the reception process, it is possible to obtain spatial diversity gain or antenna gain with such a distributed array. Since the information exchange traffic load among the receiving nodes is an important issue for a wireless network, conventional receive diversity techniques such as maximum ratio combining (MRC) become inefficient in distributed arrays.


Collaborative decoding is an iterative receive diversity approach suitable for distributed arrays when error correction codes are used in the transmission process. Receive diversity is achieved by exchanging only a portion of the decoding information among the receiving nodes in collaborative decoding. By carefully selecting the decoding information, collaborative decoding can lower the information exchange amount while providing performance close to that of MRC. Based on the statistical characteristics of the output of maximum a posteriori decoders, we study two information exchange schemes for collaborative decoding: the least-reliable-bit (LRB) and most-reliable-bit (MRB) exchange schemes. The error performance of these two schemes under different transmission environments is investigated and compared with Monte Carlo simulations. Theoretical analysis is also carried out for collaborative decoding with the LRB and MRB exchange schemes when nonrecursive convolutional codes are used.


CHAPTER 1
INTRODUCTION

1.1 Motivation

Wireless communications primarily study the problem of reliable information (e.g., voice, video, images, text, data, etc.) transmission through an atmospheric medium (the wireless channel) by using a radio wave as the information carrier. Wireless communication techniques make it possible for people to communicate freely. Since the birth of the wireless communication era in the 1970s, the demand for wireless transmission has been growing at a very rapid pace. With the developments of digital communication techniques, radio frequency circuit fabrication, and very-large-scale integrated circuit technologies, continual improvement of wireless communication techniques has largely been fulfilling these demands over the past few decades. However, the demands are still growing exponentially [1].

As is well known, the two underlying resources of a wireless channel are radio bandwidth and transmitter power. Unfortunately, these two resources are very limited. Traditional single-antenna communication techniques usually try to increase the capacity of a wireless channel by increasing the radio bandwidth and transmission power. However, with the rapid growth of wireless networks, the bandwidth in the usable spectrum has become highly saturated, while transmission power is limited due to physical equipment constraints, e.g., limited battery life. Moreover, the transmission power should be restricted below some limit in order to reduce the mutual interference among wireless communication devices using the same wireless channel. Thus, it becomes more and more difficult to fulfill the continuously and rapidly growing demand for wireless channel capacity by using single-antenna techniques.


Another challenge in wireless communication is the hostile nature of the wireless channel. One common problem in signal transmission through any channel is additive noise [2]. The additive noise is usually modeled as statistically independent Gaussian noise with a flat power spectral density. This noise is also called thermal noise. The primary source of thermal noise is the internal components, such as resistors and solid-state devices, used in the receiver. When the transmitted signal goes through the receiver, the data symbols will inevitably be corrupted by the thermal noise. Interference, as an external performance degradation factor, is another challenge in wireless communication systems. Signals from other transmitters using the same wireless channel are usually the significant sources of interference. Besides noise and interference, fading is also one of the main channel impediments in wireless communication. Due to the nature of radio signals and the propagation characteristics of the wireless channel, signals transmitted through a wireless channel can suffer from attenuation as well as amplitude, phase and multipath distortion [3]. In order to combat these severe channel impairments due to fading and noise without excessively increasing the transmission power, it is indispensable to adopt some auxiliary wireless communication techniques different from the traditional ones used in single-antenna systems. In this scenario, multi-antenna, or space, diversity techniques are particularly attractive because they can be readily combined with other forms of diversity and offer dramatic performance gains when other forms of diversity are unavailable [4, 5].

1.2 Multi-antenna Diversity Techniques

Multi-antenna diversity is widely considered to be the most promising avenue for significantly increasing the bandwidth efficiency of wireless data transmission systems. In multi-antenna diversity techniques, diversity is obtained by employing multiple antennas (also called an antenna array) at the transmitter and/or the receiver. The basic idea behind the multi-antenna diversity techniques is that, if the antennas


are placed sufficiently far apart, the channel fading between different antenna pairs becomes more or less independent. Hence independent signal paths are created between the transmitter and the receiver. Reliable communication can be guaranteed as long as at least one of the independent paths is strong.

If multiple antennas are employed at the receiver end but only one antenna is used at the transmitter, then the channel between the transmitter and receiver is called a single-input, multi-output (SIMO) channel. The space diversity obtained in a SIMO channel is called receive diversity. If multiple antennas are employed at the transmitter only, then the channel is called a multi-input, single-output (MISO) channel. Diversity in a MISO channel is called transmit diversity. If multiple antennas are employed at both the transmitter and the receiver, then the channel is called a multi-input, multi-output (MIMO) channel. In this case, both transmit and receive diversity are provided by the channel. In this research work, we only consider SIMO channels; i.e., only receive diversity is studied here.

For a SIMO system, there are several ways to obtain receive diversity. Usually, the independent fading paths are combined to obtain a resultant signal that is then passed through a standard demodulator and/or decoder. Most combining techniques are linear: the output of the combiner is just a weighted sum of the received signals at the different antenna array elements [6]. Fig. 1-1 shows a linear combiner for a SIMO system. In the figure, we suppose that the receive antenna array contains M antenna elements and a signal s(t) is transmitted through a flat fading wireless channel. These antenna elements are far enough apart that M independent fading channels between the transmitter and receiver are created. Let r_i(t) denote the received signal at the ith antenna; then it can be expressed as

    r_i(t) = a_i s(t) + n_i(t),    i = 1, ..., M,


Figure 1-1: Linear combiner for a SIMO system

where a_i is the complex fading gain of the ith fading channel and n_i(t) is the additive white Gaussian noise (AWGN) at the ith antenna. Then, under the assumption that the channel fading gains are perfectly known, the optimal choice of the ith combining weight is the conjugate of the channel fading gain a_i, for all i [2]. The resultant combiner output signal r(t) is given by

    r(t) = Σ_{i=1}^{M} a_i^* r_i(t).

This optimum combining technique is known as maximum ratio combining (MRC). In fact, MRC maximizes the signal-to-noise ratio (SNR) of the output signal, which increases linearly with the number of independent fading channels M [6]. The MRC combiner achieves the full receive diversity order of the channel and provides the optimal performance in comparison with other receive diversity techniques.
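As a concrete restatement of the combining rule above, the following minimal sketch (Python/NumPy; the function and variable names are hypothetical and not taken from this dissertation) applies the conjugate fading gains as weights on a toy BPSK example. It is only an illustration of r(t) = Σ a_i^* r_i(t) under the stated assumption of perfectly known gains.

    import numpy as np

    def mrc_combine(r, a):
        """Maximum ratio combining for one symbol interval.

        r: length-M array of received samples r_i (one per branch)
        a: length-M array of complex fading gains a_i (assumed perfectly known)
        Returns the combiner output sum_i conj(a_i) * r_i.
        """
        return np.sum(np.conj(a) * r)

    # toy example: BPSK symbol s = +1 over M = 4 independent Rayleigh branches
    rng = np.random.default_rng(0)
    M, s, N0 = 4, 1.0, 1.0
    a = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    n = np.sqrt(N0 / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    r = a * s + n
    print(mrc_combine(r, a).real)   # decision statistic; its SNR grows with M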


1.3 Distributed Array

It has been shown that the potential gain in channel capacity of multi-antenna systems over single-antenna systems is rather large under the assumption of independent fading and noise at different receiving antennas [5]. However, fading correlation does exist when the elements are not spaced sufficiently far apart in practice. This can significantly reduce the capacity of the multiple-antenna system [7]. It is well known that increasing antenna spacing can decorrelate the multiple channels created by the antenna array. Thus, in order to make the independence assumption valid for the multi-antenna system, the antenna elements in the array must be spaced far enough apart. Since conventional antenna arrays are composed of several physically connected antenna elements, this requirement implies a big size for the arrays.

The required antenna separation depends on the local scattering environment as well as on the carrier frequency. For a mobile transmitter which is near the ground with many scatterers around, the channel decorrelates over shorter spatial distances, and a typical antenna separation of half to one carrier wavelength is necessary. For base stations on high towers, a larger antenna separation of several to tens of wavelengths may be required [3]. The carrier wavelength of a radio signal at frequency f is given by c/f, where c = 3x10^8 m/s is the speed of light. For illustration, consider the concrete example of a uniform linear antenna array, where the antennas are evenly spaced on a straight line. Suppose the multi-antenna system works at a carrier frequency of 2 GHz; then the carrier wavelength is about 0.15 m. Thus, for a uniform linear array of 8 antennas in the presence of local scatterers, the length of the array will be larger than 3 feet! An antenna array of such a size usually is too large to be feasible in many practical scenarios. The physical size of the array will limit the applicability of spatial diversity techniques, especially for mobile applications.

To overcome the physical constraint of conventional multi-antenna arrays and take advantage of diversity techniques, we consider a network-based approach to


Figure 1-2: Distributed array

obtain spatial diversity without the use of physically connected antenna arrays. This approach makes use of the fact that communicating nodes in a local wireless network are inherently physically separated in space. When a remote source transmits a message through the wireless channel to a cluster of nodes, a SIMO channel is essentially created between the source and the cluster of receiving nodes. Usually, these receiving nodes are far enough apart that independent fading at different nodes can be guaranteed. Meanwhile, the nodes are in close proximity, so that simple low-power, high-rate, reliable signaling techniques are permitted for the communications within the cluster. Hence, these nodes can coordinate their reception processes to effectively form a distributed antenna array. In this way, spatial diversity can be obtained through collaboration and communication among the nodes in the cluster.

Fig. 1-2 illustrates the concept of the distributed antenna array. In contrast to a conventional multi-antenna array as in Fig. 1-1, where the received signals at all antenna elements are collected and processed in a centralized manner, each node in the distributed array is an integral receiver and possesses the capability of processing its received signal independently. From the viewpoint of the whole cluster, the reception process can be thought of as a distributed process performed at different nodes in the array. Spatial diversity is then achieved by exchanging information among the


nodes through the local wireless network during the distributed processing. It is the combination of the distributed processing and the local wireless network that allows us to overcome the physical constraint of conventional multi-antenna arrays.

It is worthwhile to point out that the kind of topology depicted in Fig. 1-2 is applicable to many practical wireless systems. For instance, consider a cellular system in which a mobile unit is within the range of multiple base stations. The base stations, which are linked together by optical or high-speed wired connections, receive independent copies of the transmitted signal from the mobile unit and jointly process the received signals to gain diversity from the independent channels. The same scenario applies to a wireless LAN system in which the base stations are replaced by access points, and the links joining the access points are usually wired Ethernet links. For a military communication example, consider a cluster of local sensors in a sensor network. The close proximity of the sensors allows the wireless links between the nodes to be very high speed, while requiring only low-power and low-complexity processing. A transmitter, either from another cluster in the network or external to the sensor network, sends a signal to this cluster. Each sensor receives a copy of the transmitted signal and processes the signal in a distributed manner using information from other sensors. The same scenario applies to inter-group communications between small groups of soldiers, each carrying a mobile communicator, or to a group of collaborating mobile users communicating with a base station in a cellular network [8, 9].

1.4 Collaborative Decoding

In distributed processing, the information exchange traffic load among the array nodes is an important issue for distributed arrays. It is undesirable to exchange an extensive amount of information in the reception process because the wireless network resource is limited. The conventional diversity techniques using linear combining described in Section 1.2 become expensive in terms of the information exchange


amount because all the received symbols at each node need to be forwarded through the network in order to achieve the full spatial diversity. In fact, it can be shown from the information-theoretic viewpoint that it is possible to achieve the full diversity advantage with a much smaller amount of information exchange than is used in common combining techniques such as MRC.

To explore efficient diversity techniques for distributed arrays, information must be exchanged selectively and that information must be used effectively at the receiving nodes. Usually, the received signal before the reception processing (such as detection, demodulation, decoding, etc.) suffers minimum information loss caused by the performance constraint or capability of the receiver and the inherent uncertainty in the communication system. However, the information or signal before processing usually suffers maximum corruption caused by fading and noise compared with the information after processing. Thus, we can directly learn much less about the transmitted message from the signal before processing than from that after processing. Although it may suffer a certain information loss due to the receiving procedure, the information after processing usually reflects the true message with high confidence. Thus it is possible to use the information after processing effectively for exploiting spatial diversity. Moreover, the information after processing may possess desired properties such that an effective information selection method can be adopted to reduce the information exchange amount.

Error correction coding provides the capability of detecting and correcting bit errors encountered in the transmission process. It is one of the most often used techniques in wireless communication systems. Maximum a posteriori (MAP) decoding is widely used in error correction coding techniques. A MAP decoder decodes message bits by finding the values that maximize their a posteriori probabilities and outputs the maximum a posteriori probability for each bit.


Figure 1-3: Iterative decoding

The output is often expressed in log-likelihood ratio (LLR) form as

    L(x̂) = log [ P(x̂ = +1 | y) / P(x̂ = -1 | y) ],

where x̂ is the decision on the information bit x, y is the channel observation, and P(x̂ = +1 | y) and P(x̂ = -1 | y) are the a posteriori probabilities for x̂ to be +1 and -1 given y, respectively. The LLR value does not only reflect the sign of a binary bit, but also indicates the reliability of the decision. It turns out that the output of MAP decoders can be the proper information to exchange for obtaining spatial diversity in distributed arrays.
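To make the LLR form concrete, here is a minimal sketch (assuming nothing beyond the definition above; the function names are hypothetical) that converts a bit's a posteriori probability into an LLR and back. The sign of the LLR gives the hard decision and its magnitude serves as the reliability measure used later when selecting bits to exchange.

    import numpy as np

    def llr_from_posterior(p_plus):
        """LLR of a binary decision given P(x_hat = +1 | y)."""
        return np.log(p_plus) - np.log(1.0 - p_plus)

    def posterior_from_llr(L):
        """Inverse mapping: P(x_hat = +1 | y) recovered from the LLR."""
        return 1.0 / (1.0 + np.exp(-L))

    L = llr_from_posterior(0.9)      # about +2.2: decision +1, moderately reliable
    print(np.sign(L), np.abs(L))     # hard decision and reliability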


Figure 1-4: Collaborative decoding

The capacity-approaching turbo codes [10] have attracted great attention and have been extensively studied since their introduction. Turbo codes have become a landmark in the field of error correction coding. The key idea of turbo codes is using iterative, or "turbo", decoding to exploit the multi-component code structure of the turbo encoder by associating a decoder with each of the component codes. In the decoding procedure, each decoder performs MAP decoding (or any soft-in, soft-out (SISO) decoding that approximates MAP decoding) for its corresponding code component. The decoders help each other by using the extrinsic information output generated in the decoding process of the other decoders as a priori information for their own decoding process. By repeating the procedure in an iterative fashion, turbo codes can achieve performance close to the Shannon capacity limit. Fig. 1-3 shows the iterative decoding procedure with two decoding components. The idea of iterative decoding has since been generalized to many transmission systems with multiple code components parallel or serially concatenated together, such as coded modulation, iterative detection and equalization systems. This iterative decoding approach is usually very powerful and exhibits near-capacity performance.

With the above considerations, we study a new approach to achieve diversity, called collaborative decoding, in distributed arrays when error correction codes are used in the transmission process. The basic idea of collaborative decoding is to extend the iterative decoding techniques to the distributed array scenario. Fig. 1-4 depicts how the idea of iterative decoding is extended to collaborative decoding. By viewing the receiving nodes in the array as a set of physically separated decoding components, and the information exchange process as the extrinsic information feedback process in Fig. 1-3, the typical iterative decoding procedure can be performed in a distributed array.


For MAP decoders, we notice that for the bits with high decoding reliability in previous iterations, the contribution of their a priori information to the average decoding performance is marginal. However, for the less reliable bits, the contribution of their a priori information can be significant. This fact makes it possible for collaborative decoding to achieve diversity by exchanging only a portion of the decoding information among the receiving nodes in a distributed array. It turns out that by carefully selecting the decoding information to exchange, collaborative decoding can lower the amount of information that must be exchanged in the array, while providing performance close to that of MRC. This advantage makes collaborative decoding an attractive diversity technique for distributed array systems.

1.5 Scope of This Work

In this research work, we study the new approach of collaborative decoding with distributed arrays to achieve spatial diversity in wireless communications. We first investigate the possibility of using collaborative decoding in a two-node distributed array to obtain receive diversity when different channel coding techniques are adopted for AWGN and flat Rayleigh fading channels. Then the approach is extended to coded modulation systems with high-order signal constellations, which provide higher spectral efficiencies and are desired for bandwidth-constrained wireless channels. By exchanging only a small amount of information among the distributed array (in contrast to conventional spatial diversity combining techniques), the collaborative decoding technique is shown to be able to achieve a significant spatial diversity gain and to perform close to the optimal MRC.

Taking into account the scalability of the distributed array, we extend collaborative decoding to the more general case of an arbitrary number of nodes. Based on the statistical characteristics of the output of maximum a posteriori decoders, we propose two efficient information exchange schemes for collaborative decoding: the least-reliable-bit and most-reliable-bit exchange schemes. The error performance of these two


schemes under different transmission environments with different parameter settings is investigated and compared with Monte Carlo simulations.

To further study the proposed approach, theoretical analysis of the collaborative decoding technique is carried out. For analytical tractability, we consider the cases in which nonrecursive convolutional codes are used in the collaborative decoding procedure. The analysis is based on the assumption that the extrinsic information generated in the collaborative decoding process for nonrecursive convolutional codes can be approximately described by a class of Gaussian and generalized asymmetric Laplace distributions for AWGN and independent Rayleigh fading channels, respectively. With this assumption, we reduce the collaborative decoding to a density evolution model with a single MAP decoder, and propose a systematic method to evaluate the error performance of collaborative decoding semi-analytically. The analysis results show that with proper choices of parameters, collaborative decoding can achieve full diversity and approach the theoretical performance bounds asymptotically.


CHAPTER 2
COLLABORATIVE DECODING IN A TWO-NODE DISTRIBUTED ARRAY

In this chapter, we investigate the possibility of collaborative decoding achieving spatial diversity in a distributed array. We first focus on the simple case of a two-node network. Consider a pair of nodes that are connected via a communication channel with relatively benign characteristics that permit simple low-power, high-rate, reliable signaling techniques to be employed for communications between these two nodes. Typically, these two nodes are in close proximity. A distant transmitter sends a packet of coded data bits to the two nodes. Each of the two nodes receives an independent copy of the transmitted signal.

For the distributed array, we employ iterative decoding to extract important information from the received signal at each node, and only pass this information between the two nodes. More precisely, each node decodes the signal that it receives and generates reliability estimates (soft outputs) for the transmitted data bits. The two nodes then exchange the soft outputs of a small portion of the bits that are least reliable. Upon receiving additional information about the least reliable bits from the other node, a node will restart the decoding process. This process of information exchange and iterative decoding then continues for a number of iterations. The objective is to obtain the maximum degree of diversity advantage from the signals received at the two nodes, while requiring a minimum amount of information exchange between them.

This chapter is primarily based on the work of Wong et al. [8, 9]. We will present the results of the simulations that we carried out to investigate the viability of the proposed distributed iterative decoding approach. In Section 2.1, we describe the system and channel model assumed in the simulations. In Sections 2.2 and 2.3


Figure 2-1: A cluster of two nodes forming a distributed antenna array.

we report decoding designs and simulation results employing a rectangular parity-check code and a convolutional code, respectively, to encode packets from the distant transmitter. In Section 2.4, we discuss the potentials of the proposed distributed iterative decoding approach in different application scenarios.

2.1 System Model

We consider a system with the topology shown in Fig. 2-1. A distant transmitter sends a packet of coded data bits to the two receiving nodes. For simplicity, we assume that the two nodes can communicate with each other reliably. We are only interested in the communication link from the distant transmitter to the two nodes. We assume that the channels from the transmitter to the two receiving nodes are independent. We further assume that the coded bits from the transmitter are modulated using binary phase-shift keying (BPSK). After matched filtering and proper normalization, the decision statistics for the ith coded bit obtained at the two receiving nodes are

    y_i^(1) = a_i^(1) x_i + n_i^(1),
    y_i^(2) = a_i^(2) x_i + n_i^(2),

where x_i is the BPSK symbol (+/-1) representing the ith bit, and n_i^(1) and n_i^(2) are independent zero-mean, circular-symmetric complex Gaussian random variables with per-component variance N0/2 representing the thermal noise components at the first and second receiving nodes, respectively.
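A minimal sketch of these per-bit decision statistics is given below (Python/NumPy; the function and variable names are hypothetical and this is not the simulator behind the reported results). The channel gain array is supplied by the caller according to whichever of the two channel models described next (AWGN or flat Rayleigh fading) is assumed.

    import numpy as np

    def node_observations(x, a, N0, rng=np.random.default_rng()):
        """Per-bit decision statistics y_i = a_i * x_i + n_i at one receiving node.

        x  : array of BPSK symbols (+1/-1)
        a  : array of channel gains for this node (all ones for the AWGN model,
             unit-variance complex Gaussian for the flat Rayleigh fading model)
        N0 : noise spectral density; the complex noise has variance N0/2 per component
        """
        n = np.sqrt(N0 / 2) * (rng.standard_normal(x.size)
                               + 1j * rng.standard_normal(x.size))
        return a * x + n

    # the two nodes observe the same packet through independent channels
    rng = np.random.default_rng(1)
    x = rng.choice([-1.0, 1.0], size=8)
    y1 = node_observations(x, np.ones(8), N0=0.5, rng=rng)   # node 1
    y2 = node_observations(x, np.ones(8), N0=0.5, rng=rng)   # node 2, independent noise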


We consider two different channel models. The first model is the additive white Gaussian noise (AWGN) model. For AWGN channels, both channel gains a_i^(1) and a_i^(2) are 1, i.e., the normalized received energy per coded bit is Ec = 1. The second model we consider is the flat Rayleigh fading model. For Rayleigh fading channels, a_i^(1) and a_i^(2), for all i, are modeled as independent zero-mean circular-symmetric unit-variance complex Gaussian random variables. This corresponds to the assumption of having a perfect channel interleaver, and the normalized average received energy per coded bit is Ec = 1. For both the AWGN and Rayleigh fading models, we assume that perfect phase estimation is achieved and hence coherent demodulation is performed at each node. In the case of the Rayleigh fading model, we assume that perfect channel state information is available at the nodes.

2.2 Collaborative Decoding for Rectangular Parity-Check Code

In this section, we consider the design of the distributed iterative decoder when a rectangular parity-check code (RPCC) is employed to encode the data bits in the transmitted packet. The RPCC [11] is a punctured version of the product of two single parity-check codes. An example of a 3x3 RPCC is shown in Fig. 2-2. The code consists of single parity-check codes that operate on the rows and columns of a square matrix that contains the information bits. RPCCs with large block sizes are very high-rate systematic codes that can be decoded by a very simple iterative algorithm [11, 12, 13]. Note that for a packet of N^2 bits, the number of parity bits is 2N (N bits each in the horizontal and vertical directions). Thus, the rate of the RPCC is N^2/(N^2 + 2N) = N/(N + 2). Clearly, as N becomes large, the rate of the RPCC becomes very high. Maximum a posteriori (MAP) decoding of the RPCC can be approximately performed by an iterative decoding procedure that treats the RPCC as a parallel concatenation of the parity-check codes defined along the rows and columns of the data bit matrix. For each component code, the


Figure 2-2: Example of rectangular parity-check code for a packet of nine information bits.

"soft-in/soft-out" (SISO) decoding module in Hagenauer et al. [11] and Wong et al. [13] amounts to the following simple procedure:

1. Find the two smallest magnitudes among all soft inputs on a row/column.

2. Take hard decisions on the row/column and check the parity.

3. For each data bit (except the one with the minimum-magnitude soft input on the row/column), the extrinsic information is the minimum magnitude if the parity matches with the hard decision of that bit; otherwise the extrinsic information is the negative of the minimum magnitude. For the data bit with the minimum-magnitude soft input, the second smallest magnitude is employed.

4. Pass the extrinsic information as a priori information to the component code in the other direction.

The soft inputs for the first iteration are simply scaled channel observations [13] for both the AWGN and Rayleigh fading models. At the end, the extrinsic information provided by the two component codes is added to the initial channel observation to give the soft output, based on which the bit decision is made.
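The row/column operation above can be stated compactly in code. The sketch below (Python/NumPy, a hypothetical helper written from one consistent reading of the four-step description rather than from the authors' software) produces the extrinsic LLR for every position of one single-parity-check row or column: the extrinsic sign reinforces a bit's own hard decision when the parity check is satisfied and opposes it otherwise, with the minimum (or, for the least reliable bit, second-minimum) magnitude.

    import numpy as np

    def spc_siso_update(soft_in):
        """Soft-in/soft-out update for one single-parity-check row/column.

        soft_in: 1-D array of input LLRs for the bits on this row/column.
        Returns the extrinsic LLR for every position, following the
        min-magnitude rule described in steps 1-3 above.
        """
        soft_in = np.asarray(soft_in, dtype=float)
        hard = np.where(soft_in >= 0, 1, -1)      # hard decisions in {+1, -1}
        parity_ok = hard.prod() > 0               # even parity <=> product is +1
        mags = np.abs(soft_in)
        order = np.argsort(mags)                  # positions sorted by reliability
        min1, min2 = order[0], order[1]           # two least reliable positions

        extrinsic = np.empty_like(soft_in)
        for k in range(soft_in.size):
            # use the smallest magnitude, except for the least reliable bit itself,
            # which uses the second smallest
            m = mags[min2] if k == min1 else mags[min1]
            # sign agrees with the bit's own hard decision iff the parity checks
            extrinsic[k] = hard[k] * m if parity_ok else -hard[k] * m
        return extrinsic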


It is clear that this decoding process is very simple. In [12], RPCCs and their extensions to higher dimensions are shown to be able to achieve performance near the capacity limit for transmission over AWGN and bursty channels at very high code rates. In [13], it was shown that RPCCs can be used to obtain a significant diversity gain on fading channels with virtually no penalty in information rate. For details of the RPCC encoding and decoding algorithms, see Appendix A.

As mentioned before, the key to performing distributed decoding that requires only a small amount of information to be passed between the two nodes is to identify the set of bits that are likely to be in error. Since the MAP decoder outputs the a posteriori log-likelihood ratios of the data bits, the soft outputs of the iterative decoder above are good reliability measures for the data bits. For both the AWGN and Rayleigh fading models, a data bit with a small soft-output magnitude is more likely to be in error. To illustrate this, we consider a packet that has errors and plot the conditional probability of the event that an error (given that it occurs) occurs in the bits whose soft-output magnitudes rank in the lowest percentiles. We plot this probability in Fig. 2-3 for the 32x32 RPCC. The conditional probability is estimated from Monte Carlo simulations after 5 decoding iterations. We can see from Fig. 2-3 that at a high enough Eb/N0 (essentially at the convergence abscissa), most of the errors will occur in the bits whose soft-output magnitudes rank in the lowest, say, 5%.

Based on this observation, we can employ the following simple strategy to gain diversity advantage while requiring a small amount of information exchange between the receiving nodes. At first, each node decodes the data bits from the signal that it receives. After the decoding, each node ranks the data bits according to their soft-output magnitudes. Then each node requests additional information from the other node for those bits whose soft-output magnitudes rank in the lowest x%. Upon receiving a request, a node sends the soft outputs of the requested bits generated in its own decoding process.


Figure 2-3: Conditional probability of the event that an error occurs in the bits whose soft output magnitudes rank in the lowest percentiles.

Each node will use the soft outputs obtained from the other node as a priori information to continue the iterative decoding process. The whole process then repeats with additional exchange of soft outputs between the two nodes.

To illustrate the advantage of this approach, consider a sample system in which a node requests additional information for 5% of the bits with the smallest soft-output magnitudes at each iteration. A total of 3 iterations of information exchange occur between the nodes; i.e., altogether the overall traffic between the nodes is 15% (neglecting the overhead involved in the requesting protocol) of what is required by MRC. In the case of MRC, we assume that each node passes all its channel observations to the other node and maximally combines the channel observations before decoding.
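A minimal sketch of the bit-selection step in this strategy follows (hypothetical function names; it assumes the decoder soft outputs are available as an array). It ranks the bits by soft-output magnitude, returns the indices of the lowest x% that would be requested from the other node, and folds the received soft outputs into the local a priori information. Whether the received values are added to or replace any existing a priori term is a design detail not pinned down by the text; this sketch simply accumulates them.

    import numpy as np

    def least_reliable_bits(soft_out, x_percent):
        """Indices of the x% of bits with the smallest soft-output magnitude."""
        k = int(np.ceil(x_percent / 100.0 * soft_out.size))
        return np.argsort(np.abs(soft_out))[:k]

    def apply_exchange(a_priori, requested_idx, remote_soft_out):
        """Use soft outputs received from the other node as a priori information
        for the requested bits before restarting the iterative decoder."""
        updated = np.asarray(a_priori, dtype=float).copy()
        updated[requested_idx] += remote_soft_out
        return updated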


Figure 2-4: Performance of collaborative decoding for the 32x32 RPCC with information exchange between two receiving nodes over an AWGN channel.

Fig. 2-4 shows the bit error rate (BER) performance(1) of the 32x32 RPCC over an AWGN channel. We see that the 32x32 RPCC, which has a code rate of 0.94, provides a coding gain of about 3 dB at 10^-5 BER(2). With MRC, an additional 3 dB "antenna gain" is obtained as expected. The most interesting observation from Fig. 2-4 is that we can obtain a 2.4 dB gain (out of the maximum possible 3 dB gain) using the soft-output exchange and iterative decoding algorithm described before, exchanging only a total of 15% of all soft outputs. For the case of Rayleigh fading, the BER curves are shown in Fig. 2-5. The diversity gain provided by MRC is about 8 dB at 10^-5 BER. More interestingly, in this fading case, we can get all of the 8 dB diversity gain that MRC can provide at 10^-5 BER by using the soft-output exchange and iterative decoding algorithm described before, exchanging only a total of 15% of all soft outputs.

(1) BERs are plotted against Eb/N0 per receiving node in Figs. 2-4, 2-5, 2-6, and 2-7.

(2) The BERs presented here are averages of the BERs at the 2 nodes. There is a slight difference between the BERs at the two nodes obtained from simulation. However, the difference is always small, as expected because of the symmetry between the nodes. On the other hand, this observation indicates that neither of the nodes is disadvantaged against the other in the iterative decoding process.


Figure 2-5: Performance of collaborative decoding for the 32x32 RPCC with information exchange between two receiving nodes over a Rayleigh fading channel.

2.3 Collaborative Decoding for Convolutional Code

In this section, we consider the design of collaborative decoding when a standard convolutional code (CC) is employed to encode the data bits in the transmitted packet. We employ a rate-1/2, non-systematic, non-recursive, 4-state CC with generator matrix [1 + D^2, 1 + D + D^2]. We will refer to this CC as CC(5,7), based on the octal representation of the generator polynomials. It is well known [14] that this CC has the largest free distance of 5 among all rate-1/2, 4-state CCs. The MAP decoder for this CC is the SISO decoder [15] based on the well-known BCJR algorithm [16]. Here we employ the less complex max-log-MAP decoder [15] as an approximation to the MAP decoder.

We employ the same collaborative decoding strategy described in the previous section. The only difference in this case is that the channel observations do not


directly correspond to the soft inputs for the data bits, due to the fact that the CC is non-systematic. The soft output of a bit is generated by summing the extrinsic information generated at the current decoding iteration and the cumulative a priori information from previous iterations. After the current decoding iteration, each node ranks the bits according to their soft-output magnitudes. Then each node requests additional information from the other node for those bits whose soft-output magnitudes rank in the lowest x%. Upon receiving a request, a node sends the soft outputs of the requested bits generated in its own decoding process. Each node will use the soft outputs obtained from the other node as a priori information to continue the iterative decoding process. The whole process then repeats with additional exchange of soft outputs between the two nodes.

Similar to the previous case, we consider a sample system in which a node requests additional information for 5% of the data bits with the smallest soft-output magnitudes at each iteration. A total of 3 iterations of information exchange occur between the nodes; i.e., altogether the overall traffic between the nodes is 7.5% (neglecting the overhead involved in the requesting protocol) of what is required by MRC, in which each node passes all its channel observations to the other node and maximally combines the channel observations before decoding. The packet size is 1024 data bits (2048 coded bits).

Fig. 2-6 shows the BER performance of the CC(5,7) over an AWGN channel. Similar to the case of the 32x32 RPCC, we can obtain a 2.4 dB gain at 10^-5 BER, out of the maximum possible 3 dB antenna gain, using the soft-output exchange and iterative decoding algorithm described before, exchanging only a total of 7.5% of the information required by MRC.

For the case of Rayleigh fading, the BER curves are shown in Fig. 2-7. The diversity gain provided by MRC is about 6 dB at 10^-5 BER. As seen from Fig. 2-7, we can get 5 dB out of the 6 dB diversity gain that MRC can provide at 10^-5 BER by using the soft-output exchange and iterative decoding algorithm described before, exchanging only a total of 7.5% of the information required by MRC.


Figure 2-6: Performance of collaborative decoding for the CC(5,7) with information exchange between two receiving nodes over an AWGN channel.

Figure 2-7: Performance of collaborative decoding for the CC(5,7) with information exchange between two receiving nodes over a Rayleigh fading channel.


2.4 Summary

The results in Sections 2.2 and 2.3 clearly indicate the possibility of getting full, or close-to-full, diversity advantage by proper collaborative decoding of the signals received at different nodes with a small amount of information exchange between the nodes. The crucial points appear to be identifying the bits that need additional information from other nodes and employing proper iterative decoding techniques to make the best use of the additional information. We can obtain some very promising results even with the simple, ad hoc designs for the RPCC and CC presented in Sections 2.2 and 2.3. This leads us to believe that collaborative decoding can be a viable technique to improve the performance of wireless communication systems that have topologies similar to the one described in Fig. 2-1.


CHAPTER 3
COLLABORATIVE DECODING FOR CODED MODULATION

The exponential growth in demand for high bit-rate data transmission in wireless systems continuously propels the research of using antenna arrays to increase the capacity of wireless communication systems. Meanwhile, the use of error correction coding techniques also helps greatly in exploiting the capacity of wireless communication channels. As is well known, powerful channel coding techniques such as turbo codes and low-density parity-check codes can attain rates approaching the Shannon limit, primarily for AWGN channels with binary modulation. However, it is clear that these error-correction coding schemes reduce transmit power at the expense of increased bandwidth or reduced data rate. Coded modulation, by combining binary error-correcting codes and higher-level modulation together, provides an effective method to achieve coding gain without using additional bandwidth; thus high bit-rate communication can be achieved. Hence, the spectrally-efficient CM technique is especially suitable for bandwidth-constrained channels.

The first spectrally-efficient coding breakthrough came when Ungerboeck [17] introduced a coded-modulation technique to jointly optimize both channel coding and modulation. Ungerboeck's trellis-coded modulation (TCM), which uses multi-level/phase signal modulation and simple convolutional coding with mapping by set partitioning (SP), can provide considerable coding gain for AWGN channels. This scheme maximizes the minimum free Euclidean distance of a code, which is the dominating factor determining the code performance for AWGN channels. However, this scheme usually gives a low diversity order, and leads to a performance degradation over


a Rayleigh fading channel. One general solution to this drawback is to apply a symbol interleaver to the TCM. It was later recognized by Zehavi [18] that the diversity order can be increased to the minimum number of distinct bits (instead of channel symbols) by using bit-wise interleaving, yielding a better coding gain over a Rayleigh channel. Following Zehavi's idea, Caire et al. [19] proposed bit-interleaved coded modulation (BICM), which increases the diversity order further, to the minimum Hamming distance of the code, and thus leads to a performance improvement over fading channels. But the random modulation caused by bit interleaving decreases the minimum free Euclidean distance of the codes. So BICM was thought not to be suitable for AWGN channels [19].

However, BICM, developed primarily for fading channels, later turned out to be able to give very good performance for AWGN channels as well. Li and Ritcey [20, 21] showed that by using a simple iterative decoding (ID) with BICM (BICM-ID), the minimum free Euclidean distance degradation, and hence the performance degradation, can be overcome. With soft-decision feedback, BICM-ID significantly outperforms TCM, and its performance is even comparable with turbo-TCM for AWGN channels. The authors [20] also concluded that at high SNR, SP mapping clearly outperforms Gray mapping for BICM-ID using soft feedback. Besides, an advantage of BICM is its flexibility in design. In BICM, the encoder is serially connected to the modulation by a single bit-by-bit interleaver. This structure, treating coding and modulation separately, makes it very convenient to employ codes of different structures and different code rates in the scheme. By using some powerful codes, such as long parallel or serially concatenated turbo codes, together with an iterative decoder, it is possible for BICM to obtain good performance close to capacity over Gaussian channels [22].

In Chapter 2, we studied the collaborative decoding technique in a two-node distributed array with error correction coding. When a BPSK signal is used in the transmission, the collaborative decoding technique described in Chapter 2 can obtain


a diversity gain close to that provided by MRC. In this chapter, we consider employing coded modulation (CM) in the distributed array system to explore the possibility of obtaining spatial diversity with higher spectral efficiency for bandwidth-constrained wireless channels. Similar to Chapter 2, we still consider the simple case of a two-node distributed array. But we will only consider the cases in which rectangular parity-check codes are used. This chapter is mainly based on the work of [23].

The remainder of this chapter is organized as follows. In Section 3.1, we present the two-node distributed array system and channel model. In Section 3.2, the BICM iterative demodulation for RPCCs and the design of distributed decoding are described in detail. Following that, Monte Carlo simulation results for different signal constellations are shown in Section 3.4. Finally, a summary is given in Section 3.5.

3.1 System Model

As in Chapter 2, we consider a distributed array system with two identical receiving nodes. A distant transmitter sends a block of modulated signals to the two receiver nodes, as shown in Fig. 2-1. The two receiver nodes are physically separated far enough apart that fading at each node is i.i.d. Each individual node receives and decodes its received signal independently. For simplicity, we assume that the two nodes can communicate with each other reliably. We are only interested in the communication link from the distant transmitter to the two nodes.

The transmitter adopts a typical BICM approach [19], as shown in Fig. 3-1. A block of data bits u to be transmitted is encoded with an RPCC encoder with code rate Rc. Then the coded bit stream c is fed into a bit-wise random interleaver π, generating the bit stream v = π(c). After that, the bit stream v is modulated onto a signal sequence x over a 2-dimensional signal set χ of size |χ| = M = 2^m by an M-ary modulator with a one-to-one binary map μ: {0,1}^m -> χ. This signal sequence is then sent through the channel. The overall spectral efficiency of this system is mRc bits/symbol.


Figure 3-1: System model of BICM-ID with RPCC

Here we use a memoryless fading channel model that includes the AWGN channel as a special case. In this model, the received signals y_1 and y_2 at the two antenna nodes corresponding to the transmitted signal x ∈ χ can be expressed as

    y_1 = g_1 x + n_1,
    y_2 = g_2 x + n_2,

where: (i) g_1 and g_2 are channel fading gains. For AWGN channels, g_1 = g_2 = 1. For Rayleigh fading channels, g_1 and g_2 are independent circular-symmetric complex Gaussian random variables with E[g_i] = 0 and E[|g_i|^2] = 1 for i = 1, 2; (ii) n_1 and n_2 are independent zero-mean, circular-symmetric complex additive Gaussian noise samples with covariance E[|n_i|^2] = 2σ^2 for i = 1, 2. We normalize the signal energy E[|x|^2] = 1. Thus, the average signal-to-noise ratio (SNR) is 1/(2σ^2). In this channel model, we assume that perfect channel state information (CSI) (g_1, g_2) is available at the receiver nodes and hence coherent demodulation is performed at each node. With this model, the pdf p(y_i|x), for i = 1, 2, with perfect CSI is given by

    p(y_i|x) = (1/(2πσ^2)) exp( -|y_i - g_i x|^2 / (2σ^2) ).    (3-1)
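As a small illustration of the channel likelihood in (3-1), the sketch below (Python/NumPy; the names, the 8PSK signal set, and its natural binary labeling are only assumptions for illustration, not necessarily the constellation or mapping used in the reported results) builds a unit-energy 8PSK signal set with bit labels in {+1, -1} and evaluates log p(y|z) for every constellation point given the fading gain g and noise variance σ^2.

    import numpy as np

    # unit-energy 8PSK constellation; the label of point k is taken here as the
    # natural 3-bit binary pattern of k, with bit b mapped to (1 - 2b) in {+1, -1}
    M, m = 8, 3
    constellation = np.exp(2j * np.pi * np.arange(M) / M)
    labels = np.array([[1 - 2 * ((k >> (m - 1 - i)) & 1) for i in range(m)]
                       for k in range(M)])

    def log_likelihoods(y, g, sigma2):
        """log p(y|z) from (3-1) for all z, dropping the common -log(2*pi*sigma2) term."""
        return -np.abs(y - g * constellation) ** 2 / (2 * sigma2)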


At each receiver node, we treat the modulation and the code as two components of a concatenated coding system. By employing a maximum a posteriori (MAP) demodulator, we feed the extrinsic information from the RPCC decoder back to the demodulator as a priori information to carry out the demodulation and decoding in an iterative manner. After some iterations, we exchange information for a portion of the symbols between the two nodes and restart the demodulation and decoding processes.

3.2 Iterative Demodulation and Decoding for BICM

3.2.1 Iterative Demodulation and Decoding Algorithm

One important component in our bit-interleaved coded modulation system is the rectangular parity-check code. Another important component in the BICM-ID system is the iterative demodulation module. Based on the idea that performing demodulation and decoding in an iterative manner is a key to improving the performance of BICM [20, 21], we employ the receiver model illustrated in Fig. 3-1. Since the encoding and decoding for rectangular parity-check codes are addressed in Section 2.2 and Appendix A, we focus here on the demodulation component.

To simplify the iterative decoding process, we first modify the demodulator to work in the log-likelihood ratio (LLR) domain. Suppose that each m-bit vector v = (v_1, v_2, ..., v_m) from the interleaver is mapped into one signal x out of the 2^m signals in the set χ by the mapping rule, i.e., x = μ(v) ∈ χ, at the modulator, and that the received signal corresponding to x is y. Let ℓ_i(x) denote the ith (i = 1, 2, ..., m) bit of the label of x. For convenience, we assume that ℓ_i(x) = b is in GF(2) with the elements {+1, -1}. In our soft demodulator, we will consider the MAP rather than the maximum-likelihood (ML) bit metric. It is easy to see that the MAP bit metric


of v_i = b ∈ {+1, -1} is given by

    λ(v_i = b; y) = log P(v_i = b; y) = log Σ_{z∈χ} p(y|z) P(z|v_i = b) P(v_i = b),    (3-2)

where p(y|z) is given in (3-1) according to our channel model. We assume a perfect bit-interleaver such that v_1, v_2, ..., v_m are independent of each other. With this assumption, we have

    P(z) = P(v_1, v_2, ..., v_m) = Π_{j=1}^{m} P(v_j = ℓ_j(z)).    (3-3)

Hence, the MAP bit metric can be simplified to

    λ(v_i = b; y) ≈ max_{z∈χ_b^i} { log p(y|z) + Σ_{j≠i} log P(v_j = ℓ_j(z)) } + log P(v_i = b) + C,    (3-4)

where χ_b^i denotes the subset of all signals z ∈ χ with ℓ_i(z) = b, and C is a constant. Above, the approximation log Σ_i a_i ≈ max_i log a_i is used. For convenience we choose the constant as

    C = -(1/2) Σ_{j=1}^{m} [ log P(v_j = +1) + log P(v_j = -1) ].    (3-5)

Then the metric becomes

    λ(v_i = b; y) = max_{z∈χ_b^i} { log p(y|z) + (1/2) Σ_{j=1}^{m} ℓ_j(z) L(v_j) },    (3-6)


where L(v_j) = log( P(v_j = +1) / P(v_j = -1) ) is the a priori LLR of bit v_j. Thus the soft value of bit v_i in LLR form is computed as

    L(v_i|y) = λ(v_i = +1; y) - λ(v_i = -1; y)
             = L(v_i) + max_{z∈χ_{+1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ_j(z) L(v_j) }
                      - max_{z∈χ_{-1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ_j(z) L(v_j) }.    (3-7)

Subtracting the a priori LLR of v_i, L(v_i), from (3-7), we obtain the extrinsic information of v_i:

    L_e(v_i) = max_{z∈χ_{+1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ_j(z) L(v_j) }
             - max_{z∈χ_{-1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ_j(z) L(v_j) }.    (3-8)

We treat this extrinsic information as the output of the soft demodulator. From (3-8), we can see that in order to obtain the extrinsic LLR of a bit of a signal, we need to use the a priori LLRs of the other m - 1 bits and the channel observation of the signal as input.

With the modification above, the demodulation and decoding procedure can conveniently be performed in an iterative way. In Fig. 3-1 we use L^(n)(·) to denote the LLRs at the nth iteration. First, we initialize all the a priori LLRs L^(n)(v) and L^(n)(u) to zeros for n = 0. At the nth iteration, when the channel observation y of the transmitted signal sequence is received, we demodulate it using (3-8) to produce L_e^(n)(v). After the deinterleaver π^(-1), L_e^(n)(c) = π^(-1)(L_e^(n)(v)) is fed into the RPCC decoder for decoding. Since the RPCC decoding is an SISO iterative algorithm, we use the extrinsic information L_e^(n-1)(u), produced in the (n-1)th iteration, as the a priori information L^(n)(u) of the decoder in the nth iteration. The extrinsic information L_e^(n)(u) generated by the RPCC decoder is then passed through the interleaver and fed back as the a priori information L^(n+1)(v) for the soft demodulator again. After a number of iterations, the estimate of the data bits û is obtained from the hard decision on L^(n)(û).
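A compact sketch of the extrinsic computation in (3-8) follows (Python/NumPy; the helper names are hypothetical and it reuses the `constellation`, `labels`, and `log_likelihoods` assumptions from the sketch after (3-1)). For each bit position it takes the difference of the two constrained maximizations over the label subsets χ_{+1}^i and χ_{-1}^i under the max-log approximation.

    import numpy as np

    def demod_extrinsic(y, g, sigma2, L_apriori):
        """Extrinsic LLRs L_e(v_i), i = 1..m, for one received symbol y, per (3-8).

        L_apriori: length-m array of a priori LLRs L(v_j) fed back from the decoder.
        `labels` is the (M, m) array of {+1, -1} bit labels of the signal set and
        `log_likelihoods` returns log p(y|z) for all constellation points z.
        """
        logp = log_likelihoods(y, g, sigma2)
        m = labels.shape[1]
        Le = np.empty(m)
        for i in range(m):
            others = np.delete(np.arange(m), i)
            # metric(z) = log p(y|z) + 0.5 * sum_{j != i} l_j(z) L(v_j)
            metric = logp + 0.5 * labels[:, others] @ L_apriori[others]
            Le[i] = metric[labels[:, i] == +1].max() - metric[labels[:, i] == -1].max()
        return Le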


3.2.2 Effect of Mapping in BICM-ID

The mapping has a significant effect on the performance of BICM-ID. For BICM, Gray code mapping outperforms set partitioning (SP) mapping [19]. However, when associated with the iterative demodulation, SP mapping outperforms Gray mapping at high SNR [20, 21]. This can be seen from (3-8): due to the property of Gray mapping that the label of a symbol differs in only one bit from its nearest neighbors, the effect of the a priori LLRs can be weakened significantly. However, this is not the case for SP mapping. Thus the MAP demodulator can make more effective use of the a priori information for SP mapping than for Gray mapping.

To illustrate the effect of constellation mapping, consider that the signal x = μ(v_1, v_2, ..., v_m) ∈ χ is transmitted and the channel observation is y. The MAP decisions for bit v_i made by the demodulator at the nth iteration are

    ẑ_i^(n) = argmax_{z∈χ_{+1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ_j(z) L^(n)(v_j) },
    ž_i^(n) = argmax_{z∈χ_{-1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ_j(z) L^(n)(v_j) },    (3-9)

where argmax_{z∈χ_b^i}{·} means finding the signal in the constellation subset χ_b^i that maximizes the expression in the braces. From (3-9), we have ℓ_i(ẑ_i^(n)) ≠ ℓ_i(ž_i^(n)). With this notation, the extrinsic LLR L_e^(n)(v_i) in (3-8) can be written as

    L_e^(n)(v_i) = log [ p(y|ẑ_i^(n)) / p(y|ž_i^(n)) ]
                 + (1/2) Σ_{j≠i} ( ℓ_j(ẑ_i^(n)) - ℓ_j(ž_i^(n)) ) L^(n)(v_j).    (3-10)


For n = 0, there is no a priori information available, i.e., L(v) = 0. Thus, the demodulator gives the ML decision result at the 0th iteration,

    ẑ_i^(0) = argmax_{z∈χ_{+1}^i} log p(y|z),
    ž_i^(0) = argmax_{z∈χ_{-1}^i} log p(y|z),    (3-11)

and

    L_e(v_i) = max_{z∈χ_{+1}^i} log p(y|z) - max_{z∈χ_{-1}^i} log p(y|z).    (3-12)

Usually, the pairwise error probability P(x -> x̂) (i.e., the probability that the demodulator chooses x̂ when x is transmitted) in (3-12) is dominated by the minimum free Euclidean distance of the constellation. Let us only consider the case in which (ẑ_i^(0), ž_i^(0)) satisfying (3-11) is a signal pair having the minimum Euclidean distance in the constellation. Otherwise, the large Euclidean distance between ẑ_i^(0) and ž_i^(0) will make the pairwise error probability so small that it can be neglected. We assume that the demodulator makes a single error at bit v_i for the symbol x at the 0th iteration. If Gray mapping is used in this case, then with the property of a Gray code that a label differs in only one bit from its nearest neighbors, we have

    ℓ_j(ẑ_i^(0)) = ℓ_j(ž_i^(0))  for all j ≠ i.    (3-13)

Suppose that in the following iteration the a priori LLRs for the other bits are reliable, i.e.,

    ℓ_j(ẑ_i^(0)) L(v_j) = ℓ_j(ž_i^(0)) L(v_j) >> 0  for all j ≠ i.    (3-14)

For any constellation point z other than ẑ_i^(0) or ž_i^(0), there exists at least one j ≠ i such that ℓ_j(z) L(v_j) << 0. So the demodulator will make the choice (ẑ_i^(n), ž_i^(n)) = (ẑ_i^(0), ž_i^(0)) in (3-9). Thus, using (3-10) and (3-13), the demodulator still gives the wrong ML decision result on the bit v_i:

    L_e(v_i) = max_{z∈χ_{+1}^i} { log p(y|z) } - max_{z∈χ_{-1}^i} { log p(y|z) }.    (3-15)

With the above argument, the error cannot be corrected no matter how many times the demodulator iterates in this case. Thus, by using Gray mapping, the iterative demodulation cannot improve the performance of BICM. However, for an SP mapping, Eq. (3-13) does not hold. Consequently, at high SNR, with large extrinsic LLRs from the previous iteration, the second term on the right-hand side of (3-10) can help correct the pairwise error. This is the reason why SP mapping outperforms Gray code mapping for BICM-ID at high SNR. The performance comparison between Gray and SP mappings in Section 3.4 will verify this statement.

3.3 Collaborative Decoding for BICM-ID with Rectangular Parity-Check Code

The BICM-ID scheme presented above is readily applicable to a distributed array. As we pointed out in Chapter 2, a decoded data bit with a small soft-output magnitude from the RPCC decoder is more likely to be in error. However, if the bit-based strategy in [8] were used here to gain diversity from the other receiving node, we would lose the advantage over MRC in terms of saving information exchange traffic as the modulation order M increases. Hence, we develop a symbol-based strategy for BICM-ID to reduce the information exchange traffic. First, we define the symbol reliability measure at the output of the decoder as

$L(\hat x) = \log\frac{P(\hat x)}{1-P(\hat x)} = \log\frac{P(\hat x)}{\sum_{z\neq\hat x}P(z)},$   (3-16)

where $\hat x=(\hat v_1,\ldots,\hat v_m)\in\chi$ is the estimate of the transmitted signal $x$. For convenience, we define a symbol metric for each constellation point $z\in\chi$ as

$\eta(z) = \tfrac{1}{2}\sum_{j=1}^{m}\ell_j(z)L(\hat v_j),$   (3-17)

where $L(\hat v_j)$ is the soft output for the coded bit $v_j$. This symbol metric reflects the probability $P(x=z)$ given the LLRs $L(\hat v_j)$ for $j=1,2,\ldots,m$.

In fact, $\hat x$ should be the constellation point that has the largest reliability, i.e., $\hat x=\arg\max_{z\in\chi}\eta(z)$. Similar to (3-3)–(3-5), (3-16) can be simplified to

$L(\hat x) \approx \eta(\hat x) - \max_{z\neq\hat x,\,z\in\chi}\eta(z) = \min_{j=1,\ldots,m}|L(\hat v_j)|.$   (3-18)

Since the LLR magnitude of a bit can be used as the measure of its reliability, (3-18) indicates that the reliability of a decoded symbol is determined by the soft value of its least reliable bit, which is basically in agreement with the bit-based idea in [8]. With this definition, the collaborative decoding procedure works as follows. After every I (I ≥ 1) iterations of demodulation and decoding, each node computes the symbol reliabilities $L(\hat x)$ and ranks the symbols according to their reliability. Then each node requests additional information from the other node for the symbols $x$ whose $L(\hat x)$ ranks in the lowest a%. We denote the additional information for $x$ as $L_a(x)$. Suppose that the estimate corresponding to symbol $x$ at the other node is $\tilde x=(\tilde v_1,\ldots,\tilde v_m)$, which may be different from $\hat x$ because the two channels are assumed independent. Upon receiving the request, a node sends: (i) the reliability of each requested symbol generated in its own decoding process as the additional information, i.e., $L_a(x)=L(\tilde x)$; and (ii) the hard decisions of $\tilde x$, $\ell_j(\tilde x)$ for $j=1,2,\ldots,m$, which are also the hard decisions on $\tilde v_1,\tilde v_2,\ldots,\tilde v_m$. Herein, we adopt the following strategy: a node does not request additional information for a symbol if a request has been made for it in any previous exchange. In this case, the node requests information for the next symbol in the ranking order, to make sure that requests are made for a total of N·a% symbols in the current exchange, where N is the symbol block size. The advantage of this strategy is that the additional information can cover more symbols over a number of exchanges. After the exchange, as shown in Fig. 3-1, each node uses $L_a(x)$ and the hard decisions $\ell_j(\tilde x)$, $j=1,2,\ldots,m$, obtained from the other node to reconstruct an additional symbol metric $\eta_a(z)$, similar to (3-17), for each possible constellation point $z\in\chi$.

Since $\ell_j(\tilde x)$ is the hard decision on bit $\tilde v_j$, we have

$L(\tilde v_j) = \ell_j(\tilde x)\,|L(\tilde v_j)|.$   (3-19)

From (3-18) we can see that $|L(\tilde v_j)|\geq L_a(x)$. This means each bit in $\tilde x$ has at least a reliability of $L_a(x)$. Now we replace $|L(\tilde v_j)|$ with $L_a(x)$ for $j=1,2,\ldots,m$ in (3-19), which is equivalent to setting the reliability of all its bits equal to the reliability of the symbol. Thus, we can construct the additional symbol metric as

$\eta_a(z) = \tfrac{1}{2}\sum_{j=1}^{m}\ell_j(z)\,\ell_j(\tilde x)\,L_a(x).$   (3-20)

This additional symbol metric is then used as a priori information for demodulation, and (3-6) becomes

$\Lambda(v_i=b;y) = \max_{z\in\chi_i^{b}}\Big\{\log p(y|z)+\tfrac{1}{2}\sum_{j=1}^{m}\ell_j(z)L(v_j)+\alpha\,\eta_a(z)\Big\},$   (3-21)

where $\alpha<1$ is a scaling factor used to reduce the effect of error propagation; usually $\alpha$ can be set to 0.6–0.7. In the following I iterations, the whole process then repeats, with additional exchanges of symbol reliabilities and hard decisions between the two nodes. Note that in this strategy we only need to exchange one real number $L(\tilde x)$ and m bits $\ell_j(\tilde x)$, $j=1,2,\ldots,m$, for each symbol. For MRC, however, one needs to exchange a complex number y (the channel observation) and a real number |g| (the magnitude of the fading gain; for the AWGN channel there is no need to exchange it, since |g| = 1) for each symbol. This means that, per symbol, we require less than 2/3 of (for the Rayleigh fading channel), or an amount equal to (for the AWGN channel), the exchange traffic of MRC, and meanwhile we only need to exchange information for a portion of the symbols. Hence, with this symbol-based strategy we can reduce the required information exchange traffic significantly.
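As an illustration of the symbol-based exchange, the sketch below computes the symbol reliability of (3-18), builds the additional symbol metric of (3-20) from the other node's report, and forms the per-point demodulation metric of (3-21). The function names and the value $\alpha=0.7$ are illustrative assumptions; the labels are in $\{+1,-1\}$.

```python
import numpy as np

def symbol_reliability(L_bits):
    """Symbol reliability of (3-18): the magnitude of the least reliable bit."""
    return np.min(np.abs(L_bits))

def additional_symbol_metric(labels, hard_bits_other, L_a):
    """eta_a(z) of (3-20), from the other node's hard decisions l_j(x~) in
    {+1, -1} and its reported symbol reliability L_a(x)."""
    return 0.5 * (labels @ hard_bits_other) * L_a

def demod_metric_with_exchange(y, points, labels, L_apriori, eta_a,
                               noise_var, alpha=0.7):
    """Per-point metric of (3-21): channel term + a priori term + alpha*eta_a."""
    log_lik = -np.abs(y - points) ** 2 / (2.0 * noise_var)
    return log_lik + 0.5 * labels @ L_apriori + alpha * eta_a
```

Requests are then issued for the N·a% symbols with the smallest symbol_reliability values among those not requested in earlier exchanges.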

3.4 Performance Evaluation

In this section, we examine the performance of the proposed distributed array scheme by Monte Carlo simulations. In the simulations, we set the packet size to 1024 data bits, i.e., the data bits are arranged into a 32×32 matrix for the RPCC encoding. With this block size, the RPCC gives a code rate of 0.94. In the decoding procedure, a node requests additional information for the 15% of the symbols with the smallest reliability at the beginning of every 10 iterations after the first 10. For instance, 3 exchanges cause an overall traffic approximately equal to 30% (for the Rayleigh fading channel) or 45% (for the AWGN channel) of that required by MRC. In the case of MRC, we assume that each node passes all its channel observations and fading gains to the other node and maximally combines the channel observations before demodulation. Simulations show that the bit error rates (BER) at the two nodes are almost the same, so we take their average as the performance of the distributed array system.

Fig. 3-2 shows the BER performance of BICM-ID with the 32×32 RPCC in the distributed array over AWGN channels when 8PSK with Gray and SP mapping is used. In the figure, E_b is the received energy per bit per antenna. With MRC, about a 3 dB spatial diversity gain can be achieved for both mappings. With our distributed array approach, we obtain a 2.4 dB and a 1.4 dB gain for Gray and SP mapping, respectively, at a traffic cost of 45% (i.e., 3 exchanges in total) of that of MRC. Fig. 3-3 shows the BER curves for Rayleigh fading channels. The spatial diversity gain provided by MRC is about 8.5 dB for both Gray and SP mapping. By exchanging in total 20% (i.e., 2 exchanges) of the information amount required for MRC, our distributed BICM-ID system obtains an 8.3 dB and a 7.3 dB gain at a BER of 10^-5 for Gray and SP mapping, respectively.

Figure 3-2: BER for BICM-ID with 32×32 RPCC and 8PSK in the two-node distributed array over the AWGN channel.

Figure 3-3: BER for BICM-ID with 32×32 RPCC and 8PSK in the two-node distributed array over the Rayleigh fading channel.

Figure 3-4: Average SNR at 10^-5 BER versus spectral efficiency for BICM-ID with 32×32 RPCC in a two-node distributed array over AWGN channels.

In Figs. 3-4 and 3-5, we show the average SNR (E_s/N_0) at a BER of 10^-5 versus spectral efficiency for the two-node distributed array system for different constellations with Gray mapping¹ and SP mapping, over AWGN channels and Rayleigh fading channels, respectively. The average SNR can be computed approximately as SNR = m·R_c·E_b/N_0, where m is the number of bits carried per symbol and R_c is the code rate of the RPCC. We can see that, for both AWGN and Rayleigh fading channels, the proposed distributed BICM-ID approach can achieve almost the full diversity gain provided by MRC, while exchanging only 45% (3 exchanges in total) and 20% (2 exchanges in total) of the amount of information required by MRC between the two nodes for the AWGN channel and the Rayleigh fading channel, respectively. This especially shows the advantage of our approach for fading channels.

¹ For 32QAM, quasi-Gray mapping is used because Gray mapping is impossible in this case.
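The E_b/N_0-to-E_s/N_0 conversion used for these plots is a one-line computation; the sketch below is a small helper with an illustrative operating point (the 8 dB value is not taken from the figures).

```python
import numpy as np

def es_over_n0_db(eb_over_n0_db, bits_per_symbol, code_rate):
    """Average SNR per symbol in dB: SNR = m * Rc * Eb/N0."""
    return eb_over_n0_db + 10.0 * np.log10(bits_per_symbol * code_rate)

# 8PSK (m = 3) with the rate-0.94 32x32 RPCC: an Eb/N0 of 8 dB corresponds to
# roughly 8 + 10*log10(3 * 0.94) = 12.5 dB Es/N0.
print(es_over_n0_db(8.0, 3, 0.94))
```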

Figure 3-5: Average SNR at 10^-5 BER versus spectral efficiency for BICM-ID with 32×32 RPCC in a two-node distributed array over Rayleigh fading channels.

3.5 Summary

In this chapter, we have investigated the use of collaborative decoding for a two-node distributed array in which high-order constellations with iterative demodulation and RPCCs are used. The scheme of bit-interleaved coded modulation is adopted, with an iterative demodulation and decoding approach in the receiver. We develop a SISO demodulation algorithm suitable for iteration. The effect of different choices of bit-to-symbol mapping is analyzed. In order to obtain efficient collaborative decoding schemes, we propose a symbol-based information exchange strategy for the BICM-ID system, which is different from the bit-based strategy used for the BPSK systems in Chapter 2. Monte Carlo simulation results show that, by using our collaborative decoding technique, a significant diversity gain can be obtained with a relatively small amount of information exchange between the independent and physically separated receiving nodes. This results in high spectral efficiencies under both AWGN and Rayleigh fading channels.

CHAPTER 4
COLLABORATIVE DECODING FOR DISTRIBUTED ARRAY WITH TWO OR MORE NODES

In the previous chapters, different collaborative decoding techniques are studied for two-node distributed arrays. It is shown that, with properly designed information exchange schemes, collaborative decoding can achieve close-to-full receive diversity for binary coded and higher-order coded modulation systems. Since a distributed array is composed of a cluster of nodes in a wireless network, the number of nodes in the array is usually greater than two. This fact highlights the scalability requirement for the proposed collaborative decoding techniques. Thus, it is necessary to consider general distributed arrays with more than two nodes. To employ collaborative decoding to achieve spatial diversity efficiently in this scenario, the information exchange scheme should be a major concern. It will be shown that, with properly designed information exchange schemes, collaborative decoding, compared with MRC, still exhibits the advantage of achieving spatial diversity with significant savings in the information exchange cost.

In this chapter, we study collaborative decoding schemes for distributed arrays with more than two nodes. The discussion will be restricted to systems using BPSK modulation and convolutional codes. We first consider the system model of distributed arrays with more than two nodes in Section 4.1 and extend the two-node collaborative decoding technique developed previously to this case. In Section 4.2 we study the statistical characteristics of the extrinsic information generated by the MAP decoder for the convolutional code. A Gaussian approximation for the extrinsic information is then introduced. Based on this approximation, least-reliable-bit and most-reliable-bit information exchange schemes are described.

Figure 4-1: Distributed array with multiple nodes.

Then, we present and compare the performance of the two information exchange schemes with different parameter settings in Section 4.3. Finally, we draw a summary in Section 4.4. This chapter is primarily based on the work in [24] and [25].

4.1 System Model for Distributed Array with Two or More Nodes

The general model of a distributed array with more than two nodes is shown in Fig. 4-1. A remote source node transmits a message through a single-input/multiple-output forward channel to the destination, which contains M (M ≥ 2) physically separated receiving nodes, denoted by a node set M = {1, 2, ..., M}. The source encodes and transmits the message with a convolutional code and BPSK modulation. Analogous to the methods discussed in Chapters 2 and 3, with proper modifications the collaborative decoding techniques can be extended to the case of different codes and high-order modulations. Thus, without loss of generality, we restrict our study here to convolutional codes and BPSK modulation only. In order to be able to apply iterative decoding, each node in M uses an approximated version of MAP decoding, known as the max-log-MAP decoding algorithm [11], to process the received symbols. All nodes can perform the demodulation and decoding process individually.

We use a memoryless independent fading channel model, which includes the additive white Gaussian noise (AWGN) channel as a special case, to describe the transmission environment between the source and the receiving nodes.

The received signal $y_{k,i}$ at the $k$th receiving node corresponding to the transmitted BPSK signal $x_i$ (i.e., $x_i\in\{+1,-1\}$) at time instant $i$ can be expressed as

$y_{k,i} = g_{k,i}x_i + n_{k,i},\qquad k=1,2,\ldots,M,$   (4-1)

where the $n_{k,i}$, for $k=1,2,\ldots,M$ and all $i$, are i.i.d. zero-mean additive Gaussian random variables with variance $E[|n_{k,i}|^2]=\sigma_n^2$, and $g_{k,i}$ is the channel fading gain. For AWGN channels $g_{k,i}=1$, and for Rayleigh fading channels the $g_{k,i}$, for $k=1,2,\ldots,M$ and all $i$, are i.i.d. Rayleigh random variables with pdf

$p(g_{k,i}) = 2g_{k,i}\,e^{-g_{k,i}^2},\qquad g_{k,i}\geq 0.$   (4-2)

We normalize the signal energy $E[|x_i|^2]=1$; thus the average SNR is $1/\sigma_n^2$. In this channel model, we assume that perfect channel state information (CSI) is available at the $k$th receiving node, and hence coherent detection is performed at each node. With this model the pdf $p(y_{k,i}\,|\,x_i,g_{k,i})$, for $k=1,2,\ldots,M$ and all $i$, with perfect CSI is given by

$p(y_{k,i}\,|\,x_i,g_{k,i}) = \frac{1}{\sqrt{2\pi}\,\sigma_n}\exp\Big(-\frac{|y_{k,i}-g_{k,i}x_i|^2}{2\sigma_n^2}\Big).$   (4-3)

At the receiving end, the cluster of nodes forms a local network such that they can communicate with one another on an error-free broadcast channel. The broadcast channel is one of the simplest channel models for wireless networks. By using this model, simple wireless LAN protocols such as token ring [26] can be adopted to carry out the communication among the nodes; thus, the need for complicated channel allocation or medium access control protocols can be avoided. The detailed network protocol for collaborative decoding with a distributed array is outside the scope of this research work. Here, we simply assume that error-free communication among the nodes through the broadcast channel has been guaranteed by a certain network protocol. The assumption of error-free communication is also reasonable because the cluster nodes are in close proximity compared with the distance between the source and the array.
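For reference, a minimal simulation sketch of the forward channel in (4-1)–(4-3) is given below. The function name, the random-number seeding, and the exponential-draw construction of the Rayleigh gains are illustrative choices and are not part of this work.

```python
import numpy as np

def forward_channel(x, M, snr_db, fading="awgn", rng=np.random.default_rng(0)):
    """Simulate the forward channel of (4-1)-(4-3).

    x      : (N,) BPSK symbols in {+1, -1}
    M      : number of receiving nodes in the cluster
    snr_db : average SNR 1/sigma_n^2 in dB
    fading : "awgn" (g = 1) or "rayleigh" (pdf (4-2), E[g^2] = 1)
    Returns (y, g), each of shape (M, N).
    """
    sigma_n = np.sqrt(10 ** (-snr_db / 10.0))
    n = sigma_n * rng.standard_normal((M, len(x)))
    if fading == "awgn":
        g = np.ones((M, len(x)))
    else:
        # g = sqrt(E) with E ~ Exp(1) has pdf 2g exp(-g^2), matching (4-2)
        g = np.sqrt(rng.exponential(1.0, size=(M, len(x))))
    y = g * x + n
    return y, g
```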

Hence the SNR of the broadcast channel among the receiving nodes is significantly higher than that of the forward channel from the source to the receiving nodes. Even if the broadcast channel is noisy in realistic situations, simple error detection or correction coding, such as cyclic redundancy check codes [14], can be employed to provide reliable transmission effectively while introducing only minimal redundancy. In this case, the slight additional processing complexity and redundancy due to the coding protection can be ignored. The performance of the forward channel is the main concern here.

4.2 Collaborative Decoding and Information Exchange Schemes

To illustrate collaborative decoding for a distributed array with more than two nodes, we consider the turbo decoder with more than two decoder components. It is clear that in the turbo decoding procedure, a decoder component should use the sum of the extrinsic information generated by all other decoding components, except itself, in the previous iteration as its a priori information for the current iteration. According to this principle, collaborative decoding proceeds as follows. Each node collects soft output (including the extrinsic information and the received signal) from all other nodes generated in the previous iteration, and then uses the sum of the collected information, called additional information, as its a priori information for the current decoding iteration. To reduce the information exchange amount, the nodes only exchange the soft output for a portion of the information bits in each iteration.

We note that soft output is used in collaborative decoding, whereas extrinsic information is used in turbo decoding. The reason for using the soft output rather than the extrinsic information is as follows. In turbo decoding, the received signal for the data bits is the same for all code components; only the extrinsic information part at the output of each decoding component contains new information. However, for collaborative decoding the received signals at different nodes suffer from independent fading and noise.

Thus, both the channel observation and the extrinsic information parts at the decoding output of a node contain new information for the other nodes. To exploit as much diversity as possible, the soft output, rather than the extrinsic information, should be exchanged in collaborative decoding. Note that for non-systematic convolutional codes, since only the coded bits are transmitted, no channel observation is available for the data bits at the receiver. Hence, the soft output and the extrinsic information of the decoder are the same for non-systematic convolutional codes when there is no a priori information for the decoding process.

4.2.1 Information Exchange with Memory

Generally, the output extrinsic information (or soft output) of a SISO decoder is significantly correlated with the input a priori information and channel observation, and the outputs for adjacent bits are correlated with each other. In iterative decoding, due to the exchange of extrinsic information among all the decoding components, the a priori information of a decoder collected from the other decoding components becomes more and more correlated with its own output in the previous iterations. This means that the decoder can obtain less and less new information from the other decoders, which can severely limit the performance of iterative decoding. In turbo codes, bit interleavers are applied to each code component in order to solve this problem. The purpose of the interleavers is to randomly permute the bits so that the correlation among adjacent bits is broken. Hence, the correlation between the a priori information and the previous output of a decoder can be reduced.

Due to the iterative soft-output exchange process, a correlation problem similar to the one mentioned above can arise in collaborative decoding. Unfortunately, because a single message is broadcast from the source to the distributed array, the code structure and bit-sequence order at all nodes must be the same. This means that the interleaving technique of turbo coding cannot be applied to attack the correlation problem in this case. On the other hand, another difference between turbo decoding and collaborative decoding is that soft output, rather than extrinsic information, is exchanged in collaborative decoding.

Due to these two facts, the exchange of information for the same bit in successive iterations will cause correlation. To illustrate the situation, suppose that a data bit $x_i$ in a packet is transmitted from the remote source. In the $j$th iteration, for simplicity we assume that only one node $k\in\mathcal{M}$ broadcasts the soft output of $x_i$, denoted $\lambda^{(j)}_{k,i}$, to the other nodes in $\mathcal{M}$. Then, in the $(j+1)$th iteration, the other $M-1$ nodes will use $\lambda^{(j)}_{k,i}$ as a priori information to perform decoding. For convenience, we denote the set of the other $M-1$ nodes as $\mathcal{M}_c=\{m:m\in\mathcal{M},\,m\neq k\}$. According to MAP decoding, the soft output for bit $x_i$ at node $m\in\mathcal{M}_c$ can be expressed as

$\lambda^{(j+1)}_{m,i} = \lambda^{(j)}_{k,i} + \mu^{(j+1)}_{m,i},$

where $\mu^{(j+1)}_{m,i}$ is the extrinsic information generated for bit $x_i$ by node $m$ in the $(j+1)$th iteration. If there is a subset of nodes, denoted $\mathcal{M}'\subseteq\mathcal{M}_c$, that continue to exchange the soft output of bit $x_i$, then node $k$ will obtain all this information and use it as a priori information for the next iteration. In this case, the a priori information for bit $x_i$ at node $k$, denoted $\theta^{(j+2)}_{k,i}$, is given by

$\theta^{(j+2)}_{k,i} = \sum_{m\in\mathcal{M}'}\lambda^{(j+1)}_{m,i} = |\mathcal{M}'|\,\lambda^{(j)}_{k,i} + \sum_{m\in\mathcal{M}'}\mu^{(j+1)}_{m,i},$   (4-4)

where $|\mathcal{M}'|$ is the cardinality of $\mathcal{M}'$. From (4-4) we can see that the soft output $\lambda^{(j)}_{k,i}$ is explicitly included in $\theta^{(j+2)}_{k,i}$. This implies the existence of significant correlation between the a priori information and the previous soft output for the decoder at node $k$.

Based on the above discussion, we adopt a simple scheme to solve the correlation problem. The method is to assign a memory to each information bit to record whether the soft output for that bit has been exchanged or not. Once the soft output of a bit has been exchanged, no further information about that bit is exchanged in later iterations.

Since the bits that obtain information from other nodes are very likely to have high reliability values (magnitudes of soft output) after one iteration, repeating the information exchange for these bits cannot improve the decoding performance. In some cases it may even hurt the performance, because some decoding errors in these bits may have high reliability values. Also, this scheme helps to increase the opportunity for other, less reliable bits to receive information in the following iterations. Besides attacking the correlation problem, another advantage of this memory-based scheme is that the information exchange amount can be significantly reduced.

4.2.2 Least-Reliable-Bit Information Exchange

In order to achieve spatial diversity without the need for extensive information exchange as in MRC, only a small amount of information can be exchanged in each iteration of collaborative decoding. This means that proper information must be chosen and exchanged so that the decoder at each node can improve its error performance effectively. In this sense, the selection of information becomes very important. Although they account for only a small portion of the whole packet, the bits in error directly determine the error performance of a decoder. In MAP decoding, the a priori information of a bit contributes directly to its soft output. Thus, we consider a method that collects a priori information for those bits that are likely to be in error in the previous decoding iterations at each node.

As mentioned in Chapter 2, the soft output of the MAP decoder in log-likelihood ratio form provides a good reliability measure for a data bit: it is directly related to the a posteriori probability of the decision for that bit. For both AWGN and Rayleigh fading channels, a data bit with a small soft-output magnitude is more likely to be in error. In fact, many decoding algorithms used in turbo-like decoders, such as the MAP and belief-propagation decoding algorithms, are min-sum or min-product algorithms [27]. It turns out that the soft output of these decoding algorithms in LLR form possesses Gaussian-like statistical properties [28, 29].

Figure 4-2: Typical probability distribution functions of the soft output for convolutional codes.

Fig. 4-2 shows the typical probability distribution of the soft output generated by the MAP decoder for convolutional codes on AWGN channels. In the figure, we assume for clarity that the data bits to be decoded are all zeros. With this assumption, if the soft output of a bit is negative, then the decision on that bit will be in error. From the figure, we can see that the Gaussian-like probability distribution function falls mostly on the right-hand side of the y-axis; only a small part of its left tail is negative. Since the tail of a Gaussian distribution function decays exponentially, the probability for the soft output of erroneous bits to have large reliability values is very small. Conversely, the reliability values for correct bits have a good chance of being large. Thus, a simple way to identify the possibly erroneous bits is to measure the reliability values.

With the above argument, we propose an efficient information exchange scheme, called least-reliable-bit (LRB) information exchange. The idea of the LRB exchange scheme is that each node requests information from the other nodes for its least reliably decoded data bits [24]. All the additional information collected from other nodes is used as a priori information in the next decoding iteration. Using the scheme described in Section 4.2.1, a memory is assigned to each data bit to record whether the information of that bit has been exchanged or not.

Once the information of a bit has been exchanged, no further information about that bit is exchanged in later iterations. In each iteration, the bits for which information has not been previously exchanged are called candidate bits, and the remaining bits are non-candidate bits. We denote the total number of exchanges by I, and the fraction of candidate bits to exchange in the jth (0 ≤ j ≤ I−1) iteration by p_j (0 < p_j ≤ 1). The procedure of the LRB exchange scheme is as follows (a simulation sketch of one exchange round is given after the list):

1. Set all data bits to be candidate bits.
2. Decode the received signals at each node.
3. If I+1 decoding iterations (i.e., I exchanges) have been performed, then stop the decoding procedure and go to step 1 to process a new packet.
4. Otherwise, each node ranks the candidate bits according to their soft-output magnitude (the absolute value of the soft output) and requests soft information for the bottom p_j fraction of the candidate bits (the least reliable candidate bits) from the other nodes.
5. Each node broadcasts soft output for those bits that are requested by other nodes.
6. The bits involved (received and broadcast) in the current exchange are set to be non-candidate bits for later iterations.
7. Each node adds the information from the other nodes to its a priori information and returns to step 2.

Here, {p_j}_{j=0}^{I−1} are design parameters, which are usually chosen based on the tradeoff between performance and information exchange amount. Optimizing the design of the parameters {p_j}_{j=0}^{I−1} is an interesting topic, but it is outside the scope of this work. In this chapter, we focus on the capability of collaborative decoding to achieve receive diversity given the node number M and some proper choices of {p_j}_{j=0}^{I−1}.
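The sketch below implements one LRB exchange round (steps 4–7), assuming the error-free broadcast channel of Section 4.1 and ignoring protocol overhead; the function name, array layout, and tie-breaking by sort order are illustrative assumptions.

```python
import numpy as np

def lrb_exchange_round(soft_out, candidate, p_j):
    """One LRB exchange: soft_out is (M, N) LLRs of the data bits at the M
    nodes; candidate is the (N,) bool candidacy mask (identical at all nodes);
    p_j is the fraction of candidate bits each node requests this round.
    Returns (additional, new_candidate), where additional[k, i] is the sum of
    the soft outputs that node k collects for bit i from the other nodes."""
    M, N = soft_out.shape
    idx = np.flatnonzero(candidate)
    n_req = int(np.ceil(p_j * idx.size))
    requests = np.zeros((M, N), dtype=bool)
    for k in range(M):
        order = idx[np.argsort(np.abs(soft_out[k, idx]))]  # least reliable first
        requests[k, order[:n_req]] = True
    additional = np.zeros((M, N))
    for i in np.flatnonzero(requests.any(axis=0)):
        req = np.flatnonzero(requests[:, i])
        for k in range(M):
            # node m broadcasts bit i iff some *other* node requested it, and
            # node k sums every broadcast it hears (its own adds nothing new)
            broadcasters = [m for m in range(M)
                            if m != k and not (req.size == 1 and req[0] == m)]
            additional[k, i] = soft_out[broadcasters, i].sum()
    new_candidate = candidate & ~requests.any(axis=0)
    return additional, new_candidate
```

The returned additional array is added to each node's a priori information before the next max-log-MAP pass, and new_candidate implements the memory of Section 4.2.1.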

Since the information being exchanged is the soft output, in LLR form, for a portion of the data bits, we use the average total number of LLRs transmitted through the broadcast channel in the distributed array for processing each packet as a simple measure of the amount of exchanged information. The cost of protocol overhead is ignored here. If we use Δ to denote the average information exchange amount, then, under the assumption that the soft outputs at different nodes and for different bits are independent, for a specific choice of {p_j}_{j=0}^{I−1} the value of Δ for the LRB scheme is given by

$\Delta_{\mathrm{LRB}} = MN\sum_{j=0}^{I-1}\Big[1-(1-p_j)^{M-1}\Big]\prod_{l=0}^{j-1}(1-p_l)^{M},$   (4-5)

where N is the block size of the data bits. Correspondingly, the information exchange amount of MRC is given by

$\Delta_{\mathrm{MRC}} = MN/R_c,$   (4-6)

where R_c is the code rate.

4.2.3 Most-Reliable-Bit Information Exchange

In the LRB exchange scheme, each node directly requests information from the other nodes for its least reliable bits. Thus, the information exchange process has to be carried out in two stages: in the first stage, each node sends out its requests; in the second stage, each node broadcasts its soft output according to the requests received from the other nodes. This two-stage process may increase the complexity of the required network protocol. In addition, the fact that a node requests information for its least reliable bits does not necessarily mean that the other nodes can provide more reliable information for those bits; it is possible that the information collected by a node is not reliable enough to improve its decoding performance in the next iteration.

As an alternative, we propose another scheme, called most-reliable-bit (MRB) information exchange. MRB exchange is usually efficient for distributed arrays consisting of a large number of nodes, e.g., M > 6.
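Equations (4-5) and (4-6) are easy to evaluate numerically; the sketch below is one way to do it (the function names and the example parameter values are illustrative).

```python
def delta_lrb(M, N, p):
    """Average number of exchanged LLRs per packet for LRB, eq. (4-5)."""
    total, still_candidate = 0.0, 1.0
    for p_j in p:
        total += (1.0 - (1.0 - p_j) ** (M - 1)) * still_candidate
        still_candidate *= (1.0 - p_j) ** M   # no node has requested the bit yet
    return M * N * total

def delta_mrc(M, N, Rc):
    """Information exchange amount of MRC, eq. (4-6)."""
    return M * N / Rc

# M = 8 nodes, N = 1024 bits, a rate-1/2 code and {p_j} = {0.1, 0.15, 0.25}
print(delta_lrb(8, 1024, [0.1, 0.15, 0.25]) / delta_mrc(8, 1024, 0.5))  # ~0.458
```

The printed ratio, about 0.458, matches the Case 1 entry of Table 5-1 in the next chapter.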

The idea of this MRB scheme is that each node directly broadcasts the soft output for its most reliable bits after a decoding iteration is performed, without the information request stage of the LRB scheme. Similar to LRB, the identification of highly reliable bits is based on ranking and comparing the reliability values: a small percentage of bits with high reliability values are chosen as the most reliable bits. According to the statistical characteristics of the soft output shown in Fig. 4-2, these bits are correctly decoded with very high probability. Assume that the soft outputs for different bits and/or at different nodes are independent of each other; then the most reliable bits are evenly spread over the packet at each node, and their positions are uncorrelated across nodes. Thus, when the number of nodes is large, even if each node only broadcasts information for its top 10% most reliable bits, the total information collected from all these nodes can cover more than 50% of the bits in the whole packet. Therefore, the less reliable bits at a node that collects this information have a good chance of being covered. The additional information collected in MRB is usually much more reliable than that in LRB.

Combined with the memory-based scheme to avoid correlation among the additional information at different iterations, the MRB exchange scheme is given as follows.

1. Clear the flag registers, i.e., set all data bits to be candidate bits.
2. Decode the received signals at each node.
3. If I exchanges (i.e., I+1 decoding iterations) have been performed, then terminate the decoding procedure and go to step 1 to process a new packet.
4. Otherwise, each node ranks the candidate bits according to their soft-output magnitude (reliability) and broadcasts the soft information for the top p_j fraction of the candidate bits (the most reliable candidate bits) to the other nodes.
5. Each node sets the flags for those bits involved (received and broadcast) in the current exchange so that they become non-candidate bits for later iterations.

6. Each node adds the additional information to its a priori information and returns to step 2.

Ignoring the cost of information exchange due to the network protocol and bit indexes, the information exchange amount for MRB can be computed as

$\Delta_{\mathrm{MRB}} = MN\sum_{j=0}^{I-1}p_j\prod_{l=0}^{j-1}(1-p_l)^{M}.$   (4-7)

4.3 Performance Evaluation

In this section, we use Monte Carlo simulations to evaluate the performance of collaborative decoding with the LRB and MRB information exchange schemes when the distributed arrays consist of more than two nodes (M > 2). In the simulations, the packet size is set to 1024 data bits. We set the number of exchanges I to 3, i.e., 3 exchanges and 4 decoding iterations are performed in total.

We first examine the performance of collaborative decoding with the LRB exchange scheme. Fig. 4-3 shows the BER curves of collaborative decoding on the AWGN channel for the cases of M = 2, 3, 4, 6 and 8. In the system, we employ the non-recursive convolutional code CC(5,7), for which the generator polynomial is [1+D², 1+D+D²] and the code rate is R_c = 1/2. The parameters {p_j} are set to {0.1, 0.15, 0.25}. For clarity, only the BERs obtained after the last iteration of collaborative decoding are shown. In the figure we also show the BERs for MRC with M = 2, 6 and 8. From the figure we can see that, for M = 2, the performance of collaborative decoding with the LRB exchange scheme is very close to that of MRC, while it is within about 2 dB and 2.6 dB of that of MRC for M = 6 and M = 8, respectively. Fig. 4-4 shows the BER performance on an independent Rayleigh fading channel. Similar to the AWGN case, significant spatial diversity gains are obtained using collaborative decoding.

For collaborative decoding with the MRB exchange scheme, in order to gain spatial diversity we set {p_j} to {0.1, 0.2, 1}. All other system settings are the same as those for the LRB scheme.
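Equation (4-7) can be evaluated in the same way as (4-5); the loop below prints the relative cost with respect to MRC for a growing array, using a rate-1/2 code and the {p_j} values of this section, in the spirit of the comparison in Fig. 4-7. The function name and the particular M values printed are illustrative.

```python
def delta_mrb(M, N, p):
    """Average number of exchanged LLRs per packet for MRB, eq. (4-7)."""
    total, still_candidate = 0.0, 1.0
    for p_j in p:
        total += p_j * still_candidate
        still_candidate *= (1.0 - p_j) ** M
    return M * N * total

# Relative cost versus MRC (Delta_MRC = M*N/Rc), Rc = 1/2, {p_j} = {0.1, 0.2, 1}
for M in (2, 3, 4, 6, 8):
    print(M, round(delta_mrb(M, 1024, [0.1, 0.2, 1.0]) / (M * 1024 / 0.5), 3))
```

Combining this with the delta_lrb sketch above reproduces the trend discussed below: the LRB cost grows toward half of the MRC cost as M increases, while the MRB cost decreases with M.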

Figure 4-3: BER performance of collaborative decoding with LRB exchange for the cases of M = 2, 3, 4, 6 and 8 on AWGN channels, where CC(5,7) and {p_j} = {0.1, 0.15, 0.25} are used.

Figure 4-4: BER performance of collaborative decoding with LRB exchange on Rayleigh fading channels; parameter settings are the same as in Fig. 4-3.

Figure 4-5: BER performance of collaborative decoding with MRB exchange on AWGN channels, where {p_j} = {0.1, 0.2, 1} are used.

The BER performance of MRB on AWGN and independent Rayleigh fading channels is shown in Figs. 4-5 and 4-6, respectively. Similar to collaborative decoding with LRB exchange, collaborative decoding with MRB exchange can also achieve significant receive diversity gains and performance close to that of MRC for both AWGN and Rayleigh fading channels.

Finally, we compare the information exchange amounts of the LRB and MRB exchange schemes. For the settings {p_j} = {0.1, 0.15, 0.25} for LRB and {p_j} = {0.1, 0.2, 1} for MRB, the two schemes achieve roughly the same performance for the different numbers of nodes M. We use (4-5), (4-7) and (4-6) to calculate the information exchange amounts with respect to MRC for the two schemes. Fig. 4-7 shows the relative information exchange amounts Δ_LRB/Δ_MRC and Δ_MRB/Δ_MRC for different values of M. From the figure, we can see that for this setting of {p_j}, Δ_LRB grows with increasing M and approaches half of the information exchange amount of MRC. In contrast, Δ_MRB decreases with M.

Figure 4-6: BER performance of collaborative decoding with MRB exchange on Rayleigh fading channels; parameter settings are the same as in Fig. 4-5.

Figure 4-7: Comparison of the information exchange amount with respect to MRC for the LRB and MRB exchange schemes.

From the comparison we conclude that, when the number of nodes is small, the LRB exchange scheme is more efficient than MRB; with more nodes in the distributed array, MRB becomes more efficient than LRB.

4.4 Summary

In this chapter, we extend collaborative decoding to distributed arrays with two or more nodes. Two different information exchange schemes are proposed. In the LRB scheme, the nodes request soft information for a certain percentage of their least reliable bits. In the MRB exchange scheme, nodes send out soft information about a small set of their most reliable bits. Collaborative decoding with both of these schemes can achieve most of the spatial diversity and provide significant savings in terms of information exchange amount compared to MRC on AWGN and independent Rayleigh fading channels. We also compare the information exchange amounts of the two schemes. It is shown that for distributed arrays with a small number of nodes, LRB is more efficient; when the number of nodes increases, MRB becomes more efficient than LRB.

CHAPTER 5
PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH LEAST-RELIABLE-BIT EXCHANGE ON AWGN CHANNELS

As an efficient diversity technique, collaborative decoding with the LRB information exchange scheme, discussed in Chapter 4, provides significant savings in the cost of information exchange while still achieving performance close to that of MRC. In this chapter, we focus on the theoretical analysis of the error performance of collaborative decoding with the LRB exchange scheme on the AWGN channel. The system model considered in this chapter is the same as that described in Section 4.1. Since we restrict the analysis to the AWGN channel, the channel gain g_{k,i} in (4-1) is always 1 in this chapter. The analysis is based on the LRB exchange scheme described in Section 4.2.2.

The analysis builds on the statistical characteristics of the soft information obtained from the MAP decoders in collaborative decoding. From simulation, we observe that the extrinsic information generated in the decoding process can be well approximated by Gaussian random variables when nonrecursive convolutional codes are employed. Unfortunately, for recursive convolutional codes the extrinsic information generated in the decoding process cannot be approximated by a simple Gaussian distribution, which makes the performance analysis difficult. Due to this difficulty, we only consider nonrecursive convolutional codes in this chapter.

By viewing collaborative decoding as an iterative decoding system, we use a typical analysis technique for turbo-like codes, known as density evolution, to analyze the performance of collaborative decoding.

As in most of the literature on the analysis of turbo codes (e.g., [30] and [31]), we use simulation to obtain the statistical characteristics of the extrinsic information, which is approximated by a Gaussian distribution. To simplify the problem, we model the collaborative decoding process as a density evolution system with only one MAP decoder. We can then generate the a priori information of the density evolution model according to the LRB exchange scheme. By simulating the density evolution model with only one MAP decoder, we obtain the statistical characteristics of the actual extrinsic information with a modest simulation load in comparison to that of the actual collaborative decoding system. With knowledge of the probability distribution of the extrinsic information at each iteration, we derive an approximate bit-error-rate (BER) upper bound for collaborative decoding with the LRB exchange scheme.

The rest of this chapter is organized as follows. In Section 5.1, we model collaborative decoding as a concatenated structure consisting of a MAP decoder and an information exchange device, and employ a Gaussian approximation to obtain the density evolution of the extrinsic information. In Section 5.2, we derive an upper bound on the BER of the collaborative decoding process. Numerical results obtained from the analysis are shown in Section 5.3. Finally, conclusions are given in Section 5.4. This chapter is based on the work in [25] and [32].

5.1 Gaussian-Approximated Density Evolution for Nonrecursive Convolutional Codes

Due to the exchange of soft information in the process, knowledge of the statistical characteristics of the soft information from the maximum a posteriori (MAP) decoders in collaborative decoding is important to its performance analysis. Thus, we first consider the soft information generated in collaborative decoding. Note that the soft output for non-systematic codes consists only of extrinsic information and a priori information; if a candidate bit has not obtained additional information previously, then ranking and exchanging the soft output for the candidate bits is equivalent to ranking and exchanging the extrinsic information for those bits.

Figure 5-1: System model for the collaborative decoding process.

Also, the sets of candidate bits and non-candidate bits for a packet in each iteration are exactly the same at all nodes. These facts are important for understanding the analysis in the following sections.

Because of the symmetry among the nodes in our system model, the statistical characteristics of the extrinsic information are the same at each node in each iteration. This means that the behavior of the LRB exchange process can be determined by knowing the statistical characteristics of the output of the MAP decoder at a single node. Thus, the collaborative decoding process can be modeled by the joint operation of an information exchange unit and a MAP decoder unit, as shown in Fig. 5-1. The output of the information exchange unit is fed back to the MAP decoder as a priori information for use in the next decoding iteration. The following analysis is based on this system model.

Assuming that the all-zero codeword is transmitted, it is well known that the extrinsic information generated by a MAP decoder, in log-likelihood ratio (LLR) form, is well approximated by Gaussian random variables when the inputs to the decoder are i.i.d. Gaussian [31]. For the collaborative decoding process described in Section 4.1, the additional information obtained from the information exchange process, which is used as input to the MAP decoder, has a non-Gaussian distribution. Nevertheless, we observe that the probability distribution of the extrinsic information from the MAP decoder in each iteration can still be well approximated as Gaussian when nonrecursive convolutional codes are employed.

Figure 5-2: Empirical pdfs of the extrinsic information generated by the MAP decoders in successive iterations of collaborative decoding with LRB exchange for M = 6 and E_b/N_0 = 3 dB on AWGN channels, where the maximum-free-distance 4-state nonrecursive convolutional code is used.

Fig. 5-2 shows typical histograms of the extrinsic information generated by the MAP decoders at successive iterations of the collaborative decoding process for nonrecursive convolutional codes. Comparing them with the corresponding ideal Gaussian distributions, we can see that the histograms are very close to Gaussian. Based on these observations, we apply the Gaussian-approximated density evolution technique of [30, 31] to predict the behavior of the MAP decoders in collaborative decoding.

As in [31], we assume that at each node the extrinsic information generated by the MAP decoder for all the information bits at that node is i.i.d. Gaussian in each iteration. We further assume that the extrinsic information for the information bits generated by different nodes is independent. Thus, the statistical behavior of the extrinsic information is sufficiently specified by its mean and variance. Unfortunately, obtaining an analytic distribution for the extrinsic information generated by MAP decoders is an intractable problem, especially for the case of non-Gaussian input.
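The Gaussian approximation behind Fig. 5-2 can be checked directly on LLR samples collected from any SISO decoder simulation; the helper below compares an empirical pdf with a Gaussian of matched mean and variance (the function name and the binning are illustrative choices).

```python
import numpy as np

def gaussian_fit_check(llr_samples, n_bins=60):
    """Empirical pdf of extrinsic LLRs vs. a Gaussian with the same mean/variance."""
    mu, sigma = llr_samples.mean(), llr_samples.std()
    hist, edges = np.histogram(llr_samples, bins=n_bins, density=True)
    centers = 0.5 * (edges[:-1] + edges[1:])
    gauss = np.exp(-(centers - mu) ** 2 / (2 * sigma ** 2)) / (np.sqrt(2 * np.pi) * sigma)
    return centers, hist, gauss, mu, sigma
```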

Hence, we use simulation, based on the model in Fig. 5-1, to quantify the evolution of the probability distribution. By inputting actual additional information to the MAP decoder, the mean and variance of the extrinsic information can be obtained with modest simulation complexity in comparison to the actual collaborative decoding process. This knowledge of the extrinsic information is used to evaluate the error performance in Section 5.2.

We first describe the generation of the additional information. For the $j$th decoding iteration, let $\lambda^{(j)}_{k,i}$ denote the extrinsic information generated by the MAP decoder for the $i$th information bit at node $k$, and let $B^{(j)}_i$ denote the event that bit $i$ is a candidate bit. Under the Gaussian assumption, $\lambda^{(j)}_{k,i}\sim N(\mu_j,\sigma_j^2)$, and the $\lambda^{(j)}_{k,i}$ are i.i.d. for all $k$ and $i$, where $N(\mu,\sigma^2)$ denotes a Gaussian distribution with mean $\mu$ and variance $\sigma^2$. When the bit block size is large enough, the information request criterion that $|\lambda^{(j)}_{k,i}|$ ranks in the bottom $p_j$ fraction among the candidate bits at the $k$th node is approximately equivalent to $|\lambda^{(j)}_{k,i}|\leq T_j$, where $T_j\geq 0$ is a threshold related to the distribution of $\lambda^{(j)}_{k,i}$ and to $p_j$. Specifically, we have

$P\big(|\lambda^{(j)}_{k,i}|\leq T_j \,\big|\, B^{(j)}_i\big) = p_j.$   (5-1)

Let $\Theta^{(j)}_{k,i}$ denote the additional information for the $i$th bit at the $k$th node generated by the LRB exchange process in the $j$th iteration; this additional information is added to the a priori information in the $(j+1)$th iteration by node $k$. Below, let us assume that M ≥ 3; the case of M = 2 will be discussed separately later. According to the LRB scheme, if bit $i$ is a non-candidate bit in the $j$th iteration, then $\Theta^{(j)}_{k,i}=0$. Otherwise, there are three possibilities for a candidate bit:

(i) No node requests information for the $i$th bit, i.e., the event $\bigcap_{t\in\mathcal{M}}\{|\lambda^{(j)}_{t,i}|>T_j\}$ occurs; then $\Theta^{(j)}_{k,i}=0$;
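In the semi-analytical model, $T_j$ is obtained from the empirical distribution of the candidate-bit extrinsic values; below is a minimal sketch, where the quantile estimator and the synthetic Gaussian samples are illustrative and not the dissertation's exact nonparametric estimator.

```python
import numpy as np

def lrb_threshold(candidate_llrs, p_j):
    """Empirical threshold T_j of (5-1): the magnitude below which a fraction
    p_j of the candidate-bit extrinsic values fall."""
    return np.quantile(np.abs(candidate_llrs), p_j)

rng = np.random.default_rng(1)
llrs = rng.normal(3.0, 2.0, size=20000)   # Gaussian-approximated extrinsic values
T = lrb_threshold(llrs, 0.1)
print(T, np.mean(np.abs(llrs) <= T))       # the recovered fraction is ~0.1
```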

61 ii Thekthnodedoesnotrequestinformationforbiti,butthereisoneothernoderequestinginformationforthatbit.Wedenotethiseventby_Rjk;i,i.e.,_Rjk;i=[r2M;r6=knjjr;ijTj;t2M;t6=rjjt;ij>Tjo:{2Thenthekthnodewillobtaininformationfromothernodesexcepttheonesendingoutrequest,i.e.,jk;i=Xt2Mt6=r;t6=k;r6=kjt;i,_jk;i;{3 iii ThekthnodeormorethanonenodesinMrequestinformationforbiti.WedenotethiseventbyRjk;i,i.e.,Rjk;i=fjjk;ijTjg[nfjjk;ij>Tjg [r2M;r6=knjjr;ijTj;t2M;t6=r;t6=kjjt;ij>Tjoo:{4Inthiscase,thekthnodewillobtaininformationfromallothernodes,andjk;iisgivenbyjk;i=Xt2M;t6=kjt;i,jk;i:{5UndertheGaussianassumption,wecanseethat,withouttheconstraintofcandidatebits,_jk;iN)]TJ/F15 11.955 Tf 5.48 -9.684 Td[(M)]TJ/F15 11.955 Tf 10.412 0 Td[(2j;M)]TJ/F15 11.955 Tf 10.413 0 Td[(22jwhilejk;iN)]TJ/F15 11.955 Tf 5.479 -9.684 Td[(M)]TJ/F15 11.955 Tf 10.412 0 Td[(1j;M)]TJ/F15 11.955 Tf 10.412 0 Td[(12j.Clearly,_Rjk;iandRjk;iaredisjointevents.AccordingtotheLRBscheme,onlyundercaseibitiwillbeacandidatebitagaininnextiteration.Hence,P)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(Bj+1iBji=Pk2Mjjk;ij>TjBji=)]TJ/F22 11.955 Tf 11.955 0 Td[(pjM:{6Fromthisrecursiverelation,weimmediatelyobtainPBji=PBji;Bj)]TJ/F21 7.97 Tf 6.587 0 Td[(1i;;Bi=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yl=0P)]TJ/F25 11.955 Tf 5.48 -9.684 Td[(Bl+1iBli=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yl=0)]TJ/F22 11.955 Tf 11.955 0 Td[(plM;{7

62 andBji=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0k2Mfjlk;ij>Tlg:{8Dierentfromcasei,incasesiiandiiibitiwillbecomeanon-candidatebitinthenextiteration.Thus,P)]TJET1 0 0 1 153.218 605.649 cmq[]0 d0 J0.478 w0 0.239 m30.005 0.239 lSQ1 0 0 1 -153.218 -605.649 cmBT/F25 11.955 Tf 153.218 592.126 Td[(Bj+1iBji=P)]TJ/F15 11.955 Tf 9.916 -6.662 Td[(_Rjk;i[Rjk;iBji=P)]TJ/F15 11.955 Tf 9.916 -6.662 Td[(_Rjk;iBji+P)]TJ/F15 11.955 Tf 8.616 -6.662 Td[(Rjk;iBji: {9 Withtheabovearguments,wecaneasilysimulatetheadditionalinformationgeneratedintheactualLRBprocessforthedensityevolutionmodelinFig. 5{1 .Withoutlossofgenerality,weassumethattheMAPdecoderinFig. 5{1 isintheMthnode.Also,weassumethattheblocklengthofthecodeislongenoughtoensuretheGaussianapproximationsandthresholding.Inthejthiteration,theMAPdecodergeneratesjM;ifortheithbit.Thenthevaluesofjand2joftheextrinsicinformationfortheinformationbitsareestimated.TondTjusing 5{1 ,werstusenonparametricestimationmethodtoestimatethecumulativedistributionfunctionFjxoftheextrinsicinformationforthecandidatebits,i.e.,Fjx=Pjk;i
63 Figure5{3:Comparisonofmeanandvarianceoftheextrinsicinformationfromthedensityevolutionmodelandthatfromtheactualcollaborativedecodingprocess prioriinformation,denotedbyj+1M;i,forthej+1thiteration,asj+1M;i=jXl=0lM;i=8><>:lM;iifRlM;i6=?;80lj0otherwise; {12 whereRlM;i,_RlM;i[RlM;i:{13ByinputtingthisaprioriinformationtotheMAPdecoderanditeratingtheaboveprocedure,wecanobtainthestatisticalcharacteristicoftheGaussian-approximatedextrinsicinformationinthewholecollaborativedecodingprocess.Fig. 5.1 and 5.1 showthecomparisonofthemeanandvarianceoftheextrinsicinformationandthethresholdTjestimatedinourdensityevolutionmodelandtheactualcollaborativedecodingprocessforthecaseofM=6.Inthegure,themaximumfreedistance4-statenon-recursiveconvolutioncodeisused,andfpjgissettof0:1;0:2;1g.Fromthegure,weseethatourdensityevolutionmodelgivesanexcellentapproximationfortheactualcollaborativedecodingprocesswithonly1=Mthofthesimulationload.

Figure 5-4: Comparison of the threshold estimated from the density evolution model with that from the actual collaborative decoding process.

The analysis in the next section also shows that, to evaluate the error performance for the total of I iterations, we only need the statistical knowledge of the extrinsic information in the first I−1 iterations.

5.2 Error Performance Analysis

With knowledge of the statistical characteristics of the extrinsic information, we evaluate the error performance of collaborative decoding with LRB exchange. We again consider the decoding process and performance at the Mth node. Let M' = {1, 2, ..., M−1} denote the set of the other M−1 nodes. Since the average BER is considered, we drop the bit index, i.e., the subscript i, in the notation of variables and events for the bit of interest. For convenience, we also drop the subscript M for the Mth node in the following derivation. From the definition in (5-3), the additional information collected in case (ii) is a Gaussian random variable for M ≥ 3 but equals zero for M = 2. Thus we treat M = 2 as a special case and consider the case of M ≥ 3 first below.

65 5.2.1BERUpperBoundforM3ForthecaseofM3,theBERoftheMAPdecodersinthejthj>0iterationistheprobabilitythatthesoftoutputofabitissmallerthanzerogiventhattheall-zerosequenceistransmitted,i.e.,Pjb=Pj+j<0;{14wherejistheextrinsicinformation,andjistheaprioriinformationinthejthiterationgivenin 5{12 attheMthnode,respectively.Here,weevaluatetheerrorperformancebyndinganupperboundfor 5{14 .Accordingto 5{12 6{10 and 6{16 ,werewrite 5{14 asPjb=Pj+j<0; Bj+Pj<0;Bj=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0Pj+l<0;Rl;Bl+Pj<0;Bj: {15 Werstconsidertherstpartin 5{15 .Using 6{16 5{3 and 5{5 ,wehavePj+l<0;Rl;Bl=Pj+_l<0;_Rl;Bl+Pj+l<0;Rl;BlPj+_l<0;_Rl;Bl+Pj+l<0: {16

66 With 5{2 and 5{8 ,whenpt<1i.e.,Tt<1for0tl,weupperboundthersttermin 5{16 asfollowsPj+_l<0;_Rl;Bl=Pj+_l<0;[r2M0njlrjTl;t2M;t6=rjltj>Tlo;Bl=M)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xr=1Pj+_l<0;jlrjTl;t2M;t6=rjltj>Tl;l)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=0k2Mfjtkj>TtgaM)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xr=1Pj+_l<0;jlrjTl;l)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0jtrj>Tt {17 b=M)]TJ/F15 11.955 Tf 11.955 0 Td[(1Pj+_l<0Pjl1jTl;l)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=0jt1j>Tt; {18 whereaisobtainedbydroppingalltheeventsinf_Rl;Blgassociatedwithtkforalltandk2M;k6=r,andbisduetothefactthattheprobabilitiesinthesumin 5{17 areequalfor1rM)]TJ/F15 11.955 Tf 12.034 0 Td[(1,andthatjand_lareindependentoftrforallt.ToevaluatetheprobabilityPjl1jTl;Tl)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=0jt1j>Ttin 5{18 ,weuse 5{8 torewritePBjasPBj=PMk=1nj)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0jlkj>Tlo=MYk=1Pj)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0jlkj>Tl=hPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0jlkj>TliM;k2M: {19 Bycomparing 5{19 with 5{7 ,forallk2MwehavePj)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0jlkj>Tl=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Yl=0)]TJ/F22 11.955 Tf 11.955 0 Td[(pl:{20Inthesimilarmanner,itiseasytoseethatforallk2MPjlkjTl;l)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0jtkj>Tt=pll)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yt=0)]TJ/F22 11.955 Tf 11.955 0 Td[(pt:{21

67 Thus,with 5{21 andtakingintoaccountthefactPj+_l<0;_Rl;BlPj+_l<0,werenetheupperbound 5{18 asPj+_l<0;_Rl;Blminn1;M)]TJ/F15 11.955 Tf 9.614 0 Td[(1pll)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yt=0)]TJ/F22 11.955 Tf 9.614 0 Td[(ptoPj+_l<0:{22Thisboundisforthecasethatallptarenotequalto1.Ifthereexistsa0tlsuchthatpt=1,thenPj+_l<0;_Rl;Bl=0becauseofP_Rl;Bl=0.Toincludethiscase,werewritetheupperbound 5{22 asPj+_l<0;_Rl;BlalPj+_l<0;{23whereal=8><>:0ifQlt=0)]TJ/F22 11.955 Tf 11.955 0 Td[(pt=0minn1;M)]TJ/F15 11.955 Tf 11.956 0 Td[(1plQl)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=0)]TJ/F22 11.955 Tf 11.955 0 Td[(ptootherwise: {24 Inthesameway,weconsidertheprobabilityPj<0;Bjin 5{15 .With 5{7 and 5{20 ,thisprobabilitycanbeeasilyexpandedandupperboundedbyPj<0;Bj=Pj<0;j)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0k2Mjlkj>Tl=Pj<0;j)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0jlj>TlPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0M)]TJ/F21 7.97 Tf 6.587 0 Td[(1k=1jlkj>TlPj<0;jj)]TJ/F21 7.97 Tf 6.587 0 Td[(1j>Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Yl=0)]TJ/F22 11.955 Tf 11.955 0 Td[(plM)]TJ/F21 7.97 Tf 6.586 0 Td[(1bjPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1<)]TJ/F22 11.955 Tf 9.299 0 Td[(Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1+Pj<0;j)]TJ/F21 7.97 Tf 6.586 0 Td[(1>Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(1; {25 wherebj=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Yl=0)]TJ/F22 11.955 Tf 11.955 0 Td[(plM)]TJ/F21 7.97 Tf 6.586 0 Td[(1:{26

68 Byinserting 5{23 5{25 and 5{16 into 5{15 ,weobtainfollowingupperboundPjbj)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0halPj+_l<0+Pj+l<0i+bjPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1<)]TJ/F22 11.955 Tf 9.298 0 Td[(Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(1+Pj<0;j)]TJ/F21 7.97 Tf 6.587 0 Td[(1>Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1; {27 wherealandbjaregivenby 5{24 and 5{26 ,respectively.Below,weemployaunionboundforthemax-log-MAPdecodertofurtherupperboundtheprobabilitiesin 5{27 .5.2.2UnionBoundforMax-log-MAPDecodingLetu andc denoteainformationbitsequenceandthecorrespondingcodewordgeneratedbyanonrecursiveconvolutionalcodeC:u !c ,whereu =u0;u1;;ui;,c =c0;c1;;ci;,anduiandci2f0;1garetheinformationbitandcodedbit,respectively.Correspondingly,yiisthereceivedBPSKi.e.,xi=1)]TJ/F15 11.955 Tf 10.62 0 Td[(2ciin 4{1 sig-nalatthedecoder.Undertheassumptionthattheall-zerosequenceistransmitted,theextrinsicinformationgeneratedbythemax-log-MAPdecoderintheLLRformisgivenbyjk=maxu ;c 2C+)]TJ/F15 11.955 Tf 11.955 0 Td[()]TJ/F21 7.97 Tf 7.314 4.936 Td[(ju ;c +minu ;c 2C)]TJ/F28 11.955 Tf 8.247 15.495 Td[()]TJ/F21 7.97 Tf 7.314 4.936 Td[(ju ;c ;{28whereC+andC)]TJ/F15 11.955 Tf 11.052 -4.338 Td[(arethesetsofallcodewordspairu ;c thatgivesthedecisionofu0=0andu0=1,respectively,and)]TJ/F21 7.97 Tf 124.094 6.11 Td[(ju ;c istheerroreventmetricforu ;c inthejthiteration,denedas)]TJ/F21 7.97 Tf 7.314 4.936 Td[(ju ;c =Xi2fi:ui=1gi6=kji+LcXi2fi:ci=1gyi:{29In 5{29 ,i2fi:ui=1gandi2fi:ci=1gmeantakingtheindicesofthenon-zerobitsinu andc ,jiistheaprioriinformationoftheithinformationbit,andLc=2=2n{30

69 isknownasthechannelreliabilitymeasure.Adetailedproofof 5{28 canbefoundinAppendix B .Notethatsincetheall-zerocodeword0 ;0 2C+and)]TJ/F21 7.97 Tf 30.378 6.11 Td[(j0 ;0 =0,wehavemaxu ;c 2C+)]TJ/F15 11.955 Tf 11.955 0 Td[()]TJ/F21 7.97 Tf 7.314 4.937 Td[(ju ;c =maxu ;c 2C+0;)]TJ/F15 11.955 Tf 9.299 0 Td[()]TJ/F21 7.97 Tf 7.314 4.937 Td[(ju ;c 0:{31With 5{31 wecanobtainfollowingunionboundfrom 5{28 fortheprobabilitythatjkissmallerthananarbitraryvaluex,Pjk
PAGE 84

70 wheredministheminimumHammingdistanceofthecodeC,andAw;disthenumberoferroreventswithHammingweightdandinputweightofw.Eq. 5{34 isageneralizedunionboundofmax-log-MAPdecoding.Thewellknownunionboundformaximumlikelihooddecodingisaspecialcaseof 5{34 withx=0andtheaprioriinformationequalto0.5.2.3ApplyingMax-log-MAPDecodingUnionBoundtoCollaborativeDecodingToapplythegeneralizedunionboundin 5{34 tocollaborativedecoding,thecrucialstepistheevaluationoftheprobabilityP)]TJ/F21 7.97 Tf 11.866 6.111 Td[(jw;d
PAGE 85

71 whereBj= Sj)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0Alisthebitsetforwhichnoinformationexchangeoccursinthepreviousj)]TJ/F15 11.955 Tf 11.75 0 Td[(1iterations.ThesetBjcontainsallthenon-zerocandidatebitsleftforthejthdecodingiteration.From 5{33 ,theerroreventmetricassociatedwitheventVjcanbedenedas)]TJ/F21 7.97 Tf 7.314 6.11 Td[(jw;d=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0Xi2Alli+Yd=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0Xi2_Al_li+Xi2Alli+Yd;{36whereYd:=Lcd)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xi=0yi:{37From 4{1 weknowthatYdNdLc;2dLc.Sinceiniterationltheextrinsicinformationfligarei.i.d.foralli,thestatisticalcharacteristicsof)]TJ/F21 7.97 Tf 96.662 6.111 Td[(jw;din 5{36 andtheprobabilityofVjonlydependsonthesizeof_AlandAlforl
PAGE 86

72 Forconvenience,weuse)]TJ/F21 7.97 Tf 134.982 6.11 Td[(jw;dVjtodenotetheerroreventmetricwithaparticularVj.Thenwehave)]TJ/F21 7.97 Tf 90.557 6.111 Td[(jw;dVjN)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(Vj;2Vj,whereVj=dLc+j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0ll;and2Vj=2dLc+j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xl=0l2l;{41inwhichl=mlM)]TJ/F15 11.955 Tf 11.955 0 Td[(1)]TJ/F22 11.955 Tf 11.955 0 Td[(nl:{42RecallthatintheLRBexchangeschemenoinformationcanbeexchangedforanon-candidatebit,i.e.,AlAk=?forl6=k.Thus,thevalueofmlin 5{38 mustsatisfy0mlwl;{43wherewl=wl)]TJ/F21 7.97 Tf 6.586 0 Td[(1)]TJ/F22 11.955 Tf 11.955 0 Td[(ml)]TJ/F21 7.97 Tf 6.586 0 Td[(1;andw0=w)]TJ/F15 11.955 Tf 11.955 0 Td[(1;{44isthethenumberofnon-zerocandidatebitsleftinu giventheeventfjAtj=mtgl)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0occurs.Basedontheabovearguments,theprobabilityP)]TJ/F21 7.97 Tf 11.867 6.111 Td[(jw;d
PAGE 87

73 possiblechoicesofAVj.Forallthesechoices,theprobabilitiesoftheeventfVj=AVjg,arethesame.Thus,wecanupperbound 5{45 byupperboundingP)]TJ/F15 11.955 Tf 5.479 -9.684 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(jw;dVjTtoPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0i2All)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=0jtM;ij>TtPi2BjBjic0VjP)]TJ/F15 11.955 Tf 5.479 -9.684 Td[()]TJ/F21 7.97 Tf 7.314 6.111 Td[(jw;dVj
PAGE 88

74 withQbeingtheGaussianQ-function.Thenbyinserting 5{49 into 5{45 wehaveP)]TJ/F21 7.97 Tf 11.866 6.111 Td[(jw;dTj)]TJ/F21 7.97 Tf 6.586 0 Td[(1leftin 5{27 toevaluate.Thedicultyhereisthecorrelationbetweenjandj)]TJ/F21 7.97 Tf 6.586 0 Td[(1.Tounveilthiscorrelation,weconsidertheextrinsicinformationexpressiongivenin 5{28 .Letu ;c +opt=argmaxu ;c 2C+)]TJ/F15 11.955 Tf 11.955 0 Td[()]TJ/F21 7.97 Tf 7.315 4.936 Td[(ju ;c denotetheoptimaldecodingsequencefoundbythedecoderinC+,meanwhileu ;c )]TJ/F21 7.97 Tf 0 -7.646 Td[(optdenotetheoptimaldecodingsequenceinC)]TJ/F15 11.955 Tf 7.084 -4.338 Td[(.Accordingtomax-log-MAPdecoding,thenalsurvivalsequenceu ;c optisgeneratedbetweenu ;c +optandu ;c )]TJ/F21 7.97 Tf 0 -7.645 Td[(opt.Ifu ;c +optisnotselectedtobetweenthesurvivorsequence,itbecomesthecompetingsequence.Weassumethecodeisgoodenoughthat,whentheSNRisnottoolow,thedecodercanatleastndthecorrectcodewordasthecompetingsequenceifitisnotselectedtobethesurvivorsequence.Thisassumptionisthesameasthatusedin[ 33 ].Thus,undertheassumptionthattheall-zerosequence0 ;0 istransmitted,wehaveu ;c +opt=0 ;0 since0 ;0 2C+.Thatis,maxu ;c 2C+)]TJ/F15 11.955 Tf 11.955 0 Td[()]TJ/F21 7.97 Tf 7.315 4.936 Td[(ju ;c =)]TJ/F21 7.97 Tf 19.739 6.11 Td[(j0 ;0 =0:

PAGE 89

75 Withtheabovearguments,wecanrewrite 5{28 asfollowsbydroppingthersttermjminu ;c 2C)]TJ/F28 11.955 Tf 8.247 15.495 Td[()]TJ/F21 7.97 Tf 7.314 4.936 Td[(ju ;c whentheSNRishigh.Thus,forj>0wehavePjk<0;j)]TJ/F21 7.97 Tf 6.586 0 Td[(1k>Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(1Pminu ;c 2C)]TJ/F28 11.955 Tf 8.246 15.495 Td[()]TJ/F21 7.97 Tf 7.314 4.936 Td[(ju ;c <0;minu ;c 2C)]TJ/F28 11.955 Tf 8.247 15.495 Td[()]TJ/F21 7.97 Tf 7.315 4.936 Td[(j)]TJ/F21 7.97 Tf 6.586 0 Td[(1u ;c >Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(1=1 KcP[u ;c 2C)]TJ/F15 11.955 Tf 8.246 12.387 Td[()]TJ/F21 7.97 Tf 7.315 4.936 Td[(ju ;c <0;u ;c 2C)]TJ/F15 11.955 Tf 8.247 12.387 Td[()]TJ/F21 7.97 Tf 7.314 4.936 Td[(j)]TJ/F21 7.97 Tf 6.587 0 Td[(1u ;c >Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(11 KcXu ;c 2C)]TJ/F22 11.955 Tf 8.246 12.387 Td[(P)]TJ/F21 7.97 Tf 7.315 4.936 Td[(ju ;c <0;u 0;c 02C)]TJ/F15 11.955 Tf 8.247 12.387 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(j)]TJ/F21 7.97 Tf 6.587 0 Td[(1u 0;c 0>Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(11 KcXu ;c 2C)]TJ/F22 11.955 Tf 8.246 12.387 Td[(P)]TJ/F21 7.97 Tf 11.866 4.936 Td[(ju ;c <0;)]TJ/F21 7.97 Tf 7.314 4.936 Td[(j)]TJ/F21 7.97 Tf 6.587 0 Td[(1u ;c >Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1; {53 Followingthederivationfrom 5{33 through 5{52 ,wethenobtainPjk<0;j)]TJ/F21 7.97 Tf 6.586 0 Td[(1k>Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(11 KcXddminXw1hwAw;dXVjjcVjP)]TJ/F15 11.955 Tf 5.479 -9.684 Td[()]TJ/F21 7.97 Tf 7.315 6.111 Td[(jw;dVj<0;)]TJ/F21 7.97 Tf 7.315 6.111 Td[(j)]TJ/F21 7.97 Tf 6.586 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.586 0 Td[(1>Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(1i: {54 ToevaluatetheprobabilityP)]TJ/F15 11.955 Tf 5.48 -9.683 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(jw;dVj<0;)]TJ/F21 7.97 Tf 7.314 6.11 Td[(j)]TJ/F21 7.97 Tf 6.587 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.587 0 Td[(1>Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1in 5{54 ,werewrite 5{36 as)]TJ/F21 7.97 Tf 7.315 6.11 Td[(jw;dVj=)]TJ/F21 7.97 Tf 27.612 6.11 Td[(j)]TJ/F21 7.97 Tf 6.586 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.587 0 Td[(1+;{55where,Xi2_Aj)]TJ/F18 5.978 Tf 5.756 0 Td[(1_j)]TJ/F21 7.97 Tf 6.587 0 Td[(1i+Xi2Aj)]TJ/F18 5.978 Tf 5.756 0 Td[(1j)]TJ/F21 7.97 Tf 6.587 0 Td[(1i:{56GivenVj,weknowthat)]TJ/F21 7.97 Tf 82.688 6.111 Td[(j)]TJ/F21 7.97 Tf 6.586 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.586 0 Td[(1NVj)]TJ/F21 7.97 Tf 6.586 0 Td[(1;2Vj)]TJ/F21 7.97 Tf 6.586 0 Td[(1,Nj)]TJ/F21 7.97 Tf 6.587 0 Td[(1j)]TJ/F21 7.97 Tf 6.587 0 Td[(1;j)]TJ/F21 7.97 Tf 6.587 0 Td[(12j)]TJ/F21 7.97 Tf 6.587 0 Td[(1,and)]TJ/F21 7.97 Tf 29.34 6.111 Td[(j)]TJ/F21 7.97 Tf 6.586 0 Td[(1w;dandareindependentofoneanother,whereVj,Vjandjaregiven

PAGE 90

76 in 5{41 and 5{42 ,respectively.Thus,wehaveP)]TJ/F15 11.955 Tf 5.48 -9.684 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(jw;dVj<0;)]TJ/F21 7.97 Tf 7.315 6.11 Td[(j)]TJ/F21 7.97 Tf 6.586 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.586 0 Td[(1>Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(1=P)]TJ/F15 11.955 Tf 5.48 -9.684 Td[()]TJ/F21 7.97 Tf 7.314 6.111 Td[(j)]TJ/F21 7.97 Tf 6.587 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.586 0 Td[(1+<0;)]TJ/F21 7.97 Tf 7.314 6.111 Td[(j)]TJ/F21 7.97 Tf 6.587 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.587 0 Td[(1>Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1=P)]TJ/F15 11.955 Tf 5.48 -9.683 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(j)]TJ/F21 7.97 Tf 6.587 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.586 0 Td[(1+<0;<)]TJ/F22 11.955 Tf 9.299 0 Td[(Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1)]TJ/F22 11.955 Tf 11.955 0 Td[(P)]TJ/F15 11.955 Tf 5.479 -9.683 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(j)]TJ/F21 7.97 Tf 6.586 0 Td[(1w;dVj)]TJ/F21 7.97 Tf 6.587 0 Td[(1
PAGE 91

77 5{16 becomesPj+l<0;Rl;Bl=Pj<0;_Rl;Bl+Pj+l<0;Rl;BlPj<0;_Rl;Bl+Pj+l<0: {60 Analogousto 5{17 and 5{25 ,weupperboundthersttermin 5{60 asPj<0;_Rl;Bl=Pj<0;jl1jTl;jlj>Tl;BlPj<0;jl1jTl;jlj>Tl;l)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0jt1j>Tt=Pj<0;jlj>TlPjl1jTl;l)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0jt1j>TtalPl<)]TJ/F22 11.955 Tf 9.298 0 Td[(Tl+Pj<0;l>Tl; {61 wherealisthesameas 5{24 .Thus,bysubstituting 5{16 with 5{60 ,theupperboundofPjbin 5{27 becomesPjbj)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0nalPl<)]TJ/F22 11.955 Tf 9.298 0 Td[(Tl+Pj<0;l>Tl+Pj+l<0o+bjPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1<)]TJ/F22 11.955 Tf 9.299 0 Td[(Tj)]TJ/F21 7.97 Tf 6.586 0 Td[(1+Pj<0;j)]TJ/F21 7.97 Tf 6.586 0 Td[(1>Tj)]TJ/F21 7.97 Tf 6.587 0 Td[(1: {62 Then,similartothecaseofM3,weapplytheunionboundformax-log-MAPdecodingtofurtherupperboundtheprobabilitiesin 5{62 .AllthederivationsaresameasSection 5.2.3 exceptfor 5{47 and 5{48 .Duetothechangein 5{59 ,Rlibecomesindependentofli.Thus,forthecaseofM=2,wecankeepRlifori2Alwhenwedropalltheeventsassociatedwithliinthederivationof 5{47 .Withthismodication,c0Vjin 5{48 becomesc0Vj=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Yl=0wlmlmlnlhpmll)]TJ/F22 11.955 Tf 11.955 0 Td[(pl2wj+nll)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yt=0)]TJ/F22 11.955 Tf 11.955 0 Td[(ptml+nli:{63

Figure 5-5: Comparison of the proposed bounds with simulation results for the cases of M = 2 and 6 on AWGN channels, where CC(5,7) and {p_j} = {0.1, 0.15, 0.25} are used.

5.3 Numerical Results

In this section, we first present numerical results to demonstrate the tightness of the BER upper bound developed in Section 5.2. Strictly speaking, this bound is an approximate upper bound due to the Gaussian approximation and the semi-analytical density evolution model. First, we set the number of exchanges I to 3 (i.e., 3 exchanges and 4 decoding iterations are performed in total) and set {p_j} to {0.1, 0.15, 0.25} in the collaborative decoding process. Fig. 5-5 compares the upper bounds in each iteration with the simulation results for the cases of M = 2 and M = 6. In the system, a non-recursive convolutional code with the generator polynomial [1+D², 1+D+D²] is used; we denote it by CC(5,7). From the figure, we see that the bounds in the low-BER region are very close to the simulation results in all iterations. In the very low E_b/N_0 region (i.e., the high-BER region), the bounds become loose. This is due to the nature of the union bound given in (5-32). In the figure, we also show the union bounds for MRC.

Figure 5-6: Comparison of the proposed bounds with simulation results in the last iteration for the cases of M = 2, 3, 4, 6 and 8 on AWGN channels, where CC(15,17) and {p_j} = {0.1, 0.15, 0.25} are used.

We can see that, for M = 2, the performance of collaborative decoding with the LRB exchange scheme is very close to that of MRC, while it is within about 2 dB of that of MRC for M = 6. This means that most of the spatial diversity gain can be obtained through collaborative decoding.

In Fig. 5-6, we show the results for another non-recursive convolutional code, with generator polynomial [1+D²+D³, 1+D+D²+D³], denoted by CC(15,17). The parameters {p_j} are the same as in Fig. 5-5. We compare the upper bounds with simulation results for M = 2, 3, 4, 6 and 8. For clarity, we only show the BER in the last iteration for each M. We note that, because the independence assumption and the Gaussian approximation of Section 5.1 are not very accurate in the realistic decoding process for CC(15,17) when M = 2, the BER bound is slightly below the simulation results. However, when M ≥ 3 the assumptions become much closer to the actual situation. From the figure, we can see that the bounds are very tight.


Table 5-1: Different choices of {p_j} and the corresponding average information exchange amount Ω, with M = 8 and the rate-1/2 CC(5,7) code. Ω is calculated with respect to the information exchange amount of MRC, Ω_MRC.

         No. of exchanges I   Value of {p_j}, j = 0, ..., I-1              Average info exchange amount
Case 1   I = 3                {0.1, 0.15, 0.25}                            Ω1 = 0.458 Ω_MRC
Case 2   I = 3                {0.055, 0.098, 1}                            Ω2 = 0.466 Ω_MRC
Case 3   I = 5                {0.0405, 0.0564, 0.0897, 0.1902, 1}          Ω3 = 0.456 Ω_MRC
Case 4   I = 1                {1}                                          Ω4 = 0.5 Ω_MRC

With the proposed analysis tools, we can illustrate the effect of different choices of {p_j} on the error performance of collaborative decoding by comparing the BER upper bounds. When comparing the error performance of different collaborative decoding processes, it is important to consider the amount of information exchange each one requires. Here, we use (4-5) to calculate the average information exchange amount Ω, and the information exchange amount of MRC, Ω_MRC, is also calculated by (4-5) for comparison. Below, we fix M = 8 and the rate-1/2 code CC(5,7), and compare the four cases listed in Table 5-1. In Table 5-1, {p_j} in Case 2 is chosen such that, for M = 8, the amount of information each node sends out is almost the same, on average, in every iteration. In Case 3, {p_j} is chosen so that each node requests information from the other nodes for almost the same number of information bits in every iteration. In Case 4, {p_j} = {1} means that the nodes exchange information only once, and each node requests information from the other nodes for all information bits. In Fig. 5-7, we show the BER bounds of each iteration for all four cases. From the figure, we see that Cases 2, 3, and 4 achieve the same performance in their last iterations and outperform Case 1. In Fig. 5-8, we compare the BER bounds of all four cases in their last iterations with that of MRC in the very low BER region. This approximately shows the asymptotic performance of the systems. In the figure, we see that in Cases 2, 3, and 4 the receivers finally achieve the same error performance as MRC, but with a much smaller amount of information exchange. This shows that with proper


Figure 5-7: Comparison of performance for M = 8 and CC(5,7) on AWGN channels with the different choices of {p_j} in Table 5-1.

Figure 5-8: Asymptotic performance for M = 8 and CC(5,7) on AWGN channels with the different choices of {p_j} in Table 5-1.


choices of {p_j}, full spatial diversity can be achieved by the collaborative decoding technique.

5.4 Summary

We have analyzed the bit error performance of collaborative decoding with LRB exchange. A density evolution model is proposed to simplify the analysis. With the Gaussian approximation, knowledge of the extrinsic information is obtained by simulating the proposed model over AWGN channels. Then, we derive an upper bound on the BER of the collaborative decoding process via a generalized union bound for the max-log-MAP decoder. Numerical results demonstrate the tightness of the bounds. We also show that, with proper parameter design, collaborative decoding with LRB exchange can achieve the same performance as MRC at high SNRs. The analysis provides an efficient way to evaluate the error performance of the collaborative decoding system.

The analysis is based on the observation that the extrinsic information generated in the collaborative decoding process can be well approximated by Gaussian distributions when nonrecursive convolutional codes are used in the system. This property keeps the calculations in the analysis simple. For recursive convolutional codes, if a proper probability distribution model for the extrinsic information can be found, then our analysis can be extended to the recursive case by replacing the Gaussian approximation with the new model. A Gaussian mixture model [36] is a possible solution in this case.


CHAPTER 6
PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH MOST-RELIABLE-BIT EXCHANGE ON AWGN AND RAYLEIGH FADING CHANNELS

In Chapter 5 we analyzed the error performance of collaborative decoding with LRB information exchange over an AWGN channel. The method is primarily based on the density evolution model and the Gaussian approximation for the extrinsic information generated by nonrecursive convolutional codes in collaborative decoding. With the statistical characteristics of the additional information, it is possible to study the error events, and hence the pairwise error probability, of any single decoding process in collaborative decoding. Once the pairwise error probabilities are obtained, the error performance of the decoder is evaluated by applying the union bound for MAP decoding as in Section 5.2.2.

In this chapter, we extend this method to the scenario of collaborative decoding with the MRB information exchange scheme when nonrecursive convolutional codes are used. Similar to the case of LRB, for AWGN channels we still apply the Gaussian approximation to the extrinsic information generated by the nonrecursive convolutional codes under MRB. However, for independent Rayleigh fading channels, the density function of the extrinsic information exhibits a clearly asymmetric shape, especially in the middle-to-high SNR region. Hence, the Gaussian approximation used in the AWGN channel analysis is no longer valid for independent Rayleigh fading channels. Fortunately, we find from simulation that, in this case, the statistical characteristics of the extrinsic information generated by the nonrecursive convolutional codes in collaborative decoding can be well approximated by a family of generalized asymmetric Laplace distributions [37]. This approximate parametric description makes it possible


to describe the statistical behavior of the extrinsic information by a small set of parameters. Hence, the density evolution model can be extended to the case of independent Rayleigh fading channels for collaborative decoding.

With proper statistical approximations and the density evolution model, the major work in the performance analysis for collaborative decoding with MRB exchange becomes the evaluation of the pairwise error probabilities (PEP) in the MAP decoding process. Different from LRB, due to the nature of the MRB information exchange scheme, the additional information at each decoder involves a sum of truncated extrinsic information terms from other decoders, and the number of such truncated terms is itself random. This makes the analysis of the PEP in MRB much more complicated than in LRB. In this chapter, we rely primarily on upper-bounding techniques and combinatorial arguments to derive the error probabilities. The Laplace transform and saddle point approximation techniques based on moment generating functions are the major tools used in evaluating the upper bounds.

The system model studied in this chapter is described in Section 4.1, and collaborative decoding with MRB exchange is described in Section 4.2.3. We only consider the performance analysis of nonrecursive convolutional codes in this chapter.

The remainder of this chapter is arranged as follows. In Section 6.1, we describe the Gaussian and generalized asymmetric Laplace approximations of the extrinsic information in collaborative decoding for the AWGN and independent Rayleigh fading channel models, respectively. In Section 6.2, a density evolution model is developed to evaluate statistical parameters of the extrinsic information. In Section 6.3, a uniform upper bound on the bit error rate (BER) is provided in terms of probabilities involving the extrinsic information in the current iteration. In Section 6.4, we further study the error event behavior in MAP decoding under the effect of MRB information exchange, and develop upper bounds on the PEPs involved in the BER bound. In Section 6.5, we address the numerical evaluation of the BER upper bound with


the statistical knowledge of the extrinsic information for the AWGN and independent Rayleigh fading channel models, respectively. Numerical results are then presented in Section 6.6. Finally, a summary is given in Section 6.7.

6.1 Statistical Approximation for Extrinsic Information

From the procedure of the MRB exchange scheme described in Section 4.2.3, we know that all nodes in the collaborative decoding process are symmetric. This implies that the statistical characteristics of the extrinsic information at all nodes are the same. Thus, we only need to consider a single node, for example, the Mth node. In order to study the extrinsic information generated by the MAP decoder in collaborative decoding, we will, without loss of generality, assume in the following that the all-zero codeword is transmitted through the channel. Under this assumption, we seek to find the probability distribution of the extrinsic information for each data bit. Our performance analysis will be based on knowledge of the statistical characteristics of the extrinsic information.

As we know, the extrinsic information is generated by finding the minimum (or maximum) over a large, theoretically infinite, set of sequence metrics in the MAP decoding process. The sequence metrics are non-identically distributed and mutually dependent. From extreme value theory, a closed-form distribution of the minimum (or maximum) of dependent, non-identically distributed random variables generally does not exist. In fact, even the type of distribution of the extrinsic information generated by a MAP decoder is very difficult to find analytically.

In this case, a feasible approach to simplify the study of the MAP decoding process is to employ approximate models to describe the distribution of the extrinsic information. By using a simple distribution that fits the observed histogram of extrinsic information obtained from simulations, the statistical behavior of the decoder can be quantified and studied in a semi-analytic way. This approach has been successfully applied to the study of the iterative decoding process for many codes such as turbo codes


and low-density parity check (LDPC) codes in [30, 31, 38, 39]. Analysis with this approximation approach turns out to give considerably accurate results in the study of the convergence properties of turbo codes and LDPC codes. Based on this approximation, effective convergence analysis techniques such as the density evolution model and the extrinsic information transfer (EXIT) chart have also been developed. These techniques have been widely used in the analysis of many iterative decoding, detection, and equalization algorithms. Following this idea, we also use an empirical approximation to avoid the analytically intractable problem of finding the distribution of the extrinsic information in MRB.

Generic distribution fitting or estimation is a well studied topic in statistics. There are many techniques available for learning the distribution of the extrinsic information in collaborative decoding. Techniques such as bootstrap sampling and mixture models can usually fit any distribution very well. However, these techniques are typically nonparametric methods, or parametric methods with a large number of parameters. Representing the extrinsic information by such distributions provides very little benefit to the analysis of the error events associated with the extrinsic information generated in the decoding process. Hence, we consider using analytic distributions with only a small number of parameters to approximate the statistical characteristics of the extrinsic information.

6.1.1 AWGN Channel

For AWGN channels, it is a well known observation that the extrinsic information of information bits generated by a MAP decoder for convolutional codes and turbo codes can be well approximated by independent Gaussian random variables. Although the error events that determine the decoding performance become much more complicated than those in a regular iterative decoding process, the Gaussian-like property of the extrinsic information still persists in collaborative decoding with MRB information exchange for nonrecursive convolutional codes.
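This Gaussian fit can be checked directly against decoder output. The following Python sketch, assuming a vector of extrinsic values collected from a simulated MAP decoder is available as `extrinsic` (a hypothetical input, not part of the dissertation's software), overlays the empirical histogram with a normal density whose mean and variance are the sample estimates; this is the kind of comparison shown in Fig. 6-1.

```python
import numpy as np
from scipy.stats import norm
import matplotlib.pyplot as plt

def plot_gaussian_fit(extrinsic, bins=80):
    """Compare the empirical pdf of extrinsic LLRs with a fitted Gaussian."""
    mu_hat = np.mean(extrinsic)            # sample mean  -> estimate of mu_j
    var_hat = np.var(extrinsic, ddof=1)    # sample variance -> estimate of sigma_j^2

    plt.hist(extrinsic, bins=bins, density=True, alpha=0.4, label="empirical pdf")
    x = np.linspace(np.min(extrinsic), np.max(extrinsic), 500)
    plt.plot(x, norm.pdf(x, loc=mu_hat, scale=np.sqrt(var_hat)),
             label=f"N({mu_hat:.2f}, {var_hat:.2f})")
    plt.xlabel("extrinsic information")
    plt.ylabel("density")
    plt.legend()
    plt.show()
```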


Figure 6-1: Empirical pdfs of the extrinsic information generated by the MAP decoder at successive iterations in collaborative decoding with MRB exchange for M = 6 on an AWGN channel with E_b/N_0 = 5 dB, for which the maximum free distance 4-state nonrecursive convolutional code is used.

Following the notation in Chapter 5, we use λ^j_{k,i} to denote the extrinsic information generated by the MAP decoder for the ith data bit at node k in the jth decoding iteration, and y_{k,i} to denote the ith sample of the channel observation at node k. In Fig. 6-1, we show typical histograms of the extrinsic information generated by the MAP decoder at successive iterations in collaborative decoding with MRB exchange for nonrecursive convolutional codes. True Gaussian density functions are compared with the histograms in the figure. Apparently, the Gaussian density functions approximate the histograms very closely. In fact, we have verified the accuracy of the Gaussian approximation for MRB by extensive simulations using different choices of system parameters, such as the number of nodes M, the information exchange percentages {p_j}, and the SNR.

Besides the Gaussian approximation, independence is another fundamental assumption for the extrinsic information in this chapter. In [31], Gamal and Hammons


established an independence assumption for turbo decoders: any collection of intrinsic information (channel observations) and extrinsic information generated by all of the constituent MAP decoders in all decoding iterations is pairwise independent. This assumption is justified by arguments based on the graph structure of the codes. As pointed out in Chapter 5, collaborative decoding is closely related to turbo decoding in the sense that the MAP decoders at different nodes can be viewed as the constituent decoders of a turbo decoder, and the information exchange process in collaborative decoding is analogous to the passing of extrinsic information among the constituent decoders in turbo decoding. Thus, the independence assumption for turbo decoders can be applied directly to our case.

Thus, similar to the case of LRB, we formalize the independent Gaussian assumption for the MAP decoder in collaborative decoding with MRB exchange and nonrecursive convolutional codes as follows:

Assumption 6.1.1 (Independent Gaussian Assumption) In collaborative decoding with MRB exchange and nonrecursive convolutional codes over AWGN channels, for arbitrary j ≥ 0, the random sequences
{y_{k,0}, y_{k,1}, ..., y_{k,i}, ...}, {λ^j_{k,0}, λ^j_{k,1}, ..., λ^j_{k,i}, ...}
for all k ∈ M are jointly Gaussian and statistically independent, in the sense that any finite collection of the y_{k,i}'s and λ^j_{k,i}'s is jointly Gaussian and pairwise independent. Also, for arbitrary k, r ∈ M such that k ≠ r, the random sequences
{λ^j_{k,0}, λ^j_{k,1}, ..., λ^j_{k,i}, ...}, {λ^l_{r,0}, λ^l_{r,1}, ..., λ^l_{r,i}, ...}
for all j, l ≥ 0 are jointly Gaussian and statistically independent.

Note that, because MRB exchange is a memory-based information exchange scheme, i.e., the information exchange process for a data bit i depends on


its status (candidate or non-candidate) in the previous iteration, the extrinsic information for the same bit at the same node in different iterations is generally not independent. That is, λ^j_{k,i} and λ^l_{k,i} for l ≠ j are not independent. This differs from the independence assumption established in [31].

With the above assumption, and based on the argument that all information bits are statistically equivalent,¹ for any fixed k and j the λ^j_{k,i} are identically distributed over all bit indices i. Thus, with the symmetry of the nodes in collaborative decoding, the λ^j_{k,i} for all k and i are i.i.d. Gaussian random variables for any given j. Specifically, λ^j_{k,i} ~ N(μ_j, σ_j²) for all k and i, where N(μ_j, σ_j²) denotes the Gaussian distribution with mean μ_j and variance σ_j². This means that the statistical behavior of all the extrinsic information generated at each iteration in collaborative decoding can be sufficiently described by two parameters: the mean and the variance.

6.1.2 Independent Rayleigh Fading Channel

For independent Rayleigh fading channels with perfect channel state information (CSI), the independence assumption for the extrinsic information still holds. However, the Gaussian assumption becomes invalid. Histograms of the extrinsic information obtained from simulation show a clear asymmetry of the distribution, especially in the middle-to-high SNR region. To the best of our knowledge, there is no previous result on fitting or approximating the distribution of the extrinsic information generated by MAP decoders over Rayleigh fading channels. In this chapter, we first propose to use the generalized asymmetric Laplace (GAL) distribution [37] to approximate the extrinsic information for Rayleigh fading channels.

¹ Strictly speaking, the bits at the end of a block may have different statistical behavior from the other bits. However, we usually assume very large block sizes in the decoding process, so that the effect of a few bits at the edges of a block on the average behavior of the block can be neglected.


A GAL probability density function (pdf) is defined as
$$f(x)=\frac{\sqrt{2}\,e^{\mu x/\sigma^{2}}}{\sigma\sqrt{\pi}\,\Gamma(\tau)}\left(\frac{|x|}{\sqrt{\mu^{2}+2\sigma^{2}}}\right)^{\tau-\frac{1}{2}}K_{\tau-\frac{1}{2}}\!\left(\frac{\sqrt{\mu^{2}+2\sigma^{2}}}{\sigma^{2}}\,|x|\right) \qquad (6\text{-}1)$$
for μ ∈ R and σ, τ > 0, where Γ(·) is the Gamma function and K_ν(·) is the modified Bessel function of the third kind with index ν. Any random variable X that has the pdf defined in (6-1) is a GAL random variable, and we denote this as X ~ GAL(μ, σ², τ). The moment generating function (mgf) φ(s) of a GAL random variable X is defined as the double-sided Laplace transform of f(x), i.e.,
$$\varphi(s)=E\!\left[e^{-sX}\right]=\frac{1}{\bigl(1+\mu s-\tfrac{\sigma^{2}}{2}s^{2}\bigr)^{\tau}}, \qquad (6\text{-}2)$$
and the region of convergence (ROC) of φ(s) is
$$\frac{\mu-\sqrt{\mu^{2}+2\sigma^{2}}}{\sigma^{2}}<\Re(s)<\frac{\mu+\sqrt{\mu^{2}+2\sigma^{2}}}{\sigma^{2}}. \qquad (6\text{-}3)$$
A GAL random variable X ~ GAL(μ, σ², τ) also admits the normal variance-mean mixture representation
$$X=\mu W+\sigma\sqrt{W}\,Z, \qquad (6\text{-}4)$$
where Z ~ N(0,1) and W is an independent Gamma random variable with shape parameter τ and unit scale,
PAGE 105

91 i.e.,thepdfofWisgx=x)]TJ/F21 7.97 Tf 6.587 0 Td[(1 \050e)]TJ/F23 7.97 Tf 6.587 0 Td[(x;x>0:{5AnotherimportantpropertyofGALrandomvariablesisself-decomposability.Thatis,givenanarbitrarynumberofpairwiseindependentGALrandomvariablesX1,X2,,Xi,withcommonparametersand2,i.e.,XiGAL;2;iforalli,thesumoftheserandomvariables,S=PiXi,stillhasaGALdistribution.ThedistributionofSisgivenasGAL;2;with=Pii.ThispropertycanbeeasilyprovedbyusingthecharacteristicfunctionofGALdistribution[ 37 ],andwillbeusefulinouranalysis.TheGALdistributioniscloselyrelatedtotheMAPdecodingprocessoverin-dependentRayleighfadingchannelswithperfectCSI.WithperfectCSI,thechannelobservationyk;idenedin 4{1 willbescaledbyLcgk;i,whereLcisthechannelreliabilitymeasuredenedin 5{30 andgk;iisthechannelfadinggaincorrespondedtotheobservationyk;i.ThisscaledsignalsequencefLcgk;iyk;igistheninputtothedecoderastheintrinsicinformationtoperformMAPdecoding.Sincethechannelgainsequencefgk;igarei.i.d.Rayleighrandomvariables,itiseasytoverifythatLcgk;iyk;itakesarepresentationconsistentwith 6{4 .ThismeansthattheintrinsicinformationoftheMAPdecoderisasequenceofi.i.d.GALrandomvariables.More-over,withtheself-decomposablepropertyofGALrandomvariable,theerroreventmetricduetotheintrinsicinformationintheMAPdecodingprocessalsohasaGALdistribution.Thiscanbeseenintheanalysis.Usually,thestatisticalbehaviorofMAPdecodersisdeterminedbythestatisticaldistributionofitsinputvariables[ 31 ].WiththeGALdistributedintrinsicinforma-tion,itturnsoutthatthestatisticcharacteristicsofextrinsicinformationgeneratedbytheMAPdecodersisveryclosetothoseofGALrandomvariables.Ingeneral,theGAL-likeextrinsicinformationisthenusedasaprioriinformationfortheMAPdecodersintheiterativedecodingprocess.ThisGAL-likeinputwiththeintrinsic

PAGE 106

92 informationwillgeneratethenewGAL-likeextrinsicinformationagain.Thus,theex-trinsicinformationgeneratedintheregulariterativedecodingprocess,suchasturbodecoding,canbeapproximatedbyGALrandomvariablesveryclosely.Duetoitssimplicityandniceproperties,itisattractivetousetheGALapprox-imationofextrinsicinformationforthepurposeofperformanceanalysis.Motivatedbythisreason,wealsoconsideremployingtheGALapproximationfortheanalysisofcollaborativedecodingwithMRBexchange.AlthoughtheaprioriinformationincollaborativedecodingmaydeviatefromtheGALdistributionsduetotheMRBinformationexchangeprocedure,weobserveempiricallythatthehistogramshapeoftheextrinsicinformationisnotdraggedtoofarawayfromGALdistributionsbythedeviation.ThisisillustratedinFig. 6{2 .Inthegure,thetypicalhistogramsoftheextrinsicinformationatsuccessiveiterationsincollaborativedecodingwithMRBexchangefornonrecursiveconvolutionalcodesarecomparedwiththecorrespondingGALdistributions.ThesimilaritybetweenthehistogramsandtheGALdistributionsjustiestheGALapproximationforextrinsicinformation.Basedontheabovediscussion,weformalizetheindependentGALassumptionforcollaborativedecodingoverindependentRayleighfadingchannelasfollows: Assumption6.1.2 IndependentGALAssumptionIncollaborativedecodingwiththeMRBexchangeandnonrecursiveconvolutionalcodesoverindependentRayleighfadingchannels,forarbitraryj0,therandomsequencesfyk;0;yk;1;;yk;i;g;fjk;0;jk;1;;jk;i;gforallk2MareGALdistributedandstatisticallyindependentinthesensethatanynitecollectionoftheyk;iandjk;iareGALandpairwiseindependent.Forarbitraryk6=rwithk;r2M,therandomsequencesfjk;0;jk;1;;jk;i;g;flr;0;lr;1;;lr;i;gforallj;l0areGALdistributedandstatisticallyindependent.

PAGE 107

93 Figure6{2:EmpiricalpdfsofextrinsicinformationgeneratedbytheMAPdecodersinsuccessiveiterationsincollaborativedecodingwiththeMRBexchangeforM=8andEb=N0=8dBonindependentRayleighfadingchannels,wherethemaximumfreedistance4-statenonrecursiveconvolutionalcodeisused. Again,noindependencyisassumedbetweentheextrinsicinformationatdierentiterationsforthesamebitandatthesamenode,i.e,jk;iandlk;iforj6=lmaynotbeindependent.SimilartotheargumentforAWGNchannels,wealsohavethatjk;iforallkandiarestatisticallyidenticalforanygivenj.Specically,jk;iGALj;2j;jforallkandi.Thus,thestatisticalbehaviorofalltheextrinsicinformationgeneratedateachiterationonaRayleighfadingchannelcanbesucientlydescribedbythethreeparameters,j,2jandj.6.2DensityEvolutionModelWithproperstatisticalapproximationsforextrinsicinformation,itispossibletoquantifythestatisticalbehavioroftheMAPdecoderwithonlyafewsetsofparame-tersintheiterativedecodingprocess.Thus,evaluatingthedistributionparametersfortheextrinsicinformationbecomesthenextnecessarystepforanalyzingthedecod-ingprocess.Sincethereisnoanalyticmethodavailableforndingtheparametersfor

PAGE 108

94 collaborativedecoding,theonlyapproachwecanuseistoestimatetheparametersbasedontheobservationsofextrinsicinformationfromsimulations.Thisideaissimilartothatofthedensityevolutiontechniqueproposedin[ 30 ]and[ 31 ]forturbodecodingandLDPCdecoding.Inthissection,wewilldevelopadensityevolutionmodelforthecollaborativedecodingprocess.Nevertheless,weneedtopointoutthatourgoalhereisverydierentfromthatinusualdensityevolutiontechniques.Intheusualdensityevolutiontechnique,decodercomponentsaretreatedassomeinput-outputfunctionsofdistributionparameters.Thetrajectoriesoftheinput-outputfunctionsareobtainedbysimulation.Thentheconvergencebehavioroftheiterativedecodingprocessisstudiedbasedonthesetrajectories.However,duetothereasonthatusuallyonlyafewinformationexchangeshencedecodingiterationscanbeperformedincollaborativedecoding,convergencepropertiesarenotofconcerninouranalysis.Hence,noinput-outputfunctionandtrajectoriesforthedecoderswillbeexploredhere.Ourgoalistoprovideanequivalentbutsimplerdensityevolutionmodelfortherealcollaborativedecodingprocesssothatthedistributionparametersfortheextrinsicinformationcanbeobtainedwithalowercomplexity.BasedonthestatisticalassumptionsestablishedinSection 6.1 andthesymme-tryofnodesinthecollaborativedecidingprocess,itiseasytoseethatthestatisticalparametersneededtodescribeofjk;i,forallkandi,canbefoundfromanynodeinthedistributedarray,andthebehavioroftheadditionalinformationcollectedaseachnodearestatisticallyequivalent.Also,thebehavioroftheMRBinformationexchangeprocesscanbedeterminedwiththeknowledgeofthestatisticalcharac-teristicsofjk;i.Hence,wemodeltheMRBinformationexchangeasanadditionalinformationgenerationprocesswiththestatisticalparametersthatdescribejk;iaretheinputandasequenceofadditionalinformationforanodearetheoutput.Thus,thecollaborativedecodingprocesscanbemodeledbythejointoperationofanaddi-tionalinformationgeneratingunitandtheMAPdecoderunitasshowninFig. 6{3

PAGE 109

95 Figure6{3:Densityevolutionmodelforcollaborativedecodingprocess TheoutputoftheadditionalinformationgeneratingunitisfedbacktotheMAPdecoderasaprioriinformationforthenextiterationofdecoding.6.2.1AdditionalInformationGenerationWerstconsiderthegenerationoftheadditionalinformationinthejthdecodingiteration.Recallthatduringthejthexchange,onlywhenthereliabilityi.e.,jjk;ijforacandidatebitiisrankedinthetoppjfractionamongthecandidatebitssetatnodek,theextrinsicinformationjk;icanbebroadcasttoothernodes,asspeciedintheMRBexchangeprocess.LetBjidenotetheeventthatbitiisacandidatebit,thenbasedonthei.i.d.assumptionforjk;iforallkandiinSection 6.1 ,wecanseethattheeventsBjmandBlnform6=nandarbitraryj;lareindependent.Toformalizetheinformationexchangeprocess,wefurtherintroducethefollowingassumption. Assumption6.2.1 TheMRBinformationexchangecriterionforthecandidatebitiatnodekinthejthiterationisequivalenttojjk;ijTj,forsomeTj0satisfyingPjjk;ijTjjBji=pj:{6Letjk;idenotetheadditionalinformationfortheithbitatthekthnodegener-atedbytheMRBexchangeschemeinthejthiteration.AccordingtoSection 4.2.3 ,jk;iactuallyisthesumofextrinsicinformationcollectedfromothernodesforbiti

PAGE 110

96 inthejthiteration.Thisadditionalinformationwillbeaddedtotheaprioriinfor-mationinthej+1thiterationbynodek.AccordingtotheMRBscheme,ifbitiisanon-candidatebitinthejthiteration,thenjk;i=0.Otherwise,therearethreepossibilitiesforthecaseofacandidatebit: i Nonodeinthedistributedarraybroadcastsinformationfortheithbit.AccordingtoAssumption 6.2.1 ,thiscorrespondstotheeventTt2Mjjt;ij
PAGE 111

97 where Kjk;iisthecomplementarysetofKjk;iinM0k.Hence,theeventRjk;iisformedbytheunionofallpossibleeventsRKjk;i,i.e.,Rjk;i=[Kjk;iM0kRKjk;i:{10Inthiscase,theadditionalinformationjk;iwilldependonthesetKjk;i.ForaparticulareventRKjk;i,nodekwillobtaininformationfromallnodesinKjk;i.WedenotethisadditionalinformationasKjk;i,whichisgivenbyKjk;i=Xt2Kjk;ijt;i;witht2Kjk;ijjt;ijTj:{11SincefRKjk;igaredisjointevents,theadditionalinformationjk;icanbewrit-tenasjk;i=Kjk;i:{12AccordingtotheMRBscheme,onlyinthecaseiofallabovethreecases,thecandidatebitiwillstillbeacandidatebitinnextiteration.Hence,wehaveP)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(Bj+1iBji=Pk2Mjjk;ij
PAGE 112

98 candidatebiti,thebitwillbecomeanon-candidatebitinthenextiteration.TheprobabilityofthiseventcanbewrittenasP)]TJET1 0 0 1 188.909 661.797 cmq[]0 d0 J0.478 w0 0.239 m30.005 0.239 lSQ1 0 0 1 -188.909 -661.797 cmBT/F25 11.955 Tf 188.909 648.274 Td[(Bj+1iBji=P)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(Sjk;iBji+P)]TJ/F25 11.955 Tf 5.48 -9.684 Td[(Rjk;iBji=P)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(Sjk;iBji+XKjk;iM0kP)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(RKjk;iBji; {16 wherethefactthatSjk;iandRjk;iaredisjointeventsisused.Inabove,wehaveanalyzedtheadditionalinformationjk;iforallk2MintheMRBexchangeprocess.NowweconsidertheadditionalinformationgeneratinginthedensityevolutionmodelshowninFig. 6{3 .Duetothesymmetryofnodes,thedensityevolutionmodelincludesonlyoneMAPdecoder.Withoutlossofgenerality,weassumethattheMAPdecoderisattheMthnode.Thus,weonlyneedtogeneratejM;iinthemodel.Accordingly,wedescribetheadditionalinformationgeneratingunitinthedensityevolutionmodelasfollows: 1. Inthejthiteration,takefjM;igforalli,andthestatisticalparametersTjandfj;2jgforAWGNchannelsorfj;2j;jgforRayleighfadingchannelsastheinput. 2. SetjM;i=0forallnon-candidatebits. 3. Foreachcandidatebiti,generateM)]TJ/F15 11.955 Tf 10.02 0 Td[(1i.i.d.randomvariablesjk;ifork2M0MaccordingtotheconditionaldistributionofjM;igiventhatbitiisacandidatebit.WeobtainthisconditionaldistributionempiricallyasdescribedinSec-tion 6.2.2 below. 4. CheckeventsSjM;iandfRKjM;igaccordingto 6{7 and 6{9 .IfSjM;iorRjM;ioccurs,thensetjM;i=0orapply 6{12 ,respectively,andagbitiasanon-candidatebitforthenextiteration.Otherwise,setjM;i=0.

PAGE 113

99 6.2.2FindingParametersinDensityEvolutionModelTosimulatethecollaborativedecodingprocesswithMRBexchange,theMAPdecoderinthedensityevolutionmodelwillaccumulatetheadditionalinformationlM;iineachiteration,forl<>:lM;iifRlM;ioccurs0otherwise; {17 Withthisaprioriinformationfj+1M;igandtheintrinsicinformationfromchan-nelobservationasinput,weruntheMAPdecodertogeneratetheextrinsicinforma-tionfj+1M;igforthej+1thiteration.Basedontheobservationoffj+1M;ig,wecanestimatetheirdistributionparame-tersaccordingtothedistributionassumptionsinSection 6.1 .ForAWGNchannels,wehavej+1M;iNj+1;2j+1.Inthiscase,theparametersj+1and2j+1areob-tainedbythesamplemeanandvarianceoffj+1M;ig,respectively.ForRayleighfadingchannels,wehavej+1M;iGALj+1;2j+1;j+1.Inthiscase,weusethemethodofmomentstoestimatej+1,2j+1andj+1.ForarandomvariableXGAL;2;,wecanusethestandardmethods,suchasthemethodofmomentsandmaximumlikelihoodestimation,toestimatethetheparameters,2and.Besidesthedistributionparameters,wealsoneedtoestimatethethresholdTj+1denedin 6{6 .Sincetheconditionalprobabilityin 6{6 isverycomplicated,analyticsolutionforTj+1isnotreadilyavailable.Inordertoestimatethevalue

PAGE 114

100 ofTj+1,werstusenonparametricmethodtoestimatethecumulativedistributionfunctionFj+1xoftheextrinsicinformationforthecandidatebits,i.e.,Fj+1x=Pj+1M;i

101 Figure6{4:Comparisonofmeanandvarianceoftheextrinsicinformationobtainedfromthedensityevolutionmodelandthatfromtheactualcollaborativedecodingprocess. Figure6{5:Comparisonofthresholdestimatedfromthedensityevolutionmodelandthatfromtheactualcollaborativedecodingprocess.


102 fordatabits,theerrorperformanceisthesameforalldatabits.Hence,wedropthebitindex,i.e.,thesubscripti,inthenotationofvariablesandeventsforthebitofinterest.Forconvenience,wealsodropthesubscriptMfortheMthnodeinfollowingderivation.Undertheassumptionthattheall-zerodatabitsequenceistransmitted,theBERoftheMAPdecoderinthejthj1iterationistheprobabilitythatthesoftoutputofabitisnegative.SincethesoftoutputofaMAPdecoderisthesumofitsextrinsicinformationandaprioriinformation,theBERisgivenasPjb=Pj+j<0;{20wherejistheextrinsicinformation,andjistheaprioriinformationinthejthiterationgivenin 6{17 attheMthnode,respectively.SimilartothecaseofLRBexchange,weevaluatetheerrorperformancebyndinganupperboundfor 6{20 .Accordingto 6{17 andtheanalysisinSection 6.2.1 ,theaprioriinformationforacandidatebitmustbezero,i.e.,Pj=0jBj=1:{21Foranon-candidatebit,werstconsideritsstatustransitionfromacandidatebittoanon-candidatebit.AccordingtotheMRBscheme,alldatabitsareinitializedascandidatebitsinthecollaborativedecodingprocess.Onceabitbecomesanon-candidatebitinaniteration,itsstatuswillnotbechangedinlateriterations,i.e.,P Bj+1 Bj=1Thus,wehaveP Bj=P Bj;Bj)]TJ/F21 7.97 Tf 6.587 0 Td[(1+P Bj)]TJ/F21 7.97 Tf 6.587 0 Td[(1:{22Withthisrecursiverelationand 6{16 ,wecanobtainP Bj=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xl=0P Bl+1;Bl=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xl=0PSl;Bl+PRl;Bl: {23


103 From 6{17 ,weknowthatonlyiftheeventRloccursforcertainl0.


104 Nowweconsiderthersttwotermsin 6{27 .With 6{7 and 6{13 ,thersttwotermsin 6{27 canbewrittenasPj<0;Bj+Pj<0;Sj)]TJ/F21 7.97 Tf 6.587 0 Td[(1;Bj)]TJ/F21 7.97 Tf 6.586 0 Td[(1=Pj<0;k2Mjj)]TJ/F21 7.97 Tf 6.586 0 Td[(1kj
PAGE 119

105 Proof Basedonthedenitionin 6{15 ,theprobabilityofthedatabitbeingacandidatebitinthej+1thdecodingiterationisgivenbyPBj+1=PMk=1njl=0jlkj
PAGE 120

106 Thus,theprobabilityPj<0;Sl;Blfor0lj)]TJ/F15 11.955 Tf 12.209 0 Td[(2canbeupperboundedasPj<0;Sl;BlPl<)]TJ/F22 11.955 Tf 9.299 0 Td[(Tl+Pj<0;lTllYt=0)]TJ/F22 11.955 Tf 10.002 0 Td[(ptM)]TJ/F21 7.97 Tf 6.587 0 Td[(1:{38Now,weupperboundtheprobabilitiesinthelasttermof 6{27 .Byusing 6{9 through 6{12 ,wehavePj+l<0;Rl;Bl=XKlM0Pj+Kl<0;RKl;Bl:{39AccordingtoAssumption 6.1.1 andAssumption 6.1.2 ,extrinsicinformationgeneratedbydierentnodesforthesamebitinaniterationisi.i.d..Then,thestatisticalcharacteristicsofKlandprobabilityofeventRKldependonlyonthecardinalsizeofKl,jKlj.Hence,forallsetsKlwiththesamecardinalsize,theprobabilitiesPj+Kl<0;RKl;Blareequal.Forconvenience,wedenotethecardinalsizeofKlbyKl,i.e.,Kl=jKlj.ThenthetotalnumberofpossiblechoicesforsubsetKlinM0is)]TJ/F23 7.97 Tf 5.48 -4.379 Td[(M)]TJ/F21 7.97 Tf 6.587 0 Td[(1Kl.Withoutlossofgenerality,weonlyneedtoconsiderthecaseofKl=f1;2;;Klg.Inthiscase,wewillusethenotationsKlandRKltoreplaceKlandRKlforKl=f1;2;;Klg,respectively.Withtheabovearguments,wehaveXKlM0Pj+Kl<0;RKl;Bl=M)]TJ/F21 7.97 Tf 6.586 0 Td[(1XKl=1M)]TJ/F15 11.955 Tf 11.956 0 Td[(1KlPj+Kl<0;RKl;Bl: {40

PAGE 121

107 ByinsertingthedenitionofKl,RKlandBl,weexpandandupperboundtheprobabilityPj+Kl<0;RKl;Blin 6{40 asfollows:Pj+Kl<0;RKl;Bl=Pj+Kl<0;Klk=1jlkjTl;M)]TJ/F21 7.97 Tf 6.586 0 Td[(1k=Kl+1jlkj
PAGE 122

108 arbitrarynodeinthejthdecodingiteration,forj1,isupperboundedasPjbaM)]TJ/F21 7.97 Tf 6.586 0 Td[(1j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Pj<0+j)]TJ/F21 7.97 Tf 6.586 0 Td[(2Xl=0aM)]TJ/F21 7.97 Tf 6.587 0 Td[(1lPl<)]TJ/F22 11.955 Tf 9.298 0 Td[(Tl+Pj<0;lTl+j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0M)]TJ/F21 7.97 Tf 6.587 0 Td[(1XKl=1bl;KlPj+KlXk=1lk<0;Klk=1jlkjTl; {44 whereal=8><>:Qlt=0)]TJ/F22 11.955 Tf 11.955 0 Td[(ptl01l<0;{45bl;K=M)]TJ/F15 11.955 Tf 11.955 0 Td[(1KaM)]TJ/F23 7.97 Tf 6.587 0 Td[(K)]TJ/F21 7.97 Tf 6.587 0 Td[(1l)]TJ/F21 7.97 Tf 6.587 0 Td[(1;{46andflkgaretheextrinsicinformationgeneratedbyallnodesinthelthiteration,undertheassumptionthattheall-zerocodewordistransmitted.Notethat,accordingtothedenitionin 6{28 ,thesummationPj)]TJ/F21 7.97 Tf 6.587 0 Td[(2l=0in 6{44 equals0forj<2.Sincetheprobabilitydistributionassumptionsfortheextrinsicinformationarenotusedinthederivation, 6{44 isageneralboundvalidforbothofAWGNchannelsandRayleighfadingchannels.Forthespecialcaseofonlyoneexchangeandp0=1,i.e.,allnodesbroadcasttheextrinsicinformationforalldatabitsintherstthdecodingiteration,wehavethefollowingcorollary. Corollary6.3.3 IncollaborativedecodingwithMRBexchangeandnonrecursivecon-volutionalcodes,ifp0=1,thenthenalBERisupperboundedbyPbP+M)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xk=1k<0:{47 Proof Sincep0=1,alldatabitswillbecomenon-candidatebitsimmediatelyaftertherstinformationexchange.Thus,only1exchangei.e.,2decodingiterationscanbeperformedinthecollaborativedecodingprocess,i.e.,PbisthenalBER.Withp0=1,weknowa0=0andb0;K0=0,forK6=M)]TJ/F15 11.955 Tf 12.052 0 Td[(1,andb0;K=1,forK=M)]TJ/F15 11.955 Tf 12.343 0 Td[(1,from 6{45 and 6{46 .Accordingto 6{19 ,p0=1yieldsT0=0

PAGE 123

109 regardlessthedistributionofk.Hence,PTKk=1jkjT0=1.Applyingtheseresultsto 6{44 forj=1,weobtainthecorollary.Infact,forthisspecialcase,wecaneasilyseethattheaprioriinformation=PM)]TJ/F21 7.97 Tf 6.586 0 Td[(1k=1k.Insertinginto 6{20 directlyyieldsPb=P+PM)]TJ/F21 7.97 Tf 6.587 0 Td[(1k=1k<0,whichisconsistentwithCorollary 6.3.3 .TheconsistencyveriesthevalidityandtightnessoftheboundsinTheorem 6.3.2 forthissimplecase.6.4ErrorEventsandProbabilitiesAnalysisTheorem 6.3.2 givesanupperboundoftheBERPjbintermsofprobabilitiesinvolvingtheextrinsicinformationgeneratedinthejthandallpreviousiterationsofthecollaborativedecodingprocess.Inordertoevaluatetheupperbound,itisnecessarytofurtherevaluatethoseprobabilitiesusingthestatisticalassumptionsinSection 6.1 fortheextrinsicinformationforAWGNchannelsandRayleighfadingchannels,respectively.Forthispurpose,wewillstudythedecodingprocesswithMRBinformationexchangefromtheviewpointofsequentialdecodingerroreventbehaviorduetothefollowingreasons.Firstly,asmentionedpreviously,ourgoalistoevaluatetheBERperformanceinthejthiterationbyonlyusingthestatisticalknowledgeofextrinsicinformationuptothej)]TJ/F15 11.955 Tf 11.746 0 Td[(1thiteration.Thisallowsustopredictthecollaborativedecodingperformancewithlessknowledgeabouttheextrinsicinformation.Thus,weneedtoexpresstheprobabilitiesinvolvingjin 6{44 intermsoflforl

110 thestatisticalassumptionfortheextrinsicinformationinSection 6.1 .Thus,inordertoevaluationofPj<0;lTl,analysisofthedecodingprocessinthecontextofMRBinformationexchangeisnecessary.Inthefollowing,westudytheprobabilitiesinvolvingjbasedonthegeneralizedunionboundformax-log-MAPdecodingandtheerroreventsanalysiswithMRBinformationexchange.6.4.1UnionBoundforCollaborativeDecodingFollowingthenotationinSection 5.2.2 ,wedenoteanonrecursiveconvolutionalcodeasabinarysequencemappingC:u !c ,whereu andc aredatabitsequenceandthecorrespondingcodeword.Letu =u0;u1;;ui;andc =c0;c1;;ci;,thenuiandci2f0;1garethedatabitandcodedbit,respectively.Consistentwiththedenitionin 4{1 ,wedenoteyithereceivedBPSKi.e.,xi=1)]TJ/F15 11.955 Tf 12.775 0 Td[(2ciin 4{1 signalatthedecoderofinterest.Again,throughoutthefollowinganalysis,weassumethattheall-zerosequenceistransmitted.Then,weapplythegeneralizedunionboundinSection 5.2.2 tothecaseofcollaborativedecodingwiththeMRBexchangescheme.Thatis,Pj
PAGE 125

111 positionforlargecodingblocksize[ 14 34 ],theunionboundisvalidforarbitrarydatabit.Thus,wehavedroppedthesubscript0inj0forclarity.Also,wehaveindexedtheotherw)]TJ/F15 11.955 Tf 11.195 0 Td[(1non-zerobitsuiinu asi=1;2;;w)]TJ/F15 11.955 Tf 11.195 0 Td[(1in 6{49 withoutlossofgenerality.Inordertoapplytheunionbound 6{48 toouranalysis,weneedtoevaluatethePEPP)]TJ/F21 7.97 Tf 11.866 6.11 Td[(jw;d
PAGE 126

112 wherewedonotdistinguishthedierencebetweenofbitandbitindexforconve-nience.Further,wedene_AlasasubsetofAlthattheeventSlioccursfori2_Al,andAlasthecomplementarysubsetof_AlinAlthattheeventRKlioccursfori2Al,i.e.,_Al=fi:1iw)]TJ/F15 11.955 Tf 11.955 0 Td[(1;Sli;Blig;{51andAl=fi:1iw)]TJ/F15 11.955 Tf 11.955 0 Td[(1;RKli;Blig:{52Withthesedenitions,wecanseethatinfuigw)]TJ/F21 7.97 Tf 6.586 0 Td[(1i=1,onlythebitsfori2AlcanobtaintheadditionalinformationKliinthelthiteration.SinceallbitsinAlbecomenon-candidateinlateriterations,wehaveAlAk=?forl6=k.Thus,inthejthiteration,Sj)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0Alformsthecompletenon-candidatebitsetinfuigw)]TJ/F21 7.97 Tf 6.586 0 Td[(1i=1.Theremainingbitsformthecandidatebitsetinthatiteration.WedenotethissetbyBj,whichisgivenbyBj= j)]TJ/F21 7.97 Tf 6.587 0 Td[(1[l=0Al:{53Basedontheabovedenitions,weformalizethecorrespondinginformationex-changeeventforthesequencefuigw)]TJ/F21 7.97 Tf 6.587 0 Td[(1i=1duringtherstjexchangesasWj=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0ni2_AlSli;i2AlRKli;i2AlBlio;i2BjBji:{54With 6{12 6{17 and 6{49 ,theerroreventmetricassociatedwiththeinforma-tionexchangeeventWjcanbewrittenas)]TJ/F21 7.97 Tf 7.315 6.111 Td[(jw;dWj=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xl=0Xi2AlKli+Yd=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xl=0Xi2AlXk2Klilk;i+Yd;{55whereYd=Lcd)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xi=0giyi:{56

PAGE 127

113 DierentchoicesofsetsfAl;_AlAlgandfKligi2Alforl
PAGE 128

114 PTj)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0Ti2_AlSli;BliandPTi2BjBjiin 6{60 asfollows.Pj)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0i2_AlSli;Bli=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yl=0Yi2_AlP)]TJ/F25 11.955 Tf 5.48 -9.683 Td[(Sli;Bli=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Yl=0Yi2_AlPjlM;ijTl;k2M0jlk;ij

116 thatassociatewithWj.Forthispurpose,letml=jAlj;andnl=jAlj{64with0nlml.SinceSliandRliaredisjointevents,weknowthat_AlAl=?,andAl=_Al[Al.Hence,j_Alj=ml)]TJ/F22 11.955 Tf 11.955 0 Td[(nl;{65whichissucientlydeterminedby 6{64 .Forconvenience,wedenotethenumberofcandidatebitsinfuigw)]TJ/F21 7.97 Tf 6.586 0 Td[(1i=1inthelthdecodingiterationbywl.Sincethenumberofcandidatebitsremovedafterthelthiterationisml,wehavewl+1=wl)]TJ/F22 11.955 Tf 11.955 0 Td[(ml;withw0=w)]TJ/F15 11.955 Tf 11.955 0 Td[(1;{66and0mlwl;foralll0:{67With 6{66 ,thecardinalsizeofBjdenedin 6{53 isgivenbyjBjj=wj:{68Wealsodenelasthetotalnumberofcopiesofextrinsicinformationcollectedforsequencefuigw)]TJ/F21 7.97 Tf 6.587 0 Td[(1i=1inthelthinformationexchange,i.e.,l=Xi2AljKlij:{69Since1jKlijM)]TJ/F15 11.955 Tf 11.955 0 Td[(1,wehavenllnlM)]TJ/F15 11.955 Tf 11.955 0 Td[(1:{70


117 ApplyingtheabovenotationtotheupperboundinLemma 6.4.1 ,thecoecientcjin 6{59 canbewrittenascj=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Yl=0hpml)]TJ/F23 7.97 Tf 6.586 0 Td[(nll)]TJ/F22 11.955 Tf 11.955 0 Td[(plmlM)]TJ/F21 7.97 Tf 6.586 0 Td[(1+wjM)]TJ/F23 7.97 Tf 6.586 0 Td[(ll)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yt=0)]TJ/F22 11.955 Tf 11.955 0 Td[(ptmlM)]TJ/F23 7.97 Tf 6.587 0 Td[(nl)]TJ/F23 7.97 Tf 6.586 0 Td[(li:{71AccordingtoCorollary 6.4.2 ,giventhedistributionofflk;igandthresholdTlforl
PAGE 132

118 Thus,theupperbound 6{58 canberewrittenasP)]TJ/F15 11.955 Tf 5.48 -9.684 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(jw;dWj
PAGE 133

119 withwlcalculatedby 6{66 ,andM0nl;l=nlM0l)]TJ/F23 7.97 Tf 11.955 17.464 Td[(nldl M0eXi=1nliM0nl)]TJ/F22 11.955 Tf 11.955 0 Td[(i;l;{81wheredemeanstakingthegreatestintegernolargerthan,andM0n;=0forn
PAGE 134

120 NowweevaluateM0nl;lin 6{82 .DirectcalculationofM0nl;lisusuallydicultduetotheconstraintf1KliM0gnli=1.Toavoidthisdiculty,werstconsiderthechoiceofsetf~Kligi2AlfromM0subjecttothelooserconstraintsthatf0~KliM0gnli=1andl=Pnli=1~Kli,where~Kli=j~Klij.Forconvenience,wedenotethetotalnumberofpossiblechoicesforf~Kligi2Alas~M0nl;l.Sincechoosingf~Kligi2AlisequivalenttorandomlychoosinglnodesfromasetofnlM0nodeswithoutreplacement,wehave~M0nl;l=Xf0~KliM0gnli=1l=Pnli=1~KlinlYi=1M0~Kli=nlM0l:{83Meanwhile,wenotethatbyremovingallsetswith~Kli=0i.e.,~Kli=?foranyi2Alfromtheensembleofallf~Kligi2AlwecanobtaintheensembleofallpossiblefKligi2Al.Namely,theensembleforfKligi2Alisasubsetofthatforf~Kligi2Al.Tospecifytheensembleoff~Kligi2Alwithatleastone~Kli=0,letEdenoteasubsetofAlsuchthat~Kli=0fori2E,andlet EdenoteitscomplementarysetinAl.Thus,wehave1~KliM0fori2 E.Accordingtotheconstraintl=Pnli=1~Kli,itcanbeseenthatEmustsatisfy1jEjnl)-262(dl M0e.Withtheabovearguments,~M0nl;lcanbewrittenas~M0nl;l=Xf1~KliM0gnli=1l=Pnli=1~KlinlYi=1M0~Kli+XEAl1jEjnldl M0eXf1~KliM0gi2 El=Pi2 E~KlinlYi=1M0~Kli=Xf1~KliM0gnli=1l=Pnli=1~KlinlYi=1M0~Kli+nldl M0eXi=1nliXf1~KljM0gnl)]TJ/F24 5.978 Tf 5.756 0 Td[(ij=1l=Pnl)]TJ/F24 5.978 Tf 5.756 0 Td[(ij=1~Kljnl)]TJ/F23 7.97 Tf 6.586 0 Td[(iYj=1M0~Klj=M0nl;l+nldl M0eXi=1nliM0nl)]TJ/F22 11.955 Tf 11.955 0 Td[(i;l: {84

PAGE 135

121 Byinserting 6{83 into 6{84 ,wecanestablisharecursionforM0nl;lasin 6{81 .WiththeinitialconditionM0n;=0forn
0.Then,byswitchingthesummationorderinPWjwithoutanychangeonfWj,wehaveXWjfWj=w00M0X0=0w01M0X1=0w0j)]TJ/F18 5.978 Tf 5.756 0 Td[(1M0Xj)]TJ/F18 5.978 Tf 5.756 0 Td[(1=0w000Xm0=d0 M0ew001Xm1=d1 M0ew00j)]TJ/F18 5.978 Tf 5.756 0 Td[(1Xmj)]TJ/F18 5.978 Tf 5.757 0 Td[(1=dj)]TJ/F18 5.978 Tf 5.756 0 Td[(1 M0ew0000Xn0=d0 M0ew0001Xn1=d1 M0ew000j)]TJ/F18 5.978 Tf 5.756 0 Td[(1Xnj)]TJ/F18 5.978 Tf 5.757 0 Td[(1=dj)]TJ/F18 5.978 Tf 5.756 0 Td[(1 M0efWj; {85 wherew0l+1=w0l)-226(dl M0e,w00l=wl)]TJ/F28 11.955 Tf 11.998 8.966 Td[(Pj)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=l+1dt M0e,w000l=minfml;lg,wl+1=wl)]TJ/F22 11.955 Tf 11.997 0 Td[(mlforalll,andw00=w0. Proof SeeAppendix C .ApplyingTheorem 6.4.4 to 6{78 ,wecanobtainanewformoftheupperboundforprobabilityP)]TJ/F21 7.97 Tf 11.867 6.11 Td[(jw;d
PAGE 136

122 Theorem6.4.5 ForthecollaborativedecodingwithMRBinformationexchangeandnonrecursiveconvolutionalcodes,giventhenodesnumberMandtheinformationexchangeparametersfpl;Tlgj)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0,wehavethefollowingupperboundfortheerroreventmetric)]TJ/F21 7.97 Tf 7.314 6.111 Td[(jw;ddenedin 6{49 foranyarbitraryconstantx,P)]TJ/F21 7.97 Tf 11.867 6.111 Td[(jw;d
PAGE 137

123 Theorem6.4.6 ForcollaborativedecodingwithMRBinformationexchangeandnon-recursiveconvolutionalcodes,giventhenodesnumberMandtheinformationex-changeparametersfpl;Tlgj)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0,wehavethefollowingupperboundfortheextrinsicinformationjandanyarbitraryconstantx,Pj
PAGE 138

124 eventsforsequenceu .Thus,wehavePj+KlXk=1lk<0;Klk=1jlkjTl=P)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(j+X<0;TlKl=P)]TJ/F22 11.955 Tf 5.48 -9.683 Td[(j+X<0TlKlP)]TJ/F25 11.955 Tf 5.479 -9.683 Td[(TlKl=Pj+X<0P)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(TlKl: {91 SupposethattherandomvariableXhasapdfoffx.SinceXisindependentofjandtheassociatederrorevents,byemployingTheorem 6.4.6 ,wecanobtainPj+X<0=ZPj+X<0jX=xfxdx=ZPj+x<0fxdxZ1 KcXddminXw1wAw;dXVj}VjP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj+x<0;T~Vjfxdx=1 KcXddminXw1wAw;dXVj}VjZP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj+x<0;T~Vjfxdx=1 KcXddminXw1wAw;dXVj}VjP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj+X<0;T~Vj: {92 Inserting 6{92 into 6{91 ,wehavePj+KlXk=1lk<0;Klk=1jlkjTl1 KcXddminXw1wAw;dXVj}VjP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj+X<0;T~VjP)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(TlKl=1 KcXddminXw1wAw;dXVj}VjP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj+X<0;T~Vj;TlKl: {93

PAGE 139

125 NowweconsidertheprobabilityP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj+X<0;T~Vj;TlKlin 6{93 .Bysubstitutingthedenitionsof~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj,X,T~VjandTlKlintoit,weobtainP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.315 -1.793 Td[(dVj+X<0;T~Vj;TlKl=Pj)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xt=0tXi=1~ti+KlXk=1lk+Yd<0;j)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=0ti=1j~tijTt;Klk=1jlkjTl!a=Pj)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xt=00tXi=1~ti+Yd<0;j)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=00ti=1j~tijTt;!b=P)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dV0j<0;T~V0j; {94 Intheabove,aisobtainedbysubstitutingf~ligl+Kli=l+1forfligKli=1basedonthefactthat~liandliareindependentandhavethesamedistribution.Withthedenition0t=tfort6=land0l=l+Kl,thissubstitutionwillnotchangetheprobability.Inaddition,bisobtainedbasedonthedenitionsof)]TJ/F23 7.97 Tf 349.015 -1.793 Td[(dVjandT~VjwithV0j=f0tgj)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0.Finally,byinserting 6{94 into 6{93 ,corollary 6.4.7 isproved.Tothispoint,weonlyhaveprobabilityPj<0;lTllefttoupperbound.Dierentfromthepreviousresults,thisprobabilityinvolvesthecorrelationbetweentheextrinsicinformationforthesamedatabitsindierentiterations,i.e.,jandlforl
PAGE 140

126 decodingiterations.Thenwehavethefollowingupperbound,Pj<0;lTl1 KcXddminXw1wAw;dXVj}VjP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVlTl;T~Vj; {95 whereVl=ftgl)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0. Proof Withtheapproximationofl=minu ;c 2C)]TJ/F28 11.955 Tf 8.745 9.385 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(lu ;c forl
PAGE 141

127 Then,replacingP)]TJ/F15 11.955 Tf 5.479 -9.684 Td[()]TJ/F21 7.97 Tf 7.314 6.11 Td[(jw;dWj
PAGE 142

128 forAWGNchannels,andbyPl<)]TJ/F22 11.955 Tf 9.298 0 Td[(Tl=G)]TJ/F22 11.955 Tf 9.299 0 Td[(Tl;l;2l;l{102forindependentRayleighfadingchannel,respectively.Inabove,QistheGaussianQ-functionandG;;2;isthecdfofthedistributionGAL;2;.AnecientmethodforthenumericalevaluationofG;;2;isgiveninAppendix D .ThustheproblemofevaluatingtheBERboundbecomesevaluatingtheprob-abilitiesP)]TJ/F15 11.955 Tf 6.21 -6.661 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.794 Td[(dVj<0;T~Vj,P)]TJ/F15 11.955 Tf 6.21 -6.661 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.794 Td[(dV0j<0;T~V0jandP)]TJ/F15 11.955 Tf 6.21 -6.661 Td[(~)]TJ/F23 7.97 Tf 7.315 -1.794 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.314 -1.794 Td[(dVlTl;T~VjforallpossibleVj,giventhestatisticalknowledgeofthesequencef~lig.Withthedenitionsof~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVjandT~Vj,weknowthattheseprobabilitiesarees-sentiallyaboutsumsofindependenttruncatedrandomvariables,forwhichtheexactcalculationareusuallydicult.Inthiscase,weagainrelyonupper-boundingtech-niquestoapproximatetheprobabilities.Intheliterature,thewellstudiedcasesconcernwithsumsofinnertruncatedrandomvariables,i.e.,jXij
PAGE 143

129 Pj~lij0=1,thetruncationfj~lijTlgforallicanberemovedfromtheeventT~Vjandtherandomvariablesf~ligcanbemergedintoYdforsimplicity.Inordertoincludethisspecialcaseintothefollowingderivation,wedeneaniterationindexLjsuchthatTl>0forlj)]TJ/F15 11.955 Tf 11.955 0 Td[(1,thenTl=0doesnotoccurforalll.Withthisdenition,wehave~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj=j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0lXi=1~li+Yd=SVL+Y0d; {103 whereVL=flgL)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0,SVL=L)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0lXi=1~li;{104andY0d=j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xl=LlXi=1~li+Yd:{105Fromtheabove,letY=dLc+j)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xi=Lll;and2Y=2dLc+j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xi=Ll2l;{106thenY0dNY;2Y.Withtheabovenotation,wehaveP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;T~Vj=P)]TJ/F22 11.955 Tf 5.48 -9.684 Td[(SVL+Y0d<0;T~VL:{107WeevaluatetheprobabilityP)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(SVL+Y0d<0;T~VLfordierentvaluesofVLasfollows.WhenVL=f0g,i.e.,l=0forl
PAGE 144

130 withYandYdenedin 6{106 .WhenSVLcontainsonlyonerandomvariable,i.e.,l=1andt=0witht6=linVLforarbitraryl
PAGE 145

131 Proposition6.5.1 ThemgfofthetruncatedGaussianrandomvariable^lidenedin 6{111 with~liNl;2lisgivenby^ls=E[e)]TJ/F23 7.97 Tf 6.587 0 Td[(s^li]=lshls P)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(j~lijTl; {113 wherelsismfgoftheGaussianrandomvariable~li,i.e.,ls=e)]TJ/F23 7.97 Tf 6.586 0 Td[(sl+s22l 2;{114hlsisdenedashls=QTl+l+s2l l+QTl)]TJ/F22 11.955 Tf 11.956 0 Td[(l)]TJ/F22 11.955 Tf 11.955 0 Td[(s2l l;{115andtheROCis<<+1. Proof Since~liNl;2l,with 6{111 wecanobtainthepdfof^lias^flx=8><>:1 Pj~lijTlx;l;2ljxjTl0)]TJ/F22 11.955 Tf 9.299 0 Td[(Tl
PAGE 146

132 Forconvenience,denote^sthemgfof^SVL+Y0d,thenwiththefactthatY0d;f^ig;;f^L)]TJ/F21 7.97 Tf 6.587 0 Td[(1igarepairwiseindependent,^sisgivenby^s=YsL)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yl=0^lls=1 QL)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0Qli=1P)]TJ/F25 11.955 Tf 5.48 -9.683 Td[(j~lijTlYsL)]TJ/F21 7.97 Tf 6.586 0 Td[(1Yl=0llshlls=1 P)]TJ/F25 11.955 Tf 5.479 -9.684 Td[(T~VLs; {118 wheresisdenedass=YsL)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yl=0llshlls:{119TheROCof^sistheintersectionoftheROCsofYsandf^lsgL)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0,andhenceisstill<<+1.Byapplyingthewell-knownChernobound,wehaveP)]TJ/F15 11.955 Tf 7.476 -6.662 Td[(^SVL+Y0d<00^:{120Thus,with 6{110 6{118 and 6{120 ,weobtainaChernoboundforthejointprobabilityP)]TJ/F22 11.955 Tf 5.48 -9.684 Td[(SVL+Y0d<0;T~VLasP)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(SVL+Y0d<0;T~VL0:{121Fromthedenitionofsin 6{119 ,wenotethatduetothefunctionhls,weneedtousenumericalrootndingalgorithmssuchastheNewton-RaphsonmethodtondtheminimizerCin 6{124 .Recallthatduetothej-foldsummationPVjin 6{100 ,thetotalcomplexityofevaluatingtheBERboundispolynomialwiththecomplexityofevaluatingtheprobabilitiesinsidePVjandisexponentialwithj.Inordertosimplifythecomputation,weloosentheboundin 6{124 bysubstitutingsomenearbyandeasyfoundpoint~CforC.Onepossiblechoiceof~Cistheminimizerof~sdenedas~s=YsL)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yl=0lls=expn)]TJ/F22 11.955 Tf 11.294 0 Td[(sY+L)]TJ/F21 7.97 Tf 6.586 0 Td[(1Xl=0ll+s2 22Y+L)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xl=0l2lo;{122

PAGE 147

133 whichisobtainedfrom 6{119 bydroppingallhls.Then,itiseasytoobtain~C=argmin>0~=Y+PL)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0ll 2Y+PL)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0l2l:{123Thus,withCfor6=Cand>0,thejointprobabilityP)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(SVL+Y0d<0;T~VLcanbeupperboundedasP)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(SVL+Y0d<0;T~VL<~C;{124wheresisdenedin 6{119 .TheaboveresultsforP)]TJ/F15 11.955 Tf 6.211 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;T~Vjcanbeeasilyappliedtotheevalua-tionforP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.315 -1.793 Td[(dV0j<0;T~V0jbysubstitutingV0jforVjproperly.Now,weonlyhavetheprobabilityP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVlTl;T~Vjforlj)]TJ/F15 11.955 Tf 9.809 0 Td[(2lefttoevaluate.SinceVj=ftgj)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=0andVl=ftgl)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0,wehaveVj=Vl;ftgj)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=land,~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj=~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVl+j)]TJ/F21 7.97 Tf 6.587 0 Td[(1Xt=ltXi=1~ti:{125Thus,foranyVjwithftgj)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=l=f0g,wehaveP)]TJ/F15 11.955 Tf 6.21 -6.661 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.794 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.314 -1.794 Td[(dVlTl;T~Vj=P)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVl<0;~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVlTl;T~Vj=0;{126sinceTl0.Whenallt=0inVjexceptonlyoner=1withrl,i.e.,Vl=f0gandPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=lt=r=1,wehave~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVl=Ydand~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj=Yd+~r1.Hence,theprobabilitycanbeupperboundedasP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVlTl;T~Vj=P)]TJ/F22 11.955 Tf 5.48 -9.684 Td[(Yd+~r1<0;YdTl;j~r1jTr=P)]TJ/F22 11.955 Tf 5.48 -9.683 Td[(Yd+~r1<0;YdTl;~r1)]TJ/F22 11.955 Tf 21.918 0 Td[(Tr+P)]TJ/F22 11.955 Tf 5.479 -9.683 Td[(Yd+~r1<0;YdTl;~r1Tr=P)]TJ/F22 11.955 Tf 5.48 -9.684 Td[(Yd+~r1<0;YdTl;~r1)]TJ/F22 11.955 Tf 21.917 0 Td[(Tr
PAGE 148

134 wherewehaveusedYdNdLc;2dLcand~r1Nr;2r.FortheothercaseofVjwithftgj)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=l6=f0g,i.e.,Pj)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=lt2andftgj)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=l6=f0g,weupperboundP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVlTl;T~Vjbydroppingtheevent~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVlTl,i.e.,P)]TJ/F15 11.955 Tf 6.211 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVlTl;T~Vj
PAGE 149

135 Sincenoclosed-formresultexistsforit,weevaluatetheprobabilityPY0d<0numer-icallybasedonthesaddlepointapproximation[ 46 47 ],whichisknownasanaccurateapproximationtechniqueforprobabilitydistributions.Aswellknown,giventhemgfYs,theprobabilityPY0d<0canbeobtainedasPY0d<0=1 2jZc+j1c)]TJ/F23 7.97 Tf 6.586 0 Td[(j1Xs sdx;{133wherec>0liesintheROCofXs.Inthiscase,basedonthesaddlepointapproximationtheory, 6{133 canbeapproximatedas[ 48 ]PY0d<0=1 2jZS+j1S)]TJ/F23 7.97 Tf 6.586 0 Td[(j1Ys sdxs YS 200YSYS S;{134whereY;1
PAGE 150

136 hlsisgivenashls=G)]TJ/F22 11.955 Tf 9.299 0 Td[(Tl;^l;^2l;l+G)]TJ/F22 11.955 Tf 9.298 0 Td[(Tl;)]TJ/F15 11.955 Tf 10.218 0 Td[(^l;^2l;l; {137 withG;;2;thecdfofGAL;2;,^l=l)]TJ/F22 11.955 Tf 11.955 0 Td[(2ls 1+ls)]TJ/F23 7.97 Tf 13.151 6.547 Td[(2l 2s2;and^2l=2l 1+ls)]TJ/F23 7.97 Tf 13.15 6.547 Td[(2l 2s2;{138andtheROCisl)]TJ/F25 11.955 Tf 6.587 7.842 Td[(p 2l+22l 2l<<>:flx Pj~lijTljxjTl0)]TJ/F22 11.955 Tf 9.298 0 Td[(Tl0intheROC.

PAGE 151

137 NowweconsidertherstintegralR)]TJ/F23 7.97 Tf 6.587 0 Td[(Tle)]TJ/F23 7.97 Tf 6.586 0 Td[(sxflxdx.Byinserting 6{141 ,wecanrewritethisintegralasZ)]TJ/F23 7.97 Tf 6.587 0 Td[(Tle)]TJ/F23 7.97 Tf 6.586 0 Td[(sxflxdx=Z)]TJ/F23 7.97 Tf 6.587 0 Td[(Tle)]TJ/F23 7.97 Tf 6.587 0 Td[(sx1 2jZc+j1c)]TJ/F23 7.97 Tf 6.587 0 Td[(j1lzezxdzdx=1 2jZc+j1c)]TJ/F23 7.97 Tf 6.587 0 Td[(j1lzZ)]TJ/F23 7.97 Tf 6.586 0 Td[(Tlez)]TJ/F23 7.97 Tf 6.587 0 Td[(sxdxdz=1 2jZc+j1c)]TJ/F23 7.97 Tf 6.587 0 Td[(j1lze)]TJ/F23 7.97 Tf 6.586 0 Td[(Tlz)]TJ/F23 7.97 Tf 6.587 0 Td[(s z)]TJ/F22 11.955 Tf 11.955 0 Td[(sdz=1 2jZc0+j1c0)]TJ/F23 7.97 Tf 6.586 0 Td[(j1lz+se)]TJ/F23 7.97 Tf 6.587 0 Td[(Tlz zdz; {142 wherec0=c)-172(
PAGE 152

138 Thus,byinserting 6{145 and 6{146 into 6{140 ,weobtain 6{135 .Sincethetruncationdoesnotchangetheconvergencepropertyofmgf[ 49 ],theROCof^lsisthesameasthatofls.WithYsand^lsgivenin 6{129 and 6{135 forRayleighfadingchannels,followingthederivationinAWGNchannelcase,weobtainthemfg^softherandomvariable^SVL+Y0das^s=s P)]TJ/F25 11.955 Tf 5.48 -9.684 Td[(T~VL; {147 wheresisgivenbys=YsL)]TJ/F21 7.97 Tf 6.587 0 Td[(1Yl=0llshlls;{148andtheROCisl<<2with1=max0l0.ForVLwithPL)]TJ/F21 7.97 Tf 6.587 0 Td[(1l=0l=1,weadoptthesaddlepointapproximation 6{151 toevaluate

PAGE 153

139 P)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(SVL+Y0d<0;T~VLforaccuracy.ForVjwithPL)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0l2,weconsiderusingtheChernobound 6{152 toupperboundtheprobability.Inthiscase,inordertoreducethecomplexityofndingC,weloosen 6{152 asP)]TJ/F22 11.955 Tf 5.479 -9.684 Td[(SVL+Y0d<0;T~VL<~C;{153where~Cisthesaddlepointof~s=YsQL)]TJ/F21 7.97 Tf 6.586 0 Td[(1l=0lls.TheabovemethodscanbeusedtohandletheprobabilityP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dV0j<0;T~V0jaswell.Finally,fortheprobabilityP)]TJ/F15 11.955 Tf 6.21 -6.662 Td[(~)]TJ/F23 7.97 Tf 7.314 -1.793 Td[(dVj<0;~)]TJ/F23 7.97 Tf 7.315 -1.793 Td[(dVlTl;T~Vjwithlj)]TJ/F15 11.955 Tf 11.587 0 Td[(2,wehaveshownin 6{125 and 6{126 thatittakesvalueof0foranyVjwithftgj)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=l=f0g.Whenftgj)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=l6=f0g,weseparateitintotwocasessimilartothescenarioofAWGNchannel.ForVjwithPj)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=lt2andftgj)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=l6=f0g,weupperboundtheprobabilityasin 6{128 .FortheremainingcaseofVj,i.e.,ftgl)]TJ/F21 7.97 Tf 6.587 0 Td[(1t=0=f0gandPj)]TJ/F21 7.97 Tf 6.586 0 Td[(1t=lt=r=1forlr
PAGE 154

140 6.6NumericalResultsInthissection,wepresentnumericalresultstoexaminethevalidityoftheBERupperboundsdevelopedinthischapter.Recallthattheperformanceanalysisproce-dureisprimarilybasedontheindependenceassumptionandstatisticaldistributionapproximationsfortheextrinsicinformationwiththedensityevolutionmodel.TheBERbounds,strictlyspeaking,areapproximatedupperboundsfortheperformanceofthedensityevolutionmodel.Theassumptionsidealizetheiterativedecodingandinformationexchangeprocedures,whichmaydeviatefromtheactualsituationintherealisticcollaborativedecodingprocess.Forexample,theindependenceassumptionforextrinsicinformationtendstobeinvalidinverylowSNRregion.Asaresult,theperformancepredictedbyouranalysismaybecometoooptimisticwhencomparedwiththeactualcollaborativedecodingwithMRBinformationexchange.Thus,inordertoverifytheirtightness,wewillcomparetheBERupperboundstothesimula-tionresultsfrombothoftherealisticcollaborativedecodingprocessandthedensityevolutionmodels.FirstwesetthenumberofexchangesIto3i.e.,3exchangesand4decod-ingiterationsareperformedintotal,setMto6,andcomparethecasesofdif-ferentinformationexchangeparametersforfpjg=f0:1;0:2;0:6g,f0:1;0:2;0:8gandf0:1;0:2;1g,i.e.,wevarytheinformationexchangepercentageinthelastexchange.Fig. 6{6 andFig. 6{7 comparetheupperboundsineachiterationwiththesim-ulationresultsoveranAWGNchannelwhenthenon-recursiveconvolutionalcodewiththegenerationpolynomialof[1+D2;1+D+D2]denotedbyCC;7and[1+D2+D3;1+D+D2+D3]denotedbyCC;17areused,respectively.Mean-while,Fig.'s 6{8 and 6{9 showthecomparisonforCC;7andCC5;17overanindependentRayleighfadingchannel,respectively.Inthegures,weuseCDandDEMtodenotecollaborativedecodinganddensityevolutionmodelforshort,respectively.Fromthegures,weseethattheproposed

PAGE 155

141 Figure6{6:ComparisonoftheboundsandsimulationresultsforthecasesofM=6onanAWGNchannel,whereCC;7andfpjg=f0:1;0:2;0:6g,f0:1;0:2;0:8gandf0:1;0:2;1gareused. Figure6{7:ComparisonoftheboundsandsimulationresultsforthecasesofM=6onanAWGNchannel,whereCC15;17andfpjg=f0:1;0:2;0:6g,f0:1;0:2;0:8gandf0:1;0:2;1gareused.

PAGE 156

142 Figure6{8:ComparisonoftheboundsandsimulationresultsforthecasesofM=6onanindependentRayleighfadingchannel,whereCC5;7andfpjg=f0:1;0:2;0:6g,f0:1;0:2;0:8gandf0:1;0:2;1gareused. Figure6{9:ComparisonoftheboundsandsimulationresultsforthecasesofM=6onanindependentRayleighfadingchannel,whereCC5;17andfpjg=f0:1;0:2;0:6g,f0:1;0:2;0:8gandf0:1;0:2;1gareused.

PAGE 157

143 upperboundsareverytightinthemiddleandhighSNRregionstheboundsdivergeinthelowSNRregionduetothenatureofunionboundcomparedwiththesimula-tionresultsfromthedensityevolutionmodel,butslightlylowerthanthosefromthecollaborativecodingprocess.Asmentionedabove,thisphenomenonisduetotheidealizedassumptionsandapproximationsusedintheanalysis.However,wenotethatwithincreasingSNR,theperformanceoftheactualcollaborativecodingprocesstendstoapproachthatofthedensityevolutionmodel.Thus,theBERboundsac-tuallyillustratetheperformanceofcollaborativedecodingprocessinthehighSNRregionsorthelowBERregion.Theconsistencewiththesimulationresultsfordif-ferentchoicesoffpjg,codesandchannelmodelsveriesthevalidityofouranalysisandthetightnessofthebounds.Inthegures,wealsoshowtheunionboundsforMRC.TheBERupperboundsshowthat,forM=6,collaborativedecodingwithMRBexchangecanachievetheperformanceabout2dBand3dBwithinthatofMRCattheBERof10)]TJ/F21 7.97 Tf 6.586 0 Td[(10forAWGNandRayleighfadingchannels,respectively,whenfpjg=f0:1;0:2;1gisused.InFig.s 6{10 and 6{11 ,wecomparetheboundswiththesimulationresultsfordierentvaluesofMoverAWGNandRayleighfadingchannels.Inthecomparison,wextheinformationexchangeparameterstofpjg=f0:1;0:2;1ganduseCC;7asexample.Forclarity,weonlyshowtheresultsforthelastdecodingiterationsforeachM.Similartothepreviouscomparison,theBERboundsmatchthesimulationresultsfromthedensitymodelverywell.Again,atthelowBERregion,theBERcurvesoftheactualcollaborativedecodingtendstomergewiththoseofthedensityevolutionmodelfordierentvaluesofM.Hence,theBERboundsapproximatelyreecttheperformanceoftheactualcollaborativedecodinginthelowBERregion.FromtheBERbounds,weseethatwiththechoiceoffpjg,collaborativedecodingwithMRBinformationexchangeeventuallycanprovidetheperformanceveryclosetothatofMRCforM=2,3,and4.ForRayleighfadingchannels,thismeansthat


Figure 6-10: Comparison of the proposed bounds and simulation results in the last iteration for the cases of M = 2, 3, 4 and 8 on an AWGN channel, where CC(5,7) and {p_j} = {0.1, 0.2, 1} are used.

For Rayleigh fading channels, this means that most of the diversity gain provided by the channel is achieved in the collaborative decoding process. According to Fig. 4-7, the average information exchange amount is only 39%, 31%, and 25% of MRC, respectively, in this case.

With the proposed BER bounds for collaborative decoding, we can illustrate the effect of different choices of {p_j} on the error performance in the very low BER region, which cannot be reached by simulations. Here, we fix the setting of M = 6 and the rate-1/2 code CC(5,7), and compare the 8 cases listed in Table 6-1, in which the average information exchange amount φ corresponding to each choice of {p_j} is calculated by (4-7) and, for the purpose of comparison, is normalized by the information exchange amount of MRC, φ_MRC, calculated by (4-6). In Fig. 6-12, we show the BER bounds of the last decoding iteration over an AWGN channel for all 8 cases. From the figure, we see that in case 1, collaborative decoding with MRB exchange gives performance very close to MRC, but with the largest average information exchange amount compared to all other cases.


Figure 6-11: Comparison of the proposed bounds and simulation results in the last iteration for the cases of M = 2, 3 and 4 on independent Rayleigh fading channels, where CC(5,7) and {p_j} = {0.1, 0.2, 1} are used.

Table 6-1: Different choices of {p_j} and the corresponding average information exchange amount φ with M = 6 for the rate-1/2 CC(5,7) code. φ is calculated with respect to the information exchange amount of MRC, φ_MRC.

          No. of exchanges I    Value of {p_j}, j = 0, ..., I-1    Average info exchange amount
  Case 1  I = 1                 {1}                                φ_1 = 0.5 φ_MRC
  Case 2  I = 2                 {0.6, 0.8}                         φ_2 ≈ 0.302 φ_MRC
  Case 3  I = 2                 {0.5, 1}                           φ_3 ≈ 0.258 φ_MRC
  Case 4  I = 3                 {0.1, 0.2, 1}                      φ_4 ≈ 0.173 φ_MRC
  Case 5  I = 3                 {0.1, 0.2, 0.8}                    φ_5 ≈ 0.159 φ_MRC
  Case 6  I = 3                 {0.1, 0.2, 0.6}                    φ_6 ≈ 0.145 φ_MRC
  Case 7  I = 3                 {0.2, 0.3, 1}                      φ_7 ≈ 0.155 φ_MRC
  Case 8  I = 4                 {0.08, 0.12, 0.3, 1}               φ_8 ≈ 0.135 φ_MRC


Figure 6-12: Comparison of performance for M = 6 and CC(5,7) on an AWGN channel with the different choices of {p_j} in Table 6-1.

Meanwhile, we note that because p_j = 1 in their last exchanges, cases 3, 4, 7 and 8 significantly outperform cases 2, 5 and 6, and give performance within about 2 dB of that of MRC with much lower average information exchange amounts than that in case 1.

In Fig. 6-13, we compare the BER bounds of the last decoding iteration over an independent Rayleigh fading channel for the 8 cases. Similar to the AWGN channel case, by setting p_j = 1 in the last exchange, collaborative decoding with MRB information exchange in cases 1, 3, 4, 7 and 8 significantly outperforms that in cases 2, 5 and 6. In particular, with the choices of {p_j} in cases 4 and 8, most of the diversity gain in the fading channel can be achieved by exchanging only 17.3% and 13.5% of the information amount of MRC, respectively. On the other hand, in cases 2, 5 and 6, the BER bounds have almost the same slope as that of a single receiver in the high SNR region. This implies that by choosing p_j < 1 for the last exchange, nearly no diversity gain can be obtained in collaborative decoding with MRB information exchange.


Figure 6-13: Comparison of performance for M = 6 and CC(5,7) on Rayleigh fading channels with the different choices of {p_j} in Table 6-1.

However, significant performance gain can still be achieved by exchanging the decoding information in this situation. From the figure, gains of about 11 dB, 9 dB and 6 dB are obtained in cases 2, 5, and 6, respectively, compared with a single receiver over the Rayleigh fading channel.

6.7 Summary

We have analyzed the bit error performance of collaborative decoding with MRB information exchange. A density evolution model is proposed to simplify the analysis based on the independence assumption for extrinsic information. With the Gaussian and GAL approximations, knowledge of the extrinsic information is obtained by simulating the proposed model over AWGN and Rayleigh fading channels.

The MRB information exchange process for each single data bit and the decoding error events are then analyzed according to the density evolution model. By combining this analysis with a generalized union bound for the max-log-MAP decoder, we derive a general BER upper bound for collaborative decoding with MRB information exchange.


This union bound is expressed in terms of joint probabilities involving a set of truncated independent random variables with the same distribution as that of the extrinsic information. Finally, the BER upper bound is evaluated by using the methods of moment generating functions and saddle point approximations for AWGN channels and independent Rayleigh fading channels.

The BER upper bound provides an effective way to study the error performance of collaborative decoding with MRB information exchange, especially in the low BER region, where the BER cannot be easily estimated by simulations. From the numerical results of the bound, we find that to obtain space diversity gain effectively, the information exchange parameter p_j should be set to 1 in the last exchange.


CHAPTER 7
CONCLUSIONS AND FUTURE WORK

In this dissertation, we have studied a class of network-based iterative receive diversity techniques known as collaborative decoding. These techniques are suitable for distributed arrays when error correction codes are used in the transmission process. Collaborative decoding achieves receive diversity by exchanging decoding information among the receiving nodes in a distributed array. By carefully selecting what decoding information to exchange, collaborative decoding can lower the amount of information that must be exchanged in the array, while providing performance close to that of maximum ratio combining (MRC). Based on the statistical characteristics of the output of maximum a posteriori decoders, we study two information exchange schemes for collaborative decoding: the least-reliable-bit (LRB) and most-reliable-bit (MRB) exchange schemes. The error performance of these two schemes under different transmission environments with different parameter settings is investigated and compared with Monte Carlo simulations.

To further study the collaborative decoding approach, theoretical analysis is carried out for the LRB and MRB information exchange schemes, respectively. For analytical tractability, we consider the cases in which nonrecursive convolutional codes are used in the system under the AWGN and Rayleigh fading channel models. The analysis is based on the assumption that the extrinsic information generated in the collaborative decoding process for nonrecursive convolutional codes can be approximately described by certain simple distributions. With the independence assumption for extrinsic information, we represent the collaborative decoding process by a density evolution model with a single MAP decoder, and propose a systematic method to evaluate the error performance of collaborative decoding analytically.


The resulting analysis shows that with proper choices of parameters, collaborative decoding can achieve full diversity and approach the theoretical performance bounds asymptotically. On the other side, the analysis approach proposed in this work provides a new way of evaluating the bit error performance at each iteration of an iterative decoding process. This is significantly different from the conventional analysis approach for iterative decoding, in which only the performance after a large number of decoding iterations (the asymptotic performance) is considered. Meanwhile, as another important contribution, the GAL approximation and related statistical methods introduced in this work for the extrinsic information generated by MAP decoding also provide a new statistical tool for the study of iterative decoding over independent Rayleigh fading channel models, which are readily employed to model practical wireless communication scenarios.

While extensive simulation study and theoretical analysis have been done for collaborative decoding, all this work should only be regarded as a preliminary study, because there are still many open problems left for future work, especially in the performance analysis aspects.

First of all, in our analysis we rely on simulating the density evolution model to obtain the statistical parameters of the extrinsic information generated by the MAP decoder in the collaborative decoding process. This fact makes our analysis a semi-analytical one, which is not completely desirable. Unfortunately, obtaining the distribution and its parameters for the extrinsic information in MAP decoding analytically is a wide open problem; the necessary mathematical tools or good approximation methods for solving this problem seem to be unavailable so far. Besides, due to the fact that the independence assumption for the extrinsic information may not be realistic, there is a small gap between our analysis results and those obtained in actual collaborative decoding with MRB information exchange.


We hope that in future work this dependence can be considered in the performance analysis so that the gap can be reduced. In addition, we also hope that the analysis can be extended to the case of recursive convolutional codes.

Another important aspect is the design of parameters and their optimization for collaborative decoding. As an important purpose of performance analysis, it is desirable to develop useful design criteria for collaborative decoding based on minimizing the BER bounds or maximizing the diversity order of the bounds for Rayleigh fading channels. This may require further study of the mathematical properties of the BER upper bounds proposed in this dissertation, which may be a difficult task due to the complicated form of the bounds. Meanwhile, unveiling the trade-off relation between performance and information exchange amount analytically is also a tough but very meaningful task for the design of collaborative decoding systems in future work.

Finally, further improving the performance of collaborative decoding with as low an information exchange traffic load in the distributed array as possible is also an important issue. In this work, we see that in some cases collaborative decoding with LRB information exchange provides a better compromise between performance and information exchange amount than that with MRB information exchange; in other cases, we have the opposite situation. This implies that the LRB and MRB schemes may be far from optimum. Thus, it is possible to combine the LRB and MRB information exchange schemes in some manner, or to develop other information exchange approaches that exploit the space diversity provided by the channels more efficiently. This may become an interesting direction for future research work.


APPENDIX A
RECTANGULAR PARITY-CHECK ENCODING AND DECODING

Rectangular parity check (RPC) codes are a special class of multidimensional parity check (MDPC) codes with the number of dimensions M equal to 2. Here, we describe the encoding and decoding algorithms of the general MDPC codes. The encoding and decoding algorithms for rectangular parity check codes can be immediately obtained by setting M = 2.

A.1 Multidimensional Parity-Check Encoding

MDPC codes are parallel concatenated single parity check (SPC) codes following the structure of turbo codes. Based on the idea in [11], by generalizing single parity check codes to multidimensional lattices geometrically, a class of MDPC codes can be constructed [12, 13]. Let M (M > 1) denote the number of dimensions. Arrange the data bits into an M-dimensional hypercube with side length A, where A is an integer larger than 1. By doing so, we obtain a block of data bits {u(i_1, i_2, ..., i_M)} with block size A^M, indexed by their positions (i_1, i_2, ..., i_M) along each dimension. Then apply a single parity check code to each layer (hyperplane) along each dimension of the hypercube. For convenience, we will denote a bit u(i_1, i_2, ..., i_M) with i_m = j as u_m(j). Thus, the MA parity bits can be represented as

    p_{m,j} = \sum_{i_1=1}^{A} \cdots \sum_{i_{m-1}=1}^{A} \sum_{i_{m+1}=1}^{A} \cdots \sum_{i_M=1}^{A} u_m(j)    (A-1)

for m = 1, 2, ..., M and j = 1, 2, ..., A, where the summation denotes the modulo-2 sum over all the bits on the hyperplane i_m = j. Appending the MA parity bits {p_{m,j}} to the A^M data bits, we obtain the M-dimensional parity check code.
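To make the encoding rule in (A-1) concrete, the following is a minimal sketch of MDPC encoding in Python/NumPy. It is an illustration only, not code from the dissertation; the function name, the hypercube bit ordering, and the parity-array layout are my own assumptions.

```python
import numpy as np

def mdpc_encode(data_bits, M, A):
    """Encode A**M data bits with an M-dimensional parity-check code.

    Returns the M*A parity bits p[m, j], where p[m, j] is the modulo-2
    sum of all data bits on the hyperplane i_m = j, as in (A-1)."""
    u = np.asarray(data_bits, dtype=int).reshape((A,) * M)  # M-dimensional hypercube
    parity = np.zeros((M, A), dtype=int)
    for m in range(M):
        for j in range(A):
            # take the j-th layer along dimension m and XOR all its bits
            layer = np.take(u, j, axis=m)
            parity[m, j] = int(layer.sum()) % 2
    return parity

# Example: a 3x3 rectangular parity-check code (M = 2, A = 3)
data = np.random.randint(0, 2, 9)
p = mdpc_encode(data, M=2, A=3)   # p[0, :] are row parities, p[1, :] column parities
```

Setting M = 2 recovers exactly the RPC code used throughout the dissertation.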


If we regard all the SPC codes along each dimension m as one component of the MDPC code, then the MDPC code is formed by parallel concatenating all M components. From another viewpoint, MDPC codes can also be thought of as a punctured version of multidimensional product codes. Clearly, the code rate R_c of MDPC is

    R_c = \frac{1}{1 + M / A^{M-1}}.    (A-2)

In general, MDPC codes are very high rate codes for large code block sizes.

A.2 Iterative Multidimensional Parity-Check Decoding

The basic idea behind iterative decoding of concatenated codes using soft-in, soft-out (SISO) decoders is to break up the decoding of a fairly complex and long code into steps, where the transfer of soft information among the decoding steps guarantees almost no loss of information. It is well known that iterative decoding can achieve performance close to that of the maximum a posteriori (MAP) rule. Suppose the observations of u(i_1, i_2, ..., i_M) and p_{m,j} at the decoder are x(i_1, i_2, ..., i_M) and y_{m,j}, respectively. Similar to the notation u_m(j) defined in Section A.1, we denote x(i_1, i_2, ..., i_M) with i_m = j as x_m(j). By applying the iterative SISO decoding rule to each SPC suggested by (A-1), the extrinsic information of each bit, in log-likelihood ratio (LLR) form, for the m-th component code at the n-th iteration is given by

    L_{e,n}^{(m)}(u(i_1, ..., i_{m-1}, j, i_{m+1}, ..., i_M)) = \Big[ \sum\nolimits_{m,\ne}^{\boxplus} \big( L(x_m(j) | u_m(j)) + L_n(u_m(j)) \big) \Big] \boxplus L(y_{m,j} | p_{m,j}),    (A-3)

where L_n(u) is the a priori LLR of the data bit u, and L(x|u) and L(y|p) are the log-likelihood ratios of the received symbols conditioned on the transmitted bits, also known as the channel measurements or reliability values of the channel. Above, the symbol \boxplus denotes log-likelihood ratio addition, and the notation \sum_{m,\ne}^{\boxplus} means applying the log-likelihood ratio addition over the LLRs corresponding to all the bits on the j-th hyperplane along the m-th dimension (i.e., {u_m(j)}), except the one on the left-hand side of the equation, indexed by (i_1, ..., i_{m-1}, j, i_{m+1}, ..., i_M).


A little differently from [12] and [13], in order to fit the iterative demodulation and decoding feature of the BICM system in Chapter 3, we also need to compute the extrinsic information of the parity bits,

    L_{e,n}(p_{m,j}) = \sum\nolimits_{\{x_m(j), u_m(j)\}}^{\boxplus} \big( L(x_m(j) | u_m(j)) + L_n(u_m(j)) \big).    (A-4)

As suggested in [11], the LLR sum can be approximated very well as

    \boxplus_{j=1}^{J} L(u_j) \approx \Big( \prod_{j=1}^{J} \mathrm{sign}\big(L(u_j)\big) \Big) \min_{j=1,...,J} |L(u_j)|.    (A-5)

With this approximation, the complexity of the decoding procedures in (A-3) and (A-4) becomes very low. The a priori LLR L_n(u) is updated in the decoding of the m-th component code at the n-th iteration as

    L_n(u_m(j)) = \sum_{k \ne m} L_{e,n-1}^{(k)}(u_m(j)).    (A-6)

The soft output of iterative decoding after n iterations is given by

    L_n(\hat{u}) = L(x | u) + \sum_{m=1}^{M} L_{e,n}^{(m)}(u).    (A-7)

Making a hard decision on the soft output L_n(\hat{u}), the estimate \hat{u} corresponding to the transmitted bit u is obtained. Owing to their concatenated structure, and with low decoding complexity, MDPC codes have exhibited close-to-capacity performance at very high code rates.
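As an illustration only (not from the dissertation), the box-plus approximation in (A-5) can be coded directly. The sketch below computes the extrinsic LLR that a single parity check contributes to one of its bits, using the sign-product/minimum-magnitude rule; the function names are hypothetical.

```python
import numpy as np

def boxplus_approx(llrs):
    """Approximate LLR (box-plus) addition of a list of LLRs, as in (A-5):
    the sign is the product of the signs, the magnitude is the minimum magnitude."""
    llrs = np.asarray(llrs, dtype=float)
    return float(np.prod(np.sign(llrs)) * np.min(np.abs(llrs)))

def spc_extrinsic(bit_index, check_llrs):
    """Extrinsic LLR for one bit of a single parity check: box-plus over all the
    other LLRs participating in the check (other data bits plus the parity bit),
    in the spirit of (A-3)."""
    others = np.delete(np.asarray(check_llrs, dtype=float), bit_index)
    return boxplus_approx(others)

# Example: a check over three data-bit LLRs and one parity-bit LLR
check = [1.2, -0.4, 2.5, 0.9]
print(spc_extrinsic(0, check))   # sign(-1) * min(0.4, 2.5, 0.9) = -0.4
```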


APPENDIX B
PROOF OF EQUATION (5-28)

B.1 System Model

Let u and c denote an information bit sequence and the corresponding codeword generated by a nonrecursive convolutional code C: u -> c, where u = (u_0, u_1, ..., u_i, ...), c = (c_0, c_1, ..., c_i, ...), and u_i, c_i ∈ {0, 1} are the data (information) bits and the coded bits, respectively. We consider BPSK modulation here. Thus, the transmitted signal x_i is defined as

    x_i = 1 - 2 c_i,    (B-1)

with x_i ∈ {-1, 1}. In the system, we use a memoryless independent fading channel model, which includes the additive white Gaussian noise (AWGN) channel as a special case, to describe the transmission environment between the source and the destination. The received signal y_i at the destination corresponding to the transmitted BPSK signal x_i at time instant i can then be expressed as

    y_i = g_i x_i + n_i,    (B-2)

where the n_i, for all i, are i.i.d. zero-mean additive Gaussian random variables with variance E[|n_i|^2] = \sigma_n^2, and g_i is the channel fading gain. For AWGN channels g_i = 1, and for Rayleigh fading channels the g_i (for all i) are i.i.d. Rayleigh random variables with pdf

    p_{g_i}(g) = 2 g e^{-g^2},   g ≥ 0.    (B-3)

We normalize the signal energy E[|x_i|^2] = 1; thus, the average SNR is 1/\sigma_n^2.
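For concreteness, a minimal simulation of the received-signal model (B-1)-(B-3) might look as follows. This is an illustrative sketch, not code from the dissertation; note that the Rayleigh pdf in (B-3) implies E[g^2] = 1, so the gains can be drawn as the square root of a unit-mean exponential variable.

```python
import numpy as np

rng = np.random.default_rng(0)

def transmit(code_bits, snr, rayleigh=False):
    """Send coded bits over the memoryless channel of (B-1)-(B-3).

    snr is the average SNR 1/sigma_n^2; returns the received samples y and gains g."""
    c = np.asarray(code_bits, dtype=int)
    x = 1 - 2 * c                                   # BPSK mapping (B-1)
    sigma_n = np.sqrt(1.0 / snr)
    n = rng.normal(0.0, sigma_n, size=c.shape)      # AWGN with variance sigma_n^2
    if rayleigh:
        # gains with pdf 2 g exp(-g^2): g = sqrt(Exp(1)), so E[g^2] = 1
        g = np.sqrt(rng.exponential(1.0, size=c.shape))
    else:
        g = np.ones(c.shape)                        # AWGN special case, g_i = 1
    y = g * x + n                                   # received samples (B-2)
    return y, g
```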


In this channel model, we assume that perfect channel state information (CSI) is available, and hence coherent detection is performed at the receiver. With this model, the conditional pdf p(y_i | x_i, g_i), for all i, with perfect CSI is given by

    p(y_i | x_i, g_i) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left( -\frac{|y_i - g_i x_i|^2}{2\sigma_n^2} \right).    (B-4)

B.2 Extrinsic Information in Max-log-MAP Decoding

B.2.1 Optimal Log-MAP Decoding

In order to study the extrinsic information in MAP decoding, we first consider the soft output of the MAP decoder. Let \Lambda_k denote the soft output of a MAP decoder for the k-th data bit. According to the optimal log-MAP decoding algorithm, the soft output \Lambda_k is defined as the log-likelihood ratio of the a posteriori probabilities of the k-th data bit, for arbitrary k, i.e. [11],

    \Lambda_k = \log \frac{P(\hat{u}_k = 0 | y, g)}{P(\hat{u}_k = 1 | y, g)}
             = \log \frac{P(\hat{u}_k = 0 | y, g)\, p(y | g)}{P(\hat{u}_k = 1 | y, g)\, p(y | g)}
             = \log \frac{p(y | \hat{u}_k = 0, g)\, P(\hat{u}_k = 0)}{p(y | \hat{u}_k = 1, g)\, P(\hat{u}_k = 1)}
             = \log \frac{\sum_{u: u_k = 0} p(y | u, g)\, P(\hat{u} = u)}{\sum_{u: u_k = 1} p(y | u, g)\, P(\hat{u} = u)}
             = \log \frac{\sum_{(u,c) \in C_k^+} p(y | c, g)\, P(\hat{u} = u)}{\sum_{(u,c) \in C_k^-} p(y | c, g)\, P(\hat{u} = u)},    (B-5)

where \hat{u}_k and \hat{u} are the decisions (estimates) of u_k and u; y = (y_0, y_1, ..., y_i, ...) and g = (g_0, g_1, ..., g_i, ...) are the received signal sequence and the channel fading gain sequence; C_k^+ and C_k^- are the sets of all codeword pairs (u, c) that give the decisions u_k = 0 and u_k = 1, respectively; and P(\hat{u} = u) and p(y | c, g) are the a priori probability of the data bit sequence u and the conditional pdf of the signal sequence y given the codeword c and the channel fading gain sequence g, respectively.


Based on the assumption of independent data bits and the memoryless channel model, we have

    P(\hat{u} = u) = \prod_j P(\hat{u}_j = u_j),    (B-6)
    p(y | c, g) = \prod_i p(y_i | c_i, g_i).    (B-7)

With the BPSK modulation in (B-1) and the conditional pdf p(y_i | x_i, g_i) in (B-4), the conditional pdf p(y_i | c_i, g_i) can be expressed as

    p(y_i | c_i, g_i) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left(-\frac{(y_i - g_i)^2}{2\sigma_n^2}\right) \exp\!\left(-\frac{2}{\sigma_n^2} c_i g_i y_i\right) = f(y_i, g_i)\, \exp(-L_c\, c_i g_i y_i),    (B-8)

where the function f(y_i, g_i), defined as

    f(y_i, g_i) = \frac{1}{\sqrt{2\pi}\,\sigma_n} \exp\!\left(-\frac{(y_i - g_i)^2}{2\sigma_n^2}\right),

is independent of the coded bit c_i, and L_c, defined as L_c = 2/\sigma_n^2, is known as the channel reliability measure. For convenience, define

    f(y, g) = \prod_i f(y_i, g_i);    (B-9)

then it is easy to see that f(y, g) is independent of the sequence c. With this definition, the conditional pdf p(y | c, g) can be rewritten as

    p(y | c, g) = f(y, g) \prod_i \exp(-L_c\, c_i g_i y_i) = f(y, g)\, \exp\!\Big(-L_c \sum_i c_i g_i y_i\Big).    (B-10)
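The factorization in (B-8)-(B-10) is what makes the per-bit channel LLR L_c g_i y_i appear throughout the derivation. A small hedged sketch (illustrative only, with hypothetical helper names) of these quantities:

```python
import numpy as np

def channel_llrs(y, g, sigma_n):
    """Per-bit channel LLRs log p(y_i|c_i=0,g_i)/p(y_i|c_i=1,g_i) = L_c * g_i * y_i,
    with channel reliability L_c = 2 / sigma_n**2 as defined below (B-8)."""
    Lc = 2.0 / sigma_n**2
    return Lc * np.asarray(g) * np.asarray(y)

def sequence_metric(code_bits, y, g, sigma_n):
    """The codeword-dependent part of log p(y|c,g) in (B-10): -L_c * sum_i c_i g_i y_i."""
    Lc = 2.0 / sigma_n**2
    return float(-Lc * np.sum(np.asarray(code_bits) * np.asarray(g) * np.asarray(y)))
```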


Inserting (B-10) and (B-6) into (B-5) yields

    \Lambda_k = \log \frac{\sum_{(u,c)\in C_k^+} \exp\!\big(-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j)\big)}{\sum_{(u,c)\in C_k^-} \exp\!\big(-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j)\big)}
             = \log \sum_{(u,c)\in C_k^+} \exp\!\Big(-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j)\Big) - \log \sum_{(u,c)\in C_k^-} \exp\!\Big(-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j)\Big).    (B-11)

B.2.2 Max-log-MAP Decoding

In the suboptimal max-log-MAP decoding, the approximation

    \log \sum_i a_i \approx \max_i \{\log a_i\}    (B-12)

is applied in the calculation of (B-11), and results in the following max-log-MAP soft output expression:

    \Lambda_k = \max_{(u,c)\in C_k^+} \Big\{-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j)\Big\} - \max_{(u,c)\in C_k^-} \Big\{-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j)\Big\}.    (B-13)

We note that the above equation is not changed by introducing an arbitrary constant C in the following way:

    \Lambda_k = \max_{(u,c)\in C_k^+} \Big\{-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j) + C\Big\} - \max_{(u,c)\in C_k^-} \Big\{-L_c \sum_i c_i g_i y_i + \sum_j \log P(\hat{u}_j = u_j) + C\Big\}.    (B-14)

For convenience, we can choose

    C = -\sum_j \log P(\hat{u}_j = 0).    (B-15)


Recall that u_j ∈ {0, 1}; thus we have

    \sum_j \log P(\hat{u}_j = u_j) + C = -\sum_j \big[\log P(\hat{u}_j = 0) - \log P(\hat{u}_j = u_j)\big] = -\sum_j u_j \log \frac{P(\hat{u}_j = 0)}{P(\hat{u}_j = 1)} = -\sum_j u_j \lambda_j,    (B-16)

where \lambda_j, defined as

    \lambda_j = \log \frac{P(\hat{u}_j = 0)}{P(\hat{u}_j = 1)},    (B-17)

is the a priori information of the j-th data bit. Applying (B-16) and (B-17) to (B-14), the soft output of max-log-MAP decoding becomes

    \Lambda_k = \max_{(u,c)\in C_k^+} \Big\{-L_c \sum_i c_i g_i y_i - \sum_j u_j \lambda_j\Big\} - \max_{(u,c)\in C_k^-} \Big\{-L_c \sum_i c_i g_i y_i - \sum_j u_j \lambda_j\Big\}
             = \max_{(u,c)\in C_k^+} \Big\{-L_c \sum_i c_i g_i y_i - \sum_j u_j \lambda_j\Big\} + \min_{(u,c)\in C_k^-} \Big\{L_c \sum_i c_i g_i y_i + \sum_j u_j \lambda_j\Big\}
             = \max_{(u,c)\in C_k^+} \big(-\Gamma(u,c)\big) + \min_{(u,c)\in C_k^-} \Gamma(u,c),    (B-18)

where \Gamma(u,c), defined as

    \Gamma(u,c) = L_c \sum_i c_i g_i y_i + \sum_j u_j \lambda_j,    (B-19)

is the error event metric for (u,c) including the a priori information of the k-th data bit, \lambda_k. Explicitly, we rewrite (B-19) as

    \Gamma(u,c) = L_c \sum_i c_i g_i y_i + \sum_{j, j \ne k} u_j \lambda_j + u_k \lambda_k = \Gamma^{(k)}(u,c) + u_k \lambda_k,    (B-20)

where

    \Gamma^{(k)}(u,c) = L_c \sum_i c_i g_i y_i + \sum_{j, j \ne k} u_j \lambda_j    (B-21)

is the error event metric for (u,c) without the a priori information of the k-th data bit, \lambda_k. Recall that, for (u,c) ∈ C_k^+, u_k = 0, and for (u,c) ∈ C_k^-, u_k = 1. Thus, with (B-20) we have

    \Gamma(u,c) = \Gamma^{(k)}(u,c)    (B-22)


for (u,c) ∈ C_k^+, and

    \Gamma(u,c) = \Gamma^{(k)}(u,c) + \lambda_k    (B-23)

for (u,c) ∈ C_k^-, respectively. By inserting (B-22) and (B-23) into (B-18), we have

    \Lambda_k = \max_{(u,c)\in C_k^+} \big(-\Gamma^{(k)}(u,c)\big) + \min_{(u,c)\in C_k^-} \big(\Gamma^{(k)}(u,c) + \lambda_k\big)
             = \max_{(u,c)\in C_k^+} \big(-\Gamma^{(k)}(u,c)\big) + \min_{(u,c)\in C_k^-} \Gamma^{(k)}(u,c) + \lambda_k.    (B-24)

With the above, and according to the definition of the extrinsic information \xi_k for the bit \hat{u}_k, i.e.,

    \Lambda_k = \xi_k + \lambda_k,    (B-25)

it is easy to see that the extrinsic information \xi_k in max-log-MAP decoding can be represented as

    \xi_k = \max_{(u,c)\in C_k^+} \big(-\Gamma^{(k)}(u,c)\big) + \min_{(u,c)\in C_k^-} \Gamma^{(k)}(u,c),    (B-26)

where the error event metric \Gamma^{(k)}(u,c) is given by (B-21). Since u_j and c_i ∈ {0,1}, we can rewrite \Gamma^{(k)}(u,c) as

    \Gamma^{(k)}(u,c) = \sum_{i \in \{i: u_i = 1\},\, i \ne k} \lambda_i + L_c \sum_{i \in \{i: c_i = 1\}} g_i y_i,    (B-27)

which is convenient for representing the following derivations in terms of the Hamming weights of u and c.
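To illustrate (B-26) numerically, the sketch below (illustrative only; the short terminated code and the helper names are my own choices, not the dissertation's) enumerates all codewords of a small rate-1/2 nonrecursive convolutional code and computes the max-log-MAP extrinsic information of one bit directly from the metric (B-27).

```python
import numpy as np
from itertools import product

def conv_encode(u, gens=((1, 0, 1), (1, 1, 1))):
    """Nonrecursive rate-1/2 encoder for CC(5,7): taps (1 + D^2, 1 + D + D^2),
    with the register flushed by two tail zeros (termination)."""
    bits = list(u) + [0, 0]
    state = [0, 0]
    out = []
    for b in bits:
        window = [b] + state                       # [current, D, D^2]
        for g in gens:
            out.append(sum(x & t for x, t in zip(window, g)) % 2)
        state = [b] + state[:-1]
    return np.array(out)

def maxlog_extrinsic(k, y, g, lam, sigma_n, K):
    """Extrinsic LLR of bit k per (B-26), by brute force over all 2**K info words.

    Gamma^(k)(u, c) = sum_{j != k, u_j = 1} lam_j + L_c * sum_{c_i = 1} g_i * y_i   (B-27)"""
    Lc = 2.0 / sigma_n**2
    best_plus, best_minus = -np.inf, np.inf        # max over C_k^+, min over C_k^-
    for u in product([0, 1], repeat=K):
        c = conv_encode(u)
        gamma = sum(lam[j] for j in range(K) if j != k and u[j] == 1) \
                + Lc * float(np.sum(c * g * y))
        if u[k] == 0:
            best_plus = max(best_plus, -gamma)
        else:
            best_minus = min(best_minus, gamma)
    return best_plus + best_minus

# Tiny example: K = 4 info bits, equiprobable a priori (lam_j = 0), AWGN (g_i = 1)
K = 4
u_true = np.array([1, 0, 1, 1])
c_true = conv_encode(u_true)
sigma_n = 0.5
rng = np.random.default_rng(1)
y = (1 - 2 * c_true) + rng.normal(0, sigma_n, c_true.size)
print(maxlog_extrinsic(0, y, np.ones_like(y), np.zeros(K), sigma_n, K))
```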


APPENDIX C
PROOF OF THEOREM 6.4.4

We first consider the last 2j-fold summation in P(W_j) defined by (6-79), i.e.,

    \underbrace{\sum_{n_0=0}^{m_0} \sum_{n_1=0}^{m_1} \cdots \sum_{n_{j-1}=0}^{m_{j-1}}}_{j \text{ folds}} \; \underbrace{\sum_{\nu_0=n_0}^{n_0 M'} \sum_{\nu_1=n_1}^{n_1 M'} \cdots \sum_{\nu_{j-1}=n_{j-1}}^{n_{j-1} M'}}_{j \text{ folds}},    (C-1)

where M' = M - 1. Since the range of \nu_l depends only on n_l for each l, we can rewrite (C-1) as

    \underbrace{\underbrace{\sum_{n_0=0}^{m_0} \sum_{\nu_0=n_0}^{n_0 M'}}_{2 \text{ folds}} \; \underbrace{\sum_{n_1=0}^{m_1} \sum_{\nu_1=n_1}^{n_1 M'}}_{2 \text{ folds}} \cdots \underbrace{\sum_{n_{j-1}=0}^{m_{j-1}} \sum_{\nu_{j-1}=n_{j-1}}^{n_{j-1} M'}}_{2 \text{ folds}}}_{j \text{ pairs}},    (C-2)

which contains j pairs of 2-fold sums with indices {n_l, \nu_l}, l = 0, ..., j-1. We then consider switching the summation order within each pair of sums. In the pair with indices n_l and \nu_l, n_l ranges from 0 to m_l, and \nu_l takes values from n_l to n_l M' for any given n_l. We denote these by n_l ∈ [0, m_l] and \nu_l ∈ [n_l, n_l M'], respectively. Clearly, over all possible values of n_l, \nu_l runs through the region [0, m_l M']. On the other hand, for any fixed value of \nu_l ∈ [0, m_l M'], we have

    n_l ∈ [\lceil \nu_l / M' \rceil, \nu_l] \cap [0, m_l] = [\lceil \nu_l / M' \rceil, \min\{\nu_l, m_l\}].

Thus, the summation order for {n_l, \nu_l} can be switched as

    \sum_{n_l=0}^{m_l} \sum_{\nu_l=n_l}^{n_l M'} = \sum_{\nu_l=0}^{m_l M'} \sum_{n_l=\lceil \nu_l/M' \rceil}^{\min\{\nu_l, m_l\}},   for 0 ≤ l ≤ j-1.    (C-3)
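The index swap in (C-3) is easy to check numerically. The snippet below is an illustrative verification only (symbols follow the notation above); it compares both orderings for an arbitrary test function of (n, ν).

```python
from math import ceil

def lhs(m, Mp, f):
    # sum_{n=0}^{m} sum_{nu=n}^{n*Mp} f(n, nu)
    return sum(f(n, nu) for n in range(m + 1) for nu in range(n, n * Mp + 1))

def rhs(m, Mp, f):
    # sum_{nu=0}^{m*Mp} sum_{n=ceil(nu/Mp)}^{min(nu, m)} f(n, nu), as in (C-3)
    return sum(f(n, nu)
               for nu in range(m * Mp + 1)
               for n in range(ceil(nu / Mp), min(nu, m) + 1))

f = lambda n, nu: (n + 1) ** 2 * (nu + 3)   # arbitrary test function
for m in range(6):
    for Mp in range(1, 5):
        assert lhs(m, Mp, f) == rhs(m, Mp, f)
print("summation order switch (C-3) verified")
```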


Figure C-1: Summation order switch procedure for {ν_l, m_l}, l = 0, 1, ..., j-1.

Plugging (C-3) into (C-2), and with some rearrangements, we obtain

    \Sigma_{V_j} = \sum_{m_0=0}^{w_0} \sum_{m_1=0}^{w_1} \cdots \sum_{m_{j-1}=0}^{w_{j-1}} \; \sum_{\nu_0=0}^{m_0 M'} \sum_{\nu_1=0}^{m_1 M'} \cdots \sum_{\nu_{j-1}=0}^{m_{j-1} M'} \; \sum_{n_0=\lceil \nu_0/M' \rceil}^{w'''_0} \sum_{n_1=\lceil \nu_1/M' \rceil}^{w'''_1} \cdots \sum_{n_{j-1}=\lceil \nu_{j-1}/M' \rceil}^{w'''_{j-1}},    (C-4)

where w'''_l = \min\{\nu_l, m_l\} for all l. Now we consider the first 2j-fold summation in (C-4). Similarly to (C-2), we can rewrite the first 2j-fold summation in (C-4) as j pairs of 2-fold sums, with each pair indexed by m_l and \nu_l. Then, by using the same argument as for (C-3), we switch the summation order within each of these pairs. Thus, the first 2j-fold summation in (C-4) is equivalent to

    \sum_{\nu_0=0}^{w_0 M'} \sum_{m_0=\lceil \nu_0/M' \rceil}^{w_0} \; \sum_{\nu_1=0}^{w_1 M'} \sum_{m_1=\lceil \nu_1/M' \rceil}^{w_1} \cdots \sum_{\nu_{j-1}=0}^{w_{j-1} M'} \sum_{m_{j-1}=\lceil \nu_{j-1}/M' \rceil}^{w_{j-1}}.    (C-5)

Since w_{l+1} = w_l - m_l = w_0 - \sum_{t=0}^{l} m_t, the summation range of \nu_l depends on {m_k} for all k < l.

To obtain the desired summation order, we therefore proceed in rounds, as illustrated in Fig. C-1. In the first round, we switch the summation order for each pair {m_l, \nu_{l+1}}, for l = 0, 1, ..., j-2; in the second round, we switch the summation order for each pair {m_l, \nu_{l+2}}, for l = 0, 1, ..., j-3. In the k-th round, we switch the order for each pair {m_l, \nu_{l+k}}, for l = 0, 1, ..., j-k-1. By repeating this procedure until the (j-1)-th round, the desired summation order is obtained.

To perform this procedure, we first consider a general 2-fold summation over the integer pair {a, b},

    \sum_{a=x}^{y} \sum_{b=0}^{(y-a)M'} f(a, b),

where x and y (y ≥ x) are arbitrary integers independent of a and b, and f(a, b) is an arbitrary function of {a, b}. Since b ∈ [0, (y-a)M'] for any a ∈ [x, y], it is easy to see that b runs through [0, (y-x)M'], and for any b ∈ [0, (y-x)M'], a runs through [x, y - \lceil b/M' \rceil]. Thus, we have

    \sum_{a=x}^{y} \sum_{b=0}^{(y-a)M'} f(a, b) = \sum_{b=0}^{(y-x)M'} \sum_{a=x}^{y - \lceil b/M' \rceil} f(a, b).    (C-6)

Now, we consider the procedure in Fig. C-1. For the first-round switches, let a = m_l, b = \nu_{l+1}, x = \lceil \nu_l/M' \rceil, and y = w_l. By using w_{l+1} = w_l - m_l and (C-6), we have

    \sum_{m_l=\lceil \nu_l/M' \rceil}^{w_l} \sum_{\nu_{l+1}=0}^{w_{l+1} M'} = \sum_{\nu_{l+1}=0}^{(w_l - \lceil \nu_l/M' \rceil) M'} \sum_{m_l=\lceil \nu_l/M' \rceil}^{w_l - \lceil \nu_{l+1}/M' \rceil},   for l = 0, 1, ..., j-2.    (C-7)

Similarly, for the second-round switches, let a = m_l, b = \nu_{l+2}, x = \lceil \nu_l/M' \rceil, and y = w_l - \lceil \nu_{l+1}/M' \rceil. Then, by applying w_{l+1} = w_l - m_l and (C-6) to the summation over {m_l, \nu_{l+2}} obtained from (C-7), we have

    \sum_{m_l=\lceil \nu_l/M' \rceil}^{w_l - \lceil \nu_{l+1}/M' \rceil} \sum_{\nu_{l+2}=0}^{(w_{l+1} - \lceil \nu_{l+1}/M' \rceil) M'} = \sum_{\nu_{l+2}=0}^{(w_l - \lceil \nu_l/M' \rceil - \lceil \nu_{l+1}/M' \rceil) M'} \sum_{m_l=\lceil \nu_l/M' \rceil}^{w_l - \lceil \nu_{l+1}/M' \rceil - \lceil \nu_{l+2}/M' \rceil},   for l = 0, 1, ..., j-3.    (C-8)
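The elementary identity (C-6) can likewise be checked numerically. The following short, illustrative verification (not from the dissertation) sweeps a few values of x, y and M':

```python
from math import ceil

def lhs(x, y, Mp, f):
    # sum_{a=x}^{y} sum_{b=0}^{(y-a)*Mp} f(a, b)
    return sum(f(a, b) for a in range(x, y + 1) for b in range((y - a) * Mp + 1))

def rhs(x, y, Mp, f):
    # sum_{b=0}^{(y-x)*Mp} sum_{a=x}^{y - ceil(b/Mp)} f(a, b), as in (C-6)
    return sum(f(a, b)
               for b in range((y - x) * Mp + 1)
               for a in range(x, y - ceil(b / Mp) + 1))

f = lambda a, b: 3 * a + b * b + 1
for x in range(-2, 3):
    for y in range(x, x + 4):
        for Mp in range(1, 4):
            assert lhs(x, y, Mp, f) == rhs(x, y, Mp, f)
print("index swap (C-6) verified")
```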


By using mathematical induction, we can easily prove that the switching in the k-th round (k ≤ j-1) is given by

    \sum_{m_l=\lceil \nu_l/M' \rceil}^{w_l - \sum_{t=1}^{k-1} \lceil \nu_{l+t}/M' \rceil} \sum_{\nu_{l+k}=0}^{(w_{l+1} - \sum_{t=1}^{k-1} \lceil \nu_{l+t}/M' \rceil) M'} = \sum_{\nu_{l+k}=0}^{(w_l - \sum_{t=0}^{k-1} \lceil \nu_{l+t}/M' \rceil) M'} \sum_{m_l=\lceil \nu_l/M' \rceil}^{w_l - \sum_{t=1}^{k} \lceil \nu_{l+t}/M' \rceil},   for l = 0, 1, ..., j-k-1.    (C-9)

As shown in Fig. C-1, the summation order becomes {\nu_0, \nu_1, ..., \nu_{j-1}, m_0, m_1, ..., m_{j-1}} after the (j-1)-th round of switches. We also note that the final summation ranges for \nu_k and m_{j-k-1} are obtained at the two edges of the k-th round (i.e., l = 0 and l = j-k-1) in (C-9), respectively. Thus, the resulting summation is given by

    \sum_{\nu_0=0}^{w'_0 M'} \sum_{\nu_1=0}^{w'_1 M'} \cdots \sum_{\nu_{j-1}=0}^{w'_{j-1} M'} \; \sum_{m_0=\lceil \nu_0/M' \rceil}^{w''_0} \sum_{m_1=\lceil \nu_1/M' \rceil}^{w''_1} \cdots \sum_{m_{j-1}=\lceil \nu_{j-1}/M' \rceil}^{w''_{j-1}},    (C-10)

where w'_{l+1} = w'_l - \lceil \nu_l/M' \rceil, w''_l = w_l - \sum_{t=l+1}^{j-1} \lceil \nu_t/M' \rceil, w_{l+1} = w_l - m_l for all l, and w'_0 = w_0. This summation is equivalent to the first 2j-fold summation in (C-4). Hence, by replacing it with (C-10), we finally obtain

    \Sigma_{W_j} = \sum_{\nu_0=0}^{w'_0 M'} \sum_{\nu_1=0}^{w'_1 M'} \cdots \sum_{\nu_{j-1}=0}^{w'_{j-1} M'} \; \sum_{m_0=\lceil \nu_0/M' \rceil}^{w''_0} \sum_{m_1=\lceil \nu_1/M' \rceil}^{w''_1} \cdots \sum_{m_{j-1}=\lceil \nu_{j-1}/M' \rceil}^{w''_{j-1}} \; \sum_{n_0=\lceil \nu_0/M' \rceil}^{w'''_0} \sum_{n_1=\lceil \nu_1/M' \rceil}^{w'''_1} \cdots \sum_{n_{j-1}=\lceil \nu_{j-1}/M' \rceil}^{w'''_{j-1}}.    (C-11)


APPENDIX D
NUMERICAL EVALUATION OF THE GAL CDF

Suppose that the random variable X has a GAL distribution as defined in (6-1), i.e., X ~ GAL(μ, σ^2, κ), for μ ∈ R and σ, κ ≥ 0. Let G(t; μ, σ^2, κ) denote the cdf of X, i.e.,

    G(t; μ, σ^2, κ) = P(X ≤ t),   t ∈ R.    (D-1)

In general, no closed-form expression exists for the function G(t; μ, σ^2, κ). Hence, we consider its numerical evaluation here. Due to the complicated form of the GAL pdf, it is usually difficult to evaluate G(t; μ, σ^2, κ) efficiently and with high accuracy through direct numerical integration of the GAL pdf. To avoid this difficulty, we utilize the mixture representation of (6-4) for a GAL random variable to compute its cdf, i.e.,

    X = μ W + σ \sqrt{W}\, Z,    (D-2)

where W and Z are two statistically independent random variables such that Z ~ N(0, 1) and W has a standard Gamma distribution with shape parameter κ (i.e., a scaled Chi-square distribution with 2κ degrees of freedom); that is, the pdf of W is

    g(x) = \frac{x^{κ-1}}{\Gamma(κ)} e^{-x},   x > 0.    (D-3)

With this mixture representation, we know that when σ^2 = 0 and μ = 0, X reduces to a Gamma and a Gaussian random variable, respectively. For these two cases, numerically evaluating the cdf of X is well studied. Thus, we only consider the case σ^2, μ ≠ 0 here.
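As a quick sanity check of the mixture representation (D-2)-(D-3), the cdf can be estimated by Monte Carlo sampling. This is an illustrative sketch only; the parameter names follow the reconstruction above.

```python
import numpy as np

def gal_cdf_mc(t, mu, sigma, kappa, n=200_000, seed=0):
    """Monte Carlo estimate of G(t) = P(X <= t) for X = mu*W + sigma*sqrt(W)*Z,
    with W ~ Gamma(kappa, 1) and Z ~ N(0, 1) as in (D-2)-(D-3)."""
    rng = np.random.default_rng(seed)
    W = rng.gamma(shape=kappa, scale=1.0, size=n)
    Z = rng.normal(size=n)
    X = mu * W + sigma * np.sqrt(W) * Z
    return float(np.mean(X <= t))

print(gal_cdf_mc(t=1.0, mu=0.5, sigma=1.0, kappa=2.0))
```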


For this case, we rewrite the cdf G(t; μ, σ^2, κ) as

    G(t; μ, σ^2, κ) = P\big(μ W + σ \sqrt{W}\, Z < t\big) = \int_0^{\infty} Q\!\left( \frac{μ x - t}{σ \sqrt{x}} \right) \frac{x^{κ-1} e^{-x}}{\Gamma(κ)}\, dx,    (D-4)

where Q(·) denotes the Gaussian Q-function. The integral in (D-4) is then evaluated by Gauss-Laguerre quadrature: with the weight function w(x) = x^{α} e^{-x} in (D-8), the remaining factor of the integrand is collected into a function f(x). In particular, taking α = 0, the integrand of (D-4) is written as e^{-x} f(x) with f(x) = Q\big((μx - t)/(σ\sqrt{x})\big)\, x^{κ-1}/\Gamma(κ) as defined in (D-9).

It is easy to check that for κ ≥ 1, f(x) in (D-9) is bounded for x ∈ [0, +∞). Thus, the integral in (D-4) can be computed as

    \int_0^{\infty} e^{-x} f(x)\, dx \approx \sum_{i=1}^{n} w_i f(x_i),    (D-10)

where the Laguerre abscissas x_i and weights w_i can be found in the literature, e.g., [51]. For the case 0 < κ < 1, we note that f(x) defined in (D-9) has a singular point at x = 0 when t > 0. In this case, using (D-10) to calculate the integral introduces a large error. To avoid this problem, we use the weight function in (D-8) with α = κ - 1. Correspondingly, f(x) becomes

    f(x) = \frac{1}{\Gamma(κ)} Q\!\left( \frac{μ x - t}{σ \sqrt{x}} \right),    (D-11)

which is guaranteed to be bounded. Then, the integral in (D-4) is given by

    \int_0^{\infty} x^{α} e^{-x} f(x)\, dx \approx \sum_{i=1}^{n} w_i f(x_i),   -1 < α = κ - 1 < 0.

For this integral, [52] gives an efficient method and the necessary coefficient tables to compute the generalized Laguerre abscissas x_i and weights w_i by Chebyshev expansions.
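For illustration (not from the dissertation), the same quadrature idea can be reproduced with standard library routines: scipy.special.roots_genlaguerre supplies generalized Gauss-Laguerre abscissas and weights for the weight function x^α e^{-x}, so both the κ ≥ 1 and 0 < κ < 1 branches described above can be handled at once with α = κ - 1. The function name and parameterization below are illustrative assumptions.

```python
import numpy as np
from scipy.special import roots_genlaguerre, gamma, ndtr

def gal_cdf_quad(t, mu, sigma, kappa, n=80):
    """Evaluate G(t) = int_0^inf Q((mu*x - t)/(sigma*sqrt(x))) x^(kappa-1) e^(-x) / Gamma(kappa) dx,
    cf. (D-4), by generalized Gauss-Laguerre quadrature with alpha = kappa - 1."""
    alpha = kappa - 1.0
    x, w = roots_genlaguerre(n, alpha)                    # nodes/weights for weight x^alpha * e^(-x)
    q = 1.0 - ndtr((mu * x - t) / (sigma * np.sqrt(x)))   # Gaussian Q-function
    return float(np.sum(w * q) / gamma(kappa))

# Cross-check against the Monte Carlo sketch given after (D-3)
print(gal_cdf_quad(t=1.0, mu=0.5, sigma=1.0, kappa=2.0))
```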


REFERENCES

[1] T. S. Rappaport, Wireless Communications: Principles and Practice, Prentice-Hall, Upper Saddle River, NJ, 1996.

[2] J. G. Proakis, Digital Communications, 4th ed., McGraw-Hill, New York, 2000.

[3] D. Tse and P. Viswanath, Fundamentals of Wireless Communication, Cambridge University Press, Cambridge, 2004.

[4] J. N. Laneman, D. N. C. Tse, and G. W. Wornell, "Cooperative diversity in wireless networks: efficient protocols and outage behavior," IEEE Trans. Inform. Theory, vol. 50, pp. 3062-3080, Dec. 2004.

[5] I. E. Telatar, "Capacity of multi-antenna Gaussian channels," European Transactions on Telecommunications, vol. 10, no. 6, pp. 585-595, Nov. 1999.

[6] A. Goldsmith, Wireless Communications, Cambridge University Press, Cambridge, 2004.

[7] C. Chuah, D. N. C. Tse, J. M. Kahn, and R. A. Valenzuela, "Capacity scaling in MIMO wireless systems under correlated fading," IEEE Trans. Inform. Theory, vol. 48, no. 3, pp. 637-650, March 2002.

[8] T. F. Wong, X. Li, and J. M. Shea, "Iterative decoding in a two-node distributed array," in Proc. IEEE Milcom'02, Anaheim, CA, pp. 1320-1324, Oct. 2002.

[9] T. F. Wong, X. Li, and J. M. Shea, "Distributed decoding of rectangular parity-check code," Electronics Letters, vol. 38, no. 22, pp. 1364-1365, Oct. 2002.

[10] C. Berrou and A. Glavieux, "Near optimum error correcting coding and decoding: Turbo-codes," IEEE Trans. Commun., vol. 44, no. 10, pp. 1261-1271, Oct. 1996.

[11] J. Hagenauer, E. Offer, and L. Papke, "Iterative decoding of binary block and convolutional codes," IEEE Trans. Inform. Theory, vol. 42, pp. 429-445, Mar. 1996.

[12] T. F. Wong and J. M. Shea, "Multi-dimensional parity check codes for bursty channels," in Proc. 2001 IEEE Int. Symp. Information Theory, Washington, D.C., p. 123, June 2001.


[13] T. F. Wong, J. M. Shea, and X. Li, "Using multi-dimensional parity-check codes to obtain diversity in Rayleigh fading channels," in Proc. IEEE Globecom, San Antonio, TX, pp. 1210-1214, Nov. 2001.

[14] S. Lin and D. J. Costello, Jr., Error Control Coding: Fundamentals and Applications, Prentice Hall, Englewood Cliffs, NJ, 1983.

[15] S. Benedetto, D. Divsalar, G. Montorsi, and F. Pollara, "A soft-input soft-output maximum a posteriori (MAP) module to decode parallel and serial concatenated codes," TDA Progress Report 42-127, Jet Propulsion Lab., Pasadena, CA, Nov. 1996.

[16] L. R. Bahl, J. Cocke, F. Jelinek, and J. Raviv, "Optimal decoding of linear codes for minimizing symbol error rates," IEEE Trans. Inform. Theory, vol. 20, pp. 284-287, Mar. 1974.

[17] G. Ungerboeck, "Channel coding with multilevel/phase signals," IEEE Trans. Inform. Theory, vol. IT-28, no. 1, pp. 55-67, Jan. 1982.

[18] E. Zehavi, "8-PSK trellis codes for a Rayleigh channel," IEEE Trans. Commun., vol. 40, no. 5, pp. 873-884, May 1992.

[19] G. Caire, G. Taricco, and E. Biglieri, "Bit-interleaved coded modulation," IEEE Trans. Inform. Theory, vol. 44, pp. 927-945, May 1998.

[20] X. Li and J. A. Ritcey, "Bit-interleaved coded modulation with iterative decoding," in Proc. IEEE ICC'99, Vancouver, BC, Canada, pp. 858-864, June 1999.

[21] X. Li and J. A. Ritcey, "Trellis-coded modulation with bit interleaving and iterative decoding," IEEE J. Select. Areas Commun., vol. 17, no. 4, pp. 715-724, Apr. 1999.

[22] G. D. Forney and G. Ungerboeck, "Modulation and coding for linear Gaussian channels," IEEE Trans. Inform. Theory, vol. 44, no. 6, pp. 2384-2415, Oct. 1998.

[23] X. Li, T. F. Wong, and J. M. Shea, "Bit-interleaved rectangular parity-check coded modulation with iterative demodulation in a two-node distributed array," in Proc. IEEE ICC'03, Anchorage, AK, pp. 2812-2816, May 2003.

[24] A. Avudainayagam, J. M. Shea, T. F. Wong, and X. Li, "Reliability exchange schemes for iterative packet combining in distributed arrays," in Proc. IEEE WCNC'03, New Orleans, LA, pp. 832-837, Mar. 2003.

[25] X. Li, T. F. Wong, and J. M. Shea, "Performance analysis for collaborative decoding with least-reliable-bit exchange on AWGN channels," in Proc. IEEE ICC'05, Seoul, Korea, pp. 678-682, May 2005.


[26] A. S. Tanenbaum, Computer Networks, 3rd ed., Prentice-Hall, Upper Saddle River, NJ, 1996.

[27] N. Wiberg, Codes and Decoding on General Graphs, Ph.D. thesis, Linkoping University, Linkoping, Sweden, 1996.

[28] S. ten Brink, "Convergence behavior of iteratively decoded parallel concatenated codes," IEEE Trans. Commun., vol. 49, no. 10, pp. 1727-1737, Oct. 2001.

[29] T. J. Richardson, M. A. Shokrollahi, and R. L. Urbanke, "Design of capacity-approaching irregular low-density parity-check codes," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 619-637, Feb. 2001.

[30] D. Divsalar, S. Dolinar, and F. Pollara, "Iterative turbo decoder analysis based on density evolution," IEEE J. Select. Areas Commun., vol. 19, no. 5, pp. 891-907, May 2001.

[31] H. El Gamal and A. R. Hammons, Jr., "Analyzing the turbo decoder using the Gaussian approximation," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 671-686, Feb. 2001.

[32] X. Li, T. F. Wong, and J. M. Shea, "Performance analysis for collaborative decoding with least-reliable-bit exchange on AWGN channels," submitted to IEEE Trans. Commun.

[33] L. Reggiani and G. Tartara, "Probability density function of soft information," IEEE Commun. Lett., vol. 6, no. 2, pp. 52-54, Feb. 2002.

[34] S. G. Wilson, Digital Modulation and Coding, Prentice Hall, Englewood Cliffs, NJ, 1996.

[35] M. Simon and D. Divsalar, "Some new twists to problems involving the Gaussian probability integral," IEEE Trans. Commun., vol. 46, no. 2, pp. 200-210, Feb. 1998.

[36] G. McLachlan and D. Peel, Finite Mixture Models, Wiley, New York, 2000.

[37] S. Kotz, T. Kozubowski, and K. Podgorski, The Laplace Distribution and Generalizations: A Revisit with Applications to Communications, Economics, Engineering, and Finance, Birkhauser, Boston, MA, 2001.

[38] T. J. Richardson and R. L. Urbanke, "The capacity of low-density parity-check codes under message-passing decoding," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 599-618, Feb. 2001.

[39] S.-Y. Chung, T. J. Richardson, and R. L. Urbanke, "Analysis of sum-product decoding of low-density parity-check codes using a Gaussian approximation," IEEE Trans. Inform. Theory, vol. 47, no. 2, pp. 657-670, Feb. 2001.


[40] W. Hoeffding, "Probability inequalities for sums of bounded random variables," J. Amer. Statist. Assoc., vol. 58, pp. 13-30, Mar. 1963.

[41] M. L. Eaton, "A probability inequality for linear combinations of bounded random variables," Ann. Statist., vol. 2, no. 3, pp. 609-613, 1974.

[42] G. Bennett, "Probability inequalities for the sum of independent random variables," J. Amer. Statist. Assoc., vol. 57, pp. 33-45, Mar. 1962.

[43] G. Bennett, "A one-sided probability inequality for the sum of independent, bounded random variables," Biometrika, vol. 55, no. 3, pp. 565-569, 1968.

[44] V. Bentkus, "On Hoeffding's inequalities," Ann. Probab., vol. 32, no. 2, pp. 1650-1673, 2004.

[45] A. Cohen, Y. Rabinovich, A. Schuster, and H. Shachnai, "Optimal bounds on tail probabilities: a study of an approach," in Advances in Randomized Parallel Computing, Kluwer Academic Publishers, Boston, MA, 1999.

[46] E. T. Copson, Asymptotic Expansions, Cambridge University Press, Cambridge, 1965.

[47] J. L. Jensen, Saddlepoint Approximations, Oxford University Press, Oxford, 1995.

[48] E. Biglieri, G. Caire, and G. Taricco, "Approximating the pairwise error probability for fading channels," Electronics Letters, vol. 31, no. 19, pp. 1625-1627, Sep. 1995.

[49] R. W. Butler and A. T. A. Wood, "Saddlepoint approximation for moment generating functions of truncated random variables," Ann. Statist., vol. 32, no. 6, pp. 2712-2730, Dec. 2004.

[50] Z. Kopal, Numerical Analysis, Wiley, New York, 1961.

[51] T. Takemasa, "Abscissae and weights for the Gauss-Laguerre quadrature formula," Comput. Phys. Comm., vol. 52, no. 1, pp. 133-140, 1988.

[52] P. Lambin and J. P. Vigneron, "Tables for the Gaussian computation of \int_0^\infty x^{\alpha} e^{-x} f(x) dx for values of \alpha varying continuously between -1 and +1," Mathematics of Computation, vol. 33, no. 146, pp. 805-811, Apr. 1979.


BIOGRAPHICAL SKETCH

Xin Li received his B.S. and M.S. degrees in electrical engineering from Northwestern Polytechnical University, Xi'an, China, in 1996, and from Shanghai Jiao Tong University, Shanghai, China, in 1999, respectively. He joined the Wireless Information Networking Group (WING) at the University of Florida Department of Electrical and Computer Engineering in 2000. He is currently pursuing the Ph.D. degree as a graduate research assistant. His research interests include diversity techniques in wireless communication, iterative coding/decoding, modulation/demodulation, equalization, channel estimation, timing and carrier synchronization, QoS and high-throughput MAC protocols for WLAN, adaptive signal processing, and statistical signal processing.


COLLABORATIVE DECODING AND
ITS PERFORMANCE ANALYSIS
















By

XIN LI
















A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY


UNIVERSITY OF FLORIDA


2006


































Copyright 2006

by

Xin Li

















To my wife, Li, and my parents.















ACKNOWLEDGMENTS

First of all, I would like to give my sincere gratitude to my advisor, Dr. Tan F.

Wong, for his advice during my doctorate endeavors. Without his patience, guidance

and encouragement, none of this work would not have been possible. As a mentor,

his wisdom, kindness and enthusiasm benefitted me greatly in both of my research

work and life. I would also like to give special thanks to the co-chair of my super-

visory committee, Dr. John Shea, for many fruitful discussions and great help. His

valuable insights and ideas directly and significantly contributed to the work in this

dissertation.

Many thanks go to my fellow graduate students in the laboratory. In particular,

I appreciate Arun Avudailr' I- _: 'i, for a lot of helpful discussions and collaboration,

which directly contributed to the partial work of C'!i lter 4 in this dissertation. I also

extend my gratitude to Dr. Jose A. B. Fortes and Dr. Shigang C'!. i1 for being my

committee members, and I also thank Dr. ,lii ,y Ranka for ever being my committee

member. I appreciate for their constructive sl-.-, Iir i'- and precious time. Meantime,

great appreciation must go to the University of Florida for awarding me the Alumni

Graduate Fellowship, which provided me a full four years support during my graduate

study.

Last, but not the least, my sincere thanks go to my family for their endless love,

continuous support and encouragement during my life. This work dedicates to all of

them.















TABLE OF CONTENTS
page

ACKNOWLEDGMENTS ................... ...... iv

LIST OF TABLES ................... .......... viii

LIST OF FIGURES ................... ......... ix

ABSTRACT . . . . . . . . xiii

CHAPTER

1 INTRODUCTION .................... ....... 1

1.1 Motivation .................. ............ 1
1.2 Multi-antenna Diversity Techniques ......... ........ 2
1.3 Distributed Array ............................ 5
1.4 Collaborative Decoding ........... ............. 7
1.5 Scope of This W ork ........................... 11

2 COLLABORATIVE DECODING IN A TWO-NODE DISTRIBUTED
A R R AY . . . . . . . . .. 13

2.1 System Model ................... ......... 14
2.2 Collaborative Decoding for Rectangular Parity-Cl I: Code ..... 15
2.3 Collaborative Decoding for Convolutional Code . ... 20
2.4 Sum m ary .. .. .. ... .. .. .. .. .... .. .. .... 23

3 COLLABORATIVE DECODING FOR CODED MODULATION .... 24

3.1 System Model ....... . . .......... 26
3.2 Iterative Demodulation and Decoding for BIC' . . .. 28
3.2.1 Iterative Demodulation and Decoding Algorithm ...... ..28
3.2.2 Effect of Mapping in BIC\!-[D . . . 31
3.3 Collaborative Decoding for BICG i-[D with Rectangular Parity-Cl! 1:
Code . . . .. . . . ... 33
3.4 Performance Evaluation .................. .... .. 36
3.5 Summary .................. ............. .. 39

4 COLLABORATIVE DECODING FOR DISTRIBUTED ARRAY WITH
TWO OR MORE NODES .................. ....... .. 40

4.1 System Model for Distributed Array with Two or More Nodes 41









4.2 Collaborative Decoding and Information Exchange Schemes . 43
4.2.1 Information Exchange with Memory ........... .44
4.2.2 Least-Reliable-Bit Information Exchange .......... ..46
4.2.3 Most-Reliable-Bit Information Exchange .......... ..49
4.3 Performance Evaluation ............... .. .. 51
4.4 Summary ............... .......... .. .. 55

5 PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH
LEAST-RELIABLE-BIT EXCHANGE ON AWGN CHANNELS ..... 56

5.1 Gaussian-Approximated Density Evolution For Nonrecursive Convo-
lutional Codes .................. ......... .. .. 57
5.2 Error Performance Analysis .................. ..... 64
5.2.1 BER Upper Bound for M > 3 ................ 65
5.2.2 Union Bound for Max-log-MAP Decoding . . ... 68
5.2.3 Applying Max-log-MAP Decoding Union Bound to Collabo-
rative Decoding ................ ... .. 70
5.2.4 BER Upper Bound for M = 2 ................ 76
5.3 Numerical Results ............... ........ .. 78
5.4 Summary ............... .......... .. .. 82

6 PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH
MOST-RELIABLE-BIT EXCHANGE ON AWGN AND RAYLEIGH FAD-
ING CHANNELS .................. ............ .. 83

6.1 Statistical Approximation for Extrinsic Information . ... 85
6.1.1 AW GN ('! .1i, I .................. ..... .. 86
6.1.2 Independent Rayleigh Fading ('C!i i i, I ............ 89
6.2 Density Evolution Model .................. .... .. 93
6.2.1 Additional Information Generation ............. ..95
6.2.2 Finding Parameters in Density Evolution Model ...... ..99
6.3 A General Upper Bound for BER .................. .100
6.4 Error Events and Probabilities Analysis ............... ..109
6.4.1 Union Bound for Collaborative Decoding . . ... 110
6.4.2 Analysis for MRB Information Exchange on Error Events 111
6.4.3 Upper Bounds for Probabilities Involving . ... 122
6.5 Evaluation of BER Upper Bound ................. 127
6.5.1 AW GN C('! ,iii, I .................. .... .. 128
6.5.2 Independent Rayleigh Fading ('Ci i I . . ..... 134
6.6 Numerical Results .................. ....... .. 140
6.7 Summary .................. ............. 147

7 CONCLUSIONS AND FUTURE WORK ................. ..149

APPENDIX

A RECTANGULAR PARITY-CHECK ENCODING AND DECODING 152









A.1 Multidimensional Parity-C('! Encoding .............. .. 152
A.2 Iterative Multidimensional Parity-C(hl. I: Decoding . ... 153

B PROOF OF EQUATION (5-28) .................. ..... 155

B.1 System Model ..... . . ..... ........... 155
B.2 Extrinsic Information in Max-log-MAP Decoding . ... 156
B.2.1 Optimal Log-MAP Decoding ...... . . 156
B.2.2 Max-log-MAP Decoding ................ 158

C PROOF OF THEOREM 6.4.4 .................. .... .161

D NUMERICAL EVALUATION OF GAL CDF .............. ..165

REFERENCES .................. ................ .. 168

BIOGRAPHICAL SKETCH ............. . . .. 172














LIST OF TABLES


Table page

5-1 Different choices of {pj} and corresponding average information exchange
amount with M = 8 for rate 1/2 CC(5, 7) code. is calculated with
respect to the information exchange amount of MRC, OMRC . ..80

6-1 Different choices of {pj} and the corresponding average information ex-
change amount S with M = 6 for rate-1/2 CC(5, 7) code. S is calculated
with respect to the information exchange amount of MRC, OMRC ..... 145















LIST OF FIGURES
Figure page

1-1 Linear combiner for a SIMO system .................. 4

1-2 Distributed array .................. .... 6

1-3 Iterative decoding ................... ... 9

1-4 Collaborative decoding .................. ......... .. 10

2-1 A cluster of two nodes forming a distributed antenna array . .... 14

2-2 Example of rectangular parity-check code for packet of nine information
bits . .. . . ... . . . .... 16

2-3 Conditional probability of the event that an error occurs in the bits whose
soft output magnitudes rank in the lowest percentiles. . .... 18

2-4 Performance of collaborative decoding for the 322 RPCC with information
exchange between two receiving nodes over an AWGN channel ....... 19

2-5 Performance of collaborative decoding for the 322 RPCC with information
exchange between two receiving nodes over a Rayleigh fading channel. 20

2-6 Performance of collaborative decoding for the CC(5,7) with information
exchange between two receiving nodes over an AWGN channel ....... 22

2-7 Performance of collaborative decoding for the CC(5,7) with information
exchange between two receiving nodes over a Rayleigh fading channel. 22

3-1 System model of BIC\!-[D with RPCC .................. .. 27

3-2 BER for BIC'\!-[D with 322 RPCC and 8PSK in the two-node distributed
array over AWGN channel. .................. ..... 37

3 3 BER for BIC\ D with 322 RPCC and 8PSK in the two-node distributed
array over Rayleigh fading channel ............... .. .. 37

3-4 Average SNR at 10-5 BER versus spectral efficiency for BIC'\!-[D with
322 RPCC in a two-node distributed array over AWGN channels. . 38

3-5 Average SNR at 10-5 BER versus spectral efficiency for BIC'\!-[D with
322 RPCC in a two-node distributed array over Rayleigh fading channels. 39









4-1 Distributed array with multiple nodes. ................. 41

4-2 Typical probability distribution functions of soft-output for convolutional
codes ..................................... 47

4-3 BER performance of collaborative decoding with LRB exchange for the
cases of M = 2,3,4,6 and 8 on AWGN channels, where CC(5,7) and
{pj} {0.1,0.15,0.25} are used. ................... ... 52

4-4 BER performance of collaborative decoding with LRB exchange on Rayleigh
fading channels, parameter settings are the same as Fig. 43. ...... 52

4-5 BER performance of collaborative decoding with MRB exchange on AWGN
channels, where {pj} {0.1,0.2,1} are used. .............. 53

4-6 BER performance of collaborative decoding with MRB exchange on Rayleigh
fading channels, parameter settings are the same as Fig. 45. ...... 54

4-7 Comparison of information exchange amount with respect to MRC for
LRB and MRB exchange schemes ................ ...... 54

5-1 System model for collaborative decoding process. . . 58

5-2 Empirical pdfs of extrinsic information generated by the MAP decoders
in successive iterations in collaborative decoding with the LRB exchange
for M = 6 and Eb/No = 3dB on AWGN channels, where the maximum
free distance 4-state nonrecursive convolutional code is used. ...... ..59

5-3 Comparison of mean and variance of the extrinsic information from the
density evolution model and that from the actual collaborative decoding
process . . . . . . . . ..... 63

5-4 Comparison of threshold estimated from the density evolution model that
from the actual collaborative decoding process .............. ..64

5-5 Comparison of the proposed bounds, simulation results for the cases of
M = 2 and 6 on AWGN channels, where CC(5, 7) and {p} = {0.1, 0.15, 0.25}
are used .................. ................. .. 78

5-6 Comparison of the proposed bounds, simulation results in the last iteration
for the cases of M = 2, 3, 4, 6 and 8 on AWGN channels, where CC(15, 17)
and {pj} {0.1,0.15,0.25} are used. .................. 79

5-7 Comparison of performance for M = 8 and CC(5, 7) on AWGN channels
with different choices of {p } in Table 5-1 ................ 81

5-8 Asymptotic performance for M = 8 and CC(5, 7) on AWGN channels with
different choices of {pj} in Table 5-1 ................ 81









6-1 Empirical pdfs of extrinsic information generated by the MAP decoder
at successive iterations in collaborative decoding with MRB exchange for
M = 6 on an AWGN channel with Eb/No = 5dB, for which the maximum
free distance 4-state nonrecursive convolutional code is used. ...... ..87

6-2 Empirical pdfs of extrinsic information generated by the MAP decoders
in successive iterations in collaborative decoding with the MRB exchange
for M = 8 and Eb/No = 8dB on independent Rayleigh fading channels,
where the maximum free distance 4-state nonrecursive convolutional code
is used .................. ................. .. 93

6-3 Density evolution model for collaborative decoding process . ... 95

6-4 Comparison of mean and variance of the extrinsic information obtained
from the density evolution model and that from the actual collaborative
decoding process. .................. .. ........ 101

6-5 Comparison of threshold estimated from the density evolution model and
that from the actual collaborative decoding process. . ..... 101

6-6 Comparison of the bounds and simulation results for the cases of M = 6 on
an AWGN channel, where CC(5, 7) and {p } {0.1, 0.2, 0.6}, {0.1, 0.2, 0.8}
and {0.1, 0.2, 1} are used. .................. ..... 141

6-7 Comparison of the bounds and simulation results for the cases of M =
6 on an AWGN channel, where CC(15,17) and {pj} = {0.1,0.2,0.6},
{0.1,0.2,0.8} and {0.1,0.2, 1} are used. .................. 141

6-8 Comparison of the bounds and simulation results for the cases of M = 6
on an independent Rayleigh fading channel, where CC(5, 7) and {pj} =
{0.1,0.2,0.6}, {0.1,0.2,0.8} and {0.1,0.2, 1} are used . ..... 142

6-9 Comparison of the bounds and simulation results for the cases of M = 6
on an independent Rayleigh fading channel, where CC(15, 17) and {pj} =
{0.1,0.2,0.6}, {0.1,0.2,0.8} and {0.1,0.2, 1} are used . ..... 142

6-10 Comparison of the proposed bounds, simulation results in the last iteration
for the cases of M = 2, 3, 4 and 8 on an AWGN channel, where CC(5, 7)
and {pj} {0.1,0.2, 1} are used. ................ ..... 144

6-11 Comparison of the proposed bounds, simulation results in the last itera-
tion for the cases of M = 2, 3 and 4 on an independent Rayleigh fading
channels, where CC(5, 7) and {pj} {0.1, 0.2, 1} are used. . ... 145

6-12 Comparison of performance for M = 6 and CC(5, 7) on an AWGN channel
with the different choices of {pj} in Table 6-1 .............. ..146

6-13 Comparison of performance for M = 6 and CC(5, 7) on Rayleigh fading
channels with the different choices of {pj} in Table 61 . ... 147








C-1 Summation order switch procedure for {vi,mn} .. . ... 162















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Phil. .. hi,

COLLABORATIVE DECODING AND
ITS PERFORMANCE ANALYSIS

By

Xin Li

li ,v 2006

C('! ir: Tan F. Wong
Major Department: Electrical and Computer Engineering

Antenna array processing is a common technique that utilizes spatial diversity

to enhance the robustness of a digital communication system against deteriorative

wireless transmission effects such as channel noise and fading. In order to obtain

spacial diversity effectively, the physically connected antenna array elements in a

traditional array are required to be separated far apart so that the received signals

at different antennas are independent of each other. This situation usually makes the

size of the antenna array too large to be feasible in many practical scenarios.

Unlike traditional arrays, distributed arrays alleviate the array size constraint by

utilizing a cluster of independent physically separated receivers in a wireless network

as its array elements. All the array elements (receiving nodes) can communicate with

each other through a wireless broadcast channel. By exchanging information among

these receiving nodes during the reception process, it is possible to obtain spatial

diversity gain (or antenna gain) with such a distributed array. Since the information

exchanging traffic load among the receiving nodes is an important issue for a wireless

network, conventional receive diversity techniques such as maximum ratio combining

( I RC) become inefficient in distributed arrays.









Collaborative decoding is an iterative receive diversity approach suitable for dis-

tributed array when error correction codes are used in the transmission process. Re-

ceive diversity is achieved by exchanging only a portion of decoding information

among receiving nodes in collaborative decoding. By carefully selecting the decod-

ing information, collaborative decoding can lower the information exchange amount,

while providing performance close to that of MRC. Based on the statistic charac-

teristics of the output of maximum a posteriori decoders, we study two informa-

tion exchange schemes for collaborative decoding: least-reliable-bit (LRB) and most-

reliable-bit ( I B) exchange schemes. Error performance of these two schemes under

different transmission environments is investigated and compared with Monte Carlo

simulations. Theoretical analysis is also carried out for collaborative decoding with

the LRB and MRB exchange schemes when nonrecursive convolutional codes are

used.















CHAPTER 1
INTRODUCTION

1.1 Motivation

Wireless communications primarily study the problem of reliable information

(e.g., voice, video, images, text, data, etc.) transmission through an atmosphere
medium (the wireless channel) by using a radio wave as the information carrier.

Wireless communication techniques make it possible for people to communicate freely.

Since the birth of the wireless communication era in the 1970's, the demand for wire-

less transmission has been growing at a very rapid pace. With the developments

of digital communication techniques, radio frequency circuit fabrication, and very-

large-scale integrated circuit technologies, continual improvement of wireless commu-

nication techniques has been fulfilling the demands largely in the past few decades.

However, the demands are still growing exponentially [1].

As is well known, the two underlying resources of wireless channel are radio

bandwidth and transmitter power. Unfortunately, these two resources are very lim-

ited. Traditional single antenna communication techniques usually try to increase the

capacity of a wireless channel by increasing the radio bandwidth and transmission

power. However, with the rapid growth of wireless networks, bandwidth in usable

spectrum has been highly saturated, while transmission power is limited due to the

physical equipment constraints, e.g., limited battery life. Moreover, the transmission

power should be restricted below some limitation in order to reduce the mutual infer-

ence among wireless communication devices using the same wireless channel. Thus,

it becomes more and more difficult to fulfill the continuously and rapidly growing

demand of wireless channel capacity by using single antenna techniques.









Another challenge in wireless communication is the hostile nature of the wireless

channel. One common problem in signal transmission through any channel is additive

noise [2]. The additive noise is usually modeled as statistically independent Gaussian

noise with a flat power spectral density. This noise is also called thermal noise. The

primary source of thermal noise is the internal components such as resistors and

solid-state devices used in the receiver. When the transmitted signal goes through

the receiver, the data symbol will be inevitably corrupted by the thermal noise.

Interference, as an external performance degradation factor, is another challenge in

wireless communication systems. Signals from other transmitters using the same

wireless channel are usually the significant sources of interference. Besides noise

and interference, fading is also one of the main channel impediments in wireless

communication. Due to the nature of radio signals and the propagation characteristics

of the wireless channel, signals transmitted through a wireless channel can suffer from

attenuation, amplitude, phase and multipath distortion [3].

In order to combat the severe channel impairments due to fading and noise

without excessively increasing the transmission power, it is indispensable to adapt

some auxiliary wireless communication techniques different from the traditional ones

used in single antenna systems. In this scenario, multi-antenna, or space, diversity

techniques are particularly attractive because they can be readily combined with

other forms of diversity and offer dramatic performance gains when other forms of

diversity are unavailable [4, 5].

1.2 Multi-antenna Diversity Techniques

Multi-antenna diversity is widely considered to be the most promising avenue for significantly increasing the bandwidth efficiency of wireless data transmission systems. In multi-antenna diversity techniques, diversity is obtained by employing multiple antennas (also called an antenna array) at the transmitter and/or the receiver. The basic idea behind multi-antenna diversity techniques is that, if the antennas are placed sufficiently far apart, the channel fading between different antenna pairs becomes more or less independent, so that independent signal paths are created between the transmitter and the receiver. Reliable communication can then be maintained as long as at least one of the independent paths is strong.

If multiple antennas are employed at the receiver end but only one antenna is used at the transmitter, then the channel between the transmitter and receiver is called a single-input, multi-output (SIMO) channel. The space diversity obtained in a SIMO channel is called receive diversity. If multiple antennas are employed at the transmitter only, then the channel is called a multi-input, single-output (MISO) channel. Diversity in a MISO channel is called transmit diversity. If multiple antennas are employed at both the transmitter and the receiver, then the channel is called a multi-input, multi-output (MIMO) channel. In this case, both transmit and receive diversity are provided by the channel. In this research work, we only consider SIMO channels; i.e., only receive diversity is studied here.

For a SIMO system, there are several ways to obtain receive diversity. Usually, the independent fading paths are combined to obtain a resultant signal that is then passed through a standard demodulator and/or decoder. Most combining techniques are linear: the output of the combiner is just a weighted sum of the received signals at the different antenna array elements [6]. Fig. 1-1 shows a linear combiner for a SIMO system. In the figure, we suppose that the receive antenna array contains M antenna elements and a signal s(t) is transmitted through a flat fading wireless channel. The antenna elements are far enough apart that M independent fading channels between the transmitter and receiver are created. Let r_i(t) denote the received signal at the ith antenna; it can be expressed as

    r_i(t) = a_i s(t) + n_i(t),    1 ≤ i ≤ M,























Figure 1-1: Linear combiner for a SIMO system


where a_i is the complex fading gain of the ith fading channel and n_i(t) is the additive white Gaussian noise (AWGN) at the ith antenna. Under the assumption that the channel fading gains are perfectly known, the optimal choice of the combining weight for the ith branch is the conjugate a_i* of the channel fading gain a_i, for all i [2]. The resultant combiner output signal r_Σ(t) is given by

    r_Σ(t) = Σ_{i=1}^{M} a_i* r_i(t).

This optimum combining technique is known as maximum ratio combining (MRC). In fact, MRC maximizes the signal-to-noise ratio (SNR) of the output signal, which increases linearly with the number of independent fading channels M [6]. The MRC combiner achieves the full receive diversity order of the channel and provides the optimal performance in comparison with other receive diversity techniques.
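To make the combining rule concrete, the following minimal Python sketch (illustrative only and not part of the original text; the array names and the use of NumPy are assumptions) applies the conjugate weights a_i* to the branch observations and forms the MRC decision statistic.

    import numpy as np

    def mrc_combine(r, a):
        """Maximum ratio combining for one symbol interval.

        r : length-M array of received samples r_i = a_i * s + n_i
        a : length-M array of (perfectly known) complex fading gains a_i
        Returns the combiner output sum_i conj(a_i) * r_i.
        """
        return np.sum(np.conj(a) * r)

    # toy example: M = 4 branches, BPSK symbol s = +1
    rng = np.random.default_rng(0)
    M, s, noise_var = 4, 1.0, 0.5
    a = (rng.standard_normal(M) + 1j * rng.standard_normal(M)) / np.sqrt(2)
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(M) + 1j * rng.standard_normal(M))
    r = a * s + n
    print(np.real(mrc_combine(r, a)))   # decision statistic; its SNR grows with M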








1.3 Distributed Array

It has been shown that the potential gain in channel capacity of multi-antenna systems over single-antenna systems is rather large under the assumption of independent fading and noise at the different receiving antennas [5]. However, fading correlation does exist in practice when the elements are not spaced sufficiently far apart, and it can significantly reduce the capacity of a multi-antenna system [7]. It is well known that increasing the antenna spacing decorrelates the multiple channels created by the antenna array. Thus, in order to make the independence assumption valid, the antenna elements in the array must be spaced far enough apart. Since conventional antenna arrays are composed of several physically connected antenna elements, this requirement implies a large physical size for the array.

The required antenna separation depends on the local scattering environment as well as on the carrier frequency. For a mobile transmitter near the ground with many scatterers around, the channel decorrelates over shorter spatial distances, and a typical antenna separation of half to one carrier wavelength is necessary. For base stations on high towers, larger antenna separations of several to tens of wavelengths may be required [3]. The carrier wavelength of a radio signal at frequency f is given by λ = c/f, where c = 3 × 10^8 m/s is the speed of light. For illustration, consider the concrete example of a uniform linear antenna array, in which the antennas are evenly spaced on a straight line. Suppose the multi-antenna system works at a carrier frequency of 2 GHz; the carrier wavelength is then about 0.15 m. Thus, for a uniform linear array of 8 antennas with spacing on the order of one wavelength, the length of the array exceeds 3 feet. An antenna array of this size is usually too large to be feasible in many practical scenarios. The physical size of the array therefore limits the applicability of spatial diversity techniques, especially for mobile applications.
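The arithmetic above is reproduced in the short sketch below (purely illustrative; the one-wavelength element spacing is an assumption consistent with the half-to-one-wavelength separation quoted in the text).

    c = 3e8                      # speed of light, m/s
    f = 2e9                      # carrier frequency, Hz
    wavelength = c / f           # about 0.15 m

    num_antennas = 8
    spacing = wavelength         # assumed element spacing of one wavelength
    array_length_m = (num_antennas - 1) * spacing        # about 1.05 m
    print(wavelength, array_length_m, array_length_m / 0.3048)  # length in feet (> 3)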

To overcome the physical constraint of conventional multi-antenna arrays and

take advantage of diversity techniques, we consider a network-based approach to










Figure 1-2: Distributed array


obtain spatial diversity without the use of physically connected antenna arrays. This

approach makes use of the fact that communicating nodes in a local wireless network

are inherently physically separated in space. When a remote source transmits a

message through the wireless channel to a cluster of nodes, a SIMO channel will be

essentially created between the source and the cluster of receiving nodes. Usually,

these receiving nodes are far enough apart that independent fading at different nodes

can be guaranteed. Meanwhile, the nodes are in close proximity, so that simple low-power, high-rate, reliable signaling techniques are permitted for communications

with the cluster. Hence, these nodes can coordinate their reception processes to

effectively form a distributed antenna array. In this way, spatial diversity can be

obtained through collaboration and communication among the nodes in the cluster.

Fig. 1-2 illustrates the concept of the distributed antenna array. In contrast to a conventional multi-antenna array, as in Fig. 1-1, where the received signals at all antenna elements are collected and processed in a centralized manner, each node in the distributed array is a complete receiver and possesses the capability of processing its received signal independently. From the viewpoint of the whole cluster, the reception process can be thought of as a distributed process performed at the different nodes in the array. Spatial diversity is then achieved by exchanging information among the









nodes through the local wireless network during the distributed processing. It is the

combination of the distributed processing and local wireless network that allows us

to overcome the physical constraint of conventional multi-antenna arrays.

It is worthwhile to point out that the kind of topology depicted by Fig. 1-2

is applicable to many practical wireless systems. For instance, consider a cellular

system in which a mobile unit is within the range of multiple base stations. The

base stations, which are linked together by optical or high-speed wired connections,

receive independent copies of the transmitted signal from the mobile unit and jointly

process the received signals to gain diversity from the independent channels. The

same scenario applies to a wireless LAN system in which the base stations are replaced

by access points, and the links joining the access points are usually wired Ethernet

links. For a military communication example, consider a cluster of local sensors in a

sensor network. The close proximity of the sensors allows the wireless links between

the nodes to be very high speed, while requiring only low-power and low-complexity

processing. A transmitter, either from another cluster in the network or external to

the sensor network, sends a signal to this cluster. Each sensor receives a <" li of the

transmitted signal and processes the signal in a distributed manner using information

from other sensors. The same scenario applies to inter-group communications between

small groups of soldiers, each carrying a mobile communicator, or to a group of

collaborating mobile users communicating with a base-station in a cellular network

[8, 9].

1.4 Collaborative Decoding

In distributed processing, the information-exchange traffic load among the array nodes is an important issue. It is undesirable to exchange

an extensive amount of information in the reception process because the wireless

network resource is limited. Conventional diversity techniques using linear combin-

ing described in Section 1.2 become expensive in terms of the information exchange









amount because all the received symbols at each node need to be forwarded through

the network in order to achieve the full spatial diversity. In fact, it can be shown

from the information-theoretic viewpoint that it is possible to achieve the full diver-

sity advantage with a much smaller amount of information exchange than used in the

common combining techniques such as MRC.

To explore efficient diversity techniques for distributed arrays, information must

be exchanged selectively and that information must be used effectively at the re-

ceiving nodes. Usually, the received signal before reception processing (such as detection, demodulation, or decoding) suffers the minimum information loss due to the performance constraints (or capabilities) of the receiver and the inherent uncertainty in the communication system. However, the signal before processing also suffers the maximum corruption from fading and noise compared with the information after processing; thus, we can directly learn much less about the transmitted message from the signal before processing than from the information after processing. Although it may suffer a certain information loss in the receiving procedure, the information after processing usually reflects the true message with high confidence, so it is possible to use it effectively for exploiting spatial diversity. Moreover, the information after processing may possess desirable properties that allow an effective information-selection method to be adopted to reduce the amount of information exchanged.

Error correction coding provides the capability of detecting and correcting bit

errors encountered in the transmission process. It is one of the most often used

techniques in wireless communication systems. Maximum a posteriori (MAP) decoding is widely used with error correction codes. A MAP decoder decides each message bit by choosing the value that maximizes its a posteriori probability, and it outputs that a posteriori probability for each bit. The output is often




















Figure 1-3: Iterative decoding


expressed in log-likelihood ratio (LLR) form as

    L(x) = log [ P(x = +1 | y) / P(x = −1 | y) ],

where x is the information bit, y is the channel observation, and P(x = +1 | y) and P(x = −1 | y) are the a posteriori probabilities of x being +1 and −1 given y, respectively. The LLR value not only determines the sign (hard decision) of a binary bit, but also indicates the reliability of that decision. It turns out that the output of MAP decoders can be the proper information to exchange for obtaining spatial diversity in distributed arrays.
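As a small illustration of the LLR form (a sketch, not taken from the dissertation; the BPSK-over-AWGN expression assumes a real channel y = x + n with noise variance sigma² and symbols ±1), the a posteriori probability and its LLR are related as follows.

    import numpy as np

    def llr_from_prob(p_plus):
        """LLR of a bit from its a posteriori probability P(x = +1 | y)."""
        return np.log(p_plus / (1.0 - p_plus))

    def prob_from_llr(llr):
        """Inverse map: P(x = +1 | y) recovered from the LLR."""
        return 1.0 / (1.0 + np.exp(-llr))

    def bpsk_channel_llr(y, sigma2):
        """Channel LLR for BPSK (+1/-1) over a real AWGN channel y = x + n."""
        return 2.0 * y / sigma2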

The capacity-approaching turbo codes [10] have attracted great attention and have been extensively studied since their introduction. Turbo codes have become

a landmark in the field of error correction coding. The key idea of turbo codes is

using iterative, or "turbo", decoding to exploit the multi-component code structure

of the turbo encoder by associating a decoder with each of the component codes. In

the decoding procedure, each decoder performs MAP decoding or any soft-in, soft-

out (SISO) decoding that approximates the MAP decoding for its corresponding code

component. The decoders help each other by using the extrinsic information (output)

generated in the decoding process of the other decoders as a prior information for

their own decoding process. By repeating the procedure in an iterative fashion,












Figure 1-4: Collaborative decoding


turbo codes can achieve performance close to the Shannon capacity limit. Fig. 1-3 shows the iterative decoding procedure with two decoding components. The idea of iterative decoding has since been generalized to many transmission systems in which multiple code components are concatenated in parallel or in series, such as coded modulation,

iterative detection and equalization systems. This iterative decoding approach is

usually very powerful and exhibits near-capacity performance.

With the above considerations, we study a new approach to achieve diversity

called collaborative decoding in distributed arrays when error correction codes are

used in the transmission process. The basic idea of collaborative decoding is to extend

the iterative decoding techniques to the distributed array scenario. Fig. 1-4 depicts

how the idea of iterative decoding is extended to collaborative decoding. By viewing

receiving nodes in the array as a set of physically separated decoding components and

the information exchanging process as the extrinsic information feedback process in

Fig. 1-3, the typical iterative decoding procedure can be performed in a distributed

array.









For MAP decoders, we notice that for the bits with high decoding reliability

in previous iterations, the contribution of their a priori information to the average

decoding performance is marginal. However for the less reliable bits, the contribu-

tion of their a priori information can be significant. This fact makes it possible for

collaborative decoding to achieve diversity by exchanging only a portion of decoding

information among the receiving nodes in a distributed array. It turns out that by

carefully selecting the decoding information to exchange, collaborative decoding can

lower the amount of information that must be exchanged in the array, while providing

performance close to that of MRC. This advantage makes collaborative decoding an

attractive diversity technique for distributed array systems.

1.5 Scope of This Work

In this research work, we study the new approach of collaborative decoding

with distributed arrays to achieve spatial diversity in wireless communications. We

first investigate the possibility of using collaborative decoding in a two-node dis-

tributed array to obtain receive diversity when different channel coding techniques

are adopted for AWGN and flat Rayleigh fading channels. Then the approach is

extended to coded modulation systems with high-order signal constellations, which

provide higher spectral efficiencies and are desired for bandwidth-constrained wireless

channels. By exchanging only a small amount of information among the distributed

array in contrast to conventional spatial diversity combining techniques, the collabo-

rative decoding technique is shown to be able to achieve a significant spatial diversity

gain and perform close to the optimal MRC.

Taking into account the scalability of the distributed array, we extend collabo-

rative decoding to the more general case of an arbitrary number of nodes. Based on the statistical characteristics of the output of maximum a posteriori decoders, we propose two efficient information exchange schemes for collaborative decoding: the least-reliable-bit and most-reliable-bit exchange schemes. The error performance of these two









schemes under different transmission environments with different parameter settings

is investigated and compared with Monte Carlo simulations.

To further study the proposed approach, a theoretical analysis of the collaborative decoding technique is carried out. For analytical tractability, we consider the cases in which nonrecursive convolutional codes are used in the collaborative decoding procedure. The analysis is based on the assumption that the extrinsic information generated in the collaborative decoding process for nonrecursive convolutional codes can be approximately described by a class of Gaussian and generalized asymmetric Laplace distributions for AWGN and independent Rayleigh fading channels, respectively. With this assumption, we reduce collaborative decoding to a density evolution model with a single MAP decoder, and propose a systematic method to evaluate the error performance of collaborative decoding semi-analytically. The analysis results show that, with proper choices of parameters, collaborative decoding can achieve full diversity and approach the theoretical performance bounds asymptotically.















CHAPTER 2
COLLABORATIVE DECODING IN A TWO-NODE DISTRIBUTED ARRAY

In this chapter, we investigate the possibility of collaborative decoding achieving spatial diversity in a distributed array. We first focus on the simple case of a two-node network. Consider a pair of nodes that are connected via a communication channel with relatively benign characteristics, which permits simple low-power, high-rate, reliable signaling techniques to be employed for communications between these two

nodes. Typically, these two nodes are in close proximity. A distant transmitter sends

a packet of coded data bits to the two nodes. Each of the two nodes receives an

independent copy of the transmitted signal.

For the distributed array, we employ iterative decoding to extract important

information from the received signal at each node, and only pass this information

between the two nodes. More precisely, each node decodes the signal that it receives

and generates reliability estimates (soft outputs) for the transmitted data bits. The

two nodes then exchange soft outputs of a small portion of the bits that are least

reliable. Upon receiving additional information about the least reliable bits from

another node, a node will restart the decoding process. This process of information

exchange and iterative decoding then continues for a number of iterations. The

objective is to obtain the maximum degree of diversity advantage from the signals

received at the two nodes, while requiring a minimum amount of information exchange

between them.

This chapter is primarily based on the work of Wong et al. [8, 9]. We will

present the results of the simulations that we carried out to investigate the viability

of the proposed distributed iterative decoding approach. In Section 2.1, we describe

the system and channel model assumed in the simulations. In Sections 2.2 and 2.3,

















Figure 2-1: A cluster of two nodes forming a distributed antenna array.


we report decoding designs and simulation results employing a rectangular parity-

check code and a convolutional code to encode packets from the distant transmitter,

respectively. In Section 2.4, we discuss the potential of the proposed distributed

iterative decoding approach in different application scenarios.

2.1 System Model

We consider a system with the topology shown in Fig. 2-1. A distant transmitter

sends a packet of coded data bits to the two receiving nodes. For simplicity, we assume

that the two nodes can communicate with each other reliably. We are only interested

in the communication link from the distant transmitter to the two nodes. We assume

that the channels from the transmitter to the two receiving nodes are independent.

We further assume that the coded bits from the transmitter are modulated using

binary phase-shift keying (BPSK). After matched-filtering and proper normalization,

the decision statistics for the ith coded bit obtained at the two receiving nodes are

    r_i^(1) = a_i^(1) x_i + n_i^(1),
    r_i^(2) = a_i^(2) x_i + n_i^(2),

where x_i is the BPSK symbol (±1) representing the ith bit, and n_i^(1) and n_i^(2) are independent zero-mean, circular-symmetric complex Gaussian random variables with per-component variance N_0/2 representing the thermal noise components at the first

and second receiving nodes, respectively. We consider two different channel models.

The first model is the additive white Gaussian noise (AWGN) model. For AWGN









channels, both channel gains a_i^(1) and a_i^(2) are equal to 1, i.e., the normalized received energy per coded bit is E_c = 1. The second model we consider is the flat Rayleigh fading model. For Rayleigh fading channels, a_i^(1) and a_i^(2), for all i, are modeled as independent zero-mean, circular-symmetric, unit-variance complex Gaussian random variables. This corresponds to the assumption of a perfect channel interleaver and a normalized average received energy per coded bit of E_c = 1. For both AWGN

and Rayleigh fading models, we assume that perfect phase estimation is achieved

and hence coherent demodulation is performed at each node. In the case of Rayleigh

fading model, we assume that perfect channel state information is available at the

nodes.

2.2 Collaborative Decoding for Rectangular Parity-Check Code

In this section, we consider the design of the distributed iterative decoder when

a rectangular parity-check code (RPCC) is employed to encode the data bits in the transmitted packet.

The rectangular parity-check code (RPCC) [11] is a punctured version of the product of two single parity-check codes. An example of a 3 × 3 RPCC is shown in Fig. 2-2. The code consists of single parity-check codes that operate on rows and

columns of a square matrix that contains the information bits. RPCCs with large

block sizes are very high-rate systematic codes that can be decoded by a very simple

iterative algorithm [11, 12, 13]. Note that for a packet of N² bits, the number of parity bits is 2N (N bits each in the horizontal and vertical directions). Thus, the rate of the RPCC is N²/(N² + 2N) = N/(N + 2). Clearly, as N becomes large, the rate of the RPCC becomes very high. Maximum a posteriori (MAP) decoding

of the RPCC can be approximately performed by an iterative decoding procedure

that treats the RPCC as a parallel concatenation of the parity check codes defined

along the rows and columns of the data bit matrix. For each component code, the









Data bits: 1, 0, 0, 1, 1, 0, 1, 0, 1

Parity-check array (last column: horizontal parity; last row: vertical parity):

    1 0 0 | 1
    1 1 0 | 0
    1 0 1 | 0
    ------
    1 1 1

RPCC-encoded bits: 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 1, 1, 1



Figure 2-2: Example of rectangular parity-check code for packet of nine information
bits.

"soft-in/soft-out" (SISO) decoding module in Hagenauer et al. [11] and Wong et al.

[13] amounts to the following simple procedure (a code sketch is given after the list):

1. Find the two smallest magnitudes among all soft inputs on a row/column.

2. Take hard decisions on the row/column and check the parity.

3. For each data bit (except the one with the minimum-magnitude soft input) on the row/column, the extrinsic information is the minimum magnitude if the parity check agrees with the hard decision of that bit; otherwise the extrinsic information is the negative of the minimum magnitude. For the data bit with the minimum-magnitude soft input, the second smallest magnitude is employed instead.

4. Pass the extrinsic information as a priori information to the component code in the other direction.
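A minimal sketch of this row/column update (in Python; the function name and array conventions are illustrative, and the code follows the standard single-parity-check soft-output rule rather than being copied from [11] or [13]) is:

    import numpy as np

    def spc_extrinsic(llr):
        """Soft-in/soft-out update for one row/column of the RPCC.

        llr : array of soft inputs (LLRs) for the data bits and the parity bit
              on this row/column.  Returns the extrinsic LLR of every position:
              magnitude = smallest magnitude among the *other* positions,
              sign chosen so that the overall parity check is satisfied.
        """
        llr = np.asarray(llr, dtype=float)
        hard = np.where(llr >= 0, 1.0, -1.0)       # hard decisions (+1/-1)
        parity_ok = np.prod(hard) > 0              # does the row/column check pass?

        order = np.argsort(np.abs(llr))
        min1, min2 = np.abs(llr[order[0]]), np.abs(llr[order[1]])

        ext = np.empty_like(llr)
        for j in range(llr.size):
            mag = min2 if j == order[0] else min1  # exclude the bit itself
            ext[j] = hard[j] * mag if parity_ok else -hard[j] * mag
        return ext

Applying this update to all rows, then all columns, and passing each direction's extrinsic values to the other as a priori information reproduces the iterative procedure described above.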

The soft inputs for the first iteration are simply scaled channel observations [13] for

both the AWGN and Rayleigh fading models. At the end, the extrinsic information provided by the two component codes is added to the initial channel observation to give the soft output, based on which the bit decision is made. It is clear that









this decoding process is very simple. In [12], RPCCs and their extensions to higher

dimensions are shown to be able to achieve performance near the capacity limit for

transmission over AWGN and bursty channels for very high code rates. In [13], it

was shown that RPCCs can be used to obtain a significant diversity gain on fading

channels with virtually no penalty in information rate. For details of the RPCC

encoding and decoding algorithms, see Appendix A.

As mentioned before, the key to performing distributed decoding that requires

only a small amount of information to be passed between the two nodes is to identify

the set of bits that are likely to be in error. Since the MAP decoder outputs the

a posteriori log-likelihood ratios of the data bits, the soft outputs of the iterative

decoder above are good reliability measures for the data bits. For both the AWGN

and Rayleigh fading models, a data bit with a small soft output magnitude is more

likely to be in error. To illustrate this, we consider a packet that has errors and plot

the conditional probability of the event that an error (given that it occurs) occurs in

the bits whose soft output magnitudes rank in the lowest percentiles. We plot this

probability in Fig. 2-3 for the 32² RPCC. The conditional probability is estimated from Monte Carlo simulations after 5 decoding iterations. We can see from Fig. 2-3 that, at a high enough Eb/N0 (essentially at the Eb/N0 where the iterative decoder converges), most of the errors occur in the bits whose soft output magnitudes rank in the lowest x%.



Based on this observation, we can employ the following simple strategy to gain

diversity advantage while requiring a small amount of information exchange between

the receiving nodes. At first, each node decodes the data bits from the signal that

it receives. After the decoding, each node ranks the data bits according to their

soft output magnitudes. Then each node requests additional information from the

other node for those bits whose soft output magnitudes rank in the lowest x%. Upon

receiving a request, a node sends the soft outputs of the requested bits generated













Figure 2-3: Conditional probability of the event that an error occurs in the bits whose
soft output magnitudes rank in the lowest percentiles.


in its own decoding process. Each node will use the soft outputs obtained from the

other node as a priori information to continue the iterative decoding process. The

whole process then repeats with additional exchange of soft outputs between the two

nodes.
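One round of this exchange can be sketched as follows (illustrative Python; the vector and function names are assumptions, and the transport of the requested values between the nodes is abstracted away).

    import numpy as np

    def least_reliable_indices(soft_out, fraction):
        """Indices of bits whose soft-output magnitudes rank in the lowest fraction."""
        k = int(np.ceil(fraction * soft_out.size))
        return np.argsort(np.abs(soft_out))[:k]

    def exchange_round(soft_a, soft_b, fraction=0.05):
        """Both nodes request soft outputs for their least reliable bits and use
        the values received from the other node as a priori information."""
        req_a = least_reliable_indices(soft_a, fraction)   # node A asks node B
        req_b = least_reliable_indices(soft_b, fraction)   # node B asks node A

        apriori_a = np.zeros_like(soft_a)
        apriori_b = np.zeros_like(soft_b)
        apriori_a[req_a] = soft_b[req_a]   # B replies with its own soft outputs
        apriori_b[req_b] = soft_a[req_b]
        return apriori_a, apriori_b        # fed into each node's next decoding pass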

To illustrate the advantage of this approach, consider a sample system in which

a node requests additional information for 5% of the bits with the smallest soft output magnitudes at each iteration. A total of 3 iterations of information exchange occur between the nodes; i.e., the overall traffic between the nodes is altogether 15% (neglecting the overhead involved in the requesting protocol) of what is required by MRC. In the case of MRC, we assume that each node passes all its channel observations to the other node and maximally combines the channel observations before decoding. Fig. 2-4 shows the bit error rate (BER) performance¹ of the 32²





1 BERs are plotted against Eb/No per receiving node in Figs. 2-4, 2-5, 2-6, and
2-7.











[BER curves: uncoded BPSK, 1 receiver, 2 receivers with MRC, 2 receivers with 15% exchange; Eb/N0 (dB) versus BER]

Figure 2-4: Performance of collaborative decoding for the 32² RPCC with information exchange between two receiving nodes over an AWGN channel.


RPCC over an AWGN channel. We see that the 32² RPCC, which has a code rate of 0.94, provides a coding gain of about 3 dB at a BER of 10^-5.² With MRC, an additional 3 dB "antenna gain" is obtained, as expected. The most interesting observation from Fig. 2-4 is that we can obtain a 2.4 dB gain (out of the maximum possible 3 dB gain) using the soft output exchange and iterative decoding algorithm described before, while exchanging only a total of 15% of all soft outputs.

For the case of Rayleigh fading, the BER curves are shown in Fig. 2-5. The diversity gain provided by MRC is about 8 dB at a BER of 10^-5. More interestingly, in this fading case we can get all of the 8 dB diversity gain that MRC can provide




2 The BERs presented here are averages of the BERs at the 2 nodes. There is
a slight difference between the BERs at the two nodes obtained from simulation.
However, the difference is always small, as expected, because of the symmetry between
the nodes. On the other hand, this observation indicates that none of the nodes are
disadvantaged against each other in the iterative decoding process.











[BER curves: uncoded BPSK, 1 receiver, 2 receivers with MRC, 2 receivers with 15% exchange; Eb/N0 (dB) versus BER]

Figure 2-5: Performance of collaborative decoding for the 32² RPCC with information exchange between two receiving nodes over a Rayleigh fading channel.


at a BER of 10^-5 by using the soft output exchange and iterative decoding algorithm described before, exchanging only a total of 15% of all soft outputs.

2.3 Collaborative Decoding for Convolutional Code

In this section, we consider the design of collaborative decoding when a stan-

dard convolutional code (CC) is employed to encode the data bits in the transmitted packet. We employ a rate-1/2, non-systematic, non-recursive, 4-state CC with generator matrix [1 + D², 1 + D + D²]. We will refer to this CC as CC(5,7), based on the

octal representation of the generator polynomials. It is well-known [14] that this CC

has the largest free distance of 5 among all rate-1/2, 4-state CCs. The MAP decoder

for this CC is the SISO decoder [15] based on the well-known BCJR algorithm [16].

Here we employ the less complex max-log-MAP decoder [15] as an approximation to

the MAP decoder.

We employ the same collaborative decoding strategy described in the previous

section. The only difference in this case is that the channel observations do not









directly correspond to the soft inputs for the data bits, due to the fact that the CC is non-systematic. The soft output of a bit is generated by summing the extrinsic information generated at the current decoding iteration and the cumulative a priori information from previous iterations. After the current decoding iteration, each node ranks the bits according to their soft output magnitudes. Then each node requests additional information from the other node for those bits whose soft output magnitudes rank in the lowest x%. Upon receiving a request, a node sends the soft outputs of the requested bits generated in its own decoding process. Each node will use the soft outputs obtained from the other node as a priori information to continue the iterative decoding process. The whole process then repeats with additional exchange of soft

outputs between the two nodes.

Similar to the previous case, we consider a sample system in which a node re-

quests additional information for 5% of the data bits with the smallest soft output magnitudes at each iteration. A total of 3 iterations of information exchange occur between the nodes; i.e., the overall traffic between the nodes is altogether 7.5% (neglecting the overhead involved in the requesting protocol) of what is required by MRC, in which each node passes all its channel observations to the other node and maximally combines the channel observations before decoding. The packet size is 1024 data bits (2048 coded bits). Fig. 2-6 shows the BER performance of the CC(5,7) over an AWGN channel. Similar to the case of the 32² RPCC, we can obtain a 2.4 dB gain at a BER of 10^-5, out of the maximum possible 3 dB antenna gain, using the soft output exchange and iterative decoding algorithm described before, exchanging only a total of 7.5% of the information required by MRC.

For the case of Rayleigh fading, the BER curves are shown in Fig. 2-7. The diversity gain provided by MRC is about 6 dB at a BER of 10^-5. As seen from Fig. 2-7, we can get 5 dB out of the 6 dB diversity gain that MRC can provide at a BER of 10^-5














[BER curves: uncoded BPSK, 1 receiver, 2 receivers with MRC, 2 receivers with 7.5% exchange; Eb/N0 (dB) versus BER]

Figure 2-6: Performance of collaborative decoding for the CC(5,7) with information exchange between two receiving nodes over an AWGN channel.

[BER curves: uncoded BPSK, 1 receiver, 2 receivers with MRC, 2 receivers with 7.5% exchange; Eb/N0 (dB) versus BER]

Figure 2-7: Performance of collaborative decoding for the CC(5,7) with information exchange between two receiving nodes over a Rayleigh fading channel.









by using the soft output exchange and iterative decoding algorithm described before,

exchanging only a total of 7.5% of the information required by MRC.

2.4 Summary

The results in Sections 2.2 and 2.3 clearly indicate the possibility of getting full,

or close-to-full, diversity advantage by proper collaborative decoding of the signals

received at different nodes with a small amount of information exchange between

the nodes. The crucial points appear to be identifying the bits that need additional

information from other nodes and employing proper iterative decoding techniques to

make the best use of the additional information. We can obtain some very promising

results even with the simple, ad-hoc design for the RPCC and CC presented in

Sections 2.2 and 2.3. This leads us to believe that collaborative decoding can be

a viable technique to improve the performance of wireless communication systems

that have topologies similar to the one depicted in Fig. 2-1.















CHAPTER 3
COLLABORATIVE DECODING FOR CODED MODULATION

The exponentially growing demand for high bit-rate data transmission in wireless systems continuously propels research on using antenna arrays to increase the capacity of wireless communication systems. Meanwhile, the use of error correction coding techniques also helps greatly in exploiting the capacity of wireless communication channels. As is well known, powerful channel coding techniques such as turbo codes and low-density parity-check codes can attain rates approaching the Shannon limit, primarily for AWGN channels with binary modulation. However, it is clear that these error-correction coding schemes reduce the required transmit power at the expense of increased bandwidth or reduced data rate.

Coded modulation, by combining binary error-correcting codes with higher-order modulation, provides an effective method to achieve coding gain without using additional bandwidth, so that high bit-rate communication can be achieved. Hence, the spectrally efficient coded modulation (CM) technique is especially suitable for bandwidth-constrained channels.

The first spectrally-efficient coding breakthrough came when Ungerboeck [17]

introduced a coded-modulation technique to jointly optimize both channel coding

and modulation. Ungerboeck's trellis-coded modulation (TCM), which uses multi-level/phase signal modulation and simple convolutional coding with mapping by set partitioning (SP), can provide considerable coding gain for AWGN channels. This scheme maximizes the minimum free Euclidean distance of a code, which is the dominant factor determining code performance on AWGN channels. However, this scheme usually gives a low diversity order and leads to performance degradation over a Rayleigh fading channel. One general solution to this drawback is to apply a symbol interleaver to the TCM scheme.

It was later recognized by Zehavi [18] that the diversity order can be increased to the minimum number of distinct bits, instead of channel symbols, by using bit-wise interleaving, yielding a better coding gain over a Rayleigh channel. Following Zehavi's idea, Caire et al. [19] proposed bit-interleaved coded modulation (BICM), which increases the diversity order further, to the minimum Hamming distance of the code, and thus leads to a performance improvement over fading channels. However, the random modulation caused by bit interleaving decreases the minimum free Euclidean distance of the code, so BICM was thought to be unsuitable for AWGN channels [19].

However, BICM, developed primarily for fading channels, later turned out to give very good performance for AWGN channels as well. Li and Ritcey [20, 21] showed that by using simple iterative decoding (ID) with BICM (BICM-ID), the degradation of the minimum free Euclidean distance, and hence the performance degradation, can be overcome. With soft-decision feedback, BICM-ID significantly outperforms TCM, and its performance is even comparable with turbo-TCM for AWGN channels. The authors of [20] also concluded that at high SNR, SP mapping clearly outperforms Gray mapping for BICM-ID with soft feedback. Besides, an advantage of BICM is its flexibility in design. In BICM, the encoder is serially connected to the modulator through a single bit-by-bit interleaver. This structure, which treats coding and modulation separately, makes it very convenient to employ codes with different structures and code rates in the scheme. By using powerful codes, such as long parallel or serially concatenated turbo codes, together with an iterative decoder, it is possible for BICM to obtain good performance close to capacity over Gaussian channels [22].

In Chapter 2, we studied the collaborative decoding technique in a two-node distributed array with error correction coding. When BPSK signaling is used in the transmission, the collaborative decoding technique described in Chapter 2 can obtain a diversity gain close to that provided by MRC. In this chapter, we consider employing coded modulation (CM) in the distributed array system to explore the possibility of obtaining spatial diversity with higher spectral efficiency for bandwidth-constrained wireless channels. As in Chapter 2, we still consider the simple case of a two-node distributed array, but we only consider the cases in which rectangular parity-check codes are used. This chapter is mainly based on the work of [23].

The remainder of this chapter is organized as follows. In Section 3.1, we present

the two-node distributed array system and channel model. In Sections 3.2 and 3.3, the BICM iterative demodulation for RPCCs and the design of the distributed decoding are described in detail. Following that, Monte Carlo simulation results for different signal constellations are shown in Section 3.4. Finally, a summary is given in Section 3.5.

3.1 System Model

As in Chapter 2, we consider a distributed array system with two identical receiving nodes. A distant transmitter sends a block of modulated signals to the two receiving nodes, as shown in Fig. 2-1. The two receiving nodes are physically separated far enough apart that the fading at the two nodes is i.i.d. Each individual node receives and decodes its received signal independently. For simplicity, we assume that the two nodes can communicate with each other reliably. We are only interested in the communication link from the distant transmitter to the two nodes.

The transmitter adopts a typical BICM approach [19], as shown in Fig. 3-1. A block of data bits u to be transmitted is encoded with an RPCC encoder of code rate R_c. The coded bit stream c is then fed into a bit-wise random interleaver π, generating the bit stream v = π(c). After that, the bit stream v is modulated onto a signal sequence x over a two-dimensional signal set X of size |X| = M = 2^m by an M-ary modulator with a one-to-one binary mapping μ : {0, 1}^m → X. This signal sequence is then sent through the channel. The overall spectral efficiency of this system is mR_c bits/symbol.











[Block diagram omitted: transmitter (RPCC encoder, interleaver π, modulator, channel) and receiver (SISO demodulator and RPCC decoder iterating through π and π⁻¹, with additional information from the other node entering the demodulator)]

Figure 3-1: System model of BICM-ID with RPCC

Here we use a memoryless fading channel model that includes the AWGN channel as a special case. In this model, the received signals y at the two antenna nodes corresponding to the transmitted signal x ∈ X can be expressed as

    y^(1) = g^(1) x + n^(1),
    y^(2) = g^(2) x + n^(2),

where: i) g^(1) and g^(2) are the channel fading gains. For AWGN channels, g^(1) = g^(2) = 1. For Rayleigh fading channels, g^(1) and g^(2) are independent circular-symmetric complex Gaussian random variables with E[g^(i)] = 0 and E[|g^(i)|²] = 1 for i = 1, 2; ii) n^(1) and n^(2) are independent zero-mean, circular-symmetric complex additive Gaussian noise samples with E[|n^(i)|²] = σ² for i = 1, 2. We normalize the signal energy so that E[|x|²] = 1. Thus, the average signal-to-noise ratio (SNR) is 1/σ². In this channel model, we assume that perfect channel state information (CSI) (g^(1), g^(2)) is available at the receiving nodes, and hence coherent demodulation is performed at each node. With this model, the pdf p(y^(i)|x), for i = 1, 2, with perfect CSI is given by

    p(y^(i)|x) = (1/(πσ²)) exp( −|y^(i) − g^(i) x|² / σ² ).    (3-1)









At each receiver node, we treat the modulation and code as two components

of a concatenated coding system. By employing a maximum a posteriori (MAP)

demodulator, we feed the extrinsic information from the RPCC decoder back to

the demodulator as the a priori information to carry out the demodulation and

decoding in an iterative manner. After some iterations, we exchange information

for a portion of symbols between the two nodes and restart the demodulation and

decoding processes.

3.2 Iterative Demodulation and Decoding for BICM

3.2.1 Iterative Demodulation and Decoding Algorithm

One important component in our bit-interleaved coded modulation system is

the rectangular parity-check code. Another important component in the BICM-ID system is the iterative demodulation module. Based on the idea that performing demodulation and decoding in an iterative manner is key to improving the performance of BICM [20, 21], we employ the receiver model illustrated in Fig. 3-1. Since the encoding and decoding of rectangular parity-check codes are addressed in Section 2.2 and Appendix A, we focus here on the demodulation component.

To simplify the iterative decoding process, we first modify the demodulator to

work in the log-likelihood ratio (LLR) domain. Suppose that each m-bit vector

v = (v_1, v_2, ..., v_m) from the interleaver is mapped at the modulator into one signal x out of the 2^m signals in the set X by the mapping rule μ, i.e., x = μ(v) ∈ X, and that the received signal corresponding to x is y. Let ℓ^i(x) denote the ith (i = 1, 2, ..., m) bit of the label of x. For convenience, we assume that ℓ^i(x) = b takes values in GF(2) represented by the elements {+1, −1}. In our soft demodulator, we will consider the MAP rather than the maximum-likelihood (ML) bit metric. It is easy to see that the MAP bit metric









of v_i = b ∈ {+1, −1} is given by

    Λ(v_i = b, y) = log P(v_i = b, y)
                  = log Σ_{z ∈ X_b^i} p(y|z) P(z | v_i = b) P(v_i = b),    (3-2)

where p(y|z) is given by (3-1) according to our channel model. We assume a perfect bit interleaver π such that {v_1, v_2, ..., v_m} are independent of each other. With this assumption, we have

    P(z) = P( z = μ(v_1, v_2, ..., v_m) ) = Π_{j=1}^{m} P( v_j = ℓ^j(z) ).    (3-3)

Hence, the MAP bit metric can be simplified to

    Λ(v_i = b, y) ≈ max_{z ∈ X_b^i} { log p(y|z) + Σ_{j≠i} log P( v_j = ℓ^j(z) ) + log P(v_i = b) + C },    (3-4)

where X_b^i denotes the subset of all signals z ∈ X with ℓ^i(z) = b, and C is a constant. Above, the approximation log(Σ_i a_i) ≈ max_i (log a_i) is used. For convenience we choose the constant as

    C = −(1/2) Σ_{j=1}^{m} [ log P(v_j = +1) + log P(v_j = −1) ].    (3-5)

Then the metric becomes

    Λ(v_i = b, y) ≈ max_{z ∈ X_b^i} { log p(y|z) + (1/2) Σ_{j=1}^{m} ℓ^j(z) L(v_j) },    (3-6)









where L(v_j) = log( P(v_j = +1) / P(v_j = −1) ) is the a priori LLR of bit v_j. Thus the soft value of bit v_i in LLR form is computed by

    L(v_i | y) = Λ(v_i = +1, y) − Λ(v_i = −1, y)
               ≈ L(v_i) + max_{z ∈ X_{+1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ^j(z) L(v_j) }
                        − max_{z ∈ X_{−1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ^j(z) L(v_j) }.    (3-7)

Subtracting the a priori LLR L(v_i) from (3-7), we obtain the extrinsic information of v_i:

    L_e(v_i) ≈ max_{z ∈ X_{+1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ^j(z) L(v_j) }
             − max_{z ∈ X_{−1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ^j(z) L(v_j) }.    (3-8)

We treat this extrinsic information as the output of the soft demodulator. From (3-8), we can see that in order to obtain the extrinsic LLR of one bit of a signal, we need the a priori LLRs of the other m − 1 bits and the channel observation of the signal as input.
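A direct max-log implementation of (3-8) for a single received symbol might look like the sketch below (Python; the function and argument names are illustrative, and the complex-Gaussian likelihood follows (3-1)).

    import numpy as np

    def demod_extrinsic_llrs(y, g, constellation, labels, prior_llr, noise_var):
        """Max-log MAP soft demodulation of one received symbol.

        y             : complex channel observation
        g             : complex fading gain (1.0 for the AWGN channel)
        constellation : length-M array of complex constellation points
        labels        : (M, m) array of bit labels of each point, entries in {+1, -1}
        prior_llr     : length-m array of a priori LLRs L(v_j)
        noise_var     : sigma^2
        Returns the length-m array of extrinsic LLRs L_e(v_i) as in (3-8).
        """
        # log p(y|z) up to a constant, plus the a priori term (1/2) sum_j l^j(z) L(v_j)
        log_like = -np.abs(y - g * constellation) ** 2 / noise_var
        metric = log_like + 0.5 * labels @ prior_llr

        m = labels.shape[1]
        ext = np.empty(m)
        for i in range(m):
            # drop this bit's own a priori term so only the other m-1 bits contribute
            own = 0.5 * labels[:, i] * prior_llr[i]
            ext[i] = (np.max((metric - own)[labels[:, i] > 0])
                      - np.max((metric - own)[labels[:, i] < 0]))
        return ext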

With the modification above, the demodulation and decoding procedure can conveniently be performed in an iterative way. In Fig. 3-1 we use L^(n)(·) to denote an LLR at the nth iteration. First, we initialize all the a priori LLRs L^(0)(v) and L^(0)(u) to zero for n = 0. At the nth iteration, when the channel observation y of the transmitted signal sequence is received, we demodulate it using (3-8) to produce L_e^(n)(v). After the deinterleaver π⁻¹, L_e^(n)(c) = π⁻¹(L_e^(n)(v)) is fed into the RPCC decoder for decoding. Since the RPCC decoding is an SISO iterative algorithm, we use the extrinsic information L_e^(n−1)(u), produced in the (n−1)th iteration, as the a priori information L^(n)(u) of the decoder in the nth iteration. The extrinsic information L_e^(n)(c) generated by the RPCC decoder is then passed through the interleaver π and fed back as the a priori information L^(n+1)(v) for the soft demodulator. After a number of iterations, the estimate of the data bits u is obtained from the hard decision on L^(n)(u).
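Putting the demodulator and decoder together, one node's receive loop can be sketched as follows (illustrative Python; it reuses the demod_extrinsic_llrs sketch above, and rpcc_siso_decode is a hypothetical SISO RPCC decoder whose interface is an assumption; the handling of the data-bit a priori information is simplified).

    import numpy as np

    def bicm_id_receive(ys, gains, constellation, labels, interleaver,
                        rpcc_siso_decode, noise_var, num_iters=10):
        """One node's iterative demodulation/decoding loop (simplified sketch).

        ys, gains        : per-symbol channel observations and fading gains
        interleaver      : index array p such that v = c[p]
        rpcc_siso_decode : assumed SISO decoder returning (extrinsic LLRs of the
                           coded bits, a posteriori LLRs of the data bits)
        """
        m = labels.shape[1]
        apriori_v = np.zeros(len(ys) * m)      # L^(n)(v), a priori for the demodulator
        deinter = np.argsort(interleaver)      # inverse permutation

        for _ in range(num_iters):
            # demodulate every symbol with the current a priori LLRs, eq. (3-8)
            ext_v = np.concatenate([
                demod_extrinsic_llrs(y, g, constellation, labels,
                                     apriori_v[k * m:(k + 1) * m], noise_var)
                for k, (y, g) in enumerate(zip(ys, gains))])
            ext_c = ext_v[deinter]             # deinterleave to coded-bit order
            dec_ext_c, app_u = rpcc_siso_decode(ext_c)
            apriori_v = dec_ext_c[interleaver] # feed back through the interleaver
        return np.where(app_u >= 0, 1, 0)      # hard decisions on the data bits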

3.2.2 Effect of Mapping in BICM-ID

The mapping μ has a significant effect on the performance of BICM-ID. For BICM, Gray mapping outperforms set partitioning (SP) mapping [19]. However, when combined with iterative demodulation, SP mapping outperforms Gray mapping at high SNR [20, 21]. This can be seen from (3-8): due to the property of Gray mapping that the label of a symbol differs from those of its nearest neighbors in only one bit, the effect of the a priori LLRs can be weakened significantly. This is not the case for SP mapping. Thus the MAP demodulator can make more effective use of the a priori information for SP mapping than for Gray mapping.

To illustrate the effect of the constellation mapping, consider that the signal x = μ(v_1, v_2, ..., v_m) ∈ X is transmitted and the channel observation is y. The MAP decision for bit v_i made by the demodulator at the nth iteration is

    α_i^(n) = arg max_{z ∈ X_{+1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ^j(z) L^(n)(v_j) },
    β_i^(n) = arg max_{z ∈ X_{−1}^i} { log p(y|z) + (1/2) Σ_{j≠i} ℓ^j(z) L^(n)(v_j) },    (3-9)

where arg max_{z ∈ X_b^i}{·} means finding the signal in the constellation subset X_b^i that maximizes the expression in the braces. From (3-9), we have ℓ^i(α_i^(n)) = −ℓ^i(β_i^(n)). With this notation, the extrinsic LLR L_e^(n)(v_i) in (3-8) can be written as

    L_e^(n)(v_i) ≈ log [ p(y|α_i^(n)) / p(y|β_i^(n)) ] + (1/2) Σ_{j≠i} [ ℓ^j(α_i^(n)) − ℓ^j(β_i^(n)) ] L^(n)(v_j).    (3-10)









For n = 0, there is no a priori information available, i.e., L^(0)(v) = 0. Thus, the demodulator gives the ML decision result at the 0th iteration,

    α_i^(0) = arg max_{z ∈ X_{+1}^i} { log p(y|z) },
    β_i^(0) = arg max_{z ∈ X_{−1}^i} { log p(y|z) };    (3-11)

and

    L_e^(0)(v_i) = max_{z ∈ X_{+1}^i} { log p(y|z) } − max_{z ∈ X_{−1}^i} { log p(y|z) }.    (3-12)

Usually, the pairwise error probability P(x → x̂) (i.e., the probability that the demodulator chooses x̂ when x is transmitted) in (3-12) is dominated by the minimum free Euclidean distance of the constellation. Let us only consider the case in which the pair (α_i^(0), β_i^(0)) satisfying (3-11) is a signal pair at the minimum Euclidean distance in the constellation X. Otherwise, the large Euclidean distance between α_i^(0) and β_i^(0) will make the pairwise error probability so small that it can be neglected.

We assume that the demodulator makes a single error, at bit v_i, for the symbol x at the 0th iteration. If a Gray mapping is used in this case, then by the property of the Gray code that a label differs from those of its nearest neighbors in only one bit, we have

    ℓ^j(α_i^(0)) = ℓ^j(β_i^(0))   for all j ≠ i.    (3-13)

Suppose that in the following iteration the a priori LLRs for the other bits are reliable, i.e.,

    ℓ^j(α_i^(0)) L^(1)(v_j) = ℓ^j(β_i^(0)) L^(1)(v_j) > 0   for all j ≠ i.    (3-14)

For any constellation point z other than α_i^(0) or β_i^(0), there exists at least one j ≠ i such that ℓ^j(z) L^(1)(v_j) < 0. So the demodulator will make the choice (α_i^(1), β_i^(1)) = (α_i^(0), β_i^(0)) in (3-9). Thus, using (3-10) and (3-13), the demodulator still gives the wrong ML decision result on the bit v_i,

    L_e^(1)(v_i) ≈ max_{z ∈ X_{+1}^i} { log p(y|z) } − max_{z ∈ X_{−1}^i} { log p(y|z) }.    (3-15)









With the above argument, the error cannot be corrected no matter how many times the demodulator iterates in this case. Thus, with Gray mapping, the iterative demodulation cannot improve the performance of BICM.

However, for an SP mapping, Eq. (3-13) does not hold. Consequently, at high SNR, with large extrinsic LLRs from the previous iteration, the second term on the right-hand side of (3-10) can help correct the pairwise error. This is the reason why SP mapping outperforms Gray mapping for BICM-ID at high SNR. The

performance comparison between Gray and SP mappings in Section 3.4 will verify

this statement.

3.3 Collaborative Decoding for BICM-ID with Rectangular
Parity-Check Code

The presented BICM-ID scheme is readily applicable to a distributed array. As we pointed out in Chapter 2, a decoded data bit with a small soft output magnitude from the RPCC decoder is more likely to be in error. However, if the bit-based strategy in [8] were used here to gain diversity from the other receiving node, we would lose the advantage over MRC in terms of saved information exchange traffic as the modulation order M increases. Hence, we develop a symbol-based strategy for BICM-ID to reduce the information exchange traffic. First, we define the symbol reliability measure at the output of the decoder as

    L(x̂) = log [ P(x̂) / (1 − P(x̂)) ] = log [ P(x̂) / Σ_{z ≠ x̂} P(z) ],    (3-16)

where x̂ = μ(v̂_1, ..., v̂_m) ∈ X is the estimate of the transmitted signal x. For convenience, we define a symbol metric for each constellation point z ∈ X as

    Λ(z) = (1/2) Σ_{j=1}^{m} ℓ^j(z) L(v_j),    (3-17)

where L(v_j) is the soft output of the coded bit v_j. This symbol metric reflects the probability P(x = z) given the LLRs L(v_j) for j = 1, 2, ..., m. In fact, x̂ should be









the constellation point that has the largest reliability, i.e., x̂ = arg max_{z ∈ X} { Λ(z) }. Similar to (3-3)-(3-5), (3-16) can be simplified to

    L(x̂) ≈ Λ(x̂) − max_{z ≠ x̂, z ∈ X} { Λ(z) } = min_{j=1,...,m} |L(v_j)|.    (3-18)

Since the LLR magnitude of a bit can be used as a measure of its reliability, (3-18) indicates that the reliability of a decoded symbol is determined by the soft value of its least reliable bit, which is basically in agreement with the bit-based idea in [8].

With this definition, the collaborative decoding procedure works as follows. After every I (I > 1) iterations of demodulation and decoding, each node computes the symbol reliability L(x̂) and ranks the symbols according to their reliability. Then each node requests additional information from the other node for the symbols x whose reliability L(x̂) ranks in the lowest α%. We denote the additional information for x as L_a(x). Suppose that the estimate corresponding to symbol x at the other node is x′ = μ(v′_1, ..., v′_m), which may be different from x̂ because the channels are independent. Upon receiving the request, a node sends: i) the reliability of the requested symbols generated in its own decoding process as the additional information, i.e., L_a(x) = L(x′); and ii) the hard decision of x′, ℓ^j(x′) for j = 1, 2, ..., m, which is also the hard decision of (v′_1, ..., v′_m).

Herein, we adopt the following strategy: a node does not request additional information for a symbol if a request has already been made for it in a previous exchange. In this case, the node requests information for the next symbol in the ranking order, to make sure that requests for a total of N·α% symbols are made in the current exchange, where N is the symbol block size. The advantage of this strategy is that, over a number of exchanges, the additional information can cover more symbols.

After the exchange, as shown in Fig. 3-1, each node uses L_a(x) and the hard decisions ℓ^j(x′) (j = 1, 2, ..., m) obtained from the other node to reconstruct an additional symbol metric Λ_a(z), similar to (3-17), for each possible constellation point z ∈ X. Since ℓ^j(x′) is the hard decision of bit v′_j, we have

    L(v′_j) = ℓ^j(x′) |L(v′_j)|.    (3-19)

From (3-18) we can see that |L(v′_j)| ≥ L_a(x). This means each bit in x′ has at least a reliability of L_a(x). Now we replace |L(v′_j)| with L_a(x) for j = 1, 2, ..., m in (3-19), which is equivalent to setting the reliability of all its bits equal to the reliability of the symbol. Thus, we can construct the additional symbol metric as

    Λ_a(z) = (1/2) Σ_{j=1}^{m} ℓ^j(z) ℓ^j(x′) L_a(x).    (3-20)

This additional symbol metric is then used as a priori information for demodulation, and (3-6) becomes

    Λ(v_i = b, y) ≈ max_{z ∈ X_b^i} { log p(y|z) + (1/2) Σ_{j=1}^{m} ℓ^j(z) L(v_j) + δ Λ_a(z) },    (3-21)

where δ ≤ 1 is a scaling factor used to reduce the effect of error propagation. Usually, δ can be set to 0.6–0.7.

The whole process then repeats over the following I iterations, with an additional exchange of symbol reliabilities and hard decisions between the two nodes. Note that in this strategy we only need to exchange one real number L(x′) and m hard-decision bits ℓ^j(x′) (j = 1, 2, ..., m) for each symbol. For MRC, in contrast, one needs to exchange a complex number y (the channel observation) and a real number |g| (the magnitude of the fading gain; for the AWGN channel there is no need to exchange it, since |g| = 1) for each symbol. This means we require less than 2/3 of (for the Rayleigh fading channel) or the same as (for the AWGN channel) the exchange traffic of MRC for each symbol, and meanwhile we only need to exchange information for a portion of the symbols. Hence, with this symbol-based strategy, we can reduce the required information exchange traffic significantly.
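Under the reconstructions of (3-18) and (3-20) above, one round of the symbol-based exchange at a node can be sketched as follows (illustrative Python; the names and the bookkeeping of previously requested symbols are assumptions).

    import numpy as np

    def symbol_reliabilities(bit_llrs):
        """Per-symbol reliability L(x_hat) = min_j |L(v_j)|, as in (3-18).

        bit_llrs : (N, m) array of soft outputs for the m label bits of each symbol.
        """
        return np.min(np.abs(bit_llrs), axis=1)

    def select_requests(bit_llrs, alpha, already_requested):
        """Indices of the alpha-fraction least reliable symbols not yet requested."""
        rel = symbol_reliabilities(bit_llrs)
        order = np.argsort(rel)                          # least reliable first
        fresh = [k for k in order if k not in already_requested]
        n_req = int(np.ceil(alpha * rel.size))
        return fresh[:n_req]

    def additional_symbol_metric(labels, hard_bits, La):
        """Additional metric Lambda_a(z) of (3-20) for one exchanged symbol.

        labels    : (M, m) constellation bit labels, entries in {+1, -1}
        hard_bits : length-m hard decisions (+1/-1) received from the other node
        La        : the symbol reliability L_a(x) received from the other node
        """
        return 0.5 * La * (labels @ hard_bits)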









3.4 Performance Evaluation

In this section, we examine the performance of the proposed distributed array

scheme by Monte Carlo simulations. In the simulations, we set the packet size to 1024

data bits, i.e., the data bits are arranged into a 32 × 32 matrix for the RPCC encoding. With this block size, the RPCC gives a code rate of 0.94. In the decoding procedure, a node requests additional information for 15% of the symbols with the smallest reliability at the beginning of every 10 iterations after the first 10. For instance, 3 exchanges cause an overall traffic approximately equal to 30% (for the Rayleigh fading channel) or 45% (for the AWGN channel) of what is required by MRC. In the case of MRC, we assume that each node passes all its channel observations and fading gains to the other node and maximally combines the channel observations before demodulation. Simulations show that the bit error rates (BER) at the two nodes are almost the same, so we take their average as the performance of the distributed array system.

Fig. 3-2 shows the BER performance of BICM-ID with the 32² RPCC in the distributed array over AWGN channels when 8PSK with Gray and SP mapping is used. In the figure, Eb is the received energy per bit per antenna. With MRC, about 3 dB of spatial diversity gain can be achieved for both mappings. With our distributed array approach, we obtain a 2.4 dB and a 1.4 dB gain for Gray and SP mapping, respectively, at a traffic cost of 45% (i.e., 3 exchanges in total) of MRC. Fig. 3-3 shows the BER curves for Rayleigh fading channels. The spatial diversity gain provided by MRC is about 8.5 dB for both Gray and SP mapping. By exchanging a total of 20% (i.e., 2 exchanges in total) of the information amount required for MRC, our distributed BICM-ID system obtains an 8.3 dB and a 7.3 dB gain at a BER of 10^-5 for Gray and SP mapping, respectively.














[BER curves: single receiving node, 2 nodes with 45% traffic, 2 nodes with MRC; Gray and SP mappings]

Figure 3-2: BER for BICM-ID with the 32² RPCC and 8PSK in the two-node distributed array over an AWGN channel.

[BER curves: single receiving node, 2 nodes with 20% traffic, 2 nodes with MRC; Gray and SP mappings]

Figure 3-3: BER for BICM-ID with the 32² RPCC and 8PSK in the two-node distributed array over a Rayleigh fading channel.







[Plot residue omitted: markers for 8PSK, 32QAM, and 64QAM constellations; 2 receivers with MRC; unfilled symbols denote Gray mapping, solid symbols SP mapping]

Figure 3-4: Average SNR at 10^-5 BER versus spectral efficiency for BICM-ID with the 32² RPCC in a two-node distributed array over AWGN channels.


In Figs. 3-4 and 3-5, we show the average SNR (Es/N0) at a BER of 10^-5 versus spectral efficiency for the two-node distributed array system for different constellations with Gray mapping1 and SP mapping over AWGN channels and Rayleigh fading channels, respectively. The average SNR can be computed approximately by SNR = m R_c E_b/N_0, where m is the number of bits carried per symbol and R_c is the code rate of the RPCC.
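As a small worked example of this conversion, the helper below maps Eb/N0 to the average symbol SNR used on the horizontal axes of Figs. 3-4 and 3-5; the 8PSK constellation (m = 3) and the rate-0.94 RPCC are taken from this section.

    import math

    def es_no_db(eb_no_db, bits_per_symbol, code_rate):
        """Average SNR (Es/N0) in dB from Eb/N0 in dB: Es/N0 = m * R_c * Eb/N0."""
        return eb_no_db + 10.0 * math.log10(bits_per_symbol * code_rate)

    # Example: 8PSK (m = 3) with the rate-0.94 RPCC used in this chapter.
    print(es_no_db(7.0, 3, 0.94))   # about 11.5 dB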

We can see that for both AWGN and Rayleigh fading channels the proposed distributed BICM-ID approach achieves almost all of the diversity gain provided by MRC, while exchanging only 20% (2 exchanges in total) and 45% (3 exchanges in total) of the amount of information required by MRC between the two nodes for the Rayleigh fading channel and the AWGN channel, respectively. This especially shows the advantage of our approach for fading channels.





1 For 32QAM, quasi-Gray mapping is used because Gray mapping is impossible in
this case.










Figure 3-5: Average SNR at 10^-5 BER versus spectral efficiency for BICM-ID with 32x32 RPCC in a two-node distributed array over Rayleigh fading channels (single receiver, 2 receivers with 20% traffic, 2 receivers with MRC; unfilled symbols: Gray mapping, solid symbols: SP mapping).



3.5 Summary

In this chapter, we have investigated the use of collaborative decoding for a two-node distributed array in which high-order constellations with iterative demodulation and RPCCs are used. The scheme of bit-interleaved coded modulation is adopted, with an iterative demodulation and decoding approach at the receiver. We develop a SISO demodulation algorithm suitable for iteration, and analyze the effect of different choices of bit-to-symbol mapping. In order to obtain efficient collaborative decoding schemes, we propose a symbol-based information exchange strategy for the BICM-ID system, which differs from the bit-based strategy used for the BPSK systems in Chapter 2. Monte Carlo simulation results show that, by using our collaborative decoding technique, a significant diversity gain can be obtained with a relatively small amount of information exchange between the independent and physically separated receiving nodes. This results in high spectral efficiencies over both AWGN and Rayleigh fading channels.















CHAPTER 4
COLLABORATIVE DECODING FOR DISTRIBUTED ARRAY WITH TWO OR
MORE NODES

In the previous chapters, different collaborative decoding techniques are studied

for two-node distributed arrays. It is shown that, with properly designed information

exchange schemes, collaborative decoding can achieve close-to-full receive diversity for

binary coded and higher order coded modulation systems. Since a distributed array is

composed of a cluster of nodes in a wireless network, the number of nodes in the array

is usually greater than two. This fact highlights the scalability requirement for the

proposed collaborative decoding techniques. Thus, it is necessary to consider general

distributed arrays with more than two nodes. To employ collaborative decoding to

achieve spatial diversity efficiently in this scenario, the information exchange scheme

should be a major concern. It will be shown that with properly designed information

exchange schemes, collaborative decoding, compared with MRC, still exhibits the

advantage of achieving spatial diversity with significant savings in the information

exchange cost.

In this chapter, we study collaborative decoding schemes for distributed arrays

with more than two nodes. The discussion will be restricted to systems using BPSK

modulation and convolutional codes. We first consider the system model of dis-

tributed arrays with more than two nodes in Section 4.1 and extend the two-node

collaborative decoding technique developed previously to this case. In Section 4.2

we study the statistical characteristics of the extrinsic information generated by the

MAP decoder for the convolutional code. Then a Gaussian approximation for the

extrinsic information is introduced. Based on this approximation, least-reliable-bit and most-reliable-bit information exchange schemes are described. Then, we present














Figure 4-1: Distributed array with multiple nodes (a remote source transmits over the forward channel to a cluster of receiving nodes forming the distributed array).


and compare the performance of the two information exchange schemes with different

parameter settings in Section 4.3. Finally, we draw a summary in Section 4.4. This

chapter is primarily based on the work in [24] and [25].

4.1 System Model for Distributed Array with Two or More Nodes

The general model of a distributed array with more than two nodes is shown in Fig. 4-1. A remote source node transmits messages through a single-input/multiple-output forward channel to the destination, which contains M (M >= 2) physically separated receiving nodes, denoted by the node set M = {1, 2, ..., M}. The source encodes and transmits the message with a convolutional code and BPSK modulation. Analogous to the methods discussed in Chapters 2 and 3, with proper modifications, the collaborative decoding techniques can be extended to the case of different codes and high-order modulations. Thus, without loss of generality, we will restrict our study here to convolutional codes and BPSK modulation only. In order to be able to apply iterative decoding, each node in M uses an approximated version of MAP decoding, known as the max-log-MAP decoding algorithm [11], to process the received symbols. All nodes can perform the demodulation and decoding process individually.

All nodes can perform the demodulation and decoding process individually.

We use a memoryless independent fading channel model, which includes the additive white Gaussian noise (AWGN) channel as a special case, to describe the transmission environment between the source and the receiving nodes. The received signal y_{k,i} at the kth receiving node corresponding to the transmitted BPSK signal x_i (i.e., x_i \in \{+1, -1\}) at time instant i can be expressed as

    y_{k,i} = g_{k,i} x_i + n_{k,i}, \quad k = 1, 2, \ldots, M,        (4-1)

where the n_{k,i}, for k = 1, 2, ..., M and all i, are i.i.d. zero-mean additive Gaussian random variables with variance E[|n_{k,i}|^2] = \sigma_n^2, and g_{k,i} is the channel fading gain. For AWGN channels g_{k,i} = 1, and for Rayleigh fading channels the g_{k,i}, for k = 1, 2, ..., M and all i, are i.i.d. Rayleigh random variables with pdf

    p(g_{k,i}) = 2 g_{k,i} \exp(-g_{k,i}^2), \quad g_{k,i} \ge 0.        (4-2)

We normalize the signal energy so that E[|x_i|^2] = 1. Thus, the average SNR is 1/\sigma_n^2. In this channel model, we assume that perfect channel state information (CSI) is available at the kth receiving node, and hence coherent detection is performed at each node. With this model, the pdf p(y_{k,i} | x_i, g_{k,i}), for k = 1, 2, ..., M and all i, with perfect CSI is given by

    p(y_{k,i} | x_i, g_{k,i}) = \frac{1}{\sqrt{2\pi\sigma_n^2}} \exp\left( -\frac{|y_{k,i} - g_{k,i} x_i|^2}{2\sigma_n^2} \right).        (4-3)
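A minimal simulation sketch of the observation model (4-1) is given below, assuming a real-valued model for BPSK, the normalized Rayleigh fading of (4-2) (E[g^2] = 1), and noise standard deviation sigma_n; the function name and interface are illustrative.

    import numpy as np

    def received_signals(x, M, sigma_n, rayleigh=True, rng=np.random.default_rng(0)):
        """Generate y_{k,i} = g_{k,i} x_i + n_{k,i} for k = 1..M, as in (4-1).

        x        : array of BPSK symbols in {+1, -1}
        sigma_n  : noise standard deviation (average SNR = 1/sigma_n^2)
        rayleigh : if False, g_{k,i} = 1 (AWGN channel)
        """
        n = rng.normal(0.0, sigma_n, size=(M, x.size))
        if rayleigh:
            # scale sqrt(0.5) gives E[g^2] = 1, so the average SNR stays 1/sigma_n^2
            g = rng.rayleigh(scale=np.sqrt(0.5), size=(M, x.size))
        else:
            g = np.ones((M, x.size))
        return g * x + n, g   # observations and (perfect) CSI at each node

    # Example: two receiving nodes at 0 dB average SNR.
    x = 1.0 - 2.0 * np.random.default_rng(1).integers(0, 2, size=8)
    y, g = received_signals(x, M=2, sigma_n=1.0)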
At the receiving end, the cluster of nodes form a local network such that they

can communicate with one another over an error-free broadcast channel. The broadcast channel is one of the simplest channel models for wireless networks. By using this model, simple wireless LAN protocols such as token ring [26] can be adopted to carry out the communication among the nodes, so that complicated channel allocation or medium access control protocols can be avoided. The detailed network

protocol for collaborative decoding with distributed array is out of the scope of this

research work. Here, we simply assume that the error-free communication among

the nodes through the broadcast channel has been guaranteed by a certain network

protocol. The assumption of error-free communication is also reasonable because

the cluster nodes are in close proximity compared with the distance between the









source and the array. Hence the SNR of the broadcast channel among the receiving

nodes is significantly higher than that of the forward channel from the source to

the receiving nodes. Even if the broadcast channel is noisy in realistic situations,

simple error detection or correction coding such as cyclic redundancy check codes

[14] can be employ, -1 to provide reliable transmission effectively with introducing

only a minimum redundancy. In this case, the slight additional process complexity

and redundancy due to the coding protection can be ignored. The performance of

the forward channel is the main concern here.

4.2 Collaborative Decoding and Information Exchange Schemes

To illustrate collaborative decoding for distributed array with more than two

nodes, we consider the turbo decoder with more than two decoder components. It is

clear that in the turbo decoding procedure, a decoder component should use the sum of the extrinsic information generated by all other decoding components, except itself, in the previous iteration as its a priori information for the current iteration. According to this principle, collaborative decoding proceeds as follows. Each node collects the soft-output (including the extrinsic information and received signal) generated by all other nodes in the previous iteration, and then uses the sum of the collected information, called the additional information, as its a priori information for the current decoding iteration. To reduce the information exchange amount, the nodes only exchange the soft-output for a portion of the information bits in each iteration.

We note that the soft-output is used in collaborative decoding while the extrinsic information is used in turbo decoding. The reason for using the soft-output rather than the extrinsic information is as follows. In turbo decoding the received signal for

the data bits is the same for all code components. Only the extrinsic information part

at the output of each decoding component contains new information. However, for

collaborative decoding the received signals at different nodes suffer from independent

fading and noise. Thus, both the channel observation and extrinsic information parts









at the decoding output of a node contain new information for the other nodes. To exploit

as much diversity as possible, the soft-output, rather than the extrinsic information,

should be exchanged in collaborative decoding. Note that for non-systematic con-

volutional codes, since only the coded bits are transmitted, no channel observation

is available for data bits at the receiver. Hence, the soft-output and the extrinsic

information of the decoder are the same for non-systematic convolutional codes when

there is no a priori information for the decoding process.

4.2.1 Information Exchange with Memory

Generally, the output (extrinsic information or soft-output) of a SISO decoder is significantly correlated with its input (a priori information and channel observation), and the outputs for adjacent bits are correlated with each other. In iterative decoding, due to the exchange of extrinsic information among all the decoding components, the a priori information of a decoder collected from other decoding components becomes more and more correlated with its own output in the previous iterations. This means that the decoder can obtain less and less new information from the other decoders, which can severely limit the performance of iterative decoding. In turbo codes, in order to solve this problem, bit interleavers are applied to each code component. The interleavers randomly permute the bits so that the correlation among adjacent bits is broken. Hence, the correlation between the a priori information and the previous output of a decoder can be reduced.

Due to the iterative soft-output exchange process, a correlation problem similar to the one mentioned above can arise in collaborative decoding. Unfortunately, because a single message is broadcast from the source to the distributed array, the code structure and bit-sequence order at all nodes must be the same. This means that the interleaving technique of turbo coding cannot be applied to attack the correlation problem in this case. On the other hand, another difference between turbo decoding and collaborative decoding is that the soft-output, rather than the extrinsic information, is exchanged in collaborative decoding. Due to these two facts, the exchange of information for the same bit in successive iterations will cause correlation.

To illustrate the situation, suppose that a data bit x_i in a packet is transmitted from the remote source. In the jth iteration, for simplicity we assume that only one node k \in M broadcasts the soft-output of x_i, denoted as \delta_{k,i}^{(j)}, to the other nodes in M. Then in the (j+1)th iteration, the other M-1 nodes will use \delta_{k,i}^{(j)} as a priori information to perform decoding. For convenience, we denote the set of the other M-1 nodes as M_k = {m : m \in M, m \neq k}. According to MAP decoding, the soft-output for bit x_i at node m \in M_k can be expressed as

    \delta_{m,i}^{(j+1)} = \delta_{k,i}^{(j)} + \xi_{m,i}^{(j+1)},

where \xi_{m,i}^{(j+1)} is the extrinsic information generated for bit x_i by node m in the (j+1)th iteration. If there is a subset of nodes, denoted as M', in M_k that continue to exchange the soft-output of bit x_i, then node k will obtain all this information and use it as the a priori information for the next iteration. In this case, the a priori information for bit x_i at node k, denoted as \eta_{k,i}^{(j+2)}, is given by

    \eta_{k,i}^{(j+2)} = \sum_{m \in M'} \delta_{m,i}^{(j+1)} = |M'| \, \delta_{k,i}^{(j)} + \sum_{m \in M'} \xi_{m,i}^{(j+1)},        (4-4)

where |M'| is the cardinality of M'. In (4-4), we can see that the soft-output \delta_{k,i}^{(j)} is explicitly included in \eta_{k,i}^{(j+2)}. This implies the existence of significant correlation between the a priori information and the previous soft-output at the decoder of node k.

Based on the above discussion, we adopt a simple scheme to mitigate the correlation problem. The method is to assign a memory flag to each information bit to record whether the soft-output for that bit has been exchanged or not. Once the soft-output of a bit has been exchanged, no further information about that bit will be exchanged in later iterations. Since the bits that obtain information from other nodes are very likely to have high reliability values (magnitudes of soft-output) after one iteration, repeating the information exchange for these bits cannot improve the decoding performance. In some cases, this may even hurt the performance because some decoding errors in these bits may have high reliability values. This scheme also increases the opportunity for other, less reliable bits to receive information in the following iterations. Besides attacking the correlation problem, another advantage of this memory-based scheme is that the information exchange amount can be significantly reduced.

4.2.2 Least-Reliable-Bit Information Exchange

In order to achieve spatial diversity without the need of extensive information

exchange as in MRC, only a small amount of information can be exchanged in each

iteration for collaborative decoding. This means that proper information must be

chosen and exchanged so that the decoder at each node can improve the error perfor-

mance effectively. In this sense, the selection of information becomes very important. Although they account for only a small portion of the whole packet, the bits in error directly determine the error performance of a decoder. In MAP decoding, the a priori information of a bit contributes directly to its soft-output. Thus, we consider a method to collect a priori information for those bits that are likely to be in error in the previous decoding iteration at each node.

As mentioned in Chapter 2, the soft-output of the MAP decoder in log-likelihood ratio form provides a good reliability measure for a data bit. It is directly related to the a posteriori probability of the decision for a data bit. For both AWGN and Rayleigh fading channels, a data bit with a small soft-output magnitude is more likely to be in error. In fact, many decoding algorithms, such as the MAP decoding and belief-propagation decoding algorithms, used in turbo-like decoders are min-sum or min-product algorithms [27]. It turns out that the soft-output of these decoding algorithms in LLR form possesses Gaussian-like statistical properties [28, 29]. Fig. 4-2 shows the typical probability distribution of the soft-output generated by a MAP










Figure 4-2: Typical probability distribution functions of the soft-output for convolutional codes (shown for Eb/N0 = 0 dB and 3 dB).


decoder for convolutional codes on AWGN channels. In the figure, we assume that

data bits to be decoded are all zeros for clarity. With this assumption, if soft-output

of a bit is negative, then the decision on that bit will be in error. From the figure, we can see that the Gaussian-like probability distribution function falls mostly on the right-hand side of the y-axis; only a small part of its left tail extends into the negative region. Since the tail of a Gaussian distribution function decays exponentially, the probability that the soft-output of an erroneous bit has a large reliability value is very small. Conversely, the reliability values of correctly decoded bits have a good chance of being large. Thus, a

simple way to identify the possible erroneous bits is to measure the reliability values.

With the above argument, we propose an efficient information exchange scheme,

called least-reliable-bit (LRB) information exchange. The idea of the LRB exchange

scheme is that each node requests information from other nodes for its least reliably

decoded data bits [24]. All the additional information collected from other nodes

is used as a prior information in the next decoding iteration. Using the scheme

described in Section 4.2.1, a memory is assigned for each data bit to record whether









the information of that bit has been exchanged or not. Once information of a bit

has been exchanged, no further information about that bit will be exchanged in later

iterations. In each iteration, the bits for which information has not been previously

exchanged are called candidate bits, and the remaining bits are non-candidate bits.

We denote the total number of exchanges by I, and the fraction of candidate bits to exchange in the jth (0 <= j <= I-1) iteration by p_j (0 < p_j <= 1). The procedure of the LRB exchange scheme is as follows (a sketch of one exchange round is given after the list):

1. Set all data bits to be candidate bits.

2. Decode the received signals at each node.

3. If I + 1 decoding iterations (i.e., I exchanges) have been performed, then stop

the decoding procedure and go to step 1) to process a new packet.

4. Otherwise, each node ranks the candidate bits according to their soft output

magnitude (absolute value of the soft output) and requests soft information for

the bottom pj fraction of the candidate bits (the least reliable candidate bits)

from other nodes.

5. Each node broadcasts soft output for those bits that are requested by other

nodes.

6. Those bits involved (received and broadcast) in the current exchange are set to

be non-candidate bits for later iterations.

7. Each node adds the information from other nodes to its a priori information

and returns to step 2).
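The following is a minimal sketch of one LRB exchange round under the conventions above; `soft_out` would come from each node's max-log-MAP decoder (not implemented here), and the memory of Section 4.2.1 is represented by the boolean `candidate` mask. Names and the interface are illustrative only.

    import numpy as np

    def lrb_exchange_round(soft_out, candidate, p_j):
        """One LRB exchange among M nodes (soft_out: M x N array of LLRs).

        Each node requests soft information for the bottom p_j fraction of its
        candidate bits (smallest |LLR|); the other nodes answer, and all bits
        involved are removed from the candidate set (memory-based scheme).
        Returns the additional a priori information (M x N) and the updated mask.
        """
        M, N = soft_out.shape
        additional = np.zeros((M, N))
        requested_by = [set() for _ in range(N)]          # which nodes asked for bit i
        cand_idx = np.flatnonzero(candidate)
        n_req = int(np.floor(p_j * cand_idx.size))
        for k in range(M):
            order = np.argsort(np.abs(soft_out[k, cand_idx]))   # least reliable first
            for i in cand_idx[order[:n_req]]:
                requested_by[i].add(k)
        for i in range(N):
            if not requested_by[i]:
                continue
            for k in range(M):
                if k in requested_by[i] or len(requested_by[i]) > 1:
                    senders = [t for t in range(M) if t != k]    # k hears all others
                else:   # exactly one other node requested; it does not broadcast
                    senders = [t for t in range(M)
                               if t != k and t not in requested_by[i]]
                additional[k, i] = sum(soft_out[t, i] for t in senders)
            candidate[i] = False                                 # memory: exchanged once
        return additional, candidate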

Here, {p_j}_{j=0}^{I-1} are the design parameters, which are usually chosen based on the tradeoff between performance and information exchange amount. Optimization of the parameters {p_j}_{j=0}^{I-1} is an interesting topic, but is outside the scope of this work. In this chapter, we focus on the capability of collaborative decoding to achieve receive diversity given the number of nodes M and some proper choices of {p_j}_{j=0}^{I-1}.









Since the information being exchanged is the soft-output (in LLR form) for a portion of the data bits, we use the average total number of LLRs transmitted through the broadcast channel in the distributed array for processing each packet as a simple measure of the amount of exchanged information. The cost of the overhead due to the protocol is ignored here. If we use \phi to denote the average information exchange amount, then under the assumption that the soft-outputs at different nodes and for different bits are independent, for a specific choice of {p_j}_{j=0}^{I-1} the value of \phi for the LRB scheme is given by

    \phi_{LRB} = MN \sum_{j=0}^{I-1} \left[ 1 - (1 - p_j)^{M-1} \right] \prod_{l=0}^{j-1} (1 - p_l)^M,        (4-5)

where N is the block size of data bits. Correspondingly, the information exchange amount of MRC is given by

    \phi_{MRC} = MN / R_c,        (4-6)

where R_c is the code rate.

4.2.3 Most-Reliable-Bit Information Exchange

In the LRB exchange scheme, each node directly requests information from other

nodes for its least reliable bits. Thus, the information exchange process has to be car-

ried out in two stages. In the first stage, each node sends out its request information.

In the second stage, each node broadcasts its soft-output according to the request

received from other nodes. This two stage process may increase the complexity of

the required network protocol. In addition, a node requesting information for its

least reliable bits does not necessary mean that other nodes can provide more reliable

information for those bits. It is possible that the information collected by a node is

not reliable enough to improve its decoding performance in the next iteration.

As an alternative, we propose another scheme called most-reliable-bit (MRB) information exchange. MRB exchange is usually efficient for distributed arrays consisting of a large number of nodes, e.g., M > 6. The idea of this MRB scheme is









that each node directly broadcasts the soft-output for its most reliable bits after a decoding iteration, without the information request stage of the LRB scheme. Similar to LRB, the identification of highly reliable bits is based on ranking and comparing the reliability values. A small percentage of bits with high reliability values are chosen as the most reliable bits. According to the statistical characteristics of the soft-output shown in Fig. 4-2, these bits are correctly decoded with very high probability. Assuming that the soft outputs for different bits and/or at different nodes are independent of one another, the most reliable bits are evenly spread over a packet at each node and their positions are uncorrelated across nodes. Thus, when the number of nodes is large, even if each node only broadcasts information for its top 10% most reliable bits, the total information collected from all these nodes can cover more than 50% of the bits in the whole packet. Therefore, the less reliable bits at a node that collects this information have a good chance of being covered. The collected additional information in MRB is usually much more reliable than that in LRB.

Combined with the memory-based scheme to avoid correlation among the ad-

ditional information at different iterations, the MRB exchange scheme is given as

follows.

1. Clear the flag registers, i.e., set all data bits to be candidate bits.

2. Decode the received signals at each node.

3. If I exchanges (i.e., I + 1 decoding iterations) have been performed, then ter-

minate the decoding procedure and go to step 1) to process a new packet.

4. Otherwise each node ranks the candidate bits according to their soft output

magnitude (reliability), and broadcasts the soft information for the top pj frac-

tion of the candidate bits (the most reliable candidate bits) to other nodes.

5. Each node sets the flags for those bits involved (received and broadcast) in the

current exchange so that they become non-candidate bits for later iterations.









6. Each node adds the additional information to its a priori information and

returns to step 2).

Ignoring the cost of information exchange due to the network protocol and bit indexes, the information exchange amount for MRB can be computed as

    \phi_{MRB} = MN \sum_{j=0}^{I-1} p_j \prod_{l=0}^{j-1} (1 - p_l)^M.        (4-7)
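A short sketch evaluating (4-5)-(4-7), as written above, for the parameter choices used later in Section 4.3 (and compared in Fig. 4-7); a minimal computation under the stated independence assumptions, with illustrative function names.

    def phi_lrb(M, N, p):
        """Average number of exchanged LLRs for LRB, eq. (4-5)."""
        total, still_candidate = 0.0, 1.0
        for p_j in p:
            total += M * N * (1 - (1 - p_j) ** (M - 1)) * still_candidate
            still_candidate *= (1 - p_j) ** M
        return total

    def phi_mrb(M, N, p):
        """Average number of exchanged LLRs for MRB, eq. (4-7)."""
        total, still_candidate = 0.0, 1.0
        for p_j in p:
            total += M * N * p_j * still_candidate
            still_candidate *= (1 - p_j) ** M
        return total

    def phi_mrc(M, N, Rc):
        """Exchange amount for MRC, eq. (4-6)."""
        return M * N / Rc

    N, Rc = 1024, 0.5
    for M in range(2, 9):
        lrb = phi_lrb(M, N, [0.1, 0.15, 0.25]) / phi_mrc(M, N, Rc)
        mrb = phi_mrb(M, N, [0.1, 0.2, 1.0]) / phi_mrc(M, N, Rc)
        print(M, round(lrb, 3), round(mrb, 3))   # LRB grows with M, MRB shrinks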

4.3 Performance Evaluation

In this section, we use Monte Carlo simulations to evaluate the performance of

collaborative decoding with the LRB and MRB information exchange schemes when the distributed array consists of two or more nodes (M >= 2). In the simulations, the packet size is set to 1024 data bits. We set the number of exchanges I to 3 (i.e., 3 exchanges and 4 decoding iterations are performed in total).

We first examine the performance of collaborative decoding with the LRB ex-

change scheme. Fig. 4-3 shows the BER curves of collaborative decoding on the

AWGN channel for the cases of M = 2, 3, 4, 6 and 8, respectively. In the system, we employ the nonrecursive convolutional code CC(5, 7), for which the generator polynomials are [1 + D^2, 1 + D + D^2] and the code rate is R_c = 1/2. The parameters {p_j} are set to {0.1, 0.15, 0.25}. For clarity, only the BERs obtained after the last iteration of collaborative decoding are shown. In the figure we also show the BERs for MRC with M = 2, 6 and 8. From the figure we can see that, for M = 2, the performance of collaborative decoding with the LRB exchange scheme is very close to that of MRC, while it is within about 2 dB and 2.6 dB of that of MRC for M = 6 and M = 8, respectively. Fig. 4-4 shows the BER performance on an independent Rayleigh fading channel. Similar to the AWGN case, significant spatial diversity gains are obtained using collaborative decoding.

For collaborative decoding with the MRB exchange scheme, in order to gain

spatial diversity, we set {pj} to be {0.1,0.2, 1}. All other system settings are the















Collaborative decoding
with LRB

MRC






\single reciever


MS3


\:I\


i:ii ii,


-3 -2 -1 0 1 2
Eb/N0 (dB)


4 5 6


Figure 4-3: BER performance
the cases of M = 2,3,4,6 and
{0.1,0.15, 0.25} are used.


of collaborative decoding with LRB exchange for
8 on AWGN channels, where CC(5,7) and {pj,}


Figure 4-4: BER performance of collaborative decoding with LRB exchange on Rayleigh fading channels; parameter settings are the same as in Fig. 4-3.











Figure 4-5: BER performance of collaborative decoding with MRB exchange on AWGN channels, where {p_j} = {0.1, 0.2, 1} are used.


same as that for the LRB scheme. The BER performance for MRB on AWGN and

independent Rayleigh fading channels is shown in Figs. 4-5 and 4-6, respectively.

Similar to collaborative decoding with LRB exchange, collaborative decoding with

MRB exchange can also achieve significant receive diversity gains and performance

close to that of MRC for both AWGN and Rayleigh fading channels.

Finally, we compare the information exchange amounts of the LRB and MRB exchange schemes. For the settings {p_j} = {0.1, 0.15, 0.25} for LRB and {p_j} = {0.1, 0.2, 1} for MRB, the two schemes achieve roughly the same performance for different numbers of nodes M. We use (4-5), (4-7) and (4-6) to calculate the information exchange amounts relative to MRC for the two schemes. Fig. 4-7 shows the relative information exchange amounts \phi_{LRB}/\phi_{MRC} and \phi_{MRB}/\phi_{MRC} for different values of M. From the figure, we can see that for this setting of {p_j}, the LRB amount grows with increasing M and approaches half of the information exchange amount of MRC. In contrast, the MRB amount decreases with M. From the comparison we


















Figure 4-6: BER performance of collaborative decoding with MRB exchange on Rayleigh fading channels; parameter settings are the same as in Fig. 4-5.


Figure 4-7: Comparison of the information exchange amount with respect to MRC for the LRB ({p_j} = {0.1, 0.15, 0.25}) and MRB ({p_j} = {0.1, 0.2, 1}) exchange schemes versus the number of nodes M.









conclude that, when the number of nodes is small, the LRB exchange scheme is more efficient than MRB. With more nodes in the distributed array, MRB becomes more efficient than LRB.

4.4 Summary

In this chapter, we extend collaborative decoding to distributed arrays with two

or more nodes. Two different information exchange schemes are proposed. In the

LRB scheme, the nodes request soft information for a certain percentage of their

least reliable bits. In the MRB exchange scheme, nodes send out soft information

about a small set of their most reliable bits. Collaborative decoding with both of

these two scheme can achieve most of the spatial diversity and provide significant

savings in terms of information exchange amount compared to MRC on AWGN and

independent Rayleigh fading channels. We also compare the information exchange

amount for the two schemes. It is shown that for distributed arrays with small number

of nodes, LRB is efficient. When the number of nodes increases, MRB becomes more

efficient than LRB.















CHAPTER 5
PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH
LEAST-RELIABLE-BIT EXCHANGE ON AWGN CHANNELS

As an efficient diversity technique, collaborative decoding with the LRB information exchange scheme, discussed in Chapter 4, provides significant savings in the cost of information exchange while still achieving performance close to that of MRC. In this chapter, we focus on the theoretical analysis of the error performance of collaborative decoding with the LRB exchange scheme on the AWGN channel. The system model considered in this chapter is the same as that described in Section 4.1. Since we restrict the analysis to the AWGN channel, the channel gain g_{k,i} in (4-1) will always be 1 in this chapter. The analysis is based on the LRB exchange scheme described in Section 4.2.2.

The analysis will be based on the statistical characteristics of soft information

obtained from the MAP decoders in collaborative decoding. From simulation, we

observe that the extrinsic information generated in the decoding process can be well

approximated by Gaussian random variables when nonrecursive convolutional codes

are employed. Unfortunately, for recursive convolutional codes the extrinsic information generated in the decoding process cannot be approximated by a simple Gaussian

distribution, which makes the performance analysis difficult. Due to this difficulty,

we only consider nonrecursive convolutional codes in this chapter.

By viewing collaborative decoding as an iterative decoding system, we use a

typical analysis technique for turbo-like codes, known as density evolution, to ana-

lyze the performance of collaborative decoding. As in most of the literature (e.g.,

[30] and [31]) on analysis of turbo codes, we use simulation to obtain the statistical









characteristics of the extrinsic information, which is approximated by a Gaussian dis-

tribution. To simplify the problem, we model the collaborative decoding process as

a density evolution system with only one MAP decoder. Then we can generate the

a priori information of the density evolution model according to the LRB exchange

scheme. By simulating the density evolution model with only one MAP decoder, we

obtain the statistical characteristics of the actual extrinsic information with a modest

simulation load in comparison to that of the actual collaborative decoding system.

With the knowledge of the extrinsic information probability distribution at each iter-

ation, we derive an approximate bit-error rate (BER) upper bound for collaborative

decoding with the LRB exchange scheme.

The rest of this chapter is organized as follows. In Section 5.1, we model col-

laborative decoding as a concatenated structure consisting of a MAP decoder and an

information exchanging device, and employ Gaussian approximation to obtain the

density evolution of the extrinsic information. In Section 5.2, we derive an upper

bound of the BER of the collaborative decoding process. Numerical results obtained

from the analysis are shown in Section 5.3. Finally, conclusions are given in Sec-

tion 5.4. This chapter is based on the work in [25] and [32].

5.1 Gaussian-Approximated Density Evolution For Nonrecursive
Convolutional Codes

Due to the exchange of soft information in the process, knowledge of the statis-

tical characteristics of soft information from maximum a posteriori (MAP) decoders in collaborative decoding is important to its performance analysis. Thus, we first consider the soft information generated in collaborative decoding. Note that, since the soft output for non-systematic codes consists only of extrinsic information and a priori information, if a candidate bit has not obtained additional information previously, then ranking and exchanging the soft output for candidate bits is equivalent to ranking and exchanging the extrinsic information for those bits. Also, the sets of candidate










Figure 5-1: System model for the collaborative decoding process (a MAP decoder and an information exchange unit that generates the additional information from the extrinsic information).


bits and non-candidate bits for a packet in each iteration are exactly the same for all

nodes. These facts are important to understand the analysis in the following sections.

Because of the symmetry among the nodes in our system model, the statistical

characteristics of the extrinsic information at each node is the same in each iteration.

This means that the behavior of the LRB exchange process can be determined by

knowing the statistical characteristics of the output from the MAP decoder at a single

node. Thus, the collaborative decoding process can be modeled by the joint operation

of an information exchange unit and the MAP decoder unit as shown in Fig. 5-1.

The output of the information exchange unit is fed back to the MAP decoder as a priori information for use in the next decoding iteration. The following analysis is based on

this system model.

Assuming that the all-zero codeword is transmitted, it is well known that the

extrinsic information generated by a MAP decoder, in the log-likelihood ratio (LLR)

form, is well approximated by Gaussian random variables when the inputs to the

decoder are i.i.d. Gaussian [31]. For the collaborative decoding process described

in Section 4.1, the additional information obtained from the information exchanging

process, which is used as input to the MAP decoder, has a non-Gaussian distribution.

Nevertheless, we observe that the probability distribution of the extrinsic information

from the MAP decoder in each iteration can still be well approximated as Gaussian

















Figure 5-2: Empirical pdfs of the extrinsic information generated by the MAP decoders in successive iterations of collaborative decoding with the LRB exchange for M = 6 and Eb/N0 = 3 dB on AWGN channels, where the maximum free distance 4-state nonrecursive convolutional code is used.


when nonrecursive convolutional codes are employed. Fig. 5-2 shows typical histograms of the extrinsic information generated by the MAP decoders at successive iterations of the collaborative decoding process for nonrecursive convolutional codes. Compared with the corresponding ideal Gaussian distributions, the histograms are very close to Gaussian. Based on these observations, we apply the Gaussian-approximated density evolution technique in [30, 31] to predict the behavior of the MAP decoders in collaborative decoding.

As in [31], we assume that at each node the extrinsic information generated by

the MAP decoder for all the information bits at that node are i.i.d. Gaussian ran-

dom variables in each iteration. We further assume that the extrinsic information

for information bits generated by different nodes are independent. Thus, the statis-

tical behavior of the extrinsic information is sufficiently specified by its mean and

variance. Unfortunately, obtaining an analytic distribution for the extrinsic informa-

tion generated by MAP decoders is an intractable problem, especially for the case of








non-Gaussian input. Hence, we use simulation, based on the model in Fig. 5-1, to
quantify the evolution of the probability distribution. By inputting actual additional
information to the MAP decoder, the mean and variance of the extrinsic informa-
tion can be obtained with modest simulation complexity in comparison to the actual
collaborative decoding process. This knowledge of extrinsic information is used to
evaluate the error performance in Section 5.2.
We first describe the generation of the additional information. For the jth decoding iteration, let \xi_{k,i}^{(j)} denote the extrinsic information generated by the MAP decoder for the ith information bit at node k, and let B_i^{(j)} denote the event that bit i is a candidate bit. Under the Gaussian assumption, \xi_{k,i}^{(j)} \sim N(\mu_j, \sigma_j^2), and the \{\xi_{k,i}^{(j)}\} are i.i.d. for all k and i, where N(\mu, \sigma^2) denotes a Gaussian distribution with mean \mu and variance \sigma^2. When the bit block size is large enough, the information request criterion that \xi_{k,i}^{(j)} ranks in the bottom p_j fraction among the candidate bits at the kth node is approximately equivalent to |\xi_{k,i}^{(j)}| < T_j, where T_j > 0 is a threshold related to the distribution of \xi_{k,i}^{(j)} and p_j. Specifically, we have

    P(|\xi_{k,i}^{(j)}| < T_j \,|\, B_i^{(j)}) = p_j.        (5-1)

Let \Lambda_{k,i}^{(j)} denote the additional information for the ith bit at the kth node generated by the LRB exchange process in the jth iteration. This additional information will be added to the a priori information in the (j+1)th iteration by node k. Below, let us assume that M \ge 3. The case of M = 2 will be discussed separately later. According to the LRB scheme, if bit i is a non-candidate bit in the jth iteration, then \Lambda_{k,i}^{(j)} = 0. Otherwise, there are three possibilities for a candidate bit:

i) No node requests information for the ith bit, i.e., \bigcap_{t \in M} \{|\xi_{t,i}^{(j)}| > T_j\} occurs; then \Lambda_{k,i}^{(j)} = 0.









ii) The kth node does not request information for bit i, but exactly one other node requests information for that bit. We denote this event by \tilde{R}_{k,i}^{(j)}, i.e.,

    \tilde{R}_{k,i}^{(j)} = \bigcup_{r \in M, r \neq k} \left\{ |\xi_{r,i}^{(j)}| < T_j \cap \bigcap_{t \in M, t \neq r} |\xi_{t,i}^{(j)}| > T_j \right\}.        (5-2)

Then the kth node obtains information from all other nodes except the one sending out the request, i.e.,

    \Lambda_{k,i}^{(j)} = \sum_{t \in M, t \neq r, t \neq k} \xi_{t,i}^{(j)} \triangleq \tilde{\lambda}_{k,i}^{(j)}.        (5-3)

iii) The kth node, or more than one node in M, requests information for bit i. We denote this event by \hat{R}_{k,i}^{(j)}, i.e.,

    \hat{R}_{k,i}^{(j)} = \{|\xi_{k,i}^{(j)}| < T_j\} \cup \left( \{|\xi_{k,i}^{(j)}| > T_j\} \cap \bigcup_{r \in M, r \neq k} \left\{ |\xi_{r,i}^{(j)}| < T_j \cap \bigcup_{t \in M, t \neq r, t \neq k} |\xi_{t,i}^{(j)}| < T_j \right\} \right).        (5-4)

In this case, the kth node obtains information from all other nodes, and \Lambda_{k,i}^{(j)} is given by

    \Lambda_{k,i}^{(j)} = \sum_{t \in M, t \neq k} \xi_{t,i}^{(j)} \triangleq \hat{\lambda}_{k,i}^{(j)}.        (5-5)
Under the Gaussian assumption, we can see that, without the constraint of candidate bits, \tilde{\lambda}_{k,i}^{(j)} \sim N((M-2)\mu_j, (M-2)\sigma_j^2) while \hat{\lambda}_{k,i}^{(j)} \sim N((M-1)\mu_j, (M-1)\sigma_j^2). Clearly, \tilde{R}_{k,i}^{(j)} and \hat{R}_{k,i}^{(j)} are disjoint events. According to the LRB scheme, only under case i) will bit i be a candidate bit again in the next iteration. Hence,

    P(B_i^{(j+1)} \,|\, B_i^{(j)}) = P\left( \bigcap_{k \in M} |\xi_{k,i}^{(j)}| > T_j \,\Big|\, B_i^{(j)} \right) = (1 - p_j)^M.        (5-6)

From this recursive relation, we immediately obtain

    P(B_i^{(j)}) = \prod_{l=0}^{j-1} P(B_i^{(l+1)} \,|\, B_i^{(l)}) \, P(B_i^{(0)}) = \prod_{l=0}^{j-1} (1 - p_l)^M,        (5-7)









and

    B_i^{(j)} = \bigcap_{l=0}^{j-1} \bigcap_{k \in M} \{ |\xi_{k,i}^{(l)}| > T_l \}.        (5-8)

Different from case i), in cases ii) and iii) bit i becomes a non-candidate bit in the next iteration. Thus,

    P(\bar{B}_i^{(j+1)} \,|\, B_i^{(j)}) = P(\tilde{R}_{k,i}^{(j)} \cup \hat{R}_{k,i}^{(j)} \,|\, B_i^{(j)}) = P(\tilde{R}_{k,i}^{(j)} \,|\, B_i^{(j)}) + P(\hat{R}_{k,i}^{(j)} \,|\, B_i^{(j)}).        (5-9)

With the above arguments, we can easily simulate the additional information

generated in the actual LRB process for the density evolution model in Fig. 5-1.

Without loss of generality, we assume that the MAP decoder in Fig. 5-1 is in the

Mth node. Also, we assume that the block length of the code is long enough to

ensure the Gaussian approximations and thresholding. In the jth iteration, the MAP decoder generates \xi_{M,i}^{(j)} for the ith bit. Then the values of \mu_j and \sigma_j^2 of the extrinsic information for the information bits are estimated. To find T_j using (5-1), we first use a nonparametric estimation method to estimate the cumulative distribution function F_j(x) of the extrinsic information for the candidate bits, i.e.,

    F_j(x) = P(\xi_{M,i}^{(j)} < x \,|\, B_i^{(j)}).        (5-10)

Then, according to (5-1), we have

    p_j = F_j(T_j) - F_j(-T_j).        (5-11)

For p_j < 1, by solving (5-11) numerically we can obtain T_j approximately. For the case of p_j = 1, we set T_j = \infty. In the information exchange module, we use M-1 i.i.d. random variables following the distribution F_j(x) to simulate the extrinsic information \xi_{k,i}^{(j)} for candidate bit i at node k, for k = 1, 2, ..., M-1, respectively. Then, with \{\xi_{k,i}^{(j)}\} for all k \in M, we check whether \tilde{R}_{M,i}^{(j)} or \hat{R}_{M,i}^{(j)} occurs, set \Lambda_{M,i}^{(j)} to \tilde{\lambda}_{M,i}^{(j)} or \hat{\lambda}_{M,i}^{(j)} accordingly, and flag this bit as a non-candidate bit for the next iteration. For all other cases, we set \Lambda_{M,i}^{(j)} = 0.











Figure 5-3: Comparison of the mean and variance of the extrinsic information from the density evolution model and from the actual collaborative decoding process.


Then, based on the LRB scheme, we construct the a priori information for the (j+1)th iteration, denoted by \eta_{M,i}^{(j+1)}, as

    \eta_{M,i}^{(j+1)} = \tilde{\lambda}_{M,i}^{(j)} if \tilde{R}_{M,i}^{(j)} occurs, \quad \hat{\lambda}_{M,i}^{(j)} if \hat{R}_{M,i}^{(j)} occurs, \quad 0 otherwise,        (5-12)

where

    R_{M,i}^{(j)} \triangleq \tilde{R}_{M,i}^{(j)} \cup \hat{R}_{M,i}^{(j)}.        (5-13)
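A minimal numpy sketch of this information-exchange module of Fig. 5-1 is given below. It estimates T_j from the empirical distribution of the candidate-bit extrinsic information (the p_j-quantile of |xi|, solving (5-11) nonparametrically), draws the other nodes' extrinsic values from the same empirical distribution as a stand-in for F_j, and forms the additional information according to cases i)-iii); the interface is illustrative only.

    import numpy as np

    def exchange_module(xi_M, candidate, p_j, M, rng=np.random.default_rng(0)):
        """One pass of the LRB exchange unit in the density evolution model.

        xi_M      : extrinsic LLRs of the simulated (Mth) node, length-N array
        candidate : boolean mask of candidate bits (updated in place)
        """
        cand = np.flatnonzero(candidate)
        # Threshold T_j: p_j-quantile of |xi| over candidate bits, cf. (5-11).
        T_j = np.inf if p_j >= 1.0 else np.quantile(np.abs(xi_M[cand]), p_j)
        # Extrinsic information of the other M-1 nodes, drawn i.i.d. from the
        # empirical distribution of the candidate-bit extrinsic values.
        xi_others = rng.choice(xi_M[cand], size=(M - 1, cand.size), replace=True)
        additional = np.zeros_like(xi_M)
        for idx, i in enumerate(cand):
            col = xi_others[:, idx]
            below = np.abs(col) < T_j                       # other nodes requesting bit i
            if np.abs(xi_M[i]) < T_j or below.sum() > 1:    # case iii): hear all M-1 nodes
                additional[i] = col.sum()
                candidate[i] = False
            elif below.sum() == 1:                          # case ii): hear M-2 nodes
                additional[i] = col[~below].sum()
                candidate[i] = False
            # case i): no request; bit stays a candidate and additional stays 0
        return additional, T_j, candidate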

By inputting this a priori information to the MAP decoder and iterating the above procedure, we can obtain the statistical characteristics of the Gaussian-approximated extrinsic information over the whole collaborative decoding process. Figs. 5-3 and 5-4 compare the mean and variance of the extrinsic information and the threshold T_j estimated from our density evolution model and from the actual collaborative decoding process for the case of M = 6. In the figures, the maximum free distance 4-state nonrecursive convolutional code is used, and {p_j} is set to {0.1, 0.2, 1}. From the figures, we see that our density evolution model gives an excellent approximation of the actual collaborative decoding process with only 1/Mth of the simulation load.











Figure 5-4: Comparison of the threshold estimated from the density evolution model and from the actual collaborative decoding process.


The analysis in the next section also shows that, to evaluate the error performance over the total I iterations, we only need the statistical knowledge of the extrinsic information in the first I-1 iterations.

5.2 Error Performance Analysis

With knowledge of the statistical characteristics of the extrinsic information, we evaluate the error performance of collaborative decoding with LRB exchange. We again consider the decoding process and performance at the Mth node. Let M' \triangleq \{1, 2, ..., M-1\} denote the set of the other M-1 nodes. Since the average BER is considered, we drop the bit index, i.e., the subscript i, in the notation of variables and events for the bit of interest. For convenience, we also drop the subscript M for the Mth node in the following derivation. From the definition (5-3), we know that \tilde{\lambda}^{(j)} is a Gaussian random variable for M \ge 3 but equals zero for M = 2. Thus we treat M = 2 as a special case and consider the case of M \ge 3 first below.









5.2.1 BER Upper Bound for M >= 3

For the case of M \ge 3, the BER of the MAP decoder in the jth (j > 0) iteration is the probability that the soft output of a bit is smaller than zero given that the all-zero sequence is transmitted, i.e.,

    P_b^{(j)} = P(\xi^{(j)} + \eta^{(j)} < 0),        (5-14)

where \xi^{(j)} is the extrinsic information and \eta^{(j)} is the a priori information, given in (5-12), in the jth iteration at the Mth node. Here, we evaluate the error performance by finding an upper bound on (5-14).

According to (5-12), (6-10) and (6-16), we rewrite (5-14) as

    P_b^{(j)} = P(\xi^{(j)} + \eta^{(j)} < 0, \bar{B}^{(j)}) + P(\xi^{(j)} < 0, B^{(j)})
              = \sum_{l=0}^{j-1} P(\xi^{(j)} + \Lambda^{(l)} < 0, R^{(l)}, B^{(l)}) + P(\xi^{(j)} < 0, B^{(j)}).        (5-15)
We first consider the first part of (5-15). Using (6-16), (5-3) and (5-5), we have

    P(\xi^{(j)} + \Lambda^{(l)} < 0, R^{(l)}, B^{(l)})
        = P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \tilde{R}^{(l)}, B^{(l)}) + P(\xi^{(j)} + \hat{\lambda}^{(l)} < 0, \hat{R}^{(l)}, B^{(l)})
        \le P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \tilde{R}^{(l)}, B^{(l)}) + P(\xi^{(j)} + \hat{\lambda}^{(l)} < 0).        (5-16)









With (5-2) and (5-8), when p_t < 1 (i.e., T_t < \infty) for 0 \le t \le l, we upper bound the first term in (5-16) as follows:

    P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \tilde{R}^{(l)}, B^{(l)})
        = P\left( \xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \bigcup_{r \in M'} \left\{ |\xi_r^{(l)}| < T_l \cap \bigcap_{t \in M, t \neq r} |\xi_t^{(l)}| > T_l \right\}, B^{(l)} \right)
        = \sum_{r=1}^{M-1} P\left( \xi^{(j)} + \tilde{\lambda}^{(l)} < 0, |\xi_r^{(l)}| < T_l, \bigcap_{t \in M, t \neq r} |\xi_t^{(l)}| > T_l, \bigcap_{t=0}^{l-1} \bigcap_{k \in M} |\xi_k^{(t)}| > T_t \right)
        \overset{(a)}{\le} \sum_{r=1}^{M-1} P\left( \xi^{(j)} + \tilde{\lambda}^{(l)} < 0, |\xi_r^{(l)}| < T_l, \bigcap_{t=0}^{l-1} |\xi_r^{(t)}| > T_t \right)        (5-17)
        \overset{(b)}{=} (M-1) \, P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0) \, P\left( |\xi_r^{(l)}| < T_l, \bigcap_{t=0}^{l-1} |\xi_r^{(t)}| > T_t \right),        (5-18)

where (a) is obtained by dropping all the events in \{\tilde{R}^{(l)}, B^{(l)}\} associated with \xi_k^{(t)} for all t and k \in M, k \neq r, and (b) is due to the fact that the probabilities in the sum in (5-17) are equal for 1 \le r \le M-1, and that \xi^{(j)} and \tilde{\lambda}^{(l)} are independent of \xi_r^{(t)} for all t.

To evaluate the probability P(|\xi_r^{(l)}| < T_l, \bigcap_{t=0}^{l-1} |\xi_r^{(t)}| > T_t) in (5-18), we use (5-8) to rewrite P(B^{(j)}) as

    P(B^{(j)}) = P\left( \bigcap_{k=1}^{M} \bigcap_{l=0}^{j-1} |\xi_k^{(l)}| > T_l \right) = \prod_{k=1}^{M} P\left( \bigcap_{l=0}^{j-1} |\xi_k^{(l)}| > T_l \right) = \prod_{l=0}^{j-1} (1 - p_l)^M.        (5-19)

By comparing (5-19) with (5-7), for all k \in M we have

    P\left( \bigcap_{l=0}^{j-1} |\xi_k^{(l)}| > T_l \right) = \prod_{l=0}^{j-1} (1 - p_l).        (5-20)

In a similar manner, it is easy to see that for all k \in M

    P\left( |\xi_k^{(l)}| < T_l, \bigcap_{t=0}^{l-1} |\xi_k^{(t)}| > T_t \right) = p_l \prod_{t=0}^{l-1} (1 - p_t).        (5-21)









Thus, with (5-21) and taking into account the fact that P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \tilde{R}^{(l)}, B^{(l)}) \le P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0), we refine the upper bound (5-18) as

    P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \tilde{R}^{(l)}, B^{(l)}) \le \min\left\{ 1, (M-1) p_l \prod_{t=0}^{l-1} (1 - p_t) \right\} \cdot P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0).        (5-22)

This bound is for the case that none of the p_t equals 1. If there exists a t, 0 \le t \le l-1, such that p_t = 1, then P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \tilde{R}^{(l)}, B^{(l)}) = 0 because P(\tilde{R}^{(l)}, B^{(l)}) = 0. To include this case, we rewrite the upper bound (5-22) as

    P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0, \tilde{R}^{(l)}, B^{(l)}) \le a_l \, P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0),        (5-23)

where

    a_l = 0 if \prod_{t=0}^{l-1} (1 - p_t) = 0, and a_l = \min\left\{ 1, (M-1) p_l \prod_{t=0}^{l-1} (1 - p_t) \right\} otherwise.        (5-24)

In the same way, we consider the probability P(\xi^{(j)} < 0, B^{(j)}) in (5-15). With (5-7) and (5-20), this probability can be expanded and upper bounded by

    P(\xi^{(j)} < 0, B^{(j)}) = P\left( \xi^{(j)} < 0, \bigcap_{l=0}^{j-1} \bigcap_{k \in M} |\xi_k^{(l)}| > T_l \right)
        \le \prod_{l=0}^{j-1} (1 - p_l)^{M-1} \, P\left( \xi^{(j)} < 0, |\xi^{(j-1)}| > T_{j-1} \right),        (5-25)

where

    b_j = \prod_{l=0}^{j-1} (1 - p_l)^{M-1}.        (5-26)









By inserting (5-23), (5-25) and (5-16) into (5-15), we obtain the following upper bound:

    P_b^{(j)} \le \sum_{l=0}^{j-1} \left[ a_l \, P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0) + P(\xi^{(j)} + \hat{\lambda}^{(l)} < 0) \right]
              + b_j \left[ P(\xi^{(j-1)} < -T_{j-1}) + P(\xi^{(j)} < 0, \xi^{(j-1)} > T_{j-1}) \right],        (5-27)

where a_l and b_j are given by (5-24) and (5-26), respectively. Below, we employ a union bound for the max-log-MAP decoder to further upper bound the probabilities in (5-27).
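As a small illustration, the helper below computes the coefficients a_l and b_j that weight the terms of (5-27), using the expressions (5-24) and (5-26) as reconstructed above; function names and the example parameters are illustrative only.

    import numpy as np

    def a_coeff(l, p, M):
        """a_l of eq. (5-24)."""
        prod = float(np.prod([1.0 - p[t] for t in range(l)])) if l > 0 else 1.0
        if prod == 0.0:                       # some p_t = 1 for t < l
            return 0.0
        return min(1.0, (M - 1) * p[l] * prod)

    def b_coeff(j, p, M):
        """b_j of eq. (5-26)."""
        return float(np.prod([(1.0 - p[l]) ** (M - 1) for l in range(j)]))

    p = [0.1, 0.15, 0.25]
    print([a_coeff(l, p, M=6) for l in range(3)], b_coeff(3, p, M=6))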

5.2.2 Union Bound for Max-log-MAP Decoding

Let u and c denote an information bit sequence and the corresponding codeword generated by a nonrecursive convolutional code C: u -> c, where u = (u_0, u_1, ..., u_i, ...), c = (c_0, c_1, ..., c_i, ...), and u_i, c_i \in \{0, 1\} are the information bits and coded bits, respectively. Correspondingly, y_i is the received BPSK signal (i.e., x_i = 1 - 2c_i in (4-1)) at the decoder. Under the assumption that the all-zero sequence is transmitted, the extrinsic information generated by the max-log-MAP decoder in LLR form is given by

    \xi_k^{(j)} = \max_{(u,c) \in C^+} \{ -\Gamma_{u,c}^{(j)} \} + \min_{(u,c) \in C^-} \{ \Gamma_{u,c}^{(j)} \},        (5-28)

where C^+ and C^- are the sets of all codeword pairs (u, c) that give the decision u_k = 0 and u_k = 1, respectively, and \Gamma_{u,c}^{(j)} is the error event metric for (u, c) in the jth iteration, defined as

    \Gamma_{u,c}^{(j)} = \sum_{i \in \{i: u_i = 1\}, i \neq k} \eta_i^{(j)} + \sum_{i \in \{i: c_i = 1\}} L_c y_i.        (5-29)

In (5-29), i \in \{i: u_i = 1\} and i \in \{i: c_i = 1\} mean taking the indices of the non-zero bits in u and c, \eta_i^{(j)} is the a priori information of the ith information bit, and

    L_c = 2 / \sigma_n^2        (5-30)









is known as the channel reliability measure. A detailed proof of (5-28) can be found in Appendix B. Note that since the all-zero codeword pair (0, 0) \in C^+ and \Gamma_{0,0}^{(j)} = 0, we have

    \max_{(u,c) \in C^+} \{ -\Gamma_{u,c}^{(j)} \} \ge \max\{0, -\Gamma_{u,c}^{(j)}\} \ge 0.        (5-31)

With (5-31), we can obtain the following union bound from (5-28) for the probability that \xi_k^{(j)} is smaller than an arbitrary value x:

    P(\xi_k^{(j)} < x) = P\left( \max_{(u,c) \in C^+} \{ -\Gamma_{u,c}^{(j)} \} + \min_{(u,c) \in C^-} \{ \Gamma_{u,c}^{(j)} \} < x \right)
        \le P\left( \min_{(u,c) \in C^-} \Gamma_{u,c}^{(j)} < x \right)
        \le \sum_{(u,c) \in C^-} P(\Gamma_{u,c}^{(j)} < x),        (5-32)

where K_c is the number of input bits per trellis state transition.

Now, let d = w(c) and w = w(u) denote the Hamming weights of the codeword c and the corresponding information bit sequence u, respectively. Since the error event metric \Gamma_{u,c}^{(j)} in (5-29) does not depend on the codeword pattern and k, but only on the weights w and d, we can rewrite the metric as

    \Gamma_{w,d}^{(j)} = \sum_{i=1}^{w-1} \eta_i^{(j)} + \sum_{i=0}^{d-1} L_c y_i.        (5-33)

Since the statistical properties of \Gamma_{w,d}^{(j)} and the probabilities in (5-33) can be regarded as independent of the error event starting position for a large coding block size [14, 34], the union bound is valid for an arbitrary data bit. Thus, we have dropped the subscript k in \xi_k^{(j)} for clarity. Also, we have indexed the other w-1 non-zero bits u_i in u as i = 1, 2, ..., w-1 in (5-33) without loss of generality.

Thus, by using (5-33) and dropping the subscript k, the union bound in (5-32) can be written as

    P(\xi^{(j)} < x) \le \sum_{d \ge d_{min}} \sum_{w \ge 1} w A_{w,d} \, P(\Gamma_{w,d}^{(j)} < x),        (5-34)

where d_{min} is the minimum Hamming distance of the code C, and A_{w,d} is the number of error events with Hamming weight d and input weight w. Eq. (5-34) is a generalized union bound for max-log-MAP decoding. The well-known union bound for maximum-likelihood decoding is a special case of (5-34) with x = 0 and the a priori information equal to 0.
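The bound (5-34) requires the distance spectrum A_{w,d}. A minimal sketch enumerating error events of the rate-1/2 nonrecursive code CC(5,7) used later in this chapter (generator taps 101 and 111) is given below; it performs a depth-limited search over trellis paths that leave and re-merge with the all-zero state, and is only an illustration of how such a table can be produced.

    from collections import defaultdict

    def conv_spectrum(d_max, w_max):
        """Distance spectrum A_{w,d} of CC(5,7): counts of error events with
        input weight w and output Hamming weight d, up to d_max and w_max."""
        def step(state, u):
            s1, s2 = state
            c0 = u ^ s2            # generator 5 (octal) = 101
            c1 = u ^ s1 ^ s2       # generator 7 (octal) = 111
            return (u, s1), c0 + c1
        A = defaultdict(int)
        start_state, start_d = step((0, 0), 1)   # error events begin with input 1
        stack = [(start_state, 1, start_d)]
        while stack:
            state, w, d = stack.pop()
            if d > d_max or w > w_max:
                continue
            for u in (0, 1):
                nxt, dd = step(state, u)
                if nxt == (0, 0):
                    if u == 0 and d + dd <= d_max:   # re-merged with the all-zero path
                        A[(w, d + dd)] += 1
                    # u == 1 can never lead back to state (0, 0)
                else:
                    stack.append((nxt, w + u, d + dd))
        return A

    A = conv_spectrum(d_max=10, w_max=8)
    print(sorted(A.items()))   # d_min = 5 with a single input-weight-1 event, etc.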

5.2.3 Applying Max-log-MAP Decoding Union Bound to Collaborative
Decoding

To apply the generalized union bound in (5-34) to collaborative decoding, the crucial step is the evaluation of the probability P(\Gamma_{w,d}^{(j)} < x). Thus we study the error event metric in (5-33). From (5-12), we know that in the jth decoding iteration not all of the w-1 non-zero information bits in u (the non-zero bit corresponding to the bit of interest is excluded) obtain a priori information. Among those bits that do obtain a priori information, some obtain \tilde{\lambda}^{(l)} while the others obtain \hat{\lambda}^{(l)} for some l < j. For convenience, we use A_l to denote the set of bits, among the w-1 non-zero bits of u, that obtain additional information \Lambda^{(l)} in the lth iteration, for l < j. This means that in the lth iteration the event R_i^{(l)} defined in (6-10) occurs only for i \in A_l (we do not distinguish a bit and its index here for convenience) among the w-1 non-zero information bits of u. Further, we define \tilde{A}_l as the subset of A_l for which the event \tilde{R}_i^{(l)} occurs, which means that the information bits in \tilde{A}_l obtain the additional information \tilde{\lambda}_i^{(l)} in the lth iteration. Also, we define \hat{A}_l as the complementary subset of A_l for which the event \hat{R}_i^{(l)} occurs, i.e., those bits obtain the additional information \hat{\lambda}_i^{(l)} in the lth iteration. Note that, since no information can be exchanged for a non-candidate bit, we have A_l \cap A_k = \emptyset for l \neq k. With these notations, the above event can be expressed as

    \mathcal{V}_j = \bigcap_{l=0}^{j-1} \left\{ \bigcap_{i \in \tilde{A}_l} \tilde{R}_i^{(l)}, \; \bigcap_{i \in \hat{A}_l} \hat{R}_i^{(l)} \right\} \cap \bigcap_{i \in B_j} B_i^{(j)},        (5-35)









where B_j is the set of the w-1 non-zero bits of u not contained in \bigcup_{l=0}^{j-1} A_l, i.e., the bits for which no information exchange has occurred in the previous iterations. The set B_j contains all the non-zero candidate bits left for the jth decoding iteration. From (5-33), the error event metric associated with the event \mathcal{V}_j can be written as

    \Gamma_{w,d}^{(j)} = \sum_{l=0}^{j-1} \left( \sum_{i \in \tilde{A}_l} \tilde{\lambda}_i^{(l)} + \sum_{i \in \hat{A}_l} \hat{\lambda}_i^{(l)} \right) + Y_d,        (5-36)

where

    Y_d = \sum_{i=0}^{d-1} L_c y_i.        (5-37)

From (4-1) we know that Y_d \sim N(d L_c, 2 d L_c).

Since in iteration l the extrinsic information \{\xi_i^{(l)}\} are i.i.d. for all i, the statistical characteristics of \Gamma_{w,d}^{(j)} in (5-36) and the probability of \mathcal{V}_j depend only on the sizes of A_l and \tilde{A}_l for l < j, given the statistical knowledge of the extrinsic information, and not on the particular choices of the bit sets. Let

    |A_l| = m_l, and |\tilde{A}_l| = n_l,        (5-38)

with 0 \le n_l \le m_l. Since the events \tilde{R}_i^{(l)} and \hat{R}_i^{(l)} are disjoint, we know that \tilde{A}_l \cap \hat{A}_l = \emptyset. Since A_l = \tilde{A}_l \cup \hat{A}_l, we have

    |\hat{A}_l| = m_l - n_l,        (5-39)

which is completely determined by (5-38). Thus, to determine the statistical characteristics of \Gamma_{w,d}^{(j)} and \mathcal{V}_j, it is sufficient to specify the 2j-tuple

    V_j = \left\{ |A_l| = m_l, |\tilde{A}_l| = n_l \right\}_{l=0}^{j-1}.        (5-40)









For convenience, we use \Gamma_{w,d}^{(j)}(V_j) to denote the error event metric with a particular V_j. Then we have \Gamma_{w,d}^{(j)}(V_j) \sim N(\mu(V_j), \sigma^2(V_j)), where

    \mu(V_j) = d L_c + \sum_{l=0}^{j-1} \beta_l \mu_l, and \sigma^2(V_j) = 2 d L_c + \sum_{l=0}^{j-1} \beta_l \sigma_l^2,        (5-41)

in which

    \beta_l = m_l (M-1) - n_l.        (5-42)

Recall that in the LRB exchange scheme no information can be exchanged for a non-candidate bit, i.e., A_l \cap A_k = \emptyset for l \neq k. Thus, the value of m_l in (5-38) must satisfy

    0 \le m_l \le w_l,        (5-43)

where

    w_l = w_{l-1} - m_{l-1}, and w_0 = w - 1,        (5-44)

is the number of non-zero candidate bits left in u given that the events \{|A_t| = m_t\}_{t=0}^{l-1} occur. Based on the above arguments, the probability P(\Gamma_{w,d}^{(j)} < x) can be calculated as

    P(\Gamma_{w,d}^{(j)} < x) = \sum^{(2j)} P(\Gamma_{w,d}^{(j)}(V_j) < x, \mathcal{V}_j = \mathcal{A}(V_j)),        (5-45)

where \sum^{(2j)} means the (2j)-fold summation over all possible values of V_j, i.e.,

    \sum^{(2j)} = \sum_{m_0=0}^{w_0} \sum_{m_1=0}^{w_1} \cdots \sum_{m_{j-1}=0}^{w_{j-1}} \sum_{n_0=0}^{m_0} \sum_{n_1=0}^{m_1} \cdots \sum_{n_{j-1}=0}^{m_{j-1}},

and we use \{\mathcal{V}_j = \mathcal{A}(V_j)\} to denote the occurrence of one of the possible collections of sets \mathcal{A}(V_j) = \{A_l, \tilde{A}_l\}_{l=0}^{j-1} satisfying \{|A_l| = m_l, |\tilde{A}_l| = n_l\}_{l=0}^{j-1}. Since the 2j-tuple V_j only constrains the sizes of A_l and \tilde{A}_l for all l, A_l can be an arbitrary subset of the w_l non-zero candidate bits of u and \tilde{A}_l can be an arbitrary subset of A_l. Hence, for a given V_j, there are

    \prod_{l=0}^{j-1} \binom{w_l}{m_l} \binom{m_l}{n_l}        (5-46)







possible choices of \mathcal{A}(V_j). For all these choices, the probabilities of the event \{\mathcal{V}_j = \mathcal{A}(V_j)\} are the same. Thus, we can upper bound (5-45) by upper bounding P(\Gamma_{w,d}^{(j)}(V_j) < x, \mathcal{V}_j = \mathcal{A}(V_j)) for each particular choice of \mathcal{A}(V_j). In a manner similar to (5-17) and (5-25), we drop all the events associated with \tilde{\lambda}_i^{(l)} or \hat{\lambda}_i^{(l)} in \mathcal{V}_j and use (5-20) and (5-21) to obtain

    P(\Gamma_{w,d}^{(j)}(V_j) < x, \mathcal{V}_j = V_j)
        = \prod_{l=0}^{j-1} \binom{w_l}{m_l} \binom{m_l}{n_l} \, P(\Gamma_{w,d}^{(j)}(V_j) < x, \mathcal{V}_j = \mathcal{A}(V_j))
        \le c'(V_j) \, P(\Gamma_{w,d}^{(j)}(V_j) < x),        (5-47)

where c'(V_j) is calculated as

    c'(V_j) = \prod_{l=0}^{j-1} \binom{w_l}{m_l} \binom{m_l}{n_l} \left[ (M-1) p_l (1-p_l)^{M-2} \right]^{n_l} \left[ 1 - (1-p_l)^{M-1} \right]^{m_l - n_l} \prod_{t=0}^{l-1} (1 - p_t)^{m_l}.        (5-48)

On the other hand, we know that P(\Gamma_{w,d}^{(j)}(V_j) < x, \mathcal{V}_j = V_j) \le P(\Gamma_{w,d}^{(j)}(V_j) < x). Then the upper bound in (5-47) can be refined as

    P(\Gamma_{w,d}^{(j)}(V_j) < x, \mathcal{V}_j = V_j) \le c(V_j) \, P(\Gamma_{w,d}^{(j)}(V_j) < x),        (5-49)

where c(V_j) = \min\{1, c'(V_j)\}, and

    P(\Gamma_{w,d}^{(j)}(V_j) < x) = Q\left( \frac{\mu(V_j) - x}{\sigma(V_j)} \right),        (5-50)









with Q(\cdot) being the Gaussian Q-function. Then, by inserting (5-49) into (5-45), we have

    P(\Gamma_{w,d}^{(j)} < x) \le \sum^{(2j)} c(V_j) \, P(\Gamma_{w,d}^{(j)}(V_j) < x).        (5-51)

Combining (5-50), (5-51) and (5-34), we obtain the following upper bound:

    P(\xi^{(j)} < x) \le \sum_{d \ge d_{min}} \sum_{w \ge 1} w A_{w,d} \sum^{(2j)} c(V_j) \, Q\left( \frac{\mu(V_j) - x}{\sigma(V_j)} \right).        (5-52)

This closed-form bound is also readily applied to the probabilities P(\xi^{(j)} + \tilde{\lambda}^{(l)} < 0), P(\xi^{(j)} + \hat{\lambda}^{(l)} < 0), and P(\xi^{(j-1)} < -T_{j-1}) in (5-27) without any difficulty.
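Each term of (5-52) is a Gaussian tail. A minimal helper evaluating Q((mu(V_j) - x)/sigma(V_j)) from d, L_c and the per-iteration statistics, with beta_l = m_l(M-1) - n_l as in (5-41)-(5-42), is sketched below; the numbers in the example are hypothetical and purely for illustration.

    import math

    def q_func(z):
        """Gaussian Q-function."""
        return 0.5 * math.erfc(z / math.sqrt(2.0))

    def term_bound(d, Lc, m, n, mu, sig2, M, x=0.0):
        """Q((mu(V_j) - x) / sigma(V_j)) for one tuple V_j, per (5-41)-(5-42)."""
        beta = [m[l] * (M - 1) - n[l] for l in range(len(m))]
        mean = d * Lc + sum(b * u for b, u in zip(beta, mu))
        var = 2 * d * Lc + sum(b * s for b, s in zip(beta, sig2))
        return q_func((mean - x) / math.sqrt(var))

    # Hypothetical numbers, not taken from the text:
    print(term_bound(d=5, Lc=2.0, m=[1], n=[0], mu=[4.0], sig2=[8.0], M=6))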
Now, we only have P(\xi^{(j)} < 0, \xi^{(j-1)} > T_{j-1}) left to evaluate in (5-27). The difficulty here is the correlation between \xi^{(j)} and \xi^{(j-1)}. To unveil this correlation, we consider the extrinsic information expression given in (5-28). Let

    (u, c)_{opt}^{+} = \arg\max_{(u,c) \in C^+} \{ -\Gamma_{u,c}^{(j)} \}

denote the optimal decoding sequence found by the decoder in C^+, and let (u, c)_{opt}^{-} denote the optimal decoding sequence in C^-. According to max-log-MAP decoding, the final survivor sequence (u, c)_{opt} is chosen between (u, c)_{opt}^{+} and (u, c)_{opt}^{-}. If (u, c)_{opt}^{+} is not selected to be the survivor sequence, it becomes the competing sequence. We assume the code is good enough that, when the SNR is not too low, the decoder can at least find the correct codeword as the competing sequence if it is not selected as the survivor sequence. This assumption is the same as that used in [33]. Thus, under the assumption that the all-zero sequence (0, 0) is transmitted, we have (u, c)_{opt}^{+} = (0, 0) since (0, 0) \in C^+. That is,

    \max_{(u,c) \in C^+} \{ -\Gamma_{u,c}^{(j)} \} = 0.

With the above argument, we can rewrite (5-28), by dropping the first term, as

    \xi^{(j)} \approx \min_{(u,c) \in C^-} \{ \Gamma_{u,c}^{(j)} \}

when the SNR is high. Thus, for j > 0 we have

    P(\xi^{(j)} < 0, \xi^{(j-1)} > T_{j-1})
        \le P\left( \min_{(u,c) \in C^-} \{\Gamma_{u,c}^{(j)}\} < 0, \min_{(u,c) \in C^-} \{\Gamma_{u,c}^{(j-1)}\} > T_{j-1} \right)
        \le P\left( \bigcup_{(u,c) \in C^-} \left\{ \Gamma_{u,c}^{(j)} < 0, \bigcap_{(u',c') \in C^-} \Gamma_{u',c'}^{(j-1)} > T_{j-1} \right\} \right)
        \le \sum_{(u,c) \in C^-} P\left( \Gamma_{u,c}^{(j)} < 0, \Gamma_{u,c}^{(j-1)} > T_{j-1} \right).        (5-53)

Following the derivation from (5-33) through (5-52), we then obtain

    P(\xi^{(j)} < 0, \xi^{(j-1)} > T_{j-1}) \le \sum_{d \ge d_{min}} \sum_{w \ge 1} w A_{w,d} \sum^{(2j)} c(V_j) \, P\left( \Gamma_{w,d}^{(j)}(V_j) < 0, \Gamma_{w,d}^{(j-1)}(V_{j-1}) > T_{j-1} \right).        (5-54)


To evaluate the probability P (Fr(,(V) < 0, j),(Vj_ ) > TI) in (5-54), we rewrite

(5-36) as


(5-55)


where


- E J-) + E !J--)


(5-56)


iEAj I


Given Vj, we know that (V,1) ~ fa(p(V_, e2 (V _i)), p ~ A(/_i _lj, a_1, i),

and (j-1) and T are independent of one another, where p(Vj), 7(Vj) and Oj are given


P(() < 0, (J-1) > Tj_
k^f k j^c >-1)i


( (V-) F ( (VJ ) + T,
'- ^ 1.} 'w~ .lJ-J+ ^


iEAj 1








in (5-41) and (5-42), respectively. Thus, we have


P\bigl(\Gamma^{(j)}_{w,d}(V_j) < 0,\ \Gamma^{(j-1)}_{w,d}(V_{j-1}) > T_{j-1}\bigr)
  = P\bigl(\Gamma^{(j-1)}_{w,d}(V_{j-1}) + \tilde{T} < 0,\ \Gamma^{(j-1)}_{w,d}(V_{j-1}) > T_{j-1}\bigr)
  = P\bigl(\Gamma^{(j-1)}_{w,d}(V_{j-1}) + \tilde{T} < 0,\ \tilde{T} < -T_{j-1}\bigr) - P\bigl(\Gamma^{(j-1)}_{w,d}(V_{j-1}) \le T_{j-1}\bigr)\, P\bigl(\tilde{T} < -T_{j-1}\bigr)
  = Q\!\left(\frac{\mu(V_{j-1}) + \tilde{\mu}_{j-1}}{\sqrt{\sigma^2(V_{j-1}) + \tilde{\sigma}^2_{j-1}}},\ \frac{\tilde{\mu}_{j-1} + T_{j-1}}{\tilde{\sigma}_{j-1}};\ \rho_{j-1}\right)
    - Q\!\left(\frac{\mu(V_{j-1}) - T_{j-1}}{\sigma(V_{j-1})}\right) Q\!\left(\frac{\tilde{\mu}_{j-1} + T_{j-1}}{\tilde{\sigma}_{j-1}}\right),    (5-57)


where the relations $\mu(V_j) = \mu(V_{j-1}) + \tilde{\mu}_{j-1}$ and $\sigma^2(V_j) = \sigma^2(V_{j-1}) + \tilde{\sigma}^2_{j-1}$
are used, $\rho_{j-1} = \tilde{\sigma}_{j-1}\big/\sqrt{\sigma^2(V_{j-1}) + \tilde{\sigma}^2_{j-1}}$, and

Q(x, y; \rho) \triangleq \frac{1}{2\pi\sqrt{1-\rho^2}} \int_x^{\infty}\!\!\int_y^{\infty} \exp\!\left(-\frac{z_1^2 - 2\rho z_1 z_2 + z_2^2}{2(1-\rho^2)}\right) dz_1\, dz_2    (5-58)

is the bivariate Gaussian Q-function, for which [35] gives a simplified expression for
numerical evaluation. To this point, we have upper bounded the BER $P_b^{(j)}$ for the
cases of $M \ge 3$.
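As an illustration, the Python sketch below evaluates the bivariate Gaussian Q-function of (5-58) numerically and uses it for a probability of the form P(Gamma + T < 0, T < -thr) with independent Gaussian Gamma and T, which is the type of term appearing in (5-57). The parameter values are illustrative only, and the Monte Carlo run is included purely as a cross-check of the computation.

# Minimal sketch: bivariate Gaussian Q-function and a joint probability of the kind
# appearing in (5-57).  All numerical parameters are illustrative.
import numpy as np
from scipy.stats import multivariate_normal

def biv_q(x: float, y: float, rho: float) -> float:
    """Q(x, y; rho) = P(Z1 > x, Z2 > y) for a standard bivariate normal with correlation rho."""
    cov = [[1.0, rho], [rho, 1.0]]
    # By symmetry of the zero-mean Gaussian, P(Z1 > x, Z2 > y) = P(Z1 < -x, Z2 < -y).
    return float(multivariate_normal(mean=[0.0, 0.0], cov=cov).cdf([-x, -y]))

def joint_prob(mu, sig, mu_t, sig_t, thr):
    """P(Gamma + T < 0, T < -thr) with Gamma ~ N(mu, sig^2), T ~ N(mu_t, sig_t^2), independent."""
    s = np.hypot(sig, sig_t)          # standard deviation of Gamma + T
    rho = sig_t / s                   # correlation between Gamma + T and T
    return biv_q((mu + mu_t) / s, (mu_t + thr) / sig_t, rho)

# Illustrative parameters and a Monte Carlo cross-check.
mu, sig, mu_t, sig_t, thr = 4.0, 2.0, 1.0, 1.5, 2.0
rng = np.random.default_rng(0)
g = rng.normal(mu, sig, 10**6)
t = rng.normal(mu_t, sig_t, 10**6)
print(joint_prob(mu, sig, mu_t, sig_t, thr), np.mean((g + t < 0) & (t < -thr)))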
5.2.4 BER Upper Bound for M = 2
For the case of M = 2, we note that the additional information defined in
(5-3) becomes 0 for all $k$, $i$ and $j$, and the event $\mathcal{R}_{k,i}^{(j)}$ defined in (5-4) reduces to

\mathcal{R}_{k,i}^{(j)} = \bigl\{\xi_{k,i}^{(j)} \le T_j\bigr\}.    (5-59)


Due to these differences, it is necessary to make some modifications to the previous
analysis to obtain a tight bound for this case. Again, we consider the error perfor-
mance at the Mth node. Following the notation in Section 5.2.1, the inequality in









(5-16) becomes

P\bigl(\xi^{(j)} + \lambda^{(l)} < 0,\ \mathcal{R}^{(l)}, \mathcal{B}^{(l)}\bigr)
  \le P\bigl(\xi^{(j)} < 0,\ \mathcal{R}^{(l)}, \mathcal{B}^{(l)}\bigr) + P\bigl(\xi^{(j)} + \lambda^{(l)} < 0,\ \bar{\mathcal{R}}^{(l)}, \mathcal{B}^{(l)}\bigr)
  \le P\bigl(\xi^{(j)} < 0,\ \mathcal{R}^{(l)}, \mathcal{B}^{(l)}\bigr) + P\bigl(\xi^{(j)} + \lambda^{(l)} < 0\bigr).    (5-60)


Analogous to (5-17) and (5-25), we upper bound the first term in (5-60) as

P\bigl(\xi^{(j)} < 0,\ \mathcal{R}^{(l)}, \mathcal{B}^{(l)}\bigr) \le a_l \bigl[P(\xi^{(l)} < -T_l) + P(\xi^{(j)} < 0,\ \xi^{(l)} > T_l)\bigr],    (5-61)


where $a_l$ is the same as in (5-24). Thus, by substituting (5-16) with (5-60), the upper
bound on $P_b^{(j)}$ in (5-27) becomes

P_b^{(j)} \le \sum_{l=0}^{j-1} a_l \bigl[P(\xi^{(l)} < -T_l) + P(\xi^{(j)} < 0,\ \xi^{(l)} > T_l)\bigr] + \sum_{l=0}^{j-1} P\bigl(\xi^{(j)} + \lambda^{(l)} < 0\bigr)
  + b_j \bigl[P(\xi^{(j-1)} < -T_{j-1}) + P(\xi^{(j)} < 0,\ \xi^{(j-1)} > T_{j-1})\bigr].    (5-62)

Then, similar to the case of $M \ge 3$, we apply the union bound for max-log-MAP
decoding to further upper bound the probabilities in (5-62). All the derivations are the
same as in Section 5.2.3 except for (5-47) and (5-48). Due to the change in (5-59), $\xi^{(j)}$
becomes independent of $\lambda^{(l)}$. Thus, for the case of M = 2, we can keep $\mathcal{R}_i^{(l)}$ for $i \in A_l$
when we drop all the events associated with $\lambda_i^{(l)}$ in the derivation of (5-47). With
this modification, $c'(V_j)$ in (5-48) becomes

c'(V_j) = \prod_{l=0}^{j-1} \binom{w}{w_l}\binom{m_l}{w_l} \prod_{t=0}^{l-1} (1-p_t)^{w_t}.    (5-63)













Figure 5-5: Comparison of the proposed bounds and simulation results for the cases of
M = 2 and 6 on AWGN channels, where CC(5, 7) and {p_j} = {0.1, 0.15, 0.25} are
used. (BER versus Eb/N0 in dB; simulation points for M = 1, 2 and 6, bounds after each
of the 3 exchanges, and the union bounds for MRC and for a single receiver.)


5.3 Numerical Results

In this section, we first present numerical results to demonstrate the tightness of

the BER upper bound developed in Section 5.2. Strictly speaking, this bound is

an approximate upper bound due to the Gaussian approximation and the semi-

analytical density evolution model. First, we set the number of iterations I to 3

(i.e., 3 exchanges and 4 decoding iterations are performed in total), and set {pj} to

{0.1,0.15, 0.25} in the collaborative decoding process. Fig. 5-5 compares the upper

bounds in each iteration with the simulation results for the cases of M = 2 and

M = 6, respectively. In the system, a non-recursive convolutional code with the

generation polynomial of [1 + D2, 1 + D + D2] is used. We denote it by CC(5, 7).

From the figure, we see that the bounds in the low BER region are very close to the

simulation results in all iterations. In the very low Eb/N0 region (i.e., the high BER

region), the bounds become loose. This is due to the nature of the union bound given

in (5-32). In the figure, we also show the union bounds for MRC. We can see that,











Figure 5-6: Comparison of the proposed bounds and simulation results in the last itera-
tion for the cases of M = 2, 3, 4, 6 and 8 on AWGN channels, where CC(15, 17) and
{p_j} = {0.1, 0.15, 0.25} are used. (BER versus Eb/N0; the union bound for MRC with
M = 8 is shown for reference.)


for M = 2, the performance of collaborative decoding with the LRB exchange scheme

is very close to that of MRC, while it is within about 2 dB of that of MRC for M = 6. This

means that most of the spatial gain can be obtained through collaborative decoding.

In Fig. 5-6, we show the results for another non-recursive convolutional code with

the generator polynomial $[1 + D^2 + D^3,\ 1 + D + D^2 + D^3]$, denoted by CC(15, 17).

The parameter {pj} is the same as that in Fig. 5-5. We compare the upper bounds

with simulation results for M = 2, 3, 4, 6, and 8, respectively. For clarity, we only

show the BER in the last iteration for each M. We note that, because the independence

assumption and the Gaussian approximation in Section 5.1 are not very accurate in the

actual decoding process for CC(15, 17) when M = 2, the BER bound falls slightly

below the simulation results. However, when M ≥ 3 the assumptions become much

closer to the actual situation. From the figure, we can see that the bounds are very

tight.
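As a side note, the labels CC(5, 7) and CC(15, 17) follow the usual convention of reading the generator-polynomial coefficients, highest power of D first, as a binary number written in octal. The small Python sketch below reproduces the labels used here under that convention; the helper function and its argument format are only for illustration.

# Octal labels for the generator polynomials used in this section, reading the tap
# coefficients from the highest power of D down to D^0.
def octal_label(powers, memory):
    """powers: set of exponents with nonzero coefficient; memory: largest exponent."""
    bits = ''.join('1' if k in powers else '0' for k in range(memory, -1, -1))
    return oct(int(bits, 2))[2:]

print(octal_label({0, 2}, 2), octal_label({0, 1, 2}, 2))        # 1+D^2, 1+D+D^2       -> 5 7
print(octal_label({0, 2, 3}, 3), octal_label({0, 1, 2, 3}, 3))  # 1+D^2+D^3, 1+D+D^2+D^3 -> 15 17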









Table 5-1: Different choices of {p_j} and the corresponding average information exchange
amount Θ with M = 8 for the rate-1/2 CC(5, 7) code. Θ is calculated relative to
the information exchange amount of MRC, Θ_MRC.

         No. of exchanges   Value of {p_j}                          Θ_j
Case 1   I = 3              {0.1, 0.15, 0.25}                       Θ_1 = 0.458 Θ_MRC
Case 2   I = 3              {0.055, 0.098, 1}                       Θ_2 = 0.466 Θ_MRC
Case 3   I = 5              {0.0405, 0.0564, 0.0897, 0.1902, 1}     Θ_3 = 0.456 Θ_MRC
Case 4   I = 1              {1}                                     Θ_4 = 0.50 Θ_MRC


With the proposed analysis tools, we can illustrate the effect of different choices

of {p_j} on the error performance in collaborative decoding by comparing the BER up-

per bounds. When comparing error performance for different collaborative decoding

processes, it is important to consider the necessary amount of information exchange

during the process. Here, we use (4-5) to calculate the average information exchange

amount Θ, and the information exchange amount of MRC, Θ_MRC, is calculated by

(4-5) for the purpose of comparison.

Below, we fix the setting of M = 8 and rate 1/2 code CC(5, 7), and compare

the 4 cases listed in Table 5-1. In Table 5-1, {pj} in case 2 is chosen such that

for M = 8, the amount of information each node sends out is almost the same in

different iterations on average. In case 3, {pj} is chosen to make each node request

information from other nodes for almost the same number of information bits in

different iterations. In case 4, {pj} = {1} means that the nodes exchange information

only once, and each node requests information from other nodes for all the information

bits. In Fig. 5-7, we show the BER bounds of each iteration for all 4 cases. From

the figure, we see that cases 2, 3 and 4 achieve the same performance in their last

iterations, and outperform case 1. In Fig. 5-8, we compare the BER bounds of

all 4 cases in their last iterations with that of MRC in the very low BER region. This

approximately shows the asymptotic performance of the systems. In the figure, we

see that in cases 2, 3 and 4, the receivers finally achieve the same error performance as

MRC, but with a much smaller amount of information exchange. This shows that with proper
















Figure 5-7: Comparison of performance for M = 8 and CC(5, 7) on AWGN channels
with different choices of {p_j} in Table 5-1. (BER bounds for each iteration of cases 1-4,
with the union bounds for MRC and for a single receiver shown for reference.)

Figure 5-8: Asymptotic performance for M = 8 and CC(5, 7) on AWGN channels
with different choices of {p_j} in Table 5-1. (Bounds for the last iterations of cases 1-4
compared with MRC and a single receiver in the very low BER region.)









choices of {p_j}, full spatial diversity can be achieved by the collaborative decoding

technique.

5.4 Summary

We have analyzed the bit error performance for collaborative decoding with

LRB exchange. A density evolution model is proposed to simplify the analysis. With

the Gaussian approximation, knowledge of the extrinsic information is obtained by sim-

ulating the proposed model over AWGN channels. Then, we derive an upper bound

for the BER of the collaborative decoding process via a generalized union bound

for the max-log-MAP decoder. Numerical results demonstrate the tightness of the

bounds. We also show that with proper parameter design, collaborative decoding

with LRB exchange can achieve the same performance as MRC at high SNRs. The

analysis provides an efficient way to evaluate the error performance of the collabora-

tive decoding system.

The analysis is based on the observation that the extrinsic information generated

in the collaborative decoding process can be well approximated by Gaussian distribu-

tions when non-recursive convolutional codes are used in the system. This advantage

makes the calculations in the analysis simple. For recursive convolutional codes, if we

can find a proper probability distribution model for the extrinsic information, then

by replacing the Gaussian approximation with the new model, our analysis can be

extended to the recursive convolutional code case. A Gaussian mixture model [36] is a

possible solution in this case.
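As a pointer in that direction, the following Python sketch fits a two-component Gaussian mixture to a set of samples with scikit-learn and reads off the component parameters; the samples are synthetic stand-ins, not extrinsic information produced by the decoder studied in this work.

# Minimal sketch of the Gaussian-mixture idea; the samples are synthetic placeholders.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(3)
samples = np.concatenate([rng.normal(2.0, 1.0, 30_000), rng.normal(9.0, 3.0, 20_000)])

gmm = GaussianMixture(n_components=2, random_state=0).fit(samples.reshape(-1, 1))
print("weights:", gmm.weights_)
print("means:  ", gmm.means_.ravel())
print("stds:   ", np.sqrt(gmm.covariances_).ravel())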















CHAPTER 6
PERFORMANCE ANALYSIS FOR COLLABORATIVE DECODING WITH
MOST-RELIABLE-BIT EXCHANGE ON AWGN AND RAYLEIGH FADING
CHANNELS

In Chapter 5, we analyzed the error performance of collaborative decoding

with LRB information exchange over an AWGN channel. The method is primarily

based on the density evolution model and the Gaussian approximation for extrinsic

information generated by nonrecursive convolutional codes in collaborative decoding.

With the statistical characteristics of the additional information, it is possible to study

the error events, hence the pairwise error probability, of any single decoding process

in collaborative decoding. Once the pairwise error probabilities can be obtained,

the error performance of the decoder is evaluated by applying the union bound of

MAP decoding as in Section 5.2.2. In this chapter, we extend this method to the

scenario of collaborative decoding with the MRB information exchange scheme, when

nonrecursive convolutional codes are used.

Similar to the case of LRB, on the AWGN channel we still apply the Gaussian approxi-

mation to the extrinsic information generated by the nonrecursive convolutional codes

in MRB. However, for independent Rayleigh fading channels, the density function of

extrinsic information exhibits an apparent asymmetric property, especially in the middle

to high SNR region. Hence, the Gaussian approximation used in the AWGN channel

analysis is no longer valid for the case of independent Rayleigh fading

channels. Fortunately, from simulation we find that in this case the statistical char-

acteristics of the extrinsic information generated by the nonrecursive convolutional codes

in collaborative decoding can be well approximated by a set of generalized asymmetric

Laplace distributions [37]. This approximate parametric description makes it possible









to describe the statistical behavior of the extrinsic information by a small set of para-

meters. Hence, the density evolution model can be extended to the case of independent

Rayleigh fading channels for collaborative decoding.
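For concreteness, a minimal Python sketch of a three-parameter asymmetric Laplace density is given below. It is the simplest member of the generalized asymmetric Laplace family of [37] (the generalized form carries an additional shape parameter), and the location, scale, and asymmetry values used here are illustrative, not fitted to decoder output.

# Three-parameter asymmetric Laplace density: location m, scale s, asymmetry kappa.
import numpy as np

def asym_laplace_pdf(x, m=0.0, s=1.0, kappa=1.0):
    """Skewed Laplace density: different exponential decay rates on each side of m."""
    x = np.asarray(x, dtype=float)
    z = (x - m) / s
    coef = kappa / (s * (1.0 + kappa ** 2))
    return coef * np.where(z >= 0.0, np.exp(-kappa * z), np.exp(z / kappa))

# kappa < 1 puts more mass to the right of m (a right-skewed reliability histogram).
xs = np.linspace(-5.0, 15.0, 7)
print(asym_laplace_pdf(xs, m=2.0, s=1.5, kappa=0.5))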

With proper statistical approximations and the density evolution model, the major

work of the performance analysis for collaborative decoding with MRB exchange be-

comes the evaluation of the pairwise error probabilities (PEP) in the MAP decoding

process. Different from LRB, due to the nature of the MRB information exchange

scheme, the additional information at each decoder involves a sum of truncated extrinsic

information from other decoders, and the number of such truncated extrinsic infor-

mation terms is also random. This makes the analysis of the PEP in MRB much

more complicated than that in LRB. In this chapter, we primarily rely on upper-

bound techniques and combinatorial theory to derive the error probabilities. Laplace

transform and saddle point approximation techniques based on moment generating

functions are the major tools used in evaluating the upper bounds.

The system model we study in this chapter is described in Section 4.1. The

collaborative decoding with MRB exchange is described in Section 4.2.3. We only

consider the performance analysis of nonrecursive convolutional codes in this chapter.

The remainder of this chapter is arranged as follows. In Section 6.1, we describe

the Gaussian and the generalized asymmetric Laplace approximations of the extrinsic

information in the collaborative decoding for the AWGN and independent Rayleigh

fading channel models, respectively. In Section 6.2, a density evolution model is devel-

oped to evaluate statistical parameters for the extrinsic information. In Section 6.3,

a uniform upper bound on the bit error rate (BER) is provided in terms of probabilities

involving the extrinsic information in the current iteration. In Section 6.4 we further

study the error event behaviors in MAP decoding with the effect of MRB information

exchange, and develop some upper bounds of the PEP involved in the BER bound.

In Section 6.5, we address the numerical evaluation for the BER upper bound with









the statistical knowledge of extrinsic information for the AWGN and independent

Rayleigh fading channel models, respectively. Then, numerical results are presented

in Section 6.6. Finally, a summary is drawn in Section 6.7.

6.1 Statistical Approximation for Extrinsic Information

From the procedure of the MRB exchange scheme described in Section 4.2.3,

we know that all nodes in the collaborative decoding process are symmetric. This

implies that the statistical characteristics of the extrinsic information at all nodes

are the same. Thus, we only need to consider a single node, for example, the Mth

node. In order to study the extrinsic information generated by the MAP decoder in

collaborative decoding, we will, without loss of generality, assume that the all-zero

codeword is transmitted through the channel in the following. Under this assumption, we

seek to find the probability distribution of the extrinsic information for each data bit.

Our performance analysis will be based on the knowledge of statistical characteristics

of the extrinsic information.

As we know, the extrinsic information is generated by finding the minimum (or

maximum) in a large (theoretically infinite) set of sequence metrics in the MAP de-

coding process. The sequence metrics are non-identically distributed and dependent

on one another. From extreme value theory, the closed-form distribution of the

minimum (or maximum) does not exist for dependent non-identical random variables

in general. In fact, even the type of the distribution of the extrinsic information generated

by a MAP decoder is very difficult to find analytically.
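The following toy Monte Carlo sketch (in Python) illustrates this point with made-up parameters: it draws many correlated, non-identically distributed Gaussian "sequence metrics", takes their minimum as a stand-in for the soft output, and shows that the resulting empirical distribution is skewed and has no obvious closed form. It is not a model of the actual decoder.

# Toy illustration: minimum of correlated, non-identically distributed Gaussian metrics.
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
n_trials, n_metrics = 100_000, 64
means = np.linspace(4.0, 12.0, n_metrics)                  # non-identical means
shared = rng.normal(0.0, 1.0, size=(n_trials, 1))          # common component -> correlation
own = rng.normal(0.0, 1.0, size=(n_trials, n_metrics))     # per-metric component
metrics = means + 0.6 * shared + 0.8 * own
min_metric = metrics.min(axis=1)                           # stands in for the soft output

print("mean/std:", min_metric.mean(), min_metric.std())
print("skewness:", skew(min_metric))                       # noticeably non-zero in general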

In this case, a feasible approach to simplify the study of the MAP decoding

process is to employ approximate models to describe the distribution of the extrinsic

information. By using a simple distribution that fits the observed histogram of ex-

trinsic information obtained from simulations, the statistical behavior of the decoder can

be quantified and studied in a semi-analytic way. This approach has been successfully

applied to the study of iterative decoding process for many codes such as turbo codes









and low-density parity check (LDPC) codes in [30, 31, 38, 39]. Analysis with this ap-

proximation approach turns out to give reasonably accurate results in the study of

convergence property for turbo codes and LDPC codes. Based on this approximation,

effective convergence analysis techniques such as the density evolution model and ex-

trinsic information transfer (EXIT) chart have also been developed. These techniques have

been widely used in the analysis of many iterative decoding, detection and equaliza-

tion algorithms. Following this idea, we also use an empirical approximation to avoid the

analytically unsolvable problem of finding the distribution of the extrinsic information

in MRB.

Generic distribution fitting or estimation is a well studied topic in statistics.

There are many techniques available for learning distribution of the extrinsic infor-

mation in collaborative decoding. Techniques such as bootstrap sampling and mixture

models can usually fit any distribution very well. However, these techniques usually

belong to nonparametric methods or parametric methods with a great number of

parameters. Representing the extrinsic information by such kinds of distributions

usually provides very little benefit to the analysis of error events associated with the

extrinsic information generated in the decoding process. Hence, we consider using an-

alytic distributions with only a small number of parameters to approximate the statistical

characteristics of the extrinsic information.

6.1.1 AWGN Channel

For AWGN channels, it is a well-known observation that the extrinsic information of

information bits generated by a MAP decoder for convolutional codes and turbo codes

can be well approximated by independent Gaussian random variables. Although the

error events determining the decoding performance become much more complicated

than those in a regular iterative decoding process, the Gaussian-like property of the

extrinsic information still persists in collaborative decoding with MRB information

exchange for nonrecursive convolutional codes.
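As a minimal illustration of this Gaussian approximation, the Python sketch below matches a normal density to a set of samples by their sample mean and variance, and compares the empirical and approximated probabilities of a negative value. The samples are synthetic stand-ins for simulated extrinsic information, not output of the decoder studied here.

# Moment-matched Gaussian fit to (synthetic) extrinsic-information samples.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)
llr_samples = rng.normal(6.0, 3.0, size=50_000)       # stand-in for simulated extrinsic LLRs

mu_hat, sigma_hat = llr_samples.mean(), llr_samples.std(ddof=1)
approx = norm(loc=mu_hat, scale=sigma_hat)

# Probability of a negative LLR, the event that drives the bit error probability
# under the all-zero-codeword assumption.
print("empirical P(LLR < 0):", np.mean(llr_samples < 0.0))
print("Gaussian  P(LLR < 0):", approx.cdf(0.0))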