Recursive principal components analysis using eigenvector matrix perturbation
Permanent Link: http://ufdc.ufl.edu/AA00008934/00001
 Material Information
Title: Recursive principal components analysis using eigenvector matrix perturbation
Series Title: EURASIP Journal on Applied Signal Processing
Physical Description: Archival
Language: English
Creator: Erdogmus, Deniz
Rao, Yadunandana N.
Peddaneni, Hemanth
Hegde, Anant
Principe, Jose C.
Publisher: BioMed Central
Hindawi Publishing Corporation
Publication Date: 2004
 Notes
Abstract: Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
General Note: Publication of this article was funded in part by the University of Florida Open-Access Publishing Fund. In addition, requestors receiving funding through the UFOAP project are expected to submit a post-review, final draft of the article to UF's institutional repository, IR@UF (www.uflib.ufl.edu/ufir), at the time of funding. The Institutional Repository at the University of Florida (IR@UF) is the digital archive for the intellectual output of the University of Florida community, with research, news, outreach, and educational materials.
 Record Information
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution.
Resource Identifier: doi - 1687-6180-2004-263984
System ID: AA00008934:00001


Full Text


EURASIP Journal on Applied Signal Processing 2004:13, 2034–2041
© 2004 Hindawi Publishing Corporation

Recursive Principal Components Analysis Using Eigenvector Matrix Perturbation

Deniz Erdogmus
Department of Computer Science and Engineering, CSE, Oregon Graduate Institute, Oregon Health & Science University, Beaverton, OR 97006, USA
Email: deniz@cse.ogi.edu

Yadunandana N. Rao
Computational NeuroEngineering Laboratory (CNEL), Department of Electrical & Computer Engineering (ECE), University of Florida, Gainesville, FL 32611, USA
Email: yadu@cnel.ufl.edu

Hemanth Peddaneni
Computational NeuroEngineering Laboratory (CNEL), Department of Electrical & Computer Engineering (ECE), University of Florida, Gainesville, FL 32611, USA
Email: hemanth@cnel.ufl.edu

Anant Hegde
Computational NeuroEngineering Laboratory (CNEL), Department of Electrical & Computer Engineering (ECE), University of Florida, Gainesville, FL 32611, USA
Email: ahegde@cnel.ufl.edu

Jose C. Principe
Computational NeuroEngineering Laboratory (CNEL), Department of Electrical & Computer Engineering (ECE), University of Florida, Gainesville, FL 32611, USA
Email: principe@cnel.ufl.edu

Received 4 December 2003; Revised 19 March 2004; Recommended for Publication by John Sorensen

Principal components analysis is an important and well-studied subject in statistics and signal processing. The literature has an abundance of algorithms for solving this problem, where most of these algorithms could be grouped into one of the following three approaches: adaptation based on Hebbian updates and deflation, optimization of a second-order statistical criterion (like reconstruction error or output variance), and fixed-point update rules with deflation. In this paper, we take a completely different approach that avoids deflation and the optimization of a cost function using gradients. The proposed method updates the eigenvector and eigenvalue matrices simultaneously with every new sample such that the estimates approximately track their true values as would be calculated from the current sample estimate of the data covariance matrix. The performance of this algorithm is compared with that of traditional methods like Sanger's rule and APEX, as well as a structurally similar matrix perturbation-based method.
Keywords and phrases: PCA, recursive algorithm, rank-one matrix update.

1. INTRODUCTION

Principal components analysis (PCA) is a well-known statistical technique that has been widely applied to solve important signal processing problems like feature extraction, signal estimation, detection, and speech separation [1, 2, 3, 4]. Many analytical techniques exist, which can solve PCA once the entire input data is known [5]. However, most of the analytical methods require extensive matrix operations and hence they are unsuited for real-time applications. Further, in many applications such as direction of arrival (DOA) tracking, adaptive subspace estimation, and so forth, signal statistics change over time, rendering the block methods virtually unacceptable. In such cases, fast, adaptive, on-line solutions are desirable. The majority of the existing algorithms for PCA are based on standard gradient procedures [2, 3, 6, 7, 8, 9], which are extremely slow converging, and their performance depends heavily on the step-sizes used. To alleviate this,


subspace methods have been explored [10, 11, 12]. However, many of these subspace techniques are computationally intensive. The recently proposed fixed-point PCA algorithm [13] showed fast convergence with little or no change in complexity compared with gradient methods. However, this method and most of the existing methods in the literature rely on using the standard deflation technique, which brings in sequential convergence of principal components that potentially reduces the overall speed of convergence. We recently explored a simultaneous principal component extraction algorithm called SIPEX [14], which reduced the gradient search only to the space of orthonormal matrices by using Givens rotations. Although SIPEX resulted in fast and simultaneous convergence of all principal components, the algorithm suffered from high computational complexity due to the involved trigonometric function evaluations. A recently proposed alternative approach suggested iterating the eigenvector estimates using a first-order matrix perturbation formalism for the sample covariance estimate with every new sample obtained in real time [15]. However, the performance (speed and accuracy) of this algorithm is hindered by the general Toeplitz structure of the perturbed covariance matrix. In this paper, we will present an algorithm that undertakes a similar perturbation approach, but in contrast, the covariance matrix will be decomposed into its eigenvectors and eigenvalues at all times, which will reduce the perturbation step to be employed on the diagonal eigenvalue matrix. This further restriction of structure, as expected, alleviates the difficulties encountered in the operation of the previous first-order perturbation algorithm, resulting in a fast converging and accurate subspace tracking algorithm.

This paper is organized as follows. First, we present a brief definition of the PCA problem to have a self-contained paper. Second, the proposed recursive PCA (RPCA) algorithm is motivated, derived, and extended to nonstationary and complex-valued signal situations. Next, a set of computer experiments is presented to demonstrate the convergence speed and accuracy characteristics of RPCA. Finally, we conclude the paper with remarks and observations about the algorithm.

2. PROBLEM DEFINITION

PCA is a well-known problem and is extensively studied in the literature, as we have pointed out in the introduction. However, for the sake of completeness, we will provide a brief definition of the problem in this section. For simplicity, and without loss of generality, we will consider a real-valued zero-mean, n-dimensional random vector x and its n projections y_1, ..., y_n such that y_j = w_j^T x, where the w_j's are unit-norm vectors defining the projection dimensions in the n-dimensional input space. The first principal component direction is defined as the solution to the following constrained optimization problem, where R is the input covariance matrix:

w_1 = arg max_w w^T R w subject to w^T w = 1. (1)

The subsequent principal components are defined by including additional constraints to the problem that enforce the orthogonality of the sought component to the previously discovered ones:

w_j = arg max_w w^T R w, s.t. w^T w = 1, w^T w_l = 0, l < j. (2)
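The definitions in (1) and (2) can be checked numerically against a direct eigendecomposition. The following sketch (Python/NumPy for illustration; the 3×3 covariance is an arbitrary choice, not from the paper) verifies that the top eigenvector of R attains the constrained maximum of the output variance:

```python
import numpy as np

# Check that the constrained maximizer in (1) is the top eigenvector of R.
# Illustrative sketch; R below is an arbitrary valid covariance matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
R = A @ A.T                                # symmetric positive semidefinite

evals, evecs = np.linalg.eigh(R)           # eigenvalues in ascending order
w1 = evecs[:, -1]                          # eigenvector of the largest eigenvalue

# w1 maximizes w^T R w over unit-norm w: compare against random unit probes.
probes = rng.standard_normal((1000, 3))
probes /= np.linalg.norm(probes, axis=1, keepdims=True)
assert all(p @ R @ p <= w1 @ R @ w1 + 1e-9 for p in probes)
# The attained maximum w1^T R w1 equals the largest eigenvalue of R.
```

The same check extends to (2): the j-th constrained maximizer is the eigenvector of the j-th largest eigenvalue, which is why tracking the full eigendecomposition solves the whole PCA problem at once.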

By direct comparison, the recursive update rules for the eigenvectors and the eigenvalues are determined to be

Q_k = Q_{k−1} V_k, Λ_k = D_k. (6)

In spite of the fact that the matrix [(k − 1)Λ_{k−1} + α_k α_k^T] has a special structure much simpler than that of a general covariance matrix, determining the eigendecomposition V_k D_k V_k^T analytically is difficult. However, especially if k is large, the problem can be solved in a simpler way using a matrix perturbation analysis approach. This will be described next.

3.1. Perturbation analysis for rank-one update

When k is large, the matrix [(k − 1)Λ_{k−1} + α_k α_k^T] is strongly diagonally dominant; hence (due to the Gershgorin theorem) its eigenvalues will be close to those of the diagonal portion (k − 1)Λ_{k−1}. In addition, its eigenvectors will also be close to identity (i.e., the eigenvectors of the diagonal portion of the sum).

In summary, the problem reduces to finding the eigendecomposition of a matrix in the form (Λ + αα^T), that is, a rank-one update on a diagonal matrix Λ, using the following approximations: D = Λ + P_Λ and V = I + P_V, where P_Λ and P_V are small perturbation matrices. The eigenvalue perturbation matrix P_Λ is naturally diagonal. With these definitions, when V D V^T is expanded, we get

V D V^T = (I + P_V)(Λ + P_Λ)(I + P_V)^T
= Λ + Λ P_V^T + P_Λ + P_Λ P_V^T + P_V Λ + P_V Λ P_V^T + P_V P_Λ + P_V P_Λ P_V^T
= Λ + P_Λ + D P_V^T + P_V D + P_V Λ P_V^T + P_V P_Λ P_V^T. (7)

Equating (7) to (Λ + αα^T), and assuming that the terms P_V Λ P_V^T and P_V P_Λ P_V^T are negligible, we get

αα^T = P_Λ + D P_V^T + P_V D. (8)

The orthonormality of V brings an additional equation that characterizes P_V. Substituting V = I + P_V in V V^T = I, and assuming that P_V P_V^T ≈ 0, we have P_V = −P_V^T. Combining the fact that the eigenvector perturbation matrix P_V is antisymmetric with the fact that P_Λ and D are diagonal, the solutions for the perturbation matrices are found from (8) as follows: the ith diagonal entry of P_Λ is α_i^2, and the (i, j)th entry of P_V is α_i α_j/(λ_j + α_j^2 − λ_i − α_i^2) if j ≠ i, and 0 if j = i.

3.2. The recursive PCA algorithm

The RPCA algorithm is summarized in Algorithm 1. There are a few practical issues regarding the operation of the algorithm, which will be addressed in this subsection.
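The closed-form perturbations above admit a quick numerical sanity check. This sketch (Python/NumPy; Λ and α are arbitrary illustrative values with Λ dominant relative to αα^T) builds P_Λ and P_V from the formulas and compares V D V^T against the exact rank-one update:

```python
import numpy as np

# First-order perturbation of a rank-one update on a diagonal matrix (Sec. 3.1).
# lam and alpha are illustrative; lam must dominate alpha*alpha^T for accuracy.
lam = np.array([9.0, 5.0, 2.0])            # diagonal of Λ
alpha = np.array([0.3, 0.2, 0.1])          # rank-one vector α

P_lam = alpha**2                           # i-th diagonal entry of P_Λ is α_i²
d = lam + P_lam                            # perturbed eigenvalues D = Λ + P_Λ
denom = d[None, :] - d[:, None]            # (λ_j + α_j²) − (λ_i + α_i²)
np.fill_diagonal(denom, 1.0)               # placeholder; diagonal of P_V is 0
P_V = np.outer(alpha, alpha) / denom       # (i,j) entry: α_i α_j / denom[i,j]
np.fill_diagonal(P_V, 0.0)

V = np.eye(3) + P_V                        # approximate eigenvector matrix
approx = V @ np.diag(d) @ V.T              # reconstructs Λ + αα^T to first order
exact = np.diag(lam) + np.outer(alpha, alpha)
print(np.max(np.abs(approx - exact)))      # small residual from neglected terms
```

Note that P_V built this way is antisymmetric by construction, exactly as required by the orthonormality argument around (8), and the residual is of the order of the neglected second-order terms P_V Λ P_V^T.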
(1) Initialize Q_0 and Λ_0.
(2) At each time instant k, do the following.
(a) Get input sample x_k.
(b) Set memory depth parameter λ_k.
(c) Calculate α_k = Q_{k−1}^T x_k.
(d) Find perturbations P_V and P_Λ corresponding to (1 − λ_k)Λ_{k−1} + λ_k α_k α_k^T.
(e) Update eigenvector and eigenvalue matrices:
Q_k = Q_{k−1}(I + P_V), Λ_k = (1 − λ_k)Λ_{k−1} + P_Λ.
(f) Normalize the norms of the eigenvector estimates by Q_k = Q_k T_k, where T_k is a diagonal matrix containing the inverses of the norms of each column of Q_k.
(g) Correct the eigenvalue estimates by Λ_k = Λ_k T_k^2, where T_k^2 is a diagonal matrix containing the squared norms of the columns of Q_k.

Algorithm 1: The recursive PCA algorithm outline.

Selecting the memory depth parameter

In a stationary situation, where we would like to weight each individual sample equally, this parameter must be set to λ_k = 1/k. In this case, the recursive update for the covariance matrix is as shown in (3). In a nonstationary environment, a first-order dynamical forgetting strategy could be employed by selecting a fixed decay rate. Setting λ_k = λ corresponds to the following recursive covariance update equation:

R_k = (1 − λ)R_{k−1} + λ x_k x_k^T. (9)

Typically, in this forgetting scheme, λ ∈ (0, 1) is selected to be very small. Considering that the average memory depth of this recursion is 1/λ samples, the selection of this parameter presents a trade-off between tracking capability and estimation variance.

Initializing the eigenvectors and the eigenvalues

The natural way to initialize the eigenvector matrix Q_0 and the eigenvalue matrix Λ_0 is to use the first N_0 samples to obtain an unbiased estimate of the covariance matrix and determine its eigendecomposition (N_0 > n). The iterations in step (2) can then be applied to the following samples. This means in step (2), k = N_0 + 1, ..., N. In the stationary case (λ_k = 1/k), this means in the first few iterations of step (2) the perturbation approximations will be least accurate (compared to the subsequent iterations). This is simply due to (1 − λ_k)Λ_{k−1} + λ_k α_k α_k^T not being strongly diagonally dominant for small values of k. Compensating the errors induced in the estimations at this stage might require a large number of samples later on. This problem could be avoided if, in the iteration stage (step (2)), the index k could be started from a large initial value.

In order to achieve this without introducing any bias to the estimates, one needs to use a large number of samples in the initialization (i.e., choose a large N_0). In practice, however, this is undesirable. The alternative is to perform the initialization still using a small number of samples (i.e., a small N_0), but setting the memory depth parameter to λ_k = 1/(k + (γ − 1)N_0). This way, when the iterations start at sample k = N_0 + 1, the algorithm "thinks" that the initialization is actually performed using γN_0 samples. Therefore, from the point of view of the algorithm, the data set looks like

x_1, ..., x_{N_0}, ..., x_1, ..., x_{N_0} (repeated γ times), x_{N_0+1}, ..., x_N. (10)

The corresponding covariance estimator is then naturally biased. At the end of the iterations, the estimated covariance matrix is

R_{N,biased} = [N/(N + (γ − 1)N_0)] R_N + [(γ − 1)N_0/(N + (γ − 1)N_0)] R_{N_0}, (11)

where R_M = (1/M) Σ_{j=1}^{M} x_j x_j^T. Consequently, we conclude that the bias introduced to the estimation by tricking the algorithm can be asymptotically diminished (as N → ∞).

In practice, we actually do not want to solve an eigendecomposition problem at all. Therefore, one could simply initialize the estimated eigenvectors to identity (Q_0 = I) and the eigenvalues to the sample variances of each input entry over N_0 samples (Λ_0 = diag(R_{N_0})). We then start the iterations over the samples k = 1, ..., N and set the memory depth parameter to λ_k = 1/(k + γ). Effectively, this corresponds to the following biased (but asymptotically unbiased as N → ∞) covariance estimate:

R_{N,biased} = [N/(N + γ)] R_N + [γ/(N + γ)] Λ_0. (12)

This latter initialization strategy is utilized in all the computer experiments that are presented in the following sections.²

In the case of a forgetting covariance estimator (i.e., λ_k = λ), the initialization bias is not a problem, since its effect will diminish in accordance with the forgetting time constant anyway. Therefore, in the nonstationary case, once again, we suggest using the latter initialization strategy: Q_0 = I and Λ_0 = diag(R_{N_0}). In this case, in order to guarantee the accuracy of the first-order perturbation approximation, we need to choose the forgetting factor λ such that the ratio (1 − λ)/λ is large. Typically, a forgetting factor λ ≤ 10^−2 will yield accurate results, although if necessary, values up to λ = 10^−1 could be utilized.

² A further modification that might be installed is to use a time-varying γ value. In the experiments, we used an exponentially decaying profile for γ, γ_k = γ_0 exp(−k/τ), where τ is the decay time constant. This forces the covariance estimation bias to diminish even faster.

3.3. Extension to complex-valued PCA

The extension of RPCA to complex-valued signals is trivial. Basically, all matrix-transpose operations need to be replaced by Hermitian (conjugate-transpose) operators. Below, we briefly discuss the derivation of the complex-valued RPCA algorithm following the steps of the real-valued version.
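Putting Algorithm 1 together with the Q_0 = I initialization and one consistent choice of memory depth (λ_k = 1/(k + γ), treating the initialization as γ pseudo-samples), a compact real-valued sketch could look as follows. This is an illustrative Python/NumPy rendering, not the authors' code; the test covariance, γ, and N_0 values are assumptions:

```python
import numpy as np

def rpca(X, gamma=300.0, n0=50):
    """Real-valued RPCA sketch (Algorithm 1): track the eigendecomposition of
    the sample covariance of the rows of X via rank-one perturbations."""
    N, n = X.shape
    Q = np.eye(n)                              # Q_0 = I
    lam = np.var(X[:n0], axis=0)               # Λ_0 = diag(R_{N_0})
    for k in range(1, N + 1):
        lk = 1.0 / (k + gamma)                 # memory depth λ_k (stationary)
        alpha = Q.T @ X[k - 1]                 # (c) α_k = Q_{k-1}^T x_k
        a = np.sqrt(lk) * alpha                # rank-one vector of λ_k α α^T
        d = (1.0 - lk) * lam + a * a           # (e) (1-λ_k)Λ_{k-1} + P_Λ
        denom = d[None, :] - d[:, None]        # perturbation denominators
        np.fill_diagonal(denom, 1.0)           # P_V has zero diagonal
        PV = np.outer(a, a) / denom            # (d) eigenvector perturbation
        np.fill_diagonal(PV, 0.0)
        Q = Q @ (np.eye(n) + PV)               # (e) rotate eigenvector estimates
        norms = np.linalg.norm(Q, axis=0)
        Q /= norms                             # (f) renormalize columns
        lam = d * norms**2                     # (g) correct eigenvalue estimates
    return Q, lam

# Illustrative run on synthetic data with an assumed, well-separated covariance.
rng = np.random.default_rng(1)
A = np.array([[3.0, 0, 0], [1, 2, 0], [0.5, 0.5, 1]])
C = A @ A.T
X = rng.multivariate_normal(np.zeros(3), C, size=5000)
Q, lam = rpca(X)
order = np.argsort(lam)[::-1]
true_vecs = np.linalg.eigh(C)[1][:, ::-1]      # true eigenvectors, descending
align = np.abs(np.sum(true_vecs * Q[:, order], axis=0))
print(align)                                   # near 1 when directions are tracked
```

For the complex-valued variant of Section 3.3, one would replace `.T` by conjugate transposes and use |α_i|² and α_i α_j* in the perturbation entries; the structure of the loop is unchanged.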
The sample covariance estimate for zero-mean complex data is given by

R_k = (1/k) Σ_{i=1}^{k} x_i x_i^H = [(k − 1)/k] R_{k−1} + (1/k) x_k x_k^H, (13)

where the eigendecomposition is R_k = Q_k Λ_k Q_k^H. Note that the eigenvalues are still real-valued in this case, but the eigenvectors are complex vectors. Defining α_k = Q_{k−1}^H x_k and following the same steps as in (4) to (8), we determine that P_V = −P_V^H. Therefore, as opposed to the expressions derived in Section 3.1, here the complex conjugation (*) and magnitude (|·|) operations are utilized. The ith diagonal entry of P_Λ is found to be |α_i|^2, and the (i, j)th entry of P_V is α_i α_j*/(λ_j + |α_j|^2 − λ_i − |α_i|^2) if j ≠ i, and 0 if j = i. The algorithm in Algorithm 1 is utilized as it is, except for the modifications mentioned in this section.

4. NUMERICAL EXPERIMENTS

The PCA problem is extensively studied in the literature and there exists an excessive variety of algorithms to solve this problem. Therefore, an exhaustive comparison of the proposed method with existing algorithms is not practical. Instead, a comparison with a structurally similar algorithm (which is also based on first-order matrix perturbations) will be presented [15]. We will also comment on the performances of traditional benchmark algorithms like Sanger's rule and APEX in similar setups, although no explicit detailed numerical results will be provided.

4.1. Convergence speed analysis

In the first experimental setup, the goal is to investigate the convergence speed and accuracy of the RPCA algorithm. For this, n-dimensional random vectors are drawn from a normal distribution with an arbitrary covariance matrix. In particular, the theoretical covariance matrix of the data is given by AA^T, where A is an n × n real-valued matrix whose entries are drawn from a zero-mean unit-variance Gaussian distribution. This process results in a wide range of eigenspreads (as shown in Figure 1); therefore the convergence results shown here encompass such effects.

Specifically, the results of the 3-dimensional case study are presented here, where the data is generated by 3-dimensional normal distributions with randomly selected covariance matrices. A total of 1000 simulations (Monte Carlo runs) are carried out for each of the three target eigenvector estimation accuracies (measured in terms of degrees between the estimated and actual eigenvectors): 10°, 5°, and 2°. The convergence time is measured in terms of the number


Figure 1: Distribution of eigenspread values for AA^T, where A ∈ R^{3×3} is generated to have Gaussian-distributed random entries.

of iterations it takes the algorithm to converge to the target eigenvector accuracy in all eigenvectors (not just the principal component). The histograms of convergence times (up to 10000 samples) for these three target accuracies are shown in Figure 2, where everything above 10000 is also lumped into the last bin. In these Monte Carlo runs, the initial eigenvector estimates were set to the identity matrix and the randomly selected data covariance matrices were forced to have eigenvectors such that all the initial eigenvector estimation errors were at least 25°. The initial γ value was set to 400 and the decay time constant was selected to be 50 samples. Values in this range were found to work best in terms of final accuracy and convergence speed in extensive Monte Carlo runs.

It is expected that there are some cases, especially those with high eigenspreads, which require a very large number of samples to achieve very accurate eigenvector estimations, especially for the minor components. The number of iterations required for convergence to a certain accuracy level is also expected to increase with the dimensionality of the problem. For example, in the 3-dimensional case, about 2% of the simulations failed to converge to within 10° in 10000 on-line iterations, whereas this ratio is about 17% for 5 dimensions. The failure to converge within the given number of iterations is observed for eigenspreads over 5 × 10^4.

In a similar setup, Sanger's rule achieves a mean convergence speed of 8400 iterations with a standard deviation of 2600 iterations. This results in an average eigenvector direction error of about 9° with a standard deviation of 8°. APEX, on the other hand, rarely converges to within 10°. Its average eigenvector direction error is about 30° with a standard deviation of 15°.

4.2. Comparison with first-order perturbation PCA

The first-order perturbation PCA algorithm [15] is structurally similar to the RPCA algorithm presented here. The main difference is the nature of the perturbed matrix: the former works on a perturbation approximation for the complete covariance matrix, whereas the latter considers the perturbation of a diagonal matrix. We expect this structural restriction to improve overall algorithm performance. To test this hypothesis, an experimental setup similar to the one in Section 4.1 is utilized. This time, however, the data is generated by a colored time series using a time-delay line (making the procedure a temporal PCA case study). Gaussian white noise is colored using a two-pole filter whose poles are selected from a random uniform distribution on the interval (0, 1). A set of 15 Monte Carlo simulations was run on 3-dimensional data generated according to this procedure. The two parameters of the first-order perturbation method were set to 10^−3/6.5 and 10^−2. The parameters of RPCA were set to γ_0 = 300 and τ = 100. The average eigenvector direction estimation convergence curves are shown in Figure 3.

Often, signal subspace tracking is necessary in signal processing applications dealing with nonstationary signals. To illustrate the performance of RPCA for such cases, a piecewise stationary colored noise sequence is generated by filtering white Gaussian noise with single-pole filters with the following poles: 0.5, 0.7, 0.3, 0.9 (in order of appearance). The forgetting factor is set to a constant λ = 10^−3. The two parameters of the first-order perturbation method were again set to 10^−3/6.5 and 10^−2. The results of 30 Monte Carlo runs were averaged to obtain Figure 4.
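The temporal-PCA data generation described above (white Gaussian noise colored by a two-pole filter, then tapped through a delay line) can be sketched as follows. This is an illustrative Python/NumPy reconstruction under stated assumptions: the two-pole filter is realized as an AR(2) recursion with the poles drawn uniformly from (0, 1), and the sequence lengths are arbitrary:

```python
import numpy as np

# Color white Gaussian noise with a two-pole (AR(2)) filter, then form
# 3-dimensional time-delay vectors for temporal PCA. Values are illustrative.
rng = np.random.default_rng(2)
p1, p2 = rng.uniform(0, 1, size=2)        # poles drawn uniformly from (0, 1)
a1, a2 = p1 + p2, -p1 * p2                # y[t] = a1*y[t-1] + a2*y[t-2] + w[t]

w = rng.standard_normal(20000)
y = np.zeros_like(w)
for t in range(2, len(w)):
    y[t] = a1 * y[t - 1] + a2 * y[t - 2] + w[t]

# Delay-line embedding: x_k = [y_k, y_{k-1}, y_{k-2}]
X = np.stack([y[2:], y[1:-1], y[:-2]], axis=1)
R = X.T @ X / len(X)                      # sample covariance is (near) Toeplitz
print(np.round(R, 2))
```

The resulting covariance of the delay vectors is (approximately) Toeplitz, which is exactly the structure that hinders the general first-order perturbation method [15] and that RPCA sidesteps by perturbing a diagonal matrix instead.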
4.3. Direction of arrival estimation

The use of subspace methods for DOA estimation in sensor arrays has been extensively studied (see [14] and the references therein). In Figure 5, a sample run from a computer simulation of DOA according to the experimental setup described in [14] is presented to illustrate the performance of the complex-valued RPCA algorithm. To provide a benchmark (and an upper limit in convergence speed), we also performed this simulation using Matlab's eig function several times on the sample covariance estimate. The latter typically converged to the final accuracy demonstrated here within 10 samples. The RPCA estimates, on the other hand, take a few hundred samples due to the transient in the γ value. The main difference in the application of RPCA is that a typical DOA algorithm will convert the complex PCA problem into a structured PCA problem with double the number of dimensions, whereas the RPCA algorithm works directly with the complex-valued input vectors to solve the original complex PCA problem.

4.4. An example with 20 dimensions

The numerical examples considered in the previous examples were 3-dimensional and 12-dimensional (6 dimensions in complex variables). The latter did not require all the eigenvectors to converge, since only the 6-dimensional signal subspace was necessary to estimate the source directions; hence the problem was actually easier than 12 dimensions. To demonstrate the applicability to higher-dimensional situations, an example with 20 dimensions is presented here. PCA algorithms generally cannot cope well with higher-dimensional problems because the interplay between two


Figure 2: The convergence time histograms for RPCA in the 3-dimensional case for three different target accuracy levels: (a) target error = 10°, (b) target error = 5°, and (c) target error = 2°.

Figure 3: The average eigenvector direction estimation errors, defined as the angle between the actual and the estimated eigenvectors, versus iterations are shown for the first-order perturbation method (thin dotted lines) and for RPCA (thick solid lines).

Figure 4: The average eigenvector direction estimation errors, defined as the angle between the actual and the estimated eigenvectors, versus iterations for the first-order perturbation method (thin dotted lines) and for RPCA (thick solid lines) in a piecewise stationary situation are shown. The eigenstructure of the input abruptly changes every 5000 samples.

competing structural properties of the eigenspace makes a compromise from one or the other increasingly difficult. Specifically, these two characteristics are the eigenspread (max_i λ_i / min_i λ_i) and the distribution of ratios of consecutive eigenvalues (λ_n/λ_{n−1}, ..., λ_2/λ_1, when they are ordered from largest to smallest, where λ_n > ··· > λ_1 are the ordered eigenvalues). Large eigenspreads lead to slow convergence due to the scarcity of samples representing the minor components. In small-dimensional problems, this is typically the dominant issue that controls the convergence speeds of PCA algorithms. On the other hand, as the dimensionality increases, while very large eigenspreads are still undesirable due


Figure 5: Direction of arrival estimation in a linear sensor array using complex-valued RPCA in a 3-source, 6-sensor case.

to the same reason, smaller and previously acceptable eigenspread values too become undesirable because consecutive eigenvalues approach each other. This causes the discriminability of the eigenvectors corresponding to these eigenvalues to diminish as their ratio approaches unity. Therefore, the trade-off between small and large eigenspreads becomes significantly difficult. Ideally, the ratios between consecutive eigenvalues must be identical for equal discriminability of all subspace components. Variations from this uniformity will result in faster convergence in some eigenvectors, while others will suffer from almost spherical subspaces (indiscriminability).

In Figure 6, the convergence of the 20 estimated eigenvectors to their corresponding true values is illustrated in terms of the angle between them (in degrees) versus the number of on-line iterations. The data is generated by a 20-dimensional jointly Gaussian distribution with zero mean, and a covariance matrix with eigenvalues equal to the powers (from 0 to 19) of 1.5 and eigenvectors selected randomly.³ This result is typical of higher-dimensional cases, where major components converge relatively fast and minor components take much longer (in terms of samples and iterations) to reach the same level of accuracy.

Figure 6: The convergence of the angle error between the estimated eigenvectors (using RPCA) and their corresponding true eigenvectors in a 20-dimensional PCA problem is shown versus on-line iterations.

³ This corresponds to an eigenspread of 1.5^19 ≈ 2217.

5. CONCLUSIONS

In this paper, a novel approximate fixed-point algorithm for subspace tracking is presented. The fast tracking capability is enabled by the recursive nature of the complete eigenvector matrix updates. The proposed algorithm is feasible for real-time implementation since the recursions are based on well-structured matrix multiplications that are the consequences of the rank-one perturbation updates exploited in the derivation of the algorithm. Performance comparisons with traditional algorithms as well as a structurally similar perturbation-based approach demonstrated the advantages of the recursive PCA algorithm in terms of convergence speed and accuracy.

ACKNOWLEDGMENT

This work is supported by NSF Grant ECS-0300340.

REFERENCES

[1] R. O. Duda and P. E. Hart, Pattern Classification and Scene Analysis, John Wiley & Sons, New York, NY, USA, 1973.
[2] S. Y. Kung, K. I. Diamantaras, and J. S. Taur, "Adaptive principal component extraction (APEX) and applications," IEEE Trans. Signal Processing, vol. 42, no. 5, pp. 1202, 1994.
[3] J. Mao and A. K. Jain, "Artificial neural networks for feature extraction and multivariate data projection," IEEE Transactions on Neural Networks, vol. 6, no. 2, pp. 296, 1995.
[4] Y. Cao, S. Sridharan, and A. Moody, "Multichannel speech separation by eigendecomposition and its application to co-talker interference removal," IEEE Trans. Speech and Audio Processing, vol. 5, no. 3, pp. 209, 1997.
[5] G. H. Golub and C. F. Van Loan, Matrix Computations, Johns Hopkins University Press, Baltimore, Md, USA, 1983.
[6] E. Oja, Subspace Methods for Pattern Recognition, John Wiley & Sons, New York, NY, USA, 1983.
[7] T. D. Sanger, "Optimal unsupervised learning in a single-layer linear feedforward neural network," Neural Networks, vol. 2, no. 6, pp. 459, 1989.
[8] J. Rubner and K. Schulten, "Development of feature detectors by self-organization: a network model," Biological Cybernetics, vol. 62, no. 3, pp. 193, 1990.
[9] J. Rubner and P. Tavan, "A self-organizing network for principal-component analysis," Europhysics Letters, vol. 10, no. 7, pp. 693, 1989.
[10] L. Xu, "Least mean square error reconstruction principle for self-organizing neural-nets," Neural Networks, vol. 6, no. 5, pp. 627, 1993.


[11] B. Yang, "Projection approximation subspace tracking," IEEE Trans. Signal Processing, vol. 43, no. 1, pp. 95, 1995.
[12] Y. Hua, Y. Xiang, T. Chen, K. Abed-Meraim, and Y. Miao, "Natural power method for fast subspace tracking," in Proc. IEEE Neural Networks for Signal Processing, pp. 176, Madison, Wis, USA, August 1999.
[13] Y. N. Rao and J. C. Principe, "Robust on-line principal component analysis based on a fixed-point approach," in Proc. IEEE Int. Conf. Acoustics, Speech, Signal Processing, vol. 1, pp. 981, Orlando, Fla, USA, May 2002.
[14] D. Erdogmus, Y. N. Rao, K. E. Hild II, and J. C. Principe, "Simultaneous principal-component extraction with application to adaptive blind multiuser detection," EURASIP J. Appl. Signal Process., vol. 2002, no. 12, pp. 1473, 2002.
[15] B. Champagne, "Adaptive eigendecomposition of data covariance matrices based on first-order perturbations," IEEE Trans. Signal Processing, vol. 42, no. 10, pp. 2758, 1994.

Deniz Erdogmus received his B.S. degrees in electrical engineering and mathematics in 1997, and his M.S. degree in electrical engineering, with emphasis on systems and control, in 1999, all from the Middle East Technical University, Turkey. He received his Ph.D. in electrical engineering from the University of Florida, Gainesville, in 2002. Since 1999, he has been with the Computational NeuroEngineering Laboratory, University of Florida, working with Jose Principe. His current research interests include information-theoretic aspects of adaptive signal processing and machine learning, as well as their applications to problems in communications, biomedical signal processing, and controls. He is the recipient of the IEEE SPS 2003 Young Author Award, and is a Member of IEEE, Tau Beta Pi, and Eta Kappa Nu.
Yadunandana N. Rao received his B.E. degree in electronics and communication engineering in 1997, from the University of Mysore, India, and his M.S. degree in electrical and computer engineering in 2000, from the University of Florida, Gainesville, Fla. From 2000 to 2001, he worked as a design engineer at GE Medical Systems, Wis. Since 2001, he has been working toward his Ph.D. in the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida, under the supervision of Jose C. Principe. His current research interests include design of neural analog systems, principal components analysis, and generalized SVD with applications to adaptive systems for signal processing and communications.

Hemanth Peddaneni received his B.E. degree in electronics and communication engineering from Sri Venkateswara University, Tirupati, India, in 2002. He is now pursuing his Master's degree in electrical engineering at the University of Florida. His research interests include neural networks for signal processing, adaptive signal processing, wavelet methods for time series analysis, digital filter design/implementation, and digital image processing.

Anant Hegde graduated with an M.S. degree in electrical engineering from the University of Houston, Tex. During his Master's, he worked in the Bio-Signal Analysis Laboratory (BSAL), with his research mainly focusing on understanding the production mechanisms of event-related potentials such as P50, N100, and P300. Hegde is currently pursuing his Ph.D. research in the Computational NeuroEngineering Laboratory (CNEL) at the University of Florida, Gainesville. His focus is on developing signal processing techniques for detecting asymmetric dependencies in multivariate time structures. His research interests are in EEG analysis, neural networks, and communication systems.
Jose C. Principe is a Distinguished Professor of Electrical and Computer Engineering and Biomedical Engineering at the University of Florida, where he teaches advanced signal processing, machine learning, and artificial neural networks (ANNs) modeling. He is a BellSouth Professor and the Founder and Director of the University of Florida Computational NeuroEngineering Laboratory (CNEL). His primary area of interest is processing of time-varying signals with adaptive neural models. The CNEL has been studying signal and pattern recognition principles based on information-theoretic criteria (entropy and mutual information). Dr. Principe is an IEEE Fellow. He is a Member of the ADCOM of the IEEE Signal Processing Society, a Member of the Board of Governors of the International Neural Network Society, and Editor in Chief of the IEEE Transactions on Biomedical Engineering. He is a Member of the Advisory Board of the University of Florida Brain Institute. Dr. Principe has more than 90 publications in refereed journals, 10 book chapters, and 200 conference papers. He has directed 35 Ph.D. dissertations and 45 Master's theses. He has recently written an interactive electronic book entitled Neural and Adaptive Systems: Fundamentals Through Simulation, published by John Wiley and Sons.