
Network algorithms for supply chain optimization problems

University of Florida Institutional Repository

ACKNOWLEDGMENTS

I would like to thank my dissertation chair, Panos Pardalos, for his technical advice, professionalism, encouragement, and insightful comments throughout my dissertation research. I acknowledge my committee members, Selcuk Erenguc, Joseph Geunes, Edwin Romeijn, and Max Shen, for their constructive criticism concerning the material of this dissertation. I would like to thank Edwin Romeijn for all the effort he has devoted to the supervision of this research and Joe Geunes for his time and suggestions. I also would like to express my appreciation to all of my sincere friends at the ISE Department and in Gainesville.

No words can express all my thanks to my parents, Inceser and Galip Eksioglu, my sister, Burcu, and brother, Oguz, for their love, encouragement, motivation, and eternal support. Last but not least, I would like to thank my wife, Sandra, for her patience, kindness, and continuous support throughout all my years here at the University of Florida.

TABLE OF CONTENTS

ACKNOWLEDGMENTS

LIST OF TABLES

LIST OF FIGURES

ABSTRACT

CHAPTERS

1 INTRODUCTION
  1.1 Supply Chain Management
  1.2 Global Optimization
  1.3 Goal and Summary

2 PRODUCTION PLANNING AND LOGISTICS PROBLEMS
  2.1 Single-Item Economic Lot Sizing Problem
  2.2 Production-Inventory-Distribution (PID) Problem
  2.3 Complexity of the PID Problem
  2.4 Problem Formulation
    2.4.1 Fixed Charge Network Flow Problems
    2.4.2 Piecewise-linear Concave Network Flow Problems

3 SOLUTION PROCEDURES
  3.1 Existing Solution Approaches
  3.2 Local Search
    3.2.1 History of Local Search
    3.2.2 Complexity of Local Search
    3.2.3 Local Search for the PID Problem
  3.3 Dynamic Slope Scaling Procedure (DSSP)
    3.3.1 Fixed Charge Case
    3.3.2 Piecewise-linear Concave Case
    3.3.3 Performance of DSSP on Some Special Cases
  3.4 Greedy Randomized Adaptive Search Procedure (GRASP)
    3.4.1 Construction Phase
    3.4.2 Modified Construction Phase
  3.5 Lower Bounds

4 SUBROUTINES AND COMPUTATIONAL EXPERIMENTS
  4.1 Design and Implementation of the Subroutines
  4.2 Usage of the Subroutines
  4.3 Experimental Data
  4.4 Randomly Generated Problems
    4.4.1 Problems with Fixed Charge Costs
    4.4.2 Problems with Piecewise-Linear Concave Costs
  4.5 Library Test Problems

5 CONCLUDING REMARKS AND FURTHER RESEARCH
  5.1 Summary
  5.2 Proposed Research
    5.2.1 Extension to Other Cost Structures
    5.2.2 Generating Problems with Known Optimal Solutions

REFERENCES

BIOGRAPHICAL SKETCH

LIST OF TABLES

4-1 The distribution
4-2 Problem sizes for fixed charge networks
4-3 Characteristics of test problems
4-4 Number of times the error was zero for data set A
4-5 Average errors for problems with data set E
4-6 Problem sizes for fixed charge networks with capacity constraints
4-7 Average errors for problems with production capacities
4-8 CPU times of DSSP for problems with production capacities
4-9 CPU times of CPLEX for problems with production capacities
4-10 Percentage of production arcs used in the optimal solution
4-11 Problem sizes for piecewise-linear concave networks
4-12 Summary of results for problems using data set A
4-13 Summary of results for problems using data set B
4-14 Summary of results for problems using data set C
4-15 Summary of results for problems using data set D
4-16 Summary of results for problems using data set E
4-17 Problem sizes for the extended formulation after NSP
4-18 Summary of results for problems 40 and 46 after NSP
4-19 Uncapacitated warehouse location problems from the OR library

LIST OF FIGURES

2-1 The single-item ELS model
2-2 A supply chain network with 2 facilities, 3 retailers, and 2 periods
2-3 Piecewise-linear concave production costs
2-4 Arc separation procedure
2-5 Node separation procedure
3-1 An example of a neighborhood
3-2 An example of moving to a neighboring solution
3-3 The local search procedure
3-4 Cycle detection and elimination
3-5 An example of a type I cycle
3-6 An example of a type II cycle
3-7 The DSSP algorithm
3-8 Linearization of the fixed charge cost function
3-9 Linearization of the piecewise-linear concave cost function
3-10 A bipartite network with many facilities and one retailer
3-11 A bipartite network with two facilities and two retailers
3-12 A bipartite network with multiple facilities and multiple retailers
3-13 The construction procedure
3-14 Extended network representation
4-1 Input file (gr-param.dat) for gr-pid.c
4-2 Sample data file (sample.dat) for gr-pid.c and ds-pid.c
4-3 Sample output (sample-gr.out) of gr-pid.c
4-4 Average errors for problems using data set D

4-5 Number of GRASP iterations vs. solution quality
5-1 Piecewise-concave cost functions

For many years, researchers and practitioners have concentrated on the individual processes and entities within the SC. Recently, however, many companies have realized that important cost savings can be achieved by integrating inventory control and transportation policies throughout their SC. As companies began realizing the benefits of optimizing their SC as a single entity, researchers began utilizing operations research techniques to better model SCs.

Typical models for SC design/management problems assume that the involved costs can be represented by somewhat restrictive cost functions such as linear and/or

The objective of this research is to model and solve SC optimization problems with fixed charge and piecewise-linear concave cost functions. In these problems a single item is produced at a set of facilities and distributed to a set of retailers such that the demand is met and the total production, transportation, and inventory costs are minimized over a finite planning horizon.

1.1 Supply Chain Management

Supply chain management is a field of growing interest for both companies and researchers. As nicely told in the recent book by Tayur, Ganeshan, and Magazine [71], every field has a golden age: this is the time of supply chain management. The term supply chain management (SCM) has been around for more than twenty years, and its definition varies from one enterprise to another. We define a supply chain (SC) as an integrated process where different business entities such as suppliers, manufacturers, distributors, and retailers work together to plan, coordinate, and control the flow of materials, parts, and finished goods from suppliers to customers. This chain is concerned with two distinct flows: a forward flow of materials and a backward flow of information. Geunes, Pardalos, and Romeijn [27] have edited a book that provides a recent review of SCM models and applications.

For many years, researchers and practitioners have concentrated on the individual processes and entities within the SC. Recently, however, there has been an increasing effort in the optimization of the entire SC. As companies began realizing the benefits of optimizing the SC as a single entity, researchers began utilizing operations research (OR) techniques to better model supply chains. Typically, an SC model tries to determine

Following Hax and Candea's [37] treatment of production and inventory systems, the above SC decisions can be classified in the following way:

Beamon [6] gave a summary of models in the area of multi-stage supply chain design and analysis. Erenguc, Simpson, and Vakharia [17] surveyed models integrating production and distribution planning in SCs. Thomas and Griffin [72] surveyed coordination models on strategic and operational planning.

As a result of the globalization of the economy, however, the models have become more complex. Global SC models now often try to include factors such as exchange rates, international interest rates, trade barriers, taxes and duties, market prices, and duty drawbacks. All of these factors are generally difficult to include in mathematical models because of uncertainty and nonlinearity. Vidal and Goetschalckx [78] provided a review of strategic production-distribution models with an emphasis on global SC models. Eksioglu [14] discussed some of the recent models that address the design and management of global SC networks. Cohen and Huchzermeier [10] also gave an extensive review of global SC models. They focus on the integration of SC network optimization with real options pricing methods. Most of these SC problems can be modeled as mathematical programs that are typically global optimization problems. Therefore, we next give a brief overview of the area of global optimization.

1.2 Global Optimization

The field of global optimization was initiated during the mid-1960s, mainly through the primary works of Hoang Tuy. Since then, and in particular during the last fifteen years, there has been a lot of interest in theoretical and computational investigations of challenging global optimization problems. This has resulted in the development and application of global optimization methods to important problems in science, applied science, and engineering. Exciting and intriguing theoretical findings and algorithmic developments have made global optimization one of the most attractive areas of research.

Global optimization deals with the search for a global optimum in problems where many local optima exist. The general global optimization problem is defined by Horst, Pardalos, and Thoai [43] as Definition 1.1

In contrast to the objective of global optimization, the area of local optimization aims at determining a feasible solution that is a local minimum of the objective function f in D (i.e., it is a minimum in its neighborhood, but not necessarily the lowest value of the objective function f). Therefore, in general, for nonlinear optimization problems where multiple local minima exist, a local minimum (as with any other feasible solution) represents only an upper bound on the global minimum of the objective function f on D.

In certain classes of nonlinear problems, a local solution is always a global one. For example, in a minimization problem with a convex (or quasi-convex) objective function f and a convex feasible set D, a local minimizer is a global solution (see, for instance, Avriel [4], Horst [41], Zang and Avriel [83]).

It has been shown that several important optimization problems can be formulated as concave minimization problems. A well-known result by Raghavachari [64] states that the zero-one integer programming problem is equivalent to a concave (quadratic) minimization problem over a linear set of constraints. Giannessi and Niccolucci [29] have shown that a nonlinear, nonconvex integer program can be equivalently reduced to a real concave program under the assumption that the objective function is bounded and satisfies the Lipschitz condition. Similarly, the quadratic assignment problem can be formulated as a global optimization problem (see, for instance, Bazaraa and Sherali [5]). In general, bilinear programming problems are equivalent to a kind of concave minimization problem (Konno [52]). The linear complementarity problem can be reduced to a concave problem (Mangasarian [57]), and linear min-max problems with connected variables and linear multi-step bimatrix games are reducible to a global optimization problem (Falk [18]). The above examples indicate once again the broad range of problems that can be formulated as global optimization problems, and therefore explain the increasing interest in this area.

Global optimization problems remain NP-hard even for very special cases, such as the minimization of a quadratic concave function over the unit hypercube (see, for example, Garey et al. [26], Hammer [34]), in contrast to the corresponding convex quadratic problem, which can be solved in polynomial time (Chung and Murty [9]).

Most of the optimization problems that arise in SCs are global optimization problems. These problems are of great practical interest, but they are also inherently difficult and cannot be solved by conventional nonlinear optimization methods.

Despite the fact that the majority of the challenging and important problems that arise in science, applied science, and engineering exhibit nonconvexities and hence multiple minima, there has been relatively little effort devoted to the area of global optimization as compared to the developments in the area of local optimization. This is partly attributed to the use of local optimization techniques as components of global optimization approaches, and also due to the difficulties that arise in the development of global optimization methods. However, the recent advances in this area and the explosive growth of computing capabilities show great promise towards addressing these issues.

Global optimization methods are divided into two classes: deterministic and stochastic methods. The most important deterministic approaches to nonconvex global optimization are enumerative techniques, cutting plane methods, branch and bound, solution of approximate subproblems, bilinear programming methods, or different combinations of these techniques. Specific solution approaches have been proposed for problems where the objective function has a special structure (e.g., quadratic, separable, factorable) or the feasible region has a simplified geometry (e.g., unit hypercube, network constraints).

1.3 Goal and Summary

The focus of this dissertation is to study optimization problems in supply chain operations with cost structures that arise in several real-life applications. A

The supply chain optimization problems we consider are formulated as large-scale mixed integer programming problems. The network structure inherent in such problems is utilized to develop efficient algorithms to solve them. Due to the scale and difficulty of these problems, the focus is on developing efficient heuristic methods. The objective is to develop approaches that produce optimal or near-optimal solutions to logistics problems with fixed charge and piecewise-linear concave cost structures. In this dissertation we also address the generation of experimental data for these optimization models, since the performance of a heuristic procedure is typically measured by the computation time required and the quality of the solution obtained. Conclusions about these two performance measures are drawn by testing the heuristic approaches on a collection of problems. The validity of the derived conclusions strongly depends on the characteristics of the problems chosen. Therefore, we generated several sets of problems with different characteristics.

The outline of the dissertation is as follows. In Chapter 2 we first introduce the single-item economic lot sizing problem and then discuss extensions to the basic problem to arrive at our problem, which we call the production-inventory-distribution (PID) problem. In Chapter 3 we present local search based heuristic approaches. We give a brief history of local search and present two approaches for constructing solutions to initiate our local search procedure. We discuss the complexity of the

2.1 Single-Item Economic Lot Sizing Problem

Many problems in SC optimization, such as inventory control, production planning, and capacity planning, are related to the simple economic lot sizing (ELS) problem. Harris [35] is usually cited as the first to study ELS models. He considered a model that assumes deterministic demands which occur continuously over time. In 1958, a different approach was proposed independently by Manne [58] and by Wagner and Whitin [80]. They divided time into discrete periods and assumed that the demand over a finite horizon is known in advance. In the past four decades ELS has received considerable attention, and many papers have directly or indirectly discussed this model. Aggarwal and Park [2] gave a brief review of the ELS model and its extensions.

To describe the basic single-item ELS model we will use the following notation. Demand (d_t) for the product occurs during each of T consecutive time periods. The demand during period t can be satisfied either through production in that period or from inventory that is carried forward in time. The model includes production and inventory costs, and the objective is to schedule production to satisfy demand at minimum cost. The cost of producing p_t units during period t is given by r_t(p_t), and the cost of storing I_t units of inventory from period t to period t+1 is h_t(I_t). Without loss of generality, we assume both the initial inventory and the final inventory are zero. The mathematical representation of the ELS model can now be given as

minimize \sum_{t=1}^{T} ( r_t(p_t) + h_t(I_t) )

subject to (P0)

p_t + I_{t-1} = I_t + d_t,   t = 1, ..., T,   (2.1)
I_0 = I_T = 0,   (2.2)
p_t, I_t >= 0,   t = 1, ..., T.   (2.3)

In the above formulation, the first set of constraints (2.1) requires that the sum of the inventory at the start of a period and the production during that period equals the sum of the demand during that period and the inventory at the end of the period. Constraint (2.2) simply assures that the initial and final inventories are zero, while the last set of constraints (2.3) limits production and inventory to nonnegative values.

The basic ELS problem can also be formulated as a network flow problem (NFP). This formulation was first introduced by Zangwill [84]. The network in Figure 2-1 consists of a single source node and T sink nodes. Each sink node requires an inflow of d_t (t = 1, 2, ..., T), and node D is capable of generating an outflow of \sum_{t=1}^{T} d_t. For each arc from node D to node t there is an associated cost function r_t(·) for t = 1, 2, ..., T, and for each arc from node t to node t+1 there is an associated cost function h_t(·) for t = 1, 2, ..., T-1.

Figure 2-1: The single-item ELS model.

If the cost functions r_t(·) and h_t(·) are allowed to be arbitrary functions, then the basic ELS problem is quite difficult to solve; as Florian, Lenstra and Rinnooy Kan [22]
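For the important special case of fixed-charge production costs (a setup cost s_t plus a linear unit cost c_t whenever p_t > 0) and linear holding costs h_t, the uncapacitated ELS model above can be solved exactly by a Wagner-Whitin style dynamic program over regeneration intervals, i.e., runs of periods whose demand is covered by a single setup. The sketch below is purely illustrative, with made-up data; it is not the dissertation's code.

```python
def els_wagner_whitin(d, s, c, h):
    """Uncapacitated single-item ELS with fixed-charge production costs.

    d[t]: demand, s[t]: setup cost, c[t]: unit production cost,
    h[t]: unit cost of carrying inventory from period t to t+1.
    Returns the minimum total cost over periods 0..T-1 (0-indexed).
    """
    T = len(d)
    INF = float("inf")
    best = [INF] * (T + 1)   # best[t]: min cost of covering periods 0..t-1
    best[0] = 0.0
    for t in range(T):       # period of the next setup
        if best[t] == INF:
            continue
        cost = s[t]
        for u in range(t, T):  # produce d[t..u] in period t
            # d[u] is made in period t and carried through periods t..u-1
            cost += c[t] * d[u] + sum(h[k] for k in range(t, u)) * d[u]
            best[u + 1] = min(best[u + 1], best[t] + cost)
    return best[T]

print(els_wagner_whitin([20, 30, 0, 40], [100]*4, [1]*4, [2]*4))  # 350.0
```

For this instance the optimal plan sets up in the first and fourth periods; a single setup would pay 240 in holding cost just to carry the last demand.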

We consider extensions to the basic ELS problem and include distribution decisions in the model. We also consider multiple production plants (facilities) and multiple demand points (retailers). The goal is to meet the known demand at the retailers through production at the facilities, such that the systemwide total production, inventory, and distribution cost is minimized. As mentioned earlier in Chapter 1, we refer to this problem as the PID problem.

2.2 Production-Inventory-Distribution (PID) Problem

The PID problem can be formulated as a network flow problem on a directed, single source graph consisting of several layers. Figure 2-2 gives an example with two facilities, three retailers, and two time periods. Each layer of the graph represents a time period. In each layer, a bipartite graph represents the transportation network between the facilities and the retailers. Facilities in successive time periods are connected through inventory arcs. There is a dummy source node with supply equal to the total demand. Production arcs connect the dummy source to each facility in every time period.

This is an easy problem if all costs are linear. However, many production and distribution activities exhibit economies of scale, in which the unit cost of the activity decreases as the volume of the activity increases. For example, production costs often exhibit economies of scale due to fixed production setup costs and learning effects that enable more efficient production as the volume increases. Transportation costs exhibit economies of scale due to the fixed cost of initiating a shipment and the lower per unit shipping cost as the volume delivered per shipment increases. Therefore, we assume

Figure 2-2: A supply chain network with 2 facilities, 3 retailers, and 2 periods.

the production costs at the facilities are either of the fixed charge type or the piecewise-linear concave type, the cost of transporting goods from facilities to retailers is of the fixed charge type, and the inventory costs are linear. We also make the following simplifying assumptions in the model:

The first three assumptions can easily be relaxed by adding extra arcs in the network, and the last assumption is justified since the following result by Wagner [79] shows that a network flow problem with capacity constraints can be transformed into a network flow problem without capacity constraints.
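The layered network just described is straightforward to materialize as an arc list. The sketch below uses illustrative node naming (not the dissertation's data structures) to create the dummy source, production, transportation, and inventory arcs for arbitrary J facilities, K retailers, and T periods:

```python
def build_pid_network(J, K, T):
    """Arc list for the layered PID network: dummy source "S",
    facility nodes ("F", j, t), and retailer nodes ("R", k, t)."""
    arcs = []
    for t in range(T):
        for j in range(J):
            # production arc from the dummy source to facility j in period t
            arcs.append(("S", ("F", j, t), "production"))
            for k in range(K):
                # transportation arc within the layer of period t
                arcs.append((("F", j, t), ("R", k, t), "transportation"))
            if t + 1 < T:
                # inventory arc linking facility j across successive periods
                arcs.append((("F", j, t), ("F", j, t + 1), "inventory"))
    return arcs

arcs = build_pid_network(J=2, K=3, T=2)
print(len(arcs))  # 18: 4 production + 12 transportation + 2 inventory
```

For the instance of Figure 2-2 (J = 2, K = 3, T = 2) this yields 4 production arcs, 12 transportation arcs, and 2 inventory arcs.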

2.3 Complexity of the PID Problem

The PID problem with concave costs falls under the category of minimum concave cost network flow problems (MCCNFP). Guisewite and Pardalos [30] gave a detailed survey of MCCNFP through the early 1990s. It is well known that even certain special cases of MCCNFP, such as the fixed charge network flow problem (FCNFP) or the single source uncapacitated minimum concave cost network flow problem (SSUMCCNFP), are NP-hard (Guisewite and Pardalos [31]). MCCNFP is NP-hard even when the arc costs are constant, the underlying network is bipartite, or the ratio of the fixed charge to the linear charge for all arcs is constant. This has motivated the consideration of additional structures which might make the problem more tractable. In fact, polynomial time algorithms have been developed for a number of specially structured variants of MCCNFP (Du and Pardalos [13] and Pardalos and Vavasis [62]).

The number of source nodes and the number of arcs with nonlinear costs affect the difficulty of MCCNFP. It is therefore convenient to refer to MCCNFP with a fixed number, h, of sources and a fixed number, k, of arcs with nonlinear costs as MCCNFP(h,k). Guisewite and Pardalos [32] were the first to prove the polynomial solvability of MCCNFP(1,1). Later, strongly polynomial time algorithms were developed for MCCNFP(1,1) by Klinz and Tuy [51] and Tuy, Dan and Ghannadan [75]. In a series of papers by Tuy [73, 74] and Tuy et al. [76, 77], polynomial time algorithms were presented for MCCNFP(h,k) where h and k are constants. It was also shown that MCCNFP(h,k) can be solved in strongly polynomial time if min{h,k} = 1.

2.4 Problem Formulation

The problem that we consider is a multi-facility production, inventory, and distribution problem. A single item is produced in multiple facilities over multiple periods to satisfy the demand at the retailers. The objective is to minimize the systemwide total production, inventory, and transportation cost.

Let J, K, and T denote the number of facilities, the number of retailers, and the planning horizon, respectively. Then the resulting model is

minimize \sum_{j=1}^{J} \sum_{t=1}^{T} r_{jt}(p_{jt}) + \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{t=1}^{T} f_{jkt}(x_{jkt}) + \sum_{j=1}^{J} \sum_{t=1}^{T} h_{jt} I_{jt}

where

Constraints (2.4) and (2.5) are the flow conservation constraints, and (2.6), (2.7), and (2.8) are the capacity constraints. If U_{jkt}, V_{jt}, and W_{jt} are large enough, then the problem is effectively uncapacitated. As we pointed out earlier, MCCNFP has the combinatorial property that if it has an optimal solution, then there exists an optimal solution that is a vertex of the corresponding feasible domain. A feasible flow is an extreme flow (vertex) if it is not a convex combination of any other feasible flows. Extreme flows have been characterized for possible exploitation in solving the MCCNFP. A flow is extremal for a network flow problem if it contains no positive cycles. A positive cycle in an uncapacitated network is a cycle that has positive flow on all of its arcs. On the other hand, for a problem with capacity constraints on arc flows, a positive cycle is a cycle where all of the arcs in the cycle have positive flows which are strictly less than the capacity. This implies that for the PID problem with unlimited capacity an extreme flow is a tree. In other words, an optimal solution exists in which the demand of each retailer will be satisfied through only one of the facilities.
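The single-sourcing property can be seen on a tiny made-up example: with fixed-charge (hence concave) arc costs, the cost of splitting one retailer's demand between two facilities is concave in the split, so the minimum over all splits is attained at an endpoint, i.e., at a solution in which the retailer is served by only one facility.

```python
def fixed_charge(s, c):
    # concave arc cost: 0 at zero flow, s + c*x for x > 0
    return lambda x: 0.0 if x == 0 else s + c * x

g1 = fixed_charge(40.0, 3.0)   # facility 1 -> retailer
g2 = fixed_charge(25.0, 5.0)   # facility 2 -> retailer
d = 10                         # retailer demand

# cost of sending x units from facility 1 and d - x from facility 2
costs = {x: g1(x) + g2(d - x) for x in range(d + 1)}
best_split = min(costs, key=costs.get)
print(best_split, costs[best_split])  # 10 70.0
```

Here the optimum serves the whole demand from the first facility; any interior split pays both fixed charges and is dominated.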

2.4.1 Fixed Charge Network Flow Problems

In the fixed charge case the production cost function r_{jt}(p_{jt}) is of the following form:

r_{jt}(p_{jt}) = s_{jt} + c_{jt} p_{jt}  if p_{jt} > 0,  and  r_{jt}(0) = 0,

where s_{jt} and c_{jt} are, respectively, the setup and the variable costs of production.

If the distribution cost function, f_{jkt}(x_{jkt}), is also of a fixed charge form, then it is given by

f_{jkt}(x_{jkt}) = s_{jkt} + c_{jkt} x_{jkt}  if x_{jkt} > 0,  and  f_{jkt}(0) = 0,

where s_{jkt} and c_{jkt} represent, respectively, the setup and the variable costs of distribution.

Due to the discontinuity of the cost functions at the origin, the problem can be transformed into a 0-1 mixed integer linear programming (MILP) problem by introducing a binary variable for each arc with a fixed charge. Assuming s_{jt} > 0, the cost function r_{jt}(p_{jt}) can be replaced with

r_{jt}(p_{jt}) = c_{jt} p_{jt} + s_{jt} y_{jt}.

subject to (P2)

(2.4), (2.5), (2.7), (2.9), and

p_{jt} <= W_{jt} y_{jt},   j = 1, ..., J,  t = 1, ..., T,   (2.10)
x_{jkt} <= U_{jkt} y_{jkt},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,   (2.11)
y_{jt}, y_{jkt} ∈ {0, 1},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T.   (2.12)

The MILP formulation is used to find an optimal solution. This formulation is useful in practice since the relaxed problems are linear cost network flow problems. We use the general purpose solver CPLEX, which employs branch and bound algorithms, to solve the MILP formulation. This is done to gauge the difficulty of finding an optimal solution.
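A quick sanity check of the linearization in (2.10): as long as W_{jt} is a valid upper bound on production, forcing y = 1 exactly when p > 0 makes c p + s y coincide with the original fixed-charge cost. The values below are illustrative only.

```python
def fixed_charge_cost(p, s, c):
    # original discontinuous cost: 0 at p = 0, s + c*p for p > 0
    return 0.0 if p == 0 else s + c * p

def linearized_cost(p, s, c, W):
    # in the MILP, p <= W*y forces y = 1 whenever p > 0;
    # when p = 0 the minimization drives y to 0
    y = 0 if p == 0 else 1
    assert p <= W * y  # the big-M constraint holds
    return c * p + s * y

s, c, W = 50.0, 2.0, 100.0
for p in [0, 1, 37, 100]:
    assert fixed_charge_cost(p, s, c) == linearized_cost(p, s, c, W)
```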

Figure 2-3: Piecewise-linear concave production costs.

Similarly, if the transportation cost functions are piecewise-linear and concave, they have the following form:

The PID problem with piecewise-linear concave costs can be formulated as a MILP problem in several different ways. Below we give four different MILP formulations for the problem: the λ-formulation, the slope-formulation, the ASP formulation, and the NSP formulation.

λ-formulation. Any p_{jt} value on the x-axis of Figure 2-3 can be written as a convex combination of two adjacent breakpoints, i.e., p_{jt} = \sum_{i=0}^{l_{jt}} λ_{jt,i} β_{jt,i}, where β_{jt,0} = 0, \sum_{i=0}^{l_{jt}} λ_{jt,i} = 1, and 0 <= λ_{jt,i} <= 1 for i = 0, 1, ..., l_{jt}. Note that at most two of the λ_{jt,i} values can be positive, and only if they are adjacent. This formulation can be used to model any piecewise-linear function (not necessarily concave) by introducing the following constraints:

...

Note also that we have O(l_{jt}) (not O(2^{l_{jt}})) possibilities for each z_{jt} vector. In other words, only one of the components of the vector z_{jt} = (z_{jt,1}, z_{jt,2}, ..., z_{jt,l_{jt}}) will be 1, and the rest will be zero. The production cost functions can now be written in terms of the new variables. An additional binary variable, y_{jt}, is added to take care of the discontinuity at the origin.

subject to (P3) (2.4), (2.5), and

Slope-formulation. The slope-formulation is similar to the λ-formulation in the sense that the p_{jt} values on the x-axis of Figure 2-3 are again rewritten in terms of the breakpoints. However, this time p_{jt} is not a convex combination of the breakpoints

subject to (P4)

(2.4), (2.5), and

p_{jt} = \sum_{i=1}^{l_{jt}} λ_{jt,i} δ_{jt,i},   j = 1, ..., J,  t = 1, ..., T,   (2.32)
λ_{jt,i} >= z_{jt,i},   j = 1, ..., J,  t = 1, ..., T,  i = 1, ..., l_{jt}-1,   (2.33)
λ_{jt,i+1} <= z_{jt,i},   j = 1, ..., J,  t = 1, ..., T,  i = 1, ..., l_{jt}-1,   (2.34)
y_{jt} >= λ_{jt,1},   j = 1, ..., J,  t = 1, ..., T,   (2.35)
x_{jkt} = \sum_{i=1}^{l_{jkt}} λ_{jkt,i} δ_{jkt,i},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,   (2.36)
λ_{jkt,i} >= z_{jkt,i},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,  i = 1, ..., l_{jkt}-1,   (2.37)
λ_{jkt,i+1} <= z_{jkt,i},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,  i = 1, ..., l_{jkt}-1,   (2.38)
y_{jkt} >= λ_{jkt,1},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,   (2.39)
λ_{jt,i}, λ_{jkt,i} >= 0,   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,  i = 0, ..., l_{jkt},   (2.40)
y_{jt}, y_{jkt} ∈ {0, 1},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,   (2.41)
z_{jt,i}, z_{jkt,i} ∈ {0, 1},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,   (2.42)
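Both the λ- and slope-formulations encode the same piecewise-linear cost through its breakpoints: away from the discontinuity at the origin, the cost at any p is the linear interpolation between the two adjacent breakpoints. A small sketch of that evaluation (illustrative data, using Python's bisect to locate the segment):

```python
import bisect

def pw_linear_cost(p, breakpoints, values):
    """Linear interpolation between the two breakpoints adjacent to p,
    i.e., the convex combination the lambda-formulation selects."""
    if p == breakpoints[0]:
        return values[0]
    i = bisect.bisect_left(breakpoints, p)  # p in (breakpoints[i-1], breakpoints[i]]
    b0, b1 = breakpoints[i - 1], breakpoints[i]
    lam = (p - b0) / (b1 - b0)              # weight on breakpoint i
    return (1 - lam) * values[i - 1] + lam * values[i]

# concave example: marginal cost 5 per unit up to 5 units, then 1 per unit
bps = [0, 5, 20]
vals = [0, 25, 40]
print(pw_linear_cost(8, bps, vals))  # 28.0
```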

ASP formulation. Kim and Pardalos [50] used an Arc Separation Procedure (ASP) to transform network flow problems with piecewise-linear concave costs into network flow problems with fixed charges. Using the same procedure, the PID problem with piecewise-linear concave costs can be transformed into a fixed charge network flow problem. Each arc is separated into l_{jt} arcs, as shown in Figure 2-4. All of the new arcs have fixed charge cost functions.

Figure 2-4: Arc separation procedure.

The network grows in size after the arc separation procedure. The number of production and transportation arcs in the extended network is given by

\sum_{j=1}^{J} \sum_{t=1}^{T} l_{jt} + \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{t=1}^{T} l_{jkt}.
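Why the separated fixed-charge arcs lose nothing can be illustrated numerically: for a concave piecewise-linear cost, extending each segment to a full line s_i + c_i p gives a family of lines whose lower envelope is the original cost, so sending all flow on the cheapest separated arc reproduces the original cost exactly. Made-up data:

```python
# Two separated fixed-charge arcs, one per linear piece: s_i + c_i * p.
pieces = [(10.0, 3.0), (20.0, 1.0)]   # the segments meet at p = 5

def original_cost(p):
    # the concave piecewise-linear cost itself
    if p == 0:
        return 0.0
    return 10.0 + 3.0 * p if p <= 5 else 20.0 + 1.0 * p

def asp_cost(p):
    # route all flow on the cheapest separated fixed-charge arc
    return 0.0 if p == 0 else min(s + c * p for (s, c) in pieces)

for p in [0.0, 2.0, 5.0, 12.0]:
    assert asp_cost(p) == original_cost(p)
```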

The production cost can now be written in terms of the cost functions of the new arcs created in the following way:

r_{jt}(p_{jt}) = \sum_{i=1}^{l_{jt}} r_{jt,i}(p_{jt,i}) = \sum_{i=1}^{l_{jt}} (c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i}).   (2.43)

Note that the equality (2.43) may not hold in general without additional constraints to restrict the domain of each separated arc cost function. However, for concave cost functions the equality holds due to the properties given by (2.13) and (2.14). Kim and Pardalos [50] gave a simple proof based on contradiction. This formulation is closely related to the λ-formulation in the sense that only one of the linear pieces will be used in an optimal solution. As with the z_{jt} variables in the λ-formulation, the vector of y_{jt} variables in the formulation below will have O(l_{jt}) possible values. The MILP formulation after the arc separation procedure is

minimize \sum_{j=1}^{J} \sum_{t=1}^{T} \sum_{i=1}^{l_{jt}} (c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i}) + \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{t=1}^{T} \sum_{i=1}^{l_{jkt}} (c_{jkt,i} x_{jkt,i} + s_{jkt,i} y_{jkt,i}) + \sum_{j=1}^{J} \sum_{t=1}^{T} h_{jt} I_{jt}

NSP formulation. We developed a Node Separation Procedure (NSP) to transform the PID problem with piecewise-linear and concave costs into PID problems with fixed charge costs in an extended network. The NSP is similar to the ASP, but it results in a larger network because the NSP separates the inventory arcs as well. Although the network grows in size, this formulation is useful because its linear programming (LP) relaxation leads to tight lower bounds. Once the problem is transformed into a fixed charge network flow problem through NSP, the lower bounding procedure explained in section 3.5 can be used to get good lower bounds. Figure 2-5 illustrates the node separation procedure.

Figure 2-5: Node separation procedure.

The MILP formulation for PID problems with piecewise-linear concave costs, linear inventory costs, and fixed charge distribution costs after the node separation

subject to (P6)

p_{jt,i} + I_{j,t-1,i} = I_{jt,i} + \sum_{k=1}^{K} x_{jkt,i},   j = 1, ..., J,  t = 1, ..., T,  i = 1, ..., l_{jkt},   (2.51)
\sum_{j=1}^{J} \sum_{i=1}^{l_{jkt}} x_{jkt,i} = d_{kt},   k = 1, ..., K,  t = 1, ..., T,   (2.52)
p_{jt,i} <= β_{jt,i} y_{jt,i},   j = 1, ..., J,  t = 1, ..., T,  i = 1, ..., l_{jkt},   (2.53)
I_{jt,i} <= V_{jt},   j = 1, ..., J,  t = 1, ..., T,   (2.54)
x_{jkt,i} <= U_{jkt} y_{jkt,i},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,  i = 1, ..., l_{jkt},   (2.55)
y_{jt,i}, y_{jkt,i} ∈ {0, 1},   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,  i = 1, ..., l_{jkt},   (2.56)
p_{jt,i}, I_{jt,i}, x_{jkt,i} >= 0,   j = 1, ..., J,  k = 1, ..., K,  t = 1, ..., T,  i = 1, ..., l_{jkt}.   (2.57)

3.1 Existing Solution Approaches

The PID problem, as described in section 2.4, is formulated as a 0-1 Mixed Integer Linear Program (MILP) due to the structure of the production and distribution cost functions. In fact, most of the exact solution approaches for combinatorial optimization problems transform the problem into an equivalent 0-1 MILP and use branch and bound techniques to solve it optimally. Two of the latest branch and bound algorithms for fixed charge transportation problems were presented by McKeown and Ragsdale [60] and Lamar and Wallace [53]. Recently, Bell, Lamar and Wallace [7] presented a branch and bound algorithm for the capacitated fixed charge transportation problem. Most of the branch and bound algorithms use conditional penalties, called up and down penalties, that contribute to finding good lower bounds.

Other exact solution algorithms include vertex enumeration techniques, dynamic programming approaches, cutting-plane methods, and branch-and-cut methods. Although exact methods have matured greatly, the computational time required by these algorithms grows exponentially as the problem size increases. In many instances they are not able to produce an optimal solution efficiently. Our computational experiments indicate that even for moderate size PID problems the exact approaches fail to find a solution.

The NP-hardness of the problem motivates the use of approximate approaches. Since the PID problem with fixed charge or piecewise-linear concave costs falls under the category of MCCNFP, it achieves its optimal solution at an extreme point of the feasible region (Horst and Pardalos [42]). Most of the approximate

A wide variety of heuristic approaches for NP-hard problems are based on local search. Local search methods provide a framework for searching the solution space by focusing on local neighborhoods. In the following we present local search based solution methods for the PID problem.

3.2 Local Search

Local search is based on a simple and natural method which is perhaps the oldest optimization method: trial and error. However, local search algorithms have proven to be powerful tools for hard combinatorial optimization problems, in particular those known as NP-hard. The book edited by Aarts and Lenstra [1] is an excellent source which provides applications as well as complexity results.

The set of solutions of an optimization problem that may be visited by a local search algorithm is called the search space. Typically, the feasible region of the problem is defined as the search space. However, if generating feasible solutions is not easy, then a different search space may be defined which takes advantage of the special structures of the problem. If a search space other than the feasible region is used, then the objective function should be modified so that the infeasibility of a given solution can be identified.

A basic version of local search is iterative improvement. In other words, a general local search algorithm starts with some initial solution, S, and keeps replacing it with another solution in its neighborhood, N(S), until some stopping criterion is satisfied. Therefore, to implement a local search algorithm the following must be identified:

Good neighborhoods often take advantage of the special combinatorial structure of the problem and are typically problem dependent. The algorithm stops if the current solution does not have any neighbors of lower cost (in the case of minimization). The basic move strategy in local search is to move to an improved solution in the neighborhood. Usually two strategies are implemented, known as the first better move strategy and the best admissible move strategy. In the first better move strategy, the neighboring solutions are investigated in a pre-specified order, and the first solution that shows an improvement is taken as the next solution. The order in which the neighbors are searched may affect the solution quality and the computational time. In the best admissible move strategy, the neighborhood is searched exhaustively and the best solution is taken as the next solution. In this case, since the search is exhaustive, the order of the search is not important.

Lately, other more sophisticated strategies have been developed to allow the search to escape from locally optimal solutions in the hope of finding better solutions. These sophisticated procedures are referred to as metaheuristics in the literature. In the following we first give a brief history of local search, including a list of metaheuristics, then discuss the computational complexity of local search, and finally give our local search procedure for the PID problem.

3.2.1 History of Local Search

The use of local search in combinatorial optimization has a long history. Back in the late 1950s, Bock [8] and Croes [11] solved traveling salesman problems (TSP) using edge-exchange local search algorithms for the first time. Later, Lin [55] refined the
In the 1980s more generalized approaches were proposed which combined local search with other heuristic algorithms. These approaches allow moves to solutions that do not necessarily give better objective function values. Examples of these sophisticated approaches, called metaheuristics, are: simulated annealing, tabu search, genetic algorithms, neural networks, the greedy randomized adaptive search procedure (GRASP), variable neighborhood search, ant systems, population heuristics, memetic algorithms, and scatter search. Recent reviews of these procedures can be found in the book edited by Pardalos and Resende [61].

3.2.2 Complexity of Local Search

An important question about a local search algorithm is the number of steps it takes to reach a locally optimal solution. The time it takes to search the neighborhood of a given solution is usually polynomially bounded. However, the number of moves it takes to reach a local optimum from a given solution may not be polynomial. For example, there are TSP instances and initial solutions for which local search takes an exponential number of steps under the 2-exchange neighborhood. To analyze the complexity of local search, Johnson, Papadimitriou and Yannakakis [45] introduced a complexity class called PLS (polynomial-time local search). PLS contains problems whose neighborhoods can be searched in polynomial time. Yannakakis [82] provides an extensive survey of the theory of PLS-completeness.

It is important to distinguish between the complexity of a local search problem and a local search heuristic. A local search problem is the problem of finding
In the past two decades there has been considerable work on the theory of local search. Several local search problems have been shown to be PLS-complete, but the complexity of finding local optima for many interesting problems remains open, although computational results are encouraging.

3.2.3 Local Search for the PID Problem

In this section, we give details about the neighborhood, the move strategy, the stopping criterion, and the evaluation function for the PID problem, which are the main ingredients of a local search algorithm. The PID problem is formulated as a network flow problem in Section 2.4, where the objective is the minimization of a concave function. We take advantage of the special structure of network flow problems to define a neighborhood and a move strategy. We first give several definitions of neighborhood for the MCCNFP and then adopt one of these definitions to provide a neighborhood definition for the PID problem. Next, we discuss the move strategy, the stopping condition, and the evaluation function. The generation of initial solutions will be discussed later in Sections 3.3 and 3.4.
Neighborhood definitions for MCCNFP. The usual definition of local optimality defines a neighborhood of a solution S in the following way, where ||·|| is a vector norm and ε > 0.

Definition 3.1.

In order to create an ε-neighbor of a given extreme flow, let us reroute an amount ε to one of the demand nodes. Subtract an ε from all the arcs on the path from the source to that demand node. This will not change the current cost for two reasons. First, the current flow on all arcs on the path from the source to that demand node is higher than ε, so they will still have positive flow. Second, the arcs have the following cost structure:
There are other concave cost functions, however, for which every extreme flow is not necessarily a local optimum under N_ε. Consider the following example in Figure 3-1, where there are two demand nodes, two transshipment nodes, and a single source node.

Figure 3-1: An example for the ε-neighborhood.

If the cost functions are fixed charge, then the extreme flow on the left in Figure 3-1 has a total cost of s_{11} + c_{11}d_{11} + s_{21} + c_{21}d_{21} + s_{111} + c_{111}d_{11} + s_{221} + c_{221}d_{21} (see Section 2.4.1 for the definition of the variables). The non-extreme ε-neighbor on the right in Figure 3-1 has a total cost of s_{11} + c_{11}(d_{11} - ε) + s_{21} + c_{21}(d_{21} + ε) + s_{111} + c_{111}(d_{11} - ε) + s_{221} + c_{221}d_{21} + s_{211} + c_{211}ε. The difference between the costs is s_{211} + (c_{211} + c_{21} - c_{11} - c_{111})ε. If appropriate cost values are chosen, this difference can be negative, which indicates that the extreme flow is not a local optimum. However, the ε-neighborhood is still not useful for our problem because it is not easy to search this neighborhood efficiently.

Gallo and Sodini [25] developed the following generalized definition of neighborhood for the MCCNFP.

Definition 3.2
Guisewite and Pardalos [31] developed the following more relaxed definition of neighborhood for the MCCNFP.

Definition 3.3

Neighborhood definition for the PID problem. A generally accepted rule is that the quality of the solutions is better and the accuracy of the final solutions is greater if a larger neighborhood is used, although this may require additional computational time. Therefore, it is likely that N_AF(·), which is defined above, will lead to better solutions. Thus, we define a neighborhood for the PID problem in the following way, which is, in principle, similar to N_AF(·).

Definition 3.4
In Figure 3-2, for example, there is a cycle between nodes D, F_{2,1}, and F_{2,2}. This cycle indicates that there is a positive amount of inventory carried forward from period 1 to period 2 at facility 2 and, at the same time, a positive amount of production at facility 2 in period 2. To remove this cycle we should eliminate either the inventory or the production from the cycle. However, we know that rerouting the inventory through node F_{2,2} will actually increase the total cost, and this can easily be shown. When we moved from the extreme point solution on the left to the non-extreme solution on the right, rerouting the demand of node R_{1,2} was cheaper if node F_{1,2} is used rather than F_{2,2}. Therefore, the production arc should be removed from the cycle. A more formal discussion of cycle detection and elimination is given later in this section.

Figure 3-2: An example of moving to a neighboring solution.

The move strategy, the evaluation function, and the stopping condition. The move strategy determines the order in which the neighboring solutions are visited.
The local search algorithm is summarized in Figure 3-3. Let j_0 be the facility that currently supplies retailer k's demand in period t through production in period τ_0 (τ_0 ≤ t). Also, let δ_{jτ} be the additional cost of changing the supplier to facility j in period τ (τ ≤ t). In other words, δ_{jτ} is the cost of rerouting the demand of the retailer k under consideration. Note that δ_{jτ} = 0 when j = j_0 and τ = τ_0. In the local search procedure given in Figure 3-3, j* and τ* are such that δ_{j*τ*} = min{δ_{jτ} : j = 1,...,J; τ = 1,...,t}.

Cycle detection and elimination. Due to the special structure of our network, identifying cycles and removing them can be done efficiently. A cycle for the PID problem means that at least one of the facilities is producing and also carrying inventory in the same period. Therefore, the cycles can be identified with a single pass over all facilities and time periods. The procedure is summarized in Figure 3-4. Note that two types of cycles can be created, and they are analyzed separately in the following paragraphs.
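The single pass over facilities and periods can be sketched as follows. The array layout (p for production, I for end-of-period inventory) and all names are illustrative assumptions, not the dissertation's notation.

```python
def find_cycles(p, I):
    """Flag PID cycles in one pass: a cycle exists whenever facility j
    both carries inventory into period t and produces in period t.

    p[j][t] : production at facility j in period t
    I[j][t] : inventory carried by facility j from period t into t+1
    """
    cycles = []
    for j in range(len(p)):
        for t in range(1, len(p[j])):
            if I[j][t - 1] > 0 and p[j][t] > 0:
                cycles.append((j, t))
    return cycles

# Facility 0 carries 2 units from period 0 into period 1 and also produces
# 3 units in period 1, so a single cycle (0, 1) is detected.
p = [[5, 3], [4, 0]]
I = [[2, 0], [0, 0]]
cycles = find_cycles(p, I)
```

Eliminating a detected cycle then follows the type I / type II analysis given below.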
Let S be an initial extreme feasible solution
for (t = 1,...,T) and (k = 1,...,K) do
    Check for any cycles and eliminate them
end for t and k
return S

if (p_{jl} > 0 and I_{jl} > 0) then
    if p_{jl} is on the new path then reroute I_{jl}
return S
Type I cycle. Assume that the demand of retailer k in period t is satisfied through production at facility j at time τ after it is rerouted. If facility j is already carrying inventory from period τ-1 to period τ, then a type I cycle is created. Only one cycle is created, and it can be eliminated by rerouting the inventory. Figure 3-5 gives an example of a type I cycle where the demand of retailer 1 in period 3 is originally satisfied from production at facility 2 in period 3. If it is cheaper to satisfy the demand of retailer 1 in period 3 by production at facility 1 in period 3, then this will result in the network flow given on the right of Figure 3-5. To eliminate the cycle, the amount of inventory entering node F_{1,3} is subtracted from its current path and added onto the production arc entering F_{1,3}.

Figure 3-5: An example of a type I cycle.

Type II cycle. Assume that facility j currently produces in several periods and that the demand of retailer k in period t is satisfied through production from an earlier period after it is rerouted. The rerouting of retailer k's demand may result in one or more cycles. Figure 3-6 gives an example of a type II cycle. The flow on the left is the starting extreme flow, which indicates that the demand of retailer 1 in
Figure 3-6: An example of a type II cycle.

So far, we have talked about the neighborhood, the move strategy, the evaluation function, and the stopping condition. However, we have not discussed how the initial solutions are found. Generation of initial solutions may be critical for local search procedures. Good starting solutions may lead to better quality local optima. In Sections 3.3 and 3.4 we discuss several procedures for generating good initial solutions.

3.3 Dynamic Slope Scaling Procedure (DSSP)

The DSSP is a procedure that iteratively approximates the concave cost function by a linear function and solves the corresponding network flow problem. Note that each of the approximating network problems has exactly the same set of constraints,
Initialize
Solve the following network flow problem:
    minimize Σ_{j=1}^{J} Σ_{t=1}^{T} (
Update
return S

Important issues regarding the DSSP algorithm are finding initial linear costs, updating the linear costs, and the stopping conditions.

Finding an initial linear cost. Kim and Pardalos [48, 50] investigated two different ways to initiate the algorithm for fixed charge and piecewise-linear concave network flow problems. Here, we generalize the heuristic to all concave costs with the property that the total cost is zero if the activity level is zero. In other words, r_{jt}(·) and f_{jkt}(·) are concave functions on the interval [0, ∞) and r_{jt}(0) = f_{jkt}(0) = 0. The initial linear cost factors we use are
Updating scheme. The previous coefficient c^{(q)}_{jt} is kept if p^{(q)}_{jt} = 0, and c^{(q)}_{jkt} is kept if x^{(q)}_{jkt} = 0.

Stopping condition. If two consecutive solutions in the above algorithm are equal, then the linear cost coefficients and the objective function values in the following iterations will be identical. As a result, once S^{(q)} = S^{(q-1)} there can be no more improvement. Therefore, a natural stopping criterion is to terminate the algorithm when ||S^{(q)} - S^{(q-1)}|| < ε. An alternative is to terminate the algorithm after a fixed number of iterations if the above criterion is not satisfied.

The initiation and updating schemes for the fixed charge and piecewise-linear concave cases are analyzed separately below. In the remainder of this dissertation we will refer to the local search approach that uses DSSP to generate the initial solutions as LS-DSSP.

3.3.1 Fixed Charge Case

Kim and Pardalos [48] investigated two different ways to initiate the algorithm. The initiation scheme presented here has been shown to provide better results when used for large-scale problems (Figure 3-8). The initial value of the linear cost we use is

The linear cost coefficients are updated in such a way that they reflect the variable costs and the fixed costs simultaneously, in the following way:
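For a single fixed-charge arc with cost s·1[x>0] + c·x, one way to realize this kind of slope update is sketched below: the linearized unit cost is re-set to the exact average cost at the current flow, and left unchanged when the arc is unused. This is an illustrative sketch consistent with the scheme above, not a verbatim transcription of the dissertation's update rule.

```python
def update_slope(s, c, flow, prev_slope):
    """Slope-scaling update for an arc with fixed-charge cost s + c*x (x > 0):
    at positive flow the new slope is the exact unit cost (s + c*x)/x, so the
    linear approximation matches the concave cost at the current solution;
    an unused arc keeps its previous slope."""
    if flow > 0:
        return (s + c * flow) / flow
    return prev_slope

# Fixed charge 10, unit cost 2, current flow 5: slope = (10 + 2*5)/5 = 4.0.
slope = update_slope(10.0, 2.0, 5.0, 99.0)
```

Because the slope reflects both the fixed and the variable cost at the current flow, arcs that carried little flow in the last solution become expensive in the next linear subproblem, which is what drives DSSP toward extreme flows.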
Figure 3-8: Linearization of the fixed charge cost function.

The previous coefficient c^{(q)}_{jt} is kept if p^{(q)}_{jt} = 0, and c^{(q)}_{jkt} is kept if x^{(q)}_{jkt} = 0.

3.3.2 Piecewise-linear Concave Case

When production or transportation costs are piecewise-linear and concave, the problem can be transformed into a fixed charge network flow problem as described in Section 2.4.2. After the transformation, the solution approach given in Section 3.3.1 can be used to solve the problem. However, the transformed problem does not always lead to the generation of feasible solutions, since the problem is not solved to optimality. Therefore, we apply the procedure to the original problem instead of the extended network, with some modification to account for each piece of the cost functions.

The linear cost coefficients are initialized in the same way as in the fixed charge case, i.e.
The previous coefficient c^{(q)}_{jt} is kept if p^{(q)}_{jt} = 0, and c^{(q)}_{jkt} is kept if x^{(q)}_{jkt} = 0.

3.3.3 Performance of DSSP on Some Special Cases

In this section we analyze the performance of DSSP on some basic problems. The problems considered are simple cases with only a single time period. We identify some cases where the DSSP heuristic actually finds the optimal solution.

Multiple facilities and one retailer. If there is only a single retailer but multiple facilities in the network, then the problem is easy to solve, provided that there are no capacities on the production and transportation arcs and the cost functions are concave. The problem in this case simply reduces to a shortest path problem (Figure 3-10), which can be solved in O(J) time by complete enumeration. In an optimal solution only one pair of production and transportation arcs will be used, since this is an uncapacitated concave minimization problem. The optimal solution has the following characteristics (j* indicates the facility used in the optimal solution):
In every iteration of the DSSP heuristic the following network flow problem is solved (derived from problem (P1) for a single retailer):

minimize Σ_{j=1}^{J} (

Although (P1′) is uncapacitated, the heuristic requires an upper bound to initialize the linear cost coefficients. Therefore, let W_{j1} and U_{j11} be the upper bounds for the production and transportation arcs, respectively, such that W_{j1} ≥ d_{11} and U_{j11} ≥ d_{11}. Hence, the initial cost coefficients are
Problem (P1′) is a continuous knapsack problem. Therefore, at a given DSSP iteration q, the solution will be of the following form: p^{(q)}_{j′1} = d_{11} and p^{(q)}_{j1} = 0 for j ≠ j′, where (

The heuristic terminates when the solutions found in two consecutive iterations are the same. However, this will not happen unless the solution found is the optimal one. In other words, at iteration q+1 the solution will be p^{(q+1)}_{j′′1} = d_{11} (all other variables are zero) and the procedure will stop if j′′ = j′. Note that j′′ = j′ only if j′ = j*, where j* is the facility used in the optimal solution to (P1) with a single retailer. Assume j′ ≠ j*. Also assume, without loss of generality, that the optimal solution is unique, so that r_{j*1}(d_{11})/d_{11} + f_{j*11}(d_{11})/d_{11} < r_{j1}(d_{11})/d_{11} + f_{j11}(d_{11})/d_{11} for all j ∈ {1,2,...,J} \ {j*}. Due to the concavity of the cost functions, r_{j1}(W_{j1})/W_{j1} ≤ r_{j1}(d_{11})/d_{11} and f_{j11}(U_{j11})/U_{j11} ≤ f_{j11}(d_{11})/d_{11} (remember that r_{j1}(0) = f_{j11}(0) = 0) for j = 1,2,...,J. Therefore, at iteration q+1,

The heuristic will visit each facility at most once, and facility j* will be visited at most three times (twice in the last two iterations and possibly once during the previous iterations). The total number of iterations, therefore, is J+2 in the worst case. If W_{j1} = U_{j11} = d_{11} for j = 1,2,...,J, then the optimal solution will be found in only 2 iterations.

Two facilities and two retailers. When there are only two facilities and two retailers, the problem is obviously easy since there are only a few feasible solutions to consider (Figure 3-11). If there are no production and transportation capacities, then the two retailers are not competing for capacity and they can act independently. This indicates that the problem decomposes into two single-retailer problems, both of which can be solved to optimality using the DSSP. In general, if there are J facilities,
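The single-retailer convergence argument above can be exercised numerically. The sketch below (illustrative names and data, fixed-charge cost functions) initializes the slopes from the upper bounds, routes all demand through the facility with the cheapest linearized unit cost in each iteration, and stops when the same facility is chosen twice in a row.

```python
def dssp_single_retailer(r, f, W, U, d, max_iter=100):
    """DSSP for one retailer and J facilities with concave costs r[j] / f[j]
    and arc upper bounds W[j], U[j] (each >= d). Returns the facility that
    serves all of the demand d at termination."""
    J = len(r)
    # initial slopes computed at the upper bounds
    slope = [r[j](W[j]) / W[j] + f[j](U[j]) / U[j] for j in range(J)]
    chosen = None
    for _ in range(max_iter):
        j = min(range(J), key=lambda i: slope[i])
        if j == chosen:        # two consecutive identical solutions: stop
            return j
        chosen = j
        # re-slope the used facility to its exact unit cost at flow d
        slope[j] = r[j](d) / d + f[j](d) / d
    return chosen

# Two facilities with fixed-charge costs; at d = 10 facility 1 has true
# unit cost 8.5 versus 11.5 for facility 0, so DSSP settles on facility 1.
r = [lambda x: 50 + 5 * x, lambda x: 10 + 6 * x]   # production costs
f = [lambda x: 5 + x, lambda x: 5 + x]             # transportation costs
best = dssp_single_retailer(r, f, [20, 20], [20, 20], 10)
```

Each pass either terminates or permanently raises one facility's slope to its true unit cost at d, which is why the worst-case iteration count stays proportional to J.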
Figure 3-11: A bipartite network with two facilities and two retailers.

If, however, the facilities have production capacities, then the DSSP may fail to find the optimal solution. We will illustrate this by creating an example for which DSSP fails to find the optimal solution. We assume production and distribution costs are fixed charge rather than general concave functions to simplify the analysis.

The MILP formulation for the problem in Figure 3-11 with fixed charge costs can be given as (derived from problem (P2) in Section 2.4.1)

minimize c_{11}p_{11} + c_{21}p_{21} + s_{11}y_{11} + s_{21}y_{21} + c_{111}x_{111} + c_{121}x_{121} + c_{211}x_{211} + c_{221}x_{221} + s_{111}y_{111} + s_{121}y_{121} + s_{211}y_{211} + s_{221}y_{221}
We want capacities on the production arcs, but to have a feasible problem we must have W_{11} + W_{21} ≥ d_{11} + d_{21}. Letting W_{11} + W_{21} = d_{11} + d_{21} forces the following conditions: p_{11} = W_{11}, p_{21} = W_{21}, y_{11} = 1, and y_{21} = 1. Under these conditions the only decision variables left in the problem are the distribution amounts from the facilities to the retailers. If (W_{11} = W_{21} = d_{11} = d_{21}) or (W_{11} = d_{11}, W_{21} = d_{21}, and d_{11}
subject to (P2′′)
x_{111} + x_{121} = W_{11};  (3.15)
x_{211} + x_{221} = W_{21};  (3.16)
x_{111} + x_{211} = d_{11};  (3.17)
x_{121} + x_{221} = d_{21};  (3.18)
x_{111} = W_{11} y_{111};  (3.19)
x_{121} = W_{11} (1 - y_{111});  (3.20)
x_{211} ≤ d_{11};  (3.21)
x_{221} ≤ d_{21};  (3.22)
y_{111} ∈ {0,1};  (3.23)
x_{jk1} ≥ 0, j = 1,2; k = 1,2.  (3.24)

Note that the production cost is now a constant, since p_{11} = W_{11} and p_{21} = W_{21}. Thus, it is not included in the objective function. The problem formulation can further be simplified by taking advantage of the equality constraints.

minimize (c_{111}W_{11} - c_{121}W_{11} - c_{211}W_{11} + c_{221}W_{11} + s_{111} - s_{121}) y_{111}

The optimal solution to (P2′′′) is y_{111} = 1 if (c_{111}W_{11} - c_{121}W_{11} - c_{211}W_{11} + c_{221}W_{11} + s_{111} - s_{121}) < 0 and y_{111} = 0 otherwise. Assume (c_{111}W_{11} - c_{121}W_{11} - c_{211}W_{11} + c_{221}W_{11} + s_{111} - s_{121}) ≥ 0, so that we have x_{111} = 0, x_{121} = W_{11}, x_{211} = d_{11}, and x_{221} = d_{21} - W_{11} as the optimal solution.
Under the assumptions and simplifications given before, the DSSP solves the following network flow problem at every iteration:

minimize (

The initial linear cost coefficients are

Multiple facilities and multiple retailers. If the problem has multiple facilities, multiple retailers, and concave cost functions, then it falls under the category of MCCNFP, which is known to be NP-hard (Section 2.3).

Figure 3-12: A bipartite network with multiple facilities and multiple retailers.
However, if (i) the distribution costs are concave, (ii) there is an equal number of facilities and retailers, (iii) the demands at the retailers are all the same, and (iv) all production capacities are equal to the constant demand, then the problem becomes easier (Figure 3-12). In other words, if J = K and W_{j1} = d_{k1} = d for j,k = 1,2,...,J, the problem formulation is

minimize Σ_{j=1}^{J} r_{j1}(p_{j1}) + Σ_{j=1}^{J} Σ_{k=1}^{J} f_{jk1}(x_{jk1})

subject to (P7)
p_{j1} - Σ_{k=1}^{J} x_{jk1} = 0, j = 1,...,J;  (3.28)
Σ_{j=1}^{J} x_{jk1} = d, k = 1,...,J;  (3.29)
p_{j1} ≤ d, j = 1,...,J;  (3.30)
x_{jk1} ≤ d, j,k = 1,...,J;  (3.31)
p_{j1}, x_{jk1} ≥ 0, j,k = 1,...,J.  (3.32)

Due to the tight production capacities, the only feasible way of meeting the demand is for all facilities to produce at their full capacity. This means that p_{j1} = d for j = 1,2,...,J, so the production variables can be dropped from the formulation and the problem reduces to

minimize Σ_{j=1}^{J} Σ_{k=1}^{J} f_{jk1}(x_{jk1})

subject to (P7′)
Σ_{k=1}^{J} x_{jk1} = d, j = 1,...,J;  (3.33)
Σ_{j=1}^{J} x_{jk1} = d, k = 1,...,J;  (3.34)
x_{jk1} ≥ 0, j,k = 1,...,J.  (3.35)
Now let z_{jk1} = x_{jk1}/d for j,k = 1,2,...,J. Substituting this into (P7′) gives

minimize Σ_{j=1}^{J} Σ_{k=1}^{J} f_{jk1}(d z_{jk1})

subject to (P7′′)
Σ_{k=1}^{J} z_{jk1} = 1, j = 1,...,J;  (3.36)
Σ_{j=1}^{J} z_{jk1} = 1, k = 1,...,J;  (3.37)
z_{jk1} ≥ 0, j,k = 1,...,J.  (3.38)

Note that constraints (3.36) and (3.37), together with z_{jk1} ∈ {0,1}, are the constraints of the well-known assignment problem. If the transportation costs f_{jk1}(·) are concave, then we will refer to (P7′′) as the concave assignment problem.

Proposition 4. If DSSP is used to solve (P7′), the following network flow problem is solved in every iteration:

minimize Σ_{j=1}^{J} Σ_{k=1}^{J}

This is equivalent to solving the LP relaxation of a linear assignment problem. Therefore, the solution to (P8) gives the optimal solution to (P7′). The objective function coefficients in the next iteration of DSSP will be the same, since x^{(1)}_{jk1} = 0 or
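To see why only the values f_{jk1}(d) matter in the concave assignment problem, note that every extreme flow sends either 0 or d on each arc. The small sketch below (illustrative data, square-root costs) checks the concave assignment optimum by brute-force enumeration; for realistic sizes one would instead solve the linearized assignment problem, as DSSP does.

```python
import math
from itertools import permutations

def concave_assignment(f, d):
    """Brute-force optimum of the concave assignment problem: assign each
    facility j to exactly one retailer perm[j], minimizing the sum of the
    concave transport costs f[j][k](d). Only the values f[j][k](d) enter
    the cost, since every extreme flow is 0 or d."""
    J = len(f)
    return list(min(permutations(range(J)),
                    key=lambda perm: sum(f[j][perm[j]](d) for j in range(J))))

# 2x2 example with concave square-root costs and d = 4: matching facility 0
# to retailer 1 and facility 1 to retailer 0 costs 2 + 2 = 4, versus 12.
f = [[lambda x: 3 * math.sqrt(x), lambda x: math.sqrt(x)],
     [lambda x: math.sqrt(x), lambda x: 3 * math.sqrt(x)]]
assignment = concave_assignment(f, 4)
```

Because the LP relaxation of the assignment polytope has integral extreme points, the linearized subproblem already returns a 0/1 assignment, which is the observation behind Proposition 4.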
3.4 Greedy Randomized Adaptive Search Procedure (GRASP)

The Greedy Randomized Adaptive Search Procedure is an iterative process that provides a feasible solution at every iteration for combinatorial optimization problems (Feo and Resende [20]). GRASP is usually implemented as a multi-start procedure where each iteration consists of a construction phase and a local search phase. In the construction phase, a randomized greedy function is used to build up an initial solution. This solution is then used for improvement attempts in the local search phase. This iterative process is performed for a fixed number of iterations, and the final result is simply the best solution found over all iterations. The number of iterations is one of the two parameters to be tuned. The other parameter is the size of the candidate list used in the construction phase.

GRASP has been applied successfully to a wide range of operations research and industry problems such as scheduling, routing, logic, partitioning, location, assignment, manufacturing, transportation, and telecommunications. Festa and Resende [21] give an extended bibliography of the GRASP literature. GRASP has been implemented in various ways, with some modifications to enhance its performance. Here, we develop a GRASP for the PID problem. In the following, we discuss the generation of initial solutions. We give two different construction procedures. The second construction procedure, called the modified construction phase, is developed to improve the performance of the heuristic procedure.

3.4.1 Construction Phase

In the construction phase a feasible solution is constructed step by step, utilizing some of the problem-specific properties. Since our problems are uncapacitated concave cost network flow problems, the optimal solution will be a tree on the network. Therefore, we construct solutions where each retailer is supplied by a single
The unit cost, δ_j, of assigning facility j to retailer k is f_{jkt}(d_{kt})/d_{kt} + r_{jt}(p_{jt} + d_{kt})/(p_{jt} + d_{kt}). The cheapest connections for this retailer are then put into a restricted candidate list (RCL), and one of the facilities from the RCL is selected randomly. The size of the RCL is one of the parameters that requires tuning. Hart and Shogan [36] and Feo and Resende [19] propose a cardinality-based scheme and a value-based scheme to build an RCL. In the cardinality-based scheme, a fixed number of candidates are placed in the RCL. In the value-based scheme, all candidates with greedy function values within (100α)% of the best candidate are placed in the RCL, where α ∈ [0,1]. We use the value-based scheme, and a candidate facility is added to the RCL if its cost is no more than a multiple of the cheapest one. Finally, when the chosen facility is connected to the retailer, the flows on the arcs are updated accordingly. The procedure is given in Figure 3-13. When α, in Figure 3-13, is zero, the procedure is totally randomized and the greedy function value, δ_j, is irrelevant. However, when α is equal to 1, the procedure is a pure greedy heuristic without any randomization.

Holmqvist et al. [39] develop a similar GRASP for the single-source uncapacitated MCCNFP. Their problems are different from ours because they use different cost functions and network structures. Also, our approach differs from theirs in the greedy function used to initialize the RCL. They use the actual cost, whereas we check the unit cost of connecting retailers to one of the facilities. Due to the presence of fixed costs, the unit-cost approach performed better. We have tested both approaches, and our approach consistently gave better results for the problems we have tested. The results are presented in Chapter 4.
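A value-based RCL of this kind can be sketched as follows. The exact threshold rule is an assumption chosen to match the convention above (α = 1 is pure greedy, α = 0 is fully random); all names are illustrative.

```python
import random

def build_rcl(unit_costs, alpha):
    """Value-based restricted candidate list: facility j enters the RCL if
    its greedy value is within a (1 - alpha) fraction of the cost spread
    above the cheapest candidate, i.e.
        cost[j] <= c_min + (1 - alpha) * (c_max - c_min).
    alpha = 1 keeps only the cheapest facility (pure greedy);
    alpha = 0 admits every facility (fully randomized)."""
    c_min, c_max = min(unit_costs), max(unit_costs)
    threshold = c_min + (1.0 - alpha) * (c_max - c_min)
    return [j for j, c in enumerate(unit_costs) if c <= threshold]

costs = [4.0, 9.0, 5.0, 12.0]
rcl = build_rcl(costs, 0.8)        # threshold 5.6 -> facilities 0 and 2
facility = random.choice(rcl)      # randomized greedy selection
```

With this rule, the experimentally preferred values of α in [0.80, 0.85] reported later keep the RCL small but not degenerate.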
x_{jkt} := 0, j = 1,...,J; k = 1,...,K

The construction procedure in Figure 3-13 handles each period independently. In other words, the initial solution created does not carry any inventory, and demands are met through production within the same period. A second approach is to allow demand to be met by production from previous periods. The second approach led to better initial solutions, but the final solutions after local search were worse compared to the ones we got from the first approach. One explanation for this is that the initial solutions in the second approach are likely already local optima. In the first approach, however, we do not start with a very good solution in most cases, but the local search phase improves the solution. The local search approach which uses the construction procedure described in this section will be referred to as LS-GRASP from here on.

3.4.2 Modified Construction Phase

In a multi-start procedure, if initial solutions are uniformly generated from the feasible region, then the procedure will eventually find a global optimal solution
In order to diversify the initial solutions, we use the same greedy function but we allow candidates with worse greedy values to enter the RCL. The modified construction procedure is similar to the procedure given in Figure 3-13. The only difference is in the way the RCL is constructed. The RCL in the modified construction procedure is defined as

RCL := {j : δ_j ≤ δ_min + Δ}

where δ_min = min{δ_j : j = 1,...,J}, Δ = β(δ_max - δ_min), δ_max = max{δ_j : j = 1,...,J}, and 0 ≤ β ≤ 1. After a fixed number of iterations δ_min is updated (δ_min = δ_min + Δ) so that the procedure allows worse solutions to enter the RCL. Note that with this modification the procedure randomly chooses the next candidate from a set of second-best candidates. This is done until δ_min is set back to min{δ_j : j = 1,...,J}. In the rest of the dissertation the term LSM-GRASP will be used to denote the local search approach with the modified construction procedure.

3.5 Lower Bounds

The local search procedures presented above give suboptimal but feasible solutions to the PID problem. These feasible solutions are upper bounds for our minimization problem. To evaluate the quality of the heuristics, when the optimal
To reformulate the problem with fixed charge costs, we define new variables which help us decompose the amount transported from a facility to a retailer by origin. Figure 3-14 gives a network representation for the new formulation.

Let y_{jkτt} denote the fraction of the demand of retailer k in period t, d_{kt}, satisfied by facility j through production in period τ. We can now rewrite x_{jkt}, I_{jt} and p_{jt} in terms of y_{jkτt} as

x_{jkt} = Σ_{τ=1}^{t} y_{jkτt} d_{kt};  (3.42)
I_{jt} = Σ_{k=1}^{K} Σ_{τ=1}^{t} Σ_{l=t+1}^{T} y_{jkτl} d_{kl};  (3.43)
p_{jt} = Σ_{k=1}^{K} Σ_{τ=t}^{T} y_{jktτ} d_{kτ}.  (3.44)

Equation (3.42) simply indicates that the amount shipped from facility j to retailer k during period t is equal to the part of the demand, d_{kt}, satisfied from production at facility j in periods 1 through t. Equation (3.43) denotes that the amount of inventory
Figure 3-14: Extended network representation.

carried at facility j from period t to t+1 is equal to the total demand of all retailers in periods t+1 to T satisfied by facility j through production in periods 1 to t. Finally, equation (3.44) indicates that the amount of production at facility j during period t is equal to the amount of demand met from this production for all retailers in periods t through T.

Substituting (3.42), (3.43) and (3.44) into (P2) gives the following new formulation:

minimize Σ_{j=1}^{J} Σ_{t=1}^{T} s_{jt} y_{jt} + Σ_{j=1}^{J} Σ_{k=1}^{K} Σ_{t=1}^{T} s_{jkt} y_{jkt} + Σ_{j=1}^{J} Σ_{k=1}^{K} Σ_{t=1}^{T} Σ_{τ=1}^{t} γ_{jkτt} y_{jkτt}
In the above formulation it can easily be shown that γ_{jkτt} = (c_{jτ} + c_{jkt} + Σ_{l=τ}^{t-1} h_{jl}) d_{kt}. This indicates that γ_{jkτt} is the total variable cost (including production, inventory, and distribution) if the demand of retailer k at time t is met through production at facility j during time τ. Moreover, for uncapacitated problems the upper bounds W_{jt}, V_{jt}, and U_{jkt} can be set equal to

W_{jt} = Σ_{k=1}^{K} Σ_{τ=t}^{T} d_{kτ};  (3.52)
V_{jt} = Σ_{k=1}^{K} Σ_{τ=t+1}^{T} d_{kτ};  (3.53)
U_{jkt} = d_{kt}.  (3.54)

Substituting (3.52), (3.53), and (3.54) into (3.46), (3.47), and (3.48), respectively, gives the following constraints:

Σ_{k=1}^{K} Σ_{τ=t}^{T} y_{jktτ} d_{kτ} ≤ Σ_{k=1}^{K} Σ_{τ=t}^{T} d_{kτ} y_{jt}  (3.55)
Σ_{k=1}^{K} Σ_{l=t+1}^{T} d_{kl} (Σ_{τ=1}^{t} y_{jkτl}) ≤ Σ_{k=1}^{K} Σ_{l=t+1}^{T} d_{kl}  (3.56)
Σ_{τ=1}^{t} y_{jkτt} ≤ y_{jkt}  (3.57)
Constraints (3.55), (3.56), and (3.57) can, respectively, be replaced by the following set of stronger constraints:

y_{jktτ} ≤ y_{jt}  (3.58)
Σ_{τ=1}^{t} y_{jkτl} ≤ 1  (3.59)
Σ_{τ=1}^{t} y_{jkτt} = y_{jkt}  (3.60)

It is easy to show that (3.58), (3.59), and (3.60) imply (3.55), (3.56), and (3.57), respectively. Moreover, (3.58) and (3.59) are clearly valid. The last constraint, (3.60), is based on a property of an optimal solution. In an optimal solution each demand point will be satisfied by only one facility. Therefore, we can force y_{jkτt} to be binary, which indicates that (3.60) is also valid. Using the new constraints, we arrive at the following formulation after some simplification:

minimize Σ_{j=1}^{J} Σ_{t=1}^{T} s_{jt} y_{jt} + Σ_{j=1}^{J} Σ_{k=1}^{K} Σ_{t=1}^{T} Σ_{τ=1}^{t} γ′_{jkτt} y_{jkτt}
Proof. Let (y_{jt}, y_{jkτt}) be a feasible solution to (P10). Multiplying the constraints (3.62) by d_{kt} and summing them over k and t leads to (3.55). Constraints (3.56) can be reached in a similar way. Substituting the y_{jkτt} values into (3.60), (3.42), (3.43), and (3.44) gives a feasible solution to (P2). It follows that every solution to the LP relaxation of (P10) gives rise to a feasible solution of the LP relaxation of (P2).
4.1 Design and Implementation of the Subroutines

One of the major components of this dissertation is the extensive amount of computational results. Therefore, we wanted to make our subroutines available to other researchers. The subroutines execute the heuristic approaches presented in Chapter 3. A set of files, referred to as the distribution, is available upon request. The distribution includes the subroutines as well as some sample input and output files.

The following guidelines are used in the implementation of the subroutines. The codes are written in standard C and are intended to run without modification on UNIX platforms. For simplicity, only the subroutines for PID problems with fixed charge costs are presented in this section and the following section. The subroutines gr-pid.c and ds-pid.c (Table 4-1) apply local search with GRASP and local search with DSSP, respectively. The optimizer in gr-pid.c is a self-contained set of subroutines. However, ds-pid.c uses the callable libraries of CPLEX 7.0 to solve the linear network flow problems. Input and output, as well as array declarations and parameter settings, are done independently, outside of the optimizer modules. The distribution consists of eight files, which are listed in Table 4-1. The files gr-pid.c and ds-pid.c are the core of the package.

Table 4-1: The distribution.

subroutines     gr-pid.c, ds-pid.c
sample input    gr-param.dat, ds-param.dat
sample data     sample.dat
sample output   sample-gr.out, sample-ds.out
instructions    README.txt
Subroutine gr-pid.c calls two subroutines, TIME and free-and-null. Subroutine TIME is used to measure the CPU time of the algorithm, and subroutine free-and-null is used to nullify the pointers created during the execution of the code. Subroutine gr-pid.c takes the following as input from file gr-param.dat:

After the parameters are read, the following data are taken from the input file:

Before the GRASP iterations are executed, the value of the best solution found is initialized to a large value. The while loop implements the construction phase of the GRASP (Section 3.4.1). Each solution constructed is kept in a linked list called starts. The list keeps the solutions in increasing order of total cost, and each time a new solution is created it is added to the list if it is different from the existing solutions.

In the do loop following the construction phase, a local search (Section 3.2.3) is carried out in the neighborhood of each solution in the starts list. Each time a locally optimal solution is found, it is added to a list called locals. The solutions in the list are in increasing order with respect to their total costs. After the local search phase is completed, an output file (e.g., Figure 4-3) is created which includes the following:
4.2 Usage of the Subroutines

The subroutines in files gr-pid.c and ds-pid.c compute an approximate solution to a production-inventory-distribution problem using GRASP and DSSP, respectively. The variables needed as input for gr-pid.c are as follows:
8 1 -1 1
sample.dat
sample-gr.out

Figure 4-1: Input file (gr-param.dat) for gr-pid.c.

The input variables for ds-pid.c are similar. The only variables needed are seed, out, in-file, and out-file. All of the above variables are provided in an input file. An example of an input file for gr-pid.c is given in Figure 4-1.

The subroutines gr-pid.c and ds-pid.c first read the variables from the input files gr-param.dat and ds-param.dat, respectively. This is followed by reading in the data. The data for subroutines gr-pid.c and ds-pid.c include the following:

This data is provided in an in-file in the order given above. Figure 4-2 gives the format of a sample in-file. The data is read into the following arrays:
2 2 2
19.97751 35.62797 8.669039 29.90926

Figure 4-2: Sample data file (sample.dat) for gr-pid.c and ds-pid.c.

Once all parameters that control the execution of the algorithm are set, the optimization module starts. For GRASP, the optimization module consists of the iterative construction and local search phases. For DSSP, the callable libraries of CPLEX 7.0 are used to solve the network flow problem at each iteration. The problem is solved with a new set of linear costs each time, as described in Section 3.3. After the optimization is successfully completed, an output file is created as shown in Figure 4-3. If the optimization is not successful, an error message is returned.

4.3 Experimental Data

The primary goal of our computational experiments is to evaluate the solution procedures developed and presented in Chapter 3. For a procedure that finds an optimal solution, an important characteristic is the computational effort needed. For a heuristic procedure, one is often interested in evaluating how close the solution value
GRASP for Production-Inventory-Transportation Problem

CPU time                            0.00
Number of plants                    2
Number of retailers                 2
Number of periods                   2
Number of iterations                8
Initial seed                        1
Minimum cost before local search    1288.10
Minimum cost after local search     1288.10

Production    Period    Plant
55.61         1         2
38.58         2         2

Inventory

Shipment      Period    Plant    Retailer
19.98         1         2        1
35.63         1         2        2
8.67          2         2        1
29.91         2         2        2

All other variables are zero.

Figure 4-3: Sample output (sample-gr.out) of gr-pid.c.
To test the behavior of the solution procedures developed for the PID problem, we primarily use randomly generated problems and some library test problems from the literature. Unfortunately, we were not able to find library test problems that fit our problem requirements and characteristics exactly. Therefore, we found other library problems, such as facility location problems, and formulated them as PID problems. Once they were formulated as PID problems, we tested the algorithms on these problems.

4.4 Randomly Generated Problems

The behavior and performance of solution procedures are frequently tested on a collection of problem instances. The conclusions drawn about these performance characteristics strongly depend on the set of problem instances chosen. Therefore, it is important that the test problems have certain characteristics to ensure the credibility of the conclusions. Hall and Posner [33] give suggestions on generating experimental data which are applicable to most classes of deterministic optimization problems. They developed specific proposals for the generation of several types of scheduling data. They proposed variety, practical relevance, invariance, regularity, describability, efficiency, and parsimony as some of the principles that should be considered when
4.4.1 Problems with Fixed Charge Costs

The first set of problems we solved were PID problems with fixed charge production and distribution costs. LS-DSSP and LS-GRASP were used to solve these problems. For some of the problems LSM-GRASP was used. Problem size, structure, and the cost values seem to affect the difficulty of the problems. Therefore, we generated five groups of problems. Group 1 and Group 2 problems are single-period problems without inventory, and Groups 3, 4, and 5 are multi-period problems. To capture the effect of the structure of the network on the difficulty of the problems, we varied the number of facilities, retailers, and periods within each group. The problem sizes are given in Table 4-2. Problems 1 through 5 are in Group 1, problems 6 to 10 are in Group 2, problems 11 to 15 are in Group 3, problems 16 to 20 are in Group 4, and Group 5 consists of problems 21 to 25. A total of 1250 problems were solved.

It has been shown that the ratio of the total variable cost to the total fixed cost is an important measure of problem difficulty (see Hochbaum and Segev [38]). Therefore, we randomly generated demands, fixed charges, and variable costs for each test problem from different uniform distributions, as given in Table 4-3. Note that only the fixed charges are varied. For example, data set A corresponds to those problems that have small fixed charges. The fixed charges increase as we go from data set A to data set E. The distances between the facilities and the retailers are used as the unit transportation costs (c_{jkt}). For each facility and retailer we uniformly generated x and y coordinates on a 10 by 10 grid and calculated the distances. The values of the parameter α, which is used in the construction phase of LS-GRASP, were chosen from the interval (0, 1). Experiments indicated that better results were obtained for α in the interval [0.80, 0.85].
Table 4-2: Problem sizes for fixed charge networks.

Group  Problem  No. of Facilities  No. of Retailers  No. of Periods  No. of Arcs
1      1        25                 400               1               10,025
1      2        50                 400               1               20,050
1      3        75                 400               1               30,075
1      4        100                400               1               40,100
1      5        125                400               1               50,125
2      6        125                200               1               25,125
2      7        125                250               1               31,375
2      8        125                300               1               37,625
2      9        125                350               1               43,875
2      10       125                400               1               50,125
3      11       10                 70                20              14,390
3      12       15                 70                20              21,585
3      13       20                 70                20              28,780
3      14       25                 70                20              35,975
3      15       30                 70                20              43,170
4      16       30                 60                5               9,270
4      17       30                 60                10              18,570
4      18       30                 60                15              27,870
4      19       30                 60                20              37,170
4      20       30                 60                25              46,470
5      21       30                 50                20              31,170
5      22       30                 55                20              34,170
5      23       30                 60                20              37,170
5      24       30                 65                20              40,170
5      25       30                 70                20              43,170

For each problem size and for each data set we randomly generated 10 problem instances. The algorithms are programmed in the C language, and CPLEX 7.0 callable libraries are used to solve the linear NFPs and the MILPs. The algorithms are compiled and executed on an IBM computer with 2 Power3 PC processors, 200 MHz CPUs each.

The errors found for problems using data set A were the smallest. This is expected, since the fixed charges for this set are small. As the fixed charge increases, the linear approximation is no longer a close approximation. Thus, the error bounds for data set E were the highest. Nevertheless, all errors reported were less than


Table 4-3: Characteristics of test problems.

Data set | Fixed charges | Variable costs | Inventory costs | Demands
A | 10-20 | 5-15 | 1-3 | 5-55
B | 50-100 | 5-15 | 1-3 | 5-55
C | 100-200 | 5-15 | 1-3 | 5-55
D | 200-400 | 5-15 | 1-3 | 5-55
E | 1000-2000 | 5-15 | 1-3 | 5-55

9%. All errors reported here are with respect to the lower bounds unless otherwise stated. The lower bounds are obtained from the LP relaxation of the extended MILP formulation given in section 3.5. The errors are calculated as follows:

Error = (Upper Bound - Lower Bound) / Lower Bound × 100.

Table 4-4 reports the number of times the gap between the upper and lower bounds was zero for data set A (note that 50 problems were solved for each group). The number of times the heuristic approaches found the optimal solution may, however, be more than the numbers reported in the table, since the errors are calculated with respect to a lower bound. For data set B, the errors were still small (all were under 0.6%). LS-DSSP found errors equal to zero for 42 of the 50 problems in Group 1, for 36 of the problems in Group 2, and for only 1 problem in Group 3. All errors in Groups 4 and 5 were greater than zero. The number of errors equal to zero using LS-GRASP, however, was 12 for Group 1 problems, only 1 for Group 4 problems, and none for the other groups. As the fixed charges were increased the errors also increased.

Table 4-4: Number of times the error was zero for data set A.

Group | DSSP | GR8 | GR16 | GR32 | GR64 | GR128
Group 1 | 50 | 26 | 28 | 29 | 32 | 33
Group 2 | 50 | 19 | 23 | 27 | 30 | 32
Group 3 | 42 | 24 | 25 | 26 | 29 | 30
Group 4 | 30 | 17 | 22 | 29 | 30 | 32
Group 5 | 29 | 15 | 18 | 20 | 22 | 23

GR8: GRASP with 8 iterations, GR16: GRASP with 16 iterations, etc.
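The generation scheme described above (uniform x and y coordinates on a 10-by-10 grid, Euclidean distances as unit transportation costs, and uniformly distributed fixed charges, variable costs, inventory costs, and demands) can be sketched as follows. This is an illustrative reconstruction, not the authors' C code; the function name and the default intervals (those of data set A) are assumptions.

```python
import math
import random

def generate_instance(J, K, T, fixed_rng=(10, 20), var_rng=(5, 15),
                      hold_rng=(1, 3), demand_rng=(5, 55), seed=0):
    """Sketch of a random PID instance with J facilities, K retailers,
    and T periods; all cost intervals follow Table 4-3 (data set A)."""
    rng = random.Random(seed)
    # x/y coordinates drawn uniformly on a 10-by-10 grid
    fac = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(J)]
    ret = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(K)]
    # Euclidean distances serve as the unit transportation costs c_jkt
    c = [[math.dist(fac[j], ret[k]) for k in range(K)] for j in range(J)]
    s = [[rng.uniform(*fixed_rng) for _ in range(T)] for _ in range(J)]   # fixed charges
    p = [[rng.uniform(*var_rng) for _ in range(T)] for _ in range(J)]     # variable production costs
    h = [[rng.uniform(*hold_rng) for _ in range(T)] for _ in range(J)]    # inventory costs
    d = [[rng.uniform(*demand_rng) for _ in range(T)] for _ in range(K)]  # demands
    return {"c": c, "s": s, "p": p, "h": h, "d": d}
```

Fixing the seed makes the instances reproducible, which is convenient when comparing several heuristics on the same set of randomly generated problems.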


The maximum error encountered for data set A was 0.02% for both heuristics. The results indicate that LS-DSSP and LS-GRASP are both powerful heuristics for these problems, but another reason for obtaining such good solutions is that the problems in set A are relatively easy. We were actually able to find optimal solutions with CPLEX for all problems in this set in a reasonable amount of time. On average, LS-DSSP took less than a second to solve the largest problems in Groups 1 and 2. LS-GRASP with 8 iterations took about 1.5 seconds for the same problems. Finding lower bounds also took about a second, whereas it took more than 25 seconds to find optimal solutions for each of these problems using CPLEX. The problems in Groups 3, 4, and 5 were relatively more difficult. Average times were 8, 10, 15, and 50 seconds for LS-DSSP, LS-GRASP with 8 iterations, LP, and CPLEX, respectively. Optimal solutions were found for all problems using data set B as well. However, for data set C, optimal solutions were found only for Group 1, 2, and 3 problems and for some of the problems in Groups 4 and 5. For data set D, optimal solutions were found for some of the Group 1 and 2 problems. Finally, data set E was the most difficult set. CPLEX was able to provide optimal solutions for only a few of the Group 1 and 2 problems and none for the other 3 groups.

From Figure 4-4 we can see that LS-DSSP gave better results for Group 1 and 2 problems and LS-GRASP gave slightly better results for problems in Groups 3, 4, and 5. Similar behavior was observed for the other data sets. The results also indicate that LS-GRASP was more robust in the sense that the quality of the solutions was not affected greatly by the network structure. The proposed local search also proved to be powerful. For example, average errors for the problems in Figure 4-4(d) without the local search were roughly 2.9% for the DSSP and 11% for GRASP with 8 iterations.

If more iterations are performed, LS-GRASP finds better solutions. Figure 4-5 shows that the errors decrease as the number of iterations increases. However, the computational time required by LS-GRASP increases linearly as more iterations are performed.


Figure 4-4: Average errors for problems using data set D.
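The GRASP construction phase referred to throughout this section builds a solution step by step, choosing at random from a restricted candidate list (RCL) controlled by the parameter α. The sketch below is a generic value-based RCL, not the thesis' exact criterion (that is given in sections 3.4.1 and 3.4.2); all names are illustrative, and in this convention α = 0 is purely greedy while α = 1 is purely random.

```python
import random

def grasp_construct(candidates, cost, alpha, rng=random):
    """Generic GRASP construction: at each step, keep the candidates whose
    greedy cost is within an alpha-fraction of the best one (the RCL) and
    pick one of them uniformly at random."""
    solution = []
    remaining = list(candidates)
    while remaining:
        costs = [cost(x, solution) for x in remaining]
        lo, hi = min(costs), max(costs)
        threshold = lo + alpha * (hi - lo)      # RCL acceptance threshold
        rcl = [x for x, cx in zip(remaining, costs) if cx <= threshold]
        pick = rng.choice(rcl)                  # randomized greedy selection
        solution.append(pick)
        remaining.remove(pick)
    return solution
```

Running this construction many times with different random choices, each followed by local search, is what produces the multi-start behavior whose iteration count is varied in the experiments above.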


Figure 4-5: Number of GRASP iterations vs. solution quality.

With the modified construction procedure we try to generate solutions from different parts of the feasible region. This is done by allowing the construction of solutions that are not necessarily good. Our new criterion allows less desirable candidates to enter the restricted candidate list. The modified construction procedure was explained in section 3.4.2.

The best results were obtained for α in the interval [0.05, 0.20]. Since LS-GRASP did relatively poorly for Group 1 and 2 problems, we present results in Table 4-5 that show the improvement achieved with the new implementation, LSM-GRASP. To have a reasonable comparison, LSM-GRASP was run until 128 different solutions were investigated. As we can see, there is significant improvement over the LS-GRASP results.


Table 4-5: Average errors for problems with data set E.

Problem | Error (%): LS-DSSP | LSM-GRASP | CPLEX | CPU Time (s): LS-DSSP | LSM-GRASP | CPLEX
1 | 0.02 | 3.86 | 0.82 | 0.32 | 4.14 | 4.55
2 | 0.17 | 4.11 | 1.95 | 0.78 | 8.05 | 8.81
3 | 0.24 | 3.55 | 1.72 | 1.14 | 11.47 | 13.12
4 | 0.54 | 4.15 | 2.02 | 1.75 | 15.03 | 17.26
5 | 0.56 | 3.86 | 2.00 | 2.85 | 19.08 | 22.17
6 | 2.83 | 2.91 | 2.45 | 2.12 | 9.19 | 10.69
7 | 1.75 | 3.43 | 2.18 | 2.17 | 11.79 | 13.57
8 | 1.23 | 4.03 | 2.29 | 2.04 | 14.04 | 16.40
9 | 0.86 | 3.58 | 2.04 | 2.32 | 16.28 | 19.03
10 | 0.56 | 3.86 | 2.00 | 2.92 | 19.01 | 22.17

Problems with fixed charge costs and production capacities. The second set of problems we solved were PID problems with fixed charge costs and production capacities. Table 4-6 gives the set of problems solved in this section. Note that the local search procedures we have developed are for uncapacitated problems. For example, the construction phase of LS-GRASP and LSM-GRASP makes use of a property of an optimal solution to an uncapacitated problem, namely that an optimal solution is a tree on the network. The local search procedure for LS-DSSP also uses this property of an uncapacitated PID problem. DSSP, however, is applicable to both capacitated and uncapacitated PID problems. Therefore, we use only DSSP to solve the problems generated in this section.

As soon as capacities are introduced, the problems become much more difficult. The difficulty of the problems depends on how tight the capacities are. In order to simplify the problems, we introduce only production capacities and we also assume that the distribution costs are linear. In other words, the PID problems generated in this section have fixed charge production costs, linear inventory and distribution costs,


Table 4-6: Problem sizes for fixed charge networks with capacity constraints.

Problem | No. of Facilities | No. of Retailers | No. of Periods | No. of Arcs
26 | 5 | 30 | 30 | 4,795
27 | 5 | 30 | 40 | 6,395
28 | 5 | 30 | 50 | 7,995
29 | 10 | 30 | 30 | 9,590
30 | 10 | 30 | 40 | 12,790
31 | 10 | 30 | 50 | 15,990
32 | 10 | 40 | 30 | 12,590
33 | 10 | 40 | 40 | 16,790
34 | 10 | 40 | 50 | 20,990

and capacities on the amount of production. These simplifying assumptions can easily be relaxed, since DSSP is a general procedure that can solve almost any PID problem with network constraints and concave costs. Data set C in Table 4-3 is used to generate the input data for these problems. The capacity on each production arc for a given problem is a constant, calculated as

Capacity = λ × (Mean demand × Number of retailers) / Number of plants.

For each problem in Table 4-6 we generated 10 instances. We also tested 8 different values of λ. Thus, a total of 720 problems were solved. The average errors for these problems are presented in Table 4-7 (an explanation of how these errors were calculated is given below). As we can see from the table, the errors are all close to zero. In fact, the maximum error among the 720 instances was 0.34%. Table 4-7 also shows that the average errors are slightly higher when λ = 2. Similar behavior is observed in Tables 4-8 and 4-9. Table 4-8 gives the average CPU time DSSP took to solve each


Table 4-7: Average errors for problems with production capacities.

Problem | Average Error (%), for the eight increasing values of λ (from 1.1 to 5)
26 | 0.02 0.02 0.02 0.04 0.06 0.07 0.09 0.09
27 | 0.03 0.05 0.04 0.04 0.13 0.10 0.14 0.13
28 | 0.02 0.04 0.04 0.04 0.17 0.07 0.08 0.10
29 | 0.06 0.08 0.07 0.09 0.20 0.16 0.10 0.14
30 | 0.07 0.09 0.12 0.14 0.21 0.13 0.14 0.15
31 | 0.11 0.12 0.13 0.13 0.23 0.15 0.16 0.17
32 | 0.03 0.03 0.06 0.08 0.11 0.11 0.09 0.12
33 | 0.05 0.06 0.08 0.06 0.15 0.11 0.10 0.09
34 | 0.07 0.07 0.08 0.11 0.18 0.11 0.13 0.14

problem, whereas Table 4-9 gives the average CPU times for CPLEX. The highest CPU times for DSSP and CPLEX were observed when λ = 2 and λ = 1.2, respectively. This indicates that the problems became more difficult as λ was increased from 1.1 to 2, but as λ was increased further the problems became easier again.

Table 4-8: CPU times of DSSP for problems with production capacities.

Problem | CPU Time (seconds), for the eight increasing values of λ (from 1.1 to 5)
26 | 0.08 0.09 0.09 0.09 0.19 0.09 0.07 0.08
27 | 0.11 0.13 0.11 0.13 0.21 0.12 0.11 0.10
28 | 0.14 0.16 0.17 0.17 0.34 0.17 0.14 0.14
29 | 0.21 0.17 0.21 0.21 0.39 0.20 0.17 0.15
30 | 0.23 0.32 0.31 0.34 0.49 0.31 0.26 0.25
31 | 0.37 0.38 0.36 0.46 0.66 0.37 0.30 0.32
32 | 0.24 0.26 0.27 0.28 0.40 0.24 0.21 0.20
33 | 0.39 0.40 0.43 0.41 0.60 0.42 0.35 0.33
34 | 0.45 0.46 0.60 0.59 0.74 0.49 0.44 0.42

In our implementations CPLEX was run for at most 600 CPU seconds. If an optimal solution is found within 600 CPU seconds, then the objective function value of the solution obtained from DSSP is compared to the objective function value of this optimal solution. In other words, the error is calculated as

Error = (DSSP Solution Value - Optimal Solution Value) / Optimal Solution Value × 100.


However, if an optimal solution is not found within 600 CPU seconds, then CPLEX returns a lower bound. In this case, the objective function value of the DSSP solution is compared to the lower bound and the error is given by

Error = (DSSP Solution Value - Lower Bound) / Lower Bound × 100.

Some of the problems in Table 4-9 have average CPU times of 600. This means that for these problems CPLEX was not able to find an optimal solution within our time limit. Thus, the average errors reported in Table 4-7 for these problems are with respect to a lower bound. The errors are nevertheless still small.

Table 4-9: CPU times of CPLEX for problems with production capacities.

Problem | CPU Time (seconds), for the eight increasing values of λ (from 1.1 to 5)
26 | 4 6 7 8 9 7 3 2
27 | 9 31 22 265 24 5 4 4
28 | 16 40 34 124 69 15 8 7
29 | 234 186 572 411 373 85 33 9
30 | 553 600 600 600 600 352 71 28
31 | 600 600 600 600 600 600 600 63
32 | 40 137 600 600 235 69 17 14
33 | 600 600 600 600 600 242 49 46
34 | 600 600 600 600 600 600 108 55

Table 4-10 also gives us an idea of the difficulty of the problems. This table shows the percentage of production arcs that have positive flow in the optimal solution obtained from CPLEX. For example, when λ = 1.1, on average 93% of the production arcs were used in an optimal solution (N/A denotes that an optimal solution was not available). This indicates that the capacity constraints are so tight that there is production at almost every facility in every period. In other words, the number of feasible solutions that need to be investigated is small, so CPLEX does not take too much time to solve these problems. As λ increases, the percentage of production arcs used in an optimal solution decreases. When λ = 5, this percentage drops to about 30% and these problems are practically uncapacitated.
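The capacity rule above can be written out as a one-line helper. The symbol of the tightness multiplier is lost in the extracted text and is written here as lam (λ is an assumption); with demands uniform on [5, 55] the mean demand is 30, and the function name is illustrative.

```python
def production_capacity(mean_demand, n_retailers, n_plants, lam):
    """Constant capacity placed on every production arc: lam times the
    average demand each plant would face if demand were split evenly.
    lam near 1 gives tight capacities; large lam is effectively
    uncapacitated (compare lam = 1.1 vs. lam = 5 in Table 4-10)."""
    return lam * mean_demand * n_retailers / n_plants

# e.g. problem 29 of Table 4-6 (10 facilities, 30 retailers), mean demand 30:
cap = production_capacity(30, 30, 10, 1.1)
```

This makes explicit why λ slightly above 1 is the tight regime: each plant's capacity then barely exceeds its even share of total demand.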


Table 4-10: Percentage of production arcs used in the optimal solution.

Problem | Percentage of production arcs (%), for the eight increasing values of λ (from 1.1 to 5)
26 | 94 91 90 87 56 43 37 34
27 | 94 92 90 86 56 43 39 37
28 | 93 91 89 86 56 42 37 34
29 | 91 89 87 84 51 37 31 27
30 | 91 N/A N/A N/A N/A 37 30 27
31 | N/A N/A N/A N/A N/A N/A N/A 27
32 | 91 89 N/A N/A 52 39 32 30
33 | N/A N/A N/A N/A N/A 38 32 28
34 | N/A N/A N/A N/A N/A N/A 32 29

4.4.2 Problems with Piecewise-Linear Concave Costs

We also tested our local search procedures on problems with piecewise-linear concave costs. These problems were much more difficult than the problems with fixed charge costs, due to the increase in problem size as the number of pieces on each arc increases. The structure of the networks generated and the characteristics of the input data are similar to those of the problems in section 4.4.1. The inventory costs, h_jt, are generated in exactly the same way, i.e., from a uniform distribution on the interval [1, 3]. The demands are also uniformly generated from the same distribution on the interval [5, 55]. The transportation costs are the same as well: the distances between the facilities and retailers are used as the variable costs, and the fixed costs are uniformly generated from the different intervals given in Table 4-3. The reason we generate fixed charge transportation costs is that the problems we consider are uncapacitated concave minimization problems, which means that in an optimal solution the flow on a transportation arc will be equal to either zero or the demand. In other words, x_jkt = 0 or x_jkt = d_kt in an optimal solution for all j = 1,...,J, k = 1,...,K, and t = 1,...,T. Thus, if piecewise-linear concave costs were generated for transportation arcs they could be reduced to fixed charge cost functions, simplifying the formulation.


The production costs, however, are different, since they are piecewise-linear concave rather than fixed charge. To generate a piecewise-linear concave cost function with l_jt pieces for a given production arc, we first divide the maximum possible flow on that arc into l_jt equal intervals. Next, we generate a variable cost for each piece. Note that the variable costs should be decreasing (see equation 2.13) to guarantee concavity. Therefore, l_jt variable costs are uniformly generated from the interval [5, 15] and then sorted in decreasing order. The fixed cost for the first interval is randomly generated from one of the five different uniform distributions given in Table 4-3. The fixed costs of the other intervals are then calculated so that consecutive pieces join continuously at their breakpoints:

s_{jt,i} = s_{jt,i-1} + (c_{jt,i-1} - c_{jt,i}) b_{jt,i}, for all i = 2,...,l_jt,

where b_{jt,i} denotes the flow value at which piece i begins.

For each problem in Table 4-11 five different data sets are used. In other words, the fixed charges are generated from the five different uniform distributions given in Table 4-3, which leads to different problem characteristics. Ten problem instances are solved for each data set and problem combination (a total of 750 problems). Based on our observations from the problems with fixed charge costs, only LSM-GRASP is used to solve these new problems with piecewise-linear concave production costs. The GRASP parameter α is chosen randomly from the interval [0.05, 0.20] and 256 iterations are performed. These problems are more difficult than those presented in section 4.4.1; therefore, we ran more iterations of LSM-GRASP to obtain good solutions at the expense of extra computational time. As shown in


Table 4-11: Problem sizes for piecewise-linear concave networks.

Group | Problem | No. of Facilities | No. of Retailers | No. of Periods | No. of Pieces | No. of Arcs (org) | No. of Arcs (ext)
6 | 35 | 25 | 400 | 1 | 2 | 10,025 | 10,050
6 | 36 | 25 | 400 | 1 | 4 | 10,025 | 10,100
6 | 37 | 25 | 400 | 1 | 8 | 10,025 | 10,200
7 | 38 | 125 | 200 | 1 | 2 | 25,125 | 25,250
7 | 39 | 125 | 200 | 1 | 4 | 25,125 | 25,500
7 | 40 | 125 | 200 | 1 | 8 | 25,125 | 26,000
8 | 41 | 10 | 70 | 20 | 2 | 14,390 | 14,590
8 | 42 | 10 | 70 | 20 | 4 | 14,390 | 14,990
8 | 43 | 10 | 70 | 20 | 8 | 14,390 | 15,790
9 | 44 | 30 | 60 | 5 | 2 | 9,270 | 9,420
9 | 45 | 30 | 60 | 5 | 4 | 9,270 | 9,720
9 | 46 | 30 | 60 | 5 | 8 | 9,270 | 10,320
10 | 47 | 30 | 50 | 20 | 2 | 31,170 | 31,770
10 | 48 | 30 | 50 | 20 | 4 | 31,170 | 32,970
10 | 49 | 30 | 50 | 20 | 8 | 31,170 | 35,370

org: original network, ext: extended network after ASP

Tables 4-12, 4-13, 4-14, 4-15, and 4-16, LSM-GRASP took more time than LS-DSSP but gave slightly better results.

CPLEX is used to solve the ASP formulations of the problems given in section 2.4.2. We limited the running time of CPLEX to 2,400 CPU seconds. If an optimal solution is found within 2,400 seconds, then CPLEX returns a lower bound and an upper bound, both of which are equal to the optimal solution value. If an optimal solution is not reached in 2,400 seconds, then CPLEX returns a lower bound. For some problems CPLEX was not even able to find a feasible solution within the specified time limit, in which case an upper bound is not available. The errors reported in Tables 4-12 through 4-16 are the average errors of the upper bounds obtained from LS-DSSP, LSM-GRASP, and CPLEX (if an upper bound is available) with respect to the lower bound of CPLEX, calculated in the following way:

Error = (Upper Bound - CPLEX Lower Bound) / CPLEX Lower Bound × 100.
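The construction of the piecewise-linear concave production costs described above (equal-width intervals, decreasing variable costs, and fixed costs chosen so consecutive pieces join continuously) can be sketched as follows. The breakpoint symbol is lost in the extracted equation and is written here as b_i; the function name and defaults are illustrative, not the authors' code.

```python
import random

def concave_production_cost(n_pieces, max_flow, s1, var_rng=(5, 15), seed=0):
    """Piecewise-linear concave cost with n_pieces equal-width intervals.
    Variable costs are drawn uniformly and sorted in decreasing order
    (decreasing slopes guarantee concavity); the fixed cost of piece i is
    set so that pieces i-1 and i meet at the breakpoint b_i where piece i
    begins.  s1 is the fixed cost of the first piece."""
    rng = random.Random(seed)
    width = max_flow / n_pieces
    c = sorted((rng.uniform(*var_rng) for _ in range(n_pieces)), reverse=True)
    s = [s1]
    for i in range(1, n_pieces):
        b_i = i * width
        # continuity at b_i:  s_i + c_i * b_i = s_{i-1} + c_{i-1} * b_i
        s.append(s[i - 1] + (c[i - 1] - c[i]) * b_i)
    return s, c
```

Because the slopes decrease and the fixed costs are defined by the continuity condition, the resulting cost function is automatically continuous and concave on [0, max_flow].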


Table 4-12: Summary of results for problems using data set A.

Problem | Error (%): LS-DSSP | LSM-GRASP | CPLEX | CPU Time (s): LS-DSSP | LSM-GRASP | CPLEX
35 | 0.36 | 0.36 | 0.00 | 0.40 | 23.98 | 30.45
36 | 3.40 | 1.35 | 0.00 | 0.97 | 27.98 | 208.40
37 | 2.26 | 0.84 | 0.00 | 2.84 | 40.41 | 508.54
38 | 0.00 | 0.00 | 0.00 | 0.88 | 56.51 | 69.31
39 | 0.49 | 0.46 | 0.00 | 2.67 | 84.10 | 631.32
40 | 9.97 | 9.41 | 11.15 | 5.21 | 93.43 | 2,400.00
41 | 0.00 | 0.00 | 0.00 | 6.08 | 301.26 | 222.10
42 | 2.56 | 2.52 | 3.43 | 12.42 | 376.34 | 2,400.00
43 | 16.09 | 16.14 | 17.72 | 38.06 | 447.86 | 2,400.00
44 | 0.00 | 0.00 | 0.00 | 1.09 | 55.70 | 46.13
45⁹ | 0.33 | 0.26 | – | 2.34 | 70.02 | 1,134.02
46 | 11.03 | 10.90 | 13.16 | 6.53 | 111.03 | 2,400.00
47 | 0.01 | 0.00 | 0.00 | 14.29 | 627.57 | 1,078.99
48 | 7.67 | 7.68 | 8.28 | 33.04 | 874.33 | 2,400.00
49 | 26.14 | 26.19 | N/A | 84.77 | 1,249.34 | 2,400.00

The superscripts over the problem numbers denote the number of instances out of ten for which CPLEX found an optimal solution. If CPLEX finds optimal solutions in all ten instances, then the average error for CPLEX is zero and the superscript is omitted. If CPLEX cannot find an optimal solution for any of the ten instances, then the average error is higher than zero and there is no superscript over that problem number. For example, in Table 4-12 problem 45 has superscript 9, which means that CPLEX found optimal solutions for 9 out of 10 instances. Problems 35, 36, 37, 38, 39, 41, 44, and 47 have no superscripts but the average errors for CPLEX are all zero for these problems, indicating that CPLEX found optimal solutions for all instances. For the remaining problems (40, 42, 43, 46, 48, and 49) CPLEX was not able to find an optimal solution for any of the instances. In fact, for problem 49 CPLEX did not even find a feasible solution.

As the number of linear pieces in each production cost function increases, the problems seem to get more difficult, which is expected because the network increases


Table 4-13: Summary of results for problems using data set B.

Problem | Error (%): LS-DSSP | LSM-GRASP | CPLEX | CPU Time (s): LS-DSSP | LSM-GRASP | CPLEX
35 | 0.25 | 0.24 | 0.00 | 0.46 | 22.22 | 76.50
36 | 2.32 | 1.19 | 0.00 | 1.08 | 29.51 | 811.98
37⁸ | 1.82 | 2.39 | – | 3.35 | 44.30 | 1,499.32
38 | 0.08 | 0.02 | 0.00 | 1.51 | 60.04 | 268.35
39¹ | 3.01 | 5.40 | – | 3.81 | 88.01 | 2,304.37
40 | 18.15 | 17.77 | 49.22 | 8.89 | 105.70 | 2,400.00
41⁴ | 0.05 | 0.02 | – | 7.30 | 293.44 | 2,106.00
42 | 6.15 | 6.10 | 7.12 | 16.33 | 395.21 | 2,400.00
43 | 21.76 | 21.74 | 23.56 | 48.04 | 487.02 | 2,400.00
44 | 0.07 | 0.06 | 0.00 | 1.29 | 57.91 | 142.50
45³ | 3.90 | 4.65 | – | 3.15 | 73.99 | 2,296.52
46 | 17.05 | 16.71 | 20.40 | 7.64 | 109.98 | 2,400.00
47¹ | 0.23 | 0.28 | – | 18.33 | 661.42 | 2,388.10
48 | 14.37 | 14.20 | N/A | 42.54 | 910.27 | 2,400.00
49 | 23.63 | 23.37 | N/A | 114.22 | 1,243.62 | 2,400.00

in size. For example, in Table 4-13 problems 38, 39, and 40 have the same network structure and the same cost characteristics, but the numbers of pieces on their production arcs differ: 2, 4, and 8 pieces for problems 38, 39, and 40, respectively, as given in Table 4-11. LS-DSSP and LSM-GRASP solve the problems on the original network; therefore, the increase in the CPU times of these two approaches is not as high as the increase in the CPU times of CPLEX. This is due to the fact that CPLEX works on the extended network, which is larger in size. For problem 38, CPLEX was able to find an optimal solution for all instances and the average time spent was about 268 CPU seconds. The average error for LS-DSSP was 0.08%, which took about 1.5 seconds; the CPU time for LSM-GRASP was 60 seconds and the average error was 0.02%. As the number of pieces increased, the errors and the CPU times both increased. For example, the average CPU times for LS-DSSP, LSM-GRASP, and CPLEX were 8.89, 105.7, and 2,400 seconds, respectively, for problem 40. It is clear that LS-DSSP was much faster than the other two for these problems.


Table 4-14: Summary of results for problems using data set C.

Problem | Error (%): LS-DSSP | LSM-GRASP | CPLEX | CPU Time (s): LS-DSSP | LSM-GRASP | CPLEX
35 | 0.10 | 0.02 | 0.00 | 0.52 | 22.87 | 155.00
36⁸ | 1.16 | 0.71 | – | 1.18 | 28.37 | 1,412.73
37² | 3.08 | 7.94 | – | 3.18 | 42.84 | 2,340.82
38 | 0.08 | 0.14 | 0.00 | 1.65 | 59.14 | 598.26
39 | 9.29 | 8.75 | 11.92 | 4.70 | 82.22 | 2,400.00
40 | 16.17 | 15.41 | N/A | 8.86 | 106.15 | 2,400.00
41 | 0.26 | 0.30 | 0.30 | 8.10 | 291.93 | 2,400.00
42 | 6.23 | 6.25 | 7.21 | 14.71 | 386.89 | 2,400.00
43 | 20.20 | 20.23 | 21.96 | 43.81 | 505.90 | 2,400.00
44 | 0.15 | 0.21 | 0.00 | 1.52 | 57.38 | 371.82
45 | 7.36 | 7.00 | 8.37 | 3.53 | 76.54 | 2,400.00
46 | 17.56 | 16.74 | 23.52 | 7.90 | 110.72 | 2,400.00
47 | 2.45 | 2.59 | N/A | 20.18 | 655.19 | 2,400.00
48 | 13.35 | 13.08 | N/A | 55.96 | 893.06 | 2,400.00
49 | 22.23 | 21.64 | N/A | 118.03 | 1,282.63 | 2,400.00

Another observation about Tables 4-12 through 4-16 concerns the CPU times. Note that the fixed charges increase as we go from Table 4-12 to Table 4-16, and the CPU times of CPLEX increase as the fixed charges increase. The CPU times of LS-DSSP and LSM-GRASP, however, are not affected by the input data.

The errors reported in Tables 4-12 to 4-16 are with respect to either an optimal solution or a lower bound obtained from CPLEX. As we can see from these tables, the errors are relatively large for problems 40, 43, 46, and 49. The production cost functions in these problems have eight pieces, which makes them difficult to solve. The large errors are possibly due to poor lower bounds. Therefore, we first transformed these piecewise-linear concave PID problems into fixed charge PID


Table 4-15: Summary of results for problems using data set D.

Problem | Error (%): LS-DSSP | LSM-GRASP | CPLEX | CPU Time (s): LS-DSSP | LSM-GRASP | CPLEX
35 | 0.04 | 0.12 | 0.00 | 0.71 | 21.83 | 229.56
36⁶ | 1.37 | 1.69 | – | 0.96 | 26.43 | 1,993.51
37 | 7.88 | 6.19 | 11.02 | 2.33 | 40.88 | 2,400.00
38⁵ | 0.72 | 0.33 | – | 2.54 | 57.01 | 1,978.91
39 | 9.29 | 8.67 | N/A | 5.63 | 76.78 | 2,400.00
40 | 15.30 | 13.99 | N/A | 9.46 | 96.96 | 2,400.00
41 | 0.63 | 0.99 | 1.29 | 8.61 | 294.96 | 2,400.00
42 | 4.92 | 5.28 | 6.44 | 18.22 | 378.43 | 2,400.00
43 | 16.17 | 16.58 | 18.01 | 39.33 | 502.67 | 2,400.00
44⁷ | 0.80 | 0.36 | – | 1.89 | 54.59 | 1,505.39
45 | 9.19 | 8.93 | 9.69 | 3.46 | 69.54 | 2,400.00
46 | 16.39 | 15.51 | N/A | 9.42 | 103.10 | 2,400.00
47 | 4.86 | 5.53 | N/A | 26.54 | 610.61 | 2,400.00
48 | 13.12 | 12.87 | N/A | 53.67 | 807.60 | 2,400.00
49 | 20.71 | 20.54 | N/A | 135.07 | 1,181.78 | 2,400.00

problems using the node separation procedure. Then, we used the extended problem formulation given in section 3.5 to obtain lower bounds. We observed significant improvement in the errors. However, one drawback of the extended formulation is that the problem grows tremendously after the transformations. Therefore, solving even the LP relaxations of these problems becomes computationally expensive. In fact, CPLEX ran out of memory when we attempted to solve problems 43 and 49. Table 4-17 gives the problem sizes for problems 40, 43, 46, and 49.

As we can see from Table 4-17, problems 43 and 49 have about 1.2 and 2.5 million variables, respectively, in their corresponding extended formulations. CPLEX was not able to solve these problems. However, we obtained good lower bounds for the other two problems. The errors for these problems are given in Table 4-18. For example, the errors reported in Table 4-12 for problem 40 were 9.97% for LS-DSSP, 9.41% for LSM-GRASP, and 11.15% for CPLEX, but those errors decreased to 0.88%, 0.71%, and 1.81%, respectively.


Table 4-16: Summary of results for problems using data set E.

Problem | Error (%): LS-DSSP | LSM-GRASP | CPLEX | CPU Time (s): LS-DSSP | LSM-GRASP | CPLEX
35 | 0.16 | 0.76 | 0.00 | 0.79 | 18.82 | 456.69
36⁵ | 0.60 | 0.34 | – | 1.05 | 22.42 | 1,817.66
37 | 3.59 | 3.85 | 4.72 | 2.27 | 30.00 | 2,400.00
38 | 3.83 | 4.33 | N/A | 3.17 | 53.45 | 2,400.00
39 | 7.30 | 7.15 | N/A | 5.23 | 67.91 | 2,400.00
40 | 9.18 | 8.88 | N/A | 9.57 | 82.76 | 2,400.00
41 | 1.93 | 3.06 | 3.88 | 12.03 | 276.38 | 2,400.00
42 | 3.58 | 4.73 | 5.99 | 20.38 | 323.73 | 2,400.00
43 | 6.75 | 7.58 | 9.15 | 43.04 | 411.45 | 2,400.00
44 | 3.75 | 4.46 | 3.40 | 3.15 | 52.33 | 2,400.00
45 | 9.17 | 9.35 | 9.80 | 4.51 | 65.20 | 2,400.00
46 | 12.34 | 12.03 | N/A | 8.16 | 83.65 | 2,400.00
47 | 9.25 | 9.77 | N/A | 40.08 | 587.33 | 2,400.00
48 | 12.72 | 13.68 | N/A | 70.12 | 750.53 | 2,400.00
49 | 15.92 | 16.04 | N/A | 112.23 | 959.70 | 2,400.00

4.5 Library Test Problems

As we mentioned in section 4.3, real-world problems and library test problems are useful, but as McGeoch [59] notes, it is usually difficult to obtain these kinds of problems. He also notes that library data sets may quickly become obsolete. Moreover, it is difficult to generalize results from a small set of problem instances. Another disadvantage of using library problems is that researchers may be tempted to tune their algorithms so that they work well on that particular set of instances. This may favor procedures that work poorly on general problems and may disfavor procedures that have superior performance (Hooker [40]).

We could not find any PID problems in the OR-library, but there were uncapacitated warehouse location problems. We formulated these as PID problems and used LS-DSSP and LS-GRASP to solve them. The data files are located at http://mscmga.ms.ic.ac.uk/jeb/orlib/uncapinfo.html and they have the following information:


Table 4-17: Problem sizes for the extended formulation after NSP.

Problem | No. of Facilities | No. of Retailers | No. of Periods | No. of Pieces | No. of Arcs (org) | No. of Arcs (ext)
40 | 125 | 200 | 1 | 8 | 25,125 | 201,000
43 | 10 | 70 | 20 | 8 | 14,390 | 1,177,600
46 | 30 | 60 | 5 | 8 | 9,270 | 217,200
49 | 30 | 50 | 20 | 8 | 31,170 | 2,524,800

org: original network, ext: extended network formulation after NSP

Table 4-18: Summary of results for problems 40 and 46 after NSP.

Data Set | Problem | Error (%): LS-DSSP | LSM-GRASP | CPLEX | LP-NSP CPU (seconds)
A | 40 | 0.88 | 0.71 | 1.81 | 6,761
A | 46 | 0.77 | 0.66 | 2.73 | 1,370
B | 40 | 1.04 | 0.72 | 27.67 | 6,340
B | 46 | 0.78 | 0.48 | 3.69 | 1,662
C | 40 | 1.17 | 0.51 | N/A | 6,053
C | 46 | 1.43 | 0.73 | 6.59 | 1,617
D | 40 | 1.89 | 0.73 | N/A | 6,079
D | 46 | 1.48 | 1.39 | N/A | 2,350
E | 40 | 1.65 | 1.41 | N/A | 7,126
E | 46 | 2.94 | 2.66 | N/A | 3,626

The warehouses in the location problems are treated as the facilities in the PID problem and the customers are treated as the retailers. There is only a single period, and the fixed costs of opening a warehouse are used as the fixed production costs at the facilities. Note that the variable costs of production will be zero. The distribution costs are given by the costs of serving each customer from each warehouse.


Table 4-19: Uncapacitated warehouse location problems from the OR library.

Problem | Error (%): DSSP | GRASP | LS-DSSP | LSM-GRASP
cap71 | 0.73 | 4.65 | 0.00 | 0.00
cap72 | 1.49 | 6.21 | 0.13 | 0.00
cap73 | 1.35 | 2.68 | 0.47 | 0.45
cap74 | 0.56 | 4.12 | 0.37 | 0.37
cap101 | 1.77 | 7.86 | 0.56 | 0.24
cap102 | 2.26 | 6.30 | 0.39 | 0.09
cap103 | 1.21 | 6.46 | 0.81 | 0.71
cap104 | 1.37 | 1.37 | 0.61 | 0.00
cap131 | 4.49 | 8.51 | 2.22 | 0.85
cap132 | 3.90 | 2.92 | 2.34 | 0.86
cap133 | 3.16 | 5.98 | 1.03 | 0.79
cap134 | 2.09 | 1.37 | 1.56 | 0.00

The errors in the first column of Table 4-19 are for the DSSP procedure when no local search is applied. Similarly, the second column gives the errors for GRASP with only the construction phase. The last two columns are the errors for LS-DSSP and LSM-GRASP. As we can see from the results, the local search procedure improved the error bounds significantly. The errors for LSM-GRASP were all under 1%; LS-DSSP gave higher error bounds. In terms of CPU times, LS-DSSP was faster than LSM-GRASP. These are relatively small problems, and the CPU times for LS-DSSP were about 0.01 seconds for all problems. The CPU times for LSM-GRASP varied from 0.3 seconds to 1.1 seconds.
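The mapping from an OR-library uncapacitated warehouse location instance to a single-period PID instance, as described above, can be sketched like this. This is an illustrative reconstruction; the field and function names are assumptions, not the authors' code.

```python
def uwlp_to_pid(opening_cost, serve_cost):
    """Map an uncapacitated warehouse location instance to a
    single-period PID instance: warehouses become facilities, customers
    become retailers, warehouse opening costs become fixed production
    costs, variable production costs are zero, and the
    warehouse-to-customer service costs become the distribution costs.
    With a single period there are no inventory arcs."""
    n_facilities = len(opening_cost)
    return {
        "periods": 1,
        "fixed_production_cost": list(opening_cost),
        "variable_production_cost": [0.0] * n_facilities,
        "distribution_cost": [list(row) for row in serve_cost],
    }
```

Under this mapping, opening warehouse j corresponds to paying the fixed production charge at facility j, so a PID heuristic applied to the transformed instance solves the original location problem.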


5.1 Summary

We have developed local search algorithms for the uncapacitated concave production-inventory-distribution problem. We first characterized the properties of an optimal solution to uncapacitated PID problems with concave costs, and then presented DSSP and GRASP algorithms which take advantage of these properties. DSSP and GRASP provide initial solutions for the local search procedure. Computational results for test problems of varying size, structure, and cost characteristics are provided. The first set of problems for which experimental results were presented consisted of uncapacitated PID problems with linear inventory costs and fixed charge production and distribution costs. The second set of problems included PID problems with fixed charge production costs and linear inventory and distribution costs; these problems had production capacities. Since our local search algorithms are designed for uncapacitated problems, only DSSP was used to solve these problems, because DSSP works for both capacitated and uncapacitated problems. The third set of computational results presented was for uncapacitated PID problems with piecewise-linear concave production costs, fixed charge distribution costs, and linear inventory costs.

A general-purpose solver, CPLEX, was used in an attempt to solve our problems optimally. We compared the solution values obtained from LS-DSSP, LS-GRASP, and LSM-GRASP to the solution values obtained from CPLEX. Due to the difficulty of the problems, CPLEX failed to find an optimal solution in most cases. In fact, for some problems CPLEX was not able to find even a feasible solution. We initially compared our solution values to lower bounds obtained from CPLEX.


The computational experiments indicated that LS-DSSP was more sensitive to problem structure and input data, whereas the errors obtained from LS-GRASP and LSM-GRASP were more robust. LS-DSSP gave smaller errors for problems with fixed charge costs, particularly for single-period problems. LSM-GRASP, in general, gave better results for problems with piecewise-linear concave costs. However, the CPU times required by LS-DSSP were much smaller than the CPU times of LS-GRASP and LSM-GRASP. LS-DSSP has a natural stopping condition, whereas LS-GRASP and LSM-GRASP can be run for many iterations. Thus, the GRASP approaches lead to better results if more iterations are performed, at the expense of extra computational time. The CPU times of the local search procedures with both DSSP and GRASP can be improved further, since they are multi-start approaches and are suitable for parallel implementation.

The main contributions of this work can be summarized as follows:


The proposed solution approaches are useful in practice, particularly for production, inventory, and distribution problems for which the supply chain network is large. For example, a network with 30 facilities, 70 retailers, 10 time periods, and fixed charge costs is quite big. Finding optimal solutions for networks of this size, and even for smaller networks, may be computationally expensive or impossible. The computational results indicated that the heuristic approaches performed better when the number of retailers is much larger than the number of facilities. Therefore, we recommend the use of the proposed algorithms for problems that can be modeled as PID problems with the characteristics mentioned above.

5.2 Proposed Research

5.2.1 Extension to Other Cost Structures

The solution approaches presented in Chapter 3, particularly GRASP, take advantage of the fact that the optimal solution is a tree in the supply chain network. This is due to the fact that the cost functions used are concave. Therefore, a natural extension of the work presented here is to develop solution approaches for problems with more general cost structures (e.g., Figure 5-1), which may arise in various applications.

Kim and Pardalos [49] extended the DSSP idea to handle nonconvex piecewise-linear network flow problems. They developed a contraction rule to reduce the feasible domain of the problem by introducing additional constraints.

5.2.2 Generating Problems with Known Optimal Solutions

As mentioned earlier in section 4.4, the selection and generation of test problems is an important part of computational experiments. A large proportion of experimental research on algorithms concerns heuristics for NP-hard problems, as is the case in


Figure 5-1: Piecewise-concave cost functions.

this dissertation. One question often asked about an approximate solution is "How close is the objective value to the optimal value?" The obvious difficulty here is that optimal values for NP-hard problems cannot be efficiently computed unless P = NP. Therefore, generating instances where the optimal solution is known is an attractive area of research, but not necessarily an easy one.

Arthur and Frendewey [3] presented a simple, effective algorithm for randomly generating travelling salesman problems for which the optimal tour is known. Pilcher and Rardin [63] gave a more complex generator utilizing a random cut approach. They first discussed the random cut concept in general for generating instances of discrete optimization problems and then provided details on its implementation for symmetric travelling salesman problems. The random cut concept they presented may be implemented to generate PID problems. This would provide a variety of test problems that can be used to evaluate heuristic approaches. It would also motivate research in the area of cutting plane algorithms.

Another practical application of generating problems with known optimal solutions is to help organizations set prices for their products and services. For example, given a supply chain network, let us assume that due to certain regulations some of the routes must be used in an optimal solution. Rather than modeling these


[1] E.H.L. Aarts and J.K. Lenstra, editors. Local search in combinatorial optimization. John Wiley & Sons, Chichester, England, 1997.
[2] A. Aggarwal and J.K. Park. Improved algorithms for economic lot size problems. Operations Research, 41(3):549-571, 1993.
[3] J.L. Arthur and J.O. Frendewey. Generating travelling-salesman problems with known optimal tours. Journal of the Operational Research Society, 39(2):153-159, 1988.
[4] M. Avriel. Nonlinear programming: Analysis and methods. Prentice-Hall, Englewood Cliffs, NJ, 1976.
[5] M.S. Bazaraa and H.D. Sherali. On the use of exact and heuristic cutting plane methods for the quadratic assignment problem. Journal of the Operational Research Society, 13:991-1003, 1982.
[6] B.M. Beamon. Supply chain design and analysis: Models and methods. International Journal of Production Economics, 55:281-294, 1998.
[7] G.J. Bell, B.W. Lamar, and C.A. Wallace. Capacity improvement, penalties, and the fixed charge transportation problem. Naval Research Logistics, 46:341-355, 1999.
[8] F. Bock. An algorithm for solving traveling-salesman and related network optimization problems. Page 897, 1958. Abstract in Bulletin, 14th National Meeting of the Operations Research Society of America.
[9] S.J. Chung and K.G. Murty. Polynomially bounded ellipsoid algorithms for convex quadratic programming. In O.L. Mangasarian, R.R. Meyer, and S.M. Robinson, editors, Nonlinear programming 4, pages 439-485. Academic Press, New York, 1981.
[10] M.A. Cohen and A. Huchzermeier. Global supply chain management: A survey of research and applications. In S. Tayur, R. Ganeshan, and M. Magazine, editors, Quantitative models for supply chain management. Kluwer Academic Publisher, Boston, 1999.
[11] G.A. Croes. A method for solving traveling-salesman problems. Operations Research, 6:791-812, 1958.


[12] M. Diaby. Successive linear approximation procedure for generalized fixed-charge transportation problems. Journal of the Operational Research Society, 42:991-1001, 1991.
[13] D.-Z. Du and P.M. Pardalos, editors. Network optimization problems. World Scientific, Singapore, 1993.
[14] B. Eksioglu. Global supply chain models. In C.A. Floudas and P.M. Pardalos, editors, Encyclopedia of optimization, volume 4, pages 350-353. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2001.
[15] B. Eksioglu, S.D. Eksioglu, and P.M. Pardalos. Solving large-scale fixed charge network flow problems. In P. Daniele, F. Giannessi, and A. Maugeri, editors, Variational inequalities and equilibrium models. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2001.
[16] S.D. Eksioglu, P.M. Pardalos, and H.E. Romeijn. A dynamic slope scaling procedure for the fixed-charge cost multi-commodity network flow problem. In P.M. Pardalos and V.K. Tsitsiringos, editors, Financial engineering, e-commerce and supply chain. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2002.
[17] S.S. Erenguc, N.C. Simpson, and A.J. Vakharia. Integrated production/distribution planning in supply chains: An invited review. European Journal of Operational Research, 115:219-236, 1999.
[18] J.E. Falk. A linear max-min problem. Mathematical Programming, 5:169-188, 1973.
[19] T.A. Feo and M.G.C. Resende. A probabilistic heuristic for a computationally difficult set covering problem. Operations Research Letters, 8:67-71, 1989.
[20] T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures. Journal of Global Optimization, 6(2):109-133, 1995.
[21] P. Festa and M.G.C. Resende. GRASP: An annotated bibliography. In P. Hansen and C.C. Ribeiro, editors, Essays and surveys on metaheuristics. Kluwer Academic Publisher, Norwell, MA, 2001.
[22] M. Florian, J.K. Lenstra, and A.H.G. Rinnooy Kan. Deterministic production planning: Algorithms and complexity. Management Science, 26:669-679, 1980.
[23] C.A. Floudas. Deterministic global optimization: Theory, algorithms and applications. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2000.
[24] R. Freling, H.E. Romeijn, D. Romero Morales, and A.P.M. Wagelmans. A branch-and-price algorithm for the multi-period single-sourcing problem. Technical Report 99-12, Department of Industrial and Systems Engineering, University of Florida, 1999.


[25] G. Gallo and C. Sodini. Adjacent extreme flows and application to min concave cost flow problems. Networks, 9:95-121, 1979.

[26] M.R. Garey, D.S. Johnson, and L. Stockmeyer. Some simplified NP-complete problems. Theoretical Computer Science, 1:237-268, 1976.

[27] J. Geunes, P.M. Pardalos, and H.E. Romeijn, editors. Supply chain management: Models, applications, and research directions. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2002.

[28] S. Ghannadan, A. Migdalas, H. Tuy, and P. Varbrand. Heuristics based on tabu search and Lagrangean relaxation for the concave production-transportation problem. Studies in Regional and Urban Planning, 3:127-140, 1994.

[29] F. Giannessi and F. Niccolucci. Connections between nonlinear and integer programming problems. Symposia Mathematica, XIX:161-176, 1976.

[30] G.M. Guisewite and P.M. Pardalos. Minimum concave-cost network flow problems: Applications, complexity, and algorithms. Annals of Operations Research, 25:75-100, 1990.

[31] G.M. Guisewite and P.M. Pardalos. Algorithms for the single-source uncapacitated minimum concave-cost network flow problem. Journal of Global Optimization, 1:245-265, 1991.

[32] G.M. Guisewite and P.M. Pardalos. A polynomial time solvable concave cost network flow problem. Networks, 23:143-147, 1993.

[33] N.G. Hall and M.E. Posner. Generating experimental data for computational testing with machine scheduling applications. Operations Research, 49(7):854-865, 2001.

[34] P.L. Hammer. Some network flow problems solved with pseudo-Boolean programming. Operations Research, 13:388-399, 1965.

[35] F.W. Harris. What quantity to make at once. In Operations and costs: Planning and filling orders, cost keeping methods, controlling your operations, standardizing material and labor costs, volume 5 of The Library of Factory Management, pages 47-52. A.W. Shaw Company, Chicago, 1915.

[36] J.P. Hart and A.W. Shogan. Semi-greedy heuristics: An empirical study. Operations Research Letters, 6:107-114, 1987.

[37] A.C. Hax and D. Candea. Production and inventory management. Prentice-Hall, Englewood Cliffs, NJ, 1984.

[38] D.S. Hochbaum and A. Segev. Analysis of a flow problem with fixed charges. Networks, 19:291-312, 1989.


[39] K. Holmqvist, A. Migdalas, and P.M. Pardalos. A GRASP algorithm for the single source uncapacitated minimum concave-cost network flow problem. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 40:131-142, 1998.

[40] J.N. Hooker. Testing heuristics: We have it all wrong. Journal of Heuristics, 1:33-42, 1995.

[41] R. Horst. A note on functions, whose local minimum are global. Journal of Optimization Theory and Applications, 36:457-463, 1982.

[42] R. Horst and P.M. Pardalos, editors. Handbook of global optimization. Kluwer Academic Publisher, Dordrecht, The Netherlands, 1995.

[43] R. Horst, P.M. Pardalos, and N.V. Thoai. Introduction to global optimization. Kluwer Academic Publisher, Dordrecht, The Netherlands, 1995.

[44] R. Horst and H. Tuy. The geometric complementarity problem and transcending stationarity in global optimization. DIMACS Series in Discrete Mathematics and Theoretical Computer Science, 4:341-354, 1991.

[45] D.S. Johnson, C.H. Papadimitriou, and M. Yannakakis. How easy is local search? Journal of Computer and System Sciences, 37:79-100, 1988.

[46] B.W. Kernighan and S. Lin. An efficient heuristic procedure for partitioning graphs. Bell System Technical Journal, 49:291-307, 1970.

[47] D.B. Khang and O. Fujiwara. Approximate solutions of capacitated fixed-charge minimum cost network flow problems. Networks, 21:689-704, 1991.

[48] D. Kim and P.M. Pardalos. A solution approach for the fixed charge network flow problem using a dynamic slope scaling procedure. Operations Research Letters, 24:195-203, 1999.

[49] D. Kim and P.M. Pardalos. A dynamic domain contraction algorithm for nonconvex piecewise linear network flow problems. Journal of Global Optimization, 17:225-234, 2000.

[50] D. Kim and P.M. Pardalos. Dynamic slope scaling and trust interval techniques for solving concave piecewise linear network flow problems. Networks, 35:216-222, 2000.

[51] B. Klinz and H. Tuy. Minimum concave-cost network flow problems with a single nonlinear arc cost. In D.-Z. Du and P.M. Pardalos, editors, Network optimization problems, pages 125-145. World Scientific, Singapore, 1993.

[52] H. Konno. A cutting plane algorithm for solving bilinear programs. Mathematical Programming, 11:14-27, 1976.


[53] B.W. Lamar and C.A. Wallace. Revised-modified penalties for fixed charge transportation problems. Management Science, 43:1431-1436, 1997.

[54] T. Larsson, A. Migdalas, and M. Ronnqvist. A Lagrangean heuristic for the capacitated concave minimum cost network flow problem. European Journal of Operational Research, 78:116-129, 1994.

[55] S. Lin. Computer solutions of the traveling salesman problem. Bell System Technical Journal, 44:2245-2269, 1965.

[56] S. Lin and B.W. Kernighan. An effective heuristic algorithm for the traveling salesman problem. Operations Research, 21:498-516, 1973.

[57] O.L. Mangasarian. Characterization of linear complementarity problems as linear programs. Mathematical Programming Study, 7:74-87, 1978.

[58] A.S. Manne. Programming of economic lot sizes. Management Science, 4:115-135, 1958.

[59] C.C. McGeoch. Toward an experimental method for algorithm simulation. INFORMS Journal on Computing, 8(1):1-15, 1996.

[60] P.G. McKeown and C.T. Ragsdale. A computational study of using preprocessing and stronger formulations to solve general fixed charge problems. Computers & Operations Research, 17:9-16, 1990.

[61] P.M. Pardalos and M.G.C. Resende, editors. Handbook of applied optimization. Oxford University Press, New York, 2002.

[62] P.M. Pardalos and S.A. Vavasis. Open questions in complexity theory for nonlinear optimization. Mathematical Programming, 57:337-339, 1992.

[63] M.G. Pilcher and R.L. Rardin. Partial polyhedral description and generation of discrete optimization problems with known optima. Naval Research Logistics, 39:839-858, 1992.

[64] M. Raghavachari. On connections between zero-one integer programming and concave programming under linear constraints. Operations Research, 17:680-684, 1969.

[65] S. Reiter and G. Sherman. Discrete optimizing. Journal of the Society of Industrial and Applied Mathematics, 13:864-889, 1965.

[66] H.E. Romeijn and D. Romero Morales. Asymptotic analysis of a greedy heuristic for the multi-period single-sourcing problem: The acyclic case. Technical Report 99-13, Department of Industrial and Systems Engineering, University of Florida, 1999.


[67] H.E. Romeijn and D. Romero Morales. An asymptotically optimal greedy heuristic for the multi-period single-sourcing problem: The cyclic case. Technical Report 99-11, Department of Industrial and Systems Engineering, University of Florida, 1999.

[68] H.E. Romeijn and D. Romero Morales. A greedy heuristic for a three-level multi-period single-sourcing problem. Technical Report 2000-3, Department of Industrial and Systems Engineering, University of Florida, 2000.

[69] H.E. Romeijn and D. Romero Morales. A probabilistic analysis of the multi-period single-sourcing problem. Discrete Applied Mathematics, 112:301-328, 2001.

[70] M. Sun, J.E. Aronson, P.G. McKeown, and D. Drinka. A tabu search heuristic for the fixed charge transportation problem. European Journal of Operational Research, 106:441-456, 1998.

[71] S. Tayur, R. Ganeshan, and M. Magazine, editors. Quantitative models for supply chain management. Kluwer Academic Publisher, Boston, 1999.

[72] D.J. Thomas and P.M. Griffin. Coordinated supply chain management. European Journal of Operational Research, 94(1):1-15, 1996.

[73] H. Tuy. The MCCNFP problem with a fixed number of nonlinear arc costs: Complexity and approximation. In P.M. Pardalos, editor, Approximation and complexity in numerical optimization, pages 525-544. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2000.

[74] H. Tuy. Strong polynomial solvability of a minimum concave cost network flow problem. ACTA Mathematica Vietnamica, 25(2):209-217, 2000.

[75] H. Tuy, N.D. Dan, and S. Ghannadan. Strongly polynomial time algorithms for certain concave minimization problems on networks. Operations Research Letters, 14:99-109, 1993.

[76] H. Tuy, S. Ghannadan, A. Migdalas, and P. Varbrand. The minimum concave cost flow problem with fixed number of nonlinear arc costs and sources. Journal of Global Optimization, 6:135-151, 1995.

[77] H. Tuy, S. Ghannadan, A. Migdalas, and P. Varbrand. A strongly polynomial time algorithm for a concave production-transportation problem with a fixed number of nonlinear variables. Mathematical Programming, 72:229-258, 1996.

[78] C.J. Vidal and M. Goetschalckx. Strategic production-distribution models: A critical review with emphasis on global supply chain models. European Journal of Operational Research, 98:1-18, 1997.

[79] H.M. Wagner. On a class of capacitated transportation problems. Management Science, 5:304-318, 1959.


[80] H.M. Wagner and T.M. Whitin. Dynamic version of the economic lot size model. Management Science, 5:89-96, 1958.

[81] S.D. Wu and H. Golbasi. Manufacturing planning over alternative facilities: Modeling, analysis, and algorithms. In J. Geunes, P.M. Pardalos, and H.E. Romeijn, editors, Supply chain management: Models, applications, and research directions, pages 279-316. Kluwer Academic Publisher, Dordrecht, The Netherlands, 2002.

[82] M. Yannakakis. Computational complexity. In E. Aarts and J.K. Lenstra, editors, Local search in combinatorial optimization, chapter 2, pages 19-55. John Wiley & Sons, Chichester, England, 1997.

[83] I. Zang and M. Avriel. A note on functions, whose local minimum are global. Journal of Optimization Theory and Applications, 16:556-559, 1976.

[84] W.I. Zangwill. Minimum concave cost flows in certain networks. Management Science, 14:429-450, 1968.


BIOGRAPHICAL SKETCH

Burak Eksioglu was born on February 14, 1972, in Adana, Turkiye. He received his high school education in Tarsus American College. In 1994, he was awarded a bachelor's degree in industrial engineering from Bogazici University in Istanbul, Turkiye. He had his master's degree from the University of Warwick in England in 1996. After his master's studies, he went back to Turkiye and worked for Marsa KJS as an Organization Development Specialist for about a year. He then began doctoral studies at the University of Florida, and received his Ph.D. in December 2002. Burak Eksioglu is a member of the Tau Beta Pi Engineering Honor Society.


Permanent Link: http://ufdc.ufl.edu/UFE0000528/00001

Material Information

Title: Network algorithms for supply chain optimization problems
Physical Description: Mixed Material
Creator: Eksioglu, Burak ( Author, Primary )
Publication Date: 2002
Copyright Date: 2002

Record Information

Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
System ID: UFE0000528:00001














NETWORK ALGORITHMS FOR SUPPLY CHAIN OPTIMIZATION
PROBLEMS















By

BURAK EKSIOGLU


A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL
OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT
OF THE REQUIREMENTS FOR THE DEGREE OF
DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA


2002

















To my family...















ACKNOWLEDGMENTS

I would like to thank my dissertation chair, Panos Pardalos, for his technical

advice, professionalism, encouragement and insightful comments throughout my

dissertation research.

I acknowledge my committee members, Selcuk Erenguc, Joseph Geunes, Edwin

Romeijn, and Max Shen, for their constructive criticism concerning the material of

this dissertation. I would like to thank Edwin Romeijn for all the effort he has devoted

to the supervision of this research and Joe Geunes for his time and suggestions. I

also would like to verbalize my appreciation to all of my sincere friends at the ISE

Department and in Gainesville.

No words can express all my thanks to my parents, Inceser and Galip Eksioglu,

my sister, Burcu, and brother, Oguz, for their love, encouragement, motivation, and

eternal support. Last but not least, I would like to thank my wife, Sandra, for

her patience, kindness, and continuous support throughout all my years here at the

University of Florida.















TABLE OF CONTENTS

                                                                  page

ACKNOWLEDGMENTS ............................................ iii

LIST OF TABLES ............................................. vi

LIST OF FIGURES ............................................ vii

ABSTRACT ................................................... ix

CHAPTERS

1 INTRODUCTION ............................................. 1

  1.1 Supply Chain Management ............................... 1
  1.2 Global Optimization ................................... 3
  1.3 Goal and Summary ...................................... 5

2 PRODUCTION PLANNING AND LOGISTICS PROBLEMS ............... 8

  2.1 Single-Item Economic Lot Sizing Problem ............... 8
  2.2 Production-Inventory-Distribution (PID) Problem ....... 10
  2.3 Complexity of the PID Problem ......................... 12
  2.4 Problem Formulation ................................... 13
      2.4.1 Fixed Charge Network Flow Problems .............. 15
      2.4.2 Piecewise-linear Concave Network Flow Problems .. 16

3 SOLUTION PROCEDURES ...................................... 25

  3.1 Existing Solution Approaches .......................... 25
  3.2 Local Search .......................................... 26
      3.2.1 History of Local Search ......................... 27
      3.2.2 Complexity of Local Search ...................... 28
      3.2.3 Local Search for the PID Problem ................ 29
  3.3 Dynamic Slope Scaling Procedure (DSSP) ................ 37
      3.3.1 Fixed Charge Case ............................... 39
      3.3.2 Piecewise-linear Concave Case ................... 40
      3.3.3 Performance of DSSP on Some Special Cases ....... 41
  3.4 Greedy Randomized Adaptive Search Procedure (GRASP) ... 50
      3.4.1 Construction Phase .............................. 50
      3.4.2 Modified Construction Phase ..................... 52
  3.5 Lower Bounds .......................................... 53

4 SUBROUTINES AND COMPUTATIONAL EXPERIMENTS ................ 59

  4.1 Design and Implementation of the Subroutines .......... 59
  4.2 Usage of the Subroutines .............................. 61
  4.3 Experimental Data ..................................... 63
  4.4 Randomly Generated Problems ........................... 65
      4.4.1 Problems with Fixed Charge Costs ................ 66
      4.4.2 Problems with Piecewise-Linear Concave Costs .... 76
  4.5 Library Test Problems ................................. 83

5 CONCLUDING REMARKS AND FURTHER RESEARCH .................. 86

  5.1 Summary ............................................... 86
  5.2 Proposed Research ..................................... 88
      5.2.1 Extension to Other Cost Structures .............. 88
      5.2.2 Generating Problems with Known Optimal Solutions 88

REFERENCES ................................................. 91

BIOGRAPHICAL SKETCH ........................................ 98















LIST OF TABLES

Table                                                             page

4-1  The distribution ............................................ 59
4-2  Problem sizes for fixed charge networks ..................... 67
4-3  Characteristics of test problems ............................ 68
4-4  Number of times the error was zero for data set A ........... 68
4-5  Average errors for problems with data set E ................. 72
4-6  Problem sizes for fixed charge networks with capacity constraints 73
4-7  Average errors for problems with production capacities ...... 74
4-8  CPU times of DSSP for problems with production capacities ... 74
4-9  CPU times of CPLEX for problems with production capacities .. 75
4-10 Percentage of production arcs used in the optimal solution .. 76
4-11 Problem sizes for piecewise-linear concave networks ......... 78
4-12 Summary of results for problems using data set A ............ 79
4-13 Summary of results for problems using data set B ............ 80
4-14 Summary of results for problems using data set C ............ 81
4-15 Summary of results for problems using data set D ............ 82
4-16 Summary of results for problems using data set E ............ 83
4-17 Problem sizes for the extended formulation after NSP ........ 84
4-18 Summary of results for problems 40 and 46 after NSP ......... 84
4-19 Uncapacitated warehouse location problems from the OR library 85
















LIST OF FIGURES

Figure                                                            page

2-1  The single-item ELS model ................................... 9
2-2  A supply chain network with 2 facilities, 3 retailers, and 2 periods 11
2-3  Piecewise-linear concave production costs ................... 17
2-4  Arc separation procedure .................................... 21
2-5  Node separation procedure ................................... 23
3-1  An example for E-neighborhood ............................... 31
3-2  An example of moving to a neighboring solution .............. 33
3-3  The local search procedure .................................. 35
3-4  Cycle detection and elimination ............................. 35
3-5  An example of a type I cycle ................................ 36
3-6  An example of a type II cycle ............................... 37
3-7  The DSSP algorithm .......................................... 38
3-8  Linearization of the fixed charge cost function ............. 40
3-9  Linearization of the piecewise-linear concave cost function . 41
3-10 A bipartite network with many facilities and one retailer ... 42
3-11 A bipartite network with two facilities and two retailers ... 44
3-12 A bipartite network with multiple facilities and multiple retailers 47
3-13 The construction procedure .................................. 52
3-14 Extended network representation ............................. 55
4-1  Input file (gr-param.dat) for gr-pid.c ...................... 62
4-2  Sample data file (sample.dat) for gr-pid.c and ds-pid.c ..... 63
4-3  Sample output (sample-gr.out) of gr-pid.c ................... 64
4-4  Average errors for problems using data set D ................ 70
4-5  Number of GRASP iterations vs. solution quality ............. 71
5-1  Piecewise-concave cost functions ............................ 89















Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy

NETWORK ALGORITHMS FOR SUPPLY CHAIN OPTIMIZATION
PROBLEMS

By

Burak Eksioglu

December 2002

Chair: Panagote M. Pardalos
Major Department: Industrial and Systems Engineering

The term supply chain management (SCM) has been around for more than

twenty years. The supply chains for suppliers, manufacturers, distributors, and

retailers look very different because of the different business functions that they

perform and the types of companies with which they deal. Thus, the definition

of supply chain varies from one enterprise to another. We define supply chain (SC) as

an integrated process where these business entities work together to plan, coordinate

and control materials, parts, and finished goods from suppliers to customers.

For many years, researchers and practitioners have concentrated on the individual

processes and entities within the SC. Recently, however, many companies have

realized that important cost savings can be achieved by integrating inventory control

and transportation policies throughout their SC. As companies began realizing the

benefits of optimizing their SC as a single entity, researchers began utilizing operations

research techniques to better model SCs.

Typical models for SC design/management problems assume that the involved

costs can be represented by somewhat restrictive cost functions such as linear and/or









convex functions. However, many of the applications encountered in practice involve

a fixed charge whenever the activity is performed, plus some variable unit cost which

makes the problem more complicated.

The objective of this research is to model and solve SC optimization problems

with fixed charge and piecewise-linear concave cost functions. In these problems a

single item is produced at a set of facilities and distributed to a set of retailers such

that the demand is met and the total production, transportation, and inventory costs

are minimized over a finite planning horizon.















CHAPTER 1
INTRODUCTION

1.1 Supply Chain Management

Supply chain management is a field of growing interest for both companies and

researchers. As nicely told in the recent book by Tayur, Ganeshan, and Magazine [71]

every field has a golden age: This is the time of supply chain management. The term

supply chain management (SCM) has been around for more than twenty years and its

definition varies from one enterprise to another. We define a supply chain (SC) as an

integrated process where different business entities such as suppliers, manufacturers,

distributors, and retailers work together to plan, coordinate, and control the flow

of materials, parts, and finished goods from suppliers to customers. This chain is

concerned with two distinct flows: a forward flow of materials and a backward flow

of information. Geunes, Pardalos, and Romeijn [27] have edited a book that provides

a recent review on SCM models and applications.

For many years, researchers and practitioners have concentrated on the individual

processes and entities within the SC. Recently, however, there has been an increasing

effort in the optimization of the entire SC. As companies began realizing the benefits

of optimizing the SC as a single entity, researchers began utilizing operations research

(OR) techniques to better model supply chains. Typically, a SC model tries to

determine

the transportation modes to be used,

the suppliers to be selected,

the amount of inventory to be held at various locations in the chain,

the number of warehouses to be used, and

the location and capacities of these warehouses.









Following Hax and Candea's [37] treatment of production and inventory systems, the

above SC decisions can be classified in the following way:

Strategic level. These are long-term decisions that have long-lasting effects
on the firm such as the number, location and capacities of warehouses and
manufacturing facilities, or the flow of material through the SC network. The
time horizon for these strategic decisions is often around three to five years.

Tactical level. These are decisions that are typically updated once every quarter
or once every year. Examples include purchasing and production decisions,
inventory policies and transportation strategies including the frequency with
which customers are visited.

Operational level. These are day-to-day decisions such as scheduling, routing
and loading trucks.

Beamon [6] gave a summary of models in the area of multi-stage supply

chain design and analysis. Erenguc, Simpson, and Vakharia [17] surveyed models

integrating production and distribution planning in SC. Thomas and Griffin [72]

surveyed coordination models on strategic and operational planning.

As a result of the globalization of the economy, however, the models have become

more complex. Global SC models now often try to include factors such as exchange

rates, international interest rates, trade barriers, taxes and duties, market prices, and

duty drawbacks. All of these factors are generally difficult to include in mathematical

models because of uncertainty and nonlinearity. Vidal and Goetschalckx [78] provided

a review of strategic production-distribution models with emphasis on global SC

models. Eksioglu [14] discussed some of the recent models that address the design

and management of global SC networks. Cohen and Huchzermeier [10] also gave an

extensive review on global SC models. They focus on the integration of SC network

optimization with real options pricing methods. Most of these SC problems can be

modeled as mathematical programs that are typically global optimization problems.

Therefore, we next give a brief overview of the area of global optimization.









1.2 Global Optimization

The field of global optimization was initiated during the mid 1960s mainly

through the primary works of Hoang Tuy. Since then, and in particular during the

last fifteen years, there has been a lot of interest in theoretical and computational

investigations of challenging global optimization problems. This has resulted in the

development and application of global optimization methods to important problems

in science, applied science, and engineering. Exciting and intriguing theoretical

findings and algorithmic developments have made global optimization one of the

most attractive areas of research.

Global optimization deals with the search for a global optimum in problems

where many local optima exist. The general global optimization problem is defined

by Horst, Pardalos, and Thoai [43] as

Definition 1.1 Given a nonempty closed set D ⊆ R^n and a continuous function

f : Ω → R, where Ω ⊆ R^n is a suitable set containing D, find at least one point

x* ∈ D satisfying f(x*) ≤ f(x) for all x ∈ D.

A major difficulty of global optimization problems is the existence of many

local optima. As Horst and Tuy [44] stated, standard local optimization methods

are trapped at a local optimum or more generally at a stationary point for

which there is not even any guarantee of local optimality. Thus, the use of

standard local optimization techniques is normally insufficient for solving global

optimization problems. Therefore, more sophisticated methods need to be designed

for global optimization problems, resulting in more complex and computationally

more expensive methods. Horst and Pardalos [42] gave a detailed and comprehensive

survey of global optimization methods. Floudas [23] presented a review of recent

theoretical and algorithmic advances in global optimization along with a variety of

applications.
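The trapping behavior described above is easy to see in a few lines of code: a gradient-descent run on a one-dimensional function with two minima converges to whichever local minimizer lies near its starting point. The function, step size, and starting points below are illustrative choices for this sketch, not taken from the dissertation.

```python
# Gradient descent on f(x) = x^4 - 3x^2 + x, which has two local minima:
# a global one near x = -1.30 and a strictly worse one near x = 1.13.
def f(x):
    return x**4 - 3*x**2 + x

def grad(x):
    return 4*x**3 - 6*x + 1

def gradient_descent(x, step=0.01, iters=5000):
    # A plain fixed-step descent: follows the negative gradient until it
    # settles at the nearest stationary point.
    for _ in range(iters):
        x -= step * grad(x)
    return x

left = gradient_descent(-2.0)    # converges to the global minimizer
right = gradient_descent(2.0)    # trapped at the inferior local minimizer
```

Both runs terminate at points where the gradient vanishes, yet only one of them is the global minimum; this is exactly why a purely local method offers no guarantee for problems of the form in Definition 1.1.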









In contrast to the objective of global optimization, the area of local

optimization aims at determining a feasible solution that is a local minimum of

the objective function f in D (i.e., it is a minimum in its neighborhood, but not

necessarily the lowest value of the objective function f). Therefore, in general, for

nonlinear optimization problems where multiple local minima exist, a local minimum

(as with any other feasible solution) represents only an upper bound on the global
minimum of the objective function f on D.

In certain classes of nonlinear problems, a local solution is always a global one.

For example, in a minimization problem with a convex (or quasi-convex) objective

function f and a convex feasible set D, a local minimizer is a global solution (see for

instance, Avriel [4], Horst [41], Zang and Avriel [83]).

It has been shown that several important optimization problems can be

formulated as concave minimization problems. A well-known result by Raghavachari

[64] states that the zero-one integer programming problem is equivalent to a concave

(quadratic) minimization problem over a linear set of constraints. Giannessi and

Niccolucci [29] have shown that a nonlinear, nonconvex integer program can be

equivalently reduced to a real concave program under the assumption that the

objective function is bounded and satisfies the Lipschitz condition. Similarly, the

quadratic assignment problem can be formulated as a global optimization problem

(see for instance, Bazara and Sherali [5]). In general, bilinear programming problems
are equivalent to a kind of concave minimization problem (Konno [52]). The linear

complementarity problem can be reduced to a concave problem (Mangasarian [57])

and linear min-max problems with connected variables and linear multi-step bimatrix

games are reducible to a global optimization problem (Falk [18]). The above examples

indicate once again the broad range of problems that can be formulated as global

optimization problems, and therefore explain the increasing interest in this area.
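Raghavachari's equivalence mentioned above can be checked numerically: adding a concave penalty M * sum_i x_i(1 - x_i) to the objective leaves binary points untouched while making every fractional point in the unit cube more expensive, once M is large enough. The instance data and penalty weight below are made up for illustration.

```python
import itertools
import random

def penalty(x):
    # Concave on [0,1]^n and zero exactly at binary points.
    return sum(xi * (1 - xi) for xi in x)

c = [3, -5, 2]   # made-up linear objective coefficients
M = 100.0        # penalty weight, large relative to |c|

def g(x):
    return sum(ci * xi for ci, xi in zip(c, x)) + M * penalty(x)

# At binary points g agrees with the 0-1 objective, so enumerating the
# vertices of the unit cube recovers the integer optimum.
best = min(itertools.product([0, 1], repeat=3), key=g)

# Fractional points pay the concave penalty and are never optimal.
random.seed(0)
for _ in range(100):
    x = [random.uniform(0.1, 0.9) for _ in range(3)]
    assert g(x) > g(best)
```

For this instance the concave program and the 0-1 program share the optimum (0, 1, 0) with value -5, illustrating why minimizing a concave function over a polytope inherits the combinatorial difficulty of integer programming.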









Global optimization problems remain NP-hard even for very special cases such

as the minimization of a quadratic concave function over the unit hypercube (see for

example Garey et al. [26], Hammer [34]), in contrast to the corresponding convex

quadratic problem that can be solved in polynomial time (Chung and Murty [9]).

Most of the optimization problems that arise in SC are global optimization

problems. These problems are of great practical interest, but they are also inherently

difficult and cannot be solved by conventional nonlinear optimization methods.

Despite the fact that the majority of the challenging and important problems

that arise in science, applied science and engineering exhibit nonconvexities and hence

multiple minima, there has been relatively little effort devoted to the area of global

optimization as compared to the developments in the area of local optimization. This

is partly attributed to the use of local optimization techniques as components of global

optimization approaches, and also due to the difficulties that arise in the development

of global optimization methods. However, the recent advances in this area and the

explosive growth of computing capabilities show great promise towards addressing

these issues.

Global optimization methods are divided into two classes, deterministic and

stochastic methods. The most important deterministic approaches to nonconvex

global optimization are: enumerative techniques, cutting plane methods, branch

and bound, solution of approximate subproblems, bilinear programming methods

or different combinations of these techniques. Specific solution approaches have been

proposed for problems where the objective function has a special structure (e.g.,

quadratic, separable, factorable, etc.) or the feasible region has a simplified geometry

(e.g., unit hypercube, network constraints, etc.).
1.3 Goal and Summary

The focus of this dissertation is to study optimization problems in supply

chain operations with cost structures that arise in several real-life applications. A









typical feature of the logistics problems encountered in practice is that their cost

structure is not linear, due to the presence of fixed charges, discount structures,

and different modes of transportation. These cost structures have not been given

sufficient attention in the literature, perhaps due to the difficulty of the underlying

mathematical optimization problems. The goal of this dissertation is to develop new

optimization models and algorithms for solving large-scale logistics problems with

nonlinear cost structure.

The supply chain optimization problems we consider are formulated as large-

scale mixed integer programming problems. The network structure inherent in such

problems is utilized to develop efficient algorithms to solve these problems. Due to

the scale and difficulty of these problems, the focus is to develop efficient heuristic

methods. The objective is to develop approaches that produce optimal or near-

optimal solutions to logistics problems with fixed charge and piecewise-linear concave

cost structures. In this dissertation we also address the generation of experimental

data for these optimization models since the performance of heuristic procedures

are typically measured by the computation time required and the quality of the

solution obtained. Conclusions about these two performance measures are drawn

by testing the heuristic approaches on a collection of problems. The validity of the

derived conclusions strongly depends on the characteristics of the problems chosen.

Therefore, we generated several sets of problems with different characteristics.

The outline of the dissertation is as follows. In Chapter 2 we first introduce

the single-item economic lot sizing problem and then discuss extensions to the basic

problem to arrive at our problem, which we call the production-inventory-distribution

(PID) problem. In Chapter 3 we present local search based heuristic approaches.

We give a brief history of local search and present two approaches for constructing

solutions to initiate our local search procedure. We discuss the complexity of the









problem and give different formulations for problems with fixed charge and piecewise-

linear concave cost structures. A dynamic slope scaling procedure (DSSP) is presented

in section 3.3 and a greedy randomized adaptive search procedure (GRASP) is

developed in section 3.4. DSSP was first introduced by Kim and Pardalos [48]. We

refined the heuristic to improve the quality of the solutions obtained. The final section

in Chapter 3 discusses lower bound procedures that are used to test the quality of the

solutions obtained from local search. The results of extensive computational experiments

are presented in Chapter 4. Details on the design, implementation, and usage of the

subroutines developed are also included in Chapter 4. Finally, in Chapter 5 we end

the dissertation with a summary of the findings and future research directions.















CHAPTER 2
PRODUCTION PLANNING AND LOGISTICS PROBLEMS

2.1 Single-Item Economic Lot Sizing Problem

Many problems in SC optimization such as inventory control, production

planning, capacity planning, etc. are related to the simple economic lot sizing (ELS)

problem. Harris [35] is usually cited as the first to study ELS models. He considered

a model that assumes deterministic demands which occur continuously over time.

In 1958, a different approach was proposed independently by Manne [58] and by

Wagner and Whitin [80]. They divided time into discrete periods and assumed that

the demand over a finite horizon is known in advance. In the past four decades

ELS has received considerable attention and many papers have directly or indirectly

discussed this model. Aggarwal and Park [2] gave a brief review of the ELS model

and its extensions.

To describe the basic single-item ELS model we will use the following notation.

Demand (dt) for the product occurs during each of T consecutive time periods. The

demand during period t can be satisfied either through production in that period

or from inventory that is carried forward in time. The model includes production

and inventory costs, and the objective is to schedule production to satisfy demand at

minimum cost. The cost of producing pt units during period t is given by rt(pt) and

the cost of storing It units of inventory from period t to period t+ 1 is ht(It). Without

loss of generality, we assume both the initial inventory and the final inventory are

zero. The mathematical representation of the ELS model can now be given as

minimize $\sum_{t=1}^{T} \big( r_t(p_t) + h_t(I_t) \big)$









subject to (P0)

$p_t + I_{t-1} = I_t + d_t$, $\quad t = 1,\ldots,T$, (2.1)

$I_0 = I_T = 0$, (2.2)

$p_t, I_t \ge 0$, $\quad t = 1,\ldots,T$. (2.3)

In the above formulation, the first set of constraints (2.1) requires that the sum

of the inventory at the start of a period and the production during that period equals

the sum of the demand during that period and the inventory at the end of the period.

Constraint (2.2) simply assures that the initial and final inventories are zero, while

the last set of constraints (2.3) limits production and inventory to nonnegative values.
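As a concrete illustration of the discrete-period model, the recursion behind the Wagner-Whitin approach [80] can be sketched for the special case of fixed-charge production costs $r_t(p) = s_t + c_t p$ (for $p > 0$) and linear holding costs. The function name and data layout are illustrative, not from the dissertation:

```python
def els_wagner_whitin(d, s, c, h):
    """Solve single-item ELS with fixed-charge production costs
    r_t(p) = s[t] + c[t]*p for p > 0 and linear holding costs h[t],
    via the classic O(T^2) dynamic program: in some optimal plan,
    each production run covers a consecutive block of demands."""
    T = len(d)
    INF = float("inf")
    # F[t] = minimum cost of satisfying the demands of periods 1..t.
    F = [0.0] + [INF] * T
    for t in range(1, T + 1):
        for j in range(1, t + 1):
            # Produce in period j (1-based) to cover demands j..t.
            qty = sum(d[j - 1:t])
            prod = 0.0 if qty == 0 else s[j - 1] + c[j - 1] * qty
            # Demand of period m (0-based index k = m-1) is held at
            # the end of periods j..m-1, costing sum(h[j-1:k]).
            hold = sum(d[k] * sum(h[j - 1:k]) for k in range(j, t))
            F[t] = min(F[t], F[j - 1] + prod + hold)
    return F[T]
```

For demands [2, 3, 4] with setup cost 10, unit cost 1, and unit holding cost 1 per period, producing everything in period 1 is optimal (10 + 9 + 3 + 8 = 30).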

The basic ELS problem can also be formulated as a network flow problem (NFP).

This formulation was first introduced by Zangwill [84]. The network in Figure 2-1

consists of a single source node and T sink nodes. Each sink node requires an inflow of

$d_t$ ($t = 1, 2, \ldots, T$) and node D is capable of generating an outflow of $\sum_{t=1}^{T} d_t$. For each

arc from node D to node t there is an associated cost function rt(.) for t = 1, 2,..., T,

and for each arc from node t to node t + 1 there is an associated cost function ht(.)

for $t = 1, 2, \ldots, T-1$.





[Figure 2-1: The single-item ELS model. A single source node D supplies production arcs with cost functions $r_1(\cdot), r_2(\cdot), \ldots, r_T(\cdot)$ to period nodes $1, 2, \ldots, T$, which are linked in sequence by inventory arcs with cost functions $h_1(\cdot), h_2(\cdot), \ldots, h_{T-1}(\cdot)$.]


If the cost functions $r_t(\cdot)$ and $h_t(\cdot)$ are allowed to be arbitrary functions, then the

basic ELS problem is quite difficult to solve; as Florian, Lenstra and Rinnooy Kan [22]









have shown it is NP-hard. Due to this difficulty and to represent cost functions found

in practice, certain assumptions are often made about the cost functions. Aggarwal

and Park [2] gave a review of some of these assumptions and provided improved

algorithms for the ELS problem.

We consider extensions to the basic ELS problem and include distribution

decisions in the model. We also consider multiple production plants (facilities)

and multiple demand points (retailers). The goal is to meet the known demand

at the retailers through production at the facilities, such that the system wide total

production, inventory, and distribution cost is minimized. As mentioned earlier in

Chapter 1, we refer to this problem as the PID problem.

2.2 Production-Inventory-Distribution (PID) Problem

The PID problem can be formulated as a network flow problem on a directed,

single source graph consisting of several layers. Figure 2-2 gives an example with two

facilities, three retailers and two time periods. Each layer of the graph represents a

time period. In each layer, a bipartite graph represents the transportation network

between the facilities and the retailers. Facilities in successive time periods are

connected through inventory arcs. There is a dummy source node with supply equal

to the total demand. Production arcs connect the dummy source to each facility in

every time period.

This is an easy problem if all costs are linear. However, many production and

distribution activities exhibit economies of scale in which the unit cost of the activity

decreases as the volume of the activity increases. For example, production costs often

exhibit economies of scale due to fixed production setup costs and learning effects that

enable more efficient production as the volume increases. Transportation costs exhibit

economies of scale due to the fixed cost of initiating a shipment and the lower per unit

shipping cost as the volume delivered per shipment increases. Therefore, we assume












[Network diagram: dashed production arcs connect the dummy source to facility nodes $F_{j,t}$, inventory arcs connect each $F_{j,t}$ to $F_{j,t+1}$, and transportation arcs connect the facilities to retailer nodes $R_{k,t}$.]

Figure 2-2: A supply chain network with 2 facilities, 3 retailers, and 2 periods.


the production costs at the facilities are either of the fixed charge type or piecewise-

linear concave type, the cost of transporting goods from facilities to retailers are of

the fixed charge type, and the inventory costs are linear. We also make the following

simplifying assumptions to the model:

* Backorders are not allowed.

* Transportation is not allowed between facilities.

* Products are stored at their production location until being transported to a retailer.

* There are no capacity constraints on the production, inventory, or distribution arcs.

The first three assumptions can easily be relaxed by adding extra arcs in the

network and the last assumption is justified since the following result by Wagner [79]

shows that a network flow problem with capacity constraints can be transformed into

a network flow problem without capacity constraints.









Proposition 1 Every capacitated minimum cost network flow problem (MCCNFP)

on a network with m nodes and n arcs can be transformed into an equivalent

uncapacitated MCCNFP on an expanded network with $(n + m)$ nodes and $2n$

arcs.
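One standard construction behind this result can be sketched as follows (a sketch under the assumption that this is the arc-splitting variant intended by Wagner [79]; the function name and data layout are illustrative). Each capacitated arc is replaced by a fresh node and two uncapacitated arcs, which is where the $(n + m)$ node and $2n$ arc counts come from:

```python
def uncapacitate(nodes, arcs, supply):
    """Transform a capacitated min-cost flow instance into an equivalent
    uncapacitated one: each arc (i, j) with capacity u and cost c is
    replaced by a new node k with demand u and two uncapacitated arcs,
    (i, k) with cost c and (j, k) with cost 0, while node j's supply is
    increased by u.  Flow on (i, k) equals the original flow x_ij, and
    flow on (j, k) is the slack u - x_ij >= 0, enforcing x_ij <= u."""
    new_supply = dict(supply)          # node -> supply (negative = demand)
    new_arcs = []                      # (tail, head, cost), uncapacitated
    for idx, (i, j, cap, cost) in enumerate(arcs):
        k = ("split", idx)             # fresh node for this arc
        new_supply[k] = -cap           # node k demands u units
        new_supply[j] = new_supply.get(j, 0) + cap
        new_arcs.append((i, k, cost))  # carries the original flow
        new_arcs.append((j, k, 0))     # carries the slack
    new_nodes = list(nodes) + [("split", i) for i in range(len(arcs))]
    return new_nodes, new_arcs, new_supply
```

Starting from a network with m nodes and n arcs, the result has m + n nodes and 2n arcs, and total supply still balances total demand.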

Freling et al. [24] and Romeijn and Romero Morales [66-69] considered similar

problems. They assumed that production and inventory costs are linear and that

there is a fixed cost of assigning a facility to a retailer. In other words, they accounted

for the presence of so-called single-sourcing constraints where each retailer should be

supplied from a single facility only. Eksioglu, Pardalos and Romeijn [16] and Wu and

Golbasi [81] considered the multicommodity case where there are multiple products

flowing on the network. Eksioglu et al. [16] assumed production and inventory costs

are linear and transportation costs are of the fixed charge type, whereas Wu and Golbasi

[81] assumed production costs are fixed charge and inventory and transportation costs

are linear.

2.3 Complexity of the PID Problem

The PID problem with concave costs falls under the category of minimum concave

cost network flow problems (MCCNFP). Guisewite and Pardalos [30] gave a detailed

survey on MCCNFP throughout the early 1990s. It is well-known that even certain

special cases of MCCNFP, such as the fixed charge network flow problem (FCNFP)

or the single source uncapacitated minimum concave cost network flow problem (SSU

MCCNFP), are NP-hard (Guisewite and Pardalos [31]). MCCNFP is NP-hard even

when the arc costs are constant, the underlying network is bipartite, or the ratio of

the fixed charge to the linear charge for all arcs is constant. This has motivated the

consideration of additional structures which might make the problem more tractable.

In fact, polynomial time algorithms have been developed for a number of specially

structured variants of MCCNFP (Du and Pardalos [13] and Pardalos and Vavasis

[62]).









The number of source nodes and the number of arcs with nonlinear costs affect

the difficulty of MCCNFP. It is therefore convenient to refer to MCCNFP with a

fixed number, h, of sources and fixed number, k, of arcs with nonlinear costs as

MCCNFP(h,k). Guisewite and Pardalos [32] were the first to prove the polynomial

solvability of MCCNFP(1,1). Later, strongly polynomial time algorithms were

developed for MCCNFP(1,1) by Klinz and Tuy [51] and Tuy, Dan and Ghannadan

[75]. In a series of papers by Tuy [73, 74] and Tuy et al. [76, 77] polynomial time

algorithms were presented for MCCNFP(h,k) where h and k are constants. It was also

shown that MCCNFP(h,k) can be solved in strongly polynomial time if min{h,k}=1.

2.4 Problem Formulation

The problem that we consider is a multi-facility production, inventory, and

distribution problem. A single item is produced in multiple facilities over multiple

periods to satisfy the demand at the retailers. The objective is to minimize the system

wide total production, inventory, and transportation cost.

Let J, K, and T denote the number of facilities, the number of retailers, and the

planning horizon, respectively. Then the resulting model is

minimize $\sum_{j=1}^{J}\sum_{t=1}^{T} r_{jt}(p_{jt}) + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T} f_{jkt}(x_{jkt}) + \sum_{j=1}^{J}\sum_{t=1}^{T} h_{jt} I_{jt}$

subject to (P1)

$p_{jt} + I_{j,t-1} = I_{jt} + \sum_{k=1}^{K} x_{jkt}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.4)

$\sum_{j=1}^{J} x_{jkt} = d_{kt}$, $\quad k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.5)

$p_{jt} \le W_{jt}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.6)

$I_{jt} \le V_{jt}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.7)

$x_{jkt} \le U_{jkt}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.8)

$p_{jt}, I_{jt}, x_{jkt} \ge 0$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.9)









where

$p_{jt}$ = amount produced at facility $j$ in period $t$,

$x_{jkt}$ = amount shipped from facility $j$ to retailer $k$ in period $t$,

$I_{jt}$ = inventory carried at facility $j$ during period $t$,

$W_{jt}$ = production capacity at facility $j$ in period $t$,

$U_{jkt}$ = capacity on the shipment from facility $j$ to retailer $k$ in period $t$,

$V_{jt}$ = inventory capacity at facility $j$ in period $t$,

$r_{jt}(p_{jt})$ = cost of producing $p_{jt}$ units at facility $j$ in period $t$,

$f_{jkt}(x_{jkt})$ = cost of shipping $x_{jkt}$ units from facility $j$ to retailer $k$ in period $t$,

$h_{jt}$ = unit holding cost per period at facility $j$ for period $t$, and

$d_{kt}$ = demand at retailer $k$ in period $t$.

Constraints (2.4) and (2.5) are the flow conservation constraints and (2.6), (2.7),

and (2.8) are the capacity constraints. If Ujkt, Vjt, and Wjt are large enough then

the problem is effectively uncapacitated. As we pointed out earlier MCCNFP has

the combinatorial property that if it has an optimal solution, then there exists an

optimal solution that is a vertex of the corresponding feasible domain. A feasible flow

is an extreme flow (vertex) if it is not the convex combination of any other feasible

flows. Extreme flows have been characterized for possible exploitation in solving the

MCCNFP. A flow is extremal for a network flow problem if it contains no positive

cycles. A positive cycle in an uncapacitated network is a cycle that has positive flow

on all of its arcs. On the other hand, for a problem with capacity constraints on arc

flows, a positive cycle is a cycle where all of the arcs in the cycle have positive flows

which are strictly less than the capacity. This implies that for the PID problem with

unlimited capacity an extreme flow is a tree. In other words, an optimal solution

exists in which the demand of each retailer will be satisfied through only one of the

facilities.
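The single-sourcing property suggests a brute-force check for toy instances: enumerate, for every retailer-period demand, the facility and production period that serve it, and keep the cheapest assignment. All names are hypothetical and the enumeration is exponential in the number of demands, so this is only a sketch for validating heuristics on tiny instances:

```python
from itertools import product

def pid_brute_force(d, prod_s, prod_c, trans_s, trans_c, h):
    """Exact solution of a tiny uncapacitated PID instance by
    enumerating extreme flows: each demand d[k][t] is served entirely
    by one facility j producing in some period tau <= t.
    prod_s[j][t], prod_c[j][t]: fixed / variable production costs;
    trans_s[j][k][t], trans_c[j][k][t]: fixed / variable transport
    costs; h[j][t]: unit holding cost at facility j in period t."""
    K, T, J = len(d), len(d[0]), len(prod_s)
    demands = [(k, t) for k in range(K) for t in range(T)]
    choices = [[(j, tau) for j in range(J) for tau in range(t + 1)]
               for (k, t) in demands]
    best = float("inf")
    for assign in product(*choices):
        prod, ship, hold = {}, {}, 0.0
        for (k, t), (j, tau) in zip(demands, assign):
            q = d[k][t]
            prod[j, tau] = prod.get((j, tau), 0) + q
            ship[j, k, t] = ship.get((j, k, t), 0) + q
            hold += q * sum(h[j][tau:t])   # held at j from tau to t-1
        cost = hold
        cost += sum(prod_s[j][tau] + prod_c[j][tau] * q
                    for (j, tau), q in prod.items() if q > 0)
        cost += sum(trans_s[j][k][t] + trans_c[j][k][t] * q
                    for (j, k, t), q in ship.items() if q > 0)
        best = min(best, cost)
    return best
```

With one facility, one retailer, and one period (demand 5, production setup 10 and unit cost 1, transport setup 2 and unit cost 1), the only extreme flow costs 10 + 5 + 2 + 5 = 22.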









2.4.1 Fixed Charge Network Flow Problems

In the fixed charge case the production cost function $r_{jt}(p_{jt})$ is of the following form:

$r_{jt}(p_{jt}) = \begin{cases} 0 & \text{if } p_{jt} = 0, \\ s_{jt} + c_{jt} p_{jt} & \text{if } 0 < p_{jt} \le W_{jt}, \end{cases}$

where $s_{jt}$ and $c_{jt}$ are, respectively, the setup and the variable costs of production.

If the distribution cost function, $f_{jkt}(x_{jkt})$, is also of a fixed charge form then it is given by

$f_{jkt}(x_{jkt}) = \begin{cases} 0 & \text{if } x_{jkt} = 0, \\ S_{jkt} + C_{jkt} x_{jkt} & \text{if } 0 < x_{jkt} \le U_{jkt}, \end{cases}$

where $S_{jkt}$ and $C_{jkt}$ represent, respectively, the setup and the variable costs of distribution.

Due to the discontinuity of the cost functions at the origin, the problem can

be transformed into a 0-1 mixed integer linear programming (MILP) problem by

introducing a binary variable for each arc with a fixed charge. Assuming $s_{jt} > 0$, the

cost function $r_{jt}(p_{jt})$ can be replaced with

$r_{jt}(p_{jt}) = c_{jt} p_{jt} + s_{jt} y_{jt}$.

Similarly, the distribution cost functions can be replaced with

$f_{jkt}(x_{jkt}) = C_{jkt} x_{jkt} + S_{jkt} y_{jkt}$,

where

$y_{jt} = \begin{cases} 0 & \text{if } p_{jt} = 0, \\ 1 & \text{if } p_{jt} > 0, \end{cases}$ $\quad$ and $\quad$ $y_{jkt} = \begin{cases} 0 & \text{if } x_{jkt} = 0, \\ 1 & \text{if } x_{jkt} > 0. \end{cases}$

The MILP formulation of the problem is as follows:

minimize $\sum_{j=1}^{J}\sum_{t=1}^{T} \big( c_{jt} p_{jt} + s_{jt} y_{jt} + h_{jt} I_{jt} \big) + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T} \big( C_{jkt} x_{jkt} + S_{jkt} y_{jkt} \big)$

subject to (P2)

(2.4), (2.5), (2.7), (2.9), and

$p_{jt} \le W_{jt} y_{jt}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.10)

$x_{jkt} \le U_{jkt} y_{jkt}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.11)

$y_{jt}, y_{jkt} \in \{0,1\}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$. (2.12)


The MILP formulation is used to find an optimal solution. This formulation is

useful in practice since the relaxed problems are linear cost network flow problems. We

use the general purpose solver CPLEX, which employs branch and bound algorithms,

to solve the MILP formulation. This is done to gauge the difficulty of finding an

optimal solution.
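The effect of the binary variable can be checked numerically: minimizing $c p + s y$ over $y \in \{0,1\}$ subject to $p \le W y$ reproduces the discontinuous fixed-charge cost whenever $s > 0$ and $0 \le p \le W$. A minimal sketch with illustrative names:

```python
def fixed_charge_cost(p, s, c):
    """The original discontinuous production cost r(p)."""
    return 0.0 if p == 0 else s + c * p

def linearized_cost(p, s, c, W):
    """Cost in the 0-1 MILP reformulation: minimize c*p + s*y over
    y in {0,1} subject to p <= W*y.  When p > 0 the capacity link
    forces y = 1 (incurring the setup); when p = 0, y = 0 is
    feasible and, since s > 0, optimal."""
    feasible = [c * p + s * y for y in (0, 1) if p <= W * y]
    return min(feasible)
```

Evaluating both functions over a range of production levels confirms that the linearization is exact, which is why the relaxed subproblems remain linear cost network flow problems.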

2.4.2 Piecewise-linear Concave Network Flow Problems

If the cost functions in problem (P1) are piecewise-linear concave functions

(Figure 2-3), then they have the following form:

$r_{jt}(p_{jt}) = \begin{cases} 0 & \text{if } p_{jt} = 0, \\ s_{jt,1} + c_{jt,1} p_{jt} & \text{if } 0 < p_{jt} \le \beta_{jt,1}, \\ s_{jt,2} + c_{jt,2} p_{jt} & \text{if } \beta_{jt,1} < p_{jt} \le \beta_{jt,2}, \\ \quad\vdots & \\ s_{jt,l_{jt}} + c_{jt,l_{jt}} p_{jt} & \text{if } \beta_{jt,l_{jt}-1} < p_{jt} \le \beta_{jt,l_{jt}}, \end{cases}$

where $\beta_{jt,i}$ for $i = 1, 2, \ldots, l_{jt}-1$ are the break points in the interval $(0, W_{jt})$, $\beta_{jt,l_{jt}} = W_{jt}$, and $l_{jt}$ is the number of linear segments of the production cost function $r_{jt}(\cdot)$. Due to the concavity of the cost functions the following properties hold:


* $c_{jt,1} > c_{jt,2} > \cdots > c_{jt,l_{jt}}$, (2.13)

* $s_{jt,1} < s_{jt,2} < \cdots < s_{jt,l_{jt}}$. (2.14)

We also assume $c_{jt,l_{jt}} > 0$ and $s_{jt,1} > 0$, since these are production costs.















[Figure 2-3: Piecewise-linear concave production costs. The function starts at intercept $s_{jt,1}$ and rises with successively smaller slopes $c_{jt,1}, c_{jt,2}, c_{jt,3}$ between the break points $\beta_{jt,1}, \beta_{jt,2}, \beta_{jt,3}$ on the $p_{jt}$ axis, with extended intercepts $s_{jt,2}, s_{jt,3}$.]
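The cost function of Figure 2-3 can be evaluated directly. The sketch below uses 1-indexed lists and sample data chosen so that adjacent segments join continuously, consistent with properties (2.13) and (2.14); the function name and data are illustrative, not from the dissertation:

```python
def piecewise_cost(p, s, c, beta):
    """Evaluate a piecewise-linear concave production cost:
    segment i applies when beta[i-1] < p <= beta[i] and costs
    s[i] + c[i] * p; the cost is 0 at p = 0 (discontinuity at the
    origin).  Lists s and c are 1-indexed (index 0 unused) and
    beta[0] = 0.  Concavity requires decreasing slopes c and
    increasing intercepts s."""
    if p == 0:
        return 0.0
    for i in range(1, len(beta)):
        if p <= beta[i]:
            return s[i] + c[i] * p
    raise ValueError("p exceeds the capacity beta[-1]")

# Sample concave data: slopes 4 > 3 > 2, intercepts 2 < 12 < 32,
# chosen so segments agree at the break points 10 and 20.
s = [None, 2, 12, 32]
c = [None, 4, 3, 2]
beta = [0, 10, 20, 30]
```

For example, a quantity of 15 falls on the second segment and costs 12 + 3 * 15 = 57.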


Similarly, if the transportation cost functions are piecewise-linear and concave, they have the following form:

$f_{jkt}(x_{jkt}) = \begin{cases} 0 & \text{if } x_{jkt} = 0, \\ s_{jkt,1} + c_{jkt,1} x_{jkt} & \text{if } 0 < x_{jkt} \le \beta_{jkt,1}, \\ s_{jkt,2} + c_{jkt,2} x_{jkt} & \text{if } \beta_{jkt,1} < x_{jkt} \le \beta_{jkt,2}, \\ \quad\vdots & \\ s_{jkt,l_{jkt}} + c_{jkt,l_{jkt}} x_{jkt} & \text{if } \beta_{jkt,l_{jkt}-1} < x_{jkt} \le \beta_{jkt,l_{jkt}}. \end{cases}$

The PID problem with piecewise-linear concave costs can be formulated as

a MILP problem in several different ways. Below we give four different MILP

formulations for the problem: the λ-formulation, the slope-formulation, the ASP formulation,

and the NSP formulation.

λ-formulation. Any $p_{jt}$ value on the x-axis of Figure 2-3 can be written as a

convex combination of two adjacent break points, i.e., $p_{jt} = \sum_{i=0}^{l_{jt}} \lambda_{jt,i} \beta_{jt,i}$,

where $\beta_{jt,0} = 0$, $\sum_{i=0}^{l_{jt}} \lambda_{jt,i} = 1$, and $0 \le \lambda_{jt,i} \le 1$ for $i = 0, \ldots, l_{jt}$. Note that at most

two of the $\lambda_{jt,i}$ values can be positive, and only if they are adjacent. This formulation

can be used to model any piecewise-linear function (not necessarily concave) by

introducing the following constraints:









$\lambda_{jt,0} \le z_{jt,1}$,

$\lambda_{jt,1} \le z_{jt,1} + z_{jt,2}$,

$\lambda_{jt,2} \le z_{jt,2} + z_{jt,3}$,

$\quad\vdots$

$\lambda_{jt,l_{jt}-1} \le z_{jt,l_{jt}-1} + z_{jt,l_{jt}}$,

$\lambda_{jt,l_{jt}} \le z_{jt,l_{jt}}$,

$\sum_{i=1}^{l_{jt}} z_{jt,i} = 1$, and

$z_{jt,i} \in \{0,1\}$.

Note also that we have $O(l_{jt})$ (not $O(2^{l_{jt}})$) possibilities for each $z_{jt}$ vector. In

other words, only one of the components of the vector $z_{jt} = (z_{jt,1}, z_{jt,2}, \ldots, z_{jt,l_{jt}})$ will

be 1 and the rest will be zero. The production cost functions can now be written in

terms of the new variables. An additional binary variable, $y_{jt}$, is added to take care

of the discontinuity at the origin:

$r_{jt}(p_{jt}) = s_{jt,1} y_{jt} + \sum_{i=1}^{l_{jt}} \big( r_{jt}(\beta_{jt,i}) - s_{jt,1} \big) \lambda_{jt,i}$.

The new variable, $y_{jt}$, must equal 1 if there is a positive amount of production at

plant $j$ during period $t$ and it must be 0 if there is no production. This is handled

by the following constraints: $y_{jt} \ge 1 - \lambda_{jt,0}$ and $y_{jt} \in \{0,1\}$. The distribution cost

functions can similarly be modelled by introducing new y, z, and A variables. The

PID problem with piecewise-linear concave production and distribution costs can now

be written as

minimize $\sum_{j=1}^{J}\sum_{t=1}^{T} \Big( s_{jt,1} y_{jt} + h_{jt} I_{jt} + \sum_{i=1}^{l_{jt}} \big( r_{jt}(\beta_{jt,i}) - s_{jt,1} \big) \lambda_{jt,i} \Big) + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T} \Big( s_{jkt,1} y_{jkt} + \sum_{i=1}^{l_{jkt}} \big( f_{jkt}(\beta_{jkt,i}) - s_{jkt,1} \big) \lambda_{jkt,i} \Big)$

subject to (P3)

(2.4), (2.5), and

$p_{jt} = \sum_{i=0}^{l_{jt}} \lambda_{jt,i} \beta_{jt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.15)

$\sum_{i=0}^{l_{jt}} \lambda_{jt,i} = 1$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.16)

$\lambda_{jt,0} \le z_{jt,1}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.17)

$\lambda_{jt,i} \le z_{jt,i} + z_{jt,i+1}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}-1$, (2.18)

$\lambda_{jt,l_{jt}} \le z_{jt,l_{jt}}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.19)

$\sum_{i=1}^{l_{jt}} z_{jt,i} = 1$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.20)

$y_{jt} \ge 1 - \lambda_{jt,0}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.21)

$x_{jkt} = \sum_{i=0}^{l_{jkt}} \lambda_{jkt,i} \beta_{jkt,i}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.22)

$\sum_{i=0}^{l_{jkt}} \lambda_{jkt,i} = 1$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.23)

$\lambda_{jkt,0} \le z_{jkt,1}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.24)

$\lambda_{jkt,i} \le z_{jkt,i} + z_{jkt,i+1}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jkt}-1$, (2.25)

$\lambda_{jkt,l_{jkt}} \le z_{jkt,l_{jkt}}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.26)

$\sum_{i=1}^{l_{jkt}} z_{jkt,i} = 1$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.27)

$y_{jkt} \ge 1 - \lambda_{jkt,0}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.28)

$\lambda_{jt,i}, \lambda_{jkt,i} \ge 0$, $\quad$ for all $j$, $k$, $t$, $i$, (2.29)

$y_{jt}, y_{jkt} \in \{0,1\}$, $\quad$ for all $j$, $k$, $t$, (2.30)

$z_{jt,i}, z_{jkt,i} \in \{0,1\}$, $\quad$ for all $j$, $k$, $t$, $i$. (2.31)
Slope-formulation. The slope-formulation is similar to the λ-formulation in the

sense that the $p_{jt}$ values on the x-axis of Figure 2-3 are again rewritten in terms of

the break points. However, this time $p_{jt}$ is not a convex combination of the break

points. The production amounts are now defined as $p_{jt} = \sum_{i=1}^{l_{jt}} \gamma_{jt,i} (\beta_{jt,i} - \beta_{jt,i-1})$,

where $\beta_{jt,0} = 0$ and $0 \le \gamma_{jt,i} \le 1$ for $i = 1, 2, \ldots, l_{jt}$. As in the λ-formulation, the

vector $z_{jt}$ in the following formulation takes $O(l_{jt})$ possible values and not $O(2^{l_{jt}})$.

However, in this case one or more of the components of $z_{jt}$ can be 1. If one of the

components is 1 (e.g., $z_{jt,m} = 1$), then all of the preceding components must equal 1

($z_{jt,i} = 1$ for all $i = 1, \ldots, m-1$). The slope-formulation of the PID problem is given by


minimize $\sum_{j=1}^{J}\sum_{t=1}^{T} \Big( s_{jt,1} y_{jt} + h_{jt} I_{jt} + \sum_{i=1}^{l_{jt}} \Delta r_{jt,i} \gamma_{jt,i} \Big) + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T} \Big( s_{jkt,1} y_{jkt} + \sum_{i=1}^{l_{jkt}} \Delta f_{jkt,i} \gamma_{jkt,i} \Big)$

subject to (P4)

(2.4), (2.5), and

$p_{jt} = \sum_{i=1}^{l_{jt}} \gamma_{jt,i} \Delta\beta_{jt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.32)

$\gamma_{jt,i} \ge z_{jt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}-1$, (2.33)

$\gamma_{jt,i+1} \le z_{jt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}-1$, (2.34)

$y_{jt} \ge \gamma_{jt,1}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.35)

$x_{jkt} = \sum_{i=1}^{l_{jkt}} \gamma_{jkt,i} \Delta\beta_{jkt,i}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.36)

$\gamma_{jkt,i} \ge z_{jkt,i}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jkt}-1$, (2.37)

$\gamma_{jkt,i+1} \le z_{jkt,i}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jkt}-1$, (2.38)

$y_{jkt} \ge \gamma_{jkt,1}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.39)

$0 \le \gamma_{jt,i}, \gamma_{jkt,i} \le 1$, $\quad$ for all $j$, $k$, $t$, $i$, (2.40)

$y_{jt}, y_{jkt} \in \{0,1\}$, $\quad$ for all $j$, $k$, $t$, (2.41)

$z_{jt,i}, z_{jkt,i} \in \{0,1\}$, $\quad$ for all $j$, $k$, $t$, $i$, (2.42)

where

$\Delta r_{jt,1} = r_{jt}(\beta_{jt,1}) - s_{jt,1}$ and $\Delta r_{jt,i} = r_{jt}(\beta_{jt,i}) - r_{jt}(\beta_{jt,i-1})$ for $i = 2, \ldots, l_{jt}$,

$\Delta f_{jkt,1} = f_{jkt}(\beta_{jkt,1}) - s_{jkt,1}$ and $\Delta f_{jkt,i} = f_{jkt}(\beta_{jkt,i}) - f_{jkt}(\beta_{jkt,i-1})$ for $i = 2, \ldots, l_{jkt}$,

$\Delta\beta_{jt,i} = \beta_{jt,i} - \beta_{jt,i-1}$ for $i = 1, 2, \ldots, l_{jt}$, and

$\Delta\beta_{jkt,i} = \beta_{jkt,i} - \beta_{jkt,i-1}$ for $i = 1, 2, \ldots, l_{jkt}$.
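The incremental representation behind the slope-formulation can be sketched as follows: segments are filled in order (matching the requirement that $z_{jt,m} = 1$ forces all preceding components to 1), and the objective accumulates the cost increments. Names are illustrative and the data match the earlier piecewise example:

```python
def slope_representation(p, beta):
    """Incremental (slope-formulation) representation of p: returns
    gamma with p = sum(gamma[i] * (beta[i+1] - beta[i])), each gamma
    in [0, 1], and segments filled strictly in order."""
    gamma = []
    remaining = p
    for i in range(1, len(beta)):
        width = beta[i] - beta[i - 1]
        take = min(max(remaining, 0.0), width)
        gamma.append(take / width)
        remaining -= take
    if remaining > 1e-12:
        raise ValueError("p exceeds the last break point")
    return gamma

def slope_cost(p, r_at_beta, s1, beta):
    """Cost as in the slope-formulation objective: s1 * y plus the
    increments dr_1 = r(beta_1) - s1 and dr_i = r(beta_i) - r(beta_{i-1})
    weighted by gamma."""
    if p == 0:
        return 0.0
    gamma = slope_representation(p, beta)
    dr = [r_at_beta[1] - s1] + [r_at_beta[i] - r_at_beta[i - 1]
                                for i in range(2, len(beta))]
    return s1 + sum(d * g for d, g in zip(dr, gamma))
```

For p = 15 with break points [0, 10, 20, 30], the first segment is full and the second half full (gamma = [1.0, 0.5, 0.0]), and the accumulated cost again equals 57.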

ASP formulation. Kim and Pardalos [50] used an Arc Separation Procedure

(ASP) to transform network flow problems with piecewise-linear concave costs to

network flow problems with fixed charges. Using the same procedure, the PID

problem with piecewise-linear concave costs can be transformed into a fixed charge

network flow problem. Each arc is separated into $l_{jt}$ arcs as shown in Figure 2-4. All

of the new arcs have fixed charge cost functions.








[Figure 2-4: Arc separation procedure. The single arc with cost function $r_{jt}(p_{jt})$ is replaced by parallel arcs with cost functions $r_{jt,1}(p_{jt,1})$, $r_{jt,2}(p_{jt,2})$, and $r_{jt,3}(p_{jt,3})$.]


The network grows in size after the arc separation procedure. The number of

production and transportation arcs in the extended network is given by

$\sum_{j=1}^{J}\sum_{t=1}^{T} l_{jt} + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T} l_{jkt}$.









The production cost can now be written in terms of the cost functions of the

new arcs created in the following way:

$r_{jt}(p_{jt}) = \sum_{i=1}^{l_{jt}} r_{jt,i}(p_{jt,i}) = \sum_{i=1}^{l_{jt}} \big( c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i} \big)$. (2.43)

Note that the equality (2.43) may not hold in general without additional constraints

to restrict the domain for each separated arc cost function. However, for concave cost

functions the equality holds due to the properties given by (2.13) and (2.14). Kim

and Pardalos [50] gave a simple proof based on contradiction. This formulation is

closely related to the λ-formulation in the sense that only one of the linear pieces will

be used in an optimal solution. Like the $z_{jt}$ variables in the λ-formulation, the vector

of $y_{jt}$ variables in the formulation below will have $O(l_{jt})$ possible values. The MILP

formulation after the arc separation procedure is

minimize $\sum_{j=1}^{J}\sum_{t=1}^{T}\sum_{i=1}^{l_{jt}} \big( c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i} \big) + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T}\sum_{i=1}^{l_{jkt}} \big( c_{jkt,i} x_{jkt,i} + s_{jkt,i} y_{jkt,i} \big) + \sum_{j=1}^{J}\sum_{t=1}^{T} h_{jt} I_{jt}$

subject to (P5)

$\sum_{i=1}^{l_{jt}} p_{jt,i} + I_{j,t-1} = I_{jt} + \sum_{k=1}^{K}\sum_{i=1}^{l_{jkt}} x_{jkt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.44)

$\sum_{j=1}^{J}\sum_{i=1}^{l_{jkt}} x_{jkt,i} = d_{kt}$, $\quad k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.45)

$p_{jt,i} \le W_{jt} y_{jt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}$, (2.46)

$I_{jt} \le V_{jt}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$, (2.47)

$x_{jkt,i} \le U_{jkt} y_{jkt,i}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jkt}$, (2.48)

$y_{jt,i}, y_{jkt,i} \in \{0,1\}$, $\quad$ for all $j$, $k$, $t$, $i$, (2.49)

$p_{jt,i}, I_{jt}, x_{jkt,i} \ge 0$, $\quad$ for all $j$, $k$, $t$, $i$. (2.50)
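Equality (2.43) can be checked numerically for concave data: a continuous piecewise-linear concave function is the pointwise minimum of its segment lines, so routing all of the flow on the single cheapest separated fixed-charge arc recovers the original cost $r_{jt}(p_{jt})$, which is why no extra linking constraints are needed. A sketch with illustrative names, reusing the earlier sample data:

```python
def asp_cost(p, s, c):
    """Cost of routing quantity p through the separated arcs of the
    arc separation procedure: each linear piece i becomes a fixed-charge
    arc with cost s[i] + c[i] * x, and an optimal extreme flow puts all
    of p on the single cheapest one.  Lists are 1-indexed (index 0
    unused).  For concave data (decreasing c, increasing s) this
    minimum equals the original piecewise cost."""
    if p == 0:
        return 0.0
    return min(s[i] + c[i] * p for i in range(1, len(s)))

# Concave sample data as before: slopes 4 > 3 > 2, intercepts 2 < 12 < 32.
s = [None, 2, 12, 32]
c = [None, 4, 3, 2]
```

At p = 15 the three separated arcs would cost 62, 57, and 62; the minimum, 57, matches the direct evaluation of the second segment.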


NSP formulation. We developed a Node Separation Procedure (NSP) to transform

the PID problem with piecewise-linear and concave costs to PID problems with fixed

charge costs in an extended network. The NSP procedure is similar to the ASP

procedure, but it results in a larger network because the NSP separates the inventory

arcs as well. Although the network grows in size, this formulation is useful because its

linear programming (LP) relaxation leads to tight lower bounds. Once the problem

is transformed into a fixed charge network flow problem through NSP, the lower

bounding procedure explained in section 3.5 can be used to get good lower bounds.

Figure 2-5 illustrates the node separation procedure.


[Figure 2-5: Node separation procedure. Facility node $F_{jt}$ is split into nodes $F_{jt,1}$, $F_{jt,2}$, $F_{jt,3}$, each receiving its own production arc from the dummy source D and connected by an inventory arc to the corresponding copy $F_{j,t+1,1}$, $F_{j,t+1,2}$, $F_{j,t+1,3}$ of facility node $F_{j,t+1}$.]


The MILP formulation for PID problems with piecewise-linear concave costs,

linear inventory costs, and fixed charge distribution costs after the node separation

procedure is

minimize $\sum_{j=1}^{J}\sum_{t=1}^{T}\sum_{i=1}^{l_{jt}} \big( c_{jt,i} p_{jt,i} + s_{jt,i} y_{jt,i} + h_{jt} I_{jt,i} \big) + \sum_{j=1}^{J}\sum_{k=1}^{K}\sum_{t=1}^{T}\sum_{i=1}^{l_{jt}} \big( C_{jkt} x_{jkt,i} + S_{jkt} y_{jkt,i} \big)$

subject to (P6)

$p_{jt,i} + I_{j,t-1,i} = I_{jt,i} + \sum_{k=1}^{K} x_{jkt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}$, (2.51)

$\sum_{j=1}^{J}\sum_{i=1}^{l_{jt}} x_{jkt,i} = d_{kt}$, $\quad k = 1,\ldots,K$; $t = 1,\ldots,T$, (2.52)

$p_{jt,i} \le W_{jt} y_{jt,i}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}$, (2.53)

$I_{jt,i} \le V_{jt}$, $\quad j = 1,\ldots,J$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}$, (2.54)

$x_{jkt,i} \le U_{jkt} y_{jkt,i}$, $\quad j = 1,\ldots,J$; $k = 1,\ldots,K$; $t = 1,\ldots,T$; $i = 1,\ldots,l_{jt}$, (2.55)

$y_{jt,i}, y_{jkt,i} \in \{0,1\}$, $\quad$ for all $j$, $k$, $t$, $i$, (2.56)

$p_{jt,i}, I_{jt,i}, x_{jkt,i} \ge 0$, $\quad$ for all $j$, $k$, $t$, $i$. (2.57)















CHAPTER 3
SOLUTION PROCEDURES

3.1 Existing Solution Approaches

The PID problem, as described in section 2.4, is formulated as a 0-1

Mixed Integer Linear Program (MILP) due to the structure of the production and

distribution cost functions. In fact, most of the exact solution approaches for

combinatorial optimization problems transform the problem into an equivalent 0-1

MILP and use branch and bound techniques to solve it optimally. Two of the

latest branch and bound algorithms for fixed charge transportation problems were

presented by McKeown and Ragsdale [60] and Lamar and Wallace [53]. Recently, Bell,

Lamar and Wallace [7] presented a branch and bound algorithm for the capacitated

fixed charge transportation problem. Most of the branch and bound algorithms use

conditional penalties called up and down penalties, that contribute to finding good

lower bounds.

Other exact solution algorithms include vertex enumeration techniques, dynamic

programming approaches, cutting-plane methods, and branch-and-cut methods.

Although exact methods have matured greatly, the computational time required by

these algorithms grows exponentially as the problem size increases. In many instances

they are not able to produce an optimal solution efficiently. Our computational

experiments indicate that even for moderate size PID problems the exact approaches

fail to find a solution.

The NP-hardness of the problem motivates the use of approximate approaches.

Since the PID problem with fixed charge or piecewise-linear concave costs falls

under the category of MCCNFP, it achieves its optimal solution at an extreme

point of the feasible region (Horst and Pardalos [42]). Most of the approximate









solution approaches exploit this property. Some recent heuristic procedures have

been developed by Holmqvist, Migdalas and Pardalos [39], Diaby [12], Khang and

Fujuwara [47], Larsson, Migdalas and Ronnqvist [54], Ghannadan et al. [28], and

Sun et al. [70]. Recently, Kim and Pardalos [48-50] provided a dynamic slope

scaling procedure (DSSP) to solve FCNFP. Eksioglu et al. [15] refined the DSSP

and presented results for bipartite and layered fixed charge network flow problems.

A wide variety of the heuristic approaches for NP-hard problems are based on

local search. Local search methods provide a framework for searching the solution

space focusing on local neighborhoods. In the following we present local search based

solution methods for the PID problem.

3.2 Local Search

Local search is based on a simple and natural method which is perhaps the oldest

optimization method, trial and error. However, local search algorithms have proven

to be powerful tools for hard combinatorial optimization problems, in particular those

known as NP-hard. The book edited by Aarts and Lenstra [1] is an excellent source

which provides some applications as well as complexity results.

The set of solutions of an optimization problem that may be visited by a local

search algorithm is called the search space. Typically, the feasible region of the

problem is defined as the search space. However, if generating feasible solutions is

not easy, then a different search space may be defined, which takes advantage of the

special structures of the problem. If a search space other than the feasible region

is used, then the objective function should be modified so that the infeasibility of a

given solution can be identified.

A basic version of local search is iterative improvement. In other words, a general

local search algorithm starts with some initial solution, S, and keeps replacing it with

another solution in its neighborhood, N(S), until some stopping criterion is satisfied.

Therefore, to implement a local search algorithm the following must be identified:









* a neighborhood, N(S),

* a stopping criterion,

* a move strategy, and

* an evaluation function (this is simply the objective function if the search space is the feasible region).

Good neighborhoods often take advantage of the special combinatorial structure

of the problem and are typically problem dependent. The algorithm stops if

the current solution does not have any neighbors of lower cost (in the case of

minimization). The basic move strategy in local search is to move to an improved

solution in the neighborhood. Usually two strategies are implemented which are

known as: the first better move strategy and the best admissible move strategy. In

the first better move strategy, the neighboring solutions are investigated in a pre-

specified order and the first solution that shows an improvement is taken as the next

solution. The order in which the neighbors are searched may affect the solution quality

and the computational time. In the best admissible move strategy, the neighborhood

is searched exhaustively and the best solution is taken as the next solution. In this

case, since the search is exhaustive the order of the search is not important.
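The ingredients listed above can be assembled into a generic iterative-improvement skeleton supporting both move strategies. This is a sketch with illustrative names, not the dissertation's implementation:

```python
def local_search(initial, neighbors, cost, strategy="best"):
    """Generic iterative-improvement local search.  `neighbors(S)`
    yields candidate solutions in the neighborhood N(S), `cost(S)` is
    the evaluation function, and `strategy` selects between the
    first-better ("first") and best-admissible ("best") move rules.
    Stops when no neighbor improves on the current solution."""
    current, current_cost = initial, cost(initial)
    while True:
        best_move, best_cost = None, current_cost
        for cand in neighbors(current):
            c = cost(cand)
            if c < best_cost:
                best_move, best_cost = cand, c
                if strategy == "first":   # take the first improvement
                    break
        if best_move is None:             # local optimum reached
            return current, current_cost
        current, current_cost = best_move, best_cost
```

For example, minimizing $x^2$ over the integers with the neighborhood $\{x-1, x+1\}$ from a start of 7 descends to the (here also global) optimum 0 under either strategy.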

Lately, other more sophisticated strategies have been developed to allow the

search to escape from locally optimal solutions in the hopes of finding better solutions.

These sophisticated procedures are referred to as metaheuristics in the literature.

In the following we first give a brief history of local search including a list of

metaheuristics, then discuss the computational complexity of local search, and finally

give our local search procedure for the PID problem.

3.2.1 History of Local Search

The use of local search in combinatorial optimization has a long history. Back in

late 1950s, Bock [8] and Croes [11] solved traveling salesman problems (TSP) using

edge-exchange local search algorithms for the first time. Later, Lin [55] refined the









edge-exchange algorithms for the TSP and presented 3-exchange and Or-exchange

neighborhood functions. Reiter and Sherman [65] examined various neighborhoods

for the TSP and introduced the multi-start strategy. Subsequently, Kernighan and

Lin [46] presented a variable-depth search algorithm for uniform graph partitioning.

Lin and Kernighan [56] also successfully applied a variable-depth search algorithm

for the TSP.

In the 1980s more generalized approaches were proposed which combined

local search with other heuristic algorithms. These approaches allow moves to

solutions that do not necessarily give better objective function values. Examples

of these sophisticated approaches, called metaheuristics, are: simulated annealing,

tabu search, genetic algorithms, neural networks, greedy randomized adaptive

search procedure (GRASP), variable neighborhood search, ant systems, population

heuristics, memetic algorithms, and scatter search. Recent reviews on these

procedures can be found in the book edited by Pardalos and Resende [61].

3.2.2 Complexity of Local Search

An important question about a local search algorithm is the number of steps it

takes to reach a locally optimal solution. The time it takes to search the neighborhood

of a given solution is usually polynomially bounded. However, the number of moves

it takes to reach a local optimum from a given solution may not be polynomial. For

example, there are TSP instances and initial solutions for which the local search takes

an exponential number of steps under the 2-exchange neighborhood. To analyze the

complexity of local search, Johnson, Papadimitriou and Yannakakis [45] introduced a

complexity class called PLS (polynomial-time local search). PLS contains problems

whose neighborhoods can be searched in polynomial time. Yannakakis [82] provides

an extensive survey for the theory of PLS-completeness.

It is important to distinguish between the complexity of a local search problem

and a local search heuristic. A local search problem is the problem of finding









local optima by any means whereas a local search heuristic is finding local optima

by the standard iterative procedure. An interesting and typical example is linear

programming (LP). For LP local optimality coincides with global optimality and the

Simplex method can be viewed as a local search algorithm. The move strategy of a

local search algorithm affects the running time. Similarly, the pivoting rule affects

the running time of the Simplex method. It is well-known that in the worst case the

Simplex method may take an exponential number of steps for most pivoting rules. It

is still an open problem whether there exists a pivoting rule which makes the Simplex

method a polynomial time algorithm. However, LP can be solved by other methods

such as the ellipsoid or interior point algorithms in polynomial time.

In the past two decades there has been considerable work on the theory of local

search. Several local search problems have been shown to be PLS-complete, but

the complexity of finding local optima for many interesting problems remains open,

although computational results are encouraging.

3.2.3 Local Search for the PID Problem

In this section, we give details about the neighborhood, the move strategy, the

stopping criterion, and the evaluation function for the PID problem, which are the

main ingredients of a local search algorithm. The PID problem is formulated as

a network flow problem in section 2.4, where the objective is the minimization of a

concave function. We take advantage of the special structure of network flow problems

to define a neighborhood and a move strategy. We first give several definitions of

neighborhood for MCCNFP and then adopt one of these definitions and provide a

neighborhood definition for the PID problem. Next, we discuss the move strategy, the

stopping condition, and the evaluation function. The generation of initial solutions

will be discussed later in sections 3.3 and 3.4.









Neighborhood definitions for MCCNFP. The usual definition of local optimality defines a neighborhood of a solution S in the following way, where ||·|| is a vector norm and ε > 0.

Definition 3.1 N_ε(S) = {S' : S' is feasible to the problem and ||S − S'|| ≤ ε}.

Under this definition of a neighborhood, a solution S is locally optimal if it is not possible to decrease the objective function value by rerouting a small portion ε of flow. Actually, for certain concave functions the ε-neighborhood results in all extreme flows being locally optimal. Therefore, this standard definition of neighborhood, called the ε-neighborhood, is not very useful for our problem.

Proposition 2 Every extreme flow is a local optimum for an uncapacitated network flow problem with a single source and with fixed arc costs under N_ε.

Proof. To create an ε-neighbor of a given extreme flow, let us reroute an ε amount of flow to one of the demand nodes. Subtract ε from all the arcs on the path from the source to that demand node. This does not change the current cost, for two reasons. First, the current flow on all arcs on the path from the source to that demand node is higher than ε, so these arcs still carry positive flow. Second, the arcs have the following cost structure:

r_jt(p_jt) = s_jt if p_jt > 0, and r_jt(p_jt) = 0 if p_jt = 0.

Now route the ε amount along a different path. This creates at least one cycle in the network, which means at least one arc that had zero flow now has positive flow. Thus, the total cost will not decrease. □

Gallo and Sodini [25] have also shown that every extreme flow is a local optimum under N_ε for similar problems with the following cost structure:

r_jt(p_jt) = ρ p_jt^a,

where 0 < a < 1 and ρ > 0.









There are other concave cost functions, however, for which an extreme flow is not necessarily a local optimum under N_ε. Consider the example in Figure 3-1, with two demand nodes, two transshipment nodes, and a single source node.


Figure 3-1: An example for the ε-neighborhood.


If the cost functions are fixed charge, then the extreme flow on the left in Figure 3-1 has a total cost of s_11 + c_11 d_11 + s_21 + c_21 d_21 + s_111 + c_111 d_11 + s_221 + c_221 d_21 (see section 2.4.1 for the definition of the variables). The non-extreme ε-neighbor on the right in Figure 3-1 has a total cost of s_11 + c_11(d_11 − ε) + s_21 + c_21(d_21 + ε) + s_111 + c_111(d_11 − ε) + s_221 + c_221 d_21 + s_211 + c_211 ε. The difference between the costs is s_211 + (c_211 + c_21 − c_11 − c_111)ε. If appropriate cost values are chosen, this difference can be negative, which indicates that the extreme flow is not a local optimum. However, the ε-neighborhood is still not useful for our problem because it is not easy to search this neighborhood efficiently.
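The sign of this cost difference is easy to check numerically. The following sketch evaluates s_211 + (c_211 + c_21 − c_11 − c_111)ε with purely hypothetical cost values (not taken from the dissertation's experiments) and shows that the difference can indeed be negative:

```python
# Hedged sketch: cost difference between the extreme flow and its
# epsilon-neighbor from Figure 3-1, using hypothetical cost values.
def cost_difference(s211, c111, c211, c11, c21, eps):
    # difference = s_211 + (c_211 + c_21 - c_11 - c_111) * eps
    return s211 + (c211 + c21 - c11 - c111) * eps

# With cheap routes through facility 2 and expensive ones through facility 1,
# the epsilon-neighbor is strictly cheaper, so the extreme flow is not a
# local optimum under the epsilon-neighborhood.
diff = cost_difference(s211=1.0, c111=10.0, c211=1.0, c11=10.0, c21=1.0, eps=0.5)
print(diff)  # -8.0: rerouting eps units of flow reduces the total cost
```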

Gallo and Sodini [25] developed the following generalized definition of

neighborhood for MCCNFP.

Definition 3.2 NAEF(S) = {S' : S' is feasible to the problem and it is an adjacent

extreme flow}.

Here, two extreme flows are adjacent if and only if they differ only on the path

between two vertices. Thus, the graph joining the two extreme flows S and S' contains

a single undirected cycle. Gallo and Sodini [25] described a procedure which detects

if a given extreme flow is a local optimum or finds a new better extreme flow by









solving a series of shortest path problems. Their procedure constructs a modified

network and solves a shortest weighted path problem for each vertex in the current

solution, S. The modified network is constructed to prevent moving to nonextreme

or nonadjacent solutions.

Guisewite and Pardalos [31] developed the following more relaxed definition of

neighborhood for MCCNFP.

Definition 3.3 NAF(S) = {S' : S' is feasible to the problem and it is an adjacent

flow}.

Under this definition, S' is adjacent to the extreme flow S if it is obtained by rerouting a single sub-path within S. N_AF(·) is a relaxed neighborhood with respect to N_AEF(·) in the sense that all adjacent solutions, not only adjacent extreme flows, are included in the neighborhood.

are included in the neighborhood. In this case it is easier to reach the neighboring

solutions because the modified graph that prevented nonextreme solutions is not

required and the shortest weighted path problem need not be solved for all vertices of

S. It is sufficient to solve the shortest weighted path problem for the branch points

(nodes with degree greater than two) and the sink nodes.

Neighborhood definition for the PID problem. A generally accepted rule is that a larger neighborhood yields better-quality and more accurate final solutions, at the price of additional computational time. Therefore, it is likely that N_AF(·), defined above, will lead to better solutions. Thus, we define a neighborhood for the PID problem in the following way, which is, in principle, similar to N_AF(·).

Definition 3.4 A solution S' is a neighbor of the current feasible solution S if the facility supplying one of the retailers is changed, or if the demand of a retailer comes from the same facility but through production in a different period.

Here, to reach a neighbor S' from an extreme solution S we first subtract the

demand of a retailer from the current path to that retailer, and then route it through a









different path. An example is shown in Figure 3-2 where the demand of retailer 1

in period 2 (R1,2) initially comes from facility 1 in period 2 (F1,2), but it is rerouted

through facility 2 in period 1 (F2,1). Note that changing the facility that supplies a

retailer may not necessarily result in an extreme point feasible solution. Therefore,

after the flow to a retailer is rerouted we need to force the solution to be a tree, which

may result in additional cost savings.

In Figure 3-2, for example, there is a cycle between nodes D, F2,1, and F2,2.

This cycle indicates that there is a positive amount of inventory carried forward from

period 1 to period 2 at facility 2 and at the same time there is a positive amount of

production at facility 2 in period 2. To remove this cycle we should either eliminate

the inventory or the production from the cycle. However, we know that rerouting

the inventory through node F2,2 will actually increase the total cost and this can

easily be shown. When we moved from the extreme point solution on the left to the

nonextreme solution on the right rerouting the demand of node R1,2 was cheaper if

node F1,2 is used rather than F2,2. Therefore, the production arc should be removed

from the cycle. A more formal discussion on cycle detection and elimination is given

later in this section.




Figure 3-2: An example of moving to a neighboring solution.


The move strategy, the evaluation function, and the stopping condition. The

move strategy determines the order in which the neighboring solutions are visited.









As we mentioned earlier, first better and best admissible are the two common move

strategies implemented. Guisewite and Pardalos [31] and Eksioglu et al. [15] presented

computational results using both move strategies on network flow problems. These

authors provided empirical evidence that the first better move strategy requires less

time while the quality of the solutions is comparable for both strategies. Therefore, in

our local search algorithms we used the first better move strategy. In this strategy, the

neighboring solutions are investigated in a pre-specified order and the first solution

that shows an improvement is taken as the next solution. The improvements are

measured by an evaluation function. The evaluation function that we use calculates

the total cost of a solution which is simply the objective function. The local search

algorithm terminates when there is no more improvement. In other words, the

stopping condition determines whether the current solution is a local optimum or

not. If the current solution is a local optimum then the procedure terminates.

The local search algorithm is summarized in Figure 3-3. Let j' be the facility that currently supplies retailer k's demand in period t through production in period τ' (t ≥ τ'). Also, let δ_jτ be the additional cost of changing the supplier to facility j in period τ (t ≥ τ). In other words, δ_jτ is the cost of rerouting the demand of the retailer k under consideration. Note that δ_jτ = 0 when j = j' and τ = τ'. In the local search procedure given in Figure 3-3, j* and τ* are such that δ_j*τ* = Δ = min{δ_jτ : j = 1,...,J, τ = 1,...,t}.

Cycle detection and elimination. Due to the special structure of our network,

identifying cycles and removing them can be done efficiently. A cycle for the PID

problem means that at least one of the facilities is producing and also carrying

inventory in the same period. Therefore, with a single pass over all facilities and

time periods the cycles can be identified. The procedure is summarized in Figure

3-4. Note that two types of cycles can be created and they are analyzed separately

in the following paragraphs.





































procedure Local
    Let S be an initial extreme feasible solution
    while (S is not a local optimum) do
        for (t = 1,...,T) and (k = 1,...,K) do
            Calculate δ_jτ, j = 1,...,J, τ = 1,...,t
            Δ := min{δ_jτ : j = 1,...,J, τ = 1,...,t}
            if Δ = 0 then S is a local optimum
            else if Δ < 0 then
                Reroute the flow through the new path
                Check for any cycles and eliminate them
            end if
        end for t and k
    end while
    return S
end procedure

Figure 3-3: The local search procedure.

procedure Cycle
    for (j = 1,...,J) and (l = 1,...,T) do
        if (p_jl > 0 and I_jl > 0) then
            if p_jl is on the new path then reroute I_jl
            otherwise reroute p_jl
    end for
    return S
end procedure

Figure 3-4: Cycle detection and elimination.
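The single-pass cycle check at the heart of procedure Cycle can be sketched as follows. This is a minimal illustration of the detection step only, under the assumption that production and inventory levels are stored as nested lists indexed by facility and period:

```python
# Hedged sketch of the cycle-detection pass (procedure Cycle): a solution has
# a cycle whenever some facility both produces and carries inventory in the
# same period. p[j][t] is production at facility j in period t, and I[j][t]
# is the inventory carried into period t at facility j.
def find_cycles(p, I):
    cycles = []
    for j in range(len(p)):          # one pass over all facilities...
        for t in range(len(p[j])):   # ...and all time periods
            if p[j][t] > 0 and I[j][t] > 0:
                cycles.append((j, t))
    return cycles

# Facility 0 produces in period 1 while also carrying inventory into period 1,
# so (0, 1) is flagged as a cycle to be eliminated by rerouting.
print(find_cycles(p=[[5, 3], [4, 0]], I=[[0, 2], [0, 0]]))  # [(0, 1)]
```

The elimination step (choosing to reroute either the production or the inventory leg of the detected cycle) depends on which arc lies on the newly chosen path, as discussed in the type I and type II cases below.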










Type I cycle. Assume that the demand of retailer k in period t is satisfied through production at facility j at time τ after it is rerouted. If facility j is already carrying inventory from period τ − 1 to period τ, then a type I cycle is created. Only one cycle is created, and it can be eliminated by rerouting the inventory. Figure

3-5 gives an example of a type I cycle where the demand of retailer 1 in period 3 is

originally satisfied from production at facility 2 in period 3. If it is cheaper to satisfy

the demand of retailer 1 in period 3 by production at facility 1 in period 3, then this

will result in the network flow given on the right of Figure 3-5. To eliminate the

cycle the amount of inventory entering node F1,3 is subtracted from its current path

and added on to the production arc entering F1,3.




Figure 3-5: An example of a type I cycle.


Type II cycle. Assume that facility j currently produces in several periods and

that the demand of retailer k in period t is satisfied through production from an

earlier period after it is rerouted. The rerouting of retailer k's demand may result

in one or more cycles. Figure 3-6 gives an example of a type II cycle. The flow on

the left is the starting extreme flow which indicates that the demand of retailer 1 in










period 3 is currently satisfied through production in period 3 at facility 2. Assume

that it is actually cheaper to satisfy the demand of retailer 1 in period 3 through

production at facility 1 in period 1. This will result in the flow given on the right of

Figure 3-6 which has two cycles. To eliminate these cycles the production amounts

in periods 2 and 3 at facility 1 are eliminated and carried forward as inventory from

period 1.


Figure 3-6: An example of a type II cycle.


So far, we have talked about the neighborhood, the move strategy, the evaluation

function, and the stopping condition. However, we have not discussed how the initial

solutions are found. Generation of initial solutions may be critical for local search

procedures. Good starting solutions may lead to better quality local optima. In

sections 3.3 and 3.4 we discuss several procedures for generating good initial solutions.

3.3 Dynamic Slope Scaling Procedure (DSSP)

The DSSP is a procedure that iteratively approximates the concave cost function

by a linear function, and solves the corresponding network flow problem. Note that

each of the approximating network problems has exactly the same set of constraints,









procedure DSSP
    q := 0 /* set the iteration counter to zero */
    Initialize c̄_jt^(0) and c̄_jkt^(0) /* find an initial linear cost */
    while (stopping criterion not satisfied) do
        q := q + 1
        Solve the following network flow problem:
            minimize Σ_{t=1}^T Σ_{j=1}^J (c̄_jt^(q−1) p_jt^(q) + h_jt I_jt^(q)) + Σ_{t=1}^T Σ_{j=1}^J Σ_{k=1}^K c̄_jkt^(q−1) x_jkt^(q)
            subject to the original constraints of (P1)
        Update c̄_jt^(q) and c̄_jkt^(q)
        if cost(S^(q)) < cost(S) then S := S^(q)
    end while
    return S
end procedure

Figure 3-7: The DSSP algorithm.

and differs only with respect to the objective function coefficients. The motivation

behind the DSSP is the fact that a concave function, when minimized over a set

of linear constraints, will have an extreme point optimal solution. Therefore, there

exists a linear cost function that yields the same optimal solution as the concave cost

function. Figure 3-7 summarizes the DSSP algorithm.

Important issues regarding the DSSP algorithm are finding initial linear costs,

updating the linear costs, and stopping conditions.
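Before addressing these issues, the overall loop can be sketched compactly. This is a minimal illustration rather than the full implementation: `solve_linear_nfp` is a hypothetical placeholder for any linear min-cost network flow solver, and the toy instance simply routes a single demand over one of two parallel fixed-charge arcs:

```python
# Hedged sketch of the DSSP loop for generic concave arc costs r (with
# r(0) = 0): linearize at the arc capacities W, solve the resulting linear
# network flow problem, re-linearize at the new flows, and stop when two
# consecutive solutions coincide.
def dssp(r, W, solve_linear_nfp, max_iter=100):
    c = {a: r[a](W[a]) / W[a] for a in r}          # initial slope r(W)/W
    x = solve_linear_nfp(c)
    for _ in range(max_iter):
        # update: slope through the origin at the current flow level
        c = {a: r[a](x[a]) / x[a] if x[a] > 0 else c[a] for a in r}
        x_new = solve_linear_nfp(c)                # flows under updated slopes
        if x_new == x:                             # S(q) = S(q-1): terminate
            return x
        x = x_new
    return x

# Toy instance: a demand of 10 routed over one of two parallel arcs with
# fixed charge costs; the stand-in solver sends everything down the arc
# with the smallest linearized unit cost.
r = {"a": lambda x: (10 + 1 * x) if x > 0 else 0,  # fixed 10, unit cost 1
     "b": lambda x: (2 + 2 * x) if x > 0 else 0}   # fixed 2, unit cost 2
solver = lambda c: {a: (10.0 if a == min(c, key=c.get) else 0.0) for a in c}
print(dssp(r, {"a": 10.0, "b": 10.0}, solver))  # {'a': 10.0, 'b': 0.0}
```

Here arc "a" has total cost 20 at full flow against 22 for arc "b", so the initial linearization already points at the optimal arc and the loop terminates after one confirming iteration.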

Finding an initial linear cost (c̄^(0)). Kim and Pardalos [48, 50] investigated two different ways to initiate the algorithm for fixed charge and piecewise-linear concave network flow problems. Here, we generalize the heuristic to all concave costs with the property that the total cost is zero if the activity level is zero. In other words, r_jt(·) and f_jkt(·) are concave functions on the interval [0, ∞) and r_jt(0) = f_jkt(0) = 0. The initial linear cost factors we use are c̄_jt^(0) = r_jt(W_jt)/W_jt and c̄_jkt^(0) = f_jkt(U_jkt)/U_jkt.









Updating scheme for c̄^(q). Given a feasible solution S^(q) = (p^(q), I^(q), x^(q)) in iteration q, the objective function coefficients for the next iteration are expressed in linear form as

c̄_jt^(q) = r_jt(p_jt^(q))/p_jt^(q) if p_jt^(q) > 0, and c̄_jt^(q) = c̄_jt^(q−1) if p_jt^(q) = 0,

and

c̄_jkt^(q) = f_jkt(x_jkt^(q))/x_jkt^(q) if x_jkt^(q) > 0, and c̄_jkt^(q) = c̄_jkt^(q−1) if x_jkt^(q) = 0.

Stopping condition. If two consecutive solutions in the above algorithm are equal, then the linear cost coefficients and the objective function values in the following iterations will be identical. As a result, once S^(q) = S^(q−1) there can be no more improvement. Therefore, a natural stopping criterion is to terminate the algorithm when ||S^(q) − S^(q−1)|| < ε. An alternative is to terminate the algorithm after a fixed number of iterations if this criterion is not satisfied.

The initiation and updating schemes for fixed charge and piecewise-linear concave

cases are analyzed below separately. In the remainder of this dissertation we will refer

to the local search approach that uses DSSP to generate the initial solutions as LS-DSSP.

3.3.1 Fixed Charge Case

Kim and Pardalos [48] investigated two different ways to initiate the algorithm. The initiation scheme presented here has been shown to provide better results for large scale problems (Figure 3-8). The initial value of the linear cost we use is c̄_jt^(0) = c_jt + s_jt/W_jt. This is simply the cost coefficient from the LP relaxation of the original problem. However, note that the problems we are solving are NFPs, which can be solved much faster than general LPs.

The linear cost coefficients are updated in such a way that they reflect the variable

costs and the fixed costs simultaneously in the following way:









Figure 3-8: Linearization of the fixed charge cost function.


c̄_jt^(q) = c_jt + s_jt/p_jt^(q) if p_jt^(q) > 0, and c̄_jt^(q) = c̄_jt^(q−1) if p_jt^(q) = 0,

c̄_jkt^(q) = c_jkt + s_jkt/x_jkt^(q) if x_jkt^(q) > 0, and c̄_jkt^(q) = c̄_jkt^(q−1) if x_jkt^(q) = 0.
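A quick way to see why the c + s/x update is natural: it is exactly the average (per-unit) cost of the fixed charge function at the current flow level, so the linearized cost agrees with the true cost at that point. A small sketch, with hypothetical values:

```python
# Hedged check: for a fixed charge cost s + c*x (for x > 0), the DSSP update
# c + s/x is the average per-unit cost at flow level x, so the linearized
# cost line passes through the true cost at that point.
def fixed_charge(c, s, x):
    return c * x + s if x > 0 else 0.0

def dssp_slope(c, s, x):
    return c + s / x          # updated linear coefficient at flow x

x = 4.0
slope = dssp_slope(c=2.0, s=6.0, x=x)
print(slope * x == fixed_charge(2.0, 6.0, x))  # True: 3.5 * 4 == 14
```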

3.3.2 Piecewise-linear Concave Case

When production or transportation costs are piecewise-linear and concave, the

problem can be transformed into a fixed charge network flow problem as described in

section 2.4.2. After the transformation, the solution approach given in section 3.3.1

can be used to solve the problem. However, the transformed problem does not always

lead to generation of feasible solutions since the problem is not solved to optimality.

Therefore, we apply the procedure to the original problem instead of the extended

network with some modification to account for each piece in the cost functions.

The linear cost coefficients are initialized in the same way as in the fixed charge case, i.e., c̄_jt^(0) = c_jt + s_jt/W_jt for the production arcs. To update the costs, each flow value is checked to see which interval it falls into, and the corresponding fixed and variable costs are used. The updating scheme (Figure 3-9) is as follows:










c̄_jt^(q) = c_jt,i + s_jt,i/p_jt^(q) if β_jt,i−1 < p_jt^(q) ≤ β_jt,i, and c̄_jt^(q) = c̄_jt^(q−1) if p_jt^(q) = 0,

and

c̄_jkt^(q) = c_jkt,i + s_jkt,i/x_jkt^(q) if β_jkt,i−1 < x_jkt^(q) ≤ β_jkt,i, and c̄_jkt^(q) = c̄_jkt^(q−1) if x_jkt^(q) = 0.

Figure 3-9: Linearization of the piecewise-linear concave cost function.
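The interval lookup in this updating scheme can be sketched with a standard bisection search over the breakpoints. The breakpoint and segment-cost values below are hypothetical:

```python
# Hedged sketch of the piecewise-linear update: locate the segment that the
# current flow falls into (breakpoints beta) and apply the c + s/x update
# with that segment's variable cost c_seg[i] and fixed cost s_seg[i].
import bisect

def piecewise_slope(flow, beta, c_seg, s_seg, prev_slope):
    if flow <= 0:
        return prev_slope                    # keep the old coefficient
    i = bisect.bisect_left(beta, flow)       # beta[i-1] < flow <= beta[i]
    return c_seg[i] + s_seg[i] / flow

# Three segments with breakpoints at 10 and 20; a flow of 15 falls in the
# second segment, whose (c, s) pair defines the new linear coefficient.
print(piecewise_slope(15.0, beta=[10.0, 20.0], c_seg=[3.0, 2.0, 1.0],
                      s_seg=[0.0, 15.0, 35.0], prev_slope=None))  # 3.0
```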


3.3.3 Performance of DSSP on Some Special Cases

In this section we analyze the performance of DSSP on some basic problems.

The problems considered are simple cases with only a single time period. We identify

some cases where the DSSP heuristic actually finds the optimal solution.

Multiple facilities and one retailer. If there is only a single retailer but multiple

facilities in the network, then the problem is easy to solve, provided that there are

no capacities on the production and transportation arcs and the cost functions are

concave. The problem in this case simply reduces to a shortest path problem (Figure

3-10) which can be solved in O(J) time by complete enumeration. In an optimal

solution only one pair of production and transportation arcs will be used since this

is an uncapacitated concave minimization problem. The optimal solution has the

following characteristics (j* indicates the facility used in the optimal solution):










* p_j*1 = d_11 and x_j*11 = d_11;

* p_j1 = 0 and x_j11 = 0 for all j ∈ {1, 2,..., J}\{j*}.

Figure 3-10: A bipartite network with many facilities and one retailer.


Proposition 3 The DSSP finds the optimal solution in J+2 iterations, in the worst

case, for the single retailer problem.

Proof. In every iteration of the DSSP heuristic the following network flow

problem is solved (derived from problem (P1) for a single retailer):

minimize Σ_{j=1}^J (c̄_j1^(q−1) + c̄_j11^(q−1)) p_j1^(q)

subject to (P1')

Σ_{j=1}^J p_j1 = d_11,   (3.1)
p_j1 ≥ 0,  j = 1,..., J.   (3.2)

Although (P1') is uncapacitated, the heuristic requires an upper bound to initialize the linear cost coefficients. Therefore, let W_j1 and U_j11 be the upper bounds for the production and transportation arcs, respectively, such that W_j1 ≥ d_11 and U_j11 ≥ d_11. Hence, the initial cost coefficients are c̄_j1^(0) = r_j1(W_j1)/W_j1 and c̄_j11^(0) = f_j11(U_j11)/U_j11 for j = 1, 2,..., J.









Problem (P1') is a continuous knapsack problem. Therefore, at a given DSSP iteration q, the solution will be of the following form: p_j'1^(q) = d_11 and p_j1^(q) = 0, where (c̄_j'1^(q−1) + c̄_j'11^(q−1)) ≤ (c̄_j1^(q−1) + c̄_j11^(q−1)) for all j ∈ {1, 2,..., J}\{j'}. Since p_j'1^(q) is the only positive value, the cost coefficient of that arc will be updated and the others will not change. Thus, c̄_j'1^(q) = r_j'1(d_11)/d_11 and c̄_j'11^(q) = f_j'11(d_11)/d_11.

The heuristic terminates when the solutions found in two consecutive iterations are the same. However, this will not happen unless the solution found is the optimal one. In other words, at iteration q + 1, the solution will be p_j''1^(q+1) = d_11 (all other variables are zero) and the procedure will stop if j'' = j'. Note that j'' = j' only if j' = j*, where j* is the facility used in the optimal solution to (P1) with a single retailer. Assume j' ≠ j*. Also assume, without loss of generality, that the optimal solution is unique, so that (r_j*1(d_11)/d_11 + f_j*11(d_11)/d_11) < (r_j1(d_11)/d_11 + f_j11(d_11)/d_11) for all j ∈ {1, 2,..., J}\{j*}. Due to the concavity of the cost functions, r_j1(W_j1)/W_j1 ≤ r_j1(d_11)/d_11 and f_j11(U_j11)/U_j11 ≤ f_j11(d_11)/d_11 (remember that r_j1(0) = f_j11(0) = 0) for j = 1, 2,..., J. Therefore, at iteration q + 1, c̄_j'1^(q) + c̄_j'11^(q) = r_j'1(d_11)/d_11 + f_j'11(d_11)/d_11 > r_j*1(d_11)/d_11 + f_j*11(d_11)/d_11 ≥ c̄_j*1^(q) + c̄_j*11^(q). Hence, p_j'1^(q+1) = 0 and j' ≠ j''.

The heuristic will visit each facility at most once, and facility j* will be visited at most three times (twice in the last two iterations and possibly once during the previous iterations). The total number of iterations, therefore, is J + 2 in the worst case. If W_j1 = U_j11 = d_11 for j = 1, 2,..., J, then the optimal solution will be found in only 2 iterations. □
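Proposition 3 is easy to illustrate numerically. The sketch below runs the DSSP iterations on a hypothetical single-retailer instance with concave production costs r_j(x) = a_j·sqrt(x) and, for brevity, zero distribution costs; the coefficients and bounds are invented for the example:

```python
# Hedged numeric illustration of Proposition 3: DSSP on a single-retailer
# instance with concave production costs r_j(x) = a_j * sqrt(x) converges
# to the optimal facility in at most J + 2 iterations.
import math

a = [1.0, 1.05, 3.0]          # hypothetical concavity coefficients
d, W = 10.0, 20.0             # demand and a common production upper bound

c = [aj * math.sqrt(W) / W for aj in a]   # initial slopes r_j(W)/W
prev, iters = None, 0
while True:
    iters += 1
    j = min(range(len(a)), key=lambda i: c[i])   # continuous knapsack solution
    if j == prev:
        break                                    # two equal consecutive solutions
    c[j] = a[j] * math.sqrt(d) / d               # update only the used arc
    prev = j

print(j == min(range(len(a)), key=lambda i: a[i]), iters <= len(a) + 2)
# True True: DSSP ends at the optimal facility within J + 2 iterations
```

In this run the heuristic first picks facility 0, its slope rises after re-linearization, facility 1 is tried and rejected, and the loop settles back on facility 0, which is indeed optimal.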

Two facilities and two retailers. When there are only two facilities and two

retailers the problem is obviously easy since there are only a few feasible solutions

to consider (Figure 3-11). If there are no production and transportation capacities

then the two retailers are not competing for capacity and they can act independently.

This indicates that the problem decomposes into two single-retailer problems both of

which can be solved to optimality using the DSSP. In general, if there are J facilities,










K retailers, T periods, and no capacity constraints, then the problem can be solved

to optimality by solving KT single-retailer problems (with the additional condition

that only distribution costs are nonlinear).








Figure 3-11: A bipartite network with two facilities and two retailers.


If, however, the facilities have production capacities then the DSSP may fail to

find the optimal solution. We will illustrate this by creating an example for which

DSSP fails to find the optimal solution. We assume production and distribution costs

are fixed charge rather than general concave functions to simplify the analysis.

The MILP formulation for the problem in Figure 3-11 with fixed charge costs

can be given as (derived from problem (P2) in section 2.4.1)


minimize c_11 p_11 + c_21 p_21 + s_11 y_11 + s_21 y_21
    + c_111 x_111 + c_121 x_121 + c_211 x_211 + c_221 x_221
    + s_111 y_111 + s_121 y_121 + s_211 y_211 + s_221 y_221

subject to (P2')

p_11 = x_111 + x_121,   (3.3)
p_21 = x_211 + x_221,   (3.4)
x_111 + x_211 = d_11,   (3.5)
x_121 + x_221 = d_21,   (3.6)
p_11 ≤ W_11 y_11,   (3.7)
p_21 ≤ W_21 y_21,   (3.8)
x_111 ≤ U_111 y_111,   (3.9)
x_121 ≤ U_121 y_121,   (3.10)
x_211 ≤ U_211 y_211,   (3.11)
x_221 ≤ U_221 y_221,   (3.12)
y_j1, y_jk1 ∈ {0, 1},  j = 1, 2; k = 1, 2,   (3.13)
p_j1, x_jk1 ≥ 0,  j = 1, 2; k = 1, 2.   (3.14)

We want capacities on the production arcs, but to have a feasible problem we must have W_11 + W_21 ≥ d_11 + d_21. Letting W_11 + W_21 = d_11 + d_21 forces the following conditions: p_11 = W_11, p_21 = W_21, y_11 = 1, and y_21 = 1. Under these conditions, the only decision variables left in the problem are the distribution amounts from the facilities to the retailers. If (W_11 = W_21 = d_11 = d_21) or (W_11 = d_11, W_21 = d_21, and d_11 < d_21), then the DSSP finds the optimal solution in only two iterations. This can easily be shown by following an analysis similar to the one given below for the case where W_11 < d_11 < d_21 < W_21.

The distribution arcs are uncapacitated, so the upper bounds on these arcs can be set equal to the minimum of the demand and the capacity of the corresponding production arc, i.e., U_jkt = min{W_jt, d_kt}. This leads to U_111 = W_11, U_121 = W_11, U_211 = d_11, and U_221 = d_21. Since W_11 < d_11 < d_21 < W_21, the capacity of facility 1 is not enough to satisfy the demand of either of the retailers. Therefore, both of the distribution arcs from facility 2 must be used, i.e., y_211 = y_221 = 1. This leaves us with only one decision: which of the two distribution arcs from facility 1 should be used?

After making all the above substitutions, (P2') can be rewritten as

minimize c_111 x_111 + c_121 x_121 + c_211 x_211 + c_221 x_221 + s_111 y_111 + s_121(1 − y_111)

subject to (P2'')

x_111 + x_121 = W_11,   (3.15)
x_211 + x_221 = W_21,   (3.16)
x_111 + x_211 = d_11,   (3.17)
x_121 + x_221 = d_21,   (3.18)
x_111 ≤ W_11 y_111,   (3.19)
x_121 ≤ W_11(1 − y_111),   (3.20)
x_211 ≤ d_11,   (3.21)
x_221 ≤ d_21,   (3.22)
y_111 ∈ {0, 1},   (3.23)
x_jk1 ≥ 0,  j = 1, 2; k = 1, 2.   (3.24)

Note that the production cost is now a constant, since p_11 = W_11 and p_21 = W_21. Thus, it is not included in the objective function. The problem formulation can be simplified further by taking advantage of the equality constraints:

minimize (c_111 W_11 − c_121 W_11 − c_211 W_11 + c_221 W_11 + s_111 − s_121) y_111

subject to (P2''')

y_111 ∈ {0, 1}.   (3.25)

The optimal solution to (P2''') is y_111 = 1 if (c_111 W_11 − c_121 W_11 − c_211 W_11 + c_221 W_11 + s_111 − s_121) < 0 and y_111 = 0 otherwise. Assume (c_111 W_11 − c_121 W_11 − c_211 W_11 + c_221 W_11 + s_111 − s_121) > 0, so that we have x_111 = 0, x_121 = W_11, x_211 = d_11, and x_221 = d_21 − W_11 as the optimal solution.







Under the assumptions and simplifications given before, the DSSP solves the following network flow problem at every iteration:

minimize (c̄_111^(q−1) − c̄_121^(q−1) − c̄_211^(q−1) + c̄_221^(q−1)) x_111^(q)

subject to (P1'')

x_111^(q) ≤ W_11,   (3.26)
x_111^(q) ≥ 0.   (3.27)

The initial linear cost coefficients are c̄_111^(0) = c_111 + s_111/W_11, c̄_121^(0) = c_121 + s_121/W_11, c̄_211^(0) = c_211 + s_211/d_11, and c̄_221^(0) = c_221 + s_221/d_21. Let (c_111 + s_111/W_11 − c_121 − s_121/W_11 − c_211 − s_211/d_11 + c_221 + s_221/d_21) < 0, so that the solution to (P1'') is x_111^(1) = W_11, x_121^(1) = 0, x_211^(1) = d_11 − W_11, and x_221^(1) = d_21. In the next iteration the linear cost coefficients do not change except for c̄_211, which becomes c̄_211^(1) = c_211 + s_211/(d_11 − W_11). Since s_211/(d_11 − W_11) > s_211/d_11, the objective function coefficient in (P1'') is still negative. Thus, the solution in the next iteration is the same, and DSSP terminates with a solution different from the optimal solution.
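A concrete instance of this failure is easy to construct. The sketch below uses hypothetical numbers satisfying W_11 < d_11 < d_21 < W_21 and W_11 + W_21 = d_11 + d_21, checks that the initial DSSP coefficient of x_111 is negative (so DSSP fixes y_111 = 1), and verifies that the y_111 = 0 solution is strictly cheaper:

```python
# Hedged numeric check of the counterexample: with W11 < d11 < d21 < W21 and
# suitably chosen fixed charges, the first DSSP linearization locks in
# y111 = 1 even though the optimal solution uses y111 = 0. All cost values
# below are invented for illustration.
W11, W21, d11, d21 = 4.0, 7.0, 5.0, 6.0          # W11 + W21 = d11 + d21
c = {a: 1.0 for a in ("111", "121", "211", "221")}
s = {"111": 10.0, "121": 8.0, "211": 10.0, "221": 6.0}

def total(y111):
    # flows implied by the choice of y111 under tight production capacities
    x = ({"111": W11, "121": 0.0, "211": d11 - W11, "221": d21} if y111
         else {"111": 0.0, "121": W11, "211": d11, "221": d21 - W11})
    return sum(c[a] * x[a] for a in x) + sum(s[a] for a in x if x[a] > 0)

# Initial DSSP coefficient of x111 in (P1''): negative, so DSSP sets y111 = 1
coeff = (c["111"] + s["111"] / W11) - (c["121"] + s["121"] / W11) \
        - (c["211"] + s["211"] / d11) + (c["221"] + s["221"] / d21)
print(coeff < 0, total(1) > total(0))  # True True: DSSP's choice costs more
```

Here the DSSP solution (y_111 = 1) has total cost 37 while the optimal solution (y_111 = 0) costs 35, matching the argument above.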

Multiple facilities and multiple retailers. If the problem has multiple facilities,

multiple retailers, and concave cost functions, then it falls under the category of

MCCNFP which is known to be NP-hard (section 2.3).


Figure 3-12: A bipartite network with multiple facilities and multiple retailers.










However, if (i) the distribution costs are concave, (ii) there are an equal number of

facilities and retailers, (iii) the demands at the retailers are all the same, and (iv) all

production capacities are equal to the constant demand, then the problem becomes

easier (Figure 3-12). In other words, if J = K and Wjl = dkl = d for j, k = 1, 2,..., J

the problem formulation is


minimize Σ_{j=1}^J r_j1(p_j1) + Σ_{j=1}^J Σ_{k=1}^J f_jk1(x_jk1)

subject to (P7)

p_j1 − Σ_{k=1}^J x_jk1 = 0,  j = 1,..., J,   (3.28)
Σ_{j=1}^J x_jk1 = d,  k = 1,..., J,   (3.29)
p_j1 ≤ d,  j = 1,..., J,   (3.30)
x_jk1 ≤ d,  j, k = 1,..., J,   (3.31)
p_j1, x_jk1 ≥ 0,  j, k = 1,..., J.   (3.32)

Due to the tight production capacities, the only feasible way of meeting the demand is for all facilities to produce at full capacity. This means that p_j1 = d for j = 1, 2,..., J, so the production variables can be dropped from the formulation and the problem reduces to

minimize Σ_{j=1}^J Σ_{k=1}^J f_jk1(x_jk1)

subject to (P7')

Σ_{k=1}^J x_jk1 = d,  j = 1,..., J,   (3.33)
Σ_{j=1}^J x_jk1 = d,  k = 1,..., J,   (3.34)
x_jk1 ≥ 0,  j, k = 1,..., J.   (3.35)










Now let z_jk1 = x_jk1/d for j, k = 1, 2,..., J. Substituting this into (P7') gives

minimize Σ_{j=1}^J Σ_{k=1}^J f_jk1(d z_jk1)

subject to (P7'')

Σ_{k=1}^J z_jk1 = 1,  j = 1,..., J,   (3.36)
Σ_{j=1}^J z_jk1 = 1,  k = 1,..., J,   (3.37)
z_jk1 ≥ 0,  j, k = 1,..., J.   (3.38)

Note that constraints (3.36) and (3.37), together with z_jk1 ∈ {0, 1}, are the constraints of the well-known assignment problem. If the transportation costs, f_jk1(·), are concave, then we will refer to (P7'') as the concave assignment problem.

Proposition 4 The DSSP finds the optimal solution in 2 iterations for a concave assignment problem.

Proof. If DSSP is used to solve (P7'), the following network flow problem is solved in every iteration:

minimize Σ_{j=1}^J Σ_{k=1}^J c̄_jk1 x_jk1

subject to (P8)

Σ_{k=1}^J x_jk1 = d,  j = 1,..., J,   (3.39)
Σ_{j=1}^J x_jk1 = d,  k = 1,..., J,   (3.40)
x_jk1 ≥ 0,  j, k = 1,..., J.   (3.41)

This is equivalent to solving the LP relaxation of a linear assignment problem. Therefore, the solution to (P8) gives the optimal solution to (P7'). The objective function coefficients in the next iteration of DSSP will be the same, since x_jk1^(q) = 0 or x_jk1^(q) = d for j, k = 1, 2,..., J. The same solution will be found in the second iteration, and DSSP will terminate with the optimal solution. □
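The key observation behind Proposition 4 is that the linearization c̄_jk1 = f_jk1(d)/d scales every assignment's cost by the same factor 1/d, so the linear and concave objectives rank assignments identically. A small brute-force check, with hypothetical concave costs f_jk(x) = a_jk·sqrt(x):

```python
# Hedged illustration: under the DSSP slopes c_jk = f_jk(d)/d, the best
# assignment for the linearized costs coincides with the best assignment
# for the concave costs, since c_jk * d = f_jk(d) exactly.
import math
from itertools import permutations

a = [[1.0, 4.0, 2.0], [3.0, 1.0, 5.0], [2.0, 3.0, 1.0]]   # hypothetical
d = 9.0

def f(j, k, x):
    return a[j][k] * math.sqrt(x)        # concave transportation cost

c = [[f(j, k, d) / d for k in range(3)] for j in range(3)]  # DSSP slopes

def best(cost):
    # brute force over all assignments (permutations of retailers)
    return min(permutations(range(3)),
               key=lambda p: sum(cost(j, p[j]) for j in range(3)))

concave_opt = best(lambda j, k: f(j, k, d))
linear_opt = best(lambda j, k: c[j][k] * d)
print(concave_opt == linear_opt)  # True: identical optimal assignment
```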

3.4 Greedy Randomized Adaptive Search Procedure (GRASP)

The Greedy Randomized Adaptive Search Procedure is an iterative process that

provides a feasible solution at every iteration for combinatorial optimization problems

(Feo and Resende [20]). GRASP is usually implemented as a multi-start procedure

where each iteration consists of a construction phase and a local search phase. In

the construction phase, a randomized greedy function is used to build up an initial

solution. This solution is then used for improvement attempts in the local search

phase. This iterative process is performed for a fixed number of iterations and the

final result is simply the best solution found over all iterations. The number of

iterations is one of the two parameters to be tuned. The other parameter is the size

of the candidate list used in the construction phase.

GRASP has been applied successfully for a wide range of operations research

and industry problems such as scheduling, routing, logic, partitioning, location, assignment, manufacturing, transportation, and telecommunications. Festa and Resende [21] give an extended bibliography of the GRASP literature. GRASP has been implemented in various ways, with modifications to enhance its performance.

Here, we develop a GRASP for the PID problem. In the following, we discuss

generation of initial solutions. We give two different construction procedures. The

second construction procedure, called the modified construction phase, is developed

to improve the performance of the heuristic procedure.

3.4.1 Construction Phase

In the construction phase a feasible solution is constructed step by step, utilizing

some problem-specific properties. Since our problems are uncapacitated concave cost network flow problems, the optimal solution will be a tree in the network. Therefore, we construct solutions in which each retailer is supplied by a single









facility. The construction phase starts by connecting one of the retailers to one of the

facilities. The procedure finds the facilities that give the lowest per unit production

and transportation cost for a retailer, taking into account the effect that already

connected retailers have on the solution.

The unit cost, θ_j, of assigning facility j to retailer k is f_jkt(d_kt)/d_kt + r_jt(p_jt + d_kt)/(p_jt + d_kt). The cheapest connections for this retailer are then put into a restricted

candidate list (RCL) and one of the facilities from RCL is selected randomly. The size

of the RCL is one of the parameters that requires tuning. Hart and Shogan [36] and

Feo and Resende [19] propose a cardinality-based scheme and a value-based scheme

to build an RCL. In the cardinality-based scheme, a fixed number of candidates are

placed in the RCL. In the value-based scheme, all candidates with greedy function

values within (100·α)% of the best candidate are placed in the RCL, where α ∈ [0, 1].

We use the value-based scheme and a candidate facility is added to the RCL if its

cost is no more than a multiple of the cheapest one. Finally, when the chosen facility

is connected to the retailer, the flows on the arcs are updated accordingly. The

procedure is given in Figure 3-13. When α in Figure 3-13 is zero, the procedure is totally randomized and the greedy function value θ_j is irrelevant. However, when α is equal to 1, the procedure is a pure greedy heuristic without any randomization.

Holmqvist et al. [39] develop a similar GRASP for single source uncapacitated

MCCNFP. Their problems are different from ours because they use different cost

functions and network structures. Also, our approach differs from theirs in the greedy

function used to initialize the RCL. They use the actual cost, whereas we check the

unit cost of connecting retailers to one of the facilities. Due to the presence of fixed

costs, the unit-cost approach performed better: we tested both approaches, and ours consistently gave better results on our test problems. The results are presented in Chapter 4.










procedure Construct
    for (t = 1, ..., T) do
        p_jt := 0, j = 1, ..., J
        x_jkt := 0, j = 1, ..., J; k = 1, ..., K
        for (k = 1, ..., K) do
            θ_j := f_{jkt}(d_{kt})/d_{kt} + r_{jt}(p_{jt} + d_{kt})/(p_{jt} + d_{kt}), j = 1, ..., J
            θ̄ := min{θ_j : j = 1, ..., J}
            RCL := {j : θ_j ≤ θ̄/α}
            Select l at random from RCL
            p_lt := p_lt + d_kt
            x_lkt := d_kt
        end for k
    end for t
    return the current solution, S
end procedure

Figure 3-13: The construction procedure.
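The RCL step of Figure 3-13 can be sketched in C, the language the dissertation's subroutines are written in. This is an illustrative sketch, not code from gr-pid.c: the function name, data layout, and the simple linear-congruential random choice are ours, and the membership rule θ_j ≤ θ̄/α reflects our reading of the procedure (α = 1 admits only the cheapest facility, i.e., pure greedy; α = 0 admits every facility, i.e., fully random).

```c
#include <stdlib.h>

/* One RCL selection step of the construction phase (Figure 3-13).
 * A facility j enters the restricted candidate list when its unit
 * cost theta[j] is no more than a multiple of the cheapest one:
 * theta[j] <= theta_min / alpha.  Names are illustrative, not
 * taken from gr-pid.c. */
int select_from_rcl(const double *theta, int J, double alpha,
                    unsigned long long *seed)
{
    int j, n_rcl = 0, pick;
    int *rcl = malloc(J * sizeof *rcl);
    double theta_min = theta[0];

    for (j = 1; j < J; j++)
        if (theta[j] < theta_min)
            theta_min = theta[j];

    for (j = 0; j < J; j++)
        if (alpha <= 0.0 || theta[j] <= theta_min / alpha)
            rcl[n_rcl++] = j;   /* alpha = 0: every facility enters */

    /* simple LCG step, then a uniform choice from the RCL */
    *seed = *seed * 6364136223846793005ULL + 1442695040888963407ULL;
    pick = rcl[(int)((*seed >> 33) % (unsigned long long)n_rcl)];
    free(rcl);
    return pick;
}
```

After the chosen facility is returned, the caller updates p_lt and x_lkt exactly as in the figure.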


The construction procedure in Figure 3-13 handles each period independently.

In other words, the initial solution created does not carry any inventory and demands

are met through production within the same period. A second approach is to allow

demand to be met by production from previous periods. The second approach led to

better initial solutions, but the final solutions after local search were worse compared

to the ones we got from the first approach. One explanation for this is that the initial

solutions in the second approach are likely already local optima. In the first approach,

however, we do not start with a very good solution in most cases but the local search

phase improves the solution. The local search approach which uses the construction

procedure described in this section will be referred to as LS-GRASP from here on.

3.4.2 Modified Construction Phase

In a multi-start procedure, if initial solutions are uniformly generated from the

feasible region, then the procedure will eventually find a global optimal solution









(provided that it is started enough times). If initial solutions are generated using

a greedy function, they may not be in the region of attraction of a global optimum

because the amount of variability in these solutions is typically smaller. Thus, the

local search will terminate with a suboptimal solution in most cases. GRASP tries

to find a balance between these two extremes by generating solutions using a greedy

function and at the same time adding some variability to the process. After the first

set of results we obtained by using the construction procedure explained in section

3.4.1, we wanted to modify our approach to diversify the initial solutions and increase

their variability.

In order to diversify the initial solutions we use the same greedy function but

we allow candidates with worse greedy values to enter the RCL. The modified

construction procedure is similar to the procedure given in Figure 3-13. The only

difference is in the way the RCL is constructed. The RCL in the modified construction

procedure is defined as

RCL := {j : θ_j ≤ θ̲ + Δθ},

where θ̲ = min{θ_j : j = 1, ..., J}, Δθ = (θ̄ − θ̲)α, θ̄ = max{θ_j : j = 1, ..., J}, and 0 ≤ α ≤ 1.

After a fixed number of iterations θ̲ is updated (θ̲ := θ̲ + Δθ) so that the procedure allows worse solutions to enter the RCL. Note that with this modification the procedure randomly chooses the next candidate from a set of second-best candidates. This is done until θ̲ > θ̄, at which point θ̲ is set back to min{θ_j : j = 1, ..., J}. In the rest of the dissertation the term LSM-GRASP will be used to denote the local search approach with the modified construction procedure.
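Under our reading of the partly garbled update rule above, the shifting of the RCL acceptance threshold can be written as a small C helper. The name and signature are hypothetical, not from the distribution.

```c
/* Advance the RCL acceptance threshold of the modified construction
 * phase (section 3.4.2): raise it by dtheta, and once it passes
 * theta_max reset it to theta_min.  Our sketch of the update rule. */
double next_threshold(double t_low, double dtheta,
                      double theta_min, double theta_max)
{
    t_low += dtheta;
    return (t_low > theta_max) ? theta_min : t_low;
}
```

Calling this every fixed number of constructions lets progressively worse candidates enter the RCL before the threshold wraps around to the greedy value again.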

3.5 Lower Bounds

The local search procedures presented above give suboptimal but feasible

solutions to the PID problem. These feasible solutions are upper bounds to our

minimization problem. To evaluate the quality of the heuristics, when the optimal









solution value is unavailable, we need a lower bound. It is as important to get

good lower bounds as it is to get good upper bounds. For the fixed charge case,

the LP relaxation of (P2), unfortunately, gives poor lower bounds, particularly for

problems with large fixed charges. Therefore, we reformulated the problem, which

led to an increase in problem size but the LP relaxation of the new formulation

gave tighter lower bounds. For the piecewise-linear concave case we reported lower

bounds obtained from CPLEX in Chapter 4. The MILP formulation is run for a

certain amount of time and CPLEX may fail to find even a feasible solution by that

time, but a lower bound is available. For problems with piecewise-linear and concave

costs, we have also obtained lower bounds using the procedure given below. These

problems were first transformed into fixed charge network flow problems using the

node separation procedure (section 2.4.2). Once they are fixed charge type problems

the procedure provided below can be used to find lower bounds.

To reformulate the problem with fixed charge costs we define new variables which

help us decompose the amount transported from a facility to a retailer by origin.

Figure 3-14 gives a network representation for the new formulation.

Let y_{jτkt} denote the fraction of the demand of retailer k in period t, d_{kt}, satisfied by facility j through production in period τ. We can now rewrite x_{jkt}, I_{jt}, and p_{jt} in terms of y_{jτkt} as

x_{jkt} = \sum_{\tau=1}^{t} y_{j\tau kt} d_{kt},    (3.42)

I_{jt} = \sum_{k=1}^{K} \sum_{\tau=1}^{t} \sum_{l=t+1}^{T} y_{j\tau kl} d_{kl},    (3.43)

p_{jt} = \sum_{k=1}^{K} \sum_{\tau=t}^{T} y_{jtk\tau} d_{k\tau}.    (3.44)

Equation (3.42) simply indicates that the amount shipped from facility j to retailer

k during period t is equal to part of the demand, dkt, satisfied from production at

facility j in periods 1 through t. Equation (3.43) denotes that the amount of inventory











[Figure: time-expanded network with facility nodes F_{j,t} and retailer nodes R_{k,t}, connected by production arcs, inventory and transportation arcs, and transportation arcs.]

Figure 3-14: Extended network representation.

carried at facility j from period t to t+1 is equal to the total demand of all retailers in

periods t + 1 to T satisfied by facility j through production in periods 1 to t. Finally,

equation (3.44) indicates that the amount of production at facility j during period t

is equal to the amount of demand met from this production for all retailers in periods

t through T.
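The three identities can be checked numerically. The C sketch below is our own illustration (one facility, one retailer, T = 2): it builds x, I, and p from a small set of y values and lets us verify the flow-balance relation I_{jt} = I_{j,t−1} + p_{jt} − Σ_k x_{jkt} that the original variables must satisfy.

```c
/* Rebuild x, I, p from the y decomposition (3.42)-(3.44) on a toy
 * instance with one facility and one retailer.  y[tau][t] is the
 * fraction of demand d[t] met by production in period tau (tau <= t);
 * indices are 1-based, row/column 0 unused.  Illustrative code only. */
#define T 2

void decompose(double y[T + 1][T + 1], double d[T + 1],
               double x[T + 1], double I[T + 1], double p[T + 1])
{
    int t, tau, l;
    for (t = 1; t <= T; t++) {
        x[t] = I[t] = p[t] = 0.0;
        for (tau = 1; tau <= t; tau++)          /* (3.42): shipments  */
            x[t] += y[tau][t] * d[t];
        for (tau = 1; tau <= t; tau++)          /* (3.43): inventory  */
            for (l = t + 1; l <= T; l++)
                I[t] += y[tau][l] * d[l];
        for (tau = t; tau <= T; tau++)          /* (3.44): production */
            p[t] += y[t][tau] * d[tau];
    }
}
```

With d = (10, 20) and half of period 2's demand produced in period 1, the reconstructed quantities satisfy flow balance at the facility in both periods.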

Substituting (3.42), (3.43) and (3.44) into (P2) will give the following new

formulation:

minimize \sum_{j=1}^{J} \sum_{t=1}^{T} s_{jt} y_{jt} + \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{t=1}^{T} s_{jkt} y_{jkt} + \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{t=1}^{T} \sum_{\tau=1}^{t} v_{j\tau kt} y_{j\tau kt}

subject to    (P9)

\sum_{j=1}^{J} \sum_{\tau=1}^{t} y_{j\tau kt} = 1,    k = 1, ..., K; t = 1, ..., T,    (3.45)

\sum_{k=1}^{K} \sum_{\tau=t}^{T} y_{jtk\tau} d_{k\tau} \le W_{jt} y_{jt},    j = 1, ..., J; t = 1, ..., T,    (3.46)

\sum_{k=1}^{K} \sum_{\tau=1}^{t} \sum_{l=t+1}^{T} y_{j\tau kl} d_{kl} \le V_{jt},    j = 1, ..., J; t = 1, ..., T,    (3.47)

\sum_{\tau=1}^{t} y_{j\tau kt} d_{kt} \le U_{jkt} y_{jkt},    j = 1, ..., J; k = 1, ..., K; t = 1, ..., T,    (3.48)

y_{j\tau kt} \le 1,    j = 1, ..., J; k = 1, ..., K; t = 1, ..., T; \tau = 1, ..., t,    (3.49)

y_{j\tau kt} \ge 0,    j = 1, ..., J; k = 1, ..., K; t = 1, ..., T; \tau = 1, ..., t,    (3.50)

y_{jt}, y_{jkt} \in \{0, 1\},    j = 1, ..., J; k = 1, ..., K; t = 1, ..., T.    (3.51)


In the above formulation it can easily be shown that v_{j\tau kt} = (c_{j\tau} + c_{jkt} + \sum_{l=\tau}^{t-1} h_{jl}) d_{kt}.

This indicates that vjrkt is the total variable cost (including production, inventory,

and distribution) if the demand of retailer k at time t is met through production

at facility j during period τ. Moreover, for uncapacitated problems the upper bounds

Wjt, Vjt, and Ujkt can be set equal to


W_{jt} = \sum_{k=1}^{K} \sum_{\tau=t}^{T} d_{k\tau},    (3.52)

V_{jt} = \sum_{k=1}^{K} \sum_{\tau=t+1}^{T} d_{k\tau},    (3.53)

U_{jkt} = d_{kt}.    (3.54)
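Since the problems are uncapacitated, the bounds (3.52)–(3.54) depend only on the demand data; U_{jkt} is simply d_{kt}. The C sketch below (array sizes and helper names are ours, for illustration) computes the other two on a tiny two-retailer, two-period instance.

```c
/* Uncapacitated upper bounds (3.52)-(3.53) computed from demands.
 * d[k][t] holds d_kt for k = 0..KD-1 and t = 1..TP (column 0 unused).
 * Sizes and names are illustrative. */
#define KD 2    /* retailers */
#define TP 2    /* periods   */

double bound_W(double d[KD][TP + 1], int t)      /* (3.52) */
{
    int k, tau;
    double w = 0.0;
    for (k = 0; k < KD; k++)
        for (tau = t; tau <= TP; tau++)
            w += d[k][tau];
    return w;
}

double bound_V(double d[KD][TP + 1], int t)      /* (3.53) */
{
    int k, tau;
    double v = 0.0;
    for (k = 0; k < KD; k++)
        for (tau = t + 1; tau <= TP; tau++)
            v += d[k][tau];
    return v;
}
```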


Substituting (3.52), (3.53), and (3.54) into (3.46), (3.47), and (3.48), respectively,

will give the following constraints:

\sum_{k=1}^{K} \sum_{\tau=t}^{T} y_{jtk\tau} d_{k\tau} \le \left( \sum_{k=1}^{K} \sum_{\tau=t}^{T} d_{k\tau} \right) y_{jt},    (3.55)

\sum_{k=1}^{K} \sum_{l=t+1}^{T} d_{kl} \left( \sum_{\tau=1}^{t} y_{j\tau kl} \right) \le \sum_{k=1}^{K} \sum_{l=t+1}^{T} d_{kl},    (3.56)

\sum_{\tau=1}^{t} y_{j\tau kt} \le y_{jkt}.    (3.57)










Constraints (3.55), (3.56), and (3.57) can, respectively, be replaced by the following

set of stronger constraints:


y_{jtk\tau} \le y_{jt},    (3.58)

\sum_{\tau=1}^{t} y_{j\tau kl} \le 1,    (3.59)

\sum_{\tau=1}^{t} y_{j\tau kt} \le y_{jkt}.    (3.60)

It is easy to show that (3.58), (3.59), and (3.60) imply (3.55), (3.56), and (3.57),

respectively. Moreover, (3.58) and (3.59) are clearly valid. The last constraint (3.60)

is based on the property of an optimal solution. In an optimal solution each demand

point will be satisfied by only one facility. Therefore, we can force y_{jτkt} to be binary,

which indicates that (3.60) is also valid. Using the new constraints we arrive at the

following formulation after some simplification:

minimize \sum_{j=1}^{J} \sum_{t=1}^{T} s_{jt} y_{jt} + \sum_{j=1}^{J} \sum_{k=1}^{K} \sum_{t=1}^{T} \sum_{\tau=1}^{t} \bar{v}_{j\tau kt} y_{j\tau kt}

subject to    (P10)

\sum_{j=1}^{J} \sum_{\tau=1}^{t} y_{j\tau kt} = 1,    k = 1, ..., K; t = 1, ..., T,    (3.61)

y_{j\tau kt} \le y_{j\tau},    j = 1, ..., J; k = 1, ..., K; t = 1, ..., T; \tau = 1, ..., t,    (3.62)

y_{jt}, y_{j\tau kt} \in \{0, 1\},    j = 1, ..., J; k = 1, ..., K; t = 1, ..., T; \tau = 1, ..., t.    (3.63)


In the above formulation \bar{v}_{j\tau kt} = (c_{j\tau} + c_{jkt} + \sum_{l=\tau}^{t-1} h_{jl}) d_{kt} + s_{jkt}.

Proposition 5 For uncapacitated PID problems with fixed charge costs, the optimal

cost of the LP relaxation of (P10) is greater than or equal to the optimal cost of the

LP relaxation of (P2).








Proof. Let (y_{jt}, y_{jτkt}) be a feasible solution to (P10). Multiplying the constraints (3.62) by d_{kt} and summing them over k and t leads to (3.55). Constraints (3.56) can be reached in a similar way. Substituting the y_{jτkt} values into (3.60), (3.42), (3.43), and (3.44) gives a feasible solution to (P2). It follows that every solution to the LP relaxation of (P10) gives rise to a feasible solution of the LP relaxation of (P2). □















CHAPTER 4
SUBROUTINES AND COMPUTATIONAL EXPERIMENTS

4.1 Design and Implementation of the Subroutines

One of the major components of this dissertation is the extensive amount of computational results. Therefore, we wanted to make our subroutines available to other

researchers. The subroutines execute the heuristic approaches presented in Chapter

3. A set of files, referred to as the distribution, is available upon request. The

distribution includes the subroutines as well as some sample input and output files.

The following guidelines are used in the implementation of the subroutines. The

codes are written in standard C and are intended to run without modification on

UNIX platforms. For simplicity, only the subroutines for PID problems with fixed

charge costs are presented in this section and the following section. The subroutines

gr-pid.c and ds-pid.c (Table 4-1) apply local search with GRASP and local search with

DSSP, respectively. The optimizer in gr-pid.c is a self-contained set of subroutines.

However, ds-pid.c uses the callable libraries of CPLEX 7.0 to solve the linear network

flow problems. Input and output, as well as array declarations and parameter settings

are done independently, outside of the optimizer modules. The distribution consists

of eight files which are listed in Table 4-1. The files gr-pid.c and ds-pid.c are the core

of the package.


Table 4-1: The distribution.
subroutines gr-pid.c ds-pid.c
sample input gr-param.dat ds-param.dat
sample data sample.dat
sample output sample-gr.out sample-ds.out
instructions README.txt









Subroutine gr-pid.c calls two subroutines, TIME and free-and-null. Subroutine

TIME is used to measure the CPU time of the algorithm and subroutine free-and-null

is used to nullify the pointers created during the execution of the code. Subroutine

gr-pid.c takes the following as input from file gr-param.dat:

the maximum number of iterations (max-iteration),

a random seed (seed),

the GRASP parameter a (alpha),

a binary variable which denotes the format of the output (out),

the file name that includes the input data (sample.dat), and

the output filename (sample-gr.out).

After the parameters are read the following data are taken from the input file:

the number of plants,

the number of retailers,

the number of time periods,

the demands at the retailers, and

the related cost values.

Before the GRASP iterations are executed, the value of the best solution found

is initialized to a large value. The while loop implements the construction phase of

the GRASP (section 3.4.1). Each solution constructed is kept in a linked list called

starts. The list keeps the solutions in increasing order of total cost and each time

a new solution is created it is added to the list if it is different from the existing

solutions.
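A minimal sketch of such a cost-ordered list with duplicate rejection is shown below. This is our own illustration, not the actual code of gr-pid.c: the real starts list stores full solutions and compares them structurally, while here a solution is represented by its total cost alone.

```c
#include <stdlib.h>

/* Cost-ordered linked list of constructed solutions, in the spirit of
 * the `starts` list described above.  Insertion keeps the list sorted
 * by increasing cost and skips exact duplicates.  Illustrative sketch,
 * not code from gr-pid.c. */
struct node { double cost; struct node *next; };

struct node *insert_sorted(struct node *head, double cost)
{
    struct node **p = &head, *n;
    while (*p && (*p)->cost < cost)     /* find the insertion point */
        p = &(*p)->next;
    if (*p && (*p)->cost == cost)       /* already present: skip    */
        return head;
    n = malloc(sizeof *n);
    n->cost = cost;
    n->next = *p;
    *p = n;
    return head;
}
```

The pointer-to-pointer walk lets the same code handle insertion at the head and in the middle of the list.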

In the do loop following the construction phase, a local search (section 3.2.3) is

carried out in the neighborhood of each solution in the starts list. Each time a locally

optimal solution is found, it is added to a list called locals. The solutions in the list

are in increasing order with respect to their total costs. After the local search phase

is completed, an output file (e.g. Figure 4-3) is created which includes the following:









the total CPU time,

the number of plants,

the number of retailers,

the number of periods,

total number of iterations,

the initial seed,

minimum cost before and after local search, and

the actual amounts of flow on the arcs.

4.2 Usage of the Subroutines

The subroutines in files gr-pid.c and ds-pid.c compute an approximate solution to

a production-inventory-distribution problem using GRASP and DSSP, respectively.

The variables needed as input for gr-pid.c are as follows:

max-iteration: This is the maximum number of GRASP iterations. It is an
integer such that max-iteration>0. The default value for max-iteration is 128.

seed: This is an integer in the interval [0, 231- 1]. It is the initial seed for the
pseudo random number generator. The default for seed is 1.

alpha: This is the restricted candidate list parameter a, whose value is between
0 and 1 inclusive. It can be set to a value less than 0 or greater than 1 to
indicate that each GRASP iteration uses a different randomly generated value.
The default alpha value is -1.

out: This is a binary variable which indicates the format of the output file. If it
is equal to 1, then all production, inventory, and distribution values are printed
in the output file. If out is equal to 0, then only the cost values are printed in
the output file. The default is 0.

in-file: This is a string that gives the name of the data file.

out-file: This variable is also a string which gives the name of the file where the
output should be written.









8
1
-1
1
sample.dat
sample-gr.out

Figure 4-1: Input file (gr-param.dat) for gr-pid.c.


The input variables for ds-pid.c are similar. The only variables needed are seed,

out, in-file, and out-file. All of the above variables are provided in an input file. An

example of an input file for gr-pid.c is given in Figure 4-1.

The subroutines gr-pid.c and ds-pid.c first read the variables from the input files

gr-param.dat and ds-param.dat, respectively. This is followed by reading in the data.

The data for subroutines gr-pid.c and ds-pid.c include the following:

the number of production facilities in the network (plants),

the number of demand points in the network (retailers),

the planning horizon (periods),

amount of demand at each retailer,

variable production and distribution costs,

fixed production and distribution costs, and

unit inventory costs.

This data is provided in an in-file in the order given above. Figure 4-2 gives the

format of a sample in-file. The data is read into the following arrays:

B: The size of this array is m, where m is the total number of nodes in the supply chain network (m = periods × (plants + retailers) + 1). It is an array of real numbers that contains the demand information for the retailers. The first element of the array is equal to the total demand for all retailers over the planning horizon. In other words, this is the total supply. Note that the facilities do not have any demand.

obj-c: The unit production, inventory, and distribution costs are stored in this array whose size is n, where n is the total number of arcs in the network (n = plants × periods × (2 + retailers) − plants).









2 2 2
19.97751
35.62797
8.669039
29.90926
8.788289 19.680712
12.93113 10.667306
7.882008 14.782811
7.671567 19.095344
9.043183 13.516924
6.522605 19.325337
2.036276 16.544358
1.934919 10.210702
9.043183 15.122049
6.522605 12.020189
2.036276 19.399770
1.934919 12.040823
1.912763
1.942124

Figure 4-2: Sample data file (sample.dat) for gr-pid.c and ds-pid.c.


obj-s: This array contains the fixed production and distribution costs. This is
an array of real numbers and its size is also n.
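The two size formulas can be checked against Table 4-2. The helper functions below (names are ours, for illustration) compute m and n; for problem 1 in that table (25 facilities, 400 retailers, 1 period) the arc formula reproduces the listed 10,025 arcs.

```c
/* Node and arc counts of the PID network, as described for the B,
 * obj-c, and obj-s arrays.  Helper names are illustrative. */
int num_nodes(int plants, int retailers, int periods)
{
    /* +1 for the single supply node carrying the total demand */
    return periods * (plants + retailers) + 1;
}

int num_arcs(int plants, int retailers, int periods)
{
    return plants * periods * (2 + retailers) - plants;
}
```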

Once all parameters that control the execution of the algorithm are set, the

optimization module starts. For GRASP, the optimization module is the iterative

construction and local search phases. For DSSP, the callable libraries of CPLEX

7.0 are used to solve the network flow problem at each iteration. The problem is

solved with a new set of linear costs each time as described in section 3.3. After the

optimization is successfully completed an output file is created as shown in Figure

4-3. If the optimization is not successful, an error message is returned.

4.3 Experimental Data

The primary goal of our computational experiments is to evaluate the solution

procedures developed and presented in Chapter 3. For a procedure that finds an

optimal solution, an important characteristic is the computational effort needed. For

a heuristic procedure, one is often interested in evaluating how close the solution value






















GRASP for Production-Inventory-Transportation Problem

CPU time 0.00
Number of plants 2
Number of retailers 2
Number of periods 2
Number of iterations 8
Initial seed 1

Minimum cost before local search 1288.10
Minimum cost after local search 1288.10

Production Period Plant Retailer
55.61 1 2
38.58 2 2
Inventory
Shipment
19.98 1 2 1
35.63 1 2 2
8.67 2 2 1
29.91 2 2 2
All other variables are zero.

Figure 4-3: Sample output (sample-gr.out) of gr-pid.c.









is to the optimal value. One method for evaluating these characteristics is running

experiments on real-world or library test problems. Real-world problems are useful

because they can accurately assess the practical usefulness of a solution procedure.

Library test problems are important because the properties of the problems and the

performance of other procedures are usually known for these problems. However, the

availability of data for either of these types of problem instances is limited. Moreover,

there may also be disadvantages of using real-world and library test problems, which

are briefly described later in section 4.5. Therefore, many research studies use

randomly generated problem instances as we do in this dissertation.

To test the behavior of the solution procedures developed for the PID problem we

primarily use randomly generated problems and some library test problems from the

literature. Unfortunately, we were not able to find library test problems that fit our

problem requirements and characteristics exactly. Therefore, we found other library

problems such as facility location problems and formulated them as PID problems.

Once they were formulated as PID problems, we tested the algorithms on these

problems.

4.4 Randomly Generated Problems

The behavior and performance of solution procedures are frequently tested on

a collection of problem instances. The conclusions drawn about these performance

characteristics strongly depend on the set of problem instances chosen. Therefore, it is

important that the test problems have certain characteristics to ensure the credibility

of the conclusions. Hall and Posner [33] gave suggestions on generating experimental

data which are applicable to most classes of deterministic optimization problems.

They developed specific proposals for the generation of several types of scheduling

data. They proposed variety, practical relevance, invariance, regularity, describability,

efficiency, and parsimony as some of the principles that should be considered when









generating test problems. We take these properties into account when generating our

data.

4.4.1 Problems with Fixed Charge Costs

The first set of problems we solved were PID problems with fixed charge

production and distribution costs. LS-DSSP and LS-GRASP were used to solve

these problems. For some of the problems LSM-GRASP was used. Problem size,

structure, and the cost values seem to affect the difficulty of the problems. Therefore,

we generated five groups of problems. Group 1 and Group 2 problems are single period

problems without inventory and Groups 3, 4, and 5 are multi-period problems. To

capture the effect of the structure of the network on the difficulty of the problems we

varied the number of facilities, retailers, and periods within each group. The problem

sizes are given in Table 4-2. Problems 1 through 5 are in Group 1, problems 6 to 10

are in Group 2, problems 11 to 15 are in Group 3, problems 16 to 20 are in Group 4,

and Group 5 consists of problems 21 to 25. A total of 1250 problems were solved.

It has been shown that the ratio of the total variable cost to the total fixed cost

is an important measure of the problem difficulty (see Hochbaum and Segev [38]).

Therefore, we randomly generated demands, fixed charges, and variable costs for

each test problem from different uniform distributions as given in Table 4-3. Note

that only the fixed charges are varied. For example, data set A corresponds to those

problems that have small fixed charges. The fixed charges increase as we go from

data set A to data set E. The distances between the facilities and the retailers are

used as the unit transportation costs (cjkt). For each facility and retailer we uniformly

generated x and y coordinates from a 10 by 10 grid and calculated the distances. The

values of the parameter α, which is used in the construction phase of LS-GRASP, were chosen from the interval (0, 1). Experiments indicated that better results were obtained for α in the interval [0.80, 0.85].










Table 4-2: Problem sizes for fixed charge networks.
No. of No. of No. of No. of
Group Problem Facilities Retailers Periods Arcs
1 25 400 1 10,025
2 50 400 1 20,050
1 3 75 400 1 30,075
4 100 400 1 40,100
5 125 400 1 50,125
6 125 200 1 25,125
7 125 250 1 31,375
2 8 125 300 1 37,625
9 125 350 1 43,875
10 125 400 1 50,125
11 10 70 20 14,390
12 15 70 20 21,585
3 13 20 70 20 28,780
14 25 70 20 35,975
15 30 70 20 43,170
16 30 60 5 9,270
17 30 60 10 18,570
4 18 30 60 15 27,870
19 30 60 20 37,170
20 30 60 25 46,470
21 30 50 20 31,170
22 30 55 20 34,170
5 23 30 60 20 37,170
24 30 65 20 40,170
25 30 70 20 43,170


For each problem size and for each data set we randomly generated 10 problem

instances. The algorithms are programmed in the C language, and CPLEX 7.0

callable libraries are used to solve the linear NFPs and the MILPs. The algorithms

are compiled and executed on an IBM computer with 2 Power3 PC processors, 200

Mhz CPUs each.

The errors found for problems using data set A were the smallest. This is

expected since the fixed charges for this set are small. As the fixed charge increases,

the linear approximation is not a close approximation anymore. Thus, the error

bounds for data set E were the highest. Nevertheless, all errors reported were less than










Table 4-3: Characteristics of test problems.
Data set sjt, sjkt cjt hjt dkt
A 10-20 5-15 1-3 5-55
B 50-100 5-15 1-3 5-55
C 100-200 5-15 1-3 5-55
D 200-400 5-15 1-3 5-55
E 1000-2000 5-15 1-3 5-55


9%. All errors reported here are with respect to the lower bounds unless otherwise

stated. The lower bounds are obtained from the LP relaxation of the extended MILP

formulation given in section 3.5. The errors are calculated as follows:

Error = 100 × (Upper Bound − Lower Bound) / Lower Bound (%).
Table 4-4 reports the number of times the gap between the upper and lower

bounds was zero for data set A (note that 50 problems were solved for each group).

The number of times the heuristic approaches found the optimal solution may,

however, be more than those numbers reported in the table since the errors are

calculated with respect to a lower bound. For data set B, the errors were still small

(all were under 1%). LS-DSSP found errors equal to zero for 42 of the 50 problems

in Group 1, for 36 of the problems in Group 2, and only for 1 problem in Group 3.

All errors in Groups 4 and 5 were greater than zero. The number of errors equal to

zero using LS-GRASP, however, was 12 for Group 1 problems, only 1 for Group 4

problems, and none for the other groups. As the fixed charges were increased the

errors also increased.

Table 4-4: Number of times the error was zero for data set A.
DSSP GR8 GR 16 GR 32 GR64 GR128
Group 1 50 26 28 29 32 33
Group 2 50 19 23 27 30 32
Group 3 42 24 25 26 29 30
Group 4 30 17 22 29 30 32
Group 5 29 15 18 20 22 23
GR 8: GRASP with 8 iterations,
GR 16: GRASP with 16 iterations, etc.









The maximum error encountered for data set A was below 0.1% for both heuristics.

The results indicate that LS-DSSP and LS-GRASP are both powerful heuristics for

these problems, but another reason for the good solutions is that the

problems in set A are relatively easy. We were actually able to find optimal solutions

from CPLEX for all problems in this set in a reasonable amount of time. On average,

LS-DSSP took less than a second to solve the largest problems in Groups 1 and 2.

LS-GRASP with 8 iterations took about 1.5 seconds for the same problems. Finding

lower bounds also took about a second, whereas it took more than 25 seconds to find

optimal solutions for each of these problems using CPLEX. The problems in Groups

3, 4, and 5 were relatively more difficult. Average times were 8, 10, 15, and 50 seconds

for LS-DSSP, LS-GRASP with 8 iterations, LP, and CPLEX, respectively. Optimal

solutions were found for all problems using data set B as well. However, for data set

C, optimal solutions were found for only Group 1, 2, and 3 problems and for some of

the problems in Groups 4, and 5. For data set D, optimal solutions were found for

some of the Group 1 and 2 problems. Finally, data set E was the most difficult set.

CPLEX was able to provide optimal solutions for only a few of the Group 1 and 2

problems and none for the other 3 groups.

From Figure 4-4 we can see that LS-DSSP gave better results for Group 1 and 2

problems and LS-GRASP gave slightly better results for problems in Groups 3, 4, and

5. Similar behavior was observed for the other data sets. The results also indicate

that LS-GRASP was more robust in the sense that the quality of the solutions was not

affected greatly by the network structure. The proposed local search also proved to be

powerful. For example, average errors for the problems in Figure 4-4-d without the

local search were roughly 2.9% for DSSP and about 11% for GRASP with 8 iterations.

If more iterations are performed, LS-GRASP finds better solutions. Figure 4-5

shows that the errors decrease as the number of iterations increases. However, the

computational time required by LS-GRASP increases linearly as more iterations are performed.

[Figure 4-4: Average errors for problems using data set D. Five panels plot the average error (%) of DSSP and of GRASP with 8, 32, and 128 iterations: (a) Group 1 versus the number of plants, (b) Group 2 versus the number of retailers, (c) Group 3 versus the number of plants, (d) Group 5 versus the number of retailers, (e) Group 4 versus the number of periods.]

Most of the improvement is achieved earlier in the search. Performing

more than 16 or 32 iterations does not seem to improve the solution quality by much.

This suggests that the construction phase provides little diversification. To overcome

this problem we modified our construction of solutions and tried to diversify the

solutions.

[Figure 4-5: Average error (%) versus the number of GRASP iterations (0 to 128) for Problems 4, 7, 15, 18, and 21.]

Figure 4-5: Number of GRASP iterations vs. solution quality.



With the modified construction procedure we try to generate solutions from

different parts of the feasible region. This is done by allowing the construction

of solutions that are not necessarily good. Our new criterion allows less desirable

candidates to enter the restricted candidate list. The modified construction procedure

was explained in section 3.4.2.

The best results were obtained for α in the interval [0.05, 0.20]. Since LS-GRASP

did relatively poorly for Group 1 and 2 problems, we present results in Table 4-5

that show the improvement achieved with the new implementation of LSM-GRASP.

To have a reasonable comparison, LSM-GRASP was run until 128 different solutions

were investigated. As we can see there is significant improvement over the LS-GRASP









approach. However, LS-DSSP still seems to be doing better for these problems and

also takes less time.

Table 4-5: Average errors for problems with data set E.
Error (%) CPU Time (seconds)
LS- LSM- LS- LSM-
Problem DSSP GRASP CPLEX DSSP GRASP CPLEX
1 0.02 3.86 0.82 0.32 4.14 4.55
2 0.17 4.11 1.95 0.78 8.05 8.81
3 0.24 3.55 1.72 1.14 11.47 13.12
4 0.54 4.15 2.02 1.75 15.03 17.26
5 0.56 3.86 2.00 2.85 19.08 22.17
6 2.83 2.91 2.45 2.12 9.19 10.69
7 1.75 3.43 2.18 2.17 11.79 13.57
8 1.23 4.03 2.29 2.04 14.04 16.40
9 0.86 3.58 2.04 2.32 16.28 19.03
10 0.56 3.86 2.00 2.92 19.01 22.17


Problems with fixed charge costs and production capacities. The second set

of problems we solved were PID problems with fixed charge costs and production

capacities. Table 4-6 gives the set of problems solved in this section. Note that

the local search procedures we have developed are for uncapacitated problems. For

example, the construction phase of LS-GRASP and LSM-GRASP make use of the

property of an optimal solution for uncapacitated problems, i.e., an optimal solution

is a tree on the network. The local search procedure for LS-DSSP also uses the same

property of an uncapacitated PID problem. DSSP, however, is applicable to both

capacitated and uncapacitated PID problems. Therefore, we use only DSSP to solve

the problems generated in this section.

As soon as capacities are introduced, the problems become much more difficult.

The difficulty of the problems depends on how tight the capacities are. In order to

simplify the problems, we introduce only production capacities and we also assume

that the distribution costs are linear. In other words, the PID problems generated in

this section have fixed charge production costs, linear inventory and distribution costs,










Table 4-6: Problem sizes for fixed charge networks with capacity constraints.
No. of No. of No. of No. of
Problem Facilities Retailers Periods Arcs
26 5 30 30 4,795
27 5 30 40 6,395
28 5 30 50 7,995
29 10 30 30 9,590
30 10 30 40 12,790
31 10 30 50 15,990
32 10 40 30 12,590
33 10 40 40 16,790
34 10 40 50 20,990


and also capacities on the amount of production. These simplifying assumptions can

easily be relaxed since DSSP is a general procedure that can solve almost any PID

problem with network constraints and concave costs. Data set C in Table 4-3 is used

to generate the input data for these problems. The capacity on each production arc

for a given problem is a constant and it is calculated by

Capacity = Δ × (Mean Demand × Number of Retailers) / (Number of Plants),

where Δ is a fixed parameter. If Δ is very large (e.g., Δ > 2 × number of plants × number of periods), then the problem is an uncapacitated PID problem. If, on the other hand, Δ is very small, then the problem may not have any feasible solution. We varied the value of Δ from 1.1 to 5. When Δ = 1.1, there were several infeasible instances, and when Δ = 5, the problems were practically uncapacitated.
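The capacity rule above can be written as a one-line helper; the parameter (rendered Δ in the text) is the multiplier varied from 1.1 to 5, and all names below are ours, chosen for illustration:

```python
def production_capacity(delta, mean_demand, num_retailers, num_plants):
    """Constant capacity placed on every production arc:
    Capacity = delta * mean_demand * num_retailers / num_plants."""
    return delta * mean_demand * num_retailers / num_plants

# Demands in these experiments are Uniform[5, 55], so mean demand is 30.
mean_demand = 30.0
for delta in (1.1, 1.125, 1.15, 1.2, 2, 3, 4, 5):
    cap = production_capacity(delta, mean_demand, num_retailers=30, num_plants=5)
    print(f"delta = {delta:>5}: capacity per production arc = {cap:.1f}")
```

Small Δ makes total capacity barely cover total demand (hence the infeasible instances at Δ = 1.1), while large Δ makes the constraints non-binding.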

For each problem in Table 4-6 we generated 10 instances. We also tested 8

different Δ values. Thus, a total of 720 problems were solved. The average errors for

these problems are presented in Table 4-7 (an explanation of how these errors were

calculated is given below). As we can see from the table, the errors are all close to zero.

In fact, the maximum error among the 720 instances was 0.3%. Table 4-7 also shows

that the average errors are slightly higher when Δ = 2. Similar behavior is observed in

Tables 4-8 and 4-9. Table 4-8 gives the average CPU time DSSP took to solve each










Table 4-7: Average errors for problems with production capacities.


Average Error (%)
Problem Δ=1.1 Δ=1.125 Δ=1.15 Δ=1.2 Δ=2 Δ=3 Δ=4 Δ=5
26 0.02 0.02 0.02 0.04 0.06 0.07 0.09 0.09
27 0.03 0.05 0.04 0.04 0.13 0.10 0.14 0.13
28 0.02 0.04 0.04 0.04 0.17 0.07 0.08 0.10
29 0.06 0.08 0.07 0.09 0.20 0.16 0.10 0.14
30 0.07 0.09 0.12 0.14 0.21 0.13 0.14 0.15
31 0.11 0.12 0.13 0.13 0.23 0.15 0.16 0.17
32 0.03 0.03 0.06 0.08 0.11 0.11 0.09 0.12
33 0.05 0.06 0.08 0.06 0.15 0.11 0.10 0.09
34 0.07 0.07 0.08 0.11 0.18 0.11 0.13 0.14


problem whereas Table 4-9 gives the average CPU times for CPLEX. The highest

CPU times for DSSP and CPLEX were observed when Δ = 2 and Δ = 1.2, respectively.

This indicates that the problems became more difficult as Δ was increased from 1.1

to 2, but as Δ was further increased the problems became easier again.


Table 4-8: CPU times of DSSP for problems with production capacities.

CPU Time (seconds)
Problem Δ=1.1 Δ=1.125 Δ=1.15 Δ=1.2 Δ=2 Δ=3 Δ=4 Δ=5
26 0.08 0.09 0.09 0.09 0.19 0.09 0.07 0.08
27 0.11 0.13 0.11 0.13 0.21 0.12 0.11 0.10
28 0.14 0.16 0.17 0.17 0.34 0.17 0.14 0.14
29 0.21 0.17 0.21 0.21 0.39 0.20 0.17 0.15
30 0.23 0.32 0.31 0.34 0.49 0.31 0.26 0.25
31 0.37 0.38 0.36 0.46 0.66 0.37 0.30 0.32
32 0.24 0.26 0.27 0.28 0.40 0.24 0.21 0.20
33 0.39 0.40 0.43 0.41 0.60 0.42 0.35 0.33
34 0.45 0.46 0.60 0.59 0.74 0.49 0.44 0.42


In our implementations CPLEX was run for at most 600 CPU seconds. If an

optimal solution is found within 600 CPU seconds, then the objective function value

of the solution obtained from DSSP is compared to the objective function value of

this optimal solution. In other words, the error is calculated as
Error = (DSSP Solution Value - Optimal Solution Value) / (Optimal Solution Value) × 100.









However, if an optimal solution is not found within 600 CPU seconds, then CPLEX

returns a lower bound. In this case, the objective function value of the DSSP solution

is compared to the lower bound and the error is given by

Error = (DSSP Solution Value - Lower Bound) / (Lower Bound) × 100.
Some of the problems in Table 4-9 have average CPU times of 600. This means

that for these problems CPLEX was not able to find an optimal solution within our

time limit. Thus, the average errors reported in Table 4-7, corresponding to these

problems, are with respect to a lower bound. However, the errors are still small.


Table 4-9: CPU times of CPLEX for problems with production capacities.

CPU Time (seconds)
Problem Δ=1.1 Δ=1.125 Δ=1.15 Δ=1.2 Δ=2 Δ=3 Δ=4 Δ=5
26 4 6 7 8 9 7 3 2
27 9 31 22 265 24 5 4 4
28 16 40 34 124 69 15 8 7
29 234 186 572 411 373 85 33 9
30 553 600 600 600 600 352 71 28
31 600 600 600 600 600 600 600 63
32 40 137 600 600 235 69 17 14
33 600 600 600 600 600 242 49 46
34 600 600 600 600 600 600 108 55


Table 4-10 also gives us an idea about the difficulty of the problems. This table

shows the percentage of production arcs that have positive flow in the optimal solution

obtained from CPLEX. For example, when Δ = 1.1, on average over 90% of the production

arcs were used in an optimal solution (N/A denotes that an optimal solution was

not available). This indicates that the capacity constraints are so tight that there is

production at almost every facility in every period. In other words, the number of

feasible solutions that need to be investigated is small; thus, CPLEX does not take

too much time to solve these problems. As Δ increases, the percentage of production

arcs used in an optimal solution decreases. When Δ = 5, this percentage drops down

to about 30% and these problems are practically uncapacitated.










Table 4-10: Percentage of production arcs used in the optimal solution.
Percentage of production arcs (%)
Problem Δ=1.1 Δ=1.125 Δ=1.15 Δ=1.2 Δ=2 Δ=3 Δ=4 Δ=5
26 94 91 90 87 56 43 37 34
27 94 92 90 86 56 43 39 37
28 93 91 89 86 56 42 37 34
29 91 89 87 84 51 37 31 27
30 91 N/A N/A N/A N/A 37 30 27
31 N/A N/A N/A N/A N/A N/A N/A 27
32 91 89 N/A N/A 52 39 32 30
33 N/A N/A N/A N/A N/A 38 32 28
34 N/A N/A N/A N/A N/A N/A 32 29


4.4.2 Problems with Piecewise-Linear Concave Costs

We also tested our local search procedures on problems with piecewise-linear

concave costs. These problems were much more difficult compared to the problems

with fixed charge costs due to the increase in the problem size as the number of pieces

in each arc increases. The structure of the networks generated and the characteristics

of the input data are similar to those of the problems in section 4.4.1. The inventory

costs, hjt, are generated in exactly the same way, i.e., from a Uniform distribution

in the interval [1,3]. The demands are also uniformly generated from the same

distribution in the interval [5,55]. The transportation costs are also the same. The

same distances between the facilities and retailers are used as the variable costs and

the fixed costs are uniformly generated from different intervals as given in Table 4-3.

The reason we generate fixed charge transportation costs is that the problems

we consider are uncapacitated concave minimization problems, which means that in

an optimal solution the flow on transportation arcs will be equal to either zero or

the demand. In other words, xjkt = 0 or xjkt = dkt in an optimal solution for all

j = 1,..., J, k = 1,..., K, and t = 1,..., T. Thus, if piecewise-linear and concave

costs are generated for transportation arcs they can be reduced to fixed charge cost

functions to simplify the formulation.









The production costs, however, are different since they are piecewise-linear

concave rather than fixed charge. To generate a piecewise-linear concave cost function

with ljt pieces for a given production arc we first divide the maximum possible flow on

that arc into ljt equal intervals. Next, we generate a variable cost for each piece. Note

that the variable costs should be decreasing (see equation 2.13) to guarantee concavity.

Therefore, ljt variable costs are uniformly generated from the interval [5,15] and then

they are sorted in decreasing order. The fixed cost for the first interval is randomly

generated from one of the five different uniform distributions given in Table 4-3. The

fixed cost of the other intervals can be calculated using the following equation:


Sjt,i = Sjt,i-1 + (cjt,i-1 - cjt,i) bjt,i-1,   i = 2, ..., ljt,

where bjt,i-1 denotes the breakpoint between pieces i - 1 and i, so that consecutive pieces meet and the cost function is continuous.

For a given problem we assume that all production arcs have an equal number of

pieces, i.e., ljt = l for all j = 1,..., J and t = 1,..., T. We took the smallest

problems in each group in Table 4-2 and varied the number of pieces, which resulted

in Table 4-11. Table 4-11 summarizes the set of problems solved with piecewise-linear

concave costs. The last column gives the number of arcs in the extended network

after the arc separation procedure.
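The generation scheme just described can be sketched as follows. The fixed-cost recursion forces consecutive pieces to meet at the breakpoints, which together with the decreasing slopes yields a concave function; the function and variable names are ours:

```python
import random

def concave_pieces(num_pieces, max_flow, first_fixed_cost):
    """Generate a piecewise-linear concave production cost with
    `num_pieces` equal-width pieces on [0, max_flow].

    Returns parallel lists (fixed_costs, var_costs, breakpoints), where
    the cost of a flow x falling on piece i is fixed_costs[i] + var_costs[i] * x.
    """
    width = max_flow / num_pieces
    breakpoints = [width * i for i in range(1, num_pieces + 1)]

    # Slopes from Uniform[5, 15], sorted in decreasing order: concavity.
    var_costs = sorted((random.uniform(5, 15) for _ in range(num_pieces)),
                       reverse=True)

    fixed_costs = [first_fixed_cost]
    for i in range(1, num_pieces):
        # Continuity at breakpoint b_{i-1}:
        #   S_i = S_{i-1} + (c_{i-1} - c_i) * b_{i-1}
        fixed_costs.append(fixed_costs[i - 1]
                           + (var_costs[i - 1] - var_costs[i]) * breakpoints[i - 1])
    return fixed_costs, var_costs, breakpoints

random.seed(0)
S, c, b = concave_pieces(num_pieces=4, max_flow=100.0, first_fixed_cost=50.0)
print(S, c, b)
```

Because the slopes decrease and the fixed costs increase, evaluating the minimum over pieces at any flow level picks exactly one active piece, which is what the arc separation procedure encodes.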

For each problem in Table 4-11 five different data sets are used. In other words,

the fixed charges are generated from five different uniform distributions as given in

Table 4-3 which leads to different problem characteristics. Ten problem instances

are solved for each data set and problem combination (a total of 750 problems).

Based on our observations from problems with fixed charge costs only LSM-GRASP

is used to solve these new problems that have piecewise-linear concave production

costs. The GRASP parameter α is chosen randomly from the interval [0.05, 0.20]

and 256 iterations are performed. These problems are more difficult compared to

those presented in section 4.4.1. Therefore, we ran more iterations of LSM-GRASP

to get good solutions at the expense of extra computational time. As shown in










Table 4-11: Problem sizes for piecewise-linear concave networks.
No. of No. of No. of No. of No. of No. of
Group Problem Facilities Retailers Periods Pieces Arcs Arcs
(org) (ext)
35 25 400 1 2 10,025 10,050
6 36 25 400 1 4 10,025 10,100
37 25 400 1 8 10,025 10,200
38 125 200 1 2 25,125 25,250
7 39 125 200 1 4 25,125 25,500
40 125 200 1 8 25,125 26,000
41 10 70 20 2 14,390 14,590
8 42 10 70 20 4 14,390 14,990
43 10 70 20 8 14,390 15,790
44 30 60 5 2 9,270 9,420
9 45 30 60 5 4 9,270 9,720
46 30 60 5 8 9,270 10,320
47 30 50 20 2 31,170 31,770
10 48 30 50 20 4 31,170 32,970
49 30 50 20 8 31,170 35,370
org: original network, ext: extended network after ASP


Tables 4-12, 4-13, 4-14, 4-15, and 4-16, LSM-GRASP took more time compared to

LS-DSSP but gave slightly better results.

CPLEX is used to solve the ASP formulations of the problems given in section

2.4.2. We limited the running time of CPLEX to 2,400 CPU seconds. If an optimal

solution is found within 2,400 seconds then CPLEX returns a lower bound and an

upper bound, both of which are equal to the optimal solution value. If an optimal

solution is not reached in 2,400 seconds then CPLEX returns a lower bound. CPLEX

was not even able to find a feasible solution within the specified time limit for some

problems and, therefore, an upper bound may not be available. The errors reported

in Tables 4-12, 4-13, 4-14, 4-15, and 4-16 are the average errors of the upper bounds

obtained from LS-DSSP, LSM-GRASP, and CPLEX (if an upper bound is available)

with respect to the lower bound of CPLEX, which is calculated in the following way:
Error = (Upper Bound - CPLEX Lower Bound) / (CPLEX Lower Bound) × 100.










Table 4-12: Summary of results for problems using data set A.
Error (%) CPU Time (seconds)
LS- LSM- LS- LSM-
Problem DSSP GRASP CPLEX DSSP GRASP CPLEX
35 0.36 0.36 0.00 0.40 23.98 30.45
36 3.40 1.35 0.00 0.97 27.98 208.40
37 2.26 0.84 0.00 2.84 40.41 508.54
38 0.00 0.00 0.00 0.88 56.51 69.31
39 0.49 0.46 0.00 2.67 84.10 631.32
40 9.97 9.41 11.15 5.21 93.43 2,400.00
41 0.00 0.00 0.00 6.08 301.26 222.10
42 2.56 2.52 3.43 12.42 376.34 2,400.00
43 16.09 16.14 17.72 38.06 447.86 2,400.00
44 0.00 0.00 0.00 1.09 55.70 46.13
459 0.34 0.33 0.26 2.34 70.02 1,134.02
46 11.03 10.90 13.16 6.53 111.03 2,400.00
47 0.01 0.00 0.00 14.29 627.57 1,078.99
48 7.67 7.68 8.28 33.04 874.33 2,400.00
49 26.14 26.19 N/A 84.77 1249.34 2,400.00


The superscripts over the problem numbers denote the number of instances out of

ten for which CPLEX found an optimal solution. If CPLEX finds optimal solutions in

all ten instances, then the average error for CPLEX will be zero and the superscript is

omitted. If CPLEX cannot find an optimal solution for any of the ten instances, then

the average error will be higher than zero and there will be no superscript over that

problem number. For example, in Table 4-12 problem 45 has a superscript 9 which

means that CPLEX found optimal solutions for 9 out of 10 instances. Problems 35,

36, 37, 38, 39, 41, 44, and 47 have no superscripts but the average errors for CPLEX

are all zero for these problems and this indicates that CPLEX found optimal solutions

for all instances in these problems. For the remaining problems (40, 42, 43, 46, 48,

and 49) CPLEX was not able to find an optimal solution for any of the instances. In

fact, for problem 49 CPLEX did not even find a feasible solution.

As the number of linear pieces in each production cost function increases the

problems seem to get more difficult, which is expected because the network increases










Table 4-13: Summary of results for problems using data set B.
Error (%) CPU Time (seconds)
LS- LSM- LS- LSM-
Problem DSSP GRASP CPLEX DSSP GRASP CPLEX
35 0.25 0.24 0.00 0.46 22.22 76.50
36 2.32 1.19 0.00 1.08 29.51 811.98
378 2.56 1.82 2.39 3.35 44.30 1,499.32
38 0.08 0.02 0.00 1.51 60.04 268.35
391 3.13 3.01 5.40 3.81 88.01 2,304.37
40 18.15 17.77 49.22 8.89 105.70 2,400.00
414 0.06 0.05 0.02 7.30 293.44 2,106.00
42 6.15 6.10 7.12 16.33 395.21 2,400.00
43 21.76 21.74 23.56 48.04 487.02 2,400.00
44 0.07 0.06 0.00 1.29 57.91 142.50
453 4.03 3.90 4.65 3.15 73.99 2,296.52
46 17.05 16.71 20.40 7.64 109.98 2,400.00
471 0.22 0.23 0.28 18.33 661.42 2,388.10
48 14.37 14.20 N/A 42.54 910.27 2,400.00
49 23.63 23.37 N/A 114.22 1243.62 2,400.00


in size. For example, in Table 4-13 problems 38, 39, and 40 have the same

network structure and the same cost characteristics, but the numbers of pieces in their

production arcs differ. The numbers of pieces are 2, 4, and 8 for problems 38,

39, and 40, respectively, as given in Table 4-11. LS-DSSP and LSM-GRASP solve the

problems on the original network, therefore, the increase in the CPU times of these

two approaches is not as high compared to the increase in the CPU times of CPLEX.

This is due to the fact that CPLEX works on the extended network which is larger

in size. For problem 38, CPLEX was able to find an optimal solution for all instances

and the average time spent was about 268 CPU seconds. The average error for LS-

DSSP was 0.08%, which took about 1.5 seconds. The CPU time for LSM-GRASP was

60 seconds and the average error was 0.02%. As the number of pieces increased, the

errors and the CPU times both increased. For example, the average CPU times for

LS-DSSP, LSM-GRASP, and CPLEX were 8.89, 105.7, and 2,400, respectively, for

problem 40. It is clear that LS-DSSP was much faster compared to the other two for









all problems, but LSM-GRASP in general gave slightly better results. The average

errors for problem 40 were about 18% for LS-DSSP, 17.8% for LSM-GRASP, and 49%

for CPLEX.

Table 4-14: Summary of results for problems using data set C.
Error (%) CPU Time (seconds)
LS- LSM- LS- LSM-
Problem DSSP GRASP CPLEX DSSP GRASP CPLEX
35 0.10 0.02 0.00 0.52 22.87 155.00
368 2.40 1.16 0.71 1.18 28.37 1,412.73
372 5.26 3.08 7.94 3.18 42.84 2,340.82
38 0.08 0.14 0.00 1.65 59.14 598.26
39 9.29 8.75 11.92 4.70 82.22 2,400.00
40 16.17 15.41 N/A 8.86 106.15 2,400.00
41 0.26 0.30 0.30 8.10 291.93 2,400.00
42 6.23 6.25 7.21 14.71 386.89 2,400.00
43 20.20 20.23 21.96 43.81 505.90 2,400.00
44 0.15 0.21 0.00 1.52 57.38 371.82
45 7.36 7.00 8.37 3.53 76.54 2,400.00
46 17.56 16.74 23.52 7.90 110.72 2,400.00
47 2.45 2.59 N/A 20.18 655.19 2,400.00
48 13.35 13.08 N/A 55.96 893.06 2,400.00
49 22.23 21.64 N/A 118.03 1282.63 2,400.00


Another observation about Tables 4-12, 4-13, 4-14, 4-15, and 4-16 is with

respect to the CPU times. Note that the fixed charges increase as we go from Table

4-12 to Table 4-16 and the CPU times of CPLEX increase as the fixed charges

increase. However, the CPU times for LS-DSSP and LSM-GRASP are not affected

by the input data.

The errors reported in Tables 4-12 to 4-16 are with respect to either an optimal

solution or a lower bound obtained from CPLEX. As we can see from these

tables, the errors are relatively larger for problems 40, 43, 46, and 49. The production

cost functions in these problems have eight pieces which makes it difficult to solve

them. The large errors are possibly due to poor lower bounds. Therefore, we first

transformed these piecewise-linear and concave PID problems into fixed charge PID










Table 4-15: Summary of results for problems using data set D.
Error (%) CPU Time (seconds)
LS- LSM- LS- LSM-
Problem DSSP GRASP CPLEX DSSP GRASP CPLEX
35 0.04 0.12 0.00 0.71 21.83 229.56
366 2.58 1.37 1.69 0.96 26.43 1,993.51
37 7.88 6.19 11.02 2.33 40.88 2,400.00
385 0.67 0.72 0.33 2.54 57.01 1,978.91
39 9.29 8.67 N/A 5.63 76.78 2,400.00
40 15.30 13.99 N/A 9.46 96.96 2,400.00
41 0.63 0.99 1.29 8.61 294.96 2,400.00
42 4.92 5.28 6.44 18.22 378.43 2,400.00
43 16.17 16.58 18.01 39.33 502.67 2,400.00
447 0.56 0.80 0.36 1.89 54.59 1,505.39
45 9.19 8.93 9.69 3.46 69.54 2,400.00
46 16.39 15.51 N/A 9.42 103.10 2,400.00
47 4.86 5.53 N/A 26.54 610.61 2,400.00
48 13.12 12.87 N/A 53.67 807.60 2,400.00
49 20.71 20.54 N/A 135.07 1181.78 2,400.00


problems using the node separation procedure. Then, we used the extended problem

formulation given in section 3.5 to obtain lower bounds. We observed significant

improvement in the errors. However, one drawback of the extended formulation is

that the problem grows tremendously after the transformations. Therefore, solving

even the LP relaxations of these problems becomes computationally expensive. In

fact, CPLEX ran out of memory when we attempted to solve problems 43 and 49.

Table 4-17 gives the problem size for problems 40, 43, 46, and 49.

As we can see from Table 4-17, problems 43 and 49, respectively, have about 1.2

and 2.5 million variables in their corresponding extended formulations. CPLEX was

not able to solve these problems. However, we got good lower bounds for the other

two problems. The errors for these problems are given in Table 4-18. For example,

the errors reported in Table 4-12 for problem 40 were 9.97% for LS-DSSP, 9.41% for

LSM-GRASP, and 11.15% for CPLEX, but those errors decreased to 0.88%, 0.71%,

and 1.81%, respectively.










Table 4-16: Summary of results for problems using data set E.
Error (%) CPU Time (seconds)
LS- LSM- LS- LSM-
Problem DSSP GRASP CPLEX DSSP GRASP CPLEX
35 0.16 0.76 0.00 0.79 18.82 456.69
365 0.36 0.60 0.34 1.05 22.42 1,817.66
37 3.59 3.85 4.72 2.27 30.00 2,400.00
38 3.83 4.33 N/A 3.17 53.45 2,400.00
39 7.30 7.15 N/A 5.23 67.91 2,400.00
40 9.18 8.88 N/A 9.57 82.76 2,400.00
41 1.93 3.06 3.88 12.03 276.38 2,400.00
42 3.58 4.73 5.99 20.38 323.73 2,400.00
43 6.75 7.58 9.15 43.04 411.45 2,400.00
44 3.75 4.46 3.40 3.15 52.33 2,400.00
45 9.17 9.35 9.80 4.51 65.20 2,400.00
46 12.34 12.03 N/A 8.16 83.65 2,400.00
47 9.25 9.77 N/A 40.08 587.33 2,400.00
48 12.72 13.68 N/A 70.12 750.53 2,400.00
49 15.92 16.04 N/A 112.23 959.70 2,400.00


4.5 Library Test Problems

As we mentioned in section 4.3, real-world problems and library test problems

are useful, but as McGeoch [59] said, it is usually difficult to obtain these kinds

of problems. He also notes that library data sets may quickly become obsolete.

Moreover, it is difficult to generalize results from a small set of problem instances.

Another disadvantage of using library problems is that researchers may be tempted

to tune their algorithm so that it works well on that particular set of instances. This

may favor procedures that work poorly for general problems and may disfavor those

procedures that have superior performance (Hooker [40]).

We could not find any PID problems in the OR-library, but there were

uncapacitated warehouse location problems. We formulated these as PID problems

and used LS-DSSP and LS-GRASP to solve them. The data files are located

at http://mscmga.ms.ic.ac.uk/jeb/orlib/uncapinfo.html and they have the following

information:










Table 4-17: Problem sizes for the extended formulation after NSP.
No. of No. of No. of No. of No. of No. of
Problem Facilities Retailers Periods Pieces Arcs Arcs
(org) (ext)
40 125 200 1 8 25,125 201,000
43 10 70 20 8 14,390 1,177,600
46 30 60 5 8 9,270 217,200
49 30 50 20 8 31,170 2,524,800
org: original network
ext: extended network formulation after NSP


Table 4-18: Summary of results for problems 40 and 46 after NSP.
Error (%) LP-NSP
LS- LSM- CPU
Data Set Problem DSSP GRASP CPLEX (seconds)
A 40 0.88 0.71 1.81 6,761
46 0.77 0.66 2.73 1,370
B 40 1.04 0.72 27.67 6,340
46 0.78 0.48 3.69 1,662
C 40 1.17 0.51 N/A 6,053
46 1.43 0.73 6.59 1,617
D 40 1.89 0.73 N/A 6,079
46 1.48 1.39 N/A 2,350
E 40 1.65 1.41 N/A 7,126
46 2.94 2.66 N/A 3,626


number of potential warehouse locations,

number of customers,

the fixed cost of opening a warehouse,

the demand of each customer, and

the total cost of flowing all of the demand from a warehouse to a customer.

The warehouses in the location problems are treated as the facilities in the PID

problem and the customers are treated as the retailers. There is only a single period

and the fixed costs of opening a warehouse are used as the fixed production costs at

the facilities. Note that the variable costs of production will be zero. The distribution









arcs, on the other hand, will have linear cost functions since there is no fixed cost of

placing an order in the location problems. The results are summarized in Table 4-19.


Table 4-19: Uncapacitated warehouse location problems from the OR library.
Error (%)
Problem DSSP GRASP LS-DSSP LSM-GRASP
cap71 0.73 4.65 0.00 0.00
cap72 1.49 6.21 0.13 0.00
cap73 1.35 2.68 0.47 0.45
cap74 0.56 4.12 0.37 0.37
cap101 1.77 7.86 0.56 0.24
cap102 2.26 6.30 0.39 0.09
cap103 1.21 6.46 0.81 0.71
cap104 1.37 1.37 0.61 0.00
cap131 4.49 8.51 2.22 0.85
cap132 3.90 2.92 2.34 0.86
cap133 3.16 5.98 1.03 0.79
cap134 2.09 1.37 1.56 0.00


The errors reported in the first column of Table 4-19 are for the DSSP procedure

if no local search is applied. Similarly, in the second column errors are given for

GRASP with only the construction phase. The last two columns are the errors for

LS-DSSP and LSM-GRASP. As we can see from the results, the local search procedure

improved the error bounds significantly. The errors for LSM-GRASP were all under

1%. LS-DSSP gave higher error bounds. In terms of CPU times LS-DSSP was faster

than LSM-GRASP. These are relatively small size problems and the CPU times for

LS-DSSP were about 0.01 seconds for all problems. The CPU times for LSM-GRASP

varied from 0.3 seconds to 1.1 seconds.















CHAPTER 5
CONCLUDING REMARKS AND FURTHER RESEARCH

5.1 Summary

We have developed local search algorithms for the uncapacitated concave

production-inventory-distribution problem. We first characterized the properties

of an optimal solution to uncapacitated PID problems with concave costs, and

then presented DSSP and GRASP algorithms which take advantage of these

properties. DSSP and GRASP provide initial solutions for the local search

procedure. Computational results for test problems of varying size, structure, and

cost characteristics are provided. The first set of problems for which experimental

results were presented was uncapacitated PID problems with linear inventory costs,

and fixed charge production and distribution costs. The second set of problems

included PID problems with fixed charge production costs, and linear inventory

and distribution costs. These problems had production capacities. Since our local

search algorithms are designed for uncapacitated problems, only DSSP was used to

solve these problems because DSSP works for both capacitated and uncapacitated

problems. The third set of computational results presented were for uncapacitated

PID problems with piecewise-linear and concave production costs, fixed charge

distribution costs, and linear inventory costs.

A general purpose solver, CPLEX, was used in an attempt to solve our problems

optimally. We compared the solution values obtained from LS-DSSP, LS-GRASP,

and LSM-GRASP to the solution values obtained from CPLEX. Due to the difficulty

of the problems CPLEX failed to find an optimal solution in most cases. In fact, for

some problems CPLEX was not able to find even a feasible solution. We initially

compared our solution values to lower bounds obtained from CPLEX. However,









these lower bounds were poor, particularly for problems with high fixed charges

and for problems with piecewise-linear and concave costs. Therefore, we provided

extended formulations for these difficult problems and solved the LP relaxations

of these extended formulations. A disadvantage of these formulations is that the

problems grow in size and they become harder to solve. However, the lower bounds

obtained from the extended formulation were much tighter compared to the ones

obtained from CPLEX.

The computational experiments indicated that LS-DSSP was more sensitive to

problem structure and input data, but the errors obtained from LS-GRASP and

LSM-GRASP were more robust. LS-DSSP gave smaller errors for problems with

fixed charge costs, particularly for single period problems. LSM-GRASP, in general,

gave better results for problems with piecewise-linear and concave costs. However,

the CPU times required by LS-DSSP were much smaller compared to the CPU times

of LS-GRASP and LSM-GRASP. LS-DSSP has a natural stopping condition, but LS-

GRASP and LSM-GRASP can be performed for many iterations. Thus, the GRASP

approaches lead to better results if more iterations are performed, at the expense of

extra computational time. The CPU times of the local search procedures with both

DSSP and GRASP can further be improved since they are multi-start approaches

and are suitable for parallel implementation.

The main contributions of this work can be summarized as follows:

Several local search procedures are developed that provide solutions to uncapacitated
PID problems with concave costs in a reasonable amount of time.

An extended formulation is given for PID problems with fixed charge costs. It
is also shown that the LP relaxation of the extended formulation gives lower
bounds that are no worse than the lower bounds obtained from the LP relaxation
of the original formulation.

Extensive computational results are presented that show the
effectiveness of the proposed solution approaches.









Some special PID problems are identified for which one of the heuristic
procedures finds an optimal solution.

The codes developed for the implementation of the algorithms are made
available for distribution.

The proposed solution approaches are useful in practice, particularly, for

production, inventory, and distribution problems for which the supply chain network

is large. For example, a network with 30 facilities, 70 retailers, 10 time periods, and

fixed charge costs is quite big. Finding optimal solutions for networks of this size,

and even for smaller networks, may be computationally expensive or impossible. The

computational results indicated that if the number of retailers is much larger

than the number of facilities, the heuristic approaches performed better. Therefore,

we recommend the use of the proposed algorithms for problems that can be modeled

as PID problems which have the characteristics mentioned above.

5.2 Proposed Research

5.2.1 Extension to Other Cost Structures

The solution approaches presented in Chapter 3, particularly GRASP, take

advantage of the fact that the optimal solution is a tree in the supply chain network.

This is due to the fact that the cost functions used are concave cost functions.

Therefore, a natural extension to the work presented here is to develop solution

approaches to solve problems with more general cost structure (e.g. Figure 5-1)

which may arise in various applications.

Kim and Pardalos [49] actually extended the DSSP idea to handle nonconvex

piecewise-linear network flow problems. They developed a contraction rule to reduce

the feasible domain for the problem by introducing additional constraints.

5.2.2 Generating Problems with Known Optimal Solutions

As mentioned earlier in section 4.4, selection and generation of test problems is

an important part of computational experiments. A large proportion of experimental

research on algorithms concerns heuristics for NP-hard problems as is the case in
















Figure 5-1: Piecewise-concave cost functions.


this dissertation. One question often asked about an approximate solution is "How

close is the objective value to the optimal value?" The obvious difficulty here is that

optimal values for NP-hard problems cannot be efficiently computed unless P = NP.

Therefore, generating instances where the optimal solution is known is an attractive

area of research but not necessarily an easy one.

Arthur and Frendewey [3] presented a simple effective algorithm for randomly

generating travelling salesman problems for which the optimal tour is known. Pilcher

and Rardin [63] gave a more complex generator utilizing a random cut approach.

They first discussed a random cut concept in general for generating instances of

discrete optimization problems and then provided details on its implementation for

symmetric travelling salesman problems. The random cut concept they presented

may be implemented to generate PID problems. This will provide a variety of test

problems that can be used to evaluate heuristic approaches. It will also motivate

research in the area of cutting plane algorithms.

Another practical application of generating problems with known optimal

solutions is to help organizations set prices for their products and services. For

example, given a supply chain network let us assume that due to certain regulations

some of the routes must be used in an optimal solution. Rather than modeling these








restrictions as constraints of the problem, one could set the production, inventory,

and distribution costs in such a way that those routes will be part of an optimal

solution. Toll pricing is another good example. Given a network of highways and

roads, assume we want the drivers to follow certain routes during certain hours of

the day. In this case, how should the roads be tolled so that the desired flow is the

optimal flow?