OPTIMIZATION PROBLEMS IN TELECOMMUNICATIONS WITH MILITARY APPLICATIONS

By

CLAYTON WARREN COMMANDER

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2007

© 2007 Clayton Warren Commander

To my family.

ACKNOWLEDGMENTS

First, I must thank my advisor Panos Pardalos. He has been an incredible mentor, leader, and friend to me since the day we met. His excitement and passion for life, research, and family have had a profound effect on me and have encouraged me greatly. He will always have a special place in my heart. My appreciation goes to my committee members, Stan Uryasev, J. Cole Smith, and William Hager, for their time and helpful ideas that guided me along the way. I would also like to thank the members of the graduate committee, Farid Ait-Sahlia, Elif Akgali, and Edwin Romeijn, for giving a student from whom they had nothing to gain another chance. Next, I wish to thank my wonderful coauthors and collaborators for working with me and for helping me learn how to be a researcher: Ashwin Arulselvan, Sergiy Butenko, Lily Elefteriadou, Paola Festa, Michael J. Hirsch, Carlos A.S. Oliveira, Michelle Ragle, Mauricio G.C. Resende, Valeriy Ryabchenko, Oleg Shylo, Marco Tsitselis, Stan Uryasev, Yinyu Ye, and Grigory Zharshevsky. I would like to thank Jonathan King for instilling in me a love of mathematics and an appreciation for clearly written mathematical discourse. I am particularly grateful to Mauricio G.C. Resende of AT&T Labs Research for his wonderful collaborations and help over the years. He is every bit as gracious as he is brilliant. Finally, I am grateful to Claudio Meneses and Onur Seref for their thoughtful advice. I am truly grateful to the United States Air Force for supporting and financing my educational endeavors.
Particular thanks go to Rob Murphey, David Jeffcoat, Michelle White, and many others at the Air Force Research Laboratory for their support. Thanks go to Don Grundel, who always gave good advice and kept me on the straight and narrow. Finally, but certainly not least, my most heartfelt appreciation goes to my family. I thank my parents and my grandmother, who always listened and encouraged me. I thank my parents-in-law for always giving me a place to stay. Finally, I thank my beautiful wife Leah, who has been my constant source of love, passion, and inspiration.

TABLE OF CONTENTS

page
ACKNOWLEDGMENTS 4
LIST OF TABLES 8
LIST OF FIGURES 10
ABSTRACT 13

CHAPTER
1 INTRODUCTION 15
2 GLOBAL OPTIMIZATION ISSUES 16
  2.1 Introduction 16
  2.2 Idiosyncrasies 16
  2.3 Fundamental Results 17
  2.4 Discrete Optimization 21
  2.5 Computational Complexity 22
  2.6 Upper and Lower Bounds 24
  2.7 Algorithms for Optimization Problems 28
    2.7.1 Exact Methods 28
    2.7.2 Heuristics 31
  2.8 Concluding Remarks 36
3 JAMMING COMMUNICATION NETWORKS VIA CRITICAL NODE DETECTION 38
  3.1 Introduction 38
  3.2 Problem Formulations 40
    3.2.1 Critical Node Problem 40
    3.2.2 Cardinality Constrained Problem 44
  3.3 Heuristics for Critical Node Problems 46
    3.3.1 CNP Heuristic 46
    3.3.2 CC-CNP Heuristic 49
    3.3.3 Genetic Algorithm for the CC-CNP 50
  3.4 Computational Results 53
    3.4.1 CNP Results 53
    3.4.2 CC-CNP Results 55
  3.5 Concluding Remarks 59
4 THE WIRELESS NETWORK JAMMING PROBLEM 62
  4.1 Introduction 62
  4.2 Definitions and Assumptions 63
  4.3 Deterministic Formulations
    4.3.1 Coverage Approach
    4.3.2 Connectivity Formulation
  4.4 Deterministic Setup with Percentile Constraints
    4.4.1 Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR)
    4.4.2 Percentile Constraints and the WNJP
  4.5 Case Studies and Algorithms
    4.5.1 Coverage Formulation
    4.5.2 Connectivity Formulation
  4.6 Concluding Remarks
5 JAMMING COMMUNICATION NETWORKS UNDER COMPLETE UNCERTAINTY
  5.1 Introduction
  5.2 Descriptions, Assumptions, and Definitions
  5.3 Problem Formulation
  5.4 Upper and Lower Bounds
  5.5 Heuristic for Uncertain Jamming
  5.6 Concluding Remarks
6 COOPERATIVE COMMUNICATION IN MOBILE AD HOC NETWORKS 106
  6.1 Introduction 106
  6.2 Discrete Formulations (CCPMANET-D) 109
  6.3 Algorithms for CCPMANET-D 113
    6.3.1 Construction Heuristic 113
    6.3.2 Local Improvement 115
    6.3.3 One-Pass Heuristic 117
    6.3.4 Greedy Randomized Adaptive Search 119
    6.3.5 Complexity of the Heuristic 123
    6.3.6 Computational Experiments 125
  6.4 A Continuous Formulation (CCPMANET-C) 131
  6.5 Concluding Remarks 136
7 THE TDMA MESSAGE SCHEDULING PROBLEM 139
  7.1 Introduction 139
  7.2 Problem Description 140
  7.3 Computational Complexity 142
  7.4 Heuristics 145
    7.4.1 Combinatorial Algorithm for TDMA Message Scheduling 145
    7.4.2 GRASP 150
    7.4.3 Sequential Vertex Coloring 151
    7.4.4 Mean Field Annealing 152
    7.4.5 Mixed Neural-Genetic Algorithm 152
  7.5 Computational Results 153
  7.6 Concluding Remarks 159
8 CONCLUSION 162
REFERENCES 165
BIOGRAPHICAL SKETCH 177

LIST OF TABLES

Table page
2-1 Growth rates of several polynomial and nonpolynomial functions. 24
3-1 Results of IP model and heuristic on terrorist network data. 53
3-2 Results of IP model and heuristic on randomly generated scale free graphs. 56
3-3 Results of IP model and heuristics on terrorist network data. 57
3-4 Results of the IP model and genetic algorithm and the combinatorial heuristic on randomly generated scale free graphs. 58
3-5 Comparative results of the genetic algorithm and the combinatorial heuristic when tested on the larger random graphs. Due to the complexity, we were unable to compute the corresponding optimal solutions. 59
4-1 Optimal solutions using the coverage formulation with regular and VaR constraints. 80
4-2 Optimal solutions using the coverage formulation with regular, VaR, and CVaR constraints. 81
5-1 Comparing for various values of k. 95
5-2 Numerical results are provided for several regions with various required jamming levels. The Upper Bound, Lower Bound, Optimal Grid, and Local Search columns provide the number of jamming devices required for the corresponding region according to the theorems presented and the proposed local search. The Percent Decrease shows the savings when comparing the local search to the optimal grid approach. 105
6-1 Comparative results between shortest path solutions and heuristic solutions. 118
6-2 Three instances with different sets of agents on 50 node graphs are given. The value in the UBound column was found using Corollary 1. 127
6-3 Three instances with different sets of agents on 75 node graphs are given. The value in the UBound column was found using Corollary 1. 128
6-4 A 100 node instance with solutions with radius varying from 1 to 5 units. The value in UBound was found using Corollary 1. 129
6-5 Average solution values for GRASP and GRASP with path-relinking on 50 node graphs. 130
6-6 Comparative solutions of GRASP and GRASP with path-relinking on 75 node graphs. 130
6-7 Results of GRASP and GRASP with path-relinking on 100 node graphs. 131
7-1 Comparison of solutions for the benchmark instances from Wang and Ansari. 156
7-2 Comparison of optimal and heuristic solutions for graphs with |V| = 50 stations. An * indicates that the solution is optimal, while a † indicates the solution is the best found by Xpress-MP after 3600s. Solutions are reported as (χ, M). 157
7-3 Comparison of optimal solver and heuristic solutions for the 75 station networks. 158
7-4 Comparison of optimal solver and heuristic solutions for networks with |V| = 100 stations. 160

LIST OF FIGURES

Figure page
2-1 Notice that the rounded integer solution is not optimal. 22
2-2 Visualization of complexity classes. 23
2-3 Pseudocode for a greedy algorithm which makes change using the minimum number of coins. 32
2-4 GRASP for maximization. 33
2-5 Generic simulated annealing maximization algorithm. 35
2-6 Pseudocode for generic genetic algorithm. 36
3-1 Connectivity Index of nodes A, B, C, D is 3. Connectivity Index of E, F, G is 2. Connectivity Index of H is 0. 45
3-2 Heuristic for detecting critical nodes. 46
3-3 Local search algorithm for critical node heuristic. 48
3-4 Heuristic with local search for detecting critical nodes. 49
3-5 Heuristic for the CARDINALITY CONSTRAINED CRITICAL NODE PROBLEM. 50
3-6 Pseudocode for generic genetic algorithm. 51
3-7 Example of the crossover operation. In this case, CrossProb = 0.65. 52
3-8 Terrorist network compiled by Krebs. 54
3-9 Optimal solution when k = 20. 55
3-10 Optimal solution when L = 4. 57
4-1 Connectivity Index of nodes A, B, C, D is 3. Connectivity Index of E, F, G is 2. Connectivity Index of H is 0. 66
4-2 Graphical representation of VaR and CVaR. 73
4-3 Case study 1. The placement of jammers is shown when the problem is solved using the original and VaR constraints. 81
4-4 Case study 1 continued. The placement of jammers is shown when the problem is solved using VaR and CVaR constraints. 82
4-5 Case Study 2: Original graph. 83
4-6 A comparison of the percentile constrained solutions. In both cases, the triangles represent the placement of jamming devices. 83
5-1 Uniform grid with jamming devices. 88
5-2 The least covered point is shown in the lower left grid cell. 89
5-3 Square Decomposition. 89
5-4 Equivalent Points. 90
5-5 Cumulative emanation of jamming devices. 91
5-6 Integral Lower Bound. 92
5-7 Integral Upper Bound. 97
5-8 Comparison of the lower and upper bounds. 100
5-9 Pseudocode for the randomized local search for uncertain jamming. 103
5-10 Example of heuristic versus uniform placement. 104
6-1 Pseudocode for the shortest-path construction heuristic. 114
6-2 Pseudocode for the Hill Climbing intensification procedure. 116
6-3 Pseudocode for the one-pass heuristic. 117
6-4 GRASP for maximization. 119
6-5 Greedy randomized constructor for CCPMANET-D. 121
6-6 Local search for CCPMANET-D. 122
6-7 Path-relinking subroutine. 124
6-8 GRASP with path-relinking for maximization. 126
6-9 Evolution of GRASP+PR solution values on 50 node graphs as the communication radius increases from 1 to 5 units. 132
6-10 Evolution of GRASP+PR solution values on 75 node graphs as the communication radius increases from 1 to 5 units. 133
6-11 Evolution of GRASP+PR solution values on 100 node graphs as the communication radius increases from 1 to 5 units. 134
6-12 The Heaviside function H1. 135
6-13 Alternate objective function H2. 136
6-14 Second alternate objective function H3. 137
7-1 Counterexample to the claim of Wang & Ansari that an optimal graph coloring can be found by recursively finding a maximum independent set and removing it from the graph. 143
7-2 Construction of graph G' from G. 144
7-3 Pseudocode of the proposed heuristic for MSP-TDMA. 146
7-4 Greedy randomized heuristic for frame length minimization. 147
7-5 Throughput maximization pseudocode. 149
7-6 Benchmark TDMA test cases. 155
7-7 Example GRASP broadcast schedules for the networks given in Figure 7-6: (a) 15 station network, (b) 30 station network, (c) 40 station network. 161

Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy

OPTIMIZATION PROBLEMS IN TELECOMMUNICATIONS WITH MILITARY APPLICATIONS

By

Clayton Warren Commander

August 2007

Chair: Panagote M. Pardalos
Major: Industrial and Systems Engineering

In recent decades, optimization problems in telecommunication systems have been the focus of an intensive amount of research. These problems are important for several reasons, including speed and quality of communication, among others. In this dissertation, we present several problems arising in telecommunication networks in military applications. Several problems we consider involve wireless communication networks. These networks are an extraordinarily convenient method of communication. However, along with this convenience comes a myriad of complicated problems that must be addressed to preserve the attractive features of the networks. Furthermore, problems arising in adversarial environments differ from those in conventional settings, in that time is usually a critically constrained factor. This is troublesome because many of the problems are difficult to solve and would require a tremendous amount of time to compute the optimal solution. However, in a battlespace environment, time spent computing a solution and not fighting the enemy leads to a potential loss of materiel and lives. Thus, for the problems studied, we will focus a great deal of attention on designing heuristic algorithms which are capable of computing near optimal solutions very efficiently. We will consider two classes of problems involving telecommunication networks. The first class focuses on denying communication on a network and destroying its functionality. The other class has the objective of guaranteeing communication on a network. At first glance, these two sets appear to be polar opposites of one another.
However, with any emerging technology, studies which assess both vulnerabilities and capabilities must be performed in order to achieve a system which will not fail in its intended operational environment. Our goal is to show how these problems can be formulated and solved using tools from global and combinatorial optimization. For the problems considered, we examine the computational complexity and examine several mathematical programming formulations. Then we present several algorithms and examine extensive computational results demonstrating their effectiveness. Finally, we conclude by summarizing our work and indicating future directions of research.

CHAPTER 1
INTRODUCTION

Optimization problems in telecommunication systems have been the focus of an intensive amount of research in recent decades [135, 155]. These problems are important for several reasons, including speed and quality of communication and cost related issues. In this dissertation, we present several problems arising in military applications involving telecommunication networks. Several problems we consider involve wireless networks. These networks are an extraordinarily convenient method of communication; however, along with this convenience comes a myriad of complicated problems which must be addressed in order to preserve the attractive features of the networks. Furthermore, problems arising in adversarial environments differ from conventional problems in that time is usually a critically constrained factor. This presents somewhat of a problem because many of the problems are extremely difficult to solve and would require a tremendous amount of time to compute the optimal solution. However, in a battlespace environment, time spent computing a solution and not fighting the enemy leads to a potential loss of materiel and lives.
Thus, throughout this dissertation we will focus a great deal of attention on designing heuristic algorithms which are capable of computing near optimal solutions very efficiently. The remaining chapters of this dissertation present the results of my efforts to model and solve many important telecommunication problems facing the military in the ever evolving global war on terrorism. We will consider two classes of problems involving telecommunication networks. The first class (Chapters 3, 4 and 5) focuses on denying communication on a network and destroying its functionality. Conversely, the problems in Chapter 6 and Chapter 7 have the objective of guaranteeing communication on a network. At first glance, these two sets appear to be polar opposites of one another. However, with any emerging technology, studies which assess both vulnerabilities and capabilities must be performed to achieve a system which will not fail in its intended operational environment. Our goal is to show how these problems can be formulated and solved using tools from global and combinatorial optimization [74, 106, 127].

CHAPTER 2
GLOBAL OPTIMIZATION ISSUES

2.1 Introduction

Over the past 60 years, Operations Research (OR) has emerged as one of the most exciting, fast-paced, and interdisciplinary fields of mathematics. Since its rebirth during World War II, OR has turned into a fascinating subject which crosses all divides, drawing on real analysis, probability, statistics, economics, theoretical computer science, and biology in an attempt to solve some of the most computationally difficult problems known to exist. As mentioned in [98], OR was first formalized during World War II, when supplies were limited and needed to be allocated to the allied forces overseas. OR teams were fundamental in developing methods for using radar, which was crucial in the allies winning the air war.
Later, researchers developed methods for optimally transporting convoys and derived methods for tracking submarines, thus leading to success in the Pacific theater. The original name of the field was Military Operations Research; however, due to the success of the methods derived during the war, scientists and engineers began applying these techniques to other problems in mathematics and industrial engineering. The word military was subsequently dropped because of this. Since the early 1950's, researchers have been expanding the techniques and methods of OR. Reminiscent of the time of Gauss and Euler, scientists are making contributions at incredible rates in fields ranging from facility location problems to the mapping of the human genome. With the advent of the digital computer, algorithms can now be implemented, providing the capability to solve problems never before thought tractable. In this chapter we present the foundation of global optimization. This will provide the necessary tools for us as we investigate the problems presented in the succeeding chapters.

2.2 Idiosyncrasies

In this subsection, we introduce the symbols and notations we will employ most frequently throughout this dissertation. Denote a graph G = (V, E) as a pair consisting of a set of vertices V and a set of edges E. Let the map w : E → R be a weight function defined on the set of edges. We will denote an edge-weighted graph as a pair (G, w). Thus we can easily generalize an unweighted graph G = (V, E) as an edge-weighted graph (G, w) by defining the weight function as

w(i, j) := 1, if (i, j) ∈ E; 0, if (i, j) ∉ E. (2-1)

We use the symbol "b := a" to mean "the expression a defines the (new) symbol b," in the manner of King [115]. Of course, this could be conveniently extended so that a statement like "(1 − e)/2 := 7" means "define the symbol e so that (1 − e)/2 = 7 holds" [114]. We will employ the typical symbol S^c to denote the complement of the set S; further, let A \ B denote the set-difference A ∩ B^c.
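To make the weight-function convention above concrete, here is a small Python sketch (our own illustration, not part of the dissertation; the graph data is made up) that lifts an unweighted graph G = (V, E) to an edge-weighted graph (G, w) by assigning weight 1 to edges and 0 to non-edges, treating edges as unordered pairs:

```python
# Illustrative sketch: an unweighted graph G = (V, E) viewed as an
# edge-weighted graph (G, w), with w = 1 on edges and w = 0 otherwise.

def lift_to_weighted(V, E):
    """Return the weight function w for the unweighted graph G = (V, E)."""
    def w(i, j):
        # Edges are treated as unordered pairs, so (i, j) and (j, i) agree.
        return 1 if (i, j) in E or (j, i) in E else 0
    return w

V = {1, 2, 3, 4}
E = {(1, 2), (2, 3), (3, 4)}
w = lift_to_weighted(V, E)
print(w(1, 2), w(2, 1), w(1, 4))  # -> 1 1 0
```

The names `lift_to_weighted` and the example vertex set are illustrative only; any representation with the same 0/1 behavior on edge pairs matches the convention.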
Agree to let the expression x ← y mean that the value of the variable y is assigned to the variable x. To denote the cardinality of a set S, we use |S|. Finally, we will use italics for emphasis, and SMALL CAPS for formal problem names. Any other locally used terms and symbols will be defined in the sections in which they appear.

2.3 Fundamental Results

In global optimization, the objective is to determine the maximum or minimum point attained by an objective function defined over a set. In general, an optimization problem has the form

minimize or maximize f(x) subject to x ∈ S,

where S ⊆ R^n is the feasible region and f(x) is a real valued function defined on S. That is, f : S → R.

Definition 1. An optimization problem with feasible region S ⊆ R^n is said to be infeasible if S = ∅.

Throughout this dissertation, we will rely heavily on the notion of a neighborhood, which is defined next.

Definition 2. For a given optimization problem on a set S ⊆ R^n, a neighborhood is a mapping N : S → 2^S defined for each instance.

In subsequent chapters we will see that cleverly defining a neighborhood for a particular problem can greatly increase the effectiveness of heuristics. For example, if S = R^n, then the set of points that fall within some Euclidean distance provides a natural choice for the neighborhood [144]. If ||·|| represents the Euclidean norm, then a point x* ∈ S is said to be a local minimum point of f if f(x*) ≤ f(x) for all points x ∈ S such that ||x − x*|| < ε for some ε > 0. In other words, given ε > 0, define the neighborhood of x* as

N(x*) := {x : x ∈ S and ||x − x*|| < ε}. (2-2)

Then, x* is a local minimum if f(x*) ≤ f(x) for all x ∈ N(x*). A point x* is said to be a global minimum if f(x*) ≤ f(x) for all x ∈ S [107]. The global minimum point is referred to as an optimal solution. As we will see in the upcoming theorems, the existence of local minima can sometimes lead to incredible difficulties when searching for a global minimum.
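The ε-neighborhood definition of a local minimum can be tested numerically. The sketch below (ours, not the author's; the polynomial, tolerance, and sample counts are made-up illustrations) samples the neighborhood N(x*) for f(x) = x^4 − 3x^2 + x, a function with two local minima of different objective values, and checks that a local minimizer need not be the global one:

```python
import numpy as np

# Sketch: sampling N(x*) = {x : |x - x*| < eps} to test local minimality
# of f(x) = x**4 - 3*x**2 + x, which has a global minimizer near -1.30
# and a strictly worse local minimizer near 1.13.
rng = np.random.default_rng(0)

def f(x):
    return x**4 - 3*x**2 + x

def is_local_min(x_star, eps=1e-2, samples=1000):
    """Check f(x_star) <= f(x) for sampled x in the eps-neighborhood."""
    xs = x_star + eps * (2*rng.random(samples) - 1)
    return bool(np.all(f(x_star) <= f(xs) + 1e-8))  # small numerical slack

# Locate the two minimizers by a fine grid search on [-2, 2].
grid = np.linspace(-2.0, 2.0, 400001)
vals = f(grid)
x_glob = grid[np.argmin(vals)]               # global minimizer, near -1.30
right = grid > 0.5
x_loc = grid[right][np.argmin(vals[right])]  # local minimizer, near 1.13

print(is_local_min(x_glob), is_local_min(x_loc), f(x_loc) > f(x_glob))
```

Both points pass the neighborhood test, yet f(x_loc) > f(x_glob), which is exactly the difficulty alluded to above: a local search that stops at x_loc has no local evidence that a better point exists elsewhere in S.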
We will later show that in certain cases it is a very hard problem to determine if a local minimum is also a global minimum. Before we investigate the fundamental results of global optimization, we provide some basic definitions.

Definition 3. A function f : I → R is continuous at a point c ∈ I ⊆ R if, given any ε > 0, there exists δ > 0 such that if x ∈ I and |x − c| < δ, then |f(x) − f(c)| < ε.

Definition 4. A set S ⊆ R^n is said to be closed if S contains the limits of all convergent sequences of points x_i ∈ S.

Definition 5. A set S ⊆ R^n is said to be compact if S is both closed and bounded.

We know from real analysis that if S is a compact set, then every infinite sequence of points x_i ∈ S has a convergent subsequence whose limit is in S [161]. Furthermore, for a continuous function f, f(x_i) → f(x*) whenever x_i → x*. This leads us to a fundamental result by Weierstrass, which we state without proof [107].

Theorem 1. If S is a nonempty compact set in R^n, and f(x) is a continuous function defined on S, then f(x) has at least one global minimum (maximum) point in S.

We can now move on and examine some properties of local and global minima. Recall from calculus that if the function f is continuously differentiable in a neighborhood of a point x* ∈ S, and d ∈ R^n, then d^T ∇f(x*) is said to be the directional derivative of f at x* in the direction d. If we fix x* and d, then the function h(λ) := f(x* + λd), for λ ∈ R_+, describes f along the ray {x = x* + λd, λ ≥ 0}. If we evaluate the derivative of h with respect to λ at the point λ = 0, using the first order Taylor expansion of h at λ, we see that this is precisely d^T ∇f(x*). Thus, d^T ∇f(x*) < 0 implies that there exists λ* > 0 such that f(x* + λd) < f(x*) for every 0 < λ < λ*. Hence the condition d^T ∇f(x*) < 0, put simply, implies that we can decrease f locally in the direction d. Clearly, we are only interested in those directions d that do not immediately leave the feasible region S.

Definition 6.
A direction vector d ∈ R^n is said to be a feasible direction at x* if there exists λ* > 0 such that x* + λd ∈ S for every 0 < λ < λ*.

This leads us to the following theorem, which provides a necessary condition for locally optimal solutions.

Theorem 2. Suppose that the function f(x) is continuously differentiable on an open set containing S ⊆ R^n. If x* is a local minimum of f (with respect to S), then d^T ∇f(x*) ≥ 0 for every d ∈ Z(x*), where Z(x*) is the set of all feasible directions at x*.

A point x* ∈ S is called a critical point if d^T ∇f(x*) ≥ 0 for every d ∈ Z(x*). Now, we will determine in which cases critical points represent global optima. We first need to recall the definitions of convex sets and convex functions.

Definition 7. A set S ⊆ R^n is called a convex set if for every x1, x2 ∈ S and λ ∈ R, 0 ≤ λ ≤ 1, the point λx1 + (1 − λ)x2 ∈ S.

In a geometric sense, S is convex if for any two points in S, the line segment joining these two points is wholly contained in S [148].

Definition 8. Given a convex set S ⊆ R^n, the function f : S → R is said to be a convex function if for any x1, x2 ∈ S and λ ∈ R, 0 ≤ λ ≤ 1, the following condition holds:

f(λx1 + (1 − λ)x2) ≤ λf(x1) + (1 − λ)f(x2).

The function f is said to be a concave function if and only if −f is convex.

In the following theorem, we prove that for optimization problems where f is convex and S is a convex set, critical points are always globally optimal solutions.

Theorem 3. Let f : S → R be a convex function, where S ⊆ R^n is a convex set. Then every local minimum of f is also a global minimum.

Proof: Let x* be a local minimum point and assume for the sake of contradiction that there exists another point x̄ ∈ S such that f(x̄) < f(x*). From Definition 8, we know that convexity of f implies that for any λ ∈ R such that 0 < λ < 1,

f(x* + λ(x̄ − x*)) ≤ λf(x̄) + (1 − λ)f(x*) < f(x*).
Notice that we have contradicted the local optimality assumption of x*, since for a local minimum point there must exist some λ* > 0 such that for 0 < λ < λ*, f(x* + λ(x̄ − x*)) ≥ f(x*). □

In the following theorem, we provide an optimality criterion for minimizing a concave function over a convex set.

Theorem 4. Given a compact convex set S ⊆ R^n, the global minimum of a concave function f : S → R is attained at an extreme point of S.

Proof: We know that any point x in a convex set can be written as a convex combination of the extreme points x_1, ..., x_N of S. That is, x = Σ_{i=1}^N λ_i x_i such that Σ_{i=1}^N λ_i = 1, λ_i ≥ 0 ∀ i. Since f is concave, we have

f(Σ_{i=1}^N λ_i x_i) ≥ Σ_{i=1}^N λ_i f(x_i) ≥ (Σ_{i=1}^N λ_i) min{f(x_i) : i = 1, ..., N} (2-3)
= min{f(x_i) : i = 1, ..., N}. (2-4)

□

Recall that linear functions are both convex and concave. Therefore, if we are considering a linear programming problem, i.e., that of minimizing a continuous linear function over a polytope, both Theorem 3, providing optimality of local minima for convex functions, and Theorem 4, providing extreme point optimality of concave functions, apply. Thus, for this class of problems we can restrict the search for the global solution by examining only the extreme points of the polytope. In the absence of convexity, however, a global minimum point can occur at a point other than an extreme point. In this dissertation, we focus on problems of this type. In particular, the problems we will later investigate contain many locally optimal solutions which differ from the global solution. Also, until now we have focused on theorems for continuous functions. However, several problems we will encounter have discrete variables. These problems are called combinatorial optimization problems. The next section contains some basic results regarding combinatorial problems which we will later use.

2.4 Discrete Optimization

In certain applications, it is necessary to restrict the values of the decision variables of a problem to be integer valued.
Such problems are referred to as integer programming problems. Sometimes, it is convenient to include integer variables in a problem when one is attempting to model a situation that has two possible values. In this case, binary variables that take the value 0 or 1 are used. Integer programming problems present unique challenges in that the techniques and theorems for linear programming problems as described above do not necessarily apply. For example, consider the polytope in Figure 2-1. Notice that the integer points do not lie at the extreme points of the polytope.

Figure 2-1: Notice that the rounded integer solution is not optimal. (The figure shows the optimal LP solution, the optimal IP solution, and the direction of decreasing cost.)

We see then that the result from Theorem 4 does not hold. Another common misconception is that the integer optimal solution can be found by rounding the linear programming solution to the nearest integer. We see by examining the figure that this doesn't work either. Notice that the integer point nearest the linear programming optimal solution falls outside the feasible region of the polytope. Clearly, we need more advanced methods for solving such problems. We will look at a variety of exact and heuristic methods in Section 2.7. Now, we provide an introduction to computational complexity. Complexity theory lies at the heart of global optimization and provides tools for empirically determining the level of difficulty of a given problem as well as the effectiveness of an algorithm. Later we will confirm our suspicions about the difficulty of integer programs and nonconvex continuous programming problems.

2.5 Computational Complexity

In this section, we develop a means by which we can classify a problem as either "easy" or "hard". Then, for the so-called "hard" problems, we look at ways to answer the question: "how hard is hard?". An algorithm is said to be a polynomial time algorithm if its number of elementary operations, i.e.,
its running time on a computer, is in the worst case bounded above by a polynomial in the size of the input [107].

Figure 2-2: Visualization of complexity classes.

For instance, an algorithm is said to be O(|I|^p) if the polynomial which bounds the running time is of order p in the size of the input data I. An algorithm is said to be an exponential time algorithm if it is not bounded by a polynomial in the length of the input [6]. When discussing problems in OR, we split the collection of all problems into two classes, as visualized in Figure 2-2. Those problems which can be solved optimally by a polynomial time algorithm are said to belong to the class P. The other complexity class contains those problems which can be solved by a "nondeterministic" algorithm in polynomial time. This class is called NP. A problem in NP is one for which it is easy to verify the correctness of a solution, but which is very hard to solve, whereas a problem belonging to P is simply "easy" to solve. Among the problems in NP, those which are the most difficult to solve are said to be NP-complete. A problem π1 is said to be polynomially transformable to problem π2 if a polynomial time algorithm for π2 would imply a polynomial time algorithm for π1. Problems in NP-complete are special in that every problem in NP can be polynomially transformed to every other problem in NP-complete. Thus, since P ⊆ NP, it follows that if one could design a polynomial time algorithm for a single NP-complete problem, then every problem in NP could be solved with a polynomial time algorithm and thus P would equal NP [79]. However, despite the incredible amount of research and investigation, the question as to whether P = NP remains the single greatest unsolved problem in theoretical computer science [145]. In fact, the Clay Mathematics Institute has named this problem as one of seven prize millennium problems and is offering $1 million to anyone who presents an answer to the question, "does P = NP?" [109].

Table 2-1: Growth rates of several polynomial and nonpolynomial functions.

n        n       n^2     n^4     2^n              n!
10       10^1    10^2    10^4    ~10^3            3.6 x 10^6
100      10^2    10^4    10^8    1.27 x 10^30     9.33 x 10^157
1000     10^3    10^6    10^12   1.07 x 10^301    4.02 x 10^2,567
10,000   10^4    10^8    10^16   0.99 x 10^3,010  2.85 x 10^35,659

Definition 9. An optimization problem π is said to be NP-hard if there exists an NP-complete problem which is polynomially transformable to π.

Throughout this dissertation, we are going to focus on problems that are NP-hard and NP-complete. The next reasonable question one asks of NP problems is what the implication is for solving them. That is, how does being in NP really complicate the computational tractability of a problem? Table 2-1 provides several examples of the growth rates of some polynomial and exponential functions [6]. Notice how quickly the exponential algorithms grow. This is one reason why polynomial time algorithms are preferred over exponential time algorithms. Most discrete optimization problems turn out to be either NP-hard or NP-complete, even if they are linear [149].

2.6 Upper and Lower Bounds

When attempting to solve integer programs (IPs), we are faced with the problem of how to prove that a given point is an optimal solution [173]. This problem arises since local optimality does not imply global optimality for IPs. Oftentimes, being able to derive upper and lower bounds on the optimal solution is helpful to identify good approximate solutions and narrow the search for the optimal solution. This topic will be studied extensively in Chapter 5. Now we introduce some basic properties of bounds for integer programming problems.
Consider the IP given below and assume that the point x* is an optimal solution:

    minimize cx (2-5)
    subject to x ∈ S, (2-6)
    x ∈ Z^n. (2-7)

Then in order to solve this IP we need to determine a lower bound x̲ such that c(x̲) ≤ c(x*) and an upper bound x̄ such that c(x̄) ≥ c(x*), where ideally

    c(x̲) = c(x*) = c(x̄). (2-8)

In order to find these bounds in practice, we need an algorithm that can compute a decreasing sequence of upper bounds

    c(x̄_1) > c(x̄_2) > ... > c(x̄_s) ≥ c(x*), (2-9)

and a corresponding increasing sequence of lower bounds

    c(x̲_1) < c(x̲_2) < ... < c(x̲_t) ≤ c(x*), (2-10)

which stops when

    c(x̄_s) − c(x̲_t) < ε, (2-11)

for some ε > 0 [173]. We use the idea of a relaxation in order to find these bounds. A relaxation typically enlarges the set of feasible solutions, but is easier to solve than the original problem.

Definition 10. A problem (RP) z^R = max{f(x) : x ∈ T ⊆ R^n} is said to be a relaxation of (IP) z = max{c(x) : x ∈ S ⊆ R^n} if (i) S ⊆ T, and (ii) f(x) ≥ c(x) for all x ∈ S.

Using this definition, the following lemma holds.

Lemma 1. If (RP) is a relaxation of (IP), then z^R ≥ z.

Proof: Let x* ∈ S be an optimal solution for (IP). Then we have x* ∈ S ⊆ T, which implies c(x*) ≤ f(x*). Furthermore, since x* ∈ T, f(x*) is a lower bound on z^R. That is, z = c(x*) ≤ f(x*) ≤ z^R, and we have the lemma. □

The problem of formulating useful relaxations is an important problem in its own right which has been studied since the founding of OR [53]. Among the most common relaxations are the linear programming relaxation and the Lagrangian relaxation. We now provide a brief introduction to these.

Definition 11. Given an integer program (IP) z := max{cx : x ∈ S ∩ Z^n ⊆ R^n}, the linear programming relaxation of (IP) is the linear program (LPR) z^LP := max{cx : x ∈ S ⊆ R^n}.

Proposition 1. The linear programming problem (LPR) is a relaxation of (IP).

Proof: The proof is trivial, as S ∩ Z^n ⊆ S and the objective function of (LPR) remains the same as in (IP). Thus, by Lemma 1, we have the result.
□ We see then that all linear programming relaxations provide bounds on the original integer program. Further, the following lemma shows that relaxations can be helpful for identifying cases in which the original integer program is infeasible [173].

Lemma 2. Given an integer program (IP) z^IP := max{cx : x ∈ S ∩ Z^n ⊆ R^n} and its corresponding LP relaxation (LPR) z^LP := max{cx : x ∈ S ⊆ R^n}, the following statements hold.
(i) If (LPR) is infeasible, then (IP) is also infeasible.
(ii) If x* is an optimal solution to (LPR) such that x* ∈ S ∩ Z^n and f(x*) = c(x*), then x* is an optimal solution for (IP).

Proof: (i) This follows from the fact that S ∩ Z^n ⊆ S. Since (LPR) is infeasible, we have that S = ∅, thus implying S ∩ Z^n = ∅. (ii) Since (LPR) is a relaxation, then by Definition 10, x* ∈ S ∩ Z^n implies z^IP ≥ c(x*) = f(x*) = z^LP. However, by definition z^IP ≤ z^LP, implying c(x*) = z^IP = z^LP. □

Another common relaxation used to tackle hard integer programs is the Lagrangian relaxation. This method was first introduced by Held and Karp in [96, 97] in a formulation for the TRAVELING SALESMAN PROBLEM [54]. Lagrangian relaxation relaxes the constraints by adding them to the objective function with an associated penalty. Consider the following optimization problem,

    (P) z* = max cx (2-12)
    s.t. Ax ≤ b, (2-13)
    x ∈ X. (2-14)

Then, for a vector μ ≥ 0 of multipliers, the corresponding Lagrangian relaxation is formulated as follows:

    (P(μ)) L(μ) = max cx + μ(b − Ax) (2-15)
    s.t. x ∈ X. (2-16)

With this we can formulate the following lemma regarding Lagrangian relaxation.

Lemma 3. (P(μ)) is a relaxation of (P).

Proof: In order for (P(μ)) to be a relaxation of (P), as defined by Definition 10, we must show the following two conditions: (i) the feasible region of the original problem is a subset of that of the relaxed problem, and (ii) for all vectors μ ≥ 0, the relaxed objective dominates the original on the feasible region of (P). Condition (i) follows trivially. To show condition (ii), let x* ∈ X be an optimal solution for (P). Then x* is clearly feasible for (P(μ)).
Further, since x* is feasible for (P), b − Ax* ≥ 0. Therefore cx* ≤ cx* + μ(b − Ax*) for all real vectors μ ≥ 0. □

From this lemma, we see that the optimal value L(μ) of (P(μ)) is an upper bound on the optimal value of (P). The problem of finding the best, or tightest, bound, say L*, is known as the LAGRANGIAN MULTIPLIER PROBLEM [124] and is given as L* := min{L(μ) : μ ≥ 0}. With this, the following lemma, which is stated without proof, holds.

Lemma 4. For all real vectors μ ≥ 0 and all x feasible for (P), cx ≤ z* ≤ L* ≤ L(μ). Furthermore, if L(μ) = L* = z* = cx*, then x* is optimal for the original problem (P), and μ is optimal for the LAGRANGIAN MULTIPLIER PROBLEM.

Linear programming and Lagrangian relaxations are helpful for showing when a solution is close to (and in some cases equals) the optimal solution. They are also useful in branch and bound algorithms, which we will introduce in the following subsection.

2.7 Algorithms for Optimization Problems

Consider a discrete optimization problem that has n binary decision variables. Since there are a finite number of integer feasible solutions, in theory one could enumerate all possible solutions. However, to do this would require 2^n function evaluations [31]. This is impractical: if n > 1000, then with presently available computers the computation time required for this enumeration would be millions of years. Clearly we need more efficient algorithms to solve these problems. Algorithms for optimization problems are broken up into two categories: exact methods and heuristics. Exact methods guarantee that the termination of the algorithm will result in the optimal solution, provided one exists. Heuristics, on the other hand, have no guarantee of optimality but usually find high quality solutions much faster than exact methods. In fact, most nontrivial instances of problems in NP cannot be solved by exact algorithms. Thus, we need efficient heuristics to find near optimal solutions to real-world instances.
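Before moving to algorithms, the bounds of Lemma 4 can be checked numerically. The sketch below applies the Lagrangian relaxation above to a tiny 0-1 knapsack instance; the instance data and the function name `lagrangian_bound` are invented for illustration, and the exact optimum is obtained by brute force since the instance is so small.

```python
from itertools import product

# Tiny 0-1 knapsack: max c.x subject to a.x <= b, x binary (invented data).
c, a, b = [10, 7, 4], [6, 5, 3], 9

# Exact optimum z* by brute-force enumeration (2^3 = 8 candidate vectors).
z_star = max(sum(ci * xi for ci, xi in zip(c, x))
             for x in product((0, 1), repeat=3)
             if sum(ai * xi for ai, xi in zip(a, x)) <= b)

def lagrangian_bound(mu):
    # L(mu) = max_{x in {0,1}^n} c.x + mu*(b - a.x); the maximand is
    # separable, so set x_i = 1 exactly when c_i - mu*a_i > 0.
    return mu * b + sum(max(0.0, ci - mu * ai) for ci, ai in zip(c, a))

# Lemma 4: every mu >= 0 gives an upper bound L(mu) >= z*.
for mu in (0.0, 0.5, 1.0, 1.5, 2.0):
    assert lagrangian_bound(mu) >= z_star
print(z_star, lagrangian_bound(1.0))  # -> 14 16.0
```

Minimizing `lagrangian_bound` over a grid of multipliers approximates L*, the tightest Lagrangian bound for this instance.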
In the following subsections, we provide an overview of several exact and heuristic methods that we will use for solving the problems appearing in later chapters.

2.7.1 Exact Methods

Linear Programming Techniques. We begin our discussion of exact algorithms with the simplex method for linear programming [52]. Consider an instance of a linear programming problem

    minimize cx (2-17)
    subject to Ax = b, (2-18)
    x ≥ 0, (2-19)

where A is an m x n matrix with rank m, c is an n-vector, and b is an m-vector. The simplex method is an algorithm which moves along the extreme points of the polytope defined by Ax = b, x ≥ 0, in search of the optimal solution. The extreme points are visited in such a way that the objective function value at a new point is at least as good as at the previous one. Since we showed in the previous theorems that the optimal solution to a linear program is attained at an extreme point of the polytope, the simplex method is guaranteed to find the optimal solution. Notice, however, that this algorithm is not polynomial. The polytope in question can have as many as C(n, m) extreme points, and it is possible to construct examples for which the simplex method must enumerate all C(n, m) of them. However, despite the theoretical exponential worst-case complexity of the simplex method, it is very efficient in practice and is easy to implement. This is not to say that linear programs are NP-hard. In fact, all linear programs are in P. The class of algorithms known as interior point methods are able to solve linear programming problems in polynomial time. The first efficient interior point algorithm was proposed in 1984 by Karmarkar [111]. The first implementation of Karmarkar's algorithm was reported in 1991 by Adler, Karmarkar, Resende, and Veiga in [4] and [5]. Excellent reference texts on linear programming include the work of Chvátal [34] and Bazaraa et al. [15].

Integer Programming Algorithms.
Branch and bound (B&B) algorithms are the most common class of algorithms used for solving discrete optimization problems [61]. B&B methods are implicit enumeration techniques based on the idea of divide and conquer [90]. In a B&B algorithm, the set of feasible solutions is decomposed into smaller and smaller sets until the optimal solution is eventually reached. Consider the integer programming problem z^IP := max{cx : x ∈ S ∩ Z^n ⊆ R^n}. To apply the B&B algorithm, the linear programming relaxation z^LP := max{cx : x ∈ S} is solved, generally resulting in a nonintegral solution. This solution is taken as the initial upper bound on the optimal solution. Suppose that in the LP relaxation, some variable takes a fractional value x̄_i ∉ Z. Then one way to branch is to divide the feasible region S into two subdomains, namely

    S_1 := S ∩ {x : x_i ≤ ⌊x̄_i⌋}, (2-20)
    S_2 := S ∩ {x : x_i ≥ ⌈x̄_i⌉}. (2-21)

Notice that S_1 ∪ S_2 contains every integer feasible point of S and that S_1 ∩ S_2 = ∅. The two linear programs z_1^LP := max{cx : x ∈ S_1} and z_2^LP := max{cx : x ∈ S_2} are now solved, and the larger of their objective values is taken as the new upper bound. In essence, a search tree is formed by repeating the decomposition/bounding process on each subproblem. However, owing to an established lower bound given by the best integer feasible solution found so far, many of the resulting subproblems are pruned from the search tree and not considered. Thus, an optimal solution is constructed iteratively, one element at a time [151]. The process is repeated on variables which are nonintegral until eventually the integer optimal solution z^IP is reached [173]. Though branch and bound methods are the most commonly used algorithms for discrete optimization problems, they are not the only techniques available. The branch and cut method is a hybrid of branch and bound which falls into the class of so-called cutting plane techniques [54, 107, 173].
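The bound-and-prune idea can be sketched in a few lines. The example below is not the variable-dichotomy rule of (2-20)-(2-21) but the same scheme specialized to a 0-1 knapsack: branch on whether each item is taken, and prune any node whose greedy fractional (LP relaxation) bound cannot beat the incumbent. The instance data and the name `knapsack_bb` are invented for illustration.

```python
def knapsack_bb(values, weights, capacity):
    """Branch and bound for a 0-1 knapsack; items are assumed sorted by
    value/weight ratio so the greedy fractional fill is the LP bound."""
    best = 0

    def bound(i, cap, value):
        # LP-relaxation bound: fill remaining capacity greedily, taking
        # a fraction of the first item that does not fit entirely.
        ub = value
        for j in range(i, len(values)):
            if weights[j] <= cap:
                cap -= weights[j]
                ub += values[j]
            else:
                ub += values[j] * cap / weights[j]
                break
        return ub

    def branch(i, cap, value):
        nonlocal best
        if i == len(values):
            best = max(best, value)
            return
        if bound(i, cap, value) <= best:
            return                          # prune: cannot beat incumbent
        if weights[i] <= cap:               # branch 1: take item i
            branch(i + 1, cap - weights[i], value + values[i])
        branch(i + 1, cap, value)           # branch 2: skip item i

    branch(0, capacity, 0)
    return best

# Invented instance: the optimum takes the last two items (value 220).
print(knapsack_bb([60, 100, 120], [10, 20, 30], 50))  # -> 220
```

On this instance the node that skips both of the first two items is pruned by the bound, so the full tree is never enumerated.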
Introduced by Gomory in [88], cutting plane methods solve IPs by introducing constraints which cut off the noninteger solution found by solving the LP relaxation without removing any feasible integer solutions. Finally, column generation, or so-called branch and price algorithms, are effective decomposition methods and are commonly used for solving large-scale integer programming problems [55, 57, 58]. The design of software packages which efficiently execute optimal integer programming algorithms is a global enterprise. Today the most efficient and widely used commercial IP solvers are CPLEX® by ILOG, Inc. [50] and Xpress-MP® by Dash Optimization Inc. [108]. In later chapters, we will apply branch and bound techniques to several problems in order to find the optimal solutions and verify the effectiveness of heuristics.

2.7.2 Heuristics

Despite the guarantee of eventually reaching the optimal solution, B&B methods are inefficient on large problems. Therefore, we must look for efficient ways of producing high quality solutions. Heuristics, or suboptimal algorithms, provide this outlet. The term heuristic is derived from the Greek word heuriskein (εὑρίσκειν), meaning "to find or discover". Heuristics are approximation algorithms and are the only alternative for finding "good" feasible solutions when problems are too difficult for branch and bound methods. The study of heuristics is vast, and has led to the creation of algorithmic methods which are capable of producing excellent solutions in seconds for problems on which a B&B or other optimal algorithm would require years. In the following paragraphs, we provide a brief introduction to several heuristics which we will analyze in later chapters of this dissertation. We begin with the simplest type of heuristic, known as the greedy algorithm.

Greedy Heuristics and Local Search.
A greedy algorithm is a local search metaheuristic which gets its name from the myopic way in which it creates candidate solutions [123]. At each step, the greedy method makes whatever choice seems best at that particular moment in time. Once a decision is made, it is permanent and cannot later be changed. Therefore, one must ensure that a candidate element is feasible before adding it to the incumbent solution [105]. An example of a greedy algorithm is as follows. Suppose a cashier owes a customer $0.42. The cashier can use the greedy method to determine the minimum number of coins required for this transaction. Pseudocode for this method is provided in Figure 2-3. The algorithm takes as input n, the amount of change due, in this case n = $0.42. To begin with, one quarter is selected, bringing the balance to $0.17. Next, one dime is chosen and the remainder is $0.07. By selecting one nickel and two pennies, the problem is solved. We see that greed is manifested in this example as the algorithm selects the highest valued coins first. For this problem, the greedy algorithm computes the optimal solution from the 31 unique combinations of coins which add up to $0.42.

procedure GreedyChangeMaker(n)
1   C ← {1, 5, 10, 25}
2   Change ← ∅
3   sum ← 0                      /* sum of coins in Change */
4   while sum ≠ n do
5       x ← max{c ∈ C : sum + c ≤ n}
6       if no such x exists then
7           return NO SOLUTION
8       else
9           Change ← Change ∪ {x}
10          sum ← sum + x
11      end if
12  end while
13  return Change
end procedure GreedyChangeMaker

Figure 2-3: Pseudocode for a greedy algorithm which makes change using the minimum number of coins.

Other problems for which the greedy method finds the optimal solution include the MINIMUM SPANNING TREE problem, where Kruskal's algorithm [122] finds a minimum weight spanning tree of a given graph [6]. Despite the performance of the greedy algorithm on the above example, greedy methods almost always fall short of the optimal solution when applied to NP-complete problems.
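The change-making procedure of Figure 2-3 translates directly into code. The sketch below (the name `greedy_change` is invented) takes the amount in cents and the denominations in decreasing order; as a usage note, the same function shows that greed is not optimal for arbitrary coin sets, e.g. denominations (4, 3, 1) on n = 6 yield three coins where two suffice.

```python
def greedy_change(n, coins=(25, 10, 5, 1)):
    """Make change for n cents greedily: repeatedly take the largest
    coin that still fits. 'coins' must be sorted in decreasing order.
    Optimal for US denominations, though not for arbitrary coin sets."""
    change = []
    remaining = n
    for c in coins:
        while remaining >= c:       # take coin c as many times as it fits
            change.append(c)
            remaining -= c
    if remaining != 0:              # no combination of coins sums to n
        return None
    return change

print(greedy_change(42))  # -> [25, 10, 5, 1, 1]
```

For n = 42 the result uses five coins (one quarter, one dime, one nickel, two pennies), matching the walk-through above; with `coins=(4, 3, 1)` and n = 6 it returns `[4, 1, 1]` even though `[3, 3]` is smaller.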
This is because greedy methods select a local optimum from the neighborhood of the current solution at each step with the hope that in the end, the global optimum is found. However, as we learned earlier in the chapter, this is not necessarily the case. Other local search heuristics involve simple examinations of neighborhoods in the quest for a "good" solution [51]. The method moves from one solution to the next in the feasible region until the current solution cannot be improved by selecting an alternate solution in its neighborhood. The specific neighborhood structure depends upon the problem at hand, and as mentioned earlier, a clever choice of neighborhood can greatly improve the efficacy of the heuristic. Popular local search methods include the 2-exchange (or 2-opt) method [173], hill climbing procedures, the method of conjugate gradients [91, 92], and steepest ascent/descent methods. We will see several examples of local search algorithms in the later chapters. For detailed implementation specifications, one should consult a textbook on local search, such as the work of [2]. For an annotated bibliography of local search, the reader is also referred to [3].

procedure GRASP(MaxIter, RandomSeed)
1   f* ← −∞
2   X* ← ∅
3   for i = 1 to MaxIter do
4       X ← ConstructSolution(G, g, X, α)
5       X ← LocalSearch(X, N(X))
6       if f(X) > f* then
7           X* ← X
8           f* ← f(X)
9       end if
10  end for
11  return X*
end procedure GRASP

Figure 2-4: GRASP for maximization.

Greedy Randomized Adaptive Search Procedure (GRASP). GRASP [69] is a multistart metaheuristic that has been used with great success to provide solutions for several difficult combinatorial optimization problems [72], including SATISFIABILITY [154], JOB SHOP SCHEDULING [7], VEHICLE ROUTING [32], and QUADRATIC ASSIGNMENT [128, 140]. For an annotated bibliography of GRASP, the reader should reference the paper by Festa and Resende [72].
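A minimal executable rendering of the loop in Figure 2-4 is sketched below, with the construction and local search phases passed in as functions. The toy instance (choose k item indices of maximum total weight), the weights, and all function names are invented for illustration.

```python
import random

def grasp(construct, local_search, f, max_iter=20, seed=3):
    """GRASP skeleton (cf. Figure 2-4): randomized greedy construction
    followed by local search, keeping the best solution over all starts."""
    rng = random.Random(seed)
    best, f_best = None, float("-inf")
    for _ in range(max_iter):
        x = local_search(construct(rng))
        if f(x) > f_best:
            best, f_best = x, f(x)
    return best, f_best

# Toy instance: choose k item indices maximizing total weight (invented).
weights, k, alpha = [7, 2, 9, 4, 8, 1, 5], 3, 0.5
f = lambda sol: sum(weights[i] for i in sol)

def construct(rng):
    # Construction phase: pick uniformly at random from the restricted
    # candidate list (RCL) of the best-ranked remaining components.
    sol = set()
    while len(sol) < k:
        cands = sorted((i for i in range(len(weights)) if i not in sol),
                       key=lambda i: -weights[i])
        rcl = cands[:max(1, int(alpha * len(cands)))]
        sol.add(rng.choice(rcl))
    return sol

def local_search(sol):
    # Hill climbing: swap a chosen item for any heavier unchosen one.
    improved = True
    while improved:
        improved = False
        for i in sorted(sol):
            for j in range(len(weights)):
                if j not in sol and weights[j] > weights[i]:
                    sol.remove(i)
                    sol.add(j)
                    improved = True
                    break
            if improved:
                break
    return sol

best, val = grasp(construct, local_search, f)
print(sorted(best), val)  # -> [0, 2, 4] 24
```

On this toy instance the local search alone already reaches the optimum (the three heaviest items) from any start, so the multistart loop serves only to illustrate the control flow.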
GRASP is a two-phase procedure which generates solutions through the controlled use of random sampling, greedy selection, and local search. For a given problem Π, let F be the set of feasible solutions for Π. Each solution X ∈ F is composed of k discrete components a_1, ..., a_k. GRASP constructs a sequence {X_i} of solutions for Π, such that each X_i ∈ F. The algorithm returns the best solution found after all iterations. The GRASP procedure can be described as in the algorithm presented in Figure 2-4. The construction phase receives as parameters an instance of the problem G, a ranking function g : A(X) → R (where A(X) is the domain of feasible components a_1, ..., a_k for a partial solution X), and a parameter 0 < α < 1. The construction phase begins with an empty partial solution X. Assuming that |A(X)| = k, the algorithm creates a list of the best ranked αk components in A(X), and returns a uniformly chosen element x from this list. The current partial solution is augmented to include x, and the procedure is repeated until the solution is feasible, i.e., until X ∈ F. The intensification phase consists of the implementation of a hill-climbing procedure. Given a solution X ∈ F, let N(X) be the set of solutions that can be found from X by changing one of the components a ∈ X. Recall that N(X) is called the neighborhood of X. The improvement algorithm consists of finding, at each step, the element X* := argmax{f(X') : X' ∈ N(X)}, where f : F → R is the objective function of the problem. At the end of each step we set X ← X* if f(X*) > f(X). The algorithm eventually reaches a local optimum, in which case the solution X* is such that f(X*) ≥ f(X') for all X' ∈ N(X*). X* is then returned as the best solution from the iteration, and the best solution over all iterations is returned as the overall GRASP solution.

Simulated Annealing. In statistical mechanics, the physical process of annealing is used to relax a system to the state of minimal energy.
This is done by heating the solid until it melts and then cooling it slowly so that at each temperature the particles randomly arrange themselves until reaching thermal equilibrium. In [116], Kirkpatrick et al. introduced a method for combinatorial problems known as simulated annealing. Based on the theory of the physical process, simulated annealing was shown to asymptotically converge to the global optimum after performing a number of so-called transitions at decreasing temperatures. Pseudocode for a generic simulated annealing algorithm is presented in Figure 2-5. The algorithm takes as input the initial temperature T and a reduction factor r ∈ (0, 1). Simulated annealing essentially chooses a neighbor at random to replace the incumbent solution. If the chosen neighbor is a better solution, then it is accepted with probability 1. However, in order to escape and evade local optima, if the chosen neighbor is worse than the incumbent, then it is accepted with some positive probability which is a decreasing function of the temperature [1].

procedure SimulatedAnnealing(T, r)
1   f* ← −∞
2   X* ← ∅
3   X ← RandomSolution()
4   while T ≠ 0 do
5       for i = 1 to MaxIter do
6           X' ← RandomNeighbor(X)
7           if f(X') > f(X) then
8               X ← X'
9           else
10              X ← X' with probability e^((f(X')−f(X))/T)
11          end if
12          if f(X) > f* then
13              X* ← X
14              f* ← f(X)
15          end if
16      end for
17      T ← rT
18  end while
19  return X*
end procedure SimulatedAnnealing

Figure 2-5: Generic simulated annealing maximization algorithm.

Thus the cooling schedule, or the method by which the temperature decreases, is an important part of the heuristic. It has been shown that a logarithmically slow cooling schedule guarantees that the algorithm will converge to the global optimum in exponential time [24]. Therefore, in practice, faster cooling schedules are often used. Another method closely resembling simulated annealing is the method of mean field annealing.
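The loop of Figure 2-5 can be sketched as follows on a toy "one-max" objective (maximize the number of ones in a bit string); the geometric cooling schedule, the parameter values, and all names are invented for illustration.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, T=1.0, r=0.95, max_iter=50, seed=1):
    """Generic simulated annealing for maximization (cf. Figure 2-5):
    accept improving moves always, worsening moves with prob. e^(delta/T)."""
    rng = random.Random(seed)
    x = x0
    best, f_best = x0, f(x0)
    while T > 1e-3:
        for _ in range(max_iter):
            y = neighbor(x, rng)
            delta = f(y) - f(x)
            if delta > 0 or rng.random() < math.exp(delta / T):
                x = y
            if f(x) > f_best:
                best, f_best = x, f(x)
        T *= r                      # geometric cooling schedule
    return best, f_best

def flip_one_bit(x, rng):
    # Neighborhood: flip a single randomly chosen bit of the tuple x.
    j = rng.randrange(len(x))
    return x[:j] + (1 - x[j],) + x[j + 1:]

best, val = simulated_annealing(sum, (0,) * 12, flip_one_bit)
print(val)
```

At high temperature the walk accepts many worsening flips; as T shrinks the acceptance probability e^(delta/T) for worse moves vanishes and the procedure degenerates into hill climbing, which on this toy objective drives the string toward all ones.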
Mean field annealing (MFA) is a heuristic which mimics the idea of mean field approximation from statistical physics [150]. In MFA, the stochastic process in simulated annealing is replaced by a set of deterministic equations. Though MFA does not guarantee convergence to a global optimal solution, it can provide an excellent approximation to an optimal solution and is much less expensive computationally.

Genetic Algorithms.

procedure GeneticAlgorithm
1   Generate population P_1
2   Evaluate population P_1
3   k ← 1
4   while terminating condition not met do
5       Select individuals from P_k and copy to P_(k+1)
6       Crossover individuals from P_k and put in P_(k+1)
7       Mutate individuals from P_k and put in P_(k+1)
8       Evaluate population P_(k+1)
9       k ← k + 1
10  end while
11  return best individual in P_k
end procedure GeneticAlgorithm

Figure 2-6: Pseudocode for a generic genetic algorithm.

Genetic algorithms receive their name from an explanation of the way they behave. It comes as no surprise that they are based on Darwin's theory of natural selection [56]. Genetic algorithms store a set of solutions, or a population, and the population evolves by replacing these solutions with better ones based on a fitness criterion, represented by the objective function value. In successive iterations, or generations, the population evolves by reproduction, crossover, and mutation. Reproduction is the probabilistic selection of the next generation's elements, determined by their fitness level. Crossover is the combination of two current solutions, called parents, which produces one or more other solutions, referred to as offspring. Finally, mutation is the random modification of the offspring. Mutation is performed as an escape mechanism to avoid getting trapped at a local optimum [86]. In successive generations, only those solutions having the best fitness are carried to the next generation in a process which mimics the fundamental principle of natural selection, survival of the fittest [56].
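The select-crossover-mutate cycle described above can be sketched as follows. The toy fitness (number of ones in a bit string), the tournament selection rule, and all parameter values and names are invented for illustration; they are one of many reasonable instantiations of the generic scheme.

```python
import random

def genetic_algorithm(f, n_bits=16, pop_size=30, generations=60,
                      p_mut=0.05, seed=7):
    """Generic GA (cf. Figure 2-6): tournament selection, one-point
    crossover, and bit-flip mutation, maximizing f over bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)]
           for _ in range(pop_size)]

    def select():
        # Reproduction: two-way tournament, fitter individual survives.
        a, b = rng.choice(pop), rng.choice(pop)
        return a if f(a) >= f(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = select(), select()
            cut = rng.randrange(1, n_bits)          # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [b ^ (rng.random() < p_mut)     # bit-flip mutation
                     for b in child]
            nxt.append(child)
        pop = nxt                                    # next generation
    return max(pop, key=f)

best = genetic_algorithm(sum)   # toy fitness: number of ones
print(sum(best))
```

With these settings the selection pressure quickly concentrates the population near the all-ones string; in a real application the fitness function would be the objective of the optimization problem at hand.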
Figure 2-6 provides pseudocode for a standard genetic algorithm. Genetic algorithms were introduced in 1977 by Holland [102], and were greatly invigorated by the work of Goldberg in [86].

2.8 Concluding Remarks

In this chapter we have provided a brief history of and introduction to global optimization. The chapter is not intended to be all-inclusive; instead, the purpose of its inclusion is as follows. First, we have provided the fundamental results and underlying theory that we will use throughout this dissertation. This includes the theory of computational complexity and a short overview of the most common solution techniques we will encounter and apply to several problems as we progress. Secondly, we have provided several definitions, lemmata, and theorems that we will reference in the chapters to come. The intent is to have a concise location to which the reader can refer. Also, presenting the major theorems here will prevent redundancy, as we will not restate the theorems in each chapter in which they are applied. We will now move on and begin the examination of several combinatorial problems that occur in military telecommunication networks. We conclude this chapter with a list of references on theory, algorithms, and applications of global and combinatorial optimization. Excellent references on global and combinatorial optimization include the work of Du and Pardalos [63, 64, 65], Floudas and Pardalos [74, 76], Horst and Pardalos [106], Horst, Pardalos, and Thoai [107], Pardalos [146], Pardalos and Resende [147], Pardalos and Rosen [148], and Wolsey [173], to name a few. Perhaps the most inclusive one-stop reference is the monumental work of Floudas and Pardalos in the six volume Encyclopedia of Optimization [75]. The list of algorithms is also not intended to be exhaustive. Other exact algorithms include dynamic programming [16] and outer approximation methods [107].
Other effective heuristics include tabu search [81, 82, 83], scatter search [80], hybrid heuristics which combine elements of several methods [22, 71, 159], and algorithms designed for a specific problem which exploit its combinatorial structure [25, 30, 43, 139]. Other algorithmic reference books include Ahuja et al. [6], Floudas and Pardalos [73], Goldberg [86], Minieka [130], Osman [142], and Osman et al. [143].

CHAPTER 3
JAMMING COMMUNICATION NETWORKS VIA CRITICAL NODE DETECTION

3.1 Introduction

In this chapter, we study two variants of the CRITICAL NODE PROBLEM. In general, the objective of the CRITICAL NODE PROBLEM (CNP) is to find a set of k nodes in a graph whose deletion results in the maximum network fragmentation. By this we mean, maximize the number of components in the k-vertex-deleted subgraph. Studies carried out in this line include those by Bavelas [14] and Freeman [78], which emphasize node centrality and prestige, both of which are usually functions of a node's degree. However, they lacked applications to problems which emphasize network fragmentation and connectivity. We can apply the CNP to the problem of jamming wired telecommunication networks by identifying the critical nodes and suppressing the communication on these nodes. This will result in the maximum number of disconnected components which are unable to communicate with each other. The CNP can also be applied to the study of covert terrorist networks, where a certain number of individuals have to be identified whose deletion would result in the maximum breakdown of communication between individuals in the network [118]. Likewise, in order to stop the spreading of a virus over a telecommunication network, one can identify the critical nodes of the graph and take them offline. The CNP also finds applications in network immunization [36, 176], where mass vaccination is an expensive process and only a specific number of people, modeled as nodes of a graph, can be vaccinated.
The immunized nodes cannot propagate the virus, and the goal is to identify the individuals to be vaccinated in order to reduce the overall transmissibility of the virus. There are several vaccination strategies in the literature [36, 176] offering control of epidemic outbreaks; however, none of the proposed strategies is optimal. The vaccination strategies suggested emphasize the centrality of nodes as a major factor rather than critical nodes whose deletion will maximize the disconnectivity of the graph. Deletion of central nodes may not guarantee a fragmentation of the network or even disconnectivity, in which case disease transmission cannot be prevented. Of course, owing to its dynamic nature, the relationships between people, represented by edges in the social network, are transient; there is a constant rewiring between nodes, and alternate relationships could be established in the future. The proposed critical node technique helps achieve a maximum prevention of disease transmission over an instance of the dynamic network. Before proceeding, we mention one final area in which the CRITICAL NODE PROBLEM finds several applications, namely the field of transportation engineering [66]. Two particular examples are as follows. In general, for transportation networks, it is important to identify critical nodes in order to ensure they operate reliably for transporting people and goods throughout the network. Further, in planning for emergency evacuations, identifying the critical nodes of the transportation network is crucial. The reason is twofold. First, knowledge of the critical nodes will help in planning the allocation of resources during the evacuation. Secondly, in the aftermath of a disaster they will help in reestablishing critical traffic routes. Borgatti [21] has studied a similar problem, focusing on node detection resulting in maximum network disconnectivity.
Other studies in the area of node detection, such as those on centrality [14, 78], focus on the prominence of and reachability to and from the central nodes. However, little emphasis is placed on the importance of their role in the network connectivity and diameter. Perhaps one reason for this is that all of the aforementioned references relied on simulation to conduct their studies. Although the simulations have been successful, a mathematical formulation is essential for providing insight and helping to reveal some of the fundamental properties of the problem [138]. In the next section, we present a mathematical model based on integer linear programming which provides optimal solutions for the CRITICAL NODE PROBLEM. We organize this chapter by first formally defining the problem and discussing its computational complexity. Next, we provide an integer programming (IP) formulation for the corresponding optimization problem. In Section 3.3 we introduce a heuristic to quickly provide solutions to large-scale instances of the problem. We present a computational study in Section 3.4, in which we compare the performance of the heuristic against the optimal solutions, which were determined using a commercial software package. Some concluding remarks are given in Section 3.5.

3.2 Problem Formulations

Denote a graph G = (V, E) as a pair consisting of a set of vertices V and a set of edges E. All graphs in this chapter are assumed to be undirected and unweighted. For a subset W ⊆ V, let G(W) denote the subgraph induced by W on G. A set of vertices I ⊆ V is called an independent or stable set if for every i, j ∈ I, (i, j) ∉ E. That is, the graph G(I) induced by I is edgeless. An independent set is maximal if it is not a subset of any larger independent set (i.e., it is maximal by inclusion), and maximum if there are no larger independent sets in the graph.
3.2.1 Critical Node Problem

The formal definition of the problem is given by:

CRITICAL NODE PROBLEM (CNP)
INPUT: An undirected graph G = (V, E) and an integer k.
OUTPUT: A* = arg min { Σ_{i,j ∈ V\A} u_ij(G(V \ A)) : |A| ≤ k }, where

    u_ij = 1, if i and j are in the same component of G(V \ A),
    u_ij = 0, otherwise.

The objective is to find a subset A ⊆ V of nodes with |A| ≤ k whose deletion results in the minimum value of Σ u_ij in the induced subgraph G(V \ A). This objective function results in a minimum cohesion in the network, while also ensuring a minimum difference in the sizes of the components. An illustration is best suited to explain the choice of objective function. Consider an arbitrary unweighted graph with 150 nodes. According to our objective, a partition into 3 components of 50 nodes each is preferable to a partition into 5 components in which one component has 146 nodes and the remaining four each consist of a single node. This problem is similar to MINIMUM k-VERTEX SHARING [133], where the objective is to minimize the number of nodes deleted to achieve a k-way partition. Here we are considering the complementary problem, where we know the number of vertices to be deleted and we try to maximize the number of components formed while implicitly limiting the sizes of the components. Borgatti [21] has given a comprehensive illustration to facilitate the understanding of the objective function and its nontriviality. We now prove that the recognition version of the CNP is NP-complete. Consider the following decision problem for the CNP:

K-CRITICAL NODE PROBLEM (K-CNP)
INPUT: An undirected graph G = (V, E) and an integer k.
QUESTION: Does there exist a zero cost K-way partition of G obtained by deleting k nodes or fewer?

Theorem 5. The K-CRITICAL NODE PROBLEM is NP-complete.

Proof: To show this, we must prove that (1) K-CNP ∈ NP; (2) some NP-complete problem reduces to K-CNP in polynomial time.
(1) K-CNP ∈ NP since, given any graph G = (V, E) and a candidate set of at most k nodes, we can verify the solution in polynomial time. More specifically, after deleting the given set of at most k nodes, we determine whether there is a zero-cost K-way partition of G in O(|E| + |V|) time using a depth-first search [6]. (2) To complete the proof, we show a reduction from the K-INDEPENDENT SET PROBLEM (K-ISP) [24], which is well known to be NP-complete [79]. Recall that the objective of the K-ISP is to determine if G contains an independent set containing at least K nodes. Let G = (V, E) be a graph in which we seek an independent set. No transformation of the graph is required for the corresponding K-CNP instance. We will show that a 'yes' instance of the K-ISP corresponds to a 'yes' instance of the K-CNP on G. In particular, G has an independent set of size K if and only if the K-CNP has a zero cost solution with k = |V| − K. Suppose G contains an independent set I with |I| = K. Notice that the objective of the K-CNP will be 0, as the subgraph induced by deleting the nodes in V \ I is edgeless. Therefore, a 'yes' instance of the K-ISP implies a 'yes' instance for the K-CNP with k = |V| − K. To prove the converse, observe that the cost of any K-CNP solution is at least 0. Thus, a 'yes' instance of the K-CNP would imply that once the k critical nodes are removed, the resulting subgraph consists of K components whose objective function value is 0. This implies that the induced subgraph is edgeless, i.e., each of the K components consists of a single node. Hence, the K remaining nodes form an independent set of G, resulting in a 'yes' instance for the K-INDEPENDENT SET PROBLEM. Thus the proof is complete. □

When studying combinatorial problems, integer programming models are usually quite helpful for providing some of the formal properties of the problem [138]. With this in mind, we now develop a linear integer programming formulation for the CNP. To begin with, define the function u : V × V → {0, 1} as above.
Further, we introduce a surjection v : V → {0, 1} defined by

v_i = 1, if node i is deleted in the optimal solution; 0, otherwise. (3-1)

Then the CRITICAL NODE PROBLEM admits the following integer programming formulation:

(CNP1) Minimize Σ_{i,j ∈ V} u_ij (3-2)
s.t. u_ij + v_i + v_j ≥ 1, ∀ (i, j) ∈ E, (3-3)
u_ij + u_jk − u_ki ≤ 1, ∀ (i, j, k) ∈ V, (3-4)
u_ij − u_jk + u_ki ≤ 1, ∀ (i, j, k) ∈ V, (3-5)
−u_ij + u_jk + u_ki ≤ 1, ∀ (i, j, k) ∈ V, (3-6)
Σ_{i ∈ V} v_i ≤ k, (3-7)
u_ij ∈ {0, 1}, ∀ i, j ∈ V, (3-8)
v_i ∈ {0, 1}, ∀ i ∈ V. (3-9)

Theorem 6. CNP1 is a correct formulation for the CRITICAL NODE PROBLEM.

Proof: First, note that the goal is to find the set of k nodes whose removal results in a graph with the maximum number of disconnected components; this is accomplished by the objective function. Notice that the first set of constraints (3-3) implies that if nodes i and j are in different components and there is an edge between them, then one of them must be deleted. Furthermore, constraints (3-4)-(3-6) together imply, for every triplet of nodes i, j, k, that if i and j are in the same component and j and k are in the same component, then k and i must also be in the same component. Constraint (3-7) ensures that the total number of deleted nodes is at most k. Finally, (3-8) and (3-9) define the proper domains for the variables. Thus, a solution to the integer programming formulation CNP1 characterizes a feasible solution to the CNP. On the other hand, it is clear that a feasible solution to the CNP defines at least one feasible solution to CNP1. Therefore, CNP1 is a correct formulation for the CNP. □

Notice that the circular constraints (3-4), (3-5), and (3-6) in CNP1 can be replaced by the single constraint u_ij + u_jk + u_ki ≠ 2, ∀ (i, j, k) ∈ V. Thus we have an equivalent, more compact integer program given as

(CNP2) Minimize Σ_{i,j ∈ V} u_ij (3-10)
s.t. u_ij + v_i + v_j ≥ 1, ∀ (i, j) ∈ E, (3-11)
u_ij + u_jk + u_ki ≠ 2, ∀ (i, j, k) ∈ V, (3-12)
Σ_{i ∈ V} v_i ≤ k, (3-13)
u_ij ∈ {0, 1}, ∀ i, j ∈ V, (3-14)
v_i ∈ {0, 1}, ∀ i ∈ V, (3-15)

where u_ij and v_i are as defined above.
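To make the pairwise-connectivity objective of these formulations concrete, Σ u_ij can be computed directly from the connected components of the vertex-deleted subgraph, since a component of size σ contributes σ(σ − 1)/2 connected pairs. The following is an illustrative sketch only (the adjacency-list representation and function name are ours, not part of the formulation):

```python
def cnp_objective(adj, deleted):
    """Pairwise-connectivity cost: number of node pairs that remain
    in the same component after deleting the nodes in `deleted`."""
    removed = set(deleted)
    seen = set()
    cost = 0
    for s in adj:
        if s in removed or s in seen:
            continue
        stack = [s]          # iterative DFS over the surviving subgraph
        seen.add(s)
        size = 0
        while stack:
            v = stack.pop()
            size += 1
            for w in adj[v]:
                if w not in removed and w not in seen:
                    seen.add(w)
                    stack.append(w)
        cost += size * (size - 1) // 2   # pairs inside this component
    return cost

# path a-b-c-d: deleting b leaves components {a} and {c, d}
adj = {'a': ['b'], 'b': ['a', 'c'], 'c': ['b', 'd'], 'd': ['c']}
print(cnp_objective(adj, ['b']))  # 1
```

Deleting node b severs the pair (a, c), (a, d), and (a, b)-type pairs, leaving only the pair (c, d) connected, so the cost is 1.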
Notice that if the objective function involved only the number of components, then an approximation for the MAXIMUM K-CUT PROBLEM [79, 112] could be employed by modifying the cost function of the Gomory-Hu tree [89]. An even simpler approach would be to identify the cut vertices of the graph, if any exist. However, the objective function also involves the sizes of the components formed, which makes the problem harder and implies that the methods suggested above are not suitable for our problem. Recall that Σ_{i,j ∈ V} u_ij is a measure of the total disconnectivity of the graph. Observe that the objective function can be rewritten as

Σ_{i ∈ S} σ_i(σ_i − 1)/2, (3-16)

where S is the set of all components and σ_i is the size of the i-th component; the components can be identified by fast algorithms such as breadth-first or depth-first search in O(|V| + |E|) time [48]. We now provide an intuitive explanation for the choice of our objective function. For a fixed number of components, the variance of the component sizes is the sum of squared deviations of the sizes from the mean component size. Notice, however, that the mean component size is constant, because the sum of the component sizes is the constant |V| − k. Thus minimizing the variance of the component sizes reduces to minimizing the sum of squares of the component sizes, which is our objective function. Also, when the component sizes are equal, the objective function is minimized when the number of components is maximized. We will use this objective function in the following section to implement a heuristic for identifying critical nodes.

3.2.2 Cardinality Constrained Problem

We now provide the formulation for a slightly modified version of the CNP based on constraining the connectivity index of the nodes in the graph. Given a graph G = (V, E), the connectivity index of a node is defined as the number of nodes reachable from that vertex.
Examples are provided in Figure 3-1. To constrain the network connectivity in optimization models, we can impose constraints on the connectivity indices. This leads to a cardinality constrained version of the CNP, which we aptly refer to as the CARDINALITY CONSTRAINED CRITICAL NODE DETECTION PROBLEM (CCCNP).

Figure 3-1: Connectivity index of nodes A, B, C, D is 3. Connectivity index of E, F, G is 2. Connectivity index of H is 0.

The objective is to detect a set of nodes A ⊆ V such that the connectivity index of each node in the vertex-deleted subgraph G(V \ A) is less than some threshold value, say L. Using the same variable definitions as in the previous subsection, we can formulate the CCCNP as the following integer linear programming problem:

(CCCNP1) Minimize Σ_{i ∈ V} v_i (3-17)
s.t. u_ij + v_i + v_j ≥ 1, ∀ (i, j) ∈ E, (3-18)
u_ij + u_jk + u_ki ≠ 2, ∀ (i, j, k) ∈ V, (3-19)
Σ_{j ∈ V} u_ij ≤ L, ∀ i ∈ V, (3-20)
u_ij ∈ {0, 1}, ∀ i, j ∈ V, (3-21)
v_i ∈ {0, 1}, ∀ i ∈ V, (3-22)

where L is the maximum allowable connectivity index for any node in V.

Theorem 7. CCCNP1 is a correct formulation for the CARDINALITY CONSTRAINED CRITICAL NODE DETECTION PROBLEM.

Proof: This proof follows in much the same way as that of Theorem 6. First, the objective function clearly minimizes the number of nodes deleted. Constraints (3-18) and (3-19) follow exactly as in the CNP formulation. The only difference is that now we must constrain the connectivity index of each node; this is accomplished by constraint (3-20). Finally, constraints (3-21) and (3-22) define the domains of the decision variables, and we have the proof. □

3.3 Heuristics for Critical Node Problems

3.3.1 CNP Heuristic

Pseudocode for the proposed heuristic is provided in Figure 3-2.

procedure CriticalNode(G, k)
1  MIS ← MaximalIndepSet(G)
2  while (|MIS| ≠ |V| − k) do
3      i ← arg min { f(G(MIS ∪ {i})) : i ∈ V \ MIS }
4      MIS ← MIS ∪ {i}
5  end while
6  return V \ MIS   /* set of k nodes to delete */
end procedure CriticalNode

Figure 3-2: Heuristic for detecting critical nodes.
To begin with, the algorithm finds a maximal independent set (MIS). Then, in the loop in lines 2-5, the heuristic greedily selects the node i ∈ V not currently in MIS which yields the minimum objective function value for the graph G(MIS ∪ {i}). The set MIS is augmented to include node i, and the process repeats until |MIS| = |V| − k. The method then terminates, and the set of critical nodes to be deleted is given by the nodes in V \ MIS. The intuition behind using an independent set is that the subgraph induced by this set is empty. Stated otherwise, deleting from the graph the nodes that are not in the independent set results in an empty subgraph. Notice that this provides an optimal solution for an instance of the CNP if |MIS| ≥ |V| − k. However, if the size of MIS is less than |V| − k, we simply keep adding the nodes that give the best objective value to the set until it reaches the desired size. In the following lemma, we establish a relationship between the CNP and the MAXIMUM INDEPENDENT SET problem, which also provides a bound on the optimal solution of an instance of the CNP.

Lemma 5. Given a graph G = (V, E), the cardinality of the maximum independent set of G, denoted α(G), provides an upper bound on the number of components produced in the optimal solution of the corresponding CRITICAL NODE PROBLEM for any value of k ∈ Z.

Proof: Obviously, removing the critical nodes determined by the optimal solution of any instance of the CNP results in a set of disconnected components of G. One node from each of these components forms an independent set. Hence α(G) must be at least as large as the number of components formed in the optimal solution of the CNP. Furthermore, the components formed in the subgraph induced by the maximum independent set are of size one, and hence yield the optimal solution of the CNP instance if α(G) ≥ |V| − k, i.e., if the deletion of some k nodes results in an empty graph. Thus, we have the lemma.
□ We note that this bound is not particularly useful in practice, since the MAXIMUM INDEPENDENT SET problem is NP-hard in general [24, 79]. However, a maximal independent set can be computed in polynomial time. This motivates our decision to use maximal instead of maximum independent sets in the heuristic. Consequently, the heuristic is computationally efficient, with the complexity given in the following theorem.

Theorem 8. The proposed algorithm has complexity O(k² + |V|k).

Proof: To begin with, the while loop in lines 2-5 iterates at most O(|V| − k) times. In each iteration, the number of search operations decreases from |V| − 1 to |V| − (|V| − k) = k. Note that we are performing the search on a sparse graph, which is initially empty. Hence the total complexity is O((|V| − 1) + (|V| − 2) + ··· + k) = O(Σ_{i=k}^{|V|−1} i) = O(k² + |V|k). Thus the proof is complete. □

The proposed algorithm finds a feasible solution to the CRITICAL NODE PROBLEM; however, the solution is not guaranteed to be globally or locally optimal. Therefore, we can enhance the heuristic with the application of a local search routine, as follows. Consider the pseudocode presented in Figure 3-3.

procedure LocalSearch(V \ MIS)
1  X* ← MIS
2  local_improvement ← .TRUE.
3  while local_improvement do
4      local_improvement ← .FALSE.
5      if i ∈ MIS and j ∉ MIS then
6          MIS ← MIS \ {i}
7          MIS ← MIS ∪ {j}
8          if f(MIS) < f(X*) then
9              X* ← MIS
10             local_improvement ← .TRUE.
11         end if
12     end if
13 end while
14 return (V \ X*)   /* set of k nodes to delete */
end procedure LocalSearch

Figure 3-3: Local search algorithm for the critical node heuristic.

The routine receives as input the solution from the CriticalNode heuristic and performs a 2-exchange local search. Let f : 2^V → Z be a function returning the objective function value for a given set, in the sense of (3-16) above. That is, consider a pair of nodes i and j such that i ∈ MIS and j ∉ MIS. Then for all such pairs, we set j ∈ MIS and i ∉ MIS and examine the change in the objective function.
If it improves, the swap is kept; otherwise, we undo the swap and continue to the next node pair. Notice that the loop in lines 3-13 repeats while the solution is not locally optimal. This general statement can lead to implementation problems, and it is common practice to limit the number of local search iterations by some user-defined value, say U. The intuition is that as U → ∞, the solution becomes optimal with respect to its local neighborhood.

Theorem 9. If the number of iterations of the local search is bounded by a constant U as described above, then the complexity of the procedure is O(|V|²U).

Proof: This is clear, as the while loop in lines 3-13 iterates U times. Since each iteration requires an examination of |V|² node pairs, we have the proof. □

procedure CriticalNodeLS(G, k)
1  X* ← ∅
2  f(X*) ← ∞
3  for j = 1 to MaxIter do
4      X ← CriticalNode(G, k)
5      X ← LocalSearch(X)
6      if f(X) < f(X*) then
7          X* ← X
8      end if
9  end for
10 return (V \ X*)   /* set of k nodes to delete */
end procedure CriticalNodeLS

Figure 3-4: Heuristic with local search for detecting critical nodes.

Finally, we can combine the construction and local improvement algorithms into one multi-start heuristic, CriticalNodeLS, as shown in Figure 3-4. This procedure produces MaxIter local optima, and the overall best solution over all iterations is returned.

Theorem 10. The CriticalNodeLS heuristic has overall complexity O(|V|²UT(k² + |V|k)), where T = MaxIter and U is the iteration limit on the local search.

Proof: This result follows directly from Theorem 8 and Theorem 9 above. □

3.3.2 CCCNP Heuristic

With a subtle modification of the heuristic described above for the CNP, we can create an effective heuristic for the CCCNP. To see this, notice that now we are only concerned with the connectivity indices of the nodes. Stated differently, we are only concerned with the sizes of the components in the vertex-deleted subgraph.
Unlike before, there is no limit on the number of critical nodes we choose, so long as the connectivity constraints are satisfied. Pseudocode for the proposed algorithm is provided in Figure 3-5. The heuristic starts off as before by identifying a maximal independent set (MIS). Then the boolean variable OPT is set to FALSE. Finally, in line 3, a variable NoAdd is initialized to 0; this variable determines when to exit the main loop in lines 4-16. After this loop is entered, the procedure iterates through the vertices and determines which can be added back to the graph while still maintaining feasibility. If vertex i can be added, MIS is augmented to include i in step 7; otherwise NoAdd is incremented. If NoAdd ever equals |V| − |MIS|, then no more nodes can be returned to the graph and OPT is set to TRUE. The loop is then exited and the algorithm returns the set of nodes to be deleted, i.e., V \ MIS.

procedure ConstrainedCriticalNode(G, L)
1  MIS ← MaximalIndepSet(G)
2  OPT ← .FALSE.
3  NoAdd ← 0
4  while (OPT ≠ .TRUE.) do
5      for i = 1 to |V| do
6          if (σ_S ≤ L for every component S ⊆ G(MIS ∪ {i}) : i ∈ V \ MIS) then
7              MIS ← MIS ∪ {i}
8          else
9              NoAdd ← NoAdd + 1
10         end if
11         if (NoAdd = |V| − |MIS|) then
12             OPT ← .TRUE.
13             BREAK
14         end if
15     end for
16 end while
17 return V \ MIS   /* set of nodes to delete */
end procedure ConstrainedCriticalNode

Figure 3-5: Heuristic for the CARDINALITY CONSTRAINED CRITICAL NODE PROBLEM.

Theorem 11. The worst-case complexity of the ConstrainedCriticalNode heuristic is O(|V|² + |V||E|).

Proof: This proof is similar to that of Theorem 8 above. The loop in lines 4-16 iterates at most O(|V|) times. Each iteration requires at most O(|V| + |E|) time to verify whether a solution will remain feasible after a node is re-included in the graph. Thus we have the result. □

3.3.3 Genetic Algorithm for the CCCNP

As mentioned in Subsection 2.7.2, genetic algorithms (GAs) mimic the biological process of evolution. In this subsection, we describe the implementation of a GA for the CCCNP.
Recall the general structure of a GA as outlined in Figure 3-6.

procedure GeneticAlgorithm
1  Generate population P_t
2  Evaluate population P_t
3  while terminating condition not met do
4      Select individuals from P_t and copy to P_{t+1}
5      Crossover individuals from P_t and put in P_{t+1}
6      Mutate individuals from P_t and put in P_{t+1}
7      Evaluate population P_{t+1}
8      t ← t + 1
9  end while
10 return best individual in P_t
end procedure GeneticAlgorithm

Figure 3-6: Pseudocode for a generic genetic algorithm.

When designing a genetic algorithm for an optimization problem, one must provide a means to encode the population, define the crossover operator, and define the mutation operator, which allows for random changes in offspring to help prevent the algorithm from converging prematurely [10]. For our implementation, we use binary vectors as the encoding scheme for individuals within the population of solutions. When the population is generated (Figure 3-6, line 1), a random deviate from a distribution which is uniform on (0, 1) ⊂ R is generated for each node. If the deviate exceeds some specified value, the corresponding allele is assigned the value 1, indicating that the node should be deleted. Otherwise, the allele is given a 0, implying that the node is not deleted. In order to evaluate the fitness of the population, per line 2, we must determine whether each individual solution is feasible. Determining feasibility is a relatively straightforward task and can be accomplished in O(|V| + |E|) time using a depth-first search [6]. In order to evolve the population over successive generations, we use a reproduction scheme in which the parents chosen to produce the offspring are selected using the binary tournament method [131, 172]. Using this method, two chromosomes are chosen at random from the population, and the one having the better fitness, i.e., the lower objective function value, is kept as a parent. The process is then repeated to select the second parent.
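The binary tournament selection just described can be sketched as follows (an illustrative snippet, not the dissertation's code; the tuple-based chromosome representation and fitness function are our own assumptions):

```python
import random

def binary_tournament(population, fitness):
    """Pick two chromosomes at random; keep the one with the
    better fitness (lower objective value) as a parent."""
    a, b = random.sample(population, 2)
    return a if fitness(a) <= fitness(b) else b

# toy population of binary chromosomes; fitness = number of deleted nodes
pop = [(1, 0, 1), (0, 0, 1), (1, 1, 1)]
fitness = lambda chrom: sum(chrom)
mom = binary_tournament(pop, fitness)   # first parent
dad = binary_tournament(pop, fitness)   # second parent, drawn independently
```

Because each tournament draws its two contestants independently, fitter chromosomes are selected more often without ever excluding weaker ones outright.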
The two parents are then combined using a crossover operator to produce an offspring [94].

Figure 3-7: Example of the crossover operation. In this case, CrossProb = 0.65.

To breed new solutions, we implement a strategy known as parameterized uniform crossover [167]. This method works as follows. After the selection of the parents, refer to the parent having the better fitness as MOM. For each of the nodes (alleles), a biased coin is tossed. If the result is heads, the allele from the MOM chromosome is chosen. Otherwise, the allele from the less fit parent, call it DAD, is selected. The probability that the coin lands on heads is known as CrossProb and is determined empirically. Figure 3-7 provides an example of a potential crossover when the number of nodes is 5 and CrossProb = 0.65 [10]. After the child is produced, the mutation operator is applied. Mutation is a randomizing agent which helps prevent the GA from converging prematurely and helps it escape from local optima. This process works by flipping a biased coin for each allele of the chromosome. The probability of the coin landing heads, known as the mutation rate (MutRate), is typically a very small user-defined value. If the result is heads, the value of the corresponding allele is reversed. For our implementation, MutRate = 0.03. After the crossover and mutation operators create the new offspring, it replaces a current member of the population using the so-called steady-state model [37, 94, 131]. Under this scheme, the child replaces the least fit member of the population, provided that a clone of the child is not already a member of the population. This method ensures that the worst element of the population improves monotonically in every generation. In the subsequent iteration, the child becomes eligible to be a parent, and the process repeats.
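The crossover and mutation steps above can be sketched as follows. This is an illustrative sketch with our own function names; the `rng` parameter is added only to make the biased coin tosses explicit and testable:

```python
import random

CROSS_PROB = 0.65   # probability an allele is inherited from MOM (the fitter parent)
MUT_RATE = 0.03     # per-allele probability of flipping a bit

def crossover(mom, dad, rng=random.random):
    """Parameterized uniform crossover: toss a biased coin per allele;
    heads takes the allele from MOM, tails from DAD."""
    return [m if rng() < CROSS_PROB else d for m, d in zip(mom, dad)]

def mutate(child, rng=random.random):
    """Flip each binary allele independently with probability MUT_RATE."""
    return [1 - a if rng() < MUT_RATE else a for a in child]

child = mutate(crossover([1, 0, 1, 1, 0], [0, 0, 1, 0, 1]))
```

With CrossProb = 0.65, roughly two thirds of the alleles come from the fitter parent on average, while the small mutation rate occasionally perturbs the offspring to keep the search from stalling.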
Though the GA does converge in probability to the optimal solution, it is common to stop the procedure after some terminating condition (Figure 3-6, line 3) is satisfied. This condition could be one of several things, including a maximum running time, a target objective value, or a limit on the number of generations. For our implementation, we use the latter option, and the best solution after MaxGen generations is returned.

Table 3-1: Results of IP model and heuristic on terrorist network data.

Nodes        IP Model               Heuristic              Heuristic + LS
Deleted (k)  Objective  Time (s)    Objective  Time (s)    Objective  Time (s)
20           20         12.69       22         0.08        20         0.01
15           61         277.77      66         0.03        61         0.01
10           169        3337.06     190        0.06        169        0.02
9            214        2792.33     229        0.15        214        0.02
8            282        15111.94    309        0.04        282        0.01
7            327        10792.08    329        0.09        327        0.01

3.4 Computational Results

All of the proposed heuristics were implemented in the C++ programming language and compiled with GNU g++ version 3.4.4 using optimization flags -O2. Testing was performed on a PC equipped with a 1700 MHz Intel Pentium M processor and 1.0 gigabytes of RAM, operating under the Microsoft Windows XP Professional environment.

3.4.1 CNP Results

We begin with the numerical results of the combinatorial algorithm for the CRITICAL NODE PROBLEM. We tested the IP model and the aforementioned heuristic on the terrorist network from Krebs [118], as well as on a set of randomly generated scale-free [13] graphs ranging in size from 75 to 150 nodes with various densities. The graphs were generated with version 1.4 of the publicly available Barabasi graph generator by Dreier [62]. For each instance tested, we report solutions for 3 values of k, the number of nodes to be deleted. As a basis for comparison, we implemented the integer programming model for the CRITICAL NODE PROBLEM using the CPLEX Optimization Suite from ILOG [50].
CPLEX contains an implementation of the simplex method [98] and uses a branch and bound algorithm [173] together with advanced cutting-plane techniques [107, 139]. We begin by providing the results for the terrorist network [118]. The graph, shown in Figure 3-8, has 62 nodes and 153 edges. Notice that node 38 is the central node, with degree 22.

Figure 3-8: Terrorist network compiled by Krebs.

We applied the IP formulation and the heuristic to this network with 6 values of k. The results are provided in Table 3-1. Notice that for all values of k, the heuristic computed the optimal solution, requiring on average 0.013 seconds of computation time. The average time to compute the optimal solution using CPLEX was 5387.31 seconds. Clearly, even for this relatively small network, the heuristic is the method of choice. Figure 3-9 shows the resulting graph of the terrorist network according to the optimal solution to the CNP for the instance with k = 20. In order to assess scalability and robustness, the proposed heuristic was tested on a set of randomly generated scale-free graphs. Table 3-2 presents the results of the heuristic and the optimal solver when applied to the random instances. For each instance, we report the number of nodes and arcs, the value of k being considered, the optimal solution and computation time required by CPLEX, and finally the heuristic solution and the corresponding computation time. For each graph, we report solutions for 3 different values of k.

Figure 3-9: Optimal solution when k = 20.

Notice that for all instances tested, our method was able to compute the optimal solution. Furthermore, the required time to compute the optimal solution was less than one second for all but one instance, averaging only 0.33 seconds over all 27 instances. On the other hand, CPLEX required 289.44 seconds on average to compute the optimal solution, requiring over 5000 seconds in the worst case.
Our computational experiments indicate that the proposed heuristic is able to efficiently provide excellent solutions for large-scale instances of the CNP.

3.4.2 CCCNP Results

We continue with the results of the two algorithms developed for the CCCNP, namely the combinatorial algorithm and the genetic algorithm. As above, we tested the IP model and both heuristics on the terrorist network [118] and a set of randomly generated graphs. For each instance tested, we report solutions for 3 values of L, the connectivity index threshold. Finally, we implemented the integer programming model for the CCCNP using CPLEX.

Table 3-2: Results of IP model and heuristic on randomly generated scale-free graphs.

Nodes  Arcs  Nodes        IP Model            Heuristic          Heuristic + LS
             Deleted (k)  Obj    Time (s)     Obj    Time (s)    Obj    Time (s)
75     140   20           36     66.7         92     0.12        36     0.03
75     140   25           18     33.28        39     0.28        18     0.03
75     140   30           7      4.23         18     0.02        7      0.04
75     210   25           26     93.71        78     0.1         26     0.04
75     210   30           8      3.57         31     0.05        8      0.05
75     210   35           2      4.36         16     0.18        2      0.04
75     280   33           26     749.19       54     0.00        26     0.04
75     280   35           20     164.34       38     0.09        20     0.06
75     280   37           13     83.98        24     0.39        13     0.11
100    194   25           44     151.14       142    0.731       44     0.09
100    194   30           20     59.66        72     0.56        20     0.11
100    194   35           10     8.51         33     0.66        10     0.12
100    285   40           23     136.47       48     1.151       23     0.11
100    285   42           17     263.82       38     0.4         17     0.17
100    285   45           11     16.78        29     0.53        11     0.23
100    380   45           22     128.13       58     0.58        22     0.15
100    380   47           16     243.07       42     1.191       16     0.16
100    380   50           10     228.72       23     0.31        10     0.11
125    240   33           62     5047.511     97     0.721       62     0.30
125    240   40           29     118.92       49     1.5632      29     0.24
125    240   45           16     17.09        32     0.14        16     0.39
150    290   40           40     41.6         125    1.832       40     0.47
150    290   50           12     26.29        64     2.773       12     0.831
150    290   60           1      24.92        35     1.091       1      0.851
150    435   61           19     29.55        53     2.313       19     0.741
150    435   65           13     31.45        37     0.991       13     1.952
150    435   67           11     37.91        31     0.52        11     0.801

Table 3-3 presents computational results of the IP model and heuristic solutions when tested on the terrorist network data.
Notice that for all 5 values of L tested, the genetic algorithm and the combinatorial algorithm with local search (ComAlg + LS) computed optimal solutions. Figure 3-10 shows the optimal solution for the case when L = 4. We now consider the performance of the algorithms when tested on the randomly generated data sets containing up to 50 nodes, taken from [9]. The results are shown in Table 3-4. For these relatively small instances, we were able to compute the optimal solutions using CPLEX.

Table 3-3: Results of IP model and heuristics on terrorist network data.

Max Conn.   IP Model          Genetic Alg       ComAlg            ComAlg + LS
Index (L)   Obj   Time (s)    Obj   Time (s)    Obj   Time (s)    Obj   Time (s)
3           21    188.98      21    0.25        22    0.01        21    0.1
4           17    886.09      17    0.741       19    0.01        17    0.45
5           15    30051.09    15    0.871       20    0.18        15    1.331
8           --    --          13    0.39        14    0.05        13    0.07
10          --    --          11    0.741       12    0.07        11    0.05

Figure 3-10: Optimal solution when L = 4.

For each instance, we provide solutions for 3 values of L, the maximum connectivity index. Notice that for these problems, the genetic algorithm computed optimal solutions for each instance tested in a fraction of the time required by CPLEX. The combinatorial heuristic found optimal solutions for all but 3 cases, requiring approximately half the time of the GA. Table 3-5 presents the solutions for the random instances from 75 to 150 nodes [9, 11]. Again, in order to demonstrate the robustness of the heuristics, we provide solutions for 3 values of L, the maximum allowable network connectivity index.

3.5 Concluding Remarks

In general, the problem of detecting critical nodes has a wide variety of applications, from jamming communication networks and other anti-terrorism applications to epidemiology and transportation science [9, 11]. In particular, we examined two problems, namely the CRITICAL NODE PROBLEM (CNP) and the CARDINALITY CONSTRAINED CNP (CCCNP).
Given a graph and an integer k, the objective of the CNP is to detect a set of k critical nodes whose deletion results in the maximum number of disconnected components whose cardinalities have minimum variance. The definition of the CCCNP differs slightly: instead of being given k ∈ Z, the maximum number of nodes to delete, we are given some value L ∈ Z representing the maximum connectivity index a node may have. The objective in this case is to delete the minimum number of nodes while ensuring that the connectivity index of each node does not exceed L. The proposed problems were modeled as integer linear programming problems, and we proved that the corresponding decision problems are NP-complete. Furthermore, we proposed several heuristics for efficiently computing quality solutions to large-scale instances. The heuristic proposed for the CNP is a combinatorial algorithm that exploits properties of the graph in order to compute basic feasible solutions. The method was further intensified by the application of a local search mechanism. By using the integer programming formulation, we were able to assess the precision of our heuristics by comparing their solutions and computation times on several networks. The computational experiments indicated that the heuristic found optimal solutions for all instances tested in a fraction of the time required by the commercial IP solver CPLEX. For the CCCNP we proposed two algorithms, namely a modified version of the combinatorial algorithm described above and a genetic algorithm [87]. Once again, the computational experiments indicated that both methods are robust and able to efficiently compute approximate solutions for instances of up to 150 nodes. We conclude with a few words on possible future extensions of this work. A heuristic exploration of cutting plane algorithms on the IP formulation would be an interesting alternative.
Other heuristic approaches worthy of investigation include hybridizing the genetic algorithm with a local search or path-relinking enhancement procedure [85]. Finally, the local search used in the combinatorial algorithm was a simple 2-exchange method, which caused a significant slowdown in computation, as noted in Table 3-5. A more sophisticated local search, such as a modification of the one proposed by Resende and Werneck [159, 160], should be a major focus of attention. Furthermore, it would be interesting to study the weighted version of the problem to see how weights on the nodes affect the solutions. For example, it is reasonable to envision applications involving weighted networks in which the cost of deleting one node differs from that of another. Also, for applications outside the scope of jamming networks, a study of epidemic threshold variation with respect to the heuristic results would help determine the impacts on contagion suppression in biological and social networks.

CHAPTER 4
THE WIRELESS NETWORK JAMMING PROBLEM

4.1 Introduction

Military strategists constantly seek ways to increase the effectiveness of their force while reducing the risk of casualties. In any adversarial environment, an important goal is to neutralize the communication system of the enemy. In this chapter, we are interested in jamming a wireless communication network. Specifically, we study the problem of determining the optimal number and placement of a set of jamming devices in order to neutralize communication on the network. This is known as the WIRELESS NETWORK JAMMING PROBLEM (WNJP). Despite the enormous amount of research on telecommunication systems [155], the topic of jamming communication networks has received little attention. In fact, the material in the next two chapters presents the first such efforts, so far as we can tell.
We begin this chapter by describing and formulating the problem of jamming a wired telecommunication network, and we extend this result to the wireless domain. We will see that there is more versatility in the wireless version of the problem due to the wireless multicast advantage, i.e., the ability of wireless transmitters to affect nodes that are not directly adjacent to them. We can generalize the work of [9] to study the problem of jamming and eavesdropping on wireless communication networks. As we will see, several variations can be made depending on the overall objectives. This is aided by the fact that wireless jamming devices affect not only those nodes directly adjacent to them; rather, they propagate energy throughout the network to all the communication nodes, as we will see in the next section. The organization of the chapter is as follows. After a review of related work, we present several deterministic formulations of the WNJP in Section 4.3. In particular, Subsection 4.3.1 contains several coverage formulations of the WNJP. Then, in Subsection 4.3.2, we use tools from graph theory to define the connectivity of the network and develop an alternative formulation based on constraining the connectivity indices of the nodes, analogous to the CCCNP. Next, in Section 4.4, we incorporate percentile constraints to develop formulations which are computationally more efficient and have similar solution quality. In Section 4.5, we present two case studies comparing the solutions and computation times for all formulations. Finally, conclusions and future directions of research are addressed.

4.2 Definitions and Assumptions

Before formally stating the problem, we state some basic assumptions about the jamming devices and the communication nodes being jammed. We assume that parameters such as the frequency range of the jamming devices are known.
In addition, the jamming devices are assumed to have omnidirectional antennas. The communication nodes are also assumed to be outfitted with omnidirectional antennas and to function as both receivers and transmitters. Given a graph G = (V, E), we can represent the communication devices as the vertices of the graph; an undirected edge connects two nodes if they are within a certain communication threshold. Given a set M = {1, 2, ..., m} of communication nodes to be jammed, the goal is to find a set of locations for placing jamming devices in order to suppress the functionality of the network. The jamming effectiveness of device j is calculated using d : (V × V) → R, where d is a decreasing function of the distance from the jamming device to the node being jammed. Here we consider radio transmitting nodes and, correspondingly, jamming devices which emit electromagnetic waves. Thus the jamming effectiveness of a device depends on the power of its electromagnetic emission, which is assumed to be inversely proportional to the squared distance from the jamming device to the node being jammed. We note that this assumption is made without loss of generality; the results presented in this chapter hold as long as the function d is a smooth, monotonically decreasing function. Specifically,

d(i, j) = A / r²(i, j), (4-2)

where A ∈ R is a constant and r(i, j) represents the distance between node i and jamming device j. Without loss of generality, we can set A = 1. In practice, the set of possible placement locations will likely be limited. Define the decision variable x_j as

x_j = 1, if a jamming device is installed at location j; 0, otherwise. (4-3)

If we redefine r(i, j) to be the distance between communication node i and jamming location j, then we have the OPTIMAL NETWORK COVERING (ONC) formulation of the WNJP as

(ONC) Minimize Σ_{j=1}^{n} c_j x_j (4-4)
s.t. Σ_{j=1}^{n} d(i, j) x_j ≥ C_i, ∀ i ∈ M, (4-5)
x_j ∈ {0, 1}, j = 1, 2, ..., n, (4-6)

where C_i is defined as above.
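Under the inverse-square assumption with A = 1, the cumulative jamming level at a communication node, and hence the covering condition of ONC, can be evaluated as in the following sketch (the coordinates, names, and thresholds are hypothetical examples of ours, not data from the case studies):

```python
def jamming_level(node, locations, x):
    """Cumulative jamming at `node`: sum of 1/r^2 over the
    locations j where a device is installed (x[j] = 1)."""
    total = 0.0
    for j, (px, py) in enumerate(locations):
        if x[j]:
            r_squared = (node[0] - px) ** 2 + (node[1] - py) ** 2
            total += 1.0 / r_squared
    return total

def is_covered(node, locations, x, threshold):
    """Check the covering condition: sum_j d(i, j) x_j >= C_i."""
    return jamming_level(node, locations, x) >= threshold

# a node at the origin, installed devices at distances 1 and 2
node = (0.0, 0.0)
locations = [(1.0, 0.0), (0.0, 2.0)]
x = [1, 1]
print(jamming_level(node, locations, x))  # 1/1 + 1/4 = 1.25
```

The example illustrates how placement decisions combine additively: a nearby device contributes far more jamming energy (1/1) than one twice as far away (1/4).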
Here the objective is to minimize the number of jamming devices used while achieving some minimum level of coverage at each node. The coefficients c_j in (4-4) represent the costs of installing a jamming device at location j. In a battlefield scenario, placing a jamming device in the direct proximity of a network node may be theoretically possible; however, such a placement might be undesirable due to security considerations. In this case, the location considered would have a higher placement cost than a safer location would. If there are no preferences for device locations, then without loss of generality, c_j = 1, j = 1, ..., n. Though we have removed the nonconvex covering constraints, this formulation remains computationally difficult. Notice that ONC is formulated as a MULTIDIMENSIONAL KNAPSACK PROBLEM, which is known to be NP-hard in general [79].

4.3.2 Connectivity Formulation

In the general WNJP, the important distinction must be made that the objective is not simply to jam all of the nodes, but to destroy the functionality of the underlying communication network. In this section, we use tools from graph theory to develop a method for suppressing the network by jamming those nodes with several communication links, and derive an alternative formulation of the WNJP.

Figure 4-1: Connectivity index of nodes A, B, C, D is 3. Connectivity index of E, F, G is 2. Connectivity index of H is 0.

Given a graph G = (V, E), recall that the connectivity index of a node is defined as the number of nodes reachable from that vertex (as shown in Figure 4-1). To constrain the network connectivity in optimization models, we can impose constraints on the connectivity indices instead of using covering constraints. We can now develop a formulation for the WNJP based on the connectivity indices of the communication graph.
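The connectivity index just defined can be computed directly by a graph traversal: the index of a node is the size of its connected component minus one. A minimal sketch (node labels 0-7 standing in for A-H of Figure 4-1; this numbering is an assumption for illustration):

```python
from collections import deque

def connectivity_indices(n, edges):
    """Connectivity index of a node = number of other nodes reachable
    from it, computed by breadth-first search from each node."""
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    idx = [0] * n
    for s in range(n):
        seen = {s}
        q = deque([s])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        idx[s] = len(seen) - 1  # exclude the node itself
    return idx

# An 8-node graph with the component structure of Figure 4-1:
# a 4-node component, a 3-node component, and one isolated node.
edges = [(0, 1), (1, 2), (2, 3), (0, 2), (4, 5), (5, 6)]
print(connectivity_indices(8, edges))  # [3, 3, 3, 3, 2, 2, 2, 0]
```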
We assume that the set of communication nodes M = {1, 2, ..., m} to be jammed is known and that a set of possible locations N = {1, 2, ..., n} for the jamming devices is given. Note that in the communication graph, M is contained in V. Let S_i := sum_{j in N} d_ij x_j denote the cumulative level of jamming at node i. Then node i is said to be jammed if S_i exceeds some threshold value C_i. We say that communication is severed between nodes i and j if at least one of the nodes is jammed. Further, let y : M x M -> {0, 1} be a surjection where y_ij := 1 if there exists a path from node i to node j in the jammed network. Lastly, let z : M -> {0, 1} be a surjective function where z_i returns 1 if node i is not jammed. The objective of the CONNECTIVITY INDEX PROBLEM (CIP) formulation of the WNJP is to minimize the total jamming cost subject to the constraint that the connectivity index of each node does not exceed some prescribed level L. The corresponding optimization problem is given as:

    (CIP) Minimize  sum_{j=1}^{n} c_j x_j                                  (4-7)
          s.t.     sum_{j in M} y_ij <= L,  for all i in M,                (4-8)
                   M(1 - z_i) >= S_i - C_i >= -M z_i,  for all i in M,     (4-9)
                   x_j in {0, 1},  for all j in N,                         (4-10)
                   y_ij in {0, 1},  for all i, j in M,                     (4-11)
                   z_i in {0, 1},  for all i in M,                         (4-12)

where M in R is some large constant. Let v : M x M -> {0, 1} and v' : M x M -> {0, 1} be defined as follows:

    v_ij := 1, if (i, j) in E; 0, otherwise,                               (4-13)
    v'_ij := 1, if (i, j) exists in the jammed network; 0, otherwise.      (4-14)

With this, we can formulate an equivalent integer program as

    (CIP1) Minimize  sum_{j=1}^{n} c_j x_j                                 (4-15)
           s.t.     y_ij >= v'_ij,  for all i, j in M,                     (4-16)
                    y_ij >= y_ik + v'_kj - 1,  for all i, j, k in M,       (4-17)
                    sum_{j in M} y_ij <= L,  for all i in M,               (4-18)
                    M(1 - z_i) >= S_i - C_i >= -M z_i,  for all i in M,    (4-19)
                    v'_ij = v_ij z_i z_j,  for all i, j in M,              (4-20)
                    x_j in {0, 1},  for all j in N,                        (4-21)
                    y_ij, v'_ij in {0, 1},  for all i, j in M,             (4-22)
                    z_i in {0, 1},  for all i in M.                        (4-23)

Lemma 6. If CIP has an optimal solution, then CIP1 has an optimal solution. Further, any optimal solution x* of the optimization problem CIP1 is an optimal solution of CIP.

Proof: It is easy to establish that if i and j are reachable from each other in the jammed network, then in CIP1, y_ij = 1. Indeed, if i and j are connected then there exists a sequence of pairwise adjacent vertices

    {(i_0, i_1), ..., (i_{m-1}, i_m)},                                     (4-24)

where i_0 = i and i_m = j. Using induction, it can be shown that y_{i, i_k} = 1 for all k = 1, 2, ..., m. From (4-16), we have that y_{i, i_1} = 1.
If y_{i, i_k} = 1, then by (4-17), y_{i, i_{k+1}} >= y_{i, i_k} + v'_{i_k, i_{k+1}} - 1 = 1, which proves the induction step. The proven property implies that in CIP1,

    sum_{j in M} y_ij >= connectivity index of i.                          (4-25)

Therefore, if (x*, y*) and (x**, y**) are optimal solutions of CIP1 and CIP, respectively, then

    V(x*) >= V(x**),                                                       (4-26)

where V is the objective in CIP1 and CIP. As (x**, y**) is feasible in CIP, it can be easily checked that y** satisfies all feasibility constraints in CIP1 (this follows from the definition of y_ij in CIP). So (x**, y**) is feasible in CIP1, thus proving the first statement of the lemma. Hence, from CIP1,

    V(x**) >= V(x*).                                                       (4-27)

From (4-26) and (4-27),

    V(x**) = V(x*).                                                        (4-28)

Let us define y~ such that y~_ij := 1 if and only if j is reachable from i in the network jammed by x*. Using (4-25), (x*, y~) is feasible in CIP1, and hence optimal. From the construction of y~, it follows that (x*, y~) is feasible in CIP. Relying on (4-28), we can claim that x* is an optimal solution of CIP. The lemma is proved.

We have therefore established a one-to-one correspondence between formulations CIP and CIP1. Now, we can linearize the integer program CIP1 by applying some standard transformations. The resulting linear 0-1 program, CIP2, is given as

    (CIP2) Minimize  sum_{j=1}^{n} c_j x_j                                 (4-29)
           s.t.     y_ij >= v'_ij,  for all i, j in M,                     (4-30)
                    y_ij >= y_ik + v'_kj - 1,  for all i, j, k in M,       (4-31)
                    sum_{j in M} y_ij <= L,  for all i in M,               (4-32)
                    M(1 - z_i) >= S_i - C_i >= -M z_i,  for all i in M,    (4-33)
                    v'_ij >= v_ij + z_i + z_j - 2,  for all i, j in M,     (4-34)
                    x_j in {0, 1},  for all j in N,                        (4-35)
                    y_ij, v'_ij in {0, 1},  for all i, j in M,             (4-36)
                    z_i in {0, 1},  for all i in M.                        (4-37)

In the following lemma, we provide a proof of equivalence between CIP1 and CIP2.

Lemma 7. If CIP1 has an optimal solution, then CIP2 has an optimal solution. Furthermore, any optimal solution x* of CIP2 is an optimal solution of CIP1.

Proof: For 0-1 variables, v_ij z_i z_j = 1 holds if and only if v_ij + z_i + z_j - 2 = 1. The only differences between CIP1 and CIP2 are the constraints

    v'_ij = v_ij z_i z_j                                                   (4-38)

and

    v'_ij >= v_ij + z_i + z_j - 2.                                         (4-39)

Note that (4-38) implies (4-39) (since v_ij z_i z_j >= v_ij + z_i + z_j - 2 for 0-1 values). Therefore, the feasibility region of CIP2 includes the feasibility region of CIP1. This proves the first statement of the lemma.
From the last property we can also deduce that for all x1, x2 such that x1 is an optimal solution of CIP1 and x2 is optimal for CIP2,

    V(x1) >= V(x2),                                                        (4-40)

where V(x) is the objective of CIP1 and CIP2. Let (x*, y*, v'*, z*) be an optimal solution of CIP2. Construct v''* using the following rule:

    v''_ij := 1, if v_ij + z*_i + z*_j - 2 = 1; 0, otherwise.              (4-41)

Then v'*_ij >= v''_ij, and (x*, y*, v''*, z*) is feasible in CIP2 (since y*_ij >= v'*_ij >= v''_ij), hence optimal (its objective value is V(x*), which is optimal). Using (4-41), (v''*, z*) satisfies v''_ij = v_ij z*_i z*_j, i.e., constraint (4-38). Using this, we have that (x*, y*, v''*, z*) is feasible for CIP1. If x1 is an optimal solution of CIP1, then

    V(x1) <= V(x*).                                                        (4-42)

On the other hand, using (4-40),

    V(x*) <= V(x1).                                                        (4-43)

Together, (4-42) and (4-43) imply V(x1) = V(x*). The last equality proves that x* is an optimal solution of CIP1. Thus, the lemma is proved.

As a result of the above lemmata, we have the following theorem, which states that an optimal solution to the linearized integer program CIP2 is an optimal solution to the original connectivity index problem CIP.

Theorem 12. If CIP has an optimal solution, then CIP2 has an optimal solution. Furthermore, any optimal solution of CIP2 is an optimal solution of CIP.

Proof: The theorem is an immediate corollary of Lemma 6 and Lemma 7.

4.4 Deterministic Setup with Percentile Constraints

As we have seen, suppressing communication on a wireless network does not necessarily imply that all nodes must be jammed. We might instead choose to constrain the connectivity index of the nodes as in the CIP formulations. Alternatively, it may be sufficient to jam some percentage of the total number of nodes in order to acquire effective control over the network. The latter can be accomplished by adding percentile risk constraints to the mathematical formulation. Used extensively in financial engineering applications and the optimization of stochastic systems, risk measures have also proven effective when applied to deterministic problems [120].
In this section, we review two risk measures, namely Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), and provide formulations of the WNJP incorporating these risk measures.

4.4.1 Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR)

The Value-at-Risk (VaR) percentile measure is perhaps the most widely used in all applications of risk management [103]. Stated simply, VaR is an upper percentile of a given loss distribution. In other words, given a specified confidence level alpha, the corresponding alpha-VaR is the lowest amount zeta such that, with probability alpha, the loss is less than or equal to zeta [121]. VaR-type risk measures are popular for several reasons, including their simple definition and ease of implementation. An alternative risk measure is Conditional Value-at-Risk (CVaR). Developed by Rockafellar and Uryasev, CVaR is a percentile risk measure constructed for the estimation and control of risks in stochastic and uncertain environments. However, CVaR-based optimization techniques can also be applied in a deterministic percentile framework. CVaR is defined as the conditional expected loss under the condition that it exceeds VaR [168]. Figure 4-2 provides a graphical representation of the VaR and CVaR concepts. As we will see, CVaR has many properties that offer nice alternatives to VaR.

Figure 4-2: Graphical representation of VaR and CVaR.

Let f(x, y) be a performance or loss function associated with the decision vector x in X, a subset of R^n, and a random vector y in R^m. The vector y can be interpreted as the uncertainties that may affect the loss. Then, for each x in X, the corresponding loss f(x, y) is a random variable having a distribution in R which is induced by y. We assume that y is governed by a probability measure P on a Borel set, say Y. Therefore, the probability of f(x, y) not exceeding some threshold value zeta is given by

    Psi(x, zeta) := P{y | f(x, y) <= zeta}.                                (4-44)
For a fixed decision vector x, Psi(x, zeta) is the cumulative distribution function of the loss associated with x. This function is fundamental for defining VaR and CVaR [121]. With this, the alpha-VaR and alpha-CVaR values of the loss random variable f(x, y) for any specified alpha in (0, 1) are denoted by zeta_alpha(x) and phi_alpha(x), respectively. From the aforementioned definitions, they are given by

    zeta_alpha(x) := min{zeta in R : Psi(x, zeta) >= alpha},               (4-45)

and

    phi_alpha(x) := E{f(x, y) | f(x, y) >= zeta_alpha(x)}.                 (4-46)

Notice that the probability that f(x, y) >= zeta_alpha(x) is equal to 1 - alpha. Finally, by definition, phi_alpha(x) is the conditional expectation of the loss corresponding to x, given that this loss is greater than or equal to zeta_alpha(x) [162]. The key to including VaR and CVaR constraints in a model is the characterization of zeta_alpha(x) and phi_alpha(x) in terms of a function F_alpha : X x R -> R defined by

    F_alpha(x, zeta) := zeta + (1/(1 - alpha)) E{max{f(x, y) - zeta, 0}}.  (4-47)

The following theorem, which provides the crucial properties of the function F_alpha, follows directly from the paper by Rockafellar and Uryasev [162].

Theorem 13. As a function of zeta, F_alpha(x, zeta) is convex and continuously differentiable. The alpha-CVaR of the loss associated with any x in X can be determined from the formula

    phi_alpha(x) = min_{zeta in R} F_alpha(x, zeta).                       (4-48)

In this formula, the set consisting of the values of zeta for which the minimum is attained, namely

    A_alpha(x) = argmin_{zeta in R} F_alpha(x, zeta),                      (4-49)

is a nonempty, closed, bounded interval, and the alpha-VaR of the loss is given by

    zeta_alpha(x) = left endpoint of A_alpha(x).                           (4-50)

In particular, it is always the case that

    zeta_alpha(x) in argmin_{zeta in R} F_alpha(x, zeta)  and  phi_alpha(x) = F_alpha(x, zeta_alpha(x)).  (4-51)

This result provides an efficient linear optimization algorithm for CVaR. However, from a numerical perspective, the convexity of F_alpha(x, zeta) with respect to x and zeta, as provided by Theorem 13, is more valuable than the convexity of phi_alpha(x) with respect to x.
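For a finite, equally weighted loss sample, the definitions of VaR and CVaR given above can be computed directly. The following sketch (hypothetical data, not from the case studies) implements alpha-VaR as the smallest threshold reached with probability at least alpha, and CVaR as the conditional average of (4-46); for distributions with atoms this conditional average can differ slightly from the Rockafellar-Uryasev minimization value (4-48).

```python
def var_cvar(losses, alpha):
    """Empirical alpha-VaR and alpha-CVaR of a finite loss sample.
    VaR: smallest zeta with P(loss <= zeta) >= alpha.
    CVaR: average of the losses that are >= VaR (definition (4-46))."""
    xs = sorted(losses)
    m = len(xs)
    # smallest index k such that (k + 1) / m >= alpha
    k = next(i for i in range(m) if (i + 1) / m >= alpha)
    var = xs[k]
    tail = [x for x in xs if x >= var]
    cvar = sum(tail) / len(tail)
    return var, cvar

losses = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
var, cvar = var_cvar(losses, alpha=0.8)
print(var, cvar)  # 8 9.0  (CVaR averages the tail {8, 9, 10})
```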
As we will see in the following theorem, due to Rockafellar and Uryasev [163], this allows us to minimize CVaR without having to proceed numerically through repeated calculations of phi_alpha(x) for various decisions x.

Theorem 14. Minimizing phi_alpha(x) with respect to x in X is equivalent to minimizing F_alpha(x, zeta) over all (x, zeta) in X x R, in the sense that

    min_{x in X} phi_alpha(x) = min_{(x, zeta) in X x R} F_alpha(x, zeta),  (4-52)

where moreover

    (x*, zeta*) in argmin F_alpha(x, zeta)  if and only if  x* in argmin phi_alpha(x) and zeta* in argmin_{zeta} F_alpha(x*, zeta).  (4-53)

In the deterministic setting of the WNJP, we are not particularly interested in minimizing VaR or CVaR as it pertains to the loss. Rather, we would like to impose percentile constraints on the optimization model in order to enforce a desired probability threshold. The following theorem from [163] provides this capability.

Theorem 15. For any selection of probability thresholds alpha_i and loss tolerances omega_i, i = 1, ..., m, the problem

    min_{x in X} g(x)                                                      (4-54)
    s.t. phi_{alpha_i}(x) <= omega_i,  for i = 1, ..., m,                  (4-55)

where g is any objective function defined on X, is equivalent to the problem

    min_{(x, zeta_1, ..., zeta_m) in X x R^m} g(x)                         (4-56)
    s.t. F_{alpha_i}(x, zeta_i) <= omega_i,  for i = 1, ..., m.            (4-57)

Indeed, (x*, zeta*_i) solves the second problem if and only if x* solves the first problem and the inequality F_{alpha_i}(x*, zeta*_i) <= omega_i holds for i = 1, ..., m. Furthermore, phi_{alpha_i}(x*) <= omega_i holds for all i = 1, ..., m. In particular, for each i such that F_{alpha_i}(x*, zeta*_i) = omega_i, one has that phi_{alpha_i}(x*) = omega_i.

4.4.2 Percentile Constraints and the WNJP

In this section, we investigate the use of VaR and CVaR constraints as applied to the formulations of the WNJP derived in Section 4.3 above. As we have seen, risk measures are generally designed for optimization under uncertainty. Since we are considering deterministic formulations of the WNJP, we can interpret each communication node i in M as a random scenario and apply the desired risk measures in this context. We begin with the OPTIMAL NETWORK COVERING formulation of the WNJP.
Suppose it is determined that jamming some fraction alpha in (0, 1) of the nodes is sufficient for effectively dismantling the network. This can be accomplished by the inclusion of alpha-VaR constraints in the original model. Let y : M -> {0, 1} be a surjection defined by

    y_i := 1, if node i is jammed; 0, otherwise.                           (4-58)

Recall from Section 4.3 that N = {1, ..., n} is the set of locations for the jamming devices, and x is a binary vector of length n where x_j = 1 if a jamming device is placed at location j. Then, to find the minimum number of jamming devices that will allow for covering alpha*100% of the network nodes with prescribed levels of jamming C_i, we must solve the following integer program:

    (ONC-VaR) Minimize  sum_{j=1}^{n} c_j x_j                              (4-59)
              s.t.     sum_{i=1}^{m} y_i >= alpha*m,                       (4-60)
                       sum_{j=1}^{n} d_ij x_j >= C_i y_i,  i = 1, 2, ..., m,  (4-61)
                       x_j in {0, 1},  j = 1, 2, ..., n,                   (4-62)
                       y_i in {0, 1},  i = 1, 2, ..., m.                   (4-63)

Notice that this formulation differs from the ONC formulation by the addition of the alpha-VaR constraint (4-60). According to (4-61), if y_i = 1 then node i is jammed. Lastly, we have from (4-60) that at least alpha*100% of the y variables are equal to 1. The optimal solution of the ONC-VaR formulation will provide the minimum number of jamming devices required to suppress communication on at least alpha*100% of the network nodes. The resulting solution may provide coverage levels comparable to those provided by the ONC model, while potentially reducing the number of jamming devices used. However, notice that for the remaining (1 - alpha)*100% of the nodes, for which y_i is potentially 0, there is no guarantee that they will receive any amount of coverage. Furthermore, the addition of the m binary variables adds a computational burden to a problem which is already NP-hard.

We can also reformulate the CONNECTIVITY INDEX PROBLEM to include Value-at-Risk constraints. Let rho : M -> Z+ be a surjection where rho_i returns the connectivity index of node i; that is, rho_i := sum_{j in M} y_ij. Further, let w : M -> {0, 1} be a decision variable having the property that if w_i = 1, then rho_i <= L.
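Before moving on, the coupled conditions (4-60)-(4-61) of ONC-VaR can be sanity-checked on a toy instance: given a fixed placement x, a node counts as jammed when its cumulative received energy reaches its threshold, and the placement is VaR-feasible when at least alpha*m nodes are jammed. Coordinates and the function name are hypothetical illustrations assuming lambda = 1.

```python
def var_feasible(nodes, locations, x, C, alpha, lam=1.0):
    """Check the alpha-VaR covering condition for a fixed placement x:
    y_i = 1 when node i's cumulative energy reaches C_i, and at least
    alpha * m nodes must be jammed."""
    def gain(v, p):
        return lam / ((v[0] - p[0])**2 + (v[1] - p[1])**2)
    jammed = sum(
        1 for i, v in enumerate(nodes)
        if sum(gain(v, locations[j]) * x[j] for j in range(len(x))) >= C[i])
    return jammed >= alpha * len(nodes)

nodes = [(0, 0), (2, 0), (4, 0), (6, 0)]
locs = [(1, 0), (5, 0)]
# A single jammer at (1, 0) jams only the two nearby nodes (gain 1 >= C = 1).
print(var_feasible(nodes, locs, x=(1, 0), C=[1] * 4, alpha=0.5))  # True
print(var_feasible(nodes, locs, x=(1, 0), C=[1] * 4, alpha=0.9))  # False
```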
With this, the connectivity index formulation of the WNJP with VaR percentile constraints is given as

    (CIP-VaR) Minimize  sum_{j=1}^{n} c_j x_j                              (4-64)
              s.t.     rho_i <= L w_i + M(1 - w_i),  i = 1, 2, ..., m,     (4-65)
                       sum_{i=1}^{m} w_i >= alpha*m,                       (4-66)
                       x_j in {0, 1},  j = 1, 2, ..., n,                   (4-67)
                       w_i in {0, 1},  i = 1, 2, ..., m,                   (4-68)
                       rho_i in Z+,  i = 1, 2, ..., m,                     (4-69)

where M in R is some large constant. Analogous to constraints (4-60)-(4-61), constraints (4-65)-(4-66) guarantee that at least alpha*100% of the nodes will have connectivity index at most L. As with the ONC-VaR formulation, there are two drawbacks of CIP-VaR. First, there is no control guarantee at all on any of the remaining (1 - alpha)*100% of the nodes, for which w_i = 0. Second, the addition of m binary variables adds a tremendous computational burden to the problem.

As an alternative to VaR, we now examine formulations of the WNJP using Conditional Value-at-Risk constraints [162]. We first consider the OPTIMAL NETWORK COVERING problem. In order to put this into our derived framework, we need to define the loss function associated with an instance of the ONC. We introduce the function f : {0, 1}^n x M -> R defined by

    f(x, i) := C_i - sum_{j=1}^{n} x_j d_ij.                               (4-70)

That is, given a decision vector x representing the placement of the jamming devices, the loss function is defined as the difference between the energy required to jam network node i and the cumulative amount of energy received at node i due to x. With this, we can formulate the ONC with the addition of CVaR constraints as the following integer linear program:

    (ONC-CVaR) Minimize  sum_{j=1}^{n} c_j x_j                             (4-71)
               s.t.     zeta + (1/((1 - alpha)m)) sum_{i=1}^{m} max{C_i - sum_{j=1}^{n} d_ij x_j - zeta, 0} <= 0,  (4-72)
                        zeta in R,                                         (4-73)
                        x_j in {0, 1},  j = 1, 2, ..., n,                  (4-74)

where C_min := min_i C_i is the minimal prescribed jamming level and d_ij is defined as above. The expression on the left-hand side of (4-72) is F_alpha(x, zeta). Further, from Theorem 15 we see that constraint (4-72) corresponds to having phi_alpha(x) <= omega = 0 [163].
Said differently, the CVaR constraint (4-72) implies that among the (1 - alpha)*100% worst (least) covered nodes, the average value of f(x, i) is at most 0. For the case when C_i = C for all i, it follows that the average level of jamming energy received by the worst (1 - alpha)*100% of the nodes exceeds C. The important point about this formulation is that we have not introduced additional integer variables to the problem in order to add the percentile constraints. Recall that in ONC-VaR we introduced m discrete variables. Since we only have to add m real variables to replace the max-expressions under the summation, together with one real variable zeta, this formulation is much easier to solve than ONC-VaR.

In a similar manner, we can formulate the CONNECTIVITY INDEX PROBLEM with the addition of CVaR constraints. As before, we first need to define an appropriate loss function. Recall that the connectivity index rho_i of node i is defined as the number of nodes reachable from i. We can then define the loss function f' for a network node i as the difference between the connectivity index of i, which results from the placement of the jamming devices according to x, and the maximum allowable connectivity index L. That is, let f' : {0, 1}^n x M -> Z be defined by

    f'(x, i) := rho_i - L.                                                 (4-75)

With this, the CIP-CVaR formulation is given as follows:

    (CIP-CVaR) Minimize  sum_{j=1}^{n} c_j x_j                             (4-76)
               s.t.     zeta + (1/((1 - alpha)m)) sum_{i=1}^{m} max{rho_i - L - zeta, 0} <= 0,  (4-77)
                        rho_i in Z+,  i = 1, 2, ..., m,                    (4-78)
                        zeta in R,                                         (4-79)

where rho_i is defined as above. As with the previous formulation, the expression on the left-hand side of (4-77) is F_alpha(x, zeta) from (4-47). Furthermore, we have from Theorem 15 that (4-77) corresponds to having phi_alpha(x) <= omega = 0. This constraint on CVaR provides that for the (1 - alpha)*100% worst cases, the average connectivity index will not exceed L.
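The CVaR constraints above all share the form F_alpha(x, zeta) <= 0 for some zeta, with F_alpha as in (4-47) specialized to m equally likely scenarios. The sketch below (hypothetical loss values, not from the case studies) evaluates that left-hand side and minimizes it over a grid of zeta values, mimicking the role the free variable zeta plays in the optimization.

```python
def cvar_constraint_value(losses, zeta, alpha):
    """Evaluate F_alpha(x, zeta) = zeta + (1/((1 - alpha) m)) *
    sum_i max(f(x, i) - zeta, 0) for a deterministic loss vector,
    i.e. the left-hand side of constraints like (4-72) and (4-77)."""
    m = len(losses)
    return zeta + sum(max(l - zeta, 0.0) for l in losses) / ((1 - alpha) * m)

# Toy losses f(x, i) = C_i - sum_j d_ij x_j for some fixed placement x:
# three well-jammed nodes and one slightly under-jammed node.
losses = [-3.0, -2.0, -1.0, 0.5]
alpha = 0.75  # the worst 25% here is exactly the single worst node
best = min(cvar_constraint_value(losses, z, alpha)
           for z in [l / 100.0 for l in range(-400, 200)])
print(round(best, 6))  # 0.5: CVaR of the worst 25% equals the worst loss
```

Since the minimized value is positive, no zeta satisfies the constraint, so this placement would be CVaR-infeasible; the solver would have to add jamming devices until the value drops to 0 or below.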
Again, we see that in order to include the CVaR constraint, we only need to add (m + 1) real variables to the problem. Computationally, CVaR provides a more conservative solution and will be much easier to solve than the CIP-VaR formulation, as we will see in the next section.

Table 4-1: Optimal solutions using the coverage formulation with regular and VaR constraints.

                       Regular Constraints    VaR Constraints
  Number of jammers    6                      4
  Level of jamming     100% of nodes          100% for 96% of nodes,
                                              85% (of reqd.) for 4% of nodes
  CPLEX time           0.81 sec               0.98 sec

4.5 Case Studies and Algorithms

In order to demonstrate the advantages and disadvantages of the proposed formulations for the WNJP, we present two case studies. The experiments were performed on a PC equipped with a 1.4 GHz Intel Pentium 4 processor and 1 GB of RAM, running the Microsoft Windows XP SP1 operating system. In the first study, an example network is given and the problem is modeled using the proposed coverage formulation. The problem is then solved exactly using the commercial integer programming software package CPLEX. Next, we modify the problem to include VaR and CVaR constraints and again use CPLEX to solve the resulting problems. Numerical results are presented and the three formulations are compared. In the second case study, we model and solve the problem using the connectivity index formulation. We then include percentile constraints and re-optimize. Finally, we analyze the results.

4.5.1 Coverage Formulation

Here we present two networks and solve the WNJP using the network covering (ONC) formulation. The first network has 100 communication nodes, and the number of available jamming devices is 36. The cost c_j of placing a jamming device at location j is equal to 1 for all locations. This problem was solved using the regular constraints and the VaR-type constraints. Recall that there is a set of possible locations at which jamming devices can be placed.
In these examples, this set of points constitutes a uniform grid over the battlespace. The placement of the jamming devices from each solution can be seen in Figure 4-3. The numerical results detailing the level of jamming for the network nodes are given in Table 4-1. Notice that the VaR solution called for 33% fewer jamming devices than the original problem while providing almost the same jamming quality.

Figure 4-3: Case study 1. The placement of jammers is shown when the problem is solved using the original and VaR constraints.

In the second example, the network has 100 communication nodes and 72 available jammers. This problem was solved using the regular constraints as well as both types of percentile constraints. The resulting graph is shown in Figure 4-4. The corresponding numerical results are given in Table 4-2.

Table 4-2: Optimal solutions using the coverage formulation with regular, VaR, and CVaR constraints.

                       Regular (all)      VaR (0.9 conf.)           CVaR (0.7 conf.)
  Number of jammers    9                  8                         7
  Jamming level        100% of nodes      100% for 90% of nodes,    100% for 57% of nodes,
                                          72% for 10% of nodes      90% for 20% of nodes,
                                                                    76% for 23% of nodes
  CPLEX time           15 sec             15 h 55 min 41 sec        11 sec

Figure 4-4: Case study 1 continued. The placement of jammers is shown when the problem is solved using VaR and CVaR constraints.

In this example, the VaR formulation requires 11% fewer jamming devices with almost the same quality as the formulation with the standard constraints. However, this formulation requires nearly 16 hours of computation time. The CVaR formulation gives a solution with very good jamming quality and requires 22% fewer jamming devices than the standard formulation and 11% fewer devices than the VaR formulation.
Furthermore, the CVaR formulation requires an order of magnitude less computing time than the formulation with VaR constraints.

4.5.2 Connectivity Formulation

We now present a case study in which the WNJP was solved using the connectivity index formulation (CIP). The communication graph consists of 30 nodes and 60 edges. The maximal number of jamming devices available is 36. We set the maximal allowed connectivity index of any node to be 3. Figure 4-5 shows the original graph with the communication links prior to jamming. The results of the VaR and CVaR solutions are seen in Figure 4-6.

Figure 4-5: Case study 2: original graph.

Figure 4-6: A comparison of the percentile constrained solutions. In both cases, the triangles represent the placement of jamming devices. (a) VaR solution. (b) CVaR solution.

The confidence level for both the VaR and CVaR formulations was 0.9. Both formulations provide optimal solutions for the given instance. The resulting computation time for the VaR formulation was 15 minutes 34 seconds, while the CVaR formulation required only 7 minutes 33 seconds.

4.6 Concluding Remarks

In this chapter we introduced the deterministic WIRELESS NETWORK JAMMING PROBLEM and provided several formulations using node covering constraints as well as constraints on the connectivity indices of the network nodes. We also incorporated percentile constraints into the derived formulations. Further, we provided two case studies comparing the two formulations with and without the risk constraints. With the introduction of this problem, we also recognize that several extensions can be made. For example, all of the formulations presented in this chapter assume that the topology of the enemy network is known. It is reasonable to assume that this is not always the case. In fact, there may be little or no a priori information about the network to be jammed. In this case, stochastic formulations should be considered and analyzed.
This brings us to the topic of the next chapter, in which we consider the case when no information is known about the network to be jammed other than its relative location inside a planar region.

CHAPTER 5
JAMMING COMMUNICATION NETWORKS UNDER COMPLETE UNCERTAINTY

5.1 Introduction

This chapter describes a problem of interdicting/jamming communication networks in uncertain environments [44]. Interdiction of communication networks is an important application but, as previously mentioned, has not been intensively researched despite the vast amount of work on optimizing telecommunication systems [155]. Most papers on network interdiction are about preventing jamming and analyzing network vulnerability [68, 134]. To our knowledge, the only literature on network interdiction involving the optimal placement of jamming devices is the work of Commander et al. [45] (presented in Chapter 4), in which several mathematical programming formulations were given for the deterministic WIRELESS NETWORK JAMMING PROBLEM. The only other thoroughly studied cases are problems of minimizing the maximal network flow and maximizing the shortest path between given nodes via arc interdiction using limited resources. Cormican et al. [49], Israeli et al. [110], and Wood [174] studied stochastic and deterministic cases and suggested efficient heuristics. A similar setup, but with a different objective, was recently studied by Held in 2005 [95]. This problem is particularly important in the global war on terrorism, as improvised explosive devices (IEDs) continue to plague the coalition forces in Iraq. These homemade bombs are almost always detonated by some form of radio frequency device such as cellular telephones, pagers, and garage door openers. The ability to suppress radio waves in a region will help prevent casualties resulting from IEDs. Furthermore, since most situations arise in military battlefield scenarios, exact information about the topology of the adversary's network is unknown.
Thus, deterministic network interdiction approaches have limited applicability. In this case, a stochastic approach involving some risk measure for evaluating the efficiency of the jamming device placement may be helpful. However, choosing an appropriate risk measure is a challenging problem in its own right. In this chapter, we consider an extreme case in which there is no a priori information about the topology of the network to be jammed. The only information used in our approach is a bounding area containing the communication network. The organization of this chapter is as follows. Sections 5.2 and 5.3 give a formal description of the problem and the jamming model. We derive bounds and prove a convergence result for the case of complete uncertainty in Section 5.4. There we also demonstrate the advantage of the proposed method compared to the simplified case which does not account for the cumulative effect of the jamming devices. In Section 5.5 we present a randomized local search and illustrate its effectiveness using the bounds derived in the previous section. Section 5.6 provides some concluding remarks.

5.2 Descriptions, Assumptions, and Definitions

In general, the problem of jamming a communication network is to determine the minimum number of jamming devices required to interdict or suppress the functionality of the network. Starting with this general statement, more specific ones can be obtained by considering various types of jamming devices and interdiction criteria. Depending on the given information about the communication nodes and the network topology, stochastic or deterministic setups can be constructed [45]. Below we provide the assumptions and basic definitions of the considered framework. We consider radio-transmitting communication networks and jamming devices operating with electromagnetic waves. We assume that the jamming devices have omnidirectional antennas and emit electromagnetic waves in all directions with the same intensity.
We also assume that jamming power decreases reciprocally with the squared distance from a device.

Definition 12. A point (communication node) X is said to be jammed, or covered, if the cumulative energy received from all jamming devices exceeds some threshold value E:

    sum_i lambda / R^2(X, i) >= E,                                         (5-1)

where lambda in R and R(X, i) represents the distance from X to jamming device i. This condition can be rewritten as

    sum_i 1 / R^2(X, i) >= 1 / L^2,                                        (5-2)

where L = sqrt(lambda / E). The latter inequality implies that a single jamming device covers any point inside a circle of radius L.

Definition 13. A connection (arc) between two communication nodes is considered blocked if either of the two nodes is covered.

Usually, interdiction efficiency is determined by the fraction of covered nodes and/or arcs. More complicated criteria are based on the amount of information transmitted through the network or the length of the shortest path between pairs of nodes. We do not consider a specific criterion because we are interested in the case of complete uncertainty. Thus, we assume that we have no knowledge of the network topology, including information about the node coordinates.

5.3 Problem Formulation

If we ignore the cumulative effect of the jamming devices, then the problem reduces to determining the optimal covering of an area on a plane by circles. This covering problem was solved in 1936 by Kershner [113]. The current chapter shows that accounting for the cumulative effect of all the devices can lead to significant savings in cost, i.e., in the required number of jamming devices. Since we assume no information is known about the network to be jammed, the only reasonable approach is to cover all points in some area known to contain the network. This approach would also be appropriate when some information about the network is available but is potentially inaccurate. We consider the case when a communication network is located inside a square. However, all of the following theorems can be formulated for a more general case.
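The cumulative effect in Definition 12 is precisely what distinguishes this setting from plain circle covering: a point outside every individual device's radius L can still be jammed by the joint energy of several devices. A minimal numerical illustration (hypothetical coordinates, assuming lambda = E = 1 so that L = 1):

```python
def covered(point, devices, lam=1.0, E=1.0):
    """Definition 12: point X is jammed when sum_i lam / R(X, i)^2 >= E,
    equivalently sum_i 1/R^2 >= 1/L^2 with L = sqrt(lam / E)."""
    total = sum(lam / ((point[0] - d[0])**2 + (point[1] - d[1])**2)
                for d in devices)
    return total >= E

# With lam = E = 1 the single-device jamming radius is L = 1.
print(covered((0.9, 0.0), [(0.0, 0.0)]))              # True: inside radius L
print(covered((1.2, 0.0), [(0.0, 0.0)]))              # False: outside radius L
# The same point IS jammed once a second device contributes energy,
# even though it lies outside the radius of each device individually.
print(covered((1.2, 0.0), [(0.0, 0.0), (2.4, 0.0)]))  # True
```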
For example, to obtain results when the network is contained inside a rectangular region of the plane, the only modification required in the calculations is an appropriate updating of the summation bounds.

Figure 5-1: Uniform grid with jamming devices.

An optimal covering is one which contains the minimum number of jamming devices that jam all points in the particular area of interest. However, finding a globally optimal solution to the general problem is difficult [45]. Therefore, we consider a subproblem of covering a square with jamming devices located at the nodes of a uniform grid. The solution to this problem will provide a feasible solution (optimal in certain cases) to the general problem. Suppose the grid step size is R. If the side length a of the square is not a multiple of R, then we cover a bigger square with a side of length R(floor(a/R) + 1). See Figure 5-1 for an example. The optimal solution to the considered problem is a uniform grid with the largest possible step size which covers the square. The problem remains nontrivial, even for this simplified setup.

Lemma 8. For any covering of a square with a uniform grid, a point which receives the least amount of jamming energy lies inside a corner grid cell (see Figure 5-2).

Figure 5-2: The least covered point is shown in the lower left grid cell.

Proof: Consider a corner cell S0 and an arbitrary non-corner cell S1. We prove that for any point P in S1, there is a corresponding point P' in S0 such that E(P) >= E(P'), where E(X) is the cumulative jamming energy from all devices received at point X. Let P' be the symmetric correspondent of point P inside S0; here, symmetry implies that P and P' are equidistant from the sides of their respective cells. We split the square into four rectangles A, B, C, and D, where A is the rectangle containing the cells S0 and S1 (see Figure 5-3).

Figure 5-3: Square decomposition.

Denote the other two corner cells of rectangle A by C1 and C2.
Let T1 and T2 be points inside C1 and C2, respectively, such that T1 P T2 P' is a rectangle with sides parallel to the sides of the square, as in Figure 5-4. Using symmetry, we get the following relations:

E(P', A) = E(P, A), (5-3)
E(P', B) ≤ E(T1, B) = E(P, B), (5-4)
E(P', D) ≤ E(T2, D) = E(P, D), (5-5)
E(P', C) ≤ E(P, C), (5-6)

Figure 5-4: Equivalent points.

where E(X, I) is the cumulative jamming energy received by point X from all devices inside rectangle I. Relations (5-3)-(5-6) imply

E(P') = E(P', A) + E(P', B) + E(P', C) + E(P', D) ≤ E(P, A) + E(P, B) + E(P, C) + E(P, D) = E(P), (5-7)

and the lemma is proved. □

5.4 Upper and Lower Bounds

Below we formulate theorems giving an upper bound R̄ and a lower bound R for the optimal grid step size R*: R ≤ R* ≤ R̄. In all formulated theorems, we consider covering a square with side length a. Recall from Definition 12 that a single device jams all points within distance L, so the required cumulative jamming energy at each point is 1/L².

Theorem 16. The unique solution R of the equation

(1/(2R²)) (π ln(a/R + 1) + π/2 − π³/12) = 1/L² (5-8)

is a lower bound for the optimal grid step size R*.

Proof: In Lemma 8, we proved that the least covered point lies inside a corner cell. Consider now a grid with step size R. Without loss of generality, let P(x₀, y₀) be a point inside the bottom left corner cell, as shown in Figure 5-5.

Figure 5-5: Cumulative emanation of jamming devices.

Let I₁, I₂, and I₃ denote the cumulative jamming energy received at P from the jamming devices located in regions C, A, and B, respectively. Similarly, let I₄ be the jamming energy from the jamming device located at the bottom left node O. With this, the jamming energy received at point P is calculated through the expression

E(P) = I₁ + I₂ + I₃ + I₄, (5-9)

where, with T denoting the number of grid cells per side,

I₁ = ∑_{i=0}^{T−1} ∑_{j=0}^{T−1} 1 / ((R − x₀ + iR)² + (R − y₀ + jR)²), (5-10)
I₂ = ∑_{i=0}^{T−1} 1 / ((R − x₀ + iR)² + y₀²), (5-11)
I₃ = ∑_{j=0}^{T−1} 1 / ((R − y₀ + jR)² + x₀²), (5-12)
I₄ = 1 / (x₀² + y₀²). (5-13)

Notice that, since 0 ≤ x₀, y₀ ≤ R, each denominator in (5-11) and (5-12) is at most R²((1 + i)² + 1), so we can estimate

I₂ + I₃ ≥ (2/R²) ∑_{i=0}^{T−1} 1/(1 + (1 + i)²). (5-15)

Figure 5-6: Integral lower bound.

This follows from the fact that

∑_{i=0}^{T} f(i) ≥ ∫₀^{T+1} f(x) dx, (5-16)

where f(x) is a decreasing function. This property can be easily established geometrically.
Notice in Figure 5-6 that the left side of inequality (5-16) represents the shaded region in the figure, while the right side represents the area under f(x). Continuing from (5-15) above, we have

∑_{i=0}^{T−1} 1/(1 + (1 + i)²) ≥ ∫₀^T dx/(1 + (1 + x)²) = arctan(T + 1) − π/4. (5-17)

Here and further, we use the inequalities given below:

arctan(x) < x, x > 0, (5-18)
arctan(x) > x − x³/3, 0 < x < 1. (5-19)

In particular, arctan(T + 1) = π/2 − arctan(1/(T + 1)) > π/2 − 1/(T + 1). Now, combining (5-15) and (5-17), we obtain

I₂ + I₃ ≥ (2/R²) (π/4 − 1/(T + 1)). (5-20)

We also have the following bound for I₄, which follows directly from x₀² + y₀² ≤ 2R²:

I₄ ≥ 1/(2R²). (5-21)

For estimating I₁ we use a property similar to (5-16), but in a higher dimension. Namely,

∑_{i=0}^{N} ∑_{j=0}^{N} f(i, j) ≥ ∫₀^{N+1} ∫₀^{N+1} f(x, y) dx dy, (5-22)

where, as above, f(x, y) is a decreasing function of x and y. Using this inequality, we derive the following bound for I₁:

I₁ ≥ (1/R²) ∫₀^T ∫₀^T dx dy / ((1 + x)² + (1 + y)²). (5-23)

The inner integral evaluates to a difference of arctangents (5-24); bounding these with (5-18) and (5-19) and integrating once more yields the estimate (5-25), whose leading term is (π/(2R²)) ln(T + 1). Summing (5-20), (5-21), and (5-25), we obtain a lower estimate of the total coverage at point P. That is,

E(P) ≥ (1/(2R²)) (π ln(T + 1) + π/2 − π³/12 + ε(T)), (5-26)

where the correction term ε(T) vanishes as T grows. Substituting T = ⌊a/R⌋ and absorbing the corrections, every point of the corner cell is guaranteed the energy

f(R) := (1/(2R²)) (π ln(a/R + 1) + π/2 − π³/12). (5-27)

Since f(R) is monotonically decreasing on (0, +∞), the largest R satisfying f(R) ≥ 1/L² is the unique solution R of the equation

f(R) = 1/L². (5-28)

Thus, a uniform grid with step size R jams any point P inside a corner cell. According to Lemma 8, the grid then jams the least covered point in the square, implying that the whole square is jammed. Thus we have the desired result. □

Since the function f(R) = (1/(2R²))(π ln(a/R + 1) + π/2 − π³/12) is monotonic, equation (5-8) can be easily solved using a numerical procedure such as a binary search. Therefore, using (5-8), we can obtain a step size R such that the corresponding uniform grid covers the entire square.
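The binary search just mentioned can be sketched in a few lines of Python. The particular form of f used below (and the helper name lower_bound_step) follows the reconstruction of (5-8) given in the text and should be read as an assumption rather than the exact expression used in the experiments; the bisection logic itself only relies on f being strictly decreasing.

```python
import math

def lower_bound_step(a, L, tol=1e-9):
    """Bisection for the largest grid step R with f(R) >= 1/L**2.

    f is the corner-cell energy underestimate of Theorem 16; the
    constants below are the reconstructed (assumed) ones from (5-8).
    """
    def f(R):
        return (math.pi * math.log(a / R + 1)
                + math.pi / 2 - math.pi ** 3 / 12) / (2 * R ** 2)

    target = 1.0 / L ** 2
    lo, hi = 1e-6, a           # f(lo) is huge, f(hi) is tiny: root bracketed
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) > target:    # point still over-jammed: the step may grow
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because f is strictly decreasing, the bracket [lo, hi] always contains the root and the loop converges in roughly log₂((hi − lo)/tol) iterations; any other root-bracketing method could be substituted.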
Further, it is easy to see that the number of jamming devices in the grid does not exceed

N₁ = (⌊a/R⌋ + 2)². (5-29)

Table 5-1: Comparing N₂/N₁ for various values of k.

k      x = R/L   N₂/N₁
10²    2.44      2.3
10⁴    3.54      4.8
10⁶    4.40      7.5
10⁸    5.14      10.2

A more straightforward solution of the initial problem could be based on the property that a jamming device covers all the points inside a circle of radius L, as mentioned in Definition 12. Using that, we could reduce the problem to finding the optimal covering of a square with circles of radius L. A direct result from [113] (mentioned in [134]) is that, in the limit, the minimum number of circles of radius L needed to cover an area a² is

N₂ = 2a² / (3√3 L²). (5-30)

To compare the approaches, we consider the ratio

N₂/N₁ ≈ 2R² / (3√3 L²) = (2/(3√3)) x², (5-31)

where x = R/L and k = a/L. Using these substitutions, equation (5-8) can be rewritten in terms of the variables x and k as follows:

(1/(2x²)) (π ln(k/x + 1) + π/2 − π³/12) = 1. (5-32)

By solving (5-32) for different values of k, one can find the corresponding values of x and N₂/N₁. To evaluate the advantage of the uniform grid approach over the naive one, we provide some computational results in Table 5-1. From the table, we see that as k increases, the advantage of using our approach becomes more significant. In fact, it can be proved that lim_{k→∞} N₂/N₁ = ∞. This will follow as a corollary of Theorem 18. To establish the quality of the lower bound rigorously, we need to first establish a similar result for an upper bound. This follows in the next theorem.

Theorem 17. The unique solution of the equation

(1/R²) ((π/2) ln(2a/R + 3) + π/2 + 19/6) = 1/L² (5-33)

is an upper bound R̄ for the optimal grid step size R*.

Proof: Let P(x₀, y₀) be the least jammed point, which lies inside a corner cell according to Lemma 8. Without loss of generality, as in the proof of Theorem 16, we assume that P is inside the bottom left corner cell. The jamming energy received at point P is calculated through the expressions (5-9)-(5-13). Since P is the least covered point, the following inequality holds.
Since P is the least covered point, E(P) does not exceed the energy received at any fixed reference point of the corner cell; evaluating the energy at the point x = y = 0 of the local cell coordinates gives

E(P) ≤ I'₁ + I'₂ + I'₃ + I'₄, (5-34)

where the primed quantities are the counterparts of (5-10)-(5-13), the distances now being measured from the evaluation point, so that odd multiples of R/2 appear in the denominators (5-35)-(5-38). The sums I'₂ and I'₃ can be estimated through integrals, similarly to the techniques used in the proof of Theorem 16, this time from above. The following inequality holds:

∑_{i=1}^{T} f(i) ≤ ∫₀^{T} f(x) dx, (5-39)

where f(x) is a decreasing function. This property can also be proven geometrically; Figure 5-7 represents a graphical interpretation of this relation. The left side of the inequality is represented by the shaded area, and the right side of (5-39) is the area under f(x).

Figure 5-7: Integral upper bound.

With this property, (5-36) yields the overestimate (5-40) for I'₂ (and, by symmetry, for I'₃); the arctangent inequalities (5-18) and (5-19) then give the explicit estimate (5-41). To estimate I'₁, a property similar to (5-39) is used in two dimensions. This inequality is given by

∑_{i=1}^{2T} ∑_{j=1}^{2T} f(i, j) ≤ ∫₀^{2T} ∫₀^{2T} f(x, y) dx dy, (5-42)

where f(x, y) is a decreasing function of x and y. With the above inequality, I'₁ is bounded by a double integral (5-43); evaluating the inner integral in terms of arctangents (5-44) and bounding the remaining single integral (5-45) produces an overestimate for I'₁ whose leading term is (π/(2R²)) ln(2T − 1) (5-46). Recall that equation (5-34) stated E(P) ≤ I'₁ + I'₂ + I'₃ + I'₄.
So, using the expression for I'₄ given in (5-38) and the overestimates for I'₁, I'₂, and I'₃ derived in (5-46), (5-40), and (5-41), respectively, we obtain

E(P) ≤ (1/R²) ((π/2) ln(2T − 1) + π/2 + 19/6). (5-47)

Finally, letting T = ⌈a/R⌉ + 1 ≤ a/R + 2, we get

E(P) ≤ (1/R²) ((π/2) ln(2a/R + 3) + π/2 + 19/6) =: f̄(R). (5-48)

The function f̄(R) is monotone; hence the equation f̄(R) = 1/L² has a unique solution R̄. Inequality (5-48) implies that a grid with step size greater than R̄ does not cover the entire square; that is, there exists at least one point P that remains uncovered. Thus R̄ is an upper bound for the optimal grid covering problem: since the optimal grid step size satisfies R* ≤ R̄, the theorem is proved. □

In Figure 5-8, we see an example in which we are covering a 40 × 40 square and the required jamming level at each point is 3.0 units. In part (a), we see the coverage associated with the number of devices required by the lower bound of Theorem 16. In this case, 20² = 400 jamming devices are used to cover the area. Notice that there are no holes in the region. This, together with the scallop shell outside the bounding box, indicates that all points within the region are covered. In part (b), we see the coverage corresponding to the placement of the jamming devices on a uniform grid according to the upper bound of Theorem 17. Here, the required number of devices is 19² = 361. Notice the holes located at the four corners of the region, indicating that these points are uncovered. This validates the theoretical results obtained in Theorem 16 and Theorem 17.

Figure 5-8: Comparison of the lower and upper bounds. (a) Coverage when the jamming devices are placed according to the lower bound from Theorem 16 (400 devices, no holes). (b) Coverage under the upper bound from Theorem 17 (361 devices, uncovered corners).

Now that we have established both upper and lower bounds for an optimal grid step size, we can determine the quality of the bounds. The result is obtained in the following theorem.

Theorem 18.

lim_{a→∞} R̄/R = 1, (5-49)
where R and R̄ are the bounds obtained from equations (5-8) and (5-33), respectively. Moreover, the following inequality holds:

1 < R̄/R < 1 + c/ln(a), (5-50)

for constants M ∈ ℝ, c ∈ ℝ, such that R̄ > M.

Proof: By letting x := R/L and y := R̄/L, equations (5-8) and (5-33) can be respectively rewritten in the scaled variables x, y, and k = a/L as (5-51) and (5-52). To prove the theorem, we need to show that

lim_{a→∞} y/x = 1, (5-53)

where x > 0 and y > 0 are the solutions of (5-51) and (5-52), respectively. From (5-52), we obtain an exponential upper estimate for y of the form (5-54)-(5-56), with explicit constants collecting the additive terms of (5-52); combining it with (5-51) yields the relation (5-57) between x and y. Since yL and xL are upper and lower bounds on the optimal step size, respectively, the following relation holds:

y/x ≥ 1. (5-60)

With (5-51) and (5-60) above, we can also conclude that

x → ∞ and y → ∞ as a → ∞. (5-61)

For all sufficiently large M ∈ ℝ there exists Q ∈ ℝ such that (5-57) can be reduced to the bound (5-62); moreover, for y > M, the following inequality holds:

(y/x)² − 1 < c/ln(a). (5-63)

Assume, for the sake of contradiction, that the inequality in (5-63) does not hold for some (x*, y*); that is, assume (y*/x*)² − 1 ≥ c/ln(a). Using (5-62) we then obtain y*/x* < 1, which contradicts (5-60). Applying (5-60) and (5-63) we get

1 ≤ (y/x)² < 1 + c/ln(a), for y > M. (5-65)

Letting a tend to ∞ and taking (5-61) into account, we see that in fact

lim_{a→∞} y/x = 1. (5-66)

Finally, by using (5-65) and (5-51), the relation

1 < y/x < 1 + c/ln(a) (5-67)

can be obtained for some constant c ∈ ℝ when y > M. Thus, the theorem is proved. □

5.5 Heuristic for Uncertain Jamming

Here, we describe the implementation of a randomized local search heuristic for the case of jamming under complete uncertainty. Recall that the subproblem for which the bounds in Theorem 16 and Theorem 17 were derived places n jamming devices, where n is a perfect square.
The obvious drawback of this technique is the situation where, for example, R requires 25 jamming devices and R̄ calls for 16, while the optimal solution to the general problem is 18. Using the uniform grid approach would then require nearly 40% more devices than are needed to cover the region. Pseudocode for the local search is given in Figure 5-9. The heuristic takes as input the size of the region containing the network (region). The second input parameter (ubJammers) is the number of jamming devices required to cover the area of region by the lower bound on the grid step (and hence an upper bound on the number of jamming devices) derived in Theorem 16. In line 1, the incumbent solution (X*) is set to ubJammers. The while loop from lines 2-12 is where the local optimization takes place. In line 3, the jamming devices are randomly scattered within the square region known to contain the network. Next, in the while loop from lines 5-8, those points which receive the least amount of jamming energy are assigned to the set P.

procedure randLocalSearch(region, ubJammers)
1    X* ← ubJammers
2    while stoppingCriterion = FALSE do
3        randScatter(region, X*)
4        localOpt ← FALSE
5        while localOpt = FALSE do
6            P ← leastJammedPoints(region)
7            moveJammers(P)
8        end
9        if allJammed = TRUE then
10           X* ← X* − 1
11       end
12   end
13   return X*
end procedure randLocalSearch

Figure 5-9: Pseudocode for the randomized local search for uncertain jamming.

Then, the jamming devices are moved along a gradient towards the points in P until these points are covered. Several methods are available for the function moveJammers, including the method of steepest descent [17] or the more efficient conjugate gradient method [91, 92]. The heuristic then determines whether all points have been jammed. If this is the case, then in line 10 we decrement the number of jamming devices by one and return to line 2.
If all points are not jammed, we repeat the loop until either all points are covered or a stopping criterion is met, in which case we exit the while loop. The final value of X* is returned as the solution in line 13. An example can be seen in Figure 5-10. For this example, a point requires 3 units of jamming energy before it is declared to be jammed. Figure 5-10(a) represents the placement of the jamming devices according to the optimal uniform grid solution from Theorem 16; in this case, 400 devices are required. In Figure 5-10(b) we see the associated coverage from this solution. The scallop shell around the bounding box containing the network indicates that, in fact, the entire area is jammed, but perhaps more devices are used than are necessary. In subfigure (c), we see the placement of the 298 jamming devices according to the heuristic solution.

Figure 5-10: Example of heuristic versus uniform placement. (a) Device placement on a uniform grid; the total number of jamming devices required is 400. (b) Coverage display of the uniform placement. (c) Heuristic jammer placement; the total number of required devices is 298. (d) Heuristic coverage plot.

Notice in Figure 5-10(d) that the coverage outside the bounding box is reduced significantly while all points in the region are still jammed. The heuristic reduces the required number of devices by 25.5%. Numerical results for several regions with various required jamming levels can be seen in Table 5-2. In this table, we list the side length of the region (a), the required jamming level (L), and the number of jamming devices required by the upper and lower bounds computed using Theorem 16 and Theorem 17.
Next, we list the required number of jamming devices corresponding to the optimal grid step size, which was determined using a binary search between the upper and lower bounds.

CHAPTER 6
COOPERATIVE COMMUNICATION IN MOBILE AD HOC NETWORKS

6.1 Introduction

In many situations, multiple "agents" work together to achieve a shared goal. Cooperation between the agents is important to improve the efficiency and effectiveness with which their goal is reached. This idea also holds in information systems. That is, in wireless networks, groups of agents are often employed to perform a number of cooperative tasks, including the synchronization of information among a set of users and the accomplishment of missions in remote areas. In such situations, it is useful to maintain collaboration among the agents performing the cooperative tasks in order to maximize the probability of success. Communication is an important measure of collaboration between entities involved in a mission. It allows different agents to perform the set of tasks that have been planned and, at the same time, to implement changes in case an unexpected event occurs. Moreover, high communication levels are necessary in order to perform complicated tasks where several agents must be coordinated. We describe in this section the main concepts found in the literature related to optimizing communication time in ad hoc network systems. One of the main difficulties concerning the maintenance of communication in an ad hoc network is determining the location of agents at a given moment in time. Several methods have been proposed for improving localization in this situation. Moore et al. [132], for example, presented a linear time algorithm for determining the location of nodes in an ad hoc network in the presence of noise. Other algorithms for the same problem have been suggested by Capkun et al. [29], Doherty et al. [60], and Priyantha et al. [153].
While such algorithms can be useful in determining the correct location of nodes, they are only able to provide information about current positions and are not meant to optimize locations for a specific objective. Packet routing, on the other hand, has been previously studied with the goal of optimizing some common parameters, such as latency, cost of the resulting route, and energy consumed. For example, Butenko et al. [25] proposed a new algorithm for computing a minimum size backbone for wireless networks, based on a number of related algorithms for this problem [23, 33, 129]. Another problem involving the minimization of an objective function over all feasible positions of agents in an ad hoc network is the so-called location error minimization problem. In the LOCATION ERROR MINIMIZATION PROBLEM, given a set of measurements of node positions (taken from different sources), the goal is to determine a set of locations for wireless nodes such that errors in the given measurements are minimized. This problem has been formulated and solved using mathematical programming techniques, by relaxing the general problem into a semidefinite programming model [19, 20, 46, 165]. There are many applications of the described system. These include situations where communication in a region is required, but no topologically fixed transmission system exists. Specific examples include emergency/rescue operations, disaster relief, battlefield operations, and Bluetooth systems [137]. In each of these examples the goals and objectives are fixed in advance, and communication is important for the attainment of these goals. The current technologies used in these types of applications allow improved communication systems that rely on ad hoc wireless protocols. However, it is a combinatorial problem to decide how to maintain communication for the maximum possible time when faced with the inherent restrictions of wireless systems.
Advances in wireless communication and networking have led to the development of new network organizations based on autonomous systems. Among the most important examples of such network systems are mobile ad hoc networks (MANETs). MANETs are composed of a set of loosely coupled mobile agents which communicate using a wireless medium via a shared radio channel. Agents in the network act as both clients and servers and use various multihop protocols to route messages to other users in the system [137, 141]. Unlike traditional cellular systems, mobile ad hoc networks have no fixed topology. Moreover, in a MANET the topology changes each time an agent changes its location. Thus, the communication between the agents depends on their physical location and their particular radio devices. Interest in MANETs has surged in recent years, due to their numerous civilian and military applications [155]. MANETs can be successfully implemented in situations where communication is necessary, but no fixed telephony system exists. Real applications abound, especially when considering adversarial environments, such as the coordination of unmanned aerial vehicles (UAVs) and combat search and rescue groups. Other examples include the coordination of agents in a hostile environment, sensing, and monitoring. More generally, the study of protocols and algorithms for MANETs is of high importance for the successful deployment of sensor networks, which are themselves composed of a large number of autonomous processors that coordinate to achieve some higher level task such as sensing and monitoring. The lack of a central authority in MANETs leads to several problems in the areas of routing and quality assurance [25]. Many of these problems can be viewed as combinatorial in nature, since they involve finding sets of discrete objects satisfying some definite property, such as, for example, connectedness or minimum cost.
Among the challenging problems encountered in MANETs, we can cite routing, or path planning, as one of the most difficult to solve, because of the temporary nature of communication links in such a system. In fact, as nodes move around, they dynamically define topologies for the entire network. In such an environment, it is difficult to determine if two nodes are connected, since any of the intermediate nodes may leave the network at any time. This scenario makes clear the importance of close coordination among groups of nodes if a definite goal is to be attained. If at all possible, a plan must be devised such that communication among nodes is maintained for as long as possible. With this objective in mind, we study in this chapter a problem involving the coordination of wireless users involved in a mission that requires each user to go from an initial location to a target location. The problem consists of maximizing the amount of connectivity among the set of users, subject to constraints on the maximum distance traveled by the users, as well as restrictions on what types of movement can be performed. The resulting problem, called the COOPERATIVE COMMUNICATION PROBLEM ON MOBILE AD HOC NETWORKS (CCP-MANET), is formally defined in a later subsection. This chapter is organized as follows. Section 6.2 begins with a brief review of some of the previous work in the areas related to cooperative communication in wireless systems. Then we derive the discrete version of the CCP-MANET, referred to as CCP-MANET-D. Specifically, we formulate the problem as a combinatorial problem on a graph and provide an integer programming formulation. In Section 6.3, we provide a suite of heuristic algorithms for the problem, beginning with a simple construction algorithm and culminating with the implementation of a GRASP with path-relinking for the CCP-MANET-D. Computational results are presented and the methods compared in Subsection 6.3.6.
In Section 6.4 we derive a continuous formulation called CCP-MANET-C. A continuous version is more likely to model real-world scenarios, as we no longer rely on the underlying graph structure and the agents are free to move subject to kinematic constraints. Finally, concluding remarks and future research ideas are presented in Section 6.5.

6.2 Discrete Formulations (CCP-MANET-D)

As mentioned above, ad hoc networks represent an extremely active area of research [155]. Several problems related to routing, power control, and accurate position update have been studied in the last few years [136]. In terms of routing, one of the main problems in ad hoc networks is the computation of a network backbone. The objective is to find a subset of nodes with a small number of elements that can be used to send routing information. Such a structure simplifies the management tasks required by a routing protocol. The backbone computation problem can be modeled as a CONNECTED DOMINATING SET (CDS) problem. Here, the objective is to find a set of minimum size forming a connected backbone, with the additional property that each network client can directly reach this set. The CDS problem, which can be modeled using unit graphs, has several approximation algorithms [23, 26, 33], all of which are based on approximation properties of the MAXIMUM INDEPENDENT SET problem on planar, unit graphs [12]. The use of discrete optimization techniques to maximize connectivity in ad hoc systems is a relatively new idea, put forth by Oliveira and Pardalos in [137]. We now present some discrete formulations for the CCP-MANET-D. Consider a graph G = (V, E), where V = {v₁, v₂, ..., vₙ} represents the set of candidate positions for the wireless agents. Suppose that a node in G is connected only to those nodes that can be reached in one unit of time.
Let U represent the set of agents, S = {s₁, s₂, ..., s_|U|} ⊆ V the set of initial positions, and D = {d₁, d₂, ..., d_|U|} ⊆ V the set of destination nodes. Let N(v) ⊆ 2^V, for v ∈ V, represent the set of neighbors of node v in G. Given a time horizon T, the objective of the problem is to determine a set of routes for the agents in U such that each agent uᵢ ∈ U starts at a source node sᵢ and finishes at the destination node dᵢ ∈ D after at most T units of time. For each agent u ∈ U, the function p_t : U → V returns the position of the agent at time t ∈ {1, 2, ..., T}, where T is the time limit by which the agents must reach their destinations. Then at each time instant t, an agent u ∈ U can either remain in its current location, i.e. p_{t−1}(u), or move to a node in N(p_{t−1}(u)). We can represent a route for an agent u ∈ U as a path P = {v₁, v₂, ..., v_k} in G, where v₁ = s_u, v_k = d_u, and, for i ∈ {2, ..., k}, vᵢ ∈ N(v_{i−1}) ∪ {v_{i−1}}. Finally, if {pᵢ} is the set of trajectories for the agents, we are given a corresponding vector L such that ℓᵢ is a threshold on the size of path pᵢ. This value is typically determined by fuel or battery life constraints on the wireless agents. We now have to decide what the actual measure of connectivity among the agents on their trajectories will be. Obviously, the best possible situation would be the case when all agents in the network are linked. However, in ad hoc systems, this is unlikely due to limits on power and fuel. As noted in [137], one possible metric of connectivity in a graph is the number of connected components resulting from the trajectories of the agents. This measure has a major drawback, however, which is easily demonstrated with the following example. Consider a graph consisting of n nodes. If the number of connected components is 2, then we are unsure how to interpret this. It is possible that the two components contain ⌈n/2⌉ and ⌊n/2⌋ nodes, respectively.
On the other hand, it is also possible that the two components contain 1 and n − 1 nodes. These are two very different situations that can arise under the connected components metric; with an objective value of simply 2, we have no intuition about the resulting network structure. Another, more objective, measure of connectivity is as follows. Assume that the agents have omnidirectional antennas and that two agents in the network are connected if the distance between them is less than some radius r. More specifically, let δ : V × V → ℝ represent the Euclidean distance between a pair of nodes in the graph. Then we can define a function c : V × V → {0, 1} such that

c(p_t(uᵢ), p_t(uⱼ)) = 1, if δ(p_t(uᵢ), p_t(uⱼ)) ≤ r,
c(p_t(uᵢ), p_t(uⱼ)) = 0, otherwise. (6-1)

With this, we can define the CCP-MANET-D as the following optimization problem:

max ∑_{t=1}^{T} ∑_{uᵢ, uⱼ ∈ U} c(p_t(uᵢ), p_t(uⱼ)) (6-2)
s.t. ∑_{j=2}^{k} δ(v_{j−1}, v_j) ≤ ℓᵢ, ∀ pᵢ = {v₁, v₂, ..., v_k}, (6-3)
p₁(u) = s_u, ∀ u ∈ U, (6-4)
p_T(u) = d_u, ∀ u ∈ U, (6-5)

where constraint (6-3) ensures that the length of each path pᵢ is less than or equal to its maximum allowed length ℓᵢ. The reader is referred to the paper by Oliveira and Pardalos [137] for additional integer programming formulations in which other objectives are considered and discussed. We finish this section by providing two results related to the computational complexity of the problem.

Theorem 19. Finding an optimal solution for an instance of the COOPERATIVE COMMUNICATION PROBLEM ON MOBILE AD HOC NETWORKS is NP-hard.

This result, due to Oliveira and Pardalos [137], follows by a reduction from MAXIMUM 3-SAT [79]. We now extend this result in the following theorem.

Theorem 20. Consider an instance of the CCP-MANET-D with T as the time horizon. Finding an optimal solution at each time step t ∈ [1, T] is NP-hard.

Proof: We will show this result by reducing CLIQUE to CCP-MANET-D at an arbitrary time step. Recall that the CLIQUE problem is as follows.
Given a graph G = (V, E) and an integer J ≤ |V|, does G contain a clique, or complete subgraph, of size J or more [79]? Consider an instance of CCP-MANET-D at any time step t. An optimal solution is one in which all the agents are pairwise connected. Thus, for n agents, the number of connections in an optimal solution is n(n − 1)/2. Notice that this is equivalent to finding a clique on n nodes of the graph. Therefore, given an instance of CLIQUE, by letting J = n, we have the result: there is a bijection between optimal configurations of agents and cliques in the graph. □

Corollary 1. For any instance of CCP-MANET-D, an upper bound on the optimal solution is given by

(u(u − 1)/2) T, (6-6)

where T is the time horizon and u = |U| is the number of agents.

Proof: The proof follows directly from Theorem 20. If all u agents communicate at a given time, then they form a clique on u nodes. The clique will contain u(u − 1)/2 edges representing the communication links. If the agents maintain the clique formation over all time steps, then the number of communication connections will be

(u(u − 1)/2) T, (6-7)

and the corollary is proved. □

6.3 Algorithms for CCP-MANET-D

6.3.1 Construction Heuristic

In this section, we propose a construction heuristic to quickly create high quality solutions for the CCP-MANET-D. Our objective is to provide a fast way of constructing a set of paths connecting the wireless agents from their initial positions S to the destinations D such that the resulting routes are feasible for the problem. The union of such sequences of nodes uniquely determines the cost of the solution, which is calculated using equation (6-2). The algorithm also tries to create solutions that have as large a value as possible for the objective function. The pseudocode for the construction heuristic is shown in Figure 6-1. The algorithm starts by initializing the cost of the solution to zero. The incumbent solution, represented by the variable solution, is initialized with the empty set.
The next step consists of finding shortest paths connecting each source sᵢ ∈ S to a destination dᵢ ∈ D. Standard shortest path algorithms can be used for this step. For example, the Floyd-Warshall algorithm [77, 170] can be used to compute the shortest path between all pairs of nodes in a graph. Dijkstra's algorithm [59] can also be used (with the only difference that, being a single-source shortest path algorithm, it must be run for |U| iterations, one for each of the |U| source-destination pairs). In the loop from lines 4 to 12, the algorithm performs the assignment of new paths to the solution, using the shortest paths computed above. First, a source-destination pair (sᵢ, dᵢ) is selected, and the shortest path pᵢ corresponding to this pair is retrieved. Notice that, if the length (number of edges) of the shortest path pᵢ is more than T, there is no feasible solution for the problem, since the destinations cannot be reached by the end of the requested time horizon. The algorithm checks for this condition in line 6. If all source-destination pairs are found to be feasible, then a solution is generated by the union of all pᵢ. Notice that once agent i reaches node dᵢ, it can simply loiter at dᵢ during all remaining time (until instant T). The sequence of nodes found as a result of this process is added to the solution in line 9 of the algorithm, and the objective

procedure ShortestPath(G, U, S, D, T)
1    c ← 0
2    solution ← ∅
3    compute all shortest paths SP(sᵢ, dᵢ) for each pair (sᵢ, dᵢ) ∈ S × D
4    for i = 1 to |U| do
5        pᵢ ← SP(sᵢ, dᵢ)
6        if length of pᵢ > T then
7            return ∅
8        else
9            solution ← solution ∪ {pᵢ}
10           c ← c + number of new connections generated by pᵢ
11       end
12   end
13   return (c, solution)
end procedure ShortestPath

Figure 6-1: Pseudocode for the shortest-path construction heuristic.

value is updated (line 10).
Finally, a complete solution is returned on line 11, along with the value of that solution.

Theorem 21. The construction algorithm presented above finds a feasible solution for the CCPMANET-D in O(|V|³) time.

Proof: A feasible solution for this problem is given by a sequence of positions starting at si and ending at di, for each agent ui ∈ U. Clearly, the union of the shortest paths provides the required connection between each source-destination pair, according to the remarks in the preceding paragraph; therefore the solution is feasible. Suppose that, in line 3, we use the Floyd-Warshall algorithm for all-pairs shortest paths [77, 170]. This algorithm runs in O(|V|³) time. Then, at each step of the for loop we need only to refer to the solution calculated by the Floyd-Warshall algorithm and add it to the variable solution. This can be done in time O(|V|), and therefore the for loop will run in at most O(|V||U|) time. Thus the step with highest time complexity is the one appearing in line 3, which implies that the total complexity of the algorithm is O(|V|³). □

6.3.2 Local Improvement

A construction algorithm is a good starting point in the process of solving a combinatorial optimization problem. However, due to the NP-hard nature of the CCPMANET-D, such an algorithm provides no guarantee that a good solution will be found. In fact, it is possible that for some instances the solution found by the construction heuristic is far from the optimum, and not even a locally optimal solution. To guarantee that the solution found is at least locally optimal, we propose a local search algorithm for the CCPMANET-D. A local search algorithm receives as input a feasible solution and, given a neighborhood structure for the problem, returns a solution that is guaranteed to be optimal with respect to that neighborhood. For the CCPMANET-D, the neighborhood structure is defined as follows. Given an instance Π of the CCPMANET-D, let S be the set of feasible solutions for that instance.
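Line 3 of the construction heuristic can use any all-pairs shortest path routine; a minimal Floyd-Warshall sketch for the unweighted graphs used here (the representation, nodes 0..n-1 with an undirected edge list, is our assumption):

```python
def floyd_warshall(n, edges):
    """All-pairs shortest path lengths for an undirected, unweighted
    graph given as an edge list over nodes 0..n-1; O(n^3) time."""
    INF = float("inf")
    dist = [[0 if i == j else INF for j in range(n)] for i in range(n)]
    for u, v in edges:
        dist[u][v] = dist[v][u] = 1
    # Relax every pair (i, j) through each intermediate node k.
    for k in range(n):
        for i in range(n):
            for j in range(n):
                if dist[i][k] + dist[k][j] < dist[i][j]:
                    dist[i][j] = dist[i][k] + dist[k][j]
    return dist
```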
Then, if s ∈ S is feasible for Π, the neighborhood N₁(s) of s is the set of all solutions s' ∈ S that differ from s in exactly one route. Obviously, considering this neighborhood, there are |U| positions where a new path could be inserted; moreover, the number of feasible paths between any source-destination pair is exponential. Thus, in our algorithm, instead of exhaustively searching the entire neighborhood for each point, we probe only |U| neighbors at each iteration (one for each source-destination pair). Also, because of the exponential size of the neighborhood, we limit the maximum number of iterations performed to a constant MaxIter. We use randomization to select a new route, given a source-destination pair. This is done in our proposed implementation using a modified version of the depth-first search algorithm [47]. A randomized depth-first search is identical to a depth-first search algorithm, but at each step the node selected to explore is chosen uniformly among the available children of the current node. Using the randomized depth-first search we are able to find a route that may improve the solution, while avoiding being trapped at a local optimum after only a few iterations.

procedure HillClimb(solution)
1   c ← f(solution)
2   while solution not locally optimal and iter < MaxIter do
3     for i = 1 to |U| do
4       solution ← solution \ {pi}
5       p'i ← DFS(si, di)
6       c' ← f(solution ∪ {p'i})
7       if length of p'i ≤ T and c' > c then
8         c ← c'
9         iter ← 0
10      else
11        restore path pi
12      end
13    end for
14    iter ← iter + 1
15  end while
16  return (solution)
end procedure HillClimb

Figure 6-2: Pseudocode for the hill climbing intensification procedure.

A description of the local search procedure in the form of pseudocode is given in Figure 6-2. The algorithm can be described as follows. Initially, it receives as input the basic feasible solution generated in phase 1 (the construction phase).
A neighborhood for this solution is then defined to be the set of feasible solutions that differ from the current solution by one route, as previously described. Given the basic feasible solution obtained from the construction subroutine, the neighborhood is explored in the following manner. For each agent ui ∈ U, we reroute the agent on an alternate feasible path from si to di (lines 3 to 13). Recall that a path pi is feasible if the total length of this path is at most Li and the agent reaches its target node by time T. This alternate path is created on line 5 using a modified depth-first search algorithm [6]. The modification to the DFS is a randomization which selects the child node uniformly during each iteration. This procedure is efficient in that it can be implemented in polynomial time, as shown below.

procedure OnePass(G, U, S, D, T)
1   solution ← ShortestPath(G, U, S, D, T)
2   solution ← HillClimb(solution)
3   return (solution)
end procedure OnePass

Figure 6-3: Pseudocode for the one-pass heuristic.

Theorem 22. The time complexity of the algorithm above is O(kTu²m), where k = MaxIter, T is the time horizon, u = |U|, and m = |E|.

Proof: Notice that the most time consuming step of the algorithm is the construction of a new path (line 5). However, using a randomized depth-first search procedure this can be done in O(m) time [6]. Each iteration of the while loop (lines 2 to 13) performs local improvements in the solution using the rerouting procedure to improve the objective function. An upper bound on the best solution for an instance of this problem is Tu(u − 1)/2 (the time horizon multiplied by the maximum number of connections). Each improvement can require at most MaxIter iterations to be achieved. Therefore, in the worst case this heuristic will end after O(kTu²m) time. □

6.3.3 One-Pass Heuristic

The two algorithms described in Sections 6.3.1 and 6.3.2 can be combined into a single one-pass heuristic for the CCPMANET-D [42].
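The randomized depth-first search used on line 5 of Figure 6-2 can be sketched as follows (the function name is ours; the graph is assumed to be an adjacency-list dict):

```python
import random

def randomized_dfs_path(adj, source, dest, rng=random):
    """Depth-first search that visits children in uniformly random
    order; returns a simple path from source to dest, or None if no
    path exists. Different calls may return different routes."""
    stack = [(source, [source])]
    visited = {source}
    while stack:
        node, path = stack.pop()
        if node == dest:
            return path
        children = [w for w in adj[node] if w not in visited]
        rng.shuffle(children)  # the randomization step
        for w in children:
            visited.add(w)
            stack.append((w, path + [w]))
    return None
```

Because the child order is shuffled, repeated calls probe different routes between the same source-destination pair, which is exactly how the local search samples the exponentially large neighborhood.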
The pseudocode for the complete algorithm can be seen in Figure 6-3. The new algorithm behaves as a single-start, diversification and intensification heuristic for the CCPMANET-D. The total time complexity of this heuristic can be determined from Theorems 21 and 22. Taking the maximum of the two time complexities determined previously, we have a total time of O(max{n³, kTu²m}), where T is the time horizon, u = |U|, n = |V|, m = |E|, and k = MaxIter is the maximum number of iterations allowed in the local search phase. The algorithm proposed above was tested to verify the quality of the solutions produced, as well as the efficiency of the resulting method. The test instances employed in the experiments were composed of 60 random unit graphs, distributed into groups of 20, each group having graphs with 50, 75, and 100 nodes. The communication radius of the wireless agents was allowed to vary from 20 to 50 units. This provided us with a greater base for comparison, resulting in random graphs and wireless units that more closely resemble real-world instances. The graphs used in the experiment were created with the algorithm proposed by Butenko et al. [39, 40] in the context of the TDMA MESSAGE SCHEDULING PROBLEM. The routines were coded in FORTRAN. Random numbers were generated using Schrage's algorithm [166]. In all experiments, the random number generator was started with the seed value 270001.

Table 6-1: Comparative results between shortest path solutions and heuristic solutions.

Instance  Nodes  Radius | Agents  OnePass  SP Soln | Agents  OnePass  SP Soln | Agents  OnePass  SP Soln
 1        50     20     | 10      63.6     52.4    | 15      152      120.8   | 25      414.66   353.6
 2        50     30     | 10      83.8     58.4    | 15      182.2    124.4   | 25      516.2    415.6
 3        50     40     | 10      95.4     67.4    | 15      228.6    171.8   | 25      695      474.8
 4        50     50     | 10      115.4    64.4    | 15      275.8    167.4   | 25      797.4    -
 5        75     20     | 10      76.8     59      | 20      270.2    228.6   | 30      575.2    -
 6        75     30     | 10      85.8     56      | 20      299.6    241.2   | 30      725.4    554
 7        75     40     | 10      96.4     64.4    | 20      386      261     | 30      862.6    595.4
 8        75     50     | 10      125      67.8    | 20      403.2    246.8   | 30      1082.4   670.8
 9        100    20     | 15      113.6    100.4   | 25      333.4    269.4   | 50      1523.2   1258.8
10        100    30     | 15      166.2    124.4   | 25      511.2    365     | 50      1901.4   1515.8
11        100    40     | 15      203.4    141     | 25      600.6    389.8   | 50      2539.2   1749.4
12        100    50     | 15      255.8    151.8   | 25      756.8    479.6   | 50      3271.2   2050.6

Results obtained in our preliminary experiments are reported in Table 6-1. In this table, the results of the one-pass algorithm (OnePass column) are compared to a simple routing scheme where only the construction phase is explored (the SP Soln column). The solutions shown in the table represent the average of the objective function values from the 5 instances in each class. The numerical results provided in the table demonstrate the effectiveness of the proposed heuristic when the improvement phase is added to the procedure. The proposed heuristic increased the objective value of the shortest path solutions substantially, by roughly 40% on average over the entries of Table 6-1. One reason for this is the fact that, when agents are routed solely according to a shortest path, they do not take advantage of the remaining time they are allotted (i.e., the time horizon T) or of the distance limit given by L.

procedure GRASP(MaxIter, RandomSeed)
1   f* ← -∞
2   X* ← ∅
3   for i = 1 to MaxIter do
4     X ← ConstructionSolution(G, g, X)
5     X ← LocalSearch(X, N(X))
6     if f(X) > f(X*) then
7       X* ← X
8       f* ← f(X)
9     end
10  end
11  return X*
end procedure GRASP

Figure 6-4: GRASP for maximization.

Our heuristic, on the other hand, allows wireless agents to take full advantage of these bounds. The algorithm can do this by adjusting the paths to include those nodes within the range of other agents. In addition, at any given time an agent is allowed to loiter in its current position, possibly waiting for other agents to come into its range. This cannot occur in the phase 1 algorithm because, according to the shortest path routing protocol, loitering is forbidden. We notice that our method provides solutions that are better than the shortest path protocol.
The time spent by the algorithm was always less than a few seconds, so the computational time is small enough for the problem sizes explored in our experiments. We believe, however, that the quality of the solutions and the computational time can be further improved using a better implementation and more sophisticated data structures to handle the information stored during the algorithm.

6.3.4 Greedy Randomized Adaptive Search

In this section, we describe the implementation of the Greedy Randomized Adaptive Search Procedure (GRASP) (Section 2.7.2) for the CCPMANET-D. Pseudocode for the generic GRASP is provided in Figure 6-4. We discuss in this section how the above algorithm can be specialized to provide approximate solutions for the CCPMANET-D. In the following subsection, we describe an algorithm for the GRASP construction phase that provides initial solutions for instances of the CCPMANET-D problem. Then we provide a local search algorithm for the improvement phase.

Construction Phase. The first task in a GRASP algorithm is to build good feasible solutions in terms of a given objective function. To do this, we need to specify the set A, the greedy function g, the parameter α, and the neighborhood N(X), for X ∈ F. The components of each solution X are feasible moves of a member of the ad hoc network from a node v to a node w ∈ N(v) ∪ {v}. We say that for an agent ui ∈ U located at node v in the graph, pi(v) represents a shortest path from the current node v to the destination for agent ui, namely node di. The complete solution is constructed according to the procedure outlined in the pseudocode reported in Figure 6-5. In the figure, ah refers to the current location of an agent. First, the solution, which is initially empty, is augmented to include the starting locations for all agents.
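The generic GRASP of Figure 6-4 is a multistart scheme; a minimal sketch of the driver loop (the callback-style interface and function names are ours; the seed value 270001 is the one used in the experiments reported earlier):

```python
import random

def grasp(construct, local_search, objective, max_iter, seed=270001):
    """Generic GRASP driver: repeatedly build a greedy randomized
    solution, improve it with local search, and keep the best one."""
    rng = random.Random(seed)
    best, best_val = None, float("-inf")
    for _ in range(max_iter):
        x = local_search(construct(rng))
        val = objective(x)
        if val > best_val:
            best, best_val = x, val
    return best, best_val
```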
Then, the time variable t is initialized to 1, and in line 6 an agent ui ∈ U is selected at random and routed along a shortest path pi(si) from its source node si to its destination node di. If the total distance of pi(si) is greater than Li, then the instance is clearly infeasible and the algorithm ends. Otherwise the procedure continues and the remaining agents are scheduled in the loop beginning at line 8. The procedure considers each feasible move (q, w, u) before scheduling an agent. A feasible move connects the final node q of a subpath Pa, for a ∈ U, to another node w, such that the shortest path from w to da has distance at most La − Σ_{e ∈ Pa} dist(e). The set of all feasible moves in a solution is defined as A(X). The loop from lines 12 to 14 ensures that an agent currently at its destination remains there. Likewise, the loop from lines 15 to 17 schedules an agent uh on a shortest path ph(ah) from its current position ah to dh if the maximum allowed travel time for agent uh is equal to |ph(ah)|. In lines 19 to 21, the set L ⊆ A(X) is formed, consisting of all feasible moves for agents not yet scheduled. Then in line 25, the greedy function g returns for each move k ∈ L the number of additional connections created by that move. As described above, the construction procedure ranks the elements of L according to g. The best elements, those k with g(k) ≥ μ, are then added to the RCL, and

procedure ConstructionSolution(G, g, X)
1   X ← ∅
2   for i = 1 to |U| do
3     X ← X ∪ {si}
4   end
5   t ← 1
6   select ui ∈ U at random and route ui on a shortest path pi(si)
7   X ← X ∪ {pi(si)}
8   while t ≤ T and ∃ uh ∈ U \ X do
9     L ← ∅
10    RCL ← ∅
11    for each uh ∈ U \ X do
12      if ah = dh then
13        keep uh at dh
14      end
15      if dist(ah, dh) = Lh − Σ_{e ∈ ph} dist(e) then
16        route uh on the shortest path ph(ah)
17        X ← X ∪ {ph(ah)}
18      end
19      while ∃ a feasible move (li, lj, uh) do
20        L ← L ∪ {(li, lj, uh)}
21      end
22    end
23    α ← rand(0, 1)
24    μ ← MaxContribution − α(MaxContribution − MinContribution)
25    for all (li, lj, uh) such that g((li, lj, uh)) ≥ μ do
26      RCL ← RCL ∪ {(li, lj, uh)}
27    end
28    (li, lj, uh) ← select randomly from RCL
29    add (li, lj, uh) to the path of agent uh in the solution
30    t ← t + 1
31  end
32  return (X)
end procedure ConstructionSolution

Figure 6-5: Greedy randomized constructor for CCPMANET-D.

in lines 28 and 29 a move is selected at random and added to the solution. This is repeated until a complete solution for the problem is obtained.

Improvement phase. In the local search phase, GRASP attempts to improve the solution built in the construction phase. As mentioned above, we use a hill-climbing procedure whose objective is to improve the solution as much as possible until a locally optimal solution is found, as described in the pseudocode provided in Figure 6-6. The local search receives the construction phase solution X and a parameter MaxIterLS as input.

procedure LocalSearch(X, MaxIterLS)
1   X' ← X
2   t ← 1
3   LastImprove ← 1
4   i ← 2
5   iter ← 0
6   while i ≠ LastImprove and iter < MaxIterLS do
7     remove the current path from si to di for agent ui
8     while ai ≠ di do
9       if dist(ai, di) = Li − Σ_{e ∈ pi} dist(e) then
10        route ui on its shortest path pi(ai)
11      else
12        BestMove ← (ai, lj, ui) such that g((ai, lj, ui)) ≥ g((ai, ly, ui)) for all (ai, ly, ui)
13      end
14      add BestMove to the new path for ui in solution X'
15      t ← t + 1
16    end
17    if f(X') > f(X) then
18      X ← X'
19      LastImprove ← i
20    end
21    i ← (i + 1) mod k
22    iter ← iter + 1
23  end
24  return (X')
end procedure LocalSearch

Figure 6-6: Local search for CCPMANET-D.
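The restricted candidate list built on lines 23 to 26 of Figure 6-5 can be sketched as follows (the function name is ours; `scores` plays the role of the greedy function g):

```python
import random

def rcl_select(candidates, scores, alpha, rng=random):
    """Keep the moves whose greedy score reaches the threshold
    max - alpha*(max - min), then pick one uniformly at random.
    alpha = 0 is purely greedy; alpha = 1 is purely random."""
    hi, lo = max(scores.values()), min(scores.values())
    cutoff = hi - alpha * (hi - lo)
    rcl = [c for c in candidates if scores[c] >= cutoff]
    return rng.choice(rcl)
```

The random choice inside a quality-restricted list is what distinguishes GRASP from a plain greedy constructor: each restart can take a different, but still high-scoring, trajectory.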
In each iteration, the neighborhood N(X) of X is explored in search of a solution X' such that f(X') > f(X). In order to explore N(X), a perturbation function is defined as follows. In the loop in lines 5 to 21, agents are rerouted using a greedy method similar to that of the construction phase. In line 6, the current construction phase path for agent ui is removed from the solution. Then each feasible move is considered, and the move which adds the greatest increase to the objective function, BestMove, is added to the new path for agent ui. This is repeated for all agents until a new feasible solution X' ∈ N(X) ⊆ A(X) is created. If f(X') > f(X), then in line 17, X' is set as the new current solution. The process returns to line 5 and repeats until no agent can be rerouted according to this greedy method while improving the current solution, or until some maximum number of iterations MaxIterLS is completed.

6.3.5 Complexity of the Heuristic

The following theorems address the computational complexity of the proposed algorithm.

Theorem 23. The construction phase finds a feasible solution for the CCPMANET-D in O(Tmu²) time, where T is the time horizon, u = |U|, and m = |E(G)|.

Proof: Notice that the while loop from lines 8 to 31 will require T(u − 1) iterations to complete. Likewise, the loop from lines 11 to 22 requires |U| iterations. Within the loop, the most time consuming step is the construction of a shortest path. However, this can be done using a breadth-first search in O(m) time [6]. Thus we have the result. □

Theorem 24. The time complexity of the local search phase is O(kTu²m), where T is the time horizon, u = |U|, m = |E(G)|, and k = MaxIterLS.

Proof: The proof is similar to that of Theorem 23. Notice that the while loop from lines 5 to 22 performs local improvements according to the greedy rerouting scheme. Again the most time consuming step is the construction of a shortest path, which can be accomplished in O(m) time. Each improvement can require up to k iterations of the loop.
Thus we have the proof. □

Corollary 2. The overall time complexity of the proposed GRASP is O(ITu²m(k + 1)), where T is the time horizon, u = |U|, m = |E(G)|, k = MaxIterLS, and I = MaxIter is the overall number of GRASP iterations.

Proof: The proof is immediate from Theorem 23 and Theorem 24. □

Path-relinking. First introduced by Glover in [84], path-relinking (PR) was used as an enhancement for tabu search heuristics. PR was first combined with GRASP by Laguna and Marti [125]. When applied to GRASP, path-relinking introduces a memory to the heuristic, which usually results in improvements in solution quality. This is because, in the standard GRASP framework, the

procedure PathRelinking(xs, E)
1   xg ← randSelect({y ∈ E : Δ(xs, y) ≥ δ})
2   f* ← max{f(xs), f(xg)}
3   x* ← arg max{f(xs), f(xg)}
4   x ← xs
5   while Δ(x, xg) ≠ ∅ do
6     m* ← arg max{f(x ⊕ m) : m ∈ Δ(x, xg)}
7     x ← x ⊕ m*
8     update Δ(x, xg)
9     if f(x) > f* then
10      f* ← f(x)
11      x* ← x
12    end
13  end
14  return x*
end procedure PathRelinking

Figure 6-7: Path-relinking subroutine.

multistart nature of the heuristic does not include any long-term memory mechanism for saving traits of good solutions generated by the algorithm. Path-relinking allows GRASP to remember these traits and favor them in successive iterations. GRASP with path-relinking has been successfully applied to problems such as MAXIMUM CUT [71], QUADRATIC ASSIGNMENT [140], TDMA MESSAGE SCHEDULING [40], and originally LINE CROSSING MINIMIZATION [125]. For a survey of GRASP with path-relinking, the reader is referred to [158]. Path-relinking works by maintaining a set of elite solutions E, known as guides, and examines point-to-point trajectories between a guiding solution and an incumbent solution in search of an optimum. Pseudocode for a generic path-relinking procedure is provided in Figure 6-7. To perform path-relinking, we begin with a guiding solution xg ∈ E and an initial starting solution xs.
The guiding solution xg is selected at random from the pool of elite solutions E, so long as the symmetric difference Δ(xs, xg) between the two solutions xs and xg is sufficiently large. The symmetric difference is defined as the set of pairwise exchanges needed to transform xs into xg. Recall that all solutions in E are local optima, and we are trying to discover solutions which are not located in the neighborhoods of xs or xg. Therefore this constraint prevents us from applying path-relinking to solutions which are too similar to each other and would not likely yield an improved solution [70]. At each step, the procedure examines all moves m ∈ Δ(x, xg) and greedily selects the move which results in the maximum increase in the objective of the current solution. This occurs in line 6 of the pseudocode, in which the move m* is selected as the move which maximizes f(x ⊕ m), where x ⊕ m is the solution which results from incorporating m into x. In line 7, the symmetric difference is updated, and if necessary the best solution is updated in lines 9 to 12. The procedure ends when Δ(x, xg) = ∅, i.e., when x = xg [158]. Path-relinking can be applied to a pure GRASP in a straightforward manner, which can be visualized in the pseudocode of Figure 6-8. First, the set of elite solutions E is initialized to the empty set in line 2 and is built by including the solutions from the first MaxElite iterations. After a standard GRASP iteration of greedy randomized construction and local search produces a locally optimal solution X, the PathRelinking procedure is called on line 7. For the CCPMANET-D, the elements in the symmetric difference are the agent paths which differ between the initial and guiding solutions. The value of m* from Figure 6-7 is the path for an agent in the symmetric difference which results in the maximum increase in the total number of communications between the agents. In line 8, a function UpdateElite is called in which the elite pool is possibly updated.
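For the CCPMANET-D, a path-relinking step swaps whole agent routes from the guide into the incumbent; a minimal sketch of the trajectory walk (the dict-of-routes representation and the function name are ours):

```python
def path_relinking(x_start, x_guide, objective):
    """Greedy walk from x_start toward x_guide: at each step copy from
    the guide the agent route whose adoption most increases the
    objective; return the best solution seen along the trajectory."""
    x = dict(x_start)
    best, best_val = dict(x), objective(x)
    # Symmetric difference: agents whose routes differ between the two.
    delta = [a for a in x if x[a] != x_guide[a]]
    while delta:
        move = max(delta, key=lambda a: objective({**x, a: x_guide[a]}))
        x[move] = x_guide[move]
        delta.remove(move)
        if objective(x) > best_val:
            best, best_val = dict(x), objective(x)
    return best, best_val
```

Every intermediate solution on the walk is evaluated, so the procedure can return a solution strictly better than both endpoints.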
The solution returned from path-relinking is included in the elite pool if it is better than the best solution in E, or if it is worse than the best but better than the worst and is sufficiently different from all elite solutions [158]. Finally, the best solution is updated in lines 12 to 14 if necessary.

6.3.6 Computational Experiments

The proposed procedure was implemented in the C programming language and compiled using Microsoft Visual C++ 6.0. It was tested on a PC equipped with a 1800 MHz Intel Pentium 4 processor and 256 megabytes of RAM operating under the Microsoft Windows 2000 Professional environment.

procedure GRASP+PR(MaxIter, RandomSeed)
1   X* ← ∅
2   E ← ∅
3   for i = 1 to MaxIter do
4     X ← ConstructionSolution(G, g, X)
5     X ← LocalSearch(X, MaxIterLS)
6     if |E| = MaxElite then
7       X ← PathRelinking(X, E)
8       UpdateElite(X, E)
9     else
10      E ← E ∪ {X}
11    end
12    if f(X) > f(X*) then
13      X* ← X
14    end
15  end
16  return X*
end procedure GRASP+PR

Figure 6-8: GRASP with path-relinking for maximization.

Both the pure GRASP and the GRASP with path-relinking were tested on a set of 60 random unit graphs with varying densities, 20 each having 50, 75, and 100 nodes. The radius of communication varies from 1 to 5 units (miles) in unit increments. We tested each case with three sets of mobile agents to achieve better comparisons and model real-world scenarios. Thus, in total 900 test cases were examined. The graphs were created by a generator used by Butenko et al. [39] for the TDMA MESSAGE SCHEDULING PROBLEM. Since any instance of the CCPMANET-D is composed of several parameters, i.e., the number of mobile agents, their respective source and destination nodes, the radius of communication, and the maximum time horizon, each of which impacts the optimal solution for the instance, we will provide our numerical results in several sets of tables.
First, we report solutions for several representative instances and provide all input parameters in order to establish an inference base for the overall experiment. Then we summarize the overall results by providing the average solutions for each problem set.

Table 6-2: Three instances with different sets of agents on 50 node graphs are given. The value in the UBound column was found using Corollary 1.

(a) Instance: 50r30i1, Nodes: 50, Agents: 10, MaxTime: 10
    Source: [ 6 10 10 3 5 7 4 2 10 6 ]
    Destination: [ 49 47 44 48 46 40 48 42 47 47 ]

    Radius  GRASP  GRASP+PR  UBound
    1       291    303       450
    2       365    373       450
    3       412    423       450
    4       443    443       450
    5       449    449       450

(b) Instance: 50r30i1, Nodes: 50, Agents: 15, MaxTime: 10
    Source: [ 109 89 68 17 25 55 2 11 ]
    Destination: [ 49 47 44 48 46 40 48 42 47 47 ]

    Radius  GRASP  GRASP+PR  UBound
    1       756    757       1050
    2       881    909       1050
    3       963    972       1050
    4       1029   1029      1050
    5       1050   1050      1050

(c) Instance: 50r40i4, Nodes: 50, Agents: 25, MaxTime: 10
    Source: [ 89845446274217589381187568 ]
    Destination: [ 49 48 44 48 46 42 49 40 48 49 45 46 49 45 48 44 42 41 48 43 40 49 45 49 43 ]

    Radius  GRASP  GRASP+PR  UBound
    1       2613   2653      3000
    2       2896   2918      3000
    3       3000   3000      3000
    4       3000   3000      3000
    5       3000   3000      3000

In Table 6-2, we report solutions for three different instances on 50 node graphs. The Source and Destination vectors provide the respective (si, di) pair for each agent. The specific values of si were randomly selected from the first 20% of the nodes of the graph. Likewise, the di values were chosen randomly from the last 20% of the nodes. This method of selection is preferred over a completely randomized design because in real-world situations, such as a combat scenario, the available entry and exit points from a battle space are likely to be limited. However, using a random selection from the available subset of nodes allows for more thorough testing and helps avoid unintentional biases.

Table 6-3: Three instances with different sets of agents on 75 node graphs are given.
The value in the UBound column was found using Corollary 1.

(a) Instance: 75r30i2, Nodes: 75, Agents: 10, MaxTime: 15
    Source: [ 7 6 13 3 10 13 15 6 6 2 ]
    Destination: [ 68 68 71 73 68 68 73 70 74 62 ]

    Radius  GRASP  GRASP+PR  UBound
    1       571    575       675
    2       614    621       675
    3       658    658       675
    4       670    670       675
    5       675    675       675

(b) Instance: 75r40i4, Nodes: 75, Agents: 20, MaxTime: 15
    Source: [ 115 7153 89 4 613 143 8 31014 1115 9 3 ]
    Destination: [ 63 66 69 61 62 68 62 67 68 66 62 60 61 66 63 73 72 64 71 71 ]

    Radius  GRASP  GRASP+PR  UBound
    1       2535   2554      2850
    2       2746   2758      2850
    3       2842   2842      2850
    4       2850   2850      2850
    5       2850   2850      2850

(c) Instance: 75r30i1, Nodes: 75, Agents: 30, MaxTime: 15
    Source: [ 141528104133451204023158512037711341154 ]
    Destination: [ 08 00 03 02 05 72 02 02 o~ 71 05 00 03 04 00 04 00 00 00 04 74 03 73 04 04 03 05 05 00 03 ]

    Radius  GRASP  GRASP+PR  UBound
    1       4721   4870      6525
    2       6002   6012      6525
    3       6265   6285      6525
    4       6497   6497      6525
    5       6525   6525      6525

The column MaxTime is the maximum time horizon T. Recall that all agents must reach their destination node by this time. The GRASP column provides the solution from GRASP after 1000 iterations, and UBound is the upper bound on the solution value, calculated by the equation in Corollary 1. Notice that as the radius value increases, the number of connections between the agents tends to converge to the value of the upper bound. Recall that the upper bound value from Corollary 1 is not an upper bound on the optimal solution for the given graph per se; it is an upper bound on the solution for the given time horizon and number of agents. Thus, the more dense the graph, the tighter the bound.

Table 6-4: A 100 node instance with solutions with radius varying from 1 to 5 units. The value in the UBound column was found using Corollary 1.
(a) Instance: 100r30i2, Nodes: 100, Agents: 15, MaxTime: 20
    Source: [ 9 19 10 18 13 18 12 18 15 8 6 6 20 18 1 ]
    Destination: [ 84 88 83 84 96 96 81 95 83 82 93 80 90 85 81 ]

    Radius  GRASP  GRASP+PR  UBound
    1       1819   1821      2100
    2       1960   1974      2100
    3       2065   2067      2100
    4       2100   2100      2100
    5       2100   2100      2100

(b) Instance: 100r30i1, Nodes: 100, Agents: 25, MaxTime: 20
    Source: [ 17 6 9 19 9 12 2 15 7 8 1 2 8 6 3 13 16 17 13 13 17 19 2 5 21 ]
    Destination: [ 81 89 84 82 88 99 93 89 93 97 84 96 96 91 90 86 98 86 81 89 82 89 81 80 99 ]

    Radius  GRASP  GRASP+PR  UBound
    1       5183   5186      6000
    2       5577   5647      6000
    3       5898   5909      6000
    4       5992   5992      6000
    5       6000   6000      6000

(c) Instance: 100r30i2, Nodes: 100, Agents: 35, MaxTime: 20
    Source: [ 35112144451012141317417810~17151312 05 011308002 ]
    Destination: [ so as so at 84 00 as at ea sa as so as so as as 02 so 81 85 as 04 sa so oo 0 061 oa co 0 00 00 00 st sa 82 ]

    Radius  GRASP  GRASP+PR  UBound
    1       10222  10255     11900
    2       11108  11224     11900
    3       11660  11704     11900
    4       11842  11845     11900
    5       11900  11900     11900

Table 6-3 presents the specific parameters and related solutions for three instances of the CCPMANET-D on 75 node graphs. On these networks, the number of agents varied from 10 to 30, and the maximum time horizon was 15. Again, we see that as the communication radius increases, the solutions tend to the upper bound values. Similar results for three graphs having 100 nodes are provided in Table 6-4. For the 100 node instances, the number of agents varied from 15 to 35, and the maximum travel time was 20 units. The results for these instances also indicate that the heuristic is robust and able to provide excellent solutions for large instances.

Table 6-5: Average solution values for GRASP and GRASP with path-relinking on 50 node graphs.
Nodes  Agents  Radius  GRASP    GRASP+PR  Bound
50     10      1       347      352.21    450
50     10      2       404.58   407.58    450
50     10      3       428.32   429.47    450
50     10      4       437.84   438.53    450
50     10      5       444.37   444.58    450
50     15      1       813.11   817.32    1050
50     15      2       937.74   945.47    1050
50     15      3       1001.11  1003.58   1050
50     15      4       1025.37  1026.21   1050
50     15      5       1037.16  1037.53   1050
50     25      1       2272.79  2315.58   3000
50     25      2       2686.26  2704.53   3000
50     25      3       2850.84  2861.95   3000
50     25      4       2924.05  2927.68   3000
50     25      5       2959     2959.26   3000
Average Comp Time (s)  2.89     4.29

Table 6-6: Comparative solutions of GRASP and GRASP with path-relinking on 75 node graphs.

Nodes  Agents  Radius  GRASP    GRASP+PR  Bound
75     10      1       574.95   577.42    675
75     10      2       629.42   631.37    675
75     10      3       653.53   654.63    675
75     10      4       665.42   665.89    675
75     10      5       669.47   669.84    675
75     20      1       2288     2319.63   2850
75     20      2       2639.37  2651.5    2850
75     20      3       2756.69  2762      2850
75     20      4       2805.53  2807.68   2850
75     20      5       2827.42  2828.42   2850
75     30      1       5349.84  5391.26   6525
75     30      2       6037.47  6064      6525
75     30      3       6310.90  6332.37   6525
75     30      4       6422.11  6430.80   6525
75     30      5       6472.42  6478.84   6525
Average Comp Time (s)  6.16     7.43

Tables 6-5, 6-6, and 6-7 show the evolution of the average solution values as the communication range increases for the 50, 75, and 100 node graphs, respectively. Notice once more that as the communication range increases, the average solution converges to the value of the upper bound given by Corollary 1.

Table 6-7: Results of GRASP and GRASP with path-relinking on 100 node graphs.

Nodes  Agents  Radius  GRASP     GRASP+PR  Bound
100    15      1       1838.25   1840.45   2100
100    15      2       1996.75   2003.15   2100
100    15      3       2061.9    2064.7    2100
100    15      4       2083.1    2084.4    2100
100    15      5       2093.95   2094.05   2100
100    25      1       4979.1    5019.2    6000
100    25      2       5655.3    5674.35   6000
100    25      3       5869.35   5876.9    6000
100    25      4       5940.65   5944.7    6000
100    25      5       5978.2    5979.2    6000
100    35      1       9947.45   9997.15   11900
100    35      2       11254.55  11280     11900
100    35      3       11636.85  11664.5   11900
100    35      4       11787.9   11793     11900
100    35      5       11859.1   11860.35  11900
Average Comp Time (s)  5.17      8.05
In these tables we also report the average computing time required by both the pure GRASP and the GRASP+PR to find their best solutions within the specified number of iterations. For all of the experiments, the GRASP+PR found solutions at least as good as the pure GRASP, finding superior solutions for 45% of the instances tested. In Figures 6-9, 6-10, and 6-11, we provide plots of the average objective function value versus communication range found using GRASP with path-relinking. The upper bound values for each case, as computed by Corollary 1, are also plotted in the charts. These graphs indicate that, on average, as the radius of communication increases, the objective function values tend to the upper bound values.

6.4 A Continuous Formulation (CCPMANET-C)

In this section, we present a continuous formulation of the CCPMANET-D. This formulation provides a more realistic scenario than the discrete formulation given above. We will assume that the agents are operating in a battlespace Ω ⊂ R^d, where Ω is a compact, convex set with unit volume, equipped with the Euclidean norm || · ||₂. For our purposes, we are going to consider the planar case, i.e., d = 2, with the understanding that extensions to higher dimensions are possible.

Figure 6-9: Evolution of GRASP+PR solution values on 50 node graphs as the communication radius increases from 1 to 5 units.

Suppose there are N wireless agents in the ad hoc network. The N agents are assumed to be omnidirectional and are modeled as point masses. We will suppose that the agents are free to move within Ω at some bounded velocity. Assume, without loss of generality, that the maximum velocity magnitude is unitary, i.e., ||v^i(t)|| ≤ 1 for i ∈ {1, ..., N}. In order to derive a continuous formulation, we need to define an objective function that is consistent with that of the discrete formulation.
Let R_ij be the communication constant for agents i and j. That is, R_ij is the radius of communication for the two agents. Then one possible objective is the so-called Heaviside function defined as

H1[R_ij − ||x^i − x^j||_2] = 1, if ||x^i − x^j||_2 ≤ R_ij; 0, if ||x^i − x^j||_2 > R_ij. (6-8)

A graphical representation of H1 is given in Figure 6-12. While this function will work as an objective, it is very extreme. That is, if H1[R_ij − ||x^i − x^j||_2] = 0, then there is no information

Figure 6-10: Evolution of GRASP+PR solution values on 75 node graphs as the communication radius increases from 1 to 5 units.

provided which might indicate where a better solution might lie. A more desirable function would be one that approximates H1 but is continuous. We consider two alternatives to H1. The first is a piecewise continuous function given as

H2[R_ij, ||x^i − x^j||_2] = 1, if ||x^i − x^j||_2 ≤ R_ij; 2 − ||x^i − x^j||_2 / R_ij, if R_ij < ||x^i − x^j||_2 ≤ 2R_ij; 0, if ||x^i − x^j||_2 > 2R_ij. (6-9)

This function, whose graph is provided in Figure 6-13, has a value equal to one if agents i and j are within the communication radius R_ij of one another. The function then decreases linearly until the agents are 2R_ij apart, at which point they are unable to communicate.

Figure 6-11: Evolution of GRASP+PR solution values on 100 node graphs as the communication radius increases from 1 to 5 units.

The third and final objective function we consider is a continuously differentiable decreasing function of the distance between agents i and j and the communication radius R_ij. This function, given as

H3[R_ij, ||x^i − x^j||_2], (6-10)

can be seen in Figure 6-14.
This is perhaps the best approximation of H1 in that it can be interpreted as the probability of agents i and j communicating when they are some distance apart. Now that we have found a suitable objective function, we can define the remaining parameters and constraints of the problem. Let x^i(t) be the position of agent i at time t. Then, we can describe the location of all N agents at a time t as the N-vector X(t) = (x^1(t), x^2(t), ..., x^N(t)) ∈ Ω^N. Similarly, let v^i(t) be the velocity of agent i at time t. The relationship between velocity and position is the usual one, ẋ^i(t) = v^i(t).

Figure 6-12: The Heaviside function H1.

In order to formulate the continuous time analog of the CCPM-ANET-D, we must constrain the maximum velocity of each agent. This enforces the constraints on the maximum distance traveled in the discrete formulation. If S_i ∈ R^2 is the starting position of agent i, and D_i ∈ R^2 is the destination point of agent i, then we can formulate the CONTINUOUS COOPERATIVE COMMUNICATION PROBLEM ON MOBILE AD HOC NETWORKS (CCPM-ANET-C) as follows:

Maximize ∫_0^T Σ_{i=1}^{N−1} Σ_{j=i+1}^{N} H3[R_ij, ||x^i(t) − x^j(t)||_2] dt (6-11)
subject to
x^i(0) = S_i, x^i(T) = D_i, i = 1, 2, ..., N, (6-12)
ẋ^i(t) = v^i(t), i = 1, 2, ..., N, t ∈ [0, T], (6-13)
||v^i(t)||_2 ≤ 1, i = 1, 2, ..., N, t ∈ [0, T], (6-14)
x^i(t) ∈ R^2, i = 1, 2, ..., N, t ∈ [0, T]. (6-15)

Figure 6-13: Alternate objective function H2.

Using this formulation as a starting point, heuristics for continuous global optimization problems can be implemented. Currently we are developing an algorithm based on the Continuous Greedy Randomized Adaptive Search Procedure (C-GRASP) proposed by Hirsch et al. [101]. Subsequent work with C-GRASP, including enhancements and stopping rules, can be found in [100]. C-GRASP has also been used to solve systems of nonlinear equations [99], and for solving continuous formulations of discrete optimization problems [38]. This work is currently in progress and the results will appear in a paper later this year [41].
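To make the candidate objectives concrete, the following sketch implements H1 and H2 directly from the definitions above and approximates the integral objective (6-11) by sampling agent positions at discrete time steps. Note two loud assumptions: the closed form of H3 is not stated explicitly here, so a Gaussian-type decay is used purely as a stand-in, and the trajectory representation (a list of sampled planar positions per agent) is an illustrative choice, not the dissertation's.

```python
import math

def h1(r, d):
    """Heaviside objective H1 (6-8): 1 inside the communication radius r,
    0 outside."""
    return 1.0 if d <= r else 0.0

def h2(r, d):
    """Piecewise-continuous objective H2 (6-9): 1 inside the radius,
    decreasing linearly to 0 at twice the radius."""
    if d <= r:
        return 1.0
    if d <= 2 * r:
        return 2.0 - d / r  # linear: 1 at d = r, 0 at d = 2r
    return 0.0

def h3(r, d):
    """Smooth stand-in for H3 (6-10). ASSUMED Gaussian-type decay,
    for illustration only; interpretable as a communication probability."""
    return math.exp(-(d / r) ** 2)

def objective(traj, radius, dt):
    """Crude numerical approximation of the integral objective (6-11):
    accumulate the pairwise H3 scores over sampled time steps.
    traj[i][k] is the (x, y) position of agent i at time step k."""
    total = 0.0
    n, steps = len(traj), len(traj[0])
    for k in range(steps):
        for i in range(n):
            for j in range(i + 1, n):
                d = math.hypot(traj[i][k][0] - traj[j][k][0],
                               traj[i][k][1] - traj[j][k][1])
                total += h3(radius[i][j], d) * dt
    return total
```

Two agents that remain co-located for the whole horizon accumulate exactly T units of objective, matching the intuition that the integral rewards total pairwise communication time.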
6.5 Concluding Remarks

In this chapter, we introduced the COOPERATIVE COMMUNICATION PROBLEM ON MOBILE AD HOC NETWORKS. We presented both discrete and continuous formulations, discussed the computational complexity, and presented several algorithms for solving each formulation. Furthermore, extensive computational results were presented which show the effectiveness of the proposed algorithms. Lastly, we derived a continuous formulation of the problem.

Figure 6-14: Second alternate objective function H3.

Using this version, several heuristics for continuous optimization problems can be applied to achieve solutions which more closely mirror real-world situations. This is because we have removed the underlying graph structure and the motion of the agents is constrained kinematically. Future work on the problem of path-planning for a group of wireless users such as the one presented here could focus on a multiobjective problem in which the agents not only maximize the communication time, but also maximize the amount of the battlespace (i.e., the communication graph) that is covered. This formulation would be particularly useful in combat search and rescue operations and other reconnaissance applications. In the formulation considered, it is still possible that the agents will loiter at a given node until such time passes that they must proceed to their destinations. Instead of merely circling a single node, the objective of maximum battlespace coverage will still allow the agents to maximize the communication while visiting many areas of the region.

In the following chapter, we take a closer look at the actual mechanisms used for communication in the MANET described in this chapter. Instead of the generalized view of agent connectivity used here, we examine a particular type of transceiver which may be used by the agents. In particular, we consider time division multiple access (TDMA) style radios.
TDMA transceivers are popular because they make efficient use of the available bandwidth by allowing frequency reuse. Not surprisingly however, it turns out that several problems must be mitigated in order to ensure effective group communication.

CHAPTER 7
THE TDMA MESSAGE SCHEDULING PROBLEM

7.1 Introduction

A MANET such as the one described in the previous chapter is an example of a so-called wireless mesh network (WMN). WMNs have become an important means of communication in recent years. In these networks, a shared radio channel is used in conjunction with some packet switching protocol to provide high-speed communication between many users. The stations in the network act as transmitters and receivers, and are thus capable of utilizing a multihop transmission procedure. The advantage of this is that several stations can be used as relays to forward messages to the intended recipient, allowing for beyond line-of-sight communication between stations that are geographically dispersed and potentially mobile [39]. Mesh networks have increased in popularity in recent years and the number of applications is steadily increasing [155]. As mentioned in [8], WMNs allow users to integrate various networks, such as WiFi, the Internet, and cellular systems. WMNs can also be utilized in a military setting, in which tactical datalinks network various communication, intelligence, and weapon systems, allowing for streamlined communication between several different entities. For a survey of wireless mesh networks, the reader is referred to [8]. In WMNs, the critical problem involves efficiently utilizing the available bandwidth to provide collision-free message transmissions. Unfettered transmission by the network stations over the shared channel will lead to message collisions. Therefore, some medium access control (MAC) scheme should be employed to schedule message transmissions so that collisions are prevented.
The time division multiple access (TDMA) protocol is a MAC scheme introduced by Kleinrock in 1987 which was shown to provide collision-free broadcast schedules [117]. In a TDMA network, time is divided into frames, with each frame consisting of a number of unit-length slots in which the messages are scheduled. Stations scheduled in the same slot broadcast simultaneously. Thus, the goal is to schedule as many stations as possible in the same slot, so long as there are no message collisions. When considering the message scheduling problem on TDMA networks, there are two optimization problems which must be addressed [175]. The first involves finding the minimum frame length, or the number of slots required to schedule all stations at least once. The second problem is that of maximizing the number of stations scheduled within each slot, thus maximizing the throughput. In this chapter, we consider the MESSAGE SCHEDULING PROBLEM ON TDMA NETWORKS (MSP-TDMA). We provide an integer programming formulation and prove that the problem is NP-hard. We then derive several heuristics and compare their performance against other algorithms from the literature. Extensive computational results indicate the superiority of our methods on real-world instances.

7.2 Problem Description

A TDMA network can be conveniently described as a graph G = (V, E), where the vertex set V represents the stations and the set of edges E represents the set of communication links between adjacent stations. There are two types of message collisions which must be avoided when scheduling in TDMA networks. The first, called a direct collision, occurs between one-hop neighboring stations, i.e., those stations i, j ∈ V such that (i, j) ∈ E. One-hop neighbors which broadcast during the same slot cause a direct collision. Further, if (i, j) ∉ E, but (i, k) ∈ E and (j, k) ∈ E, then i and j are called two-hop neighbors. Two-hop neighbors transmitting in the same slot cause a hidden collision [39].
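The two-hop relation that drives hidden collisions can be read directly off the adjacency structure. A minimal sketch, assuming the graph is stored as a dictionary of adjacency sets (the representation is an illustrative choice):

```python
def two_hop_neighbors(adj, i):
    """Two-hop neighbors of station i: stations j with (i, j) not an edge
    but sharing a common neighbor k, so that simultaneous broadcasts by
    i and j would cause a hidden collision at k."""
    out = set()
    for k in adj[i]:          # k is a one-hop neighbor of i
        for j in adj[k]:      # j is a neighbor of k
            if j != i and j not in adj[i]:
                out.add(j)
    return out
```

On the three-station path 0-1-2, stations 0 and 2 are two-hop neighbors through station 1, so they may not share a slot even though no edge joins them.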
Assume that there are M slots per frame. Further, assume that packets are sent at the beginning of each time slot and are received in the same slot in which they are sent. Let x : {1, ..., M} × V → {0, 1} be a function where

x_mn = 1, if station n is scheduled in slot m; 0, otherwise. (7-1)

Also, let c : V × V → {0, 1} return 1 if i and j are one-hop neighbors, i.e., if (i, j) ∈ E and i ≠ j. As mentioned above, there are two problems which must be solved in order to obtain optimal broadcast schedules using the TDMA protocol. The first is the FRAME LENGTH MINIMIZATION PROBLEM (FLMP) and the second is the THROUGHPUT MAXIMIZATION PROBLEM (TMP). Using the aforementioned definitions and assumptions, we can now formulate the MESSAGE SCHEDULING PROBLEM ON TDMA NETWORKS (MSP-TDMA) as the following multiobjective optimization problem:

Minimize M
Maximize Σ_{m=1}^{M} Σ_{n=1}^{N} x_mn
subject to
Σ_{m=1}^{M} x_mn ≥ 1, ∀ n ∈ V, (7-2)
c_ij (x_mi + x_mj) ≤ 1, ∀ i, j ∈ V, i ≠ j, m = 1, ..., M, (7-3)
c_ik x_mi + c_kj x_mj ≤ 1, ∀ i, j, k ∈ V, i ≠ j ≠ k, m = 1, ..., M, (7-4)
x_mn ∈ {0, 1}, ∀ n ∈ V, m = 1, ..., M, (7-5)
M ∈ Z⁺. (7-6)

The objective provides a minimum frame length with maximum bandwidth utilization, while constraint (7-2) ensures that all stations broadcast at least once. Constraints (7-3) and (7-4) prevent direct and hidden collisions, respectively. We note here that we will not be attempting to solve this problem by using the typical multiobjective optimization approach, in which one combines the multiple objectives into one scalar objective whose optimal value is a Pareto optimal solution to the original problem. Instead, we will decouple the objectives and handle each independently. This is done because for the MSP-TDMA, frame length minimization usually takes precedence over the utilization maximization problem. This is the usual modus operandi of other heuristics in the literature [164, 169, 175]. Suppose that we relax the MSP-TDMA and consider only the first objective function. This is referred to as the FRAME LENGTH MINIMIZATION PROBLEM (FLMP) and is given by the following integer program: min{M : (7-2)–(7-6)}.
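The constraints above can be checked mechanically for any candidate schedule. A minimal sketch, with stations labeled 0, ..., n−1 and each slot represented as a set of stations (an assumed representation): it tests coverage (7-2), direct collisions (7-3), and hidden collisions (7-4) in turn.

```python
def feasible(adj, schedule, n):
    """Check a candidate schedule against constraints (7-2)-(7-4):
    every station broadcasts at least once (7-2), no two one-hop
    neighbors share a slot (7-3), and no two stations with a common
    neighbor share a slot (7-4)."""
    if set().union(*schedule) != set(range(n)):   # (7-2): coverage
        return False
    for slot in schedule:
        for i in slot:
            for j in slot:
                if i < j:
                    if j in adj[i]:               # (7-3): direct collision
                        return False
                    if adj[i] & adj[j]:           # (7-4): hidden collision
                        return False
    return True
```

Any schedule passing these checks, with M equal to its number of slots, is feasible for the frame length minimization version as well.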
Clearly any feasible solution to this problem is feasible for MSP-TDMA. Now, consider a graph G' = (V, E'), where V follows from the original communication graph G, but whose edge set is given by E' = E ∪ {(i, j) : i, j are two-hop neighbors}. Then using this augmented graph, we can formulate the following theorem.

Theorem 25. The FRAME LENGTH MINIMIZATION PROBLEM on G = (V, E) is equivalent to finding an optimal coloring of the vertices of G' = (V, E').

Proof: Recall that in order for a message schedule to be feasible, all stations must broadcast at least once and no collisions may occur, either hidden or direct. Notice now that E' contains edges between both one-hop and two-hop neighbors, and in any feasible solution, neither of these can transmit in the same slot. Thus, there is a one-to-one correspondence between time slots in G and vertex colors in G'. Hence, a minimum coloring of the vertices of G' provides the minimum number of slots needed for a collision-free broadcast schedule on G. □

After one has successfully solved the FLMP, yielding an optimal frame length M*, the THROUGHPUT MAXIMIZATION PROBLEM (TMP), given as max{Σ_{m=1}^{M*} Σ_{n=1}^{N} x_mn : (7-2)–(7-6)}, can be solved, where M is replaced by M* in (7-2)–(7-6). It turns out that both the FLMP and the TMP have been shown to be NP-hard [39, 67]. Thus it is unlikely that a polynomial algorithm exists for finding the optimal broadcast schedule [79]. It is interesting to note, however, that if we ignore constraint (7-4), which prevents two-hop neighbors from transmitting simultaneously, then the resulting problem is in P, and a polynomial time algorithm is provided in [93].

7.3 Computational Complexity

This section presents computational complexity results for the MSP-TDMA. It was first noted that the MSP-TDMA is NP-complete by Wang and Ansari in [169]. However, their proof of the NP-completeness of the recognition version of the problem was incorrect due to some faulty arguments.
Namely, they claimed that the GRAPH COLORING PROBLEM is equivalent to the MAXIMUM INDEPENDENT SET PROBLEM, based on the incorrect assumption that, given an arbitrary graph, an optimal coloring can be found by recursively computing a maximum independent set and removing it from the graph. Thus, by coloring different independent sets in different colors, they claim that the chromatic number of the graph equals the total number of independent sets computed.

Figure 7-1: Counterexample to the claim of Wang & Ansari that an optimal graph coloring can be found by recursively finding a maximum independent set and removing it from the graph.

Figure 7-1 presents a counterexample to this statement. It is easy to see that the independence number of the graph in this figure is 3. Assuming that the first maximum independent set found using the so-claimed "optimal" coloring algorithm of Wang and Ansari is {4, 5, 6}, and consequently removing this set from the graph, we obtain a clique (complete subgraph) on the three vertices {1, 2, 3}. The independence number of the remaining graph is 1, so all three of the remaining vertices have to be colored in different colors. Thus, the Wang-Ansari coloring algorithm results in a 4-coloring. However, it is easy to see that the chromatic number of this graph is 3. For example, one optimal coloring is given by the following partition: {1, 5, 6}, {2, 4}, {3}. Therefore, the coloring obtained using the Wang-Ansari approach is not optimal. Next we prove that the recognition version of the MSP-TDMA is in fact NP-complete. We consider the following problem:

TDMA MESSAGE SCHEDULING PROBLEM (K-MSP)
INSTANCE: An undirected graph G = (V, E) and an integer K.
QUESTION: Does there exist a broadcast schedule with frame length ≤ K?

Theorem 26. The TDMA MESSAGE SCHEDULING PROBLEM is NP-complete.

Figure 7-2: Construction of graph G' from G.
Proof: To show that K-MSP is NP-complete, we need to show that (1) K-MSP ∈ NP; (2) some NP-complete problem reduces to K-MSP in polynomial time. Suppose that n = |V| and m = |E|. Without loss of generality, we assume that G is connected (if it is not, we can consider each connected component separately). K-MSP ∈ NP, since a given broadcast schedule with frame length k ≤ K can be verified for validity in O(n³) time. Indeed, the verification of validity consists of checking, for each vertex i ∈ V, that the set L_i of all time slots in which the vertices from {i} ∪ N(i) transmit according to the given schedule does not contain any repeated elements. This can be done by sorting the time slot numbers in L_i in O((|L_i| + 1) log(|L_i| + 1)) time for vertex i; therefore the total run time will be O(Σ_{i=1}^{n} (|L_i| + 1) log(|L_i| + 1)) = O((m + n) log n) = O(n³). We will show that the graph k-coloring problem can be reduced to K-MSP in polynomial time. Recall that the k-coloring problem is: given G = (V, E) and an integer k, does there exist a proper coloring of the vertices of G that uses at most k colors? This is a well-known NP-complete problem [79]. Given a graph G = (V, E), we construct the corresponding graph G' = (V', E'), where V' = V ∪ E and E' = {[i, (i, j)] : (i, j) ∈ E, i, j ∈ V} ∪ {(e₁, e₂) : e₁, e₂ ∈ E}. An example of this construction is shown in Figure 7-2. Obviously G' can be constructed in polynomial time. Moreover, G has a proper k-coloring if and only if G' allows a broadcast schedule with frame length ≤ k + m. To see this, note that by the construction of graph G', (v₁, v₂) ∈ E if and only if v₁, v₂ ∈ V are two-hop neighbors in G'. Also, V' \ V forms a clique in G', and any vertex in this clique is a two-hop neighbor of any vertex in V since G is connected. Thus no other vertex can transmit in the same time slot with a vertex from the clique, so any broadcast schedule in G' will require m time slots just for the vertices from the clique to transmit.
The remaining vertices in V' (i.e., vertices from V) can transmit according to any proper coloring of G, where different time slots in the resulting broadcast schedule correspond to different colors in the coloring. Therefore, there is a one-to-one correspondence between proper colorings in G and feasible broadcast schedules in G'. We see that k-coloring in fact reduces in polynomial time to K-MSP, where K = k + m. Thus, the proof is complete.

7.4 Heuristics

In this section, we introduce and discuss several heuristics which have been applied to the MSP-TDMA with varying degrees of success. The specific algorithms which are compared include: the Greedy Randomized Adaptive Search Procedure (GRASP) [39], GRASP with Path Relinking [28], Reactive GRASP with Path Relinking [27], Sequential Vertex Coloring (SVC) [175], Mean Field Annealing (MFA) [169], a Mixed Neural-Genetic Algorithm (HNN-GA) [164], and a new combinatorial algorithm by Commander and Pardalos [43], which we present here.

7.4.1 Combinatorial Algorithm for TDMA Message Scheduling

The inherent intractability of the problem motivates the need for efficient heuristics to quickly provide good solutions for nontrivial instances. In this section, we describe a new algorithm for the MESSAGE SCHEDULING PROBLEM ON TDMA NETWORKS. The heuristic is a two-phase iterative procedure for which pseudocode is provided in Figure 7-3.

procedure ComAlg-BSP(G')
  M* ← |V|
  X* ← 0
  for i = 1 to MaxIter do
    M ← SlotMinimization(G', α, SlotIter)
    if M ≤ M* then
      M* ← M
      X ← BurstMaximization(G', M*, V*)
    end
    if X > X* then
      X* ← X
    end
  end
  return (M*, X*, V*)
end procedure ComAlg-BSP

Figure 7-3: Pseudocode of the proposed heuristic for MSP-TDMA.

First, we concentrate primarily on the frame length minimization portion of the MSP-TDMA by using a greedy heuristic for graph coloring which computes near optimal solutions for the FLMP.
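As a concrete illustration of the coloring-based first phase, the sketch below builds the augmented graph G' of Theorem 25 and colors it with a plain largest-degree-first greedy rule. The function names are hypothetical, and the deterministic greedy rule is a simplified stand-in for the randomized construction described below; it yields an achievable frame length, i.e., an upper bound on the optimum.

```python
def augment(adj):
    """Build G' of Theorem 25: the original edges plus an edge between
    every pair of two-hop neighbors (vertices sharing a neighbor k)."""
    aug = {v: set(nbrs) for v, nbrs in adj.items()}
    for k in adj:
        for i in adj[k]:
            for j in adj[k]:
                if i != j:
                    aug[i].add(j)
                    aug[j].add(i)
    return aug

def greedy_frame_length(adj):
    """Color G' greedily in largest-degree-first order; the number of
    colors used is the number of slots in a collision-free schedule."""
    aug = augment(adj)
    color = {}
    for v in sorted(aug, key=lambda u: -len(aug[u])):
        used = {color[u] for u in aug[v] if u in color}
        c = 0
        while c in used:  # smallest color not used by a G'-neighbor
            c += 1
        color[v] = c
    return max(color.values()) + 1
```

On the path 0-1-2, G' is a triangle and three slots are required; on a star with three leaves, G' is a 4-clique and four slots are required.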
Since this solution will have each station transmitting exactly once, a local improvement method is then applied which attempts to maximize the throughput within the derived frame length. To increase the efficiency of the procedure, the BurstMaximization procedure is only entered if the current frame length M is at least as small as the current best value M*. After some specified number of iterations, the algorithm terminates, returning the best overall solution, which consists of the frame length M*, the total number of bursts X*, and the schedule of slot assignments V*.

Frame Length Minimization. For the first phase of the algorithm, we apply a greedy construction heuristic to determine the value of M, the number of time slots required for all stations to transmit. As a result of Theorem 25, the method is based on the construction phase of the Greedy Randomized Adaptive Search Procedure (GRASP) [157] for coloring sparse graphs proposed by Laguna and Marti in [126]. This particular method was chosen because it is able to quickly provide excellent solutions for the frame length. That being said, any other coloring heuristic would work for the frame length minimization phase. In fact, in [175] a method based on Sequential Vertex Coloring was used to determine the value of M. However,

procedure SlotMinimization(G', α, SlotIter)
1   M ← 0
2   V' ← V
3   while V' ≠ ∅ do
4     M ← M + 1
5     Ecount ← ∞
6     for j = 1 to SlotIter do
7       V ← V', U ← ∅, S ← ∅
8       while V ≠ ∅ do
9         if U = ∅ then
10          RCL ← {(1 − α)·|V| stations of max degree in V}
11        else
12          RCL ← {(1 − α)·|V| stations of max degree in V ∩ U}
13        end if
14        s ← randSelect(RCL)
15        S ← S ∪ {s}
16        N(s) ← {w : (s, w) ∈ E'}
17        V ← V \ ({s} ∪ N(s))
18        U ← U ∪ N(s)
19      end while
20      E* ← {(u, v) ∈ E' : u, v ∈ V' \ S}
21      if |E*| < Ecount then
22        V_M ← S
23        Ecount ← |E*|
24      end if
25    end for
26    V' ← V' \ V_M
27  end while
28  return (M, V* = {V_1, V_2, ..., V_M})
end procedure SlotMinimization

Figure 7-4: Greedy randomized heuristic for frame length minimization.
the randomized approach of the chosen method allows us to explore the search space more thoroughly and provides several feasible solutions to work with in the throughput maximization phase. This is because different optimal colorings will yield different solutions in the second phase. Furthermore, since sparse graphs usually contain an exponential number of optimal colorings [119], the chosen method leads to a variety of solutions to explore in phase two. Pseudocode for this routine is given in Figure 7-4. Our implementation of the frame length minimization heuristic is exactly as described in [126]. The procedure takes the augmented graph G', a proportional parameter α, and a value SlotIter as input, and creates an initial broadcast schedule one slot at a time. The value α ∈ [0, 1] determines the amount of randomness, or conversely greediness, that the procedure uses. SlotIter is the number of candidate schedules for a particular slot from which the best is chosen. Initially, the frame length M is initialized to 0 and V', the set of unscheduled stations, is initialized to V. The initial schedule is created in the while loop from lines 3-27. After incrementing the frame length, the for loop from lines 6-25 is entered. In this loop, SlotIter candidate schedules are created for the current slot V_M. Initially, V, the set of admissible unscheduled stations, is initialized to V', and U, the set of inadmissible scheduled vertices, is initialized to the empty set. S, the set of stations scheduled in the current slot, is also set to ∅. In lines 9-13 a so-called Restricted Candidate List (RCL) is constructed, containing the (1 − α)·|V| admissible stations of maximum degree. It is now clear how the particular value of α controls the amount of randomness that is used by the algorithm. A value of α = 0 would result in a simple random search, while α = 1 would yield a pure greedy search [152].
After the construction of the RCL, an element s ∈ RCL is chosen at random and scheduled in the current slot in line 15. The sets V and U are updated and the loop continues. After the slot capacity is maximal, the set E* is computed, which contains the set of edges remaining in the graph induced by the yet unscheduled stations. If |E*| is less than the current minimum value Ecount, then the current candidate slot schedule is saved in V_M in line 22. In line 26, after SlotIter samples, the best slot schedule is removed from the graph and the main loop repeats [126]. Finally, the frame length M and the final slot schedule V* = {V_1, V_2, ..., V_M} are returned to the main procedure. The result of this procedure is a feasible solution for MSP-TDMA in which each station is scheduled to broadcast in exactly one slot during the frame. This follows directly from the result proven in Theorem 25. For a discussion of the computational complexity of the proposed procedure, the reader is referred to [126].

procedure BurstMaximization(G', M*, V*)
1   X ← |V|
2   for i = 1 to M* do
3     T ← V
4     while T ≠ ∅ do
5       T ← {v : v ∉ V_i and ∀ s ∈ V_i, (v, s) ∉ E'}
6       s ← randSelect(T)
7       V_i ← V_i ∪ {s}
8       X ← X + 1
9     end while
10  end for
11  return (X, V* = {V_1, V_2, ..., V_M})
end procedure BurstMaximization

Figure 7-5: Throughput maximization pseudocode.

Throughput Maximization Phase. The second phase of the proposed method attempts to maximize the throughput, beginning with the feasible solution found in the frame length minimization phase. Clearly, the solution from the first phase will not provide an optimal throughput in general, because each station will only be scheduled to transmit once in the frame. Therefore, we use a randomized local improvement method to schedule each station as many times as possible in the frame. Pseudocode for the throughput maximization heuristic is provided in Figure 7-5, and the method proceeds as follows. Since each station is only scheduled once, the total number of bursts X is set to |V|.
The main loop from lines 2-10 locally optimizes each slot in the frame. First, the set of stations which can transmit along with those stations already scheduled in the current slot, namely T, is initialized to V. T is then updated to contain those stations v which are not already scheduled in the current slot and are not adjacent to any station s which is scheduled in the current slot. An element of T is then selected randomly and added to the current slot. In line 8 the total number of bursts is incremented, and the loop repeats. The method proceeds to the next slot when there are no stations which can transmit with those currently scheduled, i.e., when T = ∅. The method returns the total number of bursts X and the updated broadcast schedule V* in line 11.

7.4.2 GRASP

Recall that, as described above, the Greedy Randomized Adaptive Search Procedure (GRASP) is a two-phase iterative metaheuristic for combinatorial optimization [69, 72, 157]. In the first phase, referred to as the construction phase, a greedy randomized initial feasible solution is created. Then in the second phase, the initial solution is improved by the application of a local search procedure. The best solution over all GRASP iterations is returned. GRASP has been applied to many combinatorial problems such as quadratic assignment [128, 140], job shop scheduling [18, 7], private virtual circuit routing [156], and satisfiability [154]. GRASP was successfully applied to the MSP-TDMA by Commander et al. in [39]. We describe the implementation below.

Construction Phase. The construction phase of the GRASP builds a solution iteratively from a partial broadcast schedule which is initially empty. The stations are first sorted in descending order of the number of one-hop and two-hop neighbors. Next, a so-called Restricted Candidate List (RCL) is created, consisting of those greedily selected stations which may broadcast simultaneously with the stations previously assigned to the current slot.
From this RCL a station is randomly chosen and assigned to the current slot. A new RCL is created and another station is randomly selected. This process continues until there are no stations to put in the RCL, at which time the slot number is incremented and the procedure is repeated recursively for the subgraph induced by the set of all vertices whose corresponding stations have not yet been assigned to a time slot.

Local Search. The local search phase is a swap-based procedure adapted from a similar method for graph coloring implemented by Laguna and Marti in [126]. First, the two slots with the fewest number of scheduled transmissions are combined, so that the total number of slots is now given by k = m − 1, where m is the frame length of the schedule computed in the construction phase. Denote the new broadcast schedule as {s_mn : m = 1, ..., k, n = 1, ..., N}. Now, define the function f(s) := Σ_{m=1}^{k} |E(s_m)|, where E(s_m) is the set of collisions in slot s_m. f(s) is then minimized by the application of a local search procedure as follows. A colliding station in the combined slot is chosen randomly, and every attempt is made to swap this station with another from the remaining k − 1 slots. After a swap is made, f(s) is reevaluated. If the result is better, that is, if f(s) has a lower value than before the swap, the swap is kept and the process is repeated with the remaining colliding stations. If after every attempt to swap a colliding station the result is unimproved, a new colliding station is chosen and the swap routine is attempted again. This continues either until a successful swap is made or for some specified number of iterations. If a solution is improved such that f(s) = 0, then the frame length has been successfully decreased by one slot. The value of k is then decremented and the process is repeated, beginning with the combination of the two "smallest" slots. If the procedure ends with f(s) > 0, then no improved solution was found.
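The function f(s) driving the swap-based local search is just a collision count over the shortened frame. A minimal sketch, assuming slots are stored as sets of stations and adj gives the one-hop neighbors (so a colliding pair is either one-hop adjacent or shares a common neighbor):

```python
def slot_collisions(adj, slot):
    """Number of colliding pairs within one slot: pairs of stations that
    are one-hop neighbors (direct) or share a common neighbor (hidden)."""
    stations = sorted(slot)
    count = 0
    for a in range(len(stations)):
        for b in range(a + 1, len(stations)):
            i, j = stations[a], stations[b]
            if j in adj[i] or (adj[i] & adj[j]):
                count += 1
    return count

def f(adj, schedule):
    """f(s): total collisions over all k slots of the shortened frame;
    f(s) == 0 means the reduced frame length is feasible."""
    return sum(slot_collisions(adj, slot) for slot in schedule)
```

Each candidate swap can then be accepted or rejected simply by comparing f(s) before and after the move.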
7.4.3 Sequential Vertex Coloring

In [175], Yeo et al. take a multiobjective optimization approach to solving the MSP-TDMA. They implement a two-phase heuristic based on the idea of sequential vertex coloring (SVC). In the first phase, they consider only the problem of minimizing the frame length. Then in phase two, the frame length is fixed at the solution from phase one and the utilization within the frame is maximized.

Frame Length Minimization. For this phase, the frame length minimization in the MSP-TDMA is attacked by solving the graph coloring problem on the augmented graph. More specifically, an algorithm based on the sequential vertex ordering method is used to solve this problem. This is done by first ordering the stations in descending order of the number of one-hop and two-hop neighbors. The first vertex is colored, and the list of the other N − 1 vertices is scanned downward. Each remaining vertex is colored with the smallest color which has not already been assigned to a one-hop neighboring station. The process continues until all vertices have been assigned a color.

Utilization Maximization. Beginning with this initial schedule, phase two attempts to maximize the throughput in the TDMA frame. To maximize the utilization within the frame whose length was determined in phase one, the ordering method of the sequential vertex coloring algorithm is applied again. The stations are now ordered in ascending order of the number of one-hop and two-hop neighbors. The first ordered station is then assigned to any slots in which it can simultaneously broadcast with the previously assigned stations. This process is repeated for every station in the ordered list.

7.4.4 Mean Field Annealing

In 1997, Wang and Ansari [169] proposed a heuristic for the MSP-TDMA based on Mean Field Annealing (MFA). In statistical mechanics, the physical process of annealing is used to relax a system to the state of minimal energy.
This is done by heating the solid until it melts and then cooling it slowly, so that at each temperature the particles randomly arrange themselves until reaching thermal equilibrium. In [116], Kirkpatrick et al. introduced a method for combinatorial problems known as simulated annealing (Section 2.7.2). Based on the theory of the physical process, simulated annealing was shown to asymptotically converge to the global minimum after performing a number of so-called transitions at decreasing temperatures. Though simulated annealing is guaranteed to converge to the global optimal solution, the process is quite often computationally expensive. Mean field annealing, a heuristic which mimics the idea of mean field approximation from statistical physics [150], can be employed instead. In MFA, the stochastic process of simulated annealing is replaced by a set of deterministic equations. Though MFA does not guarantee convergence to a global optimal solution, it can provide an excellent approximation to an optimal solution and is much less computationally expensive.

7.4.5 Mixed Neural-Genetic Algorithm

As in the algorithm presented by Yeo et al. in [175], Salcedo-Sanz et al. [164] introduced a two-phase heuristic combining Hopfield neural networks [104] and genetic algorithms as in [171]. As with the vertex coloring algorithm, phase one of the mixed neural-genetic algorithm minimizes the frame length, and phase two attempts to maximize the utilization within the slots.

Frame Length Minimization. The frame length minimization problem presented in [164] is the same as described above. For its solution, a discrete-time binary Hopfield neural network (HNN) is used. As described in [164], the HNN can be represented as a graph whose vertices are the neurons (stations) and whose edges are the direct collisions. The graph is then mapped to the schedule matrix S as defined above. The neurons are updated one at a time after a randomized initialization until the system converges.
Utilization Maximization. In this phase, a genetic algorithm is used to maximize the channel utilization within the frame length that was determined in phase one. An HNN is also used to ensure that all constraints are satisfied. Genetic algorithms receive their name from an explanation of the way they behave: not surprisingly, they are based on Darwin's theory of natural selection. Genetic algorithms store a set of solutions and then work to replace these solutions with better ones based on a fitness criterion, here represented by the objective function value.

7.5 Computational Results

The proposed heuristic was coded in the C++ programming language and compiled using Microsoft® Visual C++ 6.0. The test machine was a PC equipped with a 1700MHz Intel® Pentium® M processor and 1GB of RAM operating under the Microsoft® Windows® XP environment. The heuristic was tested on three classical instances as well as a set of 60 random unit disk graphs [35] of varying densities, 20 graphs each having 50, 75, and 100 nodes. The graphs are those used by Butenko et al. in a prior MSP-TDMA study [39, 40]. We compared our results to those found by several heuristics from the literature, all of which were tested on the same PC described above. As mentioned by Pitsoulis and Resende [152], the particular value of α used in randomized greedy heuristics is typically determined either empirically or chosen randomly during each iteration. Alternatively, in a Reactive GRASP the value of α is tuned automatically to favor values that tend to produce better solutions. Nevertheless, during our testing we found that a value of α = 0.1 generally produced the best overall solutions for the instances tested. The other parameter, SlotIter, was set to 5. In addition, we implemented the integer programming (IP) model for the THROUGHPUT MAXIMIZATION PROBLEM using the Xpress-MP Optimization suite from Dash Optimization [108].
Xpress-MP contains an implementation of the simplex method [98] and uses a branch and bound algorithm [173] together with advanced cutting-plane techniques [107, 139]. Thus not only are we able to compare our heuristic to those in the literature, but we can also see how the heuristics compare with the optimal solutions. Though finding the optimal frame length is NP-hard, we can use the IP model for the TMP to confirm whether or not a frame length is optimal. Consider an instance of the MSP-TDMA and let M* be the optimal frame length. Then if we set M = M* - 1 in the integer programming model for the TMP, the resulting IP will not yield any feasible integer solutions. In fact, the linear programming relaxation could also be infeasible, thus implying that the particular instance of the TMP is itself infeasible. The proposed heuristic was first tested using three examples introduced by Wang and Ansari in [169], which have since become the de facto test cases for TDMA broadcast scheduling algorithms. These examples include networks of varying densities with 15, 30, and 40 stations. The graphs of the networks can be seen in Figure 7-6. Table 7-1 provides the optimal solutions for the three aforementioned networks as well as the heuristic solutions found by our combinatorial algorithm (ComAlg), the GRASP from [39], the Mixed Neural-Genetic Algorithm (HNN-GA) proposed in [164], the Mean Field Annealing (MFA) method from Wang and Ansari [169], and the Sequential Vertex Coloring (SVC) heuristic from [175]. The solutions are reported as (X, M). Notice that the proposed algorithm found the optimal solution for each of the three instances. The average computation time required for these instances by our method was 1.375 seconds. The average time required by Xpress-MP to compute the optimal solutions was 3411.4 seconds, with the 30 station network taking 10212 seconds.
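To make the feasibility notion underlying these comparisons concrete: a TDMA broadcast schedule is collision free when no station shares a slot with any station within two hops of it. The following sketch checks that condition on a small hypothetical network (the graph and schedules are illustrative, not one of the benchmark instances):

```python
from itertools import combinations

def two_hop_conflicts(adj):
    """Pairs of stations within two hops of each other (must not share a slot)."""
    conflicts = set()
    for u in adj:
        for v in adj[u]:                    # one-hop neighbors
            conflicts.add(frozenset((u, v)))
            for w in adj[v]:                # two-hop neighbors
                if w != u:
                    conflicts.add(frozenset((u, w)))
    return conflicts

def collision_free(adj, schedule):
    """schedule maps slot -> set of stations broadcasting in that slot."""
    conflicts = two_hop_conflicts(adj)
    return all(frozenset(p) not in conflicts
               for slot in schedule.values()
               for p in combinations(slot, 2))

# Illustrative 4-station path network 0-1-2-3.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
ok  = collision_free(adj, {1: {0, 3}, 2: {1}, 3: {2}})   # 0 and 3 are 3 hops apart
bad = collision_free(adj, {1: {0, 2}, 2: {1}, 3: {3}})   # 0 and 2 are 2 hops apart
```

A check of this form is what the IP model enforces through its constraints; the heuristic comparisons below all report only schedules that pass it.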
Next, in order to test the scalability of the new method and evaluate its performance on general networks, we tested the algorithms on the 60 random graphs from [39].

Figure 7-6: Benchmark TDMA test cases. (a) 15 station network. (b) 30 station network. (c) 40 station network.

Table 7-1: Comparison of solutions for the benchmark instances from Wang and Ansari.

Stations  Optimal Soln  ComAlg   GRASP    HNN-GA   MFA      SVC
15        (20,8)        (20,8)   (20,8)   (20,8)   (18,8)   (18,8)
30        (36,10)       (36,10)  (36,10)  (35,10)  (39,12)  (37,11)
40        (69,8)        (69,8)   (65,8)   (67,8)   (71,9)   (60,8)

The comparative results of the proposed algorithm against the best solutions computed by Xpress-MP after 3600 seconds, as well as the aforementioned heuristics,¹ on the 50 station graphs from [39] are given in Table 7-2. The first column gives the instance name, followed by the density of the graph G'. Notice that the solutions from the new method (ComAlg) are at least as good as those of any other heuristic on all of these instances. Specifically, the new method provides better solutions for 15 of the 20 instances. An asterisk indicates that the reported solution is optimal. For these instances, the new algorithm found optimal solutions for 40% of the test cases. The average frame utilization is also reported at the bottom of the table. The utilization ρ provides a measure of the efficiency of a broadcast schedule and is computed as follows:

    ρ := X / (N M),    (7-7)

where X is the total number of scheduled broadcasts, N is the number of stations, and M is the frame length. We see that for the 50 station networks, the proposed algorithm has an average channel utilization that is 10% greater than the other heuristics. The average optimality gap for the throughput maximization phase was 1.92%. The average computation time for our algorithm on these instances was 2.8 seconds.

The comparative solutions for the 75 station networks are given in Table 7-3. Notice that our method outperforms the other heuristics in the literature on every instance.
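The utilization values reported in these tables can be computed directly from the per-slot broadcast counts of a schedule. As a sanity check against the 15-station benchmark optimum (X, M) = (20, 8), note that the per-slot split below is an illustrative assumption (only its total, 20, is taken from the table):

```python
def utilization(broadcasts_per_slot, n_stations):
    """Channel utilization rho = X / (N * M):
    X = total scheduled broadcasts, N = stations, M = frame length."""
    m = len(broadcasts_per_slot)             # frame length M
    x = sum(broadcasts_per_slot)             # total broadcasts X
    return x / (n_stations * m)

# Hypothetical split of X = 20 broadcasts over an M = 8 slot frame, N = 15.
rho = utilization([3, 3, 3, 3, 2, 2, 2, 2], 15)   # 20 / (15 * 8)
```

Only the totals X, N, and M matter to ρ, so any per-slot split with the same total gives the same value.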
For these networks, the proposed algorithm has an average channel utilization that is 8.1% greater than the other methods. The heuristic required on average 6.62 seconds to find the target solution, and as with the 50 station networks, optimal frame lengths are achieved for all instances. Furthermore, the solutions from our method are always within 10% of the optimal solutions.

Finally, the solutions for the 100 station networks are given in Table 7-4. Once more, the new algorithm finds solutions which are superior to those of the other heuristics on each instance. The utilization was an average of 10.17% higher than that of the other algorithms. The average computation time was 12.17 seconds, with reported gaps of less than 10% of the best solution found by Xpress-MP after 3600 seconds. Notice also that Xpress-MP was unable to compute a solution superior to the proposed heuristic for 100r50i3 and 100r50i6. Each of these instances was run for 10000 seconds, and Xpress-MP was unable to compute a feasible solution within the frame length achieved by our method.

¹ The MFA algorithm of [169] was not available to the authors for testing.

Table 7-2: Comparison of optimal and heuristic solutions for graphs with |V| = 50 stations. An * indicates that the solution is optimal, while a † indicates the solution is the best found by Xpress-MP after 3600s. Solutions are reported as (X, M).

Instance  Density  Xpress-MP     ComAlg        GRASP         HNN-GA       SVC
50r20i6   0.1136   (146,10)      (145,10)      (143,10)      (145,10)     (111,10)
50r20i2   0.0824   (86,6)        (86,6)*       (84,6)        (86,6)*      (82,6)
50r20i3   0.1040   (85,6)        (85,6)*       (83,6)        (85,6)*      (60,7)
50r20i7   0.0872   (90,6)        (89,6)        (87,6)        (89,6)       (52,6)
50r20i5   0.0968   (107,7)       (107,7)*      (107,7)*      (105,7)      (64,8)
50r30i1   0.1728   (78,8)        (76,8)        (74,8)        (75,8)       (54,9)
50r30i2   0.2122   (77,9)        (75,9)        (81,10)       (70,9)       (73,10)
50r30i3   0.1960   (84,9)        (84,9)*       (78,9)        (78,9)       (78,10)
50r30i4   0.2048   (74,8)†       (71,8)        (67,8)        (67,8)       (60,10)
50r30i5   0.2096   (82,9)        (79,9)        (76,9)        (84,10)      (89,11)
50r40i1   0.3048   (76,12)       (74,12)       (73,12)       (71,12)      (58,14)
50r40i2   0.3680   (83,14)†      (80,14)       (77,14)       (77,14)      (83,16)
50r40i3   0.3408   (76,12)†      (76,12)*      (80,13)       (77,13)      (56,15)
50r40i4   0.3712   (81,15)       (81,15)*      (80,15)       (76,15)      (81,17)
50r40i5   0.3208   (71,12)       (70,12)       (67,12)       (65,12)      (55,14)
50r50i1   0.4280   (72,17)       (72,17)*      (71,17)       (75,18)      (61,19)
50r50i2   0.4640   (61,15)†      (61,15)†      (65,16)       (68,17)      (55,17)
50r50i3   0.4480   (66,15)†      (66,15)†      (64,15)       (65,16)      (56,17)
50r50i4   0.4376   (70,15)       (70,15)*      (72,16)       (72,16)      (79,18)
50r50i5   0.4088   (55,14)†      (55,14)†      (58,15)       (56,15)      (61,18)
Avg Soln  0.2603   (80.1,10.95)  (80.1,10.95)  (79.35,11.2)  (79.3,11.35) (68.4,12.6)
Avg Util           0.1463        0.1463        0.1417        0.1397       0.1086

Table 7-3: Comparison of optimal solver and heuristic solutions for the 75 station networks.

Instance  Density  Xpress-MP      ComAlg         GRASP          HNN-GA      SVC
75r20i1   0.0988   (145,8)        (139,8)        (135,8)        (136,8)     (161,10)
75r20i2   0.1038   (122,8)†       (119,8)        (113,8)        (112,8)     (79,10)
75r20i3   0.1159   (113,7)†       (108,7)        (139,9)        (121,8)     (150,10)
75r20i4   0.0946   (116,7)        (114,7)        (109,7)        (111,7)     (84,8)
75r20i5   0.0988   (145,8)        (138,8)        (131,8)        (135,8)     (161,10)
75r30i1   0.1927   (114,12)       (110,12)       (117,13)       (117,13)    (91,13)
75r30i2   0.1867   (110,11)       (105,11)       (109,12)       (101,11)    (94,12)
75r30i3   0.2190   (140,15)       (133,15)       (132,15)       (132,15)    (81,17)
75r30i4   0.2009   (142,13)†      (133,13)       (127,13)       (128,13)    (144,15)
75r30i5   0.1927   (119,12)†      (111,12)       (106,12)       (108,12)    (89,12)
75r40i1   0.3328   (105,17)†      (103,17)       (104,18)       (113,19)    (79,20)
75r40i2   0.2980   (108,16)†      (106,16)       (109,17)       (115,18)    (86,19)
75r40i3   0.3403   (112,19)†      (109,19)       (105,19)       (103,19)    (87,20)
75r40i4   0.3492   (126,20)†      (118,20)       (124,21)       (119,21)    (79,24)
75r40i5   0.3143   (104,16)†      (97,16)        (100,17)       (109,18)    (114,20)
75r50i1   0.4587   (110,23)†      (106,23)       (107,24)       (108,25)    (123,29)
75r50i2   0.4622   (102,23)†      (97,23)        (99,24)        (104,25)    (108,27)
75r50i3   0.4807   (106,24)†      (102,24)       (104,25)       (111,27)    (114,29)
75r50i4   0.4750   (121,26)†      (115,26)       (112,26)       (110,26)    (102,28)
75r50i5   0.5088   (106,25)†      (104,25)       (111,27)       (107,27)    (106,28)
Avg Soln  0.2686   (118.45,15.5)  (113.25,15.5)  (114.65,16.15) (115,16.4)  (106.6,18.05)
Avg Util           0.1019         0.0974         0.0947         0.0935      0.0787

As mentioned above, the strategy of first minimizing the frame length and then attempting to maximize the throughput within this frame is a common approach [39, 164, 175]. In particular, in [175], the authors propose a heuristic based on sequential vertex coloring to provide a feasible frame length. Then, a greedy heuristic is used to maximize the throughput. Similarly, Salcedo-Sanz et al. propose a hybrid heuristic which minimizes the frame length using a neural network and then maximizes the throughput using a genetic algorithm [164]. The algorithm proposed in [39], based on the GRASP metaheuristic described above, uses a slightly different strategy to provide approximate solutions for the MSP-TDMA. In this algorithm, the construction phase creates a solution iteratively in a manner similar to the one proposed in this chapter. The major difference is that the constructor in [39] does not contain the slot candidate construction loop (Figure 7-4, lines 6-25). Instead, the final station assignments in each slot are taken as the first produced during the greedy randomized construction. This is equivalent to setting SlotIter equal to 1 in the SlotMinimization method above. By setting SlotIter > 1, the proposed method is more likely to produce better solutions during the frame length minimization phase. The local search used in the GRASP in [39] not only attempts to maximize the throughput within the frame created during the construction phase, but also tries to reduce the frame length further. This method was adapted from the GRASP for coloring sparse graphs of Laguna and Marti in [126]. Here the two slots with the fewest broadcasts are combined, creating a new (infeasible) schedule with one less slot than the construction solution.
The set of stations which cause message collisions as a result of the slot combination is determined. For each station causing a collision, an attempt is made to swap it with another station from the remaining slots. If the swap reduces the number of collisions, it is kept and the remaining colliding stations are considered. If all collisions are successfully averted, the process repeats with the combination of two more slots. This contrasts with the proposed method, in which the frame length, once determined by the more selective SlotMinimization method, is fixed for the current iteration.

Experimental analysis shows that our algorithm is superior to the other heuristics in the literature. For all 63 instances tested, the method found solutions at least as good as any of the other algorithms from the literature, outperforming them on 56 cases. We also see that attempting to solve large-scale instances to optimality is impractical. However, our heuristic required only 7.49 seconds on average to find solutions that are within 4.15% of the average best solution found by the commercial IP solver in 3600 seconds.

7.6 Concluding Remarks

In this chapter, we described and implemented several heuristics for the MESSAGE SCHEDULING PROBLEM ON TDMA NETWORKS. In addition, we implemented an optimal solver using Xpress-MP [108]. The MSP-TDMA is an important problem that arises in wireless mesh networks, concerning the efficient scheduling of collision-free broadcasts for the network stations. The objective of the MSP-TDMA is twofold. First, the number of slots required to schedule all stations is minimized. Then the throughput is maximized by scheduling as many stations as possible in each time slot.

Figure 7-7: Example GRASP broadcast schedules for the networks given in Figure 7-6: (a) 15 station network, (b) 30 station network, (c) 40 station network.

CHAPTER 8 CONCLUSION

Throughout this dissertation, we focused on optimization problems in telecommunication systems, with a particular emphasis on wireless ad hoc networks operating in a military environment. We examined two broad classes of problems: that of ensuring communication on the network and, conversely, that of denying service on the network. The problems presented all have similar traits. For example, they are all modeled as discrete optimization problems on graphs. Furthermore, as we saw, all of the problems we examined are NP-hard. We presented an in-depth look at the computational complexity of each problem and examined ways of designing efficient algorithms for each.

Chapter 1 provided an introduction to the dissertation with a brief description of the major contributions and of the problems treated in the subsequent chapters. In Chapter 2, we presented an introduction to global optimization, including many theorems and basic definitions which were applied in the later chapters. This chapter provided a foundation for the work to follow. The subjects of the next three chapters were methods of denying service on telecommunication networks. These problems help to identify weaknesses and vulnerabilities in networks. In Chapter 3, we began with the study of jamming wired telecommunication networks. We presented two formulations and analyzed their computational complexity. Next we provided heuristics which produced excellent solutions for real and randomly generated data sets in a fraction of the time required by a commercial software package.
In Chapter 4, the general WIRELESS NETWORK JAMMING PROBLEM was introduced. This problem is an extension of the CRITICAL NODE PROBLEM studied in Chapter 3. We examined several variations of the problem and provided integer programming formulations for each. Furthermore, we provided formulations which included percentile constraints. The case studies presented showed that the addition of the percentile risk measures yielded excellent solutions with a significant reduction in cost. Heuristic algorithms were also presented, together with a computational study. For each problem presented in this chapter, we assumed a priori knowledge of the network to be jammed. We saw that even with this seemingly generous assumption the problems remain NP-hard.

The work in Chapter 5 relaxed this assumption and considered the problem of jamming a network when no information is available other than the general area known to contain the network. We considered a subproblem of placing the jamming devices on a lattice overlaying the region. A rigorous analysis followed in which we derived upper and lower bounds on the optimal number of jamming devices required to suppress the network. We showed that by considering the cumulative effect of the jamming devices, our result improves on the classical method of covering a region in the plane with uniform circles. Furthermore, a convergence result was provided showing that the bounds are tight to within a constant. To conclude the chapter, we presented a randomized local search algorithm which began with the derived lower bound value and attempted to minimize the number of devices needed to cover the region. Experimental results indicated that the heuristic was able to reduce the number of jamming devices by approximately 25%.

The COOPERATIVE COMMUNICATION PROBLEM ON MOBILE AD HOC NETWORKS (CCP-MANET) was the topic of Chapter 6.
This problem is concerned with determining routes for a set of mobile agents in such a manner that communication among the agents is maximized. We examined several objective functions, compared their relative advantages, and provided an integer programming formulation. We then provided a computational study and proved that the problem is NP-hard via a reduction from the 3-SATISFIABILITY problem. Further, we proved that it is NP-hard to compute an optimal solution at each discrete time step. Next, we derived several heuristics and provided an extensive computational analysis.

In Chapter 7, we took a closer look at the particular communication devices used by the agents in the CCP-MANET problem. In particular, we examined the TDMA MESSAGE SCHEDULING PROBLEM. TDMA is a type of time-division multiplexing in which multiple users share the same frequency channel by dividing the signal into different time slots. The users are then scheduled to broadcast in a set of time slots such that there is no interference among users which broadcast in the same slot. We began by examining the recognition version of the problem and showed that it is NP-complete. We followed this by designing several heuristics and comparing their effectiveness against other heuristics from the literature.

As telecommunication systems evolve ever more rapidly, there are as many directions for future research as one can imagine. At the conclusion of each chapter we indicated several problems and extensions which could follow from the specific work at hand. Of course, the quest for efficient algorithms with better worst-case complexity will always lie at the forefront for all the problems considered. Also, the development of tight upper and lower bounds will certainly aid all future endeavors for each problem.
It is my hope that my work represents the current state of the art for the problems presented, and that my efforts will help our military perform better as they face the daunting task of defending our freedoms wherever they are called.

REFERENCES

[1] E.H.L. Aarts and J. Korst. Simulated Annealing and Boltzmann Machines. John Wiley & Sons, Chichester, UK, 1989.
[2] E.H.L. Aarts and J.K. Lenstra, editors. Local Search in Combinatorial Optimization. Wiley, 1997.
[3] E.H.L. Aarts and M. Verhoeven. Local search. In M. Dell'Amico, F. Maffioli, and S. Martello, editors, Annotated Bibliographies in Combinatorial Optimization, chapter 11. Wiley, 1997.
[4] I. Adler, N. Karmarkar, M.G.C. Resende, and G. Veiga. Data structures and programming techniques for the implementation of Karmarkar's algorithm. ORSA Journal on Computing, 1:84-106, 1989.
[5] I. Adler, N.K. Karmarkar, M.G.C. Resende, and G. Veiga. An implementation of Karmarkar's algorithm for linear programming. Mathematical Programming, 44:297-335, 1989.
[6] R.K. Ahuja, T.L. Magnanti, and J.B. Orlin. Network Flows: Theory, Algorithms, and Applications. Prentice-Hall, 1993.
[7] R.M. Aiex, S. Binato, and M.G.C. Resende. Parallel GRASP with path-relinking for job shop scheduling. Parallel Computing, 29:393-430, 2003.
[8] I.F. Akyildiz, X. Wang, and W. Wang. Wireless mesh networks: a survey. Computer Networks, 47(4):445-487, 2005.
[9] A. Arulselvan, C.W. Commander, L. Elefteriadou, and P.M. Pardalos. Detecting critical nodes in social networks. Social Networks, submitted, 2007.
[10] A. Arulselvan, C.W. Commander, and P.M. Pardalos. A hybrid genetic algorithm for the target visitation problem. Naval Research Logistics, submitted, 2007.
[11] A. Arulselvan, C.W. Commander, P.M. Pardalos, and O. Shylo. Managing network risk via critical node identification. In N. Gülpinar and B. Rustem, editors, Risk Management in Telecommunication Networks.
Springer, submitted 2007.
[12] B.S. Baker. Approximation algorithms for NP-complete problems on planar graphs. Journal of the ACM, 41(1):153-180, 1994.
[13] F. Barahona, M. Grötschel, M. Jünger, and G. Reinelt. An application of combinatorial optimization to statistical physics and circuit layout design. Operations Research, 36:493-513, 1998.
[14] A. Bavelas. A mathematical model for group structure. Human Organization, 7:16-30, 1948.
[15] M.S. Bazaraa, J.J. Jarvis, and H.D. Sherali. Linear Programming and Network Flows. John Wiley & Sons, second edition, 1990.
[16] R. Bellman and R. Kalaba. Dynamic Programming and Modern Control Theory. Academic Press, New York, NY, 1965.
[17] D.P. Bertsekas. Nonlinear Programming. Athena Scientific, second edition, 1999.
[18] S. Binato, W.J. Henry, D.M. Loewenstern, and M.G.C. Resende. A GRASP for job shop scheduling. In C.C. Ribeiro and P. Hansen, editors, Essays and Surveys in Metaheuristics, pages 58-79. Kluwer Academic Publishers, 2002.
[19] P. Biswas and Y. Ye. A distributed method for solving semidefinite programs arising from ad hoc wireless sensor network localization. Technical report, Dept. of Management Science and Engineering, Stanford University, 2003.
[20] P. Biswas and Y. Ye. Semidefinite programming for ad hoc wireless sensor network localization. In Proceedings of the Third International Symposium on Information Processing in Sensor Networks, pages 46-54. ACM Press, 2004.
[21] S.P. Borgatti. Identifying sets of key players in a network. Computational, Mathematical and Organizational Theory, 12(1):21-34, 2006.
[22] L.S. Buriol, M.G.C. Resende, C.C. Ribeiro, and M. Thorup. A hybrid genetic algorithm for the weight setting problem in OSPF/IS-IS routing. Networks, 46(1):36-56, 2005.
[23] S. Butenko, X. Cheng, D.Z. Du, and P.M. Pardalos. On the construction of virtual backbone for ad hoc wireless networks. In S. Butenko, R. Murphey, and P.M.
Pardalos, editors, Cooperative Control: Models, Applications and Algorithms, pages 43-54. Kluwer Academic Publishers, 2002.
[24] S.I. Butenko. Maximum Independent Set and Related Problems, with Applications. PhD thesis, University of Florida, 2003.
[25] S.I. Butenko, X. Cheng, C.A.S. Oliveira, and P.M. Pardalos. A new algorithm for connected dominating sets in ad hoc networks. In S. Butenko, R. Murphey, and P. Pardalos, editors, Recent Developments in Cooperative Control and Optimization, pages 61-73. Kluwer Academic Publishers, 2003.
[26] S.I. Butenko, X. Cheng, C.A.S. Oliveira, and P.M. Pardalos. A new algorithm for connected dominating sets in ad hoc networks. In S. Butenko, R. Murphey, and P. Pardalos, editors, Recent Developments in Cooperative Control and Optimization, pages 61-73. Kluwer Academic Publishers, 2003.
[27] S.I. Butenko, C.W. Commander, C.A.S. Oliveira, and P.M. Pardalos. Reactive GRASP with path relinking for the broadcast scheduling problem. In Proceedings of the 40th International Telemetry Conference, pages 792-800, 2004.
[28] S.I. Butenko, C.W. Commander, and P.M. Pardalos. A GRASP for broadcast scheduling in ad-hoc TDMA networks. In International Conference on Computing, Communications, and Control Technologies, volume 5, pages 322-328, 2004.
[29] S. Capkun, M. Hamdi, and J. Hubaux. GPS-free positioning in mobile ad-hoc networks. In HICSS '01: Proceedings of the 34th Annual Hawaii International Conference on System Sciences (HICSS-34), Volume 9, page 9008, Washington, DC, USA, 2001. IEEE Computer Society.
[30] R. Carraghan and P.M. Pardalos. An exact algorithm for the maximum clique problem. Operations Research Letters, 9:375-382, 1990.
[31] W. Chaovalitwongse. Optimization and Dynamical Approaches in Nonlinear Time Series Analysis with Applications in Bioengineering. PhD thesis, University of Florida.
[32] W. Chaovalitwongse, D.
Kim, and P.M. Pardalos. GRASP with a new local search scheme for vehicle routing problems with time windows. Journal of Combinatorial Optimization, 7:179-207, 2003.
[33] X. Cheng, X. Huang, D. Li, W. Wu, and D.Z. Du. A polynomial-time approximation scheme for the minimum-connected dominating set in ad hoc wireless networks. Networks, 42(4):202-208, 2003.
[34] V. Chvatal. Linear Programming. Freeman, New York, NY, 1983.
[35] B.N. Clark, C.J. Colbourn, and D.S. Johnson. Unit disk graphs. Discrete Mathematics, 86:165-177, 1990.
[36] R. Cohen, K. Erez, D. ben Avraham, and S. Havlin. Efficient immunization strategies for computer networks and populations. Physical Review Letters, 85:4626, 2000.
[37] D.A. Coley. An Introduction to Genetic Algorithms for Scientists and Engineers. World Scientific, 1999.
[38] C.W. Commander. Maximum cut problem, MAX-CUT. In C.A. Floudas and P.M. Pardalos, editors, Encyclopedia of Optimization, volume 2. Springer, 2007.
[39] C.W. Commander, S.I. Butenko, and P.M. Pardalos. On the performance of heuristics for broadcast scheduling. In D. Grundel, R. Murphey, and P. Pardalos, editors, Theory and Algorithms for Cooperative Systems, pages 63-80. World Scientific, 2004.
[40] C.W. Commander, S.I. Butenko, P.M. Pardalos, and C.A.S. Oliveira. Reactive GRASP with path relinking for the broadcast scheduling problem. In Proceedings of the 40th Annual International Telemetry Conference, pages 792-800, 2004.
[41] C.W. Commander, M.J. Hirsch, P.M. Pardalos, and M.G.C. Resende. Cooperative communication in ad-hoc networks. In 2007 National Fire Control Symposium, 2007.
[42] C.W. Commander, C.A.S. Oliveira, P.M. Pardalos, and M.G.C. Resende. A one-pass heuristic for cooperative communication in mobile ad-hoc networks. In D.A. Grundel, R.A. Murphey, P.M. Pardalos, and O.A. Prokopyev, editors, Cooperative Systems: Control and Optimization, pages 285-296. Springer, 2007.
[43] C.W.
Commander and P.M. Pardalos. A combinatorial algorithm for the TDMA message scheduling problem. Computational Optimization and Applications, accepted, 2007.
[44] C.W. Commander, P.M. Pardalos, V. Ryabchenko, O. Shylo, and S. Uryasev. Jamming communication networks under complete uncertainty. Optimization Letters, published online, DOI 10.1007/s11590-006-0043-0, 2007.
[45] C.W. Commander, P.M. Pardalos, V. Ryabchenko, and S. Uryasev. The wireless network jamming problem. Journal of Combinatorial Optimization, published online, DOI 10.1007/s10878-007-9071-7, 2007.
[46] C.W. Commander, M.A. Ragle, and Y. Ye. Semidefinite programming and the sensor network localization problem. In C.A. Floudas and P.M. Pardalos, editors, Encyclopedia of Optimization, volume 2. Springer, 2007.
[47] T.H. Cormen, C.E. Leiserson, and R.L. Rivest. Introduction to Algorithms. MIT Press, Cambridge, MA, 1992.
[48] T.H. Cormen, C.E. Leiserson, R.L. Rivest, and C. Stein. Introduction to Algorithms. MIT Press, Cambridge, MA, 2001.
[49] K.J. Cormican, D.P. Morton, and R.K. Wood. Stochastic network interdiction. Operations Research, 46(2):184-197, 1998.
[50] ILOG CPLEX. http://www.ilog.com/products/cplex, Accessed October 2006.
[51] G.A. Croes. A method for solving travelling salesman problems. Operations Research, 6:791-812, 1958.
[52] G.B. Dantzig. Programming in a linear structure. Technical report, Comptroller, United States Air Force, 1948.
[53] G.B. Dantzig. Discrete variable extremum problems. Operations Research, 5:266-277, 1957.
[54] G.B. Dantzig, D.R. Fulkerson, and S.M. Johnson. Solution of a large scale traveling salesman problem. Operations Research, 2:393-410, 1954.
[55] G.B. Dantzig and P. Wolfe. Decomposition principle for linear programs. Operations Research, 8:101-111, 1960.
[56] C. Darwin. The Origin of Species. Murray, sixth edition, 1872.
[57] M. Desrochers, J. Desrosiers, and F. Soumis. Routing with time windows by column generation. Networks, 14:545-565, 1984.
[58] J.
Desrosiers and F. Soumis. A column generation approach to the urban transit crew scheduling problem. Transportation Science, 23:1-13, 1989.
[59] E.W. Dijkstra. A note on two problems in connexion with graphs. Numerische Mathematik, 1:269-271, 1959.
[60] L. Doherty, K.S.J. Pister, and L. El Ghaoui. Convex position estimation in wireless sensor networks. In Proceedings of IEEE INFOCOM, Anchorage, AK, 2001.
[61] A.G. Doig and A.H. Land. An automatic method for solving discrete programming problems. Econometrica, 28:497-520, 1960.
[62] D. Dreier. Barabasi graph generator v1.4. http://www.cs.ucr.edu/~ddreier, Accessed November 2006.
[63] D.Z. Du and P.M. Pardalos, editors. Handbook of Combinatorial Optimization, volume 1. Kluwer Academic Publishers, 1998.
[64] D.Z. Du and P.M. Pardalos, editors. Handbook of Combinatorial Optimization, volume 2. Kluwer Academic Publishers, 1999.
[65] D.Z. Du and P.M. Pardalos, editors. Handbook of Combinatorial Optimization, volume 3. Kluwer Academic Publishers, 2001.
[66] L. Elefteriadou. Highway capacity. In M. Kutz, editor, Handbook of Transportation Engineering, chapter 8, pages 8-1 to 8-17. McGraw-Hill, 2004.
[67] A. Ephremides and T.V. Truong. Scheduling broadcasts in multihop radio networks. IEEE Transactions on Communications, 38(4):456-460, 1990.
[68] A. Farago. Graph theoretic analysis of ad hoc network vulnerability. In Modeling and Optimization in Mobile, Ad Hoc and Wireless Networks (WiOpt '03), 2003.
[69] T.A. Feo and M.G.C. Resende. Greedy randomized adaptive search procedures. Journal of Global Optimization, 6:109-133, 1995.
[70] P. Festa, P.M. Pardalos, L.S. Pitsoulis, and M.G.C. Resende. GRASP with path-relinking for the weighted maximum satisfiability problem. In S.E. Nikoletseas, editor, Proceedings of the Workshop on Efficient and Experimental Algorithms, volume 3503 of Lecture Notes in Computer Science, pages 367-379.
Springer, 2005.
[71] P. Festa, P.M. Pardalos, M.G.C. Resende, and C.C. Ribeiro. Randomized heuristics for the MAX-CUT problem. Optimization Methods and Software, 7:1033-1058, 2002.
[72] P. Festa and M.G.C. Resende. GRASP: An annotated bibliography. In C. Ribeiro and P. Hansen, editors, Essays and Surveys in Metaheuristics, pages 325-367. Kluwer Academic Publishers, 2002.
[73] C. Floudas and P.M. Pardalos, editors. State of the Art in Global Optimization: Computational Methods and Applications. Kluwer Academic Publishers, 1996.
[74] C.A. Floudas and P.M. Pardalos, editors. Recent Advances in Global Optimization. Princeton University Press, 1992.
[75] C.A. Floudas and P.M. Pardalos, editors. Encyclopedia of Optimization. Kluwer Academic Publishers, 2001.
[76] C.A. Floudas and P.M. Pardalos, editors. Frontiers in Global Optimization. Kluwer Academic Publishers, 2003.
[77] R.W. Floyd. Algorithm 97 (shortest path). Communications of the ACM, 5(6):345, 1962.
[78] L.C. Freeman. Centrality in social networks I: Conceptual clarification. Social Networks, 1:215-239, 1979.
[79] M.R. Garey and D.S. Johnson. Computers and Intractability: A Guide to the Theory of NP-Completeness. W.H. Freeman and Company, 1979.
[80] F. Glover. Heuristics for integer programming using surrogate constraints. Decision Sciences, 8:156-166, 1977.
[81] F. Glover. Future paths for integer programming and links to artificial intelligence. Computers and Operations Research, 5:533-549, 1986.
[82] F. Glover. Tabu search methods in artificial intelligence and operations research. ORSA Artificial Intelligence, 1(2):6, 1987.
[83] F. Glover. Tabu search: Part I. ORSA Journal on Computing, 1(3):190-206, 1989.
[84] F. Glover. Tabu search and adaptive memory programming: advances, applications, and challenges. In R.S. Barr, R.V. Helgason, and J.L. Kennington, editors, Interfaces in Computer Science and Operations Research, pages 1-75.
Kluwer Academic Publishers, 1996. [85] F. Glover, M. Laguna, and R. Marti. Fundamentals of scatter search and path relinking. Control and Cybernetics, 39:653–684, 2000. [86] D.E. Goldberg. Genetic Algorithms in Search, Optimization and Machine Learning. Kluwer Academic Publishers, 1989. [87] D.E. Goldberg. Genetic Algorithms in Search, Optimization, and Machine Learning. Addison-Wesley, 1989. [88] R.E. Gomory. Outline of an algorithm for integer solutions to linear programs. Bulletin of the American Mathematical Society, 64:275–278, 1958. [89] R.E. Gomory and T.C. Hu. Multi-terminal network flows. Journal of SIAM, 9(4):551–570, 1961. [90] D.A. Grundel. Probabilistic analysis and results of combinatorial problems with military applications. PhD thesis, University of Florida, 2004. [108] Dash Optimization Inc. Xpress-Optimizer Reference Manual, 2003. [109] Clay Mathematics Institute. http://claymath.org/millennium, Accessed October 2006. [110] E. Israeli and R.K. Wood. Shortest-path network interdiction. Networks, 40(2):97–111, 2002. [111] N. Karmarkar. A new polynomial time algorithm for linear programming. Combinatorica, 4(4):373–395, 1984. [112] R.M. Karp. Reducibility among combinatorial problems. In R. Miller and J. Thatcher, editors, Complexity of Computer Computations, pages 85–103. Plenum Press, 1972. [113] R. Kershner. The number of circles covering a set. American Journal of Mathematics, 61(3):665–671, 1939. [114] J. King. The commutant is the weak closure of the powers, for rank-1 transformations. Ergodic Theory and Dynamical Systems, 6:363–384, 1986. [115] J. King. Three problems in search of a measure. American Mathematical Monthly, 101:609–628, 1994. [116] S. Kirkpatrick, C. Gelatt, and M. Vecchi. Optimization by simulated annealing. Science, 220:671–680, 1983. [117] L. Kleinrock and J. Silvester. Spatial reuse in multihop packet radio networks. In Proceedings of the IEEE 75, 1987. [118] V. Krebs.
Uncloaking terrorist networks. First Monday, 7(4), April 2002. [119] M. Krivelevich. Sparse graphs usually have exponentially many optimal colorings. Electronic Journal of Combinatorics, 9, 8pp., 2002. [120] P. Krokhmal, R. Murphey, P. Pardalos, S. Uryasev, and G. Zrazhevski. Robust decision making: Addressing uncertainties in distributions. In S. Butenko, R. Murphey, and P. Pardalos, editors, Cooperative Control: Models, Applications, and Algorithms, pages 165–185. Kluwer Academic Publishers, 2003. [121] P. Krokhmal, J. Palmquist, and S. Uryasev. Portfolio optimization with conditional value-at-risk objective and constraints. The Journal of Risk, 4(2):11–27, 2002. [122] J.B. Kruskal. On the shortest spanning subtree and the traveling salesman problem. Proceedings of the American Mathematical Society, 7:48–50, 1956. [123] A.A. Kuehn and M.J. Hamburger. A heuristic program for locating warehouses. Management Science, 9:643–666, 1963. [124] H.W. Kuhn. Nonlinear programming: A historical view. In R.W. Cottle and C.E. Lemke, editors, Nonlinear Programming, volume IX of SIAM–AMS Proceedings, pages 1–26. American Mathematical Society, Providence, RI, 1976. [125] M. Laguna and R. Marti. GRASP with path relinking for 2-layer straight line crossing minimization. INFORMS Journal on Computing, 11:44–52, 1999. [126] M. Laguna and R. Marti. A GRASP for coloring sparse graphs. Computational Optimization and Applications, 19(2):165–178, 2001. [127] E. Lawler. Combinatorial Optimization: Networks and Matroids. Dover, 1976. [128] Y. Li, P.M. Pardalos, and M.G.C. Resende. A greedy randomized adaptive search procedure for the quadratic assignment problem. In P.M. Pardalos and H. Wolkowicz, editors, Quadratic Assignment and Related Problems, volume 16 of DIMACS Series in Discrete Mathematics and Theoretical Computer Science, pages 237–261. 1994. [129] M.V. Marathe, H. Breu, H.B. Hunt III, S.S. Ravi, and D.J. Rosenkrantz.
Simple heuristics for unit disk graphs. Networks, 25:59–68, 1995. [130] E. Minieka. Optimization Algorithms for Networks and Graphs. Dekker, New York, NY, 1978. [131] M. Mitchell. An Introduction to Genetic Algorithms. MIT Press, Cambridge, MA, 1996. [132] David Moore, John Leonard, Daniela Rus, and Seth Teller. Robust distributed network localization with noisy range measurements. In SenSys '04: Proceedings of the 2nd International Conference on Embedded Networked Sensor Systems, pages 50–61, New York, NY, USA, 2004. ACM Press. [133] H. Narayanan, S. Roy, and S. Patkar. Approximation algorithms for min-k-overlap problems using the principal lattice of partitions approach. Journal of Algorithms, 21:306–330, 1996. [134] G. Noubir. On Connectivity in Ad Hoc Networks under Jamming Using Directional Antennas and Mobility, volume 2957 of Lecture Notes in Computer Science, pages 186–200. Springer, 2004. [135] C.A.S. Oliveira. Optimization Problems in Telecommunications and the Internet. PhD thesis, University of Florida, 2004. [136] C.A.S. Oliveira and P.M. Pardalos. Ad hoc networks: Optimization problems and solution methods. In D.Z. Du, M. Cheng, and Y. Li, editors, Combinatorial Optimization in Communication Networks. Kluwer, 2005. [137] C.A.S. Oliveira and P.M. Pardalos. An optimization approach for cooperative communication in ad hoc networks. Technical report, School of Industrial Engineering and Management, Oklahoma State University, 2005. [138] C.A.S. Oliveira, P.M. Pardalos, and T.M. Querido. Integer formulations for the message scheduling problem on controller area networks. In D. Grundel, R. Murphey, and P. Pardalos, editors, Theory and Algorithms for Cooperative Systems, pages 353–365. World Scientific, 2004. [139] C.A.S. Oliveira, P.M. Pardalos, and T.M. Querido. A combinatorial algorithm for message scheduling on controller area networks.
International Journal of Operational Research, 1(1/2):160–171, 2005. [140] C.A.S. Oliveira, P.M. Pardalos, and M.G.C. Resende. GRASP with path-relinking for the QAP. In 5th Metaheuristics International Conference, pages 57.1–57.6, 2003. [141] C.A.S. Oliveira, P.M. Pardalos, and M.G.C. Resende. Optimization problems in multicast tree construction. In Handbook of Optimization in Telecommunications, pages 701–733. Kluwer, Dordrecht, 2005. [142] I.H. Osman. Meta-Heuristics: Theory and Applications. Springer, 2002. [143] I.H. Osman, V.J. Rayward-Smith, C.R. Reeves, and G.D. Smith. Modern Heuristic Search Methods. John Wiley & Sons, New York, NY, 1996. [144] C. Papadimitriou and K. Steiglitz. Combinatorial Optimization: Algorithms and Complexity. Dover, 1998. [145] C.H. Papadimitriou. Computational Complexity. Addison-Wesley, 1994. [146] P.M. Pardalos, editor. Advances in Optimization and Parallel Computing. Elsevier, 1992. [147] P.M. Pardalos and M.G.C. Resende, editors. Handbook of Applied Optimization, New York, NY, 2002. Oxford University Press. [148] P.M. Pardalos and J.B. Rosen. Constrained Global Optimization: Algorithms and Applications, volume 268 of Lecture Notes in Computer Science. Springer-Verlag, 1987. [149] P.M. Pardalos and S.A. Vavasis. Quadratic programming with one negative eigenvalue is NP-hard. Journal of Global Optimization, 1:15–22, 1991. [150] C. Peterson and B. Soderberg. A new method for mapping optimization problems onto neural networks. International Journal of Neural Systems, 1(1):3–22, 1989. [151] L. Pitsoulis. A sparse GRASP for solving the quadratic assignment problem. M.Eng. thesis, University of Florida, 1994. [152] L.S. Pitsoulis and M.G.C. Resende. Greedy randomized adaptive search procedures. In M.G.C. Resende and P.M. Pardalos, editors, Handbook of Applied Optimization, pages 168–183.
Oxford University Press, 2002. [153] N.B. Priyantha, H. Balakrishnan, E.D. Demaine, and S. Teller. Mobile-assisted localization in wireless sensor networks. In INFOCOM 2005: 24th Annual Joint Conference of the IEEE Computer and Communications Societies, volume 1, pages 172–183, 2005. [154] M.G.C. Resende and T.A. Feo. A GRASP for satisfiability. In D.S. Johnson and M.A. Trick, editors, Cliques, Coloring, and Satisfiability: Second DIMACS Implementation Challenge, volume 26, pages 499–520. American Mathematical Society, 1996. [155] M.G.C. Resende and P.M. Pardalos. Handbook of Optimization in Telecommunications. Springer, 2006. [156] M.G.C. Resende and C.C. Ribeiro. A GRASP with path-relinking for private virtual circuit routing. Networks, 41:104–114, 2003. [157] M.G.C. Resende and C.C. Ribeiro. Greedy randomized adaptive search procedures. In F. Glover and G. Kochenberger, editors, Handbook of Metaheuristics, pages 219–249. Kluwer Academic Publishers, 2003. [158] M.G.C. Resende and C.C. Ribeiro. GRASP with path-relinking: Recent advances and applications. In T. Ibaraki, K. Nonobe, and M. Yagiura, editors, Metaheuristics: Progress as Real Problem Solvers, pages 29–63. Springer, 2005. [159] M.G.C. Resende and R.F. Werneck. A hybrid multistart heuristic for the uncapacitated facility location problem. European Journal of Operational Research, 174:54–68, 2006. [160] M.G.C. Resende and R.F. Werneck. A fast swap-based local search procedure for location problems. Annals of Operations Research, published online, DOI: 10.1007/s10479-006-0154-0, 2007. [161] R.T. Rockafellar. Convex Analysis. Princeton University Press, 1970. [162] R.T. Rockafellar and S. Uryasev. Optimization of conditional value-at-risk. The Journal of Risk, 2(3):21–41, 2000. [163] R.T. Rockafellar and S. Uryasev. Conditional value-at-risk for general loss distributions. Journal of Banking and Finance, 26:1443–1471, 2002. [164] S. Salcedo-Sanz, C. Bousoño-Calzón, and A.R.
Figueiras-Vidal. A mixed neural-genetic algorithm for the broadcast scheduling problem. IEEE Transactions on Wireless Communications, 2(2):277–283, 2003. [165] C. Savarese, J. Rabaey, and K. Langendoen. Robust positioning algorithms for distributed ad-hoc wireless sensor networks. In USENIX Annual Technical Conference, 2002. [166] L. Schrage. A more portable FORTRAN random number generator. ACM Transactions on Mathematical Software, 5:132–138, 1979. [167] W.M. Spears and K.A. DeJong. On the virtues of parameterized uniform crossover. In Proceedings of the Fourth International Conference on Genetic Algorithms, 1991. [168] S. Uryasev. Conditional value-at-risk: Optimization algorithms and applications. Financial Engineering News, 14:1–5, 2000. [169] G. Wang and N. Ansari. Optimal broadcast scheduling in packet radio networks using mean field annealing. IEEE Journal on Selected Areas in Communications, 15(2):250–260, 1997. [170] S. Warshall. A theorem on Boolean matrices. Journal of the ACM, 9(1):11–12, 1962. [171] Y. Watanabe, N. Mizuguchi, and Y. Fujii. Solving optimization problems by using a Hopfield neural network and genetic algorithm combination. Systems and Computers in Japan, 29(10):68–73, 1998. [172] J.M. Wilson. A genetic algorithm for the generalised assignment problem. Journal of the Operational Research Society, 48:804–809, 1997. [173] L. Wolsey. Integer Programming. Wiley, 1998. [174] K. Wood. Deterministic network interdiction. Mathematical and Computer Modelling, 17(2):1–18, 1993. [175] J. Yeo, H. Lee, and S. Kim. An efficient broadcast scheduling algorithm for TDMA ad-hoc networks. Computers and Operations Research, 29:1793–1806, 2002. [176] T. Zhou, Z.-Q. Fu, and B.-H. Wang. Epidemic dynamics on complex networks. Progress in Natural Science, 16:452–457, 2006.

BIOGRAPHICAL SKETCH

Clayton W. Commander was born in Ft.
Walton Beach, Florida, on August 23. He was raised in nearby Niceville, Florida, and graduated from Niceville High School. After receiving an Associate of Arts degree from Okaloosa-Walton Community College, Clayton enrolled in the Department of Mathematics at the University of Florida in August of 2001. He graduated summa cum laude and began working for the United States Air Force. While working at Eglin Air Force Base, Clayton entered graduate school in the Department of Industrial and Systems Engineering at the University of Florida and began studying optimization with Professor Panos Pardalos. The happiest day of Clayton's life came on June 18, when he married the love of his life, Leah Susi. He received a master's degree in December 2005 and earned his Ph.D. in August 2007. Clayton and Leah live happily with their two chihuahuas Reina and Isabelle in Niceville, FL.
TABLE OF CONTENTS

page

ACKNOWLEDGMENTS . . 4
LIST OF TABLES . . 8
LIST OF FIGURES . . 10
ABSTRACT . . 13

CHAPTER
1 INTRODUCTION . . 15
2 GLOBAL OPTIMIZATION ISSUES . . 16
2.1 Introduction . . 16
2.2 Idiosyncrasies . . 16
2.3 Fundamental Results . . 17
2.4 Discrete Optimization . . 21
2.5 Computational Complexity . . 22
2.6 Upper and Lower Bounds . . 24
2.7 Algorithms for Optimization Problems . . 28
2.7.1 Exact Methods . . 28
2.7.2 Heuristics . . 31
2.8 Concluding Remarks . . 36
3 JAMMING COMMUNICATION NETWORKS VIA CRITICAL NODE DETECTION . . 38
3.1 Introduction . . 38
3.2 Problem Formulations . . 40
3.2.1 Critical Node Problem . . 40
3.2.2 Cardinality Constrained Problem . . 44
3.3 Heuristics for Critical Node Problems . . 46
3.3.1 CNP Heuristic . . 46
3.3.2 CC-CNP Heuristic . . 49
3.3.3 Genetic Algorithm for the CC-CNP . . 50
3.4 Computational Results . . 53
3.4.1 CNP Results . . 53
3.4.2 CC-CNP Results . . 55
3.5 Concluding Remarks . . 59
4 THE WIRELESS NETWORK JAMMING PROBLEM . . 62
4.1 Introduction . . 62
4.2 Definitions and Assumptions . . 63
4.3 Deterministic Formulations . . 64
4.3.1 Coverage Approach . . 64
4.3.2 Connectivity Formulation . . 65
4.4 Deterministic Setup with Percentile Constraints . . 72
4.4.1 Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR) . . 72
4.4.2 Percentile Constraints and the WNJP . . 76
4.5 Case Studies and Algorithms . . 80
4.5.1 Coverage Formulation . . 80
4.5.2 Connectivity Formulation . . 82
4.6 Concluding Remarks . . 84
5 JAMMING COMMUNICATION NETWORKS UNDER COMPLETE UNCERTAINTY . . 85
5.1 Introduction . . 85
5.2 Descriptions, Assumptions, and Definitions . . 86
5.3 Problem Formulation . . 87
5.4 Upper and Lower Bounds . . 90
5.5 Heuristic for Uncertain Jamming . . 102
5.6 Concluding Remarks . . 105
6 COOPERATIVE COMMUNICATION IN MOBILE AD HOC NETWORKS . . 106
6.1 Introduction . . 106
6.2 Discrete Formulations (CCPMANET-D) . . 109
6.3 Algorithms for CCPMANET-D . . 113
6.3.1 Construction Heuristic . . 113
6.3.2 Local Improvement . . 115
6.3.3 One-Pass Heuristic . . 117
6.3.4 Greedy Randomized Adaptive Search . . 119
6.3.5 Complexity of the Heuristic . . 123
6.3.6 Computational Experiments . . 125
6.4 A Continuous Formulation (CCPMANET-C) . . 131
6.5 Concluding Remarks . . 136
7 THE TDMA MESSAGE SCHEDULING PROBLEM . . 139
7.1 Introduction . . 139
7.2 Problem Description . . 140
7.3 Computational Complexity . . 142
7.4 Heuristics . . 145
7.4.1 Combinatorial Algorithm for TDMA Message Scheduling . . 145
7.4.2 GRASP
. . 150
7.4.3 Sequential Vertex Coloring . . 151
7.4.4 Mean Field Annealing . . 152
7.4.5 Mixed Neural-Genetic Algorithm . . 152
7.5 Computational Results . . 153
7.6 Concluding Remarks . . 159
8 CONCLUSION . . 162
REFERENCES . . 165
BIOGRAPHICAL SKETCH . . 177

LIST OF TABLES

Table page

2-1 Growth rates of several polynomial and nonpolynomial functions. . . 24
3-1 Results of IP model and heuristic on terrorist network data. . . 53
3-2 Results of IP model and heuristic on randomly generated scale-free graphs. . . 56
3-3 Results of IP model and heuristics on terrorist network data. . . 57
3-4 Results of the IP model and genetic algorithm and the combinatorial heuristic on randomly generated scale-free graphs. . . 58
3-5 Comparative results of the genetic algorithm and the combinatorial heuristic when tested on the larger random graphs. Due to the complexity, we were unable to compute the corresponding optimal solutions. . . 59
4-1 Optimal solutions using the coverage formulation with regular and VaR constraints. . . 80
4-2 Optimal solutions using the coverage formulation with regular, VaR, and CVaR constraints. . . 81
5-1 Comparing N_2/N_1 for various values of k. . . 95
5-2 Numerical results are provided for several regions with various required jamming levels. The Upper Bound, Lower Bound, Optimal Grid, and Local Search columns provide the number of jamming devices required for the corresponding region according to the theorems presented and the proposed local search. The Percent Decrease shows the savings when comparing the local search to the optimal grid approach. . . 105
6-1 Comparative results between shortest path solutions and heuristic solutions.
. . 118
6-2 Three instances with different sets of agents on 50-node graphs are given. The value in the UBound column was found using Corollary 1. . . 127
6-3 Three instances with different sets of agents on 75-node graphs are given. The value in the UBound column was found using Corollary 1. . . 128
6-4 A 100-node instance with solutions with radius varying from 1 to 5 units. The value in UBound was found using Corollary 1. . . 129
6-5 Average solution values for GRASP and GRASP with path-relinking on 50-node graphs. . . 130
6-6 Comparative solutions of GRASP and GRASP with path-relinking on 75-node graphs. . . 130
6-7 Results of GRASP and GRASP with path-relinking on 100-node graphs. . . 131
7-1 Comparison of solutions for the benchmark instances from Wang and Ansari. . . 156
7-2 Comparison of optimal and heuristic solutions for graphs with |V| = 50 stations. A * indicates that the solution is optimal, while a † indicates the solution is the best found by Xpress-MP after 3600 s. Solutions are reported as (X, M). . . 157
7-3 Comparison of optimal solver and heuristic solutions for the 75-station networks. . . 158
7-4 Comparison of optimal solver and heuristic solutions for networks with |V| = 100 stations. . . 160

LIST OF FIGURES

Figure page

2-1 Notice that the rounded integer solution is not optimal. . . 22
2-2 Visualization of complexity classes. . . 23
2-3 Pseudo-code for a greedy algorithm which makes change using the minimum number of coins. . . 32
2-4 GRASP for maximization. . . 33
2-5 Generic simulated annealing maximization algorithm. . . 35
2-6 Pseudo-code for generic genetic algorithm. . . 36
3-1 Connectivity index of nodes A, B, C, D is 3. Connectivity index of E, F, G is 2. Connectivity index of H is 0. . . 45
3-2 Heuristic for detecting critical nodes. . . 46
3-3 Local search algorithm for critical node heuristic.
....................48 34Heuristicwithlocalsearchfordetectingcriticalnode s. .................49 35HeuristicfortheCARDINALITYCONSTRAINEDCRITICALNODEPROBLEM. .....50 36Pseudocodeforagenericgeneticalgorithm. .......................51 37Exampleofthecrossoveroperation.Inthiscase, CrossProb =0 : 65 ..........52 38TerroristnetworkcompiledbyKrebs. ..........................54 39Optimalsolutionwhen k =20 ..............................55 310Optimalsolutionwhen L =4 ..............................57 41ConnectivityIndexofnodesA,B,C,Dis3.ConnectivityI ndexofE,F,Gis2. ConnectivityIndexofHis0. ...............................66 42GraphicalrepresentationofVarandCVaR. ........................73 43Casestudy1.Theplacementofjammersisshownwhenthepr oblemissolvedusing theoriginalandVaRconstraints. .............................81 44Casestudy1continued.Theplacementofjammersisshown whentheproblemis solvedusingVaRandCVaRconstraints. .........................82 45CaseStudy2:Originalgraph. ...............................83 46Acomparisonofthepercentileconstrainedsolutions.I nbothcases,thetriangles representtheplacementofjammingdevices. .......................83 10 PAGE 11 51Uniformgridwithjammingdevices ...........................88 52Theleastcoveredpointisshowninthelowerleftgridcel l. ...............89 53SquareDecomposition ..................................89 54EquivalentPoints .....................................90 55Cumulativeemanationofjammingdevices. .......................91 56IntegralLowerBound. ..................................92 57IntegralUpperBound. ..................................97 58Comparisonofthelowerandupperbounds. .......................100 59Pseudocodefortherandomizedlocalsearchforuncerta injamming. ..........103 510Exampleofheuristicversusuniformplacement. .....................104 61Pseudocodefortheshortestpathconstructionheuris tic. ................114 62PseudocodefortheHillClimbingintensicationproce dure. ..............116 63Pseudocodefortheonepassheuristic. 
..........................117 64GRASPformaximization .................................119 65GreedyrandomizedconstructorforCCPMANETD. ...................121 66LocalsearchforCCPMANETD. ..............................122 67Pathrelinkingsubroutine. .................................124 68GRASPwithpathrelinkingformaximization. ......................126 69EvolutionofGRASP+PRsolutionvalueson50nodegraphsa sthecommunication radiusincreasesfrom1to5units. ............................132 610EvolutionofGRASP+PRsolutionvalueson75nodegraphs asthecommunication radiusincreasesfrom1to5units. ............................133 611EvolutionofGRASP+PRsolutionvalueson100nodegraph sasthecommunication radiusincreasesfrom1to5units. ............................134 612Theheavysidefunction H 1 ................................135 613Alternateobjectivefunction H 2 .............................136 614Secondalternateobjectivefunction H 3 .........................137 11 PAGE 12 71CounterexampletotheclaimofWang&Ansarithatoptimal graphcoloringcanbe foundbyrecursivelyndingamaximumindependentsetandre movingitfromthe graph. ...........................................143 72Constructionofgraph G 0 from G ............................144 73PseudocodeoftheproposedheuristicforMSPTDMA. .................146 74Greedyrandomizedheuristicforframelengthminimizat ion. ..............147 75Throughputmaximizationpseudocode. .........................149 76BenchmarkTDMAtestcases. ..............................155 77ExampleGRASPbroadcastschedulesforthenetworksgive ninFigure76:(a)15 stationnetwork,(b)30stationnetwork,(c)40stationnetw ork. 
.............161 12 PAGE 13 AbstractofDissertationPresentedtotheGraduateSchool oftheUniversityofFloridainPartialFulllmentofthe RequirementsfortheDegreeofDoctorofPhilosophy OPTIMIZATIONPROBLEMSINTELECOMMUNICATIONS WITHMILITARYAPPLICATIONS By ClaytonWarrenCommander August2007 Chair:PanagoteM.PardalosMajor:IndustrialandSystemsEngineering Inrecentdecades,optimizationproblemsintelecommunica tionsystemshavebeenthefocus ofanintensiveamountofresearch.Theseproblemsareimpor tantforseveralreasonsincluding speedandqualityofcommunicationamongothers.Inthisdis sertation,wepresentseveral problemsarisingintelecommunicationnetworksinmilitar yapplications.Severalproblems weconsiderinvolvewirelesscommunicationnetworks.Thes enetworksareanextraordinarily convenientmethodofcommunication.However,alongwithth isconveniencecomesamyriadof complicatedproblemsthatmustbeaddressedtopreservethe attractivefeaturesofthenetworks. Furthermore,problemsarisinginadversarialenvironment sdifferfromthoseinconventional settings,inthattimeisusuallyacriticallyconstrainedf actor.Thisistroublesomebecausemany oftheproblemsaredifculttosolveandwouldrequireatrem endousamountoftimetocompute theoptimalsolution.Howeverinabattlespaceenvironment ,timespentcomputingasolution andnotghtingtheenemyleadstoapotentiallossofmaterie landlives.Thusfortheproblems studied,wewillfocusagreatdealofattentionondesigning heuristicalgorithmswhichare capableofcomputingnearoptimalsolutionsveryefcientl y. 
We will consider two classes of problems involving telecommunication networks. The first class focuses on denying communication on a network and destroying its functionality. The other class has the objective of guaranteeing communication on a network. At first glance, these two sets appear to be polar opposites of one another. However, with any emerging technology, studies which assess both vulnerabilities and capabilities must be performed in order to achieve a system which will not fail in its intended operational environment. Our goal is to show how these problems can be formulated and solved using tools from global and combinatorial optimization. For the problems considered, we examine the computational complexity and several mathematical programming formulations. Then we present several algorithms and examine extensive computational results comparing their effectiveness. Finally, we conclude by summarizing our work and indicating future directions of research.

CHAPTER 1
INTRODUCTION

Optimization problems in telecommunication systems have been the focus of an intensive amount of research in recent decades [135, 155]. These problems are important for several reasons including speed and quality of communication and cost-related issues.
In this dissertation, we present several problems arising in military applications involving telecommunication networks. Several problems we consider involve wireless networks. These networks are an extraordinarily convenient method of communication; however, along with this convenience comes a myriad of complicated problems which must be addressed in order to preserve the attractive features of the networks. Furthermore, problems arising in adversarial environments differ from conventional problems in that time is usually a critically constrained factor. This presents somewhat of a problem because many of the problems are extremely difficult to solve and would require a tremendous amount of time to compute the optimal solution. However, in a battlespace environment, time spent computing a solution and not fighting the enemy leads to a potential loss of materiel and lives. Thus, throughout this dissertation we will focus a great deal of attention on designing heuristic algorithms which are capable of computing near-optimal solutions very efficiently.

The remaining chapters of this dissertation present the results of my efforts to model and solve many important telecommunication problems facing the military in the ever-evolving global war on terrorism. We will consider two classes of problems involving telecommunication networks. The first class (Chapters 3, 4, and 5) focuses on denying communication on a network and destroying its functionality. Conversely, the problems in Chapter 6 and Chapter 7 have the objective of guaranteeing communication on a network. At first glance, these two sets appear to be polar opposites of one another. However, with any emerging technology, studies which assess both vulnerabilities and capabilities must be performed to achieve a system which will not fail in its intended operational environment. Our goal is to show how these problems can be formulated and solved using tools from global and combinatorial optimization [74, 106, 127].
CHAPTER 2
GLOBAL OPTIMIZATION ISSUES

2.1 Introduction

Over the past 60 years, Operations Research (OR) has emerged as one of the most exciting, fast-paced, and interdisciplinary fields of mathematics. Since its rebirth during World War II, OR has turned into a fascinating subject which crosses all divides, from real analysis, probability, statistics, and economics to theoretical computer science and biology, in an attempt to solve some of the most computationally difficult problems known to exist.

As mentioned in [98], OR was first formalized during World War II when supplies were limited and needed to be allocated to the allied forces overseas. OR teams were fundamental in developing methods for using radar, which was crucial in the allies winning the air war. Later, researchers developed methods for optimally transporting convoys and derived methods for tracking submarines, thus leading to success in the Pacific theater. The original name of the field was Military Operations Research; however, due to the success of the methods derived during the war, scientists and engineers began applying these techniques to other problems in mathematics and industrial engineering. The word military was subsequently dropped because of this.

Since the early 1950's, researchers have been expanding the techniques and methods of OR. Reminiscent of the time of Gauss and Euler, scientists are making contributions at incredible rates in fields ranging from facility location problems to the mapping of the human genome. With the advent of the digital computer, algorithms can now be implemented, providing the capability to solve problems never before thought tractable. In this chapter we present the foundation of global optimization. This will provide the necessary tools for us as we investigate the problems presented in the succeeding chapters.

2.2 Idiosyncrasies

In this subsection, we introduce the symbols and notations we will employ most frequently throughout this dissertation. Denote a graph G = (V, E) as a pair consisting of a set of vertices V and a set of edges E. Let the map w : E →
ℝ be a weight function defined on the set of edges. We will denote an edge-weighted graph as a pair (G, w). Thus we can easily generalize an unweighted graph G = (V, E) as an edge-weighted graph (G, w) by defining the weight function as

    w_ij := 1, if (i, j) ∈ E;   0, if (i, j) ∉ E.   (2-1)

We use the symbol b := a to mean "the expression a defines the (new) symbol b," in the manner of King [115]. Of course, this could be conveniently extended so that a statement like (1 − ε)/2 := 7 means "define the symbol ε so that (1 − ε)/2 = 7 holds" [114]. We will employ the typical symbol S^c to denote the complement of the set S; further, let A ∖ B denote the set difference A ∩ B^c. Agree to let the expression x ← y mean that the value of the variable y is assigned to the variable x. To denote the cardinality of a set S, we use |S|. Finally, we will use italics for emphasis and SMALL CAPS for formal problem names. Any other locally used terms and symbols will be defined in the sections in which they appear.

2.3 Fundamental Results

In global optimization, the objective is to determine the maximum or minimum point attained by an objective function defined over a set. In general, an optimization problem has the form

    minimize or maximize f(x) subject to x ∈ S,

where S ⊆ ℝ^n is the feasible region and f(x) is a real-valued function defined on S; that is, f : S ↦ ℝ.

Definition 1. An optimization problem with feasible region S ⊆ ℝ^n is said to be infeasible if S = ∅.

Throughout this dissertation, we will rely heavily on the notion of a neighborhood, which is defined next.

Definition 2. For a given optimization problem on a set S ⊆ ℝ^n, a neighborhood is a mapping N : S ↦ 2^S defined for each instance.
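As a concrete illustration of Definition 2 (the data below are hypothetical, chosen only for this sketch), take S = {0, 1}^n with the "flip" neighborhood, which maps a solution to all solutions at Hamming distance one:

```python
# Sketch of Definition 2 for S = {0,1}^n: the "flip" neighborhood
# N(x) maps x to every solution at Hamming distance 1 from x.
def flip_neighborhood(x):
    """Return N(x) for a binary solution x given as a tuple."""
    return [x[:i] + (1 - x[i],) + x[i + 1:] for i in range(len(x))]

print(flip_neighborhood((0, 1, 0)))
# [(1, 1, 0), (0, 0, 0), (0, 1, 1)]
```

Neighborhoods of this kind reappear in the local search heuristics of Section 2.7.2.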
In subsequent chapters we will see that cleverly defining a neighborhood for a particular problem can greatly increase the effectiveness of heuristics. For example, if S = ℝ^n, then the set of points that fall within some Euclidean distance provides a natural choice for the neighborhood [144]. If ‖·‖ represents the Euclidean norm, then a point x* ∈ S is said to be a local minimum point of f if f(x*) ≤ f(x) for all points x ∈ S such that ‖x − x*‖ ≤ ε, for some ε ≥ 0. In other words, given ε > 0, define the neighborhood of x* as

    N_ε(x*) := { x : x ∈ S and ‖x − x*‖ ≤ ε }.   (2-2)

Then x* is a local minimum if f(x*) ≤ f(x) for all x ∈ N_ε(x*). A point x* is said to be a global minimum if f(x*) ≤ f(x) for all x ∈ S. Recall that for a continuous function f, f(x_i) → f(x*) as i → ∞ whenever x_i → x* as i → ∞. This leads us to a fundamental result by Weierstrass, which we state without proof [107].

Theorem 1. If S is a nonempty compact set in ℝ^n, and f(x) is a continuous function defined on S, then f(x) has at least one global minimum (maximum) point in S.

We can now move on and examine some properties of local and global minima. Recall from calculus that if the function f is continuously differentiable in a neighborhood of a point x ∈ S and d ∈ ℝ^n, then d^T ∇f(x) is said to be the directional derivative of f at x in the direction d. If we fix x = x̄ and d, then the function h(λ) := f(x̄ + λd), for λ ∈ ℝ₊, describes f along the ray { x = x̄ + λd, λ ≥ 0 }. If we evaluate the derivative of h with respect to λ at the point λ = 0 using the first-order Taylor expansion of h, we see that this is precisely d^T ∇f(x̄). Thus, d^T ∇f(x̄) < 0 implies that there exists δ > 0 such that f(x̄ + λd) < f(x̄) for all λ ∈ (0, δ); such a d is called a descent direction.

In a geometric sense, a set S ⊆ ℝ^n is convex if for any two points in S, the line segment joining these two points is wholly contained in S [148].

Definition 8. Given a convex set S ⊆ ℝ^n, the function f : S ↦
ℝ is said to be a convex function if for any x₁, x₂ ∈ S and λ ∈ ℝ, 0 ≤ λ ≤ 1, the following condition holds:

    f(λx₁ + (1 − λ)x₂) ≤ λf(x₁) + (1 − λ)f(x₂).

The function f is said to be a concave function if and only if −f is convex. In the following theorem, we prove that for optimization problems where f is convex and S is a convex set, critical points are always globally optimal solutions.

Theorem 3. Let f : S ↦ ℝ be a convex function, where S ⊆ ℝ^n is a convex set. Then every local minimum of f is also a global minimum.

Proof. Let x* be a local minimum point and assume, for the sake of contradiction, that there exists another point x̄ ∈ S such that f(x̄) < f(x*). Then for any λ ∈ (0, 1), convexity of f implies f(λx̄ + (1 − λ)x*) ≤ λf(x̄) + (1 − λ)f(x*) < f(x*). Since S is convex, the point λx̄ + (1 − λ)x* belongs to S, and as λ → 0 it lies in every neighborhood N_ε(x*), contradicting the local minimality of x*.

Next, consider minimizing a concave function over a polytope. Every point x of a polytope can be written as a convex combination x = Σ_{i=1}^N λ_i v_i of its extreme points v₁, ..., v_N, where λ_i ≥ 0 and Σ_{i=1}^N λ_i = 1.

Theorem 4. A concave function f defined on a polytope attains its minimum at an extreme point of the polytope.

Proof. With x = Σ_{i=1}^N λ_i v_i as above, since f is concave, we have

    f(x) ≥ Σ_{i=1}^N λ_i f(v_i) ≥ Σ_{i=1}^N λ_i min{ f(v_i) : i = 1, ..., N }   (2-3)
         = min{ f(v_i) : i = 1, ..., N }.   (2-4)

Recall that linear functions are both convex and concave. Therefore, if we are considering a linear programming problem, i.e., that of minimizing a continuous linear function over a polytope, both Theorem 3, providing optimality of local minima for convex functions, and Theorem 4, providing extreme point optimality of concave functions, apply. Thus, for this class of problems we can restrict the search for the global solution by examining only the extreme points of the polytope. In the absence of convexity, however, a global minimum point can occur at a point other than an extreme point.

In this dissertation, we focus on problems of this type. In particular, the problems we will later investigate contain many locally optimal solutions which differ from the global solution. Also, until now we have focused on theorems for continuous functions. However, several problems we will encounter have discrete variables. These problems are called combinatorial optimization problems. The next section contains some basic results regarding combinatorial problems which we will later use.
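The defining inequality of Definition 8 is easy to spot-check numerically. The following sketch (illustrative only, using f(x) = x²) samples random points and convex combinations:

```python
import random

# Spot-check the convexity inequality of Definition 8 for f(x) = x**2.
f = lambda x: x * x
random.seed(0)
for _ in range(1000):
    x1, x2 = random.uniform(-10, 10), random.uniform(-10, 10)
    lam = random.random()                      # lambda in [0, 1]
    lhs = f(lam * x1 + (1 - lam) * x2)
    rhs = lam * f(x1) + (1 - lam) * f(x2)
    assert lhs <= rhs + 1e-9                   # allow floating-point slack
print("convexity inequality held on 1000 random samples")
```

Of course, such sampling can only refute convexity, never prove it; the proof of Theorem 3 relies on the inequality holding everywhere.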
2.4 Discrete Optimization

In certain applications, it is necessary to restrict the values of the decision variables of a problem to be integer valued. Such problems are referred to as integer programming problems. Sometimes, it is convenient to include integer variables in a problem when one is attempting to model a situation that has two possible values. In this case, binary variables that take the value 0 or 1 are used.

Integer programming problems present unique challenges in that the techniques and theorems for linear programming problems as described above do not necessarily apply. For example, consider the polytope in Figure 2-1. Notice that the integer points do not lie at the extreme points of the polytope. We see then that the result from Theorem 4 does not hold. Another common misconception is that the integer optimal solution can be found by rounding the linear programming solution to the nearest integer. We see by examining the figure that this does not work either. Notice that the integer point nearest the linear programming optimal solution falls outside the feasible region of the polytope.

Figure 2-1: Notice that the rounded integer solution is not optimal.

Clearly, we need more advanced methods for solving such problems. We will look at a variety of exact and heuristic methods in Section 2.7. Now, we provide an introduction to computational complexity. Complexity theory lies at the heart of global optimization and provides tools for empirically determining the level of difficulty of a given problem as well as the effectiveness of an algorithm. Later we will confirm our suspicions about the difficulty of integer programs and nonconvex continuous programming problems.

2.5 Computational Complexity

In this section, we develop a means by which we can classify a problem as either being "easy" or "hard." Then, for the so-called hard problems, we look at ways to answer the question: how hard is hard?

Figure 2-2: Visualization of complexity classes.

An algorithm is said to be a polynomial-time algorithm if its number of elementary operations, i.e., its running time on a computer, is in the worst case bounded above by a
polynomial in the size of the input [107]. For instance, an algorithm is said to be O(I^p) if the polynomial which bounds the running time is of order p in the size of the input data I. An algorithm is said to be an exponential-time algorithm if its running time is not bounded by a polynomial in the length of the input [6].

When discussing problems in OR, we split the collection of all problems into two classes, as visualized in Figure 2-2. Those problems which can be solved optimally by a polynomial-time algorithm are said to belong to the class P. The other complexity class contains those problems which can be solved by a nondeterministic algorithm in polynomial time. This class is called NP. A problem in NP is one for which it is easy to verify the correctness of a solution but which may be very hard to solve, whereas a problem belonging to P is simply easy to solve.

Among the problems in NP, those which are the most difficult to solve are said to be NP-complete. A problem Π₁ is said to be polynomially transformable to a problem Π₂ if a polynomial-time algorithm for Π₂ would imply a polynomial-time algorithm for Π₁. The NP-complete problems are special in that every problem in NP can be polynomially transformed to each of them. Thus, since P ⊆ NP, it follows that if one could design a polynomial-time algorithm for a single NP-complete problem, then every problem in NP could be solved with a polynomial-time algorithm, and thus P would equal NP [79]. However, despite an incredible amount of research and investigation, the question as to whether P = NP remains the single greatest unsolved problem in theoretical computer science [145]. In fact, the Clay Mathematics Institute has named this problem as one of its seven millennium prize problems and is offering $1 million to anyone who presents an answer to the question: does P = NP? [109].

Table 2-1: Growth rates of several polynomial and non-polynomial functions.

  n        n^1     n^2     n^4      2^n             n!
  10       10^1    10^2    10^4     ≈ 10^3          3.6 × 10^6
  100      10^2    10^4    10^8     1.27 × 10^30    9.33 × 10^157
  1000     10^3    10^6    10^12    1.07 × 10^301   4.02 × 10^2567
  10,000   10^4    10^8    10^16    0.99 × 10^3010  2.85 × 10^35659

Definition 9.
An optimization problem Π is said to be NP-hard if there exists an NP-complete problem which is polynomially transformable to Π.

Throughout this dissertation, we are going to focus on problems that are NP-hard and NP-complete. The next reasonable question one asks of NP problems is what the implication is for solving them. That is, how does being in NP really complicate the computational tractability of a problem? Table 2-1 provides several examples of the growth rates of some polynomial and exponential functions [6]. Notice how quickly the exponential functions grow. This is one reason why polynomial-time algorithms are preferred over exponential-time algorithms. Most discrete optimization problems turn out to be either NP-hard or NP-complete, even if they are linear [149].

2.6 Upper and Lower Bounds

When attempting to solve integer programs (IPs), we are faced with the problem of how to prove that a given point is an optimal solution [173]. This problem arises since local optimality does not imply global optimality for IPs. Oftentimes, being able to derive upper and lower bounds on the optimal solution is helpful to identify good approximate solutions and narrow the search for the optimal solution. This topic will be studied extensively in Chapter 5. Now we introduce some basic properties of bounds for integer programming problems.

Consider the IP given below and assume that the point x* is an optimal solution.

    minimize  cx   (2-5)
    subject to  x ∈ S   (2-6)
                x ∈ ℤ^n.   (2-7)

Then in order to solve this IP, we need to determine a lower bound x̲ such that c(x̲) ≤ c(x*) and an upper bound x̄ where c(x*) ≤ c(x̄), such that eventually

    c(x̲) = c(x*) = c(x̄).   (2-8)

In order to find these bounds in practice, we need an algorithm that can compute a decreasing sequence of upper bounds

    c(x̄₁) > c(x̄₂) > ⋯ > c(x̄_s) ≥ c(x*),   (2-9)

and a corresponding increasing sequence of lower bounds

    c(x̲₁) < c(x̲₂) < ⋯ < c(x̲_t) ≤ c(x*).   (2-10)

The most common way of obtaining such bounds is by relaxation.

Definition 10. Given an integer program (IP) z := max{ c(x) : x ∈ S ∩ ℤ^n }, a problem (RP) z_R := max{ f(x) : x ∈ T } is a relaxation of (IP) if S ∩ ℤ^n ⊆ T and f(x) ≥ c(x) for all x ∈ S ∩ ℤ^n.

Lemma 1. If (RP) is a relaxation of (IP), then z_R ≥ z.

Proof. Let x* ∈ S ∩ ℤ^n be an optimal solution for (IP). Then we have x* ∈ S ∩ ℤ^n ⊆ T, which implies c(x*) ≤ f(x*). Furthermore, since x* ∈ T, f(x*) is a lower bound on z_R. That is, z = c(x*) ≤ f(x*) ≤ z_R, and we have the lemma.
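Both Lemma 1 and the rounding pitfall of Section 2.4 can be observed on one small instance. The data below are hypothetical, chosen only for illustration: the LP relaxation value bounds the integer optimum from above, while rounding the relaxed solution is not even feasible.

```python
from fractions import Fraction as F

# Hypothetical integer program (maximization, illustration only):
#   max  z = x + (16/25)*y
#   s.t. 50x + 31y <= 250,   3x - 2y >= -4,   x, y >= 0 and integer
def obj(x, y):
    return x + F(16, 25) * y

def feasible(x, y):
    return 50*x + 31*y <= 250 and 3*x - 2*y >= -4 and x >= 0 and y >= 0

# The LP relaxation optimum lies where both constraints hold with equality:
x_lp = F(376, 193)
y_lp = (3 * x_lp + 4) / 2
assert 50*x_lp + 31*y_lp == 250 and 3*x_lp - 2*y_lp == -4
z_lp = obj(x_lp, y_lp)                       # an upper bound, by Lemma 1

# Rounding the relaxed optimum (~1.95, ~4.92) to (2, 5) is infeasible:
print(feasible(round(x_lp), round(y_lp)))    # False

# Brute-force the integer optimum over a bounding box of the feasible set:
z_ip, best = max((obj(x, y), (x, y))
                 for x in range(6) for y in range(9) if feasible(x, y))
print(best, float(z_ip), float(z_lp))        # integer optimum (5, 0), z_ip = 5.0
assert z_ip <= z_lp                          # Lemma 1 in action
```

Note that the integer optimum (5, 0) is nowhere near the rounded LP solution, exactly the behavior depicted in Figure 2-1.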
The problem of formulating useful relaxations is an important problem in its own right which has been studied since the founding of OR [53]. Among the most common relaxations are the linear programming relaxation and the Lagrangian relaxation. We now provide a brief introduction to these.

Definition 11. Given an integer program (IP) z := max{ cx : x ∈ S ∩ ℤ^n }, S ⊆ ℝ^n, the linear programming relaxation of (IP) is the linear program (LPR) z_LP := max{ cx : x ∈ S }.

Proposition 1. The linear programming problem (LPR) is a relaxation of (IP).

Proof. The proof is trivial, as S ∩ ℤ^n ⊆ S and the objective function of (LPR) remains the same as in (IP). Thus, by Lemma 1, we have the result.

We see then that all linear programming relaxations provide bounds on the original integer program. Further, the following lemma shows that relaxations can be helpful for identifying cases in which the original integer program is infeasible [173].

Lemma 2. Given an integer program (IP) z_IP := max{ cx : x ∈ S ∩ ℤ^n } and its corresponding LP relaxation (LPR) z_LP := max{ cx : x ∈ S }, the following statements hold.
(i) If (LPR) is infeasible, then (IP) is also infeasible.
(ii) If x* is an optimal solution to (LPR) such that x* ∈ S ∩ ℤ^n and f(x*) = c(x*), then x* is an optimal solution for (IP).

Proof. (i) This follows from the fact that S ∩ ℤ^n ⊆ S. Since (LPR) is infeasible, we have that S = ∅, thus implying S ∩ ℤ^n = ∅.
(ii) Since (LPR) is a relaxation, then by Definition 10, x* ∈ S ∩ ℤ^n implies z_IP ≥ c(x*) = f(x*) = z_LP. However, by definition z_IP ≤ z_LP, implying c(x*) = z_IP = z_LP.

Another common relaxation used to tackle hard integer programs is the Lagrangian relaxation. This method was first introduced by Held and Karp in [96, 97] in a formulation for the TRAVELING SALESMAN PROBLEM [54]. Lagrangian relaxation relaxes the constraints by adding them to the objective function with an associated penalty. Consider the following optimization problem:

    (P)   z = max  cx   (2-12)
          s.t.  Ax ≤ b   (2-13)
                x ∈ X.   (2-14)

Then the corresponding Lagrangian relaxation is formulated as follows:

    (P(λ))   L(λ) = max  cx + λ(b − Ax)   (2-15)
             s.t.  x ∈ X.   (2-16)

With this, we can formulate the following lemma regarding Lagrangian relaxation.
Lemma 3. For any λ ≥ 0, (P(λ)) is a relaxation of (P).

Proof. In order for (P(λ)) to be a relaxation of (P), as defined by Definition 10, we must show the following two conditions: (i) the feasible region of the original problem is a subset of that of the relaxed problem, and (ii) for all vectors λ ≥ 0, L(λ) ≥ z.

Condition (i) follows trivially. To show condition (ii), let x* ∈ X be an optimal solution for (P). Then x* is clearly feasible for (P(λ)). Further, since x* is feasible for (P), b − Ax* ≥ 0. Therefore cx* ≤ cx* + λ(b − Ax*) for all real vectors λ ≥ 0.

From this lemma, we see that L(λ), for any λ ≥ 0, provides an upper bound on the optimal value of (P). The problem of finding the best, or tightest, bound, say L*, is known as the LAGRANGIAN MULTIPLIER PROBLEM [124] and is given as L* := min{ L(λ) : λ ≥ 0 }. With this, the following lemma, which is stated without proof, holds.

Lemma 4. For all real vectors λ ≥ 0, cx* = z ≤ L* ≤ L(λ). Furthermore, if L(λ*) = L* = z = cx*, then x* is optimal for the original problem (P), and λ* is optimal for the LAGRANGIAN MULTIPLIER PROBLEM.

Linear programming and Lagrangian relaxations are helpful for showing when a solution is close to (and in some cases equal to) the optimal solution. They are also useful in branch and bound algorithms, which we will introduce in the following subsection.

2.7 Algorithms for Optimization Problems

Consider a discrete optimization problem that has n binary decision variables. Since there are a finite number of integer feasible solutions, in theory one could enumerate all possible solutions. However, to do this would require 2^n function evaluations [31]. This is impractical, since if n > 1000, then with the present computers available, the computation time required for this enumeration would take millions of years. Clearly we need more efficient algorithms to solve these problems.
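Before turning to algorithms, the bound of Lemma 4 is easy to verify computationally. The sketch below uses a hypothetical 0-1 knapsack instance: relaxing the capacity constraint with multiplier λ ≥ 0 gives a dual function L(λ) that bounds the true optimum z from above for every λ.

```python
from itertools import product

# Hypothetical 0-1 knapsack:  max 10*x1 + 7*x2 + 4*x3
#                             s.t. 5*x1 + 4*x2 + 3*x3 <= 8,  x binary.
c, a, b = [10, 7, 4], [5, 4, 3], 8

# Exact optimum z by enumerating all 2^n binary vectors:
z = max(sum(ci * xi for ci, xi in zip(c, x))
        for x in product((0, 1), repeat=3)
        if sum(ai * xi for ai, xi in zip(a, x)) <= b)

# Lagrangian dual: L(lam) = max_x cx + lam*(b - ax) over x in {0,1}^n,
# which separates into lam*b + sum_i max(0, c_i - lam*a_i).
def L(lam):
    return lam * b + sum(max(0, ci - lam * ai) for ci, ai in zip(c, a))

trial = (0.0, 0.5, 1.0, 1.5, 2.0, 4.0)
assert all(L(lam) >= z for lam in trial)   # Lemma 4: every L(lam) bounds z
print(z, min(L(lam) for lam in trial))     # 14 15.5
```

Here z = 14, while the tightest trial multiplier gives L(1.5) = 15.5; closing such gaps is exactly the LAGRANGIAN MULTIPLIER PROBLEM.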
Algorithms for optimization problems are broken up into two categories: exact methods and heuristics. Exact methods guarantee that the termination of the algorithm will result in the optimal solution, provided one exists. Heuristics, on the other hand, have no guarantee of optimality but usually find high-quality solutions much faster than the exact methods. In fact, most nontrivial instances of problems in NP cannot be solved by exact algorithms. Thus, we need efficient heuristics to find near-optimal solutions to real-world instances. In the following subsections, we provide an overview of several exact and heuristic methods that we will use for solving the problems appearing in later chapters.

2.7.1 Exact Methods

Linear Programming Techniques. We begin our discussion of exact algorithms with the simplex method for linear programming [52]. Consider an instance of a linear programming
However,despitethetheoreticalexponentialworstcasec omplexityofthesimplexmethod,it isveryefcientinpracticeandiseasytoimplement.Thisis nottosaythatlinearprogramsare NP hard .Infact,alllinearprogramsarein P .Theclassofalgorithmsknownas interiorpoint methods areabletosolvelinearprogrammingproblemsinpolynomial time.Therstefcient interiorpointalgorithmwasproposedin1984byKarmarkar[ 111 ].Therstimplementationof Karmarkar'salgorithmwasreportedin1991byAdler,Karmar kar,Resende,andVeigain[ 4 ] and[ 5 ].Excellentreferencetextsonlinearprogramminginclude theworkofChvatal[ 34 ]and Bazaraaetal.[ 15 ]. IntegerProgrammingAlgorithms .Branchandbound(B&B)algorithmsarethemost commonclassofalgorithmsusedforsolvingdiscreteoptimi zationproblems[ 61 ].B&Bmethods are implicitenumeration techniquesbasedontheideaof divideandconquer [ 90 ].InaB&B algorithm,thesetoffeasiblesolutionsisdecomposedinto smallerandsmallersetsuntilthe optimalsolutioniseventuallyreached. 29 PAGE 30 Considertheintegerprogrammingproblem z IP :=max f cx : x 2 S \ Z n R n g .To applytheB&Balgorithm,thelinearprogrammingrelaxation z LP :=max f cx : x 2 S g issolved, generallyresultinginanonintegralsolution.Thissoluti onistakenasinitialupperboundonthe optimalsolution.SupposethatintheLPrelaxation,someva riable x i = x i 62 Z .Thenonewayto branchistodividethefeasibleregion S intotwosubdomains,namely S 1 := S \f x : x i b x i cg (220) S 2 := S \f x : x i d x i eg (221) Noticethat S 1 [ S 2 = ; and S 1 \ S 2 = ; .Thetwolinearprograms z LP 1 :=max f cx : x 2 S 1 g and z LP 2 :=max f cx : x 2 S 2 g arenowsolvedandthesmallestobjectivevalueistakenasth enew upperbound.Inessence,a searchtree isformedbytherepetitionofthedecomposition/bounding processappliedtoeachsubproblem.However,duetoapree stablishedlowerbound,manyof theresultingsubproblemsare pruned fromthesearchtreeandnotconsidered.Thus,anoptimal permutationisconstructediteratively,oneelementatati me[ 151 ].Theprocessisrepeatedon variableswhicharenonintegraluntileventuallytheinteg 
eroptimalsolution z IP isreached[ 173 ]. Thoughbranchandboundmethodsarethemostcommonlyusedal gorithmsfordiscrete optimizationproblems,theyarenottheonlytechniquesava ilable.Thebranchandcutmethod isahybridofbranchandboundwhichfallsintotheclassofso called cuttingplane techniques [ 54 107 173 ].IntroducedbyGomoryin[ 88 ],cuttingplanemethodssolveIPsbyintroducing constraintswhichcutsoffthenonintegersolutionfoundby solvingtheLPrelaxationwithout removinganyfeasibleintegersolutions.Finally,columng eneration,orsocalledbranchandprice algorithmsareeffectivedecompositionmethodsandarecom monlyusedforsolvinglargescale integerprogrammingproblems[ 55 57 58 ].Thedesignofsoftwarepackageswhichefciently executeoptimalintegerprogrammingalgorithmsisaglobal enterprize.Todaythemostefcient andwidelyusedcommercialIPsolversareCPLEX R r byILOG,Inc.[ 50 ]andXpressMP R r by DashOptimizationInc.[ 108 ].Inlaterchapters,wewillapplybranchandboundtechniqu esto severalproblemsinordertondtheoptimalsolutionsandve rifytheeffectivenessofheuristics. 30 PAGE 31 2.7.2Heuristics Despitetheguaranteeofeventuallyreachingtheoptimalso lution,B&Bmethodsare inefcientonlargeproblems.Therefore,wemustlookforef cientwaysofproducinghigh qualitysolutions.Heuristics,orsuboptimalalgorithmsp rovidethisoutlet.Thetermheuristicis derivedfromtheGreekword heuriskein ( ),meaningtondordiscover.Heuristics areapproximationalgorithmsandaretheonlyalternativet ondinggoodfeasiblesolutions whenproblemsaretoodifculttoapplybranchandboundmeth ods.Thestudyofheuristicsis vast,andhaslettothecreationofalgorithmicmethodswhic harecapableofproducingexcellent solutionsinseconds,forproblemsinwhichaB&Borotheropt imalalgorithmwouldrequire yearstosolve. 
Inthefollowingparagraphs,weprovideabriefintroductio ntoseveralheuristicswhichwe willanalyzeinlaterchaptersofthisdissertation.Webegi nwithathesimplesttypeofheuristic knownasthe greedyalgorithm GreedyHeuristicsandLocalSearch .Agreedyalgorithmisalocalsearchmetaheuristic whichgetsitsnamefromthemyopicwayinwhichitcreatescan didatesolutions[ 123 ].Ateach step,thegreedymethodmakeswhateverchoiceseemsbestatt hatparticularmomentintime. Onceadecisionismade,itispermanentandcannotbelaterch anged.Therefore,onemustensure thatacandidateelementisfeasiblebeforeaddingittothei ncumbentsolution[ 105 ]. Anexampleofagreedyalgorithmisasfollows.Supposeacash ierowesacustomer $0.42cents.Thecashiercanusethegreedymethodtodetermi netheminimumnumberof coinsrequiredforthistransaction.Pseudocodeforthism ethodisprovidedinFigure 23 .The algorithmtakesasinput n ,theamountofchangedue,inthiscase, n =$0 : 42 .Tobeginwith, onequarterisselectedbringingthebalanceto$0.17.Next, onedimeischosenandtheremainder is$0.07.Byselectingonenickelandtwopennies,theproble missolved.Weseethatgreed ismanifestedinthisexampleasthealgorithmselectsthehi ghestvaluedcoinsrst.Forthis problem,thegreedyalgorithmcomputestheoptimalsolutio nfromthe 31 uniquecombinationsof coinswhichaddupto$0.42cents. 31 PAGE 32 procedure GreedyChangeMaker ( n ) 1 C f 1 ; 5 ; 10 ; 25 g 2 Change ; 3 Sum 0 / sumofcoinsin Change / 4 while Sum 6 = n do 5 x max f c 2 C : Sum + c n g 6 if 69 x then 7 return:NOSOLUTION 8 else 9 Change Change [ x 10 sum sum + x 11 endif 12 endwhile 13 return Change endprocedure GreedyChangeMaker Figure23:Pseudocodeforagreedyalgorithmwhichmakesc hangeusingtheminimumnumber ofcoins. 
Other problems for which the greedy method finds the optimal solution include the MINIMUM SPANNING TREE problem, where Kruskal's algorithm [122] finds a minimum-weight spanning tree of a given graph [6]. Despite the performance of the greedy algorithm on the above example, greedy methods almost always fall short of the optimal solution when applied to NP-complete problems. This is because greedy methods select a local optimum from the neighborhood of the current solution at each step with the hope that, in the end, the global optimum is found. However, as we learned earlier in the chapter, this is not necessarily the case.

Other local search heuristics involve simple examinations of neighborhoods in the quest for a good solution [51]. The method moves from one solution to the next in the feasible region until the current solution cannot be improved by selecting an alternate solution in its neighborhood. The specific neighborhood structure depends upon the problem at hand, and as mentioned earlier, a clever choice of neighborhood can greatly improve the efficacy of the heuristic. Popular local search methods include the 2-exchange (or 2-opt) method [173], hill climbing procedures, the method of conjugate gradients [91, 92], and steepest ascent/descent methods. We will see several examples of local search algorithms in the later chapters. For detailed implementation specifications, one should consult a textbook on local search, such as the work of [2]. For an annotated bibliography of local search, the reader is also referred to [3].

procedure GRASP(MaxIter, RandomSeed)
1   f* ← 0
2   X* ← ∅
3   for i = 1 to MaxIter do
4       X ← ConstructSolution(G, g, X, α)
5       X ← LocalSearch(X, N(X))
6       if f(X) ≥ f(X*) then
7           X* ← X
8           f* ← f(X)
9       end
10  end
11  return X*
end procedure GRASP

Figure 2-4: GRASP for maximization.
Greedy Randomized Adaptive Search Procedure (GRASP). GRASP [69] is a multi-start metaheuristic that has been used with great success to provide solutions for several difficult combinatorial optimization problems [72], including SATISFIABILITY [154], JOB SHOP SCHEDULING [7], VEHICLE ROUTING [32], and QUADRATIC ASSIGNMENT [128, 140]. For an annotated bibliography of GRASP, the reader should reference the paper by Festa and Resende [72].

GRASP is a two-phase procedure which generates solutions through the controlled use of random sampling, greedy selection, and local search. For a given problem Π, let F be the set of feasible solutions for Π. Each solution X ∈ F is composed of k discrete components a₁, ..., a_k. GRASP constructs a sequence {X_i} of solutions for Π, such that each X_i ∈ F. The algorithm returns the best solution found after all iterations. The GRASP procedure is described in the algorithm presented in Figure 2-4. The construction phase receives as parameters an instance of the problem G, a ranking function g : A(X) ↦ ℝ (where A(X) is the domain of feasible components a₁, ..., a_k for a partial solution X), and a parameter 0 < α < 1. The construction phase begins with an empty partial solution X. Assuming that |A(X)| = k, the algorithm creates a list of the best-ranked αk components in A(X) and returns a uniformly chosen element x from this list. The current partial solution is augmented to include x, and the procedure is repeated until the solution is feasible, i.e., until X ∈ F.

The intensification phase consists of the implementation of a hill climbing procedure. Given a solution X ∈ F, let N(X) be the set of solutions that can be found from X by changing one of the components a ∈ X. Recall that N(X) is called the neighborhood of X. The improvement algorithm consists of finding, at each step, the element X* such that

    X* := argmax{ f(X′) : X′ ∈ N(X) },

where f : F ↦
ℝ is the objective function of the problem. At the end of each step we make X ← X* if f(X*) > f(X). The algorithm will eventually achieve a local optimum, in which case the solution X* is such that f(X*) ≥ f(X′) for all X′ ∈ N(X*). X* is returned as the best solution from the iteration, and the best solution from all iterations is returned as the overall GRASP solution.

Simulated Annealing. In statistical mechanics, the physical process of annealing is used to relax a system to the state of minimal energy. This is done by heating the solid until it melts and then cooling it slowly so that at each temperature the particles randomly arrange themselves until reaching thermal equilibrium.

In [116], Kirkpatrick et al. introduced a method for combinatorial problems known as simulated annealing. Based on the theory of the physical process, simulated annealing was shown to asymptotically converge to the global optimum after performing a number of so-called transitions at decreasing temperatures.

Pseudo-code for a generic simulated annealing algorithm is presented in Figure 2-5. The algorithm takes as input the initial temperature T and a reduction factor r ∈ (0, 1). Simulated annealing essentially chooses a neighbor at random to replace the incumbent solution. If the chosen neighbor is a better solution, then it is accepted with probability 1. However, in order to escape and evade local optima, if the chosen neighbor is worse than the incumbent, then it is accepted with some positive probability which is a decreasing function of the temperature [1].

procedure SimulatedAnnealing(T, r)
1   f* ← 0
2   X* ← ∅
3   X ← randomSolution()
4   while T ≠ 0 do
5       for i = 1 to MaxIter do
6           X′ ← randomNeighbor(X)
7           if f(X′) ≥ f(X) then
8               X ← X′
9           else
10              X ← X′ with probability e^((f(X′) − f(X))/T)
11          end if
12          T ← rT
13          if f(X) > f* then
14              X* ← X
15              f* ← f(X)
16          end if
17      end for
18  end while
19  return X*
end procedure SimulatedAnnealing

Figure 2-5: Generic simulated annealing maximization algorithm.
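The acceptance rule just described can be exercised on a toy one-dimensional maximization problem (all data here are hypothetical, and the cooling is applied once per outer loop rather than per move):

```python
import math
import random

# Toy simulated annealing in the spirit of Figure 2-5: maximize the
# hypothetical f(x) = -(x - 13)**2 over the integers 0..20.
def f(x):
    return -(x - 13) ** 2

random.seed(1)
T, r, max_iter = 10.0, 0.95, 20
x = random.randint(0, 20)
best = x
while T > 1e-3:                               # "T != 0" up to a tolerance
    for _ in range(max_iter):
        xp = min(20, max(0, x + random.choice((-1, 1))))   # random neighbor
        if f(xp) >= f(x):
            x = xp                            # better: accept outright
        elif random.random() < math.exp((f(xp) - f(x)) / T):
            x = xp                            # worse: accept with prob e^(delta/T)
        if f(x) > f(best):
            best = x
    T *= r                                    # geometric cooling schedule
print(best)
```

With this slow geometric schedule the run has thousands of proposals on a 21-point search space, so the recorded best solution is the global maximizer x = 13.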
Thus the cooling schedule, or the method by which the temperature decreases, is an important part of the heuristic. It has been shown that a logarithmically slow cooling schedule guarantees that the algorithm will converge to the global optimum in exponential time [24]. Therefore, in practice, faster cooling schedules are often used. Another method closely resembling simulated annealing is the method of mean field annealing.

Mean field annealing (MFA) is a heuristic which mimics the idea of mean field approximation from statistical physics [150]. In MFA, the stochastic process in simulated annealing is replaced by a set of deterministic equations. Though MFA does not guarantee convergence to a global optimal solution, it can provide an excellent approximation to an optimal solution and is much less expensive computationally.

Genetic Algorithms.

procedure GeneticAlgorithm
1   Generate population P_k
2   Evaluate population P_k
3   while terminating condition not met do
4       Select individuals from P_k and copy to P_{k+1}
5       Crossover individuals from P_k and put in P_{k+1}
6       Mutate individuals from P_k and put in P_{k+1}
7       Evaluate population P_{k+1}
8       P_k ← P_{k+1}
9       P_{k+1} ← ∅
10  end while
11  return best individual in P_k
end procedure GeneticAlgorithm

Figure 2-6: Pseudo-code for a generic genetic algorithm.

Genetic algorithms receive their name from an explanation of the way they behave. It comes as no surprise that they are based on Darwin's theory of natural selection [56]. Genetic algorithms store a set of solutions, or a population, and the population evolves by replacing these solutions with better ones based on a fitness criterion represented by the objective function value.
In successive iterations, or generations, the population evolves by reproduction, crossover, and mutation. Reproduction is the probabilistic selection of the next generation's elements, determined by their fitness level. Crossover is the combination of two current solutions, called parents, which produces one or more other solutions, referred to as offspring. Finally, mutation is the random modification of the offspring. Mutation is performed as an escape mechanism to avoid getting trapped at a local optimum [86]. In successive generations, only those solutions having the best fitness are carried to the next generation in a process which mimics the fundamental principle of natural selection, survival of the fittest [56]. Figure 2-6 provides pseudo-code for a standard genetic algorithm. Genetic algorithms were introduced in 1977 by Holland [102] and were greatly invigorated by the work of Goldberg in [86].

2.8 Concluding Remarks

In this chapter we have provided a brief history of and introduction to global optimization. The chapter is not intended to be all-inclusive; instead, the purpose of its inclusion is as follows. First, we have provided the fundamental results and underlying theory that we will use throughout this dissertation. This includes the theory of computational complexity and a short overview of the most common solution techniques we will encounter and apply to several problems as we progress. Secondly, we have provided several definitions, lemmata, and theorems that we will reference in the chapters to come. The intent is to have a concise location to which the reader can refer. Also, presenting the major theorems here will prevent redundancy, as we will not restate the theorems in each chapter in which they are applied. We will now move on and begin the examination of several combinatorial problems that occur in military telecommunication networks. We conclude this chapter with a list of references on theory, algorithms, and applications of global and combinatorial optimization.
Excellent references on global and combinatorial optimization include the work of Du and Pardalos [63, 64, 65], Floudas and Pardalos [74, 76], Horst and Pardalos [106], Horst, Pardalos, and Thoai [107], Pardalos [146], Pardalos and Resende [147], Pardalos and Rosen [148], and Wolsey [173], to name a few. Perhaps the most inclusive one-stop reference is the monumental work of Floudas and Pardalos in the six-volume Encyclopedia of Optimization [75].

The list of algorithms is also not intended to be exhaustive. Other exact algorithms include dynamic programming [16] and outer approximation methods [107]. Other effective heuristics include tabu search [81, 82, 83], scatter search [80], hybrid heuristics which combine elements of several methods [22, 71, 159], and algorithms designed for a specific problem which exploit its combinatorial structure [25, 30, 43, 139]. Other algorithmic reference books include Ahuja et al. [6], Floudas and Pardalos [73], Goldberg [86], Minieka [130], Osman [142], and Osman et al. [143].

CHAPTER 3
JAMMING COMMUNICATION NETWORKS VIA CRITICAL NODE DETECTION

3.1 Introduction

In this chapter, we study two variants of the CRITICAL NODE PROBLEM. In general, the objective of the CRITICAL NODE PROBLEM (CNP) is to find a set of k nodes in a graph whose deletion results in the maximum network fragmentation. By this we mean: maximize the number of components in the k-vertex-deleted subgraph. Studies carried out in this line include those by Bavelas [14] and Freeman [78], which emphasize node centrality and prestige, both of which are usually functions of a node's degree. However, they lacked applications to problems which emphasize network fragmentation and connectivity.
We can apply the CNP to the problem of jamming wired telecommunication networks by identifying the critical nodes and suppressing the communication on these nodes. This will result in the maximum number of disconnected components, which are unable to communicate with each other. The CNP can also be applied to the study of covert terrorist networks, where a certain number of individuals have to be identified whose deletion would result in the maximum breakdown of communication between individuals in the network [118]. Likewise, in order to stop the spreading of a virus over a telecommunication network, one can identify the critical nodes of the graph and take them offline.

The CNP also finds applications in network immunization [36, 176], where mass vaccination is an expensive process and only a specific number of people, modeled as nodes of a graph, can be vaccinated. The immunized nodes cannot propagate the virus, and the goal is to identify the individuals to be vaccinated in order to reduce the overall transmissibility of the virus. There are several vaccination strategies in the literature [36, 176] offering control of epidemic outbreaks; however, none of those proposed are optimal strategies. The suggested vaccination strategies emphasize the centrality of nodes as a major factor, rather than critical nodes whose deletion will maximize the disconnectivity of the graph. Deletion of central nodes may not guarantee a fragmentation of the network, or even disconnectivity, in which case disease transmission cannot be prevented. Of course, owing to its dynamic nature, the relationships between people, represented by edges in the social network, are transient; there is a constant rewiring between nodes, and alternate relationships could be established in the future. The proposed critical node technique helps in a maximum prevention of disease transmission over an instance of the dynamic network.
Before proceeding, we mention one final area in which the CRITICAL NODE PROBLEM finds several applications: the field of transportation engineering [66]. Two particular examples are as follows. In general, for transportation networks, it is important to identify critical nodes in order to ensure they operate reliably for transporting people and goods throughout the network. Further, in planning for emergency evacuations, identifying the critical nodes of the transportation network is crucial. The reason is twofold. First, knowledge of the critical nodes will help in planning the allocation of resources during the evacuation. Secondly, in the aftermath of a disaster they will help in re-establishing critical traffic routes.

Borgatti [21] has studied a similar problem, focusing on node detection resulting in maximum network disconnectivity. Other studies in the area of node detection, such as those on centrality [14, 78], focus on the prominence of, and reachability to and from, the central nodes. However, little emphasis is placed on the importance of their role in the network connectivity and diameter. Perhaps one reason for this is that all of the aforementioned references relied on simulation to conduct their studies. Although the simulations have been successful, a mathematical formulation is essential for providing insight and helping to reveal some of the fundamental properties of the problem [138]. In the next section, we present a mathematical model based on integer linear programming which provides optimal solutions for the CRITICAL NODE PROBLEM.

We organize this chapter by first formally defining the problem and discussing its computational complexity. Next, we provide an integer programming (IP) formulation for the corresponding optimization problem. In Section 3.3 we introduce a heuristic to quickly provide solutions to large-scale instances of the problem. We present a computational study in Section 3.4, in which we compare the performance of the heuristic against the optimal solutions,
which were determined using a commercial software package. Some concluding remarks are given in Section 3.5.

3.2 Problem Formulations

Denote a graph $G = (V, E)$ as a pair consisting of a set of vertices $V$ and a set of edges $E$. All graphs in this chapter are assumed to be undirected and unweighted. For a subset $W \subseteq V$, let $G(W)$ denote the subgraph induced by $W$ on $G$. A set of vertices $I \subseteq V$ is called an *independent* or *stable set* if for every $i, j \in I$, $(i, j) \notin E$; that is, the graph $G(I)$ induced by $I$ is edgeless. An independent set is *maximal* if it is not a subset of any larger independent set (i.e., it is maximal by inclusion), and *maximum* if there are no larger independent sets in the graph.

3.2.1 Critical Node Problem

The formal definition of the problem is given by:

CRITICAL NODE PROBLEM (CNP)
INPUT: An undirected graph $G = (V, E)$ and an integer $k$.
OUTPUT: $A^* = \arg\min \sum_{i,j \in (V \setminus A)} u_{ij}\bigl(G(V \setminus A)\bigr) : |A| \le k$, where

$$u_{ij} := \begin{cases} 1, & \text{if } i \text{ and } j \text{ are in the same component of } G(V \setminus A), \\ 0, & \text{otherwise.} \end{cases}$$

The objective is to find a subset $A \subseteq V$ of nodes with $|A| \le k$ whose deletion results in the minimum value of $\sum u_{ij}$ in the vertex-deleted subgraph $G(V \setminus A)$. This objective function results in minimum cohesion in the network, while also ensuring a minimum difference in the sizes of the components. An illustration best explains the choice of objective function. Consider an arbitrary unweighted graph with 150 nodes. According to our objective, a partition with 3 components of 50 nodes each is preferable to a partition with 5 components in which one component has 146 nodes and the remaining four each consist of a single node.

This problem is similar to MINIMUM k-VERTEX SHARING [133], where the objective is to minimize the number of nodes deleted to achieve a k-way partition. Here we are considering the complementary problem, where we know the number of vertices to be deleted and we try to maximize the number of components formed, while implicitly limiting the sizes of the components. Borgatti [21] has given a comprehensive illustration to facilitate the understanding of the objective function and its non-triviality.
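The arithmetic behind the 150-node illustration is easy to check. Below is a minimal sketch (the helper name `cnp_objective` is ours, not the authors') that evaluates the pairwise-connectivity cost, the number of node pairs that remain connected, for a given list of component sizes:

```python
def cnp_objective(component_sizes):
    """Pairwise connectivity: number of node pairs that can still
    communicate, i.e. the sum of s*(s-1)/2 over all components."""
    return sum(s * (s - 1) // 2 for s in component_sizes)

# Partition into 3 balanced components of 50 nodes each ...
balanced = cnp_objective([50, 50, 50])       # 3675 connected pairs
# ... versus 5 components: one of 146 nodes plus four singletons.
skewed = cnp_objective([146, 1, 1, 1, 1])    # 10585 connected pairs
assert balanced < skewed  # the balanced partition has lower cost
```

Even though the second partition has more components, its one giant component makes it far worse under this objective.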
We now prove that the recognition version of the CNP is NP-complete. Consider the following decision problem for the CNP:

K-CRITICAL NODE PROBLEM (K-CNP)
INPUT: An undirected graph $G = (V, E)$ and an integer $k$.
QUESTION: Does there exist a zero-cost $K$-way partition of $G$ obtained by deleting $k$ nodes or fewer?

Theorem 5. The K-CRITICAL NODE PROBLEM is NP-complete.

Proof. To show this, we must prove that (1) K-CNP ∈ NP, and (2) some NP-complete problem reduces to K-CNP in polynomial time.

(1) K-CNP ∈ NP since, given any graph $G = (V, E)$, we can verify the validity of a candidate solution in polynomial time. More specifically, after deleting any set of at most $k$ nodes, we can determine whether there is a zero-cost $K$-way partition of $G$ in $O(|E| + |V|)$ time using a depth-first search [6].

(2) To complete the proof, we show a reduction from the K-INDEPENDENT SET PROBLEM (K-ISP) [24], which is well known to be NP-complete [79]. Recall that the objective of the K-ISP is to determine whether $G$ contains an independent set of at least $K$ nodes. Let $G = (V, E)$ be a graph in which we seek an independent set. No transformations are required for the graph on which we solve the corresponding K-CNP. We will show that a 'yes' instance of the K-ISP corresponds to a 'yes' instance of the K-CNP on $G$. In particular, $G$ has an independent set of size $K$ if and only if the K-CNP has a zero-cost solution with $k \le |V| - K$. Suppose $G$ contains an independent set $I$ with $|I| = K$. Notice that the objective of the K-CNP will be 0, as the subgraph induced by deleting the nodes in $V \setminus I$ is edgeless. Therefore, a 'yes' instance of the K-ISP implies a 'yes' instance of the K-CNP with $k = |V| - K$.

To prove the converse, observe that the cost of any K-CNP solution is at least 0. Thus, a 'yes' instance of the K-CNP implies that once the $k$ critical nodes are removed, the resulting subgraph consists of $K$ components whose objective function value is 0. This implies that the induced subgraph is edgeless, i.e., each of the $K$ components consists of a single node. Hence, the $K$ remaining nodes form an independent set of $G$, resulting in a 'yes' instance of the K-INDEPENDENT SET PROBLEM. Thus the proof is complete. ∎
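The forward direction of this reduction can be checked mechanically on a small example: deleting the nodes in $V \setminus I$ for an independent set $I$ must leave an edgeless graph of $K$ singleton components. The sketch below is our own illustration (graph, set, and helper names chosen by us):

```python
def is_independent(edges, nodes):
    """True if no edge of the graph joins two nodes of the candidate set."""
    return not any(a in nodes and b in nodes for a, b in edges)

def surviving_edges(edges, deleted):
    """Edges of the subgraph induced by deleting the given vertices."""
    return [(a, b) for a, b in edges if a not in deleted and b not in deleted]

# A 5-cycle on vertices 0..4; I = {0, 2} is an independent set (K = 2).
V = {0, 1, 2, 3, 4}
E = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]
I = {0, 2}
assert is_independent(E, I)
# Deleting the k = |V| - K = 3 nodes in V \ I leaves an edgeless graph:
assert surviving_edges(E, V - I) == []
```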
When studying combinatorial problems, integer programming models are usually quite helpful for providing some of the formal properties of the problem [138]. With this in mind, we now develop a linear integer programming formulation for the CNP.

To begin with, define the surjection $u : V \times V \mapsto \{0, 1\}$ as above. Further, we introduce a surjection $v : V \mapsto \{0, 1\}$ defined by

$$v_i := \begin{cases} 1, & \text{if node } i \text{ is deleted in the optimal solution}, \\ 0, & \text{otherwise}. \end{cases} \quad (3\text{-}1)$$

Then the CRITICAL NODE PROBLEM admits the following integer programming formulation:

(CNP-1)  Minimize $\sum_{i,j \in V} u_{ij}$  (3-2)
s.t.
$u_{ij} + v_i + v_j \ge 1, \quad \forall\, (i,j) \in E,$  (3-3)
$u_{ij} + u_{jk} - u_{ki} \le 1, \quad \forall\, (i,j,k) \in V,$  (3-4)
$u_{ij} - u_{jk} + u_{ki} \le 1, \quad \forall\, (i,j,k) \in V,$  (3-5)
$-u_{ij} + u_{jk} + u_{ki} \le 1, \quad \forall\, (i,j,k) \in V,$  (3-6)
$\sum_{i \in V} v_i \le k,$  (3-7)
$u_{ij} \in \{0, 1\}, \quad \forall\, i, j \in V,$  (3-8)
$v_i \in \{0, 1\}, \quad \forall\, i \in V.$  (3-9)

Theorem 6. CNP-1 is a correct formulation for the CRITICAL NODE PROBLEM.

Proof. First, we note that the objective is to find the set of $k$ nodes whose removal results in a graph with the maximum number of disconnected components. This is accomplished by the objective function. Notice that the first set of constraints, (3-3), implies that if nodes $i$ and $j$ are in different components and there is an edge between them, then one of them must be deleted. Furthermore, constraints (3-4)-(3-6) together imply that for all triplets of nodes $i, j, k$, if $i$ and $j$ are in the same component and $j$ and $k$ are in the same component, then $k$ and $i$ must be in the same component. Constraint (3-7) ensures that the total number of deleted nodes is less than or equal to $k$. Finally, (3-8) and (3-9) define the proper domains for the variables used. Thus, a solution to the integer programming formulation CNP-1 characterizes a feasible solution to the CNP. On the other hand, it is clear that a feasible solution to the CNP defines at least one feasible solution to CNP-1. Therefore, CNP-1 is a correct formulation for the CNP. ∎
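The semantics of the formulation can be sanity-checked on tiny instances by brute force: enumerate every candidate deletion set $A$ with $|A| \le k$ and evaluate $\sum u_{ij}$ directly. The sketch below is our own illustration (exponential enumeration, useful only for verifying small examples, never a practical solver):

```python
from itertools import combinations
from collections import deque

def pairwise_connectivity(adj, deleted):
    """Sum of u_ij over surviving pairs: s*(s-1)/2 per component, via BFS."""
    kept = set(adj) - set(deleted)
    seen, total = set(), 0
    for v in kept:
        if v in seen:
            continue
        seen.add(v)
        queue, size = deque([v]), 0
        while queue:
            w = queue.popleft()
            size += 1
            for x in adj[w]:
                if x in kept and x not in seen:
                    seen.add(x)
                    queue.append(x)
        total += size * (size - 1) // 2
    return total

def cnp_exact(adj, k):
    """Exact CNP by enumeration: best set of at most k nodes to delete."""
    return min((set(c) for r in range(k + 1)
                for c in combinations(adj, r)),
               key=lambda A: pairwise_connectivity(adj, A))

# Path 1-2-3-4-5 with k = 1: deleting the middle node 3 is optimal,
# leaving two components of size 2 with objective 1 + 1 = 2.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3, 5], 5: [4]}
assert cnp_exact(adj, 1) == {3}
```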
Notice that the conditions enforced by the circular constraints (3-4), (3-5), and (3-6) in CNP-1 can be captured by the single constraint $u_{ij} + u_{jk} + u_{ki} \ne 2, \ \forall\, (i,j,k) \in V$. Thus we have an equivalent, more compact integer program given as

(CNP-2)  Minimize $\sum_{i,j \in V} u_{ij}$  (3-10)
s.t.
$u_{ij} + v_i + v_j \ge 1, \quad \forall\, (i,j) \in E,$  (3-11)
$u_{ij} + u_{jk} + u_{ki} \ne 2, \quad \forall\, (i,j,k) \in V,$  (3-12)
$\sum_{i \in V} v_i \le k,$  (3-13)
$u_{ij} \in \{0, 1\}, \quad \forall\, i, j \in V,$  (3-14)
$v_i \in \{0, 1\}, \quad \forall\, i \in V,$  (3-15)

where $u_{ij}$ and $v_i$ are as defined above.

Notice that if the objective function involved only the number of components, then an approximation for the MAXIMUM K-CUT PROBLEM [79, 112] could be employed by modifying the cost function of the Gomory-Hu tree [89]. An even simpler approach would be to identify the cut vertices in the graph, if any exist. However, the objective function also involves the sizes of the components formed, which makes the problem harder and subsequently implies that the methods suggested above are not suitable for our problem.

Recall that $\sum_{i,j \in V} u_{ij}$ is a measure of the total disconnectivity of the graph. If we observe carefully, the objective function can be rewritten as

$$\sum_{i \in S} \frac{s_i (s_i - 1)}{2}, \quad (3\text{-}16)$$

where $S$ is the set of all components and $s_i$ is the size of the $i$th component; the components can be easily identified by fast algorithms such as breadth- or depth-first search in $O(|V| + |E|)$ time [48]. We now provide an intuitive explanation for the choice of our objective function. For a fixed number of components, the variance in the sizes of the components is the sum of the squares of the deviations of the component sizes from the mean component size. However, notice that the mean size of a component is constant, because the sum of the sizes of the components is the constant $|V| - k$. Thus minimizing the variance of the component sizes reduces to minimizing the sum of the squares of the component sizes, which is our objective function. Also, when the sizes of the components are equal, the objective function is minimized when the number of components is maximized. We will use this objective function in the following section to implement a heuristic for identifying critical nodes.
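The variance argument can be checked numerically: for partitions with the same number of components and the same total size, the objective (3-16) orders partitions exactly as their variance does. A small illustrative check (helper names are ours):

```python
def objective(sizes):
    """Objective (3-16): sum over components of s*(s-1)/2."""
    return sum(s * (s - 1) // 2 for s in sizes)

def variance(sizes):
    """Population variance of the component sizes."""
    mean = sum(sizes) / len(sizes)
    return sum((s - mean) ** 2 for s in sizes) / len(sizes)

# Three partitions of 12 surviving nodes into 3 components each:
partitions = [[4, 4, 4], [6, 4, 2], [10, 1, 1]]
objs = [objective(p) for p in partitions]   # [18, 22, 45]
var_ = [variance(p) for p in partitions]
# Same component count and same total, so the two orderings agree:
assert objs == sorted(objs) and var_ == sorted(var_)
```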
3.2.2 Cardinality Constrained Problem

We now provide the formulation for a slightly modified version of the CNP based on constraining the connectivity index of the nodes in the graph. Given a graph $G = (V, E)$, the *connectivity index* of a node is defined as the number of nodes reachable from that vertex. Examples are provided in Figure 3-1. To constrain the network connectivity in optimization models, we can impose constraints on the connectivity indices.

This leads to a cardinality constrained version of the CNP, which we aptly refer to as the CARDINALITY CONSTRAINED CRITICAL NODE DETECTION PROBLEM (CC-CNP). The objective is to detect a set of nodes $A \subseteq V$ such that the connectivity indices of the nodes in the vertex-deleted subgraph $G(V \setminus A)$ are less than some threshold value, say $L$.

Figure 3-1: The connectivity index of nodes A, B, C, and D is 3; the connectivity index of E, F, and G is 2; the connectivity index of H is 0.

Using the same definition of the variables as in the previous subsection, we can formulate the CC-CNP as the following integer linear programming problem:

(CC-CNP-1)  Minimize $\sum_{i \in V} v_i$  (3-17)
s.t.
$u_{ij} + v_i + v_j \ge 1, \quad \forall\, (i,j) \in E,$  (3-18)
$u_{ij} + u_{jk} + u_{ki} \ne 2, \quad \forall\, (i,j,k) \in V,$  (3-19)
$\sum_{j \in V} u_{ij} \le L, \quad \forall\, i \in V,$  (3-20)
$u_{ij} \in \{0, 1\}, \quad \forall\, i, j \in V,$  (3-21)
$v_i \in \{0, 1\}, \quad \forall\, i \in V,$  (3-22)

where $L$ is the maximum allowable connectivity index for any node in $V$.

Theorem 7. CC-CNP-1 is a correct formulation for the CARDINALITY CONSTRAINED CRITICAL NODE DETECTION PROBLEM.

Proof. This proof follows in much the same way as that of Theorem 6. First, we see that the objective function given clearly minimizes the number of nodes deleted. Constraints (3-18) and (3-19) follow exactly as in the CNP formulation. The only difference is that now we must constrain

procedure CriticalNode(G, k)
1  MIS ← MaximalIndepSet(G)
2  while (|MIS| ≠ |V| − k) do
3    i ← argmin { Σ_{s ∈ S} s(s − 1)/2 : S is the set of components of G(MIS ∪ {i}), i ∈ V \ MIS }
4    MIS ← MIS ∪ {i}
5  end while
6  return V \ MIS   /* set of k nodes to delete */
end procedure CriticalNode

Figure 3-2: Heuristic for detecting critical nodes.
the connectivity index of each node. This is accomplished by constraint (3-20). Finally, constraints (3-21) and (3-22) define the domains of the decision variables, and we have the proof. ∎

3.3 Heuristics for Critical Node Problems

3.3.1 CNP Heuristic

Pseudocode for the proposed heuristic is provided in Figure 3-2. To begin with, the algorithm finds a maximal independent set (MIS). Then, in the loop from lines 2-5, the heuristic greedily selects the node $i \in V$ not currently in MIS which returns the minimum objective function value for the graph $G(\mathrm{MIS} \cup \{i\})$. The set MIS is augmented to include node $i$, and the process repeats until $|\mathrm{MIS}| = |V| - k$. The method then terminates, and the set of critical nodes to be deleted is given as those nodes $j \in V$ such that $j \in V \setminus \mathrm{MIS}$.

The intuition behind using an independent set is that the subgraph induced by this set is empty. Stated otherwise, deleting from the graph the nodes that are *not* in the independent set will result in an empty subgraph. Notice that this will provide the optimal solution for an instance of the CNP if $|\mathrm{MIS}| \ge |V| - k$. However, if the size of MIS is less than $|V| - k$, we simply keep adding the nodes which provide the best objective value to the set until it reaches the desired size. In the following lemma, we establish a relationship between the CNP and the MAXIMUM INDEPENDENT SET problem, which also provides a bound on the optimal solution for an instance of the CNP.

Lemma 5. Given a graph $G = (V, E)$, the cardinality of the maximum independent set of $G$, denoted $\alpha(G)$, provides an upper bound on the number of components produced in the optimal solution of the corresponding CRITICAL NODE PROBLEM, for any value of $k \in \mathbb{Z}$.

Proof.
Obviously, removing the critical nodes determined by the optimal solution for any instance of the CNP results in a set of disconnected components of $G$. One node from each of these components forms an independent set. Hence $\alpha(G)$ must be at least as large as the number of components formed in the optimal solution to the CNP. Furthermore, the components formed in the subgraph induced by the maximum independent set are of size one, and hence result in the optimal solution for the CNP instance if $\alpha(G) \ge |V| - k$, i.e., if the deletion of some $k$ nodes results in an empty graph. Thus, we have the lemma. ∎

We note that this bound is not particularly useful in practice, since the MAXIMUM INDEPENDENT SET problem is NP-hard in general [24, 79]. However, a *maximal* independent set can be computed in polynomial time. This motivates our decision to use maximal instead of maximum independent sets in the heuristic. Subsequently, the heuristic is computationally efficient, with the complexity given in the following theorem.

Theorem 8. The proposed algorithm has complexity $O(k^2 + |V|k)$.

Proof. To begin with, the while loop from lines 2-5 will iterate at most $O(|V| - k)$ times. In each iteration, the number of search operations decreases, from $|V| - 1$ down to $|V| - (|V| - k) = k$. Note that we are performing the search on a sparse graph, which is initially empty. Hence the total complexity will be

$$O\bigl(|V| - 1 + |V| - 2 + \cdots + |V| - |V| + k\bigr) = O\!\left(\sum_{i=1}^{|V|} i - \sum_{i=1}^{|V|-k} i\right) = O(k^2 + |V|k).$$

Thus the proof is complete. ∎
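Figure 3-2 translates almost line for line into code. The sketch below is our own rendering (not the authors' C++ implementation): build a maximal independent set greedily, then repeatedly add the node that yields the smallest value of Σ s(s − 1)/2 until |MIS| = |V| − k.

```python
from collections import deque

def comp_sizes(adj, kept):
    """Component sizes of the subgraph induced on the kept vertices (BFS)."""
    seen, sizes = set(), []
    for v in kept:
        if v in seen:
            continue
        seen.add(v)
        queue, size = deque([v]), 0
        while queue:
            w = queue.popleft()
            size += 1
            for x in adj[w]:
                if x in kept and x not in seen:
                    seen.add(x)
                    queue.append(x)
        sizes.append(size)
    return sizes

def f(adj, kept):
    """Objective (3-16) evaluated on the kept vertices."""
    return sum(s * (s - 1) // 2 for s in comp_sizes(adj, kept))

def critical_nodes(adj, k):
    """Greedy heuristic of Figure 3-2; returns a set of nodes to delete."""
    mis = set()
    for v in adj:                       # greedy maximal independent set
        if all(u not in mis for u in adj[v]):
            mis.add(v)
    while len(mis) < len(adj) - k:      # grow MIS, least-damage node first
        best = min((v for v in adj if v not in mis),
                   key=lambda v: f(adj, mis | {v}))
        mis.add(best)
    return set(adj) - mis

# Star graph (leaves listed first so the greedy MIS picks them):
# deleting the hub 0 isolates everything, objective 0.
adj = {1: [0], 2: [0], 3: [0], 4: [0], 0: [1, 2, 3, 4]}
assert critical_nodes(adj, 1) == {0}
```

Note that the quality of the starting maximal independent set matters: as with the original heuristic, the result is a feasible solution, not necessarily an optimal one.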
The proposed algorithm finds a feasible solution to the CRITICAL NODE PROBLEM; however, the solution is not guaranteed to be globally or locally optimal. Therefore, we can enhance the heuristic with the application of a local search routine as follows. Consider the pseudocode in Figure 3-3.

procedure LocalSearch(V \ MIS)
1  X ← MIS
2  local_improvement ← .TRUE.
3  while local_improvement do
4    local_improvement ← .FALSE.
5    if i ∈ MIS and j ∉ MIS then
6      MIS ← MIS \ {i}
7      MIS ← MIS ∪ {j}
8      if f(MIS) < f(X) then
9        X ← MIS
10       local_improvement ← .TRUE.
11     end if
12   end if
13 end while
14 return X
end procedure LocalSearch

Figure 3-3: Local search algorithm for the critical node heuristic.

procedure CriticalNodeLS(G, k)
1  X* ← ∅
2  f(X*) ← ∞
3  for j = 1 to MaxIter do
4    X ← CriticalNode(G, k)
5    X ← LocalSearch(X)
6    if f(X) < f(X*) then
7      X* ← X
8    end if
9  end for
10 return X*
end procedure CriticalNodeLS

Figure 3-4: Critical node heuristic with local search.

3.3.2 CC-CNP Heuristic

procedure ConstrainedCriticalNode(G, L)
1  MIS ← MaximalIndepSet(G)
2  OPT ← FALSE
3  NoAdd ← 0
4  while (OPT ≠ TRUE) do
5    for (i = 1 to |V|) do
6      if |s|(|s| − 1)/2 ≤ L, ∀ s ∈ S(G(MIS ∪ {i})), i ∈ V \ MIS then
7        MIS ← MIS ∪ {i}
8      else
9        NoAdd ← NoAdd + 1
10     end if
11     if (NoAdd = |V| − |MIS|) then
12       OPT ← TRUE
13       BREAK
14     end if
15   end for
16 end while
17 return V \ MIS   /* set of nodes to delete */
end procedure ConstrainedCriticalNode

Figure 3-5: Heuristic for the CARDINALITY CONSTRAINED CRITICAL NODE PROBLEM.

If the test in line 6 succeeds, the node is added to MIS in line 7; otherwise, NoAdd is incremented. If NoAdd is ever equal to $|V| - |\mathrm{MIS}|$, then no nodes can be returned to the graph, and OPT is set to TRUE. The loop is then exited, and the algorithm returns the set of nodes to be deleted, i.e., $V \setminus \mathrm{MIS}$.

Theorem 11. The worst-case complexity of the ConstrainedCriticalNode heuristic is $O(|V|^2 + |V||E|)$.

Proof. This proof is similar to the proof of Theorem 8 above. The loop from lines 4-16 will iterate at most $O(|V|)$ times. Each iteration requires at most $O(|V| + |E|)$ time to verify whether a solution will remain feasible after a node is re-included in the graph. Thus we have the result. ∎
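The CC-CNP heuristic of Figure 3-5 admits a similarly compact rendering. The sketch below is our own reading of it: starting from a maximal independent set, repeatedly re-include any node whose addition keeps the per-component quantity s(s − 1)/2 at or below L, and stop when no further node can return.

```python
from collections import deque

def sizes(adj, kept):
    """Connected-component sizes of the subgraph induced on kept (BFS)."""
    seen, out = set(), []
    for v in kept:
        if v in seen:
            continue
        seen.add(v)
        queue, n = deque([v]), 0
        while queue:
            w = queue.popleft()
            n += 1
            for x in adj[w]:
                if x in kept and x not in seen:
                    seen.add(x)
                    queue.append(x)
        out.append(n)
    return out

def constrained_critical_nodes(adj, L):
    """Figure 3-5 sketch: grow a maximal independent set, re-including any
    node whose addition keeps s*(s-1)/2 <= L for every component; return
    the complement, i.e. the nodes to delete."""
    mis = set()
    for v in adj:                       # greedy maximal independent set
        if all(u not in mis for u in adj[v]):
            mis.add(v)
    changed = True
    while changed:                      # sweep until no node can return
        changed = False
        for v in adj:
            if v in mis:
                continue
            if all(s * (s - 1) // 2 <= L for s in sizes(adj, mis | {v})):
                mis.add(v)
                changed = True
    return set(adj) - mis

# Path 1-2-3-4 with L = 1: no surviving component may exceed 2 nodes,
# so deleting the single node 2 suffices (components {1} and {3, 4}).
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert constrained_critical_nodes(adj, 1) == {2}
```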
3.3.3 Genetic Algorithm for the CC-CNP

As mentioned in Subsection 2.7.2, genetic algorithms (GAs) mimic the biological process of evolution. In this subsection, we describe the implementation of a GA for the CC-CNP. Recall the general structure of a GA as outlined in Figure 3-6.

procedure GeneticAlgorithm
1  Generate population P_k
2  Evaluate population P_k
3  while terminating condition not met do
4    Select individuals from P_k and copy to P_{k+1}
5    Crossover individuals from P_k and put in P_{k+1}
6    Mutate individuals from P_k and put in P_{k+1}
7    Evaluate population P_{k+1}
8    P_k ← P_{k+1}
9    P_{k+1} ← ∅
10 end while
11 return best individual in P_k
end procedure GeneticAlgorithm

Figure 3-6: Pseudocode for a generic genetic algorithm.

When designing a genetic algorithm for an optimization problem, one must provide a means to encode the population, define the crossover operator, and define the mutation operator, which allows for random changes in offspring to help prevent the algorithm from converging prematurely [10].

For our implementation, we use binary vectors as an encoding scheme for individuals within the population of solutions. When the population is generated (Figure 3-6, line 1), a random deviate from a distribution which is uniform on $(0, 1) \subset \mathbb{R}$ is generated for each node. If the deviate exceeds some specified value, the corresponding allele is assigned the value 1, indicating that this node should be deleted. Otherwise, the allele is given a 0, implying it is not deleted. In order to evaluate the fitness of the population, per line 2, we must determine whether each individual solution is feasible or not. Determining feasibility is a relatively straightforward task and can be accomplished in $O(|V| + |E|)$ time using a depth-first search [6].
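The fitness evaluation just described (decode the binary chromosome, then one search pass over the induced subgraph) can be sketched as follows. Treating feasibility as the per-component condition s(s − 1)/2 ≤ L used in Figure 3-5 is our assumption, as are the function names:

```python
from collections import deque

def is_feasible(chromosome, adj, L):
    """Decode a binary chromosome (1 = node deleted) and check, via one
    BFS pass, that every surviving component satisfies s*(s-1)/2 <= L."""
    nodes = list(adj)
    kept = {v for v, gene in zip(nodes, chromosome) if gene == 0}
    seen = set()
    for v in kept:
        if v in seen:
            continue
        seen.add(v)
        queue, size = deque([v]), 0
        while queue:
            w = queue.popleft()
            size += 1
            for x in adj[w]:
                if x in kept and x not in seen:
                    seen.add(x)
                    queue.append(x)
        if size * (size - 1) // 2 > L:
            return False
    return True

# Path 1-2-3-4 with L = 1: deleting node 2 leaves components {1} and {3,4}.
adj = {1: [2], 2: [1, 3], 3: [2, 4], 4: [3]}
assert is_feasible([0, 1, 0, 0], adj, 1)      # delete node 2: feasible
assert not is_feasible([0, 0, 0, 0], adj, 1)  # delete nothing: infeasible
```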
In order to evolve the population over successive generations, we use a reproduction scheme in which the parents chosen to produce the offspring are selected using the binary tournament method [131, 172]. Using this method, two chromosomes are chosen at random from the population, and the one having the best fitness, i.e., the lowest objective function value, is kept as a parent. The process is then repeated to select the second parent. The two parents are then combined using a crossover operator to produce an offspring [94].

Coin toss:  T     H     H     T     H
MOM:        0.56  0.81  0.22  0.70  0.86
DAD:        0.29  0.49  0.98  0.12  0.32
Offspring:  0.29  0.81  0.22  0.12  0.86

Figure 3-7: Example of the crossover operation. In this case, CrossProb = 0.65.

To breed new solutions, we implement a strategy known as parameterized uniform crossover [167]. This method works as follows. After the selection of the parents, refer to the parent having the best fitness as MOM. For each of the nodes (alleles), a biased coin is tossed. If the result is heads, then the allele from the MOM chromosome is chosen. Otherwise, the allele from the less fit parent, call it DAD, is selected. The probability that the coin lands on heads is known as CrossProb and is determined empirically. Figure 3-7 provides an example of a potential crossover when the number of nodes is 5 and CrossProb = 0.65 [10].

After the child is produced, the mutation operator is applied. Mutation is a randomizing agent which helps prevent the GA from converging prematurely and helps it escape local optima.
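The selection and crossover steps can be sketched directly. The deterministic replay at the bottom reproduces the Figure 3-7 offspring by fixing the coin tosses; the function and variable names are our own illustration:

```python
import random

def binary_tournament(population, fitness, rng):
    """Pick two chromosomes at random; keep the fitter (lower objective)."""
    a, b = rng.sample(population, 2)
    return a if fitness(a) < fitness(b) else b

def uniform_crossover(mom, dad, cross_prob, rng):
    """Parameterized uniform crossover: with probability cross_prob take
    the allele from the fitter parent (MOM), otherwise from DAD."""
    return [m if rng.random() < cross_prob else d for m, d in zip(mom, dad)]

# Deterministic replay of Figure 3-7 (H = take MOM's allele, T = DAD's):
mom = [0.56, 0.81, 0.22, 0.70, 0.86]
dad = [0.29, 0.49, 0.98, 0.12, 0.32]
child = [m if t == "H" else d for m, d, t in zip(mom, dad, "THHTH")]
assert child == [0.29, 0.81, 0.22, 0.12, 0.86]
```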
Mutation works by flipping a biased coin for each allele of the chromosome. The probability of the coin landing heads, known as the mutation rate (MutRate), is typically a very small user-defined value. If the result is heads, then the value of the corresponding allele is reversed. For our implementation, MutRate = 0.03.

After the crossover and mutation operators create the new offspring, it replaces a current member of the population using the so-called steady-state model [37, 94, 131]. Using this methodology, the child replaces the least fit member of the population, provided that a clone of the child is not an existing member of the population. This method ensures that the worst element of the population improves monotonically in every generation. In the subsequent iteration, the child becomes eligible to be a parent and the process repeats. Though the GA does converge in probability to the optimal solution, it is common to stop the procedure after some terminating condition (Figure 3-6, line 3) is satisfied. This condition could be one of several things, including a maximum running time, a target objective value, or a limit on the number of generations. For our implementation, we use the latter option, and the best solution after MaxGen generations is returned.

Table 3-1: Results of IP model and heuristic on terrorist network data.

Nodes deleted (k) |  IP obj   IP time (s)  |  Heur. obj  Time (s)  |  Heur.+LS obj  Time (s)
20                |  20       12.69        |  22         0.08      |  20            0.01
15                |  61       277.77       |  66         0.03      |  61            0.01
10                |  169      3337.06      |  190        0.06      |  169           0.02
9                 |  214      2792.33      |  229        0.15      |  214           0.02
8                 |  282      15111.94     |  309        0.04      |  282           0.01
7                 |  327      10792.08     |  329        0.09      |  327           0.01

3.4 Computational Results

All of the proposed heuristics were implemented in the C++ programming language and compiled using GNU g++ version 3.4.4 with optimization flags -O2. They were tested on a PC equipped with a 1700 MHz Intel Pentium M processor and 1.0 gigabytes of RAM, operating under the Microsoft Windows XP Professional environment.
3.4.1 CNP Results

We begin with the numerical results of the combinatorial algorithm for the CRITICAL NODE PROBLEM. We tested the IP model and the aforementioned heuristic on the terrorist network from Krebs [118], as well as on a set of randomly generated scale-free [13] graphs ranging in size from 75 to 150 nodes with various densities. The graphs were generated with version 1.4 of the publicly available Barabasi graph generator by Dreier [62]. For each instance tested, we report solutions for 3 values of k, the number of nodes to be deleted.

As a basis for comparison, we implemented the integer programming model for the CRITICAL NODE PROBLEM using the CPLEX optimization suite from ILOG [50]. CPLEX contains an implementation of the simplex method [98] and uses a branch-and-bound algorithm [173] together with advanced cutting-plane techniques [107, 139].

We begin by providing the results from the terrorist network [118]. The graph, which is shown in Figure 3-8, has 62 nodes and 153 edges. Notice that node 38 is the central node, with degree 22.

Figure 3-8: Terrorist network compiled by Krebs.

We applied the IP formulation and the heuristic to this network with 6 values of k. The results are provided in Table 3-1. Notice that for all values of k, the heuristic computed the optimal solution, requiring on average 0.013 seconds of computation time. The average time to compute the optimal solution using CPLEX was 5387.31 seconds. Clearly, even for this relatively small network, the heuristic is the method of choice. Figure 3-9 shows the resulting graph of the terrorist network according to the optimal solution of the CNP for the instance with k = 20.

In order to determine its scalability and robustness, the proposed heuristic was tested on a set of randomly generated scale-free graphs. Table 3-2 presents the results of the heuristic and the optimal solver when applied to the random instances. For each instance, we report the number of nodes and arcs, the value of k being considered, the optimal solution and computation time required by CPLEX, and finally the heuristic solution and the corresponding computation time.
For each graph, we report solutions for 3 different values of k.

Figure 3-9: Optimal solution when k = 20.

Notice that for all instances tested, our method was able to compute the optimal solution. Furthermore, the required time to compute the optimal solution was less than one second for all but one instance, averaging only 0.33 seconds over all 27 instances. On the other hand, CPLEX required 289.44 seconds on average to compute the optimal solution, requiring over 5000 seconds in the worst case. Our computational experiments indicate that the proposed heuristic is able to efficiently provide excellent solutions for large-scale instances of the CNP.

3.4.2 CC-CNP Results

We continue with the results of the two algorithms developed for the CC-CNP, namely the combinatorial algorithm and the genetic algorithm. As above, we tested the IP model and both heuristics on the terrorist network [118] and a set of randomly generated graphs. For each instance tested, we report solutions for 3 values of L, the connectivity index threshold. Finally, we implemented the integer programming model for the CC-CNP using CPLEX.

Table 3-2: Results of IP model and heuristic on randomly generated scale-free graphs.
Nodes  Arcs  k   |  IP obj  IP time (s)  |  Heur. obj  Time (s)  |  Heur.+LS obj  Time (s)
75     140   20  |  36      66.70        |  92         0.12      |  36            0.03
75     140   25  |  18      33.28        |  39         0.28      |  18            0.03
75     140   30  |  7       4.23         |  18         0.02      |  7             0.04
75     210   25  |  26      93.71        |  78         0.10      |  26            0.04
75     210   30  |  8       3.57         |  31         0.05      |  8             0.05
75     210   35  |  2       4.36         |  16         0.18      |  2             0.04
75     280   33  |  26      749.19       |  54         0.00      |  26            0.04
75     280   35  |  20      164.34       |  38         0.09      |  20            0.06
75     280   37  |  13      83.98        |  24         0.39      |  13            0.11
100    194   25  |  44      151.14       |  142        0.731     |  44            0.09
100    194   30  |  20      59.66        |  72         0.56      |  20            0.11
100    194   35  |  10      8.51         |  33         0.66      |  10            0.12
100    285   40  |  23      136.47       |  48         1.151     |  23            0.11
100    285   42  |  17      263.82       |  38         0.40      |  17            0.17
100    285   45  |  11      16.78        |  29         0.53      |  11            0.23
100    380   45  |  22      128.13       |  58         0.58      |  22            0.15
100    380   47  |  16      243.07       |  42         1.191     |  16            0.16
100    380   50  |  10      228.72       |  23         0.31      |  10            0.11
125    240   33  |  62      5047.51      |  97         0.721     |  62            0.30
125    240   40  |  29      118.92       |  49         1.562     |  29            0.24
125    240   45  |  16      17.09        |  32         0.14      |  16            0.39
150    290   40  |  40      41.60        |  125        1.832     |  40            0.47
150    290   50  |  12      26.29        |  64         2.773     |  12            0.831
150    290   60  |  1       24.92        |  35         1.091     |  1             0.851
150    435   61  |  19      29.55        |  53         2.313     |  19            0.741
150    435   65  |  13      31.45        |  37         0.991     |  13            1.952
150    435   67  |  11      37.91        |  31         0.52      |  11            0.801

Table 3-3 presents computational results of the IP model and heuristic solutions when tested on the terrorist network data. Notice that for all 5 values of L tested, the genetic algorithm and the combinatorial algorithm with local search (ComAlg+LS) computed optimal solutions. Figure 3-10 shows the optimal solution for the case when L = 4.

We now consider the performance of the algorithms when tested on the randomly generated data sets containing up to 50 nodes taken from [9]. The results are shown in Table 3-4. For these relatively small instances, we were able to compute the optimal solutions using CPLEX.

Table 3-3: Results of IP model and heuristics on terrorist network data.
Max Conn. Index (L) |  IP obj / time (s)  |  GA obj / time (s)  |  ComAlg obj / time (s)  |  ComAlg+LS obj / time (s)
3                   |  21 / 188.98        |  21 / 0.25          |  22 / 0.01              |  21 / 0.10
4                   |  17 / 886.09        |  17 / 0.741         |  19 / 0.01              |  17 / 0.45
5                   |  15 / 30051.09      |  15 / 0.871         |  20 / 0.18              |  25 / 1.331
8                   |  --                 |  13 / 0.39          |  14 / 0.05              |  13 / 0.07
10                  |  --                 |  11 / 0.741         |  12 / 0.07              |  11 / 0.05

Figure 3-10: Optimal solution when L = 4.

For each instance, we provide solutions for 3 values of L, the maximum connectivity index. Notice that for these problems, the genetic algorithm computed optimal solutions for each instance tested in a fraction of the time required by CPLEX. The combinatorial heuristic found optimal solutions for all but 3 cases, requiring approximately half of the time of the GA.

Table 3-4: Results of the IP model, the genetic algorithm, and the combinatorial heuristic on randomly generated scale-free graphs.

Nodes  Arcs  L  |  IP obj  IP time (s)  |  GA obj  Time (s)  |  ComAlg+LS obj  Time (s)
20     45    2  |  9       0.04         |  9       0.02      |  9              0.03
20     45    4  |  6       0.13         |  6       0.04      |  6              0.862
20     45    8  |  5       0.39         |  5       0.04      |  5              1.482
25     60    2  |  11      0.07         |  11      0.49      |  11             0.08
25     60    4  |  9       14.10        |  9       2.113     |  10             0.01
25     60    8  |  7       26.64        |  7       0.05      |  8              0.06
30     50    2  |  11      0.07         |  11      0.06      |  11             0.01
30     50    4  |  8       0.10         |  8       0.05      |  8              0.00
30     50    8  |  6       1152.15      |  6       0.09      |  6              0.00
30     75    4  |  10      18.77        |  10      0.14      |  10             0.02
30     75    6  |  9       442.41       |  9       0.09      |  9              0.04
30     75    10 |  7       64.94        |  7       0.18      |  8              0.00
35     60    2  |  12      0.13         |  12      0.14      |  12             0.14
35     60    4  |  8       29.89        |  8       0.711     |  8              0.00
35     60    6  |  7       31.61        |  7       0.31      |  7              0.01
40     70    2  |  15      0.17         |  15      0.10      |  15             0.101
40     70    4  |  11      341.97       |  11      0.06      |  11             0.00
40     70    6  |  8       78.94        |  8       0.20      |  8              0.04
45     80    2  |  16      0.24         |  16      0.06      |  16             0.10
45     80    4  |  11      48.17        |  11      0.05      |  11             0.02
45     80    6  |  8       118.23       |  8       0.09      |  8              0.071
50     135   2  |  19      0.36         |  19      0.27      |  19             0.05
50     135   4  |  15      165.18       |  15      0.63      |  15             0.291
50     135   6  |  14      5722.88      |  14      0.721     |  14             0.03
Total (sum)     |  242     8257.58      |  242     6.705     |  245            3.417

Table 3-5 presents the solutions for the random instances from 75 to 150 nodes [9, 11]. Again, in order to demonstrate the robustness of the heuristics, we provide solutions for 3 values of L for each instance. In this table, we provide the results for the genetic algorithm and the combinatorial heuristic with and without the local search enhancement. CPLEX was unable to
compute optimal solutions within reasonable time limits for any of the instances represented in this table.

Table 3-5: Comparative results of the genetic algorithm and the combinatorial heuristic when tested on the larger random graphs. Due to the complexity, we were unable to compute the corresponding optimal solutions.

Nodes  Arcs  L   |  GA obj  Time (s)  |  ComAlg obj  Time (s)  |  ComAlg+LS obj  Time (s)
75     140   5   |  18      1.622     |  21          0.00      |  18             1.502
75     140   8   |  14      1.442     |  20          0.02      |  14             1.181
75     140   10  |  12      1.231     |  20          0.12      |  12             3.364
75     210   5   |  23      1.532     |  29          0.01      |  23             18.476
75     210   8   |  21      2.443     |  23          0.01      |  22             2.934
75     210   10  |  20      2.794     |  24          0.09      |  20             21.170
75     280   5   |  31      3.464     |  35          0.101     |  31             3.144
75     280   8   |  29      2.874     |  31          0.05      |  29             3.746
75     280   10  |  28      3.775     |  30          0.13      |  28             4.787
100    194   5   |  22      5.317     |  33          0.02      |  22             2.774
100    194   10  |  17      3.224     |  22          0.241     |  17             6.499
100    194   15  |  15      2.954     |  22          0.021     |  15             0.44
100    285   5   |  33      5.08      |  38          0.02      |  33             1.262
100    285   10  |  28      4.376     |  31          0.05      |  28             11.076
100    285   15  |  27      5.728     |  28          0.16      |  27             1.142
100    380   5   |  40      9.052     |  47          0.051     |  42             5.739
100    380   10  |  36      11.506    |  41          0.02      |  37             3.866
100    380   15  |  35      6.198     |  40          0.39      |  36             3.034
125    240   5   |  29      7.951     |  37          0.251     |  31             1.472
125    240   10  |  24      9.984     |  29          0.07      |  24             1.993
125    240   15  |  22      5.888     |  26          0.18      |  22             9.233
150    290   5   |  31      7.981     |  40          0.421     |  30             5.798
150    290   10  |  26      4.967     |  32          0.20      |  25             5.107
150    290   15  |  23      5.457     |  29          1.101     |  23             19.889
150    435   5   |  49      9.143     |  57          0.06      |  49             6.459
150    435   10  |  40      19.407    |  50          0.44      |  41             5.518
150    435   15  |  38      9.703     |  45          0.07      |  38             13.699
Total (sum)      |  731     155.183   |  880         4.297     |  737            165.304

We see from this table that, in terms of solution quality, the GA is the best performing method. The ComAlg+LS also fares well, but requires more computation time than the GA on average. The combinatorial algorithm without the local search procedure produces solutions which are arguably reasonable, given that its required computation time is over 36 times less than that of the GA, while its solutions are only 1.2 times worse than those computed by the GA. Nevertheless, the genetic algorithm required only 5.748 seconds on average
to compute the best solution. The trade-off of solution quality versus computation time is a decision that would be made by an operator depending on the size of the network and the time constraints imposed on detecting the critical nodes of a given graph.

3.5 Concluding Remarks

In this chapter, we proposed several methods of jamming communication networks based on the detection of the critical nodes. Critical nodes are those vertices whose deletion results in the maximum network disconnectivity. In general, the problem of detecting critical nodes has a wide variety of applications, from jamming communication networks and other anti-terrorism applications to epidemiology and transportation science [9, 11].

In particular, we examined two problems, namely the CRITICAL NODE PROBLEM (CNP) and the CARDINALITY CONSTRAINED CNP (CC-CNP). Given a graph and an integer k, the objective of the CNP is to detect a set of k critical nodes whose deletion results in the maximum number of disconnected components whose cardinalities have the minimum variance. The definition of the CC-CNP is slightly different in that, instead of being given k ∈ ℤ, the maximum number of nodes to delete, we are given some value L ∈ ℤ which represents the maximum connectivity index a node may have. The objective in this case is to delete the minimum number of nodes while ensuring that the connectivity index of each node does not exceed L.

The proposed problems were modeled as integer linear programming problems. Then we proved that the corresponding decision problems are NP-complete. Furthermore, we proposed several heuristics for efficiently computing quality solutions to large-scale instances. The heuristic proposed for the CNP was a combinatorial algorithm which exploited properties of the graph in order to compute basic feasible solutions. The method was further intensified by the application of a local search mechanism. By using the integer programming formulation, we were able to determine the precision of our heuristics by comparing their relative solutions and computation times for several networks. The computational experiments indicated that the heuristic found optimal solutions for all instances tested in a fraction of the time required by the commercial IP solver CPLEX.
For the CC-CNP we proposed two algorithms, namely a modified version of the combinatorial algorithm described above and a genetic algorithm [87]. Once again, the computational experiments indicated that both methods are robust and are able to efficiently compute approximate solutions for instances of up to 150 nodes.

We conclude with a few words on the possibility of future expansion of this work. A heuristic exploration of cutting plane algorithms on the IP formulation would be an interesting alternative. Other heuristic approaches worthy of investigation include hybridizing the genetic algorithm with the addition of a local search or path-relinking enhancement procedure [85]. Finally, the local search used in the combinatorial algorithm was a simple 2-exchange method, which was the cause of a significant slowdown in computation, as noted in Table 3-5. A more sophisticated local search, such as a modification of the one proposed by Resende and Werneck [159, 160], should be a major focus of attention.

Furthermore, it would be interesting to study the weighted version of the problem to see how weights added to the nodes affect the solutions. For example, it is rational to perceive applications containing weighted networks in which the cost of deleting one node is different from that of another. Also, pertaining to applications outside the scope of jamming networks, a study of epidemic threshold variation with respect to the heuristic results will help determine the impacts on contagion suppression in biological and social networks.

CHAPTER 4
THE WIRELESS NETWORK JAMMING PROBLEM

4.1 Introduction

Military strategists are constantly seeking ways to increase the effectiveness of their force while reducing the risk of casualties. In any adversarial environment, an important goal is always to neutralize the communication system of the enemy. In this chapter, we are interested in jamming a wireless communication network. Specifically, we study the problem of determining the optimal number and placement for a set of jamming devices in order to neutralize communication on the network. This is known as the WIRELESS NETWORK JAMMING PROBLEM (WNJP).
Despite the enormous amount of research on telecommunication systems [155], the topic of jamming communication networks has received little attention. In fact, the material that follows in the next two chapters presents the first such efforts, insofar as we can tell. We will begin this chapter by describing and formulating the problem of jamming a wired telecommunication network, and extend this result to the wireless domain. We will see that there is a bit more versatility when considering the wireless version of the problem due to the wireless multicast advantage, i.e., the ability of wireless transmitters to affect nodes that are not directly adjacent to them.

We can generalize the work of [9] to study the problem of jamming and eavesdropping on wireless communication networks. As we will see, there are several variations that can be made depending on the overall objectives. This is aided by the fact that wireless jamming devices not only affect those nodes which are directly adjacent to them; rather, they propagate energy throughout the network to all the communication nodes, as we will see in the next section.

The organization of the chapter is as follows. After a review of related work, we present several deterministic formulations of the WNJP in Section 4.3. In particular, Subsection 4.3.1 contains several coverage formulations of the WNJP. Then, in Subsection 4.3.2, we use tools from graph theory to define the connectivity of the network and develop an alternative formulation based on constraining the connectivity indices of the nodes, analogous to the CC-CNP. Next, in Section 4.4 we incorporate percentile constraints to develop formulations which are computationally more efficient and have similar solution quality. In Section 4.5, we present two case studies comparing the solutions and computation time for all formulations. Finally, conclusions and future directions of research are addressed.

4.2 Definitions and Assumptions

Before formally defining the problem statement, we state some basic assumptions about the jamming devices and the communication nodes being jammed. We assume that parameters such as the frequency range of the jamming devices are known.
In addition, the jamming devices are assumed to have omnidirectional antennas. The communication nodes are also assumed to be outfitted with omnidirectional antennas and function as both receivers and transmitters. Given a graph G = (V, E), we can represent the communication devices as the vertices of the graph. An undirected edge connects two nodes if they are within a certain communication threshold.

Given a set M = {1, 2, ..., m} of communication nodes to be jammed, the goal is to find a set of locations for placing jamming devices in order to suppress the functionality of the network. The jamming effectiveness of device j is calculated using d : (V × V) → R, where d is a decreasing function of the distance from the jamming device to the node being jammed. Here we are considering radio transmitting nodes and, correspondingly, jamming devices which emit electromagnetic waves. Thus the jamming effectiveness of a device depends on the power of its electromagnetic emission, which is assumed to be inversely proportional to the squared distance from the jamming device to the node being jammed. We note that this assumption is made without loss of generality; the results presented in this chapter hold as long as the function d is a smooth, monotonically decreasing function. Specifically,

d_ij := λ / r²(i, j),

where λ ∈ R is a constant and r(i, j) represents the distance between node i and jamming device j. Without loss of generality, we can set λ = 1.

4.3 Deterministic Formulations

4.3.1 Coverage Approach

The cumulative level of jamming energy received at node i is defined as

Q_i := Σ_{j=1}^{n} d_ij = Σ_{j=1}^{n} 1 / r²(i, j),

where n is the number of jamming devices. Then, we can formulate the WIRELESS NETWORK JAMMING PROBLEM (WNJP) as the minimization of the number of jamming devices placed, subject to a set of quality covering constraints:

(QCP)  Minimize  n    (4-1)

s.t.
Q_i ≥ C_i,  i = 1, 2, ..., m.    (4-2)

The solution to this problem provides the optimal number of jamming devices needed to ensure that a certain jamming threshold C_i is met at every node i ∈ M. A continuous optimization approach, where one seeks the optimal placement coordinates (x_j, y_j), j = 1, 2, ..., n, for the jamming devices given the coordinates (X_i, Y_i), i = 1, 2, ..., m, of the network nodes, leads to highly non-convex formulations. For example, consider the quality covering constraint for network node i:

Σ_{j=1}^{n} 1 / ((x_j − X_i)² + (y_j − Y_i)²) ≥ C_i.

It is easy to verify that this constraint is non-convex. Finding the optimal solution to this nonlinear programming problem would require an extensive amount of computational effort.

To overcome the non-convexity of the above formulation, we propose several integer programming models for the problem. Suppose now that along with the set of communication nodes M = {1, 2, ..., m}, there is a fixed set N = {1, 2, ..., n} of possible locations for the jamming devices. This assumption is reasonable because in real battlefield scenarios, the set of possible placement locations will likely be limited. Define the decision variable x_j as

x_j := 1 if a jamming device is installed at location j, and 0 otherwise.    (4-3)

If we redefine r(i, j) to be the distance between communication node i and jamming location j, then we have the OPTIMAL NETWORK COVERING (ONC) formulation of the WNJP as

(ONC)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-4)

s.t.
Σ_{j=1}^{n} d_ij x_j ≥ C_i,  i = 1, 2, ..., m,    (4-5)
x_j ∈ {0, 1},  j = 1, 2, ..., n,    (4-6)

where C_i is defined as above. Here the objective is to minimize the number of jamming devices used while achieving some minimum level of coverage at each node. The coefficients c_j in (4-4) represent the costs of installing a jamming device at location j. In a battlefield scenario, placing a jamming device in the direct proximity of a network node may be theoretically possible; however, such a placement might be undesirable due to security considerations. In this case, the location considered would have a higher placement cost than would a safer location. If there are no preferences for device locations, then without loss of generality, c_j = 1, j = 1, 2, ..., n.

Though we have removed the non-convex covering constraints, this formulation remains computationally difficult. Notice that ONC is formulated as a MULTIDIMENSIONAL KNAPSACK PROBLEM, which is known to be NP-hard in general [79].

4.3.2 Connectivity Formulation

In the general WNJP, it is important to make the distinction that the objective is not simply to jam all of the nodes, but to destroy the functionality of the underlying communication network. In this section, we use tools from graph theory to develop a method for suppressing the network by jamming those nodes with several communication links, and we derive an alternative formulation of the WNJP. Recall that, given a graph G = (V, E), the connectivity index of a node is defined as the number of nodes reachable from that vertex (as shown in Figure 4-1). To constrain the network connectivity in optimization models, we can impose constraints on the connectivity indices instead of using covering constraints.

Figure 4-1: The connectivity index of nodes A, B, C, and D is 3; the connectivity index of E, F, and G is 2; the connectivity index of H is 0.
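Since the connectivity index of a node is simply the size of its connected component minus one, it can be computed with a breadth-first search. Below is a minimal Python sketch; the eight-node edge set is hypothetical, chosen only to reproduce the component structure described in Figure 4-1 (one component of four nodes, one of three, and an isolated node):

```python
from collections import deque

def connectivity_index(adj, v):
    """Number of nodes reachable from v (excluding v itself), via BFS."""
    seen = {v}
    queue = deque([v])
    while queue:
        u = queue.popleft()
        for w in adj.get(u, []):
            if w not in seen:
                seen.add(w)
                queue.append(w)
    return len(seen) - 1

def build_adj(nodes, edges):
    """Adjacency lists of an undirected graph."""
    adj = {v: [] for v in nodes}
    for u, w in edges:
        adj[u].append(w)
        adj[w].append(u)
    return adj

# Hypothetical graph with the same component structure as Figure 4-1.
nodes = list("ABCDEFGH")
adj = build_adj(nodes, [("A", "B"), ("B", "C"), ("C", "D"), ("E", "F"), ("F", "G")])
indices = {v: connectivity_index(adj, v) for v in nodes}
# indices: A..D have index 3, E..G have index 2, H has index 0
```

Computing all indices this way costs one BFS per node; for the graph sizes considered in the case studies below this is negligible.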
We can now develop a formulation for the WNJP based on the connectivity indices of the communication graph. We assume that the set of communication nodes M = {1, 2, ..., m} to be jammed is known and that a set of possible locations N = {1, 2, ..., n} for the jamming devices is given. Note that in the communication graph, V = M. Let S_i := Σ_{j=1}^{n} d_ij x_j denote the cumulative level of jamming at node i. Then node i is said to be jammed if S_i exceeds some threshold value C_i. We say that communication is severed between nodes i and j if at least one of the nodes is jammed. Further, let y : M × M → {0, 1} be a surjection where y_ij := 1 if there exists a path from node i to node j in the jammed network. Lastly, let z : M → {0, 1} be a surjective function where z_i returns 1 if node i is not jammed.

The objective of the CONNECTIVITY INDEX PROBLEM (CIP) formulation of the WNJP is to minimize the total jamming cost subject to the constraint that the connectivity index of each node does not exceed some prescribed level L. The corresponding optimization problem is given as:

(CIP)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-7)

s.t.  Σ_{j=1, j≠i}^{m} y_ij ≤ L,  ∀ i ∈ M,    (4-8)
      M(1 − z_i) > S_i − C_i ≥ −M z_i,  ∀ i ∈ M,    (4-9)
      x_j ∈ {0, 1},  ∀ j ∈ N,    (4-10)
      z_i ∈ {0, 1},  ∀ i ∈ M,    (4-11)
      y_ij ∈ {0, 1},  ∀ i, j ∈ M,    (4-12)

where M ∈ R is some large constant.

Let v : M × M → {0, 1} and v′ : M × M → {0, 1} be defined as follows:

v_ij := 1 if (i, j) ∈ E, and 0 otherwise,    (4-13)

and

v′_ij := 1 if (i, j) exists in the jammed network, and 0 otherwise.    (4-14)

With this, we can formulate an equivalent integer program as

(CIP-1)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-15)

s.t.  y_ij ≥ v′_ij,  ∀ i, j ∈ M,    (4-16)
      y_ij ≥ y_ik y_kj,  k ≠ i, j,  ∀ i, j ∈ M,    (4-17)
      v′_ij ≥ v_ij z_j z_i,  i ≠ j,  ∀ i, j ∈ M,    (4-18)
      Σ_{j=1, j≠i}^{m} y_ij ≤ L,  ∀ i ∈ M,    (4-19)
      M(1 − z_i) > S_i − C_i ≥ −M z_i,  ∀ i ∈ M,    (4-20)
      z_i ∈ {0, 1},  ∀ i ∈ M,    (4-21)
      x_j ∈ {0, 1},  ∀ j ∈ N;  y_ij ∈ {0, 1},  ∀ i, j ∈ M,    (4-22)
      v_ij ∈ {0, 1},  ∀ i, j ∈ M;  v′_ij ∈ {0, 1},  ∀ i, j ∈ M.    (4-23)

Lemma 6.
If CIP has an optimal solution, then CIP-1 has an optimal solution. Further, any optimal solution x* of the optimization problem CIP-1 is an optimal solution of CIP.

Proof. It is easy to establish that if i and j are reachable from each other in the jammed network, then in CIP-1, y_ij = 1. Indeed, if i and j are connected, then there exists a sequence of pairwise adjacent vertices

{(i_0, i_1), ..., (i_{m−1}, i_m)},    (4-24)

where i_0 = i and i_m = j. Using induction, it can be shown that y_{i_0 i_k} = 1 for all k = 1, 2, ..., m. From (4-16), we have that y_{i_k i_{k+1}} = 1. If y_{i_0 i_k} = 1, then by (4-17), y_{i_0 i_{k+1}} ≥ y_{i_0 i_k} y_{i_k i_{k+1}} = 1, which proves the induction step.

The proven property implies that in CIP-1,

Σ_{j=1, j≠i}^{m} y_ij ≥ connectivity index of i.    (4-25)

Therefore, if (x*, y*) and (x̄, ȳ) are optimal solutions of CIP-1 and CIP, respectively, then

V(x̄) ≤ V(x*),    (4-26)

where V is the objective in CIP-1 and CIP (by (4-25), the x-component of any feasible solution of CIP-1 is feasible in CIP).

As (x̄, ȳ) is feasible in CIP, it can be easily checked that ȳ satisfies all feasibility constraints in CIP-1 (this follows from the definition of y_ij in CIP). So (x̄, ȳ) is feasible in CIP-1, thus proving the first statement of the lemma. Hence, from CIP-1,

V(x*) ≤ V(x̄).    (4-27)

From (4-26) and (4-27),

V(x*) = V(x̄).    (4-28)

Let us define ŷ such that ŷ_ij := 1 if and only if j is reachable from i in the network jammed by x*. Using (4-25), (x*, ŷ) is feasible in CIP-1, and hence optimal. From the construction of ŷ it follows that (x*, ŷ) is feasible in CIP. Relying on (4-28), we can claim that x* is an optimal solution of CIP. The lemma is proved.

We have therefore established a one-to-one correspondence between formulations CIP and CIP-1. We can now linearize the integer program CIP-1 by applying some standard transformations. The resulting linear 0–1 program, CIP-2, is given as

(CIP-2)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-29)

s.t.
y_ij ≥ v′_ij,  ∀ i, j ∈ M,    (4-30)
y_ij ≥ y_ik + y_kj − 1,  k ≠ i, j,  ∀ i, j ∈ M,    (4-31)
v′_ij ≥ v_ij + z_j + z_i − 2,  i ≠ j,  ∀ i, j ∈ M,    (4-32)
Σ_{j=1, j≠i}^{m} y_ij ≤ L,  ∀ i ∈ M,    (4-33)
M(1 − z_i) > S_i − C_i ≥ −M z_i,  ∀ i ∈ M,    (4-34)
z_i ∈ {0, 1},  ∀ i ∈ M,    (4-35)
x_j ∈ {0, 1},  ∀ j ∈ N;  y_ij ∈ {0, 1},  ∀ i, j ∈ M,    (4-36)
v_ij ∈ {0, 1},  ∀ i, j ∈ M;  v′_ij ∈ {0, 1},  ∀ i, j ∈ M.    (4-37)

In the following lemma, we provide a proof of equivalence between CIP-1 and CIP-2.

Lemma 7. If CIP-1 has an optimal solution, then CIP-2 has an optimal solution. Furthermore, any optimal solution x* of CIP-2 is an optimal solution of CIP-1.

Proof. For 0–1 variables the following equivalence holds:

y_ij ≥ y_ik y_kj  ⟺  y_ij ≥ y_ik + y_kj − 1.

The only differences between CIP-1 and CIP-2 are the constraints

v′_ij ≥ v_ij z_j z_i,    (4-38)
v′_ij ≥ v_ij + z_i + z_j − 2.    (4-39)

Note that (4-38) implies (4-39) (since v_ij z_j z_i ≥ v_ij + z_i + z_j − 2 for 0–1 variables). Therefore, the feasibility region of CIP-2 includes the feasibility region of CIP-1. This proves the first statement of the lemma.

From the last property we can also deduce that for all x_1, x_2 such that x_1 is an optimal solution of CIP-1 and x_2 is optimal for CIP-2,

V(x_2) ≤ V(x_1),    (4-40)

where V(x) is the objective of CIP-1 and CIP-2.

Let (x*, y*, v′*, z*) be an optimal solution of CIP-2. Construct v″ using the following rule:

v″_ij := 1 if v_ij + z*_i + z*_j − 2 = 1, and 0 otherwise.    (4-41)

Since v′*_ij ≥ v″_ij, the point (x*, y*, v″, z*) is feasible in CIP-2 (as y*_ij ≥ v′*_ij ≥ v″_ij), and hence optimal (its objective value is V(x*), which is optimal). Using (4-41), (v″, z*) satisfies

v″_ij = v_ij z*_j z*_i.

Using this, we have that (x*, y*, v″, z*) is feasible for CIP-1. If x_1 is an optimal solution of CIP-1, then

V(x_1) ≤ V(x*).    (4-42)

On the other hand, using (4-40),

V(x*) ≤ V(x_1).    (4-43)

Together, (4-42) and (4-43) imply V(x_1) = V(x*). The last equality proves that x* is an optimal solution of CIP-1. Thus, the lemma is proved.

As a result of the above lemmata we have the following theorem, which states that an optimal solution to the linearized integer program CIP-2 is an optimal solution to the original connectivity index problem CIP.

Theorem 12.
If CIP has an optimal solution, then CIP-2 has an optimal solution. Furthermore, any optimal solution of CIP-2 is an optimal solution of CIP.

Proof. The theorem is an immediate corollary of Lemma 6 and Lemma 7.

4.4 Deterministic Setup with Percentile Constraints

As we have seen, suppressing communication on a wireless network may not necessarily imply that all nodes must be jammed. We might instead choose to constrain the connectivity index of the nodes, as in the CIP formulations. Alternatively, it may be sufficient to jam some percentage of the total number of nodes in order to acquire effective control over the network. The latter can be accomplished by adding percentile risk constraints to the mathematical formulation. Used extensively in financial engineering applications and in the optimization of stochastic systems, risk measures have proven effective when applied to deterministic problems [120]. In this section, we review two risk measures, namely Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR), and provide formulations of the WNJP incorporating these risk measures.

4.4.1 Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR)

The Value-at-Risk (VaR) percentile measure is perhaps the most widely used in all applications of risk management [103]. Stated simply, VaR is an upper percentile of a given loss distribution. In other words, given a specified confidence level α, the corresponding α-VaR is the lowest amount ζ such that, with probability α, the loss is less than or equal to ζ [121]. VaR-type risk measures are popular for several reasons, including their simple definition and ease of implementation.

An alternative risk measure is Conditional Value-at-Risk (CVaR). Developed by Rockafellar and Uryasev, CVaR is a percentile risk measure constructed for the estimation and control of risks in stochastic and uncertain environments. However, CVaR-based optimization techniques can also be applied in a deterministic percentile framework. CVaR is defined as the conditional expected loss under the condition that it exceeds VaR [168]. Figure 4-2 provides a graphical representation of the VaR and CVaR concepts. As we will see, CVaR has many properties that offer nice alternatives to VaR.
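Both measures are easy to compute for a finite set of equally likely loss scenarios, which is how the deterministic formulations later in this chapter treat the network nodes. The sketch below is illustrative only: for discrete distributions several CVaR conventions exist, and we use the simple tail average E[loss | loss ≥ VaR]; choosing α as an exact binary fraction (e.g. 0.75) sidesteps floating-point edge cases in the quantile index.

```python
import math

def var_cvar(losses, alpha):
    """Empirical alpha-VaR and alpha-CVaR of equally likely loss scenarios."""
    xs = sorted(losses)
    k = math.ceil(alpha * len(xs))       # position of the alpha-quantile
    var = xs[k - 1]                      # smallest zeta with P(loss <= zeta) >= alpha
    tail = [x for x in xs if x >= var]   # scenarios at or beyond VaR
    return var, sum(tail) / len(tail)    # CVaR: average of the tail

v, c = var_cvar(list(range(1, 9)), alpha=0.75)
# v == 6 (VaR) and c == 7.0 (CVaR, the mean of the tail {6, 7, 8})
```

Note that CVaR is never smaller than VaR, in line with the graphical comparison of Figure 4-2.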
Let f(x, y) be a performance or loss function associated with a decision vector x ∈ X ⊆ R^n and a random vector y ∈ R^m. The vector y can be interpreted as the uncertainties that may affect the loss. Then, for each x ∈ X, the corresponding loss f(x, y) is a random variable having a distribution in R which is induced by y. We assume that y is governed by a probability measure P on a Borel set, say Y. Therefore, the probability of f(x, y) not exceeding some threshold value ζ is given by

Ψ(x, ζ) := P{y | f(x, y) ≤ ζ}.    (4-44)

For a fixed decision vector x, Ψ(x, ζ) is the cumulative distribution function of the loss associated with x. This function is fundamental for defining VaR and CVaR [121].

Figure 4-2: Graphical representation of VaR and CVaR.

With this, the α-VaR and α-CVaR values of the loss random variable f(x, y) for any specified α ∈ (0, 1) are denoted by ζ_α(x) and φ_α(x), respectively. From the aforementioned definitions, they are given by

ζ_α(x) := min{ζ ∈ R : Ψ(x, ζ) ≥ α}    (4-45)

and

φ_α(x) := E{f(x, y) | f(x, y) ≥ ζ_α(x)}.    (4-46)

Notice that the probability that f(x, y) ≥ ζ_α(x) is equal to 1 − α. Finally, by definition, φ_α(x) is the conditional expectation of the loss corresponding to x, given that this loss is greater than or equal to ζ_α(x) [162].

The key to including VaR and CVaR constraints in a model is the characterization of ζ_α(x) and φ_α(x) in terms of a function F_α : X × R → R defined by

F_α(x, ζ) := ζ + (1/(1 − α)) E{max{f(x, y) − ζ, 0}}.    (4-47)

The following theorem, which provides the crucial properties of the function F_α, follows directly from the paper by Rockafellar and Uryasev [162].

Theorem 13.
As a function of ζ, F_α(x, ζ) is convex and continuously differentiable. The α-CVaR of the loss associated with any x ∈ X can be determined from the formula

φ_α(x) = min_{ζ ∈ R} F_α(x, ζ).    (4-48)

In this formula, the set consisting of the values of ζ for which the minimum is attained, namely

A_α(x) = argmin_{ζ ∈ R} F_α(x, ζ),    (4-49)

is a nonempty, closed, bounded interval, and the α-VaR of the loss is given by

ζ_α(x) = left endpoint of A_α(x).    (4-50)

In particular, it is always the case that

ζ_α(x) ∈ argmin_{ζ ∈ R} F_α(x, ζ)  and  φ_α(x) = F_α(x, ζ_α(x)).    (4-51)

This result provides an efficient linear optimization algorithm for CVaR. However, from a numerical perspective, the convexity of F_α(x, ζ) with respect to x and ζ, as provided by Theorem 13, is more valuable than the convexity of φ_α(x) with respect to x. As we will see in the following theorem, due to Rockafellar and Uryasev [163], this allows us to minimize CVaR without having to proceed numerically through repeated calculations of ζ_α(x) for various decisions x.

Theorem 14. Minimizing φ_α(x) with respect to x ∈ X is equivalent to minimizing F_α(x, ζ) over all (x, ζ) ∈ X × R, in the sense that

min_{x ∈ X} φ_α(x) = min_{(x, ζ) ∈ X × R} F_α(x, ζ),    (4-52)

where, moreover,

(x*, ζ*) ∈ argmin_{(x, ζ) ∈ X × R} F_α(x, ζ)  ⟺  x* ∈ argmin_{x ∈ X} φ_α(x)  and  ζ* ∈ argmin_{ζ ∈ R} F_α(x*, ζ).    (4-53)

In the deterministic setting of the WNJP, we are not particularly interested in minimizing VaR or CVaR as it pertains to the loss. Rather, we would like to impose percentile constraints on the optimization model in order to handle a desired probability threshold. The following theorem from [163] provides this capability.

Theorem 15. For any selection of probability thresholds α_i and loss tolerances ω_i, i = 1, ..., m, the problem

min_{x ∈ X}  g(x)    (4-54)

s.t.  φ_{α_i}(x) ≤ ω_i,  for i = 1, ..., m,    (4-55)

where g is any objective function defined on X, is equivalent to the problem

min_{(x, ζ_1, ..., ζ_m) ∈ X × R^m}  g(x)    (4-56)

s.t.
F_{α_i}(x, ζ_i) ≤ ω_i,  for i = 1, ..., m.    (4-57)

Indeed, (x*, ζ*_1, ..., ζ*_m) solves the second problem if and only if x* solves the first problem and the inequality F_{α_i}(x*, ζ*_i) ≤ ω_i holds for i = 1, ..., m. Furthermore, φ_{α_i}(x*) ≤ ω_i holds for all i = 1, ..., m. In particular, for each i such that F_{α_i}(x*, ζ*_i) = ω_i, one has that φ_{α_i}(x*) = ω_i.

4.4.2 Percentile Constraints and the WNJP

In this section, we investigate the use of VaR and CVaR constraints applied to the formulations of the WNJP derived in Sections 4.3 and 4.4 above. As we have seen, risk measures are generally designed for optimization under uncertainty. Since we are considering deterministic formulations of the WNJP, we can interpret each communication node i ∈ M as a random scenario and apply the desired risk measures in this context.

We begin with the OPTIMAL NETWORK COVERING formulation of the WNJP. Suppose it is determined that jamming some fraction α ∈ (0, 1) of the nodes is sufficient for effectively dismantling the network. This can be accomplished by the inclusion of α-VaR constraints in the original model. Let y : M → {0, 1} be a surjection defined by

y_i := 1 if node i is jammed, and 0 otherwise.    (4-58)

Recall from Section 4.3 that N = {1, ..., n} is the set of locations for the jamming devices, and x is a binary vector of length n where x_j = 1 if a jamming device is placed at location j. Then, to find the minimum number of jamming devices that will allow for covering α·100% of the network nodes with prescribed levels of jamming C_i, we must solve the following integer program:

(ONC-VaR)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-59)

s.t.  Σ_{i=1}^{m} y_i ≥ α m,    (4-60)
      Σ_{j=1}^{n} d_ij x_j ≥ C_i y_i,  i = 1, 2, ..., m,    (4-61)
      x_j ∈ {0, 1},  j = 1, 2, ..., n,    (4-62)
      y_i ∈ {0, 1},  i = 1, 2, ..., m.    (4-63)

Notice that this formulation differs from the ONC formulation by the addition of the α-VaR constraint (4-60). According to (4-61), if y_i = 1, then node i is jammed. Lastly, we have from (4-60) that at least α·100% of the y variables are equal to 1.

The optimal solution to the ONC-VaR formulation provides the minimum number of jamming devices required to suppress communication on at least α·100% of the network nodes.
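The effect of the α-VaR constraint (4-60) can be seen on a tiny one-dimensional instance solved by brute force. Everything here is invented for illustration (three node positions, two candidate sites, threshold C = 1, unit costs), with d_ij = 1/r²(i, j) as in Section 4.3:

```python
from itertools import combinations
from math import ceil

nodes = [0.0, 2.0, 100.0]   # hypothetical 1-D node positions
sites = [1.0, 99.0]         # hypothetical candidate jammer sites
C = 1.0                     # required jamming level at a covered node

def energy(i, placed):
    """Cumulative jamming energy Q_i at node i from the placed devices."""
    return sum(1.0 / (nodes[i] - s) ** 2 for s in placed)

def min_jammers(alpha):
    """Fewest sites whose combined energy covers at least
    ceil(alpha * m) nodes -- brute force over all site subsets."""
    m = len(nodes)
    need = ceil(alpha * m)
    for k in range(len(sites) + 1):
        for placed in combinations(sites, k):
            covered = sum(energy(i, placed) >= C for i in range(m))
            if covered >= need:
                return k
    return None

full = min_jammers(1.0)      # ONC: every node must be jammed
partial = min_jammers(2 / 3) # ONC-VaR-style: only 2 of the 3 nodes
# full == 2, partial == 1: relaxing coverage to 2/3 of the nodes
# saves a device, because the isolated node at 100.0 is expensive to reach
```

This is exactly the trade-off the case studies below quantify on 100-node networks with CPLEX.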
The resulting solution may provide coverage levels comparable to those provided by the ONC model, while potentially reducing the number of jamming devices used. However, notice that for the remaining (1 − α)·100% of the nodes, for which y_i is potentially 0, there is no guarantee that they will receive any amount of coverage. Furthermore, the addition of the m binary variables adds a computational burden to a problem which is already NP-hard.

We can also reformulate the CONNECTIVITY INDEX PROBLEM to include Value-at-Risk constraints. Let ρ : M → Z₊ be a surjection where ρ_i returns the connectivity index of node i; that is, ρ_i := Σ_{j=1, j≠i}^{m} y_ij. Further, let w : M → {0, 1} be a decision variable having the property that if w_i = 1, then ρ_i ≤ L. With this, the connectivity index formulation of the WNJP with VaR percentile constraints is given as

(CIP-VaR)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-64)

s.t.  ρ_i ≤ L w_i + (1 − w_i) M,  i = 1, 2, ..., m,    (4-65)
      Σ_{i=1}^{m} w_i ≥ α m,    (4-66)
      x_j ∈ {0, 1},  j = 1, 2, ..., n,    (4-67)
      w_i ∈ {0, 1},  i = 1, 2, ..., m,    (4-68)
      ρ_i ∈ Z₊,  i = 1, 2, ..., m,    (4-69)

where M ∈ R is some large constant.

Analogous to constraints (4-60)–(4-61), constraints (4-65)–(4-66) guarantee that at least α·100% of the nodes will have connectivity index at most L. As with the ONC-VaR formulation, there are two drawbacks of CIP-VaR. First, there is no control guarantee at all on any of the remaining (1 − α)·100% of the nodes, for which w_i = 0. Secondly, the addition of m binary variables adds a tremendous computational burden to the problem. As an alternative to VaR, we now examine formulations of the WNJP using Conditional Value-at-Risk constraints [162].

We first consider the OPTIMAL NETWORK COVERING problem. In order to put this into our derived framework, we need to define the loss function associated with an instance of the ONC. We introduce the function f : {0, 1}^n × M →
R defined by

f(x, i) := C_i − Σ_{j=1}^{n} x_j d_ij.    (4-70)

That is, given a decision vector x representing the placement of the jamming devices, the loss function is defined as the difference between the energy required to jam network node i and the cumulative amount of energy received at node i due to x. With this, we can formulate the ONC with the addition of CVaR constraints as the following integer linear program:

(ONC-CVaR)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-71)

s.t.  ζ + (1/((1 − α)m)) Σ_{i=1}^{m} max{C_min − Σ_{j=1}^{n} x_j d_ij − ζ, 0} ≤ 0,    (4-72)
      ζ ∈ R,    (4-73)
      x_j ∈ {0, 1},    (4-74)

where C_min is the minimal prescribed jamming level and d_ij is defined as above. The expression on the left-hand side of (4-72) is F_α(x, ζ). Further, from Theorem 15 we see that constraint (4-72) corresponds to having φ_α(x) ≤ ω = 0 [163]. Said differently, the CVaR constraint (4-72) implies that over the (1 − α)·100% worst (least) covered nodes, the average value of f(x, ·) is at most 0. For the case when C_i ≡ C for all i, it follows that the average level of jamming energy received by the worst (1 − α)·100% of the nodes exceeds C.

The important point about this formulation is that we have not introduced additional integer variables to the problem in order to add the percentile constraints. Recall that in ONC-VaR we introduced m discrete variables. Since here we have to add only m real variables to replace the max expressions under the summation, plus the real variable ζ, this formulation is much easier to solve than ONC-VaR.

In a similar manner, we can formulate the CONNECTIVITY INDEX PROBLEM with the addition of CVaR constraints. As before, we first need to define an appropriate loss function. Recall that ρ_i, the connectivity index of node i, is defined as the number of nodes reachable from i. We can then define the loss function f′ for a network node i as the difference between the connectivity index of i, which results from the placement of the jamming devices according to x, and the maximum allowable connectivity index L. That is, let f′ : {0, 1}^n × M → Z be defined by

f′(x, i) := ρ_i − L.    (4-75)

With this, the CIP-CVaR formulation is given as follows:

(CIP-CVaR)  Minimize  Σ_{j=1}^{n} c_j x_j    (4-76)

s.t.
ζ + (1/((1 − α)m)) Σ_{i=1}^{m} max{ρ_i − L − ζ, 0} ≤ 0,    (4-77)
ρ_i ∈ Z,  i = 1, 2, ..., m,    (4-78)
ζ ∈ R,    (4-79)

where ρ_i is defined as above. As with the previous formulation, the expression on the left-hand side of (4-77) is F_α(x, ζ) from (4-47). Furthermore, we have from Theorem 15 that (4-77) corresponds to having φ_α(x) ≤ ω = 0. This CVaR constraint provides that for the (1 − α)·100% worst cases, the average connectivity index will not exceed L. Again, we see that in order to include the CVaR constraint, we only need to add (m + 1) real variables to the problem. Computationally, CVaR provides a more conservative solution and will be much easier to solve than the CIP-VaR formulation, as we will see in the next section.

Table 4-1: Optimal solutions using the coverage formulation with regular and VaR constraints.

                      Regular Constraints    VaR Constraints
Number of Jammers     6                      4
Level of Jamming      100% at all nodes      100% for 96% of nodes,
                                             85% (of req'd) for 4% of nodes
CPLEX Time            0.81 sec               0.98 sec

4.5 Case Studies and Algorithms

In order to demonstrate the advantages and disadvantages of the proposed formulations for the WNJP, we present two case studies. The experiments were performed on a PC equipped with a 1.4 GHz Intel Pentium 4 processor and 1 GB of RAM, running the Microsoft Windows XP SP1 operating system. In the first study, an example network is given and the problem is modeled using the proposed coverage formulation. The problem is then solved exactly using the commercial integer programming software package CPLEX. Next, we modify the problem to include VaR and CVaR constraints and again use CPLEX to solve the resulting problems. Numerical results are presented and the three formulations are compared. In the second case study, we model and solve the problem using the connectivity index formulation. We then include percentile constraints and reoptimize. Finally, we analyze the results.
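Before turning to the case studies, the computational claim above can be sanity-checked numerically: by Theorem 13, minimizing the auxiliary function F_α(x, ζ) over ζ recovers the CVaR, and the max{·, 0} terms are exactly what the linearized formulations replace with m real slack variables. A toy check on an invented four-scenario loss vector:

```python
def F(zeta, losses, alpha):
    """Auxiliary function F_alpha for a finite, equiprobable loss sample."""
    m = len(losses)
    return zeta + sum(max(f - zeta, 0.0) for f in losses) / ((1 - alpha) * m)

losses = [0.5, 1.0, 2.0, 4.0]   # hypothetical per-node losses
alpha = 0.75
# minimize F over a grid of zeta candidates in [0, 5]
best = min(F(z / 100.0, losses, alpha) for z in range(0, 501))
# The minimum of F is 4.0, the CVaR (the worst 25% tail is just {4.0}).
# It is attained on the whole interval [2.0, 4.0], whose left endpoint
# 2.0 is the VaR -- illustrating the interval A_alpha(x) of Theorem 13.
```

In the integer programs above, the same minimization is delegated to the solver: each max term becomes a constraint t_i ≥ f_i − ζ with t_i ≥ 0, at the cost of real (not binary) variables only.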
4.5.1 Coverage Formulation

Here we present two networks and solve the WNJP using the network covering (ONC) formulation. The first network has 100 communication nodes, and the number of available jamming devices is 36. The cost c_j of placing a jamming device at location j is equal to 1 for all locations. This problem was solved using the regular constraints and the VaR-type constraints. Recall that there is a set of possible locations at which jamming devices can be placed; in these examples, this set of points constitutes a uniform grid over the battlespace. The placement of the jamming devices from each solution can be seen in Figure 4-3. The numerical results detailing the level of jamming for the network nodes are given in Table 4-1. Notice that the VaR solution called for 33% fewer jamming devices than the original problem while providing almost the same jamming quality.

Figure 4-3: Case study 1. The placement of jammers is shown when the problem is solved using the original and VaR constraints.

Table 4-2: Optimal solutions using the coverage formulation with regular, VaR, and CVaR constraints.

                  Regular (all)       VaR (0.9 conf.)          CVaR (0.7 conf.)
# Jammers         9                   8                        7
Jamming Level     100% at all nodes   100% for 90% of nodes,   100% for 57% of nodes,
                                      72% for 10% of nodes     90% for 20% of nodes,
                                                               76% for 23% of nodes
CPLEX Time        15 sec              15 h 55 min 11 sec       41 sec
In the second example, the network has 100 communication nodes and 72 available jammers. This problem was solved using the regular constraints as well as both types of percentile constraints. The resulting graph is shown in Figure 4-4, and the corresponding numerical results are given in Table 4-2.

Figure 4-4: Case study 1, continued. The placement of jammers is shown when the problem is solved using VaR and CVaR constraints.

In this example, the VaR formulation requires 11% fewer jamming devices with almost the same quality as the formulation with the standard constraints; however, it requires nearly 16 hours of computation time. The CVaR formulation gives a solution with very good jamming quality and requires 22% fewer jamming devices than the standard formulation and 11% fewer devices than the VaR formulation. Furthermore, the CVaR formulation requires an order of magnitude less computing time than the formulation with VaR constraints.

4.5.2 Connectivity Formulation

We now present a case study where the WNJP was solved using the connectivity index formulation (CIP). The communication graph consists of 30 nodes and 60 edges. The maximal number of jamming devices available is 36. We set the maximal allowed connectivity index of any node to be 3. In Figure 4-5 we can see the original graph with the communication links prior to jamming. The results of the VaR and CVaR solutions are shown in Figure 4-6. The confidence level for both the VaR and CVaR formulations was 0.9. Both formulations provide optimal solutions for the given instance. The resulting computation time for the VaR formulation was 15 minutes 34 seconds, while the CVaR formulation required only 7 minutes 33 seconds.

Figure 4-5: Case study 2: original graph.

Figure 4-6: A comparison of the percentile-constrained solutions. In both cases, the triangles represent the placement of jamming devices. (a) VaR solution. (b) CVaR solution.
4.6 Concluding Remarks

In this chapter, we introduced the deterministic WIRELESS NETWORK JAMMING PROBLEM and provided several formulations using node covering constraints as well as constraints on the connectivity indices of the network nodes. We also incorporated percentile constraints into the derived formulations. Further, we provided two case studies comparing the two formulations with and without the risk constraints.

With the introduction of this problem, we also recognize that several extensions can be made. For example, all of the formulations presented in this chapter assume that the topology of the enemy network is known. It is reasonable to assume that this is not always the case; in fact, there may be little or no a priori information about the network to be jammed. In this case, stochastic formulations should be considered and analyzed. This brings us to the topic of the next chapter, in which we consider the case when no information is known about the network to be jammed other than its relative location inside a planar region.

CHAPTER 5
JAMMING COMMUNICATION NETWORKS UNDER COMPLETE UNCERTAINTY

5.1 Introduction

This chapter describes a problem of interdicting/jamming communication networks in uncertain environments [44]. Interdiction of communication networks is an important application but, as previously mentioned, has not been intensively researched despite the vast amount of work on optimizing telecommunication systems [155]. Most papers on network interdiction are about preventing jamming and analyzing network vulnerability [68, 134]. To our knowledge, the only literature on network interdiction involving the optimal placement of jamming devices is the work of Commander et al. [45] (presented in Chapter 4), in which several mathematical programming formulations were given for the deterministic WIRELESS NETWORK JAMMING PROBLEM. The only other thoroughly studied cases are the problems of minimizing the maximal network flow and maximizing the shortest path between given nodes via arc interdiction using limited resources. Cormican et al. [49], Israeli et al. [110], and Wood [174] studied stochastic and deterministic cases and suggested efficient heuristics. A similar setup but with a
different objective was recently studied by Held in 2005 [95].

This problem is particularly important in the global war on terrorism, as improvised explosive devices (IEDs) continue to plague the coalition forces in Iraq. These homemade bombs are almost always detonated by some form of radio frequency device, such as cellular telephones, pagers, and garage door openers. The ability to suppress radio waves in a region will help prevent casualties resulting from IEDs. Furthermore, since most situations arise in military battlefield scenarios, exact information about the topology of the adversary's network is unknown. Thus, deterministic network interdiction approaches have limited applicability. In this case, a stochastic approach involving some risk measure for evaluating the efficiency of the jamming device placement may be helpful. However, choosing an appropriate risk measure is a challenging problem in its own right. In this chapter, we consider an extreme case where there is no a priori information about the topology of the network to be jammed. The only information used in our approach is a bounding area containing the communication network.

The organization of this chapter is as follows. Section 5.3 gives a formal description of the problem and the jamming model. We derive bounds and prove a convergence result for the case of complete uncertainty in Section 5.4. Here we also demonstrate the advantage of the proposed method compared to the simplified case which does not account for the cumulative effect of the jamming devices. In Section 5.5 we present a randomized local search and illustrate its effectiveness by using the bounds derived in the previous section. Section 5.6 provides some concluding remarks.

5.2 Descriptions, Assumptions, and Definitions

In general, the problem of jamming a communication network is to determine the minimum number of jamming devices required to interdict or suppress the functionality of the network.
Starting with this general statement, more specific ones can be obtained by considering various types of jamming devices and interdiction criteria. Depending on the given information about the communication nodes and the network topology, stochastic or deterministic setups can be constructed [45]. Below we provide assumptions and basic definitions of the considered framework.

We consider radio transmitting communication networks and jamming devices operating with electromagnetic waves. We assume that the jamming devices have omnidirectional antennas and emit electromagnetic waves in all directions with the same intensity. We also assume that jamming power decreases reciprocally with the squared distance from a device.

Definition 12. A point (communication node) X is said to be jammed or covered if the cumulative energy received from all jamming devices exceeds some threshold value E:

Σ_i λ / R²(X, i) ≥ E,    (5-1)
Since we assume that no information is known about the network to be jammed, the only reasonable approach is to cover all points in some area known to contain the network. This approach would also be appropriate when some information about the network is available but is potentially inaccurate.

We consider the case in which the communication network is located inside a square. However, all of the following theorems can be formulated for a more general case. For example, to obtain results when the network is contained inside a rectangular region in the plane, the only modification required in the calculations is an appropriate updating of the summation bounds.

Figure 5-1: Uniform grid with jamming devices.

An optimal covering is one which contains the minimum number of jamming devices that jam all points in the particular area of interest. However, finding a globally optimal solution for the general problem is difficult [45]. Therefore, we consider a subproblem of covering a square with jamming devices located at the nodes of a uniform grid. The solution to this problem will provide a feasible solution (optimal in certain cases) to the general problem. Suppose the grid step size is R. If the length of a square side a is not a multiple of R, then we cover a bigger square with a side of length R([a/R] + 1). See Figure 5-1 for an example. The optimal solution of the considered problem is a uniform grid with the largest possible step size which covers the square. The problem remains nontrivial, even for this simplified setup.

Lemma 8. For any covering of a square with a uniform grid, a point which receives the least amount of jamming energy lies inside a corner grid cell (see Figure 5-2).

Proof. Consider a corner cell S_0 and an arbitrary non-corner cell S_i. We prove that for any point P \in S_i, there is a corresponding point P' \in S_0 such that E(P) > E(P'), where E(X) is the cumulative jamming energy from all devices received at point X.

Let P' be the symmetric counterpart of point P inside S_0. Here, symmetry implies that P and P' are equidistant from the sides of their respective cells.

Figure 5-2: The least covered point is shown in the lower left grid cell.

We split the square into the four rectangles A, B, C, and D, where A is the rectangle containing cells S_0 and S_i (see Figure 5-3).

Figure 5-3: Square decomposition.

Denote the other two corner cells of rectangle A by C_1 and C_2. Let also T_1 and T_2 be points inside C_1 and C_2, respectively, such that T_1 P T_2 P' is a rectangle with sides parallel to the sides of the square, as in Figure 5-4. Using symmetry, we get the following relations:

    E(P', A) = E(P, A),    (5-3)
    E(P', B) < E(P, B),    (5-4)
    E(P', C) < E(P, C),    (5-5)
    E(P', D) < E(P, D),    (5-6)

Figure 5-4: Equivalent points.

where E(X, I) is the cumulative jamming energy from all devices inside rectangle I received by point X. Relations (5-3)-(5-6) imply

    E(P') = E(P', A) + E(P', B) + E(P', C) + E(P', D) < E(P, A) + E(P, B) + E(P, C) + E(P, D) = E(P),    (5-7)

which completes the proof.

Theorem 16. The unique solution R^* of the equation

    \frac{1}{2R^2}\left(\pi\ln\left(\frac{a}{R}+1\right) + \pi - 3\right) = \frac{1}{L^2}    (5-8)

is a lower bound on the optimal grid step size.

Proof. Let P(x_0, y_0) be the least jammed point; by Lemma 8 it lies inside a corner cell, and without loss of generality we assume it lies inside the bottom left corner cell, as shown in Figure 5-5.

Figure 5-5: Cumulative emanation of jamming devices.

Here I_1, I_2, and I_3 are the cumulative jamming energies received at P from the jamming devices located in regions C, A, and B, correspondingly. Similarly, I_4 is the jamming energy from the jamming device located at the bottom left node O. With this, the jamming energy received at point P is calculated through the expression

    E(P) = I_1 + I_2 + I_3 + I_4,    (5-9)

where

    I_1 = \sum_{i=0}^{T-1}\sum_{j=0}^{T-1} \frac{1}{(R - x_0 + iR)^2 + (R - y_0 + jR)^2},    (5-10)
    I_2 = \sum_{i=0}^{T-1} \frac{1}{(R - x_0 + iR)^2 + y_0^2},    (5-11)
    I_3 = \sum_{j=0}^{T-1} \frac{1}{x_0^2 + (R - y_0 + jR)^2},    (5-12)
    I_4 = \frac{1}{x_0^2 + y_0^2},    (5-13)
    T = \left[\frac{a}{R}\right] + 1.    (5-14)

Notice that we can estimate I_2 + I_3 as

    I_2 + I_3 \geq 2\sum_{i=0}^{T-1} \frac{1}{R^2(1+i)^2 + R^2} \geq \frac{2}{R^2}\int_0^T \frac{1}{1 + (1+x)^2}\,dx.    (5-15)

Figure 5-6: Integral lower bound.

This follows from the fact that

    \sum_{i=0}^{N} f(i) \geq \int_0^{N+1} f(x)\,dx,    (5-16)

where f(x) is a decreasing function. This property can be easily established geometrically.
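The sum-integral bound (5-16) is also easy to sanity-check numerically. The short sketch below is our own illustration (not part of the proof); it uses the integrand f(x) = 1/(1 + (1+x)^2) from (5-15), whose antiderivative is arctan(1 + x), so the right-hand side of (5-16) has the closed form arctan(N + 2) - arctan(1).

```python
import math

def f(x):
    """Decreasing integrand from (5-15): f(x) = 1 / (1 + (1+x)^2)."""
    return 1.0 / (1.0 + (1.0 + x) ** 2)

def left_sum(N):
    """Left side of (5-16): sum_{i=0}^{N} f(i)."""
    return sum(f(i) for i in range(N + 1))

def integral(N):
    """Right side of (5-16) in closed form: int_0^{N+1} f = arctan(N+2) - arctan(1)."""
    return math.atan(N + 2) - math.atan(1.0)

# For a decreasing f, each term f(i) dominates the integral of f over [i, i+1],
# so the left Riemann sum dominates the integral for every N.
for N in (1, 5, 50, 500):
    print(N, left_sum(N) >= integral(N))   # True for each N
```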
Notice in Figure 5-6 that the left side of inequality (5-16) is represented by the shaded region in the figure, while the right side is the area under f(x). Continuing from (5-15) above, we have

    \int_0^T \frac{1}{1+(1+x)^2}\,dx = \arctan(T+1) - \frac{\pi}{4} = \frac{\pi}{2} - \arctan\frac{1}{T+1} - \frac{\pi}{4} \geq \frac{\pi}{4} - \frac{1}{T+1}.    (5-17)

Here and further, we use the inequalities given below:

    \arctan(x) \leq x, \quad 0 \leq x \leq 1,    (5-18)
    \arctan(x) \geq x - \frac{x^3}{3}, \quad 0 \leq x \leq 1.    (5-19)

Now combining (5-15) and (5-17), we obtain

    I_2 + I_3 \geq \frac{2}{R^2}\left(\frac{\pi}{4} - \frac{1}{T+1}\right).    (5-20)

We also have the following estimate for I_4, which follows clearly from x_0^2 + y_0^2 \leq 2R^2:

    I_4 \geq \frac{1}{2R^2}.    (5-21)

For estimating I_1 we use a property similar to (5-16), but in a higher dimension. Namely,

    \sum_{i=0}^{N}\sum_{j=0}^{N} f(i,j) \geq \int_0^{N+1}\int_0^{N+1} f(x,y)\,dx\,dy,    (5-22)

where, as above, f(x,y) is a decreasing function of x and y. Using this inequality, we derive the following estimate for I_1:

    I_1 \geq \int_0^T\int_0^T \frac{dx\,dy}{(R - x_0 + xR)^2 + (R - y_0 + yR)^2} \geq \int_0^T\int_0^T \frac{dx\,dy}{(R + xR)^2 + (R + yR)^2} = \frac{1}{R^2}\int_1^{T+1}\int_1^{T+1} \frac{dx\,dy}{x^2 + y^2}.    (5-23)

Furthermore,

    \int_1^{T+1}\int_1^{T+1} \frac{dx\,dy}{x^2+y^2} = \int_1^{T+1} \frac{1}{x}\left(\arctan\frac{T+1}{x} - \arctan\frac{1}{x}\right)dx
    \geq \int_1^{T+1} \frac{1}{x}\arctan\frac{T+1}{x}\,dx - \int_1^{T+1} \frac{dx}{x^2}
    = \int_1^{T+1} \frac{1}{x}\left(\frac{\pi}{2} - \arctan\frac{x}{T+1}\right)dx - \left(1 - \frac{1}{T+1}\right)
    = \frac{\pi}{2}\ln(T+1) - \left(1 - \frac{1}{T+1}\right) - \int_1^{T+1} \frac{1}{x}\arctan\frac{x}{T+1}\,dx
    \geq \frac{\pi}{2}\ln(T+1) - \left(1 - \frac{1}{T+1}\right) - \int_1^{T+1} \frac{1}{x}\cdot\frac{x}{T+1}\,dx
    = \frac{\pi}{2}\ln(T+1) - 2\left(1 - \frac{1}{T+1}\right).    (5-24)

Combining this result with (5-23) we have

    I_1 \geq \frac{1}{R^2}\left(\frac{\pi}{2}\ln(T+1) - 2\left(1 - \frac{1}{T+1}\right)\right).    (5-25)

Summing (5-20), (5-21), and (5-25), we obtain an underestimate of the total coverage at point P. That is,

    E(P) \geq \frac{1}{R^2}\left(\frac{\pi}{2}\ln(T+1) - 2 + \frac{2}{T+1} + \frac{\pi}{2} - \frac{2}{T+1} + \frac{1}{2}\right) = \frac{1}{R^2}\left(\frac{\pi}{2}\ln(T+1) + \frac{\pi}{2} - \frac{3}{2}\right) \geq \frac{1}{2R^2}\left(\pi\ln\left(\frac{a}{R}+1\right) + \pi - 3\right).    (5-26)

To guarantee coverage of point P, it is sufficient to require that

    f(R) = \frac{1}{2R^2}\left(\pi\ln\left(\frac{a}{R}+1\right) + \pi - 3\right) \geq \frac{1}{L^2}.    (5-27)

Since f(R) is monotonically decreasing on (0, +\infty), the largest R satisfying the above inequality is the unique solution R^* of the equation

    f(R) = \frac{1}{L^2}.    (5-28)

Thus, a uniform grid with step size R^* jams any point P inside a corner cell. According to Lemma 8, the grid jams the least covered point in the square, implying that the whole square is jammed.
Thus we have the desired result.

Since the function f(R) = \frac{1}{2R^2}\left(\pi\ln\left(\frac{a}{R}+1\right) + \pi - 3\right) is monotonic, equation (5-8) can be easily solved using a numerical procedure such as a binary search. Therefore, using (5-8), we can obtain a step size R^* such that the corresponding uniform grid covers the entire square. Further, it is easy to see that the number of jamming devices in the grid does not exceed

    N_1 = \left(\frac{a}{R^*} + 2\right)^2.    (5-29)

Table 5-1: Comparing N_2/N_1 for various values of k.

    k      x      N_2/N_1
    10^2   2.44    2.3
    10^4   3.54    4.8
    10^6   4.40    7.5
    10^8   5.14   10.2

A more straightforward solution of the initial problem could be based on the property that a jamming device covers all the points inside a circle of radius L, as mentioned in Definition 12. Using that, we could reduce the problem to finding the optimal covering of a square with circles of radius L. A direct result from [113] (that was mentioned in [134]) is that, in the limit, the minimum number of circles needed to cover an area a^2 is

    N_2 = \frac{2a^2}{3\sqrt{3}L^2}.    (5-30)

To compare the approaches, we consider the ratio

    \frac{N_2}{N_1} = \left(\frac{R^*}{L}\right)^2 \cdot \frac{2}{3\sqrt{3}} \cdot \frac{1}{(1 + 2R^*/a)^2} = \frac{2x^2}{3\sqrt{3}} \cdot \frac{1}{(1 + 2x/k)^2},    (5-31)

where x = R^*/L and k = a/L. Using these substitutions, equation (5-8) can be rewritten in terms of the variables x and k as follows:

    \frac{1}{x^2}\left(\pi\ln\left(\frac{k}{x}+1\right) + \pi - 3\right) = 2.    (5-32)

By solving (5-32) for different values of k, one can find the corresponding values of x and N_2/N_1. To evaluate the advantage of the uniform grid approach over the naive one, we provide some computational results in Table 5-1. From the table, we see that as k increases, the advantage of using our approach becomes more significant. In fact, it can be proved that

    \lim_{a\to\infty} \frac{N_2}{N_1} = \infty.

This will follow as a corollary of Theorem 18.

To establish the quality of the lower bound rigorously, we first need to establish a similar result for an upper bound. This follows in the next theorem.

Theorem 17. The unique solution of the equation

    \frac{1}{R^2}\left(\frac{\pi}{2}\ln\left(\frac{2a}{R}+1\right) - \frac{1}{6(a/R+1)} + \frac{\pi}{2} + \frac{19}{3}\right) = \frac{1}{L^2}    (5-33)

is an upper bound \bar{R} on the optimal grid step size.

Proof.
Let P(x_0, y_0) be the least jammed point, which lies inside a corner cell according to Lemma 8. Without loss of generality, as in the proof of Theorem 16, we assume that P lies inside the bottom left corner cell. The jamming energy received at point P is calculated through the expressions (5-9)-(5-14). Since P is the least covered point, the following inequality holds:

    E(P) \leq E(P')\big|_{x = R/2,\; y = 0} = I'_1 + I'_2 + I'_3 + I'_4,    (5-34)

where

    I'_1 = \sum_{i=0}^{T-1}\sum_{j=0}^{T-1} \frac{1}{(R/2 + iR)^2 + (R + jR)^2},    (5-35)
    I'_2 = \sum_{i=0}^{T-1} \frac{1}{(R/2 + iR)^2},    (5-36)
    I'_3 = \sum_{j=0}^{T-1} \frac{1}{(R/2)^2 + (R + jR)^2},    (5-37)
    I'_4 = \frac{1}{(R/2)^2}.    (5-38)

I'_2 and I'_3 can be estimated through integrals similarly to the techniques used in the proof of Theorem 16. The following inequality holds:

    \sum_{i=1}^{N} f(i) \leq \int_0^{N} f(x)\,dx,    (5-39)

where f(x) is a decreasing function. This property can also be proven geometrically. Figure 5-7 represents a graphical interpretation of this relation: the left side of the inequality is represented by the shaded area, while the right side of (5-39) is the area under f(x).

Figure 5-7: Integral upper bound.

With this property, we have
from (5-36) that

    I'_2 \leq \frac{1}{(R/2)^2} + \int_0^{T-1} \frac{dx}{(R/2 + xR)^2} = \frac{1}{R^2}\left(6 - \frac{1}{T - 1/2}\right).    (5-40)

Furthermore, using inequalities (5-18) and (5-19), we see that (5-37) is estimated by

    I'_3 \leq \frac{2}{3R^2} + \frac{2}{R^2}\left(\arctan\frac{1}{2} - \arctan\frac{1}{2T}\right) \leq \frac{2}{3R^2} + \frac{2}{R^2}\left(\frac{1}{2} - \frac{1}{2T} + \frac{1}{24T^3}\right) = \frac{1}{R^2}\left(\frac{5}{3} - \frac{1}{T} + \frac{1}{12T^3}\right).    (5-41)

To estimate I'_1, a property similar to (5-39) can be used. This inequality is given by

    \sum_{i=1}^{N}\sum_{j=1}^{N} f(i,j) \leq \int_0^{N}\int_0^{N} f(x,y)\,dx\,dy + \int_0^{N} f(x,0)\,dx + \int_0^{N} f(0,y)\,dy,    (5-42)

where f(x,y) is a decreasing function of x and y. With the above inequality,

    I'_1 \leq \frac{1}{(R/2)^2 + R^2} + \int_0^{T-1} \frac{dx}{(R/2)^2 + (R + xR)^2} + \int_0^{T-1} \frac{dx}{(R/2 + xR)^2 + R^2} + \int_0^{T-1}\int_0^{T-1} \frac{dx\,dy}{(R/2 + xR)^2 + (R + yR)^2}
    = \frac{4}{5R^2} + \frac{C}{R^2} + \frac{1}{R^2}\int_0^{T-1}\int_0^{T-1} \frac{d(x + \frac{1}{2})\,dy}{(\frac{1}{2} + x)^2 + (y + 1)^2},    (5-43)

where

    C = 2\left(\arctan(2T) - \arctan 2\right) + \arctan\left(T - \frac{1}{2}\right) - \arctan\frac{1}{2}
    = \frac{\pi}{2} - 2\arctan\frac{1}{2T} + \arctan\frac{1}{2} - \arctan\frac{2}{2T-1}    (5-44)
    \leq \frac{\pi}{2} + \frac{1}{2} - 2\left(\frac{1}{2T} - \frac{1}{24T^3}\right) - \frac{2}{2T-1} + \frac{8}{3(2T-1)^3}.

The double integral in (5-43) is bounded as follows:

    \int_0^{T-1}\int_0^{T-1} \frac{d(x + \frac{1}{2})\,dy}{(\frac{1}{2} + x)^2 + (y + 1)^2} = \int_{1/2}^{T-1/2}\int_1^T \frac{dt\,dy}{t^2 + y^2} = \int_{1/2}^{T-1/2} \frac{1}{t}\left(\arctan\frac{T}{t} - \arctan\frac{1}{t}\right)dt.

Applying (5-18) and (5-19) to the two arctangent terms and evaluating the resulting elementary integrals yields

    \int_0^{T-1}\int_0^{T-1} \frac{d(x + \frac{1}{2})\,dy}{(\frac{1}{2} + x)^2 + (y + 1)^2}
    \leq \frac{\pi}{2}\ln(2T-1) - \frac{20}{3} + \frac{5}{6T} + \frac{1}{12T^2} - \frac{1}{36T^3} + \frac{1}{T - \frac{1}{2}} - \frac{1}{6(T - \frac{1}{2})^2}
    < \frac{\pi}{2}\ln(2T-1) - \frac{20}{3} + \frac{5}{6T} + \frac{1}{T - \frac{1}{2}} - \frac{1}{12(T - \frac{1}{2})^2}.    (5-45)

Combining the results from (5-43), (5-44), and (5-45) gives the overestimate for I'_1 as

    I'_1 < \frac{1}{R^2}\left(\frac{\pi}{2}\ln(2T-1) + \frac{\pi}{2} - \frac{16}{3} + \frac{5}{6T} + \frac{1}{T - \frac{1}{2}} - \frac{1}{12(T - \frac{1}{2})^2}\right).    (5-46)

Recall that equation (5-34) stated E(P) \leq I'_1 + I'_2 + I'_3 + I'_4. So using the expression for I'_4 given in (5-38) and the overestimates for I'_1, I'_2, and I'_3 derived in equations (5-46), (5-40), and (5-41), respectively, we obtain

    E(P) \leq \frac{1}{R^2}\left(\frac{\pi}{2}\ln(2T-1) - \frac{1}{6T} + \frac{\pi}{2} + \frac{19}{3}\right).    (5-47)

Finally, if we let T = [a/R] + 1 \leq a/R + 1, we get

    E(P) < \frac{1}{R^2}\left(\frac{\pi}{2}\ln\left(\frac{2a}{R}+1\right) - \frac{1}{6(a/R+1)} + \frac{\pi}{2} + \frac{19}{3}\right).    (5-48)

The function \bar{f}(R) = \frac{1}{R^2}\left(\frac{\pi}{2}\ln\left(\frac{2a}{R}+1\right) - \frac{1}{6(a/R+1)} + \frac{\pi}{2} + \frac{19}{3}\right) is monotone; hence the equation \bar{f}(R) = 1/L^2 has a unique solution \bar{R}. Equation (5-48) implies that a grid with step size \bar{R} does not cover the entire square; that is, there exists at least one point P that remains uncovered. Thus \bar{R} is an upper bound for the optimal grid covering problem. Since the optimal grid step size is strictly less than \bar{R}, the theorem is proved.

In Figure 5-8, we see an example in which we are covering a 40 x 40 square and the required jamming level at each point is 3.0 units. In part (a), we see the coverage associated with the required number of devices from the lower bound of Theorem 16. In this case, 20^2 = 400 jamming devices are used to cover the area. Notice that there are no holes in the region. This, together with the scallop shell outside the bounding box, indicates that all points within the region are covered. In part (b), we see the coverage corresponding to the placement of the jamming devices on a uniform grid according to the upper bound of Theorem 17. Here, the required number of devices is 19^2 = 361. Notice the holes located at the four corners of the region, indicating that these points are uncovered. This validates the theoretical results obtained in Theorem 16 and Theorem 17.

Figure 5-8: Comparison of the lower and upper bounds. (a) The coverage when jamming devices are placed according to the lower bound from Theorem 16; the total number of jamming devices required is 20^2 = 400. (b) The coverage associated with the result obtained from Theorem 17; in this case, 19^2 = 361 devices are placed. Notice the corner points are not jammed.

Now that we have established both upper and lower bounds for an optimal grid step size, we can determine the quality of the bounds. The result is obtained in the following theorem.

Theorem 18.

    \lim_{a\to\infty} \frac{\bar{R}}{R^*} = 1,    (5-49)

where R^* and \bar{R} are the bounds obtained from equations (5-8) and (5-33), correspondingly. Moreover, the following inequality holds:

    1 \leq \frac{\bar{R}}{R^*} \leq \sqrt{1 + \frac{c}{\ln(a)}},    (5-50)

for constants M \in \mathbb{R}, c \in \mathbb{R}, such that \bar{R} > M.

Proof.
By letting x := R^*/L and y := \bar{R}/L, equations (5-8) and (5-33) can be respectively rewritten as

    a = Lx\left(e^{\frac{2}{\pi}\left(x^2 + \frac{3-\pi}{2}\right)} - 1\right), and    (5-51)

    \frac{\pi}{2}\ln\left(\frac{2a}{Ly} + 1\right) = y^2 - \frac{19}{3} - \frac{\pi}{2} + \frac{Ly}{6(a + Ly)}.    (5-52)

To prove the theorem, we need to show that

    \lim_{a\to\infty} \frac{y}{x} = 1,    (5-53)

where x > 0 and y > 0 are solutions of (5-51) and (5-52), correspondingly. From (5-52), we obtain

    \frac{\pi}{2}\ln\left(\frac{2a}{Ly} + 1\right) > y^2 - C_1,    (5-54)

where

    C_1 = \frac{19}{3} + \frac{\pi}{2},    (5-55)

and

    a > \frac{Ly}{2}\left(e^{\frac{2}{\pi}(y^2 - C_1)} - 1\right).    (5-56)

From (5-51) and (5-56) we see that

    x\left(e^{\frac{2}{\pi}(x^2 + C_2)} - 1\right) > \frac{y}{2}\left(e^{\frac{2}{\pi}(y^2 - C_1)} - 1\right),    (5-57)

where

    C_2 = \frac{3 - \pi}{2},    (5-58)

and

    C_3 = e - 1.    (5-59)

Since Ly and Lx are upper and lower bounds, correspondingly, the following relation holds:

    \frac{y}{x} > 1.    (5-60)

With (5-51) and (5-60) above, we can also conclude that

    \lim_{a\to\infty} x = \infty \quad \text{and} \quad \lim_{a\to\infty} y = \infty.    (5-61)

For all M \in \mathbb{R}, where M > \sqrt{C_1}, there exists Q \in \mathbb{R} such that (5-57) can be reduced to

    \frac{y}{x} \leq M.    (5-62)

Moreover, for c = \frac{2}{\pi}\ln(Q) the following inequality holds:

    \left(\frac{y}{x}\right)^2 - 1 \leq \frac{c}{x^2}, \quad \text{and} \quad y > M.    (5-63)

Assume, for the sake of contradiction, that the inequality in (5-63) does not hold for some (\bar{x}, \bar{y}); that is, assume that (\bar{y}/\bar{x})^2 - 1 > c/\bar{x}^2. Using (5-62), we have