Citation
Models and Algorithms for Operations Scheduling Problems with Resource Flexibility and Schedule Disruptions

Material Information

Title:
Models and Algorithms for Operations Scheduling Problems with Resource Flexibility and Schedule Disruptions
Creator:
YANG, BIBO ( Author, Primary )
Copyright Date:
2008

Subjects

Subjects / Keywords:
Algorithms ( jstor )
Cost functions ( jstor )
Heuristics ( jstor )
Lateness ( jstor )
Minimization of cost ( jstor )
Overtime ( jstor )
Overtime costs ( jstor )
Scheduling ( jstor )
Total costs ( jstor )
Unit costs ( jstor )

Record Information

Source Institution:
University of Florida
Holding Location:
University of Florida
Rights Management:
Copyright Bibo Yang. Permission granted to the University of Florida to digitize, archive and distribute this item for non-profit research and educational purposes. Any reuse of this item in excess of fair use or other copyright exemptions requires permission of the copyright holder.
Embargo Date:
8/31/2009
Resource Identifier:
439083752 ( OCLC )

Full Text


MODELS AND ALGORITHMS FOR OPERATIONS SCHEDULING PROBLEMS WITH RESOURCE FLEXIBILITY AND SCHEDULE DISRUPTIONS

By

BIBO YANG

A DISSERTATION PRESENTED TO THE GRADUATE SCHOOL OF THE UNIVERSITY OF FLORIDA IN PARTIAL FULFILLMENT OF THE REQUIREMENTS FOR THE DEGREE OF DOCTOR OF PHILOSOPHY

UNIVERSITY OF FLORIDA

2004


Copyright 2004 by Bibo Yang


I dedicate this thesis to my father.


ACKNOWLEDGMENTS

I would like to thank my advisor, Dr. Joseph Geunes, for his help in my research and dissertation. I would also like to thank all of my committee members, Dr. Ravi Ahuja, Dr. Elif Akcali, Dr. William J. O’Brien, and Dr. Zuo-Jun Shen, for their time and advice.


TABLE OF CONTENTS

ACKNOWLEDGMENTS
LIST OF TABLES
LIST OF FIGURES
ABSTRACT

CHAPTER

1 INTRODUCTION

2 SINGLE MACHINE SCHEDULING TO MINIMIZE TOTAL TARDINESS AND OVERTIME RESOURCE COSTS
   2.1 Introduction and Problem Definition
      2.1.1 Introduction
      2.1.2 Literature Review
      2.1.3 Problem Definition
   2.2 Algorithm for Minimizing Cost for a Fixed Sequence
      2.2.1 The Compact and Relax Algorithm
      2.2.2 Properties of the Compact and Relax Algorithm
      2.2.3 Extension to Job-Specific Release Times
   2.3 Heuristic Methods for Job Sequencing for Min-WTOT
      2.3.1 Finding a Good Initial Sequence—Priority Rules
      2.3.2 Improving the Initial Solution—Local Search Methods
   2.4 Gauging Solution Quality—Lower Bounds for Min-WTOT
   2.5 Computational Tests
      2.5.1 Problem Instance Generation
      2.5.2 Priority Rule Performance vs. Optimal Solution Value
      2.5.3 Heuristic Performance on Large Problem Instances
   2.6 Conclusion

3 SINGLE MACHINE SCHEDULING PROBLEMS WITH JOB-SELECTION FLEXIBILITY
   3.1 Introduction and Problem Definition
   3.2 Modified Two-phase Algorithm (2PA) for the TMP


   3.3 Mod-2PA Algorithm for Generalizations of the TMP
      3.3.1 TMP with Job Tardiness
      3.3.2 TMP with Job Tardiness and Controllable Process Times
      3.3.3 TMP with Job Tardiness and Extendable Time Horizon
      3.3.4 TMP with Tardiness, Controllable Process Times, and Extendable Time Horizon
   3.4 Heuristic Approach for the TMP(t, c)
      3.4.1 Compress and Relax Algorithm
      3.4.2 Determining a Good Job Sequence
   3.5 Computational Tests for the TMP(t, c)
   3.6 Conclusions

4 SINGLE MACHINE RESCHEDULING WITH NEW JOB ARRIVALS AND PROCESSING TIME COMPRESSION COSTS
   4.1 Introduction and Problem Definition
   4.2 Literature Review
   4.3 Rescheduling Policy Approaches
   4.4 Rescheduling with Fixed Sequence Approach
   4.5 Heuristic Approach for Resequencing Original Jobs
   4.6 Computational Testing
   4.7 Conclusions

5 PREDICTIVE SCHEDULING ON A SINGLE MACHINE WITH UNCERTAIN FUTURE JOBS
   5.1 Problem Motivation and Literature Review
   5.2 Problem Definition and Modeling Assumptions
   5.3 Minimizing Cost with a Single Uncertain Job
      5.3.1 Methods SM and SM(1) to Generate a Feasible Schedule
      5.3.2 Method SM() for the Predictive Schedule
      5.3.3 The Decision to Compete for the Job
   5.4 Heuristic Predictive Scheduling for Multiple Uncertain Jobs
   5.5 Conclusion

6 CONCLUSION AND FUTURE RESEARCH DIRECTIONS
   6.1 Conclusion
   6.2 Future Research Directions

APPENDIX: MIP FORMULATION OF MIN-WTOT PROBLEM

LIST OF REFERENCES

BIOGRAPHICAL SKETCH


LIST OF TABLES

2-1 Local search algorithm description
2-2 Equations used for problem parameter generation
2-3 Parameter settings for test set 1
2-4 Results of test set 1
2-5 Parameter settings for 90-job problems
2-6 Illustration of effects of removing critical steps of the local search algorithm
2-7 Relative performance of heuristic solution approach as compared to strengthened linear programming relaxation lower bound
3-1 Rules used for randomly generating test problem parameters
3-2 Summary of computational test results for four problem classes
4-1 Results for problem class 1
4-2 Results for problem class 2


LIST OF FIGURES

2-1 Total tardiness cost as a function of overtime utilization in the compact and relax algorithm
2-2 Total cost curve as a function of overtime utilization under the compact and relax algorithm
2-3 Illustration of independent subsets
2-4 Illustration of blocking and merging of independent subsets into a new independent subset in relax phase of algorithm
2-5 Illustration of local search performance improvement as a function of the number of local search iterations
3-1 Profit as a function of reduction in compression time
4-1 A partial improvement graph
5-1 Cost as a function of …
5-2 Four possible cases for the cost function


Abstract of Dissertation Presented to the Graduate School of the University of Florida in Partial Fulfillment of the Requirements for the Degree of Doctor of Philosophy

MODELS AND ALGORITHMS FOR OPERATIONS SCHEDULING PROBLEMS WITH RESOURCE FLEXIBILITY AND SCHEDULE DISRUPTIONS

By

Bibo Yang

August 2004

Chair: Dr. Joseph Geunes
Major Department: Industrial and Systems Engineering

This thesis addresses new classes of single-resource scheduling problems with resource flexibility and schedule disruption. We first consider two problems in which the scheduling firm has the option to use additional resources to speed up the processing of a predetermined set of jobs. The cost of overtime usage or processing time “compression” must be balanced against penalty costs for tardy job delivery. In the first problem, the firm must process all jobs from a static set of jobs at minimum total weighted tardiness plus overtime usage cost. We develop a novel compact and relax algorithm for this problem. The second problem allows the firm to select a subset of jobs for processing that maximizes profit, where each job has an associated revenue. We use a two-phase interval selection algorithm along with a compress and relax algorithm to solve this problem.

We also consider two problems in which the set of jobs may be “dynamic,” meaning that the set of all jobs that require processing may not be known with certainty


at the time the initial schedule must be developed. We first address a rescheduling problem with newly arriving jobs. The objective is to minimize total cost, which includes schedule disruption costs, tardiness costs, and processing time compression costs. We apply linear programming techniques to solve the problem with a fixed sequence of jobs and use a heuristic based on very large scale neighborhood search (VLSN) techniques for the general problem. We then consider a predictive-reactive schedule that accounts for the possibility of future uncertain job arrivals. The predictive schedule is generated by inserting idle time in the initial schedule to account for uncertain jobs. We discuss several predictive scheduling policies and reactive approaches for dealing with this uncertainty.


CHAPTER 1
INTRODUCTION

With the advancement of information technologies, more and more firms are collaborating with partners to share information and reduce operational inefficiency by coordinating their operations. The virtual production network is one form of this kind of collaboration. In a virtual production network, groups of firms that work together in a supply chain agree to become virtual partners and are connected through an information network, e.g., a set of systems connected through the Internet. A coordinating firm (a.k.a. a central hub) is at the center of the network; it receives requests from customers, seeks the appropriate or “best” firm in the network to fulfill the request, and sends order commitment information to the firm. In a project-based environment, the coordinating firm is effectively a centralized project manager, who works with the other firms in the network to complete a project. In other contexts, the hub may be an on-line marketplace where customers come to place requests for goods or services.

Such a virtual production network may operate in a variety of ways and under a variety of contractual operational agreements between the member firms. We consider scenarios in which such a hub firm receives requests from customers and then effectively subcontracts these jobs to the member firms of the production network. Given a request by the hub firm to perform a set of jobs, a member firm may in certain contexts choose to accept or reject any subset of these jobs. The member firm evaluates the attractiveness of the proposed jobs and notifies the hub firm of those jobs it will choose to perform. In other contexts, the firm may be required to perform all jobs


requested by the hub, either because all jobs are extremely profitable, or because all jobs come from critical customers. To evaluate such decisions, we will consider several of the member firm’s scheduling problems in the production network. In this setting the hub firm proposes some set of jobs to a member firm, each with a required processing time and due date. The hub may seek the lowest cost member firm to complete the jobs, or may require the only firm in the network that is capable of executing the job to accept the job. In the former case, the member firm considers the profitability of the jobs and either proposes a cost for performing the job or rejects the job altogether. We refer to this kind of scheduling problem as scheduling for maximum profit in a production network. In the latter case, when the firm must accept all jobs, the member firm must create a schedule for its resources that minimizes the cost of performing the jobs.

In these kinds of problems, several new scheduling issues appear. An important issue is how to make the best use of all available resources. In traditional scheduling problems, oftentimes all jobs must be finished by a given deadline to avoid tardiness cost penalties. In practice, a firm may use additional resources (such as overtime resources) to meet the deadline for a job; or the firm may use additional equipment or resources to reduce or “compress” the processing times of the jobs. Although the additional overtime resource usage or the compression of the processing times helps to finish a job on time, it also adds resource cost. In a production network where the firm considers its profit instead of being required to complete all jobs on time, it is not uncommon for a firm to tolerate a certain degree of tardiness cost for a job if the job is profitable and the overtime resource cost is high.


The second issue we face is how to select jobs in order to maximize the firm’s profit. Not all orders proposed by the hub will necessarily bring profit to the firm. Moreover, the profitability of a job largely depends on the total set of jobs accepted and performed. Since the firm has the freedom to select jobs, if a given job cannot provide profit to the firm, it may reject the job. This is called the job selection problem.

The third problem involves schedule planning in a dynamic and uncertain environment. In today’s production environments, customer demands are highly dynamic. Neither the hub nor the firm has full knowledge about future customer orders. Since the firm does not have full knowledge about the entire scope of its potential future work requirements, it can only forecast a rough workload for a given period of time; the exact information is not revealed until an actual order commitment is made. Here, we distinguish between the “uncertainty” and “imprecision” of an order and associated data. The uncertainty of an order is associated with whether a future forecasted order will actually materialize, while the imprecision is associated with the parameters of the order (e.g., process time, due date, release date). Uncertainty occurs when a firm forecasts an order knowing only the possibility of obtaining the order from the hub or customer, or when a previously committed order is later cancelled by the customer. The firm needs to determine a good schedule plan to account for such uncertainty in orders.

Although there is much work to be done in this research area, our work emphasizes several new scheduling problems in the production network associated with the single-resource scheduling problem. In summary, our research will address scheduling problems in two environments:

- A static environment: the firm has full knowledge of the whole set of jobs and makes its schedule under certainty of this information;


- A dynamic environment: the firm does not have full knowledge about the whole set of jobs before it makes its initial schedule.

We will consider two scheduling problems in a static environment:

- Single machine scheduling problem to minimize total tardiness and overtime resource costs. In this problem, we assume the firm must finish all jobs required by the hub, and each job has an associated processing time, release date, due date, and tardiness cost. A fixed amount of regular time resource is available in every period to complete the jobs. If the firm cannot finish jobs on time, additional overtime resources are available, which have higher cost than regular time resources. The firm must decide how much overtime resource it should use in every period and create a schedule that minimizes total tardiness plus overtime resource costs.

- Single machine scheduling problem with job-selection flexibility. In this problem class, each job has an associated job-specific revenue. To complete any job, some cost is incurred, including resource usage cost and (possibly) tardiness cost. The firm will select the subset of available jobs from the hub to maximize its own profit.

We will focus on two dynamic scheduling problems:

- Rescheduling problem with new job arrivals. New jobs arrive to the hub, and the hub seeks the best firm to finish the jobs. Each firm already has an ongoing schedule before knowing about the new jobs. Given the opportunity to perform the new jobs, the firm reviews its current schedule to determine how it can complete the new jobs at the lowest additional cost. It then provides a price quote to the hub.

- Predictive scheduling on a single machine with uncertain future jobs. In this problem, the firm must make a schedule before it has full knowledge of all job requirements. Uncertain jobs in this context refer to jobs that the firm competes with other firms to win, and for which it knows only some probability of winning prior to creating its schedule. The firm must determine whether to allot time in the schedule to perform the uncertain jobs. The schedule is a kind of predictive schedule that may also be reactive when uncertainty is resolved, i.e., the firm will reschedule if the actual situation differs from what was predicted. The objective of the firm is to minimize the expected cost of the predictive schedule.

In summary, this thesis consists of four main chapters in addition to the introduction. Each chapter focuses in turn on the above four problems. Each chapter begins with an introduction, motivation, and definition of the scheduling problem, followed by a literature review. We then develop a model for the scheduling problem.


After developing the model, we investigate solution procedures to solve instances of the problem. Given a particular solution method, we implement and test the method to determine its ability to solve a variety of problem instances. The algorithms and solution approaches developed in this thesis are shown through computational testing to be efficient, and they can be applied in the operations of virtual production networks and in many other scheduling settings.


CHAPTER 2
SINGLE MACHINE SCHEDULING TO MINIMIZE TOTAL TARDINESS AND OVERTIME RESOURCE COSTS

2.1 Introduction and Problem Definition

2.1.1 Introduction

In many practical scheduling contexts, a firm attempts to schedule a set of jobs such that each job is completed by some due date; if this is not possible, a penalty for tardiness must be paid. To meet the due dates of the jobs, the firm may in some cases use additional resources (such as overtime) if its regular resource capacity is insufficient. O’Brien and Fischer (2000) have found that construction contractors are strongly capacity constrained, with very limited abilities to find additional resources or outsource excess work on short notice. Hence, the primary method they employ to meet due dates is the use of overtime. Thus a key management tradeoff directing operational choices is the cost of tardiness for projects versus the use of overtime.

This problem differs from most of the extant scheduling literature, which attempts to minimize average job cost or some function of tardiness. Problems addressed in past deterministic scheduling literature typically assume that each job takes some fixed amount of time on the resource, and that both processing times and resource usage can be measured in continuous time increments such as minutes, hours, or days. An implicit assumption usually exists that the cost of processing jobs on the machine is the same no matter when the jobs are processed, and that the amount of available processing time in each processing period is fixed. In practice, however, firms can use additional,


costly resources such as overtime and (in many cases outside of construction) outsourcing to increase the available processing capacity in a time period. Additionally, with the exception of prior work on minimizing weighted tardiness, many prior approaches assume that either the degree of tardiness is not important (when minimizing, e.g., the total number of tardy jobs), or that only the maximum tardiness is important. These approaches do not capture the essence of scheduling challenges in some environments, where the amount of tardiness of each job is an important performance measure.

In this chapter, we consider a single-resource scheduling problem in which the scheduling firm has the option to use overtime to gain additional processing time in any time period. Each additional unit of overtime capacity comes at a cost to the firm, and the available amount of overtime in any period is fixed. The jobs have different priorities, reflected through different tardiness costs assessed against the number of periods a job is delivered tardy. We frame the problem as a tradeoff between total tardiness costs and total overtime costs and refer to it as the minimum combined weighted tardiness and overtime (Min-WTOT) problem. We present heuristic approaches for scheduling a finite set of jobs to address this tradeoff, along with methods for strengthening the linear programming relaxation of the problem formulation to provide good lower bounds on the optimal solution value. These approaches are shown to provide good solutions on a broad range of test instances. To our knowledge, no optimization-based modeling approaches have been developed that consider this tradeoff between overtime and total weighted tardiness in single-resource scheduling.

2.1.2 Literature Review

For a broader overview and analysis of the general single machine scheduling literature, please see Lawler, Lenstra, Rinnooy Kan, and Shmoys (1993). Hoogeveen,


Lenstra, and van de Velde (1997) contains an excellent annotated bibliography on scheduling literature in operations research, and Hopp and Spearman (2001) also provide an interesting discussion of the history of scheduling problems. Since the Min-WTOT problem seeks to minimize a composite function of overtime and total weighted tardiness costs, we briefly discuss past work in similar areas.

Lawler (1977) first provided a pseudopolynomial time algorithm for minimizing total (unweighted) tardiness on a single machine, and Du and Leung (1990) later showed that this problem is NP-complete in the ordinary sense. The same problem with job-specific release dates is strongly NP-complete (Lawler 1977). See Lawler et al. (1993) for a discussion on providing good lower bounds and heuristic algorithms for this problem class.

Scheduling problems with controllable processing times are the subject of a considerable number of papers in the recent literature (see, e.g., the survey by Nowicki and Zdrzalka, 1990, which summarizes research results for problems such as 1||T_max, 1|r_j|C_max, 1|r_j|L_max, and 1||f_max). Controllable scheduling problems assume a job has a “normal” process time which can be compressed (shortened); typically there is a unit cost associated with the compression of a job. Both the processing times and compression times are integers, and the compression cost is linear in the reduction of the process time. The objective functions are often bi-criteria, combining regular objectives (such as minimizing weighted tardiness, the total number of tardy activities, weighted mean flow time, etc.) with compression cost. Cheng et al. (1998) studied the problem with the objective of minimizing the sum of compression costs and the cost


associated with the number of late jobs. The resulting problem was shown to be NP-hard, even when the due dates of all the jobs are identical.

The project scheduling literature also contains a variety of work dealing with single-resource scheduling problems. The so-called resource constrained project scheduling problem (RCPSP) can be separated into single-mode and multi-mode RCPSPs. In a single-mode RCPSP, each project (activity) has a single execution mode: both the activity duration and its requirements for a set of resources are assumed to be fixed, and only one execution mode is available for any activity. In a multi-mode RCPSP, given the estimated work content for an activity, a set of allowable execution modes can be specified for the activity’s execution. Each mode is characterized by a processing time and the amount of a particular resource type required for completing an activity or job. The multi-mode RCPSP generally assumes that once a mode is selected, the activity continues in this mode until completion. The objectives of problems considered in the RCPSP class often consist of minimizing makespan or some other regular objectives. For an in-depth discussion of this problem class please see the literature review by Brucker, Drexl, Möhring, Neumann, and Pesch (1999).

Time-cost tradeoff problems represent a subset of the multi-mode RCPSP, which is related to the problem we consider. A time-cost tradeoff project scheduling problem assumes that each activity can be completed in one of a set of different processing times (each processing time represents a mode), and associated with each processing time is a certain cost (this cost is assumed non-increasing in processing time). If the mode set for every project can be represented as a closed interval, and the cost of each activity is an affine and decreasing function of its processing time, we have a linear time-cost


tradeoff problem (Kelley and Walker 1959). If the modes consist of a discrete set and the cost of an activity is decreasing in its processing time, we have a discrete time-cost tradeoff problem (see Harvey and Patterson 1979, and Hindelang and Muth 1979).

The Min-WTOT problem is different from scheduling problems with controllable processing times and the multi-mode RCPSP. Papers on scheduling with controllable processing times assume integer processing times and compression times for the jobs, and that the compression cost is linearly increasing in the compression time. In the Min-WTOT problem, the processing time is expressed as a required amount of a resource, which is a real number and cannot be reduced. In each time period, the firm has an amount of regular capacity for the resource; it also has an overtime resource which may be used to reduce the number of periods over which a job is processed. The multi-mode RCPSP assumes that when a certain mode is chosen, the activity consumes the same amount of a renewable resource in a set of consecutive time periods (see Brucker et al., 1999). The Min-WTOT problem is a time-cost tradeoff problem in which the processing time per period for an activity in a given mode can vary (the mode set we consider for any activity is also effectively a closed interval and not a discrete set). The objective function in our model is non-regular, implying that the cost of an activity can increase with decreased completion time of the activity.

The most closely related work to the problem considered in this paper is that of Daniels (1990). He considers a single-machine scheduling problem, allows for different modes that can complete an activity in different processing times, and minimizes resource usage cost subject to limits on both total tardiness and individual job tardiness. To our knowledge, the literature has not


considered the single-resource time-cost tradeoff problem with a composite objective function that minimizes total overtime plus tardiness costs.

2.1.3 Problem Definition

Consider a firm that must schedule a set of jobs, each of which requires a single resource. We initially assume that no precedence relations exist among jobs, although, as we later discuss, allowing for simple precedence constraints is a straightforward extension of our approach. The basic problem is that of scheduling a finite set of jobs on a single resource during some finite time horizon of length T. We assume that time is separated into a set of discrete periods of equal length, indexed by t = 1, …, T. Let j index the set J of jobs, where j = 1, …, n. Job j has a due date d_j and a required processing time p_j. The resource has an available processing time of R_t on day t during regular time, plus an additional O_t for overtime. Note that the processing time p_j is expressed in some base units of time; that is, we can alternatively write p_j = p'_j R_t, where p'_j is the number of processing periods required by job j if only regular time is used. The total amount of processing time available in a given period, i.e., R_t + O_t, can be expressed as R_t + O_t = kR_t for some scalar k > 1. Equivalently, we can write O_t = (k − 1)R_t.

Processing a job during regular time incurs cost at a rate of c_R per unit time, while processing during overtime incurs cost at a rate of c_O per unit time, where c_O > c_R, i.e., overtime is more expensive. Job j incurs a cost of l_j per period tardy, and so if C_j denotes the finish period of job j, then its tardiness cost equals l_j[C_j − d_j]^+, where [x]^+ = max{x, 0}. We do not allow job preemption on the resource, i.e., once a job is started it must proceed to completion before any other job can be started. Moreover, we assume that job


processing can be split between periods, i.e., we can begin a job in period t and continue processing the job in period t + 1 if it has not completed processing in period t.

Let w_jt denote a decision variable for the total amount of work performed by the resource on job j in period t (measured in time units). The total amount of work performed in period t then equals W_t := Σ_{j∈J} w_jt, which cannot exceed (R_t + O_t) in any given period. Let u_t denote the total overtime resource used in period t, i.e., u_t = [W_t − R_t]^+. The overtime cost in period t then equals c_O u_t. We refer to the regular time limit in a period, R_t, as the soft limit on resource usage in any period, and the regular plus overtime limit (R_t + O_t) as the hard limit of the resource.

The firm wishes to find a schedule for all jobs that minimizes total tardiness penalty costs plus resource overtime usage costs while satisfying available processing time constraints. In standard scheduling terminology, we consider the problem 1|prec|Σ_j w_j T_j + c_O Σ_t u_t (where, using our prior notation, the tardiness of job j is T_j = [C_j − d_j]^+ and the weight of job j is w_j = l_j), which is a single-machine problem with (possibly) precedence constraints that minimizes the sum of weighted tardiness and overtime costs. Finding a schedule that minimizes these combined costs requires:

- Determining an optimal sequence of jobs and, for this sequence,
- Determining the start date, finish date, and overtime usage for each job in the sequence.

The Min-WTOT problem we have defined generalizes the problem of minimizing the sum of weighted tardiness on a single machine (without preemption), which was shown to be strongly NP-hard by Lawler (1977) and Lenstra, Rinnooy Kan, and Brucker (1977). This implies that the Min-WTOT problem is strongly NP-hard. For a mixed integer programming formulation of this problem, please see the Appendix.
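To make the objective concrete, the sketch below evaluates the Min-WTOT cost of a given work allocation. This is an illustrative Python fragment of our own, not part of the dissertation; the names (alloc, R, O, c_O, l, d) are assumptions. Regular-time cost is omitted because it is constant for a fixed job set; only the overtime and tardiness terms enter the objective.

```python
def min_wtot_cost(alloc, R, O, c_O, l, d):
    """Evaluate weighted tardiness + overtime cost for a schedule.
    alloc[j][t]: time units of job j processed in period t (0-indexed).
    Assumes every job receives some work within the horizon."""
    T, n = len(R), len(alloc)
    total = 0.0
    for t in range(T):
        W_t = sum(alloc[j][t] for j in range(n))
        assert W_t <= R[t] + O[t] + 1e-9          # hard capacity limit
        total += c_O * max(W_t - R[t], 0.0)        # overtime cost c_O * u_t
    for j in range(n):
        C_j = max(t for t in range(T) if alloc[j][t] > 0) + 1  # finish period, 1-indexed
        total += l[j] * max(C_j - d[j], 0)         # l_j * [C_j - d_j]^+
    return total
```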


This chapter is organized as follows. Because of the complexity of this problem, Section 2.2 initially focuses on the second decision listed above, i.e., given a predetermined job sequence, we determine the optimal start times, finish times, and overtime usage for each job. Having determined this, Section 2.3 discusses heuristic methods for determining the optimal job sequence (the first decision listed above). Our heuristic methods use priority rules for obtaining good initial job sequences, and then apply local search methods to improve upon the initial solutions. Given a good sequence, we then use the results of Section 2.2 to determine the start and finish times and overtime usage for that sequence. Section 2.4 deals with valid inequalities that strengthen the linear programming relaxation lower bound. Section 2.5 presents the results of a broad set of computational experiments using our heuristic solution approach.

2.2 Algorithm for Minimizing Cost for a Fixed Sequence

This section describes the approach for determining the optimal start and finish times for a predetermined (fixed) sequence of jobs. We initially assume that all jobs are released at time zero. We refer to our solution methodology for a fixed sequence of jobs as the Compact and Relax Algorithm, for reasons that will become clear during the explanation of the algorithm.

2.2.1 The Compact and Relax Algorithm

The compact and relax algorithm first approaches the problem by fully utilizing all available overtime in any period containing a job assignment before considering allocating jobs to the following period (regular time is allocated first, then overtime is used). This first compact phase of the algorithm creates a schedule that guarantees the minimum total tardiness costs under the fixed sequence, but also produces high overtime costs. We next sequentially relax the schedule by decreasing the total amount of


overtime used (beginning with the latest scheduled job that uses overtime and working backwards in time). The total amount by which we relax the compacted schedule depends on whether the relax phase produces a lower total cost schedule. The steps below summarize this approach:

Step 1 (Compact): Set the resource availability in each period to the hard limit (regular time plus overtime, i.e., R_t + O_t). Schedule jobs according to the predetermined sequence, using all available overtime in each period before allocating jobs to the following period. We refer to the resulting schedule after the compact phase of the algorithm as the compact schedule.

Step 2 (Relax): Beginning with the job scheduled last, decrease the amount of overtime scheduled for this job as much as possible without increasing the cost of the current schedule (cost will initially strictly decrease as we decrease the amount of overtime used; the only way cost will increase is if we increase the tardiness of this job by one or more periods). Repeat this procedure for the second-to-last job, then the third-to-last job, and so on, until we reach the first scheduled job.

2.2.2 Properties of the Compact and Relax Algorithm

This section develops certain properties of the compact and relax algorithm that allow us to show that it produces a schedule minimizing the sum of weighted tardiness and overtime costs for a fixed sequence of jobs.

Proposition 2.1: Given any schedule created using the initial compact algorithm, consider decreasing the total overtime resource usage in this initial schedule by some fixed amount. Decreasing the total overtime resource usage for jobs scheduled in later


periods provides at least as great a benefit as an equivalent decrease for jobs scheduled in earlier periods.

Proof: Because all jobs are available at time zero, the jobs are dispatched one by one without any unnecessary delay. For instance, consider periods t_1 and t_2 (t_1 < t_2), with jobs 1, 2, 3, …, k scheduled between periods t_1 and t_2 and with job k utilizing some overtime in period t_2. Suppose job k is delayed. If we reduce overtime resource usage by some amount δ in period t_2, where δ is less than or equal to the amount of overtime resource usage for job k in period t_2, job k will be delayed by one more period. But if we instead reduce overtime resource usage by δ in period t_1, the completion dates of jobs 1, 2, 3, …, k − 1 will either remain unchanged or increase, and job k will still be delayed by one more period. The total penalty cost is at least as high as if we decrease overtime usage in period t_2, while the savings from reducing overtime resource usage is the same. So the total penalty cost and resource cost obtained from relaxing resource usage in period t_1 is no less than that from relaxing resource usage in period t_2.

Proposition 2.1 implies that when we consider decreasing the total overtime usage relative to that in the schedule produced in the compact phase of the algorithm, we should begin with the latest period first and move from right to left in time. The following proposition illustrates the behavior of the total tardiness penalty as a function of the total overtime resource usage under the compact and relax algorithm.

Proposition 2.2: Total tardiness penalty cost is a non-increasing step function of overtime resource usage under the compact and relax algorithm.
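Before verifying Proposition 2.2, it may help to see the compact phase in code. The following is a minimal sketch under our own naming assumptions (Python is our choice; the dissertation itself presents no code, and the horizon is assumed long enough to fit all work):

```python
def compact(seq_p, R, O):
    """Compact phase sketch: fill each period to its hard limit R_t + O_t,
    job by job in the fixed sequence, before opening the next period.
    seq_p: processing requirements (time units) in sequence order.
    Returns alloc[j][t], the work done on job j in period t."""
    T = len(R)
    alloc = [[0.0] * T for _ in seq_p]
    t, room = 0, R[0] + O[0]
    for j, p in enumerate(seq_p):
        remaining = p
        while remaining > 1e-9:
            used = min(remaining, room)
            alloc[j][t] += used
            remaining -= used
            room -= used
            if room <= 1e-9 and remaining > 1e-9:
                t += 1                       # open the next period
                room = R[t] + O[t]
    return alloc
```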


To verify that Proposition 2.2 holds, consider a schedule that uses no overtime and consider increasing the total amount of overtime usage, beginning in the first period. Assume that at least one job is completed tardy, and let ε_j denote the total amount of processing time a tardy job j requires in its final processing period, where job j is the job in the initial schedule with the minimum amount of processing time required in its final processing period among all tardy jobs. As we increase overtime usage between 0 and ε_j, there is no reduction in total tardiness penalty costs. When overtime usage equals ε_j, total tardiness penalty costs decrease by l_j, since job j is delivered one day earlier. We can repeat this argument for all remaining tardy jobs in the sequence, and we will observe a stepwise decrease in total tardiness penalty costs as a function of total overtime usage; see Figure 2-1.

Figure 2-1. Total tardiness cost as a function of overtime utilization in the compact and relax algorithm.
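As a hypothetical numeric illustration (the numbers are ours, not from the text): if the tardy job needing the least work in its final period requires ε = 2 units there, then the first 2 units of overtime buy no tardiness reduction, total tardiness cost drops by that job's weight l_j exactly at 2 units, and the cost then stays flat again until enough additional overtime accumulates to pull the next tardy job's completion one period earlier, producing the staircase of Figure 2-1.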


In Figure 2-1, Δ(j), which we define more precisely in Theorem 2.1 below, represents the maximum decrease in overtime that we can achieve without delaying job j one additional period, while l_j is the cost per day tardy for job j. Before introducing this theorem we need to define some additional notation used in the theorem. We define Λ(j) as the total amount of resource time allocated to jobs following job j in job j’s completion period, C_j. If we index jobs in sequence order and let r_{j,t} denote the amount of the resource allocated to job j in period t, then

Λ(j) = Σ_{k=j+1}^{i} r_{k,C_j},

where job i is the last job scheduled in period C_j. If C_j = C_n, then we define

Λ(j) = Σ_{k=j+1}^{n} r_{k,C_j} + max{R_{C_j} − W_{C_j}, 0}.

Theorem 2.1: The maximum overtime reduction that can be achieved without delaying job j an additional period beyond its due date (with respect to the initial compact schedule), denoted by Δ(j), is given by the following formula:

Δ(j) = Σ_{t=C_j+1}^{C_n} u_t + Λ(j) + max{d_j − C_j, 0}·R_t,  j = 1, 2, …, n.

Proof: We first consider the first part of the function, Σ_{t=C_j+1}^{C_n} u_t. The compact and relax algorithm reduces the overtime resource usage in later periods first, so only after reducing all overtime resource usage in periods later than the completion date of job j will the completion time of job j be affected. Next we consider the second part of the function, Λ(j). Since job j finishes in period C_j, if, in or before period C_j, more than Λ(j) units of overtime resource are reduced, the completion date of job j will be one period later. Finally we consider the third part of the function. This part denotes the current number of days early that job j is scheduled to be delivered by the compact phase of the algorithm.
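Under the reconstruction above, Δ(j) can be computed directly from the compact schedule. A minimal Python sketch of our own (the names and the dict-based representation are assumptions; periods are integers 1..T, C[j] is job j's completion period, u[t] the overtime used in period t, alloc[k][t] the work on job k in period t, and R the constant regular-time capacity per period):

```python
def delta_bounds(order, alloc, u, C, d, R):
    """Sketch of the Theorem 2.1 bound: Delta(j) = overtime after C_j
    + Lambda(j) + (periods early) * R."""
    T = max(C.values())
    Delta = {}
    for pos, j in enumerate(order):
        later_ot = sum(u[t] for t in range(C[j] + 1, T + 1))   # overtime after C_j
        lam = sum(alloc[k][C[j]] for k in order[pos + 1:])     # Lambda(j)
        if C[j] == T:   # last busy period: count unused regular time too
            lam += max(R - sum(alloc[k][C[j]] for k in order), 0.0)
        Delta[j] = later_ot + lam + max(d[j] - C[j], 0) * R    # plus earliness slack
    return Delta
```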


Corollary: The maximum overtime reduction that can be achieved without delaying job j an additional k periods beyond its due date is given by the following formula:

Δ_k(j) = Δ(j) + (k − 1)R_t,  j = 1, 2, …, n.

If we let Δ_1 = min_{j=1,…,n} Δ(j), then by Theorem 2.1 we can decrease total overtime usage by Δ_1 without increasing the tardiness costs over those already in the compact schedule (and the total tardiness in the compact schedule is the minimum possible total tardiness for the fixed sequence). Let v = argmin_{j=1,…,n} Δ(j). When we decrease total overtime usage by Δ_1, the total cost of overtime decreases by c_O Δ_1, and the total tardiness cost does not increase. If we decrease total overtime usage by Δ_1 + ε (where ε > 0 and sufficiently small), then total tardiness cost increases by l_v (assuming v is unique). If c_O(Δ_1 + ε) > l_v, then the resulting schedule clearly improves total cost over that of the initial schedule.

Similarly, we can sort the values of Δ(j) in increasing order, i.e., Δ_1 < Δ_2 < … < Δ_n. Since the total cost of overtime is linear in overtime resource utilization, the total cost function we seek to minimize is the sum of a linear function and the stepwise decreasing total tardiness cost function shown in Figure 2-1. The resulting total cost curve is shown as the sawtooth curve in Figure 2-2. The lateness cost for a job determines the amount of cost decrease at each of the steps in the curve, while the overtime cost per unit time, c_O, determines the rate of increase after each step. Note that every bottom point is associated with an overtime resource reduction and total tardiness


penalty cost, and must be the maximum overtime resource reduction without delaying some job by an additional period, i.e., a value of the form Δ(j) + (k − 1)R_t, j ∈ J, where k is a positive integer.

Figure 2-2. Total cost curve as a function of overtime utilization under the compact and relax algorithm.

If Δ is larger than Δ_k(j), then job j will incur an additional k periods of tardiness compared to the compact schedule. Denote by Δ_0 the total overtime resource used in the compact schedule. The total cost of an overtime resource reduction of Δ is given by

z(Δ) = c_O(Δ_0 − Δ) + Σ_{j∈J} l_j ⌈[Δ − Δ(j)]^+ / R_t⌉.

Notice that the function z(Δ) is piecewise linear with breakpoints at Δ(j), for j = 1, …, n. Therefore, the candidates for the optimal value of Δ are simply 0, Δ(j) for j = 1, …, n, and Δ_0.
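Under this reconstruction, the relax phase reduces to evaluating z(Δ) at each candidate point. A minimal sketch of our own (Python; deltas and weights are assumed aligned per-job arrays, and R is treated as a constant per-period regular capacity):

```python
import math

def best_reduction(deltas, weights, Delta0, c_O, R):
    """Evaluate z(D) = c_O*(Delta0 - D) + sum_j l_j*ceil(max(D - Delta(j), 0)/R)
    at the candidate reductions {0, Delta(j), Delta0} and return the best D."""
    def z(D):
        tardy = sum(w * math.ceil(max(D - dj, 0.0) / R)
                    for dj, w in zip(deltas, weights))
        return c_O * (Delta0 - D) + tardy
    candidates = [0.0, Delta0] + [dj for dj in deltas if dj < Delta0]
    return min(candidates, key=z)
```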


Proposition 2.3: The compact and relax algorithm minimizes the sum of total overtime plus tardiness costs for a predefined sequence of jobs when all job release times equal zero, and the complexity of the algorithm is O(n²).

Proof: There are O(n) candidates for the optimal reduction Δ, and calculating each Δ(j) requires O(n) time; therefore the compact and relax algorithm requires O(n²) time.

2.2.3 Extension to Job-Specific Release Times

The approach outlined in the previous section assumed that all jobs are available for processing effectively at time zero. In many practical scheduling contexts, however, jobs typically have release dates that do not all coincide with the beginning of the scheduling horizon. This section considers the necessary extensions to the compact and relax algorithm under job-specific release dates.

To begin addressing this extension, we let r_j denote the release date for job j, where r_j ∈ {1, 2, …, T} for all j ∈ J. In the compact phase of the algorithm, the only difference is that we cannot necessarily schedule all jobs such that no slack exists between jobs in the initial compact schedule. For the relax phase of the algorithm several adjustments must be made. To facilitate the discussion of these adjustments, we first introduce the notion of an independent subset.

Definition 2.1 (Independent subset): For any schedule of jobs, an independent subset of jobs satisfies the following properties: (i) the release date of the first job in the subset is strictly greater than the completion date of the job’s immediate predecessor; (ii) the completion date of the last job in the subset is strictly less than the release date of its immediate successor; and (iii) no unscheduled regular time exists between the start of the


first job in the subset and the completion of the last job in the subset; see Figure 2-3 for an illustration.

Figure 2-3. Illustration of independent subsets.

Note that our definition of an independent subset does not preclude an independent subset consisting of a single job or an independent subset consisting of all jobs to be scheduled (which occurs, for example, when all release times equal zero). Suppose that after the compact phase of the algorithm, the initial compact schedule consists of m independent subsets, denoted by S_1, S_2, …, S_m, where m is a positive integer. We index independent subsets in increasing order of the start of the first job in the subset, and we say that S_l > S_k for any subsets S_k and S_l if the start of the first job in subset S_l is later than the start of the first job in subset S_k. If m = 1, then we simply proceed with the relax phase of the algorithm as in the previous section. If m > 1, then we apply the relax phase of the algorithm individually to independent subsets, beginning with subset S_m, then subset S_{m−1}, and so on. In applying the relax phase of the algorithm to an independent subset, with the exception of subset S_m, the subset may become “blocked” when the completion time of the last job in the subset reaches the period immediately before the starting period of the next independent subset (and exhausts all regular time in the


period). If no subsets become blocked during the relax phase of the algorithm, then no further adjustments are necessary. If independent subset S_i does become blocked by subset S_{i+1}, then the two subsets S_i and S_{i+1} merge into a single new subset. We then restart the relax procedure on the newly formed independent subset.

Note that when subsets merge, we need to revise the values of Δ(j) for every job j in the merged subsets except for the earliest (lowest indexed) subset in the merge. We denote the regular time resource between subsets k and k + 1 in the initial compact schedule as S_{k,k+1}. Then clearly we can relax the overtime in subset k by an amount equal to S_{k,k+1} before any of the jobs in subset k + 1 are affected, i.e., are shifted later in time. Thus the resource S_{k,k+1} will need to be added to the values of Δ(j) for each job j in subset k + 1 to reflect this additional overtime reduction that can occur without affecting these jobs.

We will refer to the compact and relax algorithm with these adjustments as the generalized compact and relax algorithm. Figure 2-4, which is based on the example shown in Figure 2-3, provides an example of one independent subset blocking another in the relax phase of the algorithm, producing a new independent subset.

Figure 2-4. Illustration of blocking and merging of independent subsets into a new independent subset in relax phase of algorithm.
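To illustrate the bookkeeping, the fragment below partitions a compact schedule into independent subsets. It is a sketch under our own representation (Python; start[j], finish[j], and release[j] are assumed period indices, and idle_regular is an assumed helper reporting unused regular time between two periods), not the dissertation's code:

```python
def independent_subsets(order, start, finish, release, idle_regular):
    """Partition jobs (listed in sequence 'order') into independent subsets.
    A new subset begins when a job's release date exceeds its predecessor's
    completion period, or when unscheduled regular time separates them."""
    subsets = [[order[0]]]
    for prev, j in zip(order, order[1:]):
        gap = release[j] > finish[prev] or idle_regular(finish[prev], start[j])
        if gap:
            subsets.append([j])      # job j starts a new independent subset
        else:
            subsets[-1].append(j)
    return subsets
```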


Proposition 2.3 showed the optimality of the compact and relax algorithm for a fixed sequence when all job release times are zero. We next show that the generalized compact and relax algorithm, with the adjustments noted above, provides the minimum total overtime plus tardiness cost with general job release times.

Proposition 2.4: The generalized compact and relax algorithm minimizes total overtime plus tardiness costs for a predefined sequence of jobs under general release times, and the complexity of the algorithm is O(n³).

Proof: If only one independent subset results after the compact phase, then by Proposition 2.3 the resulting schedule is optimal. If no blocking occurs during the relax phase, then each subset can be viewed as a self-contained schedule of jobs with a predefined sequence. By treating the start time of the first job in the independent subset as if it were zero, we can effectively view the independent subset as an independent set of jobs, each with a release time of zero. Since each independent subset is scheduled at minimum total cost, the entire schedule is an optimal schedule (under the predefined sequence), since no interaction exists between subsets.

We next suppose that blocking occurs at some point in the algorithm. Regardless of whether release dates all equal zero or can take non-zero values, the compact phase provides an initial schedule with the minimum possible tardiness cost (for the predefined sequence). Proposition 2.1 implies that the maximum benefit from a decrease of overtime usage occurs if the overtime is decreased as late as possible in the schedule, and the same is true for any independent subset. When independent subset i becomes blocked by subset i + 1, then, by construction, the newly merged subset clearly cannot achieve lower costs by


increasing the overtime for any of the jobs in subsets i or i + 1. By evaluating the effects of further decreases in overtime in the newly merged subset from right to left in time, Proposition 2.1 implies that the generalized compact and relax algorithm finds the minimum total cost for the newly merged subset. We apply this argument recursively each time a newly merged subset is formed, and the final result is a set of independently optimized subsets. As in the case when no blocking occurs, we can view each final, independent subset as a separate set of jobs with effectively zero release times, and the result is an optimal solution for the entire set of jobs. Since the complexity of the compact and relax algorithm for a single-subset problem is O(n²), and there are at most O(n) subsets, the total complexity of the algorithm is O(n³).

2.3 Heuristic Methods for Job Sequencing for Min-WTOT

The previous section focused on providing an optimal solution for a fixed sequence of jobs. Since our goal is to find both the best sequence of jobs as well as the best overtime resource usage for those jobs, we next consider the problem of finding a sequence of jobs that produces the minimum total overtime plus tardiness costs. As discussed previously, finding the best job sequence is an NP-hard optimization problem, and so we focus on heuristic methods for this problem. Note that when precedence relationships exist among jobs, this decreases the total number of potential sequences we must consider and generally reduces the problem’s overall complexity.

2.3.1 Finding a Good Initial Sequence—Priority Rules

Past work in job sequencing has demonstrated the value of using good “priority rules” for determining good sequences (e.g., Morton and Pentico 1993). A priority rule uses some quantitative measure to determine a priority ordering among all outstanding


jobs. For certain simple single-machine problems, such priority rules can provide optimal sequences (see Nahmias 2001, for example, for a discussion of basic single-machine operations scheduling and the use of priority rules).

We begin by discussing some basic priority rules provided by Morton and Pentico (1993) for minimizing weighted tardiness. We plan to schedule jobs in decreasing order of priority, i.e., highest priority first. Let t_i denote the time at which we schedule the i-th job (where t_1 = 0), and let π_j(t_i) denote the priority of job j at time t_i. The priority rules all begin by considering each job’s ratio of lateness cost to processing time, l_j/p_j. Intuitively, if a job has a low lateness cost and a long processing time, this job should be scheduled later in the sequence to avoid delaying other jobs at its expense. Similarly, if a job can be done quickly but has a high tardiness cost, it should be scheduled early in the sequence. This measure is then augmented by considering how much slack the job has at time t_i, i.e., if a job has a long time until its due date, we can afford to delay scheduling the job. Note that at time t_i, the slack for job j equals [d_j − p_j − t_i]^+. Define p_av(t) as the average processing time for all unscheduled jobs at time t. If J'(t) is the set of all remaining unscheduled jobs at time t, then p_av(t) = Σ_{j∈J'(t)} p_j / |J'(t)|. At time t_i, we set the priority of job j according to the equation

π_j(t_i) = (l_j/p_j){1 − [d_j − p_j − t_i]^+/p_av(t_i)}^+. (2.1)

The above priority rule (Morton and Pentico 1993) allows for decreasing the priority of a job if it has positive slack at time t_i. If the slack exceeds p_av(t_i), then the job receives zero priority. An alternative to the priority rule in Equation (2.1) is to augment the l_j/p_j ratio through an exponential function, as shown below:

π_j(t_i) = (l_j/p_j) exp(−[d_j − p_j − t_i]^+/(k·p_av(t_i))), (2.2)


where k is a scalar such that k ∈ (1.0, 3.0). This rule allows us to decrease the priority of jobs with higher slack as well. When not all jobs have a release time of zero, we can incorporate information about the release date, r_j, of job j into the priority rule as well (since those jobs with a later release date should receive lower priority). Morton and Pentico (1993) suggest the following priority rule in the presence of non-zero release dates:

π_j'(t_i) = π_j(t_i)(1 – B[r_j – t_i]^+/p_av(t_i)), (2.3)

where π_j(t_i) is obtained from either of Equations (2.1) and (2.2) above, B is a scalar set equal to 1.3 + ρ, and ρ is the average machine utilization. We have adapted a combination of Equations (2.2) and (2.3) for addressing the Min-WTOT problem. Unlike the priority rules cited thus far, we will allow both for diminishing the l_j/p_j priority term and for amplifying this term to increase the priority of jobs with negative slack (by removing the '+' superscript in the exponential term of Equation (2.2)). Before discussing our approach for setting priorities, we first note that p_j, the number of periods of processing time for a job, is a function of the amount of overtime the job uses per period, which is effectively a decision variable. In order to use the priority rule, we will need to state a processing time for each job (measured in periods) before determining a solution. Recall that we expressed job processing times in units of regular time periods in Section 2.1 using p_j' = p_j/R_t, and in units of regular plus overtime using p_j'' = p_j/(R_t + O_t), where p_j'' < p_j'. We might consider using p_j' and p_j'' in our priority rule calculation. We express the number of periods of processing time as p̄_j, where

p̄_j = α p_j'' + (1 – α) p_j', with 0 ≤ α ≤ 1.

Tuning the value of α provides a continuum of processing time values between p_j'' and p_j' that we can use in our priority rule


calculation and which will affect the heuristic solution we obtain. We set α heuristically in our experiments based on the following intuition. If lateness costs are very high relative to overtime costs, we are more likely to utilize overtime in order to complete jobs on time. We would therefore like α to be close to or equal to 1 to reflect this increased use of overtime. If, on the other hand, overtime costs are very high relative to lateness costs, we are less likely to utilize overtime and would, therefore, prefer α closer to zero. We let l̄ denote the average tardiness cost per period tardy, i.e., l̄ = (1/n) Σ_{j=1}^n l_j, and consider the ratio of incremental overtime cost per unit time to average cost per period tardy, (c_O – c_R)/l̄, when setting α, according to the following heuristic rule:

α = 1 if (c_O – c_R)/l̄ ≤ 0.4,
α = 0.7 if 0.4 < (c_O – c_R)/l̄ ≤ 0.6,
α = 0.5 if 0.6 < (c_O – c_R)/l̄ ≤ 0.8,
α = 0.3 if 0.8 < (c_O – c_R)/l̄ ≤ 1,
α = 0 if (c_O – c_R)/l̄ > 1.

The basic priority rule we use is a variant of the priority rule given in Equation (2.3) with both B and k set equal to one, and can be expressed as:

Priority Rule 1: π_j(t_i) = (l_j/p̄_j) exp{–(d_j – p̄_j – t_i)/p_av(t_i)}(1 – [r_j – t_i]^+/p_av(t_i)). (2.4)

The definition of p_av(t_i) in Equation (2.4) uses p_av(t_i) = Σ_{j∈J'(t_i)} p̄_j / |J'(t_i)|. To implement this priority rule, we initially set i = 1 (i is an iteration counter) and, beginning at time zero (t_1 = 0), schedule the job with the highest calculated priority first. After determining job i in the sequence, which we denote as job j[i], we let


t_{i+1} = max{t_i + p̄_{j[i]}; min_{j∈J'(t_i)\{j[i]}} r_j}, set i = i + 1, let J'(t_i) = J'(t_{i–1})\{j[i–1]}, and repeat this procedure until all jobs have been scheduled.

The second priority rule essentially omits the final (1 – [r_j – t_i]^+/p_av(t_i)) term from Equation (2.4) and sets priorities using:

Priority Rule 2: π_j(t_i) = (l_j/p̄_j) exp{–(d_j – p̄_j – t_i)/p_av(t_i)} if r_j ≤ t_i, and π_j(t_i) = –M if r_j > t_i, (2.5)

where M is a large number. This rule is roughly equivalent to the rule given by Morton and Pentico (1993) in Equation (2.2), except that we allow for amplifying the l_j/p̄_j priority term and we give very low priority to jobs that have not yet been released. We augment our priority rules by explicitly comparing the tardiness that will result if we interchange the jobs with the two highest priorities. That is, assume that at time t_i, jobs j1 and j2 have the two highest priorities under Equation (2.5). If we schedule job j1 immediately before job j2, the resulting tardiness would equal

T_1 = [t_i + p̄_{j1} – d_{j1}]^+ + [t_i + p̄_{j1} + p̄_{j2} – d_{j2}]^+, (2.6)

whereas, if we schedule job j2 immediately before job j1, the resulting tardiness would equal

T_2 = [t_i + p̄_{j2} – d_{j2}]^+ + [t_i + p̄_{j1} + p̄_{j2} – d_{j1}]^+. (2.7)

If T_1 ≤ T_2, the next job scheduled in the sequence is job j1; otherwise we next schedule job j2. As the results in Section 2.5 indicate, we have found that Rule 2 provides the best initial solution on average for the problem instances we tested.
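To make the construction phase concrete, the following sketch implements Priority Rule 2 together with the pairwise tardiness check of Equations (2.6) and (2.7). It is a minimal illustration under stated assumptions, not the code used in our experiments: the class name Job, the function names, and the fixed value BIG_M are our own choices, and the field p_bar stands for the α-blended processing-time estimate p̄_j described above.

```python
import math
from dataclasses import dataclass

BIG_M = 1e9  # plays the role of the "large number" M for unreleased jobs

@dataclass
class Job:
    l: float      # tardiness cost per period tardy
    r: float      # release time (in periods)
    d: float      # due date (in periods)
    p_bar: float  # estimated processing time in periods (alpha-blended)

def priority(job, t, p_av):
    """Priority Rule 2: amplified l/p ratio; -M if the job is unreleased."""
    if job.r > t:
        return -BIG_M
    return (job.l / job.p_bar) * math.exp(-(job.d - job.p_bar - t) / p_av)

def build_sequence(jobs):
    """Greedy construction with the T1/T2 swap test on the top two jobs."""
    t, remaining, seq = 0.0, list(range(len(jobs))), []
    while remaining:
        p_av = sum(jobs[j].p_bar for j in remaining) / len(remaining)
        ranked = sorted(remaining, key=lambda j: priority(jobs[j], t, p_av),
                        reverse=True)
        nxt = ranked[0]
        if len(ranked) > 1:  # compare tardiness of the two highest-priority jobs
            j1, j2 = ranked[0], ranked[1]
            both = jobs[j1].p_bar + jobs[j2].p_bar
            T1 = max(0.0, t + jobs[j1].p_bar - jobs[j1].d) \
               + max(0.0, t + both - jobs[j2].d)
            T2 = max(0.0, t + jobs[j2].p_bar - jobs[j2].d) \
               + max(0.0, t + both - jobs[j1].d)
            nxt = j1 if T1 <= T2 else j2
        seq.append(nxt)
        remaining.remove(nxt)
        if remaining:  # advance time: job completion or next release
            t = max(t + jobs[nxt].p_bar, min(jobs[j].r for j in remaining))
        else:
            t += jobs[nxt].p_bar
    return seq
```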


2.3.2 Improving the Initial Solution—Local Search Methods

Local search methods are based on taking a starting solution and iteratively locating a better solution within a reasonably sized search "neighborhood" of the initial solution. Such local search methods, and the so-called meta-heuristic approaches that incorporate them, have been shown to produce near-optimal solutions on a variety of difficult combinatorial optimization problems (see, for example, Ribeiro and Hansen 2001). Since the compact and relax algorithm can find an optimal schedule for a given sequence of jobs, the primary goal of our approach is to improve upon the initial solution obtained through our priority rules in an attempt to find an optimal or near-optimal sequence. We tried several meta-heuristic approaches for the Min-WTOT problem, with varying degrees of success, including GRASP methods (see Feo, Resende, and Smith 1994), Path Re-Linking (see Glover, Kelly, and Laguna 1994), and local search with variable local neighborhood definitions, and found that a basic local search method with variable local neighborhood redefinition worked the best. We next discuss the details of our local search approach. We begin by constructing an initial heuristic solution using both of our priority rules and keeping the better of the two solutions. Within any local search method, we can define several neighborhood structures, where a neighborhood structure is a subset of the feasible region in close proximity to our current solution, and then seek a locally optimal solution within this neighborhood using exhaustive search within the neighborhood. For the Min-WTOT problem we consider three local neighborhood structures with respect to any solution: the set of all two-exchange solutions, and the sets of all right and left shift solutions, defined as follows:


1. Two-Exchange Solutions: A two-exchange solution selects two jobs in the current sequence and exchanges their positions.

2. Left Shift: A left shift solution selects two jobs in the current sequence and moves the job scheduled later to the position immediately preceding the job scheduled earlier.

3. Right Shift: A right shift solution selects two jobs in the current sequence and moves the job scheduled earlier to the position immediately succeeding the job scheduled later.

Note that in the presence of precedence constraints, when two jobs are selected, we must first ensure that the proposed two-exchange, left shift, or right shift operation does not violate any precedence constraints. If it does, then we cannot complete the operation and must select a different pair of jobs for local exchange consideration. Suppose we select two pre-scheduled jobs i and j (where i is currently scheduled prior to j) for performing one of the three exchange or shift operations. Let B(i) and A(j) denote the set of all jobs currently scheduled before job i and after job j, respectively. Similarly, let P(i) and S(i) denote, respectively, the set of all (either direct or indirect) predecessors and successors of any job i. We require the following in order for each of the local search operations to be valid for the pair of jobs i and j (a code sketch of these checks appears after this list):

1. Two-Exchange Solutions: For a two-exchange involving jobs i and j (with i scheduled before j) we require S(i) ⊆ A(j), P(j) ⊆ B(i), and j ∉ S(i).

2. Left Shift: For a left-shift involving jobs i and j, we require P(j) ⊆ B(i) and j ∉ S(i).

3. Right Shift: For a right-shift involving jobs i and j, we require S(i) ⊆ A(j) and i ∉ P(j).
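The following sketch shows one way to encode these validity tests. It assumes preds[j] and succs[j] are precomputed Python sets holding all (direct or indirect) predecessors and successors of job j; these names, and the positional arguments, are ours rather than part of the formulation above.

```python
def valid_two_exchange(seq, i_pos, j_pos, preds, succs):
    """Swap jobs at positions i_pos < j_pos: needs S(i) subset of A(j),
    P(j) subset of B(i), and j not in S(i)."""
    i, j = seq[i_pos], seq[j_pos]
    before_i = set(seq[:i_pos])
    after_j = set(seq[j_pos + 1:])
    return succs[i] <= after_j and preds[j] <= before_i and j not in succs[i]

def valid_left_shift(seq, i_pos, j_pos, preds, succs):
    """Move the job at j_pos to just before the job at i_pos: needs
    P(j) subset of B(i) and j not in S(i)."""
    i, j = seq[i_pos], seq[j_pos]
    return preds[j] <= set(seq[:i_pos]) and j not in succs[i]

def valid_right_shift(seq, i_pos, j_pos, preds, succs):
    """Move the job at i_pos to just after the job at j_pos: needs
    S(i) subset of A(j) and i not in P(j)."""
    i, j = seq[i_pos], seq[j_pos]
    return succs[i] <= set(seq[j_pos + 1:]) and i not in preds[j]
```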


We use two basic strategies for determining when to redefine a new local search neighborhood. The first is known as the Best Improving Move (BIM), which means that after finding the local minimum within a predefined neighborhood, this local minimum then serves as the starting point for a newly defined neighborhood. The second is called the First Improving Move (FIM), which implies that after finding the first point that improves in a neighborhood, this point immediately serves as the starting point for a newly defined neighborhood.

Within our neighborhood searches we use a restriction mechanism to help improve the efficiency of the search. Because jobs are initially sequenced in the construction phase as a function of their weights (as determined by the greedy function), we are less likely to see improvement if we exchange jobs that are far away from each other in the sequence. To reduce the size of the two-exchange and left/right shift neighborhoods considered in the local search phase, we only consider exchanges such that the distance between the positions of the two jobs is no more than δ, where δ is a predefined parameter (we used δ = 3 for n < 20 jobs, and δ = 5 for n ≥ 20 jobs). Since this approach reduces the number of two-exchanges and left/right shifts we must consider, we will refer to a neighborhood defined by all two-exchanges in which jobs are no more than δ positions apart in a sequence as a δ-restricted two-exchange neighborhood; similarly, we define the set of left/right shifts in which we shift a job at most δ positions as a δ-restricted left/right shift neighborhood.

Let K denote the maximum number of local neighborhood searches performed by the algorithm. Define the cost reduction, Δ_k, at iteration k using Δ_k := c_k^0 – c_k^f, where c_k^0 equals the initial solution value at iteration k, and c_k^f is the final solution value after local search during iteration k, for k = 1, …, K. Let ε_1 and ε_2 equal two predefined convergence parameters (with ε_1 < ε_2), which we will use to determine when to change our local search strategy. We next summarize the critical features of our local search algorithm, and then provide a more formal description of the procedure.


During our initial iterations, the local search considers δ-restricted two-exchange neighborhoods and finds the BIM within each such neighborhood. After finding the BIM in a δ-restricted two-exchange neighborhood, we then intensify the search around the jobs that produced the BIM by attempting further left and right shifts for these jobs. For example, if jobs i* and j* produced the BIM (by exchanging the positions of jobs i* and j*, where i* preceded j* in the initial sequence and j* precedes i* after the BIM), the intensified search attempts to further left shift j* and right shift i* to seek further improvements. After each iteration k, we calculate the cost reduction Δ_k. If ε_1 ≤ Δ_k ≤ ε_2, we use a δ-restricted left-shift neighborhood followed by a δ-restricted right-shift neighborhood structure with FIM moves, in an attempt to find new improving directions. If Δ_k > ε_2, we return to our δ-restricted two-exchange neighborhood and BIM move strategy. Otherwise, if Δ_k < ε_1, we use a δ-restricted two-exchange neighborhood and FIM move strategy. We found that by using this variable local neighborhood redefinition strategy, our solutions improve quickly and we can adapt the search to explore new space when necessary, allowing us to find near-optimal solutions in reasonably fast computing time. Table 2-1 presents a more formal characterization of our local search strategy.
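The strategy just described amounts to the following control loop, a simplified sketch that complements Table 2-1 below. The move routines two_exchange_bim, intensified_search, two_exchange_fim, right_shift_fim, and left_shift_fim are placeholders for the neighborhood searches defined above (not routines named in this chapter), cost stands for an evaluation of a sequence with the compact and relax algorithm, and the sketch compresses Steps 4 and 5 of Table 2-1 into a single branch.

```python
def local_search(seq, cost, moves, K, eps1, eps2):
    """Variable local neighborhood redefinition (condensed from Table 2-1)."""
    mode = "TWO_EXCHANGE_BIM"                # start with Steps 1-2
    for k in range(1, K + 1):
        c0 = cost(seq)
        if mode == "TWO_EXCHANGE_BIM":
            seq, i_star, j_star = moves.two_exchange_bim(seq)    # Step 1
            seq = moves.intensified_search(seq, i_star, j_star)  # Step 2
        elif mode == "TWO_EXCHANGE_FIM":
            seq = moves.two_exchange_fim(seq)                    # Step 3
        else:  # "SHIFT_FIM"
            seq = moves.right_shift_fim(seq)                     # Step 4
            seq = moves.left_shift_fim(seq)                      # Step 5
        delta = c0 - cost(seq)                                   # Step 6
        if delta > eps2:
            mode = "TWO_EXCHANGE_BIM"
        elif delta < eps1:
            mode = "TWO_EXCHANGE_FIM"
        else:
            mode = "SHIFT_FIM"
    return seq
```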


2.4 Gauging Solution Quality—Lower Bounds for Min-WTOT

Determining the performance capabilities of our heuristic procedure without knowing the optimal solution value for the problem (which is all but impossible for reasonably large problem instances, since the problem is NP-hard) requires an ability to determine a lower bound on the solution value. Our experience with the linear programming relaxation objective function value (for the formulation provided in the Appendix) revealed that this bound is extremely loose for most problem instances. Below we present a set of valid inequalities that strengthen the linear programming relaxation; as we discuss in the next section, these provide a better indication of the performance of our heuristic.

Table 2-1. Local search algorithm description
Step 0 (Initialization): Set the iteration counter, k, equal to 1. Generate an initial solution using Priority Rule 1.
Step 1 (Two-Exchange with BIM): Compute c_k^0. Consider all δ-restricted two-exchange solutions, find the best improving move (BIM), and let i* and j* denote the jobs that produced the BIM.
Step 2 (Intensified Local Search): Consider all δ-restricted left shifts for job j* and all δ-restricted right shifts for job i* and make the BIM among such exchanges. Go to Step 6.
Step 3 (Two-Exchange with FIM): Compute c_k^0. Consider all δ-restricted two-exchange solutions and find the first improving move (FIM). Go to Step 6.
Step 4 (Right Shift with FIM): Compute c_k^0. Consider all δ-restricted right shift solutions and find the first improving move (FIM). Compute c_k^f and Δ_k. If ε_1 ≤ Δ_k ≤ ε_2 and k < K, set k = k + 1 and continue with Step 5. Otherwise, go to Step 6.
Step 5 (Left Shift with FIM): Compute c_k^0. Consider all δ-restricted left shift solutions and find the first improving move (FIM). Continue with Step 6.
Step 6 (Convergence Check): Compute c_k^f and Δ_k. If k = K, stop with solution value c_K^f. Otherwise set k = k + 1 and: if Δ_k > ε_2, go to Step 1; if Δ_k < ε_1, go to Step 3; if ε_1 ≤ Δ_k ≤ ε_2, go to Step 4.

The first set of valid inequalities is based on inequalities discussed in Wolsey (1985) and Queyranne (1993) for a related scheduling problem. Letting S denote any subset of the set of jobs, J, and assuming all jobs are available at time zero, these inequalities are written as

Σ_{j∈S} p_j C_j ≥ (1/2)[(Σ_{j∈S} p_j)^2 + Σ_{j∈S} p_j^2], for each S ⊆ J. (2.8)


Wolsey (1985) and Queyranne (1993) show the validity of (2.8) for general scheduling problems. It is straightforward to verify the validity of (2.8) when |S| = 2. For example, for a two-job case, if job 1 is before job 2, then C_1 ≥ p_1 and C_2 ≥ p_1 + p_2, which implies

p_1 C_1 + p_2 C_2 ≥ p_1^2 + p_2^2 + p_1 p_2 = (1/2)[(p_1 + p_2)^2 + p_1^2 + p_2^2]

(the same inequality results if job 2 precedes job 1 and we follow the same argument). Note that if jobs have nonzero release times, the quantity (min_{j∈S} r_j)(Σ_{j∈S} p_j) can be added to the right-hand side of (2.8), further strengthening this inequality. Since we measure completion time in periods and processing time in base time units, we must divide the right-hand side of (2.8) by R_t + O_t for the Min-WTOT problem. We therefore have the following valid inequality for the Min-WTOT problem:

Σ_{j∈S} p_j C_j ≥ (1/2)[(Σ_{j∈S} p_j)^2 + Σ_{j∈S} p_j^2]/(R_t + O_t) + (min_{j∈S} r_j)(Σ_{j∈S} p_j), for each S ⊆ J. (2.9)

The second set of valid inequalities we use was developed specifically for the Min-WTOT problem and therefore, to our knowledge, has not been applied previously. These inequalities link the completion time and start time (C_j and s_j) variables to the work-period allocation variables (w_jt) for each job, and serve to reduce the degree of preemption allowed by the linear programming relaxation while also forcing higher values of completion time than allowed in the linear programming relaxation. These inequalities are motivated by the following observations. If, in an optimal solution, job j starts at the end of period s_j (and utilizes an arbitrarily small amount of time in period s_j), then an optimal solution exists that does not complete job j later than ⌈p_j/R_t⌉ periods after period s_j, i.e., in period s_j + ⌈p_j/R_t⌉. Given that job j starts at the end of period s_j, a solution that uses R_t units of processing time in each period between s_j + 1 and s_j +


⌈p_j/R_t⌉ – 1, along with the remaining processing time of p_j – R_t(⌈p_j/R_t⌉ – 1) in period s_j + ⌈p_j/R_t⌉, will satisfy the equation Σ_{t=s_j+1}^{s_j+⌈p_j/R_t⌉} w_jt = p_j.

We next consider the period-weighted activity of job j, i.e., the quantity Σ_{t=1}^T t·w_jt. Using the expression above for Σ_{t=1}^T w_jt, when job j begins in period s_j in an optimal solution, then an optimal solution exists such that Σ_{t=1}^T t·w_jt will be no greater than R_t Σ_{t=s_j+1}^{s_j+⌈p_j/R_t⌉} t – (R_t⌈p_j/R_t⌉ – p_j)(s_j + ⌈p_j/R_t⌉).

Similarly, if job j finishes at the beginning of period C_j (and utilizes an arbitrarily small amount of time in period C_j), then an optimal solution exists such that job j does not start earlier than ⌈p_j/R_t⌉ periods prior to the beginning of period C_j, i.e., in period C_j – ⌈p_j/R_t⌉. Such a solution will utilize R_t units of processing time in periods C_j – ⌈p_j/R_t⌉ + 1 through C_j – 1, and will utilize the remaining time units in period C_j – ⌈p_j/R_t⌉. Using the same logic, if an optimal solution completes job j in period C_j, then an optimal solution exists such that Σ_{t=1}^T t·w_jt is not less than R_t Σ_{t=C_j–⌈p_j/R_t⌉}^{C_j–1} t – (R_t⌈p_j/R_t⌉ – p_j)(C_j – ⌈p_j/R_t⌉). Finally, if job j finishes at the end of period C_j, then in any feasible solution it cannot start later than ⌈p_j/(R_t + O_t)⌉ periods prior to period C_j + 1, i.e., in period C_j – ⌈p_j/(R_t + O_t)⌉. Such a solution will utilize R_t + O_t time units in periods C_j – ⌈p_j/(R_t + O_t)⌉ + 2 through C_j, with the remaining p_j – (R_t + O_t)(⌈p_j/(R_t + O_t)⌉ – 1) units of processing time occurring in period C_j – ⌈p_j/(R_t + O_t)⌉ + 1. The valid inequalities we add to the formulation are thus


(writing q_j = ⌈p_j/R_t⌉ and q_j' = ⌈p_j/(R_t + O_t)⌉ for brevity):

Σ_{t=1}^T t·w_jt ≤ R_t[q_j s_j + q_j(q_j + 1)/2] – (R_t q_j – p_j)(s_j + q_j), (2.10)

Σ_{t=1}^T t·w_jt ≥ R_t[q_j C_j – q_j(q_j + 1)/2] – (R_t q_j – p_j)(C_j – q_j), (2.11)

Σ_{t=1}^T t·w_jt ≥ (R_t + O_t)[q_j' C_j – q_j'(q_j' – 1)/2] – ((R_t + O_t) q_j' – p_j)(C_j – q_j' + 1), for all j ∈ J, (2.12)

which we have rewritten to remove the variables s_j and C_j from the summation limits. Inequalities (2.10) and (2.11) together serve to reduce the degree to which a job may be spread among multiple periods in the linear programming relaxation, thus reducing the degree of preemption allowed by the relaxation. Inequalities (2.9) and (2.12) force C_j values to be higher than typically allowed in the linear programming relaxation, thus leading to an improved lower bound in many instances with high tardiness costs. We note, however, that these inequalities lose their effectiveness in strengthening the linear programming relaxation when using overtime is not an attractive option, i.e., when tardiness costs are relatively low. For such instances it is not unlikely that inequalities (2.9) and (2.12) will be loose in the optimal linear programming relaxation solution. Our results in Section 2.5 verify this intuition.

2.5 Computational Tests

2.5.1 Problem Instance Generation

In order to gauge the ability of our priority rules and local search strategy to provide good solutions for the Min-WTOT problem, we ran a series of computational tests on a variety of problem instances. The project scheduling problem library (ftp://ftp.bwl.uni-kiel.de/pub/operations-research/progen) provides good standardized test data for resource-constrained scheduling problems, and we have based our computational


tests on problem instances found in this library. In the project scheduling problem library, each problem instance (project) is composed of several jobs, each with unique resource requirements, and each project has a due date and an associated penalty cost for tardy completion. Different problem instances also involve varying degrees of complexity of the project network structure, i.e., the precedence relationships among jobs. For the Min-WTOT problem, each job requires a corresponding release date, due date, and per-period tardiness cost, although this data is absent in the project scheduling data library. Additional data needed for our problem instances includes both regular time and overtime cost per unit time. This required us to augment the library's problem instances to account for these necessary problem parameters. To facilitate the description of this problem parameter augmentation, we introduce a new set of parameters, a_i, i = 1, …, 6, where each a_i represents a positive parameter whose value partially determines the characteristics of the problem instances we generate. Let p_j^0 and l^0 respectively denote the processing time for job j (for j = 1, …, n) and the project tardiness cost parameter obtained from a problem instance from the project scheduling library. We set the amount of regular time available per period equal to the average job processing time, i.e., R_t = (1/n) Σ_{j=1}^n p_j^0, and varied the values of the regular time cost, c_R, and total available overtime, O_t.

Table 2-2. Equations used for problem parameter generation.
Release date, r_j:                 r_j = UNIF(1, a_1)
Due date, d_j:                     d_j = r_j + ⌈p_j^0/R_t⌉ + UNIF(a_2, a_3)
Tardiness penalty cost, l_j:       l_j = l^0 – UNIF(a_4, a_5)
Overtime cost per unit time, c_O:  c_O = c_R + a_6
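A sketch of the generator implied by Table 2-2 follows; the function name, dictionary keys, and argument layout are our own illustration rather than the code used in our experiments, and math.ceil implements the ⌈p_j^0/R_t⌉ term.

```python
import math
import random

def generate_instance(p0, l0, a, c_R):
    """p0: base processing times p_j^0; l0: project tardiness cost l^0;
    a: the six positive parameters (a_1, ..., a_6); c_R: regular-time cost."""
    n = len(p0)
    R_t = sum(p0) / n                       # regular time available per period
    jobs = []
    for p_j in p0:
        r_j = random.uniform(1, a[0])                                 # release date
        d_j = r_j + math.ceil(p_j / R_t) + random.uniform(a[1], a[2])  # due date
        l_j = l0 - random.uniform(a[3], a[4])                          # tardiness cost
        jobs.append({"p": p_j, "r": r_j, "d": d_j, "l": l_j})
    c_O = c_R + a[5]                        # overtime cost per unit time
    return jobs, R_t, c_O
```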


We used the rules in Table 2-2 above for generating the additional required problem data (UNIF(a, b) denotes a random variable generated from a Uniform distribution on (a, b)). We ran three different kinds of experiments. The first set of experiments was intended to gauge the performance of our priority rules (without local search) in comparison to the optimal solution value. In order to do this, we limited our analysis to 7-job problem instances, to ensure that we could quickly identify an optimal solution. The second set of experiments demonstrates the value of our local search approach in further improving the initial solution provided by the priority rules. For this test set we use the same 7-job problems used in the first experiment. The third and final set of experiments shows the performance of the priority rules and local search on very large problem instances (90 jobs). Sections 2.5.2 and 2.5.3 discuss the results of each of these experiments.

2.5.2 Priority Rule Performance vs. Optimal Solution Value

This section discusses results of applying our priority rules to a set of small 7-job problems. Our goal here is to benchmark the performance of our priority rules against the optimal solution value for a broad set of problems. A 7-job problem instance (without precedence constraints) contains 7! = 5,040 potential sequences. For problems of this size, it turns out that complete enumeration of all possible sequences works much more quickly than solving the problems using the CPLEX MIP solver. For each of the 5,040 sequences we can apply the compact and relax algorithm to obtain the optimal solution value for that sequence. We then compare the solution value obtained by the better of our priority rules (rules 1 and 2) to the optimal solution value.
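For reference, the benchmarking loop is simple to state. In this sketch, compact_and_relax is a placeholder for the fixed-sequence algorithm of Section 2.2, assumed to return the minimum total cost of a given sequence; the function names are ours.

```python
from itertools import permutations

def exhaustive_optimum(n_jobs, compact_and_relax):
    """Enumerate all n! sequences (5,040 for n = 7) and price each one."""
    return min(compact_and_relax(seq)
               for seq in permutations(range(n_jobs)))

def percent_error(heuristic_cost, optimal_cost):
    """% error = (heuristic - optimal) / optimal * 100, as in Table 2-4."""
    return 100.0 * (heuristic_cost - optimal_cost) / optimal_cost
```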


Since we expect that the performance of our priority rules will vary with different problem parameters, we constructed eight different classes of problem instances (each class corresponds to a basic problem instance from the project scheduling library). For each of these problem instances we set c_R = 2, R_t = 3, and O_t = 2. The remaining parameters used for these problem classes are provided in Table 2-3. For each problem class we generated 30 problem instances using the parameter generation equations in Table 2-2, for a total of 240 problem instances.

Table 2-3. Parameter settings for test set 1.
Problem Class   a_1   a_2   a_3   a_4   a_5   a_6
1               10    0     2     6     7     4
2               10    3     5     6     7     4
3                4    0     6     0     8     4
4                4    3     5     6     7     4
5                4    0     2     0     3     8
6                4    0     2     0     3     2
7                4    0     2     6     8     8
8                4    0     2     6     8     2

Table 2-4 presents the results of this set of experiments. Note that in the vast majority of test cases one (or both) of our priority rules was able to quickly find an optimal solution. On average, the performance of our heuristic priority rules is quite good for this set of small problem instances. The worst performance occurs in classes 5 and 6, where we have very little slack for jobs combined with high tardiness costs (class 5 is further compounded by high overtime costs, making this the worst overall case). These results imply that the more restricted a problem instance becomes (in terms of slack between release and due dates), the worse our priority rules perform. We next compare the performance of priority rules 1 and 2. For our 7-job test instances, we found that the initial solution produced by priority rule 2 was better than that produced by priority rule 1 in 51.7% of the cases, and as good as priority rule 1 in


another 32.5% of the cases (in only 15.8% of the problems was priority rule 1 strictly better). We conclude that for the problems we tested, essentially ignoring jobs that have not yet been released (as is done in priority rule 2) generally leads to better results. Since both priority rules can be implemented in virtually negligible time, however, we obtain better overall results by choosing the better of the two rules across all test cases.

Table 2-4. Results of test set 1.
Problem Class   % of instances priority rules find optimal solution   Maximum % error^a   Average % error^a
1               100%      0       0
2               100%      0       0
3               100%      0       0
4               100%      0       0
5                80%      10.4%   1.35%
6                90%      5.9%    0.4%
7               96.7%     8.3%    0.27%
8               96.7%     5.4%    0.18%
^a % error = (Heuristic solution – Optimal solution)/Optimal solution × 100%

We conclude this section by briefly discussing the solution improvement obtained by using our local search heuristic approach on the 7-job problem instances. In each of the 7-job problem instances, our local search procedure required less than one second of computing time. Surprisingly, the local search procedure found an optimal solution in this short computing time for each of the test instances. Given the fast computing time of the local search procedure, implementing local search is clearly worthwhile for small-sized problems. The next section expands on the value of the local search procedure by considering much larger problem instances, and illustrating the tradeoff between local search time and solution value improvement.

2.5.3 Heuristic Performance on Large Problem Instances

Our final set of experiments focused on gauging our heuristic performance for large-scale problem instances. The results presented in this section are based on problem


instances involving 90 jobs. For these 90-job problems we created a total of four problem classes, the parameters of which are shown in Table 2-5. For each problem class we generated 30 problem instances, for a total of 120 problem instances. We again set c_R = 2, R_t = 3, and O_t = 2.

Table 2-5. Parameter settings for 90-job problems.
Problem Class   a_1   a_2   a_3   a_4   a_5   a_6
1               40    0     2     5     15    8
2               40    0     2     5     15    2
3               40    0     2     20    30    8
4               40    0     2     20    30    2

As Table 2-5 indicates, the main parameters we chose to vary among the problem instances were the tardiness costs and overtime costs, both key drivers of our scheduling decisions for the Min-WTOT problem. The main difference between these problem classes is that classes 3 and 4 contain far lower tardiness costs than classes 1 and 2. Our goal in this set of experiments was to gain some insight into the performance of our local search procedure as a function of certain controllable parameters, such as the number of local search iterations. We also later discuss how these parameters (relative tardiness costs in particular) affect our lower bounding procedures. Figure 2-5 illustrates the local search improvement over the initial priority rule-based heuristic solution for each problem class as a function of the iteration number, with the maximum iteration number set to K = 130.


[Figure 2-5 appears here: a line plot of average % reduction (0 to 50%) versus local search iteration number (1 to about 121) for problem classes 1 through 4.]
^a Each figure represents the average across 30 test instances. % reduction = (initial objective value – current objective value)/initial objective value × 100%
Figure 2-5. Illustration of local search performance improvement as a function of the number of local search iterations.

Figure 2-5 illustrates two important phenomena. First, for large problem instances, the priority rule-based solutions are not very close to an optimal solution, and the local search procedure provides significant improvement, on average, over this initial solution value. Second, the degree of improvement from local search is reasonably small beyond 70 local search iterations, implying that it is likely sufficient to discontinue local search at k = 70 for problems of this size. Since each iteration of our local search procedure takes an average of approximately five seconds, this would decrease the overall search time for these problems from nearly eleven minutes to approximately six minutes with little degradation in solution performance.

We next performed an additional test to gauge the benefits of the various features of our local search algorithm. In particular, we generated five 90-job problems from problem class 4 in Table 2-5 and tested the following variations of our local search procedure shown in Table 2-1:

1. Elimination of the intensified search (Step 2).


2. Changing all first improving moves to best improving moves in the algorithm.

3. Using only the two-exchange neighborhood structure and eliminating the left/right shift neighborhood structures from consideration (Steps 4 and 5).

Table 2-6 shows the results of applying these variations of our complete local search algorithm in Table 2-1. Surprisingly, the complete algorithm not only provides the best average performance improvement, but also does so in far less computing time.

Gauging the relative performance of our overall heuristic solution approach requires a lower bound on the optimal solution values for these problems. Very little past work has been devoted to worst-case analysis of heuristic algorithms for scheduling problems, and linear relaxations of mixed integer programming formulations often provide poor lower bounds for such problems (Lawler et al. 1993). To provide such lower bounds, we initially used the base linear programming relaxation solution of the formulation provided in the Appendix. This, however, proved to be far too loose a lower bound, with relative optimality gaps sometimes exceeding 100% of the lower bound solution value. To improve the quality of these lower bounds, we added inequalities (2.9) through (2.12) to the base formulation. We found that the addition of these valid inequalities served to significantly strengthen the lower bounds relative to the base formulation, particularly for problems with relatively high job tardiness costs (which are the primary problems of interest, particularly within the construction industry). We added one inequality of the form of (2.10)–(2.12) for each job, and added an additional 21 inequalities of the form of (2.9) for each problem instance. Observe that defining a specific form for inequality (2.9) requires defining each subset S for the inequality. Since an exponential number of such subsets exist, we select such subsets


heuristically as follows. We first consider the set of jobs with the 70 highest job tardiness values and use this as our first choice of S. We then create 20 additional supersets of this set of 70 jobs, by first randomly selecting an additional job from the remaining 20 jobs and adding it to the set, then randomly selecting two additional jobs and adding them to the set, and so on, until completing this process by adding all 20 remaining jobs to the set.

Table 2-6. Illustration of effects of removing critical steps of the local search algorithm (average % reduction).
K      Complete Local    1. Without Intensified   2. With all FIMs        3. With Only Two-Exchange
       Search Algorithm  Search (Step 2)          Replaced by BIMs        Neighborhood Structure
1      1.103             0.716                    1.103                   1.103
10     8.893             3.918                    8.893                   8.893
20     16.209            7.144                    16.209                  16.209
30     21.775            10.136                   21.775                  21.775
40     25.004            13.020                   25.004                  24.844
50     27.521            15.699                   27.521                  26.335
60     29.203            18.098                   29.213                  27.298
70     30.188            20.299                   30.316                  27.812
80     31.192            22.197                   31.363                  28.414
90     31.733            23.917                   31.602                  28.784
100    31.962            25.466                   31.866                  29.120
110    32.131            26.888                   32.113                  29.507
120    32.329            27.952                   32.148                  29.775
130    32.591            28.559                   32.167                  30.218
Time (minutes): 10.906   15.670                   17.444                  14.549
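The subset-selection procedure described before Table 2-6 can be sketched as follows. The function name, the list tardiness (holding the l_j values), and the default sizes are ours; the sketch assumes a 90-job instance so that 20 jobs remain outside the base set.

```python
import random

def nested_subsets(tardiness, base_size=70, n_extra=20):
    """Base set: the jobs with the highest tardiness costs; then n_extra
    supersets, the k-th adding a random sample of k remaining jobs."""
    order = sorted(range(len(tardiness)), key=lambda j: tardiness[j],
                   reverse=True)
    base, remaining = set(order[:base_size]), order[base_size:]
    subsets = [frozenset(base)]
    for k in range(1, n_extra + 1):
        subsets.append(frozenset(base | set(random.sample(remaining, k))))
    return subsets
```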


The results of the lower bounds obtained through this strengthened linear programming relaxation are shown for the four problem classes in Table 2-7. The results for problem classes 1 and 2 show that our heuristic procedures provide good results for the Min-WTOT problem. For problem classes 3 and 4, which contain problem instances with very low tardiness costs, however, the gap between the heuristic and lower bound values remains larger than we would like. As we noted at the conclusion of Section 2.4, when tardiness costs are relatively low, our valid inequalities do little to strengthen the lower bound of the linear programming relaxation relative to the case of high tardiness costs. Moreover, the primary focus of this research is on problem instances in which job tardiness costs are relatively high. So, for the contexts that motivated this research, i.e., construction project contexts that emphasize on-time delivery, our heuristic solution procedures provide good performance in fast computing times. Providing even stronger linear programming relaxation formulations that provide effective lower bounds across a broader range of parameter values therefore represents an area of future research worth pursuing.

Table 2-7. Relative performance of heuristic solution approach as compared to strengthened linear programming relaxation lower bound (performance ratio values).
Problem Class   Min     Max     Average
1               1.037   1.121   1.074
2               1.027   1.202   1.086
3               1.067   1.375   1.188
4               1.034   1.334   1.194
Note: Performance ratio = heuristic solution value / lower bound.

2.6 Conclusion

This chapter examines a practical single-resource scheduling problem that, to our knowledge, has not been considered in the literature: the Min-WTOT (weighted tardiness and overtime) problem. This problem considers a critical tradeoff between overtime and tardiness costs often faced by a wide array of firms that use overtime as a mechanism to increase capacity to meet demand. The problem requires determining an optimal sequence of jobs and, for each job in the sequence, the start date, finish date, and overtime usage required for completing the job. We have provided a polynomial-time algorithm that we call the compact and relax algorithm for minimizing the sum of weighted tardiness and overtime costs for a given sequence of jobs. For solving the general Min-WTOT problem, we provided two priority rules that determine an initial sequence of jobs. To improve upon this initial solution we have developed a local search


algorithm with variable local neighborhood definitions. To provide a benchmark for comparing the quality of our solutions, we have developed customized valid inequalities that strengthen the lower bound provided by the linear programming relaxation. The computational test results we presented show that the priority rules with local search serve as an efficient method for providing good-quality solutions for the Min-WTOT problem.


CHAPTER 3
SINGLE MACHINE SCHEDULING PROBLEMS WITH JOB-SELECTION FLEXIBILITY

3.1 Introduction and Problem Definition

This chapter considers a set of resource scheduling problems in which the firm exercises some discretion over the number of jobs it accepts, with a goal of maximizing profit. Trends in manufacturing and supply chain management have led to the incorporation of demand management approaches in operations planning (e.g., Lee, 2001; Chopra and Meindl, 2003). This perspective recognizes the fact that a supplier often has a degree of control over the demands it must satisfy, and by appropriately exercising this control, can enhance profitability. That is, a supplier generally has some flexibility in determining the set of downstream demands that provide the best match for its resource capabilities. We begin by considering the basic problem of selecting and scheduling jobs with job-specific revenues, release dates, and due dates, and a limit on the total time allowed for processing all jobs. We consider a set of available jobs J = {1, 2, …, n}, from which we must choose some subset for processing. Those jobs selected must be processed sequentially on a single resource without preemption. Job j has an associated revenue w_j ≥ 0, release date r_j, due date d_j, and processing time p_j. We initially assume that a job can be executed only between its release time and due date, although we will later consider the case of tardy delivery at some job-specific tardiness cost. We denote the starting time


of job j as s_j (a decision variable) and its execution time interval as (s_j, s_j + p_j) ⊆ [r_j, d_j]. The goal is to select a subset of jobs to be processed in the fixed time interval [0, T], such that the throughput of the schedule (the sum of the revenues of the scheduled jobs) is maximized. This problem as defined above is referred to as the throughput maximization problem (TMP) (see, for example, Lawler 1990). Later we will consider generalizations involving job-related costs; in such cases we will focus on maximizing profit rather than revenue. With the recent proliferation of on-line markets, such job selection contexts, where customers post requests for supply resources, are becoming more prevalent. This class of TMP applies to many organizations that are capacity constrained and thus need to select the capacity-feasible set of available jobs that will maximize profit during a given time horizon. We note that we do not explicitly consider any job rejection costs here. If job rejection costs exist, then a job may either be accepted, thus generating a profit, or it may be rejected at a certain rejection penalty cost. But if we add the rejection cost to the profit of each job, and set the rejection penalty cost equal to 0 for all jobs, the problem is transformed into an equivalent problem without rejection costs. The TMP is NP-hard even when all jobs are released simultaneously (Sahni, 1976). The preemptive version of the TMP was studied by Lawler (1990), who developed a pseudo-polynomial time algorithm for solving the problem. On-line versions of the problem for the preemptive and non-preemptive cases were considered in Baruah et al. (1992), Koren and Shasha (1995), and Lipton and Tomkins (1994). Berman and Dasgupta (2000) studied the case when all job parameters are positive integers, and provided a pseudo-polynomial algorithm (a so-called two-phase algorithm, or 2PA) with a performance ratio of 2, which has an O(nT(1 + log log T)) worst-case complexity. Our


research improves the absolute worst-case complexity of the 2PA algorithm, using an algorithm we call the modified 2PA (Mod-2PA) that runs in O(nT) time and has the same worst-case performance ratio of 2. In addition, we consider several generalizations of the basic TMP problem, to allow for the development and application of our solution algorithms to a broader class of practical problem settings. To characterize a generalization of the TMP, we use the notation TMP(z), where z is a vector of parameters that describe the problem generalization. For example, as we next discuss, the TMP(t) implies that we consider the basic TMP, but allow jobs to violate their due dates at some cost per unit tardy. Thus, the TMP(t) implies the basic TMP problem with allowable late delivery and job tardiness costs. We later present other generalizations of the problem, defining and using multiple elements to characterize the relevant vector of parameters z. For the TMP(t), if job j is completed later than its due date, a penalty cost l_j is incurred for each unit of tardiness; letting C_j denote the completion time of job j, the tardiness cost for job j equals l_j(C_j – d_j)^+. The profit of a tardy job is therefore reduced by an amount proportional to its cost per unit time tardy. We will show that our Mod-2PA algorithm also provides good solutions for the TMP(t) after some slight revisions to the base algorithm to account for allowing tardiness. An additional extension we will consider is the case in which the jobs have controllable processing times. Here the actual processing time of job j is (p_j – x_j) for some x_j such that 0 ≤ x_j ≤ u_j, where x_j is a decision variable for the time by which the "normal" processing time p_j is compressed (reduced), and u_j is the maximum possible amount of compression time for job j (with u_j ≤ p_j for all jobs j). In other words, the firm can choose


to allocate a greater number of resources than under normal operating conditions in order to complete a job more quickly. We assume that the compression cost is c_j x_j if job j's processing time is reduced by x_j, where c_j is the cost per unit time reduction for job j. In this case, the profit of a job will equal the job's revenue less the tardiness and compression costs. In terms of our problem notation, we refer to this problem as the TMP(t, c). Recall that in Chapter 2, we considered the Min-WTOT problem, which allows using overtime resources to reduce the completion time of a job. Both the Min-WTOT and problems with controllable processing times provide the flexibility to use additional resources to speed up the processing of jobs. However, scheduling problems with controllable processing times typically require treating all of the processing times and compression times as integer valued, and the additional resource cost is linear in the processing time reduction, which helps to simplify the model and solution approaches. For the TMP(t, c), we will show that after some necessary revisions to the basic 2PA algorithm, the Mod-2PA algorithm runs in O(u_max nT) time with a performance ratio of 2, where u_max = max_{j∈J} {u_j}. We also consider an additional practical generalization of the TMP model, where, in addition to allowing job tardiness, we can extend the scheduling horizon time limit T to (T + y) at an additional cost βy, as might be the case when the firm can continue to work past all deadlines (or past a project deadline) in order to complete all accepted work. We refer to this problem as the TMP(t, e), and present an approximation algorithm whose worst-case running time is O(γnT), where γ is an input parameter selected by the user. Finally, we briefly consider the case in which we allow job tardiness, time compression, and time-horizon extension, which we denote as the TMP(t, c, e), and


present a mixed integer programming formulation of this general version of the model, along with a heuristic solution approach. To our knowledge, none of these problems (TMP(t), TMP(t, c), TMP(t, e), and TMP(t, c, e)), despite their relevance to practical scheduling settings, has yet received attention in the scheduling literature. The remainder of this chapter is organized as follows. Sections 3.2 and 3.3 focus on refining and generalizing the previously developed two-phase algorithm (Berman and Dasgupta 2000). Section 3.2 discusses the basic two-phase algorithm as applied to the standard TMP, and then presents the modifications we make to the algorithm in order to reduce its worst-case complexity. Section 3.3 then discusses how we modify this algorithm for application to the generalizations of the TMP discussed above, as well as the resulting changes in worst-case bounds and/or complexity results. Beginning with Section 3.4, because of the similarity of the different problem versions, we focus exclusively on the TMP(t, c) problem for developing a customized heuristic method. Section 3.4 thus departs from the two-phase algorithm approach and presents a new heuristic approach for the TMP(t, c). Section 3.5 then presents a set of computational test results that compare the performance of the generalized two-phase algorithm and the heuristic algorithm of Section 3.4 for the TMP(t, c), while Section 3.6 presents conclusions.

3.2 Modified Two-phase Algorithm (2PA) for the TMP

The original two-phase algorithm (Berman and Dasgupta 2000) proposed for the TMP is derived by solving an equivalent (abstracted) problem called the interval selection problem (ISP). For each j ∈ [1, n], we are given a family of integer intervals S_j, and each interval is described by (j, w_j, s, C), where w_j is the profit (here profit equals


revenue) associated with the interval, s is the interval start time, and C is the interval finish time. In scheduling terms, each element of S_j thus describes a partial schedule plan including only job j. If any integer interval (j, w_j, s, C) from S_j is selected, then a profit of w_j is realized. The ISP requires selecting at most one interval from each set of interval families, so that the selected intervals are disjoint and the sum of the individual profits is maximized. The TMP is easily formulated as an equivalent ISP in the following way: for each job j, we associate with it a family of integer intervals S_j, where each interval is denoted by (j, w_j, s_j, C_j), w_j is the job's profit, s_j is the starting time and C_j is the finishing time of the job, with s_j ≥ r_j and C_j ≤ min{d_j, T}. To facilitate a clear explanation of the modified two-phase algorithm (2PA), we first discuss the original 2PA algorithm, along with its complexity and performance ratio. For a detailed discussion of the algorithm and the complexity and performance results, see Berman and Dasgupta (2000). We first create a list L of all possible (j, w_j, s, C) quadruples associated with all jobs, sorted in non-decreasing order of finish time. We then create a stack S, into which we will insert all potentially desirable elements of L (i.e., desirable quadruples), with a certain computed value associated with job j in the w_j position of the quadruple. To do this, for every possible start time s, beginning at s = 1 and working forward in time, we keep track of all of those elements already in the stack whose finish times are greater than s (these are jobs that, if scheduled, would conflict with scheduling any job starting at time s). Let TOTAL(s) equal the sum of all of the values for those jobs in S whose completion times are greater than s (the value of a job is the incremental profit the job provides over those jobs with which the job would conflict if added to the stack). For


every possible start time and each job j, we also keep track of all elements of S that correspond to job j, whose completion times are less than or equal to s (any one of these items, if selected, would conflict with scheduling job j starting at time s, since otherwise job j would be selected twice). Let total(j, s) denote the sum of the values of all elements of S that have corresponding job j and completion times less than or equal to s.

Each time we consider an element (j, w_j, s, C) of the list L, we look at all previous items that we have inserted into the stack S. In order to insert (j, w_j, s, C) into the stack, we would like to see that job j at start time s provides greater profit than any prior occurrences of job j in the stack whose completion time is less than or equal to s. Moreover, since all jobs in the stack with completion times greater than s would have to be in process at time s if we were to select them (and would thus conflict with starting job j at time s), the profit of job j starting at time s would need to be greater than the profit of any such job if we were to consider starting job j at time s. We therefore consider the value v = w_j – total(j, s) – TOTAL(s), which provides an indication of the incremental profit of job j beginning at time s, in comparison to jobs in the stack with which job j conflicts. That is, v provides the profit of job j, less the values associated with all prior occurrences of job j in the stack whose completion time does not exceed s, and the profit associated with jobs in the stack whose completion time exceeds s, i.e., whose scheduled time would conflict with that of job j under quadruple (j, w_j, s, C). Certainly any element from the list that provides a positive value of v will be a potentially attractive candidate in this evaluation phase. If v > 0, we insert the quadruple (j, w_j, s, C) into the stack. When the evaluation phase is completed, we then begin the selection or scheduling phase. The stack S now serves as the set of candidate intervals, from which we select


actual intervals for creating the final schedule. Beginning at the top of the stack, which now contains intervals in non-increasing order of completion time, we consider each element of the stack, which has an associated quadruple (j, w_j, s, C). If the job j associated with the stack element under consideration has not been scheduled, and the completion time C of the job does not overlap any previously scheduled job, then we schedule job j with start time s and completion time C. If these conditions do not hold, then we discard the quadruple under consideration and proceed to the next element in the stack. When we have considered every element of the stack, the algorithm terminates. The selection or scheduling phase of the algorithm by construction selects no more than one occurrence of each job j, and ensures that scheduled jobs do not overlap, thus ensuring a feasible solution at termination. Berman and Dasgupta (2000) show that the 2PA algorithm solves the TMP in O(nT(1 + log log T)) time with an approximation ratio of 2.

The 2PA algorithm was designed for the ISP, where in the most general case each interval is unrelated to all other intervals. For the TMP, however, the intervals for the same job are of course related. Observe that in the selection phase of the 2PA algorithm, if the interval with the largest v is selected, no other interval ending at the same time can be selected. By maintaining a specific set of data arrays, we can therefore reduce the worst-case running time of the 2PA algorithm, which motivates the modified 2PA algorithm (denoted Mod-2PA). We next describe the details of the modifications we make to the 2PA algorithm. We begin by generating all intervals (j, w_j, s, C) for job j, with s ∈ (r_j, T – p_j) and C = s + p_j, for j = 1, …, n. Let L_t denote the set of the intervals (j, w_j, s, C) such that C = t, for t =


2, …, T. We maintain a two-dimensional data array A, where entry a_jk records the value of the selected interval for job j with ending time k; initially we let each entry a_jk = 0, j = 1, …, n, k = 2, …, T. The evaluation phase that we next describe begins at time t = 2 and works in increasing time period index order, and we assume that all jobs take at least two periods.

*Evaluation Phase*

If t = 2 and L_2 ≠ ∅, select j' such that interval (j', w_j', 1, 2) contains the largest w_j in L_2, and insert this interval into the stack S; set a_{j'2} equal to w_j' (otherwise let t = 3 and begin the following loop).

for (each t > 2 in increasing time index order)
  If L_t = ∅, set t = t + 1, and repeat this step. Otherwise, continue.
  {
    for (each interval (j, w_j, s, t) of L_t)
      Set v_j := w_j – Σ_{k=1}^{s} a_jk – Σ_{i=1}^{n} Σ_{k=s+1}^{t–1} a_ik;
    Select j' such that interval (j', w_j', s, t) contains the largest v_j in L_t; if v_j' > 0, insert this interval (j', w_j', s, t) into the stack S and set a_{j't} := v_j';
  }

The selection phase is the same as that for the original 2PA.

Theorem 3.1: The Mod-2PA algorithm provides a solution to the TMP in O(nT) time with an approximation ratio of 2.


Proof: In the evaluation phase, the Mod-2PA algorithm evaluates O(T) sets of intervals, and within each set there are O(n) intervals. Within each interval set, calculating the values v requires O(n) time, selecting the largest v requires O(n) time, and updating the elements of A requires O(n) time; therefore, the evaluation phase runs in O(nT) time. The selection phase runs in O(T) time. Therefore the worst-case complexity of the algorithm is O(nT).

To show that the approximation ratio is 2, we only need to show that the Mod-2PA algorithm provides a solution that is the same as one that can be provided by the original 2PA algorithm in the evaluation phase. The difference between the two algorithms lies in two areas. The first difference is in the calculation of the value of an interval. In the Mod-2PA algorithm, for each interval (j, w_j, s, t), s = t – p_j. Since each entry a_jk of A records the value of the selected interval of job j with ending time k, the quantity Σ_{k=1}^{s} a_jk is the sum of the values of those intervals associated with job j in the stack that have ending time less than or equal to s, i.e., Σ_{k=1}^{s} a_jk = total(j, s). Since a_ik records the value of the selected interval of job i with ending time k, and no interval with ending time larger than or equal to t has been selected, the quantity Σ_{i=1}^{n} Σ_{k=s+1}^{t–1} a_ik is the sum of the values of those intervals in S that have ending time greater than s, i.e., Σ_{i=1}^{n} Σ_{k=s+1}^{t–1} a_ik = TOTAL(s). Thus, for a given job j, the computed value v_j satisfies v_j = w_j – total(j, s) – TOTAL(s), and the calculation of the value v is the same for both algorithms.

The second difference between the two algorithms is the number of intervals added to the stack. In the original 2PA, all of the intervals are sequentially evaluated and added to the stack if they have positive values, while in the Mod-2PA, we first sort intervals in


non-increasing order of value and only add the largest-valued interval in each set L_t to the stack. To illustrate this, let s_jt denote the start time of job j if this job finishes at time t, i.e., s_jt = t – p_j. When considering the set of intervals in L_t, for each job j corresponding to an interval in L_t we have TOTAL(s), which is more precisely defined as TOTAL(s_jt). If some arbitrary job j' is the first job that we consider adding to the stack among those jobs corresponding to intervals in L_t, then we will compute a value for job j' equal to v_j' = w_j' – total(j', s_j't) – TOTAL(s_j't), and we will add the interval (j', w_j', t – p_j', t) to the stack if v_j' > 0. Suppose, however, that job k provides the largest value of v_j = w_j – total(j, s_jt) – TOTAL(s_jt) prior to inserting any intervals in L_t into the stack (and clearly we must have v_k ≥ v_j' by definition). If we added job k to the stack prior to adding job j', then we would compute the value of job j' as v_j' = w_j' – total(j', s_j't) – TOTAL(s_j't) – v_k (since clearly job k would complete processing after period s_j't). But we know by definition of the indices j' and k that v_k ≥ v_j', which implies that our new computation of the value of job j' produces v_j' ≤ 0. So, sorting in non-increasing order of v_j and choosing the largest one for each L_t allows us to avoid adding jobs to the stack whose value would have been negative when considered in a different order in the 2PA algorithm. Moreover, if the largest v_j in the set L_t is negative, all intervals in the set are also negative, and none of these should be further evaluated or selected. Therefore, for each set L_t, we only need to consider the interval with the largest value. The implementation of this version of the algorithm is more straightforward from a coding standpoint, and in the worst case, when T is very large, the Mod-2PA algorithm's complexity is better than that of the 2PA, with the same approximation ratio.
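To fix ideas, the following is a compact, runnable sketch of the Mod-2PA for the basic TMP. The variable names are our own, and for clarity the inner value computation uses naive sums, so this sketch does not itself attain the O(nT) bound; maintaining running prefix sums over the array A restores it.

```python
def mod_2pa(jobs, T):
    """jobs: list of (w_j, r_j, p_j) with integer times; horizon periods 1..T.
    Returns a feasible schedule as a list of (job, start, completion)."""
    n = len(jobs)
    a = [[0.0] * (T + 1) for _ in range(n)]  # a[j][k]: value of selected interval of j ending at k
    stack = []                                # evaluation-phase output
    for t in range(2, T + 1):
        best_j, best_v, best_s = None, 0.0, None
        for j, (w, r, p) in enumerate(jobs):
            s = t - p
            if s < r or s < 0:
                continue                      # interval infeasible at this t
            v = (w
                 - sum(a[j][k] for k in range(0, s + 1))         # total(j, s)
                 - sum(a[i][k] for i in range(n)
                               for k in range(s + 1, t)))        # TOTAL(s)
            if v > best_v:                    # keep only the largest positive value
                best_j, best_v, best_s = j, v, s
        if best_j is not None:
            stack.append((best_j, best_s, t))
            a[best_j][t] = best_v
    # selection phase: pop in non-increasing order of completion time
    schedule, used, frontier = [], set(), T + 1
    for j, s, C in reversed(stack):
        if j not in used and C <= frontier:   # no overlap with scheduled intervals
            schedule.append((j, s, C))
            used.add(j)
            frontier = s
    return schedule
```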


3.3 Mod-2PA Algorithm for Generalizations of the TMP

This section discusses the necessary modifications for adapting the Mod-2PA algorithm to solve the generalizations of the TMP we discussed in the introduction. We first discuss the TMP(t), which allows for job tardiness at some job-specific cost per time unit tardy. We then discuss the TMP(t, c), which allows for compressing job processing times at a job-specific time compression cost. Following this, we address cases in which the overall job processing deadline, T, can be violated at a cost, which is denoted as the TMP(t, e). Finally, we briefly discuss the most general version of the TMP problem (the TMP(t, c, e)), by providing a mixed-integer programming formulation and sketching a heuristic approach based on the 2PA algorithm.

3.3.1 TMP with Job Tardiness

As we noted previously, the TMP(t) considers the case in which jobs incur a tardiness cost if delivered beyond their due dates. We next discuss how to adapt the Mod-2PA algorithm to handle the TMP(t). In the TMP(t), the profit of each interval associated with job j is not simply w_j but is also a function of the starting (and associated completion) time, i.e., w_j(s_j) = w_j – l_j(s_j + p_j – d_j)^+. An interval is now characterized by (j, w_j(s_j), s_j, s_j + p_j), where one such interval exists for each possible starting time s_j = r_j, …, T – p_j. Note that the profit of the interval depends only on the job's parameters. Thus, in the TMP(t), after updating the profit of all intervals from w_j to w_j(s_j), the Mod-2PA algorithm works exactly the same way as for the TMP. Obviously, if the profit of an interval is negative,

We therefore revise the interval generation phase of the Mod-2PA algorithm as follows:

Mod-2PA(t) algorithm for the TMP(t): Generate all intervals $(j, w_j(s_j), s_j, s_j + p_j)$ for job j for $s_j = r_j, \ldots, T - p_j$, with $w_j(s_j) = w_j - l_j(s_j + p_j - d_j)^+$, for every $j \in [1, n]$. Denote $L_t$ as the set of the intervals $(j, w_j(s_j), s_j, s_j + p_j)$ such that $C_j = s_j + p_j = t$, for $t = 2, \ldots, T$ and $w_j(s_j) \ge 0$. The rest of the algorithm is the same as the Mod-2PA.

For the performance ratio, notice that the proof of Theorem 3.1 only considers the profit $w_j$ of the intervals in a feasible solution and the value v of the associated intervals, and it does not require that all intervals of the same family have the same profit $w_j$. Therefore the algorithm's correctness and worst-case complexity still hold for the TMP(t). Theorem 3.2 thus follows directly.

Theorem 3.2: The Mod-2PA(t) algorithm solves the TMP(t) in O(nT) time with an approximation ratio of 2.
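As an illustration of the revised generation phase, the sketch below builds the sets $L_t$ for the TMP(t), computing the tardiness-adjusted profit per start time and discarding negative-profit intervals up front; the data layout is an assumption:

```python
def generate_intervals_tmp_t(jobs, T):
    """jobs: dict j -> (w, p, r, d, l); returns {t: list of (j, profit, s, t)}."""
    L = {t: [] for t in range(2, T + 1)}
    for j, (w, p, r, d, l) in jobs.items():
        for s in range(r, T - p + 1):
            t = s + p
            profit = w - l * max(t - d, 0)   # w_j - l_j (s_j + p_j - d_j)^+
            if profit >= 0 and t in L:
                L[t].append((j, profit, s, t))
    return L
```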

3.3.2 TMP with Job Tardiness and Controllable Process Times

In the TMP(t, c), where we allow job tardiness and processing time reduction, or compression, we not only consider the tardiness cost for each job, but also the compression cost incurred when reducing a job's processing time. The decision variables for the TMP(t, c) include the set of jobs selected, the sequence and start times of these jobs, and the compression time for each job selected. Our approach is similar to our treatment of the TMP(t), and we therefore implement the 2PA algorithm by embedding the compression cost in the profit of each interval. For this case, for a given job j and completion time $C_j$, we can have a number of associated intervals, depending on the amount of time compression utilized. Given a job j and completion time $C_j$, we can have at most $u_j + 1$ starting times $s_j$, equal to $C_j - p_j, \ldots, C_j - p_j + u_j$. We therefore denote $w_j$ and $s_j$ as functions of the completion time $C_j$ and compression time $x_j$, and will have a set of intervals $(j, w_j(C_j, x_j), s_j(C_j, x_j), C_j)$ for every possible $(j, C_j)$ pair. The interval $(j, w_j(C_j, x_j), s_j(C_j, x_j), C_j)$ has net profit $w_j(C_j, x_j) = w_j - l_j(C_j - d_j)^+ - c_j x_j$ (recall that $c_j$ is the cost per unit of time compression for job j) and starting time $s_j(C_j, x_j) = C_j - p_j + x_j$. Since $x_j$ is an integer decision variable, we must consider all possible values of $x_j$ such that $0 \le x_j \le u_j$. We revise the interval generation scheme of the Mod-2PA algorithm as follows:

Mod-2PA(t, c) algorithm for the TMP(t, c): Generate the intervals $(j, w_j(C_j, x_j), s_j(C_j, x_j), C_j)$ for every job j and completion time $C_j$, with $s_j(C_j, x_j) = C_j - p_j + x_j$ and $w_j(C_j, x_j) = w_j - l_j(C_j - d_j)^+ - c_j x_j$, for $0 \le x_j \le u_j$ and $x_j$ integer, and $j \in [1, n]$. Denote $L_t$ as the set of the intervals such that $C_j = t$, $t = 2, \ldots, T$, and $w_j(C_j, x_j) \ge 0$. The rest of the algorithm is the same as the Mod-2PA.
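A corresponding sketch of the Mod-2PA(t, c) generation step follows, where each integer compression amount $x_j \in \{0, \ldots, u_j\}$ produces its own interval; again, all names and the data layout are illustrative assumptions:

```python
def generate_intervals_tmp_tc(jobs, T):
    """jobs: dict j -> (w, p, r, d, l, u, c); returns {t: list of intervals}."""
    L = {t: [] for t in range(2, T + 1)}
    for j, (w, p, r, d, l, u, c) in jobs.items():
        for x in range(min(u, p - 1) + 1):        # keep processing time > 0
            for s in range(r, T - (p - x) + 1):
                t = s + (p - x)
                profit = w - l * max(t - d, 0) - c * x
                if profit >= 0 and t in L:
                    L[t].append((j, profit, s, t, x))
    return L
```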

Theorem 3.3: The Mod-2PA(t, c) algorithm solves the TMP(t, c) in $O(u_{\max} nT)$ time with an approximation ratio of 2, where $u_{\max} = \max_{j \in [1,n]} u_j$.

Proof: In the evaluation phase, there are O(T) interval sets $L_t$, and each set is composed of $O(u_{\max} n)$ intervals, since each job that finishes at time t has $O(u_{\max})$ possible starting times, $s_j = t - p_j + x_j$, $0 \le x_j \le u_j$. Thus the total complexity of the algorithm is $O(u_{\max} nT)$. For the performance ratio, the proof of Theorem 3.2 is still valid, since it does not require that the intervals of the same family have the same profit and interval length.

We will later provide additional heuristic approaches for the TMP(t, c) in Section 3.4. But before providing these heuristics, the next section discusses how to adapt the Mod-2PA algorithm for the TMP(t, e) problem.

3.3.3 TMP with Job Tardiness and Extendable Time Horizon

In this section we consider the TMP(t, e) problem, where each job has a fixed process time (i.e., no job time compression is allowed), but the scheduling time horizon T can be extended by some value y (a decision variable) to T + y at an additional cost $\beta y$, where $\beta$ denotes an overall project or batch delivery tardiness cost per unit time tardy. This situation might apply to contexts in which the set of jobs might be part of some larger project, which has a due date at time T, and late project delivery implies some project tardiness cost. We assume for convenience and based on practical considerations that extending the horizon length to a value of more than twice the original horizon length is either severely sub-optimal or infeasible, i.e., we assume $y \le T$, and that y must take an integer value. The Mod-2PA can also solve this problem after certain additional modifications.

Our discussion of these modifications requires first defining some additional notation.

Notation:
Y: Maximum allowable time horizon extension, i.e., we require $y \le Y \le T$.
$P^H(y)$: Value of the TMP(t, e) solution obtained by the Mod-2PA algorithm when the time horizon equals (T + y).
$P^*(y)$: Value of the optimal TMP(t, e) solution when the horizon equals (T + y).
$y^*$: Total time horizon extension in an optimal solution for the TMP(t, e).
$P^*(y^*)$: Value of the optimal solution for the TMP(t, e), also denoted as $P^*$.
$P^H$: Best TMP(t, e) solution value obtained by our solution algorithm.

We next describe the required Mod-2PA algorithm modifications for finding a heuristic solution for the TMP(t, e). This heuristic approach selects a set of discrete values for the time horizon T + y based on a user-specified parameter $\Delta$, and applies the Mod-2PA(t) algorithm for each discrete time horizon value.

Mod-2PA(t, e) for the TMP(t, e): Let $\Delta$ denote a pre-specified scalar such that $Y/\Delta$ is integer ($Y/\Delta$ is the time between successive discrete final horizon values the heuristic evaluates; thus setting $\Delta = Y$ implies we evaluate all integer horizon values between T and T + Y, setting $\Delta = Y/2$ implies we evaluate every other integer value between T and T + Y, and so on).
For k = 0, …, $\Delta$:
  Set $T_k(\Delta) = T + kY/\Delta$;
  Execute the Mod-2PA(t) algorithm for this instance, and let $P^H(T_k(\Delta))$ denote the solution value when the time horizon equals $T_k(\Delta)$.
Select the best solution: $P^H = \max\{P^H(T_k(\Delta)) : k = 0, \ldots, \Delta\}$.
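The outer loop of this heuristic is simple to express in code. In the sketch below, `run_mod_2pa_t` stands in for any Mod-2PA(t) implementation that returns the gross profit at a given horizon; the extension cost $\beta$ per unit is charged outside that call, and all names are assumptions:

```python
def mod_2pa_t_e(jobs, T, Y, Delta, beta, run_mod_2pa_t):
    """Evaluate Delta + 1 equally spaced horizons; keep the best net profit."""
    assert Y % Delta == 0                      # Y / Delta must be integer
    best = float("-inf")
    for k in range(Delta + 1):
        T_k = T + k * (Y // Delta)             # T_k(Delta) = T + k * Y / Delta
        gross = run_mod_2pa_t(jobs, T_k)       # Mod-2PA(t) at horizon T_k
        best = max(best, gross - beta * (T_k - T))  # charge extension cost
    return best
```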

Theorem 3.4: The Mod-2PA(t, e) algorithm solves the TMP(t, e) in $O(\Delta nT)$ time with $P^H \ge \tfrac{1}{2}P^* - \tfrac{1}{2}\beta Y - \beta Y/\Delta$.

Proof: The complexity of the algorithm clearly follows since we run the Mod-2PA algorithm a total of $(\Delta + 1)$ times. To validate the performance ratio, note that $y^* \in [kY/\Delta, (k+1)Y/\Delta)$ for some value of k such that $k \in [0, \Delta)$. Let k' denote the value of k such that $y^* \in [k'Y/\Delta, (k'+1)Y/\Delta)$. Let $y' = (k'+1)Y/\Delta$, and recall that by Theorem 3.1, twice the profit obtained by the Mod-2PA algorithm in the absence of the horizon extension cost is at least as great as the optimal profit in the absence of the extension cost (this optimal profit at horizon T + y equals $P^*(y) + \beta y$). As a result we have:

$P^H \ge P^H(y') - \beta y' \ge \tfrac{1}{2}\left[P^*(y') + \beta y'\right] - \beta y' \ge \tfrac{1}{2}\left[P^*(y^*) + \beta y^*\right] - \beta y'$

The last inequality holds because $y^* \le y' = (k'+1)Y/\Delta$, and in the absence of the horizon extension cost, a longer allowable scheduling horizon cannot lead to reduced total profit. The profit attained by the algorithm thus satisfies:

$P^H \ge \tfrac{1}{2}P^*(y^*) + \tfrac{1}{2}\beta y^* - \beta(k'+1)\tfrac{Y}{\Delta} \ge \tfrac{1}{2}P^*(y^*) + \tfrac{1}{2}\beta y^* - \beta\left(y^* + \tfrac{Y}{\Delta}\right) = \tfrac{1}{2}P^*(y^*) - \tfrac{1}{2}\beta y^* - \tfrac{\beta Y}{\Delta} \ge \tfrac{1}{2}P^* - \tfrac{1}{2}\beta Y - \tfrac{\beta Y}{\Delta}$

Note that the performance of the algorithm improves with higher values of $\Delta$, a user-specified parameter, although the complexity of the algorithm also increases with $\Delta$. Large values of Y and/or $\beta$, on the other hand, degrade the algorithm's worst-case performance. When $\Delta = Y$, since we require y to be integer, we check every value of y (including $y^*$) and the above inequalities become

$P^H \ge \tfrac{1}{2}P^*(y^*) - \tfrac{1}{2}\beta y^* \ge \tfrac{1}{2}P^* - \tfrac{1}{2}\beta Y,$

which is our best performance guarantee among all possible choices of $\Delta$ such that $Y/\Delta$ is integer. If we do not require y to be integer, and allow $\Delta$ to go to infinity, then we get the same performance ratio of $P^H \ge \tfrac{1}{2}P^*(y^*) - \tfrac{1}{2}\beta y^* \ge \tfrac{1}{2}P^* - \tfrac{1}{2}\beta Y$. Observe also that when $\Delta = Y$ the algorithm complexity is O(nYT), which is bounded by $O(nT^2)$ since $Y \le T$.

3.3.4 TMP with Tardiness, Controllable Process Times, and Extendable Time Horizon

The most general TMP problem we consider combines all of the elements of the above three problems. The TMP(t, c, e) allows for job tardiness at some job-specific cost per time unit tardy, allows for compressing job processing times at a job-specific compression cost, and allows extending the time horizon at a cost. For small- to medium-size problem instances, we can use the mixed integer linear programming formulation provided below, and a corresponding solver, such as CPLEX, to solve the problem optimally. In addition to our previously defined notation, this formulation uses the following additional decision variables:

Decision Variables:
$\tau_j$: tardiness for job j, equal to $[C_j - d_j]^+$.

$\gamma_j$: binary variable equal to 1 if job j is selected for processing, and 0 otherwise.
$z_{ij}$: binary variable equal to 1 if job i is processed before job j, and 0 otherwise.

Maximize $\sum_{j=1}^{n}\left(w_j\gamma_j - c_j x_j - l_j\tau_j\right) - \beta y$

Subject to:
$s_j \ge r_j - (T+Y)(1-\gamma_j)$,  j = 1, …, n,  (3.1)
$C_j = s_j + p_j - x_j$,  j = 1, …, n,  (3.2)
$\tau_j \ge C_j - d_j$,  j = 1, …, n,  (3.3)
$s_j \ge C_i - (T+Y)(1-z_{ij})$, $s_i \ge C_j - (T+Y)z_{ij}$,  j = 1, …, n, i = 1, …, n, i ≠ j,  (3.4)
$C_j \le T + y$,  j = 1, …, n,  (3.5)
$x_j \le u_j$,  j = 1, …, n,  (3.6)
$\gamma_j, z_{ij} \in \{0, 1\}$,  j = 1, …, n, i = 1, …, n, i ≠ j,  (3.7)
$s_j, C_j, x_j, \tau_j, y \ge 0$,  j = 1, …, n.  (3.8)

The objective function maximizes net profit, defined as the total revenue from selected jobs, less compression, tardiness, and horizon extension costs. Constraint (3.1) requires that each job, if selected, starts no earlier than its release time. Constraint (3.2) determines the finishing time of each job, while Constraint (3.3) obtains the tardiness of the job. Constraint set (3.4) forbids preemption, and Constraint (3.5) is the horizon limit. Constraint (3.6) limits the maximum amount of compression time for each job. Note that it is not necessary to require the variables $s_j$, $C_j$, $x_j$, $\tau_j$, y to be integer, since for any fixed $\gamma_j$ and $z_{ij}$, the constraint matrix becomes totally unimodular, and these variables therefore automatically take integer values for any choice of binary $\gamma_j$ and $z_{ij}$ variables. Each of the models we have considered thus far is a special case of the above formulation.
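For small instances, this formulation can be handed directly to a solver. The sketch below encodes it with the open-source PuLP library (CBC) instead of CPLEX; the big-M terms added to (3.3) and (3.5) for unselected jobs, and the pairing constraint on the $z_{ij}$ variables, are our own assumptions to keep the sketch well-behaved, as are all names:

```python
from pulp import LpProblem, LpMaximize, LpVariable, lpSum, LpBinary

def solve_tmp_tce(r, p, d, u, w, l, c, T, Y, beta):
    """Solve the TMP(t, c, e) formulation (3.1)-(3.8) for a small instance."""
    J = range(len(p))
    M = T + Y                                             # big-M constant
    prob = LpProblem("TMP_tce", LpMaximize)
    s   = [LpVariable(f"s_{j}", lowBound=0) for j in J]
    C   = [LpVariable(f"C_{j}", lowBound=0) for j in J]
    tau = [LpVariable(f"tau_{j}", lowBound=0) for j in J]
    x   = [LpVariable(f"x_{j}", lowBound=0, upBound=u[j]) for j in J]  # (3.6)
    sel = [LpVariable(f"gamma_{j}", cat=LpBinary) for j in J]
    z   = {(i, j): LpVariable(f"z_{i}_{j}", cat=LpBinary)
           for i in J for j in J if i != j}
    y   = LpVariable("y", lowBound=0, upBound=Y)

    # Objective: revenue of selected jobs less compression, tardiness,
    # and horizon extension costs.
    prob += lpSum(w[j]*sel[j] - c[j]*x[j] - l[j]*tau[j] for j in J) - beta*y
    for j in J:
        prob += s[j] >= r[j] - M * (1 - sel[j])           # (3.1)
        prob += C[j] == s[j] + p[j] - x[j]                # (3.2)
        prob += tau[j] >= C[j] - d[j] - M * (1 - sel[j])  # (3.3), relaxed
        prob += C[j] <= T + y + M * (1 - sel[j])          # (3.5), relaxed
        for i in J:
            if i != j:
                prob += s[j] >= C[i] - M * (1 - z[i, j])  # (3.4)
            if i < j:
                prob += z[i, j] + z[j, i] == 1            # one order per pair
    prob.solve()
    return [j for j in J if sel[j].value() > 0.5], y.value()
```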

Even for these special cases, however, the above formulation will not be amenable to solution via standard mixed integer programming solvers for large problem instances. We next consider the adjustments required to handle the general TMP(t, c, e) problem using the Mod-2PA algorithm.

We apply the Mod-2PA algorithm to the TMP(t, c, e) problem in a similar manner as was done for the TMP(t, e): in the evaluation phase we set the horizon length equal to T + Y. Then, we enumerate the discrete values y = 0, 1, …, Y, apply the selection phase for each y, and select the value of y that provides the maximum profit. The enumeration of values of y is only done during the selection phase, and so the complexity increases as a multiple of Y. Because of the similarity of this approach to the approach we described for the TMP(t, e), we omit the details.

Theorem 3.5: The Mod-2PA(t, c, e) algorithm provides a solution to the TMP(t, c, e) in $O(u_{\max} nT^2)$ time with $P^H \ge \tfrac{1}{2}P^* - \tfrac{1}{2}\beta Y$.

Proof: Since the algorithm enumerates all possible integer y values (where $\Delta = Y$) and selects the best solution, it follows from Theorem 3.4 that $P^H \ge \tfrac{1}{2}P^* - \tfrac{1}{2}\beta Y$. The evaluation phase is the same as that for the Mod-2PA(t, c) algorithm with a horizon length of T + Y, which runs in $O(u_{\max} n(T+Y))$ time in the worst case (because the maximum horizon length is T + Y). Since we run the selection phase of the algorithm Y times, the total complexity is $O(u_{\max} n(T+Y)Y)$, and if $Y \le T$, then this complexity becomes $O(u_{\max} nT^2)$.

3.4 Heuristic Approach for the TMP(t, c)

In this section, we consider a practical special case of the TMP(t, c) in which the compression cost per unit time is job independent and equal to c, and we present a heuristic approach for this special case.

3.4.1 Compress and Relax Algorithm

Job-independent processing time compression costs might apply if all jobs have roughly the same work effort requirements. Our heuristic approach for this problem is based on the compact and relax algorithm developed in Chapter 2. The compact and relax algorithm minimizes the sum of total overtime cost and weighted tardiness cost for a fixed sequence of jobs on a single machine. In this section we will show that a variation of the compact and relax algorithm, which we call the compress and relax algorithm, provides the optimal amount of job compression and tardiness for the TMP(t, c) for a fixed sequence of jobs in polynomial time.

The compress and relax algorithm has two phases. In the first phase (called the compression phase), given a fixed sequence of jobs, we set each job's process time equal to the shortest possible process time, i.e., we let $p_j = p_j - u_j$, and consider the resulting schedule for the fixed sequence; note that this produces the largest possible compression cost and generates the greatest possible revenue. We refer to the resulting schedule as the compressed schedule. In the second phase (called the relax phase), beginning with the job scheduled last, we sequentially reduce the amount of job time compression; consequently, some jobs may increase their tardiness cost (relative to the compressed schedule), and some jobs initially scheduled within the (0, T) scheduling horizon may start and/or finish after the scheduled time limit T (such jobs would have to be excluded, since problem TMP(t, c) does not allow violating the schedule horizon length); the net revenue from jobs is reduced (due to tardiness costs and excluded jobs) and the total compression cost decreases with respect to the first phase solution.

We therefore face a tradeoff, since we may simultaneously reduce net revenue and compression costs. We try to find the best amount of time compression $x_j$ for each job j such that the net revenue minus cost (net profit) is maximized for the fixed sequence.

Notation:
$\delta$: The amount of total reduced compression time after implementing the relax phase, i.e., $\delta = \sum_{j=1}^{n}(u_j - x_j)$. Note that $\delta$ is a decision variable whose final value is determined in the relax phase of the algorithm.
$\alpha_j$: The maximum value of $\delta$ such that, in the relax phase, job j is not delayed beyond its due date with respect to the initial compressed schedule. If a job j is already delayed beyond its due date after the compress procedure, we then set $\alpha_j = 0$.
$\gamma_j$: The maximum value of $\delta$ such that job j will be completed within the original scheduling horizon (0, T); for jobs that finish beyond T after the compression procedure, we set $\gamma_j = 0$.

In a manner similar to Chapter 2, we define an independent subset as follows:

Definition 3.1 (Independent subset): For any schedule of jobs, an independent subset of jobs satisfies the following properties: (i) the release date of the first job in the subset is strictly greater than the completion date of the job's immediate predecessor, (ii) the completion date of the last job in the subset is strictly less than the release date of its immediate successor, and (iii) in a subset, all the jobs are scheduled one after another without idle time.

We first assume there is only one independent subset in the schedule, which occurs when all release times are zero, and present the following proposition, in three parts, which allows us to show the optimality of the compress and relax algorithm for a fixed sequence of jobs in the TMP(t, c):

Proposition 3.1: (i) Within an independent subset, compressing the same amount of processing time for jobs scheduled earlier generates greater benefit than for jobs scheduled later. (ii) The quantities $\alpha_j$ and $\gamma_j$ for job j are given by the following equations, where L(j) denotes the set of successors of job j, and $C_j^0$ is the completion time of job j after the compression phase:

$\alpha_j = \begin{cases}\sum_{i \in L(j)} u_i + \left(d_j - C_j^0\right), & \text{if } C_j^0 \le d_j,\\ 0, & \text{if } C_j^0 > d_j;\end{cases} \qquad \gamma_j = \begin{cases}\sum_{i \in L(j)} u_i + \left(T - C_j^0\right), & \text{if } C_j^0 \le T,\\ 0, & \text{if } C_j^0 > T.\end{cases}$

(iii) The total profit as a function of $\delta$ is a piecewise linear curve, whose peak points are the points where $\delta$ equals some $\alpha_j$ or $\gamma_j$. The maximum possible profit must occur at one of these peak points.

Proof: Part (i) follows since compressing the processing times of jobs scheduled earlier will decrease the completion times of a greater number of jobs and, as a result, less total tardiness cost is incurred for the same amount of processing time compression. Part (ii) reflects that the values of $\alpha_j$ and $\gamma_j$ equal the sum of the compression times of job j's successor jobs plus the difference between its due date (for $\alpha_j$) or the horizon limit T (for $\gamma_j$) and its completion time after the compression phase. This follows directly from part (i), since we should reduce the compression time of successor jobs before we reduce job j's compression time.

Also, if a job i is lost during the relax phase by finishing beyond time T, we effectively lose all of its compression time used in the initial schedule. Therefore, even if a successor job i is dropped from the schedule before relaxing all of its compression time, we can still say that we have relaxed all of the compression time associated with job i.

Part (iii) is illustrated through the profit curve in Figure 3-1. The profit equals the total revenue less the sum of total tardiness and compression costs as a function of $\delta$. Note that the revenue curve is a non-increasing step function, since only when $\delta$ equals some $\gamma_j$ will job j exceed the time limit T and, as a result, the total revenue decrease by $w_j$ (the revenue of job j). The tardiness cost curve is a piecewise-linear and discontinuous function of $\delta$; only when $\delta$ exceeds $\alpha_j$ for some job j will that job begin to incur tardiness at a rate of $l_j$ per unit time tardy. If the reduction in compression time is larger than $\gamma_j$, job j's completion time will exceed the time horizon T and we no longer include its revenue or tardiness cost, i.e., for $\delta > \gamma_j$, the tardiness cost of job j will be subtracted from the total tardiness cost. Thus we see that the profit curve changes slope at any value of $\alpha_j$, and has downward steps of discontinuity at each value of $\gamma_j$. The compression cost, on the other hand, is simply linear in the total compression time, with slope c. As the combination of the three, the profit curve is a piecewise linear discontinuous curve. The curve's "peaks" occur when the compression time is reduced by some $\alpha_j$ or $\gamma_j$, and the maximum profit must occur at one of these peak points.

Figure 3-1. Profit as a function of reduction in compression time $\delta$. (The figure plots cost against the compression time reduction $\delta$, showing the revenue, tardiness cost, and compression cost curves together with the resulting piecewise linear, discontinuous profit curve.)

If $C_j^0$ denotes the completion time of job j in the compressed schedule, we can characterize this profit function in terms of the total compression reduction time $\delta$ using the following functional representation:

$\sum_{j \in J}\left[\bar{w}_j - l_j\left(\delta - \alpha_j\right)^+\right] f_j(\delta) - c\left(\sum_{j \in J} u_j - \delta\right),$

where $f_j(\delta) = 1$ if $\delta \le \gamma_j$ and 0 otherwise, and $\bar{w}_j = w_j - \left(C_j^0 - d_j\right)^+ l_j$.
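Evaluating this profit function at the candidate peak points is straightforward; the sketch below does so for a single independent subset, using a dictionary layout of our own choosing:

```python
def best_relaxation(jobs, c):
    """jobs: list of dicts with keys w_bar, l, alpha, gamma, u; returns the
    profit-maximizing total compression reduction delta for one subset."""
    U = sum(j["u"] for j in jobs)

    def profit(delta):
        value = -c * (U - delta)                 # remaining compression cost
        for j in jobs:
            if delta <= j["gamma"]:              # job still completes by T
                value += j["w_bar"] - j["l"] * max(delta - j["alpha"], 0)
        return value

    candidates = {0, U}
    candidates.update(j["alpha"] for j in jobs)
    candidates.update(j["gamma"] for j in jobs)
    best = max((d for d in candidates if 0 <= d <= U), key=profit)
    return best, profit(best)
```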

To illustrate the derivation of this function, note that the first part of the function is the revenue less the total tardiness cost. Here $\bar{w}_j$ is the profit contribution of job j, which includes the original tardiness cost resulting from the compression phase. The quantity $l_j(\delta - \alpha_j)^+$ is the additional tardiness cost if we reduce the compression time in the relax phase by $\delta$. If $\delta > \gamma_j$, job j is scheduled after period T (thus zero profit is obtained for job j). The function $f_j(\delta)$ ensures that such jobs incur no revenue or tardiness costs. The last part of the function is the total compression cost incurred. The peak points occur at the boundary points $\delta = 0$ and $\delta = \sum_{j=1}^{n} u_j$, and at points of the form $\alpha_j$ or $\gamma_j$, j = 1, …, n. We can thus evaluate profit at all of these points to determine an optimal solution for a fixed sequence.

Next we consider the case in which more than one independent subset can exist in the initial compressed schedule, which will tend to occur when release times are non-zero. In this case, we begin by applying the relax phase of the algorithm on the independent subset that finishes last in the compressed schedule. We then apply the relax phase to the second-to-last independent subset. It may happen, however, that when relaxing the compression time in the second-to-last subset, this subset becomes blocked from further relaxation by the first job in the last subset. When this happens, based on the results of Proposition 3.1 (part (i)), we must reconsider relaxing additional compression time in the last subset, if compression time still exists in this subset, before further relaxing compression time in the second-to-last subset.

More generally, suppose that after the compression phase we have m independent scheduled subsets denoted by $S_1, S_2, \ldots, S_m$. We index independent subsets in increasing order of the start of the first job in the subset, and we say that $S_l > S_k$ for any subsets $S_k$ and $S_l$ if the start of the first job in subset $S_l$ is later than the start of the first job in subset $S_k$.

If, during the relax procedure, a subset's compression time is "relaxed" enough that the finish time of the last job in the subset reaches the start time of the first job in the succeeding subset, then these two subsets merge into a new subset and, based on Proposition 3.1, we restart the relax procedure on the newly formed independent subset. Note that when subsets merge, we need to revise the values of $\alpha_j$ and $\gamma_j$ for all jobs j in the merged subsets except for the earliest (lowest indexed) subset in the merge.

We denote the time between subsets k and k + 1 in the initial compressed schedule as $S_{k,k+1}$, which equals the starting time of the first job in set k + 1 minus the completion time of the last job in set k. Then clearly we can relax the compression time in subset k by an amount equal to $S_{k,k+1}$ before any of the jobs in subset k + 1 are affected, i.e., are shifted later in time. Thus the time $S_{k,k+1}$ will need to be added to the values of $\alpha_j$ and $\gamma_j$ for j in subset k + 1 to reflect this additional compression time reduction that occurs without affecting these jobs.

There are at most O(n) possible independent subsets. In each subset, computing each $\alpha_j$ and $\gamma_j$ requires O(n) time. Thus, the total complexity for the compress and relax procedure is $O(n^3)$. This leads to the following theorem:

Theorem 3.6: The Compress and Relax algorithm solves the TMP(t, c) problem with identical compression costs optimally in $O(n^3)$ time for a given fixed sequence of jobs.

We next summarize the Compress and Relax algorithm.

Compress and Relax algorithm
Step 1: Set each job's compression time to its maximum value $u_j$, and schedule jobs using the sequence determined by the priority rule. Keep track of each independent subset that results.
Step 2: Begin reducing the compression time starting with the last job in the last subset and working backward in time. Calculate the maximum profit for each subset by checking the possible values of $\delta$ in the subset, i.e., $\alpha_j$ and $\gamma_j$ for all j in the subset, along with $\delta = 0$ and $\delta = \sum_{j=1}^{n} u_j$. Merge the subsets as necessary and restart the time compression relaxation process at the end of the new subset after merging. Continue this procedure until reaching the first subset.

This Compress and Relax algorithm can also solve the more general TMP(t, c, e) problem for a fixed sequence of jobs. Since the sequence of jobs is fixed, we need only enumerate the candidate horizon extension values $y_j$ for each job j = 1, …, n, and accordingly update $\gamma_i$ for all jobs $i \ne j$. While the Compress and Relax algorithm solves the TMP(t, c) for a fixed sequence of jobs in $O(n^3)$ time, it solves the TMP(t, c, e) for a fixed sequence of jobs in $O(n^4)$ time.

3.4.2 Determining a Good Job Sequence

Even though we can optimally solve the problem for a fixed sequence, we still require a method to find a good sequence. One possible heuristic approach would be to set all job compression times to their maximum value $u_j$, and run the Mod-2PA algorithm to find an initial sequence.

The Mod-2PA algorithm has complexity O(nT), which leads to an overall complexity for the entire algorithm of $O(nT + n^3)$. Another potential approach would be to use a classical priority rule that successively schedules jobs based on a priority formula. Using a priority rule, we first compute a priority value for each job at time 0 and schedule the job with the highest priority first. We then consider the next job, which will start at a time equal to the maximum of the completion time of the first job and the minimum release time among all remaining available jobs. We use a priority rule similar to that in Chapter 2. Before presenting this priority rule, we first require some additional notation. The notation for the parameters used in the priority rule is as follows:

$t_k$: The time at which we schedule the start of the job in the k-th position in the sequence; initially, for the first scheduled job, $t_1 = 0$. After scheduling job k − 1, $t_k$ equals the maximum of the completion time of the (k − 1)st job and the minimum release time among all remaining unscheduled jobs.

$\bar{w}_{AVG}$: The weighted average value of the job revenue, weighted by tardiness cost per time unit tardy, i.e., $\bar{w}_{AVG} = \sum_{j=1}^{n} w_j l_j / n$. Note that when either the revenue or the tardiness cost of a job is large relative to the compression cost, we desire greater compression time for a job.

$\hat{x}_j$: An estimate of the amount of compression time that will be applied to job j in the final schedule. That is, we set $\hat{x}_j = \theta u_j$, where $\theta = \min\{\bar{w}_{AVG}/c, 1\}$ is a scalar between 0 and 1; observe that if the value of the compression cost relative to the weighted average revenue is very large, $\theta$ is near 0, which means little or no compression will be applied. Conversely, when $\theta$ is 1, this means that the weighted average job revenue far exceeds the compression cost, and the maximum amount of compression time is used.

$\hat{p}_j$: An estimate of the processing time that will be used for job j, i.e., $\hat{p}_j = p_j - \theta u_j$.

The priority rule we use for evaluating jobs at time $t_k$ is given by the following formula:

$\pi_j(t_k) = \begin{cases}\dfrac{w_j - l_j\left(t_k + \hat{p}_j - d_j\right)^+ - c\hat{x}_j}{\hat{p}_j}, & \text{if } r_j \le t_k,\\ 0, & \text{otherwise.}\end{cases}$

This priority rule is motivated by the following ideas. The numerator of $\pi_j(t_k)$, i.e., the quantity $w_j - l_j(t_k + \hat{p}_j - d_j)^+ - c\hat{x}_j$, at time $t_k$ provides an estimate of the net revenue of job j when scheduled as the k-th job in the sequence (i.e., the next job in the sequence). A job with high net revenue of course has a high selection priority. The denominator of $\pi_j(t_k)$ ensures that if the job has a long processing time, it receives lower priority and will be scheduled later, so as not to delay too many other jobs in the schedule. In terms of job selection, those jobs that remain unscheduled at time T are not selected, while those that are scheduled have been selected.

The priority rule does not guarantee providing the best candidate job at every step in the process. Therefore, to improve the solution, we apply a meta-heuristic solution procedure called GRASP. The Greedy Randomized Adaptive Search Procedure (GRASP) is a multi-start or iterative procedure, where each GRASP iteration consists of two phases: in the first, construction phase, a feasible solution is produced, and in the second, local search phase, a locally optimal solution in the neighborhood of the constructed solution is sought. The best overall solution is kept as the result.

In the construction phase, a feasible solution is iteratively constructed, one element (here, one job) at a time. The next element to be added at each step is determined by ordering all candidate elements in a candidate list with respect to a greedy function (here the greedy function is the priority rule we previously discussed). A probabilistic component of GRASP is applied by randomly choosing one of the candidates in the list, but not necessarily the best candidate. In our heuristic approach, we use the construction phase of GRASP to generate candidate solutions, and select the best one as the result.

3.5 Computational Tests for the TMP(t, c)

This section discusses a set of computational tests designed to assess the effectiveness of the various heuristic solution approaches we have discussed for the TMP(t, c). As the discussion in the previous section indicates, there are several potential heuristic approaches we can use to solve the TMP(t, c) problem. In our computational tests we evaluated the following four heuristic methods, denoted by H1 – H4.

H1: Directly use the Mod-2PA(t, c) algorithm, which solves the TMP(t, c) in $O(u_{\max} nT)$ time with an approximation ratio of 2;
H2: A GRASP-based heuristic, which uses a priority rule to determine the sequence of jobs and then uses the Compress and Relax algorithm to obtain a final solution;
H3: A hybrid algorithm, which uses the Mod-2PA(t, c) algorithm to generate the sequence of jobs and then uses the Compress and Relax algorithm to improve the solution;
H4: Use the basic Mod-2PA approach, while setting fixed compression time values for jobs a priori. To do this, we separate the time horizon T into three disjoint segments of equal duration. For intervals in the first segment, we apply a high amount of compression time ($x_j = u_j$); for intervals in the second segment we apply medium values of compression time ($x_j = 0.5u_j$); and for intervals in the third and final segment we apply low values of compression time ($x_j = 0.25u_j$). This approach recognizes the fact (from Proposition 3.1) that compression time utilized earlier in the schedule is more valuable.

Among these approaches, as we have discussed, only the first one has a proven approximation ratio. We therefore designed a set of computational tests to compare the effectiveness of all of these approaches. Because it is extremely difficult to obtain strong linear-programming-based lower bounds, and because the first approach discussed above has a proven worst-case performance ratio, we use the solution from direct application of the Mod-2PA algorithm as our benchmark, against which we compare the other heuristic methods.

We generated problem instances using random and uniformly distributed data according to the rules in Table 3-1. The parameter a in Table 3-1 is a scalar whose value we vary to create a set of four different problem classes. The four problem classes correspond to values of a = 0.5, 0.2, 0.1, and 0.05. As Table 3-1 indicates, we tested problem instances where jobs are reasonably similar to each other, with relatively tight due dates, which tends to increase the problem's difficulty. Our goal is to determine how each of the proposed heuristic solution approaches works when the relative values of profit, tardiness penalty cost, and compression cost differ (determined by the parameter a). Each problem instance consists of 50 jobs and a time horizon of T = 200, and 30 instances are tested for each problem class, for a total of 120 randomly generated test problems.

Table 3-2 presents the relative performance results of the algorithms. The performance ratios shown in the last three columns give the performance of heuristics H2, H3, and H4 relative to the base heuristic H1, which has a worst-case performance ratio of 2. Thus, a higher ratio value in the last three columns of the table implies better algorithm performance (note that these ratios should not, therefore, be interpreted as optimality gaps).

Table 3-1. Rules used for randomly generating test problem parameters.

| Parameter | Rule |
|---|---|
| Release times | $r_j = \mathrm{UNIF}(0, 180)$ |
| Processing times | $p_j = \mathrm{UNIF}(12, 24)$ |
| Due dates | $d_j = r_j + p_j \cdot \mathrm{UNIF}(1, 2)$ |
| Maximum compression times | $u_j = p_j / \mathrm{UNIF}(1, 50)$ |
| Job revenues | $w_j = \mathrm{UNIF}(3, 80)$ |
| Tardiness costs | $l_j = w_j / \mathrm{UNIF}(3, 5)$ |
| Compression cost | $c = a \sum_{j=1}^{n} l_j / n$ |

Table 3-2. Summary of computational test results for four problem classes.*

| Problem class | Compression cost multiplier a | H1 profit | H2 profit | H3 profit | H4 profit | H2/H1 | H3/H1 | H4/H1 |
|---|---|---|---|---|---|---|---|---|
| 1 | 0.5 | 532.6 | 707.8 | 644.1 | 750.6 | 1.329 | 1.209 | 1.409 |
| 2 | 0.2 | 803.4 | 859.2 | 891.5 | 760.5 | 1.069 | 1.110 | 0.947 |
| 3 | 0.1 | 1221.3 | 1307.5 | 1277.5 | 788.8 | 1.071 | 1.046 | 0.646 |
| 4 | 0.05 | 1495.3 | 1550.3 | 1523.9 | 791.1 | 1.037 | 1.019 | 0.529 |
| Overall average | | | | | | 1.127 | 1.096 | 0.883 |

\* Figures in columns H1 – H4 represent the average heuristic objective function value among 30 problem instances.
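For reference, an instance generator matching these rules might look as follows; note that the ratio forms for $u_j$ and $l_j$ follow our reading of Table 3-1 and should be treated as assumptions:

```python
import random

def generate_instance(n=50, T=200, a=0.5, seed=None):
    """Generate one random TMP(t, c) test instance per Table 3-1."""
    rng = random.Random(seed)
    jobs = []
    for _ in range(n):
        r = rng.uniform(0, 180)
        p = rng.uniform(12, 24)
        job = dict(
            r=r, p=p,
            d=r + p * rng.uniform(1, 2),          # relatively tight due dates
            u=p / rng.uniform(1, 50),             # maximum compression time
            w=rng.uniform(3, 80),                 # job revenue
        )
        job["l"] = job["w"] / rng.uniform(3, 5)   # tardiness cost per unit
        jobs.append(job)
    c = a * sum(j["l"] for j in jobs) / n         # job-independent compression cost
    return jobs, T, c
```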

From the results in Table 3-2, and based on our analysis of the corresponding solutions, we make several observations. Heuristics H2 and H3 provide the best average performance, and appear to be quite robust to the changing cost of compression time. Both of these algorithms dominate, on average, heuristic H1 (which has a worst-case performance ratio of 2). The common link between these heuristics (H2 and H3) is that they both use the Compress and Relax algorithm to optimize the tradeoffs between compression time, tardiness costs, and lost revenue; their only difference lies in the method used to generate the job sequence, i.e., the GRASP procedure (H2) versus the Mod-2PA(t, c) algorithm (H3). The superior performance of these heuristics demonstrates the value of applying the Compress and Relax algorithm.

Note that the basic Mod-2PA(t, c) algorithm (H1) performs relatively poorly when compression costs are high. The reason is the following. When compression costs are high, the jobs that have high value in the evaluation phase have longer processing times. In the evaluation phase, the Mod-2PA(t, c) algorithm can repeatedly pick the same high-valued job with a long processing time as the most valuable job for successive intervals. Since at most one interval from each job can be selected in the selection phase, this leads to many time slots without attractive alternative jobs to pick from, creating a sparse schedule. Since heuristic H3 starts with the Mod-2PA(t, c) algorithm and then applies the Compress and Relax algorithm, its performance dominates the basic Mod-2PA(t, c) algorithm (H1), and it displays similar performance trends across the different problem classes. Heuristic H2 generates a complete sequence of jobs (using the priority rule and GRASP heuristic), and then uses the Compress and Relax algorithm to determine the best amount of compression time for all jobs. Since heuristics H2 and H3 use the Compress and Relax algorithm, these approaches are better at addressing the tradeoffs between compression time, tardiness costs, and lost revenue. Similarly, since heuristic H4 applies fixed amounts of compression time regardless of the compression cost, it consistently

chooses the same number of jobs over the fixed horizon, and does not take advantage of applying 100% compression to all jobs when compression costs are very low. Thus, its performance becomes relatively worse as the compression cost decreases.

3.6 Conclusions

This chapter focused on a single-resource scheduling problem with job-selection flexibility. We extended the basic throughput scheduling problem (TMP) to a set of three more general and practical problems by considering tardiness costs, controllable processing times, and a cost for violating a target makespan value. We improved the two-phase algorithm for the basic TMP problem and provided a generalized version of the algorithm for each generalized version of the problem. For each of these generalized versions of the algorithm, we also provided a lower bound on the algorithm's worst-case performance. We also presented a set of heuristic approaches specialized to the generalized version of the problem with tardiness costs and processing time compression costs, i.e., the TMP(t, c). In particular, we provided the Compress and Relax algorithm, which optimizes the tradeoffs between tardiness costs, processing time compression costs, and lost revenue for a predetermined sequence of jobs. Our computational test results demonstrated the effectiveness of the Compress and Relax algorithm for providing good solution values.

CHAPTER 4
SINGLE MACHINE RESCHEDULING WITH NEW JOB ARRIVALS AND PROCESSING TIME COMPRESSION COSTS

4.1 Introduction and Problem Definition

In scheduling practice, the ongoing processing of jobs is often disrupted by random events. Job rescheduling is often necessary when such a disruption occurs. The types of possible disruptions include, for example, machine failures, resource level changes, processing time changes, altered release dates, or new order arrivals. Relatively little research exists to date on rescheduling in response to such disruptions, particularly in the area of new job arrival disruptions, when compared to the abundant research on scheduling in general.

This chapter considers a rescheduling problem arising in a make-to-order (MTO) environment. We assume there is a set of original jobs in the system that has been previously scheduled. Each order has its own associated deadline, and the firm can pay an additional premium to reduce the processing time of an order (through, for example, overtime or subcontracting). During processing of the original jobs, some new orders may arrive to the system. We assume each time a new order (referred to as a job in the remainder of the paper) arrives, the firm must reschedule its set of remaining jobs in order to accommodate the new job.

This problem is a single machine rescheduling problem, which is defined as follows. There exists an original schedule for (n − 1) original jobs. Each job i has associated parameters including a processing time $p_i$, due date $d_i$, tardiness cost $l_i$ (per unit time tardy), maximum compression time $u_i$, and unit time compression cost c (identical for all jobs).

This compression time and associated cost represent the possibility of reducing the required processing time of a job at some cost per unit time reduction. The processing time of job i can therefore be reduced by $x_i$ ($\le u_i$) units of time at a compression cost equal to $cx_i$. The tardiness cost is assessed such that if the job completes at time $C_i$, a tardiness cost of $(C_i - d_i)^+ l_i$ is incurred, where $(x)^+ = \max\{x, 0\}$. Job preemption is not permitted, which implies that any new schedule of remaining jobs that results from rescheduling cannot begin until the job currently in process reaches completion.

Suppose, for example, that at time $t_r$, a new job n arrives. Denote the job set containing jobs 1, …, n as J. We need to insert the new job somewhere into the existing schedule. Since the new job arrives at time $t_r$, we cannot change the initial schedule before time $t_r$; however, the remaining schedule is free to change at some rescheduling cost. A rescheduling, or disruption, cost is associated with the degree of deviation from the initial schedule. Here we use the absolute change in the scheduled starting time of a job as the measure of schedule deviation. We define a unit disruption cost of h, i.e., when a job's starting time is changed from $s_i$ to $s_i'$, the disruption cost equals $h|s_i - s_i'|$.

Our objective is to minimize total rescheduling cost, which consists of some traditional scheduling objective (e.g., total weighted tardiness cost, maximum lateness, or total flow time), plus processing time compression and schedule disruption costs.

4.2 Literature Review

Any rescheduling problem must simultaneously address two (possibly conflicting) criteria. The first is the efficiency of the schedule, which is measured in a similar way as in a traditional scheduling problem, using an objective such as makespan, minimum weighted completion time, minimum total tardiness cost, minimum number of tardy jobs, etc.

The second criterion is schedule stability, which refers to the degree of deviation from the original schedule. The measure of stability may be some measure of the change in completion times, the change in job sequence, or the change in starting times, which is the measure we use in our analysis. The potential for diversity in rescheduling research directions lies in the ways in which one can select and model these two criteria. We next discuss relevant past work that addresses such problems.

In early work on rescheduling, Norbis and Smith (1988) study a rescheduling problem with resource constraints, where disruption events are defined as changes in resource availability, changes in due dates, changes in processing times, or new job arrivals. They present a multi-objective mathematical programming formulation for the resource scheduling problem. A quasi-dynamic approach, based on data updating and re-optimizing, is presented to take control whenever a disruption event occurs. If some resource availability is violated, the procedure may delay operations with low priority and release the associated resources for work on higher priority jobs.

Bean and Birge (1991) considered a rescheduling problem with machine disruptions, and investigated "match-up" scheduling heuristics, which compute a transient schedule after a machine disruption. When a disruption occurs, their approach determines how to merge back into the pre-planned schedule at some future date. This approach first fixes an initial match-up point, and resequences jobs on disrupted machines to minimize total tardiness cost. If the cost for the selected match-up point exceeds a predetermined threshold cost, the match-up point is incremented by some value, until the match-up point reaches a predetermined maximum value. A match-up is therefore possible only if there is enough idle time in the original schedule.

Wu, Storer, and Chang (1993) proposed a rescheduling procedure that generates a new schedule at each occurrence of a shop disruption. They use a bicriterion approach, where the two conflicting objectives are minimizing makespan and minimizing deviation from the original schedule. Two sets of local search heuristics were developed. The first set used pairwise swapping methods, with a weighted combination of the two objectives to create a single objective. The second is a local search heuristic based on a genetic algorithm approach, considering a two-dimensional space of makespan and deviation; the quality of any given solution point in the space can be measured by its Euclidean distance to the origin.

Leon, Wu, and Storer (1994) consider robustness measures and robust scheduling methods for job-shop rescheduling with machine breakdowns. Robustness is expressed as a linear combination of the actual makespan of a schedule after disruption and the deviation of the new makespan from the previous value. A time window called slack is defined for each operation, within which the operation can be started without incurring any makespan delay. The robustness measure is the average slack time of the operations to be processed on fallible machines. A robust scheduling solution based on genetic algorithms is proposed.

Meybodi and Foote (1995) use hierarchical production planning (HPP) to solve the production planning and scheduling problem with random demand and production failures (with known failure probability distributions). The main idea in HPP is to decompose a large problem into smaller subproblems, solve the individual subproblems, and then link the results in a coordinated manner to produce a solution to the original problem. A subset of decisions is made at the beginning of the planning horizon, while the remaining decisions are made dynamically, retaining schedule flexibility to compensate for future unforeseen disturbances.

Unal, Uzsoy, and Kiran (1997) consider a single machine scheduling problem with newly arriving jobs that have setup times that depend on their part types. They consider inserting new jobs into the original schedule so as to minimize the total weighted completion time or makespan of the new jobs. Their approach does not, however, incorporate a measure of schedule disruption.

Hall and Potts (2004) consider a rescheduling problem with newly arriving orders. They present two classes of models. In the first class, they minimize schedule cost, subject to a limit on the deviation from the original schedule. In the second class, they minimize a total cost objective, which includes both the original cost measure and the cost of deviation. The efficiency measures they use are either maximum lateness or total completion time, while the stability measures are the maximum (or total) sequence disruption or the maximum (or total) completion time deviation. For each problem, with the performance measures defined above, they either present a polynomial algorithm or prove that one does not exist.

The problem we consider is similar to that of Hall and Potts (2004), since we consider a new job arrival disruption and use a total cost objective function. But we consider a more complex efficiency measure, the weighted total tardiness cost. Single machine scheduling to minimize total weighted tardiness cost is an NP-hard problem, and in certain scheduling contexts, total weighted tardiness cost is a more appropriate objective than maximum lateness or total completion time. In addition, we consider the added feature of available compression time, which allows speeding up processing by paying an additional premium; this represents a new direction in rescheduling research.

The structure of this chapter is as follows. In Section 4.3 we discuss several potential rescheduling policies to deal with disruptions. Section 4.4 considers the case where the relative sequence of the old jobs must remain fixed, and provides an integer programming (IP) formulation for addressing the rescheduling problem. We show that this rescheduling problem can be solved equivalently as a linear program, i.e., the linear relaxation of the IP is integer. We then consider the general case where the sequence of jobs is free to change and the original schedule may no longer be optimal, and we present a very large-scale neighborhood (VLSN) search heuristic based on a network flow approach. Finally, we provide computational results to validate our heuristic's performance.

4.3 Rescheduling Policy Approaches

This section briefly considers a range of heuristic policy approaches a scheduler might take in response to a job newly arriving at a system that has a schedule in place. Depending on the complexity of the rescheduling problem and the relative disruption costs, a scheduler might adopt such an approach to quickly provide good feasible solutions. In response to a new job arrival during original job processing, a scheduler has several options. We refer to these as rescheduling policy options.

Policy Option 1 (Maintain Fixed Sequence): Here we insert the new job into a position in the original schedule, keeping the sequence of all other jobs fixed, and right-shift the jobs scheduled after the insertion position.
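As a concrete illustration of Policy Option 1, the sketch below tries every insertion position and right-shifts the later jobs, scoring each position by tardiness plus disruption cost (compression is ignored for brevity, and the data layout is assumed):

```python
def insert_fixed_sequence(old_jobs, new_job, t_r, h):
    """old_jobs: list of dicts with keys start, p, d, l (jobs not yet begun
    at t_r, in sequence order); new_job: same keys minus 'start'."""
    best_cost, best_pos = float("inf"), None
    for k in range(len(old_jobs) + 1):
        seq = old_jobs[:k] + [new_job] + old_jobs[k:]
        t, cost = t_r, 0.0
        for job in seq:                            # non-delay schedule from t_r
            s, t = t, t + job["p"]
            cost += job["l"] * max(t - job["d"], 0)        # tardiness cost
            if "start" in job:                             # disruption cost
                cost += h * abs(s - job["start"])
        if cost < best_cost:
            best_cost, best_pos = cost, k
    return best_pos, best_cost
```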

This policy is very simple to implement, but it may potentially lead to poor performance as a result of high tardiness costs, depending on the due date and processing time of the newly arriving job and those of the originally scheduled jobs. If the new job is scheduled last, then there is no disruption cost under this approach, although the new job might be severely tardy. This policy approach may be appropriate when disruption costs are very high relative to tardiness costs.

Policy Option 2 (Complete Rescheduling to Minimize Total Cost): Under this approach, the scheduler completely re-solves the scheduling problem with the subset of all available jobs that have not begun processing at the arrival time $t_r$ of the new job, using, for example, the original objective of the sum of weighted total tardiness, disruption, and compression costs.

Policy Option 2 is obviously the preferred approach in principle. However, due to the complexity of the scheduling problems we consider, it is not a practical approach in general for large-scale problems. Because of this, later in Section 4.4 we consider the best way to implement Policy Option 1, which may be appropriate when resequencing the original jobs creates substantial disruption costs. Section 4.5 then presents a heuristic approach for implementing Policy Option 2. As noted in the previous section, past literature (e.g., Bean and Birge 1991) also considers the following policy option, although we do not consider such approaches in this thesis.

Policy Option 3 (Match-up Scheduling): Under this approach, after the new job arrives at time $t_r$, the scheduler sets a pre-defined future match-up time $t_r + T$, inserts the new job between time $t_r$ and $t_r + T$, and adjusts the part of the original schedule between time $t_r$ and $t_r + T$, while the jobs scheduled beyond time $t_r + T$ remain as in the original schedule.

Match-up scheduling can be viewed as a hybrid of Policies 1 and 2, as part of the schedule (beyond time $t_r + T$) remains fixed, while the part of the schedule between time $t_r$ and $t_r + T$ may be re-optimized. This policy approach may be reasonable when there is a large number of jobs in the original problem and the new job must be completed relatively soon after arrival (the optimal solution will then likely return to the original schedule at some point). However, as we mentioned in the previous section, it may be difficult to set the appropriate value for the match-up point. Moreover, given a match-up point, we are not necessarily guaranteed that a match-up can occur at this time.

4.4 Rescheduling with Fixed Sequence Approach

This section considers the problem of scheduling a newly arriving job when the sequence of originally scheduled jobs must remain fixed. We know that the minimum weighted tardiness cost rescheduling problem is NP-hard, since the corresponding single-machine scheduling problem is NP-hard. However, if the original sequence of jobs must remain fixed, we can solve the resulting rescheduling problem under a newly arriving job in polynomial time. Fixing the original sequence of the jobs is reasonable in certain situations, particularly when supplier material arrivals must occur in sequence with planned production (e.g., as in a JIT supply mode). Although we do not consider a disruption cost associated with a sequence change, in some situations changing the sequence of the jobs in the initial schedule is not permitted or carries a high penalty cost. In this section, we discuss the case where the fixed sequence policy is used. The problem is reduced to finding an optimal insertion position for the new job, along with the optimal compression time for each job, which together determine the sum of weighted tardiness, compression, and disruption costs.

We assume that a non-delay schedule is always used, meaning that no idle time exists between jobs in the schedule. The availability of "compression time" is an important feature of our problem, and the following property allows us to limit the solution space in our rescheduling problem with respect to the compression time values we must consider.

Property 4.1: If the compression time in the original schedule is set optimally, and Policy Option 1 is used, then there exists an optimal new schedule (after rescheduling) in which the compression times of the original jobs do not decrease.

Proof: In the original schedule, given a job i, suppose job i uses some positive amount of compression time. Let $x_i$ denote a reduction in job i's compression time, where $x_i$ is positive and arbitrarily small. The total compression cost will decrease by $cx_i$ (with respect to the original schedule), while the completion times of job i and its succeeding jobs will be delayed by an additional $x_i$. Let $a^o(x_i)$ denote the increase in tardiness cost at $x_i$ in the original schedule. Since the original amount of compression time was set optimally, we must have $cx_i \le a^o(x_i)$. Let $a^r(x_i)$ denote the increase in the tardiness cost at $x_i$ after the new job has been inserted at some point in the schedule. Because $a^r(x_i) \ge a^o(x_i)$ (since at least as many jobs are tardy after rescheduling), we have $cx_i \le a^r(x_i)$, and the solution with compression time reduction cannot be better than the one without. In addition to the change in compression and tardiness costs, an additional disruption cost of $x_i h$ is also incurred for each successor of job i if the compression time of job i is reduced. Since $cx_i \le a^r(x_i) < a^r(x_i) + mhx_i$, where m is the number of jobs succeeding job i, compression time reduction strictly increases the cost of the schedule.

To determine the increase in compression times, we first assume that we already know the insertion position of the new job, and denote this as position k. Since the new job arrives at time $t_r$, we cannot change the schedule before this time. Let [q] be the index of the first job (in position q) whose characteristics we can change. We divide the initial schedule after job [q] into two parts: the jobs scheduled before the new job belong to the set $J_1$, and we index these jobs as [q], …, [k − 1]; the jobs scheduled after the new job belong to the set $J_2$, and we index these jobs as [k + 1], …, [n]. Property 4.1 implies that the compression time of the jobs will not decrease after inserting the new job, and we now let $x_{[i]}$ denote the increase in compression time of job [i] with respect to the original schedule. Note that since we assume we know the insertion position of the new job, the $x_{[i]}$ (for each job [i]) form our only set of decision variables. Let $z_{[i]} = \sum_{j=q}^{i} x_{[j]}$ denote the cumulative increase in compression times up to and including job [i] for the new schedule, and recall that the newly arriving job has index n. The total tardiness cost as a function of the $z_{[i]}$ variables is given by

$\sum_{i=q}^{k-1}\left(C_{[i]} - d_{[i]} - z_{[i]}\right)^+ l_{[i]} + \left(C_{[k-1]} + p_n - d_n - z_{[k]}\right)^+ l_n + \sum_{i=k+1}^{n}\left(C_{[i]} + p_n - d_{[i]} - z_{[i]}\right)^+ l_{[i]},$  (4.1)

where the completion times $C_{[i]}$ are the fixed completion times from the original schedule. The cost function (4.1) represents the total tardiness cost as a function of the $z_{[i]}$ variables, after inserting the new job n in position k with processing time $p_n$. Defining $a_{[i]} = C_{[i]} - d_{[i]}$ for each job $[i] \in J_1$, $a_n = C_{[k-1]} + p_n - d_n$, and $a_{[i]} = C_{[i]} + p_n - d_{[i]}$ for jobs $[i] \in J_2$, we can express (4.1) in the more compact form $\sum_{i=q}^{n}\left(a_{[i]} - z_{[i]}\right)^+ l_{[i]}$.

Note that after inserting the new job and before adjusting the compression time in the schedule, the starting times of jobs in the set $J_2$ are delayed by an additional $p_n$. If $z_{[k]} = p_n$, then the starting time deviation for jobs in $J_2$ is zero. If $z_{[k]} < p_n$, adding compression time for jobs in $J_2$ will reduce the starting time deviation from the original schedule; on the other hand, if $z_{[k]} > p_n$, this will again begin to increase the starting time deviation (since we follow a non-delay schedule). Since the original schedule for the old jobs is optimal, if $z_{[k]} > p_n$, no additional compression time should be added to jobs in the set $J_2$ (otherwise additional compression time would have been optimal in the original schedule). Thus either $z_{[k]} \ge p_n$ and $z_{[k+i]} = z_{[k]}$ for i = 1, …, n − k, or $z_{[i]} < p_n$ with $z_{[k+i]} \ge z_{[k+i-1]}$ for i = 1, …, n − k. We can therefore express the disruption cost as follows:

$h\sum_{i=q}^{k-1} z_{[i]} + h(n-k)\left(z_{[k]} - p_n\right)$, for $z_{[k]} \ge p_n$;
$h\sum_{i=q}^{k-1} z_{[i]} + h\sum_{i=k+1}^{n}\left(p_n - z_{[i]}\right)$, for $z_{[k]} < p_n$ and $z_{[i]} \le p_n$.

Adding to this the tardiness cost $\sum_{i=q}^{n}\left(a_{[i]} - z_{[i]}\right)^+ l_{[i]}$ and the additional compression cost $cz_{[n]}$, the total cost can be written as

$g(z) = \sum_{i=q}^{n}\left(a_{[i]} - z_{[i]}\right)^+ l_{[i]} + cz_{[n]} + h\sum_{i=q}^{k-1} z_{[i]} + \begin{cases} h(n-k)\left(z_{[k]} - p_n\right), & \text{for } z_{[k]} \ge p_n,\\ h\sum_{i=k+1}^{n}\left(p_n - z_{[i]}\right), & \text{for } z_{[k]} < p_n \text{ and } z_{[i]} \le p_n.\end{cases}$  (4.2)
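For a candidate vector z satisfying the monotonicity conditions above, the total cost g(z) in (4.2) can be evaluated directly, as in the following sketch (indexing and data layout are our own assumptions):

```python
def total_cost(z, a, l, q, k, n, p_n, c, h):
    """z, a, l: dicts keyed by schedule positions q..n; returns g(z)."""
    tardiness = sum(max(a[i] - z[i], 0) * l[i] for i in range(q, n + 1))
    compression = c * z[n]                             # added compression cost
    disruption = h * sum(z[i] for i in range(q, k))    # jobs ahead of job n
    if z[k] >= p_n:                                    # J2 jobs shifted earlier
        disruption += h * (n - k) * (z[k] - p_n)
    else:                                              # J2 jobs still shifted later
        disruption += h * sum(p_n - z[i] for i in range(k + 1, n + 1))
    return tardiness + compression + disruption
```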

The minimum total cost problem can be formulated as follows.

Minimize g(z)
Subject to: $0 \le z_{[i]} - z_{[i-1]} \le u_{[i]} - \bar{x}_{[i]}$,  i = q, …, n,

where $\bar{x}_{[i]}$ denotes the compression time already applied to job [i] in the original schedule. The constraints require non-decreasing $z_{[i]}$ values and that the compression time of each job not exceed its compression time limit. To simplify the notation, we assume without loss of generality that l jobs remain in the schedule, and index the remaining jobs in the schedule from 1 to l according to their sequence in the new schedule (here we assume the new job has been inserted in position k in the new schedule, with 1 ≤ k ≤ l). This produces the following equivalent formulation:

Minimize $\sum_{i=1}^{l} l_i y_i + h\sum_{i=1}^{k-1} z_i + h(l-k)b_k + h\sum_{i=k+1}^{l} b_i + cz_l$  (4.3)

Subject to:
$z_i - z_{i-1} \ge 0$,  i = 1, …, l,  (4.4)
$z_i - z_{i-1} \le u_i - \bar{x}_i$,  i = 1, …, l,  (4.5)
$y_i \ge a_i - z_i$,  i = 1, …, l,  (4.6)
$b_k \ge z_k - p_n$,  (4.7)
$b_i \ge p_n - z_i$,  i = k + 1, …, l,  (4.8)
$y_i, b_i, z_i \ge 0$ and integer,  i = 1, …, l.  (4.9)

Constraints (4.4) and (4.5) ensure that the cumulative compression time is non-decreasing and that compression times do not exceed their upper limits. Constraint set (4.6) tracks tardiness for each job, where $y_i$ equals the tardiness of job i. Constraint (4.7) forces the term $h(l-k)b_k$ in the objective function to zero if $z_k \le p_n$; otherwise it equals $h(l-k)(z_k - p_n)$. Constraint set (4.8) similarly forces the individual terms of $h\sum_{i=k+1}^{l} b_i$ to zero if $z_i \ge p_n$ (since $z_{k+1} \ge z_k$); otherwise each such term equals $h(p_n - z_i)$.

For the above formulation, we can show that the constraint matrix is totally unimodular, which implies that as long as all problem data are integer, we can solve the problem as a linear program and obtain an integer optimal solution. To show this, let A be the matrix of coefficients of the constraint set $z_i - z_{i-1} \ge 0$, where A has a network flow structure, and each row of A has (at most) one "1" and one "−1", i.e.,

$A = \begin{pmatrix} 1 & 0 & \cdots & 0 & 0 \\ -1 & 1 & \cdots & 0 & 0 \\ \vdots & \ddots & \ddots & & \vdots \\ 0 & \cdots & -1 & 1 & 0 \\ 0 & \cdots & 0 & -1 & 1 \end{pmatrix}_{l \times l}$

The constraint matrix D for the formulation (4.3)–(4.9) can then be written, with the row blocks corresponding to constraint sets (4.4)–(4.8), respectively, as:

$D = \begin{pmatrix} A & 0 & 0 & 0 \\ A & 0 & 0 & 0 \\ I_1 & I_1 & 0 & 0 \\ B & 0 & 1 & 0 \\ C & 0 & 0 & I_2 \end{pmatrix}$

The columns of D correspond to the coefficients of the $z_i$ (i = 1, …, l), $y_i$ (i = 1, …, l), $b_k$, and $b_i$ (i = k + 1, …, l) variables, respectively. The matrix $I_1$ is the l × l identity matrix; $I_2$ is the (l − k) × (l − k) identity matrix; B is a zero row vector of dimension l, except that the k-th entry is −1; C is an (l − k) × l matrix whose first k columns contain all zeros and whose columns k + 1, …, l consist of the $I_2$ identity matrix. Thus there are four groups of columns in D as shown above. We can partition these columns into two sets: $M_1$ = {column groups 1 and 3}; $M_2$ = {column groups 2 and 4}.

4.5 Heuristic Approach for Resequencing Original Jobs

In this section, we discuss the general case in which the original sequence of jobs is allowed to change during rescheduling. We propose a heuristic method based on neighborhood search to solve this problem, with a starting solution that uses the same sequence for the old jobs as in the original schedule. This heuristic can be used to solve the rescheduling problem with different objective functions, using different rescheduling policies (e.g., Policy 1, 2, or 3 from Section 4.3), and it can even be extended to handle multiple new job arrivals. As an example of the application of this heuristic method, we consider a rescheduling problem with a newly arriving job and an objective of minimizing the sum of total weighted tardiness cost, compression cost, and disruption cost. The other rescheduling problems we have discussed can be solved with a similar approach to the one we next describe.

Given a feasible solution (a schedule), its neighborhood is generated by changing the sequence of jobs and changing the compression time applied to each job. A larger neighborhood typically implies better quality of locally optimal solutions and of the final solution. At the same time, the larger the neighborhood, the longer it takes to search the neighborhood at each iteration.

For this reason, a larger neighborhood does not necessarily produce a more effective heuristic, unless one can search the larger neighborhood in a very efficient manner. There are a number of neighborhood search heuristics in the literature applied to a broad range of combinatorial optimization problems. Here we use a technique called very large-scale neighborhood (VLSN) search; see, for example, Ahuja, Ergun, Orlin, and Punnen (2002). Although this heuristic approach has been used to solve a number of classical NP-hard problems, to our knowledge, this is the first application of VLSN search to a rescheduling problem.

In VLSN search, the neighborhood is searched using network flow techniques that implicitly enumerate solutions in the neighborhood. Given an initial schedule, we construct an improvement graph that allows us to search a pre-defined large neighborhood structure. Each arc in this graph represents an alteration of the schedule and has an associated cost determined by the nature of the change. In our application, an improved schedule is found by finding a shortest path with negative cost in the improvement graph, which implies that a lower-cost neighboring solution exists. We next explain how to construct the improvement graph.

To limit the size of the required improvement graph and the associated computational effort, we only allow varying the compression time of the newly arriving job, while the other jobs' compression times are fixed as in the original schedule. As we will see, if we were to allow changing the compression times of the original jobs, an additional $O(u_{\max} n^2)$ arcs for compression time changes would be required in the improvement graph. In addition to the increased complexity of the improvement graph, the complexity of the branch and bound algorithm (which we later discuss) also increases substantially if we permit changing all job compression times in rescheduling.

Let $G(V, E)$ be the improvement graph associated with the current schedule. The node set $V = \{0, 1, 2, \dots, n, n+1\}$ of $G$ consists of $n + 2$ elements, where each node $i$ represents the job in position $[i]$ for $1 \le i \le n$; nodes 0 and $n + 1$ are dummy nodes. An arc $(i, j)$ in the graph represents a move involving jobs $(i, \dots, j-1)$, for $1 \le i, j \le n+1$. Arc $(0, 1)$ represents a change in compression time for the newly arriving job. The arc set $E$ consists of all the arcs that we next define.

Arcs $(i, i+1)$ for $1 \le i \le n$: these arcs represent the situation where job $[i]$ is not involved in any move. Hence the cost on these arcs is $c(i, i+1) = 0$.

Arcs $(i, j)$ for $1 \le i \le n-1$ and $i + 2 \le j \le n+1$: these arcs represent local updates that only affect jobs $[i], \dots, [j-1]$. Let $s^0_{[i]}$ denote the starting time of job $[i]$ in the original schedule, while $C_{[i]}$, $s_{[i]}$, and $p_{[i]}$ are the completion, starting, and processing times of job $[i]$ in the current schedule. There are three kinds of arcs in the graph for each possible $(i, j)$ pair, as we next describe.

Insertions: Arcs $(i, j)_1$ represent the ejection of job $[i]$ from its current position and its re-insertion between job $[j-1]$ and job $[j]$. The cost for these arcs is as follows:

$$\begin{aligned} C(i,j)_1 = {} & l_{[i]}\left[\left(C_{[j-1]} - d_{[i]}\right)^+ - \left(C_{[i]} - d_{[i]}\right)^+\right] + \sum_{k=i+1}^{j-1} l_{[k]}\left[\left(C_{[k]} - p_{[i]} - d_{[k]}\right)^+ - \left(C_{[k]} - d_{[k]}\right)^+\right]\\ & + h\left(\left|C_{[j-1]} - p_{[i]} - s^0_{[i]}\right| - \left|s_{[i]} - s^0_{[i]}\right|\right) + h\sum_{k=i+1}^{j-1}\left(\left|s_{[k]} - p_{[i]} - s^0_{[k]}\right| - \left|s_{[k]} - s^0_{[k]}\right|\right). \end{aligned}$$

The first and second terms in this function are the increases in tardiness cost of job $[i]$ and of jobs $[i+1], \dots, [j-1]$, while the third and fourth terms are the increases in disruption cost of job $[i]$ and of jobs $[i+1], \dots, [j-1]$.

Swaps: Arcs $(i, j)_2$ represent the swap of job $[i]$ with job $[j-1]$. The cost for these arcs is as follows:

$$\begin{aligned} C(i,j)_2 = {} & l_{[i]}\left[\left(C_{[j-1]} - d_{[i]}\right)^+ - \left(C_{[i]} - d_{[i]}\right)^+\right] + l_{[j-1]}\left[\left(s_{[i]} + p_{[j-1]} - d_{[j-1]}\right)^+ - \left(C_{[j-1]} - d_{[j-1]}\right)^+\right]\\ & + \sum_{k=i+1}^{j-2} l_{[k]}\left[\left(C_{[k]} + p_{[j-1]} - p_{[i]} - d_{[k]}\right)^+ - \left(C_{[k]} - d_{[k]}\right)^+\right]\\ & + h\left(\left|C_{[j-1]} - p_{[i]} - s^0_{[i]}\right| - \left|s_{[i]} - s^0_{[i]}\right|\right) + h\left(\left|s_{[i]} - s^0_{[j-1]}\right| - \left|s_{[j-1]} - s^0_{[j-1]}\right|\right)\\ & + h\sum_{k=i+1}^{j-2}\left(\left|s_{[k]} + p_{[j-1]} - p_{[i]} - s^0_{[k]}\right| - \left|s_{[k]} - s^0_{[k]}\right|\right). \end{aligned}$$

The first through third terms in $C(i,j)_2$ represent the increases in tardiness cost of jobs $[i]$ and $[j-1]$ and of jobs $[i+1], \dots, [j-2]$, while the fourth through sixth terms are the corresponding increases in disruption cost.

2-opts: Arcs $(i, j)_3$ represent a 2-opt move (reversal) involving the subsequence of jobs $[i], [i+1], \dots, [j-1]$ in the current schedule, i.e., the processing order of jobs $[i], [i+1], \dots, [j-1]$ is reversed. Let $C'_{[l]}$ denote the completion time of job $[l]$ after such a reversal, where

$$C'_{[l]} = C_{[l]} + \sum_{q=l+1}^{j-1} p_{[q]} - \sum_{q=i}^{l-1} p_{[q]}, \quad \text{for } l = i, \dots, j-1.$$

The cost for such arcs is then given by

$$C(i,j)_3 = \sum_{k=i}^{j-1} l_{[k]}\left[\left(C'_{[k]} - d_{[k]}\right)^+ - \left(C_{[k]} - d_{[k]}\right)^+\right] + h\sum_{k=i}^{j-1}\left(\left|C'_{[k]} - p_{[k]} - s^0_{[k]}\right| - \left|s_{[k]} - s^0_{[k]}\right|\right).$$

Compression: Arc $(0, 1)_y$ represents the action of adding $y$ units of compression time to the newly arriving job. If the new job in the current schedule has $x_n$ units of compression time, the graph contains $(u_n + 1)$ copies of arc $(0, 1)_y$, with $y = -x_n, \dots, 0, \dots, (u_n - x_n)$. If the new job is in position $[k]$, the cost for these arcs is as follows:

$$C(0,1)_y = \sum_{i=k}^{n} l_{[i]}\left[\left(C_{[i]} - y - d_{[i]}\right)^+ - \left(C_{[i]} - d_{[i]}\right)^+\right] + h\sum_{i=k+1}^{n}\left(\left|C_{[i]} - y - p_{[i]} - s^0_{[i]}\right| - \left|s_{[i]} - s^0_{[i]}\right|\right) + c\,y.$$

The first term in the above expression is the change in tardiness cost, the second term is the change in disruption cost, and the third term is the new job's compression cost. As Figure 4-1 shows, the improvement graph is acyclic. An improved schedule is obtained by finding a path from node 0 to node $(n+1)$ for which the sum of the arc costs on the path is negative.

Figure 4-1. A partial improvement graph.

Theorem 4.1: There exists an algorithm that searches the defined neighborhood in $O(n^3)$ time.

Proof: Since the graph is acyclic, a label correcting algorithm solves the shortest path problem in $O(m)$ time, where $m$ is the number of arcs. There are $O(n)$ arcs for each kind of move from a node, which implies that there are $O(n^2)$ arcs in the graph. At each step, the calculation of the cost of arc $(i, j)$ requires $O(n)$ time. Therefore, the total complexity of searching the neighborhood is $O(n^3)$.
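Because the graph is acyclic, with every arc directed from a lower-indexed to a higher-indexed node, the label correcting pass in the proof amounts to a single scan of the nodes in index order. The following is a minimal sketch of that pass; the arc list and the arc_cost callback are assumptions supplied by the caller, and the sketch omits the per-node schedule-state updates that the full heuristic layers on top (discussed next).

    # Minimal sketch: one label correcting pass over the acyclic improvement
    # graph. Nodes are 0..n+1 and every arc (i, j) satisfies i < j, so scanning
    # nodes in increasing order is a topological-order shortest path computation.
    import math

    def search_neighborhood(n, arcs, arc_cost):
        """arcs: list of (i, j, move) with i < j; arc_cost(i, j, move) -> float."""
        dist = [math.inf] * (n + 2)
        pred = [None] * (n + 2)
        dist[0] = 0.0
        out = {}
        for i, j, move in arcs:                 # bucket arcs by tail node
            out.setdefault(i, []).append((j, move))
        for i in range(n + 1):                  # topological order
            if dist[i] == math.inf:
                continue
            for j, move in out.get(i, []):
                c = dist[i] + arc_cost(i, j, move)
                if c < dist[j]:
                    dist[j] = c
                    pred[j] = (i, move)
        path, j = [], n + 1                     # recover the chosen moves
        while j != 0 and pred[j] is not None:
            i, move = pred[j]
            path.append((i, j, move))
            j = i
        # A negative label at node n+1 certifies an improving neighbor.
        return dist[n + 1], list(reversed(path))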

Notice that the cost of arc $(i, j)$ depends not only on nodes $i$ and $j$, but also on the path leading to node $i$, because the schedule may change after traversing each arc. If we wanted to find the best improvement, we would therefore have to enumerate all possible paths to every node, which implies exponential complexity. As a heuristic approach to overcome this, in our label correcting algorithm, we express the state of the schedule at each node by the completion times of all jobs and keep this information updated at each step (at each node). When we come to node $i$, we check all incoming arcs to the node and select the arc $(j, i)$ that produces the lowest distance label for node $i$, and let the distance label of node $i$ equal the distance label of node $j$ plus the cost of arc $(j, i)$. We then update the completion times of all jobs at node $i$ according to the move taken, and update the cost of each arc leaving this node. Note that for each node, we select only one incoming arc, and we therefore check only a subset of the paths to each node. As a result, the algorithm does not guarantee finding the best possible improvement in the neighborhood of the current schedule, and if no negative-cost shortest path exists at an iteration, this does not mean that no improving schedule exists in the neighborhood of the current schedule.

At each iteration, the algorithm constructs an improvement graph and finds the shortest path in the graph. If the shortest path has positive cost, we stop; otherwise, we update the current schedule, construct another improvement graph, and continue the search. An initial solution is constructed by simply inserting the new job in some position in the old schedule while keeping the sequence of all other jobs fixed. To improve the solution procedure, we use multiple starting points, obtained by inserting the new job into several different positions of the old schedule.

4.6 Computational Testing

To evaluate the efficiency of the VLSN heuristic, we compare its performance to the optimal solution obtained via branch and bound for a set of randomly generated test problems of medium size. The branch and bound algorithm begins with an empty schedule and adds a job to the schedule at each level $k$, for $k = 1, \dots, n$, where a level corresponds to the number of jobs added to create a partial schedule. At each node at level $k$, a job is added in the $(n-k+1)$st position of the schedule, and a lower bound on the cost at this node and its children is computed as the cost of the partial schedule. Note that this implies we add jobs to the schedule in reverse order as we move down the tree. The algorithm uses a depth-first branching strategy in order to quickly obtain a feasible solution.

When we add the new job at a level, we select the best compression time for the new job in the corresponding partial schedule. Note that the start time of the first job (here the new job) in the partial schedule is fixed and equals the sum of the processing times of the unscheduled jobs. The compression time for the new job influences only the cost of the partial schedule containing the new job and its successors, which is the same for all children nodes. Thus, the best compression time applied to the new job when it is inserted is also the best amount of compression time for all children nodes. If the lower bound on cost for a node is larger than the current upper bound, we prune the node. The initial upper bound is the cost of the solution from the VLSN heuristic; this value is updated whenever we find a better feasible solution.
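The following schematic shows the shape of this depth-first search in code. Here partial_cost is a caller-supplied function returning the cost of a partial schedule occupying the last positions; it serves as both the bounding function and the complete-schedule evaluator, and the selection of the new job's best compression time is assumed to be folded into it. This is an illustration of the search structure under those assumptions, not the exact implementation used in our tests.

    # Schematic depth-first branch and bound: jobs are prepended, so each
    # added job occupies an earlier position (jobs enter in reverse order),
    # and a node is pruned when the partial schedule cost reaches the incumbent.
    def branch_and_bound(jobs, partial_cost, initial_ub):
        """jobs: job identifiers; initial_ub: cost of the VLSN solution."""
        best = {"cost": initial_ub, "seq": None}

        def extend(partial, remaining):
            if partial and partial_cost(partial) >= best["cost"]:
                return                           # lower bound >= upper bound: prune
            if not remaining:
                best["cost"] = partial_cost(partial)
                best["seq"] = list(partial)
                return
            for job in list(remaining):          # children of this node
                extend([job] + partial, remaining - {job})

        extend([], frozenset(jobs))
        return best["cost"], best["seq"]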

We randomly generate two problem types, with each problem instance containing 40 jobs. The first problem type uses an optimal initial schedule for the original set of jobs, while the second problem type does not guarantee that the initial schedule is optimal. In the first class of problems, the optimal schedule after rescheduling is expected to be similar to the original one; thus it is easier for the VLSN heuristic to perform the neighborhood search than in the second class of problems, which may require a great deal of neighborhood search. The test problem parameters are generated using the following rules, where $U[a, b]$ denotes the uniform distribution between $a$ and $b$:

Disruption cost per unit time: $h = U[1, 3]$;
Job processing times: $p_j = U[3, 12]$, for each $j \in J$;
Original schedule completion times: $C_j = C_{j-1} + p_j$, for each $j \in J$;
Job tardiness cost per unit time: $l_j = U[1, 2h]$, for each $j \in J$;
New job due date: $d_n = U\left[10, \sum_{i=1}^{n-1} p_i\right]$;
Compression time upper bound: $u_n = U[1, p_n - 1]$.

For the first problem class, we set $d_j = C_j + U[0, 3]$, while for the second problem class, we set $d_j = C_j + U[-4, 4]$, for $j \in J$. For each problem class we generated 50 test instances.
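A sketch of this instance generator in Python (random.Random mirrors the $U[a, b]$ draws; integer draws are used for the processing times and the compression bound, which is one reasonable reading of the rules above):

    # Sketch of the random instance generator described above.
    import random

    def generate_instance(n=40, problem_class=1, seed=None):
        rng = random.Random(seed)
        h = rng.uniform(1, 3)                        # disruption cost per unit time
        p = [rng.randint(3, 12) for _ in range(n)]   # p[-1] is the new job
        C, t = [], 0
        for j in range(n - 1):                       # original completion times
            t += p[j]
            C.append(t)
        l = [rng.uniform(1, 2 * h) for _ in range(n)]
        lo, hi = (0, 3) if problem_class == 1 else (-4, 4)
        d = [C[j] + rng.uniform(lo, hi) for j in range(n - 1)]
        d.append(rng.uniform(10, sum(p[:-1])))       # new job due date
        u_n = rng.randint(1, p[-1] - 1)              # new job compression bound
        return {"h": h, "p": p, "C": C, "l": l, "d": d, "u_n": u_n}

    instances = [generate_instance(seed=s) for s in range(50)]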

Since the branch and bound algorithm may run quite long for some problem instances, we limit the branch and bound solution time to one hour, and use the best upper bound obtained by the branch and bound algorithm as a benchmark for comparison with the VLSN approach. For the VLSN heuristic, for each instance, we generate eight initial schedules by inserting the new job in eight different positions of the schedule (positions 5, 10, 15, 20, 25, 30, 35, and 40). Tables 4-1 and 4-2 summarize the results of our computational tests.

Table 4-1. Results for problem class 1 (50 instances).
  Instances in which B&B finds the optimal solution: 41
  Instances in which VLSN and B&B both find the optimal solution: 41
  Instances in which B&B is truncated by the one-hour limit: 9
  Instances in which B&B obtains a better solution: 0
  Average run time for VLSN (min.): < 1
  Average run time for B&B (min.): 12

Table 4-2. Results for problem class 2 (50 instances).
  Instances in which B&B finds the optimal solution: 31
  Instances in which VLSN and B&B both find the optimal solution: 31
  Instances in which B&B is truncated by the one-hour limit: 19
  Instances in which B&B obtains a better solution: 0
  Average run time for VLSN (min.): < 1
  Average run time for B&B (min.): 24

The tables show that for the instances in which branch and bound obtains the optimal solution within one hour, the VLSN heuristic also obtains an optimal solution. For those problems where the branch and bound algorithm cannot find an optimal solution within an hour, the solution obtained by the VLSN heuristic is always better than the best solution from branch and bound, and is obtained in less than one minute. The VLSN heuristic is therefore a fast and effective method for solving the rescheduling problem with a newly arriving job.

As expected, we also observe from the tables that both VLSN and B&B perform much better on problem class 1 than on problem class 2. B&B has the same worst-case complexity for the two problem classes, since it can, in the worst case, enumerate all possible solutions. However, B&B uses the VLSN solution as its initial upper bound, and this upper bound is generally closer to the optimal solution for problem class 1, since the original schedule is optimal for this problem class. This leads to better B&B performance on problem class 1.

4.7 Conclusions

This chapter examines the rescheduling problem for a newly arriving job. We measure the stability of the schedule using a disruption cost, which is associated with the deviation of job starting times from the original schedule. The efficiency of the schedule is evaluated by the total weighted tardiness cost plus compression cost. The objective function is the sum of the disruption cost and the efficiency cost; thus we transform the rescheduling problem into a minimum cost problem with a single objective. We provide three rescheduling approaches and two computational test cases for this problem. For the case in which the original jobs must retain their original sequence, an MIP formulation is provided that can be solved as a linear program with a guaranteed integer optimal solution. In the general case where the sequence of jobs is free to change, we implement a VLSN search heuristic. Comparing the computational results with those of a branch and bound algorithm, we showed that the VLSN heuristic provides high-quality solutions in very short times.

CHAPTER 5
PREDICTIVE SCHEDULING ON A SINGLE MACHINE WITH UNCERTAIN FUTURE JOBS

5.1 Problem Motivation and Literature Review

This chapter considers a single machine scheduling problem with uncertain jobs. Such problems arise for firms who need to plan a schedule for jobs long before the jobs are executed, in order to plan the timing of material purchases and preparation (within, for example, a JIT environment), resource allocation, or worker training. At the time the schedule is created, uncertainty exists with respect to the future jobs that must be processed. For example, at the time the schedule is being planned, the firm may be in competition with other firms for potential future jobs, and will therefore face some uncertainty regarding the set of jobs it will need to perform in the future. The schedule planner must therefore create a schedule over some planning horizon without complete knowledge of all jobs that will require processing over the horizon. Such contexts raise new and interesting questions about how to schedule resources under uncertainty in future jobs.

Uncertainty in scheduling can take many forms. For example, we might consider scheduling a known set of jobs with uncertainty in job-specific parameters such as processing times or due dates. Another form of uncertainty involves schedule disruptions due to, for example, machine failures, new job arrivals, or changes in due dates. In this latter case, the complete set of jobs that require processing might not be known at the time the schedule is created. In the extreme case, jobs may arrive periodically according to some probability distribution of inter-arrival times, which leads to a class of problems collectively known as on-line scheduling problems.

The problem we consider in this chapter addresses planning contexts where job-specific parameters are known a priori, but a subset of jobs exists that the firm may or may not need to perform during the planning horizon, depending on whether the customers associated with those jobs choose the firm as their supplier. Before discussing this problem in greater detail, we next discuss past literature that deals with scheduling under uncertainty.

Mehta and Uzsoy (1999) classified approaches to scheduling in the presence of uncertainties into four main categories: completely reactive approaches, predictive-reactive scheduling, robust scheduling approaches, and knowledge-based scheduling approaches. Since their focus was on scheduling problems subject to disruptions, we discuss an additional category beyond these four: non-reactive expected value approaches, since this is also an approach for dealing with uncertainty in scheduling. We next discuss these approaches.

Non-reactive expected value approaches. A conventional approach to dealing with uncertainty in scheduling is the use of stochastic optimization methods. A number of researchers (e.g., Pinedo 1983 and Sarin 1991) have studied the single machine problem in an uncertain environment where job attributes (e.g., processing times, due dates) are stochastic, but the set of all jobs that require processing is known in advance, along with the associated parameter probability distributions. These papers assume the processing times and due dates are statistically independent and follow known distributions. Consequently, the objective of their models is to determine a sequence that minimizes the expected value of some regular scheduling objective.

This distribution-based method cannot be applied in all contexts, since it requires the availability of past data regarding job-specific parameters from which to develop appropriate probability distributions. Alternative methods based on probability theory may be used in situations where past data are of little help for the future. Hapke et al. (1994) state that fuzzy numbers can be used for the estimation of job-specific parameter distributions. In their study, fuzzy numbers are used to represent the skill levels of the resources, the complexity levels of the activities, and the activity durations. Similarly, Pan et al. (2001) use fuzzy set theory to solve the project scheduling problem where uncertainty exists in activity durations.

Uncertainty in scheduling has also been considered within a queuing framework where orders arrive randomly with random processing times, according to some known inter-arrival distribution and processing time distribution. Kumar and Meyn (1995) develop a programmatic procedure for establishing the stability of queuing networks and simple scheduling policies for such problems.

Each of these non-reactive expected value based approaches to scheduling tries to determine a sequence for a known set of jobs (or a policy for dealing with random arrivals, in the case of queuing based models) in advance, to optimize the expected value of some regular scheduling measure, such as expected flow time, expected tardiness, or expected maximum lateness. The schedule or policy is then fixed during execution. We next discuss the four categories identified by Mehta and Uzsoy (1999) for dealing with uncertainty due to schedule disruptions.

Completely reactive approaches. Under this approach, no schedule is generated in advance and decisions are made locally in real time, based on the current set of available jobs.

Many on-line scheduling methods fall into this category (for examples, see Lu et al. 2003 and Zhang et al. 2002).

Predictive-reactive scheduling. A predictive schedule is generated in advance of execution using shop floor information regarding the potential for disruptions. When a disruption occurs during execution, the predictive schedule may need to be modified. Wu et al. (1993) propose a multi-criteria rescheduling approach for a single machine problem, with an objective function that consists of a weighted sum of the makespan and the total modified cost resulting from rescheduling, which is an associated sequence change cost.

Robust scheduling approaches. Under this approach, a predictive schedule is built using the available information regarding possible disruptions, with a focus on minimizing the deviation between the critical performance measure values of the realized and predictive schedules. An example of this approach is provided by Yang and Yu (2002). This approach focuses on minimizing the effects of the disruption on the critical performance measure (such as makespan or maximum lateness), whereas predictive-reactive scheduling tries to ensure that the predictive and realized schedules result in minimal disruption effects, minimizing the disruption in the schedule itself (such as the deviation in starting times or changes to the planned job sequence).

Knowledge-based reactive scheduling approaches. This approach provides a mechanism for selecting a rescheduling method from the available alternatives. The use of knowledge-based reasoning is indicated when an application encounters a set of situations, each requiring the selection of one response from a large set of possible responses. The approaches include AI-based scheduling, genetic algorithms, and neural-network-based and case-based reasoning methodologies (e.g., Szelke and Kerr 1994).

Mehta and Uzsoy (1999) consider a predictive scheduling approach that reduces the effects of machine breakdowns on the ability to complete jobs as planned, while maintaining acceptable schedule performance. This is done by inserting idle time into the predictive schedule to allow recovery from the disruptions that may occur. Thus, the scheduled completion time of a job in the predictive schedule is based on both the structure of the predicted schedule and information on any disruptions that can be expected to occur. They use the total tardiness of the jobs as the primary performance measure and evaluate different predictive and rescheduling methods through computational experiments.

In a similar manner to Mehta and Uzsoy (1999), we will use the idea of planned idle time to generate a predictive schedule. Instead of disruptions occurring as a result of machine breakdowns, however, we consider uncertainty in whether the firm must ultimately perform a given job. We refer to a job that the firm may have to perform in the future as an uncertain job. If the customer associated with the job chooses the firm to perform the job, we say that the firm "wins" the job. Any idle time planned in the schedule is slack time planned for the processing of an uncertain job, and the duration of this slack time is a decision variable in our model. Although the complete set of jobs that must be processed is unknown to the firm at the start of the planning horizon (because the firm competes against other firms for certain jobs), we assume that the probability of winning each uncertain job is known to the firm prior to schedule planning. The firm learns whether it wins or loses the uncertain jobs at some point prior to the required execution of the jobs, but subsequent to creating the schedule plan.

Some limited past research has considered uncertainty in whether a potential job or set of jobs requires processing. Liberatore and Pollack-Johnson (2003) consider project network structures, where sources of uncertainty result from different sequences or combinations of events occurring or not occurring, and from random durations of project activities. The activities are represented in an activity-on-node (AoN) network. The uncertain network structure is expressed through a set of network scenarios, each having a specified probability of occurrence, and the expected values of several objective function forms are used. However, this scenario analysis approach cannot be applied to situations where the number of uncertain events is extremely large.

The objective function we will use considers the cost of schedule disruption and an opportunity cost for unused idle time, which was not considered in Mehta and Uzsoy (1999). For each uncertain job, some (possibly zero) slack time is planned in the schedule. If no job is processed during this slack time in the actual execution of the schedule, a cost is incurred, which represents any opportunity cost associated with unused resources. The disruption cost is associated with the deviation in the planned starting times of the certain jobs during schedule execution. In Mehta and Uzsoy (1999), the performance of the schedule is evaluated by two criteria separately (schedule disruption and total tardiness cost). In our problem, the objective is to minimize total expected cost, which is composed of the cost of unused idle time, the disruption cost, and the weighted tardiness costs. Thus, our objective function combines three different performance criteria.

5.2 Problem Definition and Modeling Assumptions

We first consider the scheduling problem with only one uncertain job. Later we will extend our approach to account for multiple uncertain jobs.

In this problem, a set of $n$ jobs is considered for scheduling on a single resource. In particular, at the beginning of the scheduling horizon we have a set of $n - 1$ committed jobs that we know we must perform, along with an additional uncertain job that the firm hopes to perform, but for which the firm will not find out whether the job is won until some future time. Associated with each job are a processing time $p_j$, a due date $d_j$, and a tardiness cost of $l_j$ per unit of time that the job's completion time, $C_j$, exceeds its due date. Given a schedule in which job $j$ is delivered at time $C_j$, the tardiness cost assessed to job $j$ equals $l_j \max\{C_j - d_j, 0\}$. Associated with the uncertain job, which has job index $n$, is a probability $\pi_n$ that the firm will be awarded the job. We assume that no job preemption is allowed, i.e., once a job begins processing it cannot be interrupted until it is completed, and that no precedence constraints exist among jobs. We also initially assume that, with the exception of job $n$, all jobs are available at time zero, i.e., the beginning of the planning horizon. The job set is denoted by $J$.

We refer to the schedule created at the beginning of the planning horizon, prior to knowing whether the firm is awarded job $n$, as the predictive schedule ($S_p$). This predictive schedule, upon creation, is communicated to suppliers/customers, and we therefore assume that any future changes to the initial schedule incur a disruption cost. At the end of the planning horizon, we have a realized schedule $S_r$. We assess a cost of $h$ for each unit of time that a job's actual start time differs from its start time in the predictive schedule. This cost is assessed per unit of absolute deviation between the planned and actual start times. Let $s_j$ and $s'_j$ denote, respectively, the start time of job $j$ in $S_p$ and $S_r$. Then the total schedule disruption cost equals $h\sum_{j=1}^{n-1}\left|s_j - s'_j\right|$, which is also a measure of the effective predictability of $S_p$.

We include in the predictive schedule the option of planned idle time to account for the uncertain job $n$ and reduce the potential disruption cost due to its uncertainty. Our analysis assumes that the planned idle time inserted in the schedule for job $n$ is continuous in time. We further assume that any idle time that goes unused incurs a cost of $c_I$ per unit time. Both the decision on whether the firm is awarded job $n$ and job $n$'s availability to begin processing occur at some time $t_n > 0$. Our goal, therefore, is to determine the sequence and starting times of the $n$ jobs in $S_p$ that minimize expected weighted tardiness, disruption, and unused idle time costs. This initial schedule determines the optimal start time and duration of any planned idle time to account for the possibility of being required to perform job $n$. Since this problem is NP-hard (it generalizes the minimum weighted tardiness cost problem), we first focus on determining the optimal timing and duration of planned idle time for any fixed sequence of jobs in an initial schedule. We address this problem in the following section.

5.3 Minimizing Cost with a Single Uncertain Job

First we generate a predictive schedule $S_p$. Since the problem denoted as $1\|\sum w_j T_j$ in the scheduling literature (the minimum weighted tardiness problem, where the weights and tardiness values are denoted by $w_j$ and $T_j$, respectively, in this standard notation) is NP-hard, no optimal polynomial time algorithm exists for this problem unless P = NP. However, we assume there is a method that generates a good feasible schedule for $1\|\sum w_j T_j$ when all jobs are certain, and we denote this method as SM. We denote the schedule for jobs $1, \dots, n-1$ as $S(J/n)$, and the schedule after inserting job $n$ in position $k$ at its full processing time $p_n$ in $S(J/n)$ as $S(J, [k])$.

We propose generating the predictive schedule using one of two methods:

SM(1): Here we directly generate the schedule for jobs $1, \dots, n$ (as if job $n$ were a certain job) using method SM, and denote the resulting schedule as $S_p(J)$.

SM($\beta$): Here we generate $S(J/n)$ using method SM, inserting $\beta$ units of idle time for job $n$. If the job is inserted in position $k$ in the schedule, we denote the resulting schedule as $S_p(\beta, k)$.

We use the following notation to measure schedule performance:

$F(S(\cdot))$: cost of executing schedule $S(\cdot)$.

$B_j(x \mid S(\cdot))$: increase in the tardiness cost of job $j$ if its starting time is delayed $x$ units in schedule $S(\cdot)$, i.e., $B_j(x \mid S(\cdot)) = l_j\left[\left(C_j + x - d_j\right)^+ - \left(C_j - d_j\right)^+\right]$, where $(x)^+ = \max\{x, 0\}$.

As the predictive schedule is executed, it becomes subject to alterations due to the status of the uncertain job. If the idle time inserted in some position is less than the processing time of the job when the job is won, a disruption occurs, which we call a Type I disruption. Alternatively, if we insert idle time for job $n$ in some position but the job is lost, we refer to this as a Type II disruption.

We consider two possible reactions to a Type I disruption:

RM1: When the status of the job becomes known and differs from the predictive schedule, we reschedule all remaining unprocessed jobs (including the new job) using method SM.

RSH (Right-shift Rescheduling): If the idle time is less than the processing time needed for the uncertain job, we simply right-shift all jobs succeeding job $n$, and otherwise maintain the sequence provided by the predictive schedule.

We also consider two possible reactions to a Type II disruption:

LSH (Left-shift Rescheduling): If the planned idle time will not be used due to the loss of the job, we simply left-shift the jobs following the planned idle time to fill in the idle time, and maintain the sequence of the predictive schedule.

NR (No Rescheduling): If the planned idle time will not be used due to the loss of the job, we maintain the schedule and leave the idle time unused. This method may be appropriate when schedule changes are extremely costly.

We next discuss a method to generate a good feasible schedule.

5.3.1 Methods SM and SM(1) to Generate a Feasible Schedule

Since the problem is NP-hard, we use a heuristic method developed by Rachamadugu and Morton (1982) to provide a good solution. Under this heuristic, whenever the machine becomes free, the job with the highest priority index is scheduled next, where the priority index of job $j$ at time $t$ is given by

$$\frac{l_j}{p_j}\exp\left(-\frac{\left(s_j(t)\right)^+}{K\,p_{av}(t)}\right),$$

where $K \in [1, 3]$, $s_j(t) = d_j - p_j - t$, and $p_{av}(t)$ is the average processing time of the remaining jobs at time $t$; $K$ is a scalar that controls the speed of the increase in priorities as the slack $s_j(t)$ decreases.

We next discuss method SM(1) for generating a predictive schedule directly. Consider the probability $\pi_n$ of winning job $n$: clearly, when $\pi_n$ is close to 1, we will likely want to plan idle time of about $p_n$ for the new job; if $\pi_n$ is close to 0, we likely will not want to plan idle time for the new job. In the priority function for the uncertain job, we compute the slack time $s_n(t)$ for job $n$ by weighting the processing time of job $n$ by the probability of winning the job, i.e., $s_n(t) = d_n - \pi_n p_n - t$. Therefore a job with a high probability of being won receives a higher priority index. Similarly, when we calculate $p_{av}(t)$, we use $\pi_n p_n$ for job $n$.
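A sketch of this dispatch rule in code follows; the names are ours, and we let the dispatch clock advance by the probability-weighted processing time $\pi_j p_j$ (the same weighting used in the slack and in $p_{av}(t)$), which is one plausible reading of the construction.

    # Sketch of method SM(1): greedy dispatch by the Rachamadugu-Morton
    # priority index, with probability-weighted processing times.
    import math

    def sm1_schedule(jobs, K=2.0):
        """jobs: list of dicts with keys p, d, l, pi (pi = 1 for certain jobs)."""
        remaining, t, seq = list(jobs), 0.0, []
        while remaining:
            p_av = sum(j["pi"] * j["p"] for j in remaining) / len(remaining)

            def index(j):
                slack = j["d"] - j["pi"] * j["p"] - t     # s_j(t)
                return (j["l"] / j["p"]) * math.exp(-max(slack, 0.0) / (K * p_av))

            nxt = max(remaining, key=index)               # highest priority next
            remaining.remove(nxt)
            seq.append(nxt)
            t += nxt["pi"] * nxt["p"]                     # advance the clock
        return seq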

5.3.2 Method SM($\beta$) for the Predictive Schedule

In this section, we discuss approach SM($\beta$) for generating a predictive schedule. The decision problem is to determine the length $\beta$ of the planned idle time and the insertion position $k$ for the uncertain job in the predictive schedule. We will also discuss reacting to a disruption during execution. Since method SM does not guarantee an optimal schedule, and it is also difficult to evaluate its performance except through numerical experiments, we assume that the reaction to a Type I disruption is RSH (Right-shift Rescheduling). We will then find that the appropriate reaction to a Type II disruption can be determined using some simple rules, as we next discuss.

If we insert planned idle time of $\beta$ in position $k$ in the predictive schedule, the expected cost of the idle time will be the weighted combination of the costs of two scenarios: (i) job $n$ is won and (ii) job $n$ is lost. We denote the job in position $j$ as job $[j]$.

Scenario 1: job $n$ is won. If we insert $p_n$ units of planned idle time, the schedule cost simply equals $F(S(J,[k]))$. But if the planned idle time $\beta < p_n$, we must right-shift the succeeding jobs to ensure there is enough time to process job $n$. Thus, when the job is won, the start time of each succeeding job will be delayed $p_n - \beta$ time units, and a disruption cost of $h\left(p_n - \beta\right)(n - k)$ is incurred. Notice that the final weighted tardiness cost is the same for any $\beta$ when the job is won. The resulting cost under scenario 1 is given by

$$F(S(J,[k])) + h\left(p_n - \beta\right)(n - k). \quad (5.1)$$

Scenario 2: job $n$ is lost. If we do not insert any idle time, the schedule cost simply equals $F(S(J/n))$. But if the planned idle time $\beta$ is positive, we will either use LSH (Left-shift Rescheduling) or NR (No Rescheduling).

Under LSH, the start times of the jobs after the planned idle time will be shifted $\beta$ units earlier; thus a disruption cost of $h(n-k)\beta$ is incurred. Under NR, $\beta$ units of idle time go unused; in addition, when compared with schedule $S(J/n)$, jobs $[k], \dots, [n-1]$ begin processing $\beta$ units later. Obviously, we select the option that incurs the lower cost. The cost is given by

$$F(S(J/n)) + \min\left\{\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right) + c_I\beta,\; h(n-k)\beta\right\}. \quad (5.2)$$

Thus, the expected cost of inserting planned idle time equal to $\beta$ in position $k$ is given by

$$\pi_n\left[F(S(J,[k])) + h\left(p_n-\beta\right)(n-k)\right] + \left(1-\pi_n\right)\left[F(S(J/n)) + \min\left\{\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right) + c_I\beta,\; h(n-k)\beta\right\}\right]. \quad (5.3)$$

Note that $F(S(J/n))$ and $F(S(J,[k]))$ are constant for any value of $\beta$, so after deleting all constant terms from the objective, the optimal $\beta$ is the solution to the following minimization problem:

$$\min_{0 \le \beta \le p_n}\; -\pi_n h(n-k)\beta + \left(1-\pi_n\right)\min\left\{\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right) + c_I\beta,\; h(n-k)\beta\right\}$$
$$= \min_{0 \le \beta \le p_n}\begin{cases} h(n-k)\beta\left(1-2\pi_n\right), & \text{if we use LSH},\\ \beta\left[c_I - \pi_n\left(h(n-k)+c_I\right)\right] + \left(1-\pi_n\right)\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right), & \text{if we use NR}. \end{cases} \quad (5.4)$$

For certain cases, we can determine the optimal value of $\beta$ and the preferred Type II disruption rule directly from the above problem. For example, under the LSH rule, if $h(n-k)(1-2\pi_n) \ge 0$ (which occurs when $\pi_n \le 0.5$), the optimal value of $\beta$ is 0; otherwise, the optimal value of $\beta$ is $p_n$. Thus, when we use the LSH rule, it is optimal either to insert no idle time or to insert idle time equal to the uncertain job's processing time: for a job with probability of being won greater than 0.5, we insert $p_n$ units of idle time; when $\pi_n \le 0.5$, we insert zero idle time.
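The quantities in (5.1)-(5.3) are easy to evaluate directly for a candidate $(\beta, k)$ pair. In the sketch below (our names, not the dissertation's), F_win and F_lose stand for $F(S(J,[k]))$ and $F(S(J/n))$, and lateness and weights hold $L_{[j]} = C_{[j]} - d_{[j]}$ and $l_{[j]}$ for jobs $[k], \dots, [n-1]$ in $S(J/n)$:

    # Sketch: expected cost (5.3) of beta planned idle units in position k.
    def expected_cost(beta, k, n, p_n, pi_n, h, c_I, F_win, F_lose,
                      lateness, weights):
        # B_[j](beta | S(J/n)) = l_j * ((L_j + beta)^+ - (L_j)^+)
        tardy_increase = sum(w * (max(L + beta, 0.0) - max(L, 0.0))
                             for L, w in zip(lateness, weights))
        lost = F_lose + min(tardy_increase + c_I * beta,  # NR branch of (5.2)
                            h * (n - k) * beta)           # LSH branch of (5.2)
        won = F_win + h * (p_n - beta) * (n - k)          # scenario 1, (5.1)
        return pi_n * won + (1.0 - pi_n) * lost

    # For integer p_n, the best beta for a fixed position k can be found
    # by a simple scan over beta = 0..p_n.
    def best_beta(k, n, p_n, pi_n, h, c_I, F_win, F_lose, lateness, weights):
        return min(range(p_n + 1),
                   key=lambda b: expected_cost(b, k, n, p_n, pi_n, h, c_I,
                                               F_win, F_lose, lateness, weights))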

Under the NR rule, the term $\sum_{j=k}^{n-1} B_{[j]}(\beta \mid S(J/n))$ is a monotonically increasing function of $\beta$, and if $c_I - \pi_n\left(h(n-k)+c_I\right) \ge 0$ (which occurs when $\pi_n \le \frac{c_I}{h(n-k)+c_I}$), it is clear that the optimal value of $\beta$ is 0.

To further analyze the difference between the two Type II disruption rules, we compute the difference between the costs of using the NR and LSH rules as

$$\Delta(\beta) = \left(1-\pi_n\right)\left[\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right) + c_I\beta - h(n-k)\beta\right].$$

Note that $\sum_{j=k}^{n-1} B_{[j]}(\beta \mid S(J/n))$ is an increasing, convex, piecewise-linear function of $\beta$, since a greater number of jobs becomes tardy as $\beta$ increases (see Figure 5-1). The term $h(n-k)\beta$, on the other hand, is a linear function with a constant (positive) slope. Letting $T_k(\beta)$ denote the set of tardy jobs in the schedule $S_p(\beta, k)$, it follows that

$$\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right) - \sum_{j=k}^{n-1} B_{[j]}\left(\beta - 1 \mid S(J/n)\right) = \sum_{j \in T_k(\beta),\, j \ge k} l_{[j]}.$$

Since $0 \le \beta \le p_n$, we have

$$\sum_{j \in T_k(1),\, j \ge k} l_{[j]} \;\le\; \sum_{j \in T_k(\beta),\, j \ge k} l_{[j]} \;\le\; \sum_{j \in T_k(p_n),\, j \ge k} l_{[j]}.$$

These results lead to the following lemmas.

Lemma 5.1: If $\sum_{j \in T_k(p_n),\, j \ge k} l_{[j]} + c_I \le h(n-k)$, we should use the NR rule instead of LSH. In this case, if $\pi_n \le \frac{c_I}{h(n-k)+c_I}$, the optimal value of $\beta$ equals 0.

Lemma 5.2: If $\sum_{j \in T_k(1),\, j \ge k} l_{[j]} + c_I \ge h(n-k)$, we should use LSH instead of NR. In this case, if $\pi_n \le 0.5$, the optimal value of $\beta$ is 0; otherwise, the optimal value of $\beta$ is $p_n$.

Figure 5-1. Cost as a function of $\beta$: the piecewise-linear NR cost $\sum_{j=k}^{n-1} B_{[j]}(\beta \mid S(J/n)) + c_I\beta$ versus the linear LSH cost $h(n-k)\beta$.

If a problem instance does not fit into one of the categories described by the lemmas, then further analysis is required to determine the preferred Type II disruption rule and the best value of $\beta$. For these cases, let $\beta'$ denote the value of $\beta$ that minimizes $\beta\left[c_I - \pi_n\left(h(n-k)+c_I\right)\right] + \left(1-\pi_n\right)\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right)$, the tardiness plus opportunity cost incurred when the job is lost under the NR rule. There are four possible cases that characterize the relationship between the costs of LSH and NR in optimization problem (5.4), as shown in Figure 5-2. In the first two cases, the cost of LSH is less than that of NR, and it is clear from Lemma 5.2 that the optimal $\beta^*$ is either zero or $p_n$. In case 3, the cost of LSH is everywhere positive and the cost of NR is negative on an interval, and thus $\beta^* = \beta'$. In case 4, the cost of LSH is negative and the cost of NR is negative on an interval, and we must compare the cost at $\beta = \beta'$ when using NR to the cost at $\beta = p_n$ when using LSH. Thus the problem reduces to finding the value of $\beta'$ and comparing the cost of NR at $\beta'$ to the cost of LSH at 0 and $p_n$.

Figure 5-2. Four possible cases for the cost functions of LSH and NR. Case 1: $\beta^* = 0$; Case 2: $\beta^* = p_n$; Case 3: $\beta^* = \beta'$; Case 4: $\beta^* = \beta'$ or $p_n$.

We next provide a method to determine the value of $\beta'$. Let $F(\beta)$ denote the cost of inserting $\beta$ units of idle time under the NR rule. The differencing function for $F(\beta)$ is given by

$$F(\beta) - F(\beta - 1) = \left[c_I - \pi_n\left(h(n-k)+c_I\right)\right] + \left(1-\pi_n\right)\sum_{j \in T_k(\beta),\, j \ge k} l_{[j]}. \quad (5.5)$$

Observe that the term $\sum_{j \in T_k(\beta),\, j \ge k} l_{[j]}$ is non-decreasing in $\beta$, and the other terms do not change with $\beta$. This implies that $F(\beta)$ is a convex piecewise-linear function of $\beta$. Thus $\beta'$ is the value of $\beta$ at which (5.5) goes from negative to positive, which occurs when $\sum_{j \in T_k(\beta),\, j \ge k} l_{[j]}$ first exceeds

$$\frac{\pi_n\left(h(n-k)+c_I\right) - c_I}{1-\pi_n}.$$

Note that if $\sum_{j \in T_k(1),\, j \ge k} l_{[j]} \ge \frac{\pi_n\left(h(n-k)+c_I\right) - c_I}{1-\pi_n}$, then $\beta' = 0$. When this is not the case, we can use a simple search algorithm to find $\beta'$ in $O(n \log n)$ time. The following algorithm finds $\beta'$. Let $L_j$ denote the lateness of job $j$, i.e., $L_j = C_j - d_j$.

Step 1: For schedule $S(J/n)$, sort jobs $[k], \dots, [n-1]$ in descending order of their $L_j$ values. Denote the jobs in the new sequence as jobs $1, 2, \dots, n-k$. Initially let $q = 1$.

Step 2: Calculate the tardiness cost $\sum_{j=1}^{q} l_j$.

Step 3: If $q = n - k$, stop and let $\beta' = p_n$. Otherwise, if $\sum_{j=1}^{q} l_j \ge \frac{\pi_n\left(h(n-k)+c_I\right) - c_I}{1-\pi_n}$, stop and let $\beta' = -L_{(q+1)}$; otherwise let $q = q + 1$ and return to Step 2.
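A direct transcription of Steps 1-3 follows (our variable names; lateness and weights hold $L_j$ and $l_j$ for jobs $[k], \dots, [n-1]$ in $S(J/n)$):

    # Sketch of the O(n log n) search for beta' (Steps 1-3 above).
    def find_beta_prime(lateness, weights, n, k, p_n, pi_n, h, c_I):
        threshold = (pi_n * (h * (n - k) + c_I) - c_I) / (1.0 - pi_n)
        # Step 1: sort jobs in descending order of lateness L_j.
        order = sorted(zip(lateness, weights), key=lambda lw: -lw[0])
        cum = 0.0
        for q in range(n - k):              # q here is the 1-indexed q minus one
            cum += order[q][1]              # Step 2: cumulative tardiness weight
            if q == n - k - 1:              # Step 3: q = n - k, so beta' = p_n
                return p_n
            if cum >= threshold:            # (5.5) turns nonnegative here
                return -order[q + 1][0]     # beta' = -L_(q+1)
        return p_n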

These results lead to the following lemma.

Lemma 5.3: The optimal idle time $\beta^*$ in position $k$ is determined as follows:

Case 1: If $\sum_{j \in T_k(1),\, j \ge k} l_{[j]} + c_I \ge h(n-k)$ and $\pi_n \le 0.5$, then $\beta^* = 0$;

Case 2: If $\sum_{j \in T_k(1),\, j \ge k} l_{[j]} + c_I \ge h(n-k)$ and $\pi_n > 0.5$, then $\beta^* = p_n$ and we use the LSH rule;

Case 3: If $\sum_{j \in T_k(1),\, j \ge k} l_{[j]} + c_I < h(n-k)$ and $\pi_n \le 0.5$, then $\beta^* = \beta'$ and we use the NR rule;

Case 4: If $\sum_{j \in T_k(1),\, j \ge k} l_{[j]} + c_I < h(n-k)$ and $\pi_n > 0.5$, then if

$$\beta'\left[c_I - \pi_n\left(h(n-k)+c_I\right)\right] + \left(1-\pi_n\right)\sum_{j=k}^{n-1} B_{[j]}\left(\beta' \mid S(J/n)\right) \le h(n-k)\,p_n\left(1-2\pi_n\right),$$

then $\beta^* = \beta'$ and we use the NR rule; otherwise, $\beta^* = p_n$ and we use the LSH rule.

From (5.5), we can also obtain a critical value for the probability $\pi_n$. Observe that $F(\beta) \le F(\beta - 1)$ if and only if

$$\pi_n \ge \frac{c_I + \sum_{j \in T_k(\beta),\, j \ge k} l_{[j]}}{h(n-k) + c_I + \sum_{j \in T_k(\beta),\, j \ge k} l_{[j]}}.$$

We might think of the numerator, $c_I + \sum_{j \in T_k(\beta),\, j \ge k} l_{[j]}$, as the cost of an unnecessary unit of idle time, i.e., the cost we would incur if we inserted an additional unit of idle time and did not win the new job. Let $C_O(\beta)$ denote this cost per unit of "over-scheduled" idle time, which depends on the value of $\beta$. The quantity $h(n-k)$ is the disruption cost per unit of time delay, i.e., the cost of not having an additional unit of planned idle time. Let $C_u$ denote this cost per unit of "under-scheduled" idle time at position $k$. Using this new notation, we have that

$$\pi_n \ge \frac{C_O(\beta)}{C_O(\beta) + C_u} \quad \text{implies that } F(\beta) \le F(\beta - 1).$$

The critical ratio on the right-hand side determines the minimum probability of winning the new job required for us to find it attractive to schedule $\beta$ units of planned idle time in position $k$. This critical ratio is a function of the relative per-unit costs of over-scheduling versus under-scheduling idle time, and it takes a form very similar to the classical newsvendor result in inventory theory (see Nahmias 2001). The resulting equation is quite simple and also intuitive: a higher relative unit cost for over-scheduling idle time requires a higher probability of winning the job in order for additional planned idle time to be attractive. Conversely, a higher relative unit cost for under-scheduling idle time means that a lower probability of winning the job suffices to make additional planned idle time attractive.

Next, we discuss the problem of finding the best insertion position $k$ for the planned idle time. The following function gives the expected cost of inserting idle time of $\beta$ in position $k$:

$$\pi_n\left[F(S(J,[k])) + h\left(p_n-\beta\right)(n-k)\right] + \left(1-\pi_n\right)\left[F(S(J/n)) + \min\left\{\sum_{j=k}^{n-1} B_{[j]}\left(\beta \mid S(J/n)\right) + c_I\beta,\; h(n-k)\beta\right\}\right]. \quad (5.6)$$

Let $\bar k$ be the value of $k$ for which the cost $F(S(J,[k]))$ is minimized, i.e., the best insertion position for job $n$ when $\pi_n = 1$ in the optimal schedule of jobs $1, \dots, n-1$. Since all other terms in (5.6) are either decreasing in $k$ or do not change with $k$, the optimal $k$ must satisfy $k \ge \bar k$, which leads to the following lemma.

Lemma 5.4: Given an initial sequence of jobs $1, \dots, n-1$, the optimal position for inserting idle time for the new job is always greater than or equal to the optimal insertion position when the new job is certain.

We have provided an algorithm to determine the optimal amount of idle time $\beta^*$ given a fixed insertion position $k$, which runs in $O(n \log n)$ time. By simply enumerating the possible positions $k = \bar k, \bar k + 1, \dots, n$ and comparing the resulting costs, we can find the optimal idle time $\beta^*$ and optimal position $k^*$ in $O(n^2 \log n)$ time under our rescheduling rules.

5.3.3 The Decision to Compete for the Job

Based on the prior analysis, we might ask the following question: if the best predictive schedule does not plan any idle time for the uncertain job, should the firm compete for the uncertain job at all? We assume that if the firm takes part in the competition, it must process the job if the job is awarded. We would of course expect that when $\pi_n$ is small, the rescheduling cost is high, and the profit of the uncertain job is low, the firm should not compete for the job. We next analyze this problem in greater detail.

We assume that the best predictive schedule does not plan any idle time for the uncertain job, and that the uncertain job has a net revenue of $w_n$ if awarded. If the firm competes for the job and the job is awarded, the firm inserts the job in position $\bar k$ (which is determined by the algorithm in the previous section) using RSH (right-shift rescheduling), and a revenue of $w_n$ is earned. We do not consider the revenues of the other jobs, since these are constant for our problem. The expected cost of competing for the job is given by the following function:

$$\pi_n\left[F(S(J,[\bar k])) - w_n + p_n h(n-\bar k)\right] + \left(1-\pi_n\right)F(S(J/n)). \quad (5.7)$$

The cost if the firm does not compete for the job is $F(S(J/n))$. We want to select the option (compete vs. do not compete) with the lower cost. Clearly, if $F(S(J,[\bar k])) - w_n + p_n h(n-\bar k) \ge F(S(J/n))$, the firm should not compete for the job.

Note that $F(S(J,[\bar k]))$ is no more than $F(S(J/n))$ plus the increase in the tardiness cost of jobs $[\bar k], \dots, [n-1]$ when delayed $p_n$ time units in schedule $S(J/n)$, plus the tardiness cost of job $n$. Thus we have

$$\begin{aligned} F(S(J,[\bar k])) - w_n + p_n h(n-\bar k) &\le F(S(J/n)) + \sum_{j=\bar k}^{n-1} B_{[j]}\left(p_n \mid S(J/n)\right) + l_n\left(\sum_{j=1}^{\bar k - 1} p_{[j]} + p_n - d_n\right)^+ - w_n + p_n h(n-\bar k)\\ &\le F(S(J/n)) + \sum_{j=\bar k}^{n-1} l_{[j]}\left(L_{[j]} + p_n\right)^+ + l_n\left(\sum_{j=1}^{\bar k - 1} p_{[j]} + p_n - d_n\right)^+ - w_n + p_n h(n-\bar k), \end{aligned}$$

where $L_{[j]}$ is the lateness of job $[j]$. This leads to the following lemma.

Lemma 5.5: If the best idle time for the new job in the predictive schedule is zero and

$$\sum_{j=k}^{n-1} l_{[j]}\left(L_{[j]} + p_n\right)^+ + l_n\left(\sum_{j=1}^{k-1} p_{[j]} + p_n - d_n\right)^+ - w_n + p_n(n-k)h \ge 0$$

for $k = 1, \dots, n$, the firm should not compete for the new job; otherwise the firm should compete for the job.

5.4 Heuristic Predictive Scheduling for Multiple Uncertain Jobs

In this section, we discuss the more general problem with multiple uncertain jobs and provide several heuristic methods for solving it. Suppose there are $n$ jobs in the system, and job $j$ has a probability $\pi_j$ of being awarded (if $\pi_j = 1$, the job is certain to be processed), and we have full knowledge of the other parameters (processing time, due date, tardiness cost) of all uncertain jobs. The uncertainty is resolved at some time $t_j \ge 0$ for each job $j$, while a schedule is required at time 0, before the planner knows whether the uncertain jobs are won or lost. One way to generate a predictive schedule is to use the method SM(1) described in Section 5.3, where a priority function involving the parameter $\pi_j$ is used.

That is, whenever the machine becomes free, the job with the highest priority index is scheduled next, where the priority index of job $j$ at time $t$ is given by

$$\frac{l_j}{p_j}\exp\left(-\frac{\left(s_j(t)\right)^+}{K\,p_{av}(t)}\right), \quad K \in [1, 3], \quad s_j(t) = d_j - \pi_j p_j - t.$$

However, the second method, SM($\beta$), is not as easy to apply to the problem with multiple uncertain jobs. The final status of each job $j$ has two scenarios (lost and won), with probabilities $1 - \pi_j$ and $\pi_j$, respectively. In total, we have $2^n$ possible scenarios with $n$ uncertain jobs. The result is that determining the optimal $\beta$ and $k$ for each job is a very difficult combinatorial optimization problem when $n$ is large. In addition to methods SM(1) and SM($\beta$), we may also consider mixed integer programming and dynamic programming approaches.

In this section, we consider three potential cases for handling the multiple uncertain jobs problem. Each case is associated with a certain predictive scheduling and rescheduling policy, and a mixed integer programming or dynamic programming approach is provided for each case. We assume each job $j$ has an associated release date $r_j$, which is the earliest time at which the job can begin processing (note that we assume uncertainty is resolved for job $j$ at time $t_j$, and we therefore assume $t_j < r_j$).

Case 1:

Predictive scheduling policy: Here we always plan the full processing time for each job in the predictive schedule.

Rescheduling policy: The NR rule is used if a job is lost.

If we assume a discrete set of planning periods exists, indexed by $t$, for $t = 1, \dots, T$, this problem can be formulated as a mixed integer program, where we define $x_{jt}$ as a binary variable equal to 1 if job $j$ starts at time $t$, and 0 otherwise.

The resulting formulation is:

[P 5.1] Minimize $\displaystyle\sum_{j=1}^{n}\left[\pi_j l_j\left(\sum_{t} t\,x_{jt} + p_j - d_j\right)^+ + \left(1-\pi_j\right) c_I\, p_j\right]$

Subject to:
$\displaystyle p_j x_{jt} + \sum_{i \ne j}\;\sum_{\tau=t}^{t+p_j-1} x_{i\tau} \le p_j$, for $j = 1, \dots, n$ and $t = 1, \dots, T$,
$\displaystyle \sum_{t} x_{jt} = 1$, for $j = 1, \dots, n$,
$\displaystyle \sum_{t} t\,x_{jt} \ge r_j$, for $j = 1, \dots, n$,
$x_{jt} \in \{0, 1\}$, for $j = 1, \dots, n$, $t = 1, \dots, T$.

The objective function minimizes the expected cost of the schedule, where the cost of losing a job is the cost of its unused idle time. The first constraint set prohibits preemption, the second constraint set requires that each job have a unique starting time, and the third constraint set ensures that no job violates its release date. Note that the objective function is piecewise linear and can be linearized through standard linear programming techniques.
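The sketch below states [P 5.1] in the PuLP modeling library with small hypothetical data. Since each job has a unique start period, the positive part in the objective can be handled by precomputing the tardiness coefficient of each candidate start period, which is one standard way to linearize it.

    # Sketch of [P 5.1] in PuLP, with hypothetical data.
    import pulp

    n, T = 3, 20
    p = [4, 3, 5]; d = [6, 9, 14]; r = [1, 1, 3]
    l = [2.0, 1.0, 3.0]; pi = [1.0, 0.6, 0.8]; c_I = 0.5

    m = pulp.LpProblem("P5_1", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", (range(n), range(1, T + 1)), cat="Binary")

    m += pulp.lpSum(pi[j] * l[j] * max(t + p[j] - d[j], 0) * x[j][t]
                    for j in range(n) for t in range(1, T + 1)) \
         + sum((1 - pi[j]) * c_I * p[j] for j in range(n))   # constant lost-job term

    for j in range(n):
        m += pulp.lpSum(x[j][t] for t in range(1, T + 1)) == 1        # unique start
        m += pulp.lpSum(t * x[j][t] for t in range(1, T + 1)) >= r[j] # release date
        for t in range(1, T + 1):
            others = pulp.lpSum(x[i][tau] for i in range(n) if i != j
                                for tau in range(t, min(t + p[j], T + 1)))
            m += p[j] * x[j][t] + others <= p[j]                      # no overlap

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    starts = {j: t for j in range(n) for t in range(1, T + 1)
              if x[j][t].value() and x[j][t].value() > 0.5}
    print(starts, pulp.value(m.objective))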

Case 2:

Predictive scheduling policy: For each job $j$, we schedule $x_j$ units of processing time, where $0 \le x_j \le p_j$.

Rescheduling policy: If a job is awarded, we use the RSH rule and right-shift the schedule $p_j - x_j$ units of time, postponing the successor jobs' starting times and incurring a disruption cost; if the job is lost, we use the NR rule, fixing the schedule and incurring an unused idle time cost of $c_I x_j$.

This case is difficult to formulate as an MIP. To simplify the approach, we decompose it into two problems: the first problem determines the sequence of the jobs in the schedule; the second problem creates a schedule that minimizes the expected cost for the fixed sequence of jobs. For the first problem of determining the job sequence, we might use method SM(1). We next focus on the second problem of optimizing the schedule for a fixed sequence, for which we propose a dynamic programming approach.

Without loss of generality, denote the $j$th job in the sequence as job $[j]$. In the dynamic programming recursion, we define $z_0(t) = 0$ for $0 \le t < \infty$, where $t$ is a state variable. At any stage $j$, let $z_j(t)$ denote the expected cost of the sequence of jobs $[1], \dots, [j]$ when job $[j]$ finishes at time $t$. Let $t'$ denote the finishing time of job $[j-1]$. By assumption, we create the predictive schedule with $x_{[j]}$ ($0 \le x_{[j]} \le p_{[j]}$) units of processing time for job $[j]$. If the job is awarded, we right-shift the schedule $p_{[j]} - x_{[j]}$ units of time and finish the job at time $\max\{t', r_{[j]}\} + p_{[j]}$; if the job is lost, we fix the schedule and finish at time $\max\{t', r_{[j]}\} + x_{[j]}$. Hence the expected finishing time of job $[j]$ equals

$$t = \max\{t', r_{[j]}\} + \pi_{[j]}\, p_{[j]} + \left(1-\pi_{[j]}\right) x_{[j]}.$$

Based on this formula for $t$, we will have either $x_{[j]} = \left(t - r_{[j]} - \pi_{[j]} p_{[j]}\right)/\left(1-\pi_{[j]}\right)$ (which occurs for $t' \le r_{[j]}$), or $x_{[j]} = \left(t - t' - \pi_{[j]} p_{[j]}\right)/\left(1-\pi_{[j]}\right)$. (We assume that all time variables and parameters are integral multiples of a suitable unit of length, such that they remain integer after multiplication by $\pi_{[j]}$; if, for example, all probabilities are multiples of 0.01 and all other parameters are scaled in multiples of 100 base time units, then this holds.)

Expressing the expected cost $z_j(t)$ as a function of $z_{j-1}(t')$, we have that $z_j(t)$ equals $z_{j-1}(t')$ plus the expected tardiness cost, disruption cost, and unused idle time cost of job $[j]$, which we express using the function

$$\phi_j\left(t', x_{[j]}\right) = z_{j-1}(t') + \pi_{[j]}\, l_{[j]}\left(\max\{t', r_{[j]}\} + p_{[j]} - d_{[j]}\right)^+ + \pi_{[j]}\, h(n-j)\left(p_{[j]} - x_{[j]}\right) + \left(1-\pi_{[j]}\right) c_I\, x_{[j]}.$$

The recursive relation for any stage $j$ and state $t$ is given by

$$z_j(t) = \min\begin{cases}\displaystyle \min_{0 \le x_{[j]} \le p_{[j]}}\left\{\phi_j\left(t', x_{[j]}\right) : t' = t - \pi_{[j]}\, p_{[j]} - \left(1-\pi_{[j]}\right) x_{[j]},\; t' > r_{[j]}\right\},\\[4pt] \displaystyle \min_{0 \le x_{[j]} \le p_{[j]}}\left\{\phi_j\left(t', x_{[j]}\right) : x_{[j]} = \frac{t - r_{[j]} - \pi_{[j]}\, p_{[j]}}{1-\pi_{[j]}},\; t' \le r_{[j]}\right\}.\end{cases}$$

Note that the value of $z_j(t)$ for some $j$ and $t$ combinations will be infinite, which implies there is no feasible schedule in which job $[j]$ finishes at time $t$. The optimal solution for the fixed sequence is given by $Z = \min\{z_n(t) : t \ge 0\}$.

Let $p_{\max}$ denote the largest processing time over all jobs. In the recursive equation, we must evaluate a total of $O(p_{\max})$ possible values of $x_{[j]}$ for each $z_j(t)$. There are $O(n)$ stages, and in each stage there are $O(n\,p_{\max})$ states. The complexity of the dynamic program is therefore $O\left(n^2 p_{\max}^2\right)$.
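A sketch of this recursion on an integer time grid follows; the expected finish time is rounded onto the grid, consistent with the integrality assumption above, and the names are ours.

    # Sketch of the Case 2 dynamic program on an integer time grid.
    import math

    def case2_dp(jobs, h, c_I):
        """jobs: sequenced list of dicts with keys p, d, r, l, pi
        (pi < 1 for uncertain jobs, pi = 1 for certain ones)."""
        n = len(jobs)
        # A safe upper bound on finish times for this sketch.
        horizon = sum(j["p"] for j in jobs) + max(j["r"] for j in jobs) + 1
        z = [0.0] * horizon                       # z_0(t) = 0 for all t
        for j, job in enumerate(jobs, start=1):
            nz = [math.inf] * horizon
            for t_prev in range(horizon):
                if z[t_prev] == math.inf:
                    continue
                start = max(t_prev, job["r"])
                for x in range(job["p"] + 1):     # planned time for this job
                    # expected finish time, rounded onto the grid
                    t = round(start + job["pi"] * job["p"] + (1 - job["pi"]) * x)
                    if t >= horizon:
                        continue
                    cost = (z[t_prev]
                            + job["pi"] * job["l"] * max(start + job["p"] - job["d"], 0)
                            + job["pi"] * h * (n - j) * (job["p"] - x)
                            + (1 - job["pi"]) * c_I * x)
                    nz[t] = min(nz[t], cost)
            z = nz
        return min(z)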

Case 3:

Predictive scheduling policy: For each job $j$, we schedule $x_j$ units of processing time, where $\underline{p}_j \le x_j \le p_j$ for some prespecified lower bound $\underline{p}_j$.

Rescheduling policy: If a job is awarded, we process the job within the planned time window. Since we schedule only $x_j$ time units for job $j$, which may be less than $p_j$, we must reduce (or compress) the processing time from $p_j$ to $x_j$; as a result, a compression cost of $\left(p_j - x_j\right)c_j$ is incurred for expediting the job. If the job is lost, we fix the schedule using the NR rule, and a cost of $c_I x_j$ is incurred for the unused idle time. The lower bound $\underline{p}_j$ is determined by the maximum compression of the processing time, $p_j - \underline{p}_j$.

This problem can be formulated as a mixed integer program. To formulate it, we define the binary variable $z_{ij}$, equal to one if job $i$ begins processing before job $j$, and zero otherwise. Let $s_j$ and $C_j$ denote the start and finish periods for job $j$. This results in the following optimization problem:

[P 5.2] Minimize $\displaystyle\sum_{j=1}^{n}\left[\pi_j l_j v_j + \pi_j\left(p_j - x_j\right)c_j + \left(1-\pi_j\right)c_I\, x_j\right]$

Subject to:
$s_j \ge r_j$, for $j = 1, \dots, n$,
$C_j = s_j + x_j - 1$, for $j = 1, \dots, n$,
$v_j \ge C_j - d_j$, for $j = 1, \dots, n$,
$\underline{p}_j \le x_j \le p_j$, for $j = 1, \dots, n$,
$T z_{ij} \ge C_i - s_j$, for $i, j = 1, \dots, n$, $j \ne i$,
$T\left(1 - z_{ij}\right) \ge C_j - s_i$, for $i, j = 1, \dots, n$, $j \ne i$,
$z_{ij} \in \{0, 1\}$, for $i, j = 1, \dots, n$, $j \ne i$.

In the objective function, the first term represents the expected tardiness cost, the second term the expected compression cost, and the third term the expected unused idle time cost. The first through third sets of constraints ensure that a job's start time is at least as great as its release date, and that the finish time and tardiness for job $j$ are set properly for all jobs.

The fourth constraint set limits the amount of time allocated to each job in the schedule, while the remaining constraints ensure that preemption does not occur.

Defining $\delta_j = p_j - x_j$ and $c'_j = \pi_j c_j - \left(1-\pi_j\right)c_I$, the objective function can be rewritten as

$$\begin{aligned} \sum_{j=1}^{n}\left[\pi_j l_j v_j + \pi_j c_j \delta_j + \left(1-\pi_j\right)c_I\left(p_j - \delta_j\right)\right] &= \sum_{j=1}^{n}\left[\pi_j l_j v_j + \left[\pi_j c_j - \left(1-\pi_j\right)c_I\right]\delta_j + \left(1-\pi_j\right)c_I\, p_j\right]\\ &= \sum_{j=1}^{n}\left[\pi_j l_j v_j + c'_j \delta_j\right] + \sum_{j=1}^{n}\left(1-\pi_j\right)c_I\, p_j. \end{aligned}$$

Note that the $\sum_{j=1}^{n}\left(1-\pi_j\right)c_I\, p_j$ term is constant. We can thus reformulate [P 5.2] as follows:

[P 5.2$'$] Minimize $\displaystyle\sum_{j=1}^{n}\left[\pi_j l_j v_j + c'_j \delta_j\right]$

Subject to:
$s_j \ge r_j$, for $j = 1, \dots, n$,
$C_j = p_j - \delta_j + s_j - 1$, for $j = 1, \dots, n$,
$0 \le \delta_j \le p_j - \underline{p}_j$, for $j = 1, \dots, n$,
$v_j \ge C_j - d_j$, for $j = 1, \dots, n$,
$T z_{ij} \ge C_i - s_j$, for $i, j = 1, \dots, n$, $j \ne i$,
$T\left(1 - z_{ij}\right) \ge C_j - s_i$, for $i, j = 1, \dots, n$, $j \ne i$,
$z_{ij} \in \{0, 1\}$, for $i, j = 1, \dots, n$, $j \ne i$.

If $c'_j \le 0$, the optimal $\delta_j = p_j - \underline{p}_j$, since this minimizes both the compression-related costs and the tardiness costs associated with job $j$ and its successors. If $c'_j > 0$, then $c'_j$ is equivalent to a compression cost, $\delta_j$ is equivalent to the compression time of job $j$, and the tardiness penalty cost is now $\pi_j l_j$.

The resulting problem becomes a minimum weighted tardiness plus compression time problem, which can be solved by the compress and relax algorithm (see Chapter 3).

5.5 Conclusion

We considered a predictive scheduling problem on a single machine with uncertain jobs, in which the probabilities of winning the jobs are known to the firm when it plans the schedule. The firm creates a predictive schedule by inserting idle time for the uncertain jobs. Two types of disruptions resulting from the uncertainty of the jobs are defined, and we present several reactive approaches for these two disruption types. A polynomial time algorithm is provided to determine the optimal idle time and insertion position for the single uncertain job problem. For the problem with multiple uncertain jobs, we discuss three cases, each associated with a different predictive and reactive scheduling rule. In each case, we provide an MIP formulation or a dynamic programming algorithm for addressing the problem.

CHAPTER 6
CONCLUSION AND FUTURE RESEARCH DIRECTIONS

6.1 Conclusion

This thesis examines several new scheduling issues arising in a virtual production network, and considers four scheduling problems motivated by these issues.

The first problem considers single machine scheduling to minimize total tardiness and overtime resource costs in a periodic planning context. This problem is new to the literature, addressing the tradeoff between tardiness and overtime resource costs. A compact and relax algorithm is presented for the fixed-sequence special case of this problem. The algorithm divides the jobs in the schedule into several independent subsets and treats each subset as a separate problem in which the jobs have no release date constraints. The algorithm first schedules the jobs with the maximum amount of overtime resource usage possible, and then relaxes the overtime resource usage of the jobs. The idea and approach of the algorithm are new to the scheduling literature and may be applied to other similar scheduling problems.

The second problem is a single machine scheduling problem with job-selection flexibility, which allows the firm to select a subset of the jobs to process. Although there are other papers on job selection problems in the literature, they consider relatively basic and simple assumptions regarding the nature of job processing costs and revenues. This thesis extends the current research to four more complicated cases, addressing the impacts of features such as tardiness cost, compression cost, and time-horizon extension cost. These extensions broaden the literature and make the problem more applicable to a wider range of practical contexts.


applicable to a broader range of practical contexts. For each extension to the basic problem, an algorithm is provided along with a worst-case approximation ratio.

The third problem is a single machine rescheduling problem with new job arrivals and processing time compression costs. Two performance measures are used for the rescheduling problem: efficiency and stability. Most rescheduling research in the literature considers these two measures separately. In this thesis, we attach a cost to both performance measures and thus transform the problem into a total cost minimization problem, which represents a new direction in the rescheduling literature. We also develop a heuristic solution procedure based on Very Large Scale Neighborhood (VLSN) search, which is, to the best of our knowledge, the first application of this solution method to rescheduling problems.

The last problem class considers predictive scheduling on a single machine with uncertain future jobs. Here we consider a new and interesting problem: a firm competes for future jobs but knows only its probability of winning each one. The objective is to generate predictive and reactive schedules that account for the impacts of these uncertain future jobs. We discuss different predictive and reactive approaches for scheduling and provide several algorithms that may be used to deal with uncertainty in future job requirements.

6.2 Future Research Directions

There are a number of possible extensions and generalizations of this work that serve as interesting directions for future research.

Our focus in each chapter has been on single-resource problems. As we have seen, even these single-resource versions of the problem can quickly become extremely


complex. In practice, however, a typical shop floor rarely consists of a single resource. Our models can therefore be applied either to an entire production line considered in aggregate as a single resource, or to a bottleneck operation in a production system. Each of the scheduling problems we have considered can in principle be extended to multiple-machine contexts in order to model the detailed operations of a production system more closely. Such extensions will require further generalization and fine-tuning of the metaheuristic and local search methods we have provided.

Our work has also focused largely on objective functions that consider the tradeoff between tardy job delivery and the cost of reducing job processing times; that is, our primary efficiency measure was minimum weighted tardiness cost. The scheduling literature contains a great variety of tradeoffs faced by scheduling firms in various contexts. It would therefore be very interesting to consider other forms of objective functions within each of the models we have developed (e.g., minimum flow time or minimum maximum tardiness).

In addition, the algorithms and heuristic solution approaches we provided can be further refined and improved. The valid inequality approach discussed in Chapter 2 for the Min-WTOT problem can be explored further to determine whether the inequalities provided can be strengthened, or whether stronger inequalities can be found to further improve the problem's lower bound. We can also explore refinements to our neighborhood search methods, which might benefit from additional fine-tuning.

Our focus in this thesis has been on operations scheduling problems, but the problems and methods developed here might also be extended to other areas, such as supply chain management. For example, the first problem considers the tradeoff between tardiness


costs and overtime resource costs. In supply chain management, the analogous model would trade off shortage costs against resource overflow costs. In this context, the firm might be able to meet more downstream demand on time by using resources in excess of its regular capacity (e.g., by hiring overflow trucks and drivers for delivery, which is analogous to using overtime to deliver a greater amount of product more quickly). The ideas behind the compact and relax algorithm may be extended to solve such supply chain problems. Additionally, the concept of an uncertain job in Chapter 5 can be extended to the analogous concept of uncertain demand in a supply chain context. In this case, the firm must determine how to properly set inventory levels (rather than idle production time) to account for the possibility of demand later materializing.


APPENDIX
MIP FORMULATION OF MIN-WTOT PROBLEM

We present a mixed integer programming formulation of the Min-WTOT problem. We use the following additional notation in the formulation:

Parameters:
H = set of all pairs of precedence-constrained jobs (if no precedence relations exist, then H = ∅).
l_ij = required lag between the start time of job i and the start time of job j, for all (i, j) ∈ H.

Decision Variables:
y_t = total regular time activity in period t.
u_t = total overtime activity in period t.
v_j = tardiness of job j.
w_jt = work performed on job j in period t (measured in time units).
x_jt = 1 if any work is performed on job j in period t; 0 otherwise.
z_ij = 1 if job i is performed after job j in the sequence; 0 otherwise.

[Min-WTOT] Minimize $\sum_{t=1}^{T}\left(c_{Rt} y_t + c_{Ot} u_t\right) + \sum_{j=1}^{n} l_j v_j$ (A.1)

Subject to:

$u_t + y_t = \sum_{j=1}^{n} w_{jt}$, t = 1, …, T, (A.2)
$u_t \ge \sum_{j=1}^{n} w_{jt} - R_t$, t = 1, …, T, (A.3)
Lateness Tracking: $v_j \ge C_j - d_j$ for all j ∈ J, (A.4)
Capacity Limits: $\sum_{j=1}^{n} w_{jt} \le R_t + O_t$, t = 1, …, T, (A.5)
Task Requirements: $\sum_{t=1}^{T} w_{jt} = p_j$ for all j ∈ J, (A.6)
Task Forcing: $w_{jt} \le p_j x_{jt}$ for all j ∈ J, t = 1, …, T, (A.7)


Start/Finish Relations: $C_j \ge s_j + \sum_{t=1}^{T} x_{jt} - 1$ for all j ∈ J, (A.8)
Task Start Time: $s_j \ge r_j$ for all j ∈ J, (A.9)
Sequencing 1: $s_j \ge C_i - T z_{ij}$ for all i, j ∈ J, i ≠ j, (A.10)
Sequencing 2: $s_i \ge C_j - T(1 - z_{ij})$ for all i, j ∈ J, i ≠ j, (A.11)
Precedence: $s_j \ge s_i + l_{ij}$ for all (i, j) ∈ H, (A.12)
Binary Variables: $x_{jt}, z_{ij} \in \{0, 1\}$ for all i, j ∈ J, i ≠ j, t = 1, …, T, (A.13)
Integrality: $C_j, s_j \ge 0$, integer, for all j ∈ J, (A.14)
Nonnegativity: $w_{jt}, u_t, y_t, v_j \ge 0$ for all j ∈ J, t = 1, …, T. (A.15)

The objective function (A.1) minimizes the sum of regular and overtime costs, plus weighted tardiness costs. Constraints (A.2) and (A.3) keep track of regular and overtime usage. Constraint (A.4) tracks the lateness of each job, while (A.5) limits the processing capacity available in a period. Constraint (A.6) ensures that each job is fully completed, and (A.7) keeps track of which jobs are worked on in any period t (through the binary x_jt variables). Constraint (A.8) forces the finish period of a job to equal or exceed its start period plus the number of periods worked on the task, less one, and constraint (A.9) forces the start period of a job to equal or exceed its release period. Constraints (A.10) and (A.11) enforce the non-preemption assumption by ensuring that either job i begins after job j finishes (or vice versa) for every job pair (i, j). Constraint (A.12) enforces precedence relationships where necessary, while (A.13), (A.14), and (A.15) encode the variable restrictions. A direct transcription of this model in a modeling language appears below.
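To make the formulation concrete, the following Python sketch transcribes (A.1)-(A.15) using the open-source PuLP modeling library and its bundled CBC solver, assuming PuLP is installed. The three-job instance data (horizon, capacities, and unit costs) is hypothetical and chosen only for illustration; this is a direct transcription for small instances, not a substitute for the heuristics developed in Chapter 2.

    import pulp

    # Hypothetical instance: three jobs over an eight-period horizon.
    T = 8
    jobs = [0, 1, 2]
    periods = range(1, T + 1)
    p = {0: 3, 1: 2, 2: 4}           # processing requirements p_j
    r = {0: 1, 1: 1, 2: 2}           # release periods r_j
    d = {0: 4, 1: 5, 2: 8}           # due periods d_j
    lw = {0: 5.0, 1: 3.0, 2: 4.0}    # tardiness weights l_j
    R = {t: 1.0 for t in periods}    # regular capacity R_t
    O = {t: 1.0 for t in periods}    # overtime capacity O_t
    cR = {t: 1.0 for t in periods}   # regular-time unit cost c_Rt
    cO = {t: 3.0 for t in periods}   # overtime unit cost c_Ot
    H = {}                           # precedence pairs (i, j) -> lag l_ij

    m = pulp.LpProblem("Min_WTOT", pulp.LpMinimize)
    y = pulp.LpVariable.dicts("y", periods, lowBound=0)
    u = pulp.LpVariable.dicts("u", periods, lowBound=0)
    v = pulp.LpVariable.dicts("v", jobs, lowBound=0)
    w = pulp.LpVariable.dicts("w", [(j, t) for j in jobs for t in periods], lowBound=0)
    x = pulp.LpVariable.dicts("x", [(j, t) for j in jobs for t in periods], cat="Binary")
    z = pulp.LpVariable.dicts("z", [(i, j) for i in jobs for j in jobs if i != j], cat="Binary")
    s = pulp.LpVariable.dicts("s", jobs, lowBound=1, cat="Integer")
    C = pulp.LpVariable.dicts("C", jobs, lowBound=1, cat="Integer")

    # (A.1): regular plus overtime cost plus weighted tardiness.
    m += (pulp.lpSum(cR[t] * y[t] + cO[t] * u[t] for t in periods)
          + pulp.lpSum(lw[j] * v[j] for j in jobs))

    for t in periods:
        m += u[t] + y[t] == pulp.lpSum(w[j, t] for j in jobs)         # (A.2)
        m += u[t] >= pulp.lpSum(w[j, t] for j in jobs) - R[t]         # (A.3)
        m += pulp.lpSum(w[j, t] for j in jobs) <= R[t] + O[t]         # (A.5)
    for j in jobs:
        m += v[j] >= C[j] - d[j]                                      # (A.4)
        m += pulp.lpSum(w[j, t] for t in periods) == p[j]             # (A.6)
        for t in periods:
            m += w[j, t] <= p[j] * x[j, t]                            # (A.7)
        m += C[j] >= s[j] + pulp.lpSum(x[j, t] for t in periods) - 1  # (A.8)
        m += s[j] >= r[j]                                             # (A.9)
    for i in jobs:
        for j in jobs:
            if i != j:
                m += s[j] >= C[i] - T * z[i, j]                       # (A.10)
                m += s[i] >= C[j] - T * (1 - z[i, j])                 # (A.11)
    for (i, j), lag in H.items():
        m += s[j] >= s[i] + lag                                       # (A.12)

    m.solve(pulp.PULP_CBC_CMD(msg=False))
    print(pulp.LpStatus[m.status], pulp.value(m.objective))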




BIOGRAPHICAL SKETCH

Bibo Yang received a bachelor's degree in mechanical design in 1995 from the Precision Instruments and Mechanism Department, Tsinghua University, Beijing, China. She received a master's degree in industrial engineering in 1997 from the Industrial Engineering Department of the Economics and Management School, Tsinghua University. This dissertation completes her Ph.D. in the Industrial and Systems Engineering Department at the University of Florida.