 Material Information
Title: The optimization of energy recovery systems
Alternate Title: Energy recovery systems
Physical Description: xiv, 114 leaves : ill. ; 28 cm.
Language: English
Creator: Shah, Jigar Vinubhai, 1951-
Copyright Date: 1978
Subject: Energy conservation (lcsh)
Chemical Engineering thesis (Ph.D.)
Dissertations, Academic -- Chemical Engineering -- UF
Genre: bibliography (marcgt)
non-fiction (marcgt)
Thesis: Thesis--University of Florida.
Bibliography: Bibliography: leaves 111-113.
General Note: Typescript.
General Note: Vita.
Statement of Responsibility: by Jigar V. Shah.
 Record Information
Bibliographic ID: UF00098850
Volume ID: VID00001
Source Institution: University of Florida
Holding Location: University of Florida
Rights Management: All rights reserved by the source institution and holding location.
Resource Identifier: alephbibnum - 000206824
oclc - 04044740
notis - AAX3618









Dedicated to

my parents, Hemlata and Vinubhai Shah


ACKNOWLEDGEMENTS

The author wishes to express his sincere gratitude to Dr.
A. W. Westerberg. Working as a graduate student under his supervision
was an invaluable experience.

The author also wishes to express his appreciation to the
department of chemical engineering at the University of Florida
and to Dr. J. P. O'Connell and Dr. H. D. Ratliff of the supervisory
committee.


Thanks are due for the assistance provided by the department
of chemical engineering at Carnegie-Mellon University and by the
Computer-Aided Design Centre, Cambridge, U.K.



TABLE OF CONTENTS

ACKNOWLEDGEMENTS ............................................... iii

LIST OF TABLES ................................................. vi

LIST OF FIGURES ................................................ vii

LIST OF SYMBOLS ................................................ ix

ABSTRACT ....................................................... xii

I    INTRODUCTION .............................................. 1

II   THE SYSTEM EROS ........................................... 5

     II.1. Background ........................................... 5
     II.2. Data Specification ................................... 9
     II.3. Modeling Considerations .............................. 9
     II.4. Deriving a Solution Procedure and
           Solving Model Equations .............................. 17
     II.5. Starting the Problem: Finding a
           Feasible Point ....................................... 20
     II.6. The Optimization Strategy ............................ 27
     II.7. Special Uses of EROS ................................. 35
     II.8. Results and Discussion ............................... 41

III  LOCAL AND GLOBAL OPTIMA ................................... 48

     III.1. Statement of the Problem ............................ 48
     III.2. The Lower Bound ..................................... 49
     III.3. The Upper Bound on the Lower (Dual) Bound ........... 54
     III.4. Finding the Best Upper Bound ........................ 57
     III.5. Improvement of the Upper Bound ...................... 59
     III.6. Procedure to Investigate Global
            Optimality .......................................... 62
     III.7. Algorithm ........................................... 63
     III.8. Examples ............................................ 65
     III.9. Discussion .......................................... 75

IV   SYNTHESIS VIA STRUCTURAL PARAMETERS
     WITH INEQUALITY CONSTRAINTS ............................... 76

V    CONCLUSIONS AND RECOMMENDATIONS FOR FURTHER
     RESEARCH .................................................. 83

B    DESCRIPTION OF THE PROGRAM ................................ 102

LITERATURE CITED ............................................... 111

BIOGRAPHICAL SKETCH ............................................ 114


LIST OF TABLES

Table                                                            Page

1   Constraint Representation for an Example Node 3
    in Figure 4 ................................................ 16

2   The Incidence Matrix ....................................... 21

3   Solution Procedure for the Problem in Figure 2 ............. 22

4   Additional User Specifications for the Problem
    in Figure 4 ................................................ 40

5   Results for the Problem in Figure 9 ........................ 42

6   Stream Specifications for the Example in Figure 10 ......... 44

7   Results .................................................... 46

8   Stream Specifications and Problem Data ..................... 79


LIST OF FIGURES

Figure                                                           Page

1   Structure of an Optimizing System .......................... 7

2   An Example Problem ......................................... 10

3   The Network Nodes .......................................... 12

4   Partitioning a Heat-Exchanger into Zones where
    Phase Changes Occur ........................................ 15

5   A Typical Cooling Curve .................................... 18

6   Keeping Track of the Best Point 'e' ........................ 28

7   Structure of an Optimization Algorithm ..................... 30

8   Using an Existing Exchanger ................................ 36

9   Reliability Analysis ....................................... 38

10  An Example Exchanger Network ............................... 43

11  Example Problem 1 .......................................... 50

12  The Behavior of the Objective Function for
    Example Problem 1 .......................................... 51

13  Staged Processes ........................................... 53

14  Geometric Significance of the Dual ......................... 55

15  Geometric Significance of the Upper Bound .................. 56

16  Subsystems for Example Problem 1 ........................... 66

17  Example Problem 2 .......................................... 68

18  Subsystems for Example Problem 2 ........................... 69

19  Subsystems for Example Problem 3 ........................... 72

20  The Optimal Values of the Parameters ....................... 78

21  The Discontinuity in Optimization .......................... 81

22  The Optimal Network When Ignoring the Approach
    Temperature Requirement at Null Exchanger 2 of
    Figure 20 .................................................. 82

23  Description of the Program EROS ............................ 103


LIST OF SYMBOLS

a = vector; constant for points on a hyperplane

b = scalar; constant for points on a hyperplane

C = the set of all newly violated constraints

C = the violated constraint encountered first

Cp = heat capacity

D = minimum allowable approach temperature

E0 = the equation set without any inequality constraints present
as equalities (heat and material balances only)

f = scalar return function; may be subscripted

fj = subproblem (subsystem) return function

F = flow rate

G = matrix; the first n rows are the vectors g^i, and the (n+1)st
row is (1, ..., 1)

g = vector constraint function; may be subscripted to represent
scalar elements. May also be superscripted as g^i to
represent the vector constraint function at a point i

h = enthalpy

H = a set of (m+1) constraints, where m is an integer

H = a hyperplane

Ij = the set of i such that stream i is an input to subsystem j

L = Lagrange function

LB = lower bound

Oj = the set of i such that stream i is an output of subsystem j

Q = duty of a heat-exchanger

s = composite vector variable (x, u); may be subscripted

S = constrained variable set; may be subscripted

T = temperature

ΔTm = log mean temperature difference

u = system decision variable; may be subscripted

U = heat transfer coefficient

UB = upper bound

Vc = the current set of inequality constraints in the equation
set (Ec − E0)

V1, V2, ..., Vm+1 = subsets formed by removing one constraint at a
time from H. (For example, V1 is obtained by
removing the first constraint in H.)

VR = the set of constraints present in the equation set as
equality constraints, with the difference that their
slack variables are used as search coordinates

V = all the constraints in the system less the ones in VT
and VR

VT = the set of constraints being held as equality constraints

x = vector variable associated with interconnecting streams;
may be subscripted

y = multipliers to express a vector as a linear combination of
other vectors; may be subscripted

Z = objective function for the linear programming problem

Greek Letters

α = variable vector used in the linear programming formulation;
may be superscripted to represent scalar variables in the
linear programming formulation.

The scalar variable α and the subscripted variable
αij may be used to represent a structural parameter
(split fraction).

φ = objective function; may be superscripted as φ^i to represent
its value at point i. May also be used to denote a
vector of φ^i's

λ = vector of Lagrange variables; may be subscripted to represent
scalar elements. May also be superscripted as λ^i to
represent its value at point i

σ = slack variable; may be subscripted

Mathematical Symbols

∩ = intersection

∅ = null set

Abstract of Dissertation Presented to the Graduate Council
of the University of Florida in Partial Fulfillment of the Requirements
for the Degree of Doctor of Philosophy



THE OPTIMIZATION OF ENERGY RECOVERY SYSTEMS

By

Jigar V. Shah

March, 1978

Chairman: Arthur W. Westerberg
Major Department: Chemical Engineering

An emphasis on technical development oriented towards conservation

of energy in the design of new chemical engineering processes and in the

modification of the existing processes is as important as an emphasis

on the search for alternate sources of energy. With the increasing

sophistication in computer hardware, design engineers are now able

to assess innumerable options through the use of flowsheeting packages

to maximize energy recovery. Most of the current flowsheeting packages

essentially convert the well-defined input information about a process

into a description about the output from the process. A next evolution

in these packages is judged to be a system which will be very flexible

in the input that is specified to it. In addition, it will possess

an optimization capacity to yield the most desirable values for

parameters left to it as degrees of freedom. This study describes

a prototype for what a really convenient flowsheeting system ought to be.
The program EROS (Energy Recovery Optimization System) is a flow-

sheeting package for evaluating and optimizing the performance of simple

networks of heat-exchangers. Using EROS one can set up an arbitrary

structure of heat-exchangers, stream splitters and mixers. Stream

flow-rates and entry and exit temperatures may be specified, free, or

bounded above and/or below. Phase changes may be allowed to occur.

Requiring no more user input (such as initial guesses), the program

gathers together the modeling equations and appropriate inequality

constraints. It then develops solution procedures repeatedly in the

course of optimizing, initially to aid in locating a feasible point

(which need not be specified by the user) and in the final stages to

take advantage of tight inequality constraints to reduce the degrees

of freedom. The program is written in standard Fortran and is

currently operable on an IBM 360/67.

The development of this program is relevant from the point of view

of evaluating and optimizing energy recovery systems because the subject

of choosing an optimal structure has recently come under increasing

attention. A tool such as EROS will prove very useful to make a more

thorough analysis of the candidate structures chosen by the process of

synthesis at the penultimate and final stages. Existing energy recovery

schemes may be reevaluated with the intention of making changes to make

them more efficient. EROS can also perform quick reliability studies

accounting for fluctuations in stream flow-rates and temperatures,

a common problem during start-up and periodic failures.

As is the case in the optimization of most chemical engineering

systems, a global optimum cannot always be guaranteed. An attempt


to recognize whether the discovered optimum is a global one, in a

large system, is not a trivial task. Usually the problem may be

side-stepped by making several optimization runs from different

starting points and considering the best result as the required

optimum. A systematic procedure is presented in this study to

investigate global optimality, and the results on application to

simple examples prove it to be quick and effective.

The use of EROS can be extended to perform process synthesis

via structural parameters. However, the presence of inequality

constraints inherent in the system can easily force the optimization

to stop short of the desired result. This observation has not been

reported before and attention is brought to it in this dissertation.



I. INTRODUCTION

Most of the existing flowsheeting packages for chemical engineering

processes are based on Sequential or Simultaneous Modular

Approaches. In these approaches each unit is modeled by writing a

computer subroutine which converts the input stream and equipment

parameter values into output stream values. Systems based on

Sequential or Simultaneous Modular Approaches are relatively easy

to build but the penalties paid are the lack of flexibility in the

definition of the problem and the requirement for well-defined user input.


Equation solving approaches, as followed in EROS (and in Leigh,

Jackson and Sargent (1974), Hutchison and Shewchuk (1974) and Kubicek,

Hlavacek and Prochaska (1976)) present an alternative way to treat

flowsheets. The flowsheet is represented as a collection of non-linear

equations which must be solved simultaneously. In following an Equation

Solving Approach, the user can specify many of the values for both unit

inputs and outputs. The unit equipment parameters are then calculated

to give these desired transformations of inputs to outputs by the unit;

in other words, the unit is designed to meet these requirements. Since

EROS also contains an optimization capability the rather striking

advantage is that variables for which the user has no preferred values

are treated as the degrees of freedom by the system. EROS incorporates

the general optimization strategy as outlined in Westerberg

and deBrosse (1973) and demonstrates the applicability and effec-

tiveness of their algorithm.

The two important problems in the design of energy recovery

systems are to choose the configuration, and, given a configuration,

to choose the design parameters and operating variables. A recent

review on the effort directed to choosing a configuration can be found

in Nishida, Liu and Lapidus (1977). In choosing a suitable configura-

tion the general trend has been to evaluate networks using the heuristic

of setting the minimum allowable approach temperature to 20°F, whereas

the economics often advocate a smaller value. In addition, when a

stream is split, the need for finding the optimal value for the split

fraction has been ignored. Grossmann and Sargent (1977) optimized

several heat-exchanger networks and found considerable savings

(sometimes as much as 25%).

The problem of optimizing a heat-exchanger network to obtain

the most suitable values for operating variables has been considered

in the works of Westbrook (1961), Boas (1963), Fan and Wang (1964),

Bragin (1966) and Avriel and Williams (1971). Typically each design

problem is formulated and solved as an optimization problem.

Many investigators (Hwa (1965), Takamatsu, Hashimoto and Ohno (1970),

Henley and Williams (1973), and Takamatsu et al. (1976)) have combined

both problems, choosing a configuration as well as the operating

variables, and formulated the combination as a single optimization problem.

All the methods mentioned for optimizing over the operating

variables require the problem to be cast into a mathematical format.

EROS precludes this need because of its capability as a flowsheeting package.


In Chapter II of the dissertation a description of the program

EROS is provided. The data specification and modeling considerations

for the system are discussed here while a user's guide to EROS is

presented in the appendices. The derivation of a solution procedure

is illustrated with the help of a simple example. If the user has not

provided a feasible starting point, EROS has the capability of dis-

covering one. The algorithm used is a modified version of that pre-

sented in deBrosse and Westerberg (1973). The optimization strategy

based on Westerberg and deBrosse also merits attention in this study

because of its usefulness to this application. The different applica-

tions of EROS and the results on optimization of 10 problems of varying

sizes are also shown.

The program EROS does not guarantee a global optimum because

of the non-linear and multimodal nature of the problem. In Chapter

III an algorithm is presented to establish, often quite quickly,

whether the discovered optimum is global or not. The strategy is

based on finding both a lower (dual) bound for the optimum of the

structured system and an upper bound on this lower bound. In this

chapter the effectiveness of the algorithm is then illustrated by

applying it to three small heat-exchanger network problems.

With a convenient evaluation and optimization package such as

EROS, it is very tempting to use it to perform process synthesis via

the use of structural parameters. Using this approach several

alternate configurations are imbedded into one main structure

which, on optimization, yields the required test configuration.


Several authors, as mentioned earlier in this section, have already

experimented with this idea. However, the problem to be solved has

to be formulated very carefully, and not enough attention has been

paid to this fact. In Chapter IV a discussion regarding this subject is presented.


The conclusions from creating a system such as EROS and recom-

mendations for further investigations form Chapter V of this disserta-

tion. A guide to the use of the program is given in the appendices.

The aspects of EROS presented are the input data format, the inter-

pretation of the output, a typical computer printout, and a description

of all the subroutines.


II. THE SYSTEM EROS

II.1. Background

In order to evaluate and optimize heat-exchanger networks it

is desirable to have a program which on being given information about

the configuration and stream properties, yields all the required

information about the optimal network. Since the program will perform

several different tasks, it would be very attractive and in many

instances necessary for it to possess the following features.

a) An ability to set up solution procedures.
The program should eliminate or reduce com-
putational recycles, choose the decision variables
and discover the order for calculating the various
unknown variables.

b) An ability to obtain a feasible starting point.
Often, locating a feasible starting point for
optimization is not a simple task. In order to save
users the time and trouble necessary to find a feasible
starting point, the program should be capable of per-
forming such a task on its own.

c) Efficient optimization routines.
These would be required for selecting the
optimal values for the decision variables by
searching over their full range.

However, the optimization of a system such as this one raises

certain problems. The objective function is highly non-linear and

multimodal. Also, if phase changes are allowed, continuous derivatives

cannot be obtained. These criteria force the use of a search algorithm

such as the complex method. Having resigned oneself to the use of the

complex method for optimization, further demands can be made to improve

the efficiency of the approach for optimization. If in the process of

optimization several constraints are violated, one remedial action is

the use of penalty functions, but this modification is inefficient and

it increases the number of iterations required for convergence.

Hence, with regard to optimization, a few additional features would

be deemed attractive.

d) The ability to rederive a solution procedure.
When a constraint is violated, the program
should be able to modify the equation set and re-
derive a solution procedure so that the optimization
may be continued.

This strategy will lessen the number of iterations required

for convergence as compared with the penalty function method. However,

it is essential that the saving in computer time thus incurred

compensates for the extra time required in rederiving the solution procedure.


e) The use of restriction as a solution strategy.
In the presence of a large number of decision
variables, some of them are set to zero and optimiza-
tion is performed with only the remaining ones as
search coordinates. Optimization is considered
complete when the Kuhn-Tucker (1951) optimality
conditions are satisfied with respect to the decision
variables held at zero value. This strategy aids
considerably in reducing the number of iterations
during optimization.
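The restriction check in (e) can be sketched as a simple finite-difference test: a decision variable held at its zero bound stays restricted as long as perturbing it off the bound cannot improve the objective. EROS itself is written in Fortran; the Python below, with hypothetical names, is only an illustration of the idea, not the program's code.

```python
def restricted_vars_optimal(objective, u, zero_idx, step=1e-6):
    """Return True if the Kuhn-Tucker conditions hold for the decision
    variables held at zero: the objective does not decrease when any of
    them is moved slightly off its zero lower bound."""
    f0 = objective(u)
    for j in zero_idx:
        trial = list(u)
        trial[j] = step                     # perturb off the zero bound
        if objective(trial) < f0 - 1e-12:   # objective improves: release it
            return False
    return True
```

If the check fails for some variable, that variable would be released and added back to the search coordinates before optimization continues.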

EROS is the author's attempt at developing an optimization

program which incorporates all the features discussed above. Figure 1

illustrates the general structure of the EROS system.

The approach taken is to model each unit in a heat-exchanger

network functionally by writing overall material and energy balances.

The unit models themselves may be very complex internally, but the

[Fig. 1. Structure of an Optimizing System: the optimizer adjusts the decision variables u, the model equations are solved for the remaining variables x(u), constraints are checked, and the objective is evaluated.]

net effect at the flowsheet level is that each unit satisfies overall

heat and material balances. In the current version of EROS only simple

models are used. Using the functional equations, the modeling of such

a system is only at the flowsheet level, and a solution procedure or

the order in which these equations are to be solved is found along with

the degrees of freedom to be chosen. The solution procedure sought is

one that will eliminate, if possible, computational recycles at this level.
The above approach is useful because the equations are solved

repeatedly as an inner loop to an optimization program. As illustrated

in Figure 1, the optimizer directs all the activity. Its primary func-

tion is to adjust the decision variables to improve the objective

function 0. For this system 0 is the annualized cost of the equipment

plus the annual cost of buying the utilities needed such as steam for

the purpose of heating.

To evaluate φ, the optimizer supplies the block labeled "Solve

Model Equations" with the values it wishes to try for the decision

variables u. The remaining problem variables x(u) are then obtained

by solving the model equations. With u and x(u) values available,

constraint violations are checked, and if some are violated, they are

identified to the optimizer. Assuming none are violated, the units

in the system are sized and an annualized cost φ is evaluated. The

optimizer notes this cost and changes the decision variable values,

with the aim of reducing φ. This calculation sequence is repeated

many times during a typical optimization. If constraints are violated,

special action is taken, which for this system will result in a modi-

fied set of model equations and a need to rederive a solution procedure

for them (this approach is based on the optimization strategies in

deBrosse and Westerberg (1973) and Westerberg and deBrosse (1973)).

The modified complex optimization algorithm (Umeda and Ichikawa (1971))

is used in searching for the decision variable values, u.
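The control loop of Figure 1 might be sketched as follows. All function names here are hypothetical stand-ins for the EROS blocks (the program itself is Fortran), and the complex-method search is abstracted into a single `propose` step that supplies the next trial point.

```python
def optimize(propose, solve_model, violated, annual_cost, u0, n_iter=100):
    """Outer loop of Figure 1: propose decision variables u, solve the
    model for x(u), check constraints, and cost only feasible points."""
    best_u, best_phi = None, float("inf")
    u = u0
    for _ in range(n_iter):
        x = solve_model(u)                  # "Solve Model Equations" block
        bad = violated(u, x)                # identify violated constraints
        if bad:
            u = propose(u, feasible=False)  # special action, e.g. rederive
            continue                        # the solution procedure
        phi = annual_cost(u, x)             # equipment plus utility cost
        if phi < best_phi:
            best_u, best_phi = u, phi
        u = propose(u, feasible=True)
    return best_u, best_phi
```

The essential point of the figure is that the model solution sits as an inner loop inside the optimizer, which is why an efficient, recycle-free solution procedure matters.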

II.2. Data Specification

Adequate data must be supplied to the computer to define the

problem. A problem definition will require the following input.

1) the flowsheet structure
a) connectivity of segments and units

2) the unit data
a) unit type
b) any equipment parameter specifications

3) the stream data
a) flow and temperature specifications
b) physical property data
c) film heat transfer coefficients
d) materials specifications and other cost data

4) the segment data
a) associated stream identifier (i.e., what
stream this segment is a part of)
b) any specifications imposed on flow and temperature

5) general user specifications

6) guessed set of inequality constraints to be held
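For illustration only, the six input groups above might be collected into a structure like the one below. The field names and values are invented for this sketch and are not the actual EROS input format, which is documented in the appendices.

```python
# Hypothetical problem definition mirroring input groups 1) through 6).
problem = {
    "flowsheet": {                       # 1) connectivity
        "units": [1, 2, 3, 4, 5, 6],
        "segments": {2: ("unit 1", "unit 2"), 11: ("feed", "unit 2")},
    },
    "units": {                           # 2) unit data
        1: {"type": "splitter"},
        2: {"type": "exchanger"},
    },
    "streams": {                         # 3) stream data
        "H1": {"flow": 10000.0, "T_in": 800.0, "film_coeff": 25.0},
    },
    "segments": {                        # 4) segment data
        11: {"stream": "C1", "flow_bounds": (3000.0, 5000.0)},
    },
    "user": {"min_approach_D": 20.0},    # 5) general user specifications
    "held_constraints": [55, 33],        # 6) guessed inequality set to hold
}
```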

II.3. Modeling Considerations

The modeling of a network will be illustrated with regard to

the example in Figure 2.

The network comprises a single hot process stream H1 which is

split and used to heat two cold process streams C1 and C2. It then

merges to its exit conditions. Streams C1 and C2 are heated further

by steam utility streams S1 and S2.

[Fig. 2. An Example Problem: the network flowsheet with unit, stream and segment numbering and the problem data, including the hot stream H1 (F = 10,000, T = 800), the steam utility streams S1 and S2 (T = 500), heat transfer coefficients for the exchangers, the constraints 2500 ≤ F2, F3 ≤ 7500, cost multipliers of 0.4 and 0.2 per unit flow rate on the utility streams, and the aim of minimizing the sum of the exchanger costs, Costi = 35 (Areai)^0.6, plus the utility costs.]

The network has four heat-exchanger (or heater and cooler) units, 2, 3, 5 and 6, one stream

splitter unit, 1, and one mixing unit, 4. These units may also be

referred to as nodes. All the heat exchangers are assumed counter-

current. The streams have been broken up into segments, of which

there are 16 overall. For example, stream C1 enters node 2 as

segment 11. It exits and proceeds to unit 5 as segment 12, and

finally leaves the system as segment 13. The naming scheme should

now be evident. The 3 basic units used are shown in Figure 3. The

unit models are written functionally by writing overall material

and energy balances.


Node 1:

F2 = αF1                                        (1)

F3 = (1 − α)F1                                  (2)

h2 = h1                                         (3)

h3 = h1                                         (4)

Node 2:

F4 = F2                                         (5)

F12 = F11                                       (6)

h2F2 + h11F11 = h4F4 + h12F12                   (7)

Node 3:

F5 = F3                                         (8)

F15 = F14                                       (9)

h3F3 + h14F14 = h5F5 + h15F15                   (10)

Node 4:

F6 = F4 + F5                                    (11)

h6F6 = h4F4 + h5F5                              (12)

[Fig. 3. The Network Nodes: (a) the counter-current heat-exchanger, (b) the splitter, and (c) the mixer.]

Node 5:

F8 = F7                                         (13)

F13 = F12                                       (14)

h7F7 + h12F12 = h8F8 + h13F13                   (15)

Node 6:

F10 = F9                                        (16)

F16 = F15                                       (17)

h9F9 + h15F15 = h10F10 + h16F16                 (18)
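The functional unit models can be rendered compactly. The sketch below (Python for illustration; EROS itself is Fortran) implements equations (1)-(4), (11)-(12), and the exchanger balance (5)-(7) rearranged for the hot outlet enthalpy.

```python
def splitter(F1, h1, alpha):
    """Eqs (1)-(4): split flow F1 by fraction alpha at equal enthalpy."""
    return (alpha * F1, h1), ((1.0 - alpha) * F1, h1)

def mixer(F4, h4, F5, h5):
    """Eqs (11)-(12): merge two branches; outlet enthalpy by mixing rule."""
    F6 = F4 + F5
    return F6, (h4 * F4 + h5 * F5) / F6

def exchanger_hot_outlet(F_hot, h_hot_in, F_cold, h_cold_in, h_cold_out):
    """Exchanger energy balance rearranged: the hot outlet enthalpy follows
    from the duty implied by the cold-side enthalpy change."""
    duty = F_cold * (h_cold_out - h_cold_in)
    return h_hot_in - duty / F_hot
```

Note that the overall balances say nothing about the exchanger internals; sizing and the internal approach-temperature checks are handled separately, as described below.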

In addition to these 18 equations, the associated inequality

constraints and the equipment sizing and costing relations can be written.


The basic inequality constraint is that at no point in the

exchanger should the hot stream temperature equal or fall below that

of the cold stream. Referring to the heat-exchanger in Figure 3a,

this constraint is usually expressed as

T1 ≥ T4 + D, where D is the minimum allowable approach temperature

T2 ≥ T3 + D

However the temperatures could cross over internally and these

constraints may not be adequate to detect it, particularly when a

stream passes through a phase change. A check should therefore be

made at several points along the exchanger to prevent a "crossover"

of temperatures.
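Such a multi-point check might look like the following sketch, where the two profile functions stand in for the stream cooling curves (a hypothetical interface, not the EROS routine).

```python
def no_crossover(T_hot_at, T_cold_at, D, n_points=11):
    """T_hot_at(q) and T_cold_at(q) give stream temperatures at cumulative
    duty fraction q in [0, 1]. Returns True only if the approach is at
    least D at every sampled point along the exchanger."""
    for k in range(n_points):
        q = k / (n_points - 1)
        if T_hot_at(q) < T_cold_at(q) + D:
            return False
    return True
```

A check at the ends alone would pass a profile whose cold stream boils at a nearly constant temperature just below the hot stream, which is exactly the internal crossover this guards against.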

The final set of constraints indicates that a positive heat

transfer must occur, that is, the hot stream must be cooled and the

cold stream heated. These are

T2 < T1 and T3 < T4

All the constraints associated with a typical exchanger such

as the one shown in Figure 4 are listed in Table 1 with their ap-

propriate code so that they can be precisely identified by numbers.

No constraints are written for the splitter and mixer units.
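The numbering scheme of Table 1 reduces to three small formulas; the sketch below assumes the (NODE × 10 + k), (NODE × 1000 + k), and −(SEG × 1000 + NODE × 10 + k) codes described there.

```python
def node_code(node, k):
    """Node-level constraint k at a node, e.g. node 3, constraint 3 -> 33."""
    return node * 10 + k

def zone_code(node, k):
    """Phase-change (zone) constraint k, e.g. node 3, constraint 1 -> 3001."""
    return node * 1000 + k

def segment_code(seg, node, k):
    """Segment constraint k; negative codes distinguish segment constraints,
    e.g. segment 3 at node 3, constraint 1 -> -3031."""
    return -(seg * 1000 + node * 10 + k)
```

These codes are what the user quotes when requesting that particular inequality constraints be held, as in the constraints 55 and 33 used in the example below.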

The sizing calculation for an exchanger is to evaluate its

area. This calculation can be very involved, but for design purposes

may in fact be simplified by assuming film coefficients based on the

fluid, whether it is heating or cooling and whether it is boiling or

condensing (see Perry and Chilton (1973)). The exchanger may, again,

for simplified design purposes, be considered to operate in zones as

indicated in Figure 4. Each zone is then sized and costed using the

appropriate film coefficients and temperature driving forces.

To place a crude cost on the exchanger one might use an equation

of the form (see Guthrie (1969))

cost = fM fp (a A^m)

where fM is a materials factor, fp a pressure factor, and A the

area of a zone within an exchanger. The zones are sized and costed

differently because of the different types of heat-exchange duty.

The terms a and m are constants, with m being about 0.6 to 0.8.

Constant costs are assumed for the splitter and the mixer units.
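Zone sizing and costing can be sketched as follows, assuming the usual duty relation Q = U A ΔTm. The default coefficients a = 35 and m = 0.6 mirror the cost expression quoted with the example problem, but all numbers here are illustrative rather than the program's data.

```python
import math

def lmtd(dt1, dt2):
    """Log mean temperature difference between the two ends of a zone."""
    if abs(dt1 - dt2) < 1e-9:
        return dt1
    return (dt1 - dt2) / math.log(dt1 / dt2)

def zone_cost(duty, U, dt1, dt2, fM=1.0, fp=1.0, a=35.0, m=0.6):
    """Size a zone from Q = U * A * dT_lm, then apply the Guthrie-type
    correlation cost = fM * fp * a * A**m (m typically 0.6 to 0.8)."""
    area = duty / (U * lmtd(dt1, dt2))
    return fM * fp * a * area ** m
```

Summing `zone_cost` over the zones of Figure 4 gives the exchanger cost entering the objective function.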

The last source of equations is the evaluation of physical prop-

erties. The system must be able to convert from stream temperature (and

vapor fraction for a pure component in two phase region) to enthalpy

and vice versa. A cooling curve may be provided for the stream

[Fig. 4. Partitioning a Heat-Exchanger into Zones where Phase Changes Occur: the hot stream (F1, T1, entering as segment 1 and leaving as segment 2) and the cold stream (entering as segment 3, leaving as segment 4) divide the exchanger into zones I through V at the phase boundaries.]

TDPH = dew point temperature, hot stream

TBPH = bubble point temperature, hot stream

TDPC = dew point temperature, cold stream

TBPC = bubble point temperature, cold stream

T5 = temperature of the cold stream at the location where the
hot stream temperature is TDPH

T6 = temperature of the cold stream at the location where the
hot stream temperature is TBPH

T7 = temperature of the hot stream at the location where the
cold stream temperature is TDPC

T8 = temperature of the hot stream at the location where the
cold stream temperature is TBPC

D = minimum allowable approach temperature



Table 1. Constraint Representation for an Example Node 3 in Figure 4

Constraint      Comment                              Code

T1 ≥ T4 + D     approach temperature                 (NODE × 10) + 1 = 31

T2 ≥ T3 + D     approach temperature                 (NODE × 10) + 2 = 32

T1 ≥ T2         positive heat transfer               (NODE × 10) + 3 = 33

T4 ≥ T3         positive heat transfer               (NODE × 10) + 4 = 34

F1 ≥ F1LB       F1LB = lower bound for flow F1       (NODE × 10) + 5 = 35

F1 ≤ F1UB       F1UB = upper bound for flow F1       (NODE × 10) + 6 = 36

F3 ≥ F3LB       F3LB = lower bound for flow F3       (NODE × 10) + 7 = 37

F3 ≤ F3UB       F3UB = upper bound for flow F3       (NODE × 10) + 8 = 38

TDPH ≥ T5 + D   zone approach temperature            (NODE × 1000) + 1 = 3001

TBPH ≥ T6 + D   zone approach temperature            (NODE × 1000) + 2 = 3002

T7 ≥ TDPC + D   zone approach temperature            (NODE × 1000) + 3 = 3003

T8 ≥ TBPC + D   zone approach temperature            (NODE × 1000) + 4 = 3004

In addition there could be constraints associated with any stream segment.
Example: Segment 3, Node 3

H3 ≥ H3LB       H3LB = lower bound                   −(SEG × 1000 + NODE × 10
                for enthalpy H3                      + 1) = −3031

H3 ≤ H3UB       H3UB = upper bound                   −(SEG × 1000 + NODE × 10
                for enthalpy H3                      + 2) = −3032

if the stream is assumed to be at a constant pressure. Figure 5

illustrates a cooling curve, where TDP and TBP are the dew and

bubble point temperatures respectively.

Properties such as thermal conductivities and densities

should also be provided if the film coefficients are to be deter-

mined from correlations. If for design purposes typical values for

film coefficients are to be used, these properties will not be needed.
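A minimal sketch of the enthalpy-to-temperature conversion along a three-segment cooling curve follows. All property values and the linear two-phase interpolation are illustrative assumptions; the actual property routines of EROS are described in the appendices.

```python
def temperature_from_enthalpy(h, Cp_liq, Cp_vap, h_bub, T_bub, h_dew, T_dew):
    """Invert a cooling curve with constant heat capacities in the liquid
    and vapor regions and a linear two-phase segment between the bubble
    point (h_bub, T_bub) and the dew point (h_dew, T_dew)."""
    if h <= h_bub:                          # subcooled liquid region
        return T_bub - (h_bub - h) / Cp_liq
    if h >= h_dew:                          # superheated vapor region
        return T_dew + (h - h_dew) / Cp_vap
    # two-phase region: interpolate along the cooling curve
    return T_bub + (T_dew - T_bub) * (h - h_bub) / (h_dew - h_bub)

def vapor_fraction(h, h_bub, h_dew):
    """Vapor fraction in the two-phase region (0 and 1 at the boundaries)."""
    if h <= h_bub:
        return 0.0
    if h >= h_dew:
        return 1.0
    return (h - h_bub) / (h_dew - h_bub)
```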

II.4. Deriving the Solution Procedure and
Solving Model Equations

Consideration will now be given to developing a solution

procedure and then solving the example problem. First, the system

must gather together the necessary equations, or at least establish

their structure, so that a solution procedure may be prepared. The

desired solution procedure should eliminate all recycle loops in the

computations if possible or minimize their number.

The initial solution procedure will ignore all but the user

specified inequality constraints. Thus the system sets up the 18

heat and material relations shown in the last section. Assuming that

the user has requested that constraints 55 and 33 be included, the

additional relations required are

F7 = F7LB + σ55                                 (19)

T3 = T5 + σ33                                   (20)

where F7LB is the lower bound on F7.

The inequality constraints have been converted to equality by

the slack variables σ55 and σ33, which are required to be nonnegative.


[Fig. 5. A Typical Cooling Curve: enthalpy plotted against temperature, with the dew point and bubble point temperatures bounding the two-phase region.]

When the solution procedure is derived, σ55 and σ33 will be required

to be decision variables with an initial value of zero. In this way

F7 and T3 will be forced to equal F7LB and T5 respectively.

The relation (20) is in terms of temperatures rather than

enthalpies. Hence the following relationships should also be added

T3 = f(h3)                                      (21)

T5 = f(h5)                                      (22)

The system can implicitly account for equations (20), (21)

and (22) by the single expression

h3 = f(h5, σ33)                                 (23)
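The slack-variable conversion generalizes readily; the closure below (a hypothetical interface, illustrating the idea rather than the EROS code) builds the equality residual for a held lower-bound constraint.

```python
def held_inequality(lower_bound):
    """For a held inequality value >= lower_bound, return (residual, sigma0):
    the residual of the equality value = lower_bound + sigma, and the
    initial slack sigma0 = 0 so that the constraint starts tight."""
    def residual(value, sigma):
        return value - (lower_bound + sigma)
    return residual, 0.0
```

With the slack initialized at zero the constrained variable is pinned to its bound; releasing the slack (keeping it nonnegative) lets the optimizer move away from the bound without ever violating the constraint.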

The incidence matrix can now be created. However its size can

be significantly reduced. Note that a large number of equations simply

equate one variable to another. Equations (3), (4), (5), (6), (8), (9),

(13), (14), (16) and (17) are precisely of this form. These equations

will automatically be satisfied if the values of variables so equated

are stored in a common storage location. Hence these equations may

be deleted and the two variables occurring in each one of them may be

merged to a single one.
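The shared-storage device can be sketched with a union-find structure; this is an illustrative mechanism only, and the stream labels below are hypothetical merges (the system itself simply assigns both variables one storage location):

```python
# Merge variables equated by trivial equations (x = y) so those
# equations can be deleted from the incidence matrix.  Each merged
# group shares one storage slot, so the equality holds automatically.

class VariableMerger:
    def __init__(self):
        self.parent = {}

    def _find(self, v):
        self.parent.setdefault(v, v)
        while self.parent[v] != v:          # walk (and compress) to the root
            self.parent[v] = self.parent[self.parent[v]]
            v = self.parent[v]
        return v

    def merge(self, a, b):
        """Record the trivial equation a = b."""
        self.parent[self._find(a)] = self._find(b)

    def slot(self, v):
        """Common storage location holding the value of v."""
        return self._find(v)

m = VariableMerger()
m.merge("F2", "F1")   # a hypothetical flow-equating equation
m.merge("F3", "F5")   # another hypothetical identity
print(m.slot("F2") == m.slot("F1"))   # True
```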

Many of the variables in the incidence matrix are in fact

specified. Let the following be specified in data input for the

example in Figure 2

Flows F1, F9, F14, F7LB

Enthalpies h1, h6, h7, h8, h9, h10, h11, h13, h14, h16

These specified variables along with the slack variables σ55 and σ33

(required to be decision variables) may be eliminated in the incidence

matrix. The resulting and much reduced matrix is illustrated in

Table 2.

A modification of the Lee, Christensen and Rudd (1966) algorithm

is applied to determine the solution procedure. The solution proce-

dure that results on the application of this algorithm to the incidence

matrix of Table 2 is shown in Table 3. The variables listed are cal-

culated from the corresponding equations in the order indicated. Note

that there is an iteration loop involving the single 'tear' variable

F2 (from steps 4 to 9). F2 appears in equation (12) and the

iterations between steps 4 and 9 are continued until the value of F2

guessed at step 4 is essentially the same as the value of F2 calculated

from equation (12) in step 9.
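The tear iteration is a successive-substitution loop: guess the tear variable, sweep once through the loop equations, recompute the tear variable, and repeat until the guessed and calculated values agree. A generic sketch, in which the update function g is a stand-in for one pass through steps 5 to 9 rather than equation (12) itself:

```python
# Successive substitution on a single tear variable: iterate
# x_new = g(x_guess) until the guessed and calculated values agree.

def solve_tear(g, x0, tol=1e-6, max_iter=100):
    x = x0
    for _ in range(max_iter):
        x_new = g(x)            # one sweep through the loop equations
        if abs(x_new - x) <= tol:
            return x_new        # guess and calculation agree
        x = x_new
    raise RuntimeError("tear iteration did not converge")

# Stand-in update with fixed point 5000 (illustrative numbers only)
g = lambda x: 0.5 * x + 2500.0
print(solve_tear(g, x0=0.0))    # approx. 5000.0
```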

The execution of the solution procedure, that is, calculation

of the variables from the equations assigned to them, is termed

"Solve Model Equations" in Figure 1. Corresponding to every unit, a

subroutine is required to calculate any variable involved in the heat

and mass balances of the particular unit. These subroutines may be

supplied by the user for more sophisticated models.

II.5. Starting the Problem: Finding a Feasible Point

If the user has not provided any information to aid in obtain-

ing a feasible starting point, a modified version of an algorithm by

deBrosse and Westerberg (1973) is used. As mentioned earlier, a signif-

icant effort would be required on the part of a user to provide a

feasible starting point if computational loops are involved in solving



[Table 2. The Reduced Incidence Matrix: the remaining equations versus the remaining unknown variables, with an x marking each occurrence of a variable in an equation]






Table 3. The Solution Procedure

Decision Variables: σ55, σ33

Variable Equation

1. F7(=F8) (19)

2. F12 (15)

3. h5 (23)

4. Guess F2

5. h2 (1)

6. F3(=F5) (2)

7. h4 (7)

8. F6 (11)

9. F2 (12)

10. h15 (10)

11. F (18)

model equations. Computational loops are almost inevitable in complex

networks. However, the user does have the option of providing a feasible

starting point.

The deBrosse and Westerberg (1973) algorithm uses an indirect

approach. It hypothesizes that a subset of constraints has no feasible

region and then attempts to verify the conjecture. If successful, the

subset is identified as infeasible and obviously no feasible point

exists. If unsuccessful, either a new hypothesis can be generated or

the algorithm has indirectly found a feasible point.

The feasible point algorithm

In order to demonstrate the feasible point algorithm, the

following definitions are necessary.

E0 = The equation set without any inequality constraints present
as equalities (heat and material balances only).

Ec = The current equation set (E0 + inequality constraints
present as equalities with slack variables).

C = The set of all newly violated constraints.

C1 = The violated constraint encountered first.

Vc = The current set of inequality constraints in the equation
set: (Ec - E0).

H = A set of (m + 1) constraints.

V1, V2, ..., Vm+1 = subsets formed by removing one constraint at
a time from H (for example, V1 is obtained by
removing the first constraint in H).

'0' = null set.

Degree of freedom (as defined for this work) = The number of original
variables (excluding slack variables) in the problem less the number of
constraints in Ec.

Procedure 'A' represents the following sequence of operations.

1) Set all the decision variables to a value of zero.

2) Solve the model equations.

3) Check for constraint violations.
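Procedure 'A' can be written schematically as below; the one-variable model and the single bound constraint are placeholders, not the equations of the example network:

```python
# Procedure 'A': zero the decision variables, solve the model
# equations with the current solution procedure, and report which
# inequality constraints (written as g(x) >= 0) are violated.

def procedure_a(decision_vars, solve_model, constraints):
    for name in decision_vars:          # 1) set decisions to zero
        decision_vars[name] = 0.0
    state = solve_model(decision_vars)  # 2) solve model equations
    violated = [cid for cid, g in constraints.items()
                if g(state) < 0.0]      # 3) check for violations
    return state, violated

# Tiny placeholder model: one variable, one bound constraint.
solve = lambda d: {"F7": 1000.0 + d["sigma55"]}
cons = {"55": lambda s: s["F7"] - 1000.0}   # F7 >= F7LB = 1000
state, bad = procedure_a({"sigma55": 1.0}, solve, cons)
print(bad)    # [] -- no violations at this point
```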

Step 1 Set Ec = E, V = '0'

Step 2 Determine the solution procedure based on struc-
tural considerations of the equation set. If a
solution procedure can be determined, go to Step 4.

Step 3 Attempt unsuccessful. Exit.

Step 4 Execute Procedure 'A'. If any constraints are
violated, go to Step 6.

Step 5 Feasible starting point. Exit.

Step 6 Set Ec = [Ec,C

Step 7 Determine the solution procedure. If a solution
procedure cannot be found, go to Step 11.

Step 8 Execute Procedure 'A'. If the problem does not
prove solvable, go to Step 11. If no constraints
are violated, go to Step 5.

Step 9 If the degree of freedom is not zero, go to
Step 6.

Step 10 Set H = [V ,CI], i = 1. Go to Step 12.

Step 11 Set i = 1, H = V

Step 12 Set Ec = [E0, Vi], Vi ≠ Vc from Step 10.

Step 13 Determine the solution procedure. If it cannot
be determined, go to Step 17.

Step 14 Execute Procedure 'A'. If the problem cannot be
solved go to Step 17. If no constraints are violated,
go to Step 5.

Step 15 If C ∩ H ≠ '0', go to Step 17.

Step 16 If the degree of freedom = 0, go to Step 10. If
not, go to Step 6.

Step 17 Set i = i + 1. If i > m + 1, go to Step 3 (the
set of constraints in H do not allow for a feasible
region). If i ≤ m + 1, go to Step 12.

The main departures from the deBrosse and Westerberg (1973)

algorithm are as follows:

1) In this work, the set H is restricted to containing
less than or equal to (n + 1) constraints where n
represents the number of decision variables.

2) Both the sets V and H are created in a different
manner. In this work, one constraint is added at a
time to build up the set V and H is created either
when there are already n constraints in Vc and an
additional constraint is violated, or when a struc-
tural infeasibility occurs.

3) "No Solution" options (in Steps 8 and 14 where the
problem is not solvable) are not treated in as
rigorous a fashion as in deBrosse and Westerberg
(1973). The method here is the same unless several
problems corresponding to the same "hypothesis"
set H lead to no solution in Step 14. The algorithm
here may terminate unsuccessfully at Step 17 whereas
that of deBrosse and Westerberg might continue with
alternate and reduced sets H. This option is quite
complex and was never found necessary in EROS.
Hence it was never included.

The algorithm can now be applied to the example shown in Figure

2. (Note that this problem involves 8 equations in 10 unknowns; thus

two degrees of freedom exist at the start.)

E0 = [(1), (2), (7), (10), (11), (12), (15), (18)]

(all the equations from Table 2 except the last two,
(19) and (23))

1) Ec = E0, Vc = '0'

2) Solution procedure derived

4) Procedure 'A' executed
Constraint Violation = 33.

6) Ec = [E0, 33]

7) Solution procedure derived

8) Procedure 'A' executed

Constraint Violation = 25

9) Degree of freedom = 1

6) Ec = [E0, 33, 25]

7) Solution procedure derived

8) Procedure 'A' executed

Constraint Violation = 22

9) Degree of freedom = 0

10) H = [33, 25, 22], V1 = [33, 25], V2 = [22, 33], V3 = [22, 25]

12) Ec = [E0, V2] = [E0, 22, 33]

Note: V1 = [33, 25] is Vc in the previous iteration and
it need not be considered again

13) Solution procedure derived

14) Procedure 'A' executed. Constraint Violation = 55

15) C ∩ H = '0'

16) Degree of freedom = 0

10) H = [22, 33, 55], V1 = [22, 33], V2 = [55, 22], V3 = [55, 33]

12) Ec = [E0, V2] = [E0, 55, 22]

Note: V1 = [22, 33] is Vc in the previous iteration and
is eliminated from consideration

13) Solution procedure derived

14) Procedure 'A' executed

No constraint violation

5) Feasible starting point. Exit.

II.6. The Optimization Strategy

The optimization strategy is modeled after the algorithm

presented in Westerberg and deBrosse (1973). The algorithm is in-

voked once a feasible point is available.

The sets of inequality constraints are divided into three sets.

VT = The set of inequality constraints being held as
equality constraints.

VR = The set of constraints present in the equation
set as equality constraints with the difference
that their slack variables are used as search
variables.

VS = The set of all remaining constraints.

Vc = [VT,VR] (Note: Vc is as defined in the previous section.)

Solution procedures are modified as inequality constraints are

moved from one set to another. Adding constraints in the set being

held tends to aid the optimization process by reducing the dimension

of search space for what is usually a marginal or no added burden in

solving an enlarged set of equations.

As the optimization proceeds, the values of all variables, in-

cluding all slack variables, are stored for the point that yields the

best value for the objective function. Hence, even when Vc is changed,

optimization can be and is started at the best point discovered up to

that moment. This modification makes a significant improvement to

the Westerberg and deBrosse (1973) method. Figure 6 illustrates the

typical dilemma faced by the Westerberg and deBrosse optimization

algorithm when stepping from a current best point, point 'e', through

one or more inequality constraints to point 'f'. At point 'f' the


Fig. 6. Keeping Track of the Best Point 'e'

constraint g1 is detected as being violated. The algorithm will

respond by changing the solution procedure so that the slack

variable σ1 for g1 becomes a decision variable. The other decision

variable will be either x1 or x2. There are several options now as

to where the optimization may be started. The algorithm could hold

x1 or x2 (whichever is selected as the decision variable) at its

current value and find the point where σ1 is zero, leading to point

P1 or P2 respectively. Alternatively it could attempt to locate P3

by searching along the direction leading from 'e' to 'f'. All of

these options can, and often do, lead to a next point which has a

higher and thus worse value for the objective function. By saving

all the variable values for the best point, the search can always

start, even after developing a new solution procedure, from that

point, that is from point 'e'. This change reduces cycling because

a change in the solution procedure cannot lead to a point that is worse than the best point already found.


The actual search strategy used is the modified complex method (Umeda and Ichikawa (1971)). The complex method is considered suitable because gradients are not required. The treatment of phase changes creates discontinuities in first derivatives of the objective function.
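The mechanics of the complex (Box) method can be sketched as follows. This is the basic method only, not the modified variant of Umeda and Ichikawa, and the reflection factor 1.3 is the customary choice for the method rather than a value taken from this work:

```python
import random

def complex_method(f, bounds, n_points=6, iters=200, alpha=1.3, seed=0):
    """Minimize f over box bounds with the basic complex (Box) method."""
    rng = random.Random(seed)
    lo = [b[0] for b in bounds]
    hi = [b[1] for b in bounds]
    pts = [[rng.uniform(l, h) for l, h in zip(lo, hi)]
           for _ in range(n_points)]
    for _ in range(iters):
        vals = [f(p) for p in pts]
        w = max(range(n_points), key=vals.__getitem__)    # worst point
        others = [p for i, p in enumerate(pts) if i != w]
        c = [sum(x) / len(others) for x in zip(*others)]  # centroid
        # reflect the worst point through the centroid, clipped to bounds
        new = [min(max(ci + alpha * (ci - wi), l), h)
               for ci, wi, l, h in zip(c, pts[w], lo, hi)]
        while f(new) >= vals[w]:     # still worst: retreat toward centroid
            new = [(ni + ci) / 2.0 for ni, ci in zip(new, c)]
            if max(abs(ni - ci) for ni, ci in zip(new, c)) < 1e-12:
                break
        pts[w] = new
    return min(pts, key=f)

best = complex_method(lambda x: (x[0] - 2.0) ** 2 + (x[1] + 1.0) ** 2,
                      bounds=[(-5.0, 5.0), (-5.0, 5.0)])
```

Because only function values are compared, the discontinuous first derivatives introduced by phase changes cause the method no difficulty.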


The optimization algorithm

The notation followed is the same as that in the last section.

Figure 7 gives the structure of this algorithm.

Step 1 Obtain a feasible starting point.

Step 2 Set VT = Vc, VR = '0'. Apply the Kuhn-Tucker (1951)
conditions as follows. Perturb each slack variable


[Figure: flow diagram of the optimization algorithm, showing the normal mode loop (A) and the abnormal mode loops (B, C)]

Fig. 7. Structure of the Optimization Algorithm

corresponding to constraints in VT, one at a
time, away from zero. If an improvement in
the objective function results, move the cor-
responding constraint from VT to VR.

A. If any solution procedure fails numerically during
this step, go to Step 11.

B. If on completion of this step, the number of con-
straints in VT is equal to the number of constraints
in Vc, and the degree of freedom is zero, go to Step

C. Otherwise continue.

The optimization is back in its normal mode if return is to Step 3

or Step 7.

Step 3 Perform an optimization (using the complex algo-
rithm). Use as search variables the slack variables
corresponding to inequality constraints in VR along
with any additional problem variables still retained
as decision variables. Search only over nonnegative
values of the slack variables.

A. If at any point the solution procedure fails numeri-
cally, go to Step 11.

B. Otherwise, when the optimization algorithm exits
normally, continue.

Step 4 An optimization has just been completed.

A. If one or more constraints are violated, go to
Step 7.

B. Otherwise continue.

Step 5 Set VTold = VT. Apply the Kuhn-Tucker conditions
as done in Step 2.

A. If any solution procedure fails numerically, go to
Step 11.

B. Otherwise on completion continue.

Step 6 A. If VT = VTold go to Step 18.

B. Otherwise repeat from Step 3.

One or more constraints not in the current set Vc have been violated.

Step 7 A. If the degree of freedom is not zero, set
Ec = [Ec, C1] and go to Step 9.

B. Otherwise find the set of constraints R from VR
such that each constraint in R could, from struc-
tural considerations of the equations, be traded
for C1. If R = '0', go to Step 10.

C. Find the constraint C' in set R having the
largest value for its corresponding slack
variable. If that value is zero, go to
Step 10.

D. Otherwise continue.

Step 8 Replace C' with C1 in VR.

Step 9 For any slack variable corresponding to a constraint
in VR and having a value of zero at the current best
point, transfer the corresponding constraint from VR
to VT. Set Vc = [VR,VT], Ec = [EO,Vc] and develop
a new solution procedure.

A. If a solution procedure cannot be developed, go
to Step 11.

B. Otherwise return to Step 3.

The degree of freedom is zero and either (a) the values of all the

slack variables in VR are zero or (b) the set R is empty.

Step 10 Set H = [Vc, C1], set i = 1, and go to Step 12.

The current set of equations is (or appears to be) numerically unsolvable.


Step 11 Set H = [Vc], set i = 1.

Step 12 Set Ec = [E0, Vi], Vi ≠ Vc (this test is only relevant
if entry was originally from Step 10 above). Set VR
equal to the set of all constraints in Vi whose slack
variable values are greater than zero. Set VT equal
to all remaining constraints in Vi.

A. If VR = '0', go to Step 13.

B. Otherwise, determine a new solution procedure.

1. If one can be found, go to Step 14.

2. Otherwise, go to Step 17.

If entry to Step 12 was via Step 10 originally, a degenerate problem is at hand as Vc and C1 (N + 1 constraints, N being the number of constraints in Vc) are all satisfied in a subspace of dimension N.

Step 13 Determine a solution procedure.

A. If one cannot be determined, go to Step 17.

B. Otherwise, perturb, one at a time, the slack
variables corresponding to the constraints in VT.

1. If, on any perturbation, constraints are
violated such that H ∩ C ≠ '0', go to Step 17.

2. If, on any perturbation of a slack variable,
an improvement is obtained in the objective
function, put the corresponding constraint
into VR.

C. If, on completion of the perturbations in Step B,
the set VR = '0', go to Step 18.

D. Otherwise continue.

Step 14 Perform an optimization. Use as search variables
the slack variables corresponding to inequality con-
straints in VR along with any additional problem
variables still retained as decision variables. Use
the solution procedure derived in Step 13. Search
only over nonnegative values of the slack variables.

A. If at any point the solution procedure fails
numerically, go to Step 17.

B. When the optimization algorithm exits normally
and if a constraint violation occurs, then

1. If H ∩ C ≠ '0', go to Step 17.

2. Otherwise, H ∩ C = '0'. Go to Step 7.

C. Otherwise when the optimization algorithm exits
normally and no constraints are violated, continue.

Step 15 Set VTold = VT. Apply the Kuhn-Tucker conditions
(as done in Step 2).

A. If any solution procedure fails numerically, go
to Step 17.

B. Otherwise continue.

Step 16 A. If VT = VTold, go to Step 18.

B. Otherwise, return to Step 14.

The current solution procedure failed or led to an immediate

constraint violation of the constraint dropped in set H.

Step 17 Set i = i + 1.

A. If i > m + 1, go to Step 19.

B. Otherwise repeat from Step 12.

Normal exit. Optimization attempt apparently successful.

Step 18 Optimization complete. Exit.

Abnormal exit. Optimization attempt aborted.

Step 19 Optimization failed. Exit.
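The Kuhn-Tucker perturbation test invoked in Steps 2, 5 and 15 can be sketched as follows; the objective function and the constraint labels below are illustrative stand-ins, not the model of an actual network:

```python
# Perturb each held slack variable away from zero; if the objective
# improves, the constraint need not be held as an equality, so move
# it from VT (held) to VR (searched over).

def kuhn_tucker_test(VT, VR, evaluate, slacks, eps=1e-4):
    base = evaluate(slacks)
    for cid in list(VT):
        trial = dict(slacks)
        trial[cid] = eps                 # step off the constraint
        if evaluate(trial) < base:       # objective improved
            VT.remove(cid)
            VR.append(cid)
    return VT, VR

# Placeholder objective: releasing constraint "22" helps,
# releasing "55" hurts (illustrative numbers only).
obj = lambda s: 100.0 - 10.0 * s.get("22", 0.0) + 5.0 * s.get("55", 0.0)
VT, VR = kuhn_tucker_test(["22", "55"], [], obj, {"22": 0.0, "55": 0.0})
print(VT, VR)   # ['55'] ['22']
```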

Application of the strategy to the Example Problem in

Figure 2.

1) Feasible starting point. Vc = [55, 22] (from the
last section).

2) VR = [22]

3 and 4) Optimization with the complex method.

7) Constraint violated = 35, R = [22], and σ22 > 0.

8) Replace constraint 22 with constraint 35.

9) VR = [35], VT = [55]. Ec = [E0, VR, VT].

3) Optimization with the complex method.

4) Optimization complete.

5 and 6) VTold = [55]. VT = [55] after the Kuhn-Tucker test.
18) Optimization successful. Exit.

At the optimum, the following values resulted:

F2 = 7350, F3 = F5 = 2650, T12 = 5000, T15 = 300,

F7 = 0, F9 = 2500, T4 = 392, h5 = 4230 and 0 = 3538

II.7. Special Uses of EROS

Using existing Heat-Exchangers. The program may be used to

analyze a network where some of the exchangers are already specified.

A special effort must be made, however, to make use of these exchangers.

It is assumed that these exchangers are available at no cost. In

Figure 8, an exchanger with an area of A1 has been specified. On

analysis, however, it is discovered that an exchanger with an area A2

is required at that particular site in the network. In the program,

the costs assumed for different conditions of A1 and A2 are shown in

Figure 8. The physical significance of 1) is that a by-pass will be


[Figure: a specified exchanger of area A1 beside the required exchanger of area A2; 1) if A2 ≤ A1 there is no cost, 2) if A2 > A1 the cost is that associated with an exchanger of area (A2 - A1)]

Fig. 8. Using an Existing Exchanger

used in the specified exchanger. 2) implies that an exchanger with area = A2 - A1 must be purchased in addition to the exchanger already specified.
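The two cases can be written out directly; the sketch below assumes the 35·A^0.6 cost relation used elsewhere in this work:

```python
# Cost charged at a site holding an existing exchanger of area a1
# when the optimization asks for area a2:
#   1) a2 <= a1 : no cost -- the excess area is by-passed
#   2) a2 >  a1 : pay only for an auxiliary exchanger of area a2 - a1

def site_cost(a1, a2):
    if a2 <= a1:
        return 0.0                     # by-pass part of the old unit
    return 35.0 * (a2 - a1) ** 0.6     # buy the extra area only

print(site_cost(1500.0, 1200.0))   # 0.0
print(site_cost(1500.0, 1532.0))   # 35 * 32**0.6
```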


Reliability analysis. The program has been extended to permit

its use in quick reliability studies. The reliability studies will

be demonstrated with the help of an example. Figure 9a represents

a network that is operational under normal conditions. Cold process

streams C1, C2, and C3 are heated to their final temperatures with

the help of a hot process stream H2 and a flue gas H1. The flow

rate and the outlet temperature of stream H1 are undefined but are

required to be within specified bounds. Assume that two abnormal

occurrences take place separately, for certain periods of the year,


1) Stream H2 is unavailable.

2) Heating of stream C2 is no longer required.

The aim now is to find an optimal network such as the one

shown in Figure 9a, fully provided for to meet the contingencies with

the aid of by-passes in the exchangers or with the aid of auxiliary

exchangers. The designer is permitted to allow a change in the

flow of stream H1 and its outlet temperature provided they stay within

their specified bounds. In the case of failure mode 2), a cooling

utility stream may be used to cool stream C2 to 230 so that it may

be recycled to be heated to 250 if it is desired to maintain a

semblance to the normal operation. (Flow rate of C2 can be allowed

to vary between 0 and 70,000 in this instance.)




[Figure: the network under normal operation (9a) and under the two abnormal modes (9b, 9c); unspecified quantities are marked]

Fig. 9. Reliability Analysis

Figure 9b represents the network when stream H2 is unavailable,

and, Figure 9c when stream C2 is not required to be heated. Additional

user specifications to those shown in Figure 9 are presented in Table 4.

In order to find the optimal operating system, networks in Figure 9a, 9b

and 9c are optimized together. The objective function 0 is given by

0 = CH1 FH1 + CH1' FH1' + CH1'' FH1'' + CUC'' FUC'' + Σi (cost of exchanger area at site i)

where Ci and Fi represent the cost coefficient and the flow rate of stream i, respectively. The cost coefficient Ci should reflect the
expected fraction of the year that the network is in the particular

state being represented. For example, for the problem in Figure 9

it is assumed that the networks in Figure 9a, 9b and 9c are opera-

tional 77%, 11.5%, and 11.5% of the time in a year, respectively.

Hence if CH1 is 0.1, then CH1' and CH1'' are 0.015 each.
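The scaling is a simple proportion: each abnormal mode is on stream 11.5/77 as long as the normal mode, so its coefficient is 0.1 × 11.5/77 ≈ 0.015. As a check:

```python
# Cost coefficients weighted by the fraction of the year each
# network state is operational (77% normal, 11.5% per abnormal mode).
C_H1 = 0.1
C_H1p = C_H1 * (11.5 / 77.0)    # coefficient for mode 9b
C_H1pp = C_H1 * (11.5 / 77.0)   # coefficient for mode 9c
print(round(C_H1p, 3))          # 0.015
```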

The cost of exchanger area at a site will be illustrated with

the help of an example. At site 2, the exchangers of different sizes required in Figure 9 are A2, A2', and A2''. Assume that A2' is the smallest and A2'' the largest area.

The cost of exchanger area at site 2 is defined as

35[(A2')^0.6 + (A2 - A2')^0.6 + (A2'' - A2)^0.6]
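The rule generalizes to any number of operating modes: sort the areas required at the site, cost the smallest outright, and cost each successive enlargement as an auxiliary increment. A sketch of this costing (the areas below are illustrative):

```python
# Cost of exchanger area at a site for reliability analysis: the
# smallest required area is costed outright, and each successive
# enlargement is costed as an auxiliary increment.

def reliability_site_cost(areas, coeff=35.0, expo=0.6):
    a = sorted(areas)                        # smallest ... largest
    cost = coeff * a[0] ** expo              # base exchanger
    for small, big in zip(a, a[1:]):
        cost += coeff * (big - small) ** expo   # auxiliary increments
    return cost

# e.g. the three modes need areas 400, 500 and 900 (illustrative)
print(round(reliability_site_cost([500.0, 400.0, 900.0]), 2))
```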

This manner of costing areas in doing reliability analysis

appears to be a good formulation of the real system. If, because of


Table 4. Additional User Specifications to the Example in Figure 9

[Table: for each of the streams H1, H1', H1'', and UC'', the flow rate or its lower and upper bounds, the inlet temperature, the outlet temperature or its bounds, and the cost coefficient; Ui denotes the heat transfer coefficient for exchanger i]

some departure from the normal mode, more exchanger area is required

at a particular site, then one must pay for the auxiliary exchanger.

If the exchanger area required is more for the normal mode, then an

auxiliary exchanger is already in effect. The results obtained on

optimization of the system in Figure 9 are shown in Table 5.

II.8. Results and Discussion

A typical application of EROS to a heat exchanger network is

illustrated by the example in Figure 10, the stream specifications

for which are shown in Table 6. Stream I is a flue gas and streams

IV and VIII are utility streams. The flow rates for these streams

are not defined but are required to be within specified bounds.

Some of the streams in the network are also multicomponent and are

characterized by a dew and a bubble point. Thus the network is

analyzed to ensure that minimum approach temperature violation does

not occur inside the exchanger owing to discontinuities at the dew

and bubble points. In fact, at the optimal solution for the example

in Figure 10, the minimum approach temperature constraint at the bubble

point of stream III is operative inside exchanger 5. The program can

also evaluate exchangers that already exist, fully or in part. For

example, exchanger 3 was assumed present with an area of 1500 units.

The objective function 0 is defined as

0 = Σ (cost coefficient × flow rate), summed over streams I, IV, VIII,

+ 35 Σ(i=1 to 13) [(Ai + 10)^0.6 - 10^0.6]














Table 5. Results for the Reliability Example of Figure 9

FH1 = 170,341, FH1' = 180,878, FH1'' = 50,148

TH1 = TH1' = TH1'' = 190

0 = $23,451.25/yr.

















[Figure: the network at the optimum; flow rates, temperatures, lower bounds (LB), and the active constraints at the optimum are indicated]

Fig. 10. An Example Exchanger Network






Table 6. Stream Specifications for the Example in Figure 10

[Table: for each stream, the flow rate or its bounds, the inlet and outlet temperatures, and the single-phase and two-phase heat transfer coefficients (HTC); minimum allowable approach temperature = 180]

where Ai is the area associated with exchanger i. In the calculation

of cost associated with an exchanger, the relation

cost = 35[(Ai + 10)^0.6 - 10^0.6] (1)

is used in preference to

cost = 35 Ai^0.6 (2)

because whenever the area of an exchanger, currently at zero value, is

increased, the cost for the exchanger as calculated from (2) increases

abnormally compared to the change in cost for the rest of the system.

The modification as shown in relation (1) dampens this ill-behaved characteristic.
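The difference between the two relations lies in their slope near zero area: relation (2) has an unbounded derivative as A approaches 0, while relation (1) has a finite slope there. A quick finite-difference comparison:

```python
# Finite-difference slopes of the two cost relations near A = 0.
eps = 1e-6
damped = lambda A: 35.0 * ((A + 10.0) ** 0.6 - 10.0 ** 0.6)  # relation (1)
naive = lambda A: 35.0 * A ** 0.6                            # relation (2)

slope_damped = (damped(eps) - damped(0.0)) / eps
slope_naive = (naive(eps) - naive(0.0)) / eps
print(slope_damped)   # finite, about 8.4
print(slope_naive)    # about 8,800 here, growing without bound as eps shrinks
```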


There are 8 decision variables for this problem and the optimum

results after 328 iterations from an infeasible starting point. The

stopping criterion is a 1 x 10^-5 difference between the worst and best

objective function values in the current set of points retained by

the complex algorithm. The value of 0 at the optimum is $152,109/yr.

Results for 10 examples are shown in Table 7. In all the

examples a feasible point is obtained in very few iterations. It may be

observed that the time required for the rewriting of solution proce-

dures after finding a feasible point is relatively small as compared

to the time taken for function evaluations during optimization. The

maximum ratio of these two times occurs in Example 6, but it is still

less than 1/3. This observation indicates that the penalty paid for

rewriting solution procedures whenever constraint violations occur is

indeed very small.



Table 7. Results for Ten Example Problems

[Table: for each of Examples 1-10, the number of process streams, utility streams, exchangers, and decision variables; the iterations to a feasible point; the new solution procedures derived after the feasible point and the time they required; the function evaluations after the feasible point and their time; and the total time in seconds]

* Taken from ... (1970).
** Example illustrated in Figure 10.
*** Feasible starting point provided as an input.


The figures shown for time in Table 7 are those required on

an IBM 360/67. The cost per second of CPU time is about 1.4 cents and

the longest run (Example 10) costs $8.56 for complete execution while

Example 1 costs $0.40. The size limitations are 50 process streams,

25 nodes and 150 stream segments in the current version of the program.

The program is fairly well tested and gave satisfactory results when

used for sixteen different problems set up by students in a recent

design course.


III.1. Statement of the Problem

In the course of optimizing a chemical engineering system

using a conventional minimum seeking algorithm, one should ask and

then attempt to discover if the solution found is indeed a global

one. If one is willing to try, then a common strategy is to re-

start the optimization at several different initial points, and,

if all or most lead to a single best point, that point is conjectured

to be the global optimum.

A second common strategy is to use one's intuition to claim

that only one optimum is likely. This strategy is particularly

dangerous when the optimum is at the boundary of the search region

where portions of the system effectively disappear because the flow

through them is zero. Often one models the capital cost of a process

unit by an equation of the form

cost = C1 (Throughput)^0.6

This form is not convex and is particularly troublesome at zero

throughput, where the slope is infinite for cost versus throughput.

If a unit becomes zero in size, then a small positive perturbation

in its throughput has an apparent positive infinite effect on cost,

an effect usually large enough to trap the optimization algorithm.

One can of course (and should) modify the cost equation to reduce

this problem.

These strategies may help but still do not guarantee that one

has found the global optimum. The purpose of the work leading to this

paper was to produce a reliable method to determine if an optimal solu-

tion is either a global or a local optimal solution. It was hoped

that the technique would be computationally effective, in particular

for heat exchanger networks, a class of problems which commonly dis-

plays local optima.

The optimization problem relating to a heat-exchanger network

is to minimize the annualized cost, 0, of heat-exchangers and utilities

subject to the approach temperature inequality constraints and the

heat and material balances representing equality constraints.

The simple network in Figure 11 (due to Grossman and Sargent

(1977)) has only one degree of freedom and the network cost can be

explored by choosing different values for the temperature T3. Figure

12 represents the plot of costs, 0, vs. T3. At both the points A and

B, the Kuhn-Tucker optimality conditions are satisfied as any slight

perturbation away from each of these points results in an increase

in 0. However, the point B clearly represents a local optimum. It

is quite likely therefore that an optimization algorithm will not find

the global optimum.

III.2. The Lower Bound

To permit decomposition of a system structure, the primal or

overall system optimization problem may be written as

Minimize 0 = f(q) = Σ(j=1 to n) fj(qj)

s.t. gi = xi - ti(qi) = 0, i ∈ O(j), j = 1,2,...,n

qj ∈ Sj, j = 1,2,...,n (P1)

[Figure: a two-exchanger network; one stream enters at F = 10,000, T = 600 and another at F = 10,000, T = 900, with intermediate temperatures (including T3) marked. U1 = U2 = 200; Areai = Qi/(Ui ΔTm); Costi = 35 × Areai^0.6, where Costi refers to exchanger i. Aim: to minimize the objective function 0, where 0 = Cost1 + Cost2]

Fig. 11. Example Problem 1


Fig. 12. The Behavior of the Objective Function for Example Problem 1

(It is important to note that the constraints defining Sj must be

complete enough to guarantee the boundedness of each subproblem j.)

This representation of a system, such as the one in Figure 13,

is based on Lasdon (1970), but a procedure for equality rather than

inequality constraints is stressed as shown in McGalliard and

Westerberg (1972). This stress follows from an interest in decomposed

system structures, as against, say, allocation of resources.

The problem is assumed to have an optimal feasible solution.

The Lagrange function for problem (P1) is

L = 0 - Σ(j=1 to n) Σ(i ∈ O(j)) λi [xi - ti(qi)]

which may be rewritten as

L = Σ(j=1 to n) [fj(qj) + Σ(i ∈ O(j)) λi ti(qi) - Σ(i ∈ I(j)) λi xi] = Σ(j=1 to n) f̄j(qj, λ)

where O(j) and I(j) denote the output and input streams of subsystem j.
j=l i (j) i (j) j=l

This Lagrange function is separable in q, and a Lagrange problem may

be defined which is equivalent to the following set of subproblems,

one for each subsystem in the original system:

Minimize f̄j(qj, λ) over qj ∈ Sj, j = 1,2,...,n (P2)

The solution to the Lagrange problem equals the sum of the solutions of (P2), i.e.,

L*(λ) = Σ(j=1 to n) f̄j*(λ)

Fig. 13. Staged Processes

A dual function can now be defined as

h(λ) = Σ(j=1 to n) f̄j*(λ)

The geometric significance of the dual h(λ) is illustrated in

Figure 14 for a simple system like the one shown in Figure 13. The

dual h(λ) is the intercept of the line with slope λ. Note that this

line is a supporting hyperplane at the point (0^0, g^0). The geometric

significance of the dual has been treated in Lasdon (1970). The

value of the dual is always less than or equal to the value of the

primal optimum. Hence h(λ) can be considered a lower bound on the

global optimum.

III.3. The Upper Bound on the Lower (Dual) Bound

Theorem 1: If in the space of 0 vs. g, where g is n-

dimensional, a polytope (Geoffrion (1969)) is formed in the hyper-

plane passing through n+1 support points, the value of 0 at the

intersection of this polytope with the axis g = 0 provides an upper

bound on the lower bound if the point of intersection is contained

inside or at the boundaries of the polytope.

Discussion: Figure 15 illustrates the ideas underlying this theorem. g^1 and g^2 are support points to the graph of 0 vs. g. The line connecting g^1 to g^2 is a polytope formed in the hyperplane passing through these n+1 = 2 support points. This polytope or line between g^1 and g^2 intersects the vertical axis g = 0, and thus the

[Figure: the graph of 0 vs. g, the primal optimal value, a support hyperplane, and a support point]

Fig. 14. Geometric Significance of the Dual

[Figure: support points g^1 and g^2, the convex hull of the graph, and the upper bound UB at g = 0]

Fig. 15. Geometric Significance of the Upper Bound

theorem says this point of intersection represents an upper bound on

the lower bound.

Proof: All the support points of a graph (such as g^1 and g^2)

are contained on the surface of a convex hull formed by the support

hyperplanes for this graph. Assuming that the polytope intersects

the axis g = 0 in its interior or at its boundaries, a hyperplane

intersecting the axis g = 0 above this point must intersect a part

of the graph, that is, it cannot be a support hyperplane. Thus every

support hyperplane must intersect the axis g = 0 on or below the point

the polytope intersects this axis. Consequently the dual cannot exceed

this value of 0 (=UB) and UB is an upper bound to the dual.

III.4. Finding the Best Upper Bound

Given support points (0^i, g^i), i = 1,2,...,p, p ≥ n+1, the problem is

to determine the following.

(a) Find, if possible, a set of n+l points that
yields an upper bound. Posed differently, the
problem is to find n+1 points such that the
polytope formed by them intersects the axis
g = 0.

(b) If several sets of n+1 points qualify to pro-
vide an upper bound, find the set that yields
the lowest value for the upper bound.

Any point inside or at the boundaries of the convex polytope,

specifically the point g = 0, can be obtained by a convex combination

of (n+1) points forming the polytope. This result has been stated and

applied in Director et al. (1978). Thus problem (a) can be formulated

as a feasible point problem in linear programming. If a feasible

solution exists (no artificial variables present in the solution at

a nonzero level), then indeed the set of n+1 points does provide an

upper bound.

To solve problem (b), note that if a "price" can be associated
with each point such that the objective function reflects the value
of the upper bound corresponding to a set of n+1 points, then problem
(b) is also a linear programming problem. The solution to it, if one
exists, would yield solutions to both problems (a) and (b) simultaneously.
This "pricing" is indeed possible.

For every point on the polytope formed by n+1 points, φ can
be represented by the linear relationship

    φ = a^T g + b                                    (1)

where a and b are constants. The value of φ at the intersection with
the axis g = 0 is

    φ = b                                            (2)

and hence

    UB = b                                           (3)

assuming that g = 0 lies inside or at the boundaries.

For the n+1 points, equation (1) may be rewritten compactly as

    φ^i = (g^i)^T a + b        i = 1, 2, ..., n+1    (4)

Let α^T = [α_1, ..., α_(n+1)] such that Σ_i α_i = 1,
[g^1, ..., g^(n+1)] α = 0, and α_i ≥ 0 for all i. Premultiplying
equation (4) with α (that is, forming Σ_i α_i φ^i) gives

    Σ_i α_i φ^i = b                                  (5)

and from equation (3)

    UB = Σ_i α_i φ^i                                 (6)

Thus the "price" associated with each point (φ^i, g^i) is φ^i.

The linear programming formulation to solve both problems (a)
and (b) simultaneously is as follows:

Find α^T = [α_1, ..., α_p] to

    Min Z = [φ^1, ..., φ^p] α

    s.t.  [g^1, ..., g^p] α = 0

          α_1 + α_2 + ... + α_p = 1

and α_i ≥ 0, i = 1, ..., p. The upper bound UB = Z, consistent with
problem (b)'s requirement of the lowest value for the upper bound.

The solution will contain (n+l) vectors in the basis. If a

feasible solution does not exist, artificial variables will be present

in the solution at a positive level and no upper bound will exist

because the set of points in hand so far (g^1, g^2, ..., g^p) does not surround

the origin.
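The pricing L.P. above can be sketched numerically. The sketch below is an illustration, not the code of this work: scipy's linprog stands in for a simplex routine, the function name best_upper_bound is invented here, and the two one-dimensional support points (g, φ) = (−200, 476.2) and (+200, 0) are those of Example Problem 1.

```python
import numpy as np
from scipy.optimize import linprog

def best_upper_bound(phi, g):
    """Min sum(phi_i * a_i)  s.t.  sum(g^i * a_i) = 0, sum(a_i) = 1, a_i >= 0.
    Returns UB, or None when g = 0 is not surrounded (infeasible L.P.)."""
    phi = np.asarray(phi, float)
    G = np.atleast_2d(np.asarray(g, float).T)    # n x p matrix of support points
    p = phi.size
    A_eq = np.vstack([G, np.ones((1, p))])       # interconnection rows + convexity row
    b_eq = np.append(np.zeros(G.shape[0]), 1.0)
    res = linprog(c=phi, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * p)
    return res.fun if res.success else None

# Support points (g, phi) = (-200, 476.2) and (+200, 0): the 50/50
# convex combination reproduces g = 0, so UB = 0.5 * 476.2 = 238.1
print(best_upper_bound([476.2, 0.0], [-200.0, 200.0]))   # ~238.1
```

The infeasible return path corresponds to the artificial-variable case in the text: the support points do not surround g = 0 and no upper bound exists.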

III.5. Improvement of the Upper Bound

Let N be the set of the current n+1 points forming the poly-
tope P, H the hyperplane passing through these points with the slope
λ̄, and UB the value of the upper bound. The aim now is to find an
improvement on UB.

If a new point is introduced, it follows from the simplex

method that, since the problem is bounded, a feasible solution and

hence a new upper bound can be obtained by replacing one of the points

in N by the new point. The question now is how can the new point be

found so that the new upper bound (UB') is an improvement on UB,

i.e., UB' < UB.

Once again, the geometry of the problem and the theory of

linear programming can be used advantageously. Assume for the moment

that the solution to the linear programming problem obtained from the

points in N is not degenerate. If a support point is determined for

slope λ̄, this new point must lie on or below H. This result is ob-
vious as a hyperplane with slope λ̄ cannot be a support hyperplane
above H. If the coordinates of the new point are (φ^new, g^new), and
the value of φ on H at g = g^new is φ^H, then

    φ^H ≥ φ^new

φ^H = φ^new implies that no further improvement can be ob-
tained on the upper bound. Thus if φ^H is greater than φ^new, it
follows that UB' will be less than UB if the new point replaces one
of the points in N by the simplex method. This result can be

demonstrated as follows:

Let the points in N be (φ^1, g^1), ..., (φ^(n+1), g^(n+1)). The constraints
of the linear program are

    [g^1, g^2, ..., g^(n+1)] α = 0                   (7)

    α_1 + α_2 + ... + α_(n+1) = 1                    (8)

Combining relations (7) and (8), with ĝ^i denoting g^i augmented by
a final element equal to 1,

    [ĝ^1, ĝ^2, ..., ĝ^(n+1)] α = ê                   (9)

where ê = (0, ..., 0, 1)^T. The new point can be represented as a
linear combination of the points in the basis, that is,

    ĝ^new = [ĝ^1, ..., ĝ^(n+1)] y^new                (10)

The (n+1)st element of ĝ^new gives

    Σ_i y_i^new = 1                                  (11)

Points on H may be expressed by equation (1)

    φ = λ̄^T g + b

For the n+1 points in the basis, it follows from equation (4) that

    φ^i = (g^i)^T λ̄ + b        i = 1, 2, ..., n+1

Premultiplying with y^new (that is, forming Σ_i y_i^new φ^i) gives

    Σ_i y_i^new φ^i = (Σ_i y_i^new g^i)^T λ̄ + b Σ_i y_i^new      (12)

Using equations (10) and (11)

    Σ_i y_i^new φ^i = (g^new)^T λ̄ + b                            (13)

However, the right hand side is the point on H at g^new, and thus

    Σ_i y_i^new φ^i = φ^H                                        (14)

In the simplex method, the criterion for an improvement in the

objective function by replacing a point in the basis by a new one is

    z^new − φ^new = Σ_i y_i^new φ^i − φ^new = φ^H − φ^new > 0   (see Hadley (1963))

Since φ^H − φ^new is greater than zero,

    UB' < UB

If the linear programming solution corresponding to the points

in N is degenerate, that is, the point g = 0 lies on one of the bounda-

ries of P, a difficulty arises. In this case, there is no unique

hyperplane H and consequently there may be some degrees of freedom

available in calculating λ̄. There is then no guarantee that the new

point will bring about an improvement in the upper bound. In such a

case, several new points may have to be evaluated before an upper

bound with a lower value is obtained. Degenerate linear programming

problem solutions occur in Example Problem 3.

III.6. Procedure to Investigate Global Optimality

The basic idea of this approach is that the value for the

global optimum must lie between or at the boundary of the upper and

lower bounds. The interval between the bounds is generally decreased

at every iteration by the improvement on the upper bound and possibly

an improvement on the lower bound. If at some point during this

procedure, an optimum is found to possess a value significantly

greater than the upper bound, an inference can be made that the

optimum is local and the procedure terminated. On the other hand,

if a value for the optimum is very close to the lower bound, it may

be inferred that the optimum is global and the process terminated.

A problem arises when there is a dual gap in the optimiza-

tion problem. It is possible that the global optimum may lie above

the upper bound at some iteration. However, the optimum is not

rejected as a local optimum if it lies within a certain interval

above the upper bound, say 2% of the value of the optimum. Previous

experience indicates that this modification is adequate for the type

of problems considered in this study. This behavior can be observed

in Example Problem 2.

III.7. Algorithm

To investigate a primal optimal solution with an objective

function φ*:

Step 1  Guess n+1 different λ's (λ^j, where j = 1 to n+1). Attempt
        to select these λ^j to obtain points g^j which surround the
        origin g = 0.

Step 2  For each j evaluate the dual h(λ^j), the support point
        g^j and the objective function φ^j. Let p = the total
        number of support points (n+1 in this case). Find LB
        where LB = Max (h(λ^j), j = 1 to p).

Step 3  If φ* ≤ LB + ε (ε is a small positive quantity, say 0.02
        φ*), stop. φ* is the global optimum. Else go to Step 4.

Step 4  Formulate an L.P. problem with all the available support
        points and solve for the optimal objective function (= UB)
        and the corresponding basis vector. If a feasible solu-
        tion does not exist, go to Step 7. If φ* ≥ UB + ε, stop.
        The given solution is a local optimum. If the L.P. solu-
        tion is degenerate, go to Step 6.

Step 5  In the L.P. solution, let the vectors in the basis be
        g^1, g^2, ..., g^(n+1). Set up n simultaneous equations in the
        n unknowns λ^(p+1) as follows

        (g^2 − g^1)^T λ^(p+1) = φ^2 − φ^1
        (g^3 − g^1)^T λ^(p+1) = φ^3 − φ^1
        ...
        (g^(n+1) − g^1)^T λ^(p+1) = φ^(n+1) − φ^1

        Solve for λ^(p+1). Find the dual h(λ^(p+1)), the support
        point g^(p+1), and the objective function φ^(p+1). Set
        p = p + 1. Find LB where

        LB = Max (h(λ^j), j = 1, 2, ..., p)

        Go to Step 3.

Step 6  Select all vectors from the basis which are present
        in the L.P. solution in Step 4 at a nonzero level. Let
        these be points g^1, g^2, ..., g^m. Since only m (< n+1) such
        points exist, the λ^(p+1) to be chosen can come anywhere
        from within an n + 1 − m dimensional subspace. Either
        initiate (if first time through this step with the above
        nonzero basis vectors) or continue a systematic search
        over the subspace to find the next λ's to use. A λ^(p+1)
        from the appropriate subspace is found by having it satisfy

        (g^2 − g^1)^T λ^(p+1) = φ^2 − φ^1
        (g^3 − g^1)^T λ^(p+1) = φ^3 − φ^1                (16)
        ...
        (g^m − g^1)^T λ^(p+1) = φ^m − φ^1

        plus any additional n + 1 − m independent specifications.
        Go to Step 3.

Step 7  Select a λ^(p+1) to find a new support point g^(p+1),
        the dual h(λ^(p+1)), and the objective function φ^(p+1). Set
        p = p + 1. Let LB = Max (h(λ^j), j = 1 to p). Go to
        Step 3. (Note this step is actually the same as Step 6
        but with no constraints of form (16) being written, i.e.,
        one should search systematically over the entire space
        for the next λ's.)
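Step 5 simply recovers the slope of the hyperplane H through the n+1 basis points: each equation is the difference of two relations φ^i = (g^i)^T λ + b, which eliminates the intercept b. A small numpy sketch follows; the function name and the data are hypothetical, chosen so the points lie on a known plane.

```python
import numpy as np

def next_multiplier(g, phi):
    """Solve (g^i - g^1)^T lam = phi^i - phi^1, i = 2..n+1, for the slope
    lam of the hyperplane through the n+1 basis points (Step 5)."""
    g = np.asarray(g, float)        # (n+1) x n: one support point per row
    phi = np.asarray(phi, float)    # the n+1 objective values
    A = g[1:] - g[0]                # differencing eliminates the intercept b
    rhs = phi[1:] - phi[0]
    return np.linalg.solve(A, rhs)

# Hypothetical basis: three points in the plane phi = 2*g_1 - 3*g_2 + 7
g_basis = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
phi_basis = [7.0, 9.0, 4.0]
print(next_multiplier(g_basis, phi_basis))   # recovers the slope (2, -3)
```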

III.8. Examples

Example Problem 1

Figure 11 represents Example Problem 1, and its subsystems
are shown in Figure 16.

    h(λ) = Min_(y1) (cost1 + λ y1) + Min_(x2) (cost2 − λ x2)

where λ is the price associated with the variable T3.

    g^1 = (x2 − y1)

On optimization of the original system there are two solutions:

    Case 1:  T1 = 400,  T2 = 900,  φ = 286.93
    Case 2:  T1 = 600,  T2 = 700,  φ = 189.3

Apply the algorithm to Case 1, φ* = 286.93.

Step 1  λ^1 = −4.14,  λ^2 = 4.14,  p = 2 points.

Step 2  g^1 = −200,  h(λ^1) = −224.6,  φ^1 = 476.2
        g^2 = +200,  h(λ^2) = −882.0,  φ^2 = 0
        LB = −224.6

Step 3  φ* > LB + ε.  (Let ε = 0.02 φ* = 5.74.)

Step 4  UB = 238.1
        φ* > UB + ε

Hence the solution in Case 1 is a local optimum.

Fig. 16. Subsystems for Example Problem 1

Apply the algorithm to Case 2, φ* = 189.3. Steps 1, 2 and 3 are
the same as in Case 1.

Step 4  UB = 238.1
        φ* < UB + ε

Step 5  λ^3 = −1.19
        h(λ^3) = 189.3,  g^3 = 0,  φ^3 = 189.3
        p = 3
        LB = 189.3

Step 3  φ* = 189.3 < LB + ε

Hence the solution in Case 2 is a global optimum.

Example Problem 2

Figure 17 represents this problem and the subsystems are shown
in Figure 18.

    h(λ) = Min_(y11) (Area1 + λ1 y11) + Min_(y12,x22) (Area2 + λ2 y12 − λ1 x22)

         + Min_(x23) (Area3 − λ2 x23)

where λ1 and λ2 are the prices associated with the two interconnecting
variables.

    g^1 = (x22 − y11)  and  g^2 = (x23 − y12)

On optimization,

    T1 = 181.9,  T2 = 295.0,  t1 = 218.1,  t2 = 286.4,
    t3 = 395.5,  and φ = 7049.

Fig. 17. Example Problem 2

Fig. 18. Subsystems for Example Problem 2

Application of the Algorithm: φ* = 7049

Step 1  λ^1 = (−10, 0),  λ^2 = (0, 10),  λ^3 = (−15, −30),  p = 3

Step 2  g^1 = (200, 100),  g^2 = (0, 300),  g^3 = (−94.6, −188)
        h(λ^1) = 500,   φ^1 = 2500
        h(λ^2) = −500,  φ^2 = 2500
        h(λ^3) = 5787,  φ^3 = 12,846
        LB = 5787

Step 3  φ* > LB + ε   (Let ε = 140)

Step 4  φ* < UB + ε

Step 5  λ^4, h(λ^4), g^4 and φ^4 are evaluated.  p = 4

Step 3  φ* > LB + ε

Step 4  UB = 7173.34
        φ* < UB + ε

Step 5  The points in the basis are g^1, g^3 and g^4.
        λ^5 = (−11.04, −24.65),  g^5 = (122.87, −71)
        h(λ^5) = 6640.3,  φ^5 = 3533.67,  p = 5
        LB = 6640.3

Step 3  φ* > LB + ε

Step 4  UB = 6928.94
        φ* < UB + ε

Step 5  The points in the basis are g^3, g^4 and g^5.
        λ^6 = (−13.34, −24.76),  g^6 = (111.79, −71)
        h(λ^6) = 6916.26,  φ^6 = 3668.11,  p = 6
        LB = 6916.26

Step 3  φ* < LB + ε

Hence the solution represents a global optimum.

Example Problem 3

Figure 2 represents this problem and the subsystems are

shown in Figure 19.

h(λ) = Min (Cost1 − λ1 x22 + λ2 y12 + λ3 y22)

     + Min (Cost2 + λ1 y11 − λ2 x24 + λ4 y13)

     + Min (Cost3 + 0.4 F1 − λ3 x25)

     + Min (Cost4 + 0.2 F11 − λ4 x26)

where y11, x22 and price λ1 are associated with F2; y12, x24 and
price λ2 are associated with (F4 x T4); y22, x25 and price λ3 are
associated with T12; and y13, x26 and price λ4 are associated with
T15. The bounds are

    2500 ≤ x22, y12 ≤ 7500

    3000 ≤ y22 ≤ 5000

    2500 ≤ y11, x24 ≤ 7500

    3000 ≤ y13 ≤ 5000

Fig. 19. Subsystems for Example Problem 3

g^1 = (x22 − y11)

g^2 = (x24 − y12)

g^3 = (x25 − y22)

g^4 = (x26 − y13)

On optimization there are two possible solutions:

Case 1:  F2 = 2500,  F3 = 7500,  F11 = 2500,  T4 = T5 = 400,
         T12 = 3000,  T15 = 500,  φ = 4154

Case 2:  F2 = 7350,  F3 = 2650,  F11 = 2500,  T4 = 4230,
         T12 = 500,  T15 = 300,  φ = 3538

Application of the Algorithm to Case 1: φ* = 4154

Step 1  Five λ's (λ^1 to λ^5) are guessed;  p = 5.

Step 2  h(λ^1) = 2350,  φ^1 = 6350
        h(λ^2) = 230,   φ^2 = 1340
        h(λ^3) = 1917,  φ^3 = 3917
        h(λ^4) = 2118,  φ^4 = 4618
        h(λ^5) = 2188,  φ^5 = 5064
        LB = 2350

Step 3  φ* > LB + ε   (Let ε = 80)

Step 4  UB = 3845
        φ* > UB + ε

Hence Case 1 corresponds to a local optimum.

Application of the Algorithm to Case 2: φ* = 3538

Steps 1 to 3 same as in Case 1.

Step 4  UB = 3845
        φ* < UB + ε
        The L.P. solution is degenerate.

Step 6  Nonzero points in the basis of the L.P. solution are g^1 and g^2.
        Let the n+1 points for finding λ^6 be g^1, g^2, g^3, g^4 and g^5.

        λ^6 = (−0.065, 0.000225, −9.81, −6.58)
        g^6 = (0, 0, −200, 200)

        h(λ^6) = 3338,  φ^6 = 4037
        p = 6,  LB = 3338

Step 3  φ* > LB + ε

Step 4  UB = 3845
        φ* < UB + ε
        The L.P. solution is degenerate.

Step 6  Nonzero points in the basis of the L.P. solution are g^1 and g^2.
        Let the n+1 points for finding λ^7 be g^1, g^2, g^4, g^5 and g^6.

        λ^7 = (0.128, 0.000709, −8.68, −7.72)
        g^7 = (−2345, 4690, 0, 0)

        h(λ^7) = 3534,  φ^7 = 4564,  p = 7
        LB = 3534

Step 3  φ* < LB + ε

Hence Case 2 represents the global optimum.

III.9. Discussion

In the first and the third examples, the local optima are

recognized very quickly. The global optimum in each of these

examples is confirmed albeit with a greater number of steps. In the

second example, a dual gap (global optimum = 7049, upper bound = 6928.9)

is present. However, the algorithm proves effective. It may also be

noticed in the second example that an improvement results in the upper

bound at every iteration. This is guaranteed in the absence of de-

generacy. Degenerate solutions occur in the third example and several

points may have to be evaluated before obtaining an improvement in the

upper bound.


In this chapter a problem with the structural parameter method
for process synthesis will be discussed. The problem is almost
obvious once stated, but it has not been emphasized in the literature.
It can seriously affect the expected results. The discussion here does
not imply that the method is valueless; it simply stresses that care
must be taken to ensure that the method is really useful for a given
problem.

A system of interconnected process subsystems can be modeled
using structural parameters, which are defined by the following:

    x_i = Σ_(j=1 to N) a_ij x'_j        i = 1, 2, 3, ..., N

    0 ≤ a_ij ≤ 1,    Σ_(i=1 to N) a_ij = 1                (1)

where x_i and x'_j are, respectively, the input variables of the i-th
subsystem and the output variables of the j-th subsystem, N is the
total number of subsystems in the entire system, and the parameters
a_ij are the structural parameters; that is, each is the fraction of
the output stream of the j-th subsystem which flows into the i-th
subsystem.
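Definition (1) is a linear mixing rule and can be checked numerically. The split-fraction matrix and output values below are hypothetical, chosen only to illustrate the bound and column-sum conditions:

```python
import numpy as np

# a[i][j] = fraction of subsystem j's output routed to subsystem i's input;
# every entry lies in [0, 1] and each column sums to 1 (definition (1)).
a = np.array([[0.7, 0.0, 1.0],
              [0.3, 0.2, 0.0],
              [0.0, 0.8, 0.0]])
x_out = np.array([100.0, 40.0, 60.0])   # hypothetical subsystem outputs

assert np.all((0.0 <= a) & (a <= 1.0))
assert np.allclose(a.sum(axis=0), 1.0)  # sum over i of a_ij = 1

x_in = a @ x_out                        # x_i = sum_j a_ij * x'_j
print(x_in)                             # the mixed subsystem inputs
```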

By means of structural parameters, a system synthesis problem can

be transformed into a non-linear programming problem with continuous

decision variables. This idea has been stated and demonstrated in

the studies by Umeda, Hirai and Ichikawa (1972), Ichikawa and Fan

(1973), Osakada and Fan (1973), Mishra, Fan and Erickson (1973), and

Himmelblau (1975). In order to obtain an optimal structure, redundant

subsystems are usually inserted into a "super" structure in which it

is hoped that the optimal structure is imbedded. For example, Figure

20 illustrates one use of the method for synthesizing a heat recovery

network. The problem data and the stream specifications are shown

in Table 8, and the problem is described in the tradition of problems

like 4SP1 in Masso and Rudd (1969) and Lee et al. (1970). A decision

is to be made on whether or not to use cold stream II to aid in

cooling hot stream V. The problem is formulated with potentially

redundant exchangers 2, 4 and 5 and the structural parameter a26.

a26 is the fraction of cold stream II that goes through exchanger

2. The strategy is, in this formulation, to convert the discrete

decision of whether or not to use cold stream II to aid in cooling

hot stream V into a problem of optimizing over the continuous deci-

sion variable a26. To avoid a cost discontinuity caused by intro-

ducing an exchanger we must, of course, require the cost of an

exchanger to approach zero as its area does.

The notion that we have converted a discrete decision problem

to a continuous one is invalid here. It may be observed that hot

stream V has sufficient heat content to drive cold stream I to its

final temperature. However, the transfer of heat in exchanger 1 is

limited because an inequality constraint in exchanger 2 has to be

satisfied. This constraint is the approach temperature requirement

Fig. 20. The Optimal Values of the Parameters




Table 8. Problem Data and Stream Specifications

Stream                            I        II       III      IV       V        VI        VII
Flow, g/s                         1260.00  504.00   1260.00  Unknown  2520.00  Unknown   Unknown
Inlet Temperature, K              311.11   311.11   422.22   311.11   644.44   644.44    588.89
Outlet Temperature, K             588.89   533.33   477.78   355.56   477.78   644.44    588.89
Boiling Point, K                  755.56   755.56   755.56   373.33   755.56   644.44    588.89
Liquid Heat Capacity, kJ/kg K     4.19     4.19     4.19     4.19     4.19     4.19      4.19
Film Heat Transfer
  Coefficient, W/m2 K             1703.49  1703.49  1703.49  1703.49  1703.49  8517.45   8517.45
Cost                              0.00     0.00     0.00     1.10e-7  0.00     22.05e-7  5.20e-7
Heat of Vaporization, kJ/kg       697.80   697.80   697.80   1786.36  697.80   1628.20   1395.60
Inlet Vapor Fraction              0.00     0.00     0.00     0.00     0.00     1.00      1.00
Outlet Vapor Fraction             0.00     0.00     0.00     0.00     0.00     0.00      0.00

Approach temperature = 2.78 K
Heat exchanger cost, $/yr = R (a A^b),
  where R = annual rate of return = 0.1, a = 350, b = 0.6
Equipment down time = 280 hr/yr
The cost of the network is the cost of the utility streams + the cost
  of the exchangers, in $/yr.
All heat exchangers are assumed to be counter-current.

between the inlet hot stream and the outlet cold stream (T12 ≥

533.33 + D, where D is a minimum allowed approach temperature of, say,

2.78 K). By numerical computation one finds that the overall cost can

be lowered if a26 is increased (see Figure 21). This increase in a26

lowers the flows of cooling utilities IV and VII. Ultimately, when

the flow of cooling utility IV is zero, an optimal structure is obtained

with a cost of $16,945/yr. At this point a26 = 0.688.

Note, however, that when the flow of the cold stream through

exchanger 2 is zero, and the heat exchanger 2 is totally neglected,

a network with a cost of $6864/yr. is obtained as shown in Figure 22.

Thus, a significant discontinuity, which was hoped to have been

eliminated, has reappeared. It should be emphasized that the data

for the problem and its formulation are important only in that they

are plausible and that they help make this point.

The main observation may be summarized as follows. When certain

subsystems are rendered redundant during optimization (by the asso-

ciated flow or a split fraction taking on a value of zero) and if

inequality constraints are associated with these subsystems which

force constrained behavior elsewhere, discontinuities very likely

still exist in the problem. One must still make a discrete decision

about whether or not to introduce that subsystem. This observation

means that, if the structural parameters are to be used for a syn-

thesis problem, and, if inequality constraints are involved, the

problem has to be formulated very carefully, if indeed it can be,

to make it a continuous one.

Fig. 21. The Discontinuity in Optimization


Fig. 22. The Optimal Network When Ignoring the Approach
Temperature Requirement at Null Exchanger 2 of
Figure 20


The optimization strategy chosen for EROS proves to be efficient.

It is the author's belief that the number of steps required for con-

vergence is significantly lowered by rederiving a solution procedure

every time a constraint violation occurs, and by the use of 'restriction'

(Geoffrion (1970)). A very useful feature in EROS is its ability to

find a feasible starting point if none is provided by the user.

The solution yielded by EROS can account for portions of the network

that already exist and for irregularities likely to occur in the

process streams. Considering the nature of the problem treated, the

cost of a typical run of EROS seems small. The use of the program may

also be extended to carrying out synthesis via structural parameters, but

the problem must be formulated very carefully as mentioned in Chapter

IV. In the results shown in Table 7, a feasible starting point is

obtained in very few iterations. A feasible starting point may also

be provided by the user as in example 10 of Table 7. As indicated

in Chapter II, the time required for the rewriting of solution

procedures after finding a feasible point is relatively small as

compared to the function evaluations during optimization. Hence,

the equation solving approach followed is fully justified.

The practical applicability of a program such as EROS can only

be considered complete if a capability of handling power generation

and power intensive units is incorporated. Hence there is a need for

adding unit models such as turbines and compressors. With these units,

pressure changes will be important and a pressure variable must be

associated with each segment. These changes might also call for a

better modeling of a heat-exchanger unit in which the heat-transfer

coefficients may no longer be evaluated by simplistic relations.

Currently the cooling curves for streams are approximated by three

linear regions. A more accurate description could be provided for

streams by specifying enthalpy versus temperature information for the

range under consideration. A linear interpolation may be assumed

between two adjacent points provided in this specification.
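The suggested enthalpy-versus-temperature description amounts to ordinary piecewise-linear interpolation over a table of points; numpy's interp performs exactly that interpolation between adjacent points. The enthalpy table below is hypothetical, for illustration only.

```python
import numpy as np

# Hypothetical enthalpy-vs-temperature table for one stream (K, kJ/kg);
# a steeper segment could represent latent heat over a phase change.
T_pts = np.array([300.0, 400.0, 500.0, 600.0])
H_pts = np.array([0.0, 420.0, 1120.0, 1540.0])

def enthalpy(T):
    """Linear interpolation between adjacent table points."""
    return np.interp(T, T_pts, H_pts)

print(enthalpy(450.0))                     # midway on the 400-500 K segment
print(enthalpy(550.0) - enthalpy(450.0))   # duty per kg between 450 K and 550 K
```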

In Chapter III an algorithm is presented which can effectively

identify if an optimum point is a global optimum or a local one. It

is practical as given only when applied to systems of interconnected

subsystems, fortunately, a problem form which is common in engineer-

ing design. The algorithm develops a lower (dual) bound using the

ideas of Brosilow and Lasdon (1965) wherein the total system is de-

composed into subsystems, each of which must be globally optimized.

If the subsystems are small enough, this requirement is readily satis-

fied. Frequently the optimal lower (dual) bound is equal to the

optimal upper (primal) bound, but it also often lies strictly below

because of nonconvexities in the problem. Previous experience on

actual engineering problems indicates that this difference, caused by

a duality gap, is small, i.e., only a small percent of the system cost.

The algorithm also develops an upper bound to this lower bound.

Subject to the tolerances required because of the duality gap, an

optimum more than a few percent above this upper bound is declared

to be a local optimum, and one at or just above the lower or dual

bound is considered to be a global optimum. If no discrimination is

possible at the current step, the algorithm provides a means to

guarantee improvement of the upper bound for nondegenerate problems,

the lower bound usually being improved at the same time. The three

example problems demonstrate that the algorithm is surprisingly
effective.

Relating to the algorithm presented in Chapter III, a study is

recommended on how to discover the global optimum once the given

primal optimum has been found to be local. It would be desirable to

predict a value of λ yielding the global optimum by perceiving a trend

in the values of λ from the information gathered.

