Adaptive optimization of continuous microbial growth processes


Material Information

Title: Adaptive optimization of continuous microbial growth processes
Author: Harmon, Jeffrey Lynn, 1962-
Physical Description: vii, 131 leaves : ill. ; 29 cm.
Genre: bibliography, theses, non-fiction (marcgt)
Notes: Thesis (Ph. D.)--University of Florida, 1990. Includes bibliographical references (leaves 127-130).
Statement of Responsibility: by Jeffrey Lynn Harmon.

Record Information

Source Institution: University of Florida
Rights Management: All applicable rights reserved by the source institution and holding location.
Resource Identifier: aleph - 001613867; notis - AHN8285; oclc - 23535842
Full Text









The voyage has been long and arduous, with many a moment when the

water broke hard against the bow and the wind shredded the jib and main

like drenched parchment. As I hear the call from the crow's nest of

"land ho!," I realize those days of peril have faded into the recesses

of my memory. Soon the ship will be ashore, and I am resigned to these

thoughts, to remember and thank those who have seen me through these


Foremost, I must thank the captains of the ship, Dr. Gerasimos

Lyberatos, Dr. Spyros A. Svoronos, and Dr. David P. Chynoweth. Their

guidance has been immeasurable and I shall always be grateful for the

knowledge they have imparted. Thanks are also due to the other officers

of the ship, Dr. R. Narayanan, Dr. J. Earle, and Dr. B. Koopman, for

their time and patience.

To my mother and father, Lillian and Samuel Harmon, no words can

describe the gratitude I feel. This journey would not have been pos-

sible without their love and understanding. Thanks to my brothers,

Kevin and Tim, for being friends as well as family. Special thanks to

the always crazy McCabe family.

Apologies and thanks go to Debbie Sandoval for putting up with my

last minute changes. Also, thanks go to the other members of the of-

fice, Shirley, Peggy, and Nancy, whose mermaid-like presence has made

this experience bearable.

Finally, a hearty handshake goes to all my fellow mates who worked

the deck and bunked in the forecastle.

I also wish to thank Du Pont, the National Science Foundation, and

Gas Research Institute for their financial support.



ACKNOWLEDGEMENTS ............................................... ii

ABSTRACT ....................................................... vi

1 INTRODUCTION ................................................ 1

2 GENERAL ALGORITHM DEVELOPMENT ............................... 4

2.1 Adaptive Optimization ................................... 4
2.2 Parameter Estimation .................................... 8

3 NUMERICAL SIMULATIONS ...................................... 11

3.1 Introduction ........................................... 11
3.2 Optimization with Respect to Dilution Rate ............. 12
3.3 Optimization with Respect to Temperature and pH ........ 24

4 ANAEROBIC DIGESTION ........................................ 44

4.1 Introduction ........................................... 44
4.2 Temperature Optimization ............................... 47

5 ALTERNATE SAMPLING BASIS ................................... 50

5.1 General Description .................................... 50
5.2 Example ................................................ 51

6 CONSTANT YIELD OPERATION ................................... 67

7 EXPERIMENTAL ............................................... 75

7.1 Introduction ........................................... 75
7.2 Algorithm .............................................. 77
7.3 Thermophilic Runs ...................................... 82
7.4 Mesophilic Run ......................................... 94
7.5 Discussion ............................................. 99

8 SUMMARY ................................................... 105

B THE PARAMETER ESTIMATOR ................................... 110

C MATERIALS AND METHODS ..................................... 113

C.1 Experimental Set-up ................................... 113
C.2 Feed Stock and Preparation ............................ 123
C.3 Analysis .............................................. 124

REFERENCES .................................................. 127

BIOGRAPHICAL SKETCH ......................................... 131

Abstract of Dissertation Presented to the Graduate School
of the University of Florida in Partial Fulfillment of the
Requirements for the Degree of Doctor of Philosophy



ADAPTIVE OPTIMIZATION OF CONTINUOUS MICROBIAL GROWTH PROCESSES

By

Jeffrey Lynn Harmon

August 1990

Chairman: S. Svoronos
CoChairman: G. Lyberatos
CoChairman: D. Chynoweth
Major Department: Chemical Engineering

A general method of optimizing continuous microbial growth processes

is presented. In this algorithm, dynamic information is used to

identify parameters of a dynamic discrete single-input/single-output

model equation. The steady-state version of this model is used to

predict the control input that will maximize a performance index. A

major advantage of this method is that no a priori information is required.

Numerical simulations on a Monod-type model are shown. In these

simulations, biomass productivity was maximized with respect to dilution

rate, temperature, and pH. The algorithm successfully found the optimum

inputs and maintained the optimal steady states.

An anaerobic digester was used to investigate the algorithm on an

actual process. The digester used was a 6 liter reactor continuously

fed with glucose as the limiting nutrient. Digester temperature was the

control input and methane gas production was the performance index.

Two innovations were introduced to improve optimization. The first

involved the use of an alternate sampling basis (as opposed to sampling

based on time) to speed convergence. In this method, methane production

rate was measured when a predetermined amount of methane was collected

(instead of time elapsed). The second innovation used a constant reac-

tor yield modus operandi (i.e., the amount of substrate fed per amount

of methane produced is kept constant) to reduce the chance of process

failure by digester imbalance. These changes could also be helpful for

other microbial growth processes.

Three experimental runs were made on the digester, two in the

thermophilic range and one in the mesophilic range. The thermophilic

runs were successful at increasing gas production but could not maintain

optimal production because of the sensitivity of the methanogenic population

at the optimum temperature. The mesophilic run optimized at



In the past score of years, the field of biotechnology has gained

considerable attention by providing many novel processes as well as

alternatives to existing chemical processes. Despite unprecedented

discoveries, the impact of this field in industrial applications has

been severely limited due to the complex nature of using living organisms

as lilliputian chemical reactors. Standard procedures used in the

chemical industry simply will not work. The development of new and

innovative schemes to optimize bioreactors is a necessity if the rich

future of biotechnology is to be realized.

In the past, substantial research effort has been expended toward

the optimization of batch and semi-batch bioprocesses [1-7]. However,

very little work has been conducted for the determination of the steady-

state optimum operating conditions of continuous bioreactors. In almost

all these investigations the optimization was based on an off-line

identified mathematical model or on ad hoc experimental procedures. The

major problem of all methods based on off-line development is that the

process conditions vary continually due to population shifts, variable

feed concentrations, wall growth and other disturbances. Optimization

based on these models is questionable in most cases.

Consequently, present research in biotechnology has been directed

towards the on-line determination of the optimum steady state of contin-

uous bioreactors [8-12]. Traditional solutions to the on-line optimiza-

tion problem involve the step change of a particular manipulated


variable and the evaluation of the process performance measure at the

subsequent steady state. If improvement is observed, the manipulated

variable is changed in the same direction. This procedure is repeated

until no further improvement in the performance measure can be ob-

tained. The obvious drawback of such methods is the requirement that a

steady state be reached at each step, which translates into a long time

until the optimum is finally achieved. In addition, the actual process

optimum may vary at a faster rate than it can be located.

To enhance the speed of on-line optimization algorithms for contin-

uous microbial growth processes, dynamic information can be used to

estimate parameters of a dynamic model. The optimal constant control

value is determined from the steady-state version of the identified

dynamic model. Bamberger and Isermann [13] introduced this adaptive

optimization approach in 1978. In their work, model parameters were

identified on-line via a recursive least squares estimator and a Newton-

Raphson routine was employed to search for the optimum. This approach

has been used on many processes [9,11,12]. It is particularly promising

for biochemical processes since they are characterized by slow dynamics.

It was the goal of this work to develop and demonstrate a new

method for the adaptive optimization of a general class of continuous

bioreactors. In this procedure, a model with parameters identified from

on-line data is used to explicitly calculate the optimal input. Novel

ideas are presented in the areas of parameter estimation and algorithm

implementation. In addition, a constant yield operating strategy is

proposed to improve optimization schemes.

Chapter 2 gives the basic background and approach for adaptive

optimization of single-input/single-output systems, including the

problem of parameter estimation. The specific algorithm is presented in

the following chapter. Numerical simulations are presented to demon-

strate the ability of the algorithm to successfully locate the optimum

temperature, pH, and dilution rate for biomass productivity in continu-

ous fermentors.

In the following chapters, the algorithm is applied to an actual

process, anaerobic digestion, where gas production is optimized with

respect to temperature. The final chapter presents the conclusions of

this work and offers suggestions for future work.


2.1. Adaptive Optimization

The algorithm presented in this work uses data acquired on-line to

estimate the steady-state value of a control variable that maximizes a

performance index. This performance index can generally be described as

a function of the measurement output, y, and the control input, u:

    J = q(y(u), u)        (2-1)

Here, y is indicated as a function of u. Optimum steady-state performance

is determined when a constant input, u_s*, satisfies the following conditions:

    dJ/du_s |_(u_s = u_s*) = dq(y(u_s), u_s)/du_s |_(u_s = u_s*) = 0        (2-2)

    d²J/du_s² |_(u_s = u_s*) < 0        (2-3)

where the subscript s denotes steady state. It is important to note that these

conditions do not guarantee a global optimum. Therefore, if the system

exhibits multiple extrema, the input u_s* will specify only a local

optimum, which may or may not be global.

In order to evaluate the above conditions, it is necessary to have

a system model which relates the output to the input. This functionality

is usually written in continuous form as

    dy/dt = g(y, u; θ)        (2-4)

where θ is a vector of model parameters to be identified on line.

However, this form is not convenient for on-line data retrieval,

where measurements are taken at particular times. To facilitate com-

puter implementation, an equally valid discrete-time nonlinear model can

be employed.

    y(i+1) = f(y(i), y(i-1), ..., y(i-ℓ), u(i), u(i-1), ..., u(i-k); θ)        (2-5)


where y(n) and u(n) are the output and the input values, respectively,

at the nth sampling point.

Model structure is very important and should be chosen carefully.

A model which closely approximates the actual process system can be

employed which would most likely entail a nonlinear structure with

respect to parameters. This would require complex search methods to

estimate θ and the predicted optimum. In addition, this model would

require a priori knowledge of the process, particularly at the optimum.

An alternate model structure can be chosen to simplify parameter

identification and optimum prediction. The most convenient model would

be structured in such a way as to estimate a unique optimum input, i.e.,

u should have only one value. The simplest model with this property

would infer the steady-state performance index, Js, has a parabolic

shape and, therefore is quadratic with respect to us:

J = a + Yu + pu (2-6)
s s 3

where a, Y, and p are functions of the identified parameters e. The

evaluation of equation (2-2) yields:

u = Y/2p (2-7)

with the following sufficient condition from equation (2-3):

p < 0 (2-8)
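As an illustration, the optimum of equations (2-7) and (2-8) can be computed directly from the fitted quadratic coefficients. The sketch below is illustrative only; the coefficient values are hypothetical and not identified from any process.

```python
# Illustrative sketch (not from the thesis): locating the optimum of the
# quadratic steady-state performance index J_s = alpha + gamma*u + rho*u**2
# via equations (2-7) and (2-8). Coefficient values are hypothetical.

def optimal_input(alpha, gamma, rho):
    """Return u_s* = -gamma/(2*rho), provided the sufficient condition rho < 0 holds."""
    if rho >= 0:
        # Equation (2-8) violated: the model predicts a minimum, not a maximum
        raise ValueError("rho >= 0: predicted extremum is not a maximum")
    return -gamma / (2.0 * rho)

u_star = optimal_input(alpha=0.0, gamma=1.0, rho=-2.5)
print(u_star)  # 0.2
```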

A possible drawback to this line of reasoning is that a model of

the type described above would in general be only locally valid. How-

ever, this does not present a major problem since the goal of the adap-

tive optimization algorithm is not necessarily to obtain an overall

model but a model valid at the locality of the optimum. Svoronos and

Lyberatos [14] proved the predicted optimum and the actual process op-

timum would coincide if the model structure was locally valid and suffi-

cient data were obtained for convergence to correct parameter esti-

mates. Away from the optimum, the algorithm uses the locally valid

model to take a cautious step toward the predicted optimum. At each new

location new data can be acquired to update the parameters.

For example, in a typical case where the measurement output, y, is

the performance index, a Hammerstein type model [15] can be employed to

meet the above conditions:

    y(i) = Σ_{j=1}^{N} a_j y(i-j) + Σ_{j=1}^{M} b_j u(i-j) + Σ_{j=1}^{M} Σ_{k=j}^{M} c_jk u(i-j) u(i-k) + d        (2-9)

where a_j, b_j, c_jk and d are the parameters, and N and M are the dynamic

model orders. Before such a model can be implemented, the model order

and sampling period must be appropriately chosen. Most bioprocesses are

overdamped systems and, therefore, can be approximated by first order

systems (N=1,M=1). Higher order models may approximate the system

closer, but they will also increase the amount of data needed to esti-

mate parameters.
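For the first-order case (N=1, M=1), the model above reduces to y(i) = a1·y(i-1) + b1·u(i-1) + c11·u²(i-1) + d, which is linear in its parameters. A minimal sketch, with hypothetical parameter values, shows how the one-step prediction can be written as a dot product so that a linear estimator applies:

```python
# Sketch (an assumption for illustration, not the thesis code): one-step
# prediction with the first-order (N=1, M=1) Hammerstein model of eq. (2-9),
# written as y(i) = h_i^T * theta so a linear estimator can identify theta.

def regressor(y_prev, u_prev):
    # h_i for N=1, M=1: past output, past input, its square, and a bias term
    return [y_prev, u_prev, u_prev**2, 1.0]

def predict(theta, y_prev, u_prev):
    # theta = [a1, b1, c11, d]
    return sum(t * h for t, h in zip(theta, regressor(y_prev, u_prev)))

theta = [0.8, 2.0, -1.5, 0.1]   # hypothetical parameter values
print(predict(theta, y_prev=1.0, u_prev=0.5))
```

Note that although the model is nonlinear in the input u, it stays linear in the parameters, which is what makes recursive least squares applicable in the next section.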

The choice of sampling period, T, will depend upon the dominant

time constant of the process. If the value of T is too large, the

measurement output at each interval will be close enough to the steady-

state value to prevent identification of a dynamic model. The T chosen

too small will not allow the measurement output to change significantly

from the previous measurement and could lead to stability problems in

parameter estimation. A rough estimate of T is usually taken as ten to

twenty percent of the dominant time constant [16]. However, since processes

are nonlinear, the dominant time constant will change as the

process moves from one steady state to another. In Chapter 5, an alternate

sampling basis method is described as a way to account for such changes.


2.2. Parameter Estimation

Ever since Gauss and Legendre argued over who first developed the

method of least squares, a veritable plethora of parameter estimators

has been proposed. However, the method they presented in the early

1800's still remains as the simplest accurate procedure for experimental

systems with unknown statistics. In 1960, Kalman [17] developed a

recursive form of the least-squares method and supplied a practical

procedure for digital computer implementation. There are many excellent

texts that provide the basic development for recursive least squares as

well as necessary statistics to understand the problem [18,19].

The object of the least squares is to minimize the square of the

model error. The error for measurement i is defined as

    e_i = y(i) - h_i^T θ        (2-10)

where, referring to the model in equation (2-9) as an example, θ is the

vector of parameters

    θ = [a_1, a_2, ..., a_N, b_1, b_2, ..., b_M, c_11, c_12, c_13, ..., c_MM, d]^T

and h_i is the vector of previous inputs and outputs

    h_i = [y(i-1), ..., y(i-N), u(i-1), ..., u(i-M), u²(i-1), u(i-1)u(i-2), u(i-1)u(i-3), ..., u²(i-M), 1]^T.

The sum of the squares is differentiated, set equal to zero, and solved

for 6. The recursive form is derived by considering the addition of new

data [20]. In this form, new parameter estimates can be calculated with

minimal computations and less storage space compared to batch least

squares. A problem arises, however, with recursive least squares (RLS)

when large amounts of data are acquired. The updating becomes insensi-

tive to new data so that the new parameter estimates are largely unaf-

fected by large errors in the model equation. To overcome this problem,

exponential weighting can be used. This can be expressed in recursive

form as:

    S_N(θ) = λ_{N-1} S_{N-1}(θ) + e_N²        (2-11)

    0 < λ_i ≤ 1   for i = 0, 1, ..., N-1

where S_N is the objective function and λ_i is called the forgetting factor.

The updating equations are obtained by differentiating S_N with

respect to θ and use of the matrix inversion lemma [18]:

    θ̂_i = θ̂_{i-1} + P_{i-1} h_i e_i / (1 + h_i^T P_{i-1} h_i)        (2-12)

    P_i = (1/λ_i) [P_{i-1} - P_{i-1} h_i h_i^T P_{i-1} / (1 + h_i^T P_{i-1} h_i)]        (2-13)

    e_i = y(i) - h_i^T θ̂_{i-1}        (2-14)

In this modification, the forgetting factor is used to enlarge the

covariance, P, thus preventing the updating of the parameters from

becoming insensitive.

The common choice of λ_i is a constant. This choice, however, has

one major disadvantage. As parameters converge and no new data are

acquired, the forgetting factor tends to "blow up" the covariance matrix

[21]. To correct this behavior, Fortescue et al. [22] proposed a vari-

able forgetting factor based on asymptotic memory length.

    λ_i = 1 - e_i² / [Σ_o (1 + h_i^T P_{i-1} h_i)]        (2-15)

    λ_i = λ_min   if λ_i < λ_min

Here, Σ_o and λ_min are constants dependent upon the particular system

identified. This popular method has come to be known as RLSVFF (recursive

least squares with variable forgetting factor). Convergence proofs

and other characteristics are discussed by Cordero and Mayne [23].
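The updating equations (2-12) through (2-15) can be sketched in a few lines. The following is an assumed minimal NumPy implementation for illustration, not the UDU-factorized estimator used later (Appendix B); Σ_o, λ_min, and the data are hypothetical.

```python
# Sketch of recursive least squares with a variable forgetting factor,
# following equations (2-12)-(2-15). Tuning values are hypothetical.
import numpy as np

def rlsvff_step(theta, P, h, y, sigma0=1.0, lam_min=0.5):
    h = h.reshape(-1, 1)
    e = (y - h.T @ theta).item()                  # a priori error, eq. (2-14)
    denom = (1.0 + h.T @ P @ h).item()
    theta = theta + (P @ h) * e / denom           # parameter update, eq. (2-12)
    lam = 1.0 - e**2 / (sigma0 * denom)           # variable forgetting, eq. (2-15)
    lam = max(lam, lam_min)
    P = (P - (P @ h @ h.T @ P) / denom) / lam     # covariance update, eq. (2-13)
    return theta, P, lam

# Hypothetical use: identify y = 2*u + 1 from noise-free data
theta = np.zeros((2, 1))
P = 1e3 * np.eye(2)
for u in [0.1, 0.5, 0.9, 0.3, 0.7]:
    theta, P, lam = rlsvff_step(theta, P, np.array([u, 1.0]), 2.0 * u + 1.0)
print(theta.ravel())  # approaches [2., 1.]
```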

Recursive least squares with variable forgetting factor is used as

the parameter estimator in the work presented in the following sections

with two innovations. The first, described in Chapter 3, involves the

inflation of the covariance matrix for sampling periods with large

errors. In Chapter 5, a method is presented in which the forgetting

factor is applied to a previous covariance matrix.


3.1. Introduction

Many authors have proposed algorithms for the on-line adaptive

optimization of continuous bioreactors [8,10,12]. Whyatt and Petersen

[11] modified the previously mentioned approach of Bamberger and Isermann

[13] to include optimal input trajectories using the predicted

optimum. Simulations on a continuous fermentor model exhibited success

in reducing the time for the optimum steady state to be reached.

A general method of optimization was developed by Rolf and Lim

[8,9] using a gradient scheme to find the optimum. Their method was

applied to maximize the biomass productivity of a continuous culture of

baker's yeast. Experimental results were obtained by Semones and Lim

[24] on an actual fermenter using temperature and dilution rate as the

control inputs. Chang, Pyun, and Lim [25] improved the parameter esti-

mation aspect of this algorithm by introducing a bilevel forgetting

factor. Successful experimental and simulation studies were provided by

Chang and Lim [26].

All the methods listed above are classic examples of linear adap-

tive optimization, i.e., a linear estimator was used to identify param-

eters of a linear model equation. Golden, Pangrle, and Ydstie [12],

however, elected to address the problem in a structured format. To include

the basic static nonlinearities associated with continuous bioreactors,

they proposed the use of a nonlinear least squares estimator to

adapt a structured nonlinear model to match actual operating conditions.

The purpose of this chapter is to introduce a novel adaptive opti-

mization scheme, especially suited for continuous microbial growth

processes. The algorithm implementation is easy, and the use of gradi-

ent schemes is not required. Furthermore, no previous modeling informa-

tion is required. The next two sections describe the algorithm in

detail and provide numerical simulations for a chemostat model. Biomass

productivity is optimized with respect to dilution rate, temperature,

and pH.

3.2. Optimization with Respect to Dilution Rate

Since the major goal of many fermentation processes is the produc-

tion of biomass, the maximization of total biomass productivity is a

major design objective. A suitable performance measure should include

total biomass productivity in grams of cellular material per unit

time. For a constant volume fermentation the product Dx where D is the

dilution rate and x is the biomass concentration is clearly such a

productivity measure. Furthermore, since the major running cost is that

of the utilized substrate, a reasonable performance index is

    J = D x - q D s_o        (3-1)

where s_o is the feed substrate concentration and q is a factor that

depends on the relative cost of the substrate to that of the product,

which in this case is biomass.

All models of continuous microbial growth processes that have

appeared in the literature predict that the above performance index J(D)

is maximized at some unique value of the dilution rate that lies between

zero and μ_m, the maximum specific growth rate at the prevailing pH and

temperature. The latter two operating variables for the purpose of this

work have been assumed to be set at their optimum which is considered

here relatively insensitive to process changes. The existence of an

optimum dilution rate is a fact that has also been repeatedly verified

experimentally [27].

The proposed optimization algorithm determines on-line the optimal

dilution rate. In addition, it ensures that the fermentation process is

operated at the optimal dilution rate at all times thereafter, despite

changes in the fermentation environment.

In the next section the optimization algorithm is described.

Numerical simulations for testing the algorithm follow, employing con-

tinuous chemostat models as the "true processes."

The Adaptive Optimization Algorithm

Since fermentation conditions are everchanging, the optimization

algorithm will not be based on a fixed model; instead the parameters of

the model will be continually identified on line. But before parameters

can be identified, a model structure must be assumed. The question of

what structure should be assumed is an important one.

Fermentation processes are nonlinear. Furthermore, the type of

nonlinearities involved vary depending on the microorganisms present and

the fermentor conditions, and as a result are not generally known. It

is usual to assume an unstructured model (e.g. Monod type) whenever a

low dimensional model is needed, but these models, often adequate for

steady-state data fitting, generally fail to match the dynamic behavior

of the system [28]. Thus it would be dangerous to assume a particular

nonlinear structure and base a general adaptive optimization algorithm

on it.

Nonlinear systems can also be viewed as linear systems with time-

varying parameters. If the system dynamics are slow, these parameters

vary slowly in time and as a result can be identified through recursive

parameter estimators which forget past data [22,29]. Fermentation

processes have notoriously slow dynamics, and for this reason a meaning-

ful linear dynamic model can be continually identified on line, regard-

less of what the "true" nonlinear model of the process is. Thus the

optimization algorithm developed here is based on a linear model.

In particular, since the algorithm is to be implemented using a

digital computer, the system is modeled via the linear discrete-time

input-output form

    x(t+1) = a_0 x(t) + a_1 x(t-1) + ··· + a_n x(t-n)

           + b_0 D(t) + b_1 D(t-1) + ··· + b_m D(t-m)        (3-2)

           + c

Here x(τ) denotes biomass concentration at time τ (measured in number of

sampling intervals) and D(τ) denotes dilution rate at time τ.

The above dynamic model can be used to obtain an estimate of the

optimal steady-state value of the dilution rate. In particular, neglecting

the time-dependence of the parameters, model (3-2) gives the

following pseudo steady-state equation:

    x_s = [Σ_{i=0}^{m} b_i / (1 - Σ_{i=0}^{n} a_i)] D_s + c / (1 - Σ_{i=0}^{n} a_i)        (3-3)

where the subscript s denotes pseudo steady state. Using (3-3) the

performance measure (3-1) becomes a quadratic in D_s and can be solved

explicitly for the maximizing dilution rate D* to yield

    D*(θ) = [q s_o (1 - Σ_{i=0}^{n} a_i) - c] / (2 Σ_{i=0}^{m} b_i)        (3-4)

where θ is a vector whose components are the parameters of model (3-2) (see

Appendix B). It should be noted that only equation (2-2) is used here

to designate the optimum. The optimality condition of equation (2-3) is

not included because it was not needed in the subsequent numerical

simulations. This was mainly due to the lack of a zero order term in

the estimated steady-state performance index (i.e., Js = 0 when D =

0). Any data retrieved with J > 0 and D > 0 would tend to cause the

estimator to predict a properly shaped parabola. In addition, for this

particular Monod type system, the shape of xs plotted versus D is such

that good parameter estimates would predict a performance index which

satisfies equation (2-3). In view of these facts, the violation of

equation (2-3) is highly unlikely.
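The explicit optimum of equations (3-3) and (3-4) amounts to one arithmetic expression in the identified parameters. A hedged sketch, with hypothetical parameter values and the clipping of D* to [0, 1] that the algorithm imposes:

```python
# Sketch of equation (3-4): predicted optimum dilution rate from the
# identified parameters of model (3-2). Parameter values are hypothetical.

def optimal_dilution(a, b, c, q, s0):
    """D*(theta) = [q*s0*(1 - sum(a)) - c] / (2*sum(b)), clipped to [0, 1]."""
    d_star = (q * s0 * (1.0 - sum(a)) - c) / (2.0 * sum(b))
    return min(max(d_star, 0.0), 1.0)

# Hypothetical first-order estimates (n = m = 0):
print(optimal_dilution(a=[0.6], b=[-2.0], c=2.0, q=0.1, s0=10.0))  # 0.4
```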

However, in the general case this condition of optimality should be

included to prevent the prediction of a minimum instead of the expected

maximum. This is especially true under the following three circum-

stances. First, inflection points in the process curve (see Figure 3-5)

can occur. Second, the inclusion of a zero order term would make the

predicted performance index "free floating," i.e., it would not be tied

down to a particular point (such as J = 0 when D = 0). Lastly,

measurement noise in an actual system could temporarily cause poor

parameter estimates resulting in poor prediction.

Since the parameter vector θ is not known, an estimate θ̂(t) is

obtained on line and is used in its place. Furthermore, since for any

fermentation process the optimum dilution rate cannot exceed 1 hr⁻¹, we

limit D*(θ̂(t)) between 0 and 1. The basic idea of the algorithm is then

to change at every sampling instant the dilution rate D(t) towards the

calculated D*(θ̂(t)). If the process were truly linear (i.e. the

parameters of model (3-2) were constant), this algorithm would clearly lead

to the true optimum dilution rate. However, since the true process is

nonlinear, the success of the above scheme is not evident. In a pre-

vious work [30] a theorem was given showing this to be indeed the case,

no matter what the true nonlinear model is, provided that the parameters

of the local linear model converge to the correct values.

If the algorithm sets the dilution rate to the predicted optimum at

each interval (D(t) = D*(θ̂(t))), large differences between D(t) and D(t-1)

may appear. The true parameters would as a result vary substantially

with time leading to a possible failure of the estimator and hence of

the algorithm. Furthermore, since the original parameter estimates will

generally be poor, taking a full step towards the calculated optimum

(D(t) = D*(θ̂(t))) could lead to very poor performance and possibly

washout. These possibilities can be avoided if only a partial step

towards D*(θ̂(t)) is taken, i.e.

    D(t) = h D(t-1) + (1-h) D*(θ̂(t));   0 ≤ h ≤ 1        (3-5)


This equation may be interpreted as first order filtering of

D*(θ̂(t)) with filter parameter h. The simplest choice for h is a

constant. However, in order to ensure safe operation when the parameter

estimates are poor (as will typically be the case at the beginning), a

conservative choice of h (i.e. close to 1) would have to be made, and

this would result in a very slow approach to the optimum. As the param-

eter estimates improve with time, a smaller h would be safe and would

considerably speed up the convergence of the algorithm.

This advocates the use of a variable h, that depends on the quality

of the parameter estimates. In a study by Harmon et al. [30] such a

variable h was used but was designed in a rather ad hoc manner.

In this work a more direct and meaningful means of varying h is

introduced. Whenever the a posteriori estimation error

    v(t) = x(t) - Σ_{i=0}^{n} â_i(t) x(t-i-1) - Σ_{i=0}^{m} b̂_i(t) D(t-i-1) - ĉ(t)        (3-6)

is large, the parameter estimates are poor and a large h is appropriate.

This suggests that h could be taken to be a monotonically increasing

function of v(t). A reasonable choice is

    h(t) = |v̄(t)| / (a + |v̄(t)|)        (3-7)

where, in order to protect against a low v due to cancellation of errors,

we filter it whenever it decreases, i.e. we use:

    v̄(t) = v(t)                       if |v(t)| ≥ |v̄(t-1)|        (3-8)

    v̄(t) = (1-φ) v(t) + φ v̄(t-1)      if |v(t)| < |v̄(t-1)|

Expression (3-7) contains a saturation effect similar to that of the

Michaelis-Menten rate law with a serving the role of a saturation

parameter. The lower the value of a the more conservative h becomes.

Equation (3-7) is also justified by the arguments given in Appendix A,

which in addition provide a method for tuning the parameter a on line.
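A minimal sketch of the step-size computation of equations (3-7) and (3-8); the filter constant φ and saturation parameter a below are hypothetical tuning values, not the ones used in the thesis runs.

```python
# Sketch of the variable step size, equations (3-7)-(3-8).
# phi and a are hypothetical tuning constants.

def filtered_error(v, v_bar_prev, phi=0.5):
    # Filter v only when its magnitude decreases, per equation (3-8)
    if abs(v) >= abs(v_bar_prev):
        return v
    return (1.0 - phi) * v + phi * v_bar_prev

def step_size(v_bar, a=0.05):
    # Saturation form of equation (3-7): h approaches 1 for large errors,
    # so the algorithm stays cautious while the estimates are poor
    return abs(v_bar) / (a + abs(v_bar))

v_bar = filtered_error(v=0.02, v_bar_prev=0.10)
print(step_size(v_bar))
```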

As pointed out earlier, the success of the proposed algorithm

depends largely on the ability of the estimator to converge to the

correct parameter values. Since the biochemical process is nonlinear

and the model is linear, this requires that the estimates are based on

local data only (for which the nonlinearities are negligible). An

estimation algorithm with forgetting discards the influence of past

information and thus the estimates are mainly based on recent (and hence

local for a slowly varying biochemical system) data. However, if con-

stant forgetting is employed, the estimation algorithm runs into prob-

lems (e.g. covariance blowup) whenever it stops collecting sufficient

dynamic information (i.e. in the neighborhood of a steady state). The

recursive least squares estimator with variable forgetting factor

(RLSVFF) of Ydstie and his coworkers [29,30] solves this problem by

ceasing to forget whenever the system goes to a steady state, a desired

effect since then all data collected are local. An alternative tech-

nique is to periodically reset the covariance matrix. It was found that

best results are obtained if a UDU factorization [31] of an estimator

that is essentially a combination of the two techniques is used (see

Appendix B).

A problem may arise if the step size h(t) stays close to 1 for long

periods of time. The reason is that D(t) would be approximately con-

stant and as a result parameter identifiability problems could arise.

To avoid this, random excitation can be added to the calculated D(t).

This should be considerable when h(t) is high but very low when the

estimator has converged (in which case h(t)=0); otherwise it would then

prevent the process from staying very close to the optimum steady

state. These arguments suggest the use of a variable excitation level

that is proportional to the step size h(t) of equation (3-7). In parti-

cular we set the excitation at

    V(t) = (0.96 h(t) + 0.04) r(t)        (3-9)

    r(t) = DTOL     if w(t) > DTOL

    r(t) = w(t)     if w(t) ∈ [-DTOL, DTOL]

    r(t) = -DTOL    if w(t) < -DTOL

Thus V(t) is never allowed to surpass a certain tolerance level DTOL,

usually chosen about 20% of a typical dilution rate value. The white

noise w(t) has standard deviation DTOL/2, which implies that the bound

in equation (3-9) is active less than 5% of the time. Even though r(t)

is zero mean the sum of r(t) could sometimes become significantly dif-

ferent from zero at a given time. If this happens when h(t) is high

then we are not exciting the process about the desired point. To avoid

this we reverse the sign of r(t) in the rare case when the absolute

value of the sum of r(t) exceeds 2 DTOL. The above analysis leads to

the following algorithm:

STEP 1: Obtain the new biomass concentration measurement


STEP 2: Use the estimator of Appendix B to update the

parameter estimate vector θ(t)

STEP 3: Calculate the step size via equations (3-6), (A-9),

(3-8) and (3-7).

STEP 4: Evaluate the desired excitation level V(t) using

equation (3-9).

STEP 5: Determine the estimated optimum dilution rate

0 < D*(θ) ≤ 1 by equation (3-4).

STEP 6: Compute and implement

D(t) = h(t) D(t-1) + (1-h(t)) D*(θ(t)) + V(t)

STEP 7: Go to step 1.
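The control-update portion of these steps (steps 3 through 6) can be sketched as follows. The estimator and the optimum D*(θ) of equation (3-4) are stubbed out with a fixed value, and all names are illustrative assumptions rather than the thesis code.

```python
import numpy as np

rng = np.random.default_rng(1)
DTOL = 0.05                     # excitation tolerance, ~20% of a typical D

def excitation(h, r_sum):
    """Equation (3-9): clipped white noise scaled with the step size h,
    with the sign reversal applied when the running sum of r(t)
    drifts beyond 2*DTOL."""
    r = float(np.clip(rng.normal(scale=DTOL / 2.0), -DTOL, DTOL))
    if abs(r_sum + r) > 2.0 * DTOL:
        r = -r                              # keep the excitation centred
    return (0.96 * h + 0.04) * r, r_sum + r

def control_step(D_prev, h, D_star, r_sum):
    """Steps 3-6: blend the previous control with the estimated optimum
    through the step size h, then add excitation."""
    V, r_sum = excitation(h, r_sum)
    return h * D_prev + (1.0 - h) * D_star + V, r_sum

# usage: as the parameter estimates improve (h -> 0) the control
# settles near the estimated optimum D*
D, r_sum = 0.05, 0.0
for h in np.linspace(1.0, 0.0, 50):
    D, r_sum = control_step(D, h, D_star=0.25, r_sum=r_sum)
```

When h is near 1 the implemented D barely moves from its previous value while the excitation is at full strength; when h reaches 0 the excitation shrinks to 4% of its clipped amplitude and D sits essentially at D*.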

Numerical Simulations for Testing the Optimization Algorithm

In order to test the behavior of the developed adaptive optimiza-

tion algorithm, a dynamic model of the chemostat has been used. Since a

typical Monod-type model cannot account for the time-lag observed in the

dynamic response of the chemostat [28] a time-delay model was used

instead. The model introduced earlier [32] is described by the follow-

ing equations:

ẋ = μm z x/(ks + z) - D x

ṡ = -(1/Y) μm s x/(ks + s) + D(s0 - s)

ż = a(s - z)

where μm is the maximum specific growth rate, ks is the Monod constant,

Y is the yield and z is a weighted average of previous substrate

concentrations. The parameter a, called the adaptability parameter, is

a measure of the organism's ability to adjust its growth rate when a

change in the conditions of the chemostat is brought about.
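A minimal forward-Euler integration of this model, using the parameter values of Table 3-1, confirms that the tabulated initial state is the steady state corresponding to D = 0.05 hr⁻¹; the step size and function names below are assumptions for illustration.

```python
# time-delay Monod chemostat of [32]; parameter values from Table 3-1
mu_m, k_s, Y, a = 0.7, 22.0, 0.5, 3.0
s0, D = 30.0, 0.05

def derivs(x, s, z):
    growth = mu_m * z / (k_s + z)        # growth uses the delayed variable z
    uptake = mu_m * s / (k_s + s)        # consumption uses the current s
    return (growth * x - D * x,
            -uptake * x / Y + D * (s0 - s),
            a * (s - z))

# forward-Euler integration from the Table 3-1 initial state
x, s, z = 14.153, 1.6923, 1.6923
dt = 1.0 / 60.0                          # one-minute steps, in hours
for _ in range(int(200 / dt)):           # 200 hours of simulated time
    dx, ds, dz = derivs(x, s, z)
    x, s, z = x + dt * dx, s + dt * ds, z + dt * dz
# the state stays put: it is the steady state for D = 0.05 hr^-1
```

At steady state z = s, so μm s/(ks + s) = D gives s = 22·0.05/0.65 ≈ 1.6923 g/l and x = Y(s0 - s) ≈ 14.154 g/l, matching Table 3-1.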

The model parameters used for the simulations are tabulated in

Table 3-1. Parameters for the optimization algorithm are given in Table

3-2. Even though the true model order is three, a second order model of

the form of equation (3-2) is used in order to test the robustness of

the algorithm. The sampling/control interval is five minutes. Initi-

ally the chemostat is operated for 1.75 hours at the nonoptimum dilution

rate of 0.05 hr⁻¹, during which period estimation but no control occurs.

After this initialization period, the full optimization algorithm

is implemented. After 100 hours a load change in the feed substrate

concentration is introduced: s0 changes from 30 g/l to 20 g/l. This

shifts the optimum value of the performance measure from J = 2.22 to J

= 1.12. A second disturbance is introduced at 200 hrs; a change in tem-

perature or pH has been assumed to cause a change in μm from 0.7 to 0.9

hr⁻¹. This causes a further change in J from 1.12 to 1.44.

The behavior of the algorithm is shown in Figures 3-1 and 3-2.

Figure 3-1(a) shows the implemented control D(t). The calculated opti-

mum dilution rate, D*(G(t)), is shown in Figure 3-1(b) and the stepsize

h is shown in Figure 3-1(c). From this figure it is obvious that when a

disturbance is introduced, h increases rapidly rendering the control

behavior more conservative. As the parameter estimates improve, h drops

towards zero, and the excitation level progressively decreases.

Figure 3-2 shows the behavior of the biomass concentration (Figure

3-2(a)) and the performance measure J as a function of time (Figure 3-

2(b)). On the same figure a dashed line shows the actual optimum values

Table 3-1. Model parameters (at time t=0)

    μm = 0.7 hr⁻¹
    ks = 22 g/l
    Y  = 0.5
    a  = 3 hr⁻¹
    s0 = 30 g/l
    q  = 0
    D  = 0.05 hr⁻¹
    x  = 14.153 g/l
    s  = z = 1.6923 g/l

Table 3-2. Algorithm parameters

    T    = 5 min       (sampling interval)
    n    = m = 2       (order of model in equation 3-2)
    φ    = 0.9         (filter parameter in equation 3-8)
    DTOL = 0.05 hr⁻¹   (excitation tolerance level (eq 3-9))
    δ    = 0.05        (limit to the erroneous part in D(t) appearing in equation A-4)
    σ0   = 10⁴         (estimator parameter)
    γ    = 10⁻⁶        (estimator parameter)






Figure 3-1. Temporal variation of (a) implemented dilution rate, (b) calculated optimum dilution rate, both in (hours)⁻¹, and (c) step size, using time-delay model as true process. Disturbances occur at 100 and 200 hours.







of the performance measure. It is seen that the adaptive optimization

algorithm successfully locates the optimum, even after serious distur-

bances in the chemostat environment are introduced, changing its optimal steady state.


In order to establish the generality of the proposed algorithm, its

success was established with different models for continuous bioreac-

tors, including models with substrate inhibition and variable yield

factor. The algorithm was successful in taking all these processes to

the optimum without having to make any changes in the tuning parameters.

Of particular interest is the behavior of the algorithm for a

process model taken from the literature [33]. This commercially impor-

tant process is the production of single cell protein using methylo-

trophs. The successful behavior of the algorithm is portrayed in Fig-

ures 3-3 and 3-4.

3.3. Optimization with Respect to Temperature and pH

The purpose of this section is to present an extension of the

previous section that can be applied to the determination of optimal

temperature and pH values for a continuous fermentor operating at steady

state. One important alteration to the previous dilution rate algorithm

involves the form of the identified model. Since the algorithm requires

an explicit formula for the optimal control value (something not pos-

sible with a simple linear model), a nonlinear model of the Hammerstein

form is used. An additional modification, which leads to improvement in









Figure 3-2. Temporal variation of (a) biomass concentration and (b) performance index (dashed line represents actual optimum steady state). Process model as in Figure 3-1.














Figure 3-3. Temporal variation of (a) implemented dilution rate, (b) calculated optimum dilution rate, both in (hours)⁻¹, and (c) step size, using variable-yield methylotroph model.















Figure 3-4. Temporal variation of (a) biomass concentration and (b) performance index (dashed line represents actual optimum steady state). Process model as in Figure 3-3.

the previous algorithm but becomes mandatory for the case of temperature

optimization, is the monitoring of the second derivative of the perform-

ance measure calculated from the identified model. This ensures that

the calculated control value (i.e., point of zero slope) results in a

maximum index of performance instead of a minimum. The calculation of a

minimum occurs sometimes when the parameters are estimated in regions

away from the actual optimum.

In the next section, the algorithm is described. Examples of

numerical simulation results employing a Monod model with time delay as

a true process are presented in the following section.

The Adaptive Control Algorithm

The algorithm presented in this section adaptively estimates the

value of the control variable, either temperature or pH, that maximizes

a performance index. A typical performance objective is the production

of biomass in the steady-state operation of a continuous bioreactor.

Such a measure would include a term DXs where D is the dilution rate and

Xs is the steady-state biomass concentration, to account for producti-

vity. Also, since a major operating cost is for the utilized substrate,

a reasonable performance index is

J = D Xs - q D S0                                                 (3-11)

where So is the inlet substrate concentration and q is a constant de-

pendent on the relative cost of substrate to biomass. However, since Xs

is the only term in (3-11) dependent on temperature and pH, the optimi-

zation of biomass concentration will result in the optimization of J.

To obtain the optimum control, a function relating Xs to the control

variable Us is needed.

In many cases a discrete-time linear model can be used to approxi-

mately relate the control U(t) to the measured output X(t):

X(t+1) = a0 X(t) + a1 X(t-1) + .... + an X(t-n)

       + b0 U(t) + b1 U(t-1) + .... + bm U(t-m) + d               (3-12)

Here, t denotes time in number of sampling intervals. The above equa-

tion relates biomass concentration at time t+1 to previous biomass

concentrations and previous values of the control. This model was

successfully used in previous work [10] to determine the optimum of J

with respect to dilution rate. However, if U is temperature or pH the

performance measure is optimized at a bound and not at a point of zero

slope (dJ/dUs = 0). Since the true fermentation optimum is at a point

of zero slope, a model with this feature would be desirable. In related

work [14], it was proven that if such a model has the correct local

parameters then the model optimum is identical to the true process

optimum. A simple way of constructing such a model is to modify equa-

tion (3-12) to a Hammerstein form by adding terms quadratic in U:

X(t+1) = a0 X(t) + .... + an X(t-n)

       + b0 U(t) + .... + bm U(t-m)

       + Σ_{i=0}^{m} Σ_{j=i}^{m} cij U(t-i) U(t-j) + d            (3-13)


It should be noted that the large majority of fermentors are overdamped

and therefore a first order model (n=m=0) could be adequate.

Optimization now leads to an explicit analytical solution for the

optimal control value. In particular, the steady-state version of (3-

13) (i.e., when X(τ) = X(τ+1) and U(τ) = U(τ+1) for all τ) yields the

control value for which J is maximized (i.e., dJ/dUs = 0):

Uopt = -b/2c                                                      (3-14)

where b = Σ_{i=0}^{m} bi  and  c = Σ_{i=0}^{m} Σ_{j=i}^{m} cij.
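As a small illustration of equation (3-14), with optional clipping to known bounds on the control, the calculation can be sketched as follows (function and argument names are assumptions):

```python
def u_opt(b_coeffs, c_coeffs, u_min=None, u_max=None):
    """Optimal control from the Hammerstein steady state,
    U_opt = -b/(2c) with b = sum of b_i and c = sum of c_ij over
    i <= j, clipped to bounds when these are known."""
    b = sum(b_coeffs)
    c = sum(c_coeffs.values())       # keys are index pairs (i, j), i <= j
    u = -b / (2.0 * c)
    if u_max is not None:
        u = min(u, u_max)
    if u_min is not None:
        u = max(u, u_min)
    return u

# usage for a first-order model (n = m = 0): the steady-state map is a
# quadratic in U, peaking at U = -b0/(2*c00), here approximately 7.0
u_star = u_opt([1.4], {(0, 0): -0.1})
```

With an upper bound below the unconstrained optimum, e.g. u_opt([1.4], {(0, 0): -0.1}, u_max=6.0), the bound is returned instead.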

Since the true parameters of model (3-13) are not known, they are esti-

mated from sampled data. The estimated parameters, denoted by

θ = [a0, a1, ...., an, b0, ...., bm, c00, ...., cmm, d]

are updated after each sampling interval via a recursive least squares

routine similar to the algorithm proposed by Fortescue, Kershenbaum and

Ydstie [22]. This routine uses a variable forgetting factor to discard

old data which may bias the estimates, 6. This is done to ensure that

the parameters are estimated from local data. The discarding of data

ceases when the system converges, since all acquired data are then

local. In addition to forgetting, a mechanism for covariance matrix

resetting was included. This allows the estimator to discard a larger

number of data when disturbances occur. The complete set of estimator

equations can be found in Appendix B.

The parameter vector, 8, estimated from the least squares routine

is used to calculate the estimated optimal control value, Uopt, from

equation (3-14). In addition, if bounds are known they are included, so


        { Umax    if -b/2c > Umax
Uopt =  { -b/2c   if Umin ≤ -b/2c ≤ Umax                          (3-15)
        { Umin    if -b/2c < Umin
min min

It must be stressed that the validity of the calculated control depends

directly on the validity of the estimated parameters. Therefore the

control value should be implemented with some measure of caution. In

other words, a step toward Uopt should be taken at each interval instead

of implementing U(t) = Uopt directly.
To achieve this, a first order filter can be used. The control law

then becomes:

U(t) = (1-h) Uopt + h U(t-1);   0 ≤ h ≤ 1                         (3-16)

where h is the filter parameter. The simplest choice for h is a con-

stant. This constant would have to be chosen conservatively (close to

1) to avoid large steps in the wrong direction when parameter estimates

are poor. However, as the parameters improve, employing a large h will

unnecessarily slow down convergence of the algorithm. Therefore, a

variable filter parameter based on estimation errors is advocated. As

in the previous section, h is chosen as a monotonically increasing

function of v̄(t), the one-way-filtered a posteriori output prediction error:


h(t) = v̄(t) / (α + v̄(t))                                          (3-17)

        { |v(t)|                     if |v(t)| ≥ v̄(t-1)
v̄(t) =  { (1-φ)|v(t)| + φ v̄(t-1)    if |v(t)| < v̄(t-1)            (3-18)

v(t) = X(t) - Σ_{i=0}^{n} âi(t) X(t-1-i) - Σ_{i=0}^{m} b̂i(t) U(t-1-i)

       - Σ_{i=0}^{m} Σ_{j=i}^{m} ĉij(t) U(t-1-i) U(t-1-j) - d̂(t)  (3-19)

The a posteriori error v(t) is filtered whenever it decreases in order

to protect against low values due to cancellation of errors in the

parameter estimates. The parameter α is analogous to a saturation

parameter in a Michaelis-Menten type rate expression. The lower its

value, the more conservative the filter in (3-16). Using arguments

presented in section 3.2, the following formula for tuning a on line can

be derived

α = 2δ U²(t-1) |ĉ(t)| / |Ûopt/U(t-1) + 1|                         (3-20)

The parameter δ can be thought of as an upper bound to the relative

error in U(t). For example, if U is a temperature on the order of

300 K, a choice δ = 0.0001 limits the erroneous part in control law (3-

16) to 0.03 K.
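One reading of the filter computation in equations (3-17)-(3-18) can be sketched as follows; here α is held fixed for simplicity rather than tuned through (3-20), and all names and values are assumptions.

```python
def filter_parameter(v, v_bar_prev, phi=0.9, alpha=1e-3):
    """One-way filtering of the a posteriori prediction error
    (smoothed only when it decreases), then a Michaelis-Menten-shaped
    map of the filtered error to the filter parameter h."""
    v = abs(v)
    if v >= v_bar_prev:
        v_bar = v                                    # increases track instantly
    else:
        v_bar = (1.0 - phi) * v + phi * v_bar_prev   # decreases are smoothed
    return v_bar / (alpha + v_bar), v_bar

# usage: a burst of error drives h toward 1; h then decays gradually,
# protecting against accidental cancellation of estimation errors
h, v_bar = filter_parameter(0.5, 0.0)
h_high = h
for _ in range(50):
    h, v_bar = filter_parameter(0.0, v_bar)
```

A single large error pushes h close to 1 (conservative control); fifty error-free samples with φ = 0.9 shrink the filtered error by the factor 0.9⁵⁰, so h falls back only gradually.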

A white noise term is added to the right hand side of equation (3-

16) for excitation. It is included to prevent U(t) from remaining

constant for long periods of time thus causing parameter identifiability

problems. This term is designed for maximum excitation at h=1 (i.e.,

poor parameter estimates). However, when the estimator has converged

(in which case h is very low), excitation should be low to avoid keeping

the process far from the optimum steady state. Consequently, the imple-

mented excitation is taken as:

V(t) = (0.96 h(t) + 0.04) r(t)                                    (3-21)

        { Utol    when w(t) > Utol
r(t) =  { w(t)    when w(t) ∈ [-Utol, Utol]                       (3-22)
        { -Utol   when w(t) < -Utol

where w(t) is white noise with zero mean and variance equal to Utol/2.

Thus the absolute value of V(t) is never allowed to surpass the toler-

ance level, Utol, usually chosen to be about 10% of the control variable

range. Tolerance levels are employed to prevent large movements in the

control variable. Furthermore, the sign of r(t) is reversed whenever

the absolute value of the sum of V(t) over all time exceeds 2 Utol.

This is done to ensure that the control variable does not travel over

long periods of time when h is high.

To complete the algorithm one more feature is needed. Since bio-

mass concentration should not be minimized with respect to temperature

or pH at a point of zero slope, it is expected that points of zero slope

are maxima and therefore the estimated second derivative satisfies

∂²Xs/∂Us² = 2ĉ/(1 - â) < 0                                        (3-23)

However, this might not be true due to convex curve shapes or to estima-

tion errors. In this case equation (3-14) should not be used since it

would lead towards a predicted minimum. Instead, the control variable

should be moved in the opposite direction. Since the violation of (3-

23) may be due to inaccurate parameter estimates the control step should

be small and excitation V(t) should be added to aid parameter estima-

tion. However, the step size should not be taken so small as to allow

the excitation to reverse the direction of control movement. Thus it is

taken equal to the maximum excitation level Utol (see equation (3-

22)). Of course if the parameter estimates are known to be worthless

(h = 1) no trust should be placed on the sign of the estimated second

derivative of J. In this case the control should only be excited until

the parameter estimates improve (e.g. h drops below 0.99).

In summary the above analysis leads to the following algorithm:

STEP 1: Obtain the new biomass concentration measurement

STEP 2: Use the estimator of section 3.2 to update the parameter

estimates e

STEP 3: Calculate the filter parameter h(t) via equations (3-17)-(3-19)


STEP 4: Check if inequality (3-23) is satisfied

STEP 5A: If inequality (3-23) is satisfied calculate

Uopt via equation (3-15)

V(t) via equations (3-22) and (3-21)

and implement
U(t) = h(t)U(t-1) + [1-h(t)] Uopt + V(t)

STEP 5B: If inequality (3-23) is violated and h < 0.99 take a step away

from Uopt, i.e.
U(t) = U(t-1) - sgn[-b̂/2ĉ - U(t-1)] Utol + V(t)

where V(t) is obtained from equations (3-22) and (3-21)

STEP 5C: If inequality (3-23) is violated and h ≥ 0.99 excite the

control variable, i.e.

U(t) = U(t-1) + V(t)

STEP 6: Go to step 1.
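The branching of steps 4 and 5 can be sketched compactly as follows; the names are illustrative, and the curvature test checks only the sign of the estimated quadratic coefficient, which decides inequality (3-23) for a stable model.

```python
import math

def control_update(u_prev, h, b_hat, c_hat, v_exc, u_tol, u_min, u_max):
    """Steps 4-5: move toward the model optimum only when the estimated
    curvature is negative (a maximum); otherwise step away from the
    predicted minimum (5B), or merely excite when h is near 1 (5C)."""
    u_unc = -b_hat / (2.0 * c_hat)          # unconstrained zero-slope point
    if c_hat < 0.0:                         # curvature test of (3-23)
        u_opt = min(max(u_unc, u_min), u_max)
        return h * u_prev + (1.0 - h) * u_opt + v_exc                 # 5A
    if h < 0.99:
        return u_prev - math.copysign(u_tol, u_unc - u_prev) + v_exc  # 5B
    return u_prev + v_exc                                             # 5C

# usage (temperature in K): a predicted maximum at 325 K pulls the
# control upward (5A); a predicted minimum pushes it away by Utol (5B)
u_5a = control_update(300.0, 0.5, b_hat=1.3, c_hat=-0.002,
                      v_exc=0.0, u_tol=0.5, u_min=280.0, u_max=330.0)
u_5b = control_update(300.0, 0.5, b_hat=-1.3, c_hat=0.002,
                      v_exc=0.0, u_tol=0.5, u_min=280.0, u_max=330.0)
```

In the 5A case the control moves halfway toward the clipped optimum (to about 312.5 K here); in the 5B case it backs away from the predicted minimum by one Utol step.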

The above described algorithm has several parameters that must be

set by the user. Some trial and error may be needed and it is hoped

that the following discussion will aid the task.

The sampling interval should not be larger than 10% of the dominant

time constant of the fermentor. There is an advantage to going to lower

values since it would tend to speed up convergence of the algorithm.

However, the sampling period value should never be lowered to the point

that under dynamic conditions successive measurements do not differ

within the precision of the measuring device.

The estimator used has two parameters. One of them, γ, does not

significantly affect the algorithm performance as long as it is set to a

low enough value (less than 10 ). The other parameter, σ0, adjusts the

rate at which the estimator forgets past information. The higher σ0 is,

the less the forgetting; and the less the forgetting, the more the

steady-state offset that can result from the algorithm [14]. On the

other hand, excessive forgetting from a very low value of σ0 could cause

estimator failure. This would be reflected in h(t), which would contin-

ually stay very close to 1. One can set σ0 as follows. Start from a
very conservative value, say 10 r, where r is the expected measurement

variance (see [22]), and then decrease it until the estimator starts

having difficulties (observe h(t)). In most cases the estimator will

perform well for a wide range of σ0 values (2-3 orders of magnitude).

The parameter φ in equation (3-18) affects how fast h drops (the

lower it is, the faster h decreases). A value in the range 0.7 - 0.95 is

recommended. A value φ = 0.9 implies that h decreases by about 40% (1 - 0.9⁵ ≈ 0.41) as

a result of negligible estimation errors at 5 consecutive sampling

instants. Clearly, the larger the sampling period is, the lower must φ

be to achieve the same rate of decrease in h. Another parameter that

affects h is δ (equation (3-20)); the higher it is, the faster h de-

creases. Thus the larger the sampling period is, the higher must δ be

to obtain the same rate of decrease in h. As mentioned previously, one

can be guided towards choosing δ by thinking of it as a measure of the

relative error in U(t).

Finally, the parameter Utol of equation (3-22) does not depend on

the sampling period and the previously mentioned value (10% of the

control variable range) should be satisfactory.

Numerical Simulations for Testing the Optimization Algorithm

In this section, numerical simulations are presented to test the

optimization algorithm. To account for dynamic behavior in a chemostat,

a time-delay Monod model was employed as the "true process". This model

was introduced by O'Neil and Lyberatos [32] and is described by the

following equations:

Ẋ = μm Z X/(Ks + Z) - D X

Ṡ = -(1/Y) μm S X/(Ks + S) + D(S0 - S)

Ż = a(S - Z)                                                      (3-24)

where Ks is the Monod constant, Y is the yield, X is biomass concentra-

tion, S is substrate concentration and Z is a weighted average of previ-

ous substrate concentrations. The delay parameter a, called the adapta-

bility parameter, is a measure of the microbe's ability to adjust its

growth rate to changing substrate concentrations. In addition, μm, the

maximum specific growth rate, is taken to be a function of pH and temperature:

μm = μmax fpH fT

fT = BT T e^(-E/RT) / (1 + e^(ΔSd/R - ΔHd/RT))                    (3-25)

fpH = BpH / (1 + K2/[H⁺] + [H⁺]/K1)                               (3-26)

where fpH and fT are enzyme activity functions [34]. The constants

BpH and BT are proportionality factors, while K1 and K2 are equilibrium

constants for enzyme ionization. The activation energy for an enzyme

catalyzed reaction is denoted by E, whereas ΔSd and ΔHd denote the

entropy and enthalpy of deactivation, respectively. Values for the

parameters in equations (3-24)-(3-26) are given in Table 3-3.

Table 3-3. Simulation parameters

    Model (equation 3-24):
        μmax = 0.7 hr⁻¹
        Y = 0.5
        S0 = 30 g/l
        Ks = 22 g/l
        a = 3 hr⁻¹
        D = 0.05 hr⁻¹

    Enzyme activity (equations 3-25, 3-26):
        E = 14,904 cal/gmol
        ΔSd = 473.41 cal/gmol·K
        ΔHd = 149,040 cal/gmol
        K1 = 10⁻⁵·⁵
        K2 = 10⁻⁸·⁵
        BT = 9.845 × 10⁷
        BpH = 1.063

    pH run initialization:
        pH = 5.5
        T = 312.0 K
        X = 13.29
        S = Z = 3.41

    Temperature run initialization:
        T0 = 300 K
        pH = 7.0
        X = 12.67
        S = Z = 4.66
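As a numerical check on the enzyme-activity constants of Table 3-3 (values as read here; the gas constant R in cal/(gmol·K) and the grid search are assumptions for illustration), evaluating equations (3-25)-(3-26) should place the optima near pH 7.0 and T ≈ 312 K:

```python
import math

# constants of equations (3-25)-(3-26) as read from Table 3-3
R = 1.987                                  # cal/(gmol*K), assumed
E, dHd, dSd = 14904.0, 149040.0, 473.41
K1, K2 = 10.0 ** -5.5, 10.0 ** -8.5
B_T, B_pH = 9.845e7, 1.063

def f_T(T):
    """Temperature activity factor of equation (3-25)."""
    return B_T * T * math.exp(-E / (R * T)) / \
        (1.0 + math.exp(dSd / R - dHd / (R * T)))

def f_pH(pH):
    """pH activity factor of equation (3-26)."""
    h = 10.0 ** -pH
    return B_pH / (1.0 + K2 / h + h / K1)

# crude grid search for the optima reported in the text
T_opt = max((t / 10.0 for t in range(2900, 3300)), key=f_T)
pH_opt = max((p / 100.0 for p in range(400, 1000)), key=f_pH)
```

Note that fpH peaks at [H⁺] = (K1 K2)^(1/2) = 10⁻⁷, i.e. pH 7, where its denominator equals 1.063 = BpH, so the factor is normalized to 1 at its optimum.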

The dependence of steady state biomass concentration on temperature

and pH is shown in Figure 3-5. The optima are 7.0 and 312.0 K for pH

and temperature, respectively.

The numerical simulations were performed with a sampling period of

five minutes. The dilution rate of the chemostat was set at 0.05 hr-1

for both temperature and pH optimization. Initially, the control vari-

able was set at a nonoptimum value and excited with white noise r(t)

around that point for a period of 1.75 hours. After this initialization

period, the algorithm was implemented in full with parameters as given

in Table 3-4.

The optimization with pH as the control variable is given in Figure

3-6. Initially, the pH was 5.5 and it converged to the optimum with an

error of 0.7%. The calculated optimum, equation (3-14), converged to

the correct value within 4 hours but was not fully employed until the

filter parameter, h, decreased.

The temperature simulation, shown in Figure 3-7, was initiated at

300 K. The temperature subsequently converged to within 0.1% of the

true optimum. Initially, the estimator identified parameters that

result in a minimum instead of a maximum of J. Condition (3-23), there-

fore, was violated and Step 5B was implemented to drive the system away

from this minimum.

In conclusion, the adaptive optimization has proven very successful

in bringing a simulated fermentation process to its optimum conditions

and ensuring optimal operation thereafter. The algorithm should prove

very useful in two types of application: (i) the fast determination of

optimal temperature and pH for microbial cultures and (ii) securing

optimal operation for large scale fermentation processes.








Figure 3-5. The dependence of steady-state biomass concentration on (a) pH and (b) temperature using equations (3-24) to (3-26).





Figure 3-6. pH optimization. Temporal variation of (a) the actual pH control value, (b) the parameter estimates, and (c) step size.

Figure 3-7. Temperature optimization. Temporal variation of (a) the
actual temperature, (b) the predicted optimum, both in K,
and (c) step size.




Table 3-4. Algorithm parameters

    T = 5 min               (sampling interval)
    n = m = 0               (order parameters in equation 3-13)
    φ                       (filter parameter in equation 3-18)
    Utol = 0.5 K            (parameter of equation (3-22), temperature simulation)
    Utol = 0.5 pH units     (parameter of equation (3-22), pH simulation)
    δ                       (parameter of equation (3-20))
    σ0                      (forgetting factor parameter)
    γ                       (initial covariance matrix parameter)







4.1. Introduction

In order to test the adaptive optimization algorithm, an actual

bioprocess was chosen. Anaerobic digestion was selected because the

unique complexities of the process present an interesting challenge.

Probably one of the oldest uses of microbiology, anaerobic diges-

tion converts a wide variety of feedstocks to a readily usable energy

source, methane. This ubiquitous process exists in many different

sizes, from small household composting units to large-scale industrial

waste conversion systems.

The most frequent use is found in municipal waste applications

where anaerobic digestion is used to treat domestic sludge and biode-

gradable garbage, such as food products and paper. A trip to many farms

will find the use of digesters to dispose of cow, chicken and pig man-

ures as well as excess crops. In fact, many researchers have studied

the feasibility of using crops specifically to produce energy via

digestion [35,36]. Even the degradation of some hazardous wastes can be

accomplished through anaerobic digestion [37,8]. Overall, this process

is extremely important for the removal of energy from wastes to prevent

imbalancing the energy equilibrium of the environment.

Anaerobic digestion naturally occurs in soil, lake and ocean beds,

and animal digestive tracts. It can generally be described as a two-

stage mechanism [39]. Complex organic substrates (e.g., carbohydrates,

lipids, and proteins) are hydrolyzed and fermented in the first stage by


a group of bacteria, referred to as acidogens, into organic acids,

carbon dioxide, and hydrogen gas. These products are converted in the

second stage to methane and carbon dioxide by a unique group of bacteria

called methanogens.

However, this simple representation does not encompass all reac-

tions occurring in a typical digester. For example, a group of homo-

acetogenic bacteria reduces carbon dioxide to acetate and accounts for an

appreciable amount of the methane produced [40]. In addition, this two

stage description obscures the delicate interaction between acidogenic

and methanogenic bacteria. Experiments run with separated stages pro-

duce different behavior and intermediates [41,42]. This interaction is

also the cause of many of the problems associated with digester opera-

tion. Overfeeding and inhibitors, such as oxygen and chloroform, can

quickly cause a healthy reactor to turn sour.

The health of the anaerobic digester is usually indicated in prac-

tice by measurements of overall gas production, methane percentage of

effluent gas, and volatile fatty acids (VFAs). Volatile fatty acids are

intermediates and are considered to be a good indicator of imbalance

[43]. Acetic, propionic, n- and iso-butyric, and n- and iso-valeric

acids are usually measured, with acetate typically being the most predomi-

nant. A rise in VFA concentration, especially the higher weight acids,

is usually a precursor to falling methane gas production. However, VFA

analysis is time consuming and usually only indicates a reactor in a

declining mode. In addition, the mechanisms of VFA reactions are con-

tinually being debated [44,45]. Most engineers, from a practical point

of view, are concerned mainly with the methane gas production. This

value is easily measured on-line (see Appendix C) which is fortunate

since methane gas is the desired product.

Most efforts to optimize the methane production have followed

traditional procedures. Individual species of bacteria are isolated and

kinetic data are acquired through pure culture experiments. This ap-

proach obviously fails to include interaction between species. In

addition, there are many bacteria that have not been isolated [46].

Another procedure involves batch experiments [47]. Reactors are

primed with a starter culture and a quantity of substrate under varying

conditions. Methane production rates are then measured throughout the

viability of the culture. This method accounts for interaction but does

not provide enough predictive information on continuous operation. The

variety of microbes in a typical reactor is immense and relative popula-

tion sizes are dependent on substrate concentration. Therefore, contin-

uous operation can not be properly described by batch experiments.

The most popular optimization method is continuous steady-state

experiments. Step changes are made in a particular manipulated variable

and the subsequent steady-state output is measured. However, typical

residence times of 10 to 20 days would result in months of waiting for

each steady state. During this period of transition, many upsets could

occur, such as population shifts, to render these results suspect.

In view of these problems and complexities, adaptive optimization

could provide a useful alternative for maximizing production of an

anaerobic digester. Dynamic data would be used to predict steady-state

behavior, thus circumventing the need for steady states. Also, on-line

implementation would avoid the problems associated with ad hoc experiments.


4.2. Temperature Optimization

Temperature is one of the most important factors in anaerobic

digestion that influences methane gas production. Enzyme kinetics,

dissociation constants, and death rates are greatly affected by small

changes in the culture temperature. Johnson et al. [34] proposed that

bacteria are mostly influenced by changes in enzyme kinetics. As tem-

perature rises, enzyme activation increases, while at the same time

enzyme denaturation increases (see Chapter 3). In addition, higher

temperatures also increase irreversible destruction of these vital

proteins. These mechanisms will cause a typical bacterium to have a

range of viability as well as an optimum temperature for growth.

There are two operating ranges that have distinct characteristics,

thermophilic and mesophilic. Thermophilic operation is usually between

50 and 60°C whereas mesophilic digesters usually run between 30 and

40°C. Many authors [48,49,50] have discussed the relative merits of

each range, concluding that thermophilic digesters have the most advan-

tages (e.g., better dewatering characteristics, higher fraction of

pathogens destroyed, lower cell mass production, and increased rate of

digestion) but require stricter controls on temperature variations.

A comprehensive work by Zeikus [46] on the biological morphology of

methanogens describes microbial species that are specific to each

range. Many researchers [50,51] have verified that separate populations

do exist in actual digesters. However, enumeration experiments on a

mesophilic sewage fermentor established that 10 % of the total bacteria

population were typical thermophilic organisms [50].

Much research effort has been expended to ascertain the temperature

profiles of anaerobic digesters. Pfeffer [52] showed the existence of

two different optima for a digester fed with shredded municipal waste.

Mesophilic operation was optimized between 40 and 42°C and had a higher

maximum specific growth rate than thermophilic operation, which was

optimized between 60 and 62°C. Other authors, however, presented re-

sults demonstrating little difference in gas production between the two

ranges [53,54,55].

Chen and Hashimoto [56] used the data of Pfeffer and others [57,58]

to propose the existence of two ranges. However, in a later paper Chen,

Hashimoto and Varel [59] presented data on a series of exhaustive exper-

iments to determine the effect of dilution rate and temperature on beef-

cattle manure (Figure 6-1). In this paper, they proposed that a smooth

transition actually existed between mesophilic and thermophilic condi-

tions. Longer acclimation periods were shown to improve results. They

concluded that gas production increased steadily from low to high tem-

peratures, with maximization occurring between 55 and 60°C.

These authors also reported sudden drops in steady-state gas pro-

duction at particular temperatures. More specifically, gas production

seemed to occur only for a particular range with very little change

within this range (Figure 6-1). The range also became more narrow as

the dilution rate increased. (A possible explanation of this phenomenon

is proposed in Chapter 6. In addition, a method of improving optimi-

zation and avoiding such behavior is presented.)

Much debate still presides over this important question of tempera-

ture profiles. Adaptive optimization offers an unbiased method of

providing the answer. In truth, much of the data presented in the

research described above is specific to a particular system and feed

substrate. The general approach of adaptive optimization is well-suited


for many systems. In Chapter 7, results of this method applied to a

continuous glucose-fed anaerobic digester are presented.


5.1. General Description

Before a discrete model can be implemented on line, a sampling

interval must be chosen. This value is important to the success of the

control or optimization algorithm, but is usually chosen haphazardly.

If the sampling interval is too small, changes in the measurement output

could be masked by process noise. However, dynamic information could be

lost if too large a value is chosen. Smith and Corripio [16] have shown

that best results can be obtained by choosing a sampling interval to be

about ten to twenty percent of the dominant time constant.
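The Smith and Corripio rule of thumb amounts to one line of arithmetic; the sketch below encodes it (the function name and the 15% mid-range default are mine, not from the text):

```python
def sampling_interval(tau_dominant, frac=0.15):
    """Rule-of-thumb sampling interval: ten to twenty percent of the
    dominant time constant (Smith and Corripio [16]); frac defaults to
    the mid-range value 0.15 (my choice)."""
    return frac * tau_dominant
```

For a dominant time constant of 10 days this suggests sampling every 1 to 2 days; the difficulty discussed next is that the dominant time constant itself moves as the process changes steady state.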

A major problem with this method is the variability of the time

constant as the process changes from one steady state to another. This

is especially magnified in bioprocess optimization where the bioreactor

changes from a low production state characterized by slow dynamics to a

high production state with substantially faster dynamics.

This problem may be circumvented by considering an alternate sam-

pling basis, i.e., transforming the independent variable from time to

some other variable, p. This new basis can reduce the variation of the

time constant.

This transformation can be viewed in general terms by considering

the following general single-input single-output system:

    dx/dt = f(x,u)

where x is the output measurement and u is the control input. Using a

first-order Taylor series, the time constant, τ, can be obtained from the eigenvalue, λ, of the linearized system:

    τ = -λ⁻¹ = -[∂f/∂x |_ss]⁻¹

Here, λ is evaluated at the steady state. The transformation is performed by dividing f by the derivative of the new basis, β, with respect to time:

    dx/dβ = (dx/dt)/(dβ/dt) = f(x,u)/(dβ/dt) = g(x,u)

The new eigenvalue can be easily related to the time-based eigenvalue:

    λ_β = (∂g/∂x)|_ss = (dt/dβ)|_ss (∂f/∂x)|_ss = (dt/dβ)|_ss λ

Assuming the transformation is performed for a stable system, the following condition must be met in order to maintain stability:

    dβ/dt > 0

This condition implies that the new eigenvalue is always negative.

Therefore, the new independent variable must be a monotonically increas-

ing function of time. This can be more clearly understood by consider-

ing the following example.

5.2. Example

A constant volume isothermal continuous stirred tank reactor (CSTR)

in which an nth order reaction takes place can be described by the

following continuous-time dynamic model:

    V dC/dt = F C_o - F C - V k C^n                          (5-1)


where C is the reactant concentration, Co the feed reactant concentra-

tion, V the reactor volume, F the flow rate and k the kinetic con-

stant. Let C be the measured variable and F the manipulated variable.

We can change equation (5-1) to dimensionless form by defining

    c = C/C_o                                                (5-2)

    f = F/(V k C_o^(n-1))                                    (5-3)

    θ = t k C_o^(n-1)                                        (5-4)

in which case equation (5-1) becomes:

    dc/dθ = f - fc - c^n                                     (5-5)

At steady state the above equation yields

    f_s = c_s^n / (1 - c_s)                                  (5-6)

where the subscript s denotes steady state. The eigenvalue is

    λ₁ = -(f_s + n c_s^(n-1))                                (5-7)

which corresponds to the time constant

    τ₁ = -1/λ₁ = 1/(f_s + n c_s^(n-1)) = (1-c_s)/[(1-n)c_s^n + n c_s^(n-1)]    (5-8)

This is clearly a function of the concentration at the operating steady state, c_s ∈ (0,1). For example, if n = 3, for 1% conversion (c_s = 0.99) the time constant is 0.01, while for 99% conversion (c_s = 0.01) it increases to 3320. The lower the concentration, the lower is the reaction rate and the slower are the dynamics, as evidenced by the substantial increase in the time constant.

Let β₂(θ) be the (dimensionless) cumulative amount of reactant consumed until time θ. Then

    β₂(θ) = ∫₀^θ c^n dθ'                                     (5-9)

    dβ₂/dθ = c^n > 0                                         (5-10)

Clearly β₂ is a monotonically increasing function of θ. By dividing equation (5-5) by equation (5-10) we obtain

    dc/dβ₂ = f c^(-n) - f c^(-(n-1)) - 1                     (5-11)

What we have done is change the independent variable from time to cumulative amount of reactant consumed. The eigenvalue of (5-11) is

    λ₂ = -(f_s/c_s^(n+1))[n + (1-n)c_s]                      (5-12)

and using equation (5-6) we obtain the following "time" constant:

    τ₂ = -1/λ₂ = c_s(1-c_s)/[n + (1-n)c_s]                   (5-13)
2 s

This also varies with c_s, but considerably less than τ₁. For example, if n = 3, in the range c_s = [0.01, 0.99] the maximum value is 0.134 and the minimum is 0.0033, a ratio of 40 versus a ratio of 332,000 for τ₁.
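The ratios quoted above can be checked numerically; a minimal sketch evaluating equations (5-8) and (5-13) over c_s ∈ [0.01, 0.99] for n = 3 (the grid spacing is my choice):

```python
import numpy as np

def tau1(cs, n):
    # Time constant of the time-based model, eq. (5-8)
    return (1 - cs) / ((1 - n) * cs**n + n * cs**(n - 1))

def tau2(cs, n):
    # "Time" constant after switching to cumulative reactant consumed, eq. (5-13)
    return cs * (1 - cs) / (n + (1 - n) * cs)

cs = np.linspace(0.01, 0.99, 9801)   # fine grid over the operating range
n = 3
r1 = tau1(cs, n).max() / tau1(cs, n).min()   # ~332,000
r2 = tau2(cs, n).max() / tau2(cs, n).min()   # ~40
```

The same two functions reproduce every entry of Table 5-1 when n is varied.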

Let β₃(θ) be the (dimensionless) cumulative net input (input - output) of reactant, i.e.

    β₃(θ) = ∫₀^θ f(1-c) dθ'                                  (5-14)

This is also a monotonically increasing function of θ. Dividing equation (5-5) by dβ₃/dθ we obtain:

    dc/dβ₃ = 1 - c^n/[f(1-c)]                                (5-15)

The "time" constant is

    τ₃ = c_s(1-c_s)/[n + (1-n)c_s]                           (5-16)

i.e., it is equal to τ₂.

Finally, let β₄(θ) be the (dimensionless) cumulative amount of reactant fed, i.e.

    β₄(θ) = ∫₀^θ f dθ'                                       (5-17)

Again we have a monotonically increasing function of θ, and when we change from time to β₄ as the independent variable, equation (5-5) becomes

    dc/dβ₄ = 1 - c - c^n/f                                   (5-18)

and the corresponding "time" constant is

    τ₄ = c_s/[n + (1-n)c_s]                                  (5-19)


Table 5-1 presents the ratio of the maximum "time" constant to the

minimum "time" constant versus n for cs = [0.01, 0.99]. It makes it

clear that by switching the independent model variable from time to

cumulative amount of reactant consumed, cumulative net amount of reac-

tant fed or even cumulative amount of reactant fed, the variability of

the "time constant" with c_s is considerably reduced. This is increas-

ingly the case as the reaction order, and with it the nonlinearity of

the process, increases.

Table 5-1. Ratio of maximum to minimum "time" constants versus reaction order

    n     τ₁,max/τ₁,min     τ₂,max/τ₂,min (= τ₃)     τ₄,max/τ₄,min

    1     99                25.3                     99
    2     4974              34.5                     195
    3     332088            40.3                     289
    4     2.5 x 10⁷         44.6                     382

The above observation motivates discretization on the basis of

model (5-11) instead of model (5-5). Using the Euler method an approxi-

mate discretization of (5-11) is:

    c(β₂+Δβ₂) = c(β₂) + Δβ₂[f(β₂)c(β₂)^(-n) - f(β₂)c(β₂)^(-(n-1)) - 1]    (5-20)

If a control or optimization algorithm employs the above model, sampling

occurs when a fixed amount of β₂ has accumulated (constant Δβ₂) instead of when a fixed amount of time has passed (constant Δθ). This in effect

creates a variable sampling period. The sampling interval is automati-

cally lengthened or shortened as dictated by the change in the speed of

the process dynamics.
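This variable-period sampling can be sketched in a few lines: integrate the dimensionless CSTR model (5-5) with a fine Euler step, and record a sample each time β₂ of equation (5-9) grows by a fixed Δβ₂. The values of n, f, Δβ₂ and the initial condition below are illustrative, not from the text:

```python
# Constant-Δβ2 sampling of the dimensionless CSTR dc/dθ = f - f·c - c^n (5-5):
# a sample fires whenever the cumulative reactant consumed, β2 = ∫ c^n dθ (5-9),
# grows by Δβ2, so sampling automatically slows down as the dynamics slow down.
n, f = 3, 0.5                       # reaction order and dimensionless feed rate
c, theta, beta2 = 1.0, 0.0, 0.0     # start from c = C/Co = 1
dtheta, dbeta2 = 1e-4, 0.05
next_sample = dbeta2
sample_thetas = []                  # θ values at which β2-based sampling fires
while theta < 50.0:
    c += dtheta * (f - f * c - c**n)    # fine Euler step of (5-5)
    beta2 += dtheta * c**n              # accumulate (5-10)
    theta += dtheta
    if beta2 >= next_sample:
        sample_thetas.append(theta)
        next_sample += dbeta2
gaps = [b - a for a, b in zip(sample_thetas, sample_thetas[1:])]
```

As c relaxes toward its steady state the reaction rate c^n falls, so the time between successive β₂-samples widens, which is exactly the automatic lengthening of the sampling period described above.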

This concept is of course not restricted to the above mentioned

reactor example. For many processes which exhibit wide variations of

the effective time constant, the associated problems can be reduced by

considering as the independent variable the cumulative amount of a

quantity generated, consumed or fed, rather than time. In this manner

the sampling period will be automatically adjusted, increasing when the

dynamics slow down and decreasing when they speed up.

5.3. Application to Anaerobic Digestion

The anaerobic digestion process, in which cellulosic material or

sugars are converted to methane and carbon dioxide, can exhibit consi-

derable variation in the effective time constant. The lower the dilu-

tion rate (flow rate/volume) is, the lower is the methane production

rate and the slower are the dynamics.

To exemplify the advantages of alternate sampling basis for anaero-

bic digestion, an adaptive control simulation is presented in this

section. Many excellent studies have been published for adaptive con-

trol of biochemical systems [60,61] as well as anaerobic digestion

[62,63]. The adaptive control law used here is a simple quadratic type

since the main purpose is to exhibit the properties of an alternate

sampling basis.

Whether the purpose of the digester operation is energy production

or wastewater treatment or both, a reasonable control objective is to

attempt to maintain a specified methane production rate. (A word of

caution though: If the digester is subject to receiving inhibitory

material in the feed, maintenance of a constant methane production rate

should not be the primary control objective [64]). The methane produc-

tion rate can be easily measured on line. A simple scheme is described

in Appendix C. An easy to manipulate variable that substantially af-

fects the methane production rate is the dilution rate. Thus the objec-

tive is to control the methane production rate by manipulating the

dilution rate.

Anaerobic digesters have a complex mixed culture, and changes in

environmental conditions may cause population changes. Furthermore, the

feed is often ill-characterized and changing. For these reasons an

adaptive controller is employed instead of basing the controller design

on a fixed process model. A fixed process model [65] is used, though,

for simulating the process.

The adaptive controller is based on the following on-line identi-

fied model:

    Q_CH4(β+Δβ) = θ₁Q_CH4(β) + θ₂D(β) + θ₃                   (5-21)

where Q_CH4 is the methane production rate (liters of methane per day), D is the dilution rate, and θᵢ are the on-line estimated parameters.

Instead of time, the independent variable β is the cumulative amount of methane produced, something easily measured on line. Sampling takes place whenever β increases by Δβ.

The adaptive controller employed is a quadratic self-tuning con-

troller [66] based on the minimization of the performance measure:

    J(D(β)) = [Q_CH4,desired - Q_CH4(β+Δβ)]² + q[D(β) - D(β-Δβ)]²    (5-22)

The resulting control law is:

    D(β) = [1/(q + θ̂₂²)] {qD(β-Δβ) + θ̂₂[Q_CH4,desired - θ̂₁Q_CH4(β) - θ̂₃]}    (5-23)

where θ̂ᵢ denotes the current parameter estimates.
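As a sketch, the control law (5-23) is one line of code (the function name and the test numbers are mine, not from the text):

```python
def self_tuning_dilution(theta_hat, q, D_prev, Q_now, Q_desired):
    """Quadratic self-tuning control law, eq. (5-23).

    theta_hat = (th1, th2, th3) are the current estimates of the model
    Q(β+Δβ) = th1·Q(β) + th2·D(β) + th3 of eq. (5-21); q penalizes
    dilution-rate moves as in eq. (5-22).
    """
    th1, th2, th3 = theta_hat
    return (q * D_prev + th2 * (Q_desired - th1 * Q_now - th3)) / (q + th2 ** 2)
```

For q = 0 the law reduces to direct model inversion (the predicted methane rate is set to the set-point in one step); a large q keeps D near its previous value, which is how the tuning parameter trades off speed against aggressive flow changes.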

The parameter estimator is a modification of the recursive least

squares with variable forgetting factor of Fortescue et al. [22]. The

modification is that the m (m ≥ number of parameters) most recent data

are excluded from forgetting. This has the advantage that even at

periods of high forgetting, the estimator retains the minimum amount of

information required (or, equivalently, the estimator can employ higher

rates of forgetting of past data). A recursive formulation that accom-

plishes this is the following:

At each sampling instant v:

    e_(v-m) = y_(v-m) - x_(v-m)ᵀ θ*_(v-m-1)

    K_(v-m) = P*_(v-m-1) x_(v-m) / [1 + x_(v-m)ᵀ P*_(v-m-1) x_(v-m)]

    θ*_(v-m) = θ*_(v-m-1) + K_(v-m) e_(v-m)

    P*_(v-m) = [I - K_(v-m) x_(v-m)ᵀ] P*_(v-m-1)

    θ_(v-m) = θ*_(v-m)

    P_(v-m) = P*_(v-m)

    For i = 1 to m:

        e_(v-m+i) = y(v-m+i) - x_(v-m+i)ᵀ θ_(v-m+i-1)

        K_(v-m+i) = P_(v-m+i-1) x_(v-m+i) / [1 + x_(v-m+i)ᵀ P_(v-m+i-1) x_(v-m+i)]

        θ_(v-m+i) = θ_(v-m+i-1) + K_(v-m+i) e_(v-m+i)

        P_(v-m+i) = [I - K_(v-m+i) x_(v-m+i)ᵀ] P_(v-m+i-1)

    Next i

    λ_v = 1 - e_v² / [(1 + x_vᵀ P_(v-1) x_v) σ₀]

    P*_(v-m) ← P*_(v-m) / λ_v

The notation is as follows:

    v         = independent variable enumerating the number of sampling
                periods (Δt for conventional algorithms or, in our case, Δβ)

    θ*        = parameter estimate vector obtained with forgetting and
                without considering the last m data

    P*        = corresponding covariance matrix

    y_(v-m)   = measurement at sampling instant v-m;
                Q_CH4(β-mΔβ) in our case

    x_(v-m)   = information vector;
                [Q_CH4(v-m-1), D(v-m-1), 1]ᵀ in our case

    e_(v-m)   = estimation error

    K_(v-m)   = gain

    θ_(v-m+i) = update of θ_(v-m+i-1) using y(v-m+i) and without
                forgetting

    P_(v-m+i) = corresponding covariance matrix

    θ_v       = final parameter estimate at instant v (used in the
                control law)

    λ_v       = forgetting factor. The constant σ₀ adjusts the speed of
                forgetting.

Essentially what this algorithm does is update the value θ* that the parameter estimates would have if the last m data were not received, using a variable forgetting factor. Then, the last m data are used, one at a time, to update the parameter vector without forgetting. It should be remarked that only P*, θ*, and the inputs and outputs contained in x_v, ..., x_(v-m+1) are stored after each iteration; so the algorithm requires approximately as much storage as a conventional least squares estimator.
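A minimal sketch of this estimator in Python (class and variable names are mine; the forgetting factor follows the Fortescue-type form above, driven by the newest prediction error, and the clipping bound on λ is my choice, not from the text):

```python
import numpy as np

class ShieldedForgettingRLS:
    """Recursive least squares with variable forgetting in which the m
    most recent data pairs are excluded from forgetting (a sketch, not
    the thesis code)."""

    def __init__(self, n_params, m=3, sigma0=0.005, p0=10.0):
        self.m = m
        self.sigma0 = sigma0                      # speed-of-forgetting constant
        self.theta_star = np.zeros(n_params)      # estimate without last m data
        self.P_star = p0 * np.eye(n_params)
        self.buffer = []                          # last m (x, y) pairs, oldest first

    @staticmethod
    def _rls_step(theta, P, x, y):
        # One ordinary RLS update (no forgetting); returns the new
        # estimate, covariance, and the prior prediction error.
        e = y - x @ theta
        K = P @ x / (1.0 + x @ P @ x)
        return theta + K * e, (np.eye(len(theta)) - np.outer(K, x)) @ P, e

    def update(self, x, y):
        self.buffer.append((np.asarray(x, dtype=float), float(y)))
        if len(self.buffer) > self.m:
            # Fold the datum leaving the shielded window into the starred
            # (forgettable) quantities.
            x_old, y_old = self.buffer.pop(0)
            self.theta_star, self.P_star, _ = self._rls_step(
                self.theta_star, self.P_star, x_old, y_old)
        # Re-process the shielded m data without forgetting.
        theta, P = self.theta_star, self.P_star
        e, denom = 0.0, 1.0
        for xb, yb in self.buffer:
            denom = 1.0 + xb @ P @ xb
            theta, P, e = self._rls_step(theta, P, xb, yb)
        # Variable forgetting from the newest error, applied to the
        # starred covariance only; the 0.1 lower clip is an assumption.
        lam = 1.0 - e ** 2 / (self.sigma0 * denom)
        self.P_star = self.P_star / max(lam, 0.1)
        return theta
```

Only θ*, P* and the m buffered input-output pairs persist between calls, matching the storage remark above.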

Figures 5-1 to 5-3 show simulation results using the Smith et al.

[65] model for a 5 liter glucose-fed digester with glucose feed concen-

tration of 30 g/L. The sampling period was selected to be 1 liter of

methane produced. The algorithm parameters were m = 3, σ₀ = 0.005 and

q=10,000. The initial set-point was 0.93 L/day and after an initializa-

tion period of 10 sampling instants (during which the system was sub-

jected to random input excitation), it was changed to 6 L/day. Subse-

quently, the set-point was returned to its original value. As Figure 5-

1 shows, the controller successfully drives the methane production rate

to the desired values. Figure 5-2 shows the corresponding hydraulic

retention time (inverse of dilution rate). Figure 5-3 exhibits the

time-variation in the sampling period. Keeping Δβ constant at 1 L results in an equivalent time sampling period, Δt, varying from 4 hours

(when the system dynamics are faster at the high methane production rate

set-point) to 26 hours (when the dynamics are slower at the low methane

production set-point). Finally, Figure 5-4 shows that if a constant Δt self-tuning controller were used, with Δt equal to that of the constant



Figure 5-1. Gas production of simulation using alternate sampling basis.

Figure 5-2. Hydraulic retention time of simulation using alternate sampling basis.

Figure 5-3. Time variation of sampling interval of simulation using alternate sampling basis.

Figure 5-4. Gas production of simulation using time sampling basis.

Δβ algorithm in the initial steady state (and identical algorithm tun-

ing), the system does not speed-up the sampling rate and as a result

reaches set-points considerably more slowly.

The advantages gained by this transformation are basically three-

fold. Variation in parameter estimates will be reduced because of the

reduction in the variation of the time constant. The system will be

better represented by a linear discrete model which will lead to im-

proved stability. Finally, the time required to optimize a system will

be reduced because the sampling period will get smaller as the dynamics

of the system increase.

It is also necessary, however, to consider the potential problems

associated with the transformation. A reliable measurement of the

independent variable is required. For the anaerobic digestion process,

this is not a problem since methane gas can be measured in increments

very easily (see Appendix C). Also, all changes to process inputs must

be made with respect to the new basis. For continuous anaerobic digest-

ers, this approach leads to an attractive operation mode, constant

reactor yield, which is described in the next chapter. Lastly, because

the sampling interval is variable with respect to time, it must always

be large enough to allow for data retrieval and manipulation.


The majority of anaerobic digesters are operated in a continuous

semibatch mode. Substrate solutions are usually prepared and fed on a

daily basis. The amount fed is maintained at a constant level thus

keeping the hydraulic residence time (HRT) constant. For a simple

chemostat type fermenter (as in Figure C-1), the HRT is defined by the

following relationship:

    HRT = V/F = D⁻¹                                          (6-1)

where V is the liquid volume of the reactor, F is the volume of feed per

time, and D is the dilution rate. The kinetics of a continuous fer-

menter can be described as:

    dX/dt = μ(S)X - DX - k_d X                               (6-2)

    dS/dt = -μ(S)X/Y_x + D(S_o - S)                          (6-3)

    Q_p = Y_p μ(S)X V                                        (6-4)

Here, X is the microbial concentration, μ is the specific growth rate, k_d is the death rate, S is the substrate concentration, S_o is the inlet concentration of substrate, Y_x is the biomass yield (mass of cells formed/mass of substrate consumed), Q_p is the product gas rate, and Y_p is the product yield (volume of gas/mass of cells formed). In many digestion systems, methanogenesis can be assumed to be the rate-limiting step. Therefore, X is the methanogenic bacteria concentration (g/L), S is the total volatile fatty acid concentration (g/L COD), and Q_p is the methane production rate (L).

As discussed in Chapter 4, digester failure is mostly caused by

overfeeding. An upset in the reactor lowering the specific growth rate

will result in rising VFA concentrations and a declining methanogenic

bacteria concentration. Furthermore, substrate inhibition may cause the

specific growth rate to decrease even more. Consequently, a digester

may appear healthy but in actuality be close to souring. To prevent

failure, digesters are operated at sub-optimal HRTs and methane produc-

tion is watched very closely.

This behavior has also been well documented in research lab digest-

ers. Investigators have noticed slight changes in a controlled vari-

able, such as temperature or pH, will cause a sudden collapse in gas

production [43,53]. For example, in the study provided by Chen et al.

[59], digesters were operated at different temperatures (see Figure 6-

1). These researchers noticed that steady-state gas production was

minimally affected within a certain range of temperatures for a given

HRT. Outside this range gas production was practically zero. They also

noted that this range narrowed as the HRT decreased.

These phenomena can be predicted by substituting a simple substrate

inhibition functionality for the specific growth rate:

    μ(S) = μ_m / (1 + K_s/S + S/K_I)                         (6-5)



Figure 6-1. Experimental results of [59]: temperature effects on the steady-state methane production for different retention times (18, 12, 9, 6, and 3 day HRT).

where Ks and KI are kinetic parameters, and using the thermodynamic

functionality of equation (3-24). Temperature profiles of steady-state

gas production predicted from this model are shown in Figure (6-2).

Although these process curves violate common logic, an explanation can

be given by considering the effect of temperature and substrate concen-

tration on the specific growth rate. If the temperature is increased

toward the optimum, μ_m of equation (3-24) will increase. However, the

substrate level will decrease so as to maintain the product term of

equation (6-4) at an approximately constant value. At the drop off

points away from the optimum, rising substrate levels due to a decrease in μ begin to have a dramatic inhibitory effect on the growth rate. Subse-

quently, faster feeding rates (lower HRTs) will amplify this effect and

result in a reduced range of possible operating temperatures. One can

easily imagine the problems of trying to operate a digester at the

maximum feed rate (lowest HRT).

A way of operating the digester that could possibly avoid these

problems is the constant reactor yield feeding mode. In this method,

the amount of substrate fed per amount of methane produced is held

constant (see Appendix C for implementation). This functionality is

given by the following equation:

    Y = Q_p/(F S_o) = Y_p μ(S)X/(D S_o)                      (6-6)

where Y is the reactor yield. Substituting into equations (6-2) to (6-4)


Figure 6-2. Theoretical effect of temperature on steady-state gas production of a constant HRT digester (at different HRTs) using equations (3-25), (3-26), (6-2), (6-3), (6-4) as process model.



we obtain:

    dX/dt = μ(S)X[1 - Y_p X/(Y S_o)] - k_d X                 (6-7)

    dS/dt = μ(S)X[Y_p(S_o - S)/(Y S_o) - 1/Y_x]              (6-8)

The steady-state values of X, S and methane rate are:

    X_s = [1 - k_d/μ(S_s)] Y S_o/Y_p                         (6-9)

    S_s = S_o[1 - Y/(Y_x Y_p)]                               (6-10)

    Q_p,s = [μ(S_s) - k_d] Y S_o V                           (6-11)

Equation (6-10) can also be rearranged in convenient terms of the substrate conversion fraction:

    Y = Y_x Y_p (S_o - S_s)/S_o                              (6-12)
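The closed-form steady state can be packaged and checked against the original balances; a sketch (function names and the Monod test numbers are mine, not from the text):

```python
def constant_yield_steady_state(Y, So, mu, Yx, Yp, kd, V):
    """Steady state of the constant-yield digester, eqs. (6-9)-(6-11).

    Y is the reactor yield of eq. (6-6); mu(S) must be evaluable at the
    returned substrate level S_s.
    """
    S_s = So * (1.0 - Y / (Yx * Yp))            # eq. (6-10)
    mu_s = mu(S_s)
    X_s = (1.0 - kd / mu_s) * Y * So / Yp       # eq. (6-9)
    Qp_s = (mu_s - kd) * Y * So * V             # eq. (6-11)
    return X_s, S_s, Qp_s
```

Note that Q_p,s depends on the kinetics only through μ(S_s) - k_d, with S_s pinned by the yield; this is why the constant-yield temperature curve is smooth, with temperature entering essentially only through μ_m.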


Constant yield operation has the advantage that the substrate concentra-

tion is held constant thus avoiding substrate inhibition effects. In

addition, for a small death rate, the methanogenic bacteria concentra-

tion is also constant and therefore Qp is affected by pm only. This

results in a smooth parabolic process curve with a clearly distinguish-

able optimum.

For the above case of temperature optimization, this characteristic

has a dramatic effect. In Figure 6-3, digester temperature profiles

are shown using constant yield. The advantage gained here is that the

optimum is more pronounced and easier to locate. Also, the digester can



Figure 6-3. Theoretical effect of temperature on steady-state gas production of a constant yield digester (at different yields, from low yield/low conversion to high yield/high conversion) using equations (3-25), (3-26), (6-2), (6-3), (6-4) as process model.

be operated at high production rates with less chance of sudden failure.


The advantages of constant yield operation are especially important

for the case of inhibitors in the feed. Harmon et al. [67] have shown

results on a simulated anaerobic digestion system [65] where an inhibi-

tor was input into a reactor through the feed. The constant HRT reactor

ultimately failed whereas the constant yield reactor had only a slight

rise in VFA concentration. In this practical case, constant yield

provides an automatic method of raising the HRT for declining gas production.


It should be noted that the above results assume the inlet sub-

strate concentration remains constant. In actual systems, feedstocks

may vary and thus produce negative results under constant yield opera-

tion. For example, an increase in substrate concentration will result

in a rise in gas production which will also increase the feed rate.

Consequently, constant yield policy will overfeed the reactor [64].

However, Pullammanappallil et al. [64] have presented an expert system

which can handle such upsets.


7.1. Introduction

In the experiments performed on the anaerobic digester described in

Appendix C, steady state methane production rate was the performance

index and temperature was the control variable. The digester was oper-

ated on a constant yield basis (see Chapter 6) and allowed to come to

steady state before the start of the experiment. VFAs and methane gas

rate were constant for 48 hours to verify the steady state.

Both the feed pump and gas meter were calibrated prior to the

start. The yield is calculated from these values.

    Y = r/(N r_f)   (mL of methane / mL of feed)             (7-1)

where r is the meter calibration (mls methane per meter reset), rf is

the pump calibration (mls feed per revolution), and N is the number of

revolutions of the feed pump per meter reset. The constant yield feed-

ing policy is implemented by turning the feed pump on for N revolutions

each time the gas meter resets.

The reactor yield can be transformed into chemical oxygen demand (COD) conversion percentage by the following equation:

    COD conversion = Y [liter of feed / 38.9 g COD] [g COD / 0.35 liter CH₄] (273/T_m)    (7-2)

Here, the first bracketed term is the COD feed concentration determined

in Appendix C, the second term is the theoretical COD conversion of

methane at 273 K and 1 atmosphere, and the last term, where Tm is the

meter temperature, corrects the gas volume measurement for temperature.
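As arithmetic, equation (7-2) is a one-liner; the sketch below uses the constants quoted in the text (38.9 g COD/L feed, 0.35 L CH₄/g COD at 273 K), with the function name mine:

```python
def cod_conversion(Y_reactor, T_meter_K):
    """COD conversion fraction from the reactor yield, eq. (7-2).

    Y_reactor is in mL CH4 per mL feed; 38.9 g COD per liter of feed and
    0.35 L CH4 per g COD (at 273 K, 1 atm) are the constants quoted in
    the text, and 273/T_m corrects the metered gas volume to 273 K.
    """
    return Y_reactor / 38.9 / 0.35 * (273.0 / T_meter_K)
```

With the temperature correction near unity, the thermophilic-run yield of 11.03 L CH₄/L feed gives approximately 0.81, consistent with the 81% reported in section 7.3.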

The algorithm described in section 7.2 was implemented using the

alternate sampling basis described in Chapter 5. The sampling interval, δ, is given by

    δ = Z r   (mL of methane)                                (7-3)

where Z is the number of meter resets per sampling period. At the end

of each period, the instantaneous methane rate, m̄, was calculated by a weighted average of the last ten gas-rate measurements:

    m̄ = [Σ_(j=z-9)^z w^(z-j) m_j] / [Σ_(j=z-9)^z w^(z-j)],   m_j = r/Δt_j    (7-4)

where m_j is the methane rate of the jth meter reset, Δt_j is the time between the j and j-1 resets, and w is the weight (w = 0.9 throughout all

experiments). A weighted variance was also calculated to remove errors.

    σ² = [Σ_(j=z-9)^z w^(z-j) (m_j - m̄)²] / [Σ_(j=z-9)^z w^(z-j)]    (7-5)

If any measurements were more than one standard deviation from m̄, they

were rejected and equation (7-4) was recalculated.
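A sketch of the weighted averaging and one-standard-deviation rejection of equations (7-4) and (7-5) follows; the exact variance weighting is garbled in the source, so an exponentially weighted variance is assumed, and the function names are mine:

```python
def weighted_rate(rates, w=0.9):
    """Exponentially weighted mean of gas-rate measurements (eq. 7-4),
    newest measurement last, weight w = 0.9 as in the text."""
    k = len(rates)
    weights = [w ** (k - 1 - j) for j in range(k)]
    return sum(wt * m for wt, m in zip(weights, rates)) / sum(weights)

def filtered_rate(rates, w=0.9):
    """Weighted mean after rejecting points more than one (weighted)
    standard deviation from the first-pass mean, then recomputing (7-4)."""
    m_bar = weighted_rate(rates, w)
    k = len(rates)
    weights = [w ** (k - 1 - j) for j in range(k)]
    var = sum(wt * (m - m_bar) ** 2 for wt, m in zip(weights, rates)) / sum(weights)
    sd = var ** 0.5
    kept = [m for m in rates if abs(m - m_bar) <= sd] or rates
    return weighted_rate(kept, w)
```

A single spurious reset time inflates the plain weighted mean but is discarded by the one-σ screen, which is the error-removal role described above.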

The temperature, T, for the sampling period was calculated by a

simple average of the temperatures measured at each meter reset (see

Appendix C) after skipping the first ten.

    T̄ = [1/(Z-10)] Σ_(i=11)^Z T_i                            (7-6)

The hydraulic residence time was calculated by

    HRT = V Σ_(i=1)^Z Δt_i / (Z N r_f)                       (7-7)
where V is the liquid volume of the reactor.

7.2. Algorithm

The model equation for the experimental runs was

    y(i) = a y(i-1) + b u(i-1) + c u²(i-1) + d = θᵀh_i       (7-8)

where y(i) is the gas measurement from equation (7-4) scaled with the

starting steady-state gas rate and u(i) is the temperature average from

equation (7-6) scaled with the starting temperature. At each sample,

the algorithm receives a new measurement and estimates new parameters

using the recursive least squares routine detailed in Chapter 5. This estimator is similar to the method proposed by Fortescue et al. [22] but

includes two modifications. The first involves resetting the covariance

matrix when the a posteriori error becomes significantly high (see

section 3.2). Resetting allows for faster discarding of data; however, when it is applied to the most recent covariance matrix, it discards all

data. This results in periods where parameter estimates will be poor

until sufficient data is retrieved. To avoid this problem, the second

modification of applying the forgetting factor to a previous covariance

matrix is included (see Chapter 5). In this alteration, the n most

recent data, where n is the number of parameters to be estimated, are

not discarded.

The step size variable, h, from Chapter 3 is used to determine the

confidence in the new estimated parameters. This variable is calculated

from equations (3-17) and (3-18) with two additions. The saturation

parameter of equation (3-17), a, was constant as opposed to the variable

functionality of Appendix A. This change was imposed to simplify the

algorithm. In addition, the filtered a posteriori error, v, was not

allowed to be greater than 100 a. Errors larger than this value would

cause h to be 1 for extended periods.

As the a posteriori error drops, h will approach 0 indicating

increasing confidence in the predicted model. If h drops below HOFF,

the model parameters are considered to have converged and updating is

stopped (i.e., the parameter estimator is turned off until h rises above HOFF).


The predicted optimum temperature is calculated using the new

parameters from equation (3-14).

    T̂ = -b̂/(2ĉ)                                              (7-9)

Here, ^ indicates estimated values. Likewise, the necessary condition

for optimality from equation (3-23) is

    ĉ < 0                                                    (7-10)

After calculating the above values, the algorithm selects a new set

point temperature. In simulations presented in Chapter 3, this was a

relatively simple task but an actual real-time process requires a much

more complicated scheme. Certain limits must be placed on the changing

of the temperature set point.

A minimum movement limit was placed on the temperature set point

change for two reasons. First, the temperature change cannot be less

than the accuracy of the controller. Second, the set point change

should be large enough to produce a response in the gas measurement that

will not be masked by process noise. This limit was determined to be 0.5°C. The resolution of the controller further constrained the precision to 0.1°C. In other words, temperature had to be set in increments of 0.1°C.

The sensitivity of the process required a limit to be placed on the

maximum allowable set point change. Large changes in temperature could

shock the culture and produce an unfavorable response [49]. This value was set at 2°C.

In addition, these bounds made the implementing of the control law

of equation (3-16) impractical. A new strategy was developed using four

control modes to adjust the temperature set point. A schematic is shown

in Figure 7-1.


    path 1: If h < H1 and |â| < 1
    path 2: If h > H1 or |â| > 1
    path 3: If h < HOFF and T_base = T̂_opt
    path 4: If h > HOFF

Figure 7-1. Schematic diagram of experimental temperature set point adjustment.
The first mode, INIT, involves initialization and is used only in

the first six intervals of the optimization run. In this section of the

algorithm, the temperature is randomly excited about a base temperature.

T (i) = T + NSEQ(i) (7-11)
sp base

Here, Tsp is the setpoint at sample i, Tbase is the base (starting)

temperature, and NSEQ(i) is the ith member of a discrete white noise sequence (zero mean, 0.5°C standard deviation). Basically, this section is used to acquire reasonable estimates before attempting to locate the optimum.


After initialization the algorithm moves from INIT to the identifi-

cation section, IDENT. The control law of this mode is the same as

equation (7-11) and is used when confidence in parameter estimation is

low. The step size variable, h, described above is utilized as an

indication of estimation confidence. In other words when h is close to

1, temperature set point is adjusted in the IDENT mode.

When h drops below a predetermined value, H1, and the model equation is stable (i.e., |â| < 1), the algorithm uses the MOVE mode to

change the reactor temperature. This is accomplished by changing Tbase

in equation (7-11) and reducing the maximum noise to ±0.5°C. The magnitude of this change further depends upon h. If h is less than H2, then T_base is changed by 2°C; otherwise the change is 1°C. The direc-

tion of change is determined by the optimality condition (7-10). If the second derivative is less than 0, T_base is moved toward the calculated optimum of equation (7-9). For a positive second derivative, equation (7-9) is the predicted minimum and the move is made away from this point.


From the MOVE mode, the algorithm can transfer control back to the IDENT mode in the event that h > H1 or |â| > 1. However, if h drops below the estimator cutoff value, HOFF, and T_base has converged to the pre-

dicted optimum, the algorithm removes the random excitation and main-

tains the temperature at the predicted optimum. This mode is called

CONV and is used to maintain the temperature and monitor h. Should h

rise above HOFF, set point adjustment will switch to the IDENT mode.

The algorithm can be summarized in the following steps:

STEP 1: Obtain new gas measurement and temperature set point using

equations (7-4) to (7-6)

STEP 2: Use Estimator of Chapter 5 to estimate new parameters

STEP 3: Calculate h from equations (3-17) to (3-19). If h < HOFF for three sampling intervals, do not update parameters and set the forgetting factor to one for the next sample

STEP 4: Calculate new set point using equation (7-11) and Figure 7-1

STEP 5: Go to STEP 1

Algorithm parameters are given in Table 7-1.
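The mode logic of Figure 7-1 and the MOVE-step rules can be sketched as a small state machine (thresholds from Table 7-1; the function names and the exact encoding of the transitions are my reading of the figure, not the thesis code):

```python
H1, H2, HOFF = 0.3, 0.3, 0.1   # thresholds of Table 7-1

def next_mode(mode, h, a_hat, at_optimum):
    """Mode transitions of Figure 7-1 (INIT covers the first six
    intervals and then behaves like IDENT)."""
    if mode in ("INIT", "IDENT"):
        return "MOVE" if (h < H1 and abs(a_hat) < 1) else "IDENT"
    if mode == "MOVE":
        if h < HOFF and at_optimum:
            return "CONV"                 # hold at the predicted optimum
        if h > H1 or abs(a_hat) >= 1:
            return "IDENT"                # confidence lost: re-identify
        return "MOVE"
    return "IDENT" if h > HOFF else "CONV"   # CONV: monitor h

def move_base(T_base, T_opt, h, c_hat):
    """MOVE-mode base-temperature update: step 2 C when h < H2, else 1 C;
    toward the predicted optimum if c_hat < 0 (a maximum), away from it
    otherwise; rounded to the controller's 0.1 C resolution."""
    step = 2.0 if h < H2 else 1.0
    toward = 1.0 if (T_opt > T_base) == (c_hat < 0) else -1.0
    return round((T_base + toward * step) * 10) / 10
```

The random excitation of equation (7-11) would then be added on top of the returned base temperature in the INIT, IDENT, and MOVE modes.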

7.3. Thermophilic

The first run was performed at a starting temperature of 49°C. The

reactor yield was 11.03 liters of methane per liter of feed. COD con-

version percentage (determined from equation (7-2)) was 81%. The sampl-

ing period was set at 50 gas meter resets. Since the gas meter was

calibrated at 21 mls, this entailed allowing 1.05 liters of methane to

be measured for each sample. The starting steady-state total VFAs were

Table 7-1. Algorithm parameters for experimental runs

    a    = 0.25    parameter for calculating h
    σ₀   = 10-     forgetting factor parameter
    P₀   = 10      initial covariance matrix
    H2   = 0.3
    H1   = 0.3
    HOFF = 0.1

420 mg COD/L of reactor volume. Acetate and propionate were predominant at concentrations of 260 mg/L and 30 mg/L, respectively.

Results of the run are shown in Figures 7-2 to 7-5. The total run

time was 10 days. After the initialization period, the algorithm pre-

dicted a minimum and increased temperature. Temperature set point rose

from 50°C to 59°C while the methane gas production increased 32%.

Consequently, the HRT decreased from 15 days to 10 days. At this point,

the VFA concentration had only risen slightly to 550 mg COD/L.

However, as the temperature rose to 60°C, the gas production

dropped dramatically, which caused the HRT to increase to 30 days. The

decrease in the feeding rate aided in keeping the VFAs from rising

appreciably. At sample 17, the total VFA concentration was 1200 mg COD/L. Higher-weight acids also appeared at this time.

Eventually, the reactor restabilized and the gas production rose

and optimized at a rate of approximately 3.3 mls/min (4.75 liters of

methane per day). The predicted optimum was 58°C. It should be noted,

however, that this final rate was below the starting value. This lower

production was probably due to the inhibition caused by the rise in

concentration of the higher weight VFAs, such as isovalerate and iso-

butyrate. In addition, the digester became very unstable. A one-degree increase in temperature, made manually at the end of the experiment to verify an optimum, resulted in another collapse in gas production.

Although the constant-yield operating mode did not prevent the
appearance of higher-weight VFAs, it did prevent the total failure of
the digester. In fact, within three weeks the digester was operating as
it had at the start of this run.
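The coupling between gas production and HRT reported above is characteristic of constant-yield operation, in which a fixed feed volume is delivered per metered volume of methane, so the hydraulic retention time varies inversely with the gas production rate. The following is a minimal sketch of that relationship; the baseline values come from the run described in the text, while the function itself is purely illustrative and not the thesis control code.

```python
# Under constant-yield feeding, feed volume is proportional to metered
# methane, so HRT scales inversely with the gas production rate.
# Baseline values below are from the run in the text; the function is
# an illustrative sketch, not the actual control algorithm.

BASE_HRT_DAYS = 15.0   # starting hydraulic retention time
BASE_RATE = 1.0        # starting methane production (normalized)

def hrt_for_rate(rate, base_rate=BASE_RATE, base_hrt=BASE_HRT_DAYS):
    """HRT implied by a constant-yield policy at a given production rate."""
    return base_hrt * base_rate / rate

# A 32% rise in production shortens the HRT toward the observed
# 10 days; a collapse to half the baseline doubles it toward the
# observed 30 days.
print(round(hrt_for_rate(1.32), 1))  # ~11.4 days
print(hrt_for_rate(0.5))             # 30.0 days
```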

Figure 7-2. Experimental methane production for first thermophilic run.

Figure 7-3. Experimental hydraulic retention time for first thermophilic run.

Figure 7-4. Actual reactor temperature and predicted optimum temperature for first thermophilic run.

Figure 7-5. Experimental volatile fatty acid concentration for first thermophilic run. (Other acids are a sum of isobutyrate, n-butyrate, isovalerate, and n-valerate.)














A probable reason for the rise in VFAs was the effect of instan-

taneous carbon dioxide evolution on the gas meter. That is, the gas

meter measures methane accurately only when the percentage of methane

remains constant (see Appendix C). The acidogenic population is respon-

sible for the majority of carbon dioxide produced and has substantially

faster kinetics than methanogens. Therefore, a rise in temperature will

result in a temporary increase in the carbon dioxide percentage in the

effluent gas. This would be especially unsatisfactory near the methano-

genic optimum where a rise in temperature would cause a drop in methane

production and a possible increase in carbon dioxide production.
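The measurement error described above can be illustrated with a short sketch. It assumes, consistent with the explanation in the text (the exact metering scheme is in Appendix C, not reproduced here), that the meter totals all biogas and methane is inferred from an assumed, fixed methane fraction; a transient burst of CO2 then inflates the apparent methane reading. All numeric values are illustrative assumptions, not measured data.

```python
# The gas meter totals all biogas; methane is inferred by multiplying
# by an assumed, constant methane fraction. If acidogens transiently
# evolve extra CO2 after a temperature step, total gas rises while true
# methane does not, so the inferred methane is inflated.
# All numbers below are illustrative assumptions, not measured values.

ASSUMED_CH4_FRACTION = 0.55  # assumed fraction used to infer CH4

def inferred_methane(total_gas_ml, assumed_frac=ASSUMED_CH4_FRACTION):
    """Methane volume (ml) the controller would infer from total gas."""
    return total_gas_ml * assumed_frac

true_ch4 = 550.0                      # true CH4 in both scenarios (ml)
steady = inferred_methane(1000.0)     # 550.0 ml: no error at steady state
burst = inferred_methane(1200.0)      # 660.0 ml: 200 ml extra CO2 metered
error = burst - true_ch4              # 110.0 ml apparent-CH4 overestimate
print(steady, burst, error)
```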

To account for this problem, a second run was started under similar
conditions with a slight modification: at the beginning of each
sampling interval, the feeding rate was cut in half for the first five
meter resets. It was hoped that this modification would slow the
metabolism of the acidogenic bacteria. This also changed the reactor
COD conversion percentage to 85%. As a result, the starting total VFAs
were lowered to 300 mg COD/l from the value in the previous run.
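The modified schedule can be sketched as a per-reset feed profile. The half-rate window of five resets and the 50-reset sampling interval are from the text; the normalized baseline rate and the function name are illustrative assumptions.

```python
# Per-reset feed schedule for the modified second run: the feed rate is
# halved for the first five of the 50 meter resets in each sampling
# interval, then restored. The baseline rate of 1.0 is a normalized,
# illustrative value; the schedule shape follows the text.

RESETS_PER_SAMPLE = 50
HALF_RATE_RESETS = 5

def feed_schedule(base_rate=1.0):
    """Feed rate applied at each meter reset within one interval."""
    return [base_rate * 0.5 if i < HALF_RATE_RESETS else base_rate
            for i in range(RESETS_PER_SAMPLE)]

sched = feed_schedule()
# Total feed per interval drops by 2.5 reset-equivalents (5%).
print(sum(sched))  # 47.5 instead of 50.0
```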

The results for this run are shown in Figures 7-6 to 7-9. As in
the previous run, the methane gas production rose dramatically and then
fell as the temperature rose to 60°C. However, with the new
modification of initially cutting the feed rate, the VFAs rose only
slightly, to 500 mg COD/l, with no appearance of higher-weight VFAs.
Unfortunately, after sampling period 21, a power failure caused the
reactor to shut down. After restarting, the reactor returned to normal
operation within 24 hours. The feeding modification thus appeared to
have the advantage of increased stability. However, time did not permit
a third run. The total run time was 6 days.


Figure 7-6. Experimental methane production for second thermophilic run.

Figure 7-7. Experimental hydraulic retention time for second thermophilic run.

Figure 7-8. Actual reactor temperature and predicted optimum temperature for second thermophilic run.

Figure 7-9. Experimental volatile fatty acid concentration for second thermophilic run. (Other acids are a sum of isobutyrate, n-butyrate, isovalerate, and n-valerate.)