Mathematical Algorithms For Equation Solutions: Conclusions
CONCLUSIONS
· The equations for each process will be written in the form G(X, P) = 0 and solved
simultaneously. This may be slower than the approach used in SSSP, but offers more
flexibility. The SSSP approach of writing semi-explicit solutions will be used only if
computation speed of the suggested approach is unacceptable. Early indications are that
the computational speed will be acceptable.
· The models will aim to build on the existing structure used in STOAT. Specifically, where
possible the IAWQ COD-based models will be used. The IAWQ have recently introduced a
replacement for Model #1 (CN removal), called Model #3. Although there is little experience of
using Model #3, and little evidence yet of the benefits of switching to it, the expectation is that
the IAWQ bandwagon will lead to it becoming an accepted model. For this reason both #1 and
#3 will be offered for CN removal, with Model #2d (a recent replacement for Model #2) for CNP
removal.
INTRODUCTION
This is a discussion document on approaches to setting up and solving equations within the
design program (provisional name Plan-It STOAT).
We divide the mathematical algorithms into two parts: the first is how we solve the equations;
the second is what equations we are solving.
SOLUTION APPROACH
Each process can be written as a set of equations,
g(x, p) = 0
where x is the set of unknowns to be calculated, and p is the set of known parameters. We
require that (i) there must be as many equations (g) as there are unknowns (x) and (ii) that all the
equations must be independent of each other, so that a unique solution can be calculated.
The complete flowsheet can then be assembled into a single system,
G(X, P) = 0
Either of these two forms (the process-by-process g or the assembled G) has the advantage that
the equations can be written in exactly the same form as the differential equations currently used
in STOAT: instead of integrating the dynamic system x’ = g(x, p), we seek the steady state where
x’ = 0, i.e. g(x, p) = 0.
The equations can be solved in one of three ways:
· Globally, where we solve the complete system G(X, P) = 0. This has the greatest flexibility. As
far as the equation solver is concerned there is no difference between X and P, so that we
can solve for effluent quality (X) given a design (P), or for a design given effluent quality. The
constraints (i) and (ii) listed above still apply - there may be some design parameters that
cannot be calculated given a set of effluent requirements. Although this approach has the
greatest flexibility, it also has the greatest chance of failing to find a solution.
· Locally, where we solve g(x, p) = 0 for each process in turn. If there are recycles then we will
have to repeat the sequence of solutions until the effect of these recycles has settled to a
steady solution. This has almost as much flexibility as the previous approach, but by reducing
the number of equations to be solved at any one time it may have better convergence
properties. (A minimal sketch of this local form is given after this list.)
· Explicitly, where we rewrite the equations in the form x = f(p) and solve them. For each
possible design case we will need to write a similar set of equations, (p_i, x_j) = f_1(p_k, x_l),
where the subscripts mean that there will be different combinations of design parameters and
effluent variables being calculated for each possible design case. This is commonly the most stable
solution method, but with the problems that (i) there is no flexibility - if you want to solve a
different design problem from those available, a new set of equations will have to be written; (ii)
it may not be possible to find an explicit set of equations, so that the local solution approach
may be required for at least some combinations of design/effluent requirements; and (iii) the
writing of multiple cases increases the chance for error.
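By way of illustration, the fragment below is a minimal sketch of the local form, assuming Python
with SciPy purely for convenience (no implementation language is implied) and a toy first-order
well-mixed tank rather than any of the real STOAT or IAWQ models. The point it makes is that the
residual g(x, p) handed to a black-box root finder is the same function that would serve as the
right-hand side of the dynamic model.

# Minimal sketch of the local approach: one process written as g(x, p) = 0 and
# handed to a black-box root finder. Python/SciPy and the toy first-order tank
# are illustrative assumptions only.
from scipy.optimize import root

def g(x, p):
    """Residual for a single well-mixed tank with first-order decay."""
    S, = x                                      # unknown effluent concentration
    Q, V, S0, k = p["Q"], p["V"], p["S0"], p["k"]
    # Mass balance (in - out - reaction) per unit volume; zero at steady state.
    return [Q * (S0 - S) / V - k * S]

p = {"Q": 1000.0, "V": 250.0, "S0": 300.0, "k": 2.0}   # illustrative numbers
sol = root(g, x0=[p["S0"]], args=(p,))                 # Powell hybrid by default
print(sol.x, sol.success)                              # expect S = 200.0

# The same residual doubles as the right-hand side of the dynamic model,
# dS/dt = g(S, p), which is the reuse of the STOAT form noted above.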
SSSP uses this third method. The IAWQ model is written twice, once in a form suitable for solving
as a system of differential equations, and again in a form more suitable for solving for steady
state. The SSSP implementation is reproduced in part below. The code is included to give an idea
of the checks that can be coded in the explicit solver, and of some of the numerical difficulties -
here, there are several quadratic equations. Steady-state solvers can converge to the false root,
giving negative solids, COD, etc. The explicit method allows us to code checks so that the correct
root is chosen. At the same time, the complexity of the method is apparent (see footnote 1), and
the difficulty of writing a new version of this each time we want to change the set of knowns and
unknowns becomes easier to appreciate.
1. Calculate the steady-state solution for a single well-mixed tank, ignoring the effects of the RAS.
2. For each stage set up an estimate of the final solution from [1], including now the effects of
RAS.
Footnote 1: All computer software will look complex and messy until you begin to work closely
with it. Do not let the visual complexity of the SSSP code make you feel that alternatives
will necessarily be simpler. What will simplify the code is that it can be structured as
simple equations, with a black-box solver left to handle the solution. What makes the SSSP
code complex is that the solution method and the equations to be solved are intertwined. The
black-box solver will, in its turn, be messy and difficult to fathom. But it is code taken from
established sources that should not need to be touched, and therefore its complexity can
be ignored.
cq:=ksp[3]*(msin[11]+ksp[15]*mtc[nk])/(mtc[nk]+fout[nk]);
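{ The next line takes the positive root of x*x - bq*x - cq = 0; the other root
  is negative and would give a physically meaningless concentration. }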
tc[11,nk]:=(bq+sqrt(bq*bq+4*cq))/2;
END;
tc[1,nk]:=(fin[nk]*tc[1,nk-1]+fed[1,nk]
+recyc[nk]*tcr[1]+recir[nk]*tc[1,nr[nk]])/(vr[nk]*(fout[nk]+(p[4]-p12)/
tc[1,nk]));
bq:=(msin[7]-ksp[13]*p12+p[6]-st3*ksp[16]*o2a[nk]*tc[2,nk])/fout[nk]-
ksp[17];
cq:=ksp[17]*(-ksp[13]*p12+p[6]+msin[7])/fout[nk];
tc[7,nk]:=(bq+sqrt(bq*bq+4*cq))/2;
(fout[nk]+ksp[20]-ksp[16]*o2a[nk]*tc[7,nk]/(ksp[17]+tc[7,nk])));
tc[12,nr[nk]])/vr[nk]-st1*p[1]+st2*p[2]-st3*p[3]+p[6]/14)/fout[nk];
END;
tcr[3]:=tc[3,ntk]*conc;
UNTIL(ABS(oxe-tc[3,ntk])<0.1)AND(ABS(oak-tc[12,ntk])<0.01);
REPEAT {Start of loop #3}
oxi:=tc[4,ntk];
FOR nk:=1 TO ntk DO
tc[4,nk]:=(fed[4,nk]+recyc[nk]*tc[4,ntk]*conc+fin[nk]*tc[4,nk-1]
+recir[nk]*tc[4,nr[nk]])/(fout[nk]*vr[nk]);
UNTIL(ABS(oxi-tc[4,ntk])<0.1);
EQUATION SOLVERS
The preceding section covered the different ways of writing the equations; this section covers the
various equation solvers that can be applied to them:
· Newton’s method. This is a classical method, but it has a small region of convergence - for
the sort of equations that we are likely to encounter it is likely to produce the wrong
answer unless our initial guess at the solution is good. Constrained Newton methods,
where we specify that, for example, no values can be negative, have in my experience been
more successful at finding a good solution to the system of equations. But the alternatives
described below, Powell’s method and homotopy methods, are likely to be more powerful.
· Quasi-Newton methods, designed to be faster than the Newton method, but with a small risk of
being less stable.
· Powell’s dog-leg, which can be more stable than either Newton’s method or the quasi-Newton
methods. Powell’s dog-leg is based on a quasi-Newton method with a steepest-descent
algorithm to provide some additional stability. For those used to non-linear regression, it is
similar in concept to Marquardt’s method.
· Homotopy methods, where any of the above methods can be embedded in a differential
equation solver. This is commonly used when there are severe difficulties in finding a good
solution to the set of equations. We know that the set of equations we wish to solve is stable
when solved as differential equations. Homotopy methods can be seen as a half-way house
between pure algebraic solution (fast, but with convergence problems) and differential solution
(slow, but with few convergence problems). A sketch combining Powell’s method with this idea
follows below.
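To make the solver options concrete, the sketch below - again assuming Python with SciPy, and a
toy substrate/biomass pair standing in for the full G(X, P) = 0 - first attempts a direct algebraic
solve with SciPy's 'hybr' method (its interface to MINPACK's modified Powell hybrid) and, if that
fails or returns negative concentrations, integrates X' = G(X, P) for a while to generate a better
starting point before re-solving. All parameter names and values are illustrative.

# Sketch only: Powell-hybrid root finding with a pseudo-transient fallback.
import numpy as np
from scipy.optimize import root
from scipy.integrate import solve_ivp

def G(X, P):
    """Toy steady-state residuals: substrate S and biomass XB in one tank."""
    S, XB = X
    mu = P["mu_max"] * S / (P["Ks"] + S)           # Monod growth rate
    D = P["Q"] / P["V"]                            # dilution rate
    # The nominal influent biomass XB0 rules out the zero-biomass (no treatment)
    # solution, as suggested in the text above.
    return np.array([
        D * (P["S0"] - S) - mu * XB / P["Y"],      # substrate balance
        D * (P["XB0"] - XB) + (mu - P["b"]) * XB,  # biomass balance
    ])

def solve_steady_state(P, X0):
    # 1. Try the algebraic solve directly (MINPACK's modified Powell hybrid).
    sol = root(G, X0, args=(P,), method="hybr")
    if sol.success and np.all(sol.x >= 0.0):
        return sol.x
    # 2. Otherwise integrate X' = G(X, P) for a while - the equations are known
    #    to be stable as differential equations - and restart from the end point.
    ode = solve_ivp(lambda t, X: G(X, P), (0.0, 50.0), X0, method="BDF")
    return root(G, ode.y[:, -1], args=(P,), method="hybr").x

P = {"Q": 100.0, "V": 500.0, "S0": 200.0, "XB0": 1.0,
     "mu_max": 6.0, "Ks": 20.0, "Y": 0.67, "b": 0.62}
print(solve_steady_state(P, X0=np.array([200.0, 10.0])))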
MODEL EQUATIONS
Reuse of the STOAT equations wherever possible is recommended.
· Secondary clarifiers: One of the following: Takacs (currently used in STOAT); Diehl (better
mathematical solution to the same physical problem as Takacs) or Dupont (claimed by
IAWQ to be a better physical solution to the physical problem).
· Activated sludge aeration basin: IAWQ models #1, #2d and #3. My experience with solving
Monod models using the approach described above has not been encouraging, but I last
pursued this 10 years ago. At that time the problem was that there is a very strongly
attracting solution corresponding to zero biomass and no treatment. The success of
integrating to a solution stopped this line of development, but specifying an influent with
nominal amounts of biomass prevents the zero-biomass solution from being feasible.
Ron’s notes from the October meeting query the validity of Monod-based models where the
reaction is taken as being first-order in the biomass. We do not dispute that models which
use VSS to approximate the biomass component are of limited usefulness. But we feel
that the existing Monod-based models can successfully address the arguments raised by
Ron.
V (μ - k) X_H = Q_w X_H
V (μ/Y - k) X_H = Q (S_0 - S)
where for simplicity we assume that sludge is wasted from the aeration basin, and that all
the biomass COD on breakdown is biodegradable. We also ignore the distinction between
particulate and soluble COD. Here μ = μ_H S / (K_S + S) is the Monod growth rate, μ_H the
maximum growth rate, k the decay rate, Y the yield, X_H the biomass, Q the flow and Q_w the
wastage flow. The biomass balance gives μ - k = Q_w / V = 1 / SRT (since sludge is wasted
directly from the basin, SRT = V / Q_w); substituting the Monod expression and rearranging
gives the effluent substrate concentration
S = K_S (1 + k SRT) / (SRT (μ_H - k) - 1)
At very long SRTs this equation predicts (for the default values in the IAWQ model) that the
effluent COD will tend to 2 mg/l.
[Figure: predicted effluent COD (mg/l) against sludge age (0-20 days).]
The most important feature is that the model predicts that at low sludge ages (and low
biomass concentrations) there is a rapid reduction in effluent COD with increasing sludge
age, but that by the time the sludge age reaches 2-3 days there is no significant further
improvement in the effluent quality (a short numerical check is sketched after this list of
models). A model that assumes that substrate removal is first order in the biomass therefore
produces predictions that are in line with Ron’s work, where substrate removal was not a
strong function of the biomass concentration.
Conceptually, therefore, the Monod-based model can be seen as fitting in with Ron’s first
principle: that the model predictions should show signs of agreeing with what we know of
the behaviour of current treatment systems.
· Biofilm systems: The biofilm equations used in STOAT, with the reaction kinetics taken from
IAWQ #1 and #3 only. Although biofilm systems can be engineered for bio-P removal this is
still neither common nor trusted.
The STOAT biofilm equations are based on the diffusive transport model of Wanner and co-
workers, and make no assumption about the reaction model. An alternative approach is to
use the half-order reaction kinetics popularised by Harremoës and his co-workers. The half-
order method is an approximation to the true situation that has been successful where the
reaction is masked by diffusion limitations. Where data exist to calibrate the model,
Harremoës’ approach is faster - but my reading of the literature is that it will require site-
specific calibration. Although Wanner’s method would also require calibration, it has a
stronger physical basis. The relevant parameters would still not be well understood by most
engineers, especially in terms of sensible values. A final approach would be to use Logan’s
filter model. This is appropriate only for structured media, and would therefore not be
relevant to BAFs, and may not be relevant to RBCs. WRc has not used Logan’s model,
having preferred to adopt Wanner’s more general model.
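As a quick check on the steady-state expression derived for the activated sludge case above, the
short calculation below (Python assumed) evaluates the predicted effluent substrate concentration
over a range of sludge ages. The parameter values are the commonly quoted IAWQ Model #1
defaults at 20 degrees C and should be treated as illustrative.

# Numerical check of S = K_S (1 + k SRT) / (SRT (mu_H - k) - 1) derived above.
mu_H = 6.0    # maximum heterotrophic growth rate, 1/d (illustrative ASM1 default)
K_S = 20.0    # half-saturation coefficient, g COD/m3
k = 0.62      # decay rate, 1/d

def effluent_cod(srt):
    """Predicted readily biodegradable effluent COD at a given sludge age (days)."""
    return K_S * (1.0 + k * srt) / (srt * (mu_H - k) - 1.0)

for srt in (0.5, 1, 2, 3, 5, 10, 20, 100):
    print(f"SRT = {srt:5.1f} d   S = {effluent_cod(srt):5.1f} mg/l")
# The values fall rapidly up to a sludge age of 2-3 days and then flatten out,
# approaching the limit K_S * k / (mu_H - k), about 2.3 mg/l, at very long SRTs.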
RECYCLE LOOPS
In typical sewage treatment systems recycle flows are small in comparison to the main flow. Even
activated sludge systems typically have a recycle flow of 100% or less of the main flow. Under these
conditions simple repeated substitution has proven to be a successful way of converging rapidly to
a solution. This will therefore be used first.
Repeated substitution rarely fails, but can be slow. As stated, our experience is that convergence
is usually rapid because of the recycle conditions common at sewage works. Where convergence
is slow then an alternative method is secant acceleration (more commonly, a slight variant called
Wegstein acceleration), typically implemented as a deferred acceleration - in one popular
chemical engineering approach, 6 repeated substitution steps are used, with every seventh step
using the secant acceleration. My experience with Wegstein acceleration, using Simsci’s old
Process simulator, was that the combination of Wegstein acceleration and repeated substitution
solved every recycle problem I came across.
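A minimal sketch of this scheme is given below, assuming Python and a generic function F that
takes a guessed tear-stream vector and returns its recalculated value after one pass through the
flowsheet. The every-seventh-step pattern and the bounds on the acceleration factor follow the
description above but are adjustable; none of this is tied to any particular simulator.

# Sketch only: repeated substitution with a Wegstein (bounded secant) step
# applied every n_accel-th iteration. F is any flowsheet pass x -> F(x).
import numpy as np

def solve_recycle(F, x0, tol=1e-6, max_iter=200, n_accel=7, q_min=-5.0, q_max=0.0):
    x = np.asarray(x0, dtype=float)
    x_prev = fx_prev = None
    for it in range(1, max_iter + 1):
        fx = np.asarray(F(x), dtype=float)
        if np.max(np.abs(fx - x)) < tol:
            return fx
        if it % n_accel == 0 and x_prev is not None:
            # Wegstein step: element-wise secant slope s, acceleration factor q.
            denom = np.where(np.abs(x - x_prev) > 1e-12, x - x_prev, 1e-12)
            s = (fx - fx_prev) / denom
            q = np.clip(s / (s - 1.0), q_min, q_max)    # bounded for stability
            x_new = q * x + (1.0 - q) * fx
        else:
            x_new = fx                                  # plain repeated substitution
        x_prev, fx_prev = x, fx
        x = x_new
    raise RuntimeError("recycle loop failed to converge")

# Illustrative use: a made-up linear flowsheet pass with a 50% recycle effect.
F = lambda x: 0.5 * x + np.array([10.0, 2.0])
print(solve_recycle(F, [0.0, 0.0]))   # converges to [20, 4]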
Where recycle loops are problematic there will also need to be a ‘tear set’ algorithm, designed to
locate the best ordering sequence for the recycle system. There is an extensive literature on the
selection of tear sets, and the aim will be to use the published method from the ASCEND IV
simulator. This was used until recently by AspenTech’s Aspen simulator.
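For illustration only, the fragment below shows a generic greedy tearing heuristic - repeatedly tear
the stream that breaks the largest number of remaining recycle loops - written in Python with the
networkx package. This is not the ASCEND IV method referred to above, and the stream names are
made up; it is simply a sketch of the kind of calculation a tear-set algorithm performs.

# Sketch of a greedy tear-set selection: not the ASCEND IV algorithm.
import networkx as nx

def choose_tear_streams(streams):
    """Pick streams (directed edges) to tear so that no recycle loop remains."""
    G = nx.DiGraph(streams)
    # Express each simple cycle as the set of streams it contains.
    cycles = []
    for nodes in nx.simple_cycles(G):
        cycles.append({(nodes[i], nodes[(i + 1) % len(nodes)])
                       for i in range(len(nodes))})
    tears = []
    while cycles:
        counts = {}                     # how many unbroken loops each stream is in
        for c in cycles:
            for e in c:
                counts[e] = counts.get(e, 0) + 1
        best = max(counts, key=counts.get)
        tears.append(best)
        cycles = [c for c in cycles if best not in c]
    return tears

# Illustrative flowsheet: primary -> aeration -> clarifier, with RAS and liquors.
streams = [("primary", "aeration"), ("aeration", "clarifier"),
           ("clarifier", "aeration"),   # RAS recycle
           ("clarifier", "primary")]    # returned liquors
print(choose_tear_streams(streams))     # tears the aeration -> clarifier stream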
UNCERTAINTY
This is referred to as ‘fuzziness of data’ in the minutes of the October meeting.
The suggestion is that parameters such as flow, strength and calibration parameters can be
assigned a probability function for their value, and the simulator run many times, sampling from
that probability function - Monte-Carlo simulation. The most ‘intuitive’ probability functions, for
many people, are the following:
· Normal distribution, where we specify an expected value and a standard deviation. For this
application it may be simpler to ask the user to specify the expected value (mean) and an
expected upper bound, and internally to treat the upper bound as a 99%-ile. It is possible
to generate negative values with this approach. In one of Rod’s papers supplied for the
October meeting, taken from a Montgomery Watson report, Montgomery Watson handled
this by truncating the distribution at upper and lower boundaries. My gut feeling is that this
approach requires that the distribution then be renormalised, to ensure that the total
probability equals 1.0; Montgomery Watson’s approach, from a casual reading, ignored this
renormalisation, so that the distribution used differs from the intended truncated
distribution. (A sampling sketch is given after this list.)
· The log-Normal distribution can be used to avoid negative values. This guarantees positive
values, but also prohibits zero from being a value. Like the Normal distribution there is no
upper limit, so an upper bound with truncation beyond that bound could be used. Note,
however, that the log-Normal distribution has a heavier upper tail than the Normal
distribution, so an upper bound may still be worth applying.
· Weibull distribution. At this time I do not know what the Weibull distribution is, but there are
references in the wastewater modelling literature suggesting that it is a strong candidate for a
reasonable predictor of the distribution of parameter values. When we get to this stage in
algorithm development we can locate the exact mathematical form. Most engineers will not
know what the Weibull distribution is, whereas the others are likely to be known; this lack of
familiarity suggests that the Weibull distribution may best be left for a later update, once
people have come to grips with the routine use of Monte-Carlo design and are looking for
other ways of describing the uncertainty in their data.
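The sketch below (Python with NumPy assumed) illustrates the sampling side of this: the
user-supplied mean and upper bound are converted to distribution parameters by treating the
upper bound as a 99%-ile, the Normal case is truncated at zero by rejection sampling (which keeps
the truncated distribution properly normalised), and a placeholder run_model function stands in for
a full steady-state solution of the flowsheet. All names and numbers are illustrative.

# Sketch only: Monte-Carlo sampling of uncertain inputs for a design run.
import numpy as np

rng = np.random.default_rng(1)
Z99 = 2.326   # standard Normal 99th percentile

def sample_truncated_normal(mean, upper_99, size):
    """Normal with the given mean and 99%-ile, truncated at zero by rejection.

    Rejection sampling implicitly renormalises the truncated distribution,
    avoiding the inconsistency noted for simple truncation above."""
    sigma = (upper_99 - mean) / Z99
    out = np.empty(0)
    while out.size < size:
        draw = rng.normal(mean, sigma, size)
        out = np.concatenate([out, draw[draw > 0.0]])
    return out[:size]

def sample_lognormal(mean, upper_99, size):
    """Log-Normal matched to the same mean and 99%-ile (strictly positive)."""
    # exp(m + s*s/2) = mean and exp(m + Z99*s) = upper_99; eliminating m gives a
    # quadratic in s, of which the smaller positive root is the sensible choice.
    a, b, c = 0.5, -Z99, np.log(upper_99 / mean)
    s = (-b - np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    m = np.log(mean) - 0.5 * s * s
    return rng.lognormal(m, s, size)

def run_model(flow, cod):
    """Placeholder for a full flowsheet solution; returns a notional effluent COD."""
    return 0.02 * cod + 0.001 * flow

flows = sample_truncated_normal(mean=10000.0, upper_99=15000.0, size=1000)
cods = sample_lognormal(mean=400.0, upper_99=700.0, size=1000)
effluent = run_model(flows, cods)                  # vectorised over the samples
print(np.percentile(effluent, [50, 95]))           # e.g. median and 95%-ile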
REFERENCES
For those interested in pursuing the literature of simulator design further(!), there are the following
books:
A Husain, 1986, Chemical Process Simulation, Wiley. The most recent, and the one with the
widest coverage, though this can be at the expense of readability. The only one to cover some
aspects of dynamic flowsheeting and some of the more recent methods.
A W Westerberg et al., 1979, Process Flowsheeting, Cambridge University Press. The classic book
for flowsheeting; readable as well.
Most of the work in this field has been published in the technical journals, particularly Computers
and Chemical Engineering and the American Institute of Chemical Engineers’ Journal. If the
maths above seem abstract, try these journals for really abstract maths.
The biofilm model of Wanner, modified by Reichert, is documented on the Web. Search for
Aquasim, which is on the ftp.eawag.ch site, and should be accessible through www.eawag.ch.
The documentation is provided as a Postscript file.