
Recurrent neural chemical reaction networks that approximate arbitrary dynamics

Alexander Dack^{a,1}, Benjamin Qureshi^{1}, Thomas E. Ouldridge^{1}, Tomislav Plesa^{a,2}

Keywords: Chemical Reaction Networks; Dynamical Systems; Artificial Neural Networks.


arXiv:2406.03456v1 [q-bio.MN] 5 Jun 2024

Abstract: Many important phenomena in chemistry and biology are realized via dynamical features
such as multi-stability, oscillations, and chaos. Construction of novel chemical systems with such
finely-tuned dynamics is a challenging problem central to the growing field of synthetic biology.
In this paper, we address this problem by putting forward a molecular version of a recurrent
artificial neural network, which we call a recurrent neural chemical reaction network (RNCRN).
We prove that the RNCRN, with sufficiently many auxiliary chemical species and suitable fast
reactions, can be systematically trained to achieve any dynamics. This approximation ability is
shown to hold independent of the initial conditions for the auxiliary species, making the RNCRN
more experimentally feasible. To demonstrate the results, we present a number of relatively simple
RNCRNs trained to display a variety of biologically-important dynamical features.

1 Introduction
Artificial neural networks (ANNs) are a set of algorithms, inspired by the structure and function
of the brain, which are commonly implemented on electronic machines [1]. ANNs are known for
their powerful function approximating abilities [2, 3], and have been successfully applied to solve
complex tasks ranging from image classification [4] to dynamical control of chaos [5]. Given the
success of ANNs on electronic machines, it is unsurprising that there has been substantial interest in
mapping common ANNs to chemical reaction networks (CRNs) [6–15], a mathematical framework used for modelling chemical and biological processes. In this paper, we say that a CRN is neural if it executes an ANN algorithm. Neural CRNs have been implemented experimentally using a range of molecular substrates [16–21]. As such, these networks, and other forms of chemically-realized computing systems [22–28], aim to embed computations inside biochemical systems, where
electronic machines cannot readily operate.
Current literature on neural CRNs has focused on solving static problems, where the goal is to take
an input set of chemical concentrations and produce a time-independent (static) output. These static
neural CRNs find application as classifiers [29,30] in symbol and image recognition tasks [10,12,31,32],
complex logic gates [11, 20], and disease diagnostics [17]. However, more intricate time-dependent
(dynamical) features can be of great importance in biology. In particular, under suitable conditions,
many biological processes can be modelled dynamically using ordinary-differential equations (ODEs).
In this context, fundamental processes such as cellular differentiation and circadian clocks are realized
via ODEs that exhibit features such as coexistence of multiple favourable states (multi-stability)
and oscillations [33–35]. In fact, even deterministic chaos, a complicated dynamical behaviour, is
thought to confer a survival advantage to populations of cells [35].
a. Email of co-corresponding authors: [email protected] or [email protected].
1. Department of Bioengineering and Imperial College Centre for Synthetic Biology, Imperial College London, Exhibition Road, London SW7 2AZ, UK.
2. Department of Applied Mathematics and Theoretical Physics, University of Cambridge, Centre for Mathematical Sciences, Wilberforce Road, Cambridge, CB3 0WA, UK.

Despite their promising potential in biology, neural CRNs trained to emulate such dynamical
behaviours remain largely unexplored in the literature. To bridge this gap, in this paper we present a novel neural CRN called the recurrent neural chemical reaction network (RNCRN) (see Figure 1).
We prove that RNCRNs can approximate the dynamics of any well-behaved target system of ODEs,
provided that there are sufficiently many auxiliary chemical species and that these species react
sufficiently fast. A major advantage of RNCRNs over some other comparable methods for designing
chemical systems [36,37] is that the initial concentrations for the auxiliary species are not constrained.
This fact is important, since high-precision experimental fine-tuning of initial concentrations is
infeasible [38].

[Figure 1 appears here: a reaction diagram of the RNCRN, showing the executive species X1 (with a production reaction of rate β1 and catalytic production/degradation by each perceptron Yj at rates |α1,j|) and the chemical perceptrons Y1, . . . , YM (with production, catalytic activation/inactivation, and dimeric degradation reactions at rates γ/µ, |ωj,i|/µ, |ωj,0|/µ and 1/µ).]

Figure 1: A visualisation of an RNCRN with an executive species X1 and a layer of chemical perceptrons Y1, . . . , YM. Chemical reactions, as defined in Section 2.1, are shown by horizontal black arrows. The curved black arrows show catalytic involvement of additional chemical species in a chemical reaction. The purple and orange arrows emphasise the interdependence of the executive species and the chemical perceptrons. The absolute value is used to emphasise that all reactions have positive rate constants; where an RNCRN parameter is required to be negative, this is achieved by using a degradation/inactivation reaction as opposed to a production/activation reaction. The explicit CRN is stated in Appendix A.

The paper is organized as follows. In Section 2, we present some background theory on CRNs and
ANNs. In Section 3, we introduce an RNCRN, and outline the main theoretical result regarding its
approximation properties; this result is stated rigorously and proved in Appendix A. A generalised
RNCRN is presented in Appendix B. In Section 4, we present an algorithm for training the RNCRN
as Algorithm 1, which we demonstrate by obtaining relatively simple RNCRNs displaying multi-stability, oscillations and chaos; more details can be found in Appendix C. Finally, we provide a summary and discussion in Section 5.

2 Background theory
In this section, we present some background theory on chemical reaction networks and artificial
neural networks.

2.1 Chemical reaction networks (CRNs)


Consider two chemical species X and Y which interact via the following five reactions:
$$X + Y \xrightarrow{\alpha} Y, \qquad \emptyset \xrightarrow{\beta} X, \qquad \emptyset \xrightarrow{\gamma} Y, \qquad X + Y \xrightarrow{\omega} X + 2Y, \qquad 2Y \xrightarrow{1} Y. \tag{1}$$

In particular, according to the first reaction, when species X and Y react, one molecule of X is
degraded. Since the molecular count of Y remains unchanged when this reaction occurs, we say that
Y acts as a catalyst for the first reaction. The second reaction describes a production of X from
some chemical species which we do not explicitly model, and simply denote by ∅; similarly, the third
reaction describes production of Y . One molecule of Y is produced when X and Y react according
to the fourth reaction. In this reaction, X remains unchanged, and is therefore a catalyst. Finally,
according to the fifth reaction, when two molecules of Y react, one of the molecules is degraded.
Systems of chemical reactions, such as (1), are called chemical reaction networks (CRNs) [39].
Reaction-rate equations. In this paper, we make a standard assumption on the rate at which the reactions occur: it is given by the product of the concentrations of the underlying reactants (species on the left-hand side of the reaction) multiplied by the rate coefficient, the positive number displayed above the reaction arrow. This choice of rates is called mass-action kinetics. For
example, let us denote the concentrations of X and Y from (1) at time t ≥ 0 respectively by x = x(t)
and y = y(t). The first reaction from (1) has α > 0 as its rate coefficient, and thus occurs at the rate
αxy. Similarly, the rates of the other four reactions from (1) are given by β, γ, ωxy and y²; note
that for the final reaction, we have fixed the rate coefficient to 1. Under suitable conditions [39], the
concentrations of chemical species satisfy a system of ordinary-differential equations (ODEs) called
the reaction-rate equations (RREs). For system (1), the RREs are given by

$$\frac{dx}{dt} = \beta - \alpha x y, \qquad \frac{dy}{dt} = \gamma + \omega x y - y^2. \tag{2}$$
In what follows, we denote the chemical species by X1 , X2 , . . . , Y1 , Y2 , . . ., their corresponding
concentrations by x1 , x2 , . . . , y1 , y2 , . . . ≥ 0, and the rate coefficients of the underlying reactions using
the Greek letters α, β, γ, ω > 0 with appropriate subscript indices. For simplicity, we assume that all
the variables, including the rate coefficients, are dimensionless. Furthermore, we limit ourselves to chemical reactions with at most two reactants. In this case, the RREs have quadratic polynomials
on the right-hand side, as in (2). More general chemical reactions, with three or more reactants, are
experimentally unlikely to occur, and can be approximately reduced to reactions with at most two
reactants [40, 41].
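For concreteness, the RREs (2) can also be integrated numerically. The following minimal Python sketch is our own illustration, not part of the paper; the rate coefficients and initial concentrations are arbitrary example values.

```python
# Integrating the reaction-rate equations (2) for the CRN (1) under
# mass-action kinetics. Rate coefficients are illustrative choices.
import numpy as np
from scipy.integrate import solve_ivp

alpha, beta, gamma, omega = 1.0, 2.0, 1.0, 0.5  # assumed example values

def rre(t, z):
    x, y = z
    dx = beta - alpha * x * y            # from X + Y -> Y and 0 -> X
    dy = gamma + omega * x * y - y**2    # from 0 -> Y, X + Y -> X + 2Y, 2Y -> Y
    return [dx, dy]

sol = solve_ivp(rre, (0.0, 50.0), [1.0, 1.0], dense_output=True)
print(sol.y[:, -1])  # concentrations (x, y) at the final time
```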

2.2 Artificial neural networks (ANNs)


Artificial neural networks (ANNs) consist of connected processing units known as artificial neurons.
In this paper, we consider a particular type of artificial neuron called the perceptron [42]. Given a
set of inputs, a perceptron first applies an affine function, followed by a suitable non-linear function, called the activation function, to produce a single output. More precisely, let x1, x2, . . . , xN ∈ R
be the input values, ω1 , ω2 , . . . , ωN ∈ R be the weights, ω0 ∈ R a bias, and σ : R → R a suitable
non-linear function. Then, the perceptron is a function $y : \mathbb{R}^N \to \mathbb{R}$ defined as

$$y = y(x_1, \ldots, x_N) = \sigma\left( \sum_{j=1}^{N} \omega_j x_j + \omega_0 \right). \tag{3}$$

Chemical perceptron. A natural question arises: Is there a CRN with a single species Y such
that its RRE has a unique stable equilibrium of the form (3)? Such an RRE has been put forward
and analyzed in [14], and takes the following form:

$$\frac{dy}{dt} = \gamma + \left( \sum_{j=1}^{N} \omega_j x_j + \omega_0 \right) y - y^2. \tag{4}$$

A CRN corresponding to (4) is given by

$$\emptyset \xrightarrow{\gamma} Y, \qquad Y \xrightarrow{\big|\sum_{j=1}^{N} \omega_j x_j + \omega_0\big|} Y + sY, \qquad 2Y \xrightarrow{1} Y, \tag{5}$$

where $s = \mathrm{sign}\big(\sum_{j=1}^{N} \omega_j x_j + \omega_0\big)$, with the function sign(·) yielding the sign of a real number. We call
species Y from (5) a chemical perceptron. Setting the left-hand side in (4) to zero, one finds that
there is a unique globally stable non-negative equilibrium, which we denote by $y^*$, given by

$$y^* = \sigma_\gamma\left( \sum_{j=1}^{N} \omega_j x_j + \omega_0 \right) \equiv \frac{1}{2}\left[ \left( \sum_{j=1}^{N} \omega_j x_j + \omega_0 \right) + \sqrt{ \left( \sum_{j=1}^{N} \omega_j x_j + \omega_0 \right)^{2} + 4\gamma } \,\right]. \tag{6}$$

We call σγ with γ > 0 a chemical activation function. As γ → 0, σγ approaches the rectified linear
unit (ReLU) activation function [14], which is a common choice in the ANN literature [4].
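As an illustration of this limit, the following Python sketch (ours, not the authors' code) evaluates the closed-form chemical activation function from (6) and measures its deviation from ReLU as γ decreases.

```python
# The chemical activation function sigma_gamma from (6), compared with
# ReLU as gamma -> 0; a numerical check of the limit stated in the text.
import numpy as np

def sigma_gamma(z, gamma):
    """Chemical activation function: the stable equilibrium of (4)."""
    return 0.5 * (z + np.sqrt(z**2 + 4.0 * gamma))

z = np.linspace(-5.0, 5.0, 11)
for g in (1.0, 0.1, 1e-6):
    print(g, np.max(np.abs(sigma_gamma(z, g) - np.maximum(z, 0.0))))
# The printed deviation from ReLU shrinks as gamma decreases.
```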

3 Recurrent neural chemical reaction networks (RNCRNs)


Let us consider a target ODE system with initial conditions, given by

$$\frac{d\bar{x}_i}{dt} = f_i(\bar{x}_1, \ldots, \bar{x}_N), \qquad \bar{x}_i(0) = a_i \geq 0, \qquad \text{for } i = 1, 2, \ldots, N. \tag{7}$$
In what follows, we call the ODE right-hand side (f1 , f2 , . . . , fN ) a vector field, which we assume
is sufficiently smooth. Without loss of generality, we assume that (7) has desirable dynamical
features in the positive orthant $\mathbb{R}^N_{>}$. If such features are located elsewhere, a suitable affine change
of coordinates can be used to move these features to the positive orthant [43].
We wish to find a neural CRN with some chemical species X1 , . . . , XN such that the RREs for
these species approximate the target ODEs (7). To this end, let us consider the RREs and initial
conditions given by

$$\frac{dx_i}{dt} = \beta_i + x_i \sum_{j=1}^{M} \alpha_{i,j} y_j, \qquad x_i(0) = a_i, \qquad \text{for } i = 1, 2, \ldots, N,$$

$$\mu \frac{dy_j}{dt} = \gamma + y_j \left( \sum_{i=1}^{N} \omega_{j,i} x_i + \omega_{j,0} \right) - y_j^2, \qquad y_j(0) = b_j, \qquad \text{for } j = 1, 2, \ldots, M. \tag{8}$$

We call the CRN corresponding to the RREs (8) a recurrent neural chemical reaction network
(RNCRN). In Figure 1, we schematically display this RNCRN with N = 1. The RNCRN consists
of two sub-networks, the first of which is the executive sub-system, shown in purple in Figure 1,
which contains chemical reactions which directly change the executive species X1 , X2 , . . . , XN . Note
that the initial conditions for the executive species from (8) match the target initial conditions
from (7). The second sub-network is called the neural sub-system, shown in yellow in Figure 1, and
it contains those reactions which directly influence the auxiliary species Y1 , Y2 , . . . , YM , for which
we allow arbitrary initial conditions b1 , b2 , . . . , bM ≥ 0. These species can be formally identified as
chemical perceptrons, see (4)–(5). However, let us stress that the CRN (5) with RRE (4) depends
on the parameters x1 , x2 , . . . , xN . In contrast, the concentrations of chemical perceptrons from the
RNCRN depend on the executive variables x1 (t), x2 (t), . . . , xN (t), which in turn depend on the
perceptron concentrations y1 (t), y2 (t), . . . , yM (t). In other words, there is a feedback between the
executive and neural systems, giving the RNCRN a recurrent character. This feedback is catalytic in
nature: the chemical perceptrons are catalysts in the executive system; similarly, executive species
are catalysts in the neural system.
Main result. We now wish to choose the parameters in the RNCRN so that the concentrations
of the executive species xi (t) from (8) are close to the target variables x̄i (t) from (7). Key to achieving
this match is the parameter µ > 0, which sets the speed at which the perceptrons equilibrate relative to the executive sub-system. The equilibrium $y_j^* = \sigma_\gamma\big( \sum_{i=1}^{N} \omega_{j,i} x_i + \omega_{j,0} \big)$, with σγ of the form (6), is reached infinitely fast if we formally set µ = 0 in (8); we then say that the perceptrons have reached the quasi-static state. In this case, the executive species are governed by the reduced ODEs:

$$\frac{d\tilde{x}_i}{dt} = g_i(\tilde{x}_1, \ldots, \tilde{x}_N) = \beta_i + \tilde{x}_i \sum_{j=1}^{M} \alpha_{i,j} \, \sigma_\gamma\left( \sum_{k=1}^{N} \omega_{j,k} \tilde{x}_k + \omega_{j,0} \right), \qquad \tilde{x}_i(0) = a_i, \qquad \text{for } i = 1, 2, \ldots, N. \tag{9}$$
This reduced system allows us to prove that the RNCRN can in principle be fine-tuned to execute
the target dynamics arbitrarily closely. In particular, to achieve this task, we follow two steps.
Firstly, we assume that the chemical perceptrons are in the quasi-static state, i.e. we consider
the reduced system (9). Using the classical (static) theory from ANNs, it follows that the rate
coefficients from the RNCRN can be fine-tuned so that the vector field from the reduced system (9)
is close to that of the target system (7). To ensure a good vector field match, one in general requires
sufficiently many chemical perceptrons. We call this first step the quasi-static approximation.
Secondly, we disregard the assumption that the chemical perceptrons are in the quasi-static
state, i.e. we consider the full system (8). Nevertheless, we substitute into the full system the
rate coefficients found in the first step. Using perturbation theory from dynamical systems, it
follows that, under this parameter choice, the concentrations of the executive species from the full
system (8) match the variables from the target system (7), provided that the chemical perceptrons
fire sufficiently (but finitely) fast. We call this second step the dynamical approximation.
In summary, the RNCRN induced by (8) with sufficiently many chemical perceptrons (M ≥ 1
large enough) which act sufficiently fast (µ > 0 small enough) can execute any target dynamics.
This result is stated rigorously and proved in Appendix A; a generalization to deep RNCRNs, i.e.
RNCRNs containing multiple coupled layers of chemical perceptrons, is presented in Appendix B.

4 Examples
In Section 3, we have outlined a two-step procedure used to prove that RNCRNs can theoretically
execute any desired dynamics. Aside from being of theoretical value, these two steps also form a basis for a practical method to train RNCRNs, which is presented as Algorithm 1. In this section,
we use Algorithm 1 to train RNCRNs to achieve predefined multi-stability, oscillations, and chaos.

Fix a target system (7) and target compact sets $K_1, K_2, \ldots, K_N \subset (0, +\infty)$. Fix also the rate coefficients $\beta_1, \beta_2, \ldots, \beta_N \geq 0$ and $\gamma > 0$ in the RNCRN system (8).

(a) Quasi-static approximation. Fix a tolerance ε > 0. Fix also the number of chemical perceptrons M ≥ 1. Using the backpropagation algorithm [44], find the coefficients $\alpha_{i,j}^*, \omega_{j,0}^*, \omega_{j,i}^*$ for i = 1, 2, . . . , N, j = 1, 2, . . . , M, such that the mean-square distance between $\big(f_i(x_1, x_2, \ldots, x_N)/x_i - \beta_i/x_i\big)$ and $\sum_{j=1}^{M} \alpha_{i,j}^* \, \sigma_\gamma\big( \sum_{k=1}^{N} \omega_{j,k}^* x_k + \omega_{j,0}^* \big)$ is within the tolerance for $(x_1, x_2, \ldots, x_N) \in K_1 \times K_2 \times \ldots \times K_N$. If the tolerance ε is not met, then repeat step (a) with M + 1.

(b) Dynamical approximation. Substitute $\alpha_{i,j} = \alpha_{i,j}^*$, $\omega_{j,0} = \omega_{j,0}^*$ and $\omega_{j,i} = \omega_{j,i}^*$ into the RNCRN system (8). Fix the initial conditions $a_1, a_2, \ldots, a_N \geq 0$ and $b_1, b_2, \ldots, b_M \geq 0$. Fix also the speed of chemical perceptrons 0 < µ ≪ 1. Numerically solve the target system (7) and the RNCRN system (8) over a desired time-interval t ∈ [0, T]. Time T > 0 must be such that $\bar{x}_i(t) \in K_i$ and $x_i(t) \in K_i$ for all t ∈ [0, T] for all i = 1, 2, . . . , N. If $\bar{x}_i(t)$ and $x_i(t)$ are sufficiently close according to a desired criterion for all i, then terminate the algorithm. Otherwise, repeat step (b) with a smaller µ. If no desirable µ is found, then go back to step (a) and choose a smaller ε.

Algorithm 1: Two-step algorithm for training the RNCRN.
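For illustration, a minimal sketch of step (a) for the scalar target f(x̄₁) = sin(x̄₁) of Section 4.1 is given below. This is our own mock-up using PyTorch autograd in place of hand-written backpropagation; the optimiser, learning rate and iteration count are arbitrary choices, not the authors' training setup.

```python
# Step (a) of Algorithm 1 for f(x) = sin(x) on K = [1, 12], with
# beta = 0 and gamma = 1: fit h(x) = f(x)/x by a sum of chemical
# activation functions, as in the reduced vector field (9).
import torch

M, gamma = 3, 1.0
x = torch.linspace(1.0, 12.0, 500).unsqueeze(1)       # grid on K
h = torch.sin(x) / x                                   # target (f(x) - beta)/x

alpha = torch.randn(M, requires_grad=True)
w1 = torch.randn(M, requires_grad=True)
w0 = torch.randn(M, requires_grad=True)

def sigma_gamma(z):
    return 0.5 * (z + torch.sqrt(z**2 + 4.0 * gamma))  # chemical activation (6)

opt = torch.optim.Adam([alpha, w1, w0], lr=1e-2)
for step in range(20000):
    opt.zero_grad()
    g = (alpha * sigma_gamma(x * w1 + w0)).sum(dim=1, keepdim=True)
    loss = torch.mean((g - h)**2)                      # mean-square distance
    loss.backward()
    opt.step()
print(loss.item())  # if above the tolerance, repeat with M + 1 perceptrons
```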

4.1 Multi-stability
Let us consider the one-variable target ODE

$$\frac{d\bar{x}_1}{dt} = f_1(\bar{x}_1) = \sin(\bar{x}_1), \qquad \bar{x}_1(0) = a_1 \geq 0. \tag{10}$$

This system has infinitely many equilibria, which are given by $\bar{x}_1^* = n\pi$ for integer values of n. The
equilibria with even n are unstable, while those with odd n are stable.
Bi-stability. Let us now apply Algorithm 1 on the target system (10), in order to find an
associated bi-stable RNCRN. In particular, let us choose the target region to be K1 = [1, 12], which
includes two stable equilibria, π and 3π, and one unstable equilibrium, 2π. We choose e.g. β1 = 0,
as this coincides with sin(0), and γ = 1.
Quasi-static approximation. Let us apply the first step from Algorithm 1. We find that the
tolerance ε ≈ 10−3 is met with M = 3 chemical perceptrons, if the rate coefficients αi,j , ωj,0 , ωj,i in
the reduced ODE (see (9)) are chosen as follows:

$$\frac{d\tilde{x}_1}{dt} = g_1(\tilde{x}_1) = -0.983\,\sigma_1(-1.167\tilde{x}_1 + 7.789)\,\tilde{x}_1 - 0.050\,\sigma_1(0.994\tilde{x}_1 - 1.918)\,\tilde{x}_1 + 2.398\,\sigma_1(-0.730\tilde{x}_1 + 3.574)\,\tilde{x}_1. \tag{11}$$

In Figure 2(a), we display the vector fields of the target (10) and reduced system (11). One can
notice an overall good match within the desired set K1 = [1, 12], shown as the unshaded region. As expected, the approximation is poor outside of K1; furthermore, for the given tolerance, the accuracy
is also reduced near the left end-point of the target set.
[Figure 2 appears here, with panels: (a) bi-stable dynamics (vector field against state x1), (b) bi-stable trajectories (x1 against time t), (c) tri-stable dynamics, and (d) tri-stable trajectories.]
Figure 2: RNCRN approximations of a multi-stable non-polynomial target system. (a) The vector
field of the target system (10) and reduced system (11) when K1 = [1, 12]. (b) Solutions x̄1 (t) of the
target system (10), and x1 (t) of the full system (12) with µ = 0.01. Analogous plots are shown in
panels (c) and (d) for the tri-stable RNCRN over K1 = [1, 18] from Appendix C.1, whose reduced
and full ODEs are given respectively by (28) and (29), with coefficients (27) and µ = 0.01. In panels
(c) and (d), the initial concentrations of all chemical perceptrons are set to zero.

Dynamical approximation. Let us apply the second step from Algorithm 1. Using the coefficients
from (11), we now form the full ODEs (see (8)):

$$\begin{aligned}
\frac{dx_1}{dt} &= -0.983 x_1 y_1 - 0.050 x_1 y_2 + 2.398 x_1 y_3, & x_1(0) &= a_1 \in K_1,\\
\mu \frac{dy_1}{dt} &= 1 - 1.167 x_1 y_1 + 7.789 y_1 - y_1^2, & y_1(0) &= b_1 \geq 0,\\
\mu \frac{dy_2}{dt} &= 1 + 0.994 x_1 y_2 - 1.918 y_2 - y_2^2, & y_2(0) &= b_2 \geq 0,\\
\mu \frac{dy_3}{dt} &= 1 - 0.730 x_1 y_3 + 3.574 y_3 - y_3^2, & y_3(0) &= b_3 \geq 0. \tag{12}
\end{aligned}$$
We fix the initial conditions arbitrarily to b1 = b2 = b3 = 0, the desired final-time to T = 5 and
the perceptron speed to µ = 0.01. We numerically integrate (10) and (12) over t ∈ [0, 5] for a fixed
initial concentration a1 ∈ K1 of the executive species, and plot the solutions x̄1(t) and x1(t); we then
repeat the same computations for various values of a1 , which we display in Figure 2(b). One can
notice that the RNCRN underlying (12) with µ = 0.01 accurately approximates the time-trajectories
of the target system; in particular, we observe bi-stability.
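The following Python sketch (ours, not the authors' code) reproduces this experiment by integrating the full ODEs (12) with a stiff scipy solver for a sweep of initial conditions a1; each trajectory should settle near π or 3π.

```python
# Integrating the full RNCRN ODEs (12) with mu = 0.01 for several a1,
# mirroring Figure 2(b); perceptron initial conditions are set to zero.
import numpy as np
from scipy.integrate import solve_ivp

mu = 0.01

def rncrn(t, z):
    x1, y1, y2, y3 = z
    dx1 = -0.983 * x1 * y1 - 0.050 * x1 * y2 + 2.398 * x1 * y3
    dy1 = (1 - 1.167 * x1 * y1 + 7.789 * y1 - y1**2) / mu
    dy2 = (1 + 0.994 * x1 * y2 - 1.918 * y2 - y2**2) / mu
    dy3 = (1 - 0.730 * x1 * y3 + 3.574 * y3 - y3**2) / mu
    return [dx1, dy1, dy2, dy3]

for a1 in np.linspace(1.0, 12.0, 12):
    sol = solve_ivp(rncrn, (0.0, 5.0), [a1, 0.0, 0.0, 0.0],
                    method="LSODA", rtol=1e-8, atol=1e-10)
    print(a1, sol.y[0, -1] / np.pi)  # final state in units of pi: near 1 or 3
```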

Tri-stability. Let us now apply Algorithm 1 on (10) to obtain a tri-stable RNCRN. To this end,
we consider a larger set K1 = [1, 18], which includes three stable equilibria, π, 3π, and 5π, and two
unstable equilibria, 2π and 4π. Applying Algorithm 1, we find an RNCRN with M = 4 chemical
perceptrons, which with µ = 0.01 displays the desired tri-stability; see Figure 2(c)–(d), and see
Appendix C.1 for more details. In a similar manner, Algorithm 1 can be used to achieve RNCRNs
with an arbitrary number of stable equilibria.

4.2 Oscillations
Let us consider the two-variable target ODE system

$$\begin{aligned}
\frac{d\bar{x}_1}{dt} &= f_1(\bar{x}_1, \bar{x}_2) = 6 + 4 J_0(\bar{x}_1) - \frac{3}{2}\bar{x}_2, & \bar{x}_1(0) &= a_1 \geq 0,\\
\frac{d\bar{x}_2}{dt} &= f_2(\bar{x}_1, \bar{x}_2) = \bar{x}_1 - 4, & \bar{x}_2(0) &= a_2 \geq 0, \tag{13}
\end{aligned}$$

where $J_0(\bar{x}_1)$ is the Bessel function of the first kind. Numerical simulations suggest that (13) has an
isolated oscillatory solution, which we display as the blue curve in the (x̄1, x̄2)-space in Figure 3(a);
also shown as grey arrows is the vector field, and as the unshaded box we display the desired region
of interest K1 × K2 = [0.1, 12] × [0.1, 12]. In Figure 3(c)–(d), we show as solid blue curves this
oscillatory solution in the (t, x̄1 )- and (t, x̄2 )-space, respectively.
Using Algorithm 1, we find an RNCRN with M = 6 chemical perceptrons, whose dynamics
with µ = 0.01 qualitatively matches the dynamics of the target system (13) within the region of
interest; see Appendix C.2 for details. We display the solution of this RNCRN as the purple curve
in Figures 3(b)–(d).
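For readers who wish to reproduce the target trajectory, the following short sketch (our own illustration, not from the paper) integrates (13) using scipy's Bessel function J0 and the initial condition of Figure 3(a).

```python
# Integrating the oscillatory target system (13) from x1(0) = 2, x2(0) = 4.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.special import j0

def target(t, z):
    x1, x2 = z
    return [6.0 + 4.0 * j0(x1) - 1.5 * x2, x1 - 4.0]

sol = solve_ivp(target, (0.0, 50.0), [2.0, 4.0], max_step=0.01)
print(sol.y[:, -1])  # the trajectory settles onto the oscillation of Figure 3(a)
```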

[Figure 3 appears here, with panels: (a) target system (state x̄2 against state x̄1), (b) RNCRN approximation (state x2 against state x1), (c) x1(t) trajectories against time t, and (d) x2(t) trajectories against time t.]
Figure 3: RNCRN approximation of an oscillatory non-polynomial target system. (a) The vector field
of the target system (13) (grey arrows) for K1 × K2 = [0.1, 12] × [0.1, 12], and a solution of (13) with
x̄1(0) = 2 and x̄2(0) = 4 (blue). (b) An analogous plot is shown for the RNCRN from Appendix C.2,
whose reduced and full ODEs are given respectively by (31) and (32), with coefficients (30) and
µ = 0.01. The vector field of (31) (grey arrows) is shown over K1 × K2 , together with a solution
of (32) (purple). Panels (c) and (d) respectively display the solutions x̄1 (t) and x1 (t), and x̄2 (t)
and x2 (t). In all the panels, the same x1 (0) and x2 (0) are used, and initial concentrations of the
chemical perceptrons are all set to zero.

4.3 Chaos
As our final example, we consider the three-variable target ODE system

$$\begin{aligned}
\frac{d\bar{x}_1}{dt} &= f_1(\bar{x}_1, \bar{x}_2, \bar{x}_3) = 2.5\bar{x}_1(1 - 1.5\bar{x}_1) - \frac{4\bar{x}_1\bar{x}_2}{1 + 3\bar{x}_1}, & \bar{x}_1(0) &= 0.25,\\
\frac{d\bar{x}_2}{dt} &= f_2(\bar{x}_1, \bar{x}_2, \bar{x}_3) = -0.4\bar{x}_2 + \frac{4\bar{x}_1\bar{x}_2}{1 + 3\bar{x}_1} - \frac{4\bar{x}_2\bar{x}_3}{1 + 3\bar{x}_2}, & \bar{x}_2(0) &= 0.25,\\
\frac{d\bar{x}_3}{dt} &= f_3(\bar{x}_1, \bar{x}_2, \bar{x}_3) = -0.6\bar{x}_3 + \frac{4\bar{x}_2\bar{x}_3}{1 + 3\bar{x}_2}, & \bar{x}_3(0) &= 0.25. \tag{14}
\end{aligned}$$
ODEs (14) are known as the Hastings-Powell system [45], and have been reported to exhibit chaotic
behaviour. We present a portion of the state-space of (14) in Figure 4(a).
Using Algorithm 1, we find an RNCRN with M = 5 chemical perceptrons to approximate the
dynamics of the Hastings-Powell system (14) over the compact set x1 ∈ K1, x2 ∈ K2 and x3 ∈ K3
with K1 = K2 = K3 = [0.01, 1]. The parameters and the RREs can be found in Appendix C.3. The
state-space for the executive species from the RNCRN is presented in Figure 4(b) when µ = 0.1,
suggesting the presence of a chaotic attractor, in qualitative agreement with Figure 4(a). In Figure 4(c)–
(e), we display the underlying time-trajectories. Alignment of the trajectories is not expected for
long periods of time due to the chaotic nature of the target system (causing exponential divergence
of nearby trajectories).
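A short sketch (ours, for illustration) that integrates the target system (14) and can be used to visualise the chaotic attractor of Figure 4(a) is given below.

```python
# Integrating the Hastings-Powell target system (14) from the initial
# condition x1(0) = x2(0) = x3(0) = 0.25 used in Figure 4(a).
import numpy as np
from scipy.integrate import solve_ivp

def hastings_powell(t, z):
    x1, x2, x3 = z
    h12 = 4.0 * x1 * x2 / (1.0 + 3.0 * x1)   # first trophic interaction
    h23 = 4.0 * x2 * x3 / (1.0 + 3.0 * x2)   # second trophic interaction
    return [2.5 * x1 * (1.0 - 1.5 * x1) - h12,
            -0.4 * x2 + h12 - h23,
            -0.6 * x3 + h23]

sol = solve_ivp(hastings_powell, (0.0, 500.0), [0.25, 0.25, 0.25],
                dense_output=True, rtol=1e-9, atol=1e-12)
print(sol.y[:, -1])  # sample the attractor; plot sol.y to reproduce Figure 4(a)
```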
[Figure 4 appears here, with panels: (a) target system and (b) RNCRN approximation, each a three-dimensional (x1, x2, x3) state-space plot, and (c)-(e) the x1(t), x2(t) and x3(t) trajectories against time t.]
Figure 4: RNCRN approximation of a non-polynomial target system with a chaotic attractor. (a)
Solution of target system (14) with x̄1 (0) = x̄2 (0) = x̄3 (0) = 0.25. (b) Solution of the full ODEs
system (35) for the RNCRN from Appendix C.3 with coefficients (33), µ = 0.1, and initial conditions
x1 (0) = x2 (0) = x3 (0) = 0.25, and zero initial conditions for all chemical perceptrons. Panels (c),
(d) and (e) respectively display the solutions x̄1(t) and x1(t), x̄2(t) and x2(t), and x̄3(t) and x3(t).

5 Discussion
In this paper, we have introduced a class of CRNs that can be trained to approximate the dynamics of
any well-behaved system of ordinary-differential equations (ODEs). These recurrent neural chemical
reaction networks (RNCRNs) operate by continually actuating the concentrations of the executive
chemical species such that they adhere to a vector field which has been encoded by a faster system of
chemical perceptrons. In Theorem A.1 in Appendix A, we have presented the approximation abilities
of RNCRNs. Based on this result, we have put forward Algorithm 1 for training RNCRNs, which
relies on the backpropagation algorithm from ANN theory. Due to the nature of the backpropagation
procedure, Algorithm 1 is not guaranteed to find an optimal solution; nevertheless, Algorithm 1
proved effective for the target systems presented in this paper. In particular, in Section 4, we
showcased examples of RNCRNs that replicate sophisticated dynamical behaviours with up to six
chemical perceptrons, which are at most two orders of magnitude faster than the executive species.
RNCRNs contain only chemical reactions of a certain kind (e.g. see (1)), all of which are at most
bi-molecular, which is suitable for experimental implementations via DNA strand-displacement
technologies [46]. In particular, arbitrary bi-molecular CRNs, whose rate coefficients span up to six
orders of magnitude [47], can be implemented with DNA strand-displacement reactions, and this
approach has achieved chemical systems with more than 100 chemical species [31, 32]. Let us stress
that high-precision calibration of the initial concentrations of chemical species within this framework
is not feasible [38]. In this context, an advantage of the RNCRNs is that the bulk of the underlying
species, namely the chemical perceptrons, have arbitrary initial conditions.
While RNCRNs are experimentally feasible, such chemical-based machine learning suffers from
several idiosyncrasies that are not present in traditional electronic-based machine learning. Firstly,
adding a perceptron or new weight to an ANN is trivial compared to adding a chemical perceptron to
an RNCRN, which requires introducing a specially engineered chemical species. Secondly, chemical
systems are prone to unintended reactions, so that the modularity of each chemical perceptron might
break down as the network scales in size [25]. Finally, the weight parameters of an ANN can be
fine-tuned to a high degree of precision. However, despite techniques to modulate the effective rates
of chemical reactions, the calibration of reaction rates is significantly less precise than in electronic
circuits [47, 48].
RNCRNs allow one to map ODEs with non-polynomial right-hand sides to dynamically similar
reaction-rate equations (RREs). Alternative methods put forward in the literature solve this problem
in two steps: firstly, the non-polynomial ODEs are mapped to polynomial ones and, secondly, special
maps are then applied to transform these polynomial ODEs to RREs. Although the second step can
be performed efficiently [43, 49–53], the first step suffers from a significant drawback, which is not a
feature of the RNCRNs: the initial conditions for the auxiliary species are constrained. For example,
applying the standard method from [36, 37] to the target non-polynomial system (10), one obtains
polynomial ODEs given by

$$\begin{aligned}
\frac{dx_1}{dt} &= y_1, & x_1(0) &= a_1,\\
\frac{dy_1}{dt} &= y_1 y_2, & y_1(0) &= \sin(a_1),\\
\frac{dy_2}{dt} &= y_1 y_3, & y_2(0) &= \cos(a_1),\\
\frac{dy_3}{dt} &= y_2 y_3, & y_3(0) &= -\sin(a_1). \tag{15}
\end{aligned}$$
The solution x1(t) from equation (15) is equivalent to that of x̄1(t) from the target system (10) for all a1 ∈ R. However, the initial conditions for the auxiliary species y1(0) = sin(a1), y2(0) = cos(a1), and
y3 (0) = − sin(a1 ) from (15) are constrained. In contrast, the initial concentrations of the auxiliary
species from the RNCRN approximation (12) are arbitrary.
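The following sketch (our own illustration) makes this constraint concrete: the embedding (15) reproduces the sin-dynamics only when the auxiliary initial conditions are set exactly as prescribed.

```python
# The polynomial embedding (15) versus the target (10): correct dynamics
# require the constrained auxiliary initial conditions; a perturbed
# initialisation gives a large error.
import numpy as np
from scipy.integrate import solve_ivp

a1 = 2.0

def embedded(t, z):
    x1, y1, y2, y3 = z
    return [y1, y1 * y2, y1 * y3, y2 * y3]

exact = [a1, np.sin(a1), np.cos(a1), -np.sin(a1)]   # constrained, as in (15)
perturbed = [a1, 0.0, 1.0, 0.0]                     # a "wrong" initialisation

for ic in (exact, perturbed):
    sol = solve_ivp(embedded, (0.0, 5.0), ic, rtol=1e-10, atol=1e-12)
    ref = solve_ivp(lambda t, x: np.sin(x), (0.0, 5.0), [a1],
                    rtol=1e-10, atol=1e-12)
    print(abs(sol.y[0, -1] - ref.y[0, -1]))  # small only for the exact ICs
```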
As part of the future work, firstly, it is important to study the performance of RNCRNs when
the underlying rate coefficients are perturbed; we suspect modifications to the training algorithm
might be a strategy to select for robust RNCRNs. Secondly, of importance is also to study how
the RNCRNs perform in the low-molecular-count regime, where stochastic effects are significant.
Finally, in Appendix B, we have generalized the single-layer (shallow) RNCRNs to multi-layer (deep)
ones; in future work, we will study whether such deep RNCRNs have sufficiently advantageous
approximation abilities to justify the additional experimental complexity.

6 Declarations
Author Contributions: AD, BQ, TEO, and TP conceptualized the study; AD and TP performed the mathematical analyses; AD performed the simulations and wrote the original draft; BQ, TEO, and TP
reviewed and edited the final submission.
Funding: Alexander Dack acknowledges funding from the Department of Bioengineering at Imperial College
London. Benjamin Qureshi would like to thank the European Research Council (ERC) under the European
Union’s Horizon 2020 research and innovation program (Grant agreement No. 851910). Thomas E. Ouldridge
would like to thank the Royal Society for a University Research Fellowship. Tomislav Plesa would like to
thank Peterhouse, University of Cambridge, for a Fellowship.
Conflict of interest: The authors declare that they have no competing interests.

References
[1] W. S. McCulloch and W. Pitts, “A logical calculus of ideas immanent in nervous activity,”
Bulletin of Mathematical Biophysics, vol. 5, pp. 115–133, 1943.

[2] G. Cybenko, “Approximation by superpositions of a sigmoidal function,” Mathematics of Control,


Signals, and Systems, vol. 2, pp. 303–314, 1989.

[3] K. Hornik, M. Stinchcombe, and H. White, “Multilayer feedforward networks are universal
approximators,” Neural Networks, vol. 2, pp. 359–366, 1989.

[4] Y. LeCun, Y. Bengio, and G. Hinton, “Deep learning,” Nature, vol. 521, pp. 436–444, 2015.

[5] M. A. Bucci, O. Semeraro, A. Allauzen, G. Wisniewski, L. Cordier, and L. Mathelin, “Con-


trol of chaotic systems by deep reinforcement learning,” Proceedings of the Royal Society A:
Mathematical, Physical and Engineering Sciences, vol. 475, p. 20190351, 2019.

[6] A. Hjelmfelt, E. D. Weinberger, and J. Ross, “Chemical implementation of neural networks and
Turing machines.,” Proceedings of the National Academy of Sciences, vol. 88, pp. 10983–10987,
1991.

[7] L. Qian, E. Winfree, and J. Bruck, “Neural network computation with DNA strand displacement
cascades,” Nature, vol. 475, pp. 368–372, 2011.

[8] J. Kim, J. Hopfield, and E. Winfree, “Neural Network Computation by In Vitro Transcriptional
Circuits,” Advances in Neural Information Processing Systems, vol. 17, 2004.

[9] H.-J. K. Chiang, J.-H. R. Jiang, and F. Fages, “Reconfigurable neuromorphic computation in
biochemical systems,” in 2015 37th Annual International Conference of the IEEE Engineering
in Medicine and Biology Society (EMBC), pp. 937–940, 2015.

[10] W. Poole, A. Ortiz-Muñoz, A. Behera, N. S. Jones, T. E. Ouldridge, E. Winfree, and M. Gopalkr-


ishnan, “Chemical Boltzmann Machines,” in DNA 2017: DNA Computing and Molecular
Programming, vol. 10467, pp. 210–231, 2017.

[11] A. Moorman, C. C. Samaniego, C. Maley, and R. Weiss, “A Dynamical Biomolecular Neural


Network,” in 2019 IEEE 58th Conference on Decision and Control (CDC), pp. 1797–1802, 2019.
ISSN: 2576-2370.

[12] M. Vasić, C. Chalk, A. Luchsinger, S. Khurshid, and D. Soloveichik, “Programming and training
rate-independent chemical reaction networks,” Proceedings of the National Academy of Sciences,
vol. 119, p. e2111552119, 2022.

[13] J. Linder, Y.-J. Chen, D. Wong, G. Seelig, L. Ceze, and K. Strauss, “Robust Digital Molecular
Design of Binarized Neural Networks,” in 27th International Conference on DNA Computing
and Molecular Programming (DNA 27), vol. 205, pp. 1:1–1:20, 2021.

[14] D. F. Anderson, B. Joshi, and A. Deshpande, “On reaction network implementations of neural
networks,” Journal of The Royal Society Interface, vol. 18, p. 20210031, 2021.

[15] J. Fil, N. Dalchau, and D. Chu, “Programming Molecular Systems To Emulate a Learning
Spiking Neuron,” ACS Synthetic Biology, vol. 11, pp. 1992–2220, 2022.

[16] A. J. Genot, T. Fujii, and Y. Rondelez, “Scaling down DNA circuits with competitive neural
networks,” Journal of The Royal Society Interface, vol. 10, p. 20130212, 2013.

[17] R. Lopez, R. Wang, and G. Seelig, “A molecular multi-gene classifier for disease diagnostics,”
Nature Chemistry, vol. 10, pp. 746–754, 2018.

[18] A. Pandi, M. Koch, P. L. Voyvodic, P. Soudier, J. Bonnet, M. Kushwaha, and J.-L. Faulon,
“Metabolic perceptrons for neural computing in biological systems,” Nature Communications,
vol. 10, p. 3880, 2019.

[19] X. Li, L. Rizik, V. Kravchik, M. Khoury, N. Korin, and R. Daniel, “Synthetic neural-like
computing in microbial consortia for pattern recognition,” Nature Communications, vol. 12,
p. 3139, 2021.

[20] S. Okumura, G. Gines, N. Lobato-Dauzier, A. Baccouche, R. Deteix, T. Fujii, Y. Rondelez, and


A. J. Genot, “Nonlinear decision-making with enzymatic neural networks,” Nature, vol. 610,
pp. 496–501, 2022.

[21] A. J. Van Der Linden, P. A. Pieters, M. W. Bartelds, B. L. Nathalia, P. Yin, W. T. S. Huck,


J. Kim, and T. F. A. De Greef, “DNA Input Classification by a Riboregulator-Based Cell-Free
Perceptron,” ACS Synthetic Biology, vol. 11, pp. 1510–1520, 2022.

[22] L. Cardelli, M. Tribastone, and M. Tschaikowski, “From electric circuits to chemical networks,”
Natural Computing, vol. 19, pp. 237–248, 2020.

[23] A. Arkin and J. Ross, “Computational functions in biochemical reaction networks,” Biophysical
Journal, vol. 67, pp. 560–578, 1994.

[24] G. Seelig, D. Soloveichik, D. Y. Zhang, and E. Winfree, “Enzyme-Free Nucleic Acid Logic
Circuits,” Science, vol. 314, pp. 1585–1588, 2006.

[25] L. Qian and E. Winfree, “Scaling Up Digital Circuit Computation with DNA Strand Displacement
Cascades,” Science, vol. 332, pp. 1196–1201, 2011.

[26] D. Del Vecchio, A. J. Dy, and Y. Qian, “Control theory meets synthetic biology,” Journal of
The Royal Society Interface, vol. 13, p. 20160380, 2016.

[27] C. Briat, A. Gupta, and M. Khammash, “Antithetic Integral Feedback Ensures Robust Perfect
Adaptation in Noisy Biomolecular Networks,” Cell Systems, vol. 2, pp. 15–26, 2016.

[28] T. Plesa, G.-B. Stan, T. E. Ouldridge, and W. Bae, “Quasi-robust control of biochemical reaction
networks via stochastic morphing,” Journal of The Royal Society Interface, vol. 18, p. 20200985,
2021.

[29] C. Kieffer, A. J. Genot, Y. Rondelez, and G. Gines, “Molecular Computation for Molecular
Classification,” Advanced Biology, vol. 7, p. 2200203, 2023.

[30] C. C. Samaniego, E. Wallace, F. Blanchini, E. Franco, and G. Giordano, “Neural networks built
from enzymatic reactions can operate as linear and nonlinear classifiers,” bioRxiv, 2024.

[31] K. M. Cherry and L. Qian, “Scaling up molecular pattern recognition with DNA-based winner-
take-all neural networks,” Nature, vol. 559, pp. 370–376, 2018.

[32] X. Xiong, T. Zhu, Y. Zhu, M. Cao, J. Xiao, L. Li, F. Wang, C. Fan, and H. Pei, “Molecular
convolutional neural networks with DNA regulatory circuits,” Nature Machine Intelligence,
vol. 4, pp. 625–635, 2022.

[33] W. Xiong and J. E. Ferrell, “A positive-feedback-based bistable ‘memory module’ that governs
a cell fate decision,” Nature, vol. 426, pp. 460–465, Nov. 2003.

[34] P. E. Hardin, J. C. Hall, and M. Rosbash, “Feedback of the Drosophila period gene product on
circadian cycling of its messenger RNA levels,” Nature, vol. 343, pp. 536–540, Feb. 1990.

[35] M. L. Heltberg, S. Krishna, and M. H. Jensen, “On chaotic dynamics in transcription factors
and the associated effects in differential gene regulation,” Nature Communications, vol. 10, p. 71,
Jan. 2019.

[36] E. H. Kerner, “Universal formats for nonlinear ordinary differential systems,” Journal of
Mathematical Physics, vol. 22, pp. 1366–1371, 1981.

[37] K. Kowalski, “Universal formats for nonlinear dynamical systems,” Chemical Physics Letters,
vol. 209, pp. 167–170, 1993.

[38] N. Srinivas, J. Parkin, G. Seelig, E. Winfree, and D. Soloveichik, “Enzyme-free nucleic acid
dynamical systems,” Science, vol. 358, p. eaal2052, Dec. 2017.

[39] M. Feinberg, “Lecture notes on chemical reaction networks,” Lecture Notes, Mathematics
Research Center, University of Wisconsin-Madison, 1979.

[40] T. Wilhelm, “Chemical systems consisting only of elementary steps – a paradigma for nonlinear
behavior,” Journal of Mathematical Chemistry, vol. 27, pp. 71–88, 2000.

[41] T. Plesa, “Stochastic approximations of higher-molecular by bi-molecular reactions,” Journal of
Mathematical Biology, vol. 86, p. 28, 2023.

[42] F. Rosenblatt, The perceptron, a perceiving and recognizing automaton Project Para. Cornell
Aeronautical Laboratory, 1957.

[43] T. Plesa, T. Vejchodský, and R. Erban, “Chemical reaction systems with a homoclinic bifurcation:
an inverse problem,” Journal of Mathematical Chemistry, vol. 54, pp. 1884–1915, 2016.

[44] D. E. Rumelhart, G. E. Hinton, and R. J. Williams, “Learning representations by back-


propagating errors,” Nature, vol. 323, pp. 533–536, Oct. 1986.

[45] L. Stone and D. He, “Chaotic oscillations and cycles in multi-trophic ecological systems,” Journal
of Theoretical Biology, vol. 248, pp. 382–390, 2007.

[46] D. Soloveichik, G. Seelig, and E. Winfree, “DNA as a universal substrate for chemical kinetics,”
Proceedings of the National Academy of Sciences, vol. 107, pp. 5393–5398, 2010.

[47] D. Y. Zhang and E. Winfree, “Control of DNA strand displacement kinetics using toehold
exchange,” Journal of the American Chemical Society, vol. 131, no. 47, pp. 17303–17314, 2009.
PMID: 19894722.

[48] N. E. C. Haley, T. E. Ouldridge, I. Mullor Ruiz, A. Geraldini, A. A. Louis, J. Bath, and A. J.


Turberfield, “Design of hidden thermodynamic driving for non-equilibrium systems via mismatch
elimination during DNA strand displacement,” Nature Communications, vol. 11, p. 2562, May
2020.

[49] N. Samardzija, L. D. Greller, and E. Wasserman, “Nonlinear chemical kinetic schemes derived
from mechanical and electrical dynamical systems,” The Journal of Chemical Physics, vol. 90,
pp. 2296–2304, 1989.

[50] D. Poland, “Cooperative catalysis and chemical chaos: a chemical model for the Lorenz equations,”
Physica D: Nonlinear Phenomena, vol. 65, pp. 86–99, 1993.

[51] T. Plesa, T. Vejchodský, and R. Erban, “Test Models for Statistical Inference: Two-Dimensional
Reaction Systems Displaying Limit Cycle Bifurcations and Bistability,” in Stochastic Processes,
Multiscale Modeling, and Numerical Methods for Computational Cellular Biology, pp. 3–27,
2017.

[52] T. Plesa, A. Dack, and T. E. Ouldridge, “Integral feedback in synthetic biology: Negative-
equilibrium catastrophe (Appendix D),” Journal of Mathematical Chemistry, vol. 61, pp. 1980–
2018, 2023.

[53] K. M. Hangos and G. Szederkényi, “Mass action realizations of reaction kinetic system models
on various time scales,” Journal of Physics: Conference Series, vol. 268, p. 012009, 2011.

[54] A. Pinkus, “Approximation theory of the MLP model in neural networks,” Acta Numerica,
vol. 8, pp. 143–195, 1999.

[55] E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations. New York: McGraw-Hill, 1955.

[56] W. Klonowski, “Simplifying principles for chemical and enzyme reaction kinetics,” Biophysical
Chemistry, vol. 18, pp. 73–87, 1983.

[57] A. N. Tikhonov, “Systems of differential equations containing small parameters in the derivatives,”
Mat. Sb. (N.S.), vol. 31(73), pp. 575–586, 1952.

A Appendix: Single-layer RNCRN


Consider again the target system (7), and the single-layer RNCRN system (8), whose CRN is given by

$$\begin{aligned}
&\emptyset \xrightarrow{\beta_i} X_i, & &X_i + Y_j \xrightarrow{|\alpha_{i,j}|} Y_j + X_i + \mathrm{sign}(\alpha_{i,j}) X_i,\\
&\emptyset \xrightarrow{\gamma/\mu} Y_j, & &X_i + Y_j \xrightarrow{|\omega_{j,i}|/\mu} X_i + Y_j + \mathrm{sign}(\omega_{j,i}) Y_j,\\
&Y_j \xrightarrow{|\omega_{j,0}|/\mu} Y_j + \mathrm{sign}(\omega_{j,0}) Y_j, & &2Y_j \xrightarrow{1/\mu} Y_j, \tag{16}
\end{aligned}$$

for $i = 1, 2, \ldots, N$ and $j = 1, 2, \ldots, M$. We let $x = (x_1, x_2, \ldots, x_N) \in \mathbb{R}^N$, and collect all the rate coefficients $\alpha_{i,j}$ and $\omega_{j,k}$ respectively into suitable vectors $\alpha \in \mathbb{R}^{NM}$ and $\omega \in \mathbb{R}^{M(N+1)}$; similarly, we let $\beta = (\beta_1, \beta_2, \ldots, \beta_N) \in \mathbb{R}^N_{\geq}$. Furthermore, we define the reduced vector field by

$$g_i(x) = g_i(x; \alpha, \beta, \gamma, \omega) = \beta_i + x_i \sum_{j=1}^{M} \alpha_{i,j} \, \sigma_\gamma\left( \sum_{k=1}^{N} \omega_{j,k} x_k + \omega_{j,0} \right), \qquad \text{for } i = 1, 2, \ldots, N, \tag{17}$$

which appears on the right-hand side of (9).
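For concreteness, a direct transcription of the reduced vector field (17) in Python is given below; the array shapes and argument names are our own conventions, not the paper's.

```python
# The reduced vector field (17), evaluated with the perceptrons in their
# quasi-static states (6).
import numpy as np

def sigma_gamma(z, gamma):
    return 0.5 * (z + np.sqrt(z**2 + 4.0 * gamma))

def g(x, alpha, beta, gamma, omega0, omega):
    """Reduced vector field (17).

    x:      (N,) executive concentrations
    alpha:  (N, M) coefficients alpha_{i,j}
    beta:   (N,) coefficients beta_i
    omega0: (M,) biases omega_{j,0}
    omega:  (M, N) weights omega_{j,k}
    """
    y = sigma_gamma(omega @ x + omega0, gamma)  # quasi-static perceptron states
    return beta + x * (alpha @ y)
```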

Theorem A.1. (Single-layer RNCRN) Consider the target system (7) on a fixed compact set $K = K_1 \times K_2 \times \ldots \times K_N \subset \mathbb{R}^N_{>}$ in the state-space, with vector field $f_1, f_2, \ldots, f_N$ Lipschitz-continuous on K. Consider also the single-layer RNCRN system (8) with rate coefficients $\beta = \beta^* \in \mathbb{R}^N_{\geq}$ and $\gamma = \gamma^* > 0$ fixed.

(i) Quasi-static approximation. Consider the reduced vector field (17). Let ε > 0 be any given tolerance. Then, for every sufficiently large M > 0 there exist $\alpha^* \in \mathbb{R}^{NM}$ and $\omega^* \in \mathbb{R}^{M(N+1)}$ such that

$$\max_{x \in K} |g_i(x; \alpha^*, \beta^*, \gamma^*, \omega^*) - f_i(x)| \leq \varepsilon \qquad \text{for all } i = 1, 2, \ldots, N. \tag{18}$$

(ii) Dynamical approximation. Assume that the solution of (7) exists for all t ∈ [0, T] for some T > 0, and that $\bar{x}_i(t) \in K_i$ for all t ∈ [0, T] for all i = 1, 2, . . . , N. Then, for every sufficiently small $\varepsilon = \varepsilon^* > 0$ fixed there exists $\mu_0 > 0$ such that for all $\mu \in (0, \mu_0)$ system (8) has a unique solution $x_i(t) \in K_i$ for all t ∈ [0, T] for all i = 1, 2, . . . , N, and

$$\max_{t \in [0,T]} |x_i(t; \alpha^*, \beta^*, \gamma^*, \omega^*, \mu) - \bar{x}_i(t)| \leq c_1 \varepsilon^* + c_2 \mu \qquad \text{for all } i = 1, 2, \ldots, N, \tag{19}$$

where constants $c_1$ and $c_2$ are independent of µ.

Proof.

(i) Quasi-static approximation. Consider the continuous function $h_i : K \to \mathbb{R}$ defined by $h_i(x) = (f_i(x) - \beta_i^*)/x_i$. Since the activation function $\sigma_\gamma$, defined by (6), is continuous and non-polynomial, it follows from [54][Theorem 3.1] that for any ε > 0 there exist M > 0, coefficients $\alpha_{i,j}^*, \omega_{j,k}^* \in \mathbb{R}$ and a continuous function $\rho_i(x)$ such that

$$h_i(x) = \sum_{j=1}^{M} \alpha_{i,j}^* \, \sigma_{\gamma^*}\left( \sum_{k=1}^{N} \omega_{j,k}^* x_k + \omega_{j,0}^* \right) + \rho_i(x) \qquad \text{for all } i = 1, 2, \ldots, N, \tag{20}$$

and $\max_{x \in K} |\rho_i(x)| \leq \varepsilon / X_i$, with $X_i = \max_{x_i \in K_i} x_i$. Equation (20) implies (18).

(ii) Dynamical approximation. It follows from (18) and regular perturbation theory [55] that there exists $\varepsilon_0 > 0$ such that for all $\varepsilon \in (0, \varepsilon_0)$

$$\max_{t \in [0,T]} |\tilde{x}_i(t; \alpha^*, \beta^*, \gamma^*, \omega^*) - \bar{x}_i(t)| \leq c_1 \varepsilon \qquad \text{for all } i = 1, 2, \ldots, N, \tag{21}$$

for some ε-independent constant $c_1 > 0$, where $\tilde{x}_i(t) = \tilde{x}_i(t; \alpha^*, \beta^*, \gamma^*, \omega^*)$ satisfies the reduced system (9). In what follows, we fix $\varepsilon = \varepsilon^* \in (0, \varepsilon_0)$.

Consider the fast (adjoined) system from (8), defined by

$$\frac{dy_j}{dt} = \gamma^* + y_j \left( \sum_{i=1}^{N} \omega_{j,i}^* x_i + \omega_{j,0}^* \right) - y_j^2, \qquad y_j(0) = b_j, \qquad \text{for } j = 1, 2, \ldots, M, \tag{22}$$

where $x_1, x_2, \ldots, x_N$ are parameters. The ODEs from (22) are decoupled, and each one has a unique non-negative continuously differentiable equilibrium $y_j = \sigma_{\gamma^*}\big( \sum_{k=1}^{N} \omega_{j,k}^* x_k + \omega_{j,0}^* \big)$, which is stable for all non-negative initial conditions $b_j \geq 0$. It follows from singular perturbation theory (Tikhonov's theorem) [56, 57] that there exists $\mu_0 > 0$ such that for all $\mu \in (0, \mu_0)$ system (8) has a unique solution in K over the time-interval [0, T], and

$$\max_{t \in [0,T]} |x_i(t; \alpha^*, \beta^*, \gamma^*, \omega^*, \mu) - \tilde{x}_i(t; \alpha^*, \beta^*, \gamma^*, \omega^*)| \leq c_2 \mu \qquad \text{for all } i = 1, 2, \ldots, N, \tag{23}$$

for some µ-independent constant $c_2 > 0$. Using $|x_i - \bar{x}_i| \leq |x_i - \tilde{x}_i| + |\tilde{x}_i - \bar{x}_i|$ and (21) and (23), one obtains (19).

B Appendix: Multi-layer RNCRN


Multiple layers of perceptrons are common in deep ANNs [4]. Similarly, we construct multi-layer RNCRNs by using the outputs, $Y_1^{(1)}, \ldots, Y_{M_1}^{(1)}$, of a given layer as inputs to the next layer, $Y_1^{(2)}, \ldots, Y_{M_2}^{(2)}$. The final layer, $Y_1^{(L)}, \ldots, Y_{M_L}^{(L)}$, then feeds back to the executive species $X_1, \ldots, X_N$. Stated rigorously as an RRE, we define an L-layered RNCRN with N executive species as

$$\begin{aligned}
\frac{dx_i}{dt} &= \beta_i + x_i \sum_{k=1}^{M_L} \alpha_{i,k} y_k^{(L)}, & x_i(0) &= a_i, \quad \text{for } i = 1, 2, \ldots, N,\\
\mu \frac{dy_{j_1}^{(1)}}{dt} &= \gamma + \left( \sum_{i=1}^{N} \omega_{j_1,i}^{(1)} x_i + \omega_{j_1,0}^{(1)} \right) y_{j_1}^{(1)} - \big(y_{j_1}^{(1)}\big)^2, & y_{j_1}^{(1)}(0) &= b_{j_1}^{(1)} \geq 0, \quad \text{for } j_1 = 1, 2, \ldots, M_1,\\
&\;\;\vdots\\
\mu \frac{dy_{j_L}^{(L)}}{dt} &= \gamma + \left( \sum_{k=1}^{M_{L-1}} \omega_{j_L,k}^{(L)} y_k^{(L-1)} + \omega_{j_L,0}^{(L)} \right) y_{j_L}^{(L)} - \big(y_{j_L}^{(L)}\big)^2, & y_{j_L}^{(L)}(0) &= b_{j_L}^{(L)} \geq 0, \quad \text{for } j_L = 1, 2, \ldots, M_L. \tag{24}
\end{aligned}$$

The index $i = 1, \ldots, N$ enumerates over the executive species, whose molecular concentrations are $x_i = x_i(t) \geq 0$. The index $l = 1, \ldots, L$ enumerates over the number of layers in the L-layered RNCRN. The compound indices $j_l = 1, \ldots, M_l$ enumerate over the chemical perceptrons within a given layer (each layer may have a different number of chemical perceptrons). The molecular concentration of the chemical perceptrons is $y_{j_l}^{(l)} = y_{j_l}^{(l)}(t) \geq 0$. There are several parameters: $\beta_i \geq 0$, $\gamma > 0$, $\alpha_{i,j_L} \in \mathbb{R}$, and $\omega_{j_l,j_{l-1}}^{(l)} \in \mathbb{R}$.
Example chemical reaction network. A CRN with the same RRE as equation (24) can be constructed as

$$\begin{aligned}
&\emptyset \xrightarrow{\beta_i} X_i, & &X_i + Y_{j_L}^{(L)} \xrightarrow{|\alpha_{i,j_L}|} Y_{j_L}^{(L)} + X_i + \mathrm{sign}(\alpha_{i,j_L}) X_i,\\
&\emptyset \xrightarrow{\gamma/\mu} Y_{j_1}^{(1)}, & &X_i + Y_{j_1}^{(1)} \xrightarrow{|\omega_{j_1,i}^{(1)}|/\mu} X_i + Y_{j_1}^{(1)} + \mathrm{sign}(\omega_{j_1,i}^{(1)}) Y_{j_1}^{(1)},\\
&Y_{j_1}^{(1)} \xrightarrow{|\omega_{j_1,0}^{(1)}|/\mu} Y_{j_1}^{(1)} + \mathrm{sign}(\omega_{j_1,0}^{(1)}) Y_{j_1}^{(1)}, & &2Y_{j_1}^{(1)} \xrightarrow{1/\mu} Y_{j_1}^{(1)},\\
&\emptyset \xrightarrow{\gamma/\mu} Y_{j_l}^{(l)}, & &Y_{j_{l-1}}^{(l-1)} + Y_{j_l}^{(l)} \xrightarrow{|\omega_{j_l,j_{l-1}}^{(l)}|/\mu} Y_{j_{l-1}}^{(l-1)} + Y_{j_l}^{(l)} + \mathrm{sign}(\omega_{j_l,j_{l-1}}^{(l)}) Y_{j_l}^{(l)},\\
&Y_{j_l}^{(l)} \xrightarrow{|\omega_{j_l,0}^{(l)}|/\mu} Y_{j_l}^{(l)} + \mathrm{sign}(\omega_{j_l,0}^{(l)}) Y_{j_l}^{(l)}, & &2Y_{j_l}^{(l)} \xrightarrow{1/\mu} Y_{j_l}^{(l)}, \tag{25}
\end{aligned}$$

for $i = 1, \ldots, N$ and $l = 2, \ldots, L$ such that $j_l = 1, \ldots, M_l$.


Reduced ODEs. If we formally let µ = 0 in (24) we can identify the multi-layered RNCRN’s

17
quasi-static state as
ML
dxi X (L)
= gi (x1 , . . . , xN ) = βi + xi αi,k yk , xi (0) = ai ≥ 0, for i = 1, 2, . . . , N,
dt
k=1
 v 
N
! u N !2
(1) 1  X (1) (1)
u X (1) (1) (1)
yj1 =  ωj1 ,i xi + ωj1 ,0 + t ωj1 ,i xi + ωj1 ,0 + 4γ  , yj1 (0) = bj1 ≥ 0,

2
i=1 i=1

..
.
  v u 2 
ML−1 u MX
L−1
(L) 1 X (L) (L−1) (L) u (L) (L−1) (L) (L)
yjL =  ωjL ,k yk + ωjL ,0  + t ωjL ,k yk + ωjL ,0  + 4γ  , yjL (0) = bjL ≥ 0

2
k=1 k=1

(26)

for jl = 1, 2, . . . , Ml . This is the attractive equilibrium that is reached infinitely fast by the chemical
perceptrons if µ = 0. Equation (26) follows from (24) by applying singular perturbation theory
(Tikhonov’s theorem) [56, 57] and stability analysis from [14][Definition 4.7 and Proposition 4.9].
Given equation (26) and [54][Theorem 3.1] it follows that the approximation properties of multi-layer
RNCRNs can be formulated similarly to single-layer RNCRNs.
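For concreteness, the quasi-static state (26) can be evaluated by a plain feedforward pass through the layers; the sketch below (our own, with assumed array conventions) does exactly this.

```python
# Evaluating the multi-layer quasi-static vector field (26) by forward
# substitution: each layer's equilibrium is a feedforward pass through
# the chemical activation function.
import numpy as np

def sigma_gamma(z, gamma):
    return 0.5 * (z + np.sqrt(z**2 + 4.0 * gamma))

def quasi_static_vector_field(x, layers, alpha, beta, gamma):
    """layers: list of (omega, omega0) pairs, one per layer l = 1..L,
    where omega has shape (M_l, M_{l-1}) with M_0 = N, and alpha has
    shape (N, M_L)."""
    y = x
    for omega, omega0 in layers:
        y = sigma_gamma(omega @ y + omega0, gamma)  # layer-l equilibrium
    return beta + x * (alpha @ y)                   # executive dynamics in (26)
```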

C Appendix: Examples
C.1 Tri-stable target system
We use Algorithm 1 to approximate the target system (10) on K1 = [1, 18]. Tolerance ε ≈ 0.5 × 10−3
is met with an RNCRN with M = 4 chemical perceptrons and coefficients β1 = 0, γ = 1, and
     
−4.247 −4.968 0.511
 0.487   −3.590   1.043 
α1 =  1.363  , ω 0 = −11.236 , ω 1 =  1.078  ,
     (27)
0.185 10.249 −2.355

where α1 = (α1,1 , α1,2 , α1,3 , α1,4 )⊤ and ωk = (ω1,k , ω2,k , ω3,k , ω4,k )⊤ for k = 0, 1. The reduced ODE
is given by
4
dx̃1 X
= g1 (x̃1 ) = x̃1 α1,j σ1 (ωj,1 x̃1 + ωj,0 ) , (28)
dt
j=1

while the full ODEs read

$$\frac{dx_1}{dt} = x_1 \left( \sum_{j=1}^{4} \alpha_{1,j} y_j \right), \qquad x_1(0) = a_1 \in K_1,$$

$$\mu \frac{dy_j}{dt} = 1 + (\omega_{j,1} x_1 + \omega_{j,0}) y_j - y_j^2, \qquad y_j(0) = b_j \geq 0, \qquad \text{for } j = 1, 2, 3, 4. \tag{29}$$

C.2 Oscillatory target system


We use Algorithm 1 to approximate the target system (13) on K1 × K2 = [0.1, 12] × [0.1, 12]. Tolerance ε ≈ 10¹ is met with an RNCRN with M = 6 chemical perceptrons and coefficients β1 = β2 = 1, γ = 1, and

$$\alpha_1 = \begin{pmatrix} -1.413 \\ -0.189 \\ -0.261 \\ -0.523 \\ 5.437 \\ 1.820 \end{pmatrix}, \quad \alpha_2 = \begin{pmatrix} -0.074 \\ 8.391 \\ -8.387 \\ -0.053 \\ 0.226 \\ 0.307 \end{pmatrix}, \quad \omega_0 = \begin{pmatrix} -1.744 \\ -1.904 \\ 0.753 \\ -6.183 \\ 3.721 \\ -2.744 \end{pmatrix}, \quad \omega_1 = \begin{pmatrix} 5.908 \\ 0.244 \\ -0.299 \\ 1.411 \\ -1.820 \\ 0.495 \end{pmatrix}, \quad \omega_2 = \begin{pmatrix} 0.061 \\ -1.009 \\ -0.879 \\ 0.716 \\ -0.221 \\ 0.152 \end{pmatrix}, \tag{30}$$

where αi = (αi,1 , αi,2 , . . . , αi,6 )⊤ for i = 1, 2, and ωk = (ω1,k , ω2,k , . . . , ω6,k )⊤ for k = 0, 1, 2. The
reduced ODEs are given by

$$\begin{aligned}
\frac{d\tilde{x}_1}{dt} &= g_1(\tilde{x}_1, \tilde{x}_2) = 1 + \tilde{x}_1 \sum_{j=1}^{6} \alpha_{1,j} \, \sigma_1(\omega_{j,1} \tilde{x}_1 + \omega_{j,2} \tilde{x}_2 + \omega_{j,0}),\\
\frac{d\tilde{x}_2}{dt} &= g_2(\tilde{x}_1, \tilde{x}_2) = 1 + \tilde{x}_2 \sum_{j=1}^{6} \alpha_{2,j} \, \sigma_1(\omega_{j,1} \tilde{x}_1 + \omega_{j,2} \tilde{x}_2 + \omega_{j,0}), \tag{31}
\end{aligned}$$

while the full ODEs read

$$\begin{aligned}
\frac{dx_1}{dt} &= 1 + x_1 \left( \sum_{j=1}^{6} \alpha_{1,j} y_j \right), \qquad \frac{dx_2}{dt} = 1 + x_2 \left( \sum_{j=1}^{6} \alpha_{2,j} y_j \right),\\
\mu \frac{dy_j}{dt} &= 1 + \left( \sum_{i=1}^{2} \omega_{j,i} x_i + \omega_{j,0} \right) y_j - y_j^2, \qquad \text{for } j = 1, 2, \ldots, 6. \tag{32}
\end{aligned}$$

Here, we assume general initial concentrations: x1 (0) = a1 ∈ K1 , x2 (0) = a2 ∈ K2 , and y1 (0) =
b1 ≥ 0, y2 (0) = b2 ≥ 0, y3 (0) = b3 ≥ 0, y4 (0) = b4 ≥ 0, y5 (0) = b5 ≥ 0, and y6 (0) = b6 ≥ 0. The
coefficients were rounded to 3 decimal places before being used in simulations.

C.3 Chaotic target system


We use Algorithm 1 to approximate the target system (14) on x1 ∈ K1, x2 ∈ K2 and x3 ∈ K3 with K1 = K2 = K3 = [0.01, 1]. Tolerance ε ≈ 10² is met with an RNCRN with M = 5 chemical perceptrons and coefficients β1 = β2 = β3 = 0, γ = 1, and

$$\alpha_1 = \begin{pmatrix} -0.272 \\ 2.996 \\ 0.862 \\ -0.244 \\ 1.276 \end{pmatrix}, \quad \alpha_2 = \begin{pmatrix} 0.109 \\ 24.039 \\ -5.668 \\ -0.057 \\ -9.584 \end{pmatrix}, \quad \alpha_3 = \begin{pmatrix} 0.026 \\ -0.529 \\ -1.101 \\ 0.034 \\ 1.065 \end{pmatrix},$$

$$\omega_0 = \begin{pmatrix} 0.284 \\ -1.589 \\ -0.178 \\ 1.212 \\ -0.707 \end{pmatrix}, \quad \omega_1 = \begin{pmatrix} -5.049 \\ 0.148 \\ 0.506 \\ 15.973 \\ -1.151 \end{pmatrix}, \quad \omega_2 = \begin{pmatrix} 8.895 \\ -2.951 \\ -4.504 \\ -7.781 \\ -0.606 \end{pmatrix}, \quad \omega_3 = \begin{pmatrix} -0.068 \\ -0.525 \\ 0.329 \\ -0.027 \\ 0.199 \end{pmatrix}, \tag{33}$$

where αi = (αi,1 , αi,2 , . . . , αi,5 )⊤ for i = 1, 2, 3, and ωk = (ω1,k , ω2,k , . . . , ω5,k )⊤ for k = 0, 1, 2, 3.
The reduced ODEs are given by

$$\frac{d\tilde{x}_i}{dt} = g_i(\tilde{x}_1, \tilde{x}_2, \tilde{x}_3) = \tilde{x}_i \sum_{j=1}^{5} \alpha_{i,j} \, \sigma_1(\omega_{j,1} \tilde{x}_1 + \omega_{j,2} \tilde{x}_2 + \omega_{j,3} \tilde{x}_3 + \omega_{j,0}), \qquad \text{for } i = 1, 2, 3, \tag{34}$$

while the full ODEs read

$$\begin{aligned}
\frac{dx_i}{dt} &= x_i \left( \sum_{j=1}^{5} \alpha_{i,j} y_j \right), \qquad \text{for } i = 1, 2, 3,\\
\mu \frac{dy_j}{dt} &= 1 + \left( \sum_{i=1}^{3} \omega_{j,i} x_i + \omega_{j,0} \right) y_j - y_j^2, \qquad \text{for } j = 1, 2, \ldots, 5. \tag{35}
\end{aligned}$$

Here, we assume general initial concentrations: x1(0) = a1 ∈ K1, x2(0) = a2 ∈ K2, x3(0) = a3 ∈ K3, and y1(0) = b1 ≥ 0, y2(0) = b2 ≥ 0, y3(0) = b3 ≥ 0, y4(0) = b4 ≥ 0, and y5(0) = b5 ≥ 0. The coefficients were rounded to 3 decimal places before being used in simulations.
