S5 Control energy
S5.1 Derivation of control energy for temporal networks
S5.2 Solving the quadratic problem
S5.3 Minimum energy needed to control temporal networks
S5.4 Minimum energy needed to control static networks
S10 Figures
S1 Controllable space

For a dynamical system of the form ẋ(t) = Ax(t) + Bu(t), the system state at time t is given by x(t) = e^{A(t−t_0)} x_0 + ∫_{t_0}^{t} e^{A(t−s)} B u(s) ds, with the initial state x(t_0) = x_0. For temporal networks we can write the system state after the mth snapshot as x(t_m) = e^{A_m Δt_m} x(t_{m−1}) + ∫_{t_{m−1}}^{t_m} e^{A_m(t_m−s)} B_m u_m(s) ds. After M snapshots, the final state x_f at time t_M is

x_f = ( ∏_{m=M}^{1} e^{A_m Δt_m} ) x_0 + Σ_{m=1}^{M−1} ( ∏_{j=M}^{m+1} e^{A_j Δt_j} ) ∫_{t_{m−1}}^{t_m} e^{A_m(t_m−s)} B_m u_m(s) ds + ∫_{t_{M−1}}^{t_M} e^{A_M(t_M−s)} B_M u_M(s) ds.    (S1)
Setting x_0 = 0 in (S1), the final state reduces to

x_f = Σ_{m=1}^{M−1} ( ∏_{j=M}^{m+1} e^{A_j Δt_j} ) ∫_{t_{m−1}}^{t_m} e^{A_m(t_m−s)} B_m u_m(s) ds + ∫_{t_{M−1}}^{t_M} e^{A_M(t_M−s)} B_M u_M(s) ds.

Similarly, requiring x_f = 0 in (S1), the initial states that can be steered to the origin satisfy

x_0 = − Σ_{m=1}^{M} ( ∏_{j=1}^{m} e^{−A_j Δt_j} ) ∫_{t_{m−1}}^{t_m} e^{A_m(t_m−s)} B_m u_m(s) ds.
Taken together, the set of states x_f reachable from x_0 = 0 under control inputs u_m(t) (m = 1, 2, ···, M) for a temporal network defined by {(A_m, B_m, Δt_m)}_{m=1}^{M} is

Ω = Σ_{m=1}^{M−1} ( ∏_{j=M}^{m+1} e^{A_j Δt_j} ) { x | x = ∫_{t_{m−1}}^{t_m} e^{A_m(t_m−s)} B_m u_m(s) ds, for ∀ u_m } + { x | x = ∫_{t_{M−1}}^{t_M} e^{A_M(t_M−s)} B_M u_M(s) ds, for ∀ u_M },

while the set of initial states x_0 that can be driven to x_f = 0 is

Σ_{m=1}^{M} ( ∏_{j=1}^{m} e^{−A_j Δt_j} ) { x | x = ∫_{t_{m−1}}^{t_m} e^{A_m(t_m−s)} B_m u_m(s) ds, for ∀ u_m }.
We can simplify the integration term above using the following lemma.

Lemma 1: Given matrices A ∈ R^{N×N} and B ∈ R^{N×p}, for any 0 ≤ t_0 < t_f < +∞, we have

{ x | x = ∫_{t_0}^{t_f} e^{A(t_f−s)} B u(s) ds, ∀ u } = ⟨A|B⟩,    (S2)

where

⟨A|B⟩ = ⟨A|R(B)⟩ = R(B) + AR(B) + ··· + A^{N−1} R(B) = Σ_{i=0}^{N−1} A^i R(B),

with '+' corresponding to the direct summation of vector spaces and A^0 being the identity matrix. The column space of B_{N×p} is defined as R(B) = {Bv | v ∈ R^p}.

The above lemma can be derived from Lemma 2.10 in Ref. [31]. For the readers' convenience, we offer a proof here.

Proof: Denoting S = { x | x = ∫_{t_0}^{t_f} e^{A(t_f−s)} B u(s) ds, ∀ u }, from

e^A = Σ_{i=0}^{∞} (1/i!) A^i

we have

S = { x | x = ∫_{t_0}^{t_f} Σ_{i=0}^{∞} (1/i!) A^i (t_f − s)^i B u(s) ds, ∀ u }
  = { x | x = Σ_{i=0}^{∞} (1/i!) A^i B ∫_{t_0}^{t_f} (t_f − s)^i u(s) ds, ∀ u }
  ⊂ Σ_{i=0}^{N−1} A^i R(B) = ⟨A|B⟩,

where the last step uses the Cayley-Hamilton theorem, by which every power A^i with i ≥ N is a linear combination of A^0, A^1, ···, A^{N−1}.
To show the converse inclusion, define S_0 = ∫_{t_0}^{t_f} e^{A(t_f−s)} B B^T e^{A^T(t_f−s)} ds. If x belongs to the null space of S_0, we have

0 = x^T S_0 x = ∫_{t_0}^{t_f} x^T e^{A(t_f−s)} B B^T e^{A^T(t_f−s)} x ds = ∫_{t_0}^{t_f} ‖B^T e^{A^T(t_f−s)} x‖² ds,

which implies B^T e^{A^T(t_f−s)} x = 0 for all s ∈ [t_0, t_f]. This requires that all derivatives of B^T e^{A^T(t_f−s)} x with respect to s equal 0 at s = t_f, that is,

B^T x = 0,  B^T A^T x = 0,  ···,  B^T (A^T)^m x = 0,  ···,

which gives x ∈ ⟨A|B⟩^⊥, since N(B^T) = {x | B^T x = 0} = [R(B)]^⊥, where [Q]^⊥ denotes the orthogonal complement of the space Q. Conversely, if x ∈ ⟨A|B⟩^⊥, we have x ∈ N(S_0). Hence

N(S_0) = ⟨A|B⟩^⊥,

or equivalently,

R(S_0) = ⟨A|B⟩.

If x ∈ ⟨A|B⟩, there exists a vector z such that x = S_0 z. Then, using the control signal u(s) = B^T e^{A^T(t_f−s)} z, we have

x = S_0 z = ∫_{t_0}^{t_f} e^{A(t_f−s)} B B^T e^{A^T(t_f−s)} z ds = ∫_{t_0}^{t_f} e^{A(t_f−s)} B u(s) ds ∈ S.

Hence ⟨A|B⟩ ⊂ S, and combined with S ⊂ ⟨A|B⟩ this proves S = ⟨A|B⟩.
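As a numerical illustration of Lemma 1 (a minimal sketch, not part of the original derivation), one can compare the rank of the controllability matrix [B, AB, ···, A^{N−1}B], whose column space is ⟨A|B⟩, with the rank of the finite-horizon gramian S_0 for a randomly generated pair (A, B); the dimensions and horizon below are arbitrary choices:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, p = 5, 2
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, p))

# Controllability matrix [B, AB, ..., A^{N-1}B]; its column space is <A|B>.
ctrb = np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(N)])

# Finite-horizon gramian S0 = int_0^tf e^{A(tf-s)} B B^T e^{A^T(tf-s)} ds,
# approximated with a simple Riemann sum.
tf, n = 1.0, 2000
ds = tf / n
S0 = sum(expm(A * (tf - s)) @ B @ B.T @ expm(A.T * (tf - s)) * ds
         for s in np.linspace(0.0, tf, n, endpoint=False))

# Lemma 1 implies R(S0) = <A|B>, so the two ranks coincide.
print(np.linalg.matrix_rank(ctrb), np.linalg.matrix_rank(S0))
```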
Based on the above lemma, for the temporal network {(A_m, B_m, Δt_m)}_{m=1}^{M} we define

Ω = ⟨A_M|B_M⟩ + Σ_{m=1}^{M−1} ( ∏_{j=M}^{m+1} e^{A_j Δt_j} ) ⟨A_m|B_m⟩    (S3)

as the reachable space, and

Σ_{m=1}^{M} ( ∏_{j=1}^{m} e^{−A_j Δt_j} ) ⟨A_m|B_m⟩ = ⟨A_1|B_1⟩ + Σ_{m=2}^{M} ( ∏_{j=1}^{m−1} e^{−A_j Δt_j} ) ⟨A_m|B_m⟩    (S4)

as the controllable space. In the area of complex networks, for convenience, we set x_0 = 0, and note that here the dimensions of the controllable and reachable spaces are equal.
When all snapshots of a temporal network are identical, i.e. A_m = A_s, the temporal network reduces to a static one [22], and the spaces (S3) and (S4) reduce to

Ω_s = ⟨A_s|B⟩.    (S5)

It follows that a static network is controllable if and only if dim(Ω_s) = dim⟨A_s|B⟩ = N. It is natural to ask what the relation is between (S3) and its counterpart (S5). The answer is that the relationship between the controllable spaces of temporal (Ω_t) and static (Ω_s) networks is not determinate. Indeed, Fig. S1 shows a simple example illustrating that theoretically both Ω_t ⊋ Ω_s and Ω_t ⊊ Ω_s are possible.
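To make the comparison concrete, the following sketch (our own illustration; the matrices are arbitrary and not those of Fig. S1) evaluates dim(Ω_t) from Eq. (S3) for M = 2 and dim(Ω_s) = dim⟨A_s|B⟩, with the static counterpart taken here as the aggregate A_s = A_1 + A_2:

```python
import numpy as np
from scipy.linalg import expm

def krylov_space(A, B):
    """Basis whose column space is <A|B> = R(B) + A R(B) + ... + A^{N-1} R(B)."""
    N = A.shape[0]
    return np.hstack([np.linalg.matrix_power(A, i) @ B for i in range(N)])

rng = np.random.default_rng(1)
N, dt = 4, 1.0
B = np.zeros((N, 1)); B[0, 0] = 1.0                 # single driver node
A1 = rng.standard_normal((N, N))
A2 = rng.standard_normal((N, N))

# Temporal controllable space for M = 2, Eq. (S3): <A2|B> + e^{A2 dt} <A1|B>
Omega_t = np.hstack([krylov_space(A2, B), expm(A2 * dt) @ krylov_space(A1, B)])
# Static counterpart (S5), with As taken here as the aggregated matrix A1 + A2
Omega_s = krylov_space(A1 + A2, B)

print(np.linalg.matrix_rank(Omega_t), np.linalg.matrix_rank(Omega_s))
```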
ACM conference: This data set is from the ACM Hypertext 2009 conference provided
by the SocioPatterns collaboration [32], where the 113 conference attendees wearing radio
badges were monitored for face-to-face communications. Every communication between a pair
of attendees is stored as a triplet (t, i, j) (the number of face-to-face communications is 20,818),
meaning that people with anonymous IDs i and j chatted with each other during the 20-second
interval [t − 20s, t]. The data spans a time period of about 2.5 days (212,340s) starting from
8am on Jun 29th, 2009 [25]. The snapshot duration ∆t is chosen from 1000s to 212,340s
for each temporal network, yielding different temporal sequences with different numbers of
snapshots.
Student contacts: This data set consists of a sequence of time-stamped contacts between
126 students in three classes in a high school in Marseilles, France over 4 days in Dec. 2011
[33]. The format of the data is the same as that of the ACM conference data, and it is also provided by the SocioPatterns collaboration [32]. Here ∆t is chosen between 1000s and 326,450s.
Ant interactions: The interactions in this data set represent antenna-body contacts of
four colonies of the ant Temnothorax rugatulus [26]. We adopt the largest colony (colony 1)
as our data set, comprising 1,911 interactions between 89 ants over 1,438 seconds. Here the
duration of each antenna-body contact is neglected (Fig. S2). As with the ACM conference data, we generate the snapshots of the temporal network with ∆t chosen between 10s and 1438s.
Protein network: This dataset is based on the protein-protein interaction network of
Saccharomyces cerevisiae from the Database of Interacting Proteins (DIP), consisting of 5,023
proteins and 22,570 interactions. From different gene expression datasets, researchers construct
the dynamic protein-protein interactions by identifying the active time of each protein [27].
According to gene ontology, networks of proteins are then constructed based on three domains:
cellular component (CC), molecular function (MF), and biological process (BP). Here we have
33, 50, and 50 snapshots for CC, MF, and BP, where the number of related proteins is 84, 74,
and 85, respectively.
Technological network: Here we consider three datasets consisting of data packet ex-
changes between 25 wireless radios in an emulated mobile ad-hoc network (MANET) that
experiences a denial-of-service cyber-attack. Each dataset covers a time period of 900s, with
the cyber-attack occurring after approximately 300s of operation. The packets are generated
using real application and networking software running in a special test environment that emulates the packet loss characteristics of a wireless communication channel between radios in motion. A packet exchange is recorded by each network protocol that handles the packet. The applications generating the packets are specially designed test applications that are configured to model communication in a mobile wireless network that could be seen in a search and rescue mission. We construct the temporal network with 50 snapshots from three datasets named
1-ip6, 2-ip6, and 3-ip6, respectively. Each network contains the same set of 34 nodes.
Figures S2-S5 show, as a function of time, the interaction activity, degree distribution, av-
erage degree, and number of components of the aggregated networks for each of these datasets.
Table S1: Characteristics of the empirical data sets. N is the number of nodes, and M is the (maximum) number of snapshots. For the human and animal data sets, which are stored as sequences of interactions, M is determined by the time window we choose when aggregating the networks. Because the basic attributes of temporal networks depend strongly on ∆t, here we list only N and M; further information about the data is shown in Figs. S2 to S5.
duration of the interaction between i and j is now reduced by half. In our data, we run TR
according to the two cases above, and find our results are robust. This model is designed to
assess the causality between individual interactions, for example whether later contacts are triggered by earlier ones [19], or whether strong temporal correlations are embedded in the original data.
Randomly Permuted Times (RPT): Here we shuffle the timestamps of the contacts, leaving the sources and targets of the links unaltered. Note that RPT has the effect of destroying temporal patterns and erasing time correlations between contacts.
Randomized Edges (RE): In this model, we iteratively choose pairs of contacts (i, j) and (i', j') and replace them with (i, i')(j, j') or (i, j')(j, i') with equal probability, provided the change results in neither self-loops nor multiple edges. The durations of interactions are maintained from the point of view of the whole static network, while the number and durations of an individual node's contacts are generally altered. For example, for two contiguous contacts (t_1, i, j) and (t_2, i, j), RE may change (t_2, i, j) into (t_2, i, j'). A given node's degree may be conserved in data with low temporal resolution, while it is likely to change in high-resolution data.
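A minimal sketch of one rewiring step of RE (our own implementation, not the authors' code): contacts are stored as (t, i, j) triplets as in the empirical data, and the "multiple edges" condition is applied here at the level of duplicate triplets, which is one possible reading of that condition.

```python
import random

def randomize_edges(contacts, n_swaps, seed=0):
    """RE null-model sketch: pick two contacts (t1, i, j), (t2, i2, j2) and
    exchange endpoints as (i, i2)(j, j2) or (i, j2)(j, i2) with equal
    probability, rejecting self-loops and duplicate contacts."""
    rng = random.Random(seed)
    contacts = list(contacts)
    existing = set(contacts)
    done, attempts = 0, 0
    while done < n_swaps and attempts < 100 * n_swaps:
        attempts += 1
        a, b = rng.sample(range(len(contacts)), 2)
        t1, i, j = contacts[a]
        t2, i2, j2 = contacts[b]
        if rng.random() < 0.5:
            new_a, new_b = (t1, i, i2), (t2, j, j2)
        else:
            new_a, new_b = (t1, i, j2), (t2, j, i2)
        if new_a[1] == new_a[2] or new_b[1] == new_b[2]:
            continue                       # would create a self-loop
        if new_a in existing or new_b in existing:
            continue                       # would duplicate an existing contact
        existing.discard(contacts[a]); existing.discard(contacts[b])
        contacts[a], contacts[b] = new_a, new_b
        existing.update((new_a, new_b))
        done += 1
    return contacts

# example: rewire a tiny contact list
print(randomize_edges([(1, 'A', 'B'), (2, 'C', 'D'), (3, 'A', 'C')], n_swaps=2))
```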
Randomized Edges and Randomly Permuted Times (RERPT): Equivalent to RE
followed by RPT.
Since the raw PPI and technology data are already represented as network snapshots, we
first extract the corresponding contact sequence and use the index of a contact’s containing
snapshot as its interaction time. That is, each link (between i and j) in snapshot m is represented by the triplet (m, i, j).
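A sketch of this conversion, assuming each snapshot is given as an undirected, weighted adjacency matrix (the function name and representation are ours):

```python
import numpy as np

def snapshots_to_contacts(snapshots):
    """Turn a list of (undirected, weighted) adjacency matrices into
    (m, i, j) triplets, using the snapshot index m as the interaction time."""
    contacts = []
    for m, A in enumerate(snapshots, start=1):
        rows, cols = np.nonzero(np.triu(A, k=1))   # each undirected link once
        contacts.extend((m, int(i), int(j)) for i, j in zip(rows, cols))
    return contacts

# example: two 3-node snapshots
A1 = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
A2 = np.array([[0, 0, 1], [0, 0, 0], [1, 0, 0]])
print(snapshots_to_contacts([A1, A2]))   # [(1, 0, 1), (1, 1, 2), (2, 0, 2)]
```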
For the empirical data we considered, the effects of the above randomizations on the average
degree and number of components of the aggregated networks are shown in Figs. S4 and S5.
The schematic illustration of methods used to randomize temporal networks is shown in Fig. S7.
main text, the mechanism we illustrate in Fig. 2 is the rule rather than the exception, which
makes real temporal networks typically more controllable than their static counterparts. Below
we show additional analyses to check the robustness of our results.
As we only show temporal networks corresponding to a single ∆t for the ACM conference and ant interactions data in the main text, more cases of ∆t are given in Fig. S10. The result demonstrated in Fig. 3 (namely that St < Ss) holds for other values of ∆t.
For the supplementary student interactions data we analyzed, the corresponding values of St and Ss are shown in Fig. S9C. As with the ant interactions data, the interaction times are a sequence of discrete time points rather than a time interval like the communications between conference attendees and students. We give the results based on the original data format in Fig. 2; assigning a small finite duration to each contact generates the result in Fig. S10. The robustness of the results for other values of the snapshot duration has also been verified. The results shown in Fig. S10 corroborate those in Fig. 2B, suggesting that the result given in the main text does not depend on the duration of the interactions.
For a temporal network with M snapshots, we define St (Ss ) to be equal to M if the
corresponding temporal (static) networks are not controllable even upon reaching (aggregating)
the final snapshot M . In this case, the number of snapshots required for control is larger than
M , or equivalently, more driver nodes are needed. For the protein and technological networks,
we find many cases where Ss = M in Fig. 2, thus we performed additional analysis by adding
more driver nodes and thereby decreasing St and Ss . The results are shown in Fig. S11.
Here we employ a toy model to explore the insight obtained above from the empirical datasets. We first generate a set of M snapshots randomly and independently according to the G(N, p) model [37], where the link weights are assigned independently and randomly from (0, 1). We then randomly generate Ns snapshot sequences, calculating St and Ss for each under the same set of driver nodes. We find that with more snapshots (larger M) the likelihood that St > Ss drops to zero (Fig. S12).
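The following sketch illustrates this toy-model computation (our own implementation): St is taken to be the number of snapshots after which the temporal controllable space (Eq. (S3)) first reaches full dimension, Ss the number of snapshots whose aggregate (sum of weighted adjacency matrices) first yields a controllable static network, and both default to M if full controllability is never reached; the parameters follow Fig. S12.

```python
import numpy as np
from scipy.linalg import expm

def krylov(A, B):
    """Columns spanning <A|B>; blocks are normalized only for conditioning."""
    blocks, X = [], B.copy()
    for _ in range(A.shape[0]):
        blocks.append(X / (np.linalg.norm(X) + 1e-300))
        X = A @ X
    return np.hstack(blocks)

def S_temporal(snapshots, B, dt=1.0):
    """Snapshots elapsed until the temporal controllable space (Eq. S3)
    reaches full dimension; returns M if that never happens."""
    N = snapshots[0].shape[0]
    basis = np.zeros((N, 0))
    for m, A in enumerate(snapshots, start=1):
        basis = np.hstack([krylov(A, B), expm(A * dt) @ basis])
        basis = basis / np.maximum(np.linalg.norm(basis, axis=0), 1e-300)
        if np.linalg.matrix_rank(basis) == N:
            return m
    return len(snapshots)

def S_static(snapshots, B):
    """Snapshots aggregated until <sum_k A_k | B> reaches full dimension."""
    N = snapshots[0].shape[0]
    A_agg = np.zeros((N, N))
    for m, A in enumerate(snapshots, start=1):
        A_agg += A
        if np.linalg.matrix_rank(krylov(A_agg, B)) == N:
            return m
    return len(snapshots)

rng = np.random.default_rng(0)
N, M, p_link, Nd = 50, 10, 0.03, 3                  # parameters of Fig. S12
B = np.zeros((N, Nd))
B[rng.choice(N, Nd, replace=False), np.arange(Nd)] = 1.0

def gnp_snapshot():
    mask = np.triu(rng.random((N, N)) < p_link, 1)
    W = rng.random((N, N)) * mask
    return W + W.T                                   # weighted undirected G(N, p) snapshot

snaps = [gnp_snapshot() for _ in range(M)]
print(S_temporal(snaps, B), S_static(snaps, B))
```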
This result can be qualitatively understood as follows. We know that the emergence (or
aggregation) of new snapshots with independent edge weights will never shrink the controllable
subspace for temporal (or static) networks. In the temporal case, the controllable subspace will
typically expand after the emergence of a new snapshot. Yet, in static networks, aggregating a new snapshot usually switches the controllable subspace from one to another without necessarily adding additional dimensions (Fig. 1C and 1D). With increasing M, temporal (static)
networks will have more snapshots to explore (aggregate), which offers temporal networks
more opportunities to expand the controllable subspace. By definition, when the dimension of
the controllable subspace reaches the size of the system, the system becomes fully controllable.
Hence temporal networks will typically need fewer snapshots to achieve full controllability than
static networks, implying that St < Ss dominates for large M , consistent with our numerical
findings.
Finally, considering that the previous four randomization models presented in Sec. S3 mainly alter the network rather than the timing of the events, here we analyse a new model, Randomly Distributed Times (RDT, Fig. S7), which keeps the network unchanged but changes the timing of each event. In RDT, we randomly distribute time stamps using Poisson, uniform, and normal distributions. After generating a series of time stamps that follow the given distribution, we map every time stamp t'_i onto the observation time window [t_min, t_max] of the original data with the transformation (t'_i − t'_min)(t_max − t_min)/(t'_max − t'_min) + t_min for t'_min > 0, where t'_min (t'_max) is the minimal (maximal) value among the generated time stamps. Note that when t'_min ≤ 0 the expression becomes (t'_i + |t'_min|)(t_max − t_min)/(t'_max − t'_min) + t_min. After performing RDT on all datasets, we find that our main conclusion, that temporal networks reach controllability faster than the corresponding static networks, still holds (Fig. S13).
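The mapping of generated time stamps onto the observation window can be sketched as follows (our own code; the single affine map below handles any sign of t'_min, which slightly simplifies the two cases written above):

```python
import numpy as np

def rescale_times(t_generated, t_min, t_max):
    """Affinely map generated time stamps onto the observation window
    [t_min, t_max] of the original data (RDT null model)."""
    t = np.asarray(t_generated, dtype=float)
    return (t - t.min()) * (t_max - t_min) / (t.max() - t.min()) + t_min

rng = np.random.default_rng(0)
raw = rng.poisson(lam=3, size=1000)            # Poisson-distributed raw time stamps
print(rescale_times(raw, 0.0, 212_340.0)[:5])  # stamps now lie in [0, 212340]
```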
S5 Control energy
For a single snapshot (A, B) (or, equivalently, a static network), the minimum energy for controlling the system from x_0 at t_0 to x_f at t_f corresponds to the unique input of the form u(t) = B^T e^{A^T(t_f−t)} c_s, where c_s = W_s^{−1}( x_f − e^{A(t_f−t_0)} x_0 ) and W_s = ∫_{t_0}^{t_f} e^{A(t_f−s)} B B^T e^{A^T(t_f−s)} ds [38, 39]. Here c_s is a constant vector determined by x_0, x_f, t_0, t_f, and the system's dynamics. For simplicity, we consider the case where each snapshot is controllable, which makes it possible to compare the energy for controlling temporal networks and the corresponding static networks (otherwise we cannot ensure that the static and temporal versions of the network are simultaneously controllable; Fig. 1D, 1F and Sec. S1.3).
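As a self-contained numerical check of this single-snapshot formula (a sketch with arbitrary random A and B, not the systems used in the figures), one can build W_s by quadrature, apply u(t) = B^T e^{A^T(t_f−t)} c_s, and verify that the state reaches x_f:

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, p, t0, tf = 4, 2, 0.0, 1.0
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, p))
x0 = rng.standard_normal(N)
xf = rng.standard_normal(N)

# Gramian Ws = int_{t0}^{tf} e^{A(tf-s)} B B^T e^{A^T(tf-s)} ds (Riemann sum)
n = 2000
s_grid = np.linspace(t0, tf, n, endpoint=False)
ds = (tf - t0) / n
Ws = sum(expm(A * (tf - s)) @ B @ B.T @ expm(A.T * (tf - s)) * ds for s in s_grid)
cs = np.linalg.solve(Ws, xf - expm(A * (tf - t0)) @ x0)

# Propagate x(tf) = e^{A(tf-t0)} x0 + int e^{A(tf-s)} B u(s) ds with the
# minimum-energy input u(s) = B^T e^{A^T(tf-s)} cs.
x_end = expm(A * (tf - t0)) @ x0 + sum(
    expm(A * (tf - s)) @ B @ (B.T @ expm(A.T * (tf - s)) @ cs) * ds for s in s_grid)
print(np.allclose(x_end, xf))   # True: the target state is reached
```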
According to the principle of optimality, if u(t) is the energy-optimal input to control a
temporal network, then the energy accumulated over each snapshot must also be minimal
for the control sub-problem of traveling between the states at the beginning and end of that
snapshot. Hence we can write the candidate energy optimal control signals for a temporal
network as
u(t) = B_m^T e^{A_m^T(t_m − t)} c_m,    for t_{m−1} ≤ t < t_m,  m = 1, 2, ···, M.
Note that the above form of the optimal input over each snapshot is derived based on the
principle of optimality combined with the (known) form of the optimal control signals in static
networks.
Using the following notation,

W_m = ∫_0^{Δt_m} e^{A_m s} B_m B_m^T e^{A_m^T s} ds,
W = diag( W_1, W_2, ···, W_M ),
S = ( ∏_{j=M}^{2} e^{A_j Δt_j},  ∏_{j=M}^{3} e^{A_j Δt_j},  ···,  e^{A_M Δt_M},  I ),
H = SW,
d = x_f − ( ∏_{m=M}^{1} e^{A_m Δt_m} ) x_0,
c^T = ( c_1^T, c_2^T, ···, c_M^T ),

we can write (S6) as d = Hc.
The energy to control the temporal network from x_0 at t_0 to x_f at t_f can then be written as

E(x_0, x_f) = (1/2) ∫_{t_0}^{t_f} u^T(t) u(t) dt = (1/2) c^T W c.

Hence, the minimum energy can be obtained by solving the quadratic programming problem

min E(x_0, x_f) = (1/2) c^T W c
s.t.  Hc = d.    (S7)

Since W is symmetric and positive definite, we can write W = UΛU^T with U orthogonal and Λ diagonal, and substitute x = √Λ U^T c, so that (S7) becomes

min E(x_0, x_f) = (1/2) c^T W c = (1/2) c^T UΛU^T c = (1/2) x^T x
s.t.  HU(√Λ)^{−1} x = Kx = d.    (S8)
Introducing a Lagrange multiplier vector v, when the Lagrangian f(x, v) = (1/2) x^T x + v^T (Kx − d) reaches its minimum, the following relations must be satisfied:

∂f(x, v)/∂x* = x* + K^T v* = 0,    (S9)
∂f(x, v)/∂v* = Kx* − d = 0.    (S10)

Multiplying (S9) by K and using (S10) gives

Kx* + KK^T v* = 0,
Kx* = d.

If KK^T is non-singular, v* = −(KK^T)^{−1} d, and then according to (S9) we have

x* = K^T (KK^T)^{−1} d.    (S11)

Since K = HU(√Λ)^{−1}, if we prove that KK^T is non-singular, then problem (S8) can be solved according to expression (S11). To do so, we employ the following Lemma 2.
Lemma 2: If K is a real matrix of size n × m, then the ranks of K and KK^T are equal.

This is a simple exercise in matrix theory; for completeness we give a proof.

Proof: The null space of K^T is given by the vectors x satisfying K^T x = 0, and the null space of KK^T is given by the vectors y satisfying KK^T y = 0. If K^T x = 0, then KK^T x = 0, i.e. x belongs to the null space of KK^T. Conversely, if KK^T y = 0, then y^T KK^T y = 0 = (K^T y)^T K^T y, i.e. K^T y = 0 and y belongs to the null space of K^T. Thus the two equations K^T x = 0 and KK^T y = 0 have the same solution set, so the number of independent vectors in a fundamental system of solutions is also the same, i.e. n − rank(K^T) = n − rank(KK^T). Hence rank(K) = rank(K^T) = rank(KK^T).
Based on the above Lemma 2, we have rank(KK^T) = rank(K) = rank(HU(√Λ)^{−1}), where K is a matrix of size N × NM. Since each snapshot is taken to be controllable, H has full row rank N; hence KK^T is non-singular and (S11) applies. The minimum energy is therefore

E(x_0, x_f) = (1/2) x*^T x* = (1/2) d^T [ K^T(KK^T)^{−1} ]^T [ K^T(KK^T)^{−1} ] d
            = (1/2) d^T (KK^T)^{−1} d = (1/2) d^T ( SUΛU^T UΛ^{−1}U^T UΛU^T S^T )^{−1} d
            = (1/2) d^T ( SWS^T )^{−1} d.

Taken together, the quadratic programming problem given in Eq. (S7) is solved analytically by the optimal solution

c* = S^T ( SWS^T )^{−1} d,    (S12)

with minimum energy

E*(x_0, x_f) = (1/2) d^T W_eff^{−1} d,    (S13)

where the N × N matrix W_eff = SWS^T is an "effective" gramian matrix, encoding the energy structure of the temporal network. Hereafter, we refer to the minimum control energy E*(x_0, x_f) as simply the control energy E.
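A quick numerical check of the least-norm solution (S11) and the resulting minimum cost (a sketch with arbitrary random data, not the K of a particular network):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 12                          # K has full row rank n < m
K = rng.standard_normal((n, m))
d = rng.standard_normal(n)

# Least-norm solution of min (1/2) x^T x  s.t.  K x = d, Eq. (S11)
x_star = K.T @ np.linalg.solve(K @ K.T, d)

# It satisfies the constraint and agrees with the pseudo-inverse solution
print(np.allclose(K @ x_star, d), np.allclose(x_star, np.linalg.pinv(K) @ d))

# Minimum cost (1/2) x*^T x* equals (1/2) d^T (K K^T)^{-1} d
print(np.isclose(0.5 * x_star @ x_star, 0.5 * d @ np.linalg.solve(K @ K.T, d)))
```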
For controllability in the case x_0 = 0, the above results reduce to

c*_c = S^T ( SWS^T )^{−1} x_f,

and

E = (1/2) x_f^T W_eff^{−1} x_f.    (S14)

When all snapshots are identical (A_m = A_s), our results reduce to the case of static networks. Indeed, for static networks the quadratic programming problem (S7) becomes

min E(x_0, x_f) = (1/2) c^T W c
s.t.  W_s c = d = x_f − e^{A_s(t_f−t_0)} x_0,

whose solution gives

E = (1/2) ( x_f − e^{A_s(t_f−t_0)} x_0 )^T W_s^{−1} ( x_f − e^{A_s(t_f−t_0)} x_0 ),    (S15)

where W_s = ∫_{t_0}^{t_f} e^{A_s(t_f−s)} B B^T e^{A_s^T(t_f−s)} ds, and e^{−A_s t_f} W_s e^{−A_s^T t_f} is the canonical controllability gramian matrix [38]. The result for M = 1 is the same as that given in [39].
As A_m = A_s for m = 1, 2, ···, M, we have

SWS^T = Σ_{i=1}^{M−1} ( ∏_{k=M}^{i+1} e^{A_k Δt_k} ) W_i ( ∏_{l=i+1}^{M} e^{A_l^T Δt_l} ) + W_M
      = Σ_{i=1}^{M−1} ∫_0^{Δt_i} e^{A_s(s + Σ_{k=i+1}^{M} Δt_k)} B B^T e^{A_s^T(s + Σ_{k=i+1}^{M} Δt_k)} ds + W_M
      = ∫_0^{Σ_{k=1}^{M} Δt_k} e^{A_s s} B B^T e^{A_s^T s} ds
      = W_s.

Thus the energy for controlling temporal networks (S14) reduces to the known result for static networks (S15).
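This reduction can be checked numerically for M = 2 identical snapshots (a minimal sketch; A, B, and ∆t are arbitrary):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, p, dt = 4, 2, 0.7
A = rng.standard_normal((N, N))
B = rng.standard_normal((N, p))

def gramian(A, B, T, n=2000):
    """W = int_0^T e^{A s} B B^T e^{A^T s} ds, simple Riemann sum."""
    ds = T / n
    return sum(expm(A * s) @ B @ B.T @ expm(A.T * s) * ds
               for s in np.linspace(0.0, T, n, endpoint=False))

W1 = W2 = gramian(A, B, dt)
Weff = expm(A * dt) @ W1 @ expm(A.T * dt) + W2   # S W S^T for A1 = A2 = A
Ws = gramian(A, B, 2 * dt)                       # static gramian over [0, 2*dt]
print(np.max(np.abs(Weff - Ws)))                 # small, up to quadrature error
```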
For an arbitrary final state x_f, the control energy normalized by the squared distance ‖x_f‖² is

E = x_f^T ( SWS^T )^{−1} x_f / ( 2 x_f^T x_f ),

which is bounded by

E_lower = 1/(2λ_max) ≤ E ≤ E_upper = 1/(2λ_min),

where E_lower and E_upper are the lower and upper bounds of E, and λ_max and λ_min are the maximum and minimum eigenvalues of W_eff, respectively. Note that W_eff is symmetric and positive definite because the underlying temporal (or static) network is taken to be controllable in our analysis. As such, the minimum eigenvalue of W_eff is greater than 0, meaning all quantities are well defined.
We can demonstrate numerically that λ_min is generally greater in temporal networks, and hence E_upper is usually smaller, often much smaller, than in their static counterparts (Fig. S14). This implies that the average control energy ⟨E⟩ is typically much less in a temporal network, despite the fact that E_upper may correspond to different "worst-case" directions in the static versus temporal case. Indeed, a typical control direction d will have some component lying along the eigenvector of W_eff corresponding to λ_min. Since the eigenvalues of W_eff typically vary over many orders of magnitude (Fig. S14), this worst-case direction dominates the control energy, and E_upper is expected to be representative of ⟨E⟩, an expectation borne out by our results (Fig. 3). This also explains why the scaling of ⟨E⟩ is determined by that of E_upper, which we can show decreases according to ⟨E⟩ ∼ ∆t^{−γ} for small ∆t before reaching a plateau (Fig. 3).
The robustness of these results has been checked for other networks, as shown in Figs. S16-S18. To test the extent to which our results rely on knowing the exact sequence of snapshots, we generate a set of M random snapshots and enumerate all possible switching sequences of the snapshots from the set. We find that the control energy does not vary notably as the switching sequence changes, both for real and synthetic networks (Fig. S19 and Fig. S20). This indicates that the control cost is principally a function of the set of snapshots, not their order.
For the simple case with two snapshots and one driver node, we employ the notation A(i, j) = A_ij = a_ij for the entry in the ith row and jth column of matrix A, and let A_1 = (a_ij)_{N×N} and A_2 = (b_ij)_{N×N}. As only the cth node receives input directly (a single driver node), we have B^T = (0, ···, 1, ···, 0), where the cth entry is 1 and all others are 0.
For an undirected temporal network, both A_1 and A_2 are symmetric with real entries, allowing us to write A_1 = PΘP^T and A_2 = QΓQ^T, where P = (P_ij)_{N×N}, Q = (Q_ij)_{N×N}, Θ = diag(θ_1, ···, θ_N), and Γ = diag(γ_1, ···, γ_N), and we assume λ(A_1): θ_1 > θ_2 > ··· > θ_N and λ(A_2): γ_1 > γ_2 > ··· > γ_N.
As there are two snapshots A_1 and A_2, we have

W_eff = SWS^T = e^{A_2 Δt} ( ∫_0^{Δt} e^{A_1 t} B B^T e^{A_1^T t} dt ) e^{A_2^T Δt} + ∫_0^{Δt} e^{A_2 t} B B^T e^{A_2^T t} dt =: W_eff1 + W_eff2.

Then we obtain

W_eff1 = Q e^{ΓΔt} Q^T P ( ∫_0^{Δt} e^{Θt} P^T B B^T P e^{Θt} dt ) P^T Q e^{ΓΔt} Q^T,
W_eff2 = Q ( ∫_0^{Δt} e^{Γt} Q^T B B^T Q e^{Γt} dt ) Q^T,

and, entry by entry,

W_eff1(i, j) = Σ_{r=1}^{N} Σ_{s=1}^{N} Σ_{m=1}^{N} Σ_{n=1}^{N} Q_ir e^{γ_r Δt} Q_sr { Σ_{k=1}^{N} Σ_{l=1}^{N} [ P_sk P_ck P_cl P_ml / (θ_k + θ_l) ] ( e^{(θ_k+θ_l)Δt} − 1 ) } Q_mn e^{γ_n Δt} Q_jn
            = Σ_{r=1}^{N} Σ_{s=1}^{N} Σ_{m=1}^{N} Σ_{n=1}^{N} Q_ir e^{(γ_r+γ_n)Δt} Q_sr Q_mn Q_jn Σ_{k=1}^{N} Σ_{l=1}^{N} [ P_sk P_ck P_cl P_ml / (θ_k + θ_l) ] ( e^{(θ_k+θ_l)Δt} − 1 ),    (S16)

W_eff2(i, j) = Σ_{k=1}^{N} Σ_{l=1}^{N} [ Q_ik Q_ck Q_cl Q_jl / (γ_k + γ_l) ] ( e^{(γ_k+γ_l)Δt} − 1 ).    (S17)
Generally, as Δt → 0, e^{(γ_k+γ_l)Δt} ≈ 1 + (γ_k + γ_l)Δt (and similarly for the θ's). Then we have

W_eff1(i, j) ≈ Σ_{r=1}^{N} Σ_{s=1}^{N} Σ_{m=1}^{N} Σ_{n=1}^{N} Q_ir e^{(γ_r+γ_n)Δt} Q_sr Q_mn Q_jn Σ_{k=1}^{N} Σ_{l=1}^{N} P_sk P_ck P_cl P_ml Δt    (S18)
            = Σ_{r=1}^{N} Σ_{n=1}^{N} Q_ir Q_cr Q_cn Q_jn [ 1 + (γ_r + γ_n)Δt ] Δt
            = Δt Σ_{r=1}^{N} Σ_{n=1}^{N} Q_ir Q_cr Q_cn Q_jn  (≡ Ω_1)
              + Δt² Σ_{r=1}^{N} Σ_{n=1}^{N} Q_ir Q_cr Q_cn Q_jn γ_r  (≡ Ω_2)
              + Δt² Σ_{r=1}^{N} Σ_{n=1}^{N} Q_ir Q_cr Q_cn Q_jn γ_n  (≡ Ω_3),

where the second line uses the orthogonality of P (Σ_k P_sk P_ck = δ_sc), and

W_eff2(i, j) ≈ Σ_{k=1}^{N} Σ_{l=1}^{N} Q_ik Q_ck Q_cl Q_jl Δt    (S19)
            = { Δt Σ_{k=1}^{N} Q_ik Q_ck   if j = c;   Δt Σ_{l=1}^{N} Q_cl Q_jl   if i = c;   0   otherwise }
            = { Δt   if i = j = c;   0   otherwise }.
Furthermore, we obtain

Ω_1 = { Δt   if i = j = c;   0   otherwise },
Ω_2 = { Δt² Σ_{r=1}^{N} Q_ir Q_cr γ_r = Δt² b_ic   if j = c;   0   otherwise },
Ω_3 = { Δt² Σ_{n=1}^{N} Q_cn Q_jn γ_n = Δt² b_cj   if i = c;   0   otherwise }.
Thus we have

W_eff(i, j) = W_eff1(i, j) + W_eff2(i, j) = { 2Δt + 2b_cc Δt²   if i = j = c;   b_cj Δt²   if i = c and j ≠ c;   b_ic Δt²   if j = c and i ≠ c;   0   otherwise },

and we get

W_eff = [ 0          ···  0            b_1c Δt²           0            ···  0
          ⋮               ⋮            ⋮                  ⋮                 ⋮
          0          ···  0            b_{c−1,c} Δt²      0            ···  0
          b_c1 Δt²   ···  b_{c,c−1} Δt²  2Δt + 2b_cc Δt²   b_{c,c+1} Δt² ···  b_cN Δt²
          0          ···  0            b_{c+1,c} Δt²      0            ···  0
          ⋮               ⋮            ⋮                  ⋮                 ⋮
          0          ···  0            b_Nc Δt²           0            ···  0 ],

i.e. the only non-zero entries of W_eff lie in the cth row and the cth column.
To determine the eigenvalues of W_eff, we only need to solve a small set of linear equations associated with the cth row and column. Therefore, in the limit Δt → 0, we have

E ≈ 1 / { 2 [ Δt + b_cc Δt² + √( Δt² + 2 b_cc Δt³ + Δt⁴ Σ_{i=1}^{N} b_ic² ) ] } ∼ Δt^{−1}.
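The predicted Δt^{−1} behaviour can be probed numerically with the two-snapshot effective gramian and a single driver node (a minimal sketch with arbitrary symmetric matrices, not the networks of the main text):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
N, c = 6, 0                                         # single driver node c
B = np.zeros((N, 1)); B[c, 0] = 1.0
A1 = rng.standard_normal((N, N)); A1 = (A1 + A1.T) / 2   # symmetric snapshots
A2 = rng.standard_normal((N, N)); A2 = (A2 + A2.T) / 2

def gramian(A, B, T, n=1000):
    ds = T / n
    return sum(expm(A * s) @ B @ B.T @ expm(A.T * s) * ds
               for s in np.linspace(0.0, T, n, endpoint=False))

for dt in [1e-1, 1e-2, 1e-3]:
    Weff = expm(A2 * dt) @ gramian(A1, B, dt) @ expm(A2.T * dt) + gramian(A2, B, dt)
    lam_max = np.linalg.eigvalsh(Weff).max()
    # lower bound of the energy, 1/(2*lam_max), grows roughly like 1/dt
    print(dt, 1.0 / (2.0 * lam_max))
```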
For the general case, we first assume that there are p ≤ N driver nodes and that the set of nodes receiving inputs is I = {i_1, i_2, ···, i_p}. Without loss of generality, we can relabel all nodes such that the driver nodes are j = 1, 2, ···, p (i.e. i_j = j), with node j receiving the input u_j(t). Thus we have B_ii = 1 for i = 1, 2, ···, p, with all other entries of B equal to 0. We first analyze the control energy with two snapshots, and later treat the more general case.
For the gramian entries W_eff1(i, j) and W_eff2(i, j) given in equations (S16) and (S17), we now have

W_eff1(i, j) = Σ_{r=1}^{N} Σ_{s=1}^{N} Σ_{m=1}^{N} Σ_{n=1}^{N} Q_ir e^{(γ_r+γ_n)Δt} Q_sr Q_mn Q_jn · Σ_{k=1}^{N} Σ_{x=1}^{N} Σ_{w=1}^{p} Σ_{y=1}^{N} Σ_{l=1}^{N} [ P_sk P_xk B_xw B_yw P_yl P_ml / (θ_k + θ_l) ] ( e^{(θ_k+θ_l)Δt} − 1 ),

W_eff2(i, j) = Σ_{k=1}^{N} Σ_{x=1}^{N} Σ_{w=1}^{p} Σ_{y=1}^{N} Σ_{l=1}^{N} [ Q_ik Q_xk B_xw B_yw Q_yl Q_jl / (γ_k + γ_l) ] ( e^{(γ_k+γ_l)Δt} − 1 ),

for p driver nodes. Denoting BB^T = (B'_xy)_{N×N}, we have

B'_xy = Σ_{s=1}^{p} B_xs B_ys = { 1   if 1 ≤ x = y ≤ p;   0   otherwise },

so B = ( I ; 0 ) (the p × p identity stacked on an (N − p) × p zero block) and BB^T = ( I  0 ; 0  0 ), where I is the identity matrix of size p.
Hence we obtain

W_eff1(i, j) = Σ_{r=1}^{N} Σ_{s=1}^{N} Σ_{m=1}^{N} Σ_{n=1}^{N} Q_ir e^{(γ_r+γ_n)Δt} Q_sr Q_mn Q_jn Σ_{k=1}^{N} Σ_{c=1}^{p} Σ_{l=1}^{N} [ P_sk P_ck P_cl P_ml / (θ_k + θ_l) ] ( e^{(θ_k+θ_l)Δt} − 1 ),

W_eff2(i, j) = Σ_{k=1}^{N} Σ_{c=1}^{p} Σ_{l=1}^{N} [ Q_ik Q_ck Q_cl Q_jl / (γ_k + γ_l) ] ( e^{(γ_k+γ_l)Δt} − 1 ).
As Δt → 0, we get

W_eff1(i, j) ≈ Ω_1 + Ω_2 + Ω_3,     W_eff2(i, j) ≈ { Δt   if i = j ∈ I;   0   otherwise },

with

Ω_1 = { Δt   if i = j ∈ I;   0   otherwise },
Ω_2 = { Δt² Σ_{r=1}^{N} Q_ir Q_jr γ_r = Δt² b_ij   if j ∈ I;   0   otherwise },
Ω_3 = { Δt² Σ_{n=1}^{N} Q_in Q_jn γ_n = Δt² b_ij   if i ∈ I;   0   otherwise }.
For details of the notation, we refer the reader to equations (S18) and (S19). Furthermore, we have

W_eff(i, j) = W_eff1(i, j) + W_eff2(i, j) = { 2Δt + 2b_ii Δt²   if i = j ∈ I;   b_ij Δt²   if i ∈ I and j ≠ i;   b_ij Δt²   if j ∈ I and i ≠ j;   0   otherwise },
and

W_eff = [ 2Δt + 2b_11 Δt²   ···   b_1p Δt²           b_{1,p+1} Δt²   ···   b_1N Δt²
          ⋮                       ⋮                  ⋮                     ⋮
          b_p1 Δt²          ···   2Δt + 2b_pp Δt²    b_{p,p+1} Δt²   ···   b_pN Δt²
          b_{p+1,1} Δt²     ···   b_{p+1,p} Δt²      0               ···   0
          ⋮                       ⋮                  ⋮                     ⋮
          b_N1 Δt²          ···   b_Np Δt²           0               ···   0 ],

so that, expanding with respect to the lower-right zero block, det(W_eff − λI) reduces to a p × p determinant multiplied by (−λ)^{N−p}.
S7.2.2 Arbitrary numbers of snapshots and driver nodes

For each snapshot we write A_j = U_j Λ_j U_j^T, with U_j = ( U^{(j)}_{r,s} )_{N×N} and Λ_j = diag( λ^{(j)}_1, ···, λ^{(j)}_N ). Then we have

W_effl(i, j) = Σ_{i_M, O_M, ···, O_{l+2}, i_{l+1}, O_{l+1}, O'_{l+1}, i'_{l+1}, O'_{l+2}, ···, O'_M}  U^{(M)}_{i, i_M} e^{λ^{(M)}_{i_M} Δt} U^{(M)}_{O_M, i_M} ··· U^{(l+1)}_{O_{l+2}, i_{l+1}} e^{λ^{(l+1)}_{i_{l+1}} Δt} U^{(l+1)}_{O_{l+1}, i_{l+1}} ··· ,

where W_eff(i, j) = Σ_{l=1}^{M} W_effl(i, j).

For small Δt, we have

[ 1/( λ^{(l)}_{i_l} + λ^{(l)}_{i'_l} ) ] [ e^{( λ^{(l)}_{i_l} + λ^{(l)}_{i'_l} )Δt} − 1 ] ≈ Δt,

which leads to E ∼ Δt^{−1} for small Δt according to the derivation provided in Sec. S7.2.1.

In summary, for short Δt we prove that, for temporal networks, the lower bound of the control energy follows

E ∼ Δt^{−1},

regardless of the number of driver nodes and snapshots. Note that this includes static networks as a special case, recapitulating the earlier observation that E ∼ Δt^{−1} [39].
a_m is chosen to stabilize the standalone dynamics of each snapshot m, reflecting the fact that most real systems have a stable state corresponding to the system's mode of normal operation [40]. Note, however, that our underlying theory also works for unstable dynamics.
For the control energy and locality analyses of the main text, we employ the Laplacian matrix with self-loops as the system matrix A_m of each snapshot. Specifically, L = (l_ij)_{N×N}, where

l_ij = { w_ij   if i ≠ j;   −Σ_{j=1, j≠i}^{N} w_ij   if i = j },

and w_ij is the (randomly chosen) weight of the edge from node j to node i. For an arbitrary vector ξ = (ξ_1, ξ_2, ···, ξ_N)^T, we have

ξ^T L ξ = Σ_{i=1}^{N} Σ_{j=1}^{N} l_ij ξ_i ξ_j = Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_ij ξ_i ξ_j − Σ_{i=1}^{N} Σ_{j=1, j≠i}^{N} w_ij ξ_i² = −(1/2) Σ_{i=1}^{N} Σ_{j=1}^{N} w_ij ( ξ_i − ξ_j )².

Thus, when all w_ij > 0 (w_ij < 0), L is negative (positive) semi-definite. Here we let A_m = a_m I + L, where a_m is chosen to stabilize the dynamics of each individual snapshot (a short numerical sketch of this construction follows the two cases below):

* When w_ij > 0, we can tune a_m to make A_m negative definite (a_m < 0), negative semi-definite (a_m = 0), or non-negative definite (a_m > 0; when a_m is sufficiently positive, A_m can be positive definite), since a_m is the maximum eigenvalue of A_m.

* When w_ij < 0, we can tune a_m to make A_m positive definite (a_m > 0), positive semi-definite (a_m = 0), or non-positive definite (a_m < 0; when a_m is sufficiently negative, A_m can be negative definite), since a_m is the minimum eigenvalue of A_m.
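A sketch of this construction for the case w_ij > 0 (our own code; the weights, network size, and the value of a_m are arbitrary placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 6
U = np.triu(rng.random((N, N)) * (rng.random((N, N)) < 0.6), k=1)
W = U + U.T                       # symmetric positive edge weights w_ij

# Laplacian with self-loops: l_ij = w_ij (i != j), l_ii = -sum_{j != i} w_ij
L = W - np.diag(W.sum(axis=1))

a_m = -0.5                        # stabilizing shift (an arbitrary choice here)
A_m = a_m * np.eye(N) + L

# For w_ij > 0, L is negative semi-definite with largest eigenvalue 0,
# so the largest eigenvalue of A_m equals a_m (< 0: stable snapshot).
print(np.linalg.eigvalsh(A_m).max(), a_m)
```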
The corresponding trajectory is

x(t_m) = e^{A_m Δt_m} x(t_{m−1}) + ∫_{t_{m−1}}^{t_m} e^{A_m(t_m−s)} B_m u_m(s) ds,

for t ∈ [t_{m−1}, t_{m−1} + Δt_m), along which the control cost is minimal. Here W_m[t_{m−1}, t] = ∫_{t_{m−1}}^{t} e^{A_m(t−s)} B_m B_m^T e^{A_m^T(t_m−s)} ds, e^{−A_m t_m} W_m[t_{m−1}, t_m] e^{−A_m^T t_m} is the gramian matrix of snapshot m, and c*_m is a constant vector of dimension N, given through c* = S^T( SWS^T )^{−1} d with c* = ( c*_1{}^T, c*_2{}^T, ···, c*_M{}^T )^T.
In this section, we show control trajectories for temporal and static systems in two and three dimensions to give a visual understanding of the control non-locality of static networks. Figure S21 shows the full trajectories for an example two-dimensional system with two and five snapshots, for 100 different final states. Figure S22 shows the same for a three-dimensional system. In agreement with the results presented in Fig. 4, we find that the length of the trajectories for temporal networks is considerably less than that for static networks, independent of the choice of x_0 and the value of the control distance.
We calculate L numerically according to

L = ∫_{t_0}^{t_f} ‖ẋ(t)‖ dt = ∫_{t_0}^{t_f} √( ẋ_1²(t) + ẋ_2²(t) + ··· + ẋ_N²(t) ) dt
  ≈ Σ_{j=0}^{1/t_step} √( Σ_{i=1}^{N} [ x_i(t_j + t_step) − x_i(t_j) ]² ),

and the maximal single-component length as

L_{i*} = max_i Σ_{j=0}^{1/t_step} | x_i(t_j + t_step) − x_i(t_j) |.
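The discretized length computation can be sketched as follows (our own code; x is an arbitrary sampled trajectory of shape (n_steps + 1, N)):

```python
import numpy as np

def trajectory_length(x):
    """Total length L = sum_j sqrt(sum_i [x_i(t_{j+1}) - x_i(t_j)]^2)
    of a sampled trajectory x with shape (n_steps + 1, N)."""
    dx = np.diff(x, axis=0)
    return np.sqrt((dx ** 2).sum(axis=1)).sum()

def per_node_lengths(x):
    """L_i = sum_j |x_i(t_{j+1}) - x_i(t_j)| for each state component;
    the maximum over i is the quantity L_{i*} discussed above."""
    return np.abs(np.diff(x, axis=0)).sum(axis=0)

# example: a sampled unit circle in the plane (length ~ 2*pi)
t = np.linspace(0.0, 1.0, 1001)
x = np.column_stack([np.cos(2 * np.pi * t), np.sin(2 * np.pi * t)])
print(trajectory_length(x), per_node_lengths(x).max())
```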
Since in our numerical examples L_{i*} is on the order of 10^35 for the temporal network and 10^64 for the static counterpart, looking at x_1(t) is sufficient to demonstrate that temporal networks exhibit more local trajectories in this network. The corresponding state components in the case of 1, 2, and 3 driver nodes are shown in Fig. S23.
Similar to the earlier analysis of control energy in terms of different switching sequences
(Sec. S6), for the different snapshot sequences shown in Fig. S20, we here give the corresponding
control trajectories in Fig. S25. For the real (technological) network we adopted, Fig. S24 shows
the locality as the order of snapshots is changed. As with the control energy, the locality is
shown to be largely a function of the set of snapshots rather than their precise order.
S10 Figures
Fig. S1: A simple example comparing the controllable spaces of temporal and static networks. For a three-node network (N = 3) with a single driver node, B = (1, 0, 0)^T, the temporal controllable space is Ω_t = ⟨A_2|B⟩ + e^{A_2 Δt} ⟨A_1|B⟩ and the static one is Ω_s = ⟨B, A_s B, A_s² B⟩. In one panel dim(Ω_t) = 3 while dim(Ω_s) = 2; in the other dim(Ω_t) = 2 while dim(Ω_s) = 3, illustrating that either space can be the larger one.
Fig. S2: Contact activity in empirical data. The curves show the contact activity
(number of contacts over a 300s time window) for the ACM conference and student contacts,
and over 10 seconds for ant interactions. For human interactions we observe the rhythm of day
and night, while for ants the number of interactions shows little temporal variation, i.e. with
no bursts or lulls.
Fig. S3: Degree distribution of the static networks corresponding to four kinds
of empirical datasets. The static networks are aggregated from all contacts for the ACM
conference and ant interactions. For protein and technological networks, the static networks
are aggregated from all snapshots.
Fig. S4: Average degree k̄ of the aggregated networks as a function of the number of aggregated snapshots m, for the original data (ORI) and the four randomizations TR, RPT, RE, and RERPT. Panels: ACM conference, ant interactions, protein networks (CC, MF, BP), and technological networks (1-ip6, 2-ip6, 3-ip6).
Fig. S5: Number of components of the aggregated networks as a function of the number of aggregated snapshots m, for the original data (ORI) and the randomizations TR, RPT, RE, and RERPT. Panels: ACM conference, ant interactions, protein networks (CC, MF, BP), and technological networks (1-ip6, 2-ip6, 3-ip6).
Fig. S6: Schematic illustration of the calculation process over temporal and static protein-protein interaction (PPI) networks. Step 0: Download the raw time-series microarray data from the gene expression array (GSE4987) in the Gene Expression Omnibus (GEO). This dataset is in the form of a 6,297 × 50 matrix, including the expression profiles of 6,228 probes at 50 different time points. The probe sets are mapped to gene symbols according to the annotation file provided by Affymetrix, thus yielding 4,915 budding yeast (Saccharomyces cerevisiae) gene products [27]. Step 1: Filter the raw gene expression data by comparing the expression levels of genes at every time point to the active threshold obtained from the three-sigma principle [27]. Step 2: Construct the PPI network (snapshot) at every time point by considering only those interacting proteins in the global PPI network of S. cerevisiae that are present at those time points. The global PPI network of S. cerevisiae is downloaded from the Database of Interacting Proteins (DIP); this network consists of 5,023 proteins and 22,570 interactions. The static PPI network is obtained by aggregating all the snapshots. This figure is adapted from Ref. [41], Fig. 1. Step 3: Based on the sequences of snapshots, the number of snapshots that the temporal (or static) network needs to elapse (or aggregate) to reach the fully controllable space is calculated from Eq. (2) in the main text. Note that in this work we consider three small temporal PPI networks based on three gene ontology (GO) terms: cellular component (CC), molecular function (MF), and biological process (BP); in each temporal network, all the proteins share the GO term. These three small temporal networks (denoted CC, MF, BP) have 33, 50, and 50 snapshots, and 84, 74, and 85 proteins, respectively.
Fig. S7: Schematic illustration of the methods used to randomize temporal networks. A toy contact sequence among four nodes (A-D) with contact times t_1, ..., t_6 is shown together with the outcome of the randomizations TR, RPT, RE, RERPT, and RDT (which reassigns new times t'_1, ..., t'_6), and the corresponding aggregated networks.
Fig. S8: Faster paths to controllability in temporal networks when the original data is randomized. We find that St < Ss for various ∆t both for the original sequence of snapshots (Fig. 2) and when the sequence of interactions is randomized using several null models [19]: TR (Time Reversal), RPT (Randomly Permuted Times), RE (Randomized Edges), and RERPT (Randomized Edges and Randomly Permuted Times) (see Sec. S3 for the randomization procedures). Parameters and other details of this analysis are the same as those used in Fig. 2 of the main text.
Fig. S9: Temporal networks reach controllability faster independent of the value
of ∆t. Shown are St and Ss for the ACM conference (A), ant interactions (B), and student
contacts (C) networks. Our result that temporal networks reach controllability faster holds
over a wide range of ∆t. Parameters and other details of this analysis are the same as those
used in Fig. 2 of the main text.
Fig. S10: Temporal networks reach controllability faster in the ant interaction network. St is not larger than Ss for the ant interactions even when each contact is assigned a finite duration. Here each time point is scaled up by a factor of 60 and every antenna-body interaction is assumed to last 20s. Parameters and other details of this analysis are the same as those used in Fig. 2 of the main text.
Fig. S11: Temporal networks reach controllability faster regardless of the number of driver nodes used. The static versions of the technological and protein networks sometimes remain uncontrollable at the final snapshot using sets of driver nodes corresponding to 20% of the network, as done in Fig. 2. Here we calculate St and Ss by instead using sets of driver nodes corresponding to 80% of the network size. Our demonstration that temporal networks reach controllability faster than their static counterparts remains true for these larger sets of driver nodes. Predictably, both St and Ss decrease relative to Fig. 3C and 3D. Nonetheless, we still observe cases where Ss = M, meaning that the static network remains uncontrollable even after the final snapshot is aggregated. This is true even though a full 80% of the nodes are directly controlled. In contrast, the temporal version of the network is controllable, and with only 20% of the network as driver nodes. Each bar corresponds to 10^3 random sets of driver nodes.
Fig. S12: Relation between Ss and St based on a toy model. For Ns snapshot sequences randomly selected from a set of M snapshots, we calculate St and Ss with a fixed set of driver nodes, and the probabilities of St < Ss and St > Ss are computed over all selected sequences. Here N = 50, Ns = 500, p = 0.03, and Nd = 3.
Fig. S13: Faster paths to controllability in temporal networks. For all datasets, we
randomly distribute time stamps within the observation time window [tmin , tmax ] of the original
dataset, using Poisson, uniform, and normal distributions. These distributions were selected
for illustrative purposes, and do not represent the event times generated by any specific point
process. Regardless of how the timestamps are distributed, we find that temporal networks
reach controllability faster than the corresponding static networks on all datasets we considered.
Note that after generating a series of time stamps that obey Poisson (mean: 3) and normal
(mean: 0.5, standard deviation: 1) distributions, we adjust every time stamp to the time
window [tmin , tmax ] by appropriate transformation (see Sec. S4 for details). Other parameters
are the same as those in Fig. 2 of the main text. The robustness of the results has been checked.
Fig. S14: The minimum eigenvalue of Weff dominates the control energy. For
different ∆t, all eigenvalues of Weff are given by gray points for (A) temporal and (B) static
networks, where the minimum eigenvalues are enlarged in red and blue, respectively. The
eigenvalues of Weff vary over many orders of magnitude, implying that the average control
energy is dominated by the worst-case direction (corresponding to the λmin ) of Weff (Eq. (S13)).
Since λmin is much greater for the temporal network, the energy required to move in typical
control directions is thus expected to be less than in the corresponding static network. Here
the system parameters are the same as those used in Fig. 3A of the main text.
Fig. S15: Temporal networks require less control energy compared to static networks. Counterpart to Fig. 3 of the main text with larger fractions of nodes controlled: (A) 0.3 and (B) 1. For the technological network, the number of driver nodes is 1 in (C) and 7 in (D). Here the system parameters are the same as those used in Fig. 3 of the main text.
Fig. S16: Temporal networks require less control energy compared to static networks. Counterpart to Fig. 3 of the main text with N = 10, M = 2, k̄ = 4, a1 = −3, and a2 = −1.
Fig. S17: Temporal networks require less control energy compared to static networks. Counterpart to Fig. 3 of the main text with N = 10, M = 5, k̄ = 4, and ai = −2 for i = 1, 2, ···, 5.
Fig. S18: Difference in control energy for a real network. We aggregate the total M = 50 snapshots of the 1-ip6 network into M = 2 snapshots. We show the distribution of the control energy over 300 randomly selected final states at unit distance from x0 = 0, for varying ∆t and numbers of driver nodes Nd (blue: static, red: temporal). The corresponding average energies ⟨E⟩ are indicated in each panel. We find that the control energy decreases as either Nd or ∆t increases. Here we choose a1 = −1 and a2 = −2.
Fig. S19: Difference in control energy for a real network. Here we change the order of
the snapshots in the temporal network studied in Fig. S18. All other parameters and notations
are the same as those in Fig. S18.
Fig. S20: The energy for controlling temporal networks shows little variability with respect to the switching sequence. For a set of M snapshots, we enumerate all possible sequences of snapshots and, for every sequence, calculate the control energy distribution (the calculation method is the same as that in Fig. S18). Here N = 10, M = 3, Nd = 1, ∆t = 10^−6, and we choose a1 = −1, a2 = −2, a3 = −3. We present the distribution of the control energy (red: temporal, blue: static) for the six sequences ((A): A1, A2, A3, (B): A1, A3, A2, (C): A2, A1, A3, (D): A2, A3, A1, (E): A3, A1, A2, (F): A3, A2, A1). The numbers in each panel are the average control energy over the chosen final states, for which we find that the control energy varies by less than 10% over the possible switching sequences.
Fig. S21: Additional control trajectories for temporal and static networks with two and five snapshots. We select 100 trajectories from x0 = 0 (indicated by a star) to xf with ‖xf‖ = 10^−3 (i.e. uniformly along the gray curve), for a randomly generated temporal network and its static counterpart with (A) N = 2, M = 2, and (B) N = 2, M = 5.
Fig. S22: Control trajectories in a three-dimensional system. Each trajectory runs from x0 (star) to a given xf over 0 ≤ t ≤ 1 for a static (A) and a temporal (B) network. We consider a total of 100 such randomly chosen xf located on a sphere (grey) centered on x0 with a radius of δ = 10^−3; only the final states on the equator are plotted for clarity. Here we consider a three-dimensional system with two snapshots for visualization, and the robustness of the results has been tested.
Fig. S23: Locality of control trajectories in a real network. The panels show, for different numbers Nd of driver nodes, the node states xi(t) as a function of time for control in the temporal (red) and static (blue) version of the ad hoc mobile communication network (1-ip6). Here x0 = 0 and xf is taken to be (1, ..., 1)^T/√N for N = 34 nodes. The total length of the control trajectory L is denoted in red (temporal) and blue (static). We see that the true temporal version of this network exhibits considerably more local control trajectories than its aggregated counterpart, in line with the results shown for synthetic networks in the main text (Fig. 4C). Moreover, L decreases as Nd increases for both the temporal and the static network. Here ai = −1 and M = 2.
t t
Fig. S24: Locality of control trajectories in a real network. Counterpart of Fig. S23,
and here we change the order of the snapshots. All other parameters and notations are the
same as those in Fig. S23.
t t
Fig. S25: Locality of control trajectories for different switching sequences of a
temporal network. Counterpart of Fig. S20, where here we calculate the corresponding
control trajectories for each different snapshot sequence. We find that the temporal network
exhibits more local control trajectories regardless of the precise order of snapshots.
References and Notes
1. E. Almaas, B. Kovács, T. Vicsek, Z. N. Oltvai, A.-L. Barabási, Global organization of
metabolic fluxes in the bacterium Escherichia coli. Nature 427, 839–843 (2004).
doi:10.1038/nature02289 Medline
2. R. Cohen, S. Havlin, Complex Networks: Structure, Robustness and Function (Cambridge
Univ. Press, 2010).
3. Z. Toroczkai, H. Guclu, Proximity networks and epidemics. Physica A 378, 68–75 (2007).
doi:10.1016/j.physa.2006.11.088
4. N. Masuda, K. Klemm, V. M. Eguíluz, Temporal networks: Slowing down diffusion by long
lasting interactions. Phys. Rev. Lett. 111, 188701 (2013).
doi:10.1103/PhysRevLett.111.188701 Medline
5. H. H. K. Lentz, T. Selhorst, I. M. Sokolov, Unfolding accessibility provides a macroscopic
approach to temporal networks. Phys. Rev. Lett. 110, 118701 (2013).
doi:10.1103/PhysRevLett.110.118701 Medline
6. I. Scholtes, N. Wider, R. Pfitzner, A. Garas, C. J. Tessone, F. Schweitzer, Causality-driven
slow-down and speed-up of diffusion in non-Markovian temporal networks. Nat.
Commun. 5, 5024 (2014). doi:10.1038/ncomms6024 Medline
7. M. Starnini, A. Baronchelli, A. Barrat, R. Pastor-Satorras, Random walks on temporal
networks. Phys. Rev. E Stat. Nonlin. Soft Matter Phys. 85, 056115 (2012).
doi:10.1103/PhysRevE.85.056115 Medline
8. R. E. Kalman, Mathematical description of linear dynamical systems. J. Soc. Ind. Appl. Math.
Ser. A 1, 152–192 (1963). doi:10.1137/0301010
9. J. Klamka, Controllability of Dynamical Systems (Mathematics and Its Applications Series,
Springer, 1991), vol. 48.
10. P. Ögren, E. Fiorelli, N. Leonard, Cooperative control of mobile sensor networks: Adaptive
gradient climbing in a distributed environment. IEEE Trans. Automat. Contr. 49, 1292–
1302 (2004). doi:10.1109/TAC.2004.832203
11. N. Yosef, A. Regev, Impulse control: Temporal dynamics in gene transcription. Cell 144,
886–896 (2011). doi:10.1016/j.cell.2011.02.015 Medline
12. J. Uhlendorf, A. Miermont, T. Delaveau, G. Charvin, F. Fages, S. Bottani, G. Batt, P. Hersen,
Long-term model predictive control of gene expression at the population and single-cell
levels. Proc. Natl. Acad. Sci. U.S.A. 109, 14271–14276 (2012).
doi:10.1073/pnas.1206810109 Medline
13. T. Nepusz, T. Vicsek, Controlling edge dynamics in complex networks. Nat. Phys. 8, 568–
573 (2012). doi:10.1038/nphys2327
14. G. Chen, Pinning control and synchronization on complex dynamical networks. Int. J.
Control. Autom. Syst. 12, 221–230 (2014). doi:10.1007/s12555-014-9001-2
15. Y.-Y. Liu, J.-J. Slotine, A.-L. Barabási, Controllability of complex networks. Nature 473,
167–173 (2011). doi:10.1038/nature10011 Medline
16. I. Rajapakse, M. Groudine, M. Mesbahi, Dynamics and control of state-dependent networks
for probing genomic organization. Proc. Natl. Acad. Sci. U.S.A. 108, 17257–17262
(2011). doi:10.1073/pnas.1113249108 Medline
17. J. Ruths, D. Ruths, Control profiles of complex networks. Science 343, 1373–1376 (2014).
doi:10.1126/science.1242063 Medline
18. G. Chen, Pinning control and controllability of complex dynamical networks. Int. J. Autom.
Comput. 14, 1–9 (2017). doi:10.1007/s11633-016-1052-9
19. P. Holme, J. Saramäki, Temporal networks. Phys. Rep. 519, 97–125 (2012).
doi:10.1016/j.physrep.2012.03.001
20. M. Pósfai, P. Hövel, Structural controllability of temporal networks. New J. Phys. 16, 123055
(2014). doi:10.1088/1367-2630/16/12/123055
21. Y.-Y. Liu, A.-L. Barabási, Control principles of complex systems. Rev. Mod. Phys. 88,
035006 (2016). doi:10.1103/RevModPhys.88.035006
22. G. Xie, D. Zheng, L. Wang, Controllability of switched linear systems. IEEE Trans.
Automat. Contr. 47, 1401–1405 (2002). doi:10.1109/TAC.2002.801182
23. J. Sun, A. E. Motter, Controllability transition and nonlocality in network control. Phys. Rev.
Lett. 110, 208701 (2013). doi:10.1103/PhysRevLett.110.208701 Medline
24. G. Yan, G. Tsekenis, B. Barzel, J.-J. Slotine, Y.-Y. Liu, A.-L. Barabási, Spectrum of
controlling and observing complex networks. Nat. Phys. 11, 779–786 (2015).
doi:10.1038/nphys3422
25. L. Isella, J. Stehlé, A. Barrat, C. Cattuto, J.-F. Pinton, W. Van den Broeck, What’s in a
crowd? Analysis of face-to-face behavioral networks. J. Theor. Biol. 271, 166–180
(2011). doi:10.1016/j.jtbi.2010.11.033 Medline
26. B. Blonder, A. Dornhaus, Time-ordered networks reveal limitations to information flow in
ant colonies. PLOS ONE 6, e20298 (2011). doi:10.1371/journal.pone.0020298 Medline
27. J. Wang, X. Peng, M. Li, Y. Pan, Construction and application of dynamic protein interaction
network based on time course gene expression data. Proteomics 13, 301–312 (2013).
doi:10.1002/pmic.201200277 Medline
28. A.-L. Barabási, The origin of bursts and heavy tails in human dynamics. Nature 435, 207–
211 (2005). doi:10.1038/nature03459 Medline
29. J. Gao, Y.-Y. Liu, R. M. D’Souza, A.-L. Barabási, Target control of complex networks. Nat.
Commun. 5, 5415 (2014). doi:10.1038/ncomms6415 Medline
30. S. P. Cornelius, W. L. Kath, A. E. Motter, Realistic control of network dynamics. Nat.
Commun. 4, 1942 (2013). doi:10.1038/ncomms2939 Medline
31. P. J. Antsaklis, A. N. Michel, Linear Systems (McGraw-Hill, 1997).
32. SocioPatterns, www.sociopatterns.org.
33. J. Fournet, A. Barrat, Contact patterns among high school students. PLOS ONE 9, e107878
(2014). doi:10.1371/journal.pone.0107878 Medline
34. L. E. C. Rocha, F. Liljeros, P. Holme, Information dynamics shape the sexual networks of
Internet-mediated prostitution. Proc. Natl. Acad. Sci. U.S.A. 107, 5706–5711 (2010).
doi:10.1073/pnas.0914080107 Medline
35. L. E. C. Rocha, F. Liljeros, P. Holme, Simulated epidemics in an empirical spatiotemporal
network of 50,185 sexual contacts. PLOS Comput. Biol. 7, e1001109 (2011).
doi:10.1371/journal.pcbi.1001109 Medline
36. J. L. Iribarren, E. Moro, Impact of human activity patterns on the dynamics of information
diffusion. Phys. Rev. Lett. 103, 038702 (2009). doi:10.1103/PhysRevLett.103.038702
Medline
37. P. Erdös, A. Rényi, On the evolution of random graphs. Publ. Math. Inst. Hung. Acad. Sci. 5,
17–61 (1960).
38. F. L. Lewis, V. L. Syrmos, Optimal Control (Wiley, ed. 2, 1995).
39. G. Yan, J. Ren, Y.-C. Lai, C.-H. Lai, B. Li, Controlling complex networks: How much
energy is needed? Phys. Rev. Lett. 108, 218703 (2012).
doi:10.1103/PhysRevLett.108.218703 Medline
40. R. M. May, Stability and Complexity in Model Ecosystems (Princeton Univ. Press, 1974).
41. X. Tang, J. Wang, B. Liu, M. Li, G. Chen, Y. Pan, A comparison of the functional modules
identified from time course and static PPI network data. BMC Bioinformatics 12, 339
(2011). doi:10.1186/1471-2105-12-339 Medline