
TPLP: Page 1–xx. © The Author(s), 20XX. Published by Cambridge University Press 20XX.
doi:10.1017/S147106840100xxxx

On the Boolean Network Theory of Datalog¬


arXiv:2504.15417v1 [cs.LO] 21 Apr 2025

VAN-GIANG TRINH
Inria Saclay, EP Lifeware, Palaiseau, France

BELAID BENHAMOU
LIRICA team, LIS, Aix-Marseille University, Marseille, France

SYLVAIN SOLIMAN
Inria Saclay, EP Lifeware, Palaiseau, France

FRANÇOIS FAGES
Inria Saclay, EP Lifeware, Palaiseau, France

Abstract
Datalog¬ is a central formalism used in a variety of domains ranging from deductive databases
and abstract argumentation frameworks to answer set programming. Its model theory is the
finite counterpart of the logical semantics developed for normal logic programs, mainly based
on the notions of Clark’s completion and two-valued or three-valued canonical models including
supported, stable, regular and well-founded models. In this paper we establish a formal link
between Datalog¬ and Boolean network theory, which was initially introduced by Stuart Kauffman
and René Thomas to reason about gene regulatory networks. We use previous results from
Boolean network theory to prove that in the absence of odd cycles in a Datalog¬ program, the
regular models coincide with the stable models, which entails the existence of stable models,
and in the absence of even cycles, we show the uniqueness of stable partial models, which entails
the uniqueness of regular models. These results on regular models have been claimed by You
and Yuan in 1994 for normal logic programs but we show problems in their definition of well-
founded stratification and in their proofs that we can fix for negative normal logic programs
only. We also give upper bounds on the numbers of stable partial, regular, and stable models
of a Datalog¬ program using the cardinality of a feedback vertex set in its atom dependency
graph. Interestingly, our connection to Boolean network theory also points us to the notion of
trap spaces for Datalog¬ programs. We relate the notions of supported or stable trap spaces to
the other semantics of Datalog¬ , and show the equivalence between subset-minimal stable trap
spaces and regular models.

KEYWORDS: deductive database, Datalog, model-theoretic semantics, dynamics, Boolean network, trap space, attractor, model counting, feedback vertex set

1 Introduction

Datalog¬ is a central non-monotonic logic programming formalism that plays a vital
role in a variety of computational domains, including deductive databases, an-
swer set programming, and abstract argumentation (Ceri et al. 1990; Eiter et al.
1997; Seitzer and Schlipf 1997; Niemelä 1999; Wu et al. 2009; Alviano et al. 2012;
Janhunen and Niemelä 2016; Caminada and Schulz 2017). As a syntactic restriction of
normal logic programs to function-free rules and finite Herbrand universes, Datalog¬ ben-
efits from favorable computational properties while retaining expressive non-monotonic
semantics (Ceri et al. 1990; Sato 1990; Guessarian and Peixoto 1994; Niemelä 1999;
Basta et al. 2002).
The model theory of Datalog¬ is closely tied to the well-established logical seman-
tics developed for normal logic programs (You and Yuan 1995). These include two-
valued and three-valued semantics such as supported models, stable models, regular mod-
els, and the well-founded models, which are typically defined in terms of Clark’s com-
pletion and its associated fixpoint or model-theoretic characterizations (Clark 1977;
Lloyd 1984; Przymusinski 1990; 1994; You and Yuan 1994; Saccà and Zaniolo 1997;
Eiter et al. 1997). In addition, some semantics such as the stable, or supported class
semantics have been proposed to represent the dynamical behavior of a normal logic pro-
gram (Baral and Subrahmanian 1992; Inoue and Sakama 2012). Each of these semantics
provides different insights into the meaning of a logic program, and understanding their
relationships is crucial for both theoretical and practical applications.
In this paper, we investigate the semantics of Datalog¬ programs through a new
and perhaps unexpected lens: the theory of Boolean networks, a formalism originally
introduced by Stuart Kauffman and René Thomas to model gene regulatory net-
works (Kauffman 1969; Thomas 1973). Boolean networks have since evolved into a rich
mathematical framework used to study the dynamical behavior of discrete systems, lead-
ing to a wide range of applications from science to engineering, especially in systems
biology (Schwab et al. 2020; Trinh et al. 2023). Notably, Datalog¬ programs have been
widely applied to modeling and analysis of Boolean networks (Inoue 2011; Klarner et al.
2015; Trinh et al. 2023; Khaled et al. 2023; Trinh et al. 2024a).
The preliminary link between Datalog¬ programs and Boolean networks can be traced
back to the theoretical work of Inoue (2011), which defined a Boolean network encoding
for Datalog¬ programs, relying on the notion of Clark’s completion (Clark 1977),
and pointed out that the two-valued models of the Clark’s completion of a Datalog¬
program correspond one-to-one to the fixed points of the encoded Boolean network.
The subsequent work (Inoue and Sakama 2012) pointed out that the strict supported
classes of a Datalog¬ program correspond one-to-one to the synchronous attractors of
the encoded Boolean network. However, this line of work did not explore the structural
relationships in detail, nor did it examine connections with other higher-level semantics.
Moreover, the discussion remained largely conceptual and did not extend to concrete
implications for the analysis of Datalog¬ programs.
In this work, we establish a formal connection between Datalog¬ and Boolean
network theory by 1) mapping the dependency structure of atoms in a Datalog¬
program (Apt et al. 1988) to the influence graph of a Boolean network (Richard
2019), and 2) relating the supported or stable partial model semantics (Przymusinski
1994; Saccà and Zaniolo 1997) and the regular model semantics (You and Yuan 1994;
Saccà and Zaniolo 1997) to the notion of trap spaces in Boolean networks (Klarner et al.
2015; Trinh et al. 2023). This connection allows us to transfer a variety of structural re-
sults from Boolean network theory (Remy et al. 2003; Richard and Ruet 2013; Richard
2019; Schwab et al. 2020; Richard and Tonello 2023) to the analysis of Datalog¬ pro-
grams.
Relating graphical representations of a normal logic program and its semantics is
an interesting research direction in theory that also has many useful applications
in practice (Sato 1990; Fages 1994; Cholewinski and Truszczynski 1999; Linke 2001;
Lin and Zhao 2004; Costantini 2006; Fichte 2011; Fandinno and Hecher 2021). Histor-
ically, the first studies in this direction focused on the existence of a unique stable
model in classes of programs with special graphical properties on (positive) atom de-
pendency graphs, including positive programs (Gelfond and Lifschitz 1988), acyclic pro-
grams (Apt and Bezem 1991), and locally stratified programs (Gelfond and Lifschitz
1988). The work by Fages (1991) gave a simple characterization of stable models as
well-supported models, and then showed that for tight programs (i.e. without non-well-
founded positive justifications), the stable models of the program coincide with the two-
valued models of its Clark’s completion (Fages 1994). Being finer-grained but more
computationally expensive than atom dependency graphs, several other graphical rep-
resentations (e.g., cycle and extended dependency graphs, rule graphs, block graphs)
were introduced and several improved results were obtained (Dimopoulos and Torres
1996; Linke 2001; Costantini 2006; Costantini and Provetti 2011). There are some recent
studies on atom dependency graphs (Fandinno and Lifschitz 2023; Trinh and Benhamou
2024), but they still focus only on stable models. In contrast, very few stud-
ies (You and Yuan 1994; Eiter et al. 1997) have been made about stable partial models
or regular models.
By exploiting the established connection between Datalog¬ programs and Boolean
networks, we derive the following key results w.r.t. the graphical analysis and theory of
Datalog¬ programs:
(1) Coincidence of Semantics under Structural Conditions. We show that in
the absence of odd cycles in the atom dependency graph of a Datalog¬ program,
the regular models coincide with the stable models. This collapse of semantics not
only simplifies the model-theoretic landscape but also guarantees the existence of
stable models in such cases.
(2) Uniqueness Results. In the absence of even cycles in the atom dependency graph
of a Datalog¬ program, we prove the uniqueness of stable partial models, which fur-
ther implies the uniqueness of regular models. This result provides a clear structural
condition under which the semantics of a program becomes deterministic.
(3) Correction and Refinement of Previous Work. We revisit a seminal work
by You and Yuan (1994), which claimed similar properties (the coincidence of reg-
ular models and stable models under the absence of odd cycles, and the uniqueness
of regular models under the absence of even cycles) for normal logic programs. We
reveal problems in their definition of well-founded stratification and in their proofs
that we can fix for negative normal logic programs only.
(4) Upper Bounds via Feedback Vertex Sets. We derive several upper bounds on
the number of stable, regular, and stable partial models for a Datalog¬ program.
These bounds are expressed in terms of the cardinality of a feedback vertex set in the
atom dependency graph, providing a structural measure of semantic complexity.
To the best of our knowledge, these insights are new to the theory of Datalog¬
programs.
(5) Stronger Graphical Analysis Results. We obtain several stronger graphical
analysis results on an important subclass of Datalog¬ programs, namely uni-rule
Datalog¬ programs (Seitzer and Schlipf 1997; Caminada et al. 2015).
(6) Trap Spaces and Semantics Correspondences. Our exploration reveals that
the Boolean network notion of trap spaces—sets of states closed under the system’s
dynamics—has meaningful analogues in the context of Datalog¬. We introduce the
notions of supported trap spaces and stable trap spaces, prove their basic properties,
and relate them to other semantics for Datalog¬ programs, in particular show that
the subset-minimal stable trap spaces coincide with the regular models.

Prior Work and Differences. This article is a significantly extended and thoroughly re-
vised version of the conference paper (Trinh et al. 2024b) that appeared in the Pro-
ceedings of the 40th International Conference on Logic Programming. Its differences
from Trinh et al. (2024b) include (2) (the case of stable partial models), (3), (4), (5),
and (6). In terms of presentation, this article simplifies the proofs shown in Trinh et al.
(2024b) w.r.t. the parts (1) and (2), as well as adds deeper discussions related to abstract
argumentation (Baroni et al. 2011).

Paper Structure. The remainder of the paper is structured as follows. In Section 2, we


provide background on normal logic programs, Datalog¬ programs, and Boolean net-
works. Section 3 presents the formal link between Datalog¬ programs and BNs. Section 4
revisits the work of You and Yuan (1994) and provides the new graphical analysis re-
sults exploiting the established connection. Section 5 introduces the notions of stable and
supported trap spaces for Datalog¬ programs, shows their basic properties, and relates
them to other semantics. Finally, Section 6 concludes the paper.

2 Preliminaries
2.1 Normal Logic Programs
We assume that the reader is familiar with logic programming theory and the stable model
semantics (Gelfond and Lifschitz 1988). Unless specifically stated, NLP means normal
logic program. In addition, we consider the Boolean domain B = {0, 1}, the three-valued
domain B⋆ = {0, 1, ⋆}, and the logical connectives used in this paper are ∧ (conjunction),
∨ (disjunction), ¬ (negation), and ↔ (equivalence).

2.1.1 Definitions
We consider a first-order language built over an infinite alphabet of variables, and finite
alphabets of constant, function and predicate symbols. The set of first-order terms is the
least set containing variables, constants and closed by application of function symbols.
An atom is a formula of the form p(t1 , . . . , tk ) where p is a predicate symbol and ti are
terms. A normal logic program P is a finite set of rules of the form
p ← p1 , . . . , pm , ∼pm+1 , . . . , ∼pk
where p and pi are atoms, k ≥ m ≥ 0, and ∼ is a symbol for default negation. A fact
is a rule with k = 0. For any rule r of the above form, h(r) = p is called the head of r,
b+ (r) = {p1 , . . . , pm } is called the positive body of r, b− (r) = {pm+1 , . . . , pk } is called
the negative body of r, and b(r) = b+ (r) ∪ b− (r) is called the body of r. For convenience,
we denote by bf(r) = ⋀_{v ∈ b+(r)} v ∧ ⋀_{v ∈ b−(r)} ¬v the body formula of r; if k = 0, then
bf(r) = 1. If b− (r) = ∅, ∀r ∈ P , then P is called a positive NLP. If b+ (r) = ∅, ∀r ∈ P ,
then P is called a negative NLP.
A term, an atom or an NLP is ground if it contains no variable. The Herbrand base of an
NLP P (denoted by HBP ) is the set of ground atoms formed over the alphabet of P . It is
finite in the absence of function symbols, which is the case for Datalog¬ programs (Ceri et al.
1990). The ground instantiation of an NLP P (denoted by gr(P )) is the set of the ground
instances of all the rules in P . An NLP P is called uni-rule if for every atom a ∈ HBP ,
gr(P ) contains at most one rule whose head is a (Seitzer and Schlipf 1997).
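For concreteness, the definitions of this subsection can be sketched in Python (our own encoding, not part of the formalism): a ground rule is a triple (head, positive body, negative body), a ground program is a list of such triples, and P below is the program of Example 2.1.

```python
# A rule is (head, pos, neg) with pos/neg frozensets of atom names;
# a ground program is a list of such triples.

def herbrand_base(program):
    """All atoms occurring in the (ground) program."""
    atoms = set()
    for head, pos, neg in program:
        atoms.add(head)
        atoms |= pos | neg
    return atoms

def is_uni_rule(program):
    """At most one rule per head atom (Seitzer and Schlipf 1997)."""
    heads = [head for head, _, _ in program]
    return len(heads) == len(set(heads))

def is_negative(program):
    """Every rule has an empty positive body."""
    return all(not pos for _, pos, _ in program)

# Example 2.1: P = {p <- ~q; q <- ~p; r <- q}
P = [("p", frozenset(), frozenset({"q"})),
     ("q", frozenset(), frozenset({"p"})),
     ("r", frozenset({"q"}), frozenset())]
```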

2.1.2 Atom Dependency Graph


We first recall some basic concepts in graph theory.
Definition 2.1
A signed directed graph G on {⊕, ⊖} is defined as a tuple (V (G), E(G)) where V (G)
is a (possibly infinite) set of vertices and E(G) is a (possibly infinite) set of arcs of the
form (uv, s) where u, v ∈ V (G) (not necessarily distinct) and either s = ⊕ or s = ⊖. The
in-degree of a vertex v is defined as the number of arcs ending at v. Then the minimum
in-degree of G is the minimum value of the in-degrees of all vertices in V (G). An arc
(uv, ⊕) (resp. (uv, ⊖)) is called a positive (resp. negative) arc, and can be written as
u →⊕ v or v ←⊕ u (resp. u →⊖ v or v ←⊖ u). The graph G is strongly connected if there
is always a path between any two vertices of G. It is sign-definite if there cannot be two
arcs with different signs between two different vertices.

Definition 2.2
A signed directed graph G is called a sub-graph of a signed directed graph H iff V (G) ⊆
V (H) and E(G) ⊆ E(H).

Definition 2.3
Consider a signed directed graph G. A path of G is defined as a (possibly infinite)
sequence of vertices v0 v1 v2 . . . such that (vi vi+1 , ⊕) ∈ E(G) or (vi vi+1 , ⊖) ∈ E(G) for
all i ≥ 0, and, except possibly for the starting and ending vertices, all vertices are
pairwise distinct. When the starting and ending vertices coincide, the path is called a cycle. In
this case, the number of vertices in this cycle is finite. A cycle is called even (resp. odd )
if its number of negative arcs is even (resp. odd). In addition, a cycle C of G can be seen
as a sub-graph of G such that V (C) is the set of vertices of C and E(C) is the set of arcs
of C. It is easy to derive that C is strongly connected and the in-degree of each vertex
in V (C) is 1 within C.

Definition 2.4
Given a signed directed graph G and an arc (uv, ε) with u, v ∈ V (G), G + (uv, ε) is the
signed directed graph (V (G), E(G) ∪ {(uv, ε)}) and G − (uv, ε) is the signed directed graph
(V (G), E(G) \ {(uv, ε)}).
We now give the formal definition of the atom dependency graph, the most prominent
graphical representation of an NLP (Fages 1994).
Definition 2.5
Consider an NLP P . The atom dependency graph of P , denoted by adg(P ), is a signed
directed graph over {⊕, ⊖} defined as follows:
• V (adg(P )) = HBP
• (uv, ⊕) ∈ E(adg(P )) iff there is a rule r ∈ gr(P ) such that v = h(r) and u ∈ b+ (r)
• (uv, ⊖) ∈ E(adg(P )) iff there is a rule r ∈ gr(P ) such that v = h(r) and u ∈ b− (r)
We next recall the notion of tightness, which plays an important role in characterizing
certain semantics properties of NLPs. An NLP is called tight if its atom dependency graph
contains no infinite descending chain v0 ←⊕ v1 ←⊕ v2 ←⊕ . . . of only positive arcs (Fages
1994). Note that for Datalog¬ programs, the atom dependency graph is always a finite
graph; hence a Datalog¬ program is tight if its atom dependency graph contains no cycle
of only positive arcs (Dietz et al. 2014).
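The atom dependency graph and the finite tightness test can be sketched as follows (our own encoding of ground rules as (head, positive body, negative body) triples; arcs are (source, target, sign) triples):

```python
def adg(program):
    """Signed arcs (u, v, sign): u occurs in the body of a rule with head v."""
    arcs = set()
    for head, pos, neg in program:
        for u in pos:
            arcs.add((u, head, "+"))
        for u in neg:
            arcs.add((u, head, "-"))
    return arcs

def is_tight(program):
    """Tight iff the positive sub-graph of adg(P) is acyclic (finite case)."""
    succ = {}
    for u, v, s in adg(program):
        if s == "+":
            succ.setdefault(u, set()).add(v)
    visited, on_stack = set(), set()
    def has_cycle(v):
        # depth-first search detecting a back arc among positive arcs
        visited.add(v); on_stack.add(v)
        for w in succ.get(v, ()):
            if w in on_stack or (w not in visited and has_cycle(w)):
                return True
        on_stack.discard(v)
        return False
    return not any(has_cycle(v) for v in list(succ) if v not in visited)

# Example 2.1: P = {p <- ~q; q <- ~p; r <- q}
P = [("p", frozenset(), frozenset({"q"})),
     ("q", frozenset(), frozenset({"p"})),
     ("r", frozenset({"q"}), frozenset())]
```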

2.1.3 Semantics of Negation


In this section, we recall several semantics of NLPs in the presence of default negation.

Stable and Supported Partial Models. A three-valued interpretation I of an NLP P is a


mapping I : HBP → B⋆ . If I(a) ≠ ⋆, ∀a ∈ HBP , then I is a two-valued interpretation of P .
Usually, a two-valued interpretation is used interchangeably with the set of ground atoms
that are true in this interpretation. A three-valued interpretation I characterizes the set
of two-valued interpretations denoted by S(I) as S(I) = {J | J ∈ 2^HBP , J(a) = I(a), ∀a ∈
HBP , I(a) ≠ ⋆}. For example, if I = {p = 1, q = 0, r = ⋆}, then S(I) = {{p}, {p, r}}.
We consider two orders on three-valued interpretations. The truth order ≤t is given by
0 <t ⋆ <t 1. Then, I1 ≤t I2 iff I1 (a) ≤t I2 (a), ∀a ∈ HBP . The order ≤s is given by 0 <s ⋆,
1 <s ⋆, and it contains no other relation. Then, I1 ≤s I2 iff I1 (a) ≤s I2 (a), ∀a ∈ HBP . In
addition, I1 ≤s I2 iff S(I1 ) ⊆ S(I2 ), i.e., ≤s is identical to the subset partial order.
Let e be a propositional formula on HBP . Then the valuation of e under a three-valued
interpretation I (denoted by I(e)) following the three-valued logic is defined recursively
as follows:


I(e) = e  if e ∈ B⋆
I(e) = I(a)  if e = a, a ∈ HBP
I(e) = ¬I(e1 )  if e = ¬e1
I(e) = min≤t (I(e1 ), I(e2 ))  if e = e1 ∧ e2
I(e) = max≤t (I(e1 ), I(e2 ))  if e = e1 ∨ e2

where ¬1 = 0, ¬0 = 1, ¬⋆ = ⋆, and min≤t (resp. max≤t ) is the function to get the min-
imum (resp. maximum) value of two values w.r.t. the order ≤t . We say that a three-valued
interpretation I is a three-valued model of an NLP P iff for each rule r ∈ gr(P ),
I(bf(r)) ≤t I(h(r)).
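The three-valued valuation above can be sketched as follows (our own encoding: '*' stands for the unknown value ⋆, atoms are strings, and compound formulas are nested tuples):

```python
STAR = "*"
T_ORDER = {0: 0, STAR: 1, 1: 2}  # truth order 0 <t * <t 1

def neg3(x):
    return STAR if x == STAR else 1 - x

def and3(x, y):
    # minimum w.r.t. the truth order
    return min(x, y, key=T_ORDER.get)

def or3(x, y):
    # maximum w.r.t. the truth order
    return max(x, y, key=T_ORDER.get)

def eval3(expr, I):
    """expr: constant, atom name, ('not', e), ('and', e1, e2), ('or', e1, e2)."""
    if expr in (0, 1, STAR):
        return expr
    if isinstance(expr, str):
        return I[expr]
    op = expr[0]
    if op == "not":
        return neg3(eval3(expr[1], I))
    if op == "and":
        return and3(eval3(expr[1], I), eval3(expr[2], I))
    return or3(eval3(expr[1], I), eval3(expr[2], I))

I = {"p": 1, "q": 0, "r": STAR}
```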
Let I be a three-valued interpretation of an NLP P . We build the reduct of P w.r.t. I
(denoted by P I ) as follows.
• Remove any rule a ← a1 , . . . , am , ∼b1 , . . . , ∼bk ∈ gr(P ) if I(bi ) = 1 for some
1 ≤ i ≤ k.
• Afterwards, remove any occurrence of ∼bi from gr(P ) such that I(bi ) = 0.
• Then, replace any occurrence of ∼bi left by a special atom u such that u ∉ HBP
and it always receives the value ⋆.
The reduct P I is positive and has a unique ≤t -least three-valued model. See Przymusinski
(1990) for the method for computing this ≤t -least three-valued model of P I . Then I is
a stable partial model of P iff I is equal to the ≤t -least three-valued model of P I . A
stable partial model I is a regular model if it is ≤s -minimal (Eiter et al. 1997). A regular
model is non-trivial if it is not two-valued. In the two-valued setting, P I is identical to
the reduct defined by Gelfond and Lifschitz (1988). A two-valued interpretation I is a
stable model of P iff I is equal to the ≤t -least two-valued model of P I . It is easy to derive
that a stable model is a two-valued regular model, as well as a two-valued stable partial
model (Aravindan and Dung 1995).
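The two-valued case of the reduct, and the resulting stable model semantics, can be illustrated by a brute-force sketch (our own encoding; exponential in |HBP|, so for small ground programs only):

```python
from itertools import combinations

def least_model(positive_rules):
    """Least model of a positive program via naive fixpoint iteration."""
    model = set()
    changed = True
    while changed:
        changed = False
        for head, pos in positive_rules:
            if pos <= model and head not in model:
                model.add(head)
                changed = True
    return model

def reduct(program, I):
    """Gelfond-Lifschitz reduct w.r.t. a two-valued interpretation I (a set)."""
    return [(head, pos) for head, pos, neg in program if not (neg & I)]

def stable_models(program, atoms):
    """I is stable iff I equals the least model of the reduct P^I."""
    result = []
    for k in range(len(atoms) + 1):
        for I in map(set, combinations(sorted(atoms), k)):
            if least_model(reduct(program, I)) == I:
                result.append(frozenset(I))
    return result

# Example 2.1: P = {p <- ~q; q <- ~p; r <- q}
P = [("p", frozenset(), frozenset({"q"})),
     ("q", frozenset(), frozenset({"p"})),
     ("r", frozenset({"q"}), frozenset())]
```

On Example 2.1 this yields exactly the two stable models {p} and {q, r} mentioned below.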
To further relate the above two-valued or three-valued stable semantics to clas-
sical logic, we now recall the notion of Clark’s completion and the associated con-
cept of two-valued or three-valued supported models. The (propositional) Clark’s com-
pletion of an NLP P (denoted by comp(P )) consists of the following equivalences:
p ↔ ⋁_{r ∈ gr(P ), h(r) = p} bf(r), for each p ∈ HBP ; if there is no rule whose head is p, then
the equivalence is p ↔ 0. Let rhsP (a) denote the right-hand side of the equivalence of
ground atom a ∈ HBP in comp(P ). A three-valued interpretation I is a three-valued
model of comp(P ) iff for every a ∈ HBP , I(a) = I(rhsP (a)). In this work, we define a
supported partial model of P as a three-valued model of comp(P ), and a supported model
of P as a two-valued model of comp(P ). Of course, a supported model is a (two-valued)
supported partial model. It has been pointed out that a stable partial model (resp. stable
model) is a supported partial model (resp. supported model), but the reverse may not be
true (Dietz et al. 2014).
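Supported (two-valued) models, i.e. two-valued models of comp(P), admit a similar brute-force sketch (our own encoding): I is supported iff, for every atom a, I(a) agrees with the disjunction of the body formulas of the rules with head a.

```python
from itertools import combinations

def body_holds(pos, neg, I):
    """Two-valued truth of a body formula bf(r) under interpretation I (a set)."""
    return pos <= I and not (neg & I)

def is_supported(program, atoms, I):
    for a in atoms:
        # rhs_P(a): disjunction over rules with head a (False if none)
        rhs = any(body_holds(pos, neg, I)
                  for head, pos, neg in program if head == a)
        if (a in I) != rhs:
            return False
    return True

def supported_models(program, atoms):
    return [frozenset(I)
            for k in range(len(atoms) + 1)
            for I in map(set, combinations(sorted(atoms), k))
            if is_supported(program, atoms, I)]

# Example 2.1: P = {p <- ~q; q <- ~p; r <- q}
P = [("p", frozenset(), frozenset({"q"})),
     ("q", frozenset(), frozenset({"p"})),
     ("r", frozenset({"q"}), frozenset())]
```

For this tight program the supported models coincide with the stable models, in line with Fages’ theorem for tight programs.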

Stable and Supported Classes. We first recall two semantic operators that capture key
aspects of the (two-valued) stable and supported model semantics. Let P be an NLP
and let I be any two-valued interpretation of P . We have that P I is positive, and has a
unique ≤t -least two-valued model (say J). We define the operator FP as FP (I) = J. In
contrast, we define the operator TP as TP (I) = J where J is a two-valued interpretation
such that for every a ∈ HBP , J(a) = I(rhsP (a)).
Definition 2.6 (Baral and Subrahmanian (1992))
A non-empty set S of two-valued interpretations is a stable class of an NLP P iff it holds
that S = {FP (I) | I ∈ S}.

Definition 2.7 (Inoue and Sakama (2012))


A non-empty set S of two-valued interpretations is a supported class of an NLP P iff it
holds that S = {TP (I) | I ∈ S}.
Trivially, a stable (resp. supported) class of size 1 is equivalent to a stable (resp. sup-
ported) model. It is well-known that a stable model is a supported model. However, a
stable class may not be a supported class. A stable (resp. supported) class S of P is
strict iff no proper subset of S is a stable (resp. supported) class of P . An NLP P al-
ways has at least one stable class (Baral and Subrahmanian 1992). If P is a Datalog¬
program, it has at least one strict stable class (Baral and Subrahmanian 1992). Simi-
larly, a Datalog¬ program has at least one supported class, as well as at least one strict
supported class (Inoue and Sakama 2012).
To better understand the dynamics underlying stable and supported classes, we now
turn to a graph-theoretic characterization based on the stable and supported seman-
tics operators. The stable (resp. supported ) transition graph of P is a directed graph
(denoted by tgst (P ) (resp. tgsp (P ))) on the set of all possible two-valued interpreta-
tions of P such that (I, J) is an arc of tgst (P ) (resp. tgsp (P )) iff J = FP (I) (resp.
J = TP (I)). The stable or supported transition graph exhibits the dynamical be-
havior of an NLP (Inoue and Sakama 2012). It has been pointed out that the strict sta-
ble (resp. supported) classes of P coincide with the simple cycles of tgst (P ) (resp.
tgsp (P )) (Inoue and Sakama 2012).
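The TP operator and the strict supported classes can be sketched as follows (our own encoding; since every interpretation has exactly one TP-successor, each strict supported class is the cycle eventually reached by iterating TP):

```python
from itertools import combinations

def tp_step(program, atoms, I):
    """One application of the T_P operator to a two-valued interpretation."""
    return frozenset(a for a in atoms
                     if any(head == a and pos <= I and not (neg & I)
                            for head, pos, neg in program))

def strict_supported_classes(program, atoms):
    """Iterate T_P from every interpretation and collect the cycles reached."""
    classes = set()
    for k in range(len(atoms) + 1):
        for I in map(frozenset, combinations(sorted(atoms), k)):
            seen = []
            while I not in seen:
                seen.append(I)
                I = tp_step(program, atoms, I)
            classes.add(frozenset(seen[seen.index(I):]))  # the cycle part
    return classes

# Example 2.1: P = {p <- ~q; q <- ~p; r <- q}
P = [("p", frozenset(), frozenset({"q"})),
     ("q", frozenset(), frozenset({"p"})),
     ("r", frozenset({"q"}), frozenset())]
```

On the program of Example 2.1 below, this yields the three strict supported classes listed there.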

2.1.4 Least Fixpoint


We shall use the fixpoint semantics of normal logic programs (Dung and Kanchanasut
1989) to prove many new results in the next sections. To be self-contained, we briefly
recall the definition of the least fixpoint of an NLP P as follows. Let r be the rule
p ← ∼p1 , . . . , ∼pk , q1 , . . . , qj in gr(P ) and let ri be the rules qi ← ∼q_i^1 , . . . , ∼q_i^{l_i} in gr(P )
where 1 ≤ i ≤ j and l_i ≥ 0. Then Πr ({r1 , . . . , rj }) is the following rule
p ← ∼p1 , . . . , ∼pk , ∼q_1^1 , . . . , ∼q_1^{l_1} , . . . , ∼q_j^1 , . . . , ∼q_j^{l_j} ,
which means that each atom qi in the positive body of r is substituted with the body of
the rule ri . Then ΠP is the transformation on negative normal logic programs:
ΠP (Q) = {Πr ({r1 , . . . , rj }) | r ∈ gr(P ), ri ∈ Q, 1 ≤ i ≤ j}.
Let lfp(P )_i = Π_P^i(∅) = ΠP (ΠP (. . . ΠP (∅))); then lfp(P ) = ⋃_{i≥1} lfp(P )_i is the
least fixpoint of P . In the case of Datalog¬ programs, lfp(P ) is finite and also nega-
tive (Dung and Kanchanasut 1989).
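The Π_P transformation and lfp(P) can be sketched as follows (our own encoding: negative rules are pairs (head, negative body), and lfp accumulates the union of the iterates):

```python
from itertools import product

def pi_step(program, Q):
    """One application of Pi_P: Q is a set of negative rules (head, neg_body)."""
    by_head = {}
    for head, neg in Q:
        by_head.setdefault(head, []).append(neg)
    out = set()
    for head, pos, neg in program:
        # unfold every positive body atom with a negative rule from Q
        if all(q in by_head for q in pos):
            for choice in product(*(by_head[q] for q in sorted(pos))):
                out.add((head, frozenset(neg).union(*choice)))
    return out

def lfp(program):
    """Union of the iterates Pi_P^i(empty set), i >= 1 (finite for Datalog¬)."""
    Q = set()
    while True:
        Q2 = Q | pi_step(program, Q)
        if Q2 == Q:
            return Q
        Q = Q2

# Example 2.1: P = {p <- ~q; q <- ~p; r <- q}
P = [("p", frozenset(), frozenset({"q"})),
     ("q", frozenset(), frozenset({"p"})),
     ("r", frozenset({"q"}), frozenset())]
```

As stated in Example 2.1 below, this returns the negative program {p ← ∼q; q ← ∼p; r ← ∼p}.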
Finally, we now illustrate the key notions introduced so far—such as stable and sup-
ported partial models, regular models, tightness, least fixpoint, and transition graphs—
through a concrete example of a Datalog¬ program.
Example 2.1
Consider Datalog¬ program P (taken from Example 2.1 of Inoue and Sakama (2012))
where P = {p ← ∼q; q ← ∼p; r ← q}. Herein, we use ’;’ to separate program rules.
Note that gr(P ) = P . The least fixpoint of P is lfp(P ) = {p ← ∼q; q ← ∼p; r ← ∼p}.
Figures 1 (a), (b), and (c) show the atom dependency graph, the stable transition graph,
and the supported transition graph of P , respectively. The program P is tight, since
every cycle of adg(P ) has a negative arc. Consider five three-valued interpretations of
P : I1 = {p = 1, q = 0, r = ⋆}, I2 = {p = 0, q = 1, r = ⋆}, I3 = {p = ⋆, q = ⋆, r = ⋆},
I4 = {p = 1, q = 0, r = 0}, and I5 = {p = 0, q = 1, r = 1}. Among them, only I3 ,
I4 , and I5 are stable (also supported) partial models of P . The program P has two
regular models (I4 and I5 ) that are also stable (also supported) models of P . It has three
strict stable classes that correspond to three cycles of the stable transition graph of P
(see Figure 1 (b)): C1 = {p} → {p}, C2 = {q, r} → {q, r}, and C3 = ∅ → {p, q, r} → ∅.
It has three strict supported classes that correspond to three cycles of the supported
transition graph of P (see Figure 1 (c)): C1 = {p} → {p}, C2 = {q, r} → {q, r}, and
C4 = {p, q} → {r} → {p, q}. Note that C1 and C2 are cycles of size 1 and correspond to
stable (also supported) models of P .

Fig. 1: (a) Atom dependency graph adg(P ), (b) stable transition graph tgst (P ), and (c)
supported transition graph tgsp (P ) of Datalog¬ program P of Example 2.1.

2.2 Boolean Networks


2.2.1 Definitions
A Boolean Network (BN) f is a finite set of Boolean functions on a finite set of Boolean
variables denoted by varf . Each variable v ∈ varf is associated with a Boolean function
fv : B^|varf| → B. Function fv is called constant if it is always evaluated to either 0 or 1
regardless of its arguments. A state s of f is a Boolean vector s ∈ B^|varf| . State s can
be also seen as a mapping s : varf → B that assigns either 0 (inactive) or 1 (active) to
each variable. We can write sv instead of s(v) for short. For convenience, we write a state
simply as a string of values of variables in this state, for instance, we write 010 instead
of (0, 1, 0).
Among various types of BNs, one particularly simple yet expressive class is the so-
called AND-NOT BNs (Richard and Ruet 2013), which we introduce next. A BN f is
called an AND-NOT BN if for every v ∈ varf , fv is 0, 1, or a conjunction of literals (i.e.,
variables or their negations combined by logical conjunction). We assume that fv does not
contain two literals of the same variable.

2.2.2 Influence Graph


We now introduce the concept of Influence Graph (IG), a graphical representation that
captures the local effects of variables on one another within a BN (Richard 2019). This
structure provides an intuitive way to visualize how changes in the value of one variable
can influence the outcome of another, either positively or negatively. The following formal
definition makes this precise.
Let x be a state of f . We use x[v ← a] to denote the state y such that yv = a and yu =
xu , ∀u ∈ varf \ {v}, where a ∈ B. The influence graph of f (denoted by G(f )) is a signed
directed graph over the set of signs {⊕, ⊖} where V (G(f )) = varf , (uv, ⊕) ∈ E(G(f ))
(i.e., u positively affects the value of fv ) iff there is a state x such that fv (x[u ← 0]) <
fv (x[u ← 1]), and (uv, ⊖) ∈ E(G(f )) (i.e., u negatively affects the value of fv ) iff there
is a state x such that fv (x[u ← 0]) > fv (x[u ← 1]).
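The influence graph can be computed directly from this definition by enumerating states (exponential, so an illustration for small BNs only; the encoding of a BN as a dictionary mapping each variable to a Python function of a state is our own):

```python
from itertools import product

def influence_graph(f):
    """Signed arcs (u, v, sign) per the definition above, by state enumeration."""
    variables = sorted(f)
    arcs = set()
    for bits in product((0, 1), repeat=len(variables)):
        x = dict(zip(variables, bits))
        for u in variables:
            x0, x1 = {**x, u: 0}, {**x, u: 1}
            for v in variables:
                if f[v](x0) < f[v](x1):
                    arcs.add((u, v, "+"))
                elif f[v](x0) > f[v](x1):
                    arcs.add((u, v, "-"))
    return arcs

# Illustration: f_p = not q, f_q = not p, f_r = q (the BN encoding of the
# program of Example 2.1 under Clark's completion -- our own example).
f = {"p": lambda s: 1 - s["q"],
     "q": lambda s: 1 - s["p"],
     "r": lambda s: s["q"]}
```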
To facilitate the structural analysis of the influence graph, we introduce the concept of
feedback vertex set, which is instrumental in characterizing the behavior of cycles within
the graph. A feedback vertex set allows us to control such feedback by identifying a subset
of variables whose removal breaks all cycles of a certain type. Specifically, an even (resp.
odd) feedback vertex set of a signed directed graph G is a (possibly empty) subset of
V (G) that intersects every even (resp. odd) cycle of G. These notions will be particularly
useful in later sections when we study the graphical properties of models in Datalog¬
programs.
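For small signed digraphs, minimum even and odd feedback vertex sets can be found by brute force (our own sketch; arcs are (u, v, sign) triples and parity 0 selects even cycles, parity 1 odd cycles):

```python
from itertools import combinations

def cycles_of_parity(vertices, arcs, parity):
    """Vertex sets of simple cycles whose number of '-' arcs has the given
    parity (0 = even, 1 = odd); exponential, for small graphs only."""
    found = []
    def walk(path, start, neg):
        u = path[-1]
        for a, b, s in arcs:
            if a != u:
                continue
            n = neg + (s == "-")
            if b == start and n % 2 == parity:
                found.append(frozenset(path))
            elif b not in path:
                walk(path + [b], start, n)
    for v in vertices:
        walk([v], v, 0)
    return found

def min_feedback_vertex_set(vertices, arcs, parity):
    """Smallest vertex set intersecting every cycle of the given parity."""
    cycles = cycles_of_parity(vertices, arcs, parity)
    for k in range(len(vertices) + 1):
        for S in map(set, combinations(sorted(vertices), k)):
            if all(S & c for c in cycles):
                return S

# Atom dependency graph of Example 2.1: q -(-)-> p, p -(-)-> q, q -(+)-> r.
V = ["p", "q", "r"]
A = [("q", "p", "-"), ("p", "q", "-"), ("q", "r", "+")]
```

Here the only cycle is the even p–q cycle, so the minimum odd feedback vertex set is empty and a minimum even one is a singleton.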

2.2.3 Dynamics and Attractors


Given a BN f , an update scheme specifies the way the variables in varf update their
states at each time step. There are two major types of update schemes (Schwab et al.
2020): synchronous (in which all variables update simultaneously) and asynchronous (in
which a single variable is non-deterministically chosen for updating). Under the update
scheme employed, the dynamics of the BN are represented by a directed graph, called the
State Transition Graph (STG). We use sstg(f ) (resp. astg(f )) to denote the synchronous
(resp. asynchronous) STG of f , and formally define them as follows.
Definition 2.8
Given a BN f , the synchronous state transition graph of f (denoted by sstg(f )) is given as:
V (sstg(f )) = B^|varf| (the set of possible states of f ) and (x, y) ∈ E(sstg(f )) iff yv = fv (x)
for every v ∈ varf .

Definition 2.9
Given a BN f , the asynchronous state transition graph of f (denoted by astg(f )) is given
as: V (astg(f )) = B^|varf| (the set of possible states of f ) and (x, y) ∈ E(astg(f )) iff there is
a variable v ∈ varf such that y(v) = fv (x) ≠ x(v) and y(u) = x(u) for all u ∈ varf \ {v}.
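Both state transition graphs can be sketched by direct enumeration (our own encoding: states are tuples of 0/1 over the sorted variables, and a BN is a dictionary of Python functions of a state):

```python
from itertools import product

def states(f):
    return list(product((0, 1), repeat=len(f)))

def sstg(f):
    """Synchronous STG: all variables update simultaneously."""
    variables = sorted(f)
    arcs = set()
    for x in states(f):
        s = dict(zip(variables, x))
        y = tuple(f[v](s) for v in variables)
        arcs.add((x, y))
    return arcs

def astg(f):
    """Asynchronous STG: one changing variable updates at a time."""
    variables = sorted(f)
    arcs = set()
    for x in states(f):
        s = dict(zip(variables, x))
        for i, v in enumerate(variables):
            fv = f[v](s)
            if fv != x[i]:
                arcs.add((x, x[:i] + (fv,) + x[i + 1:]))
    return arcs

# toy BN: f_p = not q, f_q = not p (variables sorted as p, q)
f = {"p": lambda s: 1 - s["q"], "q": lambda s: 1 - s["p"]}
```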
An attractor of a BN is defined as a subset-minimal trap set, which depends on the
employed update scheme. Equivalently, an attractor is a terminal strongly connected
component of the STG corresponding to the employed update scheme (Richard 2009).
To ease the statement of our results, we define the concepts of trap set and attractor for
directed graphs in general.
Definition 2.10
Given a directed graph G, a trap set A of G is a non-empty subset of V (G) such that
there is no arc in G going out of A (formally, there do not exist two vertices x ∈ A and
y ∈ V (G) \ A such that (x, y) ∈ E(G)). An attractor of G is defined as a ⊆-minimal
trap set of G. An attractor is called a fixed point if it consists of only one vertex, and a
cyclic attractor otherwise. Equivalently, A is an attractor of G iff A is a terminal strongly
connected component of G.

Definition 2.11
Given a BN f , a synchronous (resp. an asynchronous) attractor of f is defined as an
attractor of sstg(f ) (resp. astg(f )).
Regarding the synchronous update scheme, each vertex of sstg(f ) has exactly one out-
going arc (possibly a self-arc). Hence, an attractor of sstg(f ) is equivalent to a simple
cycle of sstg(f ). Regarding the asynchronous update scheme, the graph astg(f ) may
contain multiple (up to |varf |) outgoing arcs from a vertex. Hence, an attractor of astg(f ) may
be a terminal strongly connected component comprising multiple overlapping cycles.
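As a concrete illustration of Definitions 2.8-2.10, the following Python sketch (our own code; the two-variable BN f1 = ¬x2, f2 = ¬x1 is a hypothetical example, not taken from the paper) computes attractors by brute force, directly as subset-minimal trap sets of the synchronous and asynchronous STGs:

```python
from itertools import product, combinations

# A small hypothetical BN over (x1, x2): f1(x) = not x2, f2(x) = not x1.
funcs = [lambda s: int(not s[1]), lambda s: int(not s[0])]
states = list(product((0, 1), repeat=len(funcs)))

def sync_succs(s):
    # Definition 2.8: all variables update simultaneously (one successor).
    return [tuple(f(s) for f in funcs)]

def async_succs(s):
    # Definition 2.9: one variable whose update would change its state is applied.
    return [tuple(funcs[v](s) if u == v else s[u] for u in range(len(s)))
            for v in range(len(funcs)) if funcs[v](s) != s[v]]

def attractors(succs):
    # Definition 2.10: a trap set has no outgoing arc; an attractor is a
    # subset-minimal trap set (found here by brute force over all subsets).
    traps = [frozenset(A)
             for r in range(1, len(states) + 1)
             for A in combinations(states, r)
             if all(t in A for s in A for t in succs(s))]
    return [A for A in traps if not any(B < A for B in traps)]

# sstg(f): two fixed points and one cyclic attractor {(0,0), (1,1)};
# astg(f): only the two fixed points (0,1) and (1,0).
print(attractors(sync_succs))
print(attractors(async_succs))
```

Consistently with the discussion above, the synchronous STG has a cyclic attractor {(0,0), (1,1)} that disappears under the asynchronous scheme, which keeps only the two fixed points.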

2.2.4 Trap Spaces


To better understand the long-term behavior of a BN, it is useful to reason not just about
individual states, but also about collections of states that share common properties. One
such useful abstraction is the notion of trap space (Klarner et al. 2015), which gener-
alizes the concept of trap sets by allowing partial specification of variable values. Trap
spaces serve as a compact representation of dynamically invariant regions of the state
space, within which the system cannot escape once entered. This notion is particularly
attractive because it is independent of the specific update scheme (e.g., synchronous or
asynchronous) and can be analyzed purely from the network’s structure (Klarner et al.
2015; Trinh et al. 2023). We begin by formally defining trap spaces and explaining their
relationship with sub-spaces and attractors in the context of BNs, followed by a concrete
example.
A sub-space m of a BN f is a mapping m : varf → B⋆ . A sub-space m represents
the set of all states s (denoted by S(m)) such that s(v) = m(v) for all v ∈ varf with m(v) ≠ ⋆.
It is easy to see that the notion of sub-space (resp. state) in BNs is identical to the
notion of three-valued interpretation (resp. two-valued interpretation) in normal logic
programs. For convenience, we write a sub-space simply as a string of values of variables
in this sub-space following an order of variables, for instance, we write 01⋆ instead of
{p = 0, q = 1, r = ⋆} w.r.t. the variable order (p, q, r). If a sub-space is also a trap set,
it is a trap space. Unlike trap sets and attractors, trap spaces of a BN are independent
of the employed update scheme (Klarner et al. 2015). Then a trap space m is minimal
iff there is no other trap space m′ such that S(m′ ) ⊂ S(m). It is easy to derive that a
minimal trap space contains at least one attractor of the BN regardless of the employed
update scheme (Klarner et al. 2015).
Example 2.2
Consider BN f with varf = {p, q, r}, fp = ¬q, fq = ¬p, fr = q. Figure 2 (a), Figure 2 (b),
and Figure 2 (c) show the influence graph, the synchronous STG, and the asynchronous
STG of f , respectively. Attractor states are highlighted with boxes. The synchronous
STG sstg(f ) has two fixed points and one cyclic attractor, whereas the asynchronous
STG astg(f ) has only two fixed points. The BN f has five trap spaces: m1 = 10⋆,
m2 = 01⋆, m3 = ⋆ ⋆ ⋆, m4 = 100, and m5 = 011. Among them, m4 and m5 are minimal;
they are also two fixed points of f .

Fig. 2: (a) G(f ), (b) sstg(f ), and (c) astg(f ). The BN f is given in Example 2.2.
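The five trap spaces of Example 2.2 can be recomputed by brute force. In the sketch below (our own code, not from the paper), a sub-space is a trap space iff the image of every state of S(m) agrees with every fixed coordinate of m; for sub-spaces, this single synchronous check is equivalent to closure under any update scheme:

```python
from itertools import product

ORDER = ('p', 'q', 'r')
# The BN of Example 2.2: f_p = not q, f_q = not p, f_r = q.
FUNCS = {'p': lambda s: int(not s['q']),
         'q': lambda s: int(not s['p']),
         'r': lambda s: s['q']}

def states_of(m):
    """All states s with s(v) = m(v) wherever m(v) != '*'."""
    choices = [(0, 1) if m[v] == '*' else (m[v],) for v in ORDER]
    return [dict(zip(ORDER, vals)) for vals in product(*choices)]

def is_trap_space(m):
    # Closure of S(m) under the image of f: every fixed coordinate of m must be
    # preserved; for sub-spaces this is update-scheme independent.
    return all(all(m[v] == '*' or FUNCS[v](s) == m[v] for v in ORDER)
               for s in states_of(m))

def leq(m1, m2):  # S(m1) is included in S(m2)
    return all(m2[v] == '*' or m1[v] == m2[v] for v in ORDER)

subspaces = [dict(zip(ORDER, vals)) for vals in product((0, 1, '*'), repeat=3)]
traps = [m for m in subspaces if is_trap_space(m)]
minimal = [m for m in traps if not any(n != m and leq(n, m) for n in traps)]

show = lambda m: ''.join(str(m[v]) for v in ORDER)
print(sorted(map(show, traps)))    # the five trap spaces of Example 2.2
print(sorted(map(show, minimal)))  # the two minimal ones
```

The output lists the five trap spaces 10⋆, 01⋆, ⋆⋆⋆, 100, 011 (with ⋆ printed as '*') and the two minimal ones 100 and 011, matching Example 2.2.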

3 Datalog¬ Programs and Boolean Networks

To bridge the gap between logic programming and Boolean network analysis, we now
introduce a way to encode a Datalog¬ program as a BN. This translation enables us to
analyze the logical structure and dynamic behavior of a Datalog¬ program using tools and
concepts from the BN theory. Intuitively, each atom in the Datalog¬ program becomes a
variable in the BN, and the update function of each variable reflects the conditions under
which the atom can be derived in the program. This encoding provides a natural and
faithful representation of Datalog¬ programs in terms of Boolean dynamics, and serves
as a foundation for structural and dynamical analysis in the subsequent sections.
Definition 3.1
Let P be a Datalog¬ program. We define a BN f that corresponds to P as follows:
varf = HBP and for each v ∈ varf ,

    fv = ⋁_{r ∈ gr(P ), v = h(r)} bf(r).

By convention, if there is no rule r ∈ gr(P ) such that h(r) = v, then fv = 0.
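To illustrate Definition 3.1, the following sketch (our own code; the rule representation as triples is an assumption) encodes the ground rules of the program of Example 3.1 below, plus an extra atom d that heads no rule, and evaluates each fv as the disjunction of the body formulas of the rules with head v:

```python
from itertools import product

# Ground rules of the program {a <- b; a <- ~b; b <- ~b, c; c <- b} of
# Example 3.1, written as (head, positive body atoms, negative body atoms).
RULES = [('a', ['b'], []), ('a', [], ['b']), ('b', ['c'], ['b']), ('c', ['b'], [])]
ATOMS = ['a', 'b', 'c', 'd']  # 'd' heads no rule, so f_d = 0 by convention

def f(v, x):
    """f_v(x): disjunction over the rules with head v of their body formulas."""
    return int(any(all(x[p] for p in pos) and not any(x[n] for n in neg)
                   for head, pos, neg in RULES if head == v))

for vals in product((0, 1), repeat=len(ATOMS)):
    x = dict(zip(ATOMS, vals))
    assert f('a', x) == 1                        # b or not b is constant 1
    assert f('b', x) == int(not x['b'] and x['c'])
    assert f('c', x) == x['b']
    assert f('d', x) == 0                        # headless atom: f_d = 0
```

The assertions confirm that fa = b ∨ ¬b ≡ 1, fb = ¬b ∧ c, fc = b, and fd = 0 by the convention for headless atoms.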


The graphical connection between Datalog¬ programs and BNs is given in Proposi-
tion 3.1. The atom dependency graph is purely syntactic—it reflects all occurrences of
atoms in rule bodies, whereas the influence graph is (locally) dynamical—it only includes
edges that actually cause a change in the Boolean update function. Intuitively, the in-
fluence graph of the BN derived from a Datalog¬ program is always contained within its
atom dependency graph. This reflects the fact that dynamical influence is grounded in
syntactic dependency, but not all syntactic dependencies are dynamically active.
This graphical relation is very important because all the subsequent results rely on it
and the atom dependency graph of a Datalog¬ program can be efficiently built based
on the syntax only, whereas the construction of the influence graph of a BN may be
exponential in general. Note, however, that the influence graph of a BN is usually built
using Binary Decision Diagrams (BDDs) (Richard 2019). In this case, the influence
graph can be obtained efficiently because each Boolean function is already in Disjunctive
Normal Form (DNF) (see Definition 3.1), so its BDD would not be too large.
Proposition 3.1
Consider a Datalog¬ program P . Let f be the encoded BN of P . Then the influence
graph of f is a sub-graph of the atom dependency graph of P .

Proof
By construction, G(f ) and adg(P ) have the same set of vertices (i.e., HBP ). Let (uv, s) be
an arc in G(f ). We show that (uv, s) is also an arc in adg(P ). Without loss of generality,
suppose that s = ⊕.
Assume that (uv, ⊕) is not an arc of adg(P ). There are two cases. Case 1: there is no
arc from u to v in adg(P ). In this case, both u and ¬u clearly do not appear in fv . This
implies that G(f ) has no arc from u to v, which is a contradiction. Case 2: there is only
a negative arc from u to v in adg(P ). It follows that ¬u appears in fv but u does not
because fv is in DNF. Then, for any state x and for every conjunction c of fv , we have that
c(x[u ← 0]) ≥ c(x[u ← 1]). This implies that fv (x[u ← 0]) ≥ fv (x[u ← 1]) for any state x.
Since (uv, ⊕) is an arc in G(f ), there is a state x such that fv (x[u ← 0]) < fv (x[u ← 1]).
This leads to a contradiction. Hence, (uv, ⊕) is an arc in adg(P ).
Now, we can conclude that G(f ) is a sub-graph of adg(P ).
The above result establishes a structural correspondence between a Datalog¬ program
and its encoded BN. This connection lays a foundation for transferring concepts and tech-
niques between the two domains. In particular, it motivates the application of ideas from
argumentation theory and BN dynamics to Datalog¬ programs. Inspired by the concept
of complete extension in abstract argumentation frameworks, the notion of complete trap
space has been proposed for BNs (Trinh et al. 2025a).
Definition 3.2 (Trinh et al. (2025a))
Consider a BN f . A sub-space m is a complete trap space of f iff for every v ∈ varf ,
m(v) = m(fv ).

Theorem 3.1
Let P be a Datalog¬ program and f be its encoded BN. Then the supported partial
models of P coincide with the complete trap spaces of f .

Proof
A three-valued interpretation I of P is a supported partial model of P
iff I is a three-valued model of comp(P )
iff I is a three-valued model of ⋀_{v ∈ HBP} (v ↔ fv )
iff I is a complete trap space of f .
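To make the condition of Definition 3.2 concrete, the following sketch (our own code, not from the paper) evaluates each fv in Kleene three-valued logic and filters the 27 sub-spaces of the BN encoding the program of Example 3.1 below (fa = b ∨ ¬b, fb = ¬b ∧ c, fc = b):

```python
from itertools import product

STAR = '*'  # the unknown value

# Kleene three-valued connectives over {0, 1, '*'}.
def NOT(a):
    return STAR if a == STAR else 1 - a

def AND(a, b):
    if a == 0 or b == 0:
        return 0
    return STAR if STAR in (a, b) else 1

def OR(a, b):
    if a == 1 or b == 1:
        return 1
    return STAR if STAR in (a, b) else 0

# Update functions of the BN f of Example 3.1.
FUNCS = {'a': lambda m: OR(m['b'], NOT(m['b'])),
         'b': lambda m: AND(NOT(m['b']), m['c']),
         'c': lambda m: m['b']}

def is_complete_trap_space(m):
    # Definition 3.2: m(v) equals the three-valued evaluation m(f_v) for every v.
    return all(FUNCS[v](m) == m[v] for v in FUNCS)

complete = [''.join(str(m[v]) for v in ('a', 'b', 'c'))
            for vals in product((0, 1, STAR), repeat=3)
            for m in [dict(zip(('a', 'b', 'c'), vals))]
            if is_complete_trap_space(m)]
print(sorted(complete))  # ['***', '100']
```

Only ⋆⋆⋆ and 100 pass the check, matching the two supported partial models reported in Example 3.1. Note that b ∨ ¬b evaluates to ⋆ (not 1) when b = ⋆, which is precisely why ⋆⋆⋆ is complete.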
Building on this result, we now recall a key result concerning the relationship between
trap spaces and complete trap spaces in BNs (see Lemma 3.1), thereby proving a direct
consequence (see Theorem 3.2).
Lemma 3.1 (Proposition 3 of Trinh et al. (2025a))
Let f be a BN. For every trap space m of f , there is a complete trap space m̂ of f such
that m̂ ≤s m.

Theorem 3.2
Let f be a BN. A sub-space m is a ≤s -minimal complete trap space of f iff m is a
≤s -minimal trap space of f .

Proof
Regarding the forward direction, assume that m is not a ≤s -minimal trap space of f . Then
there is a trap space m′ of f such that m′ <s m. By Lemma 3.1, there is a complete trap
space m̂ such that m̂ ≤s m′ . It follows that m̂ <s m, which is a contradiction. Hence,
m is a ≤s -minimal trap space of f . The backward direction is trivial since the set of
complete trap spaces is a subset of the set of trap spaces.
Theorem 3.2 shows that minimality under the subset ordering is preserved when re-
stricting to complete trap spaces, thereby aligning the minimal elements in both the sets
of trap spaces and complete trap spaces. This entails the key connection expressed by
the following corollary.
Corollary 3.1
Let P be a Datalog¬ program and f be its encoded BN. Then the ≤s -minimal supported
partial models of P coincide with the ≤s -minimal trap spaces of f .

Proof
This immediately follows from Theorem 3.2 and Theorem 3.1.
By restricting our attention to two-valued interpretations, we naturally obtain the
following correspondence between supported models in Datalog¬ programs and fixed
points in BNs.
Corollary 3.2
Let P be a Datalog¬ program and f be its encoded BN. Then the supported models of
P coincide with the fixed points of f .

Proof
This immediately follows from Corollary 3.1 and the fact that the supported models of
P are the two-valued supported partial models of P (thus are always ≤s -minimal) and
the fixed points of f are the two-valued ≤s -minimal trap spaces of f .
To illustrate the correspondence between Datalog¬ programs and BNs established
above, we present the following concrete example.
Example 3.1
Consider Datalog¬ program P = {a ← b; a ← ∼b; b ← ∼b, c; c ← b}. We use “;” to
separate program rules. The encoded BN f of P is: varf = {a, b, c}, fa = b ∨ ¬b, fb =
¬b ∧ c, fc = b. Figure 3 (a) and Figure 3 (b) show the atom dependency graph of P
and the influence graph of f , respectively. We can see that V (G(f )) = V (adg(P )) and
E(G(f )) ⊂ E(adg(P )). The BN f has four trap spaces: m1 = {a = ⋆, b = 0, c = 0},
m2 = {a = ⋆, b = ⋆, c = ⋆}, m3 = {a = 1, b = 0, c = 0}, m4 = {a = 1, b = ⋆, c = ⋆}.
Among them, m2 and m3 are two complete trap spaces of f and are also two supported
partial models of P . In particular, m3 is a fixed point of f and is also a supported model
of P .

Fig. 3: (a) Atom dependency graph adg(P ) of the Datalog¬ program P , (b) influence
graph G(f ) of the BN f . The details of P and f are given in Example 3.1.

4 Graphical Analyses of Datalog¬ Programs

In this section, we present new graphical analysis results on Datalog¬ programs. We
first point out some problems with the previous claims published in You and Yuan (1994)
(see Section 4.1). We then introduce precise definitions (see Section 4.2) for proving our
main results on odd cycles (see Section 4.3), even cycles (see Section 4.4), and upper
bounds (see Section 4.5). To improve readability and help the reader quickly identify key findings, Table 1
provides a structured overview of these new results on Datalog¬ programs, with the key
contributions highlighted in bold.

Table 1: Summary of graphical analysis results on Datalog¬ programs. Main results are
highlighted in bold.

Class      Odd Cycles       Even Cycles       Upper Bounds

General    Theorem 4.7      Theorem 4.12      Theorem 4.18
           Corollary 4.2    Corollary 4.5     Corollary 4.8
           Theorem 4.8      Corollary 4.6     Proposition 4.6
                            Corollary 4.7     Theorem 4.20
                                              Theorem 4.22

Uni-rule   Corollary 4.3    Theorem 4.14      Theorem 4.24
           Theorem 4.10     Theorem 4.16      Theorem 4.26

4.1 Previous Claims Made for Normal Logic Programs


In You and Yuan (1994), the authors use the following notion of stratification which
exists for any NLP:
Definition 4.1 (You and Yuan (1994))
Let P be an NLP. A stratification of P is a partial order ≤ over subsets of HBP such
that
• every (ground) atom belongs to one and only one stratum; and
• two atoms a and b are in the same stratum if they are on a common cycle in adg(P ), or
there exists an atom c such that a and c are in the same stratum and the same holds
true for b and c; and these are the only atoms that can be in the same stratum.
Let [a] denote the stratum of an atom a. [a] is a lower stratum than [b], denoted by
[a] ≤ [b], iff there is a path from some atom in [a] to some atom in [b].

Definition 4.2 (You and Yuan (1994))


Let P be an NLP. A stratification of P is said to be well-founded iff for every stratum
[b], there exists [a] such that [a] ≤ [b] and for any stratum [c], if [c] ≤ [a] then there are
only positive arcs from atoms in [c] to atoms in [a].
However, the given Definition 4.2 appears to be inconsistent with the proofs presented
in You and Yuan (1994), since they do not use the positive arc requirement, as well as
with their later definition given in Lin and You (2002). Let us thus consider the following
standard definition:
Definition 4.3 (well-founded stratification)
Let P be an NLP. A stratification of P is said to be well-founded iff there is no infinite
descending chain of strata [a0 ] ≥ [a1 ] ≥ . . ..
With that definition, an NLP whose Herbrand base is finite always has a well-founded
stratification. In contrast, it is well known that a general NLP may not have a well-
founded stratification, e.g., NLP {p(X) ← p(s(X))}. By mimicking the proof of Theorem
5.3(i) of You and Yuan (1994), we can prove the following result for negative NLPs:
Theorem 4.1
Consider a negative NLP P . If P has a well-founded stratification and adg(P ) has no
odd cycle, then all the regular models of P are two-valued.
The result for general NLPs claimed in You and Yuan (1994) is unfortunately not
correctly proved there, nor in any subsequent work to the best of our knowledge. We can
similarly prove Theorem 5.3(ii) of You and Yuan (1994) for negative NLPs only:
Theorem 4.2
Consider a negative NLP P . If P has a well-founded stratification and adg(P ) has no
even cycle, then P has a unique regular model.
That second result is claimed in You and Yuan (1994) for general NLPs using the
property that “if adg(P ) has no even cycle, then adg(lfp(P )) has no even cycle”, but this
is not correct:
Counter-example 4.1
Let P = {a ← c; b ← c; c ← ∼a, ∼b}. Then lfp(P ) = {a ← ∼a, ∼b; b ← ∼a, ∼b; c ←
∼a, ∼b}. The graph adg(P ) has no even cycle, but adg(lfp(P )) does (there is the even
cycle a −⊖→ b −⊖→ a in adg(lfp(P ))).
We shall fix these issues for general Datalog¬ programs with Theorem 4.7 and Corol-
lary 4.6, but it is worth noting that the questions for general NLPs are still open to the
best of our knowledge.
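Counter-example 4.1 can be checked mechanically. The sketch below (our own code, not from the paper) enumerates the simple cycles of a signed directed graph, rooting each cycle at its smallest vertex to avoid duplicates, and classifies a cycle as even iff it carries an even number of negative arcs:

```python
def simple_cycles(arcs):
    """Sign sequences of all simple cycles of a signed digraph given as (u, v, sign) arcs."""
    adj = {}
    for u, v, s in arcs:
        adj.setdefault(u, []).append((v, s))
    found = []
    def dfs(start, u, on_path, signs):
        for v, s in adj.get(u, []):
            if v == start:
                found.append(signs + [s])
            elif v > start and v not in on_path:  # root each cycle at its smallest vertex
                dfs(start, v, on_path | {v}, signs + [s])
    for root in sorted({u for u, _, _ in arcs}):
        dfs(root, root, {root}, [])
    return found

def has_even_cycle(arcs):
    # An even cycle has an even number of negative arcs.
    return any(c.count('-') % 2 == 0 for c in simple_cycles(arcs))

# adg(P) for P = {a <- c; b <- c; c <- ~a, ~b} (Counter-example 4.1)
adg_P = [('c', 'a', '+'), ('c', 'b', '+'), ('a', 'c', '-'), ('b', 'c', '-')]
# adg(lfp(P)) after unfolding c into the other rules
adg_lfp = [('a', 'a', '-'), ('b', 'a', '-'), ('a', 'b', '-'),
           ('b', 'b', '-'), ('a', 'c', '-'), ('b', 'c', '-')]

print(has_even_cycle(adg_P))    # False
print(has_even_cycle(adg_lfp))  # True
```

Indeed, adg(P) has exactly two simple cycles (through a, c and through b, c), both odd, whereas adg(lfp(P)) acquires the even cycle a → b → a carrying two negative arcs.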

4.2 Definitions for Graphical Analysis


To facilitate our graphical analysis of Datalog¬ programs, we begin by establishing a set
of technical definitions and auxiliary results. These foundational elements provide the
necessary tools to reason about the structural properties of programs via their associated
graphs. In particular, we focus on the interplay between atom dependencies, program
rules, and semantic interpretations, setting the stage for the results that follow.

We start with the natural yet important insight that the least fixpoint of a
Datalog¬ program is also a Datalog¬ program.
Proposition 4.1
Consider a Datalog¬ program P . Then the least fixpoint of P is also a Datalog¬ program.

Proof
Let lfp(P ) be the least fixpoint of P . By the definition of least fixpoint, HBlfp(P ) = HBP .
Hence, lfp(P ) is also a Datalog¬ program.
We now recall and establish several important results that characterize the relation-
ships between different semantics of interpretations under the assumption of tightness—a
key structural condition on the atom dependency graph of a Datalog¬ program. These
results highlight the equivalence between stable and supported (partial) models in tight
Datalog¬ programs, and they play a central role in connecting logic programming se-
mantics with the dynamics of associated BNs. In particular, tightness guarantees that
the distinction between the stable and supported semantics collapses, thereby simplify-
ing semantic analysis. Moreover, the regular models of a tight program coincide with the
≤s -minimal trap spaces of the corresponding BN, reinforcing the utility of tightness in
both logical and graphical reasoning.
Theorem 4.3 (Theorem 3.2 of Fages (1994))
Consider a tight Datalog¬ program P . Then the set of stable models of P coincides with
the set of supported models of P .

Theorem 4.4 (Lemma 16 of Dietz et al. (2014))


Consider a tight Datalog¬ program P . Then the set of stable partial models of P coincides
with the set of supported partial models of P .

Corollary 4.1
Consider a negative Datalog¬ program P . Then the set of stable partial models of P
coincides with the set of supported partial models of P .

Proof
Since P is negative, the atom dependency graph of P contains no positive arcs, thus P
is tight. The corollary immediately follows from Theorem 4.4.

Lemma 4.1
Consider a Datalog¬ program P and its encoded BN f . If P is tight, then the regular
models of P coincide with the ≤s -minimal trap spaces of f .

Proof
A three-valued interpretation I is a regular model of P
iff I is a ≤s -minimal stable partial model of P by definition
iff I is a ≤s -minimal supported partial model of P by Theorem 4.4
iff I is a ≤s -minimal trap space of f by Corollary 3.1.

The following result shows that the (stable) semantic behaviors of a Datalog¬ program
are preserved under its least fixpoint transformation. However, the (supported) semantic
behaviors of a Datalog¬ program may not be preserved.
Theorem 4.5 (Theorem 3.1 of Aravindan and Dung (1995))
Consider a Datalog¬ program P . Let lfp(P ) be the least fixpoint of P . Then P and lfp(P )
have the same set of stable partial models, the same set of regular models, and the same
set of stable models.
The next result establishes a characterization of regular models via trap spaces after
applying the least fixpoint transformation to a Datalog¬ program.
Lemma 4.2
Consider a Datalog¬ program P . Let lfp(P ) be the least fixpoint of P and f ′ be the
encoded BN of lfp(P ). Then the regular models of P coincide with the ≤s -minimal trap
spaces of f ′ .

Proof
A three-valued interpretation I is a regular model of P
iff I is a regular model of lfp(P ) by Theorem 4.5
iff I is a ≤s -minimal stable partial model of lfp(P ) by definition
iff I is a ≤s -minimal supported partial model of lfp(P ) by Corollary 4.1
iff I is a ≤s -minimal trap space of f ′ by Corollary 3.1.
The following example illustrates the concepts and results introduced above by ana-
lyzing a concrete Datalog¬ program, its least fixpoint, and the correspondence between
regular models and trap spaces.
Example 4.1
Consider Datalog¬ program P of Example 3.1. Figure 3 (a) shows the atom dependency
graph of P , which demonstrates that P is non-tight because of the positive cycle
b −⊕→ c −⊕→ b.
Program P has two supported partial models: m2 = {a = ⋆, b = ⋆, c = ⋆} and m3 =
{a = 1, b = 0, c = 0}. However, only m3 is a stable partial model of P . The least fixpoint
of P is lfp(P ) = {a ← ∼b}. The program lfp(P ) has a unique stable (supported) partial
model m3 . The encoded BN f ′ of lfp(P ) is varf ′ = {a, b, c}, fa′ = ¬b, fb′ = 0, fc′ = 0. The
BN f ′ has a unique ≤s -minimal trap space m3 . This is consistent with Lemma 4.2.

4.3 Model Existence and Odd Cycles


This subsection investigates the existence of models for Datalog¬ programs in the ab-
sence of odd cycles in their atom dependency graphs. We present both known and novel
results that clarify the role of odd cycles in determining the presence or absence of var-
ious semantic models. Our analysis distinguishes between general Datalog¬ programs
and the more restrictive class of uni-rule Datalog¬ programs, for which stronger guaran-
tees and tighter characterizations can be obtained. These results lay the foundation for
understanding the structural limitations imposed by odd dependencies.

General Datalog¬ Programs. We first show that the absence of odd cycles in a Datalog¬
program is preserved under its least fixpoint transformation.

Lemma 4.3
Let P be a Datalog¬ program and lfp(P ) be its least fixpoint. If adg(P ) has no odd
cycle, then adg(lfp(P )) has no odd cycle.

Proof
This directly follows from Lemma 5.3 of Fages (1994).
Building on this observation, we now connect the absence of odd cycles in the atom
dependency graph to dynamical properties of the associated BN, which ultimately allows
us to characterize the nature of regular models in such cases.
Theorem 4.6 (Theorem 1 of Richard (2010))
Let f be a BN . If G(f ) has no odd cycle, then astg(f ) has no cyclic attractor.

Theorem 4.7 (main result)


Let P be a Datalog¬ program. If adg(P ) has no odd cycle, then all the regular models
of P are two-valued.

Proof
Let lfp(P ) be the least fixpoint of P . By Proposition 4.1, lfp(P ) is a Datalog¬ program.
By Lemma 4.3, adg(lfp(P )) has no odd cycle. Let f ′ be the encoded BN of lfp(P ).
By Lemma 4.2, the regular models of P coincide with the ≤s -minimal trap spaces of f ′ .
Since G(f ′ ) is a sub-graph of adg(lfp(P )), G(f ′ ) has no odd cycle. By Theorem 4.6,
astg(f ′ ) has no cyclic attractor. Each ≤s -minimal trap space of f ′ contains at least
one attractor of astg(f ′ ) (Klarner et al. 2015). In addition, if a ≤s -minimal trap space
contains a fixed point, then it is also a fixed point because of the minimality. Hence,
all the ≤s -minimal trap spaces of f ′ are fixed points. This implies that all the regular
models of P are two-valued.
An immediate consequence of the above result is the guaranteed existence of stable
models for programs whose atom dependency graphs are free of odd cycles.
Corollary 4.2
Consider a Datalog¬ program P . If adg(P ) has no odd cycle, then P has at least one
stable model.

Proof
This immediately follows from Theorem 4.7 and the fact that P always has at least one
regular model.
Inspired by Corollary 4.2, we explore an interesting result shown in Theorem 4.8. To
prove this result, we first prove an auxiliary result that establishes a useful structural
property of signed directed graphs that are strongly connected (see Lemma 4.4). Then
an existing result in BNs is applied.
Lemma 4.4
If a signed directed graph G is strongly connected and has no odd cycle or has no even
cycle, then G is sign-definite.

Proof
We first prove that each arc of G belongs to a cycle in G (*). Take an arbitrary arc
(uv, ε) in G where ε ∈ {⊕, ⊖}. Since G is strongly connected, there is a directed path
from v to u. By adding (uv, ε) to this path, we obtain a cycle.
Assume that G is not sign-definite. Then there are two arcs: (uv, ⊕) and (uv, ⊖). By
(*), (uv, ⊕) (resp. (uv, ⊖)) belongs to a cycle in G (say C). C is an even (resp. odd)
cycle because G has no odd (resp. even) cycle. Then (C − (uv, ⊕)) + (uv, ⊖) (resp.
(C − (uv, ⊖)) + (uv, ⊕)) is an odd (resp. even) cycle in G. This implies a contradiction.
Hence, G is sign-definite.

Theorem 4.8
Consider a Datalog¬ program P . Suppose that adg(P ) is strongly connected, has at least
one arc, and has no odd cycle. If P is tight, then P has two stable models A and B
such that ∀v ∈ HBP , either v ∈ A or v ∈ B. In addition, A and B can be computed in
polynomial time w.r.t. |HBP |.

Proof
Let f be the encoded BN of P . By Theorem 4.3, the stable models of P coincide with
the supported models of P , thus the fixed points of f by Corollary 3.2. We show that f
has two fixed points that are complementary.
Since adg(P ) is strongly connected and has no odd cycle, it is sign-definite
by Lemma 4.4. Since E(G(f )) ⊆ E(adg(P )), G(f ) is also sign-definite. The graph adg(P )
has the minimum in-degree of at least one because it is strongly connected and has at
least one arc. This implies that for every variable j ∈ varf , the number of arcs ending
at j in G(f ) is at least one. Hence, fj cannot be constant for every variable j ∈ varf .
It is known that when adg(P ) is strongly connected and has no odd cycle, its set
of vertices can be divided into two equivalence classes (say S + and S − ) such that any
two vertices in S + (resp. S − ) are connected by either no arc or a positive arc, and
there is either no arc or a negative arc between two vertices in S + and S − (Theorem
1 of Akutsu et al. (2012)). Since E(G(f )) ⊆ E(adg(P )) and V (G(f )) = V (adg(P )), S +
and S − are still such two equivalence classes in G(f ).
Let x be a state defined as: xi = 1 if i ∈ S + and xi = 0 if i ∈ S − . Consider a variable
j. If xj = 0, then j ∈ S − , and for all i ∈ varf such that G(f ) has a positive arc from i
to j, i ∈ S − , thus xi = 0, and for all i ∈ varf such that G(f ) has a negative arc from
i to j, i ∈ S + , thus xi = 1. Since fj cannot be constant, fj (x) = 0. Analogously, if
xj = 1, then fj (x) = 1. Since j is arbitrary, we can conclude that x is a fixed point of f .
By a similar deduction, we can conclude that the complementary state x̄, where
x̄i = 1 − xi , ∀i ∈ varf , is also a fixed point of f . Let A and B be the two two-valued
interpretations of P corresponding to x and x̄. Then A and B are stable models of P .
We have that ∀v ∈ HBP , either v ∈ A or
v ∈ B. In addition, since S + and S − can be computed in polynomial time (Akutsu et al.
2012) w.r.t. |V (G(f ))| = |varf |, A and B can be computed in polynomial time w.r.t.
|HBP |.
The following example demonstrates an application of Theorem 4.8, showcasing a
Datalog¬ program whose atom dependency graph satisfies the required conditions and
admits exactly two complementary stable models.

Example 4.2
Consider a Datalog¬ program P = {a ← ∼b; b ← ∼a; b ← ∼c; c ← ∼b}. Figure 4 shows
the atom dependency graph of P . It is easy to see that adg(P ) is strongly connected, has
at least one arc, has no odd cycle, and P is tight. The program P has two supported
models: A = {a = 0, b = 1, c = 0}, B = {a = 1, b = 0, c = 1}. These models are also two
stable models of P . It is easy to see that A ∩ B = ∅ and A ∪ B = HBP .


Fig. 4: Atom dependency graph of the Datalog¬ program P given in Example 4.2.
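The construction used in the proof of Theorem 4.8 can be replayed on Example 4.2. The sketch below (our own code, not from the paper) computes the two classes S+ and S− by a BFS 2-colouring of adg(P), treating arcs as undirected, and verifies that the induced state and its complement are both fixed points of the encoded BN (hence, by Corollary 3.2 and Theorem 4.3, stable models of the tight program P):

```python
from collections import deque

# adg(P) for P = {a <- ~b; b <- ~a; b <- ~c; c <- ~b} (Example 4.2), as (u, v, sign).
ARCS = [('b', 'a', '-'), ('a', 'b', '-'), ('c', 'b', '-'), ('b', 'c', '-')]

# Encoded BN: f_a = not b, f_b = not a or not c, f_c = not b.
FUNCS = {'a': lambda x: int(not x['b']),
         'b': lambda x: int(not x['a'] or not x['c']),
         'c': lambda x: int(not x['b'])}

def two_colouring(arcs):
    """BFS 2-colouring: positive arcs keep the colour, negative arcs flip it
    (directions are ignored; this succeeds when the graph has no odd cycle)."""
    adj = {}
    for u, v, s in arcs:
        adj.setdefault(u, []).append((v, s))
        adj.setdefault(v, []).append((u, s))
    colour = {}
    for root in adj:
        if root in colour:
            continue
        colour[root] = 1
        queue = deque([root])
        while queue:
            u = queue.popleft()
            for v, s in adj[u]:
                c = colour[u] if s == '+' else 1 - colour[u]
                if v not in colour:
                    colour[v] = c
                    queue.append(v)
                elif colour[v] != c:
                    raise ValueError('odd cycle found')
    return colour

x = two_colouring(ARCS)           # one class mapped to 1, the other to 0
x_bar = {v: 1 - x[v] for v in x}  # the complementary state
assert all(FUNCS[v](x) == x[v] for v in FUNCS)
assert all(FUNCS[v](x_bar) == x_bar[v] for v in FUNCS)
print(x, x_bar)  # the two complementary stable models of Example 4.2
```

Here the colouring puts a and c in one class and b in the other, recovering exactly the models A and B of Example 4.2.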

Uni-rule Datalog¬ Programs. We now turn our attention to a syntactic fragment of


Datalog¬ programs, which we refer to as uni-rule Datalog¬ programs. These are pro-
grams in which each ground atom appears in the head of at most one ground rule. De-
spite their restricted form, uni-rule Datalog¬ programs arise naturally in various modeling
scenarios and exhibit desirable computational properties (see the more detailed discussion
in Remark 4.1). In this subsection, we study the model existence and semantic charac-
teristics of such programs in relation to the structure of their atom dependency graphs.
Remark 4.1
Why are uni-rule Datalog¬ programs important? First, the class of uni-rule Datalog¬
programs is easily identifiable by a syntactic characterization. Second, this class is as
hard as the class of general Datalog¬ programs w.r.t. the stable model semantics, since
deciding whether a uni-rule Datalog¬ program has a stable model is NP-
complete (Seitzer and Schlipf 1997). In addition, computing the (three-valued) well-
founded model of a uni-rule Datalog¬ program takes linear time (Seitzer and Schlipf 1997). Third,
the corresponding Datalog¬ program of an abstract argumentation framework is exactly
a uni-rule Datalog¬ program (Caminada and Gabbay 2009; Caminada et al. 2015). Fi-
nally, it has been shown that in general no abstract argumentation semantics is able
to coincide with the L-stable model semantics in Datalog¬ programs, thus the crucial
question is whether there exists a restricted class of Datalog¬ programs for which the
semi-stable semantics in abstract argumentation frameworks does coincide with the L-
stable model semantics in Datalog¬ programs (Caminada et al. 2015). In Caminada et al.
(2015), the L-stable model semantics of a uni-rule Datalog¬ program1 coincides with the
semi-stable semantics of its corresponding abstract argumentation framework.
By considering uni-rule Datalog¬ programs, we obtain several stronger results. First,
we show that every BN encoding a uni-rule Datalog¬ program is an AND-NOT BN.
Proposition 4.2
Consider a uni-rule Datalog¬ program P . Then the encoded BN f of P is an AND-NOT
BN.

1 In that paper, a uni-rule Datalog¬ program is called an AF-program.



Proof
Consider a variable v ∈ HBP . There are two cases. Case 1: there is no rule whose head
is v. Then fv = 0 by construction. Case 2: there is exactly one rule r whose head is v.
Then fv = ⋀_{w ∈ b+ (r)} w ∧ ⋀_{w ∈ b− (r)} ¬w by construction. It follows that fv is either 0 or a
conjunction of literals. Hence, f is an AND-NOT BN.
A key advantage of uni-rule Datalog¬ programs lies in the simplicity of their syntactic
structure, which enables a tighter correspondence between their logical and dynamical
representations. The following insight formalizes this observation by establishing that, for
any uni-rule Datalog¬ program, the atom dependency graph coincides exactly with the influence
graph of its associated BN. This structural alignment is significant because it allows one
to analyze model-theoretic and dynamical properties of such programs interchangeably
through either graph, thereby facilitating the transfer of results and intuitions across the
logic programming and BN domains.
Proposition 4.3
Consider a uni-rule Datalog¬ program P . Let f be the encoded BN of P . Then the
influence graph of f and the atom dependency graph of P coincide.

Proof
By construction, V (G(f )) = V (adg(P )) = HBP . By Proposition 3.1, E(G(f )) ⊆
E(adg(P )). Assume that (uv, ⊕) is an arc in E(adg(P )). There exists a rule r ∈ gr(P )
such that h(r) = v and u ∈ b+ (r). Let x be a state of f such that x(w) = 0 if
w ∈ b− (r), x(w) = 1 if w ∈ b+ (r), and x(w) = 1 otherwise. Since P is uni-rule,
fv = ⋀_{w ∈ b+ (r)} w ∧ ⋀_{w ∈ b− (r)} ¬w. We have that fv (x[u ← 0]) = 0 < fv (x[u ← 1]) = 1.
Hence, (uv, ⊕) is also an arc in E(G(f )). The case where (uv, ⊖) is an arc in E(adg(P )) is
similar. It follows that E(adg(P )) ⊆ E(G(f )), leading to E(G(f )) = E(adg(P )). Hence,
G(f ) coincides with adg(P ).

Remark 4.2
It is easy to see that the ground instantiation of a uni-rule Datalog¬ program is uniquely
determined by its atom dependency graph. Similarly, an AND-NOT BN is uniquely
determined by its influence graph.
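Propositions 4.2 and 4.3 can be checked on a small instance. In the sketch below (our own code; the uni-rule program {a ← b, ∼c; b ← a; c ← ∼a} is a hypothetical example), each fv is the single conjunction determined by the unique rule for v, and the influence graph is recomputed from its definition by toggling each input variable:

```python
from itertools import product

# A hypothetical uni-rule program {a <- b, ~c; b <- a; c <- ~a}:
# each atom heads exactly one rule, given as head -> (positive body, negative body).
RULES = {'a': (['b'], ['c']), 'b': (['a'], []), 'c': ([], ['a'])}
ATOMS = sorted(RULES)

def f(v, x):
    # AND-NOT update function: conjunction of the body literals (Proposition 4.2).
    pos, neg = RULES[v]
    return int(all(x[p] for p in pos) and not any(x[n] for n in neg))

def influence_arcs():
    """Signed arcs (u, v, sign): u influences v positively (negatively) if
    raising u from 0 to 1 can raise (lower) f_v in some state."""
    arcs = set()
    for u, v in product(ATOMS, repeat=2):
        for vals in product((0, 1), repeat=len(ATOMS)):
            x = dict(zip(ATOMS, vals))
            delta = f(v, {**x, u: 1}) - f(v, {**x, u: 0})
            if delta > 0:
                arcs.add((u, v, '+'))
            if delta < 0:
                arcs.add((u, v, '-'))
    return arcs

adg = {('b', 'a', '+'), ('c', 'a', '-'), ('a', 'b', '+'), ('a', 'c', '-')}
assert influence_arcs() == adg  # G(f) coincides with adg(P) (Proposition 4.3)
```

The recovered arcs are exactly those of adg(P), as Proposition 4.3 predicts; note that the brute-force loop over all states is exponential in general, which is why the purely syntactic atom dependency graph is the preferred object to compute.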
The notion of delocalizing triple plays a central role in analyzing the structural proper-
ties of signed directed graphs that underlie both uni-rule Datalog¬ programs and AND-
NOT BNs. As established earlier, these two formalisms are uniquely specified through
their respective graphs: the atom dependency graph in the case of Datalog¬ programs,
and the influence graph for BNs. Within this unified graphical perspective, delocalizing
triples—introduced by Richard and Ruet (2013)—serve as critical structural motifs that
can disrupt cyclic behaviors and affect the existence or uniqueness of fixed points. The
following definition formalizes this concept, followed by a concrete example.
Definition 4.4 (Richard and Ruet (2013))
Given a signed directed graph G, a cycle C of G, and vertices u, v1 , v2 of G, (u, v1 , v2 ) is
said to be a delocalizing triple of C when 1) v1 , v2 are distinct vertices of C; 2) (uv1 , ⊕)
and (uv2 , ⊖) are arcs of G that are not in E(C). Such a delocalizing triple is called
internal if u ∈ V (C), and external otherwise.

Example 4.3
Consider the signed directed graph G taken from (Veliz-Cuba et al. 2012, Figure 4). It is also shown in Figure 5. Regarding cycle C1 = v3 −⊖→ v4 −⊖→ v3 of G, (v1, v3, v4) is an external delocalizing triple of C1 because (v1v3, ⊕) and (v1v4, ⊖) are arcs of G but not of C1, and v1 ∉ V(C1). Regarding cycle C2 = v1 −⊕→ v2 −⊖→ v4 −⊖→ v3 −⊕→ v5 −⊕→ v1 of G, (v1, v5, v4) is an internal delocalizing triple of C2 because (v1v5, ⊕) and (v1v4, ⊖) are arcs of G but not of C2, and v1 ∈ V(C2). Every remaining cycle of G has no delocalizing triple.

Fig. 5: Signed directed graph G of Example 4.3.
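Definition 4.4 can also be checked mechanically. The sketch below is ours, not from the paper; the arc and cycle representations are assumptions, and only the arcs of G that Example 4.3 spells out explicitly are included (the arcs involving v6 are omitted).

```python
def is_delocalizing_triple(graph, cycle_arcs, u, v1, v2):
    # Definition 4.4: v1, v2 are distinct vertices of the cycle, and
    # (u v1, +) and (u v2, -) are arcs of the graph outside the cycle.
    cycle_vertices = {x for (s, t, _) in cycle_arcs for x in (s, t)}
    pos, neg = (u, v1, '+'), (u, v2, '-')
    return (v1 != v2
            and v1 in cycle_vertices and v2 in cycle_vertices
            and pos in graph and pos not in cycle_arcs
            and neg in graph and neg not in cycle_arcs)

def is_internal(cycle_arcs, u):
    # internal iff the pivot vertex u itself lies on the cycle
    return u in {x for (s, t, _) in cycle_arcs for x in (s, t)}

# arcs of G explicitly mentioned in Example 4.3, with signs as '+'/'-'
G = {('v1', 'v2', '+'), ('v2', 'v4', '-'), ('v4', 'v3', '-'),
     ('v3', 'v4', '-'), ('v3', 'v5', '+'), ('v5', 'v1', '+'),
     ('v1', 'v3', '+'), ('v1', 'v4', '-'), ('v1', 'v5', '+')}

C1 = {('v3', 'v4', '-'), ('v4', 'v3', '-')}
C2 = {('v1', 'v2', '+'), ('v2', 'v4', '-'), ('v4', 'v3', '-'),
      ('v3', 'v5', '+'), ('v5', 'v1', '+')}

print(is_delocalizing_triple(G, C1, 'v1', 'v3', 'v4'),  # True (external)
      is_internal(C1, 'v1'))                            # False
print(is_delocalizing_triple(G, C2, 'v1', 'v5', 'v4'),  # True (internal)
      is_internal(C2, 'v1'))                            # True
```

Representing a cycle by its arc set makes both conditions of the definition direct set-membership tests.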

Cycles in the influence graph of an AND-NOT BN (or equivalently, the atom depen-
dency graph of a uni-rule Datalog¬ program) can critically impact the existence of fixed
points or stable models. Richard and Ruet (2013) showed that the presence of internal delocalizing triples within all odd cycles ensures the existence of fixed points. The following
theorem formalizes this result, guaranteeing the existence of a fixed point for any AND-
NOT BN whose odd cycles are all “internally delocalized.” As a direct consequence, we
obtain a sufficient condition for the existence of stable models in tight uni-rule Datalog¬
programs.
Theorem 4.9 (Theorem 3’ of Richard and Ruet (2013))
Let f be an AND-NOT BN. If every odd cycle of G(f ) has an internal delocalizing triple,
then f has at least one fixed point.

Corollary 4.3
Consider a uni-rule Datalog¬ program P . If P is tight and every odd cycle of adg(P ) has
an internal delocalizing triple, then P has at least one stable model.

Proof
Let f be the encoded BN of P . By Corollary 3.2, the supported models of P coincide
with the fixed points of f . Since G(f ) = adg(P ) by Proposition 4.3, every odd cycle
of G(f ) has an internal delocalizing triple. By Theorem 4.9, f has at least one fixed
point. Since P is tight, the supported models of P coincide with the stable models of P
by Theorem 4.3. Hence, P has at least one stable model.
We generalize the above result for stable models to regular models (see Theorem 4.10).
Theorem 4.10 (main result)
24 Trinh, Benhamou, Soliman and Fages

Consider a uni-rule Datalog¬ program P . If P is tight and every odd cycle of adg(P ) has
an internal delocalizing triple, then every regular model of P is two-valued.

Proof
Let f be the encoded BN of P . By Proposition 4.2, f is an AND-NOT BN. By Proposi-
tion 4.3, G(f ) = adg(P ). It follows that every odd cycle of G(f ) has an internal delocal-
izing triple. Since P is tight, by Lemma 4.1, the regular models of P coincide with the
≤s -minimal trap spaces of f .
Assume that m is a non-trivial ≤s-minimal trap space of f (i.e., m is not two-valued). We build the new BN f′ as follows: for every v ∈ varf with m(v) ≠ ⋆, fv′ = m(v); and for every v ∈ varf with m(v) = ⋆, fv′ = fv.
It is easy to derive that G(f ′ ) is a sub-graph of G(f ). Hence, an odd cycle in G(f ′ )
is also an odd cycle in G(f ). We show that every odd cycle of G(f ′ ) has an internal
delocalizing triple (*). Indeed, for every odd cycle C of G(f), we have two cases. Case 1: every internal delocalizing triple of C has its first vertex u such that m(u) ≠ ⋆. Then, since u ∈ V(C), C is broken in G(f′) because all the input arcs of u in G(f) are removed in G(f′). Case 2: there is an internal delocalizing triple (u, v1, v2) of C such that m(u) = ⋆. If m(v1) ≠ ⋆ or m(v2) ≠ ⋆, then C is broken in G(f′) because v1, v2 ∈ V(C). Otherwise, the arcs (uv1, ⊕) and (uv2, ⊖) of G(f) are retained in G(f′). In this case, if C still appears in G(f′), then (u, v1, v2) is still an internal delocalizing triple of C in G(f′). In all cases, (*) is preserved.
It follows that f′ has at least one fixed point (say mfix) by Theorem 4.9. Since a fixed point is a two-valued complete trap space, mfix(v) = m(v) for every v ∈ varf with m(v) ≠ ⋆. Obviously, mfix is a complete trap space of f. We have mfix <s m, which is a contradiction. Hence, all the ≤s-minimal trap spaces of f are two-valued. This implies that all the regular models of P are two-valued.
Corollary 4.3 and Theorem 4.10 hold only for tight uni-rule Datalog¬ programs. We make two respective conjectures for (general) uni-rule Datalog¬ programs.
Conjecture 4.1
Let P be a uni-rule Datalog¬ program. If every odd cycle of adg(P ) has an internal
delocalizing triple, then P has at least one stable model.

Conjecture 4.2
Let P be a uni-rule Datalog¬ program. If every odd cycle of adg(P ) has an internal
delocalizing triple, then all the regular models of P are stable models.

Remark 4.3
If Conjecture 4.1 is true, then Conjecture 4.2 follows by applying the arguments of the proof of Theorem 4.10.

4.4 Model Uniqueness and Even Cycles


In this subsection, we explore how the absence of even cycles in the atom dependency
graph of a uni-rule Datalog¬ program influences the uniqueness and structure of its
two-valued or three-valued stable models.

General Datalog¬ Programs. While uni-rule Datalog¬ programs admit a clean correspon-
dence between their atom dependency graphs and influence graphs of their associated
BNs, this connection does not carry over to general Datalog¬ programs. Indeed, the
characterization of complete trap spaces in BNs fundamentally relies on the three-valued
logic, which goes beyond what influence graphs can represent. As a result, we must aban-
don influence graphs and instead develop a new type of graphical representation—one
that captures the syntactic dependencies and interactions of Boolean functions without
relying on their dynamical interpretation.
Definition 4.5
Given a BN f , we define its syntactic influence graph (denoted by SG(f )) as follows:
• (vj vi , ⊕) is an arc of SG(f ) iff vj appears in fvi
• (vj vi , ⊖) is an arc of SG(f ) iff ¬vj appears in fvi
It is then natural that the atom dependency graph of a Datalog¬ program coincides with the syntactic influence graph of its encoded BN.
Corollary 4.4
Given a Datalog¬ program P , let f be its encoded BN. Then adg(P ) = SG(f ).

Proof
This immediately follows from the construction of the encoded BN and the definition of
the syntactic influence graph.
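Definition 4.5 is purely syntactic, so SG(f) can be read off any symbolic representation of the Boolean functions. A minimal sketch (ours, not from the paper) assuming each fv is given in DNF as a list of clauses, where each clause is a list of (variable, sign) literals:

```python
def syntactic_influence_graph(f):
    # (u, v, '+') iff u occurs positively in f_v; (u, v, '-') iff negatively
    arcs = set()
    for v, dnf in f.items():
        for clause in dnf:
            for (u, sign) in clause:
                arcs.add((u, v, sign))
    return arcs

# encoded BN of the hypothetical program {p <- q, ~r;  p <- ~q}:
# f_p = (q AND NOT r) OR NOT q; f_q and f_r are the constant 0
f = {'p': [[('q', '+'), ('r', '-')], [('q', '-')]],
     'q': [], 'r': []}

print(syntactic_influence_graph(f))  # the three arcs into p
```

Note that q contributes both a positive and a negative arc to p, which is precisely the situation an (ordinary) influence graph cannot always capture.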
The following definition introduces several notations to capture input relationships in
a signed directed graph, which we will use in subsequent results, including a key insight
that establishes conditions under which a BN has a unique complete trap space.
Definition 4.6
Consider a signed directed graph G. Let IN+ (G, v) denote the set of input vertices that
have positive arcs to v in G. Let IN− (G, v) denote the set of input vertices that have
negative arcs to v in G. We then define IN(G, v) = IN+ (G, v) ∪ IN− (G, v).

Lemma 4.5
Consider a BN f. Assume that SG(f) has minimum in-degree at least one. If SG(f) has no even cycle, then f has a unique complete trap space.

Proof
Since SG(f) has minimum in-degree at least one, f has no variable v such that either fv = 0 or fv = 1. Then the sub-space ε where all variables are free is simply a complete trap space of f. Assume that f has a complete trap space m ≠ ε. It follows that there is a variable v0 ∈ varf such that m(v0) ≠ ⋆. We show that SG(f) has an even cycle that can be constructed from v0 (*).
Let Π0 be the subset of IN(SG(f), v0) such that for every v ∈ Π0, both v ∈ IN+(SG(f), v0) and v ∈ IN−(SG(f), v0) hold. If there is a vertex v ∈ Π0 such that m(v) ≠ ⋆, then we choose v1 as v. Otherwise, we consider the set V0 = IN(SG(f), v0) \ Π0. The set V0 cannot be empty because if it were, m(v) = ⋆ for every v ∈ IN(SG(f), v0), leading to m(fv0) = ⋆ ≠ m(v0), so m could not be a complete trap space of f. Since m(v0) ≠ ⋆, we
have two cases as follows. Case 1: m(v0) = 0. Then m(fv0) = m(v0) = 0 because m is a complete trap space of f. If m(v) ≠ 0 for every v ∈ IN+(SG(f), v0) ∩ V0 and m(v) ≠ 1 for every v ∈ IN−(SG(f), v0) ∩ V0, then m(fv0) is either 1 or ⋆ due to the three-valued logic semantics. This means that m(fv0) ≠ 0 always holds in this case, which is a contradiction. Hence, there is a variable v ∈ V0 such that if v ∈ IN+(SG(f), v0) then m(v) = 0, and if v ∈ IN−(SG(f), v0) then m(v) = 1. We choose v1 as v. Case 2: m(v0) = 1. Then m(fv0) = m(v0) = 1 because m is a complete trap space of f. If m(v) ≠ 1 for every v ∈ IN+(SG(f), v0) ∩ V0 and m(v) ≠ 0 for every v ∈ IN−(SG(f), v0) ∩ V0, then m(fv0) is either 0 or ⋆ due to the three-valued logic semantics. This means that m(fv0) ≠ 1 always holds in this case, which is a contradiction. Hence, there is a variable v ∈ V0 such that if v ∈ IN+(SG(f), v0) then m(v) = 1, and if v ∈ IN−(SG(f), v0) then m(v) = 0. We choose v1 as v. Following the two above cases, m(v1) takes the value of m(v0) if v1 ∈ IN+(SG(f), v0) ∩ V0, and m(v1) takes the value of ¬m(v0) if v1 ∈ IN−(SG(f), v0) ∩ V0 (**). Note that m(v1) ≠ ⋆ always holds.
Repeating the above construction, we obtain an infinite descending chain v0 ←(s0) v1 ←(s1) v2 ←(s2) ⋯ where, for every i ≥ 0, vi ∈ varf, m(vi) ≠ ⋆, and si is either both ⊕ and ⊖, or exactly one of ⊕ and ⊖. Since varf is finite, there are two integers j ≥ 0 and k ≥ 1 such that vj = vj+k. Let V = {vj, vj+1, …, vj+k}. Since vj = vj+k, SG(f)[V] contains at least one cycle, and every cycle of SG(f)[V] is constituted by all vertices in V. If there is i ∈ {j, j+1, …, j+k−1} such that si is both ⊕ and ⊖, then SG(f)[V] contains both even and odd cycles. If si is exactly one of ⊕ and ⊖ for every i ∈ {j, j+1, …, j+k−1}, then vj ←(sj) vj+1 ←(sj+1) vj+2 ⋯ ←(sj+k−1) vj+k is a cycle of SG(f). By (**), the number of ⊖ signs in this cycle must be even, thus this cycle is even.
We have that (*) contradicts the non-existence of even cycles in SG(f). Hence, ε is the unique complete trap space of f.
Lemma 4.5 requires a condition on the minimum in-degree. To relax this condition, we introduce a process that iteratively eliminates syntactic constants from the BN. This begins with the notion of one-step syntactic percolation, defined as follows.
Definition 4.7
Consider a BN f. A variable v ∈ varf is called a syntactic constant if either fv = 0 or fv = 1. Let SynC(f) denote the set of syntactic constant variables of f. We define the one-step syntactic percolation of f (denoted by SP(f)) as follows: varSP(f) = varf \ SynC(f), and for every v ∈ varSP(f), SP(f)v = fv′, where fv′ is the Boolean function obtained by substituting the syntactic constant variables of f appearing in fv with their Boolean functions, w.r.t. the three-valued logic.

Definition 4.8
Consider a BN f. The syntactic percolation of f (denoted by SP^ω(f)) is obtained by applying the one-step syntactic percolation operator starting from f until it reaches a BN f′ such that f′ = ∅ or SP(f′) = f′; this is always possible because the number of variables is finite.
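Definitions 4.7 and 4.8 describe an iterative simplification that is easy to implement. The sketch below is ours, not from the paper; it assumes a DNF convention in which each fv is a list of clauses of (variable, sign) literals, no clause means the constant 0, and an empty clause means the constant 1.

```python
def substitute(dnf, var, value):
    # partially evaluate a DNF after fixing var := value (0 or 1)
    out = []
    for clause in dnf:
        new, satisfied = [], True
        for (u, sign) in clause:
            if u != var:
                new.append((u, sign))
            elif (value == 1) != (sign == '+'):
                satisfied = False  # literal is false: drop the clause
                break
        if satisfied:
            out.append(new)
    return out

def percolation(f):
    # SP^omega: repeatedly remove syntactic constants and substitute them
    f = dict(f)
    while True:
        consts = {v: (1 if [] in d else 0)
                  for v, d in f.items() if d == [] or [] in d}
        if not consts:
            return f
        f = {v: d for v, d in f.items() if v not in consts}
        for v in list(f):
            for u, val in consts.items():
                f[v] = substitute(f[v], u, val)

# hypothetical program {b <- a;  b <- ~c;  c <- b} with no rule for a,
# so f_a is the constant 0 and percolation removes a entirely
f = {'a': [], 'b': [[('a', '+')], [('c', '-')]], 'c': [[('b', '+')]]}
print(percolation(f))  # a removed; the clause for "b <- a" disappears
```

Substitution can itself create new constants (an empty DNF or an empty clause), which is why the loop re-scans until a fixpoint is reached, mirroring Definition 4.8.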

Proposition 4.4

Consider a BN f. One-step syntactic percolation may reduce the set of variables and the set of arcs, i.e., V(SG(SP(f))) ⊆ V(SG(f)) and E(SG(SP(f))) ⊆ E(SG(f)). Consequently, V(SG(SP^ω(f))) ⊆ V(SG(f)) and E(SG(SP^ω(f))) ⊆ E(SG(f)).

Proof
Note that varSP(f) = varf \ SynC(f) ⊆ varf by definition. Hence, V(SG(SP(f))) ⊆ V(SG(f)) (*). Consider a variable v ∈ varSP(f). Let u be a literal that appears in SP(f)v. Then u must appear in fv because SP(f)v is obtained by syntactically simplifying fv. This implies that if (uv, ⊕) is an arc in SG(SP(f)), it is an arc in SG(f). Similarly, if (uv, ⊖) is an arc in SG(SP(f)), it is an arc in SG(f). Now, we can conclude that E(SG(SP(f))) ⊆ E(SG(f)) (**). By applying (*) and (**) sequentially, we obtain that V(SG(SP^ω(f))) ⊆ V(SG(f)) and E(SG(SP^ω(f))) ⊆ E(SG(f)).

Proposition 4.5
Given a BN f, the set of complete trap spaces of SP(f) corresponds one-to-one to the set of complete trap spaces of f. Consequently, the set of complete trap spaces of SP^ω(f) corresponds one-to-one to the set of complete trap spaces of f.

Proof
Assume that m is a complete trap space of f. For every v ∈ varf such that either fv = 0 or fv = 1, m(v) = m(fv) = fv. For every v ∈ varf such that neither fv = 0 nor fv = 1, m(SP(f)v) = m(fv) = m(v) because SP(f)v is obtained from fv by substituting the syntactic constant variables of f in fv with their Boolean functions w.r.t. the three-valued logic. Hence, the projection of m to varSP(f) is a complete trap space of SP(f).
Assume that m′ is a complete trap space of SP(f ). Let m be a sub-space of f such
that m(v) = fv for every v ∈ SynC(f ) and m(v) = m′ (v) for every v ∈ varSP(f ) . For
every v ∈ SynC(f ), m(v) = fv = m(fv ) because fv ∈ B⋆ . For every v ∈ varSP(f ) ,
m(v) = m′ (v) = m(SP(f )v ) = m(fv ) by the definition of SP(f )v . This implies that m
is a complete trap space of f .
Now we can conclude that the set of complete trap spaces of SP(f) corresponds one-to-one to the set of complete trap spaces of f. By applying this property sequentially, we obtain that the set of complete trap spaces of SP^ω(f) corresponds one-to-one to the set of complete trap spaces of f.
Now, we have enough ingredients to prove the following important result.
Theorem 4.11
Consider a BN f . If SG(f ) has no even cycle, then f has a unique complete trap space.

Proof
Let SP^ω(f) be the syntactic percolation of f. By construction, there are two cases. Case 1: SP^ω(f) = ∅. In this case, it is easy to see that f has a unique complete trap space, specified by the syntactic constant values of variables through the construction of SP^ω(f). Case 2: SP^ω(f) ≠ ∅ and SG(SP^ω(f)) has minimum in-degree at least one. In this case, by Proposition 4.4, V(SG(SP^ω(f))) ⊆ V(SG(f)) and E(SG(SP^ω(f))) ⊆ E(SG(f)). Since SG(f) has no even cycle, SG(SP^ω(f)) has no even cycle. By Lemma 4.5, SP^ω(f) has a unique complete trap space. By Proposition 4.5, the set of complete trap spaces of

f corresponds one-to-one to the set of complete trap spaces of SP^ω(f). Hence, f has a unique complete trap space.
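Theorem 4.11 can be checked by brute force on small BNs. The sketch below is ours, not from the paper; it assumes the same DNF convention as before and Kleene three-valued evaluation, enumerates complete trap spaces (three-valued m with m(fv) = m(v) for every v), and contrasts a BN whose SG has only an odd cycle with one whose SG has an even cycle.

```python
from itertools import product

def eval3(dnf, m):
    # Kleene three-valued evaluation of a DNF (list of clauses of
    # (variable, sign) literals) under m: variable -> 0, 1 or '*'
    def lit(u, sign):
        x = m[u]
        return '*' if x == '*' else (x if sign == '+' else 1 - x)
    def conj(clause):
        vals = [lit(u, s) for (u, s) in clause]
        return 0 if 0 in vals else ('*' if '*' in vals else 1)
    vals = [conj(c) for c in dnf]
    return 1 if 1 in vals else ('*' if '*' in vals else 0)

def complete_trap_spaces(f):
    vs = sorted(f)
    out = []
    for values in product((0, 1, '*'), repeat=len(vs)):
        m = dict(zip(vs, values))
        if all(eval3(f[v], m) == m[v] for v in vs):
            out.append(m)
    return out

odd = {'p': [[('q', '-')]], 'q': [[('p', '+')]]}   # cycle with one '-': odd
even = {'p': [[('q', '-')]], 'q': [[('p', '-')]]}  # cycle with two '-': even

print(len(complete_trap_spaces(odd)))   # 1: only the all-free sub-space
print(len(complete_trap_spaces(even)))  # 3
```

The odd-cycle BN has no even cycle in its SG, so Theorem 4.11 predicts (and the enumeration confirms) a unique complete trap space; adding an even cycle immediately yields three.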
We now show that the even-cycle freeness in the atom dependency graph of a Datalog¬
program ensures not only the existence but also the uniqueness of several fundamental
semantic objects. Specifically, if adg(P ) has no even cycle, then P has a unique supported
partial model, a unique stable partial model, and a unique regular model. Furthermore,
this implies that P has at most one stable model. These results follow from the corre-
spondence between atom dependency graphs and syntactic influence graphs.
Theorem 4.12 (main result)
Let P be a Datalog¬ program. If adg(P ) has no even cycle, then P has a unique supported
partial model.

Proof
Let f be the encoded BN of P . By Corollary 4.4, SG(f ) = adg(P ). Therefore, SG(f ) has
no even cycle. By Theorem 4.11, f has a unique complete trap space. By Theorem 3.1,
P has a unique supported partial model.

Corollary 4.5
Let P be a Datalog¬ program. If adg(P ) has no even cycle, then P has a unique stable
partial model.

Proof
By Theorem 4.12, P has a unique supported partial model. A stable partial model is also
a supported partial model. Since P always has at least one stable partial model, it has a
unique stable partial model.

Corollary 4.6
Let P be a Datalog¬ program. If adg(P ) has no even cycle, then P has a unique regular
model.

Proof
This immediately follows from Corollary 4.5, the existence of regular models, and the fact that a regular model is a stable partial model.

Corollary 4.7
Let P be a Datalog¬ program. If adg(P ) has no even cycle, then P has at most one stable
model.

Proof
This immediately follows from Corollary 4.5 and the fact that a stable model is a two-valued stable partial model. Note that there exists a Datalog¬ program whose atom dependency graph has no even cycle and which has no stable model: for example, P = {p ← ∼p}. The atom dependency graph of P has no even cycle, and P has no stable model.
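The program P = {p ← ∼p} above is small enough to check exhaustively. A sketch (ours) confirming that its encoded BN fp = ¬p has the unique complete trap space p = ⋆, i.e., a unique supported (hence stable) partial model, while no two-valued interpretation is stable:

```python
def not3(x):
    # Kleene three-valued negation
    return '*' if x == '*' else 1 - x

# complete trap spaces of f_p = NOT p: the unique one sets p to star
complete = [m for m in (0, 1, '*') if not3(m) == m]
print(complete)  # ['*']

# stable models of {p <- ~p} via the Gelfond-Lifschitz reduct
stable = []
for I in (set(), {'p'}):
    rule_survives = 'p' not in I        # the rule survives iff ~p holds in I
    least = {'p'} if rule_survives else set()
    if least == I:
        stable.append(I)
print(stable)  # []: no stable model
```

Both candidate two-valued interpretations fail the fixpoint test, while the all-free sub-space is trivially closed, matching Corollaries 4.5 and 4.7.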

Uni-rule Datalog¬ Programs. Similar to the case of odd cycles, by considering uni-rule
Datalog¬ programs, we obtain two stronger results as follows.
Theorem 4.13 (Theorem 2’ of Richard and Ruet (2013))
Consider an AND-NOT BN f . If every even cycle of G(f ) has a delocalizing triple, then
f has at most one fixed point.

Theorem 4.14
Let P be a uni-rule Datalog¬ program. If every even cycle of adg(P ) has a delocalizing
triple, then P has at most one stable model.

Proof
Let f be the encoded BN of P . By Proposition 4.2, f is an AND-NOT BN. By Proposi-
tion 4.3, G(f ) = adg(P ). It follows that every even cycle of G(f ) has a delocalizing triple.
By Theorem 4.13, f has at most one fixed point. By Corollary 3.2, P has at most one
stable model.

Theorem 4.15 (Lemma 1 of Trinh et al. (2025b))


Consider an AND-NOT BN f . If every even cycle of G(f ) has a delocalizing triple, then
astg(f ) has a unique attractor.

Theorem 4.16
Let P be a tight uni-rule Datalog¬ program. If every even cycle of adg(P ) has a delocal-
izing triple, then P has a unique regular model.

Proof
Let f be the encoded BN of P . By Proposition 4.2, f is an AND-NOT BN. By Proposi-
tion 4.3, G(f ) = adg(P ). It follows that every even cycle of G(f ) has a delocalizing triple.
Since P is tight, by Lemma 4.1, the regular models of P coincide with the ≤s -minimal
trap spaces of f . By Theorem 4.15, astg(f ) has a unique attractor. Hence, f has a unique
≤s-minimal trap space, and therefore P has a unique regular model.

4.5 Number of Models and Feedback Vertex Sets


Understanding the number of semantic models a Datalog¬ program can admit is crucial
for analyzing its behavior, especially in applications involving reasoning, verification,
or program synthesis (Dimopoulos and Torres 1996; Cholewinski and Truszczynski 1999;
Linke 2001; Lin and Zhao 2004; Costantini 2006; Fandinno and Hecher 2021). In this
subsection, we investigate structural upper bounds on the number of supported partial
models, stable partial models, stable models, and regular models of a Datalog¬ program
based on the size of its feedback vertex set. These bounds provide a direct link between
the combinatorial structure of the atom dependency graph and the semantic complexity
of the program. In particular, they offer practical guidance for estimating or constraining
the space of possible models, which is valuable for both theoretical studies and tool-
supported analysis of Datalog¬ programs.

General Datalog¬ Programs. To the best of our knowledge, there is no existing work connecting the stable, stable partial, and regular models of a Datalog¬ program to feedback vertex sets of its atom dependency graph. We first relate the number of stable partial models to even feedback vertex sets (see Theorem 4.18). We state and prove the upper bound of 3^|U| for the number of complete trap spaces in a BN (Theorem 4.17), where U is an even feedback vertex set of the syntactic influence graph of this BN, and then apply this bound to the number of stable partial models of a Datalog¬ program. The underlying intuition for the base of three is that, in a stable partial model, the value of an atom can be 1, 0, or ⋆. The underlying intuition for the exponent |U| is that U intersects every even cycle, and the BN has a unique complete trap space in the absence of even cycles (see Theorem 4.11).
Theorem 4.17
Consider a BN f. Let U be a subset of varf that intersects every even cycle of SG(f). Then the number of complete trap spaces of f is at most 3^|U|.

Proof
Let I : U → B⋆ be an assignment. We build the new BN f^I as follows. For every v ∈ U, f^I_v = I(v) if I(v) ≠ ⋆, and f^I_v = ¬v if I(v) = ⋆. For every v ∈ varf \ U, f^I_v = fv. Since U intersects every even cycle of SG(f) and only negative self arcs can be introduced, SG(f^I) has no even cycle. By Theorem 4.11, f^I has a unique complete trap space. By the construction, the setting f^I_v = I(v) (resp. f^I_v = ¬v) ensures that for any complete trap space of f^I, the value of v is always I(v) (resp. ⋆). Hence, this unique complete trap space agrees with the assignment I.
It is easy to see that a complete trap space of f that agrees with the assignment I is a complete trap space of f^I. Since f^I has a unique complete trap space, we have an injection from the set of complete trap spaces of f to the set of possible assignments I. There are in total 3^|U| possible assignments I. Hence, we can conclude that f has at most 3^|U| complete trap spaces.
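To see the 3^|U| bound in action, consider a small BN (ours, not from the paper) with one even cycle between p and q plus a variable r fed by p. Here U = {p} meets the only even cycle, so Theorem 4.17 bounds the number of complete trap spaces by 3^1 = 3.

```python
from itertools import product

def not3(x):
    # Kleene three-valued negation
    return '*' if x == '*' else 1 - x

# f_p = NOT q, f_q = NOT p (an even cycle), f_r = p
f = {'p': lambda m: not3(m['q']),
     'q': lambda m: not3(m['p']),
     'r': lambda m: m['p']}

spaces = [m for m in (dict(zip('pqr', v))
                      for v in product((0, 1, '*'), repeat=3))
          if all(f[x](m) == m[x] for x in 'pqr')]
print(len(spaces))  # 3 = 3^|{p}|: the bound of Theorem 4.17 is reached
```

The three complete trap spaces are (p, q, r) = (1, 0, 1), (0, 1, 0), and (⋆, ⋆, ⋆), far fewer than the naive 3^3 = 27 candidates.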

Theorem 4.18
Consider a Datalog¬ program P. Let U be a subset of HBP that intersects every even cycle of adg(P). Then the number of supported partial models of P is at most 3^|U|.

Proof
Let f be the encoded BN of P. By Theorem 3.1, the supported partial models of P coincide with the complete trap spaces of f. By Corollary 4.4, SG(f) = adg(P), thus U is also a subset of varf that intersects every even cycle of SG(f). By Theorem 4.17, f has at most 3^|U| complete trap spaces. This implies that P has at most 3^|U| supported partial models.
The upper bound established in Theorem 4.18 not only constrains the number of
supported partial models, but also extends naturally to more restrictive semantic notions,
as shown in the following corollary.
Corollary 4.8

Consider a Datalog¬ program P. Let U be a subset of HBP that intersects every even cycle of adg(P). Then the number of stable partial models of P is at most 3^|U|. In addition, this upper bound also holds for the number of regular models and the number of stable models.

Proof
This immediately follows from Theorem 4.18 and the fact that a stable partial model
is also a supported partial model, and a regular or stable model is also a stable partial
model.

Remark 4.4
Let us consider Example 2.1 again. The graph adg(P) is given in Figure 1 (a). It is easy to verify that U = {p} or U = {q} intersects every even cycle of adg(P). Hence, Corollary 4.8 gives the upper bound 3^1 = 3. Indeed, the Datalog¬ program P has three stable (supported) partial models. Hence, the upper bound given by Corollary 4.8 can be reached.
We then provide a simple upper bound for the number of regular models based on the
connection to the dynamical behavior of a BN, in which the base decreases to 2 but the
exponent increases to |HBP |.
Proposition 4.6
Let P be a Datalog¬ program. Then the number of regular models of P is at most 2^|HBP|.

Proof
Let lfp(P) be the least fixpoint of P and f′ be the encoded BN of lfp(P). By Lemma 4.2, the regular models of P coincide with the ≤s-minimal trap spaces of f′. We have HBP = HBlfp(P) = varf′ by definition. Since each attractor of astg(f′) contains at least one state and f′ has 2^|varf′| states, astg(f′) has at most 2^|varf′| attractors. The number of ≤s-minimal trap spaces of f′ is a lower bound for the number of attractors of astg(f′) (Klarner et al. 2015). Hence, P has at most 2^|HBP| regular models.
We get a better upper bound for the number of regular models (see Theorem 4.20),
but restricted to tight Datalog¬ programs.
Theorem 4.19 (Corollary 2 of Richard (2009))
Given a BN f, let U be a subset of varf that intersects every even cycle of G(f). Then the number of attractors of astg(f) is at most 2^|U|.

Theorem 4.20
Let P be a Datalog¬ program. Let U be a subset of HBP that intersects every even cycle of adg(P). If P is tight, then the number of regular models of P is at most 2^|U|.

Proof
Let f be the encoded BN of P. By Proposition 3.1, G(f) is a sub-graph of adg(P), thus U is a subset of varf that intersects every even cycle of G(f). By Theorem 4.19, astg(f) has at most 2^|U| attractors. The number of ≤s-minimal trap spaces of f is a lower bound for the number of attractors of astg(f) (Klarner et al. 2015). Hence, f has at most 2^|U| ≤s-minimal trap spaces. Since P is tight, the regular models of P coincide with the ≤s-minimal trap spaces of f by Lemma 4.1. This implies that P has at most 2^|U| regular models.

Remark 4.5
We here analyze the three above upper bounds for the number of regular models of a Datalog¬ program. For tight Datalog¬ programs, the bound 2^|U| is the best. However, for non-tight Datalog¬ programs, it is not applicable. The set U is always smaller than or equal to (and in most cases much smaller than) the set HBP, yet 3^|U| is not always smaller than 2^|HBP|. Hence the bound 2^|HBP| still has merit for non-tight Datalog¬ programs. Let us consider Example 2.1 again. We have HBP = {p, q, r}. The graph adg(P) is given in Figure 1 (a). It is easy to verify that P is tight and that U = {p} or U = {q} intersects every even cycle of adg(P). Hence, Corollary 4.8 gives the upper bound 3^1 = 3, Proposition 4.6 gives the upper bound 2^3 = 8, whereas Theorem 4.20 gives the upper bound 2^1 = 2. Indeed, the Datalog¬ program P has two regular models. Hence, the upper bound given by Theorem 4.20 can be reached w.r.t. tight Datalog¬ programs.
Since a stable model is a regular model, we can derive from Theorem 4.20 that 2^|U| is also an upper bound for the number of stable models of a tight Datalog¬ program. However, we prove that this bound also holds for non-tight Datalog¬ programs (see Theorem 4.22).
Theorem 4.21 (Corollary 10 of Aracena (2008))
Given a BN f, let U be a subset of varf that intersects every even cycle of G(f). Then the number of fixed points of f is at most 2^|U|.

Theorem 4.22
Consider a Datalog¬ program P. Let U be a subset of HBP that intersects every even cycle of adg(P). Then the number of stable models of P is at most 2^|U|.

Proof
Let f be the encoded BN of P. By Proposition 3.1, G(f) is a sub-graph of adg(P), thus U is a subset of varf that intersects every even cycle of G(f). By Theorem 4.21, f has at most 2^|U| fixed points. By Corollary 3.2 and the fact that a stable model is a supported model, P has at most 2^|U| stable models.

Remark 4.6
We here recall some existing upper bounds for the number of stable models of Datalog¬ programs. Given a Datalog¬ program P, Cholewinski and Truszczynski (1999) proved the upper bound of 3^(n/3) where n is the number of rules in gr(P). Lin and Zhao (2004) later proved the upper bound of 2^k where k is the number of even cycles in adg(P). Our new result (i.e., Theorem 4.22) provides the upper bound of 2^|U| where U is an even feedback vertex set of adg(P). It is easy to see that we can always find an even feedback vertex set U such that |U| ≤ k, showing that our result is more general than that of Lin and Zhao (2004). It is however hard to directly compare 2^|U| and 3^(n/3). Consider the (ground) Datalog¬ program {a ← ∼b; a ← a; b ← ∼a}. Its atom dependency graph is given in Figure 6. It is easy to see that this graph has two even cycles: a −⊕→ a and a −⊖→ b −⊖→ a. Hence, the Datalog¬ program is non-tight, and U = {a} intersects every even cycle of the graph. The result of Cholewinski and Truszczynski (1999) gives the upper bound 3^(3/3) = 3 for the number of stable models. The result of Lin and Zhao (2004) gives the upper bound 2^2 = 4. Our new result gives the upper bound 2^1 = 2. Furthermore, we can see that the upper bound 2^|U| can be reached.

Fig. 6: Atom dependency graph of the Datalog¬ program of Remark 4.6.
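The counts in Remark 4.6 can be reproduced by brute force. The sketch below (ours, not from the paper) enumerates the stable models of {a ← ∼b; a ← a; b ← ∼a} via the Gelfond-Lifschitz reduct.

```python
from itertools import chain, combinations

# rules as (head, positive body, negative body)
rules = [('a', [], ['b']), ('a', ['a'], []), ('b', [], ['a'])]
atoms = ['a', 'b']

def least_model(pos_rules):
    # least model of a negation-free program by naive forward chaining
    I, changed = set(), True
    while changed:
        changed = False
        for head, body in pos_rules:
            if set(body) <= I and head not in I:
                I.add(head)
                changed = True
    return I

def stable_models(rules, atoms):
    models = []
    candidates = chain.from_iterable(combinations(atoms, r)
                                     for r in range(len(atoms) + 1))
    for cand in map(set, candidates):
        # Gelfond-Lifschitz reduct w.r.t. cand: drop rules whose negative
        # body is falsified, then strip the remaining negative literals
        reduct = [(h, b) for (h, b, neg) in rules if not (set(neg) & cand)]
        if least_model(reduct) == cand:
            models.append(cand)
    return models

print(stable_models(rules, atoms))  # [{'a'}, {'b'}]
```

The two stable models {a} and {b} match the bound 2^|U| = 2 of Theorem 4.22, which is therefore reached on this program.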

Inspired by Theorem 4.22, we make the following conjecture on the number of regular
models in a (tight or non-tight) Datalog¬ program.
Conjecture 4.3
Consider a Datalog¬ program P. Let U be a subset of HBP that intersects every even cycle of adg(P). Then the number of regular models of P is at most 2^|U|.

Uni-rule Datalog¬ Programs. By considering uni-rule Datalog¬ programs, we obtain two


tighter upper bounds for the numbers of stable and regular models.
Theorem 4.23 (Theorem 3.5 of Veliz-Cuba et al. (2012))
Given an AND-NOT BN f, let U be a subset of varf that intersects every delocalizing-triple-free even cycle of G(f). Then the number of fixed points of f is at most 2^|U|.

Theorem 4.24
Let P be a uni-rule Datalog¬ program. Assume that U is a subset of HBP that intersects every delocalizing-triple-free even cycle of adg(P). Then P has at most 2^|U| stable models.

Proof
Let f be the encoded BN of P. By Proposition 4.2, f is an AND-NOT BN. By Proposition 4.3, G(f) = adg(P), thus U is a subset of varf that intersects every delocalizing-triple-free even cycle of G(f). By Theorem 4.23, f has at most 2^|U| fixed points. By Corollary 3.2 and the fact that a stable model is a supported model, P has at most 2^|U| stable models.

Theorem 4.25 (Theorem 3 of Trinh et al. (2025b))


Given an AND-NOT BN f, let U be a subset of varf that intersects every delocalizing-triple-free even cycle of G(f). Then the number of attractors of astg(f) is at most 2^|U|.

Theorem 4.26
Let P be a uni-rule Datalog¬ program. Assume that U is a subset of HBP that intersects every delocalizing-triple-free even cycle of adg(P). If P is tight, then P has at most 2^|U| regular models.

Proof
Let f be the encoded BN of P. By Proposition 4.2, f is an AND-NOT BN. By Proposition 4.3, G(f) = adg(P), thus U is a subset of varf that intersects every delocalizing-triple-free even cycle of G(f). By Theorem 4.25, astg(f) has at most 2^|U| attractors. The number of ≤s-minimal trap spaces of f is a lower bound for the number of attractors of astg(f) (Klarner et al. 2015). Hence, f has at most 2^|U| ≤s-minimal trap spaces. Since P is tight, the regular models of P coincide with the ≤s-minimal trap spaces of f by Lemma 4.1. This implies that P has at most 2^|U| regular models.

Example 4.4
Consider the Datalog¬ program P = {v1 ← ∼v2; v2 ← v1; v3 ← v1, ∼v4; v4 ← ∼v1, ∼v3}. We use ";" to separate program rules. The atom dependency graph adg(P) is shown in Figure 7. It is easy to verify that P is uni-rule and tight. The graph adg(P) has only one even cycle C = v3 −⊖→ v4 −⊖→ v3. Then Theorem 4.22 (resp. Theorem 4.20) gives the upper bound 2^1 = 2 for the number of stable models (resp. regular models) of P. However, (v1, v3, v4) is a delocalizing triple of C. Then Theorem 4.24 (resp. Theorem 4.26) gives the upper bound 2^0 = 1 for the number of stable models (resp. regular models) of P. Indeed, P has only one regular model {v1 = ⋆, v2 = ⋆, v3 = ⋆, v4 = ⋆} and no stable model.

Fig. 7: Atom dependency graph of the uni-rule Datalog¬ program P of Example 4.4.

Inspired by Theorem 4.26, we make the following conjecture on the number of regular
models in (tight or non-tight) uni-rule Datalog¬ programs.
Conjecture 4.4
Let P be a uni-rule Datalog¬ program. Assume that U is a subset of HBP that intersects every delocalizing-triple-free even cycle of adg(P). Then P has at most 2^|U| regular models.

5 Trap Spaces for Datalog¬ Programs

In this section, we introduce the notions of stable trap space and supported trap space for
Datalog¬ programs, borrowed from the notion of trap spaces in BNs. These constructs
offer a new perspective for analyzing the model-theoretic and dynamical behavior of
Datalog¬ programs. We develop their basic properties, and establish formal relationships
with classical semantics such as stable (supported) partial models, regular models, stable
(supported) models, and stable (supported) classes. This unified view lays the foundation

for leveraging trap space techniques in analysis and reasoning tasks involving Datalog¬
programs.

5.1 Definitions
We begin by formally defining the central notions of stable and supported trap sets,
which characterize non-empty sets of two-valued interpretations that are closed under
the program’s update operators.
Definition 5.1
A non-empty set S of two-valued interpretations of a Datalog¬ program P is called a
stable trap set (resp. supported trap set ) of P if {FP (I)|I ∈ S} ⊆ S (resp. {TP (I)|I ∈
S} ⊆ S).
Note that a stable (resp. supported) class is a stable (resp. supported) trap set, but the
converse may not hold. Given the Datalog¬ program P of Example 2.1, {{p, r}, {p}} is
a stable (resp. supported) trap set of P , but it is not a stable (resp. supported) class of
P.
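Both operators and the trap-set test are straightforward to implement. The sketch below uses a small hypothetical program over the atoms {p, q, r} (for illustration only; it is not claimed to be the program of Example 2.1): p ← ∼q ; q ← ∼p ; r ← q.

```python
# Hypothetical program (illustrative only): p <- not q ; q <- not p ; r <- q.
rules = {
    "p": [(set(), {"q"})],
    "q": [(set(), {"p"})],
    "r": [({"q"}, set())],
}

def T_P(interp):
    """Supported (immediate consequence) operator T_P."""
    return frozenset(h for h, bodies in rules.items()
                     if any(pos <= interp and not (neg & interp) for pos, neg in bodies))

def F_P(interp):
    """Stable operator F_P: least model of the Gelfond-Lifschitz reduct w.r.t. interp."""
    reduct = {h: [pos for pos, neg in bodies if not (neg & interp)]
              for h, bodies in rules.items()}
    model = frozenset()
    while True:
        new = frozenset(h for h, bs in reduct.items() if any(b <= model for b in bs))
        if new == model:
            return model
        model = new

S = {frozenset({"p"}), frozenset({"p", "r"})}
is_stable_trap_set = all(F_P(i) in S for i in S)        # closure under F_P
is_supported_trap_set = all(T_P(i) in S for i in S)     # closure under T_P
is_stable_class = {F_P(i) for i in S} == S              # a class requires image == S
print(is_stable_trap_set, is_supported_trap_set, is_stable_class)  # True True False
```

For this program, {{p}, {p, r}} is closed under both operators (a trap set), but its image under F_P is {{p}} ≠ S, so it is not a stable class, mirroring the distinction made above.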
Definition 5.2
A three-valued interpretation I of a Datalog¬ program P is called a stable trap space
(resp. supported trap space) of P if S(I) is a stable (resp. supported) trap set of P .
It is easy to adapt the concept of stable or supported trap set to directed graphs in
general (see Definition 5.3).
Definition 5.3
Consider a directed graph G. A subset S of V (G) is called a trap set of G if there are no
two vertices A and B such that A ∈ S, B ∉ S, and (A, B) ∈ E(G).
It is easy to see that S is a stable (resp. supported) trap set of P iff S is a trap set
of tgst (P ) (resp. tgsp (P )). Hence, we can deduce from Definition 5.2 that a three-valued
interpretation I is a stable (resp. supported) trap space of P iff S(I) is a trap set of
tgst (P ) (resp. tgsp (P )). Since stable and supported transition graphs represent the dynamical
aspect of a Datalog¬ program (Baral and Subrahmanian 1992; Inoue and Sakama 2012),
this indicates that trap spaces represent the dynamical aspect of a Datalog¬ program.
We now illustrate the notions of stable and supported trap spaces through a concrete
example, demonstrating how they can be identified respectively from the stable and
supported transition graphs of a given program.
Example 5.1
Consider the Datalog¬ program P of Example 2.1. Figure 1 (b) and Figure 1 (c) show the
stable and supported transition graphs of P , respectively. Then I1 = {p = 1, q = 0, r = ⋆}
is a stable (resp. supported) trap space of P because S(I1 ) = {{p}, {p, r}} is a trap set of
tgst (P ) (resp. tgsp (P )). By checking the remaining three-valued interpretations, we get
36 Trinh, Benhamou, Soliman and Fages

the four other stable trap spaces that are also supported trap spaces of P :
I2 = {p = 0, q = 1, r = ⋆} (S(I2 ) = {{q}, {q, r}}),
I3 = {p = ⋆, q = ⋆, r = ⋆} (S(I3 ) = {{p}, {p, r}, {q}, {q, r}, {p, q}, {r}, {p, q, r}, ∅}),
I4 = {p = 1, q = 0, r = 0} (S(I4 ) = {{p}}),
I5 = {p = 0, q = 1, r = 1} (S(I5 ) = {{q, r}}).
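The search in this example can be automated: enumerate all 3^|HBP| three-valued interpretations and keep those whose covered set S(I) is closed under the relevant operator. A Python sketch with a hypothetical program p ← ∼q ; q ← ∼p ; r ← q over {p, q, r} (illustrative only, not necessarily the program of Example 2.1):

```python
from itertools import product

# Hypothetical program: p <- not q ; q <- not p ; r <- q.
rules = {"p": [(set(), {"q"})], "q": [(set(), {"p"})], "r": [({"q"}, set())]}
atoms = sorted(rules)

def T_P(interp):
    return frozenset(h for h, bodies in rules.items()
                     if any(pos <= interp and not (neg & interp) for pos, neg in bodies))

def F_P(interp):
    reduct = {h: [pos for pos, neg in bodies if not (neg & interp)]
              for h, bodies in rules.items()}
    model = frozenset()
    while True:
        new = frozenset(h for h, bs in reduct.items() if any(b <= model for b in bs))
        if new == model:
            return model
        model = new

def covered_states(space):
    """S(I): all two-valued interpretations covered by a three-valued interpretation."""
    choices = [(True, False) if space[a] == "*" else (space[a] == 1,) for a in atoms]
    return {frozenset(a for a, v in zip(atoms, vals) if v) for vals in product(*choices)}

def is_trap_space(space, step):
    S = covered_states(space)
    return all(step(i) in S for i in S)

spaces = [dict(zip(atoms, vals)) for vals in product((0, 1, "*"), repeat=len(atoms))]
supported = [sp for sp in spaces if is_trap_space(sp, T_P)]
stable = [sp for sp in spaces if is_trap_space(sp, F_P)]
print(len(supported), len(stable))  # 5 5 for this hypothetical program
```

For this program too, the five stable trap spaces coincide with the five supported ones.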

5.2 Properties
In this subsection, we present basic properties of stable and supported trap spaces in
Datalog¬ programs. We begin by establishing their guaranteed existence, which ensures
that the trap space framework is broadly applicable to the analysis of program dynamics.
Further properties shall clarify their intrinsic characteristics.
Proposition 5.1
A Datalog¬ program P always has a stable (resp. supported) trap space.

Proof
Let I be a three-valued interpretation that corresponds to all the two-valued interpreta-
tions, i.e., ∀a ∈ HBP , I(a) = ⋆. By setting S = S(I), the condition {FP (J)|J ∈ S} ⊆ S
(resp. {TP (J)|J ∈ S} ⊆ S) always holds. Hence, S(I) is a stable (resp. supported) trap
set of P . By definition, I is a stable (resp. supported) trap space of P .
We then introduce an important concept, namely consistency, on trap spaces of Datalog¬
programs. Two three-valued interpretations I1 and I2 are called consistent if for all
a ∈ HBP , I1 (a) ≤s I2 (a) or I2 (a) ≤s I1 (a). Equivalently, I1 and I2 are consistent iff
S(I1 ) ∩ S(I2 ) ≠ ∅. Note that requiring I1 ≤s I2 or I2 ≤s I1 is insufficient here, since there
exist three-valued interpretations that are not comparable w.r.t. ≤s but are consistent.
When I1 and I2 are consistent, their overlap (denoted by I1 ⊓ I2 ) is the three-valued
interpretation I such that for all a ∈ HBP , I(a) = min≤s (I1 (a), I2 (a)). It also follows
that S(I) = S(I1 ) ∩ S(I2 ).
This notion of consistency enables us to study how trap spaces interact and combine,
particularly through their common overlap.
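Consistency and overlap are pointwise operations on three-valued interpretations and can be sketched directly. We assume here the ordering x ≤s y iff y = ⋆ or x = y (defined values below ⋆), which matches the subspace-intersection reading of the overlap; the interpretations are those of Example 5.1:

```python
def leq_s(x, y):
    """x <=_s y iff y is undefined or the two values agree."""
    return y == "*" or x == y

def consistent(i1, i2):
    """Pointwise consistency of two three-valued interpretations."""
    return all(leq_s(i1[a], i2[a]) or leq_s(i2[a], i1[a]) for a in i1)

def overlap(i1, i2):
    """I1 ⊓ I2: at each atom, keep the more defined of the two values."""
    assert consistent(i1, i2), "overlap is only defined for consistent interpretations"
    return {a: i2[a] if i1[a] == "*" else i1[a] for a in i1}

I1 = {"p": 1, "q": 0, "r": "*"}
I4 = {"p": 1, "q": 0, "r": 0}
I5 = {"p": 0, "q": 1, "r": 1}
print(consistent(I1, I4))        # True
print(overlap(I1, I4) == I4)     # True: their overlap is I4 itself
print(consistent(I4, I5))        # False: they disagree on every atom
```

Note that I4 ≤s I1, so the overlap simply returns the more defined interpretation here.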
Proposition 5.2
Let P be a Datalog¬ program. The overlap of two consistent stable (resp. supported)
trap spaces of P is a stable (resp. supported) trap space of P .

Proof
Hereafter, we prove the case of stable trap spaces. The proof for the case of supported
trap spaces is symmetrical. Let I1 and I2 be two consistent stable trap spaces of P . Then
I1 ⊓ I2 is also a three-valued interpretation by construction. Let s be an arbitrary two-
valued interpretation in S(I1 ⊓ I2 ) and s′ be its successor in tgst (P ). Since S(I1 ⊓ I2 ) =
S(I1 )∩S(I2 ), s ∈ S(I1 ) and s ∈ S(I2 ). Since I1 (resp. I2 ) is a stable trap space, s′ ∈ S(I1 )
(resp. s′ ∈ S(I2 )). It follows that s′ ∈ S(I1 ⊓ I2 ). Hence, S(I1 ⊓ I2 ) is a stable trap set,
and consequently I1 ⊓ I2 is a stable trap space of P .

The above property of stable and supported trap spaces is analogous to a property of
stable and supported classes in Datalog¬ programs (Inoue and Sakama 2012). However,
while the union of two stable (resp. supported) classes is a stable (resp. supported) class,
that of two stable (resp. supported) trap spaces may not be a stable (resp. supported)
trap space. In Example 5.1, S(I4 ) ∪ S(I5 ) is a stable class of P , whereas S(I4 ) ∪ S(I5 )
does not correspond to any three-valued interpretation.
The closure of stable or supported trap spaces under consistent overlap leads to a
useful structural consequence concerning the minimal trap space that covers a given set
of two-valued interpretations.
Corollary 5.1
Let P be a Datalog¬ program. Let S be a non-empty set of two-valued interpretations
of P . Then there is a unique ≤s -minimal stable (resp. supported) trap space, denoted by
sp sp
hSist st
P (resp. hSiP ), such that S ⊆ S(hSiP ) (resp. S ⊆ S(hSiP )).

Proof
Hereafter, we prove the case of stable trap spaces. The proof for the case of supported trap
spaces is symmetrical. Let T be the set of all stable trap spaces I such that S(I) contains
S. The set T is non-empty, since it contains at least the stable trap space in which all
atoms are assigned ⋆. The elements of T are mutually consistent, because their covered
sets all contain the non-empty set S; thus we can take the overlap of all these elements
(denoted by ⟨S⟩st_P ). By applying the same reasoning as in the proof of Proposition 5.2,
⟨S⟩st_P is a stable trap space of P . By construction, ⟨S⟩st_P is unique and ≤s -minimal,
and S ⊆ S(⟨S⟩st_P ).
Another consequence of the closure of stable or supported trap spaces under consistent
overlap is that two distinct ≤s -minimal trap spaces cannot be consistent with each
other, as formalized below.
Proposition 5.3
Let P be a Datalog¬ program. Let I1 and I2 be two distinct ≤s -minimal stable (resp.
supported) trap spaces of P . Then I1 and I2 are not consistent.

Proof
Hereafter, we prove the case of stable trap spaces. The proof for the case of supported
trap spaces is symmetrical. Assume that I1 and I2 are consistent. Then I = I1 ⊓ I2 exists.
By Proposition 5.2, I is a stable trap space of P . Since I1 and I2 are distinct, I <s I1 or
I <s I2 must hold. This is a contradiction because I1 and I2 are ≤s -minimal stable trap
spaces of P . Hence, I1 and I2 are not consistent.
We now recall several important theoretical results from Inoue and Sakama (2012) that
lead us to similar results for trap spaces in Datalog¬ programs. Specifically, for negative
Datalog¬ programs, the stable and supported semantics coincide, resulting in identical
transition graphs and thus identical sets of trap spaces; and the stable transition graph of
an arbitrary Datalog¬ program remains invariant under the least fixpoint transformation,
which in turn implies that the set of stable trap spaces is also preserved under this
transformation.
Theorem 5.1 (Proposition 5.2 of Inoue and Sakama (2012))

Let P be a negative Datalog¬ program. Then tgst (P ) = tgsp (P ), i.e., the stable and
supported transition graphs of P are the same.

Corollary 5.2
Let P be a negative Datalog¬ program. Then the set of stable trap spaces of P coincides
with the set of supported trap spaces of P .

Proof
This immediately follows from Theorem 5.1 and the dynamical characterizations of stable
and supported trap spaces of a Datalog¬ program.

Theorem 5.2 (Theorem 5.5 of Inoue and Sakama (2012))


Let P be a Datalog¬ program and lfp(P ) denote the least fixpoint of P . Then tgst (P ) =
tgst (lfp(P )), i.e., P and lfp(P ) have the same stable transition graph.

Corollary 5.3
Let P be a Datalog¬ program and lfp(P ) denote the least fixpoint of P . Then the set of
stable trap spaces of P coincides with the set of stable trap spaces of lfp(P ).

Proof
This immediately follows from Theorem 5.2 and the dynamical characterization of stable
trap spaces of a Datalog¬ program.

5.3 Relationships with Other Semantics


Naturally, if a stable (resp. supported) trap space of a Datalog¬ program is two-valued,
then it is also a stable (resp. supported) model of this program. Hereafter, we show
more relationships between stable and supported trap spaces and other types of mod-
els in Datalog¬ programs. They highlight the role of trap spaces as generalizations of
models, capturing both dynamically and semantically meaningful behavior of a program.
Understanding these relationships not only deepens our theoretical insight into the se-
mantic landscape of Datalog¬ programs but also may open the door to new algorithmic
strategies.

5.3.1 Stable and Supported Class Semantics


We now investigate the connection between trap spaces and class-based semantics of
Datalog¬ programs, where models are understood in terms of cyclic or recurrent behavior
in the transition graphs.
Proposition 5.4
Consider a Datalog¬ program P . A non-empty set of two-valued interpretations S is a
⊆-minimal stable (resp. supported) trap set of P iff S forms a simple cycle of tgst (P )
(resp. tgsp (P )).

Proof
We have that each vertex in tgst (P ) (resp. tgsp (P )) has exactly one out-going arc. These
graphs are thus similar to the synchronous state transition graph of a BN.
Hence, S forms a simple cycle of tgst (P ) (resp. tgsp (P ))
iff S is a ⊆-minimal trap set of tgst (P ) (resp. tgsp (P )) (see Dubrova and Teslenko (2011))
iff S is a ⊆-minimal stable (resp. supported) trap set of P .
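Since every vertex has exactly one out-going arc, the transition graph is functional, so its simple cycles (the ⊆-minimal trap sets) can be found by iterating FP from each state until a repetition occurs. A sketch with the hypothetical program p ← ∼q ; q ← ∼p ; r ← q (illustrative only):

```python
from itertools import chain, combinations

# Hypothetical program: p <- not q ; q <- not p ; r <- q.
rules = {"p": [(set(), {"q"})], "q": [(set(), {"p"})], "r": [({"q"}, set())]}
atoms = sorted(rules)

def F_P(interp):
    reduct = {h: [pos for pos, neg in bodies if not (neg & interp)]
              for h, bodies in rules.items()}
    model = frozenset()
    while True:
        new = frozenset(h for h, bs in reduct.items() if any(b <= model for b in bs))
        if new == model:
            return model
        model = new

def cycle_from(state):
    """The simple cycle of tg_st(P) eventually reached from `state`."""
    seen = []
    while state not in seen:
        seen.append(state)
        state = F_P(state)
    return frozenset(seen[seen.index(state):])

states = [frozenset(c) for c in
          chain.from_iterable(combinations(atoms, r) for r in range(len(atoms) + 1))]
minimal_trap_sets = {cycle_from(s) for s in states}
print(sorted(len(c) for c in minimal_trap_sets))  # [1, 1, 2]: two fixpoints, one 2-cycle
```

By Proposition 5.7, the three cycles found here are exactly the strict stable classes of this hypothetical program.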
We recall two established results that precisely characterize the strict stable and
supported classes of a Datalog¬ program in terms of simple cycles in the respective
transition graphs.
Proposition 5.5 (Theorem 3 of Baral and Subrahmanian (1992))
Consider a Datalog¬ program P . A non-empty set of two-valued interpretations S is a
strict stable class of P iff S forms a simple cycle of tgst (P ).

Proposition 5.6 (Theorem 3.2 of Inoue and Sakama (2012))


Consider a Datalog¬ program P . A non-empty set of two-valued interpretations S is a
strict supported class of P iff S forms a simple cycle of tgsp (P ).
We then connect minimal trap sets and class semantics by showing that ⊆-minimal
stable and supported trap sets coincide with strict stable and supported classes, respec-
tively.
Proposition 5.7
Consider a Datalog¬ program P . A non-empty set of two-valued interpretations S is a
⊆-minimal stable (resp. supported) trap set of P iff S is a strict stable (resp. supported)
class of P .

Proof
We show the proof for the case of stable trap sets; the proof for the case of supported
trap sets is symmetrical.
The set S is a ⊆-minimal stable trap set of P
iff S is a ⊆-minimal trap set of tgst (P )
iff S is a simple cycle of tgst (P ) by Proposition 5.4
iff S is a strict stable class of P by Proposition 5.5.
Building on the previous characterizations, we can now establish that every stable or
supported trap space necessarily covers at least one strict class of the corresponding type.
Corollary 5.4
Let P be a Datalog¬ program. Then every stable (resp. supported) trap space of P
contains at least one strict stable (resp. supported) class of P .

Proof
Let I be a stable (resp. supported) trap space of P . By definition, S(I) is a stable (resp.
supported) trap set of P . There is a ⊆-minimal stable (resp. supported) trap set S of P
such that S ⊆ S(I). By Proposition 5.7, S is a strict stable (resp. supported) class of P ,
which concludes the proof.

Corollary 5.4 shows that a stable (resp. supported) trap space always covers at least
one strict stable (resp. supported) class. Proposition 5.3 shows that two ≤s -minimal
stable (resp. supported) trap spaces are not consistent. This implies that the number
of ≤s -minimal stable (resp. supported) trap spaces is a lower bound for the number of
strict stable (resp. supported) classes in a Datalog¬ program. This insight is similar to the
insight in BNs that the number of minimal trap spaces of a BN is a lower bound for the
number of attractors of this BN regardless of the employed update scheme (Klarner et al.
2015).

5.3.2 Stable and Supported Partial Model Semantics


Stable and supported trap spaces are also closely related to partial model semantics,
particularly in the setting of BNs. The following proposition, originally formulated in
the context of BNs, characterizes trap spaces via a natural order-theoretic condition on
three-valued interpretations.
Proposition 5.8 (Proposition 2 of Trinh et al. (2025a))
Let f be a BN. A sub-space m is a trap space of f iff m(fv ) ≤s m(v) for every v ∈ varf .
Building on this characterization, we now relate the supported trap spaces of a
Datalog¬ program to the trap spaces of its corresponding BN.
Theorem 5.3
Let P be a Datalog¬ program and f be its encoded BN . Then the supported trap spaces
of P coincide with the trap spaces of f .

Proof
It is sufficient to show that the supported transition graph of P is identical to the syn-
chronous state transition graph of f .
By definition, varf = HBP , thus V (tgsp (P )) = V (sstg(f )). Let I and J be two two-
valued interpretations of P . They are states of f as well. We have (I, J) ∈ E(tgsp (P ))
iff J = TP (I)
iff J(v) = I(rhsP (v)) for every v ∈ HBP
iff J(v) = I(fv ) for every v ∈ varf
iff (I, J) ∈ E(sstg(f )). This implies that E(tgsp (P )) = E(sstg(f )).
The above results immediately lead us to the following model-theoretic characterization
of supported trap spaces in Datalog¬ programs.
Corollary 5.5
Let P be a Datalog¬ program. Then a three-valued interpretation I is a supported trap
space of P iff I(rhsP (v)) ≤s I(v) for every v ∈ HBP .

Proof
This immediately follows from Theorem 5.3 and Proposition 5.8.
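The condition of Corollary 5.5 can be tested directly by evaluating each right-hand side in three-valued (Kleene) logic and comparing with ≤s . A sketch with the hypothetical program p ← ∼q ; q ← ∼p ; r ← q (illustrative only; the convention x ≤s y iff y = ⋆ or x = y is our assumption, consistent with this section):

```python
# Hypothetical program: p <- not q ; q <- not p ; r <- q.
rules = {"p": [(set(), {"q"})], "q": [(set(), {"p"})], "r": [({"q"}, set())]}
NEG = {0: 1, 1: 0, "*": "*"}  # three-valued negation

def conj(vals):
    """Three-valued (Kleene) conjunction: 0 dominates, then *, else 1."""
    if 0 in vals: return 0
    if "*" in vals: return "*"
    return 1

def disj(vals):
    """Three-valued (Kleene) disjunction: 1 dominates, then *, else 0."""
    if 1 in vals: return 1
    if "*" in vals: return "*"
    return 0

def rhs_value(space, bodies):
    """Value of rhs_P(v): disjunction over the rule bodies for v."""
    return disj([conj([space[a] for a in pos] + [NEG[space[a]] for a in neg])
                 for pos, neg in bodies])

def leq_s(x, y):
    return y == "*" or x == y

def is_supported_trap_space(space):
    return all(leq_s(rhs_value(space, bodies), space[h]) for h, bodies in rules.items())

I2 = {"p": 0, "q": 1, "r": "*"}
J = {"p": 1, "q": 1, "r": "*"}
print(is_supported_trap_space(I2), is_supported_trap_space(J))  # True False
```

No transition graph is needed: the check is purely model-theoretic, as Corollary 5.5 states.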
Corollary 5.5 shows that supported trap spaces can be characterized in another way
that is model-theoretic. It also turns out that a supported trap space may not be a
three-valued model of P , since the former relies on the order ≤s whereas the latter relies
on the order ≤t . In Example 5.1, I2 = {p = 0, q = 1, r = ⋆} is a supported trap space,
but it is not a three-valued model of P .
This observation highlights a subtle distinction between supported trap spaces and
three-valued models, motivating the need to understand their relationships with other
semantics. To that end, we now show that every supported partial model of a Datalog¬
program is indeed a supported trap space, thereby establishing a one-way implication
between these two notions.
Corollary 5.6
Let P be a Datalog¬ program. If I is a supported partial model of P , then it is also a
supported trap space of P .

Proof
Assume that I is a supported partial model of P . Let f be the encoded BN of P . By The-
orem 3.1, I is a complete trap space of f . Then I is a trap space of f by Proposition 5.8.
By Theorem 5.3, I is a supported trap space of P .
We next turn our attention to the stable case and show an analogous result: every
stable partial model of a Datalog¬ program is also a stable trap space.
Proposition 5.9
Let P be a Datalog¬ program. If I is a stable partial model of P , then it is also a stable
trap space of P .

Proof
Assume that I is a stable partial model of P . Let lfp(P ) be the least fixpoint of P .
By Theorem 4.5, I is a stable partial model of lfp(P ).
By Corollary 4.1, I is a supported partial model of lfp(P ) since lfp(P ) is negative.
By Corollary 5.6, I is a supported trap space of lfp(P ).
By Corollary 5.2, I is a stable trap space of lfp(P ) since lfp(P ) is negative.
By Corollary 5.3, I is a stable trap space of P .
Having established that every supported partial model of a Datalog¬ program is also a
supported trap space, we now examine the converse direction. Specifically, we show that
every supported trap space contains (w.r.t. ≤s ) some supported partial model, thereby
revealing a form of approximation from below. This leads to a further refinement: the
notions of ≤s -minimal supported partial model and ≤s -minimal supported trap space
coincide. Together, these results establish a tight correspondence between the two notions
in the minimal case.
Corollary 5.7
Let P be a Datalog¬ program. Then for every supported trap space I of P , there is a
supported partial model I ′ of P such that I ′ ≤s I.

Proof
Let f be the encoded BN of P .
By Theorem 5.3, I is a trap space of f .
By Lemma 3.1, there is a complete trap space I ′ of f such that I ′ ≤s I.
By Theorem 3.1, I ′ is a supported partial model of P .

Corollary 5.8
Consider a Datalog¬ program P . Then a three-valued interpretation I is a ≤s -minimal
supported partial model of P iff I is a ≤s -minimal supported trap space of P .

Proof
Let f be the encoded BN of P .
We have I is a ≤s -minimal supported partial model of P
iff I is a ≤s -minimal trap space of f by Corollary 3.1
iff I is a ≤s -minimal supported trap space of P by Theorem 5.3.
We now revisit the running example to illustrate the interplay between supported (sta-
ble) trap spaces and supported (stable) partial models. This concrete instance not only
highlights the relationships previously established but also provides a direct validation
of several key results. In particular, we demonstrate how supported partial models ap-
proximate supported trap spaces from below, and how minimality is preserved across the
two notions.
Example 5.2
Let us continue with Example 5.1. The Datalog¬ program P has five stable (resp. sup-
ported) trap spaces: I1 , I2 , I3 , I4 , and I5 . It has three stable (also supported) partial mod-
els: I3 , I4 , and I5 . This confirms the correctness of Proposition 5.9 (resp. Corollary 5.6).
We have that I4 ≤s I1 and I5 ≤s I2 , which confirms the correctness of Corollary 5.7. The
program P has two ≤s -minimal supported partial models, namely I4 and I5 , which are
also two ≤s -minimal supported trap spaces of P . This is consistent with Corollary 5.8.

5.3.3 Regular Model Semantics


The regular model semantics not only inherits the advantages of the stable partial model
semantics but also imposes two notable principles in non-monotonic reasoning: minimal
undefinedness and justifiability (which is closely related to the concept of labeling-based
justification in Doyle’s truth maintenance system (Doyle 1979)), making it one of the
well-known semantics in logic programming (You and Yuan 1994; Janhunen et al.
2006). Furthermore, regular models in ground Datalog¬ programs were proven to corre-
spond to preferred extensions in Dung’s frameworks (Wu et al. 2009) and assumption-
based argumentation (Caminada and Schulz 2017), which are two central focuses in ab-
stract argumentation (Baroni et al. 2011).
In Example 5.2, I1 and I2 are stable trap spaces, but they are not stable partial models
of P . However, we observed that I4 and I5 are the ≤s -minimal stable trap spaces of P .
They are also the regular models of P , i.e., the set of regular models of P coincides with
the set of ≤s -minimal stable trap spaces of P . This observation can be generalized as
follows:
Theorem 5.4 (main result)
Let P be a Datalog¬ program. Then a three-valued interpretation I is a regular model
of P iff I is a ≤s -minimal stable trap space of P .

Proof

Let lfp(P ) be the least fixpoint of P .


We have I is a regular model of P
iff I is a regular model of lfp(P ) by Theorem 4.5
iff I is a ≤s -minimal stable partial model of lfp(P ) by definition
iff I is a ≤s -minimal supported partial model of lfp(P ) by Corollary 4.1 and the fact that
lfp(P ) is negative
iff I is a ≤s -minimal supported trap space of lfp(P ) by Corollary 5.8
iff I is a ≤s -minimal stable trap space of lfp(P ) by Corollary 5.2
iff I is a ≤s -minimal stable trap space of P by Corollary 5.3.
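Theorem 5.4 suggests a brute-force way to compute regular models on small instances: enumerate the stable trap spaces and keep the ≤s -minimal ones. A sketch with the hypothetical program p ← ∼q ; q ← ∼p ; r ← q (illustrative only):

```python
from itertools import product

# Hypothetical program: p <- not q ; q <- not p ; r <- q.
rules = {"p": [(set(), {"q"})], "q": [(set(), {"p"})], "r": [({"q"}, set())]}
atoms = sorted(rules)

def F_P(interp):
    reduct = {h: [pos for pos, neg in bodies if not (neg & interp)]
              for h, bodies in rules.items()}
    model = frozenset()
    while True:
        new = frozenset(h for h, bs in reduct.items() if any(b <= model for b in bs))
        if new == model:
            return model
        model = new

def covered_states(space):
    choices = [(True, False) if space[a] == "*" else (space[a] == 1,) for a in atoms]
    return {frozenset(a for a, v in zip(atoms, vals) if v) for vals in product(*choices)}

def is_stable_trap_space(space):
    S = covered_states(space)
    return all(F_P(i) in S for i in S)

def leq_s_space(i1, i2):
    """I1 <=_s I2 iff I1 agrees with I2 wherever I2 is defined."""
    return all(i2[a] == "*" or i1[a] == i2[a] for a in atoms)

stable = [sp for sp in (dict(zip(atoms, v)) for v in product((0, 1, "*"), repeat=len(atoms)))
          if is_stable_trap_space(sp)]
regular = [sp for sp in stable
           if not any(o != sp and leq_s_space(o, sp) for o in stable)]
print(regular)  # exactly two: {p=0,q=1,r=1} and {p=1,q=0,r=0}
```

For this hypothetical program, the two fully defined stable trap spaces are the ≤s -minimal ones, and hence the regular models.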
By Proposition 5.3 and Theorem 5.4, we deduce that two distinct regular models are
not consistent with each other. Together with Corollary 5.4, we deduce that the number
of regular models of P is a lower bound for the number of strict stable classes of P . To
the best of our knowledge, both insights into the relationship between regular models
and stable classes are new. In addition, Proposition 5.1 implies that every Datalog¬
program P has at least one regular model. Note that the proof of the existence of a
regular model in an NLP relies on that of a stable partial model (You and Yuan 1994).

5.4 Discussions
In summary, we can first conclude that the notion of stable or supported trap space is a
natural extension of the stable (supported) model semantics and the stable (supported)
partial model semantics. It is easy to see that for any Datalog¬ program, the set of
two-valued stable (resp. supported) trap spaces coincides with the set of stable (resp.
supported) models. By Proposition 5.9 (resp. Corollary 5.6), a stable (resp. supported)
partial model is also a stable (resp. supported) trap space.
Second, the notion of stable trap space can also be viewed as an intermediate between
model-theoretic semantics (the regular model semantics) and dynamical semantics (the
stable class semantics). The regular model semantics somewhat generalizes the main other
model-theoretic semantics for Datalog¬ programs, namely the stable model semantics,
the well-founded model semantics, and the stable partial model semantics (You and Yuan
1994; Przymusinski 1994). It also imposes the principle of minimal undefinedness, i.e., the
undefined value should be used only when it is necessary (You and Yuan 1994). By Propo-
sition 5.9, the set of stable trap spaces includes the set of stable partial models, and thus
also includes the set of regular models. In addition, by Theorem 5.4, the set of ≤s -minimal
stable trap spaces coincides with the set of regular models. Hence, the trap space seman-
tics possesses both the model-theoretic aspect and the principle of minimal undefinedness
inherent in the regular model semantics. The stable class semantics expresses the dy-
namical aspect of a Datalog¬ program (Baral and Subrahmanian 1992; 1993). It is also
characterized by the stable transition graph of the program (Baral and Subrahmanian
1992). The notion of stable trap space is defined based on the stable transition graph
of a Datalog¬ program as well. Note that a stable class may not be a stable trap space
due to the requirement for three-valued interpretations. However, by Corollary 5.4, we
know that a stable (resp. supported) trap space contains at least one strict stable class,
which represents a minimal oscillation between two-valued interpretations, and all the
meaningful stable classes of a Datalog¬ program are strict (Inoue and Sakama 2012).
In particular, the notion of stable trap space reveals a deeper relationship between the
regular model semantics and the stable class semantics, namely, that a regular model
covers at least one strict stable class and the number of regular models is a lower bound
for the number of strict stable classes of a Datalog¬ program.
Third, the relationships between Datalog¬ programs and abstract argumentation
have been deeply studied (Dung 1995; Caminada and Gabbay 2009; Wu et al. 2009;
Caminada et al. 2015; Caminada and Schulz 2017; Alcântara et al. 2019). Abstract
Argumentation Frameworks (AFs) are the most prominent formalism for formal
argumentation research (Dung 1995; Baroni et al. 2011). Abstract Dialectical Frameworks
(ADFs) are more general than AFs, and have attracted much attention (Baroni et al.
2011). However, some extension-based semantics that exist in AFs or ADFs do not have
corresponding counterparts in Datalog¬ programs. The new notion of stable or sup-
ported trap space helps us to fill this gap. It has been shown that the admissible sets
of an AF correspond to the trap spaces of the respective BN (Dimopoulos et al. 2024;
Trinh et al. 2025a), and thus by Theorem 5.3, the admissible sets of an AF correspond
to the supported trap spaces of the respective Datalog¬ program. It has been shown that
the admissible interpretations of an ADF coincide with the trap spaces of the respective
BN (Azpeitia et al. 2024; Heyninck et al. 2024), and thus by Theorem 5.3, the admissi-
ble interpretations of an ADF coincide with the supported trap spaces of the respective
Datalog¬ program.

6 Conclusion
In this paper, we have established a formal link between Datalog¬ programs and Boolean
network theory in terms of both semantics and structure. This connection has enabled
us to import key concepts and results from the study of discrete dynamical systems into
the theory and analysis of Datalog¬ programs.
By analyzing the atom dependency graph of a Datalog¬ program, we have identified
structural conditions—specifically, the absence of odd or even cycles—that guarantee
desirable semantic properties. In particular, we have proved that: (i) in the absence of
odd cycles, the regular models coincide with the stable models, ensuring their existence;
(ii) in the absence of even cycles, the stable partial models are unique, which entails the
uniqueness of regular models. Key to our proofs is the established connection and the
existing graphical analysis results in Boolean network theory. We have also revisited ear-
lier claims made by You and Yuan (1994) regarding (i) and the regular model part of (ii)
in normal logic programs. While their intuition was partially correct, we have identified
issues in their formal definitions and proof arguments. We have provided corrected defi-
nitions and clarified the scope of applicability to negative normal logic programs, thereby
refining the theoretical landscape.
Beyond these structural insights, we have introduced several upper bounds on the
number of stable, stable partial, and regular models based on the cardinality of a feedback
vertex set in the atom dependency graph of a Datalog¬ program. This provides a novel
complexity measure grounded in graph-theoretic properties of Datalog¬ programs.
Furthermore, we have obtained several stronger graphical analysis results on a subclass
of Datalog¬ programs, namely uni-rule Datalog¬ programs (Seitzer and Schlipf 1997;
Caminada et al. 2015), which are important in the theory of Datalog¬ programs, as
well as being closely related to abstract argumentation frameworks (Caminada et al.
2015). These stronger results rely on the notion of delocalizing triple in signed directed
graphs (Richard and Ruet 2013).
Finally, our investigation has led to a conceptual enrichment of Datalog¬ programs
through the notions of stable and supported trap spaces, borrowed from the notion of
trap space in Boolean network theory. We have formalized supported and stable trap
spaces in both model-theoretic and dynamical settings, shown their basic properties, and
demonstrated their relationships to other existing semantics, in particular, shown that
the ≤s -minimal stable trap spaces coincide with the regular models. This correspondence
offers a new perspective on the dynamics of Datalog¬ programs and may open the door
to new algorithmic techniques for model computation.

References
Akutsu, T., Melkman, A. A., and Tamura, T. Singleton and 2-periodic attractors of sign-
definite Boolean networks. Inf. Process. Lett., 112(1-2):35–38 2012.
Alcântara, J., Sá, S., and Acosta-Guadarrama, J. On the equivalence between abstract di-
alectical frameworks and logic programs. Theory Pract. Log. Program., 19(5-6):941–956 2019.
Alviano, M., Faber, W., Greco, G., and Leone, N. Magic sets for disjunctive Datalog
programs. Artif. Intell., 187:156–192 2012.
Apt, K. R. and Bezem, M. Acyclic programs. New Gener. Comput., 9:335–363 1991.
Apt, K. R., Blair, H. A., and Walker, A. Towards a theory of declarative knowledge. In
Foundations of Deductive Databases and Logic Programming 1988, pp. 89–148. Elsevier.
Aracena, J. Maximum number of fixed points in regulatory Boolean networks. Bull. Math.
Biol., 70(5):1398–1409 2008.
Aravindan, C. and Dung, P. M. On the correctness of unfold/fold transformation of normal
and extended logic programs. J. Log. Program., 24(3):201–217 1995.
Azpeitia, E., Gutiérrez, S. M., Rosenblueth, D. A., and Zapata, O. Bridging abstract
dialectical argumentation and Boolean gene regulation. CoRR, abs/2407.06106 2024.
Baral, C. and Subrahmanian, V. S. Stable and extension class theory for logic programs
and default logics. J. Autom. Reason., 8(3):345–366 1992.
Baral, C. and Subrahmanian, V. S. Dualities between alternative semantics for logic pro-
gramming and nonmonotonic reasoning. J. Autom. Reason., 10(3):399–420 1993.
Baroni, P., Caminada, M., and Giacomin, M. An introduction to argumentation semantics.
Knowl. Eng. Rev., 26(4):365–410 2011.
Basta, S., Flesca, S., and Greco, S. Functional queries in Datalog. New Gener. Comput.,
20(4):339–372 2002.
Caminada, M., Sá, S., Alcântara, J. F. L., and Dvorák, W. On the equivalence between
logic programming semantics and argumentation semantics. Int. J. Approx. Reason., 58:87–
111 2015.
Caminada, M. and Schulz, C. On the equivalence between assumption-based argumentation
and logic programming. J. Artif. Intell. Res., 60:779–825 2017.
Caminada, M. W. A. and Gabbay, D. M. A logical account of formal argumentation. Stud
Logica, 93(2-3):109–145 2009.
Ceri, S., Gottlob, G., and Tanca, L. 1990. Logic Programming and Databases: An Overview.
Springer.
Cholewinski, P. and Truszczynski, M. Extremal problems in logic programming and stable
model computation. J. Log. Program., 38(2):219–242 1999.
Clark, K. L. Negation as failure. In Logic and Data Bases, Symposium on Logic and Data
Bases 1977, pp. 293–322, New York. Plenum Press.

Costantini, S. On the existence of stable models of non-stratified logic programs. Theory Pract. Log. Program., 6(1-2):169–212, 2006.
Costantini, S. and Provetti, A. Conflict, consistency and truth-dependencies in graph representations of answer set logic programs. In Second International Workshop on Graph Structures for Knowledge Representation and Reasoning 2011, pp. 68–90. Springer.
Dietz, E., Hölldobler, S., and Wernhard, C. Modeling the suppression task under weak completion and well-founded semantics. J. Appl. Non Class. Logics, 24(1-2):61–85, 2014.
Dimopoulos, Y., Dvorák, W., and König, M. Connecting abstract argumentation and Boolean networks. In Proc. of COMMA 2024, pp. 85–96. IOS Press.
Dimopoulos, Y. and Torres, A. Graph theoretical structures in logic programs and default theories. Theor. Comput. Sci., 170(1-2):209–244, 1996.
Doyle, J. A truth maintenance system. Artif. Intell., 12(3):231–272, 1979.
Dubrova, E. and Teslenko, M. A SAT-based algorithm for finding attractors in synchronous Boolean networks. IEEE ACM Trans. Comput. Biol. Bioinform., 8(5):1393–1399, 2011.
Dung, P. M. On the acceptability of arguments and its fundamental role in nonmonotonic reasoning, logic programming and n-person games. Artif. Intell., 77(2):321–358, 1995.
Dung, P. M. and Kanchanasut, K. A fixpoint approach to declarative semantics of logic programs. In Proc. of NACLP 1989, pp. 604–625. MIT Press.
Eiter, T., Leone, N., and Saccà, D. On the partial semantics for disjunctive deductive databases. Ann. Math. Artif. Intell., 19(1-2):59–96, 1997.
Fages, F. A new fixpoint semantics for general logic programs compared with the well-founded and the stable model semantics. New Gener. Comput., 9(3/4):425–444, 1991.
Fages, F. Consistency of Clark’s completion and existence of stable models. Methods Log. Comput. Sci., 1(1):51–60, 1994.
Fandinno, J. and Hecher, M. Treewidth-aware complexity in ASP: not all positive cycles are equally hard. In Proc. of AAAI 2021, pp. 6312–6320. AAAI Press.
Fandinno, J. and Lifschitz, V. Positive dependency graphs revisited. Theory Pract. Log. Program., 23(5):1128–1137, 2023.
Fichte, J. K. The good, the bad, and the odd: Cycles in answer-set programs. In Proc. of ESSLLI 2011, pp. 78–90. Springer.
Gelfond, M. and Lifschitz, V. The stable model semantics for logic programming. In Proc. of ICLP/SLP 1988, pp. 1070–1080. MIT Press.
Guessarian, I. and Peixoto, M. V. About boundedness for some Datalog and Datalog^neg programs. J. Log. Comput., 4(4):375–403, 1994.
Heyninck, J., Knorr, M., and Leite, J. Abstract dialectical frameworks are Boolean networks. In Proc. of LPNMR 2024, pp. 98–111. Springer.
Inoue, K. Logic programming for Boolean networks. In Proc. of IJCAI 2011, pp. 924–930. IJCAI/AAAI.
Inoue, K. and Sakama, C. Oscillating behavior of logic programs. In Correct Reasoning - Essays on Logic-Based AI in Honour of Vladimir Lifschitz 2012, pp. 345–362. Springer.
Janhunen, T. and Niemelä, I. The answer set programming paradigm. AI Mag., 37(3):13–24, 2016.
Janhunen, T., Niemelä, I., Seipel, D., Simons, P., and You, J. Unfolding partiality and disjunctions in stable model semantics. ACM Trans. Comput. Log., 7(1):1–37, 2006.
Kauffman, S. A. Metabolic stability and epigenesis in randomly constructed genetic nets. J. Theor. Biol., 22(3):437–467, 1969.
Khaled, T., Benhamou, B., and Trinh, V.-G. Using answer set programming to deal with Boolean networks and attractor computation: application to gene regulatory networks of cells. Ann. Math. Artif. Intell., 91(5):713–750, 2023.
Klarner, H., Bockmayr, A., and Siebert, H. Computing maximal and minimal trap spaces of Boolean networks. Nat. Comput., 14(4):535–544, 2015.
Lin, F. and You, J.-H. Abduction in logic programming: A new definition and an abductive procedure based on rewriting. Artif. Intell., 140(1-2):175–205, 2002.
Lin, F. and Zhao, X. On odd and even cycles in normal logic programs. In Proc. of AAAI 2004, pp. 80–85. AAAI Press / The MIT Press.
Linke, T. Graph theoretical characterization and computation of answer sets. In Proc. of IJCAI 2001, pp. 641–648. Morgan Kaufmann.
Lloyd, J. W. 1984. Foundations of Logic Programming. Springer Berlin Heidelberg.
Niemelä, I. Logic programs with stable model semantics as a constraint programming paradigm. Ann. Math. Artif. Intell., 25(3-4):241–273, 1999.
Przymusinski, T. C. The well-founded semantics coincides with the three-valued stable semantics. Fundam. Inform., 13(4):445–463, 1990.
Przymusinski, T. C. Well-founded and stationary models of logic programs. Ann. Math. Artif. Intell., 12(3-4):141–187, 1994.
Remy, E., Mossé, B., Chaouiya, C., and Thieffry, D. A description of dynamical graphs associated to elementary regulatory circuits. Bioinf., 19(2):172–178, 2003.
Richard, A. Positive circuits and maximal number of fixed points in discrete dynamical systems. Discret. Appl. Math., 157(15):3281–3288, 2009.
Richard, A. Negative circuits and sustained oscillations in asynchronous automata networks. Adv. Appl. Math., 44(4):378–392, 2010.
Richard, A. Positive and negative cycles in Boolean networks. J. Theor. Biol., 463:67–76, 2019.
Richard, A. and Ruet, P. From kernels in directed graphs to fixed points and negative cycles in Boolean networks. Discret. Appl. Math., 161(7-8):1106–1117, 2013.
Richard, A. and Tonello, E. Attractor separation and signed cycles in asynchronous Boolean networks. Theor. Comput. Sci., 947:113706, 2023.
Saccà, D. and Zaniolo, C. Deterministic and non-deterministic stable models. J. Log. Comput., 7(5):555–579, 1997.
Sato, T. Completed logic programs and their consistency. J. Log. Program., 9(1):33–44, 1990.
Schwab, J. D., Kühlwein, S. D., Ikonomi, N., Kühl, M., and Kestler, H. A. Concepts in Boolean network modeling: What do they all mean? Comput. Struct. Biotechnol. J., 18:571–582, 2020.
Seitzer, J. and Schlipf, J. S. Affordable classes of normal logic programs. In Proc. of LPNMR 1997, pp. 92–111. Springer.
Thomas, R. Boolean formalisation of genetic control circuits. J. Theor. Biol., 42:565–583, 1973.
Trinh, V.-G. and Benhamou, B. Static analysis of logic programs via Boolean networks. CoRR, abs/2407.09015, 2024.
Trinh, V.-G., Benhamou, B., and Paulevé, L. mpbn: a simple tool for efficient edition and analysis of elementary properties of Boolean networks. CoRR, abs/2403.06255, 2024a.
Trinh, V.-G., Benhamou, B., and Risch, V. Graphical analysis of abstract argumentation frameworks via Boolean networks. In Proc. of ICAART 2025a, pp. 745–756.
Trinh, V.-G., Benhamou, B., and Soliman, S. Trap spaces of Boolean networks are conflict-free siphons of their Petri net encoding. Theor. Comput. Sci., 971:114073, 2023.
Trinh, V.-G., Benhamou, B., Soliman, S., and Fages, F. Graphical conditions for the existence, unicity and number of regular models. In Proc. of ICLP 2024b, pp. 175–187.
Trinh, V.-G., Pastva, S., Rozum, J., Park, K. H., and Albert, R. On the number of asynchronous attractors in AND-NOT Boolean networks. arXiv preprint arXiv:2503.19147, 2025b.
Veliz-Cuba, A., Buschur, K., Hamershock, R., Kniss, A., Wolff, E., and Laubenbacher, R. AND-NOT logic framework for steady state analysis of Boolean network models. arXiv preprint arXiv:1211.5633, 2012.
Wu, Y., Caminada, M., and Gabbay, D. M. Complete extensions in argumentation coincide with 3-valued stable models in logic programming. Stud Logica, 93(2-3):383–403, 2009.
You, J. and Yuan, L. A three-valued semantics for deductive databases and logic programs. J. Comput. Syst. Sci., 49(2):334–361, 1994.
You, J. and Yuan, L. On the equivalence of semantics for normal logic programs. J. Log. Program., 22(3):211–222, 1995.