
Bull Math Biol
DOI 10.1007/s11538-013-9860-3

ORIGINAL ARTICLE

The Neural Ring: An Algebraic Tool for Analyzing the Intrinsic Structure of Neural Codes

Carina Curto · Vladimir Itskov · Alan Veliz-Cuba · Nora Youngs

Received: 18 December 2012 / Accepted: 23 May 2013
© Society for Mathematical Biology 2013

C. Curto (✉) · V. Itskov · A. Veliz-Cuba · N. Youngs
Department of Mathematics, University of Nebraska–Lincoln, Lincoln, USA
e-mail: [email protected]

Abstract Neurons in the brain represent external stimuli via neural codes. These codes often arise from stereotyped stimulus-response maps, associating to each neuron a convex receptive field. An important problem confronted by the brain is to infer properties of a represented stimulus space without knowledge of the receptive fields, using only the intrinsic structure of the neural code. How does the brain do this? To address this question, it is important to determine what stimulus space features can—in principle—be extracted from neural codes. This motivates us to define the neural ring and a related neural ideal, algebraic objects that encode the full combinatorial data of a neural code. Our main finding is that these objects can be expressed in a “canonical form” that directly translates to a minimal description of the receptive field structure intrinsic to the code. We also find connections to Stanley–Reisner rings, and use ideas similar to those in the theory of monomial ideals to obtain an algorithm for computing the primary decomposition of pseudo-monomial ideals. This allows us to algorithmically extract the canonical form associated to any neural code, providing the groundwork for inferring stimulus space features from neural activity alone.

Keywords Neural code · Pseudo-monomial ideals

1 Introduction

Building accurate representations of the world is one of the basic functions of the brain. It is well known that when a stimulus is paired with pleasure or pain, an animal quickly learns the association. Animals also learn, however, the (neutral) relationships between stimuli of the same type. For example, a bar held at a 45-degree angle
appears more similar to one held at 50 degrees than to a perfectly vertical one. Upon hearing a triple of distinct pure tones, one seems to fall “in between” the other two. An explored environment is perceived not as a collection of disjoint physical locations, but as a spatial map. In summary, we do not experience the world as a stream of unrelated stimuli; rather, our brains organize different types of stimuli into highly structured stimulus spaces.
The relationship between neural activity and stimulus space structure has, nonetheless, received remarkably little attention. In the field of neural coding, much has been learned about the coding properties of individual neurons by investigating stimulus-response functions, such as place fields (O’Keefe and Dostrovsky 1971; McNaughton et al. 2006), orientation tuning curves (Watkins and Berkley 1974; Ben-Yishai et al. 1995), and other examples of “receptive fields” obtained by measuring neural activity in response to experimentally-controlled stimuli. Moreover, numerous studies have shown that neural activity, together with knowledge of the appropriate stimulus-response functions, can be used to accurately estimate a newly presented stimulus (Brown et al. 1998; Deneve et al. 1999; Ma et al. 2006). This paradigm is being actively extended and revised to include information present in populations of neurons, spurring debates on the role of correlations in neural coding (Nirenberg and Latham 2003; Averbeck et al. 2006; Schneidman et al. 2006a). In each case, however, the underlying structure of the stimulus space is assumed to be known, and is not treated as itself emerging from the activity of neurons. This approach is particularly problematic when one considers that the brain does not have access to stimulus-response functions, and must represent the world without the aid of dictionaries that lend meaning to neural activity (Curto and Itskov 2008). In coding theory parlance, the brain does not have access to the encoding map, and must therefore represent stimulus spaces via the intrinsic structure of the neural code.
How does the brain do this? In order to eventually answer this question, we must first tackle a simpler one:

Question What can be inferred about the underlying stimulus space from neural activity alone? That is, what stimulus space features are encoded in the intrinsic structure of the neural code, and can thus be extracted without knowing the individual stimulus-response functions?

Recently, we have shown that, in the case of hippocampal place cell codes, certain topological features of the animal’s environment can be inferred from the neural code alone, without knowing the place fields (Curto and Itskov 2008). As will be explained in the next section, this information can be extracted from a simplicial complex associated to the neural code. What other stimulus space features can be inferred from the neural code? For this, we turn to algebraic geometry. Algebraic geometry provides a useful framework for inferring geometric and topological characteristics of spaces by associating rings of functions to these spaces. All relevant features of the underlying space are encoded in the intrinsic structure of the ring, where coordinate functions become indeterminates, and the space itself is defined in terms of ideals in the ring. Inferring features of a space from properties of functions—without specified
domains—is similar to the task confronted by the brain, so it is natural to expect that
this framework may shed light on our question.
In this article, we introduce the neural ring, an algebro-geometric object that can be associated to any combinatorial neural code. Much like the simplicial complex of a code, the neural ring encodes information about the underlying stimulus space in a way that discards specific knowledge of receptive field maps, and thus gets closer to the essence of how the brain might represent stimulus spaces. Unlike the simplicial complex, the neural ring retains the full combinatorial data of a neural code, packaging this data in a more computationally tractable manner. We find that this object, together with a closely related neural ideal, can be used to algorithmically extract a compact, minimal description of the receptive field structure dictated by the code. This enables us to more directly tie combinatorial properties of neural codes to features of the underlying stimulus space, a critical step toward answering our motivating question.
Although the use of an algebraic construction such as the neural ring is quite novel in the context of neuroscience, the neural code (as we define it) is at its core a combinatorial object, and there is a rich tradition of associating algebraic objects to combinatorial ones (Miller and Sturmfels 2005). The most well-known example is perhaps the Stanley–Reisner ring (Stanley 2004), which turns out to be closely related to the neural ring. Within mathematical biology, associating polynomial ideals to combinatorial data has also been fruitful. Recent examples include inferring wiring diagrams in gene-regulatory networks (Jarrah et al. 2007; Veliz-Cuba 2012) and applications to chemical reaction networks (Shiu and Sturmfels 2010). Our work also has parallels to the study of design ideals in algebraic statistics (Pistone et al. 2001).
The organization of this paper is as follows. In Sect. 2, we introduce receptive
field codes, and explore how the requirement of convexity enables these codes to
constrain the structure of the underlying stimulus space. In Sect. 3, we define the
neural ring and the neural ideal, and find explicit relations that enable us to compute
these objects for any neural code. Section 4 is the heart of this paper. Here, we present
an alternative set of relations for the neural ring, and demonstrate how they enable
us to “read off” receptive field structure from the neural ideal. We then introduce
pseudo-monomials and pseudo-monomial ideals, by analogy to monomial ideals; this
allows us to define a natural “canonical form” for the neural ideal. Using this, we can
extract minimal relationships among receptive fields that are dictated by the structure
of the neural code. Finally, we present an algorithm for finding the canonical form
of a neural ideal, and illustrate how to use our formalism for inferring receptive field
structure in a detailed example. Section 5 describes the primary decomposition of the
neural ideal and, more generally, of pseudo-monomial ideals. Computing the primary
decomposition of the neural ideal is a critical step in our canonical form algorithm,
and it also yields a natural decomposition of the neural code in terms of intervals of
the Boolean lattice. We end this section with an algorithm for finding the primary
decomposition of any pseudo-monomial ideal, using ideas similar to those in the
theory of square-free monomial ideals. All longer proofs can be found in Appendix 1.
A detailed classification of neural codes on three neurons is given in Appendix 2.

2 Background and Motivation

2.1 Preliminaries

In this section, we introduce the basic objects of study: neural codes, receptive field
codes, and convex receptive field codes. We then discuss various ways in which the
structure of a convex receptive field code can constrain the underlying stimulus space.
These constraints emerge most obviously from the simplicial complex of a neural
code, but (as will be made clear) there are also constraints that arise from aspects
of a neural code’s structure that go well beyond what is captured by the simplicial
complex of the code.
Given a set of neurons labeled {1, . . . , n} =: [n], we define a neural code C ⊂ {0, 1}^n as a set of binary patterns of neural activity. An element of a neural code is called a codeword, c = (c1, . . . , cn) ∈ C, and corresponds to a subset of neurons

supp(c) := {i ∈ [n] | ci = 1} ⊂ [n].

Similarly, the entire code C can be identified with a set of subsets of neurons,

supp C := {supp(c) | c ∈ C} ⊂ 2^[n],

where 2^[n] denotes the set of all subsets of [n]. Because we discard the details of the precise timing and/or rate of neural activity, what we mean by neural code is often referred to in the neural coding literature as a combinatorial code (Schneidman et al. 2006b; Osborne et al. 2008).
A set of subsets Δ ⊂ 2^[n] is an (abstract) simplicial complex if σ ∈ Δ and τ ⊂ σ implies τ ∈ Δ. We will say that a neural code C is a simplicial complex if supp C is a simplicial complex. In cases where the code is not a simplicial complex, we can complete the code to a simplicial complex by simply adding in missing subsets of codewords. This allows us to define the simplicial complex of the code as

Δ(C) := {σ ⊂ [n] | σ ⊆ supp(c) for some c ∈ C}.

Clearly, Δ(C) is the smallest simplicial complex that contains supp C.
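For small codes, these objects are easy to compute directly. The following sketch is our own illustration (not part of the paper), in Python with neurons 0-indexed; it builds supp(c) and Δ(C) by brute force:

```python
# A small sketch (function names ours, neurons 0-indexed) of supp(c) and
# Delta(C) for a code given as 0/1 tuples; brute force is fine for small n.
from itertools import combinations

def supp(c):
    """supp(c) = {i : c_i = 1}."""
    return frozenset(i for i, ci in enumerate(c) if ci == 1)

def simplicial_complex(C):
    """Delta(C): all subsets of supports of codewords in C."""
    faces = set()
    for c in C:
        s = sorted(supp(c))
        for k in range(len(s) + 1):
            faces.update(frozenset(f) for f in combinations(s, k))
    return faces

C = {(1, 1, 0), (0, 1, 1)}      # supp C = {{0,1}, {1,2}}
print(sorted(sorted(f) for f in simplicial_complex(C)))
# [[], [0], [0, 1], [1], [1, 2], [2]] -- smallest complex containing supp C
```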

2.2 Receptive Field Codes (RF Codes)

Neurons in many brain areas have activity patterns that can be characterized by receptive fields.¹ Abstractly, a receptive field is a map fi : X → R≥0 from a space of stimuli, X, to the average firing rate of a single neuron, i, in response to each stimulus. Receptive fields are computed by correlating neural responses to independently measured external stimuli. We follow a common abuse of language, where both the map and its support (i.e., the subset Ui ⊂ X where fi takes on positive values) are referred to as “receptive fields.” Convex receptive fields are convex² subsets of the stimulus space, for X ⊂ R^d. The paradigmatic examples are orientation-selective neurons in visual cortex (Watkins and Berkley 1974; Ben-Yishai et al. 1995) and hippocampal place cells (O’Keefe and Dostrovsky 1971; McNaughton et al. 2006). Orientation-selective neurons have tuning curves that reflect a neuron’s preference for a particular angle (see Fig. 1A). Place cells are neurons that have place fields; i.e., each neuron has a preferred (convex) region of the animal’s physical environment where it has a high firing rate (see Fig. 1B). Both tuning curves and place fields are examples of receptive fields.

¹ In the vision literature, the term “receptive field” is reserved for subsets of the visual field; we use the term in a more general sense, applicable to any modality.

Fig. 1 Receptive field overlaps determine codewords in 1D and 2D RF codes. (A) Neurons in a 1D RF code have receptive fields that overlap on a line segment (or circle, in the case of orientation-tuning). Each stimulus on the line corresponds to a binary codeword. Gaussians depict graded firing rates for neural responses; this additional information is discarded by the RF code. (B) Neurons in a 2D RF code, such as a place field code, have receptive fields that partition a two-dimensional stimulus space into nonoverlapping regions, as illustrated by the shaded area. All stimuli within one of these regions will activate the same set of neurons, and hence have the same corresponding codeword
A receptive field code (RF code) is a neural code that corresponds to the brain’s representation of the stimulus space covered by the receptive fields. When a stimulus lies in the intersection of several receptive fields, the corresponding neurons may cofire while the rest remain silent. The active subset σ of neurons can be identified with a binary codeword c ∈ {0, 1}^n via σ = supp(c). Unless otherwise noted, a stimulus space X need only be a topological space. However, we usually have in mind X ⊂ R^d, and this becomes important when we consider convex RF codes.

Definition Let X be a stimulus space (e.g., X ⊂ R^d), and let U = {U1, . . . , Un} be a collection of open sets, with each Ui ⊂ X the receptive field of the ith neuron in a population of n neurons. The receptive field code (RF code) C(U) ⊂ {0, 1}^n is the set of all binary codewords corresponding to stimuli in X:

C(U) := { c ∈ {0, 1}^n | (⋂_{i∈supp(c)} Ui) \ (⋃_{j∉supp(c)} Uj) ≠ ∅ }.

If X ⊂ R^d and each of the Ui s is also a convex subset of X, then we say that C(U) is a convex RF code.

² A subset B ⊂ R^n is convex if, given any pair of points x, y ∈ B, the point z = tx + (1 − t)y is contained in B for any t ∈ [0, 1].

Our convention is that the empty intersection is ⋂_{i∈∅} Ui = X, and the empty union is ⋃_{i∈∅} Ui = ∅. This means that if ⋃_{i=1}^n Ui ⊊ X, then C(U) includes the all-zeros codeword corresponding to an “outside” point not covered by the receptive fields; on the other hand, if ⋂_{i=1}^n Ui ≠ ∅, then C(U) includes the all-ones codeword. Figure 1 shows examples of convex receptive fields covering one- and two-dimensional stimulus spaces, and examples of codewords corresponding to regions defined by the receptive fields.
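The definition can also be made computational. Here is a minimal sketch (ours), assuming the stimulus space is discretized to finitely many sample points and each Ui is given by a membership test:

```python
# A sketch of computing C(U) from a finite discretization of X: each sampled
# stimulus p contributes the codeword recording which U_i contain it.
def sample_code(X, U):
    return {tuple(int(p in Ui) for Ui in U) for p in X}

# 1-D example with three nested open intervals, as in Fig. 3C, with an
# integer grid standing in for the segment X:
X = set(range(100))
U = [{p for p in X if 40 < p < 60},   # U1
     {p for p in X if 30 < p < 70},   # U2, contains U1
     {p for p in X if 20 < p < 80}]   # U3, contains U2
print(sorted(sample_code(X, U)))
# [(0,0,0), (0,0,1), (0,1,1), (1,1,1)], i.e. the code {000, 001, 011, 111}
```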
Returning to our discussion in the Introduction, we have the following question: If we can assume C = C(U) is a RF code, then what can be learned about the underlying stimulus space X from knowledge only of C, and not of U? The answer to this question will depend critically on whether or not we can assume that the RF code is convex. In particular, if we do not assume convexity of the receptive fields, then any code can be realized as a RF code in any dimension.

Lemma 2.1 Let C ⊂ {0, 1}^n be a neural code. Then, for any d ≥ 1, there exists a stimulus space X ⊂ R^d and a collection of open sets U = {U1, . . . , Un} (not necessarily convex), with Ui ⊂ X for each i ∈ [n], such that C = C(U).

Proof Let C ⊂ {0, 1}^n be any neural code, and order the elements of C as {c1, . . . , cm}, where m = |C|. For each c ∈ C, choose a distinct point xc ∈ R^d and an open neighborhood Nc of xc such that no two neighborhoods intersect. Define Uj := ⋃_{{k | j∈supp(ck)}} Nck, let U = {U1, . . . , Un}, and X = ⋃_{i=1}^m Nci. Observe that if the all-zeros codeword is in C, then N0 = X \ ⋃_{i=1}^n Ui corresponds to the “outside point” not covered by any of the Ui s. By construction, C = C(U). □
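The construction in this proof is explicit enough to code directly. Here is a one-dimensional instance (d = 1) as a hedged sketch, with each codeword ck assigned the open interval (k, k + 1); the function name is ours:

```python
# A 1-D instance (d = 1) of the construction in the proof: codeword c_k is
# assigned the open interval N_k = (k, k+1), and U_j is the union of the
# intervals of codewords with neuron j active.
def realize_as_rf_code(C):
    C = sorted(C)
    n = len(C[0])
    return [[(k, k + 1) for k, c in enumerate(C) if c[j] == 1]
            for j in range(n)]        # U[j]: a list of disjoint open intervals

# Every point of N_k receives exactly the codeword c_k, so sampling one point
# per interval (the midpoint k + 0.5) recovers C:
C = [(0, 0, 0), (1, 1, 0), (0, 1, 1)]
U = realize_as_rf_code(C)
recovered = {tuple(int(any(a < k + 0.5 < b for a, b in Uj)) for Uj in U)
             for k in range(len(C))}
assert recovered == set(C)
```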

Although any neural code C ⊆ {0, 1}n can be realized as a RF code, it is not true
that any code can be realized as a convex RF code. Counterexamples can be found in
codes having as few as three neurons.

Lemma 2.2 The neural code C = {0, 1}^3 \ {111, 001} on three neurons cannot be realized as a convex RF code.

Proof Suppose to the contrary that such a realization exists, and let U = {U1, U2, U3} be a set of convex open sets in R^d such that C = C(U). The code necessitates that U1 ∩ U2 ≠ ∅ (since 110 ∈ C), (U1 ∩ U3) \ U2 ≠ ∅ (since 101 ∈ C), and (U2 ∩ U3) \ U1 ≠ ∅ (since 011 ∈ C). Let p1 ∈ (U1 ∩ U3) \ U2 and p2 ∈ (U2 ∩ U3) \ U1. Since p1, p2 ∈ U3 and U3 is convex, the line segment ℓ = {(1 − t)p1 + tp2 | t ∈ [0, 1]} must also be contained in U3. There are just two possibilities. Case 1: ℓ passes through U1 ∩ U2 (see Fig. 2, left). This implies U1 ∩ U2 ∩ U3 ≠ ∅, and hence 111 ∈ C, a contradiction. Case 2: ℓ does not intersect U1 ∩ U2. Since U1, U2 are open sets, this implies ℓ passes outside of U1 ∪ U2 (see Fig. 2, right), and hence 001 ∈ C, a contradiction. □
Fig. 2 Two cases in the proof of Lemma 2.2

2.3 Stimulus Space Constraints Arising from Convex RF Codes

It is clear from Lemma 2.1 that there is essentially no constraint on the stimulus space for realizing a code as a RF code. However, if we demand that C is a convex RF code, then the overlap structure of the Ui s sharply constrains the geometric and topological properties of the underlying stimulus space X. To see how this works, we first consider the simplicial complex of a neural code, Δ(C). Classical results in convex geometry and topology provide constraints on the underlying stimulus space X for convex RF codes, based on the structure of Δ(C). We will discuss these next. We then turn to the question of constraints that arise from combinatorial properties of a neural code C that are not captured by Δ(C).

2.3.1 Helly’s Theorem and the Nerve Theorem

Here, we briefly review two classical and well-known theorems in convex geometry and topology, Helly’s theorem and the Nerve theorem, as they apply to convex RF codes. Both theorems can be used to relate the structure of the simplicial complex of a code, Δ(C), to topological features of the underlying stimulus space X.
Suppose U = {U1, . . . , Un} is a finite collection of convex open subsets of R^d, with dimension d < n. We can associate to U a simplicial complex N(U) called the nerve of U. A subset {i1, . . . , ik} ⊂ [n] belongs to N(U) if and only if the corresponding intersection ⋂_{l=1}^k Uil is nonempty. If we think of the Ui s as receptive fields, then N(U) = Δ(C(U)). In other words, the nerve of the cover corresponds to the simplicial complex of the associated (convex) RF code.

Helly’s Theorem Consider k convex subsets, U1, . . . , Uk ⊂ R^d, for d < k. If the intersection of every d + 1 of these sets is nonempty, then the full intersection ⋂_{i=1}^k Ui is also nonempty.

A nice exposition of this theorem and its consequences can be found in Danzer et al. (1963). One straightforward consequence is that the nerve N(U) is completely determined by its d-skeleton, and corresponds to the largest simplicial complex with that d-skeleton. For example, if d = 1, then N(U) is a clique complex (fully determined by its underlying graph). Since N(U) = Δ(C(U)), Helly’s theorem imposes constraints on the minimal dimension of the stimulus space X when C = C(U) is assumed to be a convex RF code.
Nerve Theorem The homotopy type of X(U) := ⋃_{i=1}^n Ui is equal to the homotopy type of the nerve of the cover, N(U). In particular, X(U) and N(U) have exactly the same homology groups.
Fig. 3 Four arrangements of three convex receptive fields, U = {U1, U2, U3}, each having Δ(C(U)) = 2^[3]. Square boxes denote the stimulus space X in cases where U1 ∪ U2 ∪ U3 ⊊ X. (A) C(U) = 2^[3], including the all-zeros codeword 000. (B) C(U) = {111, 101, 011, 001}, with X = U3. (C) C(U) = {111, 011, 001, 000}. (D) C(U) = {111, 101, 011, 110, 100, 010}, and X = U1 ∪ U2. The minimal embedding dimension for the codes in panels (A) and (D) is d = 2, while for panels (B) and (C) it is d = 1

The Nerve theorem is an easy consequence of Hatcher (2002, Corollary 4G.3). This is a powerful theorem relating the simplicial complex of a RF code, Δ(C(U)) = N(U), to topological features of the underlying space, such as homology groups and other homotopy invariants. Note, however, that the similarities between X(U) and N(U) only go so far. In particular, X(U) and N(U) typically have very different dimension. It is also important to keep in mind that the Nerve theorem concerns the topology of X(U) = ⋃_{i=1}^n Ui. In our setup, if the stimulus space X is larger, so that ⋃_{i=1}^n Ui ⊊ X, then the Nerve theorem tells us only about the homotopy type of X(U), not of X. Since the Ui are open sets, however, conclusions about the dimension of X can still be inferred.
In addition to Helly’s theorem and the Nerve theorem, there is a great deal known about Δ(C(U)) = N(U) for collections of convex sets in R^d. In particular, the f-vectors of such simplicial complexes have been completely characterized by Kalai (1984, 1986).

2.3.2 Beyond the Simplicial Complex of the Neural Code

We have just seen how the simplicial complex of a neural code, Δ(C), yields constraints on the stimulus space X if we assume C can be realized as a convex RF code. The example described in Lemma 2.2, however, implies that other kinds of constraints on X may emerge from the combinatorial structure of a neural code, even if there is no obstruction stemming from Δ(C).
In Fig. 3, we show four possible arrangements of three convex receptive fields in the plane. Each convex RF code has the same corresponding simplicial complex Δ(C) = 2^[3], since 111 ∈ C for each code. Nevertheless, the arrangements clearly have different combinatorial properties. In Fig. 3C, for instance, we have U1 ⊂ U2 ⊂ U3, while Fig. 3A has no special containment relationships among the receptive fields. This “receptive field structure” (RF structure) of the code has implications for the underlying stimulus space.

Let d be the minimal integer for which the code can be realized as a convex RF code in R^d; we will refer to this as the minimal embedding dimension of C. Note that the codes in Fig. 3A, D have d = 2, whereas the codes in Fig. 3B, C have d = 1. The simplicial complex, Δ(C), is thus not sufficient to determine the minimal embedding dimension of a convex RF code, but this information is somehow present in the RF structure of the code. Similarly, in Lemma 2.2 we saw that Δ(C) does not provide sufficient information to determine whether or not C can be realized as a convex RF code; after working out the RF structure, however, it was easy to see that the given code was not realizable.

2.3.3 The Receptive Field Structure (RF Structure) of a Neural Code

As we have just seen, the intrinsic structure of a neural code contains information about the underlying stimulus space that cannot be inferred from the simplicial complex of the code alone. This information is, however, present in what we have loosely referred to as the “RF structure” of the code. We now explain more carefully what we mean by this term.
Given a set of receptive fields U = {U1, . . . , Un} in a stimulus space X, there are certain containment relations between intersections and unions of the Ui s that are “obvious,” and carry no information about the particular arrangement in question. For example, U1 ∩ U2 ⊆ U2 ∪ U3 ∪ U4 is always guaranteed to be true, because it follows from U2 ⊆ U2. On the other hand, a relationship such as U3 ⊆ U1 ∪ U2 (as in Fig. 3D) is not always present, and thus reflects something about the structure of a particular receptive field arrangement.
Let C ⊂ {0, 1}^n be a neural code, and let U = {U1, . . . , Un} be any arrangement of receptive fields in a stimulus space X such that C = C(U) (this is guaranteed to exist by Lemma 2.1). The RF structure of C refers to the set of relations among the Ui s that are not “obvious,” and have the form:

⋂_{i∈σ} Ui ⊆ ⋃_{j∈τ} Uj, for σ ∩ τ = ∅.

In particular, this includes any empty intersections ⋂_{i∈σ} Ui = ∅ (here τ = ∅). In the Fig. 3 examples, the panel A code has no RF structure relations; panel B has U1 ⊂ U3 and U2 ⊂ U3; panel C has U1 ⊂ U2 ⊂ U3; and panel D has U3 ⊂ U1 ∪ U2.
The central goal of this paper is to develop a method to algorithmically extract a minimal description of the RF structure directly from a neural code C, without first realizing it as C(U) for some arrangement of receptive fields. We view this as a first step toward inferring stimulus space features that cannot be obtained from the simplicial complex Δ(C). To do this we turn to an algebro-geometric framework, that of neural rings and ideals. These objects are defined in Sect. 3 so as to capture the full combinatorial data of a neural code, but in a way that allows us to naturally and algorithmically infer a compact description of the desired RF structure, as shown in Sect. 4.

3 Neural Rings and Ideals

In this section, we define the neural ring RC and a closely related neural ideal, JC. First, we briefly review some basic algebraic geometry background needed throughout this paper.

3.1 Basic Algebraic Geometry Background

The following definitions are standard (see, for example, Cox et al. 1997).

Rings and Ideals Let R be a commutative ring. A subset I ⊆ R is an ideal of R if it has the following properties:
(i) I is a subgroup of R under addition.
(ii) If a ∈ I, then ra ∈ I for all r ∈ R.
An ideal I is said to be generated by a set A, and we write I = ⟨A⟩, if

I = {r1 a1 + · · · + rn an | ai ∈ A, ri ∈ R, and n ∈ N}.

In other words, I is the set of all finite combinations of elements of A with coefficients in R.
An ideal I ⊂ R is proper if I ⊊ R. An ideal I ⊂ R is prime if it is proper and satisfies: if rs ∈ I for some r, s ∈ R, then r ∈ I or s ∈ I. An ideal m ⊂ R is maximal if it is proper and for any ideal I such that m ⊆ I ⊆ R, either I = m or I = R. An ideal I ⊂ R is radical if r^n ∈ I implies r ∈ I, for any r ∈ R and n ∈ N. An ideal I ⊂ R is primary if rs ∈ I implies r ∈ I or s^n ∈ I for some n ∈ N. A primary decomposition of an ideal I expresses I as an intersection of finitely many primary ideals.

Ideals and Varieties Let k be a field, n the number of neurons, and k[x1, . . . , xn] a polynomial ring with one indeterminate xi for each neuron. We will consider k^n to be the neural activity space, where each point v = (v1, . . . , vn) ∈ k^n is a vector tracking the state vi of each neuron. Note that any polynomial f ∈ k[x1, . . . , xn] can be evaluated at a point v ∈ k^n by setting xi = vi each time xi appears in f. We will denote this value f(v).
Let J ⊂ k[x1, . . . , xn] be an ideal, and define the variety

V(J) := {v ∈ k^n | f(v) = 0 for all f ∈ J}.

Similarly, given a subset S ⊂ k^n, we can define the ideal of functions that vanish on this subset as

I(S) := {f ∈ k[x1, . . . , xn] | f(v) = 0 for all v ∈ S}.

The ideal-variety correspondence (Cox et al. 1997) gives us the usual order-reversing relationships: I ⊆ J ⇒ V(J) ⊆ V(I), and S ⊆ T ⇒ I(T) ⊆ I(S). Furthermore, V(I(V)) = V for any variety V, but it is not always true that I(V(J)) = J for an ideal J (see Sect. 6.1). We will regard neurons as having only two states, “on” or “off,” and thus choose k = F2 = {0, 1}.

3.2 Definition of the Neural Ring

Let C ⊂ {0, 1}^n = F2^n be a neural code, and define the ideal IC of F2[x1, . . . , xn] corresponding to the set of polynomials that vanish on all codewords in C:

IC := I(C) = {f ∈ F2[x1, . . . , xn] | f(c) = 0 for all c ∈ C}.

By design, V(IC) = C and hence I(V(IC)) = IC. Note that the ideal generated by the Boolean relations,

B := ⟨x1² − x1, . . . , xn² − xn⟩,

is automatically contained in IC, irrespective of C.
The neural ring RC corresponding to the code C is the quotient ring

RC := F2[x1, . . . , xn]/IC,

together with the set of indeterminates x1 , . . . , xn . We say that two neural rings are
equivalent if there is a bijection between the sets of indeterminates that yields a ring
homomorphism.

Remark Due to the Boolean relations, any element y ∈ RC satisfies y² = y (cross-terms vanish because 2 = 0 in F2), so the neural ring is a Boolean ring isomorphic to F2^|C|. It is important to keep in mind, however, that RC comes equipped with a privileged set of functions, x1, . . . , xn; this allows the ring to keep track of considerably more structure than just the size of the neural code.

3.3 The Spectrum of the Neural Ring

We can think of RC as the ring of functions of the form f : C → F2 on the neural code, where each function assigns a 0 or 1 to each codeword c ∈ C by evaluating f ∈ F2[x1, . . . , xn]/IC through the substitutions xi = ci for i = 1, . . . , n. Quotienting the original polynomial ring by IC ensures that there is only one zero function in RC.
The spectrum of the neural ring, Spec(RC ), consists of all prime ideals in RC . We
will see shortly that the elements of Spec(RC ) are in one-to-one correspondence with
the elements of the neural code C. Indeed, our definition of RC was designed for this
to be true.
For any point v ∈ {0, 1}^n of the neural activity space, let

mv := I(v) = {f ∈ F2[x1, . . . , xn] | f(v) = 0}

be the maximal ideal of F2[x1, . . . , xn] consisting of all functions that vanish on v. We can also write mv = ⟨x1 − v1, . . . , xn − vn⟩ (see Lemma 6.3 in Sect. 6.1). Using this, we can characterize the spectrum of the neural ring.

Lemma 3.1 Spec(RC) = {m̄v | v ∈ C}, where m̄v is the quotient of mv in RC.

The proof is given in Sect. 6.1. Note that because RC is a Boolean ring, the maximal ideal spectrum and the prime ideal spectrum coincide.

3.4 The Neural Ideal and an Explicit Set of Relations for the Neural Ring

The definition of the neural ring is rather impractical, as it does not give us explicit
relations for generating IC and RC . Here, we define another ideal, JC , via an explicit
set of generating relations. Although JC is closely related to IC , it turns out that JC is
a more convenient object to study, which is why we will use the term neural ideal to
refer to JC rather than IC .
For any v ∈ {0, 1}^n, consider the function ρv ∈ F2[x1, . . . , xn] defined as

ρv := ∏_{i=1}^n (1 − vi − xi) = ∏_{{i | vi=1}} xi ∏_{{j | vj=0}} (1 − xj) = ∏_{i∈supp(v)} xi ∏_{j∉supp(v)} (1 − xj).

Note that ρv(x) can be thought of as a characteristic function for v, since it satisfies ρv(v) = 1 and ρv(x) = 0 for any other x ∈ F2^n. Now consider the ideal JC ⊆ F2[x1, . . . , xn] generated by all functions ρv, for v ∉ C:

JC := ⟨{ρv | v ∉ C}⟩.

We call JC the neural ideal corresponding to the neural code C. If C = {0, 1}^n is the complete code, we simply set JC = 0, the zero ideal. JC is related to IC as follows, giving us explicit relations for the neural ring.

Lemma 3.2 Let C ⊂ {0, 1}^n be a neural code. Then

IC = JC + B = ⟨{ρv | v ∉ C}, {xi(1 − xi) | i ∈ [n]}⟩,

where B = ⟨{xi(1 − xi) | i ∈ [n]}⟩ is the ideal generated by the Boolean relations, and JC is the neural ideal.

The proof is given in Sect. 6.1.
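The defining property of ρv, and the resulting generating set of JC, are easy to check numerically. A minimal sketch (ours, not the authors'), working over F2 via arithmetic mod 2:

```python
# A numeric check, over F2 (arithmetic mod 2), that rho_v is the
# characteristic function of v; function names are ours.
from itertools import product

def rho(v, x):
    """rho_v(x) = prod_i (1 - v_i - x_i) mod 2; equals 1 iff x = v."""
    out = 1
    for vi, xi in zip(v, x):
        out = out * (1 - vi - xi) % 2
    return out

v = (1, 0, 1)
assert all(rho(v, x) == int(x == v) for x in product((0, 1), repeat=3))

# The generators of the neural ideal J_C are indexed by the non-codewords:
C = {(0, 0, 0), (1, 0, 0), (1, 1, 0)}
non_codewords = [v for v in product((0, 1), repeat=3) if v not in C]
assert len(non_codewords) == 2**3 - len(C)   # one generator rho_v per v not in C
```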

4 How to Infer RF Structure Using the Neural Ideal

This section is the heart of the paper. We begin by presenting an alternative set of relations that can be used to define the neural ring. These relations enable us to easily interpret elements of IC as receptive field relationships, clarifying the connection between the neural ring and ideal and the RF structure of the code. We next introduce pseudo-monomials and pseudo-monomial ideals, and use these notions to obtain a minimal description of the neural ideal, which we call the “canonical form.” Theorem 4.3 enables us to use the canonical form of JC in order to “read off” a minimal description of the RF structure of the code. Finally, we present an algorithm that inputs a neural code C and outputs the canonical form CF(JC), and we illustrate its use in a detailed example.

4.1 An Alternative Set of Relations for the Neural Ring

Let C ⊂ {0, 1}^n be a neural code, and recall by Lemma 2.1 that C can always be realized as a RF code C = C(U), provided we don’t require the Ui s to be convex. Let X be a stimulus space and U = {Ui}_{i=1}^n a collection of open sets in X, and consider the RF code C(U). The neural ring corresponding to this code is RC(U).
Observe that the functions f ∈ RC(U) can be evaluated at any point p ∈ X by assigning

xi(p) = 1 if p ∈ Ui, and xi(p) = 0 if p ∉ Ui,

each time xi appears in the polynomial f. The vector (x1(p), . . . , xn(p)) ∈ {0, 1}^n represents the neural response to the stimulus p. Note that if p ∉ ⋃_{i=1}^n Ui, then (x1(p), . . . , xn(p)) = (0, . . . , 0) is the all-zeros codeword. For any σ ⊂ [n], define

Uσ := ⋂_{i∈σ} Ui and xσ := ∏_{i∈σ} xi.

Our convention is that x∅ = 1 and U∅ = X, even in cases where X ⊋ ⋃_{i=1}^n Ui. Note that for any p ∈ X,

xσ(p) = 1 if p ∈ Uσ, and xσ(p) = 0 if p ∉ Uσ.
The relations in IC(U) encode the combinatorial data of U. For example, if Uσ = ∅, then we cannot have xσ = 1 at any point of the stimulus space X, and must therefore impose the relation xσ to “knock off” those points. On the other hand, if Uσ ⊂ Ui ∪ Uj, then xσ = 1 implies either xi = 1 or xj = 1, something that is guaranteed by imposing the relation xσ(1 − xi)(1 − xj). These observations lead us to an alternative ideal, IU ⊂ F2[x1, . . . , xn], defined directly from the arrangement of receptive fields U = {U1, . . . , Un}:

IU := ⟨{ xσ ∏_{i∈τ}(1 − xi) | Uσ ⊆ ⋃_{i∈τ} Ui }⟩.

Note that if τ = ∅, we only get a relation for Uσ = ∅, and this is xσ. If σ = ∅, then Uσ = X, and we only get relations of this type if X is contained in the union of the Ui s. This is equivalent to the requirement that there is no “outside point” corresponding to the all-zeros codeword.
Perhaps unsurprisingly, it turns out that IU and IC(U) exactly coincide, so IU provides an alternative set of relations that can be used to define RC(U).

Theorem 4.1 IU = IC(U).

The proof is given in Sect. 6.2.

4.2 Interpreting Neural Ring Relations as Receptive Field Relationships

Theorem 4.1 suggests that we can interpret elements of IC in terms of relationships between receptive fields.

Lemma 4.2 Let C ⊂ {0, 1}^n be a neural code, and let U = {U1, . . . , Un} be any collection of open sets (not necessarily convex) in a stimulus space X such that C = C(U). Then, for any pair of subsets σ, τ ⊂ [n],

xσ ∏_{i∈τ}(1 − xi) ∈ IC ⇔ Uσ ⊆ ⋃_{i∈τ} Ui.

Proof (⇐) This is a direct consequence of Theorem 4.1. (⇒) We distinguish two cases, based on whether or not σ and τ intersect. If xσ ∏_{i∈τ}(1 − xi) ∈ IC and σ ∩ τ ≠ ∅, then xσ ∏_{i∈τ}(1 − xi) ∈ B, where B = ⟨{xi(1 − xi) | i ∈ [n]}⟩ is the ideal generated by the Boolean relations. Consequently, the relation does not give us any information about the code, and Uσ ⊆ ⋃_{i∈τ} Ui follows trivially from the observation that Ui ⊆ Ui for any i ∈ σ ∩ τ. If, on the other hand, xσ ∏_{i∈τ}(1 − xi) ∈ IC and σ ∩ τ = ∅, then ρv ∈ IC for each v ∈ {0, 1}^n such that supp(v) ⊇ σ and supp(v) ∩ τ = ∅, since each such ρv is a multiple of xσ ∏_{i∈τ}(1 − xi). Since ρv(v) = 1, it follows that v ∉ C for any v with supp(v) ⊇ σ and supp(v) ∩ τ = ∅. To see this, recall from the original definition of IC that for all c ∈ C, f(c) = 0 for any f ∈ IC; it follows that ρv(c) = 0 for all c ∈ C. Because C = C(U), the fact that v ∉ C for any v such that supp(v) ⊇ σ and supp(v) ∩ τ = ∅ implies ⋂_{i∈σ} Ui \ ⋃_{j∈τ} Uj = ∅. We can thus conclude that Uσ ⊆ ⋃_{j∈τ} Uj. □

Lemma 4.2 allows us to extract RF structure from the different types of relations that appear in IC:
• Boolean relations: {xi(1 − xi)}. The relation xi(1 − xi) corresponds to Ui ⊆ Ui, which does not contain any information about the code C.
• Type 1 relations: {xσ}. The relation xσ corresponds to Uσ = ∅.
• Type 2 relations: {xσ ∏_{i∈τ}(1 − xi) | σ, τ ≠ ∅, σ ∩ τ = ∅, Uσ ≠ ∅, and ⋃_{i∈τ} Ui ≠ X}. The relation xσ ∏_{i∈τ}(1 − xi) corresponds to Uσ ⊆ ⋃_{i∈τ} Ui.
• Type 3 relations: {∏_{i∈τ}(1 − xi)}. The relation ∏_{i∈τ}(1 − xi) corresponds to X ⊆ ⋃_{i∈τ} Ui.
The somewhat complicated requirements on the Type 2 relations ensure that they do not include polynomials that are multiples of Type 1, Type 3, or Boolean relations. Note that the constant polynomial 1 may appear as both a Type 1 and a Type 3 relation, but only if X = ∅. The four types of relations listed above are otherwise disjoint. Type 3 relations only appear if X is fully covered by the receptive fields, so that there is no all-zeros codeword corresponding to an “outside” point.
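This dictionary between relations and set containments is mechanical enough to automate. A small sketch (ours), encoding a pseudo-monomial as a pair of 0-indexed index sets (σ, τ); the Boolean relations are excluded here because σ ∩ τ = ∅ for a pseudo-monomial:

```python
# Reading off the RF relationship of a pseudo-monomial x_sigma * prod_{i in tau}(1 - x_i),
# following the Type 1/2/3 classification above; sigma, tau are sets of
# 0-indexed neurons, so {0, 2} stands for the paper's {1, 3}. Names are ours.
def rf_relationship(sigma, tau):
    S, T = sorted(sigma), sorted(tau)
    if not tau:
        return f"Type 1: intersection of U_i, i in {S}, is empty"
    if not sigma:
        return f"Type 3: X is covered by the union of U_i, i in {T}"
    return f"Type 2: intersection of U_i, i in {S}, lies in union of U_j, j in {T}"

print(rf_relationship({0, 2}, set()))  # x1 x3          =>  U1 ∩ U3 = ∅
print(rf_relationship({2}, {0, 1}))    # x3(1-x1)(1-x2) =>  U3 ⊆ U1 ∪ U2, as in Fig. 3D
```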
Not all elements of IC are one of the above types, of course, but we will see that these are sufficient to generate IC. This follows from the observation (see Lemma 6.6) that the neural ideal JC is generated by the Type 1, Type 2, and Type 3 relations, and recalling that IC is obtained from JC by adding in the Boolean relations (Lemma 3.2). At the same time, not all of these relations are necessary to generate the neural ideal. Can we eliminate redundant relations to come up with a “minimal” list of generators for JC, and hence IC, that captures the essential RF structure of the code? This is the goal of the next section.

4.3 Pseudo-monomials and a Canonical Form for the Neural Ideal

The Type 1, Type 2, and Type 3 relations are all products of linear terms of the form xi and 1 − xi, and are thus very similar to monomials. By analogy with square-free monomials and square-free monomial ideals (Miller and Sturmfels 2005), we define the notions of pseudo-monomials and pseudo-monomial ideals. Note that we do not allow repeated indices in our definition of pseudo-monomial, so the Boolean relations are explicitly excluded.

Definition If f ∈ F2[x1, . . . , xn] has the form f = ∏_{i∈σ} xi ∏_{j∈τ}(1 − xj) for some σ, τ ⊂ [n] with σ ∩ τ = ∅, then we say that f is a pseudo-monomial.

Definition An ideal J ⊂ F2[x1, . . . , xn] is a pseudo-monomial ideal if J can be generated by a finite set of pseudo-monomials.

Definition Let J ⊂ F2[x1, . . . , xn] be an ideal, and f ∈ J a pseudo-monomial. We say that f is a minimal pseudo-monomial of J if there does not exist another pseudo-monomial g ∈ J with deg(g) < deg(f) such that f = hg for some h ∈ F2[x1, . . . , xn].

By considering the set of all minimal pseudo-monomials in a pseudo-monomial ideal J, we obtain a unique and compact description of J, which we call the “canonical form” of J.

Definition We say that a pseudo-monomial ideal J is in canonical form if we present it as J = ⟨f1, . . . , fl⟩, where the set CF(J) := {f1, . . . , fl} is the set of all minimal pseudo-monomials of J. Equivalently, we refer to CF(J) as the canonical form of J.

Clearly, for any pseudo-monomial ideal J ⊂ F2[x1, . . . , xn], CF(J) is unique and J = ⟨CF(J)⟩. On the other hand, it is important to keep in mind that although CF(J) consists of minimal pseudo-monomials, it is not necessarily a minimal set of generators for J. To see why, consider the pseudo-monomial ideal J = ⟨x1(1 − x2), x2(1 − x3)⟩. This ideal in fact contains a third minimal pseudo-monomial: x1(1 − x3) = (1 − x3) · [x1(1 − x2)] + x1 · [x2(1 − x3)]. It follows that CF(J) = {x1(1 − x2), x2(1 − x3), x1(1 − x3)}, but clearly we can remove x1(1 − x3) from this set and still generate J.
For any code C, the neural ideal JC is a pseudo-monomial ideal because JC = ⟨{ρv | v ∉ C}⟩, and each of the ρv s is a pseudo-monomial. (In contrast, IC is rarely a pseudo-monomial ideal, because it is typically necessary to include the Boolean relations as generators.) Theorem 4.3 describes the canonical form of JC. In what follows, we say that σ ⊆ [n] is minimal with respect to property P if σ satisfies P, but P is not satisfied for any τ ⊊ σ. For example, if Uσ = ∅ and for all τ ⊊ σ we have Uτ ≠ ∅, then we say that “σ is minimal w.r.t. Uσ = ∅.”

Theorem 4.3 Let C ⊂ {0, 1}^n be a neural code, and let U = {U1, . . . , Un} be any collection of open sets (not necessarily convex) in a nonempty stimulus space X such that C = C(U). The canonical form of JC is:

JC = ⟨ {xσ | σ is minimal w.r.t. Uσ = ∅},
       {xσ ∏_{i∈τ}(1 − xi) | σ, τ ≠ ∅, σ ∩ τ = ∅, Uσ ≠ ∅, ⋃_{i∈τ} Ui ≠ X, and σ, τ are each minimal w.r.t. Uσ ⊆ ⋃_{i∈τ} Ui},
       {∏_{i∈τ}(1 − xi) | τ is minimal w.r.t. X ⊆ ⋃_{i∈τ} Ui} ⟩.

We call the above three (disjoint) sets of relations comprising CF(JC) the minimal Type 1 relations, the minimal Type 2 relations, and the minimal Type 3 relations, respectively.

The proof is given in Sect. 6.3. Note that, because of the uniqueness of the canonical form, if we are given CF(JC) then Theorem 4.3 allows us to read off the corresponding (minimal) relationships that must be satisfied by any receptive field representation of the code as C = C(U):
• Type 1: xσ ∈ CF(JC) implies that Uσ = ∅, but all lower-order intersections Uγ with γ ⊊ σ are nonempty.
• Type 2: xσ ∏_{i∈τ}(1 − xi) ∈ CF(JC) implies that Uσ ⊆ ⋃_{i∈τ} Ui, but no lower-order intersection is contained in ⋃_{i∈τ} Ui, and all the Ui s are necessary for Uσ ⊆ ⋃_{i∈τ} Ui.
• Type 3: ∏_{i∈τ}(1 − xi) ∈ CF(JC) implies that X ⊆ ⋃_{i∈τ} Ui, but X is not contained in any lower-order union ⋃_{i∈γ} Ui for γ ⊊ τ.
The canonical form CF(JC) thus provides a minimal description of the RF structure dictated by the code C.
The Type 1 relations in CF(JC) can be used to obtain a (crude) lower bound on the minimal embedding dimension of the neural code, as defined in Sect. 2.3.2. Recall Helly’s theorem (Sect. 2.3.1), and observe that if xσ ∈ CF(JC) then σ is minimal with respect to Uσ = ∅; this in turn implies that |σ| ≤ d + 1. (If |σ| > d + 1, then by minimality every d + 1 of the sets {Ui}_{i∈σ} have nonempty intersection, and Helly’s theorem would force Uσ ≠ ∅, a contradiction.) We can thus obtain a lower bound on the minimal embedding dimension d as

d ≥ max_{{σ | xσ ∈ CF(JC)}} |σ| − 1,

where the maximum is taken over all σ such that xσ is a Type 1 relation in CF(JC). This bound only depends on Δ(C), however, and does not provide any insight regarding the different minimal embedding dimensions observed in the examples of Fig. 3. These codes have no Type 1 relations in their canonical forms, but they are nicely differentiated by their minimal Type 2 and Type 3 relations. From the receptive field arrangements depicted in Fig. 3, we can easily write down CF(JC) for each of these codes.
A. CF(JC) = {0}. There are no relations here because C = 2^[3].
B. CF(JC) = {1 − x3}. This Type 3 relation reflects the fact that X = U3.
C. CF(JC) = {x1(1 − x2), x2(1 − x3), x1(1 − x3)}. These Type 2 relations correspond to U1 ⊂ U2, U2 ⊂ U3, and U1 ⊂ U3. Note that the first two of these receptive field relationships imply the third; correspondingly, the third canonical form relation satisfies x1(1 − x3) = (1 − x3) · [x1(1 − x2)] + x1 · [x2(1 − x3)].
D. CF(JC) = {(1 − x1)(1 − x2)}. This Type 3 relation reflects X = U1 ∪ U2, and implies U3 ⊂ U1 ∪ U2.
Nevertheless, we do not yet know how to infer the minimal embedding dimension
from CF(JC ). In Appendix 2, we provide a complete list of neural codes on three
neurons, up to permutation, and their respective canonical forms.

4.4 Comparison to the Stanley–Reisner Ideal

Readers familiar with the Stanley–Reisner ideal (Miller and Sturmfels 2005; Stanley 2004) will recognize that this kind of ideal is generated by the Type 1 relations of a neural code C. The corresponding simplicial complex is Δ(C), the smallest simplicial complex that contains the code.

Lemma 4.4 Let C = C(U). The ideal generated by the Type 1 relations, ⟨xσ | Uσ = ∅⟩, is the Stanley–Reisner ideal of Δ(C). Moreover, if supp C is a simplicial complex, then CF(JC) contains no Type 2 or Type 3 relations, and JC is thus the Stanley–Reisner ideal for supp C.

Proof To see the first statement, observe that the Stanley–Reisner ideal of a simplicial complex Δ is the ideal

IΔ := ⟨xσ | σ ∉ Δ⟩,

and recall that Δ(C) = {σ ⊆ [n] | σ ⊆ supp(c) for some c ∈ C}. As C = C(U), an equivalent characterization is Δ(C) = {σ ⊆ [n] | Uσ ≠ ∅}. Since these sets are equal, so are their complements in 2^[n]:

{σ ⊆ [n] | σ ∉ Δ(C)} = {σ ⊆ [n] | Uσ = ∅}.

Thus, ⟨xσ | Uσ = ∅⟩ = ⟨xσ | σ ∉ Δ(C)⟩, which is the Stanley–Reisner ideal for Δ(C).
To prove the second statement, suppose that supp C is a simplicial complex. Note that C must contain the all-zeros codeword, so X ⊋ ⋃_{i=1}^n Ui and there can be no Type 3 relations. Suppose the canonical form of JC contains a Type 2 relation xσ ∏_{i∈τ}(1 − xi), for some σ, τ ⊂ [n] satisfying σ, τ ≠ ∅, σ ∩ τ = ∅, and Uσ ≠ ∅. The existence of this relation indicates that σ ∉ supp C, while Uσ ≠ ∅ guarantees that there does exist an ω ∈ supp C such that σ ⊂ ω. This contradicts the assumption that supp C is a simplicial complex. We conclude that JC has no Type 2 relations. □

The canonical form of JC thus enables us to immediately read off, via the Type 1 relations, the minimal forbidden faces of the simplicial complex Δ(C) associated to the code, and also the minimal deviations of C from being a simplicial complex, which are captured by the Type 2 and Type 3 relations.

4.5 An Algorithm for Obtaining the Canonical Form

Now that we have established that a minimal description of the RF structure can be extracted from the canonical form of the neural ideal, the most pressing question is the following:

Question How do we find the canonical form CF(JC) if all we know is the code C, and we are not given a representation of the code as C = C(U)?

In this section, we describe an algorithmic method for finding CF(JC) from knowledge only of C. It turns out that computing the primary decomposition of JC is a key step toward finding the minimal pseudo-monomials. This parallels the situation for monomial ideals, although there are some additional subtleties in the case of pseudo-monomial ideals. As previously discussed, from the canonical form we can read off the RF structure of the code, so the overall workflow is as follows:

Workflow: neural code C ⊂ {0, 1}^n → neural ideal JC = ⟨{ρv | v ∉ C}⟩ → primary decomposition of JC → canonical form CF(JC) → minimal RF structure of C

Canonical Form Algorithm

Input: A neural code C ⊂ {0, 1}^n.
Output: The canonical form of the neural ideal, CF(JC).

Step 1: From C ⊂ {0, 1}^n, compute JC = ⟨{ρv | v ∉ C}⟩.

Step 2: Compute the primary decomposition of JC. It turns out (see Theorem 5.4 in the next section) that this decomposition yields a unique representation of the ideal as

JC = ⋂_{a∈A} pa,

where each a ∈ A is an element of {0, 1, ∗}^n, and pa is defined as

pa := ⟨{xi − ai | ai ≠ ∗}⟩ = ⟨{xi | ai = 0}, {1 − xj | aj = 1}⟩.

Note that the pa s are all prime ideals. We will see later how to compute this primary decomposition algorithmically, in Sect. 5.3.

Step 3: Observe that any pseudo-monomial f ∈ JC must satisfy f ∈ pa for each a ∈ A. It follows that f is a multiple of one of the linear generators of pa for each a ∈ A. Compute the following set of elements of JC:

M(JC) = { ∏_{a∈A} ga | ga = xi − ai for some ai ≠ ∗ }.

M(JC) consists of all polynomials obtained as a product of linear generators ga, one for each prime ideal pa of the primary decomposition of JC.

Step 4: Reduce the elements of M(JC) by imposing xi(1 − xi) = 0. This eliminates elements that are not pseudo-monomials. It also reduces the degrees of some of the remaining elements, as it implies xi² = xi and (1 − xi)² = (1 − xi). We are left with a set of pseudo-monomials of the form f = ∏_{i∈σ} xi ∏_{j∈τ}(1 − xj) for σ ∩ τ = ∅. Call this new reduced set M̃(JC).

Step 5: Finally, remove all elements of M̃(JC) that are multiples of lower-degree elements in M̃(JC).

Proposition 4.5 The resulting set is the canonical form CF(JC).

The proof is given in Sect. 6.4.
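For small n, every step above can be carried out by brute force, using the combinatorial description of the primary decomposition given in Sect. 5 (by Corollary 5.5, the minimal primes pa correspond to the maximal a ∈ {0, 1, ∗}^n with Va ⊆ C). The following Python sketch is our own illustration under that description, not the authors' implementation; it encodes a pseudo-monomial ∏_{i∈σ} xi ∏_{j∈τ}(1 − xj) as a pair of frozensets (σ, τ), with neurons 0-indexed:

```python
# A brute-force sketch of the canonical form algorithm for small n; all
# function names are ours.
from itertools import product

def V(a):
    """V_a: all v in {0,1}^n compatible with a in {0,1,'*'}^n (Sect. 5.1)."""
    return set(product(*[(0, 1) if ai == '*' else (ai,) for ai in a]))

def minimal_primes(C, n):
    """Step 2, via Corollary 5.5: maximal a with V_a contained in C."""
    A = [a for a in product((0, 1, '*'), repeat=n) if V(a) <= C]
    return [a for a in A if not any(V(a) < V(b) for b in A)]

def canonical_form(C, n):
    C = set(map(tuple, C))
    # Step 3: the linear generators of p_a are x_i (a_i = 0) and 1 - x_i (a_i = 1).
    gens = [[('x' if a[i] == 0 else '1-x', i) for i in range(n) if a[i] != '*']
            for a in minimal_primes(C, n)]
    M = set()
    for choice in product(*gens):      # one generator from each minimal prime
        sigma = frozenset(i for kind, i in choice if kind == 'x')
        tau = frozenset(i for kind, i in choice if kind == '1-x')
        if not (sigma & tau):          # Step 4: x_i(1 - x_i) = 0 kills the rest;
            M.add((sigma, tau))        # repeated factors collapse via x_i^2 = x_i
    # Step 5: discard multiples of lower-degree pseudo-monomials in M.
    return {f for f in M
            if not any(g != f and g[0] <= f[0] and g[1] <= f[1] for g in M)}
```

As a check, on the code of Fig. 3B, C = {111, 101, 011, 001} (encoded 0-indexed as {(1,1,1), (1,0,1), (0,1,1), (0,0,1)}), the sketch finds the single minimal prime corresponding to a = (∗, ∗, 1) and returns {(∅, {2})}, i.e., CF(JC) = {1 − x3}, matching the list in Sect. 4.3.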

4.6 An Example

Now we are ready to use the canonical form algorithm in an example, illustrating
how to obtain a possible arrangement of convex receptive fields from a neural code.
Suppose a neural code C has the following 13 codewords, and 19 missing words:

C = {00000, 10000, 01000, 00100, 00001, 11000, 10001, 01100, 00110, 00101, 00011, 11100, 00111},

{0, 1}^5 \ C = {00010, 10100, 10010, 01010, 01001, 11010, 11001, 10110, 10101, 10011, 01110, 01101, 01011, 11110, 11101, 11011, 10111, 01111, 11111}.

Thus, the neural ideal JC has 19 generators, using the original definition JC = ⟨{ρv | v ∉ C}⟩:

JC = ⟨ x4(1 − x1)(1 − x2)(1 − x3)(1 − x5), x1x3(1 − x2)(1 − x4)(1 − x5),
x1x4(1 − x2)(1 − x3)(1 − x5), x2x4(1 − x1)(1 − x3)(1 − x5),
x2x5(1 − x1)(1 − x3)(1 − x4), x1x2x4(1 − x3)(1 − x5),
x1x2x5(1 − x3)(1 − x4), x1x3x4(1 − x2)(1 − x5), x1x3x5(1 − x2)(1 − x4),
x1x4x5(1 − x2)(1 − x3), x2x3x4(1 − x1)(1 − x5), x2x3x5(1 − x1)(1 − x4),
x2x4x5(1 − x1)(1 − x3), x1x2x3x4(1 − x5), x1x2x3x5(1 − x4),
x1x2x4x5(1 − x3), x1x3x4x5(1 − x2), x2x3x4x5(1 − x1), x1x2x3x4x5 ⟩.

Despite the fact that we are considering only five neurons, this looks like a complicated ideal. Considering the canonical form of JC will help us to extract the relevant combinatorial information and allow us to create a possible arrangement of receptive fields U that realizes this code as C = C(U). Following Step 2 of our canonical form algorithm, we take the primary decomposition of JC:

JC = ⟨x1, x2, x4⟩ ∩ ⟨x1, x2, 1 − x3⟩ ∩ ⟨x1, x2, 1 − x5⟩ ∩ ⟨x2, x3, x4⟩ ∩ ⟨x3, x4, x5⟩ ∩ ⟨x1, x4, x5⟩ ∩ ⟨1 − x2, x4, x5⟩.

Then, as described in Steps 3–5 of the algorithm, we take all possible products of linear generators, one from each of these seven larger ideals, reducing by the relation xi(1 − xi) = 0 (note that this gives us xi = xi², and hence xi^k = xi for any k > 1). We also remove any polynomials that are multiples of smaller-degree pseudo-monomials in our list. This process leaves us with six minimal pseudo-monomials, yielding the canonical form:

JC = ⟨CF(JC)⟩ = ⟨ x1x3x5, x2x5, x1x4, x2x4, x1x3(1 − x2), x4(1 − x3)(1 − x5) ⟩.
Note in particular that every generator we originally put in JC is a multiple of one of
the six relations in CF(JC ). Next, we consider what the relations in CF(JC ) tell us
about the arrangement of receptive fields that would be needed to realize the code as
C = C(U).
1. x1x3x5 ∈ CF(JC) ⇒ U1 ∩ U3 ∩ U5 = ∅, while U1 ∩ U3, U3 ∩ U5, and U1 ∩ U5 are all nonempty.
2. x2x5 ∈ CF(JC) ⇒ U2 ∩ U5 = ∅, while U2, U5 are both nonempty.
3. x1x4 ∈ CF(JC) ⇒ U1 ∩ U4 = ∅, while U1, U4 are both nonempty.
4. x2x4 ∈ CF(JC) ⇒ U2 ∩ U4 = ∅, while U2, U4 are both nonempty.
5. x1x3(1 − x2) ∈ CF(JC) ⇒ U1 ∩ U3 ⊆ U2, while U1 ⊈ U2, U3 ⊈ U2, and U1 ∩ U3 ≠ ∅.
6. x4(1 − x3)(1 − x5) ∈ CF(JC) ⇒ U4 ⊆ U3 ∪ U5, while U4 ≠ ∅, U4 ⊈ U3, and U4 ⊈ U5.
The minimal Type 1 relations (1–4) tell us that we should draw U1 , U3 , and U5
with all pairwise intersections, but leaving a “hole” in the middle since the triple
intersection is empty. Then U2 should be drawn to intersect U1 and U3 , but not U5 .
Similarly, U4 should intersect U3 and U5 , but not U1 or U2 . The minimal Type 2
relations (5–6) tell us that U2 should be drawn to contain the intersection U1 ∩ U3 ,
while U4 lies in the union U3 ∪ U5 , but is not contained in U3 or U5 alone. There
are no minimal Type 3 relations, as expected for a code that includes the all-zeros
codeword.
Putting all this together, and assuming convex receptive fields, we can completely
infer the receptive field structure, and draw the corresponding picture (see Fig. 4).
It is easy to verify that the code C(U) of the pictured arrangement indeed coincides with C.

Fig. 4 An arrangement of five sets that realizes C as C(U)
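This example also makes a convenient test case for the sketch in Sect. 4.5; feeding it the thirteen codewords (0-indexed) should reproduce the six minimal pseudo-monomials above:

```python
# The 13 codewords, as 0/1 tuples (neuron x_{i+1} of the text is index i here):
C5 = {(0,0,0,0,0), (1,0,0,0,0), (0,1,0,0,0), (0,0,1,0,0), (0,0,0,0,1),
      (1,1,0,0,0), (1,0,0,0,1), (0,1,1,0,0), (0,0,1,1,0), (0,0,1,0,1),
      (0,0,0,1,1), (1,1,1,0,0), (0,0,1,1,1)}
for sigma, tau in canonical_form(C5, 5):
    print(sorted(sigma), sorted(tau))
# Expected, in some order: [0, 2, 4] [] for x1 x3 x5; [1, 4] [], [0, 3] [],
# [1, 3] [] for x2 x5, x1 x4, x2 x4; [0, 2] [1] for x1 x3 (1 - x2);
# and [3] [2, 4] for x4 (1 - x3)(1 - x5).
```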

5 Primary Decomposition

Let C ⊂ {0, 1}^n be a neural code. The primary decomposition of IC is boring:

IC = ⋂_{c∈C} mc,
where mv for any v ∈ {0, 1}n is the maximal ideal I (v) defined in Sect. 3.3. This
simply expresses IC as the intersection of all maximal ideals mc for c ∈ C, because
the variety C = V (IC ) is just a finite set of points and the primary decomposition
reflects no additional structure of the code.
On the other hand, the primary decomposition of the neural ideal JC retains the full combinatorial structure of C. Indeed, we have seen that computing this decomposition is a critical step toward obtaining CF(JC), which captures the receptive field structure of the neural code. In this section, we describe the primary decomposition of JC and discuss its relationship to some natural decompositions of the neural code. We end with an algorithm for obtaining the primary decomposition of any pseudo-monomial ideal.

5.1 Primary Decomposition of the Neural Ideal

We begin by defining some objects related to F2[x1, . . . , xn] and {0, 1}^n, without reference to any particular neural code. For any a ∈ {0, 1, ∗}^n, we define the variety

Va := {v ∈ {0, 1}^n | vi = ai for all i s.t. ai ≠ ∗} ⊆ {0, 1}^n.

This is simply the subset of points compatible with the word “a”, where ∗ is viewed as a “wild card” symbol. Note that Vv = {v} for any v ∈ {0, 1}^n. We can also associate a prime ideal to a,

pa := ⟨{xi − ai | ai ≠ ∗}⟩ ⊆ F2[x1, . . . , xn],

consisting of polynomials in F2[x1, . . . , xn] that vanish on all points compatible with a. To obtain all such polynomials, we must add in the Boolean relations (see Sect. 6.1):

qa := I(Va) = pa + ⟨x1² − x1, . . . , xn² − xn⟩.

Note that Va = V(pa) = V(qa).
Next, let us relate this all to a code C ⊂ {0, 1}^n. Recall the definition of the neural ideal,

JC := ⟨{ρv | v ∉ C}⟩ = ⟨{ ∏_{i=1}^n (1 − vi − xi) | v ∉ C }⟩.

We have the following correspondences.

Lemma 5.1 JC ⊆ pa ⇔ Va ⊆ C.

Proof (⇒) JC ⊆ pa ⇒ V(pa) ⊆ V(JC). Recalling that V(pa) = Va and V(JC) = C, this gives Va ⊆ C. (⇐) Va ⊆ C ⇒ I(C) ⊆ I(Va) ⇒ IC ⊆ qa. Recalling that both IC and qa differ from JC and pa, respectively, by the addition of the Boolean relations, we obtain JC ⊆ pa. □

Lemma 5.2 For any a, b ∈ {0, 1, ∗}^n, Va ⊆ Vb ⇔ pb ⊆ pa.

Proof (⇒) Suppose Va ⊆ Vb. Then, for any i such that bi ≠ ∗, we have ai = bi. It follows that each generator of pb is also in pa, so pb ⊆ pa. (⇐) Suppose pb ⊆ pa. Then, Va = V(pa) ⊆ V(pb) = Vb. □

Recall that an ideal p is said to be a minimal prime over J if p is a prime ideal that contains J, and there is no other prime ideal p′ with p ⊋ p′ ⊇ J. Minimal primes pa ⊇ JC correspond to maximal varieties Va such that Va ⊆ C. Consider the set

AC := {a ∈ {0, 1, ∗}^n | Va ⊆ C}.

We say that a ∈ AC is maximal if there does not exist another element b ∈ AC such that Va ⊊ Vb (i.e., a ∈ AC is maximal if Va is maximal such that Va ⊆ C).

Lemma 5.3 The element a ∈ AC is maximal if and only if pa is a minimal prime over JC.

Proof Recall that a ∈ AC ⇒ Va ⊆ C, and hence JC ⊆ pa (by Lemma 5.1). (⇒) Let a ∈ AC be maximal, and choose b ∈ {0, 1, ∗}^n such that JC ⊆ pb ⊆ pa. By Lemmas 5.1 and 5.2, Va ⊆ Vb ⊆ C. Since a is maximal, we conclude that b = a, and hence pb = pa. It follows that pa is a minimal prime over JC. (⇐) Suppose pa is a minimal prime over JC. Then by Lemma 5.1, a ∈ AC. Let b be a maximal element of AC such that Va ⊆ Vb ⊆ C. Then JC ⊆ pb ⊆ pa. Since pa is a minimal prime over JC, pb = pa, and hence b = a. Thus, a is maximal in AC. □

We can now describe the primary decomposition of JC . Here we assume the neural
code C ⊆ {0, 1}n is nonempty, so that JC is a proper pseudo-monomial ideal.

Theorem 5.4 JC = i=1 pai is the unique irredundant primary decomposition of


JC , where pa1 , . . . , pa are the minimal primes over JC .

The proof is given in Sect. 6.6. Combining this theorem with Lemma 5.3, we have
the following corollary.

Corollary 5.5 JC = ∩_{i=1}^ℓ pai is the unique irredundant primary decomposition of
JC, where a1, . . . , aℓ are the maximal elements of AC.
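
For small n, Corollary 5.5 can be checked by brute force: enumerate all words a with Va ⊆ C and keep the maximal ones. An illustrative Python sketch (the helper names are ours, not from the original text):

```python
from itertools import product

def V(a, n):
    """Points of {0,1}^n compatible with a word a over {'0','1','*'}."""
    return {v for v in product((0, 1), repeat=n)
            if all(c == '*' or v[i] == int(c) for i, c in enumerate(a))}

def maximal_AC(C, n):
    """Maximal words a with V_a inside C; these index the minimal primes p_a."""
    A = [a for a in product('01*', repeat=n) if V(a, n) <= C]
    return [a for a in A if not any(V(a, n) < V(b, n) for b in A)]

# For the code C = {000, 001, 011, 111} of Example 1 below:
C = {(0,0,0), (0,0,1), (0,1,1), (1,1,1)}
print([''.join(a) for a in maximal_AC(C, 3)])   # ['00*', '0*1', '*11']
```

By Lemma 5.3, the words printed here index exactly the minimal primes p00∗ = ⟨x1, x2⟩, p0∗1 = ⟨x1, 1 − x3⟩, and p∗11 = ⟨1 − x2, 1 − x3⟩.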

5.2 Decomposing the Neural Code via Intervals of the Boolean Lattice

From the definition of AC , it is easy to see that the maximal elements yield a kind of
“primary” decomposition of the neural code C as a union of maximal Va s.

Lemma 5.6 C = ∪_{i=1}^ℓ Vai, where a1, . . . , aℓ are the maximal elements of AC. (I.e.,
pa1, . . . , paℓ are the minimal primes in the primary decomposition of JC.)

Proof Since Va ⊆ C for any a ∈ AC, clearly ∪_{i=1}^ℓ Vai ⊆ C. To see the reverse in-
clusion, note that for any c ∈ C, c ∈ Vc ⊆ Va for some maximal a ∈ AC. Hence,
C ⊆ ∪_{i=1}^ℓ Vai. 

Note that Lemma 5.6 could also be regarded as a corollary of Theorem 5.4, since
C = V(JC) = V(∩_{i=1}^ℓ pai) = ∪_{i=1}^ℓ V(pai) = ∪_{i=1}^ℓ Vai, and the maximal a ∈ AC
correspond to minimal primes pa ⊇ JC . Although we were able to prove Lemma 5.6
directly, in practice we use the primary decomposition in order to find (algorithmi-
cally) the maximal elements a1, . . . , aℓ ∈ AC, and thus determine the Va s for the
above decomposition of the code.
It is worth noting here that the decomposition of C in Lemma 5.6 is not necessarily
minimal. This is because one can have a proper subset σ ⊊ [ℓ] of the indices such that

∩_{i∈σ} qai = ∩_{i∈[ℓ]} pai.

Since V (qai ) = V (pai ) = Vai , this would lead to a decomposition of C as a union of


fewer Vai s. In contrast, the primary decomposition of JC in Theorem 5.4 is irredun-
dant, and hence none of the minimal primes can be dropped from the intersection.

5.2.1 Neural Activity “Motifs” and Intervals of the Boolean Lattice

We can think of an element a ∈ {0, 1, ∗}^n as a neural activity “motif”. That is, a is
a pattern of activity and silence for a subset of the neurons, while Va consists of all
activity patterns on the full population of neurons that are consistent with this mo-
tif (irrespective of what the code is). For a given neural code C, the set of maximal
a1, . . . , aℓ ∈ AC corresponds to a set of minimal motifs that define the code (here
“minimal” is used in the sense of having the fewest number of neurons that are con-
strained to be “on” or “off” because ai ≠ ∗). If a ∈ {0, ∗}^n, we refer to a as a neural
silence motif, since it corresponds to a pattern of silence. In particular, silence motifs
correspond to simplices in supp C, since supp Va is a simplex in this case. If supp C
is a simplicial complex, then Lemma 5.6 gives the decomposition of C as a union of
minimal silence motifs (corresponding to facets, or maximal simplices, of supp C).
More generally, Va corresponds to an interval of the Boolean lattice {0, 1}^n. Recall
the poset structure of the Boolean lattice: for any pair of elements v1, v2 ∈ {0, 1}^n, we
have v1 ≤ v2 if and only if supp(v1 ) ⊆ supp(v2 ). An interval of the Boolean lattice is
thus a subset of the form:
[u1, u2] def= {v ∈ {0, 1}^n | u1 ≤ v ≤ u2}.

Given an element a ∈ {0, 1, ∗}^n, we have a natural interval consisting of all Boolean
lattice elements “compatible” with a. Letting a^0 ∈ {0, 1}^n be the element obtained
from a by setting all ∗s to 0, and a^1 ∈ {0, 1}^n the element obtained by setting all ∗s
to 1, we find that

Va = [a^0, a^1] = {v ∈ {0, 1}^n | a^0 ≤ v ≤ a^1}.

Simplices correspond to intervals of the form [0, a^1], where 0 is the bottom “all-
zeros” element in the Boolean lattice.
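
As a quick illustration of this correspondence, here is a small sketch (hypothetical helper names, in the same style as the sketch in Sect. 5.1): a^0 and a^1 are obtained by resolving the ∗s, and Va is recovered as the interval between them.

```python
from itertools import product

def endpoints(a):
    """Resolve the *s in a word a to all 0s (a^0) and to all 1s (a^1)."""
    a0 = tuple(0 if ch == '*' else int(ch) for ch in a)
    a1 = tuple(1 if ch == '*' else int(ch) for ch in a)
    return a0, a1

def leq(u, v):
    """Boolean lattice order: u <= v iff supp(u) is contained in supp(v)."""
    return all(ui <= vi for ui, vi in zip(u, v))

a0, a1 = endpoints('0*1')
interval = {v for v in product((0, 1), repeat=3) if leq(a0, v) and leq(v, a1)}
print(sorted(interval))   # [(0, 0, 1), (0, 1, 1)] = V_{0*1} = [001, 011]
```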
While the primary decomposition of JC allows a neural code C ⊆ {0, 1}^n to be
decomposed as a union of intervals of the Boolean lattice, as indicated by Lemma 5.6,
the canonical form CF(JC ) provides a decomposition of the complement of C as a
union of intervals. First, notice that to any pseudo-monomial f ∈ CF(JC) we can
associate an element b ∈ {0, 1, ∗}^n as follows: bi = 1 if xi|f, bi = 0 if (1 − xi)|f, and
bi = ∗ otherwise. In other words,

f = fb def= ∏_{i|bi=1} xi ∏_{j|bj=0} (1 − xj).

As before, b corresponds to an interval Vb = [b^0, b^1] ⊂ {0, 1}^n. Recalling that JC
is generated by pseudo-monomials corresponding to non-codewords, it is now easy
to see that the complement of C in {0, 1}^n can be expressed as the union of Vb's,
where each b corresponds to a pseudo-monomial in the canonical form. The canon-
ical form thus provides an alternative description of the code, nicely complementing
Lemma 5.6.
Lemma 5.7 C = {0, 1}^n \ ∪_{i=1}^k Vbi, where CF(JC) = {fb1, . . . , fbk}.

We now illustrate both decompositions of the neural code with an example.

Example 1 Consider the neural code C = {000, 001, 011, 111} ⊂ {0, 1}^3 correspond-
ing to a set of receptive fields satisfying U1 ⊊ U2 ⊊ U3 ⊊ X. The primary decompo-
sition of JC ⊂ F2[x1, x2, x3] is given by

⟨x1, x2⟩ ∩ ⟨x1, 1 − x3⟩ ∩ ⟨1 − x2, 1 − x3⟩,

while the canonical form is


 
CF(JC) = {x1(1 − x2), x2(1 − x3), x1(1 − x3)}.

From the primary decomposition, we can write C = Va1 ∪ Va2 ∪ Va3 for a1 = 00∗,
a2 = 0∗1, and a3 = ∗11. The corresponding Boolean lattice intervals are [000, 001],
[001, 011], and [011, 111], respectively, and are depicted in black in Fig. 5. As noted
before, this decomposition of the neural code need not be minimal; indeed, we could
also write C = Va1 ∪Va3 , as the middle interval is not necessary to cover all codewords
in C.
From the canonical form, we obtain C = {0, 1}^3 \ (Vb1 ∪ Vb2 ∪ Vb3), where b1 =
10∗, b2 = ∗10, and b3 = 1∗0. The corresponding Boolean lattice intervals spanning

Fig. 5 Boolean interval decompositions of the code C = {000, 001, 011, 111} (in black) and of its complement (in gray), arising from the primary decomposition and canonical form of JC, respectively

the complement of C are [100, 101], [010, 110], and [100, 110], respectively; these
are depicted in gray in Fig. 5. Again, notice that this decomposition is not minimal—
namely, Vb3 = [100, 110] could be dropped.
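
Both decompositions, including the redundancies just noted, are easy to confirm by brute force. A sketch (illustrative only, reusing the hypothetical helper V from Sect. 5.1):

```python
from itertools import product

def V(a):   # points of {0,1}^n compatible with a word a over {'0','1','*'}
    return {v for v in product((0, 1), repeat=len(a))
            if all(c == '*' or v[i] == int(c) for i, c in enumerate(a))}

C = {(0,0,0), (0,0,1), (0,1,1), (1,1,1)}
full = set(product((0, 1), repeat=3))

# Lemma 5.6: C as a union of intervals from the primary decomposition.
assert C == V('00*') | V('0*1') | V('*11')
assert C == V('00*') | V('*11')            # the middle interval is redundant

# Lemma 5.7: the complement of C as a union of intervals from CF(J_C).
assert C == full - (V('10*') | V('*10') | V('1*0'))
assert C == full - (V('10*') | V('*10'))   # V_{b3} = [100, 110] can be dropped
print("both decompositions verified")
```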

5.3 An Algorithm for Primary Decomposition of Pseudo-monomial Ideals

We have already seen that computing the primary decomposition of the neural ideal
JC is a critical step toward extracting the canonical form CF(JC ), and that it also
yields a meaningful decomposition of C in terms of neural activity motifs. Recall
from Sect. 4.3 that JC is always a pseudo-monomial ideal—i.e., JC is generated by
pseudo-monomials, which are polynomials f ∈ F2 [x1 , . . . , xn ] of the form

f = ∏_{i∈σ} zi, where σ ⊆ [n] and zi ∈ {xi, 1 − xi} for each i ∈ σ.

In this section, we provide an explicit algorithm for finding the primary decomposi-
tion of such ideals.
In the case of monomial ideals, there are many algorithms for obtaining the pri-
mary decomposition, and there are already fast implementations of such algorithms
in algebraic geometry software packages such as Singular and Macaulay 2 (Eisenbud
et al. 2002). Pseudo-monomial ideals are closely related to square-free monomial
ideals, but there are some differences which require a bit of care. In particular, if
J ⊆ F2 [x1 , . . . , xn ] is a pseudo-monomial ideal and z ∈ {xi , 1 − xi } for some i ∈ [n],
then for f a pseudo-monomial:

f ∈ ⟨J, z⟩ ⇏ f ∈ J or f ∈ ⟨z⟩.

To see why, observe that x1 ∈ ⟨x1(1 − x2), x2⟩, because x1 = 1 · x1(1 − x2) + x1 · x2,
but x1 is not a multiple of either x1(1 − x2) or x2. We can nevertheless adapt ideas
from (square-free) monomial ideals to obtain an algorithm for the primary decom-
position of pseudo-monomial ideals. The following lemma allows us to handle the
above complication.

Lemma 5.8 Let J ⊂ F2[x1, . . . , xn] be a pseudo-monomial ideal, and let z ∈ {xi, 1 − xi}
for some i ∈ [n]. For any pseudo-monomial f,

f ∈ ⟨J, z⟩ ⇒ f ∈ J or f ∈ ⟨z⟩ or (1 − z)f ∈ J.

The proof is given in Sect. 6.5. Using Lemma 5.8, we can prove the following key
lemma for our algorithm, which mimics the case of square-free monomial ideals.

Lemma 5.9 Let J ⊂ F2[x1, . . . , xn] be a pseudo-monomial ideal, and let ∏_{i∈σ} zi be
a pseudo-monomial, with zi ∈ {xi, 1 − xi} for each i. Then

⟨J, ∏_{i∈σ} zi⟩ = ∩_{i∈σ} ⟨J, zi⟩.

The proof is given in Sect. 6.5. Note that if ∏_{i∈σ} zi ∈ J, then this lemma implies
J = ∩_{i∈σ} ⟨J, zi⟩, which is the key fact we will use in our algorithm. This is similar
to Lemma 2.1 in (Eisenbud et al. 2002, Monomial Ideals Chapter), and suggests a
recursive algorithm along similar lines to those that exist for monomial ideals.
The following observation will add considerable efficiency to our algorithm for
pseudo-monomial ideals.

Lemma 5.10 Let J ⊂ F2[x1, . . . , xn] be a pseudo-monomial ideal. For any zi ∈
{xi, 1 − xi} we can write

J = ⟨zi g1, . . . , zi gk, (1 − zi)f1, . . . , (1 − zi)fℓ, h1, . . . , hm⟩,

where the gj, fj and hj are pseudo-monomials that contain no zi or 1 − zi term.
(Note that k, ℓ or m may be zero if there are no generators of the corresponding
type.) Then

⟨J, zi⟩ = ⟨J|zi=0, zi⟩ = ⟨zi, f1, . . . , fℓ, h1, . . . , hm⟩.

Proof Clearly, the addition of zi in ⟨J, zi⟩ renders the zi gj generators unnecessary.


The (1 − zi )fj generators can be reduced to just fj because fj = 1 · (1 − zi )fj +
fj · zi . 

We can now state our algorithm. Recall that an ideal I ⊆ R is proper if I ≠ R.


Algorithm for Primary Decomposition of Pseudo-monomial Ideals
Input: A proper pseudo-monomial ideal J ⊂ F2[x1, . . . , xn]. This is presented as
J = ⟨g1, . . . , gr⟩ with each generator gi a pseudo-monomial.
Output: Primary decomposition of J . This is returned as a set P of prime ideals,
with J = ∩_{I∈P} I.
Step 1 (Initialization Step): Set P = ∅ and D = {J}. Eliminate from the list of gener-
ators of J those that are multiples of other generators.
Step 2 (Splitting Step): For each ideal I ∈ D compute DI as follows.

Step 2.1: Choose a nonlinear generator zi1 · · · zim ∈ I , where each zi ∈ {xi , 1 − xi },
and m ≥ 2. (Note: the generators of I should always be pseudo-monomials.)
Step 2.2: Set DI = {⟨I, zi1⟩, . . . , ⟨I, zim⟩}. By Lemma 5.9, we know that

I = ∩_{k=1}^m ⟨I, zik⟩ = ∩_{K∈DI} K.

Step 3 (Reduction Step): For each DI and each ideal ⟨I, zi⟩ ∈ DI, reduce the set of
generators as follows.

Step 3.1: Set zi = 0 in each generator of I . This yields a “0” for each multiple of zi ,
and removes 1 − zi factors in each of the remaining generators. By Lemma 5.10,
⟨I, zi⟩ = ⟨I|zi=0, zi⟩.
Step 3.2: Eliminate 0s and generators that are multiples of other generators.
Step 3.3: If there is a “1” as a generator, eliminate ⟨I, zi⟩ from DI as it is not a
proper ideal.

Step 4 (Update Step): Update D and P, as follows.

Step 4.1: Set D = ∪_I DI, and remove redundant ideals in D. That is, remove an
ideal if it has the same set of generators as another ideal in D.
Step 4.2: For each ideal I ∈ D, if I has only linear generators (and is thus prime),
move I to P by setting P = P ∪ {I} and D = D \ {I}.

Step 5 (Recursion Step): Repeat steps 2–4 until D = ∅.


Step 6 (Final Step): Remove redundant ideals of P. That is, remove ideals that are
not necessary to preserve the equality J = ∩_{I∈P} I.
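
To illustrate how the steps fit together, here is a compact Python sketch of the algorithm above (our own illustrative encoding, not from the original text): a pseudo-monomial is stored as a frozenset of pairs (i, s), where s = 1 encodes a factor xi and s = 0 a factor 1 − xi, so that divisibility is simply set inclusion.

```python
def reduce_gens(gens):
    """Steps 1/3.2: drop generators that are multiples of other generators."""
    gens = set(gens)
    return {g for g in gens if not any(h < g for h in gens)}

def add_linear(I, z):
    """Step 3 (Lemma 5.10): generators of <I|_{z=0}, z>; None if improper."""
    i, s = z
    new = {frozenset([z])}
    for g in I:
        if z in g:                 # multiple of z: becomes 0, discard
            continue
        g = g - {(i, 1 - s)}       # a (1 - z) factor evaluates to 1
        if not g:                  # generator reduced to 1: not a proper ideal
            return None
        new.add(g)
    return reduce_gens(new)

def primary_decomposition(gens):
    """Minimal primes of a proper pseudo-monomial ideal (Theorem 5.4)."""
    todo, primes = [reduce_gens(frozenset(g) for g in gens)], []
    while todo:                                   # Steps 2-5
        I = todo.pop()
        g = next((g for g in I if len(g) >= 2), None)
        if g is None:                             # only linear generators: prime
            if I not in primes:
                primes.append(I)
            continue
        for z in g:                               # splitting step (Lemma 5.9)
            J = add_linear(I, z)
            if J is not None:
                todo.append(J)
    flats = [frozenset().union(*P) for P in primes]
    return [P for k, P in enumerate(flats)        # Step 6: keep minimal primes
            if not any(Q < P for Q in flats) and P not in flats[:k]]

# Example 1 revisited: J_C = <rho_v : v not in C> for C = {000, 001, 011, 111}.
J = [{(1,1),(2,0),(3,0)},   # x1(1-x2)(1-x3)
     {(1,0),(2,1),(3,0)},   # (1-x1)x2(1-x3)
     {(1,1),(2,1),(3,0)},   # x1x2(1-x3)
     {(1,1),(2,0),(3,1)}]   # x1(1-x2)x3
for P in primary_decomposition(J):
    print(sorted(P))        # <x1,x2>, <x1,1-x3>, <1-x2,1-x3>, in some order
```

The final filtering step in this sketch uses the fact (Proposition 6.8 below) that the irredundant decomposition consists precisely of the inclusion-minimal primes, so redundant ideals can be removed by comparing generator sets.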

Proposition 5.11 This algorithm is guaranteed to terminate, and the final P is a set
of irredundant prime ideals such that J = ∩_{I∈P} I.

Proof For any pseudo-monomial ideal I ∈ D, let deg(I) be the sum of the degrees of
all generating pseudo-monomials of I. To see that the algorithm terminates, observe that for
each ideal ⟨I, zi⟩ ∈ DI, deg(⟨I, zi⟩) < deg(I) (this follows from Lemma 5.10). The
degrees of elements in D thus steadily decrease with each recursive iteration, until
they are removed as prime ideals that are appended to P. At the same time, the size
of D is bounded: |D| ≤ 2^(3^n), since there are only 3^n pseudo-monomials in
F2[x1, . . . , xn], and thus at most 2^(3^n) distinct pseudo-monomial ideals.
By construction, the final P is an irredundant set of prime ideals. Throughout the
algorithm, however, it is always true that J = (∩_{I∈D} I) ∩ (∩_{I∈P} I). Since the final
D = ∅, the final P satisfies J = ∩_{I∈P} I. 

Acknowledgements CC was supported by NSF DMS 0920845 and NSF DMS 1225666, a Woodrow
Wilson Career Enhancement Fellowship, and an Alfred P. Sloan Research Fellowship. VI was supported
by NSF DMS 0967377, NSF DMS 1122519, and the Swartz Foundation.

Appendix 1: Proofs

6.1 Proof of Lemmas 3.1 and 3.2

To prove Lemmas 3.1 and 3.2, we need a version of the Nullstellensatz for finite
fields. The original “Hilbert’s Nullstellensatz” applies when k is an algebraically
closed field. It states that if f ∈ k[x1, . . . , xn] vanishes on V(J), then f ∈ √J. In
other words,

I(V(J)) = √J.

Because we have chosen k = F2 = {0, 1}, we have to be a little careful about the usual
ideal-variety correspondence, as there are some subtleties introduced in the case of
finite fields. In particular, J = √J in F2[x1, . . . , xn] does not imply I(V(J)) = J.
The following lemma and theorem are well known. Let Fq be a finite field of size
q, and Fq [x1 , . . . , xn ] the n-variate polynomial ring over Fq .
Lemma 6.1 For any ideal J ⊆ Fq[x1, . . . , xn], the ideal J + ⟨x1^q − x1, . . . , xn^q − xn⟩
is a radical ideal.

Theorem 6.2 (Strong Nullstellensatz in Finite Fields) For an arbitrary finite field
Fq, let J ⊆ Fq[x1, . . . , xn] be an ideal. Then

I(V(J)) = J + ⟨x1^q − x1, . . . , xn^q − xn⟩.

6.1.1 Proof of Lemma 3.1

We begin by describing the maximal ideals of F2[x1, . . . , xn]. Recall that

mv def= I(v) = {f ∈ F2[x1, . . . , xn] | f(v) = 0}

is the maximal ideal of F2[x1, . . . , xn] consisting of all functions that vanish on v ∈
F2^n. We will use the notation m̄v to denote the quotient of mv in RC, in cases where
mv ⊃ IC .

Lemma 6.3 mv = ⟨x1 − v1, . . . , xn − vn⟩ ⊂ F2[x1, . . . , xn], and is a radical ideal.

Proof Denote Av = ⟨x1 − v1, . . . , xn − vn⟩, and observe that V(Av) = {v}. It follows


that I (V (Av )) = I (v) = mv . On the other hand, using the Strong Nullstellensatz in
Finite Fields we have

I(V(Av)) = Av + ⟨x1^2 − x1, . . . , xn^2 − xn⟩ = Av,

where the last equality is obtained by observing that, since vi ∈ {0, 1} and xi^2 − xi =
xi(1 − xi), each generator of ⟨x1^2 − x1, . . . , xn^2 − xn⟩ is already contained in Av. We
conclude that Av = mv , and the ideal is radical by Lemma 6.1. 

In the proof of Lemma 3.1, we make use of the following correspondence: for any
quotient ring R/I , the maximal ideals of R/I are exactly the quotients m̄ = m/I ,
where m is a maximal ideal of R that contains I (Atiyah and Macdonald 1969).

Proof of Lemma 3.1 First, recall that because RC is a Boolean ring, Spec(RC ) =
maxSpec(RC ), the set of all maximal ideals of RC . We also know that the maximal
ideals of F2 [x1 , . . . , xn ] are exactly those of the form mv for v ∈ Fn2 . By the corre-
spondence stated above, to show that maxSpec(RC ) = {m̄v | v ∈ C} it suffices to show
mv ⊃ IC if and only if v ∈ C. To see this, note that for each v ∈ C, IC ⊆ mv because,
by definition, all elements of IC are functions that vanish on each v ∈ C. On the other
hand, if v ∉ C then mv ⊉ IC; in particular, the characteristic function ρv ∈ IC for
v ∉ C, but ρv ∉ mv because ρv(v) = 1. Hence, the maximal ideals of RC are exactly
those of the form m̄v for v ∈ C. 

We have thus verified that the points in Spec(RC ) correspond to codewords in C.


This was expected given our original definition of the neural ring, and suggests that
the relations on F2[x1, . . . , xn] imposed by IC are simply relations ensuring that
V(m̄v) = ∅ for all v ∉ C.

6.1.2 Proof of Lemma 3.2

Here we find explicit relations for IC in the case of an arbitrary neural code. Recall
that
ρv = ∏_{i=1}^n ((xi − vi) − 1) = ∏_{i|vi=1} xi ∏_{j|vj=0} (1 − xj),

and that ρv(x) can be thought of as a characteristic function for v, since it satisfies
ρv(v) = 1 and ρv(x) = 0 for any other x ∈ F2^n. This immediately implies that

V(JC) = V({ρv | v ∉ C}) = C.
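
For instance, the characteristic-function property of ρv is easy to check numerically (an illustrative sketch; the function name rho is ours):

```python
from itertools import product

def rho(v, x):
    """rho_v(x) = prod_i ((x_i - v_i) - 1) over F_2; equals 1 iff x == v."""
    out = 1
    for vi, xi in zip(v, x):
        out = out * ((xi - vi - 1) % 2) % 2
    return out

v = (1, 0, 1)
assert rho(v, v) == 1
assert all(rho(v, x) == 0 for x in product((0, 1), repeat=3) if x != v)
```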

We can now prove Lemma 3.2.

Proof of Lemma 3.2 Observe that IC = I (C) = I (V (JC )), since V (JC ) = C. On the
other hand, the Strong Nullstellensatz in Finite Fields implies I(V(JC)) = JC + ⟨x1^2 −
x1, . . . , xn^2 − xn⟩ = JC + B. 

6.2 Proof of Theorem 4.1

Recall that for a given set of receptive fields U = {U1 , . . . , Un } in some stimulus
space X, the ideal IU ⊂ F2[x1, . . . , xn] was defined as

IU def= ⟨{ xσ ∏_{i∈τ} (1 − xi) | Uσ ⊆ ∪_{i∈τ} Ui }⟩.

The Boolean relations are present in IU irrespective of U, as it is always true that Ui ⊆
Ui and this yields the relation xi(1 − xi) for each i. By analogy with our definition

of JC , it makes sense to define an ideal JU which is obtained by stripping away the


Boolean relations. This will then be used in the proof of Theorem 4.1.
Note that if σ ∩ τ ≠ ∅, then for any i ∈ σ ∩ τ we have Uσ ⊆ Ui ⊆ ∪_{j∈τ} Uj, and the
corresponding relation is a multiple of the Boolean relation xi (1 − xi ). We can thus
restrict attention to relations in IU that have σ ∩ τ = ∅, so long as we include sep-
arately the Boolean relations. These observations are summarized by the following
lemma.

Lemma 6.4 IU = JU + ⟨x1^2 − x1, . . . , xn^2 − xn⟩, where

JU def= ⟨{ xσ ∏_{i∈τ} (1 − xi) | σ ∩ τ = ∅ and Uσ ⊆ ∪_{i∈τ} Ui }⟩.

Proof of Theorem 4.1 We will show that JU = JC (U ) (and thus that IU = IC (U ) ) by


showing that each ideal contains the generators of the other.
First, we show that all generating relations of JC (U ) are contained in JU . Recall
that the generators of JC(U) are of the form

ρv = ∏_{i∈supp(v)} xi ∏_{j∉supp(v)} (1 − xj) for v ∉ C(U).

If ρv is a generator of JC(U), then v ∉ C(U) and this implies (by the definition of
C(U)) that Usupp(v) ⊆ ∪_{j∉supp(v)} Uj. Taking σ = supp(v) and τ = [n] \ supp(v), we
have Uσ ⊆ ∪_{j∈τ} Uj with σ ∩ τ = ∅. This in turn tells us (by the definition of JU) that
xσ ∏_{j∈τ} (1 − xj) is a generator of JU. Since ρv = xσ ∏_{j∈τ} (1 − xj) for our choice of
σ and τ, we conclude that ρv ∈ JU. Hence, JC(U) ⊆ JU.
Next, we show that all generating relations of JU are contained in JC(U). If JU
has generator xσ ∏_{i∈τ} (1 − xi), then Uσ ⊆ ∪_{i∈τ} Ui and σ ∩ τ = ∅. This in turn im-
plies that ∩_{i∈σ} Ui \ ∪_{j∈τ} Uj = ∅, and thus (by the definition of C(U)) we have
v ∉ C(U) for any v such that supp(v) ⊇ σ and supp(v) ∩ τ = ∅. It follows that JC(U)
contains the relation x_{supp(v)} ∏_{j∉supp(v)} (1 − xj) for any such v. This includes all re-
lations of the form xσ ∏_{j∈τ} (1 − xj) ∏_{k∉σ∪τ} Pk, where Pk ∈ {xk, 1 − xk}. Taking
f = xσ ∏_{j∈τ} (1 − xj) in Lemma 6.5 (below), we can conclude that JC(U) contains
xσ ∏_{j∈τ} (1 − xj). Hence, JU ⊆ JC(U). 

Lemma 6.5 For any f ∈ k[x1, . . . , xn] and τ ⊆ [n], the ideal ⟨{f ∏_{i∈τ} Pi | Pi ∈
{xi, 1 − xi}}⟩ = ⟨f⟩.

Proof First, denote If(τ) def= ⟨{f ∏_{i∈τ} Pi | Pi ∈ {xi, 1 − xi}}⟩. We wish to prove that
If(τ) = ⟨f⟩, for any τ ⊆ [n]. Clearly, If(τ) ⊆ ⟨f⟩, since every generator of If(τ) is
a multiple of f. We will prove If(τ) ⊇ ⟨f⟩ by induction on |τ|.
If |τ| = 0, then τ = ∅ and If(τ) = ⟨f⟩. If |τ| = 1, so that τ = {i} for some i ∈ [n],
then If(τ) = ⟨f(1 − xi), f xi⟩. Note that f(1 − xi) + f xi = f, so f ∈ If(τ), and
thus If(τ) ⊇ ⟨f⟩.
Now, assume that for some ℓ ≥ 1 we have If(σ) ⊇ ⟨f⟩ for any σ ⊆ [n] with
|σ| ≤ ℓ. If ℓ ≥ n, we are done, so we need only show that if ℓ < n, then If(τ) ⊇ ⟨f⟩

for any τ of size ℓ + 1. Consider τ ⊆ [n] with |τ| = ℓ + 1, and let j ∈ τ be any
element. Define τ′ = τ\{j}, and note that |τ′| = ℓ. By our inductive assumption,
If(τ′) ⊇ ⟨f⟩. We will show that If(τ) ⊇ If(τ′), and hence If(τ) ⊇ ⟨f⟩.
Let g = f ∏_{i∈τ′} Pi be any generator of If(τ′), and observe that f(1 −
xj) ∏_{i∈τ′} Pi and f xj ∏_{i∈τ′} Pi are both generators of If(τ). It follows that their sum,
g, is also in If(τ), and hence g ∈ If(τ) for any generator g of If(τ′). We conclude
that If(τ) ⊇ If(τ′), as desired. 

6.3 Proof of Theorem 4.3

We begin by showing that JU , first defined in Lemma 6.4, can be generated using
the Type 1, Type 2, and Type 3 relations introduced in Sect. 4.2. From the proof of
Theorem 4.1, we know that JU = JC (U ) , so the following lemma in fact shows that
JC (U ) is generated by the Type 1, 2, and 3 relations as well.

Lemma 6.6 For U = {U1, . . . , Un} a collection of sets in a stimulus space X,

JU = ⟨ {xσ | Uσ = ∅}, {∏_{i∈τ} (1 − xi) | X ⊆ ∪_{i∈τ} Ui},
       {xσ ∏_{i∈τ} (1 − xi) | σ, τ ≠ ∅, σ ∩ τ = ∅, Uσ ≠ ∅,
        ∪_{i∈τ} Ui ≠ X, and Uσ ⊆ ∪_{i∈τ} Ui} ⟩.

JU (equivalently, JC (U ) ) is thus generated by the Type 1, Type 3 and Type 2 relations,


respectively.

Proof Recall that in Lemma 6.4 we defined JU as

JU def= ⟨{ xσ ∏_{i∈τ} (1 − xi) | σ ∩ τ = ∅ and Uσ ⊆ ∪_{i∈τ} Ui }⟩.

Observe that if Uσ = ∅, then we can take τ = ∅ to obtain the Type 1 relation xσ ,
where we have used the fact that ∏_{i∈∅} (1 − xi) = 1. Any other relation with Uσ = ∅
and τ ≠ ∅ would be a multiple of xσ. We can thus write:

JU = ⟨ {xσ | Uσ = ∅},
       {xσ ∏_{i∈τ} (1 − xi) | τ ≠ ∅, σ ∩ τ = ∅, Uσ ≠ ∅, and Uσ ⊆ ∪_{i∈τ} Ui} ⟩.

Next, if σ = ∅ in the second set of relations above, then we have the relation
∏_{i∈τ} (1 − xi) with U∅ = X ⊆ ∪_{i∈τ} Ui. Splitting off these Type 3 relations, and re-
moving multiples of them that occur if ∪_{i∈τ} Ui = X, we obtain the desired result. 

Next, we show that JU can be generated by reduced sets of the Type 1, Type 2,
and Type 3 relations given above. First, consider the Type 1 relations in Lemma 6.6,
and observe that if τ ⊆ σ , then xσ is a multiple of xτ . We can thus reduce the set
of Type 1 generators needed by taking only those corresponding to minimal σ with
Uσ = ∅:

⟨{xσ | Uσ = ∅}⟩ = ⟨{xσ | σ is minimal w.r.t. Uσ = ∅}⟩.

Similarly, we find for the Type 3 relations:

⟨{∏_{i∈τ} (1 − xi) | X ⊆ ∪_{i∈τ} Ui}⟩ = ⟨{∏_{i∈τ} (1 − xi) | τ is minimal w.r.t. X ⊆ ∪_{i∈τ} Ui}⟩.

Finally, we reduce the Type 2 generators. If ρ ⊆ σ and xρ ∏_{i∈τ} (1 − xi) ∈ JU, then we
also have xσ ∏_{i∈τ} (1 − xi) ∈ JU. So we can restrict ourselves to only those generators
for which σ is minimal with respect to Uσ ⊆ ∪_{i∈τ} Ui. Similarly, we can reduce to
minimal τ such that Uσ ⊆ ∪_{i∈τ} Ui. In summary:

⟨{xσ ∏_{i∈τ} (1 − xi) | σ, τ ≠ ∅, σ ∩ τ = ∅, Uσ ≠ ∅, ∪_{i∈τ} Ui ≠ X, and Uσ ⊆ ∪_{i∈τ} Ui}⟩
= ⟨{xσ ∏_{i∈τ} (1 − xi) | σ, τ ≠ ∅, σ ∩ τ = ∅, Uσ ≠ ∅,
    ∪_{i∈τ} Ui ≠ X, and σ, τ are each minimal w.r.t. Uσ ⊆ ∪_{i∈τ} Ui}⟩.

We can now prove Theorem 4.3.

Proof of Theorem 4.3 Recall that C = C(U), and that by the proof of Theorem 4.1
we have JC (U ) = JU . By the reductions given above for the Type 1, 2, and 3 gen-
erators, we also know that JU can be reduced to the form given in the statement of
Theorem 4.3. We conclude that JC can be expressed in the desired form.
To see that JC , as given in the statement of Theorem 4.3, is in canonical form,
we must show that the given set of generators is exactly the complete set of min-
imal pseudo-monomials for JC . First, observe that the generators are all pseudo-
monomials. If xσ is one of the Type 1 relations, and xσ ∈ ⟨g⟩ with ⟨xσ⟩ ≠ ⟨g⟩, then
g = ∏_{i∈τ} xi for some τ ⊊ σ. Since Uτ ≠ ∅, however, it follows that g ∉ JC and
hence xσ is a minimal pseudo-monomial of JC. By a similar argument, the Type 2
and Type 3 relations above are also minimal pseudo-monomials in JC.
It remains only to show that there are no additional minimal pseudo-monomials
in JC. Suppose f = xσ ∏_{i∈τ} (1 − xi) is a minimal pseudo-monomial in JC. By
Lemma 4.2, Uσ ⊆ ∪_{i∈τ} Ui and σ ∩ τ = ∅, so f is a generator in the original defini-
tion of JU (Lemma 6.4). Since f is a minimal pseudo-monomial of JC, there does not
exist a g ∈ JC such that g = xσ′ ∏_{i∈τ′} (1 − xi) with either σ′ ⊊ σ or τ′ ⊊ τ. There-
fore, σ and τ are each minimal with respect to Uσ ⊆ ∪_{i∈τ} Ui. We conclude that f
is one of the generators for JC given in the statement of Theorem 4.3. It is a minimal

Type 1 generator if τ = ∅, a minimal Type 3 generator if σ = ∅, and is otherwise a


minimal Type 2 generator. The three sets of minimal generators are disjoint because
the Type 1, Type 2, and Type 3 relations are disjoint, provided X ≠ ∅. 

6.4 Proof of Proposition 4.5

Note that every polynomial obtained by the canonical form algorithm is a pseudo-
monomial of JC . This is because the algorithm constructs products of factors of the
form xi or 1 − xi , and then reduces them in such a way that no index is repeated
in the final product, and there are no powers of any xi or 1 − xi factor; we are thus
guaranteed to end up with pseudo-monomials. Moreover, since the products each
have at least one factor in each prime ideal of the primary decomposition of JC ,
the pseudo-monomials are all in JC . Proposition 4.5 states that this set of pseudo-
monomials is precisely the canonical form CF(JC ).
To prove Proposition 4.5, we will make use of the following technical lemma.
Here, zi, yi ∈ {xi, 1 − xi}, and thus any pseudo-monomial in F2[x1, . . . , xn] is of the
form ∏_{j∈σ} zj for some index set σ ⊆ [n].

Lemma 6.7 If yi1 · · · yim ∈ ⟨zj1, . . . , zjℓ⟩, where {ik} and {jr} are each distinct sets of
indices, then yik = zjr for some k ∈ [m] and r ∈ [ℓ].

Proof Let f = yi1 · · · yim and P = {zj1, . . . , zjℓ}. Since f ∈ ⟨P⟩, then ⟨P⟩ = ⟨P, f⟩,
and so V(⟨P⟩) = V(⟨P, f⟩). We need to show that yik = zjr for some pair of indices
ik, jr. Suppose by way of contradiction that there is no ik, jr such that yik = zjr.
Select a ∈ {0, 1}^n as follows: for each jr ∈ {j1, . . . , jℓ}, let ajr = 0 if zjr = xjr,
and let ajr = 1 if zjr = 1 − xjr; when evaluating at a, we thus have zjr(a) = 0 for
all r ∈ [ℓ]. Next, for each ik ∈ ω def= {i1, . . . , im}\{j1, . . . , jℓ}, let aik = 1 if yik = xik,
and let aik = 0 if yik = 1 − xik, so that yik(a) = 1 for all ik ∈ ω. For any remaining
indices t, let at = 1. Because we have assumed that yik ≠ zjr for any ik, jr pair, we
have for any i ∈ {i1, . . . , im} ∩ {j1, . . . , jℓ} that yi(a) = 1 − zi(a) = 1. It follows that
f(a) = 1.
Now, note that a ∈ V(⟨P⟩) by construction. We must therefore have a ∈
V(⟨P, f⟩), and hence f(a) = 0, a contradiction. We conclude that there must be
some ik, jr with yik = zjr, as desired. 

We can now prove the proposition.

Proof of Proposition 4.5 It suffices to show that after step 4 of the algorithm, the
reduced set M̃(JC) consists entirely of pseudo-monomials of JC, and includes all
minimal pseudo-monomials of JC. If this is true, then after removing multiples of
lower-degree elements in step 5 we are guaranteed to obtain the set of minimal
pseudo-monomials, CF(JC), since it is precisely the nonminimal pseudo-monomials
that will be removed in the final step of the algorithm.
Let JC = ∩_{i=1}^s Pi be the primary decomposition of JC, with each Pi a prime
ideal of the form Pi = ⟨zj1, . . . , zjℓ⟩. Recall that M(JC), as defined in step 3 of the

algorithm, is precisely the set of all polynomials g that are obtained by choosing one
linear factor from the generating set of each Pi :
M(JC ) = {g = zp1 · · · zps | zpi is a linear generator of Pi }.
Furthermore, recall that M̃(JC) is obtained from M(JC) by the reductions in step 4
of the algorithm. Clearly, all elements of M̃(JC) are pseudo-monomials that are con-
tained in JC.
To show that M̃(JC) contains all minimal pseudo-monomials of JC, we will show
that if f ∈ JC is a pseudo-monomial, then there exists another pseudo-monomial
h ∈ M̃(JC) (possibly the same as f) such that h|f. To see this, let f = yi1 · · · yim
be a pseudo-monomial of JC. Then, f ∈ Pi for each i ∈ [s]. For a given Pi =
⟨zj1, . . . , zjℓ⟩, by Lemma 6.7 we have yik = zjr for some k ∈ [m] and r ∈ [ℓ]. In
other words, each prime ideal Pi has a generating term, call it zpi, that appears as one
of the linear factors of f. Setting g = zp1 · · · zps, it is clear that g ∈ M(JC) and that
either g|f, or zpi = zpj for some distinct pair i, j. By removing repeated factors in
g one obtains a pseudo-monomial h ∈ M̃(JC) such that h|g and h|f. If we take f to
be a minimal pseudo-monomial, we find f = h ∈ M̃(JC). 
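
The argument above suggests a direct implementation of the canonical form computation from a primary decomposition. The following sketch is ours (using the same encoding as in Sect. 5.3: a linear factor is a pair (i, s), with s = 1 for xi and s = 0 for 1 − xi); it discards products containing both xi and 1 − xi for some i, since such products cannot reduce to a pseudo-monomial and, by the argument above, are never needed to recover a minimal one.

```python
from itertools import product as cartesian

def canonical_form(primes):
    """CF(J_C) from the primes P_i of a primary decomposition of J_C."""
    M = set()
    for choice in cartesian(*[sorted(P) for P in primes]):
        h = frozenset(choice)                    # repeated factors collapse
        if len({i for i, s in h}) == len(h):     # no x_i, (1 - x_i) conflict
            M.add(h)
    # step 5: keep only minimal pseudo-monomials (divisibility = inclusion)
    return [h for h in M if not any(g < h for g in M)]

# Example 1: primes <x1,x2>, <x1,1-x3>, <1-x2,1-x3>.
primes = [{(1,1),(2,1)}, {(1,1),(3,0)}, {(2,0),(3,0)}]
for h in canonical_form(primes):
    print(sorted(h))   # x1(1-x2), x1(1-x3), x2(1-x3), in some order
```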

6.5 Proof of Lemmas 5.8 and 5.9

Here, we prove Lemmas 5.8 and 5.9, which underlie the primary decomposition al-
gorithm.

Proof of Lemma 5.8 Assume f ∈ ⟨J, z⟩ is a pseudo-monomial. Then f = zi1 zi2 · · · zir,
where zi ∈ {xi, 1 − xi} for each i, and the ik are distinct. Suppose f ∉ ⟨z⟩. This
implies zik ≠ z for all factors appearing in f. We will show that either f ∈ J or
(1 − z)f ∈ J.
Since J is a pseudo-monomial ideal, we can write
 
J = zg1 , . . . , zgk , (1 − z)f1 , . . . , (1 − z)fl , h1 , . . . , hm ,
where the gj , fj , and hj are pseudo-monomials that contain no z or 1 − z term. This
means

k 
l 
m
f = zi1 zi2 · · · zir = z uj gj + (1 − z) v j fj + wj hj + yz,
j =1 j =1 j =1

for polynomials uj , vj , wj , and y ∈ F2 [x1 , . . . , xn ]. Now consider what happens if


we set z = 0 in f:

f|z=0 = (zi1 zi2 · · · zir)|z=0 = ∑_{j=1}^l (vj|z=0) fj + ∑_{j=1}^m (wj|z=0) hj.

Next, observe that after multiplying the above by (1 − z) we obtain an element of J:

(1 − z)f|z=0 = (1 − z) ∑_{j=1}^l (vj|z=0) fj + (1 − z) ∑_{j=1}^m (wj|z=0) hj ∈ J,

since (1 − z)fj ∈ J for j = 1, . . . , l and hj ∈ J for j = 1, . . . , m. There are two


cases:
Case 1: If 1 − z is a factor of f , say zi1 = 1 − z, then f |z=0 = zi2 · · · zir and thus
f = (1 − z)f |z=0 ∈ J .
Case 2: If 1 − z is not a factor of f , then f = f |z=0 . Multiplying by 1 − z we obtain
(1 − z)f ∈ J .
We thus conclude that f ∉ ⟨z⟩ implies f ∈ J or (1 − z)f ∈ J. 

Proof of Lemma 5.9 Write zσ = ∏_{i∈σ} zi. Clearly, ⟨J, zσ⟩ ⊆ ∩_{i∈σ} ⟨J, zi⟩. To see
the reverse inclusion, consider f ∈ ∩_{i∈σ} ⟨J, zi⟩. We have three cases.
Case 1: f ∈ J. Then f ∈ ⟨J, zσ⟩.
Case 2: f ∉ J, but f ∈ ⟨zi⟩ for all i ∈ σ. Then f ∈ ⟨zσ⟩, and hence f ∈ ⟨J, zσ⟩.
Case 3: f ∉ J and f ∉ ⟨zi⟩ for all i ∈ τ ⊂ σ, but f ∈ ⟨zj⟩ for all j ∈ σ \ τ. Without
loss of generality, we can rearrange indices so that τ = {1, . . . , m} for m ≥ 1. By
Lemma 5.8, we have (1 − zi)f ∈ J for all i ∈ τ. We can thus write

f = (1 − z1)f + z1(1 − z2)f + · · · + z1 · · · zm−1(1 − zm)f + z1 · · · zm f.

Observe that the first m terms are each in J. On the other hand, f ∈ ⟨zj⟩ for each
j ∈ σ \ τ implies that the last term is in ⟨zτ⟩ ∩ ⟨zσ\τ⟩ = ⟨zσ⟩. Hence, f ∈ ⟨J, zσ⟩.
We may thus conclude that ∩_{i∈σ} ⟨J, zi⟩ ⊆ ⟨J, zσ⟩, as desired. 

6.6 Proof of Theorem 5.4

Recall that JC is always a proper pseudo-monomial ideal for any nonempty neural
code C ⊆ {0, 1}^n. Theorem 5.4 is thus a direct consequence of the following proposi-
tion.

Proposition 6.8 Suppose J ⊂ F2[x1, . . . , xn] is a proper pseudo-monomial ideal.
Then, J has a unique irredundant primary decomposition of the form J = ∩_{a∈A} pa,
where {pa}a∈A are the minimal primes over J.

Proof By Proposition 5.11, we can always (algorithmically) obtain an irredundant
set P of prime ideals such that J = ∩_{I∈P} I. Furthermore, each I ∈ P has the form
I = ⟨zi1, . . . , zik⟩, where zi ∈ {xi, 1 − xi} for each i. Clearly, these ideals are all
prime ideals of the form pa for a ∈ {0, 1, ∗}^n. It remains only to show that this pri-
mary decomposition is unique, and that the ideals {pa }a∈A are the minimal primes
over J . This is a consequence of some well-known facts summarized in Lemmas 6.9
and 6.10, below. First, observe by Lemma 6.9 that J is a radical ideal. Lemma 6.10
then tells us that the decomposition in terms of minimal primes is the unique irredun-
dant primary decomposition for J . 


Lemma 6.9 If J is the intersection of prime ideals, J = ∩_{i=1}^ℓ pi, then J is a radical
ideal.

Proof Suppose p^n ∈ J. Then p^n ∈ pi for all i ∈ [ℓ], and hence p ∈ pi for all i ∈ [ℓ].
Therefore, p ∈ J. 

The following fact about the primary decomposition of radical ideals is true
over any field, as a consequence of the Lasker–Noether theorems (Cox et al. 1997,
pp. 204–209).

Lemma 6.10 If J is a proper radical ideal, then it has a unique irredundant primary
decomposition consisting of the minimal prime ideals over J .

Appendix 2: Neural Codes on Three Neurons

See Table 1 and Figs. 6 and 7.


Table 1 Forty permutation-inequivalent codes, each containing 000, on three neurons

Label | Code C | Canonical form CF(JC)

A1  | 000, 100, 010, 001, 110, 101, 011, 111 | ∅
A2  | 000, 100, 010, 110, 101, 111 | x3(1 − x1)
A3  | 000, 100, 010, 001, 110, 101, 111 | x2x3(1 − x1)
A4  | 000, 100, 010, 110, 101, 011, 111 | x3(1 − x1)(1 − x2)
A5  | 000, 100, 010, 110, 111 | x3(1 − x1), x3(1 − x2)
A6  | 000, 100, 110, 101, 111 | x2(1 − x1), x3(1 − x1)
A7  | 000, 100, 010, 101, 111 | x3(1 − x1), x1x2(1 − x3)
A8  | 000, 100, 010, 001, 110, 111 | x1x3(1 − x2), x2x3(1 − x1)
A9  | 000, 100, 001, 110, 011, 111 | x3(1 − x2), x2(1 − x1)(1 − x3)
A10 | 000, 100, 010, 101, 011, 111 | x3(1 − x1)(1 − x2), x1x2(1 − x3)
A11 | 000, 100, 110, 101, 011, 111 | x2(1 − x1)(1 − x3), x3(1 − x1)(1 − x2)
A12 | 000, 100, 110, 111 | x3(1 − x1), x3(1 − x2), x2(1 − x1)
A13 | 000, 100, 010, 111 | x3(1 − x1), x3(1 − x2), x1x2(1 − x3)
A14 | 000, 100, 010, 001, 111 | x1x2(1 − x3), x2x3(1 − x1), x1x3(1 − x2)
A15 | 000, 110, 101, 011, 111 | x1(1 − x2)(1 − x3), x2(1 − x1)(1 − x3), x3(1 − x1)(1 − x2)
A16* | 000, 100, 011, 111 | x2(1 − x3), x3(1 − x2)
A17* | 000, 110, 101, 111 | x2(1 − x1), x3(1 − x1), x1(1 − x2)(1 − x3)
A18* | 000, 100, 111 | x2(1 − x1), x2(1 − x3), x3(1 − x1), x3(1 − x2)
A19* | 000, 110, 111 | x3(1 − x1), x3(1 − x2), x1(1 − x2), x2(1 − x1)
A20* | 000, 111 | x1(1 − x2), x2(1 − x3), x3(1 − x1), x1(1 − x3), x2(1 − x1), x3(1 − x2)

B1  | 000, 100, 010, 001, 110, 101 | x2x3
B2  | 000, 100, 010, 110, 101 | x2x3, x3(1 − x1)
B3  | 000, 100, 010, 101, 011 | x1x2, x3(1 − x1)(1 − x2)
B4  | 000, 100, 110, 101 | x2x3, x2(1 − x1), x3(1 − x1)
B5  | 000, 100, 110, 011 | x1x3, x3(1 − x2), x2(1 − x1)(1 − x3)
B6* | 000, 110, 101 | x2x3, x2(1 − x1), x3(1 − x1), x1(1 − x2)(1 − x3)

C1  | 000, 100, 010, 001, 110 | x1x3, x2x3
C2  | 000, 100, 010, 101 | x1x2, x2x3, x3(1 − x1)
C3* | 000, 100, 011 | x1x2, x1x3, x2(1 − x3), x3(1 − x2)

D1  | 000, 100, 010, 001 | x1x2, x2x3, x1x3

E1  | 000, 100, 010, 001, 110, 101, 011 | x1x2x3
E2  | 000, 100, 010, 110, 101, 011 | x1x2x3, x3(1 − x1)(1 − x2)
E3  | 000, 100, 110, 101, 011 | x1x2x3, x2(1 − x1)(1 − x3), x3(1 − x1)(1 − x2)
E4  | 000, 110, 011, 101 | x1x2x3, x1(1 − x2)(1 − x3), x2(1 − x1)(1 − x3), x3(1 − x1)(1 − x2)

F1* | 000, 100, 010, 110 | x3
F2* | 000, 100, 110 | x3, x2(1 − x1)
F3* | 000, 110 | x3, x1(1 − x2), x2(1 − x1)

G1* | 000, 100 | x2, x3

H1* | 000 | x1, x2, x3

I1* | 000, 100, 010 | x3, x1x2
Note: Labels A–I indicate the various families of Type 1 relations present in CF(JC), organized as follows (up to permutation of indices): (A) None, (B) {x1x2}, (C) {x1x2, x2x3}, (D) {x1x2, x2x3, x1x3}, (E) {x1x2x3}, (F) {x1}, (G) {x1, x2}, (H) {x1, x2, x3}, (I) {x1, x2x3}. All codes within the same A–I series share the same simplicial complex Δ(C). The ∗s denote codes that have Ui = ∅ for at least one receptive field (as in the F, G, H, and I series) as well as codes that require U1 = U2 or U1 = U2 ∪ U3 (up to permutation of indices); these are considered to be highly degenerate. The remaining 27 codes are depicted with receptive field diagrams (Fig. 6) and Boolean lattice diagrams (Fig. 7)
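
Each row of Table 1 can be verified by brute force, using the characterization (via Theorem 4.1 and Lemma 4.2, taken as given here) that a pseudo-monomial lies in JC precisely when it vanishes at every codeword. An illustrative sketch of ours, encoding a pseudo-monomial as a tuple with entries 1 (factor xi), 0 (factor 1 − xi), or None (index omitted):

```python
from itertools import product

def cf_brute_force(C, n):
    """Brute-force CF(J_C): the minimal pseudo-monomials vanishing on C."""
    def nonzero_at(t, c):  # f_t(c) != 0 iff c matches every constrained index
        return all(t[i] is None or t[i] == c[i] for i in range(n))

    def divides(s, t):     # s | t iff every factor of s is also a factor of t
        return all(s[i] is None or s[i] == t[i] for i in range(n))

    pms = [t for t in product((0, 1, None), repeat=n)
           if any(x is not None for x in t)             # exclude f = 1
           and not any(nonzero_at(t, c) for c in C)]    # f vanishes on C
    return [t for t in pms
            if not any(s != t and divides(s, t) for s in pms)]

# Row A12 of Table 1: C = {000, 100, 110, 111}.
C = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (1, 1, 1)]
for t in cf_brute_force(C, 3):
    print(t)  # encodings of x3(1 - x1), x3(1 - x2), x2(1 - x1)
```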

Fig. 6 Receptive field diagrams for the 27 non-∗ codes on three neurons listed in Table 1. Codes that admit no realization as a convex RF code are labeled “non-convex.” The code E2 is the one from Lemma 2.2, while A1 and A12 are permutation-equivalent to the codes in Fig. 3A and C, respectively. Deleting the all-zeros codeword from A6 and A4 yields codes permutation-equivalent to those in Fig. 3B and D, respectively

Fig. 7 Boolean lattice diagrams for the 27 non-∗ codes on three neurons listed in Table 1. Interval decompositions (see Sect. 5.2) for each code are depicted in black, while decompositions of code complements, arising from CF(JC), are shown in gray. Thin black lines connect elements of the Boolean lattice that are Hamming distance 1 apart. Note that the lattice in A12 is permutation-equivalent to the one depicted in Fig. 5

References

Atiyah, M. F., & Macdonald, I. G. (1969). Introduction to commutative algebra. Reading: Addison–
Wesley.
Averbeck, B. B., Latham, P. E., & Pouget, A. (2006). Neural correlations, population coding and compu-
tation. Nat. Rev. Neurosci., 7(5), 358–366.
Ben-Yishai, R., Bar-Or, R. L., & Sompolinsky, H. (1995). Theory of orientation tuning in visual cortex.
Proc. Natl. Acad. Sci. USA, 92(9), 3844–3848.
Brown, E. N., Frank, L. M., Tang, D., Quirk, M. C., & Wilson, M. A. (1998). A statistical paradigm
for neural spike train decoding applied to position prediction from ensemble firing patterns of rat
hippocampal place cells. J. Neurosci., 18(18), 7411–7425.
Cox, D., Little, J., & O’Shea, D. (1997). An introduction to computational algebraic geometry and com-
mutative algebra. In Undergraduate texts in mathematics: Ideals, varieties, and algorithms (2nd ed.).
New York: Springer.
Curto, C., & Itskov, V. (2008). Cell groups reveal structure of stimulus space. PLoS Comput. Biol., 4(10).
Danzer, L., Grünbaum, B., & Klee, V. (1963). Helly’s theorem and its relatives. In Proc. sympos. pure
math. (Vol. VII, pp. 101–180). Providence: Am. Math. Soc.
Deneve, S., Latham, P. E., & Pouget, A. (1999). Reading population codes: a neural implementation of
ideal observers. Nat. Neurosci., 2(8), 740–745.
Eisenbud, D., Grayson, D. R., Stillman, M., & Sturmfels, B. (Eds.) (2002). Algorithms and computation
in mathematics: Vol. 8. Computations in algebraic geometry with Macaulay 2. Berlin: Springer.
Hatcher, A. (2002). Algebraic topology. Cambridge: Cambridge University Press.
Jarrah, A., Laubenbacher, R., Stigler, B., & Stillman, M. (2007). Reverse-engineering of polynomial dy-
namical systems. Adv. Appl. Math., 39, 477–489.
Kalai, G. (1984). Characterization of f -vectors of families of convex sets in Rd . I. Necessity of Eckhoff’s
conditions. Isr. J. Math., 48(2–3), 175–195.
Kalai, G. (1986). Characterization of f -vectors of families of convex sets in Rd . II. Sufficiency of Eck-
hoff’s conditions. J. Comb. Theory, Ser. A, 41(2), 167–188.
Ma, W. J., Beck, J. M., Latham, P. E., & Pouget, A. (2006). Bayesian inference with probabilistic popula-
tion codes. Nat. Neurosci., 9(11), 1432–1438.
McNaughton, B. L., Battaglia, F. P., Jensen, O., Moser, E. I., & Moser, M. B. (2006). Path integration and
the neural basis of the ‘cognitive map’. Nat. Rev. Neurosci., 7(8), 663–678.
Miller, E., & Sturmfels, B. (2005). Graduate texts in mathematics: Combinatorial commutative algebra.
Berlin: Springer.
Nirenberg, S., & Latham, P. E. (2003). Decoding neuronal spike trains: how important are correlations?
Proc. Natl. Acad. Sci. USA, 100(12), 7348–7353.
O’Keefe, J., & Dostrovsky, J. (1971). The hippocampus as a spatial map. Preliminary evidence from unit
activity in the freely-moving rat. Brain Res., 34(1), 171–175.
Osborne, L., Palmer, S., Lisberger, S., & Bialek, W. (2008). The neural basis for combinatorial coding in
a cortical population response. J. Neurosci., 28(50), 13522–13531.
Pistone, G., Riccomagno, E., & Wynn, H. P. (2001). Computational commutative algebra in statistics. In
Monographs on statistics and applied probability.: Vol. 89. Algebraic statistics, Boca Raton: Chap-
man & Hall/CRC Press.
Schneidman, E., Berry, M. J., II, Segev, R., & Bialek, W. (2006a). Weak pairwise correlations imply strongly
correlated network states in a neural population. Nature, 440(20), 1007–1012.
Schneidman, E., Puchalla, J., Segev, R., Harris, R., Bialek, W., & Berry, M. J., II (2006b). Synergy from
silence in a combinatorial neural code. arXiv:q-bio.NC/0607017.
Shiu, A., & Sturmfels, B. (2010). Siphons in chemical reaction networks. Bull. Math. Biol., 72(6), 1448–
1463.
Stanley, R. (2004). Progress in mathematics: Combinatorics and commutative algebra. Boston:
Birkhäuser.
Veliz-Cuba, A. (2012). An algebraic approach to reverse engineering finite dynamical systems arising from
biology. SIAM J. Appl. Dyn. Syst., 11(1), 31–48.
Watkins, D. W., & Berkley, M. A. (1974). The orientation selectivity of single neurons in cat striate cortex.
Exp. Brain Res., 19, 433–446.
