
Data mining the functional architecture of the brain’s circuitry

Adam S. Charles∗

arXiv:2501.09684v1 [q-bio.NC] 16 Jan 2025

Abstract

The brain is a highly complex organ consisting of a myriad of subsystems that flexibly interact and adapt over time and context to enable perception, cognition, and behavior. Understanding the multi-scale nature of the brain, i.e., how circuit- and molecular-level interactions build up the fundamental components of brain function, holds incredible potential for developing interventions for neurodegenerative and psychiatric diseases, as well as opening new understanding of our very nature. Historically, technological limitations have forced systems neuroscience to be local in anatomy (localized, small neural populations in single brain areas), in behavior (studying single tasks), in time (focusing on specific stages of learning or development), and in modality (imaging single biological quantities). New developments in neural recording technology and behavioral monitoring now provide the data needed to break free of local neuroscience into global neuroscience: i.e., understanding how the brain's many subsystems interact, adapt, and change across the multitude of behaviors animals and humans must perform to thrive. Specifically, while we have much knowledge of the anatomical architecture of the brain (i.e., the hardware), we are finally approaching the data needed to find the functional architecture and discover the fundamental properties of the software that runs on that hardware. We must take this opportunity to bridge the vast amounts of data to discover this functional architecture, an effort that will face numerous challenges, from low-level data alignment up to high-level questions of interpretable mathematical models of behavior that can synthesize the myriad of datasets together.

1 Introduction

With the constant advancement of new neural recording technologies [19, 12, 35, 41], systems neuroscience has officially joined the era of big data [9, 4]. Simultaneous recordings of tens of neurons have given way to hundreds and thousands [37], with millions of neurons no longer a pipe dream. Moreover, behavioral methods have significantly improved in parallel [42, 29, 28], offering new avenues to train and monitor more complex behaviors across organisms, including freely moving animals during neural imaging [45, 15]. With this explosion in data collection come both opportunities to create a new data-driven view of neural function and challenges at every level, from alignment to interpretability.

This opportunity will allow us to treat the brain as the interconnected system it is. For much of neuroscience history, studies at cellular resolution focused on local areas of the brain: visual neuroscientists looked at the visual cortex, auditory processing was tested in auditory cortex, navigation in hippocampus, emotion and state in amygdala, etc. The brain, however, processes in parallel and distributed ways [33]. Inactivating LIP, an area implicated in decision making, does not necessarily stop an animal from being able to make a decision [20]. Studies of brain loss and sensory loss redouble this observation, showing that the flexible brain substrate can move computations across neural circuits to compensate for loss of tissue or to leverage unused resources [3]. Brain-wide recordings can now provide unbiased cellular-level scans that let us map out the functional architecture: where and how information spreads and transforms throughout the brain.

A functional architecture would provide a roadmap to the general principles underlying the flexibility, robustness, and efficiency of neural computation. It would give us baselines for the core functions that are necessary in healthy brains, which in turn would improve understanding of how observed activity changes in, e.g., neurodegenerative and psychiatric disorders. The current state-of-the-art is to identify brain regions, i.e., general anatomical areas, that have been linked to certain behavioral and cognitive aspects. However, as per the new global-brain observations, "everything is everywhere" [36, 6, 10, 21], and it is not clear that activity changes in a specific anatomical area must relate to a narrow set of functional deficits.

Quantitatively mapping the functional architecture requires bridging a plethora of data: data taken across brain areas, across tasks, and across modalities capturing different biophysical signals, and combining these internal measurements with external observations, e.g., behavior.

∗Department of Biomedical Engineering, Center for Imaging Science (CIS), Mathematical Institute for Data Science (MINDS), Kavli Neuroscience Discovery Institute (NDI), Data Science & AI Institute, Johns Hopkins University, Baltimore, MD. Contact: [email protected].
Figure 1: Challenges in discovering the brain's functional architecture. Left: many brain-wide recordings have found that functional relationships in the brain do not always follow the strict anatomical boundaries. Right: areas of advancing our ability to quantitatively find the functional architecture of the brain include synergizing across brain areas, across behavioral contexts, and across recording modalities.
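Even before any modeling, combining internal measurements with external observations runs into the low-level alignment challenge mentioned above: different modalities arrive on different clocks. A minimal sketch of one common first step, resampling two unevenly clocked signals onto a shared time base (Python with NumPy; the signal names, rates, and test waveforms are illustrative assumptions, not data from this paper):

```python
import numpy as np

# Two modalities on different clocks; names, rates, and waveforms below
# are illustrative assumptions, not measurements from the paper.
rate_fast, rate_slow = 1000.0, 10.0            # e.g., electrophysiology vs. hemodynamics (Hz)
t_fast = np.arange(0.0, 2.0, 1.0 / rate_fast)  # 2 s of samples on each clock
t_slow = np.arange(0.0, 2.0, 1.0 / rate_slow)
x_fast = np.sin(2 * np.pi * 3.0 * t_fast)      # fast-varying stand-in signal
x_slow = np.cos(2 * np.pi * 0.5 * t_slow)      # slow-varying stand-in signal

# Resample both onto a shared 100 Hz clock before any joint analysis.
t_shared = np.arange(0.0, 2.0, 0.01)
x_fast_s = np.interp(t_shared, t_fast, x_fast)
x_slow_s = np.interp(t_shared, t_slow, x_slow)
X = np.stack([x_fast_s, x_slow_s])             # modalities x shared time points
```

Real pipelines must additionally handle clock drift and modality-specific lags (e.g., hemodynamic delays), so simple interpolation is only a starting point.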
Bringing all these different datasets together and synergizing across them to produce a holistic view of a single computational system that adapts and learns is the primary challenge. Here I discuss a number of these challenges, specifically focusing on those in higher-level mathematical modeling that will provide the language we need to merge all these datasets into a common framework.

2 Rethinking systems-level models

When defining the brain's functional architecture, our models of neural dynamics must account for the multiple, distributed systems that comprise brain-wide computations. A primary challenge is that most of the fundamental models in neuroscience do not explicitly seek out these sub-systems. Instead, the dominant mode of modeling dynamics is to consider every recorded unit as a single dimension in a shared state space over which the dynamics are not assumed to display any preference. Put mathematically, if the time-series of unit i is x_i(t), then the vector x(t) = [x_1(t), ..., x_N(t)]^T is typically assumed to evolve as x_t = g(x_{t-1}; θ), i.e., a Markovian model where the parameters of the transition function g(·; θ) are fit to data. Prominent examples of g(·; θ) include linear dynamical systems [11], switched linear dynamical systems [24], and recurrent neural networks [44]. Non-Markovian versions of this core model also exist, including generalized linear models (GLMs) [31] and RNNs with long-term connections, e.g., Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) networks. In all these models, the full system is treated as co-evolving at the same time-scale. Implicitly, the learned network connectivity can be used to identify disjoint systems, e.g., when the linear transition matrix of an RNN is block-diagonal [30]; however, given that even simple RNNs can have a plethora of local minima, there is a danger of over-interpreting parameters that are fit to finite, noisy data.

The ability to explicitly learn modular dynamics is key to identifying the functional architecture's constituent components. Mining the data in an interpretable way is thus critical, and the modularity must be built directly into our models. A number of recent efforts have begun to learn such modular dynamics. One example is decomposed linear dynamical systems (dLDS) [25]. This approach treats dynamics as paths on a manifold x(t) ∈ M ⊂ R^N. The approach then considers the tangent space at each point on the manifold, T(x(t)), and finds a set of linear operators {G_k}_{k=1,...,K} that, when applied to each point, span the respective tangent spaces, i.e., T(x(t)) ⊆ span(G_1 x(t), ..., G_K x(t)) for all x(t). Each operator describes a distinct set of interactions that are driven by how the neural state traverses the manifold. To ensure that these interactions are meaningful, dLDS further assumes that the use of the operators is sparse, i.e., the cardinal directions of each tangent space's basis align with the paths taken by the neural state. Similar approaches also take a manifold-partitioning view [38, 1]. The general idea of finding compositional descriptions of geodesic trajectories in the manifold interpretation of neural data appears to be a powerful tool for identifying modular structure in the brain.

Figure 2: Example neural modeling approaches. A: defining the local geometry of the neural data can provide insight into the key features of the dynamics that define the flows across the manifold. B: dynamical latent states are a key model for combining brain and behavior into a single meaningful system-level model. C: relating disparate datasets requires finding shared and private sources of variance in an interpretable framework.

3 Challenges in multi-dataset analysis

Synthesizing data across the many contexts under which the brain has been recorded is another key challenge in finding the brain's functional architecture. Specifically, data-driven discovery is driven by patterns and correlations in the data, and if a specific task recruits two systems simultaneously at all times, the ensuing analysis will never be able to differentiate between those systems. For example, in a reaction task, the visual and motor systems activate together in a single burst of activity. Meanwhile, in another task the two systems might be used quite differently, e.g., in a decision-making task where the visual system is separated in time and context from the motor reporting.

In fact, the existence of distinct systems is likely driven by the need to flexibly perform multiple tasks. Recent work has studied machine learning systems, specifically recurrent neural networks, to understand the internal dynamics that emerge when a single system is trained to perform multiple tasks [13, 40]. These studies have found that multi-task RNNs do in fact seem to develop internal modules, though ones only visible under extensive additional analysis (e.g., fixed-point analyses).

In neuroscience, finding such modules thus relies on the synthesis of datasets across tasks, i.e., across experimental sessions, individual animals, and even labs. Alignment in terms of neuron-to-neuron matching is thus impossible, and systems-level alignment must instead be performed. For example, recent efforts have aimed at leveraging graph-based approaches [8] for multi-matrix (or multi-tensor) decompositions where shared factors can be constrained to represent similar functions, providing a key link between datasets [27, 26]. This link can be, e.g., in neurons, if highly overlapping populations are observed across contexts/tasks. Alternatively, the link can be in the trial structure for similar behaviors across animals. While promising, these approaches make core assumptions about the shared trial or neural structure, and different assumptions will likely be needed in each analysis.

4 Challenges in interpretability

Thus far I have avoided mention of the modern artificial neural network (ANN) based AI and ML methods that now form the centerpieces of large-scale data mining in other applications. This is because the black-box nature of ANNs prevents the types of interpretability necessary for scientific discovery. Scientists need to find relationships between variables that extrapolate our understanding and suggest broader relationships between the objects of study. Thus analysis methods such as manifold analysis, causal analysis, and simpler linear systems tend to still be used extensively despite the increased ease of training and deploying ANNs. More specifically, ANNs sacrifice interpretability for expressivity: while they interpolate well on the domain of the data provided, they do not extrapolate beyond those confines. This is not to say that no ANN tools can aid scientists in learning about the brain. For example, in variational autoencoders, sparsity and independence in
the latent layers have been used to promote a disentangling of the data representation that produces more interpretable representations [7, 16]. Moreover, emerging approaches in explainable AI, e.g., Shapley analysis [39] and others [5, 14], can identify key features in the dataset that drive the ANNs. Often, however, these latter approaches do not provide the extrapolation necessary for scientific discovery; instead, they are self-contained explanations of the original data.

In dynamical systems, there are similar trade-offs between expressivity and interpretability. For example, large RNNs with LSTM and GRU nodes can predict future data to very high accuracy; however, the learned RNN parameters are not unique and are thus limited in the insights that can be gleaned about the system's internal interactions. Moreover, nonlinear systems are now being described in latent spaces, i.e., low-dimensional representations of the neural data x(t) = Φ(z(t)). When the neural data is a nonlinear function of the latents, i.e., Φ(·) is nonlinear, additionally including nonlinear dynamics results in unidentifiability up to an invertible function h(·). Specifically, if z(t) = g(z(t-1)) and x(t) = Φ(z(t)), then we can define Φ̃ = Φ ∘ h, z̃(t) = h^{-1}(z(t)), and g̃ = h^{-1} ∘ g ∘ h, yielding a second solution (Φ̃, g̃, z̃(t)) that describes the data equally well. Thus, theoretical advances are also needed to understand when such combinations of extremely expressive models create statistically unidentifiable models.

5 Challenges in multi-modal data

Finally, the brain is not just a set of pyramidal neurons exchanging electrical spikes. There is a rich biophysical infrastructure of neurons, astrocytes, vasculature, and a plethora of neurotransmitters that modulate neural function beyond direct connections [32, 43]. These additional structures are a part of the brain's computation and should be included in the functional architecture. Each of these additional signals can be measured using different technologies and resolutions: fMRI or functional ultrasound for hemodynamics, optical imaging for neurotransmitters, etc. Moreover, behavioral monitoring provides yet another mode that captures the environment and the eventual output of the brain: the actions taken in the world.

Synthesizing across data types requires a single model that describes multiple modalities. This model class, called data fusion, joint modeling, or multi-modal modeling (depending on the community), is often defined by a latent state z such that each mode x_1 and x_2 can be "read out" from the latent state, e.g., x_1 = f_1(z) and x_2 = f_2(z). Approaches to extract the shared latent state span from linear Canonical Correlation Analysis (CCA) [18] to deep-learning based extensions (deepCCA) [2] and non-parametric manifold learning approaches [23]. While such methods can identify one such joint representation, the relationship between multiple datasets is often more nuanced, with some information in one mode that is not in the other; e.g., voltage data has temporal precision not present in hemodynamics, while hemodynamics covers a larger spatial extent. Thus, a more complete view requires separating the latent space into shared and private information, i.e., z → {z_s, z_1, z_2}, where z_s are the variables representative of the shared information while z_1 and z_2 are the private information representations for x_1 and x_2, respectively.

ANN architectures, e.g., cross-encoders with private paths, can learn private latents. However, their high expressivity often causes private information to leak into the shared variables and vice versa. This leakage, while not necessarily damaging in engineering applications, can cause erroneous scientific conclusions about shared brain function. Recent methods leverage "butterfly" architectures that pair multiple cross-encoders with adversarial predictions to minimize such leakage [22].

Beyond static comparisons, another active area that can start shedding light on the functional architecture is the joint encoding of dynamical time-series information [34, 17]. The general conceptual framework of the above holds; however, these methods are often phrased as generative models where the latent representation evolves dynamically in time, and both brain and behavior data can be extracted from the same states. The same challenges apply to the dynamical-systems framework in terms of making sure the shared and private information are fully disentangled.

6 Conclusion

I aim here to lay out a key opportunity in mining the depths of now-available neural data: discovering the brain's functional architecture. In this endeavor, the field will have to solve, at a minimum, the aforementioned challenges, specifically 1) the synthesis of data collected across brain areas, behaviors, and modalities, 2) the synthesis of brain and behavior data, and 3) the development of interpretable AI that goes beyond the explainable AI currently used in engineering applications.

In the emerging solutions, one theme is the importance of data geometry, specifically going beyond topology and into how the curvature and tangent spaces relate to dynamics. Another theme is the importance of statistically independent representations, which relates to the sparsity that is enjoying a rebound in use for its ability to induce interpretability in regression-type problems. These advances and more will hopefully soon provide new insights into brain function.
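As a small numerical companion to the identifiability issue raised in Section 4, the sketch below (NumPy; the particular choices of g, Φ, and h are illustrative assumptions, not from this paper) constructs a latent dynamical system and the alternative solution (Φ ∘ h, h^{-1} ∘ g ∘ h, h^{-1}(z)) and checks that both generate identical observations:

```python
import numpy as np

# Latent dynamics z(t) = g(z(t-1)) and observation map x(t) = Phi(z(t)).
# All functional forms here are illustrative choices, not taken from the paper.
def g(z):
    return np.tanh(0.9 * z)                        # contractive nonlinear dynamics

def Phi(z):
    return np.array([z + 0.1 * z**3, np.sin(z)])   # nonlinear readout, R -> R^2

# Any invertible h(.) yields an equally valid second solution:
# Phi_tilde = Phi o h, g_tilde = h^{-1} o g o h, z_tilde(t) = h^{-1}(z(t)).
def h(z):
    return z**3                                    # strictly monotonic, hence invertible

def h_inv(z):
    return np.cbrt(z)

def Phi_tilde(z):
    return Phi(h(z))

def g_tilde(z):
    return h_inv(g(h(z)))

# Roll out both parameterizations and compare the generated observations.
z, z_tilde = 0.7, h_inv(0.7)
max_diff = 0.0
for _ in range(20):
    max_diff = max(max_diff, np.max(np.abs(Phi(z) - Phi_tilde(z_tilde))))
    z, z_tilde = g(z), g_tilde(z_tilde)
```

Since h can be any invertible map, infinitely many such solutions exist, which is one reason additional structure (sparsity, modularity, geometry) is needed to pin down a scientifically meaningful parameterization.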
References

[1] Francisco Acosta, Colin Conwell, Sophia Sanborn, David Klindt, and Nina Miolane. Relating representational geometry to cortical geometry in the visual cortex. NeurIPS Workshop on Unifying Representations, 2023.
[2] Galen Andrew, Raman Arora, Jeff Bilmes, and Karen Livescu. Deep canonical correlation analysis. In International Conference on Machine Learning, pages 1247–1255. PMLR, 2013.
[3] Marina Bedny, Alvaro Pascual-Leone, David Dodell-Feder, Evelina Fedorenko, and Rebecca Saxe. Language processing in the occipital cortex of congenitally blind adults. Proceedings of the National Academy of Sciences, 108(11):4429–4434, 2011.
[4] Hadas Benisty, Alexander Song, Gal Mishne, and Adam S Charles. Review of data processing of functional optical microscopy for neuroscience. Neurophotonics, 9(4):041402, 2022.
[5] Beepul Bharti, Paul Yi, and Jeremias Sulam. Sufficient and necessary explanations (and what lies in between). arXiv preprint arXiv:2409.20427, 2024.
[6] Adrian G Bondy, Julie A Charlton, Thomas Zhihao Luo, Charles D Kopec, Wynne M Stagnaro, Sarah Jo C Venditto, Laura Lynch, Sanjeev Janarthanan, Stefan N Oline, Timothy D Harris, et al. Coordinated cross-brain activity during accumulation of sensory evidence and decision commitment. bioRxiv, 2024.
[7] Christopher P Burgess, Irina Higgins, Arka Pal, Loic Matthey, Nick Watters, Guillaume Desjardins, and Alexander Lerchner. Understanding disentangling in β-VAE. arXiv preprint arXiv:1804.03599, 2018.
[8] Adam S Charles, Nathan Cermak, Rifqi O Affan, Benjamin B Scott, Jackie Schiller, and Gal Mishne. GraFT: Graph filtered temporal dictionary learning for functional neural imaging. IEEE Transactions on Image Processing, 31:3509–3524, 2022.
[9] Adam S Charles, Benjamin Falk, Nicholas Turner, et al. Toward community-driven big open brain science: open big data and tools for structure, function, and genetics. Annual Review of Neuroscience, 43:441–464, 2020.
[10] Susu Chen, Yi Liu, Ziyue Aiden Wang, Jennifer Colonell, Liu D Liu, Han Hou, Nai-Wen Tien, Tim Wang, Timothy Harris, Shaul Druckmann, et al. Brain-wide neural activity underlying memory-guided movement. Cell, 187(3):676–691, 2024.
[11] Mark M Churchland, John P Cunningham, Matthew T Kaufman, Justin D Foster, Paul Nuyujukian, Stephen I Ryu, and Krishna V Shenoy. Neural population dynamics during reaching. Nature, 487(7405):51–56, 2012.
[12] Jeffery Demas, Jason Manley, Frank Tejera, Hyewon Kim, Francisca Martinez Traub, Brandon Chen, and Alipasha Vaziri. High-speed, cortex-wide volumetric recording of neuroactivity at cellular resolution using light beads microscopy. bioRxiv, 2021.
[13] Laura N Driscoll, Krishna Shenoy, and David Sussillo. Flexible multitask computation in recurrent networks utilizes shared dynamical motifs. Nature Neuroscience, 27(7):1349–1363, 2024.
[14] Rudresh Dwivedi, Devam Dave, Het Naik, Smiti Singhal, Rana Omer, Pankesh Patel, Bin Qian, Zhenyu Wen, Tejal Shah, Graham Morgan, et al. Explainable AI (XAI): Core ideas, techniques, and solutions. ACM Computing Surveys, 55(9):1–33, 2023.
[15] Ahmed El Hady, Daniel Takahashi, Ruolan Sun, Oluwateniola Akinwale, Tyler Boyd-Meredith, Yisi Zhang, Adam S Charles, and Carlos D Brody. Chronic brain functional ultrasound imaging in freely moving rodents performing cognitive tasks. Journal of Neuroscience Methods, 403:110033, 2024.
[16] Victor Geadah, Gabriel Barello, Daniel Greenidge, Adam S Charles, and Jonathan W Pillow. Sparse-coding variational autoencoders. Neural Computation, pages 1–31, 2024.
[17] Rabia Gondur, Usama Bin Sikandar, Evan Schaffer, Mikio Christian Aoi, and Stephen L Keeley. Multi-modal Gaussian process variational autoencoders for neural and behavioral data. arXiv preprint arXiv:2310.03111, 2023.
[18] David R Hardoon, Sandor Szedmak, and John Shawe-Taylor. Canonical correlation analysis: An overview with application to learning methods. Neural Computation, 16(12):2639–2664, 2004.
[19] James J Jun, Nicholas A Steinmetz, Joshua H Siegle, et al. Fully integrated silicon probes for high-density recording of neural activity. Nature, 551(7679):232, 2017.
[20] Leor N Katz, Jacob L Yates, Jonathan W Pillow, and Alexander C Huk. Dissociated functional significance of decision-related activity in the primate dorsal stream. Nature, 535(7611):285–288, 2016.
[21] Sue Ann Koay, Adam S Charles, Stephan Y Thiberge, Carlos Brody, and David W Tank. Sequential and efficient neural-population coding of complex task information. bioRxiv, page 801654, 2021.
[22] Sai Koukuntla, Joshua B Julian, Jesse C Kaminsky, Manuel Schottdorf, David W Tank, Carlos D Brody, and Adam S Charles. Unsupervised discovery of the shared and private geometry in multi-view data. arXiv preprint arXiv:2408.12091, 2024.
[23] Roy R Lederman and Ronen Talmon. Learning the geometry of common latent variables using alternating-diffusion. Applied and Computational Harmonic Analysis, 44(3):509–536, 2018.
[24] Scott Linderman, Matthew Johnson, Andrew Miller, Ryan Adams, David Blei, and Liam Paninski. Bayesian learning and inference in recurrent switching linear dynamical systems. In Artificial Intelligence and Statistics, pages 914–922. PMLR, 2017.
[25] Noga Mudrik, Yenho Chen, Eva Yezerets, Christopher J Rozell, and Adam S Charles. Decomposed linear dynamical systems (dLDS) for learning the latent components of neural dynamics. Journal of Machine Learning Research, 25(59):1–44, 2024.
[26] Noga Mudrik, Ryan Ly, Oliver Ruebel, and Adam S Charles. Creimbo: Cross ensemble interactions in multi-view brain observations. arXiv preprint arXiv:2405.17395, 2024.
[27] Noga Mudrik, Gal Mishne, and Adam S Charles. Sibblings: Similarity-driven building-block inference using graphs across states. International Conference on Machine Learning (ICML), 2024.
[28] Tanmay Nath, Alexander Mathis, An Chi Chen, Amir Patel, Matthias Bethge, and Mackenzie Weygandt Mathis. Using DeepLabCut for 3D markerless pose estimation across species and behaviors. Nature Protocols, 14(7):2152–2176, 2019.
[29] Talmo D Pereira, Nathaniel Tabris, Junyu Li, Shruthi Ravindranath, Eleni S Papadoyannis, Z Yan Wang, David M Turner, Grace McKenzie-Smith, Sarah D Kocher, Annegret L Falkner, et al. SLEAP: Multi-animal pose tracking. bioRxiv, 2020.
[30] Matthew G Perich and Kanaka Rajan. Rethinking brain-wide interactions through multi-region 'network of networks' models. Current Opinion in Neurobiology, 65:146–151, 2020.
[31] Jonathan W Pillow, Jonathon Shlens, Liam Paninski, Alexander Sher, Alan M Litke, EJ Chichilnisky, and Eero P Simoncelli. Spatio-temporal correlations and visual signalling in a complete neuronal population. Nature, 454(7207):995–999, 2008.
[32] Francesco Randi, Anuj K Sharma, Sophie Dvali, and Andrew M Leifer. Neural signal propagation atlas of Caenorhabditis elegans. Nature, 623(7986):406–414, 2023.
[33] David E Rumelhart, Geoffrey E Hinton, James L McClelland, et al. A general framework for parallel distributed processing. Parallel Distributed Processing: Explorations in the Microstructure of Cognition, 1:45–76, 1986.
[34] Omid G Sani, Hamidreza Abbaspourazad, Yan T Wong, Bijan Pesaran, and Maryam M Shanechi. Modeling behaviorally relevant neural dynamics enabled by preferential subspace identification. Nature Neuroscience, 24(1):140–149, 2021.
[35] A. Song, A. S. Charles, S. A. Koay, J. L. Gauthier, S. Y. Thiberge, J. W. Pillow, and D. W. Tank. Volumetric two-photon imaging of neurons using stereoscopy (vTwINS). Nature Methods, 14(4):420, 2017.
[36] Nicholas A Steinmetz, Peter Zatka-Haas, Matteo Carandini, and Kenneth D Harris. Distributed coding of choice, action and engagement across the mouse brain. Nature, 576(7786):266–273, 2019.
[37] Ian H Stevenson and Konrad P Kording. How advances in neural recording affect data analysis. Nature Neuroscience, 14(2):139–142, 2011.
[38] Sina Tafazoli, Flora M Bouchacourt, Adel Ardalan, Nikola T Markov, Motoaki Uchimura, Marcelo G Mattar, Nathaniel D Daw, and Timothy J Buschman. Building compositional tasks with shared neural subspaces. bioRxiv, 2024.
[39] Jacopo Teneggi, Alexandre Luster, and Jeremias Sulam. Fast hierarchical games for image explanations. IEEE Transactions on Pattern Analysis and Machine Intelligence, 45(4):4494–4503, 2022.
[40] Pantelis Vafidis, Aman Bhargava, and Antonio Rangel. Disentangling representations in RNNs through multi-task learning. arXiv preprint arXiv:2407.11249, 2024.
[41] Venkatakaushik Voleti, Kripa B Patel, Wenze Li, Citlali Perez Campos, Srinidhi Bharadwaj, Hang Yu, Caitlin Ford, Malte J Casper, Richard Wenwei Yan, Wenxuan Liang, et al. Real-time volumetric microscopy of in vivo dynamics and large-scale samples with SCAPE 2.0. Nature Methods, 16(10):1054–1062, 2019.
[42] Anqi Wu, Estefany Kelly Buchanan, Matthew Whiteway, Michael Schartner, Guido Meijer, Jean-Paul Noel, Erica Rodriguez, Claire Everett, Amy Norovich, Evan Schaffer, et al. Deep Graph Pose: a semi-supervised deep graphical model for improved animal pose tracking. Advances in Neural Information Processing Systems, 33:6040–6052, 2020.
[43] Eva Yezerets, Noga Mudrik, and Adam Charles. Decomposed linear dynamical systems (dLDS) models reveal context-dependent dynamic connectivity in C. elegans. bioRxiv, 2024.
[44] Byron M Yu, Afsheen Afshar, Gopal Santhanam, Stephen Ryu, Krishna V Shenoy, and Maneesh Sahani. Extracting dynamical structure embedded in neural activity. Advances in Neural Information Processing Systems, 18, 2005.
[45] Weijian Zong, Horst A Obenhaus, Emilie R Skytøen, Hanna Eneqvist, Nienke L de Jong, Ruben Vale, Marina R Jorge, May-Britt Moser, and Edvard I Moser. Large-scale two-photon calcium imaging in freely moving mice. Cell, 185(7):1240–1256, 2022.