Principles of Neural Design

Ebook · 1,015 pages · 10 hours

About this ebook

Two distinguished neuroscientists distil general principles from more than a century of scientific study, “reverse engineering” the brain to understand its design.

Neuroscience research has exploded, with more than fifty thousand neuroscientists applying increasingly advanced methods. A mountain of new facts and mechanisms has emerged. And yet a principled framework to organize this knowledge has been missing. In this book, Peter Sterling and Simon Laughlin, two leading neuroscientists, strive to fill this gap, outlining a set of organizing principles to explain the whys of neural design that allow the brain to compute so efficiently.

Setting out to “reverse engineer” the brain—disassembling it to understand it—Sterling and Laughlin first consider why an animal should need a brain, tracing computational abilities from bacterium to protozoan to worm. They examine bigger brains and the advantages of “anticipatory regulation”; identify constraints on neural design and the need to “nanofy”; and demonstrate the routes to efficiency in an integrated molecular system, phototransduction. They show that the principles of neural design at finer scales and lower levels apply at larger scales and higher levels; describe neural wiring efficiency; and discuss learning as a principle of biological design that includes “save only what is needed.”

Sterling and Laughlin avoid speculation about how the brain might work and endeavor to make sense of what is already known. Their distinctive contribution is to gather a coherent set of basic rules and exemplify them across spatial and functional scales.

Language: English
Publisher: The MIT Press
Release date: June 12, 2015
ISBN: 9780262327329


    Book preview

    Principles of Neural Design

    Principles

    Compute with chemistry

    Compute directly with analog primitives

    Combine analog and pulsatile processing

    Sparsify

    Send only what is needed

    Send at the lowest acceptable rate

    Minimize wire

    Make neural components irreducibly small

    Complicate

    Adapt, match, learn, and forget

    Principles of Neural Design

    Peter Sterling and Simon Laughlin

    The MIT Press

    Cambridge, Massachusetts

    London, England

    First MIT Press paperback edition, 2017

    © 2015 Massachusetts Institute of Technology

    All rights reserved. No part of this book may be reproduced in any form by any electronic or mechanical means (including photocopying, recording, or information storage and retrieval) without permission in writing from the publisher.

    MIT Press books may be purchased at special quantity discounts for business or sales promotional use. For information, please email [email protected].

    This book was set in Stone Sans and Stone Serif by Toppan Best-set Premedia Limited.

    Library of Congress Cataloging-in-Publication Data

    Sterling, Peter (Professor of neuroscience), author.

    Principles of neural design / Peter Sterling and Simon Laughlin.

    p.; cm.

    Includes bibliographical references and index.

    ISBN 978-0-262-02870-7 (hardcover : alk. paper), 978-0-262-53468-0 (pb.)

    I. Laughlin, Simon, author. II. Title.

    [DNLM: 1. Brain—physiology. 2. Learning. 3. Neural Pathways. WL 300]

    QP376

    612.8’2—dc23

    2014031498

    10 9 8 7 6 5

    For Sally Zigmond and Barbara Laughlin

    Contents

    Preface

    Acknowledgments

    Introduction

    1 What Engineers Know about Design

    2 Why an Animal Needs a Brain

    3 Why a Bigger Brain?

    4 How Bigger Brains Are Organized

    5 Information Processing: From Molecules to Molecular Circuits

    6 Information Processing in Protein Circuits

    7 Design of Neurons

    8 How Photoreceptors Optimize the Capture of Visual Information

    9 The Fly Lamina: An Efficient Interface for High-Speed Vision

    10 Design of Neural Circuits: Recoding Analogue Signals to Pulsatile

    11 Principles of Retinal Design

    12 Beyond the Retina: Pathways to Perception and Action

    13 Principles of Efficient Wiring

    14 Learning as Design/Design of Learning

    15 Summary and Conclusions

    Principles of Neural Design

    Notes

    References

    Index

    List of figures

    Introduction

    Figure I.1 How do neural circuits use space and power so efficiently? Computer:…

    Chapter 2

    Figure 2.1 Three organisms of increasing size: bacterium, protozoan, and a nema…

    Figure 2.2 The lac operon: a molecular mechanism that discriminates between pat…

    Figure 2.3 E. coli's biased random walk. By moving forward more and turning les…

    Figure 2.4 Paramecium's avoidance response: behavior and electrical mechanism. …

    Figure 2.5 C. elegans locomotion matches the terrain and adapts to viscosity. S…

    Figure 2.6 Neural circuit that bends the worm. Excitatory motor neurons (DB, VB…

    Figure 2.7 The circuit for aversive behavior. Mechanosensory neurons in the nos…

    Figure 2.8 C. elegans. Spoke and hub circuit controls solitary versus social be…

    Figure 2.9 C. elegans connectome reconstructed from serial sections photographe…

    Chapter 3

    Figure 3.1 Mammalian and insect brains share many broad aspects of design. Uppe…

    Figure 3.2 Large brains accomplish the same broad tasks. Note that inner and ou…

    Figure 3.3 Internal systems match behavior. Arterial pressure fluctuates with d…

    Figure 3.4 Adapt, match, trade. Upper: Adapt response capacity to load. Every s…

    Figure 3.5 Mathematics and biophysics govern the representational capacity of s…

    Figure 3.6 Law of diminishing returns. Doubling information rate of retinal gan…

    Chapter 4

    Figure 4.1 Brain's master clock (suprachiasmatic nucleus) informs a network of …

    Figure 4.2 Fiber tracts that transmit summaries share an economical design. The…

    Figure 4.3 Wireless regulation broadcasts slow signals to efficiently couple in…

    Figure 4.4 Longitudinal section through rat brain. This section shows relative …

    Figure 4.5 Efficient wiring for integrated movement. Upper: Cross section throu…

    Figure 4.6 Unit cost of sending information differs greatly across senses. Uppe…

    Figure 4.7 Mormyrid brain greatly expands cerebellar structures. Upper: Electro…

    Figure 4.8 Speech uses lower frequencies and thus finer axons. Upper: Axons fro…

    Figure 4.9 Design of sampling arrays. Fine sampling required for spatial acuity…

    Figure 4.10 Rat brain in horizontal section. Note that striatum lies nearest to…

    Figure 4.11 Frontal view of fly brain shows prominent areas devoted to specific…

    Figure 4.12 The visual system is deep and maps spatial position. The olfactory …

    Figure 4.13 Central complex maps horizontal lines of sight. Protocerebral bridg…

    Figure 4.14 Distribution of fiber diameters in insect nerve cord. This distribu…

    Figure 4.15 CD1, the neuron that prevents a cricket being deafened by its own c…

    Chapter 5

    Figure 5.1 Shannon's general communication system maps onto communication betwe…

    Figure 5.2 Two ways to improve efficiency with which signal states represent in…

    Figure 5.3 Signal range, noise, and response dynamics determine the information…

    Figure 5.4 Protein structure, conformational state, energy landscape, and allos…

    Figure 5.5 The allosteric protein as a finite-state machine. How a sequence of …

    Figure 5.6 β2 adrenergic receptor and its G protein use allostery to operate as…

    Chapter 6

    Figure 6.1 Circuit for cascade amplifier: silicon versus protein. In silicon, a…

    Figure 6.2 Input/output (I/O) function generated by binding kinetics performs t…

    Figure 6.3 Cooperativity changes the input/output (I/O) function generated by b…

    Figure 6.4 Computation by chemical circuits. Left: Circuits that divide, calcul…

    Figure 6.5 Optimizing the noise reducer of last resort—an array of M identical …

    Figure 6.6 An ion channel is a large protein with a pore that conducts ions acr…

    Figure 6.7 Concentration gradients drive ions through channels that open and cl…

    Figure 6.8 The simple resistor–capacitor (RC) circuit formed by ion channels in…

    Figure 6.9 Input/output (I/O) function generated by the basic electrical circui…

    Figure 6.10 Voltage-gated sodium channels and voltage-gated potassium channels …

    Chapter 7

    Figure 7.1 Neurons and glial cells of cerebellar cortex. Neuron types shown her…

    Figure 7.2 Blood vessels distribute densely in gray matter with even mesh. The …

    Figure 7.3 Fusion of one synaptic vesicle briefly raises the concentration of t…

    Figure 7.4 Driving forces on key ions at the neuron's resting potential. Calciu…

    Figure 7.5 Postsynaptic receptor clusters from different synapse types span a 1…

    Figure 7.6 Neurons use different types of receptor to encode a range of tempora…

    Figure 7.7 Passive transmission by dendrite. Upper left: Passive cable modeled …

    Figure 7.8 Purkinje cell dendrites studded with spines of varied morphology. Re…

    Figure 7.9 Local computing by chemical axodendrodendritic synapses. Axon termin…

    Figure 7.10 Local computing via electrical synapse between two basket cell dend…

    Figure 7.11 Two types of inhibitory synapse at the Purkinje cell axon initial s…

    Figure 7.12 Polyaxonal amacrine neuron reverses the standard polarized design. …

    Figure 7.13 Myelination saves energy as well as conduction time. Left: In myeli…

    Figure 7.14 Wiring diagram of cerebellar cortex. Granule cell (GC) integrates e…

    Figure 7.15 Largest cerebellar neuron occupies more than a 1,000-fold greater v…

    Figure 7.16 A large terminal may contact many small dendrites without interveni…

    Figure 7.17 Parallel fiber synapse to Purkinje cell spine is ensheathed by glia…

    Figure 7.18 Energy costs by cell type. Left: Purkinje cell is the most expensiv…

    Figure 7.19 Energy costs by cell function: Granule cell versus Purkinje cell. r…

    Figure 7.20 Excitatory neurons in cerebellar cortex cost nearly fourfold more t…

    Figure 7.21 Input layer to cerebellar cortex consumes most energy. Intermediate…

    Chapter 8

    Figure 8.1 Mammalian rod and fly photoreceptor amplify the energy of a single p…

    Figure 8.2 Baboon in starlight. A mammal needs to capture this photon-sparse im…

    Figure 8.3 How a mammalian rod captures a photon. Left: Isolated mammalian rod …

    Figure 8.4 As the cGMP concentration falls, channels close more sharply to caus…

    Figure 8.5 How a rod transmits 0 or 1. Left: Early stages of rod circuit. n rod…

    Figure 8.6 Baboon in daylight. Photons arriving at far higher rates than starli…

    Figure 8.7 Cone single photon response is smaller than the rod's but faster. Le…

    Figure 8.8 Chemical amplification in a mouse rod uses far less energy than elec…

    Figure 8.9 A microvillus contains the complete transduction circuit and generat…

    Figure 8.10 A faster response codes higher frequency signals. Left: Response to…

    Figure 8.11 Cone and fly photoreceptors encode contrast using the same analogue…

    Figure 8.12 Bit rate, energy cost, and energy efficiency in fly photoreceptors.…

    Figure 8.13 Fly photoreceptor increases information capacity by increasing band…

    Figure 8.14 A photoreceptor must increase both bandwidth and S/N to retrieve sp…

    Figure 8.15 Insect photoreceptors select voltage-gated channels from parts list…

    Chapter 9

    Figure 9.1 Layout and wiring of fly lamina. Upper: Section through head of hous…

    Figure 9.2 The neurons that comprise a lamina cartridge. Upper: Each shown in a…

    Figure 9.3 Lamina neurons, synapses and circuits. Upper left: A cross section t…

    Figure 9.4 Analogue responses to light stimulus are transformed as they pass fr…

    Figure 9.5 Photoreceptor output synapses process analogue signals as they are t…

    Figure 9.6 Energy cost per bit rises with bit rate in analogue neurons, and in …

    Figure 9.7 Spatial and temporal predictive coding by a large monopolar cell (LM…

    Figure 9.8 Extracellular field potential in lamina cartridge subtracts from pre…

    Figure 9.9 LMC axon is a matched filter that selectively attenuates noise as it…

    Figure 9.10 Using an input's cumulative probability distribution to code an out…

    Figure 9.11 LMC response dynamics adapt to input statistics to maximize transmi…

    Chapter 10

    Figure 10.1 Analogue sensors recode to spikes at different stages. Smell and va…

    Figure 10.2 Sensor axon caliber trades off with axon number. Axon diameter vari…

    Figure 10.3 Olfactory and skin mechanosensors restrict their information rates,…

    Figure 10.4 Auditory hair cell, transducing high frequencies, captures too much…

    Figure 10.5 Vestibular hair cells, transducing low frequencies, can sum their a…

    Chapter 11

    Figure 11.1 Vertical slice through monkey retina. The arrow indicates the optic…

    Figure 11.2 Quantal rates step down from photon input to ganglion cell output. …

    Figure 11.3 Section through cone synaptic terminal from monkey fovea. Gap junct…

    Figure 11.4 Cone electrical coupling reduces noise from phototransduction. Uppe…

    Figure 11.5 Functional architecture for filter that removes low spatial and tem…

    Figure 11.6 A local circuit computes the cone terminal's difference-of-Gaussian…

    Figure 11.7 Horizontal cell responds to dark and bright equally and linearly at…

    Figure 11.8 Active zone architectures match functions. Inner hair cell: Synapti…

    Figure 11.9 Cones match resources to information capacity. Upper: Peripheral co…

    Figure 11.10 Diffusional filtering at the cone synapse. Upper: Horizontal cell …

    Figure 11.11 The cone subdivides its large information packet among multiple bi…

    Figure 11.12 Bipolar types with higher information rates use more active zones …

    Figure 11.13 Bipolar cell outputs simultaneously sparsify and rectify. Dark and…

    Figure 11.14 Feedback enhances timing precision at the bipolar terminal. Bipola…

    Figure 11.15 How a ganglion cell creates a Gaussian weighting. Bipolar active z…

    Figure 11.16 Ganglion cells directly compute a two-dimensional Gaussian filter.…

    Figure 11.17 Neighboring ganglion cells in an array overlap their dendrites, ac…

    Figure 11.18 Receptive field overlap maximizes information from the array. Uppe…

    Figure 11.19 Ganglion cell dendritic arbors match the natural distribution of c…

    Figure 11.20 AII amacrine cell serves a cone circuit in daylight and a rod circ…

    Figure 11.21 Ganglion cell types transmit different bandwidths with different f…

    Figure 11.22 Natural scenes contain similar distributions of spatial and tempor…

    Figure 11.23 Each type of ganglion cell responds stereotypically to all scenes …

    Figure 11.24 Three types of directionally selective ganglion cell express 10 co…

    Figure 11.25 How ganglion cell types apportion information. Total spikes are ab…

    Figure 11.26 Low mean firing rates allow thin axons that quadratically reduce s…

    Chapter 12

    Figure 12.1 Retina sends different versions of the scene to 20 distinct central…

    Figure 12.2 Ganglion cell types that intermingle in retina segregate centrally …

    Figure 12.3 The quasi-secure thalamic synapse concentrates information for rela…

    Figure 12.4 Synaptic glomerulus integrates quanta from multiple active zones of…

    Figure 12.5 Synaptic investment from retina to cortex. High-rate bipolar cells …

    Figure 12.6 Energy capacities of input layers to V1 matches information rates. …

    Figure 12.7 Relay cell inputs to cortical simple cell directly establish its Ga…

    Figure 12.8 Cortical neurons use diverse dendritic patterns to collect particul…

    Figure 12.9 Cortical neurons use diverse axonal patterns to distribute particul…

    Figure 12.10 V1's deep-layer outputs follow the principle send only what is nee…

    Figure 12.11 Layout of higher visual areas. Upper left: Monkey brain with eye t…

    Chapter 13

    Figure 13.1 Axons exploit many small opportunities to save wire. Left: Axons br…

    Figure 13.2 Retina, cerebellum, and cerebral cortex differ greatly in thickness…

    Figure 13.3 Dendritic arbors minimize conduction distance and conduction delay.…

    Figure 13.4 Layout for connecting dense array to sparse array with low divergen…

    Figure 13.5 Retinal layers minimize wire by segregating cell bodies and axon bu…

    Figure 13.6 Coarse ganglion cell dendritic mesh evenly tiles and efficiently ma…

    Figure 13.7 Layout for connecting dense array to sparse array with extreme conv…

    Figure 13.8 A layout for extreme connection. Left: Cerebellar wiring viewed par…

    Figure 13.9 Cerebellar layers minimize wire by mixing neuron cell bodies with s…

    Figure 13.10 Purkinje cell axon is thick, heavily myelinated, and rich in mitoc…

    Figure 13.11 Purkinje cell spines minimize wire. Upper left: Purkinje cell dend…

    Figure 13.12 Cerebellar folia are essential for optimal layout. Slice shown her…

    Figure 13.13 Bilateral symmetry of topographic body maps on cerebellum save wir…

    Figure 13.14 Dendritic arbors that maximize connectivity repertoire are sparse …

    Figure 13.15 Layout to maximize connectional repertoire appears similar across …

    Figure 13.16 Cerebral cortex mixes cell bodies and synaptic circuits. Dark cell…

    Figure 13.17 Cerebral cortex reduces wire by placing smaller neurons with finer…

    Figure 13.18 Ocular dominance distribution in macaque V1 matches theory of opti…

    Figure 13.19 Distribution of the energy demanding sodium-potassium pump is matc…

    Figure 13.20 Distribution of cytochrome oxidase identifies energetically expens…

    Figure 13.21 How the optic nerve allocates space and energy capacity. Upper lef…

    Chapter 14

    Figure 14.1 VWFA responds to orthographic squiggles when we learn to read. Plot…

    Figure 14.2 Adding a spoken word to the lexicon. Black trace shows that the evo…

    Figure 14.3 Early long-term potentiation (LTP) and late LTP both occur at a sin…

    Figure 14.4 Long-term potentiation requires stimulation of more than 10 spines …

    Figure 14.5 NMDA receptor provides a mechanism for Hebbian learning. Current el…

    Figure 14.6 Posterior ventromedial prefrontal cortex encodes social value and m…

    Figure 14.7 Prefrontal cortex integrates inputs from neocortical and limbic s…

    Figure 14.8 Properties of the temporal-difference teaching signal. Upper: Dopam…

    Figure 14.9 Teacher axons employ dense arbors to broadcast signal to large volu…

    Figure 14.10 Discontents of civilization. Upper: Premodern life provides divers…

    Preface

    Neuroscience abounds with stories of intellectual and technical daring. Every peak has its Norgay and Hillary, and we had imagined telling some favorite stories of heroic feats, possibly set off in little boxes. Yet, this has been well done by others in various compendia and reminiscences (Strausfeld, 2012; Glickstein, 2014; Kandel, 2006; Koch, 2012). Our main goal is to evince some principles of design and note some insights that follow. Stories deviating from this intention would have lengthened the book and distracted from our message, so we have resisted the natural temptation to memoir-ize.

    Existing compendia tend to credit various discoveries to particular individuals. This belongs to the storytelling. What interest would there be to the Trojan Wars without Odysseus and Agamemnon? On the other hand, dropping a name here and there distorts the history of the discovery process—where one name may stand for a generation of thoughtful and imaginative investigators. Consequently, in addition to forgoing stories, we forgo dropping names—except for a very few who early enunciated the core principles. Nor do the citations document who did what first; rather they indicate where supporting evidence will be found—often a review.

    Existing compendia often pause to explain the ancient origins of various terms, such as cerebellum or hippocampus. This might have been useful when most neuroscientists spoke a language based in Latin and Greek, but now with so many native speakers of Mandarin or Hindi the practice seems anachronistic, and we have dropped it. Certain terms may be unfamiliar to readers outside neuroscience, such as physicists and engineers. These are italicized at their first appearance to indicate that they are technical (cation channel). A reader unfamiliar with this term can learn by Googling in 210 ms that "cation channels are pore-forming proteins that help establish and control the small voltage gradient across the plasma membrane of all living cells . . ." (Wikipedia). So rather than impede the story, we sometimes rely on you to Google.

    Many friends and colleagues long aware of this project have wondered why it has taken so long to complete. Some have tried to encourage us to let it go, saying, After all, it needn't be perfect . . . To which we reply, Don't worry, it isn't! It's just that more time is needed to write a short book than a long one.

    Acknowledgments

    For reading and commenting on various chapters we are extremely grateful to: Larry Palmer, Philip Nelson, Dmitry Chklovskii, Ron Dror, Paul Glimcher, Jay Schulkin, Glenn Vinnicombe, Neil Krieger, Francisco Hernández-Heras, Gordon Fain, Sally Zigmond, Alan Pearlman, Brian Wandell, and Dale Purves.

    For many years of fruitful exchange PS thanks colleagues at the University of Pennsylvania: Vijay Balasubramanian, Kwabena Boahen, Robert Smith, Michael Freed, Noga Vardi, Jonathan Demb, Bart Borghuis, Janos Perge, Diego Contreras, Joshua Gold, David Brainard, Yoshihiko Tsukamoto, Minghong Ma, Amita Sehgal, Jonathan Raper, and Irwin Levitan. SL thanks colleagues at the Australian National University and the University of Cambridge: Adrian Horridge, Ben Walcott, Ian Meinertzhagen, Allan Snyder, Doekele Stavenga, Martin Wilson, Srini (MV) Srinivasan, David Blest, Peter Lillywhite, Roger Hardie, Joe Howard, Barbara Blakeslee, Daniel Osorio, Rob de Ruyter van Steveninck, Matti Weckström, John Anderson, Brian Burton, David O’Carroll, Gonzalo Garcia de Polavieja, Peter Neri, David Attwell, Bob Levin, Aldo Faisal, John White, Holger Krapp, Jeremy Niven, Gordon Fain, and Biswa Sengupta.

    For kindly answering various queries on specific topics and/or providing figures, we thank: Bertil Hille, Nigel Unwin (ion channels); Stuart Firestein, Minghong Ma, Minmin Luo (olfaction); Wallace Thoreson, Richard Kramer, Steven DeVries, Peter Lukasiewicz, Gary Matthews, Henrique von Gersdorff, Charles Ratliff, Stan Schein, Jeffrey Diamond, Richard Masland, Heinz Wässle, Steve Massey, Dennis Dacey, Beth Peterson, Helga Kolb, David Williams, David Calkins, Rowland Taylor, David Vaney (retina); Roger Hardie, Ian Meinertzhagen (insect visual system); Nick Strausfeld, Berthold Hedwig, Randolf Menzel, Jürgen Rybak (insect brain); Larry Swanson, Eric Bittman, Kelly Lambert (hypothalamus); Michael Farries, Ed Yeterian, Robert Wurtz, Marc Sommer, Rebecca Berman (striatum); Murray Sherman, Al Humphreys, Alan Saul, Jose-Manuel Alonso, Ted Weyand, Larry Palmer, Dawei Dong (lateral geniculate nucleus); Indira Raman, Sacha du Lac, David Linden, David Attwell, John Simpson, Mitchell Glickstein, Angus Silver, Chris De Zeeuw (cerebellum); Tobias Moser, Paul Fuchs, James Saunders, Elizabeth Glowatzki, Ruth Anne Eatock (auditory and vestibular hair cells); Jonathan Horton, Kevan Martin, Deepak Pandya, Ed Callaway, Jon Kaas, Corrie Camalier, Roger Lemon, Margaret Wong-Riley (cerebral cortex).

    For long encouragement and for skill and care with the manuscript and illustrations, we thank our editors at MIT: Robert Prior, Christopher Eyer, Katherine Almeida, and Mary Reilly, and copy editor Regina Gregory.

    Introduction

    A laptop computer resembles the human brain in volume and power use—but it is stupid. Deep Blue, the IBM supercomputer that crushed Grandmaster Garry Kasparov at chess, is 100,000 times larger and draws 100,000 times more power (figure I.1). Yet, despite Deep Blue's excellence at chess, it too is stupid, the electronic equivalent of an idiot savant. The computer operates at the speed of light whereas the brain is slow. So, wherein lies the brain's advantage? A short answer is that the brain employs a hybrid architecture of superior design. A longer answer is this book—whose purpose is to identify the sources of such computational efficiency.


    Figure I.1 How do neural circuits use space and power so efficiently? Computer: Image http://upload.wikimedia.org/wikipedia/commons/d/d3/IBM_Blue_Gene_P_supercomputer.jpg. Brain: Photo by UW-Madison, University Communications © Board of Regents of the University of Wisconsin System.

    The brain's inner workings have been studied scientifically for more than a century—initially by a few investigators with simple methods. In the last 20 years the field has exploded, with roughly 50,000 neuroscientists applying increasingly advanced methods. This outburst amounts to 1 million person-years of research—and facts have accumulated like a mountain. At the base are detailed descriptions: of neural connections and electrical responses, of functional images that correlate with mental states, and of molecules such as ion channels, receptors, G proteins, and so on. Higher up are key discoveries about mechanism: the action potential, transmitter release, synaptic excitation and inhibition. Summarizing this Everest of facts and mechanisms, there exist superb compendia (Kandel et al., 2012; Purves et al., 2012; Squire et al., 2008).

    But what if one seeks a book to set out principles that explain how our brain, while being far smarter than a supercomputer, can also be far smaller and cheaper? Then the shelf is bare. One reason is that modern neuroscience has been technique driven. Whereas in the 1960s most experiments that one might conceive were technically impossible, now with methods such as patch clamping, two-photon microscopy, and functional magnetic resonance imaging (fMRI), aided by molecular biology, the situation has reversed, and it is harder to conceive of an experiment that cannot be done. Consequently, the idea of pausing to distill principles from facts has lacked appeal. Moreover, to many who ferret out great new facts for a living, it has seemed like a waste of time.

    Yet, we draw inspiration from Charles Darwin, who remarked, My mind seems to have become a kind of machine for grinding general laws out of large collections of facts (Darwin, 1881). Darwin, of course, is incomparable, but this is sort of how our minds work too. So we have written a small book—relative to the great compendia—intending to beat a rough path up Data Mountain in search of organizing principles.

    Principles of engineering

    The brain is a physical device that performs specific functions; therefore, its design must obey general principles of engineering. Chapter 1 identifies several that we have gleaned (not being engineers) from essays and books on mechanical and electrical design. These principles do not address specific questions about the brain, but they do set a context for ordering one's thoughts—especially helpful for a topic so potentially intimidating. For example, it helps to realize that neuroscience is really an exercise in reverse engineering—disassembling a device in order to understand it.

    This insight points immediately to a standard set of questions that we suppose are a mantra for all reverse engineers: What does it do? What are its specifications? What is the environmental context? Then there are commandments, such as Study the interfaces and Complicate the design. The latter may puzzle scientists who, in explaining phenomena, customarily strive for simplicity. But engineers focus on designing effective devices, so they have good reasons to complicate.¹ This commandment, we shall see, certainly applies to the brain.

    Why a brain?

    To address the engineer's first question, we consider why an animal should need a brain—what fundamental purpose does it serve and at what cost to the organism? Chapter 2 begins with a tiny bacterium, Escherichia coli, which succeeds without a brain, in order to evaluate what the bacterium can do and what it cannot. Then on to a protozoan, Paramecium caudatum, still a single cell and brainless, but so vastly larger than E. coli (300,000-fold) that it requires a faster type of signaling. This prefigures long-distance signaling by neurons in multicellular organisms.

    The chapter closes with the tiny nematode worm, Caenorhabditis elegans, which does have a brain—with exactly 302 neurons. This number is small in absolute terms, but it represents nearly one third of the creature's total cells, so it is a major investment that had better turn a profit, and it does. For example, it controls a multicellular system that finds, ingests, and digests bacteria and that allows the worm to recall for several hours the locations of favorable temperatures and bacterial concentrations.

    Humans naturally tend to discount the computational abilities of small organisms—which seem, well . . ., mentally deficient—nearly devoid of learning or memory. But small organisms do learn and remember. It's just that their memories match their life contexts: they remember only what they need to and for just long enough. Furthermore, the mechanisms that they evolved for these computations are retained in our own neurons—so we shall see them again.

    The progression bacterium → protozoan → worm is accompanied by increasing computational complexity. It is rewarded by increasing capacity to inhabit richer environments and thus to move up the food chain: protozoa eat bacteria, and worms eat protozoa. As engineering, this makes perfect sense: little beasts compute only what they must; thus they pay only for what they use. This is equally true for beasts with much larger brains discussed in chapter 3.

    Why a bigger brain?

    The brain of a fruit fly (Drosophila melanogaster) is 350-fold larger than C. elegans’, and the brain of a human (Homo sapiens) is a million-fold larger than the fly's. These larger brains emerge from the same process of natural selection as the smaller ones, so we should continue to expect from them nothing superfluous—only mechanisms that are essential and pay for themselves. We should also expect that when a feature works really well, it will be retained—like the wheel, the paper clip, the aluminum beer can, and the transistor (Petroski, 1996; Arthur, 2009). We note design features that brains have conserved (with suitable elaborations) across at least 400 million years of natural selection. These features in the human brain are often described as primitive—reptilian—reflecting what are considered negative aspects of our nature. But, of course, any feature that has been retained for so long must be pretty effective.

    This chapter identifies the core task of all brains: it is to regulate the organism's internal milieu—by responding to needs and, better still, by anticipating needs and preparing to satisfy them before they arise. The advantages of omniscience encourage omnipresence. Brains tend to become universal devices that tune all internal parameters to improve overall stability and economy. Anticipatory regulation replaces the more familiar homeostatic regulation—which is supposed to operate by waiting for each parameter to deviate from a set point, then detecting the error and correcting it by feedback. Most physiological investigations during the 20th century were based on the homeostatic model—how kidney, gut, liver, pancreas, and so on work independently, despite Pavlov's early demonstration of the brain's role in anticipatory regulation (Pavlov, 1904). But gradually anticipatory control has been recognized.

    Anticipatory regulation offers huge advantages.² First, it matches overall response capacity to fluctuations in demand—there should always be enough but not too much. Second, it matches capacity at each stage in the system to anticipated needs downstream, thus threading an efficient path between excess capacity (costly storage) and failure from lack of supplies. Third, it resolves potential conflict between organs by setting and shifting priorities. For example, during digestion it can route more blood to the gut and less to muscle and skin, and during exercise it can reverse this priority. This allows the organism to operate with a smaller blood volume than would otherwise be needed. Finally, it minimizes errors—which are potentially lethal and also cause cumulative damage.

    Anticipatory regulation includes behavior

    An organ that anticipates need and regulates the internal milieu by overarching control of physiology would be especially effective if it also regulated behavior. For example, it could reduce a body's need for physiological cooling (e.g., sweating—which costs energy and resources—sodium and water) by directing an animal to find shade. Moreover, it could evoke the memory of an unpleasant heatstroke to remind the animal to take anticipatory measures (travel at night, carry water). Such anticipatory mechanisms are driven ceaselessly by memories of hunger, cold, drought, or predation: Pick the beans! Chop wood! Build a reservoir! Lock the door!

    The memories of danger and bad times that shape our behavior can be our own, but often they are stored in the brains of our parents and grandparents. We are reared with their nightmares—the flood, the drought, the famine, the pogrom. Before written history, which spans only 6,000 years, all lessons that would help one anticipate and thus avoid a lethal situation could be transmitted only by oral tradition—the memory of a human life span. Given that the retention of memories in small brains corresponds to their useful span, and that retention has a cost, human memory for great events should remain vivid with age whereas memories of lesser events should fade (chapter 14).

    The most persistent dangers and opportunities, those extending far beyond a few generations, eventually become part of the neural wiring. Monkeys universally fear snakes, and so do most humans—suggesting that the response was encoded into brain structure before the lines split—on the order of 35 million years. But beyond alertness for predators, primate societies reserve their most acute observations and recall for relationships within the family and the troop. The benefit is that an individual's chances for survival and reproduction are enhanced by the group's ability to anticipate and regulate. The cost is that the individual must continuously sense the social structure—in its historical context—to receive aid when needed and to avoid being killed or cast out (Cheney & Seyfarth, 2007).

    Consequently, primate brains have expanded significantly in parts concerned with social recognition and planning—such as prefrontal cortex and amygdala. Humans greatly expand these areas and also those for social communication, such as for language, facial expression, and music. These regions serve both the cooperative and the competitive aspects of anticipatory regulation to an awesome degree. They account for much of our brain structure and many of our difficulties.

    Flies too show anticipatory behavior—to a level consonant with their life span and environmental reach. A fly need not wait for its blood sugar to fall dangerously low, nor for its temperature to soar dangerously high, before taking action. Instead its brain expresses prewired commands: Find fruit! In a cool spot! Anticipatory commands are often tuned to environmental regularities that predict when and where a resource is most likely to appear—or disappear. Thus, circadian rhythms govern foraging and sleep. Seasonal rhythms, which broadly affect resource availability, govern mating and reproduction. Consequently, specific brain hormones tuned to season send orders to prewired circuits: Court a mate! Intimidate a competitor!

    What drives behavior?

    To ensure that an organism will execute these orders, there are neural mechanisms to make it feel bad when a job is undone and feel good when it has succeeded. These are circuits whose activity humans experience, respectively, as anxiety and pleasure. Of course, we cannot know what worms or flies experience—but the same neurochemicals drive similar behaviors. This is one wheel that has certainly been decorated over hundreds of millions of years, but not reinvented.

    To actually accomplish a task is vastly complicated. Reconsider Deep Blue's task. Each side in chess has 16 pieces—that move one at a time, slowly (minutes), and only in two dimensions. Each piece is constrained to move only in certain ways, and some pieces repeat so that each side has only six different types of motion. This relatively simple setup generates so many possible moves that to evaluate them requires a Deep Blue.

    But the organ responsible for anticipatory regulation takes continuous data from every sensory neuron in the organism—both internal and external—plus myriad hormones and other chemicals. While doing so, it is calculating in real time—milliseconds—how to adjust every body component inside and out. It is flying the fly, finding its food, shade, and mate; it is avoiding predators and intimidating competitors—all the while tweaking every internal parameter to match what is about to be needed. Thus, it seems fair to say that Deep Blue is stupid even compared to a fruit fly. This defines sharply the next engineering question: what constrains the design of an effective and efficient brain?

    What constrains neural design?

    When Hillel was asked in the first century B.C.E. to explain the whole Torah while standing on one leg, he was ready: That which is hateful to you, do not unto another. The rest is commentary—and now go study.

    There is a one-leg answer for neural design: As information rate rises, costs rise disproportionately. For example, to transmit more information by spikes requires a higher spike rate. Axon diameter rises linearly with spike rate, but axon volume and energy consumption rise as the diameter squared. Thus, the essence of neural design: Send only information that is needed, and send it as slowly as possible (chapter 3). This key injunction profoundly shapes the brain's macroscopic layout, as explained in chapter 4. We hope that readers will . . . go study.
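
    To make the disproportion concrete, here is a minimal back-of-the-envelope sketch; only the two scaling relations come from the text above, while the function name and constants below are merely illustrative:

        # Illustrative sketch: cost of signaling by spikes, assuming (as stated above)
        # that axon diameter grows linearly with spike rate while axon volume and
        # energy consumption grow as the diameter squared. Constants are arbitrary.
        def axon_cost(spike_rate, k_diameter=1.0, k_cost=1.0):
            diameter = k_diameter * spike_rate          # linear in spike rate
            volume_and_energy = k_cost * diameter ** 2  # quadratic in diameter
            return diameter, volume_and_energy

        for rate in (1, 2, 4, 8):  # doubling the spike rate each time
            d, cost = axon_cost(rate)
            print(f"rate {rate}: diameter {d:g}, volume/energy cost {cost:g}")
        # Each doubling of rate quadruples the cost, hence: send only what is
        # needed, and send it as slowly as possible.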

    If spikes were energetically cheap, their rates would matter less. However, a 100-mV spike requires far more current than a 1-mV response evoked by one packet of chemical transmitter. Obviously then, it is cheaper to compute with the smaller currents. This exemplifies another design principle: minimize energy per bit of information by computing at the finest possible level. Chapter 5 identifies this level as a change in protein folding on the scale of nanometers. Such a change can capture, store, and transmit one bit at an energetic cost that approaches the thermodynamic limit. Chapter 6 explains how proteins couple to form intracellular circuits on the scale of micrometers, and chapter 7 explains how a neuron assembles such circuits into devices on a scale of micrometers to millimeters.
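
    A rough capacitor estimate, offered only for orientation, shows why the size of the voltage excursion dominates the cost (it assumes a fixed membrane capacitance C and ignores all downstream machinery):

        E \approx \tfrac{1}{2} C (\Delta V)^{2}
        \quad\Rightarrow\quad
        \frac{E_{\text{spike}}}{E_{\text{synaptic}}} \approx \left(\frac{100\ \text{mV}}{1\ \text{mV}}\right)^{2} = 10^{4}.

    On this crude accounting, charging the same membrane through a hundredfold larger voltage swing costs on the order of ten-thousandfold more energy.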

    It emerges that to compute most efficiently in space and energy, neural circuits should nanofy:

    1. Make each component irreducibly small: a functional unit should be a single protein molecule (a channel), or a linear polymer of protein subunits (a microtubule), or a sandwich of monomolecular layers (a membrane).

    2. Combine irreducible components: a membrane to separate charge and thus permit a voltage, a protein transporter to pump ions selectively across the membrane and actually separate the charges (charge the battery), a pore for ions to flow singly across the membrane and thus create a current, a gate to start and stop a current, an amplifier to enlarge the current, and an adaptive mechanism to match a current to circumstance.

    3. Compute with chemistry wherever possible: regulate gates, amplifiers, and adaptive mechanisms by binding/unbinding small molecules that are present in sufficient numbers to obey the laws of mass action. Achieve speed with chemistry by keeping the volumes small.

    4. For speed over distance compute electrically: convert a signal computed by chemistry to a current that charges membrane capacitance to spread passively up to a millimeter. For longer distance, regenerate the current by appropriately clustered voltage-gated channels.

    Design in the visual system

    Having discussed protein computing and miniaturization as general routes to efficiency, we exemplify these points in an integrated system—phototransduction (chapter 8). The engineering challenge is to capture light reflected from objects in the environment in order to extract informative patterns to guide behavior. Transduction employs a biochemical cascade with about half a dozen stages to amplify the energy of individual photons by up to a million-fold while preserving the information embodied as signal-to-noise ratio (S/N) and bandwidth. We explain why so many stages are required.

    The photoreceptor signal, once encoded as a graded membrane voltage, spreads passively down the axon to the synaptic terminal. There the analogue signal is quantized as a stream of synaptic vesicles. The insect brain can directly read out this message with very high efficiency because the distance is short enough for passive signaling (chapter 9). The mammal brain cannot directly read out this vesicle stream because the distance is too great for passive signaling. The mammal eye must transmit by action potentials, but the photoreceptor's analogue signal contains more information than action potentials can encode. Therefore, on-site retinal processing is required (chapters 10, 11).

    Principles at higher levels

    The principles of neural design at finer scales and lower levels also apply at larger scales and higher levels. For example, they can explain why the first visual area (V1) in cerebral cortex enormously expands the number and diversity of neurons. And why diverse types project in parallel from V1 to other cortical areas. And why cortex uses many specific areas and arranges them in a particular way. The answers, as explained in chapter 12, are always the same: diverse circuits allow the brain to send only information that is needed and to send it at lower information rates. This holds computation to the steep part of the benefit/cost curve.

    Wiring efficiency

    Silicon circuits with very large-scale integration strive for optimal layout—to achieve best performance given cost of space, time, and energy. Neural circuits do the same and thereby produce tremendous diversity of neuronal structure at all spatial scales. For example, cerebellar output neurons (Purkinje cells) use a two-dimensional dendritic arbor whereas cerebral output neurons (pyramidal cells) use a three-dimensional arbor. Both circuits employ a layered architecture, but the large Purkinje neurons lie above a layer of tiny neurons whereas the large pyramidal neurons lie below the smaller neurons. Cerebellar cortex folds intensely on a millimeter scale whereas cerebral cortex on this scale is smooth.

    Such differences originate from a ubiquitous biophysical constraint: the irreducible electrical resistance of neuronal cytoplasm. Passive signals spread spatially and temporally only as the square root of dendritic diameter (√d). This causes a second law of diminishing returns: a dendrite, to double its conduction distance or halve its conduction delay, must quadruple its volume. This prevents neural wires from being any finer and prevents local circuits from being any more voluminous. In both cases conduction delays would grow too large. The constraint on volume drives efficient layout: equal lengths of dendrite and axon and an optimum proportion of wire and synapses. Chapter 13 will explain.
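
    For readers outside neuroscience, the square-root dependence follows from the standard cable-theory relation, sketched here; R_m and R_i denote the specific membrane and cytoplasmic resistances, and r_m and r_i the corresponding resistances per unit length of a cable of diameter d:

        \lambda = \sqrt{\frac{r_m}{r_i}}, \qquad
        r_m = \frac{R_m}{\pi d}, \qquad
        r_i = \frac{4 R_i}{\pi d^{2}}
        \quad\Rightarrow\quad
        \lambda = \sqrt{\frac{R_m\, d}{4 R_i}} \propto \sqrt{d}.

    Conduction distance thus improves only as the square root of diameter, while the wire's volume per unit length grows as d², which is the diminishing return noted above.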

    Designs for learning

    All organisms use new information to better anticipate the future. Thus, learning is a deep principle of biological design, and therefore of neural design. Accordingly, the brain continually updates its knowledge of every internal and external parameter—which means that learning is also a brain function. As such, neural learning is subject to the same constraints as all other neural functions. It is a design principle that must obey all the others.

    To conserve space, time, and energy, new information should be stored at the site where it is processed and from whence it can be recalled without further expense. This is the synapse. Low-level synapses relay short-term changes in input, so their memories should be short, like that of a bacterium or worm. These synapses should encode at the cheapest levels, by modifying the structure and distribution of proteins. High-level synapses encode conclusions after many stages of processing, so their memories deserve to be longer and encoded more stably, by enlarging the synapse and adding new ones.

    A new synapse of diameter (d) occupies area on the postsynaptic membrane as d² and volume as d³. Because adding synapses increases costs disproportionately, learning in an adult brain of fixed volume is subject to powerful space constraints. For every synapse enlarged or added, another must be shrunk or removed. Design of learning must include the principle save only what is needed. Chapter 14 explains how this plays out in the overall design.
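
    In concrete terms, a worked ratio using only the scaling just stated:

        \frac{A(2d)}{A(d)} = 2^{2} = 4, \qquad \frac{V(2d)}{V(d)} = 2^{3} = 8.

    Doubling a synapse's diameter thus claims four times the postsynaptic membrane area and eight times the volume, so in a brain of fixed volume each enlargement must be paid for elsewhere.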

    Design and designer

    This book proposes that many aspects of the brain's design can be understood as adaptations to improve efficiency under resource constraints. Improvements to brain efficiency must certainly improve fitness. Darwin himself noted that natural selection is continually trying to economize every part of the organization and proposed that instincts, equivalent in modern terms to genetically programmed neural circuits, arise by natural selection (Darwin, 1859). So our hypothesis breaks no new conceptual ground.

    A famous critique of this hypothesis argues that useless features might survive pruning if they were simply unavoidable accompaniments to important features (Gould & Lewontin, 1979). This possibility is undeniable, but if examples are found for neural designs, we expect them to be rare because each failure to prune what is useless would render the brain less efficient—more like Deep Blue—whereas the brain's efficiency exceeds Deep Blue's by at least 10⁵.

    So what do we claim is new? The energy and space constraints have been known for a while, as have various principles, such as minimize wire. The present contribution seems to lie in our gathering various rules as a concise list and in systematically exemplifying them across spatial and functional scales. When a given rule was found to apply broadly with constant explanatory power, we called it a principle. Ten are listed as a round number. As with the Biblical Commandments and the U.S. Bill of Rights, some readers will find too many (redundancy) and others too few. We are satisfied to simply set them out for consideration.

    Some readers may object to the expression design because it might imply a designer, which might suggest creationism. But design can mean "the arrangement of elements or details, also a scheme that governs functioning." These are the meanings we intend. And, of course, there is a designer—as noted, it is the process that biologists understand as natural selection.³

    Limits to this effort

    Our account rests on facts that are presently agreed upon. Where some point is controversial, we will so note, but we will not resort to imagined mechanisms. Our goal is not to explain how the brain might work, but rather to make sense of what is already known. Naturally what is agreed upon will shift with new data, so the story will evolve. We gladly acknowledge that this account is neither complete nor timeless.

    We omit so much—many senses, many brain regions, many processes—and this will disappoint readers who study them. We concentrate on vision partly because it has dominated neuroscience during its log growth phase, so that is where knowledge goes deepest at all scales. Also we have personally concentrated on vision, so that is where our own knowledge is deepest. Finally, to apply principles across the full range of scales, but keep the book small, has required rigorous selection. We certainly hope that workers in other fields will find the principles useful. If some prove less than universal and need revision, well, that's science. The best we can do with Data Mountain really is just to set a few pitons up the south face.

    1

    What Engineers Know about Design

    During the Cold War, the Soviets would occasionally capture a U.S. military aircraft invading their airspace, and with comparable frequency a defecting Soviet pilot would set down a MiG aircraft in Japan or Western Europe. These planes would be instantly swarmed by engineers—like ants to a drop of honey—with one clear goal: to reverse engineer the craft. This is the process of discovering how a device works by disassembling and analyzing in detail its structure and function. Reverse engineering allowed Soviet engineers to rather quickly reproduce a near perfect copy of the U.S. B-29 bomber, which they renamed the Tu-4. Reverse engineering still flourishes in military settings and increasingly in civilian industries—for example, in chip and software development where rival companies compete on the basis of innovation and design.

    The task in reverse engineering is accelerated immensely by prior knowledge. Soviet engineers knew the B-29's purpose—to fly. Moreover, they knew its performance specifications: carry 10 tons of explosive at 357 mph at an altitude of 36,000 feet with a range at half-load of 3,250 miles. They also knew how various parts function: wings, rudder, engines, control devices, and so forth. So to grasp how the bomber must work was straightforward. Once the how of a design is captured, a deeper goal can be approached: what a reverse engineer really seeks is to understand the why of a design—why has each feature been given its particular form? And why are their relationships just so? This is the step that reveals principles; it is the moment of aha!—the thrilling reward for the long, dull period of gathering facts.

    Neuroscience has fundamentally the same goal: to reverse engineer the brain (O’Connor, Huber, & Svoboda, 2009). What other reason could there be to invest 1 million person-years (so far) in describing so finely the brain's structure, chemistry, and function? But neuroscience has been somewhat handicapped by the lack of a framework for all this data. To some degree we resemble the isolated tribe in New Guinea that in the 1940s encountered a crashed airplane and studied it without comprehending its primary function. Nevertheless, we can learn from the engineers: we should try to state the brain's primary goal and basic performance specifications. We should try to intuit a role for each part. By placing the data in some framework, we can begin to evaluate how well our device works and begin to consider the why of its design. We will make this attempt, even though it will be incomplete, and sometimes wrong.

    Designing de novo

    Engineers know that they cannot create a general design for a general device—because there is no general material to embody it.¹,² Engineers must proceed from the particular to the particular. So they start with a list of questions: Precisely what is this machine supposed to accomplish? How fast must it operate and over what dynamic range? How large can it be and how heavy? How much power can it use? What error rates can be tolerated, and which type of error is most worrisome—a false alarm or a failure to respond? The answers to these questions are design specifications.

    Danger lurks in every vague expression: very fast, pretty small, power-efficient, error free. Generalities raise instant concern because one person's very is another's barely. To a biologist, brief is a millisecond (10⁻³ s), but to an electronic engineer, brief is a nanosecond (10⁻⁹ s), and the difference is a millionfold. Engineers know that no device can be truly instantaneous or error free—so they know to ask how high should we set the clock rate, how low should we hold the error rate, and at what costs?

    The engineer realizes that every device operates in an environment and that this profoundly affects the design. A car for urban roads can be low slung with slender springs, two-wheel drive, and a transmission geared for highway speeds. But a pickup for rough rural roads needs a higher undercarriage, stouter springs, four-wheel drive, and a transmission geared for power at low speeds. The decision regarding which use is more likely (urban or rural) suffuses the whole design. Moreover an engineer always wants to quantify the particular environment to estimate the frequencies of key features and hazards.

    One assumes, for example, that before building a million pickups, someone at Nissan bothered to measure the size distribution of rocks and potholes on rural roads. Then they could calculate what height of undercarriage would clear 99.99% of these obstructions and build to that standard. Knowing the frequencies of various parameters allows rational consideration of safety factor and robustness: how much extra clearance should be allowed for the rare giant boulder; how much thicker should the springs be for the rare overload? Such considerations immediately raise the issue of expense—for a sturdier machine can always be built, but it will cost more and could be less competitive. So design and cost are inseparable.

    Of course, environments change. Roads improve—and then deteriorate—so vehicle designs must take this into account. One strategy is to design a vehicle that is cheap and disposable and then bring out new models frequently. This allows adaptations to environmental changes to appear in the next model. Another strategy is to design a more expensive vehicle and invest it with intrinsically greater adaptive capacity—for example, adjustable suspension. Both designs would operate under the same basic principles; the main difference would lie in their strategies for adaptation to changes in demand. In biology the first strategy favors small animals with short lives; the second strategy, by conserving time and effort already invested, favors larger animals with longer lives. As we will see, these complementary strategies account for many differences between the brains of tiny worms, flies, and humans.

    Design evolves in the context of competition. Most designs are not de novo but rather are based upon an already existing device. The new version tries to surpass the competition: lighter, faster, cheaper, more reliable—but each advance is generally modest. To totally scrap an older model and start fresh would cost too much, take too long, and so on. However, suppose a small part could be modified slightly to improve one factor—or simply make the model prettier? The advance might pay for itself because the device would compete better with others of the same class. A backpacker need not outrun the bear—just a companion—and the same is true for improvements in design. The revolutionary Model T Ford was not the best car ever built, but it was terrific for its time: cheaper and more reliable than its competitors.

    How engineers design

    An engineer takes account of the laws of physics, such as mechanics and thermodynamics. For example, a turbine works most efficiently when the pressure drop is greatest, so this is where to place the dam or hydro-tunnel. Similarly, power generation from steam is most efficient at high temperatures, which requires high pressures. But using pressure to do work is most efficient when the pressure change is infinitesimally small—which takes infinitely long. There is no right answer here, but the laws of physics govern the practicality of power generation and power consumption—and thus affect many industrial designs.

    Similarly, a designer is aware of unalterable physical properties. Certain particles move rapidly: photons in a vacuum (3 × 10⁸ m in a second). In contrast, other particles move slowly: an amino acid diffusing in water (~1 μm in a millisecond)—a difference of eleven orders of magnitude or more. So for a communications engineer to choose photons to send a message would seem like a no-brainer—except that actual brains rely extensively on diffusion! This point will be developed in chapters 5 and 6.
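    To make the comparison explicit, here is the back-of-envelope arithmetic using only the two rates just quoted (a restatement, not an additional measurement):

\[
\frac{3 \times 10^{8}\ \mathrm{m\ s^{-1}}}{1\ \mu\mathrm{m}\ \mathrm{ms}^{-1}} \;=\; \frac{3 \times 10^{8}\ \mathrm{m\ s^{-1}}}{10^{-3}\ \mathrm{m\ s^{-1}}} \;\approx\; 3 \times 10^{11}.
\]

    Because diffusion distance grows only as the square root of time, the effective gap widens still further over intervals longer than a millisecond.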

    Designers pay particular attention to the interfaces where energy is transferred from one medium to another. For example, an automobile designed for a V-8 engine needs wide tires to grip the road powerfully. This is the final interface, tire-to-road, through which the engine's power is delivered; so to use narrow, lightly treaded tires would be worse than pointless—it would be lethal. More generally, it is efficient to match components for their operating capacities, robustness, reliability, and so on, so that no part is too large or too small for its partners.

    Matching may be achieved straightforwardly where the properties of the input are predictable, such as a power transformer driven by the line voltage, or a transistor switch in a digital circuit. But the engineer knows that the real world is more variable and allows for this in the design—by providing greater tolerances, or adjusting the matches with feedback. And to estimate what tolerances or what sorts of feedback are needed, the engineer—once again—must analyze the statistics of the environment. Chapters 8–12 will do this for vision.
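    One way to picture "adjusting the matches with feedback" is an automatic gain control: a stage estimates the statistics of its input on the fly and rescales the signal to fit the fixed operating range of the next stage. The sketch below is a schematic illustration in Python, not a circuit taken from this book; the target level, adaptation rate, and test signals are arbitrary assumptions.

```python
import numpy as np

def adaptive_gain(signal, target_rms=1.0, rate=0.01):
    """Rescale a signal so its running RMS approaches target_rms.

    A crude automatic gain control: the running power estimate stands in
    for the statistics of the environment, and the gain is the feedback
    that keeps the input matched to a downstream stage's fixed range.
    """
    power_estimate = 1.0                      # running mean-squared input
    out = np.empty(len(signal), dtype=float)
    for i, x in enumerate(signal):
        power_estimate += rate * (x * x - power_estimate)
        gain = target_rms / np.sqrt(power_estimate + 1e-12)
        out[i] = gain * x
    return out

# The same stage copes with a weak input and with one 100 times stronger.
rng = np.random.default_rng(1)
weak = rng.normal(0.0, 0.01, 5000)
strong = rng.normal(0.0, 1.0, 5000)
print(np.std(adaptive_gain(weak)[-1000:]))    # close to 1.0 after adapting
print(np.std(adaptive_gain(strong)[-1000:]))  # close to 1.0 after adapting
```

    The running power estimate is the stage's own measurement of its environment; the gain is the feedback that keeps the interface matched when that environment shifts.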

    What components?

    Having identified a specific task, its context and constraints, a designer starts to sketch a device. The process draws on deep knowledge of the available components—their intrinsic properties (both advantageous and problematic), their functional relationships, robustness, modifiability, and cost. A mechanical engineer draws from a vast inventory of standard bolts, gears, and bearings and exploits the malleability and versatility of plastics and metal alloys to tailor new parts to particular functions. For example, Henry Ford, in designing his 1908 Model T, solved the mechanical problem of axles cracking on roads built for horses by choosing a tougher, lighter steel alloyed with vanadium.³ An electrical engineer solves an electronic problem by drawing on a parts catalog, or else by using known component properties and costs to design a new chip. Consequently, as models advance, the number of parts grows explosively: the Boeing 747 comprises 6 million parts.

    In these respects the genome is a parts catalog—a list of DNA sequences that can be transcribed into RNA sequences (messengers) that in turn can be translated into amino acid sequences—to create proteins that serve signaling in innumerable ways. This extensive genetic parts list is not the end, but rather a start—for there are vast opportunities for further innovation and tailoring (see chapter 5). An existing gene can be duplicated and then modified slightly, just as an engineer would hope, to produce an alternative function. For example, the protein (opsin) that is tuned to capture light at middle wavelengths (550 nm) has been duplicated and retuned in evolution by changing only a few amino acids out of several hundred to capture longer wavelengths (570 nm). This seemingly minor difference supports our ability to distinguish red from green.

    At the next level, a single DNA sequence can be transcribed to produce shorter sequences of messenger RNA that can be spliced in alternative patterns to produce subtle but critical variants. For example, alternative splicing produces large families of receptor proteins with subtly different binding affinities—which give different time constants. Other variants desensitize at different rates. How these variations are exploited in neural design will be discussed, as will the capacity to further innovate and tailor the actual proteins by binding small ions and covalently adding small chemical groups (posttranslational modification). In short, with 20% of our genome devoted to coding neural signaling molecules, plus the additional variation allowed by duplication, alternative splicing, and posttranslational modification, the brain draws from a large inventory of adaptable parts. The versatility of these components, as explained further in chapter 5, is a major key to the brain's success.

    At a still higher level biological design builds on preexisting structures and processes. Where a need arises from an animal's opportunity to exploit an abundant resource, natural selection can fashion a new organ from an old one that served a different purpose. Famously, for example, the panda's thumb evolved not from the first digit that humans inherited from earlier primates but from a small bone in the hand of its ancestors that served a different purpose (Gould, 1992). Thus, efficient designs can be reached via natural selection from various directions and various developmental sequences. This was recognized a century ago by a key founder of neuroscience, Santiago Ramón y Cajal (1909):

    We certainly acknowledge that developmental conditions contribute to morphological features. Nevertheless, although cellular development may reveal how a particular feature assumes its mature form, it cannot clarify the utilitarian or teleological forces that led developmental mechanisms to incorporate this new anatomical feature. (edited for brevity)

    Neatening up

    A designer's list of needed functions might also reveal several that could be accomplished with a single, well-placed component. This has been termed neatening up, which Ford certainly did for the Model T. For example, rather than manufacture separate cylinders and bolt them together (the standard method), he cast the engine as a solid block with holes for the cylinders. Moreover, rather than use a separate belt to drive the magneto (which provides spark to initiate combustion), he built magnets into the engine's flywheel—thereby reducing parts and weight. The sum of his efforts to improve components (vanadium steel) and design (flexible suspension) and neaten up (engine block, magneto) produced a model that was 25% lighter and delivered 25% more horsepower per pound than the competition, such as the Buick Tourabout.

    Brain design reflects this process of neatening up. For example, one synapse can simultaneously serve two different pathways: fast and slow; ON and OFF. One neuron can alternately serve two different circuits: one during daylight and another during starlight (chapter 11). But this strategy must not compromise functionality.

    Complicate but do not duplicate

    Scientists are constantly lashed with the strop of Occam's razor. That is, we are forcefully encouraged to keep our explanatory models and theories simple. So the
