Supercomputer To Build 3D Brain
Neuroscientists are to build the most detailed model of the human brain with the
help of an IBM supercomputer.
Experts at the École Polytechnique Fédérale de Lausanne, Switzerland, will spend the next
two years creating a 3D simulation of the neocortex.
This is the part of the brain thought to be responsible for language, learning, memory and
complex thought.
The researchers believe the project will give them fresh insights into the most remarkable
organ in the body.
"Modelling the brain at the cellular level is a massive undertaking because of the hundreds
of thousands of parameters that need to be taken into account," said Henry Markram, the
EPFL professor leading the project.
The Swiss scientist and his colleagues will have at their disposal an IBM eServer Blue Gene
supercomputer.
Up the pace
The system to be installed at their EPFL lab will take up the floor space of about four
refrigerators, and will have a peak processing speed of at least 22.8 trillion floating-point
operations per second (22.8 teraflops), making it one of the most powerful supercomputers
in the world.
Five years ago, no supercomputer in the world was capable of more than one teraflop.
The effort has been dubbed the Blue Brain Project. It is a daunting undertaking given the
myriad of electro-chemical connections that must be mapped.
"With an accurate computer-based model of the brain much of the pre-testing and planning
normally required for a major experiment could be done 'in silico' rather than in the
laboratory.
"With certain simulations we anticipate that a full day's worth of 'wet lab' research could be
done in a matter of seconds on Blue Gene."
The Blue Brain Project will start with the neocortex but scientists expect eventually to
produce a 3D model of the entire brain.
Researchers expect not only to get a better understanding of how the organ is wired up but
also to use that "atlas" of neurocircuitry to probe how the brain functions - and
malfunctions.
The scientists say the project could lead, for example, to new ideas on how psychiatric
disorders develop - illnesses such as autism, schizophrenia, and depression.
Pictures:
• The neocortex is organised into thousands of columns of neurons
• Supercomputers are increasingly being used in research to model biomolecules
Professor Steve Furber is one of the pioneers of the UK's computer industry. He
was a principal designer of the BBC Micro that gave many of Britain's current hi-
tech workers their first taste of technology. He has now turned his attention to
mimicking the human brain.
Most of the frontiers of science, from particle physics to radio astronomy, seem to be
concerned with the incredibly small or the unimaginably large.
But there is a lump of stuff inside each of our heads that we could easily hold in our hands
and look at, yet we have no idea how it works.
We know that our brains are built from a hundred billion small cells called neurons, and
these cells sit in a biochemical bath and send electrical pulses to each other every so often.
It is a strange thing to realise that everything that we see, smell, hear, think, dream and
say - indeed our very being - is just a consequence of those billions of cells inside our heads
going "ping" from time to time.
We now have a fair idea of how those neurons are organised into major functional areas
within the brain. Hi-tech scanners give us ever-more detailed glimpses into which brain
areas are active, and in what order, when we receive particular inputs or think particular
thoughts.
But we still have no idea of the spike "language" that the neurons use to talk to each other,
nor how that spiking activity becomes coherent thoughts and actions.
Brain power
Understanding the brain has turned out to be far more difficult than anyone imagined. Early
AI focussed on symbolic logic, which computers are very good at but people are not, so that
approach never really got at what it means for a human to be intelligent. Can we expect
computers ever to begin to emulate the achievements of human intelligence?
There are two ways to look at this question: Firstly, to ask when computers may be
powerful enough to simulate the detailed workings of the brain, to which the answer seems
to be that we aren't there yet, but we are getting close.
Secondly, we can ask when we might know how to program those computers to perform this
task, to which the answer is still unknown.
At the dawn of the computer age, 60 years ago, machines were a million million times too
slow to model the brain in real time, but petaflop supercomputers have closed that gap.
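The "million million" figure can be sanity-checked with some back-of-envelope arithmetic. The numbers below are rough order-of-magnitude assumptions (neuron count from the article; synapse count and firing rate are commonly quoted estimates), not measurements:

```python
# Back-of-envelope check of the "million million times too slow" claim.
# All figures are order-of-magnitude assumptions, not measurements.
NEURONS = 1e11            # ~100 billion neurons, as stated in the article
SYNAPSES_PER_NEURON = 1e4  # assumed ~10,000 connections per neuron
MEAN_FIRING_RATE_HZ = 10   # assumed average spike rate

# One operation per synaptic event gives the ops/s needed for real time:
ops_needed = NEURONS * SYNAPSES_PER_NEURON * MEAN_FIRING_RATE_HZ  # 1e16

EARLY_MACHINE_OPS = 1e4   # ~10,000 ops/s for a late-1940s machine
PETAFLOP = 1e15

print(ops_needed / EARLY_MACHINE_OPS)  # ~1e12: a "million million" too slow
print(ops_needed / PETAFLOP)           # ~10: a petaflop machine is close
```

Under these assumptions the 1940s gap comes out at almost exactly a million million, and a petaflop machine lands within a factor of ten of real time.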
The programming challenge remains immense, though initiatives such as EPFL's Blue Brain
project in Switzerland are addressing this head-on.
That is gathering huge quantities of biological data on the types and behaviours of neurons,
and building high-fidelity biological models on a high-end IBM supercomputer.
Neurons are very complex living cells that have evolved to perform an information
processing function within a living organism.
One of the great unknowns in understanding the brain is the extent to which the finer
details of a neuron's structure are important to its information processing function, as
opposed to being required to keep the cell alive, maintain chemical balance, take up energy,
or simply being artefacts of evolution and the way the cell has developed within the organism.
Model makers
At Manchester we make the assumption that most of the phenomena we are interested in
arise at the network level, so we discard much of the biological detail in favour of modelling
larger numbers of simpler neurons. But, as the famous paraphrase of Einstein insists,
"everything should be as simple as possible, but no simpler."
How far can we go before we risk losing some vital aspect of the neuron's information
processing function? This question will only be answered as we begin to understand the
operational principles at work inside the brain - as we begin to learn the language of the
spikes.
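The "many simple neurons" trade-off can be illustrated with a leaky integrate-and-fire (LIF) neuron, one of the simplest spiking models of the kind used when biological detail is sacrificed for scale. This is an illustrative sketch with made-up parameters, not the actual model used at Manchester:

```python
# A leaky integrate-and-fire (LIF) neuron: membrane potential decays
# toward rest, is driven up by input current, and emits a spike when it
# crosses threshold. Parameters are illustrative only.

def lif_step(v, input_current, dt=1.0, tau=20.0, v_rest=-65.0,
             v_thresh=-50.0, v_reset=-65.0, r_m=10.0):
    """Advance membrane potential v by one time step; return (v, spiked)."""
    dv = (-(v - v_rest) + r_m * input_current) * (dt / tau)
    v = v + dv
    if v >= v_thresh:
        return v_reset, True   # fire a spike and reset
    return v, False

# Drive the neuron with a constant current and collect its spike times.
v, spikes = -65.0, []
for t in range(200):
    v, fired = lif_step(v, input_current=2.0)
    if fired:
        spikes.append(t)
print(spikes)  # regular spike train: constant input -> periodic firing
```

A handful of arithmetic operations per neuron per time step is what makes simulating large networks tractable; the open question in the text is precisely what this simplification throws away.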
Researchers around the world are using computer models to test the hypotheses of brain
function that have emerged from work by neuroscientists and psychologists. What today's
"brain modelling" computers offer is a platform that enables those models to be scaled up
and to become increasingly accurate, and to enable scientists to get ever closer to the "big
picture".
Where will this research lead us? The ultimate goal is the Grand Challenge of understanding
the architecture of brain and mind but this is still some way beyond our grasp.
In the nearer term we can expect to see a growing understanding of brain subsystems, and
from that understanding new computational approaches will emerge with applications in
control, robotics and elsewhere.
Data damage
After training, the model can be selectively "damaged" in ways that reproduce the patterns
of behaviour observed in individuals who have suffered brain damage.
The model will then be used to test the effectiveness of various speech therapies,
and its predictions checked against the results of using those therapies with stroke patients
who have language problems.
As the computing platforms used for this work scale up in performance, the accuracy and
scope of the models they can support will scale up too, and we hope to gain an ever-deeper
understanding of how the brain supports language, how it can fail, and the best ways to
achieve recovery from those failures.
The need for computers to become better at coping with component failure is underlined by
the trends in the semiconductor technology from which they are built.
As transistors approach atomic scales there is an inevitable degradation in the consistency
of their operation and designers are searching for ways to build microchips that can tolerate
high rates of transistor failure.
The brain is an existence proof that it is possible to accommodate high component failure
rates without significant loss of functionality, and there is much to be learnt from biology
about building reliable systems on unreliable technology.
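The classical engineering answer to unreliable components is redundancy with majority voting (triple modular redundancy), a far cruder mechanism than whatever the brain uses. A small simulation, with an assumed per-component failure rate, shows how voting suppresses errors:

```python
# Majority voting over redundant unreliable components: the classical
# triple-modular-redundancy answer to the problem the brain solves far
# more gracefully. Failure rate is an assumed illustrative figure.
import random

def unreliable_gate(x, failure_rate):
    """Return bit x, but flip it with probability failure_rate."""
    return x ^ 1 if random.random() < failure_rate else x

def redundant_gate(x, failure_rate, copies=3):
    """Run several unreliable copies and take a majority vote."""
    votes = [unreliable_gate(x, failure_rate) for _ in range(copies)]
    return 1 if sum(votes) > copies // 2 else 0

random.seed(0)
trials = 100_000
single_errors = sum(unreliable_gate(1, 0.05) != 1 for _ in range(trials))
voted_errors = sum(redundant_gate(1, 0.05) != 1 for _ in range(trials))
print(single_errors / trials)  # ~0.05
print(voted_errors / trials)   # much lower: theory predicts about 3*p**2
```

Voting turns a 5% component error rate into well under 1% at the output, but at triple the hardware cost per gate; biology appears to achieve robustness without paying that kind of overhead everywhere.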
As for improvements in computer software that might emerge from the quest to understand
the inner working of the brain, the potential for improvement in natural language interfaces
is almost limitless.
At present you have to put a lot of effort into learning how to use your computer effectively.
Imagine if this changed around, and it became the computer's job to learn how to be useful
to you, just like a good human personal assistant. This would require the computer to build
a model of how you - and in particular your mind - work.
The fear you may have of humanoid robots taking over the world as a result of computers
approaching the capability of modelling the human brain can be dispelled relatively easily.
Any computer capable of running these models will be large, expensive and very power-
hungry for the foreseeable future.
Biology will continue to offer the cheapest way of making portable, low-power brains (in
highly dangerous embodiments) for a long time yet.
Pictures:
• The challenge to understand the brain could be helped by computer models
• Prof Furber is currently looking at ways to mimic human brains.
• Research could help those suffering speech problems as a result of serious injury
The "Blue Brain" has been put in a virtual body, and observing it gives the first indications
of the molecular and neural basis of thought and memory.
Scaling the simulation to the human brain is only a matter of money, says the project's
head.
The work was presented at the European Future Technologies meeting in Prague.
The Blue Brain project launched in 2005 as the most ambitious brain simulation effort ever
undertaken.
"The thing about the neocortical column is that you can think of it as an isolated processor.
It is very much the same from mouse to man - it gets a bit larger, a bit wider in humans,
but the circuit diagram is very similar," Henry Markram, leader of the Blue Brain project and
founder of the Brain Mind Institute in Switzerland, told BBC News.
He added that, when evolution discovered this "mammalian secret", it duplicated it many,
many times and then "used it as it needed more and more functionality".
Virtually there
Professor Markram told the Science Beyond Fiction conference that the column is being
integrated into a virtual reality agent - a simulated animal in a simulated environment, so
that the researchers will be able to observe the detailed activities in the column as the
animal moves around the space.
"It starts to learn things and starts to remember things. We can actually see when it
retrieves a memory, and where they retrieved it from because we can trace back every
activity of every molecule, every cell, every connection and see how the memory was
formed."
The next phase of the project will make use of a more advanced version of the IBM Blue
Gene supercomputer that was used in the research to date.
"The next phase is beginning with a 'molecularisation' process: we add in all the molecules
and biochemical pathways to move toward gene expression and gene networks. We couldn't
do that on our first supercomputer."
Moreover, Professor Markram thinks the exponential rise in computing power will allow the
project in 10 to 20 years to integrate many facets of medicine, right down to genomic
profile, eventually creating a vast database for "personalised medicine".
Such an approach would allow researchers to simulate, on the level of an individual, how
they will respond to a given drug or treatment.
Emerging arts
Not all scientists agree that the lofty ultimate goals of the Blue Brain project are achievable.
Wolfgang Wahlster of the German Research Center for Artificial Intelligence, and a chief
German government scientific adviser on ICT, thinks that the reductionist strategy of the
project is flawed - that it won't see the forest for the trees.
"Imagine you could follow in one of the most advanced Pentium chips today what each and
every transistor is doing right now," he told BBC News.
"Then I ask, 'What is happening? Is Word running? Are you doing a Google search?' You
couldn't answer. Looking at this level you cannot figure it out.
"This is very interesting research and I'm not criticising it, but it doesn't help us in computer
science in having the intelligent behaviour of humans replicated."
Professor Markram believes that by building up from one neocortical column to the entire
neocortex, the ethereal "emergent properties" that characterise human thought will, step by
step, make themselves apparent.
"They are not things that are easily predicted by just knowing elements - by definition - but
by putting them together you can explore the principles, where they came from. Basically
that's what we're after: understanding the principles of emergent properties."
Such emergent properties lead to the very essence of being human - the spatial awareness
of lower mammals graduates to political views and artistic expression in humans.
When asked when the simulation would come up with something artistic or an invention,
Professor Markram said it was simply a matter of money.
"It's not a question of years, it's one of dollars. The psychology is there today and the
technology is there today. It's a matter of if society wants this. If they want it in 10 years,
they'll have it in 10 years. If they want it in 1000 years, we can wait."
Pictures:
• This result completes the first phase of the brain simulation project
• Organised columns of neurons have been simulated molecule by molecule
A detailed, functional artificial human brain can be built within the next 10 years,
a leading scientist has claimed.
Henry Markram, director of the Blue Brain Project, has already simulated elements of a rat
brain.
He told the TED Global conference in Oxford that a synthetic human brain would be of
particular use finding treatments for mental illnesses.
Around two billion people are thought to suffer some kind of brain impairment, he said.
"It is not impossible to build a human brain and we can do it in 10 years," he said.
"And if we do succeed, we will send a hologram to TED to talk."
'Shared fabric'
The Blue Brain project was launched in 2005 and aims to reverse engineer the mammalian
brain from laboratory data.
In particular, his team has focused on the neocortical column - the repeating circuit unit of
the part of the mammalian brain known as the neocortex.
"It's a new brain," he explained. "The mammals needed it because they had to cope with
parenthood, social interactions and complex cognitive functions.
"It was so successful an evolution from mouse to man it expanded about a thousand fold in
terms of the numbers of units to produce this almost frightening organ."
"It's a bit like going and cataloguing a bit of the rainforest - how many trees does it have,
what shape are the trees, how many of each type of tree do we have, what is the position of
the trees," he said.
"But it is a bit more than cataloguing because you have to describe and discover all the
rules of communication, the rules of connectivity."
The project now has a software model of "tens of thousands" of neurons - each one of which
is different - which has allowed them to digitally construct an artificial neocortical column.
Although each neuron is unique, the team has found that the circuitry of different brains
follows common patterns.
"Even though your brain may be smaller, bigger, may have different morphologies of
neurons - we do actually share the same fabric," he said.
"And we think this is species specific, which could explain why we can't communicate across
species."
World view
To make the model come alive, the team feeds the models and a few algorithms into a
supercomputer.
"You need one laptop to do all the calculations for one neuron," he said. "So you need ten
thousand laptops."
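Taking that estimate literally, the arithmetic behind the "matter of dollars" claim is stark. The column size below is the "tens of thousands" of neurons the article describes; the whole-brain figure is the article's hundred billion:

```python
# Markram's scaling estimate, taken literally: one laptop's worth of
# compute per simulated neuron. Neuron counts are from the article.
NEURONS_PER_COLUMN = 10_000        # one neocortical column
HUMAN_BRAIN_NEURONS = 100_000_000_000  # ~100 billion neurons

laptops_for_column = NEURONS_PER_COLUMN       # the "ten thousand laptops"
laptops_for_brain = HUMAN_BRAIN_NEURONS       # why scaling up is costly

print(laptops_for_column)  # 10000
print(laptops_for_brain / laptops_for_column)  # ten million columns' worth
```

At this fidelity, a whole brain needs ten million times the hardware of one column, which is why the project pins whole-brain simulation on money and the continued exponential growth of computing power rather than on new science.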
But as well as advancing neuroscience and philosophy, the Blue Brain project has other
practical applications.
For example, by pooling all the world's neuroscience data on animals to create a "Noah's
Ark" of brain data, researchers may be able to build virtual animal models.
"We cannot keep on doing animal experiments forever," said Professor Markram.
It may also give researchers new insights into diseases of the brain.
"There are two billion people on the planet affected by mental disorder," he told the
audience.