Entries Tagged as 'Brain'

Brain Emulation by 2030

Over the past few years I’ve been thinking about whole brain emulation (WBE) and the computational resources it requires. My conclusion is that the necessary technology level will be reached in the 2025–2030 time frame.

Although most estimates focus on calculations per second, the relevant parameters are (a rough sizing sketch follows the list):

  • Calculations per second
  • Memory size
  • Memory bandwidth per node
  • Inter-node communication bandwidth


Long-Distance Wiring Diagram of the Monkey Brain

Raghavendra Singh and Dharmendra S Modha published a paper in PNAS detailing 383 brain regions and 6,602 connections between them.

IBM’s Blue Brain and Simulated Level of Detail

Henry Markram calls IBM’s cat-scale brain simulation a hoax. Markram claims that the simulation doesn’t include the 10,000+ differential equations needed to simulate the synapses with fidelity. This argument is a version of the naturalistic fallacy: if Nature requires X to achieve a result, then we must perform X when replicating that result.

It is useful to think about the simulation’s level of detail (LOD) in terms of certain thresholds, from most detailed to least detailed:

  • Noise level: at this level (call it LODn) the lack of precision is on the order of the noise present in a biological brain.  It is not possible to distinguish the functioning of such a simulated brain from the functioning of a biological brain.
  • Functional level: at this level (call it LODf) the lack of precision is greater, but the result is functionally similar.  There may be some behavior changes, but the overall capabilities (e.g. “intelligence”) are similar.
  • Equivalence level: at this level (call it LODe) the precision is even lower, but this is compensated with  tweaks to the simulated physiology.  The result is equivalent in capabilities, although some characteristics may be very different.  For example, the retina can be replaced with a non-biological equivalent.

The computational power required decreases in the order LODn > LODf > LODe, with likely order-of-magnitude differences between the levels.

Consider a non-biological example: what does it take to simulate a digital computer? It is obviously enough to simulate its logic function. Markram’s line of reasoning would seem to argue that we have to simulate the voltage gradients and charge movements in each transistor!

In the computer case, LODn would involve simulating each transistor’s logic function, LODf would involve an instruction-set simulation (e.g. the QEMU emulator), and LODe would involve using the most convenient instruction set (e.g. x86) and recompiling any software. Clearly, an LODe simulation is several orders of magnitude more efficient; the toy example below illustrates the gap.
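
As a toy illustration (my own sketch, not part of the original argument), here is the same 8-bit addition simulated gate by gate, standing in for LODn, and computed natively, standing in for LODe. The gate-level version performs 72 simulated NAND evaluations plus bookkeeping where the native version performs a single machine addition:

    # Toy illustration of the level-of-detail argument.  The analogy is an
    # assumption of mine: gate-level simulation stands in for LODn, native
    # execution for LODe.

    def nand(a, b):
        """Logic function of a single NAND gate."""
        return 1 - (a & b)

    def full_adder(a, b, cin):
        """One-bit full adder built from nine NAND gates."""
        t1 = nand(a, b)
        t2 = nand(a, t1)
        t3 = nand(b, t1)
        axb = nand(t2, t3)      # a XOR b
        t4 = nand(axb, cin)
        t5 = nand(axb, t4)
        t6 = nand(cin, t4)
        s = nand(t5, t6)        # sum bit = a XOR b XOR cin
        cout = nand(t1, t4)     # carry = ab + cin(a XOR b)
        return s, cout

    def add8_gate_level(x, y):
        """Ripple-carry addition: 8 full adders, 72 simulated gates."""
        carry, result = 0, 0
        for i in range(8):
            s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
            result |= s << i
        return result

    def add8_native(x, y):
        """The 'recompiled' version: one machine-level addition."""
        return (x + y) & 0xFF

    # Both levels of detail agree on every input pair.
    assert all(add8_gate_level(x, y) == add8_native(x, y)
               for x in range(256) for y in range(256))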

Markram does not make a convincing case that his preferred level of simulation is required even for LODn, never mind the coarser levels.

One way to find out what each level requires is to actually run simulations and compare them to physical neural matter. The Blue Brain project aims to do that, although the results are not conclusive yet. It would be good if more research were directed at comparing the simulation to a biological brain; that would make the project more grounded.

Human Scale Memory Timeline Calculator

I have previously mentioned my estimator for when human scale computation power will be available. I have since realized that the bottleneck might be memory rather than computation. I’ve created a similar estimator for memory.

Although we may achieve human-level compute power in 2014 under the low estimates, memory capacity looks set to lag by another 6 years. Under the high estimates, compute power arrives in 2020 and memory capacity lags by 5 more years, to 2025.

However, if Flash memory or a similar technology will do the trick, a factor-of-4 cost reduction would advance the timeline by about 4 years (two doublings of cost-effectiveness at roughly two years per doubling).
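
The underlying arithmetic is simple exponential extrapolation. Here is a minimal sketch of such an estimator; every concrete number below (required capacity, budget, starting cost per byte, base year, halving time) is an illustrative assumption, not an input from my calculator:

    import math

    def year_affordable(required_bytes, budget_dollars, cost_per_byte_now,
                        base_year, halving_years):
        """Year when required_bytes fits in budget_dollars, assuming the
        cost per byte halves every halving_years years."""
        cost_now = required_bytes * cost_per_byte_now
        if cost_now <= budget_dollars:
            return base_year
        halvings_needed = math.log2(cost_now / budget_dollars)
        return base_year + halvings_needed * halving_years

    # Illustrative assumptions: 1e11 neurons x 8,000 synapses x 16 bytes,
    # a $1M budget, memory at ~$20 per GB in 2010, cost halving every 2 years.
    required = 1e11 * 8000 * 16
    dram = year_affordable(required, 1e6, 20 / 1e9, 2010, 2.0)
    # A technology 4x cheaper per byte (e.g. Flash) saves log2(4) = 2 halvings,
    # i.e. about 4 years at this rate.
    flash = year_affordable(required, 1e6, 20 / 4 / 1e9, 2010, 2.0)
    print(round(dram, 1), round(flash, 1), round(dram - flash, 1))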

Whole Brain Emulation Roadmap

A very detailed roadmap written by Anders Sandberg and Nick Bostrom and published by the Future of Humanity Institute / University of Oxford. Lots of nice complexity estimates for different emulation detail levels. Seems like 2020 will be an interesting year.

H/T: Next Big Future

Mouse brain simulated at 1/10 of real-time

Update: the BlueGene/L instance used here is only 1/32 the size of the one deployed at LLNL, so we are still within the high bound after all. On the other hand, it remains to be seen how accurate the model neurons are compared to functioning biological neurons.


Dharmendra S Modha posts an article about a recent result presented at CoSyNe 2007.

We deployed the simulator on a 4096-processor BlueGene/L supercomputer with 256 MB per CPU. We were able to represent 8,000,000 neurons (80% excitatory) and 6,300 synapses per neuron in the 1 TB main memory of the system. Using a synthetic pattern of neuronal interconnections, at a 1 ms resolution and an average firing rate of 1 Hz, we were able to run 1s of model time in 10s of real time!
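
For scale, the quoted figures work out to roughly 20 bytes of state per synapse; this per-synapse number is my own back-of-envelope arithmetic, not something stated in the post:

    # Derived from the quoted figures: 4096 CPUs x 256 MB, 8M neurons,
    # 6,300 synapses per neuron.  The per-synapse figure is my arithmetic.
    neurons = 8_000_000
    synapses_per_neuron = 6_300
    memory_bytes = 4096 * 256 * 2**20       # ~1 TB of main memory

    bytes_per_synapse = memory_bytes / (neurons * synapses_per_neuron)
    print(round(bytes_per_synapse, 1))      # roughly 20 bytes per synapse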

This is excellent news, since it will now be possible to figure out what biological modeling aspects are important to functionality.

Since the human brain has 100 billion neurons, this represents roughly 1/10,000 of a human brain. The computer was a $100 million BlueGene/L. So an improvement of about 10,000,000 is required in order to model a human brain in real time for $1M.
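
The 10,000,000 figure decomposes into three factors; this is just a restatement of the arithmetic above, with the scale factor rounded to 10,000 as in the text (the exact neuron ratio is 100e9 / 8e6 = 12,500):

    # Factors behind the ~10,000,000x improvement required.
    scale_factor = 10_000   # 1/10,000 of a human brain -> full scale
    speed_factor = 10       # 1/10 of real time -> real time
    cost_factor  = 100      # $100M machine -> $1M budget

    print(scale_factor * speed_factor * cost_factor)   # 10,000,000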

However, the BlueGene/L is two years old, and it is about 20 times less efficient than commodity hardware (based on its quoted 360 teraflops), so the improvement actually required is only around 100,000.

Based on this data, the human brain requires 10 Exa CPS, one order of magnitude above the high estimate used in my calculator. A human equivalent for $1M would be available around the year 2023.

Hardware specifically suited to this application may bring the requirement down to 1 Exa CPS and pull the date back to 2020.