Brain Emulation by 2030

Over the past few years I’ve been thinking about whole brain emulation (WBE) and the computational resources it requires.  My conclusion is that the required technology level will be reached in the 2025 – 2030 time frame.

Although most estimates focus on calculations per second, the relevant parameters are:

  • Calculations per second
  • Memory size
  • Memory bandwidth per node
  • Inter-node communication bandwidth
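
To see how these parameters interact, here is a minimal back-of-envelope sketch in Python.  The neuron and synapse counts, firing rate, per-event cost, and per-synapse state are all illustrative assumptions rather than claims about the actual requirements; plugging in different values shifts which of the four parameters dominates.

```python
# Back-of-envelope sizing for whole brain emulation.
# Every constant below is an illustrative assumption, not a measurement.

NEURONS = 8.6e10            # assumed neuron count
SYNAPSES = 1.0e15           # assumed synapse count
AVG_SPIKE_RATE_HZ = 5.0     # assumed mean firing rate
FLOPS_PER_SYN_EVENT = 10    # assumed arithmetic cost per synaptic event
BYTES_PER_SYNAPSE = 8       # assumed state (weight, delay, plasticity) per synapse
BYTES_PER_SPIKE_MSG = 8     # assumed packed (neuron id, timestamp) message

# 1. Calculations per second: dominated by synaptic events.
flops = SYNAPSES * AVG_SPIKE_RATE_HZ * FLOPS_PER_SYN_EVENT

# 2. Memory size: dominated by per-synapse state.
memory_tb = SYNAPSES * BYTES_PER_SYNAPSE / 1e12

# 3/4. Bandwidth: each spike generates a message that must reach its targets,
#      often on other nodes, so spike delivery alone loads the interconnect.
spike_traffic_gbs = NEURONS * AVG_SPIKE_RATE_HZ * BYTES_PER_SPIKE_MSG / 1e9

print(f"compute       : {flops:.1e} FLOP/s")
print(f"memory        : {memory_tb:,.0f} TB")
print(f"spike traffic : {spike_traffic_gbs:,.0f} GB/s aggregate")
```

Under these particular assumptions, memory capacity and aggregate spike traffic are at least as demanding as raw calculations per second, which is why the other three parameters belong in the estimate alongside compute.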

Singularity Summit 2010 – live blogging – day 2

Missed Eliezer Yudkowsky: Simplified  Humanism and Positive Futurism.

9:40 – Ramez Naam: The Digital Biome

Plenty of carrying capacity for the biome – 30-300 billion people with advanced biotech.  We are using only 1/1000 of the incident energy from the sun.  There’s no reason to crash due to lack of resources with advanced tech.  Population is predicted to level off at 10 billion.

Good points, but the Singularity is likely to happen on a shorter time scale.

Singularity Summit 2010 – live blogging – day 1

9:30 – Missed Michael Vassar’s talk.

9:50 – Gregory Stock is talking.  He is skeptical about progress in the bio realm.  He says that the FDA is a damper on progress, but he also says that there are difficult problems.  He brings up Alzheimer’s as an example.  I think he is underestimating the power of info tech to change the way we do bio-science.  Having read/write access to DNA, plus “in-silico” simulations will change the game.

Now he is talking about Silicon and saying that the complexity of computers rivals that of life.  And now he is talking about the rapid exponential progress in DNA technology.  As far as I understand, he is worried that we will create new life forms that will supersede humans.  He is saying that human evolution is “not exponential”. I think he means that it’s a very slow exponential compared to tech.

IBM’s Blue Brain and Simulated Level of Detail

Henry Markram calls IBM’s cat-scale brain simulation a hoax. Markram claims that the simulation doesn’t have the 10,000+ differential equations needed to simulate the synapses with fidelity.  This argument is a version of the naturalistic fallacy: if Nature requires X to achieve a result, we will have to perform X when replicating the effect.

It is useful to think about the simulation’s level of detail (LOD) in terms of certain thresholds, from most detailed to least detailed:

  • Noise level: at this level (call it LODn) the lack of precision is on the order of the noise present in a biological brain.  It is not possible to distinguish the functioning of such a simulated brain from the functioning of a biological brain.
  • Functional level: at this level (call it LODf) the lack of precision is greater, but the result is functionally similar.  There may be some behavior changes, but the overall capabilities (e.g. “intelligence”) are similar.
  • Equivalence level: at this level (call it LODe) the precision is even lower, but this is compensated for with tweaks to the simulated physiology.  The result is equivalent in capabilities, although some characteristics may be very different.  For example, the retina can be replaced with a non-biological equivalent.

The computational power required decreases from LODn to LODf to LODe, likely by orders of magnitude between levels.

Consider a non-biological example: a digital computer.  What does it take to simulate it?  It is obviously enough to simulate the logic function.  Markram’s line of reasoning would seem to argue that we have to simulate the voltage gradients and charge movements in each transistor!

In the transistor case, LODn would involve simulating each transistor’s logic function.  LODf would involve an instruction set simulation (e.g. the QEMU emulator).  LODe would involve using the most convenient instruction set (e.g. x86) and recompiling any software.  Clearly, an LODe simulation is several orders of magnitude more efficient.
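
To make the gap between levels concrete, here is a toy sketch of my own in Python (it is not taken from IBM’s or Markram’s work): a gate-level simulation of a 32-bit add standing in for the per-transistor extreme, versus simply using the host’s native addition standing in for LODe.

```python
# Toy level-of-detail comparison for a 32-bit addition.
# Gate-level simulation (LODn-ish) vs. native host addition (LODe-ish).

def gate_level_add(a, b, width=32):
    """Ripple-carry adder built from simulated XOR/AND/OR gates."""
    result, carry, gate_ops = 0, 0, 0
    for i in range(width):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                             # 2 XOR gates
        carry = (x & y) | (x & carry) | (y & carry)   # 3 AND + 2 OR gates
        gate_ops += 7
        result |= s << i
    return result, gate_ops

def native_add(a, b):
    """The 'recompile to the convenient instruction set' approach."""
    return (a + b) & 0xFFFFFFFF                       # one host instruction

total, ops = gate_level_add(123456, 654321)
assert total == native_add(123456, 654321)
print(f"gate-level: {ops} simulated gate evaluations; native: 1 instruction")
```

Both give identical results at the interface, but the detailed simulation pays a large constant factor for fidelity the task does not require.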

Markram fails to convince that his preferred level of simulation is required for LODn, never mind the other levels.

One way to find out what each level requires is to actually run simulations and compare them to physical neural matter.  The Blue Brain project aims to do that, although the results are not conclusive yet.  It would be good if more research were directed at comparing their simulation to a biological brain.  This would make the project more grounded.

2008 Singularity Summit

The 2008 Singularity Summit is coming up on Saturday. I’ve been helping out on the web site and payment processing. Should be interesting.

VCs at the Singularity Summit

Steve Jurvetson and Peter Thiel gave presentations today. I found their perspectives very interesting.

Thiel theorized that the increase in frequency and magnitude of boom and bust cycles is a prelude to the Singularity, which would be a sustained boom driven by radical increases in productivity. Something to watch.

Jurvetson’s main thesis was that AI created by evolutionary algorithms would have a strong competitive advantage over attempts to use traditional design. Some of the advantages include lack of brittleness and speed of implementation. Once you go down the evolution path, you diverge from the design path, because reverse engineering an evolved system is either very difficult or impossible. One idea I had while listening to this is that you could evolve subsystems with well-defined I/O and connect the subsystems into a designed overall architecture.
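
A minimal sketch of that idea, assuming a deliberately trivial subsystem (a linear function behind a fixed float-in/float-out contract) and a toy evolutionary loop; none of the names or numbers here come from Jurvetson’s talk.

```python
# Sketch: evolve a subsystem behind a fixed I/O contract, then plug it
# into a hand-designed architecture. Everything here is illustrative.
import random

def target(x):
    """Behavior the evolved subsystem is supposed to approximate."""
    return 2.0 * x + 1.0

def make_subsystem(genome):
    """Fixed contract: float in, float out. The genome is (a, b)."""
    a, b = genome
    return lambda x: a * x + b

def fitness(genome, samples=20):
    f = make_subsystem(genome)
    xs = [i / samples for i in range(samples)]
    return -sum((f(x) - target(x)) ** 2 for x in xs)

def evolve(generations=200, pop_size=30, sigma=0.1):
    """Tiny truncation-selection evolutionary loop."""
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]
        children = [
            (p[0] + random.gauss(0, sigma), p[1] + random.gauss(0, sigma))
            for p in random.choices(parents, k=pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

# Designed overall architecture: hand-written glue around the evolved part.
subsystem = make_subsystem(evolve())

def pipeline(raw):
    return round(subsystem(raw / 100.0), 2)

print(pipeline(50))   # close to 2.0 under these assumptions
```

The point of the sketch is the boundary: the evolved part is free to be opaque internally, while the surrounding pipeline is ordinary designed code that only relies on the fixed interface.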

Presentation by Artificial Development

Artificial Development is giving a presentation at the Singularity Summit about their CCortex and CorticalDB products. It seems like a full-featured product suite for creating biologically plausible neural networks/brain maps (CorticalDB) and simulating the resulting network (CCortex). They claim high-fidelity biological simulation, 8-bit action potentials, etc.

The claim of up to 100 million neurons seems pretty aggressive. Not sure why they would be so much ahead of the IBM effort using x86 CPUs, even if they have thousands of them.

CCortex will be open-source.

They claim multiple levels of simulation: (1) detailed equations, (2) “estimation of action potentials”, and (3) a proprietary method.

At the Singularity Summit

The most memorable morning session at the Singularity Summit 2007 was the inimitable Eliezer Yudkowsky’s.

He talked about three different schools of thought about the Singularity:

  • Vingean – where prediction becomes impossible
  • Accelerationist – exponential technological change
  • Greater-than-human intelligence – in a positive feedback loop

His thesis was that the three schools reinforce but also contradict each other.

Another good point Eliezer makes is that advances in scientific knowledge and algorithms reduce the threshold for the Singularity.

Many-worlds Immortality and the Simulation Argument

An alternative to the simulation argument:

Nick Bostrom’s Simulation Argument argues that at least one of the following must be true:

  • the human species is very likely to go extinct before reaching a “posthuman” stage
  • any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history
  • we are almost certainly living in a computer simulation

However, I see other possibilities. Assumptions:

  • The strong many-worlds theory is correct (i.e. all consistent mathematical systems exist as universes, a.k.a. “everything exists”)
  • The many-worlds immortality theory is correct (i.e. for every conscious state there is at least one smooth continuation of that state in the many-worlds)

Given these assumptions, it doesn’t matter if we are in a simulation because our conscious state exists in many simulations and many non-simulated worlds that look identical to us (but are different in imperceptible ways). Even if all the simulations stopped, there would still be a continuation of our conscious state in a non-simulated world consistent with our observations to date.

Further, it seems that there are more non-simulated worlds than simulated ones. This is because many mathematical models cannot be formulated in a finite way and are therefore not simulatable by an intelligent entity. It might even be that simulatable worlds are of measure zero in the many-worlds.

Further out ideas:

A fascinating related idea is the Egan Jump as described in the book Permutation City. The idea is to jump to another world in the many-worlds by simulating the genesis of a new universe. In this universe you code yourself into the initial conditions, and design the rules so that you end up as an upload in the substrate of the new universe. Because that universe will continue as its own mathematical model, your conscious state will continue in that universe, branching off from your original self.

Yet another, more distantly related idea is that the peculiarities of our universe (quantum physics, large amounts of empty space) are in a sense an error-correcting mechanism. Because any perturbation of a world is also a world, the result is quite chaotic and inhospitable to meaningful life. The structure we see around us, with its large aggregates, “averages out” the chaos. This leads to the stable environment required for conscious observers to arise.

Singularity Institute

Accelerating Future has a piece about some interesting progress for the Singularity Institute.  Apparently they have offices right here in SF.

I still believe that the Friendly AI initiative will have a hard time achieving meaningful success.  I think it’s more likely that we will be able to create friendly Uploads first, or at least that we should strive towards that goal.

Nanofactory Collaboration

CRN reports:

Nanotechnologists Robert A. Freitas, Jr. and Ralph C. Merkle have launched a “Nanofactory Collaboration” website. 

Too early, or just the right timing?

NPR piece on Singularity with Doctorow and Vinge

NPR: Man-Machine Merger Arriving Sooner Than You Think

Timeline to Singularity

(this one is by my father, Vladimir)

Timeline for Singularity is Near (by R. Kurzweil)

2025: IPM creates the first machine to execute 500 trillion instructions/second, the throughput needed to simulate the brain.

2027: Mr. Bill Bates obtains from IPM the exclusive rights to produce a human-level software system. Bill buys an inexpensive brain simulation and creates the first Humachine.

2028: With huge VC financing and a grant from POD, Bill hires tens of thousands of programmers, builds a campus of 400 buildings, and achieves a de facto monopoly in the Humachine software business.

2029: The first division of our Humachines destroys the enemy at Pat Pot. The enemy retaliates.

2030: One million Humachines suffer sudden death. A machinopsy shows that the last message in the system was: “irreversible error”.

2031: Improved software with fewer bugs is introduced. The production of Humachines consumes most of the Earth’s available resources. The Earth warms up considerably as a result of the excessive use of resources to produce Humachines.

2050: The world’s human population decreases considerably as a result of low birthrates and losses related to the Humachine wars. Twenty million Humachines are destroyed by a terrorist attack on their power sources. Many more are built.

2052: The Earth’s temperature continues to increase. A group of humans led by Steve Gobs designs an interplanetary vessel that defies the laws of physics and might be able to reach outer worlds.

2072: Humachines attack the remaining humans. Gobs’ group escapes with a number of space vessels.

2112: The Earth is too hot for life. After 40 years of wandering through space, the last humans and their children reach the New Outer World and establish a new civilization (where producing any machine that can do more than 100 computations/s is punishable by death).

The Singularity Was Here…

V.

What Matt Bamberger believes about the Singularity

Here is the long version:

http://www.mattbamberger.com/Main/WhatIBelieveAboutTheSingularityLong

Singularity before Nano? Interesting…

The Singularity is Near by Kurzweil

Good semi-introductory text to give to all your smart friends.