Presentation by Artificial Development

Artificial Development is giving a presentation at the Singularity Summit about their CCortex and CorticalDB products. Seems like a full-featured product line for creating biologically plausible neural networks/brain maps (CorticalDB) and simulating the resulting network (CCortex). They claim high-fidelity biological simulation, 8-bit action potentials, etc.

The claim of up to 100 million neurons seems pretty aggressive. Not sure why they would be so far ahead of the IBM effort using x86 CPUs, even if they have thousands of them.

CCortex will be open-source.

They claim multiple levels of simulation: 1. detailed equations 2. “estimation of action potentials” 3. a proprietary method.

At the Singularity Summit

The most memorable morning session at the Singularity Summit 2007 was the inimitable Eliezer Yudkowsky’s.

He talked about three different schools of thought about the Singularity:

Vingean – where prediction becomes impossible
Accelerationist – exponential technological change
Greater than Human Intelligence – in a positive feedback loop

His thesis was that the three schools reinforce but also contradict each other.

Another good point Eliezer makes is that advances in scientific knowledge and algorithms reduce the threshold for the Singularity.

Many-worlds Immortality and the Simulation Argument

An alternative to the simulation argument:

Nick Bostrom’s Simulation Argument argues that at least one of the following must be true:

  • the human species is very likely to go extinct before reaching a “posthuman” stage
  • any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history
  • or we are almost certainly living in a computer simulation

However, I see other possibilities. Assumptions:

  • The strong many-worlds theory is correct (i.e. all consistent mathematical systems exist as universes, a.k.a “everything exists”)
  • The many-worlds immortality theory is correct (i.e. for every conscious state there is at least one smooth continuation of that state in the many-worlds)

Given these assumptions, it doesn’t matter if we are in a simulation because our conscious state exists in many simulations and many non-simulated worlds that look identical to us (but are different in imperceptible ways). Even if all the simulations stopped, there would still be a continuation of our conscious state in a non-simulated world consistent with our observations to date.

Further, it seems that there are more non-simulated worlds than simulated ones. Many mathematical models cannot be specified in any finite way, and a world without a finite description cannot be simulated by an intelligent entity. Since finite descriptions (like programs) are only countably many, while the worlds posited by the strong many-worlds theory are plausibly far more numerous, it might even be that simulatable worlds are of measure zero in the many-worlds.

Further out ideas:

A fascinating related idea is the Egan Jump as described in the book Permutation City. The idea is to jump to another world in the many-worlds by simulating the genesis of a new universe. In this universe you code yourself into the initial conditions, and design the rules so that you end up as an upload in the substrate of the new universe. Because that universe will continue as its own mathematical model, your conscious state will continue in that universe, branching off your original self.

Yet another, more distantly related idea is that the peculiarities of our universe (quantum physics, large amounts of empty space) are in a sense an error-correcting mechanism. Because any perturbation of a world is also a world, the result is quite chaotic and inhospitable to meaningful life. The structure we see around us, with its large aggregates, “averages out” the chaos. This leads to the stable environment required for conscious observers to arise.

An insufficient present

I strongly identify with this concept.

Mouse brain simulated at 1/10 of real-time

Update: the BlueGene/L instance used here is only 1/32 the size of the one deployed at LLNL, so we are still within the upper bound after all. On the other hand, it remains to be seen how accurately the model matches a functional neuron.


Dharmendra S Modha posts an article about a recent result presented at CoSyNe 2007.

We deployed the simulator on a 4096-processor BlueGene/L supercomputer with 256 MB per CPU. We were able to represent 8,000,000 neurons (80% excitatory) and 6,300 synapses per neuron in the 1 TB main memory of the system. Using a synthetic pattern of neuronal interconnections, at a 1 ms resolution and an average firing rate of 1 Hz, we were able to run 1s of model time in 10s of real time!

This is excellent news, since it will now be possible to figure out which aspects of biological modeling are important to functionality.

Since the human brain has about 100 billion neurons, this represents roughly 1/10,000 of a human brain. The computer was a $100 million BlueGene/L. So an improvement factor of 10,000,000 is required to model a human brain in real time for $1M.

However, the BlueGene/L design is two years old, and it is about 20 times less cost-efficient than commodity hardware (based on a quoted 360 teraflops). So the real improvement required is only around 500,000.

Based on this data, the human brain requires 10 Exa CPS, one order of magnitude above the high estimate used in my calculator. Human equivalence for $1M would be available around the year 2023.

Hardware specifically suited to this application may bring this back down to 1 Exa CPS and pull the date back to 2020.
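The scaling arithmetic above can be spelled out explicitly. A minimal sketch, using the post's own round figures (the 1/10,000 size ratio, the 10x slowdown, and the $100M price tag are all estimates from the post, not measured facts):

```python
# Rough scaling arithmetic for a real-time, $1M whole-brain simulation,
# using the round figures quoted in the post (all are estimates).
human_neurons = 100e9      # ~100 billion neurons in a human brain
model_neurons = 8e6        # neurons simulated in the BlueGene/L run
size_gap = human_neurons / model_neurons  # 12,500x, rounded to 10,000x in the post
speed_gap = 10             # the run took 10 s of real time per 1 s of model time
cost_gap = 100e6 / 1e6     # $100M machine vs. a $1M budget -> 100x

# Total improvement factor, using the post's rounded 10,000x size gap:
improvement = 10_000 * speed_gap * cost_gap
print(f"size gap ~{size_gap:,.0f}x, total improvement needed ~{improvement:,.0f}x")
```

This reproduces the 10,000,000x figure; commodity-hardware efficiency then reduces the remaining gap as discussed above.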

Nano and lightyears in context

Check out this interactive “powers-of-ten” flash presentation from Nikon. Good for some perspective…

Make sure your browser is full screen or you may miss the controls at the bottom.

Hat tip to Nanodot.

Advances in top-down nanotech

Brian Wang has a good review article about recent advances in top-down nanotech and some projections. Maybe top-down will meet bottom-up in 10 years?

Nano-Ethics and Spirituality

Dr. Donald Bruce writes an article titled Faster, Higher, Stronger in the first issue of Nano Now. In response, Chris Phoenix takes issue with Dr. Bruce’s perceived ethical fallacies.

I think one of the major deficiencies in Dr. Bruce’s article is the false dichotomy between material improvement and spiritual improvement.  Living longer and better, with more access to information and in richer personal networks, allows one to devote more effort to the path of spiritual growth, and to be better supported on it.  For example, one may speculate that a novice yoga practitioner whose body has been suitably enhanced (e.g. through biofeedback or by being more limber) will be aided on their path.

Another fallacy is criticizing a technology for its initial cost and limited adoption. Computers were very expensive at first. However, if we had not developed them because of their limited initial availability, we would never have reached today’s broad access.

One may also speculate that the discomfort some Christian believers experience with their bodies may indispose them to mind/body synergistic growth.  Buddhists, on the other hand, may be more amenable to technologically enhanced mind/body improvement.

GPU increases computation heavy lifting by 10-200x

This Slashdot article discusses Nvidia’s compiler environment for the new GeForce 8800 GPU. Apparently, heavy numeric tasks can see a 10-200x performance increase compared to a CPU.

Assuming this enables a 100x increase in neural network computation, it will save 8 years or so on the Moore’s-law timeline to NN-based AI.
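As a sanity check on that figure: a one-time 100x speedup is worth log2(100) ≈ 6.6 doublings of hardware performance. How many calendar years that buys depends on the doubling time you assume; the ~14 months below is an illustrative assumption, not a figure from the post:

```python
import math

# Years of Moore's-law progress equivalent to a one-time 100x speedup.
# The doubling time is an assumed parameter; ~14 months is one common estimate.
speedup = 100
doubling_time_years = 14 / 12               # assumed: one doubling every ~14 months
doublings = math.log2(speedup)              # ~6.6 doublings
years_saved = doublings * doubling_time_years
print(f"{doublings:.1f} doublings ~= {years_saved:.1f} years saved")
```

With that assumed doubling time, the result lands close to the 8-year figure above.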

What is Friendly?

Bob Mottram argues in a post on the Streeb-Greebling Diaries that Friendly AI suffers from a critical flaw. The definition of Friendly is lacking:

The elephant in the room here though is that there really is no good definition for what qualifies as “friendly”. What’s a good decision for me, might not be a good decision for someone else.

I completely agree. A well-known result in social choice theory – Arrow’s Impossibility Theorem – shows that there is no general way to aggregate individual preferences into a group decision. Once there are at least two voters and three alternatives, no method of tallying votes can satisfy a small set of reasonable criteria simultaneously.
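The underlying difficulty shows up even with plain majority voting. Here is a minimal sketch (the voters and options are hypothetical) of the classic Condorcet cycle, where pairwise majorities yield no consistent group ranking:

```python
# Three voters rank three options; each ballot lists options best-to-worst.
ballots = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a strict majority of voters rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Pairwise majorities form a cycle: A beats B, B beats C, C beats A,
# so there is no option a majority consistently prefers.
for x, y in [("A", "B"), ("B", "C"), ("C", "A")]:
    print(f"majority prefers {x} over {y}: {majority_prefers(x, y)}")
```

All three comparisons come out True, so the "group preference" is cyclic even though every individual ballot is perfectly consistent.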
Michael Anissimov writes a response to Bob’s article, in which he argues:

There is a common definition for “friendly”, and it is accepted by many in the field:

“A “Friendly AI” is an AI that takes actions that are, on the whole, beneficial to humans and humanity; benevolent rather than malevolent; nice rather than hostile.”

Not too difficult.

and

If you can’t please everyone all the time, then try to please as many people as possible most of the time. Again, there’s no dilemma here.

However, this just pushes the question back to how to agree on what is benevolent, and how to distill a decision, given differences in opinion and Arrow’s Theorem. Two reasonable persons can disagree on a desired course of action because of differences in information, temperament, goals, personal situations, and so on. Current political processes produce outcomes that are unacceptable to many. It seems naive to hope that an entity created by these same reasonable people will act in a way that everyone agrees is Friendly.

Because of the impossibility of agreement among reasonable persons, I believe we should strive towards the following goals:

  • A balance of power, with the aim of making defense stronger than offense
  • Self-enhancement with the aim of keeping pace with other humans and with AIs
  • Self-definition, including autonomy and ownership of the self

A balance of power is important to create a stable political situation where individuals can protect themselves and be free from violence and coercion. We should look for technological applications (nano, computation, etc.) that favor defense (e.g. active shields, sensor technology, uploading/backup). These are technological fixes, which one hopes will be possible but are not certain. A balance of power is an enabler for the next two goals.

Self-enhancement allows humans to be equal players and meaningful participants in the future, rather than depending on the uncertain benevolence of people/AIs/organizations. It also feeds back into the balance-of-power goal, since it allows the balance to be maintained in the face of technological progress.

Self-definition allows individuals and voluntary groups to make their own decisions, which are more likely to be in line with individuals’ goals and preferences.

CRNANO on DNA based nanomanufacturing

Chris Phoenix at CRNANO has some suggestions for nanomanufacturing using DNA bricks, and a follow-up article.

Web 3.0, according to Miron

Here is what I think Web 3.0 will have:

  •  A global and open Reputation Network
  •  A distributed and open Computing and Storage Platform

Reputation Network

What does it mean for a Reputation Network to be global?  Currently, we have proprietary reputation systems, such as the reputation scores for sellers (and buyers) at Amazon and eBay.  However, that reputation is not portable.  This means that if an Amazon third-party seller wants to start selling on eBay, they have to start from scratch, as if their business were new.  Trust is an integral ingredient of transactions.  It becomes crucial on the internet, where a buyer and a seller are likely to never have heard of each other.  With portable reputations, a trust metric can be made available in all interactions.

What about the open part?  A global reputation system owned by one entity is a non-starter.  Why would one trust a single entity to provide basic infrastructure that affects all commerce and other interaction?  Reputation should be like TCP/IP – based on open standards so that different vendors can provide different levels of service and create a robust overall system.  The individual reputation systems can remain under the control of Amazon, eBay and others.  However, they can inter-operate so that they can create a global reputation network.
Reputation should be subjective. End-users should be able to subscribe to different raters, and thereby compute different scores for the same target. End-users have diverse values and preferences. One number cannot capture this diversity.

Storage and Computing

What about storage and computing?  Currently, people have a presence on the Web through Blogs, Wikis, Storefronts, IM, e-mail, etc.  However, creating a new Web application faces certain barriers.  The application creator has to acquire servers, manage them, keep the data safe, and face scalability issues as the application grows in popularity.  Interoperability between applications is also difficult.  A standardized computing and storage abstraction would allow new applications to be installed by the user into their virtual computing appliance.  Users would control which applications they run and how the applications communicate.  Applications and data would migrate to physical hardware based on what the user is willing to pay and what scalability requires.

The division of labor:  the application provider does what they are good at – writing applications.  The computing and storage providers supply efficient and reliable computing and storage (and if they don’t, the application can migrate easily, or even automatically).  The end-user does what they do best – connect the dots and provide content.

Brian Wang’s predictions

Brian Wang has predictions.

And he has some actual references to back them up.

(Found through Accelerating Future.)

Singularity Institute

Accelerating Future has a piece about some interesting progress for the Singularity Institute.  Apparently they have offices right here in SF.

I still believe that the Friendly AI initiative will have a hard time achieving meaningful success.  I think it’s more likely that we will be able to create friendly Uploads first, or at least that we should strive towards that goal.

Nanofactory Collaboration

CRN reports:

Nanotechnologists Robert A. Freitas, Jr. and Ralph C. Merkle have launched a “Nanofactory Collaboration” website. 

Too early, or just the right timing?

NPR piece on Singularity with Doctorow and Vinge

NPR : Man-Machine Merger Arriving Sooner Than You Think

Blogged with Flock

Three systems of action

CRN posts a review of their “Three Systems of Action” – Guardian, Commercial and Information. Parallels to Government, Industry and Open-Source.

Responsible Nanotechnology: Systems Working Together

Blogged with Flock

Timeline to Singularity

(this one is by my father, Vladimir)

Timeline for Singularity is Near (by R. Kurzweil)

2025: IPM creates the first machine to execute 500 trillion instructions/second, the throughput needed to simulate the brain.

2027: Mr. Bill Bates obtains from IPM the exclusive rights to produce a human-level software system. Bill buys an inexpensive brain simulation
and creates the first Humachine.

2028: With huge VC financing and a grant from POD, Bill hires tens of thousands of programmers, builds a campus of 400 buildings and achieves
de facto monopoly in Humachine software business.

2029: A first division of our Humachines destroys the enemy at Pat Pot. Enemy retaliates.

2030: One million Humachines suffer sudden death. A machinopsy shows that the last message in the system was: “irreversible error”.

2031: Improved software with fewer bugs is introduced. The production of Humachines consumes most of the Earth’s
available resources. Earth warms up considerably as a result of the excessive use of resources to produce Humachines.

2050: World human population decreases considerably as a result of low birthrates and losses in the Humachine wars.
Twenty million Humachines are destroyed by a terrorist attack on their power sources. Many more are built.

2052: Earth’s temperature continues to increase. A group of humans led by Steve Gobs designs an interplanetary vessel
that defies the laws of physics and might be able to reach outer worlds.

2072: Humachines attack the remaining humans. Gobs’ group escapes with a number of space vessels.

2112: Earth is too hot for life. After 40 years of wandering through space, the last humans and their children reach the
New Outer World and establish a new civilization (where producing any machine that can do more than 100 computations/s is punishable by death).

The Singularity Was Here…

V.

What Matt Bamberger believes about the Singularity

Here is the long version:

http://www.mattbamberger.com/Main/WhatIBelieveAboutTheSingularityLong

Singularity before Nano? Interesting…

Winner take all and Molecular Manufacturing

I posted a comment in a discussion about the problem of winner-take-all and MM.