Whole Brain Emulation Roadmap

A very detailed roadmap written by Anders Sandberg and Nick Bostrom and published by the Future of Humanity Institute / University of Oxford. Lots of nice complexity estimates for different emulation detail levels. Seems like 2020 will be an interesting year.

H/T: Next Big Future

2008 Singularity Summit

The 2008 Singularity Summit is coming up on Saturday. I’ve been helping out on the web site and payment processing. Should be interesting.

Reputation Economies symposium

O’Reilly Radar has a blog post about the Yale symposium on reputation economies in cyberspace.

OpenSocial insecurity – no user to app authentication

I was pretty excited to hear about Google trying to set a standard for social network applications. I wasn’t so happy to notice a serious omission in the way security is handled.

Executive Summary: no user authentication! Any user can forge anybody else’s identity when interacting with any OpenSocial application. As it currently stands, it is not possible to write secure social applications on the platform.

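To make the issue concrete, here is a minimal, hypothetical sketch (not actual OpenSocial code; the endpoint and parameter names are invented): an application backend that trusts a viewer ID supplied by the client. Because the container does not sign the request or otherwise vouch for the user, any client can claim to be any user.

```python
# Hypothetical application backend. Nothing below is OpenSocial API code; it
# only illustrates the missing user-to-app authentication described above.

def handle_request(params: dict) -> str:
    # The app has no way to verify this value: the container never signed it.
    viewer_id = params.get("viewer_id")
    return f"Acting on behalf of {viewer_id}"

# A well-behaved client identifies itself honestly:
print(handle_request({"viewer_id": "alice"}))  # Acting on behalf of alice

# An attacker simply sends someone else's id:
print(handle_request({"viewer_id": "bob"}))    # Acting on behalf of bob
```

Fixing this would require the container to sign requests, or hand the application a verifiable token tied to the viewer's session, so the application can check who is actually interacting with it.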

OpenSocial vs. Facebook API – an analysis

Executive Summary

  • OpenSocial applications will have diverging look-and-feel, from each other and from the containers. This is because the containers do not provide common elements to blend the application into the container.
  • OpenSocial applications may not be vertically resizable, since they will exist in an iframe. However, Google has an API for resizing that some or all of the networks may implement.
  • Facebook has additional API functionality that is not present in OpenSocial.
  • The Facebook API is server-oriented, whereas the OpenSocial/Google Gadgets API is oriented toward client-side JavaScript.


Making choices

Brian Wang posts a very cogent article about global forces that delay progress and cause global harm. These include the use of coal instead of cleaner energy (including nuclear), which causes hundreds of thousands of deaths per year, as well as corruption and violence that prevent the distribution of food and medicine and hold back economic growth.

I find it amazing that the human race spends on the order of a million dollars per year on molecular manufacturing when the potential impacts are measured in trillions. Set against a global economic output of tens of trillions of dollars per year, that is less than one ten-millionth of our collective effort devoted to this technology. The shortfall is mostly due to political issues, such as a lack of interdisciplinary thinking.

VCs at the Singularity Summit

Steve Jurvetson and Peter Thiel gave presentations today. Both offered very interesting perspectives.

Thiel theorized that the increase in frequency and magnitude of boom and bust cycles is a prelude to the Singularity, which would be a sustained boom driven by radical increases in productivity. Something to watch.

Jurvetson's main thesis was that AI created by evolutionary algorithms would have a strong competitive advantage over attempts to use traditional design. Some of the advantages include lack of brittleness and speed of implementation. Once you go down the evolution path, you diverge from the design path, because reverse engineering an evolved system is either very difficult or impossible. One idea I had while listening to this is that you could evolve subsystems with well-defined I/O and connect the subsystems into a designed overall architecture.
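As a toy illustration of that last idea (my own sketch, nothing from the talk): evolve a small component against a fixed input/output specification, then plug the evolved component into a hand-designed pipeline that only depends on its I/O contract.

```python
import random

# Toy sketch: evolve a component against a fixed I/O spec, then wire it into a
# hand-designed pipeline. The spec, genome, and pipeline are all invented here.

SPEC = [(x, 2 * x + 1) for x in range(-5, 6)]  # desired input -> output pairs

def error(genome):
    a, b = genome
    return sum((a * x + b - y) ** 2 for x, y in SPEC)

def evolve(pop_size=50, generations=200):
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=error)
        survivors = pop[: pop_size // 4]
        # Refill the population with mutated copies of the survivors.
        pop = survivors + [
            (a + random.gauss(0, 0.1), b + random.gauss(0, 0.1))
            for a, b in random.choices(survivors, k=pop_size - len(survivors))
        ]
    return min(pop, key=error)

a, b = evolve()

def subsystem(x):   # evolved component, used only through its I/O contract
    return a * x + b

def pipeline(x):    # designed stage wrapping the evolved stage
    return max(0, subsystem(x))

print(pipeline(3))  # should be close to 7
```

The evolved part stays a black box, which is exactly why reverse engineering it is hard; the designed architecture only cares that it meets the specification.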

Presentation by Artificial Development

Artificial Development is giving a presentation at the Singularity Summit about their CCortex and CorticalDB products. Seems like a full-featured product suite for creating biologically plausible neural networks/brain maps (CorticalDB) and simulating the resulting network (CCortex). They claim high-fidelity biological simulation, 8-bit action potentials, etc.

The claim of up to 100 million neurons seems pretty aggressive. Not sure why they would be so far ahead of the IBM effort while using x86 CPUs, even if they have thousands of them.
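A rough plausibility check (my arithmetic, using the 6,300-synapses-per-neuron figure from the IBM mouse-scale run discussed further down this page and an assumed ~16 bytes of state per synapse):

```python
# Back-of-the-envelope memory estimate for a 100-million-neuron simulation.
# Synapses/neuron is borrowed from the IBM mouse-scale run; bytes/synapse is my guess.
neurons = 100e6
synapses_per_neuron = 6_300
bytes_per_synapse = 16

total_tb = neurons * synapses_per_neuron * bytes_per_synapse / 1e12
print(f"~{total_tb:.0f} TB of synapse state")  # ~10 TB
```

That is roughly ten times the 1 TB the IBM run had to work with, which is part of why the claim looks aggressive.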

CCortex will be open-source.

They claim multiple levels of simulation: (1) detailed equations, (2) “estimation of action potentials”, and (3) a proprietary method.

At the Singularity Summit

The most memorable morning session at the Singularity Summit 2007 was the inimitable Eliezer Yudkowsky’s.

He talked about three different schools of thought about the Singularity:

  • Vingean – where prediction becomes impossible
  • Accelerationist – exponential technological change
  • Greater-than-Human Intelligence – in a positive feedback loop

His thesis was that the three schools reinforce but also contradict each other.

Another good point Eliezer makes is that advances in scientific knowledge and algorithms reduce the threshold for the Singularity.

Many-worlds Immortality and the Simulation Argument

An alternative to the simulation argument:

Nick Bostrom’s Simulation Argument argues that at least one of the following must be true:

  • the human species is very likely to go extinct before reaching a “posthuman” stage
  • any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history
  • or we are almost certainly living in a computer simulation

However, I see other possibilities. Assumptions:

  • The strong many-worlds theory is correct (i.e. all consistent mathematical systems exist as universes, a.k.a. “everything exists”)
  • The many-worlds immortality theory is correct (i.e. for every conscious state there is at least one smooth continuation of that state in the many-worlds)

Given these assumptions, it doesn’t matter if we are in a simulation because our conscious state exists in many simulations and many non-simulated worlds that look identical to us (but are different in imperceptible ways). Even if all the simulations stopped, there would still be a continuation of our conscious state in a non-simulated world consistent with our observations to date.

Further, it seems that there are more non-simulated worlds than simulated worlds. This is because there are many ways a mathematical model can exist such that it cannot be formulated in a finite way, and is therefore not simulatable by an intelligent entity. It might even be that simulatable worlds are of measure zero in the many-worlds.

Further out ideas:

A fascinating related idea is the Egan Jump, as described in the book Permutation City. The idea is to jump to another world in the many-worlds by simulating the genesis of a new universe. In this universe you code yourself into the initial conditions, and design the rules so that you end up as an upload in the substrate of the new universe. Because that universe will continue as its own mathematical model, your conscious state will continue in that universe, branching off from your original self.

Yet another, more distantly related idea is that the peculiarities of our universe (quantum physics, large amounts of empty space) are in a sense an error-correcting mechanism. Because any perturbation of a world is also a world, the result is quite chaotic and inhospitable to meaningful life. The structure we see around us, with its large aggregates, “averages out” the chaos. This leads to the stable environment required for conscious observers to arise.

Amazon tries to patent S3

Slashdot reports that Amazon is trying to patent S3. This means that I will refrain from using it for any project, and stop using it for my existing business, FillZ.

An insufficient present

I strongly identify with this concept.

Mouse brain simulated at 1/10 of real-time

Update: the BlueGene/L instance used here is only 1/32 of the size of the one deployed at LLNL, so we are still within the high bound after all. On the other hand, it remains to be seen how accurate the model is compared to a functional neuron.


Dharmendra S Modha posts an article about a recent result presented at CoSyNe 2007.

We deployed the simulator on a 4096-processor BlueGene/L supercomputer with 256 MB per CPU. We were able to represent 8,000,000 neurons (80% excitatory) and 6,300 synapses per neuron in the 1 TB main memory of the system. Using a synthetic pattern of neuronal interconnections, at a 1 ms resolution and an average firing rate of 1 Hz, we were able to run 1s of model time in 10s of real time!

This is excellent news, since it will now be possible to figure out what biological modeling aspects are important to functionality.

Since the human brain has 100 billion neurons, this represents 1/10,000 of a human brain. The computer was a $100 million BlueGene/L. So an improvement of 10,000,000 is required in order to model a human brain for $1M in real time.
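The factor of 10,000,000 is just the product of the three gaps, using the rounded figures above:

```python
# Required improvement = product of the three gaps between the mouse-scale run
# and a real-time, human-scale simulation on a $1M machine.
scale = 10_000  # ~1/10,000 of a human brain was simulated
speed = 10      # 1 s of model time took 10 s of real time
cost = 100      # $100M machine vs. a $1M budget

print(f"{scale * speed * cost:,}")  # 10,000,000
```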

However, the BlueGene/L is two years old, and it is about 20 times less efficient than commodity hardware (based on a quoted 360 teraflops). So the real improvement required is only around 100,000.

Based on this data, the human brain requires 10 Exa CPS, one order of magnitude above the high estimate used in my calculator. Human equivalent for $1M would be available around the year 2023.

Hardware specifically suited to this application may bring the requirement back down to 1 Exa CPS and pull the date back to 2020.

Various Autism pointers

– Discover has an article about environmental stress (diet, toxins), inflammation and Autism.

– A University of Nottingham study described on the Biosingularity blog shows that autistic children are actually very good at inferring mental states by looking at eyes. This study uses moving images and overturns previous studies where static images were used. I personally find expressions quite readable, yet I used to block them out because the associated emotional charge is anxiety provoking.

– IEET links to an article by Michael L. Ganz that attempts to quantify the cost of autism to society. However, this makes me wonder if the cost is overwhelmingly offset by the contribution of people with Asperger’s syndrome to science and technology. Asperger’s is thought to be a form of high-functioning autism.

Compute power estimate for future (and past) years

Since I end up working these figures out a few times a year, I went ahead and created a little calculator.

It has two year fields and calculates the compute power available in each year, along with the ratio between them. It also compares these to the compute power required to compete with (or simulate) a human brain.
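For reference, here is a minimal sketch of the kind of calculation involved; the doubling period, base-year compute, and brain estimate are my assumptions, not necessarily the figures the calculator actually uses.

```python
# Minimal sketch of a compute-power calculator. All constants are assumptions
# for illustration; the real calculator may use different figures.
DOUBLING_YEARS = 1.5              # assumed price-performance doubling period
BASE_YEAR, BASE_CPS = 2007, 1e13  # assumed compute per $1M in the base year (CPS)
BRAIN_CPS = 1e16                  # assumed human-brain requirement (estimates vary widely)

def cps_per_million_dollars(year: float) -> float:
    """Estimated CPS available per $1M in a given year."""
    return BASE_CPS * 2 ** ((year - BASE_YEAR) / DOUBLING_YEARS)

year_a, year_b = 2007, 2020
a, b = cps_per_million_dollars(year_a), cps_per_million_dollars(year_b)
print(f"{year_b} vs {year_a}: {b / a:.0f}x more compute per dollar")
print(f"fraction of a human brain in {year_b}: {b / BRAIN_CPS:.2f}")
```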

Nano and lightyears in context

Check out this interactive “powers-of-ten” flash presentation from Nikon. Good for some perspective…

Make sure your browser is full screen or you may miss the controls at the bottom.

Hat tip to Nanodot.

Random collection of reputation system articles

Here is a collection of links to articles about reputation systems, in no particular order:

– Random Reputation Ramblings (and quite a few other reputation posts on the same blog)

– Reputation systems and vendor relationship management (VRM – the opposite of CRM)

– Group reputation and its application to loans

– Jim Downing mentions a study on feedback gaming on eBay on the Smart Mobs blog

– On gaming reputation systems in Privacy Digest

Advances in top-down nanotech

Brian Wang has a good review article about recent advances in top-down nanotech and some projections. Maybe top-down will meet bottom-up in 10 years?

Stanford Delta Scan and Technologies for Cooperation

Stanford’s Delta Scan makes predictions similar to my take on Web 3.0, including:

  • Trust over Social networks / Social accounting methods (i.e. reputation systems)
  • the rise of computing grids
  • Social mobile computing
  • Knowledge collectives
  • Mesh networks

Giving rise to:

  • Adhocracies
  • Faster innovation
  • Faster/better decision making
  • Increase in effectiveness of online economies

Crowdhacking

Wired’s latest issue has an article about the emerging arms race between rating systems (such as those of eBay, Amazon, and Digg) and crowdhackers. Crowdhackers take advantage of the naive methods used to aggregate ratings in existing systems.

A crowdhacking-proof system will have to use a transitive trust model rather than the naive averaging that existing systems use.
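A rough sketch of what that could look like (my own toy example, not from the article): propagate trust outward from the viewer through a who-trusts-whom graph, then weight each rating by the rater's propagated trust. Sybil accounts that nobody trustworthy vouches for then contribute almost nothing, whereas a naive average treats them like everyone else.

```python
# Toy comparison of naive averaging vs. trust-weighted (transitive) aggregation.
# Trust propagates from a viewer through who-trusts-whom edges with decay, so
# ratings from accounts the viewer has no path to carry little or no weight.

TRUST = {                      # who trusts whom, with edge weights in [0, 1]
    "viewer": {"alice": 0.9, "bob": 0.8},
    "alice": {"carol": 0.7},
}
RATINGS = {"alice": 5, "bob": 4, "carol": 5, "sybil1": 1, "sybil2": 1, "sybil3": 1}

def propagate_trust(source: str, decay: float = 0.5, depth: int = 3) -> dict:
    """Breadth-first trust propagation with per-hop decay."""
    trust = {source: 1.0}
    frontier = [source]
    for _ in range(depth):
        nxt = []
        for node in frontier:
            for neighbor, weight in TRUST.get(node, {}).items():
                score = trust[node] * weight * decay
                if score > trust.get(neighbor, 0.0):
                    trust[neighbor] = score
                    nxt.append(neighbor)
        frontier = nxt
    return trust

naive = sum(RATINGS.values()) / len(RATINGS)
trust = propagate_trust("viewer")
weighted = sum(trust.get(r, 0) * v for r, v in RATINGS.items())
total_weight = sum(trust.get(r, 0) for r in RATINGS) or 1
print(f"naive average: {naive:.2f}")                     # dragged down by the sybils
print(f"trust-weighted: {weighted / total_weight:.2f}")  # close to what trusted raters say
```

Real systems would need to handle cycles, negative feedback, and scale, but the basic shift is the same: ratings only count to the extent that trust reaches the rater.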