Over the past few years I’ve been thinking about whole brain emulation (WBE) and the required computational resources. My conclusion is that the required technology level will be reached in the 2025 – 2030 time frame.
Although most estimates focus on calculations per second, the relevant parameters are:
Missed Eliezer Yudkowsky's talk: Simplified Humanism and Positive Futurism.
9:40 – Ramez Naam: The Digital Biome
Plenty of carrying capacity for the biome – 30-300 billion people with advanced biotech. We are using only 1/1000 of the incident energy from the sun, so with advanced tech there's no reason to crash for lack of resources. Population is predicted to level off at 10 billion.
Good points, but the Singularity is likely to happen on a shorter time scale.
9:30 – Missed Michael Vassar‘s talk.
9:50 – Gregory Stock is talking. He is skeptical about progress in the bio realm. He says that the FDA is a damper on progress, but he also says that there are difficult problems. He brings up Alzheimer’s as an example. I think he is underestimating the power of info tech to change the way we do bio-science. Having read/write access to DNA, plus “in-silico” simulations will change the game.
Now he is talking about silicon, saying that the complexity of computers rivals that of life. And now he is talking about the rapid exponential progress in DNA technology. As far as I understand, he is worried that we will create new life forms that will supersede humans. He says that human evolution is "not exponential"; I think he means that it's a very slow exponential compared to tech.
Martin Rees talks about the future of the cosmos and our responsibility to prevent existential risks at a Long Now Foundation seminar.
Nice to see H+ memes coming from the president of the Royal Society.
H/T Tom McCabe @ Kurzweil AI
Raghavendra Singh and Dharmendra S. Modha published a paper in PNAS detailing 383 regions of the macaque brain and 6,602 long-distance connections between them.
Tihamer Toth-Fejel writes about Productive Nanosystems and the 2009 Financial Meltdown.
The collisions between unstoppable juggernauts and immovable obstacles are always fascinating—we just cannot tear our eyes away from the immense conflict, especially if we have a glimmer of the immense consequences it will have for us. So it will be when Productive Nanosystems emerge from the global financial meltdown. To predict what will happen in the next decade or so, we must understand the essential nature of wealth, and we must understand the capabilities of productive nanosystems.
Henry Markram calls IBM's cat-scale brain simulation a hoax. Markram claims that the simulation doesn't have the 10,000+ differential equations needed to simulate the synapses with fidelity. This argument is a version of the naturalistic fallacy: the assumption that if Nature requires X to achieve a result, we must also perform X when replicating that result.
It is useful to think about the simulation’s level of detail (LOD) in terms of certain thresholds, from most detailed to least detailed:
The computation power required follows LODn > LODf > LODe, with likely order-of-magnitude differences between the levels.
Consider a non-biological example: a digital computer. What does it take to simulate it? Obviously, simulating the logic function is enough. Markram's line of reasoning would seem to argue that we have to simulate the voltage gradients and charge movements in each transistor!
In the transistor case, LODn would involve simulating each transistor’s logic function. LODf would involve an instruction set simulation (e.g. the QEMU emulator). LODe would involve using the most convenient instruction set (e.g. x86) and recompiling any software. Clearly, an LODe simulation is several orders of magnitude more efficient.
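The gap between these levels can be made concrete with a toy sketch (everything below is an illustration I made up, not anyone's actual benchmark): an 8-bit adder simulated gate by gate from NAND primitives, versus the same logical result computed with the host machine's native addition, LODe-style.

```python
# Toy illustration of simulation levels of detail (hypothetical example).
# Gate-level: simulate an 8-bit adder from NAND primitives (72 gate
# evaluations per add). Native-level: one host machine instruction.

def nand(a, b):
    return 1 - (a & b)

def full_adder(a, b, cin):
    """One-bit full adder built entirely from NAND gates (9 gates)."""
    t1 = nand(a, b)
    t2 = nand(a, t1)
    t3 = nand(b, t1)
    s1 = nand(t2, t3)      # s1 = a XOR b
    t4 = nand(s1, cin)
    t5 = nand(s1, t4)
    t6 = nand(cin, t4)
    s = nand(t5, t6)       # sum bit = s1 XOR cin
    cout = nand(t1, t4)    # carry out = (a AND b) OR (s1 AND cin)
    return s, cout

def add8_gate_level(x, y):
    """8-bit ripple-carry add: 8 full adders, 72 NAND evaluations."""
    result, carry = 0, 0
    for i in range(8):
        s, carry = full_adder((x >> i) & 1, (y >> i) & 1, carry)
        result |= s << i
    return result

def add8_native(x, y):
    """LODe-style: identical logical behavior, one native operation."""
    return (x + y) & 0xFF

# Both levels agree on the logic function; only the cost differs.
assert all(add8_gate_level(x, y) == add8_native(x, y)
           for x in (0, 7, 200, 255) for y in (0, 1, 99, 255))
```

The two functions are behaviorally indistinguishable from the outside, which is the point: nothing about the result requires descending to the more detailed level.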
Markram fails to convince me that his preferred level of simulation is required for LODn, never mind the other levels.
One way to find out what each level requires is to actually run simulations and compare them with physical neural matter. The Blue Brain project aims to do that, although the results are not conclusive yet. It would be good if more research were directed at comparing their simulation to a biological brain; this would make the project more grounded.
The Register tells us that Amazon will auction their excess capacity. We’re a couple of steps away from computation becoming a liquid commodity. The next step is for a couple of additional providers to arise (Google?). The step after that is for the APIs to be brought in sync by the providers or by a third party intermediary.
[ I started this article two years ago – I'll post it now even though I feel it is incomplete. ]
I’d like to propose a somewhat simplified approach to eliminate aging, given early-stage molecular robots and following the SENS approach to aging.
Results in this approach depend on balancing the creation of new cells with the destruction of old ones, exponentially reducing the number of aging-related defects in the body over time.
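A minimal sketch of the arithmetic (the replacement rate and cycle count are assumptions for illustration, not figures from the proposal): if each maintenance cycle replaces a fixed fraction of existing cells with defect-free new ones, the fraction of defect-carrying cells decays geometrically.

```python
# Minimal sketch with assumed numbers: each maintenance cycle destroys a
# random fraction r of existing cells and replaces them with defect-free
# new cells, so the defect-carrying fraction decays geometrically.

def defect_fraction(initial, replacement_rate, cycles):
    """Fraction of cells still carrying age-related defects after n cycles."""
    return initial * (1 - replacement_rate) ** cycles

# Example: start with every cell defective, replace 10% of cells per cycle.
f = defect_fraction(1.0, 0.10, 44)
assert f < 0.01  # under 1% of cells remain defective after 44 cycles
```

The point of the exponential form is that even a modest per-cycle turnover rate drives defects arbitrarily low, provided creation and destruction stay balanced.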
Chris Phoenix is writing an interesting series of articles over at CRN about the dynamics of the development of molecular engineering.
His thesis, as far as I understand it after the first three articles, is that we're likely to see a fast takeoff, because it's easy to achieve excellent results once you've achieved good-enough ones.
One example is error rate: going from an adequate error rate to a superlative one just requires additional purification / error-correction steps. Reaction rates, which depend exponentially on positional accuracy and stiffness, may also improve very quickly once a workable solution is found.
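The purification arithmetic can be made concrete with assumed numbers (the rates below are illustrative, not from Phoenix's articles): if each pass removes a fixed fraction of defective product, passes compound multiplicatively.

```python
# Worked example with assumed numbers: one purification / error-correction
# pass removes 99% of defective product; repeated passes compound.

def error_after_passes(initial_error, removal_efficiency, passes):
    """Residual defect fraction after repeated purification passes."""
    return initial_error * (1 - removal_efficiency) ** passes

# An "adequate" 1% defect rate becomes superlative in just a few passes:
e = error_after_passes(0.01, 0.99, 3)
assert abs(e - 1e-8) < 1e-10  # roughly one defect in a hundred million
```

This is why "good enough" may jump quickly to "excellent": each added step multiplies quality rather than adding to it.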
I have previously mentioned my estimator for when human scale computation power will be available. I have since realized that the bottleneck might be memory rather than computation. I’ve created a similar estimator for memory.
Although we may achieve human level compute power in 2014, it looks like memory capacity will lag by another 6 years, assuming low estimates. With high estimates, compute power is available in 2020 and memory capacity will lag by 5 years after that, to 2025.
However, if Flash memory or a similar technology will do the trick, a factor-of-4 cost reduction would advance the timeline by about 4 years.
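The kind of extrapolation behind these estimates can be sketched as follows (all parameters here are illustrative assumptions, not the estimator's actual inputs): given a capacity shortfall and a doubling time for capacity per dollar, the wait is just the number of doublings needed times the doubling time.

```python
# Sketch of a capacity-timeline extrapolation (assumed parameters only).
import math

def years_until_affordable(required_units, units_per_dollar_now,
                           budget_dollars, doubling_time_years):
    """Years until the budget buys the required capacity, assuming
    capacity per dollar doubles every doubling_time_years."""
    shortfall = required_units / (units_per_dollar_now * budget_dollars)
    if shortfall <= 1:
        return 0.0
    return math.log2(shortfall) * doubling_time_years

# E.g. a 16x shortfall with a 2-year doubling time means an 8-year wait;
# a 4x cost reduction (two doublings) would advance that by 4 years.
assert years_until_affordable(16, 1, 1, 2.0) == 8.0
```

This also shows why the factor-of-4 cost reduction maps to about 4 years: with a roughly 2-year doubling time, a 4x improvement is two doublings.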
Brian Wang writes about technologies to watch for 2009, including Memristors, high speed networking, optical computing and quantum computing.
Apparently, John Doerr yesterday recommended Bill Joy to Barack Obama as US CTO.
Misguided relinquishment anyone?
Excerpt from Joy’s article “Why the future doesn’t need us”:
These possibilities are all thus either undesirable or unachievable or both. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.
This could be pretty bad.
Update: Slashdot reports that Bill Joy is not in the running anymore. Haven’t looked at the two that are, yet.
The 2008 Singularity Summit is coming up on Saturday. I’ve been helping out on the web site and payment processing. Should be interesting.
Brian Wang posts a very cogent article about global forces that delay progress and cause widespread harm. These include the use of coal instead of cleaner energy (including nuclear), causing hundreds of thousands of deaths per year, as well as corruption and violence that prevent the distribution of food and medicine and block economic growth.
I find it amazing that the human race spends on the order of a million dollars per year on molecular manufacturing when the potential impacts are measured in trillions. In other words, the race is spending less than one ten-millionth of its effort on this technology. The shortfall is mostly due to political and institutional issues, such as a lack of interdisciplinary thinking.
Steve Jurvetson and Peter Thiel gave presentations today. I found both to be very interesting perspectives.
Thiel theorized that the increasing frequency and magnitude of boom-and-bust cycles is a prelude to the Singularity, which would be a sustained boom driven by radical increases in productivity. Something to watch.
Jurvetson's main thesis was that AI created by evolutionary algorithms would have a strong competitive advantage over attempts to use traditional design. Some of the advantages include lack of brittleness and speed of implementation. Once you go down the evolution path, you diverge from the design path, because reverse engineering an evolved system is either very difficult or impossible. One idea I had while listening: you can evolve subsystems with well-defined I/O and connect the subsystems into a designed overall architecture.
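That last idea can be sketched with a toy evolutionary algorithm (everything below is a hypothetical illustration: the fitness function, target, and parameters are made up). A subsystem is evolved behind a fixed, well-defined interface, then plugged into a hand-designed pipeline that only ever touches the declared interface.

```python
# Toy sketch: evolve a subsystem behind a fixed I/O contract, then compose
# the evolved subsystem into a designed overall architecture. All numbers
# and functions here are hypothetical illustrations.
import random

random.seed(0)

TARGET = [1.0, -2.0, 0.5]  # hidden coefficients the subsystem must match

def subsystem(weights, x):
    """Fixed I/O contract: takes a float, returns a float."""
    a, b, c = weights
    return a * x * x + b * x + c

def fitness(weights):
    """Lower is better: squared error against the target behavior."""
    xs = [-2, -1, 0, 1, 2]
    return sum((subsystem(weights, x) - subsystem(TARGET, x)) ** 2 for x in xs)

def evolve(generations=300, pop_size=40, sigma=0.3, decay=0.99):
    """Elitist evolutionary search with a slowly shrinking mutation step."""
    pop = [[random.uniform(-3, 3) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 4]              # truncation selection
        pop = parents + [
            [w + random.gauss(0, sigma) for w in random.choice(parents)]
            for _ in range(pop_size - len(parents))  # mutated offspring
        ]
        sigma *= decay
    return min(pop, key=fitness)

# Designed architecture: a plain pipeline that calls the evolved subsystem
# only through its declared interface, never through its internals.
best = evolve()
pipeline = lambda x: subsystem(best, x) + 10.0       # designed wrapper
assert fitness(best) < 0.5
```

The design never inspects the evolved weights, which is the proposed division of labor: evolution fills in the black box, design dictates how the boxes connect.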