Entries Tagged as 'Future'

Max More is now CEO of Alcor

It’s interesting to see one of the major characters in the Extropian movement (precursor to the H+ movement) become the CEO of Alcor.

Brain Emulation by 2030

Over the past few years I’ve been thinking about whole brain emulation (WBE) and the required computational resources.  My conclusion is that the required technology level will be reached in the 2025 – 2030 time frame.

Although most estimates focus on calculations per second, the relevant parameters are the following (a rough timeline sketch follows the list):

  • Calculations per second
  • Memory size
  • Memory bandwidth per node
  • Inter-node communication bandwidth
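As a rough illustration of how these parameters combine into a timeline, here is a minimal back-of-envelope sketch in Python. Every number in it (the required values, the 2010 baseline, and the doubling times) is a placeholder assumption for illustration, not the data behind my estimate.

```python
import math

# Placeholder requirements for whole brain emulation -- purely illustrative;
# the real values depend on the level of simulation detail.
required = {
    "calc_per_s": 1e18,      # calculations per second
    "memory_bytes": 1e16,    # total memory size
    "mem_bw_bytes_s": 1e15,  # memory bandwidth per node (aggregate)
    "net_bw_bytes_s": 1e14,  # inter-node communication bandwidth
}

# What a fixed budget buys around 2010 (also illustrative placeholders).
available_2010 = {
    "calc_per_s": 1e15,
    "memory_bytes": 1e13,
    "mem_bw_bytes_s": 1e13,
    "net_bw_bytes_s": 1e12,
}

# Assumed doubling time, in years, for each parameter at constant cost.
doubling_years = {
    "calc_per_s": 1.5,
    "memory_bytes": 2.0,
    "mem_bw_bytes_s": 2.5,
    "net_bw_bytes_s": 2.0,
}

for param, need in required.items():
    doublings = math.log2(need / available_2010[param])
    year = 2010 + doublings * doubling_years[param]
    print(f"{param}: {doublings:.1f} doublings needed -> around {year:.0f}")
```

The arrival date is set by whichever parameter crosses its threshold last, which is why focusing on calculations per second alone can be misleading.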

[Read more →]

Singularity Summit 2010 – live blogging – day 2

Missed Eliezer Yudkowsky: Simplified Humanism and Positive Futurism.

9:40 – Ramez Naam: The Digital Biome

Plenty of carrying capacity for the biome – 30-300 billion people with advanced biotech.  We are using only 1/1000 of the incident energy from the sun.  There’s no reason to crash due to lack of resources with advanced tech.  Population is predicted to level off at 10 billion.

Good points, but the Singularity is likely to happen on a shorter time scale.

[Read more →]

Singularity Summit 2010 – live blogging – day 1

9:30 – Missed Michael Vassar's talk.

9:50 – Gregory Stock is talking.  He is skeptical about progress in the bio realm.  He says that the FDA is a damper on progress, but he also says that there are difficult problems.  He brings up Alzheimer’s as an example.  I think he is underestimating the power of info tech to change the way we do bio-science.  Having read/write access to DNA, plus “in-silico” simulations will change the game.

Now he is talking about Silicon and saying that the complexity of computers rivals that of life.  And now he is talking about the rapid exponential progress in DNA technology.  As far as I understand, he is worried that we will create new life forms that will supersede humans.  He is saying that human evolution is “not exponential”. I think he means that it’s a very slow exponential compared to tech.

[Read more →]

Sir Martin Rees on the Future of the Cosmos

In a Long Now Foundation seminar, Martin talks about the future of the cosmos and our responsibility to prevent existential risks.

Nice to see H+ memes coming from the president of the Royal Society.

H/T Tom McCabe @ Kurzweil AI

Long-Distance Wiring Diagram of the Monkey Brain

Raghavendra Singh and Dharmendra S Modha published a paper in PNAS detailing 383 brain regions and 6,602 connections between them.

On Nanotech and Economics

Tihamer Toth-Fejel writes about Productive Nanosystems and the 2009 Financial Meltdown.

The collisions between unstoppable juggernauts and immovable obstacles are always fascinating—we just cannot tear our eyes away from the immense conflict, especially if we have a glimmer of the immense consequences it will have for us. So it will be when Productive Nanosystems emerge from the global financial meltdown. To predict what will happen in the next decade or so, we must understand the essential nature of wealth, and we must understand the capabilities of productive nanosystems.

IBM’s Blue Brain and Simulated Level of Detail

Henry Markram calls IBM's cat-scale brain simulation a hoax. Markram claims that the simulation doesn't have the 10,000+ differential equations needed to simulate the synapses with fidelity.  This argument is a version of the naturalistic fallacy: if Nature requires X to achieve a result, we will have to perform X when replicating the effect.

It is useful to think about the simulation’s level of detail (LOD) in terms of certain thresholds, from most detailed to least detailed:

  • Noise level: at this level (call it LODn) the lack of precision is on the order of the noise present in a biological brain.  It is not possible to distinguish the functioning of such a simulated brain from the functioning of a biological brain.
  • Functional level: at this level (call it LODf) the lack of precision is greater, but the result is functionally similar.  There may be some behavioral changes, but the overall capabilities (e.g. “intelligence”) are similar.
  • Equivalence level: at this level (call it LODe) the precision is even lower, but this is compensated for with tweaks to the simulated physiology.  The result is equivalent in capabilities, although some characteristics may be very different.  For example, the retina could be replaced with a non-biological equivalent.

The computational power required is LODn > LODf > LODe, with likely order-of-magnitude differences between the levels.

Consider a non-biological example, a digital computer: what does it take to simulate it?  It is obviously enough to simulate the logic function.  Markram's line of reasoning would seem to argue that we have to simulate the voltage gradients and charge movements in each transistor!

In the computer case, LODn would involve simulating the logic function of each transistor or gate.  LODf would involve an instruction set simulation (e.g. the QEMU emulator).  LODe would involve using the most convenient instruction set (e.g. x86) and recompiling any software.  Clearly, an LODe simulation is several orders of magnitude more efficient.
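To make the analogy concrete, here is a toy sketch of the same 8-bit addition performed at three levels of detail: gate-level logic standing in for LODn, a tiny instruction interpreter for LODf, and native execution for LODe. This is my own illustration, not anything from the Blue Brain debate, and the operation counts are only a crude cost proxy.

```python
# Three levels of detail for the same computation: adding two 8-bit numbers.

def gate_level_add(a, b, bits=8):
    """LODn analogue: simulate every logic gate of a ripple-carry adder."""
    ops = 0
    carry, result = 0, 0
    for i in range(bits):
        x = (a >> i) & 1
        y = (b >> i) & 1
        s = x ^ y ^ carry                    # sum bit
        carry = (x & y) | (carry & (x ^ y))  # carry-out bit
        result |= s << i
        ops += 6                             # gates evaluated for this bit
    return result & 0xFF, ops

def instruction_level_add(a, b):
    """LODf analogue: interpret an 'ADD' instruction in a toy ISA."""
    program = [("ADD", a, b)]
    for opcode, x, y in program:
        if opcode == "ADD":
            return (x + y) & 0xFF, 1         # one interpreted instruction
    raise ValueError("unknown opcode")

def native_add(a, b):
    """LODe analogue: recompile to the host's most convenient operation."""
    return (a + b) & 0xFF, 1                 # one native operation

for fn in (gate_level_add, instruction_level_add, native_add):
    value, ops = fn(200, 77)
    print(f"{fn.__name__}: result={value}, primitive ops={ops}")
```

Even in this toy case the gate-level path performs dozens of primitive operations per addition while the native path performs one; at the scale of a real processor the gap is many orders of magnitude.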

Markram fails to convince me that his preferred level of simulation is required even for LODn, never mind the other levels.

One way to find out what each level requires is to actually run simulations and compare them to physical neural matter.  The Blue Brain project aims to do that, although the results are not conclusive yet.  It would be good if more research were directed at comparing their simulation to a biological brain; this would make the project more grounded.

The computation market becomes more liquid

The Register tells us that Amazon will auction their excess capacity.  We're a couple of steps away from computation becoming a liquid commodity.  The next step is for a couple of additional providers to arise (Google?).  The step after that is for the APIs to be brought in sync, either by the providers or by a third-party intermediary.
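If a third-party intermediary ever appears, its core could be as simple as a provider-neutral interface plus a price-aware scheduler. The sketch below is purely hypothetical: the class names and methods are inventions for illustration, not any real provider's API.

```python
from abc import ABC, abstractmethod
import random

class ComputeProvider(ABC):
    """Hypothetical provider-neutral interface (not a real API)."""

    @abstractmethod
    def spot_price(self) -> float:
        """Current price per instance-hour for spare capacity."""

    @abstractmethod
    def run(self, job: str) -> str:
        """Run a job and return a handle."""

class FakeSpotAuction(ComputeProvider):
    def spot_price(self) -> float:
        return 0.03 + random.uniform(-0.01, 0.01)  # stand-in for an auction price
    def run(self, job: str) -> str:
        return f"spot-auction:{job}"

class FakeOtherCloud(ComputeProvider):
    def spot_price(self) -> float:
        return 0.05 + random.uniform(-0.02, 0.02)
    def run(self, job: str) -> str:
        return f"other-cloud:{job}"

def cheapest_run(providers, job):
    """The 'third-party intermediary': route work to the cheapest provider."""
    best = min(providers, key=lambda p: p.spot_price())
    return best.run(job)

print(cheapest_run([FakeSpotAuction(), FakeOtherCloud()], "render-frame"))
```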

Cool advanced user-interface video

Very nice attention to detail on the user-interface widgets…


World Builder from Bruce Branit on Vimeo.

Simplified nanobot anti-aging solution

[ I started this article two years ago – I'm posting it now even though I feel it is incomplete. ]

I’d like to propose a somewhat simplified approach to eliminate aging, given early-stage molecular robots and following the SENS approach to aging.

This approach depends on creating new cells and destroying old cells in equal amounts, which exponentially reduces the amount of aging-related defects in the body over time.
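A minimal sketch of why equal creation and destruction gives an exponential decline; the 10% annual replacement rate below is an arbitrary placeholder.

```python
# If a fixed fraction r of the remaining old cells is replaced each year
# (one new cell created for every old cell destroyed), the fraction of
# original, defect-carrying cells decays as (1 - r)**t.

r = 0.10        # placeholder: 10% of remaining old cells replaced per year
old_fraction = 1.0
for year in range(1, 31):
    old_fraction *= (1 - r)
    if year % 5 == 0:
        print(f"year {year:2d}: {old_fraction:.1%} of original cells remain")
```

At that placeholder rate, only about 4% of the original, defect-carrying cells remain after 30 years.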

Aubrey de Grey proposes seven mechanisms of aging, which are believed to be comprehensive: [Read more →]

Chris Phoenix on Nanotech Fast Takeoff

Chris Phoenix is writing an interesting series of articles over at CRN about the dynamics of the development of molecular engineering.

His thesis, as far as I understand it after the first three articles, is that we're likely to see a fast takeoff, because it's easy to achieve excellent results once you've achieved good-enough ones.

One example is error rate: going from an adequate error rate to a superlative one just requires additional purification / error-correction steps. Another thing that may improve very quickly once a workable solution is found is reaction rate, which depends exponentially on positional accuracy and stiffness.
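A toy calculation shows how quickly error rates can fall with extra passes; both numbers below are placeholder assumptions.

```python
# If each purification / error-correction pass removes a fixed fraction of the
# remaining errors, the error rate falls geometrically with the number of passes.

initial_error_rate = 1e-2   # placeholder: 1% defective output
removal_per_pass = 0.99     # placeholder: each pass removes 99% of remaining errors

rate = initial_error_rate
for passes in range(1, 5):
    rate *= (1 - removal_per_pass)
    print(f"after {passes} pass(es): error rate ~ {rate:.0e}")
```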

Cool virus infection and assembly videos

Fun videos with accurate structures and mechanisms, although the motions are not realistic (Brownian motion is not goal-directed). The third video is the most involved.

H/T: Eric Drexler

Human Scale Memory Timeline Calculator

I have previously mentioned my estimator for when human scale computation power will be available. I have since realized that the bottleneck might be memory rather than computation. I’ve created a similar estimator for memory.

Although we may achieve human-level compute power in 2014, it looks like memory capacity will lag by another 6 years, assuming low estimates. With high estimates, compute power arrives in 2020 and memory capacity lags by another 5 years, to 2025.

However, if Flash memory or a similar technology can do the trick, a factor of 4 in cost reduction would advance the timeline by about 4 years.
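The arithmetic behind that figure is just the number of doublings times the doubling time; here is a quick check, assuming (as a placeholder) a roughly two-year cost-per-bit halving time for memory.

```python
import math

cost_reduction_factor = 4   # e.g. Flash instead of DRAM (assumed factor)
doubling_time_years = 2.0   # assumed cost-per-bit halving time for memory

years_saved = math.log2(cost_reduction_factor) * doubling_time_years
print(f"timeline advanced by about {years_saved:.0f} years")  # -> about 4 years
```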

Brian Wang’s things to watch for 2009

Brian Wang writes about technologies to watch for 2009, including Memristors, high speed networking, optical computing and quantum computing.

Bill Joy for CTO of the USA?

Apparently, John Doerr yesterday recommended Bill Joy to Barack Obama as CTO of the USA.

Misguided relinquishment anyone?

Excerpt from Joy’s article “Why the future doesn’t need us”:

These possibilities are all thus either undesirable or unachievable or both. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

This could be pretty bad.

H/T: slashdot

Update: Slashdot reports that Bill Joy is not in the running anymore. Haven't yet looked at the two who are.

Whole Brain Emulation Roadmap

A very detailed roadmap written by Anders Sandberg and Nick Bostrom and published by the Future of Humanity Institute / University of Oxford. Lots of nice complexity estimates for different emulation detail levels. Seems like 2020 will be an interesting year.

H/T: Next Big Future

2008 Singularity Summit

The 2008 Singularity Summit is coming up on Saturday. I’ve been helping out on the web site and payment processing. Should be interesting.

Making choices

Brian Wang posts a very cogent article about global forces that delay progress and cause global harm. This includes the use of coal instead of cleaner energy (including nuclear), causing hundreds of thousands of deaths per year. It also includes corruption and violence preventing the distribution of food and medicine and preventing economic growth.

I find it amazing that the human race spends on the order of a million dollars per year on molecular manufacturing when the potential impacts are measured in trillions. In other words, the race is spending less than one ten-millionth of its efforts on this technology. The shortfall is mostly due to political issues, such as a lack of interdisciplinary thinking.

VCs at the Singularity Summit

Steve Jurvetson and Peter Thiel gave presentations today. I found their perspectives very interesting.

Thiel theorized that the increase in frequency and magnitude of boom-and-bust cycles is a prelude to the Singularity, which would be a sustained boom driven by radical increases in productivity. Something to watch.

Jurvetson's main thesis was that AI created by evolutionary algorithms would have a strong competitive advantage over attempts to use traditional design. Some of the advantages include lack of brittleness and speed of implementation. Once you go down the evolution path, you diverge from the design path, because reverse engineering an evolved system is either very difficult or impossible. One idea I had while listening: you could evolve subsystems with well-defined I/O and connect the subsystems into a designed overall architecture.
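As a toy illustration of that idea, the sketch below evolves one subsystem behind a fixed input/output contract and then plugs it into a hand-designed pipeline. Everything in it (the target behavior, population size, mutation rate) is an arbitrary placeholder of my own.

```python
import random

# Fixed, well-defined I/O contract for the evolved subsystem: it maps a float
# input to a float output.  The target behavior is an arbitrary placeholder.
def target(x):
    return 3.0 * x + 1.0

SAMPLES = [i / 10 for i in range(-20, 21)]

def fitness(genome):
    a, b = genome
    return -sum((a * x + b - target(x)) ** 2 for x in SAMPLES)

def evolve(pop_size=50, generations=200, mutation=0.1):
    """Evolve coefficients (a, b) so that a*x + b matches the target behavior."""
    pop = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 5]          # keep the fittest 20%
        children = [
            (p[0] + random.gauss(0, mutation), p[1] + random.gauss(0, mutation))
            for p in random.choices(parents, k=pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

# Designed overall architecture: a conventional, hand-written pipeline that
# calls the evolved subsystem only through its agreed interface.
def designed_pipeline(x, subsystem):
    return subsystem(x) * 2.0                   # designed post-processing step

a, b = evolve()
print(f"evolved coefficients: a={a:.2f}, b={b:.2f}")
print("pipeline(1.0) =", designed_pipeline(1.0, lambda x: a * x + b))
```

The designed pipeline never looks inside the evolved coefficients; it relies only on the agreed interface, which is what makes the hybrid evolve-then-compose approach workable.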