Nano-Ethics and Spirituality

Dr. Donald Bruce writes an article titled Faster, Higher, Stronger in the first issue of Nano Now. In response, Chris Phoenix takes issue with what he sees as ethical fallacies in Dr. Bruce's argument.

I think one of the major deficiencies in Dr. Bruce's article is the false dichotomy between material improvement and spiritual improvement. Living longer and better, with more access to information and richer personal networks, allows one to devote more effort to spiritual growth and to be better supported on that path. For example, one may speculate that a novice yoga practitioner whose body has been suitably enhanced (e.g. through biofeedback or by being more limber) will be aided on their path.

Another fallacy is to criticize a technology for its initial cost and limited adoption. Computers were very expensive at first; had we declined to develop them because of that limited initial availability, we would never have reached today's much more inclusive situation.

One may also speculate that the discomfort some Christian believers experience with their bodies may indispose them to mind/body synergistic growth.  Buddhists, on the other hand, may be more amenable to technologically enhanced mind/body improvement.

Google Releases Paper on Disk Reliability

Google is a prime user of commodity grid supercomputing. As such, they are in a perfect position to release a paper on disk reliability, drawing on hundreds of thousands of data points. The results are very interesting, including SMART's failure to predict drive failures and the low correlation of failures with usage and temperature.

This will be important as grid supercomputing becomes the preferred way to manage compute resources.

Update: Slashdot points to another storage paper presented at FAST ’07 confirming some of the points in the Google paper, and invalidating the manufacturers’ MTTF estimates.
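
For a sense of scale, here is a rough back-of-the-envelope sketch (the 1,000,000-hour MTTF below is a typical datasheet value chosen for illustration, not a figure from either paper) of the annualized failure rate implied by a manufacturer's MTTF rating; field studies like these report replacement rates well above such implied figures.

```python
# Back-of-the-envelope: the annualized failure rate (AFR) implied by a rated
# MTTF, assuming a simple exponential failure model. The 1,000,000-hour MTTF
# is a typical datasheet value used purely for illustration.

HOURS_PER_YEAR = 24 * 365  # 8760

def implied_afr(mttf_hours: float) -> float:
    """Fraction of drives expected to fail per year, given the rated MTTF."""
    return HOURS_PER_YEAR / mttf_hours

print(f"Implied AFR for a 1M-hour MTTF: {implied_afr(1_000_000):.2%}")  # ~0.88%
```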

Supercruncher “web 3.0” applications

Bill McColl writes an article named Supercruncher Applications on his Computing at Scale blog about massively parallel "web 3.0" applications. In particular, the following caught my eye: continuous search, complex algorithmic trading, decentralized marketplaces, and recommendation agents.

This is related to my previous post about the future of the web.

Found through Slashdot.

GPU increases computation heavy lifting by 10-200x

This Slashdot article talks about Nvidia's compiler environment for the new GeForce 8800 GPU. Apparently, you can get a 10-200x performance increase over a CPU for heavy numeric tasks.

Assuming this enables a 100x increase in neural network computation, it would shave roughly 8 years off the Moore's-law timeline to NN-based AI.
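
A quick sanity check of that arithmetic (the doubling periods below are assumptions, not figures from the article): a 100x speedup corresponds to log2(100) ≈ 6.6 doublings, so the number of years "saved" depends on the Moore's-law doubling period one assumes.

```python
import math

# How many years of Moore's-law scaling does a one-time 100x speedup
# correspond to? The doubling periods are assumptions for illustration.
speedup = 100
doublings = math.log2(speedup)  # ~6.64

for months_per_doubling in (14, 18, 24):
    years = doublings * months_per_doubling / 12
    print(f"{months_per_doubling} months/doubling -> ~{years:.1f} years saved")
# Roughly 8, 10 and 13 years respectively.
```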

What is Friendly?

Bob Mottram argues in a post on the Streeb-Greebling Diaries that Friendly AI suffers from a critical flaw. The definition of Friendly is lacking:

The elephant in the room here though is that there really is no good definition for what qualifies as “friendly”. What’s a good decision for me, might not be a good decision for someone else.

I completely agree. A well-known result in social choice theory – Arrow's Impossibility Theorem – shows that there is no general way to aggregate individual preferences into a group decision. Once there are at least two voters and three alternatives, no ranking rule can combine individual rankings into a group ranking while satisfying a small set of reasonable criteria (such as non-dictatorship, Pareto efficiency, and independence of irrelevant alternatives).
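
To make the aggregation problem concrete, here is a small illustrative sketch (the voters and rankings are made up) of the closely related Condorcet paradox: with three voters and three alternatives, pairwise majority voting can yield a cycle, so there is no stable group choice.

```python
from itertools import combinations

# Illustrative Condorcet paradox: three hypothetical voters rank three
# alternatives, each list ordered from most to least preferred.
rankings = [
    ["A", "B", "C"],
    ["B", "C", "A"],
    ["C", "A", "B"],
]

def majority_prefers(x, y):
    """True if a majority of voters rank x above y."""
    wins = sum(r.index(x) < r.index(y) for r in rankings)
    return wins > len(rankings) / 2

for x, y in combinations("ABC", 2):
    winner, loser = (x, y) if majority_prefers(x, y) else (y, x)
    print(f"majority prefers {winner} over {loser}")
# A beats B, B beats C, yet C beats A: a cycle, so pairwise majority
# voting produces no consistent group ranking.
```
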
Michael Anissimov writes a response to Bob’s article, in which he argues:

There is a common definition for “friendly”, and it is accepted by many in the field:

“A “Friendly AI” is an AI that takes actions that are, on the whole, beneficial to humans and humanity; benevolent rather than malevolent; nice rather than hostile.”

Not too difficult.

and

If you can’t please everyone all the time, then try to please as many people as possible most of the time. Again, there’s no dilemma here.

However, this merely raises the question of how to agree on what is benevolent, and how to distill a single decision given differences of opinion and Arrow's Theorem. Two reasonable persons can disagree on a desired course of action because of differences in information, temperament, goals, personal situations and so on. Current political processes produce outcomes that are unacceptable to many. It seems naive to hope that an entity created by these same reasonable people will act in a way that everyone agrees is Friendly.

Because of the impossibility of agreement among reasonable persons, I believe we should strive towards the following goals:

  • A balance of power, with the aim of making defense stronger than offense
  • Self-enhancement with the aim of keeping pace with other humans and with AIs
  • Self-definition, including autonomy and ownership of the self

A balance of power is important to create a stable political situation where individuals can protect themselves and be free from violence and coercion. We should look for technological applications (nano, computation, etc.) which favor defense (e.g. active shields, sensor technology, uploading/backup). These are technological fixes, which one hopes will be possible but are not certain. A balance of power is an enabler for the next two goals.

Self-enhancement allows humans to be equal players and meaningful participants in the future, rather than depending on the uncertain benevolence of people/AIs/organizations. It also feeds back into the balance-of-power goal, since it allows the balance to be maintained in the face of technological progress.

Self-definition allows individuals and voluntary groups to make their own decisions, which are more likely to be in line with their own goals and preferences.

CRNANO on DNA based nanomanufacturing

Chris Phoenix at CRNANO has some suggestions for nanomanufacturing using DNA bricks, as well as a follow-up article.

Web 3.0, according to Miron

Here is what I think Web 3.0 will have:

  •  A global and open Reputation Network
  •  A distributed and open Computing and Storage Platform

Reputation Network

What does it mean for a Reputation Network to be global? Currently, we have proprietary reputation systems, such as the reputation scores for sellers (and buyers) at Amazon and eBay. However, that reputation is not portable. This means that if an Amazon third-party seller wants to start selling on eBay, they have to start from scratch, as if their business were new. Trust is an integral ingredient of transactions. It becomes crucial on the internet, where a buyer and a seller are likely never to have heard of each other. With portable reputations, a trust metric can be made available in all interactions.

What about the open part?  A global reputation system owned by one entity is a non-starter.  Why would one trust a single entity to provide basic infrastructure that affects all commerce and other interaction?  Reputation should be like TCP/IP – based on open standards so that different vendors can provide different levels of service and create a robust overall system.  The individual reputation systems can remain under the control of Amazon, eBay and others.  However, they can inter-operate so that they can create a global reputation network.
Reputation should be subjective. End-users should be able to subscribe to different raters, and thereby compute different scores for the same target. End-users have diverse values and preferences. One number cannot capture this diversity.
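
As a rough sketch of what this might look like (the class and field names below are invented for illustration, not an existing standard), each rating could be published as a signed assertion, and each end-user could compute their own score for a target by weighting only the raters they choose to trust:

```python
from dataclasses import dataclass

# Hypothetical portable reputation assertion: any rater (eBay, Amazon, an
# individual) publishes signed statements about a target. Nothing here is
# an existing standard; it is only an illustration of the idea.
@dataclass
class Assertion:
    rater: str      # e.g. "ebay.com" or a user's key fingerprint
    target: str     # the seller/buyer being rated
    score: float    # normalized to [0, 1]
    signature: str  # placeholder; a real system would verify this

def subjective_score(target, assertions, trusted_weights):
    """Score a target using only the raters this particular user trusts."""
    relevant = [a for a in assertions if a.target == target
                and a.rater in trusted_weights]
    if not relevant:
        return None  # no basis for trust
    total = sum(trusted_weights[a.rater] for a in relevant)
    return sum(a.score * trusted_weights[a.rater] for a in relevant) / total

assertions = [
    Assertion("ebay.com", "alice-shop", 0.95, "sig1"),
    Assertion("amazon.com", "alice-shop", 0.80, "sig2"),
]
# Two users weighting the same raters differently get different scores.
print(subjective_score("alice-shop", assertions, {"ebay.com": 1.0}))
print(subjective_score("alice-shop", assertions, {"ebay.com": 0.5, "amazon.com": 1.0}))
```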

Storage and Computing

What about storage and computing? Currently, people have a presence on the Web through blogs, wikis, storefronts, IM, e-mail, etc. However, creating a new Web application faces certain barriers: the application creator has to acquire servers, manage them, ensure that the data is safe, and face scalability issues as the application grows in popularity. Interoperability between applications is also difficult. A standardized computing and storage abstraction would allow new applications to be installed by the user into their virtual computing appliance. Users would control which applications they run and how those applications communicate. Applications and data would migrate to physical hardware based on what the user is willing to pay and what scalability requires.

The division of labor is: the application provider does what they are good at – writing applications. The computing and storage providers supply efficient and reliable computing and storage (and if they don't, the application can migrate easily or even automatically). The end-user does what they do best – connect the dots and provide content.
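
Here is a minimal sketch of that division of labor (the interfaces and names are hypothetical, not any real platform's API): the application author codes against an abstract storage interface, and the user's virtual appliance decides which concrete provider backs it, so the application can migrate without code changes.

```python
from abc import ABC, abstractmethod

# Hypothetical abstraction: applications code against StorageProvider, and
# the user's virtual appliance picks (or switches) the concrete backend
# based on price and scalability. All names are illustrative only.
class StorageProvider(ABC):
    @abstractmethod
    def put(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: str) -> bytes: ...

class InMemoryProvider(StorageProvider):
    """Stand-in backend; a real provider might be S3 or a user's own server."""
    def __init__(self):
        self._data = {}
    def put(self, key, value):
        self._data[key] = value
    def get(self, key):
        return self._data[key]

class GuestbookApp:
    """The application author writes only application logic."""
    def __init__(self, storage: StorageProvider):
        self.storage = storage
    def sign(self, name: str):
        self.storage.put(f"entry:{name}", b"signed")

# The user's appliance wires the app to whichever provider they chose.
app = GuestbookApp(InMemoryProvider())
app.sign("miron")
```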

Brian Wang’s predictions

Brian Wang has predictions.

And he has some actual references to back them up.

(Found through Accelerating Future.)

Hybrid Wetware

Wired reports on machines controlled by natural neural networks, developed at the Georgia Institute of Technology.

Japan’s Petaflop Supercomputer

Japan builds MDGrape-3, a petaflop supercomputer, for a measly $9 million.

The hardware is specialized, but it may be suitable for neural simulation.

Singularity Institute

Accelerating Future has a piece about some interesting progress for the Singularity Institute.  Apparently they have offices right here in SF.

I still believe that the Friendly AI initiative will have a hard time achieving meaningful success.  I think it’s more likely that we will be able to create friendly Uploads first, or at least that we should strive towards that goal.

Nanofactory Collaboration

CRN reports:

Nanotechnologists Robert A. Freitas, Jr. and Ralph C. Merkle have launched a “Nanofactory Collaboration” website. 

Too early, or just the right timing?

NPR piece on Singularity with Doctorow and Vinge

NPR : Man-Machine Merger Arriving Sooner Than You Think

Amazon releases grid storage

Price is reasonable too!

Amazon S3 / Amazon Web Services

Three systems of action

CRN posts a review of their "Three Systems of Action" – Guardian, Commercial, and Information – drawing parallels to government, industry, and open source.

Responsible Nanotechnology: Systems Working Together

People Aggregator – unification of social networks?

Federated, single sign-on, standards-based. What's not to like?

BroadBand Mechanics presents People Aggregator

The web site is not fully functional yet, so we'll have to wait.

More “everything exists” links

http://groups.google.com/group/everything-list
http://www.anthropic-principle.com/preprints/inv/investigations.html
http://space.mit.edu/home/tegmark/toe.pdf
http://arxiv.org/PS_cache/astro-ph/pdf/0302/0302131.pdf
http://www.physica.freeserve.co.uk/

Computable universes

Looks like there are some people working on the “all quantized universes exist” angle:

Jürgen Schmidhuber’s Computable Universes page.

A final theory and the Omniverse

I just read "The Cosmic Landscape" by Leonard Susskind. Susskind believes that the small magnitude of the cosmological constant (119 zeros after the decimal point) compared to a randomly generated constant (which would likely be close to unity) leads to a multiple-universe explanation. The actual choice of universe in which we live is constrained (and enabled) by the Anthropic Principle (AP). We exist in a habitable universe (where the constant is within a range consistent with life). Other universes exist in the Multiverse where things are not fine-tuned, but nobody is there to notice.

Susskind further makes a distinction between the Landscape (set of possible universes) and Multiverse (set of actual universes). Universes can actually be born from other ones through quantum fluctuations and it seems that Susskind thinks that only those Universes born from others are “real”.

However, that seems arbitrary. What is the reason that only some of the universes in the Landscape exist, assuming they are all consistent? There is no "prime motive", so no set of universes is preferred over any other set. Just because a Universe can arise from another doesn't seem to give its existence additional justification. For example, assume Universe A can give rise to B and C, while D can give rise to E and F. If we say B exists (is in the Multiverse) and E does not (it is in the Landscape but not in the Multiverse), what is the reason for that?

Additionally, Susskind speaks of the Landscape as being created by varying parameters of String Theory. The question here is: what are the limits? All theories can be embellished or modified with additional rules and constructs. Different Landscapes can be created. Why choose one Landscape over another?

I would like to propose an extension to these ideas. Here is what I propose:

  • The Landscape includes all self-consistent mathematical constructs (the Omniverse)
  • The Multiverse and Landscape are identical

Some corollaries:

  • Every self-consistent mathematical construct exists as an independent universe
  • We exist in such a universe (but we have to find out which one)
  • Every perturbation of a self-consistent mathematical construct also exists

Note that a mathematical construct is not just what humans can conceive of – it’s the class of all possible ones.

Some speculations:

  • Quantum mechanics has a random nature at small scales because every perturbation exists
  • Our particular universe is a machine that is good at hiding the perturbations. In particular, at the macroscopic level the perturbations are smoothed out. This is another application of the AP – life could not exist in a fatally chaotic and/or unpredictable universe.
  • The Universe is quantized and finite in all dimensions and magnitudes (although unboundedness may be okay). Otherwise we get into the measure problem.

I found a few other mentions of similar ideas. The canonical reference seems to be Max Tegmark:

http://space.mit.edu/home/tegmark/toe_frames.html

I believe a couple of additional features apply that I haven't seen in other people's theories: 1) randomness is a given at the bottom, because all perturbations exist, and the universe must smooth it out; 2) the measure problem means that the universe must be quantized and finite.

Google is god

http://money.cnn.com/2006/01/24/technology/dumbest_googlegod/index.htm