Blue Brain Project Documentary – Year 1

Noah Hutton’s company Couple 3 Films has released Year 1 of a 10-year documentary project documenting the Blue Brain project. The project includes Henry Markram’s work on reverse engineering the brain, scaling up from rodents to humans by 2010.

The work is funded by the Swiss government.

$3000 Whole Genome Sequencing Cost

Life Technologies announces a $3,000 marginal cost (later this year) for sequencing a complete human genome. This comes after Illumina announced the same for $10,000 (available now). So a $1,000 genome early next year?

Here comes personalized medicine.

Attack Scenarios on Software Distributions

I’ve been asked to outline specific scenarios after I posted a previous entry on Google’s network compromise. Here are some, from most serious to least serious:

  • Build host – the machines that compile the source into binary packages are compromised.  In this scenario, code can be injected by the malicious party into the package just before it is signed and prepared for distribution.  All clients that install the updated packages are affected.  A software audit cannot identify the altered packages because the alteration happens after binaries are generated.
  • Distribution host and Signing key – the machines that host the packages for distribution (web servers) are compromised and the package signing key is compromised.  The effect of this is the same as a build host compromise.
  • Source repository – the machines that host the software source-code are compromised.  This allows code to be injected and all clients are affected.  However, a software audit can uncover the injected code.
  • Insider threats – an insider can insert non-obvious security holes into software they are responsible for.
  • Signing key – the key used to sign the software distribution is compromised.  This would allow the malicious party to compromise only specific targeted clients, through a “man-in-the-middle” attack combined with DNS poisoning.

How would multiple independent auditors help?  If the auditors can verify that a binary was produced from a given source, a build host compromise becomes much harder to exploit, since the altered binary would not be signed by the uncompromised auditors.  Similarly, a signing key compromise, if it is limited to a subset of auditors, would fail to gather a full set of signatures on the altered package.
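As a rough sketch of the kind of client-side check this enables (my illustration only, not a description of any existing system – the file layout, keyring names and the 2-of-3 threshold are invented), a client would accept a package only if enough independent auditors signed an assertion containing the package’s hash:

    import hashlib
    import subprocess

    # Invented layout: each auditor publishes an assertion file containing the
    # package hash they verified, plus a detached GPG signature over it.
    AUDITORS = [
        ("auditor1.assert", "auditor1.assert.sig", "auditor1-keyring.gpg"),
        ("auditor2.assert", "auditor2.assert.sig", "auditor2-keyring.gpg"),
        ("auditor3.assert", "auditor3.assert.sig", "auditor3-keyring.gpg"),
    ]
    THRESHOLD = 2  # accept only if at least 2 independent auditors agree

    def package_ok(package_path):
        with open(package_path, "rb") as f:
            digest = hashlib.sha256(f.read()).hexdigest()
        agreeing = 0
        for assertion, sig, keyring in AUDITORS:
            # gpg exits with 0 only if the detached signature verifies
            # against this auditor's key.
            verified = subprocess.run(
                ["gpg", "--no-default-keyring", "--keyring", keyring,
                 "--verify", sig, assertion],
                capture_output=True).returncode == 0
            with open(assertion) as f:
                asserted = digest in f.read()
            if verified and asserted:
                agreeing += 1
        return agreeing >= THRESHOLD

A compromised build host or a single stolen key then affects only one of the assertions, which is not enough to reach the threshold.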

Source repository compromise and insider injection of security holes would be more difficult to detect when the exploit is subtle, but again, having multiple entities look at the code increases the chances that the alteration is caught.

(Note: verifying that a certain binary was produced from certain source code requires a deterministic build system. Although such a system is relatively straightforward to implement, I had not run across one before I implemented Gitian. I did find a mention of the idea by Conifer Systems.)
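For completeness, a minimal sketch of the comparison step that a deterministic build makes possible (again my illustration, not how Gitian is implemented): each auditor compiles the published source independently, and the resulting binaries must hash identically.

    import hashlib
    import sys

    def sha256(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 16), b""):
                h.update(chunk)
        return h.hexdigest()

    # Hypothetical usage: python compare_builds.py auditor1.deb auditor2.deb auditor3.deb
    hashes = {path: sha256(path) for path in sys.argv[1:]}
    for path, digest in hashes.items():
        print(digest, path)
    if len(set(hashes.values())) == 1:
        print("OK: all independently built binaries are bit-identical")
    else:
        print("MISMATCH: a build host is compromised or the build is not deterministic")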

Doubling in Incidence of Malicious Data Breaches

CNet reports on the Ponemon Institute’s survey showing a doubling of data breach incidents.

Average cost per record in the surveyed group is around $200.

Operation Aurora and Software Distributions as Single Points of Security Failure

Operation Aurora (Google’s compromise by China) highlights the possibility that software distributions may be targeted for code injection by malicious parties.  If Apple, Microsoft, or a Linux distributor were compromised, a large percentage of individuals, businesses and governments could in turn be compromised when they install software updates.

One way to mitigate such a risk is to have multiple independent security auditors sign software distributions.  This is more likely to be successful in an open-source environment, where source is available and can easily be inspected.  I started such an initiative in late 2009 – Gitian.org.

IBM’s Blue Brain and Simulated Level of Detail

Henry Markram calls IBM’s cat-scale brain simulation a hoax. Markram claims that the simulation doesn’t have the 10,000+ differential equations needed to simulate the synapses with fidelity.  This argument is a version of the naturalistic fallacy – if Nature requires X to achieve a result, we will have to perform X when replicating the effect.

It is useful to think about the simulation’s level of detail (LOD) in terms of certain thresholds, from most detailed to least detailed:

  • Noise level: at this level (call it LODn) the lack of precision is on the order of the noise present in a biological brain.  It is not possible to distinguish the functioning of such a simulated brain from the functioning of a biological brain.
  • Functional level: at this level (call it LODf) the lack of precision is greater, but the result is functionally similar.  There may be some behavior changes, but the overall capabilities (e.g. “intelligence”) are similar.
  • Equivalence level: at this level (call it LODe) the precision is even lower, but this is compensated for with tweaks to the simulated physiology.  The result is equivalent in capabilities, although some characteristics may be very different.  For example, the retina can be replaced with a non-biological equivalent.

The computational power required decreases from LODn to LODf to LODe, likely by orders of magnitude between the levels.

If we consider a non-biological example – a digital computer – what does it take to simulate it?  It is obviously enough to simulate the logic function.  Markram’s line of reasoning would seem to argue that we have to simulate the voltage gradients and charge movements in each transistor!

In the transistor case, LODn would involve simulating each transistor’s logic function.  LODf would involve an instruction set simulation (e.g. the QEMU emulator).  LODe would involve using the most convenient instruction set (e.g. x86) and recompiling any software.  Clearly, an LODe simulation is several orders of magnitude more efficient.

Markram fails to convince that his preferred level of simulation is required even for LODn, never mind the other levels.

One way to find out what each level requires is to actually run simulations and compare them to physical neural matter.  The Blue Brain project aims to do that, although the results are not conclusive yet.  It would be good if more research were directed at comparing its simulation to a biological brain; that would make the project more grounded.

Nasal flu vaccine

Alex and I got the nasal H1N1 vaccine on Tuesday. I felt tired on Wednesday, and Alex has a sore throat. The nasal vaccine uses live-attenuated virus instead of killed virus.

Apparently symptoms are more likely with the nasal. On the up-side – no preservatives!

Does the nasal-spray flu vaccine LAIV (FluMist) contain thimerosal?

No, the nasal-spray flu vaccine LAIV (FluMist) does not contain thimerosal or any other preservative.

The computation market becomes more liquid

The Register tells us that Amazon will auction off its excess capacity.  We’re a couple of steps away from computation becoming a liquid commodity.  The next step is for a couple of additional providers to arise (Google?).  The step after that is for the APIs to be brought in sync, either by the providers themselves or by a third-party intermediary.
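To make that last step concrete, here is a hypothetical sketch of what an intermediary’s common interface might look like (the class and method names are invented for illustration and don’t correspond to any real library or provider API):

    from abc import ABC, abstractmethod

    class ComputeProvider(ABC):
        """One interface over many providers, so buyers can shop on price."""

        @abstractmethod
        def bid(self, instance_type: str, max_hourly_price: float) -> str:
            """Bid for spare capacity; returns an instance id if it clears."""

        @abstractmethod
        def terminate(self, instance_id: str) -> None:
            """Release the instance."""

    class AmazonSpot(ComputeProvider):
        # A real adapter would translate these calls into the provider's
        # own API; stubbed out here.
        def bid(self, instance_type, max_hourly_price):
            raise NotImplementedError
        def terminate(self, instance_id):
            raise NotImplementedError

Once two or three providers sit behind the same interface, price comparison – and therefore a genuinely liquid market – becomes a small amount of glue code.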

Cool advanced user-interface video

Very nice attention to detail on the user-interface widgets…


World Builder from Bruce Branit on Vimeo.

How I stopped worrying and learned to love technofixes

Peter Thiel writes about the failure of democracy to preserve freedom and some possible technofix strategies. He includes thoughts about creating freedom in cyberspace, outer space, or on the high seas. I think it would be interesting to build certain distributed Internet apps that could change the dynamics of freedom, including reputation systems, gifting/barter systems and user-controlled Internet apps.

Simplified nanobot anti-aging solution

[ I started this article two years ago – I’m posting it now even though I feel it is incomplete. ]

I’d like to propose a somewhat simplified approach to eliminate aging, given early-stage molecular robots and following the SENS approach to aging.

Results in this approach depend on creating new cells and destroying old cells in equal amounts, which exponentially reduces the amount of aging-related defects in the body over time.
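A rough way to see the exponential claim (my own arithmetic with an assumed replacement rate, not part of the original draft): if a fraction r of the body’s cells is replaced with pristine new cells in each treatment period, the fraction of original, defect-carrying cells remaining after n periods is

    D_n = D_0 \, (1 - r)^n

so replacing 10% of cells per period leaves about 35% of the original defect load after 10 periods and about 12% after 20.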

Aubrey de Grey proposes 7 mechanisms of aging, which are believed to be comprehensive.

Chris Phoenix on Nanotech Fast Takeoff

Chris Phoenix is writing an interesting series of articles over at CRN about the dynamics of the development of molecular engineering.

His thesis, as far as I understand it after the first three articles, is that we’re likely to see a fast takeoff, because it’s easy to achieve excellent results once you’ve achieved good-enough ones.

One example is error rate – going from an adequate error rate to a superlative one just requires additional purification / error-correction steps. Other things that may improve very quickly once a workable solution is found are reaction rates, which depend exponentially on positional accuracy / stiffness.
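To make the purification point concrete (my own illustrative arithmetic, not taken from Phoenix’s articles): if each purification or error-correction pass rejects a fraction f of the defective product, the defect fraction after n passes is roughly

    e_n \approx e_0 \, (1 - f)^n

so going from adequate to superlative is a matter of adding passes rather than redesigning the process – with f = 0.9, three extra passes cut the error rate by a factor of 1,000.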

Freedom is generative

I’ve been thinking about what we learned about freedom from the open-source movement.

I think one of the more important benefits of freedom is that it is generative. You can glue things together in ways that create completely new things. For example, you can take the Internet, existing computers and the ability to write software (originally the Mosaic browser) and create a whole new ecosystem – the World Wide Web.

What if you didn’t have the freedom to transmit arbitrary data on wires? You’d have the telco monopoly and no Internet. If you couldn’t talk to anybody you want? You’d get the original walled-garden AOL. If you couldn’t write arbitrary software?

But there’s nothing specific to software in this lesson. What if you couldn’t freely associate? If you couldn’t invest in arbitrary ideas? If someone else made the decisions for you?

Another question is how much could we go beyond the current state of affairs. I think we could have significantly more freedom in technology and obtain much richer outcomes.

For example, if reputation systems were not stuck in walled gardens such as eBay and Amazon seller ratings, we could have a global reputation system. Such a system would be immensely more useful, since it could guide us in every interaction rather than just the current 1%. I would guess that such a system could guide you to interesting content and interactions with uncanny accuracy. It would have to be decentralized and user-controlled to protect the users’ interests.

Another promising direction is the Google Android phone OS. If you buy one of the unlocked ones (also known as dev phones), you can recompile and install the OS and any applications you want. Google Maps is one mobile killer app, but there will be more, and I would guess the truly groundbreaking ones will not pass the iPhone store gatekeepers (see here, here and many others).

I sometimes pay a price for being an early adopter and eschewing closed solutions. Yes, the iPhone is very slick, and music from the iTunes store was tempting even when it was all DRM. But I think in the long term open solutions will be much more valuable. The original AOL was nice for its time, but it’s dead now.

Obama really thinks warrantless wiretapping is OK

Wired reports that Obama’s administration sided with the previous administration in a federal court filing. To all the people who said Obama’s FISA vote was political expediency: you were wrong – it’s policy.

H/T: Slashdot

Cool virus infection and assembly videos

Fun videos with accurate structures and mechanisms, although the motions are not realistic (Brownian motion is not goal-directed). The third video is the most involved.

H/T: Eric Drexler

Obama’s DOJ appointment

First, Obama votes for FISA, effectively saying that it’s okay for AT&T and other telcos in cahoots with the president to violate the constitution by spying on people without a warrant. Then he appoints an ex-RIAA lawyer to a top DOJ post – corporate interests above all.

Not much hope there.

Human Scale Memory Timeline Calculator

I have previously mentioned my estimator for when human-scale computation power will be available. I have since realized that the bottleneck might be memory rather than computation, so I’ve created a similar estimator for memory.

Although we may achieve human-level compute power in 2014, it looks like memory capacity will lag by another 6 years, assuming low estimates. With high estimates, compute power arrives in 2020 and memory capacity lags by another 5 years, to 2025.

However, if Flash memory or a similar technology does the trick, a factor-of-4 reduction in cost would advance the timeline by about 4 years.
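For what it’s worth, here is a minimal sketch of the kind of arithmetic such an estimator performs; the capacity target, current price, budget and halving time below are placeholder assumptions, not the numbers used by the linked estimator:

    import math

    # Placeholder assumptions -- not the estimator's actual inputs.
    HUMAN_SCALE_BYTES = 1e15     # assumed human-scale memory requirement
    PRICE_PER_GB_NOW = 10.0      # assumed current $/GB
    BUDGET_DOLLARS = 1e6         # assumed budget for the memory
    HALVING_TIME_YEARS = 2.0     # assumed cost-halving period
    START_YEAR = 2010

    def year_affordable(budget=BUDGET_DOLLARS):
        cost_now = (HUMAN_SCALE_BYTES / 1e9) * PRICE_PER_GB_NOW
        # Number of halvings needed for the cost to fall under the budget.
        halvings = max(0.0, math.log2(cost_now / budget))
        return START_YEAR + halvings * HALVING_TIME_YEARS

    # A 4x cost reduction (e.g. a cheaper Flash-like technology) saves two
    # halvings -- about 4 years with a 2-year halving time.
    print(year_affordable(), year_affordable(budget=BUDGET_DOLLARS * 4))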

Brian Wang’s things to watch for 2009

Brian Wang writes about technologies to watch for 2009, including Memristors, high speed networking, optical computing and quantum computing.

Peer to Peer Development

GitTorrent (described on Advogato) is a truly distributed version control system, based on Git and BitTorrent.  It seems to hold the promise of:

  • PGP public keys for authenticating changes
  • No central web site for a project
  • Easy forking of projects
  • Package and OS distributions without a central download location
  • A distributed mechanism for security and feature updates

The significance of all this is that it:

  • levels the playing field for individual developers and small groups
  • routes around censorship more effectively
  • allows end users to choose different views of the repository based on which developers they trust

H/T: Slashdot

I would suggest a further improvement – multiple signatures on sources and on binaries. This would greatly reduce the chance of Trojan binaries being installed on hundreds of thousands of computers the next time Canonical/Debian/RedHat distribution points are subverted by a black-hat hacker. Binary signatures would require a repeatable transformation from source to binary – achieved by fully specifying the compile tools and compilation environment and using fixed values for timestamps.
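As an illustration of what “fully specifying the compilation environment” could look like in practice (a rough sketch with invented paths and values, not the mechanism any particular distribution uses):

    import subprocess

    # Every input that could leak into the binary is pinned explicitly;
    # the paths and values are invented for illustration.
    pinned_env = {
        "PATH": "/opt/pinned-toolchain-4.3.2/bin",  # exact compiler version
        "LC_ALL": "C",                              # fixed locale
        "TZ": "UTC",                                # fixed timezone
        "BUILD_TIMESTAMP": "2009-01-01 00:00:00",   # fixed, never `date`
    }

    # Two builders running this against identical source should produce
    # bit-identical output, so their signatures cover the same hash.
    subprocess.run(["make", "clean", "all"], env=pinned_env, check=True)

With identical outputs guaranteed, the multiple-signature scheme reduces to checking that enough signers vouch for the same hash.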

Bill Joy for CTO of the USA?

Apparently, John Doerr yesterday recommended Bill Joy to Barack Obama as CTO of the USA.

Misguided relinquishment anyone?

Excerpt from Joy’s article “Why the future doesn’t need us”:

These possibilities are all thus either undesirable or unachievable or both. The only realistic alternative I see is relinquishment: to limit development of the technologies that are too dangerous, by limiting our pursuit of certain kinds of knowledge.

This could be pretty bad.

H/T: slashdot

Update: Slashdot reports that Bill Joy is no longer in the running. I haven’t yet looked at the two candidates who are.