The elephant in the room here, though, is that there really is no good definition for what qualifies as “friendly”. What’s a good decision for me might not be a good decision for someone else.
I completely agree. A well-known result in social choice theory – Arrow’s Impossibility Theorem – shows that there is no general way to aggregate individual ranked preferences into a group decision. Once there are at least two voters and three alternatives, no voting rule can satisfy a small set of reasonable fairness criteria (such as unanimity, independence of irrelevant alternatives, and non-dictatorship) all at once.
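Arrow’s Theorem itself takes more machinery to prove, but the underlying difficulty is easy to see in the classic Condorcet paradox: with three voters and three options, simple majority rule can prefer A to B, B to C, and C to A, so no option wins. A minimal sketch (the ballots here are hypothetical):

```python
from itertools import combinations

# Three voters' ranked preferences over options A, B, C (hypothetical example).
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def pairwise_winner(x, y, ballots):
    """Return whichever of x, y a strict majority ranks higher head-to-head."""
    x_votes = sum(1 for b in ballots if b.index(x) < b.index(y))
    return x if x_votes > len(ballots) / 2 else y

# Majority preference turns out to be cyclic: A beats B, B beats C, C beats A.
for x, y in combinations(["A", "B", "C"], 2):
    print(f"{x} vs {y}: majority prefers {pairwise_winner(x, y, ballots)}")
```

Every option loses some pairwise contest, so “tally the votes” gives no stable group choice even in this tiny case.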
Michael Anissimov writes a response to Bob’s article, in which he argues:
There is a common definition for “friendly”, and it is accepted by many in the field:
“A ‘Friendly AI’ is an AI that takes actions that are, on the whole, beneficial to humans and humanity; benevolent rather than malevolent; nice rather than hostile.”
Not too difficult.
If you can’t please everyone all the time, then try to please as many people as possible most of the time. Again, there’s no dilemma here.
However, this just raises the question of how to agree on what is benevolent, and how to distill a decision, given differences in opinion and Arrow’s Theorem. Two reasonable persons can disagree on a desired course of action because of differences in information, temperament, goals, personal situations, and so on. Current political processes produce outcomes that are unacceptable to many. It seems naive to hope that an entity created by these same reasonable people will act in a way that is agreed to be Friendly.
Because of the impossibility of agreement among reasonable persons, I believe we should strive towards the following goals:
- A balance of power, with the aim of making defense stronger than offense
- Self-enhancement with the aim of keeping pace with other humans and with AIs
- Self-definition, including autonomy and ownership of the self
A balance of power is important to create a stable political situation where individuals can protect themselves and be free from violence and coercion. We should look for technological applications (nano, computation, etc.) which favor defense (e.g. active shields, sensor technology, uploading/backup). These are technological fixes, which one hopes will be possible but are not certain. A balance of power is an enabler for the next two goals.
Self-enhancement allows humans to be equal players and meaningful participants in the future, rather than depending on the uncertain benevolence of people/AIs/organizations. It also feeds back into the balance-of-power goal, since it allows the balance to be maintained in the face of technological progress.
Self-definition allows individuals and voluntary groups to make their own decisions, which are more likely to be in line with individuals’ goals and preferences.