Volume 1, Issue 1
1st Quarter, 2006


Implications of Adaptive Artificial General Intelligence for Legal Rights and Obligations

Peter Voss

page 3 of 7

Timeframe
One key question that arises is: how soon will this happen? I maintain that the pieces of the puzzle are out there now. No fundamental technology still needs to be invented. I know this is a strong statement, but I am convinced that this will happen in less than ten years. In fact, our own company is working on it, and our own projections are for it to happen in three to six years.

Power
Another question is: how powerful will it be? Are there hard limits to intelligence? There may be hard limits at some level; we do not yet know. But we do know that it will be very powerful. It will be substantially more capable than humans in purely cognitive, reasoning, and problem-solving tasks.

Take-off
Will there be a hard take-off? Once A.G.I. reaches that ready-to-learn state, the seed A.I. state, some people speculate that within twenty-four hours the system will self-improve so much that the singularity will happen. That is one extreme. Other people believe it will take twenty, thirty, or fifty years for A.I.s to develop and become smarter and smarter. My own position is that it will be a firm take-off: we are talking months rather than years, certainly not tens of years.

There will be practical limits on how fast the machine can be improved, how fast hardware can be implemented and improved, and how fast systems can be redesigned. However, I believe it will essentially be a very short take-off in terms of giving society a chance to adapt and embrace it. It will certainly take off much faster than our legal system can move, or society as a whole can adapt.

Reversal
Now we ask: can we put the genie back in the bottle? The quick answer is no. There is already too much knowledge out there. We know too much about intelligence and A.I. It is just a question of when it is going to happen. It is not something you could legislate or prevent even if you wanted to. It will happen. There are too many people all over the world who have access to the essential information, and that information is going to grow.

A Mind of its Own
The next question is: will it have a mind or agenda of its own? That is a more complicated question, because it depends on exactly what you mean. Will it have a mind of its own? Yes, in some very important sense.

Will it have an agenda of doing something with its life? I believe the answer is essentially no, unless you specifically design it to do so. There is not much reason to design machines that have an agenda of their own. We want them to do things for us. We want them to create value for us. I have already touched on the difficulty of first integrating A.G.I. into human wetware to soften the blow and make us more comfortable, concluding that we cannot do that. It is much harder to upgrade our wetware in order to improve humans than it is to build a stand-alone A.G.I.
