Thursday, July 27, 2006

The Philosophy of AI

What would an AI be?
How could we know if we had it?
Is it possible?
Is AI research ethical?

The strong AI thesis.
The set of all possible Turing machines can be enumerated, from simpler to more complex.
The thesis is that if AI is possible, then at least one machine in this enumeration is an AI.
And since the enumeration is infinite, if it contains one AI it contains infinitely many.
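A minimal sketch of that enumeration, under the assumption that programs are encoded as binary strings listed shortest-first (a stand-in for a real numbering of Turing machines):

```python
from itertools import count, product

def enumerate_machines():
    """Yield every finite binary string, shortest first. Under some fixed
    encoding, each string names one Turing machine, so this walk covers
    the whole (countably infinite) set of machines."""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# First few entries of the enumeration:
for i, code in zip(range(6), enumerate_machines()):
    print(i, code)
```

If one code in this list is an AI, infinitely many are: padding a machine with unreachable states gives an equivalent machine at every larger index.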

"Computing Machinery and Intelligence" (Turing, 1950),
Lady Lovelace's objection: "The Analytical Engine has no pretensions to originate anything." It can do only what we tell it to do.
We're smart, it's stupid.

Can machines think? Yes.
The Mathematical Objection: Gödel, Turing, and others have shown that no purely formal system can prove (know) every mathematical truth.

The assumption is that we are not formal systems ourselves. But if we are formal systems, or isomorphic to formal systems, then the objection doesn't rule out AI. There is no proof that we are not formal systems.

To avoid endless arguments, Turing proposed the Turing test for intelligence. The Imitation Game.
So instead of trying to pin down what DEEP intelligence is, he just went by what appears to be intelligence.
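A rough sketch of the Imitation Game as a protocol; the human, machine, and interrogator below are made-up stand-ins (Turing's own arithmetic question is used as the prompt):

```python
import random

# The interrogator exchanges typed messages with two hidden parties, A and B,
# and must say which one is the machine. All three parties are trivial stubs.

def human(q):   return "I'd have to work that out on paper."
def machine(q): return "105721."        # instant, perfect arithmetic

def interrogate(transcript):
    # A machine that answers too quickly and too perfectly gives itself away.
    replies = dict(transcript)
    return "A" if "105721" in replies["A"] else "B"

def imitation_game():
    players = {"A": human, "B": machine}
    if random.random() < 0.5:           # hide who sits behind each label
        players = {"A": machine, "B": human}
    question = "Please add 34957 to 70764."
    transcript = [(label, players[label](question)) for label in ("A", "B")]
    guess = interrogate(transcript)
    return players[guess] is machine    # True = the machine was unmasked

print("machine identified" if imitation_game() else "machine passed")
```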

Unfortunately, the Turing test is neither necessary nor sufficient as a test for intelligence.
It's a worthy goal, however.
What if the internet passed the Turing test one day?
Monkeys MAY type Shakespeare's complete works, but there's more to AI than random search.
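To see why random search is hopeless, a back-of-the-envelope sketch (the alphabet size and text length are illustrative assumptions):

```python
import math

# Chance that one burst of random typing reproduces a target text.
# Assumed numbers: a 30-key typewriter and ~130,000 characters for Hamlet alone.
ALPHABET = 30
LENGTH = 130_000

# Each keystroke is right with probability 1/30, so
# P(success) = (1/30)^130000 -- far too small for a float, so work in log space.
log10_p = -LENGTH * math.log10(ALPHABET)
print(f"P(one random attempt succeeds) = 10^{log10_p:,.0f}")
```

That's one chance in roughly 10^192,000 per attempt, for a single play.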

The Total Turing Test is where you drop the screen.
It tests the ability of the robot to interact with the world (embodiment).

Even More Total Turing Test
Require isomorphic internal information processing.
Totally Complete Turing Test: Require isomorphic internal processing of all types, down to the subatomic level.

Which tests are sufficient for intelligence?
Surely we'll have something impressive if we can build something that fools people fairly regularly?

Can Machines Think? No Way.
Human intellect is an inarticulable skill; computer "intellect" involves rule-following.

But AI would be processing at a higher level than the implementation level. See the rest of Kevin's rebuttals.

Let's say we have a program that can understand Chinese.

John Searle's Chinese Room argument: you could run that program by hand, following its rules on paper, and hold a conversation in Chinese, yet neither you nor the rule book would REALLY understand Chinese. There's no understanding going on, just symbol shuffling.
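The flavour of Searle's point in code: a toy rule table mapping input symbols to output symbols (the "rules" below are invented placeholders, nothing like a real conversation program). It can answer, but nothing in it represents meaning:

```python
# Toy Chinese Room: pure symbol-to-symbol rules, no semantics anywhere.
RULE_BOOK = {
    "你好": "你好！",            # greeting in -> greeting out
    "你会思考吗？": "当然会。",   # "Can you think?" -> "Of course."
}

def room(symbols: str) -> str:
    """Look the input up and hand back whatever the book says.
    Nothing here knows what the marks mean; it matches marks to marks."""
    return RULE_BOOK.get(symbols, "请再说一遍。")  # default: "Please say that again."

print(room("你会思考吗？"))  # a fluent reply, with no understanding behind it
```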

What it comes down to is what do we mean by intelligence?
Teletype intelligence isn't enough. The meanings of words aren't enough for intelligence.

What would the Searle room answer if you asked it, "Do you like my new red shirt?"

Intelligence without consciousness is a possibility. It might also be possible to create an artificial consciousness in a box.

So we have the Turing test, and Searle's rebuttal that passing it is not sufficient.

The next philosophical question is, what would it mean to produce an ethical AI?
The fundamental goal of AI is to build a utilitarian (good) AI.

There's perhaps a limit to the number of brains that can be enumerated, given physical limitations etc. However, the number of Turing machines is infinite.

Then we develop them, and they get smarter and smarter themselves, and we have the SUPERINTELLIGENCE singularity.

Can we control these?
Asimov's Laws of Robotics.

The biggest dangers of premature extinction of civilisation today (a toy ranking sketch follows the list):
risk = probability x disvalue

1. Asteroid impact
2. Environmental collapse
3. CBN warfare
4. Nanotechnology (could be good, could be really nasty)
5. Adversarial SuperIntelligence
6. Terrorism.
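A minimal sketch of ranking by that formula; every probability and disvalue below is an invented placeholder, not an estimate:

```python
# risk = probability x disvalue, on made-up numbers purely to show the ranking idea.
dangers = {
    "Asteroid impact":               (1e-8, 1e10),
    "Environmental collapse":        (1e-3, 1e8),
    "CBN warfare":                   (1e-2, 1e7),
    "Nanotechnology":                (1e-4, 1e9),
    "Adversarial SuperIntelligence": (1e-5, 1e11),
    "Terrorism":                     (1e-1, 1e5),
}

ranked = sorted(dangers.items(), key=lambda kv: kv[1][0] * kv[1][1], reverse=True)
for name, (p, disvalue) in ranked:
    print(f"{name:32s} risk = {p * disvalue:.0e}")
```

The point of the formula is that a low-probability danger can still dominate the ranking if its disvalue is large enough.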
