Wednesday, September 13, 2006

Lecture AI Planning

Planning in AI
Principle of Least Commitment

Only make choices when forced.
--> Avoids unnecessary backtracking.

Represented graphically in Russell and Norvig.

A plan is a structure consisting of
-A set of plan steps. Each step is an operator.
-A set of step ordering constraints.
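That two-part structure (steps plus ordering constraints) can be sketched directly. This is a minimal illustration, not the full POP algorithm; the step names are hypothetical and the cycle check is just a topological-order test:

```python
from collections import defaultdict

# A partial-order plan: a set of steps and a set of ordering constraints.
steps = {"Start", "Polish(M)", "Assemble(H)", "Finish"}

# Each pair (a, b) means step a must come before step b. Polish(M) and
# Assemble(H) are left unordered relative to each other: least commitment.
orderings = {
    ("Start", "Polish(M)"),
    ("Start", "Assemble(H)"),
    ("Polish(M)", "Finish"),
    ("Assemble(H)", "Finish"),
}

def consistent(orderings):
    """The constraints are consistent iff they contain no cycle."""
    succ = defaultdict(set)
    for a, b in orderings:
        succ[a].add(b)
    visited, on_stack = set(), set()
    def dfs(n):
        if n in on_stack:
            return False          # cycle found
        if n in visited:
            return True
        visited.add(n)
        on_stack.add(n)
        ok = all(dfs(m) for m in succ[n])
        on_stack.discard(n)
        return ok
    return all(dfs(n) for n in list(succ))

print(consistent(orderings))  # True: the constraints are acyclic
```

Because the plan only commits to the orderings it must, any linearisation consistent with the constraints is an acceptable execution order.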

Slide 16
Hubble Space Telescope



OP: Assemble(X)
Pre: Painted(X)
Post: Assembled(X)

OP: Finish
Pre: Corrected(M), Polished(M), Assembled(H)
Post: _____
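The operators on the slide can be written down as simple data. This is a sketch using a hypothetical representation, where M is the mirror and H the telescope as in the lecture's Hubble example:

```python
from collections import namedtuple

# A STRIPS-style operator: a name, preconditions, and postconditions (effects).
Op = namedtuple("Op", ["name", "pre", "post"])

assemble = Op("Assemble(X)", pre={"Painted(X)"}, post={"Assembled(X)"})
finish = Op("Finish",
            pre={"Corrected(M)", "Polished(M)", "Assembled(H)"},
            post=set())  # Finish has no effects; it just collects the goals

def applicable(op, state):
    """An operator is applicable when all its preconditions hold in the state."""
    return op.pre <= state

state = {"Corrected(M)", "Polished(M)", "Assembled(H)"}
print(applicable(finish, state))  # True
```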

So you just work through this, identifying preconditions and postconditions.

In the third assignment, this is the relevant way of dealing with the planning problem there (using the POP algorithm).

The STRIPS language is fairly limited, but it is restricted intentionally.
Still, this makes it too restricted for complex, realistic domains.
You can make it more expressive with conditional effects.
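A conditional effect makes an operator's postconditions depend on the state it is applied in. A toy sketch, with a hypothetical Paint operator (the literals here are illustrative, not from the lecture):

```python
def apply_paint(state, x):
    """Apply a hypothetical Paint(X) operator with one conditional effect."""
    new_state = set(state)
    new_state.add(f"Painted({x})")   # unconditional effect
    if "HasBrush" in state:          # conditional effect: only when HasBrush
        new_state.add("BrushDirty")  # holds does the brush become dirty
    return new_state

print(apply_paint({"HasBrush"}, "Mirror"))
```

Plain STRIPS cannot express this: each operator has one fixed add/delete list regardless of the state.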

But what about when the world becomes uncertain? How does it deal with that then?
The POP algorithm dies. It does not cope with that.

Lecture 15

The Frame Problem
All we have are axioms: some limited, finite set of them to determine change.
The number of axioms grows exponentially, so that sort of thing doesn't work. It might work in a small toy world, but not in the real world.
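The standard remedy sketched in R&N is worth noting down: instead of one frame axiom per action/fluent pair, a successor-state axiom packs everything about one fluent into a single formula. The predicate names below are illustrative placeholders, not the book's exact notation:

```latex
% A fluent F holds after doing action a in situation s iff
% a made F true, or F already held and a did not make it false:
F(\mathit{Result}(a, s)) \iff
  \mathit{MakesTrue}(a, s) \lor \big( F(s) \land \lnot \mathit{MakesFalse}(a, s) \big)
```

This gives one axiom per fluent rather than one per action-fluent pair, which is what keeps the axiom count from exploding.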

R&N try things, but these require the robot to know exhaustively what can and cannot happen in the world, and for all of that to be encoded into the robot.
There is such an approach in AI.

Another approach is allowing these things to be constructed on the fly, with the robot learning about the world as it exists in it.

Up until now there's been traditional search.

The Many Frame Problems

Nonmonotonicity and Defeasible Reasoning.

The Japanese 5th Generation project.
The Japanese producing AI machines that would dominate the world; the '80s scare.

The enormous cauldron: you throw all these axiomatised things in, things warm up, and suddenly, boom, you have an AI.

Minsky "A framework for representing knowledge".
Logic can't be the right vehicle for knowledge representation.

People are inconsistent! It would be easy to create an AI that can always say "Yes".
Therefore AI needs means of dealing with inconsistency.

The AI has to be able to generalise, which means it has to be able to accept mistakes.

I think I'm most interested in, and most convinced by, neural networks for AI, considering that I think it's like solving the meta-problem, and it also simulates the brain.
The goal of building a neural network would be to create the architecture for it to begin learning, to train it in language (any language), and then to use that naturally learnt language to associate real-world objects with the words within the same net.

It would just be a monitor-able, extremely diligent and quick student.

It could learn all human languages, and therefore learn everything that humans know.

If we made a deaf man hear for the first time, would he understand the language he can read and write in?

Because this has implications for an AI who could read and write, but had not been given ears.

You cannot convey the same amount of emotion and emphasis through writing as you can with voice.

Tuesday, September 05, 2006

Lecture 13

For the second assignment you're going to have to come up with some heuristics, some that we've been talking about.
Assignment 2 is game playing.

Non-compositional sentences.

X believes that the morning star is different from the evening star.

Propositional logic is nice, complete, but it doesn't have much power.
We want to be able to move beyond that and reason about collections of objects and their properties.
First Order Logic
Objects: plants, animals, numbers
Properties: yummy, big
Relations: faster than

Syntax
Constants: A, B, Sally
Variables: x, y, z
Functions of terms: f(Sally), g(f(Sally))

Sentences
Atomic
Complex

FOL - First Order Languages
Mapping from syntax to world.

M(term) = object
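That mapping can be made concrete with a toy interpretation. This is a sketch, assuming a hypothetical world and reusing the constants, the function f, and the "faster than" relation from the examples above:

```python
# The domain: the objects in the world.
world = {"sally", "rex", 7}

# M maps constant symbols to objects...
M_const = {"Sally": "sally", "Rex": "rex"}

# ...function symbols to functions over objects...
M_func = {"f": lambda x: "rex" if x == "sally" else "sally"}

# ...and predicate symbols to relations (sets of tuples of objects).
M_pred = {"FasterThan": {("rex", "sally")}}

def interpret_term(term):
    """M(term) = object: constants map directly; f(t) applies M(f) to M(t)."""
    if isinstance(term, str):
        return M_const[term]
    fname, arg = term            # a compound term like ("f", "Sally")
    return M_func[fname](interpret_term(arg))

def holds(pred, *terms):
    """An atomic sentence is true when the denoted objects stand in the relation."""
    return tuple(interpret_term(t) for t in terms) in M_pred[pred]

print(interpret_term(("f", "Sally")))                # "rex"
print(holds("FasterThan", ("f", "Sally"), "Sally"))  # True
```

The point is that truth is defined relative to the interpretation M: change the mapping and the same sentence can come out false.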