Wednesday, August 02, 2006

Week 3

Zombies.
Functionalists.
There are functions of consciousness (or a constellation of functions), and arguing for that is not trivial.
How did consciousness evolve? If you conclude that it evolved, then it must have been built up from functions.

Lecture 5
Artificial Agency.

We're moving onto more technical matters.
Applied philosophy and ethics.
There IS some kind of philosophical overlay to this subject; other courses might be more technical.
What AI is aiming for is building an independent intelligent artificial agent. Not just building a set of programs, but building an architecture within which the AI should reside.

So what does that mean?

The "insert human knowledge and turn a crank" model is NON-autonomous. These are 'expert systems'. That's fine as an application.

Primitive induction
The opposite of deduction. You want an autonomous agent to navigate the world without intervention from the designer. It means learning about the world from nothing.
You could start your bots off with some sort of knowledge of course.
Instead of simply going numb in the face of a new problem, you should be able to deal with it!

Generalisation and specialisation.

Pragmatics
Dealing with context appropriately

Incremental Learning
Learning by trial and error.
The majority of machine learning research is actually in batch processing. But if you're talking about autonomous agents, that isn't what you want.
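To make the contrast concrete, here is a minimal sketch (my own illustration, not from the lecture): a running mean updated one observation at a time, which ends up agreeing with the mean computed over the whole batch.

```python
# Incremental (online) learning vs batch learning, in miniature.

def batch_mean(samples):
    """Batch learning: all data available up front."""
    return sum(samples) / len(samples)

class IncrementalMean:
    """Online learning: refine the estimate as each sample arrives."""
    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def update(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n  # running-average update
        return self.mean

samples = [4.0, 7.0, 1.0, 8.0]
learner = IncrementalMean()
for x in samples:
    learner.update(x)

# The incremental estimate matches the batch answer,
# but never needed the whole dataset at once.
assert abs(learner.mean - batch_mean(samples)) < 1e-9
```

The point is that the incremental learner has a usable estimate after every single sample, which is what an agent acting in the world needs.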

Goal-oriented learning
What goals are driving the learning? In AI, the idea of knowledge for knowledge's sake is mistaken. You must have a goal.

Defeasibility.
Buzzword or not, it comes from "to defeat". An agent must have it; if it doesn't, it is stupid. The agent has to be able to come to the conclusion that it is mistaken.
If you can't recover from your mistakes then you're in trouble.
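A toy sketch of defeasible reasoning (the bird/penguin example is my illustration, not from the lecture): the agent draws a default conclusion, then defeats it when stronger contrary evidence arrives.

```python
# Default conclusions that can be retracted: the essence of defeasibility.

class DefeasibleAgent:
    def __init__(self):
        self.facts = set()

    def tell(self, fact):
        self.facts.add(fact)

    def believes_flies(self, name):
        # Exceptions defeat the default rule "birds fly".
        if (name, "penguin") in self.facts:
            return False
        return (name, "bird") in self.facts

agent = DefeasibleAgent()
agent.tell(("tweety", "bird"))
print(agent.believes_flies("tweety"))   # True  (default conclusion)
agent.tell(("tweety", "penguin"))
print(agent.believes_flies("tweety"))   # False (conclusion defeated)
```

A classical deductive system could never do this: once "Tweety flies" is derived, it stays derived. The defeasible agent recovers from its mistake.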

Coping with uncertainty.
Two things look the same, but aren't. How do I deal with that?
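One standard way to deal with it is to keep a probability over the possibilities and update it with Bayes' rule as evidence accumulates. This sketch (doors and numbers invented for illustration) shows two states that produce the same appearance being told apart by an extra observation.

```python
# Two doors look identical, but door A squeaks 90% of the time when
# pushed and door B only 20%. A squeak shifts belief towards A.

def bayes_update(prior, likelihood):
    """prior, likelihood: dicts mapping state -> probability."""
    unnorm = {s: prior[s] * likelihood[s] for s in prior}
    total = sum(unnorm.values())
    return {s: p / total for s, p in unnorm.items()}

belief = {"A": 0.5, "B": 0.5}            # the doors look the same
squeak_likelihood = {"A": 0.9, "B": 0.2}

belief = bayes_update(belief, squeak_likelihood)  # we heard a squeak
print(belief)  # belief now favours A
```

Rather than going numb when perception is ambiguous, the agent acts on the most probable interpretation and keeps refining it.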

Supposing that we are all autonomously intelligent, then these are the things that we'll be able to do.

An agent is anything that can perceive its environment through sensors and act on that environment through effectors.
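That definition implies a sense-act loop. A minimal sketch (the thermostat example is my illustration): a percept comes in from a sensor, the agent program maps it to an action for an effector.

```python
# The simplest possible agent: percept in, action out.

class ThermostatAgent:
    """Keeps temperature near a setpoint: sensor in, effector out."""
    def __init__(self, setpoint):
        self.setpoint = setpoint

    def act(self, percept):
        # Agent program: map the current percept to an action.
        if percept < self.setpoint - 1:
            return "heat_on"
        if percept > self.setpoint + 1:
            return "heat_off"
        return "idle"

agent = ThermostatAgent(setpoint=20)
print(agent.act(17))  # heat_on
print(agent.act(22))  # heat_off
```

Everything in the lecture, from expert systems to autonomous learners, fits this loop; the differences lie in how sophisticated the mapping from percepts to actions is.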

What does "do the right thing" mean?
"Rational" = based on reason.
"Reason" = rational ground for belief or action.
Given options, it chooses those that are the best for "success".

You have to think about what you want, before you ask for it, because you might get it.

An ideal rational agent is one that, for each possible percept sequence, acts so as to maximise its expected success.
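A sketch of what "chooses the best option for success" can mean in practice (actions and payoffs invented for illustration): score each action by its expected performance and pick the maximum.

```python
# Rational choice as expected-utility maximisation.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def rational_choice(actions):
    """actions: dict mapping action name -> list of (prob, utility)."""
    return max(actions, key=lambda a: expected_utility(actions[a]))

actions = {
    "safe":  [(1.0, 5.0)],                 # guaranteed modest payoff
    "risky": [(0.5, 12.0), (0.5, -4.0)],   # expected value only 4.0
}
print(rational_choice(actions))  # safe
```

Note that rationality here is relative to what the agent believes about outcomes, which is exactly why specifying the success measure carefully matters (you might get what you asked for).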

Environment Types
A lot of AI research is done in deterministic 'toy worlds', which do not reflect reality. This is because it's easier to do!
