Wednesday, September 13, 2006

Lecture AI Planning

Planning in AI
Principle of Least Commitment

Only make choices when forced.
--> Avoids unnecessary backtracking.

Represented graphically in Russell and Norvig.

A plan is a structure consisting of
-A set of plan steps. Each step is an operator.
-A set of step ordering constraints.

Slide 16
Hubble Space Telescope



OP: Assemble(X)
Pre: Painted(X)
Post: Assembled(X)

OP: Finish
Pre: Corrected(M), Polished(M), Assembled(H)
Post: _____

So you just work through this, identifying pre and post conditions.

In the third assignment, this is the relevant way of dealing with the planning problem (using the POP algorithm).
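Since the course work is in Lisp, the operator bookkeeping can be sketched as a small data structure. The names here (op, applicable-p, apply-op) are illustrative, not the course's actual code, and delete lists are omitted for brevity:

```lisp
;; A STRIPS-style operator as a Lisp structure (illustrative names,
;; not the course code; delete lists are omitted).
(defstruct op
  name       ; e.g. (assemble h)
  preconds   ; literals that must hold, e.g. ((painted h))
  effects)   ; literals made true,      e.g. ((assembled h))

(defun applicable-p (op state)
  "An operator is applicable when every precondition is in the state."
  (every (lambda (p) (member p state :test #'equal))
         (op-preconds op)))

(defun apply-op (op state)
  "Add the operator's effects to the state."
  (union (op-effects op) state :test #'equal))
```

Working through pre- and post-conditions, as above, then amounts to checking applicable-p before extending the plan.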

The STRIPS language is fairly limited, but it is restricted intentionally.
This makes it too restricted for complex, realistic domains.
You can make it more expressive with conditional effects.

But what about when the world becomes uncertain? How does it deal with that then?
The POP algorithm dies. It does not cope with that.

Lecture 15

The Frame Problem
All we have are axioms, some limited, finite set of them to determine change.
The axioms all grow exponentially, so that sort of thing doesn't work. It might work in a small toy world, but not in the real world.

R&N try things, but it requires the robot to know exhaustively what can and cannot happen in the world, and to have that encoded into the robot.
There is such an approach in AI.

Another approach is allowing these things to be constructed on the fly, with the robot learning about the world as it exists in it.

Up until now there's been traditional search.

The Many Frame Problems

Lecture 15
Nonmonotonicity and Defeasible Reasoning.

The Japanese 5th generation project.
The Japanese were going to produce AI machines that would dominate the world. The 80s scare.

The enormous cauldron: you throw all these axiomatised things in, things warm up, and suddenly, boom, you have an AI.

Minsky "A framework for representing knowledge".
Logic can't be the right vehicle for knowledge representation.

People are inconsistent! It would be easy to create an AI that can always say "Yes".
Therefore AI needs means of dealing with inconsistency.

The AI has to be able to generalise, which means it has to be able to accept mistakes.

I'm most interested in, and most convinced by, neural networks for AI, since I think it's like solving the meta-problem, and it also simulates the brain.
The goal of building a neural network would be to create the architecture for it to begin learning, to train it in language (any language), and then to use that naturally learnt language to associate real-world objects with the words within the same net.

It would just be a monitor-able, extremely diligent and quick student.

It could learn all human languages, and therefore learn everything that humans know.

If we made a deaf man hear for the first time, would he understand the language he can read and write in?

Because this has implications for an AI who could read and write, but had not been given ears.

You cannot convey the same amount of emotion and emphasis through writing as you can with voice.

Tuesday, September 05, 2006

Lecture 13

For the second assignment you're going to have to come up with some heuristics, some that we've been talking about.
Assignment 2 is game playing.

Non-compositional sentences.

x believes that the morning star is different from the evening star.

Propositional logic is nice, complete, but it doesn't have much power.
We want to be able to move beyond that and reason about collections of objects and their properties.
First Order Logic
Objects: plants, animals, numbers
Properties: yummy, big
Relations: faster than

Syntax
Constants: A, B, Sally
Variables: x, y, z
Functions of terms: f(Sally), g(f(Sally))

Sentences
Atomic
Complex

FOL - First Order Languages
Mapping from syntax to world.

M(term) = object
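That mapping can be sketched in Lisp, representing terms as s-expressions. The helper name m-term and the alist encodings of the interpretation are my own assumptions for illustration:

```lisp
;; M(term) = object: evaluate a term under an interpretation.
;; Constants are looked up in an alist; a function term like (f sally)
;; applies the function's denotation to the evaluated arguments.
;; (m-term and the alist encoding are illustrative, not from the lecture.)
(defun m-term (term interp funcs)
  (if (consp term)
      (apply (cdr (assoc (car term) funcs))
             (mapcar (lambda (arg) (m-term arg interp funcs))
                     (cdr term)))
      (cdr (assoc term interp))))
```

Nested terms like g(f(Sally)) then evaluate recursively, inside out.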

Wednesday, August 02, 2006

Week 3

Zombies.
Functionalists.
There are functions of consciousness - arguing for that is not trivial - or a constellation of functions.
How had consciousness evolved... if you conclude that, then it must have been built up on functions.

Lecture 5
Artificial Agency.

We're moving on to more technical matters.
Applied philosophy and ethics.
There IS some kind of philosophical overlay to this subject; other courses might be more technical.
What AI is aiming for is building an independent, intelligent artificial agent. Not just building a set of programs, but building an architecture within which the AI should reside.

So what does that mean?

The "Insert Human knowledge and turn a crank" model is NON-autonomous. They're 'expert systems'. That's fine as an application.

Primitive induction
The opposite of deduction. You want an autonomous agent to navigate the world without intervention from the designer. It means learning about the world from nothing.
You could start your bots off with some sort of knowledge of course.
Instead of simply going numb in the face of a new problem, you should be able to deal with it!

Generalisation and specialisation.

Pragmatics
Dealing with context appropriately

Incremental Learning
Learning by trial and error.
The majority of machine learning research is actually in batch processing. But if you're talking about machine learning then this isn't what you're talking about.

Goal-oriented learning
What goals are driving the learning? The idea that knowledge for knowledge's sake is mistaken in AI. You must have a goal.

Defeasibility.
Buzzword or not, it comes from the word 'to defeat'. An agent must have it; if it doesn't, it is stupid. The agent has to be able to come to the conclusion that it is mistaken.
If you can't recover from your mistakes then you're in trouble.

Coping with uncertainty.
Two things look the same, but aren't. How do I deal with that?

Supposing that we are all autonomously intelligent, then these are the things that we'll be able to do.

An agent is anything that can perceive its environment through sensors and act on its environment through effectors.

What does "do the right thing" mean?
"Rational" = based on reason.
"Reason" = rational ground for belief or action.
Given options, it chooses those that are the best for "success".

You have to think about what you want, before you ask for it, because you might get it.

An ideal rational agent is one which, for each possible percept sequence, performs the actions expected to maximise its success.

Environment Types
A lot of AI research is done in deterministic 'toy worlds', which do not reflect reality. This is because it's easier to do!

Thursday, July 27, 2006

Getting Lispy

More Lisp
We have the basic arithmetic available in Lisp.
cos, expt, log etc we have.
We always need random numbers in AI, so
(random limit [random-state])

You of course need logic

/= is "not equal"

(and ...) (or ...)

cond is for conditionals.
(cond (test_1 exp_11 ... exp_1k)
      (test_2 exp_21 ... exp_2k)
      ...)
There is no 'else' statement. Just use t for true as a final test.
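A minimal example of cond with a final t clause standing in for 'else' (the function name is my own illustration):

```lisp
;; Classify a number's sign. The last clause's test is t, so it
;; always fires if nothing above it matched.
(defun sign-of (n)
  (cond ((> n 0) 'positive)
        ((< n 0) 'negative)
        (t       'zero)))
```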

There really is way too much in common lisp...

Function values
Non-destructive

Scope
Recursion
Procedural languages will allow recursion, so who cares?
Procedural languages have it as an add-on, whereas functional languages have it as a fundamental.

(mapcar #'* a b)
A convenience function to apply the multiplication to each item in the list.
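For example, multiplying two lists element-wise:

```lisp
;; mapcar walks the lists in parallel, applying #'* to each pair:
(mapcar #'* '(1 2 3) '(10 20 30))  ; => (10 40 90)
```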

Procedural programming is the von Neumann architecture: fetch, execute, store.
Side effect programming
Not compositional, but repetitive.
Requires lots of housekeeping (i, count, etc)

Functional programming is the opposite of all this.
But then when you actually get into it, "Pure Lisp is often buried in large extensions of von Neumann-style stuff". Or something like that, in 1978.

The first example of recursion still relied on side effects.
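A side-effect-free version looks like this: no counter variable, no assignment, just a base case and a recursive call. (The function is my own example, not the lecture's.)

```lisp
;; Sum a list purely recursively -- no i, no count, no setq.
(defun sum-list (lst)
  (if (null lst)
      0
      (+ (car lst) (sum-list (cdr lst)))))
```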

The Philosophy of AI

The philosophy of AI
What would an AI be?
How could we know if we had it?
Is it possible?
Is AI research ethical?

The strong AI thesis.
The set of all possible Turing machines can be enumerated, from simpler to more complex.
The theory is that if AI is possible, then there is at least one AI in this set.
And since the enumeration is infinite, if there is one AI in it then there are infinitely many AIs.

"Computing Machinery and Intelligence" (Turing, 1950),
Lady Lovelace's objection: "The Analytical Engine has no pretensions to originate anything." It can do only what we tell it to do.
We're smart, it's stupid.

Can machines think? Yes.
The Mathematical Objection: Gödel, Turing etc. have shown that no purely formal system can prove (know) every mathematical truth.

The assumption is that we are not formal systems ourselves. But if we are formal systems, or isomorphic to formal systems, then AI can hold true. There is no proof that we are not formal systems.

To avoid endless arguments, Turing proposed the Turing test for intelligence. The Imitation Game.
So instead of trying to understand what is DEEP intelligence, he just went by what appears to be intelligence.

Unfortunately, the Turing test is neither necessary nor sufficient as a test for intelligence.
It's a worthy goal, however.
What if the internet passed the turing test one day?
Monkeys MAY type Shakespeare's complete works, but there's more to AI than random search.

The Total Turing test is where you drop the screen.
It tests the ability of the robot to interact with the world (embodiment).

Even More Total Turing Test
Require isomorphic internal information processing.
Totally Complete Turing Test: require isomorphic internal processing of all types, down to the subatomic level.

Which tests are sufficient for intelligence?
Surely we'll have something impressive if we can create something to fool people fairly regularly?

Can Machines Think? No Way.
Human intellect is an inarticulable skill; computer "intellect" involves rule-following.

But AI would be processing at a higher level than the implementation level. See the rest of Kevin's rebuttals.

Let's say we can have a program that can understand Chinese.

John Searle's Chinese Room Argument is that you can have a program that will run and have a conversation in Chinese, but it isn't REALLY understanding. There's no understanding going on.

What it comes down to is what do we mean by intelligence?
Teletype intelligence isn't enough. The meanings of words aren't enough for intelligence.

What would the Searle room answer if you asked it "Do you like my new Red shirt?"

Intelligence without consciousness is a possibility. It might be possible that we can create an artificial consciousness in a box.

So we have the turing test, and Searle's rebuttal that this is not sufficient.

The next philosophical question is, what would it mean to produce an ethical AI?
The fundamental goal of AI is to build a utilitarian (good) AI.

There's perhaps a limit to the number of brains that can be enumerated, given physical limitations etc. However, the number of Turing machines is infinite.

Then we develop them, and they get smarter and smarter themselves, and we have the SUPERINTELLIGENCE singularity.

Can we control these?
Asimov's Laws of Robotics.

The biggest dangers of premature extinction of civilisation today:
risk = probability x disvalue

1. Asteroid impact
2. Environmental collapse
3. CBN warfare
4. Nanotechnology (could be good, could be really nasty)
5. Adversarial SuperIntelligence
6. Terrorism.

Wednesday, July 19, 2006

Lisp - The Language

Lisp
The functionality and structure of Lisp lends itself to AI. It's used for research and application of AI.

Other languages are more geared towards practical problem solving in industry, Lisp towards... impractical ones.

It's functional/recursive
Dynamic typing (not strong typing)
Flexibility; ease of prototyping (vs. Maintenance).

Taught at Monash because
Historical
Exposure to functional model
Exposure to prototyping

If you're worried about type, you have to do your own checking (it's called debugging!).

It's actually the third oldest language still in use today. (FORTRAN 57, COBOL 58, Lisp 1958)

There are many different versions of Lisp
Common Lisp is the common language.

GNU CLISP is what we're using in this subject.

Lisp is normally interpreted, but then compiled later.
Script for Debug, compile for testing.

At the topmost level:
loop {
read
eval
print
}
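That top level can literally be written as one line of Lisp; the version below drives one iteration from a string instead of the keyboard, so it doesn't block:

```lisp
;; The top level is a read-eval-print loop. Its essence is:
;;   (loop (print (eval (read))))
;; One iteration, driven from a string instead of the keyboard:
(with-input-from-string (in "(+ 1 2)")
  (print (eval (read in))))   ; prints 3
```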

Lisp objects
s-expression
Atoms. Atoms may be either
-numbers 3, 75.23322e-38
-or symbols x, nil

Intelligence is all about symbol manipulation. (McCarthy) This is a lie!(?) But we'll go on with it for a while. More on that later. Anyway.

Lists are a sequence of objects inside a pair of parentheses.
(8 7 99)
((8) 7 99)

Assigning values to symbols.
x = 5
x <- 5

Lisp uses a special function called setq.
[7]> (setq x 5)

The [7]> is a prompt that tells you how many s-expressions have been evaluated so far.

There is also setf for simple assignment.
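A quick comparison of the two at the prompt (the variable names are my own):

```lisp
(setq x 5)              ; bind the symbol x to 5
(setf x (+ x 1))        ; setf covers simple assignment too; x is now 6
(setq lst (list 1 2 3))
(setf (car lst) 9)      ; setf can also write into a "place"; lst is now (9 2 3)
```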

car and cdr are non-destructive

The empty list is (), nil - the atom.

Taking Lists apart.
car operates on lists, not atoms.

caadr

(caadr '(x y z)) is shorthand for (car (car (cdr '(x y z))))

(cdr '(x y z)) gives (y z), and (car '(y z)) gives y; the outer car then needs y to be a list, otherwise it's going to fail.

(cons expression list): constructs a node.

Internally, lists are represented as binary trees.
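So a list literal is just sugar for a chain of cons cells:

```lisp
;; (8 7 99) is the cons chain (cons 8 (cons 7 (cons 99 nil))):
(equal '(8 7 99)
       (cons 8 (cons 7 (cons 99 nil))))  ; => T
(car '(8 7 99))   ; => 8
(cdr '(8 7 99))   ; => (7 99)
(cons 8 '(7 99))  ; => (8 7 99), and the original list is untouched
```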

What if you layered neural networks upon neural networks?
Do we really need to know HOW something is working and learning? Build neural bacteria from genetic algorithms. That's how life began on earth, with the basics, with simple organisms which weren't intelligent, but their bodies and genetics were! You can't just try to jump to the end of evolution when trying to recreate it.

User defined functions
(defun fname (arg1 arg2 ... argn)
  s-expr1
  s-expr2
  ...
  s-exprn)

In principle, Lisp doesn't believe in side effects. The core of Lisp is that you invoke a function that calls other functions, and answers pop back up until you get a final answer to present.
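A small function following the template above (the name and body are my own example, not from the lecture):

```lisp
;; Return the hypotenuse of a right triangle with legs a and b.
(defun hypotenuse (a b)
  (sqrt (+ (expt a 2) (expt b 2))))

(hypotenuse 3 4)  ; => 5.0
```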

/usr/bin/clisp on ra-clay. Play around with it.
Edit filename.lisp
(load "filename")
Run/test your functions
Loop back to step 1.

That should be enough to get started on Lisp.

Tuesday, July 18, 2006

Introduction to AI

People aren't really very smart anyway.

Books:
Artificial Intelligence: A Modern Approach.

Tutorials in lisp run weeks 3 - 9.
Functional Programming.

As a field, it is an investigation of intelligence.
Applying computers to model/implement such intelligence.
Founded by Alan Turing in the 1950s with "Computing Machinery and Intelligence".
The early founders were optimistic about when it would come about though.

Going to the moon was in itself not really very productive, but the spin-offs that came from it, communication, miniaturisation, were very practical.

That's why society has a genuine interest in AI research. Machine translation is one such spin-off; data mining comes out of machine learning. All pretty useful. AI for game programming too.

But what on earth is intelligence!?
Humans are our current best example of intelligence in the universe. But we don't get things right ALL the time.

Intelligent agents have to be able to learn

Cognitive Science is an interrelated field.
Creativity.... is this artificially possible?

What is intelligence? We don't really know; it's as contentious as it was in Aristotle's time.

My question is, if we don't even know what intelligence is, how can we even hope to create an artificial intelligence? Are we hoping to stumble upon it somewhere along the way?