Wednesday, November 23, 2011

Stanford AI Class Midterm

Here are my notes on the Stanford AI midterm

Q.1

a) There exists (at least) one environment in which every agent is rational

I take this to mean one environment in which all agents, regardless of behavior, optimally achieve their goals. My first instinct is that this must be false: take two agents, set things up so that the first doesn't move and the second does, and at most one of them can be behaving optimally. But if we set it up so that every action incurs no cost and every agent is rewarded maximally independent of its behavior, then every agent is rational in that environment.

so I am marking both of the first two statements true, although I have a strong feeling the instructors want the first one answered false, but I haven't come up with the necessary counterexample

Regarding the 15-puzzle:

a lookup reflex agent will need a table of moves for all 16! positions the puzzle can be in, i.e. about 20 trillion, while an agent that searches looks at the starting state and considers all the possible moves branching out from there. A depth-first search agent only stores a single route at a time, discarding it as it backtracks, so it takes up very little space.

There is an upper bound of 4 moves in each state, so the max branching factor of the tree is 4; with a solution d moves deep the tree can have up to 4^d nodes (not d^4 as I first wrote), so the full tree isn't necessarily smaller than the table - but a depth-first searcher only ever holds one path of length d in memory.

So I'm probably not reasoning soundly here, but the answer still feels right, i.e. that the lookup table will be larger in terms of memory, but the agent that searches will not do any better ...
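
A quick size comparison in Octave (the 80-move worst case is the published bound for the 15-puzzle, not something from the lectures):

lookup_entries = factorial(16)   % = 2.0923e+13, about 20 trillion table entries
d = 80;                          % worst-case solution depth for the 15-puzzle
tree_nodes = 4^d                 % upper bound on tree size, astronomically large
dfs_memory = d                   % but DFS only ever stores one path of length d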



Q.2

A*

always expand the node with the minimum value of the function f = g+h
g(path) = path cost
h(path) = h(state) = estimated distance to the goal

layer 2 is labelled a1-5 from left to right and layer 3 is labelled b1-6 also from left to right

f(top-node) = 0+15 Expanded 1st

f(a1) = 10+11 = 21
f(a2) = 10+8 = 18 Expanded 4th
f(a3) = 10+7 = 17 Expanded 3rd
f(a4) = 10+6 = 16 Expanded 2nd
f(a5) = 10+10 = 20 Expanded 5th

f(b4) = 20+5 = 25
f(b5) = 20+20 = 40

f(b2) = 20+3 = 23
f(b3) = 20+9 = 29

f(b6) = 20 + 0 = 20 Expanded 6th

My checking seems to match my initial guess

Admissible --> h(s) <= true cost, i.e. h never overestimates the distance to the goal

and I think this heuristic is not admissible, since h(b5) = 20 when b5 is only 10 away from the goal. One might also question whether the other b nodes can actually reach the goal node, but they are all underestimates under any assumptions I can think of.
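
As a sanity check, here's a quick Octave sketch of the frontier ordering for the a-layer, using the g and h values above (just the ordering, not a full A* run):

g = 10;                          % cost from the root to each a-node
h = [11 8 7 6 10];               % h(a1..a5) as read off the tree
f = g + h                        % f = [21 18 17 16 20]
[fs, order] = sort(f)            % order = [4 3 2 5 1]: a4, a3, a2, a5, then a1
h_b5 = 20; true_b5 = 10;         % h overestimates at b5 ...
admissible = h_b5 <= true_b5     % ... = 0, so the heuristic is not admissible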



Q.3

P(H) = 0.3
P(F) = 0.7

Q.4

unfair coin
P(HH) = 0.04 = P(H) * P(H) --> P(H) = 0.2
--> P(T) = 0.8
--> P(TT) = 0.64
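
Trivial, but here's the check in Octave:

p_h = sqrt(0.04)       % P(HH) = P(H)^2 = 0.04, so P(H) = 0.2
p_t = 1 - p_h          % P(T) = 0.8
p_tt = p_t^2           % P(TT) = 0.64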



Q.5

P1(H) = 0.5
P2(H) = 1

pick a coin at random and see a head, what is P(P2|H)

So I think this should be Bayes rule

P(P2|H) = P(H|P2)P(P2)/P(H) where P(H) = P(H|P2)P(P2) + P(H|P1)P(P1)
= 1*0.5/[1*0.5+ 0.5*0.5]
= 0.5/[0.5+0.25]
= 50/75 = 2/3

The sense of multiple worlds here is very strong. There is a world where we chose the fair coin and a world where we chose the loaded coin. There are two possible coin-flip results in the fair world, and only one in the loaded world. The evidence (a head) says we are in one of two of these three possible states --> but the loaded world as a whole must be equally likely to the fair world, so we are twice as likely to be in the loaded world as the fair world ...

now if we see two heads in a row, what is P(P2|HH)

P(P2|HH) = P(HH|P2)P(P2)/P(HH) where P(HH) = P(HH|P2)P(P2) + P(HH|P1)P(P1)
= 1*0.5/[1*0.5+ 0.25*0.5]
= 0.5/[0.5+0.125]
= 0.5/0.625 = 4/5 = 0.8

Now there is still only one possible loaded-world outcome, but in the fair world there are four possible two-flip sequences we might have seen, of which we saw exactly one. Being in the loaded world is now four times as likely as being in the fair world.

Not sure if I was supposed to be drawing a Bayes net here, or if that would help and/or change the answer ...
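
The pattern generalizes: after n heads the fair world keeps only (1/2)^n of its prior weight. A quick Octave check of both answers:

p_fair = 0.5; p_loaded = 0.5;             % prior over which coin we picked
for n = [1 2]
  likelihood_fair   = 0.5^n;              % P(n heads | fair coin)
  likelihood_loaded = 1^n;                % P(n heads | loaded coin)
  posterior = likelihood_loaded * p_loaded / ...
      (likelihood_loaded * p_loaded + likelihood_fair * p_fair)
end
% posterior = 0.66667 for n = 1, and 0.8 for n = 2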


Q.6

very confused now - need to rewatch everything on conditional independence

The 3.34 D-separation series is the key here

"Any two variables are independent if they are not linked by just unknown variables"

I re-looked at all the quizzes and homeworks, and changed my initial answer slightly to indicate that A is not conditionally independent of C given G, because G tells us about E, and E links A and C. (I originally wrote "E links A and B" here - an error, since the question is about A and C.)

I am still on the fence about whether A should be conditionally independent of B given F. In 3.36, F was independent of A given knowledge of H: H told us about a child variable that linked A and F, but it didn't tell us about the parent of that child variable.

Q.7

So the Hmwk 2.3 explanation is my friend here:


P(C) = P(C|A)P(A) + P(C|!A)P(!A)
P(C) = 0.8*0.5 + 0.4*0.5 = 0.6

P(B) = P(B|A)P(A) + P(B|!A)P(!A)
P(B) = 0.2*0.5 + 0.2*0.5 = 0.2

P(B|C) = P(B|C,A)P(A|C) + P(B|C,!A)P(!A|C)
P(B|C) = P(B|A)P(C|A)P(A)/P(C) + P(B|!A)P(C|!A)P(!A)/P(C)
P(B|C) = 0.2*0.8*0.5/0.6 + 0.2*0.4*0.5/0.6 = 0.2

(using the fact that B and C are conditionally independent given A, so P(B|C,A) = P(B|A))

P(C|B) = P(C|B,A)P(A|B) + P(C|B,!A)P(!A|B)
P(C|B) = P(C|A)P(B|A)P(A)/P(B) + P(C|!A)P(B|!A)P(!A)/P(B)
P(C|B) = 0.8*0.2*0.5/0.2 + 0.4*0.2*0.5/0.2 = 0.6

again, unsure whether I've made a calculation error, but it looks like P(B|C) = P(B) = 0.2 and P(C|B) = P(C) = 0.6, i.e. knowledge of one does not help with the other
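
As a cross-check, a brute-force Octave version that builds the joint P(B,C) by summing over A, using the same numbers (and the same assumption that B and C are conditionally independent given A):

p_a = 0.5;                          % P(A)
p_b_a = [0.2 0.2];                  % P(B|A), P(B|!A)
p_c_a = [0.8 0.4];                  % P(C|A), P(C|!A)
p_bc = zeros(2,2);                  % p_bc(b+1,c+1) = P(B=b, C=c)
for a = 1:2                         % 1 = A, 2 = !A
  pa = (a == 1) * p_a + (a == 2) * (1 - p_a);
  for b = 0:1
    pb = b * p_b_a(a) + (1-b) * (1 - p_b_a(a));
    for c = 0:1
      pc = c * p_c_a(a) + (1-c) * (1 - p_c_a(a));
      p_bc(b+1,c+1) += pa * pb * pc;
    end
  end
end
p_b_given_c = p_bc(2,2) / sum(p_bc(:,2))   % P(B|C) = 0.2 = P(B)
p_c_given_b = p_bc(2,2) / sum(p_bc(2,:))   % P(C|B) = 0.6 = P(C)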


Q.8

so my intuition for P(OLD) was actually the maximum-likelihood answer

I redid my spreadsheet for this, eventually getting to what I hope is the right answer. The fractions work out reasonably, which suggests it might be right - that matches what I saw Thrun doing in Hmwk 3.2a and b. It feels like I should be able to do this with just pencil and paper, but I don't trust myself not to make minor mistakes, which is why I put it in the spreadsheet - although I still worry I will have spreadsheet errors ...

(Error here: I used 4/7 where it should have been 5/7.)

Q.9

I rewatched the k-nearest-neighbor 5.33 videos, but my initial guess of 7 still seems correct - the suggestion about ties just seems confusing, since ties were never mentioned in the lecture and superficially all the points are at different distances from the query point
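
For reference, the k-NN vote itself is only a few lines of Octave (the points below are made up for illustration, not the ones in the question):

% made-up labelled points and query point - not the ones from the question
pts = [1 1; 2 2; 3 1; 6 5; 7 6; 8 5; 2 6];
cls = [0; 0; 0; 1; 1; 1; 0];
q = [4 3]; k = 7;                          % vote over the k nearest neighbours
d = sqrt(sum((pts - repmat(q, size(pts,1), 1)).^2, 2));
[ds, idx] = sort(d);                       % nearest first; ties sort arbitrarily
label = mean(cls(idx(1:k))) >= 0.5         % majority vote among the k nearest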


Q.10

The first time through I just drew a graph and guessed the numbers ... how about doing the linear regression in Octave?

octave-3.4.0:2> A = [1,2;3,5.2;4,6.8;5,8.4;9,14.8]
octave-3.4.0:5> X = A(:,1)
octave-3.4.0:6> Y = A(:,2)
octave-3.4.0:10> sum((Y-X).^2)
ans = 58.880

gives me the sum of squared differences between Y and X, but I'm not sure what to do next ...

Going back to 5.36 in the lectures I see Thrun deriving

w_0 = 1/M * sum(y) - w_1/M * sum(x)

and

w_1 = (M * sum(x*y) - sum(x)*sum(y)) / (M * sum(x.^2) - (sum(x))^2)

and I didn't trust myself to derive these correctly

so I can vectorize these:

M = size(X)(1)                                                   % number of data points
w_1 = (M * X'*Y - sum(X)*sum(Y)) / (M * sum(X.^2) - (sum(X))^2)
w_0 = (1/M) * sum(Y) - (w_1/M) * sum(X)                          % needs w_1 first

gives
w_1 = 1.6 (I guessed 0.642)
w_0 = 0.4 (I guessed 1)

Plotting those I get an exact match, suggesting I could have intuited this without the math if I had drawn it precisely ... :-) my first guess based on the hand plot was well off ... :-)

plot(X,Y)
hold on
XT = [0;1;2;3;4;5;6;7;8;9]
YT = w_0 + w_1 * XT
plot(XT,YT,'r')
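
Octave's backslash operator also solves the least-squares problem directly, which makes a nice cross-check on the derived formulas:

w = [ones(size(X)) X] \ Y   % least-squares fit of Y = w(1) + w(2)*X
                            % gives w(1) = 0.4 and w(2) = 1.6, matching w_0 and w_1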

still pondering the relation between NNs, linear regression and statistical significance


Q.11

my initial intuition appears wrong after watching 6.5 on k-means clustering, which suggests the initial division of the space is done by drawing the line of equidistance between C1 and C2, indicating that only one point will get associated with C1
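
The first assignment step is just nearest-centroid, e.g. (centroids and points invented for illustration - not the actual ones from the question):

C = [0 0; 4 0];                    % hypothetical centroids C1 and C2
P = [1 1; 3 0; 3 1; 5 1];          % hypothetical data points
for i = 1:size(P,1)
  d1 = norm(P(i,:) - C(1,:));      % distance to C1
  d2 = norm(P(i,:) - C(2,:));      % distance to C2
  assign(i) = 1 + (d2 < d1);       % 1 = closer to C1, 2 = closer to C2
end
assign                             % = [1 2 2 2]: only one point lands with C1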

Q.12

so this should be straightforward

Valid --> always true
Satisfiable --> true or false
Unsatisfiable --> always false

My initial guess on (A&&!A)->(B->C) appears wrong as I, as usual, incorrectly remembered the logical form of implication - since A&&!A is always false, the implication is vacuously true whatever B and C are, so the formula is valid

The truth table appears to confirm my intuition that (A->B)&&(B->C)&&(C->A) is satisfiable (e.g. it is true when A, B and C are all true, and false when A is true and B is false) - my intuition was based on finding cases where it would be false and cases where it would be true

My intuition about the equivalence of (A->B)&&(B->C) and A->C is disproved by the truth table: if A is true, B is false and C is true, then A->B is false while A->C is true - a counterexample
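
All three of these are quick to verify by enumerating the eight assignments in Octave:

imp = @(p,q) ~p | q;                       % material implication
f1 = []; f2 = []; f3 = [];
for A = 0:1
  for B = 0:1
    for C = 0:1
      f1(end+1) = imp(A & ~A, imp(B, C));  % (A && !A) -> (B -> C)
      f2(end+1) = imp(A,B) & imp(B,C) & imp(C,A);
      f3(end+1) = (imp(A,B) & imp(B,C)) == imp(A,C);
    end
  end
end
all(f1)    % 1: valid - the antecedent A && !A is always false
any(f2)    % 1: satisfiable (e.g. A = B = C = true)
all(f3)    % 0: not equivalent - A=1, B=0, C=1 is the counterexample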


Q.13

seemed possible to answer this from common sense rather than applying any particular knowledge from the course

my second pass on it including review of the clarifications doesn't throw up anything of concern ...


Q.14

I could spin my wheels here; the answer from value iteration might well be my best guess, which is to subtract five going back into each of the states

actually the explanation in the 9.18 quiz gives the solution for the deterministic case, which gives me higher confidence here.
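
A minimal sketch of the deterministic backup, assuming a made-up four-state corridor with the goal worth 100 and a move cost of 5 (not the actual grid from the question):

V = [0 0 0 100];          % three hypothetical states plus a terminal goal of 100
for iter = 1:10           % repeat the deterministic Bellman backup to convergence
  for s = 3:-1:1
    V(s) = V(s+1) - 5;    % best action moves toward the goal; subtract move cost
  end
end
V                         % = [85 90 95 100]: subtract five going back each state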


Q.15

mental mistake - I thought Laplace smoothing would involve +4 on the denominator since there are four possible transition types in total, but it is only +2 because from any single state there are just two possible next states in a two-state model

Also I had the counts grouped wrong initially - they need to be grouped by the state we are coming from

AAAAB

F(A|A) = 3
F(B|A) = 1
F(A|B) = 0
F(B|B) = 0


P(A_0) = (1+1)/(1+2) = 2/3
P(A|A) = (3+1)/(4+2) = 2/3
P(A|B) = (0+1)/(0+2) = 1/2
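
A quick Octave check of the smoothed estimates from the observed sequence AAAAB:

seq = 'AAAAB';
from_a = seq(1:end-1) == 'A';      % transitions that start in state A
to_a   = seq(2:end)   == 'A';      % transitions that end in state A
n_aa = sum(from_a & to_a);         % count of A -> A, = 3
n_a  = sum(from_a);                % transitions out of A, = 4
p_a0  = (1 + 1) / (1 + 2)          % one sequence, starting with A: 2/3
p_a_a = (n_aa + 1) / (n_a + 2)     % P(A|A) = 4/6 = 2/3
p_a_b = (0 + 1) / (0 + 2)          % P(A|B) = 1/2 (no transitions out of B seen)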
