Quantifying Uncertainty

Artificial Intelligence Chapter 12

Date: 23.04.10 ~ 23.04.16

Writer: 9tailwolf


Introduction


In this chapter, I learn about uncertainty. Uncertainty arises from partial observability, nondeterminism, or a combination of the two. When uncertainty exists, the agent has no way to know its exact current state or the state that will result from an action.


Rational Decision


In decision making, utility is not the only element of the estimating function; probability is also an element. Therefore, a decision consists of both utility and probability.

algorithm

def DT_agent(percept, belief_state, last_action):
    # Update the belief state given the previous action and the new percept.
    belief_state = update_belief(belief_state, last_action, percept)
    # Estimate an expected utility for each candidate action by combining
    # outcome probabilities with utilities (helper functions left abstract).
    expected_utility = {a: expected_value(belief_state, a)
                        for a in candidate_actions(belief_state)}
    # Choose the action with the highest expected utility.
    return max(expected_utility, key=expected_utility.get)


Basic Probability Theory


In probability theory, there is a sample space, the set of all possible worlds, written \(\Omega\); one possible world is written \(w\). Then we can state the following: \(\forall{w}\), \(0 \leq P(w) \leq 1\), \(w \in \Omega\), and \(\Sigma_{w\in\Omega}P(w) = 1\).
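As a concrete check, here is a minimal sketch in Python, assuming a fair six-sided die as the sample space (the die is an illustrative choice, not part of the text):

```python
# A discrete sample space for a fair six-sided die (illustrative assumption).
omega = {1, 2, 3, 4, 5, 6}
P = {w: 1 / 6 for w in omega}  # uniform probability over possible worlds

# Every world's probability lies in [0, 1] ...
assert all(0 <= P[w] <= 1 for w in omega)
# ... and the probabilities over the whole sample space sum to 1.
assert abs(sum(P.values()) - 1.0) < 1e-9
```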

Kolmogorov’s axioms

\(P(a \lor b) = P(a) + P(b) - P(a \land b)\)
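This inclusion-exclusion identity can be verified numerically; the sketch below again assumes a fair die, with the events \(a\) (roll is even) and \(b\) (roll is at least 4) chosen for illustration:

```python
# Checking P(a or b) = P(a) + P(b) - P(a and b) on a fair die.
omega = {1, 2, 3, 4, 5, 6}
prob = lambda event: sum(1 / 6 for w in omega if w in event)

a = {2, 4, 6}  # roll is even
b = {4, 5, 6}  # roll is at least 4

lhs = prob(a | b)                      # P(a or b), via set union
rhs = prob(a) + prob(b) - prob(a & b)  # inclusion-exclusion
assert abs(lhs - rhs) < 1e-9
```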

Bruno de Finetti

When an agent’s probabilities do not satisfy Kolmogorov’s axioms, an opponent can construct an unfair betting game: a set of bets the agent considers acceptable but that guarantees the agent a loss.
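A minimal numeric sketch of such a game, assuming an agent willing to sell, at its stated price, a ticket that pays 1 if a proposition holds (the prices 0.4 and 0.3, which violate the axioms because they should sum to 1, are illustrative):

```python
# The agent prices P(a) = 0.4 and P(not a) = 0.3, violating the axioms.
# An opponent buys both tickets: one pays 1 if a holds, the other if not-a.
price_a, price_not_a = 0.4, 0.3
stake = 1.0

premium_collected = price_a + price_not_a  # 0.7 paid to the agent
payout = stake                             # exactly one ticket pays off

agent_profit = premium_collected - payout  # guaranteed loss of 0.3
assert agent_profit < 0                    # the agent loses no matter what
```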

Probability in continuous variables

When the probability function is not discrete, \(P(a)\) can be expressed as a density: \(P(a) = \lim_{dx \rightarrow 0} \frac{P(a \leq x \leq a + dx)}{dx}\).
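A quick numeric check of this limit, assuming the exponential distribution with rate 1 as an example (its CDF \(1 - e^{-x}\) and density \(e^{-x}\) are standard facts, not from the text):

```python
import math

# Approximate the density at a via P(a <= x <= a + dx) / dx with a small dx,
# using the exponential distribution's CDF; compare against the known density.
cdf = lambda x: 1 - math.exp(-x)
pdf = lambda x: math.exp(-x)

a, dx = 0.5, 1e-6
approx_density = (cdf(a + dx) - cdf(a)) / dx

assert abs(approx_density - pdf(a)) < 1e-4
```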

Conditional Probability

\(P(a|b)\) is the probability of \(a\) given that \(b\) holds; by definition, \(P(a|b) = \frac{P(a,b)}{P(b)}\) when \(P(b) > 0\).
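The definition can be sketched on a tiny example, assuming two fair coin flips with \(a\) = "first flip is heads" and \(b\) = "at least one heads" (an illustrative setup):

```python
from itertools import product

# Four equally likely worlds: HH, HT, TH, TT.
omega = list(product("HT", repeat=2))
prob = lambda event: sum(1 / 4 for w in omega if event(w))

a = lambda w: w[0] == "H"  # first flip is heads
b = lambda w: "H" in w     # at least one heads

# P(a | b) = P(a, b) / P(b)
p_a_given_b = prob(lambda w: a(w) and b(w)) / prob(b)
assert abs(p_a_given_b - 2 / 3) < 1e-9  # 2 of the 3 worlds with b satisfy a
```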

Bayes’s Rule

Since \(P(a|b)P(b) = P(b|a)P(a)\), \(P(a|b) = \frac{P(b|a)P(a)}{P(b)}\).
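A worked sketch of Bayes’ rule on a diagnostic-test example (all numbers are illustrative assumptions, not from the text): a condition \(d\) with prior \(P(d) = 0.01\), and a test with \(P(pos|d) = 0.9\) and \(P(pos|\lnot d) = 0.05\).

```python
# Illustrative numbers: prior and test characteristics.
p_d = 0.01
p_pos_given_d = 0.9
p_pos_given_not_d = 0.05

# P(pos) by total probability, then P(d | pos) by Bayes' rule.
p_pos = p_pos_given_d * p_d + p_pos_given_not_d * (1 - p_d)
p_d_given_pos = p_pos_given_d * p_d / p_pos

# Roughly 0.154: even after a positive test, d remains unlikely.
assert 0.15 < p_d_given_pos < 0.16
```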

Normalization

We can determine \(P(a|b)\) by normalization with the constant \(\alpha = \frac{1}{P(b)}\): \(P(a|b) = \alpha P(a,b) = \alpha(P(a,b,c)+P(a,b,\lnot c))\), where the hidden variable \(c\) is summed out.
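The sketch below works through normalization on an assumed toy joint distribution over three Boolean variables \(a\), \(b\), \(c\) (the numbers are illustrative and constructed to sum to 1):

```python
# An assumed joint P(a, b, c); the eight entries sum to 1.
joint = {
    (True,  True,  True):  0.108, (True,  True,  False): 0.012,
    (True,  False, True):  0.072, (True,  False, False): 0.008,
    (False, True,  True):  0.016, (False, True,  False): 0.064,
    (False, False, True):  0.144, (False, False, False): 0.576,
}

# Unnormalized values: P(a, b) = P(a, b, c) + P(a, b, not c).
unnormalized = {
    a: sum(p for (av, bv, cv), p in joint.items() if av == a and bv)
    for a in (True, False)
}
alpha = 1 / sum(unnormalized.values())  # alpha = 1 / P(b), computed for free
p_a_given_b = {a: alpha * p for a, p in unnormalized.items()}

assert abs(sum(p_a_given_b.values()) - 1.0) < 1e-9
assert abs(p_a_given_b[True] - 0.6) < 1e-9
```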

Independence

When \(a\) and \(b\) are independent, \(P(a|b) = P(a)\) and \(P(b|a) = P(b)\).

When we apply this to conditional independence, i.e. when \(a\) and \(b\) are conditionally independent given \(c\) so that \(P(a|b,c) = P(a|c)\),

\[P(a,b|c) = \frac{P(a,b,c)}{P(c)} = \frac{P(a|b,c)P(b,c)}{P(c)} = \frac{P(a|b,c)P(b|c)P(c)}{P(c)} = P(a|c)P(b|c)\]
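The identity \(P(a,b|c) = P(a|c)P(b|c)\) can be checked numerically by constructing a joint distribution in which \(a\) and \(b\) are conditionally independent given \(c\) (the probabilities below are arbitrary illustrative choices):

```python
# Freely chosen P(c), P(a | c), P(b | c); the joint is defined as their
# product, which enforces conditional independence of a and b given c.
p_c = {True: 0.3, False: 0.7}
p_a_c = {True: 0.9, False: 0.2}  # P(a = true | c)
p_b_c = {True: 0.5, False: 0.6}  # P(b = true | c)

def joint(a, b, c):
    pa = p_a_c[c] if a else 1 - p_a_c[c]
    pb = p_b_c[c] if b else 1 - p_b_c[c]
    return pa * pb * p_c[c]

c = True
lhs = joint(True, True, c) / p_c[c]  # P(a, b | c)
rhs = p_a_c[c] * p_b_c[c]            # P(a | c) P(b | c)
assert abs(lhs - rhs) < 1e-9
```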