# Policy iteration

## Policy iteration

In the last chapter we discussed actor-critic algorithms, but what if we use only the critic (the value function), without an actor (a policy)? We can do this by extracting a policy directly from the value function.

Firstly, $$A^\pi(s\_t,a\_t)$$ measures how much better $$a\_t$$ is than the average action under $$\pi$$, so $$\arg \max\_{a\_t}A^\pi(s\_t,a\_t)$$ is the best action to take from $$s\_t$$ if we then follow $$\pi$$, and it can serve as a substitute for the policy $$\pi(a\_t|s\_t)$$. Notice that the "policy" obtained by this max trick is at least as good as the original policy, regardless of what $$\pi(a\_t|s\_t)$$ is.

So forget explicit policies; let's just use the max trick:

$$
\pi'(a\_t|s\_t)=
\begin{cases}
1&\text{if }a\_t=\arg \max\_{a\_t}A^\pi(s\_t,a\_t)\\
0 &\text{otherwise}
\end{cases}
$$
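
As a concrete illustration, here is a minimal tabular sketch of this max trick. It assumes a hypothetical NumPy array `A` of shape `(num_states, num_actions)` holding $$A^\pi(s,a)$$; the names and layout are illustrative conventions, not anything defined in the text.

```python
import numpy as np

def greedy_policy_from_advantage(A):
    """Return a deterministic policy pi'(a|s): probability 1 on
    argmax_a A(s, a) and 0 on every other action."""
    num_states, num_actions = A.shape
    pi_new = np.zeros((num_states, num_actions))
    best_actions = np.argmax(A, axis=1)               # argmax_a A(s, a) for each state s
    pi_new[np.arange(num_states), best_actions] = 1.0
    return pi_new
```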

![Policy Iteration](https://4133958719-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LigLKy0c06y4iTEtrkI%2F-Lo5SZswCzdKfaAyVDZK%2F-Lo5S_qJuuY_pQatTL5h%2F1567770415812.png?generation=1567771484909498\&alt=media)

At a high level, this gives us the policy iteration algorithm:

> Policy Iteration:
>
> repeat until convergence:
>
> 1. evaluate $$A^\pi(s\_t,a\_t)$$
> 2. set $$\pi\leftarrow \pi'$$

As before, $$A^\pi(s,a)=r(s,a)+\gamma \mathbb{E}\_{s'\sim p(s'|s,a)}\[V^\pi(s')]-V^\pi(s)$$. So now the key problem is to evaluate $$V^\pi(s)$$.
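
To make this concrete, here is a hedged tabular sketch of the advantage computation. It assumes hypothetical arrays `r` of shape `(S, A)` for $$r(s,a)$$, `P` of shape `(S, A, S)` indexing $$p(s'|s,a)$$ as `P[s, a, s']`, and `V` of shape `(S,)`; these are illustrative conventions rather than anything fixed by the text.

```python
def advantage(r, P, V, gamma):
    """A^pi(s, a) = r(s, a) + gamma * E_{s'~p(s'|s,a)}[V^pi(s')] - V^pi(s)."""
    expected_next_value = P @ V            # (S, A): sum_{s'} p(s'|s,a) * V(s')
    return r + gamma * expected_next_value - V[:, None]
```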

## Dynamic programming

First of all, some basic assumptions: suppose we know the dynamics $$p(s'|s,a)$$, and that $$s$$ and $$a$$ are both discrete (and small). For example:

![Dynamic Programming](https://4133958719-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LigLKy0c06y4iTEtrkI%2F-Lo5SZswCzdKfaAyVDZK%2F-Lo5S_qLcMbyB1bGh2BF%2F1567770828100.png?generation=1567771484967557\&alt=media)

There are 16 states and 4 actions per state, so we can store the full $$V^\pi(s)$$ in a table; the transition operator $$\mathcal{T}$$ is a $$16\times16\times4$$ tensor.
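
For a grid world of this size, the tabular quantities are small enough to instantiate directly. A hypothetical setup (shapes only, with placeholder values, and using the `P[s, a, s']` layout from the sketch above rather than the $$16\times16\times4$$ ordering in the text) might look like:

```python
num_states, num_actions = 16, 4
r = np.zeros((num_states, num_actions))        # r(s, a), to be filled in from the task
P = np.ones((num_states, num_actions, num_states)) / num_states  # p(s'|s,a), placeholder uniform dynamics
V = np.zeros(num_states)                       # tabular V^pi(s)
```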

We can use a bootstrapped update:

$$
V^\pi(s)\leftarrow \mathbb{E}\_{a\sim\pi(a|s)}\left\[r(s,a)+\gamma\mathbb{E}\_{s'\sim p(s'|s,a)}\[V^\pi(s')]\right]
$$
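
A minimal sketch of this bootstrapped evaluation, continuing the hypothetical `r`, `P` convention above and representing a stochastic tabular policy as an array `pi` of shape `(S, A)`:

```python
def evaluate_policy(pi, r, P, gamma, num_iters=1000, tol=1e-8):
    """Iterate V(s) <- E_{a~pi}[ r(s,a) + gamma * E_{s'}[V(s')] ] until it stops changing."""
    V = np.zeros(r.shape[0])
    for _ in range(num_iters):
        Q = r + gamma * (P @ V)                # (S, A) backed-up values
        V_new = np.sum(pi * Q, axis=1)         # expectation over a ~ pi(a|s)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```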

Since we use a deterministic policy $$\pi(s)=a$$, the update can be simplified further:

$$
V^\pi(s)\leftarrow r(s,\pi(s))+\gamma\mathbb{E}\_{s'\sim p(s'|s,\pi(s))}\[V^\pi(s')]
$$
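
With a deterministic policy stored as a hypothetical integer array `pi_det` of shape `(S,)` (action index per state), the expectation over actions drops out and the sketch above simplifies to:

```python
def evaluate_deterministic_policy(pi_det, r, P, gamma, num_iters=1000, tol=1e-8):
    """Iterate V(s) <- r(s, pi(s)) + gamma * E_{s'~p(s'|s,pi(s))}[V(s')]."""
    states = np.arange(r.shape[0])
    V = np.zeros(r.shape[0])
    for _ in range(num_iters):
        V_new = r[states, pi_det] + gamma * (P[states, pi_det] @ V)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new
    return V
```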

![Simplified policy iteration](https://4133958719-files.gitbook.io/~/files/v0/b/gitbook-legacy-files/o/assets%2F-LigLKy0c06y4iTEtrkI%2F-Lo5SZswCzdKfaAyVDZK%2F-Lo5S_qNHmYa2g4tubzx%2F1567771344789.png?generation=1567771484981000\&alt=media)

> Simplified Policy Iteration:
>
> repeat until convergence:
>
> 1. evaluate $$V^\pi(s)$$
> 2. set $$\pi\leftarrow \pi'$$
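
Putting the pieces together, a hedged sketch of the full loop, reusing the hypothetical `evaluate_deterministic_policy` helper and the `r`, `P` conventions from the sketches above (this is an illustration of the scheme, not a reference implementation):

```python
def policy_iteration(r, P, gamma, num_outer_iters=100):
    """Alternate policy evaluation and greedy improvement until the policy stops changing."""
    num_states, num_actions = r.shape
    pi_det = np.zeros(num_states, dtype=int)       # start from an arbitrary deterministic policy
    for _ in range(num_outer_iters):
        V = evaluate_deterministic_policy(pi_det, r, P, gamma)   # step 1: evaluate V^pi
        A = r + gamma * (P @ V) - V[:, None]                     # advantage A^pi(s, a)
        pi_new = np.argmax(A, axis=1)                            # step 2: pi <- pi'
        if np.array_equal(pi_new, pi_det):
            break
        pi_det = pi_new
    return pi_det, V
```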

