Spinning Up by OpenAI

Some notes from OpenAI Spinning Up

Kinds of RL Algorithms

A Taxonomy of RL Algorithms

Key Algorithms

Vanilla Policy Gradient

The key idea underlying policy gradients is to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to lower return, until you arrive at the optimal policy.

Quick Facts

  • on-policy

  • discrete or continuous action spaces

Key Equations

Let $\pi_\theta$ denote a policy with parameters $\theta$, and $J(\pi_\theta)$ denote the expected finite-horizon undiscounted return of the policy. The gradient of $J(\pi_\theta)$ is

$$\nabla_{\theta} J(\pi_{\theta}) = E_{\tau \sim \pi_{\theta}}\left[ \sum_{t=0}^{T} \nabla_{\theta} \log \pi_{\theta}(a_t|s_t) \, A^{\pi_{\theta}}(s_t,a_t) \right],$$

where $\tau$ is a trajectory and $A^{\pi_\theta}$ is the advantage function for the current policy.

The policy gradient algorithm works by updating policy parameters via stochastic gradient ascent on policy performance:

$$\theta_{k+1} = \theta_k + \alpha \nabla_{\theta} J(\pi_{\theta_k})$$
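
To make this concrete, here is a minimal sketch (not the Spinning Up reference code) of a loss whose gradient matches the sample estimate of the policy gradient above, assuming a PyTorch policy that exposes per-step log-probabilities `logp` and precomputed advantage estimates `adv`:

```python
import torch

def vpg_loss(logp, adv):
    """Loss whose gradient is (minus) the sample policy gradient.

    logp: log pi_theta(a_t|s_t) for each transition in the batch (requires grad).
    adv:  advantage estimates A^{pi_theta}(s_t, a_t), treated as constants.
    """
    # Negate because optimizers minimize, while we want gradient ascent on J.
    return -(logp * adv.detach()).mean()
```

Taking a step on this loss with any standard optimizer implements the stochastic gradient ascent update above.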

Exploration vs. Exploitation

VPG trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found. This may cause the policy to get trapped in local optima.

Pseudocode

VPG Algorithm

References

Trust Region Policy Optimization

TRPO updates policies by taking the largest step possible to improve performance, while satisfying a special constraint on how close the new and old policies are allowed to be. The constraint is expressed in terms of KL-Divergence, a measure of distance between probability distributions.

This is different from normal policy gradient, which keeps new and old policies close in parameter space. But even seemingly small differences in parameter space can have very large differences in performance -- so a single bad step can collapse the policy performance. This makes it dangerous to use large step sizes with vanilla policy gradients, which hurts their sample efficiency. TRPO nicely avoids this kind of collapse, and tends to quickly and monotonically improve performance.

Quick Facts

  • on-policy

  • discrete or continuous action spaces

Key Equations

Let $\pi_\theta$ denote a policy with parameters $\theta$. The theoretical TRPO update is:

$$\begin{aligned} \theta_{k+1} = \arg \max_{\theta} \; & {\mathcal L}(\theta_k, \theta) \\ \text{s.t.} \; & \bar{D}_{KL}(\theta || \theta_k) \leq \delta \end{aligned}$$

where $\mathcal{L}(\theta_k,\theta)$ is the surrogate advantage, a measure of how the policy $\pi_\theta$ performs relative to the old policy $\pi_{\theta_k}$, using data from the old policy:

$${\mathcal L}(\theta_k, \theta) = E_{s,a \sim \pi_{\theta_k}}\left[ \frac{\pi_{\theta}(a|s)}{\pi_{\theta_k}(a|s)} A^{\pi_{\theta_k}}(s,a) \right],$$

and $\bar{D}_{KL}(\theta || \theta_k)$ is an average KL-divergence between policies across states visited by the old policy:

$$\bar{D}_{KL}(\theta || \theta_k) = E_{s \sim \pi_{\theta_k}}\left[ D_{KL}\left(\pi_{\theta}(\cdot|s) \,||\, \pi_{\theta_k} (\cdot|s) \right)\right].$$
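
As a rough illustration (hypothetical tensor names, not the Spinning Up implementation), both quantities can be estimated from a batch collected under the old policy, given the stored old log-probabilities and advantage estimates:

```python
import torch

def surrogate_advantage(logp_new, logp_old, adv):
    # Importance ratio pi_theta(a|s) / pi_theta_k(a|s), computed in log space.
    ratio = torch.exp(logp_new - logp_old)
    return (ratio * adv).mean()

def mean_kl(dist_new, dist_old):
    # Average KL over states visited by the old policy; dist_new and dist_old
    # are torch.distributions objects over actions at the sampled states.
    return torch.distributions.kl_divergence(dist_new, dist_old).mean()
```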

Notice that the objective and constraint are both zero when $\theta = \theta_k$. Furthermore, the gradient of the constraint with respect to $\theta$ is zero when $\theta = \theta_k$.

The theoretical TRPO update isn't the easiest to work with, so TRPO makes some approximations to get an answer quickly. We Taylor expand the objective and constraint to leading order around $\theta_k$:

$$\begin{aligned} {\mathcal L}(\theta_k, \theta) &\approx g^T (\theta - \theta_k) \\ \bar{D}_{KL}(\theta || \theta_k) &\approx \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k) \end{aligned}$$

resulting in an approximate optimization problem,

$$\begin{aligned} \theta_{k+1} = \arg \max_{\theta} \; & g^T (\theta - \theta_k) \\ \text{s.t.} \; & \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k) \leq \delta. \end{aligned}$$

Notice that the gradient $g$ of the surrogate advantage function with respect to $\theta$, evaluated at $\theta = \theta_k$, is exactly equal to the policy gradient, $\nabla_\theta J(\pi_\theta)$.

This approximate problem can be analytically solved by the methods of Lagrangian duality, yielding the solution:

$$\theta_{k+1} = \theta_k + \sqrt{\frac{2 \delta}{g^T H^{-1} g}}\, H^{-1} g.$$
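
To fill in the skipped step: stationarity of the Lagrangian gives the step direction, and making the constraint tight fixes the multiplier (a sketch of the derivation):

$$\begin{aligned} \nabla_\theta \Big[ g^T(\theta-\theta_k) - \lambda\Big(\tfrac{1}{2}(\theta-\theta_k)^T H (\theta-\theta_k) - \delta\Big) \Big] = 0 \;&\Longrightarrow\; \theta - \theta_k = \tfrac{1}{\lambda} H^{-1} g, \\ \tfrac{1}{2}(\theta-\theta_k)^T H (\theta-\theta_k) = \delta \;&\Longrightarrow\; \tfrac{1}{\lambda} = \sqrt{\tfrac{2\delta}{g^T H^{-1} g}}, \end{aligned}$$

which recovers the update above.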

If we were to stop here, and just use this final result, the algorithm would be exactly calculating the Natural Policy Gradient. A problem is that, due to the approximation errors introduced by the Taylor expansion, this may not satisfy the KL constraint, or actually improve the surrogate advantage. TRPO adds a modification to this update rule: a backtracking line search,

$$\theta_{k+1} = \theta_k + \alpha^j \sqrt{\frac{2 \delta}{g^T H^{-1} g}}\, H^{-1} g,$$

where $\alpha \in (0,1)$ is the backtracking coefficient, and $j$ is the smallest nonnegative integer such that $\pi_{\theta_{k+1}}$ satisfies the KL constraint and produces a positive surrogate advantage.
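
A minimal sketch of that backtracking loop, assuming hypothetical helpers `surrogate_fn` and `kl_fn` that evaluate the surrogate advantage and the average KL at a candidate parameter vector:

```python
def backtracking_line_search(theta_k, full_step, surrogate_fn, kl_fn,
                             delta, alpha=0.8, max_backtracks=10):
    """Shrink the natural-gradient step until the KL constraint holds and
    the surrogate advantage is positive."""
    for j in range(max_backtracks):
        theta_new = theta_k + (alpha ** j) * full_step
        if kl_fn(theta_new) <= delta and surrogate_fn(theta_new) > 0:
            return theta_new
    return theta_k  # no acceptable step found; keep the old parameters
```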

Lastly: computing and storing the matrix inverse, $H^{-1}$, is painfully expensive when dealing with neural network policies with thousands or millions of parameters. TRPO sidesteps the issue by using the conjugate gradient algorithm to solve $Hx = g$ for $x = H^{-1} g$, requiring only a function which can compute the matrix-vector product $Hx$ instead of computing and storing the whole matrix $H$ directly. This is not too hard to do: we set up a symbolic operation to calculate

$$Hx = \nabla_{\theta} \left( \left(\nabla_{\theta} \bar{D}_{KL}(\theta || \theta_k)\right)^T x \right),$$

which gives us the correct output without computing the whole matrix.
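
For instance, with PyTorch autograd the Hessian-vector product can be built by differentiating through the average KL twice, never forming $H$ explicitly (a sketch; `mean_kl` is assumed to be a scalar computed from the policy parameters `params`):

```python
import torch

def hessian_vector_product(mean_kl, params, x, damping=0.0):
    """Compute Hx, where H is the Hessian of the average KL w.r.t. params."""
    # First backward pass, kept in the graph so we can differentiate again.
    grads = torch.autograd.grad(mean_kl, params, create_graph=True)
    flat_grad = torch.cat([g.reshape(-1) for g in grads])
    # Differentiate (grad KL)^T x to obtain the Hessian-vector product.
    grad_dot_x = (flat_grad * x).sum()
    hvp = torch.autograd.grad(grad_dot_x, params)
    flat_hvp = torch.cat([h.reshape(-1) for h in hvp])
    return flat_hvp + damping * x  # small damping is often added for stability
```

Conjugate gradient then only needs repeated calls to such a function to approximately solve $Hx = g$.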

Exploration vs. Exploitation

TRPO trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found. This may cause the policy to get trapped in local optima.

Pseudocode

TRPO Algorithm

References

Proximal Policy Optimization

Quick Facts

Key Equations

Exploration vs. Exploitation

Pseudocode

References

Deep Deterministic Policy Gradient

Quick Facts

Key Equations

Exploration vs. Exploitation

Pseudocode

References

Twin Delayed DDPG

Quick Facts

Key Equations

Exploration vs. Exploitation

Pseudocode

References

Soft Actor-Critic

Quick Facts

Key Equations

Exploration vs. Exploitation

Pseudocode

References
