Spinning Up by OpenAI
Some notes from OpenAI Spinning Up.
The key idea underlying policy gradients is to push up the probabilities of actions that lead to higher return, and push down the probabilities of actions that lead to lower return, until you arrive at the optimal policy.
VPG is an on-policy algorithm.
VPG can be used for environments with either discrete or continuous action spaces.
Let $\pi_\theta$ denote a policy with parameters $\theta$, and $J(\pi_\theta)$ denote the expected finite-horizon undiscounted return of the policy. The gradient of $J(\pi_\theta)$ is

$$\nabla_\theta J(\pi_\theta) = \underset{\tau \sim \pi_\theta}{\mathbb{E}}\left[ \sum_{t=0}^{T} \nabla_\theta \log \pi_\theta(a_t | s_t) \, A^{\pi_\theta}(s_t, a_t) \right],$$

where $\tau$ is a trajectory and $A^{\pi_\theta}$ is the advantage function for the current policy.
The policy gradient algorithm works by updating policy parameters via stochastic gradient ascent on policy performance:

$$\theta_{k+1} = \theta_k + \alpha \nabla_\theta J(\pi_{\theta_k}).$$
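As a rough illustration, here is a minimal sketch of one such update for a categorical policy in PyTorch. The names `policy_net`, `optimizer`, and the batch tensors `obs`, `acts`, `advs` are assumptions for this sketch, not part of the Spinning Up code.

```python
import torch

def vpg_update(policy_net, optimizer, obs, acts, advs):
    """One stochastic gradient ascent step on E[log pi(a|s) * A(s,a)]."""
    logits = policy_net(obs)                                    # (batch, n_actions)
    logp = torch.distributions.Categorical(logits=logits).log_prob(acts)
    # Negate because optimizers minimize, and we want ascent on J.
    loss = -(logp * advs).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```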
VPG trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found. This may cause the policy to get trapped in local optima.
Policy Gradient Methods for Reinforcement Learning with Function Approximation, Sutton et al. 2000
timeless classic of RL theory
contains references to the earlier work which led to modern policy gradients.
Optimizing Expectations: From Deep Reinforcement Learning to Stochastic Computation Graphs, Schulman 2016(a)
chapter 2 contains a lucid introduction to the theory of policy gradient algorithms, including pseudocode.
Benchmarking Deep Reinforcement Learning for Continuous Control, Duan et al. 2016
benchmark paper that shows how vanilla policy gradient in the deep RL setting (e.g., with neural network policies and Adam as the optimizer) compares with other deep RL algorithms.
High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016(b)
the implementation of VPG makes use of Generalized Advantage Estimation for computing the policy gradient; a minimal sketch of GAE is given below.
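For reference, a minimal sketch of GAE-$\lambda$ advantage estimation over a single trajectory, assuming NumPy arrays of per-timestep rewards and value estimates (the function name and default coefficients are illustrative):

```python
import numpy as np

def gae_advantages(rewards, values, last_value, gamma=0.99, lam=0.95):
    """A_t = sum_{l>=0} (gamma*lam)^l * delta_{t+l}, with delta_t = r_t + gamma*V(s_{t+1}) - V(s_t)."""
    values = np.append(values, last_value)      # bootstrap value for the final state
    advs = np.zeros(len(rewards))
    gae = 0.0
    for t in reversed(range(len(rewards))):
        delta = rewards[t] + gamma * values[t + 1] - values[t]
        gae = delta + gamma * lam * gae
        advs[t] = gae
    return advs
```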
TRPO updates policies by taking the largest step possible to improve performance, while satisfying a special constraint on how close the new and old policies are allowed to be. The constraint is expressed in terms of KL-divergence, which is (roughly) a measure of distance between probability distributions.
This is different from normal policy gradient, which keeps new and old policies close in parameter space. But even seemingly small differences in parameter space can produce very large differences in performance, so a single bad step can collapse the policy's performance. This makes it dangerous to use large step sizes with vanilla policy gradient, which in turn hurts its sample efficiency. TRPO nicely avoids this kind of collapse, and tends to quickly and monotonically improve performance.
TRPO is an on-policy algorithm.
TRPO can be used for environments with either discrete or continuous action spaces.
Let $\pi_\theta$ denote a policy with parameters $\theta$. The theoretical TRPO update is:

$$\theta_{k+1} = \arg\max_{\theta} \; \mathcal{L}(\theta_k, \theta) \quad \text{s.t.} \quad \bar{D}_{KL}(\theta \,\|\, \theta_k) \le \delta,$$
where $\mathcal{L}(\theta_k, \theta)$ is the surrogate advantage, a measure of how the policy $\pi_\theta$ performs relative to the old policy $\pi_{\theta_k}$ using data from the old policy:

$$\mathcal{L}(\theta_k, \theta) = \underset{s, a \sim \pi_{\theta_k}}{\mathbb{E}}\left[ \frac{\pi_\theta(a|s)}{\pi_{\theta_k}(a|s)} A^{\pi_{\theta_k}}(s, a) \right],$$
and $\bar{D}_{KL}(\theta \,\|\, \theta_k)$ is an average KL-divergence between policies across states visited by the old policy:

$$\bar{D}_{KL}(\theta \,\|\, \theta_k) = \underset{s \sim \pi_{\theta_k}}{\mathbb{E}}\left[ D_{KL}\big( \pi_\theta(\cdot|s) \,\|\, \pi_{\theta_k}(\cdot|s) \big) \right].$$
Notice that the objective and constraint are both zero when $\theta = \theta_k$. Furthermore, the gradient of the constraint with respect to $\theta$ is zero when $\theta = \theta_k$.
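As an illustration, here is a minimal sketch of how the surrogate advantage and average KL could be estimated from a batch of old-policy data for a categorical PyTorch policy. All names here (`policy_net`, `old_logp`, `old_logits`, etc.) are assumptions for this sketch:

```python
import torch

def surrogate_and_kl(policy_net, obs, acts, advs, old_logp, old_logits):
    """Estimate L(theta_k, theta) and the average KL from old-policy data."""
    dist = torch.distributions.Categorical(logits=policy_net(obs))
    ratio = torch.exp(dist.log_prob(acts) - old_logp)   # pi_theta(a|s) / pi_theta_k(a|s)
    surrogate = (ratio * advs).mean()                    # sample estimate of L(theta_k, theta)
    old_dist = torch.distributions.Categorical(logits=old_logits)
    # Average KL over states, matching E_s[ KL(pi_theta || pi_theta_k) ] above.
    avg_kl = torch.distributions.kl_divergence(dist, old_dist).mean()
    return surrogate, avg_kl
```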
The theoretical TRPO update isn't the easiest to work with, so TRPO makes some approximations to get an answer quickly. We Taylor expand the objective and constraint to leading order around $\theta_k$:

$$\mathcal{L}(\theta_k, \theta) \approx g^T (\theta - \theta_k), \qquad \bar{D}_{KL}(\theta \,\|\, \theta_k) \approx \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k),$$
resulting in an approximate optimization problem,

$$\theta_{k+1} = \arg\max_{\theta} \; g^T (\theta - \theta_k) \quad \text{s.t.} \quad \frac{1}{2} (\theta - \theta_k)^T H (\theta - \theta_k) \le \delta.$$
Notice that the gradient $g$ of the surrogate advantage function with respect to $\theta$, evaluated at $\theta = \theta_k$, is exactly equal to the policy gradient, $\nabla_\theta J(\pi_\theta)$.
This approximate problem can be analytically solved by the methods of Lagrangian duality, yielding the solution:

$$\theta_{k+1} = \theta_k + \sqrt{\frac{2\delta}{g^T H^{-1} g}} \, H^{-1} g.$$
If we were to stop here, and just use this final result, the algorithm would be exactly calculating the Natural Policy Gradient. A problem is that, due to the approximation errors introduced by the Taylor expansion, this may not satisfy the KL constraint, or actually improve the surrogate advantage. TRPO adds a modification to this update rule: a backtracking line search,

$$\theta_{k+1} = \theta_k + \alpha^j \sqrt{\frac{2\delta}{g^T H^{-1} g}} \, H^{-1} g,$$
where $\alpha \in (0, 1)$ is the backtracking coefficient, and $j$ is the smallest nonnegative integer such that $\pi_{\theta_{k+1}}$ satisfies the KL constraint and produces a positive surrogate advantage.
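A minimal sketch of this step on flat parameter vectors, assuming `g` (the policy gradient), `H_inv_g` (an estimate of $H^{-1} g$), and a hypothetical helper `eval_surrogate_and_kl` that evaluates the surrogate advantage and average KL at candidate parameters:

```python
import numpy as np

def trpo_step(theta_k, g, H_inv_g, delta, eval_surrogate_and_kl,
              alpha=0.8, max_backtracks=10):
    """Natural gradient step with backtracking line search on flat parameter vectors."""
    full_step = np.sqrt(2.0 * delta / (g @ H_inv_g)) * H_inv_g
    for j in range(max_backtracks):
        theta_new = theta_k + (alpha ** j) * full_step
        surrogate, avg_kl = eval_surrogate_and_kl(theta_new)
        if surrogate > 0.0 and avg_kl <= delta:
            return theta_new          # accept the first step that passes both checks
    return theta_k                    # all backtracks failed; keep the old parameters
```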
Lastly: computing and storing the matrix inverse, $H^{-1}$, is painfully expensive when dealing with neural network policies with thousands or millions of parameters. TRPO sidesteps the issue by using the conjugate gradient algorithm to solve $Hx = g$ for $x = H^{-1} g$, requiring only a function which can compute the matrix-vector product $Hx$ instead of computing and storing the whole matrix $H$ directly. This is not too hard to do: we set up a symbolic operation to calculate

$$Hx = \nabla_\theta \left( \left( \nabla_\theta \bar{D}_{KL}(\theta \,\|\, \theta_k) \right)^T x \right),$$
which gives us the correct output without computing the whole matrix.
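For illustration, a minimal sketch of the conjugate gradient solver itself, assuming a function `hvp` that returns the Hessian-vector product $Hx$ (e.g., built from the double-gradient expression above):

```python
import numpy as np

def conjugate_gradient(hvp, g, iters=10, tol=1e-10):
    """Approximately solve H x = g using only Hessian-vector products."""
    x = np.zeros_like(g)
    r = g.copy()                      # residual g - Hx, with x = 0
    p = g.copy()                      # search direction
    rs_old = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        step = rs_old / (p @ Hp)
        x += step * p
        r -= step * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x                          # approximate H^{-1} g
```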
TRPO trains a stochastic policy in an on-policy way. This means that it explores by sampling actions according to the latest version of its stochastic policy. The amount of randomness in action selection depends on both initial conditions and the training procedure. Over the course of training, the policy typically becomes progressively less random, as the update rule encourages it to exploit rewards that it has already found. This may cause the policy to get trapped in local optima.
Trust Region Policy Optimization, Schulman et al. 2015
original paper describing TRPO
High Dimensional Continuous Control Using Generalized Advantage Estimation, Schulman et al. 2016
Generalized Advantage Estimation
Approximately Optimal Approximate Reinforcement Learning, Kakade and Langford 2002
contains theoretical results which motivate and deeply connect to the theoretical foundations of TRPO.