Types of RL algorithms

Recall that the final objective is

$$\theta^\star=\arg\max_{\theta} E_{\tau\sim p_\theta(\tau)}\left[\sum_t r(s_t,a_t)\right]$$

There are various methods to accomplish this objective.

Model-free RL

Model-free methods do not estimate the transition model; they learn directly from sampled interactions with the environment.

Policy gradient

Directly differentiate the above objective with respect to the policy parameters. The expectation cannot be computed exactly when the space of trajectories is huge, so the gradient is estimated from sampled trajectories.
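As a concrete illustration, here is a minimal REINFORCE-style sketch, assuming PyTorch and Gymnasium are available; the CartPole-v1 environment, network size, and learning rate are illustrative choices, not part of these notes.

```python
# Minimal REINFORCE sketch; hyperparameters are illustrative.
import torch
import torch.nn as nn
import gymnasium as gym

env = gym.make("CartPole-v1")
policy = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for episode in range(500):
    obs, _ = env.reset()
    log_probs, rewards = [], []
    done = False
    while not done:
        logits = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # Monte Carlo policy gradient: grad J ~ (sum_t grad log pi(a_t|s_t)) * R(tau)
    episode_return = sum(rewards)
    loss = -torch.stack(log_probs).sum() * episode_return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

Each update pushes up the log-probability of the actions in a trajectory in proportion to that trajectory's return.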

Value-based

Estimate the value function or Q-function of the optimal policy; there is no explicit policy, since actions are chosen with the arg max trick over the Q-function.

Use samples to fit the Q-function (a supervised regression problem), then set the policy to the greedy one that picks the best action under the fitted Q-function.
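For intuition, a minimal tabular Q-learning sketch; the FrozenLake-v1 environment and the hyperparameters are illustrative assumptions.

```python
# Tabular Q-learning sketch; values are illustrative.
import numpy as np
import gymnasium as gym

env = gym.make("FrozenLake-v1", is_slippery=False)
Q = np.zeros((env.observation_space.n, env.action_space.n))
alpha, gamma, epsilon = 0.1, 0.99, 0.1

for episode in range(5000):
    s, _ = env.reset()
    done = False
    while not done:
        # epsilon-greedy exploration; the greedy policy is implicit in Q
        a = env.action_space.sample() if np.random.rand() < epsilon else int(np.argmax(Q[s]))
        s_next, r, terminated, truncated, _ = env.step(a)
        done = terminated or truncated
        # TD target uses the max over next actions (the arg max trick)
        target = r + gamma * (0.0 if terminated else np.max(Q[s_next]))
        Q[s, a] += alpha * (target - Q[s, a])
        s = s_next

# The policy is recovered by acting greedily: pi(s) = argmax_a Q[s, a]
```

DQN replaces the table with a neural network and adds a replay buffer and a target network to stabilize the regression.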

Actor-critic

Estimate the value function or Q-function of the current policy, then use it to improve the policy. This combines policy gradient and value-based methods.
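A minimal one-step actor-critic sketch under the same assumptions as above (PyTorch, Gymnasium, illustrative hyperparameters); the critic evaluates the current policy and the actor is updated with the resulting advantage.

```python
# One-step actor-critic sketch; sizes and learning rate are illustrative.
import torch
import torch.nn as nn
import gymnasium as gym

env = gym.make("CartPole-v1")
actor = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 2))
critic = nn.Sequential(nn.Linear(4, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(list(actor.parameters()) + list(critic.parameters()), lr=1e-3)
gamma = 0.99

for episode in range(500):
    obs, _ = env.reset()
    done = False
    while not done:
        s = torch.as_tensor(obs, dtype=torch.float32)
        dist = torch.distributions.Categorical(logits=actor(s))
        action = dist.sample()
        obs_next, reward, terminated, truncated, _ = env.step(action.item())
        done = terminated or truncated

        s_next = torch.as_tensor(obs_next, dtype=torch.float32)
        with torch.no_grad():
            # TD target under the current policy
            target = reward + gamma * (0.0 if terminated else critic(s_next).item())
        value = critic(s).squeeze()
        advantage = target - value
        actor_loss = -dist.log_prob(action) * advantage.detach()  # policy gradient step
        critic_loss = advantage.pow(2)                            # regress V toward the TD target
        opt.zero_grad()
        (actor_loss + critic_loss).backward()
        opt.step()
        obs = obs_next
```

Unlike the REINFORCE sketch, the update can be applied at every step, because the critic estimates the expected return instead of waiting for the full Monte Carlo return.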

Model-based RL

Estimate the transition model and then use it in one of the following ways (a minimal sketch follows this list):

  • Use it for planning (no explicit policy)

    • Trajectory optimization/Optimal control (primarily in continuous spaces)

      Essentially backpropagation to optimize over actions

    • Discrete planning in discrete action spaces, e.g. Monte Carlo tree search

  • Use it to improve policy

    • Backpropagate gradients into the policy

This requires some tricks to make it work in practice.

  • Use the model to learn a value function

    • Dynamic programming

    • Generate simulated experience for model-free learner (Dyna)
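As referenced above, a minimal model-based sketch: fit a dynamics model from random transitions with supervised learning, then plan by random shooting with the learned model. The Pendulum-v1 environment, the network, and the reward proxy used for planning are illustrative assumptions, not part of these notes.

```python
# Model-based RL sketch: fit p(s'|s,a), then plan by random shooting (MPC-style).
import numpy as np
import torch
import torch.nn as nn
import gymnasium as gym

env = gym.make("Pendulum-v1")
obs_dim, act_dim = env.observation_space.shape[0], env.action_space.shape[0]

# 1) Collect random transitions and fit the dynamics model with supervised learning.
model = nn.Sequential(nn.Linear(obs_dim + act_dim, 128), nn.ReLU(), nn.Linear(128, obs_dim))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

data = []
obs, _ = env.reset()
for _ in range(5000):
    act = env.action_space.sample()
    obs_next, reward, terminated, truncated, _ = env.step(act)
    data.append((obs, act, obs_next))
    if terminated or truncated:
        obs, _ = env.reset()
    else:
        obs = obs_next

S = torch.as_tensor(np.array([d[0] for d in data]), dtype=torch.float32)
A = torch.as_tensor(np.array([d[1] for d in data]), dtype=torch.float32)
S2 = torch.as_tensor(np.array([d[2] for d in data]), dtype=torch.float32)
for _ in range(2000):
    pred = model(torch.cat([S, A], dim=1))
    loss = ((pred - S2) ** 2).mean()   # regression onto observed next states
    opt.zero_grad()
    loss.backward()
    opt.step()

# 2) Plan with the model: sample candidate action sequences, roll them out in the
#    learned model, and return the first action of the best sequence.
def plan(state, horizon=15, candidates=256):
    seqs = np.random.uniform(env.action_space.low, env.action_space.high,
                             size=(candidates, horizon, act_dim)).astype(np.float32)
    with torch.no_grad():
        s = torch.as_tensor(state, dtype=torch.float32).repeat(candidates, 1)
        returns = torch.zeros(candidates)
        for t in range(horizon):
            a = torch.as_tensor(seqs[:, t])
            s = model(torch.cat([s, a], dim=1))
            returns += s[:, 0]  # proxy reward: cos(theta), i.e. keep the pendulum upright
    return seqs[int(torch.argmax(returns)), 0]
```

In an MPC-style loop, `plan` would be called at every step and only the first action of the best sequence executed before replanning with the newly observed state.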

Examples of specific algorithms

  • Policy gradient methods

    • REINFORCE

    • Natural policy gradient

    • Trust region policy optimization

  • Value function fitting methods

    • Q-learning, DQN

    • Temporal difference learning

    • Fitted value iteration

  • Actor-critic algorithms

    • Asynchronous advantage actor-critic (A3C)

    • Soft actor-critic (SAC)

  • Model-based RL

    • Dyna

    • Guided policy search
