Types of RL algorithms

Recall that the final objective is

$$\theta^\star = \arg\max_{\theta} E_{\tau\sim p_\theta(\tau)}\left[\sum_t r(s_t, a_t)\right]$$

There are various methods to accomplish this objective.

Model-free RL

No need to estimate the transition model.

Policy gradient

Directly differentiate the above objective with respect to the policy parameters. Evaluating the expectation exactly is intractable when the space of trajectories is huge, but the gradient can be approximated with sampled trajectories.
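
As a toy illustration, here is a minimal REINFORCE-style sketch in NumPy, assuming a made-up one-step problem with three discrete actions and a softmax policy (the reward means and learning rate are arbitrary, not from the course):

```python
import numpy as np

rng = np.random.default_rng(0)
true_means = np.array([0.0, 1.0, 3.0])  # assumed reward means, for illustration only

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros(3)  # policy parameters: one logit per action
lr = 0.1

for _ in range(500):
    probs = softmax(theta)
    a = rng.choice(3, p=probs)        # sample an action from pi_theta
    r = true_means[a] + rng.normal()  # sample a noisy reward
    grad_log_pi = -probs              # grad of log softmax w.r.t. the logits
    grad_log_pi[a] += 1.0
    theta += lr * r * grad_log_pi     # REINFORCE: ascend E[grad log pi * r]

print(softmax(theta))  # probabilities concentrate on the best action
```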

Value-based

Estimate the value function or Q-function of the optimal policy; there is no explicit policy, since actions are recovered with the arg max trick.

Use samples to fit the Q-function (a supervised-learning-style regression toward bootstrapped targets), then set the policy to the action that maximizes it.
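
A minimal tabular sketch of this idea, assuming a tiny made-up chain MDP (dynamics, reward, and hyperparameters are invented for illustration): the Q-table is regressed toward bootstrapped targets, and the policy is only implicit via the arg max.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.1

def env_step(s, a):
    # Toy chain: action 1 moves right, action 0 stays; reward 1 at the last state.
    s_next = min(s + a, n_states - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

Q = np.zeros((n_states, n_actions))
s = 0
for _ in range(5000):
    a = int(rng.integers(n_actions))       # random exploratory behavior
    s_next, r = env_step(s, a)
    target = r + gamma * Q[s_next].max()   # bootstrapped regression target
    Q[s, a] += alpha * (target - Q[s, a])  # fit Q toward the target
    s = 0 if s_next == n_states - 1 else s_next

print(Q.argmax(axis=1))  # implicit policy: arg max over actions of Q(s, a)
```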

Actor-critic

Estimate the value function or Q-function of the current policy, then use it to improve the policy. This is a combination of policy gradient and value-based methods.
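
A minimal sketch of this structure, again assuming a toy chain MDP: the critic fits V for the current policy with TD(0), and the actor ascends the policy gradient weighted by the resulting advantage estimate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma = 5, 2, 0.9

def env_step(s, a):
    s_next = min(s + a, n_states - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r, s_next == n_states - 1  # (next state, reward, done)

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

theta = np.zeros((n_states, n_actions))  # actor: per-state softmax logits
V = np.zeros(n_states)                   # critic: state-value estimates
lr_actor, lr_critic = 0.1, 0.1

s = 0
for _ in range(5000):
    probs = softmax(theta[s])
    a = rng.choice(n_actions, p=probs)
    s_next, r, done = env_step(s, a)
    td_target = r + (0.0 if done else gamma * V[s_next])
    advantage = td_target - V[s]         # critic evaluates the current policy
    V[s] += lr_critic * advantage
    grad_log_pi = -probs
    grad_log_pi[a] += 1.0
    theta[s] += lr_actor * advantage * grad_log_pi  # actor: advantage-weighted PG step
    s = 0 if done else s_next

print(softmax(theta[0]))  # favors the action that moves toward the goal
```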

Model-based RL

Estimate the transition model first, and then use it in one of the following ways (a minimal Dyna-style sketch follows the list):

  • Use it for planning (no explicit policy)

    • Trajectory optimization/Optimal control (primarily in continuous spaces)

      Essentially backpropagation to optimize over actions

    • Discrete planning in discrete action spaces, e.g. Monte Carlo tree search

  • Use it to improve policy

    • Backpropagate gradients into the policy

Requires some tricks to make it work

  • Use the model to learn a value function

    • Dynamic programming

    • Generate simulated experience for model-free learner (Dyna)
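
Below is a minimal Dyna-style sketch of the last idea, assuming the same kind of toy chain MDP as above (all details invented for illustration): real transitions are used both to fit a tabular model and to update Q, and the model then generates extra simulated transitions for more Q-learning updates.

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions, gamma, alpha = 5, 2, 0.9, 0.1

def env_step(s, a):
    s_next = min(s + a, n_states - 1)
    r = 1.0 if s_next == n_states - 1 else 0.0
    return s_next, r

Q = np.zeros((n_states, n_actions))
model = {}                                # learned model: (s, a) -> (s_next, r)
s = 0
for _ in range(2000):
    a = int(rng.integers(n_actions))
    s_next, r = env_step(s, a)
    model[(s, a)] = (s_next, r)           # fit the (here deterministic) model
    Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
    for _ in range(5):                    # planning: replay simulated experience
        (ps, pa), (pns, pr) = list(model.items())[rng.integers(len(model))]
        Q[ps, pa] += alpha * (pr + gamma * Q[pns].max() - Q[ps, pa])
    s = 0 if s_next == n_states - 1 else s_next

print(Q.argmax(axis=1))
```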

Examples of specific algorithms

  • Policy gradient methods

    • REINFORCE

    • Natural policy gradient

    • Trust region policy optimization

  • Value function fitting methods

    • Q-learning, DQN

    • Temporal difference learning

    • Fitted value iteration

  • Actor-critic algorithms

    • Asynchronous advantage actor-critic (A3C)

    • Soft actor-critic (SAC)

  • Model-based RL

    • Dyna

    • Guided policy search
