Types of RL algorithms
We know that the final objective is to find policy parameters that maximize the expected total reward,

$$\theta^{\star} = \arg\max_{\theta}\; \mathbb{E}_{\tau \sim p_{\theta}(\tau)}\Big[\sum_{t} r(s_t, a_t)\Big].$$

There are several families of methods for accomplishing this objective.
Policy gradients: directly differentiate the objective above with respect to the policy parameters; there is no need to estimate the transition model. Computing this gradient exactly is intractable when the number of possible trajectories is huge, but we can approximate it with samples.
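A minimal sketch of this idea, assuming a tiny illustrative chain MDP and a tabular softmax policy (the environment, the `step` helper, and all constants below are made up for illustration, not taken from any particular implementation): sample trajectories, weight each grad-log-probability by the trajectory's return, and take a gradient ascent step (the REINFORCE estimator).

```python
import numpy as np

# Illustrative only: a tiny 3-state chain MDP with 2 actions and a fixed horizon.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, HORIZON = 3, 2, 5
theta = np.zeros((N_STATES, N_ACTIONS))          # tabular softmax policy parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(s, a):
    """Toy dynamics: action 1 moves right; reward for reaching the last state."""
    s_next = min(s + a, N_STATES - 1)
    return s_next, float(s_next == N_STATES - 1)

for iteration in range(200):
    grad = np.zeros_like(theta)
    for _ in range(16):                          # sample a batch of trajectories
        s, traj, ret = 0, [], 0.0
        for _ in range(HORIZON):
            probs = softmax(theta[s])
            a = rng.choice(N_ACTIONS, p=probs)
            s_next, r = step(s, a)
            traj.append((s, a, probs))
            ret += r
            s = s_next
        for s_t, a_t, probs in traj:
            # grad log pi(a|s) for a softmax policy is one_hot(a) - probs
            glog = -probs
            glog[a_t] += 1.0
            grad[s_t] += ret * glog              # REINFORCE: return * grad log pi
    theta += 0.1 * grad / 16                     # gradient ascent on expected return
```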
Value-based methods: estimate the value function or Q-function of the optimal policy and select actions with the arg-max trick; there is no explicit policy. Samples are used to fit the Q-function (essentially a supervised regression problem), and the policy is then set to act greedily with respect to it.
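A minimal tabular Q-learning sketch under the same illustrative assumptions (toy chain MDP, hypothetical `step` helper): each sampled transition provides a regression target r + γ max_a' Q(s', a'), and the policy is recovered implicitly by taking the arg max over Q.

```python
import numpy as np

# Illustrative only: tabular Q-learning on the same tiny chain MDP.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 3, 2
Q = np.zeros((N_STATES, N_ACTIONS))
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    s_next = min(s + a, N_STATES - 1)
    return s_next, float(s_next == N_STATES - 1)

for episode in range(500):
    s = 0
    for _ in range(10):
        # epsilon-greedy exploration around the implicit (arg-max) policy
        a = rng.integers(N_ACTIONS) if rng.random() < eps else int(Q[s].argmax())
        s_next, r = step(s, a)
        target = r + gamma * Q[s_next].max()     # "supervised" regression target
        Q[s, a] += alpha * (target - Q[s, a])    # move Q(s, a) toward the target
        s = s_next

greedy_policy = Q.argmax(axis=1)                 # the policy is implicit: arg max over Q
```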
Actor-critic methods: estimate the value function or Q-function of the current policy, then use it to improve the policy. This is a combination of policy gradients and value-based methods.
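A minimal one-step actor-critic sketch, again on the illustrative toy MDP (all names and constants are assumptions for the example): the critic V estimates the value of the current policy, and its TD error serves as the advantage that weights the actor's policy-gradient update.

```python
import numpy as np

# Illustrative only: one-step (TD) actor-critic on the same tiny chain MDP.
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS = 3, 2
theta = np.zeros((N_STATES, N_ACTIONS))          # actor: tabular softmax policy
V = np.zeros(N_STATES)                           # critic: value of the current policy
gamma, lr_actor, lr_critic = 0.9, 0.1, 0.1

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def step(s, a):
    s_next = min(s + a, N_STATES - 1)
    return s_next, float(s_next == N_STATES - 1)

for episode in range(500):
    s = 0
    for _ in range(10):
        probs = softmax(theta[s])
        a = rng.choice(N_ACTIONS, p=probs)
        s_next, r = step(s, a)
        td_error = r + gamma * V[s_next] - V[s]  # critic's TD error = advantage estimate
        V[s] += lr_critic * td_error             # improve the critic
        glog = -probs
        glog[a] += 1.0                           # grad log pi(a|s) for a softmax policy
        theta[s] += lr_actor * td_error * glog   # improve the actor (policy gradient step)
        s = s_next
```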
Model-based RL: estimate the transition model and then use it in one of the following ways (a small sketch follows this list):

- Use it for planning (no explicit policy)
  - Trajectory optimization / optimal control (primarily in continuous spaces): essentially backpropagation through the model to optimize over actions
  - Discrete planning in discrete action spaces, e.g. Monte Carlo tree search
- Use it to improve the policy
  - Backpropagate gradients from the model into the policy (this requires some tricks to make it work)
  - Use the model to learn a value function, via
    - Dynamic programming
    - Generating simulated experience for a model-free learner (Dyna)
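A minimal model-based sketch on the same toy environment (everything here is an illustrative assumption): estimate the transition probabilities and rewards by counting random-exploration transitions, then plan in the learned model with value iteration, i.e. the dynamic-programming branch above.

```python
import numpy as np

# Illustrative only: fit a tabular model from random-exploration data, then plan
# in the learned model with value iteration (dynamic programming).
rng = np.random.default_rng(0)
N_STATES, N_ACTIONS, gamma = 3, 2, 0.9

def step(s, a):
    s_next = min(s + a, N_STATES - 1)
    return s_next, float(s_next == N_STATES - 1)

# 1) Estimate the transition model and rewards by counting sampled transitions.
counts = np.zeros((N_STATES, N_ACTIONS, N_STATES))
reward_sum = np.zeros((N_STATES, N_ACTIONS))
for _ in range(5000):
    s, a = rng.integers(N_STATES), rng.integers(N_ACTIONS)
    s_next, r = step(s, a)
    counts[s, a, s_next] += 1
    reward_sum[s, a] += r
visits = np.maximum(counts.sum(axis=2, keepdims=True), 1)   # avoid division by zero
P_hat = counts / visits                                      # estimated P(s' | s, a)
R_hat = reward_sum / visits[:, :, 0]                         # estimated r(s, a)

# 2) Value iteration in the learned model; the plan is the greedy policy under Q.
V = np.zeros(N_STATES)
for _ in range(100):
    Q = R_hat + gamma * (P_hat @ V)              # Q(s, a) backed up through the model
    V = Q.max(axis=1)
plan = Q.argmax(axis=1)
```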
Examples of specific algorithms in each family:

- Policy gradient methods
  - REINFORCE
  - Natural policy gradient
  - Trust region policy optimization (TRPO)
- Value function fitting methods
  - Q-learning, DQN
  - Temporal difference learning
  - Fitted value iteration
- Actor-critic algorithms
  - Asynchronous advantage actor-critic (A3C)
  - Soft actor-critic (SAC)
- Model-based RL
  - Dyna
  - Guided policy search