Reduce variance
We knew that the policy gradient is

$$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N} \nabla_\theta \log \pi_\theta(\tau_i)\, r(\tau_i)$$

and there are many reasons that cause it to have high variance. The most straightforward one is illustrated with an example later in this section.
There are two standard tricks for reducing the variance: subtracting a baseline and exploiting causality. Causality always holds, so that trick can be used every time; we start with baselines.
Our purpose is to make good trajectories have bigger probability and bad trajectories have smaller probability. The problem is that good trajectories don't always have big positive rewards, and bad trajectories don't always have negative ones. Our method is to subtract a baseline, like this:

$$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N} \nabla_\theta \log \pi_\theta(\tau_i)\,\big[r(\tau_i) - b\big]$$
Actually, subtracting a baseline is unbiased in expectation:

$$\mathbb{E}\big[\nabla_\theta \log \pi_\theta(\tau)\, b\big] = \int \pi_\theta(\tau)\, \nabla_\theta \log \pi_\theta(\tau)\, b \,\mathrm{d}\tau = \int \nabla_\theta \pi_\theta(\tau)\, b \,\mathrm{d}\tau = b\, \nabla_\theta \int \pi_\theta(\tau)\,\mathrm{d}\tau = b\, \nabla_\theta 1 = 0$$
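As a sanity check, here is a small Monte Carlo experiment (my own sketch, not part of the original notes) confirming that the baseline term has zero expectation. It assumes a toy 1-D Gaussian policy $\pi_\theta(a) = \mathcal{N}(a;\, \theta,\, 1)$, for which $\nabla_\theta \log \pi_\theta(a) = a - \theta$; the values of `theta` and `b` are arbitrary.

```python
# Monte Carlo check that the baseline term has zero expectation,
# assuming a toy 1-D Gaussian policy pi_theta(a) = N(a; theta, 1),
# for which grad_theta log pi_theta(a) = (a - theta).
import numpy as np

rng = np.random.default_rng(0)
theta, b = 0.5, 3.0                          # arbitrary parameter and baseline
a = rng.normal(theta, 1.0, size=1_000_000)   # actions sampled from the policy
score = a - theta                            # grad_theta log pi_theta(a)

print(np.mean(score * b))   # ~0: subtracting b does not bias the gradient
```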
It's worth mentioning that the average reward, $b = \frac{1}{N}\sum_{i=1}^{N} r(\tau_i)$, is not the best baseline, but it's pretty good.
Subtracting a baseline is unbiased, but can we find the best baseline, the one with the lowest variance?
And the variance is

$$\mathrm{Var} = \mathbb{E}_{\tau \sim \pi_\theta(\tau)}\Big[\big(\nabla_\theta \log \pi_\theta(\tau)\,(r(\tau) - b)\big)^2\Big] - \mathbb{E}_{\tau \sim \pi_\theta(\tau)}\big[\nabla_\theta \log \pi_\theta(\tau)\,(r(\tau) - b)\big]^2$$

The second term is just $\mathbb{E}[\nabla_\theta \log \pi_\theta(\tau)\, r(\tau)]^2$, because the baseline is unbiased in expectation, so it does not depend on $b$. Writing $g(\tau) = \nabla_\theta \log \pi_\theta(\tau)$ and setting the derivative of the first term with respect to $b$ to zero,

$$\frac{\mathrm{d}\,\mathrm{Var}}{\mathrm{d}b} = \frac{\mathrm{d}}{\mathrm{d}b}\,\mathbb{E}\big[g(\tau)^2\,(r(\tau) - b)^2\big] = -2\,\mathbb{E}\big[g(\tau)^2\, r(\tau)\big] + 2b\,\mathbb{E}\big[g(\tau)^2\big] = 0$$

$$\Rightarrow\quad b = \frac{\mathbb{E}\big[g(\tau)^2\, r(\tau)\big]}{\mathbb{E}\big[g(\tau)^2\big]}$$

This is just the expected reward, but weighted by gradient magnitudes.
In theory, the best baseline is this gradient-weighted expected reward, but in practice we haven't found any difference between the weighted expected reward and the average reward. And since the average is cheaper to compute, we just use the average reward as the baseline.
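To make this concrete, here is a toy comparison (a sketch under my own assumptions, not from the original notes) of the gradient variance under no baseline, the average-reward baseline, and the gradient-weighted baseline. It reuses the 1-D Gaussian policy above together with a made-up reward $r(a) = -(a-2)^2$; all three estimates should agree in the mean, while the standard deviation shrinks once a baseline is subtracted.

```python
# Toy comparison of gradient variance under three baselines, using the
# 1-D Gaussian policy pi_theta(a) = N(a; theta, 1) and a made-up
# reward r(a) = -(a - 2)^2.
import numpy as np

rng = np.random.default_rng(0)
theta = 0.5
a = rng.normal(theta, 1.0, size=1_000_000)
score = a - theta              # grad_theta log pi_theta(a)
r = -(a - 2.0) ** 2            # per-sample reward

b_avg = r.mean()                                   # average-reward baseline
b_opt = (score**2 * r).mean() / (score**2).mean()  # gradient-weighted baseline

for name, b in [("no baseline", 0.0), ("average", b_avg), ("optimal", b_opt)]:
    g = score * (r - b)        # per-sample gradient estimates
    # means agree (unbiased); std drops with a baseline, lowest for b_opt
    print(f"{name:12s} mean={g.mean():+.3f}  std={g.std():.3f}")
```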
Here is the promised illustration of where the high variance comes from. Suppose we got a large negative reward and two small positive rewards; according to the update formula for $\theta$, the distribution $\pi_\theta(\tau)$ will move to the right a lot. However, if we add a constant to every reward, so that we now have one small positive reward and two large positive rewards, the distribution of $\pi_\theta(\tau)$ moves to the right only a little bit.
From the above illustration, a slight change to the rewards can severely influence the value of $\nabla_\theta J(\theta)$, which causes high variance.
The worst case is that, if the two good samples have $r(\tau) = 0$, they contribute nothing to the gradient, so it may take a long time to converge, or the policy may end up in a sub-optimal solution.
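The illustration can be reproduced numerically. The actions and rewards below are made up; the only point is that adding a constant to every reward, which changes nothing about which policy is optimal, changes the size of the estimated update a lot. Again assume the 1-D Gaussian policy with $\theta = 0$, so the score is just $a$ and a positive estimate moves the distribution to the right.

```python
# Made-up numeric version of the illustration above. With theta = 0,
# grad_theta log pi_theta(a) = a, so a positive gradient estimate
# "moves the distribution to the right".
import numpy as np

a = np.array([-3.0, 0.2, 0.4])        # one bad sample (left), two good ones
r = np.array([-10.0, 1.0, 1.0])       # large negative + two small positives
score = a                             # grad_theta log pi_theta(a) with theta = 0

print(np.mean(score * r))             # ~10.2: moves right a lot
print(np.mean(score * (r + 11.0)))    # ~1.4: rewards shifted, much smaller step
```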
Causality means that the policy at time $t'$ cannot affect the reward at time $t$ when $t < t'$. So the gradient should be

$$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_{i,t} \mid s_{i,t}) \left(\sum_{t'=t}^{T} r(s_{i,t'}, a_{i,t'})\right)$$
Denote

$$\hat{Q}_{i,t} = \sum_{t'=t}^{T} r(s_{i,t'}, a_{i,t'})$$

where $\hat{Q}_{i,t}$ is the reward-to-go from time $t$ for sample $i$. So $\nabla_\theta J(\theta)$ should be

$$\nabla_\theta J(\theta) \approx \frac{1}{N}\sum_{i=1}^{N}\sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_{i,t} \mid s_{i,t})\, \hat{Q}_{i,t}$$

Since $\sum_{t'=1}^{T}$ becomes $\sum_{t'=t}^{T}$, the number of rewards in the sum is smaller, which leads to lower variance.
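In code, the reward-to-go for a whole batch is usually computed with a reversed cumulative sum along the time axis. Here is a minimal sketch (the function name and array layout are my own choices, not from the original notes):

```python
# Reward-to-go Q_hat[i, t] = sum of r[i, t'] for t' >= t, computed for
# a batch of trajectories via a reversed cumulative sum over time.
import numpy as np

def reward_to_go(rewards: np.ndarray) -> np.ndarray:
    """rewards: (N, T) array of per-step rewards; returns (N, T) Q_hat."""
    return np.cumsum(rewards[:, ::-1], axis=1)[:, ::-1]

r = np.array([[1.0, 0.0, 2.0],
              [0.5, 0.5, 0.5]])
print(reward_to_go(r))   # [[3. 2. 2.], [1.5 1. 0.5]]
```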