What's wrong with the policy gradient?
We know that the policy gradient estimator has high variance, and there are many reasons for it. Here is the most straightforward one.
Suppose we get one large negative reward and two small positive rewards. According to the update formula for $\theta$, the distribution of $\tau$ will move to the right a lot. However, if we add a constant to every reward so that we instead have one small positive reward and two large positive rewards, the distribution of $\tau$ will only move to the right a little bit.
From the illustration above, a slight change to the rewards can severely influence the value of $\nabla_\theta J(\theta)$, which causes high variance.
The worst case is when the two good samples have $r(\tau) = 0$: it may take a long time to converge, or training may end up in a sub-optimal solution.
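To make this concrete, here is a minimal numeric sketch. It assumes a toy 1-D Gaussian policy (not part of the notes, just an illustration) so that $\nabla_\theta \log \pi_\theta(\tau)$ has a closed form; shifting every reward by the same constant leaves the ranking of the trajectories unchanged but produces a very different gradient estimate.

```python
import numpy as np

# Toy setup (assumed, not from the notes): a 1-D Gaussian policy
# pi_theta(tau) = N(tau; theta, 1), so grad_theta log pi_theta(tau) = tau - theta.
THETA = 0.0

def grad_log_pi(tau):
    return tau - THETA

# Three sampled "trajectories": one large negative return, two small positive returns.
taus = np.array([-1.0, 1.0, 2.0])
returns = np.array([-10.0, 1.0, 1.0])

def pg_estimate(r):
    # Vanilla estimator: (1/N) * sum_i grad log pi(tau_i) * r(tau_i)
    return np.mean(grad_log_pi(taus) * r)

print(pg_estimate(returns))          # gradient with the original returns
print(pg_estimate(returns + 10.0))   # same ranking of trajectories, very different gradient
```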
Causality
$$\nabla_\theta J(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left[ \left( \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_{i,t} \mid s_{i,t}) \right) \left( \sum_{t=1}^{T} r(s_{i,t}, a_{i,t}) \right) \right]$$
Causality means that the policy at time $t'$ cannot affect the reward at time $t$ when $t < t'$. So the gradient should be
$$\nabla_\theta J(\theta) = \frac{1}{N} \sum_{i=1}^{N} \left[ \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_{i,t} \mid s_{i,t}) \left( \sum_{t'=t}^{T} r(s_{i,t'}, a_{i,t'}) \right) \right] = \frac{1}{N} \sum_{i=1}^{N} \left[ \sum_{t=1}^{T} \nabla_\theta \log \pi_\theta(a_{i,t} \mid s_{i,t}) \, \hat{Q}_{i,t} \right]$$
Here $\hat{Q}_{i,t}$ is the reward-to-go from time $t$ for sample $i$. Since the inner sum now starts at $t' = t$ instead of $t' = 1$, fewer reward terms are summed, which leads to lower variance.
The causality trick always holds, so it can be used every time.
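As a minimal sketch, the reward-to-go $\hat{Q}_{i,t}$ for one sampled trajectory can be computed with a reversed cumulative sum (NumPy used here for illustration):

```python
import numpy as np

def reward_to_go(rewards):
    """Q_hat[t] = sum over t' >= t of rewards[t'] for one trajectory."""
    # Reverse, take the cumulative sum, then reverse back.
    return np.cumsum(rewards[::-1])[::-1]

r = np.array([1.0, 0.0, 2.0, 3.0])   # rewards at t = 1, 2, 3, 4
print(reward_to_go(r))               # [6. 5. 5. 3.]
```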
Baseline
$$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log \pi_\theta(\tau_i) \, r(\tau_i)$$
Our purpose is to make good trajectories more probable and bad trajectories less probable. The problem is that good trajectories don't always have large positive rewards, and bad trajectories don't always have negative rewards. Our method is to subtract a baseline, like this:
$$\nabla_\theta J(\theta) \approx \frac{1}{N} \sum_{i=1}^{N} \nabla_\theta \log \pi_\theta(\tau_i) \left( r(\tau_i) - b \right), \quad \text{where } b = \frac{1}{N} \sum_{i=1}^{N} r(\tau_i)$$
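A minimal sketch of this baseline-subtracted estimator, assuming the per-trajectory gradients $\nabla_\theta \log \pi_\theta(\tau_i)$ and returns $r(\tau_i)$ are already available as arrays (names and shapes are illustrative):

```python
import numpy as np

def pg_with_baseline(grad_log_probs, returns):
    """Policy-gradient estimate with the average-return baseline.

    grad_log_probs: shape (N, D), rows are grad_theta log pi_theta(tau_i)
    returns:        shape (N,),   entries are r(tau_i)
    """
    b = returns.mean()                       # b = (1/N) * sum_i r(tau_i)
    advantages = returns - b                 # r(tau_i) - b
    return (grad_log_probs * advantages[:, None]).mean(axis=0)

# Usage with made-up numbers.
g = np.array([[ 1.0, -0.5],
              [-2.0,  0.3],
              [ 0.5,  1.0]])
r = np.array([5.0, 1.0, 3.0])
print(pg_with_baseline(g, r))
```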
Actually, subtracting a baseline is unbiased in expectation
$$E\left[\nabla_\theta \log \pi_\theta(\tau) \, b\right] = \int \pi_\theta(\tau) \nabla_\theta \log \pi_\theta(\tau) \, b \, d\tau = \int \nabla_\theta \pi_\theta(\tau) \, b \, d\tau = b \, \nabla_\theta \int \pi_\theta(\tau) \, d\tau = b \, \nabla_\theta 1 = 0$$
The last thing worth mentioning is that the average reward is not the best baseline, but it's pretty good.
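A quick Monte Carlo sanity check of this identity, again assuming a toy 1-D Gaussian policy (an assumption for illustration, not part of the notes) so that $\nabla_\theta \log \pi_\theta(\tau)$ has a closed form:

```python
import numpy as np

# Monte Carlo check (toy 1-D Gaussian policy, assumed for illustration):
# grad_theta log pi_theta(tau) = tau - theta, so E[grad log pi * b] should be ~0
# for any constant baseline b.
rng = np.random.default_rng(0)
theta, b = 1.5, 7.0
taus = rng.normal(theta, 1.0, size=1_000_000)
print(np.mean((taus - theta) * b))   # close to 0; shrinks as the sample size grows
```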
Analyzing variance
Subtracting a baseline is unbiased, but can we find the best baseline, the one with the lowest variance?
$$\nabla_\theta J(\theta) = E_{\tau \sim \pi_\theta(\tau)}\left[\nabla_\theta \log \pi_\theta(\tau) \left( r(\tau) - b \right)\right]$$
And the variance is
$$\begin{aligned} \mathrm{Var} &= E[x^2] - E[x]^2 \\ &= E_{\tau \sim \pi_\theta(\tau)}\left[\left(\nabla_\theta \log \pi_\theta(\tau)(r(\tau) - b)\right)^2\right] - E_{\tau \sim \pi_\theta(\tau)}\left[\nabla_\theta \log \pi_\theta(\tau)(r(\tau) - b)\right]^2 \\ &= E_{\tau \sim \pi_\theta(\tau)}\left[\left(\nabla_\theta \log \pi_\theta(\tau)(r(\tau) - b)\right)^2\right] - E_{\tau \sim \pi_\theta(\tau)}\left[\nabla_\theta \log \pi_\theta(\tau) \, r(\tau)\right]^2 \end{aligned}$$
The last equality holds because subtracting the baseline is unbiased in expectation, so the $-b$ can be dropped from the second term.
Denote $g(\tau) = \nabla_\theta \log \pi_\theta(\tau)$.
$$\frac{d\,\mathrm{Var}}{db} = \frac{d}{db} E\left[g(\tau)^2 (r(\tau) - b)^2\right] = \frac{d}{db}\left( E[g(\tau)^2 r(\tau)^2] - 2\,E[g(\tau)^2 r(\tau)]\,b + b^2\,E[g(\tau)^2] \right) = -2\,E[g(\tau)^2 r(\tau)] + 2b\,E[g(\tau)^2] = 0$$
So $b$ should be
$$b = \frac{E[g(\tau)^2 r(\tau)]}{E[g(\tau)^2]}$$
This is just the expected reward, but weighted by gradient magnitudes.
In theory, the best baseline is this gradient-weighted expected reward, but in practice we haven't found any real difference between it and the plain average reward. And since the average is cheaper to compute, we just use the average reward as the baseline.
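For completeness, here is a sketch comparing the two baselines on made-up samples. It applies the formula above per gradient component (a common convention; the scalar version works the same way), and all array names and shapes are illustrative:

```python
import numpy as np

def optimal_baseline(grad_log_probs, returns):
    """Variance-minimizing baseline b = E[g^2 r] / E[g^2], per gradient component."""
    g2 = grad_log_probs ** 2                          # elementwise g(tau)^2, shape (N, D)
    return (g2 * returns[:, None]).mean(axis=0) / g2.mean(axis=0)

# Compare with the plain average-return baseline on made-up samples.
rng = np.random.default_rng(0)
g = rng.normal(size=(1000, 3))                        # stand-in grad log-probs, shape (N, D)
r = rng.normal(loc=2.0, size=1000)                    # stand-in returns
print("optimal baseline :", optimal_baseline(g, r))
print("average baseline :", r.mean())
```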