Reduce variance

What's wrong with the policy gradient?

We know that the policy gradient suffers from high variance, and there are many reasons for it. Here is the most straightforward one.

Suppose we get one large negative reward and two small positive rewards. According to the update formula for $\theta$, the distribution of $\tau$ will move to the right a lot. However, if we add a constant to all the rewards, so that we now have one small positive reward and two large positive rewards, the distribution of $\tau$ will move to the right only a little bit.

From the above illustration, a slight change to the rewards can severely influence the value of $\nabla_\theta J(\theta)$, which causes high variance.

The worst case is that, if the two good samples have $r(\tau)=0$, it may take a long time to converge or end up in a sub-optimal solution.
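To make this concrete, here is a tiny numerical sketch (the score values below are made-up stand-ins for $\nabla_\theta\log\pi_\theta(\tau_i)$, not from any real policy) showing that, with finitely many samples, adding a constant to every reward changes the naive gradient estimate:

```python
# Toy illustration: shifting all rewards by a constant changes the naive
# finite-sample policy-gradient estimate by c * mean(scores), which is
# generally nonzero, even though the underlying task is unchanged.
import numpy as np

scores = np.array([-1.0, 0.5, 0.8])    # made-up stand-ins for grad log pi(tau_i)
rewards = np.array([-10.0, 1.0, 2.0])  # one large negative, two small positive

naive = np.mean(scores * rewards)             # estimate with original rewards
shifted = np.mean(scores * (rewards + 11.0))  # same task, rewards shifted by +11

print(naive, shifted)  # the two estimates differ by 11 * scores.mean() = 1.1
```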

Causality

$$
\nabla_\theta J(\theta) =\frac{1}{N}\sum_{i=1}^N \left[ \left(\sum_{t=1}^T \nabla_\theta \log\pi_\theta (a_{i,t}|s_{i,t}) \right) \left(\sum_{t=1}^T r(s_{i,t},a_{i,t}) \right) \right]
$$

Causality means that the policy at time $t'$ cannot affect the reward at time $t$ when $t<t'$. So the gradient should be

$$
\nabla_\theta J(\theta) =\frac{1}{N}\sum_{i=1}^N \left[ \sum_{t=1}^T \nabla_\theta \log\pi_\theta (a_{i,t}|s_{i,t}) \left(\sum_{t'=t}^T r(s_{i,t'},a_{i,t'}) \right) \right] =\frac{1}{N}\sum_{i=1}^N \left[ \sum_{t=1}^T \nabla_\theta \log\pi_\theta (a_{i,t}|s_{i,t}) \hat{Q}_{i,t} \right]
$$

Here $\hat{Q}_{i,t}$ is the reward-to-go from time $t$ for sample $i$. Since the inner sum now starts at $t'=t$ instead of $t'=1$, fewer reward terms are summed, which leads to lower variance.
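As a minimal sketch (assuming the rewards of one sampled trajectory are stored in a NumPy array), the reward-to-go can be computed with a reverse cumulative sum:

```python
# Reward-to-go: Q_hat_t = r_t + r_{t+1} + ... + r_T for every time step t.
import numpy as np

def reward_to_go(rewards):
    """rewards: 1-D array of r(s_t, a_t) for t = 1..T; returns Q_hat_t for each t."""
    return np.cumsum(rewards[::-1])[::-1]

print(reward_to_go(np.array([1.0, 0.0, 2.0, -1.0])))  # [ 2.  1.  1. -1.]
```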

Causality always holds, so this trick can be used every time.

Baseline

$$
\nabla_\theta J(\theta)\approx\frac{1}{N}\sum_i^N \nabla_\theta \log \pi_\theta (\tau_i)\,r(\tau_i)
$$

Our goal is to make good trajectories more probable and bad trajectories less probable. The problem is that good trajectories don't always have large positive rewards, and bad trajectories don't always have negative rewards. Our method is to subtract a baseline, like this:

$$
\nabla_\theta J(\theta)\approx\frac{1}{N}\sum_i^N \nabla_\theta \log \pi_\theta (\tau_i)(r(\tau_i)-b)
$$

where $b=\frac{1}{N}\sum_{i=1}^N r(\tau_i)$.
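As a minimal sketch (assuming we already have the per-trajectory score gradients $\nabla_\theta\log\pi_\theta(\tau_i)$ and the total rewards $r(\tau_i)$ as NumPy arrays; the function name is just for illustration), the baseline-subtracted estimator looks like this:

```python
# Baseline-subtracted policy-gradient estimate:
# (1/N) * sum_i grad log pi(tau_i) * (r(tau_i) - b), with b = average reward.
import numpy as np

def pg_estimate(score_grads, returns):
    """score_grads: (N, d) array of grad_theta log pi_theta(tau_i);
    returns: (N,) array of r(tau_i). Returns the (d,) gradient estimate."""
    b = returns.mean()  # b = (1/N) * sum_i r(tau_i)
    return (score_grads * (returns - b)[:, None]).mean(axis=0)
```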

In fact, subtracting a baseline keeps the estimator unbiased in expectation:

$$
E[\nabla_\theta\log\pi_\theta(\tau)\,b]=\int \pi_\theta(\tau)\nabla_\theta\log\pi_\theta(\tau)\,b\,d\tau=\int \nabla_\theta \pi_\theta(\tau)\, b\,d\tau=b\,\nabla_\theta\int \pi_\theta(\tau)\,d\tau=b\,\nabla_\theta 1=0
$$
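This can also be checked numerically. Below is a quick Monte Carlo sanity check, using a 1-D Gaussian with mean $\theta$ as a stand-in for $\pi_\theta(\tau)$ (an assumption chosen only because its score function is easy to write down):

```python
# E[ (d/dtheta log p_theta(x)) * b ] should be 0 for any constant baseline b.
import numpy as np

rng = np.random.default_rng(0)
theta, sigma, b = 1.0, 2.0, 5.0

x = rng.normal(theta, sigma, size=1_000_000)
score = (x - theta) / sigma**2    # d/dtheta log N(x | theta, sigma^2)
print(np.mean(score * b))         # close to 0, as the derivation predicts
```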

The last thing worth mentioning is that the average reward is not the best baseline, but it works pretty well.

Analyzing variance

Subtracting a baseline is unbiased, but can we find the best baseline, the one with the lowest variance?

$$
\nabla_\theta J(\theta)=E_{\tau\sim \pi_\theta(\tau)}[\nabla_\theta\log\pi_\theta (\tau)(r(\tau)-b)]
$$

And the variance is

$$
\begin{aligned}
\text{Var} &= E[x^2]-E[x]^2 \\
&= E_{\tau\sim \pi_\theta(\tau)}\left[(\nabla_\theta\log\pi_\theta (\tau)(r(\tau)-b))^2\right]-E_{\tau\sim \pi_\theta(\tau)}\left[\nabla_\theta\log\pi_\theta (\tau)(r(\tau)-b)\right]^2 \\
&= E_{\tau\sim \pi_\theta(\tau)}\left[(\nabla_\theta\log\pi_\theta (\tau)(r(\tau)-b))^2\right]-E_{\tau\sim \pi_\theta(\tau)}\left[\nabla_\theta\log\pi_\theta (\tau)r(\tau)\right]^2
\end{aligned}
$$

The last equality holds because the baseline is unbiased in expectation, so the second term does not depend on $b$.

Denote $g(\tau)=\nabla_\theta\log\pi_\theta (\tau)$. Then

$$
\begin{aligned}
\frac{d\,\text{Var}}{db} &= \frac{d}{db}E[g(\tau)^2(r(\tau)-b)^2] \\
&= \frac{d}{db}\left(E[g(\tau)^2r(\tau)^2]-2bE[g(\tau)^2r(\tau)]+b^2E[g(\tau)^2] \right) \\
&= \frac{d}{db}\left(-2bE[g(\tau)^2r(\tau)]+b^2E[g(\tau)^2] \right) \\
&= -2E[g(\tau)^2r(\tau)]+2bE[g(\tau)^2] = 0
\end{aligned}
$$

So $b$ should be

$$
b=\frac{E[g(\tau)^2r(\tau)]}{E[g(\tau)^2]}
$$

This is just the expected reward, but weighted by squared gradient magnitudes.

In theory, the best baseline is this gradient-weighted expected reward, but in practice there is usually no noticeable difference between the weighted expected reward and the plain average reward. Since the average is cheaper to compute, we just use the average reward as the baseline.
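For intuition, here is a toy numerical check of the theoretical claim (the distributions are contrived so that the two baselines actually differ; in real problems the gap is often negligible, as noted above):

```python
# Compare the variance of g * (r - b) for the gradient-weighted baseline
# b* = E[g^2 r] / E[g^2] versus the plain average-reward baseline.
import numpy as np

rng = np.random.default_rng(0)
g = rng.normal(size=1_000_000)               # stand-in for grad log pi_theta(tau)
r = 2.0 + g**2 + rng.normal(size=1_000_000)  # rewards correlated with g^2

b_opt = np.mean(g**2 * r) / np.mean(g**2)    # weighted expected reward (about 5)
b_avg = np.mean(r)                           # plain average reward (about 3)

print(np.var(g * (r - b_opt)))   # smaller variance
print(np.var(g * (r - b_avg)))   # larger variance
```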
