MCMC Probability Convergence Plots

Convergence can be qualitatively estimated by looking at how the log-probability changes for all of the chains as a function of iterations. Some metrics exist to help estimate convergence, but they should be used with caution, especially ones that depend on multiple independent chains, as ensemble MCMC methods like the one used by ESPEI do not have independent chains. Plotting these traces can also be helpful to estimate how many iterations to discard as burn-in, e.g. in the corner plot example.

import matplotlib.pyplot as plt
import numpy as np

lnprob = np.load("lnprob.npy")

# the following lines are optional, but useful if your traces are not full
# (i.e. your MCMC runs didn't run all their steps)
# trace = np.load("trace.npy")
# from espei.analysis import truncate_arrays
# trace, lnprob = truncate_arrays(trace, lnprob)

fig, ax = plt.subplots()
ax.plot(lnprob.T)
ax.set_title("log-probability convergence")
ax.set_xlabel("iterations")
ax.set_ylabel("lnprob")
ax.set_yscale("symlog")  # log-probabilities are often negative; symlog gives a log scale for negative numbers
fig.show()
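Once a burn-in cut point has been chosen by eye from the log-probability plot, the early iterations can be discarded by slicing the arrays. The sketch below is a minimal illustration, not ESPEI code: the array shapes (lnprob as (nchains, niterations), trace as (nchains, niterations, nparams)) and the synthetic data are assumptions standing in for the arrays loaded from lnprob.npy and trace.npy.

```python
import numpy as np

# synthetic stand-ins for demonstration; in practice these would come
# from np.load("lnprob.npy") and np.load("trace.npy")
nchains, niters, nparams = 8, 1000, 3
rng = np.random.default_rng(0)
lnprob = rng.normal(-2850, 10, size=(nchains, niters))
trace = rng.normal(size=(nchains, niters, nparams))

nburn = 200  # cut point chosen by eye from the plot (assumed value)
lnprob_kept = lnprob[:, nburn:]
trace_kept = trace[:, nburn:, :]

# flatten the chains for posterior summaries or corner plots
flat_samples = trace_kept.reshape(-1, nparams)
print(flat_samples.shape)
```

The flattened array has one row per retained sample across all chains, which is the shape plotting tools such as corner expect.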
We can zoom in using a linear scale to inspect more closely:
fig, ax = plt.subplots()
ax.plot(lnprob.T)
ax.set_title("log-probability convergence")
ax.set_xlabel("iterations")
ax.set_ylabel("lnprob")
ax.set_ylim(-3000, -2700)
fig.show()
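Beyond visual inspection, a very rough quantitative check is to compare the chain-averaged log-probability over the first and second halves of the run: if the mean is still drifting, the sampler has likely not converged. This heuristic is an assumption for illustration, not an ESPEI API, and the synthetic lnprob array stands in for the one loaded from lnprob.npy.

```python
import numpy as np

# synthetic log-probability traces of shape (nchains, niterations);
# real data would come from np.load("lnprob.npy")
rng = np.random.default_rng(42)
lnprob = rng.normal(-2850, 5, size=(8, 1000))

# average over chains at each iteration, then compare the two halves
mean_per_iter = lnprob.mean(axis=0)
half = len(mean_per_iter) // 2
drift = abs(mean_per_iter[half:].mean() - mean_per_iter[:half].mean())
print(f"drift between halves: {drift:.2f}")
# a drift much larger than the within-half scatter suggests the run is
# still climbing and needs more iterations (or a larger burn-in cut)
```

Note that this only detects gross non-stationarity; it says nothing about whether the chains have explored the posterior well.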