- sherpa.ui.plot_pvalue(null_model, alt_model, conv_model=None, id=1, otherids=(), num=500, bins=25, numcores=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Compute and plot a histogram of likelihood ratios by simulating data.
Compare the likelihood of the null model to that of an alternative model by running a number of simulations and comparing the likelihoods of the two models when fit to the observed and simulated data. The fit statistic must be set to a likelihood-based method, such as “cash” or “cstat”. Screen output is created as well as the plot; these values can be retrieved with get_pvalue_results().
null_model – The model expression for the null hypothesis.
alt_model – The model expression for the alternative hypothesis.
conv_model (optional) – An expression used to modify the model so that it can be compared to the data (e.g. a PSF or PHA response).
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
num (int, optional) – The number of simulations to run. The default is 500.
bins (int, optional) – The number of bins to use to create the histogram. The default is 25.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
TypeError – Raised if the current fit statistic is not a likelihood-based method.
Each simulation creates a fake data set by simulating the observed data with Poisson noise.
For the likelihood ratio test to be valid, the following conditions must hold:
The null model is nested within the alternative model.
The extra parameters of the alternative model have Gaussian (normal) distributions that are not truncated by the boundaries of the parameter spaces.
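The simulation procedure above can be sketched in plain NumPy. This is a hypothetical, simplified illustration, not Sherpa's implementation: the models (a constant and a constant plus a fixed bump), the 25-bin data, and the trivial "refit" of the null model are all assumptions made for the example; a real analysis would refit both models to every fake data set.

```python
import numpy as np

rng = np.random.default_rng(42)

def cash(data, model):
    # Cash likelihood statistic: 2 * sum(model - data * log(model))
    return 2.0 * np.sum(model - data * np.log(model))

# "Observed" counts drawn from a flat model of 10 counts per bin
truth = np.full(25, 10.0)
observed = rng.poisson(truth)

# Null model: best-fit constant; alternative: constant plus a fixed bump
null_model = np.full(25, observed.mean())
bump = np.where(np.arange(25) == 12, 2.0, 0.0)
alt_model = null_model + bump

# Likelihood ratio for the observed data
lr_obs = cash(observed, null_model) - cash(observed, alt_model)

# Simulate data sets from the null model and collect the ratios
ratios = []
for _ in range(500):
    fake = rng.poisson(null_model)
    fit_null = np.full(25, fake.mean())  # trivial "refit" of the null model
    ratios.append(cash(fake, fit_null) - cash(fake, fit_null + bump))

# p-value: fraction of simulated ratios at least as extreme as observed
pval = np.mean(np.asarray(ratios) >= lr_obs)
```

A histogram of `ratios`, with `lr_obs` marked, corresponds to the plot that plot_pvalue produces.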
Use the likelihood ratio to see if the data in data set 1 has a statistically significant Gaussian component:
>>> create_model_component('powlaw1d', 'pl')
>>> create_model_component('gauss1d', 'gline')
>>> plot_pvalue(pl, pl + gline)
Use 1000 simulations with the data from data sets ‘core’, ‘jet1’, and ‘jet2’:
>>> mdl1 = pl
>>> mdl2 = pl + gline
>>> plot_pvalue(mdl1, mdl2, id='core', otherids=('jet1', 'jet2'),
...             num=1000)
Apply a convolution to the models before fitting:
>>> rsp = get_psf()
>>> plot_pvalue(mdl1, mdl2, conv_model=rsp)