Session
- class sherpa.ui.utils.Session[source]
Bases: NoNewAttributesAfterInit
Methods Summary
add_model(modelclass[, args, kwargs]): Create a user-defined model class.
add_user_pars(modelname, parnames[, ...]): Add parameter information to a user model.
calc_chisqr([id]): Calculate the per-bin chi-squared statistic.
calc_stat([id]): Calculate the fit statistic for a data set.
calc_stat_info(): Display the statistic values for the current models.
clean(): Clear out the current Sherpa session.
conf(*args): Estimate parameter confidence intervals using the confidence method.
confidence(*args): Estimate parameter confidence intervals using the confidence method.
contour(*args, **kwargs): Create a contour plot for an image data set.
contour_data([id, replot, overcontour]): Contour the values of an image data set.
contour_fit([id, replot, overcontour]): Contour the fit to a data set.
contour_fit_resid([id, replot, overcontour]): Contour the fit and the residuals to a data set.
contour_kernel([id, replot, overcontour]): Contour the kernel applied to the model of an image data set.
contour_model([id, replot, overcontour]): Create a contour plot of the model.
contour_psf([id, replot, overcontour]): Contour the PSF applied to the model of an image data set.
contour_ratio([id, replot, overcontour]): Contour the ratio of data to model.
contour_resid([id, replot, overcontour]): Contour the residuals of the fit.
contour_source([id, replot, overcontour]): Create a contour plot of the unconvolved spatial model.
copy_data(fromid, toid): Copy a data set, creating a new identifier.
covar(*args): Estimate parameter confidence intervals using the covariance method.
covariance(*args): Estimate parameter confidence intervals using the covariance method.
create_model_component([typename, name]): Create a model component.
dataspace1d(start, stop[, step, numbins, ...]): Create the independent axis for a 1D data set.
dataspace2d(dims[, id, dstype]): Create the independent axis for a 2D data set.
delete_data([id]): Delete a data set by identifier.
delete_model([id]): Delete the model expression for a data set.
delete_model_component(name): Delete a model component.
delete_psf([id]): Delete the PSF model for a data set.
fake([id, method]): Simulate a data set.
fit([id]): Fit a model to one or more data sets.
freeze(*args): Fix model parameters so they are not changed by a fit.
get_cdf_plot(): Return the data used to plot the last CDF.
get_chisqr_plot([id, recalc]): Return the data used by plot_chisqr.
get_conf(): Return the confidence-interval estimation object.
get_conf_opt([name]): Return one or all of the options for the confidence interval method.
get_conf_results(): Return the results of the last conf run.
get_confidence_results(): Return the results of the last conf run.
get_covar(): Return the covariance estimation object.
get_covar_opt([name]): Return one or all of the options for the covariance method.
get_covar_results(): Return the results of the last covar run.
get_covariance_results(): Return the results of the last covar run.
get_data([id]): Return the data set by identifier.
get_data_contour([id, recalc]): Return the data used by contour_data.
get_data_contour_prefs(): Return the preferences for contour_data.
get_data_image([id]): Return the data used by image_data.
get_data_plot([id, recalc]): Return the data used by plot_data.
get_data_plot_prefs([id]): Return the preferences for plot_data.
get_default_id(): Return the default data set identifier.
get_delchi_plot([id, recalc]): Return the data used by plot_delchi.
get_dep([id, filter]): Return the dependent axis of a data set.
get_dims([id, filter]): Return the dimensions of the data set.
get_draws([id, otherids, niter, covar_matrix]): Run the pyBLoCXS MCMC algorithm.
get_error([id, filter]): Return the errors on the dependent axis of a data set.
get_filter([id]): Return the filter expression for a data set.
get_fit_contour([id, recalc]): Return the data used by contour_fit.
get_fit_plot([id, recalc]): Return the data used to create the fit plot.
get_fit_results(): Return the results of the last fit.
get_functions(): Return the functions provided by Sherpa.
get_indep([id]): Return the independent axes of a data set.
get_int_proj([par, id, otherids, recalc, ...]): Return the interval-projection object.
get_int_unc([par, id, otherids, recalc, ...]): Return the interval-uncertainty object.
get_iter_method_name(): Return the name of the iterative fitting scheme.
get_iter_method_opt([optname]): Return one or all options for the iterative-fitting scheme.
get_kernel_contour([id, recalc]): Return the data used by contour_kernel.
get_kernel_image([id]): Return the data used by image_kernel.
get_kernel_plot([id, recalc]): Return the data used by plot_kernel.
get_method([name]): Return an optimization method.
get_method_name(): Return the name of current Sherpa optimization method.
get_method_opt([optname]): Return one or all of the options for the current optimization method.
get_model([id]): Return the model expression for a data set.
get_model_autoassign_func(): Return the method used to create model component identifiers.
get_model_component(name): Returns a model component given its name.
get_model_component_image(id[, model]): Return the data used by image_model_component.
get_model_component_plot(id[, model, recalc]): Return the data used to create the model-component plot.
get_model_contour([id, recalc]): Return the data used by contour_model.
get_model_contour_prefs(): Return the preferences for contour_model.
get_model_image([id]): Return the data used by image_model.
get_model_pars(model): Return the names of the parameters of a model.
get_model_plot([id, recalc]): Return the data used to create the model plot.
get_model_plot_prefs([id]): Return the preferences for plot_model.
get_model_type(model): Describe a model expression.
get_num_par([id]): Return the number of parameters in a model expression.
get_num_par_frozen([id]): Return the number of frozen parameters in a model expression.
get_num_par_thawed([id]): Return the number of thawed parameters in a model expression.
get_par(par): Return a parameter of a model component.
get_pdf_plot(): Return the data used to plot the last PDF.
get_prior(par): Return the prior function for a parameter (MCMC).
get_proj(): Return the confidence-interval estimation object.
get_proj_opt([name]): Return one or all of the options for the confidence interval method.
get_proj_results(): Return the results of the last proj run.
get_projection_results(): Return the results of the last proj run.
get_psf([id]): Return the PSF model defined for a data set.
get_psf_contour([id, recalc]): Return the data used by contour_psf.
get_psf_image([id]): Return the data used by image_psf.
get_psf_plot([id, recalc]): Return the data used by plot_psf.
get_pvalue_plot([null_model, alt_model, ...]): Return the data used by plot_pvalue.
get_pvalue_results(): Return the data calculated by the last plot_pvalue call.
get_ratio_contour([id, recalc]): Return the data used by contour_ratio.
get_ratio_image([id]): Return the data used by image_ratio.
get_ratio_plot([id, recalc]): Return the data used by plot_ratio.
get_reg_proj([par0, par1, id, otherids, ...]): Return the region-projection object.
get_reg_unc([par0, par1, id, otherids, ...]): Return the region-uncertainty object.
get_resid_contour([id, recalc]): Return the data used by contour_resid.
get_resid_image([id]): Return the data used by image_resid.
get_resid_plot([id, recalc]): Return the data used by plot_resid.
get_sampler(): Return the current MCMC sampler options.
get_sampler_name(): Return the name of the current MCMC sampler.
get_sampler_opt(opt): Return an option of the current MCMC sampler.
get_scatter_plot(): Return the data used to plot the last scatter plot.
get_source([id]): Return the source model expression for a data set.
get_source_component_image(id[, model]): Return the data used by image_source_component.
get_source_component_plot(id[, model, recalc]): Return the data used by plot_source_component.
get_source_contour([id, recalc]): Return the data used by contour_source.
get_source_image([id]): Return the data used by image_source.
get_source_plot([id, recalc]): Return the data used to create the source plot.
get_split_plot(): Return the plot attributes for displays with multiple plots.
get_stat([name]): Return the fit statistic.
get_stat_info(): Return the statistic values for the current models.
get_stat_name(): Return the name of the current fit statistic.
get_staterror([id, filter]): Return the statistical error on the dependent axis of a data set.
get_syserror([id, filter]): Return the systematic error on the dependent axis of a data set.
get_trace_plot(): Return the data used to plot the last trace.
guess([id, model, limits, values]): Estimate the parameter values and ranges given the loaded data.
ignore([lo, hi]): Exclude data from the fit.
ignore_id(ids[, lo, hi]): Exclude data from the fit for a data set.
image_close(): Close the image viewer.
image_data([id, newframe, tile]): Display a data set in the image viewer.
image_deleteframes(): Delete all the frames open in the image viewer.
image_fit([id, newframe, tile, deleteframes]): Display the data, model, and residuals for a data set in the image viewer.
image_getregion([coord]): Return the region defined in the image viewer.
image_kernel([id, newframe, tile]): Display the 2D kernel for a data set in the image viewer.
image_model([id, newframe, tile]): Display the model for a data set in the image viewer.
image_model_component(id[, model, newframe, ...]): Display a component of the model in the image viewer.
image_open(): Start the image viewer.
image_psf([id, newframe, tile]): Display the 2D PSF model for a data set in the image viewer.
image_ratio([id, newframe, tile]): Display the ratio (data/model) for a data set in the image viewer.
image_resid([id, newframe, tile]): Display the residuals (data - model) for a data set in the image viewer.
image_setregion(reg[, coord]): Set the region to display in the image viewer.
image_source([id, newframe, tile]): Display the source expression for a data set in the image viewer.
image_source_component(id[, model, ...]): Display a component of the source expression in the image viewer.
image_xpaget(arg): Return the result of an XPA call to the image viewer.
image_xpaset(arg[, data]): Return the result of an XPA call to the image viewer.
int_proj(par[, id, otherids, replot, fast, ...]): Calculate and plot the fit statistic versus fit parameter value.
int_unc(par[, id, otherids, replot, min, ...]): Calculate and plot the fit statistic versus fit parameter value.
link(par, val): Link a parameter to a value.
list_data_ids(): List the identifiers for the loaded data sets.
list_functions([outfile, clobber]): Display the functions provided by Sherpa.
list_iter_methods(): List the iterative fitting schemes.
list_methods(): List the optimization methods.
list_model_components(): List the names of all the model components.
list_model_ids(): List of all the data sets with a source expression.
list_models([show]): List the available model types.
list_priors(): Return the priors set for model parameters, if any.
list_psf_ids(): List of all the data sets with a PSF.
list_samplers(): List the MCMC samplers.
list_stats(): List the fit statistics.
load_arrays(id, *args): Create a data set from array values.
load_conv(modelname, filename_or_model, ...): Load a 1D convolution model.
load_data(id[, filename, ncols, colkeys, ...]): Load a data set from an ASCII file.
load_filter(id[, filename, ignore, ncols]): Load the filter array from an ASCII file and add to a data set.
load_psf(modelname, filename_or_model, ...): Create a PSF model.
load_staterror(id[, filename, ncols]): Load the statistical errors from an ASCII file.
load_syserror(id[, filename, ncols]): Load the systematic errors from an ASCII file.
load_table_model(modelname, filename[, ...]): Load ASCII tabular data and use it as a model component.
load_template_interpolator(name, ...): Set the template interpolation scheme.
load_template_model(modelname, templatefile): Load a set of templates and use it as a model component.
load_user_model(func, modelname[, filename, ...]): Create a user-defined model.
load_user_stat(statname, calc_stat_func[, ...]): Create a user-defined statistic.
normal_sample([num, sigma, correlate, id, ...]): Sample the fit statistic by taking the parameter values from a normal distribution.
notice([lo, hi]): Include data in the fit.
notice_id(ids[, lo, hi]): Include data from the fit for a data set.
paramprompt([val]): Should the user be asked for the parameter values when creating a model?
plot(*args, **kwargs): Create one or more plot types.
plot_cdf(points[, name, xlabel, replot, ...]): Plot the cumulative density function of an array of values.
plot_chisqr([id, replot, overplot, clearwindow]): Plot the chi-squared value for each point in a data set.
plot_data([id, replot, overplot, clearwindow]): Plot the data values.
plot_delchi([id, replot, overplot, clearwindow]): Plot the ratio of residuals to error for a data set.
plot_fit([id, replot, overplot, clearwindow]): Plot the fit results (data, model) for a data set.
plot_fit_delchi([id, replot, overplot, ...]): Plot the fit results, and the residuals, for a data set.
plot_fit_ratio([id, replot, overplot, ...]): Plot the fit results, and the ratio of data to model, for a data set.
plot_fit_resid([id, replot, overplot, ...]): Plot the fit results, and the residuals, for a data set.
plot_kernel([id, replot, overplot, clearwindow]): Plot the 1D kernel applied to a data set.
plot_model([id, replot, overplot, clearwindow]): Plot the model for a data set.
plot_model_component(id[, model, replot, ...]): Plot a component of the model for a data set.
plot_pdf(points[, name, xlabel, bins, ...]): Plot the probability density function of an array of values.
plot_psf([id, replot, overplot, clearwindow]): Plot the 1D PSF model applied to a data set.
plot_pvalue(null_model, alt_model[, ...]): Compute and plot a histogram of likelihood ratios by simulating data.
plot_ratio([id, replot, overplot, clearwindow]): Plot the ratio of data to model for a data set.
plot_resid([id, replot, overplot, clearwindow]): Plot the residuals (data - model) for a data set.
plot_scatter(x, y[, name, xlabel, ylabel, ...]): Create a scatter plot.
plot_source([id, replot, overplot, clearwindow]): Plot the source expression for a data set.
plot_source_component(id[, model, replot, ...]): Plot a component of the source expression for a data set.
plot_trace(points[, name, xlabel, replot, ...]): Create a trace plot of row number versus value.
proj(*args): Estimate parameter confidence intervals using the projection method.
projection(*args): Estimate parameter confidence intervals using the projection method.
reg_proj(par0, par1[, id, otherids, replot, ...]): Plot the statistic value as two parameters are varied.
reg_unc(par0, par1[, id, otherids, replot, ...]): Plot the statistic value as two parameters are varied.
reset([model, id]): Reset the model parameters to their default settings.
restore([filename]): Load in a Sherpa session from a file.
save([filename, clobber]): Save the current Sherpa session to a file.
save_arrays(filename, args[, fields, ...]): Write a list of arrays to an ASCII file.
save_data(id[, filename, fields, sep, ...]): Save the data to a file.
save_delchi(id[, filename, clobber, sep, ...]): Save the ratio of residuals (data-model) to error to a file.
save_error(id[, filename, clobber, sep, ...]): Save the errors to a file.
save_filter(id[, filename, clobber, sep, ...]): Save the filter array to a file.
save_model(id[, filename, clobber, sep, ...]): Save the model values to a file.
save_resid(id[, filename, clobber, sep, ...]): Save the residuals (data-model) to a file.
save_source(id[, filename, clobber, sep, ...]): Save the model values to a file.
save_staterror(id[, filename, clobber, sep, ...]): Save the statistical errors to a file.
save_syserror(id[, filename, clobber, sep, ...]): Save the systematic errors to a file.
set_conf_opt(name, val): Set an option for the confidence interval method.
set_covar_opt(name, val): Set an option for the covariance method.
set_data(id[, data]): Set a data set.
set_default_id(id): Set the default data set identifier.
set_dep(id[, val]): Set the dependent axis of a data set.
set_filter(id[, val, ignore]): Set the filter array of a data set.
set_full_model(id[, model]): Define the convolved model expression for a data set.
set_iter_method(meth): Set the iterative-fitting scheme used in the fit.
set_iter_method_opt(optname, val): Set an option for the iterative-fitting scheme.
set_method(meth): Set the optimization method.
set_method_opt(optname, val): Set an option for the current optimization method.
set_model(id[, model]): Set the source model expression for a data set.
set_model_autoassign_func([func]): Set the method used to create model component identifiers.
set_par(par[, val, min, max, frozen]): Set the value, limits, or behavior of a model parameter.
set_prior(par, prior): Set the prior function to use with a parameter.
set_proj_opt(name, val): Set an option for the projection method.
set_psf(id[, psf]): Add a PSF model to a data set.
set_sampler(sampler): Set the MCMC sampler.
set_sampler_opt(opt, value): Set an option for the current MCMC sampler.
set_source(id[, model]): Set the source model expression for a data set.
set_stat(stat): Set the statistical method.
set_staterror(id[, val, fractional]): Set the statistical errors on the dependent axis of a data set.
set_syserror(id[, val, fractional]): Set the systematic errors on the dependent axis of a data set.
set_xlinear([plottype]): New plots will display a linear X axis.
set_xlog([plottype]): New plots will display a logarithmically-scaled X axis.
set_ylinear([plottype]): New plots will display a linear Y axis.
set_ylog([plottype]): New plots will display a logarithmically-scaled Y axis.
show_all([id, outfile, clobber]): Report the current state of the Sherpa session.
show_conf([outfile, clobber]): Display the results of the last conf evaluation.
show_covar([outfile, clobber]): Display the results of the last covar evaluation.
show_data([id, outfile, clobber]): Summarize the available data sets.
show_filter([id, outfile, clobber]): Show any filters applied to a data set.
show_fit([outfile, clobber]): Summarize the fit results.
show_kernel([id, outfile, clobber]): Display any kernel applied to a data set.
show_method([outfile, clobber]): Display the current optimization method and options.
show_model([id, outfile, clobber]): Display the model expression used to fit a data set.
show_proj([outfile, clobber]): Display the results of the last proj evaluation.
show_psf([id, outfile, clobber]): Display any PSF model applied to a data set.
show_source([id, outfile, clobber]): Display the source model expression for a data set.
show_stat([outfile, clobber]): Display the current fit statistic.
simulfit([id]): Fit a model to one or more data sets.
t_sample([num, dof, id, otherids, numcores]): Sample the fit statistic by taking the parameter values from a Student's t-distribution.
thaw(*args): Allow model parameters to be varied during a fit.
uniform_sample([num, factor, id, otherids, ...]): Sample the fit statistic by taking the parameter values from a uniform distribution.
unlink(par): Unlink a parameter value.
unpack_arrays(*args): Create a sherpa data object from arrays of data.
unpack_data(filename[, ncols, colkeys, ...]): Create a sherpa data object from an ASCII file.
Methods Documentation
- add_model(modelclass, args=(), kwargs={})[source]
Create a user-defined model class.
Create a model from a class. The name of the class can then be used to create model components - e.g. with set_model or create_model_component - as with any existing Sherpa model.
- Parameters:
modelclass – A class derived from sherpa.models.model.ArithmeticModel. This class defines the functional form and the parameters of the model.
args – Arguments for the class constructor.
kwargs – Keyword arguments for the class constructor.
See also
create_model_component
Create a model component.
list_models
List the available model types.
load_table_model
Load tabular data and use it as a model component.
load_user_model
Create a user-defined model.
set_model
Set the source model expression for a data set.
Notes
The load_user_model function is designed to make it easy to add a model, but the interface is not the same as the existing models (such as having to call both load_user_model and add_user_pars for each new instance). The add_model function is used to add a model as a Python class, which is more work to set up, but then acts the same way as the existing models.
Examples
The following example creates a model type called “mygauss1d” which will behave exactly the same as the existing “gauss1d” model. Normally the class used with add_model would add new functionality.
>>> from sherpa.models import Gauss1D
>>> class MyGauss1D(Gauss1D):
...     pass
...
>>> add_model(MyGauss1D)
>>> set_source(mygauss1d.g1 + mygauss1d.g2)
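For a class that does more than subclass an existing model, the parameters and evaluation are normally defined directly. The following is only a sketch, assuming the sherpa.models.model.ArithmeticModel and sherpa.models.parameter.Parameter interfaces; the MyLine class and its parameter names are illustrative, not part of Sherpa.
>>> import numpy as np
>>> from sherpa.models.model import ArithmeticModel
>>> from sherpa.models.parameter import Parameter
>>> class MyLine(ArithmeticModel):
...     """A straight line: slope * x + offset."""
...     def __init__(self, name='myline'):
...         # Each parameter is stored as an attribute and registered with the model.
...         self.slope = Parameter(name, 'slope', 1)
...         self.offset = Parameter(name, 'offset', 0)
...         ArithmeticModel.__init__(self, name, (self.slope, self.offset))
...     def calc(self, pars, x, *args, **kwargs):
...         # pars holds the current parameter values, in the order given above.
...         slope, offset = pars
...         return slope * np.asarray(x) + offset
...
>>> add_model(MyLine)
>>> set_source(myline.lin1)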
- add_user_pars(modelname, parnames, parvals=None, parmins=None, parmaxs=None, parunits=None, parfrozen=None)[source]
Add parameter information to a user model.
- Parameters:
modelname (str) – The name of the user model (created by load_user_model).
parnames (array of str) – The names of the parameters. The order of all the parameter arrays must match that expected by the model function (the first argument to load_user_model).
parvals (array of number, optional) – The default values of the parameters. If not given each parameter is set to 0.
parmins (array of number, optional) – The minimum values of the parameters (hard limit). The default value is -3.40282e+38.
parmaxs (array of number, optional) – The maximum values of the parameters (hard limit). The default value is 3.40282e+38.
parunits (array of str, optional) – The units of the parameters. This is only used in screen output (i.e. is informational in nature).
parfrozen (array of bool, optional) – Should each parameter be frozen. The default is that all parameters are thawed.
See also
add_model
Create a user-defined model class.
load_user_model
Create a user-defined model.
set_par
Set the value, limits, or behavior of a model parameter.
Notes
The parameters must be specified in the order that the function expects. That is, if the function has two parameters, pars[0]=’slope’ and pars[1]=’y_intercept’, then the call to add_user_pars must use the order [“slope”, “y_intercept”].
Examples
Create a user model for the function profile called “myprof”, which has two parameters called “core” and “ampl”, both of which will start with a value of 0.
>>> load_user_model(profile, "myprof")
>>> add_user_pars("myprof", ["core", "ampl"])
Set the starting values, minimum values, and whether or not the parameter is frozen by default, for the “prof” model:
>>> pnames = ["core", "ampl", "intflag"]
>>> pvals = [10, 200, 1]
>>> pmins = [0.01, 0, 0]
>>> pfreeze = [False, False, True]
>>> add_user_pars("prof", pnames, pvals,
...               parmins=pmins, parfrozen=pfreeze)
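The model function passed to load_user_model receives the parameter values in the same order as parnames. A minimal sketch of a function matching the “myprof” example above (the exponential form and starting values here are purely illustrative):
>>> import numpy as np
>>> def profile(pars, x):
...     # pars arrives in the add_user_pars order: [core, ampl]
...     core, ampl = pars
...     return ampl * np.exp(-np.abs(np.asarray(x)) / core)
...
>>> load_user_model(profile, "myprof")
>>> add_user_pars("myprof", ["core", "ampl"], parvals=[1.0, 10.0])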
- calc_chisqr(id=None, *otherids)[source]
Calculate the per-bin chi-squared statistic.
Evaluate the model for one or more data sets, compare it to the data using the current statistic, and return an array of chi-squared values for each bin. No fitting is done, as the current model parameters, and any filters, are used.
- Parameters:
- Returns:
chisq – The chi-square value for each bin of the data, using the current statistic (as set by set_stat). A value of None is returned if the statistic is not a chi-square distribution.
- Return type:
array or None
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
set_stat
Set the statistical method.
Notes
The output array length equals the sum of the array lengths of the requested data sets.
Examples
When called with no arguments, the return value is the chi-squared statistic for each bin in the data sets which have a defined model.
>>> calc_chisqr()
Supplying a specific data set ID to calc_chisqr - such as “1” or “src” - will return the chi-squared statistic array for only that data set.
>>> calc_chisqr(1)
>>> calc_chisqr("src")
Restrict the calculation to just datasets 1 and 3:
>>> calc_chisqr(1, 3)
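Since the return value is a per-bin array, its sum matches the total statistic reported by calc_stat when a chi-square statistic is in use; a quick check (assuming such a statistic has been selected with set_stat):
>>> import numpy as np
>>> chisq = calc_chisqr()
>>> print(np.sum(chisq), calc_stat())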
- calc_stat(id=None, *otherids)[source]
Calculate the fit statistic for a data set.
Evaluate the model for one or more data sets, compare it to the data using the current statistic, and return the value. No fitting is done, as the current model parameters, and any filters, are used.
- Parameters:
- Returns:
stat – The current statistic value.
- Return type:
number
See also
calc_chisqr
Calculate the per-bin chi-squared statistic.
calc_stat_info
Display the statistic values for the current models.
set_stat
Set the statistical method.
Examples
Calculate the statistic for the model and data in the default data set:
>>> stat = calc_stat()
Find the statistic for data set 3:
>>> stat = calc_stat(3)
When fitting to multiple data sets, you can get the contribution to the total fit statistic from only one data set, or from several by listing the datasets explicitly. The following finds the contribution from the data sets labelled “core” and “jet”:
>>> stat = calc_stat("core", "jet")
Calculate the statistic value using two different statistics:
>>> set_stat('cash')
>>> s1 = calc_stat()
>>> set_stat('cstat')
>>> s2 = calc_stat()
- calc_stat_info()[source]
Display the statistic values for the current models.
Displays the statistic value for each data set, and the combined fit, using the current set of models, parameters, and ranges. The output is printed to stdout, and so is intended for use in interactive analysis. The get_stat_info function returns the same information but as an array of Python structures.
See also
calc_stat
Calculate the fit statistic for a data set.
get_stat_info
Return the statistic values for the current models.
Notes
If a fit to a particular data set has not been made, or values - such as parameter settings, the noticed data range, or choice of statistic - have been changed since the last fit, then the results for that data set may not be meaningful and will therefore bias the simultaneous results.
The information returned by calc_stat_info includes:
- Dataset
The dataset identifier (or identifiers).
- Statistic
The name of the statistic used to calculate the results.
- Fit statistic value
The current fit statistic value.
- Data points
The number of bins used in the fit.
- Degrees of freedom
The number of bins minus the number of thawed parameters.
Some fields are only returned for a subset of statistics:
- Probability (Q-value)
A measure of the probability that one would observe the reduced statistic value, or a larger value, if the assumed model is true and the best-fit model parameters are the true parameter values.
- Reduced statistic
The fit statistic value divided by the number of degrees of freedom.
Examples
>>> calc_stat_info()
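To use these values in a script rather than read them from the screen, call get_stat_info, which returns the same quantities as objects; the attribute names below (statname, statval, numpoints, dof, rstat) are the ones used by recent Sherpa releases:
>>> for res in get_stat_info():
...     print(res.ids, res.statname, res.statval, res.numpoints, res.dof, res.rstat)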
- clean()[source]
Clear out the current Sherpa session.
The clean function removes all data sets and model assignments, and restores the default settings for the optimisation and fit statistic.
Changed in version 4.15.0: The model names are now removed from the global symbol table.
See also
save
Save the current Sherpa session to a file.
restore
Load in a Sherpa session from a file.
sherpa.astro.ui.save_all
Save the Sherpa session as an ASCII file.
Examples
>>> clean()
After the call to clean, the line and bgnd variables will be removed, so accessing them would cause a NameError.
>>> set_source(gauss1d.line + const1d.bgnd)
>>> bgnd.c0.min = 0
>>> print(line)
>>> clean()
- conf(*args)[source]
Estimate parameter confidence intervals using the confidence method.
The conf command computes confidence interval bounds for the specified model parameters in the dataset. A given parameter’s value is varied along a grid of values while the values of all the other thawed parameters are allowed to float to new best-fit values. The get_conf and set_conf_opt commands can be used to configure the error analysis; an example being changing the ‘sigma’ field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_conf_results routine can be used to retrieve the results.
- Parameters:
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example conf(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
covar
Estimate the confidence intervals using the covariance method.
get_conf
Return the confidence-interval estimation object.
get_conf_results
Return the results of the last conf run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_conf_opt
Set an option of the conf estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The conf function is different to covar, in that all other thawed parameters are allowed to float to new best-fit values, instead of being fixed to the initial best-fit values as they are in covar. While conf is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
The conf function is a replacement for the proj function, which uses a different algorithm to estimate parameter confidence limits.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter’s values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
conf
may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.As the calculation can be computer intensive, the default behavior is to use all available CPU cores to speed up the analysis. This can be changed be varying the
numcores
option - or settingparallel
toFalse
- either withset_conf_opt
orget_conf
.As
conf
estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma = the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with thesigma
option toset_conf_opt
orget_conf
.The limit calculated by
conf
is basically a 1-dimensional root in the translated coordinate system (translated by the value of the statistic at the minimum plus sigma^2). The Taylor series expansion of the multi-dimensional function at the minimum is:f(x + dx) ~ f(x) + grad( f(x) )^T dx + dx^T Hessian( f(x) ) dx + ...
where x is understood to be the n-dimensional vector representing the free parameters to be fitted and the super-script ‘T’ is the transpose of the row-vector. At or near the minimum, the gradient of the function is zero or negligible, respectively. So the leading term of the expansion is quadratic. The best root finding algorithm for a curve which is approximately parabolic is Muller’s method [1]_. Muller’s method is a generalization of the secant method [2]_: the secant method is an iterative root finding method that approximates the function by a straight line through two points, whereas Muller’s method is an iterative root finding method that approxmiates the function by a quadratic polynomial through three points.
Three data points are the minimum input to Muller’s root finding method. The first point to be submitted to the Muller’s root finding method is the point at the minimum. To strategically choose the other two data points, the confidence function uses the output from covariance as the second data point. To generate the third data points for the input to Muller’s root finding method, the secant root finding method is used since it only requires two data points to generate the next best approximation of the root.
However, there are cases where conf cannot locate the root even though the root is bracketed within an interval (perhaps due to the bad resolution of the data). In such cases, when the option openinterval is set to False (which is the default), the routine will print a warning message about not being able to find the root within the set tolerance and the function will return the average of the open interval which brackets the root. If openinterval is set to True then conf will print the minimal open interval which brackets the root (not to be confused with the lower and upper bound of the confidence interval). The most accurate thing to do is to return an open interval where the root is localized/bracketed rather than the average of the open interval (since the average of the interval is not a root within the specified tolerance).
References
Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable res.
>>> conf()
>>> res = get_conf_results()
Only evaluate the parameters associated with data set 2:
>>> conf(2)
Only evaluate the intervals for the pos.xpos and pos.ypos parameters:
>>> conf(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_conf_opt('sigma', 1.6)
>>> conf()
Only evaluate the clus.kt parameter for the data sets with identifiers “obs1”, “obs5”, and “obs6”. This will still use the 1.6 sigma setting from the previous run.
>>> conf("obs1", "obs5", "obs6", clus.kt)
Only use two cores when evaluating the errors for the parameters used in the model for data set 3:
>>> set_conf_opt('numcores', 2)
>>> conf(3)
Estimate the errors for all the thawed parameters from the line model and the clus.kt parameter for datasets 1, 3, and 4:
>>> conf(1, 3, 4, line, clus.kt)
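The limits computed by conf can be read back from the object returned by get_conf_results; a short sketch, assuming the parnames, parvals, parmins, and parmaxes attributes provided by recent Sherpa releases:
>>> conf()
>>> res = get_conf_results()
>>> for name, val, lo, hi in zip(res.parnames, res.parvals,
...                              res.parmins, res.parmaxes):
...     print(name, val, lo, hi)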
- confidence(*args)
Estimate parameter confidence intervals using the confidence method.
The confidence function is an alias for conf: its parameters, options, behaviour, and examples are identical to those documented for conf above.
- contour(*args, **kwargs)[source]
Create a contour plot for an image data set.
Create one or more contour plots, depending on the arguments it is sent: a plot type, followed by an optional data set identifier, and this can be repeated. If no data set identifier is given for a plot type, the default identifier - as returned by get_default_id - is used. This is for 2D data sets.
Changed in version 4.12.2: Keyword arguments, such as alpha, can be sent to each plot.
- Raises:
sherpa.utils.err.DataErr – The data set does not support the requested plot type.
See also
contour_data
Contour the values of an image data set.
contour_fit
Contour the fit to a data set.
contour_fit_resid
Contour the fit and the residuals to a data set.
contour_kernel
Contour the kernel applied to the model of an image data set.
contour_model
Contour the values of the model, including any PSF.
contour_psf
Contour the PSF applied to the model of an image data set.
contour_ratio
Contour the ratio of data to model.
contour_resid
Contour the residuals of the fit.
contour_source
Contour the values of the model, without any PSF.
get_default_id
Return the default data set identifier.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Notes
The supported plot types depend on the data set type, and include the following list. There are also individual functions, with contour_ prepended to the plot type, such as contour_data and the contour_fit_resid variant:
data
The data.
fit
Contours of the data and the source model.
fit_resid
Two plots: the first is the contours of the data and the source model and the second is the residuals.
kernel
The kernel.
model
The source model including any PSF convolution set by set_psf.
psf
The PSF.
ratio
Contours of the ratio image, formed by dividing the data by the model.
resid
Contours of the residual image, formed by subtracting the model from the data.
source
The source model (without any PSF convolution set by set_psf).
The keyword arguments are sent to each plot (so care must be taken to ensure they are valid for all plots).
Examples
>>> contour('data')
>>> contour('data', 1, 'data', 2)
>>> contour('data', 'model')
>>> contour('data', 'model', 'fit', 'resid')
>>> contour('data', 'model', alpha=0.7)
- contour_data(id=None, replot=False, overcontour=False, **kwargs)[source]
Contour the values of an image data set.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_data. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_data_contour
Return the data used by contour_data.
get_data_contour_prefs
Return the preferences for contour_data.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the data from the default data set:
>>> contour_data()
Contour the data and then overplot the data from the second data set:
>>> contour_data()
>>> contour_data(2, overcontour=True)
- contour_fit(id=None, replot=False, overcontour=False, **kwargs)[source]
Contour the fit to a data set.
Overplot the model - including any PSF - on the data. The preferences are the same as contour_data and contour_model.
- Parameters:
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_fit. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_fit_contour
Return the data used by contour_fit.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the fit for the default data set:
>>> contour_fit()
Overplot the fit to data set ‘s2’ on that of the default data set:
>>> contour_fit()
>>> contour_fit('s2', overcontour=True)
- contour_fit_resid(id=None, replot=False, overcontour=False, **kwargs)[source]
Contour the fit and the residuals to a data set.
Overplot the model - including any PSF - on the data. In a separate plot contour the residuals. The preferences are the same as contour_data and contour_model.
- Parameters:
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_fit_resid. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_fit_contour
Return the data used by contour_fit.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_fit
Contour the fit to a data set.
contour_resid
Contour the residuals of the fit.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the fit and residuals for the default data set:
>>> contour_fit_resid()
- contour_kernel(id=None, replot=False, overcontour=False, **kwargs)[source]
Contour the kernel applied to the model of an image data set.
If the data set has no PSF applied to it, the model is displayed.
- Parameters:
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_kernel. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_psf_contour
Return the data used by contour_psf.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_psf
Contour the PSF applied to the model of an image data set.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
- contour_model(id=None, replot=False, overcontour=False, **kwargs)[source]
Create a contour plot of the model.
Displays a contour plot of the values of the model, evaluated on the data, including any PSF kernel convolution (if set).
- Parameters:
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_model. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_model_contour
Return the data used by contour_model.
get_model_contour_prefs
Return the preferences for contour_model.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_source
Create a contour plot of the unconvolved spatial model.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
Examples
Plot the model from the default data set:
>>> contour_model()
Compare the model without and with the PSF component, for the “img” data set:
>>> contour_source("img")
>>> contour_model("img", overcontour=True)
- contour_psf(id=None, replot=False, overcontour=False, **kwargs)[source]
Contour the PSF applied to the model of an image data set.
If the data set has no PSF applied to it, the model is displayed.
- Parameters:
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_psf. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_psf_contour
Return the data used by contour_psf.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_kernel
Contour the kernel applied to the model of an image data set.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
- contour_ratio(id=None, replot=False, overcontour=False, **kwargs)[source]
Contour the ratio of data to model.
The ratio image is formed by dividing the data by the current model, including any PSF. The preferences are the same as contour_data.
- Parameters:
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_ratio. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_ratio_contour
Return the data used by contour_ratio.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the ratio from the default data set:
>>> contour_ratio()
Overplot the ratio on the residuals:
>>> contour_resid('img')
>>> contour_ratio('img', overcontour=True)
- contour_resid(id=None, replot=False, overcontour=False, **kwargs)[source]
Contour the residuals of the fit.
The residuals are formed by subtracting the current model - including any PSF - from the data. The preferences are the same as contour_data.
- Parameters:
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_resid. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_resid_contour
Return the data used by contour_resid.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the residuals from the default data set:
>>> contour_resid()
Overplot the residuals on the model:
>>> contour_model('img')
>>> contour_resid('img', overcontour=True)
- contour_source(id=None, replot=False, overcontour=False, **kwargs)[source]
Create a contour plot of the unconvolved spatial model.
Displays a contour plot of the values of the model, evaluated on the data, without any PSF kernel convolution applied. The preferences are the same as contour_model.
- Parameters:
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_source. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_source_contour
Return the data used by contour_source.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_model
Create a contour plot of the model.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
Examples
Plot the model from the default data set:
>>> contour_source()
Compare the model without and with the PSF component, for the “img” data set:
>>> contour_model("img")
>>> contour_source("img", overcontour=True)
- copy_data(fromid, toid)[source]
Copy a data set, creating a new identifier.
After copying the data set, any changes made to the original data set (that is, the fromid identifier) will not be reflected in the new (the toid identifier) data set.
- Parameters:
- Raises:
sherpa.utils.err.IdentifierErr – If there is no data set with a fromid identifier.
Examples
>>> copy_data(1, 2)
Rename the data set with identifier 2 to “orig”, and then delete the old data set:
>>> copy_data(2, "orig")
>>> delete_data(2)
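Because the copy is independent, later changes to the original are not propagated; a small sketch using load_arrays and set_dep to illustrate this behaviour:
>>> load_arrays(1, [1, 2, 3], [4, 5, 6])
>>> copy_data(1, "backup")
>>> set_dep(1, [0, 0, 0])
>>> print(get_dep("backup"))  # still holds the original [4, 5, 6] values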
- covar(*args)[source]
Estimate parameter confidence intervals using the covariance method.
The covar command computes confidence interval bounds for the specified model parameters in the dataset, using the covariance matrix of the statistic. The get_covar and set_covar_opt commands can be used to configure the error analysis; an example being changing the sigma field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_covar_results routine can be used to retrieve the results.
- Parameters:
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example covar(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
conf
Estimate the confidence intervals using the confidence method.
get_covar
Return the covariance estimation object.
get_covar_results
Return the results of the last covar run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_covar_opt
Set an option of the
covar
estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple
ids
orparameters
values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.The
covar
command is different toconf
, in that all other thawed parameters are fixed, rather than being allowed to float to new best-fit values. While conf
is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate thancovar
for determining confidence intervals.An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter’s values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The
int_proj
andreg_proj
commands may be used for this.If either of the conditions given above does not hold, then the output from
covar
may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.As
covar
estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma = the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with thesigma
option toset_covar_opt
orget_covar
.Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable
res
.
>>> covar()
>>> res = get_covar_results()
Only evaluate the parameters associated with data set 2.
>>> covar(2)
Only evaluate the intervals for the
pos.xpos
andpos.ypos
parameters:
>>> covar(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_covar_opt('sigma', 1.6)
>>> covar()
Only evaluate the
clus.kt
parameter for the data sets with identifiers “obs1”, “obs5”, and “obs6”. This will still use the 1.6 sigma setting from the previous run.
>>> covar("obs1", "obs5", "obs6", clus.kt)
Estimate the errors for all the thawed parameters from the
line
model and theclus.kt
parameter for datasets 1, 3, and 4:
>>> covar(1, 3, 4, line, clus.kt)
- covariance(*args) [edit on github]
Estimate parameter confidence intervals using the covariance method.
The
covar
command computes confidence interval bounds for the specified model parameters in the dataset, using the covariance matrix of the statistic. Theget_covar
andset_covar_opt
commands can be used to configure the error analysis; an example being changing thesigma
field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and theget_covar_results
routine can be used to retrieve the results.- Parameters:
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example
covar(g1.ampl, g1.sigma)
.model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_covar
Return the covariance estimation object.
get_covar_results
Return the results of the last
covar
run.int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_covar_opt
Set an option of the
covar
estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple
ids
orparameters
values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.The
covar
command is different toconf
, in that all other thawed parameters are fixed, rather than being allowed to float to new best-fit values. While conf
is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate thancovar
for determining confidence intervals.An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter’s values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The
int_proj
andreg_proj
commands may be used for this.If either of the conditions given above does not hold, then the output from
covar
may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.As
covar
estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma = the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with thesigma
option toset_covar_opt
orget_covar
.Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable
res
.
>>> covar()
>>> res = get_covar_results()
Only evaluate the parameters associated with data set 2.
>>> covar(2)
Only evaluate the intervals for the
pos.xpos
andpos.ypos
parameters:
>>> covar(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_covar_opt('sigma', 1.6)
>>> covar()
Only evaluate the
clus.kt
parameter for the data sets with identifiers “obs1”, “obs5”, and “obs6”. This will still use the 1.6 sigma setting from the previous run.
>>> covar("obs1", "obs5", "obs6", clus.kt)
Estimate the errors for all the thawed parameters from the
line
model and theclus.kt
parameter for datasets 1, 3, and 4:
>>> covar(1, 3, 4, line, clus.kt)
- create_model_component(typename=None, name=None)[source] [edit on github]
Create a model component.
Model components created by this function are set to their default values. Components can also be created directly using the syntax
typename.name
, such as in calls toset_model
andset_source
(unless you have calledset_model_autoassign_func
to change the default model auto-assignment setting).- Parameters:
typename (str) – The name of the model. This should match an entry from the return value of
list_models
, and defines the type of model.name (str) – The name used to refer to this instance, or component, of the model. A Python variable will be created with this name that can be used to inspect and change the model parameters, as well as use it in model expressions.
- Returns:
model
- Return type:
the sherpa.models.Model object created
See also
delete_model_component
Delete a model component.
get_model_component
Returns a model component given its name.
list_models
List the available model types.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
Notes
This function can over-write an existing component. If the over-written component is part of a source expression - as set by
set_model
- then the model evaluation will still use the old model definition (and be able to change the fit parameters), but direct access to its parameters is not possible since the name now refers to the new component (this is true using direct access, such asmname.parname
, or withset_par
).Examples
Create an instance of the
powlaw1d
model calledpl
, and then freeze itsgamma
parameter to 2.6.
>>> create_model_component("powlaw1d", "pl")
>>> pl.gamma = 2.6
>>> freeze(pl.gamma)
Create a blackbody model called bb, check that it is recognized as a component, and display its parameters:
>>> create_model_component("bbody", "bb")
>>> list_model_components()
>>> print(bb)
>>> print(bb.ampl)
- dataspace1d(start, stop, step=1, numbins=None, id=None, dstype=<class 'sherpa.data.Data1DInt'>)[source] [edit on github]
Create the independent axis for a 1D data set.
Create an “empty” one-dimensional data set by defining the grid on which the points are defined (the independent axis). The values are set to 0.
- Parameters:
start (number) – The minimum value of the axis.
stop (number) – The maximum value of the axis.
step (number, optional) – The separation between each grid point. This is not used if
numbins
is set.numbins (int, optional) – The number of grid points. This over-rides the
step
setting.id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.dstype (data class to use, optional) – What type of data is to be used. Supported values include
Data1DInt
(the default) andData1D
.
See also
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
get_indep
Return the independent axes of a data set.
set_dep
Set the dependent axis of a data set.
Notes
The meaning of the
stop
parameter depends on whether it is a binned or unbinned data set (as set by thedstype
parameter).Examples
Create a binned data set, starting at 1 and with a bin-width of 1.
>>> dataspace1d(1, 5, 1)
>>> print(get_indep())
(array([ 1., 2., 3., 4.]), array([ 2., 3., 4., 5.]))
This time for an un-binned data set:
>>> dataspace1d(1, 5, 1, dstype=Data1D)
>>> print(get_indep())
(array([ 1., 2., 3., 4., 5.]),)
Specify the number of bins rather than the grid spacing:
>>> dataspace1d(1, 5, numbins=5, id=2)
>>> (xlo, xhi) = get_indep(2)
>>> xlo
array([ 1. , 1.8, 2.6, 3.4, 4.2])
>>> xhi
array([ 1.8, 2.6, 3.4, 4.2, 5. ])
>>> dataspace1d(1, 5, numbins=5, id=3, dstype=Data1D)
>>> (x, ) = get_indep(3)
>>> x
array([ 1., 2., 3., 4., 5.])
- dataspace2d(dims, id=None, dstype=<class 'sherpa.data.Data2D'>)[source] [edit on github]
Create the independent axis for a 2D data set.
Create an “empty” two-dimensional data set by defining the grid on which the points are defined (the independent axis). The values are set to 0.
- Parameters:
dims (sequence of 2 number) – The dimensions of the grid in
(width,height)
order.id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.dstype (data class to use, optional) – What type of data is to be used. Supported values include
Data2D
(the default) andData2DInt
.
See also
dataspace1d
Create the independent axis for a 1D data set.
get_dep
Return the dependent axis of a data set.
get_indep
Return the independent axes of a data set.
set_dep
Set the dependent axis of a data set.
Examples
Create a 200 pixel by 150 pixel grid (number of columns by number of rows) and display it (each pixel has a value of 0):
>>> dataspace2d([200, 150])
>>> image_data()
Create a data space called “fakeimg”:
>>> dataspace2d([nx,ny], id="fakeimg")
- delete_data(id=None)[source] [edit on github]
Delete a data set by identifier.
The data set, and any associated structures - such as the ARF and RMF for PHA data sets - are removed.
- Parameters:
id (int or str, optional) – The data set to delete. If not given then the default identifier is used, as returned by
get_default_id
.
See also
clean
Clear all stored session data.
copy_data
Copy a data set to a new identifier.
delete_model
Delete the model expression from a data set.
get_default_id
Return the default data set identifier.
list_data_ids
List the identifiers for the loaded data sets.
Notes
The source expression is not removed by this function.
Examples
Delete the data from the default data set:
>>> delete_data()
Delete the data set identified as ‘src’:
>>> delete_data('src')
- delete_model(id=None)[source] [edit on github]
Delete the model expression for a data set.
This removes the model expression, created by
set_model
, for a data set. It does not delete the components of the expression.- Parameters:
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by
get_default_id
.
See also
clean
Clear all stored session data.
delete_data
Delete a data set by identifier.
get_default_id
Return the default data set identifier.
set_model
Set the source model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Remove the model expression for the default data set:
>>> delete_model()
Remove the model expression for the data set with the identifier called ‘src’:
>>> delete_model('src')
- delete_model_component(name)[source] [edit on github]
Delete a model component.
- Parameters:
name (str) – The name used to refer to this instance, or component, of the model. The corresponding Python variable will be deleted by this function.
See also
create_model_component
Create a model component.
delete_model
Delete the model expression for a data set.
list_models
List the available model types.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
Notes
It is an error to try to delete a component that is part of a model expression - i.e. included as part of an expression in a
set_model
orset_source
call. In such a situation, use thedelete_model
function to remove the source expression before callingdelete_model_component
.Examples
If a model instance called
pl
has been created - e.g. bycreate_model_component('powlaw1d', 'pl')
- then the following will remove it:
>>> delete_model_component('pl')
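As noted above, a component that is still part of a source expression can not be deleted directly. A minimal sketch of the work-around, assuming the expression was set for the default data set:
>>> delete_model()
>>> delete_model_component('pl')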
- delete_psf(id=None)[source] [edit on github]
Delete the PSF model for a data set.
Remove the PSF convolution applied to a source model.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.
See also
list_psf_ids
List of all the data sets with a PSF.
load_psf
Create a PSF model.
set_psf
Add a PSF model to a data set.
get_psf
Return the PSF model defined for a data set.
Examples
Remove the PSF from the default data set:
>>> delete_psf()
Remove the PSF from the data set called 'core':
>>> delete_psf('core')
- fake(id=None, method=<function poisson_noise>)[source] [edit on github]
Simulate a data set.
Take a data set, evaluate the model for each bin, and then use this value to create a data value from each bin. The default behavior is to use a Poisson distribution, with the model value as the expectation value of the distribution.
- Parameters:
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.method (func) – The function used to create a random realisation of a data set.
See also
dataspace1d
Create the independent axis for a 1D data set.
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
load_arrays
Create a data set from array values.
set_model
Set the source model expression for a data set.
Notes
The function for the
method
argument accepts a single argument, the data values, and should return an array of the same shape as the input, with the data values to use.The function can be called on any data set, it does not need to have been created with
dataspace1d
ordataspace2d
.Specific data set types may have their own, specialized, version of this function.
Examples
Create a random realisation of the model - a constant plus gaussian line - for the range x=-5 to 5.
>>> dataspace1d(-5, 5, 0.5, dstype=Data1D)
>>> set_source(gauss1d.gline + const1d.bgnd)
>>> bgnd.c0 = 2
>>> gline.fwhm = 4
>>> gline.ampl = 5
>>> gline.pos = 1
>>> fake()
>>> plot_data()
>>> plot_model(overplot=True)
For a 2D data set, display the simulated data, model, and residuals:
>>> dataspace2d([150, 80], id='fakeimg')
>>> set_source('fakeimg', beta2d.src + polynom2d.bg)
>>> src.xpos, src.ypos = 75, 40
>>> src.r0, src.alpha = 15, 2.3
>>> src.ellip, src.theta = 0.4, 1.32
>>> src.ampl = 100
>>> bg.c, bg.cx1, bg.cy1 = 3, 0.4, 0.3
>>> fake('fakeimg')
>>> image_fit('fakeimg')
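The method argument changes how the simulated values are drawn from the model values. A minimal sketch, assuming NumPy is available and that Gaussian noise (rather than the default Poisson distribution) is acceptable; the gauss_noise name is only an example:
>>> import numpy as np
>>> def gauss_noise(x):
...     # perturb each model value by a Gaussian with width sqrt(|model|)
...     return x + np.random.normal(scale=np.sqrt(np.abs(x)))
>>> fake(method=gauss_noise)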
- fit(id=None, *otherids, **kwargs)[source] [edit on github]
Fit a model to one or more data sets.
Use forward fitting to find the best-fit model to one or more data sets, given the chosen statistic and optimization method. The fit proceeds until the results converge or the number of iterations exceeds the maximum value (these values can be changed with
set_method_opt
). An iterative scheme can be added usingset_iter_method
to try and improve the fit. The final fit results are displayed to the screen and can be retrieved withget_fit_results
.- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are fit simultaneously.
*otherids (int or str, optional) – Other data sets to use in the calculation.
outfile (str, optional) – If set, then the fit results will be written to a file with this name. The file contains the per-iteration fit results.
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (
True
) or if it raises an exception (False
, the default setting).
- Raises:
sherpa.utils.err.FitErr – If
outfile
already exists andclobber
isFalse
.
See also
conf
Estimate parameter confidence intervals using the confidence method.
contour_fit
Contour the fit to a data set.
covar
Estimate parameter confidence intervals using the covariance method.
freeze
Fix model parameters so they are not changed by a fit.
get_fit_results
Return the results of the last fit.
plot_fit
Plot the fit results (data, model) for a data set.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
set_stat
Set the statistical method.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
set_full_model
Define the convolved model expression for a data set.
set_iter_method
Set the iterative-fitting scheme used in the fit.
set_model
Set the model expression for a data set.
show_fit
Summarize the fit results.
thaw
Allow model parameters to be varied during a fit.
Examples
Simultaneously fit all data sets with models and then store the results in the variable fres:
>>> fit()
>>> fres = get_fit_results()
Fit just the data set ‘img’:
>>> fit('img')
Simultaneously fit data sets 1, 2, and 3:
>>> fit(1, 2, 3)
Fit data set ‘jet’ and write the fit results to the text file ‘jet.fit’, over-writing it if it already exists:
>>> fit('jet', outfile='jet.fit', clobber=True)
- freeze(*args)[source] [edit on github]
Fix model parameters so they are not changed by a fit.
The arguments can be parameters or models, in which case all parameters of the model are frozen. If no arguments are given then nothing is changed.
See also
fit
Fit a model to one or more data sets.
thaw
Allow model parameters to be varied during a fit.
Notes
The
thaw
function can be used to reverse this setting, so that parameters can be varied in a fit.Examples
Fix the FWHM parameter of the line model (in this case a
gauss1d
model) so that it will not be varied in the fit.>>> set_source(const1d.bgnd + gauss1d.line) >>> line.fwhm = 2.1 >>> freeze(line.fwhm) >>> fit()
Freeze all parameters of the line model and then re-fit:
>>> freeze(line)
>>> fit()
Freeze the nh parameter of the gal model and the abund parameter of the src model:
>>> freeze(gal.nh, src.abund)
- get_cdf_plot()[source] [edit on github]
Return the data used to plot the last CDF.
- Returns:
plot – An object containing the data used by the last call to
plot_cdf
. The fields will beNone
if the function has not been called.- Return type:
a
sherpa.plot.CDFPlot
instance
See also
plot_cdf
Plot the cumulative density function of an array.
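Examples
A minimal sketch, assuming NumPy is available and that plot_cdf has already been called on an array of samples:
>>> x = np.random.normal(size=1000)
>>> plot_cdf(x)
>>> cplot = get_cdf_plot()
>>> print(cplot)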
- get_chisqr_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_chisqr.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_chisqr
(orget_chisqr_plot
) are returned, otherwise the data is re-generated.
- Returns:
resid_data
- Return type:
a
sherpa.plot.ChisqrPlot
instance- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_delchi_plot
Return the data used by plot_delchi.
get_ratio_plot
Return the data used by plot_ratio.
get_resid_plot
Return the data used by plot_resid.
plot_chisqr
Plot the chi-squared value for each point in a data set.
Examples
Return the residual data, measured as chi square, for the default data set:
>>> rplot = get_chisqr_plot()
>>> np.min(rplot.y)
0.0005140622701128954
>>> np.max(rplot.y)
8.379696454792295
Display the contents of the residuals plot for data set 2:
>>> print(get_chisqr_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_chisqr_plot('jet')
>>> r2 = get_chisqr_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_conf()[source] [edit on github]
Return the confidence-interval estimation object.
- Returns:
conf
- Return type:
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The attributes of the confidence-interval object include:
eps
The precision of the calculated limits. The default is 0.01.
fast
If
True
then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default isFalse
.max_rstat
If the reduced chi square is larger than this value, do not use (only used with chi-square statistics). The default is 3.
maxfits
The maximum number of re-fits allowed (that is, when the
remin
filter is met). The default is 5.maxiters
The maximum number of iterations allowed when bracketing limits, before stopping for that parameter. The default is 200.
numcores
The number of computer cores to use when evaluating results in parallel. This is only used if
parallel
isTrue
. The default is to use all cores.openinterval
How the
conf
method should cope with intervals that do not converge (that is, when themaxiters
limit has been reached). The default isFalse
.parallel
If there is more than one free parameter then the results can be evaluated in parallel, to reduce the time required. The default is
True
.remin
The minimum difference in statistic value for a new fit location to be considered better than the current best fit (which starts out as the starting location of the fit at the time
conf
is called). The default is 0.01.sigma
What is the error limit being calculated. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (
True
), or can parameter values go out all the way to the hard limits if necessary (False
). The default isFalse
tol
The tolerance for the fit. The default is 0.2.
verbose
Should extra information be displayed during fitting? The default is
False
.
Examples
>>> print(get_conf())
name = confidence
numcores = 8
verbose = False
openinterval = False
max_rstat = 3
maxiters = 200
soft_limits = False
eps = 0.01
fast = False
maxfits = 5
remin = 0.01
tol = 0.2
sigma = 1
parallel = True
Change the
remin
field to 0.05.
>>> cf = get_conf()
>>> cf.remin = 0.05
- get_conf_opt(name=None)[source] [edit on github]
Return one or all of the options for the confidence interval method.
This is a helper function since the options can also be read directly using the object returned by
get_conf
.- Parameters:
name (str, optional) – If not given, a dictionary of all the options are returned. When given, the individual value is returned.
- Returns:
value
- Return type:
dictionary or value
- Raises:
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_conf
Return the confidence-interval estimation object.
set_conf_opt
Set an option of the conf estimation object.
Examples
>>> get_conf_opt('verbose')
False
>>> copts = get_conf_opt()
>>> copts['verbose']
False
- get_conf_results()[source] [edit on github]
Return the results of the last
conf
run.- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
conf
call has been made.
See also
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘confidence’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
Examples
>>> res = get_conf_results()
>>> print(res)
datasets = (1,)
methodname = confidence
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('p1.gamma', 'p1.ampl')
parvals = (2.1585155113403327, 0.00022484014787994827)
parmins = (-0.082785567348122591, -1.4825550342799376e-05)
parmaxes = (0.083410634144100104, 1.4825550342799376e-05)
nfits = 13
The following converts the above into a dictionary where the keys are the parameter names and the values are the tuple (best-fit value, lower-limit, upper-limit):
>>> pvals1 = zip(res.parvals, res.parmins, res.parmaxes)
>>> pvals2 = [(v, v+l, v+h) for (v, l, h) in pvals1]
>>> dres = dict(zip(res.parnames, pvals2))
>>> dres['p1.gamma']
(2.1585155113403327, 2.07572994399221, 2.241926145484433)
- get_confidence_results() [edit on github]
Return the results of the last
conf
run.- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
conf
call has been made.
See also
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘confidence’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
Examples
>>> res = get_conf_results()
>>> print(res)
datasets = (1,)
methodname = confidence
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('p1.gamma', 'p1.ampl')
parvals = (2.1585155113403327, 0.00022484014787994827)
parmins = (-0.082785567348122591, -1.4825550342799376e-05)
parmaxes = (0.083410634144100104, 1.4825550342799376e-05)
nfits = 13
The following converts the above into a dictionary where the keys are the parameter names and the values are the tuple (best-fit value, lower-limit, upper-limit):
>>> pvals1 = zip(res.parvals, res.parmins, res.parmaxes)
>>> pvals2 = [(v, v+l, v+h) for (v, l, h) in pvals1]
>>> dres = dict(zip(res.parnames, pvals2))
>>> dres['p1.gamma']
(2.1585155113403327, 2.07572994399221, 2.241926145484433)
- get_covar()[source] [edit on github]
Return the covariance estimation object.
- Returns:
covar
- Return type:
See also
covar
Estimate parameter confidence intervals using the covariance method.
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The attributes of the covariance object include:
eps
The precision of the calculated limits. The default is 0.01.
maxiters
The maximum number of iterations allowed before stopping for that parameter. The default is 200.
sigma
What is the error limit being calculated. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (
True
), or can parameter values go out all the way to the hard limits if necessary (False
). The default isFalse
Examples
>>> print(get_covar())
name = covariance
sigma = 1
maxiters = 200
soft_limits = False
eps = 0.01
Change the
sigma
field to 1.6.
>>> cv = get_covar()
>>> cv.sigma = 1.6
- get_covar_opt(name=None)[source] [edit on github]
Return one or all of the options for the covariance method.
This is a helper function since the options can also be read directly using the object returned by
get_covar
.- Parameters:
name (str, optional) – If not given, a dictionary of all the options are returned. When given, the individual value is returned.
- Returns:
value
- Return type:
dictionary or value
- Raises:
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
covar
Estimate parameter confidence intervals using the covariance method.
get_covar
Return the covariance estimation object.
set_covar_opt
Set an option of the covar estimation object.
Examples
>>> get_covar_opt('sigma')
1
>>> copts = get_covar_opt()
>>> copts['sigma']
1
- get_covar_results()[source] [edit on github]
Return the results of the last
covar
run.- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
covar
call has been made.
See also
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘covariance’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
There is also an
extra_output
field which is used to return the covariance matrix.Examples
>>> res = get_covar_results()
>>> print(res)
datasets = (1,)
methodname = covariance
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('bgnd.c0',)
parvals = (10.228675427602724,)
parmins = (-2.4896739438296795,)
parmaxes = (2.4896739438296795,)
nfits = 0
In this case, of a single parameter, the covariance matrix is just the variance of the parameter:
>>> res.extra_output
array([[ 6.19847635]])
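For fits with several thawed parameters, the one-sigma errors could be recovered from the diagonal of this matrix. A minimal sketch, assuming NumPy is imported as np:
>>> np.sqrt(res.extra_output.diagonal())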
- get_covariance_results() [edit on github]
Return the results of the last
covar
run.- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
covar
call has been made.
See also
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘covariance’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
There is also an
extra_output
field which is used to return the covariance matrix.Examples
>>> res = get_covar_results()
>>> print(res)
datasets = (1,)
methodname = covariance
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('bgnd.c0',)
parvals = (10.228675427602724,)
parmins = (-2.4896739438296795,)
parmaxes = (2.4896739438296795,)
nfits = 0
In this case, of a single parameter, the covariance matrix is just the variance of the parameter:
>>> res.extra_output
array([[ 6.19847635]])
- get_data(id=None)[source] [edit on github]
Return the data set by identifier.
The object returned by the call can be used to query and change properties of the data set.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
instance – The data instance.
- Return type:
- Raises:
sherpa.utils.err.IdentifierErr – No data has been loaded for this data set.
See also
copy_data
Copy a data set to a new identifier.
delete_data
Delete a data set by identifier.
load_data
Create a data set from a file.
set_data
Set a data set.
Examples
>>> d = get_data()
>>> dimg = get_data('img')
>>> load_arrays('tst', [10, 20, 30], [5.4, 2.3, 9.8])
>>> print(get_data('tst'))
name =
x = Int64[3]
y = Float64[3]
staterror = None
syserror = None
- get_data_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_data.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call tocontour_data
(orget_data_contour
) are returned, otherwise the data is re-generated.
- Returns:
resid_data – The
y
attribute contains the data values and thex0
andx1
arrays contain the corresponding coordinate values, as one-dimensional arrays.- Return type:
a
sherpa.plot.DataContour
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_data_image
Return the data used by image_data.
contour_data
Contour the values of an image data set.
image_data
Display a data set in the image viewer.
Examples
Return the data for the default data set:
>>> dinfo = get_data_contour()
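The x0, x1, and y attributes described above are one-dimensional arrays of matching size. A sketch continuing from the previous example:
>>> print(dinfo.x0.size, dinfo.x1.size, dinfo.y.size)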
- get_data_contour_prefs()[source] [edit on github]
Return the preferences for contour_data.
- Returns:
prefs – Changing the values of this dictionary will change any new contour plots. The default is an empty dictionary.
- Return type:
See also
contour_data
Contour the values of an image data set.
Notes
The meaning of the fields depend on the chosen plot backend. A value of
None
(or not set) means to use the default value for that attribute, unless indicated otherwise.alpha
The transparency value used to draw the contours, where 0 is fully transparent and 1 is fully opaque.
colors
The colors to draw the contours.
linewidths
What thickness of line to draw the contours.
xlog
Should the X axis be drawn with a logarithmic scale? The default is
False
.ylog
Should the Y axis be drawn with a logarithmic scale? The default is
False
.
Examples
Change the contours to be drawn in ‘green’:
>>> contour_data()
>>> prefs = get_data_contour_prefs()
>>> prefs['colors'] = 'green'
>>> contour_data()
- get_data_image(id=None)[source] [edit on github]
Return the data used by image_data.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
data_img – The
y
attribute contains the data values as a 2D NumPy array.
a
sherpa.image.DataImage
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
contour_data
Contour the values of an image data set.
image_data
Display a data set in the image viewer.
Examples
Return the image data for the default data set:
>>> dinfo = get_data_image()
>>> dinfo.y.shape
(150, 175)
- get_data_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_data.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_data
(orget_data_plot
) are returned, otherwise the data is re-generated.
- Returns:
data – An object representing the data used to create the plot by
plot_data
. The relationship between the returned values and the values in the data set depend on the data type. For example PHA data are plotted in units controlled bysherpa.astro.ui.set_analysis
, but are stored as channels and counts, and may have been grouped and the background estimate removed.- Return type:
a
sherpa.plot.DataPlot
instance
See also
get_data_plot_prefs
Return the preferences for plot_data.
get_default_id
Return the default data set identifier.
plot_data
Plot the data values.
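Examples
A minimal sketch, showing how the plot object for the default data set could be inspected and then re-displayed:
>>> dplot = get_data_plot()
>>> print(dplot)
>>> dplot.plot()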
- get_data_plot_prefs(id=None)[source] [edit on github]
Return the preferences for plot_data.
The plot preferences may depend on the data set, so the id argument is optional.
Changed in version 4.12.2: The id argument has been added.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
prefs – Changing the values of this dictionary will change any new data plots. This dictionary will be empty if no plot backend is available.
- Return type:
See also
plot_data
Plot the data values.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The meaning of the fields depend on the chosen plot backend. A value of
None
means to use the default value for that attribute, unless indicated otherwise. These preferences are used by the following commands:plot_data
,plot_bkg
,plot_ratio
, and the “fit” variants, such asplot_fit
,plot_fit_resid
, andplot_bkg_fit
.The following preferences are recognized by the matplotlib backend:
alpha
The transparency value used to draw the line or symbol, where 0 is fully transparent and 1 is fully opaque.
barsabove
The barsabove argument for the matplotlib errorbar function.
capsize
The capsize argument for the matplotlib errorbar function.
color
The color to use (will be over-ridden by more-specific options below). The default is
None
.ecolor
The color to draw error bars. The default is
None
.linestyle
How should the line connecting the data points be drawn. The default is ‘None’, which means no line is drawn.
marker
What style is used for the symbols. The default is
'.'
which indicates a point.markerfacecolor
What color to draw the symbol representing the data points. The default is
None
.markersize
What size is the symbol drawn. The default is
None
.ratioline
Should a horizontal line be drawn at y=1? The default is
False
.xaxis
The default is
False
xerrorbars
Should error bars be drawn for the X axis. The default is
False
.xlog
Should the X axis be drawn with a logarithmic scale? The default is
False
. This field can also be changed with theset_xlog
andset_xlinear
functions.yerrorbars
Should error bars be drawn for the Y axis. The default is
True
.ylog
Should the Y axis be drawn with a logarithmic scale? The default is
False
. This field can also be changed with theset_ylog
andset_ylinear
functions.
Examples
After these commands, any data plot will use a green symbol and not display Y error bars.
>>> prefs = get_data_plot_prefs()
>>> prefs['color'] = 'green'
>>> prefs['yerrorbars'] = False
- get_default_id()[source] [edit on github]
Return the default data set identifier.
The Sherpa data id ties data, model, fit, and plotting information into a data set easily referenced by id. The default identifier, used by many commands, is returned by this command and can be changed by
set_default_id
.- Returns:
id – The default data set identifier used by certain Sherpa functions when an identifier is not given, or set to
None
.- Return type:
See also
list_data_ids
List the identifiers for the loaded data sets.
set_default_id
Set the default data set identifier.
Notes
The default Sherpa data set identifier is the integer 1.
Examples
Display the default identifier:
>>> print(get_default_id())
Store the default identifier and use it as an argument to call another Sherpa routine:
>>> defid = get_default_id()
>>> load_arrays(defid, x, y)
- get_delchi_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_delchi.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_delchi
(orget_delchi_plot
) are returned, otherwise the data is re-generated.
- Returns:
resid_data
- Return type:
a
sherpa.plot.DelchiPlot
instance- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_ratio_plot
Return the data used by plot_ratio.
get_resid_plot
Return the data used by plot_resid.
plot_delchi
Plot the ratio of residuals to error for a data set.
Examples
Return the residual data, measured in units of the error, for the default data set:
>>> rplot = get_delchi_plot()
>>> np.min(rplot.y)
-2.85648373819671875
>>> np.max(rplot.y)
2.89477053577520982
Display the contents of the residuals plot for data set 2:
>>> print(get_delchi_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_delchi_plot('jet')
>>> r2 = get_delchi_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_dep(id=None, filter=False)[source] [edit on github]
Return the dependent axis of a data set.
This function returns the data values (the dependent axis) for each point or pixel in the data set.
- Parameters:
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is
False
.
- Returns:
axis – The dependent axis values. The model estimate is compared to these values during fitting.
- Return type:
array
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> get_dep()
array([4, 5, 9])
>>> x0 = [10, 15, 12, 19]
>>> x1 = [12, 14, 10, 17]
>>> y = [4, 5, 9, -2]
>>> load_arrays(2, x0, x1, y, Data2D)
>>> get_dep(2)
array([ 4, 5, 9, -2])
If the
filter
flag is set then the return will be limited to the data that is used in the fit:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> ignore_id(1, 17, None)
>>> get_dep()
array([4, 5, 9])
>>> get_dep(filter=True)
array([4, 5])
- get_dims(id=None, filter=False)[source] [edit on github]
Return the dimensions of the data set.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.filter (bool, optional) – If
True
then apply any filter to the data set before returning the dimensions. The default isFalse
.
- Returns:
dims
- Return type:
a tuple of int
See also
ignore
Exclude data from the fit.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice
Include data in the fit.
sherpa.astro.ui.notice2d
Include a spatial region of an image.
Examples
Display the dimensions for the default data set:
>>> print(get_dims())
Find the number of bins in dataset ‘a2543’ without and with any filters applied to it:
>>> nall = get_dims('a2543')
>>> nfilt = get_dims('a2543', filter=True)
- get_draws(id=None, otherids=(), niter=1000, covar_matrix=None)[source] [edit on github]
Run the pyBLoCXS MCMC algorithm.
The function runs a Markov Chain Monte Carlo (MCMC) algorithm designed to carry out Bayesian Low-Count X-ray Spectral (BLoCXS) analysis. It explores the model parameter space at the suspected statistic minimum (i.e. after using
fit
). The return values include the statistic value, parameter values, and an acceptance flag indicating whether the row represents a jump from the current location or not. For more information see thesherpa.sim
module and [1]_.- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
niter (int, optional) – The number of draws to use. The default is
1000
.covar_matrix (2D array, optional) – The covariance matrix to use. If
None
then the result fromget_covar_results().extra_output
is used.
- Returns:
The results of the MCMC chain. The stats and accept arrays contain
niter+1
elements, with the first row being the starting values. The params array has(nparams, niter+1)
elements, where nparams is the number of free parameters in the model expression, and the first column contains the values that the chain starts at. The accept array contains boolean values, indicating whether the jump, or step, was accepted (True
), so the parameter values and statistic change, or it wasn’t, in which case there is no change to the previous row. Thesherpa.utils.get_error_estimates
routine can be used to calculate the credible one-sigma interval from the params array.- Return type:
stats, accept, params
See also
covar
Estimate the confidence intervals using the covariance method.
fit
Fit a model to one or more data sets.
plot_cdf
Plot the cumulative density function of an array.
plot_pdf
Plot the probability density function of an array.
plot_scatter
Create a scatter plot.
plot_trace
Create a trace plot of row number versus value.
set_prior
Set the prior function to use with a parameter.
set_sampler
Set the MCMC sampler.
get_sampler
Return information about the current MCMC sampler.
Notes
The chain is run using fit information associated with the specified data set, or sets, the currently set sampler (
set_sampler
) and parameter priors (set_prior
), for a specified number of iterations. The model should have been fit to find the best-fit parameters, andcovar
called, before runningget_draws
. The results fromget_draws
is used to estimate the parameter distributions.References
Examples
Fit a source and then run a chain to investigate the parameter distributions. The distribution of the stats values created by the chain is then displayed, using
plot_trace
, and the parameter distributions for the first two thawed parameters are displayed. The first one as a cumulative distribution usingplot_cdf
and the second one as a probability distribution usingplot_pdf
. Finally the acceptance fraction (number of draws where the chain moved) is displayed. Note that in a full analysis session a burn-in period would normally be removed from the chain before using the results.
>>> fit()
>>> covar()
>>> stats, accept, params = get_draws(1, niter=1e4)
>>> plot_trace(stats, name='stat')
>>> names = [p.fullname for p in get_source().pars if not p.frozen]
>>> plot_cdf(params[0,:], name=names[0], xlabel=names[0])
>>> plot_pdf(params[1,:], name=names[1], xlabel=names[1])
>>> accept[:-1].sum() * 1.0 / (len(accept) - 1)
0.4287
The following runs the chain on multiple data sets, with identifiers ‘core’, ‘jet1’, and ‘jet2’:
>>> stats, accept, params = get_draws('core', ['jet1', 'jet2'], niter=1e4)
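As noted above, the credible one-sigma interval for a parameter can be estimated from the corresponding row of the params array. A minimal sketch using sherpa.utils.get_error_estimates for the first thawed parameter:
>>> from sherpa.utils import get_error_estimates
>>> pval, plo, phi = get_error_estimates(params[0, :])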
- get_error(id=None, filter=False)[source] [edit on github]
Return the errors on the dependent axis of a data set.
The function returns the total errors (a quadrature addition of the statistical and systematic errors) on the values (dependent axis) of a data set. The individual components can be retrieved with the
get_staterror
andget_syserror
functions.- Parameters:
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is
False
.
- Returns:
errors – The error for each data point, formed by adding the statistical and systematic errors in quadrature. The size of this array depends on the
filter
argument.- Return type:
array
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_dep
Return the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_staterror
Return the statistical errors on the dependent axis of a data set.
get_syserror
Return the systematic errors on the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
Notes
The default behavior is to not apply any filter defined on the independent axes to the results, so that the return value is for all points (or bins) in the data set. Set the
filter
argument toTrue
to apply this filter.Examples
Return the error values for the default data set, ignoring any filter applied to it:
>>> err = get_error()
Ensure that the return values are for the selected (filtered) points in the default data set (the return array may be smaller than in the previous example):
>>> err = get_error(filter=True)
Find the errors for the “core” data set:
>>> err = get_error('core', filter=True)
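Since the return value is the quadrature sum of the statistical and systematic errors, it could be compared against the individual components. A sketch, assuming NumPy is imported as np and that both error columns have been set for the data set:
>>> total = np.sqrt(get_staterror()**2 + get_syserror()**2)
>>> np.allclose(total, get_error())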
- get_filter(id=None)[source] [edit on github]
Return the filter expression for a data set.
This returns the filter expression, created by one or more calls to
ignore
andnotice
, for the data set.Changed in version 4.14.0: The filter expressions have been tweaked for Data1DInt and PHA data sets (when using energy or wavelength units) and now describe the full range of the bins, rather than the mid-points.
- Parameters:
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
filter – The empty string or a string expression representing the filter used. For PHA data sets the units are controlled by the analysis setting for the data set.
- Return type:
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not exist.
See also
ignore
Exclude data from the fit.
load_filter
Load the filter array from a file and add to a data set.
notice
Include data in the fit.
save_filter
Save the filter array to a file.
show_filter
Show any filters applied to a data set.
set_filter
Set the filter array of a data set.
Examples
The default filter is the full dataset, given in the format
lowval:hival
(for aData1D
dataset like this, these are inclusive limits):
>>> load_arrays(1, [10, 15, 20, 25], [5, 7, 4, 2])
>>> get_filter()
'10.0000:25.0000'
The
notice
call restricts the data to the range between 14 and 30. The resulting filter is the combination of this range and the data:
>>> notice(14, 30)
>>> get_filter()
'15.0000:25.0000'
Ignoring the point at
x=20
means that only the points atx=15
andx=25
remain, so a comma-separated list is used:
>>> ignore(19, 22)
>>> get_filter()
'15.0000,25.0000'
The filter expression equivalent to a per-bin array of filter values:
>>> set_filter([1, 1, 0, 1])
>>> get_filter()
'10.0000:15.0000,25.0000'
For an integrated data set (Data1DInt and DataPHA with energy or wavelength units):
>>> load_arrays(1, [10, 15, 20, 25], [15, 20, 23, 30], [5, 7, 4, 2], Data1DInt)
>>> get_filter()
'10.0000:30.0000'
For integrated data sets the limits are inclusive only for the lower limit, but in this case the end-point falls within a bin and so it is included:
>>> notice(17, 28)
>>> get_filter()
'15.0000:30.0000'
There is no data in the range 23 to 24 so the ignore doesn’t change anything:
>>> ignore(23, 24)
>>> get_filter()
'15.0000:30.0000'
However it does match the range 22 to 23 and so changes the filter:
>>> ignore(22, 23)
>>> get_filter()
'15.0000:20.0000,25.0000:30.0000'
Return the filter for data set 3:
>>> get_filter(3)
- get_fit_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_fit.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call tocontour_fit
(orget_fit_contour
) are returned, otherwise the data is re-generated.
- Returns:
fit_data – An object representing the data used to create the plot by
contour_fit
. It contains the data fromget_data_contour
andget_model_contour
in thedatacontour
andmodelcontour
attributes.- Return type:
a
sherpa.plot.FitContour
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_data_image
Return the data used by image_data.
get_model_image
Return the data used by image_model.
contour_data
Contour the values of an image data set.
contour_model
Contour the values of the model, including any PSF.
image_data
Display a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
Examples
Return the contour data for the default data set:
>>> finfo = get_fit_contour()
- get_fit_plot(id=None, recalc=True)[source] [edit on github]
Return the data used to create the fit plot.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_fit
(orget_fit_plot
) are returned, otherwise the data is re-generated.
- Returns:
data – An object representing the data used to create the plot by
plot_fit
. It contains the data fromget_data_plot
andget_model_plot
in thedataplot
andmodelplot
attributes.- Return type:
a
sherpa.plot.FitPlot
instance
See also
get_data_plot_prefs
Return the preferences for plot_data.
get_model_plot_prefs
Return the preferences for plot_model.
get_default_id
Return the default data set identifier.
plot_data
Plot the data values.
plot_model
Plot the model for a data set.
Examples
Create the data needed to create the “fit plot” for the default data set and display it:
>>> fplot = get_fit_plot() >>> print(fplot)
Return the plot data for data set 2, and then use it to create a plot:
>>> f2 = get_fit_plot(2) >>> f2.plot()
The fit plot consists of a combination of a data plot and a model plot, which are captured in the
dataplot
andmodelplot
attributes of the return value. These can be used to display the plots individually, such as:>>> f2.dataplot.plot() >>> f2.modelplot.plot()
or, to combine the two:
>>> f2.dataplot.plot() >>> f2.modelplot.overplot()
- get_fit_results()[source] [edit on github]
Return the results of the last fit.
This function returns the results from the most-recent fit. The returned value includes information on the parameter values and fit statistic.
- Returns:
stats – The results of the last fit. It does not reflect any changes made to the model parameters, or settings, since the last fit.
- Return type:
a
sherpa.fit.FitResults
instance
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
fit
Fit a model to one or more data sets.
get_stat_info
Return the statistic values for the current models.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Notes
The fields of the object include:
- datasets
A sequence of the data set ids included in the results.
- itermethodname
What iterated-fit scheme was used, if any (as set by
set_iter_method
).- statname
The name of the statistic function (as used in
set_stat
).- succeeded
Was the fit successful (did it converge)?
- parnames
A tuple of the parameter names that were varied in the fit (the thawed parameters in the model expression).
- parvals
A tuple of the parameter values, in the same order as
parnames
.- statval
The statistic value after the fit.
- istatval
The statistic value at the start of the fit.
- dstatval
The change in the statistic value (
istatval - statval
).- numpoints
The number of bins used in the fits.
- dof
The number of degrees of freedom in the fit (the number of bins minus the number of free parameters).
- qval
The Q-value (probability) that one would observe the reduced statistic value, or a larger value, if the assumed model is true and the current model parameters are the true parameter values. This will be
None
if the value can not be calculated with the current statistic (e.g. the Cash statistic).- rstat
The reduced statistic value (the
statval
field divided bydof
). This is not calculated for all statistics.- message
A message about the results of the fit (e.g. if the fit was unable to converge). The format and contents depend on the optimisation method.
- nfev
The number of model evaluations made during the fit.
Examples
Display the fit results:
>>> print(get_fit_results())
Inspect the fit results:
>>> res = get_fit_results() >>> res.statval 498.21750663761935 >>> res.dof 439 >>> res.parnames ('pl.gamma', 'pl.ampl', 'gline.fwhm', 'gline.pos', 'gline.ampl') >>> res.parvals (-0.20659543380329071, 0.00030398852609788524, 100.0, 4900.0, 0.001)
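A quick check that the fit converged and improved the statistic, using the succeeded and dstatval fields described in the Notes (this sketch assumes the fit above converged):
>>> res = get_fit_results() >>> res.succeeded True >>> res.dstatval > 0 True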
- get_functions()[source] [edit on github]
Return the functions provided by Sherpa.
- Returns:
functions
- Return type:
list of str
See also
list_functions
Display the functions provided by Sherpa.
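Examples
Check that a known function, such as fit, is included in the returned list:
>>> fns = get_functions() >>> 'fit' in fns True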
- get_indep(id=None)[source] [edit on github]
Return the independent axes of a data set.
This function returns the coordinates of each point, or pixel, in the data set.
- Parameters:
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
axis – The independent axis values. These are the values at which the model is evaluated during fitting.
- Return type:
tuple of arrays
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_dep
Return the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
For a one-dimensional data set, the X values are returned:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9]) >>> get_indep() (array([10, 15, 19]),)
For a 2D data set the X0 and X1 values are returned:
>>> x0 = [10, 15, 12, 19] >>> x1 = [12, 14, 10, 17] >>> y = [4, 5, 9, -2] >>> load_arrays(2, x0, x1, y, Data2D) >>> get_indep(2) (array([10, 15, 12, 19]), array([12, 14, 10, 17]))
- get_int_proj(par=None, id=None, otherids=None, recalc=False, fast=True, min=None, max=None, nloop=20, delv=None, fac=1, log=False, numcores=None)[source] [edit on github]
Return the interval-projection object.
This returns (and optionally calculates) the data used to display the
int_proj
plot. Note that if the recalc
parameter is False
(the default value) then all other parameters are ignored and the results of the last int_proj
call are returned.- Parameters:
par – The parameter to plot. This argument is only used if
recalc
is set toTrue
.id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (
False
) means that the results from the last call toint_proj
(orget_int_proj
) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.fast (bool, optional) – If
True
then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default isFalse
.min (number, optional) – The minimum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.max (number, optional) – The maximum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.nloop (int, optional) – The number of steps to use. This is used when
delv
is set toNone
.delv (number, optional) – The step size for the parameter. Setting this over-rides the
nloop
parameter. The default isNone
.fac (number, optional) – When
min
ormax
is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).log (bool, optional) – Should the step size be logarithmically spaced? The default (
False
) is to use a linear grid.numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns:
iproj – The fields of this object can be used to re-create the plot created by
int_proj
.- Return type:
a
sherpa.plot.IntervalProjection
instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
Examples
Return the results of the
int_proj
run:>>> int_proj(src.xpos) >>> iproj = get_int_proj() >>> min(iproj.y) 119.55942437129544
Since the
recalc
parameter has not been changed to True
, the following will return the results for the last call to int_proj
, which may not have been for the src.ypos parameter:>>> iproj = get_int_proj(src.ypos)
Create the data without creating a plot:
>>> iproj = get_int_proj(pl.gamma, recalc=True)
Specify the range and step size for the parameter, in this case varying linearly between 12 and 14 with 51 values:
>>> iproj = get_int_proj(src.r0, id="src", min=12, max=14, ... nloop=51, recalc=True)
- get_int_unc(par=None, id=None, otherids=None, recalc=False, min=None, max=None, nloop=20, delv=None, fac=1, log=False, numcores=None)[source] [edit on github]
Return the interval-uncertainty object.
This returns (and optionally calculates) the data used to display the
int_unc
plot. Note that if the recalc
parameter is False
(the default value) then all other parameters are ignored and the results of the last int_unc
call are returned.- Parameters:
par – The parameter to plot. This argument is only used if
recalc
is set toTrue
.id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (
False
) means that the results from the last call to int_unc
(or get_int_unc
) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.min (number, optional) – The minimum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.max (number, optional) – The maximum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.nloop (int, optional) – The number of steps to use. This is used when
delv
is set toNone
.delv (number, optional) – The step size for the parameter. Setting this over-rides the
nloop
parameter. The default isNone
.fac (number, optional) – When
min
ormax
is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).log (bool, optional) – Should the step size be logarithmically spaced? The default (
False
) is to use a linear grid.numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns:
iunc – The fields of this object can be used to re-create the plot created by
int_unc
.- Return type:
a
sherpa.plot.IntervalUncertainty
instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
Examples
Return the results of the
int_unc
run:>>> int_unc(src.xpos) >>> iunc = get_int_unc() >>> min(iunc.y) 119.55942437129544
Since the
recalc
parameter has not been changed to True
, the following will return the results for the last call to int_unc
, which may not have been for the src.ypos parameter:>>> iunc = get_int_unc(src.ypos)
Create the data without creating a plot:
>>> iunc = get_int_unc(pl.gamma, recalc=True)
Specify the range and step size for the parameter, in this case varying linearly between 12 and 14 with 51 values:
>>> iunc = get_int_unc(src.r0, id="src", min=12, max=14, ... nloop=51, recalc=True)
- get_iter_method_name()[source] [edit on github]
Return the name of the iterative fitting scheme.
- Returns:
name – The name of the iterative fitting scheme set by
set_iter_method
.- Return type:
{‘none’, ‘sigmarej’}
See also
list_iter_methods
List the iterative fitting schemes.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Examples
>>> print(get_iter_method_name())
- get_iter_method_opt(optname=None)[source] [edit on github]
Return one or all options for the iterative-fitting scheme.
The options available for the iterative-fitting methods are described in
set_iter_method_opt
.- Parameters:
optname (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns:
value – The dictionary is empty when no iterative scheme is being used.
- Return type:
dictionary or value
- Raises:
sherpa.utils.err.ArgumentErr – If the
optname
argument is not recognized.
See also
get_iter_method_name
Return the name of the iterative fitting scheme.
set_iter_method_opt
Set an option for the iterative-fitting scheme.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Examples
Display the settings of the current iterative-fitting method:
>>> print(get_iter_method_opt())
Switch to the sigmarej scheme and find out the current settings:
>>> set_iter_method('sigmarej') >>> opts = get_iter_method_opt()
Return the ‘maxiters’ setting (if applicable):
>>> get_iter_method_opt('maxiters')
- get_kernel_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_kernel.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call tocontour_kernel
(orget_kernel_contour
) are returned, otherwise the data is re-generated.
- Returns:
psf_data
- Return type:
a
sherpa.plot.PSFKernelContour
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_psf_contour
Return the data used by contour_psf.
contour_kernel
Contour the kernel applied to the model of an image data set.
contour_psf
Contour the PSF applied to the model of an image data set.
Examples
Return the contour data for the kernel for the default data set:
>>> kplot = get_kernel_contour()
- get_kernel_image(id=None)[source] [edit on github]
Return the data used by image_kernel.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
psf_data
- Return type:
a
sherpa.image.PSFKernelImage
instance- Raises:
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_psf_image
Return the data used by image_psf.
image_kernel
Display the 2D kernel for a data set in the image viewer.
image_psf
Display the 2D PSF model for a data set in the image viewer.
Examples
Return the image data for the kernel for the default data set:
>>> iplot = get_kernel_image() >>> iplot.y.shape (51, 51)
- get_kernel_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_kernel.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_kernel
(orget_kernel_plot
) are returned, otherwise the data is re-generated.
- Returns:
kernel_plot
- Return type:
a
sherpa.plot.PSFKernelPlot
instance- Raises:
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_psf_plot
Return the data used by plot_psf.
plot_kernel
Plot the 1D kernel applied to a data set.
plot_psf
Plot the 1D PSF model applied to a data set.
Examples
Return the plot data and then create a plot with it:
>>> kplot = get_kernel_plot() >>> kplot.plot()
- get_method(name=None)[source] [edit on github]
Return an optimization method.
- Parameters:
name (str, optional) – If not given, the current method is returned, otherwise it should be one of the names returned by the
list_methods
function.- Returns:
method – An object representing the optimization method.
- Return type:
- Raises:
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
get_method_opt
Get the options for the current optimization method.
list_methods
List the supported optimization methods.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
Examples
The fields of the object returned by
get_method
can be used to view or change the method options.>>> method = ui.get_method() >>> print(method.name) levmar >>> print(method) name = levmar ftol = 1.19209289551e-07 xtol = 1.19209289551e-07 gtol = 1.19209289551e-07 maxfev = None epsfcn = 1.19209289551e-07 factor = 100.0 verbose = 0 >>> method.maxfev = 5000
- get_method_name()[source] [edit on github]
Return the name of current Sherpa optimization method.
- Returns:
name – The name of the current optimization method, in lower case. This may not match the value sent to
set_method
because some methods can be set by multiple names.- Return type:
See also
get_method
Return an optimization method.
get_method_opt
Get the options for the current optimization method.
Examples
>>> get_method_name() 'levmar'
The ‘neldermead’ method can also be referred to as ‘simplex’:
>>> set_method('simplex') >>> get_method_name() 'neldermead'
- get_method_opt(optname=None)[source] [edit on github]
Return one or all of the options for the current optimization method.
This is a helper function since the optimization options can also be read directly using the object returned by
get_method
.- Parameters:
optname (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns:
value
- Return type:
dictionary or value
- Raises:
sherpa.utils.err.ArgumentErr – If the
optname
argument is not recognized.
See also
get_method
Return an optimization method.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
Examples
>>> get_method_opt('maxfev') is None True
>>> mopts = get_method_opt() >>> mopts['maxfev'] is None True
- get_model(id=None)[source] [edit on github]
Return the model expression for a data set.
This returns the model expression for a data set, including any instrument response (e.g. PSF or ARF and RMF) whether created automatically or explicitly, with
set_full_model
.- Parameters:
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
This can contain multiple model components and any instrument response. Changing attributes of this model changes the model used by the data set.
- Return type:
instance
See also
delete_model
Delete the model expression from a data set.
get_model_pars
Return the names of the parameters of a model.
get_model_type
Describe a model expression.
get_source
Return the source model expression for a data set.
list_model_ids
List of all the data sets with a source expression.
sherpa.astro.ui.set_bkg_model
Set the background model expression for a data set.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Return the model fitted to the default data set:
>>> mdl = get_model() >>> len(mdl.pars) 5
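Since the return value is the model used by the data set, changing one of its attributes changes the fit set-up directly. A minimal sketch, freezing the first parameter of the expression:
>>> mdl = get_model() >>> mdl.pars[0].frozen = True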
- get_model_autoassign_func()[source] [edit on github]
Return the method used to create model component identifiers.
Provides access to the function which is used by
create_model_component
and when creating model components directly to add an identifier in the current Python namespace.- Returns:
The model function set by
set_model_autoassign_func
.- Return type:
func
See also
create_model_component
Create a model component.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
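Examples
Retrieve the current auto-assignment function; it can be restored later with set_model_autoassign_func:
>>> assignfunc = get_model_autoassign_func()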
- get_model_component(name)[source] [edit on github]
Returns a model component given its name.
- Parameters:
name (str) – The name of the model component.
- Returns:
component – The model component object.
- Return type:
a sherpa.models.model.Model instance
- Raises:
sherpa.utils.err.IdentifierErr – If there is no model component with the given
name
.
See also
create_model_component
Create a model component.
get_model
Return the model expression for a data set.
get_source
Return the source model expression for a data set.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
Notes
The model instances are named as modeltype.username, and it is the
username
component that is used here to access the instance.Examples
When a model component is created, a variable is created that contains the model instance. The instance can also be returned with
get_model_component
, which can then be queried or used to change the model settings:>>> create_model_component('gauss1d', 'gline') >>> gmodel = get_model_component('gline') >>> gmodel.name 'gauss1d.gline' >>> print([p.name for p in gmodel.pars]) ['fwhm', 'pos', 'ampl'] >>> gmodel.fwhm.val = 12.2 >>> gmodel.fwhm.freeze()
- get_model_component_image(id, model=None)[source] [edit on github]
Return the data used by image_model_component.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
- Returns:
cpt_img – The
y
attribute contains the component model values as a 2D NumPy array.- Return type:
a
sherpa.image.ComponentModelImage
instance- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_component_image
Return the data used by image_source_component.
get_model_image
Return the data used by image_model.
image_model
Display the model for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
model
parameter. If given two un-named arguments, then they are interpreted as theid
andmodel
parameters, respectively.Examples
Return the gsrc component values for the default data set:
>>> minfo = get_model_component_image(gsrc)
Get the
bgnd
model pixel values for data set 2:>>> minfo = get_model_component_image(2, bgnd)
- get_model_component_plot(id, model=None, recalc=True)[source] [edit on github]
Return the data used to create the model-component plot.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.model (str or sherpa.models.model.Model instance) – The component to use (the name, if a string).
recalc (bool, optional) – If
False
then the results from the last call toplot_model_component
(orget_model_component_plot
) are returned, otherwise the data is re-generated.
- Returns:
An object representing the data used to create the plot by
plot_model_component
. The return value depends on the data set (e.g. 1D binned or un-binned).- Return type:
instance
See also
get_model_plot
Return the data used to create the model plot.
plot_model
Plot the model for a data set.
plot_model_component
Plot a component of the model for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
model
parameter. If given two un-named arguments, then they are interpreted as theid
andmodel
parameters, respectively.Examples
Return the plot data for the
pl
component used in the default data set:>>> cplot = get_model_component_plot(pl)
Return the plot data for the full source model (
fplot
) and then for the components gal * pl
and gal * gline
, for the data set ‘jet’:>>> fmodel = xsphabs.gal * (powlaw1d.pl + gauss1d.gline) >>> set_source('jet', fmodel) >>> fit('jet') >>> fplot = get_model_plot('jet') >>> plot1 = get_model_component_plot('jet', pl*gal) >>> plot2 = get_model_component_plot('jet', gline*gal)
- get_model_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_model.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call tocontour_model
(orget_model_contour
) are returned, otherwise the data is re-generated.
- Returns:
model_data – The
y
attribute contains the model values and thex0
andx1
arrays contain the corresponding coordinate values, as one-dimensional arrays.- Return type:
a
sherpa.plot.ModelContour
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_image
Return the data used by image_model.
contour_model
Contour the values of the model, including any PSF.
image_model
Display the model for a data set in the image viewer.
Examples
Return the model pixel values for the default data set:
>>> minfo = get_model_contour()
- get_model_contour_prefs()[source] [edit on github]
Return the preferences for contour_model.
- Returns:
prefs – Changing the values of this dictionary will change any new contour plots.
- Return type:
See also
contour_model
Contour the values of the model, including any PSF.
Notes
The meaning of the fields depend on the chosen plot backend. A value of
None
(or not set) means to use the default value for that attribute, unless indicated otherwise.alpha
The transparency value used to draw the contours, where 0 is fully transparent and 1 is fully opaque.
colors
The colors to draw the contours.
linewidths
What thickness of line to draw the contours.
xlog
Should the X axis be drawn with a logarithmic scale? The default is
False
.ylog
Should the Y axis be drawn with a logarithmic scale? The default is
False
.
Examples
Change the contours for the model to be drawn in ‘orange’:
>>> prefs = get_model_contour_prefs() >>> prefs['colors'] = 'orange' >>> contour_data() >>> contour_model(overcontour=True)
- get_model_image(id=None)[source] [edit on github]
Return the data used by image_model.
Evaluate the source expression for the image pixels - including any PSF convolution defined by
set_psf
- and return the results.- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
src_img – The
y
attribute contains the source model values as a 2D NumPy array.- Return type:
a
sherpa.image.ModelImage
instance- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_image
Return the data used by image_source.
contour_model
Contour the values of the model, including any PSF.
image_model
Display the model for a data set in the image viewer.
set_psf
Add a PSF model to a data set.
Examples
Calculate the residuals (data - model) for the default data set:
>>> minfo = get_model_image() >>> dinfo = get_data_image() >>> resid = dinfo.y - minfo.y
- get_model_pars(model)[source] [edit on github]
Return the names of the parameters of a model.
- Parameters:
model (str or a sherpa.models.model.Model object) –
- Returns:
names – The names of the parameters in the model expression. These names do not include the name of the parent component.
- Return type:
list of str
See also
create_model_component
Create a model component.
get_model
Return the model expression for a data set.
get_model_type
Describe a model expression.
get_source
Return the source model expression for a data set.
Examples
>>> set_source(gauss2d.src + const2d.bgnd) >>> get_model_pars(get_source()) ['fwhm', 'xpos', 'ypos', 'ellip', 'theta', 'ampl', 'c0']
- get_model_plot(id=None, recalc=True)[source] [edit on github]
Return the data used to create the model plot.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_model
(orget_model_plot
) are returned, otherwise the data is re-generated.
- Returns:
An object representing the data used to create the plot by
plot_model
. The return value depends on the data set (e.g. 1D binned or un-binned).- Return type:
instance
See also
get_model_plot_prefs
Return the preferences for plot_model.
plot_model
Plot the model for a data set.
Examples
>>> mplot = get_model_plot() >>> print(mplot)
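As with the dataplot and modelplot attributes shown for get_fit_plot, the returned object can be used to create the plot directly (a minimal sketch, for data set 2):
>>> mplot = get_model_plot(2) >>> mplot.plot()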
- get_model_plot_prefs(id=None)[source] [edit on github]
Return the preferences for plot_model.
The plot preferences may depend on the data set, so the data set identifier is accepted as an optional argument.
Changed in version 4.12.2: The id argument has been added.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
prefs – Changing the values of this dictionary will change any new model plots. This dictionary will be empty if no plot backend is available.
- Return type:
See also
plot_model
Plot the model for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The meaning of the fields depend on the chosen plot backend. A value of
None
means to use the default value for that attribute, unless indicated otherwise. These preferences are used by the following commands:plot_model
,plot_ratio
,plot_bkg_model
, and the “fit” variants, such asplot_fit
,plot_fit_resid
, andplot_bkg_fit
.The preferences recognized by the matplotlib backend are the same as for
get_data_plot_prefs
.Examples
After these commands, any model plot will use a green line to display the model:
>>> prefs = get_model_plot_prefs() >>> prefs['color'] = 'green'
- get_model_type(model)[source] [edit on github]
Describe a model expression.
- Parameters:
model (str or a sherpa.models.model.Model object) –
- Returns:
type – The name of the model expression.
- Return type:
See also
create_model_component
Create a model component.
get_model
Return the model expression for a data set.
get_model_pars
Return the names of the parameters of a model.
get_source
Return the source model expression for a data set.
Examples
>>> create_model_component("powlaw1d", "pl") >>> get_model_type("pl") 'powlaw1d'
For expressions containing more than one component, the result is likely to be ‘binaryopmodel’:
>>> get_model_type(const1d.norm * (polynom1d.poly + gauss1d.gline)) 'binaryopmodel'
For sources with some form of an instrument model - such as a PSF convolution for an image or a PHA file with response information from the ARF and RMF - the returned type can depend on whether the expression contains this extra information or not:
>>> get_model_type(get_source('spec')) 'binaryopmodel' >>> get_model_type(get_model('spec')) 'rspmodelpha'
- get_num_par(id=None)[source] [edit on github]
Return the number of parameters in a model expression.
The
get_num_par
function returns the number of parameters, both frozen and thawed, in the model assigned to a data set.- Parameters:
id (int or str, optional) – The data set containing the model expression. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
npar – The number of parameters in the model expression. This sums up all the parameters of the components in the expression, and includes both frozen and thawed parameters.
- Return type:
- Raises:
sherpa.utils.err.IdentifierErr – If no model expression has been set for the data set (with
set_model
orset_source
).
See also
get_num_par_frozen
Return the number of frozen parameters.
get_num_par_thawed
Return the number of thawed parameters.
set_model
Set the source model expression for a data set.
Examples
Return the total number of parameters for the default data set:
>>> print(get_num_par())
Find the number of parameters for the model associated with the data set called “jet”:
>>> njet = get_num_par('jet')
- get_num_par_frozen(id=None)[source] [edit on github]
Return the number of frozen parameters in a model expression.
The
get_num_par_frozen
function returns the number of frozen parameters in the model assigned to a data set.- Parameters:
id (int or str, optional) – The data set containing the model expression. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
npar – The number of parameters in the model expression. This sums up all the frozen parameters of the components in the expression.
- Return type:
- Raises:
sherpa.utils.err.IdentifierErr – If no model expression has been set for the data set (with
set_model
orset_source
).
See also
get_num_par
Return the number of parameters.
get_num_par_thawed
Return the number of thawed parameters.
set_model
Set the source model expression for a data set.
Examples
Return the number of frozen parameters for the default data set:
>>> print(get_num_par_frozen())
Find the number of frozen parameters for the model associated with the data set called “jet”:
>>> njet = get_num_par_frozen('jet')
- get_num_par_thawed(id=None)[source] [edit on github]
Return the number of thawed parameters in a model expression.
The
get_num_par_thawed
function returns the number of thawed parameters in the model assigned to a data set.- Parameters:
id (int or str, optional) – The data set containing the model expression. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
npar – The number of parameters in the model expression. This sums up all the thawed parameters of the components in the expression.
- Return type:
- Raises:
sherpa.utils.err.IdentifierErr – If no model expression has been set for the data set (with
set_model
orset_source
).
See also
get_num_par
Return the number of parameters.
get_num_par_frozen
Return the number of frozen parameters.
set_model
Set the source model expression for a data set.
Examples
Return the number of thawed parameters for the default data set:
>>> print(get_num_par_thawed())
Find the number of thawed parameters for the model associated with the data set called “jet”:
>>> njet = get_num_par_thawed('jet')
- get_par(par)[source] [edit on github]
Return a parameter of a model component.
- Parameters:
par (str) – The name of the parameter, using the format “componentname.parametername”.
- Returns:
par – The parameter values - e.g. current value, limits, and whether it is frozen - can be changed using this object.
- Return type:
a
sherpa.models.parameter.Parameter
instance- Raises:
sherpa.utils.err.ArgumentErr – If the
par
argument is invalid: the model component does not exist or the given model has no parameter with that name.
See also
set_par
Set the value, limits, or behavior of a model parameter.
Examples
Return the “c0” parameter of the “bgnd” model component and change it to be frozen:
>>> p = get_par('bgnd.c0') >>> p.frozen = True
- get_pdf_plot()[source] [edit on github]
Return the data used to plot the last PDF.
- Returns:
plot – An object containing the data used by the last call to
plot_pdf
. The fields will beNone
if the function has not been called.- Return type:
a
sherpa.plot.PDFPlot
instance
See also
plot_pdf
Plot the probability density function of an array.
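Examples
A minimal sketch: the sample array here is purely illustrative, and plot_pdf must have been called before the plot data can be retrieved:
>>> x = np.random.normal(size=1000) >>> plot_pdf(x) >>> pplot = get_pdf_plot() >>> print(pplot)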
- get_prior(par)[source] [edit on github]
Return the prior function for a parameter (MCMC).
The default behavior of the pyBLoCXS MCMC sampler (run by the
get_draws
function) is to use a flat prior for each parameter. Theget_prior
routine finds the current prior assigned to a parameter, andset_prior
is used to change it.- Parameters:
par (a
sherpa.models.parameter.Parameter
instance) – A parameter of a model instance.- Returns:
The parameter prior set by a previous call to
set_prior
. This may be a function or model instance.- Return type:
prior
- Raises:
ValueError – If a prior has not been set for the parameter.
See also
set_prior
Set the prior function to use with a parameter.
Examples
>>> prior = get_prior(bgnd.c0) >>> print(prior)
- get_proj()[source] [edit on github]
Return the confidence-interval estimation object.
- Returns:
proj
- Return type:
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_proj_opt
Return one or all of the options for the confidence interval method.
proj
Estimate confidence intervals for fit parameters.
set_proj_opt
Set an option of the proj estimation object.
Notes
The attributes of the object include:
eps
The precision of the calculated limits. The default is 0.01.
fast
If
True
then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default isFalse
.max_rstat
If the reduced chi square is larger than this value, do not use the results (only used with chi-square statistics). The default is 3.
maxfits
The maximum number of re-fits allowed (that is, when the
remin
filter is met). The default is 5.maxiters
The maximum number of iterations allowed when bracketing limits, before stopping for that parameter. The default is 200.
numcores
The number of computer cores to use when evaluating results in parallel. This is only used if
parallel
isTrue
. The default is to use all cores.parallel
If there is more than one free parameter then the results can be evaluated in parallel, to reduce the time required. The default is
True
.remin
The minimum difference in statistic value for a new fit location to be considered better than the current best fit (which starts out as the starting location of the fit at the time
proj
is called). The default is 0.01.sigma
What is the error limit being calculated. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (
True
), or can parameter values go out all the way to the hard limits if necessary (False
). The default isFalse
tol
The tolerance for the fit. The default is 0.2.
Examples
>>> print(get_proj()) name = projection numcores = 8 max_rstat = 3 maxiters = 200 soft_limits = False eps = 0.01 fast = False maxfits = 5 remin = 0.01 tol = 0.2 sigma = 1 parallel = True
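Since the same object is used by later proj calls, its attributes can be changed directly, in the same way that the get_method example changes maxfev (a sketch; set_proj_opt provides the same functionality):
>>> prj = get_proj() >>> prj.sigma = 1.6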
- get_proj_opt(name=None)[source] [edit on github]
Return one or all of the options for the confidence interval method.
This is a helper function since the options can also be read directly using the object returned by
get_proj
.- Parameters:
name (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns:
value
- Return type:
dictionary or value
- Raises:
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
conf
Estimate parameter confidence intervals using the confidence method.
proj
Estimate confidence intervals for fit parameters.
get_proj
Return the confidence-interval estimation object.
set_proj_opt
Set an option of the proj estimation object.
Examples
>>> get_proj_opt('sigma') 1
>>> popts = get_proj_opt() >>> popts['sigma'] 1
- get_proj_results()[source] [edit on github]
Return the results of the last
proj
run.- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
proj
call has been made.
See also
conf
Estimate parameter confidence intervals using the confidence method.
proj
Estimate confidence intervals for fit parameters.
get_proj_opt
Return one or all of the options for the projection method.
set_proj_opt
Set an option of the proj estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘projection’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
Examples
>>> res = get_proj_results() >>> print(res) datasets = ('src',) methodname = projection iterfitname = none fitname = levmar statname = chi2gehrels sigma = 1 percent = 68.2689492137 parnames = ('bgnd.c0',) parvals = (9.1958148476800918,) parmins = (-2.0765029551804268,) parmaxes = (2.0765029551935186,) nfits = 0
- get_projection_results() [edit on github]
Return the results of the last
proj
run.- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
proj
call has been made.
See also
conf
Estimate parameter confidence intervals using the confidence method.
proj
Estimate confidence intervals for fit parameters.
get_proj_opt
Return one or all of the options for the projection method.
set_proj_opt
Set an option of the proj estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘projection’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
Examples
>>> res = get_proj_results() >>> print(res) datasets = ('src',) methodname = projection iterfitname = none fitname = levmar statname = chi2gehrels sigma = 1 percent = 68.2689492137 parnames = ('bgnd.c0',) parvals = (9.1958148476800918,) parmins = (-2.0765029551804268,) parmaxes = (2.0765029551935186,) nfits = 0
- get_psf(id=None)[source] [edit on github]
Return the PSF model defined for a data set.
Return the parameter settings for the PSF model assigned to the data set.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
psf
- Return type:
a
sherpa.instrument.PSFModel
instance- Raises:
sherpa.utils.err.IdentifierErr – If no PSF model has been set for the data set.
See also
delete_psf
Delete the PSF model for a data set.
image_psf
Display the 2D PSF model for a data set in the image viewer.
list_psf_ids
List of all the data sets with a PSF.
load_psf
Create a PSF model.
plot_psf
Plot the 1D PSF model applied to a data set.
set_psf
Add a PSF model to a data set.
Examples
Change the size and center of the PSF for the default data set:
>>> psf = get_psf() >>> psf.size = (21, 21) >>> psf.center = (10, 10)
- get_psf_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_psf.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call tocontour_psf
(orget_psf_contour
) are returned, otherwise the data is re-generated.
- Returns:
psf_data
- Return type:
a
sherpa.plot.PSFContour
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_kernel_contour
Return the data used by contour_kernel.
contour_kernel
Contour the kernel applied to the model of an image data set.
contour_psf
Contour the PSF applied to the model of an image data set.
Examples
Return the contour data for the PSF for the default data set:
>>> cplot = get_psf_contour()
- get_psf_image(id=None)[source] [edit on github]
Return the data used by image_psf.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
psf_data
- Return type:
a
sherpa.image.PSFImage
instance- Raises:
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_kernel_image
Return the data used by image_kernel.
image_kernel
Display the 2D kernel for a data set in the image viewer.
image_psf
Display the 2D PSF model for a data set in the image viewer.
Examples
Return the image data for the PSF for the default data set:
>>> iplot = get_psf_image() >>> iplot.y.shape (175, 200)
- get_psf_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_psf.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_psf
(orget_psf_plot
) are returned, otherwise the data is re-generated.
- Returns:
psf_plot
- Return type:
a
sherpa.plot.PSFPlot
instance- Raises:
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_kernel_plot
Return the data used by plot_kernel.
plot_kernel
Plot the 1D kernel applied to a data set.
plot_psf
Plot the 1D PSF model applied to a data set.
Examples
Return the plot data and then create a plot with it:
>>> pplot = get_psf_plot() >>> pplot.plot()
- get_pvalue_plot(null_model=None, alt_model=None, conv_model=None, id=1, otherids=(), num=500, bins=25, numcores=None, recalc=False)[source] [edit on github]
Return the data used by plot_pvalue.
Access the data arrays and preferences defining the histogram plot produced by the
plot_pvalue
function, a histogram of the likelihood ratios comparing fits of the null model to fits of the alternative model using faked data with Poisson noise. Data returned includes the likelihood ratio computed using the observed data, and the p-value, used to reject or accept the null model.- Parameters:
null_model – The model expression for the null hypothesis.
alt_model – The model expression for the alternative hypothesis.
conv_model (optional) – An expression used to modify the model so that it can be compared to the data (e.g. a PSF or PHA response).
id (int or str, optional) – The data set that provides the data. The default is 1.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
num (int, optional) – The number of simulations to run. The default is 500.
bins (int, optional) – The number of bins to use to create the histogram. The default is 25.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
recalc (bool, optional) – The default value (
False
) means that the results from the last call toplot_pvalue
orget_pvalue_plot
are returned. IfTrue
, the values are re-calculated.
- Returns:
plot
- Return type:
a
sherpa.plot.LRHistogram
instance
See also
get_pvalue_results
Return the data calculated by the last plot_pvalue call.
plot_pvalue
Compute and plot a histogram of likelihood ratios by simulating data.
Examples
Return the values from the last call to
plot_pvalue
:>>> pvals = get_pvalue_plot() >>> pvals.ppp 0.472
Run 500 simulations for the two models and print the results:
>>> pvals = get_pvalue_plot(mdl1, mdl2, recalc=True, num=500) >>> print(pvals)
- get_pvalue_results()[source] [edit on github]
Return the data calculated by the last plot_pvalue call.
The
get_pvalue_results
function returns the likelihood ratio test results computed by theplot_pvalue
command, which compares fits of the null model to fits of the alternative model using faked data with Poisson noise. The likelihood ratio based on the observed data is returned, along with the p-value, used to reject or accept the null model.- Returns:
plot – If
plot_pvalue
orget_pvalue_plot
have been called then the return value is asherpa.sim.simulate.LikelihoodRatioResults
instance, otherwiseNone
is returned.- Return type:
None or a
sherpa.sim.simulate.LikelihoodRatioResults
instance
See also
plot_pvalue
Compute and plot a histogram of likelihood ratios by simulating data.
get_pvalue_plot
Return the data used by plot_pvalue.
Notes
The fields of the returned (
LikelihoodRatioResults
) object are:- ratios
The calculated likelihood ratio for each iteration.
- stats
The calculated fit statistics for each iteration, stored as the null model and then the alt model in a nsim by 2 array.
- samples
The parameter samples array for each simulation, stored in a nsim by npar array.
- lr
The likelihood ratio of the observed data for the null and alternate models.
- ppp
The p value of the observed data for the null and alternate models.
- null
The fit statistic of the null model on the observed data.
- alt
The fit statistic of the alternate model on the observed data.
Examples
Return the results of the last pvalue analysis and display the results - first using the
format
method, which provides a summary of the data, and then a look at the individual fields in the returned object. The last call displays the contents of one of the fields (ppp
).>>> res = get_pvalue_results() >>> print(res.format()) >>> print(res) >>> print(res.ppp)
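The simulation arrays can also be inspected directly. A sketch, assuming plot_pvalue was run with the default of 500 simulations, using the array shapes described in the Notes:
>>> res = get_pvalue_results() >>> len(res.ratios) 500 >>> res.stats.shape (500, 2)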
- get_ratio_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_ratio.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call tocontour_ratio
(orget_ratio_contour
) are returned, otherwise the data is re-generated.
- Returns:
ratio_data – The
y
attribute contains the ratio values and thex0
andx1
arrays contain the corresponding coordinate values, as one-dimensional arrays.- Return type:
a
sherpa.plot.RatioContour
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_image
Return the data used by image_ratio.
get_resid_contour
Return the data used by contour_resid.
contour_ratio
Contour the ratio of data to model.
image_ratio
Display the ratio (data/model) for a data set in the image viewer.
Examples
Return the ratio data for the default data set:
>>> rinfo = get_ratio_contour()
- get_ratio_image(id=None)[source] [edit on github]
Return the data used by image_ratio.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns:
ratio_img – The
y
attribute contains the ratio values as a 2D NumPy array.- Return type:
a
sherpa.image.RatioImage
instance- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_resid_image
Return the data used by image_resid.
contour_ratio
Contour the ratio of data to model.
image_ratio
Display the ratio (data/model) for a data set in the image viewer.
Examples
Return the ratio data for the default data set:
>>> rinfo = get_ratio_image()
- get_ratio_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_ratio.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_ratio
(orget_ratio_plot
) are returned, otherwise the data is re-generated.
- Returns:
ratio_data
- Return type:
a
sherpa.plot.RatioPlot
instance- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_delchi_plot
Return the data used by plot_delchi.
get_resid_plot
Return the data used by plot_resid.
plot_ratio
Plot the ratio of data to model for a data set.
Examples
Return the ratio of the data to the model for the default data set:
>>> rplot = get_ratio_plot() >>> np.min(rplot.y) 0.6320905073750186 >>> np.max(rplot.y) 1.5170172177000447
Display the contents of the ratio plot for data set 2:
>>> print(get_ratio_plot(2))
Overplot the ratio plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_ratio_plot('jet') >>> r2 = get_ratio_plot('core') >>> r1.plot() >>> r2.overplot()
- get_reg_proj(par0=None, par1=None, id=None, otherids=None, recalc=False, fast=True, min=None, max=None, nloop=(10, 10), delv=None, fac=4, log=(False, False), sigma=(1, 2, 3), levels=None, numcores=None)[source] [edit on github]
Return the region-projection object.
This returns (and optionally calculates) the data used to display the
reg_proj
contour plot. Note that if the recalc
parameter is False
(the default value) then all other parameters are ignored and the results of the last reg_proj
call are returned.- Parameters:
par0 – The parameters to plot on the X and Y axes, respectively. These arguments are only used if recalc is set to
True
.par1 – The parameters to plot on the X and Y axes, respectively. These arguments are only used if recalc is set to
True
.id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (
False
) means that the results from the last call toreg_proj
(orget_reg_proj
) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.fast (bool, optional) – If
True
then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default isFalse
.min (pair of numbers, optional) – The minimum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.max (pair of number, optional) – The maximum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.nloop (pair of int, optional) – The number of steps to use. This is used when
delv
is set toNone
.delv (pair of number, optional) – The step size for the parameter. Setting this over-rides the
nloop
parameter. The default isNone
.fac (number, optional) – When
min
ormax
is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).log (pair of bool, optional) – Should the step size be logarithmically spaced? The default (
False
) is to use a linear grid.sigma (sequence of number, optional) – The levels at which to draw the contours. The units are the change in significance relative to the starting value, in units of sigma.
levels (sequence of number, optional) – The numeric values at which to draw the contours. This over-rides the
sigma
parameter, if set (the default isNone
).numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns:
rproj – The fields of this object can be used to re-create the plot created by
reg_proj
.- Return type:
a
sherpa.plot.RegionProjection
instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
Examples
Return the results for the
reg_proj
run for the xpos
and ypos
parameters of the src
component, for the default data set:>>> reg_proj(src.xpos, src.ypos) >>> rproj = get_reg_proj()
Since the
recalc
parameter has not been changed to True
, the following will return the results for the last call to reg_proj
, which may not have been for the r0 and alpha parameters:>>> rproj = get_reg_proj(src.r0, src.alpha)
Create the data without creating a plot:
>>> rproj = get_reg_proj(pl.gamma, gal.nh, recalc=True)
Specify the range and step size for both the parameters, in this case pl.gamma should vary between 0.5 and 2.5, with gal.nh between 0.01 and 1, both with 51 values and the nH range done over a log scale:
>>> rproj = get_reg_proj(pl.gamma, gal.nh, id="src",
...                      min=(0.5, 0.01), max=(2.5, 1),
...                      nloop=(51, 51), log=(False, True),
...                      recalc=True)
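The returned object stores the grid and statistic values as fields; a minimal sketch of inspecting them, assuming the parval0, parval1, and levels field names of sherpa.plot.RegionProjection:
>>> rproj = get_reg_proj(src.xpos, src.ypos, recalc=True)
>>> print(rproj.parval0, rproj.parval1)  # best-fit values of the two parameters
>>> print(rproj.levels)                  # contour levels derived from the sigma argument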
- get_reg_unc(par0=None, par1=None, id=None, otherids=None, recalc=False, min=None, max=None, nloop=(10, 10), delv=None, fac=4, log=(False, False), sigma=(1, 2, 3), levels=None, numcores=None)[source] [edit on github]
Return the region-uncertainty object.
This returns (and optionally calculates) the data used to display the reg_unc contour plot. Note that if the recalc parameter is False (the default value) then all other parameters are ignored and the results of the last reg_unc call are returned.
- Parameters:
par0 – The parameter to display on the X axis. This argument is only used if recalc is set to True.
par1 – The parameter to display on the Y axis. This argument is only used if recalc is set to True.
id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (False) means that the results from the last call to reg_unc (or get_reg_unc) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.
fast (bool, optional) – If True then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is False.
min (pair of numbers, optional) – The minimum parameter values for the calculation. The default value of None means that the limits are calculated from the covariance, using the fac value.
max (pair of numbers, optional) – The maximum parameter values for the calculation. The default value of None means that the limits are calculated from the covariance, using the fac value.
nloop (pair of int, optional) – The number of steps to use. This is used when delv is set to None.
delv (pair of numbers, optional) – The step size for each parameter. Setting this overrides the nloop parameter. The default is None.
fac (number, optional) – When min or max is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added to or subtracted from the parameter value, as required).
log (pair of bool, optional) – Should the step size be logarithmically spaced? The default (False) is to use a linear grid.
sigma (sequence of number, optional) – The levels at which to draw the contours. The units are the change in significance relative to the starting value, in units of sigma.
levels (sequence of number, optional) – The numeric values at which to draw the contours. This overrides the sigma parameter, if set (the default is None).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns:
rproj – The fields of this object can be used to re-create the plot created by reg_unc.
- Return type:
a sherpa.plot.RegionUncertainty instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
Examples
Return the results for the reg_unc run for the xpos and ypos parameters of the src component, for the default data set:
>>> reg_unc(src.xpos, src.ypos)
>>> runc = get_reg_unc()
Since the recalc parameter has not been changed to True, the following will return the results for the last call to reg_unc, which may not have been for the r0 and alpha parameters:
>>> runc = get_reg_unc(src.r0, src.alpha)
Create the data without creating a plot:
>>> runc = get_reg_unc(pl.gamma, gal.nh, recalc=True)
Specify the range and step size for both parameters: pl.gamma varies between 0.5 and 2.5 and gal.nh between 0.01 and 1, each with 51 values, and the nH range is evaluated on a log scale:
>>> runc = get_reg_unc(pl.gamma, gal.nh, id="src",
...                    min=(0.5, 0.01), max=(2.5, 1),
...                    nloop=(51, 51), log=(False, True),
...                    recalc=True)
- get_resid_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_resid.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to contour_resid (or get_resid_contour) are returned, otherwise the data is re-generated.
- Returns:
resid_data – The y attribute contains the residual values and the x0 and x1 arrays contain the corresponding coordinate values, as one-dimensional arrays.
- Return type:
a sherpa.plot.ResidContour instance
- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_contour
Return the data used by contour_ratio.
get_resid_image
Return the data used by image_resid.
contour_resid
Contour the residuals of the fit.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
Examples
Return the residual data for the default data set:
>>> rinfo = get_resid_contour()
- get_resid_image(id=None)[source] [edit on github]
Return the data used by image_resid.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
resid_img – The y attribute contains the residual values as a 2D NumPy array.
- Return type:
a sherpa.image.ResidImage instance
- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_image
Return the data used by image_ratio.
contour_resid
Contour the residuals of the fit.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
Examples
Return the residual data for the default data set:
>>> rinfo = get_resid_image()
- get_resid_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_resid.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_resid (or get_resid_plot) are returned, otherwise the data is re-generated.
- Returns:
resid_data
- Return type:
a
sherpa.plot.ResidPlot
instance- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_delchi_plot
Return the data used by plot_delchi.
get_ratio_plot
Return the data used by plot_ratio.
plot_resid
Plot the residuals (data - model) for a data set.
Examples
Return the residual data for the default data set:
>>> rplot = get_resid_plot()
>>> np.min(rplot.y)
-2.9102595936209896
>>> np.max(rplot.y)
4.0897404063790104
Display the contents of the residuals plot for data set 2:
>>> print(get_resid_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_resid_plot('jet') >>> r2 = get_resid_plot('core') >>> r1.plot() >>> r2.overplot()
- get_sampler()[source] [edit on github]
Return the current MCMC sampler options.
Returns the options for the current pyBLoCXS MCMC sampling method (jumping rules).
- Returns:
options – A copy of the options for the chosen sampler. Use set_sampler_opt to change these values. The fields depend on the current sampler.
- Return type:
See also
get_sampler_name
Return the name of the current MCMC sampler.
get_sampler_opt
Return an option of the current MCMC sampler.
set_sampler
Set the MCMC sampler.
set_sampler_opt
Set an option for the current MCMC sampler.
Examples
>>> print(get_sampler())
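Since the return value is a copy, changing it does not alter the sampler; use set_sampler_opt for that. A short sketch, assuming the current sampler exposes a 'log' option (as in the get_sampler_opt example below):
>>> opts = get_sampler()
>>> opts['log']
False
>>> set_sampler_opt('log', True)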
- get_sampler_name()[source] [edit on github]
Return the name of the current MCMC sampler.
- Returns:
name
- Return type:
See also
get_sampler
Return the current MCMC sampler options.
set_sampler
Set the MCMC sampler.
Examples
>>> get_sampler_name() 'MetropolisMH'
- get_sampler_opt(opt)[source] [edit on github]
Return an option of the current MCMC sampler.
- Returns:
opt – The name of the option. The fields depend on the current sampler.
- Return type:
See also
get_sampler
Return the current MCMC sampler options.
set_sampler_opt
Set an option for the current MCMC sampler.
Examples
>>> get_sampler_opt('log') False
- get_scatter_plot()[source] [edit on github]
Return the data used to plot the last scatter plot.
- Returns:
plot – An object containing the data used by the last call to plot_scatter. The fields will be None if the function has not been called.
- Return type:
a sherpa.plot.ScatterPlot instance
See also
plot_scatter
Create a scatter plot.
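Examples
A minimal usage sketch, assuming x and y are arrays that have already been created (plot_scatter stores the data that get_scatter_plot then returns):
>>> plot_scatter(x, y)
>>> splot = get_scatter_plot()
>>> print(splot)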
- get_source(id=None)[source] [edit on github]
Return the source model expression for a data set.
This returns the model expression created by set_model or set_source. It does not include any instrument response.
- Parameters:
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
model – This can contain multiple model components. Changing attributes of this model changes the model used by the data set.
- Return type:
a sherpa.models.Model object
See also
delete_model
Delete the model expression from a data set.
get_model
Return the model expression for a data set.
get_model_pars
Return the names of the parameters of a model.
get_model_type
Describe a model expression.
list_model_ids
List of all the data sets with a source expression.
sherpa.astro.ui.set_bkg_model
Set the background model expression for a data set.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Return the source expression for the default data set, display it, and then find the number of parameters in it:
>>> src = get_source() >>> print(src) >>> len(src.pars) 5
Set the source expression for data set ‘obs2’ to be equal to the model of data set ‘obs1’ multiplied by a scalar value:
>>> set_source('obs2', const1d.norm * get_source('obs1'))
- get_source_component_image(id, model=None)[source] [edit on github]
Return the data used by image_source_component.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
- Returns:
cpt_img – The y attribute contains the component model values as a 2D NumPy array.
- Return type:
a sherpa.image.ComponentSourceImage instance
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_component_image
Return the data used by image_model_component.
get_source_image
Return the data used by image_source.
image_source
Display the source expression for a data set in the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Examples
Return the gsrc component values for the default data set:
>>> sinfo = get_source_component_image(gsrc)
Get the ‘bgnd’ model pixel values for data set 2:
>>> sinfo = get_source_component_image(2, bgnd)
- get_source_component_plot(id, model=None, recalc=True)[source] [edit on github]
Return the data used by plot_source_component.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to use (the name, if a string).
recalc (bool, optional) – If False then the results from the last call to plot_source_component (or get_source_component_plot) are returned, otherwise the data is re-generated.
- Returns:
An object representing the data used to create the plot by plot_source_component. The return value depends on the data set (e.g. 1D binned or un-binned).
- Return type:
instance
See also
get_source_plot
Return the data used to create the source plot.
plot_source
Plot the source expression for a data set.
plot_source_component
Plot a component of the source expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Examples
Return the plot data for the pl component used in the default data set:
>>> cplot = get_source_component_plot(pl)
Return the full source model (fplot) and then for the components gal * pl and gal * gline, for the data set 'jet':
>>> fmodel = xsphabs.gal * (powlaw1d.pl + gauss1d.gline)
>>> set_source('jet', fmodel)
>>> fit('jet')
>>> fplot = get_source('jet')
>>> plot1 = get_source_component_plot('jet', pl*gal)
>>> plot2 = get_source_component_plot('jet', gline*gal)
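Building on the example above, the returned objects can be displayed directly; a sketch assuming they provide the plot and overplot methods used elsewhere in this module:
>>> plot1.plot()
>>> plot2.overplot()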
- get_source_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_source.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call tocontour_source
(orget_source_contour
) are returned, otherwise the data is re-generated.
- Returns:
source_data – The y attribute contains the model values and the x0 and x1 arrays contain the corresponding coordinate values, as one-dimensional arrays.
- Return type:
a sherpa.plot.SourceContour instance
- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_image
Return the data used by image_source.
contour_source
Contour the values of the model, without any PSF.
image_source
Display the source expression for a data set in the image viewer.
Examples
Return the source model pixel values for the default data set:
>>> sinfo = get_source_contour()
- get_source_image(id=None)[source] [edit on github]
Return the data used by image_source.
Evaluate the source expression for the image pixels - without any PSF convolution - and return the results.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
src_img – The y attribute contains the source model values as a 2D NumPy array.
- Return type:
a sherpa.image.SourceImage instance
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_image
Return the data used by image_model.
contour_source
Contour the values of the model, without any PSF.
image_source
Display the source expression for a data set in the image viewer.
Examples
Return the model data for the default data set:
>>> sinfo = get_source_image() >>> sinfo.y.shape (150, 175)
- get_source_plot(id=None, recalc=True)[source] [edit on github]
Return the data used to create the source plot.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call toplot_source
(orget_source_plot
) are returned, otherwise the data is re-generated.
- Returns:
An object representing the data used to create the plot by
plot_source
. The return value depends on the data set (e.g. 1D binned or un-binned).- Return type:
instance
See also
get_model_plot
Return the data used to create the model plot.
plot_model
Plot the model for a data set.
plot_source
Plot the source expression for a data set.
Examples
Retrieve the source plot information for the default data set and then display it:
>>> splot = get_source_plot() >>> print(splot)
Return the plot data for data set 2, and then use it to create a plot:
>>> s2 = get_source_plot(2) >>> s2.plot()
Display the two source plots for the ‘jet’ and ‘core’ datasets on the same plot:
>>> splot1 = get_source_plot(id='jet') >>> splot2 = get_source_plot(id='core') >>> splot1.plot() >>> splot2.overplot()
- get_split_plot()[source] [edit on github]
Return the plot attributes for displays with multiple plots.
- Returns:
splot
- Return type:
a
sherpa.plot.SplitPlot
instance
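Examples
A minimal sketch, assuming the returned SplitPlot object exposes rows and cols fields describing the grid used for combined displays:
>>> sp = get_split_plot()
>>> print(sp.rows, sp.cols)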
- get_stat(name=None)[source] [edit on github]
Return the fit statistic.
- Parameters:
name (str, optional) – If not given, the current fit statistic is returned, otherwise it should be one of the names returned by the list_stats function.
- Returns:
stat – An object representing the fit statistic.
- Return type:
- Raises:
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
get_stat_name
Return the name of the current fit statistic.
list_stats
List the fit statistics.
set_stat
Change the fit statistic.
Examples
Return the currently-selected statistic, display its name, and read the help documentation for it:
>>> stat = get_stat() >>> stat.name 'chi2gehrels' >>> help(stat)
Read the help for the “wstat” statistic:
>>> help(get_stat('wstat'))
- get_stat_info()[source] [edit on github]
Return the statistic values for the current models.
Calculate the statistic value for each data set, and the combined fit, using the current set of models, parameters, and ranges.
- Returns:
stats – The values for each data set. If there are multiple model expressions then the last element will be the value for the combined data sets.
- Return type:
array of
sherpa.fit.StatInfoResults
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
get_fit_results
Return the results of the last fit.
list_data_ids
List the identifiers for the loaded data sets.
list_model_ids
List of all the data sets with a source expression.
Notes
If a fit to a particular data set has not been made, or values - such as parameter settings, the noticed data range, or choice of statistic - have been changed since the last fit, then the results for that data set may not be meaningful and will therefore bias the combined results.
The return value of get_stat_info differs from that of get_fit_results since it includes values for each data set individually, rather than just the combined results.
The fields of the object include:
- name
The name of the data set, or sets, as a string.
- ids
A sequence of the data set ids (it may be a tuple or array) included in the results.
- bkg_ids
A sequence of the background data set ids (it may be a tuple or array) included in the results, if any.
- statname
The name of the statistic function (as used in set_stat).
- statval
The statistic value.
- numpoints
The number of bins used in the fits.
- dof
The number of degrees of freedom in the fit (the number of bins minus the number of free parameters).
- qval
The Q-value (probability) that one would observe the reduced statistic value, or a larger value, if the assumed model is true and the current model parameters are the true parameter values. This will be None if the value cannot be calculated with the current statistic (e.g. the Cash statistic).
- rstat
The reduced statistic value (the statval field divided by dof). This is not calculated for all statistics.
Examples
>>> res = get_stat_info()
>>> res[0].statval
498.21750663761935
>>> res[0].dof
439
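Since a value is returned for each data set (plus the combined fit when several are loaded), the results can be iterated over; a sketch using the name, statval, and dof fields described above:
>>> for sinfo in get_stat_info():
...     print(sinfo.name, sinfo.statval, sinfo.dof)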
- get_stat_name()[source] [edit on github]
Return the name of the current fit statistic.
- Returns:
name – The name of the current fit statistic method, in lower case.
- Return type:
Examples
>>> get_stat_name() 'chi2gehrels'
>>> set_stat('cash') >>> get_stat_name() 'cash'
- get_staterror(id=None, filter=False)[source] [edit on github]
Return the statistical error on the dependent axis of a data set.
The function returns the statistical errors on the values (dependent axis) of a data set. These may have been set explicitly - either when the data set was created or with a call to set_staterror - or as defined by the chosen fit statistic (such as "chi2gehrels").
- Parameters:
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
- Returns:
staterrors – The statistical error for each data point. This may be estimated from the data (e.g. with the chi2gehrels statistic) or have been set explicitly (set_staterror). The size of this array depends on the filter argument.
- Return type:
array
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_syserror
Return the systematic errors on the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
set_staterror
Set the statistical errors on the dependent axis of a data set.
Notes
The default behavior is to not apply any filter defined on the independent axes to the results, so that the return value is for all points (or bins) in the data set. Set the filter argument to True to apply this filter.
Examples
If not explicitly given, the statistical errors on a data set may be calculated from the data values (the dependent axis), depending on the chosen statistic:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> set_stat('chi2datavar')
>>> get_staterror()
array([ 2. , 2.23606798, 3. ])
>>> set_stat('chi2gehrels')
>>> get_staterror()
array([ 3.17944947, 3.39791576, 4.122499 ])
If the statistical errors are set - either when the data set is created or with a call to set_staterror - then these values will be used, no matter the statistic:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], [2, 3, 5])
>>> set_stat('chi2datavar')
>>> get_staterror()
array([2, 3, 5])
>>> set_stat('chi2gehrels')
>>> get_staterror()
array([2, 3, 5])
- get_syserror(id=None, filter=False)[source] [edit on github]
Return the systematic error on the dependent axis of a data set.
The function returns the systematic errors on the values (dependent axis) of a data set. It is an error if called on a data set with no systematic errors (which are set with set_syserror).
- Parameters:
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is
False
.
- Returns:
syserrors – The systematic error for each data point. The size of this array depends on the
filter
argument.- Return type:
array
- Raises:
sherpa.utils.err.DataErr – If the data set has no systematic errors.
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_staterror
Return the statistical errors on the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
set_syserror
Set the systematic errors on the dependent axis of a data set.
Notes
The default behavior is to not apply any filter defined on the independent axes to the results, so that the return value is for all points (or bins) in the data set. Set the filter argument to True to apply this filter.
Examples
Return the systematic error for the default data set:
>>> yerr = get_syserror()
Return an array that has been filtered to match the data:
>>> yerr = get_syserror(filter=True)
Return the filtered errors for data set “core”:
>>> yerr = get_syserror("core", filter=True)
- get_trace_plot()[source] [edit on github]
Return the data used to plot the last trace.
- Returns:
plot – An object containing the data used by the last call to plot_trace. The fields will be None if the function has not been called.
- Return type:
a sherpa.plot.TracePlot instance
See also
plot_trace
Create a trace plot of row number versus value.
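Examples
A minimal usage sketch, assuming x is an array of values that has already been created (plot_trace stores the data that get_trace_plot then returns):
>>> plot_trace(x, name='x')
>>> tplot = get_trace_plot()
>>> print(tplot)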
- guess(id=None, model=None, limits=True, values=True)[source] [edit on github]
Estimate the parameter values and ranges given the loaded data.
The guess function can change the parameter values and limits to match the loaded data. This is generally limited to changing the amplitude and position parameters (sometimes just the values and sometimes just the limits). The parameters that are changed depend on the type of model.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
model – Change the parameters of this model component. If None, then the source expression is assumed to consist of a single component, and that component is used.
limits (bool) – Should the parameter limits be changed? The default is True.
values (bool) – Should the parameter values be changed? The default is True.
See also
get_default_id
Return the default data set identifier.
reset
Reset the model parameters to their default settings.
set_par
Set the value, limits, or behavior of a model parameter.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
The guess function can reduce the time required to fit a data set by moving the parameters closer to a realistic solution. It can also be useful because it can set bounds on the parameter values based on the data: for instance, many two-dimensional models will limit their xpos and ypos values to lie within the data area. This can be done manually, but guess simplifies this, at least for those parameters that are supported. Instrument models - such as an ARF and RMF - should be set up before calling guess.
Examples
Since the source expression contains only one component, guess can be called with no arguments:
>>> set_source(polynom1d.poly) >>> guess()
In this case, guess is called on each component separately.
>>> set_source(gauss1d.line + powlaw1d.cont)
>>> guess(line)
>>> guess(cont)
In this example, the values of the src model component are guessed from the "src" data set, whereas the bgnd component is guessed from the "bgnd" data set.
>>> set_source("src", gauss2d.src + const2d.bgnd)
>>> set_source("bgnd", bgnd)
>>> guess("src", src)
>>> guess("bgnd", bgnd)
Set the source model for the default dataset. Guess is run to determine the values of the model component “p1” and the limits of the model component “g1”:
>>> set_source(powlaw1d.p1 + gauss1d.g1)
>>> guess(p1, limits=False)
>>> guess(g1, values=False)
- ignore(lo=None, hi=None, **kwargs)[source] [edit on github]
Exclude data from the fit.
Select one or more ranges of data to exclude by filtering on the independent axis value. The filter is applied to all data sets.
Changed in version 4.15.0: The change in the filter is now reported for each dataset.
Changed in version 4.14.0: Integrated data sets - so Data1DInt and DataPHA when using energy or wavelengths - now ensure that the hi argument is exclusive and improve the handling of the lo
argument when it matches a bin edge. This can result in the same filter selecting a smaller number of bins than in earlier versions of Sherpa.- Parameters:
lo (number or str, optional) – The lower bound of the filter (when a number) or a string expression listing ranges in the form
a:b
, with multiple ranges allowed, where the ranges are separated by a,
. The term:b
means exclude everything up tob
(an exclusive limit for integrated datasets), anda:
means exclude everything that is higher than, or equal to,a
.hi (number, optional) – The upper bound of the filter when
lo
is not a string.bkg_id (int or str, optional) – The filter will be applied to the associated background component of the data set if
bkg_id
is set. Only PHA data sets support this option; if not given, then the filter is applied to all background components as well as the source data.
See also
ignore_id
Exclude data from the fit for a data set.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice
Include data in the fit.
show_filter
Show any filters applied to a data set.
Notes
The order of
ignore
andnotice
calls is important, and the results are a union, rather than intersection, of the combination.For binned data sets, the bin is excluded if the ignored range falls anywhere within the bin.
The units used depend on the
analysis
setting of the data set, if appropriate.To filter a 2D data set by a shape use
ignore2d
.The report of the change in the filter expression can be controlled with the
SherpaVerbosity
context manager, as shown in the examples below.Examples
Ignore all data points with an X value (the independent axis) between 12 and 18. For this one-dimensional data set, this means that the second bin is ignored:
>>> load_arrays(1, [10, 15, 20, 30], [5, 10, 7, 13])
>>> ignore(12, 18)
dataset 1: 10:30 -> 10,20:30 x
>>> get_dep(filter=True)
array([ 5, 7, 13])
Filtering X values that are 25 or larger means that the last point is also ignored:
>>> ignore(lo=25) dataset 1: 10,20:30 -> 10,20 x >>> get_dep(filter=True) array([ 5, 7])
The
notice
call removes the previous filter, and then a multi-range filter is applied to exclude values between 8 and 12 and 18 and 22:>>> notice() dataset 1: 10,20 -> 10:30 x >>> ignore("8:12, 18:22") dataset 1: 10:30 -> 15:30 x dataset 1: 15:30 -> 15,30 x >>> get_dep(filter=True) array([10, 13])
The
SherpaVerbosity
context manager can be used to hide the screen output:>>> from sherpa.utils.logging import SherpaVerbosity >>> with SherpaVerbosity("WARN"): ... ignore(hi=12) ...
- ignore_id(ids, lo=None, hi=None, **kwargs)[source] [edit on github]
Exclude data from the fit for a data set.
Select one or more ranges of data to exclude by filtering on the independent axis value. The filter is applied to the given data set, or sets.
Changed in version 4.15.0: The change in the filter is now reported for the dataset.
Changed in version 4.14.0: Integrated data sets - so Data1DInt and DataPHA when using energy or wavelengths - now ensure that the
hi
argument is exclusive and improve the handling of the lo
argument when it matches a bin edge. This can result in the same filter selecting a smaller number of bins than in earlier versions of Sherpa.- Parameters:
ids (int or str, or array of int or str) – The data set, or sets, to use.
lo (number or str, optional) – The lower bound of the filter (when a number) or a string expression listing ranges in the form
a:b
, with multiple ranges allowed, where the ranges are separated by a,
. The term:b
means exclude everything up tob
(an exclusive limit for integrated datasets), anda:
means exclude everything that is higher than, or equal to,a
.hi (number, optional) – The upper bound of the filter when
lo
is not a string.bkg_id (int or str, optional) – The filter will be applied to the associated background component of the data set if
bkg_id
is set. Only PHA data sets support this option; if not given, then the filter is applied to all background components as well as the source data.
See also
ignore
Exclude data from the fit.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice_id
Include data from the fit for a data set.
show_filter
Show any filters applied to a data set.
Notes
The order of ignore and notice calls is important.
The units used depend on the analysis setting of the data set, if appropriate.
To filter a 2D data set by a shape use ignore2d.
Examples
Ignore all data points with an X value (the independent axis) between 12 and 18 for data set 1:
>>> ignore_id(1, 12, 18) dataset 1: 10:30 -> 10,20:30 x
Ignore the range up to 0.5 and 7 and above, for data sets 1, 2, and 3:
>>> ignore_id([1, 2, 3], hi=0.5)
dataset 1: 0.00146:14.9504 -> 0.584:14.9504 Energy (keV)
dataset 2: 0.00146:14.9504 -> 0.6424:14.9504 Energy (keV)
dataset 3: 0.00146:14.9504 -> 0.511:14.9504 Energy (keV)
>>> ignore_id([1, 2, 3], lo=7)
dataset 1: 0.584:14.9504 -> 0.584:4.4384 Energy (keV)
dataset 2: 0.6424:14.9504 -> 0.6424:5.1392 Energy (keV)
dataset 3: 0.511:14.9504 -> 0.511:4.526 Energy (keV)
Apply the same filter as the previous example, but to data sets “core” and “jet”, and hide the screen output:
>>> from sherpa.utils.logging import SherpaVerbosity
>>> with SherpaVerbosity("WARN"):
...     ignore_id(["core", "jet"], ":0.5,7:")
...
- image_close()[source] [edit on github]
Close the image viewer.
Close the image viewer created by a previous call to one of the image_xxx functions.
See also
image_deleteframes
Delete all the frames open in the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Start the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
>>> image_close()
- image_data(id=None, newframe=False, tile=False)[source] [edit on github]
Display a data set in the image viewer.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_data_image
Return the data used by image_data.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
Display the data in default data set.
>>> image_data()
Display data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_data(2, newframe=True)
Display data sets ‘i1’ and ‘i2’ side by side:
>>> image_data('i1') >>> image_data('i2', newframe=True, tile=True)
- image_deleteframes()[source] [edit on github]
Delete all the frames open in the image viewer.
Delete all the frames - in other words, images - being displayed in the image viewer (e.g. as created by image_data or image_fit).
See also
image_close
Close the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Create the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
>>> image_deleteframes()
- image_fit(id=None, newframe=True, tile=True, deleteframes=True)[source] [edit on github]
Display the data, model, and residuals for a data set in the image viewer.
This function displays the data, model (including any instrument response), and the residuals (data - model), for a data set.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
deleteframes (bool, optional) – Should existing frames be deleted? The default is
True
.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
Display the fit results - that is, the data, model, and residuals - for the default data set.
>>> image_fit()
Do not tile the frames (the three frames are loaded, but only the last one, the residuals, is displayed), and then change the frame being displayed to the second one (the model).
>>> image_fit('img', tile=False) >>> image_xpaset('frame 2')
- image_getregion(coord='')[source] [edit on github]
Return the region defined in the image viewer.
The regions defined in the current frame are returned.
- Parameters:
coord (str, optional) – The coordinate system to use.
- Returns:
region – The region, or regions, or the empty string.
- Return type:
- Raises:
sherpa.utils.err.DS9Err – Invalid coordinate system.
See also
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
>>> image_getregion() 'circle(123,128,12.377649);-box(130,121,14,14,329.93142);'
>>> image_getregion('physical') 'circle(3920.5,4080.5,396.08476);-rotbox(4144.5,3856.5,448,448,329.93142);'
- image_kernel(id=None, newframe=False, tile=False)[source] [edit on github]
Display the 2D kernel for a data set in the image viewer.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_kernel_image
Return the data used by image_kernel.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
plot_kernel
Plot the 1D kernel applied to a data set.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
>>> image_kernel()
>>> image_kernel(2)
- image_model(id=None, newframe=False, tile=False)[source] [edit on github]
Display the model for a data set in the image viewer.
This function evaluates and displays the model expression for a data set, including any instrument response (e.g. PSF or ARF and RMF) whether created automatically or with set_full_model.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_image
Return the data used by image_model.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
Display the model for the default data set.
>>> image_model()
Display the model for data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_model(2, newframe=True)
Display the models for data sets ‘i1’ and ‘i2’ side by side:
>>> image_model('i1') >>> image_model('i2', newframe=True, tile=True)
- image_model_component(id, model=None, newframe=False, tile=False)[source] [edit on github]
Display a component of the model in the image viewer.
This function evaluates and displays a component of the model expression for a data set, including any instrument response. Use image_source_component to exclude the response.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_component_image
Return the data used by image_model_component.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the source expression for a data set in the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Image visualization is optional, and provided by the DS9 application.
Examples
Display the full source model and then just the ‘gsrc’ component for the default data set:
>>> image_model() >>> image_model_component(gsrc)
Display the ‘clus’ component of the model for the ‘img’ data set side by side with and without the instrument response (such as convolution with a PSF model):
>>> image_source_component('img', 'clus') >>> image_model_component('img', 'clus', newframe=True, ... tile=True)
- image_open()[source] [edit on github]
Start the image viewer.
The image viewer will be started, if found. Calling this function when the viewer has already been started will not cause a second viewer to be started. The image viewer will be started automatically by any of the commands like image_data.
See also
image_close
Close the image viewer.
image_deleteframes
Delete all the frames open in the image viewer.
image_getregion
Return the region defined in the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
>>> image_open()
- image_psf(id=None, newframe=False, tile=False)[source] [edit on github]
Display the 2D PSF model for a data set in the image viewer.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_psf_image
Return the data used by image_psf.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
plot_psf
Plot the 1D PSF model applied to a data set.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
>>> image_psf()
>>> image_psf(2)
- image_ratio(id=None, newframe=False, tile=False)[source] [edit on github]
Display the ratio (data/model) for a data set in the image viewer.
This function displays the ratio data/model for a data set.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_image
Return the data used by image_ratio.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
image_source
Display the model for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
Display the ratio (data/model) for the default data set.
>>> image_ratio()
- image_resid(id=None, newframe=False, tile=False)[source] [edit on github]
Display the residuals (data - model) for a data set in the image viewer.
This function displays the residuals (data - model) for a data set.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_resid_image
Return the data used by image_resid.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_ratio
Display the ratio (data/model) for a data set in the image viewer.
image_source
Display the model for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
Display the residuals for the default data set.
>>> image_resid()
Display the residuals for data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_resid(2, newframe=True)
Display the residuals for data sets ‘i1’ and ‘i2’ side by side:
>>> image_resid('i1') >>> image_resid('i2', newframe=True, tile=True)
- image_setregion(reg, coord='')[source] [edit on github]
Set the region to display in the image viewer.
- Parameters:
reg (str) – The region to display.
coord (str, optional) – The coordinate system to use.
- Raises:
sherpa.utils.err.DS9Err – Invalid coordinate system.
See also
image_getregion
Return the region defined in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
Add a circle, in the physical coordinate system, to the data from the default data set:
>>> image_data() >>> image_setregion('circle(4234.53,3245.29,46.74)', 'physical')
Copy the region from the current frame, create a new frame displaying the residuals from data set ‘img’, and then display the region on it:
>>> r = image_getregion() >>> image_resid('img', newframe=True) >>> image_setregion(r)
- image_source(id=None, newframe=False, tile=False)[source] [edit on github]
Display the source expression for a data set in the image viewer.
This function evaluates and displays the model expression for a data set, without any instrument response.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_image
Return the data used by image_source.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
image_open
Open the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application.
Examples
Display the source model for the default data set.
>>> image_source()
Display the source model for data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_source(2, newframe=True)
Display the source models for data sets ‘i1’ and ‘i2’ side by side:
>>> image_source('i1') >>> image_source('i2', newframe=True, tile=True)
- image_source_component(id, model=None, newframe=False, tile=False)[source] [edit on github]
Display a component of the source expression in the image viewer.
This function evaluates and displays a component of the model expression for a data set, without any instrument response. Use image_model_component to include any response.
The image viewer is automatically started if it is not already open.
- Parameters:
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
newframe (bool, optional) – Create a new frame for the data? If
False
, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If
False
, the default, then only a single frame is displayed.
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_component_image
Return the data used by image_source_component.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
image_open
Open the image viewer.
image_source
Display the source expression for a data set in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Image visualization is optional, and provided by the DS9 application.
Examples
Display the full source model and then just the ‘gsrc’ component for the default data set:
>>> image_source() >>> image_source_component(gsrc)
Display the ‘clus’ and ‘bgnd’ components of the model for the ‘img’ data set side by side:
>>> image_source_component('img', 'clus') >>> image_source_component('img', 'bgnd', newframe=True, ... tile=True)
- image_xpaget(arg)[source] [edit on github]
Return the result of an XPA call to the image viewer.
Send a query to the image viewer.
- Parameters:
arg (str) – A command to send to the image viewer via XPA.
- Returns:
returnval
- Return type:
- Raises:
sherpa.utils.err.DS9Err – The image viewer is not running.
sherpa.utils.err.RuntimeErr – If the command is not recognized.
See also
image_close
Close the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Create the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Notes
The XPA access point of the ds9 image viewer allows commands and queries to be sent to the viewer.
Examples
Return the current zoom setting of the active frame:
>>> image_xpaget('zoom') '1\n'
- image_xpaset(arg, data=None)[source] [edit on github]
Send an XPA command to the image viewer.
Send a command to the image viewer.
- Parameters:
arg (str) – A command to send to the image viewer via XPA.
data (optional) – The data for the command.
- Raises:
sherpa.utils.err.DS9Err – The image viewer is not running.
sherpa.utils.err.RuntimeErr – If the command is not recognized or could not be completed.
See also
image_close
Close the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Create the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
Notes
The XPA access point of the ds9 image viewer allows commands and queries to be sent to the viewer.
Examples
Change the zoom setting of the active frame:
>>> image_xpaset('zoom 4')
Overlay the coordinate grid on the current frame:
>>> image_xpaset('grid yes')
Add the region file ‘src.reg’ to the display:
>>> image_xpaset('regions src.reg')
Create a png version of the image being displayed:
>>> image_xpaset('saveimage png /tmp/img.png')