Session
- class sherpa.astro.ui.utils.Session
Bases: Session
Methods Summary
add_model(modelclass[, args, kwargs]) - Create a user-defined model class.
add_user_pars(modelname, parnames[, ...]) - Add parameter information to a user model.
calc_bkg_stat([id]) - Calculate the fit statistic for a background data set.
calc_bkg_stat_info() - Display the statistic values for the current background models.
calc_chisqr([id]) - Calculate the per-bin chi-squared statistic.
calc_data_sum([lo, hi, id, bkg_id]) - Sum up the data values over a pass band.
calc_data_sum2d([reg, id]) - Sum up the data values of a 2D data set.
calc_energy_flux([lo, hi, id, bkg_id, model]) - Integrate the unconvolved source model over a pass band.
calc_kcorr(z, obslo, obshi[, restlo, ...]) - Calculate the K correction for a model.
calc_model([id, bkg_id]) - Calculate the per-bin model values.
calc_model_sum([lo, hi, id, bkg_id]) - Sum up the fitted model over a pass band.
calc_model_sum2d([reg, id]) - Sum up the convolved model for a 2D data set.
calc_photon_flux([lo, hi, id, bkg_id, model]) - Integrate the unconvolved source model over a pass band.
calc_source([id, bkg_id]) - Calculate the per-bin source values.
calc_source_sum([lo, hi, id, bkg_id]) - Sum up the source model over a pass band.
calc_source_sum2d([reg, id]) - Sum up the unconvolved model for a 2D data set.
calc_stat([id]) - Calculate the fit statistic for a data set.
calc_stat_info() - Display the statistic values for the current models.
clean() - Clear out the current Sherpa session.
conf(*args) - Estimate parameter confidence intervals using the confidence method.
confidence(*args) - Estimate parameter confidence intervals using the confidence method.
contour(*args, **kwargs) - Create a contour plot for an image data set.
contour_data([id, replot, overcontour]) - Contour the values of an image data set.
contour_fit([id, replot, overcontour]) - Contour the fit to a data set.
contour_fit_resid([id, replot, overcontour]) - Contour the fit and the residuals to a data set.
contour_kernel([id, replot, overcontour]) - Contour the kernel applied to the model of an image data set.
contour_model([id, replot, overcontour]) - Create a contour plot of the model.
contour_psf([id, replot, overcontour]) - Contour the PSF applied to the model of an image data set.
contour_ratio([id, replot, overcontour]) - Contour the ratio of data to model.
contour_resid([id, replot, overcontour]) - Contour the residuals of the fit.
contour_source([id, replot, overcontour]) - Create a contour plot of the unconvolved spatial model.
copy_data(fromid, toid) - Copy a data set, creating a new identifier.
covar(*args) - Estimate parameter confidence intervals using the covariance method.
covariance(*args) - Estimate parameter confidence intervals using the covariance method.
create_arf(elo, ehi[, specresp, exposure, ...]) - Create an ARF.
create_model_component([typename, name]) - Create a model component.
create_rmf(rmflo, rmfhi[, startchan, e_min, ...]) - Create an RMF.
dataspace1d(start, stop[, step, numbins, ...]) - Create the independent axis for a 1D data set.
dataspace2d(dims[, id, dstype]) - Create the independent axis for a 2D data set.
delete_bkg_model([id, bkg_id]) - Delete the background model expression for a data set.
delete_data([id]) - Delete a data set by identifier.
delete_model([id]) - Delete the model expression for a data set.
delete_model_component(name) - Delete a model component.
delete_pileup_model([id]) - Delete the pile up model for a data set.
delete_psf([id]) - Delete the PSF model for a data set.
eqwidth(src, combo[, id, lo, hi, bkg_id, ...]) - Calculate the equivalent width of an emission or absorption line.
fake([id, method]) - Simulate a data set.
fake_pha(id[, arf, rmf, exposure, backscal, ...]) - Simulate a PHA data set from a model.
fit([id]) - Fit a model to one or more data sets.
fit_bkg([id]) - Fit a model to one or more background PHA data sets.
freeze(*args) - Fix model parameters so they are not changed by a fit.
get_analysis([id]) - Return the units used when fitting spectral data.
get_areascal([id, bkg_id]) - Return the fractional area factor of a PHA data set.
get_arf([id, resp_id, bkg_id]) - Return the ARF associated with a PHA data set.
get_arf_plot([id, resp_id, recalc]) - Return the data used by plot_arf.
get_axes([id, bkg_id]) - Return information about the independent axes of a data set.
get_backscal([id, bkg_id]) - Return the BACKSCAL scaling of a PHA data set.
get_bkg([id, bkg_id]) - Return the background for a PHA data set.
get_bkg_arf([id]) - Return the background ARF associated with a PHA data set.
get_bkg_chisqr_plot([id, bkg_id, recalc]) - Return the data used by plot_bkg_chisqr.
get_bkg_delchi_plot([id, bkg_id, recalc]) - Return the data used by plot_bkg_delchi.
get_bkg_fit_plot([id, bkg_id, recalc]) - Return the data used by plot_bkg_fit.
get_bkg_model([id, bkg_id]) - Return the model expression for the background of a PHA data set.
get_bkg_model_plot([id, bkg_id, recalc]) - Return the data used by plot_bkg_model.
get_bkg_plot([id, bkg_id, recalc]) - Return the data used by plot_bkg.
get_bkg_ratio_plot([id, bkg_id, recalc]) - Return the data used by plot_bkg_ratio.
get_bkg_resid_plot([id, bkg_id, recalc]) - Return the data used by plot_bkg_resid.
get_bkg_rmf([id]) - Return the background RMF associated with a PHA data set.
get_bkg_scale([id, bkg_id, units, group, filter]) - Return the background scaling factor for a background data set.
get_bkg_source([id, bkg_id]) - Return the model expression for the background of a PHA data set.
get_bkg_source_plot([id, lo, hi, bkg_id, recalc]) - Return the data used by plot_bkg_source.
get_bkg_stat_info() - Return the statistic values for the current background models.
get_cdf_plot() - Return the data used to plot the last CDF.
get_chisqr_plot([id, recalc]) - Return the data used by plot_chisqr.
get_conf() - Return the confidence-interval estimation object.
get_conf_opt([name]) - Return one or all of the options for the confidence interval method.
get_conf_results() - Return the results of the last conf run.
get_contour_prefs(contourtype[, id]) - Return the preferences for the given contour type.
get_coord([id]) - Get the coordinate system used for image analysis.
get_counts([id, filter, bkg_id]) - Return the dependent axis of a data set.
get_covar() - Return the covariance estimation object.
get_covar_opt([name]) - Return one or all of the options for the covariance method.
get_covar_results() - Return the results of the last covar run.
get_data([id]) - Return the data set by identifier.
get_data_contour([id, recalc]) - Return the data used by contour_data.
get_data_contour_prefs() - Return the preferences for contour_data.
get_data_image([id]) - Return the data used by image_data.
get_data_plot([id, recalc]) - Return the data used by plot_data.
get_data_plot_prefs([id]) - Return the preferences for plot_data.
get_default_id() - Return the default data set identifier.
get_delchi_plot([id, recalc]) - Return the data used by plot_delchi.
get_dep([id, filter, bkg_id]) - Return the dependent axis of a data set.
get_dims([id, filter]) - Return the dimensions of the data set.
get_draws([id, otherids, niter, covar_matrix]) - Run the pyBLoCXS MCMC algorithm.
get_energy_flux_hist([lo, hi, id, num, ...]) - Return the data displayed by plot_energy_flux.
get_error([id, filter, bkg_id]) - Return the errors on the dependent axis of a data set.
get_exposure([id, bkg_id]) - Return the exposure time of a PHA data set.
get_filter([id, format, delim]) - Return the filter expression for a data set.
get_fit_contour([id, recalc]) - Return the data used by contour_fit.
get_fit_plot([id, recalc]) - Return the data used to create the fit plot.
get_fit_results() - Return the results of the last fit.
get_functions() - Return the functions provided by Sherpa.
get_grouping([id, bkg_id]) - Return the grouping array for a PHA data set.
get_indep([id, filter, bkg_id]) - Return the independent axes of a data set.
get_int_proj([par, id, otherids, recalc, ...]) - Return the interval-projection object.
get_int_unc([par, id, otherids, recalc, ...]) - Return the interval-uncertainty object.
get_iter_method_name() - Return the name of the iterative fitting scheme.
get_iter_method_opt([optname]) - Return one or all options for the iterative-fitting scheme.
get_kernel_contour([id, recalc]) - Return the data used by contour_kernel.
get_kernel_image([id]) - Return the data used by image_kernel.
get_kernel_plot([id, recalc]) - Return the data used by plot_kernel.
get_method([name]) - Return an optimization method.
get_method_name() - Return the name of current Sherpa optimization method.
get_method_opt([optname]) - Return one or all of the options for the current optimization method.
get_model([id]) - Return the model expression for a data set.
get_model_autoassign_func() - Return the method used to create model component identifiers.
get_model_component(name) - Returns a model component given its name.
get_model_component_image(id[, model]) - Return the data used by image_model_component.
get_model_component_plot(id[, model, recalc]) - Return the data used to create the model-component plot.
get_model_components_plot([id]) - Return the data used by plot_model_components.
get_model_contour([id, recalc]) - Return the data used by contour_model.
get_model_contour_prefs() - Return the preferences for contour_model.
get_model_image([id]) - Return the data used by image_model.
get_model_pars(model) - Return the names of the parameters of a model.
get_model_plot([id, recalc]) - Return the data used to create the model plot.
get_model_plot_prefs([id]) - Return the preferences for plot_model.
get_model_type(model) - Describe a model expression.
get_num_par([id]) - Return the number of parameters in a model expression.
get_num_par_frozen([id]) - Return the number of frozen parameters in a model expression.
get_num_par_thawed([id]) - Return the number of thawed parameters in a model expression.
get_order_plot([id, orders, recalc]) - Return the data used by plot_order.
get_par(par) - Return a parameter of a model component.
get_pdf_plot() - Return the data used to plot the last PDF.
get_photon_flux_hist([lo, hi, id, num, ...]) - Return the data displayed by plot_photon_flux.
get_pileup_model([id]) - Return the pile up model for a data set.
get_plot_prefs(plottype[, id]) - Return the preferences for the given plot type.
get_prior(par) - Return the prior function for a parameter (MCMC).
get_proj() - Return the confidence-interval estimation object.
get_proj_opt([name]) - Return one or all of the options for the confidence interval method.
get_proj_results() - Return the results of the last proj run.
get_psf([id]) - Return the PSF model defined for a data set.
get_psf_contour([id, recalc]) - Return the data used by contour_psf.
get_psf_image([id]) - Return the data used by image_psf.
get_psf_plot([id, recalc]) - Return the data used by plot_psf.
get_pvalue_plot([null_model, alt_model, ...]) - Return the data used by plot_pvalue.
get_pvalue_results() - Return the data calculated by the last plot_pvalue call.
get_quality([id, bkg_id]) - Return the quality flags for a PHA data set.
get_rate([id, filter, bkg_id]) - Return the count rate of a PHA data set.
get_ratio_contour([id, recalc]) - Return the data used by contour_ratio.
get_ratio_image([id]) - Return the data used by image_ratio.
get_ratio_plot([id, recalc]) - Return the data used by plot_ratio.
get_reg_proj([par0, par1, id, otherids, ...]) - Return the region-projection object.
get_reg_unc([par0, par1, id, otherids, ...]) - Return the region-uncertainty object.
get_resid_contour([id, recalc]) - Return the data used by contour_resid.
get_resid_image([id]) - Return the data used by image_resid.
get_resid_plot([id, recalc]) - Return the data used by plot_resid.
get_response([id, bkg_id]) - Return the response information applied to a PHA data set.
get_rmf([id, resp_id, bkg_id]) - Return the RMF associated with a PHA data set.
get_rmf_plot([id, resp_id, recalc]) - Return the data used by plot_rmf.
get_rng() - Return the RNG generator in use.
get_sampler() - Return the current MCMC sampler options.
get_sampler_name() - Return the name of the current MCMC sampler.
get_sampler_opt(opt) - Return an option of the current MCMC sampler.
get_scatter_plot() - Return the data used to plot the last scatter plot.
get_source([id]) - Return the source model expression for a data set.
get_source_component_image(id[, model]) - Return the data used by image_source_component.
get_source_component_plot(id[, model, recalc]) - Return the data used by plot_source_component.
get_source_components_plot([id]) - Return the data used by plot_source_components.
get_source_contour([id, recalc]) - Return the data used by contour_source.
get_source_image([id]) - Return the data used by image_source.
get_source_plot([id, lo, hi, recalc]) - Return the data used by plot_source.
get_specresp([id, filter, bkg_id]) - Return the effective area values for a PHA data set.
get_split_plot() - Return the plot attributes for displays with multiple plots.
get_stat([name]) - Return the fit statistic.
get_stat_info() - Return the statistic values for the current models.
get_stat_name() - Return the name of the current fit statistic.
get_staterror([id, filter, bkg_id]) - Return the statistical error on the dependent axis of a data set.
get_syserror([id, filter, bkg_id]) - Return the systematic error on the dependent axis of a data set.
get_trace_plot() - Return the data used to plot the last trace.
group([id, bkg_id]) - Turn on the grouping for a PHA data set.
group_adapt(id[, min, bkg_id, maxLength, ...]) - Adaptively group to a minimum number of counts.
group_adapt_snr(id[, min, bkg_id, ...]) - Adaptively group to a minimum signal-to-noise ratio.
group_bins(id[, num, bkg_id, tabStops]) - Group into a fixed number of bins.
group_counts(id[, num, bkg_id, maxLength, ...]) - Group into a minimum number of counts per bin.
group_snr(id[, snr, bkg_id, maxLength, ...]) - Group into a minimum signal-to-noise ratio.
group_width(id[, num, bkg_id, tabStops]) - Group into a fixed bin width.
guess([id, model, limits, values]) - Estimate the parameter values and ranges given the loaded data.
ignore([lo, hi]) - Exclude data from the fit.
ignore2d([val]) - Exclude a spatial region from all data sets.
ignore2d_id(ids[, val]) - Exclude a spatial region from a data set.
ignore2d_image([ids]) - Exclude pixels using the region defined in the image viewer.
ignore_bad([id, bkg_id]) - Exclude channels marked as bad in a PHA data set.
ignore_id(ids[, lo, hi]) - Exclude data from the fit for a data set.
image_close() - Close the image viewer.
image_data([id, newframe, tile]) - Display a data set in the image viewer.
image_deleteframes() - Delete all the frames open in the image viewer.
image_fit([id, newframe, tile, deleteframes]) - Display the data, model, and residuals for a data set in the image viewer.
image_getregion([coord]) - Return the region defined in the image viewer.
image_kernel([id, newframe, tile]) - Display the 2D kernel for a data set in the image viewer.
image_model([id, newframe, tile]) - Display the model for a data set in the image viewer.
image_model_component(id[, model, newframe, ...]) - Display a component of the model in the image viewer.
image_open() - Start the image viewer.
image_psf([id, newframe, tile]) - Display the 2D PSF model for a data set in the image viewer.
image_ratio([id, newframe, tile]) - Display the ratio (data/model) for a data set in the image viewer.
image_resid([id, newframe, tile]) - Display the residuals (data - model) for a data set in the image viewer.
image_setregion(reg[, coord]) - Set the region to display in the image viewer.
image_source([id, newframe, tile]) - Display the source expression for a data set in the image viewer.
image_source_component(id[, model, ...]) - Display a component of the source expression in the image viewer.
image_xpaget(arg) - Return the result of an XPA call to the image viewer.
image_xpaset(arg[, data]) - Return the result of an XPA call to the image viewer.
int_proj(par[, id, otherids, replot, fast, ...]) - Calculate and plot the fit statistic versus fit parameter value.
int_unc(par[, id, otherids, replot, min, ...]) - Calculate and plot the fit statistic versus fit parameter value.
link(par, val) - Link a parameter to a value.
list_bkg_ids([id]) - List all the background identifiers for a data set.
list_data_ids() - List the identifiers for the loaded data sets.
list_functions([outfile, clobber]) - Display the functions provided by Sherpa.
list_iter_methods() - List the iterative fitting schemes.
list_methods() - List the optimization methods.
list_model_components() - List the names of all the model components.
list_model_ids() - List of all the data sets with a source expression.
list_models([show]) - List the available model types.
list_pileup_model_ids() - List of all the data sets with a pile up model.
list_priors() - Return the priors set for model parameters, if any.
list_psf_ids() - List of all the data sets with a PSF.
list_response_ids([id, bkg_id]) - List all the response identifiers of a data set.
list_samplers() - List the MCMC samplers.
list_stats() - List the fit statistics.
load_arf(id[, arg, resp_id, bkg_id]) - Load an ARF from a file and add it to a PHA data set.
load_arrays(id, *args) - Create a data set from array values.
load_ascii(id[, filename, ncols, colkeys, ...]) - Load an ASCII file as a data set.
load_ascii_with_errors(id[, filename, ...]) - Load an ASCII file with asymmetric errors as a data set.
load_bkg(id[, arg, use_errors, bkg_id]) - Load the background from a file and add it to a PHA data set.
load_bkg_arf(id[, arg]) - Load an ARF from a file and add it to the background of a PHA data set.
load_bkg_rmf(id[, arg]) - Load a RMF from a file and add it to the background of a PHA data set.
load_conv(modelname, filename_or_model, ...) - Load a 1D convolution model.
load_data(id[, filename]) - Load a data set from a file.
load_filter(id[, filename, bkg_id, ignore, ...]) - Load the filter array from a file and add to a data set.
load_grouping(id[, filename, bkg_id]) - Load the grouping scheme from a file and add to a PHA data set.
load_image(id[, arg, coord, dstype]) - Load an image as a data set.
load_multi_arfs(id, filenames[, resp_ids]) - Load multiple ARFs for a PHA data set.
load_multi_rmfs(id, filenames[, resp_ids]) - Load multiple RMFs for a PHA data set.
load_pha(id[, arg, use_errors]) - Load a PHA data set.
load_psf(modelname, filename_or_model, ...) - Create a PSF model.
load_quality(id[, filename, bkg_id]) - Load the quality array from a file and add to a PHA data set.
load_rmf(id[, arg, resp_id, bkg_id]) - Load a RMF from a file and add it to a PHA data set.
load_staterror(id[, filename, bkg_id]) - Load the statistical errors from a file.
load_syserror(id[, filename, bkg_id]) - Load the systematic errors from a file.
load_table(id[, filename, ncols, colkeys, ...]) - Load a FITS binary file as a data set.
load_table_model(modelname, filename[, method]) - Load tabular or image data and use it as a model component.
load_template_interpolator(name, ...) - Set the template interpolation scheme.
load_template_model(modelname, templatefile) - Load a set of templates and use it as a model component.
load_user_model(func, modelname[, filename]) - Create a user-defined model.
load_user_stat(statname, calc_stat_func[, ...]) - Create a user-defined statistic.
load_xstable_model(modelname, filename[, etable]) - Load a XSPEC table model.
normal_sample([num, sigma, correlate, id, ...]) - Sample the fit statistic by taking the parameter values from a normal distribution.
notice([lo, hi]) - Include data in the fit.
notice2d([val]) - Include a spatial region of all data sets.
notice2d_id(ids[, val]) - Include a spatial region of a data set.
notice2d_image([ids]) - Include pixels using the region defined in the image viewer.
notice_id(ids[, lo, hi]) - Include data from the fit for a data set.
pack_image([id]) - Convert a data set into an image structure.
pack_pha([id]) - Convert a PHA data set into a file structure.
pack_table([id]) - Convert a data set into a table structure.
paramprompt([val]) - Should the user be asked for the parameter values when creating a model?
plot(*args, **kwargs) - Create one or more plot types.
plot_arf([id, resp_id, replot, overplot, ...]) - Plot the ARF associated with a data set.
plot_bkg([id, bkg_id, replot, overplot, ...]) - Plot the background values for a PHA data set.
plot_bkg_chisqr([id, bkg_id, replot, ...]) - Plot the chi-squared value for each point of the background of a PHA data set.
plot_bkg_delchi([id, bkg_id, replot, ...]) - Plot the ratio of residuals to error for the background of a PHA data set.
plot_bkg_fit([id, bkg_id, replot, overplot, ...]) - Plot the fit results (data, model) for the background of a PHA data set.
plot_bkg_fit_delchi([id, bkg_id, replot, ...]) - Plot the fit results, and the residuals, for the background of a PHA data set.
plot_bkg_fit_ratio([id, bkg_id, replot, ...]) - Plot the fit results, and the data/model ratio, for the background of a PHA data set.
plot_bkg_fit_resid([id, bkg_id, replot, ...]) - Plot the fit results, and the residuals, for the background of a PHA data set.
plot_bkg_model([id, bkg_id, replot, ...]) - Plot the model for the background of a PHA data set.
plot_bkg_ratio([id, bkg_id, replot, ...]) - Plot the ratio of data to model values for the background of a PHA data set.
plot_bkg_resid([id, bkg_id, replot, ...]) - Plot the residual (data-model) values for the background of a PHA data set.
plot_bkg_source([id, lo, hi, bkg_id, ...]) - Plot the model expression for the background of a PHA data set.
plot_cdf(points[, name, xlabel, replot, ...]) - Plot the cumulative density function of an array of values.
plot_chisqr([id, replot, overplot, clearwindow]) - Plot the chi-squared value for each point in a data set.
plot_data([id, replot, overplot, clearwindow]) - Plot the data values.
plot_delchi([id, replot, overplot, clearwindow]) - Plot the ratio of residuals to error for a data set.
plot_energy_flux([lo, hi, id, num, bins, ...]) - Display the energy flux distribution.
plot_fit([id, replot, overplot, clearwindow]) - Plot the fit results (data, model) for a data set.
plot_fit_delchi([id, replot, overplot, ...]) - Plot the fit results, and the residuals, for a data set.
plot_fit_ratio([id, replot, overplot, ...]) - Plot the fit results, and the ratio of data to model, for a data set.
plot_fit_resid([id, replot, overplot, ...]) - Plot the fit results, and the residuals, for a data set.
plot_kernel([id, replot, overplot, clearwindow]) - Plot the 1D kernel applied to a data set.
plot_model([id, replot, overplot, clearwindow]) - Plot the model for a data set.
plot_model_component(id[, model, replot, ...]) - Plot a component of the model for a data set.
plot_model_components([id, overplot, ...]) - Plot all the components of a model.
plot_order([id, orders, replot, overplot, ...]) - Plot the model for a data set convolved by the given response.
plot_pdf(points[, name, xlabel, bins, ...]) - Plot the probability density function of an array of values.
plot_photon_flux([lo, hi, id, num, bins, ...]) - Display the photon flux distribution.
plot_psf([id, replot, overplot, clearwindow]) - Plot the 1D PSF model applied to a data set.
plot_pvalue(null_model, alt_model[, ...]) - Compute and plot a histogram of likelihood ratios by simulating data.
plot_ratio([id, replot, overplot, clearwindow]) - Plot the ratio of data to model for a data set.
plot_resid([id, replot, overplot, clearwindow]) - Plot the residuals (data - model) for a data set.
plot_rmf([id, resp_id, replot, overplot, ...]) - Plot the RMF associated with a data set.
plot_scatter(x, y[, name, xlabel, ylabel, ...]) - Create a scatter plot.
plot_source([id, lo, hi, replot, overplot, ...]) - Plot the source expression for a data set.
plot_source_component(id[, model, replot, ...]) - Plot a component of the source expression for a data set.
plot_source_components([id, overplot, ...]) - Plot all the components of a source.
plot_trace(points[, name, xlabel, replot, ...]) - Create a trace plot of row number versus value.
proj(*args) - Estimate parameter confidence intervals using the projection method.
projection(*args) - Estimate parameter confidence intervals using the projection method.
reg_proj(par0, par1[, id, otherids, replot, ...]) - Plot the statistic value as two parameters are varied.
reg_unc(par0, par1[, id, otherids, replot, ...]) - Plot the statistic value as two parameters are varied.
resample_data([id, niter, seed]) - Resample data with asymmetric error bars.
reset([model, id]) - Reset the model parameters to their default settings.
restore([filename]) - Load in a Sherpa session from a file.
sample_energy_flux([lo, hi, id, num, ...]) - Return the energy flux distribution of a model.
sample_flux([modelcomponent, lo, hi, id, ...]) - Return the flux distribution of a model.
sample_photon_flux([lo, hi, id, num, ...]) - Return the photon flux distribution of a model.
save([filename, clobber]) - Save the current Sherpa session to a file.
save_all([outfile, clobber]) - Save the information about the current session to a text file.
save_arf(id[, filename, resp_id, bkg_id, ...]) - Save an ARF data set to a file.
save_arrays(filename, args[, fields, ascii, ...]) - Write a list of arrays to a file.
save_data(id[, filename, bkg_id, ascii, clobber]) - Save the data to a file.
save_delchi(id[, filename, bkg_id, ascii, ...]) - Save the ratio of residuals (data-model) to error to a file.
save_error(id[, filename, bkg_id, ascii, ...]) - Save the errors to a file.
save_filter(id[, filename, bkg_id, ascii, ...]) - Save the filter array to a file.
save_grouping(id[, filename, bkg_id, ascii, ...]) - Save the grouping scheme to a file.
save_image(id[, filename, ascii, clobber]) - Save the pixel values of a 2D data set to a file.
save_model(id[, filename, bkg_id, ascii, ...]) - Save the model values to a file.
save_pha(id[, filename, bkg_id, ascii, clobber]) - Save a PHA data set to a file.
save_quality(id[, filename, bkg_id, ascii, ...]) - Save the quality array to a file.
save_resid(id[, filename, bkg_id, ascii, ...]) - Save the residuals (data-model) to a file.
save_rmf(id[, filename, resp_id, bkg_id, ...]) - Save an RMF data set to a file.
save_source(id[, filename, bkg_id, ascii, ...]) - Save the model values to a file.
save_staterror(id[, filename, bkg_id, ...]) - Save the statistical errors to a file.
save_syserror(id[, filename, bkg_id, ascii, ...]) - Save the systematic errors to a file.
save_table(id[, filename, ascii, clobber]) - Save a data set to a file as a table.
set_analysis(id[, quantity, type, factor]) - Set the units used when fitting and displaying spectral data.
set_areascal(id[, area, bkg_id]) - Change the fractional area factor of a PHA data set.
set_arf(id[, arf, resp_id, bkg_id]) - Set the ARF for use by a PHA data set.
set_backscal(id[, backscale, bkg_id]) - Change the area scaling of a PHA data set.
set_bkg(id[, bkg, bkg_id]) - Set the background for a PHA data set.
set_bkg_full_model(id[, model, bkg_id]) - Define the convolved background model expression for a PHA data set.
set_bkg_model(id[, model, bkg_id]) - Set the background model expression for a PHA data set.
set_bkg_source(id[, model, bkg_id]) - Set the background model expression for a PHA data set.
set_conf_opt(name, val) - Set an option for the confidence interval method.
set_coord(id[, coord]) - Set the coordinate system to use for image analysis.
set_counts(id[, val, bkg_id]) - Set the dependent axis of a data set.
set_covar_opt(name, val) - Set an option for the covariance method.
set_data(id[, data]) - Set a data set.
set_default_id(id) - Set the default data set identifier.
set_dep(id[, val, bkg_id]) - Set the dependent axis of a data set.
set_exposure(id[, exptime, bkg_id]) - Change the exposure time of a PHA data set.
set_filter(id[, val, bkg_id, ignore]) - Set the filter array of a data set.
set_full_model(id[, model]) - Define the convolved model expression for a data set.
set_grouping(id[, val, bkg_id]) - Apply a set of grouping flags to a PHA data set.
set_iter_method(meth) - Set the iterative-fitting scheme used in the fit.
set_iter_method_opt(optname, val) - Set an option for the iterative-fitting scheme.
set_method(meth) - Set the optimization method.
set_method_opt(optname, val) - Set an option for the current optimization method.
set_model(id[, model]) - Set the source model expression for a data set.
set_model_autoassign_func([func]) - Set the method used to create model component identifiers.
set_par(par[, val, min, max, frozen]) - Set the value, limits, or behavior of a model parameter.
set_pileup_model(id[, model]) - Include a model of the Chandra ACIS pile up when fitting PHA data.
set_plot_backend(backend) - Change the plot backend.
set_prior(par, prior) - Set the prior function to use with a parameter.
set_proj_opt(name, val) - Set an option for the projection method.
set_psf(id[, psf]) - Add a PSF model to a data set.
set_quality(id[, val, bkg_id]) - Apply a set of quality flags to a PHA data set.
set_rmf(id[, rmf, resp_id, bkg_id]) - Set the RMF for use by a PHA data set.
set_rng(rng) - Set the RNG generator.
set_sampler(sampler) - Set the MCMC sampler.
set_sampler_opt(opt, value) - Set an option for the current MCMC sampler.
set_source(id[, model]) - Set the source model expression for a data set.
set_stat(stat) - Set the statistical method.
set_staterror(id[, val, fractional, bkg_id]) - Set the statistical errors on the dependent axis of a data set.
set_syserror(id[, val, fractional, bkg_id]) - Set the systematic errors on the dependent axis of a data set.
set_xlinear([plottype]) - New plots will display a linear X axis.
set_xlog([plottype]) - New plots will display a logarithmically-scaled X axis.
set_ylinear([plottype]) - New plots will display a linear Y axis.
set_ylog([plottype]) - New plots will display a logarithmically-scaled Y axis.
show_all([id, outfile, clobber]) - Report the current state of the Sherpa session.
show_bkg([id, bkg_id, outfile, clobber]) - Show the details of the PHA background data sets.
show_bkg_model([id, bkg_id, outfile, clobber]) - Display the background model expression used to fit a data set.
show_bkg_source([id, bkg_id, outfile, clobber]) - Display the background model expression for a data set.
show_conf([outfile, clobber]) - Display the results of the last conf evaluation.
show_covar([outfile, clobber]) - Display the results of the last covar evaluation.
show_data([id, outfile, clobber]) - Summarize the available data sets.
show_filter([id, outfile, clobber]) - Show any filters applied to a data set.
show_fit([outfile, clobber]) - Summarize the fit results.
show_kernel([id, outfile, clobber]) - Display any kernel applied to a data set.
show_method([outfile, clobber]) - Display the current optimization method and options.
show_model([id, outfile, clobber]) - Display the model expression used to fit a data set.
show_proj([outfile, clobber]) - Display the results of the last proj evaluation.
show_psf([id, outfile, clobber]) - Display any PSF model applied to a data set.
show_source([id, outfile, clobber]) - Display the source model expression for a data set.
show_stat([outfile, clobber]) - Display the current fit statistic.
show_xsabund([outfile, clobber]) - Show the XSPEC abundance values.
simulfit([id]) - Fit a model to one or more data sets.
subtract([id]) - Subtract the background estimate from a data set.
t_sample([num, dof, id, otherids, numcores]) - Sample the fit statistic by taking the parameter values from a Student's t-distribution.
thaw(*args) - Allow model parameters to be varied during a fit.
ungroup([id, bkg_id]) - Turn off the grouping for a PHA data set.
uniform_sample([num, factor, id, otherids, ...]) - Sample the fit statistic by taking the parameter values from a uniform distribution.
unlink(par) - Unlink a parameter value.
unpack_arf(arg) - Create an ARF data structure.
unpack_arrays(*args) - Create a sherpa data object from arrays of data.
unpack_ascii(filename[, ncols, colkeys, ...]) - Unpack an ASCII file into a data structure.
unpack_bkg(arg[, use_errors]) - Create a PHA data structure for a background data set.
unpack_data(filename, *args, **kwargs) - Create a sherpa data object from a file.
unpack_image(arg[, coord, dstype]) - Create an image data structure.
unpack_pha(arg[, use_errors]) - Create a PHA data structure.
unpack_rmf(arg) - Create a RMF data structure.
unpack_table(filename[, ncols, colkeys, dstype]) - Unpack a FITS binary file into a data structure.
unsubtract([id]) - Undo any background subtraction for the data set.
Methods Documentation
- add_model(modelclass, args=(), kwargs={}) -> None
Create a user-defined model class.
Create a model from a class. The name of the class can then be used to create model components - e.g. with set_model or create_model_component - as with any existing Sherpa model.
- Parameters:
modelclass – A class derived from sherpa.models.model.ArithmeticModel. This class defines the functional form and the parameters of the model.
args – Arguments for the class constructor.
kwargs – Keyword arguments for the class constructor.
See also
create_model_component
Create a model component.
list_models
List the available model types.
load_table_model
Load tabular data and use it as a model component.
load_user_model
Create a user-defined model.
set_model
Set the source model expression for a data set.
Notes
The load_user_model function is designed to make it easy to add a model, but the interface is not the same as the existing models (such as having to call both load_user_model and add_user_pars for each new instance). The add_model function is used to add a model as a Python class, which is more work to set up, but then acts the same way as the existing models.
Examples
The following example creates a model type called “mygauss1d” which will behave exactly the same as the existing “gauss1d” model. Normally the class used with add_model would add new functionality.
>>> from sherpa.models import Gauss1D
>>> class MyGauss1D(Gauss1D):
...     pass
...
>>> add_model(MyGauss1D)
>>> set_source(mygauss1d.g1 + mygauss1d.g2)
- add_user_pars(modelname, parnames, parvals=None, parmins=None, parmaxs=None, parunits=None, parfrozen=None) -> None
Add parameter information to a user model.
- Parameters:
modelname (str) – The name of the user model (created by load_user_model).
parnames (array of str) – The names of the parameters. The order of all the parameter arrays must match that expected by the model function (the first argument to load_user_model).
parvals (array of number, optional) – The default values of the parameters. If not given each parameter is set to 0.
parmins (array of number, optional) – The minimum values of the parameters (hard limit). The default value is -3.40282e+38.
parmaxs (array of number, optional) – The maximum values of the parameters (hard limit). The default value is 3.40282e+38.
parunits (array of str, optional) – The units of the parameters. This is only used in screen output (i.e. is informational in nature).
parfrozen (array of bool, optional) – Should each parameter be frozen. The default is that all parameters are thawed.
See also
add_model
Create a user-defined model class.
load_user_model
Create a user-defined model.
set_par
Set the value, limits, or behavior of a model parameter.
Notes
The parameters must be specified in the order that the function expects. That is, if the function has two parameters, pars[0]=’slope’ and pars[1]=’y_intercept’, then the call to add_user_pars must use the order [“slope”, “y_intercept”].
Examples
Create a user model for the function profile called “myprof”, which has two parameters called “core” and “ampl”, both of which will start with a value of 0.
>>> load_user_model(profile, "myprof")
>>> add_user_pars("myprof", ["core", "ampl"])
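The profile function itself is not defined in the example above. A user-model function receives the parameter values and the independent axis, so a hypothetical version (the Gaussian-like form here is purely illustrative) could be:
>>> import numpy as np
>>> def profile(pars, x):
...     # pars follows the order given to add_user_pars: core, ampl
...     core, ampl = pars
...     return ampl * np.exp(-0.5 * (np.asarray(x) / core) ** 2)
...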
Set the starting values, minimum values, and whether or not the parameter is frozen by default, for the “prof” model:
>>> pnames = ["core", "ampl", "intflag"]
>>> pvals = [10, 200, 1]
>>> pmins = [0.01, 0, 0]
>>> pfreeze = [False, False, True]
>>> add_user_pars("prof", pnames, pvals,
...               parmins=pmins, parfrozen=pfreeze)
- calc_bkg_stat(id: int | str | None = None, *otherids: int | str)
Calculate the fit statistic for a background data set.
Evaluate the current background models for the background datasets, calculate the statistic for each background, and return the sum. No fitting is done; the current model parameters, and any filters, are used. The calc_bkg_stat_info routine should be used if the result for a particular background component needs to be returned.
Added in version 4.16.0.
- Parameters:
- Returns:
stat – The current statistic value.
- Return type:
number
See also calc_bkg_stat_info, calc_stat, fit_bkg, get_bkg_stat_info, set_stat
Examples
Calculate the statistic for the background in the default data set:
>>> stat = calc_bkg_stat()
Find the statistic for the background for data set 3:
>>> stat = calc_bkg_stat(3)
Calculate the background statistic value using two different statistics:
>>> set_stat('chi2datavar')
>>> s1 = calc_bkg_stat()
>>> set_stat('chi2gehrels')
>>> s2 = calc_bkg_stat()
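These examples assume that a background model has already been set. A minimal (hypothetical) setup, using an invented file name, might be:
>>> load_pha('src.pi')
>>> set_bkg_model(powlaw1d.bpl)
>>> stat = calc_bkg_stat()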
- calc_bkg_stat_info() -> None
Display the statistic values for the current background models.
Returns the statistic values for background datasets with background models. See calc_stat_info for a description of the return value.
Added in version 4.16.0.
Notes
If a fit to a particular background data set has not been made, or values - such as parameter settings, the noticed data range, or choice of statistic - have been changed since the last fit, then the results for that data set may not be meaningful and will therefore bias the simultaneous results.
Examples
>>> calc_bkg_stat_info()
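The routine displays the values rather than returning them; to capture the results for further processing, use the corresponding getter:
>>> res = get_bkg_stat_info()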
- calc_chisqr(id: int | str | None = None, *otherids: int | str)
Calculate the per-bin chi-squared statistic.
Evaluate the model for one or more data sets, compare it to the data using the current statistic, and return an array of chi-squared values for each bin. No fitting is done; the current model parameters, and any filters, are used.
- Parameters:
- Returns:
chisq – The chi-square value for each bin of the data, using the current statistic (as set by set_stat). A value of None is returned if the statistic is not a chi-square distribution.
- Return type:
array or None
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
set_stat
Set the statistical method.
Notes
The output array length equals the sum of the array lengths of the requested data sets.
Examples
When called with no arguments, the return value is the chi-squared statistic for each bin in the data sets which have a defined model.
>>> calc_chisqr()
Supplying a specific data set ID to calc_chisqr - such as “1” or “src” - will return the chi-squared statistic array for only that data set.
>>> calc_chisqr(1)
>>> calc_chisqr("src")
Restrict the calculation to just datasets 1 and 3:
>>> calc_chisqr(1, 3)
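Since the values are per bin, their sum should recover the value reported by calc_stat when a chi-square statistic is in use (a quick consistency check):
>>> cvals = calc_chisqr()
>>> cvals.sum()  # matches calc_stat() for chi-square statistics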
- calc_data_sum(lo=None, hi=None, id: int | str | None = None, bkg_id: int | str | None = None)
Sum up the data values over a pass band.
This function is for one-dimensional data sets: use calc_data_sum2d for two-dimensional data sets.
- Parameters:
lo (number, optional) – If both are None or both are set then sum up the data over the given band. If only one is set then return the data count in the given bin.
hi (number, optional) – If both are None or both are set then sum up the data over the given band. If only one is set then return the data count in the given bin.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns:
dsum – If a background estimate has been subtracted from the data set then the calculation will use the background-subtracted values.
- Return type:
number
See also
calc_data_sum2d
Sum up the data values of a 2D data set.
calc_model_sum
Sum up the fitted model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_source_sum
Sum up the source model over a pass band.
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis). The summation occurs over those points in the data set that lie within this range, not the range itself.
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
If a grouping scheme has been applied to the data set then it will be used. This can change the results, since the first and last bins of the selected range may extend outside the requested range.
Examples
Sum up the data values (the dependent axis) for all points or bins in the default data set:
>>> dsum = calc_data_sum()
Calculate the number of counts over the ranges 0.5 to 2 and 0.5 to 7 keV for the default data set, first using the observed signal and then, for the 0.5 to 2 keV band, the background-subtracted estimate:
>>> set_analysis('energy')
>>> calc_data_sum(0.5, 2)
745.0
>>> calc_data_sum(0.5, 7)
60.0
>>> subtract()
>>> calc_data_sum(0.5, 2)
730.9179738207356
Calculate the data value in the bin containing 0.5 keV for the source “core”:
>>> calc_data_sum(0.5, id="core")
0.0
Calculate the sum of the second background component for data set 3 over the independent axis range 12 to 45:
>>> calc_data_sum(12, 45, id=3, bkg_id=2)
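As noted above, any range filter created with notice or ignore is skipped by this routine, so the following sketch returns the same value as before the filter was applied:
>>> ignore(None, 0.5)
>>> calc_data_sum(0.5, 7)  # unchanged by the ignore call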
- calc_data_sum2d(reg=None, id: int | str | None = None)
Sum up the data values of a 2D data set.
This function is for two-dimensional data sets: use calc_data_sum for one-dimensional data sets.
- Parameters:
reg (str, optional) – The spatial filter to use. The default, None, is to use the whole data set.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
dsum – The sum of the data values that lie within the given region.
- Return type:
number
See also
calc_data_sum
Sum up the data values of a data set.
calc_model_sum2d
Sum up the convolved model for a 2D data set.
calc_source_sum2d
Sum up the unconvolved model for a 2D data set.
set_model
Set the source model expression for a data set.
Notes
The coordinate system of the region filter is determined by the coordinate setting for the data set (e.g. get_coord).
Any existing filter on the data set - e.g. as created by ignore2d or notice2d - is ignored by this function.
Examples
The following examples use the data in the default data set created with the following calls, which sets the y (data) values to be 0 to 11 in a 3 row by 4 column image:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[10:13, 20:24]
>>> y = y.flatten()
>>> x = x.flatten()
>>> load_arrays(1, x, y, ivals, (3, 4), DataIMG)
with no argument, the full data set is used:
>>> calc_data_sum2d()
66
>>> ivals.sum()
66
and a spatial filter can be used to restrict the region used for the summation:
>>> calc_data_sum2d('circle(22,12,1)')
36
>>> calc_data_sum2d('field()-circle(22,12,1)')
30
Apply the spatial filter to the data set labelled “a2142”:
>>> calc_data_sum2d('rotbox(4232.3,3876,300,200,43)', 'a2142')
- calc_energy_flux(lo=None, hi=None, id: int | str | None = None, bkg_id: int | str | None = None, model=None)
Integrate the unconvolved source model over a pass band.
Calculate the integral of E * S(E) over a pass band, where E is the energy of the bin and S(E) the spectral model evaluated for that bin (that is, the model without any instrumental responses applied to it).
Changed in version 4.12.1: The model parameter was added.
- Parameters:
lo (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples.
- Returns:
flux – The flux or flux density. For X-Spec style models the flux units will be erg/cm^2/s and the flux density units will be either erg/cm^2/s/keV or erg/cm^2/s/Angstrom, depending on the analysis setting.
- Return type:
number
See also
calc_data_sum
Sum up the data values over a pass band.
calc_model_sum
Sum up the fitted model over a pass band.
calc_source_sum
Sum up the source model over a pass band.
calc_photon_flux
Integrate the unconvolved source model over a pass band.
set_analysis
Set the units used when fitting and displaying spectral data
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis).
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set. It is unlikely to give sensible results for 2D data sets.
Examples
Calculate the integral of the unconvolved model over the full range of the default data set:
>>> calc_energy_flux()
Return the flux for the data set labelled “core”:
>>> calc_energy_flux(id='core')
Calculate the energy flux over the ranges 0.5 to 2 and 0.5 to 7 keV:
>>> set_analysis('energy')
>>> calc_energy_flux(0.5, 2)
5.7224906878061796e-10
>>> calc_energy_flux(0.5, 7)
1.3758131915063825e-09
Calculate the energy flux density at 0.5 keV for the source “core”:
>>> calc_energy_flux(0.5, id="core")
5.2573786652855304e-10
Calculate the flux for the model applied to the second background component of the ‘jet’ data set, for the wavelength range 20 to 22 Angstroms:
>>> set_analysis('jet', 'wave')
>>> calc_energy_flux(20, 22, id='jet', bkg_id=2)
For the following example, the source model is an absorbed powerlaw - xsphabs.gal * powlaw1d.pl - so that the fabs value represents the absorbed flux, and funabs the unabsorbed flux (i.e. just the power-law component):
>>> fabs = calc_energy_flux(0.5, 7)
>>> funabs = calc_energy_flux(0.5, 7, model=pl)
- calc_kcorr(z, obslo, obshi, restlo=None, resthi=None, id: int | str | None = None, bkg_id: int | str | None = None)
Calculate the K correction for a model.
The K correction ([1], [2], [3], [4]) is the numeric factor applied to measured energy fluxes in an observed energy band to estimate the flux in a given rest-frame energy band. It accounts for the change in spectral energy distribution between the desired rest-frame band and the rest-frame band corresponding to the observed band. This is often used when converting a flux into a luminosity.
- Parameters:
z (number or array, >= 0) – The redshift, or redshifts, of the source.
obslo (number) – The minimum energy of the observed band.
obshi (number) – The maximum energy of the observed band, which must be larger than obslo.
restlo (number or None) – The minimum energy of the rest-frame band. If None then use obslo.
resthi (number or None) – The maximum energy of the rest-frame band. It must be larger than restlo. If None then use obshi.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns:
kz
- Return type:
number or array of numbers
See also
calc_energy_flux
Integrate the unconvolved source model over a pass band.
dataspace1d
Create the independent axis for a 1D data set.
Notes
This is only defined when the analysis is in ‘energy’ units.
If the model contains a redshift parameter then it should be set to 0, rather than the source redshift.
If the source model is at zero redshift, the observed energy band is olo to ohi, and the rest frame band is rlo to rhi (which need not match the observed band), then the K correction at a redshift z can be calculated as:
frest = calc_energy_flux(rlo, rhi)
fobs = calc_energy_flux(olo*(1+z), ohi*(1+z))
kz = frest / fobs
The energy ranges used - rlo to rhi and olo*(1+z) to ohi*(1+z) - should be fully covered by the data grid, otherwise the flux calculation will be truncated at the grid boundaries, leading to incorrect results.
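Putting that recipe into runnable form for a single redshift (a sketch; the bands follow the second worked example below):
>>> z = 0.5
>>> frest = calc_energy_flux(0.1, 10)
>>> fobs = calc_energy_flux(0.5 * (1 + z), 2 * (1 + z))
>>> kz = frest / fobs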
References
Examples
Calculate the K correction for an X-Spec apec model, with a source temperature of 6 keV and abundance of 0.3 solar, for the energy band of 0.5 to 2 keV:
>>> dataspace1d(0.01, 10, 0.01)
>>> set_source(xsapec.clus)
>>> clus.kt = 6
>>> clus.abundanc = 0.3
>>> calc_kcorr(0.5, 0.5, 2)
0.82799195070436793
Calculate the K correction for a range of redshifts (0 to 2) using an observed frame of 0.5 to 2 keV and a rest frame of 0.1 to 10 keV (the energy grid is set to ensure that it covers the full energy range; that is the rest-frame band and the observed frame band multiplied by the smallest and largest (1+z) terms):
>>> dataspace1d(0.01, 11, 0.01)
>>> zs = np.linspace(0, 2, 21)
>>> ks = calc_kcorr(zs, 0.5, 2, restlo=0.1, resthi=10)
Calculate the K correction for the background dataset bkg_id=2 for a redshift of 0.5 over the energy range 0.5 to 2 keV with rest-frame energy limits of 2 to 10 keV:
>>> calc_kcorr(0.5, 0.5, 2, 2, 10, bkg_id=2)
- calc_model(id: int | str | None = None, bkg_id: int | str | None = None) -> tuple[tuple[ndarray, ...], ndarray]
Calculate the per-bin model values.
The values are filtered and grouped based on the data and will use the analysis setting for PHA data, but not the other plot options (such as whether to display as a rate).
Added in version 4.17.0.
- Parameters:
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns:
xvals, yvals – The independent axis, which uses a tuple since the number of elements depends on the dimensionality and type of data, and the model values. The units depend on the data type: for PHA data the X axis will be in the analysis units and the Y axis will generally be counts.
- Return type:
tuple of ndarray, ndarray
Examples
For a PHA dataset the independent axis is a pair of values, giving the low and high energies. The xlo and xhi values are in keV, and represent the low and high edges of each bin, and the yvals array is in counts.
>>> load_pha("3c273.pi") >>> set_analysis("energy") >>> notice(0.5, 6) >>> set_source(xsphabs.gal * powlaw1d.pl) >>> gal.nh = 0.1 >>> pl.gamma = 1.7 >>> pl.ampl = 2e-4 >>> xvals, yvals = calc_model() >>> xlo = xvals[0] >>> xhi = xvals[1]
The results can be compared to the model output in plot_fit to show agreement (note that calc_model returns grouped values, as used by plot_fit, whereas plot_model shows the ungrouped data):
>>> set_analysis("energy", type="rate", factor=0) >>> plot_fit() >>> plot_model(overplot=True, color="black", alpha=0.4) >>> xvals, yvals = calc_model() >>> elo, ehi = xvals >>> exposure = get_exposure() >>> plt.plot((elo + ehi) / 2, yvals / (ehi - elo) / exposure)
Changing the analysis setting changes the x values, as xvals2 is in Angstrom rather than keV (the model values are the same, although there may be small numerical differences that mean the values do not exactly match):
>>> set_analysis("wave") >>> xvals2, yvals2 = calc_model()
For 1D datasets the x axis is a single-element tuple:
>>> load_arrays(2, [1, 4, 7], [3, 12, 2])
>>> set_source(2, gauss1d.gline)
>>> gline.pos = 4.2
>>> gline.fwhm = 3
>>> gline.ampl = 12
>>> xvals, yvals = calc_model(2)
>>> x = xvals[0]
>>> x
array([1, 4, 7])
>>> yvals
array([ 0.51187072, 11.85303595,  1.07215839])
- calc_model_sum(lo=None, hi=None, id: int | str | None = None, bkg_id: int | str | None = None)
Sum up the fitted model over a pass band.
Sum up M(E) over a range of bins, where M(E) is the per-bin model value after it has been convolved with any instrumental response (e.g. RMF and ARF or PSF). This is intended for one-dimensional data sets: use calc_model_sum2d for two-dimensional data sets. The calc_source_sum function is used to calculate the sum of the model before any instrumental response is applied.
- Parameters:
lo (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns:
signal – The model value (sum or individual bin).
- Return type:
number
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis). The summation occurs over those points in the data set that lie within this range, not the range itself.
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set.
Examples
Calculate the model evaluated over the full data set (all points or pixels of the independent axis) for the default data set, and compare it to the sum for the first background component:
>>> tsrc = calc_model_sum() >>> tbkg = calc_model_sum(bkg_id=1)
Sum up the model over the data range 0.5 to 2 for the default data set, and compare it to the data over the same range:
>>> calc_model_sum(0.5, 2)
404.97796489631639
>>> calc_data_sum(0.5, 2)
745.0
Calculate the model sum, evaluated over the range 20 to 22 Angstroms, for the first background component of the “histate” data set:
>>> set_analysis("histate", "wavelength") >>> calc_model_sum(20, 22, "histate", bkg_id=1)
In the following example, a small data set is created, covering the axis range of -5 to 5, and an off-center gaussian model created (centered at 1). The model is evaluated over the full data grid and then a subset of pixels. As the summation is done over those points in the data set that lie within the requested range, the sum for lo=-2 to hi=1 is the same as that for lo=-1.5 to hi=1.5:
>>> load_arrays('test', [-5, -2.5, 0, 2.5, 5], [2, 5, 12, 7, 3])
>>> set_source('test', gauss1d.gmdl)
>>> gmdl.pos = 1
>>> gmdl.fwhm = 2.4
>>> gmdl.ampl = 10
>>> calc_model_sum(id='test')
9.597121089731253
>>> calc_model_sum(-2, 1, id='test')
6.179472329646446
>>> calc_model_sum(-1.5, 1.5, id='test')
6.179472329646446
- calc_model_sum2d(reg=None, id: int | str | None = None)
Sum up the convolved model for a 2D data set.
This function is for two-dimensional data sets: use calc_model_sum for one-dimensional data sets.
- Parameters:
reg (str, optional) – The spatial filter to use. The default, None, is to use the whole data set.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
msum – The sum of the model values, as fitted to the data, that lie within the given region. This includes any PSF included by set_psf.
- Return type:
number
See also
calc_model_sum
Sum up the fitted model over a pass band.
calc_source_sum2d
Sum up the unconvolved model for a 2D data set.
set_psf
Add a PSF model to a data set.
set_model
Set the source model expression for a data set.
Notes
The coordinate system of the region filter is determined by the coordinate setting for the data set (e.g. get_coord).
Any existing filter on the data set - e.g. as created by ignore2d or notice2d - is ignored by this function.
Examples
The following examples use the data in the default data set created with the following calls, which sets the y (data) values to be 0 to 11 in a 3 row by 4 column image:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[10:13, 20:24]
>>> y = y.flatten()
>>> x = x.flatten()
>>> load_arrays(1, x, y, ivals, (3, 4), DataIMG)
>>> set_source(const2d.bgnd)
>>> bgnd.c0 = 2
with no argument, the full data set is used. Since the model evaluates to 2 per pixel, and there are 12 pixels in the data set, the result is 24:
>>> calc_model_sum2d()
24.0
and a spatial filter can be used to restrict the region used for the summation:
>>> calc_model_sum2d('circle(22,12,1)')
8.0
>>> calc_model_sum2d('field()-circle(22,12,1)')
16.0
Apply the spatial filter to the model for the data set labelled “a2142”:
>>> calc_model_sum2d('rotbox(4232.3,3876,300,200,43)', 'a2142')
- calc_photon_flux(lo=None, hi=None, id: int | str | None = None, bkg_id: int | str | None = None, model=None)
Integrate the unconvolved source model over a pass band.
Calculate the integral of S(E) over a pass band, where S(E) is the spectral model evaluated for each bin (that is, the model without any instrumental responses applied to it).
Changed in version 4.12.1: The model parameter was added.
- Parameters:
lo (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples.
- Returns:
flux – The flux or flux density. For X-Spec style models the flux units will be photon/cm^2/s and the flux density units will be either photon/cm^2/s/keV or photon/cm^2/s/Angstrom, depending on the analysis setting.
- Return type:
number
See also
calc_data_sum
Sum up the observed counts over a pass band.
calc_model_sum
Sum up the fitted model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
calc_source_sum
Sum up the source model over a pass band.
set_analysis
Set the units used when fitting and displaying spectral data
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis).
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set. It is unlikely to give sensible results for 2D data sets.
Examples
Calculate the integral of the unconvolved model over the full range of the default data set:
>>> calc_photon_flux()
Return the flux for the data set labelled “core”:
>>> calc_photon_flux(id='core')
Calculate the photon flux over the ranges 0.5 to 2 and 0.5 to 7 keV, and compare them to the energy fluxes for the same bands:
>>> set_analysis('energy')
>>> calc_photon_flux(0.5, 2)
0.35190275
>>> calc_photon_flux(0.5, 7)
0.49050927
>>> calc_energy_flux(0.5, 2)
5.7224906878061796e-10
>>> calc_energy_flux(0.5, 7)
1.3758131915063825e-09
Calculate the photon flux density at 0.5 keV for the source “core”:
>>> calc_photon_flux(0.5, id="core")
0.64978176
Calculate the flux for the model applied to the second background component of the ‘jet’ data set, for the wavelength range 20 to 22 Angstroms:
>>> set_analysis('jet', 'wave')
>>> calc_photon_flux(20, 22, id='jet', bkg_id=2)
For the following example, the source model is an absorbed power law - xsphabs.gal * powlaw1d.pl - so that the fabs value represents the absorbed flux, and funabs the unabsorbed flux (i.e. just the power-law component):
>>> fabs = calc_photon_flux(0.5, 7)
>>> funabs = calc_photon_flux(0.5, 7, model=pl)
- calc_source(id: int | str | None = None, bkg_id: int | str | None = None) tuple[tuple[ndarray, ...], ndarray]
Calculate the per-bin source values.
Unlike calc_model, the values are not filtered and grouped, but the independent axis will use the analysis setting for PHA data.
Added in version 4.17.0.
- Parameters:
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns:
xvals, yvals – The independent axis, which uses a tuple as the number of elements depends on the dimensionality and type of data. The units depend on the data type: for PHA data the X axis will be in the analysis units and the Y axis will generally be photon/cm^2/s.
- Return type:
tuple of ndarray, ndarray
Examples
For a PHA dataset the independent axis is a pair of values, giving the low and high energies. The xlo and xhi values are in keV, and represent the low and high edges of each bin, and the yvals array will generally be in photon/cm^2/s.
>>> load_pha("3c273.pi")
>>> set_analysis("energy")
>>> notice(0.5, 6)
>>> set_source(xsphabs.gal * powlaw1d.pl)
>>> gal.nh = 0.1
>>> pl.gamma = 1.7
>>> pl.ampl = 2e-4
>>> xvals, yvals = calc_source()
>>> xlo = xvals[0]
>>> xhi = xvals[1]
The results can be compared to the output of plot_source to show agreement:
>>> set_analysis("energy", type="rate", factor=0) >>> plot_source() >>> xvals, yvals = calc_source() >>> elo, ehi = xvals >>> plt.plot((elo + ehi) / 2, yvals / (ehi - elo))
Changing the analysis setting changes the x values, as xvals2 is in Angstrom rather than keV (the model values are the same, although there may be small numerical differences that mean the values do not exactly match):
>>> set_analysis("wave") >>> xvals2, yvals2 = calc_source()
For 1D datasets the x axis is a single-element tuple:
>>> load_arrays(2, [1, 4, 7], [3, 12, 2])
>>> set_source(2, gauss1d.gline)
>>> gline.pos = 4.2
>>> gline.fwhm = 3
>>> gline.ampl = 12
>>> xvals, yvals = calc_source(2)
>>> x = xvals[0]
>>> x
array([1, 4, 7])
>>> yvals
array([ 0.51187072, 11.85303595, 1.07215839])
- calc_source_sum(lo=None, hi=None, id: int | str | None = None, bkg_id: int | str | None = None)
Sum up the source model over a pass band.
Sum up S(E) over a range of bins, where S(E) is the per-bin model value before it has been convolved with any instrumental response (e.g. RMF and ARF or PSF). This is intended for one-dimensional data sets: use calc_source_sum2d for two-dimensional data sets. The calc_model_sum function is used to calculate the sum of the model after any instrumental response is applied.
- Parameters:
lo (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns:
signal – The model value (sum or individual bin).
- Return type:
number
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis). The summation occurs over those points in the data set that lie within this range, not the range itself.
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set.
Examples
Calculate the model evaluated over the full data set (all points or pixels of the independent axis) for the default data set, and compare it to the sum for the first background component:
>>> tsrc = calc_source_sum()
>>> tbkg = calc_source_sum(bkg_id=1)
Sum up the model over the data range 0.5 to 2 for the default data set:
>>> calc_source_sum(0.5, 2)
139.12819041922018
Compare the output of the calc_source_sum and calc_photon_flux routines. A 1099-bin data space is created, with a model which has a value of 1 for each bin. As the bin width is constant, at 0.01, the integrated value, calculated by calc_photon_flux, is one hundredth the value returned by calc_source_sum:
>>> dataspace1d(0.01, 11, 0.01, id="test")
>>> set_source("test", const1d.bflat)
>>> bflat.c0 = 1
>>> calc_source_sum(id="test")
1099.0
>>> calc_photon_flux(id="test")
10.99
In the following example, a small data set is created, covering the axis range of -5 to 5, and an off-center gaussian model (centered at 1) is created. The model is evaluated over the full data grid and then a subset of pixels. As the summation is done over those points in the data set that lie within the requested range, the sum for lo=-2 to hi=1 is the same as that for lo=-1.5 to hi=1.5:
>>> load_arrays('test', [-5, -2.5, 0, 2.5, 5], [2, 5, 12, 7, 3])
>>> set_source('test', gauss1d.gmdl)
>>> gmdl.pos = 1
>>> gmdl.fwhm = 2.4
>>> gmdl.ampl = 10
>>> calc_source_sum(id='test')
9.597121089731253
>>> calc_source_sum(-2, 1, id='test')
6.179472329646446
>>> calc_source_sum(-1.5, 1.5, id='test')
6.179472329646446
- calc_source_sum2d(reg=None, id: int | str | None = None)
Sum up the unconvolved model for a 2D data set.
This function is for two-dimensional data sets: use calc_source_sum for one-dimensional data sets.
- Parameters:
reg (str, optional) – The spatial filter to use. The default, None, is to use the whole data set.
id (int, str, or None, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
msum – The sum of the model values that lie within the given region. This does not include any PSF included by set_psf.
- Return type:
number
See also
calc_model_sum2d
Sum up the convolved model for a 2D data set.
calc_source_sum
Sum up the source model over a pass band.
set_psf
Add a PSF model to a data set.
set_model
Set the source model expression for a data set.
Notes
The coordinate system of the region filter is determined by the coordinate setting for the data set (e.g. get_coord).
Any existing filter on the data set - e.g. as created by ignore2d or notice2d - is ignored by this function.
Examples
The following examples use the data in the default data set created with the following calls, which sets the y (data) values to be 0 to 11 in a 3 row by 4 column image:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[10:13, 20:24]
>>> y = y.flatten()
>>> x = x.flatten()
>>> load_arrays(1, x, y, ivals, (3, 4), DataIMG)
>>> set_source(const2d.bgnd)
>>> bgnd.c0 = 2
With no argument, the full data set is used. Since the model evaluates to 2 per pixel, and there are 12 pixels in the data set, the result is 24:
>>> calc_source_sum2d()
24.0
A spatial filter can be used to restrict the region used for the summation:
>>> calc_source_sum2d('circle(22,12,1)')
8.0
>>> calc_source_sum2d('field()-circle(22,12,1)')
16.0
Apply the spatial filter to the model for the data set labelled “a2142”:
>>> calc_source_sum2d('rotbox(4232.3,3876,300,200,43)', 'a2142')
- calc_stat(id: int | str | None = None, *otherids: int | str)
Calculate the fit statistic for a data set.
Evaluate the model for one or more data sets, compare it to the data using the current statistic, and return the value. No fitting is done; the current model parameter values, and any filters, are used.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
otherids (int or str, optional) – Include additional data sets in the calculation.
- Returns:
stat – The current statistic value.
- Return type:
number
See also
calc_chisqr
Calculate the per-bin chi-squared statistic.
calc_stat_info
Display the statistic values for the current models.
set_stat
Set the statistical method.
Examples
Calculate the statistic for the model and data in the default data set:
>>> stat = calc_stat()
Find the statistic for data set 3:
>>> stat = calc_stat(3)
When fitting to multiple data sets, you can get the contribution to the total fit statistic from only one data set, or from several by listing the datasets explicitly. The following finds the contribution from the data sets labelled “core” and “jet”:
>>> stat = calc_stat("core", "jet")
Calculate the statistic value using two different statistics:
>>> set_stat('cash')
>>> s1 = calc_stat()
>>> set_stat('cstat')
>>> s2 = calc_stat()
- calc_stat_info()
Display the statistic values for the current models.
Displays the statistic value for each data set, and the combined fit, using the current set of models, parameters, and ranges. The output is printed to stdout, and so is intended for use in interactive analysis. The get_stat_info function returns the same information but as an array of Python structures.
See also
calc_stat
Calculate the fit statistic for a data set.
get_stat_info
Return the statistic values for the current models.
Notes
If a fit to a particular data set has not been made, or values - such as parameter settings, the noticed data range, or choice of statistic - have been changed since the last fit, then the results for that data set may not be meaningful and will therefore bias the results of the simultaneous fit.
The information returned by calc_stat_info includes:
- Dataset
The dataset identifier (or identifiers).
- Statistic
The name of the statistic used to calculate the results.
- Fit statistic value
The current fit statistic value.
- Data points
The number of bins used in the fit.
- Degrees of freedom
The number of bins minus the number of thawed parameters.
Some fields are only returned for a subset of statistics:
- Probability (Q-value)
A measure of the probability that one would observe the reduced statistic value, or a larger value, if the assumed model is true and the best-fit model parameters are the true parameter values.
- Reduced statistic
The fit statistic value divided by the number of degrees of freedom.
Examples
>>> calc_stat_info()
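The displayed values can also be retrieved programmatically. A short sketch, not from the original documentation, assuming the fields of the structures returned by get_stat_info (statname, statval, numpoints, and dof) match the quantities described above:
>>> for res in get_stat_info():  # fields assumed from get_stat_info
...     print(res.statname, res.statval, res.numpoints, res.dof)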
- clean() None
Clear out the current Sherpa session.
The clean function removes all data sets and model assignments, and restores the default settings for the optimisation and fit statistic.
Changed in version 4.15.0: The model names are now removed from the global symbol table.
See also
save
Save the current Sherpa session to a file.
restore
Load in a Sherpa session from a file.
sherpa.astro.ui.save_all
Save the Sherpa session as an ASCII file.
Examples
>>> clean()
After the call to clean, the line and bgnd variables will be removed, so accessing them would cause a NameError.
>>> set_source(gauss1d.line + const1d.bgnd)
>>> bgnd.c0.min = 0
>>> print(line)
>>> clean()
- conf(*args)
Estimate parameter confidence intervals using the confidence method.
The conf command computes confidence interval bounds for the specified model parameters in the dataset. A given parameter’s value is varied along a grid of values while the values of all the other thawed parameters are allowed to float to new best-fit values. The get_conf and set_conf_opt commands can be used to configure the error analysis; an example being changing the ‘sigma’ field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_conf_results routine can be used to retrieve the results.
- Parameters:
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example conf(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
covar
Estimate the confidence intervals using the covariance method.
get_conf
Return the confidence-interval estimation object.
get_conf_results
Return the results of the last conf run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_conf_opt
Set an option of the conf estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The conf function is different to covar, in that all other thawed parameters are allowed to float to new best-fit values, instead of being fixed to the initial best-fit values as they are in covar. While conf is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
The conf function is a replacement for the proj function, which uses a different algorithm to estimate parameter confidence limits.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter’s values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
If either of the conditions given above does not hold, then the output from conf may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.
As the calculation can be computer intensive, the default behavior is to use all available CPU cores to speed up the analysis. This can be changed by varying the numcores option - or setting parallel to False - either with set_conf_opt or get_conf.
As conf estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma = the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with the sigma option to set_conf_opt or get_conf.
The limit calculated by conf is basically a 1-dimensional root in the translated coordinate system (translated by the value of the statistic at the minimum plus sigma^2). The Taylor series expansion of the multi-dimensional function at the minimum is:
f(x + dx) ~ f(x) + grad( f(x) )^T dx + (1/2) dx^T Hessian( f(x) ) dx + ...
where x is understood to be the n-dimensional vector representing the free parameters to be fitted and the super-script ‘T’ is the transpose of the row-vector. At or near the minimum, the gradient of the function is zero or negligible, respectively. So the leading term of the expansion is quadratic. The best root finding algorithm for a curve which is approximately parabolic is Muller’s method [1]. Muller’s method is a generalization of the secant method [2]: the secant method is an iterative root finding method that approximates the function by a straight line through two points, whereas Muller’s method is an iterative root finding method that approximates the function by a quadratic polynomial through three points.
Three data points are the minimum input to Muller’s root finding method. The first point to be submitted to Muller’s root finding method is the point at the minimum. To strategically choose the other two data points, the confidence function uses the output from covariance as the second data point. To generate the third data point for the input to Muller’s root finding method, the secant root finding method is used, since it only requires two data points to generate the next best approximation of the root.
However, there are cases where conf cannot locate the root even though the root is bracketed within an interval (perhaps due to the bad resolution of the data). In such cases, when the option openinterval is set to False (which is the default), the routine will print a warning message about not being able to find the root within the set tolerance, and the function will return the average of the open interval which brackets the root. If openinterval is set to True then conf will print the minimal open interval which brackets the root (not to be confused with the lower and upper bound of the confidence interval). The most accurate thing to do is to return an open interval where the root is localized/bracketed rather than the average of the open interval (since the average of the interval is not a root within the specified tolerance).
References
Muller, David E., “A Method for Solving Algebraic Equations Using an Automatic Computer,” MTAC, 10 (1956), 208-215.
Numerical Recipes in Fortran, 2nd edition, 1986, Press et al., p. 347
Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable res.
>>> conf()
>>> res = get_conf_results()
Only evaluate the parameters associated with data set 2:
>>> conf(2)
Only evaluate the intervals for the pos.xpos and pos.ypos parameters:
>>> conf(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_conf_opt('sigma', 1.6)
>>> conf()
Only evaluate the clus.kt parameter for the data sets with identifiers “obs1”, “obs5”, and “obs6”. This will still use the 1.6 sigma setting from the previous run.
>>> conf("obs1", "obs5", "obs6", clus.kt)
Only use two cores when evaluating the errors for the parameters used in the model for data set 3:
>>> set_conf_opt('numcores', 2)
>>> conf(3)
Estimate the errors for all the thawed parameters from the line model and the clus.kt parameter for datasets 1, 3, and 4:
>>> conf(1, 3, 4, line, clus.kt)
- confidence(*args)
Estimate parameter confidence intervals using the confidence method.
The confidence command is an alias for conf and behaves identically: see the description of conf above for the full set of parameters, notes, references, and examples.
- contour(*args, **kwargs) None
Create a contour plot for an image data set.
Create one or more contour plots, depending on the arguments it is sent: a plot type, followed by an optional data set identifier, and this can be repeated. If no data set identifier is given for a plot type, the default identifier - as returned by get_default_id - is used. This is for 2D data sets.
Changed in version 4.17.0: The keyword arguments can now be set per plot by using a sequence of values. The layout can be changed with the rows and cols arguments and the automatic calculation no longer forces two rows. Handling of the overcontour flag has been improved.
Changed in version 4.12.2: Keyword arguments, such as alpha, can be sent to each plot.
- Parameters:
args – The contour-plot names and identifiers.
rows – The number of rows to use in the grid of contour plots (if set).
cols – The number of columns to use in the grid of contour plots (if set).
kwargs – The plot arguments applied to each contour plot.
- Raises:
sherpa.utils.err.DataErr – The data set does not support the requested plot type.
See also
contour_data, contour_fit, contour_fit_resid, contour_kernel, contour_model, contour_psf, contour_ratio, contour_resid, contour_source, get_default_id, get_split_plot
Notes
The supported plot types depend on the data set type, and include the following list. There are also individual functions, with contour_ prepended to the plot type, such as contour_data and the contour_fit_resid variant:
data
The data.
fit
Contours of the data and the source model.
fit_resid
Two plots: the first is the contours of the data and the source model and the second is the residuals.
kernel
The kernel.
model
The source model including any PSF convolution set by set_psf.
The PSF.
ratio
Contours of the ratio image, formed by dividing the data by the model.
resid
Contours of the residual image, formed by subtracting the model from the data.
source
The source model (without any PSF convolution set by set_psf).
The keyword arguments are sent to each plot (so care must be taken to ensure they are valid for all plots).
Examples
>>> contour('data')
>>> contour('data', 1, 'data', 2)
>>> contour('data', 'model')
>>> contour('data', 'model', 'fit', 'resid')
>>> contour('data', 'model', alpha=0.7)
Use a single column rather than single row to display the contour plots:
>>> contour('data', 'model', cols=1)
- contour_data(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Contour the values of an image data set.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_data. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_data_contour
Return the data used by contour_data.
get_data_contour_prefs
Return the preferences for contour_data.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the data from the default data set:
>>> contour_data()
Contour the data and then overplot the data from the second data set:
>>> contour_data()
>>> contour_data(2, overcontour=True)
- contour_fit(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Contour the fit to a data set.
Overplot the model - including any PSF - on the data. The preferences are the same as contour_data and contour_model.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_fit. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_fit_contour
Return the data used by contour_fit.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the fit for the default data set:
>>> contour_fit()
Overplot the fit to data set ‘s2’ on that of the default data set:
>>> contour_fit()
>>> contour_fit('s2', overcontour=True)
- contour_fit_resid(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Contour the fit and the residuals to a data set.
Overplot the model - including any PSF - on the data. In a separate plot contour the residuals. The preferences are the same as contour_data and contour_model.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_fit_resid. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_fit_contour
Return the data used by contour_fit.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_fit
Contour the fit to a data set.
contour_resid
Contour the residuals of the fit.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the fit and residuals for the default data set:
>>> contour_fit_resid()
- contour_kernel(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Contour the kernel applied to the model of an image data set.
If the data set has no PSF applied to it, the model is displayed.
- Parameters:
id (int, str, or None, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_kernel. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_psf_contour
Return the data used by contour_psf.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_psf
Contour the PSF applied to the model of an image data set.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
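Examples
A usage sketch, not from the original documentation; it assumes a PSF has been assigned to the default data set with set_psf ('psf.fits' is a hypothetical file name):
>>> load_psf('psf0', 'psf.fits')  # hypothetical PSF image file
>>> set_psf('psf0')
>>> contour_kernel()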
- contour_model(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Create a contour plot of the model.
Displays a contour plot of the values of the model, evaluated on the data, including any PSF kernel convolution (if set).
- Parameters:
id (int, str, or None, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_model. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_model_contour
Return the data used by contour_model.
get_model_contour_prefs
Return the preferences for contour_model.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_source
Create a contour plot of the unconvolved spatial model.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
Examples
Plot the model from the default data set:
>>> contour_model()
Compare the model without and with the PSF component, for the “img” data set:
>>> contour_source("img")
>>> contour_model("img", overcontour=True)
- contour_psf(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Contour the PSF applied to the model of an image data set.
If the data set has no PSF applied to it, the model is displayed.
- Parameters:
id (int, str, or None, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_psf. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_psf_contour
Return the data used by contour_psf.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_kernel
Contour the kernel applied to the model of an image data set.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
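Examples
A usage sketch, not from the original documentation; as with contour_kernel, it assumes a PSF has been assigned to the default data set with set_psf:
>>> contour_psf()
>>> contour_kernel(overcontour=True)  # overlay the extracted kernel for comparison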
- contour_ratio(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Contour the ratio of data to model.
The ratio image is formed by dividing the data by the current model, including any PSF. The preferences are the same as contour_data.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_ratio. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_ratio_contour
Return the data used by contour_ratio.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the ratio from the default data set:
>>> contour_ratio()
Overplot the ratio on the residuals:
>>> contour_resid('img')
>>> contour_ratio('img', overcontour=True)
- contour_resid(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Contour the residuals of the fit.
The residuals are formed by subtracting the current model - including any PSF - from the data. The preferences are the same as contour_data.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_resid. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_resid_contour
Return the data used by contour_resid.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the residuals from the default data set:
>>> contour_resid()
Overplot the residuals on the model:
>>> contour_model('img')
>>> contour_resid('img', overcontour=True)
- contour_source(id: int | str | None = None, replot=False, overcontour=False, **kwargs) None
Create a contour plot of the unconvolved spatial model.
Displays a contour plot of the values of the model, evaluated on the data, without any PSF kernel convolution applied. The preferences are the same as contour_model.
- Parameters:
id (int, str, or None, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_source. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_source_contour
Return the data used by contour_source.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_model
Create a contour plot of the model.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
Examples
Plot the model from the default data set:
>>> contour_source()
Compare the model without and with the PSF component, for the “img” data set:
>>> contour_model("img")
>>> contour_source("img", overcontour=True)
- copy_data(fromid: int | str, toid: int | str) None
Copy a data set, creating a new identifier.
After copying the data set, any changes made to the original data set (that is, the fromid identifier) will not be reflected in the new (the toid identifier) data set.
- Parameters:
fromid (int or str) – The identifier of the data set to copy.
toid (int or str) – The identifier for the copy of the data set.
- Raises:
sherpa.utils.err.IdentifierErr – If there is no data set with a fromid identifier.
Examples
>>> copy_data(1, 2)
Rename the data set with identifier 2 to “orig”, and then delete the old data set:
>>> copy_data(2, "orig")
>>> delete_data(2)
- covar(*args)
Estimate parameter confidence intervals using the covariance method.
The covar command computes confidence interval bounds for the specified model parameters in the dataset, using the covariance matrix of the statistic. The get_covar and set_covar_opt commands can be used to configure the error analysis; an example being changing the sigma field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_covar_results routine can be used to retrieve the results.
- Parameters:
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example covar(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
conf
Estimate the confidence intervals using the confidence method.
get_covar
Return the covariance estimation object.
get_covar_results
Return the results of the last covar run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_covar_opt
Set an option of the covar estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The covar command is different to conf, in that all other thawed parameters are fixed, rather than being allowed to float to new best-fit values. While conf is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter’s values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
If either of the conditions given above does not hold, then the output from covar may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.
As covar estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma = the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with the sigma option to set_covar_opt or get_covar.
Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable res.
>>> covar()
>>> res = get_covar_results()
Only evaluate the parameters associated with data set 2.
>>> covar(2)
Only evaluate the intervals for the pos.xpos and pos.ypos parameters:
>>> covar(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_covar_opt('sigma', 1.6)
>>> covar()
Only evaluate the clus.kt parameter for the data sets with identifiers “obs1”, “obs5”, and “obs6”. This will still use the 1.6 sigma setting from the previous run.
>>> covar("obs1", "obs5", "obs6", clus.kt)
Estimate the errors for all the thawed parameters from the line model and the clus.kt parameter for datasets 1, 3, and 4:
>>> covar(1, 3, 4, line, clus.kt)
- covariance(*args)
Estimate parameter confidence intervals using the covariance method.
The covariance command is an alias for covar and behaves identically: see the description of covar above for the full set of parameters, notes, and examples.
- static create_arf(elo, ehi, specresp=None, exposure=None, ethresh=None, name='test-arf') DataARF
Create an ARF.
Added in version 4.10.1.
- Parameters:
elo (numpy.ndarray) – The energy bins (low and high, in keV) for the ARF. It is assumed that ehi_i > elo_i, elo_i > 0, the energy bins are either ascending - so elo_i+1 > elo_i - or descending (elo_i+1 < elo_i), and that there are no overlaps.
ehi (numpy.ndarray) – The energy bins (low and high, in keV) for the ARF. It is assumed that ehi_i > elo_i, elo_i > 0, the energy bins are either ascending - so elo_i+1 > elo_i - or descending (elo_i+1 < elo_i), and that there are no overlaps.
specresp (None or array, optional) – The spectral response (in cm^2) for the ARF. It is assumed to be >= 0. If not given a flat response of 1.0 is used.
exposure (number or None, optional) – If not None, the exposure of the ARF in seconds.
ethresh (number or None, optional) – Passed through to the DataARF call. It controls whether zero-energy bins are replaced.
name (str, optional) – The name of the ARF data set.
- Returns:
arf
- Return type:
DataARF instance
Examples
Create a flat ARF, with a value of 1.0 cm^2 for each bin, over the energy range 0.1 to 10 keV, with a bin spacing of 0.01 keV.
>>> egrid = np.arange(0.1, 10, 0.01)
>>> arf = create_arf(egrid[:-1], egrid[1:])
Create an ARF that has 10 percent more area than the ARF from the default data set:
>>> arf1 = get_arf()
>>> elo = arf1.energ_lo
>>> ehi = arf1.energ_hi
>>> y = 1.1 * arf1.specresp
>>> arf2 = create_arf(elo, ehi, y, exposure=arf1.exposure)
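The created ARF can then be attached to a PHA data set. A brief sketch, not from the original documentation, assuming PHA data has already been loaded into the default data set:
>>> set_arf(arf2)  # assumes a PHA data set is already loaded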
- create_model_component(typename=None, name=None)
Create a model component.
Model components created by this function are set to their default values. Components can also be created directly using the syntax typename.name, such as in calls to set_model and set_source (unless you have called set_model_autoassign_func to change the default model auto-assignment setting).
- Parameters:
typename (str) – The name of the model. This should match an entry from the return value of list_models, and defines the type of model.
name (str) – The name used to refer to this instance, or component, of the model. A Python variable will be created with this name that can be used to inspect and change the model parameters, as well as use it in model expressions.
- Returns:
model
- Return type:
the sherpa.models.Model object created
See also
delete_model_component
Delete a model component.
get_model_component
Returns a model component given its name.
list_models
List the available model types.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
Notes
This function can over-write an existing component. If the over-written component is part of a source expression - as set by set_model - then the model evaluation will still use the old model definition (and be able to change the fit parameters), but direct access to its parameters is not possible since the name now refers to the new component (this is true using direct access, such as mname.parname, or with set_par).
Examples
Create an instance of the powlaw1d model called pl, set its gamma parameter to 2.6, and then freeze it:
>>> create_model_component("powlaw1d", "pl")
>>> pl.gamma = 2.6
>>> freeze(pl.gamma)
Create a blackbody model called bb, check that it is recognized as a component, and display its parameters:
>>> create_model_component("bbody", "bb") >>> list_model_components() >>> print(bb) >>> print(bb.ampl)
- static create_rmf(rmflo, rmfhi, startchan=1, e_min=None, e_max=None, ethresh=None, fname=None, name='delta-rmf') DataRMF [source] [edit on github]
Create an RMF.
If fname is set to None then this creates a “perfect” RMF, which has a delta-function response (so each channel uniquely maps to a single energy bin), otherwise the RMF is taken from the image data stored in the file pointed to by fname.
Changed in version 4.17.0: Support for startchan values other than 1 has been improved.
Changed in version 4.16.0: The e_min and e_max values will use the rmflo and rmfhi values if not set.
Added in version 4.10.1.
- Parameters:
rmflo (array) – The low edges of the energy bins (in keV) for the RMF. It is assumed that rmfhi_i > rmflo_i, rmflo_i > 0, that the energy bins are either ascending, so rmflo_i+1 > rmflo_i, or descending (rmflo_i+1 < rmflo_i), and that there are no overlaps. These correspond to the ENERG_LO and ENERG_HI columns of the MATRIX block of the OGIP standard.
rmfhi (array) – The high edges of the energy bins (in keV) for the RMF, following the same ordering constraints as rmflo.
startchan (int, optional) – The starting channel number: it should match the offset value for the DataRMF class.
e_min (None or array, optional) – The values for the E_MIN column of the EBOUNDS block of the RMF. If not set they are taken from rmflo.
e_max (None or array, optional) – The values for the E_MAX column of the EBOUNDS block of the RMF. If not set they are taken from rmfhi.
ethresh (number or None, optional) – Passed through to the DataRMF call. It controls whether zero-energy bins are replaced.
fname (None or str, optional) – If None then a “perfect” RMF is generated, otherwise it gives the name of the two-dimensional image file which stores the response information (the format of this file matches that created by the CIAO tool rmfimg).
name (str, optional) – The name of the RMF data set.
- Returns:
rmf
- Return type:
DataRMF instance
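Examples
Create a “perfect” (delta-function) response over the 0.1 to 10 keV range (a minimal sketch; the 0.01 keV grid spacing is an arbitrary choice):
>>> egrid = np.arange(0.1, 10, 0.01)
>>> rmf = create_rmf(egrid[:-1], egrid[1:])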
- dataspace1d(start, stop, step=1, numbins=None, id: int | str | None = None, bkg_id: int | str | None = None, dstype=<class 'sherpa.data.Data1DInt'>) None [source] [edit on github]
Create the independent axis for a 1D data set.
Create an “empty” one-dimensional data set by defining the grid on which the points are defined (the independent axis). The values are set to 0.
- Parameters:
start (number) – The minimum value of the axis.
stop (number) – The maximum value of the axis.
step (number, optional) – The separation between each grid point. This is not used if numbins is set.
numbins (int, optional) – The number of grid points. This overrides the step setting.
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – If set, the grid is for the background component of the data set.
dstype (data class to use, optional) – What type of data is to be used. Supported values include Data1DInt (the default), Data1D, and DataPHA.
See also
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
get_indep
Return the independent axes of a data set.
set_dep
Set the dependent axis of a data set.
Notes
The meaning of the stop parameter depends on whether it is a binned or unbinned data set (as set by the dstype parameter).
Examples
Create a binned data set, starting at 1 and with a bin-width of 1.
>>> dataspace1d(1, 5, 1)
>>> print(get_indep())
(array([ 1., 2., 3., 4.]), array([ 2., 3., 4., 5.]))
This time for an un-binned data set:
>>> dataspace1d(1, 5, 1, dstype=Data1D)
>>> print(get_indep())
(array([ 1., 2., 3., 4., 5.]),)
Specify the number of bins rather than the grid spacing:
>>> dataspace1d(1, 5, numbins=5, id=2)
>>> (xlo, xhi) = get_indep(2)
>>> xlo
array([ 1. , 1.8, 2.6, 3.4, 4.2])
>>> xhi
array([ 1.8, 2.6, 3.4, 4.2, 5. ])
>>> dataspace1d(1, 5, numbins=5, id=3, dstype=Data1D)
>>> (x, ) = get_indep(3)
>>> x
array([ 1., 2., 3., 4., 5.])
Create a grid for a PHA data set called ‘jet’, and for its background component (note that the axis values are in channels, and there are 1024 channels set):
>>> dataspace1d(1, 1024, id='jet', dstype=DataPHA)
>>> dataspace1d(1, 1024, id='jet', bkg_id=1, dstype=DataPHA)
- dataspace2d(dims, id: int | str | None = None, dstype=<class 'sherpa.astro.data.DataIMG'>) None [source] [edit on github]
Create the independent axis for a 2D data set.
Create an “empty” two-dimensional data set by defining the grid on which the points are defined (the independent axis). The values are set to 0.
- Parameters:
dims (sequence of 2 number) – The dimensions of the grid in (width, height) order.
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
dstype (data class to use, optional) – What type of data is to be used. Supported values include DataIMG (the default), Data2D, and Data2DInt.
See also
dataspace1d
Create the independent axis for a 1D data set.
get_dep
Return the dependent axis of a data set.
get_indep
Return the independent axes of a data set.
set_dep
Set the dependent axis of a data set.
Examples
Create a 200 pixel by 150 pixel grid (number of columns by number of rows) and display it (each pixel has a value of 0):
>>> dataspace2d([200, 150])
>>> image_data()
Create a data space called “fakeimg”:
>>> dataspace2d([nx, ny], id="fakeimg")
- delete_bkg_model(id: int | str | None = None, bkg_id: int | str | None = None) None [source] [edit on github]
Delete the background model expression for a data set.
This removes the model expression, created by set_bkg_model, for the background component of a data set. It does not delete the components of the expression, or remove the models for any other background components or the source of the data set.
- Parameters:
id (int, str, or None, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – The identifier for the background component to use.
See also
clean
Clear all stored session data.
delete_model
Delete the model expression for a data set.
get_default_id
Return the default data set identifier.
list_bkg_ids
List all the background identifiers for a data set.
set_model
Set the source model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Remove the background model expression for the default data set:
>>> delete_bkg_model()
Remove the model expression for the background component labelled ‘down’ for the data set with the identifier ‘src’:
>>> delete_bkg_model('src', 'down')
- delete_data(id: int | str | None = None) None [source] [edit on github]
Delete a data set by identifier.
The data set, and any associated structures - such as the ARF and RMF for PHA data sets - are removed.
- Parameters:
id (int, str, or None, optional) – The data set to delete. If not given then the default identifier is used, as returned by get_default_id.
See also
clean
Clear all stored session data.
copy_data
Copy a data set to a new identifier.
delete_model
Delete the model expression from a data set.
get_default_id
Return the default data set identifier.
list_data_ids
List the identifiers for the loaded data sets.
Notes
The source expression is not removed by this function.
Examples
Delete the data from the default data set:
>>> delete_data()
Delete the data set identified as ‘src’:
>>> delete_data('src')
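Since the source expression is not removed, data loaded again under the same identifier keeps the existing model (a sketch; 'src.pi' is a hypothetical file name):
>>> delete_data('src')
>>> load_pha('src', 'src.pi')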
- delete_model(id: int | str | None = None) None [source] [edit on github]
Delete the model expression for a data set.
This removes the model expression, created by set_model, for a data set. It does not delete the components of the expression.
- Parameters:
id (int, str, or None, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
See also
clean
Clear all stored session data.
delete_data
Delete a data set by identifier.
get_default_id
Return the default data set identifier.
set_model
Set the source model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Remove the model expression for the default data set:
>>> delete_model()
Remove the model expression for the data set with the identifier called ‘src’:
>>> delete_model('src')
- delete_model_component(name: str) None [source] [edit on github]
Delete a model component.
- Parameters:
name (str) – The name used to refer to this instance, or component, of the model. The corresponding Python variable will be deleted by this function.
See also
create_model_component
Create a model component.
delete_model
Delete the model expression for a data set.
list_models
List the available model types.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
Notes
It is an error to try to delete a component that is part of a model expression - i.e. included as part of an expression in a set_model or set_source call. In such a situation, use the delete_model function to remove the source expression before calling delete_model_component.
Examples
If a model instance called pl has been created - e.g. by create_model_component('powlaw1d', 'pl') - then the following will remove it:
>>> delete_model_component('pl')
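If the component is still part of a source expression, the expression has to be removed first, as the Notes describe (a short sketch assuming pl is used in the default data set's model):
>>> delete_model()
>>> delete_model_component('pl')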
- delete_pileup_model(id: int | str | None = None) None [source] [edit on github]
Delete the pile up model for a data set.
Remove the pile up model applied to a source model.
Added in version 4.12.2.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
See also
get_pileup_model
Return the pile up model for a data set.
list_pileup_model_ids
List of all the data sets with a pile up model.
set_pileup_model
Add a pile up model to a data set.
Examples
>>> delete_pileup_model()
>>> delete_pileup_model('core')
- delete_psf(id: int | str | None = None) None [source] [edit on github]
Delete the PSF model for a data set.
Remove the PSF convolution applied to a source model.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
See also
list_psf_ids
List of all the data sets with a PSF.
load_psf
Create a PSF model.
set_psf
Add a PSF model to a data set.
get_psf
Return the PSF model defined for a data set.
Examples
>>> delete_psf()
>>> delete_psf('core')
- eqwidth(src, combo, id: int | str | None = None, lo=None, hi=None, bkg_id: int | str | None = None, error=False, params=None, otherids: Sequence[int | str] = (), niter=1000, covar_matrix=None)[source] [edit on github]
Calculate the equivalent width of an emission or absorption line.
The equivalent width is calculated in the selected units for the data set (which can be retrieved with get_analysis).
Changed in version 4.16.0: The random number generation is now controlled by the set_rng routine.
Changed in version 4.10.1: The error parameter was added which controls whether the return value is a scalar (the calculated equivalent width), when set to False, or the median value, error limits, and ancillary values.
- Parameters:
src – The continuum model (this may contain multiple components).
combo – The continuum plus line (absorption or emission) model.
lo (optional) – The lower limit for the calculation (the units are set by set_analysis for the data set). The default value (None) means that the lower range of the data set is used.
hi (optional) – The upper limit for the calculation (the units are set by set_analysis for the data set). The default value (None) means that the upper range of the data set is used.
id (int, str, or None, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
bkg_id (int, str, or None, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
error (bool, optional) – Whether the errors are to be calculated or not. The default value is False.
params (2D array, optional) – The default is None, in which case get_draws shall be called. The user can input the parameter array (e.g. from running sample_flux).
otherids (sequence of integer or strings, optional) – Other data sets to use in the calculation.
niter (int, optional) – The number of draws to use. The default is 1000.
covar_matrix (2D array, optional) – The covariance matrix to use. If None then the result from get_covar_results().extra_output is used.
- Returns:
If error is False, then returns the equivalent width, otherwise the median, 1 sigma lower bound, 1 sigma upper bound, the parameters array, and the array of the equivalent width values used to determine the errors.
- Return type:
retval
See also
calc_model_sum
Sum up the fitted model over a pass band.
calc_source_sum
Calculate the un-convolved model signal.
get_default_id
Return the default data set identifier.
set_model
Set the source model expression.
Examples
Set a source model (a powerlaw for the continuum and a gaussian for the line), fit it, and then evaluate the equivalent width of the line. The example assumes that this is a PHA data set, with an associated response, so that the analysis can be done in wavelength units.
>>> set_source(powlaw1d.cont + gauss1d.line)
>>> set_analysis('wavelength')
>>> fit()
>>> eqwidth(cont, cont+line)
2.1001988282497308
The calculation is restricted to the range 20 to 24 Angstroms:
>>> eqwidth(cont, cont+line, lo=20, hi=24)
1.9882824973082310
The calculation is done for the background model of data set 2, over the range 0.5 to 2 (in whatever units the analysis setting for this data set uses):
>>> set_bkg_source(2, const1d.flat + gauss1d.bline)
>>> eqwidth(flat, flat+bline, id=2, bkg_id=1, lo=0.5, hi=2)
0.45494599793003426
With the error flag set to True, the return value is enhanced with extra information, such as the median and one-sigma ranges on the equivalent width:
>>> res = eqwidth(p1, p1 + g1, error=True)
>>> ewidth = res[0]  # the median equivalent width
>>> errlo = res[1]   # the one-sigma lower limit
>>> errhi = res[2]   # the one-sigma upper limit
>>> pars = res[3]    # the parameter values used
>>> ews = res[4]     # array of eq. width values
which can be used to display the probability density or cumulative distribution function of the equivalent widths:
>>> plot_pdf(ews)
>>> plot_cdf(ews)
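The array of sampled widths can also be summarised directly, for example with NumPy percentiles (a small sketch):
>>> import numpy as np
>>> np.percentile(ews, [16, 50, 84])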
- fake(id: int | str | None = None, method=<function poisson_noise>) None [source] [edit on github]
Simulate a data set.
Take a data set, evaluate the model for each bin, and then use this value to create a data value from each bin. The default behavior is to use a Poisson distribution, with the model value as the expectation value of the distribution.
- Parameters:
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
method (func) – The function used to create a random realisation of a data set.
See also
dataspace1d
Create the independent axis for a 1D data set.
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
load_arrays
Create a data set from array values.
set_model
Set the source model expression for a data set.
Notes
The function for the method argument accepts a single argument, the data values, and should return an array of the same shape as the input, with the data values to use.
The function can be called on any data set; it does not need to have been created with dataspace1d or dataspace2d.
Specific data set types may have their own, specialized, version of this function.
Examples
Create a random realisation of the model - a constant plus gaussian line - for the range x=-5 to 5.
>>> dataspace1d(-5, 5, 0.5, dstype=Data1D)
>>> set_source(gauss1d.gline + const1d.bgnd)
>>> bgnd.c0 = 2
>>> gline.fwhm = 4
>>> gline.ampl = 5
>>> gline.pos = 1
>>> fake()
>>> plot_data()
>>> plot_model(overplot=True)
For a 2D data set, display the simulated data, model, and residuals:
>>> dataspace2d([150, 80], id='fakeimg')
>>> set_source('fakeimg', beta2d.src + polynom2d.bg)
>>> src.xpos, src.ypos = 75, 40
>>> src.r0, src.alpha = 15, 2.3
>>> src.ellip, src.theta = 0.4, 1.32
>>> src.ampl = 100
>>> bg.c, bg.cx1, bg.cy1 = 3, 0.4, 0.3
>>> fake('fakeimg')
>>> image_fit('fakeimg')
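The method parameter lets a different noise model replace the Poisson draw. A sketch, where the gaussian_noise helper is an assumption written for illustration and not part of Sherpa:
>>> import numpy as np
>>> rng = np.random.default_rng()
>>> def gaussian_noise(values):
...     # hypothetical helper: same-shape array with sigma = sqrt(model value)
...     return values + rng.normal(scale=np.sqrt(np.abs(values)))
>>> fake(method=gaussian_noise)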
- fake_pha(id, arf=None, rmf=None, exposure=None, backscal=None, areascal=None, grouping=None, grouped=False, quality=None, bkg=None, method=None) None [source] [edit on github]
Simulate a PHA data set from a model.
The function creates a simulated PHA data set based on a source model, instrument response (given as an ARF and RMF), and exposure time, along with a Poisson noise term. A background component can be included.
Changed in version 4.16.1: Several bugs have been addressed when simulating data with a background: the background model contribution would be wrong if the source and background exposure times differ or if there were multiple background datasets. The arf, rmf, and exposure arguments are now optional.
Changed in version 4.16.0: The method parameter was added.
Changed in version 4.15.0: The arf argument can now be set to None when the data uses a RSP file (combined RMF and ARF).
- Parameters:
id (int or str) – The identifier for the data set to create. If it already exists then it is assumed to contain a PHA data set and the counts will be over-written.
arf (None or filename or ARF object or list of filenames, optional) – The name of the ARF, or an ARF data object (e.g. as returned by get_arf or unpack_arf). A list of filenames can be passed in for instruments that require multiple ARFs. Set this to None to use any arf that is already set for the data set given by id or for instruments that do not use an ARF separate from the RMF (e.g. XMM-Newton/RGS).
rmf (filename or RMF object or list of filenames, optional) – The name of the RMF, or an RMF data object (e.g. as returned by get_rmf or unpack_rmf). A list of filenames can be passed in for instruments that require multiple RMFs. Set this to None to use any rmf that is already set for the data set given by id.
exposure (number, optional) – The exposure time, in seconds. If not set (i.e. is None) then use the exposure time of the data set given by id.
backscal (number, optional) – The ‘BACKSCAL’ value for the data set.
areascal (number, optional) – The ‘AREASCAL’ value for the data set.
grouping (array, optional) – The grouping array for the data (see set_grouping). Set this to None to use any grouping that is already set for the data set given by id; the grouping is only applied if grouped is True.
grouped (bool, optional) – Should the simulated data be grouped (see group)? The default is False. This value is only used if the grouping parameter is set.
quality (array, optional) – The quality array for the data (see set_quality).
bkg (optional) – If left empty, then only the source emission is simulated. If set to a PHA data object, then the counts from this data set are scaled appropriately and added to the simulated source signal. To use a background model, set bkg="model". In that case a background dataset with bkg_id=1 has to be set before calling fake_pha. That background dataset needs to include the data itself (not used in this function), the background model, and the response.
method (callable or None, optional) – If None, the default, then the data is simulated using the sherpa.utils.poisson_noise routine. If set, it must be a callable that takes a ndarray of the predicted values and returns a ndarray of the same size with the simulated data.
- Raises:
sherpa.utils.err.ArgumentErr – If the data set already exists and does not contain PHA data.
Notes
A model expression is created by using the supplied ARF and RMF to convolve the source expression for the dataset (the return value of get_source for the supplied id parameter). This expression is evaluated for each channel to create the expectation values, which are then passed to a Poisson random number generator to determine the observed number of counts per channel. Any background component is scaled by appropriate terms (exposure time, area scaling, and the backscal value) before it is passed to a Poisson random number generator. The simulated background is added to the simulated data.
Examples
Fit a model - an absorbed powerlaw - to the data in the file src.pi and then simulate the data using the fitted model. The exposure time, ARF, and RMF are taken from the data in src.pi.
>>> load_pha("src.pi") >>> set_source(xsphabs.gal * powlawd.pl) >>> notice(0.5, 6) >>> fit(1) >>> fake_pha(1)
Simulate the data but for a 1 Ms observation:
>>> fake_pha(1, exposure=1e6)
Estimate the signal from a 5000 second observation using the ARF and RMF from “src.arf” and “src.rmf” respectively:
>>> set_source(1, xsphabs.gal * xsapec.clus)
>>> gal.nh = 0.12
>>> clus.kt, clus.abundanc = 4.5, 0.3
>>> clus.redshift = 0.187
>>> clus.norm = 1.2e-3
>>> fake_pha(1, 'src.arf', 'src.rmf', 5000)
Simulate a 1 Ms (1e6 s) observation for the data and model from the default data set. The simulated data will include an estimated background component based on scaling the existing background observations for the source. The simulated data set, which has the same grouping as the default set for easier comparison, is created with the ‘sim’ label and then written out to the file ‘sim.pi’:
>>> arf = get_arf()
>>> rmf = get_rmf()
>>> bkg = get_bkg()
>>> bscal = get_backscal()
>>> grp = get_grouping()
>>> qual = get_quality()
>>> texp = 1e6
>>> set_source('sim', get_source())
>>> fake_pha('sim', arf, rmf, texp, backscal=bscal, bkg=bkg,
...          grouping=grp, quality=qual, grouped=True)
>>> save_pha('sim', 'sim.pi')
Sometimes, the background dataset is noisy because there are not enough photons in the background region. In this case, the background model can be used to generate the photons that the background contributes to the source spectrum. To do this, a background model must be passed in. This model is then convolved with the ARF and RMF (which must be set before) of the default background data set:
>>> set_bkg_source('sim', 'const1d.con1')
>>> load_arf('sim', 'bkg.arf.fits', bkg_id=1)
>>> load_rmf('sim', 'bkg_rmf.fits', bkg_id=1)
>>> fake_pha('sim', arf, rmf, texp, backscal=bscal, bkg='model',
...          grouping=grp, quality=qual, grouped=True)
>>> save_pha('sim', 'sim.pi')
- fit(id: int | str | None = None, *otherids: int | str, **kwargs) None [source] [edit on github]
Fit a model to one or more data sets.
Use forward fitting to find the best-fit model to one or more data sets, given the chosen statistic and optimization method. The fit proceeds until the results converge or the number of iterations exceeds the maximum value (these values can be changed with set_method_opt). An iterative scheme can be added using set_iter_method to try and improve the fit. The final fit results are displayed to the screen and can be retrieved with get_fit_results.
Changed in version 4.17.0: The outfile parameter can now be sent a Path object or a file handle instead of a string.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are fit simultaneously.
*otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
outfile (str, Path, IO object, or None, optional) – If set, then the fit results will be written to a file with this name. The file contains the per-iteration fit results.
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting). This is only used if outfile is set to a string or Path object.
- Raises:
sherpa.utils.err.FitErr – If filename already exists and clobber is False.
See also
conf
Estimate the confidence intervals using the confidence method.
contour_fit
Contour the fit to a data set.
covar
Estimate the confidence intervals using the covariance method.
fit_bkg
Fit a model to one or more background PHA data sets.
freeze
Fix model parameters so they are not changed by a fit.
get_fit_results
Return the results of the last fit.
plot_fit
Plot the fit results (data, model) for a data set.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
set_stat
Set the statistical method.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_iter_method
Set the iterative-fitting scheme used in the fit.
set_model
Set the model expression for a data set.
show_fit
Summarize the fit results.
thaw
Allow model parameters to be varied during a fit.
Notes
For PHA data sets with background components, the function will fit any background components for which a background model has been created (rather than being subtracted). The fit_bkg function can be used to fit models to just the background data.
If outfile is sent a file handle then it is not closed by this routine.
Examples
Simultaneously fit all data sets with models and then store the results in the variable fres:
>>> fit()
>>> fres = get_fit_results()
Fit just the data set ‘img’:
>>> fit('img')
Simultaneously fit data sets 1, 2, and 3:
>>> fit(1, 2, 3)
Fit data set ‘jet’ and write the fit results to the text file ‘jet.fit’, over-writing it if it already exists:
>>> fit('jet', outfile='jet.fit', clobber=True)
Store the per-iteration values in a StringIO object and extract the data into the variable txt (this avoids the need to create a file):
>>> from io import StringIO
>>> out = StringIO()
>>> fit(outfile=out)
>>> txt = out.getvalue()
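A fit is often preceded by choosing the statistic and optimiser; the sketch below combines those steps ('cstat' and 'simplex' are standard Sherpa statistic and method names):
>>> set_stat('cstat')
>>> set_method('simplex')
>>> fit()
>>> res = get_fit_results()
>>> print(res.statval)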
- fit_bkg(id: int | str | None = None, *otherids: int | str, **kwargs) None [source] [edit on github]
Fit a model to one or more background PHA data sets.
Fit only the background components of PHA data sets. This can be used to find the best-fit background parameters, which can then be frozen before fitting the data, or to ensure that these parameters are well defined before performing a simultaneous source and background fit.
Changed in version 4.17.0: The outfile parameter can now be sent a Path object or a file handle instead of a string.
- Parameters:
id (int or str, optional) – The data set that provides the background data. If not given then all data sets with an associated background model are fit simultaneously.
*otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
outfile (str, Path, IO object, or None, optional) – If set, then the fit results will be written to a file with this name. The file contains the per-iteration fit results.
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting). This is only used if outfile is set to a string or Path object.
- Raises:
sherpa.utils.err.FitErr – If filename already exists and clobber is False.
See also
calc_bkg_stat, conf, contour_fit, covar, fit, freeze, get_fit_results, plot_fit, image_fit, set_stat, set_method, set_method_opt, set_bkg_full_model, set_bkg_model, set_full_model, set_iter_method, set_model, show_bkg_source, show_bkg_model, show_fit, thaw
Notes
This is only for PHA data sets where the background is being modelled, rather than subtracted from the data.
If outfile is sent a file handle then it is not closed by this routine.
Examples
Simultaneously fit all background data sets with models and then store the results in the variable fres:
>>> fit_bkg()
>>> fres = get_fit_results()
Fit the background for data sets 1 and 2, then do a simultaneous fit to the source and background data sets:
>>> fit_bkg(1, 2)
>>> fit(1, 2)
Store the per-iteration values in a StringIO object and extract the data into the variable txt (this avoids the need to create a file):
>>> from io import StringIO
>>> out = StringIO()
>>> fit_bkg(outfile=out)
>>> txt = out.getvalue()
- freeze(*args)[source] [edit on github]
Fix model parameters so they are not changed by a fit.
The arguments can be parameters or models, in which case all parameters of the model are frozen. If no arguments are given then nothing is changed.
Notes
The thaw function can be used to reverse this setting, so that parameters can be varied in a fit.
Examples
Fix the FWHM parameter of the line model (in this case a gauss1d model) so that it will not be varied in the fit:
>>> set_source(const1d.bgnd + gauss1d.line)
>>> line.fwhm = 2.1
>>> freeze(line.fwhm)
>>> fit()
Freeze all parameters of the line model and then re-fit:
>>> freeze(line)
>>> fit()
Freeze the nh parameter of the gal model and the abund parameter of the src model:
>>> freeze(gal.nh, src.abund)
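Since thaw reverses the setting, a frozen parameter can be released again before a later fit (a short sketch reusing the line model above):
>>> freeze(line)
>>> fit()
>>> thaw(line.fwhm)
>>> fit()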
- get_analysis(id: int | str | None = None) str [source] [edit on github]
Return the units used when fitting spectral data.
- Parameters:
id (int, str, or None, optional) – The data set to query. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
setting – The analysis setting for the data set.
- Return type:
{ ‘channel’, ‘energy’, ‘wavelength’ }
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the id argument is not recognized.
See also
get_default_id
Return the default data set identifier.
set_analysis
Change the analysis setting.
Examples
Display the analysis setting for the default data set:
>>> print(get_analysis())
Check whether the data set labelled ‘SgrA’ is using the wavelength setting:
>>> is_wave = get_analysis('SgrA') == 'wavelength'
- get_areascal(id: int | str | None = None, bkg_id: int | str | None = None)[source] [edit on github]
Return the fractional area factor of a PHA data set.
Return the AREASCAL setting for the source or background component of a PHA data set.
- Parameters:
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Set to identify which background component to use. The default value (None) means that the value is for the source component of the data set.
- Returns:
areascal – The AREASCAL value, which can be a scalar or a 1D array.
- Return type:
number or ndarray
See also
get_backscal
Return the area scaling of a PHA data set.
set_areascal
Change the fractional area factor of a PHA data set.
Notes
The fractional area scale is normally set to 1, with the ARF used to scale the model.
References
K. A. Arnaud, I. M. George & A. F. Tennant, “The OGIP Spectral File Format”
Examples
Return the AREASCAL value for the default data set:
>>> get_areascal()
Return the AREASCAL value for the first background component of dataset 2:
>>> get_areascal(id=2, bkg_id=1)
- get_arf(id: int | str | None = None, resp_id: int | str | None = None, bkg_id: int | str | None = None)[source] [edit on github]
Return the ARF associated with a PHA data set.
- Parameters:
id (int, str, or None, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
resp_id (int, str, or None, optional) – The identifier for the ARF within this data set, if there are multiple responses.
bkg_id (int, str, or None, optional) – Set this to return the given background component.
- Returns:
arf – This is a reference to the ARF, rather than a copy, so that changing the fields of the object will change the values in the data set.
- Return type:
a sherpa.astro.instrument.ARF1D instance
See also
fake_pha
Simulate a PHA data set from a model.
get_response
Return the response information applied to a PHA data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_arf
Read in an ARF from a file.
Examples
Return the exposure field of the ARF from the default data set:
>>> get_arf().exposure
Copy the ARF from the default data set to data set 2:
>>> arf1 = get_arf()
>>> set_arf(2, arf1)
Retrieve the ARF associated with the second background component of the ‘core’ data set:
>>> bgarf = get_arf('core', 'bkg.arf', bkg_id=2)
Retrieve the ARF and RMF for the default data set and use them to create a model expression which includes a power-law component (pbgnd) that is not convolved by the response:
>>> arf = get_arf()
>>> rmf = get_rmf()
>>> src_expr = xsphabs.abs1 * powlaw1d.psrc
>>> set_full_model(rmf(arf(src_expr)) + powlaw1d.pbgnd)
>>> print(get_model())
- get_arf_plot(id: int | str | None = None, resp_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_arf.
- Parameters:
id (int, str, or None, optional) – The data set with an ARF. If not given then the default identifier is used, as returned by get_default_id.
resp_id (int, str, or None, optional) – Which ARF to use in the case that multiple ARFs are associated with a data set. The default is None, which means the first one.
recalc (bool, optional) – If False then the results from the last call to plot_arf (or get_arf_plot) are returned, otherwise the data is re-generated.
- Returns:
arf_plot
- Return type:
a sherpa.astro.plot.ARFPlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
Examples
Return the ARF plot data for the default data set:
>>> aplot = get_arf_plot()
>>> aplot.y.max()
676.95794677734375
Return the ARF data for the second response of the data set labelled ‘histate’, and then plot it:
>>> aplot = get_arf_plot('histate', 2)
>>> aplot.plot()
- get_axes(id: int | str | None = None, bkg_id: int | str | None = None)[source] [edit on github]
Return information about the independent axes of a data set.
This function returns the coordinates of each point, or pixel, in the data set. The get_indep function may be preferred in some situations.
- Parameters:
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns:
axis – The independent axis values. The difference from get_indep is that this represents the “alternate grid” for the axis. For PHA data, this is the energy grid (E_MIN and E_MAX). For image data it is an array for each axis, of the length of the axis, using the current coordinate system for the data set.
- Return type:
tuple of arrays
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_indep
Return the independent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
For 1D data sets, the “alternate” view is the same as the independent axis:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], Data1D)
>>> get_indep()
array([10, 15, 19])
>>> get_axes()
array([10, 15, 19])
For a PHA data set, the approximate energy grid of the channels is returned (this is determined by the EBOUNDS extension of the RMF).
>>> load_pha('core', 'src.pi')
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi
>>> (chans,) = get_indep()
>>> (elo, ehi) = get_axes()
>>> chans[0:5]
array([ 1., 2., 3., 4., 5.])
>>> elo[0:5]
array([ 0.0073, 0.0146, 0.0292, 0.0438, 0.0584])
>>> ehi[0:5]
array([ 0.0146, 0.0292, 0.0438, 0.0584, 0.073 ])
The image has 101 columns by 108 rows. The get_indep function returns one-dimensional arrays, for the full dataset, whereas get_axes returns values for the individual axes:
>>> load_image('img', 'img.fits')
>>> get_data('img').shape
(108, 101)
>>> set_coord('img', 'physical')
>>> (x0, x1) = get_indep('img')
>>> (a0, a1) = get_axes('img')
>>> (x0.size, x1.size)
(10908, 10908)
>>> (a0.size, a1.size)
(101, 108)
>>> np.all(x0[:101] == a0)
True
>>> np.all(x1[::101] == a1)
True
- get_backscal(id: int | str | None = None, bkg_id: int | str | None = None)[source] [edit on github]
Return the BACKSCAL scaling of a PHA data set.
Return the BACKSCAL setting for the source or background component of a PHA data set.
- Parameters:
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Set to identify which background component to use. The default value (None) means that the value is for the source component of the data set.
- Returns:
backscal – The BACKSCAL value, which can be a scalar or a 1D array.
- Return type:
number or ndarray
See also
get_areascal
Return the fractional area factor of a PHA data set.
get_bkg_scale
Return the background scaling factor for a PHA data set.
set_backscal
Change the area scaling of a PHA data set.
Notes
The BACKSCAL value can be defined as the ratio of the area of the source (or background) extraction region in image pixels to the total number of image pixels. The fact that there is no ironclad definition for this quantity does not matter so long as the values for a source dataset and its associated background dataset are defined in a similar manner, because only the ratio of source and background BACKSCAL values is used. It can be a scalar or an array.
References
K. A. Arnaud, I. M. George & A. F. Tennant, “The OGIP Spectral File Format”
Examples
>>> get_backscal()
7.8504301607718007e-06
>>> get_backscal(bkg_id=1)
0.00022745132446289
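Only the ratio of the two values matters when scaling the background, so they are typically combined directly (a small sketch assuming a single background component):
>>> scale = get_backscal() / get_backscal(bkg_id=1)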
- get_bkg(id: int | str | None = None, bkg_id: int | str | None = None) DataPHA [source] [edit on github]
Return the background for a PHA data set.
Function to return the background for a PHA data set. The object returned by the call can be used to query and change properties of the background.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – The identifier for this background, which is needed if there are multiple background estimates for the source.
- Returns:
data
- Return type:
a DataPHA object
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
sherpa.utils.err.IdentifierErr – If no data set is associated with this identifier.
Examples
>>> bg = get_bkg()
>>> bg = get_bkg('flare', 2)
- get_bkg_arf(id: int | str | None = None)[source] [edit on github]
Return the background ARF associated with a PHA data set.
This is for the case when there is only one background component and one background response. If this does not hold, use get_arf and use the bkg_id and resp_id arguments.
- Parameters:
id (int, str, or None, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
arf – This is a reference to the ARF, rather than a copy, so that changing the fields of the object will change the values in the data set.
- Return type:
a sherpa.astro.instrument.ARF1D instance
See also
fake_pha
Simulate a PHA data set from a model.
load_bkg_arf
Load an ARF from a file and add it to the background of a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_arf
Read in an ARF from a file.
Examples
Return the exposure field of the ARF from the background of the default data set:
>>> get_bkg_arf().exposure
Copy the ARF from the default data set to data set 2, as the first component:
>>> arf1 = get_bkg_arf()
>>> set_arf(2, arf1, bkg_id=1)
- get_bkg_chisqr_plot(id: int | str | None = None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_chisqr.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_chisqr (or get_bkg_chisqr_plot) are returned, otherwise the data is re-generated.
- Returns:
chisqr – An object representing the data used to create the plot by plot_bkg_chisqr.
- Return type:
a sherpa.astro.plot.BkgChisqrPlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
get_bkg_ratio_plot
Return the data used by plot_bkg_ratio.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot_bkg_chisqr
Plot the chi-squared value for each point of the background of a PHA data set.
Examples
>>> bplot = get_bkg_chisqr_plot()
>>> print(bplot)
>>> get_bkg_chisqr_plot('jet', bkg_id=1).plot()
>>> get_bkg_chisqr_plot('jet', bkg_id=2).overplot()
- get_bkg_delchi_plot(id: int | str | None = None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_delchi.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_delchi (or get_bkg_delchi_plot) are returned, otherwise the data is re-generated.
- Returns:
delchi – An object representing the data used to create the plot by plot_bkg_delchi.
- Return type:
a sherpa.astro.plot.BkgDelchiPlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_chisqr_plot
Return the data used by plot_bkg_chisqr.
get_bkg_ratio_plot
Return the data used by plot_bkg_ratio.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot_bkg_delchi
Plot the ratio of residuals to error for the background of a PHA data set.
Examples
>>> bplot = get_bkg_delchi_plot()
>>> print(bplot)
>>> get_bkg_delchi_plot('jet', bkg_id=1).plot()
>>> get_bkg_delchi_plot('jet', bkg_id=2).overplot()
- get_bkg_fit_plot(id: int | str | None = None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_fit.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_fit (or get_bkg_fit_plot) are returned, otherwise the data is re-generated.
- Returns:
model – An object representing the data used to create the plot by plot_bkg_fit.
- Return type:
a sherpa.astro.plot.BkgFitPlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_plot
Return the data used by plot_bkg.
get_bkg_model_plot
Return the data used by plot_bkg_model.
plot_bkg_fit
Plot the fit results (data, model) for the background of a PHA data set.
Examples
Create the data needed to create the “fit plot” for the background of the default data set and display it:
>>> bplot = get_bkg_fit_plot()
>>> print(bplot)
Return the plot data for data set 2, and then use it to create a plot:
>>> b2 = get_bkg_fit_plot(2)
>>> b2.plot()
The fit plot consists of a combination of a data plot and a model plot, which are captured in the dataplot and modelplot attributes of the return value. These can be used to display the plots individually, such as:
>>> b2.dataplot.plot()
>>> b2.modelplot.plot()
or, to combine the two:
>>> b2.dataplot.plot()
>>> b2.modelplot.overplot()
Return the plot data for the second background component to the “jet” data set:
>>> bplot = get_bkg_fit_plot('jet', bkg_id=2)
- get_bkg_model(id: int | str | None = None, bkg_id: int | str | None = None)[source] [edit on github]
Return the model expression for the background of a PHA data set.
This returns the model expression for the background of a data set, including the instrument response (e.g. ARF and RMF), whether created automatically or explicitly, with set_bkg_full_model.
Changed in version 4.15.1: The response will now be taken from the source dataset if the background has no response.
- Parameters:
id (int, str, or None, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
- Returns:
model – This can contain multiple model components and any instrument response. Changing attributes of this model changes the model used by the data set.
- Return type:
instance
See also
delete_bkg_model
Delete the background model expression for a data set.
get_bkg_source
Return the model expression for the background of a PHA data set.
list_model_ids
List of all the data sets with a source expression.
set_bkg_model
Set the background model expression for a PHA data set.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
show_bkg_model
Display the background model expression for a data set.
Examples
Return the background model expression for the default data set, including any instrument response:
>>> bkg = get_bkg_model()
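The bkg_id argument selects among multiple background components (a small sketch; the 'jet' identifier is an assumption used for illustration):
>>> bkg2 = get_bkg_model('jet', bkg_id=2)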
- get_bkg_model_plot(id: int | str | None = None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_model.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_model (or get_bkg_model_plot) are returned, otherwise the data is re-generated.
- Returns:
model – An object representing the data used to create the plot by plot_bkg_model.
- Return type:
a sherpa.astro.plot.BkgModelHistogram instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_source_plot
Return the data used by plot_bkg_source.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_source
Plot the model expression for the background of a PHA data set.
Examples
>>> bplot = get_bkg_model_plot()
>>> print(bplot)
>>> get_bkg_model_plot('jet', bkg_id=1).plot()
>>> get_bkg_model_plot('jet', bkg_id=2).overplot()
- get_bkg_plot(id: int | str | None = None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg (or get_bkg_plot) are returned, otherwise the data is re-generated.
- Returns:
data – An object representing the data used to create the plot by plot_data. The relationship between the returned values and the values in the data set depends on the analysis, filtering, and grouping settings of the data set.
- Return type:
a sherpa.astro.plot.BkgDataPlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
See also
get_default_id
Return the default data set identifier.
plot_bkg
Plot the background values for a PHA data set.
Examples
Create the data needed to create the “data plot” for the background of the default data set and display it:
>>> bplot = get_bkg_plot()
>>> print(bplot)
Return the plot data for data set 2, and then use it to create a plot:
>>> b2 = get_bkg_plot(2)
>>> b2.plot()
Return the plot data for the second background component to the “jet” data set:
>>> bplot = get_bkg_plot('jet', bkg_id=2)
- get_bkg_ratio_plot(id: int | str | None = None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_ratio.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_ratio (or get_bkg_ratio_plot) are returned, otherwise the data is re-generated.
- Returns:
ratio – An object representing the data used to create the plot by plot_bkg_ratio.
- Return type:
a sherpa.astro.plot.BkgRatioPlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_chisqr_plot
Return the data used by plot_bkg_chisqr.
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot_bkg_ratio
Plot the ratio of data to model values for the background of a PHA data set.
Examples
>>> bplot = get_bkg_ratio_plot()
>>> print(bplot)
>>> get_bkg_ratio_plot('jet', bkg_id=1).plot()
>>> get_bkg_ratio_plot('jet', bkg_id=2).overplot()
- get_bkg_resid_plot(id: int | str | None = None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_resid.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_resid (or get_bkg_resid_plot) are returned, otherwise the data is re-generated.
- Returns:
resid – An object representing the data used to create the plot by plot_bkg_resid.
- Return type:
a sherpa.astro.plot.BkgResidPlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_chisqr_plot
Return the data used by plot_bkg_chisqr.
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
get_bkg_ratio_plot
Return the data used by plot_bkg_ratio.
plot_bkg_resid
Plot the residual (data-model) values for the background of a PHA data set.
Examples
>>> bplot = get_bkg_resid_plot()
>>> print(bplot)
>>> get_bkg_resid_plot('jet', bkg_id=1).plot()
>>> get_bkg_resid_plot('jet', bkg_id=2).overplot()
- get_bkg_rmf(id: int | str | None = None)[source] [edit on github]
Return the background RMF associated with a PHA data set.
This is for the case when there is only one background component and one background response. If this does not hold, use get_rmf and use the bkg_id and resp_id arguments.
- Parameters:
id (int, str, or None, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
rmf – This is a reference to the RMF, rather than a copy, so that changing the fields of the object will change the values in the data set.
- Return type:
a sherpa.astro.instrument.RMF1D instance
See also
fake_pha
Simulate a PHA data set from a model.
load_bkg_rmf
Load a RMF from a file and add it to the background of a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_rmf
Read in a RMF from a file.
Examples
Copy the RMF from the default data set to data set 2, as the first component:
>>> rmf1 = get_bkg_rmf()
>>> set_rmf(2, rmf1, bkg_id=1)
- get_bkg_scale(id: int | str | None = None, bkg_id: int | str = 1, units='counts', group=True, filter=False)[source] [edit on github]
Return the background scaling factor for a background data set.
Return the factor applied to the background component to scale it to match it to the source, either when subtracting the background (units=’counts’), or fitting it simultaneously (units=’rate’).
Changed in version 4.12.2: The bkg_id, counts, group, and filter parameters have been added and the routine no longer calculates the average scaling for all the background components but just for the given component.
- Parameters:
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set to identify which background component to use. The default value is 1.
units ({'counts', 'rate'}, optional) – The correction is applied to a model defined as counts, the default, or a rate. The latter should be used when calculating the correction factor for adding the background data to the source aperture.
group (bool, optional) – Should the values be grouped to match the data?
filter (bool, optional) – Should the values be filtered to match the data?
- Returns:
ratio – The scaling factor. The result can vary per channel, in which case an array is returned.
- Return type:
number or array
See also
get_areascal
Return the fractional area factor of a PHA data set.
get_backscal
Return the area scaling factor for a PHA data set.
set_backscal
Change the area scaling of a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
Notes
The scale factor when units=’counts’ is:
exp_src * bscale_src * areascal_src / (exp_bgnd * bscale_bgnd * areascal_bgnd) / nbkg
where exp_x, bscale_x, and areascal_x are the exposure, BACKSCAL, and AREASCAL values for the source (x=src) and background (x=bgnd) regions, respectively, and nbkg is the number of background datasets associated with the source aperture. When units=’rate’, the exposure and areascal corrections are not included.
Examples
Return the background-scaling factor for the default dataset (this assumes there’s only one background component).
>>> get_bkg_scale()
0.034514770047217924
Return the factor for dataset “pi”:
>>> get_bkg_scale('pi')
0.034514770047217924
Calculate the factors for the first two background components of the default dataset, valid for combining the source and background models to fit the source aperture:
>>> scale1 = get_bkg_scale(units='rate')
>>> scale2 = get_bkg_scale(units='rate', bkg_id=2)
- get_bkg_source(id: int | str | None = None, bkg_id: int | str | None = None)[source] [edit on github]
Return the model expression for the background of a PHA data set.
This returns the model expression created by set_bkg_model or set_bkg_source. It does not include any instrument response.
- Parameters:
id (int, str, or None, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
- Returns:
model – This can contain multiple model components. Changing attributes of this model changes the model used by the data set.
- Return type:
a sherpa.models.Model object
See also
delete_bkg_model
Delete the background model expression for a data set.
get_bkg_model
Return the model expression for the background of a PHA data set.
list_model_ids
List of all the data sets with a source expression.
set_bkg_model
Set the background model expression for a PHA data set.
show_bkg_model
Display the background model expression for a data set.
Examples
Return the background model expression for the default data set:
>>> bkg = get_bkg_source()
>>> len(bkg.pars)
2
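Because the return value is a reference to the session's model rather than a copy, editing its parameters updates the background model in place; a small hypothetical sketch:
>>> bkg = get_bkg_source()
>>> bkg.pars[0].val = 1.5  # changes the model used by the data set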
- get_bkg_source_plot(id=None, lo=None, hi=None, bkg_id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_source.
- Parameters:
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
lo (number, optional) – The low value to plot.
hi (number, optional) – The high value to plot.
bkg_id (int, str, or None, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_source (or get_bkg_source_plot) are returned, otherwise the data is re-generated.
- Returns:
source – An object representing the data used to create the plot by plot_bkg_source.
- Return type:
a sherpa.astro.plot.BkgSourcePlot instance
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_model_plot
Return the data used by plot_bkg_model.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_source
Plot the model expression for the background of a PHA data set.
Examples
Retrieve the source plot information for the background of the default data set and display it:
>>> splot = get_bkg_source_plot()
>>> print(splot)
Return the background plot data for data set 2, and then use it to create a plot:
>>> s2 = get_bkg_source_plot(2)
>>> s2.plot()
Create a plot of the first two background components of the ‘histate’ data set, overplotting the second on the first:
>>> b1 = get_bkg_source_plot('histate', bkg_id=1)
>>> b2 = get_bkg_source_plot('histate', bkg_id=2)
>>> b1.plot()
>>> b2.overplot()
Retrieve the background source plots for the 0.5 to 7 range of the ‘jet’ and ‘core’ data sets and display them on the same plot:
>>> splot1 = get_bkg_source_plot(id='jet', lo=0.5, hi=7)
>>> splot2 = get_bkg_source_plot(id='core', lo=0.5, hi=7)
>>> splot1.plot()
>>> splot2.overplot()
For a PHA data set, the units on both the X and Y axes of the plot are controlled by the set_analysis command. In this case the Y axis will be in units of photons/s/cm^2 and the X axis in keV:
>>> set_analysis('energy', factor=1)
>>> splot = get_bkg_source_plot()
>>> print(splot)
- get_bkg_stat_info()[source] [edit on github]
Return the statistic values for the current background models.
Return the statistic values for the background datasets. See get_stat_info.
Added in version 4.16.0.
- Returns:
stats – The values for each data set. If there are multiple model expressions then the last element will be the value for the combined data sets.
- Return type:
array of sherpa.fit.StatInfoResults
Notes
If a fit to a particular data set has not been made, or values - such as parameter settings, the noticed data range, or choice of statistic - have been changed since the last fit, then the results for that data set may not be meaningful and will therefore bias the combined results.
Examples
>>> res = get_bkg_stat_info()
>>> res[0].statval
498.21750663761935
>>> res[0].dof
439
- get_cdf_plot()[source] [edit on github]
Return the data used to plot the last CDF.
- Returns:
plot – An object containing the data used by the last call to plot_cdf. The fields will be None if the function has not been called.
- Return type:
a sherpa.plot.CDFPlot instance
See also
plot_cdf
Plot the cumulative density function of an array.
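Examples
An illustrative sketch (not from the original documentation); the median field of the returned object is an assumption:
>>> import numpy as np
>>> rng = np.random.default_rng(42)
>>> plot_cdf(rng.normal(size=1000))
>>> p = get_cdf_plot()
>>> print(p.median)  # the median of the plotted sample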
- get_chisqr_plot(id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_chisqr.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_chisqr (or get_chisqr_plot) are returned, otherwise the data is re-generated.
- Returns:
resid_data
- Return type:
a sherpa.plot.ChisqrPlot instance
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_delchi_plot
Return the data used by plot_delchi.
get_ratio_plot
Return the data used by plot_ratio.
get_resid_plot
Return the data used by plot_resid.
plot_chisqr
Plot the chi-squared value for each point in a data set.
Examples
Return the residual data, measured as chi square, for the default data set:
>>> rplot = get_chisqr_plot()
>>> np.min(rplot.y)
0.0005140622701128954
>>> np.max(rplot.y)
8.379696454792295
Display the contents of the residuals plot for data set 2:
>>> print(get_chisqr_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_chisqr_plot('jet')
>>> r2 = get_chisqr_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_conf()[source] [edit on github]
Return the confidence-interval estimation object.
- Returns:
conf
- Return type:
a sherpa.estmethods.Confidence instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The attributes of the confidence-interval object include:
eps
The precision of the calculated limits. The default is 0.01.
fast
If True then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is False.
max_rstat
If the reduced chi square is larger than this value then the results are not used (only applies to chi-square statistics). The default is 3.
maxfits
The maximum number of re-fits allowed (that is, when the remin filter is met). The default is 5.
maxiters
The maximum number of iterations allowed when bracketing limits, before stopping for that parameter. The default is 200.
numcores
The number of computer cores to use when evaluating results in parallel. This is only used if parallel is True. The default is to use all cores.
openinterval
How the conf method should cope with intervals that do not converge (that is, when the maxiters limit has been reached). The default is False.
parallel
If there is more than one free parameter then the results can be evaluated in parallel, to reduce the time required. The default is True.
remin
The minimum difference in statistic value for a new fit location to be considered better than the current best fit (which starts out as the starting location of the fit at the time conf is called). The default is 0.01.
sigma
The number of sigma for the error limits being calculated. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (True), or can parameter values go out all the way to the hard limits if necessary (False)? The default is False.
tol
The tolerance for the fit. The default is 0.2.
verbose
Should extra information be displayed during fitting? The default is False.
Examples
>>> print(get_conf())
name = confidence
numcores = 8
verbose = False
openinterval = False
max_rstat = 3
maxiters = 200
soft_limits = False
eps = 0.01
fast = False
maxfits = 5
remin = 0.01
tol = 0.2
sigma = 1
parallel = True
Change the remin field to 0.05.
>>> cf = get_conf()
>>> cf.remin = 0.05
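The same change can also be made through the set_conf_opt helper listed above; a one-line sketch:
>>> set_conf_opt('remin', 0.05)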
- get_conf_opt(name=None)[source] [edit on github]
Return one or all of the options for the confidence interval method.
This is a helper function since the options can also be read directly using the object returned by get_conf.
- Parameters:
name (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns:
value
- Return type:
dictionary or value
- Raises:
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_conf
Return the confidence-interval estimation object.
set_conf_opt
Set an option of the conf estimation object.
Examples
>>> get_conf_opt('verbose')
False
>>> copts = get_conf_opt()
>>> copts['verbose']
False
- get_conf_results()[source] [edit on github]
Return the results of the last conf run.
- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
conf
call has been made.
See also
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘confidence’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
Examples
>>> res = get_conf_results()
>>> print(res)
datasets = (1,)
methodname = confidence
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('p1.gamma', 'p1.ampl')
parvals = (2.1585155113403327, 0.00022484014787994827)
parmins = (-0.082785567348122591, -1.4825550342799376e-05)
parmaxes = (0.083410634144100104, 1.4825550342799376e-05)
nfits = 13
The following converts the above into a dictionary where the keys are the parameter names and the values are the tuple (best-fit value, lower-limit, upper-limit):
>>> pvals1 = zip(res.parvals, res.parmins, res.parmaxes)
>>> pvals2 = [(v, v+l, v+h) for (v, l, h) in pvals1]
>>> dres = dict(zip(res.parnames, pvals2))
>>> dres['p1.gamma']
(2.1585155113403327, 2.07572994399221, 2.241926145484433)
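The percent field follows from sigma under the stated normal-distribution assumption; a short sketch checking this with SciPy (the scipy import is an assumption, not a Sherpa requirement):
>>> from scipy.stats import norm
>>> 100 * (norm.cdf(1) - norm.cdf(-1))  # percent coverage for sigma = 1
68.26894921370859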
- get_confidence_results() [edit on github]
Return the results of the last conf run.
- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
conf
call has been made.
See also
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘confidence’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
Examples
>>> res = get_conf_results()
>>> print(res)
datasets = (1,)
methodname = confidence
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('p1.gamma', 'p1.ampl')
parvals = (2.1585155113403327, 0.00022484014787994827)
parmins = (-0.082785567348122591, -1.4825550342799376e-05)
parmaxes = (0.083410634144100104, 1.4825550342799376e-05)
nfits = 13
The following converts the above into a dictionary where the keys are the parameter names and the values are the tuple (best-fit value, lower-limit, upper-limit):
>>> pvals1 = zip(res.parvals, res.parmins, res.parmaxes)
>>> pvals2 = [(v, v+l, v+h) for (v, l, h) in pvals1]
>>> dres = dict(zip(res.parnames, pvals2))
>>> dres['p1.gamma']
(2.1585155113403327, 2.07572994399221, 2.241926145484433)
- get_contour_prefs(contourtype: str, id: int | str | None = None)[source] [edit on github]
Return the preferences for the given contour type.
Added in version 4.16.0.
- Parameters:
contourtype (str) – The contour type, such as “data”, “model”, or “resid”. The “fit” argument is not supported.
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by
get_default_id
.
- Returns:
prefs – Changing the values of this dictionary will change any new contour plots. This dictionary will be empty if no plot backend is available.
- Return type:
dict
Notes
The meaning of the fields depends on the chosen plot backend. A value of None means to use the default value for that attribute, or not to use that setting.
The "fit" argument cannot be used, even though there is a get_fit_contour call. Either use the "data" or "model" arguments to access the desired plot type, or use get_fit_contour() and access the datacontour and modelcontour attributes directly.
Examples
Change the contours of the data to be drawn partly opaque (with the matplotlib backend):
>>> prefs = get_contour_prefs("data")
>>> prefs['alpha'] = 0.7
>>> contour_data()
>>> contour_model(overcontour=True)
- get_coord(id: int | str | None = None) str [source] [edit on github]
Get the coordinate system used for image analysis.
- Parameters:
id (int, str, or None, optional) – The data set to query. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
coord
- Return type:
{ ‘logical’, ‘physical’, ‘world’ }
- Raises:
sherpa.utils.err.ArgumentErr – If the data set does not contain image data.
sherpa.utils.err.IdentifierErr – If the
id
argument is not recognized.
See also
get_default_id
Return the default data set identifier.
set_coord
Set the coordinate system to use for image analysis.
- get_counts(id: int | str | None = None, filter=False, bkg_id: int | str | None = None) [edit on github]
Return the dependent axis of a data set.
This function returns the data values (the dependent axis) for each point or pixel in the data set.
- Parameters:
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int, str, or None, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns:
axis – The dependent axis values. The model estimate is compared to these values during fitting. For PHA data sets, the return array will match the grouping scheme applied to the data set. This array is one-dimensional, even for two dimensional (e.g. image) data.
- Return type:
array
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_rate
Return the count rate of a PHA data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], Data1D)
>>> get_dep()
array([4, 5, 9])
>>> x0 = [10, 15, 12, 19]
>>> x1 = [12, 14, 10, 17]
>>> y = [4, 5, 9, -2]
>>> load_arrays(2, x0, x1, y, Data2D)
>>> get_dep(2)
array([4, 5, 9, -2])
If the filter flag is set then the return will be limited to the data that is used in the fit:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> ignore_id(1, 17, None)
>>> get_dep()
array([4, 5, 9])
>>> get_dep(filter=True)
array([4, 5])
An example with a PHA data set named ‘spec’:
>>> notice_id('spec', 0.5, 7)
>>> yall = get_dep('spec')
>>> yfilt = get_dep('spec', filter=True)
>>> yall.size
1024
>>> yfilt.size
446
For images, the data is returned as a one-dimensional array:
>>> load_image('img', 'image.fits')
>>> ivals = get_dep('img')
>>> ivals.shape
(65536,)
- get_covar()[source] [edit on github]
Return the covariance estimation object.
- Returns:
covar
- Return type:
a sherpa.estmethods.Covariance instance
See also
covar
Estimate parameter confidence intervals using the covariance method.
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The attributes of the covariance object include:
eps
The precision of the calculated limits. The default is 0.01.
maxiters
The maximum number of iterations allowed before stopping for that parameter. The default is 200.
sigma
The number of sigma for the error limits being calculated. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (True), or can parameter values go out all the way to the hard limits if necessary (False)? The default is False.
Examples
>>> print(get_covar())
name = covariance
sigma = 1
maxiters = 200
soft_limits = False
eps = 0.01
Change the sigma field to 1.9.
>>> cv = get_covar()
>>> cv.sigma = 1.9
- get_covar_opt(name=None)[source] [edit on github]
Return one or all of the options for the covariance method.
This is a helper function since the options can also be read directly using the object returned by get_covar.
- Parameters:
name (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns:
value
- Return type:
dictionary or value
- Raises:
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
covar
Estimate parameter confidence intervals using the covariance method.
get_covar
Return the covariance estimation object.
set_covar_opt
Set an option of the covar estimation object.
Examples
>>> get_covar_opt('sigma')
1
>>> copts = get_covar_opt()
>>> copts['sigma']
1
- get_covar_results()[source] [edit on github]
Return the results of the last covar run.
- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
covar
call has been made.
See also
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘covariance’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
There is also an extra_output field which is used to return the covariance matrix.
Examples
>>> res = get_covar_results()
>>> print(res)
datasets = (1,)
methodname = covariance
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('bgnd.c0',)
parvals = (10.228675427602724,)
parmins = (-2.4896739438296795,)
parmaxes = (2.4896739438296795,)
nfits = 0
In this case, of a single parameter, the covariance matrix is just the variance of the parameter:
>>> res.extra_output
array([[ 6.19847635]])
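When several parameters are free, extra_output holds the full covariance matrix; a hypothetical sketch converting it to a correlation matrix with NumPy (the NumPy usage is an assumption, not part of the Sherpa API):
>>> import numpy as np
>>> cov = get_covar_results().extra_output
>>> err = np.sqrt(np.diag(cov))      # one-sigma errors
>>> corr = cov / np.outer(err, err)  # correlation matrix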
- get_covariance_results() [edit on github]
Return the results of the last covar run.
- Returns:
results
- Return type:
sherpa.fit.ErrorEstResults object
- Raises:
sherpa.utils.err.SessionErr – If no
covar
call has been made.
See also
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘covariance’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
There is also an extra_output field which is used to return the covariance matrix.
Examples
>>> res = get_covar_results()
>>> print(res)
datasets = (1,)
methodname = covariance
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('bgnd.c0',)
parvals = (10.228675427602724,)
parmins = (-2.4896739438296795,)
parmaxes = (2.4896739438296795,)
nfits = 0
In this case, of a single parameter, the covariance matrix is just the variance of the parameter:
>>> res.extra_output
array([[ 6.19847635]])
- get_data(id: int | str | None = None) Data [source] [edit on github]
Return the data set by identifier.
The object returned by the call can be used to query and change properties of the data set.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
instance – The data instance.
- Return type:
a sherpa.data.Data instance
- Raises:
sherpa.utils.err.IdentifierErr – No data has been loaded for this data set.
See also
copy_data
Copy a data set to a new identifier.
delete_data
Delete a data set by identifier.
load_data
Create a data set from a file.
set_data
Set a data set.
Examples
>>> d = get_data()
>>> dimg = get_data('img')
>>> load_arrays('tst', [10, 20, 30], [5.4, 2.3, 9.8])
>>> print(get_data('tst'))
name      =
x         = Int64[3]
y         = Float64[3]
staterror = None
syserror  = None
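Since the returned object is a live reference, changing it updates the session; a hypothetical sketch attaching Poisson-style errors through the staterror attribute (the NumPy usage is an assumption):
>>> import numpy as np
>>> d = get_data('tst')
>>> d.staterror = np.sqrt(np.asarray(d.y))  # per-point errors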
- get_data_contour(id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by contour_data.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to contour_data (or get_data_contour) are returned, otherwise the data is re-generated.
- Returns:
data – The y attribute contains the data values and the x0 and x1 arrays contain the corresponding coordinate values, as one-dimensional arrays.
- Return type:
a sherpa.plot.DataContour instance
- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_data_image
Return the data used by image_data.
contour_data
Contour the values of an image data set.
image_data
Display a data set in the image viewer.
Examples
Return the data for the default data set:
>>> dinfo = get_data_contour()
- get_data_contour_prefs()[source] [edit on github]
Return the preferences for contour_data.
- Returns:
prefs – Changing the values of this dictionary will change any new contour plots. The default is an empty dictionary.
- Return type:
dict
Notes
The meaning of the fields depends on the chosen plot backend. A value of None (or not set) means to use the default value for that attribute, unless indicated otherwise.
alpha
The transparency value used to draw the contours, where 0 is fully transparent and 1 is fully opaque.
colors
The colors to draw the contours.
linewidths
What thickness of line to draw the contours.
xlog
Should the X axis be drawn with a logarithmic scale? The default is False.
ylog
Should the Y axis be drawn with a logarithmic scale? The default is False.
Examples
Change the contours to be drawn in 'green':
>>> contour_data()
>>> prefs = get_data_contour_prefs()
>>> prefs['colors'] = 'green'
>>> contour_data()
- get_data_image(id: int | str | None = None)[source] [edit on github]
Return the data used by image_data.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
data_img – The y attribute contains the data values as a 2D NumPy array.
- Return type:
a sherpa.image.DataImage instance
- Raises:
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
contour_data
Contour the values of an image data set.
image_data
Display a data set in the image viewer.
Examples
Return the image data for the default data set:
>>> dinfo = get_data_image()
>>> dinfo.y.shape
(150, 175)
- get_data_plot(id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_data.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_data (or get_data_plot) are returned, otherwise the data is re-generated.
- Returns:
data – An object representing the data used to create the plot by plot_data. The relationship between the returned values and the values in the data set depends on the data type. For example PHA data are plotted in units controlled by sherpa.astro.ui.set_analysis, but are stored as channels and counts, and may have been grouped and the background estimate removed.
- Return type:
a sherpa.plot.DataPlot instance
See also
get_data_plot_prefs
Return the preferences for plot_data.
get_default_id
Return the default data set identifier.
plot_data
Plot the data values.
- get_data_plot_prefs(id: int | str | None = None)[source] [edit on github]
Return the preferences for plot_data.
The plot preferences may depend on the data set, so the data set identifier can be given as an optional argument.
Changed in version 4.12.2: The id argument has been added.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
- Returns:
prefs – Changing the values of this dictionary will change any new data plots. This dictionary will be empty if no plot backend is available.
- Return type:
dict
See also
get_plot_prefs, plot_data, set_xlinear, set_xlog, set_ylinear, set_ylog
Notes
The meaning of the fields depends on the chosen plot backend. A value of None means to use the default value for that attribute, unless indicated otherwise. These preferences are used by the following commands: plot_data, plot_bkg, plot_ratio, and the "fit" variants, such as plot_fit, plot_fit_resid, and plot_bkg_fit.
The following preferences are recognized by the matplotlib backend:
alpha
The transparency value used to draw the line or symbol, where 0 is fully transparent and 1 is fully opaque.
barsabove
The barsabove argument for the matplotlib errorbar function.
capsize
The capsize argument for the matplotlib errorbar function.
color
The color to use (will be over-ridden by more-specific options below). The default is None.
ecolor
The color to draw error bars. The default is None.
linestyle
How should the line connecting the data points be drawn. The default is 'None', which means no line is drawn.
marker
What style is used for the symbols. The default is '.' which indicates a point.
markerfacecolor
What color to draw the symbol representing the data points. The default is None.
markersize
What size is the symbol drawn. The default is None.
xerrorbars
Should error bars be drawn for the X axis. The default is False.
xlog
Should the X axis be drawn with a logarithmic scale? The default is False. This field can also be changed with the set_xlog and set_xlinear functions.
yerrorbars
Should error bars be drawn for the Y axis. The default is True.
ylog
Should the Y axis be drawn with a logarithmic scale? The default is False. This field can also be changed with the set_ylog and set_ylinear functions.
Examples
After these commands, any data plot will use a green symbol and not display Y error bars.
>>> prefs = get_data_plot_prefs()
>>> prefs['color'] = 'green'
>>> prefs['yerrorbars'] = False
- get_default_id() int | str [source] [edit on github]
Return the default data set identifier.
The Sherpa data id ties data, model, fit, and plotting information into a data set easily referenced by id. The default identifier, used by many commands, is returned by this command and can be changed by set_default_id.
- Returns:
id – The default data set identifier used by certain Sherpa functions when an identifier is not given, or set to None.
- Return type:
int or str
See also
list_data_ids
List the identifiers for the loaded data sets.
set_default_id
Set the default data set identifier.
Notes
The default Sherpa data set identifier is the integer 1.
Examples
Display the default identifier:
>>> print(get_default_id())
Store the default identifier and use it as an argument to call another Sherpa routine:
>>> defid = get_default_id()
>>> load_arrays(defid, x, y)
- get_delchi_plot(id: int | str | None = None, recalc=True)[source] [edit on github]
Return the data used by plot_delchi.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_delchi (or get_delchi_plot) are returned, otherwise the data is re-generated.
- Returns:
resid_data
- Return type:
a sherpa.plot.DelchiPlot instance
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_ratio_plot
Return the data used by plot_ratio.
get_resid_plot
Return the data used by plot_resid.
plot_delchi
Plot the ratio of residuals to error for a data set.
Examples
Return the residual data, measured in units of the error, for the default data set:
>>> rplot = get_delchi_plot()
>>> np.min(rplot.y)
-2.85648373819671875
>>> np.max(rplot.y)
2.89477053577520982
Display the contents of the residuals plot for data set 2:
>>> print(get_delchi_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_delchi_plot('jet')
>>> r2 = get_delchi_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_dep(id: int | str | None = None, filter=False, bkg_id: int | str | None = None)[source] [edit on github]
Return the dependent axis of a data set.
This function returns the data values (the dependent axis) for each point or pixel in the data set.
- Parameters:
id (int, str, or None, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int, str, or None, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns:
axis – The dependent axis values. The model estimate is compared to these values during fitting. For PHA data sets, the return array will match the grouping scheme applied to the data set. This array is one-dimensional, even for two dimensional (e.g. image) data.
- Return type:
array
- Raises:
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_rate
Return the count rate of a PHA data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], Data1D)
>>> get_dep()
array([4, 5, 9])
>>> x0 = [10, 15, 12, 19]
>>> x1 = [12, 14, 10, 17]
>>> y = [4, 5, 9, -2]
>>> load_arrays(2, x0, x1, y, Data2D)
>>> get_dep(2)
array([4, 5, 9, -2])
If the filter flag is set then the return will be limited to the data that is used in the fit:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> ignore_id(1, 17, None)
>>> get_dep()
array([4, 5, 9])
>>> get_dep(filter=True)
array([4, 5])
An example with a PHA data set named ‘spec’:
>>> notice_id('spec', 0.5, 7)
>>> yall = get_dep('spec')
>>> yfilt = get_dep('spec', filter=True)
>>> yall.size
1024
>>> yfilt.size
446
For images, the data is returned as a one-dimensional array:
>>> load_image('img', 'image.fits')
>>> ivals = get_dep('img')
>>> ivals.shape
(65536,)
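To recover the two-dimensional layout, the flattened array can be reshaped; a sketch assuming the image data object exposes its dimensions through the shape attribute:
>>> dimg = get_data('img')
>>> img2d = get_dep('img').reshape(dimg.shape)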
- get_dims(id: int | str | None = None, filter=False)[source] [edit on github]
Return the dimensions of the data set.
- Parameters:
id (int, str, or None, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – If True then apply any filter to the data set before returning the dimensions. The default is False.
- Returns:
dims
- Return type:
a tuple of int
See also
ignore
Exclude data from the fit.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice
Include data in the fit.
sherpa.astro.ui.notice2d
Include a spatial region of an image.
Examples
Display the dimensions for the default data set:
>>> print(get_dims())
Find the number of bins in dataset ‘a2543’ without and with any filters applied to it:
>>> nall = get_dims('a2543')
>>> nfilt = get_dims('a2543', filter=True)
- get_draws(id: int | str | None = None, otherids: Sequence[int | str] = (), niter=1000, covar_matrix=None)[source] [edit on github]
Run the pyBLoCXS MCMC algorithm.
The function runs a Markov Chain Monte Carlo (MCMC) algorithm designed to carry out Bayesian Low-Count X-ray Spectral (BLoCXS) analysis. It explores the model parameter space at the suspected statistic minimum (i.e. after using fit). The return values include the statistic value, parameter values, and an acceptance flag indicating whether the row represents a jump from the current location or not. For more information see the sherpa.sim module and the reference given below.
- Parameters:
id (int, str, or None, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
niter (int, optional) – The number of draws to use. The default is 1000.
covar_matrix (2D array, optional) – The covariance matrix to use. If None then the result from get_covar_results().extra_output is used.
- Returns:
The results of the MCMC chain. The stats and accept arrays contain niter+1 elements, with the first row being the starting values. The params array has (nparams, niter+1) elements, where nparams is the number of free parameters in the model expression, and the first column contains the values that the chain starts at. The accept array contains boolean values, indicating whether the jump, or step, was accepted (True), so the parameter values and statistic change, or it wasn't, in which case there is no change to the previous row. The sherpa.utils.get_error_estimates routine can be used to calculate the credible one-sigma interval from the params array.
- Return type:
stats, accept, params
See also
covar, fit, get_sampler, plot_cdf, plot_pdf, plot_scatter, plot_trace, set_prior, set_rng, set_sampler
Notes
The chain is run using fit information associated with the specified data set, or sets, the currently set sampler (set_sampler) and parameter priors (set_prior), for a specified number of iterations. The model should have been fit to find the best-fit parameters, and covar called, before running get_draws. The results from get_draws are used to estimate the parameter distributions.
The set_rng routine is used to control how the random numbers are generated.
References
van Dyk, D. A., Connors, A., Kashyap, V. L., & Siemiginowska, A. 2001, "Analysis of Energy Spectra with Low Photon Counts via Bayesian Posterior Simulation", ApJ, 548, 224.
Examples
Fit a source and then run a chain to investigate the parameter distributions. The distribution of the stats values created by the chain is then displayed, using plot_trace, and the parameter distributions for the first two thawed parameters are displayed: the first as a cumulative distribution using plot_cdf and the second as a probability distribution using plot_pdf. Finally the acceptance fraction (number of draws where the chain moved) is displayed. Note that in a full analysis session a burn-in period would normally be removed from the chain before using the results.
>>> fit()
>>> covar()
>>> stats, accept, params = get_draws(1, niter=1e4)
>>> plot_trace(stats, name='stat')
>>> names = [p.fullname for p in get_source().pars if not p.frozen]
>>> plot_cdf(params[0,:], name=names[0], xlabel=names[0])
>>> plot_pdf(params[1,:], name=names[1], xlabel=names[1])
>>> accept[:-1].sum() * 1.0 / (len(accept) - 1)
0.4287
The following runs the chain on multiple data sets, with identifiers ‘core’, ‘jet1’, and ‘jet2’:
>>> stats, accept, params = get_draws('core', ['jet1', 'jet2'], niter=1e4)
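As a follow-on sketch (not from the original documentation), a burn-in period can be dropped before summarizing a parameter with sherpa.utils.get_error_estimates; the burn-in length used here is an arbitrary assumption:
>>> from sherpa.utils import get_error_estimates
>>> burn = 1000  # chosen by inspecting the trace
>>> pvals = params[0, burn:]  # first thawed parameter, after burn-in
>>> med, lo, hi = get_error_estimates(pvals)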
- get_energy_flux_hist(lo=None, hi=None, id: int | str | None = None, num=7500, bins=75, correlated=False, numcores=None, bkg_id: int | str | None = None, scales=None, model=None, otherids: Sequence[int | str] = (), recalc=True, clip='hard')[source] [edit on github]
Return the data displayed by plot_energy_flux.
The get_energy_flux_hist() function calculates a histogram of simulated energy flux values representing the energy flux probability distribution for a model component, accounting for the errors on the model parameters.
Changed in version 4.12.2: The scales parameter is no longer ignored when set and the model and otherids parameters have been added. The clip argument has been added.
- Parameters:
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string or None, optional) – The identifier of the data set to use. If None, the default value, then all datasets with associated models are used to calculate the errors and the model evaluation is done using the default dataset.
num (int, optional) – The number of samples to create. The default is 7500.
bins (int, optional) – The number of bins to use for the histogram.
correlated (bool, optional) – If True (the default is False) then scales is the full covariance matrix, otherwise it is just a 1D array containing the variances of the parameters (the diagonal elements of the covariance matrix).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int, str, or None, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The size and shape of the array depends on the number of free parameters in the fit (n) and the value of the correlated parameter. When the parameter is True, scales must be given the covariance matrix for the free parameters (a n by n matrix that matches the parameter ordering used by Sherpa). For un-correlated parameters the covariance matrix can be used, or a one-dimensional array of n elements can be used, giving the width (specified as the sigma value of a normal distribution) for each parameter (e.g. the square root of the diagonal elements of the covariance matrix). If the scales parameter is not given then the covariance matrix is evaluated for the current model and best-fit parameters.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples. The model must be part of the source expression.
otherids (sequence of integer and string ids, optional) – The list of other datasets that should be included when calculating the errors to draw values from.