Session
- class sherpa.astro.ui.utils.Session
Bases: Session
Methods Summary
- add_model(modelclass[, args, kwargs]): Create a user-defined model class.
- add_user_pars(modelname, parnames[, ...]): Add parameter information to a user model.
- calc_chisqr([id]): Calculate the per-bin chi-squared statistic.
- calc_data_sum([lo, hi, id, bkg_id]): Sum up the data values over a pass band.
- calc_data_sum2d([reg, id]): Sum up the data values of a 2D data set.
- calc_energy_flux([lo, hi, id, bkg_id, model]): Integrate the unconvolved source model over a pass band.
- calc_kcorr(z, obslo, obshi[, restlo, ...]): Calculate the K correction for a model.
- calc_model_sum([lo, hi, id, bkg_id]): Sum up the fitted model over a pass band.
- calc_model_sum2d([reg, id]): Sum up the convolved model for a 2D data set.
- calc_photon_flux([lo, hi, id, bkg_id, model]): Integrate the unconvolved source model over a pass band.
- calc_source_sum([lo, hi, id, bkg_id]): Sum up the source model over a pass band.
- calc_source_sum2d([reg, id]): Sum up the unconvolved model for a 2D data set.
- calc_stat([id]): Calculate the fit statistic for a data set.
- calc_stat_info(): Display the statistic values for the current models.
- clean(): Clear out the current Sherpa session.
- conf(*args): Estimate parameter confidence intervals using the confidence method.
- confidence(*args): Estimate parameter confidence intervals using the confidence method.
- contour(*args, **kwargs): Create a contour plot for an image data set.
- contour_data([id, replot, overcontour]): Contour the values of an image data set.
- contour_fit([id, replot, overcontour]): Contour the fit to a data set.
- contour_fit_resid([id, replot, overcontour]): Contour the fit and the residuals to a data set.
- contour_kernel([id, replot, overcontour]): Contour the kernel applied to the model of an image data set.
- contour_model([id, replot, overcontour]): Create a contour plot of the model.
- contour_psf([id, replot, overcontour]): Contour the PSF applied to the model of an image data set.
- contour_ratio([id, replot, overcontour]): Contour the ratio of data to model.
- contour_resid([id, replot, overcontour]): Contour the residuals of the fit.
- contour_source([id, replot, overcontour]): Create a contour plot of the unconvolved spatial model.
- copy_data(fromid, toid): Copy a data set, creating a new identifier.
- covar(*args): Estimate parameter confidence intervals using the covariance method.
- covariance(*args): Estimate parameter confidence intervals using the covariance method.
- create_arf(elo, ehi[, specresp, exposure, ...]): Create an ARF.
- create_model_component([typename, name]): Create a model component.
- create_rmf(rmflo, rmfhi[, startchan, e_min, ...]): Create an RMF.
- dataspace1d(start, stop[, step, numbins, ...]): Create the independent axis for a 1D data set.
- dataspace2d(dims[, id, dstype]): Create the independent axis for a 2D data set.
- delete_bkg_model([id, bkg_id]): Delete the background model expression for a data set.
- delete_data([id]): Delete a data set by identifier.
- delete_model([id]): Delete the model expression for a data set.
- delete_model_component(name): Delete a model component.
- delete_pileup_model([id]): Delete the pile up model for a data set.
- delete_psf([id]): Delete the PSF model for a data set.
- eqwidth(src, combo[, id, lo, hi, bkg_id, ...]): Calculate the equivalent width of an emission or absorption line.
- fake([id, method]): Simulate a data set.
- fake_pha(id, arf, rmf, exposure[, backscal, ...]): Simulate a PHA data set from a model.
- fit([id]): Fit a model to one or more data sets.
- fit_bkg([id]): Fit a model to one or more background PHA data sets.
- freeze(*args): Fix model parameters so they are not changed by a fit.
- get_analysis([id]): Return the units used when fitting spectral data.
- get_areascal([id, bkg_id]): Return the fractional area factor of a PHA data set.
- get_arf([id, resp_id, bkg_id]): Return the ARF associated with a PHA data set.
- get_arf_plot([id, resp_id, recalc]): Return the data used by plot_arf.
- get_axes([id, bkg_id]): Return information about the independent axes of a data set.
- get_backscal([id, bkg_id]): Return the BACKSCAL scaling of a PHA data set.
- get_bkg([id, bkg_id]): Return the background for a PHA data set.
- get_bkg_arf([id]): Return the background ARF associated with a PHA data set.
- get_bkg_chisqr_plot([id, bkg_id, recalc]): Return the data used by plot_bkg_chisqr.
- get_bkg_delchi_plot([id, bkg_id, recalc]): Return the data used by plot_bkg_delchi.
- get_bkg_fit_plot([id, bkg_id, recalc]): Return the data used by plot_bkg_fit.
- get_bkg_model([id, bkg_id]): Return the model expression for the background of a PHA data set.
- get_bkg_model_plot([id, bkg_id, recalc]): Return the data used by plot_bkg_model.
- get_bkg_plot([id, bkg_id, recalc]): Return the data used by plot_bkg.
- get_bkg_ratio_plot([id, bkg_id, recalc]): Return the data used by plot_bkg_ratio.
- get_bkg_resid_plot([id, bkg_id, recalc]): Return the data used by plot_bkg_resid.
- get_bkg_rmf([id]): Return the background RMF associated with a PHA data set.
- get_bkg_scale([id, bkg_id, units, group, filter]): Return the background scaling factor for a background data set.
- get_bkg_source([id, bkg_id]): Return the model expression for the background of a PHA data set.
- get_bkg_source_plot([id, lo, hi, bkg_id, recalc]): Return the data used by plot_bkg_source.
- get_cdf_plot(): Return the data used to plot the last CDF.
- get_chisqr_plot([id, recalc]): Return the data used by plot_chisqr.
- get_conf(): Return the confidence-interval estimation object.
- get_conf_opt([name]): Return one or all of the options for the confidence interval method.
- get_conf_results(): Return the results of the last conf run.
- get_confidence_results(): Return the results of the last conf run.
- get_coord([id]): Get the coordinate system used for image analysis.
- get_counts([id, filter, bkg_id]): Return the dependent axis of a data set.
- get_covar(): Return the covariance estimation object.
- get_covar_opt([name]): Return one or all of the options for the covariance method.
- get_covar_results(): Return the results of the last covar run.
- get_covariance_results(): Return the results of the last covar run.
- get_data([id]): Return the data set by identifier.
- get_data_contour([id, recalc]): Return the data used by contour_data.
- get_data_contour_prefs(): Return the preferences for contour_data.
- get_data_image([id]): Return the data used by image_data.
- get_data_plot([id, recalc]): Return the data used by plot_data.
- get_data_plot_prefs([id]): Return the preferences for plot_data.
- get_default_id(): Return the default data set identifier.
- get_delchi_plot([id, recalc]): Return the data used by plot_delchi.
- get_dep([id, filter, bkg_id]): Return the dependent axis of a data set.
- get_dims([id, filter]): Return the dimensions of the data set.
- get_draws([id, otherids, niter, covar_matrix]): Run the pyBLoCXS MCMC algorithm.
- get_energy_flux_hist([lo, hi, id, num, ...]): Return the data displayed by plot_energy_flux.
- get_error([id, filter, bkg_id]): Return the errors on the dependent axis of a data set.
- get_exposure([id, bkg_id]): Return the exposure time of a PHA data set.
- get_filter([id]): Return the filter expression for a data set.
- get_fit_contour([id, recalc]): Return the data used by contour_fit.
- get_fit_plot([id, recalc]): Return the data used to create the fit plot.
- get_fit_results(): Return the results of the last fit.
- get_functions(): Return the functions provided by Sherpa.
- get_grouping([id, bkg_id]): Return the grouping array for a PHA data set.
- get_indep([id, filter, bkg_id]): Return the independent axes of a data set.
- get_int_proj([par, id, otherids, recalc, ...]): Return the interval-projection object.
- get_int_unc([par, id, otherids, recalc, ...]): Return the interval-uncertainty object.
- get_iter_method_name(): Return the name of the iterative fitting scheme.
- get_iter_method_opt([optname]): Return one or all options for the iterative-fitting scheme.
- get_kernel_contour([id, recalc]): Return the data used by contour_kernel.
- get_kernel_image([id]): Return the data used by image_kernel.
- get_kernel_plot([id, recalc]): Return the data used by plot_kernel.
- get_method([name]): Return an optimization method.
- get_method_name(): Return the name of the current Sherpa optimization method.
- get_method_opt([optname]): Return one or all of the options for the current optimization method.
- get_model([id]): Return the model expression for a data set.
- get_model_autoassign_func(): Return the method used to create model component identifiers.
- get_model_component(name): Returns a model component given its name.
- get_model_component_image(id[, model]): Return the data used by image_model_component.
- get_model_component_plot(id[, model, recalc]): Return the data used to create the model-component plot.
- get_model_contour([id, recalc]): Return the data used by contour_model.
- get_model_contour_prefs(): Return the preferences for contour_model.
- get_model_image([id]): Return the data used by image_model.
- get_model_pars(model): Return the names of the parameters of a model.
- get_model_plot([id, recalc]): Return the data used to create the model plot.
- get_model_plot_prefs([id]): Return the preferences for plot_model.
- get_model_type(model): Describe a model expression.
- get_num_par([id]): Return the number of parameters in a model expression.
- get_num_par_frozen([id]): Return the number of frozen parameters in a model expression.
- get_num_par_thawed([id]): Return the number of thawed parameters in a model expression.
- get_order_plot([id, orders, recalc]): Return the data used by plot_order.
- get_par(par): Return a parameter of a model component.
- get_pdf_plot(): Return the data used to plot the last PDF.
- get_photon_flux_hist([lo, hi, id, num, ...]): Return the data displayed by plot_photon_flux.
- get_pileup_model([id]): Return the pile up model for a data set.
- get_prior(par): Return the prior function for a parameter (MCMC).
- get_proj(): Return the confidence-interval estimation object.
- get_proj_opt([name]): Return one or all of the options for the confidence interval method.
- get_proj_results(): Return the results of the last proj run.
- get_projection_results(): Return the results of the last proj run.
- get_psf([id]): Return the PSF model defined for a data set.
- get_psf_contour([id, recalc]): Return the data used by contour_psf.
- get_psf_image([id]): Return the data used by image_psf.
- get_psf_plot([id, recalc]): Return the data used by plot_psf.
- get_pvalue_plot([null_model, alt_model, ...]): Return the data used by plot_pvalue.
- get_pvalue_results(): Return the data calculated by the last plot_pvalue call.
- get_quality([id, bkg_id]): Return the quality flags for a PHA data set.
- get_rate([id, filter, bkg_id]): Return the count rate of a PHA data set.
- get_ratio_contour([id, recalc]): Return the data used by contour_ratio.
- get_ratio_image([id]): Return the data used by image_ratio.
- get_ratio_plot([id, recalc]): Return the data used by plot_ratio.
- get_reg_proj([par0, par1, id, otherids, ...]): Return the region-projection object.
- get_reg_unc([par0, par1, id, otherids, ...]): Return the region-uncertainty object.
- get_resid_contour([id, recalc]): Return the data used by contour_resid.
- get_resid_image([id]): Return the data used by image_resid.
- get_resid_plot([id, recalc]): Return the data used by plot_resid.
- get_response([id, bkg_id]): Return the response information applied to a PHA data set.
- get_rmf([id, resp_id, bkg_id]): Return the RMF associated with a PHA data set.
- get_sampler(): Return the current MCMC sampler options.
- get_sampler_name(): Return the name of the current MCMC sampler.
- get_sampler_opt(opt): Return an option of the current MCMC sampler.
- get_scatter_plot(): Return the data used to plot the last scatter plot.
- get_source([id]): Return the source model expression for a data set.
- get_source_component_image(id[, model]): Return the data used by image_source_component.
- get_source_component_plot(id[, model, recalc]): Return the data used by plot_source_component.
- get_source_contour([id, recalc]): Return the data used by contour_source.
- get_source_image([id]): Return the data used by image_source.
- get_source_plot([id, lo, hi, recalc]): Return the data used by plot_source.
- get_specresp([id, filter, bkg_id]): Return the effective area values for a PHA data set.
- get_split_plot(): Return the plot attributes for displays with multiple plots.
- get_stat([name]): Return the fit statistic.
- get_stat_info(): Return the statistic values for the current models.
- get_stat_name(): Return the name of the current fit statistic.
- get_staterror([id, filter, bkg_id]): Return the statistical error on the dependent axis of a data set.
- get_syserror([id, filter, bkg_id]): Return the systematic error on the dependent axis of a data set.
- get_trace_plot(): Return the data used to plot the last trace.
- group([id, bkg_id]): Turn on the grouping for a PHA data set.
- group_adapt(id[, min, bkg_id, maxLength, ...]): Adaptively group to a minimum number of counts.
- group_adapt_snr(id[, min, bkg_id, ...]): Adaptively group to a minimum signal-to-noise ratio.
- group_bins(id[, num, bkg_id, tabStops]): Group into a fixed number of bins.
- group_counts(id[, num, bkg_id, maxLength, ...]): Group into a minimum number of counts per bin.
- group_snr(id[, snr, bkg_id, maxLength, ...]): Group into a minimum signal-to-noise ratio.
- group_width(id[, num, bkg_id, tabStops]): Group into a fixed bin width.
- guess([id, model, limits, values]): Estimate the parameter values and ranges given the loaded data.
- ignore([lo, hi]): Exclude data from the fit.
- ignore2d([val]): Exclude a spatial region from all data sets.
- ignore2d_id(ids[, val]): Exclude a spatial region from a data set.
- ignore2d_image([ids]): Exclude pixels using the region defined in the image viewer.
- ignore_bad([id, bkg_id]): Exclude channels marked as bad in a PHA data set.
- ignore_id(ids[, lo, hi]): Exclude data from the fit for a data set.
- image_close(): Close the image viewer.
- image_data([id, newframe, tile]): Display a data set in the image viewer.
- image_deleteframes(): Delete all the frames open in the image viewer.
- image_fit([id, newframe, tile, deleteframes]): Display the data, model, and residuals for a data set in the image viewer.
- image_getregion([coord]): Return the region defined in the image viewer.
- image_kernel([id, newframe, tile]): Display the 2D kernel for a data set in the image viewer.
- image_model([id, newframe, tile]): Display the model for a data set in the image viewer.
- image_model_component(id[, model, newframe, ...]): Display a component of the model in the image viewer.
- image_open(): Start the image viewer.
- image_psf([id, newframe, tile]): Display the 2D PSF model for a data set in the image viewer.
- image_ratio([id, newframe, tile]): Display the ratio (data/model) for a data set in the image viewer.
- image_resid([id, newframe, tile]): Display the residuals (data - model) for a data set in the image viewer.
- image_setregion(reg[, coord]): Set the region to display in the image viewer.
- image_source([id, newframe, tile]): Display the source expression for a data set in the image viewer.
- image_source_component(id[, model, ...]): Display a component of the source expression in the image viewer.
- image_xpaget(arg): Return the result of an XPA call to the image viewer.
- image_xpaset(arg[, data]): Return the result of an XPA call to the image viewer.
- int_proj(par[, id, otherids, replot, fast, ...]): Calculate and plot the fit statistic versus fit parameter value.
- int_unc(par[, id, otherids, replot, min, ...]): Calculate and plot the fit statistic versus fit parameter value.
- link(par, val): Link a parameter to a value.
- list_bkg_ids([id]): List all the background identifiers for a data set.
- list_data_ids(): List the identifiers for the loaded data sets.
- list_functions([outfile, clobber]): Display the functions provided by Sherpa.
- list_iter_methods(): List the iterative fitting schemes.
- list_methods(): List the optimization methods.
- list_model_components(): List the names of all the model components.
- list_model_ids(): List of all the data sets with a source expression.
- list_models([show]): List the available model types.
- list_pileup_model_ids(): List of all the data sets with a pile up model.
- list_priors(): Return the priors set for model parameters, if any.
- list_psf_ids(): List of all the data sets with a PSF.
- list_response_ids([id, bkg_id]): List all the response identifiers of a data set.
- list_samplers(): List the MCMC samplers.
- list_stats(): List the fit statistics.
- load_arf(id[, arg, resp_id, bkg_id]): Load an ARF from a file and add it to a PHA data set.
- load_arrays(id, *args): Create a data set from array values.
- load_ascii(id[, filename, ncols, colkeys, ...]): Load an ASCII file as a data set.
- load_ascii_with_errors(id[, filename, ...]): Load an ASCII file with asymmetric errors as a data set.
- load_bkg(id[, arg, use_errors, bkg_id]): Load the background from a file and add it to a PHA data set.
- load_bkg_arf(id[, arg]): Load an ARF from a file and add it to the background of a PHA data set.
- load_bkg_rmf(id[, arg]): Load a RMF from a file and add it to the background of a PHA data set.
- load_conv(modelname, filename_or_model, ...): Load a 1D convolution model.
- load_data(id[, filename]): Load a data set from a file.
- load_filter(id[, filename, bkg_id, ignore, ...]): Load the filter array from a file and add to a data set.
- load_grouping(id[, filename, bkg_id]): Load the grouping scheme from a file and add to a PHA data set.
- load_image(id[, arg, coord, dstype]): Load an image as a data set.
- load_multi_arfs(id, filenames[, resp_ids]): Load multiple ARFs for a PHA data set.
- load_multi_rmfs(id, filenames[, resp_ids]): Load multiple RMFs for a PHA data set.
- load_pha(id[, arg, use_errors]): Load a PHA data set.
- load_psf(modelname, filename_or_model, ...): Create a PSF model.
- load_quality(id[, filename, bkg_id]): Load the quality array from a file and add to a PHA data set.
- load_rmf(id[, arg, resp_id, bkg_id]): Load a RMF from a file and add it to a PHA data set.
- load_staterror(id[, filename, bkg_id]): Load the statistical errors from a file.
- load_syserror(id[, filename, bkg_id]): Load the systematic errors from a file.
- load_table(id[, filename, ncols, colkeys, ...]): Load a FITS binary file as a data set.
- load_table_model(modelname, filename[, method]): Load tabular or image data and use it as a model component.
- load_template_interpolator(name, ...): Set the template interpolation scheme.
- load_template_model(modelname, templatefile): Load a set of templates and use it as a model component.
- load_user_model(func, modelname[, filename]): Create a user-defined model.
- load_user_stat(statname, calc_stat_func[, ...]): Create a user-defined statistic.
- load_xstable_model(modelname, filename[, etable]): Load a XSPEC table model.
- normal_sample([num, sigma, correlate, id, ...]): Sample the fit statistic by taking the parameter values from a normal distribution.
- notice([lo, hi]): Include data in the fit.
- notice2d([val]): Include a spatial region of all data sets.
- notice2d_id(ids[, val]): Include a spatial region of a data set.
- notice2d_image([ids]): Include pixels using the region defined in the image viewer.
- notice_id(ids[, lo, hi]): Include data from the fit for a data set.
- pack_image([id]): Convert a data set into an image structure.
- pack_pha([id]): Convert a PHA data set into a file structure.
- pack_table([id]): Convert a data set into a table structure.
- paramprompt([val]): Should the user be asked for the parameter values when creating a model?
- plot(*args, **kwargs): Create one or more plot types.
- plot_arf([id, resp_id, replot, overplot, ...]): Plot the ARF associated with a data set.
- plot_bkg([id, bkg_id, replot, overplot, ...]): Plot the background values for a PHA data set.
- plot_bkg_chisqr([id, bkg_id, replot, ...]): Plot the chi-squared value for each point of the background of a PHA data set.
- plot_bkg_delchi([id, bkg_id, replot, ...]): Plot the ratio of residuals to error for the background of a PHA data set.
- plot_bkg_fit([id, bkg_id, replot, overplot, ...]): Plot the fit results (data, model) for the background of a PHA data set.
- plot_bkg_fit_delchi([id, bkg_id, replot, ...]): Plot the fit results, and the residuals, for the background of a PHA data set.
- plot_bkg_fit_ratio([id, bkg_id, replot, ...]): Plot the fit results, and the data/model ratio, for the background of a PHA data set.
- plot_bkg_fit_resid([id, bkg_id, replot, ...]): Plot the fit results, and the residuals, for the background of a PHA data set.
- plot_bkg_model([id, bkg_id, replot, ...]): Plot the model for the background of a PHA data set.
- plot_bkg_ratio([id, bkg_id, replot, ...]): Plot the ratio of data to model values for the background of a PHA data set.
- plot_bkg_resid([id, bkg_id, replot, ...]): Plot the residual (data-model) values for the background of a PHA data set.
- plot_bkg_source([id, lo, hi, bkg_id, ...]): Plot the model expression for the background of a PHA data set.
- plot_cdf(points[, name, xlabel, replot, ...]): Plot the cumulative density function of an array of values.
- plot_chisqr([id, replot, overplot, clearwindow]): Plot the chi-squared value for each point in a data set.
- plot_data([id, replot, overplot, clearwindow]): Plot the data values.
- plot_delchi([id, replot, overplot, clearwindow]): Plot the ratio of residuals to error for a data set.
- plot_energy_flux([lo, hi, id, num, bins, ...]): Display the energy flux distribution.
- plot_fit([id, replot, overplot, clearwindow]): Plot the fit results (data, model) for a data set.
- plot_fit_delchi([id, replot, overplot, ...]): Plot the fit results, and the residuals, for a data set.
- plot_fit_ratio([id, replot, overplot, ...]): Plot the fit results, and the ratio of data to model, for a data set.
- plot_fit_resid([id, replot, overplot, ...]): Plot the fit results, and the residuals, for a data set.
- plot_kernel([id, replot, overplot, clearwindow]): Plot the 1D kernel applied to a data set.
- plot_model([id, replot, overplot, clearwindow]): Plot the model for a data set.
- plot_model_component(id[, model, replot, ...]): Plot a component of the model for a data set.
- plot_order([id, orders, replot, overplot, ...]): Plot the model for a data set convolved by the given response.
- plot_pdf(points[, name, xlabel, bins, ...]): Plot the probability density function of an array of values.
- plot_photon_flux([lo, hi, id, num, bins, ...]): Display the photon flux distribution.
- plot_psf([id, replot, overplot, clearwindow]): Plot the 1D PSF model applied to a data set.
- plot_pvalue(null_model, alt_model[, ...]): Compute and plot a histogram of likelihood ratios by simulating data.
- plot_ratio([id, replot, overplot, clearwindow]): Plot the ratio of data to model for a data set.
- plot_resid([id, replot, overplot, clearwindow]): Plot the residuals (data - model) for a data set.
- plot_scatter(x, y[, name, xlabel, ylabel, ...]): Create a scatter plot.
- plot_source([id, lo, hi, replot, overplot, ...]): Plot the source expression for a data set.
- plot_source_component(id[, model, replot, ...]): Plot a component of the source expression for a data set.
- plot_trace(points[, name, xlabel, replot, ...]): Create a trace plot of row number versus value.
- proj(*args): Estimate parameter confidence intervals using the projection method.
- projection(*args): Estimate parameter confidence intervals using the projection method.
- reg_proj(par0, par1[, id, otherids, replot, ...]): Plot the statistic value as two parameters are varied.
- reg_unc(par0, par1[, id, otherids, replot, ...]): Plot the statistic value as two parameters are varied.
- resample_data([id, niter, seed]): Resample data with asymmetric error bars.
- reset([model, id]): Reset the model parameters to their default settings.
- restore([filename]): Load in a Sherpa session from a file.
- sample_energy_flux([lo, hi, id, num, ...]): Return the energy flux distribution of a model.
- sample_flux([modelcomponent, lo, hi, id, ...]): Return the flux distribution of a model.
- sample_photon_flux([lo, hi, id, num, ...]): Return the photon flux distribution of a model.
- save([filename, clobber]): Save the current Sherpa session to a file.
- save_all([outfile, clobber]): Save the information about the current session to a text file.
- save_arrays(filename, args[, fields, ascii, ...]): Write a list of arrays to a file.
- save_data(id[, filename, bkg_id, ascii, clobber]): Save the data to a file.
- save_delchi(id[, filename, bkg_id, ascii, ...]): Save the ratio of residuals (data-model) to error to a file.
- save_error(id[, filename, bkg_id, ascii, ...]): Save the errors to a file.
- save_filter(id[, filename, bkg_id, ascii, ...]): Save the filter array to a file.
- save_grouping(id[, filename, bkg_id, ascii, ...]): Save the grouping scheme to a file.
- save_image(id[, filename, ascii, clobber]): Save the pixel values of a 2D data set to a file.
- save_model(id[, filename, bkg_id, ascii, ...]): Save the model values to a file.
- save_pha(id[, filename, bkg_id, ascii, clobber]): Save a PHA data set to a file.
- save_quality(id[, filename, bkg_id, ascii, ...]): Save the quality array to a file.
- save_resid(id[, filename, bkg_id, ascii, ...]): Save the residuals (data-model) to a file.
- save_source(id[, filename, bkg_id, ascii, ...]): Save the model values to a file.
- save_staterror(id[, filename, bkg_id, ...]): Save the statistical errors to a file.
- save_syserror(id[, filename, bkg_id, ascii, ...]): Save the systematic errors to a file.
- save_table(id[, filename, ascii, clobber]): Save a data set to a file as a table.
- set_analysis(id[, quantity, type, factor]): Set the units used when fitting and displaying spectral data.
- set_areascal(id[, area, bkg_id]): Change the fractional area factor of a PHA data set.
- set_arf(id[, arf, resp_id, bkg_id]): Set the ARF for use by a PHA data set.
- set_backscal(id[, backscale, bkg_id]): Change the area scaling of a PHA data set.
- set_bkg(id[, bkg, bkg_id]): Set the background for a PHA data set.
- set_bkg_full_model(id[, model, bkg_id]): Define the convolved background model expression for a PHA data set.
- set_bkg_model(id[, model, bkg_id]): Set the background model expression for a PHA data set.
- set_bkg_source(id[, model, bkg_id]): Set the background model expression for a PHA data set.
- set_conf_opt(name, val): Set an option for the confidence interval method.
- set_coord(id[, coord]): Set the coordinate system to use for image analysis.
- set_counts(id[, val, bkg_id]): Set the dependent axis of a data set.
- set_covar_opt(name, val): Set an option for the covariance method.
- set_data(id[, data]): Set a data set.
- set_default_id(id): Set the default data set identifier.
- set_dep(id[, val, bkg_id]): Set the dependent axis of a data set.
- set_exposure(id[, exptime, bkg_id]): Change the exposure time of a PHA data set.
- set_filter(id[, val, bkg_id, ignore]): Set the filter array of a data set.
- set_full_model(id[, model]): Define the convolved model expression for a data set.
- set_grouping(id[, val, bkg_id]): Apply a set of grouping flags to a PHA data set.
- set_iter_method(meth): Set the iterative-fitting scheme used in the fit.
- set_iter_method_opt(optname, val): Set an option for the iterative-fitting scheme.
- set_method(meth): Set the optimization method.
- set_method_opt(optname, val): Set an option for the current optimization method.
- set_model(id[, model]): Set the source model expression for a data set.
- set_model_autoassign_func([func]): Set the method used to create model component identifiers.
- set_par(par[, val, min, max, frozen]): Set the value, limits, or behavior of a model parameter.
- set_pileup_model(id[, model]): Include a model of the Chandra ACIS pile up when fitting PHA data.
- set_prior(par, prior): Set the prior function to use with a parameter.
- set_proj_opt(name, val): Set an option for the projection method.
- set_psf(id[, psf]): Add a PSF model to a data set.
- set_quality(id[, val, bkg_id]): Apply a set of quality flags to a PHA data set.
- set_rmf(id[, rmf, resp_id, bkg_id]): Set the RMF for use by a PHA data set.
- set_sampler(sampler): Set the MCMC sampler.
- set_sampler_opt(opt, value): Set an option for the current MCMC sampler.
- set_source(id[, model]): Set the source model expression for a data set.
- set_stat(stat): Set the statistical method.
- set_staterror(id[, val, fractional, bkg_id]): Set the statistical errors on the dependent axis of a data set.
- set_syserror(id[, val, fractional, bkg_id]): Set the systematic errors on the dependent axis of a data set.
- set_xlinear([plottype]): New plots will display a linear X axis.
- set_xlog([plottype]): New plots will display a logarithmically-scaled X axis.
- set_ylinear([plottype]): New plots will display a linear Y axis.
- set_ylog([plottype]): New plots will display a logarithmically-scaled Y axis.
- show_all([id, outfile, clobber]): Report the current state of the Sherpa session.
- show_bkg([id, bkg_id, outfile, clobber]): Show the details of the PHA background data sets.
- show_bkg_model([id, bkg_id, outfile, clobber]): Display the background model expression used to fit a data set.
- show_bkg_source([id, bkg_id, outfile, clobber]): Display the background model expression for a data set.
- show_conf([outfile, clobber]): Display the results of the last conf evaluation.
- show_covar([outfile, clobber]): Display the results of the last covar evaluation.
- show_data([id, outfile, clobber]): Summarize the available data sets.
- show_filter([id, outfile, clobber]): Show any filters applied to a data set.
- show_fit([outfile, clobber]): Summarize the fit results.
- show_kernel([id, outfile, clobber]): Display any kernel applied to a data set.
- show_method([outfile, clobber]): Display the current optimization method and options.
- show_model([id, outfile, clobber]): Display the model expression used to fit a data set.
- show_proj([outfile, clobber]): Display the results of the last proj evaluation.
- show_psf([id, outfile, clobber]): Display any PSF model applied to a data set.
- show_source([id, outfile, clobber]): Display the source model expression for a data set.
- show_stat([outfile, clobber]): Display the current fit statistic.
- simulfit([id]): Fit a model to one or more data sets.
- subtract([id]): Subtract the background estimate from a data set.
- t_sample([num, dof, id, otherids, numcores]): Sample the fit statistic by taking the parameter values from a Student's t-distribution.
- thaw(*args): Allow model parameters to be varied during a fit.
- ungroup([id, bkg_id]): Turn off the grouping for a PHA data set.
- uniform_sample([num, factor, id, otherids, ...]): Sample the fit statistic by taking the parameter values from a uniform distribution.
- unlink(par): Unlink a parameter value.
- unpack_arf(arg): Create an ARF data structure.
- unpack_arrays(*args): Create a sherpa data object from arrays of data.
- unpack_ascii(filename[, ncols, colkeys, ...]): Unpack an ASCII file into a data structure.
- unpack_bkg(arg[, use_errors]): Create a PHA data structure for a background data set.
- unpack_data(filename, *args, **kwargs): Create a sherpa data object from a file.
- unpack_image(arg[, coord, dstype]): Create an image data structure.
- unpack_pha(arg[, use_errors]): Create a PHA data structure.
- unpack_rmf(arg): Create a RMF data structure.
- unpack_table(filename[, ncols, colkeys, dstype]): Unpack a FITS binary file into a data structure.
- unsubtract([id]): Undo any background subtraction for the data set.
Methods Documentation
- add_model(modelclass, args=(), kwargs={})
Create a user-defined model class.
Create a model from a class. The name of the class can then be used to create model components - e.g. with set_model or create_model_component - as with any existing Sherpa model.
- Parameters
modelclass – A class derived from sherpa.models.model.ArithmeticModel. This class defines the functional form and the parameters of the model.
args – Arguments for the class constructor.
kwargs – Keyword arguments for the class constructor.
See also
create_model_component : Create a model component.
list_models : List the available model types.
load_table_model : Load tabular data and use it as a model component.
load_user_model : Create a user-defined model.
set_model : Set the source model expression for a data set.
Notes
The load_user_model function is designed to make it easy to add a model, but the interface is not the same as the existing models (such as having to call both load_user_model and add_user_pars for each new instance). The add_model function is used to add a model as a Python class, which is more work to set up, but then acts the same way as the existing models.
Examples
The following example creates a model type called "mygauss1d" which will behave exactly the same as the existing "gauss1d" model. Normally the class used with add_model would add new functionality.
>>> from sherpa.models import Gauss1D
>>> class MyGauss1D(Gauss1D):
...     pass
...
>>> add_model(MyGauss1D)
>>> set_source(mygauss1d.g1 + mygauss1d.g2)
- add_user_pars(modelname, parnames, parvals=None, parmins=None, parmaxs=None, parunits=None, parfrozen=None)
Add parameter information to a user model.
- Parameters
modelname (str) – The name of the user model (created by load_user_model).
parnames (array of str) – The names of the parameters. The order of all the parameter arrays must match that expected by the model function (the first argument to load_user_model).
parvals (array of number, optional) – The default values of the parameters. If not given, each parameter is set to 0.
parmins (array of number, optional) – The minimum values of the parameters (hard limit). The default value is -3.40282e+38.
parmaxs (array of number, optional) – The maximum values of the parameters (hard limit). The default value is 3.40282e+38.
parunits (array of str, optional) – The units of the parameters. This is only used in screen output (i.e. it is informational in nature).
parfrozen (array of bool, optional) – Should each parameter be frozen. The default is that all parameters are thawed.
See also
add_model
Create a user-defined model class.
load_user_model
Create a user-defined model.
set_par
Set the value, limits, or behavior of a model parameter.
Notes
The parameters must be specified in the order that the function expects. That is, if the function has two parameters, pars[0]=’slope’ and pars[1]=’y_intercept’, then the call to add_user_pars must use the order [“slope”, “y_intercept”].
Examples
Create a user model for the function profile called “myprof”, which has two parameters called “core” and “ampl”, both of which will start with a value of 0.
>>> load_user_model(profile, "myprof")
>>> add_user_pars("myprof", ["core", "ampl"])
Set the starting values, minimum values, and whether or not the parameter is frozen by default, for the “prof” model:
>>> pnames = ["core", "ampl", "intflag"]
>>> pvals = [10, 200, 1]
>>> pmins = [0.01, 0, 0]
>>> pfreeze = [False, False, True]
>>> add_user_pars("prof", pnames, pvals,
...               parmins=pmins, parfrozen=pfreeze)
- calc_chisqr(id=None, *otherids)[source] [edit on github]
Calculate the per-bin chi-squared statistic.
Evaluate the model for one or more data sets, compare it to the data using the current statistic, and return an array of chi-squared values for each bin. No fitting is done; the current model parameters, and any filters, are used.
- Parameters
- Returns
chisq – The chi-square value for each bin of the data, using the current statistic (as set by set_stat). A value of None is returned if the statistic is not a chi-square distribution.
- Return type
array or None
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
set_stat
Set the statistical method.
Notes
The output array length equals the sum of the array lengths of the requested data sets.
Examples
When called with no arguments, the return value is the chi-squared statistic for each bin in the data sets which have a defined model.
>>> calc_chisqr()
Supplying a specific data set ID to calc_chisqr - such as “1” or “src” - will return the chi-squared statistic array for only that data set.
>>> calc_chisqr(1)
>>> calc_chisqr("src")
Restrict the calculation to just datasets 1 and 3:
>>> calc_chisqr(1, 3)
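The per-bin values returned by calc_chisqr can be reproduced with plain NumPy. The following sketch (using made-up data, model, and error arrays rather than a Sherpa call) shows the calculation for the chi-squared family of statistics: ((data - model) / error)**2 for each bin.

```python
import numpy as np

# Made-up data, model, and Gaussian error arrays for a 5-bin data set.
data = np.array([12.0, 15.0, 9.0, 20.0, 7.0])
model = np.array([10.0, 14.0, 10.0, 18.0, 8.0])
errors = np.array([3.0, 3.5, 3.0, 4.0, 2.5])

# Per-bin chi-squared contribution for each bin.
chisqr = ((data - model) / errors) ** 2

# The overall fit statistic is the sum of the per-bin terms (see calc_stat).
total = chisqr.sum()
```

The returned array has one element per (noticed) bin, which is why the length of the calc_chisqr result is the sum of the data-set lengths when several identifiers are given.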
- calc_data_sum(lo=None, hi=None, id=None, bkg_id=None)[source] [edit on github]
Sum up the data values over a pass band.
This function is for one-dimensional data sets: use calc_data_sum2d for two-dimensional data sets.
- Parameters
lo (number, optional) – If both are None or both are set then sum up the data over the given band. If only one is set then return the data count in the given bin.
hi (number, optional) – If both are None or both are set then sum up the data over the given band. If only one is set then return the data count in the given bin.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns
dsum – If a background estimate has been subtracted from the data set then the calculation will use the background-subtracted values.
- Return type
number
See also
calc_data_sum2d
Sum up the data values of a 2D data set.
calc_model_sum
Sum up the fitted model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_source_sum
Sum up the source model over a pass band.
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis). The summation occurs over those points in the data set that lie within this range, not the range itself.
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
If a grouping scheme has been applied to the data set then it will be used. This can change the results, since the first and last bins of the selected range may extend outside the requested range.
Examples
Sum up the data values (the dependent axis) for all points or bins in the default data set:
>>> dsum = calc_data_sum()
Calculate the number of counts over the ranges 0.5 to 2 and 0.5 to 7 keV for the default data set, first using the observed signal and then, for the 0.5 to 2 keV band, the background-subtracted estimate:
>>> set_analysis('energy')
>>> calc_data_sum(0.5, 2)
745.0
>>> calc_data_sum(0.5, 7)
60.0
>>> subtract()
>>> calc_data_sum(0.5, 2)
730.9179738207356
Calculate the data value in the bin containing 0.5 keV for the source “core”:
>>> calc_data_sum(0.5, id="core")
0.0
Calculate the sum of the second background component for data set 3 over the independent axis range 12 to 45:
>>> calc_data_sum(12, 45, id=3, bkg_id=2)
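The band-summing behaviour described in the Notes can be sketched with plain NumPy. The x and counts arrays and the band_sum helper below are illustrative only (not part of the Sherpa API); the sum runs over the points that lie within the range, mirroring calc_data_sum:

```python
import numpy as np

# Illustrative 1D data set: independent-axis values (e.g. keV) and counts.
x = np.array([0.4, 0.8, 1.2, 1.6, 2.0, 2.4])
counts = np.array([5.0, 7.0, 9.0, 4.0, 3.0, 2.0])

def band_sum(lo, hi):
    """Sum the counts for points that lie within [lo, hi]."""
    mask = (x >= lo) & (x <= hi)
    return counts[mask].sum()

# Only the points at 0.8, 1.2, 1.6 and 2.0 fall in the 0.5-2 band.
total = band_sum(0.5, 2)
```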
- calc_data_sum2d(reg=None, id=None)[source] [edit on github]
Sum up the data values of a 2D data set.
This function is for two-dimensional data sets: use calc_data_sum for one-dimensional data sets.
- Parameters
reg (str, optional) – The spatial filter to use. The default, None, is to use the whole data set.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
dsum – The sum of the data values that lie within the given region.
- Return type
number
See also
calc_data_sum
Sum up the data values of a data set.
calc_model_sum2d
Sum up the convolved model for a 2D data set.
calc_source_sum2d
Sum up the unconvolved model for a 2D data set.
set_model
Set the source model expression for a data set.
Notes
The coordinate system of the region filter is determined by the coordinate setting for the data set (e.g. get_coord).
Any existing filter on the data set - e.g. as created by ignore2d or notice2d - is ignored by this function.
Examples
The following examples use the data in the default data set created with the following calls, which sets the y (data) values to be 0 to 11 in a 3 row by 4 column image:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[10:13, 20:24]
>>> y = y.flatten()
>>> x = x.flatten()
>>> load_arrays(1, x, y, ivals, (3, 4), DataIMG)
With no argument, the full data set is used:
>>> calc_data_sum2d()
66
>>> ivals.sum()
66
A spatial filter can be used to restrict the region used for the summation:
>>> calc_data_sum2d('circle(22,12,1)')
36
>>> calc_data_sum2d('field()-circle(22,12,1)')
30
Apply the spatial filter to the data set labelled “a2142”:
>>> calc_data_sum2d('rotbox(4232.3,3876,300,200,43)', 'a2142')
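The region filtering can be illustrated with plain NumPy, reusing the 3-by-4 grid from the examples above. The circle_sum helper is a hypothetical stand-in for the 'circle(x,y,r)' region filter (here taken to include pixel centres on or inside the circle):

```python
import numpy as np

# Rebuild the 3 row by 4 column image used in the examples above:
# values 0..11, with x running 20..23 and y running 10..12.
ivals = np.arange(12)
y, x = np.mgrid[10:13, 20:24]
y = y.flatten()
x = x.flatten()

def circle_sum(xc, yc, r):
    """Sum the pixel values whose centres lie inside or on the circle."""
    mask = (x - xc) ** 2 + (y - yc) ** 2 <= r ** 2
    return ivals[mask].sum()

inside = circle_sum(22, 12, 1)    # the circle(22,12,1) region
outside = ivals.sum() - inside    # the field()-circle(22,12,1) region
```

The four pixels whose centres fall within the circle hold the values 6, 9, 10 and 11, reproducing the sums of 36 and 30 shown above.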
- calc_energy_flux(lo=None, hi=None, id=None, bkg_id=None, model=None)[source] [edit on github]
Integrate the unconvolved source model over a pass band.
Calculate the integral of E * S(E) over a pass band, where E is the energy of the bin and S(E) the spectral model evaluated for that bin (that is, the model without any instrumental responses applied to it).
Changed in version 4.12.1: The model parameter was added.
- Parameters
lo (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If set, use the model associated with the given background component rather than the source model.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples.
- Returns
flux – The flux or flux density. For X-Spec style models the flux units will be erg/cm^2/s and the flux density units will be either erg/cm^2/s/keV or erg/cm^2/s/Angstrom, depending on the analysis setting.
- Return type
number
See also
calc_data_sum
Sum up the data values over a pass band.
calc_model_sum
Sum up the fitted model over a pass band.
calc_source_sum
Sum up the source model over a pass band.
calc_photon_flux
Integrate the unconvolved source model over a pass band.
set_analysis
Set the units used when fitting and displaying spectral data
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis).
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set. It is unlikely to give sensible results for 2D data sets.
Examples
Calculate the integral of the unconvolved model over the full range of the default data set:
>>> calc_energy_flux()
Return the flux for the data set labelled “core”:
>>> calc_energy_flux(id='core')
Calculate the energy flux over the ranges 0.5 to 2 and 0.5 to 7 keV:
>>> set_analysis('energy')
>>> calc_energy_flux(0.5, 2)
5.7224906878061796e-10
>>> calc_energy_flux(0.5, 7)
1.3758131915063825e-09
Calculate the energy flux density at 0.5 keV for the source “core”:
>>> calc_energy_flux(0.5, id="core")
5.2573786652855304e-10
Calculate the flux for the model applied to the second background component of the ‘jet’ data set, for the wavelength range 20 to 22 Angstroms:
>>> set_analysis('jet', 'wave') >>> calc_energy_flux(20, 22, id='jet', bkg_id=2)
For the following example, the source model is an absorbed powerlaw - xsphabs.gal * powerlaw.pl - so that the fabs value represents the absorbed flux, and funabs the unabsorbed flux (i.e. just the power-law component):
>>> fabs = calc_energy_flux(0.5, 7)
>>> funabs = calc_energy_flux(0.5, 7, model=pl)
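The integral described above can be sketched with NumPy as a per-bin sum. The energy grid, power-law amplitude, and photon index below are illustrative, not taken from any data set; the point is the distinction between summing S(E) dE (the photon flux) and summing E S(E) dE (the energy flux):

```python
import numpy as np

# Illustrative energy grid: contiguous bins covering 0.5 to 7 keV.
edges = np.linspace(0.5, 7.0, 651)
elo, ehi = edges[:-1], edges[1:]
emid = 0.5 * (elo + ehi)
de = ehi - elo

# An illustrative power-law photon spectrum S(E) = ampl * E**-2,
# in photon/cm^2/s/keV.
ampl = 1e-4
s = ampl * emid ** -2

# Photon flux integrates S(E); energy flux integrates E * S(E).
# (The energy flux here is in keV/cm^2/s; multiply by ~1.602e-9 erg/keV
# to convert to erg/cm^2/s.)
photon_flux = np.sum(s * de)
energy_flux = np.sum(emid * s * de)
```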
- calc_kcorr(z, obslo, obshi, restlo=None, resthi=None, id=None, bkg_id=None)[source] [edit on github]
Calculate the K correction for a model.
The K correction ([1], [2], [3], [4]) is the numeric factor applied to measured energy fluxes in an observed energy band to estimate the flux in a given rest-frame energy band. It accounts for the change in spectral energy distribution between the desired rest-frame band and the rest-frame band corresponding to the observed band. This is often used when converting a flux into a luminosity.
- Parameters
z (number or array, >= 0) – The redshift, or redshifts, of the source.
obslo (number) – The minimum energy of the observed band.
obshi (number) – The maximum energy of the observed band, which must be larger than obslo.
restlo (number or None) – The minimum energy of the rest-frame band. If None then use obslo.
resthi (number or None) – The maximum energy of the rest-frame band. It must be larger than restlo. If None then use obshi.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns
kz
- Return type
number or array of numbers
See also
calc_energy_flux
Integrate the unconvolved source model over a pass band.
dataspace1d
Create the independent axis for a 1D data set.
Notes
This is only defined when the analysis is in ‘energy’ units.
If the model contains a redshift parameter then it should be set to 0, rather than the source redshift.
If the source model is at zero redshift, the observed energy band is olo to ohi, and the rest frame band is rlo to rhi (which need not match the observed band), then the K correction at a redshift z can be calculated as:
frest = calc_energy_flux(rlo, rhi)
fobs = calc_energy_flux(olo*(1+z), ohi*(1+z))
kz = frest / fobs
The energy ranges used - rlo to rhi and olo*(1+z) to ohi*(1+z) - should be fully covered by the data grid, otherwise the flux calculation will be truncated at the grid boundaries, leading to incorrect results.
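The frest/fobs recipe above can be turned into a small, self-contained sketch for a power-law spectrum (the powerlaw_energy_flux and kcorr helpers are hypothetical, not part of the Sherpa API). A useful check: for a photon index of 2 the energy flux over a band [a, b] is proportional to log(b/a), so when the rest-frame band matches the observed band the K correction is 1 at any redshift.

```python
import numpy as np

def powerlaw_energy_flux(lo, hi, gamma, nbins=10000):
    """Energy flux of S(E) = E**-gamma over [lo, hi] (midpoint rule)."""
    edges = np.linspace(lo, hi, nbins + 1)
    emid = 0.5 * (edges[:-1] + edges[1:])
    de = np.diff(edges)
    # E * S(E) = E**(1 - gamma), summed over the bins.
    return np.sum(emid ** (1.0 - gamma) * de)

def kcorr(z, olo, ohi, rlo, rhi, gamma):
    """K correction following the frest / fobs recipe above."""
    frest = powerlaw_energy_flux(rlo, rhi, gamma)
    fobs = powerlaw_energy_flux(olo * (1 + z), ohi * (1 + z), gamma)
    return frest / fobs

# With gamma=2 and matching 0.5-2 keV bands, the result is 1 at any z.
k = kcorr(0.5, 0.5, 2.0, 0.5, 2.0, 2.0)
```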
References
- 1
“The K correction”, Hogg, D.W., et al. http://arxiv.org/abs/astro-ph/0210394
- 2
Appendix B of Jones et al. 1998, ApJ, vol 495, p. 100-114. http://adsabs.harvard.edu/abs/1998ApJ...495..100J
- 3
“K and evolutionary corrections from UV to IR”, Poggianti, B.M., A&AS, 1997, vol 122, p. 399-407. http://adsabs.harvard.edu/abs/1997A%26AS..122..399P
- 4
“Galactic evolution and cosmology - Probing the cosmological deceleration parameter”, Yoshii, Y. & Takahara, F., ApJ, 1988, vol 326, p. 1-18. http://adsabs.harvard.edu/abs/1988ApJ...326....1Y
Examples
Calculate the K correction for an X-Spec apec model, with a source temperature of 6 keV and abundance of 0.3 solar, for the energy band of 0.5 to 2 keV:
>>> dataspace1d(0.01, 10, 0.01)
>>> set_source(xsapec.clus)
>>> clus.kt = 6
>>> clus.abundanc = 0.3
>>> calc_kcorr(0.5, 0.5, 2)
0.82799195070436793
Calculate the K correction for a range of redshifts (0 to 2) using an observed frame of 0.5 to 2 keV and a rest frame of 0.1 to 10 keV (the energy grid is set to ensure that it covers the full energy range; that is the rest-frame band and the observed frame band multiplied by the smallest and largest (1+z) terms):
>>> dataspace1d(0.01, 11, 0.01)
>>> zs = np.linspace(0, 2, 21)
>>> ks = calc_kcorr(zs, 0.5, 2, restlo=0.1, resthi=10)
Calculate the K correction for the background dataset bkg_id=2 for a redshift of 0.5 over the energy range 0.5 to 2 keV with rest-frame energy limits of 2 to 10 keV.
>>> calc_kcorr(0.5, 0.5, 2, 2, 10, bkg_id=2)
- calc_model_sum(lo=None, hi=None, id=None, bkg_id=None)[source] [edit on github]
Sum up the fitted model over a pass band.
Sum up M(E) over a range of bins, where M(E) is the per-bin model value after it has been convolved with any instrumental response (e.g. RMF and ARF or PSF). This is intended for one-dimensional data sets: use calc_model_sum2d for two-dimensional data sets. The calc_source_sum function is used to calculate the sum of the model before any instrumental response is applied.
- Parameters
lo (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns
signal – The model value (sum or individual bin).
- Return type
number
See also
calc_data_sum
Sum up the observed counts over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_source_sum
Sum up the source model over a pass band.
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis). The summation occurs over those points in the data set that lie within this range, not the range itself.
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set.
Examples
Calculate the model evaluated over the full data set (all points or pixels of the independent axis) for the default data set, and compare it to the sum for the first background component:
>>> tsrc = calc_model_sum() >>> tbkg = calc_model_sum(bkg_id=1)
Sum up the model over the data range 0.5 to 2 for the default data set, and compare it to the data over the same range:
>>> calc_model_sum(0.5, 2)
404.97796489631639
>>> calc_data_sum(0.5, 2)
745.0
Calculate the model sum, evaluated over the range 20 to 22 Angstroms, for the first background component of the “histate” data set:
>>> set_analysis("histate", "wavelength") >>> calc_model_sum(20, 22, "histate", bkg_id=1)
In the following example, a small data set is created, covering the axis range of -5 to 5, and an off-center gaussian model created (centered at 1). The model is evaluated over the full data grid and then a subset of pixels. As the summation is done over those points in the data set that lie within the requested range, the sum for lo=-2 to hi=1 is the same as that for lo=-1.5 to hi=1.5:
>>> load_arrays('test', [-5, -2.5, 0, 2.5, 5], [2, 5, 12, 7, 3])
>>> set_source('test', gauss1d.gmdl)
>>> gmdl.pos = 1
>>> gmdl.fwhm = 2.4
>>> gmdl.ampl = 10
>>> calc_model_sum(id='test')
9.597121089731253
>>> calc_model_sum(-2, 1, id='test')
6.179472329646446
>>> calc_model_sum(-1.5, 1.5, id='test')
6.179472329646446
- calc_model_sum2d(reg=None, id=None)[source] [edit on github]
Sum up the convolved model for a 2D data set.
This function is for two-dimensional data sets: use calc_model_sum for one-dimensional data sets.
- Parameters
reg (str, optional) – The spatial filter to use. The default, None, is to use the whole data set.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
msum – The sum of the model values, as fitted to the data, that lie within the given region. This includes any PSF included by set_psf.
- Return type
number
See also
calc_model_sum
Sum up the fitted model over a pass band.
calc_source_sum2d
Sum up the unconvolved model for a 2D data set.
set_psf
Add a PSF model to a data set.
set_model
Set the source model expression for a data set.
Notes
The coordinate system of the region filter is determined by the coordinate setting for the data set (e.g. get_coord).
Any existing filter on the data set - e.g. as created by ignore2d or notice2d - is ignored by this function.
Examples
The following examples use the data in the default data set created with the following calls, which sets the y (data) values to be 0 to 11 in a 3 row by 4 column image:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[10:13, 20:24]
>>> y = y.flatten()
>>> x = x.flatten()
>>> load_arrays(1, x, y, ivals, (3, 4), DataIMG)
>>> set_source(const2d.bgnd)
>>> bgnd.c0 = 2
With no argument, the full data set is used. Since the model evaluates to 2 per pixel, and there are 12 pixels in the data set, the result is 24:
>>> calc_model_sum2d()
24.0
A spatial filter can be used to restrict the region used for the summation:
>>> calc_model_sum2d('circle(22,12,1)')
8.0
>>> calc_model_sum2d('field()-circle(22,12,1)')
16.0
Apply the spatial filter to the model for the data set labelled “a2142”:
>>> calc_model_sum2d('rotbox(4232.3,3876,300,200,43)', 'a2142')
- calc_photon_flux(lo=None, hi=None, id=None, bkg_id=None, model=None)[source] [edit on github]
Integrate the unconvolved source model over a pass band.
Calculate the integral of S(E) over a pass band, where S(E) is the spectral model evaluated for each bin (that is, the model without any instrumental responses applied to it).
Changed in version 4.12.1: The model parameter was added.
- Parameters
lo (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then calculate the flux over the given band. If only one is set then calculate the flux density at that point. The units for lo and hi are given by the current analysis setting.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If set, use the model associated with the given background component rather than the source model.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples.
- Returns
flux – The flux or flux density. For X-Spec style models the flux units will be photon/cm^2/s and the flux density units will be either photon/cm^2/s/keV or photon/cm^2/s/Angstrom, depending on the analysis setting.
- Return type
number
See also
calc_data_sum
Sum up the observed counts over a pass band.
calc_model_sum
Sum up the fitted model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
calc_source_sum
Sum up the source model over a pass band.
set_analysis
Set the units used when fitting and displaying spectral data
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis).
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set. It is unlikely to give sensible results for 2D data sets.
Examples
Calculate the integral of the unconvolved model over the full range of the default data set:
>>> calc_photon_flux()
Return the flux for the data set labelled “core”:
>>> calc_photon_flux(id='core')
Calculate the photon flux over the ranges 0.5 to 2 and 0.5 to 7 keV, and compare them to the energy fluxes for the same bands:
>>> set_analysis('energy')
>>> calc_photon_flux(0.5, 2)
0.35190275
>>> calc_photon_flux(0.5, 7)
0.49050927
>>> calc_energy_flux(0.5, 2)
5.7224906878061796e-10
>>> calc_energy_flux(0.5, 7)
1.3758131915063825e-09
Calculate the photon flux density at 0.5 keV for the source “core”:
>>> calc_photon_flux(0.5, id="core")
0.64978176
Calculate the flux for the model applied to the second background component of the ‘jet’ data set, for the wavelength range 20 to 22 Angstroms:
>>> set_analysis('jet', 'wave') >>> calc_photon_flux(20, 22, id='jet', bkg_id=2)
For the following example, the source model is an absorbed powerlaw - xsphabs.gal * powerlaw.pl - so that the fabs value represents the absorbed flux, and funabs the unabsorbed flux (i.e. just the power-law component):
>>> fabs = calc_photon_flux(0.5, 7)
>>> funabs = calc_photon_flux(0.5, 7, model=pl)
- calc_source_sum(lo=None, hi=None, id=None, bkg_id=None)[source] [edit on github]
Sum up the source model over a pass band.
Sum up S(E) over a range of bins, where S(E) is the per-bin model value before it has been convolved with any instrumental response (e.g. RMF and ARF or PSF). This is intended for one-dimensional data sets: use calc_source_sum2d for two-dimensional data sets. The calc_model_sum function is used to calculate the sum of the model after any instrumental response is applied.
- Parameters
lo (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
hi (number, optional) – If both are None or both are set then sum up over the given band. If only one is set then use the model value in the selected bin. The units for lo and hi are given by the current analysis setting.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If set, use the model associated with the given background component rather than the source model.
- Returns
signal – The model value (sum or individual bin).
- Return type
number
See also
calc_data_sum
Sum up the observed counts over a pass band.
calc_model_sum
Sum up the fitted model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
calc_photon_flux
Integrate the unconvolved source model over a pass band.
set_model
Set the source model expression for a data set.
Notes
The units of lo and hi are determined by the analysis setting for the data set (e.g. get_analysis). The summation occurs over those points in the data set that lie within this range, not the range itself.
Any existing filter on the data set - e.g. as created by ignore or notice - is ignored by this function.
The units of the answer depend on the model components used in the source expression and the axis or axes of the data set.
Examples
Calculate the model evaluated over the full data set (all points or pixels of the independent axis) for the default data set, and compare it to the sum for the first background component:
>>> tsrc = calc_source_sum() >>> tbkg = calc_source_sum(bkg_id=1)
Sum up the model over the data range 0.5 to 2 for the default data set:
>>> calc_source_sum(0.5, 2)
139.12819041922018
Compare the output of the calc_source_sum and calc_photon_flux routines. A 1099-bin data space is created, with a model which has a value of 1 for each bin. As the bin width is constant, at 0.01, the integrated value, calculated by calc_photon_flux, is one hundredth the value returned by calc_source_sum:
>>> dataspace1d(0.01, 11, 0.01, id="test")
>>> set_source("test", const1d.bflat)
>>> bflat.c0 = 1
>>> calc_source_sum(id="test")
1099.0
>>> calc_photon_flux(id="test")
10.99
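The sum-versus-integral distinction in that example can be reproduced with plain NumPy (no Sherpa call): a constant model of 1 over 1099 bins of width 0.01 gives a per-bin sum of 1099 and an integrated value of 10.99.

```python
import numpy as np

# Recreate the grid from the example above: bin edges from 0.01 to 11,
# giving 1099 bins of width 0.01, with a constant model value of 1.
edges = np.linspace(0.01, 11.0, 1100)
widths = np.diff(edges)               # each ~0.01
model = np.ones(len(widths))

source_sum = model.sum()              # per-bin sum, as calc_source_sum
photon_flux = np.sum(model * widths)  # integral, as calc_photon_flux
```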
In the following example, a small data set is created, covering the axis range of -5 to 5, and an off-center gaussian model created (centered at 1). The model is evaluated over the full data grid and then a subset of pixels. As the summation is done over those points in the data set that lie within the requested range, the sum for lo=-2 to hi=1 is the same as that for lo=-1.5 to hi=1.5:
>>> load_arrays('test', [-5, -2.5, 0, 2.5, 5], [2, 5, 12, 7, 3])
>>> set_source('test', gauss1d.gmdl)
>>> gmdl.pos = 1
>>> gmdl.fwhm = 2.4
>>> gmdl.ampl = 10
>>> calc_source_sum(id='test')
9.597121089731253
>>> calc_source_sum(-2, 1, id='test')
6.179472329646446
>>> calc_source_sum(-1.5, 1.5, id='test')
6.179472329646446
- calc_source_sum2d(reg=None, id=None)[source] [edit on github]
Sum up the unconvolved model for a 2D data set.
This function is for two-dimensional data sets: use calc_source_sum for one-dimensional data sets.
- Parameters
reg (str, optional) – The spatial filter to use. The default, None, is to use the whole data set.
id (int or str, optional) – Use the source expression associated with this data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
msum – The sum of the model values that lie within the given region. This does not include any PSF included by set_psf.
- Return type
number
See also
calc_model_sum2d
Sum up the convolved model for a 2D data set.
calc_source_sum
Sum up the model over a pass band.
set_psf
Add a PSF model to a data set.
set_model
Set the source model expression for a data set.
Notes
The coordinate system of the region filter is determined by the coordinate setting for the data set (e.g. get_coord).
Any existing filter on the data set - e.g. as created by ignore2d or notice2d - is ignored by this function.
Examples
The following examples use the data in the default data set created with the following calls, which sets the y (data) values to be 0 to 11 in a 3 row by 4 column image:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[10:13, 20:24]
>>> y = y.flatten()
>>> x = x.flatten()
>>> load_arrays(1, x, y, ivals, (3, 4), DataIMG)
>>> set_source(const2d.bgnd)
>>> bgnd.c0 = 2
With no argument, the full data set is used. Since the model evaluates to 2 per pixel, and there are 12 pixels in the data set, the result is 24:
>>> calc_source_sum2d()
24.0
A spatial filter can be used to restrict the region used for the summation:
>>> calc_source_sum2d('circle(22,12,1)')
8.0
>>> calc_source_sum2d('field()-circle(22,12,1)')
16.0
Apply the spatial filter to the model for the data set labelled “a2142”:
>>> calc_source_sum2d('rotbox(4232.3,3876,300,200,43)', 'a2142')
- calc_stat(id=None, *otherids)[source] [edit on github]
Calculate the fit statistic for a data set.
Evaluate the model for one or more data sets, compare it to the data using the current statistic, and return the value. No fitting is done; the current model parameters, and any filters, are used.
- Parameters
- Returns
stat – The current statistic value.
- Return type
number
See also
calc_chisqr
Calculate the per-bin chi-squared statistic.
calc_stat_info
Display the statistic values for the current models.
set_stat
Set the statistical method.
Examples
Calculate the statistic for the model and data in the default data set:
>>> stat = calc_stat()
Find the statistic for data set 3:
>>> stat = calc_stat(3)
When fitting to multiple data sets, you can get the contribution to the total fit statistic from only one data set, or from several by listing the datasets explicitly. The following finds the contribution from the data sets labelled “core” and “jet”:
>>> stat = calc_stat("core", "jet")
Calculate the statistic value using two different statistics:
>>> set_stat('cash')
>>> s1 = calc_stat()
>>> set_stat('cstat')
>>> s2 = calc_stat()
- calc_stat_info()[source] [edit on github]
Display the statistic values for the current models.
Displays the statistic value for each data set, and the combined fit, using the current set of models, parameters, and ranges. The output is printed to stdout, and so is intended for use in interactive analysis. The get_stat_info function returns the same information but as an array of Python structures.
See also
calc_stat
Calculate the fit statistic for a data set.
get_stat_info
Return the statistic values for the current models.
Notes
If a fit to a particular data set has not been made, or values - such as parameter settings, the noticed data range, or choice of statistic - have been changed since the last fit, then the results for that data set may not be meaningful, and will therefore bias the simultaneous results.
The information returned by calc_stat_info includes:
- Dataset
The dataset identifier (or identifiers).
- Statistic
The name of the statistic used to calculate the results.
- Fit statistic value
The current fit statistic value.
- Data points
The number of bins used in the fit.
- Degrees of freedom
The number of bins minus the number of thawed parameters.
Some fields are only returned for a subset of statistics:
- Probability (Q-value)
A measure of the probability that one would observe the reduced statistic value, or a larger value, if the assumed model is true and the best-fit model parameters are the true parameter values.
- Reduced statistic
The fit statistic value divided by the number of degrees of freedom.
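As a quick illustration of the last two fields (using made-up numbers, not Sherpa output): the degrees of freedom are the number of fitted bins minus the number of thawed parameters, and the reduced statistic is the fit statistic divided by that:

```python
# Hypothetical fit summary values (not Sherpa output).
stat = 45.7       # fit statistic value
npoints = 40      # number of bins used in the fit
nthawed = 3       # number of thawed (free) parameters

dof = npoints - nthawed   # degrees of freedom
rstat = stat / dof        # reduced statistic
```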
Examples
>>> calc_stat_info()
- clean()[source] [edit on github]
Clear out the current Sherpa session.
The clean function removes all data sets and model assignments, and restores the default settings for the optimisation and fit statistic.
See also
Examples
>>> clean()
- conf(*args)[source] [edit on github]
Estimate parameter confidence intervals using the confidence method.
The conf command computes confidence interval bounds for the specified model parameters in the dataset. A given parameter's value is varied along a grid of values while the values of all the other thawed parameters are allowed to float to new best-fit values. The get_conf and set_conf_opt commands can be used to configure the error analysis; an example being changing the 'sigma' field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_conf_results routine can be used to retrieve the results.
- Parameters
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example conf(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
covar
Estimate the confidence intervals using the covariance method.
get_conf
Return the confidence-interval estimation object.
get_conf_results
Return the results of the last conf run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_conf_opt
Set an option of the conf estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The conf function is different to covar, in that all other thawed parameters are allowed to float to new best-fit values, instead of being fixed to the initial best-fit values as they are in covar. While conf is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
The conf function is a replacement for the proj function, which uses a different algorithm to estimate parameter confidence limits.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter's values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
If either of the conditions given above does not hold, then the output from conf may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.
As the calculation can be computer intensive, the default behavior is to use all available CPU cores to speed up the analysis. This can be changed by varying the numcores option - or setting parallel to False - either with set_conf_opt or get_conf.
As conf estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma equals the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with the sigma option to set_conf_opt or get_conf.
The limit calculated by conf is basically a 1-dimensional root in the translated coordinate system (translated by the value of the statistic at the minimum plus sigma^2). The Taylor series expansion of the multi-dimensional function at the minimum is:
f(x + dx) ~ f(x) + grad( f(x) )^T dx + (1/2) dx^T Hessian( f(x) ) dx + ...
where x is understood to be the n-dimensional vector representing the free parameters to be fitted and the super-script 'T' is the transpose of the row-vector. At or near the minimum, the gradient of the function is zero or negligible, respectively. So the leading term of the expansion is quadratic. The best root finding algorithm for a curve which is approximately parabolic is Muller's method [1]. Muller's method is a generalization of the secant method [2]: the secant method is an iterative root finding method that approximates the function by a straight line through two points, whereas Muller's method is an iterative root finding method that approximates the function by a quadratic polynomial through three points.
Three data points are the minimum input to Muller's root finding method. The first point submitted is the point at the minimum. To strategically choose the other two data points, the confidence function uses the output from covariance as the second data point. To generate the third data point, the secant root finding method is used, since it only requires two data points to generate the next best approximation of the root.
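Muller's method itself is compact; the following is a generic, standalone implementation for illustration only (Sherpa's internal root finder will differ in its bracketing and convergence details):

```python
import cmath

def muller(f, x0, x1, x2, tol=1e-12, maxiter=50):
    """Find a root of f by Muller's method: fit a parabola through the
    three most recent points and step to the nearer root of the parabola."""
    for _ in range(maxiter):
        f0, f1, f2 = f(x0), f(x1), f(x2)
        h1, h2 = x1 - x0, x2 - x1
        d1, d2 = (f1 - f0) / h1, (f2 - f1) / h2
        a = (d2 - d1) / (h2 + h1)             # quadratic coefficient
        b = a * h2 + d2                       # slope of the parabola at x2
        c = f2
        disc = cmath.sqrt(b * b - 4 * a * c)  # complex sqrt handles dips
        # pick the denominator of larger magnitude for numerical stability
        den = b + disc if abs(b + disc) > abs(b - disc) else b - disc
        x3 = x2 - 2 * c / den
        if abs(x3 - x2) < tol:
            return x3
        x0, x1, x2 = x1, x2, x3
    return x2

# A parabola-like curve: the positive root of x**2 - 2 is sqrt(2).
root = muller(lambda x: x * x - 2.0, 0.0, 1.0, 2.0)  # converges to sqrt(2)
```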
However, there are cases where conf cannot locate the root even though the root is bracketed within an interval (perhaps due to the bad resolution of the data). In such cases, when the option openinterval is set to False (which is the default), the routine will print a warning message about not being able to find the root within the set tolerance, and the function will return the average of the open interval which brackets the root. If openinterval is set to True then conf will print the minimal open interval which brackets the root (not to be confused with the lower and upper bounds of the confidence interval). The most accurate thing to do is to return an open interval where the root is localized/bracketed, rather than the average of the open interval (since the average of the interval is not a root within the specified tolerance).
References
- 1
Muller, David E., “A Method for Solving Algebraic Equations Using an Automatic Computer,” MTAC, 10 (1956), 208-215.
- 2
Numerical Recipes in Fortran, 2nd edition, 1986, Press et al., p. 347
Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable res.
>>> conf()
>>> res = get_conf_results()
Only evaluate the parameters associated with data set 2:
>>> conf(2)
Only evaluate the intervals for the pos.xpos and pos.ypos parameters:
>>> conf(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_conf_opt('sigma', 1.6)
>>> conf()
Only evaluate the clus.kt parameter for the data sets with identifiers "obs1", "obs5", and "obs6". This will still use the 1.6 sigma setting from the previous run.
>>> conf("obs1", "obs5", "obs6", clus.kt)
Only use two cores when evaluating the errors for the parameters used in the model for data set 3:
>>> set_conf_opt('numcores', 2)
>>> conf(3)
Estimate the errors for all the thawed parameters from the line model and the clus.kt parameter for datasets 1, 3, and 4:
>>> conf(1, 3, 4, line, clus.kt)
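The sigma-to-delta_S relation quoted in the Notes can be written out directly; a standalone sketch (the function names are my own, not part of the Sherpa API):

```python
import math

# For chi-square-like statistics, an n-sigma confidence bound corresponds
# to the parameter value where the statistic rises delta_S = sigma**2
# above its minimum, i.e. sigma = sqrt(delta_S).
def delta_stat(sigma):
    return sigma ** 2

# For a general log-likelihood the quoted relation is approximately
# sigma = sqrt(2 * delta_S), i.e. delta_S = sigma**2 / 2.
def delta_loglike(sigma):
    return sigma ** 2 / 2.0

assert math.isclose(delta_stat(1.0), 1.0)    # one-sigma: rises by 1
assert math.isclose(delta_stat(1.6), 2.56)   # the 1.6-sigma (90%) setting
```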
- confidence(*args) [edit on github]
Estimate parameter confidence intervals using the confidence method.
The conf command computes confidence interval bounds for the specified model parameters in the dataset. A given parameter's value is varied along a grid of values while the values of all the other thawed parameters are allowed to float to new best-fit values. The get_conf and set_conf_opt commands can be used to configure the error analysis; an example being changing the 'sigma' field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_conf_results routine can be used to retrieve the results.
- Parameters
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example conf(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
covar
Estimate the confidence intervals using the covariance method.
get_conf
Return the confidence-interval estimation object.
get_conf_results
Return the results of the last conf run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_conf_opt
Set an option of the conf estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The conf function is different to covar, in that all other thawed parameters are allowed to float to new best-fit values, instead of being fixed to the initial best-fit values as they are in covar. While conf is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
The conf function is a replacement for the proj function, which uses a different algorithm to estimate parameter confidence limits.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter's values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
If either of the conditions given above does not hold, then the output from conf may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.
As the calculation can be computer intensive, the default behavior is to use all available CPU cores to speed up the analysis. This can be changed by varying the numcores option - or setting parallel to False - either with set_conf_opt or get_conf.
As conf estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma equals the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with the sigma option to set_conf_opt or get_conf.
The limit calculated by conf is basically a 1-dimensional root in the translated coordinate system (translated by the value of the statistic at the minimum plus sigma^2). The Taylor series expansion of the multi-dimensional function at the minimum is:
f(x + dx) ~ f(x) + grad( f(x) )^T dx + (1/2) dx^T Hessian( f(x) ) dx + ...
where x is understood to be the n-dimensional vector representing the free parameters to be fitted and the super-script 'T' is the transpose of the row-vector. At or near the minimum, the gradient of the function is zero or negligible, respectively. So the leading term of the expansion is quadratic. The best root finding algorithm for a curve which is approximately parabolic is Muller's method [1]. Muller's method is a generalization of the secant method [2]: the secant method is an iterative root finding method that approximates the function by a straight line through two points, whereas Muller's method is an iterative root finding method that approximates the function by a quadratic polynomial through three points.
Three data points are the minimum input to Muller's root finding method. The first point submitted is the point at the minimum. To strategically choose the other two data points, the confidence function uses the output from covariance as the second data point. To generate the third data point, the secant root finding method is used, since it only requires two data points to generate the next best approximation of the root.
However, there are cases where conf cannot locate the root even though the root is bracketed within an interval (perhaps due to the bad resolution of the data). In such cases, when the option openinterval is set to False (which is the default), the routine will print a warning message about not being able to find the root within the set tolerance, and the function will return the average of the open interval which brackets the root. If openinterval is set to True then conf will print the minimal open interval which brackets the root (not to be confused with the lower and upper bounds of the confidence interval). The most accurate thing to do is to return an open interval where the root is localized/bracketed, rather than the average of the open interval (since the average of the interval is not a root within the specified tolerance).
References
- 1
Muller, David E., “A Method for Solving Algebraic Equations Using an Automatic Computer,” MTAC, 10 (1956), 208-215.
- 2
Numerical Recipes in Fortran, 2nd edition, 1986, Press et al., p. 347
Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable res.
>>> conf()
>>> res = get_conf_results()
Only evaluate the parameters associated with data set 2:
>>> conf(2)
Only evaluate the intervals for the pos.xpos and pos.ypos parameters:
>>> conf(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_conf_opt('sigma', 1.6)
>>> conf()
Only evaluate the clus.kt parameter for the data sets with identifiers "obs1", "obs5", and "obs6". This will still use the 1.6 sigma setting from the previous run.
>>> conf("obs1", "obs5", "obs6", clus.kt)
Only use two cores when evaluating the errors for the parameters used in the model for data set 3:
>>> set_conf_opt('numcores', 2)
>>> conf(3)
Estimate the errors for all the thawed parameters from the line model and the clus.kt parameter for datasets 1, 3, and 4:
>>> conf(1, 3, 4, line, clus.kt)
- contour(*args, **kwargs)[source] [edit on github]
Create a contour plot for an image data set.
Create one or more contour plots, depending on the arguments: a plot type, followed by an optional data set identifier, and this pairing can be repeated. If no data set identifier is given for a plot type, the default identifier - as returned by get_default_id - is used. This is for 2D data sets.
Changed in version 4.12.2: Keyword arguments, such as alpha, can be sent to each plot.
- Raises
sherpa.utils.err.DataErr – The data set does not support the requested plot type.
See also
contour_data
Contour the values of an image data set.
contour_fit
Contour the fit to a data set.
contour_fit_resid
Contour the fit and the residuals to a data set.
contour_kernel
Contour the kernel applied to the model of an image data set.
contour_model
Contour the values of the model, including any PSF.
contour_psf
Contour the PSF applied to the model of an image data set.
contour_ratio
Contour the ratio of data to model.
contour_resid
Contour the residuals of the fit.
contour_source
Contour the values of the model, without any PSF.
get_default_id
Return the default data set identifier.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Notes
The supported plot types depend on the data set type, and include the following list. There are also individual functions, with contour_ prepended to the plot type, such as contour_data and the contour_fit_resid variant:
data
The data.
fit
Contours of the data and the source model.
fit_resid
Two plots: the first is the contours of the data and the source model and the second is the residuals.
kernel
The kernel.
model
The source model including any PSF convolution set by set_psf.
psf
The PSF.
ratio
Contours of the ratio image, formed by dividing the data by the model.
resid
Contours of the residual image, formed by subtracting the model from the data.
source
The source model (without any PSF convolution set by set_psf).
The keyword arguments are sent to each plot (so care must be taken to ensure they are valid for all plots).
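The ratio and resid plot types described above are simple pixel-wise combinations of the data and model images; a standalone sketch with plain lists (the function names are my own, not Sherpa's):

```python
# Pixel-wise images behind the 'resid' and 'ratio' contour types:
# resid = data - model, ratio = data / model.
def resid_image(data, model):
    return [[d - m for d, m in zip(drow, mrow)]
            for drow, mrow in zip(data, model)]

def ratio_image(data, model):
    return [[d / m for d, m in zip(drow, mrow)]
            for drow, mrow in zip(data, model)]

data = [[4.0, 6.0], [8.0, 10.0]]
model = [[4.0, 5.0], [8.0, 8.0]]
print(resid_image(data, model))  # [[0.0, 1.0], [0.0, 2.0]]
print(ratio_image(data, model))  # [[1.0, 1.2], [1.0, 1.25]]
```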
Examples
>>> contour('data')
>>> contour('data', 1, 'data', 2)
>>> contour('data', 'model')
>>> contour('data', 'model', 'fit', 'resid')
>>> contour('data', 'model', alpha=0.7)
- contour_data(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Contour the values of an image data set.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_data. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_data_contour
Return the data used by contour_data.
get_data_contour_prefs
Return the preferences for contour_data.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the data from the default data set:
>>> contour_data()
Contour the data and then overplot the data from the second data set:
>>> contour_data()
>>> contour_data(2, overcontour=True)
- contour_fit(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Contour the fit to a data set.
Overplot the model - including any PSF - on the data. The preferences are the same as contour_data and contour_model.
- Parameters
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_fit. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_fit_contour
Return the data used by contour_fit.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the fit for the default data set:
>>> contour_fit()
Overplot the fit to data set ‘s2’ on that of the default data set:
>>> contour_fit()
>>> contour_fit('s2', overcontour=True)
- contour_fit_resid(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Contour the fit and the residuals to a data set.
Overplot the model - including any PSF - on the data. In a separate plot contour the residuals. The preferences are the same as contour_data and contour_model.
- Parameters
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_fit_resid. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_fit_contour
Return the data used by contour_fit.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_fit
Contour the fit to a data set.
contour_resid
Contour the residuals of the fit.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the fit and residuals for the default data set:
>>> contour_fit_resid()
- contour_kernel(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Contour the kernel applied to the model of an image data set.
If the data set has no PSF applied to it, the model is displayed.
- Parameters
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_kernel. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_psf_contour
Return the data used by contour_psf.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_psf
Contour the PSF applied to the model of an image data set.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
- contour_model(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Create a contour plot of the model.
Displays a contour plot of the values of the model, evaluated on the data, including any PSF kernel convolution (if set).
- Parameters
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_model. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_model_contour
Return the data used by contour_model.
get_model_contour_prefs
Return the preferences for contour_model.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_source
Create a contour plot of the unconvolved spatial model.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
Examples
Plot the model from the default data set:
>>> contour_model()
Compare the model without and with the PSF component, for the “img” data set:
>>> contour_source("img")
>>> contour_model("img", overcontour=True)
- contour_psf(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Contour the PSF applied to the model of an image data set.
If the data set has no PSF applied to it, the model is displayed.
- Parameters
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_psf. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_psf_contour
Return the data used by contour_psf.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_kernel
Contour the kernel applied to the model of an image data set.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
- contour_ratio(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Contour the ratio of data to model.
The ratio image is formed by dividing the data by the current model, including any PSF. The preferences are the same as contour_data.
- Parameters
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_ratio. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_ratio_contour
Return the data used by contour_ratio.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the ratio from the default data set:
>>> contour_ratio()
Overplot the ratio on the residuals:
>>> contour_resid('img')
>>> contour_ratio('img', overcontour=True)
- contour_resid(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Contour the residuals of the fit.
The residuals are formed by subtracting the current model - including any PSF - from the data. The preferences are the same as contour_data.
- Parameters
id (int or str, optional) – The data set that provides the data and model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_resid. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_resid_contour
Return the data used by contour_resid.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
Examples
Plot the residuals from the default data set:
>>> contour_resid()
Overplot the residuals on the model:
>>> contour_model('img')
>>> contour_resid('img', overcontour=True)
- contour_source(id=None, replot=False, overcontour=False, **kwargs)[source] [edit on github]
Create a contour plot of the unconvolved spatial model.
Displays a contour plot of the values of the model, evaluated on the data, without any PSF kernel convolution applied. The preferences are the same as contour_model.
- Parameters
id (int or str, optional) – The data set that provides the model. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to contour_source. The default is False.
overcontour (bool, optional) – If True then add the data to an existing plot, otherwise create a new contour plot. The default is False.
See also
get_source_contour
Return the data used by contour_source.
get_default_id
Return the default data set identifier.
contour
Create one or more plot types.
contour_model
Create a contour plot of the model.
sherpa.astro.ui.set_coord
Set the coordinate system to use for image analysis.
set_psf
Add a PSF model to a data set.
Examples
Plot the model from the default data set:
>>> contour_source()
Compare the model without and with the PSF component, for the “img” data set:
>>> contour_model("img")
>>> contour_source("img", overcontour=True)
- copy_data(fromid, toid)[source] [edit on github]
Copy a data set, creating a new identifier.
After copying the data set, any changes made to the original data set (that is, the fromid identifier) will not be reflected in the new (the toid identifier) data set.
- Parameters
fromid (int or str) – The input data set.
toid (int or str) – The output data set.
- Raises
sherpa.utils.err.IdentifierErr – If there is no data set with a fromid identifier.
Examples
>>> copy_data(1, 2)
Rename the data set with identifier 2 to “orig”, and then delete the old data set:
>>> copy_data(2, "orig")
>>> delete_data(2)
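The independence of the copy can be pictured with an ordinary deep copy of a dict-based data store; a standalone sketch of the same semantics, not Sherpa's implementation:

```python
import copy

# copy_data produces an independent copy: later changes to the original
# do not appear in the copy. Sketch of the same semantics with a dict.
store = {1: {"x": [1, 2, 3], "y": [4.0, 5.0, 6.0]}}
store["orig"] = copy.deepcopy(store[1])   # like copy_data(1, "orig")
store[1]["y"][0] = 99.0                   # modify the original
print(store["orig"]["y"][0])              # 4.0 - the copy is unaffected
```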
- covar(*args)[source] [edit on github]
Estimate parameter confidence intervals using the covariance method.
The covar command computes confidence interval bounds for the specified model parameters in the dataset, using the covariance matrix of the statistic. The get_covar and set_covar_opt commands can be used to configure the error analysis; an example being changing the sigma field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_covar_results routine can be used to retrieve the results.
- Parameters
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example covar(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
conf
Estimate the confidence intervals using the confidence method.
get_covar
Return the covariance estimation object.
get_covar_results
Return the results of the last covar run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_covar_opt
Set an option of the covar estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The covar command is different to conf, in that all other thawed parameters are fixed, rather than being allowed to float to new best-fit values. While conf is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter's values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
If either of the conditions given above does not hold, then the output from covar may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.
As covar estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma equals the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with the sigma option to set_covar_opt or get_covar.
Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable res.
>>> covar()
>>> res = get_covar_results()
Only evaluate the parameters associated with data set 2.
>>> covar(2)
Only evaluate the intervals for the pos.xpos and pos.ypos parameters:
>>> covar(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_covar_opt('sigma', 1.6)
>>> covar()
Only evaluate the clus.kt parameter for the data sets with identifiers “obs1”, “obs5”, and “obs6”. This will still use the 1.6 sigma setting from the previous run.
>>> covar("obs1", "obs5", "obs6", clus.kt)
Estimate the errors for all the thawed parameters from the line model and the clus.kt parameter for datasets 1, 3, and 4:
>>> covar(1, 3, 4, line, clus.kt)
- covariance(*args) [edit on github]
Estimate parameter confidence intervals using the covariance method.
The covar command computes confidence interval bounds for the specified model parameters in the dataset, using the covariance matrix of the statistic. The get_covar and set_covar_opt commands can be used to configure the error analysis; an example being changing the sigma field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_covar_results routine can be used to retrieve the results.
- Parameters
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example covar(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
conf
Estimate the confidence intervals using the confidence method.
get_covar
Return the covariance estimation object.
get_covar_results
Return the results of the last covar run.
int_proj
Plot the statistic value as a single parameter is varied.
int_unc
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
set_covar_opt
Set an option of the covar estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The covar command is different to conf, in that all other thawed parameters are fixed, rather than being allowed to float to new best-fit values. While conf is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter’s values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
If either of the conditions given above does not hold, then the output from covar may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.
As covar estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma = the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with the sigma option to set_covar_opt or get_covar.
Examples
Evaluate confidence intervals for all thawed parameters in all data sets with an associated source model. The results are then stored in the variable res.
>>> covar()
>>> res = get_covar_results()
Only evaluate the parameters associated with data set 2.
>>> covar(2)
Only evaluate the intervals for the pos.xpos and pos.ypos parameters:
>>> covar(pos.xpos, pos.ypos)
Change the limits to be 1.6 sigma (90%) rather than the default 1 sigma.
>>> set_covar_opt('sigma', 1.6)
>>> covar()
Only evaluate the clus.kt parameter for the data sets with identifiers “obs1”, “obs5”, and “obs6”. This will still use the 1.6 sigma setting from the previous run.
>>> covar("obs1", "obs5", "obs6", clus.kt)
Estimate the errors for all the thawed parameters from the line model and the clus.kt parameter for datasets 1, 3, and 4:
>>> covar(1, 3, 4, line, clus.kt)
- static create_arf(elo, ehi, specresp=None, exposure=None, ethresh=None, name='test-arf')[source] [edit on github]
Create an ARF.
New in version 4.10.1.
- Parameters
elo (numpy.ndarray) – The energy bins (low and high, in keV) for the ARF. It is assumed that ehi_i > elo_i, elo_i > 0, the energy bins are either ascending - so elo_i+1 > elo_i - or descending (elo_i+1 < elo_i), and that there are no overlaps.
ehi (numpy.ndarray) – The energy bins (low and high, in keV) for the ARF. It is assumed that ehi_i > elo_i, elo_i > 0, the energy bins are either ascending - so elo_i+1 > elo_i - or descending (elo_i+1 < elo_i), and that there are no overlaps.
specresp (None or array, optional) – The spectral response (in cm^2) for the ARF. It is assumed to be >= 0. If not given a flat response of 1.0 is used.
exposure (number or None, optional) – If not None, the exposure of the ARF in seconds.
ethresh (number or None, optional) – Passed through to the DataARF call. It controls whether zero-energy bins are replaced.
name (str, optional) – The name of the ARF data set
- Returns
arf
- Return type
DataARF instance
See also
Examples
Create a flat ARF, with a value of 1.0 cm^2 for each bin, over the energy range 0.1 to 10 keV, with a bin spacing of 0.01 keV.
>>> egrid = np.arange(0.1, 10, 0.01)
>>> arf = create_arf(egrid[:-1], egrid[1:])
Create an ARF that has 10 percent more area than the ARF from the default data set:
>>> arf1 = get_arf()
>>> elo = arf1.energ_lo
>>> ehi = arf1.energ_hi
>>> y = 1.1 * arf1.specresp
>>> arf2 = create_arf(elo, ehi, y, exposure=arf1.exposure)
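The grid conditions listed in the parameter descriptions can be checked before calling create_arf. The helper below is a sketch, not part of Sherpa, that validates an ascending energy grid under those assumptions:

```python
import numpy as np

def check_arf_grid(elo, ehi):
    """Check the stated grid assumptions for an ascending ARF grid:
    ehi_i > elo_i, elo_i > 0, and no overlapping bins."""
    elo = np.asarray(elo)
    ehi = np.asarray(ehi)
    if elo.shape != ehi.shape:
        raise ValueError("elo and ehi must have the same size")
    if not np.all(ehi > elo):
        raise ValueError("each bin needs ehi > elo")
    if not np.all(elo > 0):
        raise ValueError("energies must be positive")
    # no overlaps: each bin starts at or after the previous bin's end
    if not np.all(elo[1:] >= ehi[:-1]):
        raise ValueError("bins must not overlap")
    return True

egrid = np.arange(0.1, 10, 0.01)
check_arf_grid(egrid[:-1], egrid[1:])  # passes for a contiguous grid
```

A descending grid would need the mirrored checks; the sketch covers only the ascending case.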
- create_model_component(typename=None, name=None)[source] [edit on github]
Create a model component.
Model components created by this function are set to their default values. Components can also be created directly using the syntax typename.name, such as in calls to set_model and set_source (unless you have called set_model_autoassign_func to change the default model auto-assignment setting).
- Parameters
typename (str) – The name of the model. This should match an entry from the return value of list_models, and defines the type of model.
name (str) – The name used to refer to this instance, or component, of the model. A Python variable will be created with this name that can be used to inspect and change the model parameters, as well as use it in model expressions.
- Returns
model
- Return type
the sherpa.models.Model object created
See also
delete_model_component
Delete a model component.
get_model_component
Returns a model component given its name.
list_models
List the available model types.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
Notes
This function can over-write an existing component. If the over-written component is part of a source expression - as set by set_model - then the model evaluation will still use the old model definition (and be able to change the fit parameters), but direct access to its parameters is not possible since the name now refers to the new component (this is true using direct access, such as mname.parname, or with set_par).
Examples
Create an instance of the powlaw1d model called pl, and then freeze its gamma parameter to 2.6.
>>> create_model_component("powlaw1d", "pl")
>>> pl.gamma = 2.6
>>> freeze(pl.gamma)
Create a blackbody model called bb, check that it is recognized as a component, and display its parameters:
>>> create_model_component("bbody", "bb")
>>> list_model_components()
>>> print(bb)
>>> print(bb.ampl)
- static create_rmf(rmflo, rmfhi, startchan=1, e_min=None, e_max=None, ethresh=None, fname=None, name='delta-rmf')[source] [edit on github]
Create an RMF.
If fname is set to None then this creates a “perfect” RMF, which has a delta-function response (so each channel uniquely maps to a single energy bin), otherwise the RMF is taken from the image data stored in the file pointed to by fname.
New in version 4.10.1.
- Parameters
rmflo (array) – The energy bins (low and high, in keV) for the RMF. It is assumed that rmfhi_i > rmflo_i, rmflo_i > 0, that the energy bins are either ascending, so rmflo_i+1 > rmflo_i, or descending (rmflo_i+1 < rmflo_i), and that there are no overlaps. These correspond to the Elow and Ehigh columns (represented by the ENERG_LO and ENERG_HI columns of the MATRIX block) of the OGIP standard.
rmfhi (array) – The energy bins (low and high, in keV) for the RMF. It is assumed that rmfhi_i > rmflo_i, rmflo_i > 0, that the energy bins are either ascending, so rmflo_i+1 > rmflo_i, or descending (rmflo_i+1 < rmflo_i), and that there are no overlaps. These correspond to the Elow and Ehigh columns (represented by the ENERG_LO and ENERG_HI columns of the MATRIX block) of the OGIP standard.
startchan (int, optional) – The starting channel number: expected to be 0 or 1 but this is not enforced.
e_min (None or array, optional) – The E_MIN and E_MAX columns of the EBOUNDS block of the RMF.
e_max (None or array, optional) – The E_MIN and E_MAX columns of the EBOUNDS block of the RMF.
ethresh (number or None, optional) – Passed through to the DataRMF call. It controls whether zero-energy bins are replaced.
fname (None or str, optional) – If None then a “perfect” RMF is generated, otherwise it gives the name of the two-dimensional image file which stores the response information (the format of this file matches that created by the CIAO tool rmfimg [1]_).
name (str, optional) – The name of the RMF data set
- Returns
rmf
- Return type
DataRMF instance
See also
References
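The “perfect” RMF created when fname is None can be pictured as an identity matrix: each channel collects counts from exactly one energy bin, so folding a model spectrum through it leaves the values unchanged. A pure-NumPy sketch of that idea (an illustration, not the actual DataRMF implementation):

```python
import numpy as np

nbins = 5
# A delta-function response: channel i maps uniquely to energy bin i.
perfect_rmf = np.eye(nbins)

model_spectrum = np.array([10.0, 20.0, 15.0, 5.0, 2.0])
folded = perfect_rmf @ model_spectrum
print(folded)  # identical to model_spectrum
```

A real RMF read from a file would instead have off-diagonal terms that redistribute counts between channels.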
- dataspace1d(start, stop, step=1, numbins=None, id=None, bkg_id=None, dstype=<class 'sherpa.data.Data1DInt'>)[source] [edit on github]
Create the independent axis for a 1D data set.
Create an “empty” one-dimensional data set by defining the grid on which the points are defined (the independent axis). The values are set to 0.
- Parameters
start (number) – The minimum value of the axis.
stop (number) – The maximum value of the axis.
step (number, optional) – The separation between each grid point. This is not used if numbins is set.
numbins (int, optional) – The number of grid points. This over-rides the step setting.
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If set, the grid is for the background component of the data set.
dstype (data class to use, optional) – What type of data is to be used. Supported values include Data1DInt (the default), Data1D, and DataPHA.
See also
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
get_indep
Return the independent axes of a data set.
set_dep
Set the dependent axis of a data set.
Notes
The meaning of the stop parameter depends on whether it is a binned or unbinned data set (as set by the dstype parameter).
Examples
Create a binned data set, starting at 1 and with a bin-width of 1.
>>> dataspace1d(1, 5, 1)
>>> print(get_indep())
(array([ 1., 2., 3., 4.]), array([ 2., 3., 4., 5.]))
This time for an un-binned data set:
>>> dataspace1d(1, 5, 1, dstype=Data1D)
>>> print(get_indep())
(array([ 1., 2., 3., 4., 5.]),)
Specify the number of bins rather than the grid spacing:
>>> dataspace1d(1, 5, numbins=5, id=2)
>>> (xlo, xhi) = get_indep(2)
>>> xlo
array([ 1. , 1.8, 2.6, 3.4, 4.2])
>>> xhi
array([ 1.8, 2.6, 3.4, 4.2, 5. ])
>>> dataspace1d(1, 5, numbins=5, id=3, dstype=Data1D)
>>> (x, ) = get_indep(3)
>>> x
array([ 1., 2., 3., 4., 5.])
Create a grid for a PHA data set called ‘jet’, and for its background component:
>>> dataspace1d(0.01, 11, 0.01, id='jet', dstype=DataPHA)
>>> dataspace1d(0.01, 11, 0.01, id='jet', bkg_id=1,
...             dstype=DataPHA)
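The binned grid produced in the first example can be reproduced with plain NumPy, which makes the start, stop, and step behaviour explicit (a sketch, assuming the Data1DInt convention of separate low and high bin edges):

```python
import numpy as np

start, stop, step = 1, 5, 1

# bin edges from start to stop inclusive, then split into lo/hi pairs
edges = np.arange(start, stop + step, step, dtype=float)
xlo, xhi = edges[:-1], edges[1:]

print(xlo)  # [1. 2. 3. 4.]
print(xhi)  # [2. 3. 4. 5.]
```

This matches the output of `print(get_indep())` shown above for `dataspace1d(1, 5, 1)`.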
- dataspace2d(dims, id=None, dstype=<class 'sherpa.astro.data.DataIMG'>)[source] [edit on github]
Create the independent axis for a 2D data set.
Create an “empty” two-dimensional data set by defining the grid on which the points are defined (the independent axis). The values are set to 0.
- Parameters
dims (sequence of 2 number) – The dimensions of the grid in (width, height) order.
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
dstype (data class to use, optional) – What type of data is to be used. Supported values include DataIMG (the default), Data2D, and Data2DInt.
See also
dataspace1d
Create the independent axis for a 1D data set.
get_dep
Return the dependent axis of a data set.
get_indep
Return the independent axes of a data set.
set_dep
Set the dependent axis of a data set.
Examples
Create a 200 pixel by 150 pixel grid (number of columns by number of rows) and display it (each pixel has a value of 0):
>>> dataspace2d([200, 150])
>>> image_data()
Create a data space called “fakeimg”:
>>> dataspace2d([nx, ny], id="fakeimg")
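The grid behind a 2D data set can be sketched with NumPy: for dims given in (width, height) order, the pixel coordinate axes span the width and height, and the dependent axis starts out as zeros (an illustration of the grid shape only, not the DataIMG internals):

```python
import numpy as np

width, height = 200, 150

# 1-based pixel coordinates; rows index the height, columns the width
x1, x0 = np.mgrid[1:height + 1, 1:width + 1]
y = np.zeros(width * height)  # "empty" dependent axis

print(x0.shape)  # (150, 200): height rows by width columns
print(y.sum())   # 0.0
```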
- delete_bkg_model(id=None, bkg_id=None)[source] [edit on github]
Delete the background model expression for a data set.
This removes the model expression, created by set_bkg_model, for the background component of a data set. It does not delete the components of the expression, or remove the models for any other background components or the source of the data set.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or string, optional) – The identifier for the background component to use.
See also
clean
Clear all stored session data.
delete_model
Delete the model expression for a data set.
get_default_id
Return the default data set identifier.
list_bkg_ids
List all the background identifiers for a data set.
set_model
Set the source model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Remove the background model expression for the default data set:
>>> delete_bkg_model()
Remove the model expression for the background component labelled ‘down’ for the data set with the identifier ‘src’:
>>> delete_bkg_model('src', 'down')
- delete_data(id=None)[source] [edit on github]
Delete a data set by identifier.
The data set, and any associated structures - such as the ARF and RMF for PHA data sets - are removed.
- Parameters
id (int or str, optional) – The data set to delete. If not given then the default identifier is used, as returned by get_default_id.
See also
clean
Clear all stored session data.
copy_data
Copy a data set to a new identifier.
delete_model
Delete the model expression from a data set.
get_default_id
Return the default data set identifier.
list_data_ids
List the identifiers for the loaded data sets.
Notes
The source expression is not removed by this function.
Examples
Delete the data from the default data set:
>>> delete_data()
Delete the data set identified as ‘src’:
>>> delete_data('src')
- delete_model(id=None)[source] [edit on github]
Delete the model expression for a data set.
This removes the model expression, created by set_model, for a data set. It does not delete the components of the expression.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
See also
clean
Clear all stored session data.
delete_data
Delete a data set by identifier.
get_default_id
Return the default data set identifier.
set_model
Set the source model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Remove the model expression for the default data set:
>>> delete_model()
Remove the model expression for the data set with the identifier called ‘src’:
>>> delete_model('src')
- delete_model_component(name)[source] [edit on github]
Delete a model component.
- Parameters
name (str) – The name used to refer to this instance, or component, of the model. The corresponding Python variable will be deleted by this function.
See also
create_model_component
Create a model component.
delete_model
Delete the model expression for a data set.
list_models
List the available model types.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
Notes
It is an error to try to delete a component that is part of a model expression - i.e. included as part of an expression in a set_model or set_source call. In such a situation, use the delete_model function to remove the source expression before calling delete_model_component.
Examples
If a model instance called pl has been created - e.g. by create_model_component('powlaw1d', 'pl') - then the following will remove it:
>>> delete_model_component('pl')
- delete_pileup_model(id=None)[source] [edit on github]
Delete the pile up model for a data set.
Remove the pile up model applied to a source model.
New in version 4.12.2.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
See also
get_pileup_model
Return the pile up model for a data set.
list_pileup_model_ids
List of all the data sets with a pile up model.
set_pileup_model
Add a pile up model to a data set.
Examples
>>> delete_pileup_model()
>>> delete_pileup_model('core')
- delete_psf(id=None)[source] [edit on github]
Delete the PSF model for a data set.
Remove the PSF convolution applied to a source model.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
See also
list_psf_ids
List of all the data sets with a PSF.
load_psf
Create a PSF model.
set_psf
Add a PSF model to a data set.
get_psf
Return the PSF model defined for a data set.
Examples
>>> delete_psf()
>>> delete_psf('core')
- eqwidth(src, combo, id=None, lo=None, hi=None, bkg_id=None, error=False, params=None, otherids=(), niter=1000, covar_matrix=None)[source] [edit on github]
Calculate the equivalent width of an emission or absorption line.
The equivalent width [1]_ is calculated in the selected units for the data set (which can be retrieved with get_analysis).
Changed in version 4.10.1: The error parameter was added which controls whether the return value is a scalar (the calculated equivalent width), when set to False, or the median value, error limits, and ancillary values.
- Parameters
src – The continuum model (this may contain multiple components).
combo – The continuum plus line (absorption or emission) model.
lo (optional) – The lower limit for the calculation (the units are set by set_analysis for the data set). The default value (None) means that the lower range of the data set is used.
hi (optional) – The upper limit for the calculation (the units are set by set_analysis for the data set). The default value (None) means that the upper range of the data set is used.
id (int or string, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
error (bool, optional) – The parameter indicates whether the errors are to be calculated or not. The default value is False.
params (2D array, optional) – The default is None, in which case get_draws shall be called. The user can input the parameter array (e.g. from running sample_flux).
otherids (sequence of integer or strings, optional) – Other data sets to use in the calculation.
niter (int, optional) – The number of draws to use. The default is 1000.
covar_matrix (2D array, optional) – The covariance matrix to use. If None then the result from get_covar_results().extra_output is used.
- Returns
If error is False, then returns the equivalent width, otherwise the median, 1 sigma lower bound, 1 sigma upper bound, the parameters array, and the array of the equivalent width values used to determine the errors.
- Return type
retval
See also
calc_model_sum
Sum up the fitted model over a pass band.
calc_source_sum
Calculate the un-convolved model signal.
get_default_id
Return the default data set identifier.
set_model
Set the source model expression.
References
Examples
Set a source model (a powerlaw for the continuum and a gaussian for the line), fit it, and then evaluate the equivalent width of the line. The example assumes that this is a PHA data set, with an associated response, so that the analysis can be done in wavelength units.
>>> set_source(powlaw1d.cont + gauss1d.line)
>>> set_analysis('wavelength')
>>> fit()
>>> eqwidth(cont, cont+line)
2.1001988282497308
The calculation is restricted to the range 20 to 24 Angstroms.
>>> eqwidth(cont, cont+line, lo=20, hi=24)
1.9882824973082310
The calculation is done for the background model of data set 2, over the range 0.5 to 2 (the units of this are whatever the analysis setting for this data set id).
>>> set_bkg_source(2, const1d.flat + gauss1d.bline)
>>> eqwidth(flat, flat+bline, id=2, bkg_id=1, lo=0.5, hi=2)
0.45494599793003426
With the error flag set to True, the return value is enhanced with extra information, such as the median and one-sigma ranges on the equivalent width:
>>> res = eqwidth(p1, p1 + g1, error=True)
>>> ewidth = res[0]  # the median equivalent width
>>> errlo = res[1]   # the one-sigma lower limit
>>> errhi = res[2]   # the one-sigma upper limit
>>> pars = res[3]    # the parameter values used
>>> ews = res[4]     # array of eq. width values
which can be used to display the probability density or cumulative distribution function of the equivalent widths:
>>> plot_pdf(ews)
>>> plot_cdf(ews)
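The quantity eqwidth reports can be sketched numerically: for a continuum model c(x) and a combined model m(x), the equivalent width is the integral of (m - c) / c over the band. A pure-NumPy example with a flat continuum and a Gaussian emission line, using hypothetical parameter values (this illustrates the definition, not Sherpa's implementation):

```python
import numpy as np

# wavelength grid and a flat continuum of 2.0
x = np.linspace(-50.0, 50.0, 20001)
dx = x[1] - x[0]
cont = np.full_like(x, 2.0)

# a Gaussian emission line (hypothetical fwhm and amplitude)
fwhm, ampl = 4.0, 5.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
line = ampl * np.exp(-0.5 * (x / sigma) ** 2)

# equivalent width: integrate (combo - continuum) / continuum
ew = np.sum(((cont + line) - cont) / cont) * dx

# analytic value for a Gaussian line on a flat continuum
expected = ampl * sigma * np.sqrt(2.0 * np.pi) / 2.0
```

The numerical integral and the analytic Gaussian area agree closely, confirming that the equivalent width is the line flux expressed in units of the local continuum level.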
- fake(id=None, method=<function poisson_noise>)[source] [edit on github]
Simulate a data set.
Take a data set, evaluate the model for each bin, and then use this value to create a data value from each bin. The default behavior is to use a Poisson distribution, with the model value as the expectation value of the distribution.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
method (func) – The function used to create a random realisation of a data set.
See also
dataspace1d
Create the independent axis for a 1D data set.
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
load_arrays
Create a data set from array values.
set_model
Set the source model expression for a data set.
Notes
The function for the method argument accepts a single argument, the data values, and should return an array of the same shape as the input, with the data values to use.
The function can be called on any data set; it does not need to have been created with dataspace1d or dataspace2d.
Specific data set types may have their own, specialized, version of this function.
Examples
Create a random realisation of the model - a constant plus gaussian line - for the range x=-5 to 5.
>>> dataspace1d(-5, 5, 0.5, dstype=Data1D)
>>> set_source(gauss1d.gline + const1d.bgnd)
>>> bgnd.c0 = 2
>>> gline.fwhm = 4
>>> gline.ampl = 5
>>> gline.pos = 1
>>> fake()
>>> plot_data()
>>> plot_model(overplot=True)
For a 2D data set, display the simulated data, model, and residuals:
>>> dataspace2d([150, 80], id='fakeimg')
>>> set_source('fakeimg', beta2d.src + polynom2d.bg)
>>> src.xpos, src.ypos = 75, 40
>>> src.r0, src.alpha = 15, 2.3
>>> src.ellip, src.theta = 0.4, 1.32
>>> src.ampl = 100
>>> bg.c, bg.cx1, bg.cy1 = 3, 0.4, 0.3
>>> fake('fakeimg')
>>> image_fit('fakeimg')
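What fake does per bin can be sketched directly: evaluate the model on the grid, then draw each data value from a Poisson distribution whose mean is the model value. A pure-NumPy sketch of the default poisson_noise behaviour, using the model parameters from the first example:

```python
import numpy as np

rng = np.random.default_rng(42)

# model: gaussian line plus constant background, evaluated on a grid
x = np.arange(-5.0, 5.0, 0.5)
fwhm, ampl, pos, bgnd = 4.0, 5.0, 1.0, 2.0
sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
model = bgnd + ampl * np.exp(-0.5 * ((x - pos) / sigma) ** 2)

# simulated data: one Poisson draw per bin, mean = model value
simulated = rng.poisson(model)
print(simulated.shape == model.shape)  # True
```

Passing a different function as the method argument would replace the `rng.poisson` step with that function.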
- fake_pha(id, arf, rmf, exposure, backscal=None, areascal=None, grouping=None, grouped=False, quality=None, bkg=None)[source] [edit on github]
Simulate a PHA data set from a model.
The function creates a simulated PHA data set based on a source model, instrument response (given as an ARF and RMF), and exposure time, along with a Poisson noise term. A background component can be included.
- Parameters
id (int or str) – The identifier for the data set to create. If it already exists then it is assumed to contain a PHA data set and the counts will be over-written.
arf (filename or ARF object or list of filenames) – The name of the ARF, or an ARF data object (e.g. as returned by get_arf or unpack_arf). A list of filenames can be passed in for instruments that require multiple ARFs. Set this to None to use any ARF that is already set for the data set given by id.
rmf (filename or RMF object or list of filenames) – The name of the RMF, or an RMF data object (e.g. as returned by get_rmf or unpack_rmf). A list of filenames can be passed in for instruments that require multiple RMFs. Set this to None to use any RMF that is already set for the data set given by id.
exposure (number) – The exposure time, in seconds.
backscal (number, optional) – The ‘BACKSCAL’ value for the data set.
areascal (number, optional) – The ‘AREASCAL’ value for the data set.
grouping (array, optional) – The grouping array for the data (see set_grouping).
grouped (bool, optional) – Should the simulated data be grouped (see group)? The default is False. This value is only used if the grouping parameter is set.
quality (array, optional) – The quality array for the data (see set_quality).
bkg (optional) – If left empty, then only the source emission is simulated. If set to a PHA data object, then the counts from this data set are scaled appropriately and added to the simulated source signal. To use a background model, set bkg="model". In that case a background dataset with bkg_id=1 has to be set before calling fake_pha. That background dataset needs to include the data itself (not used in this function), the background model, and the response.
- Raises
sherpa.utils.err.ArgumentErr – If the data set already exists and does not contain PHA data.
See also
Notes
A model expression is created by using the supplied ARF and RMF to convolve the source expression for the dataset (the return value of get_source for the supplied id parameter). This expression is evaluated for each channel to create the expectation values, which are then passed to a Poisson random number generator to determine the observed number of counts per channel. Any background component is scaled by appropriate terms (exposure time, area scaling, and the backscal value) before it is passed to a Poisson random number generator. The simulated background is added to the simulated data.
Examples
Estimate the signal from a 5000 second observation using the ARF and RMF from “src.arf” and “src.rmf” respectively:
>>> set_source(1, xsphabs.gal * xsapec.clus)
>>> gal.nh = 0.12
>>> clus.kt, clus.abundanc = 4.5, 0.3
>>> clus.redshift = 0.187
>>> clus.norm = 1.2e-3
>>> fake_pha(1, 'src.arf', 'src.rmf', 5000)
Simulate a 1 mega second observation for the data and model from the default data set. The simulated data will include an estimated background component based on scaling the existing background observations for the source. The simulated data set, which has the same grouping as the default set, for easier comparison, is created with the ‘sim’ label and then written out to the file ‘sim.pi’:
>>> arf = get_arf()
>>> rmf = get_rmf()
>>> bkg = get_bkg()
>>> bscal = get_backscal()
>>> grp = get_grouping()
>>> qual = get_quality()
>>> texp = 1e6
>>> set_source('sim', get_source())
>>> fake_pha('sim', arf, rmf, texp, backscal=bscal, bkg=bkg,
...          grouping=grp, quality=qual, grouped=True)
>>> save_pha('sim', 'sim.pi')
Sometimes, the background dataset is noisy because there are not enough photons in the background region. In this case, the background model can be used to generate the photons that the background contributes to the source spectrum. To do this, a background model must be passed in. This model is then convolved with the ARF and RMF (which must be set before) of the default background data set:
>>> set_bkg_source('sim', 'const1d.con1')
>>> load_arf('sim', 'bkg.arf.fits', bkg_id=1)
>>> load_rmf('sim', 'bkg_rmf.fits', bkg_id=1)
>>> fake_pha('sim', arf, rmf, texp, backscal=bscal, bkg='model',
...          grouping=grp, quality=qual, grouped=True)
>>> save_pha('sim', 'sim.pi')
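The background scaling mentioned in the Notes can be written out as arithmetic: the background counts are scaled by the ratios of the exposure times, backscal, and areascal values of the source and background data sets before being added to the simulated signal. A simplified sketch, assuming those three terms are the only scalings involved:

```python
import numpy as np

def scale_background(bkg_counts, src_exposure, bkg_exposure,
                     src_backscal, bkg_backscal,
                     src_areascal=1.0, bkg_areascal=1.0):
    """Scale background counts into the source exposure and aperture."""
    scale = ((src_exposure / bkg_exposure)
             * (src_backscal / bkg_backscal)
             * (src_areascal / bkg_areascal))
    return np.asarray(bkg_counts, dtype=float) * scale

# background region 4x larger and observed twice as long as the source,
# so each background bin contributes 0.125 of its counts
scaled = scale_background([8.0, 4.0, 12.0],
                          src_exposure=5000.0, bkg_exposure=10000.0,
                          src_backscal=0.1, bkg_backscal=0.4)
print(scaled)  # each bin multiplied by 0.125
```

In fake_pha the scaled values then become the mean of the Poisson draw for the background contribution.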
- fit(id=None, *otherids, **kwargs)[source] [edit on github]
Fit a model to one or more data sets.
Use forward fitting to find the best-fit model to one or more data sets, given the chosen statistic and optimization method. The fit proceeds until the results converge or the number of iterations exceeds the maximum value (these values can be changed with set_method_opt). An iterative scheme can be added using set_iter_method to try and improve the fit. The final fit results are displayed to the screen and can be retrieved with get_fit_results.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are fit simultaneously.
*otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
outfile (str, optional) – If set, then the fit results will be written to a file with this name. The file contains the per-iteration fit results.
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.FitErr – If outfile already exists and clobber is False.
See also
conf
Estimate the confidence intervals using the confidence method.
contour_fit
Contour the fit to a data set.
covar
Estimate the confidence intervals using the covariance method.
fit_bkg
Fit a model to one or more background PHA data sets.
freeze
Fix model parameters so they are not changed by a fit.
get_fit_results
Return the results of the last fit.
plot_fit
Plot the fit results (data, model) for a data set.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
set_stat
Set the statistical method.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_iter_method
Set the iterative-fitting scheme used in the fit.
set_model
Set the model expression for a data set.
show_fit
Summarize the fit results.
thaw
Allow model parameters to be varied during a fit.
Notes
For PHA data sets with background components, the function will fit any background components for which a background model has been created (rather than being subtracted). The fit_bkg function can be used to fit models to just the background data.
Examples
Simultaneously fit all data sets with models and then store the results in the variable fres:
>>> fit()
>>> fres = get_fit_results()
Fit just the data set ‘img’:
>>> fit('img')
Simultaneously fit data sets 1, 2, and 3:
>>> fit(1, 2, 3)
Fit data set ‘jet’ and write the fit results to the text file ‘jet.fit’, over-writing it if it already exists:
>>> fit('jet', outfile='jet.fit', clobber=True)
- fit_bkg(id=None, *otherids, **kwargs)[source] [edit on github]
Fit a model to one or more background PHA data sets.
Fit only the background components of PHA data sets. This can be used to find the best-fit background parameters, which can then be frozen before fitting the data, or to ensure that these parameters are well defined before performing a simultaneous source and background fit.
- Parameters
id (int or str, optional) – The data set that provides the background data. If not given then all data sets with an associated background model are fit simultaneously.
*otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
outfile (str, optional) – If set, then the fit results will be written to a file with this name. The file contains the per-iteration fit results.
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.FitErr – If outfile already exists and clobber is False.
See also
conf
Estimate the confidence intervals using the confidence method.
contour_fit
Contour the fit to a data set.
covar
Estimate the confidence intervals using the confidence method.
fit
Fit a model to one or more data sets.
freeze
Fix model parameters so they are not changed by a fit.
get_fit_results
Return the results of the last fit.
plot_fit
Plot the fit results (data, model) for a data set.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
set_stat
Set the statistical method.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_iter_method
Set the iterative-fitting scheme used in the fit.
set_model
Set the model expression for a data set.
show_bkg_source
Display the background model expression for a data set.
show_bkg_model
Display the background model expression used to fit a data set.
show_fit
Summarize the fit results.
thaw
Allow model parameters to be varied during a fit.
Notes
This is only for PHA data sets where the background is being modelled, rather than subtracted from the data.
Examples
Simultaneously fit all background data sets with models and then store the results in the variable fres:
>>> fit_bkg()
>>> fres = get_fit_results()
Fit the background for data sets 1 and 2, then do a simultaneous fit to the source and background data sets:
>>> fit_bkg(1, 2)
>>> fit(1, 2)
- freeze(*args)[source] [edit on github]
Fix model parameters so they are not changed by a fit.
The arguments can be parameters or models, in which case all parameters of the model are frozen. If no arguments are given then nothing is changed.
See also
Notes
The thaw function can be used to reverse this setting, so that parameters can be varied in a fit.
Examples
Fix the FWHM parameter of the line model (in this case a gauss1d model) so that it will not be varied in the fit:
>>> set_source(const1d.bgnd + gauss1d.line)
>>> line.fwhm = 2.1
>>> freeze(line.fwhm)
>>> fit()
Freeze all parameters of the line model and then re-fit:
>>> freeze(line)
>>> fit()
Freeze the nh parameter of the gal model and the abund parameter of the src model:
>>> freeze(gal.nh, src.abund)
- get_analysis(id=None)[source] [edit on github]
Return the units used when fitting spectral data.
- Parameters
id (int or str, optional) – The data set to query. If not given then the default identifier is used, as returned by get_default_id.
- Returns
setting – The analysis setting for the data set.
- Return type
{ ‘channel’, ‘energy’, ‘wavelength’ }
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the id argument is not recognized.
See also
get_default_id
Return the default data set identifier.
set_analysis
Change the analysis setting.
Examples
Display the analysis setting for the default data set:
>>> print(get_analysis())
Check whether the data set labelled ‘SgrA’ is using the wavelength setting:
>>> is_wave = get_analysis('SgrA') == 'wavelength'
- get_areascal(id=None, bkg_id=None)[source] [edit on github]
Return the fractional area factor of a PHA data set.
Return the AREASCAL setting [1]_ for the source or background component of a PHA data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set to identify which background component to use. The default value (None) means that the value is for the source component of the data set.
- Returns
areascal – The AREASCAL value, which can be a scalar or a 1D array.
- Return type
number or ndarray
See also
get_backscal
Return the area scaling of a PHA data set.
set_areascal
Change the fractional area factor of a PHA data set.
Notes
The fractional area scale is normally set to 1, with the ARF used to scale the model.
References
- 1
“The OGIP Spectral File Format”, Arnaud, K. & George, I. http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
Return the AREASCAL value for the default data set:
>>> get_areascal()
Return the AREASCAL value for the first background component of dataset 2:
>>> get_areascal(id=2, bkg_id=1)
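Since AREASCAL is a simple multiplicative fractional-area correction, its effect on predicted counts can be sketched in plain Python. The function name and values below are illustrative only, not part of the Sherpa API:

```python
# Illustrative only: apply a fractional-area factor (AREASCAL) to the
# counts predicted by a model. Names and values are hypothetical.
def apply_areascal(model_counts, areascal=1.0):
    """Scale predicted per-bin counts by the fractional exposed area."""
    return [counts * areascal for counts in model_counts]

predicted = [120.0, 80.0, 40.0]   # per-bin counts from a model
print(apply_areascal(predicted, areascal=0.5))   # -> [60.0, 40.0, 20.0]
```

In practice Sherpa applies this factor (usually 1, as noted above) together with the ARF when evaluating a model.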
- get_arf(id=None, resp_id=None, bkg_id=None)[source] [edit on github]
Return the ARF associated with a PHA data set.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
resp_id (int or str, optional) – The identifier for the ARF within this data set, if there are multiple responses.
bkg_id (int or str, optional) – Set this to return the given background component.
- Returns
arf – This is a reference to the ARF, rather than a copy, so that changing the fields of the object will change the values in the data set.
- Return type
a sherpa.astro.instrument.ARF1D instance
See also
fake_pha
Simulate a PHA data set from a model.
get_response
Return the response information applied to a PHA data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_arf
Read in an ARF from a file.
Examples
Return the exposure field of the ARF from the default data set:
>>> get_arf().exposure
Copy the ARF from the default data set to data set 2:
>>> arf1 = get_arf()
>>> set_arf(2, arf1)
Retrieve the ARF associated with the second background component of the ‘core’ data set:
>>> bgarf = get_arf('core', 'bkg.arf', bkg_id=2)
Retrieve the ARF and RMF for the default data set and use them to create a model expression which includes a power-law component (pbgnd) that is not convolved by the response:
>>> arf = get_arf()
>>> rmf = get_rmf()
>>> src_expr = xsphabs.abs1 * powlaw1d.psrc
>>> set_full_model(rmf(arf(src_expr)) + powlaw1d.pbgnd)
>>> print(get_model())
- get_arf_plot(id=None, resp_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_arf.
- Parameters
id (int or str, optional) – The data set with an ARF. If not given then the default identifier is used, as returned by get_default_id.
resp_id (int or str, optional) – Which ARF to use in the case that multiple ARFs are associated with a data set. The default is None, which means the first one.
recalc (bool, optional) – If False then the results from the last call to plot_arf (or get_arf_plot) are returned, otherwise the data is re-generated.
- Returns
arf_plot
- Return type
a sherpa.astro.plot.ARFPlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
Examples
Return the ARF plot data for the default data set:
>>> aplot = get_arf_plot()
>>> aplot.y.max()
676.95794677734375
Return the ARF data for the second response of the data set labelled ‘histate’, and then plot it:
>>> aplot = get_arf_plot('histate', 2)
>>> aplot.plot()
- get_axes(id=None, bkg_id=None)[source] [edit on github]
Return information about the independent axes of a data set.
This function returns the coordinates of each point, or pixel, in the data set. The get_indep function may be preferred in some situations.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns
axis – The independent axis values. The difference to get_indep is that this represents the “alternate grid” for the axis. For PHA data, this is the energy grid (E_MIN and E_MAX). For image data it is an array for each axis, of the length of the axis, using the current coordinate system for the data set.
- Return type
tuple of arrays
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_indep
Return the independent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
For 1D data sets, the “alternate” view is the same as the independent axis:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], Data1D)
>>> get_indep()
array([10, 15, 19])
>>> get_axes()
array([10, 15, 19])
For a PHA data set, the approximate energy grid of the channels is returned (this is determined by the EBOUNDS extension of the RMF).
>>> load_pha('core', 'src.pi')
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi
>>> (chans,) = get_indep()
>>> (elo, ehi) = get_axes()
>>> chans[0:5]
array([ 1.,  2.,  3.,  4.,  5.])
>>> elo[0:5]
array([ 0.0073,  0.0146,  0.0292,  0.0438,  0.0584])
>>> ehi[0:5]
array([ 0.0146,  0.0292,  0.0438,  0.0584,  0.073 ])
The image has 101 columns by 108 rows. The get_indep function returns one-dimensional arrays, for the full dataset, whereas get_axes returns values for the individual axes:
>>> load_image('img', 'img.fits')
>>> get_data('img').shape
(108, 101)
>>> set_coord('img', 'physical')
>>> (x0, x1) = get_indep('img')
>>> (a0, a1) = get_axes('img')
>>> (x0.size, x1.size)
(10908, 10908)
>>> (a0.size, a1.size)
(101, 108)
>>> np.all(x0[:101] == a0)
True
>>> np.all(x1[::101] == a1)
True
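The indexing relationship between the flattened per-pixel grids and the per-axis values can be sketched with a small numpy grid. The sizes here are stand-ins for the 101-by-108 image, and the grid construction is an illustration, not Sherpa code:

```python
# Illustrative sketch: the flattened per-pixel grids (like get_indep)
# repeat the per-axis values (like get_axes) in a predictable pattern.
import numpy as np

nx, ny = 4, 3                                # columns (axis 0), rows (axis 1)
a0 = np.arange(1, nx + 1)                    # per-axis values, as from get_axes
a1 = np.arange(1, ny + 1)
x1, x0 = np.meshgrid(a1, a0, indexing='ij')  # per-pixel grids, as from get_indep
x0 = x0.ravel()
x1 = x1.ravel()

assert np.all(x0[:nx] == a0)    # the first row repeats the axis-0 values
assert np.all(x1[::nx] == a1)   # stepping by nx picks out the axis-1 values
```

This mirrors the `x0[:101] == a0` and `x1[::101] == a1` checks in the example above.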
- get_backscal(id=None, bkg_id=None)[source] [edit on github]
Return the BACKSCAL scaling of a PHA data set.
Return the BACKSCAL setting [1]_ for the source or background component of a PHA data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set to identify which background component to use. The default value (None) means that the value is for the source component of the data set.
- Returns
backscal – The BACKSCAL value, which can be a scalar or a 1D array.
- Return type
number or ndarray
See also
get_areascal
Return the fractional area factor of a PHA data set.
get_bkg_scale
Return the background scaling factor for a PHA data set.
set_backscal
Change the area scaling of a PHA data set.
Notes
The BACKSCAL value can be defined as the ratio of the area of the source (or background) extraction region in image pixels to the total number of image pixels. The fact that there is no ironclad definition for this quantity does not matter so long as the values for a source dataset and its associated background dataset are defined in a similar manner, because only the ratio of source and background BACKSCAL values is used. The value can be a scalar or an array.
References
- 1
“The OGIP Spectral File Format”, Arnaud, K. & George, I. http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
>>> get_backscal()
7.8504301607718007e-06
>>> get_backscal(bkg_id=1)
0.00022745132446289
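Since only the ratio of the source and background BACKSCAL values is used, the rescaling can be sketched in plain Python with the two values from the example output above (the background counts are hypothetical; this is not Sherpa code):

```python
# BACKSCAL values taken from the example output above.
backscal_src = 7.8504301607718007e-06
backscal_bkg = 0.00022745132446289

# Only this ratio matters when scaling the background to the source region.
ratio = backscal_src / backscal_bkg
bkg_counts = 2000.0                    # hypothetical counts in the background region
in_src_region = bkg_counts * ratio     # expected contribution to the source region
print(round(ratio, 9))                 # -> 0.03451477
```

When the exposure times and AREASCAL values of the two regions match, this BACKSCAL ratio is the whole scaling factor (compare the get_bkg_scale example value later in this page).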
- get_bkg(id=None, bkg_id=None)[source] [edit on github]
Return the background for a PHA data set.
Function to return the background for a PHA data set. The object returned by the call can be used to query and change properties of the background.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – The identifier for this background, which is needed if there are multiple background estimates for the source.
- Returns
data
- Return type
a sherpa.astro.data.DataPHA object
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
sherpa.utils.err.IdentifierErr – If no data set is associated with this identifier.
See also
Examples
>>> bg = get_bkg()
>>> bg = get_bkg('flare', 2)
- get_bkg_arf(id=None)[source] [edit on github]
Return the background ARF associated with a PHA data set.
This is for the case when there is only one background component and one background response. If this does not hold, use get_arf with the bkg_id and resp_id arguments.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns
arf – This is a reference to the ARF, rather than a copy, so that changing the fields of the object will change the values in the data set.
- Return type
a sherpa.astro.instrument.ARF1D instance
See also
fake_pha
Simulate a PHA data set from a model.
load_bkg_arf
Load an ARF from a file and add it to the background of a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_arf
Read in an ARF from a file.
Examples
Return the exposure field of the ARF from the background of the default data set:
>>> get_bkg_arf().exposure
Copy the ARF from the default data set to data set 2, as the first component:
>>> arf1 = get_bkg_arf()
>>> set_arf(2, arf1, bkg_id=1)
- get_bkg_chisqr_plot(id=None, bkg_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_chisqr.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_chisqr (or get_bkg_chisqr_plot) are returned, otherwise the data is re-generated.
- Returns
chisqr – An object representing the data used to create the plot by plot_bkg_chisqr.
- Return type
a sherpa.astro.plot.BkgChisqrPlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
get_bkg_ratio_plot
Return the data used by plot_bkg_ratio.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot_bkg_chisqr
Plot the chi-squared value for each point of the background of a PHA data set.
Examples
>>> bplot = get_bkg_chisqr_plot()
>>> print(bplot)
>>> get_bkg_chisqr_plot('jet', bkg_id=1).plot()
>>> get_bkg_chisqr_plot('jet', bkg_id=2).overplot()
- get_bkg_delchi_plot(id=None, bkg_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_delchi.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_delchi (or get_bkg_delchi_plot) are returned, otherwise the data is re-generated.
- Returns
delchi – An object representing the data used to create the plot by plot_bkg_delchi.
- Return type
a sherpa.astro.plot.BkgDelchiPlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_chisqr_plot
Return the data used by plot_bkg_chisqr.
get_bkg_ratio_plot
Return the data used by plot_bkg_ratio.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot_bkg_delchi
Plot the ratio of residuals to error for the background of a PHA data set.
Examples
>>> bplot = get_bkg_delchi_plot()
>>> print(bplot)
>>> get_bkg_delchi_plot('jet', bkg_id=1).plot()
>>> get_bkg_delchi_plot('jet', bkg_id=2).overplot()
- get_bkg_fit_plot(id=None, bkg_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_fit.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_fit (or get_bkg_fit_plot) are returned, otherwise the data is re-generated.
- Returns
model – An object representing the data used to create the plot by plot_bkg_fit.
- Return type
a sherpa.astro.plot.BkgFitPlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_plot
Return the data used by plot_bkg.
get_bkg_model_plot
Return the data used by plot_bkg_model.
plot_bkg_fit
Plot the fit results (data, model) for the background of a PHA data set.
Examples
Create the data needed to create the “fit plot” for the background of the default data set and display it:
>>> bplot = get_bkg_fit_plot()
>>> print(bplot)
Return the plot data for data set 2, and then use it to create a plot:
>>> b2 = get_bkg_fit_plot(2)
>>> b2.plot()
The fit plot consists of a combination of a data plot and a model plot, which are captured in the dataplot and modelplot attributes of the return value. These can be used to display the plots individually, such as:
>>> b2.dataplot.plot()
>>> b2.modelplot.plot()
or, to combine the two:
>>> b2.dataplot.plot()
>>> b2.modelplot.overplot()
Return the plot data for the second background component to the “jet” data set:
>>> bplot = get_bkg_fit_plot('jet', bkg_id=2)
- get_bkg_model(id=None, bkg_id=None)[source] [edit on github]
Return the model expression for the background of a PHA data set.
This returns the model expression for the background of a data set, including the instrument response (e.g. ARF and RMF), whether created automatically or explicitly with set_bkg_full_model.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
- Returns
This can contain multiple model components and any instrument response. Changing attributes of this model changes the model used by the data set.
- Return type
instance
See also
delete_bkg_model
Delete the background model expression for a data set.
get_bkg_source
Return the model expression for the background of a PHA data set.
list_model_ids
List of all the data sets with a source expression.
set_bkg_model
Set the background model expression for a PHA data set.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
show_bkg_model
Display the background model expression for a data set.
Examples
Return the background model expression for the default data set, including any instrument response:
>>> bkg = get_bkg_model()
- get_bkg_model_plot(id=None, bkg_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_model.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_model (or get_bkg_model_plot) are returned, otherwise the data is re-generated.
- Returns
model – An object representing the data used to create the plot by plot_bkg_model.
- Return type
a sherpa.astro.plot.BkgModelHistogram instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_source_plot
Return the data used by plot_bkg_source.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_source
Plot the model expression for the background of a PHA data set.
Examples
>>> bplot = get_bkg_model_plot()
>>> print(bplot)
>>> get_bkg_model_plot('jet', bkg_id=1).plot()
>>> get_bkg_model_plot('jet', bkg_id=2).overplot()
- get_bkg_plot(id=None, bkg_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg (or get_bkg_plot) are returned, otherwise the data is re-generated.
- Returns
data – An object representing the data used to create the plot by plot_bkg. The relationship between the returned values and the values in the data set depends on the analysis, filtering, and grouping settings of the data set.
- Return type
a sherpa.astro.plot.BkgDataPlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
See also
get_default_id
Return the default data set identifier.
plot_bkg
Plot the background values for a PHA data set.
Examples
Create the data needed to create the “data plot” for the background of the default data set and display it:
>>> bplot = get_bkg_plot()
>>> print(bplot)
Return the plot data for data set 2, and then use it to create a plot:
>>> b2 = get_bkg_plot(2)
>>> b2.plot()
Return the plot data for the second background component to the “jet” data set:
>>> bplot = get_bkg_plot('jet', bkg_id=2)
- get_bkg_ratio_plot(id=None, bkg_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_ratio.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_ratio (or get_bkg_ratio_plot) are returned, otherwise the data is re-generated.
- Returns
ratio – An object representing the data used to create the plot by plot_bkg_ratio.
- Return type
a sherpa.astro.plot.BkgRatioPlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_chisqr_plot
Return the data used by plot_bkg_chisqr.
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot_bkg_ratio
Plot the ratio of data to model values for the background of a PHA data set.
Examples
>>> bplot = get_bkg_ratio_plot()
>>> print(bplot)
>>> get_bkg_ratio_plot('jet', bkg_id=1).plot()
>>> get_bkg_ratio_plot('jet', bkg_id=2).overplot()
- get_bkg_resid_plot(id=None, bkg_id=None, recalc=True)[source] [edit on github]
Return the data used by plot_bkg_resid.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_resid (or get_bkg_resid_plot) are returned, otherwise the data is re-generated.
- Returns
resid – An object representing the data used to create the plot by plot_bkg_resid.
- Return type
a sherpa.astro.plot.BkgResidPlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_chisqr_plot
Return the data used by plot_bkg_chisqr.
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
get_bkg_ratio_plot
Return the data used by plot_bkg_ratio.
plot_bkg_resid
Plot the residual (data-model) values for the background of a PHA data set.
Examples
>>> bplot = get_bkg_resid_plot()
>>> print(bplot)
>>> get_bkg_resid_plot('jet', bkg_id=1).plot()
>>> get_bkg_resid_plot('jet', bkg_id=2).overplot()
- get_bkg_rmf(id=None)[source] [edit on github]
Return the background RMF associated with a PHA data set.
This is for the case when there is only one background component and one background response. If this does not hold, use get_rmf with the bkg_id and resp_id arguments.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns
rmf – This is a reference to the RMF, rather than a copy, so that changing the fields of the object will change the values in the data set.
- Return type
a sherpa.astro.instrument.RMF1D instance
See also
fake_pha
Simulate a PHA data set from a model.
load_bkg_rmf
Load a RMF from a file and add it to the background of a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_rmf
Read in a RMF from a file.
Examples
Copy the RMF from the default data set to data set 2, as the first component:
>>> rmf1 = get_bkg_rmf()
>>> set_rmf(2, rmf1, bkg_id=1)
- get_bkg_scale(id=None, bkg_id=1, units='counts', group=True, filter=False)[source] [edit on github]
Return the background scaling factor for a background data set.
Return the factor applied to the background component to scale it to match the source, either when subtracting the background (units='counts'), or fitting it simultaneously (units='rate').
Changed in version 4.12.2: The bkg_id, counts, group, and filter parameters have been added and the routine no longer calculates the average scaling for all the background components, but just for the given component.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set to identify which background component to use. The default value is 1.
units ({'counts', 'rate'}, optional) – The correction is applied to a model defined as counts, the default, or a rate. The latter should be used when calculating the correction factor for adding the background data to the source aperture.
group (bool, optional) – Should the values be grouped to match the data?
filter (bool, optional) – Should the values be filtered to match the data?
- Returns
ratio – The scaling factor. The result can vary per channel, in which case an array is returned.
- Return type
number or array
See also
get_areascal
Return the fractional area factor of a PHA data set.
get_backscal
Return the area scaling factor for a PHA data set.
set_backscal
Change the area scaling of a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
Notes
The scale factor when units='counts' is:
exp_src * bscale_src * areascal_src / (exp_bgnd * bscale_bgnd * areascal_bgnd) / nbkg
where exp_x, bscale_x, and areascal_x are the exposure, BACKSCAL, and AREASCAL values for the source (x=src) and background (x=bgnd) regions, respectively, and nbkg is the number of background datasets associated with the source aperture. When units='rate', the exposure and areascal corrections are not included.
Examples
Return the background-scaling factor for the default dataset (this assumes there’s only one background component).
>>> get_bkg_scale()
0.034514770047217924
Return the factor for dataset “pi”:
>>> get_bkg_scale('pi')
0.034514770047217924
Calculate the factors for the first two background components of the default dataset, valid for combining the source and background models to fit the source aperture:
>>> scale1 = get_bkg_scale(units='rate') >>> scale2 = get_bkg_scale(units='rate', bkg_id=2)
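The units='counts' expression in the Notes can be transcribed directly into plain Python. The function and all input values below are hypothetical stand-ins; this is a sketch of the formula, not the Sherpa implementation:

```python
# Direct transcription of the units='counts' scale factor from the Notes.
# In Sherpa these inputs come from the PHA headers (EXPOSURE, BACKSCAL,
# AREASCAL); the values here are made up.
def bkg_scale_counts(exp_src, bscale_src, ascale_src,
                     exp_bgnd, bscale_bgnd, ascale_bgnd, nbkg=1):
    """exp * BACKSCAL * AREASCAL ratio, divided by the number of backgrounds."""
    return (exp_src * bscale_src * ascale_src) / \
        (exp_bgnd * bscale_bgnd * ascale_bgnd) / nbkg

# Equal exposure and AREASCAL, a 1:10 BACKSCAL ratio, one background set:
scale = bkg_scale_counts(50000.0, 0.01, 1.0, 50000.0, 0.1, 1.0)
```

If the BACKSCAL or AREASCAL inputs are per-channel arrays, the same expression yields a per-channel scaling factor, matching the array-valued behaviour described above.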
- get_bkg_source(id=None, bkg_id=None)[source] [edit on github]
Return the model expression for the background of a PHA data set.
This returns the model expression created by set_bkg_model or set_bkg_source. It does not include any instrument response.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
- Returns
model – This can contain multiple model components. Changing attributes of this model changes the model used by the data set.
- Return type
a sherpa.models.Model object
See also
delete_bkg_model
Delete the background model expression for a data set.
get_bkg_model
Return the model expression for the background of a PHA data set.
list_model_ids
List of all the data sets with a source expression.
set_bkg_model
Set the background model expression for a PHA data set.
show_bkg_model
Display the background model expression for a data set.
Examples
Return the background model expression for the default data set:
>>> bkg = get_bkg_source()
>>> len(bkg.pars)
2
- get_bkg_source_plot(id=None, lo=None, hi=None, bkg_id=None, recalc=True)
Return the data used by plot_bkg_source.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
lo (number, optional) – The low value to plot.
hi (number, optional) – The high value to plot.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
recalc (bool, optional) – If False then the results from the last call to plot_bkg_source (or get_bkg_source_plot) are returned, otherwise the data is re-generated.
- Returns
source – An object representing the data used to create the plot by plot_bkg_source.
- Return type
a sherpa.astro.plot.BkgSourcePlot instance
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_model_plot
Return the data used by plot_bkg_model.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_source
Plot the model expression for the background of a PHA data set.
Examples
Retrieve the source plot information for the background of the default data set and display it:
>>> splot = get_bkg_source_plot()
>>> print(splot)
Return the background plot data for data set 2, and then use it to create a plot:
>>> s2 = get_bkg_source_plot(2)
>>> s2.plot()
Create a plot of the first two background components of the ‘histate’ data set, overplotting the second on the first:
>>> b1 = get_bkg_source_plot('histate', bkg_id=1)
>>> b2 = get_bkg_source_plot('histate', bkg_id=2)
>>> b1.plot()
>>> b2.overplot()
Retrieve the background source plots for the 0.5 to 7 range of the ‘jet’ and ‘core’ data sets and display them on the same plot:
>>> splot1 = get_bkg_source_plot(id='jet', lo=0.5, hi=7)
>>> splot2 = get_bkg_source_plot(id='core', lo=0.5, hi=7)
>>> splot1.plot()
>>> splot2.overplot()
For a PHA data set, the units on both the X and Y axes of the plot are controlled by the set_analysis command. In this case the Y axis will be in units of photons/s/cm^2 and the X axis in keV:
>>> set_analysis('energy', factor=1)
>>> splot = get_bkg_source_plot()
>>> print(splot)
- get_cdf_plot()
Return the data used to plot the last CDF.
- Returns
plot – An object containing the data used by the last call to plot_cdf. The fields will be None if the function has not been called.
- Return type
a sherpa.plot.CDFPlot instance
See also
plot_cdf
Plot the cumulative density function of an array.
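The cumulative distribution that plot_cdf draws can be approximated for any array with a few lines of NumPy. This is a generic sketch of an empirical CDF, not Sherpa's implementation; the function name is an illustration:

```python
import numpy as np

def empirical_cdf(vals):
    """Return (x, y) points of the empirical CDF of an array."""
    x = np.sort(np.asarray(vals, dtype=float))
    y = np.arange(1, x.size + 1) / x.size  # fraction of points <= x
    return x, y

x, y = empirical_cdf([3.0, 1.0, 2.0, 4.0])
# y climbs from 0.25 to 1.0 in steps of 0.25
```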
- get_chisqr_plot(id=None, recalc=True)
Return the data used by plot_chisqr.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_chisqr (or get_chisqr_plot) are returned, otherwise the data is re-generated.
- Returns
resid_data
- Return type
a sherpa.plot.ChisqrPlot instance
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_delchi_plot
Return the data used by plot_delchi.
get_ratio_plot
Return the data used by plot_ratio.
get_resid_plot
Return the data used by plot_resid.
plot_chisqr
Plot the chi-squared value for each point in a data set.
Examples
Return the residual data, measured as chi square, for the default data set:
>>> rplot = get_chisqr_plot()
>>> np.min(rplot.y)
0.0005140622701128954
>>> np.max(rplot.y)
8.379696454792295
Display the contents of the residuals plot for data set 2:
>>> print(get_chisqr_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_chisqr_plot('jet')
>>> r2 = get_chisqr_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_conf()
Return the confidence-interval estimation object.
- Returns
conf
- Return type
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The attributes of the confidence-interval object include:
eps
The precision of the calculated limits. The default is 0.01.
fast
If True then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is False.
max_rstat
If the reduced chi square is larger than this value, do not use (only used with chi-square statistics). The default is 3.
maxfits
The maximum number of re-fits allowed (that is, when the remin filter is met). The default is 5.
maxiters
The maximum number of iterations allowed when bracketing limits, before stopping for that parameter. The default is 200.
numcores
The number of computer cores to use when evaluating results in parallel. This is only used if parallel is True. The default is to use all cores.
openinterval
How the conf method should cope with intervals that do not converge (that is, when the maxiters limit has been reached). The default is False.
parallel
If there is more than one free parameter then the results can be evaluated in parallel, to reduce the time required. The default is True.
remin
The minimum difference in statistic value for a new fit location to be considered better than the current best fit (which starts out as the starting location of the fit at the time conf is called). The default is 0.01.
sigma
The error limit being calculated, in units of sigma. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (True), or can parameter values go out all the way to the hard limits if necessary (False)? The default is False.
tol
The tolerance for the fit. The default is 0.2.
verbose
Should extra information be displayed during fitting? The default is False.
Examples
>>> print(get_conf())
name         = confidence
numcores     = 8
verbose      = False
openinterval = False
max_rstat    = 3
maxiters     = 200
soft_limits  = False
eps          = 0.01
fast         = False
maxfits      = 5
remin        = 0.01
tol          = 0.2
sigma        = 1
parallel     = True
Change the remin field to 0.05:
>>> cf = get_conf()
>>> cf.remin = 0.05
- get_conf_opt(name=None)
Return one or all of the options for the confidence interval method.
This is a helper function since the options can also be read directly using the object returned by get_conf.
- Parameters
name (str, optional) – If not given, a dictionary of all the options are returned. When given, the individual value is returned.
- Returns
value
- Return type
dictionary or value
- Raises
sherpa.utils.err.ArgumentErr – If the name argument is not recognized.
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_conf
Return the confidence-interval estimation object.
set_conf_opt
Set an option of the conf estimation object.
Examples
>>> get_conf_opt('verbose')
False
>>> copts = get_conf_opt()
>>> copts['verbose']
False
- get_conf_results()
Return the results of the last conf run.
- Returns
results
- Return type
sherpa.fit.ErrorEstResults object
- Raises
sherpa.utils.err.SessionErr – If no conf call has been made.
See also
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘confidence’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
Examples
>>> res = get_conf_results()
>>> print(res)
datasets    = (1,)
methodname  = confidence
iterfitname = none
fitname     = levmar
statname    = chi2gehrels
sigma       = 1
percent     = 68.2689492137
parnames    = ('p1.gamma', 'p1.ampl')
parvals     = (2.1585155113403327, 0.00022484014787994827)
parmins     = (-0.082785567348122591, -1.4825550342799376e-05)
parmaxes    = (0.083410634144100104, 1.4825550342799376e-05)
nfits       = 13
The following converts the above into a dictionary where the keys are the parameter names and the values are the tuple (best-fit value, lower-limit, upper-limit):
>>> pvals1 = zip(res.parvals, res.parmins, res.parmaxes)
>>> pvals2 = [(v, v+l, v+h) for (v, l, h) in pvals1]
>>> dres = dict(zip(res.parnames, pvals2))
>>> dres['p1.gamma']
(2.1585155113403327, 2.07572994399221, 2.241926145484433)
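The percent field follows from sigma under the assumed normal distribution, so the 68.2689492137 value above can be reproduced with the error function. A minimal sketch (the helper name is an illustration):

```python
import math

def sigma_to_percent(sigma):
    """Coverage of a +/- sigma interval for a normal distribution."""
    return 100.0 * math.erf(sigma / math.sqrt(2.0))

print(sigma_to_percent(1))  # close to the 68.2689492137 reported above
```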
- get_confidence_results()
Return the results of the last conf run.
- Returns
results
- Return type
sherpa.fit.ErrorEstResults object
- Raises
sherpa.utils.err.SessionErr – If no conf call has been made.
See also
get_conf_opt
Return one or all of the options for the confidence interval method.
set_conf_opt
Set an option of the conf estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘confidence’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
Examples
>>> res = get_conf_results()
>>> print(res)
datasets    = (1,)
methodname  = confidence
iterfitname = none
fitname     = levmar
statname    = chi2gehrels
sigma       = 1
percent     = 68.2689492137
parnames    = ('p1.gamma', 'p1.ampl')
parvals     = (2.1585155113403327, 0.00022484014787994827)
parmins     = (-0.082785567348122591, -1.4825550342799376e-05)
parmaxes    = (0.083410634144100104, 1.4825550342799376e-05)
nfits       = 13
The following converts the above into a dictionary where the keys are the parameter names and the values are the tuple (best-fit value, lower-limit, upper-limit):
>>> pvals1 = zip(res.parvals, res.parmins, res.parmaxes)
>>> pvals2 = [(v, v+l, v+h) for (v, l, h) in pvals1]
>>> dres = dict(zip(res.parnames, pvals2))
>>> dres['p1.gamma']
(2.1585155113403327, 2.07572994399221, 2.241926145484433)
- get_coord(id=None)
Get the coordinate system used for image analysis.
- Parameters
id (int or str, optional) – The data set to query. If not given then the default identifier is used, as returned by get_default_id.
- Returns
coord
- Return type
{ ‘logical’, ‘physical’, ‘world’ }
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain image data.
sherpa.utils.err.IdentifierErr – If the id argument is not recognized.
See also
get_default_id
Return the default data set identifier.
set_coord
Set the coordinate system to use for image analysis.
- get_counts(id=None, filter=False, bkg_id=None)
Return the dependent axis of a data set.
This function returns the data values (the dependent axis) for each point or pixel in the data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int or str, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns
axis – The dependent axis values. The model estimate is compared to these values during fitting. For PHA data sets, the return array will match the grouping scheme applied to the data set. This array is one-dimensional, even for two dimensional (e.g. image) data.
- Return type
array
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_rate
Return the count rate of a PHA data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], Data1D)
>>> get_dep()
array([4, 5, 9])
>>> x0 = [10, 15, 12, 19]
>>> x1 = [12, 14, 10, 17]
>>> y = [4, 5, 9, -2]
>>> load_arrays(2, x0, x1, y, Data2D)
>>> get_dep(2)
array([4, 5, 9, -2])
If the filter flag is set then the return will be limited to the data that is used in the fit:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> ignore_id(1, 17, None)
>>> get_dep()
array([4, 5, 9])
>>> get_dep(filter=True)
array([4, 5])
An example with a PHA data set named ‘spec’:
>>> notice_id('spec', 0.5, 7)
>>> yall = get_dep('spec')
>>> yfilt = get_dep('spec', filter=True)
>>> yall.size
1024
>>> yfilt.size
446
For images, the data is returned as a one-dimensional array:
>>> load_image('img', 'image.fits')
>>> ivals = get_dep('img')
>>> ivals.shape
(65536,)
- get_covar()
Return the covariance estimation object.
- Returns
covar
- Return type
See also
covar
Estimate parameter confidence intervals using the covariance method.
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The attributes of the covariance object include:
eps
The precision of the calculated limits. The default is 0.01.
maxiters
The maximum number of iterations allowed before stopping for that parameter. The default is 200.
sigma
The error limit being calculated, in units of sigma. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (True), or can parameter values go out all the way to the hard limits if necessary (False)? The default is False.
Examples
>>> print(get_covar())
name        = covariance
sigma       = 1
maxiters    = 200
soft_limits = False
eps         = 0.01
Change the sigma field to 1.6:
>>> cv = get_covar()
>>> cv.sigma = 1.6
- get_covar_opt(name=None)
Return one or all of the options for the covariance method.
This is a helper function since the options can also be read directly using the object returned by get_covar.
- Parameters
name (str, optional) – If not given, a dictionary of all the options are returned. When given, the individual value is returned.
- Returns
value
- Return type
dictionary or value
- Raises
sherpa.utils.err.ArgumentErr – If the name argument is not recognized.
See also
covar
Estimate parameter confidence intervals using the covariance method.
get_covar
Return the covariance estimation object.
set_covar_opt
Set an option of the covar estimation object.
Examples
>>> get_covar_opt('sigma')
1
>>> copts = get_covar_opt()
>>> copts['sigma']
1
- get_covar_results()
Return the results of the last covar run.
- Returns
results
- Return type
sherpa.fit.ErrorEstResults object
- Raises
sherpa.utils.err.SessionErr – If no covar call has been made.
See also
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘covariance’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
There is also an extra_output field which is used to return the covariance matrix.
Examples
>>> res = get_covar_results()
>>> print(res)
datasets    = (1,)
methodname  = covariance
iterfitname = none
fitname     = levmar
statname    = chi2gehrels
sigma       = 1
percent     = 68.2689492137
parnames    = ('bgnd.c0',)
parvals     = (10.228675427602724,)
parmins     = (-2.4896739438296795,)
parmaxes    = (2.4896739438296795,)
nfits       = 0
In this case of a single parameter, the covariance matrix is just the variance of the parameter:
>>> res.extra_output
array([[ 6.19847635]])
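For this single-parameter case, the one-sigma bounds reported in parmins/parmaxes are just the square root of the variance stored in extra_output, which can be checked directly (the numbers are the ones quoted above):

```python
import math

variance = 6.19847635      # the 1x1 covariance matrix element shown above
err = math.sqrt(variance)  # one-sigma error, since sigma = 1
print(err)                 # close to the parmaxes value, 2.48967...
```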
- get_covariance_results()
Return the results of the last covar run.
- Returns
results
- Return type
sherpa.fit.ErrorEstResults object
- Raises
sherpa.utils.err.SessionErr – If no covar call has been made.
See also
get_covar_opt
Return one or all of the options for the covariance method.
set_covar_opt
Set an option of the covar estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘covariance’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the sigma value assuming a normal distribution).
parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as parnames.
parmins
A tuple of the lower error bounds, in the same order as parnames.
parmaxes
A tuple of the upper error bounds, in the same order as parnames.
nfits
The number of model evaluations.
There is also an extra_output field which is used to return the covariance matrix.
Examples
>>> res = get_covar_results()
>>> print(res)
datasets    = (1,)
methodname  = covariance
iterfitname = none
fitname     = levmar
statname    = chi2gehrels
sigma       = 1
percent     = 68.2689492137
parnames    = ('bgnd.c0',)
parvals     = (10.228675427602724,)
parmins     = (-2.4896739438296795,)
parmaxes    = (2.4896739438296795,)
nfits       = 0
In this case of a single parameter, the covariance matrix is just the variance of the parameter:
>>> res.extra_output
array([[ 6.19847635]])
- get_data(id=None)
Return the data set by identifier.
The object returned by the call can be used to query and change properties of the data set.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
An instance of a sherpa.Data.Data-derived class.
- Return type
instance
- Raises
sherpa.utils.err.IdentifierErr – If no data set has been created for this identifier.
See also
copy_data
Copy a data set to a new identifier.
delete_data
Delete a data set by identifier.
load_data
Create a data set from a file.
set_data
Set a data set.
Examples
>>> d = get_data()
>>> dimg = get_data('img')
>>> load_arrays('tst', [10, 20, 30], [5.4, 2.3, 9.8])
>>> print(get_data('tst'))
name      =
x         = Int64[3]
y         = Float64[3]
staterror = None
syserror  = None
- get_data_contour(id=None, recalc=True)
Return the data used by contour_data.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to contour_data (or get_data_contour) are returned, otherwise the data is re-generated.
- Returns
resid_data – The y attribute contains the data values and the x0 and x1 arrays contain the corresponding coordinate values, as one-dimensional arrays.
- Return type
a sherpa.plot.DataContour instance
- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_data_image
Return the data used by image_data.
contour_data
Contour the values of an image data set.
image_data
Display a data set in the image viewer.
Examples
Return the data for the default data set:
>>> dinfo = get_data_contour()
- get_data_contour_prefs()
Return the preferences for contour_data.
- Returns
prefs – Changing the values of this dictionary will change any new contour plots. The default is an empty dictionary.
- Return type
dict
See also
contour_data
Contour the values of an image data set.
Notes
The meaning of the fields depends on the chosen plot backend. A value of None (or not set) means to use the default value for that attribute, unless indicated otherwise.
alpha
The transparency value used to draw the contours, where 0 is fully transparent and 1 is fully opaque.
colors
The colors to draw the contours.
linewidths
What thickness of line to draw the contours.
xlog
Should the X axis be drawn with a logarithmic scale? The default is False.
ylog
Should the Y axis be drawn with a logarithmic scale? The default is False.
Examples
Change the contours to be drawn in ‘green’:
>>> contour_data()
>>> prefs = get_data_contour_prefs()
>>> prefs['color'] = 'green'
>>> contour_data()
- get_data_image(id=None)
Return the data used by image_data.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
data_img – The y attribute contains the data values as a 2D NumPy array.
- Return type
a sherpa.image.DataImage instance
- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
contour_data
Contour the values of an image data set.
image_data
Display a data set in the image viewer.
Examples
Return the image data for the default data set:
>>> dinfo = get_data_image()
>>> dinfo.y.shape
(150, 175)
- get_data_plot(id=None, recalc=True)
Return the data used by plot_data.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_data (or get_data_plot) are returned, otherwise the data is re-generated.
- Returns
data – An object representing the data used to create the plot by plot_data. The relationship between the returned values and the values in the data set depends on the data type. For example, PHA data are plotted in units controlled by sherpa.astro.ui.set_analysis, but are stored as channels and counts, and may have been grouped and the background estimate removed.
- Return type
a sherpa.plot.DataPlot instance
See also
get_data_plot_prefs
Return the preferences for plot_data.
get_default_id
Return the default data set identifier.
plot_data
Plot the data values.
- get_data_plot_prefs(id=None)
Return the preferences for plot_data.
The plot preferences may depend on the data set, so the id argument is optional.
Changed in version 4.12.2: The id argument has been added.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
- Returns
prefs – Changing the values of this dictionary will change any new data plots. This dictionary will be empty if no plot backend is available.
- Return type
dict
See also
plot_data
Plot the data values.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The meaning of the fields depends on the chosen plot backend. A value of None means to use the default value for that attribute, unless indicated otherwise. These preferences are used by the following commands: plot_data, plot_bkg, plot_ratio, and the "fit" variants, such as plot_fit, plot_fit_resid, and plot_bkg_fit.
The following preferences are recognized by the matplotlib backend:
alpha
The transparency value used to draw the line or symbol, where 0 is fully transparent and 1 is fully opaque.
barsabove
The barsabove argument for the matplotlib errorbar function.
capsize
The capsize argument for the matplotlib errorbar function.
color
The color to use (will be over-ridden by more-specific options below). The default is None.
ecolor
The color to draw error bars. The default is None.
linestyle
How the line connecting the data points should be drawn. The default is 'None', which means no line is drawn.
marker
What style is used for the symbols. The default is '.', which indicates a point.
markerfacecolor
What color to draw the symbol representing the data points. The default is None.
markersize
What size is the symbol drawn. The default is None.
ratioline
Should a horizontal line be drawn at y=1? The default is False.
xaxis
The default is False.
xerrorbars
Should error bars be drawn for the X axis. The default is False.
xlog
Should the X axis be drawn with a logarithmic scale? The default is False. This field can also be changed with the set_xlog and set_xlinear functions.
yerrorbars
Should error bars be drawn for the Y axis. The default is True.
ylog
Should the Y axis be drawn with a logarithmic scale? The default is False. This field can also be changed with the set_ylog and set_ylinear functions.
Examples
After these commands, any data plot will use a green symbol and not display Y error bars.
>>> prefs = get_data_plot_prefs()
>>> prefs['color'] = 'green'
>>> prefs['yerrorbars'] = False
- get_default_id()
Return the default data set identifier.
The Sherpa data id ties data, model, fit, and plotting information into a data set easily referenced by id. The default identifier, used by many commands, is returned by this command and can be changed by set_default_id.
- Returns
id – The default data set identifier used by certain Sherpa functions when an identifier is not given, or set to None.
- Return type
See also
list_data_ids
List the identifiers for the loaded data sets.
set_default_id
Set the default data set identifier.
Notes
The default Sherpa data set identifier is the integer 1.
Examples
Display the default identifier:
>>> print(get_default_id())
Store the default identifier and use it as an argument to call another Sherpa routine:
>>> defid = get_default_id()
>>> load_arrays(defid, x, y)
- get_delchi_plot(id=None, recalc=True)
Return the data used by plot_delchi.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_delchi (or get_delchi_plot) are returned, otherwise the data is re-generated.
- Returns
resid_data
- Return type
a sherpa.plot.DelchiPlot instance
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_ratio_plot
Return the data used by plot_ratio.
get_resid_plot
Return the data used by plot_resid.
plot_delchi
Plot the ratio of residuals to error for a data set.
Examples
Return the residual data, measured in units of the error, for the default data set:
>>> rplot = get_delchi_plot()
>>> np.min(rplot.y)
-2.85648373819671875
>>> np.max(rplot.y)
2.89477053577520982
Display the contents of the residuals plot for data set 2:
>>> print(get_delchi_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_delchi_plot('jet')
>>> r2 = get_delchi_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_dep(id=None, filter=False, bkg_id=None)
Return the dependent axis of a data set.
This function returns the data values (the dependent axis) for each point or pixel in the data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int or str, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns
axis – The dependent axis values. The model estimate is compared to these values during fitting. For PHA data sets, the return array will match the grouping scheme applied to the data set. This array is one-dimensional, even for two dimensional (e.g. image) data.
- Return type
array
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_rate
Return the count rate of a PHA data set.
list_data_ids
List the identifiers for the loaded data sets.
Examples
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], Data1D)
>>> get_dep()
array([4, 5, 9])
>>> x0 = [10, 15, 12, 19]
>>> x1 = [12, 14, 10, 17]
>>> y = [4, 5, 9, -2]
>>> load_arrays(2, x0, x1, y, Data2D)
>>> get_dep(2)
array([4, 5, 9, -2])
If the filter flag is set then the return will be limited to the data that is used in the fit:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> ignore_id(1, 17, None)
>>> get_dep()
array([4, 5, 9])
>>> get_dep(filter=True)
array([4, 5])
An example with a PHA data set named ‘spec’:
>>> notice_id('spec', 0.5, 7)
>>> yall = get_dep('spec')
>>> yfilt = get_dep('spec', filter=True)
>>> yall.size
1024
>>> yfilt.size
446
For images, the data is returned as a one-dimensional array:
>>> load_image('img', 'image.fits')
>>> ivals = get_dep('img')
>>> ivals.shape
(65536,)
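The effect of filter=True can be mimicked with a boolean mask over the dependent axis. This NumPy sketch mirrors the ignore_id example above; the mask is an illustration, not Sherpa's internal filter machinery:

```python
import numpy as np

x = np.array([10, 15, 19])
y = np.array([4, 5, 9])

# ignore_id(1, 17, None) excludes x >= 17, keeping the first two bins
mask = ~(x >= 17)
print(y)        # all values, as get_dep() returns them
print(y[mask])  # filtered values, as get_dep(filter=True) returns them
```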
- get_dims(id=None, filter=False)
Return the dimensions of the data set.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – If True then apply any filter to the data set before returning the dimensions. The default is False.
- Returns
dims
- Return type
a tuple of int
See also
ignore
Exclude data from the fit.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice
Include data in the fit.
sherpa.astro.ui.notice2d
Include a spatial region of an image.
Examples
Display the dimensions for the default data set:
>>> print(get_dims())
Find the number of bins in dataset ‘a2543’ without and with any filters applied to it:
>>> nall = get_dims('a2543')
>>> nfilt = get_dims('a2543', filter=True)
- get_draws(id=None, otherids=(), niter=1000, covar_matrix=None)
Run the pyBLoCXS MCMC algorithm.
The function runs a Markov Chain Monte Carlo (MCMC) algorithm designed to carry out Bayesian Low-Count X-ray Spectral (BLoCXS) analysis. It explores the model parameter space at the suspected statistic minimum (i.e. after using fit). The return values include the statistic value, parameter values, and an acceptance flag indicating whether the row represents a jump from the current location or not. For more information see the sherpa.sim module and [1]_.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
niter (int, optional) – The number of draws to use. The default is 1000.
covar_matrix (2D array, optional) – The covariance matrix to use. If None then the result from get_covar_results().extra_output is used.
- Returns
The results of the MCMC chain. The stats and accept arrays contain niter+1 elements, with the first row being the starting values. The params array has (nparams, niter+1) elements, where nparams is the number of free parameters in the model expression, and the first column contains the values that the chain starts at. The accept array contains boolean values, indicating whether the jump, or step, was accepted (True), so the parameter values and statistic change, or it wasn't, in which case there is no change to the previous row. The sherpa.utils.get_error_estimates routine can be used to calculate the credible one-sigma interval from the params array.
- Return type
stats, accept, params
See also
covar
Estimate the confidence intervals using the covariance method.
fit
Fit a model to one or more data sets.
plot_cdf
Plot the cumulative density function of an array.
plot_pdf
Plot the probability density function of an array.
plot_scatter
Create a scatter plot.
plot_trace
Create a trace plot of row number versus value.
set_prior
Set the prior function to use with a parameter.
set_sampler
Set the MCMC sampler.
get_sampler
Return information about the current MCMC sampler.
Notes
The chain is run using fit information associated with the specified data set, or sets, the currently set sampler (set_sampler) and parameter priors (set_prior), for a specified number of iterations. The model should have been fit to find the best-fit parameters, and covar called, before running get_draws. The results from get_draws are used to estimate the parameter distributions.
References
- 1
“Analysis of Energy Spectra with Low Photon Counts via Bayesian Posterior Simulation”, van Dyk, D.A., Connors, A., Kashyap, V.L., & Siemiginowska, A. 2001, Ap.J., 548, 224 http://adsabs.harvard.edu/abs/2001ApJ...548..224V
Examples
Fit a source and then run a chain to investigate the parameter distributions. The distribution of the stats values created by the chain is then displayed, using plot_trace, and the parameter distributions for the first two thawed parameters are displayed. The first one as a cumulative distribution using plot_cdf and the second one as a probability distribution using plot_pdf. Finally the acceptance fraction (number of draws where the chain moved) is displayed. Note that in a full analysis session a burn-in period would normally be removed from the chain before using the results.
>>> fit()
>>> covar()
>>> stats, accept, params = get_draws(1, niter=1e4)
>>> plot_trace(stats, name='stat')
>>> names = [p.fullname for p in get_source().pars if not p.frozen]
>>> plot_cdf(params[0, :], name=names[0], xlabel=names[0])
>>> plot_pdf(params[1, :], name=names[1], xlabel=names[1])
>>> accept[:-1].sum() * 1.0 / (len(accept) - 1)
0.4287
The following runs the chain on multiple data sets, with identifiers ‘core’, ‘jet1’, and ‘jet2’:
>>> stats, accept, params = get_draws('core', ['jet1', 'jet2'], niter=1e4)
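The acceptance-fraction calculation in the example above can be sketched in plain NumPy. This is an illustration only (the accept array below is made up, not Sherpa output); it relies on the fact that accept has niter+1 elements, the first being the starting position:

```python
import numpy as np

# Illustrative accept array: niter+1 entries, first entry is the start.
accept = np.array([True, True, False, True, False, True, False, False, True])

niter = len(accept) - 1          # number of actual draws
frac = accept[1:].sum() / niter  # fraction of accepted jumps
```

With four of the eight draws accepted, frac is 0.5.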
- get_energy_flux_hist(lo=None, hi=None, id=None, num=7500, bins=75, correlated=False, numcores=None, bkg_id=None, scales=None, model=None, otherids=(), recalc=True, clip='hard')[source] [edit on github]
Return the data displayed by plot_energy_flux.
The get_energy_flux_hist() function calculates a histogram of simulated energy flux values representing the energy flux probability distribution for a model component, accounting for the errors on the model parameters.
Changed in version 4.12.2: The scales parameter is no longer ignored when set and the model and otherids parameters have been added. The clip argument has been added.
- Parameters
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string, optional) – The identifier of the data set to use. If None, the default value, then all datasets with associated models are used to calculate the errors and the model evaluation is done using the default dataset.
num (int, optional) – The number of samples to create. The default is 7500.
bins (int, optional) – The number of bins to use for the histogram.
correlated (bool, optional) – If True (the default is False) then scales is the full covariance matrix, otherwise it is just a 1D array containing the variances of the parameters (the diagonal elements of the covariance matrix).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The size and shape of the array depends on the number of free parameters in the fit (n) and the value of the correlated parameter. When the parameter is True, scales must be given the covariance matrix for the free parameters (a n by n matrix that matches the parameter ordering used by Sherpa). For un-correlated parameters the covariance matrix can be used, or a one-dimensional array of n elements can be used, giving the width (specified as the sigma value of a normal distribution) for each parameter (e.g. the square root of the diagonal elements of the covariance matrix). If the scales parameter is not given then the covariance matrix is evaluated for the current model and best-fit parameters.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples. The model must be part of the source expression.
otherids (sequence of integer and string ids, optional) – The list of other datasets that should be included when calculating the errors to draw values from.
recalc (bool, optional) – If True, the default, then re-calculate the values rather than use the values from the last time the function was run.
clip ({'hard', 'soft', 'none'}, optional) – What clipping strategy should be applied to the sampled parameters. The default ('hard') is to fix values at their hard limits if they exceed them. A value of 'soft' uses the soft limits instead, and 'none' applies no clipping.
- Returns
hist – An object representing the data used to create the plot by plot_energy_flux.
- Return type
a sherpa.astro.plot.EnergyFluxHistogram instance
See also
get_photon_flux_hist
Return the data displayed by plot_photon_flux.
plot_energy_flux
Display the energy flux distribution.
plot_photon_flux
Display the photon flux distribution.
sample_energy_flux
Return the energy flux distribution of a model.
sample_flux
Return the flux distribution of a model.
sample_photon_flux
Return the photon flux distribution of a model.
Examples
Get the energy flux distribution for the range 0.5 to 7 for the default data set:
>>> ehist = get_energy_flux_hist(0.5, 7, num=1000)
>>> print(ehist)
Compare the 0.5 to 2 energy flux distribution from the “core” data set to the values from the “jet” data set:
>>> ehist1 = get_energy_flux_hist(0.5, 2, id='jet', num=1000)
>>> ehist2 = get_energy_flux_hist(0.5, 2, id='core', num=1000)
Compare the flux distribution for the full source expression (aflux) to that for just the pl component (uflux); this can be useful to calculate the unabsorbed flux distribution if the full source model contains an absorption component:
>>> aflux = get_energy_flux_hist(0.5, 2, num=1000, bins=20)
>>> uflux = get_energy_flux_hist(0.5, 2, model=pl, num=1000, bins=20)
When there are multiple datasets loaded, get_energy_flux_hist uses all datasets to evaluate the errors when the id parameter is left at its default value of None. The otherids parameter is used, along with id, to specify exactly what datasets are used:
>>> x = get_energy_flux_hist(2, 10, num=1000, bins=20, model=src)
>>> y = get_energy_flux_hist(2, 10, num=1000, bins=20, model=src,
...                          id=1, otherids=(2, 3, 4))
- get_error(id=None, filter=False, bkg_id=None)[source] [edit on github]
Return the errors on the dependent axis of a data set.
The function returns the total errors (a quadrature addition of the statistical and systematic errors) on the values (dependent axis) of a data set or its background. The individual components can be retrieved with the get_staterror and get_syserror functions.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int or str, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns
errors – The error for each data point, formed by adding the statistical and systematic errors in quadrature. For PHA data sets, the return array will match the grouping scheme applied to the data set. The size of this array depends on the filter argument.
- Return type
array
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_dep
Return the dependent axis of a data set.
get_staterror
Return the statistical errors on the dependent axis of a data set.
get_syserror
Return the systematic errors on the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
Notes
The default behavior is to not apply any filter defined on the independent axes to the results, so that the return value is for all points (or bins) in the data set. Set the filter argument to True to apply this filter.
Examples
Return the error values for the default data set, ignoring any filter applied to it:
>>> err = get_error()
Ensure that the return values are for the selected (filtered) points in the default data set (the return array may be smaller than in the previous example):
>>> err = get_error(filter=True)
Find the errors for the “core” data set and its two background components:
>>> err = get_error('core', filter=True)
>>> berr1 = get_error('core', bkg_id=1, filter=True)
>>> berr2 = get_error('core', bkg_id=2, filter=True)
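The quadrature combination described above can be sketched in plain NumPy; the arrays here are made-up values standing in for the results of get_staterror and get_syserror:

```python
import numpy as np

# Made-up per-bin error components for illustration.
staterr = np.array([1.0, 2.0, 3.0])  # statistical errors
syserr = np.array([0.5, 0.5, 1.0])   # systematic errors

# Quadrature addition, as performed by get_error.
toterr = np.sqrt(staterr ** 2 + syserr ** 2)
```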
- get_exposure(id=None, bkg_id=None)[source] [edit on github]
Return the exposure time of a PHA data set.
The exposure time of a PHA data set is taken from the EXPOSURE keyword in its header, but it can be changed once the file has been loaded.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set to identify which background component to use. The default value (None) means that the time is for the source component of the data set.
- Returns
exposure – The exposure time, in seconds.
- Return type
number
See also
get_areascal
Return the fractional area factor of a PHA data set.
get_backscal
Return the area scaling of a PHA data set.
set_exposure
Change the exposure time of a PHA data set.
Examples
Return the exposure time for the default data set.
>>> t = get_exposure()
Return the exposure time for the data set with identifier 2:
>>> t2 = get_exposure(2)
Return the exposure time for the first background component of data set “core”:
>>> tbkg = get_exposure('core', bkg_id=1)
- get_filter(id=None)[source] [edit on github]
Return the filter expression for a data set.
This returns the filter expression, created by one or more calls to ignore and notice, for the data set.
Changed in version 4.14.0: The filter expressions have been tweaked for Data1DInt and PHA data sets (when using energy or wavelength units) and now describe the full range of the bins, rather than the mid-points.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns
filter – The empty string or a string expression representing the filter used. For PHA data sets the units are controlled by the analysis setting for the data set.
- Return type
str
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not exist.
See also
ignore
Exclude data from the fit.
load_filter
Load the filter array from a file and add to a data set.
notice
Include data in the fit.
save_filter
Save the filter array to a file.
show_filter
Show any filters applied to a data set.
set_filter
Set the filter array of a data set.
Examples
The default filter is the full dataset, given in the format lowval:hival (for a Data1D dataset like this these are inclusive limits):
>>> load_arrays(1, [10, 15, 20, 25], [5, 7, 4, 2])
>>> get_filter()
'10.0000:25.0000'
The notice call restricts the data to the range between 14 and 30. The resulting filter is the combination of this range and the data:
>>> notice(14, 30)
>>> get_filter()
'15.0000:25.0000'
Ignoring the point at x=20 means that only the points at x=15 and x=25 remain, so a comma-separated list is used:
>>> ignore(19, 22)
>>> get_filter()
'15.0000,25.0000'
The filter equivalent to the per-bin array of filter values:
>>> set_filter([1, 1, 0, 1])
>>> get_filter()
'10.0000:15.0000,25.0000'
For an integrated data set (Data1DInt and DataPHA with energy or wavelength units):
>>> load_arrays(1, [10, 15, 20, 25], [15, 20, 23, 30], [5, 7, 4, 2], Data1DInt)
>>> get_filter()
'10.0000:30.0000'
For integrated datasets the limits are now inclusive only for the lower limit, but in this case the end-point falls within a bin and so is included:
>>> notice(17, 28)
>>> get_filter()
'15.0000:30.0000'
There is no data in the range 23 to 24 so the ignore doesn’t change anything:
>>> ignore(23, 24)
>>> get_filter()
'15.0000:30.0000'
However it does match the range 22 to 23 and so changes the filter:
>>> ignore(22, 23)
>>> get_filter()
'15.0000:20.0000,25.0000:30.0000'
Return the filter for data set 3:
>>> get_filter(3)
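The mapping from a per-bin filter array, as passed to set_filter in the examples above, to the lowval:hival expression returned by get_filter can be sketched with a small, hypothetical helper (not part of Sherpa) for the Data1D case:

```python
# Hypothetical helper, for illustration only: collapse a per-point mask
# over the points x into a comma-separated list of lowval:hival ranges,
# using the four-decimal formatting seen in the get_filter output above.
def mask_to_expr(x, mask):
    """Collapse a boolean mask over the points x into a filter expression."""
    ranges = []
    start = end = None
    for xi, keep in zip(x, mask):
        if keep:
            end = xi
            if start is None:
                start = xi
        elif start is not None:
            ranges.append((start, end))
            start = None
    if start is not None:
        ranges.append((start, end))
    return ','.join(f'{lo:.4f}:{hi:.4f}' if lo != hi else f'{lo:.4f}'
                    for lo, hi in ranges)
```

For example, mask_to_expr([10, 15, 20, 25], [1, 1, 0, 1]) reproduces the '10.0000:15.0000,25.0000' string from the set_filter example.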
- get_fit_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_fit.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to contour_fit (or get_fit_contour) are returned, otherwise the data is re-generated.
- Returns
fit_data – An object representing the data used to create the plot by contour_fit. It contains the data from get_data_contour and get_model_contour in the datacontour and modelcontour attributes.
- Return type
a sherpa.plot.FitContour instance
- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_data_image
Return the data used by image_data.
get_model_image
Return the data used by image_model.
contour_data
Contour the values of an image data set.
contour_model
Contour the values of the model, including any PSF.
image_data
Display a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
Examples
Return the contour data for the default data set:
>>> finfo = get_fit_contour()
- get_fit_plot(id=None, recalc=True)[source] [edit on github]
Return the data used to create the fit plot.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_fit (or get_fit_plot) are returned, otherwise the data is re-generated.
- Returns
data – An object representing the data used to create the plot by plot_fit. It contains the data from get_data_plot and get_model_plot in the dataplot and modelplot attributes.
- Return type
a sherpa.plot.FitPlot instance
See also
get_data_plot_prefs
Return the preferences for plot_data.
get_model_plot_prefs
Return the preferences for plot_model.
get_default_id
Return the default data set identifier.
plot_data
Plot the data values.
plot_model
Plot the model for a data set.
Examples
Create the data needed to create the “fit plot” for the default data set and display it:
>>> fplot = get_fit_plot()
>>> print(fplot)
Return the plot data for data set 2, and then use it to create a plot:
>>> f2 = get_fit_plot(2)
>>> f2.plot()
The fit plot consists of a combination of a data plot and a model plot, which are captured in the dataplot and modelplot attributes of the return value. These can be used to display the plots individually, such as:
>>> f2.dataplot.plot()
>>> f2.modelplot.plot()
or, to combine the two:
>>> f2.dataplot.plot()
>>> f2.modelplot.overplot()
- get_fit_results()[source] [edit on github]
Return the results of the last fit.
This function returns the results from the most-recent fit. The returned value includes information on the parameter values and fit statistic.
- Returns
stats – The results of the last fit. It does not reflect any changes made to the model parameters, or settings, since the last fit.
- Return type
a sherpa.fit.FitResults instance
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
fit
Fit a model to one or more data sets.
get_stat_info
Return the statistic values for the current models.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Notes
The fields of the object include:
- datasets
A sequence of the data set ids included in the results.
- itermethodname
What iterated-fit scheme was used, if any (as set by set_iter_method).
- statname
The name of the statistic function (as used in set_stat).
- succeeded
Was the fit successful (did it converge)?
- parnames
A tuple of the parameter names that were varied in the fit (the thawed parameters in the model expression).
- parvals
A tuple of the parameter values, in the same order as parnames.
- statval
The statistic value after the fit.
- istatval
The statistic value at the start of the fit.
- dstatval
The change in the statistic value (istatval - statval).
- numpoints
The number of bins used in the fits.
- dof
The number of degrees of freedom in the fit (the number of bins minus the number of free parameters).
- qval
The Q-value (probability) that one would observe the reduced statistic value, or a larger value, if the assumed model is true and the current model parameters are the true parameter values. This will be None if the value can not be calculated with the current statistic (e.g. the Cash statistic).
- rstat
The reduced statistic value (the statval field divided by dof). This is not calculated for all statistics.
- message
A message about the results of the fit (e.g. if the fit was unable to converge). The format and contents depend on the optimisation method.
- nfev
The number of model evaluations made during the fit.
Examples
Display the fit results:
>>> print(get_fit_results())
Inspect the fit results:
>>> res = get_fit_results()
>>> res.statval
498.21750663761935
>>> res.dof
439
>>> res.parnames
('pl.gamma', 'pl.ampl', 'gline.fwhm', 'gline.pos', 'gline.ampl')
>>> res.parvals
(-0.20659543380329071, 0.00030398852609788524, 100.0, 4900.0, 0.001)
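The relationships between the derived fields described in the Notes can be checked with a little arithmetic; the numbers below are illustrative, not real fit output:

```python
# Made-up fit results, for illustration of the field relationships.
statval = 498.2175    # statistic after the fit
istatval = 3093.1914  # statistic at the start of the fit
numpoints = 444       # number of bins used in the fit
nfree = 5             # number of thawed parameters (len(parnames))

dof = numpoints - nfree        # degrees of freedom
rstat = statval / dof          # reduced statistic
dstatval = istatval - statval  # improvement in the statistic
```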
- get_functions()[source] [edit on github]
Return the functions provided by Sherpa.
- Returns
functions
- Return type
list of str
See also
list_functions
Display the functions provided by Sherpa.
- get_grouping(id=None, bkg_id=None)[source] [edit on github]
Return the grouping array for a PHA data set.
The function returns the grouping value for each channel in the PHA data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set if the grouping flags should be taken from a background associated with the data set.
- Returns
grouping – A value of 1 indicates the start of a new group, and -1 indicates that the bin is part of the group. This array is not filtered - that is, there is one element for each channel in the PHA data set. Changes to the elements of this array will change the values in the dataset (it is a reference to the values used to define the grouping, not a copy).
- Return type
ndarray or None
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
fit
Fit one or more data sets.
get_quality
Return the quality array for a PHA data set.
ignore_bad
Exclude channels marked as bad in a PHA data set.
load_grouping
Load the grouping scheme from a file and add to a PHA data set.
set_grouping
Apply a set of grouping flags to a PHA data set.
Notes
The meaning of the grouping column is taken from [1]_, which says that +1 indicates the start of a bin, -1 if the channel is part of a group, and 0 if the data grouping is undefined for all channels.
References
- 1
“The OGIP Spectral File Format”, https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
Copy the grouping array from the default data set to data set 2:
>>> grp1 = get_grouping()
>>> set_grouping(2, grp1)
Return the grouping array of the background component labelled 2 for the ‘histate’ data set:
>>> grp = get_grouping('histate', bkg_id=2)
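The grouping convention can be illustrated in plain NumPy (this is a sketch, not Sherpa code): +1 starts a new group, -1 continues the current one, and grouped counts are the per-group sums:

```python
import numpy as np

# Six channels collapsed into three groups under the OGIP convention.
grouping = np.array([1, -1, -1, 1, 1, -1])
counts = np.array([2, 3, 1, 5, 4, 6])

group_ids = np.cumsum(grouping == 1) - 1          # 0-based group per channel
grouped = np.bincount(group_ids, weights=counts)  # summed counts per group
```

Here the three groups hold channels (1, 2, 3), (4,), and (5, 6), giving grouped counts of 6, 5, and 10.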
- get_indep(id=None, filter=False, bkg_id=None)[source] [edit on github]
Return the independent axes of a data set.
This function returns the coordinates of each point, or pixel, in the data set. The get_axes function may be preferred in some situations.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int or str, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns
axis – The independent axis values. These are the values at which the model is evaluated during fitting. The values returned depend on the coordinate system in use for the data set (as set by set_coord). For PHA data sets the value returned is always in channels, whatever the set_analysis setting is, and does not follow any grouping setting for the data set.
- Return type
tuple of arrays
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_axes
Return information about the independent axes of a data set.
get_dep
Return the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
set_coord
Set the coordinate system to use for image analysis.
Notes
For a two-dimensional image, with size n by m pixels, the get_indep function will return two arrays, each of size n * m, which contain the coordinate of the center of each pixel. The get_axes function will instead return the coordinates of each axis separately, i.e. arrays of size n and m.
Examples
For a one-dimensional data set, the X values are returned:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], Data1D)
>>> get_indep()
(array([10, 15, 19]),)
For a 2D data set the X0 and X1 values are returned:
>>> x0 = [10, 15, 12, 19]
>>> x1 = [12, 14, 10, 17]
>>> y = [4, 5, 9, -2]
>>> load_arrays(2, x0, x1, y, Data2D)
>>> get_indep(2)
(array([10, 15, 12, 19]), array([12, 14, 10, 17]))
For PHA data sets the return value is in channel units:
>>> load_pha('spec', 'src.pi')
>>> set_analysis('spec', 'energy')
>>> (chans,) = get_indep('spec')
>>> chans[0:6]
array([ 1.,  2.,  3.,  4.,  5.,  6.])
If the filter flag is set then the return will be limited to the data that is used in the fit:
>>> notice_id('spec', 0.5, 7)
>>> (nchans,) = get_indep('spec', filter=True)
>>> nchans[0:5]
array([ 35.,  36.,  37.,  38.,  39.])
For images the pixel coordinates of each pixel are returned, as 1D arrays, one value for each pixel:
>>> load_image('img', 'image.fits')
>>> (xvals, yvals) = get_indep('img')
>>> xvals.shape
(65536,)
>>> yvals.shape
(65536,)
>>> xvals[0:5]
array([ 1.,  2.,  3.,  4.,  5.])
>>> yvals[0:5]
array([ 1.,  1.,  1.,  1.,  1.])
The coordinate system for image axes is determined by the set_coord setting for the data set:
>>> set_coord('img', 'physical')
>>> (avals, bvals) = get_indep('img')
>>> avals[0:5]
array([  16.5,   48.5,   80.5,  112.5,  144.5])
- get_int_proj(par=None, id=None, otherids=None, recalc=False, fast=True, min=None, max=None, nloop=20, delv=None, fac=1, log=False, numcores=None)[source] [edit on github]
Return the interval-projection object.
This returns (and optionally calculates) the data used to display the int_proj plot. Note that if the recalc parameter is False (the default value) then all other parameters are ignored and the results of the last int_proj call are returned.
- Parameters
par – The parameter to plot. This argument is only used if recalc is set to True.
id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (False) means that the results from the last call to int_proj (or get_int_proj) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.
fast (bool, optional) – If True then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is True.
min (number, optional) – The minimum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
max (number, optional) – The maximum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
nloop (int, optional) – The number of steps to use. This is used when delv is set to None.
delv (number, optional) – The step size for the parameter. Setting this over-rides the nloop parameter. The default is None.
fac (number, optional) – When min or max is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).
log (bool, optional) – Should the step size be logarithmically spaced? The default (False) is to use a linear grid.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns
iproj – The fields of this object can be used to re-create the plot created by int_proj.
- Return type
a sherpa.plot.IntervalProjection instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
Examples
Return the results of the int_proj run:
>>> int_proj(src.xpos)
>>> iproj = get_int_proj()
>>> min(iproj.y)
119.55942437129544
Since the recalc parameter has not been changed to True, the following will return the results for the last call to int_proj, which may not have been for the src.ypos parameter:
>>> iproj = get_int_proj(src.ypos)
Create the data without creating a plot:
>>> iproj = get_int_proj(pl.gamma, recalc=True)
Specify the range and step size for the parameter, in this case varying linearly between 12 and 14 with 51 values:
>>> iproj = get_int_proj(src.r0, id="src", min=12, max=14,
...                      nloop=51, recalc=True)
- get_int_unc(par=None, id=None, otherids=None, recalc=False, min=None, max=None, nloop=20, delv=None, fac=1, log=False, numcores=None)[source] [edit on github]
Return the interval-uncertainty object.
This returns (and optionally calculates) the data used to display the int_unc plot. Note that if the recalc parameter is False (the default value) then all other parameters are ignored and the results of the last int_unc call are returned.
- Parameters
par – The parameter to plot. This argument is only used if recalc is set to True.
id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (False) means that the results from the last call to int_unc (or get_int_unc) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.
min (number, optional) – The minimum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
max (number, optional) – The maximum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
nloop (int, optional) – The number of steps to use. This is used when delv is set to None.
delv (number, optional) – The step size for the parameter. Setting this over-rides the nloop parameter. The default is None.
fac (number, optional) – When min or max is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).
log (bool, optional) – Should the step size be logarithmically spaced? The default (False) is to use a linear grid.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns
iunc – The fields of this object can be used to re-create the plot created by int_unc.
- Return type
a sherpa.plot.IntervalUncertainty instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
Examples
Return the results of the int_unc run:
>>> int_unc(src.xpos)
>>> iunc = get_int_unc()
>>> min(iunc.y)
119.55942437129544
Since the recalc parameter has not been changed to True, the following will return the results for the last call to int_unc, which may not have been for the src.ypos parameter:
>>> iunc = get_int_unc(src.ypos)
Create the data without creating a plot:
>>> iunc = get_int_unc(pl.gamma, recalc=True)
Specify the range and step size for the parameter, in this case varying linearly between 12 and 14 with 51 values:
>>> iunc = get_int_unc(src.r0, id="src", min=12, max=14,
...                    nloop=51, recalc=True)
- get_iter_method_name()[source] [edit on github]
Return the name of the iterative fitting scheme.
- Returns
name – The name of the iterative fitting scheme set by set_iter_method.
- Return type
{'none', 'sigmarej'}
See also
list_iter_methods
List the iterative fitting schemes.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Examples
>>> print(get_iter_method_name())
- get_iter_method_opt(optname=None)[source] [edit on github]
Return one or all options for the iterative-fitting scheme.
The options available for the iterative-fitting methods are described in set_iter_method_opt.
- Parameters
optname (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns
value – The dictionary is empty when no iterative scheme is being used.
- Return type
dictionary or value
- Raises
sherpa.utils.err.ArgumentErr – If the optname argument is not recognized.
See also
get_iter_method_name
Return the name of the iterative fitting scheme.
set_iter_method_opt
Set an option for the iterative-fitting scheme.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Examples
Display the settings of the current iterative-fitting method:
>>> print(get_iter_method_opt())
Switch to the sigmarej scheme and find out the current settings:
>>> set_iter_method('sigmarej')
>>> opts = get_iter_method_opt()
Return the ‘maxiters’ setting (if applicable):
>>> get_iter_method_opt('maxiters')
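get_iter_method_opt follows the same "dict or single value" convention as get_method_opt and get_proj_opt below. The pattern can be sketched in plain Python (a toy store with made-up option names, not Sherpa's code, which raises sherpa.utils.err.ArgumentErr rather than KeyError):

```python
# Toy option store illustrating the optname=None convention.
_opts = {'maxiters': 5, 'grow': 0}   # hypothetical option names

def get_opt(optname=None):
    """Return every option as a dict, or one value when a name is given."""
    if optname is None:
        return dict(_opts)               # a copy of all options
    if optname not in _opts:
        # Sherpa raises sherpa.utils.err.ArgumentErr here.
        raise KeyError(f"unknown option: {optname}")
    return _opts[optname]

print(get_opt())            # {'maxiters': 5, 'grow': 0}
print(get_opt('maxiters'))  # 5
```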
- get_kernel_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_kernel.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to contour_kernel (or get_kernel_contour) are returned, otherwise the data is re-generated.
- Returns
psf_data
- Return type
a sherpa.plot.PSFKernelContour instance
- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_psf_contour
Return the data used by contour_psf.
contour_kernel
Contour the kernel applied to the model of an image data set.
contour_psf
Contour the PSF applied to the model of an image data set.
Examples
Return the contour data for the kernel for the default data set:
>>> kplot = get_kernel_contour()
- get_kernel_image(id=None)[source] [edit on github]
Return the data used by image_kernel.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
psf_data
- Return type
a sherpa.image.PSFKernelImage instance
- Raises
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_psf_image
Return the data used by image_psf.
image_kernel
Display the 2D kernel for a data set in the image viewer.
image_psf
Display the 2D PSF model for a data set in the image viewer.
Examples
Return the image data for the kernel for the default data set:
>>> iplot = get_kernel_image()
>>> iplot.y.shape
(51, 51)
- get_kernel_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_kernel.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_kernel (or get_kernel_plot) are returned, otherwise the data is re-generated.
- Returns
kernel_plot
- Return type
a sherpa.plot.PSFKernelPlot instance
- Raises
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_psf_plot
Return the data used by plot_psf.
plot_kernel
Plot the 1D kernel applied to a data set.
plot_psf
Plot the 1D PSF model applied to a data set.
Examples
Return the plot data and then create a plot with it:
>>> kplot = get_kernel_plot()
>>> kplot.plot()
- get_method(name=None)[source] [edit on github]
Return an optimization method.
- Parameters
name (str, optional) – If not given, the current method is returned, otherwise it should be one of the names returned by the list_methods function.
- Returns
method – An object representing the optimization method.
- Return type
- Raises
sherpa.utils.err.ArgumentErr – If the name argument is not recognized.
See also
get_method_opt
Get the options for the current optimization method.
list_methods
List the supported optimization methods.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
Examples
The fields of the object returned by get_method can be used to view or change the method options.
>>> method = ui.get_method()
>>> print(method.name)
levmar
>>> print(method)
name    = levmar
ftol    = 1.19209289551e-07
xtol    = 1.19209289551e-07
gtol    = 1.19209289551e-07
maxfev  = None
epsfcn  = 1.19209289551e-07
factor  = 100.0
verbose = 0
>>> method.maxfev = 5000
- get_method_name()[source] [edit on github]
Return the name of current Sherpa optimization method.
- Returns
name – The name of the current optimization method, in lower case. This may not match the value sent to set_method because some methods can be set by multiple names.
- Return type
See also
get_method
Return an optimization method.
get_method_opt
Get the options for the current optimization method.
Examples
>>> get_method_name()
'levmar'
The ‘neldermead’ method can also be referred to as ‘simplex’:
>>> set_method('simplex')
>>> get_method_name()
'neldermead'
- get_method_opt(optname=None)[source] [edit on github]
Return one or all of the options for the current optimization method.
This is a helper function since the optimization options can also be read directly using the object returned by get_method.
- Parameters
optname (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns
value
- Return type
dictionary or value
- Raises
sherpa.utils.err.ArgumentErr – If the optname argument is not recognized.
See also
get_method
Return an optimization method.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
Examples
>>> get_method_opt('maxfev') is None
True
>>> mopts = get_method_opt()
>>> mopts['maxfev'] is None
True
- get_model(id=None)[source] [edit on github]
Return the model expression for a data set.
This returns the model expression for a data set, including any instrument response (e.g. PSF, or ARF and RMF), whether created automatically or set explicitly with set_full_model.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
- Returns
This can contain multiple model components and any instrument response. Changing attributes of this model changes the model used by the data set.
- Return type
instance
See also
delete_model
Delete the model expression from a data set.
get_model_pars
Return the names of the parameters of a model.
get_model_type
Describe a model expression.
get_source
Return the source model expression for a data set.
list_model_ids
List of all the data sets with a source expression.
sherpa.astro.ui.set_bkg_model
Set the background model expression for a data set.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Return the model fitted to the default data set:
>>> mdl = get_model()
>>> len(mdl.pars)
5
- get_model_autoassign_func()[source] [edit on github]
Return the method used to create model component identifiers.
Provides access to the function which is used by create_model_component and when creating model components directly to add an identifier in the current Python namespace.
- Returns
The model function set by set_model_autoassign_func.
- Return type
func
See also
create_model_component
Create a model component.
set_model
Set the source model expression for a data set.
set_model_autoassign_func
Set the method used to create model component identifiers.
- get_model_component(name)[source] [edit on github]
Return a model component given its name.
- Parameters
name (str) – The name of the model component.
- Returns
component – The model component object.
- Return type
a sherpa.models.model.Model instance
- Raises
sherpa.utils.err.IdentifierErr – If there is no model component with the given name.
See also
create_model_component
Create a model component.
get_model
Return the model expression for a data set.
get_source
Return the source model expression for a data set.
list_model_components
List the names of all the model components.
set_model
Set the source model expression for a data set.
Notes
The model instances are named as modeltype.username, and it is the username component that is used here to access the instance.
Examples
When a model component is created, a variable is created that contains the model instance. The instance can also be returned with get_model_component, which can then be queried or used to change the model settings:
>>> create_model_component('gauss1d', 'gline')
>>> gmodel = get_model_component('gline')
>>> gmodel.name
'gauss1d.gline'
>>> print([p.name for p in gmodel.pars])
['fwhm', 'pos', 'ampl']
>>> gmodel.fwhm.val = 12.2
>>> gmodel.fwhm.freeze()
- get_model_component_image(id, model=None)[source] [edit on github]
Return the data used by image_model_component.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
- Returns
cpt_img – The y attribute contains the component model values as a 2D NumPy array.
- Return type
a sherpa.image.ComponentModelImage instance
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_component_image
Return the data used by image_source_component.
get_model_image
Return the data used by image_model.
image_model
Display the model for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Examples
Return the gsrc component values for the default data set:
>>> minfo = get_model_component_image(gsrc)
Get the bgnd model pixel values for data set 2:
>>> minfo = get_model_component_image(2, bgnd)
- get_model_component_plot(id, model=None, recalc=True)[source] [edit on github]
Return the data used to create the model-component plot.
For PHA data, the response model is automatically added by the routine unless the model contains a response.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to use (the name, if a string).
recalc (bool, optional) – If False then the results from the last call to plot_model_component (or get_model_component_plot) are returned, otherwise the data is re-generated.
- Returns
An object representing the data used to create the plot by plot_model_component. The return value depends on the data set (e.g. PHA, 1D binned, or 1D un-binned).
- Return type
instance
See also
get_model_plot
Return the data used to create the model plot.
plot_model
Plot the model for a data set.
plot_model_component
Plot a component of the model for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Examples
Return the plot data for the pl component used in the default data set:
>>> cplot = get_model_component_plot(pl)
Return the full source model (fplot) and then for the components gal * pl and gal * gline, for the data set 'jet':
>>> fmodel = xsphabs.gal * (powlaw1d.pl + gauss1d.gline)
>>> set_source('jet', fmodel)
>>> fit('jet')
>>> fplot = get_model_plot('jet')
>>> plot1 = get_model_component_plot('jet', pl*gal)
>>> plot2 = get_model_component_plot('jet', gline*gal)
For PHA data sets the response is automatically added, but it can also be manually specified. In the following plot1 and plot2 contain the same data:
>>> plot1 = get_model_component_plot(pl)
>>> rsp = get_response()
>>> plot2 = get_model_component_plot(rsp(pl))
- get_model_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_model.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to contour_model (or get_model_contour) are returned, otherwise the data is re-generated.
- Returns
model_data – The y attribute contains the model values and the x0 and x1 arrays contain the corresponding coordinate values, as one-dimensional arrays.
- Return type
a sherpa.plot.ModelContour instance
- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_image
Return the data used by image_model.
contour_model
Contour the values of the model, including any PSF.
image_model
Display the model for a data set in the image viewer.
Examples
Return the model pixel values for the default data set:
>>> minfo = get_model_contour()
- get_model_contour_prefs()[source] [edit on github]
Return the preferences for contour_model.
- Returns
prefs – Changing the values of this dictionary will change any new contour plots.
- Return type
See also
contour_model
Contour the values of the model, including any PSF.
Notes
The meaning of the fields depends on the chosen plot backend. A value of None (or not set) means to use the default value for that attribute, unless indicated otherwise.
alpha
The transparency value used to draw the contours, where 0 is fully transparent and 1 is fully opaque.
colors
The colors to draw the contours.
linewidths
What thickness of line to draw the contours.
xlog
Should the X axis be drawn with a logarithmic scale? The default is False.
ylog
Should the Y axis be drawn with a logarithmic scale? The default is False.
Examples
Change the contours for the model to be drawn in ‘orange’:
>>> prefs = get_model_contour_prefs()
>>> prefs['color'] = 'orange'
>>> contour_data()
>>> contour_model(overcontour=True)
- get_model_image(id=None)[source] [edit on github]
Return the data used by image_model.
Evaluate the source expression for the image pixels - including any PSF convolution defined by set_psf - and return the results.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
src_img – The y attribute contains the source model values as a 2D NumPy array.
- Return type
a sherpa.image.ModelImage instance
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_image
Return the data used by image_source.
contour_model
Contour the values of the model, including any PSF.
image_model
Display the model for a data set in the image viewer.
set_psf
Add a PSF model to a data set.
Examples
Calculate the residuals (data - model) for the default data set:
>>> minfo = get_model_image()
>>> dinfo = get_data_image()
>>> resid = dinfo.y - minfo.y
- get_model_pars(model)[source] [edit on github]
Return the names of the parameters of a model.
- Parameters
model (str or a sherpa.models.model.Model object) –
- Returns
names – The names of the parameters in the model expression. These names do not include the name of the parent component.
- Return type
list of str
See also
create_model_component
Create a model component.
get_model
Return the model expression for a data set.
get_model_type
Describe a model expression.
get_source
Return the source model expression for a data set.
Examples
>>> set_source(gauss2d.src + const2d.bgnd)
>>> get_model_pars(get_source())
['fwhm', 'xpos', 'ypos', 'ellip', 'theta', 'ampl', 'c0']
- get_model_plot(id=None, recalc=True)[source] [edit on github]
Return the data used to create the model plot.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to plot_model (or get_model_plot) are returned, otherwise the data is re-generated.
- Returns
An object representing the data used to create the plot by plot_model. The return value depends on the data set (e.g. 1D binned or un-binned).
- Return type
instance
See also
get_model_plot_prefs
Return the preferences for plot_model.
plot_model
Plot the model for a data set.
Examples
>>> mplot = get_model_plot()
>>> print(mplot)
- get_model_plot_prefs(id=None)[source] [edit on github]
Return the preferences for plot_model.
The plot preferences may depend on the data set, so the identifier can now be given as an optional argument.
Changed in version 4.12.2: The id argument has been added.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
- Returns
prefs – Changing the values of this dictionary will change any new model plots. This dictionary will be empty if no plot backend is available.
- Return type
See also
plot_model
Plot the model for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The meaning of the fields depends on the chosen plot backend. A value of None means to use the default value for that attribute, unless indicated otherwise. These preferences are used by the following commands: plot_model, plot_ratio, plot_bkg_model, and the "fit" variants, such as plot_fit, plot_fit_resid, and plot_bkg_fit.
The preferences recognized by the matplotlib backend are the same as for get_data_plot_prefs.
Examples
After these commands, any model plot will use a green line to display the model:
>>> prefs = get_model_plot_prefs()
>>> prefs['color'] = 'green'
- get_model_type(model)[source] [edit on github]
Describe a model expression.
- Parameters
model (str or a sherpa.models.model.Model object) –
- Returns
type – The name of the model expression.
- Return type
See also
create_model_component
Create a model component.
get_model
Return the model expression for a data set.
get_model_pars
Return the names of the parameters of a model.
get_source
Return the source model expression for a data set.
Examples
>>> create_model_component("powlaw1d", "pl")
>>> get_model_type("pl")
'powlaw1d'
For expressions containing more than one component, the result is likely to be 'binaryopmodel':
>>> get_model_type(const1d.norm * (polynom1d.poly + gauss1d.gline))
'binaryopmodel'
For sources with some form of an instrument model - such as a PSF convolution for an image or a PHA file with response information from the ARF and RMF - the result can depend on whether the expression contains this extra information or not:
>>> get_model_type(get_source('spec'))
'binaryopmodel'
>>> get_model_type(get_model('spec'))
'rspmodelpha'
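For a single component the reported type is just the lower-case form of the model's class name, which is presumably how 'powlaw1d' and 'binaryopmodel' arise in the examples above. A stand-alone sketch of that convention (toy classes, not Sherpa's implementation):

```python
class PowLaw1D:
    """Stand-in for a single Sherpa model class."""

class BinaryOpModel:
    """Stand-in for the class representing a combined expression."""

def model_type(model):
    # Report the lower-cased class name, matching the strings
    # shown in the get_model_type examples.
    return type(model).__name__.lower()

print(model_type(PowLaw1D()))       # powlaw1d
print(model_type(BinaryOpModel()))  # binaryopmodel
```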
- get_num_par(id=None)[source] [edit on github]
Return the number of parameters in a model expression.
The get_num_par function returns the number of parameters, both frozen and thawed, in the model assigned to a data set.
- Parameters
id (int or str, optional) – The data set containing the model expression. If not given then the default identifier is used, as returned by get_default_id.
- Returns
npar – The number of parameters in the model expression. This sums up all the parameters of the components in the expression, and includes both frozen and thawed components.
- Return type
- Raises
sherpa.utils.err.IdentifierErr – If no model expression has been set for the data set (with set_model or set_source).
See also
get_num_par_frozen
Return the number of frozen parameters.
get_num_par_thawed
Return the number of thawed parameters.
set_model
Set the source model expression for a data set.
Examples
Return the total number of parameters for the default data set:
>>> print(get_num_par())
Find the number of parameters for the model associated with the data set called “jet”:
>>> njet = get_num_par('jet')
- get_num_par_frozen(id=None)[source] [edit on github]
Return the number of frozen parameters in a model expression.
The get_num_par_frozen function returns the number of frozen parameters in the model assigned to a data set.
- Parameters
id (int or str, optional) – The data set containing the model expression. If not given then the default identifier is used, as returned by get_default_id.
- Returns
npar – The number of parameters in the model expression. This sums up all the frozen parameters of the components in the expression.
- Return type
- Raises
sherpa.utils.err.IdentifierErr – If no model expression has been set for the data set (with set_model or set_source).
See also
get_num_par
Return the number of parameters.
get_num_par_thawed
Return the number of thawed parameters.
set_model
Set the source model expression for a data set.
Examples
Return the number of frozen parameters for the default data set:
>>> print(get_num_par_frozen())
Find the number of frozen parameters for the model associated with the data set called “jet”:
>>> njet = get_num_par_frozen('jet')
- get_num_par_thawed(id=None)[source] [edit on github]
Return the number of thawed parameters in a model expression.
The get_num_par_thawed function returns the number of thawed parameters in the model assigned to a data set.
- Parameters
id (int or str, optional) – The data set containing the model expression. If not given then the default identifier is used, as returned by get_default_id.
- Returns
npar – The number of parameters in the model expression. This sums up all the thawed parameters of the components in the expression.
- Return type
- Raises
sherpa.utils.err.IdentifierErr – If no model expression has been set for the data set (with set_model or set_source).
See also
get_num_par
Return the number of parameters.
get_num_par_frozen
Return the number of frozen parameters.
set_model
Set the source model expression for a data set.
Examples
Return the number of thawed parameters for the default data set:
>>> print(get_num_par_thawed())
Find the number of thawed parameters for the model associated with the data set called “jet”:
>>> njet = get_num_par_thawed('jet')
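Since every parameter is either frozen or thawed, the three counters satisfy get_num_par() == get_num_par_frozen() + get_num_par_thawed(). A toy sketch of the counting over a parameter list (illustrative only; the Par class is invented, not Sherpa's parameter type):

```python
from dataclasses import dataclass

@dataclass
class Par:
    """Minimal stand-in for a model parameter."""
    name: str
    frozen: bool

# A gauss1d-like component: fwhm and pos thawed, ampl frozen.
pars = [Par('fwhm', False), Par('pos', False), Par('ampl', True)]

num_par = len(pars)                       # what get_num_par counts
num_frozen = sum(p.frozen for p in pars)  # what get_num_par_frozen counts
num_thawed = num_par - num_frozen         # what get_num_par_thawed counts

print(num_par, num_frozen, num_thawed)  # 3 1 2
```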
- get_order_plot(id=None, orders=None, recalc=True)[source] [edit on github]
Return the data used by plot_order.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
orders (optional) – Which response to use. The argument can be a scalar or array, in which case multiple curves will be displayed. The default is to use all orders.
recalc (bool, optional) – If False then the results from the last call to plot_order (or get_order_plot) are returned, otherwise the data is re-generated.
- Returns
data – An object representing the data used to create the plot by plot_order.
- Return type
a sherpa.astro.plot.OrderPlot instance
See also
get_default_id
Return the default data set identifier.
plot_order
Plot the model for a data set convolved by the given response.
Examples
Retrieve the plot information for order 1 of the default data set and then display it:
>>> oplot = get_order_plot(orders=1)
>>> print(oplot)
Return the plot data for orders 1 and 2 of data set ‘jet’, plot the first and then overplot the second:
>>> plots = get_order_plot('jet', orders=[1, 2])
>>> plots[0].plot()
>>> plots[1].overplot()
- get_par(par)[source] [edit on github]
Return a parameter of a model component.
- Parameters
par (str) – The name of the parameter, using the format “componentname.parametername”.
- Returns
par – The parameter values - e.g. current value, limits, and whether it is frozen - can be changed using this object.
- Return type
a sherpa.models.parameter.Parameter instance
- Raises
sherpa.utils.err.ArgumentErr – If the par argument is invalid: the model component does not exist or the given model has no parameter with that name.
See also
set_par
Set the value, limits, or behavior of a model parameter.
Examples
Return the “c0” parameter of the “bgnd” model component and change it to be frozen:
>>> p = get_par('bgnd.c0')
>>> p.frozen = True
- get_pdf_plot()[source] [edit on github]
Return the data used to plot the last PDF.
- Returns
plot – An object containing the data used by the last call to plot_pdf. The fields will be None if the function has not been called.
- Return type
a sherpa.plot.PDFPlot instance
See also
plot_pdf
Plot the probability density function of an array.
- get_photon_flux_hist(lo=None, hi=None, id=None, num=7500, bins=75, correlated=False, numcores=None, bkg_id=None, scales=None, model=None, otherids=(), recalc=True, clip='hard')[source] [edit on github]
Return the data displayed by plot_photon_flux.
The get_photon_flux_hist() function calculates a histogram of simulated photon flux values representing the photon flux probability distribution for a model component, accounting for the errors on the model parameters.
Changed in version 4.12.2: The scales parameter is no longer ignored when set and the model and otherids parameters have been added.
- Parameters
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string, optional) – The identifier of the data set to use. If None, the default value, then all datasets with associated models are used to calculate the errors and the model evaluation is done using the default dataset.
num (int, optional) – The number of samples to create. The default is 7500.
bins (int, optional) – The number of bins to use for the histogram.
correlated (bool, optional) – If True (the default is False) then scales is the full covariance matrix, otherwise it is just a 1D array containing the variances of the parameters (the diagonal elements of the covariance matrix).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The size and shape of the array depends on the number of free parameters in the fit (n) and the value of the correlated parameter. When the parameter is True, scales must be given the covariance matrix for the free parameters (a n by n matrix that matches the parameter ordering used by Sherpa). For un-correlated parameters the covariance matrix can be used, or a one-dimensional array of n elements can be used, giving the width (specified as the sigma value of a normal distribution) for each parameter (e.g. the square root of the diagonal elements of the covariance matrix). If the scales parameter is not given then the covariance matrix is evaluated for the current model and best-fit parameters.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples. The model must be part of the source expression.
otherids (sequence of integer and string ids, optional) – The list of other datasets that should be included when calculating the errors to draw values from.
recalc (bool, optional) – If True, the default, then re-calculate the values rather than use the values from the last time the function was run.
clip ({'hard', 'soft', 'none'}, optional) – What clipping strategy should be applied to the sampled parameters. The default ('hard') is to fix values at their hard limits if they exceed them. A value of 'soft' uses the soft limits instead, and 'none' applies no clipping.
- Returns
hist – An object representing the data used to create the plot by plot_photon_flux.
- Return type
a sherpa.astro.plot.PhotonFluxHistogram instance
See also
get_energy_flux_hist
Return the data displayed by plot_energy_flux.
plot_energy_flux
Display the energy flux distribution.
plot_photon_flux
Display the photon flux distribution.
sample_energy_flux
Return the energy flux distribution of a model.
sample_flux
Return the flux distribution of a model.
sample_photon_flux
Return the photon flux distribution of a model.
Examples
Get the photon flux distribution for the range 0.5 to 7 for the default data set:
>>> phist = get_photon_flux_hist(0.5, 7, num=1000)
>>> print(phist)
Compare the 0.5 to 2 photon flux distribution from the “core” data set to the values from the “jet” data set:
>>> phist1 = get_photon_flux_hist(0.5, 2, id='jet', num=1000)
>>> phist2 = get_photon_flux_hist(0.5, 2, id='core', num=1000)
Compare the flux distribution for the full source expression (aflux) to that for just the pl component (uflux); this can be useful to calculate the unabsorbed flux distribution if the full source model contains an absorption component:
>>> aflux = get_photon_flux_hist(0.5, 2, num=1000, bins=20)
>>> uflux = get_photon_flux_hist(0.5, 2, model=pl, num=1000, bins=20)
When there are multiple datasets loaded, get_photon_flux_hist uses all datasets to evaluate the errors when the id parameter is left at its default value of None. The otherids parameter is used, along with id, to specify exactly what datasets are used:
>>> x = get_photon_flux_hist(2, 10, num=1000, bins=20, model=src)
>>> y = get_photon_flux_hist(2, 10, num=1000, bins=20, model=src,
...                          id=1, otherids=(2, 3, 4))
- get_pileup_model(id=None)[source] [edit on github]
Return the pile up model for a data set.
Return the pile up model set by a call to set_pileup_model.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
- Returns
model
- Return type
a sherpa.astro.models.JDPileup instance
- Raises
sherpa.utils.err.IdentifierErr – If no pile up model has been set for the data set.
See also
delete_pileup_model
Delete the pile up model for a data set.
fit
Fit one or more data sets.
get_model
Return the model expression for a data set.
get_source
Return the source model expression for a data set.
sherpa.astro.models.JDPileup
The ACIS pile up model.
list_pileup_model_ids
List of all the data sets with a pile up model.
set_pileup_model
Include a model of the Chandra ACIS pile up when fitting PHA data.
Examples
>>> jdp1 = get_pileup_model()
>>> jdp2 = get_pileup_model(2)
- get_prior(par)[source] [edit on github]
Return the prior function for a parameter (MCMC).
The default behavior of the pyBLoCXS MCMC sampler (run by the get_draws function) is to use a flat prior for each parameter. The get_prior routine finds the current prior assigned to a parameter, and set_prior is used to change it.
- Parameters
par (a sherpa.models.parameter.Parameter instance) – A parameter of a model instance.
- Returns
The parameter prior set by a previous call to set_prior. This may be a function or model instance.
- Return type
prior
- Raises
ValueError – If a prior has not been set for the parameter.
See also
set_prior
Set the prior function to use with a parameter.
Examples
>>> prior = get_prior(bgnd.c0)
>>> print(prior)
- get_proj()[source] [edit on github]
Return the confidence-interval estimation object.
- Returns
proj
- Return type
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_proj_opt
Return one or all of the options for the confidence interval method.
proj
Estimate confidence intervals for fit parameters.
set_proj_opt
Set an option of the proj estimation object.
Notes
The attributes of the object include:
eps
The precision of the calculated limits. The default is 0.01.
fast
If True then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is False.
max_rstat
If the reduced chi square is larger than this value, do not use the result (only used with chi-square statistics). The default is 3.
maxfits
The maximum number of re-fits allowed (that is, when the remin filter is met). The default is 5.
maxiters
The maximum number of iterations allowed when bracketing limits, before stopping for that parameter. The default is 200.
numcores
The number of computer cores to use when evaluating results in parallel. This is only used if parallel is True. The default is to use all cores.
parallel
If there is more than one free parameter then the results can be evaluated in parallel, to reduce the time required. The default is True.
remin
The minimum difference in statistic value for a new fit location to be considered better than the current best fit (which starts out as the starting location of the fit at the time proj is called). The default is 0.01.
sigma
The error limit being calculated, in sigma. The default is 1.
soft_limits
Should the search be restricted to the soft limits of the parameters (True), or can parameter values go out all the way to the hard limits if necessary (False). The default is False.
tol
The tolerance for the fit. The default is 0.2.
Examples
>>> print(get_proj())
name = projection
numcores = 8
max_rstat = 3
maxiters = 200
soft_limits = False
eps = 0.01
fast = False
maxfits = 5
remin = 0.01
tol = 0.2
sigma = 1
parallel = True
- get_proj_opt(name=None)[source] [edit on github]
Return one or all of the options for the confidence interval method.
This is a helper function since the options can also be read directly using the object returned by
get_proj
.- Parameters
name (str, optional) – If not given, a dictionary of all the options is returned. When given, the individual value is returned.
- Returns
value
- Return type
dictionary or value
- Raises
sherpa.utils.err.ArgumentErr – If the
name
argument is not recognized.
See also
conf
Estimate parameter confidence intervals using the confidence method.
proj
Estimate confidence intervals for fit parameters.
get_proj
Return the confidence-interval estimation object.
set_proj_opt
Set an option of the proj estimation object.
Examples
>>> get_proj_opt('sigma')
1
>>> popts = get_proj_opt()
>>> popts['sigma']
1
- get_proj_results()[source] [edit on github]
Return the results of the last
proj
run.- Returns
results
- Return type
sherpa.fit.ErrorEstResults object
- Raises
sherpa.utils.err.SessionErr – If no
proj
call has been made.
See also
conf
Estimate parameter confidence intervals using the confidence method.
proj
Estimate confidence intervals for fit parameters.
get_proj_opt
Return one or all of the options for the projection method.
set_proj_opt
Set an option of the proj estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘projection’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
Examples
>>> res = get_proj_results()
>>> print(res)
datasets = ('src',)
methodname = projection
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('bgnd.c0',)
parvals = (9.1958148476800918,)
parmins = (-2.0765029551804268,)
parmaxes = (2.0765029551935186,)
nfits = 0
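The percent field shown in the output follows directly from the sigma value, assuming a normal distribution. A minimal sketch of that relationship using only the standard library (the helper name is hypothetical, not part of Sherpa):

```python
import math

def sigma_to_percent(sigma):
    # Fraction of a normal distribution lying within +/- sigma of the
    # mean, expressed as a percentage: 100 * erf(sigma / sqrt(2)).
    return 100.0 * math.erf(sigma / math.sqrt(2.0))

print(round(sigma_to_percent(1), 10))  # → 68.2689492137, as in the example above
```

For sigma = 2 the same formula gives the familiar 95.45 percent interval.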
- get_projection_results() [edit on github]
Return the results of the last
proj
run.- Returns
results
- Return type
sherpa.fit.ErrorEstResults object
- Raises
sherpa.utils.err.SessionErr – If no
proj
call has been made.
See also
conf
Estimate parameter confidence intervals using the confidence method.
proj
Estimate confidence intervals for fit parameters.
get_proj_opt
Return one or all of the options for the projection method.
set_proj_opt
Set an option of the proj estimation object.
Notes
The fields of the object include:
datasets
A tuple of the data sets used in the analysis.
methodname
This will be ‘projection’.
iterfitname
The name of the iterated-fit method used, if any.
fitname
The name of the optimization method used.
statname
The name of the fit statistic used.
sigma
The sigma value used to calculate the confidence intervals.
percent
The percentage of the signal contained within the confidence intervals (calculated from the
sigma
value assuming a normal distribution).parnames
A tuple of the parameter names included in the analysis.
parvals
A tuple of the best-fit parameter values, in the same order as
parnames
.parmins
A tuple of the lower error bounds, in the same order as
parnames
.parmaxes
A tuple of the upper error bounds, in the same order as
parnames
.nfits
The number of model evaluations.
Examples
>>> res = get_proj_results()
>>> print(res)
datasets = ('src',)
methodname = projection
iterfitname = none
fitname = levmar
statname = chi2gehrels
sigma = 1
percent = 68.2689492137
parnames = ('bgnd.c0',)
parvals = (9.1958148476800918,)
parmins = (-2.0765029551804268,)
parmaxes = (2.0765029551935186,)
nfits = 0
- get_psf(id=None)[source] [edit on github]
Return the PSF model defined for a data set.
Return the parameter settings for the PSF model assigned to the data set.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns
psf
- Return type
a
sherpa.instrument.PSFModel
instance- Raises
sherpa.utils.err.IdentifierErr – If no PSF model has been set for the data set.
See also
delete_psf
Delete the PSF model for a data set.
image_psf
Display the 2D PSF model for a data set in the image viewer.
list_psf_ids
List of all the data sets with a PSF.
load_psf
Create a PSF model.
plot_psf
Plot the 1D PSF model applied to a data set.
set_psf
Add a PSF model to a data set.
Examples
Change the size and center of the PSF for the default data set:
>>> psf = get_psf()
>>> psf.size = (21, 21)
>>> psf.center = (10, 10)
- get_psf_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_psf.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call to contour_psf
(or get_psf_contour
) are returned, otherwise the data is re-generated.
- Returns
psf_data
- Return type
a
sherpa.plot.PSFContour
instance- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_kernel_contour
Return the data used by contour_kernel.
contour_kernel
Contour the kernel applied to the model of an image data set.
contour_psf
Contour the PSF applied to the model of an image data set.
Examples
Return the contour data for the PSF for the default data set:
>>> cplot = get_psf_contour()
- get_psf_image(id=None)[source] [edit on github]
Return the data used by image_psf.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns
psf_data
- Return type
a
sherpa.image.PSFImage
instance- Raises
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_kernel_image
Return the data used by image_kernel.
image_kernel
Display the 2D kernel for a data set in the image viewer.
image_psf
Display the 2D PSF model for a data set in the image viewer.
Examples
Return the image data for the PSF for the default data set:
>>> iplot = get_psf_image()
>>> iplot.y.shape
(175, 200)
- get_psf_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_psf.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call to plot_psf
(or get_psf_plot
) are returned, otherwise the data is re-generated.
- Returns
psf_plot
- Return type
a
sherpa.plot.PSFPlot
instance- Raises
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_kernel_plot
Return the data used by plot_kernel.
plot_kernel
Plot the 1D kernel applied to a data set.
plot_psf
Plot the 1D PSF model applied to a data set.
Examples
Return the plot data and then create a plot with it:
>>> pplot = get_psf_plot()
>>> pplot.plot()
- get_pvalue_plot(null_model=None, alt_model=None, conv_model=None, id=1, otherids=(), num=500, bins=25, numcores=None, recalc=False)[source] [edit on github]
Return the data used by plot_pvalue.
Access the data arrays and preferences defining the histogram plot produced by the
plot_pvalue
function, a histogram of the likelihood ratios comparing fits of the null model to fits of the alternative model using faked data with Poisson noise. Data returned includes the likelihood ratio computed using the observed data, and the p-value, used to reject or accept the null model.- Parameters
null_model – The model expression for the null hypothesis.
alt_model – The model expression for the alternative hypothesis.
conv_model (optional) – An expression used to modify the model so that it can be compared to the data (e.g. a PSF or PHA response).
id (int or str, optional) – The data set that provides the data. The default is 1.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
num (int, optional) – The number of simulations to run. The default is 500.
bins (int, optional) – The number of bins to use to create the histogram. The default is 25.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
recalc (bool, optional) – The default value (
False
) means that the results from the last call to plot_pvalue
or get_pvalue_plot
are returned. If True
, the values are re-calculated.
- Returns
plot
- Return type
a
sherpa.plot.LRHistogram
instance
See also
get_pvalue_results
Return the data calculated by the last plot_pvalue call.
plot_pvalue
Compute and plot a histogram of likelihood ratios by simulating data.
Examples
Return the values from the last call to
plot_pvalue
:
>>> pvals = get_pvalue_plot()
>>> pvals.ppp
0.472
Run 500 simulations for the two models and print the results:
>>> pvals = get_pvalue_plot(mdl1, mdl2, recalc=True, num=500)
>>> print(pvals)
- get_pvalue_results()[source] [edit on github]
Return the data calculated by the last plot_pvalue call.
The
get_pvalue_results
function returns the likelihood ratio test results computed by the plot_pvalue
command, which compares fits of the null model to fits of the alternative model using faked data with Poisson noise. The likelihood ratio based on the observed data is returned, along with the p-value, used to reject or accept the null model.- Returns
plot – If
plot_pvalue
or get_pvalue_plot
have been called then the return value is asherpa.sim.simulate.LikelihoodRatioResults
instance, otherwise None
is returned.- Return type
None or a
sherpa.sim.simulate.LikelihoodRatioResults
instance
See also
plot_pvalue
Compute and plot a histogram of likelihood ratios by simulating data.
get_pvalue_plot
Return the data used by plot_pvalue.
Notes
The fields of the returned (
LikelihoodRatioResults
) object are:- ratios
The calculated likelihood ratio for each iteration.
- stats
The calculated fit statistics for each iteration, stored as the null model and then the alt model in a nsim by 2 array.
- samples
The parameter samples array for each simulation, stored in a nsim by npar array.
- lr
The likelihood ratio of the observed data for the null and alternate models.
- ppp
The p value of the observed data for the null and alternate models.
- null
The fit statistic of the null model on the observed data.
- alt
The fit statistic of the alternate model on the observed data.
Examples
Return the results of the last pvalue analysis and display the results - first using the
format
method, which provides a summary of the data, and then a look at the individual fields in the returned object. The last call displays the contents of one of the fields (ppp
).
>>> res = get_pvalue_results()
>>> print(res.format())
>>> print(res)
>>> print(res.ppp)
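The ppp value is the fraction of simulated likelihood ratios that are at least as extreme as the ratio computed from the observed data. A simplified stdlib sketch of that calculation (a hypothetical helper, not Sherpa's implementation):

```python
def simulated_pvalue(observed_lr, simulated_lrs):
    # p-value: fraction of simulations whose likelihood ratio is at
    # least as large as the one from the observed data.
    n_extreme = sum(1 for lr in simulated_lrs if lr >= observed_lr)
    return n_extreme / len(simulated_lrs)

# Fake ratios standing in for the 'ratios' field of the results object.
sims = [0.1, 0.5, 1.2, 2.0, 0.3, 0.8, 1.5, 0.05, 0.9, 1.1]
print(simulated_pvalue(1.0, sims))  # → 0.4
```

A small ppp argues against the null model; Sherpa runs num simulations (500 by default) to build this distribution.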
- get_quality(id=None, bkg_id=None)[source] [edit on github]
Return the quality flags for a PHA data set.
The function returns the quality value for each channel in the PHA data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.bkg_id (int or str, optional) – Set if the quality flags should be taken from a background associated with the data set.
- Returns
qual – The quality value for each channel in the PHA data set. This array is not grouped or filtered - that is, there is one element for each channel in the PHA data set. Changes to the elements of this array will change the values in the dataset (it is a reference to the values used to define the quality, not a copy).
- Return type
ndarray or
None
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
fit
Fit one or more data sets.
get_grouping
Return the grouping array for a PHA data set.
get_indep
Return the independent axes of a data set.
ignore_bad
Exclude channels marked as bad in a PHA data set.
load_quality
Load the quality array from a file and add to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The meaning of the quality column is taken from [1]_, which says that 0 indicates a “good” channel, 1 and 2 are for channels that are identified as “bad” or “dubious” (respectively) by software, 5 indicates a “bad” channel set by the user, and values of 3 or 4 are not used.
References
- 1
“The OGIP Spectral File Format”, https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
Copy the quality array from the default data set to data set 2:
>>> qual1 = get_quality()
>>> set_quality(2, qual1)
Return the quality array of the background component labelled 2 for the ‘histate’ data set:
>>> qual = get_quality('histate', bkg_id=2)
Change the quality setting for all channels below 30 in the default data set to 5 (considered bad by the user):
>>> chans, = get_indep()
>>> qual = get_quality()
>>> qual[chans < 30] = 5
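The effect of ignore_bad can be sketched in plain Python: build a mask that keeps only channels whose OGIP quality flag is 0 (the helper below is hypothetical and for illustration only):

```python
def good_channel_mask(quality):
    # OGIP convention: 0 = good; 1 and 2 mark channels flagged by
    # software, 5 marks channels flagged by the user. ignore_bad
    # excludes every non-zero channel.
    return [q == 0 for q in quality]

qual = [0, 0, 2, 0, 5, 1, 0]
mask = good_channel_mask(qual)
print(sum(mask))  # → 4 good channels remain
```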
- get_rate(id=None, filter=False, bkg_id=None)[source] [edit on github]
Return the count rate of a PHA data set.
Return an array of count-rate values for each bin in the data set. The units of the returned values depend on the values set by the
set_analysis
routine for the data set.- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is
False
.bkg_id (int or str, optional) – Set if the rate should be taken from the background associated with the data set.
- Returns
rate – The rate array. The output matches the grouping of the data set. The units are controlled by the
set_analysis
setting for this data set; that is, the units used in plot_data
, except that the type
argument to set_analysis
is ignored. The return array will match the grouping scheme applied to the data set.- Return type
array
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
See also
get_dep
Return the data for a data set.
ignore
Exclude data from the fit.
notice
Include data in the fit.
plot_data
Plot the data values.
set_analysis
Set the units used when fitting and displaying spectral data.
Examples
Return the count-rate for the default data set. For a PHA data set, where
set_analysis
has not been called, the return value will be in units of count/second/keV, and a value for each group in the data set is returned.
>>> rate = get_rate()
The return value is grouped to match the data, but is not filtered (with the default
filter
argument). The data set used here has 46 groups in it, but after filtering only 40 remain; the call to get_rate
returns a 46-element array unless filter
is explicitly set to True
:
>>> notice()
>>> get_rate().size
46
>>> ignore(None, 0.5)
>>> ignore(7, None)
>>> get_rate().size
46
>>> get_rate(filter=True).size
40
The rate of data set 2 will be in units of count/s/Angstrom and only cover the range 20 to 22 Angstroms:
>>> set_analysis(2, 'wave')
>>> notice_id(2, 20, 22)
>>> r2 = get_rate(2, filter=True)
The returned rate is now in units of count/s (the return value is multiplied by
binwidth^factor
, where factor
is normally 0):
>>> set_analysis(2, 'wave', factor=1)
>>> r2 = get_rate(2, filter=True)
Return the count rate for the second background component of data set “grating”:
>>> get_rate(id="grating", bkg_id=2)
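The conversion behind these examples is counts divided by exposure time and, unless factor=1 is used, by the bin width. A minimal stdlib sketch (the helper name and values are hypothetical):

```python
def count_rate(counts, exposure, bin_widths, factor=0):
    # rate = counts / exposure / width, then multiplied by
    # width**factor: factor=0 gives count/s/unit-width, while
    # factor=1 cancels the width term, giving count/s.
    return [c / exposure / (w ** (1 - factor))
            for c, w in zip(counts, bin_widths)]

counts = [10.0, 40.0, 25.0]   # counts per group
widths = [0.5, 0.5, 1.0]      # group widths, e.g. in keV
print(count_rate(counts, 100.0, widths))            # → [0.2, 0.8, 0.25]
print(count_rate(counts, 100.0, widths, factor=1))  # → [0.1, 0.4, 0.25]
```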
- get_ratio_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_ratio.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call to contour_ratio
(or get_ratio_contour
) are returned, otherwise the data is re-generated.
- Returns
ratio_data – The
y
attribute contains the ratio values and the x0
and x1
arrays contain the corresponding coordinate values, as one-dimensional arrays.- Return type
a
sherpa.plot.RatioContour
instance- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_image
Return the data used by image_ratio.
get_resid_contour
Return the data used by contour_resid.
contour_ratio
Contour the ratio of data to model.
image_ratio
Display the ratio (data/model) for a data set in the image viewer.
Examples
Return the ratio data for the default data set:
>>> rinfo = get_ratio_contour()
- get_ratio_image(id=None)[source] [edit on github]
Return the data used by image_ratio.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns
ratio_img – The
y
attribute contains the ratio values as a 2D NumPy array.- Return type
a
sherpa.image.RatioImage
instance- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_resid_image
Return the data used by image_resid.
contour_ratio
Contour the ratio of data to model.
image_ratio
Display the ratio (data/model) for a data set in the image viewer.
Examples
Return the ratio data for the default data set:
>>> rinfo = get_ratio_image()
- get_ratio_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_ratio.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call to plot_ratio
(or get_ratio_plot
) are returned, otherwise the data is re-generated.
- Returns
ratio_data
- Return type
a
sherpa.plot.RatioPlot
instance- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_delchi_plot
Return the data used by plot_delchi.
get_resid_plot
Return the data used by plot_resid.
plot_ratio
Plot the ratio of data to model for a data set.
Examples
Return the ratio of the data to the model for the default data set:
>>> rplot = get_ratio_plot()
>>> np.min(rplot.y)
0.6320905073750186
>>> np.max(rplot.y)
1.5170172177000447
Display the contents of the ratio plot for data set 2:
>>> print(get_ratio_plot(2))
Overplot the ratio plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_ratio_plot('jet')
>>> r2 = get_ratio_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_reg_proj(par0=None, par1=None, id=None, otherids=None, recalc=False, fast=True, min=None, max=None, nloop=(10, 10), delv=None, fac=4, log=(False, False), sigma=(1, 2, 3), levels=None, numcores=None)[source] [edit on github]
Return the region-projection object.
This returns (and optionally calculates) the data used to display the
reg_proj
contour plot. Note that if the recalc
parameter is False
(the default value) then all other parameters are ignored and the results of the last reg_proj
call are returned.- Parameters
par0 – The parameters to plot on the X and Y axes, respectively. These arguments are only used if recalc is set to
True
.par1 – The parameters to plot on the X and Y axes, respectively. These arguments are only used if recalc is set to
True
.id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (
False
) means that the results from the last call to reg_proj
(or get_reg_proj
) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.fast (bool, optional) – If
True
then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is False
.min (pair of numbers, optional) – The minimum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.max (pair of number, optional) – The maximum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.nloop (pair of int, optional) – The number of steps to use. This is used when
delv
is set toNone
.delv (pair of number, optional) – The step size for the parameter. Setting this over-rides the
nloop
parameter. The default isNone
.fac (number, optional) – When
min
ormax
is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).log (pair of bool, optional) – Should the step size be logarithmically spaced? The default (
False
) is to use a linear grid.sigma (sequence of number, optional) – The levels at which to draw the contours. The units are the change in significance relative to the starting value, in units of sigma.
levels (sequence of number, optional) – The numeric values at which to draw the contours. This over-rides the
sigma
parameter, if set (the default isNone
).numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns
rproj – The fields of this object can be used to re-create the plot created by
reg_proj
.- Return type
a
sherpa.plot.RegionProjection
instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
Examples
Return the results for the
reg_proj
run for thexpos
andypos
parameters of thesrc
component, for the default data set:
>>> reg_proj(src.xpos, src.ypos)
>>> rproj = get_reg_proj()
Since the
recalc
parameter has not been changed to True
, the following will return the results for the last call to reg_proj
, which may not have been for the r0 and alpha parameters:
>>> rproj = get_reg_proj(src.r0, src.alpha)
Create the data without creating a plot:
>>> rproj = get_reg_proj(pl.gamma, gal.nh, recalc=True)
Specify the range and step size for both parameters; in this case pl.gamma should vary between 0.5 and 2.5 and gal.nh between 0.01 and 1, both with 51 values, with the nH range spaced logarithmically:
>>> rproj = get_reg_proj(pl.gamma, gal.nh, id="src",
...                      min=(0.5, 0.01), max=(2.5, 1),
...                      nloop=(51, 51), log=(False, True),
...                      recalc=True)
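The parameter grid that the min, max, nloop, and log settings describe can be sketched with the standard library (the helper name is hypothetical, for illustration only):

```python
import math

def param_grid(lo, hi, nloop, log=False):
    # nloop evenly spaced values from lo to hi, either on a linear
    # axis or (when log=True) spaced evenly in log10, mirroring the
    # per-axis log flag.
    if log:
        llo, lhi = math.log10(lo), math.log10(hi)
        return [10 ** (llo + i * (lhi - llo) / (nloop - 1))
                for i in range(nloop)]
    return [lo + i * (hi - lo) / (nloop - 1) for i in range(nloop)]

gamma = param_grid(0.5, 2.5, 51)            # linear axis for pl.gamma
nh = param_grid(0.01, 1.0, 51, log=True)    # log axis for gal.nh
print(len(gamma), gamma[0], gamma[-1])      # → 51 0.5 2.5
```

Setting delv instead would fix the step size directly, overriding the nloop count.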
- get_reg_unc(par0=None, par1=None, id=None, otherids=None, recalc=False, min=None, max=None, nloop=(10, 10), delv=None, fac=4, log=(False, False), sigma=(1, 2, 3), levels=None, numcores=None)[source] [edit on github]
Return the region-uncertainty object.
This returns (and optionally calculates) the data used to display the
reg_unc
contour plot. Note that if the recalc
parameter is False
(the default value) then all other parameters are ignored and the results of the last reg_unc
call are returned.- Parameters
par0 – The parameters to plot on the X and Y axes, respectively. These arguments are only used if
recalc
is set to True
.par1 – The parameters to plot on the X and Y axes, respectively. These arguments are only used if
recalc
is set to True
.id (str or int, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (list of str or int, optional) – Other data sets to use in the calculation.
recalc (bool, optional) – The default value (
False
) means that the results from the last call to reg_unc
(or get_reg_unc
) are returned, ignoring all other parameter values. Otherwise, the statistic curve is re-calculated, but not plotted.fast (bool, optional) – If
True
then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is False
.min (pair of numbers, optional) – The minimum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.max (pair of number, optional) – The maximum parameter value for the calculation. The default value of
None
means that the limit is calculated from the covariance, using thefac
value.nloop (pair of int, optional) – The number of steps to use. This is used when
delv
is set toNone
.delv (pair of number, optional) – The step size for the parameter. Setting this over-rides the
nloop
parameter. The default isNone
.fac (number, optional) – When
min
ormax
is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).log (pair of bool, optional) – Should the step size be logarithmically spaced? The default (
False
) is to use a linear grid.sigma (sequence of number, optional) – The levels at which to draw the contours. The units are the change in significance relative to the starting value, in units of sigma.
levels (sequence of number, optional) – The numeric values at which to draw the contours. This over-rides the
sigma
parameter, if set (the default isNone
).numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns
rproj – The fields of this object can be used to re-create the plot created by
reg_unc
.- Return type
a
sherpa.plot.RegionUncertainty
instance
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
reg_unc
Plot the statistic value as two parameters are varied.
Examples
Return the results for the
reg_unc
run for thexpos
andypos
parameters of thesrc
component, for the default data set:
>>> reg_unc(src.xpos, src.ypos)
>>> runc = get_reg_unc()
Since the
recalc
parameter has not been changed to True
, the following will return the results for the last call to reg_unc
, which may not have been for the r0 and alpha parameters:
>>> runc = get_reg_unc(src.r0, src.alpha)
Create the data without creating a plot:
>>> runc = get_reg_unc(pl.gamma, gal.nh, recalc=True)
Specify the range and step size for both parameters; in this case pl.gamma should vary between 0.5 and 2.5 and gal.nh between 0.01 and 1, both with 51 values, with the nH range spaced logarithmically:
>>> runc = get_reg_unc(pl.gamma, gal.nh, id="src",
...                    min=(0.5, 0.01), max=(2.5, 1),
...                    nloop=(51, 51), log=(False, True),
...                    recalc=True)
- get_resid_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_resid.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call to contour_resid
(or get_resid_contour
) are returned, otherwise the data is re-generated.
- Returns
resid_data – The
y
attribute contains the residual values and the x0
and x1
arrays contain the corresponding coordinate values, as one-dimensional arrays.- Return type
a
sherpa.plot.ResidContour
instance- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_contour
Return the data used by contour_ratio.
get_resid_image
Return the data used by image_resid.
contour_resid
Contour the residuals of the fit.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
Examples
Return the residual data for the default data set:
>>> rinfo = get_resid_contour()
- get_resid_image(id=None)[source] [edit on github]
Return the data used by image_resid.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.- Returns
resid_img – The
y
attribute contains the residual values as a 2D NumPy array.- Return type
a
sherpa.image.ResidImage
instance- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_image
Return the data used by image_ratio.
contour_resid
Contour the residuals of the fit.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
Examples
Return the residual data for the default data set:
>>> rinfo = get_resid_image()
- get_resid_plot(id=None, recalc=True)[source] [edit on github]
Return the data used by plot_resid.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by
get_default_id
.recalc (bool, optional) – If
False
then the results from the last call to plot_resid
(or get_resid_plot
) are returned, otherwise the data is re-generated.
- Returns
resid_data
- Return type
a
sherpa.plot.ResidPlot
instance- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_delchi_plot
Return the data used by plot_delchi.
get_ratio_plot
Return the data used by plot_ratio.
plot_resid
Plot the residuals (data - model) for a data set.
Examples
Return the residual data for the default data set:
>>> rplot = get_resid_plot()
>>> np.min(rplot.y)
-2.9102595936209896
>>> np.max(rplot.y)
4.0897404063790104
Display the contents of the residuals plot for data set 2:
>>> print(get_resid_plot(2))
Overplot the residuals plot from the ‘core’ data set on the ‘jet’ data set:
>>> r1 = get_resid_plot('jet')
>>> r2 = get_resid_plot('core')
>>> r1.plot()
>>> r2.overplot()
- get_response(id=None, bkg_id=None)[source] [edit on github]
Return the response information applied to a PHA data set.
For a PHA data set, the source model - created by set_model - is modified by a model representing the instrumental effects - such as the effective area of the mirror, the energy resolution of the detector, and any model of pile up - which is collectively known as the instrument response. The get_response function returns the instrument response model.
- Parameters
id (int or str, optional) – The data set containing the instrument response. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – If given, return the response for the given background component, rather than the source.
- Returns
The return value depends on whether an ARF, RMF, or pile up model has been associated with the data set.
- Return type
response
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
See also
get_arf
Return the ARF associated with a PHA data set.
get_pileup_model
Return the pile up model for a data set.
get_rmf
Return the RMF associated with a PHA data set.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
set_full_model
Define the convolved model expression for a data set.
Examples
Create an empty PHA data set, load in an ARF and RMF, and then retrieve the response. The response is then used to model the instrument response applied to a powlaw1d model component, along with a constant component (bgnd) that does not “pass through” the instrument response:
>>> dataspace1d(1, 1024, 1, dstype=DataPHA)
>>> load_arf('src.arf')
>>> load_rmf('src.rmf')
>>> rsp = get_response()
>>> set_full_model(rsp(powlaw1d.pl) + const1d.bgnd)
- get_rmf(id=None, resp_id=None, bkg_id=None)[source] [edit on github]
Return the RMF associated with a PHA data set.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
resp_id (int or str, optional) – The identifier for the RMF within this data set, if there are multiple responses.
bkg_id (int or str, optional) – Set this to return the given background component.
- Returns
rmf – This is a reference to the RMF, rather than a copy, so that changing the fields of the object will change the values in the data set.
- Return type
a sherpa.astro.instrument.RMF1D instance
See also
fake_pha
Simulate a PHA data set from a model.
get_response
Return the response information applied to a PHA data set.
load_pha
Load a file as a PHA data set.
load_rmf
Load a RMF from a file and add it to a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_rmf
Read in a RMF from a file.
Examples
Copy the RMF from the default data set to data set 2:
>>> rmf1 = get_rmf()
>>> set_rmf(2, rmf1)
Retrieve the RMF associated to the second background component of the ‘core’ data set:
>>> bgrmf = get_rmf('core', 'bkg.rmf', bkg_id=2)
Retrieve the ARF and RMF for the default data set and use them to create a model expression which includes a power-law component (pbgnd) that is not convolved by the response:
>>> arf = get_arf()
>>> rmf = get_rmf()
>>> src_expr = xsphabs.abs1 * powlaw1d.psrc
>>> set_full_model(rmf(arf(src_expr)) + powlaw1d.pbgnd)
>>> print(get_model())
- get_sampler()[source] [edit on github]
Return the current MCMC sampler options.
Returns the options for the current pyBLoCXS MCMC sampling method (jumping rules).
- Returns
options – A copy of the options for the chosen sampler. Use set_sampler_opt to change these values. The fields depend on the current sampler.
- Return type
See also
get_sampler_name
Return the name of the current MCMC sampler.
get_sampler_opt
Return an option of the current MCMC sampler.
set_sampler
Set the MCMC sampler.
set_sampler_opt
Set an option for the current MCMC sampler.
Examples
>>> print(get_sampler())
- get_sampler_name()[source] [edit on github]
Return the name of the current MCMC sampler.
- Returns
name
- Return type
See also
get_sampler
Return the current MCMC sampler options.
set_sampler
Set the MCMC sampler.
Examples
>>> get_sampler_name()
'MetropolisMH'
- get_sampler_opt(opt)[source] [edit on github]
Return an option of the current MCMC sampler.
- Parameters
opt (str) – The name of the option. The fields depend on the current sampler.
- Returns
value – The value of the option.
- Return type
See also
get_sampler
Return the current MCMC sampler options.
set_sampler_opt
Set an option for the current MCMC sampler.
Examples
>>> get_sampler_opt('log')
False
- get_scatter_plot()[source] [edit on github]
Return the data used to plot the last scatter plot.
- Returns
plot – An object containing the data used by the last call to plot_scatter. The fields will be None if the function has not been called.
- Return type
a sherpa.plot.ScatterPlot instance
See also
plot_scatter
Create a scatter plot.
- get_source(id=None)[source] [edit on github]
Return the source model expression for a data set.
This returns the model expression created by set_model or set_source. It does not include any instrument response.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
- Returns
model – This can contain multiple model components. Changing attributes of this model changes the model used by the data set.
- Return type
a sherpa.models.Model object
See also
delete_model
Delete the model expression from a data set.
get_model
Return the model expression for a data set.
get_model_pars
Return the names of the parameters of a model.
get_model_type
Describe a model expression.
list_model_ids
List of all the data sets with a source expression.
sherpa.astro.ui.set_bkg_model
Set the background model expression for a data set.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
show_model
Display the source model expression for a data set.
Examples
Return the source expression for the default data set, display it, and then find the number of parameters in it:
>>> src = get_source()
>>> print(src)
>>> len(src.pars)
5
Set the source expression for data set ‘obs2’ to be equal to the model of data set ‘obs1’ multiplied by a scalar value:
>>> set_source('obs2', const1d.norm * get_source('obs1'))
- get_source_component_image(id, model=None)[source] [edit on github]
Return the data used by image_source_component.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
- Returns
cpt_img – The y attribute contains the component model values as a 2D NumPy array.
- Return type
a sherpa.image.ComponentSourceImage instance
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_component_image
Return the data used by image_model_component.
get_source_image
Return the data used by image_source.
image_source
Display the source expression for a data set in the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Examples
Return the gsrc component values for the default data set:
>>> sinfo = get_source_component_image(gsrc)
Get the ‘bgnd’ model pixel values for data set 2:
>>> sinfo = get_source_component_image(2, bgnd)
- get_source_component_plot(id, model=None, recalc=True)[source] [edit on github]
Return the data used by plot_source_component.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to use (the name, if a string).
recalc (bool, optional) – If False then the results from the last call to plot_source_component (or get_source_component_plot) are returned, otherwise the data is re-generated.
- Returns
An object representing the data used to create the plot by plot_source_component. The return value depends on the data set (e.g. 1D binned or un-binned).
- Return type
instance
See also
get_source_plot
Return the data used to create the source plot.
plot_source
Plot the source expression for a data set.
plot_source_component
Plot a component of the source expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Examples
Return the plot data for the pl component used in the default data set:
>>> cplot = get_source_component_plot(pl)
Return the full source model (fplot) and then the components gal * pl and gal * gline, for the data set ‘jet’:
>>> fmodel = xsphabs.gal * (powlaw1d.pl + gauss1d.gline)
>>> set_source('jet', fmodel)
>>> fit('jet')
>>> fplot = get_source('jet')
>>> plot1 = get_source_component_plot('jet', pl*gal)
>>> plot2 = get_source_component_plot('jet', gline*gal)
- get_source_contour(id=None, recalc=True)[source] [edit on github]
Return the data used by contour_source.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
recalc (bool, optional) – If False then the results from the last call to contour_source (or get_source_contour) are returned, otherwise the data is re-generated.
- Returns
source_data – The y attribute contains the model values and the x0 and x1 arrays contain the corresponding coordinate values, as one-dimensional arrays.
- Return type
a sherpa.plot.SourceContour instance
- Raises
sherpa.utils.err.DataErr – If the data set is not 2D.
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_image
Return the data used by image_source.
contour_source
Contour the values of the model, without any PSF.
image_source
Display the source expression for a data set in the image viewer.
Examples
Return the source model pixel values for the default data set:
>>> sinfo = get_source_contour()
- get_source_image(id=None)[source] [edit on github]
Return the data used by image_source.
Evaluate the source expression for the image pixels - without any PSF convolution - and return the results.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
- Returns
src_img – The y attribute contains the source model values as a 2D NumPy array.
- Return type
a sherpa.image.SourceImage instance
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_image
Return the data used by image_model.
contour_source
Contour the values of the model, without any PSF.
image_source
Display the source expression for a data set in the image viewer.
Examples
Return the model data for the default data set:
>>> sinfo = get_source_image()
>>> sinfo.y.shape
(150, 175)
- get_source_plot(id=None, lo=None, hi=None, recalc=True)[source] [edit on github]
Return the data used by plot_source.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
lo (number, optional) – The low value to plot (only used for PHA data sets).
hi (number, optional) – The high value to plot (only used for PHA data sets).
recalc (bool, optional) – If False then the results from the last call to plot_source (or get_source_plot) are returned, otherwise the data is re-generated.
- Returns
An object representing the data used to create the plot by plot_source. The return value depends on the data set (e.g. PHA, 1D binned, 1D un-binned). If lo or hi were set then the mask attribute of the object can be used to apply the filter to the xlo, xhi, and y attributes.
- Return type
instance
See also
get_model_plot
Return the data used by plot_model.
plot_model
Plot the model for a data set.
plot_source
Plot the source expression for a data set.
Examples
Retrieve the source plot information for the default data set and then display it:
>>> splot = get_source_plot()
>>> print(splot)
Return the plot data for data set 2, and then use it to create a plot:
>>> s2 = get_source_plot(2)
>>> s2.plot()
Retrieve the source plots for the 0.5 to 7 range of the ‘jet’ and ‘core’ data sets and display them on the same plot:
>>> splot1 = get_source_plot(id='jet', lo=0.5, hi=7)
>>> splot2 = get_source_plot(id='core', lo=0.5, hi=7)
>>> splot1.plot()
>>> splot2.overplot()
Access the plot data (for a PHA data set) and select only the bins corresponding to the 2-7 keV range defined in the call:
>>> splot = get_source_plot(lo=2, hi=7)
>>> xlo = splot.xlo[splot.mask]
>>> xhi = splot.xhi[splot.mask]
>>> y = splot.y[splot.mask]
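The mask-based selection shown above is standard NumPy boolean indexing. A minimal sketch with made-up arrays (the bin edges and values below are illustrative, not taken from any real data set):

```python
import numpy as np

# Hypothetical bin edges and values, standing in for splot.xlo, splot.xhi,
# and splot.y.
xlo = np.array([1.0, 2.0, 5.0, 7.0])
xhi = np.array([2.0, 5.0, 7.0, 9.0])
y = np.array([0.5, 1.2, 0.8, 0.3])

# A boolean mask selecting the bins that fall inside a lo=2, hi=7 band,
# analogous to the mask stored after a filtered get_source_plot call.
mask = (xlo >= 2) & (xhi <= 7)

# Apply the mask to each attribute to get the filtered arrays.
xlo_f = xlo[mask]
xhi_f = xhi[mask]
y_f = y[mask]
```

The same mask must be applied to every per-bin attribute so the filtered arrays stay aligned.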
For a PHA data set, the units on both the X and Y axes of the plot are controlled by the set_analysis command. In this case the Y axis will be in units of photon/s/cm^2/keV x Energy and the X axis in keV:
>>> set_analysis('energy', factor=1)
>>> splot = get_source_plot()
>>> print(splot)
- get_specresp(id=None, filter=False, bkg_id=None)[source] [edit on github]
Return the effective area values for a PHA data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the ARF or not. The default is False.
bkg_id (int or str, optional) – Set if the ARF should be taken from a background set associated with the data set.
- Returns
arf – The effective area values for the data set (or background component).
- Return type
array
Examples
Return the effective-area values for the default data set:
>>> arf = get_specresp()
Return the area for the second background component of the data set with the id “eclipse”:
>>> barf = get_specresp("eclipse", bkg_id=2)
- get_split_plot()[source] [edit on github]
Return the plot attributes for displays with multiple plots.
- Returns
splot
- Return type
a sherpa.plot.SplitPlot instance
- get_stat(name=None)[source] [edit on github]
Return the fit statistic.
- Parameters
name (str, optional) – If not given, the current fit statistic is returned, otherwise it should be one of the names returned by the list_stats function.
- Returns
stat – An object representing the fit statistic.
- Return type
- Raises
sherpa.utils.err.ArgumentErr – If the name argument is not recognized.
See also
get_stat_name
Return the name of the current fit statistic.
list_stats
List the fit statistics.
set_stat
Change the fit statistic.
Examples
Return the currently-selected statistic, display its name, and read the help documentation for it:
>>> stat = get_stat()
>>> stat.name
'chi2gehrels'
>>> help(stat)
Read the help for the “wstat” statistic:
>>> help(get_stat('wstat'))
- get_stat_info()[source] [edit on github]
Return the statistic values for the current models.
Calculate the statistic value for each data set, and the combined fit, using the current set of models, parameters, and ranges.
- Returns
stats – The values for each data set. If there are multiple model expressions then the last element will be the value for the combined data sets.
- Return type
array of sherpa.fit.StatInfoResults
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
get_fit_results
Return the results of the last fit.
list_data_ids
List the identifiers for the loaded data sets.
list_model_ids
List of all the data sets with a source expression.
Notes
If a fit to a particular data set has not been made, or values - such as parameter settings, the noticed data range, or choice of statistic - have been changed since the last fit, then the results for that data set may not be meaningful and will therefore bias the results for the simultaneous results.
The return value of get_stat_info differs from get_fit_results since it includes values for each data set, individually, rather than just the combined results.
The fields of the object include:
- name
The name of the data set, or sets, as a string.
- ids
A sequence of the data set ids (it may be a tuple or array) included in the results.
- bkg_ids
A sequence of the background data set ids (it may be a tuple or array) included in the results, if any.
- statname
The name of the statistic function (as used in set_stat).
- statval
The statistic value.
- numpoints
The number of bins used in the fits.
- dof
The number of degrees of freedom in the fit (the number of bins minus the number of free parameters).
- qval
The Q-value (probability) that one would observe the reduced statistic value, or a larger value, if the assumed model is true and the current model parameters are the true parameter values. This will be None if the value can not be calculated with the current statistic (e.g. the Cash statistic).
- rstat
The reduced statistic value (the statval field divided by dof). This is not calculated for all statistics.
Examples
>>> res = get_stat_info()
>>> res[0].statval
498.21750663761935
>>> res[0].dof
439
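The dof and rstat fields follow from simple arithmetic on these quantities. A sketch using the statistic value from the example above, with a purely hypothetical split between the number of bins and free parameters:

```python
# Statistic value from the example output above.
statval = 498.21750663761935

# Hypothetical counts, chosen only so that dof comes out to 439:
numpoints = 444  # number of bins used in the fit
nfree = 5        # number of free (thawed) parameters

# dof is the number of bins minus the number of free parameters.
dof = numpoints - nfree

# rstat is the statistic divided by the degrees of freedom.
rstat = statval / dof
```

A reduced statistic near 1 indicates the model is a plausible description of the data for chi-square-like statistics.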
- get_stat_name()[source] [edit on github]
Return the name of the current fit statistic.
- Returns
name – The name of the current fit statistic method, in lower case.
- Return type
Examples
>>> get_stat_name()
'chi2gehrels'
>>> set_stat('cash')
>>> get_stat_name()
'cash'
- get_staterror(id=None, filter=False, bkg_id=None)[source] [edit on github]
Return the statistical error on the dependent axis of a data set.
The function returns the statistical errors on the values (dependent axis) of a data set, or its background. These may have been set explicitly - either when the data set was created or with a call to set_staterror - or as defined by the chosen fit statistic (such as “chi2gehrels”).
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int or str, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns
staterrors – The statistical error for each data point. This may be estimated from the data (e.g. with the chi2gehrels statistic) or have been set explicitly (set_staterror). For PHA data sets, the return array will match the grouping scheme applied to the data set. The size of this array depends on the filter argument.
- Return type
array
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_syserror
Return the systematic errors on the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
set_staterror
Set the statistical errors on the dependent axis of a data set.
Notes
The default behavior is to not apply any filter defined on the independent axes to the results, so that the return value is for all points (or bins) in the data set. Set the filter argument to True to apply this filter.
Examples
If not explicitly given, the statistical errors on a data set may be calculated from the data values (the dependent axis), depending on the chosen statistic:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9])
>>> set_stat('chi2datavar')
>>> get_staterror()
array([ 2.        ,  2.23606798,  3.        ])
>>> set_stat('chi2gehrels')
>>> get_staterror()
array([ 3.17944947,  3.39791576,  4.122499  ])
If the statistical errors are set - either when the data set is created or with a call to set_staterror - then these values will be used, no matter the statistic:
>>> load_arrays(1, [10, 15, 19], [4, 5, 9], [2, 3, 5])
>>> set_stat('chi2datavar')
>>> get_staterror()
array([2, 3, 5])
>>> set_stat('chi2gehrels')
>>> get_staterror()
array([2, 3, 5])
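The two data-driven estimates in the first example can be reproduced directly: chi2datavar takes the variance equal to the data, so the error is sqrt(N), while chi2gehrels uses the Gehrels (1986) approximation, roughly 1 + sqrt(N + 0.75). A NumPy sketch (these formulas are inferred from the example output above rather than quoted from the Sherpa source, so treat them as an assumption):

```python
import numpy as np

# The data values from the example above.
counts = np.array([4.0, 5.0, 9.0])

# chi2datavar: variance equal to the data, so sigma = sqrt(N).
datavar_err = np.sqrt(counts)

# chi2gehrels: the Gehrels upper-limit approximation,
# sigma ~ 1 + sqrt(N + 0.75), which stays sensible at low counts.
gehrels_err = 1.0 + np.sqrt(counts + 0.75)
```

The Gehrels form avoids the zero-error problem that sqrt(N) has for empty bins, which is why it is the default for low-count data.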
- get_syserror(id=None, filter=False, bkg_id=None)[source] [edit on github]
Return the systematic error on the dependent axis of a data set.
The function returns the systematic errors on the values (dependent axis) of a data set, or its background. It is an error to call it on a data set with no systematic errors (which are set with set_syserror).
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.
bkg_id (int or str, optional) – Set if the values returned should be from the given background component, instead of the source data set.
- Returns
syserrors – The systematic error for each data point. The size of this array depends on the filter argument.
- Return type
array
- Raises
sherpa.utils.err.DataErr – If the data set has no systematic errors.
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_error
Return the errors on the dependent axis of a data set.
get_indep
Return the independent axis of a data set.
get_staterror
Return the statistical errors on the dependent axis of a data set.
list_data_ids
List the identifiers for the loaded data sets.
set_syserror
Set the systematic errors on the dependent axis of a data set.
Notes
The default behavior is to not apply any filter defined on the independent axes to the results, so that the return value is for all points (or bins) in the data set. Set the filter argument to True to apply this filter.
Examples
Return the systematic error for the default data set:
>>> yerr = get_syserror()
Return an array that has been filtered to match the data:
>>> yerr = get_syserror(filter=True)
Return the filtered errors for data set “core”:
>>> yerr = get_syserror("core", filter=True)
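When both statistical and systematic errors are defined, the related get_error call combines them in quadrature. A small NumPy sketch of that combination (an illustration of the quadrature rule under that assumption, with made-up values, not the Sherpa code itself):

```python
import numpy as np

# Hypothetical statistical and systematic errors for three bins.
staterr = np.array([3.0, 4.0, 12.0])
syserr = np.array([4.0, 3.0, 5.0])

# Quadrature combination: total = sqrt(stat**2 + sys**2).
# np.hypot computes this element-wise without intermediate overflow.
total = np.hypot(staterr, syserr)
```

Adding in quadrature assumes the two error sources are independent; correlated systematics would need a different treatment.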
- get_trace_plot()[source] [edit on github]
Return the data used to plot the last trace.
- Returns
plot – An object containing the data used by the last call to plot_trace. The fields will be None if the function has not been called.
- Return type
a sherpa.plot.TracePlot instance
See also
plot_trace
Create a trace plot of row number versus value.
- group(id=None, bkg_id=None)[source] [edit on github]
Turn on the grouping for a PHA data set.
A PHA data set can be grouped either because it contains grouping information [1]_, which is automatically applied when the data is read in with load_pha or load_data, or because the group set of routines has been used to dynamically re-group the data. The ungroup function removes this grouping (however it was created). The group function re-applies this grouping. The grouping scheme can be changed dynamically, using the group_xxx series of routines.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set to group the background associated with the data set.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
fit
Fit one or more data sets.
group_adapt
Adaptively group to a minimum number of counts.
group_adapt_snr
Adaptively group to a minimum signal-to-noise ratio.
group_bins
Group into a fixed number of bins.
group_counts
Group into a minimum number of counts per bin.
group_snr
Group into a minimum signal-to-noise ratio.
group_width
Group into a fixed bin width.
set_grouping
Apply a set of grouping flags to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
ungroup
Turn off the grouping for a PHA data set.
Notes
PHA data is often grouped to improve the signal to noise of the data, by decreasing the number of bins, so that a chi-square statistic can be used when fitting the data. After calling group, anything that uses the data set - such as a plot, fit, or error analysis - will use the grouped data values. Models should be re-fit if group is called; the increase in the signal of the bins may mean that a chi-square statistic can now be used.
The grouping is implemented by arrays separate from the main data - the information is stored in the grouping and quality arrays of the PHA data set - so that a data set can be grouped and ungrouped many times, without losing information. The group command does not create this information; this is either created by modifying the PHA file before it is read in, or by using the group_xxx routines once the data has been loaded.
The grouped field of a PHA data set is set to True when the data is grouped.
References
- 1
Arnaud, K. & George, I., “The OGIP Spectral File Format”, http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
Group the data in the default data set:
>>> group()
>>> get_data().grouped
True
Group the first background component of the ‘core’ data set:
>>> group('core', bkg_id=1)
>>> get_bkg('core', bkg_id=1).grouped
True
The data is fit using the ungrouped data, and then plots of the data and best-fit, and the residuals, are created. The first plot uses the ungrouped data, and the second plot uses the grouped data.
>>> ungroup()
>>> fit()
>>> plot_fit_resid()
>>> group()
>>> plot_fit_resid()
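The grouping array stored in the PHA data set follows the OGIP convention referenced above: a flag of 1 starts a new group and -1 continues the current one. A minimal sketch of how grouped counts can be derived from such flags (hypothetical data; this illustrates the convention, not the Sherpa implementation):

```python
import numpy as np

# Hypothetical per-channel counts and OGIP grouping flags:
# 1 marks the first channel of a group, -1 a continuation channel.
counts = np.array([1, 2, 3, 4, 5, 6])
grouping = np.array([1, -1, -1, 1, 1, -1])

# Label each channel with the index of the group it belongs to:
# a running count of group-start flags, shifted to be zero-based.
group_index = np.cumsum(grouping == 1) - 1

# Sum the counts within each group.
grouped_counts = np.bincount(group_index, weights=counts)
```

Because the flags live alongside the data rather than replacing it, the same channel array can be grouped and ungrouped repeatedly without loss.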
- group_adapt(id, min=None, bkg_id=None, maxLength=None, tabStops=None)[source] [edit on github]
Adaptively group to a minimum number of counts.
Combine the data so that each bin contains min or more counts. The difference from group_counts is that this algorithm starts with the bins with the largest signal, in order to avoid over-grouping bright features, rather than at the first channel of the data. The adaptive nature means that low-count regions between bright features may not end up in groups with the minimum number of counts. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
min (int) – The minimum number of counts required in each group.
bkg_id (int or str, optional) – Set to group the background associated with the data set. When bkg_id is None (which is the default), the grouping is applied to all the associated background data sets as well as the source data set.
maxLength (int, optional) – The maximum number of channels that can be combined into a single group.
tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be excluded from the grouping (use 0 or False otherwise).
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
group_adapt_snr
Adaptively group to a minimum signal-to-noise ratio.
group_bins
Group into a fixed number of bins.
group_counts
Group into a minimum number of counts per bin.
group_snr
Group into a minimum signal-to-noise ratio.
group_width
Group into a fixed bin width.
set_grouping
Apply a set of grouping flags to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the min parameter. If given two un-named arguments, then they are interpreted as the id and min parameters, respectively. The remaining parameters are expected to be given as named arguments.
Unlike group, it is possible to call group_adapt multiple times on the same data set without needing to call ungroup.
If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2. This information can be found with the get_quality command.
Examples
Group the default data set so that each bin contains at least 20 counts:
>>> group_adapt(20)
Plot two versions of the ‘jet’ data set: the first uses an adaptive scheme of 20 counts per bin, the second the group_counts method:
>>> group_adapt('jet', 20)
>>> plot_data('jet')
>>> group_counts('jet', 20)
>>> plot_data('jet', overplot=True)
- group_adapt_snr(id, min=None, bkg_id=None, maxLength=None, tabStops=None, errorCol=None)[source] [edit on github]
Adaptively group to a minimum signal-to-noise ratio.
Combine the data so that each bin has a signal-to-noise ratio of at least num. The difference from group_snr is that this algorithm starts with the bins with the largest signal, in order to avoid over-grouping bright features, rather than at the first channel of the data. The adaptive nature means that low-count regions between bright features may not end up in groups that reach the requested threshold. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
num (number) – The minimum signal-to-noise ratio that must be reached to form a group of channels.
bkg_id (int or str, optional) – Set to group the background associated with the data set. When bkg_id is None (which is the default), the grouping is applied to all the associated background data sets as well as the source data set.
maxLength (int, optional) – The maximum number of channels that can be combined into a single group.
tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be excluded from the grouping (use 0 or False otherwise).
errorCol (array of num, optional) – If set, the error to use for each channel when calculating the signal-to-noise ratio. If not given then Poisson statistics is assumed. A warning is displayed for each zero-valued error estimate.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
group_adapt
Adaptively group to a minimum number of counts.
group_bins
Group into a fixed number of bins.
group_counts
Group into a minimum number of counts per bin.
group_snr
Group into a minimum signal-to-noise ratio.
group_width
Group into a fixed bin width.
set_grouping
Apply a set of grouping flags to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the num parameter. If given two un-named arguments, then they are interpreted as the id and num parameters, respectively. The remaining parameters are expected to be given as named arguments.
Unlike group, it is possible to call group_adapt_snr multiple times on the same data set without needing to call ungroup.
If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2. This information can be found with the get_quality command.
Examples
Group the default data set so that each bin contains a signal-to-noise ratio of at least 5:
>>> group_adapt_snr(5)
Plot two versions of the ‘jet’ data set: the first uses an adaptive scheme and the second the non-adaptive version:
>>> group_adapt_snr('jet', 4)
>>> plot_data('jet')
>>> group_snr('jet', 4)
>>> plot_data('jet', overplot=True)
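The adaptive scheme can be illustrated with a small stand-alone sketch. This is an approximation written for this document, not the grouping code Sherpa actually uses: each group is seeded at the brightest ungrouped channel and grown sideways until the Poisson signal-to-noise reaches the target, so channels that cannot reach the target are left over (Sherpa would flag these with quality 2).

```python
import math

def adapt_snr_sketch(counts, snr):
    """Greedy illustration: seed each group at the brightest
    ungrouped channel, then grow it into neighbouring ungrouped
    channels until the Poisson signal-to-noise, sqrt(total counts),
    reaches `snr`.  Channels that never reach the target get a
    group id of -1 (Sherpa would set their quality value to 2)."""
    n = len(counts)
    group = [-1] * n
    gid = 0
    free = set(range(n))
    while free:
        seed = max(free, key=lambda i: counts[i])
        lo = hi = seed
        total = counts[seed]
        while math.sqrt(total) < snr:
            left_ok = (lo - 1) in free
            right_ok = (hi + 1) in free
            if not left_ok and not right_ok:
                break  # no room left to grow this group
            # grow towards the brighter neighbouring channel
            if right_ok and (not left_ok or counts[hi + 1] >= counts[lo - 1]):
                hi += 1
                total += counts[hi]
            else:
                lo -= 1
                total += counts[lo]
        if math.sqrt(total) >= snr:
            for i in range(lo, hi + 1):
                group[i] = gid
            gid += 1
        free -= set(range(lo, hi + 1))
    return group
```

Note how a bright channel forms its own group immediately, while the faint channels around it may never reach the target and are left ungrouped, which is the behaviour described above.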
- group_bins(id, num=None, bkg_id=None, tabStops=None)[source] [edit on github]
Group into a fixed number of bins.
Combine the data so that there are num equal-width bins (or groups). The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
num (int) – The number of bins in the grouped data set. Each bin will contain the same number of channels.
bkg_id (int or str, optional) – Set to group the background associated with the data set. When bkg_id is None (which is the default), the grouping is applied to all the associated background data sets as well as the source data set.
tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set; a non-zero or True value means that the channel is excluded from the grouping (use 0 or False otherwise).
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
group_adapt
Adaptively group to a minimum number of counts.
group_adapt_snr
Adaptively group to a minimum signal-to-noise ratio.
group_counts
Group into a minimum number of counts per bin.
group_snr
Group into a minimum signal-to-noise ratio.
group_width
Group into a fixed bin width.
set_grouping
Apply a set of grouping flags to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the num parameter. If given two un-named arguments, then they are interpreted as the id and num parameters, respectively. The remaining parameters are expected to be given as named arguments.
Unlike group, it is possible to call group_bins multiple times on the same data set without needing to call ungroup.
Since the bin width is an integer number of channels, it is likely that some channels will be “left over”. This is even more likely when the tabStops parameter is set. If this happens, a warning message will be displayed to the screen and the quality value for these channels will be set to 2. This information can be found with the get_quality command.
Examples
Group the default data set so that there are 50 bins.
>>> group_bins(50)
Group the ‘jet’ data set to 50 bins and plot the result, then re-bin to 100 bins and overplot the data:
>>> group_bins('jet', 50)
>>> plot_data('jet')
>>> group_bins('jet', 100)
>>> plot_data('jet', overplot=True)
The grouping is applied to the full data set, and then the filter - in this case defined over the range 0.5 to 8 keV - will be applied. This means that the noticed data range will likely contain less than 50 bins.
>>> set_analysis('energy')
>>> notice(0.5, 8)
>>> group_bins(50)
>>> plot_data()
Do not group any channels numbered less than 20 or 800 or more. Since there are 780 channels to be grouped, the width of each bin will be 20 channels and there are no “left over” channels:
>>> notice()
>>> channels = get_data().channel
>>> ign = (channels <= 20) | (channels >= 800)
>>> group_bins(39, tabStops=ign)
>>> plot_data()
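The arithmetic in the last example can be checked with a trivial helper (a hypothetical function written for this document, assuming the group width is the integer part of nchan/num): grouping nchan channels into num equal-width bins gives a width of nchan // num channels, with the remainder left over.

```python
def bins_to_width(nchan, num):
    """Width of each group, and the number of left-over channels,
    when nchan channels are split into num equal-width groups."""
    width = nchan // num
    return width, nchan - width * num
```

With 780 channels and 39 bins this gives a width of 20 and no left-over channels; with, say, 1024 channels and 50 bins, 24 channels would be left over (and flagged with quality 2).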
- group_counts(id, num=None, bkg_id=None, maxLength=None, tabStops=None)[source] [edit on github]
Group into a minimum number of counts per bin.
Combine the data so that each bin contains num or more counts. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped. The background is not included in this calculation; the calculation is done on the raw data even if subtract has been called on this data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
num (int) – The minimum number of counts required in each group.
bkg_id (int or str, optional) – Set to group the background associated with the data set. When bkg_id is None (which is the default), the grouping is applied to all the associated background data sets as well as the source data set.
maxLength (int, optional) – The maximum number of channels that can be combined into a single group.
tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set; a non-zero or True value means that the channel is excluded from the grouping (use 0 or False otherwise).
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
group_adapt
Adaptively group to a minimum number of counts.
group_adapt_snr
Adaptively group to a minimum signal-to-noise ratio.
group_bins
Group into a fixed number of bins.
group_snr
Group into a minimum signal-to-noise ratio.
group_width
Group into a fixed bin width.
set_grouping
Apply a set of grouping flags to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the num parameter. If given two un-named arguments, then they are interpreted as the id and num parameters, respectively. The remaining parameters are expected to be given as named arguments.
Unlike group, it is possible to call group_counts multiple times on the same data set without needing to call ungroup.
If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2. This information can be found with the get_quality command.
Examples
Group the default data set so that each bin contains at least 20 counts:
>>> group_counts(20)
Plot two versions of the ‘jet’ data set: the first uses 20 counts per group and the second is 50:
>>> group_counts('jet', 20)
>>> plot_data('jet')
>>> group_counts('jet', 50)
>>> plot_data('jet', overplot=True)
The grouping is applied to the full data set, and then the filter - in this case defined over the range 0.5 to 8 keV - will be applied.
>>> set_analysis('energy')
>>> notice(0.5, 8)
>>> group_counts(30)
>>> plot_data()
If a channel has more than 30 counts then do not group it, otherwise group channels so that they contain at least 40 counts. The group_adapt and group_adapt_snr functions provide similar functionality to this example. A maximum length of 10 channels is enforced, to avoid bins getting too large when the signal is low.
>>> notice()
>>> counts = get_data().counts
>>> ign = counts > 30
>>> group_counts(40, tabStops=ign, maxLength=10)
- group_snr(id, snr=None, bkg_id=None, maxLength=None, tabStops=None, errorCol=None)[source] [edit on github]
Group into a minimum signal-to-noise ratio.
Combine the data so that each bin has a signal-to-noise ratio of at least snr. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped. The background is not included in this calculation; the calculation is done on the raw data even if subtract has been called on this data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
snr (number) – The minimum signal-to-noise ratio that must be reached to form a group of channels.
bkg_id (int or str, optional) – Set to group the background associated with the data set. When bkg_id is None (which is the default), the grouping is applied to all the associated background data sets as well as the source data set.
maxLength (int, optional) – The maximum number of channels that can be combined into a single group.
tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set; a non-zero or True value means that the channel is excluded from the grouping (use 0 or False otherwise).
errorCol (array of num, optional) – If set, the error to use for each channel when calculating the signal-to-noise ratio. If not given then Poisson statistics are assumed. A warning is displayed for each zero-valued error estimate.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
group_adapt
Adaptively group to a minimum number of counts.
group_adapt_snr
Adaptively group to a minimum signal-to-noise ratio.
group_bins
Group into a fixed number of bins.
group_counts
Group into a minimum number of counts per bin.
group_width
Group into a fixed bin width.
set_grouping
Apply a set of grouping flags to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the snr parameter. If given two un-named arguments, then they are interpreted as the id and snr parameters, respectively. The remaining parameters are expected to be given as named arguments.
Unlike group, it is possible to call group_snr multiple times on the same data set without needing to call ungroup.
If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2. This information can be found with the get_quality command.
Examples
Group the default data set so that each bin has a signal-to-noise ratio of at least 5:
>>> group_snr(5)
Plot two versions of the ‘jet’ data set: the first uses a signal-to-noise ratio of 3 and the second 5:
>>> group_snr('jet', 3)
>>> plot_data('jet')
>>> group_snr('jet', 5)
>>> plot_data('jet', overplot=True)
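A minimal sketch of the signal-to-noise grouping rule (illustrative only; this is not the code Sherpa uses): channels are accumulated until signal/noise reaches snr, where the noise is the square root of the summed counts for Poisson statistics, or the per-channel errors combined in quadrature when an errorCol-style array is supplied.

```python
import math

def group_snr_sketch(counts, snr, errors=None):
    """Accumulate consecutive channels until signal/noise >= snr.
    The noise is sqrt(summed counts) for Poisson statistics, or the
    given per-channel errors combined in quadrature (cf. errorCol).
    Returns (completed groups, left-over channels)."""
    groups = []
    current = []
    signal = variance = 0.0
    for i, c in enumerate(counts):
        current.append(i)
        signal += c
        variance += c if errors is None else errors[i] ** 2
        if variance > 0 and signal / math.sqrt(variance) >= snr:
            groups.append(current)
            current = []
            signal = variance = 0.0
    return groups, current
```

Supplying explicit errors can change the grouping markedly: two channels of 10 counts reach a Poisson signal-to-noise of about 4.5, but with per-channel errors of 5 the ratio is only about 2.8.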
- group_width(id, num=None, bkg_id=None, tabStops=None)[source] [edit on github]
Group into a fixed bin width.
Combine the data so that each bin contains num channels. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
num (int) – The number of channels to combine into a group.
bkg_id (int or str, optional) – Set to group the background associated with the data set. When bkg_id is None (which is the default), the grouping is applied to all the associated background data sets as well as the source data set.
tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set; a non-zero or True value means that the channel is excluded from the grouping (use 0 or False otherwise).
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
group_adapt
Adaptively group to a minimum number of counts.
group_adapt_snr
Adaptively group to a minimum signal-to-noise ratio.
group_bins
Group into a fixed number of bins.
group_counts
Group into a minimum number of counts per bin.
group_snr
Group into a minimum signal-to-noise ratio.
set_grouping
Apply a set of grouping flags to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the num parameter. If given two un-named arguments, then they are interpreted as the id and num parameters, respectively. The remaining parameters are expected to be given as named arguments.
Unlike group, it is possible to call group_width multiple times on the same data set without needing to call ungroup.
Unless the requested bin width is a factor of the number of channels (and no tabStops parameter is given), some channels will be “left over”. If this happens, a warning message will be displayed to the screen and the quality value for these channels will be set to 2. This information can be found with the get_quality command.
Examples
Group the default data set so that each bin contains 20 channels:
>>> group_width(20)
Plot two versions of the ‘jet’ data set: the first uses 20 channels per group and the second is 50 channels per group:
>>> group_width('jet', 20)
>>> plot_data('jet')
>>> group_width('jet', 50)
>>> plot_data('jet', overplot=True)
The grouping is applied to the full data set, and then the filter - in this case defined over the range 0.5 to 8 keV - will be applied.
>>> set_analysis('energy')
>>> notice(0.5, 8)
>>> group_width(50)
>>> plot_data()
The grouping is not applied to channels 101 to 149, inclusive:
>>> notice()
>>> channels = get_data().channel
>>> ign = (channels > 100) & (channels < 150)
>>> group_width(40, tabStops=ign)
>>> plot_data()
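The grouping flags behind a fixed-width scheme, including how tabStops break things up, can be sketched as follows. This is a simplified stand-alone illustration, not Sherpa's code: here 0 marks a tab-stopped channel, and only a trailing partial group is reported as left over.

```python
def group_width_sketch(nchan, width, tab_stops=None):
    """Grouping flags for a fixed-width scheme: 1 starts a group,
    -1 continues it, and 0 marks a tab-stopped channel (which also
    breaks the current group).  Only a trailing partial group is
    counted as left over in this simplified version."""
    tab_stops = tab_stops or [False] * nchan
    flags = []
    filled = 0  # channels placed in the current group so far
    for stop in tab_stops:
        if stop:
            flags.append(0)
            filled = 0  # a tab stop ends the current group
            continue
        flags.append(1 if filled == 0 else -1)
        filled = (filled + 1) % width
    return flags, filled  # filled > 0 means a partial final group
```

Six channels at a width of two pack exactly; five channels leave one channel over, which is the case where Sherpa would warn and set quality to 2.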
- guess(id=None, model=None, limits=True, values=True)[source] [edit on github]
Estimate the parameter values and ranges given the loaded data.
The guess function can change the parameter values and limits to match the loaded data. This is generally limited to changing the amplitude and position parameters (sometimes just the values and sometimes just the limits). The parameters that are changed depend on the type of model.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
model – Change the parameters of this model component. If None, then the source expression is assumed to consist of a single component, and that component is used.
limits (bool) – Should the parameter limits be changed? The default is True.
values (bool) – Should the parameter values be changed? The default is True.
See also
get_default_id
Return the default data set identifier.
reset
Reset the model parameters to their default settings.
set_par
Set the value, limits, or behavior of a model parameter.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
The guess function can reduce the time required to fit a data set by moving the parameters closer to a realistic solution. It can also be useful because it can set bounds on the parameter values based on the data: for instance, many two-dimensional models will limit their xpos and ypos values to lie within the data area. This can be done manually, but guess simplifies this, at least for those parameters that are supported. Instrument models - such as an ARF and RMF - should be set up before calling guess.
Examples
Since the source expression contains only one component, guess can be called with no arguments:
>>> set_source(polynom1d.poly)
>>> guess()
In this case, guess is called on each component separately.
>>> set_source(gauss1d.line + powlaw1d.cont)
>>> guess(line)
>>> guess(cont)
In this example, the values of the src model component are guessed from the “src” data set, whereas the bgnd component is guessed from the “bgnd” data set.
>>> set_source("src", gauss2d.src + const2d.bgnd)
>>> set_source("bgnd", bgnd)
>>> guess("src", src)
>>> guess("bgnd", bgnd)
Set the source model for the default dataset. Guess is run to determine the values of the model component “p1” and the limits of the model component “g1”:
>>> set_source(powlaw1d.p1 + gauss1d.g1)
>>> guess(p1, limits=False)
>>> guess(g1, values=False)
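The kind of heuristic that guess applies can be sketched for a hypothetical 1D peaked model. The helper name and the factor of 10 on the amplitude limit are inventions for this example; each real model class implements its own guess logic. The idea is that the position value is set to the location of the data maximum and bounded by the data range, while the amplitude is scaled to the largest data value:

```python
def guess_peak_sketch(x, y):
    """Hypothetical illustration of guess-style heuristics for a 1D
    peaked model: centre the position on the data maximum, bound it
    by the data range, and scale the amplitude to the largest data
    value (the factor of 10 on the upper limit is arbitrary)."""
    peak = max(range(len(y)), key=lambda i: y[i])
    return {
        "pos": {"val": x[peak], "min": min(x), "max": max(x)},
        "ampl": {"val": max(y), "min": 0, "max": 10 * max(y)},
    }
```

This is why guess can speed up a fit: the optimiser starts near the data peak, with limits that rule out clearly unphysical values.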
- ignore(lo=None, hi=None, **kwargs)[source] [edit on github]
Exclude data from the fit.
Select one or more ranges of data to exclude by filtering on the independent axis value. The filter is applied to all data sets.
Changed in version 4.14.0: Integrated data sets - so Data1DInt and DataPHA when using energy or wavelengths - now ensure that the hi argument is exclusive, and the handling of the lo argument when it matches a bin edge has been improved. This can result in the same filter selecting a smaller number of bins than in earlier versions of Sherpa.
- Parameters
lo (number or str, optional) – The lower bound of the filter (when a number) or a string expression listing ranges in the form a:b, with multiple ranges allowed, where the ranges are separated by a ,. The term :b means exclude everything up to b (an exclusive limit for integrated datasets), and a: means exclude everything that is higher than, or equal to, a.
hi (number, optional) – The upper bound of the filter when lo is not a string.
bkg_id (int or str, optional) – The filter will be applied to the associated background component of the data set if bkg_id is set. Only PHA data sets support this option; if not given, then the filter is applied to all background components as well as the source data.
See also
ignore_id
Exclude data from the fit for a data set.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice
Include data in the fit.
show_filter
Show any filters applied to a data set.
Notes
The order of ignore and notice calls is important, and the results are a union, rather than intersection, of the combination.
For binned data sets, the bin is excluded if the ignored range falls anywhere within the bin.
The units used depend on the analysis setting of the data set, if appropriate.
To filter a 2D data set by a shape use ignore2d.
Examples
Ignore all data points with an X value (the independent axis) between 12 and 18. For this one-dimensional data set, this means that the second bin is ignored:
>>> load_arrays(1, [10, 15, 20, 30], [5, 10, 7, 13])
>>> ignore(12, 18)
>>> get_dep(filter=True)
array([ 5, 7, 13])
Filtering X values that are 25 or larger means that the last point is also ignored:
>>> ignore(25, None)
>>> get_dep(filter=True)
array([ 5, 7])
The notice call removes the previous filter, and then a multi-range filter is applied to exclude values between 8 and 12 and between 18 and 22:
>>> notice()
>>> ignore("8:12,18:22")
>>> get_dep(filter=True)
array([10, 13])
- ignore2d(val=None)[source] [edit on github]
Exclude a spatial region from all data sets.
Select a spatial region to exclude in the fit. The filter is applied to all data sets.
- Parameters
val (str, optional) – A region specification as a string or the name of a file containing a region filter. The coordinate system of the filter is taken from the coordinate setting of the data sets (set_coord). If None, then all points are included.
See also
ignore2d_id
Exclude a spatial region from a data set.
ignore2d_image
Select the region to exclude from the image viewer.
notice2d
Include a spatial region from all data sets.
notice2d_id
Include a spatial region of a data set.
notice2d_image
Select the region to include from the image viewer.
set_coord
Set the coordinate system to use for image analysis.
Notes
The region syntax is described in the notice2d function.
Examples
Exclude points that fall within the two regions:
>>> ignore2d('ellipse(200,300,40,30,-34)')
>>> ignore2d('box(40,100,30,40)')
Use a region file called ‘reg.fits’, by using either:
>>> ignore2d('reg.fits')
or
>>> ignore2d('region(reg.fits)')
Exclude all points.
>>> ignore2d()
- ignore2d_id(ids, val=None)[source] [edit on github]
Exclude a spatial region from a data set.
Select a spatial region to exclude in the fit. The filter is applied to the given data set, or sets.
- Parameters
ids (int or str, or array of int or str) – The data set, or sets, to use.
val (str, optional) – A region specification as a string or the name of a file containing a region filter. The coordinate system of the filter is taken from the coordinate setting of the data sets (set_coord). If None, then all points are included.
See also
ignore2d
Exclude a spatial region from all data sets.
ignore2d_image
Select the region to exclude from the image viewer.
notice2d
Include a spatial region of all data sets.
notice2d_id
Include a spatial region from a data set.
notice2d_image
Select the region to include from the image viewer.
set_coord
Set the coordinate system to use for image analysis.
Notes
The region syntax is described in the notice2d function.
Examples
Ignore the pixels within the rectangle from data set 1:
>>> ignore2d_id(1, 'rect(10,10,20,290)')
Ignore the spatial region in the file srcs.reg:
>>> ignore2d_id(1, 'srcs.reg')
or
>>> ignore2d_id(1, 'region(srcs.reg)')
- ignore2d_image(ids=None)[source] [edit on github]
Exclude pixels using the region defined in the image viewer.
Exclude points that lie within the region defined in the image viewer.
- Parameters
ids (int or str, or sequence of int or str, optional) – The data set, or sets, to ignore. If None (the default) then the default identifier is used, as returned by get_default_id.
See also
ignore2d
Exclude a spatial region from an image.
notice2d
Include a spatial region of an image.
notice2d_image
Include pixels using the region defined in the image viewer.
set_coord
Set the coordinate system to use for image analysis.
Notes
The region definition is converted into the coordinate system relevant to the data set before it is applied.
Examples
Use the region in the image viewer to ignore points from the default data set.
>>> ignore2d_image()
Ignore points in the data set labelled “2”.
>>> ignore2d_image(2)
Ignore points in data sets “src” and “bg”.
>>> ignore2d_image(["src", "bg"])
- ignore_bad(id=None, bkg_id=None)[source] [edit on github]
Exclude channels marked as bad in a PHA data set.
Ignore any bin in the PHA data set which has a quality value that is larger than zero.
- Parameters
id (int or str, optional) – The data set to change. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – The identifier for the background (the default of None uses the first component).
- Raises
sherpa.utils.err.DataErr – If the data set has no quality array.
See also
ignore
Exclude data from the fit.
notice
Include data in the fit.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The load_pha command - and others that create a PHA data set - do not exclude these bad-quality bins automatically.
If the data set has been grouped, then calling ignore_bad will remove any filter applied to the data set. If this happens a warning message will be displayed.
Examples
Remove any bins that are marked bad in the default data set:
>>> load_pha('src.pi')
>>> ignore_bad()
The data set ‘jet’ is grouped, and a filter applied. After ignoring the bad-quality points, the filter has been removed and will need to be re-applied:
>>> group_counts('jet', 20)
>>> notice_id('jet', 0.5, 7)
>>> get_filter('jet')
'0.496399998665:7.212399959564'
>>> ignore_bad('jet')
WARNING: filtering grouped data with quality flags, previous filters deleted
>>> get_filter('jet')
'0.001460000058:14.950400352478'
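At its core, ignore_bad excludes any channel whose quality value is positive, which can be expressed as a one-line mask. This is a simplified sketch written for this document; it leaves out the interaction with grouping described in the Notes above.

```python
def ignore_bad_sketch(quality):
    """Mask of channels kept by an ignore_bad-style filter: any
    channel with a quality value greater than zero is excluded."""
    return [q <= 0 for q in quality]
```

Channels flagged with quality 2 by the grouping functions (or quality 5 by instrument pipelines) would both be dropped by this rule.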
- ignore_id(ids, lo=None, hi=None, **kwargs)[source] [edit on github]
Exclude data from the fit for a data set.
Select one or more ranges of data to exclude by filtering on the independent axis value. The filter is applied to the given data set, or sets.
Changed in version 4.14.0: Integrated data sets - so Data1DInt and DataPHA when using energy or wavelengths - now ensure that the hi argument is exclusive, and the handling of the lo argument when it matches a bin edge has been improved. This can result in the same filter selecting a smaller number of bins than in earlier versions of Sherpa.
- Parameters
ids (int or str, or array of int or str) – The data set, or sets, to use.
lo (number or str, optional) – The lower bound of the filter (when a number) or a string expression listing ranges in the form a:b, with multiple ranges allowed, where the ranges are separated by a ,. The term :b means exclude everything up to b (an exclusive limit for integrated datasets), and a: means exclude everything that is higher than, or equal to, a.
hi (number, optional) – The upper bound of the filter when lo is not a string.
bkg_id (int or str, optional) – The filter will be applied to the associated background component of the data set if bkg_id is set. Only PHA data sets support this option; if not given, then the filter is applied to all background components as well as the source data.
See also
ignore
Exclude data from the fit.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice_id
Include data from the fit for a data set.
show_filter
Show any filters applied to a data set.
Notes
The order of ignore and notice calls is important.
The units used depend on the analysis setting of the data set, if appropriate.
To filter a 2D data set by a shape use ignore2d.
Examples
Ignore all data points with an X value (the independent axis) between 12 and 18 for data set 1:
>>> ignore_id(1, 12, 18)
Ignore the ranges up to 0.5 and from 7 upwards, for data sets 1, 2, and 3:
>>> ignore_id([1,2,3], None, 0.5)
>>> ignore_id([1,2,3], 7, None)
Apply the same filter as the previous example, but to data sets “core” and “jet”:
>>> ignore_id(["core","jet"], ":0.5,7:")
- image_close()[source] [edit on github]
Close the image viewer.
Close the image viewer created by a previous call to one of the image_xxx functions.
See also
image_deleteframes
Delete all the frames open in the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Start the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
>>> image_close()
- image_data(id=None, newframe=False, tile=False)[source] [edit on github]
Display a data set in the image viewer.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_data_image
Return the data used by image_data.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the data in default data set.
>>> image_data()
Display data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_data(2, newframe=True)
Display data sets ‘i1’ and ‘i2’ side by side:
>>> image_data('i1')
>>> image_data('i2', newframe=True, tile=True)
- image_deleteframes()[source] [edit on github]
Delete all the frames open in the image viewer.
Delete all the frames - in other words, images - being displayed in the image viewer (e.g. as created by image_data or image_fit).
See also
image_close
Close the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Create the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
>>> image_deleteframes()
- image_fit(id=None, newframe=True, tile=True, deleteframes=True)[source] [edit on github]
Display the data, model, and residuals for a data set in the image viewer.
This function displays the data, model (including any instrument response), and the residuals (data - model), for a data set.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the data will be displayed in the current frame. The default is True.
tile (bool, optional) – Should the frames be tiled? If False, only a single frame is displayed. The default is True.
deleteframes (bool, optional) – Should existing frames be deleted? The default is True.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the fit results - that is, the data, model, and residuals - for the default data set.
>>> image_fit()
Do not tile the frames (the three frames are loaded, but only the last one, the residuals, is displayed), and then change the frame being displayed to the second one (the model).
>>> image_fit('img', tile=False)
>>> image_xpaset('frame 2')
- image_getregion(coord='')[source] [edit on github]
Return the region defined in the image viewer.
The regions defined in the current frame are returned.
- Parameters
coord (str, optional) – The coordinate system to use.
- Returns
region – The region, or regions, or the empty string.
- Return type
str
- Raises
sherpa.utils.err.DS9Err – Invalid coordinate system.
See also
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
>>> image_getregion()
'circle(123,128,12.377649);-box(130,121,14,14,329.93142);'
>>> image_getregion('physical')
'circle(3920.5,4080.5,396.08476);-rotbox(4144.5,3856.5,448,448,329.93142);'
- image_kernel(id=None, newframe=False, tile=False)[source] [edit on github]
Display the 2D kernel for a data set in the image viewer.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_kernel_image
Return the data used by image_kernel.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
plot_kernel
Plot the 1D kernel applied to a data set.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
>>> image_kernel()
>>> image_kernel(2)
- image_model(id=None, newframe=False, tile=False)[source] [edit on github]
Display the model for a data set in the image viewer.
This function evaluates and displays the model expression for a data set, including any instrument response (e.g. PSF or ARF and RMF) whether created automatically or with set_full_model.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_image
Return the data used by image_model.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the model for the default data set.
>>> image_model()
Display the model for data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_model(2, newframe=True)
Display the models for data sets ‘i1’ and ‘i2’ side by side:
>>> image_model('i1')
>>> image_model('i2', newframe=True, tile=True)
- image_model_component(id, model=None, newframe=False, tile=False)[source] [edit on github]
Display a component of the model in the image viewer.
This function evaluates and displays a component of the model expression for a data set, including any instrument response. Use image_source_component to exclude the response.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_model_component_image
Return the data used by image_model_component.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the source expression for a data set in the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the full source model and then just the ‘gsrc’ component for the default data set:
>>> image_model()
>>> image_model_component(gsrc)
Display the ‘clus’ component of the model for the ‘img’ data set side by side, without and then with any instrument response (such as convolution with a PSF model):
>>> image_source_component('img', 'clus')
>>> image_model_component('img', 'clus', newframe=True,
...                       tile=True)
- image_open()[source] [edit on github]
Start the image viewer.
The image viewer will be started, if found. Calling this function when the viewer has already been started will not cause a second viewer to be started. The image viewer will be started automatically by any of the commands like image_data.
See also
image_close
Close the image viewer.
image_deleteframes
Delete all the frames open in the image viewer.
image_getregion
Return the region defined in the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
>>> image_open()
- image_psf(id=None, newframe=False, tile=False)[source] [edit on github]
Display the 2D PSF model for a data set in the image viewer.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist.
See also
get_psf_image
Return the data used by image_psf.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_source
Display the model for a data set in the image viewer.
plot_psf
Plot the 1D PSF model applied to a data set.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
>>> image_psf()
>>> image_psf(2)
- image_ratio(id=None, newframe=False, tile=False)[source] [edit on github]
Display the ratio (data/model) for a data set in the image viewer.
This function displays the ratio data/model for a data set.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_image
Return the data used by image_ratio.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_resid
Display the residuals (data - model) for a data set in the image viewer.
image_source
Display the model for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the ratio (data/model) for the default data set.
>>> image_ratio()
- image_resid(id=None, newframe=False, tile=False)[source] [edit on github]
Display the residuals (data - model) for a data set in the image viewer.
This function displays the residuals (data - model) for a data set.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_resid_image
Return the data used by image_resid.
image_close
Close the image viewer.
image_data
Display a data set in the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_open
Open the image viewer.
image_ratio
Display the ratio (data/model) for a data set in the image viewer.
image_source
Display the model for a data set in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the residuals for the default data set.
>>> image_resid()
Display the residuals for data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_resid(2, newframe=True)
Display the residuals for data sets ‘i1’ and ‘i2’ side by side:
>>> image_resid('i1')
>>> image_resid('i2', newframe=True, tile=True)
- image_setregion(reg, coord='')[source] [edit on github]
Set the region to display in the image viewer.
- Parameters
reg (str) – The region to display.
coord (str, optional) – The coordinate system to use.
- Raises
sherpa.utils.err.DS9Err – Invalid coordinate system.
See also
image_getregion
Return the region defined in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Examples
Add a circle, in the physical coordinate system, to the data from the default data set:
>>> image_data()
>>> image_setregion('circle(4234.53,3245.29,46.74)', 'physical')
Copy the region from the current frame, create a new frame displaying the residuals from data set ‘img’, and then display the region on it:
>>> r = image_getregion()
>>> image_resid('img', newframe=True)
>>> image_setregion(r)
- image_source(id=None, newframe=False, tile=False)[source] [edit on github]
Display the source expression for a data set in the image viewer.
This function evaluates and displays the model expression for a data set, without any instrument response.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_image
Return the data used by image_source.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
image_open
Open the image viewer.
image_source_component
Display a component of the source expression in the image viewer.
Notes
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the source model for the default data set.
>>> image_source()
Display the source model for data set 2 in a new frame so that the data in the current frame is not destroyed. The new data will be displayed in a single frame (i.e. the only data shown by the viewer).
>>> image_source(2, newframe=True)
Display the source models for data sets ‘i1’ and ‘i2’ side by side:
>>> image_source('i1')
>>> image_source('i2', newframe=True, tile=True)
- image_source_component(id, model=None, newframe=False, tile=False)[source] [edit on github]
Display a component of the source expression in the image viewer.
This function evaluates and displays a component of the model expression for a data set, without any instrument response. Use image_model_component to include any response.
The image viewer is automatically started if it is not already open.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
newframe (bool, optional) – Create a new frame for the data? If False, the default, then the data will be displayed in the current frame.
tile (bool, optional) – Should the frames be tiled? If False, the default, then only a single frame is displayed.
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_source_component_image
Return the data used by image_source_component.
image_close
Close the image viewer.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
image_model
Display the model for a data set in the image viewer.
image_model_component
Display a component of the model in the image viewer.
image_open
Open the image viewer.
image_source
Display the source expression for a data set in the image viewer.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Image visualization is optional, and provided by the DS9 application [1]_.
References
Examples
Display the full source model and then just the ‘gsrc’ component for the default data set:
>>> image_source()
>>> image_source_component(gsrc)
Display the ‘clus’ and ‘bgnd’ components of the model for the ‘img’ data set side by side:
>>> image_source_component('img', 'clus')
>>> image_source_component('img', 'bgnd', newframe=True,
...                        tile=True)
- image_xpaget(arg)[source] [edit on github]
Return the result of an XPA call to the image viewer.
Send a query to the image viewer.
- Parameters
arg (str) – A command to send to the image viewer via XPA.
- Returns
returnval
- Return type
- Raises
sherpa.utils.err.DS9Err – The image viewer is not running.
sherpa.utils.err.RuntimeErr – If the command is not recognized.
See also
image_close
Close the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Create the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaset
Send an XPA command to the image viewer.
Notes
The XPA access point [1]_ of the ds9 image viewer lets commands and queries be sent to the viewer.
References
Examples
Return the current zoom setting of the active frame:
>>> image_xpaget('zoom')
'1\n'
- image_xpaset(arg, data=None)[source] [edit on github]
Send an XPA command to the image viewer.
Send a command to the image viewer.
- Parameters
arg (str) – A command to send to the image viewer via XPA.
data (optional) – The data for the command.
- Raises
sherpa.utils.err.DS9Err – The image viewer is not running.
sherpa.utils.err.RuntimeErr – If the command is not recognized or could not be completed.
See also
image_close
Close the image viewer.
image_getregion
Return the region defined in the image viewer.
image_open
Create the image viewer.
image_setregion
Set the region to display in the image viewer.
image_xpaget
Return the result of an XPA call to the image viewer.
Notes
The XPA access point [1]_ of the ds9 image viewer lets commands and queries be sent to the viewer.
References
Examples
Change the zoom setting of the active frame:
>>> image_xpaset('zoom 4')
Overlay the coordinate grid on the current frame:
>>> image_xpaset('grid yes')
Add the region file ‘src.reg’ to the display:
>>> image_xpaset('regions src.reg')
Create a png version of the image being displayed:
>>> image_xpaset('saveimage png /tmp/img.png')
- int_proj(par, id=None, otherids=None, replot=False, fast=True, min=None, max=None, nloop=20, delv=None, fac=1, log=False, numcores=None, overplot=False)[source] [edit on github]
Calculate and plot the fit statistic versus fit parameter value.
Create a confidence plot of the fit statistic as a function of parameter value. Dashed lines are added to indicate the current statistic value and the parameter value at this point. The parameter value is varied over a grid of points and the free parameters re-fit. It is expected that this is run after a successful fit, so that the parameter values identify the best-fit location.
- Parameters
par – The parameter to plot.
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
replot (bool, optional) – Set to True to use the values calculated by the last call to int_proj. The default is False.
fast (bool, optional) – If True then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is True.
min (number, optional) – The minimum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
max (number, optional) – The maximum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
nloop (int, optional) – The number of steps to use. This is used when delv is set to None.
delv (number, optional) – The step size for the parameter. Setting this over-rides the nloop parameter. The default is None.
fac (number, optional) – When min or max is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).
log (bool, optional) – Should the step size be logarithmically spaced? The default (False) is to use a linear grid.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
get_int_proj
Return the interval-projection object.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
Notes
The difference to int_unc is that at each step, a fit is made to the remaining thawed parameters in the source model. This makes the result a more-accurate rendering of the projected shape of the hypersurface formed by the statistic, but the run-time is longer than that of int_unc, which does not vary any other parameter. If there are no free parameters in the source expression, other than the parameter being plotted, then the results will be the same.
Examples
Vary the gamma parameter of the p1 model component for all data sets with a source expression:
>>> int_proj(p1.gamma)
Use only the data in data set 1:
>>> int_proj(p1.gamma, id=1)
Use two data sets (‘obs1’ and ‘obs2’):
>>> int_proj(clus.kt, id='obs1', otherids=['obs2'])
Vary the bgnd.c0 parameter between 1e-4 and 2e-4, using 41 points:
>>> int_proj(bgnd.c0, min=1e-4, max=2e-4, nloop=41)
This time define the step size, rather than the number of steps to use:
>>> int_proj(bgnd.c0, min=1e-4, max=2e-4, delv=2e-6)
Overplot the int_proj results for the parameter on top of the int_unc values:
>>> int_unc(mdl.xpos)
>>> int_proj(mdl.xpos, overplot=True)
- int_unc(par, id=None, otherids=None, replot=False, min=None, max=None, nloop=20, delv=None, fac=1, log=False, numcores=None, overplot=False)[source] [edit on github]
Calculate and plot the fit statistic versus fit parameter value.
Create a confidence plot of the fit statistic as a function of parameter value. Dashed lines are added to indicate the current statistic value and the parameter value at this point. The parameter value is varied over a grid of points and the statistic evaluated while holding the other parameters fixed. It is expected that this is run after a successful fit, so that the parameter values identify the best-fit location.
- Parameters
par – The parameter to plot.
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
replot (bool, optional) – Set to True to use the values calculated by the last call to int_unc. The default is False.
min (number, optional) – The minimum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
max (number, optional) – The maximum parameter value for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
nloop (int, optional) – The number of steps to use. This is used when delv is set to None.
delv (number, optional) – The step size for the parameter. Setting this over-rides the nloop parameter. The default is None.
fac (number, optional) – When min or max is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).
log (bool, optional) – Should the step size be logarithmically spaced? The default (False) is to use a linear grid.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
get_int_unc
Return the interval-uncertainty object.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
reg_unc
Plot the statistic value as two parameters are varied.
Notes
The difference to int_proj is that at each step only the single parameter value is varied, while all other parameters remain at their starting values. This makes the result a less-accurate rendering of the projected shape of the hypersurface formed by the statistic, but the run-time is likely shorter than that of int_proj, which fits the model to the remaining thawed parameters at each step. If there are no free parameters in the source expression, other than the parameter being plotted, then the results will be the same.
Examples
Vary the gamma parameter of the p1 model component for all data sets with a source expression:
>>> int_unc(p1.gamma)
Use only the data in data set 1:
>>> int_unc(p1.gamma, id=1)
Use two data sets (‘obs1’ and ‘obs2’):
>>> int_unc(clus.kt, id='obs1', otherids=['obs2'])
Vary the bgnd.c0 parameter between 1e-4 and 2e-4, using 41 points:
>>> int_unc(bgnd.c0, min=1e-4, max=2e-4, nloop=41)
This time define the step size, rather than the number of steps to use:
>>> int_unc(bgnd.c0, min=1e-4, max=2e-4, delv=2e-6)
Overplot the int_unc results for the parameter on top of the int_proj values:
>>> int_proj(mdl.xpos)
>>> int_unc(mdl.xpos, overplot=True)
- link(par, val)[source] [edit on github]
Link a parameter to a value.
A parameter can be linked to another parameter value, or function of that value, rather than be an independent value. As the linked-to values change, the parameter value will change.
- Parameters
See also
Notes
The link attribute of the parameter is set to match the mathematical expression used for val.
For a parameter value to be varied during a fit, it must be part of one of the source expressions involved in the fit. So, in the following, the src1.xpos parameter will not be varied because the src2 model - from which it takes its value - is not included in the source expression of any of the data sets being fit.
>>> set_source(1, gauss1d.src1)
>>> gauss1d.src2
>>> link(src1.xpos, src2.xpos)
>>> fit(1)
One way to work around this is to include the model but with zero signal: for example
>>> set_source(1, gauss1d.src1 + 0 * gauss1d.src2)
Examples
The fwhm parameter of the g2 model is set to be the same as the fwhm parameter of the g1 model:
>>> link(g2.fwhm, g1.fwhm)
Fix the pos parameter of g2 to be 2.3 more than the pos parameter of the g1 model:
>>> gauss1d.g1
>>> gauss1d.g2
>>> g1.pos = 12.2
>>> link(g2.pos, g1.pos + 2.3)
>>> g2.pos.val
14.5
>>> g1.pos = 12.1
>>> g2.pos.val
14.399999999999999
- list_bkg_ids(id=None)[source] [edit on github]
List all the background identifiers for a data set.
A PHA data set can contain multiple background datasets, each identified by an integer or string. This function returns a list of these identifiers for a data set.
- Parameters
id (int or str, optional) – The data set to query. If not given then the default identifier is used, as returned by get_default_id.
- Returns
ids – The identifiers for the background data sets for the data set. In many cases this will just be [1].
- Return type
array of int or str
See also
list_response_ids
List all the response identifiers of a data set.
load_bkg
Load the background of a PHA data set.
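Examples
A short illustrative session; the file name here is hypothetical, and assumes a PHA data set whose header references a single background component:
>>> load_pha('src.pi')
>>> list_bkg_ids()
[1]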
- list_data_ids()[source] [edit on github]
List the identifiers for the loaded data sets.
- Returns
ids – A list of the data set identifiers that have been created by commands like load_data and load_arrays.
- Return type
list of int or str
See also
delete_data
Delete a data set by identifier.
load_arrays
Create a data set from arrays of data.
load_data
Create a data set from a file.
Examples
In this case only one data set has been loaded:
>>> list_data_ids()
[1]
Two data sets have been loaded, using string identifiers:
>>> list_data_ids()
['nucleus', 'jet']
- list_functions(outfile=None, clobber=False)[source] [edit on github]
Display the functions provided by Sherpa.
Unlike the other list_xxx commands, this does not return an array. Instead it acts like the show_xxx family of commands.
- Parameters
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
get_functions
Return the functions provided by Sherpa.
show_all
Report the current state of the Sherpa session.
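Examples
Write the function list to a text file (the file name is hypothetical), overwriting any existing version:
>>> list_functions(outfile='functions.txt', clobber=True)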
- list_iter_methods()[source] [edit on github]
List the iterative fitting schemes.
- Returns
schemes – A list of the names that can be used with set_iter_method.
- Return type
list of str
See also
get_iter_method_name
Return the name of the iterative fitting scheme.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Examples
>>> list_iter_methods()
['none', 'sigmarej']
- list_methods()[source] [edit on github]
List the optimization methods.
- Returns
methods – A list of the names that can be used with set_method.
- Return type
list of str
See also
get_method_name
Return the name of the current optimization method.
set_method
Set the optimization method.
Examples
>>> list_methods()
['gridsearch', 'levmar', 'moncar', 'neldermead', 'simplex']
- list_model_components()[source] [edit on github]
List the names of all the model components.
Models are created either directly - by using the form mname.mid, where mname is the name of the model, such as gauss1d, and mid is the name of the component - or with the create_model_component function, which accepts mname and mid as separate arguments. This function returns all the mid values that have been created.
- Returns
ids – The identifiers for all the model components that have been created. They do not need to be associated with a source expression (i.e. they do not need to have been included in a call to set_model).
- Return type
list of str
See also
create_model_component
Create a model component.
delete_model_component
Delete a model component.
list_models
List the available model types.
list_model_ids
List of all the data sets with a source expression.
set_model
Set the source model expression for a data set.
Examples
The gal and pl model components are created - as versions of the xsphabs and powlaw1d model types - which means that the list of model components returned as mids will contain both strings.
>>> set_model(xsphabs.gal * powlaw1d.pl)
>>> mids = list_model_components()
>>> 'gal' in mids
True
>>> 'pl' in mids
True
The model component does not need to be included as part of a source expression for it to be included in the output of this function:
>>> create_model_component('gauss2d', 'gsrc')
>>> 'gsrc' in list_model_components()
True
- list_model_ids()[source] [edit on github]
List of all the data sets with a source expression.
- Returns
ids – The identifiers for all the data sets which have a source expression set by set_model or set_source.
- Return type
list of int or str
See also
list_data_ids
List the identifiers for the loaded data sets.
list_model_components
List the names of all the model components.
list_psf_ids
List of all the data sets with a PSF.
set_model
Set the source model expression for a data set.
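Examples
A sketch, assuming several data sets have been loaded but a source expression has only been set for the first:
>>> set_source(1, powlaw1d.pl)
>>> list_model_ids()
[1]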
- list_models(show='all')[source] [edit on github]
List the available model types.
- Parameters
show ({ 'all', '1d', '2d', 'xspec' }, optional) – What type of model should be returned. The default is ‘all’. An unrecognized value is treated as ‘all’.
- Returns
models
- Return type
list of str
See also
create_model_component
Create a model component.
list_model_components
List the current model components.
Examples
>>> models = list_models()
>>> models[0:5]
['absorptionedge', 'absorptiongaussian', 'absorptionlorentz', 'absorptionvoigt', 'accretiondisk']
>>> list_models('2d')
['beta2d', 'box2d', 'const2d', 'delta2d', 'devaucouleurs2d', 'disk2d', 'gauss2d', 'lorentz2d', 'normgauss2d', 'polynom2d', 'scale2d', 'sersic2d', 'shell2d', 'sigmagauss2d']
- list_pileup_model_ids()[source] [edit on github]
List of all the data sets with a pile up model.
New in version 4.12.2.
- Returns
ids – The identifiers for all the data sets which have a pile up model set by set_pileup_model.
- Return type
list of int or str
See also
list_data_ids
List the identifiers for the loaded data sets.
list_model_ids
List of all the data sets with a source expression.
set_pileup_model
Add a pile up model to a data set.
- list_priors()[source] [edit on github]
Return the priors set for model parameters, if any.
- Returns
priors – The dictionary of mappings between parameters (keys) and prior functions (values) created by set_prior.
- Return type
dict
See also
set_prior
Set the prior function to use with a parameter.
Examples
In this example a prior on the PhoIndex parameter of the pl instance has been set to be a gaussian:
>>> list_priors()
{'pl.PhoIndex': <Gauss1D model instance 'gauss1d.gline'>}
- list_psf_ids()[source] [edit on github]
List of all the data sets with a PSF.
New in version 4.12.2.
- Returns
ids – The identifiers for all the data sets which have a PSF model set by set_psf.
- Return type
list of int or str
See also
list_data_ids
List the identifiers for the loaded data sets.
list_model_ids
List of all the data sets with a source expression.
set_psf
Add a PSF model to a data set.
- list_response_ids(id=None, bkg_id=None)[source] [edit on github]
List all the response identifiers of a data set.
A PHA data set can contain multiple responses, that is, pairs of ARF and RMF, each of which has an identifier. This function returns a list of these identifiers for a data set.
- Parameters
id (int or str, optional) – The data set to query. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set this to identify the background component to query.
- Returns
ids – The identifiers for the response information for the data set. In many cases this will just be [1].
- Return type
array of int or str
See also
list_bkg_ids
List all the background identifiers for a data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_rmf
Load a RMF from a file and add it to a PHA data set.
- list_samplers()[source] [edit on github]
List the MCMC samplers.
- Returns
samplers – A list of the names (in lower case) that can be used with set_sampler.
- Return type
list of str
See also
get_sampler_name
Return the name of the current MCMC sampler.
Examples
>>> list_samplers()
['metropolismh', 'fullbayes', 'mh', 'pragbayes']
- list_stats()[source] [edit on github]
List the fit statistics.
- Returns
stat – A list of the names that can be used with set_stat.
- Return type
list of str
See also
get_stat_name
Return the name of the current statistical method.
set_stat
Set the statistical method.
Examples
>>> list_stats()
['cash', 'chi2', 'chi2constvar', 'chi2datavar', 'chi2gehrels', 'chi2modvar', 'chi2xspecvar', 'cstat', 'leastsq', 'wstat']
- load_arf(id, arg=None, resp_id=None, bkg_id=None)[source] [edit on github]
Load an ARF from a file and add it to a PHA data set.
Load in the effective area curve for a PHA data set, or its background. The load_bkg_arf function can be used for setting most background ARFs.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
arg – Identify the ARF: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
resp_id (int or str, optional) – The identifier for the ARF within this data set, if there are multiple responses.
bkg_id (int or str, optional) – Set this to identify the ARF as being for use with the background.
See also
get_arf
Return the ARF associated with a PHA data set.
load_bkg_arf
Load an ARF from a file and add it to the background of a PHA data set.
load_multi_arfs
Load multiple ARFs for a PHA data set.
load_pha
Load a file as a PHA data set.
load_rmf
Load a RMF from a file and add it to a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
unpack_arf
Create an ARF data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
If a PHA data set has an associated ARF - either from when the data was loaded or explicitly with the set_arf function - then the model fit to the data will include the effect of the ARF when the model is created with set_model or set_source. In this case the get_source function returns the user model, and get_model the model that is fit to the data (i.e. it includes any response information; that is, the ARF and RMF, if set). To include the ARF explicitly, use set_full_model.
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an ARF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
Use the contents of the file ‘src.arf’ as the ARF for the default data set.
>>> load_arf('src.arf')
Read in an ARF from the file ‘bkg.arf’ and set it as the ARF for the background model of data set “core”:
>>> load_arf('core', 'bkg.arf', bkg_id=1)
- load_arrays(id, *args)[source] [edit on github]
Create a data set from array values.
- Parameters
Warning
Sherpa currently does not support numpy masked arrays. Use the set_filter function and note that it follows a different convention by default (a positive value or True for a “bad” channel, 0 or False for a good channel).
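Since masked arrays are not supported directly, one workaround is to pass the raw values to load_arrays and derive the filter array from the mask. This is a NumPy-only sketch with illustrative values, not part of the Sherpa API:

```python
import numpy as np

# A masked array in which the second point is flagged as bad.
y = np.ma.masked_array([4.2, 12.1, 8.4], mask=[False, True, False])

yvals = y.data               # raw values to pass to load_arrays
bad = np.ma.getmaskarray(y)  # True where masked; per the warning
                             # above, set_filter treats True as "bad"
```

The yvals array would then be given to load_arrays and the bad array to set_filter.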
See also
copy_data
Copy a data set to a new identifier.
delete_data
Delete a data set by identifier.
get_data
Return the data set by identifier.
load_data
Create a data set from a file.
set_data
Set a data set.
unpack_arrays
Create a sherpa data object from arrays of data.
Notes
The data type identifier, which defaults to Data1D, determines the number, and order, of the required inputs.
Identifier | Required Fields | Optional Fields
Data1D | x, y | statistical error, systematic error
Data1DInt | xlo, xhi, y | statistical error, systematic error
Data2D | x0, x1, y | shape, statistical error, systematic error
Data2DInt | x0lo, x1lo, x0hi, x1hi, y | shape, statistical error, systematic error
DataPHA | channel, counts | statistical error, systematic error, bin_lo, bin_hi, grouping, quality
DataIMG | x0, x1, y | shape, statistical error, systematic error
The shape argument should be a tuple giving the size of the data (ny, nx), and for the DataIMG case the arrays are 1D, not 2D.
Examples
Create a 1D data set with three points:
>>> load_arrays(1, [10, 12, 15], [4.2, 12.1, 8.4])
Create a 1D data set, with the identifier ‘prof’, from the arrays x (independent axis), y (dependent axis), and dy (statistical error on the dependent axis):
>>> load_arrays('prof', x, y, dy)
Explicitly define the type of the data set:
>>> load_arrays('prof', x, y, dy, Data1D)
Data set 1 is a histogram, where the bins cover the range 1-3, 3-5, and 5-7 with values 4, 5, and 9 respectively.
>>> load_arrays(1, [1, 3, 5], [3, 5, 7], [4, 5, 9], Data1DInt)
Create an image data set:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[0:3, 0:4]
>>> x = x.flatten()
>>> y = y.flatten()
>>> load_arrays('img', x, y, ivals, (3, 4), DataIMG)
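The image example relies on the DataIMG convention noted above: the coordinate and value arrays are 1D, while shape gives the (ny, nx) size. The NumPy part can be checked on its own, without Sherpa:

```python
import numpy as np

# Coordinate arrays for a (ny, nx) = (3, 4) image: np.mgrid returns
# 2D index grids, which must be flattened into the 1D arrays that
# load_arrays expects for DataIMG.
ny, nx = 3, 4
y, x = np.mgrid[0:ny, 0:nx]
x = x.flatten()
y = y.flatten()
ivals = np.arange(nx * ny)
```

Each of x, y, and ivals then has nx * ny = 12 elements, with x cycling fastest.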
- load_ascii(id, filename=None, ncols=2, colkeys=None, dstype=<class 'sherpa.data.Data1D'>, sep=' ', comment='#')[source] [edit on github]
Load an ASCII file as a data set.
The standard behavior is to create a single data set, but multiple data sets can be loaded with this command, as described in the sherpa.astro.datastack module.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to read in. Selection of the relevant column depends on the I/O library in use (Crates or AstroPy).
ncols (int, optional) – The number of columns to read in (the first ncols columns in the file). The meaning of the columns is determined by the dstype parameter.
colkeys (array of str, optional) – An array of the column names to read in. The default is None.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
dstype (optional) – The data class to use. The default is Data1D.
See also
load_ascii_with_errors
Load an ASCII file with asymmetric errors as a data set.
load_table
Load a FITS binary file as a data set.
load_image
Load an image as a data set.
set_data
Set a data set.
unpack_ascii
Unpack an ASCII file into a data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The column order for the different data types is as follows, where x indicates an independent axis and y the dependent axis.
Identifier | Required Fields | Optional Fields
Data1D | x, y | statistical error, systematic error
Data1DInt | xlo, xhi, y | statistical error, systematic error
Data2D | x0, x1, y | shape, statistical error, systematic error
Data2DInt | x0lo, x1lo, x0hi, x1hi, y | shape, statistical error, systematic error
Examples
Read in the first two columns of the file, as the independent (X) and dependent (Y) columns of the default data set:
>>> load_ascii('sources.dat')
Read in the first three columns (the third column is taken to be the error on the dependent variable):
>>> load_ascii('sources.dat', ncols=3)
Read in from columns ‘RMID’ and ‘SUR_BRI’ into data set ‘prof’:
>>> load_ascii('prof', 'rprof.dat', ... colkeys=['RMID', 'SUR_BRI'])
The first three columns are taken to be the two independent axes of a two-dimensional data set (x0 and x1) and the dependent value (y):
>>> load_ascii('fields.txt', ncols=3,
...             dstype=sherpa.astro.data.Data2D)
When using the Crates I/O library, the file name can include CIAO Data Model syntax, such as column selection. This can also be done using the colkeys parameter, as shown above:
>>> load_ascii('prof',
...             'rprof.dat[cols rmid,sur_bri,sur_bri_err]',
...             ncols=3)
- load_ascii_with_errors(id, filename=None, colkeys=None, sep=' ', comment='#', func=<function average>, delta=False)[source] [edit on github]
Load an ASCII file with asymmetric errors as a data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to read in. Selection of the relevant column depends on the I/O library in use (Crates or AstroPy).
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
func (python function, optional) – The function used to combine the lo and hi values to estimate an error. The function should take two arguments (lo, hi) and return a single NumPy array, giving the per-bin error. The default function used is numpy.average.
delta (boolean, optional) – This flag indicates whether the asymmetric errors in the third and fourth columns are delta values from the second (y) column. The default is False.
See also
load_ascii
Load an ASCII file as a data set.
load_arrays
Create a data set from array values.
load_table
Load a FITS binary file as a data set.
load_image
Load an image as a data set.
resample_data
Resample data with asymmetric error bars.
set_data
Set a data set.
unpack_ascii
Unpack an ASCII file into a data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The column order for the data type is as follows, where x indicates the independent axis, y the dependent axis, and elo and ehi the asymmetric errors.
Identifier | Required Fields | Optional Fields
Data1DAsymmetricErrs | x, y, elo, ehi | (none)
Examples
Read in the first four columns of the file, as the independent (X), dependent (Y), error low (ELO) and error high (EHI) columns of the default data set:
>>> load_ascii_with_errors('sources.dat')
Read in the first four columns (x, y, elo, ehi) where elo and ehi are of the form y - delta_lo and y + delta_hi, respectively.
>>> load_ascii_with_errors('sources.dat', delta=True)
Read in the first four columns (x, y, elo, ehi) where elo and ehi are of the form delta_lo and delta_hi, respectively.
>>> def rms(lo, hi):
...     return numpy.sqrt(lo * lo + hi * hi)
...
>>> load_ascii_with_errors('sources.dat', func=rms)
Read in the first four columns (x, y, elo, ehi) where elo and ehi are of the form delta_lo and delta_hi, respectively. The func argument is used to calculate the error based on the elo and ehi column values.
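Because func simply combines two arrays into one, a custom combiner can be prototyped and checked outside Sherpa. The mean_err function below is an illustrative stand-in (not Sherpa's actual default), assuming per-bin lo and hi error estimates:

```python
import numpy as np

def mean_err(lo, hi):
    # Combine the lower and upper error estimates into a single
    # symmetric per-bin error by taking their mean.
    lo = np.asarray(lo, dtype=float)
    hi = np.asarray(hi, dtype=float)
    return (lo + hi) / 2
```

It would then be passed in the same way as the rms example, e.g. load_ascii_with_errors('sources.dat', func=mean_err).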
- load_bkg(id, arg=None, use_errors=False, bkg_id=None)[source] [edit on github]
Load the background from a file and add it to a PHA data set.
This will load the PHA data and any response information - so ARF and RMF - and add it as a background component to the PHA data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
arg – Identify the data to read: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a PHACrateDataset for crates, as used by CIAO, or a list of AstroPy HDU objects.
use_errors (bool, optional) – If True then the statistical errors are taken from the input data, rather than calculated by Sherpa from the count values. The default is False.
bkg_id (int or str, optional) – The identifier for the background (the default of None uses the first component).
See also
load_bkg_arf
Load an ARF from a file and add it to the background of a PHA data set.
load_bkg_rmf
Load a RMF from a file and add it to the background of a PHA data set.
load_pha
Load a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Load a source and background data set:
>>> load_pha('src.pi')
read ARF file src.arf
read RMF file src.rmf
>>> load_bkg('src_bkg.pi')
Read in the background via Crates:
>>> bpha = pycrates.read_pha('src_bkg.pi')
>>> load_bkg(bpha)
Create the data set from the data read in by AstroPy:
>>> bhdus = astropy.io.fits.open('src_bkg.pi')
>>> load_bkg(bhdus)
- load_bkg_arf(id, arg=None)[source] [edit on github]
Load an ARF from a file and add it to the background of a PHA data set.
Load in the ARF to the background of the given data set. It is only for use when there is only one background component, and one response, for the source. For multiple backgrounds or responses, use load_arf.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
arg – Identify the ARF: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
See also
load_arf
Load an ARF from a file and add it to a PHA data set.
load_bkg_rmf
Load a RMF from a file and add it to the background of a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an ARF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
Use the contents of the file ‘bkg.arf’ as the ARF for the background of the default data set.
>>> load_bkg_arf('bkg.arf')
Set ‘core_bkg.arf’ as the ARF for the background of data set ‘core’:
>>> load_bkg_arf('core', 'core_bkg.arf')
- load_bkg_rmf(id, arg=None)[source] [edit on github]
Load a RMF from a file and add it to the background of a PHA data set.
Load in the RMF to the background of the given data set. It is only for use when there is only one background component, and one response, for the source. For multiple backgrounds or responses, use load_rmf.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
arg – Identify the RMF: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a RMFCrateDataset for crates, as used by CIAO, or an AstroPy HDUList object.
See also
load_rmf
Load a RMF from a file and add it to a PHA data set.
load_bkg_arf
Load an ARF from a file and add it to the background of a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an RMF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
Use the contents of the file ‘bkg.rmf’ as the RMF for the background of the default data set.
>>> load_bkg_rmf('bkg.rmf')
Set ‘core_bkg.rmf’ as the RMF for the background of data set ‘core’:
>>> load_bkg_rmf('core', 'core_bkg.rmf')
- load_conv(modelname, filename_or_model, *args, **kwargs)[source] [edit on github]
Load a 1D convolution model.
The convolution model can be defined either by a data set, read from a file, or an analytic model, using a Sherpa model instance. A source model can be convolved with this model by including modelname in the set_model call, using the form:
modelname(modelexpr)
- Parameters
modelname (str) – The identifier for this PSF model.
filename_or_model (str or model instance) – This can be the name of an ASCII file or a Sherpa model component.
args – Arguments for unpack_data if filename_or_model is a file.
kwargs – Keyword arguments for unpack_data if filename_or_model is a file.
See also
delete_psf
Delete the PSF model for a data set.
load_psf
Create a PSF model.
load_table_model
Load tabular data and use it as a model component.
set_full_model
Define the convolved model expression for a data set.
set_model
Set the source model expression for a data set.
set_psf
Add a PSF model to a data set.
Examples
Create a 1D data set, assign a box model - which is flat between the xlow and xhi values and zero elsewhere - and then display the model values. Then add in convolution by a gaussian and overplot the resulting source model for two different widths.
>>> dataspace1d(-10, 10, 0.5, id='tst', dstype=Data1D)
>>> set_source('tst', box1d.bmdl)
>>> bmdl.xlow = -2
>>> bmdl.xhi = 3
>>> plot_source('tst')
>>> load_conv('conv', normgauss1d.gconv)
>>> gconv.fwhm = 2
>>> set_source('tst', conv(bmdl))
>>> plot_source('tst', overplot=True)
>>> gconv.fwhm = 5
>>> plot_source('tst', overplot=True)
Create a convolution component called “cmodel” which uses the data in the file “conv.dat”, which should have two columns (the X and Y values).
>>> load_conv('cmodel', 'conv.dat')
- load_data(id, filename=None, *args, **kwargs)[source] [edit on github]
Load a data set from a file.
This loads a data set from the file, trying in order load_pha, load_image, load_table, then load_ascii.
Changed in version 4.13.1: The id argument is now used to define the first identifier when loading in a PHA2 file to match load_pha (previously the range always started at 1).
- Parameters
id (int or str, optional) – The identifier for the data set to use. For multi-dataset files, currently only PHA2, the id value indicates the first dataset: if it is an integer then the numbering starts at id, and if a string then a suffix of 1 to n is added. If not given then the default identifier is used, as returned by get_default_id.
filename – A file name or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: e.g. a PHACrateDataset, TABLECrate, or IMAGECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
args – The arguments supported by load_pha, load_image, load_table, and load_ascii.
kwargs – The keyword arguments supported by load_pha, load_image, load_table, and load_ascii.
See also
load_arrays
Create a data set from array values.
load_ascii
Load an ASCII file as a data set.
load_image
Load an image as a data set.
load_pha
Load a PHA data set.
load_table
Load a FITS binary file as a data set.
set_data
Set a data set.
unpack_data
Create a sherpa data object from a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
>>> load_data('tbl.dat')
>>> load_data('hist.dat', dstype=Data1DInt)
>>> load_data('img', 'img.fits')
>>> load_data('bg', 'img_bg.fits')
>>> cols = ['rmid', 'sur_bri', 'sur_bri_err']
>>> load_data(2, 'profile.fits', colkeys=cols)
- load_filter(id, filename=None, bkg_id=None, ignore=False, ncols=2, *args, **kwargs)[source] [edit on github]
Load the filter array from a file and add to a data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file that contains the filter information. This file can be a FITS table or an ASCII file. Selection of the relevant column depends on the I/O library in use (Crates or AstroPy).
bkg_id (int or str, optional) – Set if the filter array should be associated with the background associated with the data set.
ignore (bool, optional) – If False (the default) then include bins with a non-zero filter value, otherwise exclude these bins.
colkeys (array of str, optional) – An array of the column names to read in. The default is None.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
See also
get_filter
Return the filter expression for a data set.
ignore
Exclude data from the fit.
notice
Include data in the fit.
save_filter
Save the filter array to a file.
set_filter
Set the filter array of a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Read in the first column of the file and apply it to the default data set:
>>> load_filter('filt.dat')
Select the FILTER column of the file:
>>> load_filter(2, 'filt.dat', colkeys=['FILTER'])
When using Crates as the I/O library, the above can also be written as
>>> load_filter(2, 'filt.dat[cols filter]')
Read in a filter for an image. The image must match the size of the data and, as ignore=True, pixels with a non-zero value are excluded (rather than included):
>>> load_filter('img', 'filt.img', ignore=True)
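The ignore flag flips which bins a filter column selects. The selection logic described above can be sketched with NumPy alone (the filter values here are made up for illustration):

```python
import numpy as np

# A hypothetical filter column, as might be read from a file:
# non-zero marks a bin selected by the filter.
filt = np.array([0, 1, 1, 0, 1])

keep_default = filt != 0  # ignore=False: include the non-zero bins
keep_ignore = filt == 0   # ignore=True: exclude the non-zero bins
```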
- load_grouping(id, filename=None, bkg_id=None, *args, **kwargs)[source] [edit on github]
Load the grouping scheme from a file and add to a PHA data set.
This function sets the grouping column but does not automatically group the data, since the quality array may also need updating. The group function will apply the grouping information.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file that contains the grouping information. This file can be a FITS table or an ASCII file. Selection of the relevant column depends on the I/O library in use (Crates or AstroPy).
bkg_id (int or str, optional) – Set if the grouping scheme should be associated with the background associated with the data set.
colkeys (array of str, optional) – An array of the column names to read in. The default is None.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
See also
get_grouping
Return the grouping array for a PHA data set.
group
Turn on the grouping for a PHA data set.
load_quality
Load the quality array from a file and add to a PHA data set.
save_grouping
Save the grouping scheme to a file.
set_grouping
Apply a set of grouping flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
There is no check made to see if the grouping array contains valid data.
Examples
When using Crates as the I/O library, select the grouping column from the file ‘src.pi’, and use it to set the values in the default data set:
>>> load_grouping('src.pi[cols grouping]')
Use the colkeys option to define the column in the input file:
>>> load_grouping('src.pi', colkeys=['grouping'])
Load the first column in ‘grp.dat’ and use it to populate the grouping array of the data set called ‘core’.
>>> load_grouping('core', 'grp.dat')
Use group_counts to calculate a grouping scheme for the data set labelled ‘src1’, save this scheme to the file ‘grp.dat’, and then load this scheme in for data set ‘src2’.
>>> group_counts('src1', 10)
>>> save_grouping('src1', 'grp.dat')
>>> load_grouping('src2', 'grp.dat', colkeys=['groups'])
- load_image(id, arg=None, coord='logical', dstype=<class 'sherpa.astro.data.DataIMG'>)[source] [edit on github]
Load an image as a data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
arg – Identify the image data: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: an IMAGECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
coord ({ 'logical', 'image', 'physical', 'world', 'wcs' }) – The coordinate system to use. The 'image' option is the same as 'logical', and 'wcs' the same as 'world'.
dstype (optional) – The data class to use. The default is DataIMG.
See also
load_arrays
Create a data set from array values.
load_ascii
Load an ASCII file as a data set.
load_table
Load a FITS binary file as a data set.
set_coord
Set the coordinate system to use for image analysis.
set_data
Set a data set.
unpack_image
Create an image data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Load the image from the file “img.fits” into the default data set:
>>> load_image('img.fits')
Set the ‘bg’ data set to the contents of the file “img_bg.fits”:
>>> load_image('bg', 'img_bg.fits')
- load_multi_arfs(id, filenames, resp_ids=None)[source] [edit on github]
Load multiple ARFs for a PHA data set.
A grating observation - such as a Chandra LETGS data set - may require multiple responses if the detector has insufficient energy resolution to sort the photons into orders. In this case, the extracted spectrum will contain the signal from more than one diffraction order.
This function lets the multiple ARFs for such a data set be loaded with one command. The load_arf function can instead be used to load them in individually.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
filenames (iterable of str) – An array of file names.
resp_ids (iterable of int or str) – The identifiers for the ARF within this data set. The length should match the filenames argument.
See also
load_arf
Load an ARF from a file and add it to a PHA data set.
load_multi_rmfs
Load multiple RMFs for a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with two arguments, they are assumed to be filenames and resp_ids, and three positional arguments means id, filenames, and resp_ids.
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an ARF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
Load three ARFs into the default data set, using response ids of 1, 2, and 3 for the LETG/HRC-S orders 1, 2, and 3 respectively:
>>> arfs = ['leg_p1.arf', 'leg_p2.arf', 'leg_p3.arf']
>>> load_multi_arfs(arfs, [1, 2, 3])
Load in the ARFs to the data set with the identifier ‘lowstate’:
>>> load_multi_arfs('lowstate', arfs, [1, 2, 3])
- load_multi_rmfs(id, filenames, resp_ids=None)[source] [edit on github]
Load multiple RMFs for a PHA data set.
A grating observation - such as a Chandra LETGS data set - may require multiple responses if the detector has insufficient energy resolution to sort the photons into orders. In this case, the extracted spectrum will contain the signal from more than one diffraction order.
This function lets the multiple RMFs for such a data set be loaded with one command. The load_rmf function can instead be used to load them in individually.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
filenames (iterable of str) – An array of file names.
resp_ids (iterable of int or str) – The identifiers for the RMF within this data set. The length should match the filenames argument.
See also
load_rmf
Load a RMF from a file and add it to a PHA data set.
load_multi_arfs
Load multiple ARFs for a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with two arguments, they are assumed to be filenames and resp_ids, and three positional arguments means id, filenames, and resp_ids.
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an RMF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
Load three ARFs into the default data set, using response ids of 1, 2, and 3 for the LETG/HRC-S orders 1, 2, and 3 respectively:
>>> rmfs = ['leg_p1.rmf', 'leg_p2.rmf', 'leg_p3.rmf']
>>> load_multi_rmfs(rmfs, [1, 2, 3])
Load in the RMFs to the data set with the identifier ‘lowstate’:
>>> load_multi_rmfs('lowstate', rmfs, [1, 2, 3])
- load_pha(id, arg=None, use_errors=False)[source] [edit on github]
Load a PHA data set.
This will load the PHA data and any related information, such as ARF, RMF, and background. The background is loaded but not subtracted. Any grouping information in the file will be applied to the data. The quality information is read in, but not automatically applied. See subtract and ignore_bad.
The standard behavior is to create a single data set, but multiple data sets can be loaded with this command, as described in the sherpa.astro.datastack module.
Changed in version 4.12.2: The id argument is now used to define the first identifier when loading in a PHA2 file (previously they always used the range 1 to number of files).
- Parameters
id (int or str, optional) – The identifier for the data set to use. For PHA2 files, that is those that contain multiple datasets, the id value indicates the first dataset: if it is an integer then the numbering starts at id, and if a string then a suffix of 1 to n is added. If not given then the default identifier is used, as returned by get_default_id.
arg – Identify the data to read: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a PHACrateDataset for crates, as used by CIAO, or a list of AstroPy HDU objects.
use_errors (bool, optional) – If True then the statistical errors are taken from the input data, rather than calculated by Sherpa from the count values. The default is False.
See also
ignore_bad
Exclude channels marked as bad in a PHA data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_bkg
Load the background from a file and add it to a PHA data set.
load_rmf
Load a RMF from a file and add it to a PHA data set.
pack_pha
Convert a PHA data set into a file structure.
save_pha
Save a PHA data set to a file.
subtract
Subtract the background estimate from a data set.
unpack_pha
Create a PHA data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an ARF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
Load the PHA file ‘src.pi’ into the default data set, and automatically load the ARF, RMF, and background from the files pointed to by the ANCRFILE, RESPFILE, and BACKFILE keywords in the file. The background is then subtracted and any ‘bad quality’ bins are removed:
>>> load_pha('src.pi')
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi
>>> subtract()
>>> ignore_bad()
Load two files into data sets ‘src’ and ‘bg’:
>>> load_pha('src', 'x1.fits')
>>> load_pha('bg', 'x2.fits')
If a type II PHA data set is loaded, then multiple data sets will be created, one for each order. The default behavior is to use the dataset identifiers 1 to the number of files.
>>> clean()
>>> load_pha('src.pha2')
>>> list_data_ids()
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]
If given an identifier as the first argument then this is used to start the numbering scheme for PHA2 files. If id is an integer then the numbers go from id:
>>> clean()
>>> load_pha(20, 'src.pha2')
>>> list_data_ids()
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]
If the id is a string then the identifier is formed by adding the number of the dataset (starting at 1) to the end of id. Note that the list_data_ids routine does not guarantee an ordering to the output (as shown below):
>>> clean()
>>> load_pha('x', 'src.pha2')
>>> list_data_ids()
['x1', 'x10', 'x11', 'x12', 'x2', 'x3', 'x4', 'x5', 'x6', 'x7', 'x8', 'x9']
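The identifier scheme just described is easy to mimic outside Sherpa. The following sketch is a hypothetical helper (not Sherpa's implementation) showing why string identifiers come back from list_data_ids in the 'x1', 'x10', ... order:

```python
def pha2_ids(id_, n):
    # Build the dataset identifiers load_pha would create for a
    # PHA2 file with n spectra (illustration only, not Sherpa code).
    if isinstance(id_, int):
        return [id_ + i for i in range(n)]
    # a string id gets the suffixes 1..n appended
    return [id_ + str(i) for i in range(1, n + 1)]

print(pha2_ids(20, 3))  # [20, 21, 22]
# string identifiers sort lexicographically, so 'x10' lands
# before 'x2' when the list is ordered
print(sorted(pha2_ids('x', 12))[:5])  # ['x1', 'x10', 'x11', 'x12', 'x2']
```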
Create the data set from the data read in by Crates:
>>> pha = pycrates.read_pha('src.pi')
>>> load_pha(pha)
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi
Create the data set from the data read in by AstroPy:
>>> hdus = astropy.io.fits.open('src.pi')
>>> load_pha(hdus)
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi
The default behavior is to calculate the errors based on the counts values and the choice of statistic - e.g. chi2gehrels or chi2datavar - but the statistical errors from the input file can be used instead by setting use_errors to True:
>>> load_pha('source.fits', use_errors=True)
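For the chi2gehrels case mentioned above, the error estimated from the counts follows the Gehrels (1986) approximation, sigma = 1 + sqrt(N + 0.75). A minimal sketch of the formula itself (not Sherpa's implementation):

```python
import math

def gehrels_error(counts):
    # Approximate 1-sigma error on a Poisson count N:
    # sigma = 1 + sqrt(N + 0.75) (Gehrels 1986)
    return 1.0 + math.sqrt(counts + 0.75)

# even an empty bin gets a non-zero error, which keeps the
# chi-square statistic defined for low-count data
print(round(gehrels_error(0), 3))    # 1.866
print(round(gehrels_error(100), 3))  # 11.037
```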
- load_psf(modelname, filename_or_model, *args, **kwargs)[source] [edit on github]
Create a PSF model.
Create a PSF model representing either an array of data, read from a file, or a model component (such as a gaussian). The set_psf function is used to associate this model with a data set.
- Parameters
modelname (str) – The identifier for this PSF model.
filename_or_model (str or model instance) – This can be the name of an ASCII file or a Sherpa model component.
args – Arguments for unpack_data if filename_or_model is a file.
kwargs – Keyword arguments for unpack_data if filename_or_model is a file.
See also
delete_psf
Delete the PSF model for a data set.
load_conv
Load a 1D convolution model.
load_table_model
Load tabular data and use it as a model component.
set_full_model
Define the convolved model expression for a data set.
set_model
Set the source model expression for a data set.
set_psf
Add a PSF model to a data set.
Examples
Create a PSF model using a 2D gaussian:
>>> load_psf('psf1', gauss2d.gpsf)
>>> set_psf('psf1')
>>> gpsf.fwhm = 4.2
>>> gpsf.ellip = 0.2
>>> gpsf.theta = 30 * np.pi / 180
>>> image_psf()
Create a PSF model from the data in the ASCII file ‘line_profile.dat’ and apply it to the data set called ‘bgnd’:
>>> load_psf('pmodel', 'line_profile.dat')
>>> set_psf('bgnd', 'pmodel')
- load_quality(id, filename=None, bkg_id=None, *args, **kwargs)[source] [edit on github]
Load the quality array from a file and add to a PHA data set.
This function sets the quality column but does not automatically ignore any channels marked as "bad". Use the ignore_bad function to apply the new quality information.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file that contains the quality information. This file can be a FITS table or an ASCII file. Selection of the relevant column depends on the I/O library in use (Crates or AstroPy).
bkg_id (int or str, optional) – Set if the quality array should be associated with the background of the data set.
colkeys (array of str, optional) – An array of the column name to read in. The default is None.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
See also
get_quality
Return the quality array for a PHA data set.
ignore_bad
Exclude channels marked as bad in a PHA data set.
load_grouping
Load the grouping scheme from a file and add to a PHA data set.
save_quality
Save the quality array to a file.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
There is no check made to see if the quality array contains valid data.
Examples
When using Crates as the I/O library, select the quality column from the file ‘src.pi’, and use it to set the values in the default data set:
>>> load_quality('src.pi[cols quality]')
Use the colkeys option to define the column in the input file:
>>> load_quality('src.pi', colkeys=['quality'])
Load the first column in ‘grp.dat’ and use it to populate the quality array of the data set called ‘core’.
>>> load_quality('core', 'grp.dat')
- load_rmf(id, arg=None, resp_id=None, bkg_id=None)[source] [edit on github]
Load a RMF from a file and add it to a PHA data set.
Load in the redistribution matrix function for a PHA data set, or its background. The load_bkg_rmf function can be used for setting most background RMFs.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
arg – Identify the RMF: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a RMFCrateDataset for crates, as used by CIAO, or an AstroPy HDUList object.
resp_id (int or str, optional) – The identifier for the RMF within this data set, if there are multiple responses.
bkg_id (int or str, optional) – Set this to identify the RMF as being for use with the background.
See also
get_rmf
Return the RMF associated with a PHA data set.
load_bkg_rmf
Load a RMF from a file and add it to the background of a PHA data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_multi_rmfs
Load multiple RMFs for a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_rmf
Load a RMF from a file and add it to a PHA data set.
unpack_rmf
Read in a RMF from a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
If a PHA data set has an associated RMF - either from when the data was loaded or explicitly with the set_rmf function - then the model fit to the data will include the effect of the RMF when the model is created with set_model or set_source. In this case the get_source function returns the user model, and get_model the model that is fit to the data (i.e. it includes any response information; that is the ARF and RMF, if set). To include the RMF explicitly, use set_full_model.
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an RMF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
Use the contents of the file ‘src.rmf’ as the RMF for the default data set.
>>> load_rmf('src.rmf')
Read in a RMF from the file ‘bkg.rmf’ and set it as the RMF for the background model of data set “core”:
>>> load_rmf('core', 'bkg.rmf', bkg_id=1)
- load_staterror(id, filename=None, bkg_id=None, *args, **kwargs)[source] [edit on github]
Load the statistical errors from a file.
Read in a column or image from a file and use the values as the statistical errors for a data set. This overrides the errors calculated by any statistic, such as chi2gehrels or chi2datavar.
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to read in. Supported formats depend on the I/O library in use (Crates or AstroPy) and the type of data set (e.g. 1D or 2D).
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
colkeys (array of str, optional) – An array of the column name to read in. The default is None.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
See also
get_staterror
Return the statistical error on the dependent axis of a data set.
load_syserror
Load the systematic errors from a file.
set_staterror
Set the statistical errors on the dependent axis of a data set.
set_stat
Set the statistical method.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Read in the first column from ‘tbl.dat’:
>>> load_staterror('tbl.dat')
Use the column labelled 'col3':
>>> load_staterror('tbl.dat', colkeys=['col3'])
When using the Crates I/O library, the file name can include CIAO Data Model syntax, such as column selection:
>>> load_staterror('tbl.dat[cols col3]')
Read in the first column from the file ‘errors.fits’ as the statistical errors for the ‘core’ data set:
>>> load_staterror('core', 'errors.fits')
The data set labelled ‘img’ is loaded from the file ‘image.fits’ and the statistical errors from ‘err.fits’. The dimensions of the two images must be the same.
>>> load_image('img', 'image.fits')
>>> load_staterror('img', 'err.fits')
- load_syserror(id, filename=None, bkg_id=None, *args, **kwargs)[source] [edit on github]
Load the systematic errors from a file.
Read in a column or image from a file and use the values as the systematic errors for a data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to read in. Supported formats depend on the I/O library in use (Crates or AstroPy) and the type of data set (e.g. 1D or 2D).
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
ncols (int, optional) – The number of columns to read in (the first ncols columns in the file).
colkeys (array of str, optional) – An array of the column name to read in. The default is None.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
See also
get_syserror
Return the systematic error on the dependent axis of a data set.
load_staterror
Load the statistical errors from a file.
set_syserror
Set the systematic errors on the dependent axis of a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Read in the first column from ‘tbl.dat’:
>>> load_syserror('tbl.dat')
Use the column labelled 'col3':
>>> load_syserror('tbl.dat', colkeys=['col3'])
When using the Crates I/O library, the file name can include CIAO Data Model syntax, such as column selection:
>>> load_syserror('tbl.dat[cols col3]')
Read in the first column from the file ‘errors.fits’ as the systematic errors for the ‘core’ data set:
>>> load_syserror('core', 'errors.fits')
The data set labelled ‘img’ is loaded from the file ‘image.fits’ and the systematic errors from ‘syserr.fits’. The dimensions of the two images must be the same.
>>> load_image('img', 'image.fits')
>>> load_syserror('img', 'syserr.fits')
- load_table(id, filename=None, ncols=2, colkeys=None, dstype=<class 'sherpa.data.Data1D'>)[source] [edit on github]
Load a FITS binary file as a data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename – Identify the file to read: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
ncols (int, optional) – The number of columns to read in (the first ncols columns in the file). The meaning of the columns is determined by the dstype parameter.
colkeys (array of str, optional) – An array of the column name to read in. The default is None.
dstype (optional) – The data class to use. The default is Data1D.
See also
load_arrays
Create a data set from array values.
load_ascii
Load an ASCII file as a data set.
load_image
Load an image as a data set.
set_data
Set a data set.
unpack_table
Unpack a FITS binary table into a data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The column order for the different data types is as follows, where x indicates an independent axis and y the dependent axis:

Identifier   Required Fields              Optional Fields
Data1D       x, y                         statistical error, systematic error
Data1DInt    xlo, xhi, y                  statistical error, systematic error
Data2D       x0, x1, y                    shape, statistical error, systematic error
Data2DInt    x0lo, x1lo, x0hi, x1hi, y    shape, statistical error, systematic error
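The column-order rules in this table can be sketched in plain Python; the helper below is purely illustrative (not part of Sherpa) and shows how the leading columns of a file would be paired with the fields of each data class:

```python
# required field names per data class, in column order
REQUIRED = {
    'Data1D': ['x', 'y'],
    'Data1DInt': ['xlo', 'xhi', 'y'],
    'Data2D': ['x0', 'x1', 'y'],
    'Data2DInt': ['x0lo', 'x1lo', 'x0hi', 'x1hi', 'y'],
}

def map_columns(dstype, columns):
    # Pair the leading columns read from a file with the field
    # names required by the given data class (sketch only).
    fields = REQUIRED[dstype]
    if len(columns) < len(fields):
        raise ValueError(f"{dstype} needs {len(fields)} columns")
    return dict(zip(fields, columns))

# two columns become the independent/dependent axes of Data1D
print(map_columns('Data1D', [[1, 2, 3], [10, 20, 30]]))
```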
Examples
Read in the first two columns of the file, as the independent (X) and dependent (Y) columns of the default data set:
>>> load_table('sources.fits')
Read in the first three columns (the third column is taken to be the error on the dependent variable):
>>> load_table('sources.fits', ncols=3)
Read in from columns ‘RMID’ and ‘SUR_BRI’ into data set ‘prof’:
>>> load_table('prof', 'rprof.fits',
...            colkeys=['RMID', 'SUR_BRI'])
The first three columns are taken to be the two independent axes of a two-dimensional data set (x0 and x1) and the dependent value (y):
>>> load_table('fields.fits', ncols=3,
...            dstype=sherpa.astro.data.Data2D)
When using the Crates I/O library, the file name can include CIAO Data Model syntax, such as column selection. This can also be done using the colkeys parameter, as shown above:
>>> load_table('prof',
...            'rprof.fits[cols rmid,sur_bri,sur_bri_err]',
...            ncols=3)
Read in a data set using Crates:
>>> cr = pycrates.read_file('table.fits')
>>> load_table(cr)
Read in a data set using AstroPy:
>>> hdus = astropy.io.fits.open('table.fits')
>>> load_table(hdus)
- load_table_model(modelname, filename, method=<function linear_interp>, *args, **kwargs)[source] [edit on github]
Load tabular or image data and use it as a model component.
Note
Deprecated in Sherpa 4.9: The new load_xstable_model routine should be used for loading XSPEC table model files. Support for these files will be removed from load_table_model in the next release.
A table model is defined on a grid of points which is interpolated onto the independent axis of the data set. The model will have at least one parameter (the amplitude, or scaling factor to multiply the data by), but may have more (if X-Spec table models are used).
- Parameters
modelname (str) – The identifier for this table model.
filename (str) – The name of the file containing the data, which should contain two columns, which are the x and y values for the data, or be an image.
method (func) – The interpolation method to use to map the input data onto the coordinate grid of the data set. Linear, nearest-neighbor, and polynomial schemes are provided in the sherpa.utils module.
args – Arguments for reading in the data.
kwargs – Keyword arguments for reading in the data.
See also
load_conv
Load a 1D convolution model.
load_psf
Create a PSF model
load_template_model
Load a set of templates and use it as a model component.
load_xstable_model
Load a XSPEC table model.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
Notes
Examples of interpolation schemes provided by sherpa.utils are: linear_interp, nearest_interp, neville, and neville2d.
Examples
Load in the data from filt.fits and use it to multiply the source model (a power law and a gaussian). Allow the amplitude for the table model to vary between 1 and 1e6, starting at 1e3.
>>> load_table_model('filt', 'filt.fits')
>>> set_source(filt * (powlaw1d.pl + gauss1d.gline))
>>> set_par(filt.ampl, 1e3, min=1, max=1e6)
Load in an image (“broad.img”) and use the pixel values as a model component for data set “img”:
>>> load_table_model('emap', 'broad.img')
>>> set_source('img', emap * gauss2d)
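The interpolation step performed by the method argument can be sketched in pure Python. This is a simplified stand-in for sherpa.utils.linear_interp (not the actual implementation); out-of-range points are clamped to the end values here, which is an assumption of the sketch:

```python
import bisect

def linear_interp(xout, xin, yin):
    # Linearly interpolate the table (xin, yin) onto the grid xout.
    # Assumes xin is sorted; points outside the table range are
    # clamped to the end values (a simplification).
    result = []
    for x in xout:
        if x <= xin[0]:
            result.append(yin[0])
        elif x >= xin[-1]:
            result.append(yin[-1])
        else:
            i = bisect.bisect_right(xin, x)
            frac = (x - xin[i - 1]) / (xin[i] - xin[i - 1])
            result.append(yin[i - 1] + frac * (yin[i] - yin[i - 1]))
    return result

print(linear_interp([0.5, 1.5], [0, 1, 2], [0, 10, 20]))  # [5.0, 15.0]
```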
- load_template_interpolator(name, interpolator_class, **kwargs)[source] [edit on github]
Set the template interpolation scheme.
- Parameters
name (str) –
interpolator_class – An interpolator class.
**kwargs – The arguments for the interpolator.
See also
load_template_model
Load a set of templates and use it as a model component.
Examples
Create an interpolator name that can be used as the template_interpolator_name argument to load_template_model.
>>> from sherpa.models import KNNInterpolator
>>> load_template_interpolator('myint', KNNInterpolator, k=4, order=3)
- load_template_model(modelname, templatefile, dstype=<class 'sherpa.data.Data1D'>, sep=' ', comment='#', method=<function linear_interp>, template_interpolator_name='default')[source] [edit on github]
Load a set of templates and use it as a model component.
A template model can be considered to be an extension of the table model supported by load_table_model. In the template case, a set of models (the "templates") are read in and then compared to the data, with the best-fit being used to return a set of parameters.
- Parameters
modelname (str) – The identifier for this table model.
templatefile (str) – The name of the file to read in. This file lists the template data files.
dstype (data class to use, optional) – What type of data is to be used. Supported values include Data1D (the default) and Data1DInt.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
method (func) – The interpolation method to use to map the input data onto the coordinate grid of the data set. Linear, nearest-neighbor, and polynomial schemes are provided in the sherpa.utils module.
template_interpolator_name (str) – The method used to interpolate within the set of templates. The default is 'default'. A value of None turns off the interpolation; in this case the grid-search optimiser must be used to fit the data.
See also
load_conv
Load a 1D convolution model.
load_psf
Create a PSF model
load_table_model
Load tabular data and use it as a model component.
load_template_interpolator
Set the template interpolation scheme.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
Notes
Examples of interpolation schemes provided by sherpa.utils are: linear_interp, nearest_interp, and neville.
The template index file is the argument to load_template_model, and is used to list the data files. It is an ASCII file with one line per template, and each line containing the model parameters (numeric values), followed by the MODELFLAG column and then the file name for the data file (its name must begin with FILE). The MODELFLAG column is used to indicate whether a file should be used or not; a value of 1 means that the file should be used, and a value of 0 that the line should be ignored. The parameter names are set by the column names.
The data file - the last column of the template index file - is read in and the first two columns used to set up the x and y values (Data1D) or xlo, xhi, and y values (Data1DInt). These files must be in ASCII format.
The method parameter determines how the template data values are interpolated onto the source data grid.
The template_interpolator_name parameter determines how the dependent axis (Y) values are interpolated when the parameter values are varied. This interpolation can be turned off by using a value of None, in which case the grid-search optimiser must be used. See load_template_interpolator for how to create a valid interpolator. The "default" interpolator uses sherpa.models.KNNInterpolator with k=2 and order=2.
Examples
Load in the templates from the file “index.tmpl” as the model component “kerr”, and set it as the source model for the default data set. The optimisation method is switched to use a grid search for the parameters of this model.
>>> load_template_model("kerr", "index.tmpl")
>>> set_source(kerr)
>>> set_method('gridsearch')
>>> set_method_opt('sequence', kerr.parvals)
>>> fit()
Fit a constant plus the templates, using the neville scheme for integrating the template onto the data grid. The Monte-Carlo based optimiser is used.
>>> load_template_model('tbl', 'table.lis',
...                     sherpa.utils.neville)
>>> set_source(tbl + const1d.bgnd)
>>> set_method('moncar')
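The template index file format described in the Notes can be illustrated with a short parser sketch. The file contents and helper below are hypothetical (this is not Sherpa's reader); it shows the column-name header, the numeric parameters, the MODELFLAG filter, and the FILE column:

```python
def parse_template_index(text):
    # Parse a template index: a header of column names, then one
    # line per template with numeric parameters, a MODELFLAG, and
    # a file name. Lines with MODELFLAG == 0 are skipped.
    lines = [ln.split() for ln in text.strip().splitlines()]
    header = lines[0]
    npars = header.index('MODELFLAG')
    rows = []
    for fields in lines[1:]:
        if int(fields[npars]) != 1:
            continue  # MODELFLAG 0 means "ignore this template"
        pars = dict(zip(header[:npars], map(float, fields[:npars])))
        rows.append((pars, fields[npars + 1]))
    return rows

# hypothetical index contents: two parameters, then flag and file
index = """MASS SPIN MODELFLAG FILENAME
1.0 0.1 1 kerr_a.dat
2.0 0.5 0 kerr_b.dat
3.0 0.9 1 kerr_c.dat
"""
for pars, fname in parse_template_index(index):
    print(pars, fname)
```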
- load_user_model(func, modelname, filename=None, *args, **kwargs)[source] [edit on github]
Create a user-defined model.
Assign a name to a function; this name can then be used as any other name of a model component, either in a source expression - such as with set_model - or to change a parameter value. The add_user_pars function should be called after load_user_model to set up the parameter names and defaults.
- Parameters
func (func) – The function that evaluates the model.
modelname (str) – The name to use to refer to the model component.
filename (str, optional) – Set this to include data from this file in the model. The file should contain two columns, and the second column is stored in the _y attribute of the model.
args – Arguments for reading in the data from filename, if set. See load_table and load_image for more information.
kwargs – Keyword arguments for reading in the data from filename, if set. See load_table and load_image for more information.
See also
add_model
Create a user-defined model class.
add_user_pars
Add parameter information to a user model.
load_image
Load an image as a data set.
load_table
Load a FITS binary file as a data set.
load_table_model
Load tabular data and use it as a model component.
load_template_model
Load a set of templates and use it as a model component.
set_model
Set the source model expression for a data set.
Notes
The load_user_model function is designed to make it easy to add a model, but the interface is not the same as the existing models (such as having to call both load_user_model and add_user_pars for each new instance). The add_model function is used to add a model as a Python class, which is more work to set up, but then acts the same way as the existing models.
The function used for the model depends on the dimensions of the data. For a 1D model, the signature is:
def func1d(pars, x, xhi=None):
where, if xhi is not None, then the dataset is binned and the x argument is the low edge of each bin. The pars argument is the parameter array - the names, defaults, and limits can be set with add_user_pars - and should not be changed. The return value is an array the same size as x.
For 2D models, the signature is:
def func2d(pars, x0, x1, x0hi=None, x1hi=None):
There is no way using this interface to indicate that the model is for 1D or 2D data.
Examples
Create a two-parameter model of the form “y = mx + c”, where the intercept is the first parameter and the slope the second, set the parameter names and default values, then use it in a source expression:
>>> def func1d(pars, x, xhi=None):
...     if xhi is not None:
...         x = (x + xhi)/2
...     return x * pars[1] + pars[0]
...
>>> load_user_model(func1d, "myfunc")
>>> add_user_pars(myfunc, ["c", "m"], [0, 1])
>>> set_source(myfunc + gauss1d.gline)
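The calling convention can be checked outside Sherpa. This sketch uses element-wise Python lists instead of NumPy arrays so it runs standalone, but follows the same signature and semantics as the example above:

```python
def func1d(pars, x, xhi=None):
    # y = m*x + c with pars = [c, m]; if xhi is given the data
    # are binned, so evaluate at the bin mid-points
    if xhi is not None:
        x = [(lo + hi) / 2.0 for lo, hi in zip(x, xhi)]
    return [pars[1] * xi + pars[0] for xi in x]

# unbinned: evaluated at the grid points (c=0, m=1 gives y=x)
print(func1d([0, 1], [1.0, 2.0, 3.0]))         # [1.0, 2.0, 3.0]
# binned: evaluated at the mid-points of [1,2] and [2,3]
print(func1d([0, 1], [1.0, 2.0], [2.0, 3.0]))  # [1.5, 2.5]
```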
- load_user_stat(statname, calc_stat_func, calc_err_func=None, priors={})[source] [edit on github]
Create a user-defined statistic.
The choice of statistics - that is, the numeric value that is minimised during the fit - can be extended by providing a function to calculate a numeric value given the data. The statistic is given a name and then can be used just like any of the pre-defined statistics.
- Parameters
Notes
The calc_stat_func should have the following signature:
def func(data, model, staterror=None, syserror=None, weight=None)
where data is the array of dependent values, model the array of the predicted values, staterror and syserror are arrays of statistical and systematic errors respectively (if valid), and weight an array of weights. The return value is the pair (stat_value, stat_per_bin), where stat_value is a scalar and stat_per_bin is an array the same length as data.
The calc_err_func should have the following signature:
def func(data)
and returns an array the same length as data.
Examples
Define a chi-square statistic with the label “qstat”:
>>> def qstat(d, m, staterr=None, syserr=None, w=None):
...     if staterr is None:
...         staterr = 1
...     c = ((d-m) / staterr)
...     return ((c*c).sum(), c)
...
>>> load_user_stat("qstat", qstat)
>>> set_stat("qstat")
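The (statistic, per-bin) return contract can be exercised without Sherpa. This pure-Python variant of the chi-square example uses lists instead of NumPy arrays, and returns the squared per-bin contributions rather than the signed residuals (an illustrative choice):

```python
def qstat(d, m, staterr=None, syserr=None, w=None):
    # chi-square: returns (total statistic, per-bin contributions)
    if staterr is None:
        staterr = [1.0] * len(d)
    contrib = [((di - mi) / ei) ** 2
               for di, mi, ei in zip(d, m, staterr)]
    return sum(contrib), contrib

stat, per_bin = qstat([10.0, 12.0], [11.0, 11.0], [1.0, 2.0])
print(stat)     # 1.25
print(per_bin)  # [1.0, 0.25]
```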
- load_xstable_model(modelname, filename, etable=False)[source] [edit on github]
Load a XSPEC table model.
Create an additive ('atable', [1]), multiplicative ('mtable', [2]), or exponential ('etable', [3]) XSPEC table model component. These models may have multiple model parameters.
Changed in version 4.14.0: The etable argument has been added to allow exponential table models to be used.
- Parameters
modelname (str) – The identifier for this model component.
filename (str) – The name of the FITS file containing the data, which should match the XSPEC table model definition [4].
etable (bool, optional) – Set if this is an etable (as there’s no way to determine this from the file itself). Defaults to False.
- Raises
sherpa.utils.err.ImportErr – If XSPEC support is not enabled.
See also
load_conv
Load a 1D convolution model.
load_psf
Create a PSF model
load_template_model
Load a set of templates and use it as a model component.
load_table_model
Load tabular or image data and use it as a model component.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
Notes
NASA's HEASARC site contains a link to community-provided XSPEC table models [5].
References
- 1
http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XSmodelAtable.html
- 2
http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XSmodelMtable.html
- 3
http://heasarc.gsfc.nasa.gov/docs/xanadu/xspec/manual/XSmodelEtable.html
- 4
http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/general/ogip_92_009/ogip_92_009.html
- 5
Examples
Load in the XSPEC table model from the file ‘bbrefl_1xsolar.fits’ and create a model component labelled ‘xtbl’, which is then used in a source expression:
>>> load_xstable_model('xtbl', 'bbrefl_1xsolar.fits')
>>> set_source(xsphabs.gal * xtbl)
>>> print(xtbl)
Load in an XSPEC etable model:
>>> load_xstable_model('etbl', 'etable.mod', etable=True)
- normal_sample(num=1, sigma=1, correlate=True, id=None, otherids=(), numcores=None)[source] [edit on github]
Sample the fit statistic by taking the parameter values from a normal distribution.
For each iteration (sample), change the thawed parameters by drawing values from a uni- or multi-variate normal (Gaussian) distribution, and calculate the fit statistic.
- Parameters
num (int, optional) – The number of samples to use (default is 1).
sigma (number, optional) – The width of the normal distribution (the default is 1).
correlate (bool, optional) – Should a multi-variate normal be used, with parameters set by the covariance matrix (True), or should a uni-variate normal be used (False)?
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns
A NumPy array table with the first column representing the statistic and later columns the parameters used.
- Return type
samples
See also
fit
Fit a model to one or more data sets.
set_model
Set the source model expression for a data set.
set_stat
Set the statistical method.
t_sample
Sample from the Student’s t-distribution.
uniform_sample
Sample from a uniform distribution.
Notes
All thawed model parameters are sampled from the Gaussian distribution, where the mean is set as the best-fit parameter value and the variance is determined by the diagonal elements of the covariance matrix. The multi-variate Gaussian is assumed by default for correlated parameters, using the off-diagonal elements of the covariance matrix.
Examples
The model fit to the default data set has three free parameters. The median value of the statistic calculated by normal_sample is returned:
>>> ans = normal_sample(num=10000)
>>> ans.shape
(10000, 4)
>>> np.median(ans[:,0])
119.82959326927781
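Since the returned samples are a plain NumPy array, they can be summarized with standard NumPy routines. A minimal sketch, using a fabricated stand-in array of the same shape (in a real session it would come from ans = normal_sample(num=10000)) to show how a median statistic and a percentile-based parameter interval might be extracted:

```python
import numpy as np

# Stand-in for the array returned by normal_sample: column 0 holds the
# statistic, later columns hold the sampled parameter values. The
# numbers below are fabricated purely for illustration.
rng = np.random.default_rng(42)
num = 10000
stats = rng.normal(120.0, 5.0, num)
pars = rng.normal([1.7, 4500.0, 1e-5], [0.1, 20.0, 1e-6], size=(num, 3))
ans = np.column_stack([stats, pars])

median_stat = np.median(ans[:, 0])
# A 68% interval for the first thawed parameter (column 1).
lo, hi = np.percentile(ans[:, 1], [16, 84])
```

The same slicing works on the real samples array, whatever the number of thawed parameters.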
- notice(lo=None, hi=None, **kwargs)[source] [edit on github]
Include data in the fit.
Select one or more ranges of data to include by filtering on the independent axis value. The filter is applied to all data sets.
Changed in version 4.14.0: Integrated data sets - so Data1DInt and DataPHA when using energy or wavelengths - now ensure that the hi argument is exclusive, with better handling of the lo argument when it matches a bin edge. This can result in the same filter selecting a smaller number of bins than in earlier versions of Sherpa.
- Parameters
lo (number or str, optional) – The lower bound of the filter (when a number) or a string expression listing ranges in the form a:b, with multiple ranges allowed, where the ranges are separated by a ,. The term :b means include everything up to b (an exclusive limit for integrated data sets), and a: means include everything that is higher than, or equal to, a.
hi (number, optional) – The upper bound of the filter when lo is not a string.
bkg_id (int or str, optional) – The filter will be applied to the associated background component of the data set if bkg_id is set. Only PHA data sets support this option; if not given, then the filter is applied to all background components as well as the source data.
See also
notice_id
Include data for a data set.
sherpa.astro.ui.notice2d
Include a spatial region in an image.
ignore
Exclude data from the fit.
show_filter
Show any filters applied to a data set.
Notes
The order of ignore and notice calls is important, and the results are a union, rather than intersection, of the combination.
If notice is called on an un-filtered data set, then the ranges outside the noticed range are excluded: it can be thought of as if ignore had been used to remove all data points. If notice is called after a filter has been applied then the filter is applied to the existing data.
For binned data sets, the bin is included if the noticed range falls anywhere within the bin, but excluding the hi value (except for PHA data sets when using channel units).
The units used depend on the analysis setting of the data set, if appropriate.
To filter a 2D data set by a shape use notice2d.
Examples
Since the notice call is applied to an un-filtered data set, the filter chooses only those points that lie within the range 12 <= X <= 28.
>>> load_arrays(1, [10, 15, 20, 30], [5, 10, 7, 13])
>>> notice(12, 28)
>>> get_dep(filter=True)
array([10, 7])
As no limits are given, the whole data set is included:
>>> notice()
>>> get_dep(filter=True)
array([ 5, 10, 7, 13])
The ignore call excludes the first two points, but the notice call adds back in the second point:
>>> ignore(None, 17)
>>> notice(12, 16)
>>> get_dep(filter=True)
array([10, 7, 13])
Only include data points in the ranges 8<=X<=12 and 18<=X<=22:
>>> ignore()
>>> notice("8:12, 18:22")
>>> get_dep(filter=True)
array([5, 7])
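The ordering behaviour described in the Notes can be pictured as boolean-mask arithmetic. A minimal sketch (not Sherpa's implementation) reproducing the ignore-then-notice example above, where ignore ANDs points out of the mask and a later notice ORs them back in:

```python
import numpy as np

x = np.array([10, 15, 20, 30])
y = np.array([5, 10, 7, 13])

mask = np.ones(x.size, dtype=bool)   # un-filtered: everything noticed
mask &= ~(x <= 17)                   # ignore(None, 17): drop x <= 17
mask |= (x >= 12) & (x <= 16)        # notice(12, 16): add the range back

print(y[mask])                       # [10  7 13]
```

This is why the result is a union of the calls: the second point survives even though it lies inside the earlier ignored range.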
- notice2d(val=None)[source] [edit on github]
Include a spatial region of all data sets.
Select a spatial region to include in the fit. The filter is applied to all data sets.
- Parameters
val (str, optional) – A region specification as a string or the name of a file containing a region filter. The coordinate system of the filter is taken from the coordinate setting of the data sets (set_coord). If None, then all points are included.
See also
ignore2d
Exclude a spatial region from all data sets.
ignore2d_id
Exclude a spatial region from a data set.
ignore2d_image
Select the region to exclude from the image viewer.
notice2d_id
Include a spatial region of a data set.
notice2d_image
Select the region to include from the image viewer.
set_coord
Set the coordinate system to use for image analysis.
Notes
The region syntax support is provided by the CIAO region library [1]_, and supports the following shapes (the capitalized parts of the name indicate the minimum length of the name that is supported):
Name
Arguments
RECTangle
(xmin,ymin,xmax,ymax)
BOX
(xcenter,ycenter,width,height)
BOX
(xcenter,ycenter,width,height,angle)
ROTBOX
(xcenter,ycenter,width,height,angle)
CIRcle
(xcenter,ycenter,radius)
ANNULUS
(xcenter,ycenter,iradius,oradius)
ELLipse
(xcenter,ycenter,xradius,yradius,angle)
SECTor
(xcenter,ycenter,minangle,maxangle)
PIE
(xcenter,ycenter,iradius,oradius,minangle,maxangle)
POLYgon
(x1,y1,x2,y2,x3,y3,…)
POInt
(xcenter,ycenter)
REGION
(file)
FIELD
()
Angles are measured in degrees from the X axis, with a positive value indicating a counter-clockwise direction.
Only simple polygons are supported, which means that a polygon can not intersect itself. The last point does not need to equal the first point (i.e. polygons are automatically closed if necessary).
The shapes can be combined using AND (intersection), OR (union), or NOT (negation):
intersection:
shape1()*shape2() shape1()&shape2()
union:
shape1()+shape2() shape1()|shape2() shape1()shape2()
negation:
!shape1() shape1()-shape2() shape1()*!shape1()
The precedence uses the same rules as the mathematical operators + and * (with - replaced by *!), so that:
circle(0,0,10)+rect(10,-10,20,10)-circle(10,0,10)
means that the second circle is only excluded from the rectangle, and not the first circle. To remove it from both shapes requires writing:
circle(0,0,10)-circle(10,0,10)+rect(10,-10,20,10)-circle(10,0,10)
A point is included if the center of the pixel lies within the region. The comparison is done using the selected coordinate system for the image, so a pixel may not have a width and height of 1.
The REGION specifier is only supported when using CIAO. Unfortunately you can not combine region shapes using this syntax. That is,
region(s1.reg)+region(s2.reg)
is not supported.
References
Examples
Include the data points that lie within a circle centered at 4324.5,3827.5 with a radius of 430:
>>> set_coord('physical')
>>> notice2d('circle(4324.5,3827.5,430)')
Read in the filter from the file ds9.reg, using either:
>>> notice2d('ds9.reg')
or, when using CIAO,
>>> notice2d('region(ds9.reg)')
Select those points that lie both within the rotated box and the annulus (i.e. an intersection of the two shapes):
>>> notice2d('rotbox(100,200,50,40,45)*annulus(120,190,20,60)')
Select those points that lie within the rotated box or the annulus (i.e. a union of the two shapes):
>>> notice2d('rotbox(100,200,50,40,45)+annulus(120,190,20,60)')
All existing spatial filters are removed:
>>> notice2d()
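The AND/OR/NOT combination rules and the precedence example in the Notes can be illustrated with boolean masks over a pixel grid. A sketch (not the CIAO region library) where each shape is a mask, * maps to &, + to |, and ! to ~:

```python
import numpy as np

# A 200x200 pixel grid; indexing is mask[y, x].
yy, xx = np.mgrid[0:200, 0:200]

def circle(xc, yc, r):
    """Mask of pixels whose centers lie within the circle."""
    return (xx - xc) ** 2 + (yy - yc) ** 2 <= r ** 2

# circle(50,50,30)+circle(120,120,30)-circle(120,120,15):
# '-' binds like '*!', so the third circle is removed only from the
# second shape, leaving the first circle untouched.
mask = circle(50, 50, 30) | (circle(120, 120, 30) & ~circle(120, 120, 15))
```

The pixel at the center of the second circle is excluded by the negation, while pixels between the two radii, and all of the first circle, remain included.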
- notice2d_id(ids, val=None)[source] [edit on github]
Include a spatial region of a data set.
Select a spatial region to include in the fit. The filter is applied to the given data set, or sets.
- Parameters
ids (int or str, or array of int or str) – The data set, or sets, to use.
val (str, optional) – A region specification as a string or the name of a file containing a region filter. The coordinate system of the filter is taken from the coordinate setting of the data sets (set_coord). If None, then all points are included.
See also
ignore2d
Exclude a spatial region from all data sets.
ignore2d_id
Exclude a spatial region from a data set.
ignore2d_image
Select the region to exclude from the image viewer.
notice2d
Include a spatial region of all data sets.
notice2d_image
Select the region to include from the image viewer.
set_coord
Set the coordinate system to use for image analysis.
Notes
The region syntax is described in the notice2d function.
Examples
Select all the pixels in the default data set:
>>> notice2d_id(1)
Select all the pixels in data sets ‘i1’ and ‘i2’:
>>> notice2d_id(['i1', 'i2'])
Apply the filter to the ‘img’ data set:
>>> notice2d_id('img', 'annulus(4324.2,3982.2,40.2,104.3)')
Use the regions in the file srcs.reg for data set 1:
>>> notice2d_id(1, 'srcs.reg')
or
>>> notice2d_id(1, 'region(srcs.reg)')
- notice2d_image(ids=None)[source] [edit on github]
Include pixels using the region defined in the image viewer.
Include points that lie within the region defined in the image viewer.
- Parameters
ids (int or str, or sequence of int or str, optional) – The data set, or sets, to use. If None (the default) then the default identifier is used, as returned by get_default_id.
See also
ignore2d
Exclude a spatial region from an image.
ignore2d_image
Exclude pixels using the region defined in the image viewer.
notice2d
Include a spatial region of an image.
set_coord
Set the coordinate system to use for image analysis.
Notes
The region definition is converted into the coordinate system relevant to the data set before it is applied.
Examples
Use the region in the image viewer to include points from the default data set.
>>> notice2d_image()
Include points in the data set labelled “2”.
>>> notice2d_image(2)
Include points in data sets “src” and “bg”.
>>> notice2d_image(["src", "bg"])
- notice_id(ids, lo=None, hi=None, **kwargs)[source] [edit on github]
Include data in the fit for a data set.
Select one or more ranges of data to include by filtering on the independent axis value. The filter is applied to the given data set, or data sets.
Changed in version 4.14.0: Integrated data sets - so Data1DInt and DataPHA when using energy or wavelengths - now ensure that the hi argument is exclusive, with better handling of the lo argument when it matches a bin edge. This can result in the same filter selecting a smaller number of bins than in earlier versions of Sherpa.
- Parameters
ids (int or str, or array of int or str) – The data set, or sets, to use.
lo (number or str, optional) – The lower bound of the filter (when a number) or a string expression listing ranges in the form a:b, with multiple ranges allowed, where the ranges are separated by a ,. The term :b means include everything up to b (an exclusive limit for integrated data sets), and a: means include everything that is higher than, or equal to, a.
hi (number, optional) – The upper bound of the filter when lo is not a string.
bkg_id (int or str, optional) – The filter will be applied to the associated background component of the data set if bkg_id is set. Only PHA data sets support this option; if not given, then the filter is applied to all background components as well as the source data.
See also
ignore_id
Exclude data from the fit for a data set.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
notice
Include data in the fit.
show_filter
Show any filters applied to a data set.
Notes
The order of ignore and notice calls is important.
The units used depend on the analysis setting of the data set, if appropriate.
To filter a 2D data set by a shape use ignore2d.
Examples
Include all data points with an X value (the independent axis) between 12 and 18 for data set 1:
>>> notice_id(1, 12, 18)
Include the range 0.5 to 7, for data sets 1, 2, and 3:
>>> notice_id([1,2,3], 0.5, 7)
Apply the filter 0.5 to 2 and 2.2 to 7 to the data sets “core” and “jet”:
>>> notice_id(["core","jet"], "0.5:2, 2.2:7")
- pack_image(id=None)[source] [edit on github]
Convert a data set into an image structure.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns
The return value depends on the I/O library in use.
- Return type
img
See also
load_image
Load an image as a data set.
set_data
Set a data set.
unpack_image
Create an image data structure.
- pack_pha(id=None)[source] [edit on github]
Convert a PHA data set into a file structure.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns
The return value depends on the I/O library in use.
- Return type
pha
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
See also
load_pha
Load a file as a PHA data set.
set_data
Set a data set.
unpack_pha
Create a PHA data structure.
- pack_table(id=None)[source] [edit on github]
Convert a data set into a table structure.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Returns
The return value depends on the I/O library in use and the type of data (such as Data1D, Data2D).
- Return type
tbl
See also
load_table
Load a FITS binary file as a data set.
set_data
Set a data set.
unpack_table
Unpack a FITS binary file into a data structure.
- paramprompt(val=False)[source] [edit on github]
Should the user be asked for the parameter values when creating a model?
When val is True, calls to set_model will cause the user to be prompted for each parameter in the expression. The prompt includes the parameter name and default value, in []: the valid responses are
return which accepts the default
value which changes the parameter value
value, min which changes the value and the minimum value
value, min, max which changes the value, minimum, and maximum values
The value, min, and max components are optional, so “,-5” will use the default parameter value and set its minimum to -5, while “2,,10” will change the parameter value to 2 and its maximum to 10, but leave the minimum at its default. If any value is invalid then the parameter is re-prompted.
- Parameters
val (bool, optional) – If True, the user will be prompted to enter each parameter value, including support for changing the minimum and maximum values, when a model component is created. The default is False.
See also
set_model
Set the source model expression for a data set.
set_par
Set the value, limits, or behavior of a model parameter.
show_model
Display the model expression used to fit a data set.
Notes
Setting this to True only makes sense in an interactive environment. It is designed to be similar to the parameter prompting provided by X-Spec [1]_.
References
Examples
In the following, the default parameter settings are accepted for the pl.gamma parameter, the starting values for the pl.ref and gline.pos values are changed, the starting value and ranges of both the pl.ampl and gline.ampl parameters are set, and the gline.fwhm parameter is set to 100, with its maximum changed to 10000.
>>> paramprompt(True)
>>> set_source(powlaw1d.pl + gauss1d.gline)
pl.gamma parameter value [1]
pl.ref parameter value [1] 4500
pl.ampl parameter value [1] 1.0e-5,1.0e-8,0.01
gline.fwhm parameter value [10] 100,,10000
gline.pos parameter value [0] 4900
gline.ampl parameter value [1] 1.0e-3,1.0e-7,1
- plot(*args, **kwargs)[source] [edit on github]
Create one or more plot types.
The plot function creates one or more plots, depending on the arguments it is sent: a plot type, followed by optional identifiers, and this can be repeated. If no data set identifier is given for a plot type, the default identifier - as returned by get_default_id - is used.
Changed in version 4.12.2: Keyword arguments, such as alpha and ylog, can be sent to each plot.
- Raises
sherpa.utils.err.ArgumentErr – The data set does not support the requested plot type.
See also
get_default_id
Return the default data set identifier.
sherpa.astro.ui.set_analysis
Set the units used when fitting and displaying spectral data.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The supported plot types depend on the data set type, and include the following list. There are also individual functions, with plot_ prepended to the plot type, such as plot_data (the bkg variants use a prefix of plot_bkg_). There are also several multiple-plot commands, such as plot_fit_ratio, plot_fit_resid, and plot_fit_delchi.
arf
The ARF for the data set (only for DataPHA data sets).
bkg
The background.
bkgchisqr
The chi-squared statistic calculated for each bin when fitting the background.
bkgdelchi
The residuals for each bin, calculated as (data-model) divided by the error, for the background.
bkgfit
The data (as points) and the convolved model (as a line), for the background data set.
bkgmodel
The convolved background model.
bkgratio
The residuals for each bin, calculated as data/model, for the background data set.
bkgresid
The residuals for each bin, calculated as (data-model), for the background data set.
bkgsource
The un-convolved background model.
chisqr
The chi-squared statistic calculated for each bin.
data
The data (which may be background subtracted).
delchi
The residuals for each bin, calculated as (data-model) divided by the error.
fit
The data (as points) and the convolved model (as a line).
kernel
The PSF kernel associated with the data set.
model
The convolved model.
psf
The unfiltered PSF kernel associated with the data set.
ratio
The residuals for each bin, calculated as data/model.
resid
The residuals for each bin, calculated as (data-model).
source
The un-convolved model.
The plots can be specialized for a particular data type, such as the set_analysis command controlling the units used for PHA data sets.
See the documentation for the individual routines for information on how to configure the plots.
The plot capabilities depend on what plotting backend, if any, is installed. If there is none available, a warning message will be displayed when sherpa.ui or sherpa.astro.ui is imported, and the plot set of commands will not create any plots. The choice of back end is made by changing the options.plot_pkg setting in the Sherpa configuration file.
The keyword arguments are sent to each plot (so care must be taken to ensure they are valid for all plots).
Examples
Plot the data for the default data set. This is the same as plot_data.
>>> plot("data")
Plot the data for data set 2.
>>> plot("data", 2)
Plot the data and ARF for the default data set, in two separate plots.
>>> plot("data", "arf")
Plot the fit (data and model) for data sets 1 and 2, in two separate plots.
>>> plot("fit", 1, "fit", 2)
Plot the fit (data and model) for data sets “nucleus” and “jet”, in two separate plots.
>>> plot("fit", "nucleus", "fit", "jet")
Draw the data and model plots both with a log-scale for the y axis:
>>> plot("data", "model", ylog=True)
Plot the backgrounds for dataset 1 using the ‘up’ and ‘down’ components (in this case the background identifier):
>>> plot('bkg', 1, 'up', 'bkg', 1, 'down')
- plot_arf(id=None, resp_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the ARF associated with a data set.
Display the effective area curve from the ARF component of a PHA data set.
- Parameters
id (int or str, optional) – The data set with an ARF. If not given then the default identifier is used, as returned by get_default_id.
resp_id (int or str, optional) – Which ARF to use in the case that multiple ARFs are associated with a data set. The default is None, which means the first one.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_data. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
See also
get_arf_plot
Return the data used by plot_arf.
plot
Create one or more plot types.
Examples
Plot the ARF for the default data set:
>>> plot_arf()
Plot the ARF from data set 1 and overplot the ARF from data set 2:
>>> plot_arf(1)
>>> plot_arf(2, overplot=True)
Plot the ARFs labelled “arf1” and “arf2” for the “src” data set:
>>> plot_arf("src", "arf1")
>>> plot_arf("src", "arf2", overplot=True)
The following example requires that the Matplotlib backend is selected, since this determines what extra keywords plot_arf accepts. The ARFs from the default and data set 2 are drawn together, but the second curve is drawn with a dashed line.
>>> plot_arf(ylog=True)
>>> plot_arf(2, overplot=True, linestyle='dashed')
- plot_bkg(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the background values for a PHA data set.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
See also
get_bkg_plot
Return the data used by plot_bkg.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
set_analysis
Set the units used when fitting and displaying spectral data.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Examples
Plot the background from the default data set:
>>> plot_bkg()
Overplot the background from the ‘jet’ data set on the data. There is no scaling for differences in aperture or exposure time:
>>> plot_data('jet')
>>> plot_bkg('jet', overplot=True)
Compare the first two background components of data set 1:
>>> plot_bkg(1, 1)
>>> plot_bkg(1, 2, overplot=True)
- plot_bkg_chisqr(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the chi-squared value for each point of the background of a PHA data set.
Display the square of the residuals (data-model) divided by the error values for the background of a PHA data set when it is being fit, rather than subtracted from the source.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_chisqr. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_chisqr_plot
Return the data used by plot_bkg_chisqr.
plot_bkg_delchi
Plot the ratio of residuals to error for the background of a PHA data set.
plot_bkg_ratio
Plot the ratio of data to model values for the background of a PHA data set.
plot_bkg_resid
Plot the residual (data-model) values for the background of a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
Examples
>>> plot_bkg_chisqr()
>>> plot_bkg_chisqr('jet', bkg_id=1)
>>> plot_bkg_chisqr('jet', bkg_id=2, overplot=True)
- plot_bkg_delchi(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the ratio of residuals to error for the background of a PHA data set.
Display the ratio of the residuals (data-model) to the error values for the background of a PHA data set when it is being fit, rather than subtracted from the source.
Changed in version 4.12.0: The Y axis is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_delchi. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
plot_bkg_chisqr
Plot the chi-squared value for each point of the background of a PHA data set.
plot_bkg_ratio
Plot the ratio of data to model values for the background of a PHA data set.
plot_bkg_resid
Plot the residual (data-model) values for the background of a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
Notes
The ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
>>> plot_bkg_delchi()
>>> plot_bkg_delchi('jet', bkg_id=1)
>>> plot_bkg_delchi('jet', bkg_id=2, overplot=True)
- plot_bkg_fit(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the fit results (data, model) for the background of a PHA data set.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_fit. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_fit_plot
Return the data used by plot_bkg_fit.
plot
Create one or more plot types.
plot_bkg
Plot the background values for a PHA data set.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_fit_delchi
Plot the fit results, and the residuals, for the background of a PHA data set.
plot_bkg_fit_ratio
Plot the fit results, and the data/model ratio, for the background of a PHA data set.
plot_bkg_fit_resid
Plot the fit results, and the residuals, for the background of a PHA data set.
plot_fit
Plot the fit results (data, model) for a data set.
set_analysis
Set the units used when fitting and displaying spectral data.
Examples
Plot the background fit to the default data set:
>>> plot_bkg_fit()
- plot_bkg_fit_delchi(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the fit results, and the residuals, for the background of a PHA data set.
This creates two plots - the first from plot_bkg_fit and the second from plot_bkg_delchi - for a data set.
Changed in version 4.12.2: The overplot option now works.
Changed in version 4.12.0: The Y axis of the residual plot is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_fit_delchi. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_fit_plot
Return the data used by plot_bkg_fit.
get_bkg_delchi_plot
Return the data used by plot_bkg_delchi.
plot
Create one or more plot types.
plot_bkg
Plot the background values for a PHA data set.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_fit
Plot the fit results (data, model) for the background of a PHA data set.
plot_bkg_fit_ratio
Plot the fit results, and the data/model ratio, for the background of a PHA data set.
plot_bkg_fit_resid
Plot the fit results, and the residuals, for the background of a PHA data set.
plot_fit
Plot the fit results (data, model) for a data set.
plot_fit_delchi
Plot the fit results, and the residuals, for a data set.
set_analysis
Set the units used when fitting and displaying spectral data.
Notes
For the residual plot, the ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the background fit and residuals (normalised by the error) to the default data set:
>>> plot_bkg_fit_delchi()
- plot_bkg_fit_ratio(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the fit results, and the data/model ratio, for the background of a PHA data set.
This creates two plots - the first from plot_bkg_fit and the second from plot_bkg_ratio - for a data set.
Changed in version 4.12.2: The overplot option now works.
New in version 4.12.0.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_fit_ratio. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_fit_plot
Return the data used by plot_bkg_fit.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot
Create one or more plot types.
plot_bkg
Plot the background values for a PHA data set.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_fit
Plot the fit results (data, model) for the background of a PHA data set.
plot_bkg_fit_delchi
Plot the fit results, and the residuals, for the background of a PHA data set.
plot_bkg_fit_resid
Plot the fit results, and the residuals, for the background of a PHA data set.
plot_fit
Plot the fit results (data, model) for a data set.
plot_fit_resid
Plot the fit results, and the residuals, for a data set.
set_analysis
Set the units used when fitting and displaying spectral data.
Notes
For the residual plot, the ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the background fit and the ratio of the background to this fit for the default data set:
>>> plot_bkg_fit_ratio()
- plot_bkg_fit_resid(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the fit results, and the residuals, for the background of a PHA data set.
This creates two plots - the first from plot_bkg_fit and the second from plot_bkg_resid - for a data set.
Changed in version 4.12.2: The overplot option now works.
Changed in version 4.12.0: The Y axis of the residual plot is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_fit_resid. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_fit_plot
Return the data used by plot_bkg_fit.
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot
Create one or more plot types.
plot_bkg
Plot the background values for a PHA data set.
plot_bkg_model
Plot the model for the background of a PHA data set.
plot_bkg_fit
Plot the fit results (data, model) for the background of a PHA data set.
plot_bkg_fit_ratio
Plot the fit results, and the data/model ratio, for the background of a PHA data set.
plot_bkg_fit_delchi
Plot the fit results, and the residuals, for the background of a PHA data set.
plot_fit
Plot the fit results (data, model) for a data set.
plot_fit_resid
Plot the fit results, and the residuals, for a data set.
set_analysis
Set the units used when fitting and displaying spectral data.
Notes
For the residual plot, the ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the background fit and residuals to the default data set:
>>> plot_bkg_fit_resid()
- plot_bkg_model(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the model for the background of a PHA data set.
This function plots the model for the background of a PHA data set, which includes any instrument response (the ARF and RMF).
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_model. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_model_plot
Return the data used by plot_bkg_model.
plot_bkg_source
Plot the model expression for the background of a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
Examples
>>> plot_bkg_model()
>>> plot_bkg('jet')
>>> plot_bkg_model('jet', bkg_id=1, overplot=True)
>>> plot_bkg_model('jet', bkg_id=2, overplot=True)
- plot_bkg_ratio(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the ratio of data to model values for the background of a PHA data set.
Display the ratio of data to model values for the background of a PHA data set when it is being fit, rather than subtracted from the source.
Changed in version 4.12.0: The Y axis is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_ratio. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_ratio_plot
Return the data used by plot_bkg_ratio.
plot_bkg_chisqr
Plot the chi-squared value for each point of the background of a PHA data set.
plot_bkg_delchi
Plot the ratio of residuals to error for the background of a PHA data set.
plot_bkg_resid
Plot the residual (data-model) values for the background of a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
Notes
The ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
>>> plot_bkg_ratio()
>>> plot_bkg_ratio('jet', bkg_id=1)
>>> plot_bkg_ratio('jet', bkg_id=2, overplot=True)
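The plotted quantity is simply the per-bin ratio of data to model. A self-contained sketch with made-up numbers (not Sherpa output):

```python
# Illustrative only: the data/model ratio that plot_bkg_ratio displays.
# A perfect fit would give a flat line at 1.0.
data = [12.0, 9.0, 15.0]   # made-up background counts per bin
model = [10.0, 10.0, 12.0]  # made-up fitted model values

ratio = [d / m for d, m in zip(data, model)]
print(ratio)  # values scatter around 1.0 for a good fit
```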
- plot_bkg_resid(id=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the residual (data-model) values for the background of a PHA data set.
Display the residuals for the background of a PHA data set when it is being fit, rather than subtracted from the source.
Changed in version 4.12.0: The Y axis is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_resid. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_resid_plot
Return the data used by plot_bkg_resid.
plot_bkg_chisqr
Plot the chi-squared value for each point of the background of a PHA data set.
plot_bkg_delchi
Plot the ratio of residuals to error for the background of a PHA data set.
plot_bkg_ratio
Plot the ratio of data to model values for the background of a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
Notes
The ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
>>> plot_bkg_resid()
>>> plot_bkg('jet')
>>> plot_bkg_resid('jet', bkg_id=1, overplot=True)
>>> plot_bkg_resid('jet', bkg_id=2, overplot=True)
- plot_bkg_source(id=None, lo=None, hi=None, bkg_id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the model expression for the background of a PHA data set.
This function plots the model for the background of a PHA data set. It does not include the instrument response (the ARF and RMF).
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
lo (number, optional) – The low value to plot.
hi (number, optional) – The high value to plot.
bkg_id (int or str, optional) – Identify the background component to use, if there are multiple ones associated with the data set.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_bkg_model. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IdentifierErr – If the bkg_id parameter is invalid.
sherpa.utils.err.ModelErr – If no model expression has been created for the background data.
See also
get_bkg_source_plot
Return the data used by plot_bkg_source.
plot_bkg_model
Plot the model for the background of a PHA data set.
set_bkg_model
Set the background model expression for a PHA data set.
Examples
>>> plot_bkg_source()
>>> plot_bkg_source('jet', bkg_id=1)
>>> plot_bkg_source('jet', bkg_id=2, overplot=True)
- plot_cdf(points, name='x', xlabel='x', replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the cumulative density function of an array of values.
Create and plot the cumulative density function (CDF) of the input array. The median and the upper and lower quartiles are marked on the plot.
- Parameters
points (array) – The values used to create the cumulative density function.
name (str, optional) – The label to use as part of the plot title.
xlabel (str, optional) – The label for the X and part of the Y axes.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_cdf. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_cdf_plot
Return the data used to plot the last CDF.
get_draws
Run the pyBLoCXS MCMC algorithm.
plot_pdf
Plot the probability density function of an array.
plot_scatter
Create a scatter plot.
Examples
>>> mu, sigma, n = 100, 15, 500
>>> x = np.random.normal(loc=mu, scale=sigma, size=n)
>>> plot_cdf(x)
>>> plot_cdf(x, xlabel="x pos", name="Simulations")
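The empirical CDF behind this plot can be built by sorting the points and assigning each one its cumulative fraction; the quartile markers come from the 25th, 50th, and 75th percentiles. A stdlib-only sketch of that construction (no Sherpa calls; the input values are made up):

```python
import statistics

# Illustrative only: construct the empirical CDF that plot_cdf draws.
points = [3.0, 1.0, 4.0, 1.5, 5.0, 9.0, 2.6, 5.3]

xs = sorted(points)
# y value for the i-th sorted point: fraction of points <= that value
ys = [(i + 1) / len(xs) for i in range(len(xs))]

# The markers plot_cdf adds: lower quartile, median, upper quartile
q1, median, q3 = statistics.quantiles(points, n=4)
print(median)
```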
- plot_chisqr(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the chi-squared value for each point in a data set.
This function displays the square of the residuals (data - model) divided by the error, for a data set.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_chisqr. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_chisqr_plot
Return the data used by plot_chisqr.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_delchi
Plot the ratio of residuals to error for a data set.
plot_ratio
Plot the ratio of data to model for a data set.
plot_resid
Plot the residuals (data - model) for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Examples
Plot the chi-square values for each point in the default data set:
>>> plot_chisqr()
Overplot the values from the ‘core’ data set on those from the ‘jet’ dataset:
>>> plot_chisqr('jet')
>>> plot_chisqr('core', overplot=True)
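Each plotted point is the square of (data - model)/error, i.e. that bin's contribution to the chi-squared statistic. A minimal sketch with made-up values (not Sherpa output):

```python
# Illustrative only: per-bin chi-squared contributions, as shown by
# plot_chisqr: ((data - model) / error) ** 2.
data = [12.0, 9.0, 15.0]
model = [10.0, 10.0, 12.0]
error = [2.0, 3.0, 3.0]

chisqr = [((d - m) / e) ** 2 for d, m, e in zip(data, model, error)]
total = sum(chisqr)  # summing the points recovers the chi-squared statistic
print(chisqr, total)
```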
- plot_data(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the data values.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_data. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_data_plot
Return the data used by plot_data.
get_data_plot_prefs
Return the preferences for plot_data.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
sherpa.astro.ui.set_analysis
Set the units used when fitting and displaying spectral data.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The additional arguments supported by plot_data are the same as the keywords of the dictionary returned by get_data_plot_prefs.
Examples
Plot the data from the default data set:
>>> plot_data()
Plot the data from data set 1:
>>> plot_data(1)
Plot the data from the data set labelled “jet” and then overplot the “core” data set. The set_xlog command is used to select a logarithmic scale for the X axis.
>>> set_xlog("data")
>>> plot_data("jet")
>>> plot_data("core", overplot=True)
The following example requires that the Matplotlib backend is selected, and uses a Matplotlib function to create a subplot (in this case one filling the bottom half of the plot area) and then calls plot_data with the clearwindow argument set to False to use this subplot. If the clearwindow argument had not been used then the plot area would have been cleared and the plot would have filled the area.
>>> plt.subplot(2, 1, 2)
>>> plot_data(clearwindow=False)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_data_plot_prefs. Examples include (for the Matplotlib backend): adding a “cap” to the error bars:
>>> plot_data(capsize=4)
changing the symbol to a square:
>>> plot_data(marker='s')
using a dotted line to connect the points:
>>> plot_data(linestyle='dotted')
and plotting multiple data sets on the same plot, using a log scale for the Y axis, setting the alpha transparency for each plot, and explicitly setting the colors of the last two datasets:
>>> plot_data(ylog=True, alpha=0.7)
>>> plot_data(2, overplot=True, alpha=0.7, color='brown')
>>> plot_data(3, overplot=True, alpha=0.7, color='purple')
- plot_delchi(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the ratio of residuals to error for a data set.
This function displays the residuals (data - model) divided by the error, for a data set.
Changed in version 4.12.0: The Y axis is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_delchi. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_delchi_plot
Return the data used by plot_delchi.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_chisqr
Plot the chi-squared value for each point in a data set.
plot_ratio
Plot the ratio of data to model for a data set.
plot_resid
Plot the residuals (data - model) for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
Notes
The additional arguments supported by plot_delchi are the same as the keywords of the dictionary returned by get_data_plot_prefs.
The ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the residuals for the default data set, divided by the error value for each bin:
>>> plot_delchi()
Overplot the values from the ‘core’ data set on those from the ‘jet’ dataset:
>>> plot_delchi('jet')
>>> plot_delchi('core', overplot=True)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_data_plot_prefs. The following sets the error bars to be orange and the marker to be a circle (larger than the default one):
>>> plot_delchi(ecolor='orange', marker='o')
- plot_energy_flux(lo=None, hi=None, id=None, num=7500, bins=75, correlated=False, numcores=None, bkg_id=None, scales=None, model=None, otherids=(), recalc=True, clip='hard', overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Display the energy flux distribution.
For each iteration, draw the parameter values of the model from a normal distribution, evaluate the model, and sum the model over the given range (the flux). Plot up the distribution of this flux. The units for the flux are as returned by calc_energy_flux. The sample_energy_flux and get_energy_flux_hist functions return the data used to create this plot.
Changed in version 4.12.2: The scales parameter is no longer ignored when set, and the model and otherids parameters have been added. The clip argument has been added.
- Parameters
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string, optional) – The identifier of the data set to use. If None, the default value, then all datasets with associated models are used to calculate the errors and the model evaluation is done using the default dataset.
num (int, optional) – The number of samples to create. The default is 7500.
bins (int, optional) – The number of bins to use for the histogram.
correlated (bool, optional) – If True (the default is False) then scales is the full covariance matrix, otherwise it is just a 1D array containing the variances of the parameters (the diagonal elements of the covariance matrix).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The size and shape of the array depends on the number of free parameters in the fit (n) and the value of the correlated parameter. When the parameter is True, scales must be given the covariance matrix for the free parameters (a n by n matrix that matches the parameter ordering used by Sherpa). For un-correlated parameters the covariance matrix can be used, or a one-dimensional array of n elements can be used, giving the width (specified as the sigma value of a normal distribution) for each parameter (e.g. the square root of the diagonal elements of the covariance matrix). If the scales parameter is not given then the covariance matrix is evaluated for the current model and best-fit parameters.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples. The model must be part of the source expression.
otherids (sequence of integer and string ids, optional) – The list of other datasets that should be included when calculating the errors to draw values from.
recalc (bool, optional) – If True, the default, then re-calculate the values rather than use the values from the last time the function was run.
clip ({'hard', 'soft', 'none'}, optional) – What clipping strategy should be applied to the sampled parameters. The default (‘hard’) is to fix values at their hard limits if they exceed them. A value of ‘soft’ uses the soft limits instead, and ‘none’ applies no clipping.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
covar
Estimate the confidence intervals using the confidence method.
get_energy_flux_hist
Return the data displayed by plot_energy_flux.
get_photon_flux_hist
Return the data displayed by plot_photon_flux.
plot_cdf
Plot the cumulative density function of an array.
plot_pdf
Plot the probability density function of an array.
plot_photon_flux
Display the photon flux distribution.
plot_trace
Create a trace plot of row number versus value.
sample_energy_flux
Return the energy flux distribution of a model.
sample_flux
Return the flux distribution of a model.
sample_photon_flux
Return the photon flux distribution of a model.
Examples
Plot the energy flux distribution for the range 0.5 to 7 for the default data set:
>>> plot_energy_flux(0.5, 7, num=1000)
Overplot the 0.5 to 2 energy flux distribution from the “core” data set on top of the values from the “jet” data set:
>>> plot_energy_flux(0.5, 2, id="jet", num=1000)
>>> plot_energy_flux(0.5, 2, id="core", num=1000, overplot=True)
Overplot the flux distribution for just the pl component (which must be part of the source expression) on top of the full model. If the full model was xsphabs.gal * powlaw1d.pl then this will compare the unabsorbed to absorbed flux distributions:
>>> plot_energy_flux(0.5, 2, num=1000, bins=20)
>>> plot_energy_flux(0.5, 2, model=pl, num=1000, bins=20)
If you have multiple datasets loaded, each with a model, then all datasets will be used to calculate the errors when the id parameter is not set. A single dataset can be used by specifying a dataset (in this example the overplot is just with dataset 1):
>>> mdl = xsphabs.gal * xsapec.src
>>> set_source(1, mdl)
>>> set_source(2, mdl)
...
>>> plot_energy_flux(0.5, 2, model=src, num=1000, bins=20)
>>> plot_energy_flux(0.5, 2, model=src, num=1000, bins=20,
...                  id=1, overplot=True)
If you have multiple datasets then you can use the otherids argument to specify exactly what set of data is used:
>>> plot_energy_flux(0.5, 2, model=src, num=1000, bins=20,
...                  id=1, otherids=(2, 3, 4))
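The Monte-Carlo scheme described above can be sketched without Sherpa: draw each free parameter from a normal distribution centred on its best-fit value, evaluate the model over the band, and collect the summed value as one flux sample. Everything here (the toy power-law, the "best-fit" values, the widths) is made up purely for illustration and stands in for a real covariance estimate:

```python
import random

random.seed(42)

# Hypothetical best-fit power-law parameters and their 1-sigma widths.
best_ampl, sigma_ampl = 1.0e-3, 1.0e-4
best_gamma, sigma_gamma = 1.7, 0.1

energies = [0.5 + 0.1 * i for i in range(16)]  # a 0.5-2.0 grid

def powlaw(ampl, gamma, e):
    """Toy power-law model: ampl * e**(-gamma)."""
    return ampl * e ** (-gamma)

fluxes = []
for _ in range(1000):
    # Draw uncorrelated parameter values (cf. correlated=False).
    ampl = random.gauss(best_ampl, sigma_ampl)
    gamma = random.gauss(best_gamma, sigma_gamma)
    # Sum the model over the band: the "flux" for this iteration.
    fluxes.append(sum(powlaw(ampl, gamma, e) for e in energies))

# plot_energy_flux would histogram this distribution.
fluxes.sort()
median_flux = fluxes[len(fluxes) // 2]
```

A `scales` argument corresponds to replacing the two sigma values here with a supplied covariance description.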
- plot_fit(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the fit results (data, model) for a data set.
This function creates a plot containing the data and the model (including any instrument response) for a data set.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_fit. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_fit_plot
Return the data used to create the fit plot.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_fit_delchi
Plot the fit results, and the residuals, for a data set.
plot_fit_ratio
Plot the fit results, and the ratio of data to model, for a data set.
plot_fit_resid
Plot the fit results, and the residuals, for a data set.
plot_data
Plot the data values.
plot_model
Plot the model for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The additional arguments supported by plot_fit are the same as the keywords of the dictionary returned by get_data_plot_prefs.
Examples
Plot the fit results for the default data set:
>>> plot_fit()
Overplot the ‘core’ results on those from the ‘jet’ data set, using a logarithmic scale for the X axis:
>>> set_xlog()
>>> plot_fit('jet')
>>> plot_fit('core', overplot=True)
Keyword arguments can be given to override the plot preferences; for example the following sets the y axis to a log scale, but only for this plot:
>>> plot_fit(ylog=True)
The color can be changed for both the data and model using (note that the keyword name and supported values depends on the plot backend; this example assumes that Matplotlib is being used):
>>> plot_fit(color='orange')
Draw the fits for two datasets, setting the second one partially transparent (this assumes Matplotlib is used):
>>> plot_fit(1)
>>> plot_fit(2, alpha=0.7, overplot=True)
- plot_fit_delchi(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the fit results, and the residuals, for a data set.
This creates two plots - the first from plot_fit and the second from plot_delchi - for a data set.
Changed in version 4.12.2: The overplot option now works.
Changed in version 4.12.0: The Y axis of the delchi plot is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_fit_delchi. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_fit_plot
Return the data used to create the fit plot.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_fit
Plot the fit results for a data set.
plot_fit_ratio
Plot the fit results, and the ratio of data to model, for a data set.
plot_fit_resid
Plot the fit results, and the residuals, for a data set.
plot_data
Plot the data values.
plot_delchi
Plot the ratio of residuals to error for a data set.
plot_model
Plot the model for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The additional arguments supported by plot_fit_delchi are the same as the keywords of the dictionary returned by get_data_plot_prefs, and are applied to both plots.
For the delchi plot, the ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the results for the default data set:
>>> plot_fit_delchi()
Overplot the ‘core’ results on those from the ‘jet’ data set, using a logarithmic scale for the X axis:
>>> set_xlog()
>>> plot_fit_delchi('jet')
>>> plot_fit_delchi('core', overplot=True)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_data_plot_prefs. The following sets the error bars to be drawn in gray when using the Matplotlib backend:
>>> plot_fit_delchi(ecolor='gray')
- plot_fit_ratio(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Plot the fit results, and the ratio of data to model, for a data set.
This creates two plots - the first from plot_fit and the second from plot_ratio - for a data set.
Changed in version 4.12.2: The overplot option now works.
New in version 4.12.0.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_fit_ratio. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_fit_plot
Return the data used to create the fit plot.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_fit
Plot the fit results for a data set.
plot_fit_resid
Plot the fit results, and the residuals, for a data set.
plot_fit_delchi
Plot the fit results, and the residuals, for a data set.
plot_data
Plot the data values.
plot_model
Plot the model for a data set.
plot_ratio
Plot the ratio of data to model for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The additional arguments supported by plot_fit_ratio are the same as the keywords of the dictionary returned by get_data_plot_prefs, and are applied to both plots.
For the ratio plot, the ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the results for the default data set:
>>> plot_fit_ratio()
Overplot the ‘core’ results on those from the ‘jet’ data set, using a logarithmic scale for the X axis:
>>> set_xlog()
>>> plot_fit_ratio('jet')
>>> plot_fit_ratio('core', overplot=True)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_data_plot_prefs. The following sets the plots to use square symbols (this includes the model as well as the data in the top plot) and turns off the line between the points, when using the Matplotlib backend:
>>> plot_fit_ratio(marker='s', linestyle='none')
- plot_fit_resid(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the fit results, and the residuals, for a data set.
This creates two plots - the first from plot_fit and the second from plot_resid - for a data set.
Changed in version 4.12.2: The overplot option now works.
Changed in version 4.12.0: The Y axis of the residual plot is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the previous values. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_fit_plot
Return the data used to create the fit plot.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_fit
Plot the fit results for a data set.
plot_fit_delchi
Plot the fit results, and the residuals divided by the errors, for a data set.
plot_fit_ratio
Plot the fit results, and the ratio of data to model, for a data set.
plot_data
Plot the data values.
plot_model
Plot the model for a data set.
plot_resid
Plot the residuals (data - model) for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The additional arguments supported by plot_fit_resid are the same as the keywords of the dictionary returned by get_data_plot_prefs, and are applied to both plots.
For the residual plot, the ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the results for the default data set:
>>> plot_fit_resid()
Overplot the ‘core’ results on those from the ‘jet’ data set, using a logarithmic scale for the X axis:
>>> set_xlog()
>>> plot_fit_resid('jet')
>>> plot_fit_resid('core', overplot=True)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_data_plot_prefs. The following sets the data in both plots to be drawn in a blue color, with caps on the error bars, but drawing only the Y error bars:
>>> plot_fit_resid(capsize=4, color='skyblue', xerrorbars=False)
- plot_kernel(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the 1D kernel applied to a data set.
The plot_psf function shows the full PSF, from which the kernel is derived.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_kernel. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_kernel_plot
Return the data used by plot_kernel.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_psf
Plot the 1D PSF model applied to a data set.
set_psf
Add a PSF model to a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Examples
Create a model (a step function) that is convolved by a gaussian, and display the kernel overplotted on the PSF:
>>> dataspace1d(1, 10, step=1, dstype=Data1D)
>>> set_model(steplo1d.stp)
>>> stp.xcut = 4.4
>>> load_psf('psf1', gauss1d.gline)
>>> set_psf('psf1')
>>> gline.fwhm = 1.2
>>> plot_psf()
>>> plot_kernel(overplot=True)
- plot_model(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the model for a data set.
This function plots the model for a data set, which includes any instrument response (e.g. a convolution created by set_psf).
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_model. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_model_plot
Return the data used to create the model plot.
get_model_plot_prefs
Return the preferences for plot_model.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_model_component
Plot a component of the model for a data set.
plot_source
Plot the source expression for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The additional arguments supported by plot_model are the same as the keywords of the dictionary returned by get_model_plot_prefs.
For PHA data sets the model plot created by plot_model differs from the model plot created by plot_fit: the fit version uses the grouping of the data set whereas the plot_model version shows the ungrouped data (that is, it uses the instrumental grid). The filters used are the same in both cases.
Examples
Plot the convolved source model for the default data set:
>>> plot_model()
Overplot the model for data set 2 on data set 1:
>>> plot_model(1)
>>> plot_model(2, overplot=True)
Create the equivalent of plot_fit('jet'):
>>> plot_data('jet')
>>> plot_model('jet', overplot=True)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_model_plot_prefs. The following plots the model using a log scale for both axes, and then overplots the model from data set 2 using a dashed, slightly transparent line:
>>> plot_model(xlog=True, ylog=True)
>>> plot_model(2, overplot=True, alpha=0.7, linestyle='dashed')
- plot_model_component(id, model=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot a component of the model for a data set.
This function evaluates and plots a component of the model expression for a data set, including any instrument response. Use plot_source_component to display without any response. For PHA data, the response model is automatically added by the routine unless the model contains a response.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_model_component. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_model_component_plot
Return the data used to create the model-component plot.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_source_component
Plot a component of the source expression for a data set.
plot_model
Plot the model for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
The additional keyword arguments match the keywords of the dictionary returned by get_model_plot_prefs.
Examples
Overplot the pl component of the model expression for the default data set:
>>> plot_model()
>>> plot_model_component(pl, overplot=True)
Display the results for the ‘jet’ data set (data and model), and then overplot the pl component evaluated for the ‘jet’ and ‘core’ data sets:
>>> plot_fit('jet')
>>> plot_model_component('jet', pl, overplot=True)
>>> plot_model_component('core', pl, overplot=True)
For PHA data sets the response is automatically added, but it can also be explicitly included, which will create the same plot:
>>> plot_model_component(pl)
>>> rsp = get_response()
>>> plot_model_component(rsp(pl))
- plot_order(id=None, orders=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the model for a data set convolved by the given response.
Some data sets - such as grating PHA data - can have multiple responses. The plot_order function acts like plot_model, in that it displays the model after passing through a response, but allows the user to select which response to use.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
orders (optional) – Which response to use. The argument can be a scalar or array, in which case multiple curves will be displayed. The default is to use all orders.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_order. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_order_plot
Return the data used by plot_order.
plot
Create one or more plot types.
plot_model
Plot the model for a data set.
Examples
Display the source model convolved by the first response for the default data set:
>>> plot_order(orders=1)
Plot the source convolved through the first and second responses for the second data set (separate curves for each response):
>>> plot_order(2, orders=[1, 2])
Add the orders plot to a model plot:
>>> plot_model()
>>> plot_order(orders=[2, 3], overplot=True)
- plot_pdf(points, name='x', xlabel='x', bins=12, normed=True, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the probability density function of an array of values.
Create and plot the probability density function (PDF) of the input array.
- Parameters
points (array) – The values used to create the probability density function.
name (str, optional) – The label to use as part of the plot title.
xlabel (str, optional) – The label for the X axis.
bins (int, optional) – The number of bins to use to create the PDF.
normed (bool, optional) – Should the PDF be normalized? The default is True.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_pdf. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_draws
Run the pyBLoCXS MCMC algorithm.
get_pdf_plot
Return the data used to plot the last PDF.
plot_cdf
Plot the cumulative density function of an array.
plot_scatter
Create a scatter plot.
Examples
>>> mu, sigma, n = 100, 15, 500
>>> x = np.random.normal(loc=mu, scale=sigma, size=n)
>>> plot_pdf(x, bins=25)
>>> plot_pdf(x, normed=False, xlabel="mu", name="Simulations")
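The effect of the normed flag can be checked outside Sherpa with NumPy alone: with normalization on, the histogram bin heights times the bin widths sum to one. This is a sketch of the assumed underlying behaviour (equivalent to NumPy's density option), not Sherpa code, and the sample values are invented:

```python
import numpy as np

# Simulated sample, standing in for the `points` argument of plot_pdf.
rng = np.random.default_rng(0)
points = rng.normal(loc=100, scale=15, size=500)

# With normed=True the PDF histogram is scaled so that it integrates
# to one, matching numpy's density=True behaviour.
pdf, edges = np.histogram(points, bins=12, density=True)
total = np.sum(pdf * np.diff(edges))  # integrates to 1 (up to rounding)
```

With normed=False the bin heights would instead be raw counts, summing to the sample size.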
- plot_photon_flux(lo=None, hi=None, id=None, num=7500, bins=75, correlated=False, numcores=None, bkg_id=None, scales=None, model=None, otherids=(), recalc=True, clip='hard', overplot=False, clearwindow=True, **kwargs)
Display the photon flux distribution.
For each iteration, draw the parameter values of the model from a normal distribution, evaluate the model, and sum the model over the given range (the flux). Plot the distribution of this flux. The units for the flux are as returned by calc_photon_flux. The sample_photon_flux and get_photon_flux_hist functions return the data used to create this plot.
Changed in version 4.12.2: The scales parameter is no longer ignored when set, the model and otherids parameters have been added, and the clip argument has been added.
- Parameters
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (number, optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string, optional) – The identifier of the data set to use. If None, the default value, then all datasets with associated models are used to calculate the errors and the model evaluation is done using the default dataset.
num (int, optional) – The number of samples to create. The default is 7500.
bins (int, optional) – The number of bins to use for the histogram.
correlated (bool, optional) – If True (the default is False) then scales is the full covariance matrix, otherwise it is just a 1D array containing the variances of the parameters (the diagonal elements of the covariance matrix).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The size and shape of the array depends on the number of free parameters in the fit (n) and the value of the correlated parameter. When the parameter is True, scales must be given the covariance matrix for the free parameters (a n by n matrix that matches the parameter ordering used by Sherpa). For un-correlated parameters the covariance matrix can be used, or a one-dimensional array of n elements can be used, giving the width (specified as the sigma value of a normal distribution) for each parameter (e.g. the square root of the diagonal elements of the covariance matrix). If the scales parameter is not given then the covariance matrix is evaluated for the current model and best-fit parameters.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples. The model must be part of the source expression.
otherids (sequence of integer and string ids, optional) – The list of other datasets that should be included when calculating the errors to draw values from.
recalc (bool, optional) – If True, the default, then re-calculate the values rather than use the values from the last time the function was run.
clip ({'hard', 'soft', 'none'}, optional) – What clipping strategy should be applied to the sampled parameters. The default ('hard') is to fix values at their hard limits if they exceed them. A value of 'soft' uses the soft limits instead, and 'none' applies no clipping.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
covar
Estimate the confidence intervals using the confidence method.
get_energy_flux_hist
Return the data displayed by plot_energy_flux.
get_photon_flux_hist
Return the data displayed by plot_photon_flux.
plot_cdf
Plot the cumulative density function of an array.
plot_pdf
Plot the probability density function of an array.
plot_energy_flux
Display the energy flux distribution.
plot_trace
Create a trace plot of row number versus value.
sample_energy_flux
Return the energy flux distribution of a model.
sample_flux
Return the flux distribution of a model.
sample_photon_flux
Return the photon flux distribution of a model.
Examples
Plot the photon flux distribution for the range 0.5 to 7 for the default data set:
>>> plot_photon_flux(0.5, 7, num=1000)
Overplot the 0.5 to 2 photon flux distribution from the “core” data set on top of the values from the “jet” data set:
>>> plot_photon_flux(0.5, 2, id="jet", num=1000)
>>> plot_photon_flux(0.5, 2, id="core", num=1000, overplot=True)
Overplot the flux distribution for just the pl component (which must be part of the source expression) on top of the full model. If the full model was xsphabs.gal * powlaw1d.pl then this will compare the unabsorbed to absorbed flux distributions:
>>> plot_photon_flux(0.5, 2, num=1000, bins=20)
>>> plot_photon_flux(0.5, 2, model=pl, num=1000, bins=20)
If you have multiple datasets loaded, each with a model, then all datasets will be used to calculate the errors when the id parameter is not set. A single dataset can be used by specifying a dataset (in this example the overplot is just with dataset 1):
>>> mdl = xsphabs.gal * xsapec.src
>>> set_source(1, mdl)
>>> set_source(2, mdl)
...
>>> plot_photon_flux(0.5, 2, model=src, num=1000, bins=20)
>>> plot_photon_flux(0.5, 2, model=src, num=1000, bins=20,
...                  id=1, overplot=True)
If you have multiple datasets then you can use the otherids argument to specify exactly what set of data is used:
>>> plot_photon_flux(0.5, 2, model=src, num=1000, bins=20,
...                  id=1, otherids=(2, 3, 4))
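The three clip strategies can be illustrated with NumPy. This is only a sketch of the behaviour described above, not Sherpa internals; the parameter draws and the limit values below are invented for the example:

```python
import numpy as np

# Hypothetical normal-distribution draws for a single parameter.
draws = np.array([-2.0, 0.5, 3.7, 12.0])
hard_lo, hard_hi = -1.0, 11.0   # assumed hard limits
soft_lo, soft_hi = 0.0, 10.0    # assumed soft limits

clipped_hard = np.clip(draws, hard_lo, hard_hi)  # clip='hard' (default)
clipped_soft = np.clip(draws, soft_lo, soft_hi)  # clip='soft'
clipped_none = draws.copy()                      # clip='none'
```

Values outside the chosen limits are pinned to those limits ('hard' or 'soft'), while 'none' leaves the draws untouched.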
- plot_psf(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the 1D PSF model applied to a data set.
The plot_kernel function shows the data used to convolve the model.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_psf. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If a PSF model has not been created for the data set.
See also
get_psf_plot
Return the data used by plot_psf.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_kernel
Plot the 1D kernel applied to a data set.
set_psf
Add a PSF model to a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Examples
Create a model (a step function) that is convolved by a gaussian, and display the PSF:
>>> dataspace1d(1, 10, step=1, dstype=Data1D)
>>> set_model(steplo1d.stp)
>>> stp.xcut = 4.4
>>> load_psf('psf1', gauss1d.gline)
>>> set_psf('psf1')
>>> gline.fwhm = 1.2
>>> plot_psf()
- plot_pvalue(null_model, alt_model, conv_model=None, id=1, otherids=(), num=500, bins=25, numcores=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Compute and plot a histogram of likelihood ratios by simulating data.
Compare the likelihood of the null model to that of an alternative model by running a number of simulations and comparing the likelihoods of the two models against the observed data. The fit statistic must be set to a likelihood-based method, such as "cash" or "cstat". Screen output is created as well as the plot; these values can be retrieved with get_pvalue_results.
- Parameters
null_model – The model expression for the null hypothesis.
alt_model – The model expression for the alternative hypothesis.
conv_model (optional) – An expression used to modify the model so that it can be compared to the data (e.g. a PSF or PHA response).
id (int or str, optional) – The data set that provides the data. The default is 1.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
num (int, optional) – The number of simulations to run. The default is 500.
bins (int, optional) – The number of bins to use to create the histogram. The default is 25.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_pvalue. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
TypeError – An invalid statistic.
See also
get_pvalue_plot
Return the data used by plot_pvalue.
get_pvalue_results
Return the data calculated by the last plot_pvalue call.
Notes
Each simulation involves creating a data set using the observed data simulated with Poisson noise.
For the likelihood ratio test to be valid, the following conditions must hold:
The null model is nested within the alternative model.
The extra parameters of the alternative model have Gaussian (normal) distributions that are not truncated by the boundaries of the parameter spaces.
Examples
Use the likelihood ratio to see if the data in data set 1 has a statistically-significant gaussian component:
>>> create_model_component('powlaw1d', 'pl')
>>> create_model_component('gauss1d', 'gline')
>>> plot_pvalue(pl, pl + gline)
Use 1000 simulations and use the data from data sets ‘core’, ‘jet1’, and ‘jet2’:
>>> mdl1 = pl
>>> mdl2 = pl + gline
>>> plot_pvalue(mdl1, mdl2, id='core', otherids=('jet1', 'jet2'),
...             num=1000)
Apply a convolution to the models before fitting:
>>> rsp = get_psf()
>>> plot_pvalue(mdl1, mdl2, conv_model=rsp)
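The p-value behind this plot is the fraction of simulations whose likelihood-ratio statistic is at least as extreme as the observed one. A minimal sketch of that final step, using random draws as stand-ins for the simulated statistics (these are not real fit results, and the observed value is invented):

```python
import numpy as np

rng = np.random.default_rng(42)
# Stand-ins for the likelihood-ratio statistics of the simulated
# data sets; plot_pvalue obtains these by refitting Poisson-noise
# realizations of the data with both models.
sim_lr = rng.chisquare(df=3, size=500)
obs_lr = 9.5   # stand-in for the observed likelihood ratio

# p-value: fraction of simulations at least as extreme as observed.
pvalue = np.mean(sim_lr >= obs_lr)
```

A small p-value indicates the improvement from the alternative model is unlikely to arise by chance under the null model.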
- plot_ratio(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the ratio of data to model for a data set.
This function displays the ratio data / model for a data set.
Changed in version 4.12.0: The Y axis is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_ratio. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_ratio_plot
Return the data used by plot_ratio.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_chisqr
Plot the chi-squared value for each point in a data set.
plot_delchi
Plot the ratio of residuals to error for a data set.
plot_resid
Plot the residuals (data - model) for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
Notes
The additional arguments supported by plot_ratio are the same as the keywords of the dictionary returned by get_data_plot_prefs.
The ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the ratio of data to model for the default data set:
>>> plot_ratio()
Overplot the ratios from the ‘core’ data set on those from the ‘jet’ dataset:
>>> plot_ratio('jet')
>>> plot_ratio('core', overplot=True)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_data_plot_prefs. The following sets the X axis to a log scale and draws a solid line between the points:
>>> plot_ratio(xlog=True, linestyle='solid')
- plot_resid(id=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the residuals (data - model) for a data set.
This function displays the residuals (data - model) for a data set.
Changed in version 4.12.0: The Y axis is now always drawn using a linear scale.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_resid. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
- Raises
sherpa.utils.err.IdentifierErr – If the data set does not exist or a source expression has not been set.
See also
get_resid_plot
Return the data used by plot_resid.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_chisqr
Plot the chi-squared value for each point in a data set.
plot_delchi
Plot the ratio of residuals to error for a data set.
plot_ratio
Plot the ratio of data to model for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
Notes
The additional arguments supported by plot_resid are the same as the keywords of the dictionary returned by get_data_plot_prefs.
The ylog setting is ignored, and the Y axis is drawn using a linear scale.
Examples
Plot the residuals for the default data set:
>>> plot_resid()
Overplot the residuals from the ‘core’ data set on those from the ‘jet’ dataset:
>>> plot_resid('jet')
>>> plot_resid('core', overplot=True)
Add the residuals to the plot of the data, for the default data set:
>>> plot_data()
>>> plot_resid(overplot=True)
Additional arguments can be given that are passed to the plot backend: the supported arguments match the keywords of the dictionary returned by get_data_plot_prefs. The following sets the cap length for the ends of the error bars:
>>> plot_resid(capsize=5)
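The quantities shown by plot_resid and the related plot_delchi are easy to state directly. A sketch with invented data, model, and error values (not from a real fit):

```python
import numpy as np

# Invented data and model values, standing in for a fitted data set.
data = np.array([10.0, 12.0, 9.0, 15.0])
model = np.array([11.0, 11.5, 10.0, 14.0])
staterr = np.array([2.0, 2.0, 2.0, 2.0])  # assumed statistical errors

resid = data - model        # what plot_resid displays
delchi = resid / staterr    # what plot_delchi displays
```

plot_ratio would instead display data / model for the same points.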
- plot_scatter(x, y, name='(x,y)', xlabel='x', ylabel='y', replot=False, overplot=False, clearwindow=True, **kwargs)
Create a scatter plot.
- Parameters
x (array) – The values to plot on the X axis.
y (array) – The values to plot on the Y axis. This must match the size of the x array.
name (str, optional) – The plot title.
xlabel (str, optional) – The label for the X axis.
ylabel (str, optional) – The label for the Y axis.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_scatter. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_scatter_plot
Return the data used to plot the last scatter plot.
plot_trace
Create a trace plot of row number versus value.
Examples
Plot the X and Y points:
>>> mu, sigma, n = 100, 15, 500
>>> x = mu + sigma * np.random.randn(n)
>>> y = mu + sigma * np.random.randn(n)
>>> plot_scatter(x, y)
Change the axis labels and the plot title:
>>> plot_scatter(nh, kt, xlabel='nH', ylabel='kT', name='Simulations')
- plot_source(id=None, lo=None, hi=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot the source expression for a data set.
This function plots the source model for a data set. This does not include any instrument response (e.g. a convolution created by set_psf, or the ARF and RMF automatically created for a PHA data set).
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
lo (number, optional) – The low value to plot (only used for PHA data sets).
hi (number, optional) – The high value to plot (only used for PHA data sets).
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_source. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_source_plot
Return the data used by plot_source.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_model
Plot the model for a data set.
set_analysis
Set the units used when fitting and displaying spectral data.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Examples
Plot the unconvolved source model for the default data set:
>>> plot_source()
Overplot the source model for data set 2 on data set 1:
>>> plot_source(1)
>>> plot_source(2, overplot=True)
Restrict the plot to values between 0.5 and 7 for the independent axis:
>>> plot_source(lo=0.5, hi=7)
For a PHA data set, the units on both the X and Y axes of the plot are controlled by the set_analysis command. In this case the Y axis will be in units of photons/s/cm^2 and the X axis in keV:
>>> set_analysis('energy', factor=1)
>>> plot_source()
- plot_source_component(id, model=None, replot=False, overplot=False, clearwindow=True, **kwargs)
Plot a component of the source expression for a data set.
This function evaluates and plots a component of the model expression for a data set, without any instrument response. Use plot_model_component to include any response.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.model.Model instance) – The component to display (the name, if a string).
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_source_component. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_source_component_plot
Return the data used by plot_source_component.
get_default_id
Return the default data set identifier.
plot
Create one or more plot types.
plot_model_component
Plot a component of the model for a data set.
plot_source
Plot the source expression for a data set.
set_xlinear
New plots will display a linear X axis.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
The additional keyword arguments match the keywords of the dictionary returned by get_model_plot_prefs.
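The argument convention described above can be illustrated with a small standalone sketch. This is a hypothetical re-implementation of the dispatch rule for illustration only, not Sherpa's actual code; the `Model` stand-in and `parse_args` helper are invented names.

```python
class Model:
    """Stand-in for sherpa.models.model.Model (illustrative only)."""
    def __init__(self, name):
        self.name = name

def parse_args(*args):
    """Interpret positional arguments as (id, model) per the convention above."""
    if len(args) == 1:
        # A single un-named argument is taken to be the model parameter.
        return None, args[0]
    if len(args) == 2:
        # Two un-named arguments are the id and model, respectively.
        return args[0], args[1]
    raise TypeError("expected one or two positional arguments")

pl = Model("pl")
default_call = parse_args(pl)       # model only: uses the default data set
explicit_call = parse_args(2, pl)   # id and model
```

So `plot_source_component(pl)` and `plot_source_component(2, pl)` both work, with the data-set identifier defaulting when omitted.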
Examples
Overplot the pl component of the source expression for the default data set:
>>> plot_source()
>>> plot_source_component(pl, overplot=True)
- plot_trace(points, name='x', xlabel='x', replot=False, overplot=False, clearwindow=True, **kwargs)[source] [edit on github]
Create a trace plot of row number versus value.
Display a plot of the points array values (Y axis) versus row number (X axis). This can be useful to view how a value changes, such as the value of a parameter returned by get_draws.
- Parameters
points (array) – The values to plot on the Y axis.
name (str, optional) – The label to use on the Y axis and as part of the plot title.
xlabel (str, optional) – The label to use on the X axis.
replot (bool, optional) – Set to True to use the values calculated by the last call to plot_trace. The default is False.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
clearwindow (bool, optional) – Should the existing plot area be cleared before creating this new plot (e.g. for multi-panel plots)?
See also
get_draws
Run the pyBLoCXS MCMC algorithm.
get_trace_plot
Return the data used to plot the last trace.
plot_cdf
Plot the cumulative density function of an array.
plot_pdf
Plot the probability density function of an array.
plot_scatter
Create a scatter plot.
Examples
Plot the trace of the 500 elements in the x array:
>>> mu, sigma = 100, 15
>>> x = mu + sigma * np.random.randn(500)
>>> plot_trace(x)
Use “ampl” as the Y axis label:
>>> plot_trace(ampl, name='ampl')
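A trace is often inspected for convergence of a chain, for which a running mean is a simple companion diagnostic. The following NumPy-only sketch builds a simulated trace and its running mean; the `plot_trace` calls require a Sherpa session, so they are shown only as comments.

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma = 100, 15
# Simulated draws standing in for, e.g., a parameter column from get_draws.
x = mu + sigma * rng.standard_normal(500)

# Running mean of the trace: a simple convergence check to pair with the plot.
running_mean = np.cumsum(x) / np.arange(1, x.size + 1)

# In a Sherpa session one could then display both:
# plot_trace(x, name='ampl')
# plot_trace(running_mean, name='running mean of ampl', overplot=True)
```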
- proj(*args)[source] [edit on github]
Estimate parameter confidence intervals using the projection method.
The proj command computes confidence interval bounds for the specified model parameters in the dataset. A given parameter's value is varied along a grid of values while the values of all the other thawed parameters are allowed to float to new best-fit values. The get_proj and set_proj_opt commands can be used to configure the error analysis; an example being changing the 'sigma' field to 1.6 (i.e. 90%) from its default value of 1. The output from the routine is displayed on screen, and the get_proj_results routine can be used to retrieve the results.
- Parameters
id (int or str, optional) – The data set, or sets, that provides the data. If not given then all data sets with an associated model are used simultaneously.
parameter (sherpa.models.parameter.Parameter, optional) – The default is to calculate the confidence limits on all thawed parameters of the model, or models, for all the data sets. The evaluation can be restricted by listing the parameters to use. Note that each parameter should be given as a separate argument, rather than as a list. For example proj(g1.ampl, g1.sigma).
model (sherpa.models.model.Model, optional) – Select all the thawed parameters in the model.
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
get_proj
Return the confidence-interval estimation object.
get_proj_results
Return the results of the last proj run.
int_proj
Plot the statistic value as a single parameter is varied.
reg_proj
Plot the statistic value as two parameters are varied.
set_proj_opt
Set an option of the proj estimation object.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with multiple ids or parameters values, the order is unimportant, since any argument that is not defined as a model parameter is assumed to be a data id.
The proj command is different to covar, in that all other thawed parameters are allowed to float to new best-fit values, instead of being fixed to the initial best-fit values. While proj is more general (e.g. allowing the user to examine the parameter space away from the best-fit point), it is in the strictest sense no more accurate than covar for determining confidence intervals.
An estimated confidence interval is accurate if and only if:
the chi^2 or logL surface in parameter space is approximately shaped like a multi-dimensional paraboloid, and
the best-fit point is sufficiently far from parameter space boundaries.
One may determine if these conditions hold, for example, by plotting the fit statistic as a function of each parameter's values (the curve should approximate a parabola) and by examining contour plots of the fit statistics made by varying the values of two parameters at a time (the contours should be elliptical, and parameter space boundaries should be no closer than approximately 3 sigma from the best-fit point). The int_proj and reg_proj commands may be used for this.
If either of the conditions given above does not hold, then the output from proj may be meaningless except to give an idea of the scale of the confidence intervals. To accurately determine the confidence intervals, one would have to reparameterize the model, use Monte Carlo simulations, or Bayesian methods.
As the calculation can be computer intensive, the default behavior is to use all available CPU cores to speed up the analysis. This can be changed by varying the numcores option - or setting parallel to False - either with set_proj_opt or get_proj.
As proj estimates intervals for each parameter independently, the relationship between sigma and the change in statistic value delta_S can be particularly simple: sigma equals the square root of delta_S for statistics sampled from the chi-square distribution and for the Cash statistic, and is approximately equal to the square root of (2 * delta_S) for fits based on the general log-likelihood. The default setting is to calculate the one-sigma interval, which can be changed with the sigma option to set_proj_opt or get_proj.
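The sigma/delta_S relation quoted above can be checked numerically for an idealized chi-square statistic. This NumPy-only sketch (no Sherpa call) scans a one-parameter parabolic statistic and recovers the one-sigma interval where delta_S rises by 1; the best-fit value and error are assumed for illustration.

```python
import numpy as np

best, err = 2.0, 0.3            # assumed best-fit value and true 1-sigma error
p = np.linspace(1.0, 3.0, 2001)
# An idealized parabolic chi-square surface: delta_S = ((p - best) / err)**2.
stat = ((p - best) / err) ** 2

# sigma = sqrt(delta_S): the one-sigma bounds sit where delta_S reaches 1.
inside = p[stat <= 1.0]
lo, hi = inside.min(), inside.max()   # expected near best - err and best + err
```

The recovered interval (roughly 1.7 to 2.3) matches the assumed one-sigma error, illustrating why delta_S = 1 defines the default proj interval for chi-square-like statistics.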
- projection(*args) [edit on github]
Estimate parameter confidence intervals using the projection method.
The projection command is an alias for proj; see proj above for the full description, parameters, and notes.
- reg_proj(par0, par1, id=None, otherids=None, replot=False, fast=True, min=None, max=None, nloop=(10, 10), delv=None, fac=4, log=(False, False), sigma=(1, 2, 3), levels=None, numcores=None, overplot=False)[source] [edit on github]
Plot the statistic value as two parameters are varied.
Create a confidence plot of the fit statistic as a function of the two parameter values. Dashed lines are added to indicate the current statistic value and the parameter values at this point. The parameter values are varied over a grid of points and the free parameters re-fit. It is expected that this is run after a successful fit, so that the parameter values are at the best-fit location.
- Parameters
par0 – The parameter to plot on the X axis.
par1 – The parameter to plot on the Y axis.
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
replot (bool, optional) – Set to True to use the values calculated by the last call to reg_proj. The default is False.
fast (bool, optional) – If True then the fit optimization used may be changed from the current setting (only for the error analysis) to use a faster optimization method. The default is True.
min (pair of numbers, optional) – The minimum parameter values for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
max (pair of numbers, optional) – The maximum parameter values for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
nloop (pair of int, optional) – The number of steps to use. This is used when delv is set to None.
delv (pair of numbers, optional) – The step size for the parameter. Setting this over-rides the nloop parameter. The default is None.
fac (number, optional) – When min or max is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).
log (pair of bool, optional) – Should the step size be logarithmically spaced? The default (False) is to use a linear grid.
sigma (sequence of number, optional) – The levels at which to draw the contours. The units are the change in significance relative to the starting value, in units of sigma.
levels (sequence of number, optional) – The numeric values at which to draw the contours. This over-rides the sigma parameter, if set (the default is None).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
get_reg_proj
Return the interval-projection object.
int_proj
Calculate and plot the fit statistic versus fit parameter value.
reg_unc
Plot the statistic value as two parameters are varied.
Notes
The difference to reg_unc is that at each step, a fit is made to the remaining thawed parameters in the source model. This makes the result a more-accurate rendering of the projected shape of the hypersurface formed by the statistic, but the run-time is longer than that of reg_unc, which does not vary any other parameter. If there are no free parameters in the model, other than the parameters being plotted, then the results will be the same.
Examples
Vary the xpos and ypos parameters of the gsrc model component for all data sets with a source expression.
>>> reg_proj(gsrc.xpos, gsrc.ypos)
Use only the data in data set 1:
>>> reg_proj(gsrc.xpos, gsrc.ypos, id=1)
Only display the one- and three-sigma contours:
>>> reg_proj(gsrc.xpos, gsrc.ypos, sigma=(1, 3))
Display contours at values of 5, 10, and 20 more than the statistic value of the source model for data set 1:
>>> s0 = calc_stat(id=1)
>>> lvls = s0 + np.asarray([5, 10, 20])
>>> reg_proj(gsrc.xpos, gsrc.ypos, levels=lvls, id=1)
Increase the limits of the plot and the number of steps along each axis:
>>> reg_proj(gsrc.xpos, gsrc.ypos, id=1, fac=6, nloop=(41, 41))
Compare the ampl parameters of the g and b model components, for data sets 'core' and 'jet', over the given ranges:
>>> reg_proj(g.ampl, b.ampl, min=(0, 1e-4), max=(0.2, 5e-4),
...          nloop=(51, 51), id='core', otherids=['jet'])
- reg_unc(par0, par1, id=None, otherids=None, replot=False, min=None, max=None, nloop=(10, 10), delv=None, fac=4, log=(False, False), sigma=(1, 2, 3), levels=None, numcores=None, overplot=False)[source] [edit on github]
Plot the statistic value as two parameters are varied.
Create a confidence plot of the fit statistic as a function of the two parameter values. Dashed lines are added to indicate the current statistic value and the parameter values at this point. The parameter values are varied over a grid of points and the statistic evaluated while holding the other parameters fixed. It is expected that this is run after a successful fit, so that the parameter values are at the best-fit location.
- Parameters
par0 – The parameter to plot on the X axis.
par1 – The parameter to plot on the Y axis.
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
replot (bool, optional) – Set to True to use the values calculated by the last call to reg_unc. The default is False.
min (pair of numbers, optional) – The minimum parameter values for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
max (pair of numbers, optional) – The maximum parameter values for the calculation. The default value of None means that the limit is calculated from the covariance, using the fac value.
nloop (pair of int, optional) – The number of steps to use. This is used when delv is set to None.
delv (pair of numbers, optional) – The step size for the parameter. Setting this over-rides the nloop parameter. The default is None.
fac (number, optional) – When min or max is not given, multiply the covariance of the parameter by this value to calculate the limit (which is then added or subtracted to the parameter value, as required).
log (pair of bool, optional) – Should the step size be logarithmically spaced? The default (False) is to use a linear grid.
sigma (sequence of number, optional) – The levels at which to draw the contours. The units are the change in significance relative to the starting value, in units of sigma.
levels (sequence of number, optional) – The numeric values at which to draw the contours. This over-rides the sigma parameter, if set (the default is None).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
overplot (bool, optional) – If True then add the data to an existing plot, otherwise create a new plot. The default is False.
See also
conf
Estimate parameter confidence intervals using the confidence method.
covar
Estimate the confidence intervals using the covariance method.
get_reg_unc
Return the interval-uncertainty object.
int_unc
Calculate and plot the fit statistic versus fit parameter value.
reg_proj
Plot the statistic value as two parameters are varied.
Notes
The difference to reg_proj is that at each step only the pair of parameters are varied, while all the other parameters remain at their starting values. This makes the result a less-accurate rendering of the projected shape of the hypersurface formed by the statistic, but the run-time is likely shorter than that of reg_proj, which fits the model to the remaining thawed parameters at each step. If there are no free parameters in the model, other than the parameters being plotted, then the results will be the same.
Examples
Vary the xpos and ypos parameters of the gsrc model component for all data sets with a source expression.
>>> reg_unc(gsrc.xpos, gsrc.ypos)
Use only the data in data set 1:
>>> reg_unc(gsrc.xpos, gsrc.ypos, id=1)
Only display the one- and three-sigma contours:
>>> reg_unc(gsrc.xpos, gsrc.ypos, sigma=(1, 3))
Display contours at values of 5, 10, and 20 more than the statistic value of the source model for data set 1:
>>> s0 = calc_stat(id=1)
>>> lvls = s0 + np.asarray([5, 10, 20])
>>> reg_unc(gsrc.xpos, gsrc.ypos, levels=lvls, id=1)
Increase the limits of the plot and the number of steps along each axis:
>>> reg_unc(gsrc.xpos, gsrc.ypos, id=1, fac=6, nloop=(41, 41))
Compare the ampl parameters of the g and b model components, for data sets 'core' and 'jet', over the given ranges:
>>> reg_unc(g.ampl, b.ampl, min=(0, 1e-4), max=(0.2, 5e-4),
...         nloop=(51, 51), id='core', otherids=['jet'])
Overplot the results on the reg_proj plot:
>>> reg_proj(s1.c0, s2.xpos)
>>> reg_unc(s1.c0, s2.xpos, overplot=True)
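The grid evaluation that reg_unc performs, varying two parameters while all others stay at their best-fit values, can be sketched with a toy statistic. This is a NumPy-only illustration with an invented `stat` function, not a Sherpa call; it also shows the delta_S = sigma**2 contour levels used for the default sigma=(1, 2, 3).

```python
import numpy as np

def stat(x, y, other=1.0):
    """Toy chi-square-like statistic: two parameters of interest plus one frozen."""
    return (x - 1.0) ** 2 + (y + 0.5) ** 2 + (other - 1.0) ** 2

# reg_unc-style scan: vary (x, y) on a grid, hold `other` at its best-fit value.
xs = np.linspace(0.0, 2.0, 41)
ys = np.linspace(-1.5, 0.5, 41)
X, Y = np.meshgrid(xs, ys)
surface = stat(X, Y)

# Contour levels for 1, 2, 3 sigma above the minimum (delta_S = sigma**2).
smin = surface.min()
levels = smin + np.array([1.0, 4.0, 9.0])
```

A reg_proj-style scan would instead re-minimize over `other` at every grid point, which only matters when `other` is correlated with the plotted parameters.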
- resample_data(id=None, niter=1000, seed=None)[source] [edit on github]
Resample data with asymmetric error bars.
The function performs a parametric bootstrap, assuming a skewed normal distribution centered on the observed data point with the variance given by the low and high measurement errors. It simulates niter realizations of the data and fits each realization with the assumed model to obtain the best-fit parameters. The best-fit parameters for each realization are returned, and the average and standard deviation for each parameter are displayed.
New in version 4.12.2: The samples and statistic keys were added to the return value and the parameter values are returned as NumPy arrays rather than as lists.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
niter (int, optional) – The number of iterations to use. The default is 1000.
seed (int, optional) – The seed for the random-number generator.
- Returns
sampled – The keys are statistic, which contains the best-fit statistic value for each iteration, samples, which contains the resampled data used in the fits as a niter by ndata array, and the free parameters in the fit, containing a NumPy array containing the fit parameter for each iteration (of size niter).
- Return type
dict
See also
load_ascii_with_errors
Load an ASCII file with asymmetric errors as a data set.
Examples
Account for asymmetric errors when calculating parameter uncertainties:
>>> load_ascii_with_errors(1, 'test.dat')
>>> set_model(polynom1d.p0)
>>> thaw(p0.c1)
>>> fit()
Dataset               = 1
Method                = levmar
Statistic             = leastsq
Initial fit statistic = 4322.56
Final fit statistic   = 247.768 at function evaluation 6
Data points           = 61
Degrees of freedom    = 59
Change in statistic   = 4074.79
   p0.c0          3.2661       +/- 0.193009
   p0.c1          2162.19      +/- 65.8445
>>> result = resample_data(1, niter=10)
p0.c0 : avg = 4.159973865314249 , std = 1.0575403309799554
p0.c1 : avg = 1943.5489865678633 , std = 268.64478808013547
>>> print(result['p0.c0'])
[5.856479033432613, 3.8252624107243465, ... 2.8704270612985345]
>>> print(result['p0.c1'])
[1510.049972062868, 1995.4742750432902, ... 2235.9753113309894]
Display the PDF of the parameter values of the p0.c0 component from a run with 5000 iterations:
>>> sample = resample_data(1, 5000)
p0.c0 : avg = 3.966543284267264 , std = 0.9104639711036427
p0.c1 : avg = 1988.8417667057342 , std = 220.21903089622705
>>> plot_pdf(sample['p0.c0'], bins=40)
The samples used for the analysis are returned as the samples key (as a 2D NumPy array of size number of iterations by number of data points), that can be used if further analysis is desired. In this case, the distribution of the first bin is shown as a CDF:
>>> sample = resample_data(1, 5000)
>>> samples = sample['samples']
>>> plot_cdf(samples[:, 0])
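The parametric bootstrap described above can be sketched for a single measurement with asymmetric errors: draw a sign, then sample a half-normal with the matching low or high error as its width, and summarize the realizations. This is a simplified NumPy-only illustration of the idea, not the resample_data implementation; the datum and errors are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
value, err_lo, err_hi = 10.0, 1.0, 3.0   # hypothetical datum with asymmetric errors
niter = 5000

# Two-sided ("skewed") normal: pick a side, then use that side's error as sigma.
signs = rng.random(niter) < 0.5
draws = np.where(
    signs,
    value - np.abs(rng.standard_normal(niter)) * err_lo,
    value + np.abs(rng.standard_normal(niter)) * err_hi,
)

# resample_data-style summary of the realizations.
avg, std = draws.mean(), draws.std()
```

Because err_hi exceeds err_lo here, the average of the realizations sits above the measured value, which is exactly the skew the bootstrap is meant to capture.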
- reset(model=None, id=None)[source] [edit on github]
Reset the model parameters to their default settings.
The reset function restores the parameter values to the default value set by guess or to the user-defined default. If the user set initial model values or soft limits - e.g. either with set_par or by using parameter prompting via paramprompt - then reset will restore these values and limits even after guess or fit has been called.
- Parameters
model (optional) – The model component or expression to reset. The default is to use all source expressions.
id (int or string, optional) – The data set to use. The default is to use all data sets with a source expression.
See also
fit
Fit one or more data sets.
guess
Set model parameters to values matching the data.
paramprompt
Control how parameter values are set.
set_par
Set the value, limits, or behavior of a model parameter.
Examples
The following examples assume that the source model has been set using:
>>> set_source(powlaw1d.pl * xsphabs.gal)
Fit the model and then reset the values of both components (pl and gal):
>>> fit()
>>> reset()
Reset just the parameters of the pl model component:
>>> reset(pl)
Reset all the components of the source expression for data set 2.
>>> reset(get_source(2))
- restore(filename='sherpa.save')[source] [edit on github]
Load in a Sherpa session from a file.
- Parameters
filename (str, optional) – The name of the file to read the results from. The default is ‘sherpa.save’.
- Raises
IOError – If filename does not exist.
Notes
The input to restore must have been created with the save command. This is a binary file, which may not be portable between versions of Sherpa, but is platform independent. A warning message may be created if a file saved by an older (or newer) version of Sherpa is loaded. An example of such a message is:
WARNING: Could not determine whether the model is discrete.
This probably means that you have restored a session saved with a
previous version of Sherpa. Falling back to assuming that the model
is continuous.
Examples
Load in the Sherpa session from ‘sherpa.save’.
>>> restore()
Load in the session from the given file:
>>> restore('/data/m31/setup.sherpa')
- sample_energy_flux(lo=None, hi=None, id=None, num=1, scales=None, correlated=False, numcores=None, bkg_id=None, model=None, otherids=(), clip='hard')[source] [edit on github]
Return the energy flux distribution of a model.
For each iteration, draw the parameter values of the model from a normal distribution, evaluate the model, and sum the model over the given range (the flux). The return array contains the flux and parameter values for each iteration. The units for the flux are as returned by calc_energy_flux.
Changed in version 4.12.2: The model, otherids, and clip parameters were added and the return value has an extra column.
- Parameters
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (number, optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string, optional) – The identifier of the data set to use. If None, the default value, then all datasets with associated models are used to calculate the errors and the model evaluation is done using the default dataset.
num (int, optional) – The number of samples to create. The default is 1.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The size and shape of the array depends on the number of free parameters in the fit (n) and the value of the correlated parameter. When the parameter is True, scales must be given the covariance matrix for the free parameters (a n by n matrix that matches the parameter ordering used by Sherpa). For un-correlated parameters the covariance matrix can be used, or a one-dimensional array of n elements can be used, giving the width (specified as the sigma value of a normal distribution) for each parameter (e.g. the square root of the diagonal elements of the covariance matrix). If the scales parameter is not given then the covariance matrix is evaluated for the current model and best-fit parameters.
correlated (bool, optional) – Should the correlation between the parameters be included when sampling the parameters? If not, then each parameter is sampled from independent distributions. In both cases a normal distribution is used.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
model (model, optional) – The model to integrate. If left as None then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples. The model must be part of the source expression.
otherids (sequence of integer and string ids, optional) – The list of other datasets that should be included when calculating the errors to draw values from.
clip ({'hard', 'soft', 'none'}, optional) – What clipping strategy should be applied to the sampled parameters. The default (‘hard’) is to fix values at their hard limits if they exceed them. A value of ‘soft’ uses the soft limits instead, and ‘none’ applies no clipping. The last column in the returned arrays indicates if the row had any clipped parameters (even when clip is set to ‘none’).
- Returns
The return array has the shape (num, N+2), where N is the number of free parameters in the fit and num is the num parameter. The rows of this array contain the flux value, as calculated by calc_energy_flux, followed by the values of the thawed parameters used for that iteration, and then a flag column indicating if the parameters were clipped (1) or not (0). The order of the parameters matches the data returned by get_fit_results.
- Return type
vals
See also
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
covar
Estimate the confidence intervals using the covariance method.
plot_cdf
Plot the cumulative density function of an array.
plot_pdf
Plot the probability density function of an array.
plot_energy_flux
Display the energy flux distribution.
plot_photon_flux
Display the photon flux distribution.
plot_trace
Create a trace plot of row number versus value.
sample_photon_flux
Return the flux distribution of a model.
sample_flux
Return the flux distribution of a model.
Notes
There are two ways to use this function to calculate fluxes from multiple sources. The first is to leave the id argument as None, in which case all available datasets will be used. Alternatively, the id and otherids arguments can be set to list the exact datasets to use, such as id=1, otherids=(2, 3, 4).
The returned value contains all free parameters in the fit, even if they are not included in the model argument (e.g. when calculating an unabsorbed flux).
Examples
Calculate the energy flux distribution for the range 0.5 to 7, and plot up the resulting flux distribution (as a cumulative distribution):
>>> vals = sample_energy_flux(0.5, 7, num=1000)
>>> plot_cdf(vals[:, 0], name='flux')
Repeat the above, but allowing the parameters to be correlated, and then calculate the 5, 50, and 95 percent quantiles of the energy flux distribution:
>>> cvals = sample_energy_flux(0.5, 7, num=1000, correlated=True)
>>> np.percentile(cvals[:, 0], [5, 50, 95])
The energy flux of a component (or sub-set of components) can be calculated using the model argument. For the following case, an absorbed power-law was used to fit the data - xsphabs.gal * powerlaw.pl - and then the flux of just the power-law component is calculated. Note that the returned array has columns 'flux', 'gal.nh', 'pl.gamma', and 'pl.ampl' (that is, flux and then the free parameters in the full model).
>>> vals = sample_energy_flux(0.5, 7, model=pl, num=1000, correlated=True)
Calculate the 2-10 keV flux for the pl model using a joint fit to the datasets 1, 2, 3, and 4:
>>> vals = sample_energy_flux(2, 10, model=pl, id=1, otherids=(2, 3, 4),
...                           num=1000)
Use the given parameter errors for sampling the parameter distribution. The fit must have three free parameters, and each parameter is sampled independently (in this case parerrs gives the sigma values for each parameter):
>>> parerrs = [0.25, 1.22, 1.04e-4]
>>> vals = sample_energy_flux(2, 10, num=5000, scales=parerrs)
In this case the parameter errors are taken from the covariance analysis, using the parmaxes field since these are positive.
>>> covar()
>>> parerrs = get_covar_results().parmaxes
>>> vals = sample_energy_flux(0.5, 2, num=1000, scales=parerrs)
Run covariance to estimate the parameter errors and then extract the covariance matrix from the results (as the cmat variable). This matrix is then used to define the parameter widths - including correlated terms - in the flux sampling, after being increased by ten percent. This is used to calculate both the absorbed (vals1) and unabsorbed (vals2) fluxes. Both arrays have columns: flux, gal.nh, pl.gamma, and pl.ampl.
>>> set_source(xsphabs.gal * powlaw1d.pl)
>>> fit()
>>> covar()
>>> cmat = get_covar_results().extra_output
>>> vals1 = sample_energy_flux(2, 10, num=5000, correlated=True,
...                            scales=1.1 * cmat)
>>> vals2 = sample_energy_flux(2, 10, num=5000, correlated=True,
...                            model=pl, scales=1.1 * cmat)
Calculate the flux and error distribution using fits to all datasets:
>>> set_source(xsphabs.gal * xsapec.clus)
>>> set_source(2, gal * clus)
>>> set_source(3, gal * clus)
... fit the data
>>> vals = sample_energy_flux(0.5, 10, model=clus, num=10000)
Calculate the flux and error distribution using fits to an explicit set of datasets (in this case datasets 1 and 2):
>>> vals = sample_energy_flux(0.5, 10, id=1, otherids=[2],
...                           model=clus, num=10000)
Generate two sets of parameter values, where the parameter values in v1 are generated from a random distribution and then clipped to the hard limits of the parameters, and the values in v2 use the soft limits of the parameters. The last column in both v1 and v2 indicates whether the row had any clipped parameters. The flux1_filt and flux2_filt arrays indicate the energy-flux distribution after it has been filtered to remove any row with clipped parameters:
>>> v1 = sample_energy_flux(0.5, 2, num=1000)
>>> v2 = sample_energy_flux(0.5, 2, num=1000, clip='soft')
>>> flux1 = v1[:, 0]
>>> flux2 = v2[:, 0]
>>> flux1_filt = flux1[v1[:, -1] == 0]
>>> flux2_filt = flux2[v2[:, -1] == 0]
- sample_flux(modelcomponent=None, lo=None, hi=None, id=None, num=1, scales=None, correlated=False, numcores=None, bkg_id=None, Xrays=True, confidence=68)[source] [edit on github]
Return the flux distribution of a model.
For each iteration, draw the parameter values of the model from a normal distribution, filter out samples that lie outside the soft limits of the parameters, evaluate the model, and sum the model over the given range (the flux). Return the parameter values used, together with the median, upper, and lower quantiles of the flux distribution.
Changed in version 4.13.1: The
id
parameter is now used if set (previously the default dataset was always used). The screen output is now controlled by the Sherpa logging setup. The flux calculation no longer excludes samples at the parameter soft limits, as this could cause an over-estimation of the flux when a parameter is only an upper limit. The statistic value is now returned for each row, even those that were excluded from the flux calculation. The last-but-one column of the returned
vals
array now records the rows that were excluded from the flux calculation.
- Parameters
modelcomponent (optional) – The model to use. It can be a single component or a combination. If not given, then the full source expression for the data set is used.
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string, optional) – The identifier of the data set to use. The default value (
None
) means that the default identifier, as returned by
get_default_id
, is used.
num (int, optional) – The number of samples to create. The default is 1.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The form depends on the
correlated
parameter: when
True
, the array should be a symmetric positive semi-definite (N, N) array, otherwise a 1D array of length N, where N is the number of free parameters.
correlated (bool, optional) – If
True
(the default is
False
) then
scales
is the full covariance matrix, otherwise it is just a 1D array containing the variances of the parameters (the diagonal elements of the covariance matrix).
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
Xrays (bool, optional) – When
True
(the default), assume that the model has units of photon/cm^2/s, and use
calc_energy_flux
to convert to erg/cm^2/s. This should not be changed from the default value.
confidence (number, optional) – The confidence level for the upper and lower values, as a percentage (0 to 100). The default is 68, so as to return the one-sigma range.
- Returns
The fullflux and cptflux arrays contain the results for the full source model and the flux of the
modelcomponent
argument (they can be the same). They have three elements, giving the median value followed by the upper and lower bounds of the confidence interval; for the default confidence argument of 68 the last two give the one-sigma upper and lower bounds. The vals array has a shape of
(num+1, N+3)
, where
N
is the number of free parameters and num is the
num
parameter. The rows of this array contain the flux value for the iteration (for the full source model), the parameter values, a flag indicating whether any parameter in that row was clipped (and so was excluded from the flux calculation), and the statistic value for this set of parameters.
- Return type
(fullflux, cptflux, vals)
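The three-element flux arrays can be unpacked directly. A minimal sketch, using made-up numbers in place of a real sample_flux call:

```python
# Hypothetical values standing in for the cptflux array returned by
# sample_flux: (median, upper bound, lower bound) of the flux distribution.
cptflux = [7.97e-14, 8.44e-14, 7.53e-14]
f0, fhi, flo = cptflux
# Report the median flux with its asymmetric one-sigma errors.
print("Flux: {:.2e} {:+.2e} {:+.2e}".format(f0, fhi - f0, flo - f0))
```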
See also
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
covar
Estimate the confidence intervals using the confidence method.
plot_energy_flux
Display the energy flux distribution.
plot_photon_flux
Display the photon flux distribution.
sample_energy_flux
Return the energy flux distribution of a model.
sample_photon_flux
Return the photon flux distribution of a model.
Notes
Setting the Xrays parameter to False is currently unsupported.
The summary output displayed by this routine - giving the median and confidence ranges - is controlled by the standard Sherpa logging instance, and can be hidden by changing the logging to a level greater than “INFO” (e.g. with
sherpa.utils.logging.SherpaVerbosity
).
This routine cannot be used if you have used set_full_model: the calc_energy_flux routine should be used instead.
Examples
Estimate the flux distribution for the “src” component using the default data set. The parameters are assumed to be uncorrelated.
>>> set_source(xsphabs.gal * xsapec.src)
>>> fit()
>>> fflux, cflux, vals = sample_flux(src, 0.5, 2, num=1000)
original model flux = 2.88993e-14, + 1.92575e-15, - 1.81963e-15
model component flux = 7.96865e-14, + 4.65144e-15, - 4.41222e-15
>>> f0, fhi, flo = cflux
>>> print("Flux: {:.2e} {:+.2e} {:+.2e}".format(f0, fhi-f0, flo-f0))
Flux: 7.97e-14 +4.65e-15 -4.41e-15
This time the parameters are assumed to be correlated, using the covariance matrix for the fit:
>>> ans = sample_flux(src, 0.5, 2, num=1000, correlated=True)
Explicitly send in the parameter widths (sigma values), using the estimates generated by
covar
:
>>> covar()
>>> errs = get_covar_results().parmaxes
>>> ans = sample_flux(correlated=False, scales=errs, num=500)
Explicitly send in a covariance matrix:
>>> cmatrix = get_covar_results().extra_output
>>> ans = sample_flux(correlated=True, scales=cmatrix, num=500)
Run sample_flux after changing the logging level, so that the screen output from sample_flux is not displayed. We use the SherpaVerbosity function from
sherpa.utils.logging
to only change the logging level while running sample_flux:
>>> from sherpa.utils.logging import SherpaVerbosity
>>> with SherpaVerbosity('WARN'):
...     ans = sample_flux(num=1000, lo=0.5, hi=7)
- sample_photon_flux(lo=None, hi=None, id=None, num=1, scales=None, correlated=False, numcores=None, bkg_id=None, model=None, otherids=(), clip='hard')[source] [edit on github]
Return the photon flux distribution of a model.
For each iteration, draw the parameter values of the model from a normal distribution, evaluate the model, and sum the model over the given range (the flux). The return array contains the flux and parameter values for each iteration. The units for the flux are as returned by
calc_photon_flux
.
Changed in version 4.12.2: The model, otherids, and clip parameters were added and the return value has an extra column.
- Parameters
lo (number, optional) – The lower limit to use when summing up the signal. If not given then the lower value of the data grid is used.
hi (optional) – The upper limit to use when summing up the signal. If not given then the upper value of the data grid is used.
id (int or string, optional) – The identifier of the data set to use. If
None
, the default value, then all datasets with associated models are used to calculate the errors and the model evaluation is done using the default dataset.
num (int, optional) – The number of samples to create. The default is 1.
scales (array, optional) – The scales used to define the normal distributions for the parameters. The size and shape of the array depends on the number of free parameters in the fit (n) and the value of the
correlated
parameter. When the parameter is
True
, scales must be given the covariance matrix for the free parameters (an n by n matrix that matches the parameter ordering used by Sherpa). For un-correlated parameters the covariance matrix can be used, or a one-dimensional array of n elements, giving the width (specified as the sigma value of a normal distribution) for each parameter (e.g. the square root of the diagonal elements of the covariance matrix). If the scales parameter is not given then the covariance matrix is evaluated for the current model and best-fit parameters.
correlated (bool, optional) – Should the correlation between the parameters be included when sampling the parameters? If not, then each parameter is sampled from independent distributions. In both cases a normal distribution is used.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
bkg_id (int or string, optional) – The identifier of the background component to use. This should only be set when the line to be measured is in the background model.
model (model, optional) – The model to integrate. If left as
None
then the source model for the dataset will be used. This can be used to calculate the unabsorbed flux, as shown in the examples. The model must be part of the source expression.
otherids (sequence of integer and string ids, optional) – The list of other datasets that should be included when calculating the errors to draw values from.
clip ({'hard', 'soft', 'none'}, optional) – What clipping strategy should be applied to the sampled parameters. The default (‘hard’) is to fix values at their hard limits if they exceed them. A value of ‘soft’ uses the soft limits instead, and ‘none’ applies no clipping. The last column in the returned arrays indicates if the row had any clipped parameters (even when clip is set to ‘none’).
- Returns
The return array has the shape
(num, N+2)
, where
N
is the number of free parameters in the fit and num is the
num
parameter. The rows of this array contain the flux value, as calculated by
calc_photon_flux
, followed by the values of the thawed parameters used for that iteration, and then a flag column indicating if the parameters were clipped (1) or not (0). The order of the parameters matches the data returned by
get_fit_results
.
- Return type
vals
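The column layout described above can be sliced as follows. This sketch uses a hand-written array with the same (num, N+2) shape rather than a real sample_photon_flux result:

```python
import numpy as np

# Fake samples: 4 iterations, N=2 free parameters, so shape (4, N+2).
# Columns: flux, parameter 1, parameter 2, clip flag.
vals = np.array([[1.2e-12, 1.70, 1.00e-4, 0],
                 [1.4e-12, 1.80, 1.10e-4, 0],
                 [0.9e-12, 1.60, 0.90e-4, 1],   # a row with a clipped parameter
                 [1.3e-12, 1.75, 1.05e-4, 0]])
flux = vals[:, 0]           # first column holds the flux
pars = vals[:, 1:-1]        # thawed parameter values
clipped = vals[:, -1] == 1  # last column flags clipped rows
good = flux[~clipped]       # drop rows with clipped parameters
print(len(good), np.median(good))
```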
See also
calc_photon_flux
Integrate the unconvolved source model over a pass band.
calc_energy_flux
Integrate the unconvolved source model over a pass band.
covar
Estimate the confidence intervals using the confidence method.
plot_cdf
Plot the cumulative density function of an array.
plot_pdf
Plot the probability density function of an array.
plot_energy_flux
Display the energy flux distribution.
plot_photon_flux
Display the photon flux distribution.
plot_trace
Create a trace plot of row number versus value.
sample_energy_flux
Return the energy flux distribution of a model.
sample_flux
Return the flux distribution of a model.
Notes
There are two ways to use this function to calculate fluxes from multiple sources. The first is to leave the
id
argument as
None
, in which case all available datasets will be used. Alternatively, the
id
and
otherids
arguments can be set to list the exact datasets to use, such as
id=1, otherids=(2,3,4)
.
The returned value contains all free parameters in the fit, even if they are not included in the model argument (e.g. when calculating an unabsorbed flux).
Examples
Calculate the photon flux distribution for the range 0.5 to 7, and plot up the resulting flux distribution (as a cumulative distribution):
>>> vals = sample_photon_flux(0.5, 7, num=1000)
>>> plot_cdf(vals[:, 0], name='flux')
Repeat the above, but allowing the parameters to be correlated, and then calculate the 5, 50, and 95 percent quantiles of the photon flux distribution:
>>> cvals = sample_photon_flux(0.5, 7, num=1000, correlated=True)
>>> np.percentile(cvals[:, 0], [5, 50, 95])
The photon flux of a component (or sub-set of components) can be calculated using the model argument. For the following case, an absorbed power-law was used to fit the data -
xsphabs.gal * powlaw1d.pl
- and then the flux of just the power-law component is calculated. Note that the returned array has columns ‘flux’, ‘gal.nh’, ‘pl.gamma’, and ‘pl.ampl’ (that is flux and then the free parameters in the full model).
>>> vals = sample_photon_flux(0.5, 7, model=pl, num=1000, correlated=True)
Calculate the 2-10 keV flux for the pl model using a joint fit to the datasets 1, 2, 3, and 4:
>>> vals = sample_photon_flux(2, 10, model=pl, id=1, otherids=(2,3,4),
...                            num=1000)
Use the given parameter errors for sampling the parameter distribution. The fit must have three free parameters, and each parameter is sampled independently (in this case parerrs gives the sigma values for each parameter):
>>> parerrs = [0.25, 1.22, 1.04e-4]
>>> vals = sample_photon_flux(2, 10, num=5000, scales=parerrs)
In this case the parameter errors are taken from the covariance analysis, using the
parmaxes
field since these are positive.
>>> covar()
>>> parerrs = get_covar_results().parmaxes
>>> vals = sample_photon_flux(0.5, 2, num=1000, scales=parerrs)
Run covariance to estimate the parameter errors and then extract the covariance matrix from the results (as the
cmat
variable). This matrix is then used to define the parameter widths - including correlated terms - in the flux sampling, after being increased by ten percent. This is used to calculate both the absorbed (vals1
) and unabsorbed (vals2
) fluxes. Both arrays have columns: flux, gal.nh, pl.gamma, and pl.ampl.
>>> set_source(xsphabs.gal * powlaw1d.pl)
>>> fit()
>>> covar()
>>> cmat = get_covar_results().extra_output
>>> vals1 = sample_photon_flux(2, 10, num=5000, correlated=True,
...                            scales=1.1 * cmat)
>>> vals2 = sample_photon_flux(2, 10, num=5000, correlated=True,
...                            model=pl, scales=1.1 * cmat)
Calculate the flux and error distribution using fits to all datasets:
>>> set_source(xsphabs.gal * xsapec.clus)
>>> set_source(2, gal * clus)
>>> set_source(3, gal * clus)
... fit the data
>>> vals = sample_photon_flux(0.5, 10, model=clus, num=10000)
Calculate the flux and error distribution using fits to an explicit set of datasets (in this case datasets 1 and 2):
>>> vals = sample_photon_flux(0.5, 10, id=1, otherids=[2],
...                            model=clus, num=10000)
Generate two sets of parameter values, where the parameter values in v1 are generated from a random distribution and then clipped to the hard limits of the parameters, and the values in v2 use the soft limits of the parameters. The last column in both v1 and v2 indicates whether the row had any clipped parameters. The flux1_filt and flux2_filt arrays indicate the photon-flux distribution after it has been filtered to remove any row with clipped parameters:
>>> v1 = sample_photon_flux(0.5, 2, num=1000)
>>> v2 = sample_photon_flux(0.5, 2, num=1000, clip='soft')
>>> flux1 = v1[:, 0]
>>> flux2 = v2[:, 0]
>>> flux1_filt = flux1[v1[:, -1] == 0]
>>> flux2_filt = flux2[v2[:, -1] == 0]
- save(filename='sherpa.save', clobber=False)[source] [edit on github]
Save the current Sherpa session to a file.
- Parameters
- Raises
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
.
See also
Notes
The current Sherpa session is saved using the Python
pickle
module. The output is a binary file, which may not be portable between versions of Sherpa, but is platform independent, and contains all the data. This means that files created by
save
can be sent to collaborators to share results.
Examples
Save the current session to the file ‘sherpa.save’.
>>> save()
Save the current session to the file ‘bestfit.sherpa’, overwriting any existing version of the file.
>>> save('bestfit.sherpa', clobber=True)
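The pickle mechanism that save relies on can be illustrated without Sherpa; this sketch only shows a Python object surviving a binary round trip (the dictionary contents are made up):

```python
import os
import pickle
import tempfile

# A stand-in for the session state that save would serialize.
state = {"source": "xsphabs.gal * powlaw1d.pl", "statistic": "chi2gehrels"}

# pickle.dump writes a binary, platform-independent snapshot ...
with tempfile.NamedTemporaryFile(suffix=".save", delete=False) as fh:
    pickle.dump(state, fh)
    fname = fh.name

# ... and pickle.load restores an equal object, even in another process.
with open(fname, "rb") as fh:
    restored = pickle.load(fh)
os.remove(fname)
print(restored == state)
```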
- save_all(outfile=None, clobber=False)[source] [edit on github]
Save the information about the current session to a text file.
This differs from the
save
command in that the output is human readable. Three consequences are:
numeric values may not be recorded to their full precision
data sets are not included in the file
some settings and values may not be recorded.
- Parameters
outfile (str or file-like, optional) – If given, the output is written to this file, and the
clobber
parameter controls what happens if the file already exists.
outfile
can be a filename string or a file handle (or file-like object, such as
StringIO
) to write to. If not set then the standard output is used.
clobber (bool, optional) – If
outfile
is a filename, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists and
clobber
is
False
.
See also
Notes
This command will create a series of commands that restore the current Sherpa setup. It does not save the set of commands used. Not all Sherpa settings are saved. Items not fully restored include:
data created by calls to
load_arrays
, or changed from the version on disk - e.g. by calls to
sherpa.astro.ui.set_counts
.
any optional keywords to commands such as
load_data
or
load_pha
user models may not be restored correctly
only a subset of Sherpa commands are saved.
Examples
Write the current Sherpa session to the screen:
>>> save_all()
Save the session to the file ‘fit.sherpa’, overwriting it if it already exists:
>>> save_all('fit.sherpa', clobber=True)
Write the contents to a StringIO object:
>>> from io import StringIO
>>> store = StringIO()
>>> save_all(store)
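Because StringIO keeps the output in memory, the generated commands can then be read back with its getvalue method; the lines written here are only stand-ins for what save_all would emit:

```python
from io import StringIO

store = StringIO()
# Stand-ins for the restore commands save_all writes to the handle.
store.write("set_stat('chi2gehrels')\n")
store.write("set_method('levmar')\n")

# getvalue returns everything written so far as a single string.
text = store.getvalue()
print(text.splitlines()[0])
```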
- save_arrays(filename, args, fields=None, ascii=True, clobber=False)[source] [edit on github]
Write a list of arrays to a file.
- Parameters
filename (str) – The name of the file to write the array to.
args (array of arrays) – The arrays to write out.
fields (array of str) – The column names (should match the size of
args
).
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default is
True
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (
True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
.
See also
save_data
Save the data to a file.
save_image
Save the pixel values of a 2D data set to a file.
save_table
Save a data set to a file as a table.
Examples
Write the x and y columns from the default data set to the file ‘src.dat’:
>>> x = get_indep()
>>> y = get_dep()
>>> save_arrays('src.dat', [x, y])
Use the column names “r” and “surbri” for the columns:
>>> save_arrays('prof.fits', [x, y], fields=["r", "surbri"],
...             ascii=False, clobber=True)
- save_data(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the data to a file.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.
filename (str) – The name of the file to write the array to. The data is written out as an ASCII file.
bkg_id (int or str, optional) – Set if the background should be written out rather than the source (for a PHA data set).
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default is
True
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (
True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IdentifierErr – If there is no matching data set.
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
.
See also
save_arrays
Write a list of arrays to a file.
save_delchi
Save the ratio of residuals (data-model) to error to a file.
save_error
Save the errors to a file.
save_filter
Save the filter array to a file.
save_grouping
Save the grouping scheme to a file.
save_image
Save the pixel values of a 2D data set to a file.
save_pha
Save a PHA data set to a file.
save_quality
Save the quality array to a file.
save_resid
Save the residuals (data-model) to a file.
save_staterror
Save the statistical errors to a file.
save_syserror
Save the systematic errors to a file.
save_table
Save a data set to a file as a table.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
filename
parameter. If given two un-named arguments, then they are interpreted as the
id
and
filename
parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Write the default data set out to the ASCII file ‘src.dat’:
>>> save_data('src.dat')
Write the ‘rprof’ data out to the FITS file ‘prof.fits’, over-writing it if it already exists:
>>> save_data('rprof', 'prof.fits', clobber=True, ascii=False)
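The positional-argument handling described in the notes can be sketched with a hypothetical helper (this is not Sherpa's actual implementation):

```python
# Hypothetical dispatcher mimicking the documented behaviour: one un-named
# argument is taken as the filename; two are taken as the id and filename.
def split_id_filename(*args):
    if len(args) == 1:
        return None, args[0]       # use the default identifier
    if len(args) == 2:
        return args[0], args[1]    # explicit id, then filename
    raise TypeError("expected one or two positional arguments")

print(split_id_filename('src.dat'))
print(split_id_filename('rprof', 'prof.fits'))
```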
- save_delchi(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the ratio of residuals (data-model) to error to a file.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.
filename (str) – The name of the file to write the array to. The format is determined by the
ascii
argument.
bkg_id (int or str, optional) – Set if the background residuals should be written out rather than the source.
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default is
True
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IdentifierErr – If no model has been set for this data set.
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
.
See also
save_data
Save the data to a file.
save_resid
Save the residuals (data-model) to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
filename
parameter. If given two un-named arguments, then they are interpreted as the
id
and
filename
parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns
X
and
DELCHI
. The residuals array respects any filter or (for PHA files) grouping settings.
Examples
Write the residuals to the file “delchi.dat”:
>>> save_delchi('delchi.dat')
Write the residuals from the data set ‘jet’ to the FITS file “delchi.fits”:
>>> save_delchi('jet', "delchi.fits", ascii=False)
- save_error(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the errors to a file.
The total errors for a data set are the quadrature combination of the statistical and systematic errors. The systematic errors can be 0. If the statistical errors have not been set explicitly, then the values calculated by the statistic - such as
chi2gehrels
or
chi2datavar
- will be used.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.
filename (str) – The name of the file to write the array to. The format is determined by the
ascii
argument.
bkg_id (int or str, optional) – Set if the background should be written out rather than the source.
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default is
True
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
.
See also
get_error
Return the errors on the dependent axis of a data set.
load_staterror
Load the statistical errors from a file.
load_syserror
Load the systematic errors from a file.
save_data
Save the data to a file.
save_staterror
Save the statistical errors to a file.
save_syserror
Save the systematic errors to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
filename
parameter. If given two un-named arguments, then they are interpreted as the
id
and
filename
parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns
X
and
ERR
.
Examples
Write out the errors from the default data set to the file ‘errs.dat’.
>>> save_error('errs.dat')
Over-write the file if it already exists, and take the data from the data set “jet”:
>>> save_error('jet', 'err.out', clobber=True)
Write the data out in FITS format:
>>> save_error('err.fits', ascii=False)
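The quadrature combination described above can be written out directly; the error values here are illustrative:

```python
import math

staterr = [0.5, 0.4, 0.6]   # statistical errors per bin
syserr = [0.1, 0.0, 0.2]    # systematic errors per bin (may be 0)

# Total error per bin: sqrt(stat**2 + sys**2), i.e. math.hypot.
total = [math.hypot(s, y) for s, y in zip(staterr, syserr)]
print([round(t, 4) for t in total])
```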
- save_filter(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the filter array to a file.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.
filename (str) – The name of the file to write the array to. The format is determined by the
ascii
argument.
bkg_id (int or str, optional) – Set if the background should be written out rather than the source.
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default is
True
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.DataErr – If the data set has not been filtered.
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
.
See also
load_filter
Load the filter array from a file and add to a data set.
save_data
Save the data to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
filename
parameter. If given two un-named arguments, then they are interpreted as the
id
and
filename
parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns
X
and
FILTER
.
Examples
Write the filter from the default data set as an ASCII file:
>>> save_filter('filt.dat')
Write the filter for data set ‘src’ to a FITS format file:
>>> save_filter('src', 'filter.fits', ascii=False)
- save_grouping(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the grouping scheme to a file.
The output is a two-column file, containing the channel and grouping columns from the data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.
filename (str) – The name of the file to write the array to. The format is determined by the
ascii
argument.
bkg_id (int or str, optional) – Set if the grouping array should be taken from the background associated with the data set.
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default is
True
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
.
See also
get_grouping
Return the grouping array for a PHA data set.
load_quality
Load the quality array from a file and add to a PHA data set.
set_grouping
Apply a set of grouping flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
filename
parameter. If given two un-named arguments, then they are interpreted as the
id
and
filename
parameters, respectively. The remaining parameters are expected to be given as named arguments.
The column names are ‘CHANNEL’ and ‘GROUPS’.
Examples
Save the channel and grouping columns from the default data set to the file ‘group.dat’ as an ASCII file:
>>> save_grouping('group.dat')
Over-write the ‘grp.fits’ file, if it exists, and write out the grouping data from the ‘jet’ data set, as a FITS format file:
>>> save_grouping('jet', 'grp.fits', ascii=False, clobber=True)
- save_image(id, filename=None, ascii=False, clobber=False)[source] [edit on github]
Save the pixel values of a 2D data set to a file.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.
filename (str) – The name of the file to write the data to. The format is determined by the
ascii
argument.
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default is
False
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
filename
already exists and
clobber
is
False
. If the data set does not contain 2D data.
See also
save_data
Save the data to a file.
save_model
Save the model values to a file.
save_source
Save the model values to a file.
save_table
Save a data set to a file as a table.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
filename
parameter. If given two un-named arguments, then they are interpreted as the
id
and
filename
parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Write the pixel values to the file “img.fits”:
>>> save_image('img.fits')
Write the data from the data set ‘jet’ to the file “jet.img”:
>>> save_image('jet', 'jet.img', clobber=True)
- save_model(id, filename=None, bkg_id=None, ascii=False, clobber=False)[source] [edit on github]
Save the model values to a file.
The model is evaluated on the grid of the data set, including any instrument response (such as a PSF or ARF and RMF).
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.
filename (str) – The name of the file to write the array to. The format is determined by the
ascii
argument.bkg_id (int or str, optional) – Set if the background model should be written out rather than the source.
ascii (bool, optional) – If
False
then the data is written as a FITS format binary table. The default isFalse
. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IdentifierErr – If no model has been set for this data set.
sherpa.utils.err.IOErr – If
filename
already exists andclobber
isFalse
.
See also
save_data
Save the data to a file.
save_source
Save the model values to a file.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns X and MODEL for 1D data sets. The model array respects any filter or (for PHA files) grouping settings.
Examples
Write the model values for the default data set to the file “model.fits”:
>>> save_model('model.fits')
Write the model from the data set ‘jet’ to the ASCII file “model.dat”:
>>> save_model('jet', "model.dat", ascii=True)
For 2D (image) data sets, the model is written out as an image:
>>> save_model('img', 'model.img')
- save_pha(id, filename=None, bkg_id=None, ascii=False, clobber=False)[source] [edit on github]
Save a PHA data set to a file.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to write the array to. The format is determined by the ascii argument.
bkg_id (int or str, optional) – Set if the background should be written out rather than the source.
ascii (bool, optional) – If False then the data is written as a FITS format binary table. The default is False. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If filename is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain PHA data.
sherpa.utils.err.IOErr – If filename already exists and clobber is False.
See also
load_pha
Load a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Write out the PHA data from the default data set to the file ‘src.pi’:
>>> save_pha('src.pi')
Over-write the file if it already exists, and take the data from the data set “jet”:
>>> save_pha('jet', 'out.pi', clobber=True)
Write the data out as an ASCII file:
>>> save_pha('pi.dat', ascii=True)
- save_quality(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the quality array to a file.
The output is a two-column file, containing the channel and quality columns from the data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to write the array to. The format is determined by the ascii argument.
bkg_id (int or str, optional) – Set if the quality array should be taken from the background associated with the data set.
ascii (bool, optional) – If False then the data is written as a FITS format binary table. The default is True. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If filename is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If filename already exists and clobber is False.
See also
get_quality
Return the quality array for a PHA data set.
load_quality
Load the quality array from a file and add to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The column names are ‘CHANNEL’ and ‘QUALITY’.
Examples
Save the channel and quality columns from the default data set to the file ‘quality.dat’ as an ASCII file:
>>> save_quality('quality.dat')
Over-write the ‘qual.fits’ file, if it exists, and write out the quality array from the ‘jet’ data set, as a FITS format file:
>>> save_quality('jet', 'qual.fits', ascii=False, clobber=True)
- save_resid(id, filename=None, bkg_id=None, ascii=False, clobber=False)[source] [edit on github]
Save the residuals (data-model) to a file.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to write the array to. The format is determined by the ascii argument.
bkg_id (int or str, optional) – Set if the background residuals should be written out rather than the source.
ascii (bool, optional) – If False then the data is written as a FITS format binary table. The default is False. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If filename is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IdentifierErr – If no model has been set for this data set.
sherpa.utils.err.IOErr – If filename already exists and clobber is False.
See also
save_data
Save the data to a file.
save_delchi
Save the ratio of residuals (data-model) to error to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns X and RESID. The residuals array respects any filter or (for PHA files) grouping settings.
Examples
Write the residuals to the file “resid.fits”:
>>> save_resid('resid.fits')
Write the residuals from the data set ‘jet’ to the ASCII file “resid.dat”:
>>> save_resid('jet', "resid.dat", ascii=True)
- save_source(id, filename=None, bkg_id=None, ascii=False, clobber=False)[source] [edit on github]
Save the model values to a file.
The model is evaluated on the grid of the data set, but does not include any instrument response (such as a PSF or ARF and RMF).
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to write the array to. The format is determined by the ascii argument.
bkg_id (int or str, optional) – Set if the background model should be written out rather than the source.
ascii (bool, optional) – If False then the data is written as a FITS format binary table. The default is False. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If filename is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IdentifierErr – If no model has been set for this data set.
sherpa.utils.err.IOErr – If filename already exists and clobber is False.
See also
save_data
Save the data to a file.
save_model
Save the model values to a file.
set_model
Set the source model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns X and SOURCE for 1D data sets. The model array respects any filter or (for PHA files) grouping settings.
Examples
Write the model values for the default data set to the file “model.fits”:
>>> save_source('model.fits')
Write the model from the data set ‘jet’ to the ASCII file “model.dat”:
>>> save_source('jet', "model.dat", ascii=True)
For 2D (image) data sets, the model is written out as an image:
>>> save_source('img', 'model.img')
- save_staterror(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the statistical errors to a file.
If the statistical errors have not been set explicitly, then the values calculated by the statistic - such as chi2gehrels or chi2datavar - will be used.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to write the array to. The format is determined by the ascii argument.
bkg_id (int or str, optional) – Set if the background should be written out rather than the source.
ascii (bool, optional) – If False then the data is written as a FITS format binary table. The default is True. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If filename is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If filename already exists and clobber is False.
See also
load_staterror
Load the statistical errors from a file.
save_error
Save the errors to a file.
save_syserror
Save the systematic errors to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns X and STAT_ERR.
Examples
Write out the statistical errors from the default data set to the file ‘errs.dat’.
>>> save_staterror('errs.dat')
Over-write the file if it already exists, and take the data from the data set “jet”:
>>> save_staterror('jet', 'err.out', clobber=True)
Write the data out in FITS format:
>>> save_staterror('staterr.fits', ascii=False)
- save_syserror(id, filename=None, bkg_id=None, ascii=True, clobber=False)[source] [edit on github]
Save the systematic errors to a file.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to write the array to. The format is determined by the ascii argument.
bkg_id (int or str, optional) – Set if the background should be written out rather than the source.
ascii (bool, optional) – If False then the data is written as a FITS format binary table. The default is True. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If filename is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.DataErr – If the data set does not contain any systematic errors.
sherpa.utils.err.IOErr – If filename already exists and clobber is False.
See also
load_syserror
Load the systematic errors from a file.
save_error
Save the errors to a file.
save_staterror
Save the statistical errors to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
The output file contains the columns X and SYS_ERR.
Examples
Write out the systematic errors from the default data set to the file ‘errs.dat’.
>>> save_syserror('errs.dat')
Over-write the file if it already exists, and take the data from the data set “jet”:
>>> save_syserror('jet', 'err.out', clobber=True)
Write the data out in FITS format:
>>> save_syserror('syserr.fits', ascii=False)
- save_table(id, filename=None, ascii=False, clobber=False)[source] [edit on github]
Save a data set to a file as a table.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
filename (str) – The name of the file to write the data to. The format is determined by the ascii argument.
ascii (bool, optional) – If False then the data is written as a FITS format binary table. The default is False. The exact format of the output file depends on the I/O library in use (Crates or AstroPy).
clobber (bool, optional) – If filename is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If filename already exists and clobber is False.
See also
save_data
Save the data to a file.
save_image
Save the pixel values of a 2D data set to a file.
save_pha
Save a PHA data set to a file.
save_model
Save the model values to a file.
save_source
Save the model values to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the filename parameter. If given two un-named arguments, then they are interpreted as the id and filename parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Write the data set to the file “table.fits”:
>>> save_table('table.fits')
Write the data from the data set ‘jet’ to the file “jet.dat”, as an ASCII file:
>>> save_table('jet', 'jet.dat', ascii=True, clobber=True)
- set_analysis(id, quantity=None, type='rate', factor=0)[source] [edit on github]
Set the units used when fitting and displaying spectral data.
The set_analysis command sets the units for spectral analysis. Note that in order to change the units of a data set from ‘channel’ to ‘energy’ or ‘wavelength’, the appropriate ARF and RMF instrument response files must be loaded for that data set. The type and factor arguments control how the data is plotted.
- Parameters
id (int or str) – If only one argument is given then this is taken to be the quantity argument (in which case, the change is made to all data sets). If multiple arguments are given then this is the identifier for the data set to change.
quantity ({ 'channel', 'chan', 'bin', 'energy', 'ener', 'wavelength', 'wave' }) – The units to use for the analysis.
type ({ 'rate', 'counts' }, optional) – The units to use on the Y axis of plots. The default is ‘rate’.
factor (int, optional) – The Y axis of plots is multiplied by Energy^factor or Wavelength^factor before display. The default is 0.
- Raises
sherpa.utils.err.IdentifierErr – If the id argument is not recognized.
See also
get_analysis
Return the analysis setting for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the quantity parameter. If given two un-named arguments, then they are interpreted as the id and quantity parameters, respectively.
Examples
Set all loaded data sets to use wavelength for any future fitting or display.
>>> set_analysis('wave')
Set the data set with an identifier of 2 to use energy units.
>>> set_analysis(2, 'energy')
Set data set 1 to use channel units. Plots will use a Y axis of count/bin rather than the default count/s/bin.
>>> set_analysis(1, 'bin', 'counts')
Set data set 1 to use energy units. Plots of this data set will display keV on the X axis and counts keV (i.e. counts/keV * keV^2) on the Y axis.
>>> set_analysis(1, 'energy', 'counts', 2)
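The effect of the factor argument on plotted values can be sketched as a simple per-bin scaling. This is an illustrative helper only; scale_for_display is not a Sherpa function:

```python
def scale_for_display(y, energy, factor=0):
    # Multiply the Y value (rate or counts, per the type argument) by
    # energy**factor, as described for the factor argument.
    # factor=0 (the default) leaves the value unchanged.
    return y * energy ** factor
```

With factor=2 a bin containing 10 counts at 2 keV would be displayed as 10 * 2**2 = 40.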
- set_areascal(id, area=None, bkg_id=None)[source] [edit on github]
Change the fractional area factor of a PHA data set.
The area scaling factor of a PHA data set is taken from the AREASCAL keyword, but it can be changed once the file has been loaded.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
area (number) – The scaling factor.
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
See also
get_areascal
Return the fractional area factor of a PHA data set.
set_backscal
Change the area scaling of a PHA data set.
set_exposure
Change the exposure time of a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the area parameter. If given two un-named arguments, then they are interpreted as the id and area parameters, respectively. The remaining parameters are expected to be given as named arguments.
- set_arf(id, arf=None, resp_id=None, bkg_id=None)[source] [edit on github]
Set the ARF for use by a PHA data set.
Set the effective area curve for a PHA data set, or its background.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
arf – An ARF, such as returned by get_arf or unpack_arf.
resp_id (int or str, optional) – The identifier for the ARF within this data set, if there are multiple responses.
bkg_id (int or str, optional) – Set this to identify the ARF as being for use with the background.
See also
get_arf
Return the ARF associated with a PHA data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_pha
Load a file as a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_rmf
Set the RMF for use by a PHA data set.
unpack_arf
Read in an ARF from a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arf parameter. If given two un-named arguments, then they are interpreted as the id and arf parameters, respectively. The remaining parameters are expected to be given as named arguments.
If a PHA data set has an associated ARF - either from when the data was loaded or explicitly with the set_arf function - then the model fit to the data will include the effect of the ARF when the model is created with set_model or set_source. In this case the get_source function returns the user model, and get_model the model that is fit to the data (i.e. it includes any response information; that is, the ARF and RMF, if set). To include the ARF explicitly, use set_full_model.
Examples
Copy the ARF from the default data set to data set 2:
>>> arf1 = get_arf()
>>> set_arf(2, arf1)
Read in an ARF from the file ‘bkg.arf’ and set it as the ARF for the background model of data set “core”:
>>> arf = unpack_arf('bkg.arf')
>>> set_arf('core', arf, bkg_id=1)
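Conceptually, the ARF folds the source model flux through the telescope's effective area. A rough sketch of that step, ignoring the RMF and using hypothetical names (Sherpa performs the real calculation internally):

```python
def fold_arf(model_flux, specresp, exposure):
    # Predicted counts per bin is approximately the integrated model flux
    # (photon/cm^2/s) times the effective area (cm^2, the ARF's SPECRESP
    # column) times the exposure time (s).
    return [flux * area * exposure
            for flux, area in zip(model_flux, specresp)]
```

A bin with a model flux of 0.001 photon/cm^2/s seen through 100 cm^2 of effective area for 1000 s would predict roughly 100 counts.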
- set_backscal(id, backscale=None, bkg_id=None)[source] [edit on github]
Change the area scaling of a PHA data set.
The area scaling factor of a PHA data set is taken from the BACKSCAL keyword or column, but it can be changed once the file has been loaded.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
backscale (number or array) – The scaling factor.
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
See also
get_backscal
Return the area scaling of a PHA data set.
set_areascal
Change the fractional area factor of a PHA data set.
set_exposure
Change the exposure time of a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the backscale parameter. If given two un-named arguments, then they are interpreted as the id and backscale parameters, respectively. The remaining parameters are expected to be given as named arguments.
- set_bkg(id, bkg=None, bkg_id=None)[source] [edit on github]
Set the background for a PHA data set.
The background can either be fit with a model - using set_bkg_model - or removed from the data before fitting, using subtract.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg – A PHA data set, such as returned by get_data or unpack_pha.
bkg_id (int or str, optional) – The identifier for this background, which is needed if there are multiple background estimates for the source.
See also
get_bkg
Return the background for a PHA data set.
load_bkg
Load the background from a file and add it to a PHA data set.
load_pha
Load a file as a PHA data set.
set_bkg_model
Set the background model expression for a data set.
subtract
Subtract the background estimate from a data set.
unpack_pha
Create a PHA data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the bkg parameter. If given two un-named arguments, then they are interpreted as the id and bkg parameters, respectively. The remaining parameters are expected to be given as named arguments.
If the background has no grouping or quality arrays then they are copied from the source region. If the background has no response information (ARF or RMF) then the response is copied from the source region.
Examples
Copy the background from the default data set to data set 2:
>>> bkg1 = get_bkg()
>>> set_bkg(2, bkg1)
Read in the PHA data from the file ‘bkg.pi’ and set it as the second background component of data set “core”:
>>> bkg = unpack_pha('bkg.pi')
>>> set_bkg('core', bkg, bkg_id=2)
- set_bkg_full_model(id, model=None, bkg_id=None)[source] [edit on github]
Define the convolved background model expression for a PHA data set.
Set a model expression for a background data set in the same way that set_full_model does for a source. This is for when the background is being fitted simultaneously to the source, rather than subtracted from it.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.Model object) – This defines the model used to fit the data. It can be a Python expression or a string version of it.
bkg_id (int or str, optional) – The identifier for the background of the data set, in cases where multiple backgrounds are provided.
See also
fit
Fit one or more data sets.
set_full_model
Define the convolved model expression for a data set.
set_pileup_model
Include a model of the Chandra ACIS pile up when fitting PHA data.
set_psf
Add a PSF model to a data set.
set_model
Set the source model expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Some functions - such as plot_bkg_source - may not work for model expressions created by set_bkg_full_model.
Examples
The background is fit by two power laws - one that is passed through the instrument response (gbgnd) and one that is not (pbgnd). The source is modelled by xsphabs * galabs, together with the background model, scaled by the ratio of area and time. Note that the background component in the source expression uses the source response rather than the background response.
>>> rsp = get_response()
>>> bresp = get_response(bkg_id=1)
>>> bscale = get_bkg_scale()
>>> smodel = xsphabs.galabs * xsapec.emiss
>>> bmdl = bresp(powlaw1d.gbgnd) + powlaw1d.pbgnd
>>> smdl = rsp(smodel) + bscale*(rsp(gbgnd) + pbgnd)
>>> set_full_model(smdl)
>>> set_bkg_full_model(bmdl)
- set_bkg_model(id, model=None, bkg_id=None)[source] [edit on github]
Set the background model expression for a PHA data set.
The background emission can be fit by a model, defined by the set_bkg_model call, rather than subtracted from the data. If the background is subtracted then the background model is ignored when fitting the data.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.Model object) – This defines the model used to fit the data. It can be a Python expression or a string version of it.
bkg_id (int or str, optional) – The identifier for the background of the data set, in cases where multiple backgrounds are provided.
See also
delete_model
Delete the model expression from a data set.
fit
Fit one or more data sets.
integrate1d
Integrate 1D source expressions.
set_model
Set the model expression for a data set.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
show_bkg_model
Display the background model expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
The emission defined by the background model expression is included in the fit to the source dataset, scaling by exposure time and area size (given by the ratio of the background to source BACKSCAL values). That is, if src_model and bkg_model represent the source and background model expressions set by calls to set_model and set_bkg_model respectively, the source data is fit by:
src_model + scale * bkg_model
where scale is the scaling factor.
PHA data sets will automatically apply the instrumental response (ARF and RMF) to the background expression. For some cases this is not useful - for example, when different responses should be applied to different model components - in which case set_bkg_full_model should be used instead.
Examples
The background is modelled by a gaussian line (gauss1d model component called bline) together with an absorbed polynomial (the bgnd component). The absorbing component (gal) is also used in the source expression.
>>> set_model(xsphabs.gal*powlaw1d.pl)
>>> set_bkg_model(gauss1d.bline + gal*polynom1d.bgnd)
In this example, the default data set has two background estimates, so models are set for both components. The same model is applied to both, except that the relative normalisations are allowed to vary (by inclusion of the scale component).
>>> bmodel = xsphabs.gabs * powlaw1d.pl
>>> set_bkg_model(2, bmodel)
>>> set_bkg_model(2, bmodel * const1d.scale, bkg_id=2)
- set_bkg_source(id, model=None, bkg_id=None) [edit on github]
Set the background model expression for a PHA data set.
The background emission can be fit by a model, defined by the set_bkg_model call, rather than subtracted from the data. If the background is subtracted then the background model is ignored when fitting the data.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.Model object) – This defines the model used to fit the data. It can be a Python expression or a string version of it.
bkg_id (int or str, optional) – The identifier for the background of the data set, in cases where multiple backgrounds are provided.
See also
delete_model
Delete the model expression from a data set.
fit
Fit one or more data sets.
integrate1d
Integrate 1D source expressions.
set_model
Set the model expression for a data set.
set_bkg_full_model
Define the convolved background model expression for a PHA data set.
show_bkg_model
Display the background model expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
The emission defined by the background model expression is included in the fit to the source dataset, scaling by exposure time and area size (given by the ratio of the background to source BACKSCAL values). That is, if src_model and bkg_model represent the source and background model expressions set by calls to set_model and set_bkg_model respectively, the source data is fit by:
src_model + scale * bkg_model
where scale is the scaling factor.
PHA data sets will automatically apply the instrumental response (ARF and RMF) to the background expression. For some cases this is not useful - for example, when different responses should be applied to different model components - in which case set_bkg_full_model should be used instead.
Examples
The background is modelled by a gaussian line (gauss1d model component called bline) together with an absorbed polynomial (the bgnd component). The absorbing component (gal) is also used in the source expression.
>>> set_model(xsphabs.gal*powlaw1d.pl)
>>> set_bkg_model(gauss1d.bline + gal*polynom1d.bgnd)
In this example, the default data set has two background estimates, so models are set for both components. The same model is applied to both, except that the relative normalisations are allowed to vary (by inclusion of the scale component).
>>> bmodel = xsphabs.gabs * powlaw1d.pl
>>> set_bkg_model(2, bmodel)
>>> set_bkg_model(2, bmodel * const1d.scale, bkg_id=2)
- set_conf_opt(name, val)[source] [edit on github]
Set an option for the confidence interval method.
This is a helper function since the options can also be set directly using the object returned by get_conf.
- Parameters
name (str) – The name of the option to set.
val – The new value for the option.
- Raises
sherpa.utils.err.ArgumentErr – If the name argument is not recognized.
See also
conf
Estimate parameter confidence intervals using the confidence method.
get_conf
Return the conf estimation object.
get_conf_opt
Return one or all options of the conf estimation object.
Examples
>>> set_conf_opt('parallel', False)
- set_coord(id, coord=None)[source] [edit on github]
Set the coordinate system to use for image analysis.
The default coordinate system for images (2D data sets) - that is, the mapping between pixel position and coordinate value - is 'logical'. This function can change this setting, so that model parameters can be fit using other systems. This setting is also used by the notice2d and ignore2d series of commands.
- Parameters
id (int or str) – The data set to change. If not given then the default identifier is used, as returned by get_default_id.
coord ({ 'logical', 'image', 'physical', 'world', 'wcs' }) – The coordinate system to use. The 'image' option is the same as 'logical', and 'wcs' the same as 'world'.
See also
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the coord parameter. If given two un-named arguments, then they are interpreted as the id and coord parameters, respectively.
Any limits or values already set for model parameters, such as those made by guess, may need to be changed after changing the coordinate system.
The 'logical' system is one in which the center of the lower-left pixel has coordinates (1, 1) and the center of the top-right pixel has coordinates (nx, ny), for an nx (columns) by ny (rows) pixel image. The pixels have a side of length 1, so the first pixel covers the range x=0.5 to x=1.5 and y=0.5 to y=1.5.
The 'physical' and 'world' coordinate systems rely on the FITS World Coordinate System (WCS) standard [1]_. The 'physical' system refers to a linear transformation, with possible offset, of the 'logical' system. The 'world' system refers to the mapping to a celestial coordinate system.
References
Examples
Change the coordinate system of the default data set to the world system (‘wcs’ is a synonym for ‘world’).
>>> set_coord('wcs')
Change the data set with the id of ‘m82’ to use the physical coordinate system.
>>> set_coord('m82', 'physical')
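As a sketch, the 'physical' system described in the Notes is a linear transform, with possible offset, of the 'logical' pixel coordinates. The reference-point names below follow FITS WCS conventions, but the numbers are made up for illustration:

```python
def logical_to_physical(x_logical, y_logical, crpix=(1.0, 1.0),
                        crval=(100.0, 200.0), cdelt=(2.0, 2.0)):
    """Toy linear mapping from the 'logical' pixel system to a
    'physical' system. crpix is the reference pixel, crval its
    physical coordinates, and cdelt the scaling per pixel; all
    values here are hypothetical.
    """
    xp = crval[0] + (x_logical - crpix[0]) * cdelt[0]
    yp = crval[1] + (y_logical - crpix[1]) * cdelt[1]
    return xp, yp

# The centre of the lower-left pixel is (1, 1) in the logical system.
print(logical_to_physical(1.0, 1.0))
```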
- set_counts(id, val=None, bkg_id=None) [edit on github]
Set the dependent axis of a data set.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
val (array) – The array of values for the dependent axis.
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
See also
dataspace1d
Create the independent axis for a 1D data set.
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
load_arrays
Create a data set from array values.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the val parameter. If given two un-named arguments, then they are interpreted as the id and val parameters, respectively.
Examples
Create a 1D data set with values at (0,4), (2,10), (4,12), (6,8), (8,2), and (10,12):
>>> dataspace1d(0, 10, 2, dstype=Data1D)
>>> set_dep([4, 10, 12, 8, 2, 12])
Set the values for the source and background of the data set ‘src’:
>>> set_dep('src', y1)
>>> set_dep('src', bg1, bkg_id=1)
- set_covar_opt(name, val)[source] [edit on github]
Set an option for the covariance method.
This is a helper function since the options can also be set directly using the object returned by get_covar.
- Parameters
- Raises
sherpa.utils.err.ArgumentErr – If the name argument is not recognized.
See also
covar
Estimate parameter confidence intervals using the covariance method.
get_covar
Return the covar estimation object.
get_covar_opt
Return one or all options of the covar estimation object.
Examples
>>> set_covar_opt('sigma', 1.6)
- set_data(id, data=None)[source] [edit on github]
Set a data set.
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
data (instance of a sherpa.Data.Data-derived class) – The new contents of the data set. This can be copied from an existing data set or loaded in from a file (e.g. unpack_data).
See also
copy_data
Copy a data set to a new identifier.
delete_data
Delete a data set by identifier.
get_data
Return the data set by identifier.
load_data
Create a data set from a file.
unpack_data
Read in a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the data parameter. If given two un-named arguments, then they are interpreted as the id and data parameters, respectively.
Examples
>>> d1 = get_data(2)
>>> set_data(d1)
Copy the background data from the default data set into a new data set identified as ‘bkg’:
>>> set_data('bkg', get_bkg())
- set_default_id(id)[source] [edit on github]
Set the default data set identifier.
The Sherpa data id ties data, model, fit, and plotting information into a data set easily referenced by id. The default identifier, used by many commands, is changed by this command. The current setting can be found by using get_default_id.
- Parameters
id (int or str) – The default data set identifier to be used by certain Sherpa functions when an identifier is not given, or set to None.
See also
get_default_id
Return the default data set identifier.
list_data_ids
List the identifiers for the loaded data sets.
Notes
The default Sherpa data set identifier is the integer 1.
Examples
After the following, many commands, such as set_source, will use 'src' as the default data set identifier:
>>> set_default_id('src')
Restore the default data set identifier.
>>> set_default_id(1)
- set_dep(id, val=None, bkg_id=None)[source] [edit on github]
Set the dependent axis of a data set.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
val (array) – The array of values for the dependent axis.
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
See also
dataspace1d
Create the independent axis for a 1D data set.
dataspace2d
Create the independent axis for a 2D data set.
get_dep
Return the dependent axis of a data set.
load_arrays
Create a data set from array values.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the val parameter. If given two un-named arguments, then they are interpreted as the id and val parameters, respectively.
Examples
Create a 1D data set with values at (0,4), (2,10), (4,12), (6,8), (8,2), and (10,12):
>>> dataspace1d(0, 10, 2, dstype=Data1D)
>>> set_dep([4, 10, 12, 8, 2, 12])
Set the values for the source and background of the data set ‘src’:
>>> set_dep('src', y1)
>>> set_dep('src', bg1, bkg_id=1)
- set_exposure(id, exptime=None, bkg_id=None)[source] [edit on github]
Change the exposure time of a PHA data set.
The exposure time of a PHA data set is taken from the EXPOSURE keyword in its header, but it can be changed once the file has been loaded.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
exptime (num) – The exposure time, in seconds.
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
See also
get_exposure
Return the exposure time of a PHA data set.
set_areascal
Change the fractional area factor of a PHA data set.
set_backscal
Change the area scaling of a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the exptime parameter. If given two un-named arguments, then they are interpreted as the id and exptime parameters, respectively. The remaining parameters are expected to be given as named arguments.
Examples
Increase the exposure time of the default data set by 5 per cent.
>>> etime = get_exposure()
>>> set_exposure(etime * 1.05)
Use the EXPOSURE value from the ARF, rather than the value from the PHA file, for data set 2:
>>> set_exposure(2, get_arf(2).exposure)
Set the exposure time of the second background component of the ‘jet’ data set.
>>> set_exposure('jet', 12324.45, bkg_id=2)
- set_filter(id, val=None, bkg_id=None, ignore=False)[source] [edit on github]
Set the filter array of a data set.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by get_default_id.
val (array) – The array of filter values (0 or 1). The size should match the array returned by get_dep.
bkg_id (int or str, optional) – Set to identify which background component to set. The default value (None) means that this is for the source component of the data set.
ignore (bool, optional) – If False (the default) then include bins with a non-zero filter value, otherwise exclude these bins.
See also
get_dep
Return the dependent axis of a data set.
get_filter
Return the filter expression for a data set.
ignore
Exclude data from the fit.
load_filter
Load the filter array from a file and add to a data set.
notice
Include data in the fit.
save_filter
Save the filter array to a file.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the val parameter. If given two un-named arguments, then they are interpreted as the id and val parameters, respectively.
Examples
Ignore those bins with a value less than 20.
>>> d = get_dep()
>>> set_filter(d >= 20)
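The filter semantics can be sketched with a small standalone helper (illustrative only; the real set_filter applies the mask to the data set in place rather than returning values):

```python
import numpy as np

def apply_filter(dep, filter_vals, ignore=False):
    """Toy version of the set_filter semantics: filter_vals is an
    array of 0/1 (or boolean) flags matching the dependent axis.
    With ignore=False the non-zero bins are included; with
    ignore=True they are excluded instead.
    """
    mask = np.asarray(filter_vals, dtype=bool)
    if ignore:
        mask = ~mask
    return np.asarray(dep)[mask]

d = np.array([5, 25, 30, 10])
# Keep bins with values >= 20, as in the example above:
print(apply_filter(d, d >= 20))
# Or exclude those bins instead:
print(apply_filter(d, d >= 20, ignore=True))
```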
- set_full_model(id, model=None)[source] [edit on github]
Define the convolved model expression for a data set.
The model expression created by set_model can be modified by "instrumental effects", such as a PSF set by set_psf. The set_full_model function is for when this is not sufficient, and full control is needed. An example of when this is needed is if different PSF models should be applied to different source components.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.Model object) – This defines the model used to fit the data. It can be a Python expression or a string version of it.
See also
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
Some functions - such as plot_source - may not work for model expressions created by set_full_model.
Examples
Apply different PSFs to different components, as well as an unconvolved component:
>>> load_psf("psf1", "psf1.dat")
>>> load_psf("psf2", "psf2.dat")
>>> smodel = psf1(gauss2d.src1) + psf2(beta2d.src2) + const2d.bgnd
>>> set_full_model("src", smodel)
- set_grouping(id, val=None, bkg_id=None)[source] [edit on github]
Apply a set of grouping flags to a PHA data set.
A group is indicated by a sequence of flag values starting with 1 and then -1 for all the remaining channels in the group, following [1]_.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
val (array of int) – This must be an array of grouping values of the same length as the data array.
bkg_id (int or str, optional) – Set to group the background associated with the data set.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
fit
Fit one or more data sets.
get_grouping
Return the grouping flags for a PHA data set.
group
Turn on the grouping for a PHA data set.
group_adapt
Adaptively group to a minimum number of counts.
group_adapt_snr
Adaptively group to a minimum signal-to-noise ratio.
group_bins
Group into a fixed number of bins.
group_counts
Group into a minimum number of counts per bin.
group_snr
Group into a minimum signal-to-noise ratio.
group_width
Group into a fixed bin width.
load_grouping
Load the grouping scheme from a file and add to a PHA data set.
set_quality
Apply a set of quality flags to a PHA data set.
ungroup
Turn off the grouping for a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the val parameter. If given two un-named arguments, then they are interpreted as the id and val parameters, respectively.
The meaning of the grouping column is taken from [1]_, which says that +1 indicates the start of a bin, -1 if the channel is part of a group, and 0 if the data grouping is undefined for all channels.
References
- 1
“The OGIP Spectral File Format”, https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
Copy the grouping array from data set 2 into the default data set and ensure it is applied:
>>> grp = get_grouping(2)
>>> set_grouping(grp)
>>> group()
Copy the grouping from data set “src1” to the source and the first background data set of “src2”:
>>> grp = get_grouping("src1")
>>> set_grouping("src2", grp)
>>> set_grouping("src2", grp, bkg_id=1)
>>> group("src2")
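The grouping flags described in the Notes can be sketched in plain Python. This toy helper (not part of Sherpa) sums channel counts into groups, treating +1 as the start of a group and -1 as a continuation:

```python
def group_counts(channels, grouping):
    """Apply OGIP-style grouping flags to an array of channel
    counts: a flag of +1 starts a new group and -1 marks a channel
    belonging to the preceding group. Returns the summed counts
    per group. Illustrative only; Sherpa's grouping also handles
    quality flags and filtering.
    """
    groups = []
    for counts, flag in zip(channels, grouping):
        if flag == 1 or not groups:   # start a new group
            groups.append(counts)
        else:                         # flag == -1: extend the group
            groups[-1] += counts
    return groups

# Six channels grouped in pairs:
print(group_counts([1, 2, 3, 4, 5, 6], [1, -1, 1, -1, 1, -1]))
```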
- set_iter_method(meth)[source] [edit on github]
Set the iterative-fitting scheme used in the fit.
Control whether an iterative scheme should be applied to the fit.
Changed in version 4.14.1: The “primini” scheme has been removed from Sherpa.
- Parameters
meth ({ 'none', 'sigmarej' }) – The name of the scheme used during the fit; ‘none’ means no scheme is used. It is only valid to change the scheme when a chi-square statistic is in use.
- Raises
TypeError – When the meth argument is not recognized.
See also
fit
Fit a model to one or more data sets.
get_iter_method_name
Return the name of the iterative fitting scheme.
get_iter_method_opt
Return one or all options for the iterative-fitting scheme.
list_iter_methods
List the iterative fitting schemes.
set_iter_method_opt
Set an option for the iterative-fitting scheme.
set_stat
Set the statistical method.
Notes
The parameters of the schemes are described in set_iter_method_opt.
The 'Iterative Weighting' approach ([1]_) uses a chi-square statistic where the variance is computed from model amplitudes derived in the previous iteration of the fit. It attempts to remove the biased estimates of model parameters which are inherent in chi-square statistics ([2]_).
The variance in bin i is estimated to be:
sigma^2_i^j = S(i, t_s^(j-1)) + (A_s/A_b)^2 B_off(i, t_b^(j-1))
where j is the number of iterations that have been carried out in the fitting process, B_off is the background model amplitude in bin i of the off-source region, and t_s^(j-1) and t_b^(j-1) are the set of source and background model parameter values derived during the iteration previous to the current one. The variances are set to an array of ones on the first iteration.
In addition to reducing parameter estimate bias, this statistic can be used even when the number of counts in each bin is small (< 5), although the user should proceed with caution.
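The variance expression above can be written as a small helper. This is a sketch of the formula only; the array contents and the A_s/A_b ratio below are illustrative, not taken from any data set:

```python
import numpy as np

def iter_weight_variance(src_model_prev, bkg_model_prev, a_src, a_bkg):
    """Per-bin variance for iterative weighting:
    sigma^2_i = S(i) + (A_s/A_b)^2 * B_off(i), where S and B_off
    are the source and background model amplitudes from the
    previous fit iteration. On the first iteration the variances
    are all set to 1 instead.
    """
    return src_model_prev + (a_src / a_bkg) ** 2 * bkg_model_prev

s = np.array([4.0, 9.0, 16.0])   # source model from previous iteration
b = np.array([1.0, 1.0, 1.0])    # background model from previous iteration
print(iter_weight_variance(s, b, a_src=1.0, a_bkg=2.0))
```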
The sigmarej scheme is based on the IRAF sfit function [2]_, where after a fit data points are excluded if the value of (data - model)/error exceeds a threshold, and the data re-fit. This removal of data points continues until the fit has converged. The error removal can be asymmetric, since there are separate parameters for the lower and upper limits.
References
- 1
“Multiparameter linear least-squares fitting to Poisson data one count at a time”, Wheaton et al. 1995, ApJ 438, 322 http://adsabs.harvard.edu/abs/1995ApJ...438..322W
- 2
Examples
Switch to the ‘sigmarej’ scheme for iterative fitting and change the low and high rejection limits to 4 and 3 respectively:
>>> set_iter_method('sigmarej')
>>> set_iter_method_opt('lrej', 4)
>>> set_iter_method_opt('hrej', 3)
Remove any iterative-fitting method:
>>> set_iter_method('none')
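The sigmarej idea can be sketched in a few lines of standalone code. A weighted mean (a constant model) stands in for the real Sherpa model fit, and the helper name is an assumption, not Sherpa internals:

```python
import numpy as np

def sigma_reject_fit(y, err, lrej=3.0, hrej=3.0, maxiters=10):
    """Toy sigmarej-style loop: fit, exclude points whose residual
    (data - model) / error lies above hrej or below -lrej sigma,
    and re-fit, until no further points are rejected. The 'fit'
    here is just a weighted mean of the included points.
    """
    mask = np.ones(len(y), dtype=bool)
    model = 0.0
    for _ in range(maxiters):
        model = np.average(y[mask], weights=1.0 / err[mask] ** 2)
        z = (y - model) / err
        keep = mask & (z > -lrej) & (z < hrej)
        if np.array_equal(keep, mask):
            break
        mask = keep
    return model, mask

y = np.array([10, 11, 9, 10, 10, 11, 9, 10, 11, 9, 10, 30], dtype=float)
err = np.ones_like(y)
model, mask = sigma_reject_fit(y, err)
print(model, mask[-1])  # the outlier at 30 is excluded
```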
- set_iter_method_opt(optname, val)[source] [edit on github]
Set an option for the iterative-fitting scheme.
- Parameters
optname (str) – The name of the option to set. The get_iter_method_opt routine can be used to find out valid values for this argument.
val – The new value for the option.
- Raises
sherpa.utils.err.ArgumentErr – If the optname argument is not recognized.
See also
get_iter_method_name
Return the name of the iterative fitting scheme.
get_iter_method_opt
Return one or all options for the iterative-fitting scheme.
list_iter_methods
List the iterative fitting schemes.
set_iter_method
Set the iterative-fitting scheme used in the fit.
Notes
The supported fields for the sigmarej scheme are:
- grow
The number of points adjacent to a rejected point that should also be removed. A value of 0 means that only the discrepant point is removed, whereas a value of 1 means that the two adjacent points (one lower and one higher) will also be removed.
- hrej
The rejection criterion in units of sigma, for data points above the model (it must be >= 0).
- lrej
The rejection criterion in units of sigma, for data points below the model (it must be >= 0).
- maxiters
The maximum number of iterations to perform. If this value is 0 then the fit will run until it has converged.
Examples
Reject any points that are more than 5 sigma away from the best fit model and re-fit.
>>> set_iter_method('sigmarej')
>>> set_iter_method_opt('lrej', 5)
>>> set_iter_method_opt('hrej', 5)
>>> fit()
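The grow option can be sketched as a mask operation. This is a toy helper, not Sherpa code:

```python
def grow_rejected(mask, grow):
    """Expand the rejection: for each rejected point, also reject
    the `grow` points on either side of it. `mask` is True for
    points that are kept, matching the semantics described above.
    """
    rejected = [i for i, keep in enumerate(mask) if not keep]
    out = list(mask)
    for i in rejected:
        for j in range(max(0, i - grow), min(len(mask), i + grow + 1)):
            out[j] = False
    return out

# Point 3 was rejected; grow=1 also removes points 2 and 4.
print(grow_rejected([True, True, True, False, True, True], 1))
```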
- set_method(meth)[source] [edit on github]
Set the optimization method.
The primary task of Sherpa is to fit a model M(p) to a set of observed data, where the vector p denotes the model parameters. An optimization method is one that is used to determine the vector of model parameter values, p0, for which the chosen fit statistic is minimized.
- Parameters
meth (str) – The name of the method (case is not important). The list_methods function returns the list of supported values.
- Raises
sherpa.utils.err.ArgumentErr – If the meth argument is not recognized.
See also
get_method_name
Return the name of the current optimization method.
list_methods
List the supported optimization methods.
set_stat
Set the fit statistic.
Notes
The available methods include:
levmar
The Levenberg-Marquardt method is an interface to the MINPACK subroutine lmdif to find the local minimum of nonlinear least squares functions of several variables by a modification of the Levenberg-Marquardt algorithm [1]_.
moncar
The implementation of the moncar method is based on [2]_.
neldermead
The implementation of the Nelder Mead Simplex direct search is based on [3]_.
simplex
This is another name for neldermead.
References
- 1
J.J. More, “The Levenberg Marquardt algorithm: implementation and theory,” in Lecture Notes in Mathematics 630: Numerical Analysis, G.A. Watson (Ed.), Springer-Verlag: Berlin, 1978, pp.105-116.
- 2
Storn, R. and Price, K. “Differential Evolution: A Simple and Efficient Adaptive Scheme for Global Optimization over Continuous Spaces.” J. Global Optimization 11, 341-359, 1997.
- 3
Jeffrey C. Lagarias, James A. Reeds, Margaret H. Wright, Paul E. Wright, “Convergence Properties of the Nelder-Mead Simplex Algorithm in Low Dimensions”, SIAM Journal on Optimization, Vol. 9, No. 1 (1998), pages 112-147.
Examples
>>> set_method('neldermead')
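To illustrate what an optimization method does, a toy grid search over one parameter can stand in for the real optimisers; it is purely for illustration, since levmar, moncar, and neldermead explore the parameter space far more efficiently than this:

```python
import numpy as np

def fit_stat(p, x, y):
    """A least-squares fit statistic for the toy model y = p * x."""
    return np.sum((y - p * x) ** 2)

def grid_minimise(x, y, grid):
    """Toy 'optimiser': evaluate the statistic over a grid of
    parameter values and return the value that minimises it.
    A stand-in for the real methods, which do not need a grid.
    """
    stats = [fit_stat(p, x, y) for p in grid]
    return grid[int(np.argmin(stats))]

x = np.array([1.0, 2.0, 3.0])
y = 2.0 * x  # data generated with p = 2
print(grid_minimise(x, y, np.linspace(0.0, 4.0, 81)))
```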
- set_method_opt(optname, val)[source] [edit on github]
Set an option for the current optimization method.
This is a helper function since the optimization options can also be set directly using the object returned by get_method.
- Parameters
optname (str) – The name of the option to set. The get_method and get_method_opt routines can be used to find out valid values for this argument.
val – The new value for the option.
- Raises
sherpa.utils.err.ArgumentErr – If the optname argument is not recognized.
See also
get_method
Return an optimization method.
get_method_opt
Return one or all options of the current optimization method.
set_method
Change the optimization method.
Examples
Change the maxfev parameter for the current optimizer to 2000.
>>> set_method_opt('maxfev', 2000)
- set_model(id, model=None)[source] [edit on github]
Set the source model expression for a data set.
The function is available as both set_model and set_source. The model fit to the data can be further modified by instrument responses which can be set explicitly - e.g. by set_psf - or be defined automatically by the type of data being used (e.g. the ARF and RMF of a PHA data set). The set_full_model command can be used to explicitly include the instrument response if necessary.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
model (str or sherpa.models.Model object) – This defines the model used to fit the data. It can be a Python expression or a string version of it.
See also
delete_model
Delete the model expression from a data set.
fit
Fit one or more data sets.
freeze
Fix model parameters so they are not changed by a fit.
get_source
Return the source model expression for a data set.
integrate1d
Integrate 1D source expressions.
sherpa.astro.ui.set_bkg_model
Set the background model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
show_model
Display the source model expression for a data set.
set_par
Set the value, limits, or behavior of a model parameter.
thaw
Allow model parameters to be varied during a fit.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
PHA data sets will automatically apply the instrumental response (ARF and RMF) to the source expression. In some cases this is not useful - for example, when different responses should be applied to different model components - in which case set_full_model should be used instead.
Model caching is available via the model cache attribute. A non-zero value for this attribute means that the results of evaluating the model will be cached if all the parameters are frozen, which may lead to a reduction in the time taken to evaluate a fit. A zero value turns off the caching. The default setting for X-Spec and 1D analytic models is that cache is 5, but 0 for the 2D analytic models.
The integrate1d model can be used to apply a numerical integration to an arbitrary model expression.
Examples
Create an instance of the powlaw1d model type, called pl, and use it as the model for the default data set.
>>> set_model(powlaw1d.pl)
Create a model for the default dataset which is the xsphabs model multiplied by the sum of the xsapec and powlaw1d models (the model components are identified by the labels gal, clus, and pl).
>>> set_model(xsphabs.gal * (xsapec.clus + powlaw1d.pl))
Repeat the previous example, using a string to define the model expression:
>>> set_model('xsphabs.gal * (xsapec.clus + powlaw1d.pl)')
Use the same model component (src, a gauss2d model) for the two data sets ('src1' and 'src2').
>>> set_model('src1', gauss2d.src + const2d.bgnd1)
>>> set_model('src2', src + const2d.bgnd2)
Share an expression - in this case three gaussian lines - between three data sets. The normalization of this line complex is allowed to vary in data sets 2 and 3 (the norm2 and norm3 components of the const1d model), and each data set has a separate polynom1d component (bgnd1, bgnd2, and bgnd3). The c1 parameters of the polynom1d model components are thawed and then linked together (to reduce the number of free parameters):
>>> lines = gauss1d.l1 + gauss1d.l2 + gauss1d.l3
>>> set_model(1, lines + polynom1d.bgnd1)
>>> set_model(2, lines * const1d.norm2 + polynom1d.bgnd2)
>>> set_model(3, lines * const1d.norm3 + polynom1d.bgnd3)
>>> thaw(bgnd1.c1, bgnd2.c1, bgnd3.c1)
>>> link(bgnd2.c1, bgnd1.c1)
>>> link(bgnd3.c1, bgnd1.c1)
For this expression, the gal component is frozen, so it is not varied in the fit. The cache attribute is set to a non-zero value to ensure that the model is cached during a fit (this is actually the default value for this model so it is not normally needed).
>>> set_model(xsphabs.gal * (xsapec.clus + powlaw1d.pl))
>>> gal.nh = 0.0971
>>> freeze(gal)
>>> gal.cache = 1
- set_model_autoassign_func(func=None)[source] [edit on github]
Set the method used to create model component identifiers.
When a model component is created, the default behavior is to add the component to the default Python namespace. This is controlled by a function which can be changed with this routine.
- Parameters
func (function reference) – The function to use: this should accept two arguments, a string (component name), and the model instance.
See also
create_model_component
Create a model component.
get_model_autoassign_func
Return the method used to create model component identifiers
set_model
Set the source model expression for a data set.
Notes
The default assignment function first renames a model component to include the model type and user-defined identifier. It then updates the ‘__main__’ module’s dictionary with the model identifier as the key and the model instance as the value. Similarly, it updates the ‘__builtin__’ module’s dictionary just like ‘__main__’ for compatibility with IPython.
- set_par(par, val=None, min=None, max=None, frozen=None)[source] [edit on github]
Set the value, limits, or behavior of a model parameter.
- Parameters
par (str) – The name of the parameter, using the format “componentname.parametername”.
val (number, optional) – Change the current value of the parameter.
min (number, optional) – Change the minimum value of the parameter (the soft limit).
max (number, optional) – Change the maximum value of the parameter (the soft limit).
frozen (bool, optional) – Freeze (True) or thaw (False) the parameter.
- Raises
sherpa.utils.err.ArgumentErr – If the par argument is invalid: the model component does not exist or the given model has no parameter with that name.
See also
Notes
The parameter object can be used to change these values directly, by setting the attribute with the same name as the argument - so that:
set_par('emis.flag', val=2, frozen=True)
is the same as:
emis.flag.val = 2
emis.flag.frozen = True
Examples
Change the parameter value to 23.
>>> set_par('bgnd.c0', 23)
Restrict the line.ampl parameter to be between 1e-4 and 10 and to have a value of 0.1.
>>> set_par('line.ampl', 0.1, min=1e-4, max=10)
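The val/min/max/frozen semantics can be mimicked with a toy parameter class. This is purely illustrative; real Sherpa parameters perform much more validation (hard limits, units, linking, and so on):

```python
class Par:
    """Toy model parameter with val/min/max/frozen attributes,
    mimicking what set_par changes. Not Sherpa code."""
    def __init__(self, val=1.0, min=-1e10, max=1e10, frozen=False):
        self.min, self.max, self.frozen = min, max, frozen
        self.val = val

def set_par(par, val=None, min=None, max=None, frozen=None):
    """Apply only the settings that were given, like the real call.
    Limits are updated first so a new value is checked against them."""
    if min is not None:
        par.min = min
    if max is not None:
        par.max = max
    if val is not None:
        if not (par.min <= val <= par.max):
            raise ValueError("value outside soft limits")
        par.val = val
    if frozen is not None:
        par.frozen = frozen

# Mirror the line.ampl example above:
ampl = Par()
set_par(ampl, 0.1, min=1e-4, max=10)
print(ampl.val, ampl.min, ampl.max, ampl.frozen)
```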
- set_pileup_model(id, model=None)[source] [edit on github]
Include a model of the Chandra ACIS pile up when fitting PHA data.
Chandra observations of bright sources can be affected by pileup, so that there is a non-linear correlation between the source model and the predicted counts. This process can be modelled by including the jdpileup model for a data set, using the set_pileup_model function.
- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by get_default_id.
model (an instance of the sherpa.astro.models.JDPileup class) –
See also
delete_pileup_model
Delete the pile up model for a data set.
fit
Fit one or more data sets.
get_pileup_model
Return the pile up model for a data set.
sherpa.models.model.JDPileup
The ACIS pile up model.
list_pileup_model_ids
List of all the data sets with a pile up model.
set_full_model
Define the convolved model expression for a data set.
set_model
Set the source model expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the model parameter. If given two un-named arguments, then they are interpreted as the id and model parameters, respectively.
This is a generic function, and can be used to model other non-linear detector effects, but at present the only available model is for the ACIS pile up provided by the jdpileup model.
Examples
Plot up the model (an xsphabs model multiplied by a powlaw1d component) and then overplot the same expression but including the effects of pile up in the Chandra ACIS instrument:
>>> load_pha('src.pi')
>>> set_source(xsphabs.gal * powlaw1d.pl)
>>> plot_model()
>>> set_pileup_model(jdpileup.jpd)
>>> plot_model(overplot=True)
- set_prior(par, prior)[source] [edit on github]
Set the prior function to use with a parameter.
The default prior used by get_draws for each parameter is flat, varying between the soft minimum and maximum values of the parameter (as given by the min and max attributes of the parameter object). The set_prior function is used to change the form of the prior for a parameter, and get_prior returns the current prior for a parameter.
- Parameters
par (a sherpa.models.parameter.Parameter instance) – A parameter of a model instance.
prior (function or sherpa.models.model.Model instance) – The function to use for a prior. It must accept a single argument and return a value of the same size as the input.
See also
get_draws
Run the pyBLoCXS MCMC algorithm.
get_prior
Return the prior function for a parameter (MCMC).
set_sampler
Set the MCMC sampler.
Examples
Set the prior for the kT parameter of the therm component to be a gaussian, centered on 1.7 keV and with a FWHM of 0.35 keV:
>>> create_model_component('xsapec', 'therm')
>>> create_model_component('gauss1d', 'p_temp')
>>> p_temp.pos = 1.7
>>> p_temp.fwhm = 0.35
>>> set_prior(therm.kT, p_temp)
Create a function (lognorm) and use it as the prior of the nH parameter of the abs1 model component:
>>> def lognorm(x):
...     nh = 20
...     sigma = 0.5  # use a sigma of 0.5
...     # nH is in units of 10^-22 so convert
...     dx = np.log10(x) + 22 - nh
...     norm = sigma / np.sqrt(2 * np.pi)
...     return norm * np.exp(-0.5 * dx * dx / (sigma * sigma))
...
>>> create_model_component('xsphabs', 'abs1')
>>> set_prior(abs1.nH, lognorm)
- set_proj_opt(name, val)[source] [edit on github]
Set an option for the projection method.
This is a helper function since the options can also be set directly using the object returned by get_proj.
- Parameters
- Raises
sherpa.utils.err.ArgumentErr – If the name argument is not recognized.
See also
conf
Estimate parameter confidence intervals using the confidence method.
proj
Estimate parameter confidence intervals using the projection method.
get_proj
Return the proj estimation object.
get_proj_opt
Return one or all options of the proj estimation object.
Examples
>>> set_proj_opt('parallel', False)
- set_psf(id, psf=None)[source] [edit on github]
Add a PSF model to a data set.
After this call, the model that is fit to the data (as set by set_model) will be convolved by the given PSF model. The term "psf" is used in functions to refer to the data sent to this function, whereas the term "kernel" refers to the data that is used in the actual convolution (this can be re-normalized and a sub-set of the PSF data).
- Parameters
id (int or str, optional) – The data set. If not given then the default identifier is used, as returned by get_default_id.
psf (str or sherpa.instrument.PSFModel instance) – The PSF model created by load_psf.
See also
delete_psf
Delete the PSF model for a data set.
get_psf
Return the PSF model defined for a data set.
image_psf
Display the 2D PSF model for a data set in the image viewer.
load_psf
Create a PSF model.
plot_psf
Plot the 1D PSF model applied to a data set.
set_full_model
Define the convolved model expression for a data set.
set_model
Set the source model expression for a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
psf
parameter. If given two un-named arguments, then they are interpreted as theid
andpsf
parameters, respectively.A PSF component should only be applied to a single data set. This is not enforced by the system, and incorrect results can occur if this condition is not true.
The point spread function (PSF) is defined by the full (unfiltered) PSF image loaded into Sherpa or the PSF model expression evaluated over the full range of the dataset; both types of PSFs are established with the
load_psf
command. The kernel is the subsection of the PSF image or model which is used to convolve the data. This subsection is created from the PSF when the size and center of the kernel are defined by the commandset_psf
. While the kernel and PSF might be congruent, defining a smaller kernel helps speed the convolution process by restricting the number of points within the PSF that must be evaluated.In a 1-D PSF model, a radial profile or 1-D model array is used to convolve (fold) the given source model using the Fast Fourier Transform (FFT) technique. In a 2-D PSF model, an image or 2-D model array is used.
The parameters of a PSF model include:
- kernel
The data used for the convolution (file name or model instance).
- size
The number of pixels used in the convolution (this can be a subset of the full PSF). This is a scalar (1D) or a sequence (2D, width then height) value.
- center
The center of the kernel. This is a scalar (1D) or a sequence (2D, width then height) value. The kernel centroid must always be at the center of the extracted sub-image, otherwise, systematic shifts will occur in the best-fit positions.
- radial
Set to
1
to use a symmetric array. The default is0
to reduce edge effects.- norm
Should the kernel be normalized so that it sums to 1? This summation is done over the full data set (not the subset defined by the
size
parameter). The default is1
(yes).
Examples
Use the data in the ASCII file ‘line_profile.dat’ as the PSF for the default data set:
>>> load_psf('psf1', 'line_profile.dat') >>> set_psf(psf1)
Use the same PSF for different data sets:
>>> load_psf('p1', 'psf.img') >>> load_psf('p2', 'psf.img') >>> set_psf(1, 'p1') >>> set_psf(2, 'p2')
Restrict the convolution to a sub-set of the PSF data and compare the two:
>>> set_psf(psf1) >>> psf1.size = (41,41) >>> image_psf() >>> image_kernel(newframe=True, tile=True)
- set_quality(id, val=None, bkg_id=None)[source] [edit on github]
Apply a set of quality flags to a PHA data set.
A quality value of 0 indicates a good channel, otherwise (values >=1) the channel is considered bad and can be excluded using the
ignore_bad
function, as discussed in [1]_.- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.val (array of int) – This must be an array of quality values of the same length as the data array.
bkg_id (int or str, optional) – Set if the quality values should be associated with the background associated with the data set.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
fit
Fit one or more data sets.
get_quality
Return the quality array for a PHA data set.
ignore_bad
Exclude channels marked as bad in a PHA data set.
load_quality
Load the quality array from a file and add to a PHA data set.
set_grouping
Apply a set of grouping flags to a PHA data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
val
parameter. If given two un-named arguments, then they are interpreted as theid
andval
parameters, respectively.The meaning of the quality column is taken from [1]_, which says that 0 indicates a “good” channel, 1 and 2 are for channels that are identified as “bad” or “dubious” (respectively) by software, 5 indicates a “bad” channel set by the user, and values of 3 or 4 are not used.
References
- 1
“The OGIP Spectral File Format”, https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
Copy the quality array from data set 2 into the default data set, and then ensure that any ‘bad’ channels are ignored:
>>> qual = get_data(2).quality >>> set_quality(qual) >>> ignore_bad()
Copy the quality array from data set “src1” to the source and background data sets of “src2”:
>>> qual = get_data("src1").quality >>> set_quality("src2", qual) >>> set_quality("src2", qual, bkg_id=1)
- set_rmf(id, rmf=None, resp_id=None, bkg_id=None)[source] [edit on github]
Set the RMF for use by a PHA data set.
Set the redistribution matrix for a PHA data set, or its background.
- Parameters
id (int or str, optional) – The data set to use. If not given then the default identifier is used, as returned by
get_default_id
.rmf – An RMF, such as returned by
get_rmf
orunpack_rmf
.resp_id (int or str, optional) – The identifier for the RMF within this data set, if there are multiple responses.
bkg_id (int or str, optional) – Set this to identify the RMF as being for use with the background.
See also
get_rmf
Return the RMF associated with a PHA data set.
load_pha
Load a file as a PHA data set.
load_rmf
Load a RMF from a file and add it to a PHA data set.
set_full_model
Define the convolved model expression for a data set.
set_arf
Set the ARF for use by a PHA data set.
unpack_rmf
Create a RMF data structure.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
rmf
parameter. If given two un-named arguments, then they are interpreted as theid
andrmf
parameters, respectively. The remaining parameters are expected to be given as named arguments.If a PHA data set has an associated RMF - either from when the data was loaded or explicitly with the
set_rmf
function - then the model fit to the data will include the effect of the RMF when the model is created withset_model
orset_source
. In this case theget_source
function returns the user model, andget_model
the model that is fit to the data (i.e. it includes any response information; that is the ARF and RMF, if set). To include the RMF explicitly, useset_full_model
.Examples
Copy the RMF from the default data set to data set 2:
>>> rmf1 = get_rmf() >>> set_rmf(2, rmf1)
Read in a RMF from the file ‘bkg.rmf’ and set it as the RMF for the background model of data set “core”:
>>> rmf = unpack_rmf('bkg.rmf') >>> set_rmf('core', rmf, bkg_id=1)
- set_sampler(sampler)[source] [edit on github]
Set the MCMC sampler.
The sampler determines the type of jumping rule to be used when running the MCMC analysis.
- Parameters
sampler (str or
sherpa.sim.Sampler
instance) – When a string, the name of the sampler to use (case insensitive). The supported options are given by thelist_samplers
function.
See also
get_draws
Run the pyBLoCXS MCMC algorithm.
list_samplers
List the MCMC samplers.
set_sampler
Set the MCMC sampler.
set_sampler_opt
Set an option for the current MCMC sampler.
Notes
The jumping rules are:
- MH
The Metropolis-Hastings rule, which jumps from the best-fit location, even if the previous iteration had moved away from it.
- MetropolisMH
This is the Metropolis with Metropolis-Hastings algorithm, that jumps from the best-fit with probability
p_M
, otherwise it jumps from the last accepted jump. The value ofp_M
can be changed usingset_sampler_opt
.- PragBayes
This is used when the effective area calibration uncertainty is to be included in the calculation. At each nominal MCMC iteration, a new calibration product is generated, and a series of N (the
nsubiters
option) MCMC sub-iteration steps are carried out, choosing between Metropolis and Metropolis-Hastings types of samplers with probabilityp_M
. Only the last of these sub-iterations are kept in the chain. Thensubiters
andp_M
values can be changed usingset_sampler_opt
.- FullBayes
Another sampler for use when including uncertainties due to the effective area.
Examples
>>> set_sampler('metropolismh')
- set_sampler_opt(opt, value)[source] [edit on github]
Set an option for the current MCMC sampler.
- Parameters
opt (str) – The option to change. Use
get_sampler
to view the available options for the current sampler.value – The value for the option.
See also
get_sampler
Return the current MCMC sampler options.
set_prior
Set the prior function to use with a parameter.
set_sampler
Set the MCMC sampler.
Notes
The options depend on the sampler. The options include:
- defaultprior
Set to
False
when the default prior (flat, between the parameter’s soft limits) should not be used. Useset_prior
to set the form of the prior for each parameter.- inv
A bool, or array of bools, to indicate which parameter is on the inverse scale.
- log
A bool, or array of bools, to indicate which parameter is on the logarithm (natural log) scale.
- original
A bool, or array of bools, to indicate which parameter is on the original scale.
- p_M
The proportion of jumps generatd by the Metropolis jumping rule.
- priorshape
An array of bools indicating which parameters have a user-defined prior functions set with
set_prior
.- scale
Multiply the output of
covar
by this factor and use the result as the scale of the t-distribution.
Examples
>>> set_sampler_opt('scale', 3)
- set_source(id, model=None) [edit on github]
Set the source model expression for a data set.
The function is available as both
set_model
andset_source
. The model fit to the data can be further modified by instrument responses which can be set explicitly - e.g. byset_psf
- or be defined automatically by the type of data being used (e.g. the ARF and RMF of a PHA data set). Theset_full_model
command can be used to explicitly include the instrument response if necessary.- Parameters
id (int or str, optional) – The data set containing the source expression. If not given then the default identifier is used, as returned by
get_default_id
.model (str or sherpa.models.Model object) – This defines the model used to fit the data. It can be a Python expression or a string version of it.
See also
delete_model
Delete the model expression from a data set.
fit
Fit one or more data sets.
freeze
Fix model parameters so they are not changed by a fit.
get_source
Return the source model expression for a data set.
integrate1d
Integrate 1D source expressions.
sherpa.astro.ui.set_bkg_model
Set the background model expression for a data set.
set_full_model
Define the convolved model expression for a data set.
show_model
Display the source model expression for a data set.
set_par
Set the value, limits, or behavior of a model parameter.
thaw
Allow model parameters to be varied during a fit.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
model
parameter. If given two un-named arguments, then they are interpreted as theid
andmodel
parameters, respectively.PHA data sets will automatically apply the instrumental response (ARF and RMF) to the source expression. For some cases this is not useful - for example, when different responses should be applied to different model components - in which case
set_full_model
should be used instead.Model caching is available via the model
cache
attribute. A non-zero value for this attribute means that the results of evaluating the model will be cached if all the parameters are frozen, which may lead to a reduction in the time taken to evaluate a fit. A zero value turns off the caching. The default setting for X-Spec and 1D analytic models is thatcache
is5
, but0
for the 2D analytic models.The
integrate1d
model can be used to apply a numerical integration to an arbitrary model expression.Examples
Create an instance of the
powlaw1d
model type, calledpl
, and use it as the model for the default data set.>>> set_model(polynom1d.pl)
Create a model for the default dataset which is the
xsphabs
model multiplied by the sum of anxsapec
andpowlaw1d
models (the model components are identified by the labelsgal
,clus
, andpl
).>>> set_model(xsphabs.gal * (xsapec.clus + powlaw1d.pl))
Repeat the previous example, using a string to define the model expression:
>>> set_model('xsphabs.gal * (xsapec.clus + powlaw1d.pl)')
Use the same model component (
src
, agauss2d
model) for the two data sets (‘src1’ and ‘src2’).>>> set_model('src1', gauss2d.src + const2d.bgnd1) >>> set_model('src2', src + const2d.bgnd2)
Share an expression - in this case three gaussian lines - between three data sets. The normalization of this line complex is allowed to vary in data sets 2 and 3 (the
norm2
andnorm3
components of theconst1d
model), and each data set has a separatepolynom1d
component (bgnd1
,bgnd2
, andbgnd3
). Thec1
parameters of thepolynom1d
model components are thawed and then linked together (to reduce the number of free parameters):>>> lines = gauss1d.l1 + gauss1d.l2 + gauss1d.l3 >>> set_model(1, lines + polynom1d.bgnd1) >>> set_model(2, lines * const1d.norm2 + polynom1d.bgnd2) >>> set_model(3, lines * const1d.norm3 + polynom1d.bgnd3) >>> thaw(bgnd1.c1, bgnd2.c1, bgnd3.c1) >>> link(bgnd2.c2, bgnd1.c1) >>> link(bgnd3.c3, bgnd1.c1)
For this expression, the
gal
component is frozen, so it is not varied in the fit. Thecache
attribute is set to a non-zero value to ensure that it is cached during a fit (this is actually the default value for this model so it not normally needed).>>> set_model(xsphabs.gal * (xsapec.clus + powlaw1d.pl)) >>> gal.nh = 0.0971 >>> freeze(gal) >>> gal.cache = 1
- set_stat(stat)[source] [edit on github]
Set the statistical method.
Changes the method used to evaluate the fit statistic, that is the numerical measure that determines how closely the model represents the data.
- Parameters
stat (str or sherpa.stats.Stat instance) – When a string, the name of the statistic (case is not important): see
list_stats()
for supported values. Otherwise an instance of the statistic to use.- Raises
sherpa.utils.err.ArgumentErr – If the
stat
argument is not recognized.
See also
calc_stat
Calculate the statistic value for a dataset.
get_stat_name
Return the current statistic method.
list_stats
List the supported fit statistics.
load_user_stat
Create a user-defined statistic.
Notes
The available statistics include:
- cash
A maximum likelihood function [1]_.
- chi2
Chi-squared statistic using the supplied error values.
- chi2constvar
Chi-squared with constant variance computed from the counts data.
- chi2datavar
Chi-squared with data variance.
- chi2gehrels
Chi-squared with gehrels method [2]_. This is the default method.
- chi2modvar
Chi-squared with model amplitude variance.
- chi2xspecvar
Chi-squared with data variance (XSPEC-style, variance = 1.0 if data less than or equal to 0.0).
- cstat
A maximum likelihood function (the XSPEC implementation of the Cash function) [3]_. This does not include support for including the background.
- wstat
A maximum likelihood function which includes the background data as part of the fit (i.e. for when it is not being explicitly modelled) (the XSPEC implementation of the Cash function) [3]_.
- leastsq
The least-squares statisic (the error is not used in this statistic).
References
- 1
Cash, W. “Parameter estimation in astronomy through application of the likelihood ratio”, ApJ, vol 228, p. 939-947 (1979). http://adsabs.harvard.edu/abs/1979ApJ…228..939C
- 2
Gehrels, N. “Confidence limits for small numbers of events in astrophysical data”, ApJ, vol 303, p. 336-346 (1986). http://adsabs.harvard.edu/abs/1986ApJ…303..336G
- 3
https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XSappendixStatistics.html
Examples
>>> set_stat('cash')
- set_staterror(id, val=None, fractional=False, bkg_id=None)[source] [edit on github]
Set the statistical errors on the dependent axis of a data set.
These values over-ride the errors calculated by any statistic, such as
chi2gehrels
orchi2datavar
.- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.val (array or scalar) – The systematic error.
fractional (bool, optional) – If
False
(the default value), then theval
parameter is the absolute value, otherwise theval
parameter represents the fractional error, so the absolute value is calculated asget_dep() * val
(andval
must be a scalar).bkg_id (int or str, optional) – Set to identify which background component to set. The default value (
None
) means that this is for the source component of the data set.
See also
load_staterror
Load the statistical errors from a file.
load_syserror
Load the systematic errors from a file.
set_syserror
Set the systematic errors on the dependent axis of a data set.
get_error
Return the errors on the dependent axis of a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
val
parameter. If given two un-named arguments, then they are interpreted as theid
andval
parameters, respectively.Examples
Set the statistical error for the default data set to the value in
dys
(a scalar or an array):>>> set_staterror(dys)
Set the statistical error on the ‘core’ data set to be 5% of the data values:
>>> set_staterror('core', 0.05, fractional=True)
- set_syserror(id, val=None, fractional=False, bkg_id=None)[source] [edit on github]
Set the systematic errors on the dependent axis of a data set.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by
get_default_id
.val (array or scalar) – The systematic error.
fractional (bool, optional) – If
False
(the default value), then theval
parameter is the absolute value, otherwise theval
parameter represents the fractional error, so the absolute value is calculated asget_dep() * val
(andval
must be a scalar).bkg_id (int or str, optional) – Set to identify which background component to set. The default value (
None
) means that this is for the source component of the data set.
See also
load_staterror
Set the statistical errors on the dependent axis of a data set.
load_syserror
Set the systematic errors on the dependent axis of a data set.
set_staterror
Set the statistical errors on the dependent axis of a data set.
get_error
Return the errors on the dependent axis of a data set.
Notes
The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the
val
parameter. If given two un-named arguments, then they are interpreted as theid
andval
parameters, respectively.Examples
Set the systematic error for the default data set to the value in
dys
(a scalar or an array):>>> set_syserror(dys)
Set the systematic error on the ‘core’ data set to be 5% of the data values:
>>> set_syserror('core', 0.05, fractional=True)
- set_xlinear(plottype='all')[source] [edit on github]
New plots will display a linear X axis.
This setting only affects plots created after the call to
set_xlinear
.- Parameters
plottype (optional) – The type of plot that is to use a log-scaled X axis. The options are the same as accepted by
plot
, together with the ‘all’ option (which is the default setting).
See also
plot
Create one or more plot types.
set_xlog
New plots will display a logarithmically-scaled X axis.
set_ylinear
New plots will display a linear Y axis.
Examples
Use a linear X axis for ‘data’ plots:
>>> set_xlinear('data') >>> plot('data', 'arf')
All plots use a linear scale for the X axis.
>>> set_xlinear() >>> plot_fit()
- set_xlog(plottype='all')[source] [edit on github]
New plots will display a logarithmically-scaled X axis.
This setting only affects plots created after the call to
set_xlog
.- Parameters
plottype (optional) – The type of plot that is to use a log-scaled X axis. The options are the same as accepted by
plot
, together with the ‘all’ option (which is the default setting).
See also
plot
Create one or more plot types.
set_xlinear
New plots will display a linear X axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Examples
Use a logarithmic scale for the X axis of ‘data’ plots:
>>> set_xlog('data') >>> plot('data', 'arf')
All plots use a logarithmic scale for the X axis.
>>> set_xlog() >>> plot_fit()
- set_ylinear(plottype='all')[source] [edit on github]
New plots will display a linear Y axis.
This setting only affects plots created after the call to
set_ylinear
.- Parameters
plottype (optional) – The type of plot that is to use a log-scaled X axis. The options are the same as accepted by
plot
, together with the ‘all’ option (which is the default setting).
See also
plot
Create one or more plot types.
set_xlinear
New plots will display a linear X axis.
set_ylog
New plots will display a logarithmically-scaled Y axis.
Examples
Use a linear Y axis for ‘data’ plots:
>>> set_ylinear('data') >>> plot('data', 'arf')
All plots use a linear scale for the Y axis.
>>> set_ylinear() >>> plot_fit()
- set_ylog(plottype='all')[source] [edit on github]
New plots will display a logarithmically-scaled Y axis.
This setting only affects plots created after the call to
set_ylog
.- Parameters
plottype (optional) – The type of plot that is to use a log-scaled X axis. The options are the same as accepted by
plot
, together with the ‘all’ option (which is the default setting).
See also
plot
Create one or more plot types.
set_xlog
New plots will display a logarithmically-scaled x axis.
set_ylinear
New plots will display a linear Y axis.
Examples
Use a logarithmic scale for the Y axis of ‘data’ plots:
>>> set_ylog('data') >>> plot('data', 'arf')
All plots use a logarithmic scale for the Y axis.
>>> set_ylog() >>> plot_fit()
- show_all(id=None, outfile=None, clobber=False)[source] [edit on github]
Report the current state of the Sherpa session.
Display information about one or all of the data sets that have been loaded into the Sherpa session. The information shown includes that provided by the other
show_xxx
routines, and depends on the type of data that is loaded.- Parameters
id (int or str, optional) – The data set. If not given then all data sets are displayed.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
See also
clean
Clear all stored session data.
list_data_ids
List the identifiers for the loaded data sets.
save
Save the current Sherpa session to a file.
sherpa.astro.ui.save_all
Save the Sherpa session as an ASCII file.
sherpa.astro.ui.show_bkg
Show the details of the PHA background data sets.
sherpa.astro.ui.show_bkg_model
Display the background model expression for a data set.
sherpa.astro.ui.show_bkg_source
Display the background model expression for a data set.
show_conf
Display the results of the last conf evaluation.
show_covar
Display the results of the last covar evaluation.
show_data
Summarize the available data sets.
show_filter
Show any filters applied to a data set.
show_fit
Summarize the fit results.
show_kernel
Display any kernel applied to a data set.
show_method
Display the current optimization method and options.
show_model
Display the model expression used to fit a data set.
show_proj
Display the results of the last proj evaluation.
show_psf
Display any PSF model applied to a data set.
show_source
Display the source model expression for a data set.
show_stat
Display the current fit statistic.
- show_bkg(id=None, bkg_id=None, outfile=None, clobber=False)[source] [edit on github]
Show the details of the PHA background data sets.
This displays information about the background, or backgrounds, for the loaded data sets. This includes: any filters, the grouping settings, mission-specific header keywords, and the details of any associated instrument responses files (ARF, RMF).
- Parameters
id (int or str, optional) – The data set. If not given then all background data sets are displayed.
bkg_id (int or str, optional) – The background component to display. The default is all components.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
See also
list_model_ids
List of all the data sets with a source expression.
load_bkg
Load the background from a file and add it to a PHA data set.
show_all
Report the current state of the Sherpa session.
- show_bkg_model(id=None, bkg_id=None, outfile=None, clobber=False)[source] [edit on github]
Display the background model expression used to fit a data set.
This displays the model used to the the background data set, that is, the expression set by
set_bkg_model
orset_bkg_source
combined with any instrumental responses, together with the parameter values for the model. Theshow_bkg_source
function displays just the background model, without the instrument components (if any).- Parameters
id (int or str, optional) – The data set. If not given then all background expressions are displayed.
bkg_id (int or str, optional) – The background component to display. The default is all components.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
See also
list_model_ids
List of all the data sets with a source expression.
set_bkg_model
Set the background model expression for a data set.
show_all
Report the current state of the Sherpa session.
show_model
Display the model expression used to fit a data set.
show_bkg_source
Display the background model expression for a data set.
- show_bkg_source(id=None, bkg_id=None, outfile=None, clobber=False)[source] [edit on github]
Display the background model expression for a data set.
This displays the background model for a data set, that is, the expression set by
set_bkg_model
orset_bkg_source
, as well as the parameter values for the model. Theshow_bkg_model
function displays the model that is fit to the data; that is, it includes any instrument responses.- Parameters
id (int or str, optional) – The data set. If not given then all background expressions are displayed.
bkg_id (int or str, optional) – The background component to display. The default is all components.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
See also
list_model_ids
List of all the data sets with a source expression.
set_bkg_model
Set the background model expression for a data set.
show_all
Report the current state of the Sherpa session.
show_model
Display the model expression used to fit a data set.
show_bkg_model
Display the background model expression used to fit a data set.
- show_conf(outfile=None, clobber=False)[source] [edit on github]
Display the results of the last conf evaluation.
The output includes the best-fit model parameter values, associated confidence limits, choice of statistic, and details on the best fit location.
- Parameters
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
- show_covar(outfile=None, clobber=False)[source] [edit on github]
Display the results of the last covar evaluation.
The output includes the best-fit model parameter values, associated confidence limits, choice of statistic, and details on the best fit location.
- Parameters
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
- show_data(id=None, outfile=None, clobber=False)[source] [edit on github]
Summarize the available data sets.
Display information on the data sets that have been loaded. The details depend on the type of the data set (e.g. 1D, image, PHA files).
- Parameters
id (int or str, optional) – The data set. If not given then all data sets are displayed.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
See also
list_data_ids
List the identifiers for the loaded data sets.
show_all
Report the current state of the Sherpa session.
- show_filter(id=None, outfile=None, clobber=False)[source] [edit on github]
Show any filters applied to a data set.
Display any filters that have been applied to the independent axis or axes of the data set.
- Parameters
id (int or str, optional) – The data set. If not given then all data sets are displayed.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If
outfile
is notNone
, then this flag controls whether an existing file can be overwritten (True
) or if it raises an exception (False
, the default setting).
- Raises
sherpa.utils.err.IOErr – If
outfile
already exists andclobber
isFalse
.
See also
ignore
Exclude data from the fit.
sherpa.astro.ui.ignore2d
Exclude a spatial region from an image.
list_data_ids
List the identifiers for the loaded data sets.
notice
Include data in the fit.
sherpa.astro.ui.notice2d
Include a spatial region of an image.
show_all
Report the current state of the Sherpa session.
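Examples
Apply a filter to the default data set and then display it (a sketch; the filter range is illustrative):
>>> notice(0.5, 7)
>>> show_filter()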
- show_fit(outfile=None, clobber=False)[source] [edit on github]
Summarize the fit results.
Display the results of the last call to fit, including: optimization method, statistic, and details of the fit (it does not reflect any changes made after the fit, such as to the model expression or fit parameters).
- Parameters
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
fit
Fit one or more data sets.
get_fit_results
Return the results of the last fit.
list_data_ids
List the identifiers for the loaded data sets.
list_model_ids
List of all the data sets with a source expression.
show_all
Report the current state of the Sherpa session.
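Examples
Summarize the most-recent fit, first to the screen and then to a text file (a sketch; the file name is illustrative):
>>> fit()
>>> show_fit()
>>> show_fit(outfile='fit.txt', clobber=True)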
- show_kernel(id=None, outfile=None, clobber=False)[source] [edit on github]
Display any kernel applied to a data set.
The kernel represents the subset of the PSF model that is used to fit the data. The show_psf function shows the unfiltered version.
- Parameters
id (int or str, optional) – The data set. If not given then all data sets are displayed.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
image_kernel
Plot the 2D kernel applied to a data set.
list_data_ids
List the identifiers for the loaded data sets.
load_psf
Create a PSF model.
plot_kernel
Plot the 1D kernel applied to a data set.
set_psf
Add a PSF model to a data set.
show_all
Report the current state of the Sherpa session.
show_psf
Display any PSF model applied to a data set.
Notes
The point spread function (PSF) is defined by the full (unfiltered) PSF image or model expression evaluated over the full range of the dataset; both types of PSFs are established with load_psf. The kernel is the subsection of the PSF image or model which is used to convolve the data: this is changed using set_psf. While the kernel and PSF might be congruent, defining a smaller kernel helps speed the convolution process by restricting the number of points within the PSF that must be evaluated.
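Examples
Display the kernel after a PSF model has been set up (a sketch; the model name and file name are illustrative):
>>> load_psf('psf0', 'psf.fits')
>>> set_psf('psf0')
>>> show_kernel()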
- show_method(outfile=None, clobber=False)[source] [edit on github]
Display the current optimization method and options.
- Parameters
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
get_method
Return an optimization method.
get_method_opt
Return one or all options of the current optimization method.
show_all
Report the current state of the Sherpa session.
Examples
>>> set_method('levmar')
>>> show_method()
Optimization Method: LevMar
name    = levmar
ftol    = 1.19209289551e-07
xtol    = 1.19209289551e-07
gtol    = 1.19209289551e-07
maxfev  = x
epsfcn  = 1.19209289551e-07
factor  = 100.0
verbose = 0
- show_model(id=None, outfile=None, clobber=False)[source] [edit on github]
Display the model expression used to fit a data set.
This displays the model used to fit the data set, that is, the expression set by set_model or set_source combined with any instrumental responses, together with the parameter values of the model. The show_source function displays just the source expression, without the instrumental components (if any).
- Parameters
id (int or str, optional) – The data set. If not given then all source expressions are displayed.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
list_model_ids
List of all the data sets with a source expression.
set_model
Set the source model expression for a data set.
show_all
Report the current state of the Sherpa session.
show_source
Display the source model expression for a data set.
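Examples
Display the full model expression - the source model combined with any responses - for the default data set (a sketch; the model choice is illustrative):
>>> set_source(powlaw1d.pl)
>>> show_model()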
- show_proj(outfile=None, clobber=False)[source] [edit on github]
Display the results of the last proj evaluation.
The output includes the best-fit model parameter values, associated confidence limits, choice of statistic, and details on the best fit location.
- Parameters
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
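Examples
Run the proj error analysis and then display its results, writing them to a file (a sketch; the file name is illustrative):
>>> fit()
>>> proj()
>>> show_proj(outfile='proj.txt', clobber=True)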
- show_psf(id=None, outfile=None, clobber=False)[source] [edit on github]
Display any PSF model applied to a data set.
The PSF model represents the full model or data set that is applied to the source expression. The show_kernel function shows the filtered version.
- Parameters
id (int or str, optional) – The data set. If not given then all data sets are displayed.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
image_psf
View the 2D PSF model applied to a data set.
list_data_ids
List the identifiers for the loaded data sets.
load_psf
Create a PSF model.
plot_psf
Plot the 1D PSF model applied to a data set.
set_psf
Add a PSF model to a data set.
show_all
Report the current state of the Sherpa session.
show_kernel
Display any kernel applied to a data set.
Notes
The point spread function (PSF) is defined by the full (unfiltered) PSF image or model expression evaluated over the full range of the dataset; both types of PSFs are established with load_psf. The kernel is the subsection of the PSF image or model which is used to convolve the data: this is changed using set_psf. While the kernel and PSF might be congruent, defining a smaller kernel helps speed the convolution process by restricting the number of points within the PSF that must be evaluated.
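Examples
Compare the full PSF model with the kernel actually used in the convolution (a sketch):
>>> show_psf()
>>> show_kernel()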
- show_source(id=None, outfile=None, clobber=False)[source] [edit on github]
Display the source model expression for a data set.
This displays the source model for a data set, that is, the expression set by set_model or set_source, as well as the parameter values for the model. The show_model function displays the model that is fit to the data; that is, it includes any instrument responses.
- Parameters
id (int or str, optional) – The data set. If not given then all source expressions are displayed.
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
list_model_ids
List of all the data sets with a source expression.
set_model
Set the source model expression for a data set.
show_all
Report the current state of the Sherpa session.
show_model
Display the model expression used to fit a data set.
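Examples
Display the source expression - without any instrument responses - for the default data set (a sketch; the model choice is illustrative):
>>> set_source(powlaw1d.pl)
>>> show_source()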
- show_stat(outfile=None, clobber=False)[source] [edit on github]
Display the current fit statistic.
- Parameters
outfile (str, optional) – If not given the results are displayed to the screen, otherwise it is taken to be the name of the file to write the results to.
clobber (bool, optional) – If outfile is not None, then this flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.IOErr – If outfile already exists and clobber is False.
See also
calc_stat
Calculate the fit statistic for a data set.
calc_stat_info
Display the statistic values for the current models.
get_stat
Return a fit-statistic method.
show_all
Report the current state of the Sherpa session.
Examples
>>> set_stat('cash')
>>> show_stat()
Statistic: Cash
Maximum likelihood function
- simulfit(id=None, *otherids, **kwargs) [edit on github]
Fit a model to one or more data sets.
Use forward fitting to find the best-fit model to one or more data sets, given the chosen statistic and optimization method. The fit proceeds until the results converge or the number of iterations exceeds the maximum value (these values can be changed with set_method_opt). An iterative scheme can be added using set_iter_method to try and improve the fit. The final fit results are displayed to the screen and can be retrieved with get_fit_results.
- Parameters
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are fit simultaneously.
*otherids (int or str, optional) – Other data sets to use in the calculation.
outfile (str, optional) – If set, then the fit results will be written to a file with this name. The file contains the per-iteration fit results.
clobber (bool, optional) – This flag controls whether an existing file can be overwritten (True) or if it raises an exception (False, the default setting).
- Raises
sherpa.utils.err.FitErr – If outfile already exists and clobber is False.
See also
conf
Estimate parameter confidence intervals using the confidence method.
contour_fit
Contour the fit to a data set.
covar
Estimate the confidence intervals using the covariance method.
freeze
Fix model parameters so they are not changed by a fit.
get_fit_results
Return the results of the last fit.
plot_fit
Plot the fit results (data, model) for a data set.
image_fit
Display the data, model, and residuals for a data set in the image viewer.
set_stat
Set the statistical method.
set_method
Change the optimization method.
set_method_opt
Change an option of the current optimization method.
set_full_model
Define the convolved model expression for a data set.
set_iter_method
Set the iterative-fitting scheme used in the fit.
set_model
Set the model expression for a data set.
show_fit
Summarize the fit results.
thaw
Allow model parameters to be varied during a fit.
Examples
Simultaneously fit all data sets with models and then store the results in the variable fres:
>>> fit()
>>> fres = get_fit_results()
Fit just the data set ‘img’:
>>> fit('img')
Simultaneously fit data sets 1, 2, and 3:
>>> fit(1, 2, 3)
Fit data set ‘jet’ and write the fit results to the text file ‘jet.fit’, over-writing it if it already exists:
>>> fit('jet', outfile='jet.fit', clobber=True)
- subtract(id=None)[source] [edit on github]
Subtract the background estimate from a data set.
The subtract function performs a channel-by-channel subtraction of the background estimate from the data. After this command, anything that uses the data set - such as a plot, fit, or error analysis - will use the subtracted data. Models should be re-fit if subtract is called.
- Parameters
is called.- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
See also
fit
Fit one or more data sets.
unsubtract
Undo any background subtraction for the data set.
Notes
Unlike X-Spec [1]_, Sherpa does not automatically subtract the background estimate from the data.
Background subtraction can only be performed when data and background are of the same length. If the data and background are ungrouped, both must have same number of channels. If they are grouped, data and background can start with different numbers of channels, but must have the same number of groups after grouping.
The equation for the subtraction is:
src_counts - bg_counts * (src_exposure * src_backscal) / (bg_exposure * bg_backscal)
where src_exposure and bg_exposure are the source and background exposure times, and src_backscal and bg_backscal are the source and background backscales. The backscale, read from the BACKSCAL header keyword of the PHA file [2]_, is the ratio of data extraction area to total detector area.
The subtracted field of a data set is set to True when the background is subtracted.
References
- 1
https://heasarc.gsfc.nasa.gov/xanadu/xspec/manual/XspecSpectralFitting.html
- 2
https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/node5.html
Examples
Background subtract the default data set.
>>> subtract()
>>> get_data().subtracted
True
Remove the background from the data set labelled ‘src’:
>>> subtract('src')
>>> get_data('src').subtracted
True
Overplot the background-subtracted data on the original data for the default data set:
>>> plot_data()
>>> subtract()
>>> plot_data(overplot=True)
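The backscale-weighted subtraction above can be checked with a small stand-alone calculation (a sketch with hypothetical count, exposure, and BACKSCAL values; plain Python, no Sherpa session required):

```python
# Hypothetical source and background values, illustrating the
# channel-by-channel subtraction performed by subtract():
#   src_counts - bg_counts * (src_exposure * src_backscal)
#                          / (bg_exposure * bg_backscal)
src_counts = [210.0, 185.0, 94.0]
bg_counts = [40.0, 36.0, 20.0]
src_exposure, src_backscal = 10000.0, 2.0e-6
bg_exposure, bg_backscal = 20000.0, 8.0e-6

# The scaling factor applied to the background counts.
scale = (src_exposure * src_backscal) / (bg_exposure * bg_backscal)

subtracted = [s - b * scale for s, b in zip(src_counts, bg_counts)]
print(scale)       # 0.125
print(subtracted)  # [205.0, 180.5, 91.5]
```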
- t_sample(num=1, dof=None, id=None, otherids=(), numcores=None)[source] [edit on github]
Sample the fit statistic by taking the parameter values from a Student’s t-distribution.
For each iteration (sample), change the thawed parameters by drawing values from a Student’s t-distribution, and calculate the fit statistic.
- Parameters
num (int, optional) – The number of samples to use (default is 1).
dof (optional) – The number of degrees of freedom to use (the default is to use the number from the current fit).
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns
A NumPy array table with the first column representing the statistic and later columns the parameters used.
- Return type
samples
See also
fit
Fit a model to one or more data sets.
normal_sample
Sample from the normal distribution.
set_model
Set the source model expression for a data set.
set_stat
Set the statistical method.
uniform_sample
Sample from a uniform distribution.
Examples
The model fit to the default data set has three free parameters. The median value of the statistic calculated by t_sample is returned:
>>> ans = t_sample(num=10000)
>>> ans.shape
(10000, 4)
>>> np.median(ans[:,0])
119.9764357725326
- thaw(*args)[source] [edit on github]
Allow model parameters to be varied during a fit.
The arguments can be parameters or models, in which case all parameters of the model are thawed. If no arguments are given then nothing is changed.
See also
Notes
The freeze function can be used to reverse this setting, so that parameters are "frozen" and so remain constant during a fit.
Certain parameters may be marked as "always frozen", in which case using the parameter in a call to thaw will raise an error. If the model is sent to thaw then the "always frozen" parameter will be skipped.
Examples
Ensure that the FWHM parameter of the line model (in this case a gauss1d model) will be varied in any fit:
>>> set_source(const1d.bgnd + gauss1d.line)
>>> thaw(line.fwhm)
>>> fit()
Thaw all parameters of the line model and then re-fit:
>>> thaw(line)
>>> fit()
Thaw the nh parameter of the gal model and the abund parameter of the src model:
>>> thaw(gal.nh, src.abund)
- ungroup(id=None, bkg_id=None)[source] [edit on github]
Turn off the grouping for a PHA data set.
A PHA data set can be grouped either because it contains grouping information [1]_, which is automatically applied when the data is read in with load_pha or load_data, or because the group_xxx set of routines has been used to dynamically re-group the data. The ungroup function removes this grouping (however it was created).
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
bkg_id (int or str, optional) – Set to ungroup the background associated with the data set.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
Notes
PHA data is often grouped to improve the signal to noise of the data, by decreasing the number of bins, so that a chi-square statistic can be used when fitting the data. After calling ungroup, anything that uses the data set - such as a plot, fit, or error analysis - will use the original data values. Models should be re-fit if ungroup is called; this may require a change of statistic depending on the counts per channel in the spectrum.
The grouping is implemented by separate arrays to the main data - the information is stored in the grouping and quality arrays of the PHA data set - so that a data set can be grouped and ungrouped many times, without losing information.
The grouped field of a PHA data set is set to False when the data is not grouped.
If subtracting the background estimate from a data set, the grouping applied to the source data set is used for both source and background data sets.
References
- 1
Arnaud., K. & George, I., “The OGIP Spectral File Format”, http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html
Examples
Ungroup the data in the default data set:
>>> ungroup()
>>> get_data().grouped
False
Ungroup the first background component of the ‘core’ data set:
>>> ungroup('core', bkg_id=1)
>>> get_bkg('core', bkg_id=1).grouped
False
- uniform_sample(num=1, factor=4, id=None, otherids=(), numcores=None)[source] [edit on github]
Sample the fit statistic by taking the parameter values from a uniform distribution.
For each iteration (sample), change the thawed parameters by drawing values from a uniform distribution, and calculate the fit statistic.
- Parameters
num (int, optional) – The number of samples to use (default is 1).
factor (number, optional) – Multiplier to expand the scale parameter (default is 4).
id (int or str, optional) – The data set that provides the data. If not given then all data sets with an associated model are used simultaneously.
otherids (sequence of int or str, optional) – Other data sets to use in the calculation.
numcores (optional) – The number of CPU cores to use. The default is to use all the cores on the machine.
- Returns
A NumPy array table with the first column representing the statistic and later columns the parameters used.
- Return type
samples
See also
fit
Fit a model to one or more data sets.
normal_sample
Sample from a normal distribution.
set_model
Set the source model expression for a data set.
set_stat
Set the statistical method.
t_sample
Sample from the Student’s t-distribution.
Examples
The model fit to the default data set has three free parameters. The median value of the statistic calculated by uniform_sample is returned:
>>> ans = uniform_sample(num=10000)
>>> ans.shape
(10000, 4)
>>> np.median(ans[:,0])
284.66534775948134
- unlink(par)[source] [edit on github]
Unlink a parameter value.
Remove any parameter link - created by link - for the parameter. The parameter value is reset to the value it had before link was called.
- Parameters
par (str or Parameter) – The parameter to unlink. If the parameter is not linked then nothing happens.
See also
Examples
>>> unlink(bgnd.ampl)
- unpack_arf(arg)[source] [edit on github]
Create an ARF data structure.
- Parameters
arg – Identify the ARF: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
- Returns
arf
- Return type
a
sherpa.astro.instrument.ARF1D
instance
See also
get_arf
Return the ARF associated with a PHA data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_bkg_arf
Load an ARF from a file and add it to the background of a PHA data set.
load_multi_arfs
Load multiple ARFs for a PHA data set.
load_pha
Load a file as a PHA data set.
load_rmf
Load a RMF from a file and add it to a PHA data set.
set_full_model
Define the convolved model expression for a data set.
Notes
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an ARF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
>>> arf1 = unpack_arf("arf1.fits")
>>> arf2 = unpack_arf("arf2.fits")
Read in an ARF using Crates:
>>> acr = pycrates.read_file("src.arf")
>>> arf = unpack_arf(acr)
Read in an ARF using AstroPy:
>>> hdus = astropy.io.fits.open("src.arf")
>>> arf = unpack_arf(hdus)
- unpack_arrays(*args)[source] [edit on github]
Create a sherpa data object from arrays of data.
The object returned by unpack_arrays can be used in a set_data call.
- Parameters
args (array_like) – Arrays of data. The order, and number, is determined by the dstype parameter, and listed in the load_arrays routine.
dstype – The data set type. The default is Data1D and values include: Data1D, Data1DInt, Data2D, Data2DInt, DataPHA, and DataIMG. The class is expected to be derived from sherpa.data.BaseData.
- Returns
The data set object matching the requested dstype parameter.
- Return type
instance
See also
get_data
Return the data set by identifier.
load_arrays
Create a data set from array values.
set_data
Set a data set.
unpack_data
Create a sherpa data object from a file.
Examples
Create a 1D (unbinned) data set from the values in the x and y arrays. Use the returned object to create a data set labelled “oned”:
>>> x = [1, 3, 7, 12]
>>> y = [2.3, 3.2, -5.4, 12.1]
>>> dat = unpack_arrays(x, y)
>>> set_data("oned", dat)
Include statistical errors on the data:
>>> edat = unpack_arrays(x, y, dy)
Create a “binned” 1D data set, giving the low, and high edges of the independent axis (xlo and xhi respectively) and the dependent values for this grid (y):
>>> hdat = unpack_arrays(xlo, xhi, y, Data1DInt)
Create a 3 column by 4 row image:
>>> ivals = np.arange(12)
>>> y, x = np.mgrid[0:3, 0:4]
>>> x = x.flatten()
>>> y = y.flatten()
>>> idat = unpack_arrays(x, y, ivals, (3, 4), DataIMG)
- unpack_ascii(filename, ncols=2, colkeys=None, dstype=<class 'sherpa.data.Data1D'>, sep=' ', comment='#')[source] [edit on github]
Unpack an ASCII file into a data structure.
- Parameters
filename (str) – The name of the file to read in. Selection of the relevant column depends on the I/O library in use (Crates or AstroPy).
ncols (int, optional) – The number of columns to read in (the first ncols columns in the file). The meaning of the columns is determined by the dstype parameter.
colkeys (array of str, optional) – An array of the column names to read in. The default is None.
sep (str, optional) – The separator character. The default is ' '.
comment (str, optional) – The comment character. The default is '#'.
dstype (optional) – The data class to use. The default is Data1D and it is expected to be derived from sherpa.data.BaseData.
- Returns
The type of the returned object is controlled by the dstype parameter.
- Return type
instance
See also
load_ascii
Load an ASCII file as a data set.
set_data
Set a data set.
unpack_table
Unpack a FITS binary file into a data structure.
Examples
Read in the first two columns of the file, as the independent (X) and dependent (Y) columns of a data set:
>>> d = unpack_ascii('sources.dat')
Read in the first three columns (the third column is taken to be the error on the dependent variable):
>>> d = unpack_ascii('sources.dat', ncols=3)
Read in from columns ‘col2’ and ‘col3’:
>>> d = unpack_ascii('tbl.dat', colkeys=['col2', 'col3'])
The first three columns are taken to be the two independent axes of a two-dimensional data set (x0 and x1) and the dependent value (y):
>>> d = unpack_ascii('fields.dat', ncols=3,
...                  dstype=sherpa.astro.data.Data2D)
When using the Crates I/O library, the file name can include CIAO Data Model syntax, such as column selection. This can also be done using the colkeys parameter, as shown above:
>>> d = unpack_ascii('tbl.dat[cols rmid,sur_bri,sur_bri_err]',
...                  ncols=3)
- unpack_bkg(arg, use_errors=False)[source] [edit on github]
Create a PHA data structure for a background data set.
Any instrument information referenced in the header of the PHA file - e.g. with the ANCRFILE and RESPFILE keywords - will also be loaded. Unlike unpack_pha, background files will not be loaded.
- Parameters
arg – Identify the PHA file: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
use_errors (bool, optional) – If True then the statistical errors are taken from the input data, rather than calculated by Sherpa from the count values. The default is False.
- Returns
pha
- Return type
a
sherpa.astro.data.DataPHA
instance
See also
Examples
>>> pha1 = unpack_pha("src1.pi")
>>> pha2 = unpack_bkg("field.pi")
>>> set_data(1, pha1)
>>> set_bkg(1, pha2)
Read in a background PHA file using Crates:
>>> cr = pycrates.read_file("bg.fits")
>>> pha = unpack_bkg(cr)
Read in a background PHA file using AstroPy:
>>> hdus = astropy.io.fits.open("bg.fits")
>>> pha = unpack_bkg(hdus)
- unpack_data(filename, *args, **kwargs)[source] [edit on github]
Create a sherpa data object from a file.
The object returned by unpack_data can be used in a set_data call. The data types supported are those supported by unpack_pha, unpack_image, unpack_table, and unpack_ascii.
- Parameters
filename – A file name or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: e.g. a PHACrateDataset, TABLECrate, or IMAGECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
args – The arguments supported by unpack_pha, unpack_image, unpack_table, and unpack_ascii.
kwargs – The keyword arguments supported by unpack_pha, unpack_image, unpack_table, and unpack_ascii.
- Returns
The data set object.
- Return type
instance
See also
get_data
Return the data set by identifier.
load_arrays
Create a data set from array values.
set_data
Set a data set.
unpack_arrays
Create a sherpa data object from arrays of data.
unpack_ascii
Unpack an ASCII file into a data structure.
unpack_image
Create an image data structure.
unpack_pha
Create a PHA data structure.
unpack_table
Unpack a FITS binary file into a data structure.
Examples
Create a data object from the contents of the file “src.dat” and use it to create a Sherpa data set called “src”:
>>> dat = unpack_data('src.dat')
>>> set_data('src', dat)
- unpack_image(arg, coord='logical', dstype=<class 'sherpa.astro.data.DataIMG'>)[source] [edit on github]
Create an image data structure.
- Parameters
arg – Identify the data: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: an IMAGECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
coord ({ 'logical', 'image', 'physical', 'world', 'wcs' }, optional) – Ensure that the image contains the given coordinate system.
dstype (optional) – The image class to use. The default is DataIMG.
- Returns
The class of the returned object is controlled by the dstype parameter.
- Return type
img
- Raises
sherpa.utils.err.DataErr – If the image does not contain the requested coordinate system.
See also
load_image
Load an image as a data set.
set_data
Set a data set.
Examples
>>> img1 = unpack_image("img.fits")
>>> set_data(img1)
>>> img = unpack_image('img.fits', 'physical')
Read in an image using Crates:
>>> cr = pycrates.read_file('broad.img')
>>> idata = unpack_image(cr)
Read in an image using AstroPy:
>>> hdus = astropy.io.fits.open('broad.img')
>>> idata = unpack_image(hdus)
- unpack_pha(arg, use_errors=False)[source] [edit on github]
Create a PHA data structure.
Any instrument or background data sets referenced in the header of the PHA file - e.g. with the ANCRFILE, RESPFILE, and BACKFILE keywords - will also be loaded.
- Parameters
arg – Identify the PHA file: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
use_errors (bool, optional) – If True then the statistical errors are taken from the input data, rather than calculated by Sherpa from the count values. The default is False.
- Returns
pha
- Return type
a
sherpa.astro.data.DataPHA
instance
See also
Examples
>>> pha1 = unpack_pha("src1.pi")
>>> pha2 = unpack_pha("field.pi")
>>> set_data(1, pha1)
>>> set_bkg(1, pha2)
Read in a PHA file using Crates:
>>> cr = pycrates.read_file("src.fits")
>>> pha = unpack_pha(cr)
Read in a PHA file using AstroPy:
>>> hdus = astropy.io.fits.open("src.fits")
>>> pha = unpack_pha(hdus)
- unpack_rmf(arg)[source] [edit on github]
Create a RMF data structure.
- Parameters
arg – Identify the RMF: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a RMFCrateDataset for crates, as used by CIAO, or a list of AstroPy HDU objects.
- Returns
rmf
- Return type
a
sherpa.astro.instrument.RMF1D
instance
See also
get_rmf
Return the RMF associated with a PHA data set.
load_arf
Load an ARF from a file and add it to a PHA data set.
load_bkg_rmf
Load a RMF from a file and add it to the background of a PHA data set.
load_multi_rmfs
Load multiple RMFs for a PHA data set.
load_pha
Load a file as a PHA data set.
load_rmf
Load a RMF from a file and add it to a PHA data set.
set_full_model
Define the convolved model expression for a data set.
Notes
The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an RMF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
Examples
>>> rmf1 = unpack_rmf("rmf1.fits")
>>> rmf2 = unpack_rmf("rmf2.fits")
Read in a RMF using Crates:
>>> acr = pycrates.read_rmf("src.rmf")
>>> rmf = unpack_rmf(acr)
Read in a RMF using AstroPy:
>>> hdus = astropy.io.fits.open("src.rmf")
>>> rmf = unpack_rmf(hdus)
- unpack_table(filename, ncols=2, colkeys=None, dstype=<class 'sherpa.data.Data1D'>)[source] [edit on github]
Unpack a FITS binary file into a data structure.
- Parameters
filename – Identify the file to read: a file name, or a data structure representing the data to use, as used by the I/O backend in use by Sherpa: a TABLECrate for crates, as used by CIAO, or a list of AstroPy HDU objects.
ncols (int, optional) – The number of columns to read in (the first ncols columns in the file). The meaning of the columns is determined by the dstype parameter.
colkeys (array of str, optional) – An array of the column names to read in. The default is None.
dstype (optional) – The data class to use. The default is Data1D and it is expected to be derived from sherpa.data.BaseData.
- Returns
The class of the returned object is controlled by the dstype parameter.
- Return type
instance
See also
load_table
Load a FITS binary file as a data set.
set_data
Set a data set.
unpack_ascii
Unpack an ASCII file into a data structure.
Examples
Read in the first two columns of the file, as the independent (X) and dependent (Y) columns of a data set:
>>> d = unpack_table('sources.fits')
Read in the first three columns (the third column is taken to be the error on the dependent variable):
>>> d = unpack_table('sources.fits', ncols=3)
Read in from columns ‘RMID’ and ‘SUR_BRI’:
>>> d = unpack_table('rprof.fits', colkeys=['RMID', 'SUR_BRI'])
The first three columns are taken to be the two independent axes of a two-dimensional data set (x0 and x1) and the dependent value (y):
>>> d = unpack_table('fields.fits', ncols=3,
...                  dstype=sherpa.astro.data.Data2D)
When using the Crates I/O library, the file name can include CIAO Data Model syntax, such as column selection. This can also be done using the colkeys parameter, as shown above:
>>> d = unpack_table('rprof.fits[cols rmid,sur_bri,sur_bri_err]',
...                  ncols=3)
- unsubtract(id=None)[source] [edit on github]
Undo any background subtraction for the data set.
The unsubtract function undoes any changes made by subtract. After this command, anything that uses the data set - such as a plot, fit, or error analysis - will use the original data values. Models should be re-fit if unsubtract is called.
- Parameters
id (int or str, optional) – The identifier for the data set to use. If not given then the default identifier is used, as returned by get_default_id.
- Raises
sherpa.utils.err.ArgumentErr – If the data set does not contain a PHA data set.
Notes
The subtracted field of a PHA data set is set to False when the background is not subtracted.
Examples
Remove the background subtraction from the default data set.
>>> unsubtract()
>>> get_data().subtracted
False
Remove the background subtraction from the data set labelled ‘src’:
>>> unsubtract('src')
>>> get_data('src').subtracted
False