# DataPHA¶

class sherpa.astro.data.DataPHA(name, channel, counts, staterror=None, syserror=None, bin_lo=None, bin_hi=None, grouping=None, quality=None, exposure=None, backscal=None, areascal=None, header=None)

PHA data set, including any associated instrument and background data.

The PHA format is described in an OGIP document [1].

Parameters
• name (str) – The name of the data set; often set to the name of the file containing the data.

• channel (array of int) – The channel numbers of the PHA data.

• counts (array of int) – The counts recorded in each channel.

• staterror (scalar or array or None, optional) – The statistical error for the data, if defined.

• syserror (scalar or array or None, optional) – The systematic error for the data, if defined.

• bin_lo (array or None, optional) –

• bin_hi (array or None, optional) –

• grouping (array of int or None, optional) –

• quality (array of int or None, optional) –

• exposure (number or None, optional) – The exposure time for the PHA data set, in seconds.

• backscal (scalar or array or None, optional) –

• areascal (scalar or array or None, optional) –

• header (dict or None, optional) –

name

Used to store the file name, for data read from a file.

Type

str

channel
counts
staterror
syserror
bin_lo
bin_hi
grouping
quality
exposure
backscal
areascal

Notes

The original data is stored in the attributes - e.g. counts - and the data-access methods, such as get_dep and get_staterror, provide any necessary data manipulation to handle cases such as: background subtraction, filtering, and grouping.

The handling of the AREASCAL value - whether it is a scalar or array - is currently in flux. It is a value that is stored with the PHA file, and the OGIP PHA standard ([1]) describes the observed counts being divided by the area scaling before comparison to the model. However, this is not valid for Poisson-based statistics, and is also not how XSPEC handles AREASCAL ([2]); the AREASCAL values are used to scale the exposure times instead. The aim is to add this logic to the instrument models in sherpa.astro.instrument, such as sherpa.astro.instrument.RMFModelPHA. The area scaling still has to be applied when calculating the background contribution to a spectrum, as well as when calculating the data and model values used for plots (following XSPEC so as to avoid sharp discontinuities where the area-scaling factor changes strongly).

References

[1]

“The OGIP Spectral File Format”, https://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html

[2]

Private communication with Keith Arnaud

Attributes Summary

• background_ids – IDs of defined background data sets

• default_background_id – The identifier for the background component when not set.

• dep – Left for compatibility with older versions

• grouped – Are the data grouped?

• indep – Return the grid of the data space associated with this data set.

• mask – Mask array for dependent variable

• plot_fac – Number of times to multiply the y-axis quantity by x-axis bin size

• primary_response_id – The identifier for the response component when not set.

• rate – Quantity of the y-axis: counts or counts/sec

• response_ids – IDs of defined instrument responses (ARF/RMF pairs)

• subtracted – Are the background data subtracted?

• units – Units of the independent axis

• x – Used for compatibility, in particular for __str__ and __repr__

Methods Summary

• apply_filter(data[, groupfunc]) – Filter the array data, first passing it through apply_grouping() (using groupfunc) and then applying the general filters

• apply_grouping(data[, groupfunc]) – Apply the data set’s grouping scheme to the array data, combining the grouped data points with groupfunc, and return the grouped array.

• delete_background([id]) – Remove the background component.

• delete_response([id]) – Remove the response component.

• eval_model(modelfunc)

• eval_model_to_fit(modelfunc)

• get_analysis() – Return the units used when fitting spectral data.

• get_areascal([group, filter]) – Return the fractional area factor of the PHA data set.

• get_arf([id]) – Return the ARF from the response.

• get_background([id]) – Return the background component.

• get_background_scale([bkg_id, units, group, …]) – Return the correction factor for the background dataset.

• get_backscal([group, filter]) – Return the background scaling of the PHA data set.

• get_dep([filter]) – Return the dependent axis of a data set.

• get_dims([filter]) – Return the dimensions of this data space as a tuple of tuples.

• get_error([filter, staterrfunc]) – Return the total error on the dependent variable.

• get_evaluation_indep([filter, model, …])

• get_filter([group, format, delim]) – Return the data filter as a string.

• get_img([yfunc]) – Return 1D dependent variable as a 1 x N image

• get_indep([filter]) – Return the independent axes of a data set.

• get_mask() – Returns the (ungrouped) mask.

• get_noticed_channels() – Return the noticed channels.

• get_response([id]) – Return the response component.

• get_rmf([id]) – Return the RMF from the response.

• get_specresp([filter]) – Return the effective area values for the data set.

• get_staterror([filter, staterrfunc]) – Return the statistical error.

• get_syserror([filter]) – Return any systematic error.

• get_x([filter, response_id])

• get_xerr([filter, response_id]) – Return linear view of bin size in independent axis/axes

• get_xlabel() – Return label for linear view of independent axis/axes

• get_y([filter, yfunc, response_id, …]) – Return dependent axis in N-D view of dependent variable

• get_yerr([filter, staterrfunc, response_id]) – Return errors in dependent axis in N-D view of dependent variable

• get_ylabel() – Return label for dependent axis in N-D view of dependent variable

• group() – Group the data according to the data set’s grouping scheme

• group_adapt(minimum[, maxLength, tabStops]) – Adaptively group to a minimum number of counts.

• group_adapt_snr(minimum[, maxLength, …]) – Adaptively group to a minimum signal-to-noise ratio.

• group_bins(num[, tabStops]) – Group into a fixed number of bins.

• group_counts(num[, maxLength, tabStops]) – Group into a minimum number of counts per bin.

• group_snr(snr[, maxLength, tabStops, errorCol]) – Group into a minimum signal-to-noise ratio.

• group_width(val[, tabStops]) – Group into a fixed bin width.

• ignore(*args, **kwargs)

• ignore_bad() – Exclude channels marked as bad.

• notice([lo, hi, ignore, bkg_id])

• notice_response([notice_resp, noticed_chans])

• set_analysis(quantity[, type, factor]) – Set the units used when fitting spectral data.

• set_arf(arf[, id]) – Add or replace the ARF in a response component.

• set_background(bkg[, id]) – Add or replace a background component.

• set_dep(val) – Set the dependent variable values

• set_indep(val)

• set_response([arf, rmf, id]) – Add or replace a response component.

• set_rmf(rmf[, id]) – Add or replace the RMF in a response component.

• subtract() – Subtract the background data

• sum_background_data([get_bdata_func]) – Sum up data, applying the background correction value.

• to_component_plot([yfunc, staterrfunc])

• to_fit([staterrfunc])

• to_plot([yfunc, staterrfunc, response_id])

• ungroup() – Ungroup the data

• unsubtract() – Remove background subtraction

Attributes Documentation

background_ids

IDs of defined background data sets

default_background_id = 1

The identifier for the background component when not set.

dep

Left for compatibility with older versions

grouped

Are the data grouped?

indep

Return the grid of the data space associated with this data set.

Return type

tuple of array_like

mask

Mask array for dependent variable

Returns

mask

Return type

bool or numpy.ndarray

plot_fac

Number of times to multiply the y-axis quantity by x-axis bin size
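A minimal sketch (not Sherpa code; the function name and inputs are illustrative) of how a plot_fac of n scales plotted values by the x-axis bin width raised to the power n:

```python
import numpy as np

# Illustrative sketch: multiply the y values by the bin width,
# plot_fac times, as the plot_fac description above states.
def scale_for_plot(y, xlo, xhi, plot_fac):
    y = np.asarray(y, dtype=float)
    dx = np.asarray(xhi) - np.asarray(xlo)   # x-axis bin sizes
    return y * dx ** plot_fac

y = np.array([10.0, 20.0])
xlo = np.array([0.5, 1.0])
xhi = np.array([1.0, 2.0])
print(scale_for_plot(y, xlo, xhi, 0))   # unchanged: [10. 20.]
print(scale_for_plot(y, xlo, xhi, 1))   # scaled by bin width: [ 5. 20.]
```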

primary_response_id = 1

The identifier for the response component when not set.

rate

Quantity of the y-axis: counts or counts/sec

response_ids

IDs of defined instrument responses (ARF/RMF pairs)

subtracted

Are the background data subtracted?

units

Units of the independent axis

x

Used for compatibility, in particular for __str__ and __repr__

Methods Documentation

apply_filter(data, groupfunc=<function sum>)

Filter the array data, first passing it through apply_grouping() (using groupfunc) and then applying the general filters

apply_grouping(data, groupfunc=<function sum>)

Apply the data set’s grouping scheme to the array data, combining the grouped data points with groupfunc, and return the grouped array. If the data set has no associated grouping scheme or the data are ungrouped, data is returned unaltered.
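The grouping array follows the OGIP convention referenced by this document, where +1 marks the first channel of a group and -1 marks a continuation channel. A minimal sketch of that combining step (not the Sherpa implementation; other OGIP flag values are ignored here):

```python
import numpy as np

# Illustrative sketch: combine channels into groups using an
# OGIP-style grouping array (+1 starts a group, -1 continues it),
# reducing each group with groupfunc (sum by default).
def group_data(data, grouping, groupfunc=np.sum):
    groups = []
    current = []
    for value, flag in zip(data, grouping):
        if flag == 1 and current:      # a new group begins here
            groups.append(groupfunc(current))
            current = []
        current.append(value)
    if current:                        # flush the final group
        groups.append(groupfunc(current))
    return np.array(groups)

counts = np.array([1, 2, 3, 4, 5, 6])
grouping = np.array([1, -1, 1, -1, -1, 1])
print(group_data(counts, grouping))   # [ 3 12  6]
```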

delete_background(id=None)

Remove the background component.

If the background component does not exist then the method does nothing.

Parameters

id (int or str, optional) – The identifier of the background component. If it is None then the default background identifier is used.

Notes

If this call removes the last of the background components then the subtracted flag is cleared (if set).

delete_response(id=None)

Remove the response component.

If the response component does not exist then the method does nothing.

Parameters

id (int or str, optional) – The identifier of the response component. If it is None then the default response identifier is used.

eval_model(modelfunc)
eval_model_to_fit(modelfunc)
get_analysis()

Return the units used when fitting spectral data.

Returns

setting – The analysis setting.

Return type

{ ‘channel’, ‘energy’, ‘wavelength’ }

Examples

>>> is_wave = pha.get_analysis() == 'wavelength'

get_areascal(group=True, filter=False)

Return the fractional area factor of the PHA data set.

Return the AREASCAL setting [ASCAL] for the PHA data set.

Parameters
• group (bool, optional) – Should the values be grouped to match the data?

• filter (bool, optional) – Should the values be filtered to match the data?

Returns

areascal – The AREASCAL value, which can be a scalar or a 1D array.

Return type

number or ndarray

Notes

The fractional area scale is normally set to 1, with the ARF used to scale the model.

References

ASCAL

“The OGIP Spectral File Format”, Arnaud, K. & George, I. http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html

Examples

>>> pha.get_areascal()
1.0

get_arf(id=None)

Return the ARF from the response.

Parameters

id (int or str, optional) – The identifier of the response component. If it is None then the default response identifier is used.

Returns

arf – The ARF, if set.

Return type

sherpa.astro.data.DataARF instance or None

get_background(id=None)

Return the background component.

Parameters

id (int or str, optional) – The identifier of the background component. If it is None then the default background identifier is used.

Returns

bkg – The background dataset. If there is no component then None is returned.

Return type

sherpa.astro.data.DataPHA instance or None

get_background_scale(bkg_id=1, units='counts', group=True, filter=False)

Return the correction factor for the background dataset.

Changed in version 4.12.2: The bkg_id, units, group, and filter parameters have been added, and the routine no longer calculates the average scaling for all the background components but just for the given component.

Parameters
• bkg_id (int or str, optional) – The background component to use (the default is 1).

• units ({'counts', 'rate'}, optional) – The correction is applied to a model defined as counts, the default, or a rate. The latter should be used when calculating the correction factor for adding the background data to the source aperture.

• group (bool, optional) – Should the values be grouped to match the data?

• filter (bool, optional) – Should the values be filtered to match the data?

Returns

scale – The scaling factor to correct the background data onto the source data set. If bkg_id is not valid then None is returned.

Return type

None, number, or NumPy array

Notes

The correction factor when units is ‘counts’ is:

scale_exposure * scale_backscal * scale_areascal / nbkg


where nbkg is the number of background components and scale_x is the source value divided by the background value for the field x.

When units is ‘rate’ the correction is:

scale_backscal / nbkg

and it is currently uncertain whether it should include the AREASCAL scaling.
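A minimal sketch of the ‘counts’ correction factor described above (plain Python, not Sherpa code; the parameter names are illustrative, not DataPHA attributes):

```python
# Illustrative sketch: the 'counts' correction factor is the product
# of the source/background ratios of exposure, BACKSCAL, and
# AREASCAL, divided by the number of background components.
def background_scale_counts(src_exposure, bkg_exposure,
                            src_backscal, bkg_backscal,
                            src_areascal, bkg_areascal, nbkg=1):
    scale_exposure = src_exposure / bkg_exposure
    scale_backscal = src_backscal / bkg_backscal
    scale_areascal = src_areascal / bkg_areascal
    return scale_exposure * scale_backscal * scale_areascal / nbkg

# A source observed for 10 ks against a 50 ks background with five
# times the extraction area is scaled down by a factor of 25:
print(background_scale_counts(10000.0, 50000.0, 1.0, 5.0, 1.0, 1.0))
```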

get_backscal(group=True, filter=False)

Return the background scaling of the PHA data set.

Return the BACKSCAL setting [BSCAL] for the PHA data set.

Parameters
• group (bool, optional) – Should the values be grouped to match the data?

• filter (bool, optional) – Should the values be filtered to match the data?

Returns

backscal – The BACKSCAL value, which can be a scalar or a 1D array.

Return type

number or ndarray

Notes

The BACKSCAL value can be defined as the ratio of the area of the source (or background) extraction region in image pixels to the total number of image pixels. The fact that there is no ironclad definition for this quantity does not matter so long as the value for a source dataset and its associated background dataset are defined in the same manner, because only the ratio of source and background BACKSCAL values is used. It can be a scalar or an array.

References

BSCAL

“The OGIP Spectral File Format”, Arnaud, K. & George, I. http://heasarc.gsfc.nasa.gov/docs/heasarc/ofwg/docs/spectra/ogip_92_007/ogip_92_007.html

Examples

>>> pha.get_backscal()
7.8504301607718007e-06

get_bounding_mask()
get_dep(filter=False)

Return the dependent axis of a data set.

Parameters

filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.

Returns

axis – The dependent axis values for the data set. This gives the value of each point in the data set.

Return type

array

See also

get_indep()

Return the independent axis of a data set.

get_error()

Return the errors on the dependent axis of a data set.

get_staterror()

Return the statistical errors on the dependent axis of a data set.

get_syserror()

Return the systematic errors on the dependent axis of a data set.

get_dims(filter=False)

Return the dimensions of this data space as a tuple of tuples. The first element in the tuple is a tuple with the dimensions of the data space, while the second element provides the size of the dependent array.

Return type

tuple

get_error(filter=False, staterrfunc=None)

Return the total error on the dependent variable.

Parameters
• filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is False.

• staterrfunc (function) – If no statistical error has been set, the errors will be calculated by applying this function to the dependent axis of the data set.

Returns

axis – The error for each data point, formed by adding the statistical and systematic errors in quadrature.

Return type

array or None

See also

get_dep()

Return the dependent axis of a data set.

get_staterror()

Return the statistical errors on the dependent axis of a data set.

get_syserror()

Return the systematic errors on the dependent axis of a data set.
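A minimal sketch (not Sherpa code) of the quadrature combination that get_error describes, where the total error is the square root of the sum of the squared statistical and systematic errors:

```python
import numpy as np

# Illustrative sketch: add statistical and systematic errors in
# quadrature to form the total error on each data point.
def total_error(staterror, syserror):
    return np.sqrt(np.asarray(staterror) ** 2 +
                   np.asarray(syserror) ** 2)

print(total_error([3.0, 4.0], [4.0, 3.0]))   # [5. 5.]
```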

get_evaluation_indep(filter=False, model=None, use_evaluation_space=False)
get_filter(group=True, format='%.12f', delim=':')

Return the data filter as a string.

For grouped data, or when the analysis setting is not channel, filter values refer to the center of the channel or group.

Parameters
• group (bool, optional) – Should the filter reflect the grouped data?

• format (str, optional) – The formatting of the numeric values (this is ignored for channel units, as a format of “%i” is used).

• delim (str, optional) – The string used to mark the low-to-high range.

Examples

For a Chandra non-grating dataset which has been grouped:

>>> pha.set_analysis('energy')
>>> pha.notice(0.5, 7)
>>> pha.get_filter(format='%.4f')
'0.5183:8.2198'
>>> pha.set_analysis('channel')
>>> pha.get_filter()
'36:563'


The default is to show the data range for the grouped dataset, which uses the center of each group. If the grouping is turned off then the centers of the first and last channels of each group are used (and so show a larger data range):

>>> pha.get_filter(format='%.4f')
'0.5183:8.2198'
>>> pha.get_filter(group=False, format='%.4f')
'0.4745:9.8623'

get_filter_expr()
get_img(yfunc=None)

Return 1D dependent variable as a 1 x N image

Parameters

yfunc

get_imgerr()
get_indep(filter=True)

Return the independent axes of a data set.

Parameters

filter (bool, optional) – Should the filter attached to the data set be applied to the return value or not. The default is True.

Returns

axis – The independent axis values for the data set. This gives the coordinates of each point in the data set.

Return type

tuple of arrays

See also

get_dep()

Return the dependent axis of a data set.

get_mask()

Returns the (ungrouped) mask.

Returns

mask – The mask, in channels, or None.

Return type

ndarray or None

get_noticed_channels()

Return the noticed channels.

Returns

channels – The noticed channels (this is independent of the analysis setting).

Return type

ndarray

get_noticed_expr()
get_response(id=None)

Return the response component.

Parameters

id (int or str, optional) – The identifier of the response component. If it is None then the default response identifier is used.

Returns

arf, rmf – The response, as an ARF and RMF. Either, or both, components can be None.

Return type

sherpa.astro.data.DataARF, sherpa.astro.data.DataRMF instances or None

get_rmf(id=None)

Return the RMF from the response.

Parameters

id (int or str, optional) – The identifier of the response component. If it is None then the default response identifier is used.

Returns

rmf – The RMF, if set.

Return type

sherpa.astro.data.DataRMF instance or None

get_specresp(filter=False)

Return the effective area values for the data set.

Parameters

filter (bool, optional) – Should the filter attached to the data set be applied to the ARF or not. The default is False.

Returns

arf – The effective area values for the data set (or background component).

Return type

array

get_staterror(filter=False, staterrfunc=None)

Return the statistical error.

The staterror column is used if defined, otherwise the function provided by the staterrfunc argument is used to calculate the values.

Parameters
• filter (bool, optional) – Should the channel filter be applied to the return values?

• staterrfunc (function reference, optional) – The function to use to calculate the errors if the staterror field is None. The function takes one argument, the counts (after grouping and filtering), and returns an array of values which represents the one-sigma error for each element of the input array. This argument is designed to work with implementations of the sherpa.stats.Stat.calc_staterror method.

Returns

staterror – The statistical error. It will be grouped and, if filter=True, filtered. The contribution from any associated background components will be included if the background-subtraction flag is set.

Return type

array or None

Notes

There is no scaling by the AREASCAL setting, but background values are scaled by their AREASCAL settings. It is not at all obvious that the current code is doing the right thing, or that this is the right approach.

Examples

>>> dy = dset.get_staterror()


Ensure that there is no pre-defined statistical-error column and then use the Chi2DataVar statistic to calculate the errors:

>>> stat = sherpa.stats.Chi2DataVar()
>>> dset.set_staterror(None)
>>> dy = dset.get_staterror(staterrfunc=stat.calc_staterror)

get_syserror(filter=False)

Return any systematic error.

Parameters

filter (bool, optional) – Should the channel filter be applied to the return values?

Returns

syserror – The systematic error, if set. It will be grouped and, if filter=True, filtered.

Return type

array or None

Notes

There is no scaling by the AREASCAL setting.

get_x(filter=False, response_id=None)
get_xerr(filter=False, response_id=None)

Return linear view of bin size in independent axis/axes

Parameters
• filter

• response_id

get_xlabel()

Return label for linear view of independent axis/axes

get_y(filter=False, yfunc=None, response_id=None, use_evaluation_space=False)

Return dependent axis in N-D view of dependent variable

Parameters
• filter

• yfunc

• use_evaluation_space

get_yerr(filter=False, staterrfunc=None, response_id=None)

Return errors in dependent axis in N-D view of dependent variable

Parameters
• filter

• staterrfunc

get_ylabel()

Return label for dependent axis in N-D view of dependent variable

Parameters

yfunc

group()

Group the data according to the data set’s grouping scheme

group_adapt(minimum, maxLength=None, tabStops=None)

Adaptively group to a minimum number of counts.

Combine the data so that each bin contains minimum or more counts. The difference from group_counts is that this algorithm starts with the bins with the largest signal, rather than at the first channel of the data, in order to avoid over-grouping bright features. The adaptive nature means that low-count regions between bright features may not end up in groups with the minimum number of counts. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.

Parameters
• minimum (int) – The minimum number of counts required in each group.

• maxLength (int, optional) – The maximum number of channels that can be combined into a single group.

• tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be ignored from the grouping (use 0 or False otherwise).

See also

group_adapt_snr()

Adaptively group to a minimum signal-to-noise ratio.

group_bins()

Group into a fixed number of bins.

group_counts()

Group into a minimum number of counts per bin.

group_snr()

Group into a minimum signal-to-noise ratio.

group_width()

Group into a fixed bin width.

Notes

If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2.

group_adapt_snr(minimum, maxLength=None, tabStops=None, errorCol=None)

Adaptively group to a minimum signal-to-noise ratio.

Combine the data so that each bin has a signal-to-noise ratio which exceeds minimum. The difference from group_snr is that this algorithm starts with the bins with the largest signal, rather than at the first channel of the data, in order to avoid over-grouping bright features. The adaptive nature means that low-count regions between bright features may not end up in groups that exceed the minimum signal-to-noise ratio. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.

Parameters
• minimum (number) – The minimum signal-to-noise ratio that must be exceeded to form a group of channels.

• maxLength (int, optional) – The maximum number of channels that can be combined into a single group.

• tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be ignored from the grouping (use 0 or False otherwise).

• errorCol (array of num, optional) – If set, the error to use for each channel when calculating the signal-to-noise ratio. If not given then Poisson statistics is assumed. A warning is displayed for each zero-valued error estimate.

See also

group_adapt()

Adaptively group to a minimum number of counts.

group_bins()

Group into a fixed number of bins.

group_counts()

Group into a minimum number of counts per bin.

group_snr()

Group into a minimum signal-to-noise ratio.

group_width()

Group into a fixed bin width.

Notes

If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2.

group_bins(num, tabStops=None)

Group into a fixed number of bins.

Combine the data so that there are num equal-width bins (or groups). The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.

Parameters
• num (int) – The number of bins in the grouped data set. Each bin will contain the same number of channels.

• tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be ignored from the grouping (use 0 or False otherwise).

See also

group_adapt()

Adaptively group to a minimum number of counts.

group_adapt_snr()

Adaptively group to a minimum signal-to-noise ratio.

group_counts()

Group into a minimum number of counts per bin.

group_snr()

Group into a minimum signal-to-noise ratio.

group_width()

Group into a fixed bin width.

Notes

Since the bin width is an integer number of channels, it is likely that some channels will be “left over”. This is even more likely when the tabStops parameter is set. If this happens, a warning message will be displayed to the screen and the quality value for these channels will be set to 2.
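The arithmetic behind the left-over channels can be sketched in plain Python (this is an illustration of the note above, not the Sherpa implementation; exactly which channels are left over is not specified here):

```python
# Illustrative sketch: with num equal-width groups, the group width
# must be an integer number of channels, so any remainder channels
# cannot form a full group (group_bins flags these with quality 2).
def bin_layout(nchannels, num):
    width = nchannels // num            # channels per group
    leftover = nchannels - width * num  # channels with no full group
    return width, leftover

print(bin_layout(1024, 100))   # (10, 24): 24 channels left over
```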

group_counts(num, maxLength=None, tabStops=None)

Group into a minimum number of counts per bin.

Combine the data so that each bin contains num or more counts. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped. The background is not included in this calculation; the calculation is done on the raw data even if subtract has been called on this data set.

Parameters
• num (int) – The number of channels to combine into a group.

• maxLength (int, optional) – The maximum number of channels that can be combined into a single group.

• tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be ignored from the grouping (use 0 or False otherwise).

See also

group_adapt()

Adaptively group to a minimum number of counts.

group_adapt_snr()

Adaptively group to a minimum signal-to-noise ratio.

group_bins()

Group into a fixed number of bins.

group_snr()

Group into a minimum signal-to-noise ratio.

group_width()

Group into a fixed bin width.

Notes

If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2.

group_snr(snr, maxLength=None, tabStops=None, errorCol=None)

Group into a minimum signal-to-noise ratio.

Combine the data so that each bin has a signal-to-noise ratio which exceeds snr. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped. The background is not included in this calculation; the calculation is done on the raw data even if subtract has been called on this data set.

Parameters
• snr (number) – The minimum signal-to-noise ratio that must be exceeded to form a group of channels.

• maxLength (int, optional) – The maximum number of channels that can be combined into a single group.

• tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be ignored from the grouping (use 0 or False otherwise).

• errorCol (array of num, optional) – If set, the error to use for each channel when calculating the signal-to-noise ratio. If not given then Poisson statistics is assumed. A warning is displayed for each zero-valued error estimate.

See also

group_adapt()

Adaptively group to a minimum number of counts.

group_adapt_snr()

Adaptively group to a minimum signal-to-noise ratio.

group_bins()

Group into a fixed number of bins.

group_counts()

Group into a minimum number of counts per bin.

group_width()

Group into a fixed bin width.

Notes

If channels can not be placed into a “valid” group, then a warning message will be displayed to the screen and the quality value for these channels will be set to 2.

group_width(val, tabStops=None)

Group into a fixed bin width.

Combine the data so that each bin contains val channels. The binning scheme is applied to all the channels, but any existing filter - created by the ignore or notice set of functions - is re-applied after the data has been grouped.

Parameters
• val (int) – The number of channels to combine into a group.

• tabStops (array of int or bool, optional) – If set, indicate one or more ranges of channels that should not be included in the grouped output. The array should match the number of channels in the data set and non-zero or True means that the channel should be ignored from the grouping (use 0 or False otherwise).

See also

group_adapt()

Adaptively group to a minimum number of counts.

group_adapt_snr()

Adaptively group to a minimum signal-to-noise ratio.

group_bins()

Group into a fixed number of bins.

group_counts()

Group into a minimum number of counts per bin.

group_snr()

Group into a minimum signal-to-noise ratio.

Notes

Unless the requested bin width is a factor of the number of channels (and no tabStops parameter is given), some channels will be “left over”. If this happens, a warning message will be displayed to the screen and the quality value for these channels will be set to 2.

ignore(*args, **kwargs)
ignore_bad()

Exclude channels marked as bad.

Ignore any bin in the PHA data set which has a quality value that is larger than zero.

Raises

sherpa.utils.err.DataErr – If the data set has no quality array.

See also

ignore()

Exclude data from the fit.

notice()

Include data in the fit.

Notes

Bins with a non-zero quality setting are not automatically excluded when a data set is created.

If the data set has been grouped, then calling ignore_bad will remove any filter applied to the data set. If this happens a warning message will be displayed.

notice(lo=None, hi=None, ignore=False, bkg_id=None)
notice_response(notice_resp=True, noticed_chans=None)
set_analysis(quantity, type='rate', factor=0)

Set the units used when fitting spectral data.

Parameters
• quantity ({'channel', 'energy', 'wavelength'}) – The analysis setting.

• type ({'rate', 'counts'}, optional) – Do plots display a rate or show counts?

• factor (int, optional) – The Y axis of plots is multiplied by Energy^factor or Wavelength^factor before display. The default is 0.

Raises

sherpa.utils.err.DataErr – If the type argument is invalid, the RMF or ARF has the wrong size, or there is no response.

Examples

>>> pha.set_analysis('energy')

>>> pha.set_analysis('wave', type='counts', factor=1)
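The effect of the factor argument on plotted values can be sketched as follows (assumed semantics, taken from the parameter description above; the energies and rates are hypothetical):

```python
# The Y axis is multiplied by Energy**factor before display.
energies = [1.0, 2.0, 4.0]   # keV, hypothetical bin energies
rates = [10.0, 5.0, 2.5]     # counts/s, hypothetical rates
factor = 1

plotted = [r * e ** factor for e, r in zip(energies, rates)]
```

With factor=0 (the default) the rates are shown unchanged; with factor=1 each value is scaled by its energy, giving `[10.0, 10.0, 10.0]` here.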

set_arf(arf, id=None)[source] [edit on github]

Add or replace the ARF in a response component.

This replaces the existing ARF of the response, keeping the previous RMF (if set). Use the delete_response method to remove the response, rather than setting arf to None.

Parameters
• arf (sherpa.astro.data.DataARF instance) – The ARF to add.

• id (int or str, optional) – The identifier of the response component. If it is None then the default response identifier is used.

set_background(bkg, id=None)[source] [edit on github]

Add or replace a background component.

If the background has no grouping or quality arrays then they are copied from the source region. If the background has no response information (ARF or RMF) then the response is copied from the source region.

Parameters
• bkg (sherpa.astro.data.DataPHA instance) – The background dataset to add. This object may be changed by this method.

• id (int or str, optional) – The identifier of the background component. If it is None then the default background identifier is used.

set_dep(val)[source] [edit on github]

Set the dependent variable values.

Parameters

val

set_indep(val) [edit on github]
set_response(arf=None, rmf=None, id=None)[source] [edit on github]

Add or replace a response component.

To remove a response use delete_response(), as setting arf and rmf to None here does nothing.

Parameters
• arf (sherpa.astro.data.DataARF instance or None, optional) – The ARF to add if any.

• rmf (sherpa.astro.data.DataRMF instance or None, optional) – The RMF to add, if any.

• id (int or str, optional) – The identifier of the response component. If it is None then the default response identifier is used.

set_rmf(rmf, id=None)[source] [edit on github]

Add or replace the RMF in a response component.

This replaces the existing RMF of the response, keeping the previous ARF (if set). Use the delete_response method to remove the response, rather than setting rmf to None.

Parameters
• rmf (sherpa.astro.data.DataRMF instance) – The RMF to add.

• id (int or str, optional) – The identifier of the response component. If it is None then the default response identifier is used.

subtract()[source] [edit on github]

Subtract the background data.

sum_background_data(get_bdata_func=<function DataPHA.<lambda>>)[source] [edit on github]

Sum up data, applying the background correction value.

Parameters

get_bdata_func (function, optional) – What data should be used for each background dataset. The function takes the background identifier and background DataPHA object and returns the data to use. The default is to use the counts array of the background dataset.

Returns

value – The sum of the data, including any area, background, and exposure-time corrections.

Return type

scalar or NumPy array

Notes

For each associated background, the data is retrieved (via the get_bdata_func parameter), and then

• divided by its BACKSCAL value (if set)

• divided by its AREASCAL value (if set)

• divided by its exposure time (if set)

The individual background components are then summed together, and then multiplied by the source BACKSCAL (if set), multiplied by the source AREASCAL (if set), and multiplied by the source exposure time (if set). The final step is to divide by the number of background files used.
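The scaling steps above can be sketched in plain Python (a simplified model of the arithmetic, not the DataPHA implementation; the counts, BACKSCAL, AREASCAL, and exposure values are hypothetical):

```python
def sum_background(bkgs, src_backscal, src_areascal, src_exposure):
    """Sum background data using the scaling described in the Notes.

    Each entry in bkgs is (counts, backscal, areascal, exposure).
    """
    total = 0.0
    for counts, backscal, areascal, exposure in bkgs:
        # Divide each background by its own scaling factors.
        total += counts / backscal / areascal / exposure
    # Rescale to the source region, then average over the backgrounds.
    total *= src_backscal * src_areascal * src_exposure
    return total / len(bkgs)

# One background with 40 counts whose BACKSCAL is 4 times the
# source's, with matching AREASCAL and exposure time:
scaled = sum_background([(40.0, 4.0, 1.0, 100.0)], 1.0, 1.0, 100.0)
```

Here `scaled` is 10.0: the background counts are reduced by the factor-of-4 BACKSCAL ratio, while the identical AREASCAL and exposure values cancel.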

Example

Calculate the background counts, per channel, scaled to match the source:

>>> bcounts = src.sum_background_data()


Calculate the scaling factor by which the background data must be multiplied to match the source data. In this case the background data has been replaced by the value 1 (rather than the per-channel values used with the default argument):

>>> bscale = src.sum_background_data(lambda k, d: 1)

to_component_plot(yfunc=None, staterrfunc=None) [edit on github]
to_fit(staterrfunc=None)[source] [edit on github]
to_guess()[source] [edit on github]
to_plot(yfunc=None, staterrfunc=None, response_id=None)[source] [edit on github]
ungroup()[source] [edit on github]

Ungroup the data.

unsubtract()[source] [edit on github]

Remove background subtraction.