load_pha

sherpa.astro.ui.load_pha(id, arg=None, use_errors=False)

Load a PHA data set.

This will load the PHA data and any related information, such as ARF, RMF, and background. The background is loaded but not subtracted. Any grouping information in the file will be applied to the data. The quality information is read in, but not automatically applied. See subtract and ignore_bad.
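As a quick check (a sketch using the same placeholder file name as the examples below, and assuming the file contains grouping information), the state of the loaded data can be inspected before deciding whether to subtract or filter:

>>> load_pha('src.pi')
>>> pha = get_data()
>>> pha.subtracted    # the background is loaded but not subtracted
False
>>> pha.grouped       # grouping from the file has been applied
True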

The standard behavior is to create a single data set, but multiple data sets can be loaded with this command, as described in the sherpa.astro.datastack module.
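A minimal sketch of the stacked approach (assuming the datastack module's version of load_pha and CIAO stack syntax; phas.lis is a hypothetical text file listing the PHA files to load):

>>> from sherpa.astro import datastack
>>> datastack.load_pha('@phas.lis')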

Changed in version 4.12.2: The id argument is now used to define the first identifier when loading in a PHA2 file (previously the identifiers always ranged from 1 to the number of datasets in the file).

Parameters
  • id (int or str, optional) – The identifier for the data set to use. For PHA2 files (that is, those that contain multiple datasets) the id value indicates the first dataset: if it is an integer then the numbering starts at id, and if it is a string then a suffix of 1 to n is added. If not given then the default identifier is used, as returned by get_default_id.

  • arg – Identify the data to read: either a file name, or a data structure representing the data, as used by the I/O backend in use by Sherpa (a PHACrateDataset for crates, as used by CIAO, or a list of AstroPy HDU objects).

  • use_errors (bool, optional) – If True then the statistical errors are taken from the input data, rather than calculated by Sherpa from the count values. The default is False.

See also

ignore_bad

Exclude channels marked as bad in a PHA data set.

load_arf

Load an ARF from a file and add it to a PHA data set.

load_bkg

Load the background from a file and add it to a PHA data set.

load_rmf

Load an RMF from a file and add it to a PHA data set.

pack_pha

Convert a PHA data set into a file structure.

save_pha

Save a PHA data set to a file.

subtract

Subtract the background estimate from a data set.

unpack_pha

Create a PHA data structure.

Notes

The function does not follow the normal Python standards for parameter use, since it is designed for easy interactive use. When called with a single un-named argument, it is taken to be the arg parameter. If given two un-named arguments, then they are interpreted as the id and arg parameters, respectively. The remaining parameters are expected to be given as named arguments.
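For reference, the calling patterns described above correspond to calls like the following (all taken from the Examples section below):

>>> load_pha('src.pi')                        # single argument: arg
>>> load_pha('src', 'x1.fits')                # two arguments: id, arg
>>> load_pha('source.fits', use_errors=True)  # remaining parameters are named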

The minimum_energy setting of the ogip section of the Sherpa configuration file determines the behavior when an ARF with a minimum energy of 0 is read in. The default is to replace the 0 by the value 1e-10, which will also cause a warning message to be displayed.
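If needed, the location of the configuration file can be found with sherpa.get_config; this is a sketch, and the returned path depends on the installation (the minimum_energy setting lives in the ogip section of that file):

>>> import sherpa
>>> cfg = sherpa.get_config()   # path of the configuration file in use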

Examples

Load the PHA file ‘src.pi’ into the default data set, and automatically load the ARF, RMF, and background from the files pointed to by the ANCRFILE, RESPFILE, and BACKFILE keywords in the file. The background is then subtracted and any ‘bad quality’ bins are removed:

>>> load_pha('src.pi')
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi
>>> subtract()
>>> ignore_bad()

Load two files into data sets ‘src’ and ‘bg’:

>>> load_pha('src', 'x1.fits')
>>> load_pha('bg', 'x2.fits')

If a type II PHA data set is loaded, then multiple data sets will be created, one for each order. The default behavior is to use the dataset identifiers 1 to the number of datasets in the file.

>>> clean()
>>> load_pha('src.pha2')
>>> list_data_ids()
[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12]

If given an identifier as the first argument then this is used to start the numbering scheme for PHA2 files. If id is an integer then the numbers go from id:

>>> clean()
>>> load_pha(20, 'src.pha2')
>>> list_data_ids()
[20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31]

If the id is a string then the identifier is formed by adding the number of the dataset (starting at 1) to the end of id. Note that the list_data_ids routine does not guarantee an ordering to the output (as shown below):

>>> clean()
>>> load_pha('x', 'src.pha2')
>>> list_data_ids()
['x1', 'x10', 'x11', 'x12', 'x2', 'x3', 'x4', 'x5', 'x6',
 'x7', 'x8', 'x9']

Create the data set from the data read in by Crates:

>>> pha = pycrates.read_pha('src.pi')
>>> load_pha(pha)
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi

Create the data set from the data read in by AstroPy:

>>> hdus = astropy.io.fits.open('src.pi')
>>> load_pha(hdus)
read ARF file src.arf
read RMF file src.rmf
read background file src_bkg.pi

The default behavior is to calculate the errors based on the count values and the choice of statistic (e.g. chi2gehrels or chi2datavar), but the statistical errors from the input file can be used instead by setting use_errors to True:

>>> load_pha('source.fits', use_errors=True)