abssenv -- Calibrate absolute digital sensitivity.
abssenv intable outtable ref_flux
This task calculates the digital absolute efficiency for up to 2000 targets with known flux densities. The absolute efficiency of each target is its observed count rate (corrected for dark signal, high voltage factor, pre-amp noise, and relative sensitivity--see below) divided by the target's flux density (integrated over the specified filter's bandpass and weighted by the filter's transmission curve). The task also checks the instrument mode of each observation: if the mode includes a simultaneous sky measurement, the sky (corrected for dark signal, etc., in the same way as the object) is subtracted from the target measurement; otherwise the observed count rate of the target is used unchanged.
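The per-target computation described above can be sketched as follows. This is a minimal illustration, not the task's source: `obj_rate` and `sky_rate` are assumed to be the already-corrected count rates, and the hypothetical flag `has_simultaneous_sky` stands in for the instrument-mode check, whose exact mode-to-sky mapping is not spelled out here.

```python
def absolute_sensitivity(obj_rate, sky_rate, flux_density, has_simultaneous_sky):
    """Digital absolute efficiency: corrected net count rate divided by the
    target's bandpass-integrated, transmission-weighted flux density."""
    # Subtract the sky only when the instrument mode provides a
    # simultaneous sky measurement; otherwise use the object rate as-is.
    net = obj_rate - sky_rate if has_simultaneous_sky else obj_rate
    return net / flux_density
```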
The raw digital count rates are corrected for the dark, pre-amp noise, high voltage factor, sensitivity, and dead time following the formula:
scaled digital count rate = (raw digital count rate
                             / (1. - raw digital count rate * dead time)
                             - dark signal - pre-amplifier noise)
                            / (high voltage factor * relative sensitivity)
- intable [file name]
- Name of the input table. The following columns are needed:
'DETECTOB'   Object detector ID (int).
'APERTOBJ'   Object aperture name (char*10).
'VOLTAGE'    High voltage setting (real).
'VGAIND'     Gain setting (real).
'THRESH'     Discriminator setting (real).
'DET_TEMP'   Detector temperature (real).
'DEA_TEMP'   DEA temperature (real).
'EPOCH'      Epoch of observation (double).
'PTSRCFLG'   Point source flag (char*1).
'DOBJ'       Observed digital count rate (real).
'DOBJ_ERR'   Standard deviation of the observed digital count rate (real).
'DSKY'       Sky's digital count rate (real).
'MODE'       Instrument mode (i.e., SCP, SSP, or ARS) (char*3).
'TRGTNAME'   Target name (char*20).
- outtable [file name]
- Name of the output table, which consists of the following columns:
'TRGTNAME'     Target name (char*20).
'APER_NAME'    Aperture name (char*10).
'COUNT'        Scaled digital count rate (real).
'COUNT_ERR'    Count rate's error (real).
'REF_FLUX'     Reference flux used in this calculation (real).
'SENSITIVITY'  Digital absolute sensitivity (real).
'TEMP_KEY'     Temperature, as passed from the 'temp_key' parameter (real).
'EPOCH'        Epoch (double).
- ref_flux [file name]
- Reference flux density table name. The table has the following columns:
'OBJ_NAME_i'   Name(s) of the object, where "i" is an integer between 1 and 5 (char*20).
'FILTER_NAME'  Filter name, e.g., F551 (char*4).
'DET_NUM'      Detector ID (int).
'FLUX'         Flux density of the target, integrated over the filter's bandpass, in units of erg/sec/sq cm (real).
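Since each object may be listed under up to five alias columns, the reference-flux lookup can be pictured as a match against any of OBJ_NAME_1 through OBJ_NAME_5. This is a hypothetical sketch (rows are modeled as plain dictionaries, and whether the task also matches on FILTER_NAME is not stated here):

```python
def find_flux(ref_rows, target_name, detector_id):
    """Return the reference flux density for a target, matching the name
    against any of the OBJ_NAME_1..OBJ_NAME_5 alias columns."""
    for row in ref_rows:
        aliases = (row.get("OBJ_NAME_%d" % i) for i in range(1, 6))
        if row["DET_NUM"] == detector_id and target_name in {a for a in aliases if a}:
            return row["FLUX"]
    return None  # no matching entry in the reference table
```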
- (cal_tables = "") [string]
- Pset name for calibration file parameters. These parameters can be changed individually on the command line, or edited as a group with ":e" from within eparam, or from the cl by typing "eparam cal_tables" or simply "cal_tables". Details about these parameters are available by typing "help cal_tables".
- (save = no) [boolean]
- Save the scratch table containing intermediate calibration corrections? If save = yes, the scratch table's name is written to the terminal and the logfile.
- (temp_key = "DET_TEMP") [string]
- Column name of the temperature in the output table.
1. Calculate absolute sensitivities from the input data table xabssenv$input and write the results to the output table yabssenv$output, saving the intermediate result in a local scratch table whose name will be announced to the user. The reference flux density table to be used is xabssenv$ref_flux.
hs> abssenv "xabssenv$input" "yabssenv$output" "xabssenv$ref_flux" save=yes