pyvista Documentation
Release v0.0
Jon Holtzman
1 Installation
  1.1 Installation into Python distribution
  1.2 Installation using environment variables
  1.3 Installation using modules
2 TV module
  2.1 Introduction
  2.2 Usage
  2.3 Module functions
3 IMRED module
  3.1 Reducer
  3.2 imred functions
4 SPECTRA module
  4.1 Trace
  4.2 WaveCal
  4.3 FluxCal
  4.4 spectra functions
5 REDUCE module
  5.1 reduce functions
6 Stars module
7 IMAGE module
8 ETC module
  8.1 Introduction
  8.2 Usage
  8.3 Module functions
Index
pyvista is a Python package that provides image display and some image processing and analysis tools for a versatile quick analysis environment for working with astronomical data (imaging and spectroscopy). The goal is to provide a convenient framework that is relatively straightforward to understand and to use, that allows for performing data processing efficiently, but that also has options that may be useful pedagogically.
The initial algorithms are relatively simple, but might be extended over time by interested users. pyvista uses routines from other packages, including astropy, ccdproc, and photutils.
The project is hosted on GitHub, and will be available via pip as stability is achieved. See the Installation page for how to download and set up.
The name pyvista was adopted because it shares some history and a tiny bit of look-and-feel with the VISTA package
developed during the 1980s at the University of California by Richard Stover, Tod Lauer, Don Terndrup, and others.
However, the code is completely independent. The original VISTA was written in Fortran for a VAX/VMS system. It
was ported to Unix/X11 by Jon Holtzman, with X11 display routines largely developed by John Tonry.
Contents:
CHAPTER 1
Installation
1.1 Installation into Python distribution
The package can be installed into your Python distribution using setup.py. For development, i.e., if you will be pulling from the GitHub repository, use an editable install (e.g., python setup.py develop) so that changes pulled from the repository take effect without reinstalling.
1.2 Installation using environment variables
As an alternative to using setup.py, define an environment variable PYVISTA_DIR that refers to the top level pyvista directory, and add $PYVISTA_DIR/python to your PYTHONPATH, e.g.
In sh/bash:

export PYVISTA_DIR=/path/to/pyvista    # top level pyvista directory
if [ -z "$PYTHONPATH" ] ; then
    export PYTHONPATH=$PYVISTA_DIR/python
else
    export PYTHONPATH=$PYVISTA_DIR/python:$PYTHONPATH
fi
To keep these definitions across all new sessions, add these to your .cshrc/.tcshrc or .bashrc/.profile file.
1.3 Installation using modules
Alternatively, use a package management system, e.g. modules, to set these variables when the package is loaded. There is a sample modules file in $PYVISTA_DIR/etc/modulefile.
CHAPTER 2
TV module
2.1 Introduction
tv is a module that provides an enhanced image display tool built on top of the basic functionality of a matplotlib figure. The tv tool is an object that displays images and also allows the user to get input from the display.
It can take as input a numpy array, an astropy NDData/CCDData object, or a FITS HDU object (e.g., from astropy.io.fits), and will display it in the window. Pixel and value readout is provided. For FITS input, WCS as well as pixel locations are given, as is the OBJECT information. Two plot windows are created for use with some built-in functions (row and column plots, radial profiles).
While normal matplotlib options/buttons (zoom/pan) are available, the tool includes an event handler that by default
responds to mouse clicks in the image display section, so it can be very confusing to use the matplotlib tools here. The
event handler in the main display window can be toggled on/off using the ‘z’ key, if users prefer to use the matplotlib
buttons.
Several keys are defined to do asynchronous interaction with the data in the display. In particular, four images are
stored internally, allowing for rapid cycling (blinking) between them using the ‘-’ and ‘+/=’ keys.
A tvmark() method is provided to allow the user to retrieve the location of a keystroke event in the window along with
the key that was pressed.
An imexam() method allows for interactive measurement of objects in the image.
tvbox() and tvcirc() allow the user to overlay graphics.
2.2 Usage
If the pyautogui package is available, it will be loaded: this functionality is required for interactive functions that move the cursor. Note that on some versions of macOS, the controlling application needs to be given permission to access the mouse.
A display tool is created when a pyvista TV object is instantiated, e.g.
from pyvista import tv

display = tv.TV()
display.tv(image)
where image is one of the input data types. By default, the image is displayed using greyscale (colormap "Greys_r") with an automatically determined stretch; however, the display scaling can be set explicitly using the min= and max= keywords, and the cmap= keyword can be used to specify a different colormap. If the input data type is an astropy NDData/CCDData object, then the sn= keyword can be used to display the S/N image (data divided by uncertainty).
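For example (the scaling values and colormap are illustrative, and sn=True is an assumed form of the sn= keyword):

display.tv(image, min=0, max=1000, cmap='viridis')   # explicit scaling and alternate colormap
display.tv(ccddata, sn=True)                         # S/N display for an NDData/CCDData input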
Given a TV object with image data loaded, various asynchronous functions are available in the main display window:
• left mouse : zoom in, centered on cursor
• center mouse : zoom out, centered on cursor
• right mouse : pan, center to cursor
• z : toggle use of mouse clicks in main display, e.g. if users prefer to use the matplotlib buttons
• r : redraw at default zoom
• +/= : toggle to next image in stack
• - : toggle to previous image in stack
• arrow keys : move the cursor by single image pixels (with pyautogui)
• a : toggle axes on/off
• h/? : print this help
Several synchronous (wait for input) methods are also available:
• imexam(size=11, fwhm=5, scale=1, pafixed=False) : draws radial profiles and fits a Gaussian around the cursor position when a key is hit
• tvmark() : returns (x,y,key) when a key is hit at (data) position (x,y)
• clear() : clears the display
• tvbox(x, y, box=None, size=3, color='m') : displays a square of specified size at the input position (or pyvista BOX object) with specified color
• tvcirc(x, y, rad=3, color='m') : displays a circle of specified size and color at the input position
• tvclear() : clears graphics patches from the display
• flip() : toggles vertical flip of the display (default starts with origin in lower left)
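For example, one might mark a position interactively and overlay a circle there (the tvmark() return order follows the listing above; check its docstring if it differs in your version):

x, y, key = display.tvmark()               # wait for a keypress in the display window
display.tvcirc(x, y, rad=5, color='g')     # overlay a circle at the selected position
display.imexam()                           # radial profile and Gaussian fit at the next keypress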
2.3 Module functions
tvclear()
clears patches from image
tvmark()
Blocking input: waits for a key press in the display and returns the key that was pressed and the data pixel location of the keypress
Parameters : none
Returns : key pressed, x data position, y data position
tvtext(x, y, text, color='m', ha='center', va='center')
Annotates with text
pyvista.tv.minmax(data, mask=None, low=3, high=10)
Return min,max scaling factors for input data using the median and MAD
CHAPTER 3
IMRED module
imred is a module that implements some classes/tools for basic data reduction: overscan subtraction, bias subtraction, dark subtraction, and flat fielding. Routines are included for basic combination of images that can be used to make basic calibration products. Data are read and processed using pyvista Data structures (a superset of astropy CCDData/NDData, see Dataclass), which carry along both the image data and associated uncertainty and mask arrays.
The routines are implemented with the Reducer class, which performs various tasks: it reads data from disk and does various reduction steps, either individually or as a group, and it also contains methods for constructing calibration frames.
3.1 Reducer
Given information about an instrument's detector, a Reducer object reads and reduces images. A class instance is created for a given instrument. Necessary information can be specified on instantiation, but generally is loaded given an input instrument name, e.g., ARCTIC, ARCES, DIS, TSPEC. In this case, configuration is read from a configuration file called {inst}_config.yml; pyvista contains several of these in the PYVISTA_DIR/python/pyvista/data directory tree. For a new instrument, these should be straightforward to construct; see below.
For a given instrument, there can be multiple files associated with each exposure, e.g. DIS blue/red, and the configuration file then specifies the multiple configurations. During reduction operations, all files for an exposure are processed, with data returned as a list of pyvista Data objects.
The reading function of the Reducer allows for file names to be specified by a sequence number (rather than typing the full name), if the data taking system supports unique identification by sequence number. File names are constructed using a template: {dir}/{root}{formstr}.*, where {dir}, {root}, and {formstr} are customizable attributes of the reducer. For example, if {dir} is defined as some input directory, {root} is '*', and {formstr} is {:04d}, then rd(28) would look for a file matching {dir}/*0028.fits*. You could also specify rd('Image0028.fits'), in which case it would read from {dir}/Image0028.fits, or rd('./Image0028.fits'), in which case it would read from the current working directory.
The basic reading method of the reducer is rd(); however, this is generally not the preferred method, since it does not
do any processing (e.g. overscan subtraction), so the noise cannot be correctly calculated. Instead, the reduce() method
is preferred, which will read and also perform basic reduction depending on what calibration frames are loaded.
The reduce() method will read and perform some reductions. It will perform overscan subtraction if self.biastype has been set >= 0, bias frame subtraction if a bias frame is passed with the bias= keyword, dark subtraction if a dark frame is passed with the dark= keyword, and flat fielding if a flat frame is passed with the flat= keyword; see below for the construction of calibration frames. There is also the possibility of scattered light subtraction (e.g., for a multi-order spectrograph) if a scattered light parameter has been set, cosmic ray rejection if a cosmic ray parameter is passed with crbox=, and bad pixel masking if a bad pixel mask has been set.
Calibration frames can be created using the mkbias, mkdark, mkflat, and mkspecflat methods. These take lists of frames as arguments and combine the input frames with a median (with normalization for flats). The calibration frames are also returned as NDData/CCDData, so that the uncertainties in the calibration products can be assessed and propagated.
Processing can be done with associated description and image display of the steps being taken. If the verbose attribute
is set to True, more text is displayed. If the display attribute is set to a pyvista TV object, then frames will be displayed
as they are processed, and user input is requested to proceed from step to step, to allow the user to look at the frames.
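As an illustration, a minimal reduction session might look like the following sketch; the instrument name, directory, and frame numbers are illustrative, and passing the instrument name as the first constructor argument and the bias= keyword to mkflat() are assumptions:

from pyvista import imred, tv

display = tv.TV()
red = imred.Reducer('ARCTIC')   # configuration read from ARCTIC_config.yml
red.dir = '/path/to/data'       # default directory to read images from
red.display = display           # display each frame as it is processed
red.verbose = True

# construct calibration products from lists of frames
bias = red.mkbias([1, 2, 3, 4, 5])
flat = red.mkflat([11, 12, 13, 14, 15], bias=bias)

# read and reduce an object frame: overscan, bias, and flat corrections applied
im = red.reduce(101, bias=bias, flat=flat)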
Attributes:
• dir : default directory to read images from (can be overridden)
• root : default root file name (can be overridden)
• formstr : format string used to search for images using sequence number
• verbose : turns on verbose output
• gain : gain to use to calculate initial uncertainty
• rn : readout noise used to calculate initial uncertainty
• biastype : specifies algorithm for bias subtraction
• biasbox : pyvista BOX object giving region(s) to use for overscan
• trimbox : pyvista BOX object giving region(s) to trim reduced image to, if requested
• normbox : pyvista BOX object giving region(s) to use for normalization of flats
Methods (see below for full description with keywords) :
• rd: reads image from disk. For convenience, files can be identified by a sequence number alone, and the
input directory will be searched for files matching the input format ({dir}/{root}{formstr}.fits*) with the
input file number
• overscan : subtracts overscan, and calculates uncertainty
• bias : subtracts input combined bias frame
• flat : divides by input combined flat frame
• reduce : reads, overscan subtracts (per biastype), bias subtracts if given a combined bias, flat fields if given a combined flat, etc.
• getcube: reads a set of frames into a data cube
• median: median filters images in cube
• sum : sums images in cube
• mkbias: given input bias frames, creates combined bias with median combination after overscan subtraction
• mkflat: given input flat field frames, creates combined flat with median combination and normalization, after overscan subtraction, and bias subtraction if given a combined bias
• mkspecflat: creates a flat field as above, but then removes structure along columns using the spectral signature calculated from a boxcar filter of a central set of rows, to produce a flat field with the spectral signature removed
Making a new instrument configuration file
Configuration files are made using the YAML format, e.g.:
---
channels : ['blue','red']
formstr : ['{:04d}b','{:04d}r']
gain : [1.68,1.88]
rn : [4.9,4.6]
crbox : [1,11]
biastype : 0
biasregion :
- [[0,2098],[0,1078]]
- [[0,2098],[0,1078]]
biasbox :
- [[2050,2096],[0,1023]] # xs, xe, ys, ye
- [[2050,2096],[0,1023]]
trimbox :
- [[0,2047],[0,1023]]
- [[0,2047],[0,1023]]
normbox :
- [[1000,1050],[500,600]]
- [[1000,1050],[500,600]]
CHAPTER 4
SPECTRA module
spectra is a module that implements some classes/tools for handling basic spectral data reduction: tracing and extraction of spectra, wavelength calibration, and flux calibration. The routines allow for longslit data, multiobject data, echelle data, and combinations (multiple orders with a long slit, or multiple slitlets).
Data are read and processed using NDData structures, which carry along both the image data and an associated uncertainty array.
The routines are implemented with three basic classes: the Trace object, the WaveCal object, and the FluxCal object. In more detail:
4.1 Trace
The Trace object is used to store functional forms and locations of spectral traces. It supports multiple traces per image, e.g., in a long-slit spectrograph, a multiobject spectrograph (fibers or slitlets), or a multiorder spectrograph.
Trace objects can be defined from one image and used on other images, allowing for small global shifts of the traces in the spatial direction. They can be saved to and read from disk as FITS files.
To create a trace from scratch, instantiate a Trace object with rows= (to specify the window, i.e. length of the slit), lags= (to specify how far the search can extend for translation of the trace), degree= (to specify the degree of the polynomial for the fit to the trace), and rad= (to specify a radius in pixels to use for centroiding the trace). Alternatively, you can load a saved Trace object from disk by instantiating using the file= keyword (use file='?' to see a listing of files distributed with the package).
To make the trace, the code will generally start at the center of the image in the wavelength direction (unless the sc0 attribute is set to another value), centroid the spectrum there, then work in both directions to get a centroid at each pixel (or every skip pixels if the skip= keyword is given); the centroid for each wavelength is used as a starting guess for the subsequent wavelength if the peak is above a S/N threshold. Finally, a function will be fit to the derived centroids and the model (currently an astropy Polynomial1D) will be stored in the trace structure.
To make a trace manually, use the trace(spectrum, row) method, where row is the starting guess for the spatial position at the center of the spectrum (or at sc0). You can use the findpeak() method to find object(s) automatically and pass the returned value(s) to trace(). If row is a list (i.e. for multiple objects or orders), then multiple traces will be made, as in the sketch below.
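A minimal sketch, assuming a reduced image im from the imred step; the parameter values are illustrative, and passing the findpeak() return directly as the row argument is an assumption:

from pyvista import spectra

# slit window, allowed shift lags, polynomial degree, and centroiding radius
trc = spectra.Trace(rows=[100, 900], lags=range(-50, 50), degree=2, rad=5)

rows = trc.findpeak(im)    # find object location(s) automatically
trc.trace(im, rows)        # fit trace(s) starting from those spatial positions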
For spectrographs where the traces are relatively stable in location (e.g. multiple orders), a saved model can be
convenient to use to remake traces. To retrace, use the retrace(spectrum) method. This will do a cross-correlation of a
saved cross-section with the input spectrum to determine a shift, then will use that as a starting location to retrace.
Extraction is done using the extract(spectrum, rad=, back=[[b1,b2],[b3,b4]]) method, which uses boxcar extraction with the specified radius.
Optionally, a background value determined from one or more background windows can be subtracted, where the window locations are specified in pixels relative to the object trace position. Note that if there is non-negligible line curvature, this can lead to poor subtraction of sky emission lines. In this case, you might want to determine a 2D wavelength solution (see below), and resample the sky spectra to the wavelength scale of the object (or to some other wavelength scale) before subtraction.
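For example, a boxcar extraction with background windows on either side of the trace (window values are illustrative):

spec = trc.extract(im, rad=5, back=[[-20, -10], [10, 20]])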
Attributes
• type : type of astropy model to use for trace shape (currently, just Polynomial1D)
• model : array of astropy models for each trace
• degree : polynomial degree for model fit
• rad : radius in pixels to use for calculating centroid
• sc0 : starting column for trace, will work both directions from here
• spectrum : reference spatial slice at sc0
• pix0 : derived shift of current image relative to reference
• lags : array of lags to try when finding object location
• transpose : boolean for whether axes need transposing to put spectra along rows
• rows : range of slit length
• index : identification for each trace
Methods:
• find()
• findpeak()
• retrace()
• trace()
• extract()
• extract2d()
4.2 WaveCal
Given a WaveCal instantiation (see below), identify() is called to identify lines and fit a wavelength solution. This is achieved as follows:
1. if an input wav array/image is specified, use this to identify lines
2. if the WaveCal object has an associated spectrum, use cross correlation to identify the shift of the input spectrum, then use the previous solution to create a wavelength array. Cross correlation lags to try are specified by lags=range(dx1,dx2), default range(-300,300)
3. if inter==True, prompt the user to identify 2 lines
4. use header cards DISPDW and DISPWC for the dispersion and wave center, or as specified by input disp=[dispersion] and wref=[lambda,pix]
Given the wavelength guess array, identify() will identify lines from an input file of lamp/reference wavelengths, or, if no file is given, the lines saved in the WaveCal structure.
Lines are identified by looking for peaks within rad pixels of the initial guess that have S/N exceeding the threshold given by the thresh= keyword (default 100).
After line identification, fit() is automatically called, unless fit=False. During the fit, if plot=True, the user can remove discrepant lines (use 'n' to remove the line nearest to the cursor in wavelength, 'l' to remove all lines to the left, 'r' to remove all lines to the right); removal is done by setting the weights of these lines to zero in the WaveCal structure. After removing lines to get a better initial solution, it may be desirable to re-enable these lines to see if they can be more correctly identified given a better initial wavelength guess.
An example, given an extracted spectrum spec, might proceed as follows:

wav = spectra.WaveCal(files='KOSMOS/KOSMOS_blue_waves.fits')
wav.identify(spec, plot=True)
# if you removed a bunch of lines, especially at short and long wavelength ends,
# you can try to recover them using your revised solution:
wav.weights[:] = 1.
wav.identify(spec, plot=True, lags=range(-50,50))
If there is no previous WaveCal, a new WaveCal can be instantiated, specifying the type and degree of the model. Lines are identified given some estimate of the wavelength solution, either from an input wavelength array (wav= keyword) or from a [wave,pix] pair plus a dispersion (wref= and disp= keywords), as described above. This solution will be used to try to find lines: a centroid around the input position, of width given by the rad= keyword, is computed.
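A hedged sketch of this from-scratch path (the constructor keywords type= and degree=, and all of the specific values, are assumptions; the identify() keywords follow the description above):

wav = spectra.WaveCal(type='chebyshev', degree=3)
wav.identify(arcspec, wref=[5577.34, 512], disp=1.2, rad=5, thresh=50, plot=True)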
4.3 FluxCal
The FluxCal object allows for the creation of a spectral response curve given observations of one or more flux standard
stars. It also includes a spectral extinction correction using a set of mean extinction coefficients as a function of
wavelength. The derived response curve can be applied to an input spectrum to provide a flux-calibrated spectrum.
The accuracy of the flux calibration may be limited by differential refraction effects in either the calibration stars or
the object itself. The accuracy of the absolute calibration is further limited by light losses outside the slit.
A FluxCal object is instantiated, specifying the polynomial degree (degree=) to be used for the response curve fit. Observed 1D (i.e., extracted) spectra of spectrophotometric standard stars are added using the addstar(spectrum, waves, file=) method, where the input file gives the true fluxes as a function of wavelength; libraries of standard star spectra from ESO are included. The observed spectrum is corrected for extinction and saved, along with the true spectrum. Once one or more stars have been loaded, a response curve is derived using the response() method by fitting a polynomial to the logarithm of the ratio of the extinction-corrected observed to true fluxes, allowing for an independent scale factor for each star.
Given the response curve, input spectra can be corrected for extinction and instrumental response using the correct() method, as in the sketch below.
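A minimal sketch (the standard-star file name is hypothetical, and passing the object spectrum directly to correct() is an assumption):

flux = spectra.FluxCal(degree=5)
flux.addstar(stdspec, stdwaves, file='feige110.dat')   # hypothetical standard-star flux file
flux.response()                                        # fit the response curve from loaded standards
flux.correct(objspec)                                  # apply extinction and response corrections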
CHAPTER 5
REDUCE module
reduce is a module that implements basic data reduction for imaging and spectroscopic data. It works by reading a
configuration file that contains blocks of information needed to reduce a set of data for a given instrument in a given
night; multiple blocks can be specified in a single input configuration file if desired. It can be run in an interactive
mode where the user sees the results of each step and can make some modification, and also in a batch mode where
the processing proceeds automatically.
The reduction is run using the reduce.reduce() routine where a configuration file is a required input. Optional inputs
control the amount of user interaction: the plot= option allows the user to specify a matplotlib figure in which some
data are displayed, and the display= option allows the user to specify a pyvista.tv.TV() instance into which image data
are displayed. A verbose= option allows the user to control the level of output.
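For example, a sketch of an interactive run (the configuration file name is illustrative, and passing it as the first positional argument is an assumption):

import matplotlib.pyplot as plt
from pyvista import reduce, tv

display = tv.TV()
fig = plt.figure()

# run the reductions described in the YAML configuration file, showing each step
reduce.reduce('UT230101.yml', plot=fig, display=display, verbose=True)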
The configuration file provides an instrument identification, which is used to read an instrument configuration file with
basic information such as overscan region, gain, readout noise, normalization region, etc. It then contains blocks of
information used for the calibration (superbias, superdark, superflat, wavelength calibration), which can either specify
existing calibration products, or a list of frames to be used to construct new calibration products. Multiple calibration
products of a given type can be constructed and each is given a label by which it can be referenced for use in reduction.
Finally, blocks of information are given for the object frames to reduce.
The configuration file is formatted as a YAML file, which is a simple and natural way to provide the required information. There is some required information in the file, and some optional information. A complete list is provided below.
....
arcs : # frames to be used for wavelength calibration
- id : "pre" # ID for first wavecal set
frames : [19,20,21,22] # frames to use
bias : "bias" # bias to use
wref : "ARCES_wave" # reference file for template wavecals (pkl file)
wavecal_type : "echelle" # type: echelle or longslit
- id : "post" # ID for second wavecal set
frames : [32]
bias : "bias"
wref : "ARCES_wave"
wavecal_type : "echelle"
CHAPTER 6
Stars module
stars is the pyvista module that implements routines for dealing with stellar images.
• mark : interactively add stars to internal list
• photom : do aperture photometry of stars on internal list
• save : save internal list to file
• get : load internal list from file
stars functions:
CHAPTER 7
IMAGE module
image is the pyvista module that implements basic image arithmetic, VISTA-style.
image functions:
class pyvista.image.BOX(n=None, nr=None, nc=None, sr=1, sc=1, cr=None, cc=None, xr=None, yr=None)
Defines BOX class
set(xmin, xmax, ymin, ymax)
Resets limits of a box
Parameters
• xmin – lower x value
• xmax – higher x value
• ymin – lower y value
• ymax – higher y value
nrow()
Returns number of rows in a box
Returns : number of rows
ncol()
Returns number of columns in a box
Returns : number of columns
show(header=True)
Prints box limits
mean(data)
Returns mean of data in box
Args : data : input data (Data or np.array)
stdev(data)
Returns standard deviation of data in box
Args : data : input data (Data or np.array)
max(data)
Returns maximum of data in box
Args : data : input data (Data or np.array)
min(data)
Returns minimum of data in box
Args : data : input data (Data or np.array)
median(data)
Returns median of data in box
Args : data : input data (Data or np.array)
setval(data, val)
Sets data in box to specified value
getval(data)
Returns data in box
Args : data : input data (Data or np.array)
pyvista.image.abx(data, box)
Returns dictionary with image statistics in box.
Args : data : input data (Data or np.array) box : pyvista BOX
Returns : dictionary with image statistics : ‘mean’, ‘stdev’, ‘min’, ‘max’, ‘peakx’, ‘peaky’
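A brief sketch of BOX usage (the xr=/yr= keywords follow the constructor signature above, but interpreting them as [min, max] pixel ranges is an assumption):

import numpy as np
from pyvista import image

data = np.random.normal(size=(1024, 1024))
box = image.BOX(xr=[100, 200], yr=[300, 400])   # region of interest, illustrative limits
print(box.ncol(), box.nrow())                   # box dimensions
print(box.mean(data), box.stdev(data))          # statistics of the data within the box
print(image.abx(data, box)['mean'])             # same statistics via the abx() convenience function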
pyvista.image.gfit(data, x0, y0, size=5, fwhm=3, sub=True, plot=None, fig=1, scale=1, pafixed=False)
Does gaussian fit to input data given initial xcen,ycen
pyvista.image.tvstar(tv, plot=None, size=11, fwhm=5, scale=1, pafixed=False)
Fit gaussian and show radial profile of stars marked interactively
pyvista.image.window(hdu, box)
Reduce size of image and header accordingly
pyvista.image.stretch(a, ncol=None, nrow=None)
Stretches a 1D image into a 2D image along rows or columns
pyvista.image.add(a, b, dc=0, dr=0, box=None)
Adds b to a, paying attention to CNPIX
pyvista.image.xcorr(a, b, lags)
One-dimensional cross correlation
Parameters
• a – array_like reference array
• b – array_like array to calculate shifts for
• lags – array_like x-correlation lags to use
pyvista.image.xcorr2d(a, b, lags)
Two-dimensional cross correlation
Parameters
• a, b – input Data frames
• lags – array (1D) of x-correlation lags
Returns : (x,y) position of the cross-correlation peak from a quadratic fit to the 2D cross-correlation function
pyvista.image.zap(hd, size, nsig=3, mask=False)
Median filter array and replace values > nsig*uncertainty
pyvista.image.smooth(hd, size)
Boxcar smooth image
pyvista.image.transpose(im)
Transpose a Data object
CHAPTER 8
ETC module
8.1 Introduction
8.2 Usage
8.3 Module functions
class pyvista.etc.Atmosphere
class representing the atmosphere
throughput(wave)
Returns atmospheric transmission at input wavelengths
class pyvista.etc.Detector(name='', efficiency=0.8, rn=5)
class representing a detector
throughput(wave)
Returns detector efficiency at input wavelengths
class pyvista.etc.Instrument(name='', efficiency=0.8, pixscale=1, dispersion=<Quantity 1. Angstrom>, rn=0)
class representing an Instrument
filter(wave, filter='', cent=<Quantity 5500. Angstrom>, wid=<Quantity 850. Angstrom>, trans=0.8)
Returns filter throughput at input wavelengths
throughput(wave)
Returns instrument throughput at input wavelengths
class pyvista.etc.Mirror(type, const=0.9)
class representing a mirror given coating name, provide method for reflectivity
reflectivity(wave)
Returns reflectivity at input wavelengths
class pyvista.etc.Object(type=’bb’, teff=10000, mag=0, refmag=’V’)
Class representing an object. Given a type (blackbody or input spectrum) and a magnitude, provides methods for Fnu, Flam, and photon flux
flam(wave)
Return SED in Flambda
fnu(wave)
Return SED in Fnu
photflux(wave)
Return SED in photon flux
sed(wave)
Return sed in Fnu or Flambda, with units (depending on source of data)
class pyvista.etc.Observation(obj=None, atmos=None, telescope=None, instrument=None, wave=<Quantity [3000., 3001., 3002., ..., 9997., 9998., 9999.] Angstrom>, seeing=1, phase=0.0)
Object representing an observation
back_counts()
Return integrated photon flux for background
back_photonflux()
Return photon flux for background
counts()
Return integrated photon flux
exptime(sn)
Calculate exptime given S/N
photonflux()
Return photon flux
sn(t)
Calculate S/N given exposure time
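A hedged sketch chaining these classes together for an exposure-time calculation (the Atmosphere and Telescope constructor arguments are assumptions; the Observation, sn(), and exptime() usage follows the listings above):

from pyvista import etc

obj = etc.Object(type='bb', teff=10000, mag=18, refmag='V')   # 10000 K blackbody at V = 18
atmos = etc.Atmosphere()        # constructor arguments assumed to have usable defaults
tel = etc.Telescope()           # constructor arguments assumed to have usable defaults
inst = etc.Instrument(efficiency=0.8, pixscale=1)

obs = etc.Observation(obj=obj, atmos=atmos, telescope=tel, instrument=inst, seeing=1)
print(obs.sn(300.))             # S/N for a 300 s exposure
print(obs.exptime(20.))         # exposure time needed to reach S/N = 20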
Python Module Index

pyvista.etc
pyvista.image
pyvista.tv

Index

D
Detector (class in pyvista.etc)
div() (in module pyvista.image)

E
emission() (pyvista.etc.Atmosphere method)
exptime() (pyvista.etc.Observation method)

F
filter() (pyvista.etc.Instrument method)
flam() (pyvista.etc.Object method)
flip() (pyvista.tv.TV method)
fnu() (pyvista.etc.Object method)

G
getval() (pyvista.image.BOX method)
gfit() (in module pyvista.image)

I
imexam() (pyvista.tv.TV method)

P
photflux() (pyvista.etc.Object method)
photonflux() (pyvista.etc.Observation method)
pyvista.etc (module)
pyvista.image (module)
pyvista.tv (module)

R
rd() (in module pyvista.image)
reflectivity() (pyvista.etc.Mirror method)

S
savefig() (pyvista.tv.TV method)
sed() (pyvista.etc.Object method)
set() (pyvista.image.BOX method)
setval() (pyvista.image.BOX method)
show() (pyvista.image.BOX method)
signal() (in module pyvista.etc)
sky() (in module pyvista.image)
smooth() (in module pyvista.image)

T
Telescope (class in pyvista.etc)
throughput() (pyvista.etc.Atmosphere method)
throughput() (pyvista.etc.Detector method)
throughput() (pyvista.etc.Instrument method)
throughput() (pyvista.etc.Telescope method)
transpose() (in module pyvista.image)
TV (class in pyvista.tv)
tv() (pyvista.tv.TV method)
tvbox() (pyvista.tv.TV method)
tvcirc() (pyvista.tv.TV method)
tvclear() (pyvista.tv.TV method)
tvmark() (pyvista.tv.TV method)
tvstar() (in module pyvista.image)
tvtext() (pyvista.tv.TV method)

W
window() (in module pyvista.image)

X
xcorr() (in module pyvista.image)
xcorr2d() (in module pyvista.image)

Z
zap() (in module pyvista.image)