Basic Imaging and Self-Calibration (T4 + T7) : John Mckean
During this process, we will make a ScriptForImaging.py file that can be used within CASA to make images
automatically.
In the following, “> command” is used to show inputs to the terminal, and # comment # is used to explain, where possible, what is going on.
We will use the e-MERLIN data set on J1252+5634 that was edited and calibrated during the earlier tutorials
(T2 and T3).
If you had problems during T3, download the calibrated dataset from,
http://almanas.jb.man.ac.uk/amsr/3C277p1/1252+5634.ms.tar
We will add our commands to a new ScriptForImaging.py. This script allows us to re-do what we have done, or parts of the process, automatically (useful for checking mistakes). Download the template from,
http://www.astron.nl/~mckean/ScriptForImaging.py
You can edit this file using your favourite text editor, e.g. emacs, pico, etc.
Nothing will happen yet because we have no commands, but the msfile and myspw aliases have been set.
STEP 2 - Determine our imaging field-of-view and pixel size
We will make an image by taking the fast Fourier transform (FFT) of the visibility data. This will involve projecting the sky surface brightness distribution onto a regular grid of pixels. We have some choices to make:
Image size: The visibilities contain information from all of the sources in the field-of-view, so technically we should make an image that covers this entire field-of-view, which is set by the primary beam of our antennas (our array has 6 antennas of ~25 m diameter). However, we will image only ~5 arcsec for speed.
Pixel size: We need to Nyquist sample the data when we project it onto the regular grid so that we do not lose information. We can estimate the pixel size by considering the longest baseline in our data set, using plotms to plot AMP versus UVDIST (colourise by SPW, corr = 'RR, LL', avgchannel = '64').
We see that the longest baseline is ~220 km, so we can estimate the synthesised beam.
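A standard estimate (assuming these are C-band data with an observing wavelength of roughly 5-6 cm; the numbers are indicative) is

$$\theta_{\rm beam} \approx \frac{\lambda}{B_{\rm max}} \approx \frac{0.055\ {\rm m}}{220\ {\rm km}} \approx 2.5\times10^{-7}\ {\rm rad} \approx 50\ {\rm mas},$$

and one fifth of this is ~10 mas, consistent with the 10.7 mas pixel adopted below.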
This is approximately what we would expect, so we take 10.7 mas pixels for
safety (1/5 sampling).
STEP 3 - Make an image
We will start by making our first image, which will be the FFT of the visibility data. All deconvolution is carried out
in CASA using the CLEAN task.
Running > default clean followed by > inp will give you a full summary of the task and the suggested input parameters. Many of them we will not use in this tutorial.
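As a sketch, a first run that simply grids and FFTs the data (no clean iterations) might look like the following; the image name dirty.b0 matches the image inspected below, while imsize and niter = 0 are assumptions, and the cell follows the pixel size estimated above:

> default clean # reset the clean parameters #
> vis = "1252+5634.ms" # select the visibility dataset #
> imagename = "dirty.b0" # root name for the output images #
> cell = "0.0107arcsec" # ~1/5 of the estimated synthesised beam #
> imsize = 512 # gives a ~5.5 arcsec field-of-view #
> niter = 0 # no clean iterations, so we only make the dirty image #
> weighting = "briggs" # Briggs weighting #
> robust = 0 # robust parameter of 0 #
> inp # review parameters #
> go clean # grid, FFT and make the dirty image and PSF #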
Let's look at the output. We have generated 5 images that are all on the same grid (the .image, .model, .psf, .residual and .flux maps).
We can look at each of these images using the CASA VIEWER (run interactively or from the command line).
> viewer # start the VIEWER GUI and DATA MANAGER; you can zoom in/out #
This will start the VIEWER GUI; select the dirty.b0.image and then select raster map.
Let's look at each of the output images (either start a new VIEWER or add multiple images to the same VIEWER and use the ANIMATOR option - top menu -> VIEW -> ANIMATOR).
[Images: the PSF and RESIDUAL maps displayed in the VIEWER]
All that seemed to work well, so let's add the parameters of our CLEAN run to our script. Every time we run a task in CASA, a <task>.last file is generated (for example, clean.last). We can copy the final part of this file to our script and, if we want, re-do what we just did by running the script.
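For example, the corresponding entry in ScriptForImaging.py might look something like this (a sketch; the exact parameter list copied from your clean.last will differ):

# make a first (dirty) image of the target
clean(vis=msfile, imagename='dirty.b0', niter=0,
      cell='0.0107arcsec', imsize=512,
      weighting='briggs', robust=0)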
So far we have only used robust = 0, but let's also try natural and uniform weighting (robust = 2 and robust = -2). Once each run is completed, we can add its clean.last command to our script.
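A sketch of the extra runs (only the image names, which are illustrative here, and the robust values change):

# natural-like (robust = 2) and uniform-like (robust = -2) weighting
clean(vis=msfile, imagename='dirty_natural.b0', niter=0,
      cell='0.0107arcsec', imsize=512, weighting='briggs', robust=2)
clean(vis=msfile, imagename='dirty_uniform.b0', niter=0,
      cell='0.0107arcsec', imsize=512, weighting='briggs', robust=-2)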
Note the synthesised beam sizes that are estimated by CASA for the different weights.
Next, let's look at the dirty and PSF images using the VIEWER.
TIP: It is useful to first make a dirty image to see if your choice of pixel size (cell) and image size (imsize) is appropriate, given your target observation.
Also, look at the side-lobe structure of the PSF, as it will help you when you are deconvolving the image.
The ripples that we see in the dirty images are due to the side-lobe structure of the PSF. This is dependent on
the uv-coverage (sampling function) and our choice of weighting. For the remainder of the tutorial, we will use
Briggs weighting with robust = 0.
We deconvolve using the CLEAN algorithm, and in this case we will use delta functions to make a model for the source. Other options, for example truncated Gaussians, are possible, but we will not use them here.
Note that we have cleaned a total flux of ~0.45 Jy and the threshold is 0.0007 Jy (we will use these values for
running CLEAN non-INTERACTIVELY).
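For the script, a non-interactive version along these lines might be used (niter and the image name are assumptions; the threshold and weighting follow the values above, and without an interactive mask the clean depth is controlled by the threshold):

clean(vis=msfile, imagename='clean_robust0.b0', niter=5000,
      threshold='0.0007Jy', interactive=False,
      cell='0.0107arcsec', imsize=512, weighting='briggs', robust=0)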
Let's try running everything using our script (this will overwrite our dirty images and make a new clean image). Depending on your computer, this should take about 5 minutes to run.
Remember to change the image name, otherwise you will overwrite your previous images.
Measure the flux density and rms noise of each map; how do they compare?
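One way to measure these is with the imstat task (a sketch; the image name and the off-source box are placeholders):

> mystat = imstat(imagename = "clean_robust0.b0.image") # statistics over the whole image #
> print(mystat["max"], mystat["flux"]) # peak and integrated flux density #
> mystat = imstat(imagename = "clean_robust0.b0.image", box = "10,10,100,100") # an off-source box #
> print(mystat["rms"]) # rms noise away from the source #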
What we find is that there are still strong residuals in the image after deconvolution. Where do these come from?
They are partly due to residual phase and amplitude errors in the data.
Why?
Our calibrators are observed at a different time (except for simultaneous observations, i.e. in-beam calibration) and at a different position on the sky than our target.
Advantages:
1) Can correct for residual amplitude and phase errors.
2) Can correct for direction dependent effects (see later).
Disadvantages:
1) Errors in the model or low SNR can propagate into your self-calibration solutions, and you can diverge
from the correct model.
We will use our model for the source that we made during the previous clean process.
Our first step is to blank the MODEL COLUMN of our MS file, to limit any problems from previous work.
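One way to do this is with the delmod task (a sketch; check the inputs for your CASA version):

> default delmod # reset the task parameters #
> vis = "1252+5634.ms" # select the visibility dataset #
> otf = True # remove any virtual model from the MS header #
> scr = True # also remove the MODEL_DATA scratch column if present #
> go delmod # blank any previous model #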
$$\tilde{V}_{ij} = J_{ij}\,\tilde{V}_{ij}^{\rm IDEAL}$$
where $\tilde{V}_{ij}$ is the observed visibility on baseline i-j, $\tilde{V}_{ij}^{\rm IDEAL}$ is the visibility predicted from our source model, and $J_{ij}$ contains the antenna-based gain corrections that self-calibration solves for.
Our model will be used to determine a new calibration table which will describe the phase variations as a
function of time.
> default gaincal # reset the calibration parameters #
> vis = "1252+5634.ms" # select the visibility dataset #
> caltable = "1.phasecal" # make a new calibration table #
> solint = "60s" # we will start by using a solution interval of 60 s #
> refant = "Mk2" # select Mk2 as the reference antenna #
> calmode = "p" # phase-only self-cal #
> inp # review parameters #
> go gaincal # solve for the phase corrections #
[Plots: phase solutions for solution intervals of 3 mins, 1 min and 3 sec]
We want the shortest possible time-scale to track the gain variations, whilst being long enough to have a sufficient signal-to-noise ratio.
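To inspect the solutions as a function of time, one option is the plotcal task (a sketch; the plot range is illustrative):

> default plotcal # reset the task parameters #
> caltable = "1.phasecal" # the calibration table we just made #
> xaxis = "time" # plot against time #
> yaxis = "phase" # plot the phase solutions #
> iteration = "antenna" # step through the antennas #
> plotrange = [-1, -1, -180, 180] # default time range, full phase range #
> go plotcal # inspect the solutions #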
Now we can make a new image and model for the source using CLEAN (non-interactively).
At this point, we have now completed a self-calibration loop (GAINCAL -> APPLYCAL -> CLEAN); this will be step 5 of our script.
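As a sketch, step 5 of the script might contain something along these lines (the image name, niter and the applycal settings are assumptions; the calibration table and solution interval follow the values above):

# STEP 5: phase-only self-calibration, apply the solutions, and re-image
gaincal(vis=msfile, caltable='1.phasecal', solint='60s',
        refant='Mk2', calmode='p')
applycal(vis=msfile, gaintable=['1.phasecal'])
clean(vis=msfile, imagename='selfcal1.b0', niter=5000,
      threshold='0.0007Jy', interactive=False,
      cell='0.0107arcsec', imsize=512, weighting='briggs', robust=0)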
> viewer
STEP 9 - Self-calibration loops (Phase)
** If you see an error, it is likely due to SYNTAX issues. Make sure that you have the correct indentation for each step (double space); see the line of the script that is reported in the error message. **
** Also make sure that you are running only STEP 5 - if another step runs as well, it means that you haven't indented the commands within the other steps correctly. **
> viewer
This looks similar to before; how do the noise and peak flux compare?
mysolint = '30s'
1) 21:30:00 to 21:32:00
2) 30:20:00 to 32:00:00
Add this to our script, but at which point? (At the beginning, where the other aliases are set.)
Let's try a loop of amplitude self-calibration to fix the residual amplitude errors.
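A sketch of such a loop (the table names, the longer solution interval, and calmode = 'ap' are assumptions):

gaincal(vis=msfile, caltable='1.ampcal', solint='600s',
        refant='Mk2', calmode='ap',
        gaintable=['1.phasecal'])   # apply the most recent phase solutions on the fly
applycal(vis=msfile, gaintable=['1.phasecal', '1.ampcal'])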
Now we can make a new image and model for the source using CLEAN (non-interactively).
We will now go through a process of interactive CLEAN to make our next map.
At this point, we have now completed a self-calibration loop (GAINCAL -> APPLYCAL -> CLEAN); this will be step 6 of our script.
Our dynamic range is peak / rms ~ 1000, which is limited by deconvolution errors around the bright, complex component.