
Basic Imaging and Self-Calibration (T4 + T7)

John McKean

Visibilities → Fourier transform → Deconvolution


AIM:
1. Make an image by taking the fast Fourier transform of the visibility data.
2. Carry out deconvolution using the CLEAN algorithm with CASA.
3. Use the new model of the sky brightness distribution to carry out self-calibration.

During this process, we will make a ScriptForImaging.py file that can be used within CASA to make images
automatically.

In the following “> command” is used to show inputs to the terminal and # comment # is used to explain where
possible what is going on.

We will use the e-MERLIN data set on J1252+5634 that was edited and calibrated during the earlier tutorials
(T2 and T3).

If you had problems during T3, download the calibrated dataset from,

http://almanas.jb.man.ac.uk/amsr/3C277p1/1252+5634.ms.tar

STEP 1 - Set-up the script

We will add our commands to a new ScriptForImaging.py. This script allows us to re-do what we have done, or parts of the process, automatically (useful for checking mistakes). Download the template from,

http://www.astron.nl/~mckean/ScriptForImaging.py

You can edit this file using your favourite text editor, e.g. emacs, pico, etc.

> pico ScriptForImaging.py

We will edit the script as we go.


(The template contains placeholders where we enter our steps, our variables, and the commands for each step.)


To start CASA,

> casapy # start CASA #

You should see the CASA start-up messages appear in your terminal.

To run the script,

> mysteps = [0, 1] # this will run steps 0 and 1 #


> execfile('ScriptForImaging.py') # this runs the script #

Nothing will happen because we have no commands yet, but the msfile and myspw aliases have been set.
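The template downloaded above defines the actual structure, but a typical CASA step-based imaging script looks roughly like the sketch below (illustrative only; the step titles and layout here are assumptions, not the real template):

# Hypothetical sketch of a step-based imaging script.
# msfile and myspw are the aliases referred to above.
msfile = '1252+5634.ms'   # calibrated measurement set from T2/T3
myspw  = ''               # spectral window selection (empty = all)

step_title = {0: 'Dirty image with robust = 0',
              1: 'CLEAN image with robust = 0'}

# mysteps is set at the CASA prompt before execfile(); default to all steps.
try:
    thesteps = mysteps
except NameError:
    thesteps = list(step_title.keys())

if 0 in thesteps:
    print('Step 0: ' + step_title[0])
    # clean(...) parameters copied from clean.last go here (remember to indent)

if 1 in thesteps:
    print('Step 1: ' + step_title[1])
    # clean(...) parameters for the next image go here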
STEP 2 - Determine our imaging field-of-view and pixel size

We will make an image by taking the fast Fourier transform (FFT) of the visibility data. This will involve projecting
the sky surface brightness distribution onto a regular grid of pixels. We have some choices to make,

1. What is the size of the image that we would like to make?


2. How large should the pixels be?

Image size: The visibilities contain information from all of the sources in the field-of-view. Technically, we should make an image that covers this entire field-of-view. Our array has 6 antennas that are 25 m in diameter.

What is the field of view of a 25 m telescope at ~5 GHz?

> 3600 * (180 / pi) * (3e8 / 5.265e9) / 25 # arcsec * (rad->deg) * (c / ν) / D #


Out: 470.11921651759855 # Full width half max in arcsec #

This should be the field-of-view that we image, but we will use ~5 arcsec for speed.

Pixel Size: We need to Nyquist sample the data when we project them onto a regular grid so that we do not lose information. We can estimate the pixel size by considering the longest baseline in our data set, using plotms and plotting AMP versus UVDIST (colourise by SPW, corr='RR, LL', avgchannel = '64').

We see that the longest baseline is at ~220 km. So we can estimate the
synthesised beam with,

> 3600 * (180 / pi) * (3e8 / 5.265e9) / 220e3


Out: 0.05347342021615011 # max resolution in arcsec #

This is approximately what we would expect, so we take 10.7 mas pixels for
safety (1/5 sampling).
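The two calculations above can be wrapped into a small helper so the same arithmetic can be re-used for other frequencies, dishes or baselines (a minimal sketch using the numbers assumed in this tutorial):

import math

C = 3.0e8  # speed of light in m/s

def fov_arcsec(freq_hz, diameter_m):
    # Approximate primary-beam FWHM (~lambda/D) in arcsec.
    return 3600.0 * math.degrees((C / freq_hz) / diameter_m)

def cell_arcsec(freq_hz, max_baseline_m, oversample=5.0):
    # Pixel size that samples the synthesised beam (~lambda/B_max) 'oversample' times.
    return 3600.0 * math.degrees((C / freq_hz) / max_baseline_m) / oversample

print(fov_arcsec(5.265e9, 25.0))       # ~470 arcsec primary beam
print(cell_arcsec(5.265e9, 220e3))     # ~0.0107 arcsec (10.7 mas) pixels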
STEP 3 - Make an image

We will start by making our first image, which will be the FFT of the visibility data. All deconvolution is carried out
in CASA using the CLEAN task.

> help clean

This will give you a full summary of the task and suggested input parameters. Many of them we will not use in this tutorial.

Start by resetting all of the parameters to their defaults,

> default clean

> vis = msfile # Name of the visibility MS file #


> spw = myspw # spectral windows that we will use #
> cell = "0.0107arcsec" # pixel size we will use #
> imsize = 512 # image size we will use (~5 arcsec) #
> weighting = "briggs" # set visibility weighting #
> robust = 0 # set robust parameter (balance between nat/uni) #
> niter = 0 # we will do no deconvolution (dirty image) #
> imagename = "dirty.b0" # call your image something #
> inp # check your inputs (nothing should be red) #
> go clean # run the task #

Look at your logger window to view the progress of CLEAN.


(Logger output: our input parameters are echoed, a "WARN: No primary beam model" message may appear, and the estimated synthesised beam size is reported.)
Let's look at the output. We have generated 5 images that are all on the same grid:

dirty.b0.image # The ‘deconvolved’ image #


dirty.b0.psf # The image of the point spread function (FFT of the uv-sampling function) #
dirty.b0.model # The image containing your model components (delta functions, truncated Gaussians) #
dirty.b0.residual # The image made by subtracting the model from the visibilities and taking the FFT #
dirty.b0.flux # An image of the expected primary beam response #

We can look at each of these images using the CASA VIEWER (run interactively or from the command line).

> viewer # start the viewer GUI and DATA MANAGER #
This will start the VIEWER GUI; select dirty.b0.image and then select raster map.
Let's look at each of the output images (either start a new VIEWER or add multiple images to the same VIEWER and use the ANIMATOR option: top menu -> VIEW -> ANIMATOR).

(Figure: the image, beam/PSF, model, and residual displayed in the VIEWER.)
All that seemed to work well, so let's add the parameters of our CLEAN run to our script. Every time we run a task in CASA, a <task>.last file is generated, for example clean.last,

> !more clean.last

and copy the final part to our script; if we wanted, we could then re-do what we just did using our script,

> mysteps = [0] # this will run step 0 #


> execfile('ScriptForImaging.py') # this runs the script #

Update the steps to be run, and copy the clean parameters into the script (remember to indent them within the step).
STEP 4 - What about image weights

So far we have only used robust = 0, but let's try the cases of natural and uniform weighting (robust = 2 and robust = -2).

> tget clean # recover the last set of parameters used #


> robust = 2 # set robust parameter to 2 (natural weighting) #
> imagename = “dirty.b2” # set new image name to make new file #
> go clean # start FFT #

And once that is completed, we can add the clean.last command to our script. Then run with robust = -2,

> tget clean # recover the last set of parameters used #


> robust = -2 # set robust parameter to -2 (uniform weighting) #
> imagename = “dirty.b-2” # set new image name to make new file #
> go clean # start FFT #

Note the synthesised beam sizes that are estimated by CASA for the different weights.
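If you prefer to read the fitted beams from the image headers rather than the logger, the imhead task can do this from the CASA prompt (a sketch; it assumes clean has written a restoring beam to each image header, which it normally does even for a dirty image):

# Compare the restoring beams of the three dirty images; imhead with
# mode='get' returns the header keyword as a value/unit record.
for img in ['dirty.b-2.image', 'dirty.b0.image', 'dirty.b2.image']:
    bmaj = imhead(imagename=img, mode='get', hdkey='beammajor')
    bmin = imhead(imagename=img, mode='get', hdkey='beamminor')
    print(img, bmaj, bmin)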

Next let's look at the dirty images and psf images using the VIEWER.
TIP: It is useful to first make a dirty image to see if your choice of pixel size (cell) and image size (imsize) is appropriate given your target observation.

Also, look at the side-lobe structure of the PSF, as it will help you when you are deconvolving the image.

(Figure: the dirty images and PSFs for uniform, robust = 0, and natural weighting.)


STEP 5 - Deconvolution

The ripples that we see in the dirty images are due to the side-lobe structure of the PSF. This is dependent on
the uv-coverage (sampling function) and our choice of weighting. For the remainder of the tutorial, we will use
Briggs weighting with robust = 0.

> tget clean # recover the last set of parameters used #


> robust = 0 # set robust parameter to 0 (a balance between uniform and natural weighting) #

We deconvolve using the CLEAN algorithm, and in this case we will use delta functions to make a model for the source. Other options, for example truncated Gaussians, are possible, but we will not use them here.

The CLEAN algorithm has the following steps (a toy sketch of the minor cycle is given after the list):


1. Identify the surface brightness peak in the map.
2. Fit a delta function at this position with a value of the peak surface brightness * gain factor.
3. Subtract this component, convolved with the PSF (dirty beam), from the image.
4. Identify the next brightness peak and repeat steps 2 and 3 (Minor Cycle).
5. Subtract the collection of delta functions from the uv-data and re-image.
6. Repeat steps 1-5 until some threshold is reached.
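To make these steps concrete, here is a toy image-plane (Hogbom-style) version of the minor cycle written in plain numpy. It only illustrates steps 1-4 above; it is not the actual implementation used by the CASA clean task:

import numpy as np

def hogbom_minor_cycle(dirty, psf, gain=0.05, niter=3000, threshold=0.0):
    # Toy CLEAN: repeatedly subtract a scaled, shifted PSF at the peak position.
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    ny, nx = dirty.shape
    cy, cx = np.unravel_index(np.argmax(psf), psf.shape)   # PSF peak position
    for _ in range(niter):
        py, px = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[py, px]
        if np.abs(peak) <= threshold:
            break
        model[py, px] += gain * peak                        # delta-function component
        # subtract the shifted, scaled PSF from the residual image
        y0, x0 = py - cy, px - cx
        ys, xs = slice(max(0, y0), min(ny, ny + y0)), slice(max(0, x0), min(nx, nx + x0))
        pys, pxs = slice(max(0, -y0), min(ny, ny - y0)), slice(max(0, -x0), min(nx, nx - x0))
        residual[ys, xs] -= gain * peak * psf[pys, pxs]
    return model, residual

Step 5 (the major cycle) then subtracts the accumulated model from the visibilities and re-makes the residual image.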

Now we need to define two new parameters for CLEAN

> niter = 3000 # maximum number of iterations (set by trial and error) #


> gain = 0.05 # factor of the peak brightness to be subtracted #
> interactive = T # to allow interactive cleaning #
> imagename = “clean.b0” # set new image name to make new file #
> inp # review parameters #
> go clean # start FFT and deconvolution #

Remember to look at your logger for information.


(Interactive CLEAN: zoom around the region of interest and select the clean regions; once the areas are defined, run a minor cycle. As the cycles progress, new parts of the source appear and are CLEANed away, until mostly noisy features remain, at which point we end the interactive clean.)
We end clean when we think we have reached a reasonable noise limit.

Note that we have cleaned a total flux of ~0.45 Jy and the threshold is 0.0007 Jy (we will use these values for
running CLEAN non-INTERACTIVELY).

We have also generated a new file,

clean.b0.mask # The mask image that defines the CLEAN regions #

Let’s look at the final images using the VIEWER

> viewer # start the viewer GUI and DATA MANAGER #

Load the RASTER map of the image, model, residual, mask.


(Figure: the mask, cleaned image, residual, and model images.)


All that seemed to work well, so let's add the parameters of our CLEAN run to our script. First, we add the threshold, give a new image name, and set the task not to run interactively,

> tget clean # recover the last set of parameters used #


> imagename = “clean.b0.auto” # set new image name to make new file #
> interactive = F # don’t allow interactive cleaning #
> threshold = “0.7mJy” # set threshold to stop cleaning #
> mask = “clean.b0.mask” # use of pre-defined mask #
> inp # review parameters #
> tput clean # save parameters to the .last file #

> !more clean.last

and copy the final part to our script (step 2).

Let's try running everything using our script (this will overwrite our dirty images and make a new clean image). Depending on your computer, this should take ~5 mins to run.

> mysteps = [0, 1] # this will run steps 0 and 1 #


> execfile('ScriptForImaging.py') # this runs the script #
STEP 6 - Image properties
We can use the VIEWER to estimate some image statistics based on our new clean image.

> viewer # start the viewer GUI and DATA MANAGER #

Load the RASTER map of "clean.b0.auto.image" in the VIEWER.

Double click inside the regions to get the statistics.

Select regions around the target and an empty patch of sky, then note the flux density of our target and the rms noise of the image.
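The same numbers can also be obtained non-interactively with the imstat task, which is convenient to put in the script. A sketch (the box pixel coordinates below are placeholders, not values from the tutorial; the 'flux' key is only returned when the image has a restoring beam):

# Integrated flux density of the target: statistics inside a box around the source.
src = imstat(imagename='clean.b0.auto.image', box='200,200,312,312')   # hypothetical box
print('Integrated flux (Jy):', src['flux'][0])

# rms noise: statistics of an empty patch of sky away from the source.
noise = imstat(imagename='clean.b0.auto.image', box='20,20,120,120')   # hypothetical box
print('rms noise (Jy/beam):', noise['rms'][0])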
STEP 7 - Student exercise
Try making an image of the source using uniform and natural weighting (robust = -2 and 2, respectively); do this by making new steps 2 and 3 in your imaging script, and run it over lunch.

Remember to change the image name, otherwise you will overwrite your previous images.

Measure the flux density and rms noise of each map; how do they compare?

What we find is that there are still strong image residuals after imaging. Where do these come from?

They are partly due to residual phase and amplitude errors in the data.

Phase errors: move the source around, leading to poor deconvolution.

Amplitude errors: result in a different PSF than expected, leading to poor deconvolution.
STEP 8 - Self-calibration
After transferring the solutions from a calibrator we may find that there are residual errors in our data.

Why?
Our calibrators are observed at a different time (except for simultaneous observations, i.e. in-beam calibration) and at a different position on the sky than our target.

Use the process of self-calibration:


1) Make an image of your target (after applying calibrator solutions).
2) Use this model to calibrate the data over some solution interval.
3) Make an image of your target (after applying self-calibration solutions).
4) Use this model to calibrate the data over some solution interval.
5) Iterate this process until no major improvement on image quality.

Advantages:
1) Can correct for residual amplitude and phase errors.
2) Can correct for direction dependent effects (see later).

Disadvantages:
1) Errors in the model or low SNR can propagate into your self-calibration solutions, and you can diverge
from the correct model.

We will use our model for the source that we made during the previous clean process.

Our first step is to blank the MODEL COLUMN of our MS file, to limit any problems from previous work.

> tget clearcal # recover the last set of parameters used #


> vis = “1252+5634.ms” # select the visibility dataset #
> inp # review parameters #
> go clearcal # re-initialise the MODEL and CORRECTED data columns #
We will use the FT task to take the FFT of our model and generate a set of model visibilities. Note that CLEAN can
do this automatically for you.

> mymodel = “clean.b0.model” # set alias to point to best current model #


> tget ft # recover the last set of parameters used #
> model = mymodel # set the input model to my model #
> usescratch = T # write the model visibilities into the MODEL column #
> inp # review parameters #
> go ft # start FFT and sampling #

Plot the model visibilities (avgchannel = 64), colourised by SPW.
Let's add this to our script (a new step).
> !more ft.last

and copy the final part to our script (step 4).

Add the new step (4 if you did the homework, 2 if you did not); as before, we enter our variables at the top and the ft parameters within the step.
We will first carry out PHASE-ONLY self-calibration using this model. Remember, an error in your model can be absorbed into the calibration:

\tilde{V}_{ij} = J_{ij} \, V^{\rm IDEAL}_{ij}

Our model will be used to determine a new calibration table which will describe the phase variations as a
function of time.
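In the usual scalar formalism (not spelled out on the slides), phase-only self-calibration restricts the antenna gains to unit amplitude,

J_{ij} = g_i \, g_j^{*}, \qquad g_i = e^{\,i\phi_i(t)} \;\;\Longrightarrow\;\; V^{\rm corr}_{ij} = e^{-i[\phi_i(t) - \phi_j(t)]} \, \tilde{V}_{ij},

so only one phase per antenna (per solution interval) is solved for. With 6 antennas and Mk2 held fixed as the reference, there are 5 free phases constrained by 15 baselines, which is why the solution is well determined as long as the signal-to-noise ratio per interval is sufficient.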
> default gaincal # reset the calibration parameters #
> vis = “1252+5634.ms” # select the visibility dataset #
> caltable = “1.phasecal” # make a new calibration table #
> solint = “60s” # we will start by using a solution interval of 60 s #
> refant = “Mk2” # select MarkII as the reference antenna #
> calmode = “p” # phase-only self cal #
> inp # review parameters #
> go gaincal # solve for the antenna-based phase solutions #

(The logger reports statistics of the solutions.)


What is an appropriate solution interval? (The figure compares solutions obtained with 3 min, 1 min, and 3 s intervals.)

We want the shortest possible time-scale that tracks the gain variations, whilst being long enough to have a sufficient signal-to-noise ratio.



Let's look at the quality of the solutions.
> default plotcal # reset the plotting parameters #
> caltable = “1.phasecal” # use our new calibration table #
> xaxis = “time” # plot as a function of time #
> yaxis = “phase” # plot the phase solutions #
> subplot = 231 # plot the 6 antennas on one plot #
> plotrange = [-1,-1,-180,180] # plot all time and +/- 180 deg #
> iteration = "antenna" # let's look at each antenna separately #
> go plotcal # plot the calibration solutions #
As the solutions look quite good, let's apply them to the data,
> default applycal # reset the calibration parameters #
> vis = “1252+5634.ms” # select the visibility dataset #
> gaintable = “1.phasecal” # select tables to apply (in correct order) #
> calwt = F # let's not calibrate the weights #
> inp # review parameters #
> go applycal # apply calibration tables and write CORRECTED column #

Now we can make a new image and model for the source using CLEAN (non-interactively).

> tget clean # recover the last set of parameters used #


> imagename = “clean.b0.self” # set new image name to make new file #
> robust = 0 # set robust parameter to 0 #
> interactive = F # don’t allow interactive cleaning #
> threshold = “0.7mJy” # set threshold to stop cleaning #
> mask = “clean.b0.mask” # use of pre-defined mask #
> usescratch = T # write the model visibilities to MODEL column #
> inp # review parameters #
> go clean # start deconvolution #

At this point, we have now completed a self-calibration loop (GAINCAL -> APPLYCAL -> CLEAN); this will be step 5 of our script.

Let's add these tasks to a new step of our script.

> !more gaincal.last # view the gaincal parameters #


> !more applycal.last # view the applycal parameters #
> !more clean.last # view the clean parameters #

add os.system('rm -rf clean.b0.self.*') to the first part of the step.
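Pasting these three *.last files together, step 5 of the script will look roughly like the sketch below (illustrative only; copy the exact parameter values from your own gaincal.last, applycal.last and clean.last, and note that mysolint and myspw are assumed to be set with the other variables at the top of the script):

# STEP 5: phase-only self-calibration loop (GAINCAL -> APPLYCAL -> CLEAN)
if 5 in thesteps:
    os.system('rm -rf clean.b0.self.*')     # remove old outputs before re-running
    # (the old 1.phasecal table may also need removing before gaincal is re-run)
    gaincal(vis='1252+5634.ms', caltable='1.phasecal', solint=mysolint,
            refant='Mk2', calmode='p')
    applycal(vis='1252+5634.ms', gaintable=['1.phasecal'], calwt=False)
    clean(vis='1252+5634.ms', imagename='clean.b0.self', spw=myspw,
          cell='0.0107arcsec', imsize=512, weighting='briggs', robust=0,
          niter=3000, gain=0.05, threshold='0.7mJy', mask='clean.b0.mask',
          interactive=False, usescratch=True)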


Add the new step 5 to the script, entering the variables at the top as before.

Let's look at our first self-calibrated image.

> viewer

Double click inside the regions to get the statistics.

We find that self-calibration has lowered the noise and increased the recovered flux density of the source.
STEP 9 - Self-calibration loops (Phase)

We will now attempt a self-calibration loop using our script.

> mysteps = [5] # this will run step 5 only #


> execfile('ScriptForImaging.py') # this runs the script #

** If you see an error, it is likely due to SYNTAX issues. Make sure that you have
the correct indentation for each step (double space), see the line of the script
that reports the error message **

** Also make sure that you are running only STEP 5 - if another step is running, then it means that you haven't indented the commands within the other steps correctly. **

Let's look at our next self-calibrated image.

> viewer

This looks similar to before; how do the noise and peak flux compare?

Let's try a final phase-only self-calibration, but let's change our script to use a shorter solution interval, i.e. track phase changes on shorter time-scales, now that we have a better model.

mysolint = '30s'

> mysteps = [5]


> execfile(‘ScriptForImaging.py')
We are not seeing any major improvement; could our problems be due to bad data? Let's check.

Let's inspect the residual visibilities,

> default plotms


> vis = “1252+5634.ms”
> correlation = ‘rr, ll’
> avgchannel = ’64’
> inp
> go plotms # make the plot #

The residual visibilities are the corrected data minus the model (corrected - model).

It looks like there are some time ranges when the amplitudes increase:

1) 21:30:00 to 21:32:00
2) 30:20:00 to 32:00:00

Let's flag these time ranges.


STEP 10 - Post-calibration flagging

We will now flag the bad time ranges

> default flagdata # restore the parameters to defaults #


> vis = “1252+5634.ms” # select the visibility dataset #
> timerange = “21:30:00~21:32:00” # select the time range to flag #
> inp # review parameters #
> go flagdata # carry out flagging #

Add this to our script, but at which point? (at the beginning)

> !more flagdata.last

> tget flagdata # restore the parameters from last usage #


> timerange = "30:20:00~32:00:00" # select the time range to flag #
> inp # review parameters #
> go flagdata # carry out flagging #

Add this to our script.

> !more flagdata.last


STEP 11 - Self-calibration (Amp)

Let's try a loop of amplitude self-calibration to fix the residual amplitude errors.

> tget gaincal # recover the last set of parameters used #


> caltable = "1.ampcal" # make a new calibration table #
> solint = "inf" # we will use a very large solution interval #
> combine = "scan" # combine scans so the interval spans the whole observation #
> calmode = "a" # amplitude-only self-cal #
> gaintable = ["1.phasecal"] # apply previous phase solutions #
> inp # review parameters #
> go gaincal # determine calibration parameters #

> tget applycal # recover the last set of parameters used #


> gaintable = [“1.phasecal”,“1.ampcal”] # select tables to apply #
> inp # review parameters #
> go applycal # apply calibration tables and write CORRECTED column #

Now we can make a new image and model for the source using CLEAN (non-interactively).

> tget clean # recover the last set of parameters used #


> imagename = “clean.b0.self2” # set new image name to make new file #
> interactive = T # allow interactive cleaning #
> threshold = "" # no fixed threshold; stop cleaning based on the residuals #
> inp # review parameters #
> go clean # start deconvolution #

We will now go through a process of interactive CLEAN to make our next map.

At this point, we have now completed another self-calibration loop (GAINCAL -> APPLYCAL -> CLEAN); this will be step 6 of our script.
Add new step 6.

Our dynamic range is peak/rms ~ 1000, which is limited by deconvolution errors in the complex, bright component.

Further careful imaging (with a smaller cell size) may improve this.

Our final model should be copied into our script, at step 4,

> ! cp -rf clean.b0.self2.model best-model.model


> ! cp -rf clean.b0.self2.mask best-mask.mask
STEP 12 - Student exercise

1) Make a new directory


2) Copy the 1252+5634.ms.tar file to the directory and untar it (cp … tar xvf)
3) Copy the best-model.model file to the directory (cp -rf …)
4) Copy the best-mask.mask file to the directory (cp -rf …)
5) Copy the ScriptForImaging.py file to the directory (cp …)
6) Change to the directory (cd)
7) Rename best-mask.mask to clean.b0.mask (mv)

Start casapy and run the script using all steps.
