We wish to obtain the surface brightness of each galaxy, in standard magnitudes per square arcsecond, as a function of position. Obtaining this from the raw data, which are in the form of CCD counts per pixel in arbitrary units, involves several steps.
Photons incident on a photodetector give rise to electrons. These electrons can then be transferred to an output stage and counted. This is the principle used in a Charge Coupled Device (CCD). A CCD is a 2-dimensional array of photodetectors (picture elements, or pixels). The basic functions of the CCD include injection of charges, their storage, their transfer and their readout. The CCD is exposed to incident illumination for a period of time and then the accumulated charge, proportional to the incident light, is transferred to the output.
For a given pixel, (i,j), the total number of electrons, N(i,j), is a sum of three components:

N(i,j) = Q(i,j) S(i,j) + B(i,j) + D(i,j),

where S(i,j) is the incident signal, Q(i,j) is the quantum efficiency of the (i,j) pixel of the CCD, B(i,j) is the bias applied to the CCD, and D(i,j) is the dark noise (thermal electrons).
Several preprocessing steps have to be carried out in order to obtain a single clean image per filter per object. These are described below. Tasks from IRAF and STSDAS were used for all basic optical reductions. For IR reductions, command language scripts of IRAF and STSDAS developed by S. E. Persson (Observatories of the Carnegie Institution of Washington) were used.
The dark current is the term D(i,j) in the equation above. The electrons that constitute the signal are excited into the conduction band by photons. However, thermal energy can also excite electrons into the conduction band; the resulting thermal electrons constitute the undesirable dark current, which contributes even when there is no incident light. In modern optical CCDs, dark current is negligible when the detector is operated at a temperature near 150 K and can be ignored. NIR detectors, however, are made of semiconductor material with a smaller energy gap between the valence and conduction bands, and hence have to be operated at much lower temperatures (e.g. 70 K for a HgCdTe detector). Consequently, the dark current is sensitive to thermal background radiation from the ambient environment.
If dark frames of the exact duration of an object's NIR exposure are not available, dark frames of different durations have to be interpolated to obtain the dark current corresponding to that exposure. Keeping this in mind, several tens of dark frames of duration 2, 3, 5 and 35 seconds were obtained every day during the NIR run. The flat field and object exposure times were chosen from the above set so that no interpolation was required.
The bias is the term B(i,j) in the equation above. It is the pedestal level (DC offset) of the CCD introduced at the time of readout. Bias subtraction is done either by calculating the bias from periodically taken special bias frames (e.g. Jedrzejewski, 1987) or from the overscan region (e.g. Peletier et al., 1990). The overscan is a series of readouts of the CCD amplifier with no exposure, and is a convenient provision for reading off the offset level. The main advantages of using an overscan region for bias subtraction are (1) no separate bias frames are needed and (2) the bias level obtained is closer in time to the actual observation than a separate frame can provide. At LCO the overscan region was used for bias subtraction during both runs; an 11-pixel wide region served this purpose. Though the bias was found to be constant throughout the night, separate bias frames were taken at the beginning and end of the night, as well as once or twice during the night. If at any point a problem was suspected with the chip, additional bias frames were taken and inspected.
The bias is usually constant over the chip. If it does vary a little from pixel to pixel, the variation is generally a slight function of the row number, i.e., it varies along the columns. The overscan region spans a few columns. A median column is obtained by extracting the median value of each row, and this column is then fitted as a function of row number. The fit, which is often a constant, is subtracted from each column of the image, and the image is then trimmed to remove the overscan region. All this was done using the IRAF task colbias.
For the first few runs colbias was applied interactively, the fit was examined and it was found to be satisfactory. Thereafter the procedure was applied automatically to each frame.
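The overscan procedure above can be sketched in a few lines of NumPy. This is an illustrative version, not the actual colbias implementation; the function name and the constant-fit default are assumptions.

```python
import numpy as np

def subtract_overscan(frame, overscan_cols, fit_order=0):
    """Column-bias subtraction in the spirit of IRAF's colbias.

    frame         : 2-D array (rows x cols) of raw counts
    overscan_cols : slice selecting the overscan columns
    fit_order     : order of the polynomial fit along the rows
                    (0 reproduces the common constant-bias case)
    """
    rows = np.arange(frame.shape[0])
    # Median across the overscan columns gives one bias value per row.
    bias_per_row = np.median(frame[:, overscan_cols], axis=1)
    # Smooth the row-by-row bias with a low-order polynomial fit.
    coeffs = np.polyfit(rows, bias_per_row, fit_order)
    bias_fit = np.polyval(coeffs, rows)
    # Subtract the fitted bias from every column, then trim the overscan.
    corrected = frame - bias_fit[:, None]
    keep = np.ones(frame.shape[1], dtype=bool)
    keep[overscan_cols] = False
    return corrected[:, keep]
```

For an 11-pixel overscan at the right edge of a 64-column frame one would call, e.g., `subtract_overscan(frame, np.s_[53:64])`.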
Flat fielding accounts for the term Q(i,j) in the equation above. Unlike the dark current and bias, this is a multiplicative effect, arising from the different response of each pixel to incident radiation. The quantum efficiency (QE) of a pixel is defined as the probability that an electron in the pixel will be excited by an incident photon, i.e., the ratio of the number of electrons excited to the number of photons incident.

The QE of a pixel, Q(i,j), thus lies in the interval [0,1]. It differs from pixel to pixel, since each pixel is an individual semiconductor unit with its own characteristics, dependent on factors like thickness which cannot be fully controlled. Other external factors, like vignetting, also make the CCD response non-uniform. Thus if the CCD is illuminated by a uniform signal S, the same over all pixels, the output Q(i,j) S varies from pixel to pixel. To correct for this non-uniform response one uses the technique of flat fielding, described below.
Let the CCD be exposed to a uniform source of illumination f. The count in the (i,j) pixel is

F(i,j) = Q(i,j) f.

Now let the CCD be exposed to a source which provides S(i,j) photons at the (i,j) pixel. The count in the pixel is then

N(i,j) = Q(i,j) S(i,j).

From the two equations above we get

S(i,j) = f N(i,j) / F(i,j).

This operation recovers S(i,j), up to the constant factor f, from the observed quantities N(i,j) and F(i,j), each of which is similarly affected by the pixel-to-pixel non-uniformity. The constant f is not known, but is accounted for during the preprocessing. The technique depends on having a uniform field of illumination with a sufficiently high count, so that shot noise in the flat field does not contribute significantly to the overall noise in the signal.
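As a minimal numerical sketch of this correction (assuming the bias and dark current have already been subtracted from both frames; the function name is illustrative):

```python
import numpy as np

def flat_field(object_frame, flat_frame):
    """Divide by a mean-normalized flat to remove pixel-to-pixel QE variations.

    Both frames are assumed bias- and dark-subtracted, so each pixel holds
    Q(i,j) times the incident signal.  Normalizing the flat by its mean
    removes the unknown uniform illumination level f.
    """
    norm_flat = flat_frame / flat_frame.mean()
    return object_frame / norm_flat
```

Normalizing the flat by its mean is what absorbs the unknown constant f mentioned above.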
Flat fields are of three types, viz., dome flats, twilight flats and dark sky flats. Dome flats are obtained by imaging a uniformly illuminated flat white surface on the inside of the dome. Twilight flats are images of empty sky taken shortly after sunset or before sunrise; they are considered superior because the wavelength dependence of the twilight sky is closer to that of the night sky. For the same reason dark sky flats are sometimes obtained, by exposing the CCD to an empty region of the night sky to measure the pixel-to-pixel variation. This procedure, however, requires a large amount of telescope time which could otherwise be used for observing objects.
Another way of obtaining a flat field, without devoting extra telescope time, is to construct a median flat. For this one needs a large number of images taken over the night(s). All the frames are considered together and, at each pixel (i,j), the median of that pixel's value over all the frames is taken. The method is effective when any given pixel is more likely to be free of sources than to be illuminated by one.
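The median flat can be sketched as follows; scaling each frame by its own median (an assumption of this sketch) compensates for sky-level changes between frames:

```python
import numpy as np

def median_flat(frames):
    """Construct a flat field as the per-pixel median over many frames.

    Each frame is first scaled by its own median so that varying sky
    levels do not bias the result; a source then appears as a high
    outlier at a given pixel in only a few frames and drops out of
    the median.
    """
    stack = np.stack([f / np.median(f) for f in frames])
    return np.median(stack, axis=0)
```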
The count level aimed for in each flat field is one at which the CCD is in the linear range of its response curve, no pixels saturate, and the fractional shot-noise error in the count is small. By suitably combining several such frames into a master flat, the noise can be further reduced.
During our two runs over 50 twilight flats were obtained in each filter. The exposures were chosen so that the counts reached 40-70% of the CCD saturation level. The minimum exposure used for the flat fields was three seconds, so that the opening and closing of the shutter do not give rise to structure in the flat frames through non-uniform exposure. No gradient was seen in the flat fields. The flat fields were combined using flatcombine, taking into consideration the different exposure times, and a normalized master flat was created in each filter. To test its accuracy, a flat field not used in making the master flat was divided by the master flat. Variations in the mean count over a small area, across the flattened image, were found to be less than 1%. A cut across one such flat fielded image is reproduced in Figure . Each object frame was divided by the master flat for the appropriate filter.
Cosmic rays can interact with the CCD semiconductor substrate to produce a large amount of electrical charge. Often such an event is localized at a single pixel of the CCD frame, though at times the incidence can be grazing and several adjacent pixels are affected. Similar spurious counts can also be generated within the instrument itself, either by the electronics or by radioactive material such as the CCD window glass (Buil, 1991). Since typically only isolated pixels are affected, these events can be detected as isolated peaks in intensity.
The IRAF task cosmicrays is used to remove the cosmic rays. To begin with, a window size is chosen and the window is slid across the image. Intensity peaks in the window are detected and their intensity is compared with the mean intensity of the rest of the window. If the ratio of the peak intensity to that mean exceeds a threshold, say five, the pixel is marked as a cosmic ray hit. The window size and threshold depend on the point spread function (PSF) and the exposure. The task is run interactively the first few times to determine reasonable values for the threshold and window size. A second pass is made to detect cosmic ray events affecting neighboring pixels and grazing incidences.
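A bare-bones version of this peak test can be written as below. This is not the actual cosmicrays task; the window half-width and threshold defaults are the kind of 'reasonable' values one would tune interactively.

```python
import numpy as np

def flag_cosmic_rays(image, half=2, threshold=5.0):
    """Flag pixels whose value exceeds `threshold` times the mean of the
    rest of a (2*half+1)-square window centred on them -- a simple
    version of the sliding-window peak test described in the text."""
    ny, nx = image.shape
    mask = np.zeros_like(image, dtype=bool)
    for y in range(half, ny - half):
        for x in range(half, nx - half):
            win = image[y - half:y + half + 1, x - half:x + half + 1]
            peak = image[y, x]
            if peak != win.max():
                continue  # only test the local peak of each window
            rest_mean = (win.sum() - peak) / (win.size - 1)
            if rest_mean > 0 and peak / rest_mean > threshold:
                mask[y, x] = True
    return mask
```

A second pass over pixels adjacent to flagged ones would catch grazing-incidence events, as described above.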
When a CCD is exposed to a region of night sky seemingly devoid of discrete objects, it still registers non-zero counts coming from the sky, increasing monotonically with the exposure duration. Even on a starry, moonless night at a good site the sky is appreciably bright. The sky brightness results from the following factors (Dube et al., 1972): (1) photochemical processes in the upper atmosphere, a component with an irregular spectrum; (2) particles in the solar system scattering sunlight, giving rise to zodiacal light; (3) faint and unresolved stars in our Galaxy; and (4) diffuse extragalactic light from unresolved galaxies. The standard deviation of the background is an important factor in judging the validity of a detected feature: only when the count in an object exceeds the background fluctuations by several standard deviations can the feature be considered real. The sky background affects extended images the most; at every point of the image the count is overestimated by an amount equal to the background, resulting in a gross error if it is not accounted for. Sky subtraction is therefore a vital step in all photometry, and various methods have been devised for measuring the sky level in an image.
When a galaxy image is small compared to the CCD frame, the ``boxes method'' is preferred (e.g. Peletier et al., 1990). In this method the average count in boxes placed in different corners of the CCD, well removed from stars and galaxies, is obtained. The MRC objects are several arcsec across, while the CCD field was at least 11 for all observations, so there was plenty of empty region for sky estimation within the field, well removed from all sources. For each frame over 20 boxes were used for sky estimation, and the mean of the medians in these boxes was accepted as the sky value.
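The boxes estimate reduces to a few lines; the careful part in practice is placing the boxes on genuinely empty regions (the corner boxes in the usage example are illustrative):

```python
import numpy as np

def sky_from_boxes(image, boxes):
    """Estimate the sky as the mean of per-box medians.

    boxes : list of (y0, y1, x0, x1) corner tuples placed on regions
            free of stars and galaxies.
    """
    medians = [np.median(image[y0:y1, x0:x1]) for (y0, y1, x0, x1) in boxes]
    return float(np.mean(medians))
```

For example, `sky_from_boxes(img, [(0, 10, 0, 10), (90, 100, 90, 100)])` averages the medians of two corner boxes.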
If the galaxy image is large, regions free of the image may not be available for sky estimation. One then needs to apply a correction to the initial boxes estimate. This is done iteratively by (1) subtracting the estimated sky from the frame and (2) comparing the luminosity profile at outer and intermediate radii to see whether the two fall off similarly. If the outer profile falls too steeply, the sky has been over-subtracted; if it falls more slowly than the intermediate profile, the sky has not been fully subtracted. In either case the sky estimate is updated and the process repeated. A polynomial can also be fitted to the sky for an accurate estimation (e.g. Fasano et al., 1996).
The sky in the NIR is much brighter (roughly 100 times) than the optical sky, and the absolute sky fluctuations are also greater. Airglow and temperature fluctuations can change the sky level by a factor of 2 during the night. Accurate photometry at faint levels is thus more difficult in the NIR. Additionally, large format NIR arrays have not yet been developed. Hence, except in the case of very small galaxies (a few arcsec across), one cannot reliably use the boxes technique.
The ``sandwiching technique'' is used to estimate the sky background when the galaxy covers over a quarter of the array. In this technique an exposure of the galaxy is sandwiched between two exposures of an empty sky. One starts by pointing the telescope to a region close to the galaxy which is affected neither by the galaxy nor by any other bright source. The length of the sky exposure is kept the same as that of the galaxy frame. The galaxy is observed next. It is again followed by a sky exposure of an equal duration. The sky is then estimated from the two sky images.
For all our objects we carefully determined the sky from the galaxy frames or from the sandwiched separate sky frames.
The longer the total exposure, the better the signal-to-noise ratio. However, because of the contribution of the sky and the rapid saturation of bright stars, very long exposures are not advisable. The solution is to split the exposure into two or more shorter exposures. It is very rare that the images turn out to be perfectly aligned in successive exposures; in fact, to avoid bad pixels, an object is often moved around in the CCD field between exposures. As a result, the images need to be realigned. In general, alignment requires translation and rotation of frames.
Alignment is accomplished to an accuracy of 0.1 pixels or better using the IRAF tasks geomap and geotran. The procedure is as follows: (1) In each filter we have several frames per object. One of these frames is taken to be the reference frame for all the filters. (2) 3-4 bright, unsaturated stars well distributed around the program object are chosen as reference stars and their frame coordinates are obtained using imexam to an accuracy of 0.01 pixels. (3) Coordinates of the same stars are located in one of the other frames. (4) geomap is used to compute the transformation between the two frames, using the coordinates of all the chosen reference stars. (5) geotran is then used to map the chosen frame onto the reference frame. (6) The procedure is repeated for all the frames, in all the filters, for the given object. At the end of the procedure we have a set of frames in which corresponding objects have the same frame coordinates to within 0.1 pixel. The frames are then ready to be combined in any manner required for improving the signal-to-noise ratio, obtaining color maps, etc.
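The essence of these steps (measure matched star positions, derive the offset, resample) can be sketched as below. geomap and geotran additionally handle rotation and sub-pixel shifts via interpolation; this illustrative version applies only an integer-pixel translation.

```python
import numpy as np

def align_to_reference(frame, ref_stars, frame_stars):
    """Translate `frame` so that its stars land on the reference coordinates.

    ref_stars, frame_stars : (N, 2) arrays of matching (row, col) positions
    of the same stars in the reference frame and in `frame`.
    For simplicity this sketch applies only an integer-pixel translation
    with np.roll; a real pipeline interpolates for sub-pixel accuracy
    and allows for rotation.
    """
    offsets = np.asarray(ref_stars, float) - np.asarray(frame_stars, float)
    dy, dx = np.round(offsets.mean(axis=0)).astype(int)
    return np.roll(np.roll(frame, dy, axis=0), dx, axis=1)
```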
The IRAF task imcombine is used to combine the aligned, multiple frames of each object. The task allows weighting the frames by their exposure, so that a frame with a longer exposure gets a higher weight. If there are a large number of frames, a rejection criterion can also be adopted, whereby outliers are ignored during the final averaging. We used the rejection criterion in cases where the galaxy was close to a bright star and several very short exposures therefore had to be taken in each filter. The result is a single image per filter per object.
After the preprocessing steps the data have to be converted to a standard magnitude system, like the Johnson-Morgan B, V, R system, so that they can be compared with observations carried out by others. To obtain the required transformation coefficients, from instrumental to standard magnitudes, one observes several standard stars a few times during each night along with the program objects. These stars are non-variable and have been standardized by repeated observations with various telescopes over a period of time, so their magnitudes in the standard passbands are accurately known. One then measures the effect of atmospheric conditions on these standards and uses the information to correct the magnitudes of the program objects.
The procedure is to obtain instrumental magnitudes for a number of stars, and then to derive the transformations from instrumental to standard magnitudes. To first order, the transformation involves the color of the stars (see Henden and Kaitchuck, 1982).
A large format CCD has the advantage that several stars can be simultaneously observed. Thus, if a region containing many standard stars is chosen, a larger number of data points are obtained for the standardization. During the two runs, two fields from the Standard Areas (Landolt, 1992) viz., SA 98 and SA 104 were used. The standard stars observed are listed in Table .
The CCD counts for the stars were converted to instrumental magnitudes using

b = -2.5 log10(C_b),    r = -2.5 log10(C_r),

where C_b and C_r refer to the total number of counts per second from the individual stars in the b and r filters respectively. Since absorption and scattering in the earth's atmosphere affect the light, the resulting atmospheric extinction has to be removed by transforming the instrumental magnitudes to their extra-atmospheric values. The amount of extinction depends on the altitude of the observatory, the atmospheric conditions, the wavelength used and the zenith distance of the object. The zenith distance, z, is given by

cos z = sin(phi) sin(delta) + cos(phi) cos(delta) cos(h),

where phi is the latitude of the observatory, delta is the declination of the object, and h is the object's hour angle.

The term sec z, often denoted by X, is called the air mass. It denotes the thickness of the atmosphere crossed by the light rays, is defined to be unity at the zenith and increases to infinity at the horizon. The extra-atmospheric magnitudes, b_0 and r_0, are obtained from the instrumental magnitudes by making use of the zenith distance of the object and the extinction factor in magnitudes at the zenith. Thus,

b_0 = b - k_b X,    r_0 = r - k_r X,

where X is the airmass, b and r are the instrumental magnitudes, and k_b and k_r are the extinction coefficients in the b and r filters respectively. The standard stars are observed over a range of air masses, and the values of k_b and k_r are obtained by fitting a straight line to the observed magnitudes versus airmass. The slope gives the extinction coefficient, while the intercept gives the extra-atmospheric magnitude (the magnitude at X = 0).
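The zenith-distance relation and the straight-line extinction fit translate directly into code; the function names are illustrative, and a real reduction would fit many standard-star measurements per filter:

```python
import numpy as np

def airmass(lat_deg, dec_deg, hour_angle_deg):
    """X = sec z, with cos z = sin(phi) sin(delta) + cos(phi) cos(delta) cos(h)."""
    phi, dec, h = np.radians([lat_deg, dec_deg, hour_angle_deg])
    cos_z = np.sin(phi) * np.sin(dec) + np.cos(phi) * np.cos(dec) * np.cos(h)
    return 1.0 / cos_z

def fit_extinction(airmasses, inst_mags):
    """Least-squares straight line m = m0 + k X: the slope k is the
    extinction coefficient, the intercept m0 the extra-atmospheric
    magnitude (the magnitude at X = 0)."""
    k, m0 = np.polyfit(airmasses, inst_mags, 1)
    return k, m0
```

An object on the meridian (h = 0) at the observatory's latitude passes through the zenith, giving X = 1 as expected.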
Though it is customary to refer to the term k_i above as the extinction coefficient for filter i, it is actually a composite of two factors: (1) a zero-point term, k'_i, an extinction term that depends only on the transparency of the atmosphere and changes from night to night, and (2) a color coefficient, k''_i, a color dependent term arising from the different response of broadband filters to objects of different colors. For a given site, this second term is generally constant over a season for a given combination of filters. Thus,

k_i = k'_i + k''_i C,

where C is the color of the object.
The transformations to the standard system are given by

B = b_0 + alpha_B (B - R) + zeta_B,    R = r_0 + alpha_R (B - R) + zeta_R,

where alpha_B and alpha_R are the color coefficients and zeta_B and zeta_R are the zero-points.
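Given extra-atmospheric instrumental magnitudes and the known standard magnitudes, the color coefficient and zero-point follow from a straight-line fit of (standard minus instrumental) magnitude against color. The function and symbol names below are illustrative:

```python
import numpy as np

def fit_transformation(std_mags, inst_mags0, colors):
    """Fit M = m0 + alpha * C + zeta by least squares.

    std_mags   : standard-system magnitudes of the stars
    inst_mags0 : extra-atmospheric instrumental magnitudes of the same stars
    colors     : standard colors C of the same stars
    Returns (alpha, zeta): the color coefficient and the zero-point.
    """
    diff = np.asarray(std_mags) - np.asarray(inst_mags0)
    alpha, zeta = np.polyfit(np.asarray(colors), diff, 1)
    return alpha, zeta
```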
We followed the procedure outlined above to obtain the extinction correction and transformation coefficients.
For NIR calibration, standards from Elias et al. (1982) were observed in the J and K bands; they are listed in Table . The stars are bright (7th magnitude) and hence were always observed out of focus, so that the image forms a torus. This avoids saturating the array while still allowing the total magnitude to be extracted. Each standard was observed several times each night. During each round, the star was first placed in the first quadrant of the IR array and seven exposures were obtained; it was then observed similarly in the second, third and fourth quadrants. The sky was also observed, by moving the standard out of the frame while making sure no other bright star was in the frame. Since the NIR sky can change rapidly, it is very important to observe the sky close to the standard.
The NIR standardization differs slightly from the broadband procedure. First a nominal sky frame is created for each wavelength observed. A note is made of any contaminating stars present in the images; these are removed, after which the sky is estimated to better accuracy and subtracted. The magnitudes are then extracted and, using the array sensitivity values and the standard magnitudes, the instrumental constant for each night is calculated.
The absorption due to our Galaxy depends on the direction of the line of sight through it: it is maximum in the Galactic plane and least towards the poles. We adopted the absorption-free polar-cap model (e.g. Sandage, 1973) to calculate the galactic absorption. In this model the absorption in magnitudes is

A_B = 0.132 (cosec|b| - 1)  for |b| <= 50 deg,    A_B = 0  for |b| > 50 deg,

where b is the galactic latitude.
All the sample objects have galactic latitudes below the polar-cap cutoff, and therefore we always used the cosec|b| expression. Extinction in the observed bands is obtained from A_B using standard extinction-law ratios.
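A sketch of the polar-cap absorption, assuming the commonly quoted Sandage (1973) coefficient of 0.132 mag and a cutoff at |b| = 50 degrees:

```python
import numpy as np

def galactic_absorption_B(b_deg):
    """Polar-cap model for Galactic absorption in the B band.

    A_B = 0.132 (cosec|b| - 1) mag for |b| <= 50 deg, and zero toward
    the poles (|b| > 50 deg).  The coefficient and cutoff are the
    commonly quoted Sandage (1973) values, assumed here.
    """
    b = abs(float(b_deg))
    if b > 50.0:
        return 0.0
    return 0.132 * (1.0 / np.sin(np.radians(b)) - 1.0)
```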
When galaxies at different redshifts are observed through a fixed passband, the collected light comes from different wavelength ranges in the restframes of the respective galaxies. The K-correction transforms the magnitude of each object to its restframe value for the passband of the filter. The K-correction depends on the spectrum of the object, and is given by (Oke and Sandage, 1968)

K(z) = 2.5 log10(1 + z) + 2.5 log10 [ Int F(lambda) S(lambda) dlambda / Int F(lambda/(1+z)) S(lambda) dlambda ],

where F(lambda) is the energy flux density in the reference frame of the galaxy and S(lambda) is the response function of the filter used. The K-corrected magnitude, m_0, is then obtained from the uncorrected magnitude m using

m_0 = m - K(z).
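The integral form can be evaluated numerically. The sketch below follows the Oke and Sandage convention quoted above; the array inputs and function names are illustrative assumptions.

```python
import numpy as np

def _trapz(y, x):
    # Simple trapezoidal rule (kept local to avoid NumPy version differences).
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

def k_correction(z, wavelengths, flux, response):
    """K = 2.5 log10(1+z)
           + 2.5 log10[ int F(l) S(l) dl / int F(l/(1+z)) S(l) dl ]

    wavelengths : observed-frame wavelength grid
    flux        : restframe energy flux density F sampled on that grid
    response    : filter response S sampled on that grid
    """
    F = lambda lam: np.interp(lam, wavelengths, flux)
    num = _trapz(F(wavelengths) * response, wavelengths)
    den = _trapz(F(wavelengths / (1.0 + z)) * response, wavelengths)
    return 2.5 * np.log10(1.0 + z) + 2.5 * np.log10(num / den)
```

For a flat spectrum the two integrals are equal and only the bandwidth term 2.5 log10(1+z) survives, a useful sanity check.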
We have used an approximate form of the K-correction, applicable at small z, from Whitford (1971) and Persson et al. (1979), for each of the three filters.
The magnitudes corrected in accordance with this approximate form have been used for subsequent correlations and calculations.