Chapter 1
Introduction

The resolution of all large ground-based telescopes is severely limited by the effects of atmospheric turbulence. At even the best sites, the resolution of a 2.5m telescope is degraded by at least a factor of five in the visible.

This thesis presents the development and use of the Lucky Imaging technique, an elegant method of obtaining diffraction-limited images using large telescopes. As well as discussing the implementation of a fully automated Lucky Imaging pipeline, and investigating the performance of the technique in detail, I use the Lucky Imaging system to perform two surveys for very low mass stellar and substellar companions to M-dwarfs.

In this introduction, I first describe the blurring effects of atmospheric turbulence (seeing), then discuss several methods that have been or are used to reduce the resolution loss, before concluding with a detailed look at Lucky Imaging.

1.1 The Effects of Atmospheric Turbulence



Figure 1.1: The 0.31 arcsec binary LP 359-186 (the companion was discovered as part of the VLM binary survey described in chapter 7). The left panel shows a Lucky Imaging near-diffraction-limited image; the right shows the same target as viewed by a standard CCD camera. The binary is unresolved in the seeing-limited case, where the PSF FWHM is approximately 0.8 arcsec. In this case, telescope wind-shake introduces a slight elongation into the seeing-limited PSF. Very few VLM binaries can be resolved from the ground without turbulence-compensation systems.

The effects of turbulence on astronomical imaging with a telescope were first described shortly after the invention of the instrument (Galileo 1610). Short exposure images rapidly change in both shape and position; long exposure images are very blurred (figure 1.1).

A huge variety of astronomical imaging programs rely on greater angular resolution than can be achieved from the ground with seeing-limited instruments. As an illustration of this, Livio et al. (2003) details a spectacular decade of science performed with the 2.5m Hubble Space Telescope. Space telescopes are, however, extremely costly compared to ground-based systems, and the achievement of similar resolution from the ground is a very active area of research. I detail the major turbulence-compensation methods in section 1.2; Lucky Imaging is the only method which is capable of achieving Hubble Space Telescope resolution in the visible (demonstrated in chapters 4, 6 and 7).

1.1.1 Long and short exposure point spread functions

On large telescopes, there are two important regimes of imaging – long and short (less than ~1/4 second) exposures. Because the structure of the turbulence above the telescope is continually changing, the short-exposure point spread function (PSF) is very variable. Short exposure images take the form of speckle patterns (figure 1.2), consisting of multiple distorted and overlaid copies of the diffraction-limited PSF. Short exposure PSFs thus retain information at the diffraction limit in their speckle patterns, although it is usually very highly distorted.

Long exposure images are simply the sum of a large number of short exposure images. Because the short-exposure PSF is highly variable, both in structure and centroid position, the summed long-exposure PSF is highly blurred compared to the diffraction limit, by factors of 5-15× on a 2.5m telescope at a good site.

To increase the signal-to-noise, almost all astronomical imaging is performed with long exposures, unless conditions (such as high sky brightness in the infrared) prevent it. Most conventional infrared and visible astronomical imaging on large ground-based telescopes does not retain information at the diffraction limit of the telescope, unless turbulence correction measures are employed.

1.1.2 Turbulence characterisation

Turbulence-induced phase errors are typically on the order of a few wavelengths. The total atmospheric turbulence strength is usually characterised by the quantity r0 – the size of the diffraction-limited aperture which has the same resolution as an infinite seeing-limited aperture. The RMS phase error across a circle of diameter r0 is approximately 1 radian, conventionally measured at 550nm wavelength (Fried 1967). Because it is expressed in terms of phase error, the parameter (and equivalently the image-degrading effect of the turbulence) is strongly dependent on wavelength, with r0 proportional to λ^(6∕5). Figure 1.3 shows examples of the image distortions induced in short exposure images by different values of r0.
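Since r0 is conventionally quoted at 550nm, its value in another band follows directly from the λ^(6∕5) dependence. The following is an illustrative sketch only (the function name and example numbers are mine, not from any pipeline described in this thesis):

```python
# Hypothetical helper: scale Fried's parameter r0 from its conventional
# 550 nm reference wavelength to another band, using r0 ∝ λ^(6/5).
def r0_at_wavelength(r0_550nm_cm, wavelength_nm):
    """Return r0 (cm) at the given wavelength, given its value at 550 nm."""
    return r0_550nm_cm * (wavelength_nm / 550.0) ** (6.0 / 5.0)

# A 15 cm r0 at 550 nm corresponds to a noticeably larger r0 in I-band
# (~770 nm), which is one reason turbulence compensation is easier at
# longer wavelengths.
r0_i_band = r0_at_wavelength(15.0, 770.0)  # ~22.5 cm
```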

Atmospheric turbulence is usually modelled using a Kolmogorov power spectrum, developed by Tatarski (1961) on the basis of studies by Kolmogorov (1941). In the Kolmogorov model, the mean-square difference in phase between two points on a wavefront separated by a distance r is:

D_φ(r) ≡ ⟨|φ(r′) − φ(r′ + r)|²⟩ = 6.88 (|r|∕r0)^(5∕3)
(1.1)

This model is supported by a variety of experimental measurements (for example, Colavita et al. (1987); Buscher et al. (1995); Nightingale & Buscher (1991)), and is widely used. However, other studies have found significant deviations from Kolmogorov statistics – for example, di Folco et al. (2003) found the turbulence at the Very Large Telescope Interferometer to have a spectral slope consistently 10-20% lower than Kolmogorov. The illustrative simulations presented in this introduction assume a Kolmogorov power spectrum, generated using code kindly supplied by David Buscher.
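A common numerical recipe for producing such a screen is to filter complex white noise by the square root of the Kolmogorov phase power spectrum and inverse Fourier transform. The sketch below is purely illustrative (the normalisation is approximate, and this is not the Buscher code used for the simulations in this thesis):

```python
import numpy as np

def kolmogorov_phase_screen(n, r0_pixels, seed=0):
    """Generate an n x n phase screen (radians) with a Kolmogorov spectrum.

    Filters complex Gaussian noise by the square root of the phase power
    spectrum, Phi(k) ~ 0.023 r0^(-5/3) k^(-11/3), then inverse transforms.
    Illustrative sketch with approximate normalisation only.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.fftfreq(n)            # spatial frequencies, cycles/pixel
    kx, ky = np.meshgrid(freqs, freqs)
    k = np.hypot(kx, ky)
    k[0, 0] = 1.0                        # dummy value; DC term removed below
    amplitude = np.sqrt(0.023) * r0_pixels ** (-5.0 / 6.0) * k ** (-11.0 / 6.0)
    amplitude[0, 0] = 0.0                # discard piston (mean phase)
    noise = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return np.fft.ifft2(noise * amplitude).real * n

screen = kolmogorov_phase_screen(64, 8.0)   # r0 of 8 pixels: 8 r0 across the pupil
```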



Figure 1.2: An example set of I-band LuckyCam short exposure images of the I=+11.1 mag star Ross 530. The frames are sequential 33ms exposures, with time increasing left-to-right and up-to-down. The frames have a resolution of 0.04 arcsecs per pixel, which slightly undersamples the 0.08 arcsec FWHM I-band diffraction-limit of the 2.5m Nordic Optical Telescope used for these observations. The first frame has the 17th highest Strehl ratio in this 3600 frame, 2 minute run, and is the closest-to-diffraction-limited shown in this figure. Note the very fast light concentration and PSF shape changes, and the general decline in quality over the 600ms period shown here.

Seeing

A well-figured telescope with an aperture much smaller than r0 will provide images which are limited only by the diffraction limit of its aperture; equivalently (Fried 1966), the seeing disk size ε at an effective wavelength λ is:

ε = 0.98 λ∕r0
(1.2)

ε is the turbulence-limited long exposure resolution on a telescope much larger than r0. The astronomical seeing is usually measured (during general observations) from the FWHM of the turbulence-limited long exposure PSF.

Unfortunately, r0 rarely exceeds 50cm at the best astronomical sites (for example, Aristidi et al. (2005)), and is more usually 10-20cm (for example, Vernin & Munoz-Tunon (1994)). The seeing (also by convention measured at 550nm) is only very rarely better than 0.5 arcsec at even the best sites, a factor of ten wider than the diffraction-limited resolution of a 2.5m telescope.
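Equation 1.2 makes this concrete. A short hedged example (function name and numbers are illustrative):

```python
import math

def seeing_fwhm_arcsec(wavelength_m, r0_m):
    """Long-exposure seeing disk size from equation 1.2, epsilon = 0.98 lambda / r0,
    converted from radians to arcseconds (206265 arcsec per radian)."""
    return 0.98 * wavelength_m / r0_m * 206265.0

# A typical r0 of 15 cm at 550 nm gives roughly 0.74 arcsec seeing,
# an order of magnitude worse than the diffraction limit of a 2.5m telescope.
eps = seeing_fwhm_arcsec(550e-9, 0.15)
```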

Variability

There are two main components in the coherence timescales of atmospheric turbulence. Firstly, the short-exposure (speckle) coherence time gives the timescale over which high-resolution-imaging systems must be able to correct incoming turbulence errors. The speckle coherence time τ0 has been measured by a variety of authors (for example, Dainty et al. (1981); Vernin & Munoz-Tunon (1994); Roddier et al. (1990); Buscher (1994); Scaddan & Walker (1978)). Generally these studies give values on the order of a few milliseconds to tens of milliseconds, although there is some disagreement over the detailed definition of the coherence time that should be used (Buscher 1993).

Secondly, the seeing itself changes on timescales ranging from seconds to hours or longer (chapter 4, and Racine (1996); Vernin & Muñoz-Tuñón (1998)). Tokovinin et al. (2003) find that the seeing decorrelates at different timescales (between minutes and hours) at different altitudes.

Isoplanatic Angle

The light from two astronomical objects separated by a small angle will travel on slightly different paths through the turbulence in the Earth’s atmosphere. If the objects are separated by a large angle, corrections based on the shape of one object’s wavefront will not be applicable to the other’s wavefront, because their turbulence-induced errors will be different. If turbulence corrections are made on the basis of a single guide star, the isoplanatic angle gives the angular distance from that star at which the corrections are still valid for other objects. There are a number of different quantities referred to as the isoplanatic angle, in particular the speckle imaging and adaptive optics isoplanatic angles, which are subtly different. In this thesis I define the isoplanatic angle as the radius at which the Strehl ratio of LuckyCam-corrected images drops to 1∕e of its on-axis value. The isoplanatic angles noted below may not be exactly comparable (even with each other), but give an idea of the expected values.

Roddier et al. (1982) reviews a number of isoplanatic angle measurements, again usually measured at 550nm. Typical values for speckle imaging range from 4-10 arcsec, although the adaptive optics isoplanatic patch is expected to be ~30% smaller. Vernin & Munoz-Tunon (1994) find that at the La Palma Observatory, where all LuckyCam runs 2000-2005 were made, the speckle imaging isoplanatic angle at 550nm is around two arcsecs. This very small angle does not agree with measurements made with LuckyCam, where angles between 15-30 arcsecs are observed in I-band; the discrepancy is discussed in chapter 4.


  

Figure 1.3: The large changes in PSF shape induced by a change in turbulence strength, with the wavefront shape unchanged. The three pairs of images show the effect of turbulence with 2, 5 and 10 r0 across the telescope diameter. The number of speckles, the centroid position and the position of the brightest speckle all change. The phase screen colour map wraps at 2π radians of phase error, and the PSFs are all displayed on the same colour scale. The phase screen is the same in each case, with only the turbulence strength changed.

1.2 Correction Methods

I briefly list below the main techniques that have been developed to combat the problem of atmospheric turbulence, along with some illustrative examples of instruments that employ them.

The quality of images produced by these correction methods is usually measured in one of two ways: the FWHM (full width at half maximum) of the PSFs, or their Strehl Ratio. FWHM is usually used to describe performance which does not approach the diffraction limit, where the seeing increases the image width. The Strehl Ratio, defined as the ratio of the PSF peak flux to the peak flux for a PSF acquired with a perfect telescope with no turbulence, is used to describe the light concentration of systems which give a core with diffraction-limited resolution.

In this thesis I use the terms diffraction-limited or near-diffraction-limited to describe PSFs which have a FWHM equal to that expected for the telescope with no atmospheric turbulence (1.22λ∕D, for effective wavelength λ and telescope diameter D).
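As a concrete (and deliberately simplified, hypothetical) illustration of the Strehl Ratio definition above, both PSFs are normalised to the same total flux before their peaks are compared:

```python
import numpy as np

def strehl_ratio(measured_psf, diffraction_limited_psf):
    """Ratio of the measured PSF peak to the perfect-telescope PSF peak,
    with both arrays normalised to unit total flux.  Simplified sketch:
    a real measurement must also match pixel scales and total throughput."""
    measured = measured_psf / measured_psf.sum()
    perfect = diffraction_limited_psf / diffraction_limited_psf.sum()
    return measured.max() / perfect.max()
```

A perfectly corrected image gives a Strehl Ratio of 1; lower values measure how much the turbulence has spread the light out of the diffraction-limited core.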

1.2.1 Lucky Imaging

The subject of this thesis, Lucky Imaging offers a particularly elegant and uncomplicated solution to the blurring effects of atmospheric turbulence. Among the rapid turbulent fluctuations of the atmosphere, moments of relatively quiet air appear, lasting a few tens of milliseconds. Imaging at high frame rates (frame times comparable to or shorter than the atmospheric coherence time) allows us to follow rapid seeing variations and select those times with very much better images – effectively integrating only during periods of truly superb seeing.

The frames recorded by LuckyCam are sorted according to quality; only the best frames are aligned and co-added to produce a final image. On a 2.5m telescope, the system routinely achieves the diffraction-limit at I-band (Strehl Ratio 0.2, FWHM 0.08 arcsec). The details of the Lucky Imaging process, such as the frame selection level, can all be optimised during post-processing of the data.
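The selection-and-co-addition step can be sketched in a few lines. This is a deliberately simplified, hypothetical version, ranking frames by their brightest pixel as a crude quality proxy and aligning on the brightest speckle; the actual pipeline is described in chapter 3:

```python
import numpy as np

def lucky_select_and_coadd(frames, selection_fraction=0.01):
    """Simplified Lucky Imaging sketch (not the thesis pipeline):
    rank frames by their brightest pixel, keep the best fraction,
    align each kept frame on its brightest speckle, and co-add."""
    frames = np.asarray(frames, dtype=float)
    quality = frames.reshape(len(frames), -1).max(axis=1)
    n_keep = max(1, int(len(frames) * selection_fraction))
    best = np.argsort(quality)[::-1][:n_keep]       # indices of best frames
    ny, nx = frames[0].shape
    coadd = np.zeros((ny, nx))
    for i in best:
        y, x = np.unravel_index(np.argmax(frames[i]), (ny, nx))
        # shift so the brightest speckle lands at the image centre
        shifted = np.roll(np.roll(frames[i], ny // 2 - y, axis=0),
                          nx // 2 - x, axis=1)
        coadd += shifted
    return coadd / n_keep
```

In the real pipeline the quality estimate, alignment and selection level are all more sophisticated, and can be re-optimised in post-processing as noted above.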

Lucky Imaging is treated in much more detail in section 1.3.

1.2.2 Adaptive Optics

Adaptive Optics (AO) describes technologies that actively correct in realtime the detailed wavefront perturbations introduced by the atmosphere. They do so by the addition of a deformable mirror (DM) into the optical path; figure 1.4 shows a schematic of a typical AO system. A wavefront sensor tracks the shape of the incoming wavefront using a nearby guide star. The DM is commanded to deform such that light reflecting from the mirror acquires path length errors designed to cancel out those induced by the atmospheric turbulence.



Figure 1.4: A diagram of a typical Sodium laser guide star adaptive optics system, excluding the laser launch systems. Light enters from the telescope at the top left, and is corrected for tip/tilt before further correction by the deformable mirror. IR wavelengths (> 1μm) are sent to the science camera. Light from the sodium layer excited by the laser is used for the detailed wavefront sensing, while tip/tilt is measured from the visible light from a nearby natural guide star. Adapted from Wizinowich et al. (2006).

AO systems are in routine use on many large telescopes: for example, Keck (Wizinowich et al. 2000), VLT (Brandner et al. 2002), MMT (Wildi et al. 2003), and Subaru (Takami et al. 2003). These systems are currently limited to near infra-red operation only, usually ~1.2-2.2μm, but achieve near diffraction-limited resolutions (Strehl Ratio > 0.2, usually at the longer wavelengths).

AO systems are limited in useful wavelength by the accuracy to which the wavefront can be measured and corrected. In particular, a more precise correction requires more light from the guide star with which to measure the wavefront, as well as higher-resolution deformable mirrors. Typical natural guide star AO systems only achieve full correction in the near-infrared with guide stars brighter than ~12 mags (for example, van Dam et al. (2006)).

This limitation can be overcome with the use of laser guide star (LGS) systems, where an artificial star is created either by downwards Rayleigh scattering or by excitation of the high-altitude sodium layer. These systems are (as of 2006) starting to become operational for near-infrared science use (for example Brown et al. (2005)).

1.2.3 Tip/tilt correction systems

Tip/tilt correction systems correct for image motion, usually using a movable mirror and an image position monitor. The stabilised image is then integrated on a standard visible or infrared detector. Alternatively, the development of orthogonal transfer CCDs (Tonry et al. 1997) allows the image motion corrections to be performed directly on the CCD chip, by moving the stored charge both vertically and horizontally. When packed together into very large (GigaPixel) arrays, these devices can act as very wide-field imagers with “rubber” focal planes - that is, image motion corrections which vary across the field of view (Tonry et al. 2002).

In the near-infrared on smaller (1-4m) telescopes, the bulk of the phase error can often be corrected by removing image motion – at 2.2μm on a 3.6m telescope, tip/tilt correction improves 0.6 arcsec seeing images to 0.3 arcsec FWHM (Roddier et al. 1991). Jim et al. (2000) describes a tip/tilt corrected secondary mirror on the University of Hawaii 2.2m telescope that also achieves K-band images with 0.3 arcsec FWHM.

The TRIFFID camera (e.g. O’Sullivan et al. (1998) and references therein) employs offline data-processing of photon counted data for resolution enhancement. The system includes a pupil mask matched to the prevailing seeing (to maximise the tip/tilt component), and shift-and-add processing of the data is performed after the observations. FWHM improvements of a factor of two have been obtained (Butler et al. 1996).

In the visible, tip/tilt makes up a smaller component of the wavefront error. Systems which correct tip/tilt as well as performing some long-timescale image selection (for example, Nieto et al. (1988); Crampton et al. (1992)) obtain typical FWHM resolution improvements on the order of 10-30%.

1.2.4 Speckle Interferometry

First suggested by Labeyrie (1970), speckle interferometry relies on the fact that the final image I(x, y) is the atmospheric PSF P(x,y) convolved with the target object’s brightness distribution O(x, y):

I(x,y) = P(x,y) * O(x,y)
(1.3)

Since the short-exposure speckle pattern contains information at the diffraction limit of the telescope (each speckle is the size of a diffraction-limited PSF), processing of short exposure images can recover information on the object brightness distribution on those small scales. Speckle imaging is in use in a number of science programmes, particularly in imaging binary stars, where the speckle patterns are particularly simple (for example, Mason et al. (2006); Horch et al. (2002)).
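Labeyrie's key step can be sketched as follows: averaging the power spectra of many short exposures (rather than the images themselves) preserves object power out to the diffraction limit, since the convolution in equation 1.3 implies |FT(I)|² = |FT(P)|² |FT(O)|² frame by frame. A minimal, hypothetical sketch:

```python
import numpy as np

def mean_power_spectrum(frames):
    """Average the 2-D power spectra of a stack of short exposures.
    Unlike a straight long-exposure sum, this average retains object
    information at spatial frequencies up to the diffraction limit.
    Illustrative sketch only; real speckle reduction also needs a
    reference-star calibration of the mean speckle transfer function."""
    frames = np.asarray(frames, dtype=float)
    ft = np.fft.fft2(frames, axes=(-2, -1))
    return (np.abs(ft) ** 2).mean(axis=0)
```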

Speckle imaging requires reasonable signal-to-noise short exposure images, a capability which has been limited on standard CCD detectors – for example, a recently developed system, the RIT-Yale Tip-tilt Speckle Imager, is expected to be able to resolve binary stars down to ~11.3 mags (Meyer et al. 2006).

The very sensitive and fast imaging capabilities of LuckyCam make it an ideal speckle detector as well as Lucky Imager, as the problems of acquiring sufficient signal-to-noise in the PSF are roughly the same in the two techniques. In chapter 7 I apply speckle imaging techniques to very close binaries discovered with Lucky Imaging, in order to improve the accuracy and precision of the derived binary parameters. The LuckyCam L3CCD photon event shape requires unique calibration procedures for speckle imaging; the techniques that I developed for this are presented in chapter 5.

1.2.5 Interferometry

Interferometers recombine the incoming wavefronts from different telescopes (or separated parts of the same telescope), producing a set of fringes from which the target object’s high-angular-resolution brightness distribution can be calculated. Interferometry is used routinely by radio astronomers for image reconstruction, because of the otherwise prohibitive telescope sizes required to achieve useful resolution at radio wavelengths.

Using a single telescope with its aperture masked to form several smaller ones, diffraction-limited imaging in the visible has been successfully demonstrated on 5m-class telescopes (Baldwin et al. 1986; Haniff et al. 1987; Gorham et al. 1989). However, these systems require very small apertures to attain sufficient visibility on the fringes, although recent developments using the Lucky Imaging concept can eliminate that problem (Mackay 2005).

Imaging has also been demonstrated with instruments which combine light from different telescopes (described in Michelson (1920)) – for example, the Cambridge Optical Aperture Synthesis Telescope (COAST, Baldwin et al. (1994)). COAST consists of four 50cm telescopes, which combine visible light over a number of baselines. Science observations using the interferometric combination of light from multiple large telescopes have also recently been performed in the near and mid infrared on the Keck and VLTI interferometers (for example, Mennesson et al. (2005); Chesneau et al. (2005)).

1.2.6 The capabilities of presently available systems



Figure 1.5: A collation of the capabilities of currently-available ground based imaging systems. The area shaded in red is the gap in general imaging capabilities that LuckyCam fills – 0.1 arcsec resolution imaging in the visible. The ability to use faint guide stars is important to allow coverage of a reasonable fraction of the sky with the imaging technique in question. Speckle imaging and interferometry are listed in brackets because they are usually used for specialised purposes, where a model of the target object is already known or suspected (e.g. searches for binary stars).

Collating the capabilities of the systems described above, a gap is evident (figure 1.5) – with the exception of Lucky Imaging, no currently available ground-based system is capable of general astronomical imaging with resolution better than 0.3 arcsec in the visible. Lucky Imaging fills that gap simply and elegantly.

1.3 The Lucky Imaging Technique

A Lucky Imaging system takes very short exposures, on the order of the atmospheric coherence time. The rapidly changing turbulence leads to a very variable PSF; the variability of the PSF leads to some frames acquired by the Lucky Imaging system being better quality than the rest. Only the best frames are selected, aligned and co-added to give a final image with much improved angular resolution.



Figure 1.6: The relative probability of obtaining frames with given Strehl Ratios. The histogram shows the short exposure Strehl Ratio probability density function calculated from 20,000 simulations of Kolmogorov phase screens. The probability density function is normalised to an arbitrary total value, and is calculated for an optically perfect telescope with 6r0 of atmospheric turbulence across its diameter. The panels on the right show example wavefronts for particular Strehl ratios (left panels) and the corresponding monochromatic short exposure PSFs (right panels). Note there is significant low-level speckled noise remaining even in the exposure with a Strehl ratio of 0.45. Real PSFs are degraded by the effects of a longer exposure time, finite bandwidth, mirror aberrations and light scattering.

1.3.1 Previous systems

A number of instruments have been specifically designed to take advantage of rapidly varying seeing statistics. Most of these instruments use integration times in the range of seconds (to reduce readout noise), and thus could not perform true Lucky Imaging, or reach anywhere near diffraction-limited performance. They provide, however, useful improvements in the apparent seeing. There are two main classes of this type of instrument: cameras with optics or electronics designed to recenter and select images during integration, and direct short exposure imagers where the final image reconstruction is undertaken offline.

HRCam (McClure et al. 1989) is an example of the first class of instrument. Used on the Canada-France-Hawaii telescope (CFHT), it incorporated a fast tip/tilt mirror to follow centroid image motion and a shutter to reject periods of poorer seeing. It typically achieved 10-20% improvements in FWHM resolution (Crampton et al. 1992).

An instructive example of the second, offline data-processing class of instruments is described in Nieto et al. (1987). Using the European Space Agency Photon-Counting Detector (di Serego Alighieri & Perryman 1986) with effective integration times of 1.5-20 seconds, they selected and recentered images to form final science images with FWHM resolutions improved by ~30 per cent (Nieto et al. 1988). Nieto & Thouvenot (1991) describes further details of the reconstruction process. Crucially, because their integration times were so long, their images did not have a speckled character. The system thus does not retain diffraction-limited imaging, but rather takes advantage of long-timescale changes in the seeing itself. They also align images on the basis of the image centroid (actually the correlation between the long exposure integrated image and the short exposure frames), which removes the tip/tilt component of the turbulence but does not allow diffraction-limited image reconstruction even if the imaging frame rate is high. The benefits of alignment on the brightest speckle rather than the image centroid are discussed in detail in, for example, Christou (1991).

More recently, fast CCD imagers have enabled near diffraction-limited Lucky Imaging on both relatively small telescopes (0.36m, Davis & North (2001)) and medium-sized telescopes (60 inch; Dantowitz et al. 2000). The availability of cheap webcams has also allowed amateur astronomers to perform automated frame selection and alignment imaging*. However, the high noise introduced by running these CCD-based systems fast enough to sample the atmospheric coherence time requires very bright (generally planetary) targets.

1.3.2 Camera requirements for a diffraction-limited Lucky Imaging system

Lucky Imaging is a fairly forgiving process – under almost all conditions frame selection and alignment gives a very significant resolution increase. However, the requirements for diffraction-limited imaging on 2.5m telescopes, the aim of the LuckyCam programme, are much more stringent.

Telescope Size

All high-resolution imaging systems are limited in the size of telescope for which they can adequately correct the turbulence-induced wavefront errors – there is simply more turbulence to correct as the area of the telescope increases.

As the telescope area increases, the probability of getting a diffraction-limited short-exposure image decreases rapidly. Fried (1978) coined the term “lucky exposures” to describe such frames (defined as frames in which the RMS phase error across the telescope aperture is <1 rad). He found that the probability P of getting a diffraction-limited image varies as

P ≈ 5.6 exp[−0.1557 (D∕r0)²]
(1.4)

where D is the telescope diameter. The expression assumes a telescope that has several turbulence cells across its diameter (D∕r0 >~ 4), and that the only variations in the turbulence follow Kolmogorov statistics. Under these assumptions, given a value of r0 of, say, 35cm (reasonable for observations in I-band [700nm-800nm] in good conditions), the probability of getting a good image is quite high on relatively small telescopes (30% on a 1.5m telescope) and very low on larger telescopes (essentially zero for a 10m telescope). An example of the probability of obtaining a range of different image qualities in a particular seeing is shown in figure 1.6.
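Equation 1.4 can be evaluated directly to reproduce these numbers (a hedged sketch; the function name is mine):

```python
import math

def lucky_probability(d_over_r0):
    """Fried's (1978) approximation (equation 1.4) for the probability that
    a short exposure has < 1 rad RMS phase error; valid for D/r0 >~ 4."""
    return 5.6 * math.exp(-0.1557 * d_over_r0 ** 2)

# With r0 = 35 cm: roughly 30% of frames on a 1.5m telescope are "lucky",
# but essentially none on a 10m telescope.
p_small = lucky_probability(1.5 / 0.35)
p_large = lucky_probability(10.0 / 0.35)
```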

Lucky Imaging benefits from using large telescope apertures in two ways. Firstly, more light is collected by the telescope, allowing fainter guide stars and thus imaging over a larger area of the sky (see chapter 4), as well as imaging of fainter science targets. Secondly, larger telescope apertures have smaller diffraction limits, increasing the final resolution of the system. It is thus beneficial to use as large a telescope as can be managed while still having a reasonable probability of obtaining diffraction-limited frames.

If we adopt a minimum of a 1% frame selection, for r0 = 35cm the maximum usable telescope aperture for near diffraction-limited imaging is 2.25m. Of course, a different selection level and/or atmospheric conditions will affect this maximum size. In practice, because it is complicated to change the aperture of the telescope being used, we use a 2.5m telescope and adjust the selection criteria to give a useful trade-off between the loss in signal from using fewer frames and the resolution of the final image.

Frame rate

The Lucky Imaging system should operate quickly enough to “freeze” the atmosphere (sample within the coherence time) so that, when a diffraction-limited period occurs, it can be recorded without the addition of the non-diffraction-limited time periods surrounding it. As described in section 1.1.2, the coherence time is usually a few to a few tens of milliseconds, depending on atmospheric conditions and the type of measurement.

On-sky measurements of the Lucky Imaging turbulent timescale using the LuckyCam system are presented in Tubbs (2004) and chapter 4. On the basis of that data a minimum of 30 frames per second is required for most observed conditions at the Nordic Optical Telescope, where all LuckyCam data has been acquired. However, the frame rate must be increased still further in times of poor seeing to obtain the best quality (chapter 4).

Sensitivity

The chief limit on previous Lucky Imaging-style systems has been the low sensitivity of the detectors used, requiring very bright guide stars in the field of view from which to estimate the image quality and position. The faintest usable guide star has a direct effect on the potential target-availability of the system, whether we require the science target to be bright enough to serve as its own guide star or simply that a bright enough guide star is randomly in the field of view. The impact of guide star brightness on the sky coverage of the system is discussed in chapter 4.

1.3.3 The Cambridge Lucky Imaging system

The Cambridge Lucky Imaging system, LuckyCam, meets and often exceeds the above requirements. It is based on an L3CCD system which gives photon-counting imaging at CCD quantum efficiencies, and therefore allows guiding on very faint guide stars. The L3CCD is run at a 5-7 MHz pixel rate, giving frame rates of between 12 and 100 frames per second, and thus fully sampling the atmospheric coherence time in almost all encountered conditions. Finally, in all data presented in this thesis the system was used on the 2.56m Nordic Optical Telescope (NOT), an aperture size that (as demonstrated in chapters 6 and 7) allowed routine diffraction-limited imaging in the visible.

A version of LuckyCam was first described in Baldwin et al. (2001). For those tests, a standard CCD camera was used to obtain diffraction-limited images of stars brighter than I=+6 mags on the NOT in the visible. This represented a significant advance, in that very few other resolution improvement systems operate in the visible, and the presented system was extremely simple and cheap. However, the I=+6 mags guide star brightness limit imposed by the high readout noise of the standard CCD used limited the system to technology demonstration.

Tubbs et al. (2002) and Tubbs et al. (2003) demonstrate the first Lucky Imaging observations with an L3CCD-based camera, with guide stars as faint as I=+16 mags. The Lucky Imaging isoplanatic patch in these observations is demonstrated to have a FWHM of 50 arcsecs, much larger than would be expected for I-band observations. Tubbs (2004) covers all LuckyCam developments up to 2003 in great detail.

I have published several papers describing my Lucky Imaging work. Law et al. (2006b) investigates in detail the on-sky performance of Lucky Imaging systems in a variety of conditions, and forms the base for chapter 4. Law et al. (2005) gives initial results from my LuckyCam survey for Very Low Mass (VLM) binaries. Law et al. (2006a), on which chapter 6 is based, presents the first part of the LuckyCam VLM binary survey. Finally, a further much larger sample of VLM binaries is to be published, based on the results detailed in chapter 7.

1.4 The Structure of this Thesis

This thesis is divided into two parts: the Lucky Imaging system, and astronomy that I have performed with LuckyCam.

Chapter 2 details the LuckyCam hardware and discusses the use of L3CCDs for astronomical imaging. I then describe the data acquisition and camera control software I have written for LuckyCam, including a realtime Lucky Imaging preview system for use during observing. My implementation of the LuckyCam data reduction pipeline is described in chapter 3. Chapter 4 is a detailed investigation into the performance of Lucky Imaging in a wide range of atmospheric conditions.

In chapter 5 I detail several data reduction and calibration methods unique to Lucky Imaging and/or L3CCD data that I have developed for the measurement of close binary parameters. In chapters 6 and 7 I present the LuckyCam survey for very low mass binaries, with two separate samples of late-M-dwarf targets.

Finally, I summarise my conclusions in chapter 8.

1.5 Summary