Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope

Timothy J. Schulz, Bruce E. Stribling, and Jason J. Miller


Optics Express, Vol. 1, Issue 11, pp. 355-362 (1997)
http://dx.doi.org/10.1364/OE.1.000355



Abstract

Multiframe blind deconvolution - the process of restoring resolution to blurred imagery when the precise form of the blurs is unknown - is discussed as an estimation-theoretic method for improving the resolving power of ground-based telescopes used for space surveillance. The imaging problem is posed in an estimation-theoretic framework whereby the object’s incoherent scattering function is estimated through the simultaneous identification and correction of the distorting effects of atmospheric turbulence. An iterative method derived via the expectation-maximization (EM) procedure is reviewed, and results obtained from telescope imagery of the Hubble Space Telescope are presented.

© Optical Society of America

1. Introduction

The distorting effects of atmospheric turbulence on ground-based imagery of astronomical and space objects are well-documented, and numerous methods have been proposed, implemented, and tested for their correction [1]. These methods can be broadly classified into two categories: post-processing through signal processing, whereby the image distortions are identified and compensated by computer processing of the recorded imagery; and pre-processing through adaptive optics, whereby the distortions are identified and compensated with wavefront sensors and deformable mirrors so that undistorted imagery is, in principle, recorded directly. With current technology the cost of adaptive compensation can be prohibitive, particularly if full compensation is desired. Because of this, many systems in use today benefit only from partial compensation or none at all, and post-processing methods such as the one discussed in this paper provide an economical means for restoring a telescope's lost resolving power.

It is widely accepted that only modest improvements in image quality can be attained through the measurement and processing of long-exposure (on the order of minutes to hours) telescope imagery. This difficulty is often quantified through Fried's seeing parameter r_0, which provides a measure of the largest spatial frequency present in long-exposure imagery. Whereas a diffraction-limited telescope records spatial frequencies as high as D/(λ d_o), where D is the telescope diameter, λ is the nominal wavelength being sensed, and d_o is the distance from the telescope to the object, long-exposure imagery acquired through atmospheric turbulence only contains frequencies out to the turbulence-limited cut-off frequency r_0/(λ d_o), where r_0 can take values under 10 cm. For a 2 m telescope with r_0 = 10 cm, then, recovery of diffraction-limited resolution requires one to super-resolve the long-exposure imagery by a factor of 20. Such improvements are rarely, if ever, accomplished with real data.
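For concreteness, this factor is simply the ratio of the diffraction-limited and turbulence-limited cut-off frequencies:

$$\frac{D/(\lambda d_o)}{r_0/(\lambda d_o)} = \frac{D}{r_0} = \frac{2\ \mathrm{m}}{0.1\ \mathrm{m}} = 20.$$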

Exposure times on the order of a few milliseconds, however, provide imagery with spatial frequency information out to the diffraction limit. This fact was first exploited by Labeyrie [2], and it has since spawned a number of signal-processing methods broadly classified as speckle imaging techniques; a review of these is found in Ref. [1] and the many references therein. Many of these, such as Labeyrie's speckle interferometry, the Knox-Thompson method [3], and the triple-correlation method [4], rely on the existence of a data transformation that is invariant to the effects of atmospheric turbulence. After performing this transformation along with any required calibration, a non-trivial image-recovery problem must be solved to obtain an estimate of the underlying object. As an alternative to these methods, the data can be processed to directly estimate and correct for the effects of atmospheric turbulence. Methods based on this approach are broadly classified as multiframe blind deconvolution techniques [5]. The application of a multiframe blind deconvolution method to real telescope data is the focus of this paper.

2. Mathematical model

For vertical path, ground-to-space imaging, the turbulence-induced distortions are commonly modeled as a time-varying phase screen that induces a point-spread function given by:

$$h(y;\theta_t) = \left|\int A(u)\,\exp\!\left\{j\!\left[\theta_t(u) - \frac{2\pi}{\lambda d_o}\,u\cdot y\right]\right\}du\right|^2, \tag{1}$$

where u and y are two-dimensional spatial variables in the telescope-pupil and image planes, respectively; A(·) is the telescope pupil function; θ_t(·) is the time-varying phase distortion caused by turbulence-induced changes in the refractive index of the atmosphere; λ is the nominal wavelength; and d_o is the distance from the object to the telescope. This is the isoplanatic approximation, and it results in an imaging model that is linear and spatially invariant. In the absence of other effects, then, the time-varying intensity recorded by the telescope's camera is:

$$i(y;\theta_t,o) = \int h(y-x;\theta_t)\,o(x)\,dx = h(y;\theta_t) * o(y), \tag{2}$$

where o(·) is the object's incoherent scattering function, after magnification and inversion [6]. The notation i(·; θ_t, o) shows the image intensity's explicit dependence on the unknown object o(·) and the turbulence-induced phase aberrations θ_t(·).

Many telescope imaging systems in use today use charge-coupled-device (CCD) cameras to record the image intensity. As discussed by Snyder et al. [7], however, the data acquired by these cameras include other effects such as internal and external background counts, nonuniform camera response, electronic read-out noise, and other sources of camera bias. Accordingly, a reasonable model for the data acquired by the nth detector element in a CCD camera array during the kth exposure is:

$$d_k[n] = N_k[n] + M_k[n] + g_k[n] + b[n], \tag{3}$$

where N_k[n] is a Poisson-distributed random variable that models the number of object-induced photocounts recorded by the detector, M_k[n] is a Poisson-distributed random variable that models the number of background-induced (both internal and external) photocounts recorded by the detector, g_k[n] is a Gaussian-distributed random variable that models the CCD read-out noise, and b[n] is a deterministic bias induced by the electronic circuitry used to acquire and record the data. These random variables are usually modeled as statistically independent of one another and across detector elements.
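To make the measurement model concrete, the following sketch (in Python, with hypothetical array names and parameter values) draws one frame of data according to Eq. (3); it illustrates the statistical model only, not the actual AMOS acquisition chain.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_ccd_frame(object_mean, background_mean, read_sigma, bias):
    """Draw one CCD frame d_k[n] according to Eq. (3).

    object_mean     : expected object-induced photocounts I_k[n] per pixel
    background_mean : expected background photocounts I_b[n] per pixel
    read_sigma      : read-out noise standard deviation sigma[n]
    bias            : deterministic electronic bias b[n]
    """
    N = rng.poisson(object_mean)                                    # N_k[n]
    M = rng.poisson(background_mean, size=np.shape(object_mean))    # M_k[n]
    g = rng.normal(0.0, read_sigma, size=np.shape(object_mean))     # g_k[n]
    return N + M + g + bias                                         # d_k[n]
```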

The expected number of object-induced photocounts is modeled as:

$$E\{N_k[n]\} = I_k[n] = \gamma[n]\int_{y_n}\int_{t_k}^{t_k+T} i(y;\theta_t,o)\,dy\,dt, \tag{4}$$

where t_k is the start time and T the duration of the kth exposure, y_n is the spatial region over which the nth detector element integrates the image intensity, and γ[n] is a spatially varying scale factor that accounts for detector efficiency. Pixels that record meaningless data are accommodated within this model by setting γ[n] = 0. The phase aberrations are usually approximately constant over a short exposure, and the detector regions are often small relative to the fluctuations in the image intensity, so that the integrating operation can be modeled by the sampling operation:

$$I_k[n] \approx \gamma[n]\,T\,|y|\;i_k(y_n;\theta_{t_k},o), \tag{5}$$

where y_n is the location of the nth detector element and |y| is its integration area. Furthermore, the object and point-spread functions are often approximated on a discrete grid so that:

$$i_k(y_n;\theta_{t_k},o) \approx \sum_m h(y_n - x_m;\theta_{t_k})\,o(x_m)\,\Delta x^2, \tag{6}$$

and

$$h(y_n;\theta_{t_k}) \approx \left|\sum_l A(u_l)\,\exp\!\left\{j\!\left[\theta_{t_k}(u_l) - \frac{2\pi\,\Delta u\,\Delta y}{\lambda d_o}\,u_l\cdot y_n\right]\right\}\Delta u^2\right|^2, \tag{7}$$

where Δy, Δx, and Δu are the image-, object-, and pupil-plane grid sizes, respectively. When the pupil-plane array size is N × N and Δu Δy/(λ d_o) = 1/N, the computations required to create the point-spread function can be accomplished efficiently by using a fast Fourier transform (FFT) algorithm.
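As a rough illustration of this FFT-based evaluation (a minimal sketch with placeholder pupil, phase, and object arrays, not the AMOS system parameters), the discrete PSF of Eq. (7) is the squared modulus of the two-dimensional DFT of the complex pupil, and the noiseless image of Eq. (6) then follows from FFT-based convolution:

```python
import numpy as np

def psf_from_phase(pupil, phase):
    """Discrete PSF of Eq. (7): squared modulus of the DFT of
    A(u) * exp(j * theta(u)), normalized here to unit sum."""
    field = pupil * np.exp(1j * phase)
    psf = np.abs(np.fft.fft2(field)) ** 2
    return np.fft.fftshift(psf) / psf.sum()

def blurred_image(obj, psf):
    """Noiseless image of Eq. (6): circular convolution of the object
    with the (centered) PSF, computed with FFTs."""
    return np.real(np.fft.ifft2(np.fft.fft2(obj) *
                                np.fft.fft2(np.fft.ifftshift(psf))))

# Illustrative placeholders: a circular pupil, a crude random phase screen
# (not a Kolmogorov screen), and a single point source.
N = 128
u = np.arange(N) - N // 2
U, V = np.meshgrid(u, u)
pupil = (np.hypot(U, V) <= N // 4).astype(float)   # A(u): circular aperture
phase = 2.0 * np.random.randn(N, N) * pupil        # theta(u): placeholder screen
obj = np.zeros((N, N)); obj[N // 2, N // 2] = 1.0  # point-like object

psf = psf_from_phase(pupil, phase)
img = blurred_image(obj, psf)
```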

The expected number of background-induced photocounts, E{M_k[n]}, is denoted by I_b[n] and is assumed to be the same for each exposure. The CCD read-out noise values g_k[n] are typically modeled as zero-mean, independent random variables with variance σ²[n]. Appropriate values for the gain function γ[·], background mean I_b[·], read-noise variance σ²[·], and bias b[·] are usually determined through a carefully controlled study of the data-acquisition system; this typically involves the analysis of data acquired both without illumination (dark frames) and with uniform illumination (flat fields).
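A generic version of this calibration step (a sketch under the simplifying assumptions noted in the comments, not the specific procedure used for the AMOS data) might look like:

```python
import numpy as np

def calibrate(dark_frames, flat_frames):
    """Estimate per-pixel bias b[n], read-noise variance sigma^2[n], and
    relative gain gamma[n] from stacks of dark and flat-field frames
    (each of shape K x Ny x Nx).

    Simplifying assumptions: dark current and internal background are
    lumped into the bias, and the flat-field frames are bright enough
    that their shot noise does not bias the gain estimate."""
    bias = dark_frames.mean(axis=0)          # b[n]: mean dark level
    read_var = dark_frames.var(axis=0)       # sigma^2[n]: dark-frame variance
    flat = flat_frames.mean(axis=0) - bias   # bias-subtracted mean flat field
    gain = flat / flat.mean()                # gamma[n]: normalized response
    return bias, read_var, gain
```

Dead or otherwise unusable pixels identified in this way would then be flagged by setting γ[n] = 0, as described above.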

3. Maximum-likelihood estimation

Because N_k[n] and M_k[n] are independent Poisson random variables, their sum is Poisson distributed with mean I_k[n] + I_b[n]:

$$\Pr\{N_k[n]+M_k[n]=N;\theta_{t_k},o\} = \exp\{-(I_k[n]+I_b[n])\}\,\frac{(I_k[n]+I_b[n])^N}{N!}, \tag{8}$$

where

$$I_k[n] = \gamma[n]\,T\,|y|\sum_m h(y_n - x_m;\theta_{t_k})\,o(x_m)\,\Delta x^2 = \gamma[n]\sum_m h(y_n - x_m;\theta_{t_k})\,o[m]. \tag{9}$$

Here, o[m] = T|y|Δx² o(x_m) contains the dependence on the unknown object, and h(·; θ_{t_k}) contains the dependence on the unknown phase aberrations. Furthermore, the probability density for the read-out noise is

$$p_{g_k[n]}(g) = \left(2\pi\sigma^2[n]\right)^{-1/2}\exp\!\left[-\frac{g^2}{2\sigma^2[n]}\right], \tag{10}$$

and the density for the measured data at each detector element is:

$$p_{d_k[n]}(d;\theta_{t_k},o) = \sum_{N=0}^{\infty} p_{g_k[n]}\!\left(d - b[n] - N\right)\Pr\{N_k[n]+M_k[n]=N;\theta_{t_k},o\}. \tag{11}$$

For a given set of data {d_k[·]}, the maximum-likelihood estimate of the unknown object and phase aberrations is selected to maximize the likelihood:

$$l(o,\theta) = \prod_{k=1}^{K}\prod_{n} p_{d_k[n]}(d_k[n];\theta_{t_k},o), \tag{12}$$

or, as is commonly done, its logarithm (the log-likelihood):

$$L(o,\theta) = \ln l(o,\theta) = \sum_{k=1}^{K}\sum_{n}\ln p_{d_k[n]}(d_k[n];\theta_{t_k},o). \tag{13}$$

The complicated form of the measurement density (Eq. 11) makes direct maximization of L(o, θ) difficult. When the read-out noise variance is large (greater than 50 or so), however, the data can be modified according to:

$$\tilde{d}_k[n] = d_k[n] - b[n] + \sigma^2[n] = N_k[n] + M_k[n] + g_k[n] + \sigma^2[n], \tag{14}$$

and these modified data are then similar (in distribution) to the sum of three Poisson-distributed random variables [8]. Accordingly, the modified data are approximately Poisson distributed with mean I_k[n] + I_b[n] + σ²[n]. The probability mass function for these data is then modeled as:

$$\Pr\{\tilde{d}_k[n]=D;\theta_{t_k},o\} = \exp\{-(I_k[n]+I_b[n]+\sigma^2[n])\}\,\frac{(I_k[n]+I_b[n]+\sigma^2[n])^D}{D!}, \tag{15}$$

so that the log-likelihood is:

$$L(o,\theta) = \sum_{k=1}^{K}\sum_{n}\left\{-\left(I_k[n]+I_b[n]+\sigma^2[n]\right) + \tilde{d}_k[n]\,\ln\!\left(I_k[n]+I_b[n]+\sigma^2[n]\right)\right\}, \tag{16}$$

where terms not dependent on the object or phase aberrations have been omitted. The maximum-likelihood phase-aberration and object estimates are then those that maximize L(o, θ) subject to any physical constraints that are imposed on these parameters. For all problems we have considered, the object intensity o[·] is constrained to be a nonnegative function. Known object support is another constraint that is commonly used; however, implementation of this constraint in an optimization procedure usually results in a trivial reduction of the number of unknown parameters.
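The objective of Eqs. (14) and (16) is straightforward to evaluate. The sketch below (hypothetical names; the per-frame PSFs would in practice be generated from the phase parameters through Eq. (7)) applies the data modification and computes the approximate Poisson log-likelihood for a candidate nonnegative object:

```python
import numpy as np

def modified_data(frames, bias, read_var):
    """Eq. (14): shift the raw frames so they are approximately Poisson."""
    return frames - bias + read_var

def log_likelihood(dtilde, obj, psfs, gain, background, read_var):
    """Approximate log-likelihood of Eq. (16) (constant terms omitted).

    dtilde     : modified data, shape (K, Ny, Nx)
    obj        : candidate object o[m] (nonnegative), shape (Ny, Nx)
    psfs       : per-frame PSFs h(.; theta_{t_k}), centered, unit sum, shape (K, Ny, Nx)
    gain       : gamma[n];  background : I_b[n];  read_var : sigma^2[n]
    """
    obj_f = np.fft.fft2(obj)
    total = 0.0
    for k in range(psfs.shape[0]):
        # I_k[n] of Eq. (9): gain-weighted circular convolution of PSF and object
        I_k = gain * np.real(np.fft.ifft2(np.fft.fft2(np.fft.ifftshift(psfs[k])) * obj_f))
        mean = np.clip(I_k, 0.0, None) + background + read_var
        total += np.sum(-mean + dtilde[k] * np.log(mean))
    return total
```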

3.1 Comments

In recent years, many methods for the blind deconvolution of astronomical and space-object imagery have been proposed. These can be broadly classified into two categories: i) the Ayers-Dainty [9] class of iterative algorithms; and ii) constrained numerical optimization algorithms. The methodology described here is one of constrained numerical optimization, and it is a straightforward extension of the method proposed by Schulz [5], with important modifications to accommodate real sensor artifacts such as read-noise, internal and external background, and non-uniform gain. Other methods of constrained optimization have been proposed [10,11,12], and some have relied on essentially the same methodology as the one proposed in Ref. [5], with the differences being the method used for numerical optimization [12] and the utilization (or lack thereof) of realistic sensor models. Key points regarding the methodology presented in this paper are:

  1. A practical and faithful model is used for the data collection process. By using the maximum-likelihood method for estimation, this model is in turn used to induce a cost function for a constrained optimization problem.
  2. The importance of the point-spread function model (Eq. 7) should not be understated. This point was emphasized by Cornwell in his summary of the 1994 workshop on the Restoration of HST Images and Spectra II as he commented on the subject of blind deconvolution [13]:

    “One always gains by adding more information in the form of known imaging physics. For example, one can ask whether the closure relations are enforced? Alternatively, is the PSF forced to be the Fourier transform of a cross-correlation of a pupil plane with phase errors? If not, then it should be.”

3.2 Numerical optimization

With straightforward modifications to accommodate sensor effects such as non-uniform gain and backgrounds, the EM method of Ref. [5] can be applied to this problem. Additionally, the derivatives of the log-likelihood with respect to the object and phase-aberration parameters are easily derived and can be used in a gradient-based optimization method. Both methods have performed well for this problem, and the determination of an optimal method of optimization is still an open problem.
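Neither algorithm is reproduced in full here, but the object half of such an alternating scheme can be illustrated with a Richardson-Lucy-type multiplicative update that increases the log-likelihood of Eq. (16) while the per-frame PSFs are held fixed. This is only a sketch of one ingredient: the harder phase-aberration update of Ref. [5], and any gradient-based alternative, are omitted, and all names are illustrative.

```python
import numpy as np

def _conv(x, h_f):
    """Circular convolution of x with a kernel given by its FFT h_f."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * h_f))

def _corr(x, h_f):
    """Circular correlation: sum_n h[n - m] x[n]."""
    return np.real(np.fft.ifft2(np.fft.fft2(x) * np.conj(h_f)))

def object_update(obj, dtilde, psfs, gain, background, read_var):
    """One multiplicative update of the nonnegative object with the PSFs fixed."""
    gain = np.broadcast_to(gain, obj.shape)
    numer = np.zeros_like(obj)
    denom = np.zeros_like(obj)
    for k in range(psfs.shape[0]):
        h_f = np.fft.fft2(np.fft.ifftshift(psfs[k]))
        mean = gain * _conv(obj, h_f) + background + read_var   # mean in Eq. (16)
        numer += _corr(gain * dtilde[k] / np.maximum(mean, 1e-12), h_f)
        denom += _corr(gain, h_f)
    return np.maximum(obj * numer / np.maximum(denom, 1e-12), 0.0)
```

In practice such an update would be interleaved with a phase (PSF) update and iterated to convergence from a suitable initial guess.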

4. Results with real data

Telescope data were acquired at the Air Force Maui Optical Station (AMOS) on March 9, 1995 and subsequently processed to determine maximum-likelihood object estimates. The telescope and imaging-system parameters are listed in Table 1. The object under observation was the Hubble Space Telescope in its 600 km orbit, and 656 short-exposure images were acquired, each with an 8 ms exposure time. Four of these images are shown in Fig. 1. The 656 images were partitioned into 41 sets of 16, and each set in the partition was used to obtain one object restoration. The optimization method used was a straightforward extension of the expectation-maximization method described in Ref. [5]. Four of these restorations are shown in Fig. 2. Each restoration required approximately 15 minutes of processing time on 9 nodes of an IBM SP2 at the Maui High Performance Computing Center. Whereas very little detail is available in the raw data, the restored images allow one to clearly identify the satellite's solar panels along with the cover (lower left) for the space telescope's 2.4 m primary mirror. An MPEG video showing all data and restorations is available at the URL listed as Ref. [14].

5. Summary

For ground-based surveillance of astronomical and space objects, post-detection processing through multiframe blind deconvolution is a viable method for restoring image resolution. To support this claim, a maximum-likelihood estimation method has been implemented and tested on real telescope data, and preliminary results of this study are presented in this paper. Much work still needs to be done, however, and important topics for consideration include: the use of prior models for the object and phase-aberration parameters; the optimal use of measurement diversity as supplied by a wavefront sensor or phase-diversity system [15]; clever methods for generating initial parameter estimates; and the development and implementation of improved methods of optimization.

Table 1. Telescope and imaging system parameters for the AMOS imagery of the Hubble Space Telescope.

Figure 1: Four short-exposure images of the Hubble Space Telescope as acquired by the 1.6 m AMOS telescope.
Figure 2: Four restored images of the Hubble Space Telescope.

6. Acknowledgments

This work was supported by the Air Force Maui Optical Station, the Maui High Performance Computing Center, and in part by the Air Force Office of Scientific Research through grant F49620-97-1-0053.

References and links

1. M. C. Roggemann and B. Welsh, Imaging Through Turbulence (CRC Press, 1996).
2. A. Labeyrie, "Attainment of diffraction limited resolution in large telescopes by Fourier analyzing speckle patterns in star images," Astron. Astrophys. 6, 85 (1970).
3. K. T. Knox and B. J. Thompson, "Recovery of images from atmospherically degraded short exposure images," Astrophys. J. 193, L45–L48 (1974).
4. A. W. Lohmann, G. Weigelt, and B. Wirnitzer, "Speckle masking in astronomy: triple correlation theory and applications," Appl. Opt. 22, 4028–4037 (1983).
5. T. J. Schulz, "Multi-frame blind deconvolution of astronomical images," J. Opt. Soc. Am. A 10, 1064–1073 (1993).
6. Strictly speaking, because of image inversion and magnification an imaging system is never spatially invariant; however, if one views the input signal as the inverted and magnified object, many imaging systems are then well-modeled as spatially invariant.
7. D. L. Snyder, A. M. Hammoud, and R. L. White, "Image recovery from data acquired with a charge-coupled-device camera," J. Opt. Soc. Am. A 10, 1014–1023 (1993).
8. D. L. Snyder, C. W. Helstrom, A. D. Lanterman, M. Faisal, and R. L. White, "Compensation for readout noise in CCD images," J. Opt. Soc. Am. A 12, 272–283 (1995).
9. G. R. Ayers and J. C. Dainty, "Iterative blind deconvolution method and its application," Opt. Lett. 13, 547–549 (1988).
10. R. G. Lane, "Blind deconvolution of speckle images," J. Opt. Soc. Am. A 9, 1508–1514 (1992).
11. S. M. Jefferies and J. C. Christou, "Restoration of astronomical images by iterative blind deconvolution," Astrophys. J. 415, 862–874 (1993).
12. E. Thiebaut and J.-M. Conan, "Strict a priori constraints for maximum-likelihood blind deconvolution," J. Opt. Soc. Am. A 12, 485–492 (1995).
13. T. J. Cornwell, "Where have we been, where are we now, where are we going?," in The Restoration of HST Images and Spectra II, B. Hanisch and R. L. White, eds. (Space Telescope Science Institute, Baltimore, MD, 1993), pp. 369–372.
14. T. J. Schulz, "Movie of processed Hubble Space Telescope imagery," http://www.ee.mtu.edu/faculty/schulz/mpeg/hst.mpeg
15. R. G. Paxman, T. J. Schulz, and J. R. Fienup, "Joint estimation of object and aberrations using phase diversity," J. Opt. Soc. Am. A 9, 1072–1085 (1992).

OCIS Codes
(100.1830) Image processing : Deconvolution
(100.3020) Image processing : Image reconstruction-restoration
(100.3190) Image processing : Inverse problems

ToC Category:
Focus Issue: Signal collection and recovery

History
Original Manuscript: September 17, 1997
Published: November 24, 1997

Citation
Timothy Schulz, Bruce Stribling, and Jason Miller, "Multiframe blind deconvolution with real data: imagery of the Hubble Space Telescope," Opt. Express 1, 355-362 (1997)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-1-11-355

