## Superimposed video disambiguation for increased field of view

Optics Express, Vol. 16, Issue 21, pp. 16352-16363 (2008)

http://dx.doi.org/10.1364/OE.16.016352


### Abstract

Many infrared optical systems in wide-ranging applications such as surveillance and security frequently require large fields of view (FOVs). Often this necessitates a focal plane array (FPA) with a large number of pixels, which, in general, is very expensive. In a previous paper, we proposed a method for increasing the FOV without increasing the pixel resolution of the FPA by superimposing multiple sub-images within a static scene and disambiguating the observed data to reconstruct the original scene. This technique, in effect, allows each sub-image of the scene to share a single FPA, thereby increasing the FOV without compromising resolution. In this paper, we demonstrate the increase of FOVs in a realistic setting by physically generating a superimposed video from a single scene using an optical system employing a beamsplitter and a movable mirror. Without prior knowledge of the contents of the scene, we are able to disambiguate the two sub-images, successfully capturing both large-scale features and fine details in each sub-image. We improve upon our previous reconstruction approach by allowing each sub-image to have slowly changing components, carefully exploiting correlations between sequential video frames to achieve small mean errors and to reduce run times. We show the effectiveness of this improved approach by reconstructing the constituent images of a surveillance camera video.

© 2008 Optical Society of America

## 1. Introduction

Focal plane arrays with large pixel counts have been demonstrated across a variety of detector technologies [1–4]. However, as FPAs sensitive to infrared wavelengths remain very expensive, techniques capable of achieving a wide FOV with a small-pixel-count FPA are desired; related approaches include image mosaicing [5], programmable imaging with micromirrors [6], and super-resolution reconstruction from multiple frames [7–10].

In a previous paper [11], we proposed a method in which the *static* scene to be imaged is partitioned into smaller scenes, which are imaged onto a single FPA to form a composite image. We developed an efficient video processing approach to separate the composite image into its constituent images, thus restoring the complete scene corresponding to the overall FOV. To make this otherwise highly ill-posed problem of disambiguating the image tractable, the superimposed sub-images are moved relative to one another between video frames. The disambiguation problem that we considered is similar to the blind source separation problem [12–14].

This paper makes three main contributions. First, we *physically* generate a composite video from a scene using an optical system employing a beamsplitter and a movable mirror. Without considerable prior knowledge of the contents of the scene, we are able to separate the sub-images, successfully capturing both the large-scale features and fine details. Second, we show the effectiveness of the proposed approach by reconstructing, with small mean square errors, a *dynamic* scene where, in contrast to the previous demonstration, objects in the scene are moving. Third, we improve upon the previous computational methods by *exploiting correlations* between sequential video frames, particularly the sparsity in the difference between successive frames, leading to accurate solutions with reduced computational time.

## 2. Proposed camera architecture for generating a superimposed video

## 3. Mathematical model and computational approach for disambiguation

Let {**x**_t} be a sequence of frames representing a slowly changing scene. The superimposition process (Fig. 1(a)) can be modeled mathematically at the *t*-th frame as

**z**_t = **A**_t **x**_t + *ε*_t,  (1)

where **z**_t ∈ ℝ^{m×1} is the observed composite image, **x**_t ∈ ℝ^{n×1} is the (unknown) scene to be reconstructed, **A**_t ∈ ℝ^{m×n} is the projection matrix that describes the superimposition, and *ε*_t is noise at frame *t*. We assume in this paper that *ε*_t is zero-mean white Gaussian noise. The disambiguation problem is the inverse problem of solving for **x**_t given the observations **z**_t and the matrix **A**_t. In this setting, *n* > *m*, which makes Eq. (1) underdetermined. There are several techniques for approaching this ill-posed statistical inverse problem of disambiguating the sub-images, many of which exploit the sparsity of **x**_t in one or more bases (cf. [15–17]).

In contrast to our previous work [11], here **x**_t can have (small) changes for each *t*, whereas previously **x**_t was static, i.e., **x**_{t+1} = **x**_t for all *t*.

If **x**_t = [**x**_t^{(1)}; **x**_t^{(2)}] are the pixel intensities corresponding to the two images, then **A**_t is the underdetermined matrix [**I** **S**_t], where **I** is the identity matrix and **S**_t describes the movement of the second sub-image in relation to the first at the *t*-th frame. Here, we assume that **x**_t^{(1)} corresponds to the stationary sub-image while **x**_t^{(2)} corresponds to the sub-image whose shifting is induced by the moving mirror (see Fig. 1(b)). Then the above system can be modeled mathematically as

**z**_t = **S̃**_t **W̃** **θ**_t + *ε*_t,  (2)

where **S̃**_t = [**I** **S**_t]. Here, we write **x**_t = **W̃θ**_t, where **θ**_t denotes the vector of coefficients of the two sub-images in the wavelet basis and **W̃** denotes the inverse wavelet transform. (We use the wavelet transform here because of its effectiveness with many natural images, but alternative bases could certainly be used depending on the setting.) We note that this formulation is slightly different from that found in our previous paper [11], where the coefficients for each sub-image are treated separately. In Eq. (2), **θ**_t contains the wavelet coefficients for the entire image **x**_t, as opposed to the concatenation of the wavelet coefficients of **x**_t^{(1)} and **x**_t^{(2)}, resulting in a more seamless interface between the sub-images.

The reconstruction is computed by minimizing the residual ‖**z**_t − **S̃**_t**W̃θ**_t‖₂² along with a regularization term τ‖**θ**_t‖₁, for some tuning parameter τ, at each time frame:

**θ̂**_t = arg min_**θ** ‖**z**_t − **S̃**_t**W̃θ**‖₂² + τ‖**θ**‖₁,  (3)

using the computed minimum as the initial value for the following frame. Since the underlying inverse problem is underdetermined, the regularization term in the objective function is necessary to make the disambiguation problem well-posed. This formulation of the reconstruction problem is similar to the ℓ₂-ℓ₁ formulation of the compressed sensing problem [18]. The ℓ₁ regularization term can lead to reasonably accurate solutions to an otherwise underdetermined and ill-posed inverse problem, particularly when the true scene is *very* sparse in the wavelet basis and significant amounts of computation time are devoted to each frame. However, when the scene is stationary or slowly varying relative to the frame rate of the imaging system, subsequent frames of observations can be used simultaneously to achieve significantly better solutions. We describe a family of methods, distinguished by the number of frames solved simultaneously, for exploiting interframe correlations.

**1-Frame Method.** For a scene that changes only slightly from frame to frame, the reconstruction from a previous frame is often a good approximation to the following frame. In the 1-Frame Method, we use the solution **θ̂**_t to the optimization problem (3) at the *t*-th frame to initialize the optimization problem for the (*t*+1)-th frame.

**2-Frame Method.** We can improve upon the 1-Frame Method by solving for multiple frames in each optimization problem. In the 2-Frame Method we solve for two successive frames simultaneously. However, rather than solving for **θ**_t and **θ**_{t+1}, we solve for **θ**_t and Δ**θ**_t ≡ **θ**_{t+1} − **θ**_t, for two main reasons. First, for slowly changing scenes, **θ**_{t+1} ≈ **θ**_t, and since both **θ**_{t+1} and **θ**_t are already sparse, Δ**θ**_t is even sparser, making it even more appropriate for the sparsity-inducing ℓ₂-ℓ₁ minimization. Second, solving for Δ**θ**_t allows for *coupling* the frames in an otherwise separable objective function, leading to accurate solutions for both **θ**_t and Δ**θ**_t. The minimization problem can be formulated as follows:

(**θ̂**_t, Δ**θ̂**_t) = arg min ‖**z**_t − **S̃**_t**W̃θ**_t‖₂² + ‖**z**_{t+1} − **S̃**_{t+1}**W̃**(**θ**_t + Δ**θ**_t)‖₂² + τ‖**θ**_t‖₁ + τ‖Δ**θ**_t‖₁,  (4)

where **S̃**_i = [**I** **S**_i] for *i* = *t* and *t*+1, and **θ̂**^{[2]}_t = [**θ̂**_t; Δ**θ̂**_t] denotes the solution vector. The following optimization problem, for frame (*t*+1), is initialized using **θ̂**^{[2]}_{t+1}.

Note that the formulation in (4) is different from that proposed in our previous paper [11], where **θ**_t corresponds to coefficients of static images, i.e., **θ**_t = **θ**_{t+1}, whereas here we allow for movements within each sub-image. In addition, since Δ**θ**_t is significantly sparser than **θ**_t, we use a different regularization parameter ρ on ‖Δ**θ**_t‖₁ to encourage very sparse Δ**θ**_t solutions, which leads to the following optimization problem:

(**θ̂**_t, Δ**θ̂**_t) = arg min ‖**z**_t − **S̃**_t**W̃θ**_t‖₂² + ‖**z**_{t+1} − **S̃**_{t+1}**W̃**(**θ**_t + Δ**θ**_t)‖₂² + τ‖**θ**_t‖₁ + ρ‖Δ**θ**_t‖₁.

In our experiments, we set ρ = (1.0×10³)τ.

**4-Frame Method.** The 4-Frame Method is very similar to the 2-Frame Method, but we solve for the coefficients using four successive frames instead of two, adding the observation vectors **z**_{t+2} and **z**_{t+3} and the observation matrices **S**_{t+2} and **S**_{t+3}. By coupling more frames, the coefficients are required to satisfy more equations, leading to more accurate solutions. The drawback, however, is that the corresponding linear systems to be solved are larger and require more computation time. The corresponding minimization problem is given by

**θ̂**^{[4]}_t = arg min Σ_{i=t}^{t+3} ‖**z**_i − **S̃**_i**W̃θ**_i‖₂² + τ‖**θ**_t‖₁ + ρ Σ_{i=t}^{t+2} ‖Δ**θ**_i‖₁,  (5)

where **θ̂**^{[4]}_t = [**θ̂**_t; Δ**θ̂**_t; Δ**θ̂**_{t+1}; Δ**θ̂**_{t+2}], Δ**θ**_i ≡ **θ**_{i+1} − **θ**_i for *i* = *t*, ⋯, *t*+2 (so that each **θ**_i is expressed as **θ**_t plus the intervening differences), and **S̃**_i = [**I** **S**_i] for *i* = *t*, ⋯, *t*+3. There is another formulation for simultaneously solving for four frames (see [21]); however, results from that paper indicate that solving (5) is more effective in generating accurate solutions. As in the 2-Frame Method, we place the same weight (ρ = (1.0×10³)τ) on ‖Δ**θ**_i‖₁ for *i* = *t*, ⋯, *t*+2 to encourage very sparse solutions.

The **n-Frame Method** can be defined likewise for simultaneously solving for *n* frames. In our numerical experiments, we also use the 8- and 12-Frame Methods.

## 4. Experimental methods

We solve the ℓ₂-ℓ₁ minimization problems above using the gradient projection for sparse reconstruction (GPSR) algorithm [17]. After solving the ℓ₂-ℓ₁ minimization problem, GPSR fixes the non-zero pattern of the optimal **θ**_t and minimizes the ℓ₂ term of the objective function, resulting in a minimal error in the reconstruction while keeping the number of non-zeros in the wavelet coefficients at a minimum. GPSR has been shown to outperform many of the state-of-the-art codes for solving the ℓ₂-ℓ₁ minimization problem or its equivalent formulations.

The disambiguated image **x**_t = [**x**_t^{(1)}; **x**_t^{(2)}] should cover the entire scene at all frames. However, as **x**_t^{(2)} moves, according to either the movement of the mirror in the optical experiment or the prescribed motion in the numerical experiment, there can be a portion of the scene that is not contained in the superimposed image at some frames, creating a "blind zone". For a frame in which the disambiguated image does not cover the entire scene, the result obtained for the blind zone from the previous frame is combined with the disambiguated image to reconstruct the entire scene.
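The blind-zone fill described above amounts to a masked merge of the current estimate with the previous frame's scene. A minimal sketch, with illustrative names not taken from the paper's code:

```python
import numpy as np

def fill_blind_zone(current_recon, previous_scene, covered_mask):
    """Keep the current disambiguated estimate where this frame covers the
    scene; fall back to the previous frame's estimate in the blind zone."""
    return np.where(covered_mask, current_recon, previous_scene)

# Toy usage: a 1-pixel blind zone at the right edge of an 8-pixel scene.
prev = np.arange(8.0)                     # previous frame's full-scene estimate
curr = np.full(8, -1.0)                   # new estimate everywhere...
mask = np.array([True] * 7 + [False])     # ...except the last pixel is blind
full = fill_blind_zone(curr, prev, mask)
```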

### 4.1. Optical experiment: Duke Earth Day

In the optical experiment, the sub-image corresponding to the right half of the scene, **x**^{(2)}, is moved while that corresponding to the left half, **x**^{(1)}, remains still. The movement of **x**^{(2)} was along the *x*-direction, with its position following a sinusoidal function of frame number. This was achieved by moving the mirror with a motion controller along a circular path in the *x*-*z* plane at constant velocity; the displacement of the mirror along the *x*-direction causes **x**^{(2)} to move in the same direction, whereas the motion of the mirror in the *z*-direction does not create any change in the composite video. To determine **S̃**_t in Eq. (2) corresponding to a given circular movement of the mirror, we performed a calibration experiment in which a scene with a white background contained a black dot on its right half. By tracking the dot in the recorded video, we verified that the movement of the dot in the video was indeed sinusoidal, and also determined its amplitude and period. In the actual experiment, the scene was replaced with a photograph ("Duke Earth Day") while leaving the rest of the system unaltered; hence, the amplitude and period of the movement of **x**^{(2)} are the same as those obtained in the calibration experiment. The phase of the sinusoidal movement was determined for each recording by calculating the mean square difference between adjacent frames to identify the frame at which **x**^{(2)} moved to the farthest right (or left).
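The phase-calibration step exploits the fact that a sinusoidally shifting sub-image moves slowest at its far right/left positions, so the mean square difference (MSD) between adjacent frames is smallest there. A self-contained sketch with a synthetic 1-D "scene" (all names and parameters illustrative):

```python
import numpy as np

def turnaround_frame(frames):
    """Return the index of the adjacent-frame pair with the smallest mean
    square difference, i.e., where the shifting sub-image reverses direction."""
    msd = [np.mean((b - a) ** 2) for a, b in zip(frames, frames[1:])]
    return int(np.argmin(msd))

# Toy demo: shift a random base signal sinusoidally and locate a turnaround.
rng = np.random.default_rng(0)
base = rng.random(128)
T, amp = 16, 4                                       # period (frames), amplitude (pixels)
pos = [int(round(amp * np.sin(2 * np.pi * k / T))) for k in range(T)]
frames = [np.roll(base, p) for p in pos]
k_turn = turnaround_frame(frames)                    # lands where motion stalls
```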

### 4.2. Numerical experiment: Surveillance video

The surveillance video used in this experiment is drawn from the PETS-ECCV 2004 benchmark data set [22]. The mean square error (MSE) values for the *initial* frames of the 8-Frame Method are generally worse than those of the 2- and 4-Frame Methods, but the sharp decrease in MSE for the 8-Frame Method indicates that the solutions from the previous frames are being used effectively to initialize the current frame's optimization. Fourth, the relatively ragged behavior of the various methods in Fig. 4(a) compared to that in Fig. 4(b), especially for the 8- and 12-Frame Methods, can perhaps be attributed to the difference in time restriction: because relatively few GPSR iterations are allowed within the 5-second time limit per frame, sufficiently good solutions, relative to the other frames, are not found in some instances.

Qualitatively, the disambiguated reconstruction captures both large-scale features and fine details of the original scene. The overall structure of the lobby is correctly depicted, while small details such as those of the several kiosks and handrails, and motions such as those of the man's arms on the right half of the scene, are reproduced accurately. The ghosting on the bottom of the right half of the reconstruction results from the lack of contrast in regions of high pixel intensity values on the left half of the scene. In spite of this ghosting, details such as the edges of the lobby floor tiles are still distinguishable in the affected areas.

### 4.3. Discussion


## 5. Conclusions

## Acknowledgments

## References and links

1. Y. Hagiwara, "High-density and high-quality frame transfer CCD imager with very low smear, low dark current, and very high blue sensitivity," IEEE Trans. Electron Devices **43**, 2122–2130 (1996).
2. H. S. P. Wong, R. T. Chang, E. Crabbe, and P. D. Agnello, "CMOS active pixel image sensors fabricated using a 1.8-V, 0.25-µm CMOS technology," IEEE Trans. Electron Devices **45**, 889–894 (1998).
3. S. D. Gunapala, S. V. Bandara, J. K. Liu, C. J. Hill, S. B. Rafol, J. M. Mumolo, J. T. Trinh, M. Z. Tidrow, and P. D. Le Van, "1024 x 1024 pixel mid-wavelength and long-wavelength infrared QWIP focal plane arrays for imaging applications," Semicond. Sci. Technol. **20**, 473–480 (2005).
4. S. Krishna, D. Forman, S. Annamalai, P. Dowd, P. Varangis, T. Tumolillo, A. Gray, J. Zilko, K. Sun, M. G. Liu, J. Campbell, and D. Carothers, "Demonstration of a 320x256 two-color focal plane array using InAs/InGaAs quantum dots in well detectors," Appl. Phys. Lett. **86**, 193501 (2005).
5. R. Szeliski, "Image mosaicing for tele-reality applications," in Proc. IEEE Workshop on Applications of Computer Vision, pp. 44–53 (1994).
6. R. A. Hicks, V. T. Nasis, and T. P. Kurzweg, "Programmable imaging with two-axis micromirrors," Opt. Lett. **32**, 1066–1068 (2007).
7. S. C. Park, M. K. Park, and M. G. Kang, "Super-resolution image reconstruction: a technical overview," IEEE Signal Process. Mag. **20**, 21–36 (2003).
8. R. C. Hardie, K. J. Barnard, J. G. Bognar, E. E. Armstrong, and E. A. Watson, "High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system," Opt. Eng. **37**, 247–260 (1998).
9. J. C. Gillett, T. M. Stadtmiller, and R. C. Hardie, "Aliasing reduction in staring infrared imagers utilizing subpixel techniques," Opt. Eng. **34**, 3130–3137 (1995).
10. M. Irani and S. Peleg, "Improving resolution by image registration," CVGIP: Graph. Models Image Process. **53**, 231–239 (1991).
11. R. F. Marcia, C. Kim, J. Kim, D. Brady, and R. M. Willett, "Fast disambiguation of superimposed images for increased field of view," in Proc. IEEE Int. Conf. Image Proc. (ICIP 2008), to appear.
12. P. D. O'Grady, B. A. Pearlmutter, and S. T. Rickard, "Survey of sparse and non-sparse methods in source separation," Int. J. Imag. Syst. Tech. **15**, 18–33 (2005).
13. A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, "Sparse ICA for blind separation of transmitted and reflected images," Int. J. Imag. Syst. Tech. **15**, 84–91 (2005).
14. E. Be'ery and A. Yeredor, "Blind separation of superimposed shifted images using parameterized joint diagonalization," IEEE Trans. Image Process. **17**, 340–353 (2008).
15. J. Bobin, J.-L. Starck, J. Fadili, and Y. Moudden, "Morphological diversity and source separation," IEEE Trans. Signal Process. **13**, 409–412 (2006).
16. S. S. Chen, D. L. Donoho, and M. A. Saunders, "Atomic decomposition by basis pursuit," SIAM J. Sci. Comput. **20**, 33–61 (1998).
17. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, "Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems," IEEE J. Sel. Top. Signal Process., Special Issue on Convex Optimization Methods for Signal Processing (to appear).
18. E. Candès and T. Tao, "Near optimal signal recovery from random projections: universal encoding strategies," to be published in IEEE Trans. Inf. Theory (2006). http://www.acm.caltech.edu/~emmanuel/papers/OptimalRecovery.pdf
19. D. L. Donoho and Y. Tsaig, "Fast solution of ℓ1-norm minimization problems when the solution may be sparse," preprint (2006).
20. R. Tibshirani, "Regression shrinkage and selection via the lasso," J. Roy. Statist. Soc. Ser. B **58**, 267–288 (1996).
21. R. F. Marcia and R. M. Willett, "Compressive coded aperture video reconstruction," in Proc. Sixteenth European Signal Processing Conference (EUSIPCO 2008), to appear.
22. "Benchmark Data for PETS-ECCV 2004," in Sixth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (2004). http://www-prima.imag.fr/PETS04/caviar-data.html
23. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, "Single-pixel imaging via compressive sampling," IEEE Signal Process. Mag. **25**, 83–91 (2008).

**OCIS Codes**

(100.2000) Image processing : Digital image processing

(100.7410) Image processing : Wavelets

(110.1758) Imaging systems : Computational imaging

(110.4155) Imaging systems : Multiframe image processing

(110.3010) Imaging systems : Image reconstruction techniques

**ToC Category:**

Image Processing

**History**

Original Manuscript: July 18, 2008

Revised Manuscript: September 11, 2008

Manuscript Accepted: September 22, 2008

Published: September 29, 2008

**Citation**

Roummel F. Marcia, Changsoon Kim, Cihat Eldeniz, Jungsang Kim, David J. Brady, and Rebecca M. Willett, "Superimposed video disambiguation for increased field of view," Opt. Express **16**, 16352-16363 (2008)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-21-16352

