Superimposed video disambiguation for increased field of view

Roummel F. Marcia, Changsoon Kim, Cihat Eldeniz, Jungsang Kim, David J. Brady, and Rebecca M. Willett


Optics Express, Vol. 16, Issue 21, pp. 16352-16363 (2008)
http://dx.doi.org/10.1364/OE.16.016352



Abstract

Many infrared optical systems in wide-ranging applications such as surveillance and security frequently require large fields of view (FOVs). Often this necessitates a focal plane array (FPA) with a large number of pixels, which, in general, is very expensive. In a previous paper, we proposed a method for increasing the FOV without increasing the pixel resolution of the FPA by superimposing multiple sub-images within a static scene and disambiguating the observed data to reconstruct the original scene. This technique, in effect, allows the sub-images of the scene to share a single FPA, thereby increasing the FOV without compromising resolution. In this paper, we demonstrate this increase in FOV in a realistic setting by physically generating a superimposed video from a single scene using an optical system employing a beamsplitter and a movable mirror. Without prior knowledge of the contents of the scene, we are able to disambiguate the two sub-images, successfully capturing both large-scale features and fine details in each sub-image. We improve upon our previous reconstruction approach by allowing each sub-image to have slowly changing components, carefully exploiting correlations between sequential video frames to achieve small mean errors and to reduce run times. We show the effectiveness of this improved approach by reconstructing the constituent images of a surveillance camera video.

© 2008 Optical Society of America

1. Introduction

The performance of a typical imaging system is characterized by its resolution (the smallest feature that the system can resolve) and its field of view (FOV: the maximum angular extent that can be observed at a given instant). In most electronic imaging systems today, the detector element is a focal plane array (FPA), typically made of semiconductor photodetectors [1, 2, 3, 4]. An FPA performs spatial sampling of the optical intensities at the image plane, with the maximum resolvable spatial frequency being inversely proportional to the center-to-center distance between pixels. Therefore, to obtain a high-resolution image with a given FPA, the optics must provide sufficient magnification, which limits the FOV. There are many applications where this trade-off between resolution and FOV needs to be overcome. A good example is thermal imaging surveillance systems operating at mid- and long-wave infrared wavelengths (3–20 µm). As FPAs sensitive to this spectral range remain very expensive, techniques capable of achieving a wide FOV with a small-pixel-count FPA are desired.

Many techniques proposed to date to overcome the FOV-resolution trade-off are based on the acquisition of multiple images and their subsequent numerical processing. For example, image mosaicing techniques increase the FOV while retaining the resolution by tiling multiple sequentially captured images corresponding to different portions of the overall FOV [5, 6]. When applied to a video system, these techniques require acquisition of all sub-images for each video frame in order to accurately capture relative motion between adjacent frames. This means that the scanning element must scan through the sub-images at a rate much faster than the video frame rate, which is challenging to implement using conventional video cameras. In another example, super-resolution techniques provide a means to overcome the resolution limit imposed by the FPA pixel size. In these techniques, multiple images are obtained from a single scene, with each image having a different sub-pixel displacement from the others [7, 8]. The sub-pixel displacements provide additional information about the scene as compared with a single image, which can be exploited to construct an image of the scene with resolution better than that imposed by the FPA pixel size. For these techniques to succeed, the displacements of the low-resolution images need to be known with sub-pixel accuracy, either by precise control of hardware motion or by an accurate image registration algorithm [9, 10]. As the pixel size of FPAs continues to shrink, this requirement translates to micron-level control/registration, which is difficult to maintain in a realistic operating environment subject to vibrations and temperature variations.

Recently, we proposed a numerical method by which the FOV of an imaging system can be increased without compromising its resolution [11]. In our setup, a static scene to be imaged is partitioned into smaller scenes, which are imaged onto a single FPA to form a composite image. We developed an efficient video processing approach to separate the composite image into its constituent images, thus restoring the complete scene corresponding to the overall FOV. To make this otherwise highly ill-posed disambiguation problem tractable, the superimposed sub-images are moved relative to one another between video frames. The disambiguation problem that we consider is similar to the blind source separation problem [12, 13, 14], where the manner in which the sub-images are superimposed is unknown. In our case, we control how the sub-images are superimposed by prescribing the relative motion between them. Incorporating this knowledge into our proposed optimization algorithms, we can successfully and efficiently differentiate the sub-images to accurately reconstruct the original scene.

The paper is organized as follows. In Sec. 2, we discuss the concept of our technique and give a detailed description of the proposed architecture. Sec. 3 shows how the video disambiguation problem can be formulated and solved using optimization techniques based on sparse representation algorithms. In Sec. 4, we describe both the physical and numerical experiments. We conclude with a summary of the paper in Sec. 5.

2. Proposed camera architecture for generating a superimposed video

Figure 1(a) schematically shows the basic concept of superimposition and disambiguation. In the superimposition process, multiple sub-images are merged to form a composite image (shown on the right side of Fig. 1(a)) in a straightforward manner: the intensity of each pixel in the composite image is the sum of the intensities of the corresponding pixels in the individual images. However, the inverse process, the disambiguation of the individual sub-images from this composite image, is more challenging. For this, we must determine how the intensity of each pixel in the composite image is distributed over the corresponding pixels in the individual sub-images so that the resulting reconstruction accurately represents the original scene. Our technique achieves this task by measuring a composite video sequence, where the position of each sub-image is slightly altered at each frame. It is the movement of these individual sub-images that allows disambiguation to succeed. For simplicity, we consider the example of superimposing only two sub-images in our experiments, but the approach we describe can be extended to more general cases.

Fig. 1. (a) Basic concept of superimposition and disambiguation. (b) Proposed camera architecture for superimposing two sub-images, viewed from the top. The scene is split into two halves, $x_t^{(1)}$ and $x_t^{(2)}$. The optical field from the left half propagates directly through the beamsplitter to hit the FPA in the camera. The optical field from the right half hits a movable mirror before propagating to the beamsplitter and being reflected to the FPA in the camera.

Superimposed images which are shifted relative to one another at different frames can easily be recorded using a simple camera architecture, depicted for two sub-images in Fig. 1(b). Constructed using beamsplitters and movable mirrors, the proposed assembly merges the sub-images into a single image and temporally varies the relative position of the two sub-images as they hit the detector. The optical field from the left half of the scene propagates directly through the beamsplitter and hits the FPA in the camera at the same relative position for every frame. The optical field from the right half of the scene, however, is reflected by a movable mirror followed by the beamsplitter before hitting the FPA. When the mirror, mounted on a linear stage, is moved, the right half of the scene is moved correspondingly. The image recorded by the FPA is then the sum of the stationary left sub-image and the right sub-image that is moved for each frame, resulting in a superimposed video sequence.
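As a concrete illustration (not the authors' code), the following NumPy sketch simulates this acquisition: the left half of a static scene is summed with a per-frame, sinusoidally shifted copy of the right half, plus noise. The function name and parameters are hypothetical, and np.roll is used as a simple cyclic stand-in for the physical shift.

```python
import numpy as np

def simulate_superimposed_video(scene, n_frames=32, amplitude=8.0,
                                period=16, noise_sigma=0.01, seed=0):
    """Toy simulation of the camera in Fig. 1(b): the left sub-image is
    stationary while the right sub-image is shifted sinusoidally before
    the two are summed on a common FPA. Assumes an even scene width."""
    h, w = scene.shape
    x1, x2 = scene[:, : w // 2], scene[:, w // 2 :]   # x^(1), x^(2)
    rng = np.random.default_rng(seed)
    frames = []
    for t in range(n_frames):
        # Horizontal displacement induced by the mirror at frame t.
        shift = int(round(amplitude * np.sin(2.0 * np.pi * t / period)))
        # np.roll wraps around; a physical shift would crop at the edges.
        x2_shifted = np.roll(x2, shift, axis=1)       # S_t applied to x^(2)
        z_t = x1 + x2_shifted + noise_sigma * rng.standard_normal(x1.shape)
        frames.append(z_t)
    return np.stack(frames)
```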

3. Mathematical model and computational approach for disambiguation

Let $\{x_t\}$ be a sequence of frames representing a slowly changing scene. The superimposition process (Fig. 1(a)) can be modeled mathematically at the $t$th frame as

$$z_t = A_t x_t + \varepsilon_t, \tag{1}$$

where $z_t$ is the observed composite frame and $\varepsilon_t$ represents measurement noise. In the camera architecture described in Sec. 2, one sub-image is held stationary relative to the other. If $x_t = [x_t^{(1)}; x_t^{(2)}]$ are the pixel intensities corresponding to the two sub-images, then $A_t$ is the underdetermined matrix $[I \;\; S_t]$, where $I$ is the identity matrix and $S_t$ describes the movement of the second sub-image in relation to the first at the $t$th frame. Here, we assume that $x_t^{(1)}$ corresponds to the stationary sub-image while $x_t^{(2)}$ corresponds to the sub-image whose shifting is induced by the moving mirror (see Fig. 1(b)). Then the above system can be modeled mathematically as

$$z_t = [I \;\; S_t] \begin{bmatrix} x_t^{(1)} \\ x_t^{(2)} \end{bmatrix} + \varepsilon_t = \tilde{S}_t \tilde{W} \theta_t + \varepsilon_t, \tag{2}$$

where $\tilde{S}_t \equiv [I \;\; S_t]$, $\tilde{W}$ is the wavelet synthesis operator, and $\theta_t$ is the vector of wavelet coefficients of $x_t$, i.e., $x_t = \tilde{W}\theta_t$.
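To fix ideas, here is a matrix-free sketch of the observation operator in Eq. (2). It is hypothetical: `synth` stands in for the wavelet synthesis $\tilde{W}$ (e.g., an inverse wavelet transform supplied by the user), and a cyclic roll stands in for $S_t$.

```python
import numpy as np

def make_observation_op(shift_t, synth):
    """Return the map theta -> S~_t W~ theta of Eq. (2) for one frame.

    shift_t : integer horizontal shift applied to the second sub-image (S_t)
    synth   : function mapping coefficients theta to the pair (x1, x2),
              i.e., the wavelet synthesis W~ (assumed given, not defined here)
    """
    def A(theta):
        x1, x2 = synth(theta)                       # W~ theta = [x^(1); x^(2)]
        return x1 + np.roll(x2, shift_t, axis=1)    # [I  S_t] acting on [x1; x2]
    return A
```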

We formulate the reconstruction problem as a sequence of nonlinear optimization problems, minimizing the norm of the error $\|z_t - \tilde{S}_t \tilde{W} \theta_t\|$ along with a regularization term $\tau\|\theta_t\|$, for some tuning parameter $\tau$, at each time frame, and using the computed minimum as the initial value for the following frame. Since the underlying inverse problem is underdetermined, the regularization term in the objective function is necessary to make the disambiguation problem well-posed. This formulation of the reconstruction problem is similar to the ℓ2-ℓ1 formulation of the compressed sensing problem [18, 19, 20] for suitably chosen norms: using the Euclidean norm for the error term gives the least-squares error, while using the one-norm for the regularization term induces sparsity in the solution. Sparse solutions in the wavelet domain provide accurate reconstructions of the original signal since the wavelet transform typically concentrates the majority of a natural image's energy in a relatively small number of basis coefficients. To disambiguate two superimposed images, we thus formulate the task as the nonlinear optimization problem

$$\hat{\theta}_t = \arg\min_{\theta_t} \|z_t - \tilde{S}_t \tilde{W} \theta_t\|_2^2 + \tau\|\theta_t\|_1. \tag{3}$$

If we solve the optimization problem (3) for each frame independently (we refer to this as the 1-Frame Method), the ℓ1 regularization term can lead to reasonably accurate solutions to an otherwise underdetermined and ill-posed inverse problem, particularly when the true scene is very sparse in the wavelet basis and significant amounts of computation time are devoted to each frame. However, when the scene is stationary or slowly varying relative to the frame rate of the imaging system, subsequent frames of observations can be used simultaneously to achieve significantly better solutions. We describe a family of methods for exploiting interframe correlations that differ in the number of frames solved simultaneously.
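The paper solves these problems with GPSR (see Sec. 4); purely for illustration, the sketch below implements a generic proximal-gradient (ISTA) iteration for the ℓ2-ℓ1 objective of Eq. (3). It is not the authors' solver; the operators `A` and `At` (the forward map and its adjoint) are assumed given, e.g., built as in the sketch above.

```python
import numpy as np

def soft_threshold(u, lam):
    """Proximal operator of lam * ||.||_1 (componentwise soft-thresholding)."""
    return np.sign(u) * np.maximum(np.abs(u) - lam, 0.0)

def ista(z, A, At, tau, step, n_iter=200, theta0=None):
    """Minimize ||z - A(theta)||_2^2 + tau * ||theta||_1 by proximal gradient.

    `step` should satisfy step <= 1 / (2 * ||A||_2^2) for convergence.
    Warm-starting via `theta0` mirrors the frame-to-frame initialization
    described in the text.
    """
    theta = np.zeros_like(At(z)) if theta0 is None else theta0.copy()
    for _ in range(n_iter):
        grad = 2.0 * At(A(theta) - z)          # gradient of the quadratic term
        theta = soft_threshold(theta - step * grad, step * tau)
    return theta
```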

2-Frame Method. We can improve upon the 1-Frame Method by solving for multiple frames in each optimization problem. In the 2-Frame Method, we solve for two successive frames simultaneously. However, rather than solving for $\theta_t$ and $\theta_{t+1}$, we solve for $\theta_t$ and $\Delta\theta_t \equiv \theta_{t+1} - \theta_t$ for two main reasons. First, for slowly changing scenes, $\theta_{t+1} \approx \theta_t$, and since both $\theta_{t+1}$ and $\theta_t$ are already sparse, $\Delta\theta_t$ is even sparser, making it especially well suited to the sparsity-inducing ℓ2-ℓ1 minimization. Second, solving for $\Delta\theta_t$ couples the frames in an otherwise separable objective function, leading to accurate solutions for both $\theta_t$ and $\Delta\theta_t$. The minimization problem can be formulated as follows:

$$\hat{\theta}_t^{[2]} \equiv \begin{bmatrix} \hat{\theta}_t \\ \Delta\hat{\theta}_t \end{bmatrix} = \arg\min_{\theta_t, \Delta\theta_t} \left\| \begin{bmatrix} z_t \\ z_{t+1} \end{bmatrix} - \begin{bmatrix} \tilde{S}_t & 0 \\ 0 & \tilde{S}_{t+1} \end{bmatrix} \begin{bmatrix} \tilde{W} & 0 \\ \tilde{W} & \tilde{W} \end{bmatrix} \begin{bmatrix} \theta_t \\ \Delta\theta_t \end{bmatrix} \right\|_2^2 + \tau \left\| \begin{bmatrix} \theta_t \\ \Delta\theta_t \end{bmatrix} \right\|_1, \tag{4}$$

where $\tilde{S}_i = [I \;\; S_i]$ for $i = t$ and $t+1$. The optimization problem for frame $(t+1)$ is initialized using

$$\begin{bmatrix} \theta_{t+1}^{(0)} \\ \Delta\theta_{t+1}^{(0)} \end{bmatrix} \leftarrow \begin{bmatrix} \hat{\theta}_t + \Delta\hat{\theta}_t \\ \Delta\hat{\theta}_t \end{bmatrix}.$$

Moreover, since $\Delta\theta_t$ is typically much sparser than $\theta_t$, we can place a larger weight on its regularization term, which yields the modified problem

$$\hat{\theta}_t^{[2]} = \arg\min_{\theta_t, \Delta\theta_t} \left\| \begin{bmatrix} z_t \\ z_{t+1} \end{bmatrix} - \begin{bmatrix} \tilde{S}_t & 0 \\ 0 & \tilde{S}_{t+1} \end{bmatrix} \begin{bmatrix} \tilde{W} & 0 \\ \tilde{W} & \tilde{W} \end{bmatrix} \begin{bmatrix} \theta_t \\ \Delta\theta_t \end{bmatrix} \right\|_2^2 + \tau \|\theta_t\|_1 + \rho \|\Delta\theta_t\|_1.$$

In our experiments, we use $\rho = (1.0 \times 10^{3})\,\tau$.
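The following sketch shows how the two frames couple through $(\theta_t, \Delta\theta_t)$ in Eq. (4), and the warm start just given. The single-frame operators `A_t` and `A_t1` are assumed to implement $\theta \mapsto \tilde{S}\tilde{W}\theta$ for frames $t$ and $t+1$; all names here are hypothetical.

```python
def two_frame_residuals(theta, dtheta, z_t, z_t1, A_t, A_t1):
    """Residuals of the coupled system in Eq. (4): frame t sees theta_t,
    frame t+1 sees theta_t + dtheta_t (the lower-triangular W-bar blocks)."""
    r_t = z_t - A_t(theta)
    r_t1 = z_t1 - A_t1(theta + dtheta)
    return r_t, r_t1

def warm_start_next(theta_hat, dtheta_hat):
    """Initialization for the (t+1) problem, as given in the text:
    theta^(0)_{t+1} <- theta_hat + dtheta_hat, dtheta^(0)_{t+1} <- dtheta_hat."""
    return theta_hat + dtheta_hat, dtheta_hat
```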

4-Frame Method. The 4-Frame Method is very similar to the 2-Frame Method, but we solve for the coefficients using four successive frames instead of two, using the additional observation vectors $z_{t+2}$ and $z_{t+3}$ and observation matrices $S_{t+2}$ and $S_{t+3}$. By coupling more frames, the coefficients are required to satisfy more equations, leading to more accurate solutions. The drawback, however, is that the corresponding linear systems to be solved are larger and require more computation time. The corresponding minimization problem is given by

$$\hat{\theta}_t^{[4]} = \arg\min_{\bar{\theta}_t} \|\bar{z}_t - \bar{S}_t \bar{W} \bar{\theta}_t\|_2^2 + \tau\|\theta_t\|_1 + \rho \sum_{j=0}^{2} \|\Delta\theta_{t+j}\|_1, \tag{5}$$

where the minimizer $\hat{\theta}_t^{[4]} = [\hat{\theta}_t;\, \Delta\hat{\theta}_t;\, \Delta\hat{\theta}_{t+1};\, \Delta\hat{\theta}_{t+2}]$, $\Delta\theta_i \equiv \theta_{i+1} - \theta_i$ for $i = t, \dots, t+2$, and

$$\bar{z}_t = \begin{bmatrix} z_t \\ z_{t+1} \\ z_{t+2} \\ z_{t+3} \end{bmatrix}, \quad \bar{S}_t = \begin{bmatrix} \tilde{S}_t & & & \\ & \tilde{S}_{t+1} & & \\ & & \tilde{S}_{t+2} & \\ & & & \tilde{S}_{t+3} \end{bmatrix}, \quad \bar{W} = \begin{bmatrix} \tilde{W} & & & \\ \tilde{W} & \tilde{W} & & \\ \tilde{W} & \tilde{W} & \tilde{W} & \\ \tilde{W} & \tilde{W} & \tilde{W} & \tilde{W} \end{bmatrix}, \quad \text{and} \quad \bar{\theta}_t = \begin{bmatrix} \theta_t \\ \Delta\theta_t \\ \Delta\theta_{t+1} \\ \Delta\theta_{t+2} \end{bmatrix}.$$
Here, $\tilde{S}_i = [I \;\; S_i]$ for $i = t, \dots, t+3$. There is another formulation for simultaneously solving for four frames (see [21]). However, results from that paper indicate that solving (5) is more effective in generating accurate solutions. As in the 2-Frame Method, we place the same weight ($\rho = (1.0 \times 10^{3})\,\tau$) on each $\|\Delta\theta_i\|_1$ for $i = t, \dots, t+2$ to encourage very sparse solutions.

A general n-Frame Method can be defined likewise for simultaneously solving for n frames. In our numerical experiments, we also use the 8- and 12-Frame Methods.
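For the general n-Frame Method, the block lower-triangular $\bar{W}$ of Eq. (5) can be applied without forming it explicitly: frame $t+k$ is synthesized from the running sum $\theta_t + \sum_{j<k} \Delta\theta_{t+j}$. A hypothetical sketch, where `synth` again stands in for the single-frame wavelet synthesis $\tilde{W}$:

```python
import numpy as np

def apply_W_bar(coeffs, synth):
    """Apply the block lower-triangular W-bar of Eq. (5) for an n-frame window.

    coeffs : list [theta_t, dtheta_t, ..., dtheta_{t+n-2}]
    synth  : the single-frame wavelet synthesis W~ (assumed given)
    Returns the n synthesized scenes [x_t, ..., x_{t+n-1}].
    """
    running = np.zeros_like(coeffs[0])
    frames = []
    for c in coeffs:
        running = running + c          # cumulative sum realizes the triangle
        frames.append(synth(running))  # x_{t+k} = W~(theta_t + sum of dthetas)
    return frames
```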

4. Experimental methods

In these experiments, we solve the optimization problems for the various proposed methods using the Gradient Projection for Sparse Reconstruction (GPSR) algorithm of Figueiredo et al. [17]. GPSR is a gradient-based optimization method that is fast, accurate, and efficient. In addition, GPSR has a debiasing phase: upon solving the ℓ2-ℓ1 minimization problem, it fixes the non-zero pattern of the optimal $\theta_t$ and minimizes the ℓ2 term of the objective function, reducing the reconstruction error while keeping the number of non-zero wavelet coefficients at a minimum. GPSR has been shown to outperform many state-of-the-art codes for solving the ℓ2-ℓ1 minimization problem or its equivalent formulations.
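For intuition about the debiasing phase, the sketch below (not GPSR's implementation, which is matrix-free) fixes the support of the ℓ1 solution and re-solves the least-squares term on that support, using an explicit matrix `A_dense` purely for illustration:

```python
import numpy as np

def debias(z, A_dense, theta_l1, tol=1e-6):
    """Fix the nonzero pattern of the l2-l1 solution theta_l1, then minimize
    only ||z - A theta||_2^2 over the coefficients in that support.

    z is assumed flattened to 1-D; A_dense has one column per coefficient."""
    support = np.abs(theta_l1) > tol
    theta = np.zeros_like(theta_l1)
    # Least squares restricted to the support found by the sparse solve.
    theta[support], *_ = np.linalg.lstsq(A_dense[:, support], z, rcond=None)
    return theta
```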

The reconstruction videos for both the physical and numerical experiments are available at http://www.ee.duke.edu/nislab/videos/disambiguation under the names duke-earth-day.avi and surveillance.avi.

4.1. Optical experiment: Duke Earth Day

Fig. 2. (a) Original “Duke Earth Day” scene used in the experiment. The box with a solid red border represents the extent of $x^{(1)}$, which is stationary during the superimposition process. As the mirror moves in a circular motion in the x-z plane shown in Fig. 1(b), the blue box with a solid border, which represents the moving boundary of $x^{(2)}$ at the object plane, oscillates between the left and right turning points, represented by the blue boxes with dashed and dotted borders, respectively. (b) Superimposed image (left panel) and reconstructed scene (right panel) when the moving boundary is near the mid-point of the oscillation in the superimposed video. (c) Superimposed image (left panel) and reconstructed scene (right panel) when the moving boundary is near the left turning point, where the sub-images are not completely disambiguated: the man in the hat and the banner (circled in yellow) partly appear in the left half of the disambiguated image.

As mentioned above, the system shown in Fig. 1(b) is capable of generating a composite video where the sub-image corresponding to the right half of the scene, $x^{(2)}$, is moved while that corresponding to the left half of the scene, $x^{(1)}$, remains still. In our experiment, the movement of $x^{(2)}$ was along the x-direction, with its position following a sinusoidal function of the frame index. This was achieved by moving the mirror with a motion controller along a circular path in the x-z plane at a constant velocity; the displacement of the mirror along the x-direction causes $x^{(2)}$ to move in the same direction, whereas the motion of the mirror in the z-direction does not create any change in the composite video. To determine $\tilde{S}_t$ in Eq. (2) corresponding to a given circular movement of the mirror, we performed a calibration experiment where a scene with a white background contained a black dot on its right half. By tracking the dot in the recorded video, we verified that the movement of the dot in the video was indeed sinusoidal, and we also determined its amplitude and period. In the actual experiment, the scene was replaced with a photograph (“Duke Earth Day”) while leaving the rest of the system unaltered. Hence, the amplitude and period of the movement of $x^{(2)}$ are the same as those obtained in the calibration experiment. The phase of the sinusoidal movement was determined for each recording by calculating the mean square difference between adjacent frames to identify the frame at which $x^{(2)}$ moved farthest to the right (or left).
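The amplitude estimation in such a calibration reduces to a small least-squares problem. The following sketch is one possible approach (not necessarily the authors' procedure): given the tracked x-positions of the dot and the known period, it recovers the amplitude and phase of the sinusoidal trajectory.

```python
import numpy as np

def fit_sinusoid(positions, period):
    """Fit x(t) = a sin(w t) + b cos(w t) + c to tracked dot positions,
    then convert to amplitude/phase form x(t) = R sin(w t + phi) + c."""
    t = np.arange(len(positions), dtype=float)
    w = 2.0 * np.pi / period
    M = np.column_stack([np.sin(w * t), np.cos(w * t), np.ones_like(t)])
    (a, b, c), *_ = np.linalg.lstsq(M, np.asarray(positions, dtype=float),
                                    rcond=None)
    return np.hypot(a, b), np.arctan2(b, a), c   # amplitude, phase, offset
```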

4.2. Numerical experiment: Surveillance video

The video used in this numerical experiment is obtained from the Benchmark Data for PETS-ECCV 2004 [22]. Called Fight_OneManDown.mpg, it depicts two people fighting, with one man eventually falling down while the other runs away. The video was filmed using a wide-angle camera lens in the entrance lobby of the INRIA Labs at Grenoble, France. Originally at 384×288 pixel resolution, the color video is rescaled to 512×256 and converted to grayscale for ease of processing. This type of video is appropriate for our application since the scene of the lobby is relatively static, with only some small scene changes corresponding to people moving in the lobby. We only use parts of the video where there is movement in both halves of the scene to test whether our approach can assign each moving component to the proper sub-image (see objects circled in yellow in Fig. 3(a)). Zero-mean white Gaussian noise is added in our simulations. We add 10 frames between video frames using interpolation to simulate a faster video frame rate.

Fig. 3. (a) Original surveillance video with two moving components (circled in yellow), (b) the observed superimposition of the left and right halves of the scene, and (c) the reconstruction using the 8-Frame Method with 20 seconds of GPSR iterations.
Fig. 4. MSE values for the 90 frames when 5 seconds (a) and 20 seconds (b) are allowed for solving the optimization problems of the different n-Frame Methods on the surveillance video.

4.3. Discussion

Aside from these challenges associated with hardware and temporal correlations, the mathematical methods described in this paper should extend to handling much larger numbers of sub-images. For example, consider an extreme case in which there is one sub-image for each pixel in the high-resolution scene. In this setting, each observation frame would be the sum of a different subset of the pixels (sub-images) in the scene, because the shifting of sub-images would make some of them unobservable by the detector at different times. This model is highly analogous to the Rice “Single Pixel Camera” [23], in which each measurement (in time) is the sum of a random collection of pixels. Duarte et al. demonstrate that if sufficiently many measurements are collected over time using this setup, then a static scene can be reconstructed with high accuracy.

We note that the proposed approach is different from classical mosaicing as described in Sec. 1. For example, consider a scene with a quickly moving, transient object in one location. With classical mosaicing, this object would only be observed at half the frame rate (since the other half of the frame rate is used to observe the other half of the scene); however, it would be observed with high spatial resolution. In contrast, the proposed technique would observe every part of the image during each frame acquisition, resulting in very high temporal resolution for detecting transient objects. However, because the disambiguation procedure relies on temporal correlations, the spatial resolution of the reconstructed object would be relatively poor – i.e., it would look blurred.

5. Conclusions

Acknowledgments

The authors would like to thank Les Todd, assistant director of Duke Photography, for allowing the use of the “Duke Earth Day” photograph in our physical experiments. The authors were partially supported by DARPA Contract No. HR0011-04-C-0111, ONR Grant No. N00014-06-1-0610, and DARPA Contract No. HR0011-06-C-0109.

References and links

1. Y. Hagiwara, “High-density and high-quality frame transfer CCD imager with very low smear, low dark current, and very high blue sensitivity,” IEEE Trans. Electron Devices 43, 2122–2130 (1996). [CrossRef]
2. H. S. P. Wong, R. T. Chang, E. Crabbe, and P. D. Agnello, “CMOS active pixel image sensors fabricated using a 1.8-V, 0.25-µm CMOS technology,” IEEE Trans. Electron Devices 45, 889–894 (1998). [CrossRef]
3. S. D. Gunapala, S. V. Bandara, J. K. Liu, C. J. Hill, S. B. Rafol, J. M. Mumolo, J. T. Trinh, M. Z. Tidrow, and P. D. Le Van, “1024 × 1024 pixel mid-wavelength and long-wavelength infrared QWIP focal plane arrays for imaging applications,” Semicond. Sci. Technol. 20, 473–480 (2005). [CrossRef]
4. S. Krishna, D. Forman, S. Annamalai, P. Dowd, P. Varangis, T. Tumolillo, A. Gray, J. Zilko, K. Sun, M. G. Liu, J. Campbell, and D. Carothers, “Demonstration of a 320×256 two-color focal plane array using InAs/InGaAs quantum dots in well detectors,” Appl. Phys. Lett. 86, 193501 (2005). [CrossRef]
5. R. Szeliski, “Image mosaicing for tele-reality applications,” in Proc. IEEE Workshop on Applications of Computer Vision, pp. 44–53 (1994).
6. R. A. Hicks, V. T. Nasis, and T. P. Kurzweg, “Programmable imaging with two-axis micromirrors,” Opt. Lett. 32, 1066–1068 (2007). [CrossRef] [PubMed]
7. S. C. Park, M. K. Park, and M. G. Kang, “Super-resolution image reconstruction: a technical overview,” IEEE Signal Process. Mag. 20, 21–36 (2003). [CrossRef]
8. R. C. Hardie, K. J. Barnard, J. G. Bognar, E. E. Armstrong, and E. A. Watson, “High-resolution image reconstruction from a sequence of rotated and translated frames and its application to an infrared imaging system,” Opt. Eng. 37, 247–260 (1998). [CrossRef]
9. J. C. Gillett, T. M. Stadtmiller, and R. C. Hardie, “Aliasing reduction in staring infrared imagers utilizing subpixel techniques,” Opt. Eng. 34, 3130–3137 (1995). [CrossRef]
10. M. Irani and S. Peleg, “Improving resolution by image registration,” CVGIP: Graph. Models Image Process. 53, 231–239 (1991). [CrossRef]
11. R. F. Marcia, C. Kim, J. Kim, D. Brady, and R. M. Willett, “Fast disambiguation of superimposed images for increased field of view,” accepted to Proc. IEEE Int. Conf. Image Proc. (ICIP 2008).
12. P. D. O’Grady, B. A. Pearlmutter, and S. T. Rickard, “Survey of sparse and non-sparse methods in source separation,” Int. J. Imag. Syst. Tech. 15, 18–33 (2005). [CrossRef]
13. A. M. Bronstein, M. M. Bronstein, M. Zibulevsky, and Y. Y. Zeevi, “Sparse ICA for blind separation of transmitted and reflected images,” Int. J. Imag. Syst. Tech. 15, 84–91 (2005). [CrossRef]
14. E. Be’ery and A. Yeredor, “Blind separation of superimposed shifted images using parameterized joint diagonalization,” IEEE Trans. Image Process. 17, 340–353 (2008). [CrossRef] [PubMed]
15. J. Bobin, J.-L. Starck, J. Fadili, and Y. Moudden, “Morphological diversity and source separation,” IEEE Trans. Signal Process. 13, 409–412 (2006). [CrossRef]
16. S. S. Chen, D. L. Donoho, and M. A. Saunders, “Atomic decomposition by basis pursuit,” SIAM J. Sci. Comput. 20, 33–61 (1998).
17. M. A. T. Figueiredo, R. D. Nowak, and S. J. Wright, “Gradient projection for sparse reconstruction: application to compressed sensing and other inverse problems,” IEEE J. Sel. Top. Signal Process. (to appear). [PubMed]
18. E. Candès and T. Tao, “Near optimal signal recovery from random projections: universal encoding strategies,” to be published in IEEE Trans. Inf. Theory (2006). http://www.acm.caltech.edu/~emmanuel/papers/OptimalRecovery.pdf. [CrossRef]
19. D. L. Donoho and Y. Tsaig, “Fast solution of ℓ1-norm minimization problems when the solution may be sparse,” preprint (2006).
20. R. Tibshirani, “Regression shrinkage and selection via the lasso,” J. Roy. Statist. Soc. Ser. B 58, 267–288 (1996).
21. R. F. Marcia and R. M. Willett, “Compressive coded aperture video reconstruction,” accepted to Proc. Sixteenth European Signal Processing Conference (EUSIPCO 2008).
22. “Benchmark Data for PETS-ECCV 2004,” in Sixth IEEE International Workshop on Performance Evaluation of Tracking and Surveillance (2004). http://www-prima.imag.fr/PETS04/caviar-data.html.
23. M. Duarte, M. Davenport, D. Takhar, J. Laska, T. Sun, K. Kelly, and R. Baraniuk, “Single-pixel imaging via compressive sampling,” IEEE Signal Process. Mag. 25, 83–91 (2008). [CrossRef]

OCIS Codes
(100.2000) Image processing : Digital image processing
(100.7410) Image processing : Wavelets
(110.1758) Imaging systems : Computational imaging
(110.4155) Imaging systems : Multiframe image processing
(110.3010) Imaging systems : Image reconstruction techniques

ToC Category:
Image Processing

History
Original Manuscript: July 18, 2008
Revised Manuscript: September 11, 2008
Manuscript Accepted: September 22, 2008
Published: September 29, 2008

Citation
Roummel F. Marcia, Changsoon Kim, Cihat Eldeniz, Jungsang Kim, David J. Brady, and Rebecca M. Willett, "Superimposed video disambiguation for increased field of view," Opt. Express 16, 16352-16363 (2008)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-21-16352


