Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 19, Iss. 11 — May 23, 2011
  • pp: 10762–10768

Real-time image stabilization for arbitrary motion blurred image based on opto-electronic hybrid joint transform correlator

Yixian Qian, Yong Li, Jie Shao, and Hua Miao  »View Author Affiliations


Optics Express, Vol. 19, Issue 11, pp. 10762-10768 (2011)
http://dx.doi.org/10.1364/OE.19.010762




Abstract

An efficient approach to real-time image stabilization based on opto-electronic hybrid processing is presented. The image motion vector is detected effectively and the point spread function (PSF) is modeled accurately in real time, which greatly reduces the complexity of the image-restoration algorithm. The approach applies to arbitrarily motion-blurred images. We have also constructed an image-stabilization measurement system. Experimental results show that the proposed method achieves real-time operation with good restoration quality.

© 2011 OSA

1. Introduction

High-resolution imaging systems are of interest for a wide variety of applications, particularly in remote sensing, airborne reconnaissance, and aerial photography. However, image resolution is always limited by image motion [1–3] induced by aircraft flight, various vibrations, and attitude changes, which are often the major causes of image degradation.

To compensate for image motion and stabilize the image, an image stabilizer is required to improve image quality. Image stabilizers generally fall into two types: digital image stabilizers (DIS) and optical image stabilizers (OIS) [4]. A DIS reduces image blur by using an image-restoration filter or algorithm [5–7], such as an inverse filter, a Wiener filter, or blind image deconvolution [8,9]. This approach extracts image motion vectors and acquires the PSF from the blurred images themselves, without external measurement devices such as gyroscopes or accelerometers [1]. However, a DIS requires additional buffer memory for image processing and takes a long time to measure and correct the image, so its real-time performance for high-resolution images is poor. Moreover, an image-restoration filter often requires prior knowledge of the point spread function (PSF), and directly sensing the PSF and then calculating the modulation transfer function (MTF) is complex and inaccurate in most applications. Numerous restoration algorithms have been reported that derive the PSF from the blurred image itself or from an image sequence, but the accuracy of the estimated PSF is limited by the complexity and randomness of the motion. A further limitation of these algorithms is their dependence on image content: if the images do not contain well-defined edges, accurate estimation of the motion function is difficult. The compensation accuracy of a DIS is therefore limited. Although blind image deconvolution requires no prior knowledge of the PSF, this iterative algorithm needs an initial guess of the PSF, and the accuracy of the estimated PSF and the quality of the restored image depend entirely on that guess.

An OIS compensates image motion by moving the lens system or the sensor [3,10]. Moving a lens (or rotating a mirror) or the sensor changes the optical path to stabilize the image. However, this approach requires a complicated optical-mechanical-electrical system in which motion is measured by gyroscopes or accelerometers, which makes the stabilizer complex and costly and limits its precision owing to friction and wind resistance. Moreover, the real-time performance of the OIS approach is also poor.

For the above reasons, in this paper we put forward an efficient approach to achieve real-time image stabilization by opto-electronic hybrid processing based on an opto-electronic hybrid joint transform correlator (JTC) [11–15]. First, the image velocity vector is detected from the image sequence captured by a high-speed CCD using the JTC. An accurate PSF model is then constructed from the velocity vector, which greatly reduces the complexity of the image-restoration algorithm. Finally, an appropriate algorithm is applied to restore the blurred image rapidly. We have also constructed an image-stabilization measurement system. This paper describes the principle of the proposed approach and presents some preliminary experimental results.
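Once the velocity vector is known, constructing the PSF is straightforward: for uniform linear motion the PSF is a line whose length equals speed times exposure time. The sketch below is a minimal illustration under our own assumptions (velocity in pixels/ms, a small square support); it is not the authors' implementation.

```python
import numpy as np

def motion_psf(vx, vy, exposure_ms, size=15):
    """Linear-motion PSF from a velocity vector (illustrative sketch).

    vx, vy      : image-plane velocity in pixels/ms (assumed units)
    exposure_ms : exposure period of the primary camera in ms
    The blur streak has length hypot(vx, vy) * exposure_ms pixels.
    """
    dx, dy = vx * exposure_ms, vy * exposure_ms   # blur extent in pixels
    n = max(int(np.ceil(np.hypot(dx, dy))), 1)    # samples along the streak
    psf = np.zeros((size, size))
    c = size // 2
    for t in np.linspace(0.0, 1.0, n + 1):        # trace the centred streak
        x = int(round(c + t * dx - dx / 2))
        y = int(round(c + t * dy - dy / 2))
        if 0 <= x < size and 0 <= y < size:
            psf[y, x] += 1.0
    return psf / psf.sum()                        # unit total energy
```

For example, a horizontal velocity of 0.2 pixels/ms over a 20 ms exposure traces a streak about 4 pixels long.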

2. Theory

2.1 Principle of image stabilization

2.2 Motion vector detection and image restoration algorithms

An appropriate algorithm can be utilized to restore the blurred image once the PSF is known. Here we used the Richardson–Lucy (RL) algorithm [6], a nonlinear iterative deconvolution technique for deblurring an image whose PSF is known exactly in advance. The RL iteration can be written as

f_{i+1}(x,y) = f_i(x,y) \left[ h(x,y) \otimes \frac{g(x,y)}{h(x,y) * f_i(x,y)} \right]
(7)

where f_i(x, y) is the restored image after i iterations, g(x, y) is the blurred image, h(x, y) is the PSF, * is the convolution operation, and ⊗ is the correlation operation.
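The iteration in Eq. (7) maps directly onto a few lines of code. The sketch below is our own minimal implementation (the flat initial estimate and the iteration count are arbitrary choices); it uses the fact that correlation with h equals convolution with the flipped h.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy(g, h, n_iter=30):
    """Richardson-Lucy deconvolution of blurred image g with known PSF h.

    Implements Eq. (7): f_{i+1} = f_i [ h (x) ( g / (h * f_i) ) ],
    where * is convolution and (x) is correlation (i.e. convolution
    with the flipped PSF).
    """
    f = np.full_like(g, 0.5)            # flat non-negative initial estimate
    h_flip = h[::-1, ::-1]              # correlation = convolution with flipped h
    eps = 1e-12                         # guard against division by zero
    for _ in range(n_iter):
        denom = fftconvolve(f, h, mode="same") + eps
        f = f * fftconvolve(g / denom, h_flip, mode="same")
    return f
```

In practice the iteration is truncated early when the data are noisy, since RL amplifies noise as it converges.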

3. Experimental results and discussions

We developed a real-time image-stabilization system based on the opto-electronic hybrid joint transform correlator to verify the effectiveness of the approach. The experimental set-up is shown in Fig. 2. It consists of a primary CCD camera with a resolution of 1028 × 1024 pixels, an exposure period of 20 ms, and a frame rate of 1 fps; three high-speed CCD cameras with 1280 × 1024 pixels, a frame rate of 636 fps, and a pixel size of 12 × 12 µm; and two ferroelectric liquid crystal SLMs (FLCSLMs) with a resolution of 512 × 512 pixels, a frame rate of 1015 fps, and a pixel size of 7.68 × 7.68 µm. The system also incorporates a 532 nm laser, two Fourier lenses with a focal length of 300 mm, and a digital signal processor (DSP). To simulate the relative motion, for simplicity, the primary and high-speed CCD cameras are kept stationary while the object, driven by a motor, undergoes uniform linear motion; the experimental measurement system is illustrated schematically in Fig. 3. The moving object is actually a newspaper.

Fig. 2 Experiment set-up (part).
Fig. 3 Experimental measurement system.

In practice, to lower the resolution requirement of the SLM, the input image is a sub-image extracted from the complete image captured by the high-speed CCD. A further advantage of using a sub-image is that it contains less noise than the complete image.

A critical task for the JTC is to detect the position of the cross-correlation peak. However, the classical JTC has several drawbacks: it is sensitive to geometric distortions and noise in the input scene, it contains a strong zero-order peak, and its discrimination ability is low. Several approaches have been reported to overcome these difficulties [14,15]. Here we employ the wavelet transform (WT) to process the joint transform power spectrum (JTPS).

A mother wavelet Φ(x, y) is a finite-duration window function that generates daughter wavelets through dilations (a_x, a_y) and shifts (b_x, b_y):

\Phi_{a_x,a_y}(x,y) = \frac{1}{\sqrt{a_x a_y}}\,\Phi\!\left(\frac{x-b_x}{a_x}, \frac{y-b_y}{a_y}\right)
(8)

The mother wavelet must satisfy the admissibility conditions: it must be oscillatory, decay rapidly to zero, and integrate to zero. The WT is defined as the inner product between an analyzed signal f(x, y) and the daughter wavelets:

W_f(a_x,a_y;b_x,b_y) = \frac{1}{\sqrt{a_x a_y}} \iint f(x,y)\,\Phi^{*}\!\left(\frac{x-b_x}{a_x}, \frac{y-b_y}{a_y}\right) \mathrm{d}x\,\mathrm{d}y
(9)

where * denotes the complex conjugate. By dilating the wavelet, the WT provides a multiresolution decomposition of the signal, with good spatial resolution at high frequencies and good frequency resolution at low frequencies. The WT can therefore localize particular features of the signals being analyzed.

In our work, the two-dimensional Mexican hat wavelet is adopted for its ability to extract edges of equal width regardless of the size or orientation of the input pattern. It is the second derivative of the Gaussian function:

\Phi(x,y) = \left[1-(x^2+y^2)\right]\exp\!\left(-\frac{x^2+y^2}{2}\right)
(10)

The Fourier transform of the Mexican hat is

\Psi(\omega_x,\omega_y) = 4\pi^2(\omega_x^2+\omega_y^2)\exp\!\left[-2\pi^2(\omega_x^2+\omega_y^2)\right]
(11)

where (ω_x, ω_y) are the spatial-frequency coordinates in the x and y directions, respectively.
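Eq. (11) can be sampled on the discrete FFT frequency grid to act as a bandpass filter on the JTPS; because Ψ(0, 0) = 0 it removes the strong zero-order term exactly while passing the mid-frequency fringes that carry the correlation signal. The grid size and the unnormalized scaling below are our own illustrative choices.

```python
import numpy as np

def mexican_hat_filter(shape):
    """Sample Psi of Eq. (11) on the FFT frequency grid of `shape`."""
    fy = np.fft.fftfreq(shape[0])[:, None]   # cycles/sample, y direction
    fx = np.fft.fftfreq(shape[1])[None, :]   # cycles/sample, x direction
    w2 = fx**2 + fy**2
    # Psi = 4*pi^2 |w|^2 exp(-2*pi^2 |w|^2): exactly zero at DC, bandpass
    return 4 * np.pi**2 * w2 * np.exp(-2 * np.pi**2 * w2)

H = mexican_hat_filter((256, 256))
```

Multiplying the recorded JTPS by H element-wise then suppresses the zero-order light before the second Fourier transform.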

The WT effectively suppresses the noise in the JTPS, enhances the energy of the ±1-order diffracted light, and thereby improves detection. A comparison of the cross-correlation output is shown in Fig. 4: before processing the cross-correlation peak can hardly be detected, whereas after WT processing an obvious output peak is obtained. The cross-correlation peak intensity is presented in Fig. 5; the normalized correlation intensity reaches 0.5.

Fig. 4 Comparison, (a) before processing, (b) after processing.
Fig. 5 Cross-correlation peak intensity.
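The peak-location principle can be checked with a purely digital simulation of the correlator. In the sketch below (the plane size, separation, and search mask are our own choices), a reference and a shifted target are stacked in one input plane; the squared magnitude of its Fourier transform is the JTPS, and a second transform yields cross-correlation peaks whose displacement from the fixed separation encodes the motion vector.

```python
import numpy as np

def jtc_shift(ref, tgt, sep=40):
    """Recover the (row, col) shift of tgt relative to ref with a
    digitally simulated joint transform correlator."""
    h, w = ref.shape
    N = 4 * (h + sep)                        # roomy plane avoids wraparound
    plane = np.zeros((N, N))
    c = N // 2
    plane[c - sep - h:c - sep, :w] = ref     # reference above centre
    plane[c + sep:c + sep + h, :w] = tgt     # target below centre
    jtps = np.abs(np.fft.fft2(plane)) ** 2   # joint transform power spectrum
    corr = np.fft.fftshift(np.abs(np.fft.ifft2(jtps)))
    lower = corr[c + sep:, :]                # +1-order region only (skips DC)
    py, px = np.unravel_index(np.argmax(lower), lower.shape)
    return py - sep - h, px - c              # offset from the no-motion peak

# A target pattern shifted by (3, 2) relative to the reference:
ref = np.zeros((16, 16)); ref[4:8, 5:9] = 1.0
tgt = np.zeros((16, 16)); tgt[7:11, 7:11] = 1.0
```

Here `jtc_shift(ref, tgt)` recovers the shift (3, 2). In the optical system the two Fourier transforms are performed by lenses, and only the peak search is done electronically.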

The real-time image-stabilization process is as follows. First, during one exposure period the high-speed CCD camera captures an image sequence, which is sent to the JTC and processed by the DSP to obtain h(x, y). Second, the single blurred image captured by the primary CCD camera is conveyed to the DSP and restored by the R-L algorithm. Finally, real-time, high-resolution images are acquired without delay. Figure 6 shows the image-stabilization results at two moving velocities.

Fig. 6 Experimental results and comparisons, (a) original image at v = 2.5 µm/ms; (b) stabilized image at v = 2.5 µm/ms; (c) original image at v = 12.5 µm/ms; (d) stabilized image at v = 12.5 µm/ms.

The time for the high-speed CCD camera to capture one image is 1.57 ms (frame rate 636 fps), and the time for the SLM to update an image twice is approximately 0.98 × 2 = 1.96 ms (frame rate 1015 fps). By test and calculation, detecting and processing each JTPS with CCD1 costs about 11.7 ms, and processing each cross-correlation output with CCD2 takes 9.3 ms. The R-L restoration algorithm requires approximately 24 ms. The optical Fourier transforms are performed at the speed of light, so their time is negligible. Since the exposure period of the primary CCD camera is 20 ms, the high-speed CCD camera captures 13 images during one exposure; a complete image-stabilization cycle through the JTC therefore takes approximately 0.37 s, which is well within the 1 s frame period of the primary camera (1 fps), so the real-time performance is strong.
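The timing budget above can be reproduced with simple arithmetic. The accounting below is our own back-of-envelope sketch from the quoted figures (the paper's total is ≈0.37 s; the exact per-frame overlap of capture, SLM update, and processing is not spelled out, so this sum lands slightly lower but in the same range):

```python
# Back-of-envelope timing from the figures quoted in the text.
capture_ms = 1000 / 636          # high-speed CCD frame time  ~ 1.57 ms
slm_ms     = 2 * 1000 / 1015     # two SLM updates            ~ 1.97 ms
jtps_ms    = 11.7                # JTPS detection/processing (CCD1)
corr_ms    = 9.3                 # cross-correlation output (CCD2)
rl_ms      = 24.0                # R-L restoration of the blurred frame
frames     = 13                  # high-speed frames per 20 ms exposure

per_frame_ms = capture_ms + slm_ms + jtps_ms + corr_ms
total_s = (frames * per_frame_ms + rl_ms) / 1000
assert 0.3 < total_s < 0.4       # ~0.34 s, same range as the quoted 0.37 s
assert total_s < 1.0             # well inside the primary camera's 1 s period
```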

4. Conclusions

An effective method for real-time image stabilization based on an opto-electronic hybrid joint transform correlator was proposed. The velocity vector is detected rapidly by the JTC while an accurate PSF of the degraded image is modeled instantaneously by the DSP, and a simple, appropriate algorithm restores the blurred image. A JTC measurement system was developed to verify the effectiveness of the approach, and the experimental results show that the proposed method achieves real-time operation with good restoration quality. The approach applies to arbitrarily motion-blurred images. With higher frame rates for the CCD and SLM, e.g., thousands of fps, both the real-time performance and the restoration accuracy would improve further.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (No. 60702078) and the Zhejiang Province Science and Technology Foundation (No. 2010C33162).

References and links

1. Y. Yitzhaky, I. Mor, A. Lantzman, and N. S. Kopeika, “Direct method for restoration of motion-blurred images,” J. Opt. Soc. Am. A 15(6), 1512–1519 (1998). [CrossRef]
2. G. Hochman, Y. Yitzhaky, N. S. Kopeika, Y. Lauber, M. Citroen, and A. Stern, “Restoration of images captured by a staggered time delay and integration camera in the presence of mechanical vibrations,” Appl. Opt. 43(22), 4345–4354 (2004). [CrossRef] [PubMed]
3. B. Golik and D. Wueller, “Measurement method for image stabilizing systems,” Proc. SPIE 6502, 65020O, 65020O–10 (2007). [CrossRef]
4. H. Choi, J.-P. Kim, M.-G. Song, W. C. Kim, N. C. Park, Y. P. Park, and K. S. Park, “Effects of motion of an imaging system and optical image stabilizer on the modulation transfer function,” Opt. Express 16(25), 21132–21141 (2008). [CrossRef] [PubMed]
5. B. Likhterov and N. S. Kopeika, “Motion-blurred image restoration using modified inverse all-pole filters,” J. Electron. Imaging 13(2), 257–263 (2004). [CrossRef]
6. S. Prasad, “Statistical-information-based performance criteria for Richardson-Lucy image deblurring,” J. Opt. Soc. Am. A 19(7), 1286–1296 (2002). [CrossRef]
7. Y. Tian, C. Rao, L. Zhu, and X. Rao, “Modified self-deconvolution restoration algorithm for adaptive-optics solar images,” Opt. Lett. 35(15), 2541–2543 (2010). [CrossRef] [PubMed]
8. V. Loyev and Y. Yitzhaky, “Initialization of iterative parametric algorithms for blind deconvolution of motion-blurred images,” Appl. Opt. 45(11), 2444–2452 (2006). [CrossRef] [PubMed]
9. J. Zhang, Q. Zhang, and G. He, “Blind deconvolution of a noisy degraded image,” Appl. Opt. 48(12), 2350–2355 (2009). [CrossRef] [PubMed]
10. C. W. Chiu, P. C.-P. Chao, and D. Y. Wu, “Optimal design of magnetically actuated optical image stabilizer mechanism for cameras in mobile phones via genetic algorithm,” IEEE Trans. Magn. 43(6), 2582–2584 (2007). [CrossRef]
11. J. F. Barrera, C. Vargas, M. Tebaldi, R. Torroba, and N. Bolognini, “Known-plaintext attack on a joint transform correlator encrypting system,” Opt. Lett. 35(21), 3553–3555 (2010). [CrossRef] [PubMed]
12. H. T. Chang and C. T. T. Chen, “Enhanced optical image verification based on Joint Transform Correlator adopting Fourier hologram,” Opt. Rev. 11(3), 165–169 (2004). [CrossRef]
13. A. R. Alsamman, “Spatially efficient reference phase-encrypted joint transform correlator,” Appl. Opt. 49(10), B104–B110 (2010). [CrossRef] [PubMed]
14. J. A. Butt and T. D. Wilkinson, “Binary phase only filters for rotation and scale invariant pattern recognition with the joint transform correlator,” Opt. Commun. 262(1), 17–26 (2006). [CrossRef]
15. J. Widjaja, “Wavelet filter for improving detection performance of compression-based joint transform correlator,” Appl. Opt. 49(30), 5768–5776 (2010). [CrossRef] [PubMed]
16. O. Hadar, I. Dror, and N. S. Kopeika, “Numerical calculation of image motion and vibration modulation transfer functions-a new method,” Proc. SPIE 1533, 61–74 (1991). [CrossRef]

OCIS Codes
(100.0100) Image processing : Image processing
(300.0300) Spectroscopy : Spectroscopy

ToC Category:
Image Processing

History
Original Manuscript: February 8, 2011
Revised Manuscript: March 25, 2011
Manuscript Accepted: May 1, 2011
Published: May 18, 2011

Citation
Yixian Qian, Yong Li, Jie Shao, and Hua Miao, "Real-time image stabilization for arbitrary motion blurred image based on opto-electronic hybrid joint transform correlator," Opt. Express 19, 10762-10768 (2011)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-19-11-10762



