Image quality enhancement in 3D computational integral imaging by use of interpolation methods

Dong-Hak Shin and Hoon Yoo


Optics Express, Vol. 15, Issue 19, pp. 12039-12049 (2007)
http://dx.doi.org/10.1364/OE.15.012039


Abstract

In this paper, we propose a computational integral imaging reconstruction (CIIR) method that uses image interpolation algorithms to improve the visual quality of 3D reconstructed images. We investigate the characteristics of the conventional CIIR method as a function of the distance between the lenslet array and the objects and observe that the visual quality of the reconstructed images is periodically degraded; the experimentally observed period is half the size of an elemental image. To remedy this problem, we focus on interpolation methods in computational integral imaging. Several interpolation methods are applied to the conventional CIIR method and their performances are analyzed. To objectively evaluate the proposed CIIR method, we introduce an experimental framework for the computational pickup process and the CIIR process using a Gaussian function. We also carry out experiments on real objects to subjectively evaluate the proposed method. Experimental results indicate that our method outperforms the conventional CIIR method. In addition, our method reduces the grid noise from which the conventional CIIR method suffers.

© 2007 Optical Society of America

1. Introduction

Integral imaging has been one of the attractive autostereoscopic three-dimensional (3D) display techniques since it was proposed by Lippmann in 1908 [1-16]. It has attracted many researchers because of its various merits, such as full parallax, continuous viewing angle, and full-color display. In general, an integral imaging system consists of two parts: pickup and reconstruction. In the pickup part, the rays coming from a 3D object through a lenslet array are recorded as elemental images representing different perspectives of the 3D object. In the reconstruction part, there are two kinds of integral imaging reconstruction techniques: one is based on optical integral imaging reconstruction (OIIR) [1-9], and the other is based on computational integral imaging reconstruction (CIIR) [10-12]. In the OIIR techniques, the recorded elemental images are displayed on a display panel, and a 3D image is then reconstructed and observed optically through a lenslet array. However, the OIIR techniques produce low-resolution 3D images because of the insufficient number of elemental images. In addition, physical limitations of the optical devices degrade the image quality of the 3D images.

Recently, to overcome these drawbacks of the OIIR techniques and to extract voxel (volumetric pixel) information of 3D objects, a CIIR method has been introduced [10-12]. It has also been utilized for a recognition system [15], which uses an actual optical system for computational integral imaging (CII). The basic structure of a CII system is composed of an optical pickup process and a CIIR process. In the optical pickup process, elemental images are recorded by use of a lenslet array and an image sensor. In the CIIR process, the voxel information of 3D objects is digitally reconstructed from the elemental images by use of a computer, without optical devices. The extracted voxels can be used for 3D visualization, as in OIIR [12,13], and for object recognition using correlation methods to recognize occluded 3D objects [15,16].

The principle of CIIR, which is based on the pinhole-array model, is that 3D images are digitally reconstructed at the required output planes by superposition of all of the inversely mapped and magnified elemental images. The magnification factor increases in proportion to the distance of the required output plane. When a 3D object is recorded through a square-shaped lens array in the pickup process, the reconstructed images of CIIR exhibit intensity irregularities with grid noise, and the visual quality of the reconstructed images is therefore degraded. To solve this problem, several studies have been reported [13,14]. Among them, Hong and Javidi reported a method using a hybrid moving-lenslet-array technique, in which the intensities of the pixels of the reconstructed images are normalized by the number of overlapping elemental images and the resolution of the reconstructed images is increased by moving the lenslet array [13]. However, this is a sophisticated process that requires movement of the camera sensor, and its results were evaluated only visually, not objectively.

In this paper, we propose interpolation-based CIIR methods in which the elemental images are magnified by image interpolation algorithms; three algorithms are considered: zero-order interpolation, linear interpolation, and cubic convolution interpolation (CCI). To objectively evaluate the three methods, we introduce an experimental framework for the computational pickup and the CIIR process using a Gaussian function. An optical experiment is also carried out to subjectively evaluate the three methods. Experimental results indicate that the proposed interpolation-based CIIR methods improve the quality of the reconstructed images and reduce the grid noise from which the conventional CIIR method suffers.

2. Proposed CIIR Methods

2.1 The conventional CIIR method

Figure 1 shows the conventional CIIR method based on the pinhole-array model [12]. As shown in Fig. 1(a), a 3D object is recorded as elemental images through a lenslet array. In the computational reconstruction process shown in Fig. 1(b), the elemental images are digitally reconstructed by use of a computer, so that 3D images can be reconstructed at any output plane without optical devices. Each elemental image is inversely mapped onto the output plane through its pinhole, and the mapped elemental images are magnified by a factor of z/g, where z is the distance between the reconstruction output plane and the virtual pinhole array and g is the distance between the elemental images and the virtual pinhole array. The magnified elemental images overlap one another, and a reconstructed image is finally produced at the output plane located at distance z. Repeating this process while varying the z value provides a series of images along the z axis; this image set is the so-called CIIR image.

The simple magnification of elemental images used in the conventional CIIR method is shown in Fig. 2(a). The size of a single elemental image is 2×2 pixels and the magnification factor is z/g=2. Each pixel is simply magnified into 2×2 pixels, so the intensity values of these four pixels are equal. This is the zero-order interpolation algorithm of 2D image processing; therefore, the magnification process of the conventional CIIR is identical to zero-order interpolation.
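As an illustration (not the authors' code), the pixel-replication magnification of Fig. 2(a) can be written in a few lines of Python; the function name and the 2×2 example array are our own:

```python
import numpy as np

def magnify_zero_order(elemental_image: np.ndarray, m: int) -> np.ndarray:
    """Magnify an elemental image by an integer factor m with zero-order
    (nearest-neighbor) interpolation, i.e. simple pixel replication,
    as used by the conventional CIIR method."""
    return np.kron(elemental_image, np.ones((m, m), dtype=elemental_image.dtype))

# A 2x2 elemental image magnified by z/g = 2 becomes 4x4 pixels,
# each original pixel being replicated into a 2x2 block of equal intensity.
e = np.array([[10, 20],
              [30, 40]])
print(magnify_zero_order(e, 2))
```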

When a 3D object is recorded with a square-shaped lens array in the pickup process, as shown in Fig. 1(a), the reconstructed images of CIIR exhibit intensity irregularities with grid noise, and the image quality is therefore degraded. An example is shown in Fig. 2(b). The grid noise in the reconstructed image is caused by the square-shaped elemental images and the simple magnification mapping [12,13].

Fig. 1. Principle of the conventional CIIR method: (a) pickup, (b) display.
Fig. 2. (a) Simple magnification in the conventional CIIR. (b) Example of a reconstructed image with grid noise.

2.2 Proposed CIIR method by use of an image interpolation algorithm

To explain the interpolation technique, we describe the one-dimensional case; the extension to two dimensions is straightforward. Let f(x_k) be the sampled version of a continuous function f(x). The Shannon sampling theorem states that sinc interpolation perfectly reconstructs the continuous function f(x) from its samples f(x_k) if the sampling frequency is larger than twice the maximum frequency of f(x). The relationship between f(x) and its samples f(x_k) is represented in the form

f(x) = \sum_{k=0}^{N-1} f(x_k)\,\beta(x - x_k),
(1)

where β(x) is an interpolation kernel. The zero-order interpolation kernel is defined as

\beta_0(x) = \begin{cases} 1, & 0 \le |x| < 0.5, \\ 0, & \text{elsewhere}. \end{cases}
(2)

The linear interpolation kernel is defined as

\beta_1(x) = \begin{cases} 1 - |x|, & 0 \le |x| < 1, \\ 0, & \text{elsewhere}. \end{cases}
(3)

The CCI kernel is defined as

\beta_3(x) = \begin{cases} \tfrac{3}{2}|x|^3 - \tfrac{5}{2}|x|^2 + 1, & 0 \le |x| < 1, \\ -\tfrac{1}{2}|x|^3 + \tfrac{5}{2}|x|^2 - 4|x| + 2, & 1 \le |x| < 2, \\ 0, & \text{elsewhere}. \end{cases}
(4)

Note that the support sizes of the three interpolation kernels β0(x), β1(x), and β3(x) are one, two, and four, respectively. It can easily be seen that the complexity of interpolation increases as the support of the interpolation kernel increases. We apply linear interpolation and CCI to the conventional CIIR method as image magnification techniques.
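For reference, the three kernels of Eqs. (2)-(4) and the interpolation sum of Eq. (1) can be sketched in Python as follows (a minimal sketch assuming unit sample spacing; the function names are our own):

```python
import numpy as np

def beta0(x):
    """Zero-order (nearest-neighbor) kernel of Eq. (2); support size 1."""
    x = np.abs(x)
    return np.where(x < 0.5, 1.0, 0.0)

def beta1(x):
    """Linear interpolation kernel of Eq. (3); support size 2."""
    x = np.abs(x)
    return np.where(x < 1.0, 1.0 - x, 0.0)

def beta3(x):
    """Cubic convolution interpolation (CCI) kernel of Eq. (4); support size 4."""
    x = np.abs(x)
    near = 1.5 * x**3 - 2.5 * x**2 + 1.0              # 0 <= |x| < 1
    far = -0.5 * x**3 + 2.5 * x**2 - 4.0 * x + 2.0    # 1 <= |x| < 2
    return np.where(x < 1.0, near, np.where(x < 2.0, far, 0.0))

def interpolate_1d(samples, x, kernel):
    """Evaluate Eq. (1): f(x) = sum_k f(x_k) * beta(x - x_k), with x_k = k."""
    k = np.arange(len(samples))
    return float(np.sum(samples * kernel(x - k)))
```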

In CIIR, each elemental image superimposed on the reconstruction output plane is magnified by a factor of M=z/g. Thus, an elemental image of K×K pixels becomes MK×MK pixels after image interpolation. The magnified elemental images are overlapped with one another to reconstruct 3D images at the reconstruction output plane. To completely reconstruct a 3D plane image, the same process is performed for all of the elemental images through their corresponding pinholes. As shown in Fig. 3, the proposed method uses resolution-improved elemental images, obtained with an image interpolation algorithm, instead of the simply magnified ones of the conventional method, and it thereby improves the quality of the reconstructed 3D images.

Fig. 3. Process of proposed method
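The reconstruction step described above can be sketched as follows. This is an illustrative implementation under our own naming, not the authors' code: it uses scipy.ndimage.zoom for the magnification (its order-3 spline interpolation approximates, but is not identical to, the CCI kernel of Eq. (4)), and geometric details such as the inversion of each elemental image during the inverse mapping are omitted.

```python
import numpy as np
from scipy.ndimage import zoom  # order 0 = zero-order, 1 = linear, 3 = cubic spline

def ciir_reconstruct(elemental, pitch_px, magnification, order=3):
    """Superimpose magnified elemental images on one output plane.

    elemental     : array of shape (rows, cols, K, K) holding the elemental images
    pitch_px      : pinhole (lenslet) pitch on the output plane, in pixels
    magnification : M = z / g for the chosen output plane
    order         : interpolation order passed to scipy.ndimage.zoom
    """
    rows, cols, K, _ = elemental.shape
    MK = int(round(magnification * K))          # magnified elemental image size
    H = (rows - 1) * pitch_px + MK
    W = (cols - 1) * pitch_px + MK
    accum = np.zeros((H, W))
    count = np.zeros((H, W))
    for r in range(rows):
        for c in range(cols):
            mag = zoom(elemental[r, c].astype(float), MK / K, order=order)
            y, x = r * pitch_px, c * pitch_px
            accum[y:y + MK, x:x + MK] += mag    # superposition through the pinhole
            count[y:y + MK, x:x + MK] += 1
    return accum / np.maximum(count, 1)         # normalize by the overlap count
```

Setting order=0 reproduces the simple magnification of the conventional CIIR method, while order=1 and order=3 correspond to the proposed interpolation-based variants.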

3. Experiments and Results

3.1 Experiments using Gaussian function

In this paper, to numerically evaluate the proposed interpolation-based CIIR methods and to investigate the characteristics of the CIIR methods, we introduce a framework consisting of a computational pickup and a CIIR process using a one-dimensional (1D) Gaussian function, as shown in Fig. 4. In the pickup process, a 1D Gaussian function G(x) with N pixels, used as the continuous function f(x), is located at distance z. The Gaussian function used in this paper is defined by

G(x) = e^{-x^2/2}
(5)

The pinhole array used in this experimental setup is composed of 30 pinholes and is located at z=0 mm. The interval between pinholes is 1.08 mm, and the gap g between the elemental images and the pinhole array is 3 mm. The 1D elemental images are then computed by a computational pickup [9]. The picked-up 1D elemental images are used in the CIIR process with an interpolation algorithm: the elemental images are magnified by a factor of z/g using the interpolation algorithm and are superimposed on the reconstruction plane at z. Here we consider that the 1D elemental images are interpolated at distances z equal to multiples of g. Finally, the reconstructed function R(x_k) at z is obtained after superposition of all the elemental images. To objectively evaluate the quality of a reconstructed function (image), we calculate the mean square error (MSE), defined as

\mathrm{MSE} = \frac{1}{N}\sum_{k=1}^{N}\left|G(x_k) - R(x_k)\right|^2
(6)

For comparison, three kinds of interpolation algorithms were used: zero-order interpolation, linear interpolation, and CCI. By varying the z value, a series of R(x_k) is calculated and compared with the original function in terms of MSE.
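Assuming the 1D elemental images and a 1D reconstruction routine are available, the MSE of Eq. (6) takes only a few lines; the sweep shown in the comment is hypothetical and merely illustrates how MSE-versus-distance curves such as those in Fig. 5 could be produced:

```python
import numpy as np

def mse(original: np.ndarray, reconstructed: np.ndarray) -> float:
    """Mean square error of Eq. (6) between the original samples G(x_k)
    and the reconstructed samples R(x_k)."""
    return float(np.mean(np.abs(original - reconstructed) ** 2))

# Hypothetical sweep over reconstruction distances z = n * g for each kernel.
# `reconstruct_1d` stands for a 1D analogue of the CIIR sketch above and is
# not defined here.
# curves = {name: [mse(G, reconstruct_1d(elemental_1d, z=n * g, kernel=k))
#                  for n in range(2, 31)]
#           for name, k in {"zero": beta0, "linear": beta1, "cci": beta3}.items()}
```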

Fig. 4. Experimental structure for performance evaluation using the Gaussian function.

3.2 Analysis of Gaussian function test

Fig. 5. Comparison of MSE for the three interpolation algorithms. Round marks (blue line): zero-order interpolation. Diamond marks (red line): linear interpolation. Star marks (black line): CCI. (a) p=30, (b) p=40, (c) p=50, (d) p=60.
Fig. 6. Examples of reconstructed images when p=30: (a) z/g=14, (b) z/g=15. Dotted blue line: original Gaussian function. Dashed red line: reconstructed image.

3.3 Experiments using 3D objects

To show the usefulness of the proposed CIIR method, experiments on the reconstruction of 3D objects in a scene were performed. The experimental structure is shown in Fig. 7. In the experiment, 3D test objects composed of two patterns, 'tree' and 'car', are used. Each pattern has 1020×750 pixels. The 'tree' and 'car' patterns are longitudinally located at z=18 mm and z=45 mm, respectively.

To quantitatively estimate the viewing-quality enhancement of the reconstructed images obtained with the proposed method, the MSE was calculated between the original image and each reconstructed image. The calculated MSE values are presented in Fig. 9. For the three interpolation algorithms, the MSE values were computed for the two reconstructed images, respectively. From the MSE results, the quality of the image reconstructed using the CCI algorithm is improved by about 12% on average compared with that obtained using the zero-order interpolation algorithm. In particular, a large improvement was obtained for the 'car' image located at z/g=15, which is the same as the pixel number of an elemental image.

Fig. 7. Experimental structure.
Fig. 8. Images reconstructed at z=45 mm (z/g=15) using the three interpolation algorithms: (a) zero-order interpolation, (b) linear interpolation, (c) CCI.
Fig. 9. MSE results for the 3D objects.

3.4 Experiments using real 3D objects

Next, an experiment using elemental images of a real object obtained from an optical pickup was carried out, as shown in Fig. 10(a). The real test object is composed of two mark patterns, 'Mark1' and 'Mark2'. The 'Mark1' and 'Mark2' patterns are longitudinally located at z=30 mm and z=45 mm, respectively. The lenslet array with 34×25 lenslets is located at z=0 mm. Each lenslet size d is 1.08 mm, and a single elemental image is composed of 60×60 pixels. The elemental images obtained through the optical pickup are shown in Fig. 10(b).

Figure 11 shows the computationally reconstructed images at z=30 mm and 45 mm for the conventional method and the proposed method, respectively. Note the improvement in visual quality between the two sets of reconstructed images. In the result of the conventional method in Fig. 11(a), we can see intensity irregularities with a grid structure caused by the square-shaped mapping of the elemental images; this is why the visual quality of the 3D reconstructed image is degraded in the conventional method. However, the proposed method provides better results, as shown in Fig. 11(b). This is due to the superposition of resolution-improved elemental images, obtained by applying the CCI algorithm to each magnified elemental image.

Fig. 10. (a) Structure of the optical pickup. (b) Picked-up elemental images.
Fig. 11. Experiments by optical pickup. (a) Conventional CIIR method. (b) Proposed CIIR method.

4. Conclusions

References and links

1. G. Lippmann, "La photographie intégrale," Comptes-Rendus Acad. Sci. 146, 446-451 (1908).
2. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598-1603 (1997).
3. B. Lee, S. Jung, and J.-H. Park, "Viewing-angle-enhanced integral imaging by lens switching," Opt. Lett. 27, 818-820 (2002).
4. J.-S. Jang and B. Javidi, "Formation of orthoscopic three-dimensional real images in direct pickup one-step integral imaging," Opt. Eng. 42, 1869-1870 (2003).
5. A. Stern and B. Javidi, "Three-dimensional image sensing and reconstruction with time-division multiplexed computational integral imaging," Appl. Opt. 42, 7036-7042 (2003).
6. D.-H. Shin, M. Cho, and E.-S. Kim, "Computational implementation of asymmetric integral imaging by use of two crossed lenticular sheets," ETRI Journal 27, 289-293 (2005).
7. M. Martínez-Corral, B. Javidi, R. Martínez-Cuenca, and G. Saavedra, "Integral imaging with improved depth of field by use of amplitude modulated microlens array," Appl. Opt. 43, 5806-5813 (2004).
8. J.-H. Park, J. Kim, Y. Kim, and B. Lee, "Resolution-enhanced three-dimension/two-dimension convertible display based on integral imaging," Opt. Express 13, 1875-1884 (2005).
9. D.-H. Shin, B. Lee, and E.-S. Kim, "Multi-direction-curved integral imaging with large depth by additional use of a large-aperture lens," Appl. Opt. 45, 7375-7381 (2006).
10. H. Arimoto and B. Javidi, "Integral three-dimensional imaging with digital reconstruction," Opt. Lett. 26, 157-159 (2001).
11. Y. Frauel and B. Javidi, "Digital three-dimensional image correlation by use of computer-reconstructed integral imaging," Appl. Opt. 41, 5488-5496 (2002).
12. S.-H. Hong, J.-S. Jang, and B. Javidi, "Three-dimensional volumetric object reconstruction using computational integral imaging," Opt. Express 12, 483-491 (2004).
13. S.-H. Hong and B. Javidi, "Improved resolution 3D object reconstruction using computational integral imaging with time multiplexing," Opt. Express 12, 4579-4588 (2004).
14. J.-S. Park, D.-C. Hwang, D.-H. Shin, and E.-S. Kim, "Resolution-enhanced computational integral imaging reconstruction using intermediate-view reconstruction technique," Opt. Eng. 45, 117004 (2006).
15. S.-H. Hong and B. Javidi, "Three-dimensional visualization of partially occluded objects using integral imaging," J. Display Technol. 1, 354 (2005).
16. B. Javidi, R. Ponce-Diaz, and S.-H. Hong, "Three-dimensional recognition of occluded objects using volumetric reconstruction," Opt. Lett. 31, 1106-1108 (2006).
17. W. K. Pratt, Digital Image Processing (Wiley, New York, 1991).
18. E. Meijering, "A chronology of interpolation: From ancient astronomy to modern signal and image processing," Proc. IEEE 90, 319-342 (2002).
19. T. Blu, P. Thévenaz, and M. Unser, "Linear interpolation revitalized," IEEE Trans. Image Process. 13, 710-719 (2004).
20. H. Yoo, "Closed-form least-squares technique for adaptive linear image interpolation," Electron. Lett. 43, 210-212 (2007).
21. R. G. Keys, "Cubic convolution interpolation for digital image processing," IEEE Trans. Acoust. Speech Signal Process. 29, 1153-1160 (1981).

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(110.2990) Imaging systems : Image formation theory

ToC Category:
Image Processing

History
Original Manuscript: June 11, 2007
Revised Manuscript: August 27, 2007
Manuscript Accepted: August 28, 2007
Published: September 6, 2007

