Optics Express

  • Editor: Andrew M. Weiner
  • Vol. 22, Iss. 15 — Jul. 28, 2014
  • pp: 17897–17907

Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel

Yujiao Chen, Xiaorui Wang, Jianlei Zhang, Shuo Yu, Qiping Zhang, and Bingtao Guo  »View Author Affiliations


http://dx.doi.org/10.1364/OE.22.017897


Abstract

Because of the limited pixel count and large pixel size of a common display panel, a captured elemental image (EI) array with a high pixel density cannot be fully reconstructed in the display stage of integral imaging, owing to the requirement that the display match the capture. To solve this problem, this paper presents a novel approach that improves integral imaging resolution by placing a coded sub-pixel mask on a common display panel. Specifically, multiple pixels of the captured EIs are displayed within one pixel of the panel by time multiplexing, with the corresponding apertures of the coded sub-pixel mask switched on and off periodically, so that the resolution of the reconstructed image is determined by the coded aperture size of the mask rather than by the pixel size of the panel. The mapping between the displayed pixel and the position of the switched-on aperture of the coded sub-pixel mask is then established theoretically. Computational reconstruction and optical experimental results show that this method can match the pixel number of the captured EIs with that of the display panel, and that the resolution of integral imaging can be improved significantly.

© 2014 Optical Society of America

1. Introduction

A typical integral imaging system consists of two parts, a recording unit and a display unit, as shown in Fig. 1. In the recording process, the elemental images (EIs), which record 3D information from different perspectives, are captured by a camera with a lenslet array or by a camera array. In the display process, the captured EIs are shown on a display panel and a 3D image is reconstructed through a lenslet array. Three-dimensional (3D) integral imaging has attracted many researchers because of its advantages, such as full parallax, continuous viewing points, and operation with incoherent light [1–8]. For practical application, however, integral imaging still has limitations to be overcome, such as low resolution and limited depth of focus.

Fig. 1 Principle of integral imaging.

The resolution of the reconstructed image is determined by many system parameters, for example, the size and pitch of the lenslets and the resolutions of the CCD and the display device [1, 2]. That is to say, the inherent size of the components sets an upper limit on the resolution of an integral imaging system. Many studies have addressed this limitation [3–8]. The moving array lenslet technique (MALT) [3] and synthetic aperture integral imaging (SAII) [4] improve the spatial sampling frequency to overcome the Nyquist upper limit on resolution. Another resolution-enhancement method, which rotates a pair of prism sheets in front of the lenslet array instead of moving the lenslet array itself, has also been reported [5]. All of these methods, however, require rapid mechanical movement, which causes vibration and noise. An electrically controllable pinhole array has been proposed to improve the resolution [6], but it, too, presents practical implementation difficulties. The intermediate-view reconstruction (IVR) technique [7] and microscanning (MS) of the lenslet array [8] improve the resolution of the reconstructed image by increasing the number of EIs and the resolution of the EIs, respectively. Meanwhile, the pixel count of typical capture devices is now much higher than that of display devices; for example, a Canon 5D camera records 4368 × 2912 pixels, whereas a Huawei Ascend D2-2010 mobile phone displays 1080 × 1920 pixels. To match the pixel number of the EIs with that of the display device, Jang et al. [9] proposed a spatiotemporally multiplexed integral imaging projector for large-scale high-resolution three-dimensional display. In this method, the entire set of high-resolution EIs is spatially divided into small image subsets, which are then projected simultaneously onto the corresponding lenslet-array positions using multiple display devices. However, the imaging system is very complicated.

Thus, in this paper, a sub-pixel coding method is reported to overcome the resolution limit imposed by the pixel size of a common display panel. Here, we place a coded sub-pixel mask over each R, G, or B sub-pixel of the display panel. The concept is similar to the focal-plane coding method used to improve optical imaging resolution [10, 11]. Specifically, by placing a coded aperture mask on the focal plane of the optical system, the image sensor obtains high-resolution sampling from a fraction of each CCD pixel. Inspired by this method, we can likewise obtain high-resolution integral imaging reconstruction from a fraction of each pixel of a common, limited-pixel-count display panel by placing a coded sub-pixel mask on it. Because the eye exhibits the afterimage effect, we can display many pixels of slightly different perspectives by changing the coding mode of the coded sub-pixel mask and the displayed image within the eye's response time. In the following sections, we describe the principle of the sub-pixel coding method, and experimental results are provided to verify its validity.

2. Proposed sub-pixel coding method

Assume that the pixel number of the display panel is M × N and that the pixel number of the captured EIs is Mele × Nele, which is n × n times that of the display panel. That is,
Mele = n × M,   Nele = n × N
(1)
where n is an integer no less than 1. If the captured EIs are displayed on the display panel directly, they must be zoomed out to match the pixel number of the panel, so the high-frequency information of the EIs is lost. To overcome this problem, one pixel of the display panel must display n × n pixels of the captured EIs, as shown in Fig. 2(a). In Fig. 2, the small black squares denote the multiple EI pixels to be displayed in one display-panel pixel, which is shown as the large purple square.

Fig. 2 (a) Nine pixels in EIs displayed in one pixel in the common display panel; (b) Regular arrangement of the RGB sub-pixels in the display panel; (c) Sub-pixels arrangement corresponding to (a).

A display device expresses a color image through the combination of red (R), green (G), and blue (B) sub-pixels arranged in a primary-color pattern such as triangular, stripe, or diagonal. The regular stripe-type arrangement of color pixels is shown in Fig. 2(b). In this paper the RGB arrangement of the display panel is stripe type, but it is not difficult to extend our approach to other panel types. Thus, to display n × n EI pixels in one display-panel pixel, the RGB sub-pixels of these pixels must be arranged as shown in Fig. 2(c). Generally speaking, the R, G, and B sub-pixels are each first divided into n × n parts. Then the RGB sub-pixels of the pixel in the ith-row, jth-column part of Fig. 2(a) are displayed in the ith-row, jth-column parts of the RGB sub-pixels of Fig. 2(c), respectively.

At any one time, however, only one of these multiple EI pixels can be displayed in one display-panel pixel. In this paper, we first divide the captured EIs into multiple new images and design the corresponding coded sub-pixel mask, which is composed of multiple small apertures, each addressing a fraction of a display-panel pixel; we then display these images by time multiplexing with the coded sub-pixel mask placed on the display panel, as shown in Fig. 3.

Fig. 3 Sub-pixel coding in the display panel.

Figure 4(a) shows the captured EIs, whose pixel number is much higher than that of the display panel, and Fig. 4(b) illustrates an image to be displayed on the panel. To display n × n pixels in one display-panel pixel, n × n new images, as shown in Fig. 5, are first formed by selecting the corresponding pixels of the EIs, and the coded sub-pixel mask contains n × n apertures for each R, G, or B sub-pixel of the display panel, as shown in Fig. 6(a). These apertures can be switched on and off dynamically, so each R, G, or B sub-pixel is divided into n × n fractions. We say that the coded sub-pixel mask has different coding modes, corresponding to the different apertures switched on or off for each R, G, or B sub-pixel. In the captured EIs, the pixel in the ith row and jth column is denoted Iele(i, j). Similarly, in the uth-row, vth-column image of Fig. 5, the pixel in the ith row and jth column is denoted Iu,v(i, j).

Fig. 4 (a) Captured EIs; (b) a new image displayed on the display panel at t1.
Fig. 5 Nine new images which are formed by pixels selected from the corresponding position in Fig. 4(a).
Fig. 6 (a) Coded sub-pixel mask for each R, G or B sub-pixel; (b) (i) Different sub-pixel coding modes corresponding to a single pixel at different times, respectively.

The uth-row, vth-column image of Fig. 5 can be obtained by
Iu,v(i, j) = Iele(u + (i − 1)n, v + (j − 1)n)
(2)

Then the new images of Fig. 5 are displayed on the display panel sequentially by changing the coding mode of the coded sub-pixel mask. At time tm, the uth-row, vth-column image Iu,v of Fig. 5 is displayed on the display panel, where
m = (u − 1)n + v
(3)
At the same time, the uth-row, vth-column aperture of Fig. 6(a) is switched on and all other apertures are switched off. The time over which the n × n new images are displayed sequentially is denoted the period T; that is, T is the interval from t1 to tn·n. Finally, the process repeats: t1 → t2 → ··· → tn·n → t1.
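The decomposition of Eq. (2) and the display schedule of Eq. (3) can be sketched in a few lines of Python (a minimal illustration with hypothetical function names, assuming the EIs are stored as a single grayscale NumPy array):

```python
import numpy as np

def decompose(eis, n):
    """Eq. (2): split the high-resolution EI array into n*n sub-sampled
    images; image (u, v) keeps every n-th pixel starting at row u, col v
    (1-indexed, as in the paper)."""
    return {(u, v): eis[u - 1::n, v - 1::n]
            for u in range(1, n + 1) for v in range(1, n + 1)}

def display_slot(u, v, n):
    """Eq. (3): time slot m at which image I_{u,v} is shown."""
    return (u - 1) * n + v

# With n = 3, a 6x6 EI array yields nine 2x2 sub-images shown at t1..t9.
eis = np.arange(36).reshape(6, 6)
subs = decompose(eis, 3)
print(subs[(1, 1)].shape)     # (2, 2)
print(display_slot(3, 3, 3))  # 9
```

Each sub-image samples the same scene on an offset grid, which is why cycling through all n × n of them within the eye's response time restores the full EI resolution.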

Specifically, when n = 3, Figs. 6(b)–6(j) give nine 9 × 9 coded sub-pixel mask arrays, corresponding to the RGB sub-pixels of one display-panel pixel with a 3 × 3 mask sub-array for each sub-pixel. The white and black areas denote apertures that are switched on and off, respectively; light can pass only through the switched-on apertures of the coded sub-pixel mask. When different images of Fig. 5 are displayed on the panel, the coding mode varies with the displayed image, but the coding mode of every sub-pixel within the same image is identical. At time t1, I1,1 is displayed on the panel and the corresponding sub-pixel coding mode is illustrated in Fig. 6(b): the upper-left aperture of the coded sub-pixel mask for each sub-pixel is switched on. Thus only the light from the upper-left part of each sub-pixel can pass through the mask, as shown in Fig. 3. That is to say, at t1, only the part marked “1” in Fig. 2(c) is bright and the other parts are dark. In the same way, the parts marked “2” to “9” in Fig. 2(c) are bright sequentially at t2–t9, respectively, and the process then repeats continuously.

It is worth noting that the angle subtended at the observer's eye by one pixel of a common display panel must be smaller than the angular resolution of the human eye. The eye therefore cannot distinguish the bright and dark areas within a pixel, and the observer sees only a light square of reduced brightness. When the period T is shorter than the response time of the eye, it appears to the observer that the multiple pixels of the captured EIs are displayed in one display-panel pixel simultaneously. Therefore, a pixel of a common display panel is divided into n × n parts, and the size of each part, which equals the size of three apertures of the coded sub-pixel mask, is one nth that of a display-panel pixel as seen by the observer's eye.
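As a quick numeric sanity check (a sketch only: the 300 mm viewing distance is our assumption, not a value from the experiments), the angle subtended by one pixel of the phone panel used later (0.0577 mm) is well below the commonly quoted ~1 arcmin angular resolution of the eye:

```python
import math

pixel_size_mm = 0.0577        # display pixel size from the optical experiment
viewing_distance_mm = 300.0   # assumed viewing distance (not from the paper)

# Subtense of one pixel at the eye, converted to arcminutes
angle_arcmin = math.degrees(math.atan(pixel_size_mm / viewing_distance_mm)) * 60
print(round(angle_arcmin, 2))  # ~0.66 arcmin, below the ~1 arcmin eye limit
```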

It is well known that, neglecting the diffraction caused by the lenslet array, the resolution of the reconstructed image can be approximately expressed as [12]
RI = g / (l · PX)
(4)
where RI is the reconstructed image resolution, PX is the pixel size of the display panel, g is the gap between the display panel and the lenslet array, and l is the position of the central depth plane, as shown in Fig. 1(b). According to Eq. (4), the reconstructed image resolution is inversely proportional to the pixel size of the display panel. Thus, with the proposed method, it is the aperture size of the coded sub-pixel mask that determines the resolution of the reconstructed image, and the resolution can be improved n times.
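A short numeric sketch of Eq. (4) makes the n-fold gain explicit. The parameter values below are taken from the simulation in Section 3, and setting the gap g equal to the lenslet focal length (3.3 mm) is our assumption:

```python
# Eq. (4): R_I = g / (l * P_X); replacing P_X by the mask aperture P_X / n
# scales the resolution by exactly n.
g = 3.3          # gap between display panel and lenslet array, mm (assumed = focal length)
l = 70.0         # central depth plane position, mm
P_X = 0.09836    # display-panel pixel size, mm

R_I = g / (l * P_X)                # baseline resolution
for n in (1, 2, 3, 4):
    R_n = g / (l * (P_X / n))      # aperture size P_X / n replaces the pixel size
    print(n, round(R_n / R_I, 6))  # improvement factor equals n
```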

3. Experiment and discussion

To verify the effectiveness of the proposed method, computational experiments are performed for a test plane image, the upper-left part of the ISO 12233 resolution test chart shown in Fig. 7. The pixel size of the display panel is 0.09836 mm × 0.09836 mm. In this experiment, the lenslet array is composed of 50 × 50 lenslets. Each lenslet element is square with a uniform base size of 0.9836 mm × 0.9836 mm, so one lenslet covers 10 × 10 pixels of the display panel. The distance between the object and the lenslet array is 70 mm, and the focal length of each lenslet is 3.3 mm. The captured EIs are displayed on the display panel and reconstructed by the CIIR method.

Fig. 7 Test plane image.

The pixel number of each captured EI is 10 × 10, and the captured EIs are displayed on the display panel directly. The reconstructed image is shown in Fig. 8(a). To be consistent with the analysis below, we say that a coded sub-pixel mask with a 1 × 1 mask sub-array for each sub-pixel is used in this case. The reconstructed image is very blurred, and only the “2” in it can just be distinguished clearly.

Fig. 8 Reconstructed images for different coded sub-pixel masks with a : (a) 1 × 1; (b) 2 × 2; (c) 3 × 3 and (d) 4 × 4 mask sub-array for each sub-pixel, respectively.

If the resolution of each captured EI is (10·n) × (10·n) and the captured EIs are displayed on the display panel directly, the EIs must be zoomed out to match the pixel number of the panel. To make full use of the captured information, n × n new images are first formed by Eq. (2). These n × n new images are then displayed on the display panel by time multiplexing, with a coded sub-pixel mask having an n × n mask sub-array for each R, G, or B sub-pixel placed on the panel.

As shown in Fig. 8, the reconstructed images become increasingly clear as n grows. The number of line sets that can just be resolved in the reconstructed image is plotted in Fig. 9(a). Because the test plane image is a resolution test chart, the y coordinate expresses the resolution of the reconstructed image to some extent. To verify the theoretical analysis of our proposed method, we also calculate the factor by which the resolution is improved relative to the image in Fig. 8(a), as shown in Fig. 9(b). From Fig. 9, we note that the sub-pixel coding method greatly improves the resolution of the reconstructed image in an integral imaging system and that the quantitative results agree well with the theoretical analysis. The feasibility of the proposed method is thus verified.

Fig. 9 Resolution enhanced results for the reconstructed image based on our proposed method.

To verify the proposed method and the computer simulation results, an optical experiment is also carried out. An ultra-high-resolution liquid crystal (UHRLC) panel without backlighting can act as the coded sub-pixel mask: each pixel of the UHRLC panel represents an aperture of the mask, and a white/black pixel corresponds to the aperture being switched on/off. Furthermore, for an n × n coded sub-pixel mask, the resolution of the mask must be n × n times that of the display panel, and the frame rate of the display panel must be increased n × n times. In addition, the coding modes of the mask and the displayed images must change synchronously, so the frame rates of the mask and the display panel must be equal. However, owing to the limitations of our experimental conditions, no such UHRLC panel is available in our laboratory. In this paper, the optical experiment is therefore carried out on an equivalence principle: 6 × 6 adjacent pixels of the display panel act as one equivalent pixel of the displayed image, and the display panel itself also plays the role of “the coded sub-pixel mask.” The shutter time of the camera is set longer than the time over which all new images are displayed sequentially, to simulate the persistence of human vision. The experimental setup is shown in Fig. 10, and the pixels displayed on the panel at different times for n = 3 are illustrated in Fig. 11.

Fig. 10 Optical experimental setup.
Fig. 11 Pixels displayed on the display panel at different times (n = 3).

As shown in Fig. 10, we use the image “Lena” as the input scene; the EIs are captured computationally and reconstructed optically. The lenslet array is composed of 7 × 9 lenslets. The size and focal length of each lenslet are 7 mm × 5.4 mm and 41.9 mm, respectively. A Huawei Ascend D2-2010 mobile phone is used as the display panel, with a pixel size of 0.0577 mm × 0.0577 mm, so each lenslet covers approximately 121 × 94 pixels. A Canon PowerShot SX40 digital camera is used as a virtual observer to capture the reconstructed images.

Because 6 × 6 adjacent pixels of the display panel are equivalent to one pixel of the displayed image, the pixel number of each EI without the coded sub-pixel mask is 20 × 16, and the pixel number of each EI for the 2 × 2 and 3 × 3 coded sub-pixel masks is 40 × 32 and 60 × 48, respectively. Thus, for the 3 × 3 case, nine new images are first formed by Eq. (2), as shown in Fig. 5, and then displayed on the panel by time multiplexing, as shown in Fig. 11. The smallest gray square represents one pixel of the display panel, and the large purple square represents an equivalent pixel, which corresponds to one pixel of the displayed image. When n = 3, the “aperture” size of “the coded sub-pixel mask” equals the size of four display-panel pixels, shown by the small white square. At time t1, I1,1 of Fig. 5 is displayed on the panel; only the upper-left area of each purple square displays the image, and the other areas are black. In the same way, the remaining images are displayed sequentially at t2–t9, and the process repeats. To simulate the persistence of human vision, the shutter time of the camera is longer than the time over which the nine new images are displayed sequentially, and the reconstructed image is captured by the camera. For the 2 × 2 case, the reconstructed image is obtained in the same way.
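The on/off pattern of the equivalent-pixel experiment can be sketched as follows (our own illustration, not code from the paper: for n = 3, each 6 × 6-pixel equivalent pixel is divided into 2 × 2-pixel apertures, and only the (u, v)-th aperture is open during a given time slot):

```python
import numpy as np

def equivalent_mask(eq_rows, eq_cols, n, u, v, eq_size=6):
    """Binary on/off pattern for the display panel acting as 'the coded
    sub-pixel mask': each equivalent pixel is eq_size x eq_size panel
    pixels, and only its (u, v)-th aperture of (eq_size//n)^2 panel
    pixels is open at this time slot (u, v are 1-indexed)."""
    a = eq_size // n  # aperture edge in panel pixels (2 when n = 3)
    mask = np.zeros((eq_rows * eq_size, eq_cols * eq_size), dtype=bool)
    for r in range(eq_rows):
        for c in range(eq_cols):
            r0 = r * eq_size + (u - 1) * a
            c0 = c * eq_size + (v - 1) * a
            mask[r0:r0 + a, c0:c0 + a] = True
    return mask

# Time slot t1 (u = v = 1): only the upper-left 2x2 block of every
# 6x6 equivalent pixel is open.
m = equivalent_mask(2, 2, 3, 1, 1)
print(m.sum())  # 4 equivalent pixels * 4 open panel pixels = 16
```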

The optical reconstruction results for the different coded sub-pixel masks are shown in Fig. 12. The quality of the reconstructed image improves greatly, and more details are recovered, as n increases. At the same time, however, Fig. 12 shows granular noise caused by the wide pitch of the lenslet array, as well as some color distortion. As n increases, the aperture size of the coded sub-pixel mask becomes smaller, so the brightness of the displayed image decreases. Considering the imaging principle of a digital camera, color distortion may arise because the information of a pixel is the integration, over area and time, of the luminance information coming from the corresponding region of object space.

Fig. 12 Optical reconstruction results for different coded sub-pixel masks: (a) 1 × 1; (b) 2 × 2; (c) 3 × 3.

Although the optical experiment verifies the effectiveness of our proposed method, the experimental setup has some limitations. Using multiple display-panel pixels as one equivalent pixel decreases the resolution of the displayed image; the long shutter time may increase the influence of stray light; and the pitch of the lenslets is relatively large. In future work, to overcome these limitations, the frame rate of the display panel must be increased, and a high-resolution, high-frame-rate liquid crystal panel is needed. To optimize the system design, a more appropriate lenslet array should be selected, and an optical unit to eliminate stray light must be added to the experimental equipment.

4. Conclusion

In conclusion, we have presented a new method to improve the resolution of integral imaging by designing a coded sub-pixel mask for a common display panel. The proposed method solves both the mismatch between the pixel number of the captured EIs and that of a common display panel and the low viewing resolution caused by the large display pixel size: the resolution of integral imaging is determined by the aperture size of the coded sub-pixel mask rather than by the pixel size of the display panel. Both computer simulations and optical experiments confirm the validity of the proposed method and show that the resolution of integral imaging can be improved significantly. Although a high-frame-rate display panel and small mask apertures are required, this method provides a new path to displaying a high-resolution integral image on a common display panel.

Acknowledgments

This work was supported by the National Natural Science Foundation of China (61007014, 61377007). We also express our sincere appreciation for the reviewers' valuable comments.

References and links

1. C. B. Burckhardt, “Optimum parameters and resolution limitation of integral photography,” J. Opt. Soc. Am. 58, 71–74 (1968). [CrossRef]
2. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, “Analysis of resolution limitation of integral photography,” J. Opt. Soc. Am. 15, 2059–2065 (1998). [CrossRef]
3. J. S. Jang and B. Javidi, “Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics,” Opt. Lett. 27, 324–326 (2002). [CrossRef]
4. J.-S. Jang and B. Javidi, “Three-dimensional synthetic aperture integral imaging,” Opt. Lett. 27, 1144–1146 (2002). [CrossRef]
5. H. Liao, T. Dohi, and M. Iwahara, “Improved viewing resolution of integral videography by use of rotated prism sheets,” Opt. Express 15, 4814–4822 (2007). [CrossRef] [PubMed]
6. Y. Kim, J. Kim, J.-M. Kang, J.-H. Jung, H. Choi, and B. Lee, “Point light source integral imaging with improved resolution and viewing angle by the use of electrically movable pinhole array,” Opt. Express 15, 18253–18267 (2007). [CrossRef] [PubMed]
7. D. C. Hwang, J. S. Park, S. C. Kim, D. H. Shin, and E. S. Kim, “Magnification of 3d reconstructed images in integral imaging using an intermediate-view reconstruction technique,” Appl. Opt. 45, 4631–4637 (2006). [CrossRef] [PubMed]
8. X. Wang and H. Hua, “Theoretical analysis for integral imaging performance based on microscanning of a microlens array,” Opt. Lett. 33, 449–451 (2008). [CrossRef] [PubMed]
9. J.-S. Jang, Y.-S. Oh, and B. Javidi, “Spatiotemporally multiplexed integral imaging projector for large-scale high-resolution three-dimensional display,” Opt. Express 12, 557–563 (2004). [CrossRef] [PubMed]
10. L. Xiao, K. Liu, D. Han, and J. Liu, “Focal plane coding method for high resolution infrared imaging,” Infrared Laser Eng. 40, 2065–2070 (2011).
11. L. Xiao, K. Liu, D. Han, and J. Liu, “A compressed sensing approach for enhancing infrared imaging resolution,” Opt. Laser Technol. 44, 2354–2360 (2012). [CrossRef]
12. S. W. Min, J. Kim, and B. Lee, “New characteristic equation of three-dimensional integral imaging system and its applications,” Jpn. J. Appl. Phys. 44, 71–74 (2005). [CrossRef]

OCIS Codes
(100.6890) Image processing : Three-dimensional image processing
(120.2040) Instrumentation, measurement, and metrology : Displays
(170.0110) Medical optics and biotechnology : Imaging systems
(170.3010) Medical optics and biotechnology : Image reconstruction techniques

ToC Category:
Imaging Systems

History
Original Manuscript: June 6, 2014
Revised Manuscript: July 2, 2014
Manuscript Accepted: July 4, 2014
Published: July 16, 2014

Citation
Yujiao Chen, Xiaorui Wang, Jianlei Zhang, Shuo Yu, Qiping Zhang, and Bingtao Guo, "Resolution improvement of integral imaging based on time multiplexing sub-pixel coding method on common display panel," Opt. Express 22, 17897-17907 (2014)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-22-15-17897


