## Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators

Optics Express, Vol. 21, Issue 4, pp. 4796-4810 (2013)

http://dx.doi.org/10.1364/OE.21.004796


### Abstract

Most image sensors are planar, opaque, and inflexible. We present a novel image sensor that is based on a luminescent concentrator (LC) film which absorbs light from a specific portion of the spectrum. The absorbed light is re-emitted at a lower frequency and transported to the edges of the LC by total internal reflection. The light transport is measured at the border of the film by line scan cameras. With these measurements, images that are focused onto the LC surface can be reconstructed. Thus, our image sensor is fully transparent, flexible, scalable and, due to its low cost, potentially disposable.

© 2013 OSA

## 1. Introduction


## 2. Light transport within luminescent concentrators


where *n* is the refractive index. For example, TIR occurs at angles greater than 39.2 degrees in an LC made of polycarbonate. The solid angles above and below a fluorescent particle in which TIR does not occur are cone-shaped. For a planar LC with refractive index *n*, the fraction of luminescence *P* that is lost through these cones is given by

*P* = 1 − √(1 − 1/*n*²).

For an LC film made of polycarbonate (*n* = 1.58), the loss is approximately 22.6%.
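Both quantities follow directly from the refractive index; a quick numeric check (values for polycarbonate):

```python
import math

def critical_angle_deg(n):
    """Critical angle for total internal reflection, arcsin(1/n), in degrees."""
    return math.degrees(math.asin(1.0 / n))

def cone_loss(n):
    """Fraction of isotropically emitted luminescence that escapes through
    the two cone-shaped solid angles where TIR does not occur."""
    return 1.0 - math.sqrt(1.0 - 1.0 / n**2)

print(round(critical_angle_deg(1.58), 1))  # 39.3
print(round(cone_loss(1.58), 3))           # 0.226
```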

The attenuation of the transported light is described by

*I* = *I*_{0} *e*^{−*μd*},

where *I* is the intensity leaving the material, *I*_{0} is the intensity entering the material, *μ* is the attenuation coefficient, which is constant along the transport path, and *d* is the length of the transport path.
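This exponential attenuation is easy to evaluate directly; a minimal sketch with illustrative values for *μ* and *d* (not taken from the paper):

```python
import math

def transmitted_intensity(i0, mu, d):
    """I = I0 * exp(-mu * d): intensity after transport over path length d
    with a constant attenuation coefficient mu."""
    return i0 * math.exp(-mu * d)

# Illustrative values: mu = 0.01 per mm over a 100 mm transport path.
print(round(transmitted_intensity(1.0, 0.01, 100.0), 4))  # 0.3679
```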

## 3. Measuring light transport by sampling a 2D light field

The LC surface is subdivided into *n* × *m* = *l* discrete entrance points (i.e., pixels). The amount of transported light at the four edges of the LC sheet is measured with CIS (contact image sensor) line scan cameras. Each line scan camera consists of a single array of photosensors.

The relationship between the light entering at the *l* discrete pixels (*p*) on the LC surface and the total of *k* photosensors (*s*) at the edges of the LC sheet can be represented by

*s⃗* = *T* *p⃗* + *e⃗*,

where *s⃗* is the *k*-dimensional column vector of all photosensor responses, *p⃗* the *l*-dimensional column vector of all pixel intensities, and *T* the *k* × *l*-dimensional light-transport matrix of the LC. Here, *e⃗* is the *k*-dimensional column vector of the constant ambient-light contribution that is additionally transported to the photosensors (including the sensors' constant noise level).

Computing *T* analytically as explained in section 2 would require precise knowledge of the LC's internal (and potentially imperfect) structure and shape at each location, which is practically impossible. Instead, we measure *T* as part of a one-time calibration procedure: projecting a single light impulse onto one pixel *p⃗*_{i} enables simultaneous measurement of the *i*-th column of *T*, which equals the sensor responses *s⃗* under the impulse illumination at pixel *p⃗*_{i}. Repeating this for all pixels *p⃗*_{i}, with 1 ≤ *i* ≤ *l*, yields all coefficients of *T*. Note that the photosensor response has to be linear; thus, the line scan cameras must initially be linearized. Furthermore, the ambient-light contribution *e⃗* must be measured and subtracted from *s⃗* when the matrix coefficients are sampled. The measurement of *e⃗* is part of the calibration process and must be repeated if the ambient light changes significantly over time. The transport matrix remains constant as long as the shape of the LC is not changed; otherwise, *T* must be re-calibrated.
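The impulse-based calibration can be prototyped in a few lines; here the projector and line scan cameras are replaced by a simulated forward model (a hypothetical stand-in, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(1)
l, k = 16, 64
T_true = rng.random((k, l))          # unknown transport matrix (simulated here)
e = np.full(k, 0.05)                 # ambient-light contribution

def measure(p):
    """Stand-in for projecting pattern p and reading the line scan cameras."""
    return T_true @ p + e

# One-time calibration: an impulse at pixel i yields the i-th column of T.
e_measured = measure(np.zeros(l))            # dark measurement of e
T = np.empty((k, l))
for i in range(l):
    impulse = np.zeros(l)
    impulse[i] = 1.0
    T[:, i] = measure(impulse) - e_measured  # subtract ambient light

print(np.allclose(T, T_true))                # True
```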

The measurements can alternatively be interpreted as a two-dimensional light field *L*(*x*, *ϕ*) that describes the amount of light being transported within the LC film towards each discrete position *x* at the LC edges, from each discrete direction *ϕ*. In this case, the light-transport matrix used for image reconstruction in Eq. (5) becomes sparse and its condition number is reduced. Further, more positional and directional samples are available for an alternative tomographic image reconstruction.

## 4. Image reconstruction

The goal of image reconstruction is to recover the pixel vector *p⃗*. By determining the (pseudo-)inverse light-transport matrix *T*^{−1}, a direct solution for *p⃗* can be found, as Eq. (5) explains. However, the inverse cannot be calculated for every matrix and, even if it can, the solution is not very robust against noise. Other methods, such as QR decomposition (QRD), singular value decomposition (SVD), biconjugate gradients stabilized (BiCGStab), and non-negative least squares (NNLS), yield better solutions in the presence of noise. In our experiments, we found BiCGStab and NNLS to be most robust when compared to QRD, SVD, and the pseudo-inverse of *T*. Section 7 discusses this in more detail.
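Both favored solvers are available off the shelf, e.g. in SciPy; below is a small synthetic example (note that BiCGStab expects a square system, so the normal equations are used here as one possible formulation, which is an assumption of this sketch):

```python
import numpy as np
from scipy.optimize import nnls
from scipy.sparse.linalg import bicgstab

rng = np.random.default_rng(2)
k, l = 64, 16
T = rng.random((k, l))              # stand-in light-transport matrix
p_true = rng.random(l)
s = T @ p_true                      # ambient light e already subtracted

# Non-negative least squares: min ||T p - s|| subject to p >= 0.
p_nnls, _ = nnls(T, s)

# BiCGStab on the (square) normal equations T^T T p = T^T s.
p_bicg, info = bicgstab(T.T @ T, T.T @ s, atol=1e-10)

print(info)                         # 0 indicates convergence
```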

The sampled light field *L*(*x*, *ϕ*) over varying directions *ϕ*, as illustrated in Fig. 3(c), corresponds to a Radon transform, and tomographic image reconstruction can therefore be accomplished using a backprojection technique that enables fast and robust reconstruction of higher-resolution images. Whereas a single row of *T* represents the contribution of all image pixels to one photosensor, a single column of *T* represents the contribution of one image pixel to all photosensors. Thus, the tomographic back-projection operator is, in principle, equivalent to the transpose of the light-transport matrix, and tomographic image reconstruction with backprojection corresponds to

*p⃗* = *T*^{T} (*s⃗* − *e⃗*),

where the transpose of *T* is multiplied with the measured photosensor values (without the ambient-light contribution).
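In matrix form the backprojection step is a single transposed product; a minimal sketch with stand-in data:

```python
import numpy as np

rng = np.random.default_rng(3)
k, l = 64, 16
T = rng.random((k, l))            # stand-in light-transport matrix
s = rng.random(k)                 # measured photosensor values
e = np.full(k, 0.05)              # ambient-light contribution

# Tomographic backprojection: apply the transpose of the transport matrix
# to the ambient-corrected measurements (unfiltered, hence blurred).
p_bp = T.T @ (s - e)
print(p_bp.shape)                 # (16,)
```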

The algebraic reconstruction technique (ART) starts with an initial guess *p*, which is projected orthogonally onto the first hyperplane (the first equation) of the linear system *y* = *Tx*, resulting in an updated solution for *p*. This process is repeated for the remaining equations of the system, which yields a solution vector *p* that approximates the overall solution. One iterative step of ART is repeated *n* times, each time using the solution vector *p* of the previous iteration as the initial guess. We apply a faster variant of ART called the simultaneous algebraic reconstruction technique (SART) [13]. Instead of updating *p* sequentially for each equation, SART calculates *p* simultaneously for all equations of the linear system within one iteration.
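A compact, unoptimized SART iteration can be sketched as follows (assuming a nonnegative transport matrix; for simplicity the initial guess is zero, and the stand-in problem sizes are illustrative):

```python
import numpy as np

def sart(A, y, iters=10000, relax=1.0):
    """Simultaneous algebraic reconstruction technique: unlike ART, every
    equation of y = A x contributes to each update simultaneously."""
    row_sums = A.sum(axis=1)              # per-equation normalization
    col_sums = A.sum(axis=0)              # per-unknown normalization
    x = np.zeros(A.shape[1])              # simple initial guess
    for _ in range(iters):
        residual = (y - A @ x) / row_sums
        x = x + relax * (A.T @ residual) / col_sums
    return x

rng = np.random.default_rng(4)
A = rng.random((64, 16))                  # stand-in transport matrix
x_true = rng.random(16)
x_rec = sart(A, A @ x_true)
print(np.abs(x_rec - x_true).max() < 1e-2)
```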

## 5. Super-resolution imaging

To reconstruct an image at *h* × *h* resolution from multiple *l* × *l* image reconstructions, (*h*/*l*)² light-transport matrices must be calibrated.
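The calibration overhead grows quadratically with the resolution ratio; a one-line check with illustrative sizes (not taken from the paper):

```python
h, l = 64, 16                 # illustrative target and base resolutions
num_matrices = (h // l) ** 2  # (h/l)^2 light-transport matrices to calibrate
print(num_matrices)           # 16
```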

## 6. Experimental setup and implementation

Given the aperture width *a* and the distance *d* between photosensors and aperture, the optimal number *n* of triangular slits that surround the LC imaging area must be determined. The directional sampling can be refined by reducing *a* or increasing *d*; but, in order to retain a wide field of view for each triangular slit, a small aperture width is preferred over a large distance between sensor elements and aperture.

The aperture width *a* and the distances *d* and *w* define both the field of view *α* of a triangular slit and the integration area of a single photosensor. These parameters must be chosen such that a single photosensor captures the light of as few pixels as possible. At the same time, the whole LC surface area must be covered such that no pixel is omitted and each pixel is measured multiple times from different directions. In general, this requires a wide field of view *α* and a small aperture width *a*. However, there is no clear analytical correlation between the parameter values and the condition number of the resulting light-transport matrix (which defines its numerical stability).

We therefore optimize the design parameters (*a*, *d*, and the total number *n* of triangular slits per edge) by minimizing the condition number of the resulting light-transport matrix. For a given set of parameters, the light-transport matrix is simulated with the analytical light-transport calculations explained in section 2. The constraints considered in this optimization task are defined by the limitations of the fabrication process, the line scan cameras used, the constant size of the evaluated LCs, and the desired image resolution.

The fabrication process required a minimum aperture width *a* of 0.5 mm to avoid breakage, and the distance *d* was constrained to a range of 2 to 5 mm. The number *n* of triangular slits per edge was kept at no more than 1.5 times the desired image resolution (i.e., *n* = 1...24 for a desired image resolution of 16 × 16).

The optimized aperture width *a* is always the defined minimum, as it ensures the highest-resolution directional sampling. It should be noted that, without fabrication limitations, smaller aperture sizes would yield even smaller condition numbers. The optimal numbers *n* of triangular slits per edge were found to be 16 for the smaller sheet with a desired resolution of 16 × 16, and 32 for the larger sheet with a target resolution of 32 × 32. Thus, 54 photosensors were used for each triangular slit in both cases. The optimal distance *d* between aperture and photosensors was found to be 3.25 mm in both cases. Parameters for other configurations and target resolutions were determined analogously.
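The parameter search itself amounts to a small constrained grid search; the sketch below uses a random placeholder where the paper's analytical light-transport simulation from section 2 would go (shapes, grids, and the resulting "optimum" are purely illustrative):

```python
import numpy as np

def simulate_transport_matrix(a, d, n, rng):
    """Placeholder for the analytical light-transport simulation of
    section 2; only the matrix shape (54 sensors per slit, 16 x 16
    pixels) is meaningful here."""
    return rng.random((54 * n, 16 * 16))

rng = np.random.default_rng(5)
best = None
a = 0.5                                   # minimum fabricable aperture (mm)
for d in (2.0, 3.0, 4.0, 5.0):            # aperture-to-sensor distance (mm)
    for n in (8, 16, 24):                 # triangular slits per edge
        cond = np.linalg.cond(simulate_transport_matrix(a, d, n, rng))
        if best is None or cond < best[0]:
            best = (cond, a, d, n)
print(best[1:])                           # best (a, d, n) under this stand-in
```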

## 7. Results

Several of the evaluated reconstruction techniques failed due to the poor conditioning of *T* (QRD, SVD, PINV) or insufficient directional sampling (FB). Only SART, NNLS, and BiCGStab (without an overshooting number of iterations) provided reasonable reconstruction quality. For these techniques, Fig. 10 shows direct and super-resolution reconstruction results at different resolutions. Note that for SART we use an initial guess that is computed with a few (30) BiCGStab iterations; in the following, we refer to this as BiSART. NNLS and BiCGStab do not require an initial guess. We apply the structural similarity index (SSIM) [14] to assess the reconstruction quality.

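As a reference for what the index measures, a simplified single-window SSIM (the standard index of [14] averages this statistic over many local windows) can be written as:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    """Simplified single-window SSIM; the standard index averages this
    statistic over many local windows of the two images."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    num = (2 * mx * my + c1) * (2 * cov + c2)
    den = (mx**2 + my**2 + c1) * (x.var() + y.var() + c2)
    return num / den

rng = np.random.default_rng(6)
img = rng.random((16, 16))
print(round(global_ssim(img, img), 6))   # 1.0 for identical images
```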

## 8. Limitations

## 9. Future work and applications

- new forms of user interfaces, such as non-touch screens (i.e., graphical user interfaces that react to user input without the screen surface being touched), e.g., by recording and evaluating shadows cast on the sensor surface;
- novel lens-less imaging devices that record 4D light fields, as discussed in [5];
- wide-field-of-view imaging systems with low aberrations, as presented in [1];
- high-dynamic-range or multi-spectral extensions for conventional cameras, e.g., by mounting a stack of LC layers on top of a high-resolution CMOS or CCD sensor and recording and combining low- and high-resolution images at multiple exposures or spectral bands, as done, for example, in HDR and wide-color-gamut displays that combine a high-resolution LCD panel with a low-resolution LED backlight matrix [15];
- improved touch-sensing devices based on frustrated total internal reflection (FTIR) [16, 17], e.g., by enabling the recording of 2D light fields using arrays of triangular apertures within the light guides for improved image reconstruction, or by sandwiching our image sensor with an unmodified light guide to enable thin form factors (compared to common FTIR devices that apply regular cameras).

## Acknowledgments

## References and links

1. H. C. Ko, M. P. Stoykovich, J. Song, V. Malyarchuk, W. M. Choi, C. J. Yu, J. B. Geddes III, J. Xiao, S. Wang, Y. Huang, and J. A. Rogers, “A hemispherical electronic eye camera based on compressible silicon optoelectronics,” Nature **454**(7205), 748–753 (2008). [CrossRef] [PubMed]

2. T. N. Ng, W. S. Wong, M. L. Chabinyc, S. Sambandan, and R. A. Street, “Flexible image sensor array with bulk heterojunction organic photodiode,” Appl. Phys. Lett. **92**(21), 213303 (2008). [CrossRef]

3. G. Yu, J. Wang, J. McElvain, and A. J. Heeger, “Large-area, full-color image sensors made with semiconducting polymers,” Adv. Mater. **10**(17), 1431–1434 (1998). [CrossRef]

4. T. Someya, Y. Kato, S. Iba, Y. Noguchi, T. Sekitani, H. Kawaguchi, and T. Sakurai, “Integration of organic fets with organic photodiodes for a large area, flexible, and lightweight sheet image scanners,” IEEE T. Electron Dev. **52**(11), 2502–2511 (2005). [CrossRef]

5. A. F. Abouraddy, O. Shapira, M. Bayindir, J. Arnold, F. Sorin, D. S. Hinczewski, J. D. Joannopoulos, and Y. Fink, “Large-scale optical-field measurements with geometric fibre constructs,” Nature Mat. **5**(7), 532–536 (2006). [CrossRef]

6. R. Koeppe, A. Neulinger, P. Bartu, and S. Bauer, “Video-speed detection of the absolute position of a light point on a large-area photodetector based on luminescent waveguides,” Opt. Express **18**(3), 2209–2218 (2010). [CrossRef] [PubMed]

7. S. A. Evenson and A. H. Rawicz, “Thin-film luminescent concentrators for integrated devices,” Appl. Opt. **34**(31), 7231–7238 (1995). [CrossRef]

8. P. J. Jungwirth, I. S. Melnik, and A. H. Rawicz, “Position-sensitive receptive fields based on photoluminescent concentrators,” P. Soc. Photo-Opt. Ins. **3199**, 239–247 (1998).

9. I. S. Melnik and A. H. Rawicz, “Thin-film luminescent concentrators for position-sensitive devices,” Appl. Opt. **36**(34), 9025–9033 (1997). [CrossRef]

10. J. S. Batchelder, A. H. Zewail, and T. Cole, “Luminescent solar concentrators. 1: Theory of operation and techniques for performance evaluation,” Appl. Opt. **18**(18), 3090–3110 (1979). [CrossRef]

11. M. Slaney and A. Kak, *Principles of Computerized Tomographic Imaging* (IEEE Press, 1988).

12. G. T. Herman, *Fundamentals of Computerized Tomography: Image Reconstruction from Projections*, 2nd ed. (Springer Verlag, 2010).

13. A. H. Andersen and A. C. Kak, “Simultaneous algebraic reconstruction technique (SART): a superior implementation of the ART algorithm,” Ultrasonic Imaging **6**(1), 81–94 (1984). [CrossRef] [PubMed]

14. Z. Wang, A. C. Bovik, H. R. Sheikh, and E. P. Simoncelli, “Image quality assessment: From error visibility to structural similarity,” IEEE Trans. Image Process. **13**(4), 600–612 (2004). [CrossRef] [PubMed]

15. H. Seetzen, W. Heidrich, W. Stuerzlinger, G. Ward, L. Whitehead, M. Trentacoste, A. Ghosh, and A. Vorozcovs, “High dynamic range display systems,” ACM T. Graphic **23**(3), 760–768 (2004). [CrossRef]

16. J. Y. Han, “Low-cost multi-touch sensing through frustrated total internal reflection,” in Proceedings of the 18th annual ACM symposium on User interface software and technology (Association for Computing Machinery, New York, 2005), 115–118. [CrossRef]

17. J. Moeller and A. Kerne, “Scanning FTIR: unobtrusive optoelectronic multi-touch sensing through waveguide transmissivity imaging,” in Proceedings of the fourth international conference on Tangible, embedded, and embodied interaction (Association for Computing Machinery, New York, 2010), 73–76. [CrossRef]

**OCIS Codes**

(110.0110) Imaging systems : Imaging systems

(110.3010) Imaging systems : Image reconstruction techniques

**ToC Category:**

Imaging Systems

**History**

Original Manuscript: December 6, 2012

Revised Manuscript: February 8, 2013

Manuscript Accepted: February 11, 2013

Published: February 20, 2013

**Virtual Issues**

Vol. 8, Iss. 3 *Virtual Journal for Biomedical Optics*

**Citation**

Alexander Koppelhuber and Oliver Bimber, "Towards a transparent, flexible, scalable and disposable image sensor using thin-film luminescent concentrators," Opt. Express **21**, 4796-4810 (2013)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-21-4-4796

