Optics Express

  • Editor: C. Martijn de Sterke
  • Vol. 15, Iss. 9 — Apr. 30, 2007
  • pp: 5494–5503

Uniform illumination and rigorous electromagnetic simulations applied to CMOS image sensors

Jérôme Vaillant, Axel Crocherie, Flavien Hirigoyen, Adam Cadien, and James Pond


Optics Express, Vol. 15, Issue 9, pp. 5494-5503 (2007)
http://dx.doi.org/10.1364/OE.15.005494



Abstract

This paper describes a new methodology we have developed for the optical simulation of CMOS image sensors. Finite Difference Time Domain (FDTD) software is used to simulate light propagation and diffraction effects throughout the stack of dielectric layers. By using an incoherent summation of plane-wave sources together with Bloch periodic boundary conditions, this methodology allows not only the rigorous simulation of a diffuse-like source that reproduces real operating conditions, but also a significant gain in efficiency for 2D or 3D electromagnetic simulations. This paper presents a theoretical demonstration of the methodology as well as simulation results obtained with FDTD software from Lumerical Solutions.

© 2007 Optical Society of America

1. Introduction

The image-sensor market has experienced considerable growth over recent years, driven by the increasing demands of digital still and video cameras, security cameras, webcams, and especially mobile-phone cameras [1, 2]. Charge-coupled devices (CCDs) have historically been the dominant image-sensor technology. In recent years, however, Complementary Metal Oxide Semiconductor (CMOS) technology has shown competitive performance as well as many advantages in on-chip functionality, power consumption, pixel readout, and cost, and it has become the leading technology in the image-sensor industry.

The market trend is toward higher resolution (a larger number of pixels) while keeping sensors small, so pixel size and photodiode area (where photons are collected) shrink. Moreover, since the thickness of the interconnect layers scales down more slowly than the planar dimensions, light has to travel through an increasingly narrow “tunnel” to reach the photodiode, which emphasizes the problem of light collection (and of simulating it), especially at oblique incidence [3, 4].

To characterize a sensor and also to optimize its performance (quantum efficiency, crosstalk, angular response, etc.), for example by modifying the stack geometry, optical simulations are essential. At STMicroelectronics we previously developed ray-tracing-based simulations to optimize microlenses and photon collection inside pixels [5]. For smaller pixel sizes, however, diffraction effects can substantially affect light propagation and thus photon collection [6]. A ray-tracing description is then no longer sufficient, and a more fundamental description is needed to simulate these diffraction effects. We chose an electromagnetic simulation tool based on the Finite Difference Time Domain (FDTD) method [7, 8], available from Lumerical Solutions [9], to describe light propagation and photon collection inside the pixels. At present, no electromagnetic tool provides uniform illumination adapted to CMOS image sensor simulation, which makes it necessary to develop a diffuse light source compatible with periodic boundary conditions. The most efficient approach is to sum tilted plane waves incoherently. We demonstrate that this approach is equivalent to uniform illumination at the focal plane of the CMOS camera.

The paper is organized as follows: in Section 2 we describe the problem to be solved and the objective of our simulation methodology; in Section 3 we give a theoretical demonstration of the methodology; Section 4 presents simulation results; and Section 5 presents concluding remarks.

2. The simulation methodology

2.1 The problem: the light shape

A CMOS image sensor consists of an array of pixels on a silicon wafer, each containing a photodiode surrounded by readout circuitry. The ratio of the photodiode area to the whole pixel area is called the fill factor. The rest of the area is occupied by the transistors that collect, convert, and read out the photogenerated electrons.

Above the photodiode, several metal layers separated by dielectric layers form the interconnections between transistors. Color filters then allow color reconstruction, and microlenses are deposited on top of the pixels to focus the light on each photodiode and reduce optical losses in the stack (see Fig. 1). Finally, the dies are encapsulated in a module in which an integrated macroscopic objective-lens system focuses light onto the pixel array.

Fig. 1. CMOS image sensor: schematic (left) and SEM picture (center) of CMOS pixel, and the final module (right).

In order to evaluate pixel optical performance correctly, we must simulate a product-like illumination. It is difficult to simulate the objective lens and the pixels together. The main problem is scale: the lens is hundreds of times bigger than the pixel (several millimeters compared to several micrometers), so the computational requirements (speed and memory) of FDTD are too large to be practical. The second problem is that many objective lenses can be used with the same sensor, which would mean as many simulations as objectives. Thus, we have to find a source that recreates the effect of the objective lens.

We first simulate a small group of pixels receiving the same uniform illumination (the spatial extension of the group is small compared to the spatial variation of the source). At the pixel level, light is distributed uniformly: spatially over the pixel area and angularly inside a cone defined by the exit pupil of the objective and the pixel (see Fig. 2). The parameters that define the angular distribution are the f-number of the objective (ratio between its focal length and its exit pupil diameter) and the chief-ray angle (the ray passing through the center of the exit pupil and the center of the pixel of interest).

Fig. 2. Light shape in case of a uniform pixel illumination provided by an objective lens.
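The angular extent of this illumination cone follows directly from the f-number. As a small illustration (the f-number and chief-ray angle below are assumed values for the sketch, not taken from the paper):

```python
import math

f_number = 2.8         # assumed objective f-number (illustrative)
chief_ray_deg = 10.0   # assumed chief-ray angle for an edge pixel

# Half-angle of the illumination cone for an object at infinity:
# sin(theta) = 1 / (2 N), where N is the f-number.
half_angle = math.degrees(math.asin(1.0 / (2.0 * f_number)))

# Rays reaching the pixel span chief_ray_deg +/- half_angle.
lo, hi = chief_ray_deg - half_angle, chief_ray_deg + half_angle
print(f"cone half-angle: {half_angle:.1f} deg, range {lo:.1f} to {hi:.1f} deg")
```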

The object to be imaged can be represented by a wide source far from the detector. Because it is wide, it can be decomposed into a sum of incoherent point sources at different spatial locations. Each point emits a spherical wave, but since it is far from the detector, the source can be approximated by plane waves that hit the detector with random phases from a variety of angles. Each plane wave is then focused onto a different point of the pixel array depending on its incidence angle.

Thus, any object to be imaged is simply an incoherent sum of point sources at different spatial locations. Therefore an image on the CMOS detector array can be reconstructed by incoherently summing the electromagnetic fields Ei (i.e., summing their intensities) from these single point sources as they pass through the objective lens. Figure 3 below shows a schematic representation of the light source seen by the pixel array.

Fig. 3. Schematic representation of the light source.
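In practice this means the detector response to the full source is obtained by adding intensities, not fields. A minimal sketch, with random complex arrays standing in for the fields Ei of separate simulations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical complex field patterns E_i from three independent point
# sources (e.g. three separate simulation runs); shapes are (ny, nx).
fields = [rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
          for _ in range(3)]

# Coherent sum: fields add first, then intensity (interference survives).
I_coherent = np.abs(sum(fields)) ** 2

# Incoherent sum: intensities add directly (random phases average out).
I_incoherent = sum(np.abs(E) ** 2 for E in fields)

print(I_incoherent.shape)
```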

2.2 The different solutions for simulation

2.2.1 Impulse response superposition

In Lumerical software, a source called “thin lens” allows the simulation of such an objective lens with a given f-number and chief-ray angle (see Fig. 4). This source consists of a sum of plane waves that creates either a Gaussian beam or a sinc profile at the focal point of the simulated “thin lens”. Thus, to simulate our uniform light source correctly, we have to sum “thin lens” sources at different spatial positions incoherently.

Fig. 4. The “thin lens” source simulated by the FDTD software from Lumerical.

We need to know how many sources are required to simulate a uniform source correctly. The simulations were performed on a two-dimensional structure of 5 pixels with a pitch of 2 μm (see Fig. 5, left). In the figure below, example outputs are shown for 5 and 19 Gaussian sources with spacings of 1 μm and 0.5 μm. The individual Gaussian waves are shown on the left and the sum of the waves on the right.

From these results, we see that 19 sources with 0.5 μm separation are sufficient to illuminate 5 microlenses uniformly. The 2D simulation in Section 4 will use 5 sources per pixel of 2 μm pitch.

Fig. 5. Layout of the simulated structure (left) and intensity in the focal plane for different number of Gaussian waves (right).
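The convergence toward a flat profile can be reproduced with a simple 1D model that sums shifted Gaussian spot intensities; the spot radius here is an assumed value, not one taken from the simulations:

```python
import numpy as np

x = np.linspace(-5.0, 5.0, 2001)   # focal-plane coordinate in um
w = 0.6                            # assumed Gaussian spot radius in um

def uniformity(n_sources, spacing):
    """Peak-to-peak ripple of an incoherent sum of shifted Gaussian spots,
    evaluated over the central region (away from the edge roll-off)."""
    centers = (np.arange(n_sources) - (n_sources - 1) / 2) * spacing
    total = sum(np.exp(-2 * (x - c) ** 2 / w ** 2) for c in centers)
    central = total[np.abs(x) <= 2.0]
    return (central.max() - central.min()) / central.max()

# Denser sampling gives a flatter illumination profile.
print(uniformity(5, 1.0), uniformity(19, 0.5))
```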

Nevertheless, one must note that the “thin lens” option cannot be used with Bloch periodic boundary conditions, which require a single incidence angle for the light source. In this case, we have to use absorbing boundary conditions and then simulate a wide domain to take into account the crosstalk between neighboring pixels (here 5 pixels are simulated in order to study the central one). For 3D simulations, the computational volume becomes large and the simulation time increases proportionately. The methodology we have developed to simulate this diffuse-like source reduces the size of the structure that must be simulated, and consequently the simulation time, by using Bloch periodic boundary conditions: in this case, only the central pixel is simulated.

2.2.2 Plane-wave superposition

If the number of plane wave angles N is comparable to the number of focused beam positions to be simulated, more than one order of magnitude in simulation time can be gained by using plane waves and thus Bloch periodic conditions, as shown in Table 1.

Table 1. Comparison of the two methodologies for 3D simulation. In both cases, two polarization states must be simulated to calculate the response to unpolarized light. The Bloch boundary conditions used for the second solution require complex-valued fields.

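The gain can be made concrete with illustrative numbers; these counts are assumptions for the sketch, not the entries of Table 1:

```python
# Illustrative cost model: each unit is one FDTD run over one pixel volume.
n_positions = 25        # focused-beam ("thin lens") positions for uniformity
n_angles = 16           # plane-wave angles with Bloch periodic conditions
domain_pixels = 5 * 5   # pixels per domain with absorbing boundaries (3D)

# Both methods need two polarization states for unpolarized light.
cost_thin_lens = 2 * n_positions * domain_pixels
cost_plane_wave = 2 * n_angles * 1   # a single pixel suffices with Bloch BCs

speedup = cost_thin_lens / cost_plane_wave
print(speedup)  # well over an order of magnitude with these counts
```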

3. Theoretical demonstration of the methodology

Let us consider a thin lens with a diameter D and a focal length f. Its transmission as a function of position in the exit pupil is t(x,y), which may include vignetting and/or aberrations. This lens is illuminated at wavelength λ by a superposition of incoherent tilted plane waves. Each wave has an amplitude A, and its tilt is described by its direction cosines $\alpha_0 = x_0/f$, $\beta_0 = y_0/f$, and $\gamma_0 = \sqrt{1 - \alpha_0^2 - \beta_0^2}$, where $x_0$ and $y_0$ are the coordinates in the image plane of the impulse-response offset (see Fig. 6).

The intensity $I_{x_0,y_0}(x_f,y_f)$ due to each plane wave in the focal plane $(x_f,y_f)$ (corresponding to the electric field Ei in Fig. 3) is given [10] by:

$$I_{x_0,y_0}(x_f,y_f) = \frac{A^2}{\lambda^2 f^2}\left|\iint t(x,y)\,e^{-i\frac{2\pi}{\lambda f}\left[x(x_f+x_0)+y(y_f+y_0)\right]}\,dx\,dy\right|^2 \tag{1}$$
Fig. 6. Schematic of the light source with the different parameters

With a uniform distribution of tilted plane waves, we have a uniform intensity If in the image plane given by:

$$I_f = \iint I_{x_0,y_0}(x_f,y_f)\,dx_0\,dy_0 \tag{2}$$

$$I_f = A^2 \iint \left|\iint t(x,y)\,e^{-i\frac{2\pi}{\lambda f}(x\,x_f + y\,y_f)}\,e^{-i\frac{2\pi}{\lambda f}(x\,x_0 + y\,y_0)}\,dx\,dy\right|^2 d\!\left(\frac{x_0}{\lambda f}\right) d\!\left(\frac{y_0}{\lambda f}\right) \tag{3}$$

$$I_f = A^2 \iint \left|\mathrm{FT}\!\left\{t(x,y)\,e^{-i\frac{2\pi}{\lambda f}(x\,x_f + y\,y_f)}\right\}\!\left(\frac{x_0}{\lambda f},\frac{y_0}{\lambda f}\right)\right|^2 d\!\left(\frac{x_0}{\lambda f}\right) d\!\left(\frac{y_0}{\lambda f}\right) \tag{4}$$

Note: FT denotes Fourier Transform.

Using Parseval’s theorem (energy conservation between two spaces conjugated by Fourier Transform):

$$\iint \left|\mathrm{FT}\{f(x,y)\}(u,v)\right|^2 du\,dv = \iint \left|f(x,y)\right|^2 dx\,dy \tag{5}$$

We obtain:

$$I_f = A^2 \iint \left|t(x,y)\,e^{-i\frac{2\pi}{\lambda f}(x\,x_f + y\,y_f)}\right|^2 dx\,dy \tag{6}$$

$$I_f = A^2 \iint \left|t(x,y)\right|^2 dx\,dy \tag{7}$$

If we transform this expression to introduce the wave vector k→:

$$\vec{k} = k_x\,\hat{x} + k_y\,\hat{y} = \frac{2\pi}{\lambda}\,\alpha\,\hat{x} + \frac{2\pi}{\lambda}\,\beta\,\hat{y} \tag{8}$$

with $\alpha = x/f$ and $\beta = y/f$, then:

$$I_f = A^2 \left(\frac{\lambda f}{2\pi}\right)^2 \iint \left|t\!\left(\frac{\lambda f}{2\pi}k_x, \frac{\lambda f}{2\pi}k_y\right)\right|^2 dk_x\,dk_y \tag{9}$$

In case of a perfect thin lens (neither aberration nor vignetting), the transmission of the lens t(kx,ky) is simply the pupil function:

$$t(k_x,k_y) = P(k_x,k_y) = \begin{cases} 1, & \sqrt{k_x^2 + k_y^2} \le \mathrm{NA}\cdot k \\ 0, & \text{otherwise} \end{cases} \tag{10}$$

with NA the numerical aperture, $\mathrm{NA} = \frac{D}{2f} = \sin\varphi$.
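A uniform set of plane waves filling this pupil cone can be generated as follows. This is a hedged sketch: the grid density and numerical aperture are arbitrary choices, not the paper's simulation settings.

```python
import numpy as np

wavelength = 0.55                  # um
na = 0.26                          # numerical aperture, NA = D / (2f)
k = 2.0 * np.pi / wavelength       # free-space wavenumber

# Regular grid of transverse wave vectors, truncated to the NA cone:
# this realizes a top-hat pupil weighting W = P as in Eq. (10).
n = 9
kt = np.linspace(-na * k, na * k, n)
kx, ky = np.meshgrid(kt, kt)
inside = np.hypot(kx, ky) <= na * k

# Tilt angle of each retained plane wave.
angles_deg = np.degrees(np.arcsin(np.hypot(kx[inside], ky[inside]) / k))
print(len(angles_deg), angles_deg.max())
```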

Let us now consider a sum of incoherent tilted plane waves (transverse wave-vector components $k_x$, $k_y$ and amplitude A) weighted by the function $W(k_x,k_y)$. The total intensity $I_{PW}$ in the plane $(x_f,y_f)$ is:

$$I_{PW} = \iint \left|A\,e^{i(k_x x + k_y y)}\,W(k_x,k_y)\right|^2 dk_x\,dk_y \tag{11}$$

$$I_{PW} = A^2 \iint \left|W(k_x,k_y)\right|^2 dk_x\,dk_y \tag{12}$$

This expression can be identified with Eq. (9) if W(kx,ky) = P(kx,ky), i.e. a uniform distribution of plane waves inside a cone defined by the exit pupil diameter D and the focal length f.

Note that although we have assumed here a perfect lens with no aberration and no vignetting, the demonstration remains valid for a lens with non-perfect transmission, i.e., with P(kx,ky) ≠ 1 and non-constant inside the cone defined by the pupil. In that case, the thin lens can still be simulated by a sum of incoherent plane waves whose weights vary with the wave vector (kx, ky).

Finally, in the case of a non-uniform object, simulations can also be performed using the “thin lens” source solution, for example in order to image a finite-sized object.
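The Parseval step of Eq. (5), on which the demonstration rests, is easy to check numerically with a discrete Fourier transform:

```python
import numpy as np

rng = np.random.default_rng(1)
field = rng.standard_normal((64, 64)) + 1j * rng.standard_normal((64, 64))

# Energy in direct space ...
energy_space = np.sum(np.abs(field) ** 2)

# ... equals energy in Fourier space; with numpy's unnormalized FFT
# convention, divide by the total number of samples.
spectrum = np.fft.fft2(field)
energy_freq = np.sum(np.abs(spectrum) ** 2) / field.size

print(np.isclose(energy_space, energy_freq))
```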

4. Simulation results

The simulations were performed on a two-dimensional structure to test the methodology and determine the number of plane waves required (see the structure in Fig. 5). The results were then compared to those obtained using the thin-lens sources, in which case 5 thin-lens sources per period were used.

In the case of thin lens sources 5 pixels are simulated using absorbing boundary conditions, whereas with the plane wave sources only one pixel is simulated with Bloch periodic conditions on the boundaries (the pixel has been repeated 4 times in post-processing for comparison with the other method).

The results are shown in Figs. 7, 8, and 9. The pitch of the sensor array is a = 2 μm, the wavelength is 550 nm, and the aperture is NA = 0.26 (see Fig. 4). The Poynting vector along the y direction, Py(x), is normalized to the source power per unit cell, so that the total transmission T can be calculated by

$$T = \int_{-a/2}^{a/2} \frac{1}{2} P_y(x)\,dx \tag{13}$$
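Given a simulated Poynting profile, this transmission integral is a one-line quadrature. A sketch with a synthetic profile (the Gaussian below stands in for actual simulation output):

```python
import numpy as np

a = 2.0                              # pixel pitch in um
x = np.linspace(-a / 2, a / 2, 401)

# Hypothetical normalized Poynting profile Py(x) at the silicon interface.
py = np.exp(-2.0 * (x / 0.5) ** 2)

# Transmission: integral of (1/2) Py(x) over one unit cell (trapezoid rule).
integrand = 0.5 * py
T = np.sum((integrand[:-1] + integrand[1:]) / 2.0) * (x[1] - x[0])
print(T)
```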

Figure 7 shows the simulated Poynting vector Py for the thin-lens sources (left) and for the plane-wave sources (right) for on-axis pixels, i.e. pixels at the center of the sensor array. Figure 8 presents similar results for off-axis pixels, i.e. pixels at the edge of the sensor with an angular shift of 10°.

Fig. 7. Poynting results for on-axis pixels: propagation through the structure (top) and results at the silicon interface, y = 0 μm (bottom). Left: the 5 pixels with the Gaussian (“thin lens”) sources. Right: the 5 pixels with 16 plane-wave sources (top) and with different numbers of plane waves (bottom).
Fig. 8. Poynting results for off-axis pixels (10° shift): propagation through the structure (top) and results at the silicon interface, y = 0 μm (bottom). Left: the 5 pixels with the Gaussian (“thin lens”) sources. Right: the 5 pixels with 16 plane-wave sources (top) and with different numbers of plane waves (bottom).

Figure 9 shows the Poynting vector at the silicon interface (the focal plane of the microlenses) for the central pixel in both cases and for the two methods.

Fig. 9. Comparison of the two methods: Poynting vector at the silicon interface for the central pixel, for on-axis pixels (left) and off-axis pixels with a 10° shift (right).

In these figures, some variations are visible for 4 and 8 plane waves, whereas the results for 16, 32, and 64 plane waves appear reasonably converged. Even so, the small numbers of plane waves (4 or 8) are still reasonably accurate and could be used for rapid initial optimization, to be verified with a larger number of plane waves once the optimal structure has been determined.

5. Conclusion

This new light source is well adapted to CMOS image sensor design verification and optimization, where structures are complex and diffraction effects are no longer negligible.

Thus, we are able to predict optical performance figures such as microlens gain, quantum efficiency, crosstalk, or angular response for different structures. We can then identify and understand the potential optical limitations of pixels, helping CMOS sensor design and process engineers optimize the pixel. Finally, we can anticipate problems by selecting design and process solutions and by setting specifications to achieve good optical performance for CMOS image sensors.

References and links

1. A. El Gamal and H. Eltoukhy, “CMOS Image Sensors. An introduction to the technology, design, and performance limits, presenting recent developments and future directions,” IEEE Circuits & Devices Magazine (May/June 2005).
2. E. R. Fossum, “CMOS Image Sensors: Electronic Camera-On-A-Chip,” IEEE Trans. Electron Devices 44, 1689–1698 (1997).
3. P. B. Catrysse, X. Liu, and A. El Gamal, “QE Reduction due to Pixel Vignetting in CMOS Image Sensors,” Proc. SPIE 3965, 420–430 (2000).
4. P. B. Catrysse and B. A. Wandell, “Optical efficiency of image sensor pixels,” J. Opt. Soc. Am. A 19, 1610–1620 (2002).
5. J. Vaillant and F. Hirigoyen, “Optical simulation for CMOS imager microlens optimization,” Proc. SPIE 5459, 200–210 (2004).
6. H. Rhodes, G. Agranov, C. Hong, U. Boettiger, R. Mauritzon, J. Ladd, I. Karasev, J. McKee, E. Jenkins, W. Quinlin, I. Patrick, J. Li, X. Fan, R. Panicacci, S. Smith, C. Mouli, and J. Bruce, “CMOS Imager Technology Shrinks and Image Performance,” IEEE (2004).
7. K. S. Yee, “Numerical solution of initial boundary value problems involving Maxwell’s equations in isotropic media,” IEEE Trans. Antennas Propag. 14, 302–307 (1966).
8. A. Taflove and S. C. Hagness, Computational Electrodynamics: The Finite-Difference Time-Domain Method, 2nd ed. (Artech House, Boston, MA, 2000).
9. Lumerical Solutions, Inc., http://www.lumerical.com.
10. J. W. Goodman, Introduction to Fourier Optics, 3rd ed. (Roberts & Company Publishers, Englewood, CO, 2005), Chap. 5.

OCIS Codes
(040.0040) Detectors : Detectors
(110.0110) Imaging systems : Imaging systems

ToC Category:
Imaging Systems

History
Original Manuscript: November 10, 2006
Revised Manuscript: April 18, 2007
Manuscript Accepted: April 18, 2007
Published: April 20, 2007

Citation
Jérôme Vaillant, Axel Crocherie, Flavien Hirigoyen, Adam Cadien, and James Pond, "Uniform illumination and rigorous electromagnetic simulations applied to CMOS image sensors," Opt. Express 15, 5494-5503 (2007)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-15-9-5494


