Optics Express

  • Editor: J. H. Eberly
  • Vol. 8, Iss. 3 — Jan. 29, 2001
  • pp: 173–190

Block-iterative frequency-domain methods for Maxwell’s equations in a planewave basis

Steven G. Johnson and J. D. Joannopoulos


Optics Express, Vol. 8, Issue 3, pp. 173–190 (2001)
http://dx.doi.org/10.1364/OE.8.000173


Abstract

We describe a fully-vectorial, three-dimensional algorithm to compute the definite-frequency eigenstates of Maxwell’s equations in arbitrary periodic dielectric structures, including systems with anisotropy (birefringence) or magnetic materials, using preconditioned block-iterative eigensolvers in a planewave basis. Favorable scaling with the system size and the number of computed bands is exhibited. We propose a new effective dielectric tensor for anisotropic structures, and demonstrate that O(Δx²) convergence can be achieved even in systems with sharp material discontinuities. We show how it is possible to solve for interior eigenvalues, such as localized defect modes, without computing the many underlying eigenstates. Preconditioned conjugate-gradient Rayleigh-quotient minimization is compared with the Davidson method for eigensolution, and a number of iteration variants and preconditioners are characterized. Our implementation is freely available on the Web.

© Optical Society of America

1 Introduction

Optical systems have been the subject of enormous practical and theoretical interest in recent years, with a corresponding need for mathematical and computational tools. One fundamental approach in their analysis is eigenmode decomposition: the possible forms of electromagnetic propagation are expressed as a set of definite-frequency (time-harmonic) modes. In the absence of nonlinear effects, all optical phenomena can then be understood in terms of a superposition of these modes, and many forms of analytical study are possible once the modes are known. Of special interest are periodic (or translationally-symmetric) systems, such as photonic crystals (or waveguides), which give rise to many novel and interesting optical effects [1]. Another important basic system is that of resonant cavities, which confine light to a point-like region. There, the boundary conditions are, in principle, irrelevant if the mode is sufficiently confined, so such cavities can be treated under the rubric of periodic structures as well via the “super-cell” technique. In this paper, we describe a fully-vectorial, three-dimensional method for computing general eigenmodes of arbitrary periodic dielectric systems, including anisotropy, based on the preconditioned block-iterative solution of Maxwell’s equations in a planewave basis. A new effective dielectric tensor for anisotropic systems is introduced, and we also describe a technique for computing eigenvalues in the interior of the spectrum (e.g. defect modes) without computing the underlying bands. We present comparisons of different iterative solution schemes, preconditioners, and other aspects of frequency-domain calculations. The result of this work is available as a free and flexible computer program downloadable from the Web [2].

1. See, e.g., J. D. Joannopoulos, P. R. Villeneuve, and S. Fan, “Photonic crystals: putting a new twist on light,” Nature (London) 386, 143–149 (1997).

2. S. G. Johnson and J. D. Joannopoulos, The MIT Photonic-Bands Package home page, http://ab-initio.mit.edu/mpb/.

There are a few common approaches to eigen-decomposition of electromagnetic systems. One, which we employ in this paper, is to expand the fields as definite-frequency states in some truncation of a complete basis (e.g. planewaves with a finite cutoff) and then solve the resulting linear eigenproblem. Such methods have seen widespread use, with many variations distinguished by the critical choices of basis and eigensolver algorithm [3–17]. This “frequency-domain” method is discussed in further detail below. Another common technique involves the direct simulation of Maxwell’s equations over time on a discrete grid by finite-difference time-domain (FDTD) algorithms [18]: one Fourier-transforms the time-varying response of the system to some input, and the peaks in the resulting spectrum correspond to the eigenfrequencies [19–24]. (Care must be taken to ensure that the source is not accidentally near-orthogonal to an eigenmode.) This has the unique (and sometimes desirable) feature of finding the eigenfrequencies only: to compute the associated fields, one re-runs the simulation with a narrow-band filter for each state. (Time-domain calculations can also address problems of a dynamic nature, such as transmission processes, that are not so amenable to eigenmethods.) A third class of techniques are referred to as “transfer-matrix” methods: at a fixed frequency, one computes the transfer matrix relating field amplitudes at one end of a unit cell with those at the other end (via finite-difference, analytical, or other methods) [25–30]. This yields the transmission spectrum directly, and mode wavevectors via the eigenvalues of the matrix; in some sense, this is a hybrid of time- and frequency-domain. Transfer-matrix methods may be especially attractive when the structure is decomposable into a few more-easily soluble components, and also for other cases such as frequency-dependent dielectrics.

3. K. M. Ho, C. T. Chan, and C. M. Soukoulis, “Existence of a photonic gap in periodic dielectric structures,” Phys. Rev. Lett. 65, 3152–3155 (1990).

17. E. Lidorikis, M. M. Sigalas, E. N. Economou, and C. M. Soukoulis, “Tight-binding parameterization for photonic band-gap materials,” Phys. Rev. Lett. 81, 1405–1408 (1998).

18. See, e.g., K. S. Kunz and R. J. Luebbers, The Finite Difference Time Domain Methods (CRC, Boca Raton, Fla., 1993).

19. C. T. Chan, S. Datta, Q. L. Yu, M. Sigalas, K. M. Ho, and C. M. Soukoulis, “New structures and algorithms for photonic band gaps,” Physica A 211, 411–419 (1994).

24. A. J. Ward and J. B. Pendry, “A program for calculating photonic band structures, Green’s functions and transmission/reflection coefficients using a non-orthogonal FDTD method,” Comput. Phys. Comm. 128, 590–621 (2000).

25. P. Yeh, Optical Waves in Layered Media (Wiley, New York, 1988).

30. J. Chongjun, Q. Bai, Y. Miao, and Q. Ruhu, “Two-dimensional photonic band structure in the chiral medium—transfer matrix method,” Opt. Commun. 142, 179–183 (1997).

In any method, the computation is characterized by some number N of degrees of freedom (e.g., the number of planewaves or grid points), and one might be interested to compare how the number of operations (the “complexity”) in each algorithm scales with N. Unfortunately, there is no simple answer. As we shall see below, the complexity in frequency-domain is O(i_c pN log N) + O(i_c p²N) for a planewave basis, where p ≪ N is the number of desired eigenmodes and i_c is the number of iterations for the eigensolver to converge. In time-domain, the complexity is O(i_t N) to find the frequencies, and O(p i_t N) to also solve for the fields of p modes, where i_t is the number of time steps. (Transfer-matrix methods have too many variations to consider here.) The difficulty in both cases comes from the number of iterations, which scales in different ways depending upon how the problem size is increased. In frequency-domain, i_c is hard to predict, but we shall show below that it often grows only very slowly with p and N, so we can treat it as approximately constant (often < 20). In time-domain, however, i_t must increase linearly with the spatial resolution (a certain kind of N increase) to maintain stability [18]. It also increases inversely with the required frequency resolution, by the uncertainty principle of the Fourier transform: i_t Δt Δf ~ 1 (where Δt is the timestep). Not only, then, is i_t a large number (typically > 1000), but it must also increase dramatically to resolve closely-spaced modes (although this can be ameliorated somewhat by sophisticated signal processing [31]); in contrast, frequency-domain methods have no special problem resolving even degenerate modes. One traditional advantage of time-domain has been its ability to extract modes in the interior of the spectrum without computing the lower-frequency states, but we will show that this is feasible in frequency-domain as well. We feel that both time- and frequency-domain methods remain useful tools to extract eigenmodes from many structures.

31. V. A. Mandelshtam and H. S. Taylor, “Harmonic inversion of time signals,” J. Chem. Phys. 107, 6756–6769 (1997). Erratum: ibid. 109, 4128 (1998).

There is sometimes concern that discontinuities in the dielectric function may cause poor convergence in a planewave basis. As is described in Sec. 2.3, however, this can be alleviated by the use of a smoothed effective dielectric tensor [5], and we demonstrate that convergence proportional to the square of the spatial resolution, O(Δx²), can be achieved even for sharply discontinuous anisotropic dielectric structures.

5. R. D. Meade, A. M. Rappe, K. D. Brommer, J. D. Joannopoulos, and O. L. Alerhand, “Accurate theoretical analysis of photonic band-gap materials,” Phys. Rev. B 48, 8434–8437 (1993). Erratum: S. G. Johnson, ibid. 55, 15942 (1997).

In the paper that follows, we describe in greater detail our method for obtaining the eigenmodes of Maxwell’s equations, dividing the discussion into two parts: Maxwell’s equations and eigensolvers. First, we review how Maxwell’s equations can be cast as an eigenproblem for the frequencies, discuss the choice of basis and the computation of an effective dielectric tensor, and consider the critical selection of an approximate preconditioner. Second, we describe various block-iterative algorithms for solving this eigensystem; in principle this is independent of the equations being solved, but in practice there are a number of specific considerations. Throughout, we illustrate the methods being compared with numerical results for example systems.

2 The Maxwell Eigenproblem

We first express Maxwell’s equations as a linear eigenproblem, abstracting where possible from the differential equations in the individual field components to a higher-level view that better illuminates the overall process. To this end, we employ the Dirac notation of abstract operators Â and states |H〉 to provide a representation-independent expression for the fields and inner products, and later use ordinary matrix notation to indicate the transition to a finite problem. (The underlying equations remain fully vectorial.) This eigenproblem formulation has appeared in various forms elsewhere, and we begin by reviewing it here. The source-free Maxwell’s equations for a linear dielectric ε = ε(x⃗) can be written in terms of only the magnetic field |H〉 [1]:

\[ \vec{\nabla} \times \frac{1}{\varepsilon} \vec{\nabla} \times |H\rangle = -\frac{1}{c^2} \frac{\partial^2}{\partial t^2} |H\rangle, \]
(1)
\[ \vec{\nabla} \cdot |H\rangle = 0. \]
(2)

We consider only states with definite frequency ω, i.e. a time-dependence e^{−iωt}. Furthermore, we suppose that the system is periodic; in that case, Bloch’s theorem for periodic eigenproblems says that the states can be chosen to be of the form [32]:

32. N. W. Ashcroft and N. D. Mermin, Solid State Physics (Holt Saunders, Philadelphia, 1976).

\[ |H\rangle = e^{i(\vec{k}\cdot\vec{x} - \omega t)}\, |H_{\vec{k}}\rangle, \]
(3)

where k⃗ is the “Bloch wavevector” and |Hk ⃗〉 is a periodic field (completely defined by its values in the unit cell). Eq. (1) then becomes the linear eigenproblem in the unit cell:

\[ \hat{A}_{\vec{k}} |H_{\vec{k}}\rangle = \left(\frac{\omega}{c}\right)^{2} |H_{\vec{k}}\rangle, \]
(4)

where Âk⃗ is the positive semi-definite Hermitian operator:

\[ \hat{A}_{\vec{k}} = (\vec{\nabla} + i\vec{k}) \times \frac{1}{\varepsilon}\, (\vec{\nabla} + i\vec{k}) \times. \]
(5)

All the familiar theorems of Hermitian eigenproblems apply. Because |Hk⃗〉 has compact support (the unit cell), the solutions are a discrete sequence of eigenfrequencies ωn(k⃗), forming a continuous “band structure” (or “dispersion relation”) as a function of k⃗. These discrete bands (modes as a function of k⃗) provide a complete picture of all possible electromagnetic states of the system, but typically one is interested in only the lowest few. (For example, in a photonic crystal it is possible for there to be a band gap in the lower bands: a range of ω in which no states exist [1].) Furthermore, the modes at a given k⃗ may be chosen to be orthonormal:

\[ \langle H_{\vec{k}}^{(n)} | H_{\vec{k}}^{(m)} \rangle = \delta_{n,m}, \]
(6)

where δn,m is the Kronecker delta.

2.1 The choice of basis

In frequency-domain methods, Eq. (4) is transformed into a finite problem by expanding the states in some truncated (possibly vectorial) basis {|bm 〉}:

\[ |H_{\vec{k}}\rangle = \sum_{m=1}^{N} h_m |b_m\rangle. \]
(7)

This expression becomes exact as the number N of basis functions goes to infinity, assuming a complete basis. One then has the ordinary generalized eigenproblem

\[ A h = \left(\frac{\omega}{c}\right)^{2} B h, \]
(8)

where h is a column vector of the basis coefficients hm, and A and B are N×N matrices with entries A_{ℓm} = 〈b_ℓ|Âk⃗|b_m〉 and B_{ℓm} = 〈b_ℓ|b_m〉. It is important to note, however, that Eq. (8) by itself is incomplete: the modes must also satisfy the “transversality” constraint of Eq. (2); zero-frequency spurious modes are otherwise introduced, as can be seen by taking the divergence of both sides of Eq. (1).
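As a concrete toy illustration of Eq. (8) (our own sketch, not part of the method described here), one can build a small Hermitian A with B = 1 and solve the generalized eigenproblem densely; SciPy’s `eigh` stands in for a generic dense eigensolver, and all names are ours:

```python
import numpy as np
from scipy.linalg import eigh

# Toy instance of the generalized eigenproblem A h = (w/c)^2 B h of Eq. (8).
# A is Hermitian positive semi-definite; B is the identity, as it would be
# for an orthonormal basis such as planewaves.
rng = np.random.default_rng(0)
N = 8
M = rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))
A = M.conj().T @ M        # Hermitian, positive semi-definite by construction
B = np.eye(N)

eigvals, h = eigh(A, B)   # dense solve: O(N^2) storage, O(N^3) work
freqs = np.sqrt(eigvals.clip(min=0.0))   # the eigenvalues are (w/c)^2
```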

In principle, one could compute the entries of A and B and then use a standard matrix algorithm to solve Eq. (8) directly, and this method is sometimes employed [3,4,6,7,10]. Such a computation, however, requires O(N²) storage for the matrices and O(N³) work for the diagonalization, making it impractical for large problems. Fortunately, since only p bands are typically desired, for some p ≪ N, iterative methods are available to compute the bands with only O(pN) storage and roughly O(p²N) work; these methods are the subject of Sec. 3. The relevant property of iterative methods is this: they require only a fast, ideally O(N), method to compute the products Ah and Bh, with no need to store the matrices explicitly.

4. H. S. Sozüer and J. W. Haus, “Photonic bands: convergence problems with the plane-wave method,” Phys. Rev. B 45, 13962–13972 (1992).

6. T. Suzuki and P. K. L. Yu, “Method of projection operators for photonic band structures with perfectly conducting elements,” Phys. Rev. B 57, 2229–2241 (1998).

7. K. Busch and S. John, “Liquid-crystal photonic-band-gap materials: the tunable electromagnetic vacuum,” Phys. Rev. Lett. 83, 967–970 (1999).

10. W. C. Sailor, F. M. Mueller, and P. R. Villeneuve, “Augmented-plane-wave method for photonic band-gap materials,” Phys. Rev. B 57, 8819–8822 (1998).

The choice of basis functions |bn〉, then, is determined by three factors. First, they should form a compact representation so that a reasonable N yields good accuracy. Second, a convenient and efficient method for computing Ah and Bh must be available. Third, they should be inherently transverse, or otherwise provide an inexpensive way to maintain the constraint of Eq. (2).

2.1.1 The planewave basis

We chose to use a planewave basis (for reasons described below) [3–7], in which |bm〉 = e^{iG⃗m·x⃗} for some reciprocal-lattice vectors G⃗m; the truncation N is determined by choosing a maximum cutoff for the magnitude of G⃗m. Strictly speaking, a cutoff magnitude would result in a spherical volume of G⃗ vectors, but we expand this into a parallelepiped volume so that the transformation between planewave and spatial representations takes the convenient form of a Discrete Fourier Transform (DFT). (Such an extension also removes an ambiguity in the order in which to invert and Fourier-transform ε [5].) The planewave set then has a duality with a spatial grid, which is often a more intuitive representation. In particular, suppose that the three primitive lattice vectors (the units of periodicity) are {R⃗1, R⃗2, R⃗3} and the primitive reciprocal-lattice vectors are {G⃗1, G⃗2, G⃗3}, defined by R⃗i·G⃗j = 2πδij [32]. Then the basis functions are b_{m1,m2,m3} = e^{iΣj mj G⃗j·x⃗} with mj = −⌊Nj/2⌋ + 1, …, ⌊Nj/2⌋ and N = N1N2N3, and Eq. (7) for the spatial field becomes:

\[ H_{\vec{k}}\!\left(\sum_k \frac{n_k \vec{R}_k}{N_k}\right) = \sum_{\{m_j\}} h_{\{m_j\}}\, e^{i \sum_{j,k} m_j \vec{G}_j \cdot n_k \vec{R}_k / N_k} = \sum_{\{m_j\}} h_{\{m_j\}}\, e^{2\pi i \sum_j m_j n_j / N_j}. \]
(9)

Here, nk = 0, …, Nk−1 describe spatial coordinates on an N1×N2×N3 affine grid along the lattice directions. This is precisely a three-dimensional DFT, and can be computed by an efficient Fast Fourier Transform (FFT) algorithm [33] in O(N log N) time.

33. M. Frigo and S. G. Johnson, “FFTW: an adaptive software architecture for the FFT,” in Proc. 1998 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Institute of Electrical and Electronics Engineers, New York, 1998), 1381–1384.

Thus, in a planewave representation, the product Ah from Eq. (8) can be computed in O(N log N) time by taking the curl in wavevector space (just the cross-product with k⃗+G⃗m), computing the FFT, multiplying by the effective inverse dielectric tensor ε̃⁻¹, computing the inverse FFT, and taking the curl again [5]:

\[ A_{\ell m} = -(\vec{k} + \vec{G}_{\ell}) \times \mathrm{IFFT}\; \widetilde{\varepsilon^{-1}}\; \mathrm{FFT}\; (\vec{k} + \vec{G}_m) \times. \]
(10)

The determination of the effective inverse dielectric tensor ε̃⁻¹ is discussed in Sec. 2.3. The matrix B is simply the identity, thanks to the orthonormality of the basis.
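A one-dimensional scalar analogue of this pipeline (our own sketch; the real operator involves vector cross products in 3D) applies the operator to a coefficient vector without ever forming the N×N matrix:

```python
import numpy as np

# 1D scalar analogue of Eq. (10): apply (k+G) IFFT eps^{-1}(x) FFT (k+G)
# to a coefficient vector h, never forming the N x N matrix.
N = 32
G = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)     # reciprocal "vectors"
k = 0.3                                            # Bloch wavevector
eps = np.where(np.arange(N) < N // 2, 12.0, 1.0)   # step dielectric on the grid

def apply_A(h):
    d = (k + G) * h      # derivative (the 1D stand-in for a curl) in k-space
    d = np.fft.ifft(d)   # to the spatial grid
    d = d / eps          # multiply by the inverse dielectric
    d = np.fft.fft(d)    # back to k-space
    return (k + G) * d   # derivative again

h = np.random.default_rng(3).standard_normal(N) + 0j
Ah = apply_A(h)
```

The resulting operator is Hermitian and positive, mirroring the properties of Â_k⃗ claimed in the text.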

Since the basis functions themselves are scalars in this case, the amplitudes hm in Eq. (7) must be vectors. In addition to Eq. (8), the field must satisfy the transversality constraint, and it is here that a key advantage of the planewave basis becomes apparent: Eq. (2) becomes merely a local constraint on the amplitudes, h⃗m·(k⃗+G⃗m) = 0. For each reciprocal vector G⃗m one chooses a pair {ûm, v̂m} of orthonormal unit vectors that are perpendicular to k⃗+G⃗m, and writes the amplitude as h⃗m = hm^(1) ûm + hm^(2) v̂m. Then the basis is intrinsically transverse, and one can treat Eq. (8) as an ordinary eigenproblem of rank n = 2N without worrying about any constraint.
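A sketch of constructing such a transverse pair for a single reciprocal vector (the seeding strategy below is our own arbitrary choice; any orthonormal pair perpendicular to k⃗+G⃗m works):

```python
import numpy as np

# Build, for one reciprocal vector, an orthonormal pair {u, v} perpendicular
# to kG = k + G_m, so that h_m = h1*u + h2*v is transverse by construction.
def transverse_basis(kG):
    kG = np.asarray(kG, dtype=float)
    seed = np.array([1.0, 0.0, 0.0])       # any axis not parallel to kG
    if abs(kG @ seed) > 0.9 * np.linalg.norm(kG):
        seed = np.array([0.0, 1.0, 0.0])
    u = np.cross(kG, seed)
    u /= np.linalg.norm(u)
    v = np.cross(kG / np.linalg.norm(kG), u)
    return u, v

u, v = transverse_basis([0.3, 0.0, 2.0 * np.pi])
```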

2.1.2 Other possible bases

The planewave basis has at least two potential disadvantages: first, it corresponds to a uniform spatial grid, and may thus be a less economical representation than one based on e.g. a general mesh; second, the computation of the Maxwell operator A requires O(N log N) time instead of O(N), although the difference may be small in practice. Both of these problems could be overcome by using a different basis, for example a traditional finite-element basis formed of localized functions on an unstructured mesh. Such a basis, however, would make it more difficult to maintain the transversality constraint, which is why we eschew it in our implementation. (One way around this might be to replace the magnetic field with the vector potential, H⃗ = ∇⃗×A⃗, although this introduces higher-order derivatives into the eigenproblem.) Alternatively, it is possible to solve the eigenproblem without transversality and a posteriori identify and remove the resulting spurious modes (which lie at zero frequency unless a “penalty” term is added to the eigen-equation) [8,14].

8. J. Jin, The Finite-Element Method in Electromagnetics (Wiley, New York, 1993), Chap. 5.7.

14. S. J. Cooke and B. Levush, “Eigenmode solution of 2-D and 3-D electromagnetic cavities containing absorbing materials using the Jacobi-Davidson algorithm,” J. Comput. Phys. 157, 350–370 (2000).

In two dimensions, though, transversality ceases to be a problem: one simply chooses the magnetic field along z (TE fields) or the electric field along z (TM fields, for which the eigenproblem could be recast in E⃗ or D⃗). This fact has been employed by various researchers to implement finite-element or other-basis frequency-domain methods in 2D [8–14]. (Our implementation also supports the 2D TE/TM case, of course.)

Given the eigenmodes for the primitive cell of a lattice, it may be possible to use them to construct a localized Wannier-function representation that is a useful basis for defect-mode calculations [15,16], although the non-uniqueness of the Wannier functions must be resolved, e.g., by fitting to the precomputed band structure in “tight-binding” fashion [16]. Such a basis is automatically divergenceless since the constituent eigenmodes are transverse. Another possibility is a tight-binding basis that is not specified explicitly but whose matrix elements are fitted to an existing band diagram [17].

15. K. M. Leung, “Defect modes in photonic band structures: a Green’s function approach using vector Wannier functions,” J. Opt. Soc. Am. B 10, 303–306 (1993).

16. J. P. Albert, C. Jouanin, D. Cassagne, and D. Bertho, “Generalized Wannier function method for photonic crystals,” Phys. Rev. B 61, 4381–4384 (2000).

2.2 Inversion symmetry

In general, the basis coefficients h are complex, but additional simplifications are possible when the dielectric function possesses inversion symmetry: ε(−x⃗) = ε(x⃗). The Fourier transform of a real and even function is real and even, so it follows that the planewave representation of Âk⃗, Eq. (10), is then a real-symmetric matrix. This means that the planewave amplitudes hm can be chosen to be purely real, resulting in a factor of two savings in storage, more than a factor of two in time (due to more-efficient FFTs and matrix operations for real data), and possibly a reduction in the number of eigensolver iterations (due to the reduced degrees of freedom). The spatial fields, in contrast, cannot generally be chosen as real; rather, with inversion symmetry they may be chosen to satisfy the property of the Fourier transform of a real function: H⃗k⃗(x⃗) = H⃗k⃗(−x⃗)*. Because inversion symmetry is extremely common in practical structures of interest, our implementation supports this optimization when the symmetry is present. For generality, however, we also handle the case of complex h for non-symmetric systems.
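The storage saving can be seen directly with NumPy’s real-input FFTs, which keep only the non-redundant half of the coefficients of a real field (a generic illustration, not the MPB code):

```python
import numpy as np

# With inversion symmetry the planewave amplitudes may be taken real, and a
# real-input FFT stores (and transforms) only the non-redundant half of the
# coefficients.
N = 64
field = np.random.default_rng(4).standard_normal((N, N, N))

full = np.fft.fftn(field)     # complex transform: N^3 coefficients
half = np.fft.rfftn(field)    # real-input transform: N^2 * (N/2 + 1) coefficients
```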

Fig. 1. Eigenvalue convergence as a function of grid resolution (grid points per lattice constant a) for three different methods of determining an effective dielectric tensor at each point: no averaging, simply taking the dielectric constant at each grid point; averaging, the smoothed effective dielectric tensor of Eq. (12); and backwards averaging, the same smoothed dielectric but with the averaging methods of the two polarizations reversed.

2.3 The effective dielectric tensor

When computing the product Âk⃗|Hk⃗〉 in a planewave basis, the multiplication by ε⁻¹ is done in the spatial domain after a Fourier transform, so one might simply use the inverse of the actual dielectric constant at each point. Unfortunately, this can lead to suboptimal convergence of the frequencies as a function of N, due to the problems of representing discontinuities in a Fourier basis. It has been shown, however, that using a smoothed, effective dielectric tensor near dielectric interfaces can circumvent these problems, and achieve accurate results for moderate N [5]. In particular, near a dielectric interface, one must average the dielectric in two different ways according to effective-medium theory, depending upon the polarization of the incident light relative to the surface normal n̂. For an electric field E⃗‖ (along n̂), one averages the inverse of ε; for E⃗⊥, one takes the inverse of the average of ε. This results in an effective inverse dielectric tensor ε̃⁻¹:

\[ \widetilde{\varepsilon^{-1}} = \overline{\varepsilon^{-1}}\, P + \overline{\varepsilon}^{\,-1} (1 - P), \]
(11)

where P is the projection matrix onto n̂: P_{ij} = n_i n_j, and overbars denote the averages. Here, the averaging is done over one voxel (cubic grid unit) around the given spatial point; if ε is constant, Eq. (11) reduces simply to ε⁻¹. (The original formulation was in terms of ε̃, but is equivalent.)

We have generalized this procedure to handle the case of anisotropic (birefringent) dielectric materials, in which case the intrinsic ε is already a real-symmetric tensor. Or, more generally, for magnetic materials ε may be complex-Hermitian. In this case, an analogous equation is needed that: (i) produces a Hermitian effective inverse dielectric tensor, (ii) reduces to Eq. (11) for the case of scalar ε, (iii) yields ε⁻¹ for constant ε, (iv) retains the physical justification of the different averaging methods for the different polarizations, and (v) similarly improves convergence. The expression we propose that satisfies these criteria is:

\[ \widetilde{\varepsilon^{-1}} = \frac{1}{2}\left( \{\overline{\varepsilon^{-1}}, P\} + \{\overline{\varepsilon}^{\,-1}, (1 - P)\} \right), \]
(12)

where ε may be a tensor and {a, b} denotes the anti-commutator ab + ba. Our first three conditions are manifestly satisfied. A physical justification of this formula is that a given averaging method should be used when either the field or the inverse ε times the field is in the appropriate direction relative to n̂, hence the anti-commutators. To illustrate the convergence impact of Eq. (12), we consider a simple example case similar to that of [7]: an fcc lattice (lattice constant a) of close-packed spherical holes in dielectric (ε = 12), where the holes are filled by an anisotropic “liquid crystal” material with an ε of 2 for fields along an “extraordinary” (011) direction and 1 otherwise. We compute the frequency of the ninth band (just below the gap for air holes) at the L point as a function of grid resolution for three cases: no ε averaging, with averaging as in Eq. (12), and with averaging backwards from Eq. (12) (P switched with 1−P). The results, shown in Fig. 1, exhibit a significant acceleration of convergence by the averaging; conversely, the poor convergence of the backwards averaging demonstrates the importance of polarization for the smoothing method. With the averaging of Eq. (12), we see that the error decreases with the square of the resolution, just as for standard FDTD methods [18].
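A sketch of Eq. (12) for a single voxel, with the voxel average crudely represented by a list of sampled ε tensors (the sampling scheme and function names are our own assumptions):

```python
import numpy as np

# Sketch of Eq. (12) for one voxel straddling an interface.  eps_samples is a
# (hypothetical) list of 3x3 dielectric tensors sampled inside the voxel,
# n is the interface normal, P = n n^T, and {a, b} = ab + ba.
def effective_inv_eps(eps_samples, n):
    n = np.asarray(n, dtype=float)
    n /= np.linalg.norm(n)
    P = np.outer(n, n)
    mean_inv = np.mean([np.linalg.inv(e) for e in eps_samples], axis=0)  # avg of eps^-1
    inv_mean = np.linalg.inv(np.mean(eps_samples, axis=0))               # (avg of eps)^-1
    acomm = lambda a, b: a @ b + b @ a
    return 0.5 * (acomm(mean_inv, P) + acomm(inv_mean, np.eye(3) - P))

# half air, half eps = 12, across an interface with normal along z:
inv_eps = effective_inv_eps([np.eye(3), 12.0 * np.eye(3)], n=[0, 0, 1])
```

For this two-sample voxel the normal (zz) component is the average of the inverses and the transverse (xx, yy) components are the inverse of the average, as Eq. (11) prescribes.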

As a practical matter, it can be cumbersome to compute (or even to define) the surface normal for complicated three-dimensional structures, so we implement an approximation. Given a flat dielectric interface, the normal direction is given exactly by the direction of ∫ r⃗ε over a spherical surface intersecting the interface, where r⃗ is the vector from the center of the sphere. This procedure also yields the correct normal for spherical and cylindrical interfaces, by symmetry. Therefore, we use this spherical average to define the normal direction in all cases. In order to compute the average, we employ a 50-point 11th-degree spherical-quadrature formula [34] to numerically integrate r⃗ε over a spherical surface inscribed within the ε averaging voxel. (Testing this method by computing the normal vectors to a large number of random planar surfaces, we found a mean angular error of about 5°.)

34. A. H. Stroud, Approximate Calculation of Multiple Integrals (Prentice-Hall, Englewood Cliffs, NJ, 1971).
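The idea of defining the normal by the spherical average of r⃗ε can be sketched as follows; here a crude random sampling of the sphere stands in for the 50-point quadrature rule, so the accuracy is lower than in the text, and all names are our own:

```python
import numpy as np

# Estimate the interface normal as the direction of the average of r * eps
# over a small sphere centered on the point of interest.
def estimate_normal(eps_fn, center, radius, n_samples=2000, seed=0):
    rng = np.random.default_rng(seed)
    r = rng.standard_normal((n_samples, 3))
    r /= np.linalg.norm(r, axis=1, keepdims=True)       # points on the unit sphere
    vals = np.array([eps_fn(center + radius * ri) for ri in r])
    avg = (r * vals[:, None]).mean(axis=0)              # average of r * eps
    return avg / np.linalg.norm(avg)

# planar interface z = 0 between eps = 1 (z < 0) and eps = 12 (z > 0):
eps_fn = lambda x: 12.0 if x[2] > 0 else 1.0
n_hat = estimate_normal(eps_fn, center=np.zeros(3), radius=0.1)
```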

An interesting unanswered question is whether a similar effective ε tensor would improve the convergence of FDTD methods, for which scalar averaging methods have already been explored to improve modeling of discontinuous material interfaces [35,36].

35. J. Nadobny, D. Sullivan, P. Wust, M. Seebass, P. Deuflhard, and R. Felix, “A high-resolution interpolation at arbitrary interfaces for the FDTD method,” IEEE Trans. Microwave Theory Tech. 46, 1759–1766 (1998).

36. P. Yang, K. N. Liou, M. I. Mishchenko, and B.-C. Gao, “Efficient finite-difference time-domain scheme for light scattering by dielectric particles: application to aerosols,” Appl. Opt. 39, 3727–3737 (2000).

2.4 Preconditioners

A critical factor in the performance of an iterative eigensolver is the choice of “preconditioning” operator. As will be explained in Sec. 3, our preconditioner requires us to supply an approximate inverse Ã⁻¹ of A such that Ã⁻¹h can be computed quickly. The choice of a “good” preconditioner is highly problem-dependent, and is unfortunately a matter of trial and error. We consider two possible preconditioners. The first is a diagonal preconditioner [5,12,37], inspired by the “kinetic energy” preconditioners often used in electronic calculations [38]. In this case, Ã is simply the diagonal elements of A:

12. D. C. Dobson, “An efficient method for band structure calculations in 2D photonic crystals,” J. Comput. Phys. 149, 363–376 (1999).

37. R. D. Meade, private communications.

38. M. C. Payne, M. P. Teter, D. C. Allan, T. A. Arias, and J. D. Joannopoulos, “Iterative minimization techniques for ab initio total-energy calculations: molecular dynamics and conjugate gradients,” Rev. Mod. Phys. 64, 1045–1097 (1992).

\[ \tilde{A}_{\ell m} = \left| \vec{k} + \vec{G}_m \right|^{2} \delta_{\ell m}, \]
(13)

where we have dropped an overall scaling factor (irrelevant to a preconditioner). The motivation for this approximation is that, for large G⃗ components, the operator Âk⃗ is dominated by the curl operations and not by the variations in ε. Computing Ã⁻¹h is then an O(N) diagonal operation.
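Applying Eq. (13) is a single elementwise division; a 1D sketch of our own (a scalar stand-in for the vector problem, with k chosen nonzero so that no |k⃗+G⃗m|² vanishes):

```python
import numpy as np

# Eq. (13) as an O(N) elementwise division (1D scalar stand-in).
N = 32
G = 2.0 * np.pi * np.fft.fftfreq(N, d=1.0 / N)
k = 0.3
diag = (k + G) ** 2                 # the diagonal |k + G_m|^2 of Eq. (13)

def precondition(residual):
    return residual / diag          # approximate inverse applied in O(N)

r = np.random.default_rng(5).standard_normal(N)
z = precondition(r)
```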

We also consider a more accurate inverse [37]. Ideally, one would like to simply reverse each of the steps in computing Ah via the planewave representation of Eq. (10) to find an exact inverse. Note that the cross product with k⃗+G⃗ is invertible when restricted to the vectors perpendicular to k⃗+G⃗. So, the only operation of Eq. (10) that is not trivially reversible is the final cross product with k⃗+G⃗ℓ, since −iωE⃗ = ε⁻¹∇⃗×H⃗ is not generally transverse (divergenceless). Since ε is normally piecewise constant and ∇⃗·εE⃗ = 0, however, it is plausible that E⃗ is “mostly” divergenceless. With this in mind, we approximate the operator Âk⃗ by inserting projection operators P̂T onto the transverse field components:

Ã = (∇⃗+ik⃗)× P̂_T (1/ε) P̂_T (∇⃗+ik⃗)× P̂_T,
(14)

where the rightmost P̂_T is superfluous and serves only to make the operator clearly Hermitian. Both curls are now reversible, since they act on transverse planewaves. The inverse of Eq. (14) can thus be applied in O(N log N) time in a manner exactly analogous to Eq. (10): invert the k⃗+G⃗_ℓ cross product (projecting the result), FFT, multiply by ε, inverse FFT, and invert the k⃗+G⃗_m cross product. This preconditioner is more expensive to compute than that of Eq. (13), but we shall show that it yields a substantial speedup in the eigensolver.
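The steps just listed can be sketched in a few lines. This is our own illustration (array conventions, names, and the assumption k⃗ ≠ 0 are ours, not the MPB source); fields are stored as 3-vectors over the grid of G⃗-vectors, and eps is the dielectric function on the real-space grid:

```python
import numpy as np

def cross_inv(kG, v):
    """Invert the (k+G) cross product on transverse fields: if v is
    perpendicular to k+G, then (k+G) x cross_inv(kG, v) = v."""
    kG2 = np.sum(kG ** 2, axis=-1, keepdims=True)
    return -np.cross(kG, v) / kG2

def project_T(kG, v):
    """Project v (per planewave) onto the plane transverse to k+G."""
    kG2 = np.sum(kG ** 2, axis=-1, keepdims=True)
    return v - kG * np.sum(kG * v, axis=-1, keepdims=True) / kG2

def apply_Ainv(h, kG, eps):
    """Approximate inverse of Eq. (14) in O(N log N) time: invert one
    cross product (projecting the result), FFT to real space, multiply
    by eps, FFT back, and invert the other cross product.  Assumes
    k != 0 so that no |k+G| vanishes."""
    e = project_T(kG, cross_inv(kG, h))
    d = np.fft.fftn(eps[..., None] * np.fft.ifftn(e, axes=(0, 1, 2)),
                    axes=(0, 1, 2))
    return project_T(kG, cross_inv(kG, d))
```

For constant ε this pipeline inverts the corresponding forward operator exactly on transverse fields; for varying ε it is only approximate, which is all a preconditioner needs.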

2.4.1 Removing the singularity

There is one evident obstacle in this preconditioning process: at k⃗=0 (the “Γ” point), the operator Â_k⃗ is singular. (This will also be a problem for iterative methods such as conjugate gradient, which are known to converge poorly for ill-conditioned matrices [46].) This difficulty is easily overcome, however, because the singular solutions are known analytically: they are the constant-field solutions at ω=0. So, at Γ we simply remove these singular solutions from the basis space (only considering planewaves with G⃗_m ≠ 0), and re-insert them after we have solved for the non-singular eigenvectors.
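A toy analogue of this deflation (ours, not the paper's code): the periodic second-difference matrix is singular with a known constant null vector, just as Â at Γ has the constant-field solutions at ω = 0. Removing that direction from the basis, solving the reduced problem, and re-inserting the analytic solution recovers the full spectrum:

```python
import numpy as np

n = 8
# periodic second-difference matrix: singular, with the constant vector
# as its known zero mode (the analogue of the constant fields at Gamma)
A = 2 * np.eye(n) - np.roll(np.eye(n), 1, axis=0) - np.roll(np.eye(n), -1, axis=0)
null = np.ones(n) / np.sqrt(n)

# orthonormal basis for the complement of the known singular direction
U, s, _ = np.linalg.svd(np.eye(n) - np.outer(null, null))
P = U[:, : n - 1]                                # columns span {null}^perp

reduced = P.T @ A @ P                            # nonsingular (n-1)x(n-1) problem
evals = np.linalg.eigvalsh(reduced)

# re-insert the analytically known singular solution (eigenvalue 0)
spectrum = np.sort(np.append(evals, 0.0))
```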

3 Iterative Eigensolvers

Iterative eigensolvers are fast methods for computing the p ≪ n smallest (or largest, or extremal) eigenvalues and eigenvectors of an n×n generalized eigenproblem Ay = λBy, by iteratively improving guesses for the eigenvectors until convergence is achieved. (B is the identity in our case, but we consider the full problem here for generality.) They are thus ideally suited for the problem of finding the few lowest eigenstates of Maxwell’s equations. Many iterative eigensolver algorithms have been developed, but we focus our investigations on two in particular: preconditioned conjugate-gradient minimization of the block Rayleigh quotient [5,38–40], and the Davidson method [41–43] (an extension of Lanczos’s algorithm [44]). We choose these two because they are able to take advantage of the good preconditioners available for this problem, and because they are Krylov-subspace methods: at each step, they compute the best solution in the subspace of all previously tried directions (with some approximations, described below) [39,45].

As a test case for the convergence of the algorithms discussed below, we use a 3d diamond lattice of dielectric spheres (ε=12) in air, which has a gap between its second and third bands [3]. Except where otherwise noted, we solve for the first five bands at the L wavevector point with a 16×16×16 grid resolution in the (affine) primitive cell (~22.6 grid points per lattice constant), tracking the error in the eigenvalue trace versus the converged value, with the non-diagonal preconditioner of Eq. (14). A set of five pseudo-random fields is used as the starting point for the eigensolver, and we report the results from the case that converges in the median number of iterations. Note that the number of iterations in practice will often be less than that reported here: not only is the minimum solution tolerance usually larger, but one typically starts with the fields from a nearby k point (yielding a significant “head start” vs. random fields).

3.1 Conjugate-gradient minimization of the Rayleigh quotient

The smallest eigenvalue λ₀ and the corresponding eigenvector y₀ of a Hermitian matrix A are well known to satisfy a variational problem: they minimize the expression

λ₀ = min_{y₀} (y₀†Ay₀) / (y₀†By₀),
(15)

known as the “Rayleigh quotient,” where † denotes the Hermitian adjoint (conjugate transpose). One can then compute this eigenpair by performing an unconstrained minimization of the Rayleigh quotient, using a method such as conjugate gradient [46]. To find the subsequent eigenvalue and eigenvector, the minimization is repeated while maintaining orthogonality to y₀ (through B), and so on, a process known as “deflation.” As in all iterative methods, the matrix A need never be computed explicitly, only the products Ay (and By). This method [39] for computing the smallest eigenpairs of a matrix, also called a “Rayleigh-Ritz” algorithm, was employed by [5] to solve for the eigenstates of Maxwell’s equations in a planewave basis.
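A minimal self-contained sketch of this procedure (ours, with B = I and plain gradient descent standing in for conjugate gradient, for brevity):

```python
import numpy as np

def lowest_eigenpairs(A, p, iters=500, step=0.1):
    """Find the p smallest eigenpairs of symmetric A by minimizing the
    Rayleigh quotient of Eq. (15), one eigenvector at a time, with
    deflation against the previously found eigenvectors."""
    rng = np.random.default_rng(0)
    found = []
    for _ in range(p):
        y = rng.standard_normal(A.shape[0])
        for _ in range(iters):
            for v in found:                     # deflation: project out the
                y -= v * (v @ y)                # eigenvectors already found
            y /= np.linalg.norm(y)
            lam = y @ A @ y                     # current Rayleigh quotient
            y -= step * 2 * (A @ y - lam * y)   # step down the gradient
        for v in found:
            y -= v * (v @ y)
        y /= np.linalg.norm(y)
        found.append(y)
    Y = np.array(found).T
    return np.array([v @ A @ v for v in found]), Y
```

Only the products Ay are needed, exactly as stated above, so A could equally well be a function handle applying the Maxwell operator.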

3.1.1 The block Rayleigh quotient

We use an extension of the conjugate-gradient Rayleigh-quotient method, adapted from [40], that solves for all of the eigenvectors at once instead of one-by-one with deflation. Such a process is called a “block” algorithm, and it has two advantages in our case: first, it can take advantage of efficient block matrix operations that have superior cache utilization on modern computers [47,48]; second, on parallel machines, block algorithms can communicate in larger chunks that mask latencies. Actually, as described in Sec. 3.4, we employ a hybrid of block algorithms and deflation, but such a generalization can only be made if one has a block method to begin with.

Let Y be an n×p matrix whose columns are the p eigenvectors with the smallest eigenvalues. Then Y minimizes the trace tr[Y†AY] subject to the orthonormality constraint Y†BY = I (where I is the identity matrix) [50]. Although it is possible to perform such a minimization directly by respecting the differential geometry of the constraint manifold [49], it is more convenient to introduce a rescaling, similar to that of the Rayleigh quotient, that makes the problem unconstrained.3 If we write:

Y = Z (Z†BZ)^(−1/2)
(16)

for an arbitrary non-singular (full-rank) n×p matrix Z, then the orthonormality constraint is automatically satisfied; this is known as the “polar” or “symmetric” orthogonalization of Z, and happens to be optimal from a certain numerical-stability standpoint [51]. More importantly, we can now solve for the eigenvectors by performing an unconstrained minimization of the block Rayleigh quotient:

tr[Z†AZU],
(17)

where U ≡ (Z†BZ)⁻¹. Once the minimization is completed, the eigenvectors are a superposition of the columns of Z, and can be found by diagonalizing the p×p matrix Y†AY, where Y is given by Eq. (16).
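The polar orthogonalization of Eq. (16) is easy to sketch (our own illustration; the inverse square root is taken via a Hermitian eigendecomposition of the small p×p Gram matrix):

```python
import numpy as np

def polar_orthonormalize(Z, B):
    """Y = Z (Z^dagger B Z)^(-1/2), the polar orthogonalization of Eq. (16)."""
    M = Z.conj().T @ B @ Z                      # p x p Gram matrix Z^dagger B Z
    w, V = np.linalg.eigh(M)                    # Hermitian inverse square root
    return Z @ (V @ np.diag(w ** -0.5) @ V.conj().T)

# usage: any full-rank Z becomes B-orthonormal
rng = np.random.default_rng(1)
Z = rng.standard_normal((10, 3))
B = np.diag(np.arange(1.0, 11.0))
Y = polar_orthonormalize(Z, B)
```

The constraint Y†BY = I then holds automatically, so tr[Y†AY] can be minimized without constraints, as the text describes.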

Fig. 2. Eigensolver convergence for two variants of conjugate gradient, Fletcher-Reeves and Polak-Ribiere, along with preconditioned steepest-descent for comparison.

3.1.2 Conjugate gradient

The gradient of Eq. (17) with respect to Z† is:

G = P̂AZU,
(18)

where P̂ is the projection operator onto the space orthogonal to Z: P̂ ≡ 1 − BZUZ†. The conjugate minimization direction D is then:

D = K̂G + γD₀,
(19)

where G₀ and D₀ denote the gradient and conjugate direction of the previous iteration, K̂ is a preconditioning operator (Sec. 3.1.3), and:

γ = tr[G†K̂G] / tr[G₀†K̂G₀]
(20)

in the Fletcher-Reeves variant or:

γ = tr[(G−G₀)†K̂G] / tr[G₀†K̂G₀]
(21)

in the Polak–Ribière variant. Once the conjugate direction is determined, one needs to minimize Eq. (17) over Z′ = Z + αD. By substituting Z′ into Eq. (17), one finds a function of α and constant p×p matrices, which we then minimize via a one-dimensional optimization algorithm specifically designed for use with conjugate-gradient-like methods [52]. Alternatively, one could make a two-point approximation for the second derivative of the function along the line and fit to a quadratic (i.e., apply one step of Newton’s method) [38,40]. This method requires only two O(p²n) matrix multiplications (out of the eight per iteration), but often produces somewhat slower and less reliable convergence.
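To make the structure concrete, here is a small unpreconditioned sketch of our own (B = I, dense numpy, a crude grid-refined line search in place of [52], and sign conventions chosen so that D is a descent direction); it illustrates the algorithm, not the paper's implementation:

```python
import numpy as np

def orthonormalize(Z):
    """Polar orthonormalization, Eq. (16) with B = I."""
    w, V = np.linalg.eigh(Z.T @ Z)
    return Z @ (V @ np.diag(w ** -0.5) @ V.T)

def line_minimize(Z, D, A):
    """Minimize Eq. (17) over Z' = Z + alpha*D.  As in the text, the
    objective along the line involves only constant p x p matrices,
    so each evaluation is cheap; we use a refined grid search."""
    M = [Z.T @ Z, Z.T @ D + D.T @ Z, D.T @ D]
    N = [Z.T @ A @ Z, Z.T @ A @ D + D.T @ A @ Z, D.T @ A @ D]
    def f(a):
        Ma = M[0] + a * M[1] + a * a * M[2]
        Na = N[0] + a * N[1] + a * a * N[2]
        return np.trace(np.linalg.solve(Ma, Na))
    lo, hi, best = 0.0, 2.0, 0.0
    for _ in range(4):                          # successively refined grids
        grid = np.linspace(lo, hi, 31)
        best = min(grid, key=f)
        h = (hi - lo) / 30
        lo, hi = best - h, best + h
    return best

def block_cg(A, p, iters=300):
    """Polak-Ribiere conjugate-gradient minimization of the block
    Rayleigh quotient, Eq. (17), with B = I and no preconditioner."""
    rng = np.random.default_rng(0)
    Z = orthonormalize(rng.standard_normal((A.shape[0], p)))
    G0 = D0 = None
    for _ in range(iters):
        AZ = A @ Z
        G = AZ - Z @ (Z.T @ AZ)                 # projected gradient, Eq. (18)
        if G0 is None:
            D = -G
        else:
            den = np.trace(G0.T @ G0)           # Polak-Ribiere gamma, Eq. (21)
            gamma = 0.0 if den < 1e-30 else np.trace((G - G0).T @ G) / den
            D = -G + gamma * D0
        Z = orthonormalize(Z + line_minimize(Z, D, A) * D)
        G0, D0 = G, D
    lam, W = np.linalg.eigh(Z.T @ A @ Z)        # final Rayleigh-Ritz rotation
    return lam, Z @ W
```

Because Z is re-orthonormalized each iteration, U = I and the gradient simplifies to AZ − Z(Z†AZ), which is the P̂AZU of Eq. (18).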

3.1.3 Preconditioning

Large gains in the rate of convergence can be achieved by choosing a proper Hermitian preconditioning operator K̂ in Eq. (19): ideally, an approximate inverse of the Hessian (second-derivative) matrix of the objective functional, Eq. (17).4 One can think of this as an application of the multi-dimensional Newton’s method to find the zero of the gradient G: the Newton update of the solution guess Z is the function G divided by its derivative. Equivalently, if we are at

Z=Z*+δZ,
(22)

where Z* is the unknown minimum, we wish to solve for δZ in terms of G to lowest order. Thus, we substitute Eq. (22) into Eq. (18) for the gradient and expand to first order in δZ, noting that G* = 0, and find [53]:

G ≅ P̂(AδZ − BδZ UZ†AZ)U.
(23)

Suppose that Z were rotated to diagonalize the generalized eigenproblem Z†AZ x = λ̃ Z†BZ x. Then the second term in Eq. (23) would simply be BδZ times the current eigenvalue approximations λ̃. Inverting Eq. (23) thus involves the inversion of A − Bλ̃. At this point, however, we make an additional approximation: since the desired eigenvalues λ̃ are small, we neglect that term and use:

G ≅ P̂AδZU.
(24)
Fig. 3. The effect of two preconditioning schemes from section 2.4, diagonal and transverse-projection (non-diagonal), on the conjugate-gradient method.

An inverse of this equation is given by the Hermitian operation δZ = A⁻¹GU⁻¹, which is then further approximated by:

δZ ≅ K̂G = Ã⁻¹GU⁻¹,
(25)

where we have used an approximate inverse for A such as those in Sec. 2.4. The effects of the preconditioner of Eq. (25) with our two approximate inverses are demonstrated in Fig. 3, showing significant benefit from the non-diagonal preconditioner.

Additional refinements are possible that we do not consider here. One could solve Eq. (23) exactly by an iterative method, or at least apply a few iterations thereof to improve the preconditioner. Also, because of the projection operator P̂, one has a choice of how to invert Eqs. (23, 24): it has been suggested in a similar context that one should choose δZ in the space orthogonal to Z [43]. This can be done at the expense of a few extra matrix operations, but it did not seem to improve convergence significantly in our case. Such a choice would become more important, however, if one attempted a more accurate preconditioner that inverts expressions of the form A − Bλ, lest singularities arise.

3.2 The Davidson method

In the Davidson method, instead of iteratively improving a single (block) eigenvector approximation Y, one builds up an increasing subspace V that eventually contains the desired eigenvectors [42]. While both the Davidson and conjugate-gradient methods are Krylov-subspace algorithms, this is only exactly true of conjugate-gradient for pure quadratic forms, and of Davidson in the absence of restarting (described below). We summarize the method briefly in the following.

At each iteration, V is an n×q matrix for some q ≥ p, whose columns span the current subspace, with V†BV = I. The task is to find p new vectors to add to V so that it spans a better approximation for the desired p eigenvectors. To do this, one first computes the best p eigenvectors Y so far: A is projected to the V subspace, A_v = V†AV, and the eigenvectors of this q×q matrix are computed; of these, one chooses the block of the p smallest eigenvalues, L = diag(λ₁,…,λ_p), and their associated q×p eigenvectors Y_v, and computes the “Ritz” eigenvectors Y = VY_v. The residual (i.e., error) for this eigenvalue approximation is R = BYL − AY. The new directions to be added to V are then given by D = K̂R (orthogonalized, to maintain the orthonormality of V), where K̂ is a preconditioning operator.
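A bare-bones sketch of this iteration may make the bookkeeping clearer. It is our own simplification (B = I, a diagonal preconditioner in place of the Ã⁻¹ discussed below, dense numpy, and tolerances of our choosing):

```python
import numpy as np

def davidson(A, p, iters=60, restart=4):
    """Block Davidson for the p smallest eigenpairs of symmetric A,
    with B = I, periodic restarts, and a diagonal preconditioner."""
    n = A.shape[0]
    rng = np.random.default_rng(0)
    V = np.linalg.qr(rng.standard_normal((n, p)))[0]
    diagA = np.diag(A)
    for it in range(iters):
        Av = V.T @ A @ V                        # project A onto the subspace
        w, Wv = np.linalg.eigh(Av)
        lam, Wv = w[:p], Wv[:, :p]              # p smallest Ritz values L
        Y = V @ Wv                              # Ritz eigenvectors Y = V Y_v
        R = Y * lam - A @ Y                     # block residual R = B Y L - A Y
        if np.linalg.norm(R) < 1e-11:
            break
        denom = diagA[:, None] - lam[None, :]   # diagonal preconditioner
        denom = np.where(np.abs(denom) < 1e-2, 1e-2, denom)
        D = R / denom                           # new directions D = K R
        if (it + 1) % restart == 0:
            V = Y                               # restart: reset V to Ritz vectors
        V = np.linalg.qr(np.hstack([V, D]))[0]  # enlarge and re-orthonormalize
    return lam, Y
```

The `restart` parameter plays the role described in Fig. 4: resetting V = Y more often saves memory at some cost in convergence.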

Fig. 4. Comparison of the Davidson method with the block conjugate-gradient algorithm of section 3.1. We reset the Davidson subspace to the best current eigenvectors every 2, 3, 4, or 5 iterations, with a corresponding increase in memory usage and computational cost.

To keep the subspace limited to a reasonable size, this process is usually restarted every few iterations by setting V=Y, with some tradeoff in speed of convergence. (Alternatively, one could keep the r lowest eigenvectors for some r>p [43].) Restarting in this way on every iteration (never increasing the subspace) is a form of steepest descent; a method of this sort was used in [12]. The ideal preconditioner for the Davidson method, remarkably, has been shown to be the same as that of Eq. (23) for trace minimization, combined with the fact that Y in this case is an orthonormal solution to the projected eigenproblem. That is, the preconditioned residual K̂R should be an approximate solution for D in [43]:

R = P̂(AD − BDL),
(26)

where L is the diagonal matrix of eigenvalues from above and P̂ is the projection matrix 1 − BYY† as in Sec. 3.1.2. Again, D should ideally be found in the space orthogonal to Y [43]; this is the “Jacobi-Davidson” algorithm. To solve Eq. (26), we make the same approximations as in Sec. 3.1.3 and use K̂R = Ã⁻¹R. The Davidson method is also adaptable to non-Hermitian operators, and it has thus been employed to treat electromagnetic problems containing absorption [14].

Fig. 5. Conjugate-gradient convergence of the lowest TM eigenvalue for the “interior” eigensolver of Eq. (27), solving for the monopole defect state formed by one vacancy in a 2D square lattice of dielectric rods in air, using three different supercell sizes (3×3, 5×5, and 7×7).

3.3 Interior eigenvalues

One of the most interesting aspects of photonic crystals is the possibility of localized modes associated with point defects (cavities) and line defects (waveguides) in the crystal [1,54]. In this case, the desired modes lie in a known frequency range (the band gap) in the interior of the spectrum. Moreover, since a supercell of some sort must be used to surround the defect with sufficient bulk crystal, the underlying non-localized modes are “folded” many times (increasing their number by the number of primitive cells in the supercell). Ideally, one would like to compute only the defect modes in the band gap, without the waste of computation and memory of finding all the folded modes below them. One way to accomplish this is to use the FDTD algorithms referenced in Sec. 1. Here, we demonstrate that it is also possible to compute only the interior eigenvalues using iterative frequency-domain methods.

The center of the band gap is known, so we can state the problem as one of finding the eigenvalues and eigenvectors closest to the mid-gap frequency ω_m. Since the methods presented above compute the minimum eigenvalues of an operator, we can apply them directly merely by shifting the spectrum [55]:

Â′_k⃗ = (Â_k⃗ − ω_m²/c²)².
(27)

This operator has the same eigenvectors as Â_k⃗, but its lowest eigenvalues correspond to the frequencies closest to ω_m, and thus we can compute a single defect state without computing any other eigenstates. As a preconditioner for this operator, we again approximate ω_m as small and simply use K̂G = Ã⁻²GU⁻¹, essentially the square of Eq. (25). This imperfect preconditioner and the worsened condition number of Eq. (27) will lead to slower eigensolver convergence, but this is generally more than compensated for by the reduction in the number of bands computed (and the resulting savings in memory and time per iteration). To demonstrate this “interior” eigensolver, we consider the case of a two-dimensional square lattice of dielectric rods in air, with a single rod removed from the crystal. Such a defect supports one confined TM mode in the gap [54], and in Fig. 5 we plot the eigensolver convergence of the lowest TM eigenvalue of Eq. (27), for 3×3 to 7×7 supercells with a resolution of 16 grid points per lattice constant. (To find the same state in a 7×7 supercell via the ordinary method, p=49 bands must be computed.)
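On a toy Hermitian matrix the shift of Eq. (27) looks like this (our illustration; a dense solver stands in for the iterative eigensolver, whose usage is unchanged by the shift):

```python
import numpy as np

rng = np.random.default_rng(0)
Q, _ = np.linalg.qr(rng.standard_normal((12, 12)))
A = Q @ np.diag(np.arange(12.0)) @ Q.T           # "bands" at 0, 1, ..., 11
sigma = 6.3                                      # mid-gap target, between 6 and 7

# Eq. (27) analogue: the shifted-and-squared operator has the same
# eigenvectors as A, but its SMALLEST eigenvalue now belongs to the
# eigenvalue of A closest to sigma.
Ashift = (A - sigma * np.eye(12)) @ (A - sigma * np.eye(12))

w, V = np.linalg.eigh(Ashift)                    # any minimum-eigenvalue solver
v = V[:, 0]                                      # works here unchanged
interior = v @ A @ v                             # Rayleigh quotient recovers the
                                                 # interior eigenvalue (~6)
```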

An alternative method for computing interior eigenvalues using a modified version of the Davidson method has also been suggested [43]; we intend to investigate this algorithm in future work, along with more accurate preconditioners (e.g. based on iterative linear solvers).

3.4 To block or not to block?

We have described the use of block eigensolvers to solve for all of the desired eigenstates simultaneously. The classical algorithm of computing them one by one with deflation has its advantages, however. First, it uses less memory: O(pn) memory is required to store the eigenstates themselves, but only O(n) memory is required for auxiliary matrices such as G and D, versus O(pn) for the block method. (At least two such auxiliary matrices seem to be required for conjugate gradient, although three is more convenient, and one more for the Polak-Ribière method.) Second, because our preconditioner is a better approximation for the lower bands (where ω is smallest), the eigensolver convergence of the upper bands is slower; it is more efficient to compute them separately. Third, the eigensolver iterations themselves can require fewer operations for the deflation algorithm, as described below.

3.5 Scaling

One final question that we wish to address is that of scaling: how does the eigensolver convergence rate scale with the size of the problem? Specifically, we consider the convergence of the block conjugate-gradient algorithm for the diamond structure as we increase either the resolution or the number of bands. In both cases, we plot the mean number of iterations (over five runs with random starting fields) for the eigensolver to converge to within a fractional tolerance of 10⁻⁷ (which is probably lower than the tolerance that would be used in practice). In the case of the increased number of bands, we solve for the bands in blocks of b=5 bands, as described in Sec. 3.4, and use the average number of iterations per block. The results, plotted in Fig. 6, demonstrate that the number of iterations increases only very slowly as the problem size is increased. (We suspect that the slowed convergence for the increased number of bands results largely from the worsening of the small-ω approximation in our preconditioner.) In contrast, FDTD algorithms must scale their number of iterations (or, strictly speaking, Δt) linearly with the spatial resolution in order to maintain stability.

Fig. 6. Scaling of the number of conjugate-gradient iterations required for convergence (to a fractional tolerance of 10⁻⁷) as a function of the spatial resolution (in grid points per lattice constant, with a corresponding planewave cutoff), or the number p of bands computed.

4 Conclusion

We have presented efficient preconditioned block-iterative algorithms for computing eigenstates of Maxwell’s equations for periodic dielectric systems using a planewave basis. Such methods, combined with appropriate effective dielectric tensors for accuracy, interior eigensolvers to compute only the desired modes in defect systems, and a flexible and freely-available implementation, provide an attractive way to perform eigen-analyses of diverse electromagnetic systems.

Acknowledgments

This work was supported in part by the Materials Research Science and Engineering Center program of the National Science Foundation under Grant No. DMR-9400334 and by the U. S. Army Research Office under Grant No. DAAG55-97-1-0366. S. G. Johnson is grateful for the support of a National Defense Science and Engineering Graduate Fellowship and an MIT Karl Taylor Compton Fellowship.

Footnotes

1 This is equivalent to m_j = 0,…,N_j−1 for the DFT, in which m_j is interpreted modulo N_j, but choosing zero-centered wavevectors is important when taking derivatives of the basis.
2 This method for defining ε̃ can produce suboptimal results when the averaging voxel straddles two near-parallel dielectric interfaces. Preliminary investigations show that gains of a factor of two or more in eigenvalue accuracy are sometimes possible if a better approximation for ε̃ is used in such cases.
3 We also tried using the differential-geometry conjugate-gradient algorithm of [49]. In addition to requiring more matrix operations per iteration than the method described here, it also seemed to work much more poorly with our choice of preconditioners, for unknown reasons.
4 In the literature, “preconditioner” sometimes refers instead to K̂⁻¹, the approximate Hessian.

References and links

1. See, e.g., J. D. Joannopoulos, P. R. Villeneuve, and S. Fan, “Photonic crystals: putting a new twist on light,” Nature (London) 386, 143–149 (1997).

2. S. G. Johnson and J. D. Joannopoulos, The MIT Photonic-Bands Package home page http://ab-initio.mit.edu/mpb/.

3. K. M. Ho, C. T. Chan, and C. M. Soukoulis, “Existence of a photonic gap in periodic dielectric structures,” Phys. Rev. Lett. 65, 3152–3155 (1990).

4. H. S. Sozüer and J. W. Haus, “Photonic bands: convergence problems with the plane-wave method,” Phys. Rev. B 45, 13962–13972 (1992).

5. R. D. Meade, A. M. Rappe, K. D. Brommer, J. D. Joannopoulos, and O. L. Alerhand, “Accurate theoretical analysis of photonic band-gap materials,” Phys. Rev. B 48, 8434–8437 (1993). Erratum: S. G. Johnson, ibid. 55, 15942 (1997).

6. T. Suzuki and P. K. L. Yu, “Method of projection operators for photonic band structures with perfectly conducting elements,” Phys. Rev. B 57, 2229–2241 (1998).

7. K. Busch and S. John, “Liquid-crystal photonic-band-gap materials: the tunable electromagnetic vacuum,” Phys. Rev. Lett. 83, 967–970 (1999).

8. J. Jin, The Finite-Element Method in Electromagnetics (Wiley, New York, 1993), Chap. 5.7.

9. A. Figotin and Y. A. Godin, “The computation of spectra of some 2D photonic crystals,” J. Comput. Phys. 136, 585–598 (1997).

10. W. C. Sailor, F. M. Mueller, and P. R. Villeneuve, “Augmented-plane-wave method for photonic band-gap materials,” Phys. Rev. B 57, 8819–8822 (1998).

11. W. Axmann and P. Kuchment, “An efficient finite element method for computing spectra of photonic and acoustic band-gap materials: I. Scalar case,” J. Comput. Phys. 150, 468–481 (1999).

12. D. C. Dobson, “An efficient method for band structure calculations in 2D photonic crystals,” J. Comput. Phys. 149, 363–376 (1999).

13. D. Mogilevtsev, T. A. Birks, and P. St. J. Russell, “Localized function method for modeling defect modes in 2D photonic crystals,” J. Lightwave Tech. 17, 2078–2081 (1999).

14. S. J. Cooke and B. Levush, “Eigenmode solution of 2-D and 3-D electromagnetic cavities containing absorbing materials using the Jacobi-Davidson algorithm,” J. Comput. Phys. 157, 350–370 (2000).

15. K. M. Leung, “Defect modes in photonic band structures: a Green’s function approach using vector Wannier functions,” J. Opt. Soc. Am. B 10, 303–306 (1993).

16. J. P. Albert, C. Jouanin, D. Cassagne, and D. Bertho, “Generalized Wannier function method for photonic crystals,” Phys. Rev. B 61, 4381–4384 (2000).

17. E. Lidorikis, M. M. Sigalas, E. N. Economou, and C. M. Soukoulis, “Tight-binding parameterization for photonic band-gap materials,” Phys. Rev. Lett. 81, 1405–1408 (1998).

18. See, e.g., K. S. Kunz and R. J. Luebbers, The Finite Difference Time Domain Methods (CRC, Boca Raton, Fla., 1993).

19. C. T. Chan, S. Datta, Q. L. Yu, M. Sigalas, K. M. Ho, and C. M. Soukoulis, “New structures and algorithms for photonic band gaps,” Physica A 211, 411–419 (1994).

20. C. T. Chan, Q. L. Lu, and K. M. Ho, “Order-N spectral method for electromagnetic waves,” Phys. Rev. B 51, 16635–16642 (1995).

21. S. Fan, P. R. Villeneuve, and J. D. Joannopoulos, “Large omnidirectional band gaps in metallo-dielectric photonic crystals,” Phys. Rev. B 54, 11245–11251 (1996).

22. K. Sakoda and H. Shiroma, “Numerical method for localized defect modes in photonic lattices,” Phys. Rev. B 56, 4830–4835 (1997).

23. J. Arriaga, A. J. Ward, and J. B. Pendry, “Order N photonic band structures for metals and other dispersive materials,” Phys. Rev. B 59, 1874–1877 (1999).

24. A. J. Ward and J. B. Pendry, “A program for calculating photonic band structures, Green’s functions and transmission/reflection coefficients using a non-orthogonal FDTD method,” Comput. Phys. Comm. 128, 590–621 (2000).

25. P. Yeh, Optical Waves in Layered Media (Wiley, New York, 1988).

26. J. B. Pendry and A. MacKinnon, “Calculation of photon dispersion relations,” Phys. Rev. Lett. 69, 2772–2775 (1992).

27. P. M. Bell, J. B. Pendry, L. M. Moreno, and A. J. Ward, “A program for calculating photonic band structures and transmission coefficients of complex structures,” Comput. Phys. Comm. 85, 306–322 (1995).

28. J. M. Elson and P. Tran, “Dispersion in photonic media and diffraction from gratings: a different modal expansion for the R-matrix propagation technique,” J. Opt. Soc. Am. A 12, 1765–1771 (1995).

29. J. M. Elson and P. Tran, “Coupled-mode calculation with the R-matrix propagator for the dispersion of surface waves on truncated photonic crystal,” Phys. Rev. B 54, 1711–1715 (1996).

30. J. Chongjun, Q. Bai, Y. Miao, and Q. Ruhu, “Two-dimensional photonic band structure in the chiral medium—transfer matrix method,” Opt. Commun. 142, 179–183 (1997).

31. V. A. Mandelshtam and H. S. Taylor, “Harmonic inversion of time signals,” J. Chem. Phys. 107, 6756–6769 (1997). Erratum: ibid. 109, 4128 (1998).

32. N. W. Ashcroft and N. D. Mermin, Solid State Physics (Holt Saunders, Philadelphia, 1976).

33. M. Frigo and S. G. Johnson, “FFTW: an adaptive software architecture for the FFT,” in Proc. 1998 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Institute of Electrical and Electronics Engineers, New York, 1998), 1381–1384.

34. A. H. Stroud, Approximate Calculation of Multiple Integrals (Prentice-Hall, Englewood Cliffs, NJ, 1971).

35. J. Nadobny, D. Sullivan, P. Wust, M. Seebass, P. Deuflhard, and R. Felix, “A high-resolution interpolation at arbitrary interfaces for the FDTD method,” IEEE Trans. Microwave Theory Tech. 46, 1759–1766 (1998).

36. P. Yang, K. N. Liou, M. I. Mishchenko, and B.-C. Gao, “Efficient finite-difference time-domain scheme for light scattering by dielectric particles: application to aerosols,” Appl. Opt. 39, 3727–3737 (2000).

37. R. D. Meade, private communications.

38. M. C. Payne, M. P. Teter, D. C. Allan, T. A. Arias, and J. D. Joannopoulos, “Iterative minimization techniques for ab initio total-energy calculations: molecular dynamics and conjugate gradients,” Rev. Mod. Phys. 64, 1045–1097 (1992).

39. See, e.g., A. Edelman and S. T. Smith, “On conjugate gradient-like methods for eigen-like problems,” BIT 36, 494–509 (1996).

40. S. Ismail-Beigi and T. A. Arias, “New algebraic formulation of density functional calculation,” Comp. Phys. Commun. 128, 1–45 (2000).

41. E. R. Davidson, “The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices,” J. Comput. Phys. 17, 87–94 (1975).

42. M. Crouzeix, B. Philippe, and M. Sadkane, “The Davidson Method,” SIAM J. Sci. Comput. 15, 62–76 (1994).

43. G. L. G. Sleijpen and H. A. van der Vorst, “A Jacobi-Davidson iteration method for linear eigenvalue problems,” SIAM J. Matrix Anal. Appl. 17, 401–425 (1996).

44. B. N. Parlett, The Symmetric Eigenvalue Problem (Prentice-Hall, Englewood Cliffs, NJ, 1980).

45. H. A. van der Vorst, “Krylov subspace iteration,” Computing in Sci. and Eng. 2, 32–37 (2000).

46. P. E. Gill, W. Murray, and M. H. Wright, Practical Optimization (Academic, London, 1981).

47. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, “A set of Level 3 Basic Linear Algebra Subprograms,” ACM Trans. Math. Soft. 16, 1–17 (1990).

48. E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users’ Guide (SIAM, Philadelphia, 1999).

49. A. Edelman, T. A. Arias, and S. T. Smith, “The geometry of algorithms with orthogonality constraints,” SIAM J. Matrix Anal. Appl. 20, 303–353 (1998).

50. A. H. Sameh and J. A. Wisniewski, “A trace minimization algorithm for the generalized eigenvalue problem,” SIAM J. Numer. Anal. 19, 1243–1259 (1982).

51. B. Philippe, “An algorithm to improve nearly orthonormal sets of vectors on a vector processor,” SIAM J. Alg. Disc. Meth. 8, 396–403 (1987).

52. J. J. Moré and D. J. Thuente, “Line search algorithms with guaranteed sufficient decrease,” ACM Trans. Math. Software 20, 286–307 (1994).

53. S. Ismail-Beigi, private communications.

54. P. R. Villeneuve, S. Fan, and J. D. Joannopoulos, “Microcavities in photonic crystals: mode symmetry, tunability, and coupling efficiency,” Phys. Rev. B 54, 7837–7842 (1996).

55. L.-W. Wang and A. Zunger, “Solving Schrödinger’s equation around a desired energy: application to Silicon quantum dots,” J. Chem. Phys. 100, 2394–2397 (1994).

55.

L.-W. Wang and A. Zunger, “Solving Schrödinger’s equation around a desired energy: application to Silicon quantum dots,” J. Chem. Phys. 100, 2394–2397 (1994). [CrossRef]

OCIS Codes
(000.3860) General : Mathematical methods in physics
(000.4430) General : Numerical approximation and analysis

ToC Category:
Focus Issue: Photonic bandgap calculations

History
Original Manuscript: November 17, 2000
Published: January 29, 2001

Citation
Steven Johnson and John Joannopoulos, "Block-iterative frequency-domain methods for Maxwell's equations in a planewave basis," Opt. Express 8, 173-190 (2001)
http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-8-3-173

References

  1. See, e.g., J. D. Joannopoulos, P. R. Villeneuve, and S. Fan, "Photonic crystals: putting a new twist on light," Nature (London) 386, 143-149 (1997). [CrossRef]
  2. S. G. Johnson and J. D. Joannopoulos, The MIT Photonic-Bands Package home page http://ab-initio.mit.edu/mpb/.
  3. K. M. Ho, C. T. Chan, and C. M. Soukoulis, "Existence of a photonic gap in periodic dielectric structures," Phys. Rev. Lett. 65, 3152-3155 (1990). [CrossRef] [PubMed]
  4. H. S. Sözüer and J. W. Haus, "Photonic bands: convergence problems with the plane-wave method," Phys. Rev. B 45, 13962-13972 (1992). [CrossRef]
  5. R. D. Meade, A. M. Rappe, K. D. Brommer, J. D. Joannopoulos, and O. L. Alerhand, "Accurate theoretical analysis of photonic band-gap materials," Phys. Rev. B 48, 8434-8437 (1993). Erratum: S. G. Johnson, ibid 55, 15942 (1997). [CrossRef]
  6. T. Suzuki and P. K. L. Yu, "Method of projection operators for photonic band structures with perfectly conducting elements," Phys. Rev. B 57, 2229-2241 (1998). [CrossRef]
  7. K. Busch and S. John, "Liquid-crystal photonic-band-gap materials: the tunable electromagnetic vacuum," Phys. Rev. Lett. 83, 967-970 (1999). [CrossRef]
  8. J. Jin, The Finite-Element Method in Electromagnetics (Wiley, New York, 1993), Chap. 5.7.
  9. A. Figotin and Y. A. Godin, "The computation of spectra of some 2D photonic crystals," J. Comput. Phys. 136, 585-598 (1997). [CrossRef]
  10. W. C. Sailor, F. M. Mueller, and P. R. Villeneuve, "Augmented-plane-wave method for photonic band-gap materials," Phys. Rev. B 57, 8819-8822 (1998). [CrossRef]
  11. W. Axmann and P. Kuchment, "An efficient finite element method for computing spectra of photonic and acoustic band-gap materials: I. Scalar case," J. Comput. Phys. 150, 468-481 (1999). [CrossRef]
  12. D. C. Dobson, "An efficient method for band structure calculations in 2D photonic crystals," J. Comput. Phys. 149, 363-376 (1999). [CrossRef]
  13. D. Mogilevtsev, T. A. Birks, and P. St. J. Russell, "Localized function method for modeling defect modes in 2D photonic crystals," J. Lightwave Tech. 17, 2078-2081 (1999). [CrossRef]
  14. S. J. Cooke and B. Levush, "Eigenmode solution of 2-D and 3-D electromagnetic cavities containing absorbing materials using the Jacobi-Davidson algorithm," J. Comput. Phys. 157, 350-370 (2000). [CrossRef]
  15. K. M. Leung, "Defect modes in photonic band structures: a Green's function approach using vector Wannier functions," J. Opt. Soc. Am. B 10, 303-306 (1993). [CrossRef]
  16. J. P. Albert, C. Jouanin, D. Cassagne, and D. Bertho, "Generalized Wannier function method for photonic crystals," Phys. Rev. B 61, 4381-4384 (2000). [CrossRef]
  17. E. Lidorikis, M. M. Sigalas, E. N. Economou, and C. M. Soukoulis, "Tight-binding parameterization for photonic band-gap materials," Phys. Rev. Lett. 81, 1405-1408 (1998). [CrossRef]
  18. See, e.g., K. S. Kunz and R. J. Luebbers, The Finite Difference Time Domain Methods (CRC, Boca Raton, Fla., 1993).
  19. C. T. Chan, S. Datta, Q. L. Yu, M. Sigalas, K. M. Ho, and C. M. Soukoulis, "New structures and algorithms for photonic band gaps," Physica A 211, 411-419 (1994). [CrossRef]
  20. C. T. Chan, Q. L. Lu, and K. M. Ho, "Order-N spectral method for electromagnetic waves," Phys. Rev. B 51, 16635-16642 (1995). [CrossRef]
  21. S. Fan, P. R. Villeneuve, and J. D. Joannopoulos, "Large omnidirectional band gaps in metallo-dielectric photonic crystals," Phys. Rev. B 54, 11245-11251 (1996). [CrossRef]
  22. K. Sakoda and H. Shiroma, "Numerical method for localized defect modes in photonic lattices," Phys. Rev. B 56, 4830-4835 (1997). [CrossRef]
  23. J. Arriaga, A. J. Ward, and J. B. Pendry, "Order N photonic band structures for metals and other dispersive materials," Phys. Rev. B 59, 1874-1877 (1999). [CrossRef]
  24. A. J. Ward and J. B. Pendry, "A program for calculating photonic band structures, Green's functions and transmission/reflection coefficients using a non-orthogonal FDTD method," Comput. Phys. Comm. 128, 590-621 (2000). [CrossRef]
  25. P. Yeh, Optical Waves in Layered Media (Wiley, New York, 1988).
  26. J. B. Pendry and A. MacKinnon, "Calculation of photon dispersion relations," Phys. Rev. Lett. 69, 2772-2775 (1992). [CrossRef] [PubMed]
  27. P. M. Bell, J. B. Pendry, L. M. Moreno, and A. J. Ward, "A program for calculating photonic band structures and transmission coefficients of complex structures," Comput. Phys. Comm. 85, 306-322 (1995). [CrossRef]
  28. J. M. Elson and P. Tran, "Dispersion in photonic media and diffraction from gratings: a different modal expansion for the R-matrix propagation technique," J. Opt. Soc. Am. A 12, 1765-1771 (1995). [CrossRef]
  29. J. M. Elson and P. Tran, "Coupled-mode calculation with the R-matrix propagator for the dispersion of surface waves on truncated photonic crystal," Phys. Rev. B 54, 1711-1715 (1996). [CrossRef]
  30. J. Chongjun, Q. Bai, Y. Miao, and Q. Ruhu, "Two-dimensional photonic band structure in the chiral medium - transfer matrix method," Opt. Commun. 142, 179-183 (1997). [CrossRef]
  31. V. A. Mandelshtam and H. S. Taylor, "Harmonic inversion of time signals," J. Chem. Phys. 107, 6756-6769 (1997). Erratum: ibid, 109, 4128 (1998). [CrossRef]
  32. N. W. Ashcroft and N. D. Mermin, Solid State Physics (Holt Saunders, Philadelphia, 1976).
  33. M. Frigo and S. G. Johnson, "FFTW: an adaptive software architecture for the FFT," in Proc. 1998 IEEE Intl. Conf. on Acoustics, Speech, and Signal Processing (Institute of Electrical and Electronics Engineers, New York, 1998), 1381-1384.
  34. A. H. Stroud, Approximate Calculation of Multiple Integrals (Prentice-Hall, Englewood Cliffs, NJ, 1971).
  35. J. Nadobny, D. Sullivan, P. Wust, M. Seebass, P. Deuflhard, and R. Felix, "A high-resolution interpolation at arbitrary interfaces for the FDTD method," IEEE Trans. Microwave Theory Tech. 46, 1759-1766 (1998). [CrossRef]
  36. P. Yang, K. N. Liou, M. I. Mishchenko, and B.-C. Gao, "Efficient finite-difference time-domain scheme for light scattering by dielectric particles: application to aerosols," Appl. Opt. 39, 3727-3737 (2000). [CrossRef]
  37. R. D. Meade, private communications.
  38. M. C. Payne, M. P. Tater, D. C. Allan, T. A. Arias, and J. D. Joannopoulos, "Iterative minimization techniques for ab initio total-energy calculations: molecular dynamics and conjugate gradients," Rev. Mod. Phys. 64, 1045-1097 (1992). [CrossRef]
  39. See, e.g., A. Edelman and S. T. Smith, "On conjugate gradient-like methods for eigen-like problems," BIT 36, 494-509 (1996). [CrossRef]
  40. S. Ismail-Beigi and T. A. Arias, "New algebraic formulation of density functional calculation," Comp. Phys. Commun. 128, 1-45 (2000). [CrossRef]
  41. E. R. Davidson, "The iterative calculation of a few of the lowest eigenvalues and corresponding eigenvectors of large real-symmetric matrices," Comput. Phys. 17, 87-94 (1975). [CrossRef]
  42. M. Crouzeix, B. Philippe, and M. Sadkane, "The Davidson Method," SIAM J. Sci. Comput. 15, 62-76 (1994). [CrossRef]
  43. G. L. G. Sleijpen and H. A. van der Vorst, "A Jacobi-Davidson iteration method for linear eigenvalue problems," SIAM J. Matrix Anal. Appl. 17, 401-425 (1996). [CrossRef]
  44. B. N. Parlett, The Symmetric Eigenvalue Problem (Prentice-Hall, Englewood Cliffs, NJ, 1980).
  45. H. A. van der Vorst, "Krylov subspace iteration," Computing in Sci. and Eng. 2, 32-37 (2000). [CrossRef]
  46. P. E. Gill, W. Murray, and M. H. Wright, Practical Optimization (Academic, London, 1981).
  47. J. J. Dongarra, J. Du Croz, I. S. Duff, and S. Hammarling, "A set of Level 3 Basic Linear Algebra Subprograms," ACM Trans. Math. Soft. 16, 1-17 (1990). [CrossRef]
  48. E. Anderson, Z. Bai, C. Bischof, S. Blackford, J. Demmel, J. Dongarra, J. Du Croz, A. Greenbaum, S. Hammarling, A. McKenney, and D. Sorensen, LAPACK Users' Guide (SIAM, Philadelphia, 1999). [CrossRef]
  49. A. Edelman, T. A. Arias, and S. T. Smith, "The geometry of algorithms with orthogonality constraints," SIAM J. Matrix Anal. Appl. 20, 303-353 (1998). [CrossRef]
  50. A. H. Sameh and J. A. Wisniewski, "A trace minimization algorithm for the generalized eigenvalue problem," SIAM J. Numer. Anal. 19, 1243-1259 (1982). [CrossRef]
  51. B. Philippe, "An algorithm to improve nearly orthonormal sets of vectors on a vector processor," SIAM J. Alg. Disc. Meth. 8, 396-403 (1987). [CrossRef]
  52. J. J. Moré and D. J. Thuente, "Line search algorithms with guaranteed sufficient decrease," ACM Trans. Math. Software 20, 286-307 (1994). [CrossRef]
  53. S. Ismail-Beigi, private communications.
  54. P. R. Villeneuve, S. Fan, and J. D. Joannopoulos, "Microcavities in photonic crystals: mode symmetry, tunability, and coupling efficiency," Phys. Rev. B 54, 7837-7842 (1996). [CrossRef]
  55. L.-W. Wang and A. Zunger, "Solving Schrödinger's equation around a desired energy: application to Silicon quantum dots," J. Chem. Phys. 100, 2394-2397 (1994). [CrossRef]
