## An adaptive smoothness regularization algorithm for optical tomography

Optics Express, Vol. 16, Issue 24, pp. 19957-19977 (2008)

http://dx.doi.org/10.1364/OE.16.019957


### Abstract

In diffuse optical tomography (DOT), an object with unknown optical properties is illuminated with near infrared light, and the absorption and diffusion coefficient distributions of the body are estimated from the scattering and transmission data. The problem is notoriously ill-posed, and complementary information concerning the optical properties is needed to counteract the ill-posedness. In this article, we propose an adaptive, inhomogeneous, anisotropic smoothness regularization scheme that encodes the prior information that the unknown object has a blocky structure. The algorithm alternates between updating the current estimate and updating the smoothness penalty functional, and we demonstrate with simulated data that it is capable of locating blocky inclusions well. Compared to traditional smoothness regularization schemes, the dynamical range of the reconstruction is improved and the crosstalk between the diffusion and absorption images is clearly reduced. The algorithm is also tested with three-dimensional phantom data.

© 2008 Optical Society of America

## 1. Introduction

The method proposed here is based on a *pilot image*, interpreted as a smooth approximation of the underlying structure, and iteratively improves the approximation, eventually finding a solution that is locally smooth but may have fast variations across the boundaries of the inclusions.


## 2. Material and methods

### 2.1. Forward model and inverse problem

For a given angular modulation frequency *ω* > 0, the photon density *ϕ* = *ϕ*(**x**) in the diffusion approximation satisfies the elliptic partial differential equation

$$ -\nabla\cdot\big(\kappa(\mathbf{x})\nabla\phi\big) + \Big(\mu_a(\mathbf{x}) + \frac{\mathrm{i}\omega}{c}\Big)\phi = 0 \quad\text{in } \Omega, $$

where Ω ⊂ ℝ^*d*, *d* = 2, 3, is a bounded domain representing the target, and the diffusion coefficient is defined in terms of the absorption coefficient *μ*_a(**x**) and the reduced scattering coefficient *μ*′_s(**x**) as *κ*(**x**) = (*d*(*μ*_a(**x**) + *μ*′_s(**x**)))^(−1). The appropriate boundary condition is a Robin condition,

$$ \phi + 2\zeta\kappa\,\frac{\partial\phi}{\partial n} = q \quad\text{on } \partial\Omega, $$

where *q* is the boundary source and the impedance coefficient *ζ* takes into account the refractive mismatch of the body and the outside air on the boundary. We use the semi-empirical model

$$ \zeta = \frac{1+R}{1-R}, $$

where *R* is the reflection coefficient of the boundary, determined by the refraction index *n* of the medium and *n*_air = 1.0; see [16]. Above, *ω* = 2*πf* with *f* = 100 MHz, and *c* is the speed of light in the medium, assumed to be constant.

The inverse problem is to estimate the distributed parameters {(*μ*_a(**x**), *κ*(**x**)) | **x** ∈ Ω} from the noisy observations of the outcoming photon densities corresponding to all applicable boundary sources. Given the boundary source *q*, the outcoming photon density on the boundary, or exitance, is defined as

$$ \Gamma_q = -\kappa\,\frac{\partial\phi}{\partial n}\Big|_{\partial\Omega}. $$

Hence, the inverse problem is to determine the pair (*μ*_a, *κ*) from all available pairs (*q*, Γ_q), where the measured exitance Γ_q is corrupted by noise. In practice, only a finite number of boundary sources *q*^ℓ, 1 ≤ *ℓ* ≤ *L*, is applied, and the data consist of the corresponding photon densities, denoted by Γ^ℓ, recorded only at a few boundary points *r*_j, 1 ≤ *j* ≤ *m*, where *m* is the number of detectors at the boundary. Using the amplitude-phase representation and ignoring the measurement noise, the data set is thus

$$ \Big\{\, \log\big|\Gamma^{\ell}(r_j)\big|,\ \arg\Gamma^{\ell}(r_j) \ \Big|\ 1 \le j \le m,\ 1 \le \ell \le L \,\Big\}. \tag{5} $$
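As a concrete illustration of the amplitude-phase representation, the following sketch assembles the data block for a single boundary source; the complex exitance values are made up here, standing in for measured ones:

```python
import numpy as np

# Made-up complex exitance samples Gamma(r_j) at m = 3 detectors for one
# boundary source; real values would come from the measurement system.
gamma = np.array([1e-3 + 2e-3j, 5e-4 - 1e-4j, 2e-3 + 5e-4j])

log_amplitude = np.log(np.abs(gamma))   # log |Gamma|
phase = np.angle(gamma)                 # arg Gamma, the phase shift

y_ell = np.concatenate([log_amplitude, phase])  # data block for this source
print(y_ell.shape)  # (6,)
```

Stacking one such block per source gives the full data vector used below.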

#### 2.1.1. Discretization

Assume that a finite element mesh of Ω ⊂ ℝ^*d*, *d* = 2, 3, is given, with vertices **x**_i, 1 ≤ *i* ≤ *N*, and assume that the unknown optical distributed parameters *μ*_a and *κ* are discretized and represented by the collocation values (*μ*_a(**x**_i), *κ*(**x**_i)) at the nodal points. We denote by *p* ∈ ℝ^(2N) the vector containing the *N* collocation values of both unknown parameters. Using finite elements, we may discretize the forward diffusion approximation model to obtain a discrete forward model. Let us denote by *y*^ℓ ∈ ℝ^(2m) the vector containing the boundary data (5) corresponding to the amplitude and phase of the exitance Γ^ℓ at the detector locations *r*_j, 1 ≤ *j* ≤ *m*. By stacking all the observation vectors into a single vector *y*, the finite element model leads to a discrete forward model

$$ y = f(p). $$

#### 2.1.2. Statistical inference

Following the Bayesian framework of statistical inversion [17], we model the measurement noise as additive, *y* = *f*(*p*) + *e*, where the noise vector *e* is distributed according to a probability density *π*_noise. This model allows us to write the *likelihood density* of the measurement,

$$ \pi(y \mid p) \propto \pi_{\mathrm{noise}}\big(y - f(p)\big), $$

where the symbol "∝" stands for "equal up to a multiplicative constant". Hence, the data conditional on the true signal is distributed around the noiseless value *f*(*p*) as the noise is distributed around the origin.

We assume that prior information about the parameter vector *p* is available, and this information is expressed in the form of a probability density *π*_prior(*p*). This density, the *prior probability density*, expresses our belief about values that we expect, where *π*_prior(*p*) is large, and, on the other hand, it excludes values of *p* that we believe to be unlikely or impossible, where *π*_prior(*p*) is small or vanishes. The *joint probability density* of the random variables *p* and *y* can be written as

$$ \pi(p, y) = \pi_{\mathrm{prior}}(p)\,\pi(y \mid p). $$

Reversing the roles of the observable and the unknown in the equation for the joint probability density, we may define the probability density of *p* conditioned on the observation. The result is Bayes' equation for the *posterior probability density*,

$$ \pi(p \mid y) = \frac{\pi_{\mathrm{prior}}(p)\,\pi(y \mid p)}{\pi(y)}. $$

Above, *π*(*y*) is the marginal density of *y*, obtained by integrating out the variable *p* from the joint probability density; it is ignored here, as it contributes only to the normalization of the posterior density. In the Bayesian framework, the posterior density is the solution of the inverse problem, since it is an expression of the information that is available about *p*, taking the prior belief, the noisy measurement, and the model between the unknown and the data into account.

If the noise and the prior are Gaussian with covariance matrices *C*_n and *C*_p, both positive definite, then the posterior density assumes the form

$$ \pi(p \mid y) \propto \exp\Big(-\tfrac12\,\big(y - f(p)\big)^{\mathsf T} C_n^{-1}\big(y - f(p)\big) - \tfrac12\, p^{\mathsf T} C_p^{-1} p\Big). $$

By writing the symmetric (e.g., Cholesky) decompositions of the inverses of the covariance matrices, *C*_n^(−1) = *S*_n^T *S*_n and *C*_p^(−1) = *S*_p^T *S*_p, the density can be expressed in the form

$$ \pi(p \mid y) \propto \exp\Big(-\tfrac12\,\big\|S_n\big(y - f(p)\big)\big\|^2 - \tfrac12\,\|S_p\, p\|^2\Big). $$

This equation is particularly useful for establishing the connection between regularization and statistics. We observe that the maximizer of the posterior density, known as the Maximum A Posteriori (MAP) estimate of *p*, denoted by *p*_MAP, satisfies

$$ p_{\mathrm{MAP}} = \arg\min_p \Big\{ \big\|S_n\big(y - f(p)\big)\big\|^2 + \|S_p\, p\|^2 \Big\}, $$

which is the *Tikhonov regularized solution* of the inverse problem with the *penalty functional* ‖*S*_p *p*‖² and the regularization parameter equal to unity. Hence, an appropriate choice of a Gaussian prior density leads to the Tikhonov regularized solution as the MAP estimate, and conversely, a judiciously chosen penalty functional helps to design a Gaussian prior density. We use this well-known interplay in both directions in the discussion to ensue.

#### 2.1.3. Construction of the prior

The prior belief we wish to encode is that *the object consists of well defined simple subdomains with slowly varying optical properties, while across the subdomain boundaries, sharp variations may occur.* Such a description is often said to define a *blocky object*, and techniques for retrieving the parameter fields in such cases have been developed [18–20]. In this work, we follow the adaptive smoothness prior approach developed in [11–14].

#### 2.1.4. Structural interior prior

Let *u* : Ω → ℝ denote a twice differentiable function to be estimated from indirect observations. In the present application, *u* is either the diffusion coefficient or the absorption coefficient. Furthermore, let *λ* : Ω → ℝ be a given nonnegative weight function such that *λ*|_∂Ω = 1. We can then define an isotropic second order penalty functional with weight *λ* ≥ 0 in the interior of Ω of the form

$$ \mathcal{J}(u;\lambda) = \int_\Omega \big(\nabla\cdot(\lambda\nabla u)\big)^2\, d\mathbf{x}. \tag{8} $$

The role of the penalty functional in the estimation problem is to favor solutions yielding small values. To understand better for which functions *u* the penalty takes on small values, note that, when *λ* is fixed, the minimizer of *u* ↦ 𝒥(*u*; *λ*) is a steady state diffusion field with diffusion coefficient *λ*, i.e., a solution of ∇·(*λ*∇*u*) = 0. In other words, the function *λ* controls which parts of the image *u* mix together via diffusion, thus conveying structural prior information to our model. Strategies for the choice of *λ* will be discussed later in this section.

Assume that a mesh with vertices **x**_i is given. Let Ω_i denote the Voronoi cell containing an interior vertex **x**_i; see Fig. 1. To discretize the penalty functional, we integrate over each cell and write

$$ \int_{\Omega_i} \nabla\cdot\big(\lambda\nabla u\big)\, d\mathbf{x} = \int_{\partial\Omega_i} \lambda\,\frac{\partial u}{\partial n}\, dS, $$

where the identity follows from the divergence theorem, with *n* denoting the exterior normal vector of the boundary ∂Ω_i. To approximate the integral on the right, let **x**_i be an interior vertex, 1 ≤ *i* ≤ *N*_int, let **x**_j denote its neighboring vertices, and let Γ_ij be the face of ∂Ω_i separating the vertices **x**_i and **x**_j. Approximating the normal derivative across each face by a finite difference, we obtain

$$ \int_{\partial\Omega_i} \lambda\,\frac{\partial u}{\partial n}\, dS \approx \sum_{j} |\Gamma_{ij}|\,\lambda_{ij}\, \frac{u(\mathbf{x}_j) - u(\mathbf{x}_i)}{|\mathbf{x}_j - \mathbf{x}_i|}, \tag{9} $$

where λ_ij is the value of *λ* on the face Γ_ij. Collecting in the vector *p* the values of the function *u* at the collocation points, *p*_i = *u*(**x**_i), the above approximation leads to a matrix representation of the second order differential in the interior points, where the matrix *L*_int, whose elements are computed using (9), depends on the weight function *λ*. The discrete counterpart of the interior penalty functional (8) is therefore

$$ \mathcal{J}(u;\lambda) \approx \|L_{\mathrm{int}}\, p\|^2. $$

We partition the vector *p* into the vectors *p*_int ∈ ℝ^(N_int) and *p*_bdry ∈ ℝ^(N_bdry) containing the interior node and the boundary node values, respectively. Assuming, for the time being, that the boundary values are known, we may define the following probability density of the interior nodes conditional on the boundary nodes,

$$ \pi(p_{\mathrm{int}} \mid p_{\mathrm{bdry}}) \propto \exp\Big(-\tfrac12\,\|L_{\mathrm{int}}\, p\|^2\Big). $$

To obtain a proper prior for the vector *p*, taking into account that the boundary values are also unknown, it suffices to construct a prior density for the boundary node values.
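The assembly of *L*_int can be sketched for a toy configuration: one interior node surrounded by four boundary nodes. The face measures and weight values below are placeholders, not from a real mesh:

```python
import numpy as np

# Assemble the rows of L_int for a tiny made-up "mesh". Following (9), each
# Voronoi face contributes  w_ij * (p_j - p_i)  to the row of node i, with
#   w_ij = |face_ij| * lambda_ij / |x_j - x_i|.
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [-1.0, 0.0], [0.0, -1.0]])
interior = [0]                     # node 0 is interior, nodes 1-4 are boundary
edges = [(0, 1), (0, 2), (0, 3), (0, 4)]
face = {e: 1.0 for e in edges}     # |Gamma_ij|, made up
lam = {e: 1.0 for e in edges}      # weight lambda on each face (flat pilot)

N = len(nodes)
L_int = np.zeros((len(interior), N))
for row, i in enumerate(interior):
    for (a, b) in edges:
        if i not in (a, b):
            continue
        j = b if a == i else a
        w = face[(a, b)] * lam[(a, b)] / np.linalg.norm(nodes[j] - nodes[i])
        L_int[row, j] += w
        L_int[row, i] -= w

# Constant vectors lie in the null space, as for a discrete diffusion operator.
print(L_int @ np.ones(N))  # [0.]
```

The vanishing row sums reflect the fact that the smoothness penalty does not see additive constants, which is why the boundary density below is needed to obtain a proper prior.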

#### 2.1.5. Variance adjustment at the boundary

Let **x**_i ∈ ∂Ω, 1 ≤ *i* ≤ *N*_bdry, denote the boundary nodes. The smoothness penalty functional derived above involves the values of *u* at the boundary points, but in order to obtain a proper prior density, additional information concerning the regularity over the boundary is required. Our approach follows the reasoning in [15]. Here, for simplicity, we restrict the explanation to the case where the surface ∂Ω is smooth.

Assuming that *u* is second order smooth along the boundary, we define a second order boundary penalty,

$$ \mathcal{J}_{\mathrm{bdry}}(u) = \int_{\partial\Omega} \big(\Delta_\tau u\big)^2\, dS, \tag{12} $$

where Δ_τ denotes the surface Laplacian. The mesh defined over Ω, restricted to the boundary, induces a meshing of the boundary. We discretize the surface Laplacian analogously to the interior Laplacian, that is,

$$ \int_{\Omega_i \cap\, \partial\Omega} \Delta_\tau u\, dS = \int_{C_i} \frac{\partial u}{\partial \nu}\, d\ell, $$

where *C*_i is the boundary of Ω_i ∩ ∂Ω and *ν* is its normal vector tangent to ∂Ω. If Ω ⊂ ℝ², the calculation of the rightmost integral reduces to evaluations at two points. Approximating the normal derivatives with finite differences at neighboring collocation points, as in the case of the interior points, we arrive at a discrete approximation of the boundary penalty functional,

$$ \mathcal{J}_{\mathrm{bdry}}(u) \approx \|L_{\mathrm{bdry}}\, p_{\mathrm{bdry}}\|^2, $$

where *L*_bdry is the finite difference matrix which approximates the integral (12). We then define the Gaussian marginal smoothness density corresponding to this penalty functional,

$$ \pi(p_{\mathrm{bdry}}) \propto \exp\Big(-\tfrac{\alpha}{2}\,\|L_{\mathrm{bdry}}\, p_{\mathrm{bdry}}\|^2\Big), $$

where *α* > 0 is a scaling constant, whose role will be discussed below.

Multiplying the conditional density *π*(*p*_int | *p*_bdry) and the marginal density *π*(*p*_bdry) yields the prior density

$$ \pi_{\mathrm{prior}}(p) \propto \exp\Big(-\tfrac12\,\|L_{\mathrm{int}}\, p\|^2 - \tfrac{\alpha}{2}\,\|L_{\mathrm{bdry}}\, p_{\mathrm{bdry}}\|^2\Big). $$

If we partition the matrix *L*_int according to the partitioning of *p* into interior and boundary values, *L*_int = [*L*′_int *L*″_int], *L*′_int ∈ ℝ^(N_int×N_int), *L*″_int ∈ ℝ^(N_int×N_bdry), we can express the prior density in the form

$$ \pi_{\mathrm{prior}}(p) \propto \exp\Big(-\tfrac12\,\|L p\|^2\Big), $$

where *L* is the matrix

$$ L = \begin{bmatrix} L'_{\mathrm{int}} & L''_{\mathrm{int}} \\ 0 & \sqrt{\alpha}\, L_{\mathrm{bdry}} \end{bmatrix}. $$

To choose the value of the parameter *α*, we require that when *λ* = 1, the variances of the components of *p* in the interior and at the boundary are of the same order of magnitude, recalling that the variance of a component *p*_i can be calculated from the prior density by the formula

$$ \operatorname{var}(p_i) = \mathrm{E}\big[(e_i^{\mathsf T} p)^2\big], $$

where *e*_i is the *i*th unit coordinate vector. The details can be found in Appendix A.

#### 2.1.6. Structural prior

We now discuss the choice of the weight function *λ* in the definition of the smoothness penalty functional, and an adaptive updating scheme for it. From Eq. (9) it follows that *λ* between the collocation points **x**_i and **x**_j should be small if we *believe* that the function value is likely to change significantly between these two collocation points, while it should be large if the collocation values are believed to be close to each other.

Given an approximation *û* of the unknown function *u* to be estimated, which we will refer to as the *pilot function*, we can define *λ* across the face separating **x**_i and **x**_j to be of the form

$$ \lambda_{ij} = \frac{1}{1 + \tau\,\big|\hat u(\mathbf{x}_j) - \hat u(\mathbf{x}_i)\big|^{k}}, $$

where *τ* > 0 and *k* ∈ ℕ are shape parameters of the function *λ*. Since the value of *λ* depends on the orientation of the edge, this choice makes the smoothness prior direction sensitive. The rationale for this choice is that when the pilot function has a strong gradient in the given direction, the prior does not penalize *u* for changing rapidly while passing from **x**_i to **x**_j. The discretization of the smoothness operator formed using the pilot image *û*, denoted by *L*_û, can therefore be formed knowing only the values of the pilot image at the collocation points.

### 2.2. Adaptation

Assume that *κ̂* and *μ̂*_a are the pilot functions of the diffusion and absorption coefficients, respectively. Since these coefficients are estimated simultaneously from the data, we define the matrix

$$ W = \begin{bmatrix} \gamma_1 L_{\hat\kappa} & 0 \\ 0 & \gamma_2 L_{\hat\mu_a} \end{bmatrix}, \tag{16} $$

where *γ*_1, *γ*_2 > 0 are scaling constants. Similarly as in [11, 12], the current estimate and the smoothness penalty are updated in alternation:

- Initialize *κ̂* = 1, *μ̂*_a = 1, set *j* = 1 and prescribe a maximum number of iterations *n*_iter.
- Using the current pilot images *κ̂* and *μ̂*_a, form the matrix *W* according to (16).
- Update the current estimate by solving the minimization subproblem
  $$ p_j = \arg\min_p \Big\{ \big\|S_n\big(y - f(p)\big)\big\|^2 + \delta\,\|W p\|^2 \Big\}. \tag{17} $$
- Set the pilot images equal to the corresponding components of *p*_j, increase *j* by one, and repeat from the second step until *j* = *n*_iter.

The choice of the parameter *δ* > 0 in (17) will be discussed in the next section, where we address various issues concerning the practical implementation of the algorithm and the details of the numerical experiments. The solution of the minimization subproblem (17) is computed with a damped Gauss-Newton algorithm.

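To illustrate the structure of the alternating scheme, here is a minimal one-dimensional sketch. It is not the paper's DOT solver: the forward model is the identity, the weight formula and all parameter values are illustrative assumptions, and the Tikhonov subproblem is solved in closed form instead of by Gauss-Newton:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
p_true = np.zeros(n)
p_true[40:60] = 1.0                           # blocky target profile
y = p_true + 0.05 * rng.standard_normal(n)    # identity forward model + noise

tau, k, delta, n_iter = 50.0, 2, 0.1, 10      # illustrative parameter values

def smoothness_matrix(pilot):
    """First-difference matrix, weighted edge-by-edge by lambda(pilot)."""
    d = np.abs(np.diff(pilot))
    lam = 1.0 / (1.0 + tau * d**k)            # small weight across jumps
    W = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    W[idx, idx] = -lam
    W[idx, idx + 1] = lam
    return W

pilot = np.ones(n)                            # flat initial pilot image
for _ in range(n_iter):
    W = smoothness_matrix(pilot)              # adapt the penalty to the pilot
    # Tikhonov subproblem; closed form since the model here is linear
    p_est = np.linalg.solve(np.eye(n) + delta * (W.T @ W), y)
    pilot = p_est                             # new pilot image

# The adapted penalty preserves the jumps while smoothing within the blocks.
print(np.max(p_est[45:55]) > 0.8)  # True
```

The key design point carried over from the text is that the penalty is re-formed from the newest estimate at every sweep, so smoothing is suppressed exactly where the pilot suggests an edge.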

## 3. Results

### 3.1. Simulated data

The refraction index *n* = 1.4 is assumed known, and 17 optical fibers are uniformly spaced along the circumference of the target. The data are simulated by sequentially letting one of the fibers act as a source and the remaining ones as detectors, leading to 16 × 17 = 272 measured amplitudes and phases. Due to the wide dynamical range, amplitudes are routinely expressed on a logarithmic scale, corresponding to the model of Section 2.1.

The simulated data are corrupted with additive Gaussian noise, with standard deviation *σ*_log|Γ| = 10^(−3) max |log|Γ|| = 0.0228 for the component added to the logarithm of the amplitude, and *σ*_φ = 10^(−3) max |*φ*| = 0.0016 for the component added to the phase shift. Therefore the Cholesky factor of the inverse of the noise covariance matrix is the diagonal matrix whose diagonal entries are 1/*σ*_log|Γ| at the amplitude components and 1/*σ*_φ at the phase components.

The shape parameters are set to *k* = 2 and *τ* = 50. The scaling coefficients *γ*_1 and *γ*_2 appearing in the definition (16) of *W* are set in the first iteration round, when the pilot images are flat and the penalty is a homogeneous second order smoothness penalty, and are never changed afterwards. The criterion used for the selection of the scaling is that no pixel values of the simulated inclusions should deviate from the background by more than two standard deviations. This adjustment can be done once, together with the boundary variance adjustment. In a realistic situation, the standard deviation needs to be estimated based on the prior belief about the dynamical range of the coefficients.

Ideally, *δ* should be adjusted at each iteration sweep, e.g., using the Morozov discrepancy principle, but this strategy would increase the computational burden significantly. In our computed examples, we set the value of *δ* to 10^(−3) once and for all. The maximum number of iteration steps was set to *n*_iter = 20, and up to 15 Gauss-Newton iteration steps were allowed for the solution of the minimization problem. Within each Gauss-Newton step, the new search direction is computed by solving the corresponding linear system with the GMRES algorithm [23].

The top left panel a) corresponds to the absorption coefficient *μ*_a and the top right one b) to the diffusion coefficient *κ*. The image on the left of each panel displays the reconstruction after the first outer iteration, before the smoothness penalty adaptation, while the image on the right shows the final iteration (*n*_iter = 20). The corresponding profiles of the reconstructions along the circular curve indicated in Fig. 2, traversed in counterclockwise direction, are shown underneath.

### 3.2. Reconstructions from phantom data


The data measured from the homogeneous and the non-homogeneous phantoms are denoted by *y*_h and *y*_nh, respectively. To suppress reconstruction artifacts due to the diffusion approximation model, we consider the difference data translated around the computed response of the homogeneous background. In other words, we define

$$ y = y_{\mathrm{nh}} - y_{\mathrm{h}} + f(p_{\mathrm{h}}), $$

where *f*(*p*_h) is the 3D FEM solution computed on a grid of 21207 tetrahedral elements with quadratic basis functions, assuming that the homogeneous optical parameters are known. The absorption and scattering coefficients of the inclusions in the non-homogeneous phantom are approximately twice the values of the background. We estimated the background absorption coefficient at *μ*_a,bg ≈ 0.0097 mm^(−1), the scattering coefficient at *μ*′_s,bg ≈ 1.04 mm^(−1), and the refraction index at *n* = 1.56. A schematic of the non-homogeneous phantom is shown in Fig. 5.

The matrix *L* of the inverted prior covariance matrix is formed as in the simulated examples, now with the shape parameter *τ* = 150. The Tikhonov regularization parameter is held at *δ* = 10^(−6).

The reconstructed images are shown in the plane *z* = 0; see Fig. 5. The scattering coefficient image is calculated from the reconstructed diffusion and absorption coefficient images.

We define the *contrast* of the inclusion, *C*_μa, to be the ratio of the means of *μ*_a inside and outside the inclusion. The contrast *C*_μs′ is defined in a similar manner. Here, the boundary of the inclusions is assumed to be known. The contrasts in the reconstruction with the smoothness prior are *C*_μa = 1.12 and *C*_μs′ = 1.11, respectively, and after eight adaptive iterations they attain the values *C*_μa = 1.57 and *C*_μs′ = 1.21. The algorithm does not completely remove the crosstalk in the absorption image: in fact, at the location of the inclusion whose absorption coefficient equals that of the background, the mean value of *μ*_a is 0.0102 mm^(−1) before the adaptation and 0.0106 mm^(−1) after it. However, since the adaptation considerably improves the dynamical range of the reconstruction, the relative crosstalk is diminished and visually less prominent.
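The contrast measure defined above is straightforward to compute given a mask marking the known inclusion; the image below is a synthetic placeholder:

```python
import numpy as np

def contrast(image, inclusion_mask):
    """Ratio of the mean value inside the inclusion to the mean outside."""
    return image[inclusion_mask].mean() / image[~inclusion_mask].mean()

# Synthetic placeholder image: background 0.01 mm^-1, inclusion 0.0157 mm^-1
mu_a = np.full((32, 32), 0.01)
mask = np.zeros((32, 32), dtype=bool)
mask[10:16, 10:16] = True
mu_a[mask] = 0.0157

print(round(contrast(mu_a, mask), 2))  # 1.57
```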

## 4. Discussion


## A. Variance adjustment

Consider the structure of the matrix *L*. Since the block *L*′_int ∈ ℝ^(N_int×N_int) corresponds to a discrete Laplacian with homogeneous Dirichlet data, it is invertible, while the matrix *L*_bdry ∈ ℝ^(N_bdry×N_bdry) has a one-dimensional null space spanned by the vector of ones, 1 = [1, 1, …, 1]^T ∈ ℝ^(N_bdry). Therefore the null space of *L* is also one-dimensional, spanned by 1 ∈ ℝ^N. Consequently, given *Lp*, it is possible to recover the vector *p* only up to an undetermined additive constant, which can be specified by, e.g., specifying the mean value *μ* over the boundary values *p*_bdry. We therefore define a folding matrix *K* whose range is the (*N*_bdry − 1)-dimensional subspace of ℝ^(N_bdry) perpendicular to the null space of *L*_bdry, and we reparameterize *p*_bdry in terms of this subspace and the mean value *μ*. Without loss of generality, we may assume here that *μ* = 0.

With this assumption, the prior density takes the form

$$ \pi(p') \propto \exp\Big(-\tfrac12\,\|K_\alpha\, p'\|^2\Big), \tag{22} $$

where the matrix *K*_α ∈ ℝ^(N×(N−1)) has full column rank, so that the covariance matrix of *p*′ is (*K*_α^T *K*_α)^(−1). We are interested in the variances of the components of the vector *p* with respect to this density. Consider an interior node value *p*_j, expressed in terms of *p*′ as *p*_j = *e*_j^T *p*′, where *e*_j is the *j*th unit coordinate vector in the reduced coordinates. Denoting by E the expectation with respect to the probability density (22), the variance of the component is

$$ \operatorname{var}(p_j) = \mathrm{E}\big[p_j^2\big] = e_j^{\mathsf T}\big(K_\alpha^{\mathsf T} K_\alpha\big)^{-1} e_j = \|v\|^2, $$

where *v* is the minimum norm solution of the underdetermined linear system

$$ K_\alpha^{\mathsf T}\, v = e_j. $$

Similarly, a boundary node value is picked by a vector with a single non-zero element, and its variance is found to be var(*p*_k) = ‖*u*‖², where *u* is the minimum norm solution of the corresponding system. Requiring var(*p*_j) = var(*p*_k) for representative interior and boundary nodes then fixes the scaling parameter *α*. In practice, the linear systems are solved in the least squares sense using the LSQR algorithm [23, 29].
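The variance identity above can be checked numerically. The sketch below uses a random full-column-rank matrix in place of *K*_α and the pseudoinverse instead of LSQR:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 6
K = rng.standard_normal((N + 2, N))   # stands in for K_alpha; full column rank

# Direct variance of one component: e^T (K^T K)^{-1} e
e = np.zeros(N)
e[2] = 1.0
var_direct = e @ np.linalg.inv(K.T @ K) @ e

# Same variance as ||v||^2, with v the minimum norm solution of K^T v = e.
# (The paper solves such systems with LSQR; pinv is used here for brevity.)
v = np.linalg.pinv(K.T) @ e
var_minnorm = v @ v

print(np.isclose(var_direct, var_minnorm))  # True
```

This works because the minimum norm solution is *v* = *K*(*K*^T *K*)^(−1)*e*, whose squared norm is exactly *e*^T(*K*^T *K*)^(−1)*e*.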

## Acknowledgments

## References and links

1. S. R. Arridge, "Optical Tomography in Medical Imaging," Inverse Probl. **15**, R41–R93 (1999).
2. J. Hebden, A. Gibson, R. Yusof, N. Everdell, E. Hillman, D. Delpy, S. Arridge, T. Austin, J. Meek, and J. Wyatt, "Three-dimensional optical tomography of the premature infant brain," Phys. Med. Biol. **47**, 4155–4166 (2002).
3. J. Hebden and T. Austin, "Optical tomography of the neonatal brain," Eur. Radiol. **17**, 2926–2933 (2007).
4. J. Culver, T. Durduran, D. Furya, C. Cheung, J. Greenberg, and A. Yodh, "Diffuse optical tomography of cerebral blood flow, oxygenation, and metabolism in rat during focal ischemia," J. Cereb. Blood Flow Metab. **23**, 911–924 (2003).
5. J. Kaipio and E. Somersalo, *Statistical and Computational Inverse Problems* (Springer Verlag, 2004).
6. J. Kaipio, V. Kolehmainen, M. Vauhkonen, and E. Somersalo, "Inverse problems with structural prior information," Inverse Probl. **15**, 713–729 (1999).
7. B. Pogue, T. McBride, J. Prewitt, U. Österberg, and K. Paulsen, "Spatially Variant Regularization Improves Diffuse Optical Tomography," Appl. Opt. **38**, 2950–2961 (1999).
8. A. Douiri, M. Schweiger, J. Riley, and S. Arridge, "Anisotropic diffusion regularization methods for diffuse optical tomography using edge prior information," Meas. Sci. Technol. **18**, 87–95 (2007).
9. B. Brooksby, S. Jiang, C. Kogel, M. Duyley, H. Dehgani, J. Weaver, S. Poplack, B. Pogue, and K. Paulsen, "Magnetic resonance guided near infrared tomography of the breast," Rev. Sci. Instrum. **75**, 5262–5270 (2004).
10. P. Yalavarthy, B. Pogue, H. Dehgani, C. Carpenter, S. Jiang, and K. Paulsen, "Structural information within regularization matrices improves near infrared diffuse optical tomography," Opt. Express **15**, 8043–8058 (2007).
11. D. Calvetti, F. Sgallari, and E. Somersalo, "Image inpainting with structural bootstrap priors," Image Vis. Comput. **24**, 782–793 (2006).
12. D. Calvetti and E. Somersalo, "Microlocal sequential regularization in imaging," Inverse Probl. Imaging **1**, 1–11 (2007).
13. D. Calvetti and E. Somersalo, "Gaussian hypermodels and recovery of blocky objects," Inverse Probl. **23**, 733–754 (2007).
14. D. Calvetti and E. Somersalo, "Hypermodels in the Bayesian imaging framework," Inverse Probl. **24**, 034013 (2008).
15. D. Calvetti, J. P. Kaipio, and E. Somersalo, "Aristotelian prior boundary conditions," Int. J. Math. Comp. Sci. **1**, 63–81 (2006).
16. M. Schweiger, S. Arridge, M. Hiraoka, and D. Delpy, "The finite element method for the propagation of light in scattering media: Boundary and source conditions," Med. Phys. **22**, 1779–1792 (1995).
17. D. Calvetti and E. Somersalo, *Introduction to Bayesian Scientific Computing – Ten Lectures on Subjective Computing* (Springer Verlag, 2007).
18. L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D **60**, 259–268 (1992).
19. C. R. Vogel and M. E. Oman, "Fast, robust total variation-based reconstruction of noisy, blurred images," IEEE Trans. Image Process. **7**, 813–824 (1998).
20. P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell. **12**, 629–639 (1990).
21. D. Calvetti and E. Somersalo, "Recovery of shapes: hypermodels and Bayesian learning," Proc. of the Applied Inverse Problems 2007: Theoretical and Computational Aspects, J. Phys.: Conference Series (to appear).
22. "Automatically Tuned Linear Algebra Software (ATLAS)," http://math-atlas.sourceforge.net/ (19 June 2008).
23. Y. Saad, *Iterative Methods for Sparse Linear Systems* (Society for Industrial and Applied Mathematics, 2003).
24. I. Nissilä, J. Hebden, D. Jennions, J. Heino, M. Schweiger, K. Kotilahti, T. Noponen, A. Gibson, S. Järvenpää, L. Lipiäinen, and T. Katila, "Comparison between a time-domain and a frequency-domain system for optical tomography," J. Biomed. Opt. **11**, 064015 (2006).
25. T. Tarvainen, M. Vauhkonen, V. Kolehmainen, and J. P. Kaipio, "Finite element model for the coupled radiative transfer equation and diffusion approximation," Int. J. Numer. Meth. Eng. **63**, 383–405 (2006).
26. J. Heino, E. Somersalo, and J. Kaipio, "Statistical compensation of geometric mismodeling in optical tomography," Opt. Express **13**, 296–308 (2005).
27. S. R. Arridge, J. P. Kaipio, V. Kolehmainen, M. Schweiger, E. Somersalo, T. Tarvainen, and M. Vauhkonen, "Approximation errors and model reduction with an application in optical diffusion tomography," Inverse Probl. **22**, 175–195 (2006).
28. J. P. Kaipio and E. Somersalo, "Statistical inverse problems: discretization, model reduction and inverse crimes," J. Comp. Appl. Math.
29. C. Paige and M. Saunders, "LSQR: An Algorithm for Sparse Linear Equations And Sparse Least Squares," ACM Trans. Math. Software **8**, 43–71 (1982).

**OCIS Codes**

(100.3010) Image processing : Image reconstruction techniques

(100.3190) Image processing : Inverse problems

(100.6890) Image processing : Three-dimensional image processing

(100.6950) Image processing : Tomographic image processing

(170.3010) Medical optics and biotechnology : Image reconstruction techniques

(170.6960) Medical optics and biotechnology : Tomography

**ToC Category:**

Image Processing

**History**

Original Manuscript: June 19, 2008

Revised Manuscript: October 30, 2008

Manuscript Accepted: November 11, 2008

Published: November 20, 2008

**Virtual Issues**

Vol. 4, Iss. 1 *Virtual Journal for Biomedical Optics*

**Citation**

Petri Hiltunen, Daniela Calvetti, and Erkki Somersalo, "An adaptive smoothness regularization algorithm for optical tomography," Opt. Express **16**, 19957-19977 (2008)

http://www.opticsinfobase.org/oe/abstract.cfm?URI=oe-16-24-19957

Sort: Year | Journal | Reset

### References

- S. R. Arridge, "Optical Tomography in Medical Imaging," Inverse Probl. 15, R41-R93 (1999). [CrossRef]
- J. Hebden, A. Gibson, R. Yusof, N. Everdell, E. Hillman, D. Delpy, S. Arridge, T. Austin, J. Meek, and J. Wyatt, "Three-dimensional optical tomography of the premature infant brain," Phys. Med. Biol. 47, 4155-4166 (2002). [CrossRef] [PubMed]
- J. Hebden and T. Austin, "Optical tomography of the neonatal brain," Eur. Radiol. 17, 2926-2933 (2007). [CrossRef] [PubMed]
- J. Culver, T. Durduran, D. Furya, C. Cheung, J. Greenberg, and A. Yodh, "Diffuse optical tomography of cerebral blood flow, oxygenation, and metabolism in rat during focal ischemia," J. Cereb. Blood Flow Metab. 23, 911-924 (2003). [CrossRef]
- J. Kaipio and E. Somersalo, Statistical and Computational Inverse Problems (Springer Verlag, 2004).
- J. Kaipio, V. Kolehmainen, M. Vauhkonen, and E. Somersalo, "Inverse problems with structural prior information," Inverse Probl. 15, 713-729 (1999). [CrossRef]
- B. Pogue, T. McBride, J. Prewitt, U. Sterberg, and K. Paulsen, "Spatially Variant Regularization Improves Diffuse Optical Tomography," Appl. Opt. 38, 2950-2961 (1999). [CrossRef]
- A. Douiri, M. Schweiger, J. Riley, and S. Arridge, "Anisotropic diffusion regularization methods for diffuse optical tomography using edge prior information," Meas. Sci. Technol. 18, 87-95 (2007). [CrossRef]
- B. Brooksby, S. Jiang, C. Kogel, M. Doyley, H. Dehghani, J. Weaver, S. Poplack, B. Pogue, and K. Paulsen, "Magnetic resonance guided near infrared tomography of the breast," Rev. Sci. Instrum. 75, 5262-5270 (2004). [CrossRef]
- P. Yalavarthy, B. Pogue, H. Dehghani, C. Carpenter, S. Jiang, and K. Paulsen, "Structural information within regularization matrices improves near infrared diffuse optical tomography," Opt. Express 15, 8043-8058 (2007). [CrossRef] [PubMed]
- D. Calvetti, F. Sgallari, and E. Somersalo, "Image inpainting with structural bootstrap priors," Image Vis. Comput. 24, 782-793 (2006) [CrossRef]
- D. Calvetti and E. Somersalo, "Microlocal sequential regularization in imaging," Inverse Probl. Imaging 1, 1-11 (2007). [CrossRef]
- D. Calvetti and E. Somersalo, "Gaussian hypermodels and recovery of blocky objects," Inverse Probl. 23, 733-754 (2007). [CrossRef]
- D. Calvetti and E. Somersalo, "Hypermodels in the Bayesian imaging framework," Inverse Probl. 24, 034013 (2008). [CrossRef]
- D. Calvetti, J. P. Kaipio, and E. Somersalo, "Aristotelian prior boundary conditions," Int. J. Math. Comp. Sci. 1, 63-81 (2006).
- M. Schweiger, S. Arridge, M. Hiraoka, and D. Delpy, "The finite element method for the propagation of light in scattering media: Boundary and source conditions," Med. Phys. 22, 1779 - 1792 (1995). [CrossRef] [PubMed]
- D. Calvetti and E. Somersalo, Introduction to Bayesian Scientific Computing - Ten Lectures on Subjective Computing (Springer Verlag, 2007). [PubMed]
- L. I. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D 60, 259-268 (1992). [CrossRef]
- C. R. Vogel and M. E. Oman, "Fast, robust total variation-based reconstruction of noisy, blurred images," IEEE Trans. Image Process. 7, 813-824 (1998). [CrossRef]
- P. Perona and J. Malik, "Scale-space and edge detection using anisotropic diffusion," IEEE Trans. Pattern Anal. Mach. Intell. 12, 629-639 (1990). [CrossRef]
- D. Calvetti and E. Somersalo, "Recovery of shapes: hypermodels and Bayesian learning," Proc. of the Applied Inverse Problems 2007: Theoretical and Computational Aspects, J. Phys.: Conf. Ser. (to appear).
- "Automatically Tuned Linear Algebra Software (ATLAS)," http://math-atlas.sourceforge.net/ (19th June 2008).
- Y. Saad, Iterative Methods for Sparse Linear Systems (Society for Industrial Mathematics, 2003). [CrossRef]
- I. Nissilä, J. Hebden, D. Jennions, J. Heino, M. Schweiger, K. Kotilahti, T. Noponen, A. Gibson, S. Järvenpää, L. Lipiäinen, and T. Katila, "Comparison between a time-domain and a frequency-domain system for optical tomography," J. Biomed. Opt. 11, 064015 (2006). [CrossRef]
- T. Tarvainen, M. Vauhkonen, V. Kolehmainen, and J. P. Kaipio, "Finite element model for the coupled radiative transfer equation and diffusion approximation," Int. J. Numer. Methods Eng. 63, 383-405 (2006). [CrossRef]
- J. Heino, E. Somersalo, and J. Kaipio, "Statistical compensation of geometric mismodeling in optical tomography," Opt. Express 13, 296-308 (2005). [CrossRef] [PubMed]
- S. R. Arridge, J. P. Kaipio, V. Kolehmainen, M. Schweiger, E. Somersalo, T. Tarvainen, and M. Vauhkonen, "Approximation errors and model reduction with an application in optical diffusion tomography," Inverse Probl. 22, 175-195 (2006). [CrossRef]
- J. P. Kaipio and E. Somersalo, "Statistical inverse problems: discretization, model reduction and inverse crimes," J. Comput. Appl. Math. 22, 493-504 (2006).
- C. Paige and M. Saunders, "LSQR: An Algorithm for Sparse Linear Equations and Sparse Least Squares," ACM Trans. Math. Software 8, 43-71 (1982). [CrossRef]
