A medical-imaging system is typically designed to estimate some set of parameters associated with a real-world object. For instance, in single-photon emission computed tomography (SPECT), the parameters that the system attempts to estimate are the radiotracer concentrations within object volume elements, or voxels. (Often some smaller set of values, or a single value, is the desired final product, but this final product is almost always calculated by condensing the voxel estimates.)
In SPECT, as in other forms of tomographic imaging, the estimation procedure is referred to as reconstruction, and a number of reconstruction algorithms have been advanced. These include filtered backprojection, maximum-likelihood algorithms, the algebraic reconstruction technique (ART), and various Bayesian techniques. Each of these reconstruction methods has advantages and disadvantages that have been discussed in the literature [see, for example, 1,2].

1. D.R. Gilland, B.M.W. Tsui, C.E. Metz, R.J. Jaszczak, and J.R. Perry, "An evaluation of maximum likelihood-EM reconstruction for SPECT by ROC analysis," J. Nucl. Med. 33, 451–457 (1992).
2. X-L. Xu, J-S. Liow, and S.C. Strother, "Iterative algebraic reconstruction algorithms for emission computed tomography: A unified framework and its application to positron emission tomography," Med. Phys. 20, 1675–1684 (1993).

However, inherent in the tomographic reconstruction problem are limitations more fundamental than differences between the various algorithms. Three examples of estimation-limiting factors are statistical noise in the data collected by the imaging system, an improper model of the imaging system, and inadequacies of the imaging system itself.
Statistical noise and improper system modeling have been discussed in depth [3,4], and we shall not focus on them here.

3. S.C. Moore, S.P. Kijewski, S.P. Müller, and B.L. Holman, "SPECT image noise power: Effects of nonstationary projection noise and attenuation compensation," J. Nucl. Med. 29, 1704–1709 (1988).
4. H.H. Barrett, D.W. Wilson, and B.M.W. Tsui, "Noise properties of the EM algorithm: I. Theory," Phys. Med. Biol. 39, 833–846 (1994).

Instead we shall study the problems associated with the design of the imaging system, and how these problems relate to the inability of the system to estimate certain parameters. We do not, during the course of this study, attempt to suggest optimal system designs. Rather, we derive a method for determining the parameters that are estimable by a given system. We then demonstrate how our technique can separate estimation problems associated with the imaging system from problems associated with other factors.
2. Background and derivation
In the absence of noise, a SPECT imaging system can be well modeled with the equation

g = 𝛨f,  (1)

where f, a continuous function of the position r, represents the radioisotope distribution within the patient, and the continuous-to-discrete imaging operator, 𝛨, has kernel hₘ(r) equal to the probability that a photon emitted at object position r will be detected in detector pixel m. Elements of the discretely collected data vector, g, are calculated as

gₘ = ∫ hₘ(r) f(r) d³r,  for m = 1, …, M.  (2)
In SPECT, the desired imaging goal is to estimate f. However, it is clearly impossible to fully determine the infinite-dimensional object f from an M-dimensional data vector g. The usual method for circumventing this problem is to divide the continuous f into a number of voxels approximately equal to the number of elements in g. We rewrite equation (1) as

gₐ = Hf,  (3)

where f, the discrete approximation of the continuous f, has elements

fₙ = ∫_{vₙ} f(r) d³r,  for n = 1, …, N,  (4)

and H, the matrix approximation of 𝛨, has elements

Hₘₙ = ∫_{vₙ} hₘ(r) d³r / ∫_{vₙ} d³r.  (5)

In these equations vₙ represents the n-th voxel in object space, and the subscript on gₐ indicates that this is only an approximation of the true projection process.
2.1 Null space, measurement space and singular-value decomposition
Ideally, H would be invertible. However, it is generally found that even this discrete approximation of the true imaging system is singular. This singularity results from the manner in which tomographic systems sample the object space and from the well-known fact that high frequencies are transferred very poorly in tomography. One way that the singularity of H has been explored is with singular-value decomposition (SVD) theory. It can be shown that any M×N matrix, H, can be decomposed as

H = U D Vᵀ,  (6)

where the M×M U and the N×N V are nonsingular orthonormal matrices, and the diagonal M×N matrix D has nonzero elements dᵢ only at the first p locations on the diagonal.
Because U and V are orthonormal, their column vectors, uᵢ and vᵢ, form orthonormal bases for the projection space and the discretized object space, respectively. Equation (3) can be rewritten in terms of the uᵢ and vᵢ as

gₐ = Σᵢ₌₁ᵖ dᵢ (vᵢ∙f) uᵢ,  (7)

where vᵢ∙f is the inner product of the i-th object-space eigenvector and the object. Thus we see that the first p of the vᵢ contribute to the projection operation and the rest do not. We can therefore define an N×N measurement-space projector, M = Σᵢ₌₁ᵖ vᵢvᵢᵀ, which passes the first p eigendirections unchanged and maps the remaining N − p onto the 0 vector. The null-space projector, N, is simply the identity matrix minus M. The Mf operation is then the projection of f onto the measurement space, and Nf is the projection of f onto the null space.
The Moore-Penrose pseudoinverse of H is

H⁺ = V D⁺ Uᵀ,  (8)

where the N×M diagonal matrix D⁺ has elements equal to 1/dᵢ for the dᵢ greater than zero, and zero otherwise. We note that the measurement- and null-space projectors can be written as M = H⁺H and N = I − H⁺H.

The vector H⁺g is a least-squares solution to the problem stated by equation (3). However, a modeling error has been introduced by the discretization of equation (1), and a least-squares solution of the discretized problem does not imply a least-squares solution of the real continuous-to-discrete system. Although H⁺g is not the least-squares solution to the real imaging problem, it is very advantageous to be able to calculate it when analyzing imaging systems or reconstruction algorithms. We shall present two examples demonstrating uses of H⁺ in section 3.
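As a concrete numerical sketch of these relations, the projectors M = H⁺H and N = I − H⁺H can be built directly from the SVD. The matrix below is a small random stand-in for a measured system matrix; its dimensions and rank are illustrative assumptions, not FASTSPECT values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a system matrix: M measurements, N voxels,
# built rank-deficient (p < min(M, N)) so that a null space exists.
M_meas, N_vox, p = 40, 60, 25
H = rng.standard_normal((M_meas, p)) @ rng.standard_normal((p, N_vox))

# SVD: H = U @ diag(d) @ V^T
U, d, Vt = np.linalg.svd(H, full_matrices=True)

# D+ holds 1/d_i wherever d_i exceeds a numerical tolerance, zero otherwise
tol = d.max() * max(H.shape) * np.finfo(H.dtype).eps
rank = int(np.sum(d > tol))
D_plus = np.zeros((N_vox, M_meas))
D_plus[:rank, :rank] = np.diag(1.0 / d[:rank])

H_plus = Vt.T @ D_plus @ U.T        # Moore-Penrose pseudoinverse
M_proj = H_plus @ H                 # measurement-space projector M = H+ H
N_proj = np.eye(N_vox) - M_proj     # null-space projector N = I - H+ H

f = rng.standard_normal(N_vox)

# Nf is invisible to the system, and Mf + Nf recovers f exactly
assert np.allclose(H @ (N_proj @ f), 0.0, atol=1e-8 * d.max())
assert np.allclose(M_proj @ f + N_proj @ f, f)
```

In practice `np.linalg.pinv(H)` returns the same `H_plus` in a single call; the explicit construction above is shown only to mirror the U, D⁺, and Vᵀ factors discussed in the text.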
2.4 Derivation of the algorithm
One problem associated with the singular-value decomposition of an imaging system is the computational difficulty of the operation. Most SVD algorithms require that the entire H be stored in memory, and typical three-dimensional tomographic systems have H matrices with a size of several gigabytes.
For this reason the pseudoinverse is rarely used for three-dimensional reconstruction. An estimate f̂ of the discretized object is normally reconstructed by an iterative process (except in the case of the filtered-backprojection algorithm, which assumes a very simplified form of H). Iterative algorithms allow the reconstruction process to be done “on the fly” so that the H matrix does not need to be stored in memory. They also allow the possibility of stopping when a result of suitable accuracy has been achieved, thus saving the computation time required to reach convergence.
Most iterative algorithms are not designed to separate the null and measurement spaces. Instead, they arrive at solutions that contain some null-space components and omit some measurement-space components. However, important examples of algorithms that do iterate toward measurement- and null-space solutions are the Landweber algorithm [5] and some algebraic reconstruction techniques [6].

5. O.N. Strand, "Theory and methods related to the singular-function expansion and Landweber's iteration for integral equations of the first kind," SIAM J. Numer. Anal. 11, 798–825 (1974).
6. K.M. Hanson, "Bayesian and related methods in image reconstruction from incomplete data," in Image Recovery: Theory and Application, Henry Stark, ed. (Academic, Orlando, Fla., 1987).
We have developed an additional iterative algorithm capable of finding Nf for any given f. We shall present only an outline of the algorithm's derivation here. (A full derivation is contained in [7].) We start with a limiting representation of the Moore-Penrose pseudoinverse:

H⁺ = lim_{η→0⁺} [HᵀH + ηI]⁻¹ Hᵀ.  (9)

We note that an operator [I + X]⁻¹ can be expanded in the Neumann series

[I + X]⁻¹ = Σₖ₌₀^∞ (−X)ᵏ.  (10)

When we expand equation (9) in the Neumann series and allow η to go to zero, we arrive at an iterative algorithm for calculating the null space of a digitized f:

fₖ₊₁ = [I − αHᵀH] fₖ,  with f₀ = f,  (11)

where α is a positive constant chosen small enough for convergence; as k → ∞, fₖ approaches Nf.
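One standard realization of such an iteration is a Landweber-style update fₖ₊₁ = (I − aHᵀH)fₖ starting from f₀ = f, which drives every measurement-space component to zero and leaves the null-space components untouched. In the sketch below, the toy H (with a controlled singular-value spectrum) and the step size a = 1/d₁² are illustrative assumptions, not values from our system:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy rank-deficient system matrix with controlled singular values,
# so the iteration converges quickly in this demonstration.
M_meas, N_vox, p = 30, 50, 15
U, _ = np.linalg.qr(rng.standard_normal((M_meas, M_meas)))
V, _ = np.linalg.qr(rng.standard_normal((N_vox, N_vox)))
D = np.zeros((M_meas, N_vox))
D[:p, :p] = np.diag(np.linspace(1.0, 0.5, p))
H = U @ D @ V.T

f = rng.standard_normal(N_vox)

# Iterate f_{k+1} = (I - a H^T H) f_k starting from f_0 = f.
# Each measurement-space component decays as (1 - a d_i^2)^k,
# so f_k -> Nf for any 0 < a < 2 / d_1^2.
a = 1.0 / np.linalg.svd(H, compute_uv=False)[0] ** 2
f_k = f.copy()
for _ in range(300):
    f_k -= a * (H.T @ (H @ f_k))

# Cross-check against the null-space projection computed via the pseudoinverse
N_f = f - np.linalg.pinv(H) @ (H @ f)
assert np.allclose(f_k, N_f, atol=1e-8)
```

Only matrix-vector products with H and Hᵀ are needed per step, which is what makes the approach attractive when H is too large to decompose directly.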
3. Examples

In this section we shall present two examples that illustrate uses of the null-space algorithm. In the first example we were able to find the source of an imaging-system resolution problem. The second example demonstrates how the null-space algorithm can be combined with another technique to reconstruct tomographic images.
3.1 Example 1: FASTSPECT imaging resolution
One of the major projects being carried out by our group is the design and construction of a stationary SPECT imager. This system consists of 24 photon-sensitive modular scintillation cameras arranged about the patient, with incoming photons collimated by pinholes. Since the system is stationary, it can collect full tomographic information without the need for motion, giving it the capability of dynamic SPECT. For this reason we refer to our system as FASTSPECT. More information on the FASTSPECT system can be obtained at [9].
When imaging dynamically, we often collect projection frames over very short time periods. The obvious problem associated with fast imaging is the small number of collected photons (and resulting low signal-to-noise ratio) in the individual projection images. We can increase the number of collected photons by increasing the diameter of the pinholes, but the trade-off associated with increased pinhole diameter is a decrease in system spatial resolution.
Fig. 1. A slice from the reconstructed image of the wedge phantom.
An application of the FASTSPECT system is dynamic cardiac imaging (DCI). Since the goal of DCI is to collect images throughout the cardiac cycle, a very short imaging time per frame is required. In our first attempt at DCI, we increased the pinhole diameter to eight
millimeters. This was far larger than any previously used pinhole diameter and caused concern that the system’s spatial resolution might be compromised.
One method we employ to measure system resolution is to image the wedge phantom shown at [10]. By determining the width at which we can resolve the wedge fingers in a reconstructed image, we can arrive at a very rough estimate of system resolution. A slice from the reconstructed image of the wedge phantom taken with the eight-millimeter-pinhole system is shown in Fig. 1. It demonstrates a resolution far below what is normally achieved by FASTSPECT, and worse than expected.
Fig. 2. A slice from (a) the digital wedge phantom, (b) the measurement space of the phantom, and (c) the null space of the phantom.
There were several possible reasons for the poor resolution. While the most likely cause was the large pinhole diameter, it was also possible that our problems resulted from one or more of: 1) the approximation of the continuous-to-discrete system by a discrete-to-discrete matrix (𝛨 → H), 2) an improper model of the hₘ(r) when estimating H, or 3) the reconstruction process. Since we physically measured H, we were fairly confident that it accurately characterized a voxelized FASTSPECT system, but problems sometimes arise when we do the measurements. Additionally, scatter from within the wedge phantom was not included, and we had measured H with larger-than-usual voxels. Also, as we will demonstrate in example two, our reconstruction algorithm is sensitive to smoothing parameters.
Fig. 3. The wedge phantom for the redesigned system, with (a) measurement space for the digitized phantom and (b) reconstruction of the actual phantom.
In order to determine the source of the poor resolution, we created a digitized wedge phantom from measurements of the physical wedge phantom. A slice from this digital phantom is given in Fig. 2(a)
. We then calculated the null and measurement projections of this phantom with respect to H
. Modeling errors, approximations due to voxelization, and reconstruction had now been removed from the problem. The only remaining effect was the measured response of the voxelized imaging system, so if the phantom’s measurement space showed quality similar to the reconstruction given in Fig. 1
, we would assume that the reduced resolution was a function of the actual imaging system. It is observed in Figs. 2(b) and 2(c) that the measurement space of the digitized phantom looks much the same as the reconstruction, with most of the high-frequency information contained in the null space. From this we concluded that the poor resolution was an inherent system problem, and we redesigned DCI FASTSPECT with smaller (two-millimeter) pinholes. The measurement space of the digitized wedge phantom with respect to the redesigned system is given in Fig. 3(a)
. Fig. 3(b)
shows a reconstruction of the physical wedge phantom. It is seen that
the system resolution is improved and that the null-space algorithm accurately predicts this improvement.
3.2 Example 2: Reconstruction using the null-space algorithm
In this section we demonstrate a method for combining the null-space algorithm and another technique to produce a tomographic reconstruction algorithm. We do not suggest that this combination is superior to existing algorithms. Rather, we present our results to illustrate the detrimental effects associated with attempts to reconstruct the null space.
Fig. 4. A slice from the f̂ with best agreement with a g collected by our FASTSPECT imaging system.
We currently reconstruct images with a search method that changes the estimate of individual voxels to maximize agreement between the projection of the estimated object distribution, Hf̂ , and the data collected by the imaging system, g. This algorithm quickly arrives at an f̂ in good agreement with the projection data, but it in no way suppresses the null-space components of f̂ . Unless the reconstructed image is artificially smoothed, it will be of extremely poor visual quality. For this reason we perform some type of local smoothing of f̂ at each iteration of the reconstruction process. This diminishes the agreement between the estimate and the projection data, but increases the apparent visual quality of the image.
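To make the idea concrete, the following sketch implements a generic voxel-wise search of the kind just described, with an optional crude neighbor-averaging smoother. The toy matrix, step size, and smoothing weight are illustrative stand-ins under stated assumptions, not the parameters of our actual reconstruction code:

```python
import numpy as np

rng = np.random.default_rng(2)

# Underdetermined toy system (more voxels than measurements), so a
# null space exists and many different f give the same projections.
H = rng.standard_normal((20, 40))
f_true = np.abs(rng.standard_normal(40))
g = H @ f_true

def residual(H, f, g):
    """Squared disagreement between projected estimate and data."""
    return float(np.sum((H @ f - g) ** 2))

def voxel_search(H, g, n_sweeps=40, step=0.05, smooth=0.0):
    """Greedy voxel-wise search: nudge each voxel up or down and keep the
    change whenever it reduces ||Hf - g||^2; optionally blend each voxel
    with its neighbors after every sweep (a crude local smoothing)."""
    f = np.zeros(H.shape[1])
    for _ in range(n_sweeps):
        for n in range(len(f)):
            for delta in (step, -step):
                trial = f.copy()
                trial[n] = max(0.0, trial[n] + delta)  # activity is nonnegative
                if residual(H, trial, g) < residual(H, f, g):
                    f = trial
        if smooth > 0.0:
            f = (1 - smooth) * f + smooth * 0.5 * (np.roll(f, 1) + np.roll(f, -1))
    return f

f_hat = voxel_search(H, g)
# Agreement with the data improves from the f = 0 starting point
assert residual(H, f_hat, g) < residual(H, np.zeros(40), g)
```

Run without smoothing, such a search drives the residual down while leaving the null-space components of the estimate uncontrolled, which is the failure mode visible in Fig. 4.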
Fig. 5. The (a) null space and (b) measurement space of the reconstructed image shown in Fig. 4.
With our new technique, however, we did not smooth the estimates, but allowed Hf̂
to reach maximum agreement with g
. The resulting f̂
is given in Fig. 4
. This estimate agrees very well with the data collected by the imaging system, but clearly little can be determined from it, not even the identity of the phantom imaged in the experiment.
We then calculated Mf̂ (Fig. 5(b)) and Nf̂ (Fig. 5(a)) with respect to the H used for reconstruction. It is seen that Mf̂ conveys more information about the object being imaged than f̂ alone, and that Nf̂ only served to confound the original estimate. We draw from this experiment the somewhat obvious conclusion that a reconstruction algorithm should not be allowed to proceed heedless of the null space.