
Probabilistic approach to limited-data computed

tomography reconstruction
Zenith Purisha¹, Carl Jidling², Niklas Wahlström², Thomas B. Schön², Simo Särkkä¹

¹ Department of Electrical Engineering and Automation, Aalto University, Finland
² Department of Information Technology, Uppsala University, Sweden

E-mail: [email protected]

arXiv:1809.03779v3 [cs.CV] 3 Jul 2019

Abstract. In this work, we consider the inverse problem of reconstructing the
internal structure of an object from limited x-ray projections. We use a Gaussian
process prior to model the target function and estimate its (hyper)parameters from
measured data. In contrast to other established methods, this comes with the
advantage of not requiring any manual parameter tuning, which usually arises in
classical regularization strategies. Our method uses a basis function expansion
technique for the Gaussian process which significantly reduces the computational
complexity and avoids the need for numerical integration. The approach also allows
for reformulation of some classical regularization methods, such as Laplacian and
Tikhonov regularization, as Gaussian process regression, and hence provides an
efficient algorithm and principled means for their parameter tuning. Results from
simulated and real data indicate that this approach is less sensitive to streak
artifacts as compared to the commonly used method of filtered backprojection.

Keywords: computed tomography; limited data; probabilistic method; Gaussian process;
Markov chain Monte Carlo

1. Introduction

X-ray computed tomography (CT) imaging is a non-invasive method to recover the


internal structure of an object by collecting projection data from multiple angles. The
projection data is recorded by a detector array and it represents the attenuation of the
x-rays which are transmitted through the object. Since the 1960s, CT has been used in
a deluge of applications in medicine [1, 2, 3, 4, 5, 6] and industry [7, 8, 9].
Currently, the so-called filtered back projection (FBP) is the reconstruction
algorithm of choice because it is very fast [10, 11]. This method requires dense sampling
of the projection data to obtain a satisfactory image reconstruction. However, for some
decades, the limited-data x-ray tomography problem has been a major concern in, for

instance, the medical imaging community. The limited data case—also referred to as
sparse projections—calls for a good solution for several important reasons, including:
• the need to examine a patient using low radiation doses to reduce the risk of
malignancy, or to image in vivo samples without modifying the properties of
living tissues,
• geometric restrictions in the measurement setting that make it difficult to acquire
the complete data [12], such as in mammography [13, 14, 15, 16] and electron imaging
[17],
• the high demand to obtain the data using short acquisition times and to avoid
massive memory storage, and
• the need to avoid—or at least minimize the impact of—motion artifacts during
the acquisition.
Classical algorithms—such as FBP—fail to generate good image reconstruction
when dense sampling is not possible and we only have access to limited data. The
under-sampling of the projection data makes the image reconstruction (in classical
terms) an ill-posed problem [18]. In other words, the inverse problem is sensitive to
measurement noise and modeling errors. Hence, alternative and more powerful methods
are required. Statistical estimation methods play an important role in handling the ill-
posedness of the problem by restating the inverse problem as a well-posed extension in
a larger space of probability distributions [19]. Over the years there has been a great
deal of work on tomographic reconstruction from limited data using statistical methods (see,
e.g., [14, 20, 21, 22, 23, 24]).
In the statistical approach, incorporation of a priori knowledge is a crucial part
in improving the quality of the image reconstructed from limited projection data.
This prior can be viewed as the counterpart of the regularization term in classical
regularization methods. However, statistical methods, unlike classical regularization
methods, also provide a principled means to estimate the parameters of the prior
(i.e., the hyperparameters) which corresponds to automatic tuning of regularization
parameters.
In our work we build the statistical model by using a Gaussian process model [25]
with a hierarchical prior in which the (hyper)parameters in the prior become part of
the inference problem. As this kind of hierarchical prior can be seen as an instance
of a Gaussian process (GP) regression model, the computational methods developed
for GP regression in the machine learning context [25] become applicable. It is worth
noting that some works employing GP methods for tomographic problems have
appeared before. An iterative algorithm for computing a maximum likelihood estimate,
in which the prior information is represented by a GP, was introduced in [26]. In [27, 28],
tomographic reconstruction using GPs to model the strain field from neutron Bragg-edge
measurements has been studied. Tomographic inversion using GPs for plasma fusion and
soft x-ray tomography has been carried out in [29, 30]. Nevertheless, the proposed approach
is different from the existing work.

Our aim is to employ a hierarchical Gaussian process regression model to


reconstruct the x-ray tomographic image from limited projection data. Due to the
measurement model involving line integral computations, the direct GP approach does
not allow for closed form expressions. The first contribution of this article is to overcome
this issue by employing the basis function expansion method proposed in [31], which
makes the line integral computations tractable as it detaches the integrals from the
model parameters. This approach can be directly used for common GP regression
covariance functions such as Matérn or squared exponential. The second contribution
of this article is to point out that we can also reformulate classical regularization, in
particular Laplacian and Tikhonov regularization, as Gaussian process regression where
only the spectral density of the process (although not the covariance function itself)
is well defined. As the basis function expansion only requires the availability of the
spectral density, we can build a hierarchical model from a classical regularization model
as well and have a principled means to tune the regularization parameters. Finally,
the third contribution is to present methods for hyperparameter estimation that arise
from the machine learning literature and apply the methodology to the tomographic
reconstruction problem. In particular, the proposed methods are applied to simulated
2D chest phantom data available in Matlab and real carved cheese data measured
with a µCT system. The results show that the reconstructions created using the
proposed GP method outperform the FBP reconstructions in terms of image quality,
measured by relative error and peak signal-to-noise ratio.

2. Constructing the model

2.1. The tomographic measurement data


Consider a physical domain Ω ⊂ R2 and an attenuation function f : Ω → R. The x-rays
travel through Ω along straight lines and we assume that the initial intensity (photons)
of the x-ray is I0 and the exiting x-ray intensity is Id. If we denote a ray through the
object as the function s ↦ (x1(s), x2(s)), then the intensity loss of the x-ray within a
small distance ds is given as

  dI(s)/I(s) = −f(x1(s), x2(s)) ds,   (1)

and by integrating both sides of (1), the following relationship is obtained

  ∫_{−R}^{R} f(x1(s), x2(s)) ds = log(I0/Id),   (2)

where R is the radius of the object or area being examined.
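Relation (2) can be checked numerically. The paper's implementation is in Matlab; the sketch below uses NumPy for illustration, with a hypothetical Gaussian attenuation profile f(s) along a single ray (all values are illustrative, not from the paper):

```python
import numpy as np

# Hypothetical attenuation profile along one ray (illustrative choice).
def f(s):
    return 0.5 * np.exp(-s**2)

R = 5.0
s = np.linspace(-R, R, 2001)
ds = s[1] - s[0]
fv = f(s)

# Trapezoidal approximation of the line integral of f along the ray.
line_integral = np.sum((fv[1:] + fv[:-1]) / 2) * ds

# Integrating dI(s)/I(s) = -f ds, eq. (1), gives I_d = I_0 * exp(-integral).
I0 = 1e4
Id = I0 * np.exp(-line_integral)

# Relation (2): the line integral equals log(I0 / Id).
assert np.isclose(line_integral, np.log(I0 / Id))
```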


In x-ray tomographic imaging, the aim is to reconstruct f using measurement data
collected from the intensities Id of x-rays for all lines through the object taken from
different angles of view. The problem can be expressed using the Radon transform,

Figure 1. An illustration of the Radon transform. It maps the object f on the


(x1 , x2 )-domain into f on the (r, θ) domain. The measurement data is collected from
the intensities Id of x-rays for all lines L through the object f (x1 , x2 ) and from different
angles of view.

which can be expressed as

  Rf(r, θ) = ∫_L f(x1, x2) dxL,   (3)

where dxL denotes the 1-dimensional Lebesgue measure along the line defined by
L = {(x1, x2) ∈ R² : x1 cos θ + x2 sin θ = r}, where θ ∈ [0, π) is the angle and r ∈ R is
the distance of L from the origin, as shown in Figure 1.
The parametrization of the straight line L with respect to the arc length s can be
written as

  x1(s, θ, r) = r cos(θ) − s sin(θ),
  x2(s, θ, r) = r sin(θ) + s cos(θ).   (4)

In this work, the object is placed inside a circular disk with radius R. Then, as a
function of r and θ, the line integral in (3) can be written as

  Rf(r, θ) = ∫_{−R}^{R} f(x1(s, θ, r), x2(s, θ, r)) ds = ∫_{−R}^{R} f(x0 + s û) ds,   (5)

where

  x0 = [r cos(θ)  r sin(θ)]ᵀ,   û = [−sin(θ)  cos(θ)]ᵀ.
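The parametrization (4)–(5) is easy to verify in code. A minimal NumPy sketch (the values of r, θ, and the sample points s are arbitrary illustrative choices):

```python
import numpy as np

def ray_points(r, theta, s):
    """Points (x1, x2) along the line L, eqs. (4)-(5): x0 + s * u_hat."""
    x0 = np.array([r * np.cos(theta), r * np.sin(theta)])
    u_hat = np.array([-np.sin(theta), np.cos(theta)])
    return x0[:, None] + s[None, :] * u_hat[:, None]

r, theta = 0.7, np.pi / 3
s = np.linspace(-1.0, 1.0, 5)
pts = ray_points(r, theta, s)

# Every point satisfies the line equation x1*cos(theta) + x2*sin(theta) = r.
assert np.allclose(pts[0] * np.cos(theta) + pts[1] * np.sin(theta), r)
```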

In a real x-ray tomography application, the measurement is corrupted by at least


two noise types: photon statistics and electronic noise. In x-ray imaging, a massive
number of photons is usually recorded at each detector pixel. In such a case, a Gaussian

approximation for the attenuation data in (2) can be used [32, 33]. Recall that a
logarithm of the intensity is involved in (5), and so additive noise is a reasonable model
for the electronic noise.
We collect a set of measurements as

  yi = ∫_{−R}^{R} f(x0_i + s ûi) ds + εi,   (6)

where i corresponds to the data point index. The corresponding inverse problem is,
given the noisy measurement data {yi}ⁿᵢ₌₁ in (6), to reconstruct the object f.

2.2. Gaussian processes as functional priors


A Gaussian process (GP) [25] can be viewed as a distribution over functions, where the
function value at each point is treated as a Gaussian random variable. To denote that
the function f is modeled as a GP, we formally write

  f(x) ∼ GP(m(x), k(x, x′)).   (7)

The GP is uniquely specified by the mean function m(x) = E[f(x)] and the covariance
function k(x, x′) = E[(f(x) − m(x))(f(x′) − m(x′))]. The mean function encodes our
prior belief about the value of f at any point. In the absence of better knowledge it is
common to pick m(x) = 0, a choice that we will stick to also in this paper.
The covariance function on the other hand describes the covariance between two
different function values f (x) and f (x0 ). The choice of covariance function is the most
important part in the GP model, as it stipulates the properties assigned to f . A few
different options are discussed in Section 2.4.
As data is collected, our belief about f is updated. The aim of regression is to
predict the function value f(x∗) at an unseen test point x∗ by conditioning on the seen
data. Consider direct function measurements of the form

  yi = f(xi) + εi,   (8)

where εi is independent and identically distributed (iid) Gaussian noise with variance
σ², that is, εi ∼ N(0, σ²). Let the measurements be stored in the vector y. Then the
mean value and the variance of the predictive distribution p(f(x∗) | y) are given by [25]

  E[f(x∗) | y] = k∗ᵀ(K + σ²I)⁻¹y,   (9a)
  V[f(x∗) | y] = k(x∗, x∗) − k∗ᵀ(K + σ²I)⁻¹k∗.   (9b)

Here the vector k∗ contains the covariances between f(x∗) and each measurement, while
the matrix K contains the covariances between all measurements, such that

  (k∗)i = k(xi, x∗),   (10a)
  Kij = k(xi, xj).   (10b)
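As an illustration of the regression equations (9a)–(10b), a minimal NumPy sketch; the kernel hyperparameters, training data, and test point are illustrative choices, not taken from the paper:

```python
import numpy as np

def se_kernel(X1, X2, sigma_f=1.0, l=0.5):
    """Squared exponential covariance (illustrative hyperparameters)."""
    d2 = np.sum((X1[:, None, :] - X2[None, :, :])**2, axis=-1)
    return sigma_f**2 * np.exp(-0.5 * d2 / l**2)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(30, 2))            # training inputs in 2-d
y = np.sin(3 * X[:, 0]) * np.cos(3 * X[:, 1])   # toy targets
sigma = 0.1                                     # noise std in eq. (8)

K = se_kernel(X, X)                             # eq. (10b)
x_star = np.array([[0.2, -0.3]])                # test point
k_star = se_kernel(X, x_star)[:, 0]             # eq. (10a)

# Predictive mean and variance, eqs. (9a)-(9b).
A = K + sigma**2 * np.eye(len(X))
mean = k_star @ np.linalg.solve(A, y)
var = se_kernel(x_star, x_star)[0, 0] - k_star @ np.linalg.solve(A, k_star)
```

The predictive variance is always bounded above by the prior variance k(x∗, x∗), since conditioning on data can only reduce uncertainty.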

An example of GP regression for a two-dimensional input is given in Figure 2. The red


stars indicate the measurements, while the shaded surface is the GP prediction. The
blue line highlights a slice of the plot that is shown explicitly to the right, including the
95% credibility region.

Figure 2. Left: GP prediction (shaded surface) obtained from the measurements (red
stars, also indicated by their deviation from the prediction). Right: slice plot of the
blue line in the left figure, including the 95% credibility region.

2.3. The Gaussian process for x-ray tomography


In this section, we show how to apply the functional priors presented in Section 2.2 to
the x-ray tomography application. Since the x-ray measurements (5) are line integrals of
the unknown function f(x), they are linear functionals of the Gaussian process. Hence,
we can define a linear functional Hx,i as follows:

  Hx,i f(x) = ∫_{−R}^{R} f(x0_i + s ûi) ds,   (11)

and thus the GP regression problem becomes

  f(x) ∼ GP(m(x), k(x, x′)),   (12a)
  yi = Hx,i f(x) + εi.   (12b)

As discussed, for example, in [34, 31], the GP regression equations can be extended to
this kind of model, which in this case leads to the following:

  E[f(x∗) | y] = q∗ᵀ(Q + σ²I)⁻¹y,   (13a)
  V[f(x∗) | y] = k(x∗, x∗) − q∗ᵀ(Q + σ²I)⁻¹q∗,   (13b)

where y = [y1 · · · yn]ᵀ and

  (q∗)i = ∫_{−R}^{R} k(x0_i + s ûi, x∗) ds,   (14a)
  Qij = ∫_{−R}^{R} ∫_{−R}^{R} k(x0_i + s ûi, x0_j + s′ ûj) ds ds′.   (14b)

In general we cannot expect closed-form solutions to (14a)–(14b), and numerical
computations are then required. However, even with efficient numerical methods,
the process of selecting the hyperparameters is tedious since the hyperparameters are
in general not decoupled from the integrand and the integrals need to be computed
repeatedly in several iterations. In this paper, we avoid this by using the basis function
expansion that will be described in Section 2.6.

2.4. Squared exponential and Matérn covariance functions


An important modeling parameter in Gaussian process regression is the covariance
function k(x, x0 ) which can be selected in various ways. Because the basis function
expansion described in Section 2.6 requires the covariance function to be stationary, we
here limit our discussion to covariance functions of this form. Stationarity means that
k(x, x′) = k(r) where r = x − x′, so the covariance only depends on the difference between
the input points. In that case we can also work with the spectral density, which is the
Fourier transform of the stationary covariance function

  S(ω) = F[k] = ∫ k(r) e^{−iωᵀr} dr,   (15)

where again r = x − x′.
The perhaps most commonly used covariance function within the machine learning
context [25] is the squared exponential (SE) covariance function

  kSE(r) = σf² exp(−‖r‖₂² / (2l²)),   (16)

which has the following spectral density

  SSE(ω) = σf² (2π)^{d/2} l^d exp(−l² ‖ω‖₂² / 2),   (17)
where d is the dimensionality of x (in our case d = 2). The SE covariance
function is characterized by the magnitude parameter σf and the length scale l. The
squared exponential covariance function is popular due to its simplicity and ease of
implementation. It corresponds to a process whose sample paths are infinitely many
times differentiable and thus the functions modeled by it are very smooth.
Another common family of covariance functions is given by the Matérn class

  kMatern(r) = σf² (2^{1−ν} / Γ(ν)) (√(2ν)‖r‖₂ / l)^ν Kν(√(2ν)‖r‖₂ / l),   (18a)

  SMatern(ω) = σf² (2^d π^{d/2} Γ(ν + d/2) (2ν)^ν / (Γ(ν) l^{2ν})) (2ν/l² + ‖ω‖₂²)^{−(ν+d/2)},   (18b)
where Kν is a modified Bessel function [25]. The smoothness of the process is increased
with the parameter ν: in the limit ν → ∞ we recover the squared exponential covariance
function.
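The spectral densities (17) and (18b) are what the basis function expansion of Section 2.6 actually consumes. A small NumPy sketch of both, for d = 2 (the argument names and example frequencies are illustrative):

```python
import math
import numpy as np

d = 2  # input dimension, as in the tomography setting

def S_se(w, sigma_f, l):
    """SE spectral density, eq. (17)."""
    w2 = np.sum(np.asarray(w, dtype=float)**2)
    return sigma_f**2 * (2 * math.pi)**(d / 2) * l**d * math.exp(-l**2 * w2 / 2)

def S_matern(w, sigma_f, l, nu):
    """Matern spectral density, eq. (18b)."""
    w2 = np.sum(np.asarray(w, dtype=float)**2)
    c = (sigma_f**2 * 2**d * math.pi**(d / 2) * math.gamma(nu + d / 2)
         * (2 * nu)**nu / (math.gamma(nu) * l**(2 * nu)))
    return c * (2 * nu / l**2 + w2)**(-(nu + d / 2))

# Both densities decay with frequency: smooth processes have little
# high-frequency content.
low = S_se([0.0, 0.0], 1.0, 0.5)
high = S_se([5.0, 0.0], 1.0, 0.5)
assert low > high
```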

Gaussian processes are also closely connected to classical spline smoothing [35] as
well as other classical regularization methods [19, 36] for inverse problems. Although
the construction of the corresponding covariance function is hard (or impossible), it
is still possible to construct the corresponding spectral density in many cases. With
these spectral densities and the basis function method of Section 2.6, we can construct
probabilistic versions of the classical regularization methods as discussed in the next
section.

2.5. Covariance functions arising from classical regularization


Let us recall that a classical way to seek solutions to inverse problems is via
optimization of a functional of the form

  J[f] = (1/(2σ²)) Σᵢ (yi − Hx,i f(x))² + (1/(2σf²)) ∫ |Lf(x)|² dx,   (19)

where L is a linear operator. This is equivalent to a Gaussian process regression
problem where the covariance operator is formally chosen to be K = [L∗L]⁻¹. In
(classical) Tikhonov regularization we have L = I (identity operator), which corresponds
to penalizing the norm of the solution. Another option is to penalize the Laplacian,
which gives L = ∇².
Although the kernel of this covariance operator is ill-defined with the classical
choices of L, and thus it is not possible to form the corresponding covariance function,
we can still compute the corresponding spectral density by computing the Fourier
transform (15) of L∗L and then inverting it:

  S(ω) = σf² / F[L∗L].   (20)

In particular, the minimum-norm or (classical) Tikhonov regularization can be recovered
by using a white noise prior, which is given by the constant spectral density

  STikhonov(ω) = σf²,   (21)

where σf is a scaling parameter. Another interesting case is the Laplacian-operator-based
regularization, which corresponds to

  SLaplacian(ω) = σf² / ‖ω‖₂⁴.   (22)

It is useful to note that the latter spectral density corresponds to the l → ∞ limit of the
Matérn covariance function with ν + d/2 = 2, and the white noise to the l → 0 limit in
either the SE or the Matérn covariance functions.
above spectral densities would be degenerate, but this does not prevent us from using
the spectral densities in the basis function expansion method described in Section 2.6
as the method only requires the availability of the spectral density.
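Since (21) and (22) enter the method only through spectral-density evaluations, they can be coded as two one-line functions; a minimal sketch (function names are our own):

```python
import numpy as np

def S_tikhonov(w_norm, sigma_f):
    """Eq. (21): white-noise (constant) spectral density for L = I."""
    return sigma_f**2 * np.ones_like(np.asarray(w_norm, dtype=float))

def S_laplacian(w_norm, sigma_f):
    """Eq. (22): for L the Laplacian, F[L*L](w) = ||w||^4, so S = sigma_f^2 / ||w||^4."""
    return sigma_f**2 / np.asarray(w_norm, dtype=float)**4

# These degenerate spectral densities are only ever evaluated at the discrete
# frequencies sqrt(lambda_i) via Lambda_ii = S(sqrt(lambda_i)) in Section 2.6,
# so no covariance function is ever needed.
w = np.array([0.5, 1.0, 2.0])
assert np.allclose(S_tikhonov(w, 2.0), 4.0)
assert np.allclose(S_laplacian(w, 1.0), 1.0 / w**4)
```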

2.6. Basis function expansion


To overcome the computational hazard described in Section 2.3, we consider the
approximation method proposed in [31], which relies on the following truncated basis
function expansion

  k(x, x′) ≈ Σᵢ₌₁ᵐ S(√λᵢ) φᵢ(x) φᵢ(x′),   (23)

where S denotes the spectral density of the covariance function and m is the truncation
number. The basis functions φᵢ(x) and eigenvalues λᵢ are obtained from the solution to
the Laplace eigenvalue problem on the domain Ω:

  −∆φᵢ(x) = λᵢ φᵢ(x),  x ∈ Ω,
  φᵢ(x) = 0,  x ∈ ∂Ω.   (24)

In two dimensions with Ω = [−L1, L1] × [−L2, L2] we introduce the positive integers
i1 ≤ m1 and i2 ≤ m2. The number of basis functions is then m = m1 m2, and the
solution to (24) is given by

  φᵢ(x) = (1/√(L1 L2)) sin(ϕᵢ₁(x1 + L1)) sin(ϕᵢ₂(x2 + L2)),   (25a)
  λᵢ = ϕᵢ₁² + ϕᵢ₂²,  ϕᵢ₁ = πi1/(2L1),  ϕᵢ₂ = πi2/(2L2),   (25b)

where i = i1 + m1(i2 − 1). Let us now build the vector φ∗ ∈ Rᵐ, the matrix Φ ∈ Rᵐˣⁿ
and the diagonal matrix Λ ∈ Rᵐˣᵐ as

  (φ∗)ᵢ = φᵢ(x∗),   (26a)
  Φᵢⱼ = ∫_{−R}^{R} φᵢ(x0ⱼ + s ûⱼ) ds,   (26b)
  Λᵢᵢ = S(√λᵢ).   (26c)

The entries Φᵢⱼ can be computed in closed form, with details given in Appendix A. Now
we substitute Q ≈ ΦᵀΛΦ and q∗ ≈ ΦᵀΛφ∗ to obtain

  E[f(x∗) | y] ≈ φ∗ᵀΛΦ(ΦᵀΛΦ + σ²I)⁻¹y,   (27a)
  V[f(x∗) | y] ≈ φ∗ᵀΛφ∗ − φ∗ᵀΛΦ(ΦᵀΛΦ + σ²I)⁻¹ΦᵀΛφ∗.   (27b)

When using the spectral densities corresponding to the classical regularization methods
in (21) and (22), the mean equation reduces to the classical solution (in the given
basis). However, also for the classical regularization methods we can compute the
variance function, which gives an uncertainty estimate for the solution that is not
available in the classical formulation. Furthermore, the hyperparameter estimation
methods outlined in the next section provide principled means to estimate the
parameters also in the classical regularization methods.
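The pipeline (25)–(27) can be sketched end to end on a toy problem. In the sketch below the domain size, number of basis functions, ray layout, and SE hyperparameters are arbitrary illustrative choices, and the entries of Φ are filled by simple numerical quadrature rather than the closed form of Appendix A:

```python
import math
import numpy as np

# Toy setup (all sizes illustrative, far smaller than the paper's m = 10^4).
L1 = L2 = 1.0
m1 = m2 = 8                      # m = m1 * m2 = 64 basis functions
R = 1.0
sigma, sigma_f, ell = 0.05, 1.0, 0.3

# Eigenvalues and eigenfunctions of the Laplace problem (24), eqs. (25a)-(25b).
i1, i2 = np.meshgrid(np.arange(1, m1 + 1), np.arange(1, m2 + 1), indexing="ij")
w1 = math.pi * i1.ravel() / (2 * L1)
w2 = math.pi * i2.ravel() / (2 * L2)
lam = w1**2 + w2**2

def basis(x):  # x has shape (2, N); returns (m, N)
    return (np.sin(w1[:, None] * (x[0][None, :] + L1))
            * np.sin(w2[:, None] * (x[1][None, :] + L2))
            / math.sqrt(L1 * L2))

# Lambda_ii = S(sqrt(lambda_i)), eq. (26c), with the SE spectral density (17), d = 2.
Lam = sigma_f**2 * (2 * math.pi) * ell**2 * np.exp(-(ell**2) * lam / 2)

# Toy rays (r, theta); entries of Phi, eq. (26b), by quadrature along each ray.
s = np.linspace(-R, R, 201)
ds = s[1] - s[0]
rays = [(r, th) for th in np.linspace(0, math.pi, 6, endpoint=False)
        for r in np.linspace(-0.8, 0.8, 9)]
Phi = np.zeros((lam.size, len(rays)))
for j, (r, th) in enumerate(rays):
    x0 = np.array([r * math.cos(th), r * math.sin(th)])
    u = np.array([-math.sin(th), math.cos(th)])
    pts = x0[:, None] + s[None, :] * u[:, None]
    Phi[:, j] = basis(pts).sum(axis=1) * ds

# Noise-free toy data from an "object" made of a single basis function.
coef = np.zeros(lam.size)
coef[3] = 1.0
y = Phi.T @ coef

# Posterior mean (27a) on a test grid; all linear algebra is m x m or n x n.
g = np.linspace(-0.9, 0.9, 20)
Xs = np.array(np.meshgrid(g, g)).reshape(2, -1)
A = (Phi.T * Lam) @ Phi + sigma**2 * np.eye(len(rays))
f_mean = (basis(Xs).T * Lam) @ (Phi @ np.linalg.solve(A, y))
```

Note how the hyperparameters (σf, l) enter only through the diagonal Λ, so changing them during estimation never forces the ray integrals in Φ to be recomputed; this is the decoupling the section refers to.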

3. Hyperparameter estimation

In this section, we will consider some methods for estimating the hyperparameters. The
free parameters of the covariance function, for example, the parameters σf and l in
the squared exponential covariance function, are together with the noise parameter σ
referred to as the hyperparameters of the model. In this work, we employ a Bayesian
approach to estimate the hyperparameters, and comparisons with standard parameter
estimation methods such as the L-curve and cross-validation are given as well.

3.1. Posterior distribution of hyperparameters


The marginal likelihood function corresponding to the model (12) is given as

  p(y | σf, l, σ) = N(y | 0, Q(σf, l) + σ²I),   (28)

where Q(σf, l) is defined by (14b). The posterior distribution of the parameters can now
be written as follows:

  p(σf, l, σ | y) ∝ p(y | σf, l, σ) p(σf) p(l) p(σ),   (29)

where the non-informative priors p(σf) ∝ 1/σf, p(l) ∝ 1/l, and p(σ) ∝ 1/σ are used. The
logarithm of (29) can be written as

  log p(σf, l, σ | y) = const. − (1/2) log det(Q + σ²I) − (1/2) yᵀ(Q + σ²I)⁻¹y
                        − log σf − log l − log σ.   (30)
Given the posterior distribution we have a wide selection of methods from statistics to
estimate the parameters. One approach is to compute the maximum a posteriori (MAP)
estimate of the parameters by using, for example, gradient-based optimization methods
[25]. However, using this kind of point estimate loses the uncertainty information of
the hyperparameters and therefore in this article we use Markov chain Monte Carlo
(MCMC) methods [37] which retain the information about the uncertainty in the final
result.

3.2. Metropolis–Hastings sampling of hyperparameters


As discussed in the previous section, the statistical formulation of the inverse problem
gives a posterior distribution of the hyperparameters ϕ = (σf , l, σ) as the solution rather
than single estimates. The MCMC methods are capable of generating samples from the
distribution. The Monte Carlo samples can then be used for computing the mean, the
variance, or some other statistics of the posterior distribution [38]. In this work, we
employ the Metropolis–Hastings algorithm to sample from the posterior distribution.
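A random-walk Metropolis–Hastings sampler is short to write down. The sketch below samples in log-parameter space so positivity of (σf, l, σ) is automatic; the target used here is a toy stand-in (independent log-normals around plausible hyperparameter values), NOT the actual marginal posterior (30):

```python
import numpy as np

def metropolis_hastings(log_post, theta0, n_samples=5000, step=0.25, seed=0):
    """Random-walk Metropolis-Hastings over positive parameters theta,
    run in x = log(theta) space."""
    rng = np.random.default_rng(seed)
    x = np.log(np.asarray(theta0, dtype=float))
    # The target density of x = log(theta) includes the Jacobian term sum(x).
    lp = log_post(np.exp(x)) + x.sum()
    samples = np.empty((n_samples, x.size))
    for k in range(n_samples):
        prop = x + step * rng.standard_normal(x.size)
        lp_prop = log_post(np.exp(prop)) + prop.sum()
        # Symmetric Gaussian proposal: accept with prob. min(1, exp(lp' - lp)),
        # so the Hastings correction cancels.
        if np.log(rng.uniform()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[k] = np.exp(x)
    return samples

# Toy stand-in posterior for (sigma_f, l, sigma), NOT eq. (30).
def toy_log_post(theta):
    return -0.5 * np.sum((np.log(theta) - np.log([0.1, 5.0, 0.3]))**2 / 0.2**2)

chain = metropolis_hastings(toy_log_post, theta0=[1.0, 1.0, 1.0])
burned = chain[1000:]            # discard burn-in, as in Section 4
estimate = burned.mean(axis=0)   # conditional-mean estimate of the parameters
```

In the paper's setting, `toy_log_post` would be replaced by an evaluation of (30), i.e. a Cholesky-based computation of the log-determinant and quadratic form of Q(σf, l) + σ²I.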

3.3. The L-curve method


One of the classical methods to obtain information about the optimal value of σ is
the L-curve method [39], which operates by plotting the norm of the solution ‖fσ(x)‖₂
versus the residual norm ‖Hx,i fσ(x) − yi‖₂. The associated L-curve is defined as the
continuous curve consisting of all the points (‖Hx,i fσ(x) − yi‖₂, ‖fσ(x)‖₂) for σ ∈ [0, ∞).

3.4. Cross-validation
As a comparison, we also consider cross-validation (CV) methods for model
selection. In k-fold CV, the data are partitioned into k disjoint sets yj, and at each
round j of CV, the predictive likelihood of the set yj is computed given the rest of
the data y−j. These likelihoods are used to monitor the predictive performance of the
model. This performance is used to estimate the generalization error, and it can be used
to carry out model selection [40, 25, 41].
The Bayesian CV estimate of the predictive fit with given parameters ϕ is

  CV = Σⱼ₌₁ⁿ log p(yⱼ | y₋ⱼ, ϕ),   (31)

where p(yⱼ | y₋ⱼ, ϕ) is the predictive likelihood of the data yⱼ given the rest of the data.
The best parameter values with respect to CV can be computed by enumerating the
possible parameter values and selecting the one which gives the best fit in terms of CV.
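The CV score (31) can be sketched on a small 1-d GP regression problem; the kernel, data, and candidate noise levels below are illustrative choices, not the paper's setup:

```python
import numpy as np

def se_k(a, b, sigma_f=1.0, l=0.5):
    """SE covariance for 1-d inputs (illustrative hyperparameters)."""
    return sigma_f**2 * np.exp(-0.5 * (a[:, None] - b[None, :])**2 / l**2)

def cv_score(x, y, sigma, k_folds=5, seed=0):
    """Bayesian CV estimate (31): sum over folds of the log predictive
    likelihood of the held-out data y_j given the rest y_{-j}."""
    rng = np.random.default_rng(seed)
    n = len(x)
    folds = np.array_split(rng.permutation(n), k_folds)
    score = 0.0
    for idx in folds:
        mask = np.ones(n, bool)
        mask[idx] = False
        K = se_k(x[mask], x[mask]) + sigma**2 * np.eye(mask.sum())
        ks = se_k(x[mask], x[idx])
        mu = ks.T @ np.linalg.solve(K, y[mask])          # predictive mean
        cov = (se_k(x[idx], x[idx]) + sigma**2 * np.eye(len(idx))
               - ks.T @ np.linalg.solve(K, ks))          # predictive covariance
        resid = y[idx] - mu
        _, logdet = np.linalg.slogdet(cov)
        score -= 0.5 * (logdet + resid @ np.linalg.solve(cov, resid)
                        + len(idx) * np.log(2 * np.pi))
    return score

rng = np.random.default_rng(1)
x = np.sort(rng.uniform(-2, 2, 40))
y = np.sin(2 * x) + 0.1 * rng.standard_normal(40)

# Enumerate candidate noise levels and keep the one with the best CV fit.
sigmas = [0.05, 0.1, 0.3, 1.0]
best = max(sigmas, key=lambda s: cv_score(x, y, s))
```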

4. Experimental results

In this section, we present numerical results using the GP model for limited x-ray
tomography problems. All the computations were implemented in Matlab 9.4
(R2018a) and performed on an Intel Core i5 CPU at 2.3 GHz with 8 GB of 2133 MHz
LPDDR3 memory.
For both simulated data (see Section 4.1) and real data (see Section 4.2) we use
m = 10⁴ basis functions in (23). The measurements are obtained from the line integral of
each x-ray over the attenuation coefficient of the measured objects. The measurements
are taken for each direction (angle of view), and they will later be referred to as
projections. The same number of rays is used in each direction. The computation
of the hyperparameters is carried out using the Metropolis–Hastings algorithm with
5 000 samples, of which the first 1 000 samples are thrown away (burn-in period). The
reconstruction is computed by taking the conditional mean of the object estimate.

4.1. Simulated data: 2D Chest phantom


As for the simulated data, we use one slice of Matlab’s 3D Chest dataset [42] as a
ground truth, ftrue , which is shown in Figure 4(a). The size of the phantom is N × N ,
with N = 128. The black region indicates zero values and lighter regions indicate higher

attenuation function values. The measurements (i.e. sinogram) of the chest phantom
are computed using the radon command in Matlab and corrupted by additive white
Gaussian noise with zero mean and 0.1 variance (σtrue = 0.32).
Several reconstructions of the chest phantom using different covariance functions,
namely squared exponential (SE), Matérn, Laplacian, and Tikhonov, are presented. For
the SE, Matérn, and Laplacian covariance functions, the parameters σf, l, and σ are
estimated using the proposed method. We use ν = 1 for the Matérn covariance. The
Tikhonov covariance is not characterized by the length scale l, and hence
only σf and σ are estimated. All the estimated parameters are reported in Table 2.
Figure 3 presents the histograms of the 1-d marginal posterior distribution of each
parameter for the different covariance functions. The histograms show the distribution
of the parameter values in the Metropolis–Hastings samples. The results show that
the σf estimate for the SE and Matérn covariances is 0.12, while for the Laplacian and
Tikhonov covariances the estimates are 0.05 and 0.64, respectively. For the Matérn,
Laplacian, and Tikhonov covariance functions, the σ estimates are concentrated around
similar values, 0.34 − 0.39, with standard deviations (SD) between 0.02 − 0.03. These
estimates agree well with the ground-truth noise, σtrue = 0.32, with absolute errors
between 0.02 − 0.07. The SE kernel appears to overestimate the noise, σ = 0.60. The
length-scale parameter l for the Laplacian and SE covariance functions concentrates
around similar values, while the Matérn covariance yields a higher estimate, l = 10.14.
Figure 4(c)-(f) shows GP reconstructions of the 2D chest phantom using different
covariance functions from 9 projections (uniformly spaced over a 180° angle of view)
with 185 rays per projection. The computation times for all numerical
tests are reported in Table 1. The Metropolis–Hastings reconstruction shows longer
computational time due to the need for generation of a large number of samples from
the posterior distribution. However, the benefit of this algorithm is that it is easy to
implement and it is reliable for sampling from high dimensional distributions.

Table 1. Computation times of chest phantom (in seconds)

Target          FBP   SE       Matérn   Laplacian   Tikhonov
Chest phantom   0.5   11 210   9 676    9 615       9 615

The numerical tests on the simulated data are compared using figures
of merit, namely:
• the relative error (RE)

  RE = ‖ftrue − frec‖₂ / ‖ftrue‖₂,

where frec is the image reconstruction, and
• the peak signal-to-noise ratio (PSNR)

  PSNR = 10 log₁₀(peakval² / MSE),

where peakval is the maximum possible value of the image and MSE is the mean
square error between ftrue and frec,
as shown in Table 3.
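Both figures of merit are one-liners; a minimal NumPy sketch with a worked check on a constant test image (the 4×4 image is an illustrative choice):

```python
import numpy as np

def relative_error(f_true, f_rec):
    """RE = ||f_true - f_rec||_2 / ||f_true||_2."""
    return np.linalg.norm(f_true - f_rec) / np.linalg.norm(f_true)

def psnr(f_true, f_rec, peakval=None):
    """PSNR = 10 * log10(peakval^2 / MSE), in dB."""
    if peakval is None:
        peakval = f_true.max()
    mse = np.mean((f_true - f_rec)**2)
    return 10 * np.log10(peakval**2 / mse)

# Worked check: a uniform offset of 0.1 on an all-ones image gives
# RE = 0.1 and PSNR = 10 * log10(1 / 0.01) = 20 dB.
f_true = np.ones((4, 4))
f_rec = f_true + 0.1
assert np.isclose(relative_error(f_true, f_rec), 0.1)
assert np.isclose(psnr(f_true, f_rec), 20.0)
```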
In practice, image quality in CT depends on other parameters as well, such as image
contrast, spatial resolution, and image noise [43]. Evaluating these parameters requires
a CT device calibrated with CT numbers for various materials, the availability of
high-resolution images, and repeated measurements that record the random variations
in detected x-ray intensity underlying the statistical fluctuations of image noise.
However, the datasets collected in this work do not support such evaluations, and
they fall outside the scope of this paper. The results presented here focus on the
application of a new algorithm to limited-data CT reconstruction and are reported as
a preliminary study.
Reconstruction using a conventional method is computed as well with the built-
in Matlab function iradon, which uses the FBP to invert the Radon transform. It
reconstructs a two-dimensional slice of the sample from the corresponding projections.
The angles for which the projections are available are given as an argument to the
function. Linear interpolation is applied during the backprojection and a Ram–Lak
or ramp filter is used. The FBP reconstruction of the chest phantom is shown in
Figure 4(b). For comparison, FBP reconstructions computed using some other filters
are shown in Figure 5.

Table 2. The GP parameter estimates for the chest phantom. The estimates are
calculated using the conditional mean, and the standard deviation (SD) values are
also reported in parentheses.

Covariance function   σf (SD)       l (SD)         σ (SD)
SE                    0.12 (0.04)   5.03 (0.03)    0.60 (0.02)
Matérn                0.12 (0.07)   10.14 (0.08)   0.34 (0.03)
Laplacian             0.05 (0.10)   4.49 (0.02)    0.39 (0.03)
Tikhonov              0.64 (0.02)   -              0.35 (0.03)

We also compared the results to the L-curve method and the CV:
• The L-curve method is applied to the Laplacian and the Tikhonov covariances,
and the L-curve plots for parameter values 10⁻¹ ≤ σ ≤ 10 for both
covariances are shown in Figure 6. Both plots show that the corner of the L-curve
is located between 0.2 ≤ σ ≤ 1.

Figure 3. Histogram of the 1-d marginal distribution of the GP parameters. Left,


middle and right columns are the marginal distribution for parameter σf , l and σ
with corresponding covariance functions indicated in the vertical text in the left of the
figure. The estimate of the parameter l is not available for Tikhonov covariance.



Figure 4. (a) Ground truth of the 2D chest phantom. (b) Filtered backprojection
reconstruction (Ram–Lak filter) from 9 projections. (c) GP reconstruction using the SE
covariance, (d) GP reconstruction using the Matérn covariance, (e) GP reconstruction
using the Laplacian covariance, (f) GP reconstruction using the Tikhonov covariance.
The GP reconstructions use 9 projections.

Figure 5. Filtered backprojection reconstructions using (a) the Shepp–Logan filter,
(b) the Cosine filter, (c) the Hamming filter, (d) the Hann filter. Relative error (RE)
values are between 23.6% and 25.2%, and PSNR values are between 18.1 and 19.9.

Table 3. Figures of merit for chest phantom reconstructions.

Method                 RE (%)   PSNR

FBP (Ram–Lak filter)   25.86    18.44
GP-SE                  29.41    21.76
GP-Matérn              23.26    22.76
GP-Laplacian           29.18    21.79
GP-Tikhonov            23.39    22.73
L-curve–Laplacian      23.38    22.62
L-curve–Tikhonov       23.26    22.63
CV-Laplacian           25.18    22.31
CV-Tikhonov            23.47    22.75
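The figures of merit in Table 3 follow standard definitions; the sketch below assumes RE is the relative ℓ2 error in percent and PSNR uses the peak value of the ground truth, which are the usual conventions.

```python
import numpy as np

def relative_error(recon, truth):
    # RE (%): l2-norm of the error relative to the l2-norm of the ground truth.
    return 100.0 * np.linalg.norm(recon - truth) / np.linalg.norm(truth)

def psnr(recon, truth):
    # Peak signal-to-noise ratio in dB, with the ground-truth maximum as peak.
    mse = np.mean((recon - truth) ** 2)
    return 10.0 * np.log10(truth.max() ** 2 / mse)

truth = np.ones((8, 8))
recon = 0.9 * truth  # a toy reconstruction off by 10% everywhere
# relative_error(recon, truth) -> 10.0, psnr(recon, truth) -> 20.0
```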

[Figure 6 panels omitted: L-curve plots of the solution norm ‖fσ(x)‖2 against the residual norm ‖Hx,i fσ(x) − yi‖2, with markers at σ = 0.1, 0.2 ≤ σ ≤ 1, σ = 10 and σ = 10².]

Figure 6. The L-curve for (a) Tikhonov and (b) Laplacian covariance from the chest
phantom reconstruction.

• The CV is tested for the Laplacian and Tikhonov covariances using point-wise
evaluation of 10−2 ≤ σ ≤ 1 and 10−2 ≤ σf ≤ 1. For the Laplacian covariance,
several length-scale values 1 ≤ ` ≤ 100 are tested as well. The minimum
prediction error was obtained for σf = 0.8, σ = 0.8 and ` = 10. For the Tikhonov
covariance, the minimum prediction error was obtained for σ = 0.5 and σf = 0.5.
The estimates of σf and σ for the Laplacian are 0.8 and 0.5, respectively, and the
same estimates are obtained for the Tikhonov covariance function. The estimates
of σ for both kernels appear to overestimate σtrue; the absolute error is between
0.18 and 0.48. The length-scale estimate from the Laplacian covariance, l = 10, appears

Figure 7. (a) Ground truth of the 2D chest phantom. (b) & (c) Reconstructions using
the L-curve parameter choice method with the Laplacian (σ = 1) and Tikhonov
(σ = 0.2) covariance functions, respectively. (d) & (e) Reconstructions using CV with
the Laplacian and Tikhonov covariance functions, respectively.

to be close to the estimate obtained with the Matérn covariance.
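The point-wise CV evaluation above can be sketched with a generic regularized least-squares model standing in for the GP; the data, matrix and candidate grid below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
# Hypothetical regression problem standing in for the GP model.
n, m = 60, 30
H = rng.standard_normal((n, m))
y = H @ rng.standard_normal(m) + 0.3 * rng.standard_normal(n)

def cv_error(sigma, folds=5):
    # Mean squared prediction error over held-out folds for one candidate sigma.
    idx = np.arange(n)
    err = 0.0
    for k in range(folds):
        test = (idx % folds == k)
        H_tr, y_tr = H[~test], y[~test]
        f = np.linalg.solve(H_tr.T @ H_tr + sigma**2 * np.eye(m), H_tr.T @ y_tr)
        err += np.mean((H[test] @ f - y[test]) ** 2)
    return err / folds

sigmas = np.logspace(-2, 0, 15)  # point-wise candidate grid
errors = [cv_error(s) for s in sigmas]
best_sigma = sigmas[int(np.argmin(errors))]
```

A finer grid around the minimizer refines the estimate, at the cost of more solves per candidate.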


Image reconstructions for both L-curve and CV methods are shown in Figure 7.
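The L-curve computation can likewise be sketched for a generic Tikhonov-regularized problem; the matrix, data and σ grid below are hypothetical stand-ins for the tomography problem, and corner detection is left to visual inspection as in Figure 6.

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical small linear inverse problem y = H f + noise.
n, m = 40, 60
H = rng.standard_normal((n, m))
f_true = np.zeros(m)
f_true[20:40] = 1.0
y = H @ f_true + 0.05 * rng.standard_normal(n)

sigmas = np.logspace(-1, 1, 25)  # candidate values of the parameter sigma
residual_norms, solution_norms = [], []
for s in sigmas:
    # Tikhonov-regularized solution: argmin ||H f - y||^2 + s^2 ||f||^2.
    f_s = np.linalg.solve(H.T @ H + s**2 * np.eye(m), H.T @ y)
    residual_norms.append(np.linalg.norm(H @ f_s - y))
    solution_norms.append(np.linalg.norm(f_s))
# Plotting solution_norms against residual_norms (log-log) traces the L-curve;
# the corner marks the preferred trade-off between data fit and regularity.
```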

4.2. Real data: Carved cheese


We now consider a real-world example using the tomographic x-ray data of a carved
cheese slice, measured with a custom-built CT device at the University of
Helsinki, Finland. The dataset is available online [44]; for detailed documentation of
the acquisition setup, including the specifications of the x-ray systems, see [45]. We
use the downsampled sinogram with 140 rays and 15 projections over a 360° angle of
view. In the computations, the size of the target is set to 120 × 120.
Figure 8(c) shows the GP reconstruction (Matérn covariance function) of the cross
section of the carved cheese slice using 15 uniformly spaced projections over a 360°
angle of view. For comparison, the FBP reconstruction is shown in Figure 8(b).
The computation times for the carved cheese are reported in Table 5.

4.3. Discussion
We have presented x-ray tomography reconstructions from both simulated and real
data for limited projections (i.e. sparse sampling) using an approach based on

Table 4. Estimated GP parameters for the carved cheese using the Matérn covariance
function. The estimates are calculated using the conditional mean, and the standard
deviation (SD) values are also reported in parentheses.

Covariance function   σf (SD)        l (SD)         σ (SD)
Matérn                0.012 (0.07)   11.00 (0.08)   0.02 (0.04)


Figure 8. (a) FBP reconstruction (Ram–Lak filter) of the carved cheese using dense
data with 360 projections. (b) Filtered backprojection reconstruction from 15
projections. (c) GP reconstruction using the Matérn covariance from 15 projections.

Table 5. Computation times for the carved cheese (in seconds).

Target          FBP   Matérn
Carved cheese   0.1   12 604

the Gaussian process. Other limited-data problems, such as limited-angle
tomography, could be explored as well. Qualitatively, the GP reconstructions obtained
with the different covariance functions look rather similar. Quantitatively, however,
the reconstruction using the Matérn covariance is the best: it has the lowest RE (23.26%)
and the highest PSNR (22.76). The PSNR describes the similarity of the original target
to the reconstructed image (the higher the value, the better the reconstruction).
Figures of merit are not available for the real cheese data, since there is
no ground truth to compare against. Nevertheless, the quality of the reconstruction can be
assessed qualitatively by comparison with the FBP reconstruction obtained from dense
data (360 projections over 360 degrees), shown in Figure 8(a). The corresponding parameter
estimates for the chest phantom and the cheese are reported in Tables 2 and 4. For the

chest phantom case, the estimates of the parameter σ obtained with the Matérn, Laplacian
and Tikhonov kernels tend to be close to the true value σtrue. For the SE covariance,
the standard deviation of the noise is overestimated.
The reconstructions produced by the FBP benchmark algorithm using sparse
projections are overwhelmed by streak artefacts due to the nature of backprojection
reconstruction, as shown in Figure 4(b) for the chest phantom and Figure 8(b) for the
cheese target. The edges of the target are badly reconstructed. Due to the artefacts,
especially for the chest phantom, it is difficult to distinguish the lighter region (assumed
to be tissue) from the black region (air). The FBP reconstruction has the worst quality,
which is confirmed in Table 3: it has a high RE value (25.86%) and the
lowest PSNR (18.44). FBP reconstructions computed with different filters are shown in
Figure 5; there is no significant improvement in the images, as confirmed
by the RE and PSNR values in the caption as well as by qualitative inspection.
On the other hand, the GP reconstructions outperform the FBP algorithm in terms
of image quality, as reported in the figures of merit: the PSNR values of the GP-
based reconstructions are all higher than that of the FBP reconstruction. Nevertheless,
sharp boundaries are difficult to achieve in GP reconstructions due to the smoothness
assumptions embedded in the model.
The GP prior clearly suppresses the artefacts in the reconstructions, as shown in
Figures 4(c) and 8(c). In Figure 4(c), the air and tissue regions are recovered much better,
since the prominent artefacts are much weaker. In Figure 8(c), the air regions (outside the
cheese and inside the C and T letters) are much sharper than in the FBP reconstruction.
Overall, the results indicate that the image quality can be improved significantly by
employing the GP method.
In Figure 7 the image reconstructions using the L-curve and CV methods are presented;
the quality of these reconstructions is also reported in Table 3. For these methods, finer
point-wise evaluation grids might further improve the quality of the reconstructions.
We emphasize that in the proposed GP approach, the prior parameters are part
of the inference problem (see Equation (16)). Hence, the difficulty of choosing the
prior parameters is avoided. This contrasts with classical regularization methods, in
which selecting the regularization parameters is a crucial step in producing a good
reconstruction.

5. Conclusions

We have employed a Gaussian process with a hierarchical prior for computed
tomography using limited projection data. The method was implemented to estimate the
x-ray attenuation function from the measured data produced by the Radon transform.
The performance has been tested on simulated and real data, with promising results.
Unlike algorithms commonly used for the limited-data x-ray tomography problem,
which require manual tuning of prior parameters, the proposed GP method offers an
easier setup, as it treats the prior parameters as part of the estimation.

Hence, it constitutes a promising and user-friendly strategy.


The most important part of the GP model is the selection of the covariance function,
since it stipulates the properties of the unknown function. As such, it also leaves most
room for improvement. Considering the examples in Section 4, a common feature of
the target functions is that they consist of a number of well-defined, separate regions.
The function values are similar and thus highly correlated within the regions, while the
correlation is low at the edges where rapid changes occur. This kind of behavior is
hard to capture with a stationary covariance function, which models the correlation as
depending only on the distance between the input locations. A non-stationary
alternative is provided by, for example, the neural network covariance function, which
is known for its ability to model functions with non-smooth features [25]. The basis
function approximation method employed in this work is only applicable to stationary
covariance functions, but other approximations can of course be considered.
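For the stationary case, the basis function approximation of [31] can be illustrated in one dimension: the kernel is expanded in Laplacian eigenfunctions on [−L, L], weighted by its spectral density. The domain size, length-scale and number of basis functions below are illustrative choices, not the values used in the experiments.

```python
import numpy as np

# Reduced-rank approximation of a 1-D squared exponential (SE) covariance
# on [-L, L], following the Hilbert-space method of [31].
L, m = 3.0, 64
sigma_f, ell = 1.0, 0.5

j = np.arange(1, m + 1)
lam = (np.pi * j / (2 * L)) ** 2  # eigenvalues of the negative Laplacian

# SE spectral density evaluated at the square roots of the eigenvalues.
S = sigma_f**2 * np.sqrt(2 * np.pi) * ell * np.exp(-lam * ell**2 / 2)

def phi(x):
    # Dirichlet Laplacian eigenfunctions on [-L, L].
    return np.sin(np.pi * np.outer(x + L, j) / (2 * L)) / np.sqrt(L)

x = np.linspace(-1, 1, 50)
K_approx = phi(x) @ (S[:, None] * phi(x).T)  # rank-m kernel approximation
K_exact = sigma_f**2 * np.exp(-(x[:, None] - x[None, :])**2 / (2 * ell**2))
```

Well inside the domain the rank-m approximation matches the exact SE kernel closely; accuracy degrades near the boundaries ±L, which is why the domain is taken larger than the region of interest.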
Despite its success, the computational burden of the proposed algorithm is relatively
high. Speed-up strategies are available to address this, such as implementing
parallelized GPU code, optimizing the proposal covariances of the sampling strategy, or
switching to another MCMC algorithm. Investigating finer-resolution images and
additional statistics would also be interesting future research, in order to evaluate other
image-quality parameters. Moreover, the proposed method can be applied to multidetector
CT imaging [46, 47] as well as 3D CT problems using sparse data [48, 49].

Acknowledgments

The authors acknowledge support from the Academy of Finland (314474 and 313708).
Furthermore, this research was financially supported by the Swedish Foundation for
Strategic Research (SSF) via the project ASSEMBLE (contract number: RIT15-0012).

Appendix A. Details on the computation of Φ

Here we derive the closed-form expression of the entries Φij stated in (26b). We get that
\begin{align}
\Phi_{ij} &= \int_{-R}^{R} \phi_i(x_{0j} + s\hat{u}_j)\,\mathrm{d}s \nonumber\\
&= \frac{1}{\sqrt{L_1 L_2}}\int_{-R}^{R} \sin(\varphi_{i1} r_j \cos\theta_j - \varphi_{i1} s \sin\theta_j + \varphi_{i1} L_1)\,\sin(\varphi_{i2} r_j \sin\theta_j + \varphi_{i2} s \cos\theta_j + \varphi_{i2} L_2)\,\mathrm{d}s \nonumber\\
&= \frac{1}{\sqrt{L_1 L_2}}\int_{-R}^{R} \sin(\alpha_{ij} s + \beta_{ij})\,\sin(\gamma_{ij} s + \delta_{ij})\,\mathrm{d}s \nonumber\\
&= \frac{1}{2\sqrt{L_1 L_2}}\int_{-R}^{R} \cos((\alpha_{ij}-\gamma_{ij})s + \beta_{ij}-\delta_{ij}) - \cos((\alpha_{ij}+\gamma_{ij})s + \beta_{ij}+\delta_{ij})\,\mathrm{d}s \nonumber\\
&= \frac{1}{2\sqrt{L_1 L_2}}\left[\frac{\sin((\alpha_{ij}-\gamma_{ij})s + \beta_{ij}-\delta_{ij})}{\alpha_{ij}-\gamma_{ij}} - \frac{\sin((\alpha_{ij}+\gamma_{ij})s + \beta_{ij}+\delta_{ij})}{\alpha_{ij}+\gamma_{ij}}\right]_{-R}^{R} \nonumber\\
&= \frac{1}{2\sqrt{L_1 L_2}}\bigg(\frac{\sin((\alpha_{ij}-\gamma_{ij})R + \beta_{ij}-\delta_{ij})}{\alpha_{ij}-\gamma_{ij}} - \frac{\sin((\alpha_{ij}+\gamma_{ij})R + \beta_{ij}+\delta_{ij})}{\alpha_{ij}+\gamma_{ij}} \nonumber\\
&\qquad - \frac{\sin(-(\alpha_{ij}-\gamma_{ij})R + \beta_{ij}-\delta_{ij})}{\alpha_{ij}-\gamma_{ij}} + \frac{\sin(-(\alpha_{ij}+\gamma_{ij})R + \beta_{ij}+\delta_{ij})}{\alpha_{ij}+\gamma_{ij}}\bigg),
\tag{A.1}
\end{align}

where
\begin{align}
\alpha_{ij} &= -\varphi_{i1} \sin\theta_j, \tag{A.2a}\\
\beta_{ij} &= \varphi_{i1} r_j \cos\theta_j + \varphi_{i1} L_1, \tag{A.2b}\\
\gamma_{ij} &= \varphi_{i2} \cos\theta_j, \tag{A.2c}\\
\delta_{ij} &= \varphi_{i2} r_j \sin\theta_j + \varphi_{i2} L_2. \tag{A.2d}
\end{align}
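A quick numerical sanity check of the closed form: compare it against direct quadrature of the line integral for one basis function and one ray. The frequencies, offsets and domain sizes below are made-up values for illustration.

```python
import numpy as np

# Hypothetical values for one basis function i and one ray j.
L1, L2, R = 1.0, 1.0, 0.8
phi1, phi2 = np.pi / (2 * L1), 3 * np.pi / (2 * L2)  # basis frequencies
r, th = 0.3, 0.7                                     # ray offset and angle

alpha = -phi1 * np.sin(th)
beta = phi1 * r * np.cos(th) + phi1 * L1
gamma = phi2 * np.cos(th)
delta = phi2 * r * np.sin(th) + phi2 * L2

def S(a, b):
    # [sin(a s + b) / a] evaluated between s = -R and s = R.
    return (np.sin(a * R + b) - np.sin(-a * R + b)) / a

Phi_closed = (S(alpha - gamma, beta - delta)
              - S(alpha + gamma, beta + delta)) / (2 * np.sqrt(L1 * L2))

# Direct trapezoidal quadrature of the line integral of the basis function.
s = np.linspace(-R, R, 200001)
x = r * np.cos(th) - s * np.sin(th)
y = r * np.sin(th) + s * np.cos(th)
integrand = np.sin(phi1 * (x + L1)) * np.sin(phi2 * (y + L2)) / np.sqrt(L1 * L2)
Phi_quad = np.sum((integrand[:-1] + integrand[1:]) * np.diff(s)) / 2.0
```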

References

[1] A. M. Cormack, “Representation of a function by its line integrals, with some radiological
applications,” Journal of Applied physics, vol. 34, no. 9, pp. 2722–2727, 1963.
[2] A. M. Cormack, “Representation of a function by its line integrals, with some radiological
applications. II,” Journal of Applied Physics, vol. 35, no. 10, pp. 2908–2913, 1964.
[3] G. T. Herman, “Image reconstruction from projections,” Topics in Applied Physics, vol. 32, 1979.
[4] P. Kuchment, The Radon transform and medical imaging. SIAM, 2013.
[5] N. R. Council et al., Mathematics and physics of emerging biomedical imaging. National Academies
Press, 1996.
[6] L. A. Shepp and J. Kruskal, “Computerized tomography: the new medical x-ray technology,”
American Mathematical Monthly, pp. 420–439, 1978.
[7] S. Akin and A. Kovscek, “Computed tomography in petroleum engineering research,” Geological
Society, London, Special Publications, vol. 215, no. 1, pp. 23–38, 2003.
[8] L. Cartz, Nondestructive testing. ASM International, 1995.
[9] L. De Chiffre, S. Carmignato, J.-P. Kruth, R. Schmitt, and A. Weckenmann, “Industrial
applications of computed tomography,” CIRP Annals-Manufacturing Technology, vol. 63, no. 2,
pp. 655–677, 2014.

[10] A. C. Kak and M. Slaney, Principles of computerized tomographic imaging. IEEE Press, New
York, 1988.
[11] T. M. Buzug, Computed tomography: from photon statistics to modern cone-beam CT. Springer
Science & Business Media, 2008.
[12] N. Riis, J. Frøsig, Y. Dong, and P. Hansen, “Limited-data x-ray CT for underwater pipeline
inspection,” Inverse Problems, vol. 34, no. 3, p. 034002, 2018.
[13] L. T. Niklason, B. T. Christian, L. E. Niklason, D. B. Kopans, D. E. Castleberry, B. Opsahl-Ong,
C. E. Landberg, P. J. Slanetz, A. A. Giardino, R. Moore, et al., “Digital tomosynthesis in breast
imaging,” Radiology, vol. 205, no. 2, pp. 399–406, 1997.
[14] M. Rantala, S. Vanska, S. Jarvenpaa, M. Kalke, M. Lassas, J. Moberg, and S. Siltanen, “Wavelet-
based reconstruction for limited-angle x-ray tomography,” IEEE Transactions on Medical
Imaging, vol. 25, no. 2, pp. 210–217, 2006.
[15] T. Wu, A. Stewart, M. Stanton, T. McCauley, W. Phillips, D. B. Kopans, R. H. Moore, J. W.
Eberhard, B. Opsahl-Ong, L. Niklason, et al., “Tomographic mammography using a limited
number of low-dose cone-beam projection images,” Medical Physics, vol. 30, no. 3, pp. 365–380,
2003.
[16] Y. Zhang, H.-P. Chan, B. Sahiner, J. Wei, M. M. Goodsitt, L. M. Hadjiiski, J. Ge, and C. Zhou, “A
comparative study of limited-angle cone-beam reconstruction methods for breast tomosynthesis,”
Medical Physics, vol. 33, no. 10, pp. 3781–3795, 2006.
[17] D. Fanelli and O. Öktem, “Electron tomography: a short overview with an emphasis on the
absorption potential model for the forward problem,” Inverse Problems, vol. 24, no. 1, p. 013001,
2008.
[18] F. Natterer, The mathematics of computerized tomography, vol. 32. SIAM, 1986.
[19] J. Kaipio and E. Somersalo, Statistical and computational inverse problems, vol. 160. Springer
Science & Business Media, 2006.
[20] C. A. Bouman and K. Sauer, “A unified approach to statistical tomography using coordinate
descent optimization,” IEEE Transactions on Image Processing, vol. 5, no. 3, pp. 480–492, 1996.
[21] H. Haario, A. Kallonen, M. Laine, E. Niemi, Z. Purisha, and S. Siltanen, “Shape recovery for sparse-
data tomography,” Mathematical Methods in the Applied Sciences, vol. 40, no. 18, pp. 6649–
6669, 2017.
[22] V. Kolehmainen, S. Siltanen, S. Järvenpää, J. P. Kaipio, P. Koistinen, M. Lassas, J. Pirttilä,
and E. Somersalo, “Statistical inversion for medical x-ray tomography with few radiographs: II.
application to dental radiology,” Physics in Medicine & Biology, vol. 48, no. 10, p. 1465, 2003.
[23] S. Siltanen, V. Kolehmainen, S. Järvenpää, J. Kaipio, P. Koistinen, M. Lassas, J. Pirttilä, and
E. Somersalo, “Statistical inversion for medical x-ray tomography with few radiographs: I.
general theory,” Physics in Medicine & Biology, vol. 48, no. 10, p. 1437, 2003.
[24] K. Sauer, J. Sachs, and C. Klifa, “Bayesian estimation of 3-D objects from few radiographs,” IEEE
Transactions on Nuclear Science, vol. 41, no. 5, pp. 1780–1790, 1994.
[25] C. E. Rasmussen and C. K. I. Williams, Gaussian processes for machine learning. MIT press,
Cambridge, MA, 2006.
[26] A. Tarantola, Inverse problem theory and methods for model parameter estimation, vol. 89. SIAM,
2005.
[27] J. Hendriks, A. Gregg, C. Wensrich, and W. Wills, “Implementation of traction constraints in
Bragg-edge neutron transmission strain tomography,” arXiv preprint arXiv:1805.09760, 2018.
[28] C. Jidling, J. Hendriks, N. Wahlström, A. Gregg, T. B. Schön, C. Wensrich, and A. Wills,
“Probabilistic modelling and reconstruction of strain,” arXiv preprint arXiv:1802.03636, 2018.
[29] D. Li, J. Svensson, H. Thomsen, F. Medina, A. Werner, and R. Wolf, “Bayesian soft x-ray
tomography using non-stationary Gaussian processes,” Review of Scientific Instruments, vol. 84,
no. 8, p. 083506, 2013.
[30] J. Svensson, Non-parametric tomography using Gaussian processes. EFDA, 2011.
[31] A. Solin and S. Särkkä, “Hilbert space methods for reduced-rank Gaussian process regression,”

tech. rep., arXiv:1401.5508, June 2018.


[32] C. Bouman and K. Sauer, “A generalized Gaussian image model for edge-preserving MAP
estimation,” ECE Technical Reports, p. 277, 1992.
[33] J. Sachs and K. Sauer, “3D reconstruction from sparse radiographic data,” in Discrete Tomography,
pp. 363–383, Springer, 1999.
[34] S. Särkkä, “Linear operators and stochastic partial differential equations in Gaussian process
regression,” in Proceedings of ICANN, 2011.
[35] G. S. Kimeldorf and G. Wahba, “A correspondence between Bayesian estimation on stochastic
processes and smoothing by splines,” The Annals of Mathematical Statistics, vol. 41, no. 2,
pp. 495–502, 1970.
[36] J. L. Mueller and S. Siltanen, Linear and nonlinear inverse problems with practical applications,
vol. 10. SIAM, 2012.
[37] S. Brooks, A. Gelman, G. Jones, and X.-L. Meng, Handbook of Markov Chain Monte Carlo. CRC
Press, 2011.
[38] A. Gelman, J. B. Carlin, H. S. Stern, D. B. Dunson, A. Vehtari, and D. B. Rubin, Bayesian Data
Analysis. Chapman and Hall/CRC, third ed., 2013.
[39] P. C. Hansen, “Analysis of discrete ill-posed problems by means of the L-curve,” SIAM Review,
vol. 34, no. 4, pp. 561–580, 1992.
[40] R. Kohavi et al., “A study of cross-validation and bootstrap for accuracy estimation and model
selection,” in IJCAI, vol. 14, pp. 1137–1145, Montreal, Canada, 1995.
[41] A. Vehtari, A. Gelman, and J. Gabry, “Practical Bayesian model evaluation using leave-one-out
cross-validation and WAIC,” Statistics and Computing, vol. 27, no. 5, pp. 1413–1432, 2017.
[42] “Segment lungs from 3-d chest scan and calculate lung volume.” https://se.mathworks.com/
help/images/segment-lungs-from-3-d-chest-mri-data.html. Accessed: 2018-07-09.
[43] L. W. Goldman, “Principles of CT: radiation dose and image quality,” Journal of Nuclear Medicine
Technology, vol. 35, no. 4, pp. 213–225, 2007.
[44] https://www.fips.fi/datasetpage.php. Accessed: 2018-07-09.
[45] T. A. Bubba, M. Juvonen, J. Lehtonen, M. März, A. Meaney, Z. Purisha, and S. Siltanen,
“Tomographic x-ray data of carved cheese,” arXiv preprint arXiv:1705.05732, 2017.
[46] M. R. K. Mookiah, K. Subburaj, K. Mei, F. K. Kopp, J. Kaesmacher, P. M. Jungmann, P. Foehr,
P. B. Noel, J. S. Kirschke, and T. Baum, “Multidetector computed tomography imaging: effect
of sparse sampling and iterative reconstruction on trabecular bone microstructure,” Journal of
computer assisted tomography, vol. 42, no. 3, pp. 441–447, 2018.
[47] T. G. Flohr, S. Schaller, K. Stierstorfer, H. Bruder, B. M. Ohnesorge, and U. J. Schoepf, “Multi-
detector row CT systems and image-reconstruction techniques,” Radiology, vol. 235, no. 3,
pp. 756–773, 2005.
[48] E. Y. Sidky and X. Pan, “Image reconstruction in circular cone-beam computed tomography
by constrained, total-variation minimization,” Physics in Medicine & Biology, vol. 53, no. 17,
p. 4777, 2008.
[49] Z. Purisha, S. S. Karhula, J. H. Ketola, J. Rimpeläinen, M. T. Nieminen, S. Saarakkala, H. Kröger,
and S. Siltanen, “An automatic regularization method: An application for 3-D x-ray micro-CT
reconstruction using sparse data,” IEEE Transactions on Medical Imaging, vol. 38, no. 2,
pp. 417–425, 2018.
