Texture Analysis Review
(1) International Center for Numerical Methods in Engineering, Polytechnic University of Catalonia,
Barcelona, Spain
(2) Biomedical Engineering Research Center, Polytechnic University of Catalonia, Barcelona, Spain
(3) Faculty of Medical Sciences, Central University of Venezuela, Caracas, Venezuela
ABSTRACT
Geometric models of human body organs are obtained from imaging techniques such as Computed
Tomography (CT) and Magnetic Resonance Imaging (MRI), which allow an accurate visualization of the
inner body and thus provide relevant information about its structure and pathologies. These models are
then used to generate surface and volumetric meshes, which can be further used for visualization,
measurement, biomechanical simulation, rapid prototyping and prosthesis design. However, going from
geometric models to numerical models is not an easy task: image-processing techniques must be applied
to deal with the complexity of human tissues and to obtain simplified geometric models, thus reducing
the complexity of the subsequent numerical analysis. In this work, an integrated and efficient
methodology to obtain models of soft tissues, such as the gray and white matter of the brain, and hard
tissues, such as jaw and spine bones, is proposed. The methodology is based on image-processing
algorithms chosen according to characteristics of the tissue: type, intensity profile and boundary quality.
Firstly, low-quality images are improved by using enhancement algorithms to reduce image noise and to
increase structure contrast. Then, hybrid segmentation for tissue identification is applied through a
multi-stage approach. Finally, the obtained models are resampled and exported in formats readable by
Computer Aided Design (CAD) tools. In CAD environments, these data are used to generate discrete
models using the Finite Element Method (FEM) or other numerical methods such as the Boundary
Element Method (BEM). Results have shown that the proposed methodology is useful and versatile for
obtaining accurate geometric models that can be used in several clinical cases to extract relevant
quantitative and qualitative information.
Keywords: 3D modeling, human tissues, segmentation, medical images, Finite Element Method
1. INTRODUCTION
The modeling of human body parts, such as organs and tissues, and of their pathologies, as well as
surgical planning and surgery itself, is usually supported by numerical methods that provide approximate
models of such organs. To obtain a tissue model using numerical methods, such as the Finite Element
Method (FEM), the geometric model of the organ is divided into surface or volumetric elements, the
properties of each element are formulated, and then the elements are combined to compute the organ's
deformation states under the influence of external forces applied by surgical instruments [1]. However,
the generation of these models is not a trivial task, considering the complex shape of human anatomic
parts, which are generally not symmetric. Likewise, the imposition of boundary conditions and
biological loadings on the model is neither a trivial nor a simple task. Indeed, soft tissues such as the
brain or heart and hard tissues such as bone have diverse and complex morphologies, which usually overlap.
Current imaging techniques such as magnetic resonance (MR), computed tomography (CT) and positron
emission tomography (PET), among others, are based on radiation that produces images when
interacting with human tissues. The reconstruction of human tissues is then carried out with digital
processing techniques. A gray-scale medical image is represented by an m×n×z matrix, formed by
several parallel image slices along the z direction, each having m×n pixels. Each matrix element holds a
gray-intensity value obtained from the interaction between the radiation and the human tissue. Processing
and visualization techniques for medical images involve a set of mathematical algorithms applied to the
matrix representation described above in order to modify its element values. Previous works on this
subject [2,3,4] have studied the process of extracting and analyzing human anatomic structures from
these radiological techniques. The aforementioned authors agree on three main steps once the medical
data have been digitized: a) image preprocessing to reduce noise and enhance contrast, b) segmentation
to extract regions of interest for further analysis, and c) visualization of the segmented regions (volumes,
surfaces, discretized meshes) for further manipulation.
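As a minimal illustration of this matrix representation, the following MATLAB sketch assembles an
m×n×z volume from a folder of DICOM slices; the folder name, file pattern and slice ordering are
assumptions made only for this example.

```matlab
% Sketch: assemble an m-by-n-by-z gray-scale volume from DICOM slices.
% Assumes one file per axial slice and that file names sort in slice order.
files = dir(fullfile('ct_slices', '*.dcm'));            % hypothetical folder
info  = dicominfo(fullfile('ct_slices', files(1).name));
V = zeros(double(info.Rows), double(info.Columns), numel(files));
for k = 1:numel(files)
    V(:, :, k) = double(dicomread(fullfile('ct_slices', files(k).name)));
end
% Each element V(i, j, k) holds the gray intensity of pixel (i, j) in slice k.
```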
These studies, based on the manipulation and visualization of medical images, are a key aspect
of medical diagnosis and disease treatment. These techniques not only allow medical doctors and
scientists to obtain vital information by noninvasive means, but they are also essential tools for
obtaining more accurate geometric models of human body parts. Some works along this research line can be
mentioned here. 3D models of highly heterogeneous bone from CT images have been obtained in [5].
The authors then used the FEM for the mechanical analysis of bone. In [6] a new methodology to get
titanium-prostheses designs from CT bone-structures has been proposed. These authors applied
image-processing and FEM modeling techniques. Isaza et al. [7] reconstructed craniofacial
structures from CT images by applying image-processing techniques to segment the structures of
interest. They then applied the FEM to simulate an orthodontic device used for both dental and skeletal
cervical traction.
The main goal of this work is to propose an integrated methodology to obtain geometric models
of human-body parts, which will help in medical visualization, measurement, biomechanical
simulation, rapid prototyping and prosthesis design.
2.1 Main problems and characteristics of medical images
Medical images are usually contaminated by noise generated by interference or other sources. Noise is
usually inherent to the acquisition of medical images and to the performance of the medical instruments
[4]. Moreover, radiological procedures modify the image contrast and visualization details [2].
Thus, it is often necessary to modify the gray-intensity range of the images in order to improve the
visualization of brighter zones with respect to darker ones. The success in obtaining reliable geometric
tissue models depends on the techniques used in this first step.
g[u(x, y)] = ∫∫ n(x, y, x', y') u(x', y') dx' dy'        (2)

where u(x, y) is the original image, v(x, y) is the observed image (corrupted by noise) and n(x, y) stands
for the additive noise. The image formation process can be modeled by the linear system of eqn. 2,
where n(x, y, x', y') is the response of the image acquisition system.
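As a small, hedged illustration of eqn. 2, the following MATLAB sketch assumes a shift-invariant
acquisition response, so that the linear system reduces to a convolution with a point-spread function
(PSF); the phantom image, the PSF and the noise level are illustrative choices, not the ones used in this
work.

```matlab
% Sketch: shift-invariant case of the linear formation model of eqn. 2.
u   = phantom(256);                     % synthetic test image (Shepp-Logan)
psf = fspecial('gaussian', 9, 1.5);     % assumed Gaussian acquisition response
v   = conv2(u, psf, 'same') ...         % blurring by the acquisition system
      + 0.01 * randn(size(u));          % plus additive noise n(x, y)
```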
The interpretation of noise in a medical image will depend on the image itself and on the visual
perception. The estimation of the statistical characteristics of noise in an image is needed to separate
the noise from the image. Four kinds of noise are usually reported (see ref. [9]): additive,
multiplicative, impulsive and quantization noise. Since additive and multiplicative noise are the most
commonly observed in medical images, a brief description of both is included below:
• Additive noise. This is the noise generated by the sensors, typically modeled as white Gaussian
noise, as defined in eqn. 3 and sketched in the code example after this list, where g(x, y) is the
observed noisy image resulting from the image I(x, y) corrupted by the additive noise n(x, y).
g(x, y) = I(x, y) + n(x, y)        (3)
Figure 1 shows an example of additive noise, obtained by adding Gaussian noise to a phantom
image that simulates an MR image of the brain. The behavior of the added noise can be
observed in the histogram shown in figure 1(d).
Figure 1
Brain phantom corrupted by additive noise. a) Axial slice of original phantom. b) Histogram of a). c)
Original image in a) with Gaussian additive noise. d) Histogram of c)
• Multiplicative noise. This is a kind of speckle noise observed in medical images, particularly in
ultrasound and magnetic resonance images. It is represented in eqn. 4, where g(x, y) is the
observed noisy image, I(x, y) is the image being formed, c(x, y) is the multiplicative noise
component and n(x, y) is the additive noise.
g(x, y) = I(x, y) c(x, y) + n(x, y)        (4)
Figure 2 shows an example of multiplicative noise added to a phantom image that represents an MR
brain image. For the sake of simplicity, c(x, y) was assumed constant in figure 2. The noise behavior
can be observed in figure 2(d).
Figure 2
Brain phantom corrupted by multiplicative noise: a) Axial slice of original phantom. b) Histogram
of a). c) Original image corrupted by multiplicative noise. d) Histogram of c).
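The following MATLAB sketch illustrates both noise models of eqns. 3 and 4 on a synthetic slice; the
noise levels are illustrative values, not those used to generate figures 1 and 2.

```matlab
% Sketch of the additive (eqn. 3) and multiplicative (eqn. 4) noise models.
I = phantom(256);                          % clean image I(x, y)
g_add  = I + 0.05 * randn(size(I));        % eqn. 3: additive Gaussian noise
c      = 1 + 0.2 * randn(size(I));         % multiplicative component c(x, y)
g_mult = I .* c + 0.01 * randn(size(I));   % eqn. 4: speckle-like noise
% Comparing histogram(g_add(:)) and histogram(g_mult(:)) reproduces the kind of
% behavior discussed for figures 1(d) and 2(d).
```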
∇I = (∂I/∂x, ∂I/∂y, ∂I/∂z)        (5)
The result shows how abrupt or smooth the image variation is at each point, whether a given point lies
on an image edge, and the orientation of that edge. Figure 3 depicts a craniofacial CT image with the
boundaries highlighted through the gradient calculation in the X, Y and Z directions. Note that eqn. 5
can also be applied to MR images.
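A minimal MATLAB sketch of this boundary-highlighting step is given below; it assumes a volume V
such as the one assembled earlier and simply evaluates eqn. 5 and its magnitude.

```matlab
% Sketch: boundary highlighting via the gradient of eqn. 5.
[Gx, Gy, Gz] = gradient(V);              % partial derivatives along x, y and z
Gmag = sqrt(Gx.^2 + Gy.^2 + Gz.^2);      % gradient magnitude per voxel
% Large values of Gmag mark tissue boundaries, as in figure 3(b).
```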
Figure 3
Craniofacial CT image with highlighted boundaries: (a) Original CT image. (b) Boundary detection in
(a) through the gradient-magnitude calculation in the X, Y and Z directions.
Segmentation is used to identify anatomical structures, organ activity and pathological regions. Its
applications include brain tumor detection [10], extraction of areas affected by extra-pulmonary
tuberculosis [11], visualization of heart pathologies [12], detection of coronary borders in angiograms,
quantification of multiple sclerosis lesions, surgery planning and simulation, measurement of tumor
volume and of tumor response to therapy, automated classification of blood cells, studies of brain
development and detection of microcalcifications in mammograms, among other applications.
The methods used to carry out the segmentation process vary according to the specific need and the
image type, among other factors. For example, the segmentation of brain tissue differs from the
segmentation of the heart or of bones such as the jaw or the femur. It has been found that specialized
methods for specific applications can lead to better results when prior knowledge is available. However,
choosing the right segmentation method for all types of medical images is sometimes difficult due to the
lack of robust general-purpose methods. We have therefore combined several segmentation methods, an
approach known as hybrid segmentation.
Figure 4
3. PROPOSED METHODOLOGY
To obtain useful geometric tissue models it is necessary to properly apply a set of image-processing
techniques to deal with the complexity of human tissues and to get simplified models. These models
will reduce the complexity of the subsequent numerical analysis. In this way, the geometric models
obtained by this methodology will be used to generate surface and complex finite-element meshes,
which will help in visualization, measurement, biomechanical simulation, rapid prototyping and
prosthesis design. Thus, we propose an integrated methodology based on the observation of the tissue
type, its intensity profiles and its boundary quality, which consists of five main steps integrated into the
computational tool Biomedical View [22,23]. The algorithms were developed using MATLAB [8]
and the Insight Toolkit (ITK) library [24]. The flowchart of the proposed methodology is shown
in figure 5.
Figure 5
Schematic flowchart of integrated methodology
The first step of the methodology, called 3D reconstruction, builds an initial volume from the two-
dimensional medical images. The second step consists in applying preprocessing techniques to the
original volumes in order to reduce noise and other artifacts; these techniques are chosen according to
the image characteristics.
The third step is the segmentation of the initial volume considering regions of interest (ROI),
such as soft and hard tissues. The fourth step refines the obtained models by applying morphological
operators and a Gaussian filter. Finally, in the last step, the volumes are saved in standard output formats
readable by most CAD programs, or the geometric model is discretized through numerical methods.
Each step of the methodology is explained in detail in the following sections.
A region of interest (ROI) is selected in the initial 3D image to obtain sub-volumes containing the zones
of interest. To each sub-volume, a flowchart of algorithms is applied, starting from the preprocessing
step, to obtain the relevant tissue volume.
I_t = ∇ · ( g(|∇I|) ∇I )        (6)
The conductance term g(|∇I|) is used to reduce diffusion in areas of high gradient magnitude |∇I|, so
that smoothing is performed within regions while edges are preserved. That is, g(x) → 0 when x → ∞
(the value reached at an edge) and g(x) → 1 when x → 0 (the value reached within a homogeneous region).
|∇I| = sqrt( (∂I/∂x)² + (∂I/∂y)² + (∂I/∂z)² )        (7)
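A minimal MATLAB sketch of this anisotropic diffusion filter is given below. It operates on a single 2D
slice, assumes the exponential conductance function of Perona and Malik [25], and the parameter names
(niter, dt, K) are introduced only for this example.

```matlab
% Sketch of the anisotropic diffusion of eqn. 6 on one slice, assuming the
% Perona-Malik conductance g(x) = exp(-(x/K)^2).  Save as aniso_diffusion.m.
function If = aniso_diffusion(I, niter, dt, K)
    If = double(I);
    for it = 1:niter
        % finite differences toward the four neighbors
        dN = circshift(If, [-1  0]) - If;
        dS = circshift(If, [ 1  0]) - If;
        dE = circshift(If, [ 0 -1]) - If;
        dW = circshift(If, [ 0  1]) - If;
        % conductance: close to 1 inside homogeneous regions, close to 0 at edges
        cN = exp(-(dN / K).^2);  cS = exp(-(dS / K).^2);
        cE = exp(-(dE / K).^2);  cW = exp(-(dW / K).^2);
        If = If + dt * (cN .* dN + cS .* dS + cE .* dE + cW .* dW);
    end
end
```

With the parameter values reported later for the brain phantom (7 iterations, time step 0.0625,
conductance 5), a call would look like If = aniso_diffusion(slice, 7, 0.0625, 5).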
The gradient can be highly sensitive to noise if no smoothing is applied beforehand, so the images fed to
this filter were those previously smoothed by the anisotropic diffusion filter. In some cases, after
enhancing the boundaries with the gradient calculation, an additional filter was applied to strengthen the
boundaries and to ensure a suitable segmentation. For this purpose, the sigmoid filter provided by ITK
[24] was integrated. It transforms the gray-scale intensity values of the image, generating an image
Isigmoid in which the boundary voxels are strengthened and the remaining region voxels are
progressively smoothed. This filter is configured using four parameters, as follows:
Isigmoid = (Max − Min) / (1 + e^(−(I − β)/α)) + Min        (8)

where I contains the input voxel intensities and Isigmoid the output voxel intensities, Min and Max are
the minimum and maximum values of the output image, α is the width of the input intensity range and β
defines the intensity around which that range is centered.
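A short MATLAB sketch of this intensity mapping is shown below; it applies eqn. 8 to the gradient-
magnitude image of the earlier sketch, and the parameter values are illustrative rather than those used in
the experiments.

```matlab
% Sketch of the sigmoid mapping of eqn. 8 applied to the boundary image Gmag.
Min = 0;  Max = 255;           % output intensity range
alpha = 10;  beta = 120;       % width and center of the input range of interest
Isigmoid = (Max - Min) ./ (1 + exp(-(Gmag - beta) / alpha)) + Min;
% Voxels with Gmag well above beta approach Max (strengthened boundaries);
% voxels well below beta approach Min (progressively smoothed regions).
```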
The region growing starts from seeds placed in the zones of interest. The next step analyzes the voxels
neighboring the region, calculating the mean m and the standard deviation σ of the region and adding
the voxels at positions X whose gray-scale intensity values meet the condition of eqn. 9:

m − f·σ ≤ I(X) ≤ m + f·σ        (9)

where I is the image, X is the position of the neighbor voxel being analyzed, m is the mean, σ is the
standard deviation and f is a multiplying factor. This second step is repeated until no more voxels can be
added. Finally, the segmented object is given by all the voxels accepted during the search.
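The following MATLAB sketch implements this growing step under the inclusion condition of eqn. 9;
the 6-connected neighborhood, the seed-mask input and the function name are assumptions made for the
example.

```matlab
% Sketch: iterative region growing with the statistical condition of eqn. 9.
% V is the (filtered) volume, seedMask is a logical volume marking the seeds.
function region = grow_region(V, seedMask, f)
    region = seedMask;
    se = false(3, 3, 3);                       % 6-connected neighborhood
    se(2, 2, :) = true;  se(2, :, 2) = true;  se(:, 2, 2) = true;
    changed = true;
    while changed
        m = mean(V(region));  s = std(V(region));    % statistics of the region
        frontier = imdilate(region, se) & ~region;   % candidate neighbor voxels
        accept = frontier & (V >= m - f * s) & (V <= m + f * s);   % eqn. 9
        changed = any(accept(:));
        region = region | accept;
    end
end
```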
One of the main problems of this technique is the over-segmentation of regions caused by image noise,
hence the importance of the preprocessing stage to reduce noise and to enhance the edges. It was
necessary to group some of the adjacent segments according to the gray-scale levels of the labeled
regions in order to obtain the entire volume. Thus, in some cases the final volumes were obtained
through a thresholding process, setting the lower threshold to t0 and the upper threshold to tf, so that
voxels with t0 ≤ I ≤ tf are kept (see the sketch below).
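A hedged MATLAB sketch of this grouping step is shown below; it assumes a label volume L (for
instance from a watershed segmentation) and keeps the labeled regions whose mean gray level falls
inside the band [t0, tf]; the threshold values are illustrative.

```matlab
% Sketch: grouping adjacent labeled regions by gray level with a double threshold.
t0 = 90;  tf = 200;                          % illustrative thresholds
finalVol = false(size(V));
for lab = 1:max(L(:))
    m = mean(V(L == lab));                   % mean gray level of the labeled region
    if m >= t0 && m <= tf
        finalVol(L == lab) = true;           % keep regions inside the intensity band
    end
end
```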
Figure 6
Flowchart for jaw and spine models
Figure 7 shows the results obtained for jaw CT images in DICOM format, of size 192 × 192 pixels and
voxel spacing 1.5625 × 1.5625 × 2.5 mm. Likewise, figure 8 shows the results obtained at each step for
spine CT images in DICOM format, with 513 slices of size 512 × 512 pixels and voxel spacing
0.782 × 0.782 × 1.0 mm.
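To make the jaw/spine flowchart of figure 6 concrete, the MATLAB sketch below chains the steps just
described: slice-wise anisotropic diffusion, gradient-magnitude computation, watershed labeling and a
threshold-based selection of the bone zone. The diffusion parameters and the bone-like threshold are
illustrative assumptions, and aniso_diffusion refers to the earlier sketch.

```matlab
% Sketch of the pipeline of figure 6 on a CT volume V.
Vs = zeros(size(V));
for k = 1:size(V, 3)                          % slice-wise smoothing (fig. 7b)
    Vs(:, :, k) = aniso_diffusion(V(:, :, k), 7, 0.0625, 5);
end
[Gx, Gy, Gz] = gradient(Vs);
Gmag = sqrt(Gx.^2 + Gy.^2 + Gz.^2);           % boundary image (fig. 7c)
L = watershed(Gmag);                          % labeled catchment basins (fig. 7d-e)
bone = (Vs >= 400) & (L > 0);                 % bone-like threshold on labeled voxels (fig. 7f)
```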
Figure 7
Jaw bone preprocessing and segmentation. (a) Axial slice view of the skull CT. (b) ROI with noise
reduction through the anisotropic diffusion filter. (c) Gradient-magnitude image of (b). (d) Watershed
segmentation applied to the boundary image (c). (e) Watershed image shown with a color map. (f) Jaw
zone selected through the thresholding technique.
Figure 8
Preprocessing and segmentation of spine CT images. (a) Axial slice view of the CT. (b) ROI with noise
reduction through the anisotropic diffusion filter. (c) Gradient-magnitude image of (b). (d) Watershed
segmentation applied to the boundary image (c). (e) Watershed image shown with a color map.
(f) Spine zone selected through the thresholding technique.
The geometric model of a jaw is displayed in figure 9. Different types of tissue of the jaw bone are
discriminated (fig. 9a): cortical bone, medullary bone, alveoli and even the prosthetic screw implanted
in the patient. The surface and mesh views of the obtained volume are presented in figures 9.b and 9.c.
A spine geometric model obtained by the proposed method is presented in figure 10. The views
presented in this figure have been obtained using the ParaView and GiD software.
Figure 9
Jaw volumetric view. (a) Jaw bone 3D visualization. (b) Surface volume (c) Mesh volume
Figure 10
Views of the spine model. (a) Sagittal slice view. (b) 3D view using Paraview. (c) 3D view using GiD
• Preprocessing: the image noise was filtered using the anisotropic diffusion algorithm, smoothing
the noise while preserving the image boundaries.
• Segmentation: the region growing algorithm was applied to the filtered image, placing spherical
seeds in the zone of interest. The inclusion condition is described in eqn. 9 and is based on the
mean and the standard deviation of the neighboring voxels (see section 3.3.1). The resulting
volume was a binary image with the segmented tissue zone colored white (value 255).
• Resampling and CAD exportation: in order to improve the initial geometric model, the volume
was resampled through morphological dilation with a spherical structuring element of size
3×3×3, smoothing the overlapping surfaces and filling the holes generated during segmentation
due to the sensitivity of the segmentation condition (a small sketch of this step follows this list).
The final geometric model was saved in a format readable by visualization software and CAD tools.
• Discretization: finally, using these tools, test boundary conditions were applied over random
zones of the model.
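The MATLAB sketch below illustrates the resampling step on the binary volume produced by the
region growing sketch; the structuring-element size and the Gaussian kernel size are the illustrative
values 3×3×3 and 3.

```matlab
% Sketch of the resampling step: spherical dilation plus Gaussian smoothing.
[x, y, z] = ndgrid(-1:1, -1:1, -1:1);
se = (x.^2 + y.^2 + z.^2) <= 1;            % 3x3x3 rounded structuring element
dilated   = imdilate(region, se);          % fill small holes left by segmentation
smoothVol = smooth3(double(dilated), 'gaussian', 3);   % smooth the surfaces
```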
Figure 11
Flowchart for gray matter model
Figure 12 displays the results obtained at each stage for brain MR images in DICOM format, with 60
slices of size 256 × 256 pixels and voxel spacing 0.86 × 0.86 × 3.0 mm. For visualization purposes, only
one of the slices is presented. Figure 12(b) shows the selection of five seeds in the zone of interest. The
success of the segmentation depends on where the seeds are placed.
Figure 12
Segmentation of gray matter using the region growing algorithm. (a) Original volume from brain MR
images. (b) Coronal slice view with the selection of five initial seeds. (c) Denoised image obtained with
the anisotropic diffusion filter. (d) Coronal slice view of (b) with the gray matter segmented through
region growing. (e) Volumetric view of the segmented gray matter in (d).
[Table: compilation ID; flowchart based on the region growing algorithm; flowchart based on the watershed algorithm]
4.4 Validation
Validation experiments are needed to assess the performance of any methodology based on
preprocessing and segmentation of medical images. A common approach to validate segmentation
methods is through the use of computational phantoms. They simulate the image acquisition process
using only simplified models.
The proposed methodology was validated using a computational phantom obtained from the brain
MR images available on the BrainWeb web site [13]. The flowcharts used were the same as those
applied to obtain the jaw and gray matter models (see sections 4.1 and 4.2.1), based on the watershed
and region growing algorithms, respectively. The results were compared to the volumes obtained from
the web site, using statistical texture analysis to quantify the performance of the proposed methodology.
The texture analysis applied to images is related to the spatial distribution of the digital levels of the
image. For the validation, the texture analysis was applied using statistical descriptors that study the
pixel values and describe properties such as smoothness and roughness. Finally, the absolute percentage
error of the voxels was calculated between the two models (segmented and ground-truth). For
completeness, the statistical descriptors are briefly described below; a short computational sketch
follows the list.
• Standard deviation. It measures the dispersion or contrast among digital levels and is related to
the homogeneity observed in the image. In predominantly dark images, the standard deviation σ
is high if there are high-intensity pixels over a low-intensity background. If σ = 0, the image
intensity is constant; larger values of σ indicate stronger intensity variations.
• Skewness. It measures the asymmetry of the histogram; a symmetric histogram has a third-order
moment close to zero. In mathematical terms, skewness is the asymmetry of the data with respect
to the mean: if it is negative, the data are spread more to the left of the mean than to the right; if
it is positive, the data are spread more to the right. The skewness μ3 of a normal distribution (or
of any perfectly symmetric distribution) is zero. This descriptor is defined as
μ3 = (1/σ³) Σ_i (i − μ)³ h(i)        (11)
where μ is the mean of gray levels of image, σ is the standard deviation of image, h(i) is the
probability histogram for the gray levels.
• Fourth-order moment (kurtosis). Kurtosis (μ4) measures the flattening of the upper peak of the
histogram: the smaller the value, the flatter the peak. Its value for a normal distribution is 3.
Distributions more likely to produce atypical values than the normal distribution have kurtosis
greater than 3, and distributions less likely to produce atypical values have kurtosis less than 3.
This descriptor is defined as
μ4 = (1/σ⁴) Σ_i (i − μ)⁴ h(i)        (12)
where μ is the mean of gray levels of image, σ is the standard deviation of image, h(i) is the
probability histogram for the gray levels.
• Average entropy. It measures the image granularity; it is a statistical measure of randomness
used to characterize the image texture. A high value indicates a coarse texture, and the value is
zero for a constant image. The entropy is defined as

e = − Σ_i h(i) log2 h(i)        (13)

where h(i) is the probability histogram for the gray levels.
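The MATLAB sketch below gathers these histogram-based descriptors in a single helper; the 0-255
gray-level range, the function name and the use of the nonzero-voxel count as the "# pixels" value are
assumptions made for this example.

```matlab
% Sketch: histogram-based texture descriptors used in the validation.
function d = texture_descriptors(V)
    levels = 0:255;                                   % assumed gray-level range
    h = histcounts(double(V(:)), -0.5:1:255.5);       % counts per gray level
    h = h / sum(h);                                   % probability histogram h(i)
    mu    = sum(levels .* h);                         % mean gray level
    sigma = sqrt(sum((levels - mu).^2 .* h));         % standard deviation
    mu3   = sum((levels - mu).^3 .* h) / sigma^3;     % skewness, eqn. 11
    mu4   = sum((levels - mu).^4 .* h) / sigma^4;     % kurtosis,  eqn. 12
    nz    = h(h > 0);
    e     = -sum(nz .* log2(nz));                     % average entropy, eqn. 13
    d = [nnz(V), mu, sigma, mu3, mu4, e];             % # voxels, mean, std, skew, kurt, entropy
end
```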
The BrainWeb phantom was used as ground truth, with known tissue classes: it simulates brain MR
images through "fuzzy" volumes in which each tissue type is represented. In the discrete anatomical
volume, each voxel is labeled with a numerical value as follows:
• 0=Background (BG)
• 1=Cerebrospinal Fluid (CSF)
• 2=Gray Matter (GM)
• 3=White Matter (WM)
• 4=Fat (FA)
• 5=Muscle/Skin (MS)
• 6=Skin (SK)
• 7=Skull (SU)
• 8=Glial Matter (GL)
• 9=Connective Tissue (CT)
The (X, Y, Z) coordinates of the seed centers are: Seed1=(116,100,99), Seed2=(113,82,99),
Seed3=(91,64,60), Seed4=(83,111,60). Figure 13(c) shows the gray matter zone provided by
BrainWeb.
The texture analysis was carried out to validate the segmentation results by calculating
statistical descriptors in the volumes obtained in the gray matter zone. Table 2 collects the statistical
values and their respective percentage of error, showing that the error in the statistical calculation in
segmentation zones is less than 0.3%.
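A short sketch of the comparison reported in tables 2-4 is given below; it assumes the
texture_descriptors helper defined earlier and two volumes, segVol (the result of the methodology) and
truthVol (the BrainWeb gray-matter volume), both hypothetical variable names.

```matlab
% Sketch: absolute percentage error between descriptor values of the segmented
% volume and those of the BrainWeb ground-truth volume.
dSeg   = texture_descriptors(segVol);
dTruth = texture_descriptors(truthVol);
errPct = 100 * abs(dSeg - dTruth) ./ abs(dTruth);   % one entry per descriptor
```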
Figure 13
Gray matter segmentation in phantom volume. (a) Axial slice number 98 of the original phantom
image (b) Gray matter segmentation by the proposed methodology using the region growing
algorithm (c) Gray matter zone segmentation by Brainweb.
Figure 14 shows the results obtained by segmenting the gray matter in the phantom volume corrupted
with additive Gaussian noise and filtered using the anisotropic diffusion filter, following the flowchart
of figure 11. Figure 14(a) presents the original phantom image, showing axial slice number 98 of the
phantom. Figure 14(b) shows the phantom image with additive Gaussian noise. Figure 14(c) shows the
image resulting from filtering (b) with the anisotropic diffusion filter (parameters: number of
iterations = 7, time step = 0.0625 and conductance = 5.0). Two spherical seeds (seed points) of radius
2 pixels (2 mm), chosen to start the region growing segmentation, are also shown; their (X, Y, Z) center
coordinates are Seed1=(99,50,99) and Seed2=(74,124,99). Figure 14(d) contains the segmentation
results. In figure
14(e) the gray matter zone provided by the Brainweb is displayed.
Table 2 Validation of the volume of the gray matter zone using the region-growing algorithm and
the statistical texture analysis
# pixels | Mean | Std. dev. | Skewness | Kurtosis | Entropy
Figure 14
Segmented gray matter in the phantom volume. (a) Axial slice number 98 of the original phantom
image. (b) Original image with added Gaussian noise. (c) Noisy image filtered with the anisotropic
diffusion filter, with the two seed points. (d) Gray matter segmented by the region growing algorithm
with the spherical seeds. (e) Gray matter zone segmented by BrainWeb.
Statistical values were calculated, and their respective percentages of error between the segmented
volume and the volume provided by BrainWeb are presented in Table 3. Note that the percentage of
error of the statistical calculations between both volumes is less than 7%.
Table 3 Validation of gray matter volume obtained with region growing algorithm after adding
Gaussian-noise in a brain phantom and applying the anisotropic diffusion-filter
Method | # pixels | Mean | Std. dev. | Skewness | Kurtosis | Entropy
Diffusion filter and region growing | 955876 | 0.1345 | 0.3411 | 2.1430 | 5.5926 | 0.5695
Figure 15
Gray matter segmentation in the brain phantom. (a) Axial slice number 98 of the original phantom
image. (b, c) Gray matter segmentation through the proposed methodology using the watershed
algorithm. (d) Gray matter zone segmented by BrainWeb.
The texture analysis was used to validate the results by calculating statistical descriptors in the obtained
volumes. In table 4 the statistical values and their respective percentage of error are presented. Note
that the percentage of error of the statistical calculations in both volumes is less than 2%.
Method | # pixels | Mean | Std. dev. | Skewness | Kurtosis | Entropy
Diffusion filter and watershed | 857718 | 0.1207 | 0.3257 | 2.3293 | 6.4256 | 0.5312
The 3D views of the gray matter zone volume provided by the Brainweb and our methodology are
displayed in figure 16.
Figure 16
Volumetric view of the gray-matter zone. (a) Original gray-matter volume provided by BrainWeb.
(b) Volume obtained by region growing.
In the validation experiments, the differences between the segmented volumes and the ground-truth
volumes were very small. Under critical conditions, when the images were corrupted with Gaussian
noise (see table 3), low percentages of error were obtained: number of pixels = 5.9%, mean = 5.9%,
standard deviation = 2.5%, skewness = 4.3%, kurtosis = 7.1% and entropy = 3.7%. Likewise, it was
verified that the implemented techniques are suitable for generating and exporting volumes in the *.vtk
and *.stl formats, which are easy to read from other programs and CAD tools. Their usefulness for
generating different mesh, solid and surface views was also demonstrated. Within the CAD tools, test
boundary conditions were applied and the models were discretized using the FEM, showing the
usefulness of the generated volumes for further numerical analysis.
It should be remarked that the current implementation of the proposed approach has some limitations.
First, the methodology has been demonstrated using only MR and CT images; other medical imaging
modalities, such as PET or 4D MR images, could also be used. In addition, other, more sophisticated
image-processing algorithms could be integrated into the methodology.
Finally, the methodology is independent of the underlying properties of the finite-element
modeling. We only use morphological operators and Gaussian filters to smooth the surfaces of the
tissue models, but other techniques could be applied in order to reduce the mathematical complexity of
the soft- and hard-tissue models, as well as the size and irregularity of the FE mesh.
REFERENCES
1. I. Peterlik, M. Sedef, C. Basdogan, and L. Matyska. Real-time visio-haptic interaction with static
soft tissue models having geometric and material nonlinearity. Computers & Graphics 34(1):43–54,
2010.
2. I. Bankman. Handbook of Medical Imaging, Processing and Analysis. Academic Press, UK, 2000.
3. B. Preim and D. Bartz. Visualization in Medicine. Theory, Algorithms, and Applications. Morgan
Kaufmann Publishers, Elsevier, NY, 2007.
4. J. Semmlow. Biosignal and Biomedical Image Processing, MATLAB Based Applications. CRC
Press, Boca Raton, 2004.
5. C. M. Müller-Karger, E. Rank, and M. Cerrolaza, P-version of the finite element method for highly
heterogeneous simulation of human bone. Finite Elem in Analysis & Design 40(7):757–770, 2004.
6. V. Pattijn, F. Gelaude, J. Vander, and R. Van. Medical image-based preformed titanium
membranes for bone reconstruction. In C. T. Leondes (Ed) Medical Imaging Systems Technology,
Methods in General Anatomy, vol. 5, World Scientific Publishing Co. Pte. Ltd., Singapore, 2005.
7. J. Isaza, S. Correa, and J. Congote. Methodology for 3D reconstruction of craniofacial structures
and their use in the finite element method (in spanish). In IV Latin American Congress on
Biomedical Engineering 2007, Bioengineering Solutions for Latin America Health, volume 18,
pages 766–769. Springer Berlin, 2007.
8. MathWorks. Image Processing Toolbox TM 6 User’s Guide. Release 2009a. MAT-LAB: Matrix
Laboratory, 2009.
9. T. Acharya and A. K. Ray. Image Processing. Principles and Applications, John Wiley, NY, 2005
10. C.-L. Chuang and C.-M. Chen. A novel region-based approach for extracting brain tumor in CT
images with precision. In R. Magjarevic, J. H. Nagel, and R. Magjarevic, editors, World Congress
on Medical Physics and Biomedical Engineering 2006, Vol. 14, IFMBE Proceedings, pages 2488–
2492. Springer Berlin Heidelberg, 2007
11. I. Avazpour, M. I. Saripan, A. J. Nordin, and R. S. A. R. Abdullah. Segmentation of
extrapulmonary tuberculosis infection using modified automatic seeded region growing. Biological
Procedures Online 11(1):241–252, 2009.
12. C. Ciofolo and M. Fradkin. Segmentation of pathologic hearts in long-axis late-enhancement MRI.
Med Image Comput Comput Assist Interv 11:186–193, 2008.
13. C. Cocosco, V. Kollokian, R.-S. Kwan, and A. Evans. BrainWeb: online interface to a 3D MRI
simulated brain database. NeuroImage 5(4):S425, 1997.
14. L. Landini, V. Positano, and M. Santarelli. 3D medical image processing. In Image Processing in
Radiology (Neri, Caramella & Bartolozzi, Eds), Springer, pages 67–85, 2008
15. H. Park, M. Kwon, and Y. Han. Techniques in image segmentation and 3D visualization in brain
MRI and their applications. In C. T. Leondes (Ed) Medical Imaging Systems Technology - Methods
in Cardiovascular and Brain Systems, vol. 5, pages 207–253. World Scientific Publishing Co. Pte.
Ltd., Singapore, 2005.
16. R. Goldenberg, R. Kimmel, E. Rivlin, and M. Rudzsky. Techniques in automatic cortical gray
matter segmentation of three-dimensional (3D) brain images. In C. T. Leondes (Ed) Medical
Imaging Systems Technology - Methods in Cardiovascular and Brain Systems, vol. 5, World
Scientific Publishing Co. Pte. Ltd., Singapore, 2005.
17. A. W.-C. A. Liew and H. Yan. Computer techniques for the automatic segmentation of 3D MR
brain images. In C. T. Leondes, editor, Medical Imaging Systems Technology - Methods in
Cardiovascular and Brain Systems, Vol. 5. World Scientific Publishing Co. Pte. Ltd., Singapore,
2005.
18. H.-O. Peitgen, S. Oeltze, and B. Preim. Geometrical and structural analysis of vessel systems in 3D
medical image datasets. In C. T. Leondes (Ed) Medical Imaging Systems Technology - Methods in
Cardiovascular and Brain Systems, vol. 5, pages 1–60, World Scientific Publishing Co. Pte. Ltd.,
Singapore, 2005.
19. A. P. Accardo, I. Strolka, R. Toffanin, and F. Vittur. Medical imaging analysis of the three
dimensional (3D) architecture of trabecular bone: Techniques and their applications. Medical
Imaging Systems Technology, Methods in Diagnosis Optimization 5:1–41, 2005.
20. M. Levoy. Volume rendering: Display of surfaces from volume data. IEEE Computer Graphics
and Applications 8(3):29–37, 1988.
21. W. E. Lorensen and H. E. Cline. Marching cubes: A high resolution 3D surface construction
algorithm. Computer Graphics 21(4):163–169, 1987.
22. G. Gavidia, E. Soudah, J. Suit, M. Cerrolaza, and E. Oñate. Development of a MATLAB tool for
medical image processing and its integration into the medical GiD software. Technical Report IT-595,
CIMNE, International Center for Numerical Methods in Engineering, Barcelona, Spain, 2009.
23. G. Gavidia, E. Soudah, M. Martín-Landrove, and M. Cerrolaza. Discrete modeling of the human
body using preprocessing and segmentation techniques of medical images. Rev. Int'l Met. Num. for
Analysis & Des. in Eng. (in Spanish) 27(3):220–226, 2011.
24. L. Ibáñez, W. Schroeder, L. Ng, and J. Cates. The ITK Software Guide. Kitware Inc, 2nd ed., 2005
25. P. Perona and J. Malik. Scale-space and edge detection using anisotropic diffusion. IEEE Trans.
Pattern Anal. Machine Intell 12:629–639, 1990.
26. J. Liu, S. Huang, and W. Nowinski. Automatic segmentation of the human brain ventricles from
MR images by knowledge-based region growing and trimming. Neuroinformatics 7(2):131–146,
2009.
27. G. Mühlenbruch, M. Das, C. Hohl, J. E. Wildberger, D. Rinck, T. Flohr, R. Koos, C. Knackstedt,
R. W. Günther, and A. H. Mahnken. Global left ventricular function in cardiac CT: evaluation of an
automated 3D region-growing segmentation algorithm. European Radiology 16(5):1117–1123,
2005.
28. H. K. Hahn and H.-O. Peitgen. IWT—interactive watershed transform: a hierarchical method for
efficient interactive and automated segmentation of multidimensional grayscale images. SPIE
Medical Imaging 5032:643–653, 2003.
29. H. Digabel and C. Lantuejoul. Iterative algorithms. Proceedings of the 2nd European Symposium
on Quantitative analysis of microstructures, vol.1, pages 85–99, Riederer Verlag, 1978.
30. Kitware, Inc. VTK User’s Guide, 5 ed., 2006.
31. 3D Systems, Inc. Stereolithography Interface Specification, 2010
32. R. Ribó, M. Pasenau, E. Escolano, J. Pérez, A. Coll, and A. Melendo. GiD The Personal Pre and
Postprocessor. Reference Manual, version 9. Internal report, CIMNE, International Center for
Numerical Methods in Engineering, Barcelona, 2009.
33. Kitware, Inc. ParaView: Parallel Visualization Application. User’s Guide, version 1.6., 2009.
34. Autodesk Inventor Professional. User’s Manual and Guide, 2009.
35. Abaqus 6.9. ABAQUS/CAE User’s Manual, 2009.
36. National Electrical Manufacturers Association. DICOM: Digital Imaging and Communications in
Medicine, 2008.