Intracerebral Hemorrhage (ICH)
Understanding the CT imaging features for the development of deep learning networks, ranging from restoration and segmentation to prognostic and prescriptive purposes
Petteri Teikari, PhD
High-dimensional Neurology, Queen Square Institute of Neurology, UCL, London
https://www.linkedin.com/in/petteriteikari/
Version “06/10/20”
For whom is this “literature review for visually orientated people”?
“A bit of everything related to head CT deep learning, focused on intracerebral hemorrhage (ICH) analysis”
It is assumed that the reader is familiar with deep learning/computer vision, but less so with computerized tomography (CT) and ICH
https://www.linkedin.com/in/andriyburkov
What is ICH? “hemorrhagic stroke”
Spontaneous Intracerebral Hemorrhage (ICH)
https://www.grepmed.com/images/4925/intracerebral-subarachnoid-hemorrhage-comparison-diagnosis-neurology-epidural
http://doi.org/10.13140/RG.2.1.1572.8167
“Hemorrhagic stroke” is less common than ischemic stroke, the “layman definition” of stroke
“Spontaneous”, as opposed to traumatic brain hemorrhage caused by a blow to the head (“traumatic brain injury”, TBI)
https://www.strokeinfo.org/stroke-treatments-hemorrhagic-stroke/
https://mc.ai/building-an-algorithm-to-detect-different-types-of-intracranial-brain-hemorrhage-using-deep/
https://mayfieldclinic.com/pe-ich.htm
https://aneskey.com/intracerebral-hemorrhagic-stroke/
The ICH basics from Anesthesia Key
The typical hemorrhage location based on etiology
Primary mechanical injury → Secondary injuries
Pathophysiological Mechanisms and Potential Therapeutic Targets in Intracerebral Hemorrhage
Zhiwei Shao et al. (Front Pharmacol. 2019; 10: 1079, Sept 2019)
https://dx.doi.org/10.3389%2Ffphar.2019.01079
Intracerebral hemorrhage (ICH) is a subtype of hemorrhagic stroke with high mortality and morbidity. The resulting hematoma within brain parenchyma induces a series of adverse events causing primary and secondary brain injury. The mechanism of injury after ICH is very complicated and has not yet been fully elucidated.
This review discusses some key pathophysiological mechanisms in ICH, such as oxidative stress (OS), inflammation, iron toxicity, and thrombin formation. The corresponding therapeutic targets and therapeutic strategies are also reviewed.
The initial pathological damage of cerebral hemorrhage to the brain is the mechanical compression caused by the hematoma. The hematoma mass can increase intracranial pressure, compressing the brain and thereby potentially affecting blood flow, and subsequently leading to brain herniation (Keep et al., 2012).
Subsequently, brain herniation and brain edema cause secondary injury, which may be associated with poor outcome and mortality in ICH patients (Yang et al., 2016).
Unfortunately, the common treatments for brain edema (steroids, mannitol, glycerol, and hyperventilation) cannot effectively reduce intracranial pressure or prevent secondary brain injury (Cordonnier et al., 2018). Truly effective clinical treatments are very limited, mainly because the problem of translating preclinical research into clinical application has not yet been solved. Therefore, a multi-target neuroprotective therapy could make clinically effective treatment strategies possible, but it also requires further study.
Pro- and anti-inflammatory cytokines in secondary brain injury after ICH.
Mechanisms of erythrocyte lysates and thrombin in secondary brain injury after ICH.
The Keap1–Nrf2–ARE pathway. Keap1 is an OS sensor and negatively regulates Nrf2. Once exposed to reactive oxygen species (ROS), the activated Nrf2 translocates to the nucleus, binds to the antioxidant response element (ARE), heterodimerizes with one of the small Maf (musculoaponeurotic fibrosarcoma oncogene homolog) proteins, and enhances the upregulation of cytoprotective, antioxidant, anti-inflammatory, and detoxification genes that mediate cell survival.
”Time is brain”: neural injury (and your imaging features*) depends on the time since the initial hematoma
Intracerebral haemorrhage
Adnan I Qureshi, A David Mendelow, Daniel F Hanley
The Lancet, Volume 373, Issue 9675, 9–15 May 2009, Pages 1632-1644
https://doi.org/10.1016/S0140-6736(09)60371-8
Cascade of neural injury initiated by intracerebral haemorrhage. The steps in the first 4 h are related to the direct effect of the haematoma, later steps to the products released from the haematoma. BBB=blood–brain barrier. MMP=matrix metallopeptidase. TNF=tumour necrosis factor. PMN=polymorphonuclear cells.
Progression of haematoma and oedema on CT
Top: hyperacute expansion of haematoma in a patient with intracerebral haemorrhage on serial CT scans. Small haematoma detected in the basal ganglia and thalamus (A). Expansion of haematoma after 151 min (B). Continued progression of haematoma after another 82 min (C). Stabilisation of haematoma after another 76 min (D).
Bottom: progression of haematoma and perihaematomal oedema in a patient with intracerebral haemorrhage on serial CT scans. The first scan (E) was acquired before the intracerebral haemorrhage. Perihaematomal oedema is highlighted in green to facilitate recognition of progression of oedema. At 4 h after symptom onset there is a small haematoma in the basal ganglia (F). Expansion of haematoma with extension into the lateral ventricle and new mass-effect and midline shift at 14 h (G). Worsening hydrocephalus and early perihaematomal oedema at 28 h (H). Continued mass-effect with prominent perihaematomal oedema at 73 h (I). Resolving haematoma with more prominent perihaematomal oedema at 7 days (J).
Or how much is time really brain?
Influence of time to admission to a comprehensive stroke centre on the outcome of patients with intracerebral haemorrhage (Jan 2020)
Luis Prats-Sánchez, Marina Guasch-Jiménez, Ignasi Gich, Elba Pascual-Goñi, Noelia Flores, Pol Camps-Renom, Daniel Guisado-Alonso, Alejandro Martínez-Domeño, Raquel Delgado-Mederos, Ana Rodríguez-Campello, Angel Ois, Alejandra Gómez-Gonzalez, Elisa Cuadrado-Godia, Jaume Roquer, Joan Martí-Fàbregas
https://doi.org/10.1177%2F2396987320901616
In patients with spontaneous intracerebral haemorrhage, it is uncertain whether the impact of diagnostic and therapeutic measures on the outcome is time-sensitive. We sought to determine the influence of the time to admission to a comprehensive stroke centre on the outcome of patients with acute intracerebral haemorrhage.
Our results suggest that in patients with intracerebral haemorrhage and known symptom onset who are admitted to a comprehensive stroke centre, an early admission (≤110 min) does not influence the outcome at 90 days.
Distribution of propensity score blocks by time to admission. For each pair of blocks, the box on the left represents the group of patients with an admission ≤110 min and the one on the right represents the group who was admitted >110 min.
Management of ICH: fewer options than for ischemic stroke
Intracerebral haemorrhage
Adnan I Qureshi, A David Mendelow, Daniel F Hanley
The Lancet, Volume 373, Issue 9675, 9–15 May 2009, Pages 1632-1644
https://doi.org/10.1016/S0140-6736(09)60371-8
Odds ratio for death or disability in patients with lobar intracerebral haemorrhage treated surgically or conservatively. Boxes are Peto's odds ratio (OR), lines are 95% CI. Adapted with permission from Lippincott Williams and Wilkins
Clinical evidence suggests the importance of three management tasks in intracerebral haemorrhage: stopping the bleeding,[81] removing the clot,[70] and controlling cerebral perfusion pressure.[92] The precision needed to achieve these goals and the degree of benefit attributable to each clinical goal should be precisely defined when the results of trials in progress become available. An NIH workshop[150] identified the importance of animal models of intracerebral haemorrhage and of human pathology studies. Use of real-time, high-field MRI with three-dimensional imaging and high-resolution tissue probes is another priority. Trials of acute blood-pressure treatment and coagulopathy reversal are also medical priorities. And trials of minimally invasive surgical techniques including mechanical and pharmacological adjuncts are surgical priorities. The STICH II trial should determine the benefit of craniotomy for lobar haemorrhage. A better understanding of methodological challenges, including establishment of research networks and multispecialty approaches, is also needed.[150] New information created in each of these areas should add substantially to our knowledge about the efficacy of treatment for intracerebral haemorrhage.
Best care is prevention with blood pressure medication
Intracerebral haemorrhage: current approaches to acute management
Charlotte Cordonnier, Andrew Demchuk, Wendy Ziai, Craig S Anderson
The Lancet, Volume 392, Issue 10154, 6–12 October 2018, Pages 1257-1268
https://doi.org/10.1016/S0140-6736(18)31878-6
ICH is a heterogeneous disease; certain clinical and imaging features help identify the cause, the prognosis, and how to manage the disease. Survival and recovery from intracerebral haemorrhage are related to the site, mass effect, and intracranial pressure from the underlying haematoma, and to subsequent cerebral oedema from perihaematomal neurotoxicity or inflammation and complications from prolonged neurological dysfunction.
A moderate level of evidence supports beneficial effects of active management goals with avoidance of early palliative care orders, well-coordinated specialist stroke unit care, targeted neurointensive and surgical interventions, early control of elevated blood pressure, and rapid reversal of abnormal coagulation.
The concept of time is brain, developed for the management of acute ischaemic stroke, applies readily to the management of acute intracerebral haemorrhage. Initiation of haemostatic treatment within the first few hours after onset, using deferral or waiver of informed consent, or even earlier initiation in a prehospital setting with mobile stroke unit technologies, requires evaluation.
For patients with intracerebral haemorrhage presenting at later or unwitnessed time windows, refining the approach of spot sign detection through newer imaging techniques, such as multi-phase CT angiography (Rodriguez-Luna et al. 2017), might prove useful, as has been shown with the use of CT perfusion in the detection of viable cerebral ischaemia in patients with acute ischaemic stroke who present in a late window (Albers et al. 2018; Nogueira et al. 2018).
Ultimately, the best treatment of intracerebral haemorrhage is prevention; effective detection, management, and control of hypertension across the community and in high-risk groups will have the greatest effect on reducing the burden of intracerebral haemorrhage worldwide.
ICH: high fatality still
European Stroke Organisation (ESO) Guidelines for the Management of Spontaneous Intracerebral Hemorrhage (August 2014)
Thorsten Steiner, Rustam Al-Shahi Salman, Ronnie Beer, Hanne Christensen, Charlotte Cordonnier, Laszlo Csiba, Michael Forsting, Sagi Harnof, Catharina J. M. Klijn, Derk Krieger, A. David Mendelow, Carlos Molina, Joan Montaner, Karsten Overgaard, Jesper Petersson, Risto O. Roine, Erich Schmutzhard, Karsten Schwerdtfeger, Christian Stapf, Turgut Tatlisumak, Brenda M. Thomas, Danilo Toni, Andreas Unterberg, Markus Wagner
https://doi.org/10.1111%2Fijs.12309
Intracerebral hemorrhage (ICH) accounted for 9% to 27% of all strokes worldwide in the last decade, with high early case fatality and poor functional outcome. In view of recent randomized controlled trials (RCTs) of the management of ICH, the European Stroke Organisation (ESO) has updated its evidence-based guidelines for the management of ICH.
We found moderate- to high-quality evidence to support strong recommendations for managing patients with acute ICH on an acute stroke unit, avoiding hemostatic therapy for acute ICH not associated with antithrombotic drug use, avoiding graduated compression stockings, using intermittent pneumatic compression in immobile patients, and using blood pressure lowering for secondary prevention.
We found moderate-quality evidence to support weak recommendations for intensive lowering of systolic blood pressure to <140 mmHg within six hours of ICH onset, early surgery for patients with a Glasgow Coma Scale score of 9–12, and avoidance of corticosteroids.
These guidelines inform the management of ICH based on evidence for the effects of treatments in RCTs. Outcome after ICH remains poor, making further RCTs of interventions to improve outcome a priority.
Age-standardized incidence of hemorrhagic stroke per 100,000 person-years for 1990 (a), 2005 (b), and 2010 (c). From Feigin et al. (1).
CT is typically the first scan done, with MRI later where accessible
MRI offers better image quality, but the cost of the technology limits its availability
Intracerebral hemorrhage: an update on diagnosis and treatment
Isabel C. Hostettler, David J. Seiffge & David J. Werring (12 Jun 2019). UCL Stroke Research Centre, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology and the National Hospital for Neurology and Neurosurgery, London, UK
Expert Review of Neurotherapeutics, Volume 19, 2019, Issue 7. https://doi.org/10.1080/14737175.2019.1623671
Expert opinion: In recent years, significant advances have been made in deciphering causes, understanding pathophysiology, and improving acute treatment and prevention of ICH. However, the clinical outcome remains poor and many challenges remain.
Acute interventions delivered rapidly (including medical therapies targeting hematoma expansion, hemoglobin toxicity, inflammation, edema, and anticoagulant reversal, as well as minimally invasive surgery) are likely to improve acute outcomes.
Improved classification of the underlying arteriopathies (from neuroimaging and genetic studies) and prognosis should allow tailored prevention strategies (including sustained blood pressure control and optimized antithrombotic therapy) to further improve longer-term outcome in this devastating disease.
A) Modified Boston criteria, B) CT Edinburgh criteria.
ICH care pathway.
Pathway to decide on intra-arterial digital subtraction angiography (IADSA) to further investigate ICH cause (adapted from Wilson et al. 2017).
Abbreviations: small vessel disease (SVD), intra-arterial digital subtraction angiography (IADSA), white matter hyperintensities (WMH)
Angiography also for hemorrhagic stroke
Hemorrhagic Stroke (2014)
Julius Griauzde, Elliot Dickerson and Joseph J. Gemmete, Department of Radiology, University of Michigan
http://doi.org/10.1007/978-1-4614-9212-2_46-1
Non-contrast computed tomography has long been the initial imaging tool in the acute neurologic patient. As MRI technology and angiographic imaging have evolved, they too have proven to be beneficial in narrowing the differential diagnosis and triaging patient care. Several biological and physical characteristics contribute significantly to the appearance of blood products on neuroimaging. To adequately interpret images in the patient with hemorrhagic stroke, the evaluator must have knowledge of the interplay between imaging modalities and intracranial blood products.
Additionally, an understanding of technical parameters as well as the limitations of imaging modalities can be helpful in avoiding pitfalls. Recognition of typical imaging patterns and clinical presentations can further aid the evaluator in rapid diagnosis and directed care.
Computed tomography angiography (CTA)
Magnetic resonance angiography (MRA)
Time-of-Flight MRA (TOF MRA), in its simplest form, takes advantage of the flow of blood
Contrast-Enhanced MRA (CE MRA) employs fast spoiled gradient-recalled echo-based sequences (FSPGR) and the paramagnetic properties of gadolinium to intensify the signal within vessels
”Brain is time” also for the appearance of the blood
Evolution of blood products on MRI (derived from a figure created by Dr. Frank Gaillard as presented on http://radiopaedia.org/articles/ageing-blood-on-mri , with permission)
http://doi.org/10.1007/978-1-4614-9212-2_46-1:
The appearance of the ICH at different periods of time depends considerably upon a number of factors. For instance, in early phases, the hematocrit and protein levels of the hematoma will dramatically alter the CT attenuation in the hematoma. In later phases, factors such as oxygen tension at the hematoma will determine how quickly deoxyhemoglobin transitions into methemoglobin and how quickly red blood cells finally lyse and decrease the field inhomogeneity effects of sequestered methemoglobin. The integrity of the blood-brain barrier also helps to determine the degree to which hemosiderin-laden macrophages remain trapped in the parenchyma, causing hemosiderin staining long after the vast majority of the hematoma mass has been resorbed [Parizel et al. 2001].
Intracranial hemorrhage made easy: a semiological approach on CT and MRI
http://doi.org/10.1594/ecr2014/C-1120
CT appearance of ageing blood: several factors vary depending on the stage of the bleeding
Evolution of CT density of intracranial haemorrhage (diagram). Case contributed by Assoc Prof Frank Gaillard
https://radiopaedia.org/cases/evolution-of-ct-density-of-intracranial-haemorrhage-diagram
Appearance of Blood on Computed Tomography and Magnetic Resonance Imaging Scans by Stage
http://doi.org/10.1007/s13311-010-0009-x
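As a rough mental model of the density-evolution diagrams above: an acute clot is hyperdense (roughly 50-70 HU), loses density over days to weeks, passes through brain isodensity (~30 HU), and ends hypodense. A minimal illustrative sketch, where every numeric value is an approximate textbook figure rather than a validated model, and the function name is ours:

```python
def hematoma_attenuation_hu(days_since_onset: float) -> float:
    """Very rough CT attenuation of a parenchymal hematoma over time.

    Assumptions (approximate radiology-textbook figures, not patient data):
    acute clot ~60 HU, density loss ~1.5 HU/day, brain parenchyma ~30 HU,
    chronic hypodense cavity bottoming out around ~15 HU.
    """
    peak_hu = 60.0        # typical hyperacute/acute clot density
    decay_per_day = 1.5   # commonly quoted density loss per day
    floor_hu = 15.0       # chronic stage: hypodense to brain
    return max(peak_hu - decay_per_day * days_since_onset, floor_hu)
```

For example, `hematoma_attenuation_hu(0)` gives the hyperdense acute value (60 HU) and `hematoma_attenuation_hu(21)` is already near brain isodensity (28.5 HU), which is why subacute bleeds can be hard to see on non-contrast CT.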
What predicts the outcome after ICH?
ICH Score: the simplistic baseline for prognosis
ICH Score subcomponents: Glasgow Coma Scale (GCS)
https://www.firstaidforfree.com/glasgow-coma-scale-gcs-first-aiders/
https://emottawablog.com/2018/07/gcs-remastered-recent-updates-to-the-glasgow-coma-scale-gcs-p/
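For reference, the GCS total fed into the ICH score is simply the sum of the three subscores (a trivial sketch; the function and parameter names are ours):

```python
def gcs_total(eye: int, verbal: int, motor: int) -> int:
    """Glasgow Coma Scale: eye opening (1-4) + verbal (1-5) + motor (1-6).

    The total ranges from 3 (deep coma) to 15 (fully alert)."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS subscore out of range")
    return eye + verbal + motor
```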
ICH Score subcomponents: Hematoma volume
How to measure in practice? Note that deep learning segmentation networks are not really in use
Ryan Hakimi, DO, MS, Assistant Professor
https://slideplayer.com/slide/3883134/
Vivien H. Lee et al. (2016) cites:
● Kwak's sABC/2 formula (Kwak et al. 1983, 10.1161/01.str.14.4.493, Cited by 252)
● Kothari's ABC/2 formula (Kothari et al. 1996, 10.1161/01.str.27.8.1304, Cited by 1653)
Excellent accuracy of ABC/2 volume formula compared to computer-assisted volumetric analysis of subdural hematomas. Sae-Yeon Won et al. (2018) https://doi.org/10.1371/journal.pone.0199809
The ABC/2 method is a simple and fast bedside formula for the timely measurement of SDH volume where access to volumetric software is limited; with simple adaptation it may replace computer-assisted volumetric measurement in the clinical and research arena.
Assessment of the ABC/2 Method of Epidural Hematoma Volume Measurement as Compared to Computer-Assisted Planimetric Analysis (2015)
https://doi.org/10.1177%2F1099800415577634
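The ABC/2 estimate above can be sketched in a few lines, following the slice-weighting described by Kothari et al. 1996: A and B are the two largest perpendicular hemorrhage diameters on the axial slice with the largest hemorrhage, and C is obtained by comparing each slice's hemorrhage area with that index slice (>75% counts as a full slice, 25-75% as half, <25% not at all) and multiplying by slice thickness. Function and parameter names are ours:

```python
def abc2_volume(a_cm, b_cm, slice_areas, index_area, slice_thickness_cm):
    """ABC/2 hematoma volume in mL (Kothari et al. 1996).

    a_cm: greatest hemorrhage diameter on the index (largest) slice.
    b_cm: largest diameter perpendicular to A on the same slice.
    slice_areas: hemorrhage area on each CT slice (any consistent unit).
    index_area: hemorrhage area on the index slice (same unit).
    """
    weights = 0.0
    for area in slice_areas:
        frac = area / index_area
        if frac > 0.75:
            weights += 1.0      # counts as a full slice
        elif frac >= 0.25:
            weights += 0.5      # counts as half a slice
        # <25%: not counted
    c_cm = weights * slice_thickness_cm
    return a_cm * b_cm * c_cm / 2.0  # 1 cm^3 == 1 mL
```

For example, a bleed measuring 5 × 4 cm visible on six 5 mm slices with areas [2, 8, 10, 9, 7, 2] (index slice area 10) gives 17.5 mL; the ≥30 mL cutoff is the threshold used by the ICH score.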
ICH Score subcomponents: Intraventricular hemorrhage
https://www.childrensmn.org/educationmaterials/childrensmn/article/15353/intraventricular-hemorrhage-in-premature-babies/
Jackson et al. (2013)
https://doi.org/10.1007/s12028-012-9713-1
ICH Score subcomponents: Infratentorial (cerebellar) bleed
https://aneskey.com/intracerebral-hemorrhagic-stroke/
Impact of Supratentorial Cerebral Hemorrhage on the Complexity of Heart Rate Variability in Acute Stroke
Chih-Hao Chen, Sung-Chun Tang, Ding-Yuan Lee, Jiann-Shing Shieh, Dar-Ming Lai, An-Yu Wu & Jiann-Shing Jeng. Scientific Reports, volume 8, Article number: 11473 (2018)
https://doi.org/10.1038/s41598-018-29961-y
Acute stroke commonly affects cardiac autonomic responses, resulting in reduced heart rate variability (HRV). Multiscale entropy (MSE) is a novel non-linear method to quantify the complexity of HRV. This study investigated the influence of intracerebral hemorrhage (ICH) locations and intraventricular hemorrhage (IVH) on the complexity of HRV. In summary, more severe stroke and larger hematoma volume resulted in lower complexity of HRV. Lobar hemorrhage and IVH had great impacts on cardiac autonomic function.
https://neupsykey.com/diagnosis-and-treatment-of-intracerebral-hemorrhage/
Location → functional measures?
We collected ECG analogue data directly from the bedside monitor (Philips Intellivue MP70, Koninklijke Philips N.V., Amsterdam, Netherlands) for each patient.
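Multiscale entropy, as used by Chen et al., is sample entropy computed on progressively coarse-grained versions of the RR-interval series. A compact sketch of the standard recipe (our implementation, not the authors' code; the tolerance r is conventionally fixed at ~0.15 of the original series' standard deviation across all scales):

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive, non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return x[:n * scale].reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, tol=None):
    """SampEn(m, r) = -ln(A/B), where B and A count template matches of
    length m and m+1 (Chebyshev distance < tol, self-matches excluded)."""
    x = np.asarray(x, dtype=float)
    if tol is None:
        tol = 0.15 * x.std()
    n = len(x)

    def matches(length):
        templates = np.array([x[i:i + length] for i in range(n - length)])
        count = 0
        for i in range(len(templates) - 1):
            d = np.abs(templates[i + 1:] - templates[i]).max(axis=1)
            count += int(np.sum(d < tol))
        return count

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")

def multiscale_entropy(x, scales=range(1, 6), m=2):
    """MSE curve: SampEn at each coarse-graining scale, tolerance anchored
    to the SD of the original (scale-1) series."""
    x = np.asarray(x, dtype=float)
    tol = 0.15 * x.std()
    return [sample_entropy(coarse_grain(x, s), m, tol) for s in scales]
```

A regular signal (e.g. a sine-like, metronomic heart rhythm) yields low entropy, while a healthy, irregular RR series yields higher values; the paper's finding is that larger hematomas flatten this complexity.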
ICH Score validation and modification: somewhat OK / suboptimal performance
Modifying the intracerebral hemorrhage score to suit the needs of the developing world
Ajay Hegde, Girish Menon (Nov 2018)
http://doi.org/10.4103/aian.AIAN_419_17
The ICH Score failed to accurately predict mortality in our cohort. ICH is predominantly seen in a younger age group in India, which hence has better outcomes in comparison to the West. We propose a minor modification of the ICH score, reducing the age criterion by 10 years, to better prognosticate the disease in our population.
External Validation of the ICH Score
Jennifer L. Clarke et al. (2004)
https://doi.org/10.1385/ncc:1:1:53
The ICH score accurately stratifies outcome in an external patient cohort. Thus, the ICH score is a validated clinical grading scale that can be easily and rapidly applied at ICH presentation. A scale such as the ICH score could be used to standardize clinical treatment protocols or clinical studies.
Validation of ICH Score in a large urban population
Taha Nisar et al. (2018)
https://doi.org/10.1016/j.clineuro.2018.09.007
We conducted a retrospective chart review of 245 adult patients who presented with acute ICH to University Hospital, Newark. Our study is one of the largest done at a single urban center to validate the ICH score. Age ≥ 80 years wasn't statistically significant with respect to 30-day mortality in our group. Restratification of the weights of individual variables in the ICH equation, with modification of the ICH score, can potentially establish mortality risk more accurately. Nevertheless, the overall prediction of mortality was accurate and reproducible in our study.
Validation of the ICH score in patients with spontaneous intracerebral haemorrhage admitted to the intensive care unit in Southern Spain
Sonia Rodríguez-Fernández et al. (2018)
http://dx.doi.org/10.1136/bmjopen-2018-021719
The ICH score shows acceptable discrimination as a tool to predict mortality rates in patients with spontaneous ICH admitted to the ICU, but its calibration is suboptimal.
24-Hour ICH Score Is a Better Predictor of Outcome than Admission ICH Score
Aimee M. Aysenne et al. (2013)
https://doi.org/10.1155/2013/605286
Early determination of the ICH score may incorrectly estimate the severity and expected outcome after ICH. Calculation of the ICH score 24 hours after admission better predicts early outcomes.
Assessment and comparison of the max-ICH score and ICH score by external validation
Felix A. Schmidt et al. (2018)
https://doi.org/10.1212/WNL.0000000000006117
We tested the hypothesis that the maximally treated intracerebral hemorrhage (max-ICH) score is superior to the ICH score for characterizing mortality and functional outcome prognosis in patients with ICH, particularly those who receive maximal treatment.
External validation with direct comparison of the ICH score and max-ICH score shows that their prognostic performance is not meaningfully different. Alternatives to simple scores are likely needed to improve prognostic estimates for patient care decisions.
So, do you still want to use oversimplified models after all?
ICH Score works for some parts of the population
Original Intracerebral Hemorrhage Score for the Prediction of Short-Term Mortality in Cerebral Hemorrhage: Systematic Review and Meta-Analysis
Gregório, Tiago; Pipa, Sara; Cavaleiro, Pedro; Atanásio, Gabriel; Albuquerque, Inês; Castro Chaves, Paulo; Azevedo, Luís
Journal of Stroke and Cerebrovascular Diseases, Volume 29, Issue 4, April 2020, 104630
https://doi.org/10.1097/CCM.0000000000003744
To systematically assess the discrimination and calibration of the Intracerebral Hemorrhage score for prediction of short-term mortality (38 studies, 15,509 patients) in intracerebral hemorrhage patients and to study its determinants using heterogeneity analysis.
Fifty-five studies provided data on discrimination, and 35 studies provided data on calibration. Overall, the Intracerebral Hemorrhage score discriminated well (pooled C-statistic 0.84; 95% CI, 0.82-0.85) but overestimated mortality (pooled observed:expected mortality ratio = 0.87; 95% CI, 0.78-0.97), with high heterogeneity for both estimates (I² 80% and 84%, respectively).
The Intracerebral Hemorrhage score is a valid clinical prediction rule for short-term mortality in intracerebral hemorrhage patients but discriminated mortality worse in more severe cohorts. It also overestimated mortality in the highest Intracerebral Hemorrhage score patients, with significant inconsistency between cohorts. These results suggest that mortality for these patients is dependent on factors not included in the score. Further studies are needed to determine these factors.
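The two metrics pooled in this meta-analysis are straightforward to compute for a single cohort: the C-statistic (discrimination: how often does a patient who died carry a higher predicted risk than one who survived?) and the observed:expected mortality ratio (calibration-in-the-large: a value below 1, as found here, means the score over-predicts death). A minimal sketch with made-up data; function names are ours:

```python
import numpy as np

def c_statistic(died, predicted_risk):
    """Concordance probability (equivalent to the ROC AUC); ties count half."""
    died = np.asarray(died)
    p = np.asarray(predicted_risk, dtype=float)
    pos, neg = p[died == 1], p[died == 0]  # risks of deaths vs survivors
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def observed_expected_ratio(died, predicted_risk):
    """Observed deaths / sum of predicted risks; <1 means over-prediction."""
    return np.sum(died) / np.sum(predicted_risk)
```

For instance, with outcomes [1, 1, 0, 0] and predicted risks [0.9, 0.8, 0.6, 0.5], discrimination is perfect (C = 1.0) yet O:E = 2/2.8 ≈ 0.71, i.e. the score over-predicts mortality, which is exactly the dissociation the meta-analysis reports.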
Start with the ICH score, but then you need better models?
Management of Intracerebral Hemorrhage: JACC Focus Seminar
Matthew Schrag, Howard Kirshner
Journal of the American College of Cardiology, Volume 75, Issue 15, 21 April 2020
https://doi.org/10.1016/j.jacc.2019.10.066
The most widely used tool for assessing prognosis is the “ICH score,” a scale that predicts mortality based on hemorrhage size, patient age, Glasgow coma score, hemorrhage location (infratentorial or supratentorial), and the presence of intraventricular hemorrhage (Hemphill et al. 2001). This score has been widely criticized for overestimating the mortality associated with ICH, which is attributed to the high rate of early withdrawal of medical care in more severe hemorrhages in the cohort, leading to a “self-fulfilling prophecy” of early mortality (Zahuranec et al. 2007, Zahuranec et al. 2010).
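The five components listed above combine into a 0-6 point total; a sketch following the point assignments of Hemphill et al. 2001 (the function and parameter names are ours):

```python
def ich_score(gcs, volume_ml, ivh, infratentorial, age):
    """ICH score (Hemphill et al. 2001): 0-6, higher = worse 30-day prognosis.

    gcs: Glasgow Coma Scale total (3-15) at presentation
    volume_ml: hematoma volume on baseline CT (e.g. from ABC/2)
    ivh: intraventricular extension present?
    infratentorial: infratentorial origin of the hemorrhage?
    age: patient age in years
    """
    points = 2 if gcs <= 4 else (1 if gcs <= 12 else 0)  # GCS 13-15 adds 0
    points += 1 if volume_ml >= 30 else 0
    points += 1 if ivh else 0
    points += 1 if infratentorial else 0
    points += 1 if age >= 80 else 0
    return points
```

For example, an alert patient (GCS 15) with a 10 mL lobar bleed scores 0, while a comatose patient (GCS 3) aged 85 with a 60 mL infratentorial bleed and IVH scores the maximum of 6.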
Nevertheless, no high-performing alternative scale or biomarker has entered routine clinical use, so the ICH score remains a starting point for clinical prognostication. A recent re-evaluation of this clinical tool found that both physicians' and nurses' subjective predictions of 3-month outcomes made within 24 h of the hemorrhage outperformed the accuracy of the ICH score, underscoring the important role of clinician experience and judgement in guiding families (Hwang et al. 2015).
In addition to hemorrhage size and initial clinical deficits, factors that seem to predict a poor overall outcome include any early neurological deterioration, hemorrhages in deep locations, particularly the thalamus, and age/baseline functional status (Yogendrakumar et al. 2018; Sreekrishnan et al. 2016; Ullman et al. 2019). When the clinical prognosis is unclear, physicians should generally advocate for additional time and continued supportive care (Hemphill et al. 2015).
Recovery after intracerebral hemorrhage is often delayed when compared with ischemic strokes of similar severity, and outcomes may need to be evaluated at later timepoints to capture the full extent of potential recovery. This is important both for calibrating patient and family expectations and in the design of outcomes for clinical trials.
Several scores and measures exist
Intracerebral hemorrhage outcome: a comprehensive update
João Pinho et al. (15 March 2019)
https://doi.org/10.1016/j.jns.2019.01.013
The focus of outcome assessment after ICH has been mortality in most studies, because of the high early case fatality, which reaches 40% in some population-based studies. The most robust and consistent predictors of early mortality include age, severity of neurological impairment, hemorrhage volume and antithrombotic therapy at the time of the event.
Long-term outcome assessment is multifaceted and includes not only mortality and functional outcome, but also patient self-assessment of health-related quality of life, occurrence of cognitive impairment, psychiatric disorders, epileptic seizures, recurrent ICH and subsequent thromboembolic events.
Several scores which predict mortality and functional outcome after ICH have been validated and are useful in daily clinical practice; however, they must be used in combination with clinical judgment for individual patients. Management of patients with ICH, both in the acute and chronic phases, requires health care professionals to have a comprehensive and updated perspective on outcome, which informs decisions that need to be taken together with the patient and next of kin.
Location specified quite crudely
http://doi.org/10.1007/978-1-4614-9212-2_46-1
Management of brainstem haemorrhages. DOI: https://doi.org/10.4414/smw.2019.20062
https://aneskey.com/intracerebral-hemorrhagic-stroke/
Too “handwavey” reporting of the location at the moment
Intracerebral Hemorrhage Location and Functional Outcomes of Patients: A Systematic Literature Review and Meta-Analysis
Anirudh Sreekrishnan et al. (Neurocritical Care, volume 25, pages 384–391, 2016)
https://doi.org/10.1177%2F0272989X19879095 - Cited by 35
Intracerebral hemorrhage (ICH) has the highest mortality rate among all strokes. While ICH location, lobar versus non-lobar, has been established as a predictor of mortality, less is known regarding the relationship between more specific ICH locations and functional outcome. This review summarizes current work studying how ICH location affects outcome, with an emphasis on how studies designate regions of interest.
Multiple studies have examined motor-centric outcomes, with few studies examining quality of life (QoL) or cognition. Better functional outcomes have been suggested for lobar versus non-lobar ICH; few studies attempted finer topographic comparisons. This study highlights the need for improved reporting in ICH outcomes research, including a detailed description of hemorrhage location, reporting of the full range of functional outcome scales, and inclusion of cognitive and QoL outcomes.
Meta-analysis of studies describing the odds ratio of poor outcomes for lobar compared to deep/non-lobar ICH. (a) Poor outcome mRS (3,4,5,6) or GOS (4,3,2,1); (b) poor outcome mRS (4,5,6) or GOS (3,2,1); (c) poor outcome mRS (5,6). *Significant results (p < 0.05)
Lobar vs deep?
https://slideplayer.com/slide/2404245/
N Engl J Med 2001; 344:1450-1460
http://doi.org/10.1056/NEJM200105103441907
Two general categories in terms of pathophysiology:
-- Lobar (towards the periphery, typically linked to cerebral amyloid angiopathy [CAA])
-- Deep (in the deep white matter of the cerebrum, typically linked to hypertension, HTN)
https://www.cram.com/flashcards/draft-23-16-intracranial-hemorrhage-2439833
Long-termrisks higher after lobarICH?
Ten-yearrisksofrecurrentstroke,
disability,dementiaandcostin relationto
siteofprimaryintracerebralhaemorrhage:
population-basedstudy (2019)
LinxinLi,Ramon Luengo-Fernandez,SusannaMZuurbier, NicolaC
Beddows,PhilippaLavallee,LouiseESilver, WilhelmKuker,Peter
MalcolmRothwell
http://dx.doi.org/10.1136/jnnp-2019-322663
Patients with primary intracerebral haemorrhage (ICH)
are at increased long-term risks of recurrent stroke and
other comorbidities. However, available estimates
come predominantly from hospital-based studies with
relatively short follow-up. Moreover, there are also
uncertainties about the influence of ICH location
on risks of recurrent stroke, disability, dementia and
qualityoflife.
Methods In a population-based study (Oxford Vascular
Study/2002–2018) of patients with a first ICH with
follow-up to 10 years, we determined the long-term
risks of recurrent stroke, disability, quality of life,
dementia and hospital care costs stratified by
haematomalocation.
ICH can be categorised into lobar and non-lobar according to the haematoma location. Given the different balance of pathologies for lobar versus non-lobar ICH, the long-term prognosis of ICH could be expected to differ by haematoma location. However, while some studies suggested that haematoma location was associated with recurrent stroke, others have not.
Compared with non-lobar ICH, the substantially higher 10-year risks of recurrent stroke, dementia and lower QALYs after lobar ICH highlight the need for more effective prevention for this patient group.
(top) Ten-year risks of recurrent stroke, disability or death stratified by haematoma location. (right) Ten-year mean healthcare costs over time after primary intracerebral haemorrhage.
Hematoma Enlargement: deep vs lobar, volume?
Hematoma enlargement characteristics in deep versus lobar intracerebral hemorrhage
Jochen A. Sembill et al. (04 March 2020)
https://doi.org/10.1002/acn3.51001
Hematoma enlargement (HE) is associated with
clinical outcomes after supratentorial intracerebral
hemorrhage (ICH). This study evaluates whether HE
characteristics and association with functional
outcome differ in deep versus lobarICH.
HE occurrence does not differ among deep and lobar ICH. However, compared to lobar ICH, HE after deep ICH is of greater extent in OAC-ICH, occurs earlier and may be of greater clinical relevance. Overall, clinical significance is more apparent after small–medium compared to large-sized bleedings.
These data may be valuable for both routine clinical management as well as for designing future studies on hemostatic and blood pressure management aiming at minimizing HE. However, further studies with improved design are needed to replicate these findings and to investigate the pathophysiological mechanisms accounting for these observations.
Study flowchart. Altogether, individual-level data from 3,580 spontaneous ICH patients were analyzed to identify 1,954 supratentorial ICH patients eligible for outcome analyses. Data were provided by two parts of a Germany-wide observational study (RETRACE I and II) conducted at 22 participating tertiary centers, and by one single-center university hospital registry.
Intracerebral Hemorrhage: Clinical Manifestations Related to Site.
https://clinicalgate.com/intracerebral-hemorrhage/
https://all-about-hipertency.blogspot.com/2003/0
8/hypertensive-hemorrhagic-stroke.html
https://radiologyassistant.nl/neuroradiology/non-
traumatic-intracranial-haemorrhage-in-adults
Other factors you should take into account
Brian A. Stettler, MD, Assistant Professor
https://slideplayer.com/slide/3129821/
Subfalcial herniation, midline shift and uncal herniation secondary to large subdural hematoma in the left hemisphere.
https://www.startradiology.com/internships/neurology/brain/ct-brain-
hemorrhage/
Hydrocephalus
https://kidshealth.org/en/parents/hydrocephalus.html
Risk Factors: Hypertension the largest risk factor
Risk Factors of Intracerebral Hemorrhage: A Case-Control Study
Hanne Sallinen, Arto Pietilä, Veikko Salomaa, Daniel Strbian
Journal of Stroke and Cerebrovascular Diseases, Volume 29, Issue 4, April 2020, 104630
https://doi.org/10.1016/j.jstrokecerebrovasdis.2019.104630
Hypertension is a well-known risk factor for
intracerebral hemorrhage (ICH). On many of the other
potential risk factors, such as smoking, diabetes,
and alcohol intake, results are conflicting. We
assessed risk factors of ICH, taking also into account
priordepression andfatigue.
Analyzing all cases and controls, the cases had more
hypertension, history of heart attack, lipid-lowering
medication, and reported more frequently fatigue prior
to ICH. In persons aged less than 70 years,
hypertension and fatigue were more common among
cases. In persons aged greater than or equal to 70
years, factors associated with risk of ICH were fatigue
prior to ICH, use of lipid-lowering medication, and
overweight.
Hypertension was associated with risk of ICH
among all patients and in the group of patients under
70 years. Fatigue prior to ICH was more common
among all ICH cases.
Stroke or Intensive Care Unit for ICH patients
Stroke unit admission is associated with better outcome and lower mortality in patients with intracerebral hemorrhage
M. N. Ungerer, P. Ringleb, B. Reuter, C. Stock, F. Ippen, S. Hyrenbach, I. Bruder, P. Martus, C. Gumbinger, the AG Schlaganfall
https://doi.org/10.1111/ene.14164 (Feb 2020)
There is no clear consensus among current guidelines on the preferred
admission ward [i.e. intensive care unit (ICU) or stroke unit (SU)] for
patients with intracerebral hemorrhage. Based on expert opinion, the American
Heart Association and European Stroke Organization recommend treatment in
neurological/neuroscience ICUs (NICUs) or SUs. The European Stroke
Organization guideline states that there are no studies available directly
comparingoutcomesbetween ICUsandSUs.
We performed an observational study comparing outcomes of 10,811 consecutive non-comatose patients with intracerebral hemorrhage according to admission ward [ICUs, SUs and normal wards (NWs)]. Primary outcomes were the modified Rankin Scale score at discharge and intrahospital mortality. An additional analysis compared NICUs with SUs.
Treatment in SUs was associated with better functional outcome and reduced
mortality compared with ICUs and NWs. Our findings support the current
guideline recommendations to treat patients with intracerebral
hemorrhage in SUs or NICUs and suggest that some patients may further
benefit from NICU treatment.
Mobile Stroke Unit Reduces Time to Treatment
JULY 03, 2018
https://www.itnonline.com/article/mobile-stroke-unit-reduces-time-treatment
For more fine-grained predictions you probably want to use better imaging modalities?
Predicting Motor Outcome in Acute Intracerebral Hemorrhage (May 2019)
J. Puig, G. Blasco, M. Terceño, P. Daunis-i-Estadella, G. Schlaug, M. Hernandez-Perez, V. Cuba, G. Carbó, J. Serena, M. Essig, C. R. Figley, K. Nael, C. Leiva-Salinas, S. Pedraza and Y. Silva
https://doi.org/10.3174/ajnr.A6038
Predicting motor outcome following
intracerebral hemorrhage is challenging. We
tested whether the combination of
clinical scores and Diffusion tensor
imaging (DTI)-based assessment of
corticospinal tract damage within the first 12
hours of symptom onset after intracerebral
hemorrhage predicts motor outcome at 3
months.
Combined assessment of motor function
and posterior limb of the internal capsule
damage during acute intracerebral
hemorrhage accurately predicts motor
outcome.
Assessing corticospinal tract involvement with diffusion tensor tractography superimposed on gradient recalled echo and FLAIR images. In the upper row, the corticospinal tract was affected by ICH (passes through it) at the level of the corona radiata and posterior limb of the internal capsule. Note that in the lower row, the corticospinal tract was displaced slightly forward but preserved around the intracerebral hematoma. Vol indicates volume.
Example of ROI object maps used to measure intracerebral hematoma (blue) and perihematomal edema (yellow) volumes.
Combining mNIHSS and PLIC affected by ICH in the first 12 hours of onset can accurately predict motor outcome.
The reliability of DTI in denoting very early damage to
the CST could make it a prognostic biomarker
useful for determining management strategies
to improve outcome in the hyperacute stage.
Our approach eliminates the need for advanced
postprocessing techniques that are time-
consuming and require greater specialization, so it can
be applied more widely and benefit more patients.
Prospective large-scale studies are warranted to
validate these findings and determine whether this
information could be used to stratify risk in patients with
ICH.
Clinicians like to hunt for the “(linear) magical biomarkers”, as opposed to nonlinear multivariate models with higher capacity (and higher probability to overfit as well)
Early hematoma retraction in intracerebral hemorrhage is uncommon and does not predict outcome
Ana C. Klahr, Mahesh Kate, Jayme Kosior, Brian Buck, Ashfaq Shuaib, Derek Emery, Kenneth Butcher
Published: October 9, 2018
https://doi.org/10.1371/journal.pone.0205436
Clot retraction in intracerebral hemorrhage (ICH)
has been described and postulated to be
related to effective hemostasis and
perihematoma edema (PHE) formation. The
incidence and quantitative extent of hematoma
retraction (HR) is unknown. Our aim was to
determine the incidence of HR between baseline
and time of admission. We also tested the
hypothesis that patients with HR had higher PHE
volume and good prognosis.
Early HR is rare and associated with IVH, but not
with PHE or clinical outcome. There was no
relationship between HR, PHE, and patient
prognosis. Therefore, HR is unlikely to be a useful
endpointinclinicalICHstudies.
Perihematomal Edema (PHE): Diagnostic value?
Neoplastic and Non-Neoplastic Causes of Acute Intracerebral Hemorrhage on CT: The Diagnostic Value of Perihematomal Edema
Jawed Nawabi, Uta Hanning, Gabriel Broocks, Gerhard Schön, Tanja Schneider, Jens Fiehler, Christian Thaler & Susanne Gellissen
Clinical Neuroradiology (2019)
https://doi.org/10.1007/s00062-019-00774-4
The aim of this study was to investigate the
diagnostic value of perihematomal
edema (PHE) volume in non-enhanced
computed tomography (NECT) to
discriminate neoplastic and non-neoplastic
causes of acute intracerebral hemorrhage
(ICH).
Relative PHE with a cut-off of >0.50 is a
specific and simple indicator for
neoplastic causes of acute ICH and a
potential tool for clinical implementation. This
observation needs to be validated in an
independentpatientcohort.
Two representative cases of region-of-interest object maps used to measure intracerebral hemorrhage (ICH) volume (Vol ICH) and total hemorrhage (Vol ICH+PHE) volume. a Neoplastic and non-neoplastic ICH volume (red) and b total hemorrhage volume (grey) on non-enhanced CT (NECT) delineated with an edge detection algorithm. c Neoplastic and non-neoplastic PHE (green) calculated by subtraction of total hemorrhage volume and ICH volume (Vol PHE = Vol ICH+PHE − Vol ICH)
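The subtraction in this caption translates directly into code. A minimal sketch of the relative PHE measure with the >0.50 cut-off reported by Nawabi et al.; the mask names and the voxel volume are hypothetical, not from the paper:

```python
import numpy as np

def relative_phe(ich_mask, total_mask, voxel_volume_ml=0.005):
    """rPHE = Vol_PHE / Vol_ICH, with Vol_PHE = Vol_ICH+PHE - Vol_ICH."""
    vol_ich = ich_mask.sum() * voxel_volume_ml      # hematoma volume
    vol_total = total_mask.sum() * voxel_volume_ml  # hematoma + edema volume
    return (vol_total - vol_ich) / vol_ich

# Toy flattened "masks": 8 ICH voxels inside 20 total-hemorrhage voxels
ich = np.zeros(100, dtype=bool); ich[:8] = True
total = np.zeros(100, dtype=bool); total[:20] = True
rphe = relative_phe(ich, total)   # (20 - 8) / 8 = 1.5
suspect_neoplastic = rphe > 0.50  # cut-off from Nawabi et al.
```

In practice the two masks would come from the segmentation networks this review discusses, so rPHE is essentially a free by-product of any ICH+PHE segmentation pipeline.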
Young patients tend to recover better (seems obvious)
Is nontraumatic intracerebral hemorrhage different between young and elderly patients?
Na Rae Yang, Ji Hee Kim, Jun Hyong Ahn, Jae Keun Oh, In Bok Chang & Joon Ho Song, Neurosurgical Review volume 43, pages 781–791 (2020)
https://doi.org/10.1007/s10143-019-01120-5
Only a few studies have reported
nontraumatic intracerebral hemorrhage in
young patients notwithstanding its fatal
and devastating characteristics. This study
investigated the clinical characteristics and
outcome of nontraumatic intracerebral
hemorrhage in young patients in
comparison to thoseof theelderly.
Nontraumatic intracerebral hemorrhage in
younger patients appears to be
associated with excessive alcohol
consumption and high BMI. Younger
patients had similar short-term
mortality but more favorable
functional outcome than the elderly.
Distribution of modified Rankin Scale scores at the last follow-up for each group
Genotype-based differences exist
Racial/ethnic disparities in the risk of intracerebral hemorrhage recurrence
Audrey C. Leasure, Zachary A. King, Victor Torres-Lopez, Santosh B. Murthy, Hooman Kamel, Ashkan Shoamanesh, Rustam Al-Shahi Salman, Jonathan Rosand, Wendy C. Ziai, Daniel F. Hanley, Daniel Woo, Charles C. Matouk, Lauren H. Sansing, Guido J. Falcone, Kevin N. Sheth
Neurology, December 12, 2019
https://doi.org/10.1212/WNL.0000000000008737
To estimate the risk of intracerebral hemorrhage (ICH) recurrence in a
large, diverse, US-based population and to identify racial/ethnic and
socioeconomic subgroups at higher risk. Black and Asian patients
had a higher risk of ICH recurrence than white patients, whereas
private insurance was associated with reduced risk compared to those
with Medicare.
Further research is needed to determine the drivers of these
disparities. While this is the largest study of ICH recurrence in a United
States–based, racially and ethnically diverse population, our study has
several limitations related to the use of administrative data that require
consideration. First, there is a possibility of misclassification of the
exposures and outcomes. The attribution of race/ethnicity that is not
based on direct self-report may not be accurate; for example, patients
who belong to 2 or more racial/ethnic categories may be classified
based on phenotypic descriptions and may not reflect true
ancestry. In terms of outcome classification, we relied on ICD-9-CM
codes to identify our outcome of recurrent ICH. However, we used
previously validated diagnosis codes that have high positive predictive
valuesfor identifyingprimaryICH
as ICH is not that well understood, new mechanisms are proposed
Global brain inflammation in stroke
Kaibin Shi et al. (Lancet Neurology, July 2019)
https://doi.org/10.1016/S1474-4422(19)30078-X
Stroke, including acute ischaemic stroke (AIS) and
intracerebral haemorrhage (ICH), results in
neuronal cell death and the release of factors
such as damage-associated molecular patterns
(DAMPs) that elicit localised inflammation in the
injured brain region. Such focal brain
inflammation aggravates secondary brain
injury by exacerbating blood–brain barrier damage,
microvascular failure, brain oedema, oxidative stress,
andbydirectlyinducingneuronalcell death.
In addition to inflammation localised to the injured
brain region, a growing body of evidence suggests
that inflammatory responses after a stroke occur and
persist throughout the entire brain. Global brain
inflammation might continuously shape the
evolving pathology after a stroke and affect the
patients' long-term neurological outcome.
Future efforts towards understanding the
mechanisms governing the emergence of so-called
global brain inflammation would facilitate modulation
of this inflammation as a potential therapeutic
strategyforstroke.
MMPs in ICH? In emerging theories
Matrix Metalloproteinases in Acute Intracerebral Hemorrhage
Simona Lattanzi, Mario Di Napoli, Silvia Ricci & Afshin A. Divani
Neurotherapeutics (January 2020)
https://doi.org/10.1007/s13311-020-00839-0
So far, clinical trials on ICH have mainly targeted primary
cerebral injury and have substantially failed to improve
clinicaloutcomes.
The understanding of the pathophysiology of early and delayed
injury after ICH is, hence, of paramount importance to identify
potential targets of intervention and develop effective
therapeutic strategies. Matrix metalloproteinases (MMPs)
represent a ubiquitous superfamily of structurally related zinc-
dependent endopeptidases able to degrade any component of
the extracellular matrix. They are upregulated after ICH, in
which different cell types, including leukocytes, activated
microglia, neurons, and endothelial cells, are involved in their
synthesis and secretion. The role of MMPs as a potential target
for the treatment of ICH has been widely discussed in the last
decade. The impact of MMPs on extracellular matrix
destruction and blood–brain barrier (BBB) disruption in
patientssufferingfromICHhasbeen ofinterest.
The aim of this review is to summarize the available
experimental and clinical evidence about the role of MMPs in
brain injury following spontaneous ICH and provide critical
insightsintotheunderlyingmechanisms.
Overall, there is substantially converging evidence from
experimental studies to suggest that early and short-
term inhibition of MMPs after ICH can be an
effective strategy to reduce cerebral damage
and improve the outcome, whereas long-term
treatment may be associated with more harm than
benefit. It is, however, worth noting that, so far, we do
not have a clear understanding of the time-specific
role that the different MMPs assume within the
pathophysiology of secondary brain injury and recovery
after ICH. In addition, most of the studies exploring
pharmacological strategies to modulate MMPs can
only provide indirect evidence of the benefit to target
MMP activity.
The prospects for effective therapeutic targeting of
MMPs require the establishment of conditions to
specifically modulate a given MMP isoform, or a subset of MMPs, in a given spatio-temporal context (Rivera 2019).
Further research is warranted to better understand the
interactions between MMPs and their molecular
and cellular environments, determine the optimal
timing of MMPs inhibition for achieving a favorable
therapeutic outcome, and implement the discovery of
innovative selective agents to spare harmful effects
before therapeutic strategies targeting MMPs can be
successfully incorporated into routine practice (
Lattanzi et al. 2018; Hostettler et al. 2019).
What are the treatments for ICH and can we do prescriptive modeling (“precision medicine”) and tailor the treatment individually?
Hemostatic Therapy Overview
Management of Intracerebral Hemorrhage: JACC Focus Seminar
Matthew Schrag, Howard Kirshner
Journal of the American College of Cardiology, Volume 75, Issue 15, 21 April 2020
https://doi.org/10.1016/j.jacc.2019.10.066
Animal models of ICH exist of course as well
Intracerebral haemorrhage: from clinical settings to animal models
Qian Bai et al. (2020)
http://dx.doi.org/10.1136/svn-2020-000334
Effective treatment for ICH is still scarce. However, clinical therapeutic strategies include medication and surgery. Drug therapy is the most common treatment for ICH. This includes
prevention of ICH based on treating an individual’s underlying
risk factors, for example, control of hypertension. Hyperglycaemia
in diabetics is common after stroke; managing glucose level may
reduce the stroke size. Oxygen is given as needed. Surgery can be
used to prevent ICH by repairing vascular damage or
malformations in and around the brain, or to treat acute ICH by
evacuating the haematoma; however, the benefit of surgical
treatment is still controversial due to very few controlled
randomised trials. Rehabilitation may help overcome disabilities
that result from ICH damage.
Despite great advances in ischaemia stroke, no prominent improvement
in the morbidity and mortality after ICH have been realised. The current
understanding of ICH is still limited, and the models do not
completely mirror the human condition. Novel effective modelling is
required to mimic spontaneous ICH in humans and allow for effective
studies on mechanisms and treatment of haematoma expansion and
secondary braininjury.
Genomics for Stroke recovery #1
Genetic risk factors for spontaneous intracerebral haemorrhage
Amanda M. Carpenter, I. P. Singh, Chirag D. Gandhi, Charles J. Prestigiacomo (Nature Reviews Neurology 2016)
https://doi.org/10.1038/nrneurol.2015.226
Familial aggregation of ICH has been
observed, and the heritability of ICH
risk has been estimated at 44%.
Few genes have been found to be
associated with ICH at the population
level, and much of the evidence for
genetic risk factors for ICH comes
from single studies conducted in
relatively small and homogenous
populations. In this Review, we
summarize the current knowledge of
genetic variants associated with primary
spontaneous ICH.
Although evidence for genetic contributions to the risk of ICH exists, we do not yet fully understand how and to what extent this information can be utilized to prevent and treat ICH.
Genomics for Stroke recovery #2
Genetic underpinnings of recovery after stroke: an opportunity for gene discovery, risk stratification, and precision medicine
Julián N. Acosta et al. (September 2019)
https://doi.org/10.1186/s13073-019-0671-5
As the number of stroke survivors continues to increase,
identification of therapeutic targets for stroke
recovery has become a priority in stroke genomics
research. The introduction of high-throughput
genotyping technologies and novel analytical tools has
significantly advanced our understanding of the genetic
underpinningsofstrokerecovery.
In summary, functional outcome and recovery
constitute important endpoints for genetic studies
of stroke. The combination of improving statistical power
and novel analytical tools will surely lead to the discovery
of novel pathophysiological mechanisms
underlying stroke recovery. Information on these
newly discovered pathways can be used to develop new
rehabilitation interventions and precision-
medicine strategies aimed at improving management
options for stroke survivors. The continuous growth and
strengthening of existing dedicated collaborations and the
utilization of standardized approaches to ascertain
recovery-related phenotypes will be crucial for the
success of this promising field.
Genetic risk of spontaneous intracerebral hemorrhage: Systematic review and future directions
Kolawole Wasiu et al. (15 December 2019)
https://doi.org/10.1016/j.jns.2019.116526
Given this limited information on the genetic contributors to spontaneous intracerebral hemorrhage (SICH),
more genomic studies are needed to provide additional insights into the pathophysiology of SICH, and
develop targeted preventive and therapeutic strategies. This call for additional investigation of the
pathogenesis of SICH is likely to yield more discoveries in the unexplored indigenous African populations
which also have a greater predilection.
Multilevel omics for the discovery of biomarkers and therapeutic targets for stroke
Joan Montaner et al. (22 April 2020)
https://doi.org/10.1016/j.jns.2019.116526
Despite many years of research, no biomarkers for stroke are available to use in clinical practice. Progress in high-
throughput technologies has provided new opportunities to understand the pathophysiology of thiscomplex disease, and
these studies have generated large amounts of data and information at different molecular levels. We summarize how
proteomics, metabolomics, transcriptomics and genomics are all contributing to the identification of new candidate
biomarkers that could be developed and used in clinical stroke management.
Influences of genetic variants on stroke recovery: a meta-analysis of the 31,895 cases
Nikhil Math et al. (29 July 2019)
https://doi.org/10.1007/s10072-019-04024-w
17p12 Influences Hematoma Volume and Outcome in Spontaneous Intracerebral Hemorrhage
Sandro Marini et al. (30 Jul 2018)
https://doi.org/10.1016/j.jns.2019.116526
Surgical management not that well understood either
Surgery for spontaneous intracerebral hemorrhage (Feb 2020)
Airton Leonardo de Oliveira Manoel
https://doi.org/10.1186/s13054-020-2749-2
Spontaneous intracerebral hemorrhage is a devastating disease,
accounting for 10 to 15% of all types of stroke; however, it is
associated with disproportionally higher rates of
mortality and disability. Despite significant progress in the
acute management of these patients, the ideal surgical
management is still to be determined. Surgical hematoma
drainage has many theoretical benefits, such as the prevention of
mass effect and cerebral herniation, reduction in intracranial
pressure, and the decrease of excitotoxicity and neurotoxicity of
blood products.
Mechanisms of secondary brain injury after ICH. MLS – midline shift; IVH – intraventricular hemorrhage
Case 02 of open craniotomy for hematoma
drainage. a, b Day 1—Large hematoma in the left
cerebral hemisphere leading to collapse of the left
lateral ventricle with a midline shift of 12 mm, with a
large left ventricular and third ventricle flooding, as
well as diffuse effacement of cortical sulci of that
hemisphere. c–e Day 2—Left frontoparietal
craniotomy, with well-positioned bone fragment,
aligned and fixed with metal clips. Reduction of the
left frontal/frontotemporal intraparenchymal
hematic content, with remnant hematic residues
and air foci in this region. There was a significant
reduction in the mass effect, with a decrease in
lateral ventricular compression and a reduction in
the midline shift. Bifrontal pneumocephalus
causing shift and compressing the adjacent
parenchyma. f–h Day 36—Resolution of residual
hematic residues and pneumocephalus.
Encephalomalacia in the left frontal/frontotemporal
region. Despite the good surgical results, the
patient remained in a vegetative state
Open craniotomy. Patient lies on an
operating table and receives general
anesthesia. The head is set in a three-pin
skull fixation device attached to the
operating table, in order to hold the head
standing still. Once the anesthesia and
positioning are established, skin is
prepared, cleaned with an antiseptic
solution, and incised typically behind the
hairline. Then, both skin and muscles are
dissected and lifted off the skull. Once
the bone is exposed, burr holes are drilled with a special drill. The burr holes are
made to permit the entrance of the
craniotome. The craniotomy flap is lifted
and removed, uncovering the dura mater.
The bone flap is stored to be replaced at
the end of the procedure. The dura mater
is then opened to expose the brain
parenchyma. Surgical retractors are used to open a passage to access the hematoma. After the hematoma is
drained, the retractors are removed, the
dura mater is closed, and the bone flap is
positioned, aligned, and fixed with metal
clips. Finally, the skin is sutured
Real-time segmentation for ICH surgery?
Intraoperative CT and cone-beam CT imaging for minimally invasive evacuation of spontaneous intracerebral hemorrhage
Nils Hecht et al. (Acta Neurochirurgica 2020)
https://doi.org/10.1007/s00701-020-04284-y
Minimally invasive surgery (MIS) for evacuation
of spontaneous intracerebral hemorrhage (ICH)
has shown promise but there remains a need
for intraoperative performance assessment
considering the wide range of evacuation
effectiveness. In this feasibility study, we
analyzed the benefit of intraoperative 3-
dimensional imaging during navigated
endoscopy-assisted ICH evacuation by
mechanicalclotfragmentationandaspiration.
Routine utilization of intraoperative computerized tomography (iCT) or cone-beam CT (CBCT) imaging in MIS for
ICH permits direct surgical performance
assessment and the chance for immediate
re-aspiration, which may optimize targeting of
an ideal residual hematoma volume and reduce
secondary revision rates.
CT Anatomical Background
Non-Contrast CT: What are you seeing?
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
HU Units: Absolute units “mean something”
A CT scanner is basically a density measurement device
https://www.sciencedirect.com/topics/medicine-and-dentistry/hounsfield-scale
A, Axial CT slice, viewed with brain window settings. Notice in the grayscale bar at the right side of the figure that the full range of shades from black to white has been distributed over a narrow HU range, from zero (pure black) to +100 HU (pure white). This allows fine discrimination of tissues within this density range, but at the expense of evaluation of tissues outside of this range. A large subdural hematoma is easily discriminated from normal brain, even though the two tissues differ in density by less than 100 HU. Any tissues greater than +100 HU in density will appear pure white, even if their densities are dramatically different. Consequently, the internal structure of bone cannot be seen with this window setting. Fat (-50 HU) and air (-1000 HU) cannot be distinguished with this setting, as both have densities less than zero HU and are pure black.
B, The same axial CT slice viewed with a bone window setting. Now the scale bar at the right side of the figure shows the grayscale to be distributed over a very wide HU range, from -450 HU (pure black) to +1050 HU (pure white). Air can easily be discriminated from soft tissues on this setting because it is assigned pure black, while soft tissues are dark gray. Details of bone can be seen, because a large portion of the total range of gray shades is devoted to densities in the range of bone. Soft tissue detail is lost in this window setting, because the range of soft tissue densities (-50 HU to around +100 HU) represents a narrow portion of the grayscale.
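The brain- and bone-window behaviour described in these two captions is just a linear grayscale remapping of HU values, which is also a common preprocessing step before feeding head CT into a network. A minimal sketch (the window centre/width values are chosen to match the captions' ranges, not taken from any specific viewer):

```python
import numpy as np

def apply_window(hu, center, width):
    """Linearly map HU inside [center - width/2, center + width/2] to [0, 1]
    display intensity; everything outside clips to pure black / pure white."""
    lo = center - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

hu = np.array([-1000.0, -50.0, 0.0, 40.0, 100.0, 500.0])  # air, fat, water, brain, clot-range, bone

# Brain window: 0..+100 HU as in caption A (center 50, width 100)
brain = apply_window(hu, center=50, width=100)   # air/fat/water all clip to black
# Bone window: -450..+1050 HU as in caption B (center 300, width 1500)
bone = apply_window(hu, center=300, width=1500)  # air black, soft tissues dark gray
```

Deep-learning pipelines often stack two or three such windows (e.g. brain, subdural, bone) as input channels instead of a single raw HU image.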
HU Units: water (~1 kg/l) is ~1000× denser than air (~1 g/l)
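That ~1000× water-to-air density ratio is the intuition behind the Hounsfield scale itself: HU rescales the linear attenuation coefficient μ so that water sits at 0 HU and air at −1000 HU. A sketch of the definition (the μ values below are illustrative, not from any specific scanner):

```python
def hounsfield(mu, mu_water=0.19, mu_air=0.0002):
    """HU = 1000 * (mu - mu_water) / (mu_water - mu_air).
    mu is a linear attenuation coefficient (cm^-1); values illustrative."""
    return 1000.0 * (mu - mu_water) / (mu_water - mu_air)

print(round(hounsfield(0.19)))    # water ->     0 HU
print(round(hounsfield(0.0002)))  # air   -> -1000 HU
```

This is why HU values "mean something" across scanners in a way that raw MRI intensities do not.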
Clinical CT: quick intro on what you see
How to interpret an unenhanced CT brain scan. Part 1: Basic principles of Computed Tomography and relevant neuroanatomy (2016)
http://www.southsudanmedicaljournal.com/archive/august-2016/how-to-interpret-an-unenhanced-ct-brain-scan.-part-1-basic-principles-of-computed-tomography-and-relevant-neuroanatomy.html
Cuts and Gantry Tilt: Clinical CT typically has quite thick cuts
https://slideplayer.com/slide/5990473/ Computed Tomography II – RAD473, published by Melinda Wiggins
https://slideplayer.com/slide/7831746/
Design pattern for multi-modal coordinate spaces
Figure 4: Planning the location of the CT slices, with tilted gantry. The gantry is tilted to avoid radiating the eyes, while capturing a maximum of relevant anatomical data.
https://www.researchgate.net/publication/228672978_Design_pattern_for_multi-modal_coordinate_spaces
Tilting the gantry for CT-guided spine procedures
https://doi.org/10.1007/s11547-013-0344-1
Gantry tilt. Use of bolsters. Gantry-needle alignment. a, b Range of gantry angulation, which is ±30° on most scanners.
Spine curvature and spatial orientation can be modified using bolsters and wedges.
A bolster under the lower abdomen (c) flattens the lordotic curvature and reduces
the L5–S1 disc plane obliquity; under the chest (d) flattens the thoracic kyphosis and
reduces the upper thoracic pedicles' obliquity; under the hips (e) increases the
lordosis and brings the long-axis of the sacrum closer to the axial plane. The desired
needle path for spinal accesses can be paralleled by gantry tilt (solid lines on c– e)
relative to straight axial orientation (dashed lines on c– e). f Gantry-needle alignment,
with laser beam precisely bisecting the needle at the hub and the skin entry point.
Maintaining this alignment keeps the needle in plane and allows visualization of the
entireneedlethroughoutitstrajectoryon asingleCTslice
Diagnosing strokes with imaging: CT, MRI, and Angiography | Khan Academy
https://www.khanacademy.org/science/health-and-medicine/circulatory-system-diseases/stroke/v/diagnosing-strokes-with-imaging-ct-mri-and-angiography
CT Skull Window: microstructure of bone might bias your brain model?
Estimation of skull table thickness with clinical CT and validation with microCT
http://doi.org/10.1111/joa.12259
Loss of bone mineral density following sepsis using Hounsfield units by computed tomography
http://doi.org/10.1002/ams2.401
Opportunistic osteoporosis screening via the measurement of frontal skull Hounsfield units derived from brain computed tomography images
https://doi.org/10.1371/journal.pone.0197336
The ADAM-pelvis phantom - an anthropomorphic, deformable and multimodal phantom for MRgRT
http://doi.org/10.1088/1361-6560/aafd5f
Construction and analysis of a head CT-scan database for craniofacial reconstruction
Françoise Tilotta, Frédéric Richard, Joan Alexis Glaunès, Maxime Berar, Servane Gey, Stéphane Verdeille, Yves Rozenholc, Jean-François Gaudy
https://hal-descartes.archives-ouvertes.fr/hal-00278579/document
CT Bone: very useful for brain imaging/stimulation simulation models, e.g. ultrasound and NIRS
Measurements of the Relationship Between CT Hounsfield Units and Acoustic Velocity and How It Changes With Photon Energy and Reconstruction Method
Webb TD, Leung SA, Rosenberg J, Ghanouni P, Dahl JJ, Pelc NJ, Pauly KB
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 01 Jul 2018, 65(7):1111-1124
https://doi.org/10.1109/tuffc.2018.2827899
Transcranial magnetic resonance-guided focused ultrasound
continues to gain traction as a noninvasive treatment option for a
variety of pathologies. Focusing ultrasound through the skull
can be accomplished by adding a phase correction to each element
of a hemispherical transducer array. The phase corrections are
determined with acoustic simulations that rely on speed of sound
estimates derived from CT scans. While several studies have
investigated the relationship between acoustic velocity and
CT Hounsfield units (HUs), these studies have largely ignored the impact of X-ray energy, reconstruction method, and reconstruction kernel on the measured HU, and therefore the estimated velocity, and none have measured the relationship directly.
As measured by the R-squared value, the results show that CT is
able to account for 23%-53% of the variation in velocity in
the human skull. Both the X-ray energy and the reconstruction
technique significantly alter the R-squared value and the linear
relationship between HU and speed of sound in bone. Accounting for
these variations will lead to more accurate phase corrections
and more efficient transmission of acoustic energy through the skull.
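Webb et al.'s core result is a per-setting linear HU-to-velocity fit whose slope, intercept, and R² change with X-ray energy and reconstruction kernel. A minimal sketch of such a fit, using entirely synthetic illustrative numbers (the coefficients below are assumptions, not values from the paper):

```python
# Sketch of an HU -> acoustic-velocity linear fit, as done per energy/kernel
# setting in Webb et al. All numbers here are synthetic and illustrative.
import numpy as np

rng = np.random.default_rng(0)
hu = rng.uniform(500.0, 1800.0, size=50)                   # skull-bone HU samples
velocity = 1500.0 + 0.8 * hu + rng.normal(0.0, 120.0, 50)  # m/s, synthetic truth

# Least-squares line and the R^2 quantifying explained variance
# (the paper reports CT explaining 23%-53% of velocity variation in real skulls)
slope, intercept = np.polyfit(hu, velocity, 1)
pred = slope * hu + intercept
r2 = 1.0 - np.sum((velocity - pred) ** 2) / np.sum((velocity - velocity.mean()) ** 2)
```

Repeating the fit per X-ray energy and reconstruction kernel, as the paper does, yields a family of (slope, intercept, R²) triples rather than one universal HU-to-velocity mapping.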
The impact of CT energy as measured by the dual energy scan on the GE system with a bone kernel. a) The dotted line shows the HU calculated using Equation (1) and linear attenuation values from NIST. The circles show the average HU measured in the densest sample of cortical bone (red), the average HU value of all the fragments from the inner and outer tables (yellow), and the average HU value of all the fragments from the medullary bone (purple). Error bars show the standard deviation. b) Speed of sound as a function of HU for five different energies.
Comparison of the measurements presented in this paper to prior models. a) Comparison to prior models using data from the monochromatic images acquired with the dual energy scan on the GE system. b) Comparison to prior models using standard CT scans with unknown effective energies. In order to estimate Aubry's model at each energy, an effective energy of 2/3 of the peak tube voltage was assumed.
Further work needs to be done to characterize either an average relationship across a patient population or a method for adapting velocity estimates to specific patient skulls. Such a study will require a large number of skulls and is outside the scope of the present work.
Future studies should examine whether improvements in velocity estimates and phase corrections (e.g. using ultrashort echo time (UTE) MRI) will lead to more efficient transfer of acoustic energy through the skull, resulting in a decrease in the energy required to achieve ablation at the focal spot.
Muscle/Fat CT also useful
(a) The relationship between gray level and Hounsfield units (HU) determined by window level (WL), window width (WW), and bit depth per pixel (BIT). (b) The effect of different WL, WW, and BIT configurations on the same image.
Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis
Hyunkwang Lee, Fabian M. Troschel, Shahein Tajmir, Georg Fuchs, Julia Mario, Florian J. Fintelmann, Synho Do
J Digit Imaging, http://doi.org/10.1007/s10278-017-9988-z
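The WL/WW/BIT relationship in the caption is the standard linear windowing step; a minimal sketch (the brain-window values WL=40, WW=80 are a common convention, not taken from the paper):

```python
# Map HU to display gray levels with a window level (WL), window width (WW)
# and bit depth (BIT), as in panel (a) of the figure.
import numpy as np

def window_hu(hu, wl, ww, bits=8):
    """Linearly map [wl - ww/2, wl + ww/2] onto [0, 2**bits - 1], clipping outside."""
    lo = wl - ww / 2.0
    frac = (np.asarray(hu, dtype=float) - lo) / ww   # 0..1 inside the window
    frac = np.clip(frac, 0.0, 1.0)
    return np.round(frac * (2 ** bits - 1)).astype(np.int64)

# Typical brain window: CSF (~0 HU) renders dark, acute blood (~60 HU) bright,
# bone and air saturate at the ends of the gray scale
g = window_hu([-1000, 0, 40, 60, 1000], wl=40, ww=80)
```

Changing WL/WW re-allocates the limited display gray levels to a different HU band, which is why the same slice looks completely different under brain, bone, and soft-tissue windows.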
Body Composition as a Predictor of Toxicity in
Patients Receiving Anthracycline and Taxane–
Based Chemotherapy for Early-Stage Breast
Cancer
http://doi.org/10.1158/1078-0432.CCR-16-2266
Quantitative analysis of skeletal muscle by computed tomography imaging—State of the art https://doi.org/10.1016/j.jot.2018.10.004
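The CT body-composition pipelines above ultimately rest on HU thresholding. A toy sketch; the -29 to +150 HU muscle range is a commonly used convention in this literature, not a fixed standard, and exact cutoffs vary by study:

```python
# Threshold-based skeletal-muscle masking, the classic step that deep
# segmentation models are benchmarked against.
import numpy as np

def muscle_mask(hu_slice, lo=-29, hi=150):
    """Boolean mask of voxels within the (conventional) muscle HU range."""
    return (hu_slice >= lo) & (hu_slice <= hi)

def muscle_area_cm2(hu_slice, pixel_area_mm2):
    """Cross-sectional muscle area in cm^2 for one axial slice."""
    return muscle_mask(hu_slice).sum() * pixel_area_mm2 / 100.0

# Toy 2x2 slice: fat (-100 HU), two muscle voxels (50 HU), bone (700 HU)
slice_hu = np.array([[-100, 50], [50, 700]])
area = muscle_area_cm2(slice_hu, pixel_area_mm2=1.0)
```

Deep segmentation adds spatial context on top of this, e.g. excluding organs that happen to fall in the same HU range.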
Base of Skull Axial CT: where brain stripping could use deep learning
Base of skull, axial CT
1) Nasal spine of frontal bone
2) Eyeball
3) Frontal process of zygomatic bone
4) Ethmoidal air cells
5) Temporal fossa
6) Greater wing of sphenoid bone
7) Sphenoidal sinus
8) Zygomatic process of temporal bone
9) Head of mandible
10) Carotid canal, first part
11) Jugular foramen, posterior to intrajugular process
12) Posterior border of jugular foramen
13) Sigmoid sinus
14) Lateral part of occipital bone
15) Hypoglossal canal
16) Foramen magnum
17) Nasal septum
18) Nasal cavity
19) Body of sphenoid bone
20) Foramen lacerum
21) Foramen ovale
22) Foramen spinosum
23) Sphenopetrous fissure / Eustachian tube
24) Carotid canal, second part
25) Air cells in temporal bone
26) Apex of petrous bone
27) Petro-occipital fissure
Radiology Key
Fastest Radiology Insight Engine
https://radiologykey.com/skull/
CSF Spaces as seen by CT
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Air in Brain as seen by CT
Air defines anatomical shapes useful outside ICH analysis →
A multiscale imaging and modelling dataset of the human inner ear
Gerber et al. (2017) Scientific Data volume 4, Article number: 170132 (2017)
https://doi.org/10.1038/sdata.2017.132
BE-FNet: 3D Bounding Box Estimation Feature Pyramid Network for Accurate and Efficient Maxillary Sinus Segmentation
Zhuofu Deng et al. (2020)
https://doi.org/10.1155/2020/5689301
Maxillary sinus segmentation plays an important role in the choice of therapeutic strategies for nasal disease and treatment monitoring. Traditional approaches struggle with the extremely heterogeneous intensity caused by lesions, abnormal anatomical structures, and blurred cavity boundaries.
Development of CT-based methods for longitudinal analyses of paranasal sinus osteitis in granulomatosis with polyangiitis
Sigrun Skaar Holme et al. (2019)
https://doi.org/10.1186/s12880-019-0315-7
Even though progressive rhinosinusitis with osteitis is a major clinical problem in granulomatosis with polyangiitis (GPA), there are no studies on how GPA-related osteitis develops over time, and no quantitative methods for longitudinal assessment. Here, we aimed to identify simple and robust CT-based methods for capture and quantification of time-dependent changes in GPA-related paranasal sinus osteitis.
Gray/White Matter Contrast: not as nice as with MRI
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions
(2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Comparison between brain-dead patients' and normal control subjects' CT scans: 1, normal control CT scan; 2, CT scan with loss of WM/GM differentiation; 3, CT scan with reversed GM/WM ratio.
Gray Matter-White Matter De-Differentiation on Brain Computed Tomography Predicts Brain Death Occurrence.
https://doi.org/10.1016/j.transproceed.2016.05.006
Calcifications: choroid plexus and pineal gland are very common locations
Intracranial calcifications on CT: an updated review. Charbel Saade, Elie Najem, Karl Asmar, Rida Salman, Bassam El Achkar, Lena Naffaa (2019)
http://doi.org/10.3941/jrcr.v13i8.3633
In a study by Yalcin et al. (2016) that determined the location and extent of intracranial calcifications in 11,941 subjects, the pineal gland was found to be the most common site of physiologic calcification (71.6%), followed by the choroid plexus (70.2%), with male dominance at both sites and mean ages of 47.3 and 49.8 years respectively. However, the choroid plexus was the most common site of physiologic calcification after the 5th decade, and second most common after the pineal gland in subjects aged 15-45 years. According to the same study, dural calcifications were seen in up to 12.5% of the studied population, the majority in male patients, while basal ganglia calcifications (BGC) were found in only 1.3%. Interestingly, BGC were reported to be more prevalent among females than males, with a mean age of 52.
Examples of patterns of calcification and related terminology. (a) dots, (b) lines, (c) conglomerate or mass-like, (d) rock-like, (e) blush, (f) gyriform/band-like, (g) stippled, (h) reticular.
Calcifications #2
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions
(2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Pineal gland of a 72-year-old male.
Image a reveals the outlined pineal gland on sagittal plane and image b demonstrates the 3-dimensional image and volume of the tissue. Green areas on images c and d exhibit the restricted parenchyma by excluding all the calcified tissues from the slices.
http://doi.org/10.5334/jbr-btr.892
Pineal gland of a 35-year-old female.
Images a and b reveal the outlined pineal gland on sagittal (a) and axial (b) planes on noncontrast computerized tomography images. Green areas on image c exhibit the restricted parenchyma by excluding all the calcified tissues from the slices. Image d demonstrates the 3-dimensional image and volume of noncalcified pineal tissue.
We assume that optimized volumetry of active pineal tissue, and therefore a higher correlation of melatonin and pineal parenchyma, can potentially be improved by a combination of MR and CT imaging in addition to serum melatonin levels. Moreover, in order to improve MR quantification of pineal calcifications, the combined approach would possibly allow an optimization and calibration of MRI sequences by CT, and then perhaps even make CT unnecessary.
Masses: real or hacked (“adversarial attacks”)
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions
(2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
by Brittany Goetting, Thursday, April 04, 2019, 09:24 PM EDT
Terrifying Malware Alters CT Scans To Look Like Cancer, Fools Radiologists
https://hothardware.com/news/malware-creates-fake-cancerous-nodes-in-ct-scans
... Unfortunately, this vital technology is vulnerable to hackers. Researchers recently
designed malware that can add or take away fake cancerous nodules from CT and MRI
scans. Researchers at the University Cyber Security Research Center in Israel
developed malware that can modify CT and MRI scans. During their research, they
showed radiologists real lung CT scans, 70 of which had been altered. At least three
radiologists were fooled nearly every time.
Pituitary apoplexy: two very different presentations with one unifying diagnosis
CT brain scan showing a hyperdense mass arising from the pituitary fossa, representing pituitary macroadenoma with haemorrhage
http://doi.org/10.1258/shorts.2010.100073
Cerebral Abscess
Low density due to cerebral inflammatory disease. A, Typical appearance of a cerebral abscess: round, low-density cavity (arrow) surrounded by low-density vasogenic edema. Differentiation from other cavitary lesions such as radionecrotic cysts or cystic neoplasms often requires clinical/laboratory correlation, with help often provided by contrast-enhanced and diffusion-weighted MRI. B, Progressive multifocal leukoencephalopathy. Whereas white matter low density is nonspecific, involvement of the subcortical U-shaped fibers in the AIDS patient can help differentiate this disorder from HIV encephalitis. C, Toxoplasmosis. Patchy white matter low density (asterisks) in an immunocompromised patient with altered mental status.
https://radiologykey.com/analysis-of-density-signal-intensity-and-echogenicity/
https://www.slideshare.net/Raeez/cns-infections-radiology
Clinical stages of human brain abscesses on serial CT scans after contrast infusion: computerized tomographic, neuropathological, and clinical correlations (1983)
https://doi.org/10.3171/jns.1983.59.6.0972
Ischemic stroke → hypodensity (CSF-like appearance)
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
CT scan slice of the brain showing a right-hemispheric cerebral infarct (left side of image). https://en.wikipedia.org/wiki/Cerebral_infarction
Brain Symmetry: midline shift from mass effect #1
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
https://en.wikipedia.org/wiki/Midline_shift
https://www.slideshare.net/drlokeshmahar/approach-to-head-ct
Brain Symmetry: midline shift #2: estimation with ICP
Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images
Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian
J. Vis. Exp. (74), e3871, doi:10.3791/3871 (2013).
https://www.jove.com/video/3871
In this paper we present an automated system
based mainly on the computed tomography
(CT) images consisting of two main
components: the midline shift
estimation and intracranial pressure
(ICP) pre-screening system. To estimate the
midline shift, first an estimation of the ideal
midline is performed based on the symmetry
of the skull and anatomical features in the brain
CT scan.
Then, segmentation of the ventricles from the
CT scan is performed and used as a guide for
the identification of the actual midline through
shapematching.
These processes mimic the measuring process used by physicians and have shown promising results in the evaluation. The second component extracts additional features related to ICP, such as texture information and blood amount from the CT scans, and incorporates other recorded features, such as age and injury severity score, to estimate the ICP.
The result of the ideal midline detection. The red line is the approximate ideal midline. The two rectangular boxes cover the bone protrusion and the lower falx cerebri respectively. These boxes are used to reduce the regions of interest. The green dash line is the final detected ideal midline, which captures the bone protrusion and the lower falx cerebri accurately.
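The symmetry step that these pipelines start from (before adding skull and falx landmarks) can be sketched by scoring candidate midline columns against the slice's own mirror image; this is a toy illustration of the idea, not the published method:

```python
# Score candidate vertical midlines by left-right mirror symmetry and keep the
# best-matching column. Real systems (Chen et al., Pisov et al.) refine this
# with bone-protrusion/falx landmarks and ventricle shape matching.
import numpy as np

def estimate_midline_column(slice2d, search=10):
    h, w = slice2d.shape
    center = w // 2
    best_col, best_err = center, np.inf
    for c in range(center - search, center + search + 1):
        half = min(c, w - c)                     # widest band fitting both sides
        left = slice2d[:, c - half:c]
        right = slice2d[:, c:c + half][:, ::-1]  # mirrored right-hand band
        err = np.mean((left - right) ** 2)       # symmetry score (lower = better)
        if err < best_err:
            best_err, best_col = err, c
    return best_col

# Synthetic "head": a blob mirror-symmetric about the boundary at column 32
y, x = np.mgrid[0:64, 0:64]
phantom = np.exp(-(((x - 31.5) / 12.0) ** 2 + ((y - 32.0) / 16.0) ** 2))
midline = estimate_midline_column(phantom)
```

A midline shift estimate then follows as the offset between this ideal midline and the actual midline delineated by the deformed anatomy (e.g. the ventricles).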
Brain Symmetry: midline shift #3: detection algorithms
The middle slice and the anatomical markers. A deformed midline example and the anatomical midline shift marker.
https://doi.org/10.1016/j.compmedimag.2013.11.001 (2014) A Simple, Fast and Fully Automated Approach for Midline Shift Measurement on Brain Computed Tomography. Huan-Chih Wang, Shih-Hao Ho, Furen Xiao, Jen-Hai Chou
https://arxiv.org/abs/1703.00797
Incorporating Task-Specific Structural Knowledge into CNNs for Brain Midline Shift Detection. Maxim Pisov et al. (2019)
https://doi.org/10.1007/978-3-030-33850-3_4
https://github.com/neuro-ml/midline-shift-detection
Commercial CT Scanners
Siemens hot in London
Siemens Unveils AI Apps for Automatic MRI Image Segmentation
December 4th, 2019, Medgadget Editors
The AI-Rad Companion Brain MR for
Morphometry Analysis, without any manual
intervention, segments brain images from
MRI exams, calculates brain volume, and
automatically marks volume deviations in
result tables that neurologists rely on for
diagnostics and therapeutics. The last part it
does by comparing the levels of gray matter,
white matter, and cerebrospinal fluid in a
given patient’s brain to normal levels. This
can help with diagnosing Alzheimer’s,
Parkinson’s, and other diseases.
https://www.medgadget.com/2019/12/siemens-unveils-ai-apps-for-automatic-mri-image-segmentation.html
Siemens could provide a similar tool for CT too
https://global.canon/en/technology/interview/ct/index.html
CT System Receives FDA Clearance for AI-Based Image Reconstruction Technology
07 Nov 2019. Canon Medical Systems USA, Inc. (Tustin, CA, USA) has received 510(k) clearance for its Advanced Intelligent Clear-IQ Engine (AiCE) for the Aquilion Precision. https://www.medimaging.net/industry-news/articles/294779910/ct-system-receives-fda-clearance-for-ai-based-image-reconstruction-technology.html
Canon Medical is releasing a
new high-end digital PET/CT
scanner at the upcoming
RSNA conference in Chicago.
The Cartesion Prime Digital
PET/CT combines Canon’s
Aquilion Prime SP CT
scanner and the SiPM (silicon
photomultiplier) PET detector,
providing high resolution
imaging and easy operator
control, according to the
company.
Product page: Cartesion Prime Digital PET/CT
Epica SeeFactorCT3 Multi-Modality System Wins FDA Clearance
October 8th, 2019
https://www.medgadget.com/2019/10/epica-seefactorct3-multi-modality-system-wins-fda-clearance.html
The SeeFactorCT3 produces sliceless CT images, unlike typical CT systems, which
means that there’s no interpolation involved and therefore less chance of introducing
artifacts. Isotropic imaging resolution goes down to 0.1 millimeters in soft and hard tissues, and lesions that are only 0.2 millimeter in diameter can be detected. Thanks to
the company’s “Pulsed Technology,” the system can perform high resolution imaging
while reducing the overall radiation delivered. Much of this is possible thanks to a
dynamic flat panel detector that captures image sequences accurately and at high
fidelity.
A big advantage of the SeeFactorCT3 is its mobility, since it can be wheeled in and
out of ORs, through hospital halls, and even taken inside patient rooms. When set for
transport, the device is narrow enough to be pushed through a typical open door.
Royal Philips extends diagnostic imaging portfolio
Diagnostic Devices, Diagnostic Imaging. By NS Medical Staff Writer, 01 Mar 2019
https://www.nsmedicaldevices.com/news/philips-incisive-ct-imaging-system/
The system is being offered with ‘Tube for Life’
guarantee, as it will replace the Incisive’s X-ray tube, the
key component of any CT system, at no additional cost
throughout the entire life of the system, potentially
lowering operating expenses by about $400,000.
Additionally, the system features the company’s iDose4
Premium Package which includes two technologies that
can improve image quality, iDose4 and metal artifact reduction for large orthopedic implants (O-MAR).
iDose4 can improve image quality through artifact
prevention and increased spatial resolution at low dose.
O-MAR reduces artifacts caused by large orthopedic
implants. Together they produce high image quality with reduced artifacts.
The system’s 70 kV scan mode is touted to offer improved low-contrast detectability and confidence at low dose.
https://youtu.be/izXI3qry8kY
Portable CTs: CereTom
Review of Portable CT with Assessment of a Dedicated Head CT Scanner
Z. Rumboldt, W. Huda and J.W. All
American Journal of Neuroradiology October 2009, 30 (9) 1630-1636
https://doi.org/10.3174/ajnr.A1603 - Cited by 91
This article reviews a number of portable CT
scanners for clinical imaging. These include
the CereTom, Tomoscan, xCAT ENT, and
OTOscan. The Tomoscan scanner consists
of a gantry with multisection detectors and a
detachable table. It can perform a full-body
scanning, or the gantry can be used without
the table to scan the head. The xCAT ENT is a
conebeam CT scanner that is intended for
intraoperative scanning of cranial bones and
sinuses. The OTOscan is a multisection CT
scanner intended for imaging in ear, nose, and
throat settings and can be used to assess
bone and soft tissue of the head.
We also specifically evaluated the technical
and clinical performance of the CereTom, a
scanner designed specifically for
neuroradiologic head imaging.
https://doi.org/10.1097/JNN.0b013e3181ce5c5b
Ginat and Gupta (2014)
https://doi.org/10.1146/annurev-bioeng-121813-113601
CT “Startup” Scanners: addressing “market inefficiencies” and going smaller and cheaper
Future of CT: from energy-integrating detectors (EID) to photon-counting detectors (PCD)?
The Future of Computed Tomography: Personalized, Functional, and Precise
Alkadhi, Hatem and Euler, André
Investigative Radiology: September 2020 - Volume 55 - Issue 9 - p 545-555
http://doi.org/10.1097/RLI.0000000000000668
Modern medicine cannot be imagined without the
diagnostic capabilities of computed tomography
(CT). Although the past decade witnessed a
tremendous increase in scan speed, volume
coverage, and temporal resolution, along with a
considerable reduction of radiation dose, current
trends in CT aim toward more patient-centric, tailored imaging approaches that deliver diagnostic information personalized to each individual patient. Functional CT with dual- and multienergy, as well as dynamic perfusion imaging, became clinical reality and will further prosper in the near future, and upcoming photon-counting detectors will deliver images at a heretofore unmatched spatial resolution.
This article aims to provide an overview of current
trends in CT imaging, taking into account the
potential of photon-counting detector systems,
and seeks to illustrate how the future of CT will
be shaped.
CT Startup: Nanox from Israel. Great idea, if it would work as said? #1
https://www.mobihealthnews.com/news/nanoxs-digital-x-ray-system-wins-26m-investors
The end goal is to deliver a
robust imaging system that
can drive earlier disease
detection, especially in regions
where traditional systems
are either too costly or too
complicated to roll out
broadly.
Looking at the longer term, Nanox said that it will be seeking regulatory approval for its platform, and then deploying it globally under a pay-per-scan business model that it says will enable cheaper medical imaging and screening for private and public provider systems.
CT Startup: Nanox from Israel. Great idea, if it would work as said? #2
Muddy Waters Research @muddywatersre
MW is short $NNOX. We
conclude that $NNOX
has no product to sell
other than its stock.
Like $NKLA, NNOX
appears to have faked its
demo video. A convicted
felon appears to be behind
the IPO. A US partner has
been requesting images
for 6 months to no avail
"But NNOX gets much worse," the report
says. "A convicted felon, who crashed an
$8 billion market cap dotcom into the
ground, was seemingly instrumental in
plucking NNOX out of obscurity and
bringing its massively exaggerated story
to the U.S. NNOX touts distribution
partnerships that supposedly amount to
$180.8 million in annual commitments.
Almost all of the company’s partnerships
give reason for skepticism."
Marty Stempniak | September 18, 2020 | Healthcare Economics & Policy
Nanox hit with class action lawsuit amid criticism labeling imaging startup as ‘Theranos 2.0’
https://www.radiologybusiness.com/topics/healthcare-economics
The news comes just weeks after the Israeli firm completed a successful initial public offering
that raised $190 million. Nanox has inked a series of deals in several countries to provide its
novel imaging system, claiming to offer high-end medical imaging at a fraction of the cost and
footprint. But analysts at Citron Research raised red flags Tuesday, Sept. 15, claiming the
company is merely a “stock promotion” amassing millions without any FDA approvals or
scientific evidence.
Citron’s analysis—titled “A Complete Farce on the Market: Theranos 2.0”—drew
widespread attention, with several law firms soliciting investors looking to sue Nanox over its
claims. Plaintiff Matthew White and law firm Rosen Law are one of the first to follow
through, filing a proposed securities class action in New York on Wednesday.
He claims the company made false statements to both the SEC and investors to inflate its
stock value, Bloomberg Law reported. White and his attorneys also allege Nanox fabricated
commercial agreements and made misleading statements about its imaging technology.
Several other law firms also announced their own lawsuits on behalf of investors Friday. 
Nanox did not respond to a Radiology Business request for comment. However, the Neve Ilan,
Israel-based company posted a statement to its webpage Wednesday, Sept. 16, addressing the
“unusual trading activities” after investors dumped the stock en masse in response to
Citron’s concerns.
Commercial CT Detectors
If you want to build your own CT scanner
From Advances in Computed Tomography Imaging Technology
Ginat and Gupta (2014) https://doi.org/10.1146/annurev-bioeng-121813-113601
A typical multidetector CT scanner consists of a mosaic of scintillators that convert X-rays into light in the visible spectrum, a photodiode array that converts the light into an electrical signal, a switching array that enables switching between channels, and a connector that conveys the signal to a data acquisition system (Figure 6).
The multiple channels between the detectors acquire multiple sets of projection data for each rotation of the scanner gantry. The channels can sample different detector elements simultaneously and can combine the signals.
The detector elements can vary in size, and hybrid detectors comprising narrow (0.5-mm, 0.625-mm, or 0.75-mm) detectors in the center flanked by wider (1.0-mm, 1.25-mm, or 1.5-mm) detectors along the sides are commonly used (Saini 2004).
Third-generation CT scanners featured rotate-rotate geometry, whereby the tube and the detectors rotated together around the patient. In conjunction with a wide X-ray fan beam that encompassed the entire patient cross-section and an array of detectors to intercept the beam, scan times of less than 5 s could be achieved. However, third-generation CT scanners were prone to ring artifacts resulting from drift in the calibration of one detector relative to the other detectors. Fourth-generation scanners featured stationary ring detectors and a rotating fan-beam X-ray tube (Figure 5), which mitigated the issues related to ring artifacts. However, the ring-detector arrangement limited the use of scatter reduction.
leakage current MOS switch ASICs and ultra-low noise pre-amplification ASICs. Our modern, automated, high-precision assembly process guarantees our products are of high reliability and stability.
With our core competences in photodiode, ASIC and assembly technologies we offer products in different assembly levels, ranging from photodiode chips to full detector modules. Our strong experience in designing and developing CT detector modules ensures that customized solutions are quickly and cost-efficiently in use at our customers.
CT Physics + Tech
Acquisition → Sinogram → Reconstruction
Fransson (2019): Although many different reconstruction methods are available, there are mainly two categories: filtered back-projection (FBP) and iterative reconstruction (IR). FBP is a simpler method than IR and takes less time to compute, but artifacts are more frequent and dominant (Stiller 2018). The image that provides the anatomical information is said to exist in the image domain; the raw measurements exist in the projection domain, which is related to the image domain through the Fourier transform (the Fourier slice theorem). In the projection domain the data are processed with filters, or kernels, to enhance the image in various ways, such as reducing the noise level. When the processing is completed, the inverse transform is applied to the data in order to recover the desired anatomical image.
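The FBP pipeline described above (acquire projections, filter them, back-project) can be sketched in pure NumPy/SciPy for parallel-beam geometry. Clinical scanners use fan/cone-beam geometries and vendor-specific kernels, so this is only a conceptual sketch:

```python
# Minimal parallel-beam filtered back-projection: forward-project a phantom,
# ramp-filter each projection in the Fourier domain, then smear back.
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles_deg):
    """Forward projection: rotate the image, then sum along columns."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles_deg])

def fbp(sinogram, angles_deg):
    """Ramp-filter each projection, back-project, and average over angles."""
    n = sinogram.shape[1]
    ramp = np.abs(np.fft.fftfreq(n))                     # ideal ramp filter
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    recon = np.zeros((n, n))
    for proj, a in zip(filtered, angles_deg):
        smear = np.tile(proj, (n, 1))                    # constant along each ray
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon * np.pi / (2.0 * len(angles_deg))

angles = np.linspace(0.0, 180.0, 60, endpoint=False)
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                              # square "lesion"
sino = radon(phantom, angles)
recon = fbp(sino, angles)
```

Swapping the ideal ramp for smoother apodized variants trades resolution against noise, which is exactly the "kernel" choice mentioned above; IR methods instead iterate back and forth between image and projection domains.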
Acquisition → Sinogram → Reconstruction
Stiller 2018: Basics of iterative reconstruction methods in computed tomography: A vendor-independent overview
Sinogram → Image Space
Machine Friendly Machine Learning: Interpretation of Computed Tomography Without Image Reconstruction
Hyunkwang Lee, Chao Huang, Sehyo Yune, Shahein H. Tajmir, Myeongchan Kim & Synho Do
Department of Radiology, Massachusetts General Hospital, Boston; John A. Paulson School of Engineering and Applied Sciences, Harvard University
Scientific Reports volume 9, Article number: 15540 (2019)
https://doi.org/10.1038/s41598-019-51779-5
Examples of reconstructed images and sinograms with different labels for (a) body part recognition and (b) ICH detection. From left to right: original CT images, windowed CT images, sinograms with 360 projections by 729 detector pixels, and windowed sinograms 360 × 729. In the last row, an example CT with hemorrhage is annotated with a dotted circle in image space, with the region of interest converted into the sinogram domain using the Radon transform. This area is highlighted in red on the sinogram in the fifth column.
Reconstruction from sparse measurements: a common problem in all scanning-based imaging
Zhu et al. (2018) Nature "Image reconstruction by domain-transform manifold learning" https://doi.org/10.1038/nature25988
Radon projection; Spiral non-Cartesian Fourier; Undersampled Fourier; Misaligned Fourier - Cited by 238 - https://youtu.be/o-vt1Ld6v-M
https://github.com/chongduan/MRI-AUTOMAP
They describe the technique - dubbed AUTOMAP
(automated transform by manifold approximation) - in a
paper published today in the journal Nature.
"An essential part of the clinical imaging pipeline is image
reconstruction, which transforms the raw data coming
off the scanner into images forradiologists to evaluate,"
https://phys.org/news/2018-03-arti
ficial-intelligence-technique-quality-
medical.html
PET+CT Joint Reconstruction
Improving the Accuracy of Simultaneously Reconstructed Activity and Attenuation Maps Using Deep Learning
Donghwi Hwang, Kyeong Yun Kim, Seung Kwan Kang, Seongho Seo, Jin Chul Paeng, Dong Soo Lee and Jae Sung Lee
J Nucl Med 2018;59:1624-1629
http://doi.org/10.2967/jnumed.117.202317
Simultaneous reconstruction of activity and attenuation using
the maximum-likelihood reconstruction of activity and
attenuation (MLAA) augmented by time-of-flight information
is a promising method for PET attenuation correction.
However, it still suffers from several problems, including
crosstalk artifacts, slow convergence speed, and noisy
attenuation maps (μ-maps). In this work, we developed deep
convolutional neural networks (CNNs) to overcome these
MLAA limitations, and we verified their feasibility using a
clinical brain PET dataset.
There are some existing works on applying deep learning to predict CT μ-maps based on T1-weighted MR images or a combination of Dixon and zero-echo-time images (51,52). The approach using the Dixon and zero-echo-time images would be more physically relevant than the T1-weighted MRI-based approach because the Dixon and zero-echo-time sequences provide more direct information on the tissue composition than does the T1 sequence. The method proposed in this study has the same physical relevance as the Dixon or zero-echo-time approach but does not require the acquisition of additional MR images.
Reconstruction example for PET from sinograms
DirectPET: Full Size Neural Network PET Reconstruction from Sinogram Data
William Whiteley, Wing K. Luk, Jens Gregor. Siemens Medical Solutions USA
https://arxiv.org/abs/1908.07516
This paper proposes a new more
efficient network design called
DirectPET which is capable of
reconstructing a multi-slice Positron
Emission Tomography (PET) image
volume (i.e., 16x400x400) by
addressing the computational
challenges through a specially
designed Radon inversion layer. We
compare the proposed method to the
benchmark Ordered Subsets
Expectation Maximization
(OSEM) algorithm using signal-to-
noise ratio, bias, mean absolute error
and structural similarity measures.
Line profiles and full-width half-maximum measurements are also provided for a sample of lesions.
Looking toward future work, there are many possibilities in network architecture, loss functions and training optimization to explore, which will undoubtedly lead to more efficient reconstructions and even higher quality images. However, the biggest challenge with producing medical images is providing overall confidence in neural network reconstruction on unseen samples.
Improving the Accuracy of Simultaneously Reconstructed Activity and Attenuation Maps Using Deep Learning. J Nucl Med 2018;59:1624-1629 http://doi.org/10.2967/jnumed.117.202317
CT Artifacts
Beam Hardening Artifact: found often at lower slices near the brainstem, with small spaces surrounded by bone
Beam hardening artifact (left), and partial volume effect (right)
http://doi.org/10.13140/RG.2.1.2575.3122
Understanding and Mitigating Unexpected Artifacts in Head CTs: A Practical Experience
Flavius D. Raslau, J. Zhang, J. Riley-Graham, E.J. Escott (2016)
http://doi.org/10.3174/ng.2160146
Beam Hardening. The most commonly encountered artifact in CT scanning is beam hardening, which causes the edges of an object to appear brighter than the center, even if the material is the same throughout.
The artifact derives its name from its underlying cause: the increase in mean X-ray energy, or “hardening” of
the X-ray beam as it passes through the scanned object. Because lower-energy X-rays are attenuated more readily
than higher-energy X-rays, a polychromatic beam passing through an object preferentially loses the lower-
energy parts of its spectrum. The end result is a beam that, though diminished in overall intensity, has a higher
average energy than the incident beam. This also means that, as the beam passes through an object, the effective
attenuation coefficient of any material diminishes, thus making short ray paths proportionally more attenuating than
long ray paths. In X-ray CT images of sufficiently attenuating material, this process generally manifests itself
as an artificial darkening at the center of long ray paths, and a corresponding brightening near the edges.
In objects with roughly circular cross sections this process can cause the edge to appear brighter than the interior,
but in irregular objects it is commonly difficult to differentiate between beam hardening artifacts and
actual material variations.
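The hardening mechanism described above can be illustrated with a toy two-energy-bin Beer-Lambert model; the attenuation coefficients, energies and weights below are invented for illustration, not calibrated values:

```python
import numpy as np

# Toy polychromatic beam with two energy bins. Lower-energy photons have a
# larger attenuation coefficient, so they are removed preferentially and the
# transmitted beam "hardens" (its mean energy rises) with path length.
mu = np.array([0.4, 0.2])        # per-cm attenuation of each bin (invented)
energy = np.array([40.0, 80.0])  # keV of each bin (invented)
w0 = np.array([0.5, 0.5])        # incident spectral weights (sum to 1)

for path_cm in (2.0, 5.0, 10.0):
    w = w0 * np.exp(-mu * path_cm)            # Beer-Lambert per energy bin
    mean_kev = (w * energy).sum() / w.sum()   # mean energy of transmitted beam
    mu_eff = -np.log(w.sum()) / path_cm       # effective attenuation over path
    print(f"{path_cm:4.1f} cm: mean energy {mean_kev:5.1f} keV, "
          f"effective mu {mu_eff:.3f} /cm")

# Mean energy rises and effective mu falls with depth, which is exactly why
# short ray paths look proportionally more attenuating than long ones.
```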
Motion Artifacts: as in most imaging, these occur when the subject moves during the acquisition
Several steps can be taken to prevent voluntary
movement of the body during scanning, while
involuntary movement is difficult to prevent. Some
modern scanning devices have features that
reduce the resulting artifacts.
Amer et al. (2018) researchgate.net
Artifacts in CT: recognition and avoidance.
Barrett and Keat (2004)
https://doi.org/10.1148/rg.246045065
Freeze! Revisiting CT motion artifacts: Formation, recognition and remedies.
semanticscholar.org
CT brain with severe motion artifact
https://radiopaedia.org/images/4974802
Streak Artifacts from high-density structures
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Dr Balaji Anvekar's Neuroradiology Cases: Streak artifacts CT
http://www.neuroradiologycases.com/2011/10/streak-artifacts.html
Hegazy, M.A.A., Cho, M.H., Cho, M.H. et al.
U-net based metal segmentation
on projection domain for metal
artifact reduction in dental CT
(2019)
https://doi.org/10.1007/s13534-019-00110-2
Ring Artifacts from miscalibrated or defective detector elements
CT artifacts: causes and reduction
techniques (2012)
F Edward Boas & Dominik Fleischmann, Department
of Radiology, Stanford University School of Medicine,
300 Pasteur Drive, Stanford, CA 94305, USA
https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html http://doi.org/10.1088/0031-9155/46/12/309
Zebra and Stair-step Artifacts
CT artifacts: causes and reduction techniques (2012)
F Edward Boas & Dominik Fleischmann, Department of Radiology, Stanford University School of Medicine
https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html
Zebra and stair-step artifacts. (A) Zebra artifacts (alternating high and low noise
slices, arrows) due to helical interpolation. These are more prominent at the periphery
of the field of view. (B) Stair-step artifacts (arrows) seen with helical and
multidetector row CT. These are also more prominent near the periphery of the field of
view. Therefore, it is important to place the object of interest near the center of the field
of view.
Zebra stripes
https://radiopaedia.org/articles/zebra-stripes-1?lang=gb
Andrew Murphy and Dr J. Ray Ballinger et al.
Zebra stripes/artifacts appear as alternating bright and dark bands in an MRI image. The term
has been used to describe several different kinds of artifacts, causing some confusion.
Artifacts that have been described as a zebra artifact include the following:
● Moire fringes
● Zero-fill artifact
● Spike in k-space
Zebra stripes have been described associated with susceptibility artifacts.
In CT there is also a zebra artifact from 3D reconstructions and a zebra sign from
haemorrhage in the cerebellar sulci.
It therefore seems prudent to use "zebra" with a term like "stripes" rather than "artifacts".
Bone discontinuities from fractures
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
https://www.ncbi.nlm.nih.gov/pubmed/21691535
Bone fractures in practice
Doctor Explains Serious UFC Eye Injury for Karolina Kowalkiewicz - UFC Fight Night 168
Brian Sutterer, https://youtu.be/XwvoNsypP-I
Orbital Floor fracture:
muscle or fat herniating into the maxillary sinus
https://en.wikipedia.org/wiki/Orbital_blowout_fracture
Networks trained for fractures as well
Deep Convolutional Neural
Networks for Automatic
Detection of Orbital Blowout
Fractures
D. Ng, L. Churilov, P. Mitchell, R. Dowling and B. Yan
American Journal of Neuroradiology February 2018, 39
(2) 232-237; https://doi.org/10.3174/ajnr.A5465
Orbital blow out fracture is a common
disease in emergency department and a
delay or failure in diagnosis can lead to
permanent visual changes. This study aims to
evaluate the ability of an automatic orbital
blowout fractures detection system based on
computed tomography (CT) data.
The limitations of this work should be
mentioned. First, our method was developed
and evaluated on data from a single tertiary
hospital. Thus, further assessment of large
data from other centers is required to increase
the generalizability of the findings, which will be
addressed in a future work. Fracture location is
also an important parameter in accurate
diagnosis and planning for surgical
management. With further improvements and
clinical verification, an optimized model could be
implemented in the development of computer-aided decision systems.
Preprocessing of DICOM data. A, Original pixel values visualized on a CT slice. B, Effect after finding the largest linked
area. C, Image with bone window limitation. D, Binary image of a CT slice. E, Image clipped with the maximum outer
rectangular frame. CT, computed tomography.
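A minimal sketch of this kind of DICOM preprocessing (window/level clipping plus cropping to the head region); the function names, window settings and air threshold are typical illustrative values, not the paper's exact pipeline:

```python
import numpy as np

def apply_window(hu, center, width):
    """Clip Hounsfield units to a display window and scale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def crop_to_head(hu, threshold_hu=-500.0):
    """Crop a slice to the bounding box of voxels denser than air."""
    mask = hu > threshold_hu                      # binary image of the head
    rows, cols = np.any(mask, axis=1), np.any(mask, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return hu[r0:r1 + 1, c0:c1 + 1]

# Toy slice: air background (-1000 HU) with a rectangular soft-tissue "head"
slice_hu = np.full((128, 128), -1000.0)
slice_hu[32:96, 40:90] = 40.0                     # soft tissue around 40 HU
cropped = crop_to_head(slice_hu)
brain = apply_window(cropped, center=40, width=80)    # typical brain window
bone = apply_window(cropped, center=500, width=2000)  # typical bone window
print(cropped.shape, brain.max(), bone.max())
```

Real pipelines first rescale stored DICOM pixel values to HU via the RescaleSlope/RescaleIntercept tags before any windowing.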
“Signs”
Clinician-invented
handcrafted features
‘Signs’: human-defined patterns predicting the outcome #1
Noncontrast computed tomography
markers of outcome in intracerebral
hemorrhage patients
Miguel Quintas-Neves et al. (Oct 2019)
A Journal of Progress in Neurosurgery, Neurology and Neurosciences
https://doi.org/10.1080/01616412.2019.1673279
328 patients were included. The most frequent
NCCT marker was ‘any hypodensity’ (68.0%) and the
least frequent was the blend sign (11.6%). Even though
some noncontrast computed tomography (NCCT)
markers are independent predictors of hematoma
growth (HG) and 30-day survival, they have suboptimal
diagnostic test performances for such outcomes.
With a physical background of course, but still a bit subjective
‘Signs’: human-defined patterns predicting the outcome #2
From Hemorrhagic Stroke (2014)
Julius Griauzde, Elliot Dickerson and Joseph J. Gemmete, Department of Radiology, University of Michigan
http://doi.org/10.1007/978-1-4614-9212-2_46-1
Active Hemorrhage. Observing active extravasation
of blood into the area of hemorrhage is an ominous
radiologic finding that suggests both ongoing expansion
of the hematoma and a poor clinical outcome
[Kim et al. 2008]. On non-contrast examinations, freshly
extravasated blood will have attenuation
characteristics different from the blood which has
been present in the hematoma for a longer
period, and these heterogeneous groups of blood
products can circle around one another to produce a
“swirl sign”, which has also been associated with
hemorrhage growth and poor outcomes [Kim et al. 2008].
If the patient receives a CTA study, active extravasation
can present as a tiny spot on arterial phase images
(the “spot sign”) which can rapidly expand on more
delayed phase images. Even when a spot of precise
extravasation is not identified on arterial phase images,
more delayed images can directly demonstrate
extravasated contrast indicating ongoing hemorrhage.
With a physical background of course, but still a bit subjective
a NCCT of deep right ICH (38 ml) with swirl sign (arrow). b Corresponding hematoma CT densitometry
histogram (Mean HU 55.3, SD 9.7, CV 0.18, Skewness −0.26, Kurtosis 2.41). c CTA with multiple spot signs
present (arrows). The patient subsequently underwent hematoma expansion of 41 ml. d NCCT of a
different patient with right frontal lobar ICH (38 ml) and trace IVH. e Corresponding hematoma CT
densitometry histogram (Mean HU 61.5, SD 12.2, CV 0.20, Skewness −0.64, Kurtosis 2.6). f CTA
demonstrates no evidence of spot sign. The patient had a stable hematoma on 24-hour follow-up.
Swirls and spots: relationship between qualitative and quantitative
hematoma heterogeneity, hematoma expansion, and the spot sign. Dale
Connor, Thien J. Huynh, Andrew M. Demchuk, Dar Dowlatshahi, David J. Gladstone, Sivaniya Subramaniapillai, Sean P. Symons &
Richard I. Aviv. Neurovascular Imaging volume 1, Article number: 8 (2015)
https://doi.org/10.1186/s40809-015-0010-1
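The densitometry quantities reported in the caption (mean HU, SD, CV, skewness, kurtosis) can be reproduced from the voxel values inside a hematoma ROI with SciPy; the ROI below is synthetic, and the kurtosis convention is an assumption inferred from the reported magnitudes:

```python
import numpy as np
from scipy import stats

def densitometry(roi_hu):
    """Summary statistics of Hounsfield units inside a hematoma ROI.
    Kurtosis uses the Pearson (non-excess) convention, under which a
    normal distribution scores 3, matching the magnitudes in the caption."""
    roi_hu = np.asarray(roi_hu, dtype=np.float64)
    mean, sd = roi_hu.mean(), roi_hu.std(ddof=1)
    return {
        "mean_hu": mean,
        "sd": sd,
        "cv": sd / mean,  # coefficient of variation
        "skewness": stats.skew(roi_hu, bias=False),
        "kurtosis": stats.kurtosis(roi_hu, fisher=False, bias=False),
    }

# Synthetic ROI: acute blood around 58 HU with some heterogeneity
rng = np.random.default_rng(42)
roi = rng.normal(loc=58.0, scale=10.0, size=5000)
print(densitometry(roi))
```

Negative skewness, as in the caption's examples, would indicate a tail of lower-attenuation (hypodense, heterogeneous) voxels inside the hematoma.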
CT “Swirl Sign” associated with hematoma expansion
The CT Swirl Sign Is Associated
with Hematoma Expansion in
Intracerebral Hemorrhage
D. Ng, L. Churilov, P. Mitchell, R. Dowling and B. Yan
American Journal of Neuroradiology February 2018, 39
(2) 232-237; https://doi.org/10.3174/ajnr.A5465
Hematoma expansion is an
independent determinant of poor
clinical outcome in intracerebral
hemorrhage. Although the “spot sign”
predicts hematoma expansion, the
identification requires CT angiography, which
limits its general accessibility in some hospital
settings. Noncontrast CT (NCCT), without the
need for CT angiography, may identify sites of
active extravasation, termed the “swirl sign.”
We aimed to determine the association of the
swirl sign with hematoma expansion.
The NCCT swirl sign was reliably identified
and is associated with hematoma expansion.
We propose that the swirl sign be
included in risk stratification of
intracerebral hemorrhage and
considered for inclusion in clinical
trials.
Noncontrast brain CT of a 73-year-old
woman who presented with right-sided
weakness.
Initial brain CT (A–C) demonstrates a left
parietal hematoma measuring 33 mL,
demonstrating hypodense hematoma with
hypodense foci, the swirl sign.
Follow-up CT (D–F) performed 8 hours later
demonstrates increased hematoma volume,
46 mL.
Imaging features of swirl sign and spot sign
Coronal nonenhanced CT (A) demonstrates the
hypodense area within the hematoma (swirl sign
[asterisk]), whereas a hyperdense spot is shown on
CT angiography (arrow) (B). There is already mass
effect with midline shift and intraventricular
hematoma extension.
https://doi.org/10.1212/WNL.0000000000003290
CT “Spot Sign”
Advances in CT for prediction of hematoma
expansion in acute intracerebral
hemorrhage
Thien J Huynh, Sean P Symons and Richard I Aviv
Division of Neuroradiology, Department of Medical Imaging, Sunnybrook Health Sciences and
University of Toronto, Toronto, Canada
Imaging in Medicine (2013) Vol 5 Issue 6
https://www.openaccessjournals.com/articles/advances-in-ct-for-prediction-of-hematoma-expansion-in-acute-intracerebral-hemorrhage.html
Noncontrast CT imaging plays a critical role in acute
intracerebral hemorrhage (ICH) diagnosis, as
clinical features are unable to reliably distinguish
ischemic from hemorrhagic stroke. For
detection of acute hemorrhage, CT is considered
the gold-standard; however CT and MRI have
been found to be similar in accuracy. CT is
preferred over MR imaging due to reduced
cost, rapid scan times, increased patient tolerability
and increased accessibility in the emergency
setting. It is important to note, however, that CT
lacks sensitivity in identifying foci of chronic
hemorrhage compared with gradient echo and T2*
susceptibility- weighted MRI. MR imaging may also
provide additional information regarding the
presence of cavernous malformations and
characterizing perihematomal edema.
CT “Black Hole Sign”
Comparison of Swirl Sign and
Black Hole Sign in Predicting
Early Hematoma Growth in
Patients with Spontaneous
Intracerebral Hemorrhage
Xin Xiong et al. (2018)
http://doi.org/10.12659/MSM.906708
Early hematoma growth is associated with
poor outcome in patients with spontaneous
intracerebral hemorrhage (ICH). The swirl
sign (SS) and the black hole sign (BHS) are
imaging markers in ICH patients. The aim of
this study was to compare the predictive
value of these 2 signs for early hematoma
growth
Illustration of swirl sign, black hole sign, and follow-up CT
images. (A) A 60-year-old man presented with sudden
onset of left-sided paralysis. Admission CT image
performed 1 h after onset of symptoms showing thalamic
ICH with a swirl sign (arrow); the hematoma volume
was 16.57 ml. (B) Hematoma volume remains the same on
follow-up CT scan performed 23 h after onset
of symptoms. (C) A 75-year-old man with left deep ICH.
Initial CT image performed 2 h after onset of symptoms
shows black hole sign (arrow). (D) Follow-up CT
image 4 h later shows significant hematoma growth.
CT “Leakage Sign”. You probably noticed the pattern already: instead of admitting that no
single “sign” can tell you the whole story, clinicians keep defining non-robust “biomarkers”,
while data-driven methods remain under-explored (as in most clinical domains).
Leakage Sign for Primary
Intracerebral Hemorrhage:
A Novel Predictor of Hematoma Growth
Kimihiko Orito, Masaru Hirohata, Yukihiko Nakamura, Nobuyuki
Takeshige, Takachika Aoki, Gousuke Hattori, Kiyohiko Sakata, Toshi Abe,
Yuusuke Uchiyama, Teruo Sakamoto, and Motohiro Morioka
Stroke. 2016;47:958–963
https://doi.org/10.1161/STROKEAHA.115.011578
Recent studies of intracerebral
hemorrhage treatments have
highlighted the need to identify reliable
predictors of hematoma expansion.
Several studies have suggested that the
spot sign on computed tomographic
angiography (CTA) is a sensitive
radiological predictor of hematoma
expansion in the acute phase. However,
the spot sign has low sensitivity for
hematoma expansion. In this study, we
evaluated the usefulness of a novel
predictive method, called the leakage
sign.
The leakage sign was more sensitive than the spot sign for predicting hematoma expansion in patients
with ICH. In addition to the indication for an operation and aggressive treatment, we expect that this
method will be helpful to understand the dynamics of ICH in clinical medicine.
CT “Island Sign”
Island Sign: An Imaging
Predictor for Early Hematoma
Expansion and Poor Outcome in
Patients With Intracerebral
Hemorrhage
Qi Li, Qing-Jun Liu, Wen-Song Yang, Xing-Chen Wang, Li-Bo Zhao, Xin
Xiong, Rui Li, Du Cao, Dan Zhu, Xiao Wei, and Peng Xie
Stroke. 2017;48:3019–3025, 10 Oct 2017
https://doi.org/10.1161/STROKEAHA.117.017985
We included patients with spontaneous
intracerebral hemorrhage (ICH) who
had undergone baseline CT within 6 hours
after ICH symptom onset in our hospital
between July 2011 and September 2016. A
total of 252 patients who met the inclusion
criteria were analyzed. Among them, 41
(16.3%) patients had the island sign on
baseline noncontrast CT scans. In addition,
the island sign was observed in 38 of 85
patients (44.7%) with hematoma growth.
Multivariate logistic regression analysis
demonstrated that the time to baseline CT
scan, initial hematoma volume, and the
presence of the island sign on baseline
CT scan independently predicted early
hematoma growth.
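The multivariate logistic regression analysis described above can be sketched on invented data; the generative coefficients and sample size below are assumptions for illustration, not the study's estimates:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Synthetic stand-in for the island-sign analysis: predict hematoma growth
# from time to baseline CT (h), initial volume (ml) and island sign (0/1).
rng = np.random.default_rng(0)
n = 500
time_to_ct = rng.uniform(0.5, 6.0, n)
volume = rng.uniform(5.0, 60.0, n)
island = rng.integers(0, 2, n).astype(float)

# Assumed generative model: earlier scans, larger hematomas and a positive
# island sign all raise the odds of growth (coefficients are invented).
logit = -3.0 - 0.4 * time_to_ct + 0.05 * volume + 1.5 * island
growth = rng.random(n) < 1.0 / (1.0 + np.exp(-logit))

X = np.column_stack([time_to_ct, volume, island])
model = LogisticRegression(max_iter=1000).fit(X, growth)
print(np.round(model.coef_[0], 2))  # recovered effect directions
```

The fitted coefficients recover the assumed signs: negative for time to CT (later scans have already captured most expansion) and positive for volume and the island sign.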
Illustration of island sign. Axial noncontrast computed tomography (CT) images
of 4 patients with CT island sign. A, CT island sign in a patient with basal ganglia
hemorrhage. Note that there are 3 small scattered little hematomas (arrows),
each separate from the main hematoma. B, Putaminal intracerebral hemorrhage
with 3 small separate hematomas (arrowheads). Note that there are hypointense
areas between the 3 small hematomas and the main hematoma. C, Lobar
hematoma with 4 scattered separate hematomas (arrowheads). D, Large basal
ganglia hemorrhage with intraventricular extension. The hematoma consists of 4
bubble-like or sprout-like small hematomas (arrowheads) that connect with the
main hematoma and one separate small hematoma (arrow).
Illustration of differences between the
Barras shape scale and Li Qi’s island sign. A,
Barras scale category IV lobulated hematoma. Note
that irregular margin had a broad base, and the
border of the main hematoma was spike-like
(arrow). B, A lobulated hematoma that belongs to
Barras scale category V. Note that the hematoma
consisted of 4 spike-like projections (lobules). C, The
island sign consisted of one separate small island
(arrow) and 3 little islands (arrowheads) that connect
with the main hematoma. Note that the 3 small
hematomas were bubble-like or sprout-like
outpouching from the main hematoma. D, A large
hematoma with 4 bubble-like or sprout-like small
hematomas (arrowheads) all connected with the
main bleeding. Note that the large lobule (big arrow)
at the bottom of the main hematoma was not
considered an island.
How well do humans agree on the sign definitions?
Inter- and Intrarater Agreement of Spot
Sign and Noncontrast CT Markers for Early
Intracerebral Hemorrhage Expansion
Jawed Nawabi et al. J. Clin. Med. 2020, 9(4), 1020;
https://doi.org/10.3390/jcm9041020
(This article belongs to the Special Issue
Intracerebral Hemorrhage: Clinical and Neuroimaging Characteristics)
The aim of this study was to assess the inter- and
intrarater reliability of noncontrast CT (NCCT) markers
[Black Hole Sign (BH), Blend Sign (BS), Island Sign (IS),
and Hypodensities (HD)] and Spot Sign (SS) on CTA in
patients with spontaneous intracerebral hemorrhage
(ICH)
NCCT imaging findings and SS on CTA have good-to-
excellent inter- and intrarater reliabilities, with the
highest agreement for BH and SS.
Representative examples of
disagreed ratings of four
noncontrast computed tomographic
(NCCT) markers and Spot Sign (SS)
on CT angiography (CTA) for
intracerebral hemorrhage
expansion. (A) SS on CTA (white
arrow) mistaken for intraventricular
plexus calcification (black arrow)
(B). (C) Blend sign (white arrows)
mistaken for Fluid Sign. (D) Swirl
Sign mistaken for Hypodensities
(black arrow). (E) Hypodensities
(black arrow) mistaken for Swirl Sign
(F)
Radiomics
CAD: Computer-aided diagnosis (not design)
Rebranded as Radiomics →
From Handcrafted to Deep-Learning-
Based Cancer Radiomics: Challenges and
Opportunities
Parnian Afshar et al. (2019)
IEEE Signal Processing Magazine (Volume: 36, Issue: 4, July 2019)
https://doi.org/10.1109/MSP.2019.2900993
Radiomics, an emerging and relatively new research field,
refers to extracting semi-quantitative and/or
quantitative features from medical images with the goal
of developing predictive and/or prognostic models. In the
near future, it is expected to be a critical component for
integrating image-derived information used for personalized
treatment. The conventional radiomics workflow is typically
based on extracting predesigned features (also referred to as
handcrafted or engineered features) from a segmented
region of interest (ROI). Nevertheless, recent advancements
in deep learning have inspired trends toward deep-
learning-based radiomics (DLRs) (also referred to as
discovery radiomics).
The different categories
of handcrafted
features commonly
used within the context
of radiomics.
Extracting Deep-Learning-
Radiomics (DLR). The input to
the network can be the original
image, the segmented ROI, or a
combination of both. Either the
extracted radiomics features are
used throughout the rest of the
network, or an external model is
used to make the decision based
on radiomics features.
Reproducibility of traditional radiomic features #1
Reproducibility of CT Radiomic Features within
the Same Patient: Influence of Radiation Dose and
CT Reconstruction Settings
Mathias Meyer, James Ronald, Federica Vernuccio, Rendon C. Nelson, Juan Carlos
Ramirez-Giraldo, Justin Solomon, Bhavik N. Patel, Ehsan Samei, Daniele Marin
Radiology (1 Oct 2019)
https://doi.org/10.1148/radiol.2019190928
Results of recent phantom studies show that variation in CT acquisition
parameters and reconstruction techniques may make radiomic features
largely nonreproducible and of limited use for prognostic clinical studies.
Conclusion: Most radiomic features are highly affected by CT acquisition and
reconstruction settings, to the point of being nonreproducible. Selecting reproducible
radiomic features along with study-specific correction factors offers improved
clustering reproducibility.
Images in 63-year-old female study participant with
metastatic liver disease from colon cancer. CT images
reconstructed in the axial plane with (top row) 5.0 mm and
(bottom row) 3.0 mm section thickness. The texture distribution alters
between the two reconstruction algorithms with
direct effect on the quantitative texture radiomic features,
such as gray-level size zone matrix large area high
gray-level emphasis (LAHGLE) (5.0 mm LAHGLE =
4301732.0 vs 3.0 mm LAHGLE = 7089324.3) as
displayed in the lesion overlay images (middle column)
and the heatmap distributions (rightmost column). The
heat maps (rightmost column) display the difference of
original image and a convolution. Note how the heat map
distribution changes between the different section
thicknesses. The heat map was generated by using
MintLesion (version 3.4.4; MintMedical, Heidelberg,
Germany).
Reproducibility of traditional radiomic features #2
Reliability of CT-based texture features: Phantom
study.
Bino A. Varghese, Darryl Hwang, Steven Y. Cen, Joshua Levy, Derek Liu, Christopher
Lau, Marielena Rivas, Bhushan Desai, David J. Goodenough, Vinay A. Duddalwar
Journal of Applied Clinical Medical Physics (2019)
https://doi.org/10.1002/acm2.12666
Objective: To determine the intra-, inter- and test-retest variability of
CT-based texture analysis (CTTA) metrics.
Results: As expected, the robustness, repeatability and
reproducibility of CTTA metrics are variably sensitive to various
scanner (Philips Brilliance 64 CT, Toshiba Aquilion Prime 160 CT) and
scanning parameters. Entropy of Fast Fourier Transform-
based texture metrics was overall most reliable across the two
scanners and scanning conditions. Post-processing techniques
that reduce image noise while preserving the underlying edges
associated with true anatomy or pathology bring about significant
differences in radiomic reliability compared to when they were
not used.
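The metric singled out above, entropy of the FFT magnitude spectrum, can be sketched as follows; the histogram binning is one common implementation choice, and published radiomics toolkits differ in the details:

```python
import numpy as np

def fft_entropy(patch, bins=64):
    """Shannon entropy (bits) of the FFT magnitude spectrum of a 2-D patch.
    Binning the spectrum into a histogram before computing entropy is an
    assumption here; toolkit implementations vary."""
    spectrum = np.abs(np.fft.fft2(patch)).ravel()
    hist, _ = np.histogram(spectrum, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

rng = np.random.default_rng(0)
# Spatially correlated ("smooth") vs white-noise texture patches
smooth = rng.normal(size=(64, 64)).cumsum(axis=0).cumsum(axis=1)
noise = rng.normal(size=(64, 64))
# A few dominant low frequencies concentrate the smooth patch's spectrum
# into one histogram bin, so its spectral entropy is much lower.
print(fft_entropy(smooth), fft_entropy(noise))
```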
(Left) Texture phantom comprising three texture patterns. (Middle) Phantom placement for image
acquisition. (Right) Cross section of texture phantom patterns. (1), (2) and (3) are 3D printed ABS plastic
with fill levels 10%, 20%, and 40%, respectively. (Bk) is a homogeneous ABS material. (The window level is
−500 HU with a width of 1600 HU).
3.4 Effect of post-processing
techniques that reduce image noise
while preserving the underlying edges
associated with true anatomy or
pathology
By comparing the changes in robustness of
the CTTA metrics across the two scanners, we
observe that post-processing techniques that
reduce image noise while preserving the
underlying anatomical edges, for example
iDose levels (here 6 levels) on the Philips
scanner and Mild/Strong (here 2 levels) levels
on the Toshiba scanner, produce significant
differences in CTTA robustness compared to
the base setting (Fig. 3). Stronger noise
reduction techniques were associated
with a significant reduction in reliability
in the Philips scanner; however, the
opposite was observed on the Toshiba
scanner. In both cases, no noise reduction
techniques were used in the base setting.
Robustness assessment of the texture metrics due to
changes in reconstruction filters: iDose levels (Philips
scanner [a]) and changes in noise correction levels
(Mild or Strong) on the Toshiba scanner [b].
Reproducibility of traditional radiomic features #3
Radiomics of CT Features May Be
Nonreproducible and Redundant: Influence of CT
Acquisition Parameters
Roberto Berenguer, María del Rosario Pastor-Juan, Jesús Canales-Vázquez, Miguel
Castro-García, María Victoria Villas, Francisco Mansilla Legorburo, Sebastià Sabater
Radiology (24 April 2018)
https://doi.org/10.1148/radiol.2018172361
Materials and Methods Two phantoms were used to test radiomic
feature (RF) reproducibility by using test-retest analysis, by changing the CT
acquisition parameters (hereafter, intra-CT analysis), and by comparing five
different scanners with the same CT parameters (hereafter, inter-CT analysis).
Reproducible RFs were selected by using the concordance correlation
coefficient (as a measure of the agreement between variables) and the
coefficient of variation (defined as the ratio of the standard deviation to the
mean). Redundant features were grouped by using hierarchical cluster
analysis.
Conclusion: Many RFs were redundant and nonreproducible. If all the
CT parameters are fixed except field of view, tube voltage, and milliamperage,
then the information provided by the analyzed RFs can be summarized in
only 10 RFs (each representing a cluster) because of redundancy.
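The selection procedure described (concordance correlation coefficient as the agreement measure, coefficient of variation as the spread measure) can be sketched on test-retest feature values; the thresholds below are illustrative, not the paper's cut-offs:

```python
import numpy as np

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient between test and retest
    measurements of one radiomic feature."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

def is_reproducible(x, y, ccc_min=0.9, cv_max=0.1):
    """Illustrative selection rule: high agreement, low relative spread."""
    pooled = np.concatenate([x, y])
    cv = pooled.std() / abs(pooled.mean())
    return concordance_ccc(x, y) >= ccc_min and cv <= cv_max

rng = np.random.default_rng(1)
scan1 = rng.normal(100.0, 5.0, 50)              # feature at acquisition 1
rescan_good = scan1 + rng.normal(0.0, 0.5, 50)  # nearly identical retest
rescan_bad = rng.normal(100.0, 5.0, 50)         # unrelated re-measurement
print(is_reproducible(scan1, rescan_good), is_reproducible(scan1, rescan_bad))
```

Unlike plain Pearson correlation, the CCC also penalizes systematic shifts between the two acquisitions, which is why it suits test-retest screening.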
Graph shows cluster
dendrogram and representative
radiomics features (RFs). Red
boxes differentiate 10 extracted
clusters, which were selected by
height. Representative RFs of
each cluster were selected
based on highest concordance
correlation coefficient value of
test-retest analysis. 
Reproducibility of traditional radiomic features #4
Reproducibility test of radiomics using
network analysis and Wasserstein K-means
algorithm
Jung Hun Oh, Aditya P. Apte, Evangelia Katsoulakis, Nadeem Riaz, Vaios
Hatzoglou, Yao Yu, Jonathan E. Leeman, Usman Mahmood, Maryam
Pouryahya, Aditi Iyer, Amita Shukla-Dave, Allen R. Tannenbaum, Nancy Y. Lee,
Joseph O. Deasy
https://doi.org/10.1101/773168 (19 Sept 2019)
To construct robust and validated radiomic predictive models, the
development of a reliable method that can identify reproducible
radiomic features robust to varying image acquisition methods
and other scanner parameters should be preceded with rigorous
validation. We further propose a novel Wasserstein K-means
algorithm coupled with the optimal mass transport (OMT)
theorytoclustersamples.
Despite such great progress in radiomics in recent years, however, the
development of computational techniques to identify repeatable and
reproducible radiomic features remains challenging and relatively
underdeveloped. This has led many radiomic models built using one dataset to
be unsuccessful in subsequent external validation on
independent data [Virginia et al. 2018]. One of the reasons for these
failures is likely the susceptibility of radiomic
features to image reconstruction and acquisition
parameters. Since radiomic features are computed via multiple
tasks including imaging acquisition, segmentation, and feature
extraction, the selection of parameters present in each step may
affect the stability of the computed features. As such, prior to
model building, the development of radiomic features with high
repeatability and high reproducibility, as well as tools
that can identify such features, is urgently needed in
the field of radiomics.
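The optimal-transport idea behind the cited method can be illustrated in 1-D with scipy's wasserstein_distance; the nearest-medoid assignment below is a toy stand-in for the authors' Wasserstein K-means, not their algorithm, and the three sample distributions are invented:

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Three "patients", each summarised by a 1-D distribution of feature values
rng = np.random.default_rng(0)
samples = {
    "a": rng.normal(0.0, 1.0, 1000),
    "b": rng.normal(0.2, 1.0, 1000),  # close to "a" in transport distance
    "c": rng.normal(5.0, 1.0, 1000),  # far from both
}

# Pairwise 1-Wasserstein (earth mover's) distances between the empirical
# distributions; this is the metric the K-means variant clusters with
names = list(samples)
D = np.array([[wasserstein_distance(samples[i], samples[j]) for j in names]
              for i in names])
print(np.round(D, 2))

# Toy nearest-medoid step with "a" and "c" as medoids: "b" joins "a"
assign_b = "a" if D[1, 0] < D[1, 2] else "c"
print("b clusters with:", assign_b)
```

The transport distance compares whole distributions rather than summary statistics, which is what makes it attractive for grouping samples whose feature histograms shift under different scanners.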
CT Labels
“ICH CT Labels”, e.g. hematoma (primary injury)
and PHE (secondary injury)
Airton Leonardo de Oliveira Manoel (Feb 2020)
PHE – perihematomal edema
https://doi.org/10.1186/s13054-020-2749-2
Intraventricular extension of hemorrhage (IVH) might change ventricle shape,
making segmentation rather tricky, especially if you have trained your brain
models with non-pathological brains. Slice example from the CROMIS study at UCL.
Imaging features are time-dependent (from hours to long-term outcomes) #1
https://doi.org/10.1212/WNL.0b013e3182343387
https://doi.org/10.2176/nmc.ra.2016-0327
Advances in CT for prediction of hematoma expansion in acute intracerebral
hemorrhage. Thien J Huynh, Sean P Symons and Richard I Aviv
Division of Neuroradiology, Department of Medical Imaging, Sunnybrook Health Sciences and University of Toronto
https://www.openaccessjournals.com/articles/advances-in-ct-for-prediction-of-hematoma-expansion-in-acute-intracerebral-hemorrhage.html
Perihematomal Edema After Spontaneous
Intracerebral Hemorrhage (2019)
https://doi.org/10.1161/STROKEAHA.119.024965
(A) Example of hematoma and perihematoma edema regions of interest (ROIs). The ROIs were drawn on the
noncontrast computed tomography (CT) and transferred to perfusion maps. (B) Maps of cerebral blood flow
(CBF), cerebral blood volume (CBV), and time to peak of the impulse response curve (TMAX) from an ICH
ADAPT study patient randomized to a target systolic BP <150 mmHg. 10.1038/jcbfm.2015.36
Imaging features are time-dependent (from hours to long-term outcomes) #2
Intracerebral hemorrhage (ICH) growth predicts mortality
and functional outcome. We hypothesized that irregular
hematoma shape and density heterogeneity,
reflecting active, multifocal bleeding or a variable bleeding
time course, would predict ICH growth.
https://doi.org/10.1161/STROKEAHA.108.536888
A, Shape (left) and density (right) categorical scales
and (B) examples of homogeneous, regular ICH (left)
and heterogeneous, irregular ICH (right).
Absolute (A) and relative
(B) perihematomal edema for
decompressive craniotomy
treatment and control groups,
and corrected absolute (C)
and corrected relative (D)
perihematomal edema for the
treatment and control groups.
10.1371/journal.pone.0149169
Example of a CT scan demonstrating delineation of the region of PHE (outlined in green) and
ICH (outlined in red). The oedema extension distance (EED) is the
difference between the radius (r_e) of a sphere (shown in green) equal to the combined volume of PHE and ICH and the
radius of a sphere (shown in red) equal to the volume of the ICH alone (r_h).
Oedema extension distance in intracerebral haemorrhage: Association with baseline characteristics and long-term outcome
http://dx.doi.org/10.1177/2396987319848203
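The EED definition in the caption translates directly to code (1 ml = 1 cm³; the example volumes are invented):

```python
import math

def sphere_radius(volume_ml):
    """Radius (cm) of a sphere with the given volume (1 ml = 1 cm^3)."""
    return (3.0 * volume_ml / (4.0 * math.pi)) ** (1.0 / 3.0)

def oedema_extension_distance(ich_ml, phe_ml):
    """EED = r_e - r_h: the equivalent-sphere radius of the combined
    PHE + ICH volume minus that of the ICH alone."""
    return sphere_radius(ich_ml + phe_ml) - sphere_radius(ich_ml)

# Example with invented volumes: a 38 ml hematoma with 20 ml of PHE
print(round(oedema_extension_distance(38.0, 20.0), 2), "cm")
```

Expressing oedema as a distance rather than a volume ratio makes it less dependent on the size of the underlying hematoma, which is the motivation given for EED.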
Imaging features are time-dependent (from hours to long-term outcomes) #3
Intraventricular Hemorrhage Growth:
Definition, Prevalence and Association with
Hematoma Expansion and Prognosis
Qi Li et al. (Neurocritical Care, 2020)
https://doi.org/10.1007/s12028-020-00958-8
The objective of this study is to propose a definition of
intraventricular hemorrhage (IVH) growth and to
investigate whether IVH growth is associated with ICH
expansion and functional outcome. IVH growth is not
uncommon and independently predicts poor outcome
in ICH patients. It may serve as a promising therapeutic
target for intervention.
Illustration of IVH growth on
noncontrast CT.
a Baseline CT scan reveals a
putaminal hematoma without
concurrent intraventricular
hemorrhage.
b Follow-up CT scan performed
11 h later shows enlarged
hematoma and intraventricular
extension of parenchymal
hemorrhage.
c Admission CT scan shows a
basal ganglia hemorrhage with
ventricular extension of
hematoma.
d Follow-up CT scan performed
24 h after baseline CT scan
reveals the significant increase in
ventricular hematoma
volume. CT computed
tomography, IVH intraventricular
hemorrhage
Distribution of modified Rankin scale in patients with or
without IVH growth. The ordinal analysis showed a
significant unfavorable shift in the distribution of scores on the
modified Rankin scale with IVH growth (pooled odds ratio for shift to
higher modified Rankin score)
SegmentationLabels? WM/GMcontrastabitlowinCTcomparedtoMRI
WhiteMatterandGray MatterSegmentationin4DComputed
Tomography
RashindraManniesing, Marcel T. H. Oei, Luuk J. Oostveen, Jaime Melendez , Ewoud J. Smit, Bram Platel , ClaraI. Sánchez,
Frederick J. A. Meijer , Mathias Prokop & Bram van Ginneken.
SciRep7,119(2017) https://doi.org/10.1038/s41598-017-00239-z -Cited by7 
SegmentationLabels? WM/GM supervise with MRI?
Whole Brain Segmentation and Labeling from CT Using Synthetic MR Images
Can Zhao, Aaron Carass, Junghoon Lee, Yufan He, Jerry L. Prince
International Workshop on Machine Learning in Medical Imaging, MLMI 2017: Machine Learning in Medical Imaging, pp 291-298
https://doi.org/10.1007/978-3-319-67389-9_34
To achieve whole-brain segmentation—i.e., classifying tissues within and immediately around the brain as gray matter (GM), white matter (WM), and cerebrospinal fluid—magnetic resonance (MR) imaging is nearly always used. However, there are many clinical scenarios where computed tomography (CT) is the only modality that is acquired and yet whole brain segmentation (and labeling) is desired. This is a very challenging task, primarily because CT has poor soft tissue contrast; very few segmentation methods have been reported to date and there are no reports on automatic labeling. This paper presents a whole brain segmentation and labeling method for non-contrast CT images that first uses a fully convolutional network (FCN) to synthesize an MR image from a CT image and then uses the synthetic MR image in a standard pipeline for whole brain segmentation and labeling.
In summary, we have used a modified U-net to synthesize T1-w images from CT, and then directly segmented the synthetic T1-w using either MALP-EM or a multi-atlas label fusion scheme. Our results show that using synthetic MR can significantly improve the segmentation over using the CT image directly. This is the first paper to provide GM anatomical labels on a CT neuroimage. Also, despite previous assertions that CT-to-MR synthesis is impossible with CNNs, we show that it is not only possible but it can be done with sufficient quality to open up new clinical and scientific opportunities in neuroimaging.
For one subject, we show the (a) input CT image, the (b) output synthetic T1-w, and the (c) ground truth T1-w image. (d) is the dynamic range of (a). Shown in (e) and (f) are the MALP-EM segmentations of the synthetic and ground truth T1-w images, respectively.
Segmentation Labels? Propagate from paired MRI?
The goal of this project is to develop an algorithm for the segmentation and separation of the cerebral hemispheres, the cerebellum and brainstem in non-contrast CT images. © 2019 Department of Radiology and Nuclear Medicine, Radboud university medical center, Nijmegen
http://www.diagnijmegen.nl/index.php/Automatic_cerebral_hemisphere,_cerebellum_and_brainstem_segmentation_in_non-contrast_CT
GIF: UNIFIED BRAIN SEGMENTATION AND PARCELLATION
The GIF algorithm is an online brain extraction, tissue segmentation and parcellation tool for T1-weighted images. GIF, which stands for Geodesic Information Flows, will be deployed as part of NiftySeg. You can download the parcellation labels in xml from here (v2, v3) and in excel from here (v2, v3).
http://niftyweb.cs.ucl.ac.uk/
SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth
https://arxiv.org/abs/1810.06498
Useful in general to have CT/MRI pairs?
Brain MRI with Quantitative Susceptibility Mapping: Relationship to CT Attenuation Values
https://doi.org/10.1148/radiol.2019182934
To assess the relationship among metal concentration, CT attenuation values, and magnetic susceptibility in paramagnetic and diamagnetic phantoms, and the relationship between CT attenuation values and susceptibility in brain structures that have paramagnetic or diamagnetic properties.
CT Segmentation Labels vs MRI labels
Loss Switching. In segmentation tasks, the Dice score is often reported as the performance metric. A loss function that directly correlates with the Dice score is the weighted Dice loss. Based on our empirical observation, the network trained with only weighted Dice loss was unable to escape local optima and did not converge. Also, empirically it was seen that the stability of the model, in terms of convergence, decreased as the number of classes and class imbalance increased. We found that weighted cross-entropy loss, on the other hand, did not get stuck in any local optima and learned reasonably good segmentations. As the model's performance with regard to Dice score flattened out, we switched from weighted cross-entropy to weighted Dice loss, after which the model's performance further increased by 3-4% in terms of average Dice score. This loss switching mechanism, therefore, is found to be useful to further improve the performance of the model.
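The two losses involved in this switching scheme can be sketched in a few lines of numpy (forward computation only; the exact weighting and formulation here are assumptions for illustration, not the authors' code):

```python
import numpy as np

def weighted_dice_loss(probs, onehot, class_weights, eps=1e-6):
    """Soft Dice loss with per-class weights.
    probs, onehot: (n_voxels, n_classes) arrays of predicted probabilities
    and one-hot ground truth."""
    w = np.asarray(class_weights, dtype=float)
    inter = (probs * onehot).sum(axis=0)
    denom = probs.sum(axis=0) + onehot.sum(axis=0)
    dice_per_class = (2.0 * inter + eps) / (denom + eps)
    return 1.0 - float((w * dice_per_class).sum() / w.sum())

def weighted_cross_entropy(probs, onehot, class_weights, eps=1e-12):
    """Per-voxel cross-entropy with per-class weights."""
    w = np.asarray(class_weights, dtype=float)
    return float(-(onehot * np.log(probs + eps) * w).sum() / probs.shape[0])
```

Training would minimize `weighted_cross_entropy` first and only switch to `weighted_dice_loss` once the Dice score plateaus, as described above.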
On brain atlas choice and automatic segmentation methods: a comparison of MAPER & FreeSurfer using three atlas databases https://doi.org/10.1038/s41598-020-57951-6
DARTS: DenseUnet-based Automatic Rapid Tool for brain Segmentation. Aakash Kaku, Chaitra V. Hegde, Jeffrey Huang, Sohae Chung, Xiuyuan Wang, Matthew Young, Alireza Radmanesh, Yvonne W. Lui, Narges Razavian
(Submitted on 13 Nov 2019) https://arxiv.org/abs/1911.05567
Weak labels for CT Segmentation
Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection
Samuel W. Remedios, Zihao Wu, Camilo Bermudez, Cailey I. Kerley, Snehashis Roy, Mayur B. Patel, John A. Butman, Bennett A. Landman, Dzung L. Pham (Submitted on 13 Nov 2019) https://arxiv.org/abs/1911.05650
https://github.com/sremedios/multiple_instance_learning
Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined to contain multiple instances. MIL is gaining traction for learning from weak labels but has not been widely applied to 3D medical imaging.
MIL is well-suited to clinical CT acquisitions since (1) the highly anisotropic voxels hinder application of traditional 3D networks and (2) patch-based networks have limited ability to learn whole volume labels. In this work, we apply MIL with a deep convolutional neural network to identify whether clinical CT head image volumes possess one or more large hemorrhages (>20 cm³), resulting in a learned 2D model without the need for 2D slice annotations.
Individual image volumes are considered separate bags, and the slices in each volume are instances. Such a framework sets the stage for incorporating information obtained in clinical reports to help train a 2D segmentation approach. Within this context, we evaluate the data requirements to enable generalization of MIL by varying the amount of training data. Our results show that a training size of at least 400 patient image volumes was needed to achieve accurate per-slice hemorrhage detection.
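The bag/instance relationship can be illustrated with a toy max-pooling MIL aggregator (a generic sketch, not the authors' implementation): slices are instances, a volume is a bag, and the bag is positive if any slice is.

```python
import numpy as np

def bag_probability(slice_probs):
    """Max-pooling MIL aggregation: a volume (bag) is positive for hemorrhage
    if at least one of its slices (instances) is positive."""
    return float(np.max(slice_probs))

def bag_label(slice_probs, thresh=0.5):
    """Volume-level decision from per-slice probabilities."""
    return int(bag_probability(slice_probs) >= thresh)
```

During training, gradients only flow to the slice that fired the maximum, which is what lets a 2D slice model be trained from volume-level labels alone.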
Weak Label → Dense modeling
Improving RetinaNet for CT Lesion Detection with Dense Masks from Weak RECIST Labels
Martin Zlocha, Qi Dou, and Ben Glocker
https://arxiv.org/pdf/1906.02283v1.pdf
https://github.com/fizyr/keras-retinanet
https://github.com/martinzlocha/anchor-optimization
Accurate, automated lesion detection in Computed Tomography (CT) is an important yet challenging task due to the large variation of lesion types, sizes, locations and appearances. Recent work on CT lesion detection employs two-stage region proposal based methods trained with centroid or bounding-box annotations. We propose a highly accurate and efficient one-stage lesion detector, by re-designing a RetinaNet to meet the particular challenges in medical imaging. Specifically, we optimize the anchor configurations using a differential evolution search algorithm.
Interestingly, we could show that by task-specific optimization of an out-of-the-box detector we already achieve results superior to the best reported in the literature. Exploitation of clinically available RECIST annotations bears great promise as large amounts of such training data should be available in many hospitals. With a sensitivity of about 91% at 4 FPs per image, our system may reach clinical readiness. Future work will focus on new applications such as whole-body MRI in oncology.
Segmentation Labels? Synthetic CT from MRI
Hybrid Generative Adversarial Networks for Deep MR to CT Synthesis Using Unpaired Data
Guodong Zeng and Guoyan Zheng (MICCAI 2019)
https://doi.org/10.1007/978-3-030-32251-9_83
2D cycle-consistent Generative Adversarial Networks (2D-cGAN) have been explored before for generating synthetic CTs from MR images, but the results are not satisfactory due to spatial inconsistency. There have been attempts to develop a 3D cycle GAN (3D-cGAN) for image translation, but its training requires large amounts of data which may not always be available.
In this paper, we introduce two novel mechanisms to address the above mentioned problems. First, we introduce a hybrid GAN (hGAN) consisting of a 3D generator network and a 2D discriminator network for deep MR to CT synthesis using unpaired data. We use 3D fully convolutional networks to form the generator, which can better model the 3D spatial information and thus could solve the discontinuity problem across slices.
Second, we take the results generated from the 2D-cGAN as weak labels, which will be used together with an adversarial training strategy to encourage the generator's 3D output to look like a stack of real CT slices as much as possible.
Segmentation Labels: Vascular segmentation
Robust Segmentation of the Full Cerebral Vasculature in 4D CT of Suspected Stroke Patients
Midas Meijs, Ajay Patel, Sil C. van de Leemput, Mathias Prokop, Ewoud J. van Dijk, Frank-Erik de Leeuw, Frederick J. A. Meijer, Bram van Ginneken & Rashindra Manniesing
Scientific Reports volume 7, Article number: 15622 (2017)
https://doi.org/10.1038/s41598-017-15617-w
A robust method is presented for the segmentation of the full cerebral vasculature in 4-dimensional (4D) computed tomography (CT).
Temporal information, in combination with contrast agent, is important for vessel segmentation as is reflected by the WTV feature. The added value of 4D CT with improved evaluation of intracranial hemodynamics comes at a cost, as a 4D CT protocol is associated with a higher radiation dose. Although 4D CT imaging is not common practice, applications of 4D CT are expanding. We expect 4D CT to become a single acquisition for stroke workup as it contains both noncontrast CT and CTA information. These modalities might be reconstructed from a 4D CT acquisition, resulting in a reduction of acquisitions and radiation dose. In addition, studies suggest that 4D CT can be acquired at half the dose of the standard clinical protocol, further reducing the radiation dose for the patient.
Coronal view of a temporal maximum intensity projection visualizing part of the middle cerebral artery including the M1, M2 and M3 segments. Intensity differences from proximal to distal in a nonaffected vessel can reach up to 450 HU and higher. Vessel occlusions, vessel wall calcifications, collateral flow, clip and stent artifacts have a large influence on the continuity of intensity values along the vessel.
Examples of difficulties encountered in vessel segmentation. From left to right: skull base region, arteries and veins surrounded by hyperdense bony structures in their course through the skull base, which renders difficulties in separating them from each other; patient with coils placed at the anterior communicating artery; patient with ventricular shunt causing a linear artifact in the left cerebral hemisphere.
CTA Segmentation Example with multi-task learning
Deep Distance Transform for Tubular Structure Segmentation in CT Scans
Yan Wang, Xu Wei, Fengze Liu, Jieneng Chen, Yuyin Zhou, Wei Shen, Elliot K. Fishman, Alan L. Yuille (Submitted on 6 Dec 2019)
https://arxiv.org/abs/1912.03383
Tubular structure segmentation in medical images, e.g., segmenting vessels in CT scans, serves as a vital step in the use of computers to aid in screening early stages of related diseases. But automatic tubular structure segmentation in CT scans is a challenging problem, due to issues such as poor contrast, noise and complicated background.
A tubular structure usually has a cylinder-like shape which can be well represented by its skeleton and cross-sectional radii (scales). Inspired by this, we propose a geometry-aware tubular structure segmentation method, Deep Distance Transform (DDT), which combines intuitions from the classical distance transform for skeletonization and modern deep segmentation networks. DDT first learns a multi-task network to predict a segmentation mask for a tubular structure and a distance map.
Each value in the map represents the distance from each tubular structure voxel to the tubular structure surface. Then the segmentation mask is refined by leveraging the shape prior reconstructed from the distance map.
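The distance map that DDT regresses can be illustrated with a brute-force Euclidean distance transform on a binary mask (a naive numpy sketch; in practice `scipy.ndimage.distance_transform_edt` computes this efficiently):

```python
import numpy as np

def distance_transform(mask):
    """For every foreground voxel, the Euclidean distance to the nearest
    background voxel, i.e. to the structure surface. Brute force, O(n*m)."""
    mask = np.asarray(mask)
    background = np.argwhere(mask == 0)
    out = np.zeros(mask.shape, dtype=float)
    for idx in np.argwhere(mask != 0):
        out[tuple(idx)] = np.linalg.norm(background - idx, axis=1).min()
    return out
```

For a 5x5 mask with a 3x3 foreground square, the center voxel gets distance 2 and the foreground border voxels distance 1; along a vessel this per-voxel distance-to-surface is exactly the cross-sectional radius signal the skeleton carries.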
Segmentation Labels? 4D for vessels, and multi-frame reconstruction?
Multiclass Brain Tissue Segmentation in 4D CT Using Convolutional Neural Networks
Sil C. Van De Leemput, Midas Meijs, Ajay Patel, Frederick J. A. Meijer, Bram Van Ginneken, Rashindra Manniesing
IEEE Access (Volume: 7, 11 April 2019)
https://doi.org/10.1109/ACCESS.2019.2910348
4D CT imaging has a great potential for use in stroke workup. A fully convolutional neural network (CNN) for 3D multiclass segmentation in 4D CT is presented, which can be trained end-to-end from sparse 2D annotations. The CNN was trained and validated on 42 4D CT acquisitions of the brain of patients with suspicion of acute ischemic stroke. White matter, gray matter, cerebrospinal fluid, and vessels were annotated by two trained observers.
The dataset used for the evaluation consisted exclusively of normal appearing brain tissues without pathology or foreign objects, which are seen in everyday clinical practice. The data was collected as such to focus on testing the feasibility of segmentation of WM/GM/CSF and vessels in 4D CT using deep learning, which is traditionally the domain of MR imaging. This implies that the method likely must be trained on cases with pathology or foreign objects and at least be evaluated on such cases, before it can be used in practice. However, we argue that our method provides a valuable first step towards this goal.
Example axial cross section for the derived images of a single 4D CT image used for annotation. Left: the temporal average for WM, GM, and CSF segmentation. Right: the temporal variance for vessel segmentation.
Three cross sections (axial, coronal, sagittal) of an exemplar 4D CT case. Blue areas were selected for annotation by the observers; other areas were not annotated. Brain mask from skull stripping.
Segmentation Labels? Musculoskeletal CT segmentation #1
Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis
Hyunkwang Lee & Fabian M. Troschel & Shahein Tajmir & Georg Fuchs & Julia Mario & Florian J. Fintelmann & Synho Do, Department of Radiology, Massachusetts General Hospital
J Digit Imaging (2017) http://doi.org/10.1007/s10278-017-9988-z
The muscle segmentation AI can be enhanced further by using the original 12-bit image resolution with 4096 gray levels, which could enable the network to learn other significant determinants which could be missed in the lower resolution.
In addition, an exciting target would be adipose tissue segmentation. Adipose tissue segmentation is relatively straightforward since fat can be thresholded within a unique HU range [−190 to −30]. Prior studies proposed creating an outer muscle boundary to segment HU-thresholded adipose tissue into visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT).
However, precise boundary generation is dependent on accurate muscle segmentation. By combining our muscle segmentation network with a subsequent adipose tissue thresholding system, we could quickly and accurately provide VAT and SAT values in addition to muscle CSA. Visceral adipose tissue has been implicated in cardiovascular outcomes and metabolic syndrome, and accurate fat segmentation would increase the utility of our system beyond cancer prognostication. Ultimately, our system should be extended to whole-body volumetric analysis rather than axial CSA, providing rapid and accurate characterization of body morphometric parameters.
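The HU-thresholding step described above is a one-liner in code (the helper name is illustrative; only the [−190, −30] HU range comes from the quoted text):

```python
import numpy as np

def adipose_mask(ct_hu, lo=-190, hi=-30):
    """Binary fat mask: adipose tissue falls in a characteristic HU range."""
    ct_hu = np.asarray(ct_hu)
    return (ct_hu >= lo) & (ct_hu <= hi)
```

Combined with a muscle-boundary segmentation, voxels of this mask inside the boundary would be counted as VAT and those outside as SAT.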
Segmentation Labels? Musculoskeletal CT segmentation #2
Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalization of a Musculoskeletal Model
Yuta Hiasa, Yoshito Otake, Masaki Takao, Takeshi Ogawa, Nobuhiko Sugano, and Yoshinobu Sato
https://arxiv.org/abs/1907.08915 (21 July 2019)
We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label.
We evaluated validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
Phantoms for Head CT
CT/MRI/PET Phantom from Bristol for Alzheimer Neuroimaging
Creation of an anthropomorphic CT head phantom for verification of image segmentation. Medical Physics (11 March 2020)
https://doi.org/10.1002/mp.14127
Robin B. Holmes, Ian S. Negus, Sophie J. Wiltshire, Gareth C. Thorne, Peter Young, The Alzheimer's Disease Neuroimaging Initiative
Department of Medical Physics and Bioengineering, University Hospitals Bristol NHS Foundation Trust, Bristol, BS2 8HW, United Kingdom
“Accuracy of CT segmentation will depend, to some extent, on the ability of CT images to accurately depict the structures of the head. This in turn will depend on the scanner used and the exposure and reconstruction factors selected. The delineation of soft tissue structures will depend on material contrast, edge resolution and image noise, which are in turn affected by the peak tube potential (kVp), filtration, tube current (mA), rotation time, reconstructed slice width and the reconstruction algorithm, including iterative methods and any other post-acquisition image processing.
The limitation of the phantoms presented in these (previous) studies is that they do not allow for complex nested structures with multiple material properties, as would be required to simulate the brain. ... The effect of neuroimaging on clinical confidence is not an area that has been investigated rigorously, the effects of analyses even less so, e.g. Motara et al. 2017; Boelaarts et al. 2016. The literature appears to concentrate more on novel methods rather than demonstrating the usefulness of existing ones.”
This work aims to use 3D printing to create a realistic anthropomorphic phantom representing the CT properties of a normal human brain and skull. Properly developed, this type of phantom will allow the optimization and validation of CT segmentation across different scanners and disease states. If sufficient realism can be attained with the phantom, imaging the resulting phantom on different scanners and using different acquisition parameters will enable the validation of the entire processing chain in the proposed clinical implementation of CT-VBM. ... It may well be possible to use phantoms to measure parameters that could be used as exclusion criteria in the clinical use of CT analyses, thereby increasing sensitivity, specificity and clinical confidence. It would be relatively straightforward to create multiple phantoms of the same subject with progressive atrophy; the atrophy could be simulated from a ‘base’ scan or by the assessment of multiple patient scans from the ADNI database.
3D-printed brain (left) and the completed phantom after coating with plaster of Paris (right).
Comparison of the source MRI (column 1) and phantom scan C (120 kV, 300 mAs) for scanner 1 (column 2) and scanner 2 (column 3) with an 80 kV acquisition on scanner 2 (column ). The three rows depict different slices at different levels in the head/phantom. As the printer was only capable of printing 3 different types of plastic, no non-brain structures – such as the eyes or skull – were printed. CT scans have 60 HU subtraction and are displayed with a window level of 30 HU, window width 90 HU. Representative ROIs used for determination of the mean HU for each tissue type are shown in red.
see also “Physical imaging phantoms for simulation of tumor heterogeneity in PET, CT, and MRI” https://doi.org/10.1002/mp.14045
CT artifacts to simulate for intracerebral hemorrhage (ICH) analysis
Starburst/streak artifact from dense materials (metal, teeth). Make two phantoms (one with metal encased, and the other without)? Or have insertable dense materials?
http://www.neuroradiologycases.com/2011/10/streak-artifacts.html
Motion artifacts. Have a motor moving the phantom so you would exactly know the “blur kernel”; would you benefit from fiducials on the phantom? Metal motor itself causing artifacts to the image?
https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html
Calcifications. Useful especially for dual-energy CT simulation and ‘virtual noncalcium image’.
https://doi.org/10.1093/neuros/nyaa029
ICH (i.e. blood). How realistic can you make this? Play with infill density/pattern to allow injection of blood-like material into the phantom? ICH shape very random.
see e.g. Chinda et al. 2018 http://dx.doi.org/10.1136/bmjopen-2017-020260
Beam hardening, i.e. attenuation of signal in a “skull pocket” -> the phantom would benefit from bone-like encasing.
e.g. http://doi.org/10.13140/RG.2.1.2575.3122
see e.g. Raslau et al. 2016 https://doi.org/10.3174/ng.2160146
CT Extra: the texture “radiomics story”, and with fully deep end-to-end networks?
Reliability of CT-based texture features: Phantom study
Bino A. Varghese, Darryl Hwang, Steven Y. Cen, Joshua Levy, Derek Liu, Christopher Lau, Marielena Rivas, Bhushan Desai, David J. Goodenough, Vinay A. Duddalwar
Journal of Applied Clinical Medical Physics (2019)
https://doi.org/10.1002/acm2.12666 - Cited by 1 - Related articles
Objective: To determine the intra-, inter- and test-retest variability of CT-based texture analysis (CTTA) metrics.
Results: As expected, the robustness, repeatability and reproducibility of CTTA metrics are variably sensitive to various scanner (Philips Brilliance 64 CT, Toshiba Aquilion Prime 160 CT) and scanning parameters. Entropy of Fast Fourier Transform-based texture metrics was overall most reliable across the two scanners and scanning conditions. Post-processing techniques that reduce image noise while preserving the underlying edges associated with true anatomy or pathology bring about significant differences in radiomic reliability compared to when they were not used.
(Left) Texture phantom comprising three texture patterns. (Middle) Phantom placement for image acquisition. (Right) Cross section of texture phantom patterns. (1), (2) and (3) are 3D printed ABS plastic with fill levels 10%, 20%, and 40%, respectively. (Bk) is a homogeneous ABS material. (The window level is −500 HU with a width of 1600 HU).
3.4 Effect of post-processing techniques that reduce image noise while preserving the underlying edges associated with true anatomy or pathology
By comparing the changes in robustness of the CTTA metrics across the two scanners, we observe that post-processing techniques that reduce image noise while preserving the underlying anatomical edges, for example iDose levels (here 6 levels) on the Philips scanner and Mild/Strong levels (here 2 levels) on the Toshiba scanner, produce significant differences in CTTA robustness compared to the base setting (Fig. 3). Stronger noise reduction techniques were associated with a significant reduction in reliability on the Philips scanner; however, the opposite was observed on the Toshiba scanner. In both cases, no noise reduction techniques were used in the base setting.
CT Phantom Study for deep learning-based reconstruction
Deep Learning Reconstruction at CT: Phantom Study of the Image Characteristics. Toru Higaki et al. Academic Radiology, Volume 27, Issue 1, January 2020, Pages 82-87
https://doi.org/10.1016/j.acra.2019.09.008
Noise, commonly encountered on computed tomography (CT) images, can impact diagnostic accuracy. To reduce the image noise, we developed a deep-learning reconstruction (DLR) method that integrates deep convolutional neural networks into image reconstruction. In this phantom study, we compared the image noise characteristics, spatial resolution, and task-based detectability on DLR images and images reconstructed with other state-of-the-art techniques.
On images reconstructed with DLR, the noise was lower than on images subjected to other reconstructions, especially at low radiation dose settings. Noise power spectrum measurements also showed that the noise amplitude was lower, especially for low-frequency components, on DLR images. Based on the MTF, spatial resolution was higher on the model-based iterative reconstruction image than the DLR image; however, for lower-contrast objects, the MTF on DLR images was comparable to images reconstructed with other methods. The machine observer study showed that at reduced radiation-dose settings, DLR yielded the best detectability.
Phantom images scanned at 2.5 mGy. The image noise is lowest on the DLR image, the texture is preserved, and the object boundary is sharper than on the other images.
Dual-Energy CT: Nice for CT as well, with the calcium separation
Optimising dual-energy CT scan parameters for virtual non-calcium imaging of the bone marrow: a phantom study
https://doi.org/10.1186/s41747-019-0125-2
Effects of Patient Size and Radiation Dose on Iodine Quantification in Dual-Source Dual-Energy CT https://doi.org/10.1016/j.acra.2019.12.027
Figure 1. A cross-section CT image of the medium-sized phantom with eight iodine inserts. The number above each insert indicates its iodine concentration in mg/ml.
Figure 6. The 80 kVp images from the DECT scan of a 32-cm diameter CTDI phantom with different combinations of effective mAs and rotation time: (a) 53 mAs and 0.5 s, (b) 106 mAs, 1.0 s, (c) 106 mAs, 0.5 s, and (d) 530 mAs, 0.5 s. A narrow window of 200 HU is used to show the bias in the CT number. Four circular ROIs of 1.6 cm diameter are shown in panel (d), at distances of 3.4, 6.7, 10, and 13.3 cm from the center.
Remember that CT has non-medical uses as well, and you can have a look at that literature if you are interested.
Measuring Identification and Quantification Errors in Spectral CT Material Decomposition https://doi.org/10.3390/app8030467
(a) Spectroscopic phantom with three 6 mm diameter hydroxyapatite calibration rods (54.3, 211.7 and 808.5 mg/mL) and 6 mm diameter vials of gadolinium (1, 2, 4, 8 mg/mL), oil (canola oil) and distilled water; (b) CT image of the phantom.
CT-specific preprocessing before the more “general computer vision techniques”
“The Tech Stack” from here
CT Volumes can be both anisotropic and isotropic (well, practically always anisotropic, and they are resampled to be isotropic)
Brain atlas fusion from high-thickness diagnostic magnetic resonance images by learning-based super-resolution. Zhang et al. (2017) https://doi.org/10.1016/j.patcog.2016.09.019 - Cited by 12
ANISOTROPIC VOLUME: “staircased” volume due to low z-resolution
ISOTROPIC VOLUME: a lot smoother volume reconstruction
Lego corgi vs. corgi (Reddit): same “as a dog”
Staircasing Example: when z-resolution is too coarse
Co-registration of BOLD activation area on 3D brain image (Courtesy Siemens) http://mriquestions.com/registrationnormalization.html
UCL Data
https://doi.org/10.1016/j.jneumeth.2016.03.001
Get rid of background and “non-brain”: cushion contours and the plastic “helmet”; head mask vs. brain mask
8-bit mapping of the “int13” input (1 sign bit + 12-bit intensity), with [−1024, 3071] HU clipping: −100 to 100 HU stays linear between these values, so nothing is compressed and lost there, while the remaining 55 values are used for the outside range that is not as relevant for brain.
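Such a piecewise-linear 8-bit mapping can be sketched as follows (the exact code allocation, 28 + 201 + 27 = 256 values, is an assumption consistent with the “55 values outside” description):

```python
import numpy as np

def hu_to_uint8(hu):
    """Map clipped HU to 8 bits: [-100, 100] HU one-to-one onto codes 28..228,
    the remaining 55 codes spread linearly over the two tails."""
    hu = np.clip(np.asarray(hu, dtype=float), -1024, 3071)
    out = np.empty_like(hu)
    lo, hi = hu < -100, hu > 100
    mid = ~(lo | hi)
    out[lo] = (hu[lo] + 1024) / (1024 - 100) * 27       # codes 0..27
    out[mid] = 28 + (hu[mid] + 100)                     # codes 28..228, lossless
    out[hi] = 229 + (hu[hi] - 100) / (3071 - 100) * 26  # codes 229..255
    return np.round(out).astype(np.uint8)
```

The diagnostically relevant brain range keeps a one-to-one HU-to-code mapping, while bone and air are compressed into the tails.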
CT Preprocessing: Clip HU units, use NIfTI, and avoid bias field
Recommendations for Processing Head CT Data
John Muschelli (2019)
https://doi.org/10.3389/fninf.2019.00061
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States
Many different general 3D medical imaging formats exist, such as ANALYZE, NIfTI, NRRD, and MNC. We recommend the NIfTI format (e.g. https://github.com/rordenlab/dcm2niix), as it can be read by nearly all medical imaging platforms, has been widely used, has a format standard, can be stored in a compressed format, and is how much of the data is released online.
Once converted to NIfTI format, one should ensure the scale of the data. Most CT data is between −1024 and 3071 Hounsfield Units (HU). Values less than −1024 HU are commonly found due to areas of the image outside the field of view that were not actually imaged. One first processing step would be to Winsorize the data (clip the values) to the [−1024, 3071] range. After this step, the scl_slope and scl_inter elements of the NIfTI header should be set to 1 and 0, respectively, to ensure no data rescaling is done in other software. Though HU is the standard format used in CT analysis, negative HU values may cause issues with standard imaging pipelines built for MRI, which typically have positive values. Rorden (CITE) proposed a lossless transformation, called Cormack units, which have a minimum value of 0. The goal of the transformation is to increase the range of the data that is usually of interest, from −100 to 100 HU, and is implemented in the Clinical toolbox. Most analyses are done using HU, however.
Though CT data has no coil or assumed bias field as in MRI, one can test whether trying to harmonize the data spatially with one of these correction procedures improves the performance of a method. We do not recommend this procedure generally, as it may reduce contrast between areas of interest, such as hemorrhages in the brain, but it has been used to improve segmentation (Cauley et al., 2018). We would like to discuss potential methods and CT-specific issues.
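The Winsorization step recommended above is essentially a clip (pure numpy sketch; in a NIfTI pipeline one would then also reset scl_slope/scl_inter in the header, e.g. via nibabel):

```python
import numpy as np

def winsorize_hu(volume, lo=-1024, hi=3071):
    """Clip CT intensities to the valid Hounsfield range [-1024, 3071];
    values below -1024 HU typically come from outside the field of view."""
    return np.clip(volume, lo, hi)
```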
http://neurovascularmedicine.com/imagingct.php
https://www.slideshare.net/drtarungoyal/basic-principle-of-ct-and-ct-generations-122053336
Optimizing the HU window instead of using the full HU range #1
Practical Window Setting Optimization for Medical Image Deep Learning
Hyunkwang Lee, Myeongchan Kim, Synho Do (Harvard / Mass General)
(Submitted on 3 Dec 2018)
https://arxiv.org/abs/1812.00572v1
https://github.com/suryachintu/RSNA-Intracranial-Hemorrhage-Detection
https://github.com/MGH-LMIC/windows_optimization (Keras)
The deep learning community has to date neglected window
display settings - a key feature of clinical CT interpretation and
opportunity for additional optimization. Here we propose a window
setting optimization (WSO) module that is fully trainable with
convolutional neural networks (CNNs) to find optimal window
settingsfor clinicalperformance.
Our approach was inspired by the method commonly used by
practicing radiologists to interpret CT images by adjusting window
settings to increase the visualization of certain pathologies. Our
approach provides optimal window ranges to enhance the
conspicuity of abnormalities, and was used to enable
performance enhancement for intracranial hemorrhage and urinary
stonedetection.
On each task, the WSO model outperformed models trained
over the full range of Hounsfield unit values in CT images,
as well as images windowed with pre-defined settings. The WSO
module can be readily applied to any analysis of CT images, and can
befurther generalizedtotasksonother medicalimaging modalities.
Our WSO models can be further optimized by investigating the effects of the number of input
image channels, 𝜖 and U, on the performance of the target application. Additionally, we stress that the WSO-based approach described here is not specific to abnormality classification on CT images, but rather generalizable to various image interpretation tasks on a variety of medical imaging modalities.
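The windowing that WSO learns has a simple closed form; a minimal numpy sketch of a fixed linear window plus the sigmoid re-parameterization the paper builds on (function names are our own, and the slope/bias derivation is an illustration of the idea rather than the authors' exact code):

```python
import numpy as np

def window_hu(hu, center, width):
    """Standard linear HU windowing: map [center - width/2, center + width/2] to [0, 1]."""
    lo = center - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

def sigmoid_window(hu, center, width, upper=1.0, eps=1e-3):
    """Smooth, differentiable stand-in for the linear window, in the spirit of
    Lee et al.'s WSO module: slope w and bias b are derived from (center, width)
    and could be left trainable inside a CNN."""
    w = 2.0 / width * np.log(upper / eps - 1.0)            # slope
    b = -2.0 * center / width * np.log(upper / eps - 1.0)  # bias
    return upper / (1.0 + np.exp(-(w * hu + b)))

# Brain window (center 40 HU, width 80 HU): values 0/0.5/1 for -50/40/200 HU
print(window_hu(np.array([-50.0, 40.0, 200.0]), 40, 80))
```

In a CNN, `w` and `b` would be the weight and bias of a 1x1 convolution, initialized from a preset window and then trained end-to-end.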
Optimizing the HU window instead of using the full HU range #2
CT window trainable neural network for improving intracranial hemorrhage detection by combining multiple settings
Manohar Karki et al., CAIDE Systems Inc., Lowell, MA, USA
(20 May 2020)
https://doi.org/10.1016/j.artmed.2020.101850
● This method gives a novel approach where a deep convolutional neural network (DCNN) is trained in conjunction with a CT window estimator module in an end-to-end manner for better predictions in diagnostic radiology.
● A learnable module for approximating the window settings for Computed Tomography (CT) images is proposed, to be trained in a distantly supervised manner without prior knowledge of the best window setting values, by simultaneously training a lesion classifier.
● Based on the learned module, several candidate window settings are automatically identified; the raw CT data are scaled at each setting and separate lesion classification models are trained on each.
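A common non-learned baseline for "combining multiple settings" is simply to stack several windowed copies of a slice as input channels; a sketch (the three presets below are typical head-CT choices and an assumption here, not Karki et al.'s learned candidates):

```python
import numpy as np

# (center, width) in HU; typical head-CT presets: brain, subdural, bone
WINDOWS = [(40, 80), (80, 200), (600, 2800)]

def multi_window(hu):
    """Scale one raw HU slice at several window settings and stack as channels."""
    channels = [np.clip((hu - (c - w / 2.0)) / w, 0.0, 1.0) for c, w in WINDOWS]
    return np.stack(channels, axis=-1)  # shape (H, W, len(WINDOWS))

slice_hu = np.full((512, 512), 60.0)   # uniform 60 HU phantom slice
print(multi_window(slice_hu).shape)    # (512, 512, 3)
```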
10-(or 11-)bit mapping, and you can actually display it?
The display has a 2000:1 contrast ratio.
Product pages: Coronis Fusion 6MP (MDCC-6530) and Coronis Fusion 4MP (MDCC-4430)
https://www.medgadget.com/2019/11/barcos-flagship-multimodality-diagnostic-monitor-gets-an-upgrade.html
Should HDR Displays Follow the Perceptual Quantizer (PQ) Curve?
[discussion started as an email thread in the HDR workgroup of the International Committee of Display Metrology (ICDM)]
https://www.displaydaily.com/article/display-daily/should-hdr-displays-follow-the-pq-curve
Can you assume your HUs to be properly calibrated?
Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images
Jun Zhang, Ashirbani Saha, Brian J. Soher, Maciej A. Mazurowski
Department of Radiology, Duke University
(5 Jul 2018)
https://arxiv.org/abs/1807.02152
To develop an automatic image normalization algorithm
for intensity correction of images from breast dynamic
contrast-enhanced magnetic resonance imaging (DCE-MRI)
acquired by different MRI scanners with various imaging
parameters, using only image information.
DCE-MR images of 460 subjects with breast cancer acquired
by different scanners were used in this study. Each subject
had one T1-weighted pre-contrast image and three T1-
weighted post-contrast images available. Our
normalization algorithm operated under the assumption that
the same type of tissue in different patients should be
represented by the same voxel value.
The proposed image normalization strategy based on tissue
segmentation can perform intensity correction fully
automatically, without the knowledge of the scanner
parameters.
And handled by the device manufacturer? Would there still be room for post-processing?
CT Preprocessing: Defacing (De-Identification)
Recommendations for Processing Head CT Data
John Muschelli (2019) https://doi.org/10.3389/fninf.2019.00061
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States
As part of the Health Insurance Portability and Accountability Act (HIPAA) in the United States, under the “Safe Harbor” method, releasing of data requires the removal of a number of protected health information (PHI) identifiers (Centers for Medicare & Medicaid Services, 1996). For head CT images, a notable identifier is “Full-face photographs and any comparable images”. Head CT images have the potential for 3D reconstructions, which likely fall under this PHI category, and present an issue for re-identification of participants (Schimke and Hale, 2015). Thus, removing areas of the face, called defacing, may be necessary for releasing data. If parts of the face and nasal cavities are the target of the imaging, then defacing may be an issue. As ears may be a future identifying biometric marker, and dental records may be used for identification, these areas may be desirable to remove (Cadavid et al., 2009; Mosher, 2010).
The obvious method for image defacing is to perform the brain extraction we described above. If we consider defacing to be removing parts of the face, while preserving the rest of the image as much as possible, this solution is not sufficient. Additional options for defacing exist, such as the MRI Deface software (https://www.nitrc.org/projects/mri_deface/), which is packaged in the FreeSurfer software and can be run using the mri_deface function from the freesurfer R package (Bischoff-Grethe et al., 2007; Fischl, 2012). We have found this method does not work well out of the box on head CT data, including when a large amount of the neck is imaged.
Registration methods involve registering images to the CT and applying the transformation to a mask of the removal areas (such as the face). Examples of this implementation in Python modules for defacing are pydeface (https://github.com/poldracklab/pydeface/tree/master/pydeface) and mridefacer (https://github.com/mih/mridefacer). These methods work since the registration from MRI to CT tends to perform adequately, usually with a cross-modality cost function such as mutual information. Other estimation methods, such as the Quickshear Defacing method, rely on finding the face by its relative placement compared to a modality-agnostic brain mask (Schimke and Hale, 2011). The fslr R package implements both the methods of pydeface and Quickshear. The ichseg R package also has a function ct_biometric_mask that tries to remove the face and ears based on registration to a CT template (described below). Overall, removing potential biometric markers from imaging data should be considered when releasing data; a number of methods exist, but they do not guarantee complete de-identification and may not work directly with CT without modification.
https://slideplayer.com/slide/12844720/
https://neurostars.org/t/sharing-data-on-openneuro-without-consent-form-but-consent-by-the-ethics-committee/1593
Brain Extraction Tools (BETs): nothing good available really for CT?
i.e. skull stripping; more options exist for MRI
Validated Automatic Brain Extraction of Head CT Images
John Muschelli et al. (2015)
https://dx.doi.org/10.1016%2Fj.neuroimage.2015.03.074
https://rdrr.io/github/muschellij2/ichseg/man/CT_Skull_Strip_robust.html (R)
https://johnmuschelli.com/neuroc/ss_ct/index.html
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States
Aim: to systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction, and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and of data smoothing are studied and reported. BET performs well at brain extraction on thresholded, 1 mm³-smoothed CT images with a fractional intensity (FI) of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature.
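The preprocessing the paper found critical, HU thresholding plus roughly 1 mm smoothing before BET, can be sketched as follows; function and parameter names are illustrative, and the actual pipeline then runs FSL BET on the result:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_ct_for_bet(vol_hu, hu_range=(0.0, 100.0), sigma_mm=1.0, voxel_mm=1.0):
    """Zero out non-brain-plausible HU values, then Gaussian-smooth,
    before handing the volume to FSL BET (run separately)."""
    lo, hi = hu_range
    masked = np.where((vol_hu >= lo) & (vol_hu <= hi), vol_hu, 0.0)
    return gaussian_filter(masked, sigma=sigma_mm / voxel_mm)

# Tiny synthetic check: a skull-like 1000 HU voxel is removed before smoothing,
# while a brain-like 40 HU voxel survives (attenuated by the smoothing)
vol = np.zeros((8, 8, 8))
vol[4, 4, 4] = 1000.0
vol[2, 2, 2] = 40.0
out = preprocess_ct_for_bet(vol)
```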
Automated brain extraction from head CT and CTA images using convex optimization with shape propagation
Mohamed Najmi et al. (2019)
https://doi.org/10.1016/j.cmpb.2019.04.030
https://github.com/WuChanada/StripSkullCT (Matlab)
Robust brain extraction tool for CT head images
Zeynettin Akkus, Petro Kostandy, Kenneth A. Philbrick, Bradley J. Erickson et al. (7 June 2020)
https://doi.org/10.1016/j.neucom.2018.12.085 (Cited by 2)
https://github.com/aqqush/CT_BET (Keras/Python)
CT Preprocessing: MNI Space
Normalization to spatial coordinates: the "registration problem"
Classification of damaged tissue in stroke CTs. A
representative stroke CT scan (A) is normalized to MNI space
(B) and spatially smoothed (C). Next, the resulting image is
compared to a group of control CTs by means of the Crawford–
Howell t-test. The resulting t-score map is converted to a probability
map, which is then overlaid onto the image itself (D). By thresholding
this probability map at a given significance level, the lesioned
regions can be delineated. The lesion map in MNI space can be
transformed back to individual subject space (E), so that it
can be compared with a lesion map manually delineated by an
operator (F) on the original CT image.
http://doi.org/10.1016/j.nicl.2014.03.009 (Cited by 64)
Human Brain in Standard MNI Space (2017) Jürgen Mai, Milan Majtanik
The Talairach coordinate of a point in the MNI space: how to interpret it
Wilkin Chau and Anthony R. McIntosh (2005)
https://doi.org/10.1016/j.neuroimage.2004.12.007
“The two most widely used spaces in
the neuroscience community are the Talairach space and the
Montreal Neurological Institute (MNI) space. The Talairach
coordinate system has become the standard reference for
reporting the brain locations in scientific publication, even when
the data have been spatially transformed into different
brain templates (e.g., MNI space). “
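Once a scan is registered to a template, reporting a voxel in MNI coordinates is a single affine map in homogeneous coordinates; a minimal sketch (the 4x4 affine below is an illustrative 2 mm-template example, not read from a real image header):

```python
import numpy as np

# Hypothetical 4x4 voxel-to-world affine (2 mm isotropic, origin at the anterior commissure)
affine = np.array([
    [-2.0, 0.0, 0.0,   90.0],
    [ 0.0, 2.0, 0.0, -126.0],
    [ 0.0, 0.0, 2.0,  -72.0],
    [ 0.0, 0.0, 0.0,    1.0],
])

def voxel_to_mni(ijk, affine):
    """Map a voxel index (i, j, k) to MNI (x, y, z) in mm via homogeneous coords."""
    return (affine @ np.append(ijk, 1.0))[:3]

print(voxel_to_mni((45, 63, 36), affine))  # the template origin: [0, 0, 0]
```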
CT Preprocessing: Space Transform Optimization
As with every signal processing step, you can always do better, and there are pros/cons to each method
https://www.slideserve.com/shaina/group-analyses-in-fmri
http://www.diedrichsenlab.org/imaging/propatlas.htm
Cited by 660
Advanced Normalisation Tools (ANTs)
http://www.mrmikehart.com/tutorials.html
Transcranial brain atlas
http://doi.org/10.1126/sciadv.aar6904
Spatial Normalization - an overview
https://www.sciencedirect.com/topics/medicine-and-dentistry/spatial-normalization
MAR: Metal Artifact Reduction
Getting rid of the metal/bone (dense material) artifacts
Deep-MAR
Fast Enhanced CT Metal Artifact Reduction using Data Domain Deep Learning
Muhammad Usman Ghani, W. Clem Karl
https://arxiv.org/abs/1904.04691v3 (2019)
Filtered backprojection (FBP) is the most widely used method for image reconstruction in X-ray computed tomography (CT) scanners, and can produce excellent images in many cases. However, the presence of dense materials, such as metals, can strongly attenuate or even completely block X-rays, producing severe streaking artifacts in the FBP reconstruction. These metal artifacts can greatly limit subsequent object delineation and information extraction from the images, restricting their diagnostic value.
DuDoNet: joint use of sinogram and image domains
DuDoNet: Dual Domain Network for CT Metal Artifact Reduction
Wei-An Lin, Haofu Liao, Cheng Peng, Xiaohang Sun, Jingdan Zhang, Jiebo Luo, Rama Chellappa, Shaohua Kevin Zhou (2019)
http://openaccess.thecvf.com/content_CVPR_2019/html/Lin_DuDoNet_Dual_Domain_Network_for_CT_Metal_Artifact_Reduction_CVPR_2019_paper.html
Computed tomography (CT) is an imaging modality widely used for
medical diagnosis and treatment. CT images are often corrupted by
undesirable artifacts when metallic implants are carried by patients, which
creates the problem of metal artifact reduction (MAR).
Existing methods for reducing the artifacts due to metallic implants are
inadequate for two main reasons. First, metal artifacts are structured and
non-local so that simple image domain enhancement approaches would
not suffice. Second, the MAR approaches which attempt to reduce metal
artifacts in the X-ray projection (sinogram) domain inevitably lead to severe secondary artifacts due to sinogram inconsistency.
To overcome these difficulties, we propose an end-to-end trainable Dual
Domain Network (DuDoNet) to simultaneously restore sinogram
consistency and enhance CT images. The linkage between the
sinogram and image domains is a novel Radon inversion layer
that allows the gradients to back-propagate from the image domain to the
sinogram domain during training. Extensive experiments show that our
method achieves significant improvements over other single domain MAR
approaches. To the best of our knowledge, it is the first end-to-end dual-
domain network for MAR.
DuDoNet++: joint use of sinogram and image domains
DuDoNet++: Encoding mask projection to reduce CT metal artifacts
Yuanyuan Lyu, Wei-An Lin, Jingjing Lu, S. Kevin Zhou
(Submitted on 2 Jan 2020 (v1), last revised 18 Jan 2020)
https://arxiv.org/abs/2001.00340
CT metal artifact reduction (MAR) is a notoriously challenging task
because the artifacts are structured and non-local in the image
domain. However, they are inherently local in the sinogram domain.
DuDoNet is the state-of-the-art MAR algorithm which exploits the
latter characteristic by learning to reduce artifacts in the
sinogram and image domain jointly. By design, DuDoNet treats
the metal-affected regions in sinogram as missing and replaces them
with the surrogate data generated by a neural network.
Since fine-grained details within the metal-affected regions are
completely ignored, the artifact-reduced CT images by DuDoNet
tend to be over-smoothed and distorted. In this work, we investigate
the issue by theoretical derivation. We propose to address the
problem by (1) retaining the metal-affected regions in sinogram and
(2) replacing the binarized metal trace with the metal mask projection
such that the geometry information of metal implants is encoded.
Extensive experiments on simulated datasets and expert evaluations
on clinical images demonstrate that our network called DuDoNet++
yields anatomically more precise artifact-reduced images
than DuDoNet, especially when the metallic objects are large.
Unsupervised Approach: ADN, with good performance
Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction
Haofu Liao, Wei-An Lin, Jianbo Yuan, S. Kevin Zhou, Jiebo Luo (Submitted on 5 Jun 2019)
https://arxiv.org/abs/1906.01806v5
https://github.com/liaohaofu/adn (PyTorch)
Current deep neural network based approaches to
computed tomography (CT) metal artifact reduction (MAR)
are supervised methods which rely heavily on synthesized
data for training. However, as synthesized data may not
perfectly simulate the underlying physical mechanisms of
CT imaging, the supervised methods often generalize
poorly to clinical applications. To address this problem, we
propose, to the best of our knowledge, the first
unsupervised learning approach to MAR. Specifically,
we introduce a novel artifact disentanglement network that
enables different forms of generations and regularizations
between the artifact-affected and artifact-free image
domains to support unsupervised learning. Extensive
experiments show that our method significantly
outperforms the existing unsupervised models for image-
to-image translation problems, and achieves
comparable performance to existing supervised models on
a synthesized dataset. When applied to clinical datasets,
our method achieves considerable improvements
over the supervised models.
Unsupervised Improvement over ADN?
Three-dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction
Megumi Nakao, Keiho Imanishi, Nobuhiro Ueda, Yuichiro Imai, Tadaaki Kirita, Tetsuya Matsuda
(Submitted on 19 Nov 2019)
https://arxiv.org/abs/1911.08105
In this paper, we introduce metal artifact reduction methods
based on an unsupervised volume-to-volume
translation learned from clinical CT images. We construct
three-dimensional adversarial nets with a regularized loss
function designed for metal artifacts from multiple
dental fillings. The results of experiments using 915 CT
volumes from real patients demonstrate that the proposed
framework has an outstanding capacity to reduce strong
artifacts and to recover underlying missing voxels, while
preserving the anatomical features of soft tissues and tooth
structures from the original images.
Using paired artifact-free MRI for CT MAR
Combining multimodal information for Metal Artefact Reduction: An unsupervised deep learning framework
Marta B. M. Ranzini, Irme Groothuis, Kerstin Kläser, M. Jorge Cardoso, Johann Henckel, Sébastien Ourselin, Alister Hart, Marc Modat
[Submitted on 20 Apr 2020]
https://arxiv.org/abs/2004.09321
Metal artefact reduction (MAR) techniques aim at
removing metal-induced noise from clinical images. In
Computed Tomography (CT), supervised deep learning
approaches have been shown effective but limited in
generalisability, as they mostly rely on synthetic data. In
Magnetic Resonance Imaging (MRI), instead, no method has yet been introduced to correct the susceptibility artefact, still present even in MAR-specific acquisitions.
In this work, we hypothesise that a multimodal approach
to MAR would improve both CT and MRI. Given their
different artefact appearance, their complementary
information can compensate for the corrupted signal in
either modality. We thus propose an unsupervised deep
learning method for multimodal MAR. We introduce the use
of Locally Normalised Cross Correlation as a loss
term to encourage the fusion of multimodal information.
Experiments show that our approach favours a smoother
correction in the CT, while promoting signal recovery in the
MRI.
Unsupervised Approach: jointly with other tasks
Joint Unsupervised Learning for the Vertebra Segmentation, Artifact Reduction and Modality Translation of CBCT Images
Yuanyuan Lyu, Haofu Liao, Heqin Zhu, S. Kevin Zhou
(Submitted on 2 Jan 2020 (v1), last revised 18 Jan 2020)
https://arxiv.org/abs/2001.00339
We investigate the unsupervised learning of the vertebra
segmentation, artifact reduction and modality translation of
CBCT images. To this end, we formulate this problem under a
unified framework that jointly addresses these three
tasks and intensively leverages the knowledge sharing. The
unsupervised learning of this framework is enabled by 1) a
novel shape-aware artifact disentanglement network that
supports different forms of image synthesis and vertebra
segmentation and 2) a deliberate fusion of knowledge from
an independent CT dataset. Specifically, the proposed
framework takes a random pair of CBCT and CT images as the
input, and manipulates the synthesis and segmentation via
different combinations of the decodings of the disentangled
latent codes. Then, by discovering various forms of
consistencies between the synthesized images and segmented vertebrae, the learning is achieved via self-learning from the given CBCT and CT images, obviating the need for paired (i.e., anatomically identical) ground-truth data.
Mandible segmentation to help MAR?
Recurrent convolutional neural networks for mandible segmentation from computed tomography
Bingjiang Qiu, Jiapan Guo, Joep Kraeima, Haye H. Glas, Ronald J. H. Borra, Max J. H. Witjes, Peter M. A. van Ooijen (Submitted on 13 Mar 2020) https://arxiv.org/abs/2003.06486
Recently, accurate mandible segmentation in CT scans
based on deep learning methods has attracted much attention.
However, there still exist two major challenges, namely, metal
artifacts among mandibles and large variations in
shape or size among individuals. To address these two
challenges, we propose a recurrent segmentation
convolutional neural network (RSegCNN) that embeds
segmentation convolutional neural network (SegCNN) into the
recurrent neural network (RNN) for robust and accurate
segmentation of the mandible. Such a design of the system
takes into account the similarity and continuity of the mandible
shapes captured in adjacent image slices in CT scans. The
RSegCNN infers the mandible information based on the
recurrent structure with the embedded encoder-decoder
segmentation (SegCNN) components. The recurrent
structure guides the system to exploit relevant and important
information from adjacent slices, while the SegCNN
component focuses on the mandible shapes from a single CT
slice.
CT Noise Modeling and Denoising: Background
Noise Review #1
A review on CT image noise and its denoising
Manoj Diwakar, Manoj Kumar
Biomedical Signal Processing and Control (April 2018)
https://doi.org/10.1016/j.bspc.2018.01.010
The process of CT image reconstruction depends on
many physical measurements such as radiation dose,
software/hardware. Due to statistical uncertainty in all physical
measurements in Computed Tomography, the inevitable noise
is introduced in CT images. Therefore, edge-preserving
denoising methods are required to enhance the quality of CT
images. However, there is a tradeoff between noise reduction
and the preservation of actual medical relevant contents.
Reducing the noise without losing the important features of the
image such as edges, corners and other sharp structures, is a
challenging task.
Nevertheless, various techniques have been presented to suppress the noise from CT-scanned images. Each technique has its own assumptions, merits and limitations. This paper contains a survey of some significant work in the area of CT image denoising. Often, researchers face difficulty in understanding the noise in CT images and in selecting an appropriate denoising method that is specific to their purpose. Hence, a brief introduction to CT imaging, the characteristics of noise in CT images and the popular methods of CT image denoising are presented here. The merits and drawbacks of CT image denoising methods are also discussed.
Major factors affecting the quality of CT images:
● Blurring
1) How the equipment is operated.
2) Appropriate protocol factor values.
3) Blurring of the image due to patient movement.
4) Fluctuation of CT number between pixels in the image for a scan of uniform material.
5) Some filter algorithms, or bad filter-algorithm parameters (used to reduce noise), blur the image.
● Field of view (FOV)
● Artifacts
● Beam hardening
● Metal artifact
● Patient motion
● Software/hardware based artifacts
● Visual noise
To reconstruct a good quality CT image, the CT scanner has two important characteristics:
(1) Geometric efficiency: when some of the X-rays transmitted through the body are not intercepted by the active detector area, geometric efficiency is reduced.
(2) Absorption efficiency: when some of the X-rays reaching the active detectors are not absorbed (converted into signal), absorption efficiency is reduced.
Therefore, the relationship between noise and radiation dose in the CT scanner must be analyzed.
● Detector
● Collimators
● Scan range
● Tube current
● Scan (rotation) time
● Slice thickness
● Peak kilovoltage (kVp)
(1) By understanding the radiation dose and improving the dose efficiency of CT systems, the low-dose CT image can be improved.
(2) In the second approach, CT image quality can be improved by developing algorithms to reduce the noise from CT images. These algorithms can in turn be used to reduce the radiation dose. Generally, the process of noise suppression is known as image denoising.
Noise Review #2: Noise Sources (https://doi.org/10.1016/j.bspc.2018.01.010)
Random noise: it may arise from the detection of a finite number of X-ray quanta in the projection, and looks like a fluctuation in the image density. As a result, the change in image density is unpredictable and random; this is known as random noise.
Statistical noise: the energy of X-rays is transmitted in the form of individual chunks of energy called quanta, and a finite number of X-ray quanta are detected by the X-ray detector. The number of detected X-ray quanta may differ from one measurement to another because of statistical fluctuation. Statistical noise in CT images may appear because of fluctuations in detecting a finite number of X-ray quanta; it may also be called quantum noise. As more quanta are detected in each measurement, the relative accuracy of each measurement improves. The only way to reduce the effects of statistical noise is to increase the number of detected X-ray quanta, normally achieved by increasing the number of transmitted X-rays through an increase in X-ray dose.
Electronic noise: analog signals are received by electric circuits, also known as analog circuits. The process of receiving analog signals by the electronic circuits may be affected by some noise, referred to as electronic noise. The latest CT scanners are well designed to reduce electronic noise.
Roundoff errors: the analog signals are converted into digital signals through signal processing steps and then sent to the digital computer for CT image reconstruction. In digital computers, digital circuits handle the processing of discrete signals. Due to the limited number of bits for storing discrete signals in a computer system, mathematical computation is not possible without roundoff. This limitation is referred to as roundoff error.
Generally, noise in reconstructed CT images is introduced for two main reasons: first, a continuously varying error due to electrical noise or roundoff errors, which can be modeled as simple additive noise; second, error due to random variations in detected X-ray intensity.
To differentiate tissues (soft and hard), CT numbers are defined using the Hounsfield unit (HU) [60] for CT image reconstruction. The Hounsfield unit (HU) scale is displayed in Fig. 3, where some CT numbers are defined. The CT number for a given tissue is determined by the X-ray linear attenuation coefficient (LAC). Linearity is the ability of the CT image to assign the correct Hounsfield unit (HU) to a given tissue. Good linearity is essential for quantitative analysis of CT images.
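The CT number referred to above has a simple definition from the linear attenuation coefficient (LAC) relative to water, which is worth writing out (the water attenuation value below is an illustrative order-of-magnitude figure, not a calibrated constant):

```python
def to_hu(mu, mu_water=0.19):
    """CT number in Hounsfield units: HU = 1000 * (mu - mu_water) / mu_water.
    mu is the linear attenuation coefficient of the tissue (e.g. in 1/cm)."""
    return 1000.0 * (mu - mu_water) / mu_water

print(to_hu(0.19))  # water -> 0 HU by definition
print(to_hu(0.0))   # air (mu ~ 0) -> -1000 HU
```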
The distribution of noise in a CT image can be derived by estimating the noise variance through reconstruction algorithms. The noise distribution can be accurately characterized by the Poisson distribution, but for multi-detector CT (MDCT) scanners it is more accurately characterized by the Gaussian distribution. The literature [51,57,121,117] also confirms that the noise in CT images is generally additive white Gaussian noise.
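The Poisson-in-projections versus Gaussian-in-practice point can be checked numerically: simulate photon counts along one ray and inspect the noise in the log (line-integral) domain. I0 and the line integral are made-up values for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
I0 = 1.0e4                    # unattenuated photon count per detector bin
line_integral = 2.0           # total attenuation along the ray
expected = I0 * np.exp(-line_integral)

# Photon counting is Poisson; the measured line integral is -log(N / I0)
counts = rng.poisson(expected, size=100_000)
measured = -np.log(counts / I0)

print(measured.mean())        # close to 2.0 (nearly unbiased at these counts)
print(measured.std())         # close to 1/sqrt(expected): Gaussian-like at high counts
```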
Noise Review #3: Denoising method comparison (https://doi.org/10.1016/j.bspc.2018.01.010)
[25] H. Chen, Y. Zhang, M. K. Kalra, F. Lin, P. Liao, J. Zhou, G. Wang, Low-Dose CT with a Residual Encoder–Decoder Convolutional Neural Network (RED-CNN), 2017, arXiv preprint arXiv:1702.00288. https://arxiv.org/abs/1702.00288 (Cited by 224)
[54] L. Gondara, Medical image denoising using convolutional denoising autoencoders, in: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), IEEE, 2016, pp. 241–246. https://doi.org/10.1109/ICDMW.2016.0041 (Cited by 76)
[67] E. Kang, J. Min, J. C. Ye, A Deep Convolutional Neural Network Using Directional Wavelets for Low-Dose X-Ray CT Reconstruction, 2016, arXiv preprint arXiv:1610.09736. https://www.ncbi.nlm.nih.gov/pubmed/29027238
CT Noise in Practice
Assessing Robustness to Noise: Low-Cost Head CT Triage
Sarah M. Hooper, Jared A. Dunnmon, Matthew P. Lungren, Sanjiv Sam Gambhir, Christopher Ré, Adam S. Wang, Bhavik N. Patel (Stanford University)
17 Mar 2020, https://arxiv.org/abs/2003.07977
In this work we use simulations to study noise
from low-cost scanners, which enables
systematic evaluation over large datasets without
increasing labeling demand. However, studying
variations in acquisition protocol using synthetic
data is relevant when considering model
deployment in any healthcare system.
Different institutions often have differing
acquisition protocols, with noise levels adjusted to
suit the needs of their healthcare practitioners.
However, robustness tests over acquisition
protocol and noise level are rarely
reported. Thus, the line of work presented in this
study is relevant for model testing prior to
deployment within any healthcare system. Finally,
learning directly in sinogram space instead of
reconstructed image space is an interesting future
study that may also be pursued with synthetic
data.
Low-Dose CT
Reduce patient dose, with a reduction in image quality
Poisson noise in CT: low-dose CT (low photon counts)
Island Sign: An Imaging Predictor for Early Hematoma Expansion and Poor Outcome in Patients With Intracerebral Hemorrhage
Qi Li, Qing-Jun Liu, Wen-Song Yang, Xing-Chen Wang, Li-Bo Zhao, Xin Xiong, Rui Li, Du Cao, Dan Zhu, Xiao Wei, and Peng Xie
Stroke. 2017;48:3019–3025, 10 Oct 2017
https://doi.org/10.1161/STROKEAHA.117.017985
Poisson noise is due to the statistical error of low photon counts
and results in random, thin, bright and dark streaks that appear
preferentially in the direction of greatest attenuation (Figure 2).
With increased noise, high-contrast objects, such as bone, may
still be visible, but low-contrast soft-tissue boundaries may
be obscured.
Poisson noise can be decreased by increasing the mAs.
Modern scanners can perform tube current modulation, selectively
increasing the dose when acquiring a projection with high attenuation.
They also typically use bowtie filters, which provide a higher dose
towards the center of the field of view compared with the periphery.
There is a tradeoff between noise and resolution, so noise can also be
reduced by increasing the slice thickness, using a softer reconstruction
kernel (soft-tissue kernel instead of bone kernel) or blurring the image.
Noise can also be reduced by moving the arms out of the scanned
volume for an abdominal CT. If the arms cannot be moved out of the
scanned volume, placing them on top of the abdomen should reduce
noise relative to placing them at the sides. Similarly, large breasts
should be constrained in the front of the thorax rather than on both sides
in thoracic and cardiac CT. This is because the noise increases rapidly as the photon counts approach zero, which means that the maximum attenuation has a larger effect on the noise than the average attenuation.
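"Poisson noise can be decreased by increasing the mAs" has a simple quantitative form: detected quanta scale linearly with mAs, so relative quantum noise falls as 1/sqrt(mAs). A sketch (the constant k is arbitrary and purely illustrative):

```python
import math

def relative_quantum_noise(mAs, k=1.0):
    """Relative (quantum) noise ~ k / sqrt(dose), with dose proportional to mAs."""
    return k / math.sqrt(mAs)

# Quadrupling the tube current-time product halves the quantum noise
print(relative_quantum_noise(100) / relative_quantum_noise(400))  # -> 2.0
```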
Iterative methods require faster computer chips, and have
only recently become available for clinical use. One iterative
method, model-based iterative reconstruction (MBIR;
GE Healthcare, WI, USA) [5,6], received US FDA approval in
September 2011 [101]. MBIR substantially reduces image
noise and improves image quality, thus allowing scans to be
acquired at lower radiation doses (Figure 3) [2]. Furthermore,
owing to the tradeoff between noise and resolution,
these methods will also probably be important for reducing
noise in higher resolution images.
Dose reduction vs Image Quality
Vendor free basics of radiation dose reduction techniques for CT
Takeshi Kubo (2019) European Journal of Radiology
https://doi.org/10.1016/j.ejrad.2018.11.002
● Automatic exposure control and iterative reconstruction methods play a significant role in CT radiation dose reduction.
● The validity of dose reduction can be evaluated with objective and subjective image quality, and diagnostic accuracy.
● Realizing the reference dose level for common CT imaging protocols is necessary to avoid overdose in CT examinations.
● Efforts need to be made to decrease low-yield CT examinations. Clinical decision support is expected to play a significant role in leading to the more meaningful application of CT examinations.
Tube current and image quality. CT Images of an anthropomorphic phantom
obtained with (a) 125 mAs and (b) 55 mAs at the level of lung bases. Standard
deviations of Hounsfield unit in the region of interest are 14.5 and 19.3 in the
image (a) and (b), respectively. Streak artifacts originating from the 
thoracicvertebra are seen as black linear structures and more readily perceptible in
the image (b). The image acquired with lower radiation dose (b, 55 mAs) has more
noiseandstreak artifactstheonewithhigherradiationdose(a,125mAs).
Tube current adjustment by automatic exposure control system.
Modification of X-ray energy
profile. (a) X-ray energy profile at
140 kVp (solid line) and 80 kVp
(dashed line). (b, c) Modification of
the energy profile with an extra X-ray
filter. Energy profile at 100 keV
without a filter (b) and at 100 kVp with
an additional filter (c). Low-energy X-
rays are mostly removed by the
additional filter.
Low-dose CT of course benefits from better restoration
SUPER Learning: A Supervised-Unsupervised
Framework for Low-Dose CT Image Reconstruction
Zhipeng Li, Siqi Ye, Yong Long, Saiprasad Ravishankar
(Submitted on 26 Oct 2019) https://arxiv.org/abs/1910.12024
Recent years have witnessed growing interest in machine learning-based
models and techniques for low-dose X-ray CT (LDCT) imaging tasks. The
methods can typically be categorized into supervised learning methods and
unsupervised or model-based learning methods. Supervised learning
methods have recently shown success in image restoration tasks. However,
they often rely on large training sets. Model-based learning methods
such as dictionary or transform learning do not require large or paired
training sets and often have good generalization properties, since they learn
general properties of CT image sets.
Recent works have shown the promising reconstruction performance of
methods such as PWLS-ULTRA that rely on clustering the underlying
(reconstructed) image patches into a learned union of transforms. In this paper,
we propose a new Supervised-UnsuPERvised (SUPER)
reconstruction framework for LDCT image reconstruction that combines the
benefits of supervised learning methods and (unsupervised) transform learning-
based methods such as PWLS-ULTRA that involve highly image-adaptive
clustering. The SUPER model consists of several layers, each of which
includes a deep network learned in a supervised manner and an
unsupervised iterative method that involves image-adaptive
components. The SUPER reconstruction algorithms are learned in a greedy
manner from training data. The proposed SUPER learning methods
dramatically outperform both the constituent supervised learning-based
networks and iterative algorithms for LDCT, and use much fewer iterations in the
iterative reconstruction modules.
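The layered supervised/unsupervised alternation can be caricatured in a few lines (a toy numeric sketch; `supervised_step` and `unsupervised_iters` are made-up scalar stand-ins for the trained network and the PWLS-style iterations, not the paper's actual modules):

```python
def supervised_step(x):
    # Stand-in for a denoising network learned in a supervised manner.
    return 0.5 * x

def unsupervised_iters(x, measurement, n_iter=3, step=0.4):
    # Stand-in for an image-adaptive iterative method pulling the
    # estimate toward data consistency.
    for _ in range(n_iter):
        x = x + step * (measurement - x)
    return x

def super_reconstruct(x0, measurement, n_layers=4):
    # Each SUPER "layer" = one supervised mapping followed by a few
    # unsupervised iterations; in the paper, layers are trained greedily.
    x = x0
    for _ in range(n_layers):
        x = supervised_step(x)
        x = unsupervised_iters(x, measurement)
    return x

x = super_reconstruct(10.0, 2.0)   # toy scalar "image" and "measurement"
```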
Dual-energy/detector CT: “sort of CT HDR” #1
Dual energy computed tomography for the head
Norihito Naruto, Toshihide Itoh, Kyo Noguchi
Japanese Journal of Radiology, February 2018, Volume 36, Issue 2, pp 69–80
https://doi.org/10.1007/s11604-017-0701-4 - Cited by 2
Dual energy CT (DECT) is a promising technology that provides better
diagnostic accuracy in several brain diseases. DECT can generate various types
of CT images from a single acquisition data set at high kV and low kV based on
material decomposition algorithms. The two-material decomposition
algorithm can separate bone/calcification from iodine accurately. The three-
material decomposition algorithm can generate a virtual non-contrast image,
which helps to identify conditions such as brain hemorrhage. A virtual
monochromatic image has the potential to eliminate metal artifacts by
reducing beam-hardening effects.
DECT also enables exploration of advanced imaging to make diagnosis easier.
One such novel application of DECT is the X-Map, which helps to
visualize ischemic stroke in the brain without using iodine contrast medium.
The X-Map uses a modified 3MD algorithm. A
motivation of this application is to visualize an
ischemic change of the brain parenchyma by
detecting an increase in water content in
a voxel. To identify a small change in water
content, the 3MD algorithm had a lipid-specific
slope of 2.0 applied in order to suppress the
small difference between gray matter and white
matter, which is mainly the difference in the lipid
content of gray and white matter. As shown in
the diagram, the nominal values of gray matter
and white matter are 33 HU at Sn150 kV and 42
HU at 80 kV, and 29 HU at Sn150 kV and 34 HU
at 80 kV, respectively. The lipid-specific slope
between the nominal points of gray matter and
white matter is 2.0 using the third-generation
DSCT (SOMATOM Force; Siemens
Healthcare, Forchheim, Germany).
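The quoted slope of 2.0 follows directly from the nominal HU values given in the text, treating each tissue as a point in dual-energy (Sn150 kV, 80 kV) space:

```python
# Nominal (HU at Sn150 kV, HU at 80 kV) pairs from the text.
gm = (33.0, 42.0)   # gray matter
wm = (29.0, 34.0)   # white matter

# Slope of the line joining the two nominal points in dual-energy space.
lipid_slope = (gm[1] - wm[1]) / (gm[0] - wm[0])   # (42-34)/(33-29) = 2.0
```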
A patient with acute ischemic stroke 3 h after onset. A simulated standard CT image (a) obtained 3 h
after the ischemic stroke onset shows no definite early ischemic change, although the left
frontoparietal operculum may show questionable hypodensity. The X-Map (b) clearly shows the
ischemic lesion in the left middle cerebral artery territory. The diffusion-weighted image (c) also shows a
definite acute ischemic lesion in the left MCA territory.
The two-material decomposition (2MD) is the
algorithm that generates several dual energy (DE)
images. The 2MD algorithm (a) can distinguish
one material from other materials such as bone and
iodine using a separation line. This algorithm has
been used for the DE direct bone removal
application. The three-material decomposition
(3MD) algorithm (b) can extract the iodine
component from contrast-enhanced tissues. All
voxels are projected along the iodine-specific slope
to the line connecting fat and soft tissue. This
algorithm has been used for the DE brain hemorrhage
application.
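The 3MD projection can be sketched as simple 2D linear algebra: each voxel, viewed as a point (HU at low kV, HU at high kV), is written as a base point plus a component along the fat–soft-tissue line plus a component along the iodine direction. All coordinates and the iodine slope below are made up for illustration, not vendor values:

```python
import numpy as np

def iodine_component(voxel, fat, soft, iodine_dir):
    """Decompose voxel = fat + t*(soft - fat) + s*iodine_dir and return s,
    the iodine signal along the iodine-specific direction."""
    basis = np.column_stack([np.subtract(soft, fat), iodine_dir])
    t, s = np.linalg.solve(basis, np.subtract(voxel, fat))
    return s

# Made-up (HU_low_kV, HU_high_kV) base-material points and iodine slope.
fat, soft = (-100.0, -80.0), (40.0, 35.0)
iodine_dir = (1.0, 2.0)

s_enhanced = iodine_component((60.0, 80.0), fat, soft, iodine_dir)
# A voxel lying on the fat/soft-tissue line carries no iodine signal:
s_on_line = iodine_component((-30.0, -22.5), fat, soft, iodine_dir)
```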
Dual-energy/detector CT: “sort of CT HDR” #2
Technical limitations of dual-energy CT in
neuroradiology: 30-month institutional
experience and review of literature
Julien Dinkel, Omid Khalilzadeh, Catherine M Phan, Ajit H Goenka, Albert J Yoo, Joshua A Hirsch, Rajiv
Gupta | Journal of NeuroInterventional Surgery 2015;7:596-602.
http://dx.doi.org/10.1136/neurintsurg-2014-011241
Although dual-energy CT (DECT) appears to be a promising option,
its limitations in neuroimaging have not been systematically studied.
Knowledge of these limitations is essential before DECT can be considered a
standard modality in neuroradiology. In this study, a retrospective analysis was
performed to analyze failure modes and limitations of DECT in neuroradiology. To
further illustrate potential limitations of DECT, the clinical analysis was supplemented
with an in vitro dilution experiment using cylinders containing predetermined
concentrations of heparinized swine blood, normal saline, and iodine.
There is a chronic infarct in the right middle cerebral artery territory with diffuse mineralization in this
region (circled). A single-energy image (A) and virtual non-contrast image (B) show hyperdensity
(mean of 58 HU) surrounding infarction of the right basal ganglia and adjacent internal capsule. There is
trace corresponding hyperdensity on the iodine overlay image (C). This finding, by itself, may represent
mineralization or a combination of iodine and hemorrhage. Hard-plaque removal software
(D) cannot identify this region of faint, diffuse mineralization.
Single-energy image (A) with beam-hardening artifacts from clips on
a right middle cerebral artery aneurysm. An iodine overlay image (C)
is particularly impaired by the metallic artifact. The virtual non-
contrast image (B) is less affected by the metallic artifact.
A proposed algorithm for assessing intraparenchymal calcification using dual-energy CT processing.
The original 80 and 140 kV images are decomposed into two alternate base-pairs: brain parenchyma
and calcium. A hyperdensity disappearing on the brain overlay can be regarded as a calcification.
ICH, intracranial hemorrhage.
Two types of hyperattenuation seen on a mixed image (A, D) obtained by
dual-energy CT in a patient who underwent recanalization therapy. Contrast
staining (oval) in the right basal ganglia is also depicted in the iodine overlay
image (C) but not in the virtual non-contrast (VNC) image (B). A faint focal
mineralization is seen in the left lentiform nucleus (arrow). The iodine-specific
material decomposition algorithm cannot identify this fourth material, which is
seen on both the VNC (B) and iodine overlay image (C). After postprocessing using
the brain mineralization application, this hyperdensity disappears on the brain
overlay (E), confirming a calcification. Note that both iodine content and
calcifications are seen on the ‘calcium overlay’ (F).
Dual-energy/detector CT: “sort of CT HDR” #3
Characteristic images of the CT brain protocol from the single-layer detector CT (SLCT; Brilliance iCT, Philips Healthcare)
and dual-layer detector CT (DLCT; IQon spectral CT, Philips Healthcare). The contrast between the grey and white
matter is clear in both images. In the SLCT image, a drain is visible. The window level and width for both images is 40/80.
Van Ommen et al. (January 2019)
Dose of CT protocols acquired in clinical routine using a dual-layer detector CT scanner: A preliminary report
http://doi.org/10.1016/j.ejrad.2019.01.011
Veronica Fransson’s Master’s thesis (2019)
http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8995820&fileOId=8995821
Iodine Quantification Using Dual Energy
Computed Tomography and applications in
Brain Imaging
A Review of the Applications of Dual-Energy CT in Acute Neuroimaging
https://doi.org/10.1177%2F0846537120904347
Dual-energy CT is a powerful tool for supplementing standard CT in acute
neuroimaging. Many clinical applications have been demonstrated to
represent added value, notably for improved diagnoses and diagnostic
confidence in head and spinal trauma, cerebral ischemia and hemorrhage, and
angiography. Emerging iodine quantification methods have potential to guide
medical, surgical, and interventional therapy and prognostication in stroke,
aneurysmal hemorrhage, and traumatic contusions. As the technology of DECT
continues to evolve, these tools promise maturation and expansion of their role in
emergent neurological presentations.
In three-material decomposition, if a fourth (or more) material, such as calcium, is
present at a certain concentration in a voxel, DECT cannot separate the constituent
materials and will misclassify them which may present challenges in separating
calcification from enhancement or hemorrhage. Iodine concentrations that are
too low may be unquantifiable or undetectable, and concentrations that are too high
may prevent complete iodine subtraction. The limitation of a relatively narrow field of
view (25-36.5 cm, depending on scanner generation) is of lesser importance in
neuroradiology, as the brain and spine, when centered in the field of view, should be
adequately covered.
Using Dual-Energy CT to Identify Small Foci of
Hemorrhage in the Emergency Setting
https://doi.org/10.1148/radiol.2019192258
Dual-Energy CT should better distinguish calcium from hematoma
Dual-Energy Head CT Enables
Accurate Distinction of
Intraparenchymal Hemorrhage from
Calcification in Emergency
Department Patients
Ranliang Hu, Laleh Daftari Besheli, Joseph Young, Markus Wu,
Stuart Pomerantz, Michael H. Lev, Rajiv Gupta
https://doi.org/10.1148/radiol.2015150877
To evaluate the ability of dual-energy (DE) computed
tomography (CT) to differentiate calcification from
acute hemorrhage in the emergency department
setting.
In this institutional review board-approved study, all unenhanced
DE head CT examinations that were performed in the emergency
department in November and December 2014 were
retrospectively reviewed. Simulated 120-kVp single-energy
CT images were derived from the DE CT acquisition via
postprocessing. Patients with at least one focus of
intraparenchymal hyperattenuation on single-energy CT images
were included, and DE material decomposition postprocessing
was performed. Each focal hyperattenuation was analyzed on the
basis of the virtual noncalcium and calcium overlay
images and classified as calcification or hemorrhage.
Sensitivity, specificity, and accuracy were calculated for single-
energy and DE CT by using a common reference standard
established by relevant prior and follow-up imaging and clinical
information.
DE CT by using material decomposition enables
accurate differentiation between calcification and
hemorrhage in patients presenting for emergency
head imaging and can be especially useful in
problem-solving complex cases that are difficult
to determine based on conventional CT
appearance alone.
Multi-energy CT
Uniqueness criteria in multi-energy CT
Guillaume Bal, Fatma Terzioglu (Submitted on 6 Jan 2020)
https://arxiv.org/abs/2001.06095
Multi-Energy Computed Tomography (ME-CT) is a
medical imaging modality aiming to reconstruct the spatial
density of materials from the attenuation properties of probing
x-rays. For each line in two- or three-dimensional space, ME-CT
measurements may be written as a nonlinear mapping
from the integrals of the unknown densities of a finite number
of materials along said line to an equal or larger number of
energy-weighted integrals corresponding to different x-ray
source energy spectra.
ME-CT reconstructions may thus be decomposed as a two-
step process:
(i) Reconstruct line integrals of the material densities from the
available energy measurements; and
(ii) Reconstruct densities from their line integrals.
Step (ii) is the standard linear x-ray CT problem whose
invertibility is well-known, so this paper focuses on step (i).
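The nonlinear mapping in step (i) can be sketched for a single ray with a discretized energy axis (a toy two-material, two-spectrum example; all spectra and attenuation coefficients below are made-up numbers, not physical data):

```python
import numpy as np

def me_ct_measurements(line_integrals, spectra, mass_atten):
    """One ray: map material line integrals (n_materials,) to
    energy-weighted measurements (n_spectra,) through Beer-Lambert
    attenuation summed over a discretized energy axis."""
    atten = np.exp(-(line_integrals @ mass_atten))   # (n_energies,)
    return spectra @ atten

spectra = np.array([[0.7, 0.3],      # two made-up normalized source spectra
                    [0.2, 0.8]])     # over two energy bins
mass_atten = np.array([[0.5, 0.2],   # made-up attenuation of material 1
                       [0.3, 0.9]])  # and material 2 per energy bin

y = me_ct_measurements(np.array([1.0, 2.0]), spectra, mass_atten)
```

With zero material along the ray the measurements reduce to the spectrum sums; any material strictly attenuates them, which is the nonlinearity that step (i) has to invert.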
Low-dose Multi-energy CT
Joint Reconstruction in Low Dose Multi-Energy CT
Jussi Toivanen, Alexander Meaney, Samuli Siltanen, Ville Kolehmainen
(Submitted on 11 Apr 2019 (v1), last revised 13 Feb 2020 (this version, v3))
https://arxiv.org/abs/1904.05671
Multi-energy CT takes advantage of the non-linearly varying attenuation
properties of elemental media with respect to energy, enabling more precise
material identification than single-energy CT. The increased precision comes with
the cost of a higher radiation dose. A straightforward way to lower the dose is to
reduce the number of projections per energy, but this makes tomographic
reconstruction more ill-posed.
In this paper, we propose how this problem can be overcome with a combination of
a regularization method that promotes structural similarity between images at
different energies and a suitably selected low-dose data acquisition protocol using
non-overlapping projections. The performance of various joint regularization
models is assessed with both simulated and experimental data, using the novel low-
dose data acquisition protocol. Three of the models are well-established, namely the
joint total variation, the linear parallel level sets and the spectral smoothness
promoting regularization models.
Furthermore, one new joint regularization model is introduced for multi-
energy CT: a regularization based on the structure function from the
structural similarity index. The findings show that joint regularization
outperforms individual channel-by-channel reconstruction. Furthermore, the
proposed combination of joint reconstruction and non-overlapping projection
geometry enables significant reduction of radiation dose.
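Why joint regularization helps can be seen with joint total variation, one of the penalties named above, in a 1-D toy version (illustrative only): it couples edges across energy channels, so signals whose edges sit at the same location are cheaper than signals with the same per-channel variation but misaligned edges.

```python
import numpy as np

def joint_tv(channels, eps=1e-12):
    """Joint total variation of 1-D multi-channel signals:
    sum over pixels of the Euclidean norm of the cross-channel gradients."""
    grads = np.diff(channels, axis=1)
    return float(np.sum(np.sqrt(np.sum(grads**2, axis=0) + eps)))

aligned    = np.array([[0., 0., 1., 1.],
                       [0., 0., 2., 2.]])   # edge at the same location
misaligned = np.array([[0., 0., 1., 1.],
                       [0., 2., 2., 2.]])   # same per-channel TV, edges differ
```

Both signals have identical channel-by-channel total variation, yet joint TV prefers the aligned pair, which is exactly the structural-similarity prior exploited for non-overlapping projections.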
G Poludniowski, G Landry, F DeBlois, P M Evans, and F Verhaegen.
SpekCalc: a program to calculate photon spectra from tungsten anode
x-ray tubes. Physics in Medicine and Biology, 54:N433–N438, 2009.
https://doi.org/10.1088/0031-9155/54/19/N01
3D Few-view CT Reconstruction
Deep Encoder-decoder Adversarial
Reconstruction (DEAR) Network for 3D CT from
Few-view Data
Huidong Xie, Hongming Shan, Ge Wang (Submitted on 13 Nov 2019)
https://arxiv.org/abs/1911.05880
In this paper, we propose a deep encoder-decoder adversarial reconstruction (DEAR) network
for 3D CT image reconstruction from few-view data. Since the artifacts caused by few-view
reconstruction appear in 3D instead of 2D geometry, a 3D deep network has a great potential for
improving the image quality in a data-driven fashion. More specifically, our proposed DEAR-3D
network aims at reconstructing a 3D volume directly from clinical 3D spiral cone-beam image
data. DEAR-3D utilizes 3D convolutional layers to extract 3D information from multiple adjacent
slices in a generative adversarial network (GAN) framework. Different from reconstructing 2D
images from 3D input data, DEAR-3D directly reconstructs a 3D volume, with faithful texture and
image details. DEAR is validated on a publicly available abdominal CT dataset prepared and
authorized by Mayo Clinic. Compared with other 2D deep-learning methods, the proposed
DEAR-3D network can utilize 3D information to produce promising reconstruction results.
Few-view CT may be implemented as a mechanically stationary scanner in the future [
Cramer et al. 2018] for health-care and other utilities. Current commercial CT scanners use one or
two x-ray sources mounted on a rotating gantry, and take hundreds of projections around a patient.
The rotating mechanism is not only massive but also power-consuming. Hence, current commercial
CT scanners are inaccessible outside hospitals and imaging centers, due to their size,
weight, and cost. Designing a stationary gantry with multiple miniature x-ray sources is an
interesting approach to resolve this issue [Cramer et al. 2018].
CT
Registration
Traditional
Background
Multimodal Spatial Normalization Example #1
Image processing steps for three methods of spatial normalization and measuring regional SUV. (a) Skull-stripping of original CT image, (b) spatial normalization of skull-stripped CT to skull-stripped CT
template, (c) applying transformation parameter normalizing CT image for spatial normalization of PET image, (d) skull-stripping of original MR image, (e) spatial normalization of skull-stripped MR image to skull-stripped
MR template, (f) coregistration of PET image to MR image, (g) applying transformation parameter normalizing MR image for spatial normalization of PET image, (h) spatial normalization of PET image with MNI PET
template, (i) measuring regional SUV with modified AAL VOI template, (j) acquisition of FSVOI with FreeSurfer, and (k) measuring regional SUV by using FSVOI overlaid on PET image coregistered to MR. AAL = automated
anatomical labeling, FSVOI = FreeSurfer-generated volume of interest, MNI = Montreal Neurological Institute, PET = positron emission tomography, SUV = standardized uptake value, VOI = volume of interest
A Computed
Tomography-Based
Spatial Normalization for
the Analysis of [18F]
Fluorodeoxyglucose
Positron Emission
Tomography of the Brain
Korean J Radiol. 2014 Nov-Dec;15(6):862-870.
https://doi.org/10.3348/kjr.2014.15.6.862
Multimodal Spatial Normalization Example #2
Spatial Normalization of CT Images to MNI Space: A Representative Example
http://fbcover.us/mni-template/
BIC, The McConnell Brain Imaging Centre: ICBM152 NLin 2009
http://fbcover.us/mni-template/
MRI Spatial Normalization Example
Spatial registration for functional near-infrared spectroscopy: From
channel position on the scalp to cortical location in individual and
group analyses (NeuroImage 2013)
https://doi.org/10.1016/j.neuroimage.2013.07.025
Probabilistic registration of single-subject data without MRI. (A) Positions for channels and reference
points in real-world (RW) space are measured using a 3D digitizer. The minimum number of reference
points is four, as in this case, where Nz (nasion), Cz, and left and right preauricular points (AL and AR) are
used. Alternatively, whole or selected 10/20 positions may be used. (B) The reference points in RW are
affine-transformed to the corresponding reference points in each entry of the reference MRI database in
MNI space. (C) Channels on the scalp are projected onto the cortical surface of the reference brains. (D)
The cortically projected channel positions are integrated to yield the most likely coordinates (average:
centers of spheres) and variability (composite standard deviation: radii of spheres) in MNI space.
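Step (B) above, fitting an affine transform from at least four measured reference points, can be sketched as a least-squares problem (the point coordinates below are made up for illustration, not real anatomical positions):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine map dst ≈ src @ A.T + t from point pairs.
    src, dst: (n_points, 3) arrays; n_points >= 4, points non-coplanar."""
    src_h = np.hstack([src, np.ones((len(src), 1))])   # homogeneous coords
    params, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return params[:3].T, params[3]

# Made-up digitizer positions standing in for Nz, Cz, AL, AR.
src = np.array([[0., 9., 1.], [0., 0., 12.], [-7., 0., 0.], [7., 0., 0.]])
dst = 1.1 * src + np.array([1.0, -2.0, 0.5])   # known scale + shift

A, t = fit_affine(src, dst)   # recovers the 1.1 scaling and the shift
```

With exactly four non-coplanar points the 12 affine parameters are fully determined; more points give a least-squares fit.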
ANTs package and SyN as the “SOTA”
Evaluation of 14 nonlinear deformation
algorithms applied to human brain MRI
registration
Arno Klein et al. (2009)
NeuroImage, Volume 46, Issue 3, 1 July 2009, Pages 786-802
https://doi.org/10.1016/j.neuroimage.2008.12.037 - Cited by 1776
More than 45,000 registrations between 80 manually labeled brains
were performed by algorithms including: AIR, ANIMAL, ART,
Diffeomorphic Demons, FNIRT, IRTK, JRD-fluid, ROMEO, SICLE, SyN,
and four different SPM5 algorithms (“SPM2-type” and regular
Normalization, Unified Segmentation, and the DARTEL Toolbox). All of
these registrations were preceded by linear registration between the
same image pairs using FLIRT.
One of the most significant findings of this study is that the relative
performances of the registration methods under comparison appear to
be little affected by the choice of subject population, labeling
protocol, and type of overlap measure. This is important because
it suggests that the findings are generalizable to new subject
populations that are labeled or evaluated using different labeling
protocols. Furthermore, we ranked the 14 methods according to
three completely independent analyses (permutation tests, one-way
ANOVA tests, and indifference-zone ranking) and derived three almost
identical top rankings of the methods. ART, SyN, IRTK, and
SPM's DARTEL Toolbox gave the best results according to
overlap and distance measures, with ART and SyN delivering the
most consistently high accuracy across subjects and label
sets. Updates will be published on the http://www.mindboggle.info/papers/ website.
Blaiotta et al. (2018): “Advanced normalisation Tools (ANTs) package, through
the web site http://stnava.github.io/ANTs/. Indeed, the symmetric diffeomorphic
registration framework implemented in ANTs has established itself as the
state-of-the-art of medical image nonlinear spatial normalisation (Klein et al., 2009).”
Image Registration
Diffeomorphisms: SyN, Independent Evaluation: Klein, Murphy, Template Construction 
(2004)(2010), Similarity Metrics, Multivariate registration, 
Multiple modality analysis and statistical bias
How about missing data?
Diffeomorphic registration with intensity
transformation and missing data:
Application to 3D digital pathology of
Alzheimer’s disease
Daniel Tward, Timothy Brown, Yusuke Kageyama, Jaymin Patel,
Zhipeng Hou, Susumu Mori, Marilyn Albert, Juan Troncoso,
Michael Miller. bioRxiv preprint first posted online Dec. 11, 2018; doi:
http://dx.doi.org/10.1101/494005
This paper examines the problem of diffeomorphic image mapping in
the presence of differing image intensity profiles and missing data.
Our motivation comes from the problem of aligning 3D brain MRI with 100
micron isotropic resolution, to histology sections with 1 micron in plane
resolution. Multiple stains, as well as damaged, folded, or missing tissue are
common in this situation. We overcome these challenges by introducing
two new concepts. Cross modality image matching is achieved by
jointly estimating polynomial transformations of the atlas intensity, together
with pose and deformation parameters. Missing data is accommodated
via a multiple atlas selection procedure where several atlases may be of
homogeneous intensity and correspond to “background” or “artifact”.
The two concepts are combined within an Expectation
Maximization algorithm, where atlas selection posteriors and
deformation parameters are updated iteratively, and polynomial
coefficients are computed in closed form. We show results for 3D
reconstruction of digital pathology and MRI in standard atlas coordinates. In
conjunction with convolutional neural networks, we quantify the 3D density
distribution of tauopathy throughout the medial temporal lobe of an
Alzheimer’s disease postmortem specimen.
Diffusion Tensor Imaging registration pipeline example
Improving spatial normalization of brain
diffusion MRI to measure longitudinal
changes of tissue microstructure in human
cortex and white matter
Florencia Jacobacci, Jorge Jovicich, Gonzalo Lerner, Edson Amaro Jr, Jorge
Armony, Julien Doyon, Valeria Della-Maggiore. Universidad de Buenos Aires
https://doi.org/10.1101/590521 (March 28, 2019)
https://github.com/florjaco/DWIReproducibleNormalization
Scalar diffusion tensor imaging (DTI) measures, such as
fractional anisotropy (FA) and mean diffusivity (MD),
are increasingly being used to evaluate longitudinal changes
in brain tissue microstructure. In this study, we aimed at
optimizing the normalization approach of longitudinal
DTI data in humans to improve registration in gray
matter and reduce artifacts associated with
multisession registrations. For this purpose, we examined the
impact of different normalization features on the across-
session test-retest reproducibility error of FA and MD maps
from multiple scanning sessions.
We found that a normalization approach using ANTs as the
registration algorithm, MNI152 T1 template as the target
image, FA as the moving image, and an intermediate FA
template yielded the highest test-retest reproducibility in
registering longitudinal DTI maps for both gray matter and
white matter. Our optimized normalization pipeline opens a
window to quantify longitudinal changes in
microstructure at the cortical level.
CT
Image Quality
Regulatory and
technical definitions
Technical Image Quality validated by radiologists
Validation of algorithmic CT image
quality metrics with preferences of
radiologists
Yuan Cheng, Ehsan Abadi, Taylor Brunton Smith, Francesco Ria,
Mathias Meyer, Daniele Marin, Ehsan Samei
https://doi.org/10.1002/mp.13795 (29 August 2019)
Automated assessment of perceptual image quality on
clinical Computed Tomography (CT) data by computer algorithms
has the potential to greatly facilitate data-driven monitoring and
optimization of CT image acquisition protocols. The application of
these techniques in clinical operation requires the knowledge of
how the output of the computer algorithms corresponds
to clinical expectations. This study addressed the need to
validate algorithmic image quality measurements on clinical CT
images with preferences of radiologists and determine the
clinically acceptable range of algorithmic measurements for
abdominal CT examinations.
Algorithmic measurements of image quality metrics
(organ HU, noise magnitude, and clarity) were performed on a
clinical CT image dataset with supplemental measures of noise
power spectrum from phantom images using techniques
developed previously. The algorithmic measurements were
compared to clinical expectations of image quality in an observer
study with seven radiologists.
The observer study results indicated that these algorithms can
robustly assess the perceptual quality of clinical CT
images in an automated fashion. Clinically acceptable ranges
of algorithmic measurements were determined. The
correspondence of these image quality assessment algorithms to
clinical expectations paves the way toward establishing diagnostic
reference levels in terms of clinically acceptable perceptual image
quality and data-driven optimization of CT image
acquisition protocols.
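The simplest of the metrics mentioned above, noise magnitude, is commonly estimated as the HU standard deviation inside a uniform region of interest. A minimal sketch on synthetic data (the 40 HU "organ" value and 10 HU noise level are made up for illustration):

```python
import numpy as np

def roi_noise_hu(image_hu, roi_mask):
    """Noise-magnitude estimate: HU standard deviation inside a
    (presumed uniform) region of interest."""
    return float(np.std(image_hu[roi_mask]))

rng = np.random.default_rng(0)
# Synthetic "uniform organ" at 40 HU with 10 HU Gaussian noise.
image = 40.0 + 10.0 * rng.standard_normal((64, 64))
roi = np.zeros((64, 64), dtype=bool)
roi[16:48, 16:48] = True

noise_hu = roi_noise_hu(image, roi)   # close to the simulated 10 HU
```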
Image Quality (and resolution) is task-specific:
sometimes blurry + pixelated volumes can get you somewhere?
The Effect of Image Resolution on
Deep Learning in Radiography
Carl F. Sabottke, Bradley M. Spieler. Radiology: Artificial
Intelligence (Jan 22, 2020)
https://doi.org/10.1148/ryai.2019190015
Tracking convolutional neural network performance as a
function of image resolution allows insight into how the
relative subtlety of different radiology findings can affect the
success of deep learning in diagnostic radiology
applications.
Maximum AUCs were achieved at image resolutions
between 256 × 256 and 448 × 448 pixels for binary
decision networks targeting emphysema, cardiomegaly,
hernias,edema,effusions,atelectasis,masses,andnodules.
When comparing performance between networks that
utilize lower resolution (64 × 64 pixels) versus higher
(320 × 320 pixels) resolution inputs, emphysema,
cardiomegaly, hernia, and pulmonary nodule detection had
the highest fractional improvements in AUC at higher image
resolutions.
Increasing image resolution for CNN training often has a
trade-off with the maximum possible batch size, yet
optimal selection of image resolution has the potential for
further increasing neural network performance for various
radiology-based machine learning tasks. Furthermore,
identifying diagnosis-specific tasks that require
relatively higher image resolution can potentially
provide insight into the relative difficulty of identifying
differentradiologyfindings.
Regulatory Image Quality
Achieving CT Regulatory Compliance: A
Comprehensive and Continuous Quality
Improvement Approach
Matthew E. Zygmont, Rebecca Neill, Shalmali Dharmadhikari, Phuong-Anh T.
Duong. Current Problems in Diagnostic Radiology
Available online 12 February 2020
https://doi.org/10.1067/j.cpradiol.2020.01.013
Computed tomography (CT) represents one of the largest sources of
radiation exposure to the public in the United States. Regulatory
requirements now mandate dose tracking for all exams and
investigation of dose events that exceed set dose thresholds.
Radiology practices are tasked with ensuring quality control and
optimizing patient CT exam doses while maintaining diagnostic
efficacy. Meeting regulatory requirements necessitates the
development of an effective quality program in CT.
This review provides a template for accreditation compliant
quality control and CT dose optimization. The following paper
summarizes a large health system approach for establishing a quality
program in CT and discusses successes, challenges, and future
needs.
Protocol management was one of the most time intensive
components of our CT quality program. Central protocol
management with cross platform compatibility would allow for
efficient standardization and would have great impact especially in
large organizations. Modular protocol design from
manufacturers is another missing piece in the optimization
process. Having recursive protocol modules would greatly alleviate
the burden of making parameter changes to core imaging units. For
example, our routine head protocol is a standalone exam, but also
exists in combination protocols for CT angiography of the head and
neck, perfusion imaging, and trauma exams.
CT
Registration
Deep Learning
https://arxiv.org/pdf/1903.03545.pdf
https://paperswithcode.com/task/diffeomorphic-medical-image-registration
Conditional variational autoencoder for diffeomorphic registration #1
Learning a Probabilistic Model for Diffeomorphic
Registration
Julian Krebs, Hervé Delingette, Boris Mailhé, Nicholas Ayache, Tommaso
Mansi. Université Côte d’Azur, Inria / Siemens Healthineers, Digital Services, Digital Technology and Innovation, Princeton, NJ, USA
IEEE Transactions on Medical Imaging (Volume: 38, Issue: 9, Sept. 2019) https://doi.org/10.1109/TMI.2019.2897112
Medical image registration is one of the key processing steps for biomedical image analysis such as cancer diagnosis. Recently, deep-learning-based supervised and unsupervised image registration methods have been extensively studied due to their excellent performance in spite of ultra-fast computational time compared to the classical approaches.

In this paper, we present a novel unsupervised medical image registration method that trains a deep neural network for deformable registration of 3D volumes using a cycle consistency.

To guarantee the topology preservation between the deformed and fixed images, we here adopt the cycle-consistency constraint between the original moving image and its re-deformed image. That is, the deformed volumes are given as the inputs to the networks again by switching their order to impose the cycle consistency. This constraint ensures that the shape of deformed images successively returns to the original shape.

Thanks to the cycle consistency, the proposed deep neural networks can take diverse pairs of image data with severe deformation for accurate registration. Experimental results using multiphase liver CT images demonstrate that our method provides very precise 3D image registration within a few seconds, resulting in more accurate cancer size estimation.
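The cycle-consistency idea above can be sketched in a few lines of numpy (all function names are mine, and nearest-neighbour warping stands in for the differentiable spatial transformer a real network would use): deform moving to fixed, deform the result back, and penalise any residual difference from the original images.

```python
import numpy as np

def warp_nn(image, flow):
    """Warp a 2D image with a displacement field using nearest-neighbour
    sampling (a stand-in for a differentiable spatial transformer)."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    src_y = np.clip(np.rint(ys + flow[0]), 0, h - 1).astype(int)
    src_x = np.clip(np.rint(xs + flow[1]), 0, w - 1).astype(int)
    return image[src_y, src_x]

def cycle_consistency_loss(moving, fixed, flow_mf, flow_fm):
    """Deform moving->fixed with flow_mf, then re-deform back with
    flow_fm; the cycle term penalises any residual difference from
    the original images (applied in both directions)."""
    m_back = warp_nn(warp_nn(moving, flow_mf), flow_fm)
    f_back = warp_nn(warp_nn(fixed, flow_fm), flow_mf)
    return np.mean((m_back - moving) ** 2) + np.mean((f_back - fixed) ** 2)
```

With identity (zero) flows the loss is exactly zero; in training, this term is added to the usual similarity and smoothness losses.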
The number of trainable parameters in the network was ~420k. The framework has been implemented in Tensorflow using Keras. Training took ~24 hours and testing a single registration case took 0.32 s on an NVIDIA GTX TITAN X GPU.
Conditional variational autoencoder for diffeomorphic registration #2
Learning a Probabilistic Model for Diffeomorphic Registration
Julian Krebs; Hervé Delingette; Boris Mailhé; Nicholas Ayache; Tommaso Mansi
Université Côte d'Azur, Inria / Siemens Healthineers, Digital Services, Digital Technology and Innovation, Princeton, NJ, USA
IEEE Transactions on Medical Imaging (Volume: 38, Issue: 9, Sept. 2019) https://doi.org/10.1109/TMI.2019.2897112 - Cited by 13
26. J. Fan, X. Cao, P.-T. Yap, D. Shen, "BIRNet: Brain image registration using dual-supervised fully convolutional networks", 2018. https://arxiv.org/abs/1802.04692
27. A. V. Dalca, G. Balakrishnan, J. Guttag, M. R. Sabuncu, "Unsupervised learning for fast probabilistic diffeomorphic registration", Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., pp. 729-738, 2018. https://arxiv.org/abs/1805.04605 - See next slide →
29. Y. Hu et al., "Weakly-supervised convolutional neural networks for multimodal image registration", Med. Image Anal., vol. 49, pp. 1-13, Oct. 2018. https://arxiv.org/abs/1807.03361
Unsupervised probabilistic + diffeomorphic tweak of VoxelMorph
Unsupervised Learning of Probabilistic Diffeomorphic Registration for Images and Surfaces
Adrian V. Dalca, Guha Balakrishnan, John Guttag, Mert R. Sabuncu (Submitted on 8 Mar 2019 (v1), last revised 23 Jul 2019 (this version, v2)) https://arxiv.org/abs/1903.03545
https://github.com/voxelmorph/voxelmorph
Papers with Code: Diffeomorphic Medical Image Registration
Classical deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have facilitated fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates.
In this paper, we build a connection between classical and
learning-based methods. We present a probabilistic generative
model and derive an unsupervised learning-based inference algorithm
that uses insights from classical registration methods and makes use of
recent developments in convolutional neural networks (CNNs). We
demonstrate our method on a 3D brain registration task for both
images and anatomical surfaces, and provide extensive empirical
analyses. Our principled approach results in state-of-the-art accuracy and very fast runtimes, while providing diffeomorphic guarantees.
Our algorithm can infer the registration of new image
pairs in under a second. Compared to traditional
methods, our approach is significantly faster,
and compared to recent learning based methods,
our method offers diffeomorphic guarantees.
We demonstrate that the surface extension to our
model can help improve registration while
preserving properties such as low runtime and
diffeomorphisms. Furthermore, several conclusions
shown in recent papers apply to our method. For
example, when only given very limited
training data, deformation from VoxelMorph can
still be used as initialization to a classical
method, enabling faster convergence (
Balakrishnan et al., 2019).
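The diffeomorphic guarantee in these VoxelMorph-style methods comes from integrating a stationary velocity field, typically by scaling and squaring. A toy numpy sketch of that integration step (my naming; nearest-neighbour composition replaces the bilinear interpolation used in practice):

```python
import numpy as np

def compose(disp_a, disp_b):
    """Compose two displacement fields: (a o b)(x) = b(x) + a(x + b(x)).
    Nearest-neighbour lookup replaces bilinear interpolation."""
    _, h, w = disp_a.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    qy = np.clip(np.rint(ys + disp_b[0]), 0, h - 1).astype(int)
    qx = np.clip(np.rint(xs + disp_b[1]), 0, w - 1).astype(int)
    return disp_b + disp_a[:, qy, qx]

def scaling_and_squaring(velocity, steps=6):
    """Integrate a stationary velocity field into a (discretely)
    diffeomorphic displacement: start from v / 2**steps, then
    self-compose the field `steps` times."""
    disp = velocity / (2.0 ** steps)
    for _ in range(steps):
        disp = compose(disp, disp)
    return disp
```

For a spatially constant velocity field the integration simply recovers the same constant displacement, which is a handy sanity check.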
Not that many annotated training samples required?
Few Labeled Atlases are Necessary for Deep-Learning-Based Segmentation
Hyeon Woo Lee, Mert R. Sabuncu, Adrian V. Dalca (Submitted on 13 Aug 2019 (v1), last revised 15 Aug 2019 (this version, v3))
https://arxiv.org/abs/1908.04466
We tackle biomedical image segmentation in the scenario of only a few labeled brain MR images. This is an important and challenging task in medical applications, where manual annotations are time-consuming. Classical multi-atlas based anatomical segmentation methods use image registration to warp segments from labeled images onto a new scan. These approaches have traditionally required significant runtime, but recent learning-based registration methods promise substantial runtime improvement.

In a different paradigm, supervised learning-based segmentation strategies have gained popularity. These methods have consistently used relatively large sets of labeled training data, and their behavior in the regime of a few labeled images has not been thoroughly evaluated. In this work, we provide two important results for anatomical segmentation in the scenario where few labeled images are available. First, we propose a straightforward implementation of an efficient semi-supervised learning-based registration method, which we showcase in a multi-atlas segmentation framework. Second, through a thorough empirical study, we evaluate the performance of a supervised segmentation approach, where the training images are augmented via random deformations. Surprisingly, we find that in both paradigms, accurate segmentation is generally possible even in the context of few labeled images.
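The random-deformation augmentation evaluated in the paper can be approximated with a toy numpy sketch (the function name and the crude repeat-upsampling are my simplifications; real pipelines smooth the field with a Gaussian and sample with interpolation):

```python
import numpy as np

def random_deformation_augment(image, amplitude=2.0, grid=4, seed=None):
    """Augment a 2D image by warping it with a random smooth
    displacement field: coarse Gaussian noise on a grid x grid lattice,
    upsampled to image resolution, applied with nearest-neighbour
    sampling. grid must divide the image side lengths."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    coarse = rng.normal(0.0, amplitude, size=(2, grid, grid))
    # naive upsampling by repetition keeps the field piecewise constant
    flow = np.repeat(np.repeat(coarse, h // grid, axis=1), w // grid, axis=2)
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    sy = np.clip(np.rint(ys + flow[0]), 0, h - 1).astype(int)
    sx = np.clip(np.rint(xs + flow[1]), 0, w - 1).astype(int)
    return image[sy, sx]
```

In the augmentation setting, the same warp is applied to the image and its label map, so each labeled atlas yields many plausible training pairs.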
Metric learning approach for diffeomorphic transformation
Metric Learning for Image Registration
Marc Niethammer, Roland Kwitt, François-Xavier Vialard (2019)
https://arxiv.org/abs/1904.09524 / CVPR 2019
https://github.com/uncbiag/registration
Image registration is a key technique in medical image analysis to estimate deformations between image pairs. A good deformation model is important for high-quality estimates. However, most existing approaches use ad-hoc deformation models chosen for mathematical convenience rather than to capture observed data variation. Recent deep learning approaches learn deformation models directly from data.

However, they provide limited control over the spatial regularity of transformations. Instead of learning the entire registration approach, we learn a spatially-adaptive regularizer within a registration model. This allows controlling the desired level of regularity and preserving structural properties of a registration model.

For example, diffeomorphic transformations can be attained. Our approach is a radical departure from existing deep learning approaches to image registration by embedding a deep learning model in an optimization-based registration algorithm to parameterize and data-adapt the registration model itself.

Much experimental and theoretical work remains. More sophisticated CNN models should be explored; the method should be adapted for fast end-to-end regression; more general parameterizations of regularizers should be studied (e.g., allowing sliding); and the approach should be developed for LDDMM.
One/few-shot learning for image registration as well
One Shot Learning for Deformable Medical Image Registration and Periodic Motion Tracking
Tobias Fechter, Dimos Baltas (11 Jul 2019)
https://arxiv.org/abs/1907.04641
Deformable image registration is a very important field of research in medical imaging. Recently multiple deep learning approaches were published in this area showing promising results. However, drawbacks of deep learning methods are the need for a large amount of training datasets and their inability to register unseen images different from the training datasets. One-shot learning comes without the need of large training datasets and has already been proven to be applicable to 3D data.

In this work we present a one-shot registration approach for periodic motion tracking in 3D and 4D datasets. When applied to a 3D dataset the algorithm calculates the inverse of the registration vector field simultaneously. For registration we employed a U-Net combined with a coarse-to-fine approach and a differentiable spatial transformer module. The algorithm was thoroughly tested with multiple 4D and 3D datasets publicly available. The results show that the presented approach is able to track periodic motion and to yield a competitive registration accuracy. Possible applications are the use as a stand-alone algorithm for 3D and 4D motion tracking or in the beginning of studies until enough datasets for a separate training phase are available.
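The differentiable spatial transformer module at the heart of such registration networks is essentially bilinear warping. A minimal 2D numpy sketch (my naming, not the paper's code):

```python
import numpy as np

def bilinear_warp(image, flow):
    """Bilinear sampling as used inside a spatial transformer: look up
    each output pixel at (y + flow_y, x + flow_x) and blend the four
    neighbouring input pixels with the fractional weights."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w].astype(float)
    py = np.clip(ys + flow[0], 0, h - 1)
    px = np.clip(xs + flow[1], 0, w - 1)
    y0 = np.floor(py).astype(int); x0 = np.floor(px).astype(int)
    y1 = np.clip(y0 + 1, 0, h - 1); x1 = np.clip(x0 + 1, 0, w - 1)
    wy = py - y0; wx = px - x0
    top = (1 - wx) * image[y0, x0] + wx * image[y0, x1]
    bot = (1 - wx) * image[y1, x0] + wx * image[y1, x1]
    return (1 - wy) * top + wy * bot
```

Because the output is a smooth function of the flow, gradients can pass through the warp back into the network that predicts the flow, which is what makes end-to-end registration training possible.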
Inpainting with registration
Synthesis and Inpainting-Based MR-CT Registration for Image-Guided Thermal Ablation of Liver Tumors
Dongming Wei, Sahar Ahmad, Jiayu Huo, Wen Peng, Yunhao Ge, Zhong Xue, Pew-Thian Yap, Wentao Li, Dinggang Shen, Qian Wang
[Submitted on 30 Jul 2019] https://arxiv.org/abs/1907.13020
In this paper, we propose a fast MR-CT image registration method to overlay a pre-procedural MR (pMR) image onto an intra-procedural CT (iCT) image for guiding the thermal ablation of liver tumors. By first using a Cycle-GAN model with mutual information constraint to generate a synthesized CT (sCT) image from the corresponding pMR, pre-procedural MR-CT image registration is carried out through traditional mono-modality CT-CT image registration.

At the intra-procedural stage, a partial-convolution-based network is first used to inpaint the probe and its artifacts in the iCT image. Then, an unsupervised registration network is used to efficiently align the pre-procedural CT (pCT) with the inpainted iCT (inpCT) image.

The final transformation from pMR to iCT is obtained by combining the two estimated transformations, i.e., (1) from the pMR image space to the pCT image space (through sCT) and (2) from the pCT image space to the iCT image space (through inpCT).
Registration with Segmentation jointly
Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation
Théo Estienne et al. (2020) Front. Comput. Neurosci., 20 March 2020 | https://doi.org/10.3389/fncom.2020.00017
https://github.com/TheoEst/joint_registration_tumor_segmentation (Keras)

In this paper, we propose a novel, efficient, and multi-task algorithm that addresses the problems of image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results with other recent state-of-the-art methods.
Registration with Segmentation and Synthesis
JSSR: A Joint Synthesis, Segmentation, and Registration System for 3D Multi-Modal Image Alignment of Large-scale Pathological CT Scans
Fengze Liu, Jingzheng Cai, Yuankai Huo, Chi-Tung Cheng, Ashwin Raju, Dakai Jin, Jing Xiao, Alan Yuille, Le Lu, Chien Hung Liao, Adam P Harrison
[Submitted on 25 May 2020] https://arxiv.org/abs/2005.12209

Multi-modal image registration is a challenging problem yet important clinical task in many real applications and scenarios. For medical-imaging-based diagnosis, deformable registration among different image modalities is often required in order to provide complementary visual information, as the first step. During the registration, the semantic information is the key to match homologous points and pixels. Nevertheless, many conventional registration methods are incapable of capturing the high-level semantic anatomical dense correspondences.

In this work, we propose a novel multi-task learning system, JSSR, based on an end-to-end 3D convolutional neural network that is composed of a generator, a register and a segmentor, for the tasks of synthesis, registration and segmentation, respectively.

This system is optimized to satisfy the implicit constraints between different tasks in an unsupervised manner. It first synthesizes the source domain images into the target domain, then an intra-modal registration is applied on the synthesized images and target images. Then we can get the semantic segmentation by applying segmentors on the synthesized images and target images, which are aligned by the same deformation field generated by the registers. The supervision from another fully-annotated dataset is used to regularize the segmentors.
Follow https://paperswithcode.com/ Papers with Code for state-of-the-art
CT
Denoising
Deep Learning
https://github.com/SSinyu/CT-Denoising-Review
Plenty of deep learning attempts
Deep Learning for Low-Dose CT Denoising
Maryam Gholizadeh-Ansari, Javad Alirezaie, Paul Babyn (Submitted on 25 Feb 2019) https://arxiv.org/abs/1902.10127
In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolution, helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, nor from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while only minimally changing the complexity of the network.
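The non-trainable edge-detection layer can be mimicked with fixed directional kernels; the Sobel-style weights below are my assumption, not necessarily the paper's exact filters:

```python
import numpy as np

# Fixed (non-trainable) 3x3 kernels for the horizontal, vertical and
# two diagonal edge directions. Each kernel sums to zero, so flat
# regions produce no response.
EDGE_KERNELS = {
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "diag_main":  np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),
    "diag_anti":  np.array([[0, -1, -2], [1, 0, -1], [2, 1, 0]], float),
}

def edge_detection_layer(image):
    """Apply the four fixed kernels with 'valid' convolution and stack
    the responses as extra feature channels for the network."""
    h, w = image.shape
    out = np.zeros((4, h - 2, w - 2))
    for c, k in enumerate(EDGE_KERNELS.values()):
        for i in range(h - 2):
            for j in range(w - 2):
                out[c, i, j] = np.sum(image[i:i + 3, j:j + 3] * k)
    return out
```

In a real network these would be frozen convolution weights concatenated to the input, giving the learnable layers explicit edge channels for free.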
Few-view CT Reconstruction to reduce radiation dose
Dual Network Architecture for Few-view CT -- Trained on ImageNet Data and Transferred for Medical Imaging
Huidong Xie, Hongming Shan, Wenxiang Cong, Xiaohua Zhang, Shaohua Liu, Ruola Ning, Ge Wang (12 Sept 2019)
https://arxiv.org/abs/1907.01262
Few-view CT image reconstruction is an important topic to reduce the radiation dose. Recently, data-driven algorithms have shown great potential to solve the few-view CT problem. In this paper, we develop a dual network architecture (DNA) for reconstructing images directly from sinograms. In the proposed DNA method, a point-based fully-connected layer learns the backprojection process, requesting significantly less memory than the prior arts do.
This paper is not the first work for reconstructing images directly from raw data, but previously proposed methods require a significantly greater amount of GPU memory for training. It is underlined that our proposed method solves the memory issue by learning the reconstruction process with the point-wise fully-connected layer and other proper network ingredients. Also, by passing only a single point into the fully-connected layer, the proposed method can truly learn the backprojection process. In our study, the DNA network demonstrates superior performance and generalizability. In future works, we will validate the proposed method on images up to dimension 512 × 512 or even 1024 × 1024.
Wasserstein GANs for low-dose CT denoising
Low-Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss
Qingsong Yang et al. (2018) Rensselaer Polytechnic Institute, Troy, NY
https://dx.doi.org/10.1109%2FTMI.2018.2827462 - Cited by 139
Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio (PSNR) is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising.

This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of the optimal transport theory, and promises to improve the performance of GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses more on migrating the data noise distribution from strong to weak statistically. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also trying to keep the critical information at the same time. Promising results have been obtained in our experiments with clinical CT images.
In the future, we plan to incorporate the WGAN-VGG network with more complicated generators such as the networks reported in [Chen et al. 2017; Kang et al. 2016] and extend these networks for image reconstruction from raw data by making a neural network counterpart of the FBP process.
Sinogram pre-filtration and image post-processing are computationally efficient compared to iterative reconstruction. Noise characteristics were well modeled in the sinogram domain for sinogram-domain filtration. However, sinogram data of commercial scanners are not readily available to users, and these methods may suffer from resolution loss and edge blurring. Sinogram data need to be carefully processed, otherwise artifacts may be induced in the reconstructed images. Unlike sinogram denoising, image post-processing directly operates on an image. Many efforts were made in the image domain to reduce LDCT noise and suppress artifacts.

Despite the impressive denoising results with these innovative deep learning network structures, they fall into the category of end-to-end networks that typically use the mean squared error (MSE) between the network output and the ground truth as the loss function. As revealed by recent work [Johnson et al. 2016; Ledig et al. 2016], this per-pixel MSE is often associated with over-smoothed edges and loss of details. As an algorithm tries to minimize per-pixel MSE, it overlooks subtle image textures/signatures critical for human perception. It is reasonable to assume that CT images distribute over some manifolds. From that point of view, the MSE-based approach tends to take the mean of high-resolution patches using the Euclidean distance rather than the geodesic distance. Therefore, in addition to the blurring effect, artifacts such as non-uniform biases are also possible.
Zoomed ROI of the red rectangle in Fig. 7 demonstrates the two attenuation liver lesions in the red and blue circles. The display window is [−160, 240] HU.
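The MSE-plus-perceptual objective discussed above can be sketched as follows (a single fixed convolution stands in for the pretrained VGG feature extractor, and `lam` is a hypothetical weighting; purely illustrative):

```python
import numpy as np

def perceptual_features(image, kernel):
    """Stand-in feature extractor: one fixed 'valid' convolution plus
    ReLU. A real perceptual loss would use a pretrained VGG network."""
    h, w = image.shape
    kh, kw = kernel.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)

def denoising_loss(denoised, truth, kernel, lam=0.1):
    """Combined objective: per-pixel MSE plus a weighted MSE in the
    feature space, which rewards preserving texture/structure."""
    mse = np.mean((denoised - truth) ** 2)
    f_d = perceptual_features(denoised, kernel)
    f_t = perceptual_features(truth, kernel)
    return mse + lam * np.mean((f_d - f_t) ** 2)
```

In WGAN-VGG the per-pixel term is further replaced/augmented by an adversarial (Wasserstein) term, so the denoiser is pushed toward the manifold of realistic routine-dose images rather than toward a blurry per-pixel mean.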
Attention with GANs
Visual Attention Network for Low-Dose CT
Wenchao Du; Hu Chen; Peixi Liao; Hongyu Yang; Ge Wang; Yi Zhang | IEEE Signal Processing Letters (Volume: 26, Issue: 8, Aug. 2019)
https://doi.org/10.1109/LSP.2019.2922851
Noise and artifacts are intrinsic to low-dose computed tomography (LDCT) data acquisition, and will significantly affect the imaging performance. Perfect noise removal and image restoration is intractable in the context of LDCT due to the statistical and the technical uncertainties. In this letter, we apply the generative adversarial network (GAN) framework with a visual attention mechanism to deal with this problem in a data-driven/machine learning fashion.

Our main idea is to inject visual attention knowledge into the learning process of GAN to provide a powerful prior of the noise distribution. By doing this, both the generator and discriminator networks are empowered with visual attention information so that they will not only pay special attention to noisy regions and surrounding structures but also explicitly assess the local consistency of the recovered regions. Our experiments qualitatively and quantitatively demonstrate the effectiveness of the proposed method with clinical CT images.
Cycle-consistentadversarialdenoising forCT
Cycle consistentadversarial denoising‐sized 
networkformultiphasecoronary CT
angiography
EunheeKang HyunJungKoo DongHyunYang JoonBumSeo Jong
ChulYe.MedicalPhysics(2018) https://doi.org/10.1002/mp.13284
We propose an unsupervised learning technique that can remove
the noise of the CT images in the low dose phases‐sized  by learning
from the CT images in the routine dose phases. Although a supervised
learning approach is not applicable due to the differences in the
underlying heart structure in two phases, the images are closely
related in two phases, so we propose a cycle consistent adversarial‐ICH, occurs earlier 
denoising network to learn the mapping between the low and‐sized 
high dosecardiacphases‐sized  .
Experimental results showed that the proposed method effectively
reduces the noise in the low dose CT image while‐ICH, occurs earlier  preserving
detailed texture and edge information. Moreover, thanks to the
cyclic consistency and identity loss, the proposed network does not
create any artificial features that are not present in the input images.
Visual grading and quality evaluation also confirm that the proposed
methodprovidessignificantimprovementindiagnosticquality.
The proposed network can learn the image distributions from the
routine dose cardiac phases, which is a big advantage over the existing‐ICH, occurs earlier 
supervised learning networks that need exactly matched low and‐ICH, occurs earlier 
routine dose CT images. Considering the effectiveness and‐ICH, occurs earlier 
practicability of the proposed method, we believe that the proposed
canbeappliedformanyotherCTacquisitionprotocols.
Example of multiphase coronary
CTA acquisition protocol. Low dose‐ICH, occurs earlier 
acquisition isperformed in phase 1
and 2, whereasroutine dose‐ICH, occurs earlier 
acquisition isperformed in phases
3–10.
Denoising
Insights
Outside CTs
Image Denoising: not necessarily needing noise-free ground truth
Noise2Noise: Learning Image Restoration without Clean Data
Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, Timo Aila (NVIDIA; Aalto University; MIT CSAIL) (Submitted on 12 Mar 2018)
https://arxiv.org/abs/1803.04189 https://github.com/NVlabs/noise2noise
We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at and sometimes exceeding training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising synthetic Monte Carlo images, and reconstruction of undersampled MRI scans -- all corrupted by different processes -- based on noisy data only.

That clean data is not necessary for denoising is not a new observation: indeed, consider, for instance, the classic BM3D algorithm that draws on self-similar patches within a single noisy image. We show that the previously-demonstrated high restoration performance of deep neural networks can likewise be achieved entirely without clean data, all based on the same general-purpose deep convolutional model. This points the way to significant benefits in many applications by removing the need for potentially strenuous collection of clean data.

Finnish Center for Artificial Intelligence FCAI
Published on Nov 19, 2018
https://youtu.be/dcV0OfxjrPQ
As a sanity check, though, it would be nice to have some clean "multiple-frame-averaged" ground truths.
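A tiny numerical illustration of why Noise2Noise works: with zero-mean noise, the L2 minimiser against noisy targets coincides with the clean signal. Here the "network" is just one free value per pixel, so the optimum of the L2 loss is the per-pixel mean of the noisy targets:

```python
import numpy as np

# Toy clean signal and many independently corrupted observations of it.
rng = np.random.default_rng(42)
clean = np.linspace(0.0, 1.0, 16)
noisy_targets = clean + rng.normal(0.0, 0.5, (4000, 16))

# For a per-pixel free parameter, minimising sum of squared errors
# against the noisy targets yields exactly their per-pixel mean,
# which converges to the clean signal as targets accumulate.
estimate = noisy_targets.mean(axis=0)
print(np.max(np.abs(estimate - clean)))  # small residual
```

A real denoiser shares parameters across pixels and images, so it reaches the same averaging effect with only one noisy target per training image.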
[DnCNN] Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising https://arxiv.org/abs/1608.03981 (This was introduced above already)
Noise2Noise: Learning Image Restoration without Clean Data https://arxiv.org/abs/1803.04189 (This was introduced above already)
For benchmarking deep learning methods, unlike previous work [Abdelhamed et al. 2018] that directly tests with the pre-trained models, we re-train these models with the same network architecture and similar hyper-parameters on the FMD dataset from scratch. Specifically, we compare two representative models, one of which requires ground truth (DnCNN) and the other does not (Noise2Noise).

The benchmark results show that deep learning denoising models trained on our FMD dataset outperform other methods by a large margin across all imaging modalities and noise levels.

A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images
Yide Zhang, Yinhao Zhu, Evan Nichols, Qingfei Wang, Siyuan Zhang, Cody Smith, Scott Howard (University of Notre Dame)
(Submitted on 26 Dec 2018 (v1), last revised 5 Apr 2019)
https://arxiv.org/abs/1812.10366 - http://tinyurl.com/y6mwqcjs - https://github.com/bmmi/denoising-fluorescence
Shape Priors for ICH: you can probably forget about it?
Haematoma "goes where it can": model as anomaly? But you probably want to co-segment the hematoma with some more regular shapes?
Automation of CT-based haemorrhagic stroke assessment for improved clinical outcomes: study protocol and design
Betty Chinda, George Medvedev, William Siu, Martin Ester, Ali Arab, Tao Gu, Sylvain Moreno, Ryan C N D'Arcy, Xiaowei Song
BMJ Open | Neurology | Protocol
http://dx.doi.org/10.1136/bmjopen-2017-020260 (2018)
Haemorrhagic stroke is of significant healthcare concern due to its association with high
mortality and lasting impact on the survivors’ quality of life. Treatment decisions
and clinical outcomes depend strongly on the size, spread and location of
the haematoma. Non-contrast CT (NCCT) is the primary neuroimaging modality for
haematoma assessment in haemorrhagic stroke diagnosis. Current procedures do not
allow convenient NCCT-based haemorrhage volume calculation in clinical settings, while
research-based approaches are yet to be tested for clinical utility; there is a
demonstrated need for developing effective solutions. The project under review
investigates the development of an automatic NCCT-based haematoma
computation tool in support of accurate quantification of haematoma volumes.
CT scans showing different shapes of haematoma. The regions of hyperintensities
(bright) indicate the bleeding. Left panel shows it in an elliptical shape. The volume of the
haematoma can be estimated using the ABC/2 method. The red arrow indicates the ‘A’
dimension, while the green arrow is the ‘B’ dimension. Right panel shows the haematoma in a
non-elliptical (irregular) shape that has encroached into the lateral ventricles. The ABC/2 method
cannot be applied to this case.
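The ABC/2 estimate mentioned in the caption is simple enough to state in code (variable names are mine): A and B are the largest perpendicular haematoma diameters on the axial slice with the biggest bleed, and C is the number of slices showing haematoma multiplied by the slice thickness.

```python
def abc_over_2(a_mm, b_mm, n_slices, slice_thickness_mm):
    """ABC/2 haematoma volume estimate, which assumes a roughly
    ellipsoidal bleed: A, B = largest perpendicular diameters (mm) on
    the slice with the largest haemorrhage cross-section,
    C = number of slices containing haematoma * slice thickness (mm).
    Returns volume in millilitres (1 mL = 1000 mm^3)."""
    c_mm = n_slices * slice_thickness_mm
    return (a_mm * b_mm * c_mm) / 2.0 / 1000.0

# e.g. a 50 x 30 mm bleed spanning 8 slices of 5 mm:
print(abc_over_2(50, 30, 8, 5))  # 30.0 mL
```

The formula is the ellipsoid volume (pi/6)ABC with pi/6 rounded to 1/2, which is exactly why it breaks down for the irregular, ventricle-encroaching haematomas shown in the right panel.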
An example showing haematoma with
no clear bleed-parenchyma boundary;
the volume of which cannot be
correctly calculated using existing
automation software and
demonstrating the need for improved
algorithms.
A screenshot of the Quantomo software being used for comparison in validity testing. The top toolbar shows options for selection and estimation of haematoma; the left toolbar shows the measurement panel where the total volume is displayed. The most accurate way of estimating the volume is by going slice by slice in 2D, which can be time-consuming, whereas the 3D estimate tends to misclassify normal tissues surrounding the haematoma.
Image restoration constrained with shape priors
Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation
Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart Cook, Antonio de Marvao, Timothy Dawes, Declan O'Regan, Bernhard Kainz, Ben Glocker, and Daniel Rueckert (Biomedical Image Analysis Group, Imperial College London; MRC Clinical Sciences Centre (CSC), London)
(5 Dec 2017) https://arxiv.org/abs/1705.08302 - Cited by 95
Incorporation of prior knowledge about organ shape and location is key to improve performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge.

The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models.
Transformers as a way of getting the shape prior in?
TETRIS: Template Transformer Networks for Image Segmentation
Matthew Chung Hai Lee, Kersten Petersen, Nick Pawlowski, Ben Glocker, Michiel Schaap (Biomedical Image Analysis Group, Imperial College London / HeartFlow)
10 Apr 2019 (modified: 11 Jun 2019), MIDL 2019
https://openreview.net/forum?id=r1lKJlSiK4 - Cited by 3
http://wp.doc.ic.ac.uk/bglocker/project/semantic-imaging/
In this paper we introduce and compare different approaches for
incorporating shape prior information into neural network
based image segmentation. Specifically, we introduce the concept
of template transformer networks (TeTrIS) where a shape
template is deformed to match the underlying structure of interest
through an end-to-end trained spatial transformer network. This has
the advantage of explicitly enforcing shape priors and is free of
discretisation artefacts by providing a soft partial volume
segmentation. We also introduce a simple yet effective way of
incorporating priors in state-of-the-art pixel-wise binary
classification methods such as fully convolutional networks and
U-net. Here, the template shape is given as an additional input
channel, incorporating this information significantly reduces false
positives. We report results on sub-voxel segmentation of
coronary lumen structures in cardiac computed tomography
showing the benefit of incorporating priors in neural network based image segmentation.
Anatomical shape prior for partially labeled segmentation
Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation
Yuyin Zhou, Zhe Li, Song Bai, Chong Wang, Xinlei Chen, Mei Han, Elliot Fishman, Alan Yuille
(Submitted on 12 Apr 2019)
https://arxiv.org/abs/1904.06346
As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the "background" usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose the Prior-aware Neural Network (PaNN), which explicitly incorporates anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset.
Multi-task learning with shape priors
Shape-Aware Complementary-Task Learning for Multi-Organ Segmentation
Fernando Navarro, Suprosanna Shit, Ivan Ezhov, Johannes Paetzold, Andrei Gafita, Jan Peeken, Stephanie Combs, Bjoern Menze (Submitted on 14 Aug 2019)
https://arxiv.org/abs/1908.05099v1
https://github.com/JunMa11/SegWithDistMap
Multi-organ segmentation in whole-body
computed tomography (CT) is a constant
pre-processing step which finds its
application in organ-specific image retrieval,
radiotherapy planning, and interventional
image analysis. We address this problem
from an organ-specific shape-prior
learning perspective. We introduce the
idea of complementary-task learning
to enforce shape-prior leveraging the
existing target labels.
We propose two complementary-tasks
namely i) distance map regression and
ii) contour map detection to explicitly
encode the geometric properties of each
organ. We evaluate the proposed solution on
the public VISCERAL dataset containing CT
scans of multiple organs.
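Both complementary targets can be derived from an existing label mask alone. A toy 2D numpy sketch (a city-block BFS stands in for a proper distance transform; the function names are hypothetical):

```python
import numpy as np
from collections import deque

def contour_map(mask):
    """Contour target: foreground pixels with >= 1 background 4-neighbour."""
    m = np.pad(mask.astype(bool), 1)
    interior = m[:-2, 1:-1] & m[2:, 1:-1] & m[1:-1, :-2] & m[1:-1, 2:]
    return (m[1:-1, 1:-1] & ~interior).astype(np.uint8)

def distance_map(mask):
    """Distance target: city-block distance of each pixel to the
    background, via multi-source BFS from all background pixels."""
    m = mask.astype(bool)
    dist = np.where(m, -1, 0)
    q = deque(zip(*np.where(~m)))
    while q:
        y, x = q.popleft()
        for ny, nx in ((y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)):
            if 0 <= ny < m.shape[0] and 0 <= nx < m.shape[1] and dist[ny, nx] < 0:
                dist[ny, nx] = dist[y, x] + 1
                q.append((ny, nx))
    return dist

mask = np.zeros((5, 5), dtype=np.uint8); mask[1:4, 1:4] = 1
print(distance_map(mask)[2, 2], contour_map(mask)[2, 2])  # 2 0
```

In the multi-task setting, these two maps would serve as targets for auxiliary regression/detection heads next to the main segmentation head.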
Flagging problematic volumes/slices, as with clinical referrals?
An Alarm System For Segmentation Algorithm Based On Shape Model
Fengze Liu, Yingda Xia, Dong Yang, Alan Yuille, Daguang Xu
(Submitted on 26 Mar 2019) https://arxiv.org/abs/1903.10645
We build an alarm system that will set off alerts when the segmentation result is possibly unsatisfactory, assuming no corresponding ground truth mask is provided. One plausible solution is to project the segmentation results into a low-dimensional feature space and then learn classifiers/regressors to predict their qualities. Motivated by this, in this paper, we learn a feature space using the shape information, which is a strong prior shared among different datasets and robust to the appearance variation of input data. The shape feature is captured using a Variational Auto-Encoder (VAE) network trained with only the ground truth masks.
During testing, segmentation results with bad shapes will not fit the shape prior well, resulting in large loss values. Thus, the VAE is able to evaluate the quality of a segmentation result on unseen data, without using ground truth. Finally, we learn a regressor in the one-dimensional feature space to predict the qualities of segmentation results. Our alarm system is evaluated on several recent state-of-the-art segmentation algorithms for 3D medical segmentation tasks.
Visualization on NIH CT data for pancreas segmentation. The Dice between GT and prediction is 47.06 (real Dice), while the Dice between the prediction and its reconstruction from the VAE is 47.25 (fake Dice). Our method uses the fake Dice to predict the real Dice, which is usually unknown at the inference phase of real applications. This case shows how these two Dice scores are related to each other. In contrast, the uncertainty used in existing approaches mainly distributes on the boundary of the predicted mask, which makes it vague information when detecting failure cases.
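The "fake Dice" idea can be sketched as follows, assuming a reconstruction of the predicted mask is available from a pre-trained VAE shape model (the 0.6 threshold is an arbitrary illustration, not from the paper):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum() + eps)

def shape_alarm(pred_mask, vae_recon, threshold=0.6):
    """Alert when the 'fake Dice' between a prediction and its shape-model
    reconstruction is low, i.e. the mask does not fit the learned prior."""
    return dice(pred_mask, vae_recon) < threshold

plausible = np.ones((8, 8))
print(shape_alarm(plausible, plausible),
      shape_alarm(plausible, np.zeros((8, 8))))  # False True
```

No ground truth appears anywhere: only the prediction and its prior-based reconstruction are compared.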
Image restoration jointly with segmentation and automatic labelling?
CT Image Enhancement Using Stacked Generative Adversarial Networks and Transfer Learning for Lesion Segmentation Improvement
Youbao Tang, Jinzheng Cai, Le Lu, Adam P. Harrison, Ke Yan, Jing Xiao, Lin Yang, Ronald M. Summers
(Submitted on 18 Jul 2018) https://arxiv.org/abs/1807.07144
Automated lesion segmentation from
computed tomography (CT) is an important
and challenging task in medical image analysis.
While many advancements have been made,
there is room for continued
improvements.
One hurdle is that CT images can exhibit high
noise and low contrast, particularly in lower
dosages. To address this, we focus on a
preprocessing method for CT images that uses
stacked generative adversarial networks
(SGAN) approach. The first GAN reduces the
noise in the CT image and the second GAN
generates a higher resolution image with
enhanced boundaries and high contrast.
To make up for the absence of high-quality CT images, we detail how to synthesize a large number of low- and high-quality natural images and use transfer learning with progressively larger amounts of CT images.
Three examples of CT image enhancement results using different methods on original images: INPUT, BM3D, DnCNN, single GAN, our denoising GAN, and our SGAN.
Joint Deep Denoising and Segmentation
DenoiSeg: Joint Denoising and Segmentation
Tim-Oliver Buchholz, Mangal Prakash, Alexander Krull, Florian Jug
[Submitted on 6 May 2020]
https://arxiv.org/abs/2005.02987
https://github.com/juglab/DenoiSeg (TensorFlow)
https://pypi.org/project/denoiseg/
Here we propose DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations. We achieve this by extending Noise2Void, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations.
We reason that the success of our proposed method originates from the fact that similar “skills” are required for denoising and segmentation. The network becomes a denoising expert by seeing all available raw data, while co-learning to segment, even if only a few segmentation labels are available. This hypothesis is additionally fueled by our observation that the best segmentation results on high quality (very low noise) raw data are obtained when moderate amounts of synthetic noise are added.
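One way to sketch a DenoiSeg-style training objective: a denoising loss over all images, and a segmentation loss only over the annotated few. The `alpha` weighting and the exact function shape are assumptions for illustration, not the authors' formulation.

```python
import numpy as np

def denoiseg_loss(denoise_mse, seg_ce, has_labels, alpha=0.5):
    """Batch loss: self-supervised denoising on every image, segmentation
    cross-entropy only where a GT mask exists.

    denoise_mse: (B,) per-image denoising losses
    seg_ce:      (B,) per-image segmentation losses
    has_labels:  (B,) True where an annotation exists
    """
    seg_term = np.where(has_labels, seg_ce, 0.0)
    n_labeled = max(int(has_labels.sum()), 1)
    return alpha * float(denoise_mse.mean()) \
        + (1.0 - alpha) * float(seg_term.sum()) / n_labeled

mse = np.array([2.0, 4.0]); ce = np.array([10.0, 6.0])
print(denoiseg_loss(mse, ce, np.array([False, False])))  # 1.5 (denoising only)
print(denoiseg_loss(mse, ce, np.array([True, False])))   # 6.5
```

The key property is that unannotated images still contribute gradient signal through the denoising term.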
Or even without the segmentation target?
Segmentation-Aware Image Denoising without Knowing True Segmentation
Sicheng Wang, Bihan Wen, Junru Wu, Dacheng Tao, Zhangyang Wang
(Submitted on 22 May 2019)
https://arxiv.org/abs/1905.08965
Several recent works discussed application-driven image restoration neural networks, which are capable of not only removing noise in images but also preserving their semantic-aware details, making them suitable for various high-level computer vision tasks as the pre-processing step. However, such approaches require extra annotations for their high-level vision tasks, in order to train the joint pipeline using hybrid losses. The availability of those annotations is yet often limited to a few image sets, potentially restricting the general applicability of these methods to denoising more unseen and unannotated images.
Motivated by that, we propose a segmentation-aware image denoising model dubbed U-SAID, based on a novel unsupervised approach with a pixel-wise uncertainty loss. U-SAID does not need any ground-truth segmentation map, and thus can be applied to any image dataset. It generates denoised images with comparable or even better quality, and the denoised results show stronger robustness for subsequent semantic segmentation tasks, when compared to either its supervised counterpart or classical "application-agnostic" denoisers.
Moreover, we demonstrate the superior generalizability of U-SAID in three folds, by plugging in its "universal" denoiser without fine-tuning: (1) denoising unseen types of images; (2) denoising as pre-processing for segmenting unseen noisy images; and (3) denoising for unseen high-level tasks.
Deblurring
”Deconvolution”
“data-driven sharpening of the image”
Image deblurring for CT: bone “leaks” into surrounding tissue
Weighted deblurring for bone? Maybe it is intuitively easier to sharpen the bone/brain interfaces? In other words, both your image and labels are probabilistic distributions with point estimates describing the reality at some accuracy.
Strictly speaking, you cannot really assume that pixels/voxels are independent measurements of that “receptive field”. A real-world PSF “smears” the signal.
http://doi.org/10.1155/2015/450341
PET quantification: strategies for partial volume correction. V. Bettinardi, I. Castiglioni, E. De Bernardi & M. C. Gilardi. Clinical and Translational Imaging volume 2, pages 199–218 (2014)
https://doi.org/10.1007/s40336-014-0066-y
https://doi.org/10.1109/NSSMIC.2011.6153678
“Partial-volume effect and a partial-volume correction for the NanoPET/CT™ preclinical PET/CT scanner”
Diagram of partial volume effect. (A) Pixel computed tomography (CT) value with thick slice. (B) Pixel CT value with thin slice. The partial volume effect can be defined as the loss of apparent activity in small objects or regions because of the limited resolution of the imaging system.
https://doi.org/10.3341/jkos.2016.57.11.1671
http://doi.org/10.2967/jnumed.106.035576
deconvolving with the PSF
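The PSF "smearing" and the resulting partial volume effect can be illustrated with a toy numpy simulation: a normalized Gaussian PSF conserves the total signal but lowers the apparent peak value of a small object.

```python
import numpy as np

def gaussian_psf(size=9, sigma=1.5):
    """Normalized 2D Gaussian point spread function."""
    x = np.arange(size) - size // 2
    xx, yy = np.meshgrid(x, x)
    k = np.exp(-(xx ** 2 + yy ** 2) / (2 * sigma ** 2))
    return k / k.sum()

def convolve2d(img, kernel):
    """Naive 'same' convolution (symmetric kernel, so no flip needed)."""
    ph, pw = kernel.shape[0] // 2, kernel.shape[1] // 2
    padded = np.pad(img, ((ph, ph), (pw, pw)))
    out = np.zeros_like(img, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            out[i, j] = np.sum(
                padded[i:i + kernel.shape[0], j:j + kernel.shape[1]] * kernel)
    return out

scene = np.zeros((21, 21)); scene[10, 10] = 1.0   # small bright object
blurred = convolve2d(scene, gaussian_psf())
print(blurred.max() < scene.max())  # True: apparent activity is lost
```

Deconvolution attempts to invert exactly this smearing, which is why the PSF (or an estimate of it) is the central ingredient.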
CT super-resolution with U-Net
Computed tomography super-resolution using deep convolutional neural network
Junyoung Park et al. (2018)
https://doi.org/10.1088/1361-6560/aacdd4
The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low (thick-slice thickness) and high (thin-slice thickness) resolution images using the modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data of existing thin-slice CT images as input and their middle slice as the label.
The extraction and expansion paths of the network with a large receptive field effectively captured the high-resolution features. Although this work mainly focused on resolution improvement, the Z-axis averaging plus super-resolution strategy was also useful for reducing noise.
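The training-pair construction described above (axial averaging of thin slices as the input, the middle thin slice as the label) is easy to sketch:

```python
import numpy as np

def make_sr_training_pair(volume, z, k=3):
    """From a thin-slice volume (Z, H, W): average k adjacent slices around
    index z as the 'thick-slice' input; keep the thin middle slice as label."""
    slab = volume[z - k // 2: z + k // 2 + 1]
    return slab.mean(axis=0), volume[z]

vals = np.array([0.0, 1.0, 4.0, 9.0, 16.0])       # one value per slice
vol = vals[:, None, None] * np.ones((5, 4, 4))
thick, label = make_sr_training_pair(vol, z=2)
print(float(thick[0, 0]), float(label[0, 0]))  # input averages (1+4+9)/3, label keeps 4.0
```

No extra acquisitions are needed: existing thin-slice volumes supply both sides of the training pair.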
Not too many CT super-resolution networks
CT Super-resolution GAN Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE)
Chenyu You, Guang Li, Yi Zhang, Xiaoliu Zhang, Hongming Shan, Shenghong Ju, Zhen Zhao, Zhuiyang Zhang, Wenxiang Cong, Michael W. Vannier, Punam K. Saha, Ge Wang
(Submitted on 10 Aug 2018)
https://arxiv.org/abs/1808.04256
In this paper, we present a semi-supervised deep learning approach to accurately recover high-resolution (HR) CT images from low-resolution (LR) counterparts. Specifically, with the generative adversarial network (GAN) as the building block, we enforce the cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs. We also include joint constraints in the loss function to facilitate structural preservation.
To make further progress, we may also undertake efforts to add more constraints such as the sinogram consistency and the low-dimensional manifold constraint to decipher the relationship between noise, blurry appearances of images and the ground truth, and even develop an adaptive and/or task-specific loss function.
Synthetic X-Ray
A Deep Learning-Based Scatter Correction of Simulated X-ray Images
Heesin Lee and Joonwhoan Lee (2019)
https://doi.org/10.3390/electronics8090944
X-ray scattering significantly limits image quality.
Conventional strategies for scatter reduction based on
physical equipment or measurements inevitably
increase the dose to improve the image quality. In
addition, scatter reduction based on a computational
algorithm could take a large amount of time. We propose
a deep learning-based scatter correction method,
which adopts a convolutional neural network (CNN) for
restoration of degraded images.
Because it is hard to obtain real data from an X-ray
imaging system for training the network, Monte Carlo
(MC) simulation was performed to generate the
training data. For simulating X-ray images of a human
chest, a cone beam CT (CBCT) was designed and
modeled as an example. Then, pairs of simulated images,
which correspond to scattered and scatter-free images,
respectively, were obtained from the model with different
doses. The scatter components, calculated by taking the
differences of the pairs, were used as targets to train the
weight parameters of the CNN.
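The training-target construction is simple to sketch: the scatter component is the difference of a simulated scattered/scatter-free pair, and correction subtracts the predicted component (the toy values below are placeholders, not simulated physics):

```python
import numpy as np

def scatter_training_target(scattered, scatter_free):
    """CNN regression target: the scatter component, i.e. the difference
    between an MC-simulated scattered image and its scatter-free pair."""
    return scattered - scatter_free

def scatter_correct(scattered, predicted_scatter):
    """At inference, subtract the network-predicted scatter component."""
    return scattered - predicted_scatter

primary = np.full((4, 4), 100.0)   # toy scatter-free image
scatter = np.full((4, 4), 25.0)    # toy smooth scatter field
target = scatter_training_target(primary + scatter, primary)
corrected = scatter_correct(primary + scatter, target)
print(np.allclose(corrected, primary))  # True
```

In practice the network only ever sees the scattered image at test time; the scatter-free pair exists only in simulation, which is exactly why the MC pipeline is needed.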
Image deblurring for CT with GANs?
Three dimensional blind image deconvolution for fluorescence microscopy using generative adversarial networks
Soonam Lee, Shuo Han, Paul Salama, Kenneth W. Dunn, Edward J. Delp (Purdue University / Indiana University)
(Submitted on 19 Apr 2019) https://arxiv.org/abs/1904.09974
Due to image blurring, image deconvolution is often used for studying biological structures in fluorescence microscopy. Fluorescence microscopy image volumes inherently suffer from intensity inhomogeneity and blur, and are corrupted by various types of noise which exacerbate image quality at deeper tissue depth. Therefore, quantitative analysis of fluorescence microscopy in deeper tissue still remains a challenge. This paper presents a three dimensional blind image deconvolution method for fluorescence microscopy using 3-way spatially constrained cycle-consistent adversarial networks (CycleGAN). The restored volumes of the proposed deconvolution method and other well-known deconvolution methods, denoising methods, and an inhomogeneity correction method are visually and numerically evaluated.
Using the 3-Way SpCycleGAN, we can successfully restore the blurred and noisy volume to a good quality volume so that deeper volumes can be used for biological research. Future work will include developing a 3D segmentation technique using our proposed deconvolution method as a preprocessing step.
INPUT vs. SpCycleGAN results, shown in the xy and xz planes.
A lot of ideas to steal from (optical) microscopy
A new deep learning method for image deblurring in optical microscopic systems
Huangxuan Zhao et al. (2019)
http://doi.org/10.1002/jbio.201960147
In this paper, we present a deep-learning-based deblurring method that is fast and applicable to optical microscopic imaging systems. We tested the robustness of the proposed deblurring method on publicly available data, simulated data and experimental data (including 2D optical microscopic data and 3D photoacoustic microscopic data), which all showed much improved deblurred results compared to deconvolution. We compared our results against several existing deconvolution methods.
In addition, our method could also replace traditional deconvolution algorithms and become an algorithm of choice in various biomedical imaging systems.
CycleGANs for deblurring can be done for unpaired data
CycleGAN with a Blur Kernel for Deconvolution Microscopy: Optimal Transport Geometry
Sungjun Lim et al. (2019)
https://arxiv.org/abs/1908.09414
In this paper, we present a novel
unsupervised cycle-consistent
generative adversarial network
(cycleGAN) with a linear blur
kernel, which can be used for both
blind- and non-blind image
deconvolution. In contrast to the
conventional cycleGAN approaches
that require two generators, the
proposed cycleGAN approach needs
only a single generator, which
significantly improves the robustness of
network training. We show that the
proposed architecture is indeed a dual
formulation of an optimal
transport problem that uses a
special form of penalized least squares
as transport cost. Experimental results
using simulated and real experimental
data confirm the efficacy of the
algorithm.
Inspiration from Natural Images
LSD2 - Joint Denoising and Deblurring of Short and Long Exposure Images with Convolutional Neural Networks
Janne Mustaniemi, Juho Kannala, Jiri Matas, Simo Särkkä, Janne Heikkilä
(23 Nov 2018)
https://arxiv.org/abs/1811.09485
The paper addresses the problem of acquiring
high-quality photographs with handheld
smartphone cameras in low-light imaging
conditions. We propose an approach based on
capturing pairs of short and long exposure
images in rapid succession and fusing
them into a single high-quality photograph. Unlike
existing methods, we take advantage of both
images simultaneously and perform a joint
denoising and deblurring using a
convolutional neural network. The network is
trained using a combination of real and
simulated data. To that end, we introduce a
novel approach for generating realistic short-long
exposure image pairs. The evaluation shows that
the method produces good images in extremely
challenging conditions and outperforms existing
denoising and deblurring methods. Furthermore,
it enables exposure fusion even in the
presence of motion blur.
Deblurring: plug-and-play framework for existing networks
Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels
Kai Zhang, Wangmeng Zuo, Lei Zhang (Submitted on 29 Mar 2019)
https://arxiv.org/abs/1903.12529
https://github.com/cszn/DPSR (PyTorch)
While deep neural network (DNN) based single image super-resolution (SISR) methods are rapidly gaining popularity, they are mainly designed for the widely-used bicubic degradation, and there still remains the fundamental challenge for them to super-resolve low-resolution (LR) images with arbitrary blur kernels. In the meanwhile, plug-and-play image restoration has been recognized with high flexibility due to its modular structure for easy plug-in of denoiser priors. In this paper, we propose a principled formulation and framework (DPSR) by extending bicubic-degradation-based deep SISR with the help of the plug-and-play framework to handle LR images with arbitrary blur kernels. Specifically, we design a new SISR degradation model so as to take advantage of existing blind deblurring methods for blur kernel estimation. To optimize the new degradation-induced energy function, we then derive a plug-and-play algorithm via the variable splitting technique, which allows us to plug any super-resolver prior rather than the denoiser prior as a modular part.
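The plug-and-play idea can be sketched with half-quadratic splitting, where the prior step is any plugged-in restorer. Here a toy moving-average denoiser stands in for the learned super-resolver prior, on a 1D denoising problem; `mu` and the iteration count are illustrative settings, not from the paper.

```python
import numpy as np

def box_denoiser(x, w=3):
    """Stand-in prior step: a moving average. In DPSR this slot is filled
    by a learned super-resolver/denoiser network instead."""
    return np.convolve(x, np.ones(w) / w, mode="same")

def plug_and_play(y, mu=1.0, iters=20):
    """Half-quadratic splitting for the toy model y = x + n: alternate a
    closed-form data-fidelity step with the plugged-in prior step."""
    x, z = y.copy(), y.copy()
    for _ in range(iters):
        x = (y + mu * z) / (1 + mu)  # argmin_x ||x - y||^2 + mu ||x - z||^2
        z = box_denoiser(x)          # modular prior: any restorer plugs in here
    return x

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 3 * np.pi, 200))
noisy = clean + 0.3 * rng.standard_normal(200)
restored = plug_and_play(noisy)
print(np.mean((restored - clean) ** 2) < np.mean((noisy - clean) ** 2))  # True
```

The modularity is the point: swapping `box_denoiser` for a stronger learned prior changes nothing else in the algorithm.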
Edge-Aware Smoothing: Insights Outside CT
Could be used in a multi-task setting for segmentation, but maybe not the most useful with deep segmentation networks. Should help some simple old-school algorithms.
Image smoothing while keeping edges: edge-aware smoothing →
In theory, image restoration tries to restore the “original image” under the degradation. In contrast, edge-preserving smoothing can be seen as a simplifying enhancement technique that made “old school” algorithms perform better.
e.g. Liis Lindvere et al. (2013): ”Prior to segmentation, the data were subjected to edge-preserving 3D anisotropic diffusion filtering (Perona and Malik, 1990; cited by 13,940)”
Popular algorithms include anisotropic diffusion, the bilateral and trilateral filters, the guided filter and the L0 gradient minimization filter.
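Perona-Malik anisotropic diffusion, the classic edge-preserving smoother cited above, in a short numpy sketch (`kappa` and `dt` are illustrative settings):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.1, dt=0.2):
    """Edge-preserving anisotropic diffusion (Perona & Malik, 1990):
    diffuse within regions, but let the conduction coefficient g() shut
    diffusion down across strong edges."""
    u = img.astype(float).copy()
    g = lambda d: np.exp(-(d / kappa) ** 2)  # conduction coefficient
    for _ in range(n_iter):
        # one-sided differences to the four neighbours (no wrap-around)
        dn = np.roll(u, 1, axis=0) - u; dn[0, :] = 0
        ds = np.roll(u, -1, axis=0) - u; ds[-1, :] = 0
        de = np.roll(u, -1, axis=1) - u; de[:, -1] = 0
        dw = np.roll(u, 1, axis=1) - u; dw[:, 0] = 0
        u = u + dt * (g(dn) * dn + g(ds) * ds + g(de) * de + g(dw) * dw)
    return u

rng = np.random.default_rng(1)
step = np.tile(np.repeat([0.0, 1.0], 16), (32, 1))  # vertical step edge
noisy = step + 0.05 * rng.standard_normal(step.shape)
smoothed = perona_malik(noisy)
```

After a few iterations the flat regions are smoothed while the step edge stays sharp, since the intensity jump at the edge drives g() toward zero.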
Quick and dirty Matlab test with three methods for non-denoised input. The bilateral filter actually does not preserve edges here, and the guide (the input image itself) makes the smoothing take place in the background.
Image smoothing via L0 gradient minimization
https://doi.org/10.1145/2024156.2024208 - Cited by 872
https://youtu.be/jliea54nNFM?t=119
Deep Texture and Structure Aware Filtering Network for Image Smoothing
Kaiyue Lu, Shaodi You, Nick Barnes; The European Conference on Computer Vision (ECCV), 2018, pp. 217-233
http://openaccess.thecvf.com/content_ECCV_2018/html/Kaiyue_Lu_Deep_Texture_and_ECCV_2018_paper.html
Image smoothing: texture bias possible for vasculature? Not likely?
ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel (Submitted on 29 Nov 2018)
https://arxiv.org/abs/1811.12231
Some recent studies suggest a more important role of image
textures. We here put these conflicting hypotheses to a
quantitative test by evaluating CNNs and human observers on
images with a texture-shape cue conflict. We show that
ImageNet-trained CNNs are strongly biased towards recognising
textures rather than shapes, which is in stark contrast to human
behavioural evidence and reveals fundamentally different
classification strategies.
INPUT = DENOISED & EDGE-AWARE SMOOTHED + RESIDUAL NOISE & “TEXTURE” (misc. artifacts)
The smoothed version should be easier to segment, given that no significant data was thrown away (hence the end-to-end constraint of the image restoration block); as a side-effect, the user could obtain a denoised version for visualization.
Image smoothing: texture bias possible for vasculature? Or could there be?
Is Texture Predictive for Age and Sex in Brain MRI?
Nick Pawlowski, Ben Glocker
Biomedical Image Analysis Group, Imperial College London, UK
15 Apr 2019 (modified: 11 Jun 2019), MIDL 2019 Conference
https://arxiv.org/abs/1811.12231
Deep learning builds the foundation for many medical
image analysis tasks where neural networks are often
designed to have a large receptive field to incorporate
long spatial dependencies. Recent work has shown that
large receptive fields are not always necessary for
computer vision tasks on natural images. Recently
introduced BagNets (Brendel and Bethge, 2019) have
shown that on natural images, neural networks can perform
complex classification tasks by only interpreting
texture information rather than global structure.
BagNets interpret a neural network as a bag-of-features
classifier that is composed of a localised feature
extractor and a classifier that acts on the average bag-encoding.
We explore whether this translates to certain medical imaging tasks such as age and sex prediction from T1-weighted brain MRI scans.
We have generalised the concept of BagNets to the setting of 3D images and general regression tasks. We have shown that a BagNet with a receptive field of (9 mm)³ yields surprisingly accurate predictions of age and sex from T1-weighted MRI scans. However, we find that localised predictions of age and sex do not yield easily interpretable insights into the workings of the neural network, which will be subject of future work. Further, we believe that more accurate localised predictions could lead to advanced clinical insights similar to (Becker et al., 2018; Cole et al., 2018).
Image smoothing with image restoration?
In theory, an additional “deep intermediate target” could help the final segmentation result, as you want your network “to pop out” the vasculature, without the texture, from the background.
In practice then, think of how to either get the intermediate target in such a way that you do not throw any details away (see Xu et al. 2015), or employ a Noise2Noise type of network for edge-aware smoothing as well. And check the use of bilateral kernels in deep learning (see e.g. Barron and Poole 2015; Jampani et al. 2016; Gharbi et al. 2017; Su et al. 2019). The proposal of Su et al. 2019 seems like a good starting point if you are into making this happen.
RAW → after IMAGE RESTORATION → edge-aware IMAGE SMOOTHING
Unsupervised image smoothing for “ground truth”?
Image smoothing via unsupervised learning
Qingnan Fan, Jiaolong Yang, David Wipf, Baoquan Chen, Xin Tong
Shandong University, Beijing Film Academy; Microsoft Research Asia; Peking University
(Submitted on 7 Nov 2018) https://arxiv.org/abs/1811.02804 | https://github.com/fqnchina/ImageSmoothing - Cited by 4
In this paper, we present a unified unsupervised (label-free) learning framework that facilitates generating flexible and high-quality smoothing effects by directly learning from data using deep convolutional neural networks (CNNs). The heart of the design is the training signal as a novel energy function that includes an edge-preserving regularizer, which helps maintain important yet potentially vulnerable image structures, and a spatially-adaptive Lp flattening criterion, which imposes different forms of regularization onto different image regions for better smoothing quality.
We implement a diverse set of image smoothing solutions employing the unified framework targeting various applications such as image abstraction, pencil sketching, detail enhancement, texture removal and content-aware image manipulation, and obtain results comparable with or better than previous methods.
We have also shown that training a deep neural network on a large corpus of raw images without ground truth labels can adequately solve the underlying minimization problem and generate impressive results. Moreover, the end-to-end mapping from a single input image to its corresponding smoothed counterpart by the neural network can be computed efficiently on both GPU and CPU, and the experiments have shown that our method runs orders of magnitude faster than traditional methods. We foresee a wide range of applications that can benefit from our new pipeline.
Elimination of low-amplitude details while maintaining high-contrast edges using our method and representative traditional methods L0 and SGF. L0 regularization has a strong flattening effect. However, the side effect is that some spurious edges arise in local regions with smooth gradations, such as those on the cloud. SGF is dedicated to elimination of fine-scale high-contrast details while preserving large-scale salient structures. However, semantically-meaningful information such as the architecture and flagpole can be over-smoothed. In contrast, our result exhibits a more appropriate, targeted balance between color flattening and salient edge preservation.
We also demonstrate the binary edge map B detected by our heuristic detection method, which shows consistent image structure with our style image. Note that binary edge maps are only used in the objective function for training; they are not used in the test stage and are presented here only for comparison purposes.
‘CT Normalization’
Across different scanners
More papers published on MRI normalization, but some also for CT
Normalization of multicenter CT radiomics by a generative adversarial network method
Yajun Li, Guoqiang Han, Xiaomei Wu, Zhenhui Li, Ke Zhao, Zhiping Zhang, Zaiyi Liu and Changhong Liang. Physics in Medicine & Biology (25 March 2020)
https://doi.org/10.1088/1361-6560/ab8319
Aim: to reduce the variability of radiomics features caused by computed tomography (CT) imaging protocols through using a generative adversarial network (GAN) method. Material and Methods: In this study, we defined a set of images acquired with a certain imaging protocol as a domain, and a total of 4 domains (A, B, C, and T [target]) from 3 different scanners were included.
Finally, to investigate whether our proposed method could facilitate multicenter radiomics analysis, we built the lasso classifier to distinguish short-term from long-term survivors based on a certain group.
Our proposed GAN-based normalization method could reduce the variability of radiomics features caused by different CT imaging protocols and facilitate multicenter radiomics analysis.
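As a much simpler classical point of comparison for cross-scanner harmonization (not the paper's GAN method), histogram matching maps one scanner's intensity CDF onto another's:

```python
import numpy as np

def match_histogram(source, reference):
    """Map source intensities so that their empirical CDF matches the
    reference CDF (the paper instead learns this mapping with a GAN)."""
    _, s_idx, s_cnt = np.unique(source.ravel(),
                                return_inverse=True, return_counts=True)
    r_vals, r_cnt = np.unique(reference.ravel(), return_counts=True)
    s_cdf = np.cumsum(s_cnt) / source.size
    r_cdf = np.cumsum(r_cnt) / reference.size
    return np.interp(s_cdf, r_cdf, r_vals)[s_idx].reshape(source.shape)

rng = np.random.default_rng(0)
scanner_a = rng.uniform(0.0, 1.0, (100, 100))    # toy 'scanner A' intensities
scanner_t = rng.uniform(10.0, 20.0, (100, 100))  # toy target-domain intensities
out = match_histogram(scanner_a, scanner_t)
print(round(float(out.mean())))  # 15
```

Unlike the learned method, this operates on global intensity statistics only and cannot harmonize spatially varying or texture-level protocol effects, which is part of the motivation for GAN-based normalization.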
CT
Segmentation
Traditional
Background
Probabilistic segmentation from 2005
Unified segmentation
John Ashburner and Karl J. Friston
NeuroImage Volume 26, Issue 3, 1 July 2005, Pages 839-851
https://doi.org/10.1016/j.neuroimage.2005.02.018
A probabilistic framework is presented that enables image registration, tissue classification, and bias correction to be combined within the same generative model. A derivation of a log-likelihood objective function for the unified model is provided. The model is based on a mixture of Gaussians and is extended to incorporate a smooth intensity variation and nonlinear registration with tissue probability maps. A strategy for optimising the model parameters is described, along with the requisite partial derivatives of the objective function.
The hierarchical modelling scheme could be extended in order to generate tissue probability maps and other priors using data from many subjects. This would involve a very large model, whereby many images of different subjects are simultaneously processed within the same hierarchical framework. Strategies for creating average (in both shape and intensity) brain atlases are currently being devised (Ashburner et al., 2000, Avants and Gee, 2004, Joshi et al., 2004). Such approaches could be refined in order to produce average shaped tissue probability maps and other data for use as priors.
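The mixture-of-Gaussians-with-spatial-priors idea can be sketched with a toy EM loop over 1D intensities; there is no bias field or registration here, so this is purely illustrative of the tissue-classification component.

```python
import numpy as np

def gmm_segment(intensities, priors, n_iter=30):
    """Toy 'unified segmentation' flavour: EM for a K-class Gaussian
    mixture where each voxel's mixing weights come from spatially varying
    tissue probability maps.

    intensities: (N,) voxel intensities
    priors:      (N, K) tissue priors, rows summing to 1
    returns:     (N, K) posterior tissue responsibilities
    """
    N, K = priors.shape
    mu = np.quantile(intensities, (np.arange(K) + 0.5) / K)  # spread-out init
    var = np.full(K, intensities.var() / K + 1e-6)
    for _ in range(n_iter):
        # E-step: prior-weighted Gaussian likelihoods -> responsibilities
        lik = np.exp(-0.5 * (intensities[:, None] - mu) ** 2 / var) \
              / np.sqrt(2 * np.pi * var)
        resp = priors * lik
        resp = resp / (resp.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: update per-class means and variances
        nk = resp.sum(axis=0) + 1e-12
        mu = (resp * intensities[:, None]).sum(axis=0) / nk
        var = (resp * (intensities[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return resp

rng = np.random.default_rng(0)
inten = np.concatenate([rng.normal(0, 1, 500), rng.normal(10, 1, 500)])
resp = gmm_segment(inten, np.full((1000, 2), 0.5))
print(resp[:500, 0].mean() > 0.9, resp[500:, 1].mean() > 0.9)  # True True
```

In the full unified model, the priors themselves are warped tissue probability maps and the bias field is estimated jointly, all within one objective function.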
The tissue probability maps for grey matter, white matter, CSF, and “other”.
Results from applying the method to the BrainWeb data. The first column shows the tissue probability maps for grey and white matter. The first row of columns two, three, and four show the 100% RF BrainWeb T1, T2, and PD images after they are warped to match the tissue probability maps (by inverting the spatial transform). Below the warped BrainWeb images are the corresponding segmented grey and white matter.
This figure shows the underlying generative model for the BrainWeb simulated T1, T2, and PD images with 100% intensity nonuniformity. The BrainWeb images are shown on the left. The right hand column shows data simulated using the estimated generative model parameters for the corresponding BrainWeb images.
Our current implementation uses a low-dimensional approach, which parameterises the deformations by a linear combination of about a thousand cosine transform bases (Ashburner and Friston, 1999). This is not an especially precise way of encoding deformations, but it can model the variability of overall brain shape. Evaluations have shown that this simple model can achieve a registration accuracy comparable to other fully automated methods with many more parameters (Hellier et al., 2001, Hellier et al., 2002).
Follow-up with Ashburner et al. (2018) #1
Generative diffeomorphic modelling of large MRI data sets for probabilistic template construction
Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner
NeuroImage Volume 166, 1 February 2018, Pages 117-134
https://doi.org/10.1016/j.neuroimage.2017.10.060
One of the main challenges, which is encountered in all neuroimaging studies, originates from the difficulty of mapping between different anatomical shapes. In particular, a fundamental problem arises from having to ensure that this mapping operation preserves topological properties and that it provides, not only anatomical, but also functional overlap between distinct instances of the same anatomical object (Brett et al., 2002). This explains the rapid development of the discipline known as computational anatomy (Grenander and Miller, 1998), which aims to provide mathematically sound tools and algorithmic solutions to model high-dimensional anatomical shapes, with the ultimate goal of encoding, or accounting for, their variability.
In this paper we propose a general modelling scheme and a training algorithm, which, given a large cross-sectional data set of MR scans, can learn a set of average-shaped tissue probability maps, either in an unsupervised or semi-supervised manner. This is achieved by building a hierarchical generative model of MR data, where image intensities are captured using multivariate Gaussian mixture models, after diffeomorphic warping (Ashburner and Friston, 2011, Joshi et al., 2004) of a set of unknown probabilistic templates, which act as anatomical priors. In addition, intensity inhomogeneity artefacts are explicitly represented in our model, meaning that the input data does not need to be bias corrected prior to model fitting.
● We present a generative modelling framework to process large MRI
data sets.
● The proposed framework can serve to learn average-shaped tissue
probability maps and empirical intensity priors.
● We explore semi-supervised learning and variational inference
schemes.
● The method is validated against state-of-the-art tools using publicly
available data.
To the best of our knowledge, the particular mathematical formulation that we adopt to combine such modelling techniques has never been adopted before. The resulting approach enables processing simultaneously a large number of MR scans in a groupwise fashion, and particularly it allows the tasks of image segmentation, image registration, bias correction and atlas construction to be solved by optimising a single objective function, with one iterative algorithm. This is in contrast to a commonly adopted approach to mathematical modelling, which involves a pipeline of multiple model fitting strategies that solve sub-problems sequentially, without taking into account their circular dependencies.
Follow-up with Ashburner et al. (2018) #2
Generative diffeomorphic modelling of large MRI data sets for probabilistic template construction
Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner
NeuroImage Volume 166, 1 February 2018, Pages 117-134
https://doi.org/10.1016/j.neuroimage.2017.10.060
OASIS data set. The first data set consists of thirty five T1-weighted MR
scans from the OASIS (Open Access Series of Imaging Studies) database
(Marcus et al., 2007). The data is freely available from the web site
http://www.oasis-brains.org, where details on the population demographics
and acquisition protocols are also reported. Additionally, the selected thirty five
subjects are the same ones that were used within the 2012 MICCAI Multi-Atlas
Labeling Challenge (Landman and Warfield, 2012).
Balgrist data set. The second data set consists of brain and cervical cord
scans of twenty healthy adults, acquired at University Hospital Balgrist with a
3T scanner (Siemens Magnetom Verio). Magnetisation-prepared rapid
acquisition gradient echo (MPRAGE) sequences, at 1 mm isotropic resolution,
were used to obtain T1-weighted data, while PD-weighted images of the same
subjects were acquired with a multi-echo 3D fast low-angle shot (FLASH)
sequence, within a whole-brain multi-parameter mapping protocol
(Weiskopf et al., 2013; Helms et al., 2008).
IXI data set. The third and last data set comprises twenty five T1-, T2- and
PD-weighted scans of healthy adults from the freely available IXI brain
database, which were acquired at Guy's Hospital, in London, on a 1.5T system
(Philips Medical Systems Gyroscan Intera). Additional information regarding
the demographics of the population, as well as the acquisition protocols, can
be found at http://brain-development.org/ixi-dataset.
Tissue probability maps obtained by applying the presented groupwise generative model to
a multispectral data set comprising head and neck scans of eighty healthy adults, from three
different databases.
Follow-up with Ashburner et al. (2018) #3
Generative diffeomorphic modelling of large MRI
data sets for probabilistic template construction
Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner
NeuroImage Volume 166, 1 February 2018, Pages 117-134
https://doi.org/10.1016/j.neuroimage.2017.10.060
The accuracy of the algorithm presented here is compared to that achieved by the
groupwise image registration method described in Avants et al. (2010), whose
implementation is publicly available as part of the Advanced Normalization Tools
(ANTs) package, through the web site http://stnava.github.io/ANTs/. Indeed, the
symmetric diffeomorphic registration framework implemented in ANTs has
established itself as the state of the art of medical image nonlinear
spatial normalisation (Klein et al., 2009).
Brain segmentation accuracy of the presented method in comparison to the
SPM12 image segmentation algorithm. Boxplots indicate the distributions of
Dice score coefficients, with overlaid scatter plots of the estimated scores.
Red stars denote outliers.
Modelling unseen data
Further validation experiments were performed to quantify the
accuracy of the framework described in this paper to model unseen
data, that is to say data that was not included in the atlas generation
process.
In particular, we evaluated registration accuracy using data from the
Internet Brain Segmentation Repository (IBSR), which is provided by
the Centre for Morphometric Analysis at Massachusetts General
Hospital (http://www.cma.mgh.harvard.edu/ibsr/). Experiments to
assess bias correction and segmentation accuracy were instead
performed on synthetic T1-weighted brain MR scans from the
Brainweb database (http://brainweb.bic.mni.mcgill.ca/), which were
simulated using a healthy anatomical model under different noise and
bias conditions.
Dice scores between the estimated and ground truth segmentations for
brain white matter and brain gray matter, under different noise and bias
conditions, for synthetic T1-weighted data.
CTseg as head CT pipeline from Duke University
A Method to Estimate Brain Volume from Head CT
Images and Application to Detect Brain Atrophy in
Alzheimer Disease
V. Adduru, S.A. Baum, C. Zhang, M. Helguera, R. Zand, M.
Lichtenstein, C.J. Griessenauer and A.M. Michael
American Journal of Neuroradiology February 2020, 41
(2) 224-230; DOI: https://doi.org/10.3174/ajnr.A6402
https://github.com/NuroAI/CTSeg
We present an automated head CT segmentation
method (CTseg) to estimate total brain volume and total
intracranial volume. CTseg adapts a widely used brain MR
imaging segmentation method from the Statistical
Parametric Mapping toolbox using a CT-based
template for initial registration. CTseg was tested and
validated using head CT images from a clinical archive.
In current clinical practice, brain atrophy is assessed by
inaccurate and subjective “eyeballing” of CT images.
Manual segmentation of head CT images is prohibitively
arduous and time-consuming. CTseg can potentially help
clinicians to automatically measure total brain volume and
detect and track atrophy in neurodegenerative diseases.
In addition, CTseg can be applied to large clinical archives
for a variety of research studies.
CTSeg pipeline for intracranial space and brain parenchyma segmentation from head CT images.
Within parentheses is the 3D coordinate space of the image. MNI indicates Montreal Neurological
Institute.
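CTseg's end point is a volume estimate (total brain volume, total intracranial volume). As a minimal sketch of that final step only, not the CTseg code itself (function name and spacing values are hypothetical): once a binary segmentation mask and the voxel spacing are known, volume is simply the voxel count times the voxel size.

```python
import numpy as np

def volume_ml(mask, spacing_mm=(5.0, 0.5, 0.5)):
    """Volume of a binary segmentation mask in millilitres.

    spacing_mm: per-axis voxel size (slice thickness, row, col) in mm;
    1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Toy example: a 40x40x40-voxel "brain" at 1 mm isotropic resolution
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[10:50, 10:50, 10:50] = 1
print(volume_ml(mask, spacing_mm=(1.0, 1.0, 1.0)))  # 64.0 mL
```

The same arithmetic applies to hematoma volumes later in this deck; only the mask changes.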
CT
Segmentation
and Detection
Deep Learning
A badly written review, but it probably lists relevant papers
Automatic Neuroimage Processing and
Analysis in Stroke – A Systematic Review
Roger M. Sarmento et al. (2019)
IEEE Reviews in Biomedical Engineering (23 August 2019)
https://doi.org/10.1109/RBME.2019.2934500
Some points require greater attention, such as low sensitivity, optimization of
the algorithms, reduction of false positives, and improvement of the
identification and segmentation of lesions of different sizes and shapes.
There is also a need to improve the classification of different stroke types
and subtypes.
Another important challenge to overcome is the lack of studies aimed at
identifying and classifying stroke in its subtypes: intracerebral
hemorrhage, subarachnoid hemorrhage, and brain ischemia due to
thrombosis, embolism, or systemic hypoperfusion. There is also no
record of work focusing on the detection and segmentation of the
penumbra zone, a region that presents a high probability of recovery if
identified and medicated quickly and correctly.
Moreover, transient ischemic attack (TIA) does not receive the focus it
merits from researchers. Although it is a transient and reversible
alteration, it can be a warning sign of an imminent ischemic stroke. In
many cases doctors are not able to distinguish a stroke from a TIA
before the symptoms appear. Neuroimaging such as CT and MRI is not
made for this type of accident, but there is a type of MRI, called diffusion-
weighted imaging (DWI), which can show areas of brain tissue that are
not working and thus help to diagnose TIA. Potential research directions
would be locating the TIA, the affected area, and the severity of the
accident.
DeepSymNet
Combining symmetric and standard deep convolutional
representations for detecting brain hemorrhage
Arko Barman; Victor Lopez-Rivera; Songmi Lee; Farhaan S. Vahidy; James Z. Fan;
Sean I. Savitz; Sunil A. Sheth; Luca Giancardo (16 March 2020)
https://doi.org/10.1117/12.2549384
https://doi.org/10.3389/fnins.2019.01053
https://www.uth.edu/news/story.htm?id=5b8f2ad1-e3dd-4ad0-aca3-c845d7364953
We compare and contrast symmetry-aware and symmetry-naive feature
representations, and their combination, for the detection of brain
hemorrhage (BH) using CT imaging. One of the proposed
architectures, e-DeepSymNet, achieves AUC 0.99 [0.97-1.00] for BH
detection. An analysis of the activation values shows that both
symmetry-aware and symmetry-naive representations offer
complementary information, with the symmetry-aware representation
contributing 20% towards the final predictions.
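The core idea behind a symmetry-aware branch can be illustrated with a toy sketch (all names are hypothetical; the paper uses learned CNN encoders, here a fixed quadrant-mean "encoder" stands in): compare features of the image with features of its left-right mirror, so a unilateral hyperdensity shows up as asymmetry.

```python
import numpy as np

def encode(img):
    """Stand-in for a shared CNN encoder: per-quadrant mean intensities."""
    h, w = img.shape
    return np.array([img[:h//2, :w//2].mean(), img[:h//2, w//2:].mean(),
                     img[h//2:, :w//2].mean(), img[h//2:, w//2:].mean()])

def symmetry_aware_features(img):
    """Concatenate symmetry-naive features with the |f(x) - f(mirror(x))|
    asymmetry term, in the spirit of Siamese symmetry-aware designs."""
    f = encode(img)
    f_flip = encode(img[:, ::-1])   # left-right mirror across the midline
    return np.concatenate([f, np.abs(f - f_flip)])

img = np.zeros((8, 8))
img[:4, 4:] = 1.0                   # simulated unilateral hyperdensity
feats = symmetry_aware_features(img)
print(feats)                        # asymmetry terms light up for the lesion side
```

A perfectly symmetric input yields zero asymmetry terms, which is exactly the prior these architectures exploit.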
Qure validation dataset available from The Lancet paper
Deep learning algorithms for detection of critical
findings in head CT scans: a retrospective study
Sasank Chilamkurthy, Rohit Ghosh, Swetha Tanamala, Mustafa Biviji DNB,
Norbert G Campeau, Vasantha Kumar Venugopal, Vidur Mahajan, Pooja Rao,
Prashant Warier
The Lancet
Volume 392, Issue 10162, 1–7 December 2018, Pages 2388-2396
https://doi.org/10.1016/S0140-6736(18)31645-3
We retrospectively collected a dataset containing 313 318
head CT scans together with their clinical reports from
around 20 centres in India between Jan 1, 2011, and June 1,
2017.
We describe the development and validation of fully
automated deep learning algorithms that are trained to
detect abnormalities requiring urgent attention on non-
contrast head CT scans. The trained algorithms detect five
types of intracranial haemorrhage (namely,
intraparenchymal, intraventricular, subdural, extradural, and
subarachnoid) and calvarial (cranial vault) fractures. The
algorithms also detect mass effect and midline shift, both
used as indicators of severity of the brain injury.
The algorithms produced good results for normal scans
without bleed, scans with medium to large sized
intraparenchymal and extra-axial haemorrhages,
haemorrhages with fractures, and in predicting midline shift.
There was room for improvement for small-sized
intraparenchymal, intraventricular haemorrhages
and haemorrhages close to the skull base. In this study,
we did not separate chronic and acute haemorrhages. This
approach resulted in occasional prediction of scans with
infarcts and prominent cerebrospinal fluid spaces as
intracranial haemorrhages. However, the false positive rates of
the algorithms should not impede its usability as a triaging tool.
Deep Learning for ICH Segmentation
Precise diagnosis of intracranial hemorrhage
and subtypes using a three-dimensional joint
convolutional and recurrent neural network
Hai Ye, Feng Gao, Youbing Yin, Danfeng Guo, Pengfei Zhao, Yi Lu, Xin Wang,
Junjie Bai, Kunlin Cao, Qi Song, Heye Zhang, Wei Chen, Xuejun Guo, Jun Xia
European Radiology (2019) 29:6191–6201
https://doi.org/10.1007/s00330-019-06163-2
It took our algorithm less than 30 s on average to process a 3D CT scan. For the
two-type classification task (predicting bleeding or not), our algorithm achieved
excellent values (≥ 0.98) across all reporting metrics on the subject level.
The proposed method was able to accurately detect ICH and its subtypes with
fast speed, suggesting its potential for assisting radiologists and physicians in
their clinical diagnosis workflow.
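The "≥ 0.98 across all reporting metrics on the subject level" refers to subject-wise (not slice-wise) classification metrics. A minimal sketch of computing them from binary bleed/no-bleed predictions (function and labels hypothetical):

```python
import numpy as np

def subject_level_metrics(y_true, y_pred):
    """Accuracy, sensitivity and specificity for a binary bleed/no-bleed task,
    computed per subject (one label per CT scan, not per slice)."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    return {"accuracy": (tp + tn) / len(y_true),
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp)}

m = subject_level_metrics([1, 1, 0, 0, 1, 0], [1, 1, 0, 0, 0, 0])
print(m)  # one missed bleed: sensitivity 2/3, specificity 1.0
```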
Deep Learning for ICH Segmentation: Review of Studies
Intracranial Hemorrhage Segmentation Using Deep Convolutional Model (18 Oct 2019) https://arxiv.org/pdf/1910.08643.pdf
Deep Learning for ICH Segmentation
Intracranial Hemorrhage Segmentation
Using Deep Convolutional Model
Murtadha D. Hssayeni, Muayad S. Croock, Aymen Al-Ani, Hassan Falah
Al-khafaji, Zakaria A. Yahya, and Behnaz Ghoraani (18 Oct 2019)
https://alpha.physionet.org/content/ct-ich/1.0.0/
We developed a deep FCN, called U-Net, to segment the ICH regions from the
CT scans in a fully automated manner. The method achieved a Dice coefficient
of 0.31 for the ICH segmentation based on 5-fold cross-validation.
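The reported 0.31 is the Dice coefficient, the standard overlap metric throughout these segmentation papers; a minimal numpy version for binary masks:

```python
import numpy as np

def dice(pred, truth, eps=1e-7):
    """Dice coefficient 2|A∩B| / (|A| + |B|) between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)

a = np.zeros((10, 10)); a[2:6, 2:6] = 1      # 16-pixel prediction
b = np.zeros((10, 10)); b[4:8, 4:8] = 1      # 16-pixel ground truth, partial overlap
print(round(dice(a, b), 3))  # 4 overlapping pixels -> 2*4/32 = 0.25
```

Dice of 1.0 is perfect overlap; 0.31 on this dataset reflects how hard small, heterogeneous bleeds are.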
Data Description
The dataset is released in JPG (and soon NIfTI) format at PhysioNet
(http://alpha.physionet.org/content/ct-ich/1.0.0/).
A dataset of 82 CT scans was collected, including 36 scans for patients
diagnosed with intracranial hemorrhage with the following types:
Intraventricular, Intraparenchymal, Subarachnoid, Epidural and Subdural. Each
CT scan for each patient includes about 30 slices with 5 mm slice-thickness.
The mean and std of the patients' age were 27.8 and 19.5 years, respectively. 46 of
the patients were male and 36 were female. Each slice of the non-contrast CT
scans was reviewed by two radiologists, who recorded the hemorrhage types if a
hemorrhage occurred, or noted if a fracture occurred. The radiologists also
delineated the ICH regions in each slice. There was a consensus between the
radiologists. The radiologists did not have access to the clinical history of the
patients, and used a down-sampled version of the CT scan.
During data collection, syngo by Siemens Medical Solutions was first used to
read the CT DICOM files and save two videos (avi format) using brain and bone
windows, respectively. Second, a custom tool was implemented in Matlab and
used to read the avi files, record the radiologist annotations, delineate
hemorrhage region and save it as white region in a black 650x650 image (jpg
format). Gray-scale 650x650 images (jpg format) for each CT slice were also
saved for both windows (brain and bone).
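The brain/bone "windows" above are standard linear HU windowings; a sketch with commonly used (but here assumed) window level/width values, e.g. brain at level 40 / width 80:

```python
import numpy as np

def apply_window(hu, level, width):
    """Linear windowing: map HU in [level - width/2, level + width/2] to [0, 255]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    img = np.clip(hu, lo, hi)
    return ((img - lo) / (hi - lo) * 255.0).astype(np.uint8)

hu = np.array([[-1000.0, 0.0, 40.0, 80.0, 1000.0]])  # air, water, blood-ish, bone-ish
brain = apply_window(hu, level=40, width=80)     # typical brain window
bone = apply_window(hu, level=500, width=3000)   # typical bone window
print(brain)  # [[  0   0 127 255 255]]
```

Acute blood (roughly 50-80 HU) lands near the top of the brain window, which is why it appears hyperdense on these images.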
Kaggle Challenges eventually for all types of data
RSNA Intracranial Hemorrhage Detection
Identify acute intracranial hemorrhage and its subtypes
$25,000 Prize Money, Radiological Society of North America
https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/data
petteriTeikari/RSNA_kaggle_CT_wrangle
https://www.kaggle.com/anjum48/reconstructing-3d-
volumes-from-metadata
Kaggle Challenge: how the data was annotated
Construction of a Machine Learning Dataset through
Collaboration: The RSNA 2019 Brain CT Hemorrhage
Challenge
Adam E. Flanders, Luciano M. Prevedello, George Shih, Safwan S. Halabi, Jayashree Kalpathy-Cramer, Robyn Ball, John T. Mongan, Anouk Stein,
Felipe C. Kitamura, Matthew P. Lungren, Gagandeep Choudhary, Lesley Cala, Luiz Coelho, Monique Mogensen, Fanny Morón, Elka Miller, Ichiro
Ikuta, Vahe Zohrabian, Olivia McDonnell, Christie Lincoln, Lubdha Shah, David Joyner, Amit Agarwal, Ryan K. Lee, Jaya Nath, for the RSNA-ASNR
2019 Brain Hemorrhage CT Annotators
https://doi.org/10.1148/ryai.2020190211
The amount of volunteer labor required to compile, curate, and annotate a large
complex dataset of this type was substantial. A work commitment from our volunteer
force was set at no more than 10 hours of aggregate effort per annotator, recognizing
that there would be a wide range in performance per individual. An examination could be
accurately reviewed and labeled in a minute or less. On the basis of these estimates, it was
projected that the 60 annotators could potentially evaluate and effectively label 36,000
examinations at a rate of one per minute for a maximum of 10 hours of effort. This
provided a buffer of 11,000 potential annotations.
Even though the use case was limited to hemorrhage labels alone, it took thousands of
radiologist-hours to produce a final working dataset in the stipulated time period. To optimally
mitigate against misclassification in the training data, the training, validation, and test datasets
should have employed multiple reviewers. The size of the final dataset and the narrow time
frame to deliver it prohibited multiple evaluations for all of the available examinations. The auditing
mechanism employed for training new annotators showed that the most common error
produced was under-labeling of data, namely tagging an entire examination with a single
image label. Raising awareness of this error early in the process before the annotators began
working on the actual data helped to reduce the frequency of this error and improve consistency of
the single evaluations.
As this is a public dataset, it is available for further enhancement and use including the
possibility of adding multiple readers for all studies, performance of detailed segmentations,
performance of federated learning on the separate datasets, and evaluation of the
examinations for disease entities beyond hemorrhage.
Kaggle Challenge competition entry example
Intracranial Hemorrhage Classification
using CNN. Hyun Joo Lee, Department of Mechanical Engineering,
Stanford University (CS230 Fall 2019)
http://cs230.stanford.edu/projects_fall_2019/reports/26248009.pdf
In this study, multi-class classification is conducted
to diagnose intracranial hemorrhages and its five
subtypes: intraparenchymal, intraventricular,
subarachnoid, subdural, and epidural. Transfer
learning is applied based on ResNet-50, and
linear windowing is compared with sigmoid
windowing in terms of performance.
Due to the high imbalance in the number of
examples available, an undersampling approach
was taken to provide a better balanced training
dataset. As a result, the combination of sigmoid
windowing and combining three windows of
interest showed the highest F1 score.
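The linear vs sigmoid windowing comparison can be sketched as follows (the exact sigmoid parameterization used in the report is assumed here): linear windowing saturates hard at the window edges, while the sigmoid keeps a little contrast outside them.

```python
import numpy as np

def linear_window(hu, level, width):
    """Hard linear windowing: clip HU to [level - width/2, level + width/2]."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def sigmoid_window(hu, level, width):
    """Smooth alternative: monotone in HU, never saturates exactly to 0 or 1."""
    return 1.0 / (1.0 + np.exp(-4.0 * (hu - level) / width))

hu = np.array([-100.0, 0.0, 40.0, 80.0, 200.0])
lin = linear_window(hu, level=40, width=80)   # brain window 40/80
sig = sigmoid_window(hu, level=40, width=80)
print(lin)  # hard 0/1 saturation outside [0, 80] HU
print(sig)  # monotone everywhere, 0.5 at the window level
```

The preserved gradient outside the window is one plausible reason sigmoid windowing helped the downstream classifier.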
Small datasets get detailed annotations
Expert-level detection of acute intracranial
hemorrhage on head computed tomography using
deep learning
Weicheng Kuo, Christian Häne, Pratik Mukherjee, Jitendra
Malik, and Esther L. Yuh. PNAS, October 21, 2019
https://doi.org/10.1073/pnas.1908021116
We trained a fully convolutional neural network with 4,396
head CT scans performed at the University of California at
San Francisco and affiliated hospitals and compared the
algorithm’s performance to that of 4 American Board of
Radiology (ABR) certified radiologists on an independent
test set of 200 randomly selected head CT scans.
https://www.ucsf.edu/news/2019/10/415681/ai-rivals-exper
t-radiologists-detecting-brain-hemorrhages
But the training images used by the researchers were
packed with information, because each small
abnormality was manually delineated at the pixel
level. The richness of this data – along with other steps
that prevented the model from misinterpreting
random variations or “noise” as meaningful –
created an extremely accurate algorithm. A deep learning algorithm recognizes abnormal
CT scans of the head in neurological
emergencies in 1 second. The algorithm also
classifies the pathological subtype of each
abnormality: red - subarachnoid hemorrhage,
purple - contusion, green - subdural hemorrhage.
Five cases judged negative by at least 2 of 4
radiologists, but positive for acute
hemorrhage by both the algorithm and the
gold standard.
3D CNNs for segmentation
3D Deep Neural Network Segmentation of
Intracerebral Hemorrhage: Development and
Validation for Clinical Trials
Matthew Sharrock, W. Andrew Mould, Hasan Ali, Meghan Hildreth, Daniel F Hanley, John Muschelli
https://www.medrxiv.org/content/10.1101/2020.03.05.20031823v1
https://github.com/msharrock/deepbleed
Using an automated pipeline and 2D and 3D deep neural networks, we
show that we can quickly and accurately estimate ICH volume
with high agreement with time-consuming manual segmentation. The
training and validation datasets include significant heterogeneity in terms
of pathology, such as the presence of intraventricular (IVH) or
subdural hemorrhages (SDH), as well as variable image acquisition
parameters. We show that deep neural networks trained with an
appropriate anatomic context in the network receptive field can
effectively perform ICH segmentation, but those without enough context
will overestimate hemorrhage along the skull and around
calcifications in the ventricular system.
The natural history of ICH includes intraventricular extension of blood, particularly for
hemorrhages close to the ventricles, and the success of segmentation in this context
has not previously been accounted for in segmentation studies based on either MRI
or CT. This is a clear example of the need to understand the natural history of the
underlying neuropathology, as well as to account for the variability in acquisition,
when developing models for the clinical context, tasks that are frequently
overlooked. This is especially so in the realm of DNNs, where models with millions of
parameters can be finely tuned to aspects of a curated dataset from a single
institution that are not applicable externally. In our view, when decisions regarding
potential therapeutic intervention are to be made, they should be informed by
metrics and models validated in a prospective clinical trial on multicenter
data, designed with a full understanding of the underlying pathology.
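The "anatomic context in the network receptive field" point can be made concrete with the standard receptive-field recurrence for stacked conv/pool layers (the layer stack below is hypothetical, not the DeepBleed architecture):

```python
def receptive_field(layers):
    """Receptive field of a stack of (kernel, stride) conv/pool layers.

    Standard recurrence: rf_out = rf_in + (kernel - 1) * jump; jump *= stride.
    """
    rf, jump = 1, 1
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Hypothetical encoder: three 3x3x3 conv blocks, each followed by 2x pooling
layers = [(3, 1), (2, 2), (3, 1), (2, 2), (3, 1), (2, 2)]
print(receptive_field(layers))  # 22 voxels of context per output voxel
```

With millimetre-scale voxels, a 22-voxel receptive field sees only about 2 cm of anatomy, too little to tell hemorrhage from skull or ventricular calcification, which is the failure mode the authors describe.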
Standard U-Net with Dense CRF
ICHNet: Intracerebral Hemorrhage (ICH)
Segmentation Using Deep Learning
Mobarakol Islam (NUS Graduate School for Integrative Sciences and Engineering (NGS), National University of Singapore), Parita
Sanghani, Angela An Qi See, Michael Lucas James, Nicolas
Kon Kam King, Hongliang Ren
International MICCAI Brainlesion Workshop, BrainLes 2018: Brainlesion: Glioma,
Multiple Sclerosis, Stroke and Traumatic Brain Injuries
https://doi.org/10.1007/978-3-030-11723-8_46
ICHNet evolves by integrating dilated
convolution neural network (CNN) with
hypercolumn features where a modest number
of pixels are sampled and corresponding
features from multiple layers are concatenated.
Due to freedom of sampling pixels rather than
image patch, this model trains within the brain
region and ignores the CT background
padding. This boosts the convergence time
and accuracy by learning only healthy and
defected brain tissues. To overcome the class
imbalance problem, we sample an equal
number of pixels from each class. We also
incorporate 3D conditional random field
(3D CRF, deepmedic/dense3dCrf) to smoothen the
predicted segmentation as a post-processing
step. ICHNet demonstrates 87.6% Dice
accuracy in hemorrhage segmentation, which is
comparable to radiologists.
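The equal-per-class pixel sampling ICHNet uses against class imbalance can be sketched like this (function name and counts hypothetical):

```python
import numpy as np

def balanced_pixel_sample(labels, n_per_class, rng=None):
    """Return flat indices with an equal number of pixels drawn per class,
    sampling with replacement when a class has fewer pixels than requested."""
    if rng is None:
        rng = np.random.default_rng(0)
    flat = labels.ravel()
    picks = []
    for c in np.unique(flat):
        idx = np.flatnonzero(flat == c)
        picks.append(rng.choice(idx, size=n_per_class,
                                replace=len(idx) < n_per_class))
    return np.concatenate(picks)

labels = np.zeros((64, 64), dtype=int)
labels[30:34, 30:34] = 1                      # tiny hemorrhage class: 16 of 4096 px
idx = balanced_pixel_sample(labels, n_per_class=8)
print(len(idx), labels.ravel()[idx].sum())    # 16 samples, 8 of them hemorrhage
```

Without such sampling the loss would be dominated by the thousands of background pixels per slice.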
“Sharper boundary” tweaks also for ICH
Ψ-Net: Focusing on the border
areas of intracerebral hemorrhage
on CT images
Zhuo Kuang, Xianbo Deng, Li Yu, Hongkui Wang, Tiansong Li,
Shengwei Wang. Computer Methods and Programs in
Biomedicine (Available online 14 May 2020)
https://doi.org/10.1016/j.cmpb.2020.105546
Highlights
●
A CNN-based architecture is proposed for the ICH
segmentation on CT images. It consists of a novel model,
named Ψ-Net, and a multi-level training strategy.
●
With the help of two attention blocks, Ψ-Net can firstly
suppress the irrelevant information, and secondly
capture the spatial contextual information to fine-tune the
border areas of the ICH.
●
The multi-level training strategy includes two levels of
tasks: classification of the whole slice and pixel-wise
segmentation. This structure speeds up the rate of
convergence and alleviates the vanishing gradient and class
imbalance problems.
●
Compared to previous works on ICH segmentation,
our method takes less time for training and obtains more
accurate and robust performance.
You can see all the multi-task “Dice + Hausdorff” papers,
e.g. Caliva et al. 2019, Karimi et al. 2019
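The Hausdorff term in those "Dice + Hausdorff" losses measures worst-case boundary disagreement, complementing Dice's area overlap; a minimal numpy version for small boundary point sets (names hypothetical):

```python
import numpy as np

def hausdorff(a, b):
    """Symmetric Hausdorff distance between point sets a (N,2) and b (M,2):
    the largest distance from any point in one set to its nearest
    neighbour in the other set."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise dists
    return max(d.min(axis=1).max(), d.min(axis=0).max())

pred = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0]])
truth = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0]])
print(hausdorff(pred, truth))  # 3.0: the stray truth point at x=4 dominates
```

A single misplaced boundary point blows up the Hausdorff distance while barely moving Dice, which is exactly why the two are combined.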
TBI segmentation very similar to ICH segmentation
Multiclass semantic segmentation and quantification of traumatic brain
injury lesions on head CT using deep learning: an algorithm development
and multicentre validation study
Miguel Monteiro*, Virginia F J Newcombe*, Francois Mathieu, Krishma Adatia, Konstantinos Kamnitsas,
Enzo Ferrante, Tilak Das, Daniel Whitehouse, Daniel Rueckert, David K Menon†, Ben Glocker
Funding: European Union 7th Framework Programme, Hannelore Kohl Stiftung, One Mind, NeuroTrauma Sciences, Integra
Neurosciences, European Research Council Horizon 2020
Lancet Digital Health 2020 https://doi.org/10.1016/S2589-7500(20)30085-6
CT is the most common imaging modality in traumatic brain injury (TBI).
However, its conventional use requires expert clinical interpretation and does
not provide detailed quantitative outputs, which may have prognostic
importance. We aimed to use deep learning to reliably and efficiently
quantify and detect different lesion types.
We show the ability of a CNN to separately segment, quantify, and detect
multiclass haemorrhagic lesions and perilesional oedema. These
volumetric lesion estimates allow clinically relevant quantification of lesion
burden and progression, with potential applications for personalised treatment
strategies and clinical research in TBI.
Future work needs to focus on the optimal incorporation of such algorithms
into clinical practice, which must be accompanied by a rigorous
assessment of performance, strengths, and weaknesses. Such algorithms will
find clear research applications, and, if adequately validated, may be used to
help facilitate radiology workflows by flagging scans that require urgent
attention, aid reporting in resource-constrained environments, and detect
pathoanatomically relevant features for prognostication and a better
understanding of lesion progression.
Perihematomal edema segmentation
Fully Automated Segmentation Algorithm
for Perihematomal Edema Volumetry
After Spontaneous Intracerebral
Hemorrhage
Natasha Ironside, Ching-Jen Chen, Simukayi Mutasa, Justin L. Sim, Dale Ding,
Saurabh Marfatiah, David Roh, Sugoto Mukherjee, Karen C. Johnston, Andrew
M. Southerland, Stephan A. Mayer, Angela Lignelli, Edward Sander Connolly
2 Feb 2020, Stroke. 2020;51:815–823
https://doi.org/10.1161/STROKEAHA.119.026764
Perihematomal edema (PHE) is a promising surrogate
marker of secondary brain injury in patients with
spontaneous intracerebral hemorrhage, but it can be
challenging to accurately and rapidly quantify. The
aims of this study are to derive and internally validate a fully
automated segmentation algorithm for volumetric analysis of
PHE.
Inpatient computed tomography scans of 400
consecutive adults with spontaneous, supratentorial
intracerebral hemorrhage enrolled in the Intracerebral
Hemorrhage Outcomes Project (2009–2018) were
separated into training (n=360) and test (n=40) datasets.
The fully automated segmentation algorithm accurately
quantified PHE volumes from computed tomography scans
of supratentorial intracerebral hemorrhage patients
with high fidelity and greater efficiency compared with manual
and semiautomated segmentation methods. External
validation of fully automated segmentation for assessment of
PHE iswarranted.
Examples of perihematomal edema (PHE) segmentation in the test dataset.
Column A shows the input axial, noncontrast computed tomography slice.
Column B shows the corresponding manual PHE segmentation (blue line).
Column C shows the corresponding semi-automated PHE segmentation (red line).
Column D shows the corresponding fully automated PHE segmentation (green line).
In the end: an end-to-end system for the upstream
restoration/segmentation with downstream tasks
such as prognosis and prescriptive treatment.
In practice, there are not a lot of end-to-end networks
even for prognosis, probably due to the lack of such
open-sourced datasets.
“Simultaneous” Classification and Segmentation
JCS: An Explainable COVID-19 Diagnosis System
Prognosis models
mostly outside the scope of this presentation,
but here a small teaser for “the actual” analysis
of the imaging features with non-imaging features
Best to look for inspiration from the modeling of other pathologies, as there is not much specifically on ICH
A Wide and Deep Neural Network
for Survival Analysis from
Anatomical Shape and Tabular
Clinical Data
Sebastian Pölsterl, Ignacio Sarasua, Benjamín Gutiérrez-Becker, and
Christian Wachinger (9 Sept 2019)
https://arxiv.org/abs/1909.03890
Feature-Guided Deep Radiomics
for Glioblastoma Patient Survival
Prediction
Zeina A. Shboul, Mahbubul Alam, Lasitha Vidyaratne, Linmin Pei,
Mohamed I. Elbakary and Khan M. Iftekharuddin. Front. Neurosci., 20
September 2019 | https://doi.org/10.3389/fnins.2019.00966
Deep learning survival analysis
enhances the value of hybrid
PET/CT for long-term
cardiovascular event prediction
L E Juarez-Orozco, J W Benjamins, T Maaniitty, A Saraste, P Van Der
Harst, J Knuuti. European Heart Journal, Volume 40, Issue Supplement_1,
October 2019, ehz748.0177,
https://doi.org/10.1093/eurheartj/ehz748.0177
Deep Recurrent Survival Analysis
Kan Ren et al. (2019)
https://doi.org/10.1609/aaai.v33i01.33014798
Use of radiomics for the prediction of local control of brain metastases after stereotactic radiosurgery
https://doi.org/10.1093/neuonc/noaa007 (20 January 2020) by Andrei Mouraviev et al.
https://towardsdatascience.com/deep-learning-for-survival-analysis-fdd1505293c9
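Most of the deep survival models above train on some variant of the Cox negative partial log-likelihood; a minimal numpy sketch (ties ignored for simplicity; function name and toy data are hypothetical):

```python
import numpy as np

def cox_neg_log_partial_likelihood(risk, time, event):
    """Negative Cox partial log-likelihood for risk scores (higher = worse).

    For each observed event i: log sum_{j: time_j >= time_i} exp(risk_j) - risk_i.
    Ties are ignored here (a Breslow/Efron correction would handle them).
    """
    order = np.argsort(-time)                      # descending survival time
    risk, event = risk[order], event[order]
    log_cumsum = np.log(np.cumsum(np.exp(risk)))   # log of risk-set sums
    return float(np.sum(event * (log_cumsum - risk)) / max(event.sum(), 1))

risk = np.array([2.0, 1.0, 0.0])    # predicted hazards
time = np.array([1.0, 2.0, 3.0])    # highest-risk patient fails first
event = np.array([1, 1, 0])         # last patient censored
print(cox_neg_log_partial_likelihood(risk, time, event))
```

Concordant predictions (high risk, short survival) give a lower loss than anti-concordant ones, which is the signal the deep survival networks above optimize.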
Prescriptive models
mostly outside the scope of this presentation:
how to treat the patient based on
the features measured from the
patient, i.e. “precision medicine”
Reinforcement learning and control models
Is Deep Reinforcement Learning Ready for
Practical Applications in Healthcare? A
Sensitivity Analysis of Duel-DDQN for Sepsis
Treatment
MingYu Lu, Zachary Shahn, Daby Sow, Finale Doshi-Velez, Li-wei H. Lehman
(MIT; IBM Research, NYC; Harvard University)
[Submitted on 8 May 2020]
https://arxiv.org/abs/2005.04301
In this work, we perform a sensitivity analysis on a state-of-the-art RL
algorithm (Dueling Double Deep Q-Networks) applied to
hemodynamic stabilization treatment strategies for septic
patients in the ICU.
●
Treatment History: Excluding treatment history leads to
aggressive treatment policies.
●
Time bin durations: Longer time bins result in more aggressive
policies.
●
Rewards: Long-term objectives lead to more aggressive and less
stable policies.
●
Embedding model: High sensitivity to architecture.
●
Random Restarts: DRL policies have many local optima.
●
Subgroup Analysis: Grouping by Sequential Organ Failure
Assessment (SOFA) score finds DQN agents are
under-aggressive in high-risk patients and over-aggressive
in low-risk patients.
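The "Duel" in Duel-DDQN refers to the dueling aggregation of a state-value head and a per-action advantage head; mean-subtracting the advantages makes the decomposition identifiable. A toy sketch (values hypothetical):

```python
import numpy as np

def dueling_q(value, advantages):
    """Dueling aggregation: Q(s,a) = V(s) + A(s,a) - mean_a A(s,a).

    Subtracting the mean advantage pins down the otherwise ambiguous
    split between the value and advantage heads."""
    return value + (advantages - advantages.mean())

adv = np.array([1.0, 3.0, 2.0])    # advantage head over 3 treatment actions
q = dueling_q(value=5.0, advantages=adv)
print(q, q.argmax())  # [4. 6. 5.] 1 -> the greedy policy picks action 1
```

The greedy action depends only on the advantages; the shared value head mainly stabilizes learning when many treatment actions have similar outcomes.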
https://photos.app.goo.gl/pptobiD22E9osiWf6
Finale Doshi-Velez @ NeurIPS Machine Learning for Health 2018 (ML4H)
Associate Professor of Computer Science, Harvard Paulson School of Engineering and Applied Sciences (SEAS)
Deep Reinforcement Learning in Medicine
Anders Jonsson. Kidney Dis 2019;5:18–22 https://doi.org/10.1159/000492670
Deep Reinforcement Learning and Simulation as a Path Toward
Precision Medicine
Brenden K. Petersen, Jiachen Yang, Will S. Grathwohl, Chase Cockrell, Claudio Santiago, Gary An, and
Daniel M. Faissol. 6 Jun 2019 https://doi.org/10.1089/cmb.2018.0168
Deep Reinforcement Learning for Dynamic Treatment Regimes
on Medical Registry Data
Ying Liu, Brent Logan, Ning Liu, Zhiyuan Xu, Jian Tang, and Yanzhi Wang
Healthc Inform. 2017 Aug;2017:380–385. doi: 10.1109/ICHI.2017.45
Dynamic Treatment Recommendation with unclear targets
Supervised Reinforcement Learning with Recurrent Neural
Network for Dynamic Treatment Recommendation
Lu Wang, Wei Zhang, Xiaofeng He, Hongyuan Zha
KDD '18 Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery
& Data Mining https://doi.org/10.1145/3219819.3219961
The data-driven research on treatment recommendation involves two main branches: supervised
learning (SL) and reinforcement learning (RL) for prescription. SL-based prescription tries to
minimize the difference between the recommended prescriptions and the indicator signal, which
denotes doctor prescriptions. Several pattern-based methods generate recommendations by utilizing the
similarity of patients [Hu et al. 2016, Sun et al. 2016], but it is challenging for them to directly learn the relation between
patients and medications. Recently, some deep models achieve significant improvements by learning a
nonlinear mapping from multiple diseases to multiple drug categories [Bajor and Lasko 2017, Wang et al. 2018,
Wang et al. 2017]. Unfortunately, a key concern for these SL-based models still remains unresolved, i.e., the ground
truth of a “good” treatment strategy being unclear in the medical literature [Marik 2015]. More
importantly, the original goal of clinical decision-making also considers the outcome of patients instead of only
matching the indicator signal.
The above issues can be addressed by reinforcement learning for dynamic treatment regimes
(DTR) [Murphy 2003, Robins 1986]. A DTR is a sequence of tailored treatments according to the dynamic
states of patients, which conforms to clinical practice. As a real example shown in Figure 1, treatments
for the patient vary dynamically over time with the accruing observations. The optimal DTR is
determined by maximizing the evaluation signal, which indicates the long-term outcome of patients, due to the
delayed effect of the current treatment and the influence of future treatment choices
[Chakraborty and Moodie 2013]. With the desired properties of dealing with delayed reward and
inferring the optimal policy based on non-optimal prescription behaviors, a set of reinforcement learning
methods have been adapted to generate optimal DTRs for life-threatening diseases, such as schizophrenia,
non-small cell lung cancer, and sepsis [e.g. Nemati et al. 2016]. Recently, some studies employ deep RL to
solve the DTR problem based on large-scale EHRs [Peng et al. 2019, Raghu et al. 2017, Weng et al. 2016]. Nevertheless, these
methods may recommend treatments that are obviously different from doctors' prescriptions due to the lack
of supervision from doctors, which may cause high risk [Shen et al. 2013] in clinical practice. In
addition, the existing methods are challenging for analyzing multiple diseases and the complex medication
space.
In fact, the evaluation signal and indicator signal play complementary roles,
where the indicator signal guarantees basic effectiveness and the evaluation
signal helps optimize the policy. Imitation learning (e.g. Finn et al. 2016) utilizes the
indicator signal to estimate a reward function for training robots by supposing the
indicator signal is optimal, which is not in line with clinical reality. Supervised
actor-critic (e.g. Zhu et al. 2017) uses the indicator signal to pre-train a
"guardian" and then combines the "actor" output and "guardian" output to send
low-risk actions to robots. However, the two types of signals are trained
separately and cannot learn from each other. Inspired by these studies, we
propose a novel deep architecture to generate recommendations for
more general DTRs involving multiple diseases and medications, called
Supervised Reinforcement Learning with Recurrent
Neural Network (SRL-RNN). The main novelty of SRL-RNN is to
combine the evaluation signal and indicator signal at the same time to learn an
integrated policy. More specifically, SRL-RNN consists of an off-policy actor-
critic framework that learns complex relations among medications, diseases, and
individual characteristics. The "actor" in the framework is not only influenced by
the evaluation signal, as in traditional RL, but also adjusted by the doctors'
behaviors to ensure safe actions. An RNN is further adopted to capture the
dependence of the longitudinal and temporal records of patients for the
POMDP setting. Note that treatment and prescription are used
interchangeably in this paper.
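The idea of jointly weighting the two signals can be sketched as a single actor loss. This is a minimal illustration, not the SRL-RNN implementation: the trade-off weight `epsilon`, the toy Q-value, and the medication probabilities below are all assumptions.

```python
import numpy as np

def srl_actor_loss(q_value, actor_probs, doctor_action, epsilon=0.5):
    """Blend the RL evaluation signal (critic Q-value) with the supervised
    indicator signal (cross-entropy against the doctor's prescription).

    q_value       : critic's estimate of long-term outcome for the actor's action
    actor_probs   : actor's probability over medications (1-D array, sums to 1)
    doctor_action : index of the medication the doctor actually prescribed
    epsilon       : trade-off between outcome maximization and imitation
    """
    rl_term = -q_value                                    # maximize expected outcome
    sl_term = -np.log(actor_probs[doctor_action] + 1e-8)  # imitate the doctor
    return epsilon * rl_term + (1.0 - epsilon) * sl_term

probs = np.array([0.7, 0.2, 0.1])
loss = srl_actor_loss(q_value=1.2, actor_probs=probs, doctor_action=0)
```

With `epsilon=1.0` the loss reduces to pure RL; with `epsilon=0.0` it is plain supervised imitation of the doctor.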
Precision Medicine as a Control Problem
Precision medicine as a control problem: Using simulation and deep reinforcement learning to discover adaptive, personalized multi-cytokine therapy for sepsis
Brenden K. Petersen, Jiachen Yang, Will S. Grathwohl, Chase Cockrell, Claudio Santiago, Gary An, Daniel M. Faissol (Submitted on 8 Feb 2018)
https://arxiv.org/abs/1802.10440 - Cited by 8 - Related articles
In this study, we attempt to discover an effective cytokine mediation treatment
strategy for sepsis using a previously developed agent-based model that
simulates the innate immune response to infection: the Innate
Immune Response agent-based model (IIRABM). Previous
attempts at reducing mortality with multi-cytokine mediation using the IIRABM
have failed to reduce mortality across all patient parameterizations and motivated
us to investigate whether adaptive, personalized multi-cytokine
mediation can control the trajectory of sepsis and lower patient
mortality. We used the IIRABM to compute a treatment policy in which
systemic patient measurements are used in a feedback loop to inform future
treatment.
Using deep reinforcement learning, we identified a policy that achieves 0%
mortality on the patient parameterization on which it was trained. More
importantly, this policy also achieves 0.8% mortality over 500 randomly selected
patient parameterizations with baseline mortalities ranging from 1–99% (with an
average of 49%) spanning the entire clinically plausible parameter space of the
IIRABM. These results suggest that adaptive, personalized multi-cytokine
mediation therapy could be a promising approach for treating sepsis. We
hope that this work motivates researchers to consider such an approach as part
of future clinical trials. To the best of our knowledge, this work is the first to
consider adaptive, personalized multi-cytokine mediation therapy for sepsis, and
the first to exploit deep reinforcement learning on a biological
simulation.
Sepsis seems to present the best problem for hospitals from a health-economics view
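The feedback loop described in the abstract (systemic measurements informing the next dose) can be sketched as a toy closed-loop simulation. Everything below is an illustrative assumption: the proportional-dose policy, the single "inflammation" state, and the damping dynamics stand in for the learned deep RL policy and the IIRABM.

```python
import random

def treatment_policy(state):
    """Toy stand-in for a learned policy: dose proportional to measured
    inflammation (the real work trains a deep RL policy on the IIRABM)."""
    return min(1.0, max(0.0, 0.1 * state["inflammation"]))

def simulate_episode(steps=48, seed=0):
    """Closed loop: each systemic measurement feeds back into the next dose."""
    rng = random.Random(seed)
    state = {"inflammation": 8.0}
    history = []
    for _ in range(steps):
        dose = treatment_policy(state)
        # toy dynamics: mediation damps inflammation, noise perturbs it
        state["inflammation"] = max(
            0.0, state["inflammation"] * (1.0 - 0.5 * dose) + rng.gauss(0, 0.2))
        history.append((dose, state["inflammation"]))
    return history

traj = simulate_episode()
```

The point of the sketch is the loop structure: state → policy → treatment → new state, the same control-theoretic framing the paper applies to sepsis.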
Surface Extraction and Parcellation
Hard to do "MRI-level" parcellations, but we might want to visualize at least the volumes as mesh or NURBS
Surface (mesh or NURBS) from volumetric data
FastSurfer - A fast and accurate deep learning based neuroimaging pipeline
Leonie Henschel et al. German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
https://arxiv.org/abs/1910.03866 (9 Oct 2019)
To this end, we introduce an advanced deep learning architecture
capable of whole brain segmentation into 95 classes in
under 1 minute, mimicking FreeSurfer’s anatomical
segmentation and cortical parcellation. The network architecture
incorporates local and global competition via competitive dense
blocks and competitive skip pathways, as well as multi-slice
information aggregation that specifically tailor network
performance towards accurate segmentation of both
cortical and sub-cortical structures.
Further, we perform fast cortical surface reconstruction and
thickness analysis by introducing a spectral spherical
embedding and by directly mapping the cortical labels from the
image to the surface. This approach provides a full FreeSurfer
alternative for volumetric analysis (within 1 minute) and
surface-based thickness analysis (within only around
1h run time). For sustainability of this approach we perform
extensive validation: we assert high segmentation accuracy on
several unseen datasets, measure generalizability and
demonstrate increased test-retest reliability, and increased
sensitivity to disease effects relative to traditional FreeSurfer.
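The "directly mapping the cortical labels from the image to the surface" step can be approximated by a nearest-voxel lookup. This is a simplified sketch of the idea, not FastSurfer's actual mapping; the toy label volume and vertices are assumptions.

```python
import numpy as np

def sample_labels_at_vertices(label_volume, vertices):
    """Assign each surface vertex the label of its nearest voxel.

    label_volume : (X, Y, Z) integer label map (e.g. a parcellation)
    vertices     : (N, 3) vertex coordinates in voxel space
    """
    idx = np.rint(np.asarray(vertices)).astype(int)
    # clamp rounded coordinates so they stay inside the volume bounds
    for axis, size in enumerate(label_volume.shape):
        idx[:, axis] = np.clip(idx[:, axis], 0, size - 1)
    return label_volume[idx[:, 0], idx[:, 1], idx[:, 2]]

labels = np.zeros((4, 4, 4), dtype=int)
labels[2:, :, :] = 7  # toy parcellation: one region fills half the volume
verts = np.array([[0.2, 1.0, 1.0], [3.4, 2.0, 2.0]])
out = sample_labels_at_vertices(labels, verts)  # → [0, 7]
```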
Mesh, e.g. Deep Marching Cubes / DeepSDF
Deep Marching Cubes: Learning Explicit Surface Representations
Yiyi Liao, Simon Donné, Andreas Geiger (2018)
http://www.cvlibs.net/publications/Liao2018CVPR.pdf - Cited by 42
https://github.com/yiyiliao/deep_marching_cubes
Marching cubes: A high resolution 3D surface construction algorithm (1987) W. E. Lorensen, H. E. Cline
doi: 10.1145/37401.37422
Cited by 14,986 articles
In future work, we plan to adapt our method to higher resolution outputs using octree techniques
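As a concrete starting point, the classic 1987 marching cubes algorithm is readily available in scikit-image. The snippet below extracts a triangle mesh from a toy binary volume standing in for a segmented hematoma; the sphere mask is an illustrative assumption, not a CT pipeline.

```python
import numpy as np
from skimage import measure

# Toy volumetric mask: a sphere of radius 10 standing in for a segmented
# hematoma (a real pipeline would feed in the CT segmentation output).
grid = np.mgrid[:32, :32, :32]
dist = np.sqrt(((grid - 16) ** 2).sum(axis=0))
volume = (dist < 10).astype(np.float32)

# Classic marching cubes: iso-surface at the 0.5 level of the binary mask.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
# verts: (N, 3) vertex coordinates; faces: (M, 3) triangle vertex indices
```

The resulting `verts`/`faces` pair can be exported (e.g. as OBJ/STL) for Unity, Unreal, or WebGL viewers discussed later in this deck.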
Curriculum DeepSDF
Yueqi Duan, Haidong Zhu, He Wang, Li Yi, Ram Nevatia, Leonidas J. Guibas (March 2020)
https://arxiv.org/abs/2003.08593
https://github.com/haidongz-usc/Curriculum-DeepSDF PyTorch
Mesh → Unreal/Unity/WebGL, etc. if you are into visualization
Helping brain surgeons practice with real-time simulation. August 30, 2019 by Sébastien Lozé
https://www.unrealengine.com/en-US/spotlights/helping-brain-surgeons-practice-with-real-time-simulation
In their 2018 paper Enhancement Techniques for Human Anatomy Visualization, Hirofumi Seo and Takeo Igarashi state that "Human anatomy is so complex that just visualizing it in traditional ways is insufficient for easy understanding…" To address this problem, Seo has proposed a practical approach to brain surgery using real-time rendering with Unreal Engine.
Now Seo and his team have taken this concept a step further with their 2019 paper Real-Time Virtual Brain Aneurysm Clipping Surgery, where they demonstrate an application prototype for viewing and manipulating a CG representation of a patient's brain in real time.
The software prototype, made possible with a grant (Grant Number JP18he1602001) from the Japan Agency for Medical Research and Development (AMED), helps surgeons visualize a patient's unique brain structure before, during, and after an operation.
BrainBrowser is an open-source, free 3D brain atlas built on WebGL technologies; it uses Three.js to provide 3D/layered brain visualization. Reviewed in medevel.com
A Blender .blend file can be placed in the Assets folder of a Unity project
https://forum.unity.com/threads/holes-in-mesh-on-import-from-blender.248126/
Interaction between Volume Rendered 3D Texture and Mesh Objects
https://forum.unity.com/threads/interaction-between-volume-rendered-3d-texture-and-mesh-objects.451345/
Easy then to visualize on computer/VR/MR/AR
OCTOBER 14, 2017 BY ANDIJAKL
Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4: Segmenting the Brain
https://www.andreasjakl.com/visualizing-mri-ct-scans-in-mixed-reality-vr-ar-part-4-segmenting-the-brain/
Combining 3D scans and MRI data
http://www.neuro-memento-mori.com/combining-3d-scans-and-mri-data/
VR software may bring MRI segmentation into the future
Matt O'Connor, July 30, 2018, Advanced Visualization
https://www.healthimaging.com/topics/advanced-visualization/vr-software-mri-segmentation-future
Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality (2020)
http://doi.org/10.3390/s20102962
NURBS, e.g. DeepSplines
BézierGAN: Automatic Generation of Smooth Curves from Interpretable Low-Dimensional Parameters
Wei Chen, Mark Fuge, University of Maryland - work was supported by The Defense Advanced Research Projects Agency (DARPA-16-63-YFAFP-059) via the Young Faculty Award (YFA) Program
https://arxiv.org/abs/1808.08871
Many real-world objects are designed with smooth curves, especially in the
aerospace and ship domains, where aerodynamic shapes (e.g., airfoils) and
hydrodynamic shapes (e.g., hulls) are designed. However, the process of selecting
the desired design is complicated, especially for engineering applications where
strict requirements are imposed. For example, in aerodynamic or hydrodynamic
shape optimization, the three main components for finding the desired
design are generally: (1) a shape synthesis method (e.g., B-spline or NURBS
parameterization), (2) a simulator that computes the performance metric of any
given shape, and (3) an optimization algorithm (e.g., a genetic algorithm) to
select the design parameters that result in the best performance [1, 2]. To facilitate
the design process of those objects, we propose a deep-learning-based
generative adversarial network (GAN) model that can synthesize smooth
curves. The model maps a low-dimensional latent representation to a sequence
of discrete points sampled from a rational Bézier curve.
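The decoder's output representation, a rational Bézier curve, can be evaluated directly from the Bernstein-basis definition. Below is a minimal numpy sketch; the quarter-circle control points and weights are a standard textbook example, not taken from the paper.

```python
from math import comb

import numpy as np

def rational_bezier(control_pts, weights, n_samples=100):
    """Evaluate a rational Bézier curve:
        C(t) = sum_i w_i B_{i,n}(t) P_i / sum_i w_i B_{i,n}(t)
    where B_{i,n} are the Bernstein basis polynomials."""
    P = np.asarray(control_pts, dtype=float)       # (n+1, dim) control points
    w = np.asarray(weights, dtype=float)           # (n+1,) weights
    n = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)[:, None]  # (S, 1) parameter values
    i = np.arange(n + 1)[None, :]                  # (1, n+1) basis indices
    binom = np.array([comb(n, k) for k in range(n + 1)])
    B = binom * t ** i * (1.0 - t) ** (n - i)      # Bernstein basis, (S, n+1)
    num = (B * w) @ P                              # weighted control points
    den = (B * w).sum(axis=1, keepdims=True)
    return num / den

# With weights [1, 1/sqrt(2), 1] this quadratic traces an exact quarter circle.
arc = rational_bezier([[1, 0], [1, 1], [0, 1]], [1, 2 ** -0.5, 1])
```

The rational form (weights in numerator and denominator) is what lets these curves represent conics exactly, which plain polynomial Béziers cannot.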
DeepSpline: Data-Driven Reconstruction of Parametric Curves and Surfaces
Jun Gao, Chengcheng Tang, Vignesh Ganapathi-Subramanian, Jiahui Huang, Hao Su, Leonidas J. Guibas. University of Toronto; Vector Institute; Tsinghua University; Stanford University; UC San Diego
(Submitted on 12 Jan 2019) https://arxiv.org/abs/1901.03781
Reconstruction of geometry based on different input modes, such as images or point clouds, has been
instrumental in the development of computer aided design and computer graphics. Optimal
implementations of these applications have traditionally involved the use of spline-based
representations at their core. Most such methods attempt to solve optimization problems that minimize
an output-target mismatch. However, these optimization techniques require an initialization that is close
enough, as they are local methods by nature. We propose a deep learning architecture that adapts to
perform spline fitting tasks accordingly, providing complementary results to the aforementioned
traditional methods.
To tackle challenges in the 2D case, such as multiple splines with intersections, we use a
hierarchical Recurrent Neural Network (RNN) [Krause et al. 2017], trained with ground truth labels, to predict a
variable number of spline curves, each with an undetermined number of control points.
In the 3D case, we reconstruct surfaces of revolution and extrusion without self-intersection
through an unsupervised learning approach that circumvents the requirement for ground truth
labels. We use the Chamfer distance to measure the distance between the predicted point cloud and the target
point cloud. This architecture is generalizable, since predicting other kinds of surfaces (like surfaces of
sweeping or NURBS) would require only a change of this individual layer, with the rest of the model
remaining the same.
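The Chamfer distance mentioned above has a compact definition: the mean nearest-neighbour squared distance in both directions between two point clouds. A minimal numpy version follows; note that the exact averaging/summing convention varies between papers, so this is a sketch rather than the paper's implementation.

```python
import numpy as np

def chamfer_distance(a, b):
    """Symmetric Chamfer distance between point clouds a (N, d) and b (M, d):
    mean squared distance from each point to its nearest neighbour in the
    other cloud, summed over both directions."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)  # (N, M) pairwise sq dists
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

cd = chamfer_distance([[0, 0], [1, 0]], [[0, 0], [1, 0]])  # identical clouds → 0.0
```

Because it needs no point correspondences or ground-truth parameterization, this loss is what makes the unsupervised 3D branch possible.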
Making the Brains Physical with 3D Printing
Making data matter: Voxel printing for the digital fabrication of data across scales and domains
Christoph Bader et al. The Mediated Matter Group, Media Lab, Massachusetts Institute of Technology, Cambridge
https://doi.org/10.1126/sciadv.aas8652 (30 May 2018)
We present a multimaterial voxel-printing method that
enables the physical visualization of data sets commonly
associated with scientific imaging. Leveraging voxel-based
control of multimaterial three-dimensional (3D) printing, our
method enables additive manufacturing of discontinuous data
types such as point cloud data, curve and graph data, image-
based data, and volumetric data. By converting data sets into
dithered material deposition descriptions, through
modifications to rasterization processes, we demonstrate that
data sets frequently visualized on screen can be converted into
physical, materially heterogeneous objects.
Representative 3D-printed models of image-based data. (A) In vitro reconstructed living human lung tissue on a microfluidic device, observed through confocal microscopy (29). The cilia, responsible for transporting airway secretions and mucus-trapped particles and pathogens, are colored orange. Goblet cells, responsible for mucus production, are colored cyan. (B) Biopsy from a mouse hippocampus, observed via confocal expansion microscopy (proExM) (30). The 3D print visualizes neuronal cell bodies, axons, and dendrites.
(H) White matter tractography data of the human brain, created with the 3D Slicer medical image processing platform (37), visualizing bundles of axons, which connect different regions of the brain. The original data were acquired through diffusion-weighted (DWI) MRI.
Getting the software tools to clinical use, e.g. Detection/Segmentation → clinical prognosis (mortality and functional outcome prediction)
Five FDA-approved software packages exist (May 2020)
Neuroimaging of Intracerebral Hemorrhage
Rima S Rindler, Jason W Allen, Jack W Barrow, Gustavo Pradilla, Daniel L Barrow
Neurosurgery, Volume 86, Issue 5, May 2020, Pages E414–E423
https://doi.org/10.1093/neuros/nyaa029
Intracerebral hemorrhage (ICH) accounts for 10% to 20% of
strokes worldwide and is associated with high morbidity and
mortality rates. Neuroimaging is indispensable for rapid
diagnosis of ICH and identification of the underlying etiology,
thus facilitating triage and appropriate treatment of patients.
The most common neuroimaging modalities include
noncontrast computed tomography (CT), CT angiography
(CTA), digital subtraction angiography, and magnetic
resonance imaging (MRI). The strengths and disadvantages of
each modality will be reviewed.
Novel technologies such as dual-energy CT/CTA, rapid MRI techniques, near-infrared spectroscopy (NIRS)*, and automated ICH detection hold promise for faster pre- and in-hospital ICH diagnosis that may impact patient management.
* The depth of near-infrared light penetration limits detection of deep hemorrhages, and the size, type, and location of intracranial hemorrhages cannot be determined with accuracy. Bilateral ICH may be missed given that NIRS depends upon the differential light absorbance between contralateral head locations. Patients with traumatic brain injury may also have scalp hematomas that produce false-positive results. Finally, variations in hair, scalp, and skull thickness introduce additional barriers to ICH detection.
Automated ICH Detection
Rapid advancements in machine learning techniques have prompted a number of studies to evaluate automated ICH detection algorithms for identifying both intra- and extra-axial ICH, with varying sensitivities (81% [Majumdar et al. 2018]; area under the curve 0.846 [Arbabshirani et al. 2018] to 0.90 [Chilamkurthy et al. 2018]) and specificities (92% [Ye et al. 2019]).
FDA-approved programs are listed in the Table (A Bar, MS et al, unpublished data, September 2018) [Ojeda et al. 2019].
Automated algorithms that detect critical findings would facilitate triage of cases awaiting interpretation, especially in underserved areas, thereby improving workflow and patient outcomes [Chilamkurthy et al. 2018]. Utilizing a machine learning algorithm to detect ICH reduces the time to diagnosis by 96% [Arbabshirani et al. 2018].
However, barriers have prevented widespread adoption of these techniques, including the limited inter-institutional generalizability of algorithms that were trained on limited, occasionally single-site datasets. Furthermore, the ultimate accountability for errors generated using a machine learning algorithm remains to be determined.
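The sensitivity and specificity figures quoted above reduce to simple ratios over the confusion matrix. A minimal sketch with made-up labels (the toy data below is illustrative, not from any of the cited studies):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (ICH scans correctly flagged) and specificity (clean scans
    correctly passed) from binary labels and predictions; these two rates are
    the headline numbers reported for the detection algorithms above."""
    y_true = np.asarray(y_true, dtype=bool)
    y_pred = np.asarray(y_pred, dtype=bool)
    tp = np.sum(y_true & y_pred)    # ICH present, flagged
    tn = np.sum(~y_true & ~y_pred)  # no ICH, passed
    fp = np.sum(~y_true & y_pred)   # no ICH, flagged
    fn = np.sum(y_true & ~y_pred)   # ICH present, missed
    return tp / (tp + fn), tn / (tn + fp)

sens, spec = sensitivity_specificity([1, 1, 0, 0, 1], [1, 0, 0, 0, 1])
```

For triage tools of this kind, sensitivity is usually the critical number: a missed bleed (false negative) is far costlier than a false alarm.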
AIDOC FDA-approved 'CT software' #1
The utility of deep learning: evaluation of a convolutional neural network for detection of intracranial bleeds on non-contrast head computed tomography studies
P. Ojeda; M. Zawaideh; M. Mossa-Basha; D. Haynor
Proceedings Volume 10949, Medical Imaging 2019: Image Processing; 109493J (2019) https://doi.org/10.1117/12.2513167
The algorithm was tested on 7112 non-contrast head CTs acquired during 2016–2017 from two large urban academic and trauma centers. Ground truth labels were assigned to the test data per PACS query and prior reports by expert neuroradiologists. No scans from these two hospitals had been used during the algorithm training process, and Aidoc staff were at all times blinded to the ground truth labels.
Model output was reviewed by three radiologists and manual error analysis was performed on discordant findings. Specificity was 99%, sensitivity was 95%, and overall accuracy was 98%. In summary, we report promising results of a scalable and clinically pragmatic deep learning model tested on a large set of real-world data from high-volume medical centers. This model holds promise for assisting clinicians in the identification and prioritization of exams suspicious for ICH, facilitating both the diagnosis and treatment of an emergent and life-threatening condition.
AIDOC FDA-approved 'CT software' #2
Analysis of head CT scans flagged by deep learning software for acute intracranial hemorrhage
Daniel T. Ginat, Department of Radiology, Section of Neuroradiology, University of Chicago
Neuroradiology volume 62, pages 335–340 (2020)
https://doi.org/10.1007/s00234-019-02330-w
To analyze the implementation of deep learning software for the detection and worklist prioritization of acute intracranial hemorrhage on non-contrast head CT (NCCT) in various clinical settings at an academic medical center.
This study reveals that the performance of the deep learning software [Aidoc (Tel Aviv, Israel)] for acute intracranial hemorrhage detection varies depending upon the patient visit location. Furthermore, a substantial portion of flagged cases were follow-up exams, the majority of which were inpatient exams. These findings can help optimize the artificial intelligence-driven clinical workflow.
This study has several limitations. The clinical impact of the software, in terms of the significance of flagged cases with pathology not related to ICH, reduction of the turnaround time, a survey of radiologists regarding their personal perspectives on the software implementation, and whether there was improved patient outcome, was not a part of this study, but can be addressed in future studies. Nevertheless, this study identified potential deficiencies in the current software version, such as not accounting for patient visit location and whether there are prior head CTs. Such information could provide important clinical context to improve the overall algorithm accuracy, thereby flagging cases in a more useful manner.

More Related Content

PPT
Ecv- External Cephalic Version- Define, Risk, procedure, step, benefits PPT
PPTX
PPTX
Denver melanie education slides
PPTX
Radiological findings of pleural effussion
PPTX
Hypercalcemia
PDF
3 malpresentations.warda (3)- FACE PRESENTATION
PPT
Sudden Infant Death Syndrome
PPT
Respiratory Distress Syndrome (Rds)
Ecv- External Cephalic Version- Define, Risk, procedure, step, benefits PPT
Denver melanie education slides
Radiological findings of pleural effussion
Hypercalcemia
3 malpresentations.warda (3)- FACE PRESENTATION
Sudden Infant Death Syndrome
Respiratory Distress Syndrome (Rds)

What's hot (20)

PPTX
Aspiration pneumonia
PDF
Medical Imaging of Pneumothorax (PNO)-Walif Chbeir
PPTX
Approach to foreign body ingestion
PDF
3.OBSTETRICS & GYNECOLOGY OSCE REVISION-3
PPTX
differential for large for date uterus
PPT
DKA case study
PPTX
Pregnancy Induced Hypertension - Pre eclampsia
PDF
Fundamentals of chest radiology
PPTX
Intrapartum fetal survellence
PPTX
backlogを使ったテレワーク時代の社員教育「遠隔徒弟制度」 株式会社テンタス小泉智洋
PPTX
Hemithorax white out (1)
PPTX
Pneumoconiosis
PPTX
Mellss Antepartum hemmorrhage abruptio placenta and local causes
PPTX
Ovarian cysts
PPTX
Intrapartum fetal heart rate assessment
PPTX
Intrapartum fetal monitering
PPT
Abortion
PDF
Obstetric History I
PPTX
Ppt on chest radiography.pptx
PPT
OSCE student exam in Obstetrics &Gynecology Zagazig University 2014
Aspiration pneumonia
Medical Imaging of Pneumothorax (PNO)-Walif Chbeir
Approach to foreign body ingestion
3.OBSTETRICS & GYNECOLOGY OSCE REVISION-3
differential for large for date uterus
DKA case study
Pregnancy Induced Hypertension - Pre eclampsia
Fundamentals of chest radiology
Intrapartum fetal survellence
backlogを使ったテレワーク時代の社員教育「遠隔徒弟制度」 株式会社テンタス小泉智洋
Hemithorax white out (1)
Pneumoconiosis
Mellss Antepartum hemmorrhage abruptio placenta and local causes
Ovarian cysts
Intrapartum fetal heart rate assessment
Intrapartum fetal monitering
Abortion
Obstetric History I
Ppt on chest radiography.pptx
OSCE student exam in Obstetrics &Gynecology Zagazig University 2014
Ad

Similar to Intracerebral Hemorrhage (ICH): Understanding the CT imaging features (20)

PDF
Spontaneous intracerebral hemorrhage
PPTX
Intracerebral Hemorrhage - Classification, Clinical symptoms, Diagnostics
PDF
Management of spontaneous intracerebral hemorrhage
PPTX
Hemorrhagic stroke management Dr Ganesh.pptx
PPTX
Intracerebral hemorrhage
PPTX
Ich imaging mbs kota
PPTX
Seminar on haemorrhage
PDF
testai2008.pdf
PPTX
Hemorrhagic stroke presentation for neet pg
PPTX
Management of Traumatic versus Non-Traumatic Intracerebral Bleed
PPTX
Evaluation and management of spontaneous Intracerebral hemorrhage
PPTX
Non traumatic haemorrhage
PDF
Seminar for Physiotherapy(year III).pdf
PPTX
Book Rev Intracerebral Hemorrhage.pptx...
PPTX
Haemorrhagic stroke
PPTX
Seminar for Physiotherapy(year III).pptx
PDF
GEMC: Intracerebral Hemorrhage (ICH): Resident Training
PPTX
Critical case study mariam fahad (1)
PDF
Bleeding Brain Intraparenchymal
PDF
Minimally Invasive Intracerebral Hemorrhage Evacuation.pdf
Spontaneous intracerebral hemorrhage
Intracerebral Hemorrhage - Classification, Clinical symptoms, Diagnostics
Management of spontaneous intracerebral hemorrhage
Hemorrhagic stroke management Dr Ganesh.pptx
Intracerebral hemorrhage
Ich imaging mbs kota
Seminar on haemorrhage
testai2008.pdf
Hemorrhagic stroke presentation for neet pg
Management of Traumatic versus Non-Traumatic Intracerebral Bleed
Evaluation and management of spontaneous Intracerebral hemorrhage
Non traumatic haemorrhage
Seminar for Physiotherapy(year III).pdf
Book Rev Intracerebral Hemorrhage.pptx...
Haemorrhagic stroke
Seminar for Physiotherapy(year III).pptx
GEMC: Intracerebral Hemorrhage (ICH): Resident Training
Critical case study mariam fahad (1)
Bleeding Brain Intraparenchymal
Minimally Invasive Intracerebral Hemorrhage Evacuation.pdf
Ad

More from PetteriTeikariPhD (20)

PDF
ML and Signal Processing for Lung Sounds
PDF
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
PDF
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...
PDF
Wearable Continuous Acoustic Lung Sensing
PDF
Precision Medicine for personalized treatment of asthma
PDF
Two-Photon Microscopy Vasculature Segmentation
PDF
Skin temperature as a proxy for core body temperature (CBT) and circadian phase
PDF
Summary of "Precision strength training: The future of strength training with...
PDF
Precision strength training: The future of strength training with data-driven...
PDF
Hand Pose Tracking for Clinical Applications
PDF
Precision Physiotherapy & Sports Training: Part 1
PDF
Multimodal RGB-D+RF-based sensing for human movement analysis
PDF
Creativity as Science: What designers can learn from science and technology
PDF
Light Treatment Glasses
PDF
Deep Learning for Biomedical Unstructured Time Series
PDF
Hyperspectral Retinal Imaging
PDF
Instrumentation for in vivo intravital microscopy
PDF
Future of Retinal Diagnostics
PDF
OCT Monte Carlo & Deep Learning
PDF
Optical Designs for Fundus Cameras
ML and Signal Processing for Lung Sounds
Next Gen Ophthalmic Imaging for Neurodegenerative Diseases and Oculomics
Next Gen Computational Ophthalmic Imaging for Neurodegenerative Diseases and ...
Wearable Continuous Acoustic Lung Sensing
Precision Medicine for personalized treatment of asthma
Two-Photon Microscopy Vasculature Segmentation
Skin temperature as a proxy for core body temperature (CBT) and circadian phase
Summary of "Precision strength training: The future of strength training with...
Precision strength training: The future of strength training with data-driven...
Hand Pose Tracking for Clinical Applications
Precision Physiotherapy & Sports Training: Part 1
Multimodal RGB-D+RF-based sensing for human movement analysis
Creativity as Science: What designers can learn from science and technology
Light Treatment Glasses
Deep Learning for Biomedical Unstructured Time Series
Hyperspectral Retinal Imaging
Instrumentation for in vivo intravital microscopy
Future of Retinal Diagnostics
OCT Monte Carlo & Deep Learning
Optical Designs for Fundus Cameras

Recently uploaded (20)

PDF
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
PPTX
SOPHOS-XG Firewall Administrator PPT.pptx
PDF
Machine learning based COVID-19 study performance prediction
PDF
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
PPTX
Tartificialntelligence_presentation.pptx
PPTX
OMC Textile Division Presentation 2021.pptx
PPTX
Programs and apps: productivity, graphics, security and other tools
PDF
Accuracy of neural networks in brain wave diagnosis of schizophrenia
PDF
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
PDF
A comparative study of natural language inference in Swahili using monolingua...
PDF
Reach Out and Touch Someone: Haptics and Empathic Computing
PDF
Assigned Numbers - 2025 - Bluetooth® Document
PPTX
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
PDF
Agricultural_Statistics_at_a_Glance_2022_0.pdf
PPTX
A Presentation on Artificial Intelligence
PDF
Univ-Connecticut-ChatGPT-Presentaion.pdf
PPTX
cloud_computing_Infrastucture_as_cloud_p
PDF
Diabetes mellitus diagnosis method based random forest with bat algorithm
PDF
Building Integrated photovoltaic BIPV_UPV.pdf
PDF
MIND Revenue Release Quarter 2 2025 Press Release
TokAI - TikTok AI Agent : The First AI Application That Analyzes 10,000+ Vira...
SOPHOS-XG Firewall Administrator PPT.pptx
Machine learning based COVID-19 study performance prediction
Build a system with the filesystem maintained by OSTree @ COSCUP 2025
Tartificialntelligence_presentation.pptx
OMC Textile Division Presentation 2021.pptx
Programs and apps: productivity, graphics, security and other tools
Accuracy of neural networks in brain wave diagnosis of schizophrenia
7 ChatGPT Prompts to Help You Define Your Ideal Customer Profile.pdf
A comparative study of natural language inference in Swahili using monolingua...
Reach Out and Touch Someone: Haptics and Empathic Computing
Assigned Numbers - 2025 - Bluetooth® Document
TechTalks-8-2019-Service-Management-ITIL-Refresh-ITIL-4-Framework-Supports-Ou...
Agricultural_Statistics_at_a_Glance_2022_0.pdf
A Presentation on Artificial Intelligence
Univ-Connecticut-ChatGPT-Presentaion.pdf
cloud_computing_Infrastucture_as_cloud_p
Diabetes mellitus diagnosis method based random forest with bat algorithm
Building Integrated photovoltaic BIPV_UPV.pdf
MIND Revenue Release Quarter 2 2025 Press Release

Intracerebral Hemorrhage (ICH): Understanding the CT imaging features

  • 1. Intracerebral Hemorrhage (ICH) Understanding the CT imaging features for development of deep learning networks, ranging from restoration, segmentation, prognosis and prescriptive purposes Petteri Teikari, PhD High-dimensionalNeurology,Queen’sSquareof Neurology,UCL,London https://www.linkedin.com/in/petteriteikari/ Version “06/10/20“
  • 2. Forwhoisthis“literaturereview forvisuallyorientatedpeople” for? ”A bitofeverythingrelatedto headCT deeplearning,focusedonintracerebral hemorrhage(ICH) analysis” Itisassumedthatthereader isfamiliar with deeplearning/computervision, but lessso withcomputerizedtomography (CT)andICH https://www.linkedin.com/in/andriyburkov
  • 4. SpontaneousIntracerebralHemorrhage(ICH) https://www.grepmed.com/images/4925/intracerebral-suba rachnoid-hemorrhage-comparison-diagnosis-neurology-ep idural http://doi.org/10.13140/RG.2.1.1572.8167 ”HemorrhagicStroke” , lesscommon than ischemic stroke, the “layman definition” of stroke “Spontaneous”, asin opposed to, traumaticbrain hemorrhage caused bya blowto the head (“traumatic brain injury”, TBI) https://www.strokeinfo.org/stroke-treatme nts-hemorrhagic-stroke/ https://mc.ai/building-an-algorithm-to-detect-differe nt-types-of-intracranial-brain-hemorrhage-using-de ep/ https://mayfieldclinic.com/pe-ich.htm
  • 6. Primarymechanicalinjury → Secondaryinjuries PathophysiologicalMechanismsand PotentialTherapeutic TargetsinIntracerebralHemorrhage ZhiweiShao etal.(FrontPharmacol.2019; 10: 1079,Sept2019) https://dx.doi.org/10.3389%2Ffphar.2019.01079 Intracerebral hemorrhage (ICH) is a subtype of hemorrhagic stroke with high mortality and morbidity. The resulting hematoma within brain parenchyma induces a series of adverse events causing primary and secondary brain injury. The mechanism of injuryafterICHisverycomplicatedandhasnot yet beenilluminated. This review discusses some key pathophysiology mechanisms in ICH such as oxidative stress (OS), inflammation, iron toxicity, and thrombin formation. Thecorrespondingtherapeutic targetsandtherapeuticstrategiesarealsoreviewed. The initial pathological damage of cerebral hemorrhage to brain is the mechanical compression caused by hematoma. The hematoma mass can increase intracranial pressure, compressing brain and thereby potentially affecting blood flow, and subsequentlyleadingtobrainhernia(Keepet al.,2012). Subsequently, brain hernia and brain edema cause secondary injury, which may be associatedwithpooroutcomeandmortalityinICHpatients(Yangetal.,2016). Unfortunately, the common treatment of brain edema (steroids, mannitol, glycerol, and hyperventilation) cannot effectively reduce intracranial pressure or prevent secondary brain injury (Cordonnieret al., 2018). Truly effective clinical treatments are very limited, mainly because the problem of transforming preclinical research into clinical application has not yet been solved. Therefore, a multi-target neuroprotective therapy will make clinically effective treatment strategies possible, but also requires furtherstudy. Pro-andanti-inflammatorycytokinesinsecondarybraininjuryafter ICH. Mechanismsoferythrocyte lysates and thrombin in secondarybrain injuryafter ICH. The Keap1–Nrf2–ARE pathway. Keap1 is an OS sensor and negatively regulates Nrf2. 
Once exposed to reactive oxygen species (ROS), the activated Nrf2 translocates to the nucleus, binds to antioxidant response element (ARE), heterodimerizes with one of the small Maf (musculo- aponeurotic fibrosarcoma oncogene homolog) proteins, and enhances the upregulation of cytoprotective, antioxidant, anti-inflammatory, and detoxification genes that mediate cell survival.
  • 7. ”Time isBrain” Neural injury(and your imagingfeatures*) and depend on the time since initialhematoma Intracerebral haemorrhage DrAdnanIQureshi,ADavidMendelow, DanielFHanley TheLancetVolume373,Issue9675,9–15May2009,Pages1632-1644 https://doi.org/10.1016/S0140-6736(09)60371-8 Cascadeofneuralinjuryinitiatedbyintracerebralhaemorrhage Thestepsinthefirst 4harerelatedtothedirecteffectofthehaematoma,laterstepstotheproductsreleasedfrom thehaematoma.BBB=blood–brainbarrier.MMP=matrixmetallopeptidase.TNF=tumour necrosisfactor.PMN=polymorphonuclearcells. ProgressionofhaemotomaandoedemaonCT Top:hyperacuteexpansion of haematoma ina patientwithintracerebral haemorrhageon serial CTscans. Smallhaematoma detected in thebasal ganglia and thalamus (A). Expansion of haematoma after151 min (B). Continued progression of haematoma after another 82min(C). Stabilisationof haematomaafter another 76 min (D). Bottom:progressionof haematomaand perihaematomaloedema in apatientwith intracerebralhaemorrhageonserialCT scans. Thefirstscan (E)wasacquired beforetheintracerebral haemorrhage. Perihaematoma oedemaishighlighted in green to facilitaterecognitionof progressionof oedema. At4h aftersymptomonsetthereisa small haematoma inthebasal ganglia (F). Expansionof haematoma with extension into thelateral ventricleand newmass-effectand midlineshiftat14h (G). Worsening hydrocephalusand earlyperihaematomal oedema at28 h (H). Continued mass-effectwith prominentperihaematomal oedema at73 h (I).Resolving haematoma with moreprominent perihaematomal oedema at7days (J).
  • 8. ...or how much is time really brain? Influence of time to admission to a comprehensive stroke centre on the outcome of patients with intracerebral haemorrhage (Jan 2020). Luis Prats-Sánchez, Marina Guasch-Jiménez, Ignasi Gich, Elba Pascual-Goñi, Noelia Flores, Pol Camps-Renom, Daniel Guisado-Alonso, Alejandro Martínez-Domeño, Raquel Delgado-Mederos, Ana Rodríguez-Campello, Angel Ois, Alejandra Gómez-Gonzalez, Elisa Cuadrado-Godia, Jaume Roquer, Joan Martí-Fàbregas https://doi.org/10.1177%2F2396987320901616 In patients with spontaneous intracerebral haemorrhage, it is uncertain whether the impact of diagnostic and therapeutic measures on outcome is time-sensitive. We sought to determine the influence of the time to admission to a comprehensive stroke centre on the outcome of patients with acute intracerebral haemorrhage. Our results suggest that in patients with intracerebral haemorrhage and known symptom onset who are admitted to a comprehensive stroke centre, an early admission (≤110 min) does not influence the outcome at 90 days. Distribution of propensity score blocks by time to admission. For each pair of blocks, the box on the left represents the group of patients with an admission ≤110 min and the one on the right represents the group admitted >110 min.
  • 9. Management of ICH: fewer options than for ischemic stroke. Intracerebral haemorrhage. Dr Adnan I Qureshi, A David Mendelow, Daniel F Hanley. The Lancet, Volume 373, Issue 9675, 9–15 May 2009, Pages 1632-1644 https://doi.org/10.1016/S0140-6736(09)60371-8 Odds ratio for death or disability in patients with lobar intracerebral haemorrhage treated surgically or conservatively. Boxes are Peto's odds ratio (OR), lines are 95% CI. Adapted with permission from Lippincott Williams and Wilkins. Clinical evidence suggests the importance of three management tasks in intracerebral haemorrhage: stopping the bleeding, removing the clot, and controlling cerebral perfusion pressure. The precision needed to achieve these goals and the degree of benefit attributable to each clinical goal would be precisely defined when the results of trials in progress become available. An NIH workshop identified the importance of animal models of intracerebral haemorrhage and of human pathology studies. Use of real-time, high-field MRI with three-dimensional imaging and high-resolution tissue probes is another priority. Trials of acute blood-pressure treatment and coagulopathy reversal are also medical priorities. And trials of minimally invasive surgical techniques including mechanical and pharmacological adjuncts are surgical priorities. The STICH II trial should determine the benefit of craniotomy for lobar haemorrhage. A better understanding of methodological challenges, including establishment of research networks and multispecialty approaches, is also needed. New information created in each of these areas should add substantially to our knowledge about the efficacy of treatment for intracerebral haemorrhage.
  • 10. Best care is prevention with blood pressure medication. Intracerebral haemorrhage: current approaches to acute management. Prof Charlotte Cordonnier, Prof Andrew Demchuk, Wendy Ziai, Prof Craig S Anderson. The Lancet, Volume 392, Issue 10154, 6–12 October 2018, Pages 1257-1268 https://doi.org/10.1016/S0140-6736(18)31878-6 In ICH, a heterogeneous disease, certain clinical and imaging features help identify the cause, the prognosis, and how to manage the disease. Survival and recovery from intracerebral haemorrhage are related to the site, mass effect, and intracranial pressure from the underlying haematoma, and by subsequent cerebral oedema from perihaematomal neurotoxicity or inflammation and complications from prolonged neurological dysfunction. A moderate level of evidence supports there being beneficial effects of active management goals with avoidance of early palliative care orders, well-coordinated specialist stroke unit care, targeted neurointensive and surgical interventions, early control of elevated blood pressure, and rapid reversal of abnormal coagulation. The concept of "time is brain", developed for the management of acute ischaemic stroke, applies readily to the management of acute intracerebral haemorrhage. Initiation of haemostatic treatment within the first few hours after onset, using deferral or waiver of informed consent, or even earlier initiation in a prehospital setting with mobile stroke unit technologies, requires evaluation. For patients with intracerebral haemorrhage presenting at later or unwitnessed time windows, refining the approach of spot sign detection through newer imaging techniques, such as multi-phase CT angiography (Rodriguez-Luna et al. 2017), might prove useful, as has been shown with the use of CT perfusion in the detection of viable cerebral ischaemia in patients with acute ischaemic stroke who present in a late window (Albers et al. 2018; Nogueira et al. 2018). Ultimately, the best treatment of intracerebral haemorrhage is prevention, and effective detection, management, and control of hypertension across the community and in high-risk groups will have the greatest effect on reducing the burden of intracerebral haemorrhage worldwide.
  • 11. ICH: still a high fatality rate. European Stroke Organisation (ESO) Guidelines for the Management of Spontaneous Intracerebral Hemorrhage (August 2014). Thorsten Steiner, Rustam Al-Shahi Salman, Ronnie Beer, Hanne Christensen, Charlotte Cordonnier, Laszlo Csiba, Michael Forsting, Sagi Harnof, Catharina J. M. Klijn, Derk Krieger, A. David Mendelow, Carlos Molina, Joan Montaner, Karsten Overgaard, Jesper Petersson, Risto O. Roine, Erich Schmutzhard, Karsten Schwerdtfeger, Christian Stapf, Turgut Tatlisumak, Brenda M. Thomas, Danilo Toni, Andreas Unterberg, Markus Wagner https://doi.org/10.1111%2Fijs.12309 Intracerebral hemorrhage (ICH) accounted for 9% to 27% of all strokes worldwide in the last decade, with high early case fatality and poor functional outcome. In view of recent randomized controlled trials (RCTs) of the management of ICH, the European Stroke Organisation (ESO) has updated its evidence-based guidelines for the management of ICH. We found moderate- to high-quality evidence to support strong recommendations for managing patients with acute ICH on an acute stroke unit, avoiding hemostatic therapy for acute ICH not associated with antithrombotic drug use, avoiding graduated compression stockings, using intermittent pneumatic compression in immobile patients, and using blood pressure lowering for secondary prevention. We found moderate-quality evidence to support weak recommendations for intensive lowering of systolic blood pressure to <140 mmHg within six hours of ICH onset, early surgery for patients with a Glasgow Coma Scale score of 9–12, and avoidance of corticosteroids. These guidelines inform the management of ICH based on evidence for the effects of treatments in RCTs. Outcome after ICH remains poor, prioritizing further RCTs of interventions to improve outcome. Age-standardized incidence of hemorrhagic stroke per 100,000 person-years for 1990 (a), 2005 (b), and 2010 (c). From Feigin et al. (1).
  • 12. CT is typically the first scan done, with MRI later where accessible. MRI offers better image quality, but the cost of the technology limits its availability. Intracerebral hemorrhage: an update on diagnosis and treatment. Isabel C. Hostettler, David J. Seiffge & David J. Werring et al. (12 Jun 2019). UCL Stroke Research Centre, Department of Brain Repair and Rehabilitation, UCL Institute of Neurology and the National Hospital for Neurology and Neurosurgery, London, UK. Expert Review of Neurotherapeutics, Volume 19, 2019, Issue 7 https://doi.org/10.1080/14737175.2019.1623671 Expert opinion: In recent years, significant advances have been made in deciphering causes, understanding pathophysiology, and improving acute treatment and prevention of ICH. However, the clinical outcome remains poor and many challenges remain. Acute interventions delivered rapidly (including medical therapies targeting hematoma expansion, hemoglobin toxicity, inflammation, edema, and anticoagulant reversal, as well as minimally invasive surgery) are likely to improve acute outcomes. Improved classification of the underlying arteriopathies (from neuroimaging and genetic studies) and prognosis should allow tailored prevention strategies (including sustained blood pressure control and optimized antithrombotic therapy) to further improve longer-term outcome in this devastating disease. A) Modified Boston criteria, B) CT Edinburgh criteria. ICH care pathway. Pathway to decide on intra-arterial digital subtraction angiography (IADSA) to further investigate ICH cause (adapted from Wilson et al. 2017). Small vessel diseases (SVD), intra-arterial digital subtraction angiography (IADSA), white matter hyperintensities (WMH).
  • 13. Angiography also for hemorrhagic stroke. Hemorrhagic Stroke (2014). Julius Griauzde, Elliot Dickerson and Joseph J. Gemmete. Department of Radiology, University of Michigan http://doi.org/10.1007/978-1-4614-9212-2_46-1 Non-contrast computed tomography has long been the initial imaging tool in the acute neurologic patient. As MRI technology and angiographic imaging have evolved, they too have proven to be beneficial in narrowing the differential diagnosis and triaging patient care. Several biological and physical characteristics contribute significantly to the appearance of blood products on neuroimaging. To adequately interpret images in the patient with hemorrhagic stroke, the evaluator must have knowledge of the interplay between imaging modalities and intracranial blood products. Additionally, an understanding of technical parameters as well as the limitations of imaging modalities can be helpful in avoiding pitfalls. Recognition of typical imaging patterns and clinical presentations can further aid the evaluator in rapid diagnosis and directed care. Computed tomography angiography (CTA). Magnetic resonance angiography (MRA). Time-of-flight MRA (TOF MRA), in its simplest form, takes advantage of the flow of blood. Contrast-enhanced MRA (CE MRA) employs fast spoiled gradient-recalled echo-based sequences (FSPGR) and the paramagnetic properties of gadolinium to intensify the signal within vessels.
  • 14. "Brain is time" also for the appearance of the blood. Evolution of blood products on MRI (derived from a figure created by Dr. Frank Gaillard as presented on http://radiopaedia.org/articles/ageing-blood-on-mri , with permission) http://doi.org/10.1007/978-1-4614-9212-2_46-1: The appearance of the ICH at different periods of time depends considerably upon a number of factors. For instance, in early phases, the hematocrit and protein levels of the hematoma will dramatically alter the CT attenuation in the hematoma. In later phases, factors such as oxygen tension at the hematoma will determine how quickly deoxyhemoglobin transitions into methemoglobin and how quickly red blood cells finally lyse and decrease the field inhomogeneity effects of sequestered methemoglobin. The integrity of the blood-brain barrier also helps to determine the degree to which hemosiderin-laden macrophages remain trapped in the parenchyma, causing hemosiderin staining long after the vast majority of the hematoma mass has been resorbed (Parizel et al. 2001). Intracranial hemorrhage made easy: a semiological approach on CT and MRI http://doi.org/10.1594/ecr2014/C-1120 : CT appearance of ageing blood; several factors vary depending on the stage of the bleeding. Evolution of CT density of intracranial haemorrhage (diagram). Case contributed by Assoc Prof Frank Gaillard https://radiopaedia.org/cases/evolution-of-ct-density-of-intracranial-haemorrhage-diagram Appearance of Blood on Computed Tomography and Magnetic Resonance Imaging Scans by Stage http://doi.org/10.1007/s13311-010-0009-x
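The staging referenced in the Gaillard/Parizel figures above follows the textbook progression of hemoglobin breakdown products. As a quick reference, the conventional MRI signal pattern (relative to brain) can be encoded as a simple lookup table; note this is a sketch, and the time windows are approximate and patient-dependent, as the slide itself stresses:

```python
# Textbook MRI appearance of blood products by stage, summarizing the
# ageing-blood timeline referenced above. Timings are approximate;
# oxygen tension, hematocrit and BBB integrity all shift them.
BLOOD_STAGES = {
    "hyperacute (<24 h)":     ("oxyhemoglobin",               "iso",   "hyper"),
    "acute (1-3 d)":          ("deoxyhemoglobin",             "iso",   "hypo"),
    "early subacute (3-7 d)": ("intracellular methemoglobin", "hyper", "hypo"),
    "late subacute (1-4 wk)": ("extracellular methemoglobin", "hyper", "hyper"),
    "chronic (>4 wk)":        ("hemosiderin",                 "hypo",  "hypo"),
}

for stage, (product, t1, t2) in BLOOD_STAGES.items():
    print(f"{stage:24s} {product:28s} T1 {t1:5s} T2 {t2}")
```

Such a table is the kind of prior knowledge a CT/MRI deep learning pipeline implicitly has to learn when estimating bleed age.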
  • 17. ICH Score subcomponents: Glasgow Coma Scale (GCS) https://www.firstaidforfree.com/glasgow-coma-scale-gcs-first-aiders/ https://emottawablog.com/2018/07/gcs-remastered-recent-updates-to-the-glasgow-coma-scale-gcs-p/
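The GCS linked above is simply the sum of three ordinal subscores: eye opening (1–4), verbal response (1–5), and motor response (1–6), for a total of 3–15. A minimal sketch:

```python
def glasgow_coma_scale(eye: int, verbal: int, motor: int) -> int:
    """Total GCS = eye (1-4) + verbal (1-5) + motor (1-6), range 3-15."""
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    return eye + verbal + motor

print(glasgow_coma_scale(4, 5, 6))  # 15, fully alert
print(glasgow_coma_scale(1, 1, 1))  # 3, deep coma
```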
  • 18. ICH Score subcomponents: hematoma volume. How to measure it in practice? Note that deep learning segmentation networks are not really in use. Ryan Hakimi, DO, MS, Assistant Professor https://slideplayer.com/slide/3883134/ Vivien H. Lee et al. (2016) cites: ● Kwak's sABC/2 formula (Kwak et al. 1983, 10.1161/01.str.14.4.493, cited by 252) ● Kothari's ABC/2 formula (Kothari et al. 1996, 10.1161/01.str.27.8.1304, cited by 1653). Excellent accuracy of ABC/2 volume formula compared to computer-assisted volumetric analysis of subdural hematomas. Sae-Yeon Won et al. (2018) https://doi.org/10.1371/journal.pone.0199809 The ABC/2 method is a simple and fast bedside formula for measuring SDH volume in a timely manner and without limited access, through simple adaptation, and may replace computer-assisted volumetric measurement in the clinical and research arena. Assessment of the ABC/2 Method of Epidural Hematoma Volume Measurement as Compared to Computer-Assisted Planimetric Analysis (2015) https://doi.org/10.1177%2F1099800415577634
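The ABC/2 formula cited above approximates the hematoma as an ellipsoid: multiply the three largest perpendicular diameters (in cm) and divide by two, giving volume in mL. A minimal sketch:

```python
def abc2_volume(a_cm: float, b_cm: float, c_cm: float) -> float:
    """Estimate hematoma volume (mL) with the ABC/2 ellipsoid
    approximation (Kothari et al. 1996).

    a_cm: greatest hemorrhage diameter on the axial slice with the
          largest hemorrhage area
    b_cm: largest diameter perpendicular to A on the same slice
    c_cm: vertical extent, i.e. number of slices showing hemorrhage
          multiplied by slice thickness
    """
    return (a_cm * b_cm * c_cm) / 2.0

# Example: a 5 cm x 4 cm lesion spanning 3 cm of slices -> 30 mL,
# i.e. exactly at the 30 mL volume threshold used by the ICH score.
print(abc2_volume(5.0, 4.0, 3.0))  # 30.0
```

This bedside shortcut is exactly what a voxel-wise segmentation network would replace with a direct voxel count.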
  • 20. ICH Score subcomponents: infratentorial (cerebellar) bleed https://aneskey.com/intracerebral-hemorrhagic-stroke/ Impact of Supratentorial Cerebral Hemorrhage on the Complexity of Heart Rate Variability in Acute Stroke. Chih-Hao Chen, Sung-Chun Tang, Ding-Yuan Lee, Jiann-Shing Shieh, Dar-Ming Lai, An-Yu Wu & Jiann-Shing Jeng. Scientific Reports, volume 8, Article number: 11473 (2018) https://doi.org/10.1038/s41598-018-29961-y Acute stroke commonly affects cardiac autonomic responses, resulting in reduced heart rate variability (HRV). Multiscale entropy (MSE) is a novel non-linear method to quantify the complexity of HRV. This study investigated the influence of intracerebral hemorrhage (ICH) locations and intraventricular hemorrhage (IVH) on the complexity of HRV. In summary, more severe stroke and larger hematoma volume resulted in lower complexity of HRV. Lobar hemorrhage and IVH had great impacts on cardiac autonomic function. https://neupsykey.com/diagnosis-and-treatment-of-intracerebral-hemorrhage/ Location → functional measures? We collected ECG analogue data directly from the bedside monitor (Philips IntelliVue MP70, Koninklijke Philips N.V., Amsterdam, Netherlands) for each patient.
  • 21. ICH Score validation and modification: somewhat OK / suboptimal performance. Modifying the intracerebral hemorrhage score to suit the needs of the developing world. Ajay Hegde, Girish Menon (Nov 2018) http://doi.org/10.4103/aian.AIAN_419_17 The ICH Score failed to accurately predict mortality in our cohort. ICH is predominantly seen at a younger age group in India and hence has better outcomes in comparison to the West. We propose a minor modification to the ICH score, reducing the age criterion by 10 years, to prognosticate the disease better in our population. External Validation of the ICH Score. Jennifer L Clarke et al. (2004) https://doi.org/10.1385/ncc:1:1:53 The ICH score accurately stratifies outcome in an external patient cohort. Thus, the ICH score is a validated clinical grading scale that can be easily and rapidly applied at ICH presentation. A scale such as the ICH score could be used to standardize clinical treatment protocols or clinical studies. Validation of ICH Score in a Large Urban Population. Taha Nisar et al. (2018) https://doi.org/10.1016/j.clineuro.2018.09.007 We conducted a retrospective chart review of 245 adult patients who presented with acute ICH to University Hospital, Newark. Our study is one of the largest done at a single urban center to validate the ICH score. Age ≥ 80 years wasn't statistically significant with respect to 30-day mortality in our group. Restratification of the weight of individual variables in the ICH equation, with modification of the ICH score, can potentially establish mortality risk more accurately. Nevertheless, the overall prediction of mortality was accurate and reproducible in our study. Validation of the ICH score in patients with spontaneous intracerebral haemorrhage admitted to the intensive care unit in Southern Spain. Sonia Rodríguez-Fernández et al. (2018) http://dx.doi.org/10.1136/bmjopen-2018-021719 The ICH score shows acceptable discrimination as a tool to predict mortality rates in patients with spontaneous ICH admitted to the ICU, but its calibration is suboptimal. 24-Hour ICH Score Is a Better Predictor of Outcome than Admission ICH Score. Aimee M. Aysenne et al. (2013) https://doi.org/10.1155/2013/605286 Early determination of the ICH score may incorrectly estimate the severity and expected outcome after ICH. Calculation of the ICH score 24 hours after admission will better predict early outcomes. Assessment and comparison of the max-ICH score and ICH score by external validation. Felix A. Schmidt et al. (2018) https://doi.org/10.1212/WNL.0000000000006117 We tested the hypothesis that the maximally treated intracerebral hemorrhage (max-ICH) score is superior to the ICH score for characterizing mortality and functional outcome prognosis in patients with ICH, particularly those who receive maximal treatment. External validation with direct comparison of the ICH score and max-ICH score shows that their prognostic performance is not meaningfully different. Alternatives to simple scores are likely needed to improve prognostic estimates for patient care decisions. So, do you still want to use oversimplified models after all?
  • 22. ICH Score works for some parts of the population. Original Intracerebral Hemorrhage Score for the Prediction of Short-Term Mortality in Cerebral Hemorrhage: Systematic Review and Meta-Analysis. Gregório, Tiago; Pipa, Sara; Cavaleiro, Pedro; Atanásio, Gabriel; Albuquerque, Inês; Castro Chaves, Paulo; Azevedo, Luís https://doi.org/10.1097/CCM.0000000000003744 To systematically assess the discrimination and calibration of the Intracerebral Hemorrhage score for prediction of short-term mortality (38 studies, 15,509 patients) in intracerebral hemorrhage patients, and to study its determinants using heterogeneity analysis. Fifty-five studies provided data on discrimination, and 35 studies provided data on calibration. Overall, the Intracerebral Hemorrhage score discriminated well (pooled C-statistic 0.84; 95% CI, 0.82-0.85) but overestimated mortality (pooled observed:expected mortality ratio = 0.87; 95% CI, 0.78-0.97), with high heterogeneity for both estimates (I² 80% and 84%, respectively). The Intracerebral Hemorrhage score is a valid clinical prediction rule for short-term mortality in intracerebral hemorrhage patients but discriminated mortality worse in more severe cohorts. It also overestimated mortality in the highest Intracerebral Hemorrhage score patients, with significant inconsistency between cohorts. These results suggest that mortality for these patients is dependent on factors not included in the score. Further studies are needed to determine these factors.
  • 23. Start with the ICH score, but then you need better models? Management of Intracerebral Hemorrhage: JACC Focus Seminar. Matthew Schrag, Howard Kirshner. Journal of the American College of Cardiology, Volume 75, Issue 15, 21 April 2020 https://doi.org/10.1016/j.jacc.2019.10.066 The most widely used tool for assessing prognosis is the "ICH score," a scale that predicts mortality based on hemorrhage size, patient age, Glasgow coma score, hemorrhage location (infratentorial or supratentorial), and the presence of intraventricular hemorrhage (Hemphill et al. 2001). This score has been widely criticized for overestimating the mortality associated with ICH, and this is attributed to the high rate of early withdrawal of medical care in more severe hemorrhages in the cohort, leading to a "self-fulfilling prophecy" of early mortality (Zahuranec et al. 2007, Zahuranec et al. 2010). Nevertheless, no high-performing alternative scale or biomarker has entered routine clinical use, so the ICH score remains a starting point for clinical prognostication. A recent re-evaluation of this clinical tool found that both physicians' and nurses' subjective predictions of 3-month outcomes made within 24 h of the hemorrhage outperformed the accuracy of the ICH score, underscoring the important role of clinician experience and judgement in guiding families (Hwang et al. 2015). In addition to hemorrhage size and initial clinical deficits, factors that seem to predict a poor overall outcome include any early neurological deterioration, hemorrhages in deep locations (particularly the thalamus), and age/baseline functional status (Yogendrakumar et al. 2018; Sreekrishnan et al. 2016; Ullman et al. 2019). When the clinical prognosis is unclear, physicians should generally advocate for additional time and continued supportive care (Hemphill et al. 2015). Recovery after intracerebral hemorrhage is often delayed when compared with ischemic strokes of similar severity, and outcomes may need to be evaluated at later timepoints to capture the full extent of potential recovery. This is important both for calibrating patient and family expectations and in the design of outcomes for clinical trials.
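The components listed above can be made concrete: the original ICH score (Hemphill et al. 2001) assigns 2 points for GCS 3–4 or 1 point for GCS 5–12, and 1 point each for volume ≥30 cm³, intraventricular hemorrhage, infratentorial origin, and age ≥80, for a 0–6 total. A minimal sketch of those point assignments (the function name is illustrative):

```python
def ich_score(gcs: int, volume_ml: float, ivh: bool,
              infratentorial: bool, age: int) -> int:
    """Original ICH score (Hemphill et al. 2001), range 0-6.
    Higher scores predict higher 30-day mortality."""
    pts = 0
    if gcs <= 4:        # GCS 3-4
        pts += 2
    elif gcs <= 12:     # GCS 5-12
        pts += 1
    # GCS 13-15 adds 0 points
    if volume_ml >= 30:
        pts += 1
    if ivh:
        pts += 1
    if infratentorial:
        pts += 1
    if age >= 80:
        pts += 1
    return pts

# 70-year-old, GCS 7, 45 mL supratentorial bleed with IVH:
# 1 (GCS) + 1 (volume) + 1 (IVH) = 3
print(ich_score(gcs=7, volume_ml=45, ivh=True,
                infratentorial=False, age=70))  # 3
```

Seen this way, the score is a tiny hand-crafted decision function over five features, which is exactly why the surrounding slides ask whether higher-capacity models on the raw CT could do better.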
  • 24. Several scores and measures exist. Intracerebral hemorrhage outcome: A comprehensive update. João Pinho et al. (15 March 2019) https://doi.org/10.1016/j.jns.2019.01.013 The focus of outcome assessment after ICH has been mortality in most studies, because of the high early case fatality, which reaches 40% in some population-based studies. The most robust and consistent predictors of early mortality include age, severity of neurological impairment, hemorrhage volume, and antithrombotic therapy at the time of the event. Long-term outcome assessment is multifaceted and includes not only mortality and functional outcome, but also patient self-assessment of health-related quality of life, occurrence of cognitive impairment, psychiatric disorders, epileptic seizures, recurrent ICH, and subsequent thromboembolic events. Several scores which predict mortality and functional outcome after ICH have been validated and are useful in daily clinical practice; however, they must be used in combination with clinical judgment for individual patients. Management of patients with ICH, in both the acute and chronic phases, requires health care professionals to have a comprehensive and updated perspective on outcome, which informs decisions that need to be taken together with the patient and next of kin.
  • 25. Location specified quite crudely http://doi.org/10.1007/978-1-4614-9212-2_46-1 Management of brainstem haemorrhages. DOI: https://doi.org/10.4414/smw.2019.20062 https://aneskey.com/intracerebral-hemorrhagic-stroke/
  • 26. Too "handwavey" reporting of the location at the moment. Intracerebral Hemorrhage Location and Functional Outcomes of Patients: A Systematic Literature Review and Meta-Analysis. Anirudh Sreekrishnan et al. (Neurocritical Care, volume 25, pages 384–391, 2016) https://doi.org/10.1177%2F0272989X19879095 (cited by 35) Intracerebral hemorrhage (ICH) has the highest mortality rate among all strokes. While ICH location, lobar versus non-lobar, has been established as a predictor of mortality, less is known regarding the relationship between more specific ICH locations and functional outcome. This review summarizes current work studying how ICH location affects outcome, with an emphasis on how studies designate regions of interest. Multiple studies have examined motor-centric outcomes, with few studies examining quality of life (QoL) or cognition. Better functional outcomes have been suggested for lobar versus non-lobar ICH; few studies attempted finer topographic comparisons. This study highlights the need for improved reporting in ICH outcomes research, including a detailed description of hemorrhage location, reporting of the full range of functional outcome scales, and inclusion of cognitive and QoL outcomes. Meta-analysis of studies describing the odds ratio of poor outcomes for lobar compared to deep/non-lobar ICH. a: poor outcome mRS (3,4,5,6) or GOS (4,3,2,1); b: poor outcome mRS (4,5,6) or GOS (3,2,1); c: poor outcome mRS (5,6). *Significant results (p < 0.05)
  • 28. Long-term risks higher after lobar ICH? Ten-year risks of recurrent stroke, disability, dementia and cost in relation to site of primary intracerebral haemorrhage: population-based study (2019). Linxin Li, Ramon Luengo-Fernandez, Susanna M Zuurbier, Nicola C Beddows, Philippa Lavallee, Louise E Silver, Wilhelm Kuker, Peter Malcolm Rothwell http://dx.doi.org/10.1136/jnnp-2019-322663 Patients with primary intracerebral haemorrhage (ICH) are at increased long-term risks of recurrent stroke and other comorbidities. However, available estimates come predominantly from hospital-based studies with relatively short follow-up. Moreover, there are also uncertainties about the influence of ICH location on risks of recurrent stroke, disability, dementia and quality of life. Methods: In a population-based study (Oxford Vascular Study/2002–2018) of patients with a first ICH with follow-up to 10 years, we determined the long-term risks of recurrent stroke, disability, quality of life, dementia and hospital care costs stratified by haematoma location. ICH can be categorised into lobar and non-lobar according to the haematoma location. Given the different balance of pathologies for lobar versus non-lobar ICH, the long-term prognosis of ICH could be expected to differ by haematoma location. However, while some studies suggested that haematoma location was associated with recurrent stroke, others have not. Compared with non-lobar ICH, the substantially higher 10-year risks of recurrent stroke, dementia and lower QALYs after lobar ICH highlight the need for more effective prevention for this patient group. (top) Ten-year risks of recurrent stroke, disability or death stratified by haematoma location. (right) Ten-year mean healthcare costs over time after primary intracerebral haemorrhage.
  • 29. Hematoma enlargement: deep vs lobar, volume? Hematoma enlargement characteristics in deep versus lobar intracerebral hemorrhage. Jochen A. Sembill et al. (04 March 2020) https://doi.org/10.1002/acn3.51001 Hematoma enlargement (HE) is associated with clinical outcomes after supratentorial intracerebral hemorrhage (ICH). This study evaluates whether HE characteristics and association with functional outcome differ in deep versus lobar ICH. HE occurrence does not differ among deep and lobar ICH. However, compared to lobar ICH, HE after deep ICH is of greater extent in OAC-ICH, occurs earlier, and may be of greater clinical relevance. Overall, clinical significance is more apparent after small-to-medium compared to large-sized bleedings. These data may be valuable both for routine clinical management and for designing future studies on hemostatic and blood pressure management aiming at minimizing HE. However, further studies with improved design are needed to replicate these findings and to investigate the pathophysiological mechanisms accounting for these observations. Study flowchart: altogether, individual-level data from 3,580 spontaneous ICH patients were analyzed to identify 1,954 supratentorial ICH patients eligible for outcome analyses. Data were provided by two parts of a Germany-wide observational study (RETRACE I and II) conducted at 22 participating tertiary centers, and by one single-center university hospital registry.
  • 31. Other factors you should take into account. Brian A. Stettler, MD, Assistant Professor https://slideplayer.com/slide/3129821/ Subfalcine herniation, midline shift and uncal herniation secondary to a large subdural hematoma in the left hemisphere. https://www.startradiology.com/internships/neurology/brain/ct-brain-hemorrhage/ Hydrocephalus https://kidshealth.org/en/parents/hydrocephalus.html
  • 32. Risk factors: hypertension is the largest risk factor. Risk Factors of Intracerebral Hemorrhage: A Case-Control Study. Hanne Sallinen, Arto Pietilä, Veikko Salomaa, Daniel Strbian. Journal of Stroke and Cerebrovascular Diseases, Volume 29, Issue 4, April 2020, 104630 https://doi.org/10.1016/j.jstrokecerebrovasdis.2019.104630 Hypertension is a well-known risk factor for intracerebral hemorrhage (ICH). On many of the other potential risk factors, such as smoking, diabetes, and alcohol intake, results are conflicting. We assessed risk factors of ICH, also taking into account prior depression and fatigue. Analyzing all cases and controls, the cases had more hypertension, history of heart attack, and lipid-lowering medication, and reported fatigue prior to ICH more frequently. In persons aged less than 70 years, hypertension and fatigue were more common among cases. In persons aged greater than or equal to 70 years, factors associated with risk of ICH were fatigue prior to ICH, use of lipid-lowering medication, and overweight. Hypertension was associated with risk of ICH among all patients and in the group of patients under 70 years. Fatigue prior to ICH was more common among all ICH cases.
  • 33. Stroke unit or intensive care unit for ICH patients? Stroke unit admission is associated with better outcome and lower mortality in patients with intracerebral hemorrhage. M. N. Ungerer, P. Ringleb, B. Reuter, C. Stock, F. Ippen, S. Hyrenbach, I. Bruder, P. Martus, C. Gumbinger, the AG Schlaganfall https://doi.org/10.1111/ene.14164 (Feb 2020) There is no clear consensus among current guidelines on the preferred admission ward [i.e. intensive care unit (ICU) or stroke unit (SU)] for patients with intracerebral hemorrhage. Based on expert opinion, the American Heart Association and European Stroke Organization recommend treatment in neurological/neuroscience ICUs (NICUs) or SUs. The European Stroke Organization guideline states that there are no studies available directly comparing outcomes between ICUs and SUs. We performed an observational study comparing outcomes of 10,811 consecutive non-comatose patients with intracerebral hemorrhage according to admission ward [ICUs, SUs and normal wards (NWs)]. Primary outcomes were the modified Rankin Scale score at discharge and intrahospital mortality. An additional analysis compared NICUs with SUs. Treatment in SUs was associated with better functional outcome and reduced mortality compared with ICUs and NWs. Our findings support the current guideline recommendations to treat patients with intracerebral hemorrhage in SUs or NICUs and suggest that some patients may further benefit from NICU treatment. Mobile Stroke Unit Reduces Time to Treatment, July 03, 2018 https://www.itnonline.com/article/mobile-stroke-unit-reduces-time-treatment
  • 34. For more fine-grained predictions you probably want to use better imaging modalities? Predicting Motor Outcome in Acute Intracerebral Hemorrhage (May 2019). J. Puig, G. Blasco, M. Terceño, P. Daunis-i-Estadella, G. Schlaug, M. Hernandez-Perez, V. Cuba, G. Carbó, J. Serena, M. Essig, C. R. Figley, K. Nael, C. Leiva-Salinas, S. Pedraza and Y. Silva https://doi.org/10.3174/ajnr.A6038 Predicting motor outcome following intracerebral hemorrhage is challenging. We tested whether the combination of clinical scores and diffusion tensor imaging (DTI)-based assessment of corticospinal tract damage within the first 12 hours of symptom onset after intracerebral hemorrhage predicts motor outcome at 3 months. Combined assessment of motor function and posterior limb of the internal capsule damage during acute intracerebral hemorrhage accurately predicts motor outcome. Assessing corticospinal tract involvement with diffusion tensor tractography superimposed on gradient recalled-echo and FLAIR images: in the upper row, the corticospinal tract was affected by ICH (passes through it) at the level of the corona radiata and posterior limb of the internal capsule. Note that in the lower row, the corticospinal tract was displaced slightly forward but preserved around the intracerebral hematoma. Vol indicates volume. Example of ROI object maps used to measure intracerebral hematoma (blue) and perihematomal edema (yellow) volumes. Combining mNIHSS and PLIC affected by ICH in the first 12 hours of onset can accurately predict motor outcome. The reliability of DTI in denoting very early damage to the CST could make it a prognostic biomarker useful for determining management strategies to improve outcome in the hyperacute stage. Our approach eliminates the need for advanced postprocessing techniques that are time-consuming and require greater specialization, so it can be applied more widely and benefit more patients. Prospective large-scale studies are warranted to validate these findings and determine whether this information could be used to stratify risk in patients with ICH.
  • 35. Clinicians like to hunt for the "(linear) magical biomarkers", as opposed to nonlinear multivariate models with higher capacity (and higher probability to overfit as well). Early hematoma retraction in intracerebral hemorrhage is uncommon and does not predict outcome. Ana C. Klahr, Mahesh Kate, Jayme Kosior, Brian Buck, Ashfaq Shuaib, Derek Emery, Kenneth Butcher. Published: October 9, 2018. https://doi.org/10.1371/journal.pone.0205436 Clot retraction in intracerebral hemorrhage (ICH) has been described and postulated to be related to effective hemostasis and perihematoma edema (PHE) formation. The incidence and quantitative extent of hematoma retraction (HR) is unknown. Our aim was to determine the incidence of HR between baseline and time of admission. We also tested the hypothesis that patients with HR had higher PHE volume and good prognosis. Early HR is rare and associated with IVH, but not with PHE or clinical outcome. There was no relationship between HR, PHE, and patient prognosis. Therefore, HR is unlikely to be a useful endpoint in clinical ICH studies.
  • 36. Perihematomal Edema (PHE): diagnostic value? Neoplastic and Non-Neoplastic Causes of Acute Intracerebral Hemorrhage on CT: The Diagnostic Value of Perihematomal Edema. Jawed Nawabi, Uta Hanning, Gabriel Broocks, Gerhard Schön, Tanja Schneider, Jens Fiehler, Christian Thaler & Susanne Gellissen. Clinical Neuroradiology (2019). https://doi.org/10.1007/s00062-019-00774-4 The aim of this study was to investigate the diagnostic value of perihematomal edema (PHE) volume in non-enhanced computed tomography (NECT) to discriminate neoplastic and non-neoplastic causes of acute intracerebral hemorrhage (ICH). Relative PHE with a cut-off of >0.50 is a specific and simple indicator for neoplastic causes of acute ICH and a potential tool for clinical implementation. This observation needs to be validated in an independent patient cohort. Two representative cases of region-of-interest object maps used to measure intracerebral hemorrhage (ICH) volume (Vol ICH) and total hemorrhage (Vol ICH+PHE) volume. a Neoplastic and non-neoplastic ICH volume (red) and b total hemorrhage volume (grey) on non-enhanced CT (NECT) delineated with an edge detection algorithm. c Neoplastic and non-neoplastic PHE (green) calculated by subtraction of total hemorrhage volume and ICH volume (Vol PHE = Vol ICH+PHE − Vol ICH).
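The subtraction used above (Vol PHE = Vol ICH+PHE − Vol ICH) is straightforward to reproduce once the two regions are segmented. A minimal sketch on binary masks, assuming hypothetical array and function names and caller-supplied voxel spacing:

```python
import numpy as np

def lesion_volumes_ml(ich_mask, total_mask, voxel_dims_mm):
    """Volumes (ml) from binary masks; PHE = total hemorrhage - ICH core.

    ich_mask, total_mask: boolean 3D arrays of the ICH core and of ICH+PHE.
    voxel_dims_mm: (dz, dy, dx) voxel spacing in millimetres.
    """
    voxel_ml = np.prod(voxel_dims_mm) / 1000.0  # mm^3 per voxel -> ml
    vol_ich = ich_mask.sum() * voxel_ml
    vol_total = total_mask.sum() * voxel_ml
    return vol_ich, vol_total, vol_total - vol_ich

# toy example: a 10x10x10 "core" nested inside a 12x12x12 "total" region
ich = np.zeros((20, 20, 20), bool); ich[5:15, 5:15, 5:15] = True
tot = np.zeros((20, 20, 20), bool); tot[4:16, 4:16, 4:16] = True
v_ich, v_tot, v_phe = lesion_volumes_ml(ich, tot, (1.0, 1.0, 1.0))
```

With 1 mm isotropic voxels this gives 1.0 ml core, 1.728 ml total, and 0.728 ml PHE by subtraction, mirroring the figure's green-region computation.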
  • 37. Young patients tend to recover better (seems obvious). Is nontraumatic intracerebral hemorrhage different between young and elderly patients? Na Rae Yang, Ji Hee Kim, Jun Hyong Ahn, Jae Keun Oh, In Bok Chang & Joon Ho Song. Neurosurgical Review volume 43, pages 781–791 (2020). https://doi.org/10.1007/s10143-019-01120-5 Only a few studies have reported nontraumatic intracerebral hemorrhage in young patients notwithstanding its fatal and devastating characteristics. This study investigated the clinical characteristics and outcome of nontraumatic intracerebral hemorrhage in young patients in comparison to those of the elderly. Nontraumatic intracerebral hemorrhage in younger patients appears to be associated with excessive alcohol consumption and high BMI. Younger patients had similar short-term mortality but more favorable functional outcome than the elderly. Distribution of modified Rankin Scale scores at the last follow-up for each group.
  • 38. Genotype-based differences exist. Racial/ethnic disparities in the risk of intracerebral hemorrhage recurrence. Audrey C. Leasure, Zachary A. King, Victor Torres-Lopez, Santosh B. Murthy, Hooman Kamel, Ashkan Shoamanesh, Rustam Al-Shahi Salman, Jonathan Rosand, Wendy C. Ziai, Daniel F. Hanley, Daniel Woo, Charles C. Matouk, Lauren H. Sansing, Guido J. Falcone, Kevin N. Sheth. Neurology, December 12, 2019. https://doi.org/10.1212/WNL.0000000000008737 To estimate the risk of intracerebral hemorrhage (ICH) recurrence in a large, diverse, US-based population and to identify racial/ethnic and socioeconomic subgroups at higher risk. Black and Asian patients had a higher risk of ICH recurrence than white patients, whereas private insurance was associated with reduced risk compared to those with Medicare. Further research is needed to determine the drivers of these disparities. While this is the largest study of ICH recurrence in a United States-based, racially and ethnically diverse population, our study has several limitations related to the use of administrative data that require consideration. First, there is a possibility of misclassification of the exposures and outcomes. The attribution of race/ethnicity that is not based on direct self-report may not be accurate; for example, patients who belong to 2 or more racial/ethnic categories may be classified based on phenotypic descriptions and may not reflect true ancestry. In terms of outcome classification, we relied on ICD-9-CM codes to identify our outcome of recurrent ICH. However, we used previously validated diagnosis codes that have high positive predictive values for identifying primary ICH.
  • 39. As ICH is not that well understood, new mechanisms are proposed. Global brain inflammation in stroke. Kaibin Shi et al. (Lancet Neurology, July 2019). https://doi.org/10.1016/S1474-4422(19)30078-X Stroke, including acute ischaemic stroke (AIS) and intracerebral haemorrhage (ICH), results in neuronal cell death and the release of factors such as damage-associated molecular patterns (DAMPs) that elicit localised inflammation in the injured brain region. Such focal brain inflammation aggravates secondary brain injury by exacerbating blood–brain barrier damage, microvascular failure, brain oedema, oxidative stress, and by directly inducing neuronal cell death. In addition to inflammation localised to the injured brain region, a growing body of evidence suggests that inflammatory responses after a stroke occur and persist throughout the entire brain. Global brain inflammation might continuously shape the evolving pathology after a stroke and affect the patients' long-term neurological outcome. Future efforts towards understanding the mechanisms governing the emergence of so-called global brain inflammation would facilitate modulation of this inflammation as a potential therapeutic strategy for stroke.
  • 40. MMPs in ICH? In emerging theories. Matrix Metalloproteinases in Acute Intracerebral Hemorrhage. Simona Lattanzi, Mario Di Napoli, Silvia Ricci & Afshin A. Divani. Neurotherapeutics (January 2020). https://doi.org/10.1007/s13311-020-00839-0 So far, clinical trials on ICH have mainly targeted primary cerebral injury and have substantially failed to improve clinical outcomes. The understanding of the pathophysiology of early and delayed injury after ICH is, hence, of paramount importance to identify potential targets of intervention and develop effective therapeutic strategies. Matrix metalloproteinases (MMPs) represent a ubiquitous superfamily of structurally related zinc-dependent endopeptidases able to degrade any component of the extracellular matrix. They are upregulated after ICH, in which different cell types, including leukocytes, activated microglia, neurons, and endothelial cells, are involved in their synthesis and secretion. The role of MMPs as a potential target for the treatment of ICH has been widely discussed in the last decade. The impact of MMPs on extracellular matrix destruction and blood–brain barrier (BBB) disruption in patients suffering from ICH has been of interest. The aim of this review is to summarize the available experimental and clinical evidence about the role of MMPs in brain injury following spontaneous ICH and provide critical insights into the underlying mechanisms. Overall, there is substantially converging evidence from experimental studies to suggest that early and short-term inhibition of MMPs after ICH can be an effective strategy to reduce cerebral damage and improve the outcome, whereas long-term treatment may be associated with more harm than benefit. It is, however, worth noting that, so far, we do not have a clear understanding of the time-specific role that the different MMPs assume within the pathophysiology of secondary brain injury and recovery after ICH.
In addition, most of the studies exploring pharmacological strategies to modulate MMPs can only provide indirect evidence of the benefit of targeting MMP activity. The prospects for effective therapeutic targeting of MMPs require the establishment of conditions to specifically modulate a given MMP isoform, or a subset of MMPs, in a given spatio-temporal context (Rivera 2019). Further research is warranted to better understand the interactions between MMPs and their molecular and cellular environments, determine the optimal timing of MMP inhibition for achieving a favorable therapeutic outcome, and implement the discovery of innovative selective agents to spare harmful effects before therapeutic strategies targeting MMPs can be successfully incorporated into routine practice (Lattanzi et al. 2018; Hostettler et al. 2019).
  • 41. What are the treatments for ICH, and can we do prescriptive modeling ("precision medicine") and tailor the treatment individually?
  • 43. Animal models of ICH exist, of course, as well. Intracerebral haemorrhage: from clinical settings to animal models. Qian Bai et al. (2020). http://dx.doi.org/10.1136/svn-2020-000334 Effective treatment for ICH is still scarce. However, clinical therapeutic strategies include medication and surgery. Drug therapy is the most common treatment for ICH. This includes prevention of ICH based on treating an individual's underlying risk factors, for example, control of hypertension. Hyperglycaemia in diabetics is common after stroke; managing glucose level may reduce the stroke size. Oxygen is given as needed. Surgery can be used to prevent ICH by repairing vascular damage or malformations in and around the brain, or to treat acute ICH by evacuating the haematoma; however, the benefit of surgical treatment is still controversial due to very few controlled randomised trials. Rehabilitation may help overcome disabilities that result from ICH damage. Despite great advances in ischaemic stroke, no prominent improvement in the morbidity and mortality after ICH has been realised. The current understanding of ICH is still limited, and the models do not completely mirror the human condition. Novel effective modelling is required to mimic spontaneous ICH in humans and allow for effective studies on mechanisms and treatment of haematoma expansion and secondary brain injury.
  • 44. Genomics for Stroke recovery #1. Genetic risk factors for spontaneous intracerebral haemorrhage. Amanda M. Carpenter, I. P. Singh, Chirag D. Gandhi, Charles J. Prestigiacomo (Nature Reviews Neurology 2016). https://doi.org/10.1038/nrneurol.2015.226 Familial aggregation of ICH has been observed, and the heritability of ICH risk has been estimated at 44%. Few genes have been found to be associated with ICH at the population level, and much of the evidence for genetic risk factors for ICH comes from single studies conducted in relatively small and homogenous populations. In this Review, we summarize the current knowledge of genetic variants associated with primary spontaneous ICH. Although evidence for genetic contributions to the risk of ICH exists, we do not yet fully understand how and to what extent this information can be utilized to prevent and treat ICH.
  • 45. Genomics for Stroke recovery #2. Genetic underpinnings of recovery after stroke: an opportunity for gene discovery, risk stratification, and precision medicine. Julián N. Acosta et al. (September 2019). https://doi.org/10.1186/s13073-019-0671-5 As the number of stroke survivors continues to increase, identification of therapeutic targets for stroke recovery has become a priority in stroke genomics research. The introduction of high-throughput genotyping technologies and novel analytical tools has significantly advanced our understanding of the genetic underpinnings of stroke recovery. In summary, functional outcome and recovery constitute important endpoints for genetic studies of stroke. The combination of improving statistical power and novel analytical tools will surely lead to the discovery of novel pathophysiological mechanisms underlying stroke recovery. Information on these newly discovered pathways can be used to develop new rehabilitation interventions and precision-medicine strategies aimed at improving management options for stroke survivors. The continuous growth and strengthening of existing dedicated collaborations and the utilization of standardized approaches to ascertain recovery-related phenotypes will be crucial for the success of this promising field. Genetic risk of spontaneous intracerebral hemorrhage: systematic review and future directions. Kolawole Wasiu et al. (15 December 2019). https://doi.org/10.1016/j.jns.2019.116526 Given this limited information on the genetic contributors to spontaneous intracerebral hemorrhage (SICH), more genomic studies are needed to provide additional insights into the pathophysiology of SICH, and develop targeted preventive and therapeutic strategies. This call for additional investigation of the pathogenesis of SICH is likely to yield more discoveries in the unexplored indigenous African populations which also have a greater predilection.
Multilevel omics for the discovery of biomarkers and therapeutic targets for stroke. Joan Montaner et al. (22 April 2020). https://doi.org/10.1016/j.jns.2019.116526 Despite many years of research, no biomarkers for stroke are available to use in clinical practice. Progress in high-throughput technologies has provided new opportunities to understand the pathophysiology of this complex disease, and these studies have generated large amounts of data and information at different molecular levels. We summarize how proteomics, metabolomics, transcriptomics and genomics are all contributing to the identification of new candidate biomarkers that could be developed and used in clinical stroke management. Influences of genetic variants on stroke recovery: a meta-analysis of the 31,895 cases. Nikhil Math et al. (29 July 2019). https://doi.org/10.1007/s10072-019-04024-w 17p12 Influences Hematoma Volume and Outcome in Spontaneous Intracerebral Hemorrhage. Sandro Marini et al. (30 Jul 2018). https://doi.org/10.1016/j.jns.2019.116526
  • 46. Surgical management not that well understood either. Surgery for spontaneous intracerebral hemorrhage (Feb 2020). Airton Leonardo de Oliveira Manoel. https://doi.org/10.1186/s13054-020-2749-2 Spontaneous intracerebral hemorrhage is a devastating disease, accounting for 10 to 15% of all types of stroke; however, it is associated with disproportionally higher rates of mortality and disability. Despite significant progress in the acute management of these patients, the ideal surgical management is still to be determined. Surgical hematoma drainage has many theoretical benefits, such as the prevention of mass effect and cerebral herniation, reduction in intracranial pressure, and the decrease of excitotoxicity and neurotoxicity of blood products. Mechanisms of secondary brain injury after ICH. MLS: midline shift; IVH: intraventricular hemorrhage. Case 02 of open craniotomy for hematoma drainage. a, b Day 1: Large hematoma in the left cerebral hemisphere leading to collapse of the left lateral ventricle with a midline shift of 12 mm, with a large left ventricular and third ventricle flooding, as well as diffuse effacement of cortical sulci of that hemisphere. c–e Day 2: Left frontoparietal craniotomy, with well-positioned bone fragment, aligned and fixed with metal clips. Reduction of the left frontal/frontotemporal intraparenchymal hematic content, with remnant hematic residues and air foci in this region. There was a significant reduction in the mass effect, with a decrease in lateral ventricular compression and a reduction in the midline shift. Bifrontal pneumocephalus causing shift and compressing the adjacent parenchyma. f–h Day 36: Resolution of residual hematic residues and pneumocephalus. Encephalomalacia in the left frontal/frontotemporal region. Despite the good surgical results, the patient remained in a vegetative state. Open craniotomy: the patient lies on an operating table and receives general anesthesia.
The head is set in a three-pin skull fixation device attached to the operating table, in order to hold the head still. Once the anesthesia and positioning are established, the skin is prepared, cleaned with an antiseptic solution, and incised, typically behind the hairline. Then, both skin and muscles are dissected and lifted off the skull. Once the bone is exposed, burr holes are made with a special drill. The burr holes permit the entrance of the craniotome. The craniotomy flap is lifted and removed, uncovering the dura mater. The bone flap is stored to be replaced at the end of the procedure. The dura mater is then opened to expose the brain parenchyma. Surgical retractors are used to open a passage to access the hematoma. After the hematoma is drained, the retractors are removed, the dura mater is closed, and the bone flap is positioned, aligned, and fixed with metal clips. Finally, the skin is sutured.
  • 47. Real-time segmentation for ICH surgery? Intraoperative CT and cone-beam CT imaging for minimally invasive evacuation of spontaneous intracerebral hemorrhage. Nils Hecht et al. (Acta Neurochirurgica 2020). https://doi.org/10.1007/s00701-020-04284-y Minimally invasive surgery (MIS) for evacuation of spontaneous intracerebral hemorrhage (ICH) has shown promise, but there remains a need for intraoperative performance assessment considering the wide range of evacuation effectiveness. In this feasibility study, we analyzed the benefit of intraoperative 3-dimensional imaging during navigated endoscopy-assisted ICH evacuation by mechanical clot fragmentation and aspiration. Routine utilization of intraoperative computerized tomography (iCT) or cone-beam CT (CBCT) imaging in MIS for ICH permits direct surgical performance assessment and the chance for immediate re-aspiration, which may optimize targeting of an ideal residual hematoma volume and reduce secondary revision rates.
  • 49. Non-Contrast CT: what are you seeing? An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007). https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
  • 50. HU Units: absolute units "mean something". A CT scanner is basically a density measurement device. https://www.sciencedirect.com/topics/medicine-and-dentistry/hounsfield-scale A, Axial CT slice, viewed with brain window settings. Notice in the grayscale bar at the right side of the figure that the full range of shades from black to white has been distributed over a narrow HU range, from zero (pure black) to +100 HU (pure white). This allows fine discrimination of tissues within this density range, but at the expense of evaluation of tissues outside of this range. A large subdural hematoma is easily discriminated from normal brain, even though the two tissues differ in density by less than 100 HU. Any tissues greater than +100 HU in density will appear pure white, even if their densities are dramatically different. Consequently, the internal structure of bone cannot be seen with this window setting. Fat (-50 HU) and air (-1000 HU) cannot be distinguished with this setting, as both have densities less than zero HU and are pure black. B, The same axial CT slice viewed with a bone window setting. Now the scale bar at the right side of the figure shows the grayscale to be distributed over a very wide HU range, from -450 HU (pure black) to +1050 HU (pure white). Air can easily be discriminated from soft tissues on this setting because it is assigned pure black, while soft tissues are dark gray. Details of bone can be seen, because a large portion of the total range of gray shades is devoted to densities in the range of bone. Soft tissue detail is lost in this window setting, because the range of soft tissue densities (-50 HU to around +100 HU) represents a narrow portion of the grayscale.
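The windowing described above is a linear remapping of a chosen HU interval onto the display grayscale. A minimal sketch (hypothetical function name; the window level/width values below are back-computed from the slide's brain window of 0 to +100 HU and bone window of -450 to +1050 HU):

```python
import numpy as np

def apply_window(hu, level, width):
    """Map HU values to 8-bit display gray levels for a given CT window.

    HU below level - width/2 become pure black (0); HU above
    level + width/2 become pure white (255); values in between
    are linearly interpolated.
    """
    lo, hi = level - width / 2.0, level + width / 2.0
    clipped = np.clip(hu, lo, hi)
    return ((clipped - lo) / (hi - lo) * 255.0).round().astype(np.uint8)

hu = np.array([-1000, -50, 0, 50, 100, 1000])   # air, fat, water, acute blood-ish, bone
brain = apply_window(hu, level=50, width=100)    # the slide's 0..+100 HU brain window
bone = apply_window(hu, level=300, width=1500)   # the slide's -450..+1050 HU bone window
```

With the brain window, air and fat both collapse to black and everything above +100 HU saturates to white, exactly the behavior the caption describes; the bone window spreads the same values across distinct gray levels at the cost of soft-tissue contrast.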
  • 52. Clinical CT: quick intro on what you see. How to interpret an unenhanced CT Brain scan. Part 1: Basic principles of Computed Tomography and relevant neuroanatomy (2016). http://www.southsudanmedicaljournal.com/archive/august-2016/how-to-interpret-an-unenhanced-ct-brain-scan.-part-1-basic-principles-of-computed-tomography-and-relevant-neuroanatomy.html
  • 53. Cuts and Gantry Tilt: clinical CT typically has quite thick cuts. https://slideplayer.com/slide/5990473/ Computed Tomography II – RAD473, published by Melinda Wiggins. https://slideplayer.com/slide/7831746/ Design pattern for multi-modal coordinate spaces. Figure 4: Planning the location of the CT slices, with tilted gantry. The gantry is tilted to avoid radiating the eyes, while capturing a maximum of relevant anatomical data. https://www.researchgate.net/publication/228672978_Design_pattern_for_multi-modal_coordinate_spaces Tilting the gantry for CT-guided spine procedures. https://doi.org/10.1007/s11547-013-0344-1 Gantry tilt. Use of bolsters. Gantry-needle alignment. a, b Range of gantry angulation, which is ±30° on most scanners. Spine curvature and spatial orientation can be modified using bolsters and wedges. A bolster under the lower abdomen (c) flattens the lordotic curvature and reduces the L5–S1 disc plane obliquity; under the chest (d) flattens the thoracic kyphosis and reduces the upper thoracic pedicles' obliquity; under the hips (e) increases the lordosis and brings the long axis of the sacrum closer to the axial plane. The desired needle path for spinal accesses can be paralleled by gantry tilt (solid lines on c–e) relative to straight axial orientation (dashed lines on c–e). f Gantry-needle alignment, with laser beam precisely bisecting the needle at the hub and the skin entry point. Maintaining this alignment keeps the needle in plane and allows visualization of the entire needle throughout its trajectory on a single CT slice.
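For deep learning pipelines, a tilted gantry matters because naively stacking the slices produces a sheared volume: slice k is offset along y by k × slice spacing × tan(tilt). A minimal de-shearing sketch, assuming hypothetical names, a (z, y, x) array layout, and integer-pixel shifts for illustration (production code would interpolate, e.g. with scipy.ndimage, and read the tilt from the DICOM header):

```python
import numpy as np

def correct_gantry_tilt(volume, tilt_deg, slice_spacing_mm, pixel_spacing_mm):
    """Undo gantry tilt by shearing the stack: slice k is shifted along y
    by k * slice_spacing * tan(tilt), rounded to whole pixels here."""
    out = np.zeros_like(volume)
    tan_t = np.tan(np.radians(tilt_deg))
    for k in range(volume.shape[0]):
        dy = int(round(k * slice_spacing_mm * tan_t / pixel_spacing_mm))
        if dy == 0:
            out[k] = volume[k]
        elif dy > 0:
            out[k, :-dy] = volume[k, dy:]   # shift content toward smaller y
        else:
            out[k, -dy:] = volume[k, :dy]   # shift content toward larger y
    return out

# toy stack: a bright row that drifts 2 pixels per slice, as a tilt would cause
v = np.zeros((3, 30, 4))
for k in range(3):
    v[k, 10 + 2 * k] = 1.0
corrected = correct_gantry_tilt(v, np.degrees(np.arctan(2.0)), 1.0, 1.0)
```

After correction the bright row sits at y = 10 in every slice, so a structure that is vertical in the patient is vertical in the array as well.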
  • 55. CT Skull Window: microstructure of bone might bias your brain model? Estimation of skull table thickness with clinical CT and validation with microCT. http://doi.org/10.1111/joa.12259 Loss of bone mineral density following sepsis using Hounsfield units by computed tomography. http://doi.org/10.1002/ams2.401 Opportunistic osteoporosis screening via the measurement of frontal skull Hounsfield units derived from brain computed tomography images. https://doi.org/10.1371/journal.pone.0197336 The ADAM-pelvis phantom: an anthropomorphic, deformable and multimodal phantom for MRgRT. http://doi.org/10.1088/1361-6560/aafd5f
  • 56. Construction and analysis of a head CT-scan database for craniofacial reconstruction. Françoise Tilotta, Frédéric Richard, Joan Alexis Glaunès, Maxime Berar, Servane Gey, Stéphane Verdeille, Yves Rozenholc, Jean-François Gaudy. https://hal-descartes.archives-ouvertes.fr/hal-00278579/document
  • 57. CT Bone: very useful for brain imaging/stimulation simulation models, e.g. ultrasound and NIRS. Measurements of the Relationship Between CT Hounsfield Units and Acoustic Velocity and How It Changes With Photon Energy and Reconstruction Method. Webb TD, Leung SA, Rosenberg J, Ghanouni P, Dahl JJ, Pelc NJ, Pauly KB. IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 01 Jul 2018, 65(7):1111-1124. https://doi.org/10.1109/tuffc.2018.2827899 Transcranial magnetic resonance-guided focused ultrasound continues to gain traction as a noninvasive treatment option for a variety of pathologies. Focusing ultrasound through the skull can be accomplished by adding a phase correction to each element of a hemispherical transducer array. The phase corrections are determined with acoustic simulations that rely on speed-of-sound estimates derived from CT scans. While several studies have investigated the relationship between acoustic velocity and CT Hounsfield units (HUs), these studies have largely ignored the impact of X-ray energy, reconstruction method, and reconstruction kernel on the measured HU, and therefore the estimated velocity, and none have measured the relationship directly. As measured by the R-squared value, the results show that CT is able to account for 23%-53% of the variation in velocity in the human skull. Both the X-ray energy and the reconstruction technique significantly alter the R-squared value and the linear relationship between HU and speed of sound in bone. Accounting for these variations will lead to more accurate phase corrections and more efficient transmission of acoustic energy through the skull. The impact of CT energy as measured by the dual energy scan on the GE system with a bone kernel: a) The dotted line shows the HU calculated using Equation (1) and linear attenuation values from NIST.
The circles show the average HU measured in the densest sample of cortical bone as measured by the average HU (red), the average HU value of all the fragments from the inner and outer tables (yellow), and the average HU value of all the fragments from the medullary bone (purple). Error bars show the standard deviation. b) Speed of sound as a function of HU for five different energies. Comparison of the measurements presented in this paper to prior models: a) comparison to prior models using data from the monochromatic images acquired with the dual energy scan on the GE system; b) comparison to prior models using standard CT scans with unknown effective energies. In order to estimate Aubry's model at each energy, an effective energy of 2/3 of the peak tube voltage was assumed. Further work needs to be done to characterize either an average relationship across a patient population or a method for adapting velocity estimates to specific patient skulls. Such a study will require a large number of skulls and is outside the scope of the present work. Future studies should examine whether improvements in velocity estimates and phase corrections (e.g. using ultrashort echo time (UTE) MRI) will lead to more efficient transfer of acoustic energy through the skull, resulting in a decrease in the energy required to achieve ablation at the focal spot.
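The linear HU-to-velocity relationship and its R-squared value that the paper reports can be reproduced with an ordinary least-squares line fit. A sketch on synthetic data (the coefficients and noise level below are invented for illustration, not the paper's measurements):

```python
import numpy as np

def fit_hu_velocity(hu, velocity_m_s):
    """Least-squares line v = a*HU + b, plus the R^2 of the fit."""
    a, b = np.polyfit(hu, velocity_m_s, 1)
    pred = a * hu + b
    ss_res = np.sum((velocity_m_s - pred) ** 2)
    ss_tot = np.sum((velocity_m_s - np.mean(velocity_m_s)) ** 2)
    return a, b, 1.0 - ss_res / ss_tot

# synthetic "skull fragment" measurements: a noisy linear trend
rng = np.random.default_rng(0)
hu = rng.uniform(600, 1800, 40)
v = 1500 + 0.9 * hu + rng.normal(0, 150, 40)
a, b, r2 = fit_hu_velocity(hu, v)
```

Refitting on scans reconstructed with a different kernel or tube voltage would yield different (a, b, R²), which is the paper's central point: the calibration is acquisition-dependent.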
  • 58. Muscle/Fat CT also useful. (a) The relationship between gray level and Hounsfield units (HU) determined by window level (WL), window width (WW), and bit depth per pixel (BIT). (b) The effect of different WL, WW, and BIT configurations on the same image. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis. Hyunkwang Lee & Fabian M. Troschel & Shahein Tajmir & Georg Fuchs & Julia Mario & Florian J. Fintelmann & Synho Do. J Digit Imaging. http://doi.org/10.1007/s10278-017-9988-z Body Composition as a Predictor of Toxicity in Patients Receiving Anthracycline and Taxane-Based Chemotherapy for Early-Stage Breast Cancer. http://doi.org/10.1158/1078-0432.CCR-16-2266 Quantitative analysis of skeletal muscle by computed tomography imaging — State of the art. https://doi.org/10.1016/j.jot.2018.10.004
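The gray level ↔ HU relationship in panel (a) is a linear map set by WL, WW, and BIT, and it is invertible inside the window: a network trained on exported 8-bit images can only recover HU if those three parameters are known. A minimal sketch of both directions (hypothetical function names; assumes the common linear windowing convention):

```python
def hu_to_gray(hu, wl, ww, bit=8):
    """Forward windowing: HU -> stored integer gray level at the given bit depth."""
    gmax = (1 << bit) - 1
    frac = min(max((hu - (wl - ww / 2.0)) / ww, 0.0), 1.0)
    return round(frac * gmax)

def gray_to_hu(gray, wl, ww, bit=8):
    """Invert display windowing: recover an approximate HU from a gray level.
    Only valid strictly inside the window; gray levels clipped to 0 or
    2**bit - 1 are ambiguous (many HU values map to them)."""
    gmax = (1 << bit) - 1
    return (gray / gmax - 0.5) * ww + wl

g = hu_to_gray(40, wl=40, ww=80)        # soft-tissue window centred on 40 HU
hu_back = gray_to_hu(g, wl=40, ww=80)   # round-trips to ~40 HU (quantization error < 1 HU)
```

The quantization step is WW / (2^BIT − 1), which is why the panel (b) comparison of WL/WW/BIT configurations matters: a wide window at low bit depth coarsens the recoverable HU resolution.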
  • 59. Base of Skull Axial CT: where brain stripping could use deep learning. Base of skull, axial CT:
1) Nasal spine of frontal bone
2) Eyeball
3) Frontal process of zygomatic bone
4) Ethmoidal air cells
5) Temporal fossa
6) Greater wing of sphenoid bone
7) Sphenoidal sinus
8) Zygomatic process of temporal bone
9) Head of mandible
10) Carotid canal, first part
11) Jugular foramen, posterior to intrajugular process
12) Posterior border of jugular foramen
13) Sigmoid sinus
14) Lateral part of occipital bone
15) Hypoglossal canal
16) Foramen magnum
17) Nasal septum
18) Nasal cavity
19) Body of sphenoid bone
20) Foramen lacerum
21) Foramen ovale
22) Foramen spinosum
23) Sphenopetrous fissure / Eustachian tube
24) Carotid canal, second part
25) Air cells in temporal bone
26) Apex of petrous bone
27) Petro-occipital fissure
Radiology Key - Fastest Radiology Insight Engine. https://radiologykey.com/skull/
  • 60. CSF spaces as seen by CT. An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007). https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
  • 62. Air defines anatomical shapes → useful outside ICH analysis. A multiscale imaging and modelling dataset of the human inner ear. Gerber et al. (2017), Scientific Data volume 4, Article number: 170132 (2017). https://doi.org/10.1038/sdata.2017.132 BE-FNet: 3D Bounding Box Estimation Feature Pyramid Network for Accurate and Efficient Maxillary Sinus Segmentation. Zhuofu Deng et al. (2020). https://doi.org/10.1155/2020/5689301 Maxillary sinus segmentation plays an important role in the choice of therapeutic strategies for nasal disease and treatment monitoring. Traditional approaches struggle with extremely heterogeneous intensity caused by lesions, abnormal anatomical structures, and blurred cavity boundaries. Development of CT-based methods for longitudinal analyses of paranasal sinus osteitis in granulomatosis with polyangiitis. Sigrun Skaar Holme et al. (2019). https://doi.org/10.1186/s12880-019-0315-7 Even though progressive rhinosinusitis with osteitis is a major clinical problem in granulomatosis with polyangiitis (GPA), there are no studies on how GPA-related osteitis develops over time, and no quantitative methods for longitudinal assessment. Here, we aimed to identify simple and robust CT-based methods for capture and quantification of time-dependent changes in GPA-related paranasal sinus osteitis.
  • 63. Gray/White Matter: contrast not as nice as with MRI. An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007). https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf Comparison between brain-dead patients' and normal control subjects' CT scans: 1, normal control CT scan; 2, CT scan with loss of WM/GM differentiation; 3, CT scan with reversed GM/WM ratio. Gray Matter-White Matter De-Differentiation on Brain Computed Tomography Predicts Brain Death Occurrence. https://doi.org/10.1016/j.transproceed.2016.05.006
  • 64. Calcifications: choroid plexus and pineal gland are very common locations. Intracranial calcifications on CT: an updated review. Charbel Saade, Elie Najem, Karl Asmar, Rida Salman, Bassam El Achkar, Lena Naffaa (2019). http://doi.org/10.3941/jrcr.v13i8.3633 In a study by Yalcin et al. (2016) that focused on determining the location and extent of intracranial calcifications in 11,941 subjects, the pineal gland was found to be the most common site of physiologic calcifications (71.6%) followed by the choroid plexus (70.2%), with male dominance in both sites and a mean age of 47.3 and 49.8, respectively. However, the choroid plexus was found to be the most common site of physiologic calcification after the 5th decade and second most common after the pineal gland in subjects aged between 15-45 years. According to Yalcin et al. (2016), dural calcifications were seen in up to 12.5% of the studied population, with the majority found in male patients. Basal ganglia calcifications were found in only 1.3% in the same study. Interestingly, BGC were reported to be more prevalent among females than males, with a mean age of 52. Examples of patterns of calcification and related terminology: (a) dots, (b) lines, (c) conglomerate or mass-like, (d) rock-like, (e) blush, (f) gyriform/band-like, (g) stippled, (h) reticular.
  • 65. Calcifications #2. An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007). https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf Pineal gland of a 72-year-old male. Image a reveals the outlined pineal gland on the sagittal plane and image b demonstrates the 3-dimensional image and volume of the tissue. Green areas on images c and d exhibit the restricted parenchyma, excluding all the calcified tissues from the slices. http://doi.org/10.5334/jbr-btr.892 Pineal gland of a 35-year-old female. Images a and b reveal the outlined pineal gland on sagittal (a) and axial (b) planes on noncontrast computerized tomography images. Green areas on image c exhibit the restricted parenchyma, excluding all the calcified tissues from the slices. Image d demonstrates the 3-dimensional image and volume of noncalcified pineal tissue. We assume that optimized volumetry of active pineal tissue, and therefore a higher correlation of melatonin and pineal parenchyma, can potentially be improved by a combination of MR and CT imaging in addition to serum melatonin levels. Moreover, in order to improve MR quantification of pineal calcifications, the combined approach would possibly allow an optimization and calibration of MRI sequences by CT and then perhaps even make CT unnecessary.
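The "restricted parenchyma" volumetry described above (ROI volume minus its calcified voxels) reduces to masking by an HU threshold inside the outlined gland. A minimal sketch with hypothetical names; the 100 HU calcification cut-off is an illustrative assumption, not taken from the cited study:

```python
import numpy as np

def noncalcified_volume_ml(hu_volume, roi_mask, voxel_mm3, calc_thresh_hu=100.0):
    """Volume (ml) of ROI tissue below a calcification HU threshold.

    hu_volume: 3D array of HU values; roi_mask: boolean ROI (e.g. outlined
    pineal gland); voxel_mm3: volume of one voxel in mm^3.
    """
    soft = roi_mask & (hu_volume < calc_thresh_hu)  # drop calcified voxels
    return soft.sum() * voxel_mm3 / 1000.0

# toy ROI: 1000 voxels of parenchyma at 30 HU with an 8-voxel calcification at 300 HU
hu = np.full((10, 10, 10), 30.0)
hu[:2, :2, :2] = 300.0
roi = np.ones_like(hu, dtype=bool)
vol = noncalcified_volume_ml(hu, roi, voxel_mm3=1.0)
```

In the toy case, 8 of 1000 one-cubic-millimetre voxels are excluded as calcified, leaving 0.992 ml of "active" tissue, the same subtraction logic as the green regions in the figure.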
  • 66. Masses: real or hacked (“adversarial attacks”) An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007) https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf by Brittany Goetting — Thursday, April 04, 2019, 09:24 PM EDT Terrifying Malware Alters CT Scans To Look Like Cancer, Fools Radiologists https://hothardware.com/news/malware-creates-fake-cancerous-nodes-in-ct-scans ... Unfortunately, this vital technology is vulnerable to hackers. Researchers recently designed malware that can add or take away fake cancerous nodules from CT and MRI scans. Researchers at the University Cyber Security Research Center in Israel developed malware that can modify CT and MRI scans. During their research, they showed radiologists real lung CT scans, 70 of which had been altered. At least three radiologists were fooled nearly every time. Pituitary apoplexy: two very different presentations with one unifying diagnosis. CT brain scan showing a hyperdense mass arising from the pituitary fossa, representing pituitary macroadenoma with haemorrhage http://doi.org/10.1258/shorts.2010.100073
  • 67. Cerebral Abscess Low density due to cerebral inflammatory disease. A, Typical appearance of a cerebral abscess: round, low-density cavity (arrow) surrounded by low-density vasogenic edema. Differentiation from other cavitary lesions such as radionecrotic cysts or cystic neoplasms often requires clinical/laboratory correlation, with help often provided by contrast-enhanced and diffusion-weighted MRI. B, Progressive multifocal leukoencephalopathy. Whereas white matter low density is nonspecific, involvement of the subcortical U-shaped fibers in the AIDS patient can help differentiate this disorder from HIV encephalitis. C, Toxoplasmosis. Patchy white matter low density (asterisks) in an immunocompromised patient with altered mental status. https://radiologykey.com/analysis-of-density-signal-intensity-and-echogenicity/ https://www.slideshare.net/Raeez/cns-infections-radiology Clinical stages of human brain abscesses on serial CT scans after contrast infusion: Computerized tomographic, neuropathological, and clinical correlations (1983) https://doi.org/10.3171/jns.1983.59.6.0972
  • 68. Ischemic stroke → hypodensity (CSF-like appearance) An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007) https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf CT scan slice of the brain showing a right-hemispheric cerebral infarct (left side of image). https://en.wikipedia.org/wiki/Cerebral_infarction
  • 69. Brain Symmetry: midline shift from mass effect #1 An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007) https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf https://en.wikipedia.org/wiki/Midline_shift https://www.slideshare.net/drlokeshmahar/approach-to-head-ct
  • 70. Brain Symmetry, midline shift #2: estimate with ICP Automated Midline Shift and Intracranial Pressure Estimation based on Brain CT Images Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward, Kayvan Najarian J. Vis. Exp. (74), e3871, doi:10.3791/3871 (2013). https://www.jove.com/video/3871 In this paper we present an automated system based mainly on computed tomography (CT) images, consisting of two main components: the midline shift estimation and an intracranial pressure (ICP) pre-screening system. To estimate the midline shift, first an estimation of the ideal midline is performed based on the symmetry of the skull and anatomical features in the brain CT scan. Then, segmentation of the ventricles from the CT scan is performed and used as a guide for the identification of the actual midline through shape matching. These processes mimic the measuring process by physicians and have shown promising results in the evaluation. In the second component, more features related to ICP are extracted from the CT scans, such as texture information and blood amount; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP. The result of the ideal midline detection: the red line is the approximate ideal midline. The two rectangular boxes cover the bone protrusion and the lower falx cerebri respectively. These boxes are used to reduce the regions of interest. The green dash line is the final detected ideal midline, which captures the bone protrusion and the lower falx cerebri accurately.
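The skull-symmetry step described above can be sketched in a few lines: given a binary skull mask from a thresholded axial slice, scan candidate vertical lines and keep the one that maximizes left/right mirror overlap. This is a minimal numpy sketch; the function name, search range and scoring are our own simplifications of the idea in Chen et al. (2013), not their code.

```python
import numpy as np

def estimate_ideal_midline(skull_mask: np.ndarray) -> int:
    """Estimate the ideal midline column of an axial slice by maximizing
    left/right mirror symmetry of a binary skull mask (toy version of the
    skull-symmetry step; details are illustrative assumptions)."""
    h, w = skull_mask.shape
    best_col, best_score = w // 2, -1.0
    # Only search candidate midlines around the image center
    for col in range(w // 4, 3 * w // 4):
        half = min(col, w - col)
        left = skull_mask[:, col - half:col]
        right = skull_mask[:, col:col + half][:, ::-1]  # mirrored right half
        # Fraction of mirrored pixels that coincide with the left half
        score = np.logical_and(left, right).sum() / max(half * h, 1)
        if score > best_score:
            best_col, best_score = col, score
    return best_col
```

On a symmetric skull the returned column sits on the mirror axis; on a scan with mass effect, the distance between this ideal midline and the deformed anatomical midline gives the shift.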
  • 71. Brain Symmetry, midline shift #3: detection algorithms The middle slice and the anatomical markers. A deformed midline example and the anatomical midline shift marker https://doi.org/10.1016/j.compmedimag.2013.11.001 (2014) A Simple, Fast and Fully Automated Approach for Midline Shift Measurement on Brain Computed Tomography Huan-Chih Wang, Shih-Hao Ho, Furen Xiao, Jen-Hai Chou https://arxiv.org/abs/1703.00797 Incorporating Task-Specific Structural Knowledge into CNNs for Brain Midline Shift Detection Maxim Pisov et al. (2019) https://doi.org/10.1007/978-3-030-33850-3_4 https://github.com/neuro-ml/midline-shift-detection
  • 74. Siemens Unveils AI Apps for Automatic MRI Image Segmentation DECEMBER 4TH, 2019 MEDGADGET EDITORS NEUROLOGY, NEUROSURGERY, RADIOLOGY, UROLOGY The AI-Rad Companion Brain MR for Morphometry Analysis, without any manual intervention, segments brain images from MRI exams, calculates brain volume, and automatically marks volume deviations in result tables that neurologists rely on for diagnostics and therapeutics. The last part it does by comparing the levels of gray matter, white matter, and cerebrospinal fluid in a given patient’s brain to normal levels. This can help with diagnosing Alzheimer’s, Parkinson’s, and other diseases. https://www.medgadget.com/2019/12/siemens-unveils-ai-apps-for-automatic-mri-image-segmentation.html Siemens could provide a similar tool for CT too.
  • 76. CT System Receives FDA Clearance for AI-Based Image Reconstruction Technology 07 Nov 2019 Canon Medical Systems USA, Inc. (Tustin, CA, USA) has received 510(k) clearance for its Advanced Intelligent Clear-IQ Engine (AiCE) for the Aquilion Precision https://www.medimaging.net/industry-news/articles/294779910/ct-system-receives-fda-clearance-for-ai-based-image-reconstruction-technology.html
  • 77. Canon Medical is releasing a new high-end digital PET/CT scanner at the upcoming RSNA conference in Chicago. The Cartesion Prime Digital PET/CT combines Canon’s Aquilion Prime SP CT scanner and the SiPM (silicon photomultiplier) PET detector, providing high resolution imaging and easy operator control, according to the company. Product page: Cartesion Prime Digital PET/CT
  • 78. Epica SeeFactorCT3 Multi-Modality System Wins FDA Clearance OCTOBER 8TH, 2019 https://www.medgadget.com/2019/10/epica-seefactorct3-multi-modality-system-wins-fda-clearance.html The SeeFactorCT3 produces sliceless CT images, unlike typical CT systems, which means that there’s no interpolation involved and therefore less chance of introducing artifacts. Isotropic imaging resolution goes down to 0.1 millimeters in soft and hard tissues, and lesions that are only 0.2 millimeters in diameter can be detected. Thanks to the company’s “Pulsed Technology,” the system can perform high resolution imaging while reducing the overall radiation delivered. Much of this is possible thanks to a dynamic flat panel detector that captures image sequences accurately and at high fidelity. A big advantage of the SeeFactorCT3 is its mobility, since it can be wheeled in and out of ORs, through hospital halls, and even taken inside patient rooms. When set for transport, the device is narrow enough to be pushed through a typical open door.
  • 79. Royal Philips extends diagnostic imaging portfolio DIAGNOSTIC DEVICES, DIAGNOSTIC IMAGING By NS Medical Staff Writer 01 Mar 2019 https://www.nsmedicaldevices.com/news/philips-incisive-ct-imaging-system/ The system is being offered with a ‘Tube for Life’ guarantee, as Philips will replace the Incisive’s X-ray tube, the key component of any CT system, at no additional cost throughout the entire life of the system, potentially lowering operating expenses by about $400,000. Additionally, the system features the company’s iDose4 Premium Package, which includes two technologies that can improve image quality: iDose4 and metal artifact reduction for large orthopedic implants (O-MAR). iDose4 can improve image quality through artifact prevention and increased spatial resolution at low dose. O-MAR reduces artifacts caused by large orthopedic implants. Together they produce high image quality with reduced artifacts. The system’s 70 kV scan mode is touted to offer improved low-contrast detectability and confidence at low dose. https://youtu.be/izXI3qry8kY
  • 82. Portable CTs: CereTom Review of Portable CT with Assessment of a Dedicated Head CT Scanner Z. Rumboldt, W. Huda and J.W. All American Journal of Neuroradiology October 2009, 30 (9) 1630-1636 https://doi.org/10.3174/ajnr.A1603 - Cited by 91 This article reviews a number of portable CT scanners for clinical imaging. These include the CereTom, Tomoscan, xCAT ENT, and OTOscan. The Tomoscan scanner consists of a gantry with multisection detectors and a detachable table. It can perform full-body scanning, or the gantry can be used without the table to scan the head. The xCAT ENT is a cone-beam CT scanner that is intended for intraoperative scanning of cranial bones and sinuses. The OTOscan is a multisection CT scanner intended for imaging in ear, nose, and throat settings and can be used to assess bone and soft tissue of the head. We also specifically evaluated the technical and clinical performance of the CereTom, a scanner designed specifically for neuroradiologic head imaging. https://doi.org/10.1097/JNN.0b013e3181ce5c5b Ginat and Gupta (2014) https://doi.org/10.1146/annurev-bioeng-121813-113601
  • 84. Future of CT: from energy-integrating detectors (EID) to photon-counting detectors (PCD)? The Future of Computed Tomography: Personalized, Functional, and Precise Alkadhi, Hatem and Euler, André Investigative Radiology: September 2020 - Volume 55 - Issue 9 - p 545-555 http://doi.org/10.1097/RLI.0000000000000668 Modern medicine cannot be imagined without the diagnostic capabilities of computed tomography (CT). Although the past decade witnessed a tremendous increase in scan speed, volume coverage, and temporal resolution, along with a considerable reduction of radiation dose, current trends in CT aim toward more patient-centric, tailored imaging approaches that deliver diagnostic information personalized to each individual patient. Functional CT with dual- and multienergy, as well as dynamic, perfusion imaging became clinical reality and will further prosper in the near future, and upcoming photon-counting detectors will deliver images at a heretofore unmatched spatial resolution. This article aims to provide an overview of current trends in CT imaging, taking into account the potential of photon-counting detector systems, and seeks to illustrate how the future of CT will be shaped.
  • 85. CT Startup Nanox from Israel: a great idea, if it would work as said? #1 https://www.mobihealthnews.com/news/nanoxs-digital-x-ray-system-wins-26m-investors The end goal is to deliver a robust imaging system that can drive earlier disease detection, especially in regions where traditional systems are either too costly or too complicated to roll out broadly. Looking at the longer term, Nanox said that it will be seeking regulatory approval for its platform, and then deploying it globally under a pay-per-scan business model that it says will enable cheaper medical imaging and screening for private and public provider systems.
  • 86. CT Startup Nanox from Israel: a great idea, if it would work as said? #2 Muddy Waters Research @muddywatersre MW is short $NNOX. We conclude that $NNOX has no product to sell other than its stock. Like $NKLA, NNOX appears to have faked its demo video. A convicted felon appears to be behind the IPO. A US partner has been requesting images for 6 months to no avail. "But NNOX gets much worse," the report says. "A convicted felon, who crashed an $8 billion market cap dotcom into the ground, was seemingly instrumental in plucking NNOX out of obscurity and bringing its massively exaggerated story to the U.S. NNOX touts distribution partnerships that supposedly amount to $180.8 million in annual commitments. Almost all of the company’s partnerships give reason for skepticism." Marty Stempniak | September 18, 2020 | Healthcare Economics & Policy Nanox hit with class action lawsuit amid criticism labeling imaging startup as ‘Theranos 2.0’ https://www.radiologybusiness.com/topics/healthcare-economics The news comes just weeks after the Israeli firm completed a successful initial public offering that raised $190 million. Nanox has inked a series of deals in several countries to provide its novel imaging system, claiming to offer high-end medical imaging at a fraction of the cost and footprint. But analysts at Citron Research raised red flags Tuesday, Sept. 15, claiming the company is merely a “stock promotion” amassing millions without any FDA approvals or scientific evidence. Citron’s analysis, titled “A Complete Farce on the Market: Theranos 2.0”, drew widespread attention, with several law firms soliciting investors looking to sue Nanox over its claims. Plaintiff Matthew White and law firm Rosen Law are one of the first to follow through, filing a proposed securities class action in New York on Wednesday. He claims the company made false statements to both the SEC and investors to inflate its stock value, Bloomberg Law reported. White and his attorneys also allege Nanox fabricated commercial agreements and made misleading statements about its imaging technology. Several other law firms also announced their own lawsuits on behalf of investors Friday. Nanox did not respond to a Radiology Business request for comment. However, the Neve Ilan, Israel-based company posted a statement to its webpage Wednesday, Sept. 16, addressing the “unusual trading activities” after investors dumped the stock en masse in response to Citron’s concerns.
  • 88. From Advances in Computed Tomography Imaging Technology, Ginat and Gupta (2014) https://doi.org/10.1146/annurev-bioeng-121813-113601 A typical multidetector CT scanner consists of a mosaic of scintillators that convert X-rays into light in the visible spectrum, a photodiode array that converts the light into an electrical signal, a switching array that enables switching between channels, and a connector that conveys the signal to a data acquisition system (Figure 6). The multiple channels between the detectors acquire multiple sets of projection data for each rotation of the scanner gantry. The channels can sample different detector elements simultaneously and can combine the signals. The detector elements can vary in size, and hybrid detectors that comprise narrow (0.5-mm, 0.625-mm, or 0.75-mm) detectors in the center with wider (1.0-mm, 1.25-mm, or 1.5-mm) detectors flanked along the sides are commonly used (Saini 2004). Third-generation CT scanners featured rotate-rotate geometry, whereby the tube and the detectors rotated together around the patient. In conjunction with a wide X-ray fan beam that encompassed the entire patient cross-section and an array of detectors to intercept the beam, scan times of less than 5 s could be achieved. However, third-generation CT scanners were prone to ring artifacts that resulted from drift in the calibration of one detector relative to the other detectors. Fourth-generation scanners featured stationary ring detectors and a rotating fan-beam X-ray tube (Figure 5), which mitigated the issues related to ring artifacts. However, the ring-detector arrangement limited the use of scatter reduction.
  • 89. ...leakage current MOS switch ASICs and ultra-low noise pre-amplification ASICs. Our modern, automated, high-precision assembly process guarantees our products are of high reliability and stability. With our core competences in photodiode, ASIC and assembly technologies we offer products in different assembly levels, ranging from photodiode chips to full detector modules. Our strong experience in designing and developing CT detector modules ensures that customized solutions are quickly and cost-efficiently in use at our customers.
  • 91. Acquisition → Sinogram → Reconstruction Fransson (2019): Although many different reconstruction methods are available, there are mainly two categories: filtered back-projection (FBP) and iterative reconstruction (IR). FBP is a simpler method than IR and takes less time to compute, but artifacts are more frequent and dominant (Stiller 2018). The image that provides the anatomical information is said to exist in the image domain, while the acquired projection data exist in the projection domain; the two are linked through the Fourier transform (the central slice theorem). In the projection domain, processing is performed with the use of filters, or kernels, in order to enhance the image in various ways, such as reducing the noise level. When the processing is completed, the inverse Fourier transform is applied to the data in order to recover the desired anatomical image.
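The FBP pipeline summarized above (project to a sinogram, ramp-filter each projection in the Fourier domain, back-project) can be illustrated with a toy parallel-beam implementation. Everything here (nearest-neighbour geometry, ideal ramp filter, function names) is a didactic simplification, not a clinically usable reconstructor.

```python
import numpy as np

def radon(img, n_angles=90):
    """Toy parallel-beam forward projection: rotate the sampling grid and
    sum along columns to get one projection per angle."""
    n = img.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    sino = np.zeros((n_angles, n))
    for i, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        xr = np.cos(theta) * (xs - c) - np.sin(theta) * (ys - c) + c
        yr = np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
        xi, yi = np.rint(xr).astype(int), np.rint(yr).astype(int)
        ok = (xi >= 0) & (xi < n) & (yi >= 0) & (yi < n)
        rot = np.zeros_like(img)
        rot[ys[ok], xs[ok]] = img[yi[ok], xi[ok]]  # nearest-neighbour rotation
        sino[i] = rot.sum(axis=0)
    return sino

def fbp(sino):
    """Ramp-filter each projection in the Fourier domain, then back-project."""
    n_angles, n = sino.shape
    ramp = np.abs(np.fft.fftfreq(n))  # ideal ramp filter |f|
    filtered = np.real(np.fft.ifft(np.fft.fft(sino, axis=1) * ramp, axis=1))
    c = (n - 1) / 2.0
    ys, xs = np.mgrid[0:n, 0:n]
    recon = np.zeros((n, n))
    for i, theta in enumerate(np.linspace(0, np.pi, n_angles, endpoint=False)):
        # detector coordinate of each pixel at this view angle
        t = np.cos(theta) * (xs - c) + np.sin(theta) * (ys - c) + c
        ti = np.clip(np.rint(t).astype(int), 0, n - 1)
        recon += filtered[i][ti]
    return recon * np.pi / n_angles
```

Swapping the ramp filter for a smoother kernel (e.g. tapering the high frequencies) is exactly the "kernel" choice the quoted text refers to: sharper kernels preserve edges but amplify noise.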
  • 92. Acquisition → Sinogram → Reconstruction Stiller 2018: Basics of iterative reconstruction methods in computed tomography: A vendor-independent overview
  • 93. Sinogram → Image Space Machine Friendly Machine Learning: Interpretation of Computed Tomography Without Image Reconstruction Hyunkwang Lee, Chao Huang, Sehyo Yune, Shahein H. Tajmir, Myeongchan Kim & Synho Do Department of Radiology, Massachusetts General Hospital, Boston; John A. Paulson School of Engineering and Applied Sciences, Harvard University, Scientific Reports volume 9, Article number: 15540 (2019) https://doi.org/10.1038/s41598-019-51779-5 Examples of reconstructed images and sinograms with different labels for (a) body part recognition and (b) ICH detection. From left to right: original CT images, windowed CT images, sinograms with 360 projections by 729 detector pixels, and windowed sinograms 360 × 729. In the last row, an example CT with hemorrhage is annotated with a dotted circle in image space, with the region of interest converted into the sinogram domain using the Radon transform. This area is highlighted in red on the sinogram in the fifth column.
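The "windowed CT images" used as network inputs above come from standard HU display windowing: clip the Hounsfield values to a window around a center level and rescale to a fixed range. A minimal sketch; the default brain window (WL 40 / WW 80) is a common convention, not a parameter taken from the paper.

```python
import numpy as np

def window_ct(hu, center=40.0, width=80.0):
    """Clip HU values to a display window and rescale to [0, 1].
    Defaults correspond to a typical brain window (WL=40, WW=80)."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)
```

The same operation can be applied per detector bin to a sinogram ("windowed sinograms" in the figure), although the window limits there no longer have a direct HU interpretation.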
  • 94. Reconstruction from sparse measurements: a common problem in all scanning-based imaging Zhu et al. (2018) Nature "Image reconstruction by domain-transform manifold learning" https://doi.org/10.1038/nature25988 Radon projection; Spiral non-Cartesian Fourier; Undersampled Fourier; Misaligned Fourier - Cited by 238 - https://youtu.be/o-vt1Ld6v-M - https://github.com/chongduan/MRI-AUTOMAP They describe the technique - dubbed AUTOMAP (automated transform by manifold approximation) - in a paper published today in the journal Nature. "An essential part of the clinical imaging pipeline is image reconstruction, which transforms the raw data coming off the scanner into images for radiologists to evaluate," https://phys.org/news/2018-03-artificial-intelligence-technique-quality-medical.html
  • 95. PET+CT Joint Reconstruction Improving the Accuracy of Simultaneously Reconstructed Activity and Attenuation Maps Using Deep Learning Donghwi Hwang, Kyeong Yun Kim, Seung Kwan Kang, Seongho Seo, Jin Chul Paeng, Dong Soo Lee and Jae Sung Lee J Nucl Med 2018;59:1624–1629 http://doi.org/10.2967/jnumed.117.202317 Simultaneous reconstruction of activity and attenuation using the maximum-likelihood reconstruction of activity and attenuation (MLAA) augmented by time-of-flight information is a promising method for PET attenuation correction. However, it still suffers from several problems, including crosstalk artifacts, slow convergence speed, and noisy attenuation maps (μ-maps). In this work, we developed deep convolutional neural networks (CNNs) to overcome these MLAA limitations, and we verified their feasibility using a clinical brain PET dataset. There are some existing works on applying deep learning to predict CT μ-maps based on T1-weighted MR images or a combination of Dixon and zero-echo-time images (51,52). The approach using the Dixon and zero-echo-time images would be more physically relevant than the T1-weighted MRI-based approach because the Dixon and zero-echo-time sequences provide more direct information on the tissue composition than does the T1 sequence. The method proposed in this study has the same physical relevance as the Dixon or zero-echo-time approach but does not require the acquisition of additional MR images.
  • 96. Reconstruction example for PET from sinograms DirectPET: Full Size Neural Network PET Reconstruction from Sinogram Data William Whiteley, Wing K. Luk, Jens Gregor Siemens Medical Solutions USA https://arxiv.org/abs/1908.07516 This paper proposes a new, more efficient network design called DirectPET which is capable of reconstructing a multi-slice Positron Emission Tomography (PET) image volume (i.e., 16x400x400) by addressing the computational challenges through a specially designed Radon inversion layer. We compare the proposed method to the benchmark Ordered Subsets Expectation Maximization (OSEM) algorithm using signal-to-noise ratio, bias, mean absolute error and structural similarity measures. Line profiles and full-width half-maximum measurements are also provided for a sample of lesions. Looking toward future work, there are many possibilities in network architecture, loss functions and training optimization to explore, which will undoubtedly lead to more efficient reconstructions and even higher quality images. However, the biggest challenge with producing medical images is providing overall confidence in neural network reconstruction on unseen samples.
  • 97. Improving the Accuracy of Simultaneously Reconstructed Activity and Attenuation Maps Using Deep Learning J Nucl Med 2018;59:1624–1629 http://doi.org/10.2967/jnumed.117.202317
  • 99. Beam Hardening Artifact: found often at lower slices near the brainstem, where small spaces are surrounded by bone. Beam hardening artifact (left), and partial volume effect (right) http://doi.org/10.13140/RG.2.1.2575.3122 Understanding and Mitigating Unexpected Artifacts in Head CTs: A Practical Experience Flavius D. Raslau, J. Zhang, J. Riley-Graham, E.J. Escott (2016) http://doi.org/10.3174/ng.2160146 Beam Hardening. The most commonly encountered artifact in CT scanning is beam hardening, which causes the edges of an object to appear brighter than the center, even if the material is the same throughout. The artifact derives its name from its underlying cause: the increase in mean X-ray energy, or “hardening” of the X-ray beam, as it passes through the scanned object. Because lower-energy X-rays are attenuated more readily than higher-energy X-rays, a polychromatic beam passing through an object preferentially loses the lower-energy parts of its spectrum. The end result is a beam that, though diminished in overall intensity, has a higher average energy than the incident beam. This also means that, as the beam passes through an object, the effective attenuation coefficient of any material diminishes, thus making short ray paths proportionally more attenuating than long ray paths. In X-ray CT images of sufficiently attenuating material, this process generally manifests itself as an artificial darkening at the center of long ray paths, and a corresponding brightening near the edges. In objects with roughly circular cross sections this process can cause the edge to appear brighter than the interior, but in irregular objects it is commonly difficult to differentiate between beam hardening artifacts and actual material variations.
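The spectrum-hardening mechanism described above can be reproduced numerically: with a toy two-energy beam, the effective attenuation coefficient -ln(I/I0)/L falls as the path length grows, because the softer (more attenuated) component dies out first. All numbers here (spectral weights, mu values) are invented purely for illustration.

```python
import numpy as np

def effective_mu(path_cm, weights=(0.5, 0.5), mus=(0.4, 0.2)):
    """Effective attenuation coefficient -ln(I/I0)/L of a toy polychromatic
    beam with two spectral components (weights sum to 1, mus in 1/cm).
    Beam hardening: the result decreases as path_cm increases."""
    w, mu = np.asarray(weights), np.asarray(mus)
    transmitted = np.sum(w * np.exp(-mu * path_cm))  # I/I0 after path_cm
    return -np.log(transmitted) / path_cm
```

This monotone drop of the effective mu with path length is exactly why long ray paths through the posterior fossa reconstruct darker than they should, producing the hypodense streaks between the petrous bones.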
  • 100. Motion Artifacts: as in most imaging, occur when the subject moves during the acquisition. Several steps can be taken to prevent voluntary movement of the body during scanning, while involuntary movement is difficult to prevent. Some modern scanning devices have features that reduce the resulting artifacts. Amer et al. (2018) researchgate.net Artifacts in CT: recognition and avoidance. Barrett and Keat (2004) https://doi.org/10.1148/rg.246045065 Freeze! Revisiting CT motion artifacts: Formation, recognition and remedies. semanticscholar.org CT brain with severe motion artifact https://radiopaedia.org/images/4974802
  • 101. Streak Artifacts from high density structures An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007) https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf Dr Balaji Anvekar's Neuroradiology Cases: Streak artifacts CT http://www.neuroradiologycases.com/2011/10/streak-artifacts.html Hegazy, M.A.A., Cho, M.H., Cho, M.H. et al. U-net based metal segmentation on projection domain for metal artifact reduction in dental CT (2019) https://doi.org/10.1007/s13534-019-00110-2
  • 102. Ring Artifacts: caused by miscalibrated or defective detector elements, appearing as rings centered on the rotation axis CT artifacts: causes and reduction techniques (2012) F Edward Boas & Dominik Fleischmann, Department of Radiology, Stanford University School of Medicine, 300 Pasteur Drive, Stanford, CA 94305, USA https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html http://doi.org/10.1088/0031-9155/46/12/309
  • 103. Zebra and Stair-step Artifacts CT artifacts: causes and reduction techniques (2012) F Edward Boas & Dominik Fleischmann, Department of Radiology, Stanford University School of Medicine https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html Zebra and stair-step artifacts. (A) Zebra artifacts (alternating high and low noise slices, arrows) due to helical interpolation. These are more prominent at the periphery of the field of view. (B) Stair-step artifacts (arrows) seen with helical and multidetector row CT. These are also more prominent near the periphery of the field of view. Therefore, it is important to place the object of interest near the center of the field of view. Zebra stripes https://radiopaedia.org/articles/zebra-stripes-1?lang=gb Andrew Murphy and Dr J. Ray Ballinger et al. Zebra stripes/artifacts appear as alternating bright and dark bands in an MRI image. The term has been used to describe several different kinds of artifacts, causing some confusion. Artifacts that have been described as a zebra artifact include the following: ● Moire fringes ● Zero-fill artifact ● Spike in k-space Zebra stripes have been described in association with susceptibility artifacts. In CT there is also a zebra artifact from 3D reconstructions and a zebra sign from haemorrhage in the cerebellar sulci. It therefore seems prudent to use "zebra" with a term like "stripes" rather than "artifacts".
  • 104. Bone discontinuities from fractures An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007) https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf https://www.ncbi.nlm.nih.gov/pubmed/21691535
  • 105. Bone fractures in practice Doctor Explains Serious UFC Eye Injury for Karolina Kowalkiewicz - UFC Fight Night 168 Brian Sutterer, https://youtu.be/XwvoNsypP-I Orbital floor fracture: muscle or fat going into the maxillary sinus https://en.wikipedia.org/wiki/Orbital_blowout_fracture
  • 106. Networks trained for fractures as well Deep Convolutional Neural Networks for Automatic Detection of Orbital Blowout Fractures D. Ng, L. Churilov, P. Mitchell, R. Dowling and B. Yan American Journal of Neuroradiology February 2018, 39 (2) 232-237; https://doi.org/10.3174/ajnr.A5465 Orbital blowout fracture is a common disease in the emergency department, and a delay or failure in diagnosis can lead to permanent visual changes. This study aims to evaluate the ability of an automatic orbital blowout fracture detection system based on computed tomography (CT) data. The limitations of this work should be mentioned. First, our method was developed and evaluated on data from a single tertiary hospital. Thus, further assessment of large data from other centers is required to increase the generalizability of the findings, which will be addressed in future work. Fracture location is also an important parameter in accurate diagnosis and planning for surgical management. With further improvements and clinical verification, an optimized model could be implemented in the development of computer-aided decision systems. Preprocessing of DICOM data. A, Original pixel values visualized on a CT slice. B, Effect after finding the largest link area. C, Image with bone window limitation. D, Binary image of a CT slice. E, Image clipped with the maximum outer rectangular frame. CT, computed tomography.
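The preprocessing steps named in the figure caption (bone-window binarization, keeping the largest connected area) can be approximated as follows. This is a hedged sketch: the 300 HU bone threshold and the 4-connected flood fill are our own assumptions, since the paper's exact method is not given here.

```python
import numpy as np
from collections import deque

def bone_mask(hu, lo=300.0):
    """Binarize at a typical bone HU threshold (assumed, not from the paper)."""
    return hu >= lo

def largest_component(mask):
    """Keep only the largest 4-connected foreground component of a binary
    mask; a minimal stand-in for the 'largest link area' step."""
    n_rows, n_cols = mask.shape
    labels = np.zeros(mask.shape, dtype=int)
    best_label, best_size, next_label = 0, 0, 0
    for r in range(n_rows):
        for c in range(n_cols):
            if mask[r, c] and labels[r, c] == 0:
                next_label += 1
                size, q = 0, deque([(r, c)])
                labels[r, c] = next_label
                while q:                      # breadth-first flood fill
                    y, x = q.popleft()
                    size += 1
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        yy, xx = y + dy, x + dx
                        if (0 <= yy < n_rows and 0 <= xx < n_cols
                                and mask[yy, xx] and labels[yy, xx] == 0):
                            labels[yy, xx] = next_label
                            q.append((yy, xx))
                if size > best_size:
                    best_label, best_size = next_label, size
    return labels == best_label
```

In practice one would use `scipy.ndimage.label` for speed; the explicit flood fill is shown only to make the connected-component step self-contained.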
  • 108. ‘Signs’: human-defined patterns predicting the outcome #1 Noncontrast computed tomography markers of outcome in intracerebral hemorrhage patients Miguel Quintas-Neves et al. (Oct 2019) A Journal of Progress in Neurosurgery, Neurology and Neurosciences https://doi.org/10.1080/01616412.2019.1673279 328 patients were included. The most frequent NCCT marker was ‘any hypodensity’ (68.0%) and the least frequent was the blend sign (11.6%). Even though some noncontrast computed tomography (NCCT) markers are independent predictors of hematoma growth (HG) and 30-day survival, they have suboptimal diagnostic test performance for such outcomes. With a physical background of course, but still a bit subjective.
  • 109. ‘Signs’: human-defined patterns predicting the outcome #2 From Hemorrhagic Stroke (2014) Julius Griauzde, Elliot Dickerson and Joseph J. Gemmete, Department of Radiology, University of Michigan http://doi.org/10.1007/978-1-4614-9212-2_46-1 Active Hemorrhage: Observing active extravasation of blood into the area of hemorrhage is an ominous radiologic finding that suggests both ongoing expansion of the hematoma and a poor clinical outcome [Kim et al. 2008]. On non-contrast examinations, freshly extravasated blood will have attenuation characteristics different from the blood which has been present in the hematoma for a longer period, and these heterogeneous groups of blood products can circle around one another to produce a “swirl sign”, which has also been associated with hemorrhage growth and poor outcomes [Kim et al. 2008]. If the patient receives a CTA study, active extravasation can present as a tiny spot on arterial phase images (the “spot sign”) which can rapidly expand on more delayed phase images. Even when a spot of precise extravasation is not identified on arterial phase images, more delayed images can directly demonstrate extravasated contrast indicating ongoing hemorrhage. With a physical background of course, but still a bit subjective. a NCCT of deep right ICH (38 ml) with swirl sign (arrow). b Corresponding hematoma CT densitometry histogram (Mean HU 55.3, SD 9.7, CV 0.18, Skewness −0.26, Kurtosis 2.41). c CTA with multiple spot signs present (arrows). The patient subsequently underwent hematoma expansion of 41 ml. d NCCT of a different patient with right frontal lobar ICH (38 ml) and trace IVH. e Corresponding hematoma CT densitometry histogram (Mean HU 61.5, SD 12.2, CV 0.20, Skewness −0.64, Kurtosis 2.6). f CTA demonstrates no evidence of spot sign. The patient had a stable hematoma on 24-hour follow-up. Swirls and spots: relationship between qualitative and quantitative hematoma heterogeneity, hematoma expansion, and the spot sign Dale Connor, Thien J. Huynh, Andrew M. Demchuk, Dar Dowlatshahi, David J. Gladstone, Sivaniya Subramaniapillai, Sean P. Symons & Richard I. Aviv Neurovascular Imaging volume 1, Article number: 8 (2015) https://doi.org/10.1186/s40809-015-0010-1
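The densitometry histogram values quoted in the caption (mean HU, SD, CV, skewness, kurtosis) are plain moment statistics over the hematoma voxels, and can be computed as below. A sketch of the quantities reported by Connor et al., not their code; note the kurtosis here is the non-excess form (Gaussian = 3), consistent with the quoted values around 2.4-2.6.

```python
import numpy as np

def densitometry_stats(hu_values):
    """Quantitative hematoma heterogeneity markers from the voxel HU
    histogram: mean, SD, coefficient of variation, skewness and
    (non-excess) kurtosis."""
    x = np.asarray(hu_values, dtype=float)
    mean, sd = x.mean(), x.std()
    z = (x - mean) / sd  # standardized values
    return {
        "mean_hu": mean,
        "sd": sd,
        "cv": sd / mean,
        "skewness": np.mean(z ** 3),
        "kurtosis": np.mean(z ** 4),  # Gaussian reference value is 3
    }
```

These scalar descriptors are a natural bridge between the subjective "swirl sign" reading and features that a prognostic model can consume directly.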
  • 110. CT “swirl sign” associated with hematoma expansion. The CT Swirl Sign Is Associated with Hematoma Expansion in Intracerebral Hemorrhage. D. Ng, L. Churilov, P. Mitchell, R. Dowling and B. Yan. American Journal of Neuroradiology February 2018, 39 (2) 232-237; https://doi.org/10.3174/ajnr.A5465 Hematoma expansion is an independent determinant of poor clinical outcome in intracerebral hemorrhage. Although the “spot sign” predicts hematoma expansion, its identification requires CT angiography, which limits its general accessibility in some hospital settings. Noncontrast CT (NCCT), without the need for CT angiography, may identify sites of active extravasation, termed the “swirl sign.” We aimed to determine the association of the swirl sign with hematoma expansion. The NCCT swirl sign was reliably identified and is associated with hematoma expansion. We propose that the swirl sign be included in risk stratification of intracerebral hemorrhage and considered for inclusion in clinical trials. Noncontrast brain CT of a 73-year-old woman who presented with right-sided weakness. Initial brain CT (A–C) demonstrates a left parietal hematoma measuring 33 mL, a hypodense hematoma with hypodense foci, the swirl sign. Follow-up CT (D–F) performed 8 hours later demonstrates increased hematoma volume, 46 mL. Imaging features of swirl sign and spot sign: coronal nonenhanced CT (A) demonstrates the hypodense area within the hematoma (swirl sign [asterisk]), whereas a hyperdense spot is shown on CT angiography (arrow) (B). There is already mass effect with midline shift and intraventricular hematoma extension. https://doi.org/10.1212/WNL.0000000000003290
  • 111. CT “spot sign”. Advances in CT for prediction of hematoma expansion in acute intracerebral hemorrhage. Thien J Huynh, Sean P Symons and Richard I Aviv. Division of Neuroradiology, Department of Medical Imaging, Sunnybrook Health Sciences and University of Toronto, Toronto, Canada. Imaging in Medicine (2013) Vol 5 Issue 6 https://www.openaccessjournals.com/articles/advances-in-ct-for-prediction-of-hematoma-expansion-in-acute-intracerebral-hemorrhage.html Noncontrast CT imaging plays a critical role in acute intracerebral hemorrhage (ICH) diagnosis, as clinical features are unable to reliably distinguish ischemic from hemorrhagic stroke. For detection of acute hemorrhage, CT is considered the gold standard; however, CT and MRI have been found to be similar in accuracy. CT is preferred over MR imaging due to reduced cost, rapid scan times, increased patient tolerability and increased accessibility in the emergency setting. It is important to note, however, that CT lacks sensitivity in identifying foci of chronic hemorrhage compared with gradient-echo and T2* susceptibility-weighted MRI. MR imaging may also provide additional information regarding the presence of cavernous malformations and characterizing perihematomal edema.
  • 112. CT “black hole sign”. Comparison of Swirl Sign and Black Hole Sign in Predicting Early Hematoma Growth in Patients with Spontaneous Intracerebral Hemorrhage. Xin Xiong et al. (2018) http://doi.org/10.12659/MSM.906708 Early hematoma growth is associated with poor outcome in patients with spontaneous intracerebral hemorrhage (ICH). The swirl sign (SS) and the black hole sign (BHS) are imaging markers in ICH patients. The aim of this study was to compare the predictive value of these 2 signs for early hematoma growth. Illustration of swirl sign, black hole sign, and follow-up CT images. (A) A 60-year-old man presented with sudden onset of left-sided paralysis. Admission CT image performed 1 h after onset of symptoms shows thalamic ICH with a swirl sign (arrow); the hematoma volume was 16.57 ml. (B) Hematoma volume remains the same on follow-up CT scan performed 23 h after onset of symptoms. (C) A 75-year-old man with left deep ICH. Initial CT image performed 2 h after onset of symptoms shows black hole sign (arrow). (D) Follow-up CT image 4 h later shows significant hematoma growth.
  • 113. CT “leakage sign”. You probably noticed the pattern already? Instead of admitting that no single “sign” can tell the whole story, clinicians keep defining non-robust “biomarkers” rather than exploring data-driven methods (this applies to most clinical domains). Leakage Sign for Primary Intracerebral Hemorrhage: A Novel Predictor of Hematoma Growth. Kimihiko Orito, Masaru Hirohata, Yukihiko Nakamura, Nobuyuki Takeshige, Takachika Aoki, Gousuke Hattori, Kiyohiko Sakata, Toshi Abe, Yuusuke Uchiyama, Teruo Sakamoto, and Motohiro Morioka. Stroke. 2016;47:958–963 https://doi.org/10.1161/STROKEAHA.115.011578 Recent studies of intracerebral hemorrhage treatments have highlighted the need to identify reliable predictors of hematoma expansion. Several studies have suggested that the spot sign on computed tomographic angiography (CTA) is a sensitive radiological predictor of hematoma expansion in the acute phase. However, the spot sign has low sensitivity for hematoma expansion. In this study, we evaluated the usefulness of a novel predictive method, called the leakage sign. The leakage sign was more sensitive than the spot sign for predicting hematoma expansion in patients with ICH. In addition to the indication for an operation and aggressive treatment, we expect that this method will be helpful to understand the dynamics of ICH in clinical medicine.
  • 114. CT “island sign”. Island Sign: An Imaging Predictor for Early Hematoma Expansion and Poor Outcome in Patients With Intracerebral Hemorrhage. Qi Li, Qing-Jun Liu, Wen-Song Yang, Xing-Chen Wang, Li-Bo Zhao, Xin Xiong, Rui Li, Du Cao, Dan Zhu, Xiao Wei, and Peng Xie. Stroke. 2017;48:3019–3025, 10 Oct 2017 https://doi.org/10.1161/STROKEAHA.117.017985 We included patients with spontaneous intracerebral hemorrhage (ICH) who had undergone baseline CT within 6 hours after ICH symptom onset in our hospital between July 2011 and September 2016. A total of 252 patients who met the inclusion criteria were analyzed. Among them, 41 (16.3%) patients had the island sign on baseline noncontrast CT scans. In addition, the island sign was observed in 38 of 85 patients (44.7%) with hematoma growth. Multivariate logistic regression analysis demonstrated that the time to baseline CT scan, initial hematoma volume, and the presence of the island sign on baseline CT scan independently predicted early hematoma growth. Illustration of island sign. Axial noncontrast computed tomography (CT) images of 4 patients with CT island sign. A, CT island sign in a patient with basal ganglia hemorrhage. Note that there are 3 small scattered little hematomas (arrows), each separate from the main hematoma. B, Putaminal intracerebral hemorrhage with 3 small separate hematomas (arrowheads). Note that there are hypointense areas between the 3 small hematomas and the main hematoma. C, Lobar hematoma with 4 scattered separate hematomas (arrowheads). D, Large basal ganglia hemorrhage with intraventricular extension. The hematoma consists of 4 bubble-like or sprout-like small hematomas (arrowheads) that connect with the main hematoma and one separate small hematoma (arrow). Illustration of differences between the Barras shape scale and Qi Li's island sign. A, Barras scale category IV lobulated hematoma. Note that the irregular margin had a broad base, and the border of the main hematoma was spike-like (arrow).
B, A lobulated hematoma that belongs to Barras scale category V. Note that the hematoma consisted of 4 spike-like projections (lobules). C, The island sign consisted of one separate small island (arrow) and 3 little islands (arrowheads) that connect with the main hematoma. Note that the 3 small hematomas were bubble-like or sprout-like outpouchings from the main hematoma. D, A large hematoma with 4 bubble-like or sprout-like small hematomas (arrowheads) all connected with the main bleeding. Note that the large lobule (big arrow) at the bottom of the main hematoma was not considered an island.
  • 115. How well do humans agree on the sign definitions? Inter- and Intrarater Agreement of Spot Sign and Noncontrast CT Markers for Early Intracerebral Hemorrhage Expansion. Jawed Nawabi et al. J. Clin. Med. 2020, 9(4), 1020; https://doi.org/10.3390/jcm9041020 (This article belongs to the Special Issue Intracerebral Hemorrhage: Clinical and Neuroimaging Characteristics.) The aim of this study was to assess the inter- and intrarater reliability of noncontrast CT (NCCT) markers [Black Hole Sign (BH), Blend Sign (BS), Island Sign (IS), and Hypodensities (HD)] and Spot Sign (SS) on CTA in patients with spontaneous intracerebral hemorrhage (ICH). NCCT imaging findings and SS on CTA have good-to-excellent inter- and intrarater reliabilities, with the highest agreement for BH and SS. Representative examples of disagreed ratings of four noncontrast computed tomographic (NCCT) markers and Spot Sign (SS) on CT angiography (CTA) for intracerebral hemorrhage expansion. (A) SS on CTA (white arrow) mistaken for intraventricular plexus calcification (black arrow) (B). (C) Blend sign (white arrows) mistaken for fluid sign. (D) Swirl sign mistaken for hypodensities (black arrow). (E) Hypodensities (black arrow) mistaken for swirl sign (F).
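Inter- and intrarater agreement in studies like this is typically quantified with Cohen's kappa, which corrects raw agreement for agreement expected by chance. A minimal sketch for two raters scoring a binary sign; the ratings below are made up:

```python
import numpy as np

def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters' categorical labels of the same cases."""
    r1, r2 = np.asarray(r1), np.asarray(r2)
    cats = np.union1d(r1, r2)
    po = np.mean(r1 == r2)  # observed agreement
    # chance agreement from each rater's marginal label frequencies
    pe = sum(np.mean(r1 == c) * np.mean(r2 == c) for c in cats)
    return (po - pe) / (1 - pe)

# Two hypothetical raters scoring swirl-sign presence on 10 scans
rater_a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
rater_b = [1, 0, 1, 0, 0, 0, 1, 0, 1, 1]
print(round(cohens_kappa(rater_a, rater_b), 3))  # → 0.8
```

With 9/10 raw agreement but a chance agreement of 0.5, kappa lands at 0.8 ("good-to-excellent" in the usual interpretation bands).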
  • 117. CAD: computer-aided diagnosis (not design), rebranded as radiomics → From Handcrafted to Deep-Learning-Based Cancer Radiomics: Challenges and Opportunities. Parnian Afshar et al. (2019) IEEE Signal Processing Magazine (Volume: 36, Issue: 4, July 2019) https://doi.org/10.1109/MSP.2019.2900993 Radiomics, an emerging and relatively new research field, refers to extracting semi-quantitative and/or quantitative features from medical images with the goal of developing predictive and/or prognostic models. In the near future, it is expected to be a critical component for integrating image-derived information used for personalized treatment. The conventional radiomics workflow is typically based on extracting predesigned features (also referred to as handcrafted or engineered features) from a segmented region of interest (ROI). Nevertheless, recent advancements in deep learning have inspired trends toward deep-learning-based radiomics (DLR) (also referred to as discovery radiomics). The different categories of handcrafted features commonly used within the context of radiomics. Extracting deep-learning radiomics (DLR): the input to the network can be the original image, the segmented ROI, or a combination of both. Either the extracted radiomics features are used throughout the rest of the network, or an external model is used to make the decision based on radiomics features.
  • 118. Reproducibility of traditional radiomic features #1. Reproducibility of CT Radiomic Features within the Same Patient: Influence of Radiation Dose and CT Reconstruction Settings. Mathias Meyer, James Ronald, Federica Vernuccio, Rendon C. Nelson, Juan Carlos Ramirez-Giraldo, Justin Solomon, Bhavik N. Patel, Ehsan Samei, Daniele Marin. Radiology (1 Oct 2019) https://doi.org/10.1148/radiol.2019190928 Results of recent phantom studies show that variation in CT acquisition parameters and reconstruction techniques may make radiomic features largely nonreproducible and of limited use for prognostic clinical studies. Conclusion: Most radiomic features are highly affected by CT acquisition and reconstruction settings, to the point of being nonreproducible. Selecting reproducible radiomic features along with study-specific correction factors offers improved clustering reproducibility. Images in 63-year-old female study participant with metastatic liver disease from colon cancer. CT images reconstructed in the axial plane with (top row) 5.0 mm and (bottom row) 3.0 mm section thickness. The texture distribution alters between the two reconstructions, with a direct effect on the quantitative texture radiomic features, such as gray-level size zone matrix large area high gray-level emphasis (LAHGLE) (5.0 mm LAHGLE = 4301732.0 vs 3.0 mm LAHGLE = 7089324.3), as displayed in the lesion overlay images (middle column) and the heatmap distributions (rightmost column). The heat maps (rightmost column) display the difference of the original image and a convolution. Note how the heat map distribution changes between the different section thicknesses. The heat map was generated by using MintLesion (version 3.4.4; Mint Medical, Heidelberg, Germany).
  • 119. Reproducibility of traditional radiomic features #2. Reliability of CT-based texture features: Phantom study. Bino A. Varghese, Darryl Hwang, Steven Y. Cen, Joshua Levy, Derek Liu, Christopher Lau, Marielena Rivas, Bhushan Desai, David J. Goodenough, Vinay A. Duddalwar. Journal of Applied Clinical Medical Physics (2019) https://doi.org/10.1002/acm2.12666 Objective: To determine the intra-, inter- and test-retest variability of CT-based texture analysis (CTTA) metrics. Results: As expected, the robustness, repeatability and reproducibility of CTTA metrics are variably sensitive to various scanner (Philips Brilliance 64 CT, Toshiba Aquilion Prime 160 CT) and scanning parameters. Entropy of Fast Fourier Transform-based texture metrics was overall most reliable across the two scanners and scanning conditions. Post-processing techniques that reduce image noise while preserving the underlying edges associated with true anatomy or pathology bring about significant differences in radiomic reliability compared to when they were not used. (Left) Texture phantom comprising three texture patterns. (Middle) Phantom placement for image acquisition. (Right) Cross-section of texture phantom patterns. (1), (2) and (3) are 3D-printed ABS plastic with fill levels 10%, 20%, and 40%, respectively. (Bk) is a homogeneous ABS material. (The window level is −500 HU with a width of 1600 HU.)
3.4 Effect of post-processing techniques that reduce image noise while preserving the underlying edges associated with true anatomy or pathology: By comparing the changes in robustness of the CTTA metrics across the two scanners, we observe that post-processing techniques that reduce image noise while preserving the underlying anatomical edges, for example iDose levels (here 6 levels) on the Philips scanner and Mild/Strong (here 2 levels) noise-correction levels on the Toshiba scanner, produce significant differences in CTTA robustness compared to the base setting (Fig. 3). Stronger noise-reduction techniques were associated with a significant reduction in reliability on the Philips scanner; however, the opposite was observed on the Toshiba scanner. In both cases, no noise-reduction techniques were used in the base setting. Robustness assessment of the texture metrics due to changes in reconstruction filters: iDose levels (Philips scanner [a]) and changes in noise-correction levels (Mild or Strong) on the Toshiba scanner [b].
  • 120. Reproducibility of traditional radiomic features #3. Radiomics of CT Features May Be Nonreproducible and Redundant: Influence of CT Acquisition Parameters. Roberto Berenguer, María del Rosario Pastor-Juan, Jesús Canales-Vázquez, Miguel Castro-García, María Victoria Villas, Francisco Mansilla Legorburo, Sebastià Sabater. Radiology (24 April 2018) https://doi.org/10.1148/radiol.2018172361 Materials and Methods: Two phantoms were used to test radiomic feature (RF) reproducibility by using test-retest analysis, by changing the CT acquisition parameters (hereafter, intra-CT analysis), and by comparing five different scanners with the same CT parameters (hereafter, inter-CT analysis). Reproducible RFs were selected by using the concordance correlation coefficient (as a measure of the agreement between variables) and the coefficient of variation (defined as the ratio of the standard deviation to the mean). Redundant features were grouped by using hierarchical cluster analysis. Conclusion: Many RFs were redundant and nonreproducible. If all the CT parameters are fixed except field of view, tube voltage, and milliamperage, then the information provided by the analyzed RFs can be summarized in only 10 RFs (each representing a cluster) because of redundancy. Graph shows cluster dendrogram and representative radiomic features (RFs). Red boxes differentiate 10 extracted clusters, which were selected by height. Representative RFs of each cluster were selected based on highest concordance correlation coefficient value of test-retest analysis.
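The concordance correlation coefficient used here for test–retest feature selection is Lin's CCC, which penalizes both poor correlation and systematic shifts between the two measurements. A small sketch with made-up feature values:

```python
import numpy as np

def lins_ccc(x, y):
    """Lin's concordance correlation coefficient between test and retest values."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()  # population variances, per Lin's definition
    cov = ((x - mx) * (y - my)).mean()
    return 2 * cov / (vx + vy + (mx - my) ** 2)

# Test-retest values of one hypothetical radiomic feature on 6 phantom scans
test = [4.30, 7.08, 5.1, 6.2, 4.9, 5.5]
retest = [4.35, 7.01, 5.0, 6.3, 4.8, 5.6]
print(round(lins_ccc(test, retest), 4))  # close to 1 → reproducible feature
```

Unlike Pearson correlation, a constant offset between test and retest drags the CCC below 1, which is exactly the behavior wanted when screening for scanner-robust features.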
  • 121. Reproducibility of traditional radiomic features #4. Reproducibility test of radiomics using network analysis and Wasserstein K-means algorithm. Jung Hun Oh, Aditya P. Apte, Evangelia Katsoulakis, Nadeem Riaz, Vaios Hatzoglou, Yao Yu, Jonathan E. Leeman, Usman Mahmood, Maryam Pouryahya, Aditi Iyer, Amita Shukla-Dave, Allen R. Tannenbaum, Nancy Y. Lee, Joseph O. Deasy. https://doi.org/10.1101/773168 (19 Sept 2019) To construct robust and validated radiomic predictive models, the development of a reliable method that can identify reproducible radiomic features robust to varying image acquisition methods and other scanner parameters should be preceded by rigorous validation. We further propose a novel Wasserstein K-means algorithm coupled with optimal mass transport (OMT) theory to cluster samples. Despite such great progress in radiomics in recent years, however, the development of computational techniques to identify repeatable and reproducible radiomic features remains challenging and relatively underdeveloped. This has led many radiomic models built on one dataset to fail in subsequent external validation on independent data [Virginia et al. 2018]. One likely reason is the susceptibility of radiomic features to image reconstruction and acquisition parameters. Since radiomic features are computed via multiple tasks, including image acquisition, segmentation, and feature extraction, the parameter choices in each step may affect the stability of the computed features. As such, prior to model building, the development of radiomic features with high repeatability and high reproducibility, and of tools that can identify such features, is urgently needed in the field of radiomics.
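A rough illustration of clustering samples under a Wasserstein metric: below is a toy two-cluster K-medoids over 1D feature distributions using `scipy.stats.wasserstein_distance`. This is only a stand-in for the paper's OMT-based Wasserstein K-means (their algorithm operates on full histograms with optimal-transport machinery); the data are entirely synthetic.

```python
import numpy as np
from scipy.stats import wasserstein_distance

def wasserstein_two_medoids(samples, n_iter=10):
    """Toy 2-cluster K-medoids over 1D value distributions under the
    1-Wasserstein metric; seeded with the farthest pair of samples."""
    D = np.array([[wasserstein_distance(a, b) for b in samples] for a in samples])
    medoids = np.array(np.unravel_index(D.argmax(), D.shape))  # farthest pair as seeds
    for _ in range(n_iter):
        labels = D[:, medoids].argmin(axis=1)   # assign each sample to nearest medoid
        new = []
        for j in range(2):                      # recompute each medoid within its cluster
            idx = np.flatnonzero(labels == j)
            new.append(idx[D[np.ix_(idx, idx)].sum(axis=1).argmin()])
        if set(new) == set(medoids):
            break
        medoids = np.array(new)
    return labels

# Synthetic 'scans': five feature distributions centred at 0, five at 5
rng = np.random.default_rng(1)
samples = [rng.normal(0, 1, 200) for _ in range(5)] + \
          [rng.normal(5, 1, 200) for _ in range(5)]
labels = wasserstein_two_medoids(samples)
print(labels)
```

Because the Wasserstein distance compares whole distributions rather than summary statistics, two scans whose feature histograms merely shift under a new reconstruction kernel remain close, which is the property exploited for reproducibility testing.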
  • 123. “ICH CT labels”, e.g. hematoma (primary injury), PHE (secondary injury). Airton Leonardo de Oliveira Manoel (Feb 2020). PHE – peri-hematomal edema. https://doi.org/10.1186/s13054-020-2749-2 Intraventricular extension of hemorrhage (IVH) might change ventricle shape, making segmentation rather tricky, especially if you have trained your brain models on non-pathological brains. Slice example from the CROMIS study at UCL.
  • 124. Imaging features are time-dependent (from hours to long-term outcomes) #1. https://doi.org/10.1212/WNL.0b013e3182343387 https://doi.org/10.2176/nmc.ra.2016-0327 Advances in CT for prediction of hematoma expansion in acute intracerebral hemorrhage. Thien J Huynh, Sean P Symons and Richard I Aviv. Division of Neuroradiology, Department of Medical Imaging, Sunnybrook Health Sciences and University of Toronto. https://www.openaccessjournals.com/articles/advances-in-ct-for-prediction-of-hematoma-expansion-in-acute-intracerebral-hemorrhage.html Perihematomal Edema After Spontaneous Intracerebral Hemorrhage (2019) https://doi.org/10.1161/STROKEAHA.119.024965 (A) Example of hematoma and perihematomal edema regions of interest (ROIs). The ROIs were drawn on the noncontrast computed tomography (CT) and transferred to perfusion maps. (B) Maps of cerebral blood flow (CBF), cerebral blood volume (CBV), and time to peak of the impulse response curve (TMAX) from an ICH ADAPT study patient randomized to a target systolic BP <150 mmHg. 10.1038/jcbfm.2015.36
  • 125. Imaging features are time-dependent (from hours to long-term outcomes) #2. Intracerebral hemorrhage (ICH) growth predicts mortality and functional outcome. We hypothesized that irregular hematoma shape and density heterogeneity, reflecting active, multifocal bleeding or a variable bleeding time course, would predict ICH growth. https://doi.org/10.1161/STROKEAHA.108.536888 A, Shape (left) and density (right) categorical scales and (B) examples of homogeneous, regular ICH (left) and heterogeneous, irregular ICH (right). Absolute (A) and relative (B) perihematomal edema for decompressive craniotomy treatment and control groups, and corrected absolute (C) and corrected relative (D) perihematomal edema for the treatment and control groups. 10.1371/journal.pone.0149169 Example of a CT scan demonstrating delineation of the region of PHE (outlined in green) and ICH (outlined in red). The oedema extension distance (EED) is the difference between the radius (r_e) of a sphere (shown in green) equal to the combined volume of PHE and ICH and the radius of a sphere (shown in red) equal to the volume of the ICH alone (r_h). Oedema extension distance in intracerebral haemorrhage: Association with baseline characteristics and long-term outcome. http://dx.doi.org/10.1177/2396987319848203
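The oedema extension distance described in the caption reduces to a difference of equivalent-sphere radii, EED = (3(V_ICH + V_PHE)/4π)^(1/3) − (3·V_ICH/4π)^(1/3). A minimal sketch; the example volumes are made up:

```python
import math

def oedema_extension_distance(ich_ml: float, phe_ml: float) -> float:
    """Oedema extension distance (EED): radius of a sphere with the combined
    ICH+PHE volume minus the radius of a sphere with the ICH volume alone.
    Volumes in millilitres (= cm^3); result in cm."""
    def sphere_radius(vol_cm3: float) -> float:
        return (3.0 * vol_cm3 / (4.0 * math.pi)) ** (1.0 / 3.0)
    return sphere_radius(ich_ml + phe_ml) - sphere_radius(ich_ml)

# A 30 ml hematoma surrounded by 20 ml of perihematomal edema
print(round(oedema_extension_distance(30, 20), 3))  # → 0.358 (cm)
```

Normalizing edema to an equivalent shell thickness like this makes the metric less dependent on the hematoma's absolute size than a raw PHE volume would be.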
  • 126. Imaging features are time-dependent (from hours to long-term outcomes) #3. Intraventricular Hemorrhage Growth: Definition, Prevalence and Association with Hematoma Expansion and Prognosis. Qi Li et al. (Neurocritical Care (2020)) https://doi.org/10.1007/s12028-020-00958-8 The objective of this study is to propose a definition of intraventricular hemorrhage (IVH) growth and to investigate whether IVH growth is associated with ICH expansion and functional outcome. IVH growth is not uncommon and independently predicts poor outcome in ICH patients. It may serve as a promising therapeutic target for intervention. Illustration of IVH growth on noncontrast CT. a Baseline CT scan reveals a putaminal hematoma without concurrent intraventricular hemorrhage. b Follow-up CT scan performed 11 h later shows enlarged hematoma and intraventricular extension of parenchymal hemorrhage. c Admission CT scan shows a basal ganglia hemorrhage with ventricular extension of hematoma. d Follow-up CT scan performed 24 h after baseline CT scan reveals the significant increase in ventricular hematoma volume. CT computed tomography, IVH intraventricular hemorrhage. Distribution of modified Rankin scale in patients with or without IVH growth. The ordinal analysis showed a significant unfavorable shift in the distribution of scores on the modified Rankin scale with IVH growth (pooled odds ratio for shift to higher modified Rankin score).
  • 127. Segmentation labels? WM/GM contrast is a bit low in CT compared to MRI. White Matter and Gray Matter Segmentation in 4D Computed Tomography. Rashindra Manniesing, Marcel T. H. Oei, Luuk J. Oostveen, Jaime Melendez, Ewoud J. Smit, Bram Platel, Clara I. Sánchez, Frederick J. A. Meijer, Mathias Prokop & Bram van Ginneken. Sci Rep 7, 119 (2017) https://doi.org/10.1038/s41598-017-00239-z
  • 128. Segmentation labels? WM/GM: supervise with MRI? Whole Brain Segmentation and Labeling from CT Using Synthetic MR Images. Can Zhao, Aaron Carass, Junghoon Lee, Yufan He, Jerry L. Prince. International Workshop on Machine Learning in Medical Imaging, MLMI 2017: Machine Learning in Medical Imaging pp 291-298 https://doi.org/10.1007/978-3-319-67389-9_34 To achieve whole-brain segmentation—i.e., classifying tissues within and immediately around the brain as gray matter (GM), white matter (WM), and cerebrospinal fluid—magnetic resonance (MR) imaging is nearly always used. However, there are many clinical scenarios where computed tomography (CT) is the only modality that is acquired and yet whole-brain segmentation (and labeling) is desired. This is a very challenging task, primarily because CT has poor soft-tissue contrast; very few segmentation methods have been reported to date and there are no reports on automatic labeling. This paper presents a whole-brain segmentation and labeling method for non-contrast CT images that first uses a fully convolutional network (FCN) to synthesize an MR image from a CT image and then uses the synthetic MR image in a standard pipeline for whole-brain segmentation and labeling. In summary, we have used a modified U-net to synthesize T1-w images from CT, and then directly segmented the synthetic T1-w using either MALP-EM or a multi-atlas label fusion scheme. Our results show that using synthetic MR can significantly improve the segmentation over using the CT image directly. This is the first paper to provide GM anatomical labels on a CT neuroimage. Also, despite previous assertions that CT-to-MR synthesis is impossible with CNNs, we show that it is not only possible but can be done with sufficient quality to open up new clinical and scientific opportunities in neuroimaging. For one subject, we show the (a) input CT image, the (b) output synthetic T1-w, and the (c) ground truth T1-w image. (d) is the dynamic range of (a). Shown in (e) and (f) are the MALP-EM segmentations of the synthetic and ground truth T1-w images, respectively.
  • 129. Segmentation labels? Propagate from paired MRI? The goal of this project is to develop an algorithm for the segmentation and separation of the cerebral hemispheres, the cerebellum and brainstem in non-contrast CT images. © 2019 Department of Radiology and Nuclear Medicine, Radboud university medical center, Nijmegen http://www.diagnijmegen.nl/index.php/Automatic_cerebral_hemisphere,_cerebellum_and_brainstem_segmentation_in_non-contrast_CT GIF: UNIFIED BRAIN SEGMENTATION AND PARCELLATION. The GIF algorithm is an online brain extraction, tissue segmentation and parcellation tool for T1-weighted images. GIF, which stands for geodesic information flows, will be deployed as part of NiftySeg. You can download the parcellation labels in xml from here (v2, v3) and in excel from here (v2, v3). http://niftyweb.cs.ucl.ac.uk/ SynSeg-Net: Synthetic Segmentation Without Target Modality Ground Truth https://arxiv.org/abs/1810.06498
  • 130. Useful in general to have CT/MRI pairs? Brain MRI with Quantitative Susceptibility Mapping: Relationship to CT Attenuation Values. https://doi.org/10.1148/radiol.2019182934 To assess the relationship among metal concentration, CT attenuation values, and magnetic susceptibility in paramagnetic and diamagnetic phantoms, and the relationship between CT attenuation values and susceptibility in brain structures that have paramagnetic or diamagnetic properties.
  • 131. CT segmentation labels vs MRI labels: loss switching. In segmentation tasks, the Dice score is often reported as the performance metric. A loss function that directly correlates with the Dice score is the weighted Dice loss. Based on our empirical observation, the network trained with only weighted Dice loss was unable to escape a local optimum and did not converge. Also, empirically it was seen that the stability of the model, in terms of convergence, decreased as the number of classes and class imbalance increased. We found that weighted cross-entropy loss, on the other hand, did not get stuck in any local optima and learned reasonably good segmentations. As the model's performance with regard to Dice score flattened out, we switched from weighted cross-entropy to weighted Dice loss, after which the model's performance further increased by 3-4% in terms of average Dice score. This loss-switching mechanism, therefore, is found to be useful to further improve the performance of the model. On brain atlas choice and automatic segmentation methods: a comparison of MAPER & FreeSurfer using three atlas databases. https://doi.org/10.1038/s41598-020-57951-6 DARTS: DenseUnet-based Automatic Rapid Tool for brain Segmentation. Aakash Kaku, Chaitra V. Hegde, Jeffrey Huang, Sohae Chung, Xiuyuan Wang, Matthew Young, Alireza Radmanesh, Yvonne W. Lui, Narges Razavian (Submitted on 13 Nov 2019) https://arxiv.org/abs/1911.05567
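The loss-switching recipe described above (train with weighted cross-entropy until the Dice score plateaus, then switch to weighted Dice loss) can be sketched as follows. This is a framework-agnostic NumPy illustration, not the DARTS authors' code, and the plateau detection is crudely replaced by an epoch threshold:

```python
import numpy as np

def weighted_cross_entropy(probs, onehot, w):
    """Per-class weighted cross-entropy; probs, onehot: (N, C), w: (C,)."""
    eps = 1e-7
    return -np.mean(np.sum(w * onehot * np.log(probs + eps), axis=1))

def weighted_dice_loss(probs, onehot, w):
    """1 minus class-weighted soft Dice over the batch."""
    eps = 1e-7
    inter = np.sum(probs * onehot, axis=0)
    union = np.sum(probs, axis=0) + np.sum(onehot, axis=0)
    dice = (2.0 * inter + eps) / (union + eps)
    return 1.0 - np.sum(w * dice) / np.sum(w)

def training_loss(probs, onehot, w, epoch, switch_epoch=50):
    """Loss switching: weighted CE early in training, weighted Dice afterwards
    (a real implementation would switch when the validation Dice plateaus)."""
    if epoch < switch_epoch:
        return weighted_cross_entropy(probs, onehot, w)
    return weighted_dice_loss(probs, onehot, w)
```

The class weights `w` are where the CT-vs-MRI class imbalance mentioned above is handled; both losses share them, so the switch changes only the optimization landscape, not the class weighting.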
  • 132. Weak labels for CT segmentation. Extracting 2D weak labels from volume labels using multiple instance learning in CT hemorrhage detection. Samuel W. Remedios, Zihao Wu, Camilo Bermudez, Cailey I. Kerley, Snehashis Roy, Mayur B. Patel, John A. Butman, Bennett A. Landman, Dzung L. Pham (Submitted on 13 Nov 2019) https://arxiv.org/abs/1911.05650 https://github.com/sremedios/multiple_instance_learning Multiple instance learning (MIL) is a supervised learning methodology that aims to allow models to learn instance class labels from bag class labels, where a bag is defined to contain multiple instances. MIL is gaining traction for learning from weak labels but has not been widely applied to 3D medical imaging. MIL is well-suited to clinical CT acquisitions since (1) the highly anisotropic voxels hinder application of traditional 3D networks and (2) patch-based networks have limited ability to learn whole-volume labels. In this work, we apply MIL with a deep convolutional neural network to identify whether clinical CT head image volumes possess one or more large hemorrhages (>20 cm³), resulting in a learned 2D model without the need for 2D slice annotations. Individual image volumes are considered separate bags, and the slices in each volume are instances. Such a framework sets the stage for incorporating information obtained in clinical reports to help train a 2D segmentation approach. Within this context, we evaluate the data requirements to enable generalization of MIL by varying the amount of training data. Our results show that a training size of at least 400 patient image volumes was needed to achieve accurate per-slice hemorrhage detection.
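In slice-level MIL for hemorrhage detection, per-slice instance probabilities must be pooled into a single bag (volume) probability; common choices are max pooling and noisy-or. A minimal sketch of noisy-or pooling, not necessarily the exact aggregation the authors used:

```python
import numpy as np

def bag_probability(slice_probs):
    """Multiple-instance pooling: a volume (bag) is positive if any slice
    (instance) is positive; noisy-or over per-slice hemorrhage probabilities."""
    slice_probs = np.asarray(slice_probs, dtype=float)
    return 1.0 - np.prod(1.0 - slice_probs)

# Hypothetical per-slice outputs of a 2D CNN for one CT volume
print(round(bag_probability([0.02, 0.03, 0.9, 0.05]), 3))
```

Because the pooling is differentiable, the volume-level label can backpropagate through it into the 2D slice network, which is what lets the model learn per-slice detection from bag labels alone.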
  • 133. Weak label → dense modeling. Improving RetinaNet for CT Lesion Detection with Dense Masks from Weak RECIST Labels. Martin Zlocha, Qi Dou, and Ben Glocker. https://arxiv.org/pdf/1906.02283v1.pdf https://github.com/fizyr/keras-retinanet https://github.com/martinzlocha/anchor-optimization Accurate, automated lesion detection in computed tomography (CT) is an important yet challenging task due to the large variation of lesion types, sizes, locations and appearances. Recent work on CT lesion detection employs two-stage region-proposal-based methods trained with centroid or bounding-box annotations. We propose a highly accurate and efficient one-stage lesion detector, by re-designing a RetinaNet to meet the particular challenges in medical imaging. Specifically, we optimize the anchor configurations using a differential evolution search algorithm. Interestingly, we could show that by task-specific optimization of an out-of-the-box detector we already achieve results superior to the best reported in the literature. Exploitation of clinically available RECIST annotations bears great promise, as large amounts of such training data should be available in many hospitals. With a sensitivity of about 91% at 4 FPs per image, our system may reach clinical readiness. Future work will focus on new applications such as whole-body MRI in oncology.
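The anchor-optimization idea (search anchor shapes so that ground-truth lesion boxes get a high best-case IoU) can be sketched with SciPy's differential evolution; the objective, box data, and three-anchor parameterization below are illustrative simplifications of the authors' method:

```python
import numpy as np
from scipy.optimize import differential_evolution

def best_iou(box_wh, anchors_wh):
    """Max IoU between one ground-truth (w, h) and a set of anchor (w, h),
    assuming boxes share the same centre."""
    w, h = box_wh
    ious = []
    for aw, ah in anchors_wh:
        inter = min(w, aw) * min(h, ah)
        union = w * h + aw * ah - inter
        ious.append(inter / union)
    return max(ious)

def anchor_loss(params, gt_wh):
    """Negative mean best-IoU for 3 anchors parameterised as (w1,h1,...,w3,h3)."""
    anchors = np.asarray(params).reshape(3, 2)
    return -np.mean([best_iou(b, anchors) for b in gt_wh])

# Hypothetical lesion bounding boxes (w, h) in pixels
gt = np.array([[12, 10], [30, 28], [60, 55], [14, 12], [58, 50]])
res = differential_evolution(anchor_loss, bounds=[(5, 80)] * 6, args=(gt,),
                             seed=0, maxiter=50, tol=1e-6)
print(-res.fun)  # mean best-IoU achieved by the optimised anchors
```

Differential evolution suits this objective because the anchor-coverage landscape is non-differentiable (it contains `min`/`max` operations), so gradient-based optimizers are not directly applicable.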
  • 134. Segmentation labels? Synthetic CT from MRI. Hybrid Generative Adversarial Networks for Deep MR to CT Synthesis Using Unpaired Data. Guodong Zeng and Guoyan Zheng (MICCAI 2019) https://doi.org/10.1007/978-3-030-32251-9_83 2D cycle-consistent generative adversarial networks (2D-cGAN) have been explored before for generating synthetic CTs from MR images, but the results are not satisfactory due to spatial inconsistency. There exist attempts to develop a 3D cycle GAN (3D-cGAN) for image translation, but its training requires a large amount of data which may not always be available. In this paper, we introduce two novel mechanisms to address the above-mentioned problems. First, we introduce a hybrid GAN (hGAN) consisting of a 3D generator network and a 2D discriminator network for deep MR to CT synthesis using unpaired data. We use 3D fully convolutional networks to form the generator, which can better model the 3D spatial information and thus could solve the discontinuity problem across slices. Second, we take the results generated from the 2D-cGAN as weak labels, which will be used together with an adversarial training strategy to encourage the generator's 3D output to look like a stack of real CT slices as much as possible.
  • 135. Segmentation Labels: Vascular segmentation. Robust Segmentation of the Full Cerebral Vasculature in 4D CT of Suspected Stroke Patients. Midas Meijs, Ajay Patel, Sil C. van de Leemput, Mathias Prokop, Ewoud J. van Dijk, Frank-Erik de Leeuw, Frederick J. A. Meijer, Bram van Ginneken & Rashindra Manniesing. Scientific Reports volume 7, Article number: 15622 (2017) https://doi.org/10.1038/s41598-017-15617-w A robust method is presented for the segmentation of the full cerebral vasculature in 4-dimensional (4D) computed tomography (CT). Temporal information, in combination with contrast agent, is important for vessel segmentation, as is reflected by the WTV feature. The added value of 4D CT, with improved evaluation of intracranial hemodynamics, comes at a cost, as a 4D CT protocol is associated with a higher radiation dose. Although 4D CT imaging is not common practice, applications of 4D CT are expanding. We expect 4D CT to become a single acquisition for stroke workup as it contains both noncontrast CT and CTA information. These modalities might be reconstructed from a 4D CT acquisition, resulting in a reduction of acquisitions and radiation dose. In addition, studies suggest that 4D CT can be acquired at half the dose of the standard clinical protocol, further reducing the radiation dose for the patient. Coronal view of a temporal maximum intensity projection visualizing part of the middle cerebral artery including the M1, M2 and M3 segments. Intensity differences from proximal to distal in a nonaffected vessel can reach up to 450 HU and higher. Vessel occlusions, vessel wall calcifications, collateral flow, clip and stent artifacts have a large influence on the continuity of intensity values along the vessel.
Examples of difficulties encountered in vessel segmentation. From left to right: skull base region, arteries and veins surrounded by hyperdense bony structures in their course through the skull base, which renders difficulties in separating them from each other; patient with coils placed at the anterior communicating artery; patient with a ventricular shunt causing a linear artifact in the left cerebral hemisphere.
  • 136. CTA Segmentation Example with multi-task learning. Deep Distance Transform for Tubular Structure Segmentation in CT Scans. Yan Wang, Xu Wei, Fengze Liu, Jieneng Chen, Yuyin Zhou, Wei Shen, Elliot K. Fishman, Alan L. Yuille (submitted on 6 Dec 2019) https://arxiv.org/abs/1912.03383 Tubular structure segmentation in medical images, e.g., segmenting vessels in CT scans, serves as a vital step in the use of computers to aid in screening early stages of related diseases. But automatic tubular structure segmentation in CT scans is a challenging problem, due to issues such as poor contrast, noise and complicated background. A tubular structure usually has a cylinder-like shape which can be well represented by its skeleton and cross-sectional radii (scales). Inspired by this, we propose a geometry-aware tubular structure segmentation method, Deep Distance Transform (DDT), which combines intuitions from the classical distance transform for skeletonization and modern deep segmentation networks. DDT first learns a multi-task network to predict a segmentation mask for a tubular structure and a distance map. Each value in the map represents the distance from each tubular structure voxel to the tubular structure surface. Then the segmentation mask is refined by leveraging the shape prior reconstructed from the distance map.
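The regression target behind DDT, the distance from each tubular voxel to the structure surface, can be illustrated with the classical Euclidean distance transform. A minimal sketch assuming numpy and scipy are available; the cylinder mask and the `tubular_distance_map` helper are illustrative, not the paper's code:

```python
import numpy as np
from scipy import ndimage

def tubular_distance_map(mask: np.ndarray) -> np.ndarray:
    """Distance (in voxels) from each foreground voxel to the structure surface.

    Inside a tube, the Euclidean distance transform peaks along the skeleton,
    so its value there approximates the local cross-sectional radius: the
    quantity DDT's multi-task network regresses alongside the mask.
    """
    return ndimage.distance_transform_edt(mask)

# Synthetic "vessel": a straight cylinder of radius 4 voxels along the z axis.
z, y, x = np.mgrid[0:32, 0:32, 0:32]
mask = ((y - 16) ** 2 + (x - 16) ** 2) <= 4 ** 2

dist = tubular_distance_map(mask)
# The maximum of the distance map sits on the centerline and approximates
# the cylinder radius.
print(round(float(dist.max()), 1))
```

In DDT the network predicts such a map directly from the image; the map then supplies a shape prior (skeleton plus radii) used to refine the segmentation mask.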
  • 137. Segmentation Labels? 4D for vessels, and multi-frame reconstruction? Multiclass Brain Tissue Segmentation in 4D CT Using Convolutional Neural Networks. Sil C. Van De Leemput, Midas Meijs, Ajay Patel, Frederick J. A. Meijer, Bram Van Ginneken, Rashindra Manniesing. IEEE Access (Volume: 7, 11 April 2019) https://doi.org/10.1109/ACCESS.2019.2910348 4D CT imaging has a great potential for use in stroke workup. A fully convolutional neural network (CNN) for 3D multiclass segmentation in 4D CT is presented, which can be trained end-to-end from sparse 2D annotations. The CNN was trained and validated on 42 4D CT acquisitions of the brain of patients with suspicion of acute ischemic stroke. White matter, gray matter, cerebrospinal fluid, and vessels were annotated by two trained observers. The dataset used for the evaluation consisted exclusively of normal-appearing brain tissues without pathology or foreign objects, which are seen in everyday clinical practice. The data was collected as such to focus on testing the feasibility of segmentation of WM/GM/CSF and vessels in 4D CT using deep learning, which is traditionally the domain of MR imaging. This implies that the method likely must be trained on cases with pathology or foreign objects, and at least be evaluated on such cases, before it can be used in practice. However, we argue that our method provides a valuable first step towards this goal. Example axial cross section for the derived images of a single 4D CT image used for annotation. Left: the temporal average for WM, GM, and CSF segmentation. Right: the temporal variance for vessel segmentation. Three cross sections (axial, coronal, sagittal) of an exemplar 4D CT case. Blue areas were selected for annotation by the observers; other areas were not annotated. Brain mask from skull stripping.
  • 138. Segmentation Labels? Musculoskeletal CT segmentation #1. Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis. Hyunkwang Lee & Fabian M. Troschel & Shahein Tajmir & Georg Fuchs & Julia Mario & Florian J. Fintelmann & Synho Do. Department of Radiology, Massachusetts General Hospital. J Digit Imaging (2017) http://doi.org/10.1007/s10278-017-9988-z The muscle segmentation AI can be enhanced further by using the original 12-bit image resolution with 4096 gray levels, which could enable the network to learn other significant determinants which could be missed in the lower resolution. In addition, an exciting target would be adipose tissue segmentation. Adipose tissue segmentation is relatively straightforward since fat can be thresholded within a unique HU range [−190 to −30]. Prior studies proposed creating an outer muscle boundary to segment HU-thresholded adipose tissue into visceral adipose tissue (VAT) and subcutaneous adipose tissue (SAT). However, precise boundary generation is dependent on accurate muscle segmentation. By combining our muscle segmentation network with a subsequent adipose tissue thresholding system, we could quickly and accurately provide VAT and SAT values in addition to muscle CSA. Visceral adipose tissue has been implicated in cardiovascular outcomes and metabolic syndrome, and accurate fat segmentation would increase the utility of our system beyond cancer prognostication. Ultimately, our system should be extended to whole-body volumetric analysis rather than axial CSA, providing rapid and accurate characterization of body morphometric parameters.
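The HU-thresholding step the authors describe for adipose tissue takes only a few lines. A toy sketch assuming numpy; the synthetic slice and the `adipose_mask` helper are made up for illustration, only the [−190, −30] HU fat range comes from the paper:

```python
import numpy as np

FAT_HU_RANGE = (-190, -30)  # fat HU range quoted in the paper

def adipose_mask(ct_slice: np.ndarray,
                 lo: int = FAT_HU_RANGE[0],
                 hi: int = FAT_HU_RANGE[1]) -> np.ndarray:
    """Mask voxels whose HU falls in the adipose range."""
    return (ct_slice >= lo) & (ct_slice <= hi)

# Toy axial slice: a subcutaneous fat annulus around a soft-tissue disk.
yy, xx = np.mgrid[0:64, 0:64]
r = np.hypot(yy - 32, xx - 32)
ct = np.full((64, 64), -1024, dtype=np.int16)  # air background
ct[r < 30] = -100                              # fat annulus
ct[r < 20] = 40                                # muscle/organs

fat = adipose_mask(ct)
inner = r < 20          # stand-in for the muscle-boundary mask the paper needs
sat = fat & ~inner      # subcutaneous fat: outside the boundary
vat = fat & inner       # visceral fat: inside (empty in this toy example)
print(int(sat.sum()), int(vat.sum()))
```

This makes the paper's caveat concrete: the threshold itself is trivial, and all the difficulty sits in the `inner` boundary, i.e. in the muscle segmentation that separates SAT from VAT.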
  • 139. Segmentation Labels? Musculoskeletal CT segmentation #2. Automated Muscle Segmentation from Clinical CT using Bayesian U-Net for Personalization of a Musculoskeletal Model. Yuta Hiasa, Yoshito Otake, Masaki Takao, Takeshi Ogawa, Nobuhiko Sugano, and Yoshinobu Sato. https://arxiv.org/abs/1907.08915 (21 July 2019) We propose a method for automatic segmentation of individual muscles from a clinical CT. The method uses Bayesian convolutional neural networks with the U-Net architecture, using Monte Carlo dropout that infers an uncertainty metric in addition to the segmentation label. We evaluated validity of the uncertainty metric in the multi-class organ segmentation problem and demonstrated a correlation between the pixels with high uncertainty and the segmentation failure. One application of the uncertainty metric in active learning is demonstrated, and the proposed query pixel selection method considerably reduced the manual annotation cost for expanding the training data set. The proposed method allows an accurate patient-specific analysis of individual muscle shapes in a clinical routine. This would open up various applications including personalization of biomechanical simulation and quantitative evaluation of muscle atrophy.
  • 141. CT/MRI/PET Phantom from Bristol for Alzheimer Neuroimaging. Creation of an anthropomorphic CT head phantom for verification of image segmentation. Medical Physics (11 March 2020) https://doi.org/10.1002/mp.14127 Robin B. Holmes, Ian S. Negus, Sophie J. Wiltshire, Gareth C. Thorne, Peter Young, The Alzheimer's Disease Neuroimaging Initiative. Department of Medical Physics and Bioengineering, University Hospitals Bristol NHS Foundation Trust, Bristol, BS2 8HW, United Kingdom. “Accuracy of CT segmentation will depend, to some extent, on the ability of CT images to accurately depict the structures of the head. This in turn will depend on the scanner used and the exposure and reconstruction factors selected. The delineation of soft tissue structures will depend on material contrast, edge resolution and image noise, which are in turn affected by the peak tube potential (kVp), filtration, tube current (mA), rotation time, reconstructed slice width and the reconstruction algorithm, including iterative methods and any other post-acquisition image processing. The limitation of the phantoms presented in these (previous) studies is that they do not allow for complex nested structures with multiple material properties, as would be required to simulate the brain. ... The effects of neuroimaging on clinical confidence analyses is not an area that has been investigated rigorously, the effects of analyses even less so, e.g. Motara et al. 2017; Boelaarts et al. 2016. The literature appears to concentrate more on novel methods than on demonstrating the usefulness of existing ones.” This work aims to use 3D printing to create a realistic anthropomorphic phantom representing the CT properties of a normal human brain and skull. Properly developed, this type of phantom will allow the optimization and validation of CT segmentation across different scanners and disease states.
If sufficient realism can be attained with the phantom, imaging the resulting phantom on different scanners and using different acquisition parameters will enable the validation of the entire processing chain in the proposed clinical implementation of CT-VBM. ... It may well be possible to use phantoms to measure parameters that could be used as exclusion criteria in the clinical use of CT analyses, thereby increasing sensitivity, specificity and clinical confidence. It would be relatively straightforward to create multiple phantoms of the same subject with progressive atrophy; the atrophy could be simulated from a ‘base’ scan or by the assessment of multiple patient scans from the ADNI database. 3D-printed brain (left) and the completed phantom after coating with plaster of Paris (right). Comparison of the source MRI (column 1) and phantom scan C (120 kV, 300 mAs) for scanner 1 (column 2) and scanner 2 (column 3), with an 80 kV acquisition on scanner 2 (column 4). The three rows depict different slices at different levels in the head/phantom. As the printer was only capable of printing 3 different types of plastic, no non-brain structures, such as the eyes or skull, were printed. CT scans have 60 HU subtraction and are displayed with a window level of 30 HU, window width 90 HU. Representative ROIs used for determination of the mean HU for each tissue type are shown in red. See also “Physical imaging phantoms for simulation of tumor heterogeneity in PET, CT, and MRI” https://doi.org/10.1002/mp.14045
  • 142. CT artifacts to simulate for intracerebral hemorrhage (ICH) analysis. Starburst/streak artifact from dense materials (metal, teeth): make two phantoms (one with metal encased, the other without)? Or have insertable dense materials? http://www.neuroradiologycases.com/2011/10/streak-artifacts.html Motion artifacts: have a motor moving the phantom so you would know the “blur kernel” exactly; would you benefit from fiducials on the phantom? Would the metal motor itself cause artifacts in the image? https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html Calcifications: useful especially for dual-energy CT simulation and the ‘virtual noncalcium image’ https://doi.org/10.1093/neuros/nyaa029 ICH (i.e. blood): how realistic can you make this? Play with infill density/pattern to allow injection of blood-like material into the phantom? ICH shape is very random, see e.g. Chinda et al. 2018 http://dx.doi.org/10.1136/bmjopen-2017-020260 Beam hardening, i.e. attenuation of the signal in a “skull pocket”: the phantom would benefit from a bone-like encasing, e.g. http://doi.org/10.13140/RG.2.1.2575.3122 see e.g. Raslau et al. 2016 https://doi.org/10.3174/ng.2160146
  • 143. CT Extra: the texture “radiomics story”, and with fully deep end-to-end networks? Reliability of CT-based texture features: Phantom study. Bino A. Varghese, Darryl Hwang, Steven Y. Cen, Joshua Levy, Derek Liu, Christopher Lau, Marielena Rivas, Bhushan Desai, David J. Goodenough, Vinay A. Duddalwar. Journal of Applied Clinical Medical Physics (2019) https://doi.org/10.1002/acm2.12666 - Cited by 1 - Related articles. Objective: To determine the intra-, inter- and test-retest variability of CT-based texture analysis (CTTA) metrics. Results: As expected, the robustness, repeatability and reproducibility of CTTA metrics are variably sensitive to various scanners (Philips Brilliance 64 CT, Toshiba Aquilion Prime 160 CT) and scanning parameters. Entropy of Fast Fourier Transform-based texture metrics was overall the most reliable across the two scanners and scanning conditions. Post-processing techniques that reduce image noise while preserving the underlying edges associated with true anatomy or pathology bring about significant differences in radiomic reliability compared to when they were not used. (Left) Texture phantom comprising three texture patterns. (Middle) Phantom placement for image acquisition. (Right) Cross section of texture phantom patterns. (1), (2) and (3) are 3D-printed ABS plastic with fill levels 10%, 20%, and 40%, respectively. (Bk) is a homogeneous ABS material. (The window level is −500 HU with a width of 1600 HU.)
3.4 Effect of post-processing techniques that reduce image noise while preserving the underlying edges associated with true anatomy or pathology. By comparing the changes in robustness of the CTTA metrics across the two scanners, we observe that post-processing techniques that reduce image noise while preserving the underlying anatomical edges, for example the iDose levels (here 6 levels) on the Philips scanner and the Mild/Strong levels (here 2 levels) on the Toshiba scanner, produce significant differences in CTTA robustness compared to the base setting (Fig. 3). Stronger noise reduction techniques were associated with a significant reduction in reliability on the Philips scanner; however, the opposite was observed on the Toshiba scanner. In both cases, no noise reduction techniques were used in the base setting.
  • 144. CT Phantom Study for deep learning-based reconstruction. Deep Learning Reconstruction at CT: Phantom Study of the Image Characteristics. Toru Higaki et al. Academic Radiology, Volume 27, Issue 1, January 2020, Pages 82-87 https://doi.org/10.1016/j.acra.2019.09.008 Noise, commonly encountered on computed tomography (CT) images, can impact diagnostic accuracy. To reduce the image noise, we developed a deep-learning reconstruction (DLR) method that integrates deep convolutional neural networks into image reconstruction. In this phantom study, we compared the image noise characteristics, spatial resolution, and task-based detectability on DLR images and images reconstructed with other state-of-the-art techniques. On images reconstructed with DLR, the noise was lower than on images subjected to other reconstructions, especially at low radiation dose settings. Noise power spectrum measurements also showed that the noise amplitude was lower, especially for low-frequency components, on DLR images. Based on the MTF, spatial resolution was higher on the model-based iterative reconstruction image than on the DLR image; however, for lower-contrast objects, the MTF on DLR images was comparable to images reconstructed with other methods. The machine observer study showed that at reduced radiation-dose settings, DLR yielded the best detectability. Phantom images scanned at 2.5 mGy: the image noise is lowest on the DLR image, the texture is preserved, and the object boundary is sharper than on the other images.
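The noise power spectrum (NPS) measurement used in phantom studies like this one is a standard computation: average the squared DFT magnitude of mean-subtracted noise-only ROIs and scale by pixel area. A hedged numpy sketch; the `nps_2d` helper and the ROI sizes are illustrative assumptions, not the paper's code:

```python
import numpy as np

def nps_2d(noise_rois, pixel_mm=0.5):
    """Ensemble 2D noise power spectrum from same-sized noise-only ROIs.

    NPS = (dx*dy / (Nx*Ny)) * <|DFT(ROI - ROI_mean)|^2>; radially averaging
    this map (not shown) gives the 1D NPS curves used to compare, e.g., DLR
    against iterative reconstructions.
    """
    rois = np.asarray(noise_rois, dtype=float)
    _, ny, nx = rois.shape
    detrended = rois - rois.mean(axis=(1, 2), keepdims=True)
    spectra = np.abs(np.fft.fft2(detrended)) ** 2
    return (pixel_mm * pixel_mm / (nx * ny)) * spectra.mean(axis=0)

# White noise with sigma = 10 HU: the NPS is flat, and by Parseval its
# integral over frequency recovers the noise variance times the pixel area.
rng = np.random.default_rng(0)
rois = rng.normal(0.0, 10.0, size=(64, 32, 32))
nps = nps_2d(rois, pixel_mm=0.5)
variance_estimate = nps.sum() / (0.5 * 0.5 * 32 * 32)
print(round(float(variance_estimate), 1))   # close to 100 (= 10 HU squared)
```

On real reconstructions the interesting part is the shape, not the integral: the paper's observation that DLR suppresses low-frequency components shows up as a dip in the radially averaged NPS near zero frequency.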
  • 145. Dual-Energy CT: nice for CT as well, with the calcium separation. Optimising dual-energy CT scan parameters for virtual non-calcium imaging of the bone marrow: a phantom study. https://doi.org/10.1186/s41747-019-0125-2
  • 146. Effects of Patient Size and Radiation Dose on Iodine Quantification in Dual-Source Dual-Energy CT https://doi.org/10.1016/j.acra.2019.12.027 Figure 1. A cross-section CT image of the medium-sized phantom with eight iodine inserts. The number above each insert indicates its iodine concentration in mg/ml. Figure 6. The 80 kVp images from the DECT scan of a 32-cm diameter CTDI phantom with different combinations of effective mAs and rotation time: (a) 53 mAs and 0.5 s, (b) 106 mAs, 1.0 s, (c) 106 mAs, 0.5 s, and (d) 530 mAs, 0.5 s. A narrow window of 200 HU is used to show the bias in the CT number. Four circular ROIs of 1.6 cm diameter are shown in panel (d), at distances of 3.4, 6.7, 10, and 13.3 cm from the center.
  • 147. Remember that CT has non-medical uses as well, and you can have a look at that literature if you are interested. Measuring Identification and Quantification Errors in Spectral CT Material Decomposition https://doi.org/10.3390/app8030467 (a) Spectroscopic phantom with three 6 mm diameter hydroxyapatite calibration rods (54.3, 211.7 and 808.5 mg/mL) and 6 mm diameter vials of gadolinium (1, 2, 4, 8 mg/mL), oil (canola oil) and distilled water; (b) CT image of the phantom.
  • 149. CT volumes can be both anisotropic and isotropic (well, practically always anisotropic, and they are resampled to be isotropic). Brain atlas fusion from high-thickness diagnostic magnetic resonance images by learning-based super-resolution. Zhang et al. (2017) https://doi.org/10.1016/j.patcog.2016.09.019 Cited by 12. ANISOTROPIC VOLUME: “staircased” volume due to low z-resolution (like the Lego corgi from Reddit). ISOTROPIC VOLUME: a lot smoother volume reconstruction (the same corgi “as a dog”).
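The anisotropic-to-isotropic resampling mentioned here is usually a single interpolation call. A minimal sketch assuming scipy; the `resample_isotropic` helper and the 0.5 x 0.5 x 5 mm spacing are illustrative values, not from the cited paper:

```python
import numpy as np
from scipy import ndimage

def resample_isotropic(vol, spacing_mm, target_mm=1.0, order=1):
    """Resample an anisotropic volume (e.g. 0.5 x 0.5 x 5 mm voxels) to
    isotropic voxels via trilinear interpolation: the usual fix for the
    "staircased" z-axis described above (learning-based super-resolution,
    as in the cited paper, is the fancier alternative)."""
    zoom = [s / target_mm for s in spacing_mm]
    return ndimage.zoom(vol, zoom=zoom, order=order)

# 20 thick (5 mm) slices of 128 x 128 pixels at 0.5 mm in-plane spacing.
vol = np.random.default_rng(1).normal(size=(20, 128, 128))
iso = resample_isotropic(vol, spacing_mm=(5.0, 0.5, 0.5))
print(iso.shape)   # (100, 64, 64): 1 mm isotropic grid
```

Note that plain interpolation only smooths the staircase; it cannot recover detail between thick slices, which is exactly the gap super-resolution methods target.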
  • 150. Staircasing Example: when z-resolution is too coarse. Co-registration of BOLD activation area on a 3D brain image (courtesy Siemens) http://mriquestions.com/registrationnormalization.html UCL Data https://doi.org/10.1016/j.jneumeth.2016.03.001
  • 151. Get rid of background and “non-brain”: cushion contours, plastic “helmet”, head mask, brain mask. 8-bit mapping of the “int13” input (1 sign bit + 12-bit intensity): after [−1024, 3071] HU clipping, keep the mapping from −100 to 100 HU linear, so nothing in that range is compressed and lost, while the remaining 55 values are used for the outside values that are not as relevant for brain.
  • 152. CT Preprocessing: clip HU units, use NIfTI, and avoid bias field. Recommendations for Processing Head CT Data. John Muschelli (2019) https://doi.org/10.3389/fninf.2019.00061 Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States. Many different general 3D medical imaging formats exist, such as ANALYZE, NIfTI, NRRD, and MNC. We recommend the NIfTI format (e.g. https://github.com/rordenlab/dcm2niix), as it can be read by nearly all medical imaging platforms, has been widely used, has a format standard, can be stored in a compressed format, and is how much of the data is released online. Once converted to NIfTI format, one should ensure the scale of the data. Most CT data is between −1024 and 3071 Hounsfield Units (HU). Values less than −1024 HU are commonly found due to areas of the image outside the field of view that were not actually imaged. One first processing step would be to Winsorize the data (clip the values) to the [−1024, 3071] range. After this step, the scl_slope and scl_inter elements of the NIfTI header should be set to 1 and 0, respectively, to ensure no data rescaling is done in other software. Though HU is the standard format used in CT analysis, negative HU values may cause issues with standard imaging pipelines built for MRI, which typically have positive values. Rorden (CITE) proposed a lossless transformation, called Cormack units, which have a minimum value of 0. The goal of the transformation is to increase the range of the data that is usually of interest, from −100 to 100 HU, and it is implemented in the Clinical toolbox. Most analyses are done using HU, however.
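The Winsorization step Muschelli recommends is a plain clip to the valid HU range. A minimal numpy sketch (the `winsorize_hu` name is ours; the subsequent scl_slope/scl_inter header edit would be done with a NIfTI library such as nibabel and is only noted in the comment):

```python
import numpy as np

def winsorize_hu(vol: np.ndarray, lo: int = -1024, hi: int = 3071) -> np.ndarray:
    """Clip CT intensities to the valid HU range, as recommended above.

    Values below -1024 HU (padding outside the scanner field of view) and
    above 3071 HU are clamped. After this step the NIfTI header fields
    scl_slope / scl_inter should be set to 1 / 0 so downstream software
    does not rescale the data again.
    """
    return np.clip(vol, lo, hi)

# -3024 is a common out-of-field padding value in raw CT exports.
vol = np.array([-3024, -1024, 0, 60, 3071, 4000], dtype=np.int16)
print(winsorize_hu(vol).tolist())   # [-1024, -1024, 0, 60, 3071, 3071]
```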
Though CT data has no coil or assumed bias field, as in MRI, due to the nature of the data, one can test whether trying to harmonize the data spatially with one of these correction procedures improves the performance of a method. We do not recommend this procedure generally, as it may reduce contrast between areas of interest, such as hemorrhages in the brain, but it has been used to improve segmentation (Cauley et al., 2018). We would like to discuss potential methods and CT-specific issues. http://neurovascularmedicine.com/imagingct.php https://www.slideshare.net/drtarungoyal/basic-principle-of-ct-and-ct-generations-122053336
  • 153. Optimizing the HU window instead of using the full HU range #1. Practical Window Setting Optimization for Medical Image Deep Learning. Hyunkwang Lee, Myeongchan Kim, Synho Do. Harvard / Mass General (submitted on 3 Dec 2018) https://arxiv.org/abs/1812.00572v1 https://github.com/suryachintu/RSNA-Intracranial-Hemorrhage-Detection https://github.com/MGH-LMIC/windows_optimization (Keras) The deep learning community has to date neglected window display settings, a key feature of clinical CT interpretation and an opportunity for additional optimization. Here we propose a window setting optimization (WSO) module that is fully trainable with convolutional neural networks (CNNs) to find optimal window settings for clinical performance. Our approach was inspired by the method commonly used by practicing radiologists to interpret CT images by adjusting window settings to increase the visualization of certain pathologies. Our approach provides optimal window ranges to enhance the conspicuity of abnormalities, and was used to enable performance enhancement for intracranial hemorrhage and urinary stone detection. On each task, the WSO model outperformed models trained over the full range of Hounsfield unit values in CT images, as well as images windowed with pre-defined settings. The WSO module can be readily applied to any analysis of CT images, and can be further generalized to tasks on other medical imaging modalities. Our WSO models can be further optimized by investigating the effects of the number of input image channels, 𝜖 and U on the performance of the target application. Additionally, we stress that the WSO-based approach described here is not specific to abnormality classification on CT images, but rather generalizable to various image interpretation tasks on a variety of medical imaging modalities.
  • 154. Optimizing the HU window instead of using the full HU range #2. CT window trainable neural network for improving intracranial hemorrhage detection by combining multiple settings. Manohar Karki et al. CAIDE Systems Inc., Lowell, MA, USA (20 May 2020) https://doi.org/10.1016/j.artmed.2020.101850 ● This method gives a novel approach where a deep convolutional neural network (DCNN) is trained in conjunction with a CT window estimator module in an end-to-end manner for better predictions in diagnostic radiology. ● A learnable module for approximating the window settings for Computed Tomography (CT) images is proposed, trained in a distantly supervised manner without prior knowledge of the best window setting values, by simultaneously training a lesion classifier. ● Based on the learned module, several candidate window settings are automatically identified, the raw CT data are scaled at each setting, and separate lesion classification models are trained on each.
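Both papers replace the radiologist's fixed window-level/window-width (WL/WW) ramp with a differentiable approximation, so the window parameters receive gradients and can be learned end-to-end with the classifier. A numpy sketch of a sigmoid window in the spirit of the WSO formulation; the `soft_window` helper and its exact parameterization are our illustrative reading, not the papers' code:

```python
import numpy as np

def soft_window(hu, level, width, upper=1.0, eps=1e-3):
    """Differentiable window: a sigmoid approximating the linear WL/WW ramp.

    The slope is chosen so the output reaches upper*(1 - eps) at the top of
    the window (level + width/2), mirroring the role of the eps and U
    hyperparameters in the WSO paper. Because the mapping is smooth in
    `level` and `width`, both can be trained by backpropagation.
    """
    k = np.log(upper / eps - 1.0)   # slope scale derived from eps
    return upper / (1.0 + np.exp(-(2.0 * k / width) * (hu - level)))

# "Brain window" initialization: WL = 40 HU, WW = 80 HU.
hu = np.array([-200.0, 40.0, 80.0, 400.0])
out = soft_window(hu, level=40.0, width=80.0)
# out[1] is exactly 0.5 at the window level; out[2] is 1 - eps at the top
# of the window; values far outside the window saturate toward 0 or 1.
```

In the real modules this function sits in front of the CNN (one copy per output channel), so a single scan can feed several learned windows, e.g. brain, blood and bone-like settings, to the downstream classifier.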
  • 155. 10(/11)-bit mapping, and you can actually display it? The display has a 2000:1 contrast ratio. Product pages: Coronis Fusion 6MP (MDCC-6530) and Coronis Fusion 4MP (MDCC-4430) https://www.medgadget.com/2019/11/barcos-flagship-multimodality-diagnostic-monitor-gets-an-upgrade.html Should HDR Displays Follow the Perceptual Quantizer (PQ) Curve? [discussion started as an email thread in the HDR workgroup of the International Committee of Display Metrology (ICDM)] https://www.displaydaily.com/article/display-daily/should-hdr-displays-follow-the-pq-curve
  • 156. You can assume your HUs to be properly calibrated? Automatic deep learning-based normalization of breast dynamic contrast-enhanced magnetic resonance images. Jun Zhang, Ashirbani Saha, Brian J. Soher, Maciej A. Mazurowski. Department of Radiology, Duke University (5 Jul 2018) https://arxiv.org/abs/1807.02152 To develop an automatic image normalization algorithm for intensity correction of images from breast dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) acquired by different MRI scanners with various imaging parameters, using only image information. DCE-MR images of 460 subjects with breast cancer acquired by different scanners were used in this study. Each subject had one T1-weighted pre-contrast image and three T1-weighted post-contrast images available. Our normalization algorithm operated under the assumption that the same type of tissue in different patients should be represented by the same voxel value. The proposed image normalization strategy based on tissue segmentation can perform intensity correction fully automatically, without the knowledge of the scanner parameters. And handled by the device manufacturer? Would there still be room for post-processing?
  • 157. CT Preprocessing: Defacing (De-Identification). Recommendations for Processing Head CT Data. John Muschelli (2019) https://doi.org/10.3389/fninf.2019.00061 Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States. As part of the Health Insurance Portability and Accountability Act (HIPAA) in the United States, under the “Safe Harbor” method, releasing of data requires the removal of a number of protected health information (PHI) identifiers (Centers for Medicare & Medicaid Services, 1996). For head CT images, a notable identifier is “Full-face photographs and any comparable images”. Head CT images have the potential for 3D reconstructions, which likely fall under this PHI category, and present an issue for reidentification of participants (Schimke and Hale, 2015). Thus, removing areas of the face, called defacing, may be necessary for releasing data. If parts of the face and nasal cavities are the target of the imaging, then defacing may be an issue. As ears may be a future identifying biometric marker, and dental records may be used for identification, these areas may be desirable to remove (Cadavid et al., 2009; Mosher, 2010). The obvious method for image defacing is to perform brain extraction as we described above. If we consider defacing to be removing parts of the face, while preserving the rest of the image as much as possible, this solution is not sufficient. Additional options for defacing exist, such as the MRI Deface software (https://www.nitrc.org/projects/mri_deface/), which is packaged in the FreeSurfer software and can be run using the mri_deface function from the freesurfer R package (Bischoff-Grethe et al., 2007; Fischl, 2012). We have found this method does not work well out of the box on head CT data, including when a large amount of the neck is imaged. Registration methods involve registering images to the CT and applying the transformation of a mask of the removal areas (such as the face).
Examples of this implementation in Python modules for defacing are pydeface (https://github.com/poldracklab/pydeface/tree/master/pydeface) and mridefacer (https://github.com/mih/mridefacer). These methods work since the registration from MRI to CT tends to perform adequately, usually with a cross-modality cost function such as mutual information. Other estimation methods, such as the Quickshear Defacing method, rely on finding the face by its relative placement compared to a modality-agnostic brain mask (Schimke and Hale, 2011). The fslr R package implements both the methods of pydeface and Quickshear. The ichseg R package also has a function ct_biometric_mask that tries to remove the face and ears based on registration to a CT template (described below). Overall, removing potential biometric markers from imaging data should be considered when releasing data; a number of methods exist, but they do not guarantee complete de-identification and may not work directly with CT without modification. https://slideplayer.com/slide/12844720/ https://neurostars.org/t/sharing-data-on-openneuro-without-consent-form-but-consent-by-the-ethics-committee/1593
  • 158. Brain Extraction Tools (BETs), i.e. skull stripping: nothing good available really for CT? (more options for MRI) Validated Automatic Brain Extraction of Head CT Images. John Muschelli et al. (2015) https://dx.doi.org/10.1016%2Fj.neuroimage.2015.03.074 https://rdrr.io/github/muschellij2/ichseg/man/CT_Skull_Strip_robust.html (R) https://johnmuschelli.com/neuroc/ss_ct/index.html Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States. Aim: To systematically analyze and validate the performance of FSL's brain extraction tool (BET) on head CT images of patients with intracranial hemorrhage. This was done by comparing the manual gold standard with the results of several versions of automatic brain extraction and by estimating the reliability of automated segmentation of longitudinal scans. The effects of the choice of BET parameters and data smoothing are studied and reported. BET performs well at brain extraction on thresholded, 1 mm³ smoothed CT images with a fractional intensity (FI) of 0.01 or 0.1. Smoothing before applying BET is an important step not previously discussed in the literature. Automated brain extraction from head CT and CTA images using convex optimization with shape propagation. Mohamed Najm et al. (2019) https://doi.org/10.1016/j.cmpb.2019.04.030 https://github.com/WuChanada/StripSkullCT (Matlab) Robust brain extraction tool for CT head images. Zeynettin Akkus, Petro Kostandy, Kenneth A. Philbrick, Bradley J. Erickson et al. (7 June 2020) https://doi.org/10.1016/j.neucom.2018.12.085 - Cited by 2 https://github.com/aqqush/CT_BET (Keras, Python)
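A crude CT brain mask can be built from the ingredients Muschelli's pipeline relies on: restrict to brain-range HU, smooth (the paper's key step before BET), and keep the largest connected component. A heuristic sketch assuming scipy; this toy stand-in does not reimplement BET or any of the cited tools:

```python
import numpy as np
from scipy import ndimage

def simple_ct_brain_mask(vol_hu, lo=0, hi=100, smooth_vox=1.0):
    """Rough CT brain mask: threshold brain-range HU (0-100), smooth,
    keep the largest connected component, and fill holes. A heuristic
    illustration only; validated tools add BET on top of these steps."""
    soft = ((vol_hu >= lo) & (vol_hu <= hi)).astype(float)
    soft = ndimage.gaussian_filter(soft, smooth_vox) > 0.5
    labels, n = ndimage.label(soft)
    if n == 0:
        return np.zeros(soft.shape, dtype=bool)
    sizes = ndimage.sum(soft, labels, index=np.arange(1, n + 1))
    mask = labels == (1 + int(np.argmax(sizes)))
    return ndimage.binary_fill_holes(mask)

# Toy head: skull shell (~1000 HU) around brain (~30 HU), air outside.
z, y, x = np.mgrid[0:48, 0:48, 0:48]
r = np.sqrt((z - 24) ** 2 + (y - 24) ** 2 + (x - 24) ** 2)
vol = np.full((48, 48, 48), -1000.0)
vol[r < 20] = 1000.0    # skull
vol[r < 17] = 30.0      # brain
mask = simple_ct_brain_mask(vol)
print(bool(mask[24, 24, 24]), bool(mask[24, 24, 4]))
```

On real scans this heuristic fails precisely where it matters, e.g. hemorrhage touching the skull or imaged neck tissue in the 0-100 HU range, which is why the validated pipelines above are preferred.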
  • 159. CT Preprocessing: MNI Space, normalization to spatial coordinates, the “registration problem”. Classification of damaged tissue in stroke CTs. A representative stroke CT scan (A) is normalized to MNI space (B) and spatially smoothed (C). Next, the resulting image is compared to a group of control CTs by means of the Crawford–Howell t-test. The resulting t-score map is converted to a probability map, which is then overlaid onto the image itself (D). By thresholding this probability map at a given significance level, the lesioned regions can be delineated. The lesion map in MNI space can be transformed back to individual subject space (E), so that it can be compared with a lesion map manually delineated by an operator (F) on the original CT image. http://doi.org/10.1016/j.nicl.2014.03.009 - Cited by 64. Human Brain in Standard MNI Space (2017), Jürgen Mai, Milan Majtanik. The Talairach coordinate of a point in the MNI space: how to interpret it. Wilkin Chau and Anthony R. McIntosh (2005) https://doi.org/10.1016/j.neuroimage.2004.12.007 “The two most widely used spaces in the neuroscience community are the Talairach space and the Montreal Neurological Institute (MNI) space. The Talairach coordinate system has become the standard reference for reporting brain locations in scientific publications, even when the data have been spatially transformed into different brain templates (e.g., MNI space).”
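The Crawford-Howell t-test in this lesion-mapping pipeline compares a single patient's (normalized, smoothed) voxel value against a small control group, per voxel. A minimal numpy sketch; the helper name and the toy HU values are ours:

```python
import numpy as np

def crawford_howell_t(patient_value, control_values):
    """Crawford-Howell t for comparing one case against a control sample:
    t = (x - mean_c) / (sd_c * sqrt(1 + 1/n)), with n - 1 degrees of
    freedom, accounting for the uncertainty of a small control group."""
    c = np.asarray(control_values, dtype=float)
    n = c.size
    return (patient_value - c.mean()) / (c.std(ddof=1) * np.sqrt(1.0 + 1.0 / n))

controls = [35.0, 38.0, 36.0, 37.0, 34.0]   # healthy-tissue HU at one voxel
t = crawford_howell_t(60.0, controls)        # hyperdense patient voxel
print(round(float(t), 2))
```

Applied voxelwise over the MNI-registered volume, the resulting t-map is what gets converted to the probability map and thresholded in panels (D) and (E) of the figure described above.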
  • 160. CT Preprocessing: Space Transform Optimization. Like with every signal processing step, you can always do better, and there are pros/cons related to each method.
https://www.slideserve.com/shaina/group-analyses-in-fmri http://www.diedrichsenlab.org/imaging/propatlas.htm - Cited by 660
Advanced Normalisation Tools (ANTs) http://www.mrmikehart.com/tutorials.html
Transcranial brain atlas http://doi.org/10.1126/sciadv.aar6904
Spatial Normalization - an overview https://www.sciencedirect.com/topics/medicine-and-dentistry/spatial-normalization
  • 162. Deep-MAR: Fast Enhanced CT Metal Artifact Reduction using Data Domain Deep Learning. Muhammad Usman Ghani, W. Clem Karl https://arxiv.org/abs/1904.04691v3 (2019)
Filtered backprojection (FBP) is the most widely used method for image reconstruction in X-ray computed tomography (CT) scanners, and can produce excellent images in many cases. However, the presence of dense materials, such as metals, can strongly attenuate or even completely block X-rays, producing severe streaking artifacts in the FBP reconstruction. These metal artifacts can greatly limit subsequent object delineation and information extraction from the images, restricting their diagnostic value.
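For context, the classical baseline that these deep MAR methods improve on treats metal-affected sinogram bins as missing and fills them by per-view linear interpolation (LI-MAR). A minimal NumPy sketch (function name and interface are illustrative, not from any of the cited papers):

```python
import numpy as np

def li_mar(sinogram, metal_trace):
    """Classical linear-interpolation MAR baseline: treat metal-affected
    sinogram bins as missing and fill them, view by view, by linear
    interpolation from the nearest unaffected detector bins.
    sinogram: (n_views, n_bins); metal_trace: boolean mask, same shape."""
    out = sinogram.astype(float).copy()
    bins = np.arange(sinogram.shape[1])
    for v in range(sinogram.shape[0]):
        bad = metal_trace[v]
        if bad.any() and not bad.all():
            out[v, bad] = np.interp(bins[bad], bins[~bad], sinogram[v, ~bad])
    return out
```

This crude inpainting is exactly what causes the "secondary artifacts due to sinogram inconsistency" discussed in the DuDoNet slides; the learned methods replace it with data-driven sinogram completion.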
  • 163. DuDoNet: Joint use of sinogram and image domains. DuDoNet: Dual Domain Network for CT Metal Artifact Reduction. Wei-An Lin, Haofu Liao, Cheng Peng, Xiaohang Sun, Jingdan Zhang, Jiebo Luo, Rama Chellappa, Shaohua Kevin Zhou (2019) http://openaccess.thecvf.com/content_CVPR_2019/html/Lin_DuDoNet_Dual_Domain_Network_for_CT_Metal_Artifact_Reduction_CVPR_2019_paper.html
Computed tomography (CT) is an imaging modality widely used for medical diagnosis and treatment. CT images are often corrupted by undesirable artifacts when metallic implants are carried by patients, which creates the problem of metal artifact reduction (MAR). Existing methods for reducing the artifacts due to metallic implants are inadequate for two main reasons. First, metal artifacts are structured and non-local, so that simple image domain enhancement approaches would not suffice. Second, the MAR approaches which attempt to reduce metal artifacts in the X-ray projection (sinogram) domain inevitably lead to severe secondary artifacts due to sinogram inconsistency. To overcome these difficulties, we propose an end-to-end trainable Dual Domain Network (DuDoNet) to simultaneously restore sinogram consistency and enhance CT images. The linkage between the sinogram and image domains is a novel Radon inversion layer that allows the gradients to back-propagate from the image domain to the sinogram domain during training. Extensive experiments show that our method achieves significant improvements over other single domain MAR approaches. To the best of our knowledge, it is the first end-to-end dual-domain network for MAR.
  • 164. DuDoNet++: Joint use of sinogram and image domains. DuDoNet++: Encoding mask projection to reduce CT metal artifacts. Yuanyuan Lyu, Wei-An Lin, Jingjing Lu, S. Kevin Zhou (Submitted on 2 Jan 2020 (v1), last revised 18 Jan 2020) https://arxiv.org/abs/2001.00340
CT metal artifact reduction (MAR) is a notoriously challenging task because the artifacts are structured and non-local in the image domain. However, they are inherently local in the sinogram domain. DuDoNet is the state-of-the-art MAR algorithm which exploits the latter characteristic by learning to reduce artifacts in the sinogram and image domain jointly. By design, DuDoNet treats the metal-affected regions in the sinogram as missing and replaces them with the surrogate data generated by a neural network. Since fine-grained details within the metal-affected regions are completely ignored, the artifact-reduced CT images by DuDoNet tend to be over-smoothed and distorted. In this work, we investigate the issue by theoretical derivation. We propose to address the problem by (1) retaining the metal-affected regions in the sinogram and (2) replacing the binarized metal trace with the metal mask projection such that the geometry information of metal implants is encoded. Extensive experiments on simulated datasets and expert evaluations on clinical images demonstrate that our network called DuDoNet++ yields anatomically more precise artifact-reduced images than DuDoNet, especially when the metallic objects are large.
  • 165. Unsupervised Approach: ADN with good performance. Artifact Disentanglement Network for Unsupervised Metal Artifact Reduction. Haofu Liao, Wei-An Lin, Jianbo Yuan, S. Kevin Zhou, Jiebo Luo (Submitted on 5 Jun 2019) https://arxiv.org/abs/1906.01806v5 https://github.com/liaohaofu/adn (PyTorch)
Current deep neural network based approaches to computed tomography (CT) metal artifact reduction (MAR) are supervised methods which rely heavily on synthesized data for training. However, as synthesized data may not perfectly simulate the underlying physical mechanisms of CT imaging, the supervised methods often generalize poorly to clinical applications. To address this problem, we propose, to the best of our knowledge, the first unsupervised learning approach to MAR. Specifically, we introduce a novel artifact disentanglement network that enables different forms of generations and regularizations between the artifact-affected and artifact-free image domains to support unsupervised learning. Extensive experiments show that our method significantly outperforms the existing unsupervised models for image-to-image translation problems, and achieves comparable performance to existing supervised models on a synthesized dataset. When applied to clinical datasets, our method achieves considerable improvements over the supervised models.
  • 166. Unsupervised Improvement over ADN? Three-dimensional Generative Adversarial Nets for Unsupervised Metal Artifact Reduction. Megumi Nakao, Keiho Imanishi, Nobuhiro Ueda, Yuichiro Imai, Tadaaki Kirita, Tetsuya Matsuda (Submitted on 19 Nov 2019) https://arxiv.org/abs/1911.08105
In this paper, we introduce metal artifact reduction methods based on an unsupervised volume-to-volume translation learned from clinical CT images. We construct three-dimensional adversarial nets with a regularized loss function designed for metal artifacts from multiple dental fillings. The results of experiments using 915 CT volumes from real patients demonstrate that the proposed framework has an outstanding capacity to reduce strong artifacts and to recover underlying missing voxels, while preserving the anatomical features of soft tissues and tooth structures from the original images.
  • 167. Using paired artifact-free MRI for CT MAR. Combining multimodal information for Metal Artefact Reduction: An unsupervised deep learning framework. Marta B.M. Ranzini, Irme Groothuis, Kerstin Kläser, M. Jorge Cardoso, Johann Henckel, Sébastien Ourselin, Alister Hart, Marc Modat [Submitted on 20 Apr 2020] https://arxiv.org/abs/2004.09321
Metal artefact reduction (MAR) techniques aim at removing metal-induced noise from clinical images. In Computed Tomography (CT), supervised deep learning approaches have been shown effective but limited in generalisability, as they mostly rely on synthetic data. In Magnetic Resonance Imaging (MRI) instead, no method has yet been introduced to correct the susceptibility artefact, still present even in MAR-specific acquisitions. In this work, we hypothesise that a multimodal approach to MAR would improve both CT and MRI. Given their different artefact appearance, their complementary information can compensate for the corrupted signal in either modality. We thus propose an unsupervised deep learning method for multimodal MAR. We introduce the use of Locally Normalised Cross Correlation as a loss term to encourage the fusion of multimodal information. Experiments show that our approach favours a smoother correction in the CT, while promoting signal recovery in the MRI.
  • 168. Unsupervised Approach jointly with other tasks. Joint Unsupervised Learning for the Vertebra Segmentation, Artifact Reduction and Modality Translation of CBCT Images. Yuanyuan Lyu, Haofu Liao, Heqin Zhu, S. Kevin Zhou (Submitted on 2 Jan 2020 (v1), last revised 18 Jan 2020) https://arxiv.org/abs/2001.00339
We investigate the unsupervised learning of the vertebra segmentation, artifact reduction and modality translation of CBCT images. To this end, we formulate this problem under a unified framework that jointly addresses these three tasks and intensively leverages the knowledge sharing. The unsupervised learning of this framework is enabled by 1) a novel shape-aware artifact disentanglement network that supports different forms of image synthesis and vertebra segmentation and 2) a deliberate fusion of knowledge from an independent CT dataset. Specifically, the proposed framework takes a random pair of CBCT and CT images as the input, and manipulates the synthesis and segmentation via different combinations of the decodings of the disentangled latent codes. Then, by discovering various forms of consistencies between the synthesized images and segmented vertebrae, the learning is achieved via self-learning from the given CBCT and CT images, obviating the need for paired (i.e., anatomically identical) ground-truth data.
  • 169. Mandible segmentation to help MAR? Recurrent convolutional neural networks for mandible segmentation from computed tomography. Bingjiang Qiu, Jiapan Guo, Joep Kraeima, Haye H. Glas, Ronald J.H. Borra, Max J.H. Witjes, Peter M.A. van Ooijen (Submitted on 13 Mar 2020) https://arxiv.org/abs/2003.06486
Recently, accurate mandible segmentation in CT scans based on deep learning methods has attracted much attention. However, there still exist two major challenges, namely, metal artifacts among mandibles and large variations in shape or size among individuals. To address these two challenges, we propose a recurrent segmentation convolutional neural network (RSegCNN) that embeds segmentation convolutional neural network (SegCNN) into the recurrent neural network (RNN) for robust and accurate segmentation of the mandible. Such a design of the system takes into account the similarity and continuity of the mandible shapes captured in adjacent image slices in CT scans. The RSegCNN infers the mandible information based on the recurrent structure with the embedded encoder-decoder segmentation (SegCNN) components. The recurrent structure guides the system to exploit relevant and important information from adjacent slices, while the SegCNN component focuses on the mandible shapes from a single CT slice.
  • 171. Noise Review #1: A review on CT image noise and its denoising. Manoj Diwakar, Manoj Kumar. Biomedical Signal Processing and Control (April 2018) https://doi.org/10.1016/j.bspc.2018.01.010
The process of CT image reconstruction depends on many physical measurements such as radiation dose and software/hardware. Due to statistical uncertainty in all physical measurements in computed tomography, inevitable noise is introduced in CT images. Therefore, edge-preserving denoising methods are required to enhance the quality of CT images. However, there is a tradeoff between noise reduction and the preservation of the actual medically relevant content. Reducing the noise without losing important features of the image such as edges, corners and other sharp structures is a challenging task. Nevertheless, various techniques have been presented to suppress the noise in CT scanned images. Each technique has its own assumptions, merits and limitations. This paper contains a survey of some significant work in the area of CT image denoising. Often, researchers face difficulty in understanding the noise in CT images and also in selecting an appropriate denoising method that is specific to their purpose. Hence, a brief introduction to CT imaging, the characteristics of noise in CT images and the popular methods of CT image denoising are presented here. The merits and drawbacks of CT image denoising methods are also discussed.
Major factors affecting the quality of CT images:
● Blurring: 1) how the equipment is operated; 2) appropriate protocol factor values; 3) blurring of the image due to patient movement; 4) fluctuation of the CT number between pixels in the image for a scan of uniform material; 5) some filter algorithms, or bad parameters of filter algorithms (used to reduce noise), blur the image
● Field of view (FOV)
● Artifacts
● Beam hardening
● Metal artifact
● Patient motion
● Software/hardware based artifacts
● Visual noise
To reconstruct a good quality CT image, the CT scanner has two important characteristics:
(1) Geometric efficiency: when X-rays are transmitted through the human body and some attenuated data are not received by the active detectors, geometric efficiency is reduced.
(2) Absorption efficiency: when X-rays are transmitted through the human body and some attenuated data are not captured by the active detectors, absorption efficiency is reduced.
Therefore, the relationship between noise and radiation dose in the CT scanner must be analyzed:
● Detector
● Collimators
● Scan range
● Tube current
● Scan (rotation) time
● Slice thickness
● Peak kilovoltage (kVp)
(1) By understanding the radiation dose and improving the dose efficiency of CT systems, the low-dose CT image can be improved. (2) In the second approach, CT image quality can be improved by developing algorithms to reduce the noise in CT images. These algorithms can then be used to reduce the radiation dose. Generally, the process of noise suppression is known as image denoising.
  • 172. Noise Review #2: Noise Sources. https://doi.org/10.1016/j.bspc.2018.01.010
Random noise: It may arise from the detection of a finite number of X-ray quanta in the projection. It looks like a fluctuation in the image density. As a result, the change in image density is unpredictable and random; this is known as random noise.
Statistical noise: The energy of X-rays is transmitted in the form of individual chunks of energy called quanta. A finite number of X-ray quanta is therefore detected by the X-ray detector. The number of detected X-ray quanta may differ from one measurement to another because of statistical fluctuation. Statistical noise in CT images may appear because of fluctuations in detecting a finite number of X-ray quanta; it may also be called quantum noise. As more quanta are detected in each measurement, the relative accuracy of each measurement is improved. The only way to reduce the effects of statistical noise is to increase the number of detected X-ray quanta. Normally, this is achieved by increasing the number of transmitted X-rays through an increase in X-ray dose.
Electronic noise: There are electric circuits to receive analog signals, also known as analog circuits. The process of receiving analog signals by the electronic circuits may be affected by some noise, which is referred to as electronic noise. The latest CT scanners are well designed to reduce electronic noise.
Roundoff errors: The analog signals are converted into digital signals using signal processing steps and then sent to the digital computer for CT image reconstruction. In digital computers, digital circuits handle the processing of discrete signals. Due to the limited number of bits available for storing discrete signals in a computer system, mathematical computation is not possible without roundoff. This limitation is referred to as roundoff error.
Generally, noise in reconstructed CT images is introduced mainly for two reasons. First, a continuously varying error due to electrical noise or roundoff errors, which can be modeled as simple additive noise; second, the possible error due to random variations in detected X-ray intensity.
To differentiate tissues (soft and hard), CT numbers are defined using the Hounsfield unit (HU) [60] for CT image reconstruction. The Hounsfield unit (HU) scale is displayed in Fig. 3, where some CT numbers are defined. The CT number for a given tissue is determined by the X-ray linear attenuation coefficient (LAC). Linearity is the ability of the CT image to assign the correct Hounsfield unit (HU) to a given tissue. Good linearity is essential for quantitative analysis of CT images.
The distribution of noise in a CT image can be derived by estimating the noise variance through reconstruction algorithms. The distribution of noise in a CT image can be accurately characterized using the Poisson distribution, but for multi-detector CT (MDCT) scanners, the noise distribution is more accurately characterized by the Gaussian distribution. The literature [51,57,121,117] also confirms that the noise in CT images is generally additive white Gaussian noise.
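The Hounsfield scaling mentioned above is a one-line mapping from linear attenuation coefficients. A sketch using the common definition HU = 1000 · (μ − μ_water) / μ_water, under which water maps to 0 HU and air (μ ≈ 0) to about −1000 HU (the function name is ours):

```python
def hounsfield(mu, mu_water):
    """CT number in Hounsfield units from linear attenuation
    coefficients: HU = 1000 * (mu - mu_water) / mu_water.
    Water -> 0 HU, air (mu ~= 0) -> -1000 HU, dense bone ~ +1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water
```

This linearity in μ is exactly what the review means by a scanner's "linearity": assigning the correct HU to a given tissue from its measured attenuation.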
  • 173. Noise Review #3: Denoising method comparison. https://doi.org/10.1016/j.bspc.2018.01.010
[25] H. Chen, Y. Zhang, M.K. Kalra, F. Lin, P. Liao, J. Zhou, G. Wang, Low-Dose CT with a Residual Encoder-Decoder Convolutional Neural Network (RED-CNN), 2017, arXiv preprint arXiv:1702.00288. https://arxiv.org/abs/1702.00288 - Cited by 224
[54] L. Gondara, Medical image denoising using convolutional denoising autoencoders, in: 2016 IEEE 16th International Conference on Data Mining Workshops (ICDMW), IEEE, 2016, pp. 241–246. https://doi.org/10.1109/ICDMW.2016.0041 - Cited by 76
[67] E. Kang, J. Min, J.C. Ye, A Deep Convolutional Neural Network Using Directional Wavelets for Low-Dose X-Ray CT Reconstruction, 2016, arXiv preprint arXiv:1610.09736. https://www.ncbi.nlm.nih.gov/pubmed/29027238
  • 174. CT Noise in Practice: Assessing Robustness to Noise: Low-Cost Head CT Triage. Sarah M. Hooper, Jared A. Dunnmon, Matthew P. Lungren, Sanjiv Sam Gambhir, Christopher Ré, Adam S. Wang, Bhavik N. Patel. Stanford University, 17 Mar 2020. https://arxiv.org/abs/2003.07977
In this work we use simulations to study noise from low-cost scanners, which enables systematic evaluation over large datasets without increasing labeling demand. However, studying variations in acquisition protocol using synthetic data is relevant when considering model deployment in any healthcare system. Different institutions often have differing acquisition protocols, with noise levels adjusted to suit the needs of their healthcare practitioners. However, robustness tests over acquisition protocol and noise level are rarely reported. Thus, the line of work presented in this study is relevant for model testing prior to deployment within any healthcare system. Finally, learning directly in sinogram space instead of reconstructed image space is an interesting future study that may also be pursued with synthetic data.
  • 176. Poisson noise in CT: Low-dose CT (low photon counts). Island Sign: An Imaging Predictor for Early Hematoma Expansion and Poor Outcome in Patients With Intracerebral Hemorrhage. Qi Li, Qing-Jun Liu, Wen-Song Yang, Xing-Chen Wang, Li-Bo Zhao, Xin Xiong, Rui Li, Du Cao, Dan Zhu, Xiao Wei, and Peng Xie. Stroke. 2017;48:3019–3025, 10 Oct 2017. https://doi.org/10.1161/STROKEAHA.117.017985
Poisson noise is due to the statistical error of low photon counts and results in random, thin, bright and dark streaks that appear preferentially in the direction of greatest attenuation (Figure 2). With increased noise, high-contrast objects, such as bone, may still be visible, but low-contrast soft-tissue boundaries may be obscured.
Poisson noise can be decreased by increasing the mAs. Modern scanners can perform tube current modulation, selectively increasing the dose when acquiring a projection with high attenuation. They also typically use bowtie filters, which provide a higher dose towards the center of the field of view compared with the periphery. There is a tradeoff between noise and resolution, so noise can also be reduced by increasing the slice thickness, using a softer reconstruction kernel (soft-tissue kernel instead of bone kernel) or blurring the image.
Noise can also be reduced by moving the arms out of the scanned volume for an abdominal CT. If the arms cannot be moved out of the scanned volume, placing them on top of the abdomen should reduce noise relative to placing them at the sides. Similarly, large breasts should be constrained in the front of the thorax rather than on both sides in thoracic and cardiac CT. This is because the noise increases rapidly as the photon counts approach zero, which means that the maximum attenuation has a larger effect on the noise than the average attenuation.
Iterative methods require faster computer chips, and have only recently become available for clinical use. One iterative method, model-based iterative reconstruction (MBIR; GE Healthcare, WI, USA) [5,6], received US FDA approval in September 2011 [101]. MBIR substantially reduces image noise and improves image quality, thus allowing scans to be acquired at lower radiation doses (Figure 3) [2]. Furthermore, owing to the tradeoff between noise and resolution, these methods will also probably be important for reducing noise in higher resolution images.
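The "noise increases rapidly as photon counts approach zero" behaviour is easy to reproduce by simulating quantum noise on projections: attenuate an incident photon count via Beer–Lambert, draw Poisson counts, and log-convert back. A minimal sketch (function name and the count floor of 1 are our assumptions):

```python
import numpy as np

def noisy_projection(line_integrals, photons_per_ray, rng=None):
    """Simulate quantum (Poisson) noise on CT projections: attenuate an
    incident photon count by Beer-Lambert, draw Poisson counts, and
    log-convert back to line integrals. Rays whose counts approach zero
    (dense bone, metal) dominate the noise, producing streaks."""
    if rng is None:
        rng = np.random.default_rng(0)
    p = np.asarray(line_integrals, dtype=float)
    counts = rng.poisson(photons_per_ray * np.exp(-p))
    counts = np.maximum(counts, 1)  # avoid log(0) on fully blocked rays
    return -np.log(counts / photons_per_ray)
```

Running the same line integrals at a high versus low incident count (i.e. high versus low mAs) shows the projection noise growing roughly as the inverse square root of the detected counts.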
  • 177. Dose reduction vs Image Quality. Vendor-free basics of radiation dose reduction techniques for CT. Takeshi Kubo (2019) European Journal of Radiology https://doi.org/10.1016/j.ejrad.2018.11.002
● Automatic exposure control and iterative reconstruction methods play a significant role in CT radiation dose reduction.
● The validity of dose reduction can be evaluated with objective and subjective image quality, and diagnostic accuracy.
● Realizing the reference dose level for common CT imaging protocols is necessary to avoid overdose in CT examinations.
● Efforts need to be made to decrease low-yield CT examinations. Clinical decision support is expected to play a significant role in leading to more meaningful application of CT examinations.
Tube current and image quality. CT images of an anthropomorphic phantom obtained with (a) 125 mAs and (b) 55 mAs at the level of the lung bases. Standard deviations of Hounsfield units in the region of interest are 14.5 and 19.3 in images (a) and (b), respectively. Streak artifacts originating from the thoracic vertebra are seen as black linear structures and are more readily perceptible in image (b). The image acquired with lower radiation dose (b, 55 mAs) has more noise and streak artifacts than the one with higher radiation dose (a, 125 mAs).
Tube current adjustment by automatic exposure control system. Modification of X-ray energy profile: (a) X-ray energy profile at 140 kVp (solid line) and 80 kVp (dashed line). (b, c) Modification of the energy profile with an extra X-ray filter: energy profile at 100 kVp without a filter (b) and at 100 kVp with an additional filter (c). Low-energy X-rays are mostly removed by the additional filter.
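As a rough quantitative check of the phantom example above: for pure quantum noise, image standard deviation scales as 1/√mAs, so 14.5 HU at 125 mAs predicts about 21.9 HU at 55 mAs; the measured 19.3 HU is somewhat lower, which is plausible since the rule of thumb ignores electronic noise, reconstruction kernel and other contributions. A sketch (function name ours):

```python
import math

def predicted_noise(sd_ref, mas_ref, mas_new):
    """Quantum-noise rule of thumb: image standard deviation scales as
    1/sqrt(tube current-time product),
    SD_new = SD_ref * sqrt(mAs_ref / mAs_new)."""
    return sd_ref * math.sqrt(mas_ref / mas_new)

# Phantom values from the slide: 14.5 HU at 125 mAs predicts ~21.9 HU
# at 55 mAs, versus the measured 19.3 HU.
```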
  • 178. Low-dose CT of course benefits from better restoration. SUPER Learning: A Supervised-Unsupervised Framework for Low-Dose CT Image Reconstruction. Zhipeng Li, Siqi Ye, Yong Long, Saiprasad Ravishankar (Submitted on 26 Oct 2019) https://arxiv.org/abs/1910.12024
Recent years have witnessed growing interest in machine learning-based models and techniques for low-dose X-ray CT (LDCT) imaging tasks. The methods can typically be categorized into supervised learning methods and unsupervised or model-based learning methods. Supervised learning methods have recently shown success in image restoration tasks. However, they often rely on large training sets. Model-based learning methods such as dictionary or transform learning do not require large or paired training sets and often have good generalization properties, since they learn general properties of CT image sets.
Recent works have shown the promising reconstruction performance of methods such as PWLS-ULTRA that rely on clustering the underlying (reconstructed) image patches into a learned union of transforms. In this paper, we propose a new Supervised-UnsuPERvised (SUPER) reconstruction framework for LDCT image reconstruction that combines the benefits of supervised learning methods and (unsupervised) transform learning-based methods such as PWLS-ULTRA that involve highly image-adaptive clustering. The SUPER model consists of several layers, each of which includes a deep network learned in a supervised manner and an unsupervised iterative method that involves image-adaptive components. The SUPER reconstruction algorithms are learned in a greedy manner from training data. The proposed SUPER learning methods dramatically outperform both the constituent supervised learning-based networks and iterative algorithms for LDCT, and use far fewer iterations in the iterative reconstruction modules.
  • 179. Dual-energy/detector CT: "sort of CT HDR" #1. Dual energy computed tomography for the head. Norihito Naruto, Toshihide Itoh, Kyo Noguchi. Japanese Journal of Radiology, February 2018, Volume 36, Issue 2, pp 69–80. https://doi.org/10.1007/s11604-017-0701-4 - Cited by 2
Dual energy CT (DECT) is a promising technology that provides better diagnostic accuracy in several brain diseases. DECT can generate various types of CT images from a single acquisition data set at high kV and low kV based on material decomposition algorithms. The two-material decomposition algorithm can separate bone/calcification from iodine accurately. The three-material decomposition algorithm can generate a virtual non-contrast image, which helps to identify conditions such as brain hemorrhage. A virtual monochromatic image has the potential to eliminate metal artifacts by reducing beam-hardening effects.
DECT also enables exploration of advanced imaging to make diagnosis easier. One such novel application of DECT is the X-Map, which helps to visualize ischemic stroke in the brain without using iodine contrast medium. The X-Map uses a modified 3MD algorithm. A motivation of this application is to visualize an ischemic change of the brain parenchyma by detecting an increase in water content in a voxel. To identify a small change in water content, the 3MD algorithm had a lipid-specific slope of 2.0 applied in order to suppress the small difference between gray matter and white matter, which is mainly the difference in their lipid content. As shown in the diagram, the nominal values of gray matter and white matter are 33 HU at Sn150 kV and 42 HU at 80 kV, and 29 HU at Sn150 kV and 34 HU at 80 kV, respectively. The lipid-specific slope between the nominal points of gray matter and white matter is 2.0 using the third-generation DSCT (SOMATOM Force; Siemens Healthcare, Forchheim, Germany).
A patient with acute ischemic stroke 3 h after onset. A simulated standard CT image (a) obtained 3 h after the ischemic stroke onset shows no definite early ischemic change, although the left frontoparietal operculum may show questionable hypo-density. The X-Map (b) clearly shows the ischemic lesion in the left middle cerebral artery territory. The diffusion-weighted image (c) also shows a definite acute ischemic lesion in the left MCA territory.
The two-material decomposition (2MD) is the algorithm that generates several dual energy (DE) images. The 2MD algorithm (a) can distinguish one material from other materials such as bone and iodine using a separation line. This algorithm has been used for the DE direct bone removal application. The three-material decomposition (3MD) algorithm (b) can extract the iodine component from contrast-enhanced tissues. All voxels are projected along the iodine-specific slope to the line connecting fat and soft tissue. This algorithm has been used for the DE brain hemorrhage application.
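The linear-algebra core of two-material decomposition is a 2×2 solve per voxel: express the measured (low-kV, high-kV) HU pair as a linear combination of two basis materials. This is only an idealized sketch (vendor implementations add calibration, constraints and noise handling); the function name is ours, and the basis values in the usage note come from the gray/white-matter nominal HU quoted above.

```python
import numpy as np

def two_material_fractions(hu_low, hu_high, basis_low, basis_high):
    """Idealized two-material decomposition: solve a 2x2 linear system
    so that (hu_low, hu_high) = a * materialA + b * materialB.
    basis_low/basis_high: HU of (material A, material B) at the low-
    and high-energy acquisitions, respectively. Returns (a, b)."""
    A = np.array([[basis_low[0], basis_low[1]],
                  [basis_high[0], basis_high[1]]], dtype=float)
    return np.linalg.solve(A, np.array([hu_low, hu_high], dtype=float))
```

With the slide's nominal gray matter (42 HU at 80 kV, 33 HU at Sn150 kV) and white matter (34, 29) as bases, a pure gray-matter voxel returns fractions (1, 0), and a 50/50 mixture (38, 31) returns (0.5, 0.5).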
  • 180. Dual-energy/detector CT: "sort of CT HDR" #2. Technical limitations of dual-energy CT in neuroradiology: 30-month institutional experience and review of literature. Julien Dinkel, Omid Khalilzadeh, Catherine M Phan, Ajit H Goenka, Albert J Yoo, Joshua A Hirsch, Rajiv Gupta. Journal of NeuroInterventional Surgery 2015;7:596-602. http://dx.doi.org/10.1136/neurintsurg-2014-011241
Although dual-energy CT (DECT) appears to be a promising option, its limitations in neuroimaging have not been systematically studied. Knowledge of these limitations is essential before DECT can be considered a standard modality in neuroradiology. In this study, a retrospective analysis was performed to analyze failure modes and limitations of DECT in neuroradiology. To further illustrate potential limitations of DECT, the clinical analysis was supplemented with an in vitro dilution experiment using cylinders containing predetermined concentrations of heparinized swine blood, normal saline, and iodine.
There is a chronic infarct in the right middle cerebral artery territory with diffuse mineralization in this region (circled). A single-energy image (A) and virtual non-contrast image (B) show hyperdensity (mean of 58 HU) surrounding the infarction of the right basal ganglia and adjacent internal capsule. There is trace corresponding hyperdensity on the iodine overlay image (C). This finding, by itself, may represent mineralization or a combination of iodine and hemorrhage. Hard-plaque removal software (D) cannot identify this region of faint, diffuse mineralization.
Single-energy image (A) with beam-hardening artifacts from clips on a right middle cerebral artery aneurysm. An iodine overlay image (C) is particularly impaired by the metallic artifact. The virtual non-contrast image (B) is less affected by the metallic artifact.
A proposed algorithm for assessing intraparenchymal calcification using dual-energy CT processing. The original 80 and 140 kV images are decomposed into two alternate base pairs: brain parenchyma and calcium. A hyperdensity disappearing on the brain overlay can be regarded as a calcification. ICH, intracranial hemorrhage.
Two types of hyperattenuation seen on a mixed image (A, D) obtained by dual-energy CT in a patient who underwent recanalization therapy. Contrast staining (oval) in the right basal ganglia is also depicted in the iodine overlay image (C) but not in the virtual non-contrast (VNC) image (B). A faint focal mineralization is seen in the left lentiform nucleus (arrow). The iodine-specific material decomposition algorithm cannot identify this fourth material, which is seen on both the VNC (B) and iodine overlay image (C). After postprocessing using the brain mineralization application, this hyperdensity disappears on the brain overlay (E), confirming a calcification. Note that both iodine content and calcifications are seen on the 'calcium overlay' (F).
  • 181. Dual-energy/detector CT: "sort of CT HDR" #3. Characteristic images of the CT brain protocol from the single-layer detector CT (SLCT; Brilliance iCT, Philips Healthcare) and dual-layer detector CT (DLCT; IQon spectral CT, Philips Healthcare). The contrast between the grey and white matter is clear in both images. In the SLCT image, a drain is visible. The window level and width for both images is 40/80.
Van Ommen et al. (January 2019) Dose of CT protocols acquired in clinical routine using a dual-layer detector CT scanner: A preliminary report. http://doi.org/10.1016/j.ejrad.2019.01.011
Veronica Fransson's Master's thesis (2019): Iodine Quantification Using Dual Energy Computed Tomography and applications in Brain Imaging. http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8995820&fileOId=8995821
  • 182. A Review of the Applications of Dual-Energy CT in Acute Neuroimaging. https://doi.org/10.1177%2F0846537120904347
Dual-energy CT is a powerful tool for supplementing standard CT in acute neuroimaging. Many clinical applications have been demonstrated to represent added value, notably improved diagnoses and diagnostic confidence in head and spinal trauma, cerebral ischemia and hemorrhage, and angiography. Emerging iodine quantification methods have the potential to guide medical, surgical, and interventional therapy and prognostication in stroke, aneurysmal hemorrhage, and traumatic contusions. As the technology of DECT continues to evolve, these tools promise maturation and expansion of their role in emergent neurological presentations.
In three-material decomposition, if a fourth (or more) material, such as calcium, is present at a certain concentration in a voxel, DECT cannot separate the constituent materials and will misclassify them, which may present challenges in separating calcification from enhancement or hemorrhage. Iodine concentrations that are too low may be unquantifiable or undetectable, and concentrations that are too high may prevent complete iodine subtraction. The limitation of a relatively narrow field of view (25-36.5 cm, depending on scanner generation) is of lesser importance in neuroradiology, as the brain and spine, when centered in the field of view, should be adequately covered.
  • 183. Using Dual-Energy CT to Identify Small Foci of Hemorrhage in the Emergency Setting https://doi.org/10.1148/radiol.2019192258 Dual-energy CT should better distinguish calcium from hematoma. Dual-Energy Head CT Enables Accurate Distinction of Intraparenchymal Hemorrhage from Calcification in Emergency Department Patients. Ranliang Hu, Laleh Daftari Besheli, Joseph Young, Markus Wu, Stuart Pomerantz, Michael H. Lev, Rajiv Gupta https://doi.org/10.1148/radiol.2015150877 To evaluate the ability of dual-energy (DE) computed tomography (CT) to differentiate calcification from acute hemorrhage in the emergency department setting. In this institutional review board-approved study, all unenhanced DE head CT examinations that were performed in the emergency department in November and December 2014 were retrospectively reviewed. Simulated 120-kVp single-energy CT images were derived from the DE CT acquisition via postprocessing. Patients with at least one focus of intraparenchymal hyperattenuation on single-energy CT images were included, and DE material decomposition postprocessing was performed. Each focal hyperattenuation was analyzed on the basis of the virtual noncalcium and calcium overlay images and classified as calcification or hemorrhage. Sensitivity, specificity, and accuracy were calculated for single-energy and DE CT by using a common reference standard established by relevant prior and follow-up imaging and clinical information. DE CT using material decomposition enables accurate differentiation between calcification and hemorrhage in patients presenting for emergency head imaging and can be especially useful in problem-solving complex cases that are difficult to determine based on conventional CT appearance alone.
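The per-voxel core of this kind of material decomposition can be sketched directly: two energy measurements plus a volume-conservation constraint make three basis materials solvable. A minimal numpy sketch; the attenuation coefficients below are illustrative placeholders (not Hu et al.'s calibrated values), and the function name is ours:

```python
import numpy as np

# Illustrative linear attenuation coefficients (1/cm) at low/high kVp for
# three basis materials: brain tissue, blood (hemorrhage), and calcium.
# Placeholder numbers for the sketch, not calibrated scanner values.
MU = np.array([
    # low-kVp, high-kVp
    [0.210, 0.190],   # brain
    [0.260, 0.230],   # blood
    [0.900, 0.600],   # calcium
])

def three_material_fractions(mu_low, mu_high):
    """Solve per-voxel volume fractions (brain, blood, calcium) from the two
    measured attenuations plus the volume-conservation constraint sum(f)=1."""
    A = np.vstack([MU[:, 0], MU[:, 1], np.ones(3)])  # 3x3 linear system
    b = np.array([mu_low, mu_high, 1.0])
    return np.linalg.solve(A, b)

# A voxel matching the calcium coefficients decomposes as pure calcium:
print(np.round(three_material_fractions(0.900, 0.600), 6))

# A 50/50 brain-blood mix is recovered from its mixed attenuations:
mix = 0.5 * MU[0] + 0.5 * MU[1]
print(np.round(three_material_fractions(*mix), 6))
```

A virtual noncalcium image then amounts to zeroing the calcium fraction and re-synthesizing the voxel value from the remaining materials.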
  • 184. Multi-energy CT. Uniqueness criteria in multi-energy CT. Guillaume Bal, Fatma Terzioglu (Submitted on 6 Jan 2020) https://arxiv.org/abs/2001.06095 Multi-Energy Computed Tomography (ME-CT) is a medical imaging modality aiming to reconstruct the spatial density of materials from the attenuation properties of probing x-rays. For each line in two- or three-dimensional space, ME-CT measurements may be written as a nonlinear mapping from the integrals of the unknown densities of a finite number of materials along said line to an equal or larger number of energy-weighted integrals corresponding to different x-ray source energy spectra. ME-CT reconstructions may thus be decomposed as a two-step process: (i) reconstruct line integrals of the material densities from the available energy measurements; and (ii) reconstruct densities from their line integrals. Step (ii) is the standard linear x-ray CT problem whose invertibility is well known, so this paper focuses on step (i).
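The nonlinear mapping described above is the usual polychromatic Beer-Lambert model; written out explicitly (notation chosen here for illustration, following the abstract's description rather than the paper's exact symbols):

```latex
% ME-CT forward model along a line \ell, with source spectra S_j,
% M basis materials, attenuation \mu_m(E), and densities \rho_m:
\[
  X_m(\ell) = \int_\ell \rho_m(x)\,\mathrm{d}x, \qquad m = 1,\dots,M,
\]
\[
  I_j(\ell) = \int S_j(E)\,
      \exp\!\Big(-\sum_{m=1}^{M} \mu_m(E)\, X_m(\ell)\Big)\,\mathrm{d}E,
  \qquad j = 1,\dots,J,\quad J \ge M.
\]
% Step (i): invert the nonlinear map (I_1,\dots,I_J) -> (X_1,\dots,X_M)
% line by line. Step (ii): recover each \rho_m from its line integrals
% X_m(\ell) by standard (linear) CT reconstruction.
```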
  • 185. Low-dose Multi-energy CT. Joint Reconstruction in Low Dose Multi-Energy CT. Jussi Toivanen, Alexander Meaney, Samuli Siltanen, Ville Kolehmainen (Submitted on 11 Apr 2019 (v1), last revised 13 Feb 2020 (this version, v3)) https://arxiv.org/abs/1904.05671 Multi-energy CT takes advantage of the non-linearly varying attenuation properties of elemental media with respect to energy, enabling more precise material identification than single-energy CT. The increased precision comes with the cost of a higher radiation dose. A straightforward way to lower the dose is to reduce the number of projections per energy, but this makes tomographic reconstruction more ill-posed. In this paper, we propose how this problem can be overcome with a combination of a regularization method that promotes structural similarity between images at different energies and a suitably selected low-dose data acquisition protocol using non-overlapping projections. The performance of various joint regularization models is assessed with both simulated and experimental data, using the novel low-dose data acquisition protocol. Three of the models are well-established, namely the joint total variation, the linear parallel level sets and the spectral smoothness promoting regularization models. Furthermore, one new joint regularization model is introduced for multi-energy CT: a regularization based on the structure function from the structural similarity index. The findings show that joint regularization outperforms individual channel-by-channel reconstruction. Furthermore, the proposed combination of joint reconstruction and non-overlapping projection geometry enables significant reduction of radiation dose. G. Poludniowski, G. Landry, F. DeBlois, P. M. Evans, and F. Verhaegen. SpekCalc: a program to calculate photon spectra from tungsten anode x-ray tubes. Physics in Medicine and Biology, 54:N433–N438, 2009. https://doi.org/10.1088/0031-9155/54/19/N01
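The difference between channel-by-channel and joint regularization can be illustrated with a toy isotropic total-variation computation. This is a minimal numpy sketch of the general coupling idea only, not the authors' regularization models:

```python
import numpy as np

def grad2d(u):
    """Forward-difference spatial gradient of a 2D image: (gx, gy)."""
    gx = np.diff(u, axis=1, append=u[:, -1:])
    gy = np.diff(u, axis=0, append=u[-1:, :])
    return gx, gy

def channelwise_tv(channels):
    """Sum of independent isotropic TV terms, one per energy channel."""
    return sum(np.sqrt(gx**2 + gy**2).sum()
               for gx, gy in map(grad2d, channels))

def joint_tv(channels):
    """Joint TV: one gradient magnitude coupling all channels, which
    favours edges occurring at the same location in every channel."""
    sq = np.zeros_like(channels[0], dtype=float)
    for u in channels:
        gx, gy = grad2d(u)
        sq += gx**2 + gy**2
    return np.sqrt(sq).sum()

# Two channels sharing one edge: the joint penalty is cheaper than the
# channel-by-channel one, so shared structure is preferred.
a = np.zeros((8, 8)); a[:, 4:] = 1.0
b = 2.0 * a
print(joint_tv([a, b]), channelwise_tv([a, b]))
```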
  • 186. 3D Few-view CT Reconstruction. Deep Encoder-decoder Adversarial Reconstruction (DEAR) Network for 3D CT from Few-view Data. Huidong Xie, Hongming Shan, Ge Wang (Submitted on 13 Nov 2019) https://arxiv.org/abs/1911.05880 In this paper, we propose a deep encoder-decoder adversarial reconstruction (DEAR) network for 3D CT image reconstruction from few-view data. Since the artifacts caused by few-view reconstruction appear in 3D instead of 2D geometry, a 3D deep network has a great potential for improving the image quality in a data-driven fashion. More specifically, our proposed DEAR-3D network aims at reconstructing 3D volume directly from clinical 3D spiral cone-beam image data. DEAR-3D utilizes 3D convolutional layers to extract 3D information from multiple adjacent slices in a generative adversarial network (GAN) framework. Different from reconstructing 2D images from 3D input data, DEAR-3D directly reconstructs a 3D volume, with faithful texture and image details; DEAR is validated on a publicly available abdominal CT dataset prepared and authorized by Mayo Clinic. Compared with other 2D deep-learning methods, the proposed DEAR-3D network can utilize 3D information to produce promising reconstruction results. Few-view CT may be implemented as a mechanically stationary scanner in the future [Cramer et al. 2018] for health-care and other utilities. Current commercial CT scanners use one or two x-ray sources mounted on a rotating gantry, and take hundreds of projections around a patient. The rotating mechanism is not only massive but also power-consuming. Hence, current commercial CT scanners are inaccessible outside hospitals and imaging centers, due to their size, weight, and cost. Designing a stationary gantry with multiple miniature x-ray sources is an interesting approach to resolve this issue [Cramer et al. 2018].
  • 188. Multimodal Spatial Normalization Example #1. Image processing steps for three methods of spatial normalization and measuring regional SUV. (a) Skull-stripping of original CT image, (b) spatial normalization of skull-stripped CT to skull-stripped CT template, (c) applying transformation parameter normalizing CT image for spatial normalization of PET image, (d) skull-stripping of original MR image, (e) spatial normalization of skull-stripped MR image to skull-stripped MR template, (f) coregistration of PET image to MR image, (g) applying transformation parameter normalizing MR image for spatial normalization of PET image, (h) spatial normalization of PET image with MNI PET template, (i) measuring regional SUV with modified AAL VOI template, (j) acquisition of FSVOI with FreeSurfer, and (k) measuring regional SUV by using FSVOI overlaid on PET image coregistered to MR. AAL = automated anatomical labeling, FSVOI = FreeSurfer-generated volume of interest, MNI = Montreal Neurological Institute, PET = positron emission tomography, SUV = standardized uptake value, VOI = volume of interest. A Computed Tomography-Based Spatial Normalization for the Analysis of [18F] Fluorodeoxyglucose Positron Emission Tomography of the Brain. Korean J Radiol. 2014 Nov-Dec;15(6):862-870. https://doi.org/10.3348/kjr.2014.15.6.862
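Steps (i) and (k) above, measuring regional SUV against a VOI template, reduce to a masked mean once PET and atlas sit in the same template space. A toy numpy sketch with made-up label values:

```python
import numpy as np

def regional_mean(pet, atlas, label):
    """Mean PET value (e.g. SUV) inside one atlas VOI, assuming both
    volumes have already been normalized into the same template space."""
    mask = atlas == label
    return pet[mask].mean()

# Toy 4x4 "volumes": atlas label 1 marks the VOI, 0 is background.
atlas = np.array([[0, 0, 1, 1],
                  [0, 0, 1, 1],
                  [0, 0, 0, 0],
                  [0, 0, 0, 0]])
pet = np.arange(16.0).reshape(4, 4)
print(regional_mean(pet, atlas, 1))  # mean of 2, 3, 6, 7 -> 4.5
```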
  • 189. Multimodal Spatial Normalization Example #2. Spatial Normalization of CT Images to MNI Space: A Representative http://fbcover.us/mni-template/ Pretty MNI Template Images Gallery: Study-Specific EPI Template http://fbcover.us/mni-template/ BIC, The McConnell Brain Imaging Centre: ICBM 152 NLin 2009 http://fbcover.us/mni-template/
  • 190. MRI Spatial Normalization Example. Spatial registration for functional near-infrared spectroscopy: From channel position on the scalp to cortical location in individual and group analyses (NeuroImage 2013) https://doi.org/10.1016/j.neuroimage.2013.07.025 Probabilistic registration of single-subject data without MRI. (A) Positions for channels and reference points in real-world (RW) space are measured using a 3D digitizer. The minimum number of reference points is four, as in this case, where Nz (nasion), Cz, and left and right preauricular points (AL and AR) are used. Alternatively, whole or selected 10/20 positions may be used. (B) The reference points in RW are affine-transformed to the corresponding reference points in each entry in reference to the MRI database in MNI space. (C) Channels of the scalp are projected onto the cortical surface of the reference brains. (D) The cortically projected channel positions are integrated to yield the most likely coordinates (average: centers of spheres) and variability (composite standard deviation: radii of spheres) in MNI space.
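Step (B), the affine transform from measured reference points, can be estimated by least squares from four or more point correspondences. A minimal numpy sketch under that assumption; the function names are ours:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 3D affine (A, t) mapping src points to dst points.
    src, dst: (N, 3) arrays with N >= 4 non-coplanar reference points
    (e.g. nasion, Cz, and the left/right preauricular points)."""
    n = len(src)
    X = np.hstack([src, np.ones((n, 1))])        # (N, 4) homogeneous coords
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)  # (4, 3) solution
    A, t = M[:3].T, M[3]
    return A, t

def apply_affine(A, t, pts):
    return pts @ A.T + t

# Sanity check: recover a known transform from 4 reference points.
rng = np.random.default_rng(0)
src = rng.normal(size=(4, 3))
A_true = np.array([[1.1, 0.0, 0.1],
                   [0.0, 0.9, 0.0],
                   [0.2, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 0.5])
dst = apply_affine(A_true, t_true, src)
A, t = fit_affine(src, dst)
print(np.allclose(A, A_true), np.allclose(t, t_true))  # True True
```

With exactly four non-coplanar points the system is square and the fit is exact; with more points (e.g. full 10/20 positions) the same call returns the least-squares estimate.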
  • 191. ANTs package and SyN as the “SOTA”. Evaluation of 14 nonlinear deformation algorithms applied to human brain MRI registration. Arno Klein et al. (2009) NeuroImage, Volume 46, Issue 3, 1 July 2009, Pages 786-802 https://doi.org/10.1016/j.neuroimage.2008.12.037 - Cited by 1776. More than 45,000 registrations between 80 manually labeled brains were performed by algorithms including: AIR, ANIMAL, ART, Diffeomorphic Demons, FNIRT, IRTK, JRD-fluid, ROMEO, SICLE, SyN, and four different SPM5 algorithms (“SPM2-type” and regular Normalization, Unified Segmentation, and the DARTEL Toolbox). All of these registrations were preceded by linear registration between the same image pairs using FLIRT. One of the most significant findings of this study is that the relative performances of the registration methods under comparison appear to be little affected by the choice of subject population, labeling protocol, and type of overlap measure. This is important because it suggests that the findings are generalizable to new subject populations that are labeled or evaluated using different labeling protocols. Furthermore, we ranked the 14 methods according to three completely independent analyses (permutation tests, one-way ANOVA tests, and indifference-zone ranking) and derived three almost identical top rankings of the methods. ART, SyN, IRTK, and SPM's DARTEL Toolbox gave the best results according to overlap and distance measures, with ART and SyN delivering the most consistently high accuracy across subjects and label sets. Updates will be published on the website http://www.mindboggle.info/papers/ Blaiotta et al. (2018): “Advanced Normalisation Tools (ANTs) package, through the web site http://stnava.github.io/ANTs/. Indeed, the symmetric diffeomorphic registration framework implemented in ANTs has established itself as the state-of-the-art of medical image nonlinear spatial normalisation (Klein et al., 2009).” Image Registration Diffeomorphisms: SyN, Independent Evaluation: Klein, Murphy, Template Construction (2004)(2010), Similarity Metrics, Multivariate registration, Multiple modality analysis and statistical bias
  • 192. How about missing data? Diffeomorphic registration with intensity transformation and missing data: Application to 3D digital pathology of Alzheimer’s disease. Daniel Tward, Timothy Brown, Yusuke Kageyama, Jaymin Patel, Zhipeng Hou, Susumu Mori, Marilyn Albert, Juan Troncoso, Michael Miller. bioRxiv preprint first posted online Dec. 11, 2018; doi: http://dx.doi.org/10.1101/494005 This paper examines the problem of diffeomorphic image mapping in the presence of differing image intensity profiles and missing data. Our motivation comes from the problem of aligning 3D brain MRI with 100 micron isotropic resolution, to histology sections with 1 micron in-plane resolution. Multiple stains, as well as damaged, folded, or missing tissue are common in this situation. We overcome these challenges by introducing two new concepts. Cross-modality image matching is achieved by jointly estimating polynomial transformations of the atlas intensity, together with pose and deformation parameters. Missing data is accommodated via a multiple atlas selection procedure where several atlases may be of homogeneous intensity and correspond to “background” or “artifact”. The two concepts are combined within an Expectation Maximization algorithm, where atlas selection posteriors and deformation parameters are updated iteratively, and polynomial coefficients are computed in closed form. We show results for 3D reconstruction of digital pathology and MRI in standard atlas coordinates. In conjunction with convolutional neural networks, we quantify the 3D density distribution of tauopathy throughout the medial temporal lobe of an Alzheimer’s disease postmortem specimen.
  • 193. Diffusion Tensor Imaging registration pipeline example. Improving spatial normalization of brain diffusion MRI to measure longitudinal changes of tissue microstructure in human cortex and white matter. Florencia Jacobacci, Jorge Jovicich, Gonzalo Lerner, Edson Amaro Jr, Jorge Armony, Julien Doyon, Valeria Della-Maggiore. Universidad de Buenos Aires. https://doi.org/10.1101/590521 (March 28, 2019) https://github.com/florjaco/DWIReproducibleNormalization Scalar diffusion tensor imaging (DTI) measures, such as fractional anisotropy (FA) and mean diffusivity (MD), are increasingly being used to evaluate longitudinal changes in brain tissue microstructure. In this study, we aimed at optimizing the normalization approach of longitudinal DTI data in humans to improve registration in gray matter and reduce artifacts associated with multisession registrations. For this purpose, we examined the impact of different normalization features on the across-session test-retest reproducibility error of FA and MD maps from multiple scanning sessions. We found that a normalization approach using ANTs as the registration algorithm, the MNI152 T1 template as the target image, FA as the moving image, and an intermediate FA template yielded the highest test-retest reproducibility in registering longitudinal DTI maps for both gray matter and white matter. Our optimized normalization pipeline opens a window to quantify longitudinal changes in microstructure at the cortical level.
  • 195. Technical Image Quality validated by radiologists. Validation of algorithmic CT image quality metrics with preferences of radiologists. Yuan Cheng, Ehsan Abadi, Taylor Brunton Smith, Francesco Ria, Mathias Meyer, Daniele Marin, Ehsan Samei. https://doi.org/10.1002/mp.13795 (29 August 2019) Automated assessment of perceptual image quality on clinical Computed Tomography (CT) data by computer algorithms has the potential to greatly facilitate data-driven monitoring and optimization of CT image acquisition protocols. The application of these techniques in clinical operation requires the knowledge of how the output of the computer algorithms corresponds to clinical expectations. This study addressed the need to validate algorithmic image quality measurements on clinical CT images with preferences of radiologists and determine the clinically acceptable range of algorithmic measurements for abdominal CT examinations. Algorithmic measurements of image quality metrics (organ HU, noise magnitude, and clarity) were performed on a clinical CT image dataset with supplemental measures of noise power spectrum from phantom images using techniques developed previously. The algorithmic measurements were compared to clinical expectations of image quality in an observer study with seven radiologists. The observer study results indicated that these algorithms can robustly assess the perceptual quality of clinical CT images in an automated fashion. Clinically acceptable ranges of algorithmic measurements were determined. The correspondence of these image quality assessment algorithms to clinical expectations paves the way toward establishing diagnostic reference levels in terms of clinically acceptable perceptual image quality and data-driven optimization of CT image acquisition protocols.
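Of the metrics named above, noise magnitude is the simplest to illustrate: the standard deviation of HU values in a uniform region. This is a toy sketch of the idea only; the validated algorithms select and weight organ regions far more carefully, and the ROI here is placed by hand:

```python
import numpy as np

def roi_stats(image_hu, y, x, half=10):
    """Mean HU and noise magnitude (std of HU) in a square ROI centred
    on (y, x)."""
    roi = image_hu[y - half:y + half, x - half:x + half]
    return float(roi.mean()), float(roi.std())

# Synthetic uniform "organ" patch: 60 HU plus Gaussian noise, sigma = 12 HU.
rng = np.random.default_rng(1)
img = 60.0 + rng.normal(0.0, 12.0, size=(128, 128))
mean_hu, noise = roi_stats(img, 64, 64, half=20)
print(mean_hu, noise)  # close to the true 60 HU and 12 HU
```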
  • 196. Image Quality (and resolution) is task-specific: sometimes blurry + pixelated volumes can get you somewhere? The Effect of Image Resolution on Deep Learning in Radiography. Carl F. Sabottke, Bradley M. Spieler. Radiology: Artificial Intelligence (Jan 22, 2020) https://doi.org/10.1148/ryai.2019190015 Tracking convolutional neural network performance as a function of image resolution allows insight into how the relative subtlety of different radiology findings can affect the success of deep learning in diagnostic radiology applications. Maximum AUCs were achieved at image resolutions between 256 × 256 and 448 × 448 pixels for binary decision networks targeting emphysema, cardiomegaly, hernias, edema, effusions, atelectasis, masses, and nodules. When comparing performance between networks that utilize lower resolution (64 × 64 pixels) versus higher (320 × 320 pixels) resolution inputs, emphysema, cardiomegaly, hernia, and pulmonary nodule detection had the highest fractional improvements in AUC at higher image resolutions. Increasing image resolution for CNN training often has a trade-off with the maximum possible batch size, yet optimal selection of image resolution has the potential for further increasing neural network performance for various radiology-based machine learning tasks. Furthermore, identifying diagnosis-specific tasks that require relatively higher image resolution can potentially provide insight into the relative difficulty of identifying different radiology findings.
  • 197. Regulatory Image Quality. Achieving CT Regulatory Compliance: A Comprehensive and Continuous Quality Improvement Approach. Matthew E. Zygmont, Rebecca Neill, Shalmali Dharmadhikari, Phuong-Anh T. Duong. Current Problems in Diagnostic Radiology, available online 12 February 2020. https://doi.org/10.1067/j.cpradiol.2020.01.013 Computed tomography (CT) represents one of the largest sources of radiation exposure to the public in the United States. Regulatory requirements now mandate dose tracking for all exams and investigation of dose events that exceed set dose thresholds. Radiology practices are tasked with ensuring quality control and optimizing patient CT exam doses while maintaining diagnostic efficacy. Meeting regulatory requirements necessitates the development of an effective quality program in CT. This review provides a template for accreditation-compliant quality control and CT dose optimization. The following paper summarizes a large health system approach for establishing a quality program in CT and discusses successes, challenges, and future needs. Protocol management was one of the most time-intensive components of our CT quality program. Central protocol management with cross-platform compatibility would allow for efficient standardization and would have great impact especially in large organizations. Modular protocol design from manufacturers is another missing piece in the optimization process. Having recursive protocol modules would greatly alleviate the burden of making parameter changes to core imaging units. For example, our routine head protocol is a standalone exam, but also exists in combination protocols for CT angiography of the head and neck, perfusion imaging, and trauma exams.
  • 199. Conditional variational autoencoder for diffeomorphic registration #1. Learning a Probabilistic Model for Diffeomorphic Registration. Julian Krebs, Hervé Delingette, Boris Mailhé, Nicholas Ayache, Tommaso Mansi. Université Côte d’Azur, Inria / Siemens Healthineers, Digital Services, Digital Technology and Innovation, Princeton, NJ, USA. IEEE Transactions on Medical Imaging (Volume 38, Issue 9, Sept. 2019) https://doi.org/10.1109/TMI.2019.2897112 Medical image registration is one of the key processing steps for biomedical image analysis such as cancer diagnosis. Recently, deep-learning-based supervised and unsupervised image registration methods have been extensively studied due to their excellent performance in spite of ultra-fast computational time compared to the classical approaches. In this paper, we present a novel unsupervised medical image registration method that trains a deep neural network for deformable registration of 3D volumes using a cycle consistency. To guarantee the topology preservation between the deformed and fixed images, we here adopt the cycle consistency constraint between the original moving image and its re-deformed image. That is, the deformed volumes are given as the inputs to the networks again by switching their order to impose the cycle consistency. This constraint ensures that the shape of deformed images successively returns to the original shape. Thanks to the cycle consistency, the proposed deep neural networks can take diverse pairs of image data with severe deformation for accurate registration. Experimental results using multiphase liver CT images demonstrate that our method provides very precise 3D image registration within a few seconds, resulting in more accurate cancer size estimation. The number of trainable parameters in the network was ~420k. The framework has been implemented in TensorFlow using Keras. Training took ~24 hours and testing a single registration case took 0.32 s on an NVIDIA GTX TITAN X GPU.
  • 200. Conditional variational autoencoder for diffeomorphic registration #2. Learning a Probabilistic Model for Diffeomorphic Registration. Julian Krebs, Hervé Delingette, Boris Mailhé, Nicholas Ayache, Tommaso Mansi. Université Côte d’Azur, Inria / Siemens Healthineers, Digital Services, Digital Technology and Innovation, Princeton, NJ, USA. IEEE Transactions on Medical Imaging (Volume 38, Issue 9, Sept. 2019) https://doi.org/10.1109/TMI.2019.2897112 - Cited by 13. 26. J. Fan, X. Cao, P.-T. Yap, D. Shen, BIRNet: Brain image registration using dual-supervised fully convolutional networks, 2018. https://arxiv.org/abs/1802.04692 27. A. V. Dalca, G. Balakrishnan, J. Guttag, M. R. Sabuncu, "Unsupervised learning for fast probabilistic diffeomorphic registration", Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., pp. 729-738, 2018. https://arxiv.org/abs/1805.04605 - See next slide → 29. Y. Hu et al., "Weakly-supervised convolutional neural networks for multimodal image registration", Med. Image Anal., vol. 49, pp. 1-13, Oct. 2018. https://arxiv.org/abs/1807.03361
  • 201. Unsupervised probabilistic + diffeomorphic tweak of VoxelMorph. Unsupervised Learning of Probabilistic Diffeomorphic Registration for Images and Surfaces. Adrian V. Dalca, Guha Balakrishnan, John Guttag, Mert R. Sabuncu (Submitted on 8 Mar 2019 (v1), last revised 23 Jul 2019 (this version, v2)) https://arxiv.org/abs/1903.03545 https://github.com/voxelmorph/voxelmorph Papers with Code: Diffeomorphic Medical Image Registration. Classical deformable registration techniques achieve impressive results and offer a rigorous theoretical treatment, but are computationally intensive since they solve an optimization problem for each image pair. Recently, learning-based methods have facilitated fast registration by learning spatial deformation functions. However, these approaches use restricted deformation models, require supervised labels, or do not guarantee a diffeomorphic (topology-preserving) registration. Furthermore, learning-based registration tools have not been derived from a probabilistic framework that can offer uncertainty estimates. In this paper, we build a connection between classical and learning-based methods. We present a probabilistic generative model and derive an unsupervised learning-based inference algorithm that uses insights from classical registration methods and makes use of recent developments in convolutional neural networks (CNNs). We demonstrate our method on a 3D brain registration task for both images and anatomical surfaces, and provide extensive empirical analyses. Our principled approach results in state-of-the-art accuracy and very fast runtimes, while providing diffeomorphic guarantees. Our algorithm can infer the registration of new image pairs in under a second. Compared to traditional methods, our approach is significantly faster, and compared to recent learning-based methods, our method offers diffeomorphic guarantees. We demonstrate that the surface extension to our model can help improve registration while preserving properties such as low runtime and diffeomorphisms. Furthermore, several conclusions shown in recent papers apply to our method. For example, when only given very limited training data, deformation from VoxelMorph can still be used as initialization to a classical method, enabling faster convergence (Balakrishnan et al., 2019).
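The diffeomorphic guarantee in this line of work comes from parameterizing the deformation as a stationary velocity field and integrating it by "scaling and squaring". A 2D numpy/scipy sketch of that integration step, illustrative only and not the VoxelMorph code:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(disp, delta):
    """Displacement of the composition phi o psi, where phi and psi have
    displacement fields `disp` and `delta`, each of shape (2, H, W)."""
    grid = np.indices(disp.shape[1:]).astype(float)
    pts = grid + delta                      # where psi sends each voxel
    warped = np.stack([map_coordinates(d, pts, order=1, mode='nearest')
                       for d in disp])      # disp sampled at psi(x)
    return warped + delta

def scaling_and_squaring(velocity, steps=6):
    """Integrate a stationary velocity field: phi = exp(v), approximated
    by phi_0 = id + v / 2**steps, then squared (self-composed) `steps`
    times. Small initial steps keep each update invertible."""
    disp = velocity / (2 ** steps)
    for _ in range(steps):
        disp = compose(disp, disp)
    return disp

# Sanity check: the flow of a constant velocity field is a pure
# translation by exactly that vector.
v = np.zeros((2, 16, 16))
v[0] += 1.5   # 1.5 voxels along axis 0
phi = scaling_and_squaring(v)
print(np.allclose(phi[0], 1.5), np.allclose(phi[1], 0.0))  # True True
```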
  • 202. Not that many annotated training samples required? Few Labeled Atlases are Necessary for Deep-Learning-Based Segmentation. Hyeon Woo Lee, Mert R. Sabuncu, Adrian V. Dalca (Submitted on 13 Aug 2019 (v1), last revised 15 Aug 2019 (this version, v3)) https://arxiv.org/abs/1908.04466 We tackle biomedical image segmentation in the scenario of only a few labeled brain MR images. This is an important and challenging task in medical applications, where manual annotations are time-consuming. Classical multi-atlas based anatomical segmentation methods use image registration to warp segments from labeled images onto a new scan. These approaches have traditionally required significant runtime, but recent learning-based registration methods promise substantial runtime improvement. In a different paradigm, supervised learning-based segmentation strategies have gained popularity. These methods have consistently used relatively large sets of labeled training data, and their behavior in the regime of a few labeled images has not been thoroughly evaluated. In this work, we provide two important results for anatomical segmentation in the scenario where few labeled images are available. First, we propose a straightforward implementation of an efficient semi-supervised learning-based registration method, which we showcase in a multi-atlas segmentation framework. Second, through a thorough empirical study, we evaluate the performance of a supervised segmentation approach, where the training images are augmented via random deformations. Surprisingly, we find that in both paradigms, accurate segmentation is generally possible even in the context of few labeled images.
  • 203. Metric learning approach for diffeomorphic transformation. Metric Learning for Image Registration. Marc Niethammer, Roland Kwitt, François-Xavier Vialard (2019) https://arxiv.org/abs/1904.09524 / CVPR 2019 https://github.com/uncbiag/registration Image registration is a key technique in medical image analysis to estimate deformations between image pairs. A good deformation model is important for high-quality estimates. However, most existing approaches use ad-hoc deformation models chosen for mathematical convenience rather than to capture observed data variation. Recent deep learning approaches learn deformation models directly from data. However, they provide limited control over the spatial regularity of transformations. Instead of learning the entire registration approach, we learn a spatially-adaptive regularizer within a registration model. This allows controlling the desired level of regularity and preserving structural properties of a registration model. For example, diffeomorphic transformations can be attained. Our approach is a radical departure from existing deep learning approaches to image registration by embedding a deep learning model in an optimization-based registration algorithm to parameterize and data-adapt the registration model itself. Much experimental and theoretical work remains. More sophisticated CNN models should be explored; the method should be adapted for fast end-to-end regression; more general parameterizations of regularizers should be studied (e.g., allowing sliding), and the approach should be developed for LDDMM.
  • 204. One/Few-shot learning for image registration as well. One Shot Learning for Deformable Medical Image Registration and Periodic Motion Tracking. Tobias Fechter, Dimos Baltas (11 Jul 2019) https://arxiv.org/abs/1907.04641 Deformable image registration is a very important field of research in medical imaging. Recently multiple deep learning approaches were published in this area showing promising results. However, drawbacks of deep learning methods are the need for a large amount of training datasets and their inability to register unseen images different from the training datasets. One-shot learning comes without the need of large training datasets and has already been proven to be applicable to 3D data. In this work we present a one-shot registration approach for periodic motion tracking in 3D and 4D datasets. When applied to a 3D dataset the algorithm calculates the inverse of a registration vector field simultaneously. For registration we employed a U-Net combined with a coarse-to-fine approach and a differentiable spatial transformer module. The algorithm was thoroughly tested with multiple 4D and 3D datasets publicly available. The results show that the presented approach is able to track periodic motion and to yield a competitive registration accuracy. Possible applications are the use as a stand-alone algorithm for 3D and 4D motion tracking or in the beginning of studies until enough datasets for a separate training phase are available.
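The coarse-to-fine idea can be shown without the network: estimate a large motion cheaply on downsampled images, then refine locally at full resolution. A toy translation-only numpy sketch with exhaustive SSD search; all function names are ours:

```python
import numpy as np

def downsample(img, f):
    """Block-mean downsampling by integer factor f (shape divisible by f)."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def best_shift(fixed, moving, search):
    """Exhaustive integer-translation search minimising SSD."""
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            err = ((fixed - np.roll(moving, (dy, dx), axis=(0, 1))) ** 2).sum()
            if err < best_err:
                best, best_err = (dy, dx), err
    return best

def coarse_to_fine(fixed, moving, factor=4):
    """Coarse level: wide search on downsampled images. Fine level: small
    search around the upscaled coarse estimate."""
    cy, cx = best_shift(downsample(fixed, factor),
                        downsample(moving, factor), search=4)
    coarse = np.roll(moving, (cy * factor, cx * factor), axis=(0, 1))
    dy, dx = best_shift(fixed, coarse, search=factor)
    return cy * factor + dy, cx * factor + dx

# Smooth synthetic image with a known circular shift of (-9, +5):
y, x = np.mgrid[0:64, 0:64]
fixed = np.exp(-((y - 32) ** 2 + (x - 28) ** 2) / 50.0)
moving = np.roll(fixed, (-9, 5), axis=(0, 1))
print(coarse_to_fine(fixed, moving))  # -> (9, -5)
```

The fine search only needs to cover one coarse cell, which is what makes the pyramid cheaper than a single full-resolution wide search.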
  • 205. Inpainting with registration. Synthesis and Inpainting-Based MR-CT Registration for Image-Guided Thermal Ablation of Liver Tumors. Dongming Wei, Sahar Ahmad, Jiayu Huo, Wen Peng, Yunhao Ge, Zhong Xue, Pew-Thian Yap, Wentao Li, Dinggang Shen, Qian Wang [Submitted on 30 Jul 2019] https://arxiv.org/abs/1907.13020 In this paper, we propose a fast MR-CT image registration method to overlay a pre-procedural MR (pMR) image onto an intra-procedural CT (iCT) image for guiding the thermal ablation of liver tumors. By first using a Cycle-GAN model with mutual information constraint to generate a synthesized CT (sCT) image from the corresponding pMR, pre-procedural MR-CT image registration is carried out through traditional mono-modality CT-CT image registration. At the intra-procedural stage, a partial-convolution-based network is first used to inpaint the probe and its artifacts in the iCT image. Then, an unsupervised registration network is used to efficiently align the pre-procedural CT (pCT) with the inpainted iCT (inpCT) image. The final transformation from pMR to iCT is obtained by combining the two estimated transformations, i.e., (1) from the pMR image space to the pCT image space (through sCT) and (2) from the pCT image space to the iCT image space (through inpCT).
  • 206. Registration with segmentation jointly. Deep Learning-Based Concurrent Brain Registration and Tumor Segmentation. Théo Estienne et al. (2020) Front. Comput. Neurosci., 20 March 2020. https://doi.org/10.3389/fncom.2020.00017 https://github.com/TheoEst/joint_registration_tumor_segmentation (Keras) In this paper, we propose a novel, efficient, and multi-task algorithm that addresses the problems of image registration and brain tumor segmentation jointly. Our method exploits the dependencies between these tasks through a natural coupling of their interdependencies during inference. In particular, the similarity constraints are relaxed within the tumor regions using an efficient and relatively simple formulation. We evaluated the performance of our formulation both quantitatively and qualitatively for registration and segmentation problems on two publicly available datasets (BraTS 2018 and OASIS 3), reporting competitive results with other recent state-of-the-art methods.
  • 207. Registration with segmentation and synthesis: JSSR: A Joint Synthesis, Segmentation, and Registration System for 3D Multi-Modal Image Alignment of Large-scale Pathological CT Scans. Fengze Liu, Jingzheng Cai, Yuankai Huo, Chi-Tung Cheng, Ashwin Raju, Dakai Jin, Jing Xiao, Alan Yuille, Le Lu, Chien Hung Liao, Adam P Harrison [Submitted on 25 May 2020] https://arxiv.org/abs/2005.12209 Multi-modal image registration is a challenging problem yet an important clinical task in many real applications and scenarios. For medical imaging based diagnosis, deformable registration among different image modalities is often required as the first step, in order to provide complementary visual information. During the registration, the semantic information is the key to matching homologous points and pixels. Nevertheless, many conventional registration methods are incapable of capturing the high-level semantic anatomical dense correspondences. In this work, we propose a novel multi-task learning system, JSSR, based on an end-to-end 3D convolutional neural network that is composed of a generator, a register and a segmentor, for the tasks of synthesis, registration and segmentation, respectively. This system is optimized to satisfy the implicit constraints between different tasks unsupervisedly. It first synthesizes the source domain images into the target domain, then an intra-modal registration is applied on the synthesized images and target images. Then we can get the semantic segmentation by applying segmentors on the synthesized images and target images, which are aligned by the same deformation field generated by the registers. The supervision from another fully-annotated dataset is used to regularize the segmentors.
  • 210. Plenty of deep learning attempts: Deep Learning for Low-Dose CT Denoising. Maryam Gholizadeh-Ansari, Javad Alirezaie, Paul Babyn (Submitted on 25 Feb 2019) https://arxiv.org/abs/1902.10127 In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolution, helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions. Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, nor from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while only minimally changing the complexity of the network.
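The "non-trainable edge detection layer" above is just a bank of fixed convolution kernels. A minimal numpy sketch, assuming Sobel-style 3x3 kernels for the horizontal, vertical, and two diagonal directions (the paper's exact filters may differ):

```python
import numpy as np

# Fixed (non-trainable) 3x3 edge kernels. Sobel-style filters are assumed
# here purely for illustration; Gholizadeh-Ansari et al. may use others.
KERNELS = {
    "horizontal": np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], float),
    "vertical":   np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float),
    "diag_main":  np.array([[-2, -1, 0], [-1, 0, 1], [0, 1, 2]], float),
    "diag_anti":  np.array([[0, -1, -2], [1, 0, -1], [2, 1, 0]], float),
}

def edge_layer(image):
    """Convolve with each fixed kernel; returns a (4, H, W) feature stack."""
    h, w = image.shape
    padded = np.pad(image, 1, mode="edge")
    out = np.zeros((len(KERNELS), h, w))
    for c, k in enumerate(KERNELS.values()):
        for i in range(h):
            for j in range(w):
                out[c, i, j] = np.sum(padded[i:i + 3, j:j + 3] * k)
    return out

# A vertical step edge excites the vertical kernel but not the horizontal one.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
features = edge_layer(img)
```

In the network, these four feature maps would simply be concatenated to the input channels; since the kernels are frozen, they add no trainable parameters.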
  • 211. Few-view CT reconstruction to reduce radiation dose: Dual Network Architecture for Few-view CT -- Trained on ImageNet Data and Transferred for Medical Imaging. Huidong Xie, Hongming Shan, Wenxiang Cong, Xiaohua Zhang, Shaohua Liu, Ruola Ning, Ge Wang (12 Sept 2019) https://arxiv.org/abs/1907.01262 Few-view CT image reconstruction is an important topic for reducing the radiation dose. Recently, data-driven algorithms have shown great potential to solve the few-view CT problem. In this paper, we develop a dual network architecture (DNA) for reconstructing images directly from sinograms. In the proposed DNA method, a point-based fully-connected layer learns the backprojection process, requiring significantly less memory than the prior arts do. This paper is not the first work on reconstructing images directly from raw data, but previously proposed methods require a significantly greater amount of GPU memory for training. It is underlined that our proposed method solves the memory issue by learning the reconstruction process with the point-wise fully-connected layer and other proper network ingredients. Also, by passing only a single point into the fully-connected layer, the proposed method can truly learn the backprojection process. In our study, the DNA network demonstrates superior performance and generalizability. In future works, we will validate the proposed method on images up to dimension 512 × 512 or even 1024 × 1024.
  • 212. Wasserstein GANs for low-dose CT denoising: Low Dose CT Image Denoising Using a Generative Adversarial Network with Wasserstein Distance and Perceptual Loss. Qingsong Yang et al. (2018) Rensselaer Polytechnic Institute, Troy, NY https://dx.doi.org/10.1109%2FTMI.2018.2827462 - Cited by 139. Over the past years, various low-dose CT methods have produced impressive results. However, most of the algorithms developed for this application, including the recently popularized deep learning techniques, aim for minimizing the mean-squared error (MSE) between a denoised CT image and the ground truth under generic penalties. Although the peak signal-to-noise ratio (PSNR) is improved, MSE- or weighted-MSE-based methods can compromise the visibility of important structural details after aggressive denoising. This paper introduces a new CT image denoising method based on the generative adversarial network (GAN) with Wasserstein distance and perceptual similarity. The Wasserstein distance is a key concept of optimal transport theory, and promises to improve the performance of GAN. The perceptual loss suppresses noise by comparing the perceptual features of a denoised output against those of the ground truth in an established feature space, while the GAN focuses more on migrating the data noise distribution from strong to weak statistically. Therefore, our proposed method transfers our knowledge of visual perception to the image denoising task and is capable of not only reducing the image noise level but also trying to keep the critical information at the same time. Promising results have been obtained in our experiments with clinical CT images. In the future, we plan to incorporate the WGAN-VGG network with more complicated generators such as the networks reported in [Chen et al. 2017, Kang et al. 2016] and extend these networks for image reconstruction from raw data by making a neural network counterpart of the FBP process.
Sinogram pre-filtration and image post-processing are computationally efficient compared to iterative reconstruction. Noise characteristics were well modeled in the sinogram domain for sinogram-domain filtration. However, sinogram data of commercial scanners are not readily available to users, and these methods may suffer from resolution loss and edge blurring. Sinogram data need to be carefully processed, otherwise artifacts may be induced in the reconstructed images. Differently from sinogram denoising, image post-processing directly operates on an image. Many efforts were made in the image domain to reduce LDCT noise and suppress artifacts. Despite the impressive denoising results with these innovative deep learning network structures, they fall into the category of end-to-end networks that typically use the mean squared error (MSE) between the network output and the ground truth as the loss function. As revealed by recent work [Johnson et al. 2016; Ledig et al. 2016], this per-pixel MSE is often associated with over-smoothed edges and loss of details. As an algorithm tries to minimize per-pixel MSE, it overlooks subtle image textures/signatures critical for human perception. It is reasonable to assume that CT images distribute over some manifold. From that point of view, the MSE-based approach tends to take the mean of high-resolution patches using the Euclidean distance rather than the geodesic distance. Therefore, in addition to the blurring effect, artifacts such as non-uniform biases are also possible. The zoomed ROI of the red rectangle in Fig. 7 demonstrates the two attenuation liver lesions in the red and blue circles. The display window is [−160, 240] HU.
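The "Euclidean mean of patches" argument can be made concrete with a toy example (mine, not from the paper): when two equally plausible sharp edges exist, the per-pixel-MSE-optimal prediction is their pixelwise mean, which is a blurred ramp belonging to neither.

```python
import numpy as np

# Two plausible sharp-edge patches (stand-ins for points on the "manifold"
# of real CT edges): identical step edges shifted by one pixel.
edge_a = np.array([0., 0., 0., 1., 1., 1.])
edge_b = np.array([0., 0., 1., 1., 1., 1.])

# The single prediction minimizing expected per-pixel MSE over this pair
# is the pixelwise (Euclidean) mean -- not a member of the set itself.
mse_optimal = (edge_a + edge_b) / 2

# The mean contains a half-intensity pixel: a blurred value that no sharp
# edge in the set actually has. This is the over-smoothing the text describes.
```

This is exactly why the WGAN/perceptual losses above compare distributions or feature activations instead of raw pixels.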
  • 213. Attention with GANs: Visual Attention Network for Low-Dose CT. Wenchao Du; Hu Chen; Peixi Liao; Hongyu Yang; Ge Wang; Yi Zhang | IEEE Signal Processing Letters (Volume: 26, Issue: 8, Aug. 2019) https://doi.org/10.1109/LSP.2019.2922851 Noise and artifacts are intrinsic to low-dose computed tomography (LDCT) data acquisition, and will significantly affect the imaging performance. Perfect noise removal and image restoration is intractable in the context of LDCT due to the statistical and technical uncertainties. In this letter, we apply the generative adversarial network (GAN) framework with a visual attention mechanism to deal with this problem in a data-driven/machine learning fashion. Our main idea is to inject visual attention knowledge into the learning process of the GAN to provide a powerful prior of the noise distribution. By doing this, both the generator and discriminator networks are empowered with visual attention information so that they will not only pay special attention to noisy regions and surrounding structures but also explicitly assess the local consistency of the recovered regions. Our experiments qualitatively and quantitatively demonstrate the effectiveness of the proposed method with clinical CT images.
  • 214. Cycle-consistent adversarial denoising for CT: Cycle-consistent adversarial denoising network for multiphase coronary CT angiography. Eunhee Kang, Hyun Jung Koo, Dong Hyun Yang, Joon Bum Seo, Jong Chul Ye. Medical Physics (2018) https://doi.org/10.1002/mp.13284 We propose an unsupervised learning technique that can remove the noise of the CT images in the low-dose phases by learning from the CT images in the routine-dose phases. Although a supervised learning approach is not applicable due to the differences in the underlying heart structure in the two phases, the images are closely related in the two phases, so we propose a cycle-consistent adversarial denoising network to learn the mapping between the low- and high-dose cardiac phases. Experimental results showed that the proposed method effectively reduces the noise in the low-dose CT image while preserving detailed texture and edge information. Moreover, thanks to the cyclic consistency and identity loss, the proposed network does not create any artificial features that are not present in the input images. Visual grading and quality evaluation also confirm that the proposed method provides significant improvement in diagnostic quality. The proposed network can learn the image distributions from the routine-dose cardiac phases, which is a big advantage over the existing supervised learning networks that need exactly matched low- and routine-dose CT images. Considering the effectiveness and practicability of the proposed method, we believe that it can be applied to many other CT acquisition protocols. Example of a multiphase coronary CTA acquisition protocol: low-dose acquisition is performed in phases 1 and 2, whereas routine-dose acquisition is performed in phases 3–10.
  • 216. Image denoising: not necessarily needing noise-free ground truth. Noise2Noise: Learning Image Restoration without Clean Data. Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren, Samuli Laine, Tero Karras, Miika Aittala, Timo Aila. NVIDIA; Aalto University; MIT CSAIL (Submitted on 12 Mar 2018) https://arxiv.org/abs/1803.04189 https://github.com/NVlabs/noise2noise We apply basic statistical reasoning to signal reconstruction by machine learning -- learning to map corrupted observations to clean signals -- with a simple and powerful conclusion: it is possible to learn to restore images by only looking at corrupted examples, at performance at, and sometimes exceeding, that of training using clean data, without explicit image priors or likelihood models of the corruption. In practice, we show that a single model learns photographic noise removal, denoising of synthetic Monte Carlo images, and reconstruction of undersampled MRI scans -- all corrupted by different processes -- based on noisy data only. That clean data is not necessary for denoising is not a new observation: indeed, consider, for instance, the classic BM3D algorithm that draws on self-similar patches within a single noisy image. We show that the previously demonstrated high restoration performance of deep neural networks can likewise be achieved entirely without clean data, all based on the same general-purpose deep convolutional model. This points the way to significant benefits in many applications by removing the need for potentially strenuous collection of clean data. Finnish Center for Artificial Intelligence FCAI, published on Nov 19, 2018: https://youtu.be/dcV0OfxjrPQ As a sanity check though, it would be nice to have some clean "multiple frame averaged" ground truths.
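The statistical reasoning behind Noise2Noise can be shown in a few lines of numpy (a toy sketch, not the authors' code): under an L2 loss and zero-mean noise, the optimal estimate from noisy targets alone converges to the clean signal, so clean targets are never needed.

```python
import numpy as np

rng = np.random.default_rng(0)

# A constant "image" value we would like to restore.
clean = 3.0

# Many corrupted observations of it (zero-mean Gaussian noise), playing the
# role of the noisy training targets in Noise2Noise.
noisy_targets = clean + rng.normal(0.0, 1.0, size=100_000)

# The L2-optimal point estimate from the noisy targets is their mean, which
# converges to the clean value as the number of targets grows -- the core
# observation that lets a denoiser train against noisy targets only.
estimate = noisy_targets.mean()
```

The same argument holds per pixel for a network trained with MSE on pairs of independently corrupted copies of the same scene.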
  • 217. [DnCNN] Beyond a Gaussian Denoiser: Residual Learning of Deep CNN for Image Denoising https://arxiv.org/abs/1608.03981 (this was introduced above already). Noise2Noise: Learning Image Restoration without Clean Data https://arxiv.org/abs/1803.04189 (this was introduced above already). For benchmarking deep learning methods, unlike previous work [Abdelhamed et al. 2018] that directly tests with the pre-trained models, we re-train these models with the same network architecture and similar hyper-parameters on the FMD dataset from scratch. Specifically, we compare two representative models, one of which requires ground truth (DnCNN) and the other does not (Noise2Noise). The benchmark results show that deep learning denoising models trained on our FMD dataset outperform other methods by a large margin across all imaging modalities and noise levels. A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images. Yide Zhang, Yinhao Zhu, Evan Nichols, Qingfei Wang, Siyuan Zhang, Cody Smith, Scott Howard. University of Notre Dame (Submitted on 26 Dec 2018 (v1), last revised 5 Apr 2019) https://arxiv.org/abs/1812.10366 - http://tinyurl.com/y6mwqcjs - https://github.com/bmmi/denoising-fluorescence
  • 218. Shape priors for ICH: you can probably forget about it? Haematoma "goes where it can" -- model it as an anomaly? But you probably want to co-segment the haematoma with some more regular shapes? Automation of CT-based haemorrhagic stroke assessment for improved clinical outcomes: study protocol and design. Betty Chinda, George Medvedev, William Siu, Martin Ester, Ali Arab, Tao Gu, Sylvain Moreno, Ryan C N D'Arcy, Xiaowei Song. BMJ Open | Neurology | Protocol http://dx.doi.org/10.1136/bmjopen-2017-020260 (2018) Haemorrhagic stroke is of significant healthcare concern due to its association with high mortality and lasting impact on the survivors' quality of life. Treatment decisions and clinical outcomes depend strongly on the size, spread and location of the haematoma. Non-contrast CT (NCCT) is the primary neuroimaging modality for haematoma assessment in haemorrhagic stroke diagnosis. Current procedures do not allow convenient NCCT-based haemorrhage volume calculation in clinical settings, while research-based approaches are yet to be tested for clinical utility; there is a demonstrated need for developing effective solutions. The project under review investigates the development of an automatic NCCT-based haematoma computation tool in support of accurate quantification of haematoma volumes. CT scans showing different shapes of haematoma. The regions of hyperintensity (bright) indicate the bleeding. The left panel shows it in an elliptical shape; the volume of the haematoma can be estimated using the ABC/2 method. The red arrow indicates the 'A' dimension, while the green arrow is the 'B' dimension. The right panel shows the haematoma in a non-elliptical (irregular) shape that has encroached into the lateral ventricles. The ABC/2 method cannot be applied to this case. An example showing haematoma with no clear bleed-parenchyma boundary, the volume of which cannot be correctly calculated using existing automation software, demonstrating the need for improved algorithms.
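The bedside ABC/2 method mentioned in the caption is simple enough to write down (a sketch; the function and variable names are mine):

```python
def abc_over_2(a_cm, b_cm, c_cm):
    """ABC/2 haematoma volume estimate (ml) for a roughly elliptical bleed.

    a_cm: greatest haemorrhage diameter on the axial slice with the largest
    bleed; b_cm: largest diameter perpendicular to A on the same slice;
    c_cm: vertical extent (number of slices showing the bleed times slice
    thickness, in cm). The bleed is assumed to be an ellipsoid, whose exact
    volume pi/6 * A*B*C is approximated by A*B*C/2 since pi/6 ~ 0.52.
    """
    return a_cm * b_cm * c_cm / 2.0

# e.g. a 6 cm x 4 cm bleed spanning 5 cm of slices -> 60 ml estimate
volume_ml = abc_over_2(6.0, 4.0, 5.0)
```

As the figure text notes, the ellipsoid assumption is exactly what breaks down for irregular bleeds or intraventricular extension, which is the argument for voxel-wise segmentation instead.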
A screenshot of the Quantomo software being used for comparison in validity testing. The top toolbar shows options for selection and estimation of the haematoma; the left toolbar shows the measurement panel where the total volume is displayed. The most accurate way of estimating the volume is by going slice by slice in 2D, which can be time consuming, whereas the 3D estimate tends to misclassify normal tissues surrounding the haematoma.
  • 219. Image restoration constrained with shape priors: Anatomically Constrained Neural Networks (ACNN): Application to Cardiac Image Enhancement and Segmentation. Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart Cook, Antonio de Marvao, Timothy Dawes, Declan O'Regan, Bernhard Kainz, Ben Glocker, and Daniel Rueckert. Biomedical Image Analysis Group, Imperial College London; MRC Clinical Sciences Centre (CSC), London (5 Dec 2017) https://arxiv.org/abs/1705.08302 - Cited by 95. Incorporation of prior knowledge about organ shape and location is key to improving the performance of image analysis approaches. In particular, priors can be useful in cases where images are corrupted and contain artefacts due to limitations in image acquisition. The highly constrained nature of anatomical objects can be well captured with learning-based techniques. However, in most recent and promising techniques such as CNN-based segmentation it is not obvious how to incorporate such prior knowledge. The new framework encourages models to follow the global anatomical properties of the underlying anatomy (e.g. shape, label structure) via learnt non-linear representations of the shape. We show that the proposed approach can be easily adapted to different analysis tasks (e.g. image enhancement, segmentation) and improve the prediction accuracy of the state-of-the-art models.
  • 220. Transformers as a way of getting the shape prior in? TETRIS: Template Transformer Networks for Image Segmentation. Matthew Chung Hai Lee, Kersten Petersen, Nick Pawlowski, Ben Glocker, Michiel Schaap. Biomedical Image Analysis Group, Imperial College London / HeartFlow. 10 Apr 2019 (modified: 11 Jun 2019) MIDL 2019 https://openreview.net/forum?id=r1lKJlSiK4 - Cited by 3. http://wp.doc.ic.ac.uk/bglocker/project/semantic-imaging/ In this paper we introduce and compare different approaches for incorporating shape prior information into neural network based image segmentation. Specifically, we introduce the concept of template transformer networks (TeTrIS), where a shape template is deformed to match the underlying structure of interest through an end-to-end trained spatial transformer network. This has the advantage of explicitly enforcing shape priors and is free of discretisation artefacts by providing a soft partial volume segmentation. We also introduce a simple yet effective way of incorporating priors in state-of-the-art pixel-wise binary classification methods such as fully convolutional networks and U-Net. Here, the template shape is given as an additional input channel; incorporating this information significantly reduces false positives. We report results on sub-voxel segmentation of coronary lumen structures in cardiac computed tomography, showing the benefit of incorporating priors in neural network based image segmentation.
  • 221. Anatomical shape prior for partially labeled segmentation: Prior-aware Neural Network for Partially-Supervised Multi-Organ Segmentation. Yuyin Zhou, Zhe Li, Song Bai, Chong Wang, Xinlei Chen, Mei Han, Elliot Fishman, Alan Yuille (Submitted on 12 Apr 2019) https://arxiv.org/abs/1904.06346 As data annotation requires massive human labor from experienced radiologists, it is common that training data are partially labeled, e.g., pancreas datasets only have the pancreas labeled while leaving the rest marked as background. However, these background labels can be misleading in multi-organ segmentation since the "background" usually contains some other organs of interest. To address the background ambiguity in these partially-labeled datasets, we propose the Prior-aware Neural Network (PaNN), which explicitly incorporates anatomical priors on abdominal organ sizes, guiding the training process with domain-specific knowledge. More specifically, PaNN assumes that the average organ size distributions in the abdomen should approximate their empirical distributions, prior statistics obtained from the fully-labeled dataset.
  • 222. Multi-task learning with shape priors: Shape-Aware Complementary-Task Learning for Multi-Organ Segmentation. Fernando Navarro, Suprosanna Shit, Ivan Ezhov, Johannes Paetzold, Andrei Gafita, Jan Peeken, Stephanie Combs, Bjoern Menze (Submitted on 14 Aug 2019) https://arxiv.org/abs/1908.05099v1 https://github.com/JunMa11/SegWithDistMap Multi-organ segmentation in whole-body computed tomography (CT) is a constant pre-processing step which finds its application in organ-specific image retrieval, radiotherapy planning, and interventional image analysis. We address this problem from an organ-specific shape-prior learning perspective. We introduce the idea of complementary-task learning to enforce shape priors leveraging the existing target labels. We propose two complementary tasks, namely i) distance map regression and ii) contour map detection, to explicitly encode the geometric properties of each organ. We evaluate the proposed solution on the public VISCERAL dataset containing CT scans of multiple organs.
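The appeal of these two complementary tasks is that both targets are manufactured from labels that already exist. A minimal sketch using scipy (assumed available; the loss terms and network heads are omitted):

```python
import numpy as np
from scipy import ndimage

def complementary_targets(mask):
    """Derive the two auxiliary regression/detection targets from an
    existing binary organ label: (i) the Euclidean distance map inside the
    organ and (ii) a one-pixel contour map of its boundary."""
    mask = mask.astype(bool)
    # i) distance from every foreground pixel to the nearest background pixel
    dist = ndimage.distance_transform_edt(mask)
    # ii) contour = foreground pixels removed by a single erosion step
    eroded = ndimage.binary_erosion(mask)
    contour = mask & ~eroded
    return dist, contour

mask = np.zeros((7, 7), bool)
mask[2:5, 2:5] = True                  # a 3x3 square "organ"
dist, contour = complementary_targets(mask)
```

Regressing `dist` and detecting `contour` alongside the usual per-voxel labels is what encodes the organ geometry into the shared network features.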
  • 223. Flagging problematic volumes/slices, like with clinical referrals? An Alarm System for Segmentation Algorithm Based on Shape Model. Fengze Liu, Yingda Xia, Dong Yang, Alan Yuille, Daguang Xu (Submitted on 26 Mar 2019) https://arxiv.org/abs/1903.10645 We build an alarm system that will set off alerts when the segmentation result is possibly unsatisfactory, assuming no corresponding ground truth mask is provided. One plausible solution is to project the segmentation results into a low-dimensional feature space, then learn classifiers/regressors to predict their qualities. Motivated by this, in this paper we learn a feature space using the shape information, which is a strong prior shared among different datasets and robust to the appearance variation of the input data. The shape feature is captured using a Variational Auto-Encoder (VAE) network trained with only the ground truth masks. During testing, segmentation results with bad shapes will not fit the shape prior well, resulting in large loss values. Thus, the VAE is able to evaluate the quality of a segmentation result on unseen data, without using ground truth. Finally, we learn a regressor in the one-dimensional feature space to predict the qualities of segmentation results. Our alarm system is evaluated on several recent state-of-the-art segmentation algorithms for 3D medical segmentation tasks. Visualization on an NIH CT dataset for pancreas segmentation: the Dice between GT and prediction is 47.06 (real Dice), while the Dice between the prediction and its reconstruction from the VAE is 47.25 (fake Dice). Our method uses the fake Dice to predict the former real Dice, which is usually unknown at the inference phase of real applications. This case shows how these two Dice scores are related to each other. In contrast, the uncertainty used in existing approaches mainly distributes on the boundary of the predicted mask, which makes it vague information when detecting failure cases.
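The "fake Dice" proxy is computed exactly like ordinary Dice, just between the prediction and its shape-model reconstruction instead of ground truth. A sketch with hypothetical masks (in the real pipeline `recon` would come from the VAE trained on ground-truth masks only):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    """Dice overlap between two binary masks."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * (a & b).sum() / (a.sum() + b.sum() + eps)

# Hypothetical stand-ins for the real pipeline:
pred = np.zeros((10, 10), bool)
pred[2:8, 2:8] = True                   # network's segmentation output
recon = np.zeros((10, 10), bool)
recon[3:8, 2:8] = True                  # shape-VAE reconstruction of pred

# "Fake Dice" -- available at inference without any ground truth -- is the
# quantity used as a proxy/feature for predicting the unknown real Dice.
fake_dice = dice(pred, recon)
```

A prediction with an implausible shape reconstructs poorly, so its fake Dice drops and the alarm fires.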
  • 224. Image restoration jointly with segmentation and automatic labelling? CT Image Enhancement Using Stacked Generative Adversarial Networks and Transfer Learning for Lesion Segmentation Improvement. Youbao Tang, Jinzheng Cai, Le Lu, Adam P. Harrison, Ke Yan, Jing Xiao, Lin Yang, Ronald M. Summers (Submitted on 18 Jul 2018) https://arxiv.org/abs/1807.07144 Automated lesion segmentation from computed tomography (CT) is an important and challenging task in medical image analysis. While many advancements have been made, there is room for continued improvements. One hurdle is that CT images can exhibit high noise and low contrast, particularly in lower dosages. To address this, we focus on a preprocessing method for CT images that uses a stacked generative adversarial network (SGAN) approach. The first GAN reduces the noise in the CT image and the second GAN generates a higher-resolution image with enhanced boundaries and high contrast. To make up for the absence of high-quality CT images, we detail how to synthesize a large number of low- and high-quality natural images and use transfer learning with progressively larger amounts of CT images. Figure: three examples of CT image enhancement results using different methods (INPUT, BM3D, DnCNN, SingleGAN, our denoising GAN, our SGAN) on original images.
  • 225. Joint deep denoising and segmentation: DenoiSeg: Joint Denoising and Segmentation. Tim-Oliver Buchholz, Mangal Prakash, Alexander Krull, Florian Jug [Submitted on 6 May 2020] https://arxiv.org/abs/2005.02987 https://github.com/juglab/DenoiSeg Tensorflow https://pypi.org/project/denoiseg/ Here we propose DenoiSeg, a new method that can be trained end-to-end on only a few annotated ground truth segmentations. We achieve this by extending Noise2Void, a self-supervised denoising scheme that can be trained on noisy images alone, to also predict dense 3-class segmentations. We reason that the success of our proposed method originates from the fact that similar "skills" are required for denoising and segmentation. The network becomes a denoising expert by seeing all available raw data, while co-learning to segment, even if only a few segmentation labels are available. This hypothesis is additionally fueled by our observation that the best segmentation results on high quality (very low noise) raw data are obtained when moderate amounts of synthetic noise are added.
  • 226. Or even without the segmentation target? Segmentation-Aware Image Denoising without Knowing True Segmentation. Sicheng Wang, Bihan Wen, Junru Wu, Dacheng Tao, Zhangyang Wang (Submitted on 22 May 2019) https://arxiv.org/abs/1905.08965 Several recent works discussed application-driven image restoration neural networks, which are capable of not only removing noise in images but also preserving their semantic-aware details, making them suitable for various high-level computer vision tasks as the pre-processing step. However, such approaches require extra annotations for their high-level vision tasks, in order to train the joint pipeline using hybrid losses. The availability of those annotations is yet often limited to a few image sets, potentially restricting the general applicability of these methods to denoising more unseen and unannotated images. Motivated by that, we propose a segmentation-aware image denoising model dubbed U-SAID, based on a novel unsupervised approach with a pixel-wise uncertainty loss. U-SAID does not need any ground-truth segmentation map, and thus can be applied to any image dataset. It generates denoised images with comparable or even better quality, and the denoised results show stronger robustness for subsequent semantic segmentation tasks, when compared to either its supervised counterpart or classical "application-agnostic" denoisers. Moreover, we demonstrate the superior generalizability of U-SAID in three folds, by plugging in its "universal" denoiser without fine-tuning: (1) denoising unseen types of images; (2) denoising as pre-processing for segmenting unseen noisy images; and (3) denoising for unseen high-level tasks.
  • 228. Image deblurring for CT: bone "leaks" into surrounding tissue. Weighted deblurring for bone? Maybe intuitively easier to sharpen the bone/brain interfaces? In other words, both your image and your labels are probabilistic distributions, with point estimates describing the underlying reality at some accuracy. Strictly speaking you cannot really assume that pixels/voxels are independent measurements of their "receptive field": the real-world PSF "smears" the signal. http://doi.org/10.1155/2015/450341 PET quantification: strategies for partial volume correction. V. Bettinardi, I. Castiglioni, E. De Bernardi & M. C. Gilardi. Clinical and Translational Imaging, volume 2, pages 199–218 (2014) https://doi.org/10.1007/s40336-014-0066-y https://doi.org/10.1109/NSSMIC.2011.6153678 "Partial-volume effect and a partial-volume correction for the NanoPET/CT™ preclinical PET/CT scanner". Diagram of the partial volume effect: (A) pixel computed tomography (CT) value with a thick slice; (B) pixel CT value with a thin slice. The partial volume effect can be defined as the loss of apparent activity in small objects or regions because of the limited resolution of the imaging system. https://doi.org/10.3341/jkos.2016.57.11.1671 http://doi.org/10.2967/jnumed.106.035576 deconvolving with PSF
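The PSF "smearing" and the resulting partial volume effect are easy to reproduce in one dimension (a toy numpy sketch; the Gaussian PSF and the values are illustrative assumptions):

```python
import numpy as np

def gaussian_psf(sigma, radius=4):
    """Normalized 1-D Gaussian kernel standing in for the system PSF."""
    x = np.arange(-radius, radius + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    return k / k.sum()

# Toy scan line: a 2-pixel "small lesion" of 100 HU on a zero background.
signal = np.zeros(31)
signal[15:17] = 100.0

# Blurring with the PSF spreads the signal into its neighbours: the measured
# peak under-estimates the true value (partial volume effect), while the
# total signal is conserved. Deconvolving with the (known) PSF is the
# corresponding correction strategy.
blurred = np.convolve(signal, gaussian_psf(sigma=1.5), mode="same")
```

The smaller the object relative to the PSF width, the larger the peak loss, which is why the effect matters for thin bone interfaces and small bleeds.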
  • 229. CT super-resolution with U-Net: Computed tomography super-resolution using deep convolutional neural network. Junyoung Park et al. (2018) https://doi.org/10.1088/1361-6560/aacdd4 The objective of this study is to develop a convolutional neural network (CNN) for computed tomography (CT) image super-resolution. The network learns an end-to-end mapping between low (thick slice thickness) and high (thin slice thickness) resolution images using a modified U-Net. To verify the proposed method, we train and test the CNN using axially averaged data of existing thin-slice CT images as input and their middle slice as the label. The extraction and expansion paths of the network with a large receptive field effectively captured the high-resolution features. Although this work mainly focused on resolution improvement, the Z-axis averaging plus super-resolution strategy was also useful for reducing noise.
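The training-pair construction described above (axially averaged thin slices as input, the middle thin slice as label) is straightforward to sketch in numpy (function name and shapes are mine):

```python
import numpy as np

def make_sr_pairs(thin_stack, n=3):
    """Build (input, label) training pairs from a thin-slice CT stack, in
    the spirit of Park et al.: each input is the axial average of `n`
    consecutive thin slices (simulating one thick slice) and the label is
    the middle thin slice of that group."""
    inputs, labels = [], []
    for z in range(thin_stack.shape[0] - n + 1):
        group = thin_stack[z:z + n]
        inputs.append(group.mean(axis=0))   # simulated thick slice
        labels.append(group[n // 2])        # true middle thin slice
    return np.stack(inputs), np.stack(labels)

rng = np.random.default_rng(1)
stack = rng.random((10, 4, 4))              # 10 thin slices, 4x4 each
x, y = make_sr_pairs(stack, n=3)
```

No extra acquisition is needed: any archive of thin-slice scans yields paired data for the thick-to-thin mapping.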
  • 230. Not too many CT super-resolution networks: CT Super-resolution GAN Constrained by the Identical, Residual, and Cycle Learning Ensemble (GAN-CIRCLE). Chenyu You, Guang Li, Yi Zhang, Xiaoliu Zhang, Hongming Shan, Shenghong Ju, Zhen Zhao, Zhuiyang Zhang, Wenxiang Cong, Michael W. Vannier, Punam K. Saha, Ge Wang (Submitted on 10 Aug 2018) https://arxiv.org/abs/1808.04256 In this paper, we present a semi-supervised deep learning approach to accurately recover high-resolution (HR) CT images from low-resolution (LR) counterparts. Specifically, with the generative adversarial network (GAN) as the building block, we enforce the cycle-consistency in terms of the Wasserstein distance to establish a nonlinear end-to-end mapping from noisy LR input images to denoised and deblurred HR outputs. We also include joint constraints in the loss function to facilitate structural preservation. To make further progress, we may also undertake efforts to add more constraints such as sinogram consistency and the low-dimensional manifold constraint to decipher the relationship between noise, blurry appearances of images and the ground truth, and even develop an adaptive and/or task-specific loss function.
  • 231. Synthetic X-ray: A Deep Learning-Based Scatter Correction of Simulated X-ray Images. Heesin Lee and Joonwhoan Lee (2019) https://doi.org/10.3390/electronics8090944 X-ray scattering significantly limits image quality. Conventional strategies for scatter reduction based on physical equipment or measurements inevitably increase the dose to improve the image quality. In addition, scatter reduction based on a computational algorithm can take a large amount of time. We propose a deep learning-based scatter correction method, which adopts a convolutional neural network (CNN) for restoration of degraded images. Because it is hard to obtain real data from an X-ray imaging system for training the network, Monte Carlo (MC) simulation was performed to generate the training data. For simulating X-ray images of a human chest, a cone beam CT (CBCT) was designed and modeled as an example. Then, pairs of simulated images, which correspond to scattered and scatter-free images, respectively, were obtained from the model with different doses. The scatter components, calculated by taking the differences of the pairs, were used as targets to train the weight parameters of the CNN.
  • 232. Image deblurring for CT with GANs? Three-dimensional blind image deconvolution for fluorescence microscopy using generative adversarial networks. Soonam Lee, Shuo Han, Paul Salama, Kenneth W. Dunn, Edward J. Delp. Purdue University / Indiana University (submitted 19 Apr 2019) https://arxiv.org/abs/1904.09974 Due to image blurring, image deconvolution is often used for studying biological structures in fluorescence microscopy. Fluorescence microscopy image volumes inherently suffer from intensity inhomogeneity and blur, and are corrupted by various types of noise which exacerbate image quality at deeper tissue depth. Therefore, quantitative analysis of fluorescence microscopy in deeper tissue still remains a challenge. This paper presents a three-dimensional blind image deconvolution method for fluorescence microscopy using 3-way spatially constrained cycle-consistent adversarial networks (CycleGAN). The restored volumes of the proposed deconvolution method and other well-known deconvolution methods, denoising methods, and an inhomogeneity correction method are visually and numerically evaluated. Using the 3-way SpCycleGAN, we can successfully restore the blurred and noisy volume to a good-quality volume so that deeper volumes can be used for biological research. Future work will include developing a 3D segmentation technique using our proposed deconvolution method as a preprocessing step.
  • 233. A lot of ideas to steal from (optical) microscopy: A new deep learning method for image deblurring in optical microscopic systems. Huangxuan Zhao et al. (2019) http://doi.org/10.1002/jbio.201960147 In this paper, we present a deep-learning-based deblurring method that is fast and applicable to optical microscopic imaging systems. We tested the robustness of the proposed deblurring method on publicly available data, simulated data and experimental data (including 2D optical microscopic data and 3D photoacoustic microscopic data), which all showed much improved deblurred results compared to deconvolution. We compared our results against several existing deconvolution methods. In addition, our method could also replace traditional deconvolution algorithms and become an algorithm of choice in various biomedical imaging systems.
  • 234. CycleGANs for deblurring can be done for unpaired data: CycleGAN with a Blur Kernel for Deconvolution Microscopy: Optimal Transport Geometry. Sungjun Lim et al. (2019) https://arxiv.org/abs/1908.09414 In this paper, we present a novel unsupervised cycle-consistent generative adversarial network (cycleGAN) with a linear blur kernel, which can be used for both blind and non-blind image deconvolution. In contrast to conventional cycleGAN approaches that require two generators, the proposed cycleGAN approach needs only a single generator, which significantly improves the robustness of network training. We show that the proposed architecture is indeed a dual formulation of an optimal transport problem that uses a special form of penalized least squares as the transport cost. Experimental results using simulated and real experimental data confirm the efficacy of the algorithm.
  • 235. Inspiration from natural images: LSD2 – Joint Denoising and Deblurring of Short and Long Exposure Images with Convolutional Neural Networks. Janne Mustaniemi, Juho Kannala, Jiri Matas, Simo Särkkä, Janne Heikkilä (23 Nov 2018) https://arxiv.org/abs/1811.09485 The paper addresses the problem of acquiring high-quality photographs with handheld smartphone cameras in low-light imaging conditions. We propose an approach based on capturing pairs of short and long exposure images in rapid succession and fusing them into a single high-quality photograph. Unlike existing methods, we take advantage of both images simultaneously and perform joint denoising and deblurring using a convolutional neural network. The network is trained using a combination of real and simulated data. To that end, we introduce a novel approach for generating realistic short-long exposure image pairs. The evaluation shows that the method produces good images in extremely challenging conditions and outperforms existing denoising and deblurring methods. Furthermore, it enables exposure fusion even in the presence of motion blur.
  • 236. Deblurring: a plug-and-play framework for existing networks. Deep Plug-and-Play Super-Resolution for Arbitrary Blur Kernels. Kai Zhang, Wangmeng Zuo, Lei Zhang (submitted 29 Mar 2019) https://arxiv.org/abs/1903.12529 https://github.com/cszn/DPSR (PyTorch) While deep neural network (DNN) based single image super-resolution (SISR) methods are rapidly gaining popularity, they are mainly designed for the widely used bicubic degradation, and there still remains the fundamental challenge for them to super-resolve low-resolution (LR) images with arbitrary blur kernels. Meanwhile, plug-and-play image restoration has been recognized for its high flexibility due to its modular structure, which allows easy plug-in of denoiser priors. In this paper, we propose a principled formulation and framework (DPSR) by extending bicubic-degradation-based deep SISR with the help of the plug-and-play framework to handle LR images with arbitrary blur kernels. Specifically, we design a new SISR degradation model so as to take advantage of existing blind deblurring methods for blur kernel estimation. To optimize the new degradation-induced energy function, we then derive a plug-and-play algorithm via the variable splitting technique, which allows us to plug in any super-resolver prior, rather than the denoiser prior, as a modular part.
  • 238. Image smoothing while keeping edges: edge-aware smoothing → In theory, image restoration tries to restore the "original image" under the degradation. In contrast, edge-preserving smoothing can be seen as a simplifying enhancement technique that made "old school" algorithms perform better, e.g. Liis Lindvere et al. (2013): "Prior to segmentation, the data were subjected to edge-preserving 3D anisotropic diffusion filtering" (Perona and Malik, 1990, cited by 13,940). Popular algorithms include anisotropic diffusion, the bilateral and trilateral filters, the guided filter and the L0 gradient minimization filter. Quick-and-dirty Matlab test with three methods on non-denoised input: here the bilateral filter does not actually preserve the edges, and the guide (the input image itself) makes the smoothing take place in the background. Image smoothing via L0 gradient minimization https://doi.org/10.1145/2024156.2024208 (cited by 872) https://youtu.be/jliea54nNFM?t=119 Deep Texture and Structure Aware Filtering Network for Image Smoothing. Kaiyue Lu, Shaodi You, Nick Barnes; The European Conference on Computer Vision (ECCV), 2018, pp. 217-233 http://openaccess.thecvf.com/content_ECCV_2018/html/Kaiyue_Lu_Deep_Texture_and_ECCV_2018_paper.html
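Of the edge-preserving filters listed above, Perona-Malik anisotropic diffusion is compact enough to sketch directly. A minimal numpy version (periodic borders via np.roll for brevity; the original formulation replicates borders): diffusion is attenuated where the local gradient magnitude exceeds kappa, so noise is smoothed away while strong edges survive.

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=30.0, dt=0.2):
    """Edge-preserving smoothing (Perona & Malik, 1990), 2D sketch.
    Conduction g(d) = exp(-(d/kappa)^2) is ~1 for small gradients
    (noise gets diffused) and ~0 across strong edges (edges kept).
    dt <= 0.25 keeps the explicit 4-neighbour scheme stable."""
    u = img.astype(np.float64).copy()
    for _ in range(n_iter):
        # nearest-neighbour differences (periodic borders via roll)
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        # per-direction conduction coefficients
        cn = np.exp(-(dn / kappa) ** 2)
        cs = np.exp(-(ds / kappa) ** 2)
        ce = np.exp(-(de / kappa) ** 2)
        cw = np.exp(-(dw / kappa) ** 2)
        u += dt * (cn * dn + cs * ds + ce * de + cw * dw)
    return u
```

On a noisy step image the flat regions are denoised while the step itself stays sharp, which is exactly the behaviour the "quick and dirty Matlab test" above probes.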
  • 239. Image smoothing: is texture bias possible for vasculature? Not likely? ImageNet-trained CNNs are biased towards texture; increasing shape bias improves accuracy and robustness. Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A. Wichmann, Wieland Brendel (submitted 29 Nov 2018) https://arxiv.org/abs/1811.12231 Some recent studies suggest a more important role of image textures. We here put these conflicting hypotheses to a quantitative test by evaluating CNNs and human observers on images with a texture-shape cue conflict. We show that ImageNet-trained CNNs are strongly biased towards recognising textures rather than shapes, which is in stark contrast to human behavioural evidence and reveals fundamentally different classification strategies. Figure annotation: INPUT → DENOISED & EDGE-AWARE SMOOTHING (this should be easier to segment, given that no significant data was thrown away, hence the end-to-end constraint of the image restoration block; as a side effect the user could obtain a denoised version for visualization) → RESIDUAL NOISE & "TEXTURE" (misc. artifacts).
  • 240. Image smoothing: is texture bias possible for vasculature? Or could there be? Is Texture Predictive for Age and Sex in Brain MRI? Nick Pawlowski, Ben Glocker. Biomedical Image Analysis Group, Imperial College London, UK. 15 Apr 2019 (modified 11 Jun 2019), MIDL 2019 Conference https://arxiv.org/abs/1811.12231 Deep learning builds the foundation for many medical image analysis tasks where neural networks are often designed to have a large receptive field to incorporate long spatial dependencies. Recent work has shown that large receptive fields are not always necessary for computer vision tasks on natural images. Recently introduced BagNets (Brendel and Bethge, 2019) have shown that on natural images, neural networks can perform complex classification tasks by only interpreting texture information rather than global structure. BagNets interpret a neural network as a bag-of-features classifier that is composed of a localised feature extractor and a classifier that acts on the average bag-encoding. We explore whether this translates to certain medical imaging tasks such as age and sex prediction from T1-weighted brain MRI scans. We have generalised the concept of BagNets to the setting of 3D images and general regression tasks. We have shown that a BagNet with a receptive field of (9 mm)^3 yields surprisingly accurate predictions of age and sex from T1-weighted MRI scans. However, we find that localised predictions of age and sex do not yield easily interpretable insights into the workings of the neural network, which will be the subject of future work. Further, we believe that more accurate localised predictions could lead to advanced clinical insights similar to (Becker et al., 2018; Cole et al., 2018).
  • 241. Image smoothing with image restoration? In theory, an additional "deep intermediate target" could help the final segmentation result, as you want your network "to pop out" the vasculature, without the texture, from the background. In practice, think of how to either obtain the intermediate target in such a way that you do not throw any details away (see Xu et al. 2015), or employ a Noise2Noise-type network for edge-aware smoothing as well. Also check the use of bilateral kernels in deep learning (see e.g. Barron and Poole 2015; Jampani et al. 2016; Gharbi et al. 2017; Su et al. 2019). The proposal of Su et al. 2019 seems like a good starting point if you are into making this happen. Figure: RAW → after IMAGE RESTORATION → after edge-aware IMAGE SMOOTHING.
  • 242. Unsupervised image smoothing for "ground truth"? Image smoothing via unsupervised learning. Qingnan Fan, Jiaolong Yang, David Wipf, Baoquan Chen, Xin Tong. Shandong University, Beijing Film Academy; Microsoft Research Asia; Peking University (submitted 7 Nov 2018) https://arxiv.org/abs/1811.02804 | https://github.com/fqnchina/ImageSmoothing (cited by 4). In this paper, we present a unified unsupervised (label-free) learning framework that facilitates generating flexible and high-quality smoothing effects by directly learning from data using deep convolutional neural networks (CNNs). The heart of the design is the training signal, a novel energy function that includes an edge-preserving regularizer, which helps maintain important yet potentially vulnerable image structures, and a spatially-adaptive Lp flattening criterion, which imposes different forms of regularization onto different image regions for better smoothing quality. We implement a diverse set of image smoothing solutions employing the unified framework, targeting various applications such as image abstraction, pencil sketching, detail enhancement, texture removal and content-aware image manipulation, and obtain results comparable with or better than previous methods. We have also shown that training a deep neural network on a large corpus of raw images without ground-truth labels can adequately solve the underlying minimization problem and generate impressive results. Moreover, the end-to-end mapping from a single input image to its corresponding smoothed counterpart by the neural network can be computed efficiently on both GPU and CPU, and the experiments have shown that our method runs orders of magnitude faster than traditional methods. We foresee a wide range of applications that can benefit from our new pipeline.
Figure: elimination of low-amplitude details while maintaining high-contrast edges using our method and the representative traditional methods L0 and SGF. L0 regularization has a strong flattening effect. However, the side effect is that some spurious edges arise in local regions with smooth gradations, such as those on the cloud. SGF is dedicated to elimination of fine-scale high-contrast details while preserving large-scale salient structures. However, semantically meaningful information, such as the architecture and flagpole, can be over-smoothed. In contrast, our result exhibits a more appropriate, targeted balance between color flattening and salient edge preservation. We also demonstrate the binary edge map B detected by our heuristic detection method, which shows consistent image structure with our style image. Note that binary edge maps are only used in the objective function for training; they are not used in the test stage and are presented here only for comparison purposes.
  • 244. More papers published on MRI normalization, but some also for CT: Normalization of multicenter CT radiomics by a generative adversarial network method. Yajun Li, Guoqiang Han, Xiaomei Wu, Zhenhui Li, Ke Zhao, Zhiping Zhang, Zaiyi Liu and Changhong Liang. Physics in Medicine & Biology (25 March 2020) https://doi.org/10.1088/1361-6560/ab8319 Aim: to reduce the variability of radiomics features caused by computed tomography (CT) imaging protocols by using a generative adversarial network (GAN) method. Material and methods: in this study, we defined a set of images acquired with a certain imaging protocol as a domain, and a total of 4 domains (A, B, C, and T [target]) from 3 different scanners were included. Finally, to investigate whether our proposed method could facilitate multicenter radiomics analysis, we built a lasso classifier to distinguish short-term from long-term survivors based on a certain group. Our proposed GAN-based normalization method could reduce the variability of radiomics features caused by different CT imaging protocols and facilitate multicenter radiomics analysis.
  • 246. Probabilistic segmentation from 2005: Unified segmentation. John Ashburner and Karl J. Friston. NeuroImage, Volume 26, Issue 3, 1 July 2005, pages 839-851 https://doi.org/10.1016/j.neuroimage.2005.02.018 A probabilistic framework is presented that enables image registration, tissue classification, and bias correction to be combined within the same generative model. A derivation of a log-likelihood objective function for the unified model is provided. The model is based on a mixture of Gaussians and is extended to incorporate a smooth intensity variation and nonlinear registration with tissue probability maps. A strategy for optimising the model parameters is described, along with the requisite partial derivatives of the objective function. The hierarchical modelling scheme could be extended in order to generate tissue probability maps and other priors using data from many subjects. This would involve a very large model, whereby many images of different subjects are simultaneously processed within the same hierarchical framework. Strategies for creating average (in both shape and intensity) brain atlases are currently being devised (Ashburner et al., 2000; Avants and Gee, 2004; Joshi et al., 2004). Such approaches could be refined in order to produce average-shaped tissue probability maps and other data for use as priors.
Figures: the tissue probability maps for grey matter, white matter, CSF, and "other". Results from applying the method to the BrainWeb data: the first column shows the tissue probability maps for grey and white matter; the first row of columns two, three, and four shows the 100% RF BrainWeb T1, T2, and PD images after they are warped to match the tissue probability maps (by inverting the spatial transform); below the warped BrainWeb images are the corresponding segmented grey and white matter. A further figure shows the underlying generative model for the BrainWeb simulated T1, T2, and PD images with 100% intensity nonuniformity: the BrainWeb images are shown on the left, and the right-hand column shows data simulated using the estimated generative model parameters for the corresponding BrainWeb images. Our current implementation uses a low-dimensional approach, which parameterises the deformations by a linear combination of about a thousand cosine transform bases (Ashburner and Friston, 1999). This is not an especially precise way of encoding deformations, but it can model the variability of overall brain shape. Evaluations have shown that this simple model can achieve a registration accuracy comparable to other fully automated methods with many more parameters (Hellier et al., 2001; Hellier et al., 2002).
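The mixture-of-Gaussians core of unified segmentation, stripped of the tissue priors, nonlinear registration and bias field, is plain EM on voxel intensities. A 1-D numpy sketch (the quantile-based initialisation is an arbitrary choice for the demo, not the paper's scheme):

```python
import numpy as np

def gmm_em_1d(x, n_comp=3, n_iter=50):
    """EM for a 1-D Gaussian mixture: the intensity model at the core
    of unified segmentation (the full model additionally warps tissue
    probability maps and estimates a smooth bias field)."""
    x = np.asarray(x, dtype=np.float64)
    # spread initial means across the intensity range via quantiles
    mu = np.quantile(x, (np.arange(n_comp) + 0.5) / n_comp)
    var = np.full(n_comp, x.var())
    pi = np.full(n_comp, 1.0 / n_comp)
    for _ in range(n_iter):
        # E-step: posterior responsibility of each component per voxel
        diff = x[:, None] - mu[None, :]
        log_p = (-0.5 * (diff ** 2 / var + np.log(2 * np.pi * var))
                 + np.log(pi))
        log_p -= log_p.max(axis=1, keepdims=True)  # numerical stability
        resp = np.exp(log_p)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances
        nk = resp.sum(axis=0)
        pi = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        var = (resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi
```

With three well-separated intensity clusters (think CSF, grey matter, white matter), the estimated means converge to the cluster centres; the responsibilities `resp` are the soft tissue segmentation.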
  • 247. Follow-up with Ashburner et al. (2018) #1: Generative diffeomorphic modelling of large MRI data sets for probabilistic template construction. Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner. NeuroImage, Volume 166, 1 February 2018, pages 117-134 https://doi.org/10.1016/j.neuroimage.2017.10.060 One of the main challenges, encountered in all neuroimaging studies, originates from the difficulty of mapping between different anatomical shapes. In particular, a fundamental problem arises from having to ensure that this mapping operation preserves topological properties and that it provides not only anatomical but also functional overlap between distinct instances of the same anatomical object (Brett et al., 2002). This explains the rapid development of the discipline known as computational anatomy (Grenander and Miller, 1998), which aims to provide mathematically sound tools and algorithmic solutions to model high-dimensional anatomical shapes, with the ultimate goal of encoding, or accounting for, their variability. In this paper we propose a general modelling scheme and a training algorithm which, given a large cross-sectional data set of MR scans, can learn a set of average-shaped tissue probability maps, either in an unsupervised or semi-supervised manner. This is achieved by building a hierarchical generative model of MR data, where image intensities are captured using multivariate Gaussian mixture models, after diffeomorphic warping (Ashburner and Friston, 2011; Joshi et al., 2004) of a set of unknown probabilistic templates, which act as anatomical priors. In addition, intensity inhomogeneity artefacts are explicitly represented in our model, meaning that the input data does not need to be bias corrected prior to model fitting. ● We present a generative modelling framework to process large MRI data sets. ● The proposed framework can serve to learn average-shaped tissue probability maps and empirical intensity priors. ● We explore semi-supervised learning and variational inference schemes. ● The method is validated against state-of-the-art tools using publicly available data. To the best of our knowledge, the particular mathematical formulation that we adopt to combine such modelling techniques has never been adopted before. The resulting approach enables processing a large number of MR scans simultaneously in a groupwise fashion; in particular, it allows the tasks of image segmentation, image registration, bias correction and atlas construction to be solved by optimising a single objective function with one iterative algorithm. This is in contrast to a commonly adopted approach to mathematical modelling, which involves a pipeline of multiple model fitting strategies that solve sub-problems sequentially, without taking into account their circular dependencies.
  • 248. Follow-up with Ashburner et al. (2018) #2: Generative diffeomorphic modelling of large MRI data sets for probabilistic template construction. Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner. NeuroImage, Volume 166, 1 February 2018, pages 117-134 https://doi.org/10.1016/j.neuroimage.2017.10.060 OASIS data set: the first data set consists of thirty-five T1-weighted MR scans from the OASIS (Open Access Series of Imaging Studies) database (Marcus et al., 2007). The data is freely available from the web site http://www.oasis-brains.org, where details on the population demographics and acquisition protocols are also reported. Additionally, the selected thirty-five subjects are the same ones that were used within the 2012 MICCAI Multi-Atlas Labeling Challenge (Landman and Warfield, 2012). Balgrist data set: the second data set consists of brain and cervical cord scans of twenty healthy adults, acquired at University Hospital Balgrist with a 3T scanner (Siemens Magnetom Verio). Magnetisation-prepared rapid acquisition gradient echo (MPRAGE) sequences, at 1 mm isotropic resolution, were used to obtain T1-weighted data, while PD-weighted images of the same subjects were acquired with a multi-echo 3D fast low-angle shot (FLASH) sequence, within a whole-brain multi-parameter mapping protocol (Weiskopf et al., 2013; Helms et al., 2008). IXI data set: the third and last data set comprises twenty-five T1-, T2- and PD-weighted scans of healthy adults from the freely available IXI brain database, which were acquired at Guy's Hospital, in London, on a 1.5T system (Philips Medical Systems Gyroscan Intera). Additional information regarding the demographics of the population, as well as the acquisition protocols, can be found at http://brain-development.org/ixi-dataset.
Tissue probability maps obtained by applying the presented groupwise generative model to a multispectral data set comprising head and neck scans of eighty healthy adults, from three different databases.
  • 249. Follow-up with Ashburner et al. (2018) #3: Generative diffeomorphic modelling of large MRI data sets for probabilistic template construction. Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner. NeuroImage, Volume 166, 1 February 2018, pages 117-134 https://doi.org/10.1016/j.neuroimage.2017.10.060 The accuracy of the algorithm presented here is compared to that achieved by the groupwise image registration method described in Avants et al. (2010), whose implementation is publicly available as part of the Advanced Normalisation Tools (ANTs) package, through the web site http://stnava.github.io/ANTs/. Indeed, the symmetric diffeomorphic registration framework implemented in ANTs has established itself as the state of the art of medical image nonlinear spatial normalisation (Klein et al., 2009). Figure: brain segmentation accuracy of the presented method in comparison to the SPM12 image segmentation algorithm. Boxplots indicate the distributions of Dice score coefficients, with overlaid scatter plots of the estimated scores; red stars denote outliers. Modelling unseen data: further validation experiments were performed to quantify the accuracy of the framework described in this paper to model unseen data, that is to say, data that was not included in the atlas generation process. In particular, we evaluated registration accuracy using data from the Internet Brain Segmentation Repository (IBSR), which is provided by the Centre for Morphometric Analysis at Massachusetts General Hospital (http://www.cma.mgh.harvard.edu/ibsr/). Experiments to assess bias correction and segmentation accuracy were instead performed on synthetic T1-weighted brain MR scans from the BrainWeb database (http://brainweb.bic.mni.mcgill.ca/), which were simulated using a healthy anatomical model under different noise and bias conditions. Figure: Dice scores between the estimated and ground-truth segmentations for brain white matter and brain gray matter, under different noise and bias conditions, for synthetic T1-weighted data.
  • 250. CTSeg as a head CT pipeline from Duke University: A Method to Estimate Brain Volume from Head CT Images and Application to Detect Brain Atrophy in Alzheimer Disease. V. Adduru, S.A. Baum, C. Zhang, M. Helguera, R. Zand, M. Lichtenstein, C.J. Griessenauer and A.M. Michael. American Journal of Neuroradiology, February 2020, 41 (2) 224-230; DOI: https://doi.org/10.3174/ajnr.A6402 https://github.com/NuroAI/CTSeg We present an automated head CT segmentation method (CTseg) to estimate total brain volume and total intracranial volume. CTseg adapts a widely used brain MR imaging segmentation method from the Statistical Parametric Mapping toolbox, using a CT-based template for initial registration. CTseg was tested and validated using head CT images from a clinical archive. In current clinical practice, brain atrophy is assessed by inaccurate and subjective "eyeballing" of CT images, while manual segmentation of head CT images is prohibitively arduous and time-consuming. CTseg can potentially help clinicians to automatically measure total brain volume and to detect and track atrophy in neurodegenerative diseases. In addition, CTseg can be applied to large clinical archives for a variety of research studies. Figure: the CTSeg pipeline for intracranial space and brain parenchyma segmentation from head CT images; within parentheses is the 3D coordinate space of the image. MNI indicates Montreal Neurological Institute.
  • 252. Badly written review, but it probably lists the relevant papers: Automatic Neuroimage Processing and Analysis in Stroke: A Systematic Review. Roger M. Sarmento et al. (2019). IEEE Reviews in Biomedical Engineering (23 August 2019) https://doi.org/10.1109/RBME.2019.2934500 There are some points that require greater attention, such as low sensitivity, optimization of the algorithms, reduction of false positives, and improvement of the identification and segmentation of lesions of different sizes and shapes. There is also a need to improve the classification of different stroke types and subtypes. Another important challenge to overcome is the lack of studies aimed at identifying and classifying stroke in its subtypes: intracerebral hemorrhage, subarachnoid hemorrhage, and brain ischemia due to thrombosis, embolism, or systemic hypoperfusion. There is also no record of work focusing on the detection and segmentation of the penumbra zone, a region that presents a high probability of recovery if identified and medicated quickly and correctly. Moreover, transient ischemic attack (TIA) does not receive the focus it merits from researchers. Although it is a transient and reversible alteration, it can be a warning sign of an imminent ischemic stroke. In many cases doctors are not able to distinguish a stroke from a TIA before the symptoms appear. Neuroimaging such as CT and MRI is not made for this type of accident, but there is a type of MRI, called diffusion-weighted imaging (DWI), which can show areas of brain tissue that are not working and thus help to diagnose TIA. A potential research direction would be the localization of the TIA, the affected area and the severity of the accident.
  • 253. DeepSymNet: Combining symmetric and standard deep convolutional representations for detecting brain hemorrhage. Arko Barman, Victor Lopez-Rivera, Songmi Lee, Farhaan S. Vahidy, James Z. Fan, Sean I. Savitz, Sunil A. Sheth, Luca Giancardo (16 March 2020) https://doi.org/10.1117/12.2549384 https://doi.org/10.3389/fnins.2019.01053 https://www.uth.edu/news/story.htm?id=5b8f2ad1-e3dd-4ad0-aca3-c845d7364953 We compare and contrast symmetry-aware and symmetry-naive feature representations, and their combination, for the detection of brain hemorrhage (BH) using CT imaging. One of the proposed architectures, e-DeepSymNet, achieves AUC 0.99 [0.97-1.00] for BH detection. An analysis of the activation values shows that the symmetry-aware and symmetry-naive representations offer complementary information, with the symmetry-aware representation contributing 20% towards the final predictions.
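The intuition behind a symmetry-aware representation can be shown without any network: compare a midline-aligned axial slice with its left-right mirror, so that a unilateral hyperdensity such as a bleed stands out against the roughly symmetric healthy background. A toy numpy sketch (purely illustrative; DeepSymNet learns this comparison inside the network rather than hard-coding it, and real scans first need midline alignment):

```python
import numpy as np

def asymmetry_map(axial_slice):
    """Absolute difference between a slice and its left-right mirror
    (axis 1 is assumed to cross the midline). A unilateral hyperdensity
    appears in the map at its own location and at the mirrored one."""
    img = np.asarray(axial_slice, dtype=np.float64)
    return np.abs(img - img[:, ::-1])

# A symmetric slice yields an all-zero map; adding a one-sided
# "hyperdensity" lights up the map at the lesion and its mirror.
slice_sym = np.zeros((8, 8))
print(asymmetry_map(slice_sym).max())  # 0.0
slice_sym[2, 1] = 50.0                 # hypothetical unilateral bleed
amap = asymmetry_map(slice_sym)
print(amap[2, 1], amap[2, 6])          # 50.0 50.0
```

Bilateral (mirror-symmetric) pathology is invisible to such a feature, which is one reason the paper finds the symmetry-naive stream still contributes most of the predictive signal.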
  • 254. Qure validation dataset available from The Lancet paper: Deep learning algorithms for detection of critical findings in head CT scans: a retrospective study. Sasank Chilamkurthy, Rohit Ghosh, Swetha Tanamala, Mustafa Biviji, Norbert G. Campeau, Vasantha Kumar Venugopal, Vidur Mahajan, Pooja Rao, Prashant Warier. The Lancet, Volume 392, Issue 10162, 1–7 December 2018, pages 2388-2396 https://doi.org/10.1016/S0140-6736(18)31645-3 We retrospectively collected a dataset containing 313,318 head CT scans together with their clinical reports from around 20 centres in India between Jan 1, 2011, and June 1, 2017. We describe the development and validation of fully automated deep learning algorithms that are trained to detect abnormalities requiring urgent attention on non-contrast head CT scans. The trained algorithms detect five types of intracranial haemorrhage (namely, intraparenchymal, intraventricular, subdural, extradural, and subarachnoid) and calvarial (cranial vault) fractures. The algorithms also detect mass effect and midline shift, both used as indicators of severity of the brain injury. The algorithms produced good results for normal scans without bleed, scans with medium to large sized intraparenchymal and extra-axial haemorrhages, haemorrhages with fractures, and in predicting midline shift. There was room for improvement for small intraparenchymal and intraventricular haemorrhages and for haemorrhages close to the skull base. In this study, we did not separate chronic and acute haemorrhages. This approach resulted in occasional prediction of scans with infarcts and prominent cerebrospinal fluid spaces as intracranial haemorrhages. However, the false positive rates of the algorithms should not impede their usability as a triaging tool.
  • 255. Deep learning for ICH segmentation: Precise diagnosis of intracranial hemorrhage and subtypes using a three-dimensional joint convolutional and recurrent neural network. Hai Ye, Feng Gao, Youbing Yin, Danfeng Guo, Pengfei Zhao, Yi Lu, Xin Wang, Junjie Bai, Kunlin Cao, Qi Song, Heye Zhang, Wei Chen, Xuejun Guo, Jun Xia. European Radiology (2019) 29:6191–6201 https://doi.org/10.1007/s00330-019-06163-2 It took our algorithm less than 30 s on average to process a 3D CT scan. For the two-type classification task (predicting bleeding or not), our algorithm achieved excellent values (≥ 0.98) across all reporting metrics on the subject level. The proposed method was able to accurately detect ICH and its subtypes with fast speed, suggesting its potential for assisting radiologists and physicians in their clinical diagnosis workflow.
  • 256. Deep learning for ICH segmentation: review of studies. Intracranial Hemorrhage Segmentation Using Deep Convolutional Model (18 Oct 2019) https://arxiv.org/pdf/1910.08643.pdf
  • 257. Deep Learning for ICH Segmentation. Intracranial Hemorrhage Segmentation Using Deep Convolutional Model. Murtadha D. Hssayeni, Muayad S. Croock, Aymen Al-Ani, Hassan Falah Al-khafaji, Zakaria A. Yahya, and Behnaz Ghoraani (18 Oct 2019) https://arxiv.org/pdf/1910.08643.pdf https://alpha.physionet.org/content/ct-ich/1.0.0/ We developed a deep FCN, called U-Net, to segment the ICH regions from the CT scans in a fully automated manner. The method achieved a Dice coefficient of 0.31 for the ICH segmentation based on 5-fold cross-validation. Data description: the dataset is released in JPG (and soon NIfTI) format at PhysioNet (http://alpha.physionet.org/content/ct-ich/1.0.0/). A dataset of 82 CT scans was collected, including 36 scans for patients diagnosed with intracranial hemorrhage of the following types: intraventricular, intraparenchymal, subarachnoid, epidural and subdural. Each CT scan includes about 30 slices with 5 mm slice thickness. The mean and std of the patients' age were 27.8 and 19.5, respectively; 46 of the patients were males and 36 were females. Each slice of the non-contrast CT scans was read by two radiologists, who recorded the hemorrhage type if a hemorrhage occurred, and whether a fracture occurred. The radiologists also delineated the ICH regions in each slice, with consensus between the radiologists. The radiologists did not have access to the clinical history of the patients, and used a down-sampled version of the CT scans. During data collection, syngo by Siemens Medical Solutions was first used to read the CT DICOM files and save two videos (AVI format) using the brain and bone windows, respectively. Second, a custom tool was implemented in MATLAB to read the AVI files, record the radiologist annotations, and delineate each hemorrhage region, saved as a white region in a black 650x650 image (JPG format). Gray-scale 650x650 images (JPG format) for each CT slice were also saved for both windows (brain and bone).
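The Dice coefficient reported above (0.31 for the U-Net of Hssayeni et al.) is the standard overlap metric for segmentation; a minimal NumPy sketch on toy masks (not the paper's data or code):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))

# Toy example: two half-overlapping square "hemorrhage" masks on a 650x650 slice
a = np.zeros((650, 650), dtype=bool); a[100:200, 100:200] = True
b = np.zeros((650, 650), dtype=bool); b[150:250, 100:200] = True
print(round(dice_coefficient(a, b), 3))  # half overlap -> 0.5
```

A Dice of 1.0 means perfect overlap, 0.0 no overlap; 0.31 on small, irregular bleeds reflects how hard ICH boundaries are.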
  • 258. Kaggle Challenges, eventually for all types of data. RSNA Intracranial Hemorrhage Detection: identify acute intracranial hemorrhage and its subtypes. $25,000 prize money, Radiological Society of North America. https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/data petteriTeikari/RSNA_kaggle_CT_wrangle https://www.kaggle.com/anjum48/reconstructing-3d-volumes-from-metadata
  • 259. Kaggle Challenge: how the data was annotated. Construction of a Machine Learning Dataset through Collaboration: The RSNA 2019 Brain CT Hemorrhage Challenge. Adam E. Flanders, Luciano M. Prevedello, George Shih, Safwan S. Halabi, Jayashree Kalpathy-Cramer, Robyn Ball, John T. Mongan, Anouk Stein, Felipe C. Kitamura, Matthew P. Lungren, Gagandeep Choudhary, Lesley Cala, Luiz Coelho, Monique Mogensen, Fanny Morón, Elka Miller, Ichiro Ikuta, Vahe Zohrabian, Olivia McDonnell, Christie Lincoln, Lubdha Shah, David Joyner, Amit Agarwal, Ryan K. Lee, Jaya Nath, for the RSNA-ASNR 2019 Brain Hemorrhage CT Annotators. https://doi.org/10.1148/ryai.2020190211 The amount of volunteer labor required to compile, curate, and annotate a large complex dataset of this type was substantial. The work commitment from the volunteer force was set at no more than 10 hours of aggregate effort per annotator, recognizing that there would be a wide range in performance per individual. An examination could be accurately reviewed and labeled in a minute or less. On the basis of these estimates, it was projected that the 60 annotators could potentially evaluate and effectively label 36,000 examinations at a rate of one per minute for a maximum of 10 hours of effort. This provided a buffer of 11,000 potential annotations. Even though the use case was limited to hemorrhage labels alone, it took thousands of radiologist-hours to produce a final working dataset in the stipulated time period. To optimally mitigate against misclassification in the training data, the training, validation, and test datasets should have employed multiple reviewers. The size of the final dataset and the narrow time frame to deliver it prohibited multiple evaluations for all of the available examinations. The auditing mechanism employed for training new annotators showed that the most common error produced was under-labeling of data, namely tagging an entire examination with a single image label. Raising awareness of this error early in the process, before the annotators began working on the actual data, helped to reduce its frequency and improve the consistency of the single evaluations. As this is a public dataset, it is available for further enhancement and use, including the possibility of adding multiple readers for all studies, performance of detailed segmentations, federated learning on the separate datasets, and evaluation of the examinations for disease entities beyond hemorrhage.
  • 260. Kaggle Challenge competition entry example. Intracranial Hemorrhage Classification using CNN. Hyun Joo Lee, Department of Mechanical Engineering, Stanford University (CS230, Fall 2019) http://cs230.stanford.edu/projects_fall_2019/reports/26248009.pdf In this study, multi-class classification is conducted to diagnose intracranial hemorrhage and its five subtypes: intraparenchymal, intraventricular, subarachnoid, subdural, epidural. Transfer learning is applied based on ResNet-50, and linear windowing is compared with sigmoid windowing in its performance. Due to the high imbalance in the number of examples available, an undersampling approach was taken to provide a better balanced training dataset. As a result, the combination of sigmoid windowing and combining three windows of interest showed the highest F1 score.
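The linear vs. sigmoid windowing compared in this entry can be sketched as below (plain NumPy; the brain window of center 40 HU / width 80 HU is a common radiology convention, but the slope factor 4.0 in the sigmoid is an illustrative assumption, not taken from the report):

```python
import numpy as np

def linear_window(hu, center, width):
    """Clip Hounsfield units to [center - width/2, center + width/2], rescale to [0, 1]."""
    lo, hi = center - width / 2.0, center + width / 2.0
    return np.clip((hu - lo) / (hi - lo), 0.0, 1.0)

def sigmoid_window(hu, center, width):
    """Smooth alternative: a logistic curve centred on the window (slope 4.0 assumed)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (hu - center) / width))

# Brain window (center=40, width=80) applied to a few representative HU values:
# air, water, acute blood-ish, upper window edge, dense bone
hu = np.array([-1000.0, 0.0, 40.0, 80.0, 1000.0])
print(linear_window(hu, 40, 80))
print(sigmoid_window(hu, 40, 80))
```

The sigmoid avoids the hard saturation at the window edges, which is one plausible reason it helped the F1 score when stacking three windows into the network input channels.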
  • 261. Small datasets get detailed annotations. Expert-level detection of acute intracranial hemorrhage on head computed tomography using deep learning. Weicheng Kuo, Christian Häne, Pratik Mukherjee, Jitendra Malik, and Esther L. Yuh. PNAS, October 21, 2019. https://doi.org/10.1073/pnas.1908021116 We trained a fully convolutional neural network with 4,396 head CT scans performed at the University of California at San Francisco and affiliated hospitals and compared the algorithm's performance to that of 4 American Board of Radiology (ABR) certified radiologists on an independent test set of 200 randomly selected head CT scans. https://www.ucsf.edu/news/2019/10/415681/ai-rivals-expert-radiologists-detecting-brain-hemorrhages But the training images used by the researchers were packed with information, because each small abnormality was manually delineated at the pixel level. The richness of this data, along with other steps that prevented the model from misinterpreting random variations or "noise" as meaningful, created an extremely accurate algorithm. The deep learning algorithm recognizes abnormal CT scans of the head in neurological emergencies in 1 second. The algorithm also classifies the pathological subtype of each abnormality: red = subarachnoid hemorrhage, purple = contusion, green = subdural hemorrhage. Five cases were judged negative by at least 2 of 4 radiologists, but positive for acute hemorrhage by both the algorithm and the gold standard.
  • 262. 3D CNNs for segmentation. 3D Deep Neural Network Segmentation of Intracerebral Hemorrhage: Development and Validation for Clinical Trials. Matthew Sharrock, W. Andrew Mould, Hasan Ali, Meghan Hildreth, Daniel F. Hanley, John Muschelli. https://www.medrxiv.org/content/10.1101/2020.03.05.20031823v1 https://github.com/msharrock/deepbleed Using an automated pipeline and 2D and 3D deep neural networks, we show that we can quickly and accurately estimate ICH volume with high agreement with time-consuming manual segmentation. The training and validation datasets include significant heterogeneity in terms of pathology, such as the presence of intraventricular (IVH) or subdural hemorrhages (SDH), as well as variable image acquisition parameters. We show that deep neural networks trained with an appropriate anatomic context in the network receptive field can effectively perform ICH segmentation, but those without enough context will overestimate hemorrhage along the skull and around calcifications in the ventricular system. The natural history of ICH includes intraventricular extension of blood, particularly for hemorrhages close to the ventricles, and the success of segmentation in this context has not previously been accounted for in segmentation studies based on either MRI or CT. This is a clear example of the need to understand the natural history of the underlying neuropathology as well as to account for the variability in acquisition when developing models for the clinical context, tasks that are frequently overlooked. This is especially so in the realm of DNNs, where models with millions of parameters can be finely tuned to aspects of a curated dataset from a single institution that are not applicable externally. In our view, when decisions regarding potential therapeutic intervention are to be made, they should be informed by metrics and models validated in a prospective clinical trial on multicenter data designed with a full understanding of the underlying pathology.
  • 263. Standard U-Net with Dense CRF. ICHNet: Intracerebral Hemorrhage (ICH) Segmentation Using Deep Learning. Mobarakol Islam (NUS Graduate School for Integrative Sciences and Engineering (NGS), National University of Singapore), Parita Sanghani, Angela An Qi See, Michael Lucas James, Nicolas Kon Kam King, Hongliang Ren. International MICCAI Brainlesion Workshop, BrainLes 2018: Brainlesion: Glioma, Multiple Sclerosis, Stroke and Traumatic Brain Injuries. https://doi.org/10.1007/978-3-030-11723-8_46 ICHNet evolves by integrating a dilated convolutional neural network (CNN) with hypercolumn features, where a modest number of pixels are sampled and the corresponding features from multiple layers are concatenated. Due to the freedom of sampling pixels rather than image patches, this model trains within the brain region and ignores the CT background padding. This boosts the convergence time and accuracy by learning only healthy and defected brain tissues. To overcome the class imbalance problem, we sample an equal number of pixels from each class. We also incorporate a 3D conditional random field (3D CRF, deepmedic/dense3dCrf) to smoothen the predicted segmentation as a post-processing step. ICHNet demonstrates 87.6% Dice accuracy in hemorrhage segmentation, which is comparable to radiologists.
  • 264. "Sharper boundary" tweaks also for ICH. Ψ-Net: Focusing on the border areas of intracerebral hemorrhage on CT images. Zhuo Kuang, Xianbo Deng, Li Yu, Hongkui Wang, Tiansong Li, Shengwei Wang. Computer Methods and Programs in Biomedicine (available online 14 May 2020) https://doi.org/10.1016/j.cmpb.2020.105546 Highlights: ● A CNN-based architecture is proposed for ICH segmentation on CT images. It consists of a novel model, named Ψ-Net, and a multi-level training strategy. ● With the help of two attention blocks, firstly, Ψ-Net can suppress irrelevant information, and secondly, Ψ-Net can capture the spatial contextual information to fine-tune the border areas of the ICH. ● The multi-level training strategy includes two levels of tasks: classification of the whole slice and pixel-wise segmentation. This structure speeds up the rate of convergence and alleviates the vanishing gradient and class imbalance problems. ● Compared to previous works on ICH segmentation, our method takes less time for training and obtains more accurate and robust performance. See also the multi-task "Dice + Hausdorff" papers, e.g. Caliva et al. 2019, Karimi et al. 2019.
  • 265. TBI segmentation is very similar to ICH segmentation. Multiclass semantic segmentation and quantification of traumatic brain injury lesions on head CT using deep learning: an algorithm development and multicentre validation study. Miguel Monteiro*, Virginia F. J. Newcombe*, Francois Mathieu, Krishma Adatia, Konstantinos Kamnitsas, Enzo Ferrante, Tilak Das, Daniel Whitehouse, Daniel Rueckert, David K. Menon†, Ben Glocker. Funding: European Union 7th Framework Programme, Hannelore Kohl Stiftung, OneMind, NeuroTrauma Sciences, Integra Neurosciences, European Research Council Horizon 2020. Lancet Digital Health 2020. https://doi.org/10.1016/S2589-7500(20)30085-6 CT is the most common imaging modality in traumatic brain injury (TBI). However, its conventional use requires expert clinical interpretation and does not provide detailed quantitative outputs, which may have prognostic importance. We aimed to use deep learning to reliably and efficiently quantify and detect different lesion types. We show the ability of a CNN to separately segment, quantify, and detect multiclass haemorrhagic lesions and perilesional oedema. These volumetric lesion estimates allow clinically relevant quantification of lesion burden and progression, with potential applications for personalised treatment strategies and clinical research in TBI. Future work needs to focus on the optimal incorporation of such algorithms into clinical practice, which must be accompanied by a rigorous assessment of performance, strengths, and weaknesses. Such algorithms will find clear research applications and, if adequately validated, may be used to help facilitate radiology workflows by flagging scans that require urgent attention, aid reporting in resource-constrained environments, and detect pathoanatomically relevant features for prognostication and a better understanding of lesion progression.
  • 266. Perihematomal edema segmentation. Fully Automated Segmentation Algorithm for Perihematomal Edema Volumetry After Spontaneous Intracerebral Hemorrhage. Natasha Ironside, Ching-Jen Chen, Simukayi Mutasa, Justin L. Sim, Dale Ding, Saurabh Marfatiah, David Roh, Sugoto Mukherjee, Karen C. Johnston, Andrew M. Southerland, Stephan A. Mayer, Angela Lignelli, Edward Sander Connolly. 2 Feb 2020, Stroke. 2020;51:815–823. https://doi.org/10.1161/STROKEAHA.119.026764 Perihematomal edema (PHE) is a promising surrogate marker of secondary brain injury in patients with spontaneous intracerebral hemorrhage, but it can be challenging to accurately and rapidly quantify. The aims of this study are to derive and internally validate a fully automated segmentation algorithm for volumetric analysis of PHE. Inpatient computed tomography scans of 400 consecutive adults with spontaneous, supratentorial intracerebral hemorrhage enrolled in the Intracerebral Hemorrhage Outcomes Project (2009–2018) were separated into training (n=360) and test (n=40) datasets. The fully automated segmentation algorithm accurately quantified PHE volumes from computed tomography scans of supratentorial intracerebral hemorrhage patients with high fidelity and greater efficiency compared with manual and semiautomated segmentation methods. External validation of fully automated segmentation for assessment of PHE is warranted. Examples of perihematomal edema (PHE) segmentation in the test dataset: column A shows the input axial, noncontrast computed tomography slice; column B the corresponding manual PHE segmentation (blue line); column C the corresponding semi-automated PHE segmentation (red line); column D the corresponding fully automated PHE segmentation (green line).
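Once a segmentation mask exists, volumetry of the kind reported in these studies reduces to counting segmented voxels and multiplying by the voxel volume. A minimal sketch (the spacing values are hypothetical, not taken from any of the cited papers):

```python
import numpy as np

def lesion_volume_ml(mask: np.ndarray, spacing_mm=(5.0, 0.45, 0.45)) -> float:
    """Volume of a binary segmentation in millilitres.

    spacing_mm = (slice thickness, row spacing, column spacing) in mm
    (hypothetical values); 1 mL = 1000 mm^3.
    """
    voxel_mm3 = float(np.prod(spacing_mm))
    return mask.astype(bool).sum() * voxel_mm3 / 1000.0

# Toy 30-slice scan with a small synthetic hematoma:
# 4 slices x 40 x 40 in-plane voxels
mask = np.zeros((30, 512, 512), dtype=bool)
mask[10:14, 200:240, 200:240] = True
print(round(lesion_volume_ml(mask), 2))
```

In real pipelines the spacing comes from the DICOM/NIfTI header per scan, which is exactly the "variable acquisition parameters" issue the DeepBleed paper raises.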
  • 267. In the end: an end-to-end system for the upstream restoration/segmentation with downstream tasks such as prognosis and prescriptive treatment. In practice, there are not a lot of end-to-end networks even for prognosis, probably due to the lack of such open-sourced datasets.
  • 269. Prognosis models are mostly outside the scope of this presentation, but here is a small teaser for "the actual" analysis of the imaging features together with non-imaging features.
  • 270. Best to look for inspiration from the modeling of other pathologies, as not much exists specifically on ICH. A Wide and Deep Neural Network for Survival Analysis from Anatomical Shape and Tabular Clinical Data. Sebastian Pölsterl, Ignacio Sarasua, Benjamín Gutiérrez-Becker, and Christian Wachinger (9 Sept 2019) https://arxiv.org/abs/1909.03890 Feature-Guided Deep Radiomics for Glioblastoma Patient Survival Prediction. Zeina A. Shboul, Mahbubul Alam, Lasitha Vidyaratne, Linmin Pei, Mohamed I. Elbakary and Khan M. Iftekharuddin. Front. Neurosci., 20 September 2019. https://doi.org/10.3389/fnins.2019.00966 Deep learning survival analysis enhances the value of hybrid PET/CT for long-term cardiovascular event prediction. L. E. Juarez-Orozco, J. W. Benjamins, T. Maaniitty, A. Saraste, P. Van Der Harst, J. Knuuti. European Heart Journal, Volume 40, Issue Supplement_1, October 2019, ehz748.0177. https://doi.org/10.1093/eurheartj/ehz748.0177 Deep Recurrent Survival Analysis. Kan Ren et al. (2019) https://doi.org/10.1609/aaai.v33i01.33014798 Use of radiomics for the prediction of local control of brain metastases after stereotactic radiosurgery. https://doi.org/10.1093/neuonc/noaa007 (20 January 2020) by Andrei Mouraviev et al. https://towardsdatascience.com/deep-learning-for-survival-analysis-fdd1505293c9
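For intuition on what these deep survival models extend: the classic non-parametric Kaplan-Meier estimator, which handles the right-censoring that makes survival data different from ordinary regression targets, fits in a few lines of plain Python (toy data, purely illustrative):

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimator.

    times: follow-up times; events: 1 = event observed, 0 = right-censored.
    Returns (event_times, survival_probabilities) at each distinct event time.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    times = [times[i] for i in order]
    events = [events[i] for i in order]
    n_at_risk = len(times)
    s, out_t, out_s = 1.0, [], []
    i = 0
    while i < len(times):
        t = times[i]
        d = n_removed = 0
        while i < len(times) and times[i] == t:  # group tied times
            d += events[i]
            n_removed += 1
            i += 1
        if d > 0:  # survival only drops at observed events, not at censorings
            s *= 1.0 - d / n_at_risk
            out_t.append(t)
            out_s.append(s)
        n_at_risk -= n_removed
    return out_t, out_s

# Five toy patients: events at t=2 and t=5, censored at t=3, 6, 8
t, s = kaplan_meier([2, 3, 5, 6, 8], [1, 0, 1, 0, 0])
print(t, s)
```

Deep survival models such as DeepSurv-style networks replace the population-level curve with covariate-conditional risk, but the censoring bookkeeping is the same.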
  • 271. Prescriptive models are mostly outside the scope of this presentation: how to treat the patient based on the features measured from the patient, i.e. "precision medicine".
  • 272. Reinforcement learning and control models. Is Deep Reinforcement Learning Ready for Practical Applications in Healthcare? A Sensitivity Analysis of Duel-DDQN for Sepsis Treatment. MingYu Lu, Zachary Shahn, Daby Sow, Finale Doshi-Velez, Li-wei H. Lehman. MIT; IBM Research, NYC; Harvard University. [Submitted on 8 May 2020] https://arxiv.org/abs/2005.04301 In this work, we perform a sensitivity analysis on a state-of-the-art RL algorithm (Dueling Double Deep Q-Networks) applied to hemodynamic stabilization treatment strategies for septic patients in the ICU. ● Treatment history: excluding treatment history leads to aggressive treatment policies. ● Time bin durations: longer time bins result in more aggressive policies. ● Rewards: long-term objectives lead to more aggressive and less stable policies. ● Embedding model: high sensitivity to architecture. ● Random restarts: DRL policies have many local optima. ● Subgroup analysis: grouping by Sequential Organ Failure Assessment (SOFA) score finds DQN agents are underaggressive in high-risk patients and overaggressive in low-risk patients. https://photos.app.goo.gl/pptobiD22E9osiWf6 Finale Doshi-Velez @ NeurIPS Machine Learning for Health 2018 (ML4H), Associate Professor of Computer Science, Harvard Paulson School of Engineering and Applied Sciences (SEAS). Deep Reinforcement Learning in Medicine. Anders Jonsson. Kidney Dis 2019;5:18–22. https://doi.org/10.1159/000492670 Deep Reinforcement Learning and Simulation as a Path Toward Precision Medicine. Brenden K. Petersen, Jiachen Yang, Will S. Grathwohl, Chase Cockrell, Claudio Santiago, Gary An, and Daniel M. Faissol. 6 Jun 2019. https://doi.org/10.1089/cmb.2018.0168 Deep Reinforcement Learning for Dynamic Treatment Regimes on Medical Registry Data. Ying Liu, Brent Logan, Ning Liu, Zhiyuan Xu, Jian Tang, and Yanzhi Wang. Healthc Inform. 2017 Aug;2017:380–385. doi: 10.1109/ICHI.2017.45
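None of the Duel-DDQN machinery above fits on a slide, but the underlying idea — learning a treatment policy from a reward signal — can be illustrated with tabular Q-learning on an entirely hypothetical three-state "severity" MDP (the states, actions, rewards, and transitions below are invented for illustration, not clinical):

```python
import random

# Hypothetical toy treatment MDP: states 0 = stable, 1 = deteriorating,
# 2 = critical; actions 0 = conservative, 1 = aggressive. By construction,
# aggressive treatment helps sicker patients but harms stable ones.
def step(state, action):
    if state == 2:
        return (1, 1.0) if action == 1 else (2, -1.0)
    if state == 1:
        return (0, 1.0) if action == 1 else (1, -0.2)
    return (0, 0.2) if action == 0 else (1, -1.0)  # stable: aggressive care hurts

def q_learning(episodes=5000, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    rng = random.Random(seed)
    q = [[0.0, 0.0] for _ in range(3)]  # Q-table: 3 states x 2 actions
    for _ in range(episodes):
        s = rng.randrange(3)
        for _ in range(10):  # short treatment horizon
            # epsilon-greedy action selection
            a = rng.randrange(2) if rng.random() < eps else max((0, 1), key=lambda x: q[s][x])
            s2, r = step(s, a)
            # standard Q-learning temporal-difference update
            q[s][a] += alpha * (r + gamma * max(q[s2]) - q[s][a])
            s = s2
    return q

q = q_learning()
policy = [max((0, 1), key=lambda a: q[s][a]) for s in range(3)]
print(policy)  # learned: conservative when stable, aggressive otherwise
```

The paper's point is exactly that the deep, partially observed version of this is brittle: the learned policy shifts with time binning, reward horizon, and state embedding, which a toy table hides.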
  • 273. Dynamic treatment recommendation with unclear targets. Supervised Reinforcement Learning with Recurrent Neural Network for Dynamic Treatment Recommendation. Lu Wang, Wei Zhang, Xiaofeng He, Hongyuan Zha. KDD '18: Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining. https://doi.org/10.1145/3219819.3219961 The data-driven research on treatment recommendation involves two main branches: supervised learning (SL) and reinforcement learning (RL) for prescription. SL-based prescription tries to minimize the difference between the recommended prescriptions and the indicator signal which denotes doctor prescriptions. Several pattern-based methods generate recommendations by utilizing the similarity of patients [Hu et al. 2016, Sun et al. 2016], but they are challenging to directly learn the relation between patients and medications. Recently, some deep models achieve significant improvements by learning a nonlinear mapping from multiple diseases to multiple drug categories [Bajor and Lasko 2017, Wang et al. 2018, Wang et al. 2017]. Unfortunately, a key concern for these SL-based models still remains unresolved, i.e., the ground truth of a "good" treatment strategy being unclear in the medical literature [Marik 2015]. More importantly, the original goal of clinical decision-making also considers the outcome of patients instead of only matching the indicator signal. The above issues can be addressed by reinforcement learning for dynamic treatment regimes (DTR) [Murphy 2003, Robins 1986]. A DTR is a sequence of tailored treatments according to the dynamic states of patients, which conforms to clinical practice. As a real example shown in Figure 1, treatments for the patient vary dynamically over time with the accruing observations. The optimal DTR is determined by maximizing the evaluation signal which indicates the long-term outcome of patients, due to the delayed effect of the current treatment and the influence of future treatment choices [Chakraborty and Moodie 2013]. With the desired properties of dealing with delayed rewards and inferring the optimal policy based on non-optimal prescription behaviors, a set of reinforcement learning methods have been adapted to generate optimal DTRs for life-threatening diseases, such as schizophrenia, non-small cell lung cancer, and sepsis [e.g. Nemati et al. 2016]. Recently, some studies employ deep RL to solve the DTR problem based on large-scale EHRs [Peng et al. 2019, Raghu et al. 2017, Weng et al. 2016]. Nevertheless, these methods may recommend treatments that are obviously different from doctors' prescriptions due to the lack of supervision from doctors, which may cause high risk [Shen et al. 2013] in clinical practice. In addition, the existing methods are challenging for analyzing multiple diseases and the complex medication space. In fact, the evaluation signal and indicator signal play complementary roles, where the indicator signal gives a basic effectiveness and the evaluation signal helps optimize the policy. Imitation learning (e.g. Finn et al. 2016) utilizes the indicator signal to estimate a reward function for training robots by supposing the indicator signal is optimal, which is not in line with the clinical reality. Supervised actor-critic (e.g. Zhu et al. 2017) uses the indicator signal to pre-train a "guardian" and then combines the "actor" output and "guardian" output to send low-risk actions for robots. However, the two types of signals are trained separately and cannot learn from each other. Inspired by these studies, we propose a novel deep architecture to generate recommendations for more general DTR involving multiple diseases and medications, called Supervised Reinforcement Learning with Recurrent Neural Network (SRL-RNN). The main novelty of SRL-RNN is to combine the evaluation signal and indicator signal at the same time to learn an integrated policy. More specifically, SRL-RNN consists of an off-policy actor-critic framework to learn complex relations among medications, diseases, and individual characteristics. The "actor" in the framework is not only influenced by the evaluation signal like traditional RL but also adjusted by the doctors' behaviors to ensure safe actions. An RNN is further adopted to capture the dependence of the longitudinal and temporal records of patients for the POMDP problem. Note that treatment and prescription are used interchangeably in this paper.
  • 274. Precision Medicine as a Control Problem. Precision medicine as a control problem: Using simulation and deep reinforcement learning to discover adaptive, personalized multi-cytokine therapy for sepsis. Brenden K. Petersen, Jiachen Yang, Will S. Grathwohl, Chase Cockrell, Claudio Santiago, Gary An, Daniel M. Faissol (submitted on 8 Feb 2018) https://arxiv.org/abs/1802.10440 - Cited by 8 - Related articles In this study, we attempt to discover an effective cytokine mediation treatment strategy for sepsis using a previously developed agent-based model that simulates the innate immune response to infection: the Innate Immune Response agent-based model (IIRABM). Previous attempts at reducing mortality with multi-cytokine mediation using the IIRABM have failed to reduce mortality across all patient parameterizations, and motivated us to investigate whether adaptive, personalized multi-cytokine mediation can control the trajectory of sepsis and lower patient mortality. We used the IIRABM to compute a treatment policy in which systemic patient measurements are used in a feedback loop to inform future treatment. Using deep reinforcement learning, we identified a policy that achieves 0% mortality on the patient parameterization on which it was trained. More importantly, this policy also achieves 0.8% mortality over 500 randomly selected patient parameterizations with baseline mortalities ranging from 1–99% (with an average of 49%), spanning the entire clinically plausible parameter space of the IIRABM. These results suggest that adaptive, personalized multi-cytokine mediation therapy could be a promising approach for treating sepsis. We hope that this work motivates researchers to consider such an approach as part of future clinical trials. To the best of our knowledge, this work is the first to consider adaptive, personalized multi-cytokine mediation therapy for sepsis, and the first to exploit deep reinforcement learning on a biological simulation. Sepsis seems to present the best problem for hospitals from a health-economics view.
  • 276. Surface (mesh or NURBS) from volumetric data. FastSurfer - a fast and accurate deep learning based neuroimaging pipeline. Leonie Henschel et al., German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany. https://arxiv.org/abs/1910.03866 (9 Oct 2019) To this end, we introduce an advanced deep learning architecture capable of whole-brain segmentation into 95 classes in under 1 minute, mimicking FreeSurfer's anatomical segmentation and cortical parcellation. The network architecture incorporates local and global competition via competitive dense blocks and competitive skip pathways, as well as multi-slice information aggregation that specifically tailors network performance towards accurate segmentation of both cortical and sub-cortical structures. Further, we perform fast cortical surface reconstruction and thickness analysis by introducing a spectral spherical embedding and by directly mapping the cortical labels from the image to the surface. This approach provides a full FreeSurfer alternative for volumetric analysis (within 1 minute) and surface-based thickness analysis (within only around 1 h run time). For sustainability of this approach we perform extensive validation: we assert high segmentation accuracy on several unseen datasets, measure generalizability, and demonstrate increased test-retest reliability and increased sensitivity to disease effects relative to traditional FreeSurfer.
  • 277. Mesh, e.g. Deep Marching Cubes / DeepSDF. Deep Marching Cubes: Learning Explicit Surface Representations. Yiyi Liao, Simon Donné, Andreas Geiger (2018) http://www.cvlibs.net/publications/Liao2018CVPR.pdf - Cited by 42. https://github.com/yiyiliao/deep_marching_cubes Marching cubes: A high resolution 3D surface construction algorithm (1987). W. E. Lorensen, H. E. Cline. doi: 10.1145/37401.37422 - Cited by 14,986 articles. "In future work, we plan to adapt our method to higher-resolution outputs using octree techniques." Curriculum DeepSDF. Yueqi Duan, Haidong Zhu, He Wang, Li Yi, Ram Nevatia, Leonidas J. Guibas (March 2020) https://arxiv.org/abs/2003.08593 https://github.com/haidongz-usc/Curriculum-DeepSDF (PyTorch)
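The signed distance function (SDF) representation behind DeepSDF is easy to demystify with an analytic shape: sample the SDF of a sphere on a regular grid (normalizing shapes into a cube like [-1, 1]^3 is a common convention for DeepSDF-style models), and a marching-cubes pass over the sampled grid would then recover the mesh. A plain-NumPy sketch, not the papers' code:

```python
import numpy as np

def sphere_sdf(points, center=(0.0, 0.0, 0.0), radius=0.5):
    """Signed distance to a sphere: negative inside, zero on the surface, positive outside."""
    return np.linalg.norm(points - np.asarray(center), axis=-1) - radius

# Sample the SDF on a 32^3 grid over [-1, 1]^3
axis = np.linspace(-1.0, 1.0, 32)
grid = np.stack(np.meshgrid(axis, axis, axis, indexing="ij"), axis=-1)
sdf = sphere_sdf(grid.reshape(-1, 3)).reshape(32, 32, 32)

# Occupancy from the sign; the zero level set of `sdf` is where a
# marching-cubes pass (e.g. skimage.measure.marching_cubes) would place the mesh.
inside = sdf < 0
print(inside.sum(), bool(sdf.min() < 0 < sdf.max()))
```

DeepSDF's contribution is to replace the analytic `sphere_sdf` with a neural network conditioned on a latent shape code, so the same machinery generalizes across shapes.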
  • 278. Mesh → Unreal/Unity/WebGL, etc., if you are into visualization. Helping brain surgeons practice with real-time simulation. August 30, 2019, by Sébastien Lozé. https://www.unrealengine.com/en-US/spotlights/helping-brain-surgeons-practice-with-real-time-simulation In their 2018 paper Enhancement Techniques for Human Anatomy Visualization, Hirofumi Seo and Takeo Igarashi state that "Human anatomy is so complex that just visualizing it in traditional ways is insufficient for easy understanding…" To address this problem, Seo has proposed a practical approach to brain surgery using real-time rendering with Unreal Engine. Now Seo and his team have taken this concept a step further with their 2019 paper Real-Time Virtual Brain Aneurysm Clipping Surgery, where they demonstrate an application prototype for viewing and manipulating a CG representation of a patient's brain in real time. The software prototype, made possible with a grant (Grant Number JP18he1602001) from the Japan Agency for Medical Research and Development (AMED), helps surgeons visualize a patient's unique brain structure before, during, and after an operation. BrainBrowser is an open-source, free 3D brain atlas built on WebGL technologies; it uses Three.js to provide 3D/layered brain visualization. Reviewed in medevel.com. Blender .blend files can be placed in the Assets folder of a Unity project: https://forum.unity.com/threads/holes-in-mesh-on-import-from-blender.248126/ Interaction between volume-rendered 3D texture and mesh objects: https://forum.unity.com/threads/interaction-between-volume-rendered-3d-texture-and-mesh-objects.451345/
  • 279. Easy then to visualize on computer/VR/MR/AR. October 14, 2017, by Andi Jakl: Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4: Segmenting the Brain. https://www.andreasjakl.com/visualizing-mri-ct-scans-in-mixed-reality-vr-ar-part-4-segmenting-the-brain/ Combining 3D scans and MRI data: http://www.neuro-memento-mori.com/combining-3d-scans-and-mri-data/ VR software may bring MRI segmentation into the future. Matt O'Connor, July 30, 2018, Advanced Visualization. https://www.healthimaging.com/topics/advanced-visualization/vr-software-mri-segmentation-future Nextmed: Automatic Imaging Segmentation, 3D Reconstruction, and 3D Model Visualization Platform Using Augmented and Virtual Reality (2020). http://doi.org/10.3390/s20102962
  • 280. NURBS, e.g. Deep Splines. BézierGAN: Automatic Generation of Smooth Curves from Interpretable Low-Dimensional Parameters. Wei Chen, Mark Fuge, University of Maryland. Work was supported by The Defense Advanced Research Projects Agency (DARPA-16-63-YFAFP-059) via the Young Faculty Award (YFA) Program. https://arxiv.org/abs/1808.08871 Many real-world objects are designed by smooth curves, especially in the domains of aerospace and shipbuilding, where aerodynamic shapes (e.g., airfoils) and hydrodynamic shapes (e.g., hulls) are designed. However, the process of selecting the desired design is complicated, especially for engineering applications where strict requirements are imposed. For example, in aerodynamic or hydrodynamic shape optimization, the three main components for finding the desired design are generally: (1) a shape synthesis method (e.g., B-spline or NURBS parameterization), (2) a simulator that computes the performance metric of any given shape, and (3) an optimization algorithm (e.g., a genetic algorithm) to select the design parameters that result in the best performance [1, 2]. To facilitate the design process of those objects, we propose a deep learning based generative adversarial network (GAN) model that can synthesize smooth curves. The model maps a low-dimensional latent representation to a sequence of discrete points sampled from a rational Bézier curve. DeepSpline: Data-Driven Reconstruction of Parametric Curves and Surfaces. Jun Gao, Chengcheng Tang, Vignesh Ganapathi-Subramanian, Jiahui Huang, Hao Su, Leonidas J. Guibas. University of Toronto; Vector Institute; Tsinghua University; Stanford University; UC San Diego (submitted on 12 Jan 2019) https://arxiv.org/abs/1901.03781 Reconstruction of geometry based on different input modes, such as images or point clouds, has been instrumental in the development of computer-aided design and computer graphics. Optimal implementations of these applications have traditionally involved the use of spline-based representations at their core. Most such methods attempt to solve optimization problems that minimize an output-target mismatch. However, these optimization techniques require an initialization that is close enough, as they are local methods by nature. We propose a deep learning architecture that adapts to perform spline fitting tasks accordingly, providing complementary results to the aforementioned traditional methods. To tackle challenges with the 2D cases, such as multiple splines with intersections, we use a hierarchical Recurrent Neural Network (RNN) [Krause et al. 2017] trained with ground-truth labels to predict a variable number of spline curves, each with an undetermined number of control points. In the 3D case, we reconstruct surfaces of revolution and extrusion without self-intersection through an unsupervised learning approach that circumvents the requirement for ground-truth labels. We use the Chamfer distance to measure the distance between the predicted point cloud and the target point cloud. This architecture is generalizable, since predicting other kinds of surfaces (like surfaces of sweeping or NURBS) would require only a change of this individual layer, with the rest of the model remaining the same.
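For orientation, the de Casteljau recurrence that evaluates a Bézier curve (the non-rational special case of the rational Bézier/NURBS parameterizations these papers build on; the rational form would add per-point weights) is compact enough to sketch in plain Python:

```python
def bezier_point(control_points, t):
    """Evaluate a Bézier curve at parameter t in [0, 1] via de Casteljau's algorithm:
    repeatedly linearly interpolate adjacent control points until one point remains."""
    pts = [tuple(p) for p in control_points]
    while len(pts) > 1:
        pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
               for p, q in zip(pts, pts[1:])]
    return pts[0]

# Quadratic curve: endpoints (0,0) and (1,0), pulled toward the middle control point (0.5, 1)
ctrl = [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)]
print(bezier_point(ctrl, 0.0), bezier_point(ctrl, 0.5), bezier_point(ctrl, 1.0))
```

Models like BézierGAN essentially learn to emit the `control_points` (plus weights and parameter samples) from a latent code, so the smoothness of the output curve is guaranteed by construction.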
  • 281. Making the brains physical with 3D printing. Making data matter: Voxel printing for the digital fabrication of data across scales and domains. Christoph Bader et al., The Mediated Matter Group, Media Lab, Massachusetts Institute of Technology, Cambridge. https://doi.org/10.1126/sciadv.aas8652 (30 May 2018) We present a multimaterial voxel-printing method that enables the physical visualization of data sets commonly associated with scientific imaging. Leveraging voxel-based control of multimaterial three-dimensional (3D) printing, our method enables additive manufacturing of discontinuous data types such as point cloud data, curve and graph data, image-based data, and volumetric data. By converting data sets into dithered material deposition descriptions, through modifications to rasterization processes, we demonstrate that data sets frequently visualized on screen can be converted into physical, materially heterogeneous objects. Representative 3D-printed models of image-based data: (A) In vitro reconstructed living human lung tissue on a microfluidic device, observed through confocal microscopy (29). The cilia, responsible for transporting airway secretions and mucus-trapped particles and pathogens, are colored orange. Goblet cells, responsible for mucus production, are colored cyan. (B) Biopsy from a mouse hippocampus, observed via confocal expansion microscopy (proExM) (30). The 3D print visualizes neuronal cell bodies, axons, and dendrites. (H) White matter tractography data of the human brain, created with the 3D Slicer medical image processing platform (37), visualizing bundles of axons which connect different regions of the brain. The original data were acquired through diffusion-weighted (DWI) MRI.
  • 283. Five FDA-approved software packages exist (May 2020). Neuroimaging of Intracerebral Hemorrhage. Rima S Rindler, Jason W Allen, Jack W Barrow, Gustavo Pradilla, Daniel L Barrow. Neurosurgery, Volume 86, Issue 5, May 2020, Pages E414–E423, https://doi.org/10.1093/neuros/nyaa029. Intracerebral hemorrhage (ICH) accounts for 10% to 20% of strokes worldwide and is associated with high morbidity and mortality rates. Neuroimaging is indispensable for rapid diagnosis of ICH and identification of the underlying etiology, thus facilitating triage and appropriate treatment of patients. The most common neuroimaging modalities include noncontrast computed tomography (CT), CT angiography (CTA), digital subtraction angiography, and magnetic resonance imaging (MRI). The strengths and disadvantages of each modality are reviewed. Novel technologies such as dual-energy CT/CTA, rapid MRI techniques, near-infrared spectroscopy (NIRS)*, and automated ICH detection hold promise for faster pre- and in-hospital ICH diagnosis that may impact patient management. * The depth of near-infrared light penetration limits detection of deep hemorrhages, and the size, type, and location of intracranial hemorrhages cannot be determined with accuracy. Bilateral ICH may be missed given that NIRS depends upon the differential light absorbance between contralateral head locations. Patients with traumatic brain injury may also have scalp hematomas that produce false-positive results. Finally, variations in hair, scalp, and skull thickness introduce additional barriers to ICH detection.
Automated ICH Detection: Rapid advancements in machine learning techniques have prompted a number of studies to evaluate automated ICH detection algorithms for identifying both intra- and extra-axial ICH, with varying sensitivities (81%, Majumdar et al. 2018; area under the curve 0.846, Arbabshirani et al. 2018, to 0.90, Chilamkurthy et al. 2018) and specificities (92%, Ye et al. 2019). FDA-approved programs are listed in the Table (A Bar, MS et al, unpublished data, September 2018; Ojeda et al. 2019). Automated algorithms that detect critical findings would facilitate triage of cases awaiting interpretation, especially in underserved areas, thereby improving workflow and patient outcomes (Chilamkurthy et al. 2018). Utilizing a machine learning algorithm to detect ICH reduced the time to diagnosis by 96% (Arbabshirani et al. 2018). However, barriers have prevented widespread adoption of these techniques, including limited inter-institutional generalizability of algorithms that were trained on limited, occasionally single-site datasets. Furthermore, ultimate accountability for errors generated using a machine learning algorithm remains to be determined.
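The sensitivity and specificity figures quoted across these detection studies come straight from a 2×2 confusion matrix over per-scan labels. A minimal sketch with toy triage data (not any of the cited test sets):

```python
import numpy as np

def sensitivity_specificity(y_true, y_pred):
    """Sensitivity (recall on bleeds) and specificity (recall on normals)
    for binary per-scan labels: 1 = ICH present, 0 = no ICH."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # bleeds correctly flagged
    fn = np.sum((y_true == 1) & (y_pred == 0))  # bleeds missed
    tn = np.sum((y_true == 0) & (y_pred == 0))  # normals correctly cleared
    fp = np.sum((y_true == 0) & (y_pred == 1))  # normals falsely flagged
    return tp / (tp + fn), tn / (tn + fp)

# toy example: 8 of 10 bleeds flagged, 9 of 10 normals cleared
y_true = [1] * 10 + [0] * 10
y_pred = [1] * 8 + [0] * 2 + [0] * 9 + [1] * 1
sens, spec = sensitivity_specificity(y_true, y_pred)
print(sens, spec)  # 0.8 0.9
```

AUC, the other figure of merit quoted above, sweeps the detection threshold and integrates sensitivity against (1 − specificity), so it summarizes the whole operating curve rather than one threshold.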
  • 284. AIDOC FDA-approved ‘CT software’ #1. The utility of deep learning: evaluation of a convolutional neural network for detection of intracranial bleeds on non-contrast head computed tomography studies. P. Ojeda; M. Zawaideh; M. Mossa-Basha; D. Haynor. Proceedings Volume 10949, Medical Imaging 2019: Image Processing; 109493J (2019). https://doi.org/10.1117/12.2513167. The algorithm was tested on 7112 non-contrast head CTs acquired during 2016–2017 from two large urban academic and trauma centers. Ground truth labels were assigned to the test data per PACS query and prior reports by expert neuroradiologists. No scans from these two hospitals had been used during the algorithm training process, and Aidoc staff were at all times blinded to the ground truth labels. Model output was reviewed by three radiologists and manual error analysis performed on discordant findings. Specificity was 99%, sensitivity was 95%, and overall accuracy was 98%. In summary, we report promising results of a scalable and clinically pragmatic deep learning model tested on a large set of real-world data from high-volume medical centers. This model holds promise for assisting clinicians in the identification and prioritization of exams suspicious for ICH, facilitating both the diagnosis and treatment of an emergent and life-threatening condition.
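As a sanity check on such reports: overall accuracy is a prevalence-weighted mix of sensitivity and specificity, so the three numbers constrain the positive fraction of the test set. The 98% accuracy with 95% sensitivity and 99% specificity is consistent with roughly a quarter of the 7112 scans being positive; this is an inference from the arithmetic, not a figure stated in the paper:

```python
def overall_accuracy(sensitivity, specificity, prevalence):
    # P(correct) = sens * P(positive) + spec * P(negative)
    return sensitivity * prevalence + specificity * (1 - prevalence)

for prev in (0.05, 0.25, 0.50):
    print(f"prevalence {prev:.2f} -> accuracy {overall_accuracy(0.95, 0.99, prev):.4f}")
# prevalence 0.25 -> accuracy 0.9800, matching the reported figures
```

The same identity explains why accuracy alone is a weak metric for screening tasks: at a low real-world ICH prevalence of, say, 5%, a model that flags nothing at all already scores 95% accuracy.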
  • 285. AIDOC FDA-approved ‘CT software’ #2. Analysis of head CT scans flagged by deep learning software for acute intracranial hemorrhage. Daniel T. Ginat, Department of Radiology, Section of Neuroradiology, University of Chicago. Neuroradiology, volume 62, pages 335–340 (2020). https://doi.org/10.1007/s00234-019-02330-w. Aim: to analyze the implementation of deep learning software for the detection and worklist prioritization of acute intracranial hemorrhage on non-contrast head CT (NCCT) in various clinical settings at an academic medical center. This study reveals that the performance of the deep learning software [Aidoc (Tel Aviv, Israel)] for acute intracranial hemorrhage detection varies depending upon the patient visit location. Furthermore, a substantial portion of flagged cases were follow-up exams, the majority of which were inpatient exams. These findings can help optimize the artificial intelligence-driven clinical workflow. This study has several limitations. The clinical impact of the software, in terms of the significance of flagged cases with pathology not related to ICH, reduction of the turnaround time, a survey of radiologists regarding their personal perspectives on the software implementation, and whether patient outcomes improved, was not part of this study but can be addressed in future work. Nevertheless, this study identified potential deficiencies in the current software version, such as not accounting for patient visit location and whether there are prior head CTs. Such information could provide important clinical context to improve the overall algorithm accuracy, thereby flagging cases in a more useful manner.
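The worklist-prioritization idea, including Ginat's observation that many flags were follow-up exams, can be sketched as a priority queue. This is a toy model, not Aidoc's actual logic, and the three-tier ordering (flagged new exams, then flagged follow-ups, then everything else FIFO) is a hypothetical design choice:

```python
import heapq
import itertools

_counter = itertools.count()  # tie-breaker preserving FIFO within a tier

def add_exam(worklist, exam_id, ai_flagged, is_followup):
    """Push an exam onto the reading worklist with a triage tier:
    0 = AI-flagged new exam, 1 = AI-flagged follow-up, 2 = routine."""
    if ai_flagged and not is_followup:
        tier = 0
    elif ai_flagged:
        tier = 1
    else:
        tier = 2
    heapq.heappush(worklist, (tier, next(_counter), exam_id))

def next_exam(worklist):
    """Pop the highest-priority exam for the radiologist to read."""
    return heapq.heappop(worklist)[2]

wl = []
add_exam(wl, "CT-001", ai_flagged=False, is_followup=False)
add_exam(wl, "CT-002", ai_flagged=True,  is_followup=True)
add_exam(wl, "CT-003", ai_flagged=True,  is_followup=False)
print(next_exam(wl))  # CT-003: flagged new exam jumps the queue
```

Ginat's point about visit location and prior CTs amounts to adding more features to exactly this tiering step, so that an expected follow-up on a known inpatient bleed does not outrank a new flagged emergency-department scan.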