Intracerebral Hemorrhage (ICH): Understanding the CT imaging features
Petteri Teikari, PhD
232 slides
Oct 06, 2020
About This Presentation
Overview of CT basics and deep learning literature mostly focused on the analysis of ICH.
Intracerebral hemorrhage (ICH), also known as cerebral bleed, is a type of intracranial bleed that occurs within the brain tissue or ventricles. Intracerebral bleeds are the second most common cause of stroke, accounting for 10% of hospital admissions for stroke.
For spontaneous ICH seen on CT scan, the death rate (mortality) is 34–50% by 30 days after the insult, and half of the deaths occur in the first 2 days. Even though the majority of deaths occur in the first days after ICH, survivors have a long-term excess mortality of 27% compared to the general population.
Deep learning and computational steps can be roughly categorized into 1) preprocessing, 2) image restoration (denoising, deblurring, inpainting, reconstruction), 3) diffeomorphic registration for spatial normalization, 4) hand-crafted radiomics and texture analysis, and 5) hemorrhage segmentation, among other relevant head CT topics.
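The categorization above can be sketched as a chain of function stubs. Everything here (function names, the 0–80 HU brain window, the segmentation threshold, the step ordering) is an illustrative assumption rather than an implementation from the deck; segmentation runs before radiomics because the radiomics step needs a lesion mask.

```python
def preprocess(hu_values):
    # 1) Preprocessing: clip to an assumed brain window (0-80 HU), scale to [0, 1]
    return [min(max(v, 0.0), 80.0) / 80.0 for v in hu_values]

def restore(intensities):
    # 2) Image restoration (denoising/deblurring/inpainting): identity placeholder
    #    where a restoration network or filter would run
    return intensities

def register(intensities):
    # 3) Diffeomorphic registration for spatial normalization: identity placeholder
    return intensities

def segment(intensities, threshold=0.6):
    # 5) Hemorrhage segmentation: a deep network in practice; acute blood is
    #    hyperdense on CT, so a crude intensity threshold stands in here
    return [v > threshold for v in intensities]

def radiomics(intensities, mask):
    # 4) Hand-crafted radiomics: first-order statistics inside the lesion mask
    lesion = [v for v, m in zip(intensities, mask) if m]
    return {"n_voxels": len(lesion),
            "mean": sum(lesion) / len(lesion) if lesion else 0.0}

def pipeline(hu_values):
    x = register(restore(preprocess(hu_values)))
    mask = segment(x)
    return mask, radiomics(x, mask)
```

For example, a volume of five background voxels at 30 HU and three blood-like voxels at 65 HU yields a three-voxel mask, since 65/80 exceeds the assumed 0.6 threshold while 30/80 does not.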
Alternative download link: https://www.dropbox.com/s/8l2h93cl2pmle4g/CT_hemorrhage.pdf?dl=0
Size: 57.67 MB
Language: en
Added: Oct 06, 2020
Slides: 232 pages
Slide Content
Intracerebral
Hemorrhage (ICH)
Understanding the CT imaging features
for the development of deep learning
networks, ranging from restoration and
segmentation to prognostic and
prescriptive purposes
Petteri Teikari, PhD
High-Dimensional Neurology, UCL Queen Square
Institute of Neurology, London
https://www.linkedin.com/in/petteriteikari/
Version “06/10/20“
Who is this “literature review for
visually orientated people” for?
”A bit of everything related to head CT
deep learning, focused on intracerebral
hemorrhage (ICH) analysis”
It is assumed that the reader is familiar
with deep learning / computer vision, but
less so with computerized tomography
(CT) and ICH
https://www.linkedin.com/in/andriyburkov
What is ICH?
”hemorrhagic stroke”
Spontaneous Intracerebral Hemorrhage (ICH)
https://www.grepmed.com/images/4925/intracerebral-subarachnoid-hemorrhage-comparison-diagnosis-neurology-epidural
http://doi.org/10.13140/RG.2.1.1572.8167
“Hemorrhagic stroke”: less common than ischemic stroke, the “layman definition” of stroke
“Spontaneous”, as opposed to traumatic brain hemorrhage caused by a blow to the head (“traumatic brain injury”, TBI)
https://www.strokeinfo.org/stroke-treatments-hemorrhagic-stroke/
https://mc.ai/building-an-algorithm-to-detect-different-types-of-intracranial-brain-hemorrhage-using-deep/
https://mayfieldclinic.com/pe-ich.htm
https://aneskey.com/intracerebral-hemorrhagic-stroke/
The ICH Basics from Anesthesia Key
The typical hemorrhage location
based on etiology
Primary mechanical injury
→
Secondary injuries
Pathophysiological Mechanisms and Potential Therapeutic
Targets in Intracerebral Hemorrhage
Zhiwei Shao et al. (Front Pharmacol. 2019; 10: 1079, Sept 2019)
https://dx.doi.org/10.3389%2Ffphar.2019.01079
Intracerebral hemorrhage (ICH) is a subtype of hemorrhagic stroke with high mortality
and morbidity. The resulting hematoma within brain parenchyma induces a series of
adverse events causing primary and secondary brain injury. The mechanism of
injury after ICH is very complicated and has not yet been illuminated.
This review discusses some key pathophysiology mechanisms in ICH such as
oxidative stress (OS), inflammation, iron toxicity, and thrombin formation.
The corresponding therapeutic targets and therapeutic strategies are also reviewed.
The initial pathological damage of cerebral hemorrhage to brain is the mechanical
compression caused by hematoma. The hematoma mass can increase intracranial
pressure, compressing brain and thereby potentially affecting blood flow, and
subsequently leading to brain hernia (Keep et al., 2012).
Subsequently, brain hernia and brain edema cause secondary injury, which may be
associated with poor outcome and mortality in ICH patients (Yang et al., 2016).
Unfortunately, the common treatment of brain edema (steroids, mannitol, glycerol, and
hyperventilation) cannot effectively reduce intracranial pressure or prevent secondary
brain injury (Cordonnier et al., 2018). Truly effective clinical treatments are very
limited, mainly because the problem of transforming preclinical research into clinical
application has not yet been solved. Therefore, a multi-target neuroprotective
therapy will make clinically effective treatment strategies possible, but also requires
further study.
Pro- and anti-inflammatory cytokines in secondary brain injury after ICH.
Mechanisms of erythrocyte
lysates and thrombin in
secondary brain injury after ICH.
The Keap1–Nrf2–ARE pathway. Keap1 is an OS sensor
and negatively regulates Nrf2. Once exposed to reactive
oxygen species (ROS), the activated Nrf2 translocates to the
nucleus, binds to antioxidant response element (ARE),
heterodimerizes with one of the small Maf (musculo-
aponeurotic fibrosarcoma oncogene homolog) proteins, and
enhances the upregulation of cytoprotective, antioxidant,
anti-inflammatory, and detoxification genes that mediate cell
survival.
“Time is Brain”: neural injury (and your imaging features)
depend on the time since the initial hematoma
Intracerebral haemorrhage
Adnan I Qureshi, A David Mendelow, Daniel F Hanley
The Lancet Volume 373, Issue 9675, 9–15 May 2009, Pages 1632-1644
https://doi.org/10.1016/S0140-6736(09)60371-8
Cascade of neural injury initiated by intracerebral haemorrhage The steps in the first
4 h are related to the direct effect of the haematoma, later steps to the products released from
the haematoma. BBB=blood–brain barrier. MMP=matrix metallopeptidase. TNF=tumour
necrosis factor. PMN=polymorphonuclear cells.
Progression of haematoma and oedema on CT
Top: hyperacute expansion of haematoma in a patient with intracerebral haemorrhage on serial CT scans.
Small haematoma detected in the basal ganglia and thalamus (A). Expansion of haematoma after 151 min
(B). Continued progression of haematoma after another 82 min (C). Stabilisation of haematoma after
another 76 min (D). Bottom: progression of haematoma and perihaematomal oedema in a patient with
intracerebral haemorrhage on serial CT scans. The first scan (E) was acquired before the intracerebral
haemorrhage. Perihaematoma oedema is highlighted in green to facilitate recognition of progression of
oedema. At 4 h after symptom onset there is a small haematoma in the basal ganglia (F). Expansion of
haematoma with extension into the lateral ventricle and new mass-effect and midline shift at 14 h (G).
Worsening hydrocephalus and early perihaematomal oedema at 28 h (H). Continued mass-effect with
prominent perihaematomal oedema at 73 h (I). Resolving haematoma with more prominent
perihaematomal oedema at 7 days (J).
or how much is the time really brain?
Influence of time to admission to a
comprehensive stroke centre on the
outcome of patients with intracerebral
haemorrhage (Jan 2020)
Luis Prats-Sánchez, Marina Guasch-Jiménez, Ignasi Gich, Elba Pascual-Goñi, Noelia Flores, Pol Camps-Renom, Daniel
Guisado-Alonso, Alejandro Martínez-Domeño, Raquel Delgado-Mederos, Ana Rodríguez-Campello, Angel Ois,
Alejandra Gómez-Gonzalez, Elisa Cuadrado-Godia, Jaume Roquer, Joan Martí-Fàbregas
https://doi.org/10.1177%2F2396987320901616
In patients with spontaneous intracerebral
haemorrhage, it is uncertain whether the impact of
diagnostic and therapeutic measures on the
outcome is time-sensitive. We sought to
determine the influence of the time to admission to
a comprehensive stroke centre on the outcome of
patients with acute intracerebral haemorrhage.
Our results suggest that in patients with
intracerebral haemorrhage and known symptom
onset who are admitted to a comprehensive stroke
centre, an early admission (≤110 min) does not
influence the outcome at 90 days.
Distribution of propensity
score blocks by time to
admission. For each pair
of blocks, the box on the
left represents the group
of patients with an
admission ≤ 110 min and
the one on the right
represents the group who
was admitted > 110 min.
Management of ICH: fewer options than for ischemic stroke
Intracerebral haemorrhage
Adnan I Qureshi, A David Mendelow, Daniel F Hanley
The Lancet Volume 373, Issue 9675, 9–15 May 2009, Pages 1632-1644
https://doi.org/10.1016/S0140-6736(09)60371-8
Odds ratio for death or disability in patients with lobar intracerebral
haemorrhage treated surgically or conservatively. Boxes are Peto's odds ratio
(OR), lines are 95% CI. Adapted with permission from Lippincott Williams and Wilkins
Clinical evidence suggests the importance of three management tasks in intracerebral haemorrhage:
stopping the bleeding [81], removing the clot [70], and controlling cerebral perfusion pressure [92].
The precision needed to achieve these goals and the degree of benefit attributable to each clinical goal
would be precisely defined when the results of trials in progress become available. An NIH workshop [150]
identified the importance of animal models of intracerebral haemorrhage and of human pathology
studies. Use of real-time, high-field MRI with three-dimensional imaging and high-resolution tissue
probes is another priority. Trials of acute blood-pressure treatment and coagulopathy reversal are also
medical priorities. And trials of minimally invasive surgical techniques including mechanical and
pharmacological adjuncts are surgical priorities. The STICH II trial should determine the benefit of
craniotomy for lobar haemorrhage. A better understanding of methodological challenges, including
establishment of research networks and multispecialty approaches, is also needed [150]. New
information created in each of these areas should add substantially to our knowledge about the efficacy
of treatment for intracerebral haemorrhage.
Best care is prevention with blood pressure medication
Intracerebral haemorrhage: current approaches
to acute management
Prof Charlotte Cordonnier, Prof Andrew Demchuk, Wendy Ziai, Prof Craig S
Anderson
The Lancet Volume 392, Issue 10154, 6–12 October 2018, Pages 1257-1268
https://doi.org/10.1016/S0140-6736(18)31878-6
ICH is a heterogeneous disease; certain clinical and imaging features help identify
the cause, the prognosis, and how to manage the disease. Survival and recovery
from intracerebral haemorrhage are related to the site, mass effect, and intracranial
pressure of the underlying haematoma, and to subsequent cerebral oedema from
perihaematomal neurotoxicity or inflammation and complications from prolonged
neurological dysfunction.
A moderate level of evidence supports there being beneficial effects of active
management goals with avoidance of early palliative care orders, well-coordinated
specialist stroke unit care, targeted neurointensive and surgical interventions, early
control of elevated blood pressure, and rapid reversal of abnormal coagulation.
The concept of time is brain, developed for the management of acute ischaemic
stroke, applies readily to the management of acute intracerebral
haemorrhage. Initiation of haemostatic treatment within the first few hours after
onset, using deferral or waiver of informed consent, or even earlier initiation in a
prehospital setting with mobile stroke unit technologies, requires evaluation.
For patients with intracerebral haemorrhage presenting at later or unwitnessed time
windows, refining the approach of spot sign detection through newer imaging
techniques, such as multi-phase CT angiography (Rodriguez-Luna et al. 2017),
might prove useful, as has been shown with the use of CT perfusion in the detection of
viable cerebral ischaemia in patients with acute ischaemic stroke who present in a
late window (Albers et al. 2018; Nogueira et al. 2018).
Ultimately, the best treatment of intracerebral haemorrhage is prevention and
effective detection, management, and control of hypertension across the
community and in high-risk groups will have the greatest effect on reducing the
burden of intracerebral haemorrhage worldwide.
ICH: still a high fatality rate
European Stroke Organisation (ESO) Guidelines
for the Management of Spontaneous
Intracerebral Hemorrhage (August 2014)
Thorsten Steiner, Rustam Al-Shahi Salman, Ronnie Beer, Hanne Christensen, Charlotte Cordonnier, Laszlo Csiba, Michael Forsting, Sagi
Harnof, Catharina J. M. Klijn, Derk Krieger, A. David Mendelow, Carlos Molina, Joan Montaner, Karsten Overgaard, Jesper Petersson,
Risto O. Roine, Erich Schmutzhard, Karsten Schwerdtfeger, Christian Stapf, Turgut Tatlisumak, Brenda M. Thomas, Danilo Toni, Andreas
Unterberg, Markus Wagner
https://doi.org/10.1111%2Fijs.12309
Intracerebral hemorrhage (ICH) accounted for 9% to 27% of all strokes
worldwide in the last decade, with high early case fatality and poor functional
outcome. In view of recent randomized controlled trials (RCTs) of the
management of ICH, the European Stroke Organisation (ESO) has updated its
evidence-based guidelines for the management of ICH.
We found moderate- to high-quality evidence to support strong
recommendations for managing patients with acute ICH on an acute
stroke unit, avoiding hemostatic therapy for acute ICH not associated with
antithrombotic drug use, avoiding graduated compression stockings, using
intermittent pneumatic compression in immobile patients, and using blood
pressure lowering for secondary prevention.
We found moderate-quality evidence to support weak recommendations for
intensive lowering of systolic blood pressure to <140 mmHg within six hours of
ICH onset, early surgery for patients with a Glasgow Coma Scale score of 9–12, and
avoidance of corticosteroids.
These guidelines inform the management of ICH based on evidence for the
effects of treatments in RCTs. Outcome after ICH remains poor, making further
RCTs of interventions to improve outcome a priority.
Age-standardized incidence of hemorrhagic stroke per 100 000 person-years
for 1990 (a), 2005 (b), and 2010 (c). From Feigin et al. (1).
CT typically the first scan done and MRI later where accessible
MRI offers better image quality, but the cost of the technology limits its availability
Intracerebral hemorrhage: an
update on diagnosis and treatment
Isabel C. Hostettler, David J. Seiffge & David J. Werring et
al. (12 Jun 2019)
UCL Stroke Research Centre, Department of Brain Repair and
Rehabilitation, UCL Institute of Neurology and the National Hospital for Neurology and Neurosurgery,
London, UK
Expert Review of Neurotherapeutics Volume 19, 2019 -
Issue 7 https://doi.org/10.1080/14737175.2019.1623671
Expert opinion: In recent years, significant
advances have been made in deciphering
causes, understanding pathophysiology, and
improving acute treatment and prevention of ICH.
However, the clinical outcome remains poor
and many challenges remain.
Acute interventions delivered rapidly
(including medical therapies – targeting
hematoma expansion, hemoglobin toxicity,
inflammation, edema, anticoagulant reversal –
and minimally invasive surgery) are likely to
improve acute outcomes.
Improved classification of the underlying
arteriopathies (from neuroimaging and genetic
studies) and prognosis should allow tailored
prevention strategies (including sustained
blood pressure control and optimized
antithrombotic therapy) to further improve
longer-term outcome in this devastating disease.
A) Modified Boston criteria, B) CT Edinburgh criteria.
ICH care pathway.
Pathway to decide on intra-arterial
digital subtraction angiography (IADSA)
to further investigate ICH cause
(adapted from Wilson et al. 2017).
small vessel diseases (SVD), intra-arterial digital
subtraction angiography (IADSA), White Matter
Hyperintensities (WMH)
Angiography also for hemorrhagic stroke
Hemorrhagic Stroke (2014)
Julius Griauzde, Elliot Dickerson and Joseph J.
Gemmete
Department of Radiology, Radiology Resident, University of
Michigan
http://doi.org/10.1007/978-1-4614-9212-2_46-1
Non-contrast computed tomography has
long been the initial imaging tool in the acute
neurologic patient. As MRI technology and
angiographic imaging has evolved, they too
have proven to be beneficial in narrowing the
differential diagnosis and triaging patient care.
Several biological and physical characteristics
contribute significantly to the appearance of
blood products on neuroimaging. To
adequately interpret images in the patient with
hemorrhagic stroke, the evaluator must have a
knowledge of the interplay between imaging
modalities and intracranial blood products.
Additionally, an understanding of technical
parameters as well as the limitations of
imaging modalities can be helpful in avoiding
pitfalls. Recognition of typical imaging patterns
and clinical presentations can further aid the
evaluator in rapid diagnosis and directed care.
Computed tomography angiography (CTA)
Magnetic resonance angiography (MRA)
Time of Flight MRA (TOF MRA), in its simplest form,
takes advantage of the flow of blood
Contrast-Enhanced MRA (CEMRA) employs fast
spoiled gradient-recalled echo-based sequences
(FSPGR) and the paramagnetic properties of
gadolinium to intensify the signal within vessels
“Brain is time” also for the appearance of the blood
Evolution of blood products on MRI (derived from a figure created by Dr. Frank Gaillard as
presented on http://radiopaedia.org/articles/ageing-blood-on-mri, with permission)
http://doi.org/10.1007/978-1-4614-9212-2_46-1:
The appearance of the ICH at different periods of time depends
considerably upon a number of factors. For instance, in early phases,
the hematocrit and protein levels of the hematoma will dramatically alter
the CT attenuation in the hematoma. In later phases, factors such as
oxygen tension at the hematoma will determine how quickly
deoxyhemoglobin transitions into methemoglobin and how quickly red
blood cells finally lyse and decrease the field inhomogeneity effects of
sequestered methemoglobin. The integrity of the blood-brain barrier
also helps to determine the degree to which hemosiderin-laden
macrophages remain trapped in the parenchyma causing hemosiderin
staining long after the vast majority of the hematoma mass has been
resorbed [Parizel et al. 2001].
Intracranial hemorrhage made easy - a semiological approach on CT and MRI
http://doi.org/10.1594/ecr2014/C-1120
CT appearance of ageing blood: several factors vary depending on the stage of the bleeding.
Evolution of CT density of intracranial haemorrhage (diagram). Case contributed by Assoc Prof Frank Gaillard
https://radiopaedia.org/cases/evolution-of-ct-density-of-intracranial-haemorrhage-diagram
Appearance of Blood on Computed Tomography and
Magnetic Resonance Imaging Scans by Stage
http://doi.org/10.1007/s13311-010-0009-x
What predicts
the outcome after ICH?
ICH Score: the simplistic baseline for prognosis
ICH Score subcomponents: Glasgow Coma Scale (GCS)
https://www.firstaidforfree.com/glasgow-coma-scale-gcs-first-aiders/
https://emottawablog.com/2018/07/gcs-remastered-recent-updates-to-the-glasgow-coma-scale-gcs-p/
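The GCS referenced above is the sum of three clinician-rated components: eye opening (1–4), verbal response (1–5), and motor response (1–6), giving a total of 3–15. A minimal sketch (the function name and range check are illustrative, not from the deck):

```python
def gcs_total(eye, verbal, motor):
    # Component ranges: eye opening 1-4, verbal response 1-5, motor response 1-6
    if not (1 <= eye <= 4 and 1 <= verbal <= 5 and 1 <= motor <= 6):
        raise ValueError("GCS component out of range")
    # Total ranges from 3 (deep coma) to 15 (fully alert)
    return eye + verbal + motor
```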
ICH Score subcomponents: Hematoma volume
How to measure in practice? Note that deep learning segmentation networks are not really in use
Ryan Hakimi, DO, MS Assistant Professor
https://slideplayer.com/slide/3883134/
Vivien H. Lee et al. (2016) cites:
● Kwak’s sABC/2 formula (Kwak et al. 1983, 10.1161/01.str.14.4.493, cited by 252)
● Kothari’s ABC/2 formula (Kothari et al. 1996, 10.1161/01.str.27.8.1304, cited by 1653)
Excellent accuracy of ABC/2 volume formula compared to computer-assisted volumetric
analysis of subdural hematomas. Sae-Yeon Won et al. (2018)
https://doi.org/10.1371/journal.pone.0199809
The ABC/2 method is a simple and fast bedside formula for measuring SDH volume in a
timely manner and, through simple adaptation, may replace computer-assisted volumetric
measurement in clinical and research settings.
Assessment of the ABC/2 Method of Epidural
Hematoma Volume Measurement as Compared to
Computer-Assisted Planimetric Analysis (2015)
https://doi.org/10.1177%2F1099800415577634
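The ABC/2 formula discussed above approximates the hematoma as an ellipsoid (volume = π/6 · A · B · C ≈ A · B · C / 2). A minimal sketch, with measurement conventions as commonly described for the Kothari method; the function name is an assumption:

```python
def abc_over_2_volume(a_cm, b_cm, c_cm):
    """Estimate hematoma volume in mL with the ABC/2 formula.

    A: greatest hemorrhage diameter on the axial CT slice with the largest
       bleed (cm); B: diameter perpendicular to A on the same slice (cm);
    C: approximate vertical extent, i.e. number of slices showing hemorrhage
       times slice thickness (cm). 1 cm^3 == 1 mL.
    """
    # pi/6 ~= 0.52, so the ellipsoid volume is approximated by dividing by 2
    return a_cm * b_cm * c_cm / 2.0
```

For example, a 4 × 3 × 5 cm bleed gives 30 mL, right at the ≥30 mL cutoff used in the ICH score.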
ICH Score subcomponents: Intraventricular Hemorrhage
https://www.childrensmn.org/educationmaterials/childrensmn/article/15353/intraventricular-hemorrhage-in-premature-babies/
Jackson et al. (2013)
https://doi.org/10.1007/s12028-012-9713-1
ICH Score subcomponents: Infratentorial (cerebellar) bleed
https://aneskey.com/intracerebral-hemorrhagic-stroke/
Impact of Supratentorial Cerebral Hemorrhage on the
Complexity of Heart Rate Variability in Acute Stroke Chih-Hao
Chen, Sung-Chun Tang, Ding-Yuan Lee, Jiann-Shing Shieh, Dar-Ming Lai, An-Yu Wu & Jiann-
Shing Jeng Scientific Reports volume 8, Article number: 11473 (2018)
https://doi.org/10.1038/s41598-018-29961-y
Acute stroke commonly affects cardiac autonomic responses resulting in reduced
heart rate variability (HRV). Multiscale entropy (MSE) is a novel non-linear
method to quantify the complexity of HRV. This study investigated the influence of
intracerebral hemorrhage (ICH) locations and intraventricular
hemorrhage (IVH) on the complexity of HRV. In summary, more severe stroke
and larger hematoma volume resulted in lower complexity of HRV. Lobar hemorrhage
and IVH had great impacts on the cardiac autonomic function.
https://neupsykey.com/diagnosis-and-treatment-of-intracerebral-hemorrhage/
Location
→
functional measures?
We collected ECG analogue data
directly from the bedside monitor
(Philips Intellivue MP70, Koninklijke
Philips N.V., Amsterdam, Netherlands)
for each patient.
ICH Score validation and modification: somewhat OK / suboptimal performance
Modifying the intracerebral
hemorrhage score to suit the
needs of the developing world
Ajay Hegde, Girish Menon (Nov 2018)
http://doi.org/10.4103/aian.AIAN_419_17
The ICH Score failed to accurately predict
mortality in our cohort. ICH is
predominantly seen in a younger age
group in India, and hence outcomes are
better in comparison to the West. We
propose a minor modification of the ICH
score, reducing the age criterion by 10
years, to better prognosticate the disease
in our population.
External Validation of the ICH
Score
Jennifer L Clarke et al. (2004)
https://doi.org/10.1385/ncc:1:1:53
The ICH score accurately stratifies
outcome in an external patient cohort.
Thus, the ICH score is a validated
clinical grading scale that can be easily
and rapidly applied at ICH presentation.
A scale such as the ICH score could be
used to standardize clinical treatment
protocols or clinical studies.
Validation of ICH Score in a large
Urban Population
Taha Nisar et al. (2018)
https://doi.org/10.1016/j.clineuro.2018.09.007
We conducted a retrospective chart review of
245 adult patients who presented with acute
ICH to University Hospital, Newark. Our study
is one of the largest done at a single urban
center to validate the ICH score. Age ≥ 80
years wasn't statistically significant with
respect to 30-day mortality in our group.
Restratification of the weights of
individual variables in the ICH equation, with
modification of the ICH score, could potentially
establish mortality risk more accurately.
Nevertheless, the overall prediction of
mortality was accurate and reproducible in
our study.
Validation of the ICH score in
patients with spontaneous
intracerebral haemorrhage
admitted to the intensive care unit
in Southern Spain
Sonia Rodríguez-Fernández et al. (2018)
http://dx.doi.org/10.1136/bmjopen-2018-021719
ICH score shows an acceptable discrimination as a tool to
predict mortality rates in patients with spontaneous ICH
admitted to the ICU, but its calibration is suboptimal.
24-Hour ICH Score Is a Better
Predictor of Outcome than
Admission ICH Score
Aimee M. Aysenne et al. (2013)
https://doi.org/10.1155/2013/605286
Early determination of the ICH score may
incorrectly estimate the severity and
expected outcome after ICH. Calculations of
the ICH score 24 hours after admission
will better predict early outcomes.
Assessment and comparison of the
max-ICH score and ICH score by
external validation
Felix A. Schmidt, et al. (2018)
https://doi.org/10.1212/WNL.0000000000006117
We tested the hypothesis that the maximally treated
intracerebral hemorrhage (max-ICH) score is superior
to the ICH score for characterizing mortality and functional
outcome prognosis in patients with ICH, particularly those who
receive maximal treatment.
External validation with direct comparison of the ICH score and
max-ICH score shows that their prognostic performance is not
meaningfully different. Alternatives to simple scores are
likely needed to improve prognostic estimates for patient
care decisions.
So, do you still want to use
oversimplified models after all?
ICH Score works for some parts of the population
Original Intracerebral Hemorrhage Score for the Prediction
of Short-Term Mortality in Cerebral Hemorrhage:
Systematic Review and Meta-Analysis
Gregório, Tiago; Pipa, Sara; Cavaleiro, Pedro; Atanásio, Gabriel;
Albuquerque, Inês; Castro Chaves, Paulo ; Azevedo, Luís
Journal of Stroke and Cerebrovascular Diseases
Volume 29, Issue 4, April 2020, 104630
https://doi.org/10.1097/CCM.0000000000003744
To systematically assess the discrimination and
calibration of the Intracerebral Hemorrhage score for
prediction of short-term mortality (38 studies, 15,509
patients) in intracerebral hemorrhage patients and to study its
determinants using heterogeneity analysis.
Fifty-five studies provided data on discrimination, and 35 studies
provided data on calibration. Overall, the Intracerebral
Hemorrhage score discriminated well (pooled C-statistic
0.84; 95% CI, 0.82-0.85) but overestimated mortality
(pooled observed:expected mortality ratio = 0.87; 95% CI, 0.78-
0.97), with high heterogeneity for both estimates (I² = 80% and
84%, respectively).
The Intracerebral Hemorrhage score is a valid clinical
prediction rule for short-term mortality in intracerebral
hemorrhage patients but discriminated mortality worse in more
severe cohorts. It also overestimated mortality in the highest
Intracerebral Hemorrhage score patients, with significant
inconsistency between cohorts. These results suggest that
mortality for these patients is dependent on factors
not included in the score. Further studies are needed to
determine these factors.
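The two metrics pooled in this meta-analysis, discrimination (C-statistic) and calibration (observed:expected mortality ratio), can be computed from per-patient predicted risks and observed deaths. This is a generic sketch of the metrics, not the authors' code; the O(n²) pairwise C-statistic is fine for illustration but a rank-based formula would be used at scale.

```python
def c_statistic(risks, died):
    # C-statistic (equivalent to AUC): the probability that a randomly chosen
    # death was assigned a higher predicted risk than a randomly chosen
    # survivor; ties count as half.
    pairs = concordant = tied = 0
    for r_d, d in zip(risks, died):
        if not d:
            continue
        for r_s, s in zip(risks, died):
            if s:
                continue
            pairs += 1
            if r_d > r_s:
                concordant += 1
            elif r_d == r_s:
                tied += 1
    return (concordant + 0.5 * tied) / pairs

def observed_expected_ratio(risks, died):
    # Calibration-in-the-large: a ratio below 1 means the model overestimates
    # mortality, as reported for the ICH score (pooled O:E 0.87 above).
    return sum(died) / sum(risks)
```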
Start with ICH score but then you need better models?
Management of Intracerebral
Hemorrhage: JACC Focus Seminar
Matthew Schrag,Howard Kirshner
Journal of the American College of Cardiology
Volume 75, Issue 15, 21 April 2020
https://doi.org/10.1016/j.jacc.2019.10.066
The most widely used tool for assessing prognosis is the “ICH score,” a scale that predicts
mortality based on hemorrhage size, patient age, Glasgow coma score, hemorrhage location
(infratentorial or supratentorial), and the presence of intraventricular hemorrhage (
Hemphill et al. 2001). This score has been widely criticized for overestimating the
mortality associated with ICH, and this is attributed to the high rate of early withdrawal of
medical care in more severe hemorrhages in the cohort, leading to a “self-fulfilling
prophecy” of early mortality (Zahuranec et al. 2007, Zahuranec et al. 2010).
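The component points of the Hemphill et al. (2001) score enumerated above can be tallied as follows. This is a sketch of the published scoring rule, not code from the slide deck:

```python
def ich_score(gcs, volume_ml, ivh, infratentorial, age):
    """Original ICH score (Hemphill et al. 2001), total range 0-6."""
    if gcs <= 4:          # GCS 3-4
        points = 2
    elif gcs <= 12:       # GCS 5-12
        points = 1
    else:                 # GCS 13-15
        points = 0
    points += 1 if volume_ml >= 30 else 0   # hemorrhage volume >= 30 mL
    points += 1 if ivh else 0               # intraventricular extension
    points += 1 if infratentorial else 0    # infratentorial origin
    points += 1 if age >= 80 else 0         # age >= 80 years
    return points
```

A fully alert younger patient with a small lobar bleed scores 0, while a comatose elderly patient with a large infratentorial bleed extending into the ventricles scores the maximum of 6.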
Nevertheless, no high-performing alternative scale or biomarker has
entered routine clinical use, so the ICH score remains a starting point for
clinical prognostication. A recent re-evaluation of this clinical tool found that both
physicians’ and nurses’ subjective predictions of 3-month outcomes made within 24 h
of the hemorrhage outperformed the accuracy of the ICH score, underscoring the
important role of clinician experience and judgement in guiding families (
Hwang et al. 2015).
In addition to hemorrhage size and initial clinical deficits, factors that seem to predict a poor
overall outcome include any early neurological deterioration, hemorrhages in deep locations,
particularly the thalamus, and age/baseline functional status (Yogendrakumar et al. 2018;
Sreekrishnan et al. 2016; Ullman et al. 2019). When the clinical prognosis is unclear,
physicians should generally advocate for additional time and continued supportive
care (Hemphill et al. 2015).
Recovery after intracerebral hemorrhage is often delayed when compared with
ischemic strokes of similar severity, and outcomes may need to be evaluated at
later timepoints to capture the full extent of potential recovery. This is important both
for calibrating patient and family expectations and in the design of outcomes for clinical
trials.
Several scores and measures exist
Intracerebral hemorrhage outcome: A
comprehensive update
João Pinho et al. (15 March 2019)
https://doi.org/10.1016/j.jns.2019.01.013
The focus of outcome assessment after ICH has been
mortality in most studies, because of the high early case
fatality which reaches 40% in some population-based
studies. The most robust and consistent predictors of early
mortality include age, severity of neurological impairment,
hemorrhage volume and antithrombotic therapy at the time
of the event.
Long-term outcome assessment is multifaceted and
includes not only mortality and functional outcome,
but also patient self-assessment of the health-
related quality of life, occurrence of cognitive
impairment, psychiatric disorders, epileptic seizures,
recurrent ICH and subsequent thromboembolic events.
Several scores which predict mortality and functional
outcome after ICH have been validated and are useful in
the daily clinical practice, however they must be used in
combination with the clinical judgment for individualized
patients. Management of patients with ICH, both in the acute
and chronic phases, requires health care professionals to
have a comprehensive and updated perspective on
outcome, which informs decisions that need to be
taken together with the patient and next of kin.
Too “handwavey” reporting of the location at the moment
Intracerebral Hemorrhage Location and Functional
Outcomes of Patients: A Systematic Literature Review and
Meta-Analysis
Anirudh Sreekrishnan et al. (Neurocritical Care volume 25, pages 384–391, 2016)
https://doi.org/10.1177%2F0272989X19879095 - Cited by 35
Intracerebral hemorrhage (ICH) has the highest mortality rate among all
strokes. While ICH location, lobar versus non-lobar, has been
established as a predictor of mortality, less is known regarding the
relationship between more specific ICH locations and functional
outcome. This review summarizes current work studying how ICH
location affects outcome, with an emphasis on how studies designate
regions of interest.
Multiple studies have examined motor-centric outcomes, with few studies
examining quality of life (QoL) or cognition. Better functional outcomes
have been suggested for lobar versus non-lobar ICH; few studies
attempted finer topographic comparisons. This study highlights the
need for improved reporting in ICH outcomes research, including
a detailed description of hemorrhage location, reporting of the full
range of functional outcome scales, and inclusion of cognitive and
QoL outcomes.
Meta-analysis of studies describing the odds ratio of poor outcomes for
lobar compared to deep/non-lobar ICH. a Poor outcome mRS (3, 4, 5, 6) or
GOS (4, 3, 2, 1); b Poor outcome mRS (4, 5, 6) or GOS (3, 2, 1); c Poor
outcome mRS (5, 6). *Significant results (p < 0.05)
Lobar vs Deep?
https://slideplayer.com/slide/2404245/
N Engl J Med 2001; 344:1450-1460
http://doi.org/10.1056/NEJM200105103441907
Two general categories in terms of pathophysiology:
-- Lobar (towards the periphery, typically linked to
cerebral amyloid angiopathy [CAA])
-- Deep (in the deep white matter of the cerebrum,
typically linked to hypertension, HTN)
https://www.cram.com/flashcards/draft-23-16-intracranial-hemorrhage-2439833
Long-term risks higher after lobar ICH?
Ten-year risks of recurrent stroke,
disability, dementia and cost in relation to
site of primary intracerebral haemorrhage:
population-based study (2019)
Linxin Li, Ramon Luengo-Fernandez, Susanna M Zuurbier, Nicola C
Beddows, Philippa Lavallee, Louise E Silver, Wilhelm Kuker, Peter
Malcolm Rothwell
http://dx.doi.org/10.1136/jnnp-2019-322663
Patients with primary intracerebral haemorrhage (ICH)
are at increased long-term risks of recurrent stroke and
other comorbidities. However, available estimates
come predominantly from hospital-based studies with
relatively short follow-up. Moreover, there are also
uncertainties about the influence of ICH location
on risks of recurrent stroke, disability, dementia and
quality of life.
Methods In a population-based study (Oxford Vascular
Study/2002–2018) of patients with a first ICH with
follow-up to 10 years, we determined the long-term
risks of recurrent stroke, disability, quality of life,
dementia and hospital care costs stratified by
haematoma location.
ICH can be categorised into lobar and non-lobar according to the haematoma location.
Given the different balance of pathologies for lobar versus non-lobar ICH, the long-term
prognosis of ICH could be expected to differ by haematoma location. However, while some
studies suggested that haematoma location was associated with recurrent stroke, others
have not.
Compared with non-lobar ICH, the substantially higher 10-year
risks of recurrent stroke, dementia and lower QALYs after lobar
ICH highlight the need for more effective prevention for
this patient group.
(top) Ten-year risks of recurrent stroke, disability or death stratified
by haematoma location. (right) Ten-year mean healthcare costs
over time after primary intracerebral haemorrhage.
Hematoma Enlargement deep vs lobar, volume?
Hematoma enlargement characteristics in
deep versus lobar intracerebral hemorrhage
Jochen A. Sembill et al. (04 March 2020)
https://doi.org/10.1002/acn3.51001
Hematoma enlargement (HE) is associated with
clinical outcomes after supratentorial intracerebral
hemorrhage (ICH). This study evaluates whether HE
characteristics and association with functional
outcome differ in deep versus lobar ICH.
HE occurrence does not differ among deep and lobar
ICH. However, compared to lobar ICH, HE after deep
ICH is of greater extent in OAC-ICH, occurs earlier
and may be of greater clinical relevance. Overall,
clinical significance is more apparent after
small–medium compared to large-sized
bleedings.
These data may be valuable for both routine clinical
management as well as for designing future studies
on hemostatic and blood pressure management
aiming at minimizing HE. However, further studies
with improved design are needed to replicate these
findings and to investigate the pathophysiological
mechanisms accounting for these observations.
Study flowchart. Altogether, individual-level data from 3,580 spontaneous ICH patients were analyzed to identify 1,954
supratentorial ICH patients eligible for outcome analyses. Data were provided by two parts of a German-wide observational
study (RETRACE I and II) conducted at 22 participating tertiary centers, and by one single-center university hospital registry.
Intracerebral Hemorrhage: Clinical Manifestations Related to Site.
https://clinicalgate.com/intracerebral-hemorrhage/
https://all-about-hipertency.blogspot.com/2003/08/hypertensive-hemorrhagic-stroke.html
https://radiologyassistant.nl/neuroradiology/non-traumatic-intracranial-haemorrhage-in-adults
Other factors you should take into account
Brian A. Stettler, MD, Assistant Professor
https://slideplayer.com/slide/3129821/
Subfalcine herniation, midline shift and uncal
herniation secondary to large subdural hematoma in
the left hemisphere.
https://www.startradiology.com/internships/neurology/brain/ct-brain-hemorrhage/
Hydrocephalus
https://kidshealth.org/en/parents/hydrocephalus.html
Risk Factors: hypertension is the largest risk factor
Risk Factors of Intracerebral Hemorrhage:
A Case-Control Study
Hanne Sallinen, Arto Pietilä, Veikko Salomaa, Daniel Strbian
Journal of Stroke and Cerebrovascular Diseases
Volume 29, Issue 4, April 2020, 104630
https://doi.org/10.1016/j.jstrokecerebrovasdis.2019.104630
Hypertension is a well-known risk factor for
intracerebral hemorrhage (ICH). On many of the other
potential risk factors, such as smoking, diabetes,
and alcohol intake, results are conflicting. We
assessed risk factors of ICH, taking also into account
prior depression and fatigue.
Analyzing all cases and controls, the cases had more
hypertension, history of heart attack, lipid-lowering
medication, and reported more frequently fatigue prior
to ICH. In persons aged less than 70 years,
hypertension and fatigue were more common among
cases. In persons aged greater than or equal to 70
years, factors associated with risk of ICH were fatigue
prior to ICH, use of lipid-lowering medication, and
overweight.
Hypertension was associated with risk of ICH
among all patients and in the group of patients under
70 years. Fatigue prior to ICH was more common
among all ICH cases.
Stroke Unit or Intensive Care Unit for ICH patients?
Stroke unit admission is associated with better
outcome and lower mortality in patients with
intracerebral hemorrhage
M. N. Ungerer, P. Ringleb, B. Reuter, C. Stock, F. Ippen, S. Hyrenbach, I. Bruder,
P. Martus, C. Gumbinger, the AG Schlaganfall
https://doi.org/10.1111/ene.14164 (Feb 2020)
There is no clear consensus among current guidelines on the preferred
admission ward [i.e. intensive care unit (ICU) or stroke unit (SU)] for
patients with intracerebral hemorrhage. Based on expert opinion, the American
Heart Association and European Stroke Organization recommend treatment in
neurological/neuroscience ICUs (NICUs) or SUs. The European Stroke
Organization guideline states that there are no studies available directly
comparing outcomes between ICUs and SUs.
We performed an observational study comparing outcomes of 10,811
consecutive non-comatose patients with intracerebral hemorrhage according
to admission ward [ICUs, SUs and normal wards (NWs)]. Primary outcomes
were the modified Rankin Scale score at discharge and intrahospital mortality.
An additional analysis compared NICUs with SUs.
Treatment in SUs was associated with better functional outcome and reduced
mortality compared with ICUs and NWs. Our findings support the current
guideline recommendations to treat patients with intracerebral
hemorrhage in SUs or NICUs and suggest that some patients may further
benefit from NICU treatment.
Mobile Stroke Unit Reduces
Time to Treatment
JULY 03, 2018
https://www.itnonline.com/article/mobile-stroke-unit-reduces-time-treatment
For more fine-grained predictions
you probably want to use better imaging modalities?
Predicting Motor Outcome in Acute
Intracerebral Hemorrhage (May 2019)
J. Puig, G. Blasco, M. Terceño, P. Daunis-i-Estadella, G. Schlaug, M. Hernandez-
Perez, V. Cuba, G. Carbó, J. Serena, M. Essig, C.R. Figley, K. Nael, C. Leiva-
Salinas, S. Pedraza and Y. Silva
https://doi.org/10.3174/ajnr.A6038
Predicting motor outcome following
intracerebral hemorrhage is challenging. We
tested whether the combination of
clinical scores and Diffusion tensor
imaging (DTI)-based assessment of
corticospinal tract damage within the first 12
hours of symptom onset after intracerebral
hemorrhage predicts motor outcome at 3
months.
Combined assessment of motor function
and posterior limb of the internal capsule
damage during acute intracerebral
hemorrhage accurately predicts motor
outcome.
Assessing corticospinal tract involvement with diffusion tensor tractography superimposed on gradient
recalled echo and FLAIR images. In the upper row, the corticospinal tract was affected by ICH (passes through
it) at the level of the corona radiata and posterior limb of the internal capsule. Note that in lower row, the
corticospinal tract was displaced slightly forward but preserved around the intracerebral hematoma. Vol
indicates volume.
Example of ROI object maps used to measure
intracerebral hematoma (blue) and perihematomal
edema (yellow) volumes.
Combining mNIHSS and PLIC affected by ICH in the first
12 hours of onset can accurately predict motor outcome.
The reliability of DTI in denoting very early damage to
the CST could make it a prognostic biomarker
useful for determining management strategies
to improve outcome in the hyperacute stage.
Our approach eliminates the need for advanced
postprocessing techniques that are time-
consuming and require greater specialization, so it can
be applied more widely and benefit more patients.
Prospective large-scale studies are warranted to
validate these findings and determine whether this
information could be used to stratify risk in patients with
ICH.
Clinicians like to hunt for “(linear) magical biomarkers”
as opposed to nonlinear multivariate models with higher capacity (and higher probability of overfitting as well)
Early hematoma retraction in
intracerebral hemorrhage is
uncommon and does not predict
outcome
Ana C. Klahr,Mahesh Kate,Jayme Kosior,Brian
Buck,Ashfaq Shuaib,Derek Emery,Kenneth Butcher
Published: October 9, 2018
https://doi.org/10.1371/journal.pone.0205436
Clot retraction in intracerebral hemorrhage (ICH)
has been described and postulated to be
related to effective hemostasis and
perihematoma edema (PHE) formation. The
incidence and quantitative extent of hematoma
retraction (HR) is unknown. Our aim was to
determine the incidence of HR between baseline
and time of admission. We also tested the
hypothesis that patients with HR had higher PHE
volume and good prognosis.
Early HR is rare and associated with IVH, but not
with PHE or clinical outcome. There was no
relationship between HR, PHE, and patient
prognosis. Therefore, HR is unlikely to be a useful
endpoint in clinical ICH studies.
Perihematomal Edema (PHE) Diagnostic value?
Neoplastic and Non-Neoplastic Causes of Acute
Intracerebral Hemorrhage on CT: The
Diagnostic Value of Perihematomal Edema
Jawed Nawabi, Uta Hanning, Gabriel Broocks, Gerhard Schön, Tanja Schneider, Jens
Fiehler, Christian Thaler & Susanne Gellissen
Clinical Neuroradiology (2019)
https://doi.org/10.1007/s00062-019-00774-4
The aim of this study was to investigate the
diagnostic value of perihematomal
edema (PHE) volume in non-enhanced
computed tomography (NECT) to
discriminate neoplastic and non-neoplastic
causes of acute intracerebral hemorrhage
(ICH).
Relative PHE with a cut-off of >0.50 is a
specific and simple indicator for
neoplastic causes of acute ICH and a
potential tool for clinical implementation. This
observation needs to be validated in an
independent patient cohort.
Two representative cases of region-of-interest object maps used to measure intracerebral
hemorrhage (ICH) volume (Vol ICH) and total hemorrhage (Vol ICH+PHE) volume.
a Neoplastic and non-neoplastic ICH volume (red) and b total hemorrhage volume
(grey) on non-enhanced CT (NECT) delineated with an edge detection
algorithm. c Neoplastic and non-neoplastic PHE (green) calculated by subtracting ICH
volume from total hemorrhage volume (Vol PHE = Vol ICH+PHE − Vol ICH)
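The volume subtraction described in the caption is straightforward to reproduce once ICH and total (ICH+PHE) segmentation masks are available. A minimal NumPy sketch, where the masks, voxel spacing and the `mask_volume_ml` helper are illustrative stand-ins rather than anything from the paper:

```python
import numpy as np

def mask_volume_ml(mask, voxel_spacing_mm):
    """Volume of a boolean segmentation mask in millilitres (mm^3 -> mL)."""
    voxel_ml = float(np.prod(voxel_spacing_mm)) / 1000.0
    return mask.sum() * voxel_ml

# Hypothetical masks on a (slice, row, col) grid with typical clinical spacing
spacing = (5.0, 0.45, 0.45)            # thick slices, fine in-plane resolution (mm)
total = np.zeros((10, 64, 64), dtype=bool)
total[3:7, 10:40, 10:40] = True        # stand-in for total hemorrhage (ICH + PHE)
core = np.zeros_like(total)
core[4:6, 15:35, 15:35] = True         # stand-in for the ICH core alone

vol_phe = mask_volume_ml(total, spacing) - mask_volume_ml(core, spacing)
rel_phe = vol_phe / mask_volume_ml(core, spacing)  # relative PHE, cf. the >0.50 cut-off
```

The ratio `rel_phe` corresponds to the relative PHE measure for which the slide above reports a >0.50 cut-off as an indicator of neoplastic causes.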
Young patients tend to recover better (seems obvious)
Is nontraumatic intracerebral hemorrhage
different between young and elderly
patients?
Na Rae Yang, Ji Hee Kim, Jun Hyong Ahn, Jae Keun Oh, In Bok Chang
& Joon Ho Song Neurosurgical Review volume 43, pages781–
791(2020) https://doi.org/10.1007/s10143-019-01120-5
Only a few studies have reported
nontraumatic intracerebral hemorrhage in
young patients notwithstanding its fatal
and devastating characteristics. This study
investigated the clinical characteristics and
outcome of nontraumatic intracerebral
hemorrhage in young patients in
comparison to those of the elderly.
Nontraumatic intracerebral hemorrhage in
younger patients appears to be
associated with excessive alcohol
consumption and high BMI. Younger
patients had similar short-term
mortality but more favorable
functional outcome than the elderly.
Distribution of modified Rankin Scale scores at the last follow-up for each group
Genotype-based differences exist
Racial/ethnic disparities in the risk of intracerebral
hemorrhage recurrence
Audrey C. Leasure, Zachary A. King, Victor Torres-Lopez, Santosh B. Murthy, Hooman Kamel, Ashkan Shoamanesh, Rustam
Al-Shahi Salman, Jonathan Rosand, Wendy C. Ziai, Daniel F. Hanley, Daniel Woo, Charles C. Matouk, Lauren H. Sansing,
Guido J. Falcone, Kevin N. Sheth
Neurology December 12, 2019
https://doi.org/10.1212/WNL.0000000000008737
To estimate the risk of intracerebral hemorrhage (ICH) recurrence in a
large, diverse, US-based population and to identify racial/ethnic and
socioeconomic subgroups at higher risk. Black and Asian patients
had a higher risk of ICH recurrence than white patients, whereas
private insurance was associated with reduced risk compared to those
with Medicare.
Further research is needed to determine the drivers of these
disparities. While this is the largest study of ICH recurrence in a United
States–based, racially and ethnically diverse population, our study has
several limitations related to the use of administrative data that require
consideration. First, there is a possibility of misclassification of the
exposures and outcomes. The attribution of race/ethnicity that is not
based on direct self-report may not be accurate; for example, patients
who belong to 2 or more racial/ethnic categories may be classified
based on phenotypic descriptions and may not reflect true
ancestry. In terms of outcome classification, we relied on ICD-9-CM
codes to identify our outcome of recurrent ICH. However, we used
previously validated diagnosis codes that have high positive predictive
values for identifying primary ICH
As ICH is not that well understood, new mechanisms are proposed
Global brain inflammation in stroke
Kaibin Shi et al. (Lancet Neurology, July 2019)
https://doi.org/10.1016/S1474-4422(19)30078-X
Stroke, including acute ischaemic stroke (AIS) and
intracerebral haemorrhage (ICH), results in
neuronal cell death and the release of factors
such as damage-associated molecular patterns
(DAMPs) that elicit localised inflammation in the
injured brain region. Such focal brain
inflammation aggravates secondary brain
injury by exacerbating blood–brain barrier damage,
microvascular failure, brain oedema, oxidative stress,
and by directly inducing neuronal cell death.
In addition to inflammation localised to the injured
brain region, a growing body of evidence suggests
that inflammatory responses after a stroke occur and
persist throughout the entire brain. Global brain
inflammation might continuously shape the
evolving pathology after a stroke and affect the
patients' long-term neurological outcome.
Future efforts towards understanding the
mechanisms governing the emergence of so-called
global brain inflammation would facilitate modulation
of this inflammation as a potential therapeutic
strategy for stroke.
MMPs in ICH? In emerging theories
Matrix Metalloproteinases in Acute
Intracerebral Hemorrhage
Simona Lattanzi, Mario Di Napoli, Silvia Ricci & Afshin A. Divani
Neurotherapeutics (January 2020)
https://doi.org/10.1007/s13311-020-00839-0
So far, clinical trials on ICH have mainly targeted primary
cerebral injury and have substantially failed to improve
clinical outcomes.
The understanding of the pathophysiology of early and delayed
injury after ICH is, hence, of paramount importance to identify
potential targets of intervention and develop effective
therapeutic strategies. Matrix metalloproteinases (MMPs)
represent a ubiquitous superfamily of structurally related zinc-
dependent endopeptidases able to degrade any component of
the extracellular matrix. They are upregulated after ICH, in
which different cell types, including leukocytes, activated
microglia, neurons, and endothelial cells, are involved in their
synthesis and secretion. The role of MMPs as a potential target
for the treatment of ICH has been widely discussed in the last
decade. The impact of MMPs on extracellular matrix
destruction and blood–brain barrier (BBB) disruption in
patients suffering from ICH has been of interest.
The aim of this review is to summarize the available
experimental and clinical evidence about the role of MMPs in
brain injury following spontaneous ICH and provide critical
insights into the underlying mechanisms.
Overall, there is substantially converging evidence from
experimental studies to suggest that early and short-
term inhibition of MMPs after ICH can be an
effective strategy to reduce cerebral damage
and improve the outcome, whereas long-term
treatment may be associated with more harm than
benefit. It is, however, worth noting that, so far, we do
not have a clear understanding of the time-specific
role that the different MMPs assume within the
pathophysiology of secondary brain injury and recovery
after ICH. In addition, most of the studies exploring
pharmacological strategies to modulate MMPs can
only provide indirect evidence of the benefit to target
MMP activity.
The prospects for effective therapeutic targeting of
MMPs require the establishment of conditions to
specifically modulate a given MMP isoform, or a subset of
MMPs, in a given spatio-temporal context (Rivera 2019).
Further research is warranted to better understand the
interactions between MMPs and their molecular
and cellular environments, determine the optimal
timing of MMPs inhibition for achieving a favorable
therapeutic outcome, and implement the discovery of
innovative selective agents to spare harmful effects
before therapeutic strategies targeting MMPs can be
successfully incorporated into routine practice
(Lattanzi et al. 2018; Hostettler et al. 2019).
What are the treatments for
ICH and can we do prescriptive
modeling (“precision medicine”),
and tailor the treatment
individually?
Hemostatic
Therapy
Overview
Management of Intracerebral
Hemorrhage: JACC Focus Seminar
Matthew Schrag,Howard Kirshner
Journal of the American College of Cardiology
Volume 75, Issue 15, 21 April 2020
https://doi.org/10.1016/j.jacc.2019.10.066
Animal models of ICH exist of course as well
Intracerebral haemorrhage: from clinical settings to
animal models Qian Bai et al. (2020)
http://dx.doi.org/10.1136/svn-2020-000334
Effective treatment for ICH is still scarce. However, clinical
therapeutic strategies include medication and surgery. Drug
therapy is the most common treatment for ICH. This includes
prevention of ICH based on treating an individual’s underlying
risk factors, for example, control of hypertension. Hyperglycaemia
in diabetics is common after stroke; managing glucose level may
reduce the stroke size. Oxygen is given as needed. Surgery can be
used to prevent ICH by repairing vascular damage or
malformations in and around the brain, or to treat acute ICH by
evacuating the haematoma; however, the benefit of surgical
treatment is still controversial due to very few controlled
randomised trials. Rehabilitation may help overcome disabilities
that result from ICH damage.
Despite great advances in ischaemic stroke, no prominent improvement
in morbidity and mortality after ICH has been realised. The current
understanding of ICH is still limited, and the models do not
completely mirror the human condition. Novel effective modelling is
required to mimic spontaneous ICH in humans and allow for effective
studies on mechanisms and treatment of haematoma expansion and
secondary brain injury.
Genomics for Stroke recovery #1
Genetic risk factors for
spontaneous intracerebral
haemorrhage Amanda M. Carpenter, I.
P. Singh, Chirag D. Gandhi, Charles J.
Prestigiacomo (Nature Reviews
Neurology 2016)
https://doi.org/10.1038/nrneurol.2015.226
Familial aggregation of ICH has been
observed, and the heritability of ICH
risk has been estimated at 44%.
Few genes have been found to be
associated with ICH at the population
level, and much of the evidence for
genetic risk factors for ICH comes
from single studies conducted in
relatively small and homogenous
populations. In this Review, we
summarize the current knowledge of
genetic variants associated with primary
spontaneous ICH.
Although evidence for genetic
contributions to the risk of ICH exists, we
do not yet fully understand how and
to what extent this information can be
utilized to prevent and treat ICH.
Genomics for Stroke recovery #2
Genetic underpinnings of recovery after
stroke: an opportunity for gene discovery,
risk stratification, and precision medicine
Julián N. Acosta et al. (September 2019)
https://doi.org/10.1186/s13073-019-0671-5
As the number of stroke survivors continues to increase,
identification of therapeutic targets for stroke
recovery has become a priority in stroke genomics
research. The introduction of high-throughput
genotyping technologies and novel analytical tools has
significantly advanced our understanding of the genetic
underpinnings of stroke recovery.
In summary, functional outcome and recovery
constitute important endpoints for genetic studies
of stroke. The combination of improving statistical power
and novel analytical tools will surely lead to the discovery
of novel pathophysiological mechanisms
underlying stroke recovery. Information on these
newly discovered pathways can be used to develop new
rehabilitation interventions and precision-
medicine strategies aimed at improving management
options for stroke survivors. The continuous growth and
strengthening of existing dedicated collaborations and the
utilization of standardized approaches to ascertain
recovery-related phenotypes will be crucial for the
success of this promising field.
Genetic risk of Spontaneous intracerebral hemorrhage: Systematic
review and future directions Kolawole Wasiu et al. (15 December 2019)
https://doi.org/10.1016/j.jns.2019.116526
Given this limited information on the genetic contributors to spontaneous intracerebral hemorrhage (SICH),
more genomic studies are needed to provide additional insights into the pathophysiology of SICH, and
develop targeted preventive and therapeutic strategies. This call for additional investigation of the
pathogenesis of SICH is likely to yield more discoveries in the unexplored indigenous African populations
which also have a greater predilection.
Multilevel omics for the discovery of biomarkers and therapeutic
targets for stroke Joan Montaner et al. (22 April 2020)
https://doi.org/10.1016/j.jns.2019.116526
Despite many years of research, no biomarkers for stroke are available to use in clinical practice. Progress in high-
throughput technologies has provided new opportunities to understand the pathophysiology of this complex disease, and
these studies have generated large amounts of data and information at different molecular levels. We summarize how
proteomics, metabolomics, transcriptomics and genomics are all contributing to the identification of new candidate
biomarkers that could be developed and used in clinical stroke management.
Influences of genetic variants on stroke recovery: a meta-analysis of
31,895 cases Nikhil Math et al. (29 July 2019)
https://doi.org/10.1007/s10072-019-04024-w
17p12 Influences Hematoma Volume and Outcome in Spontaneous
Intracerebral Hemorrhage Sandro Marini et al. (30 Jul 2018)
https://doi.org/10.1016/j.jns.2019.116526
Surgical management not that well understood either
Surgery for spontaneous intracerebral
hemorrhage (Feb 2020)
Airton Leonardo de Oliveira Manoel
https://doi.org/10.1186/s13054-020-2749-2
Spontaneous intracerebral hemorrhage is a devastating disease,
accounting for 10 to 15% of all types of stroke; however, it is
associated with disproportionally higher rates of
mortality and disability. Despite significant progress in the
acute management of these patients, the ideal surgical
management is still to be determined. Surgical hematoma
drainage has many theoretical benefits, such as the prevention of
mass effect and cerebral herniation, reduction in intracranial
pressure, and the decrease of excitotoxicity and neurotoxicity of
blood products.
Mechanisms of secondary brain injury
after ICH. MLS - midline shift; IVH -
intraventricular hemorrhage
Case 02 of open craniotomy for hematoma
drainage. a, b Day 1—Large hematoma in the left
cerebral hemisphere leading to collapse of the left
lateral ventricle with a midline shift of 12 mm, with a
large left ventricular and third ventricle flooding, as
well as diffuse effacement of cortical sulci of that
hemisphere. c–e Day 2—Left frontoparietal
craniotomy, with well-positioned bone fragment,
aligned and fixed with metal clips. Reduction of the
left frontal/frontotemporal intraparenchymal
hematic content, with remnant hematic residues
and air foci in this region. There was a significant
reduction in the mass effect, with a decrease in
lateral ventricular compression and a reduction in
the midline shift. Bifrontal pneumocephalus
causing shift and compressing the adjacent
parenchyma. f–h Day 36—Resolution of residual
hematic residues and pneumocephalus.
Encephalomalacia in the left frontal/frontotemporal
region. Despite the good surgical results, the
patient remained in vegetative state
Open craniotomy. Patient lies on an
operating table and receives general
anesthesia. The head is set in a three-pin
skull fixation device attached to the
operating table, in order to hold the head
standing still. Once the anesthesia and
positioning are established, skin is
prepared, cleaned with an antiseptic
solution, and incised typically behind the
hairline. Then, both skin and muscles are
dissected and lifted off the skull. Once
the bone is exposed, burr holes are built
in by a special drill. The burr holes are
made to permit the entrance of the
craniotome. The craniotomy flap is lifted
and removed, uncovering the dura mater.
The bone flap is stored to be replaced at
the end of the procedure. The dura mater
is then opened to expose the brain
parenchyma. Surgical retractors are
used to open a passage to assess the
hematoma. After the hematoma is
drained, the retractors are removed, the
dura mater is closed, and the bone flap is
positioned, aligned, and fixed with metal
clips. Finally, the skin is sutured
Real-time segmentation for ICH surgery?
Intraoperative CT and cone-beam CT
imaging for minimally invasive
evacuation of spontaneous
intracerebral hemorrhage
Nils Hecht et al. (Acta Neurochirurgica 2020)
https://doi.org/10.1007/s00701-020-04284-y
Minimally invasive surgery (MIS) for evacuation
of spontaneous intracerebral hemorrhage (ICH)
has shown promise but there remains a need
for intraoperative performance assessment
considering the wide range of evacuation
effectiveness. In this feasibility study, we
analyzed the benefit of intraoperative 3-
dimensional imaging during navigated
endoscopy-assisted ICH evacuation by
mechanical clot fragmentation and aspiration.
Routine utilization of intraoperative
computerized tomography (iCT) or
cone-beam CT (CBCT) imaging in MIS for
ICH permits direct surgical performance
assessment and the chance for immediate
re-aspiration, which may optimize targeting of
an ideal residual hematoma volume and reduce
secondary revision rates.
CT Anatomical
Background
Non-Contrast CT What are you seeing?
An Evidence-Based Approach To Imaging Of Acute
Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
HU Units: absolute units that “mean something”
A CT scanner is basically a density measurement device
https://www.sciencedirect.com/topics/medicine-and-dentistry/hounsfield-scale
A, Axial CT slice, viewed with brain window settings. Notice in the grayscale bar at the right side of
the figure that the full range of shades from black to white has been distributed over a narrow HU range,
from zero (pure black) to +100HU (pure white). This allows fine discrimination of tissues within this
density range, but at the expense of evaluation of tissues outside of this range. A large subdural hematoma
is easily discriminated from normal brain, even though the two tissues differ in density by less than 100HU.
Any tissues greater than +100HU in density will appear pure white, even if their densities are dramatically
different. Consequently, the internal structure of bone cannot be seen with this window setting. Fat
(−50 HU) and air (−1000 HU) cannot be distinguished with this setting, as both have densities less than
zero HU and are pure black.
B, The same axial CT slice viewed with a bone
window setting. Now the scale bar at the right side
of the figure shows the grayscale to be distributed
over a very wide HU range, from -450HU (pure
black) to +1050HU (pure white). Air can easily be
discriminated from soft tissues on this setting
because it is assigned pure black, while soft tissues
are dark gray. Details of bone can be seen,
because a large portion of the total range of gray
shades is devoted to densities in the range of bone.
Soft tissue detail is lost in this window setting,
because the range of soft tissue densities (-50HU to
around +100HU) represents a narrow portion of the
gray scale.
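The brain and bone window settings described in the two captions above amount to a linear HU-to-grayscale mapping with clipping at both ends. A minimal sketch, assuming HU values are already reconstructed; the function name and the sample HU values are illustrative:

```python
import numpy as np

def apply_window(hu, hu_min, hu_max):
    """Linearly map [hu_min, hu_max] to [0, 1] grayscale; clip everything outside."""
    gray = (np.asarray(hu, dtype=float) - hu_min) / (hu_max - hu_min)
    return np.clip(gray, 0.0, 1.0)

# Rough representative densities: air, fat, water, acute blood, dense bone
hu = np.array([-1000.0, -50.0, 0.0, 60.0, 1000.0])

brain = apply_window(hu, 0.0, 100.0)      # brain window from the text: 0 to +100 HU
bone = apply_window(hu, -450.0, 1050.0)   # bone window from the text: -450 to +1050 HU
```

In the brain window, air, fat and water all collapse to pure black (0.0) and bone saturates at pure white (1.0), exactly the trade-off the caption describes; the bone window spreads the grayscale over a much wider range, recovering bone detail at the cost of soft-tissue contrast.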
HU Units: water (~1 kg/L) is ~1000× denser than air (~1 g/L)
Clinical CT quick intro on what you see
How to interpret an unenhanced CT Brain scan. Part 1: Basic principles of Computed
Tomography and relevant neuroanatomy (2016)
http://www.southsudanmedicaljournal.com/archive/august-2016/how-to-interpret-an-unenhanced-ct-brain-scan.-part-1-basic-principles-of-computed-tomography-and-relevant-neuroanatomy.html
Cuts and Gantry Tilt: clinical CT typically has quite thick cuts
https://slideplayer.com/slide/5990473/ Computed Tomography II –
RAD 473, published by Melinda Wiggins
https://slideplayer.com/slide/7831746/
Design pattern for multi-modal
coordinate spaces
Figure 4: Planning the location of the CT slices,
with tilted gantry. The gantry is tilted to avoid
radiating the eyes, while capturing a maximum
of relevant anatomical data.
https://www.researchgate.net/publication/228672978_Design_pattern_for_multi-modal_coordinate_spaces
Tilting the gantry for CT-guided spine procedures
https://doi.org/10.1007/s11547-013-0344-1 Gantry tilt. Use of bolsters.
Gantry-needle alignment. a, b Range of gantry angulation, which is ±30° on most scanners.
Spine curvature and spatial orientation can be modified using bolsters and wedges.
A bolster under the lower abdomen (c) flattens the lordotic curvature and reduces
the L5–S1 disc plane obliquity; under the chest (d) flattens the thoracic kyphosis and
reduces the upper thoracic pedicles' obliquity; under the hips (e) increases the
lordosis and brings the long-axis of the sacrum closer to the axial plane. The desired
needle path for spinal accesses can be paralleled by gantry tilt (solid lines on c– e)
relative to straight axial orientation (dashed lines on c– e). f Gantry-needle alignment,
with laser beam precisely bisecting the needle at the hub and the skin entry point.
Maintaining this alignment keeps the needle in plane and allows visualization of the
entire needle throughout its trajectory on a single CT slice
Diagnosing strokes with imaging CT, MRI, and Angiography | Khan Academy
https://www.khanacademy.org/science/health-and-medicine/circulatory-system-diseases/stroke/v/diagnosing-strokes-with-imaging-ct-mri-and-angiography
CT Skull Window microstructure of bone might bias your brain model?
Estimation of skull table thickness with clinical CT and validation
with microCT http://doi.org/10.1111/joa.12259
Loss of bone mineral density following
sepsis using Hounsfield units by
computed tomography
http://doi.org/10.1002/ams2.401
Opportunistic osteoporosis screening via the measurement of frontal skull Hounsfield units derived from brain computed tomography images
https://doi.org/10.1371/journal.pone.0197336
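The measurement idea behind such opportunistic screening reduces to masked averaging of HU values; the ROI, threshold, and numbers below are illustrative, not the paper's protocol:

```python
import numpy as np

# Hedged sketch: average HU inside a bone ROI on a CT slice.
# The synthetic slice, threshold, and ROI are illustrative only.
def mean_roi_hu(ct_slice, mask):
    """Mean Hounsfield value of the voxels selected by a boolean mask."""
    return float(ct_slice[mask].mean())

slice_hu = np.full((64, 64), 30.0)   # soft-tissue background
slice_hu[5:10, 20:44] = 1100.0       # synthetic "frontal bone" band
roi = slice_hu > 300                 # crude bone mask by thresholding
print(mean_roi_hu(slice_hu, roi))    # 1100.0
```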
The ADAM-pelvis phantom - an anthropomorphic,
deformable and multimodal phantom for MrgRT
http://doi.org/10.1088/1361-6560/aafd5f
Construction and analysis of a head CT-scan database for craniofacial reconstruction
Françoise Tilotta, Frédéric Richard, Joan Alexis Glaunès, Maxime Berar, Servane Gey, Stéphane Verdeille, Yves
Rozenholc, Jean-François Gaudy https://hal-descartes.archives-ouvertes.fr/hal-00278579/document
CT Bone very useful for brain imaging/stimulation simulation models
e.g. ultrasound and NIRS
Measurements of the Relationship Between CT
Hounsfield Units and Acoustic Velocity and How It
Changes With Photon Energy and Reconstruction
Method
Webb TD, Leung SA, Rosenberg J, Ghanouni P, Dahl JJ, Pelc NJ, Pauly KB
IEEE Transactions on Ultrasonics, Ferroelectrics, and Frequency Control, 01 Jul 2018,
65(7):1111-1124 https://doi.org/10.1109/tuffc.2018.2827899
Transcranial magnetic resonance-guided focused ultrasound
continues to gain traction as a noninvasive treatment option for a
variety of pathologies. Focusing ultrasound through the skull
can be accomplished by adding a phase correction to each element
of a hemispherical transducer array. The phase corrections are
determined with acoustic simulations that rely on speed of sound
estimates derived from CT scans. While several studies have
investigated the relationship between acoustic velocity and
CT Hounsfield units (HUs), these studies have largely ignored the
impact of X-ray energy, reconstruction method, and reconstruction
kernel on the measured HU, and therefore the estimated velocity, and
none have measured the relationship directly.
As measured by the R-squared value, the results show that CT is
able to account for 23%-53% of the variation in velocity in
the human skull. Both the X-ray energy and the reconstruction
technique significantly alter the R-squared value and the linear
relationship between HU and speed of sound in bone. Accounting for
these variations will lead to more accurate phase corrections
and more efficient transmission of acoustic energy through
the skull.
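The linear HU-to-velocity models the paper compares can be reproduced in miniature with an ordinary least-squares fit; the data points below are synthetic stand-ins, not measurements from the paper:

```python
import numpy as np

# Hedged sketch: fit a linear HU -> acoustic velocity model and report R^2,
# as in the study's comparisons. The sample points are SYNTHETIC, not the
# paper's measurements; slope/intercept here are arbitrary illustration.
hu = np.array([600., 900., 1200., 1500., 1800.])
velocity = 1500.0 + 0.8 * hu + np.array([20., -15., 10., -5., 12.])  # m/s

slope, intercept = np.polyfit(hu, velocity, 1)
r2 = np.corrcoef(hu, velocity)[0, 1] ** 2
print(round(slope, 2))  # ~0.8, recovering the slope used to generate the data
```

The paper's point is that this slope and the R² change with X-ray energy and reconstruction kernel, so a single fitted line does not transfer between acquisition protocols.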
The impact of CT energy as measured by the dual energy scan on the GE system with a bone kernel. a) The dotted
line shows the HU calculated using Equation (1) and linear attenuation values from NIST. The circles show the average HU measured
in the densest sample of cortical bone as measured by the average HU (red), the average HU value of all the fragments from the inner
and outer tables (yellow), and the average HU value of all the fragments from the medullary bone (purple). Error bars show the
standard deviation. b) Speed of sound as a function of HU for five different energies.
Comparison of the measurements presented in this paper to prior models. a)
Comparison to prior models using data from the monochromatic images acquired
with the dual energy scan on the GE system. b) Comparison to prior models
using standard CT scans with unknown effective energies. In order to estimate
Aubry’s model at each energy, an effective energy of 2/3 of the peak tube voltage
was assumed.
Further work needs to be done to
characterize either an average
relationship across a patient
population or a method for adapting
velocity estimates to specific
patient skulls. Such a study will
require a large number of skulls and is
outside the scope of the present
work.
Future studies should examine whether improvements in velocity estimates and phase corrections (e.g. using ultrashort echo time (UTE) MRI) will lead to more efficient transfer of acoustic energy through the skull, resulting in a decrease in the energy required to achieve ablation at the focal spot.
Muscle/Fat CT also useful
(a) The relationship between gray level and Hounsfield units (HU) determined by window level (WL), window width
(WW), and bit depth per pixel (BIT). (b) The effect of different WL, WW, and BIT configurations on the same image
Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies Muscle on Computed Tomography for Body Morphometric Analysis
Hyunkwang Lee & Fabian M. Troschel & Shahein Tajmir & Georg Fuchs & Julia Mario & Florian J. Fintelmann & Synho Do
J Digit Imaging, http://doi.org/10.1007/s10278-017-9988-z
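The WL/WW/bit-depth relationship described in (a) can be sketched as a small function; the brain-window values used below (WL 40, WW 80) are common defaults, not taken from the paper:

```python
import numpy as np

def window_image(hu, wl, ww, bits=8):
    """Map HU values to display gray levels via window level/width and bit depth."""
    lo, hi = wl - ww / 2.0, wl + ww / 2.0
    gray_max = 2 ** bits - 1
    g = (np.clip(hu, lo, hi) - lo) / (hi - lo) * gray_max
    return np.round(g).astype(np.int32)

# A common brain window (WL=40, WW=80) spreads HU 0..80 across the gray scale;
# everything below 0 HU clips to black, everything above 80 HU to white.
hu = np.array([-1000, 0, 40, 80, 1000])
print(window_image(hu, wl=40, ww=80))  # [  0   0 128 255 255]
```

Narrowing WW increases contrast within the window at the cost of clipping more of the HU range, which is why bone and brain windows of the same slice look so different.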
Body Composition as a Predictor of Toxicity in
Patients Receiving Anthracycline and Taxane–
Based Chemotherapy for Early-Stage Breast
Cancer
http://doi.org/10.1158/1078-0432.CCR-16-2266
Quantitative analysis of skeletal muscle by
computed tomography imaging—State of the
art https://doi.org/10.1016/j.jot.2018.10.004
Base of Skull Axial CT: Where skull stripping could use deep learning
Base of skull, axial CT
1) Nasal spine of frontal bone
2) Eyeball
3) Frontal process of zygomatic bone
4) Ethmoidal air cells
5) Temporal fossa
6) Greater wing of sphenoid bone
7) Sphenoidal sinus
8) Zygomatic process of temporal bone
9) Head of mandible
10) Carotid canal, first part
11) Jugular foramen, posterior to intrajugular process
12) Posterior border of jugular foramen
13) Sigmoid sinus
14) Lateral part of occipital bone
15) Hypoglossal canal
16) Foramen magnum
17) Nasal septum
18) Nasal cavity
19) Body of sphenoid bone
20) Foramen lacerum
21) Foramen ovale
22) Foramen spinosum
23) Sphenopetrous fissure / Eustachian tube
24) Carotid canal, second part
25) Air cells in temporal bone
26) Apex of petrous bone
27) Petro-occipital fissure
Radiology Key
Fastest Radiology Insight Engine
https://radiologykey.com/skull/
CSF Spaces as seen by CT
An Evidence-Based Approach To Imaging Of Acute
Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Air in Brain as seen by CT
Air defines anatomical shapes useful outside ICH analysis
→
A multiscale imaging and modelling
dataset of the human inner ear
Gerber et al. (2017) Scientific Data volume
4, Article number: 170132 (2017)
https://doi.org/10.1038/sdata.2017.132
BE-FNet: 3D Bounding Box
Estimation Feature Pyramid
Network for Accurate and Efficient
Maxillary Sinus Segmentation
Zhuofu Deng et al. (2020)
https://doi.org/10.1155/2020/5689301
Maxillary sinus segmentation plays an important
role in the choice of therapeutic strategies for
nasal disease and treatment monitoring.
Traditional approaches struggle with extremely heterogeneous intensities caused by lesions, abnormal anatomical structures, and blurred cavity boundaries.
Development of CT-based methods
for longitudinal analyses of
paranasal sinus osteitis in
granulomatosis with polyangiitis
Sigrun Skaar Holme et al. (2019)
https://doi.org/10.1186/s12880-019-0315-7
Even though progressive rhinosinusitis with
osteitis is a major clinical problem in
granulomatosis with polyangiitis (GPA), there are
no studies on how GPA-related osteitis develops
over time, and no quantitative methods for
longitudinal assessment. Here, we aimed to
identify simple and robust CT-based methods for
capture and quantification of time-dependent
changes in GPA-related paranasal sinus osteitis
Gray/White Matter Contrast not as nice as with MRI
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions
(2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Comparison between brain-dead patients' and normal control subjects' CT scans: 1, normal control CT scan; 2, CT
scan with loss of WM/GM differentiation; 3, CT scan with reversed GM/WM ratio.
Gray Matter-White Matter De-Differentiation on Brain Computed Tomography Predicts Brain Death Occurrence.
https://doi.org/10.1016/j.transproceed.2016.05.006
Calcifications choroid plexus and pineal gland very common locations
Intracranial calcifications on CT: an
updated review Charbel Saade, Elie Najem, Karl
Asmar, Rida Salman, Bassam El Achkar, Lena Naffaa (2019)
http://doi.org/10.3941/jrcr.v13i8.3633
In a study by Yalcin et al. (2016) that
focused on determining the location and extent of
intracranial calcifications in 11,941 subjects, the
pineal gland was found to be the most common
site of physiologic calcification (71.6%), followed
by the choroid plexus (70.2%), with male
dominance at both sites and mean ages of 47.3
and 49.8 years respectively. However, the choroid
plexus was found to be the most common site
of physiologic calcification after the 5th
decade, and second most common after the
pineal gland in subjects aged 15–45 years.
According to Yalcin et al. (2016), dural
calcifications were seen in up to 12.5% of the
studied population, the majority found in male
patients. Basal ganglia calcifications (BGC) were
found in only 1.3% of subjects in the same study.
Interestingly, BGC were reported to be more
prevalent among females than males, with a mean
age of 52.
Examples of patterns of calcification and related terminology. (a)
dots, (b) lines, (c) conglomerate or mass-like, (d) rock-like, (e)
blush, (f) gyriform/band-like, (g) stippled (h) reticular.
Calcifications #2
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions
(2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Pineal gland of a 72-year-old male.
Image a reveals the outlined pineal gland on
sagittal plane and image b demonstrates the 3-
dimensional image and volume of the tissue.
Green areas on image c and d exhibit the
restricted parenchyma by excluding all the
calcified tissues from the slices.
http://doi.org/10.5334/jbr-btr.892
Pineal gland of a 35-year-old female.
Image a and b reveal the outlined pineal gland on
sagittal (a) and axial (b) planes on noncontrast
computerized tomography images. Green areas
on image c exhibit the restricted parenchyma by
excluding all the calcified tissues from the slices.
Image d demonstrates the 3-dimensional image
and volume of noncalcifed pineal tissue.
We assume that optimized volumetry of active
pineal tissue and therefore a higher correlation
of melatonin and pineal parenchyma can
potentially be improved by a combination of
MR and CT imaging in addition to serum
melatonin levels. Moreover, in order to improve
MR quantification of pineal calcifications, the
combined approach would possibly allow an
optimization and calibration of MRI sequences by
CT and then perhaps even make CT
unnecessary
Masses real or hacked “adversarial attacks”
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions
(2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
by Brittany Goetting — Thursday, April 04, 2019, 09:24 PM EDT
Terrifying Malware Alters CT Scans To
Look Like Cancer, Fools Radiologists
https://hothardware.com/news/malware-creates-fake-cancerous-nodes-in-ct-scans
... Unfortunately, this vital technology is vulnerable to hackers. Researchers recently
designed malware that can add or take away fake cancerous nodules from CT and MRI
scans. Researchers at the University Cyber Security Research Center in Israel
developed malware that can modify CT and MRI scans. During their research, they
showed radiologists real lung CT scans, 70 of which had been altered. At least three
radiologists were fooled nearly every time.
Pituitary apoplexy: two very different
presentations with one unifying diagnosis
CT brain scan showing a
hyperdense mass arising
from the pituitary fossa,
representing pituitary
macroadenoma with
haemorrhage
http://doi.org/10.1258/shorts.2010.100073
Cerebral Abscess
Low density due to cerebral inflammatory disease. A, Typical appearance of a cerebral abscess: round,
low-density cavity (arrow) surrounded by low-density vasogenic edema. Differentiation from other cavitary
lesions such as radionecrotic cysts or cystic neoplasms often requires clinical/laboratory correlation, with help
often provided by contrast-enhanced and diffusion weighted MRI. B, Progressive multifocal
leukoencephalopathy. Whereas white matter low density is nonspecific, involvement of the subcortical
U-shaped fibers in the AIDS patient can help differentiate this disorder from HIV encephalitis. C,
Toxoplasmosis. Patchy white matter low density (asterisks) in an immunocompromised patient with
altered mental status.
https://radiologykey.com/analysis-of-density-signal-intensity-and-echogenicity/
https://www.slideshare.net/Raeez/cns-infections-radiology
Clinical stages of human brain abscesses on
serial CT scans after contrast infusion
Computerized tomographic, neuropathological,
and clinical correlations (1983)
https://doi.org/10.3171/jns.1983.59.6.0972
Ischemic stroke hypodensity (CSF-like appearance)
→
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
CT scan slice of the brain showing a right-hemispheric cerebral infarct (left
side of image). https://en.wikipedia.org/wiki/Cerebral_infarction
Brain Symmetry midline shift from mass effect #1
An Evidence-Based Approach To Imaging Of Acute
Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
https://en.wikipedia.org/wiki/Midline_shift
https://www.slideshare.net/drlokeshmahar/approach-to-head-ct
Brain Symmetry midline shift #2: Estimate with ICP
Automated Midline Shift and
Intracranial Pressure
Estimation based on Brain CT
Images
Wenan Chen, Ashwin Belle, Charles Cockrell, Kevin R. Ward,
Kayvan Najarian
J. Vis. Exp. (74), e3871, doi:10.3791/3871 (2013).
https://www.jove.com/video/3871
In this paper we present an automated system
based mainly on the computed tomography
(CT) images consisting of two main
components: the midline shift
estimation and intracranial pressure
(ICP) pre-screening system. To estimate the
midline shift, first an estimation of the ideal
midline is performed based on the symmetry
of the skull and anatomical features in the brain
CT scan.
Then, segmentation of the ventricles from the
CT scan is performed and used as a guide for
the identification of the actual midline through
shape matching.
These processes mimic the measuring process used by physicians and have shown promising results in evaluation. In the second component, additional features related to ICP are extracted, such as texture information and blood amount from the CT scans; other recorded features, such as age and injury severity score, are also incorporated to estimate the ICP.
The result of the ideal midline
detection. The red line is the
approximate ideal midline. The
two rectangular boxes cover
the bone protrusion and the
lower falx cerebri respectively.
These boxes are used to
reduce the regions of interest.
The green dash line is the final
detected ideal midline, which
captures the bone protrusion
and the lower falx cerebri
accurately.
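The symmetry-based step of ideal-midline detection can be sketched roughly as a search for the most mirror-symmetric column of the bone mask. This is a simplification, not the authors' full pipeline, which also exploits the bone protrusion and falx cerebri:

```python
import numpy as np

def estimate_ideal_midline(slice_hu, bone_hu=300):
    """Crude symmetry-based midline estimate: score each candidate column
    by how well the thresholded bone mask mirrors about it. Illustrative
    only; the threshold and search range are arbitrary choices."""
    bone = (slice_hu > bone_hu).astype(float)
    h, w = bone.shape
    best_col, best_score = w // 2, -1.0
    for c in range(w // 4, 3 * w // 4):        # search near the image center
        half = min(c, w - 1 - c)
        left = bone[:, c - half:c]
        right = bone[:, c + 1:c + 1 + half][:, ::-1]  # mirror the right half
        score = float((left * right).sum())    # overlap of mirrored bone
        if score > best_score:
            best_score, best_col = score, c
    return best_col

# Synthetic symmetric "skull": a bright ring centered on column 64
yy, xx = np.mgrid[0:128, 0:128]
r = np.hypot(yy - 64, xx - 64)
phantom = np.where((r > 50) & (r < 58), 1000.0, 0.0)
print(estimate_ideal_midline(phantom))  # 64, the symmetry axis
```

The actual midline is then found separately (via ventricle segmentation and shape matching), and the shift is the deviation between the two.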
Brain Symmetry midline shift #3: Detection algorithms
The middle slice and the anatomical markers.
A deformed midline example and the anatomical midline shift marker
https://doi.org/10.1016/j.compmedimag.2013.11.001 (2014)
A Simple, Fast and Fully Automated Approach for Midline Shift Measurement on
Brain Computed Tomography Huan-Chih Wang, Shih-Hao Ho, Furen Xiao, Jen-Hai Chou
https://arxiv.org/abs/1703.00797
Incorporating Task-Specific Structural Knowledge into CNNs for Brain Midline Shift
Detection Maxim Pisov et al. (2019)
https://doi.org/10.1007/978-3-030-33850-3_4
https://github.com/neuro-ml/midline-shift-detection
Commercial CT Scanners
Siemens hot in London
Siemens Unveils AI
Apps for Automatic
MRI Image
Segmentation
DECEMBER 4TH, 2019 MEDGADGET EDITORS
NEUROLOGY, NEUROSURGERY , RADIOLOGY,
UROLOGY
The AI-Rad Companion Brain MR for
Morphometry Analysis, without any manual
intervention, segments brain images from
MRI exams, calculates brain volume, and
automatically marks volume deviations in
result tables that neurologists rely on for
diagnostics and therapeutics. The last part it
does by comparing the levels of gray matter,
white matter, and cerebrospinal fluid in a
given patient’s brain to normal levels. This
can help with diagnosing Alzheimer’s,
Parkinson’s, and other diseases.
https://www.medgadget.com/2019/12/siemens-unveils-ai-apps-for-automatic-mri-image-segmentation.html
Siemens could provide similar tool for CT too
CT System Receives FDA Clearance for AI-Based Image Reconstruction
Technology 07 Nov 2019 Canon Medical Systems USA, Inc. (Tustin, CA, USA) has received 510(k) clearance for its Advanced Intelligent Clear-IQ Engine (AiCE) for the
Aquilion Precision https://www.medimaging.net/industry-news/articles/294779910/ct-system-receives-fda-clearance-for-ai-based-image-reconstruction-technology.html
Canon Medical is releasing a
new high-end digital PET/CT
scanner at the upcoming
RSNA conference in Chicago.
The Cartesion Prime Digital
PET/CT combines Canon’s
Aquilion Prime SP CT
scanner and the SiPM (silicon
photomultiplier) PET detector,
providing high resolution
imaging and easy operator
control, according to the
company.
Product page:
Cartesion Prime Digital PET/CT
Epica SeeFactorCT3 Multi-Modality
System Wins FDA Clearance
OCTOBER 8TH, 2019
https://www.medgadget.com/2019/10/epica-seefactorct3-multi-modality-system-wins-fda-clearance.html
The SeeFactorCT3 produces sliceless CT images, unlike typical CT systems, which
means that there’s no interpolation involved and therefore less chance of introducing
artifacts. Isotropic imaging resolution goes down to 0.1 millimeters in soft and hard
tissues and lesions that are only 0.2 millimeter in diameter can be detected. Thanks to
the company’s “Pulsed Technology,” the system can perform high resolution imaging
while reducing the overall radiation delivered. Much of this is possible thanks to a
dynamic flat panel detector that captures image sequences accurately and at high
fidelity.
A big advantage of the SeeFactorCT3 is its mobility, since it can be wheeled in and
out of ORs, through hospital halls, and even taken inside patient rooms. When set for
transport, the device is narrow enough to be pushed through a typical open door.
Royal Philips extends
diagnostic imaging
portfolio
DIAGNOSTIC DEVICESDIAGNOSTIC IMAGING
By NS Medical Staff Writer 01 Mar 2019
https://www.nsmedicaldevices.com/news/philips-incisive-ct-imaging-system/
The system is being offered with ‘Tube for Life’
guarantee, as it will replace the Incisive’s X-ray tube, the
key component of any CT system, at no additional cost
throughout the entire life of the system, potentially
lowering operating expenses by about $400,000.
Additionally, the system features the company’s iDose4
Premium Package which includes two technologies that
can improve image quality, iDose4 and metal
artifact reduction for large orthopedic implants (O-MAR).
iDose4 can improve image quality through artifact
prevention and increased spatial resolution at low dose.
O-MAR reduces artifacts caused by large orthopedic
implants. Together they produce high image quality with
reduced artifacts.
The system’s 70 kV scan mode is touted to offer
improved low-contrast detectability and confidence at
low dose.
https://youtu.be/izXI3qry8kY
Portable CTs CereTom
Review of Portable CT with Assessment of
a Dedicated Head CT Scanner
Z. Rumboldt, W. Huda and J.W. All
American Journal of Neuroradiology October
2009, 30 (9) 1630-1636
https://doi.org/10.3174/ajnr.A1603 - Cited by 91
This article reviews a number of portable CT
scanners for clinical imaging. These include
the CereTom, Tomoscan, xCAT ENT, and
OTOscan. The Tomoscan scanner consists
of a gantry with multisection detectors and a
detachable table. It can perform a full-body
scanning, or the gantry can be used without
the table to scan the head. The xCAT ENT is a
conebeam CT scanner that is intended for
intraoperative scanning of cranial bones and
sinuses. The OTOscan is a multisection CT
scanner intended for imaging in ear, nose, and
throat settings and can be used to assess
bone and soft tissue of the head.
We also specifically evaluated the technical
and clinical performance of the CereTom, a
scanner designed specifically for
neuroradiologic head imaging.
https://doi.org/10.1097/JNN.0b013e3181ce5c5b
Ginat and Gupta (2014)
https://doi.org/10.1146/annurev-bioeng-121813-113601
CT “Startup” Scanners
addressing “market inefficiencies” and going smaller and cheaper
Future of CT From energy-integrating detectors (EID) to photon-counting detectors (PCD)?
The Future of Computed Tomography
Personalized, Functional, and Precise
Alkadhi, Hatem and Euler, André
Investigative Radiology: September 2020 - Volume 55 - Issue 9 - p 545-555
http://doi.org/10.1097/RLI.0000000000000668
Modern medicine cannot be imagined without the
diagnostic capabilities of computed tomography
(CT). Although the past decade witnessed a
tremendous increase in scan speed, volume
coverage, and temporal resolution, along with a
considerable reduction of radiation dose, current
trends in CT aim toward more patient-
centric, tailored imaging approaches that
deliver diagnostic information being personalized
to each individual patient. Functional CT with
dual- and multi-energy, as well as dynamic,
perfusion imaging became clinical reality and will
further prosper in the near future, and upcoming
photon-counting detectors will deliver images
at a heretofore unmatched spatial resolution.
This article aims to provide an overview of current
trends in CT imaging, taking into account the
potential of photon-counting detector systems,
and seeks to illustrate how the future of CT will
be shaped.
CT Startup Nanox from Israel Great idea if this would work as said? #1
https://www.mobihealthnews.com/news/nanoxs-digital-x-ray-system-wins-26m-investors
The end goal is to deliver a
robust imaging system that
can drive earlier disease
detection, especially in regions
where traditional systems
are either too costly or too
complicated to roll out
broadly.
Looking at the longer term,
Nanox said that it will be seeking
regulatory approval for its
platform, and then deploying it
globally under a pay-per-scan
business model that it says will
enable cheaper medical imaging
and screening for private and
public provider systems.
CT Startup Nanox from Israel Great idea if this would work as said? #2
Muddy Waters Research @muddywatersre
MW is short $NNOX. We
conclude that $NNOX
has no product to sell
other than its stock.
Like $NKLA, NNOX
appears to have faked its
demo video. A convicted
felon appears to be behind
the IPO. A US partner has
been requesting images
for 6 months to no avail
"But NNOX gets much worse," the report
says. "A convicted felon, who crashed an
$8 billion market cap dotcom into the
ground, was seemingly instrumental in
plucking NNOX out of obscurity and
bringing its massively exaggerated story
to the U.S. NNOX touts distribution
partnerships that supposedly amount to
$180.8 million in annual commitments.
Almost all of the company’s partnerships
give reason for skepticism."
Marty Stempniak | September 18, 2020 | Healthcare Economics & Policy
Nanox hit with class action lawsuit amid criticism
labeling imaging startup as ‘Theranos 2.0’
https://www.radiologybusiness.com/topics/healthcare-economics
The news comes just weeks after the Israeli firm completed a successful initial public offering
that raised $190 million. Nanox has inked a series of deals in several countries to provide its
novel imaging system, claiming to offer high-end medical imaging at a fraction of the cost and
footprint. But analysts at Citron Research raised red flags Tuesday, Sept. 15, claiming the
company is merely a “stock promotion” amassing millions without any FDA approvals or
scientific evidence.
Citron’s analysis—titled “A Complete Farce on the Market: Theranos 2.0”—drew
widespread attention, with several law firms soliciting investors looking to sue Nanox over its
claims. Plaintiff Matthew White and law firm Rosen Law are one of the first to follow
through, filing a proposed securities class action in New York on Wednesday.
He claims the company made false statements to both the SEC and investors to inflate its
stock value, Bloomberg Law reported. White and his attorneys also allege Nanox fabricated
commercial agreements and made misleading statements about its imaging technology.
Several other law firms also announced their own lawsuits on behalf of investors Friday.
Nanox did not respond to a Radiology Business request for comment. However, the Neve Ilan,
Israel-based company posted a statement to its webpage Wednesday, Sept. 16, addressing the
“unusual trading activities” after investors dumped the stock en masse in response to
Citron’s concerns.
Commercial CT Detectors
If you want to build your own CT scanner
From Advances in Computed Tomography Imaging Technology
Ginat and Gupta (2014) https://doi.org/10.1146/annurev-bioeng-121813-113601
From A typical multidetector CT scanner consists of a mosaic of scintillators that
convert X-rays into light in the visible spectrum, a photodiode array that
converts the light into an electrical signal, a switching array that enables switching
between channels, and a connector that conveys the signal to a data acquisition
system (Figure 6).
The multiple channels between the detectors acquire multiple sets of projection data
for each rotation of the scanner gantry. The channels can sample different detector
elements simultaneously and can combine the signals.
The detector elements can vary in size, and hybrid detectors that comprise narrow
(0.5-mm, 0.625-mm, or 0.75-mm) detectors in the center with wider (1.0-mm, 1.25-mm,
or 1.5-mm) detectors flanked along the sides are commonly used (Saini 2004).
Third-generation CT scanners featured rotate-rotate geometry,
whereby the tube and the detectors rotated together around the patient.
In conjunction with a wide X-ray fan beam that encompassed the entire
patient cross-section and an array of detectors to intercept the beam,
scan times of less than 5 s could be achieved. However, third-generation
CT scanners were prone to ring artifacts that resulted from drift in the
calibration of one detector relative to the others. Fourth-
generation scanners featured stationary ring detectors and a rotating
fan-beam X-ray tube (Figure 5), which mitigated the issues related
to ring artifacts. However, the ring-detector arrangement limited the
use of scatter reduction.
… low-leakage-current MOS switch ASICs and ultra-low-noise pre-amplification ASICs.
Our modern, automated, high-precision assembly process guarantees that our
products are highly reliable and stable.
With our core competences in photodiode, ASIC, and assembly technologies, we
offer products at different assembly levels, ranging from photodiode chips to full
detector modules. Our strong experience in designing and developing CT
detector modules ensures that customized solutions are quickly and cost-
efficiently put to use by our customers.
CT Physics + Tech
Acquisition Sinogram Reconstruction
→ →
Fransson (2019): Although many different reconstruction methods are available there are mainly two categories, filtered back-
projection (FBP) and iterative reconstruction (IR). FBP is a simpler method than IR and it takes less time to compute, but artifacts are more
frequent and dominant (Stiller 2018). The image that provides the anatomical information is said to exist in the image domain. By applying
a mathematical operation, called the Fourier transform, on the image data it is transformed into the projection domain. In the projection
domain image processing is performed with the use of filters, or kernels, in order to enhance the image in various ways, such as reducing
the noise level. When the processing is completed the Inverse Fourier transform is applied on the data in order to acquire the anatomical
image that is desired.
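The FBP pipeline Fransson describes (filter each projection, then smear it back across the image at its acquisition angle) can be sketched with NumPy/SciPy. This toy version uses an ideal ramp filter and nearest-neighbour-quality rotation, and ignores the apodization windows real scanners apply:

```python
import numpy as np
from scipy.ndimage import rotate

def radon(img, angles):
    """Forward projection: rotate the image and sum along columns."""
    return np.stack([rotate(img, a, reshape=False, order=1).sum(axis=0)
                     for a in angles], axis=1)

def fbp(sino, angles):
    """Filtered back-projection: ramp-filter each projection in the
    Fourier domain, then back-project it at its acquisition angle."""
    n = sino.shape[0]
    ramp = np.abs(np.fft.fftfreq(n))                     # ideal ramp filter
    filtered = np.real(np.fft.ifft(
        np.fft.fft(sino, axis=0) * ramp[:, None], axis=0))
    recon = np.zeros((n, n))
    for i, a in enumerate(angles):
        smear = np.tile(filtered[:, i], (n, 1))          # constant along rays
        recon += rotate(smear, -a, reshape=False, order=1)
    return recon * np.pi / (2 * len(angles))

angles = np.linspace(0., 180., 60, endpoint=False)
phantom = np.zeros((64, 64))
phantom[24:40, 24:40] = 1.0                              # bright square
recon = fbp(radon(phantom, angles), angles)
print(recon[32, 32] > recon[5, 5])  # True: the square is recovered
```

Iterative reconstruction (IR) replaces this single analytic inversion with repeated forward-project / compare / correct cycles, which is why it is slower but less artifact-prone.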
Acquisition Sinogram Reconstruction
→ →
Stiller (2018): Basics of iterative reconstruction methods in computed tomography: A vendor-independent overview
Sinogram Image Space
→
Machine Friendly Machine Learning: Interpretation of Computed
Tomography Without Image Reconstruction
Hyunkwang Lee, Chao Huang, Sehyo Yune, Shahein H. Tajmir, Myeongchan Kim & Synho Do
Department of Radiology, Massachusetts General Hospital, Boston; John A. Paulson School of Engineering and Applied Sciences, Harvard University,
Scientific Reports volume 9, Article number: 15540 (2019)
https://doi.org/10.1038/s41598-019-51779-5
Examples of reconstructed images and sinograms with different labels for (a), body part recognition
and (b), ICH detection. From left to right: original CT images, windowed CT images, sinograms with
360 projections by 729 detector pixels, and windowed sinograms 360 × 729. In the last row, an
example CT with hemorrhage is annotated with a dotted circle in image-space with the region of
interest converted into the sinogram domain using Radon transform. This area is highlighted in red on
the sinogram in the fifth column.
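The image-to-sinogram correspondence in the figure follows from the Radon transform: a single image-space point traces a sinusoid in the sinogram, with amplitude equal to the point's distance from the rotation center. A minimal sketch (the coordinates and angles below are illustrative):

```python
import numpy as np

# A point at (x, y) projects onto detector coordinate
#   s(theta) = x*cos(theta) + y*sin(theta),
# i.e. a sinusoid across projection angles -- which is why an ROI drawn
# in image space becomes a band of sinusoidal traces in the sinogram.
def point_trace(x, y, thetas_deg):
    """Detector coordinate of a point (x, y) at each projection angle."""
    t = np.deg2rad(thetas_deg)
    return x * np.cos(t) + y * np.sin(t)

thetas = np.arange(0, 360)          # 360 projections, as in the figure
s = point_trace(3.0, 4.0, thetas)
print(round(float(s.max()), 2))     # 5.0 -- the point's distance from center
```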
Reconstruction from sparse measurements
common problem in all scanning-based imaging
Zhu et al. (2018) Nature "Image reconstruction by domain-transform manifold learning" https://doi.org/10.1038/nature25988
Radon projection; Spiral non-Cartesian Fourier; Undersampled Fourier; Misaligned Fourier - Cited by 238 - https://youtu.be/o-vt1Ld6v-M -
https://github.com/chongduan/MRI-AUTOMAP
They describe the technique - dubbed AUTOMAP
(automated transform by manifold approximation) - in a
paper published today in the journal Nature.
"An essential part of the clinical imaging pipeline is image
reconstruction, which transforms the raw data coming
off the scanner into images for radiologists to evaluate,"
https://phys.org/news/2018-03-artificial-intelligence-technique-quality-medical.html
PET + CT Joint Reconstruction
Improving the Accuracy of Simultaneously
Reconstructed Activity and Attenuation Maps Using
Deep Learning
Donghwi Hwang, Kyeong Yun Kim, Seung Kwan Kang, Seongho Seo, Jin
Chul Paeng, Dong Soo Lee and Jae Sung Lee
J Nucl Med 2018; 59:1624–1629
http://doi.org/10.2967/jnumed.117.202317
Simultaneous reconstruction of activity and attenuation using
the maximum-likelihood reconstruction of activity and
attenuation (MLAA) augmented by time-of-flight information
is a promising method for PET attenuation correction.
However, it still suffers from several problems, including
crosstalk artifacts, slow convergence speed, and noisy
attenuation maps (μ-maps). In this work, we developed deep
convolutional neural networks (CNNs) to overcome these
MLAA limitations, and we verified their feasibility using a
clinical brain PET dataset.
There are some existing works on applying deep learning to predict CT
μ-maps based on T1-weighted MR images or a combination of Dixon
and zero-echo-time images (51,52). The approach using the Dixon and
zero-echo-time images would be more physically relevant than the T1-
weighted MRI-based approach because the Dixon and zero-echo-
time sequences provide more direct information on the tissue
composition than does the T1 sequence. The method proposed in this
study has the same physical relevance as the Dixon or zero-echo-time
approach but does not require the acquisition of additional MR images.
Reconstruction example for PET from sinograms
DirectPET: Full Size Neural
Network PET Reconstruction
from Sinogram Data
William Whiteley, Wing K. Luk, Jens Gregor
Siemens Medical
Solutions USA
https://arxiv.org/abs/1908.07516
This paper proposes a new more
efficient network design called
DirectPET which is capable of
reconstructing a multi-slice Positron
Emission Tomography (PET) image
volume (i.e., 16x400x400) by
addressing the computational
challenges through a specially
designed Radon inversion layer. We
compare the proposed method to the
benchmark Ordered Subsets
Expectation Maximization
(OSEM) algorithm using signal-to-
noise ratio, bias, mean absolute error
and structural similarity measures.
Line profiles and full-width half-
maximum measurements are
also provided for a sample of lesions.
Looking toward future work, there are many possibilities in
network architecture, loss functions and training optimization to
explore, which will undoubtedly lead to more efficient
reconstructions and even higher quality images. However, the
biggest challenge with producing medical images is providing
overall confidence in neural network reconstructions on
unseen samples.
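The comparison metrics named above (signal-to-noise ratio, mean absolute error) are simple to compute on a reconstructed volume; a minimal numpy sketch, not the DirectPET code, with a smaller illustrative volume shape:

```python
import numpy as np

def mae(ref, img):
    """Mean absolute error of a reconstruction against a reference."""
    return np.abs(ref - img).mean()

def snr_db(ref, img):
    """Signal-to-noise ratio in dB: signal power over error power."""
    err = ref - img
    return 10.0 * np.log10((ref ** 2).sum() / (err ** 2).sum())

# Stand-in for a 16x400x400 PET volume (smaller here for speed)
rng = np.random.default_rng(1)
ref = rng.random((16, 64, 64))
recon = ref + 0.01 * rng.standard_normal(ref.shape)  # slightly noisy "reconstruction"
print(mae(ref, recon), snr_db(ref, recon))
```

In practice these would be evaluated against the OSEM benchmark volumes rather than a synthetic reference.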
Improving the Accuracy of Simultaneously Reconstructed Activity and Attenuation Maps Using Deep Learning
J Nucl Med 2018; 59:1624–1629 http://doi.org/10.2967/jnumed.117.202317
CT Artifacts
Beam Hardening Artifact, often found at lower slices near the brainstem, where small spaces are surrounded by bone
Beam hardening artifact (left), and partial volume effect (right)
http://doi.org/10.13140/RG.2.1.2575.3122
Understanding and Mitigating Unexpected Artifacts in Head CTs: A Practical Experience
Flavius D. RaslauJ. ZhangJ. Riley-GrahamE.J. Escott (2016)
http://doi.org/10.3174/ng.2160146
Beam Hardening. The most commonly encountered artifact in CT
scanning is beam hardening, which causes the edges of an object to appear
brighter than the center, even if the material is the same throughout.
The artifact derives its name from its underlying cause: the increase in mean X-ray energy, or “hardening” of
the X-ray beam as it passes through the scanned object. Because lower-energy X-rays are attenuated more readily
than higher-energy X-rays, a polychromatic beam passing through an object preferentially loses the lower-
energy parts of its spectrum. The end result is a beam that, though diminished in overall intensity, has a higher
average energy than the incident beam. This also means that, as the beam passes through an object, the effective
attenuation coefficient of any material diminishes, thus making short ray paths proportionally more attenuating than
long ray paths. In X-ray CT images of sufficiently attenuating material, this process generally manifests itself
as an artificial darkening at the center of long ray paths, and a corresponding brightening near the edges.
In objects with roughly circular cross sections this process can cause the edge to appear brighter than the interior,
but in irregular objects it is commonly difficult to differentiate between beam hardening artifacts and
actual material variations.
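The mechanism described above can be demonstrated numerically: with a toy two-bin spectrum (the attenuation coefficients below are illustrative, not calibrated to any real material), the effective attenuation coefficient of a polychromatic beam falls with path length, which is exactly the behaviour that produces the artifact:

```python
import numpy as np

# Two-bin toy spectrum: lower-energy photons attenuate more strongly
# (mu values illustrative, not calibrated to a real material)
weights = np.array([0.5, 0.5])   # incident spectral weights (e.g. 60/100 keV bins)
mu_cm = np.array([0.35, 0.17])   # attenuation coefficients per bin (1/cm)

def effective_mu(path_cm):
    """Effective attenuation coefficient over a given ray path.
    The low-energy bin dies off first, so the beam 'hardens' and the
    effective mu falls with path length."""
    transmitted = weights * np.exp(-mu_cm * path_cm)
    return -np.log(transmitted.sum() / weights.sum()) / path_cm

# Long ray paths look less attenuating per cm than short ones
print(effective_mu(1.0), effective_mu(20.0))
```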
Motion Artifacts arise, as in most imaging, when the subject moves during acquisition
Several steps can be taken to prevent voluntary
body movement during scanning, while involuntary
movement is difficult to prevent. Some modern
scanners have features that reduce the
resulting artifacts.
Amer et al. (2018) researchgate.net
Artifacts in CT: recognition and avoidance.
Barrett and Keat (2004)
https://doi.org/10.1148/rg.246045065
Freeze! Revisiting CT motion artifacts: Formation, recognition and remedies.
semanticscholar.org
CT brain with severe motion artifact
https://radiopaedia.org/images/4974802
Streak Artifacts from high-density structures
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
Dr Balaji Anvekar's Neuroradiology Cases: Streak artifacts CT
http://www.neuroradiologycases.com/2011/10/streak-artifacts.html
Hegazy, M.A.A., Cho, M.H., Cho, M.H. et al.
U-net based metal segmentation
on projection domain for metal
artifact reduction in dental CT
(2019)
https://doi.org/10.1007/s13534-019-00110-2
Ring Artifacts from miscalibrated or defective detector elements
CT artifacts: causes and reduction
techniques (2012)
F Edward Boas & Dominik Fleischmann Department
of Radiology, Stanford University School of Medicine,
300 Pasteur Drive, Stanford, CA 94305, USA
https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html
http://doi.org/10.1088/0031-9155/46/12/309
Zebra and Stair-step Artifacts
CT artifacts: causes and reduction techniques (2012)
F Edward Boas & Dominik Fleischmann Department of Radiology, Stanford University School of Medicine
https://www.openaccessjournals.com/articles/ct-artifacts-causes-and-reduction-techniques.html
Zebra and stair-step artifacts. (A) Zebra artifacts (alternating high and low noise
slices, arrows) due to helical interpolation. These are more prominent at the periphery
of the field of view. (B) Stair-step artifacts (arrows) seen with helical and
multidetector row CT. These are also more prominent near the periphery of the field of
view. Therefore, it is important to place the object of interest near the center of the field
of view.
Zebra stripes
https://radiopaedia.org/articles/zebra-stripes-1?lang=gb
Andrew Murphy and Dr J. Ray Ballinger et al.
Zebra stripes/artifacts appear as alternating bright and dark bands in an MRI image. The term
has been used to describe several different kinds of artifacts, causing some confusion.
Artifacts that have been described as a zebra artifact include the following:
●Moire fringes
●Zero-fill artifact
●Spike in k-space
Zebra stripes have been described associated with susceptibility artifacts.
In CT there is also a zebra artifact from 3D reconstructions and a zebra sign from
haemorrhage in the cerebellar sulci.
It therefore seems prudent to use "zebra" with a term like "stripes" rather than "artifacts".
Bone discontinuities from fractures
An Evidence-Based Approach To Imaging Of Acute Neurological Conditions (2007)
https://www.ebmedicine.net/media_library/marketingLandingPages/1207.pdf
https://www.ncbi.nlm.nih.gov/pubmed/21691535
Bone fractures in practice
Doctor Explains Serious UFC Eye Injury for Karolina Kowalkiewicz - UFC Fight Night 168
Brian Sutterer, https://youtu.be/XwvoNsypP-I
Orbital Floor fracture:
muscle or fat herniating into the maxillary sinus
https://en.wikipedia.org/wiki/Orbital_blowout_fracture
Networks trained for fractures as well
Deep Convolutional Neural
Networks for Automatic
Detection of Orbital Blowout
Fractures
D. Ng, L. Churilov, P. Mitchell, R. Dowling and B. Yan
American Journal of Neuroradiology February 2018, 39
(2) 232-237; https://doi.org/10.3174/ajnr.A5465
Orbital blowout fracture is a common
condition in the emergency department, and a
delay or failure in diagnosis can lead to
permanent visual changes. This study aims to
evaluate the ability of an automatic orbital
blowout fractures detection system based on
computed tomography (CT) data.
The limitations of this work should be
mentioned. First, our method was developed
and evaluated on data from a single-tertiary
hospital. Thus, further assessment of large
data from other centers is required to increase
the generalizability of the findings, which will be
addressed in a future work. Fracture location is
also an important parameter in accurate
diagnosis and planning for surgical
management. With further improvements and
clinical verification, an optimized model could be
implemented in the development of computer-
aided decision systems.
Preprocessing of DICOM data. A, Original pixel values visualized on a CT slice. B, Effect after finding the largest connected
area. C, Image with bone window limitation. D, Binary image of a CT slice. E, Image clipped with the maximum outer
rectangular frame. CT, computed tomography.
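The "bone window limitation" step in the caption is plain intensity windowing of Hounsfield units; a minimal sketch (the window level/width values are common head-CT conventions, not taken from the paper):

```python
import numpy as np

def window_ct(hu, level, width):
    """Map Hounsfield units to [0, 1] for display under a given window
    (values below/above the window saturate to 0/1)."""
    lo = level - width / 2.0
    return np.clip((np.asarray(hu, float) - lo) / width, 0.0, 1.0)

# Air, water, acute-blood-like and bone-like HU values
slice_hu = np.array([[-1000.0, 0.0], [60.0, 1500.0]])
brain = window_ct(slice_hu, level=40.0, width=80.0)    # brain window
bone = window_ct(slice_hu, level=600.0, width=2800.0)  # bone window
print(brain)
print(bone)
```

Note that raw DICOM pixel values must first be converted to HU with the rescale slope and intercept stored in the file header.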
“Signs”
Clinician-invented
handcrafted features
‘Signs’ human-defined patterns predicting the outcome #1
Noncontrast computed tomography
markers of outcome in intracerebral
hemorrhage patients
Miguel Quintas-Neves et al. (Oct 2019)
A Journal of Progress in Neurosurgery, Neurology and Neurosciences
https://doi.org/10.1080/01616412.2019.1673279
328 patients were included. The most frequent
NCCT marker was ‘any hypodensity’ (68.0%) and the
least frequent was the blend sign (11.6%). Even though
some noncontrast computed tomography (NCCT)
markers are independent predictors of HG and 30-
day survival, they have suboptimal diagnostic
test performances for such outcomes.
With physical background of course, but still a bit subjective
‘Signs’ human-defined patterns predicting the outcome #2
from Hemorrhagic Stroke (2014)
Julius Griauzde, Elliot Dickerson and Joseph J. Gemmete
Department of Radiology, Radiology Resident, University of Michigan
http://doi.org/10.1007/978-1-4614-9212-2_46-1
Active Hemorrhage Observing active extravasation
of blood into the area of hemorrhage is an ominous
radiologic finding that suggests both ongoing expansion
of the hematoma and a poor clinical outcome [
Kim et al. 2008]. On non-contrast examinations, freshly
extravasated blood will have attenuation
characteristics different from the blood which has
been present in the hematoma for a longer
period, and these heterogeneous groups of blood
products can circle around one another to produce a
“swirl sign” which has also been associated with
hemorrhage growth and poor outcomes [Kim et al. 2008].
If the patient receives a CTA study, active extravasation
can present as a tiny spot on arterial phase images
(the “spot sign”) which can rapidly expand on more
delayed phase images. Even when a spot of precise
extravasation is not identified on arterial phase images,
more delayed images can directly demonstrate
extravasated contrast indicating ongoing hemorrhage.
With physical background of course, but still a bit subjective
a NCCT of deep right ICH (38 ml) with swirl sign (arrow). b Corresponding hematoma CT densitometry
histogram (Mean HU 55.3, SD 9.7, CV 0.18, Skewness −0.26, Kurtosis 2.41). c CTA with multiple spot signs
present (arrows). The patient subsequently underwent hematoma expansion of 41 ml. d NCCT of a
different patient with right frontal lobar ICH (38 ml) and trace IVH. e Corresponding hematoma CT
densitometry histogram (Mean HU 61.5, SD 12.2, CV 0.20, Skewness −0.64, Kurtosis 2.6). f CTA
demonstrates no evidence of spot sign. The patient had a stable hematoma on 24-hour follow-up
Swirls and spots: relationship between qualitative and quantitative
hematoma heterogeneity, hematoma expansion, and the spot sign Dale
Connor, Thien J. Huynh, Andrew M. Demchuk, Dar Dowlatshahi, David J. Gladstone, Sivaniya Subramaniapillai, Sean P. Symons &
Richard I. Aviv Neurovascular Imaging volume 1, Article number: 8 (2015)
https://doi.org/10.1186/s40809-015-0010-1
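The densitometry quantities in the caption (Mean HU, SD, CV, skewness, kurtosis) are first-order histogram statistics over the hematoma ROI; a minimal numpy sketch on synthetic HU values (Pearson kurtosis, where a normal distribution gives 3, consistent with the caption's values of 2.41 and 2.6):

```python
import numpy as np

def densitometry(hu):
    """First-order densitometry metrics of a hematoma ROI:
    mean HU, SD, CV, skewness and (Pearson) kurtosis."""
    x = np.asarray(hu, dtype=float)
    mean, sd = x.mean(), x.std()
    z = (x - mean) / sd
    return {"mean_hu": mean, "sd": sd, "cv": sd / mean,
            "skewness": (z ** 3).mean(), "kurtosis": (z ** 4).mean()}

# Synthetic ROI: 10,000 voxels around 55 HU (roughly acute blood)
roi = np.random.default_rng(0).normal(55.0, 10.0, 10_000)
stats = densitometry(roi)
print(stats)
```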
CT “Swirl Sign” associated with hematoma expansion
The CT Swirl Sign Is Associated
with Hematoma Expansion in
Intracerebral Hemorrhage
D. Ng, L. Churilov, P. Mitchell, R. Dowling and B. Yan
American Journal of Neuroradiology February 2018, 39
(2) 232-237; https://doi.org/10.3174/ajnr.A5465
Hematoma expansion is an
independent determinant of poor
clinical outcome in intracerebral
hemorrhage. Although the “spot sign”
predicts hematoma expansion, the
identification requires CT angiography, which
limits its general accessibility in some hospital
settings. Noncontrast CT (NCCT), without the
need for CT angiography, may identify sites of
active extravasation, termed the “swirl sign.”
We aimed to determine the association of the
swirl sign with hematoma expansion.
The NCCT swirl sign was reliably identified
and is associated with hematoma expansion.
We propose that the swirl sign be
included in risk stratification of
intracerebral hemorrhage and
considered for inclusion in clinical
trials.
Noncontrast brain CT of a 73-year-old
woman who presented with right-sided
weakness.
Initial brain CT (A–C) demonstrates a left
parietal hematoma measuring 33 mL,
demonstrating hypodense hematoma with
hypodense foci, the swirl sign.
Follow-up CT (D–F) performed 8 hours later
demonstrates increased hematoma volume,
46 mL.
Imaging features of swirl sign and spot sign
Coronal nonenhanced CT (A) demonstrates the
hypodense area within the hematoma (swirl sign
[asterisk]), whereas a hyperdense spot is shown on
CT angiography (arrow) (B). There is already mass
effect with midline shift and intraventricular
hematoma extension.
https://doi.org/10.1212/WNL.0000000000003290
CT “Spot Sign”
Advances in CT for prediction of hematoma
expansion in acute intracerebral
hemorrhage
Thien J Huynh, Sean P Symons and Richard I Aviv
Division of Neuroradiology, Department of Medical Imaging, Sunnybrook Health Sciences and
University of Toronto, Toronto, Canada
Imaging in Medicine (2013) Vol 5 Issue 6
https://www.openaccessjournals.com/articles/advances-in-ct-for-prediction-of-hematoma-expansion-in-acute-intracerebral-hemorrhage.html
Noncontrast CT imaging plays a critical role in acute
intracerebral hemorrhage (ICH) diagnosis, as
clinical features are unable to reliably distinguish
ischemic from hemorrhagic stroke. For
detection of acute hemorrhage, CT is considered
the gold-standard; however CT and MRI have
been found to be similar in accuracy. CT is
preferred over MR imaging due to reduced
cost, rapid scan times, increased patient tolerability
and increased accessibility in the emergency
setting. It is important to note, however, that CT
lacks sensitivity in identifying foci of chronic
hemorrhage compared with gradient echo and T2*
susceptibility-weighted MRI. MR imaging may also
provide additional information regarding the
presence of cavernous malformations and
characterizing perihematomal edema
CT “Black hole sign”
Comparison of Swirl Sign and
Black Hole Sign in Predicting
Early Hematoma Growth in
Patients with Spontaneous
Intracerebral Hemorrhage
Xin Xiong et al. (2018)
http://doi.org/10.12659/MSM.906708
Early hematoma growth is associated with
poor outcome in patients with spontaneous
intracerebral hemorrhage (ICH). The swirl
sign (SS) and the black hole sign (BHS) are
imaging markers in ICH patients. The aim of
this study was to compare the predictive
value of these 2 signs for early hematoma
growth
Illustration of swirl sign, black
hole sign, and follow-up CT
images. (A) A 60-year-old
man presented with sudden
onset of left-sided paralysis.
Admission CT image
performed 1 h after onset of
symptoms showing thalamic
ICH with a swirl sign (arrow)
and the hematoma volume
was 16.57 ml. (B) Hematoma
volume remains the same on
follow-up CT scan
performed 23 h after onset
of symptoms. (C) A 75-year-
old man with left deep ICH.
Initial CT image performed 2
h after onset of symptoms
shows black hole sign
(arrow). (D) Follow-up CT
image 4 h later shows
significant hematoma
growth.
CT “Leakage Sign” You probably noticed the pattern already: instead of admitting that no
single “sign” can tell the whole story, clinicians keep defining non-robust “biomarkers”, while data-driven
methods remain under-explored (as in most clinical domains)
Leakage Sign for Primary
Intracerebral Hemorrhage
A Novel Predictor of Hematoma Growth
Kimihiko Orito, Masaru Hirohata, Yukihiko Nakamura, Nobuyuki
Takeshige, Takachika Aoki, Gousuke Hattori, Kiyohiko Sakata, Toshi Abe,
Yuusuke Uchiyama, Teruo Sakamoto, and Motohiro Morioka
Stroke. 2016;47:958–963
https://doi.org/10.1161/STROKEAHA.115.011578
Recent studies of intracerebral
hemorrhage treatments have
highlighted the need to identify reliable
predictors of hematoma expansion.
Several studies have suggested that the
spot sign on computed tomographic
angiography (CTA) is a sensitive
radiological predictor of hematoma
expansion in the acute phase. However,
the spot sign has low sensitivity for
hematoma expansion. In this study, we
evaluated the usefulness of a novel
predictive method, called the leakage
sign.
The leakage sign was more sensitive than the spot sign for predicting hematoma expansion in patients
with ICH. In addition to the indication for an operation and aggressive treatment, we expect that this
method will be helpful to understand the dynamics of ICH in clinical medicine.
CT “Island Sign”
Island Sign: An Imaging
Predictor for Early Hematoma
Expansion and Poor Outcome in
Patients With Intracerebral
Hemorrhage
Qi Li, Qing-Jun Liu, Wen-Song Yang, Xing-Chen Wang, Li-Bo Zhao, Xin
Xiong, Rui Li, Du Cao, Dan Zhu, Xiao Wei, and Peng Xie
Stroke. 2017;48:3019–3025 10 Oct 2017
https://doi.org/10.1161/STROKEAHA.117.017985
We included patients with spontaneous
intracerebral hemorrhage (ICH) who
had undergone baseline CT within 6 hours
after ICH symptom onset in our hospital
between July 2011 and September 2016. A
total of 252 patients who met the inclusion
criteria were analyzed. Among them, 41
(16.3%) patients had the island sign on
baseline noncontrast CT scans. In addition,
the island sign was observed in 38 of 85
patients (44.7%) with hematoma growth.
Multivariate logistic regression analysis
demonstrated that the time to baseline CT
scan, initial hematoma volume, and the
presence of the island sign on baseline
CT scan independently predicted early
hematoma growth.
Illustration of island sign. Axial noncontrast computed tomography (CT) images
of 4 patients with CT island sign. A, CT island sign in a patient with basal ganglia
hemorrhage. Note that there are 3 small scattered hematomas (arrows),
each separate from the main hematoma. B, Putaminal intracerebral hemorrhage
with 3 small separate hematomas (arrowheads). Note that there are hypointense
areas between the 3 small hematomas and the main hematoma. C, Lobar
hematoma with 4 scattered separate hematomas (arrowheads). D, Large basal
ganglia hemorrhage with intraventricular extension. The hematoma consists of 4
bubble-like or sprout-like small hematomas (arrowheads) that connect with the
main hematoma and one separate small hematoma (arrow).
Illustration of differences between the
Barras shape scale and Li Qi’s island sign. A,
Barras scale category IV lobulated hematoma. Note
that irregular margin had a broad base, and the
border of the main hematoma was spike-like
(arrow). B, A lobulated hematoma that belongs to
Barras scale category V. Note that the hematoma
consisted of 4 spike-like projections (lobules). C, The
island sign consisted of one separate small island
(arrow) and 3 little islands (arrowheads) that connect
with the main hematoma. Note that the 3 small
hematomas were bubble-like or sprout-like
outpouching from the main hematoma. D, A large
hematoma with 4 bubble-like or sprout-like small
hematomas (arrowheads) all connected with the
main bleeding. Note that the large lobule (big arrow)
at the bottom of the main hematoma was not
considered an island.
How well do humans agree on the sign definitions
Inter- and Intrarater Agreement of Spot
Sign and Noncontrast CT Markers for Early
Intracerebral Hemorrhage Expansion
Jawed Nawabi et al. J. Clin. Med. 2020, 9(4), 1020;
https://doi.org/10.3390/jcm9041020
(This article belongs to the Special Issue
Intracerebral Hemorrhage: Clinical and Neuroimaging Characteristics)
The aim of this study was to assess the inter- and
intrarater reliability of noncontrast CT (NCCT) markers
[Black Hole Sign (BH), Blend Sign (BS), Island Sign (IS),
and Hypodensities (HD)] and Spot Sign (SS) on CTA in
patients with spontaneous intracerebral hemorrhage
(ICH)
NCCT imaging findings and SS on CTA have good-to-
excellent inter- and intrarater reliabilities, with the
highest agreement for BH and SS.
Representative examples of
disagreed ratings of four non-
contrast computed tomographic
(NCCT) markers and Spot Sign (SS)
on CT-angiography (CTA) for
intracerebral hemorrhage
expansion. (A) SS on CTA (white
arrow) mistaken for intraventricular
plexus calcification (black arrow)
(B). (C) Blend sign (white arrows)
mistaken for Fluid Sign1. (D) Swirl
Sign mistaken for Hypodensities
(black arrow). (E) Hypodensities
(black arrow) mistaken for Swirl Sign
(F)
Radiomics
CAD
Computer-aided diagnosis (not design)
Rebranded as Radiomics →
From Handcrafted to Deep-Learning-
Based Cancer Radiomics: Challenges and
Opportunities
Parnian Afshar et al. (2019)
IEEE Signal Processing Magazine ( Volume: 36 , Issue: 4 , July 2019 )
https://doi.org/10.1109/MSP.2019.2900993
Radiomics, an emerging and relatively new research field,
refers to extracting semi-quantitative and/or
quantitative features from medical images with the goal
of developing predictive and/or prognostic models. In the
near future, it is expected to be a critical component for
integrating image-derived information used for personalized
treatment. The conventional radiomics workflow is typically
based on extracting predesigned features (also referred to as
handcrafted or engineered features) from a segmented
region of interest (ROI). Nevertheless, recent advancements
in deep learning have inspired trends toward deep-
learning-based radiomics (DLRs) (also referred to as
discovery radiomics).
The different categories
of handcrafted
features commonly
used within the context
of radiomics.
Extracting Deep-Learning-
Radiomics (DLR). The input to
the network can be the original
image, the segmented ROI, or a
combination of both. Either the
extracted radiomics features are
used throughout the rest of the
network, or an external model is
used to make the decision based
on radiomics features.
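Among the handcrafted feature categories referred to above, texture features such as the gray-level co-occurrence matrix (GLCM) are the classic example; a minimal sketch of GLCM contrast, simplified to one horizontal offset and coarse quantization, for intensities scaled to roughly [0, 1):

```python
import numpy as np

def glcm(img, levels=4):
    """Gray-level co-occurrence matrix for horizontal neighbours only,
    on an image with intensities in roughly [0, 1)."""
    q = np.clip((np.asarray(img) * levels).astype(int), 0, levels - 1)
    m = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        m[i, j] += 1
    return m / m.sum()

def glcm_contrast(m):
    """GLCM contrast: (i - j)^2 weighted by co-occurrence probability."""
    idx = np.arange(m.shape[0])
    return ((idx[:, None] - idx[None, :]) ** 2 * m).sum()

flat = np.zeros((4, 4))                          # homogeneous patch
stripes = np.tile(np.array([0.0, 0.9]), (4, 2))  # alternating columns
print(glcm_contrast(glcm(flat)), glcm_contrast(glcm(stripes)))
```

Production radiomics toolkits compute many offsets, angles and normalizations; this sketch only shows the underlying idea.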
Reproducibility of traditional radiomic features #1
Reproducibility of CT Radiomic Features within
the Same Patient: Influence of Radiation Dose and
CT Reconstruction Settings
Mathias Meyer, James Ronald, Federica Vernuccio, Rendon C. Nelson, Juan Carlos
Ramirez-Giraldo, Justin Solomon, Bhavik N. Patel, Ehsan Samei, Daniele Marin
Radiology (1 Oct 2019)
https://doi.org/10.1148/radiol.2019190928
Results of recent phantom studies show that variation in CT acquisition
parameters and reconstruction techniques may make radiomic features
largely nonreproducible and of limited use for prognostic clinical studies.
Conclusion: Most radiomic features are highly affected by CT acquisition and
reconstruction settings, to the point of being nonreproducible. Selecting reproducible
radiomic features along with study-specific correction factors offers improved
clustering reproducibility.
Images in 63-year-old female study participant with
metastatic liver disease from colon cancer. CT images
reconstructed in the axial plane with (top row) 5.0 mm and
(bottom row) 3.0 mm. The texture distribution alters
between the two reconstruction algorithms with
direct effect on the quantitative texture radiomic features,
such as gray-level size zone matrix large area high
gray-level emphasis (LAHGLE) (5.0 mm LAHGLE =
4301732.0 vs 3.0 mm LAHGLE = 7089324.3) as
displayed in the lesion overlay images (middle column)
and the heatmap distributions (rightmost column). The
heat maps (rightmost column) display the difference of
original image and a convolution. Note how the heat map
distribution changes between the different section
thicknesses. The heat map was generated by using
MintLesion (version 3.4.4; MintMedical, Heidelberg,
Germany).
Reproducibility of traditional radiomic features #2
Reliability of CT-based texture features: Phantom
study.
Bino A. Varghese, Darryl Hwang, Steven Y. Cen, Joshua Levy, Derek Liu, Christopher
Lau, Marielena Rivas, Bhushan Desai, David J. Goodenough, Vinay A. Duddalwar
Journal of Applied Clinical Medical Physics (2019)
https://doi.org/10.1002/acm2.12666
Objective: To determine the intra-, inter- and test-retest variability of
CT-based texture analysis (CTTA) metrics.
Results: As expected, the robustness, repeatability and
reproducibility of CTTA metrics are variably sensitive to various
scanner (Philips Brilliance 64 CT, Toshiba Aquilion Prime 160 CT) and
scanning parameters. Entropy of Fast Fourier Transform-based
texture metrics was overall most reliable across the two
scanners and scanning conditions. Post-processing techniques
that reduce image noise while preserving the underlying edges
associated with true anatomy or pathology bring about significant
differences in radiomic reliability compared to when they were
not used.
(Left) Texture phantom comprising three texture patterns. (Middle) Phantom placement for image
acquisition. (Right) Cross section of texture phantom patterns. (1), (2) and (3) are 3D printed ABS plastic
with fill levels 10%, 20%, and 40%, respectively. (Bk) is a homogenous ABS material. (The window level is
−500 HU with a width of 1600 HU).
3.4 Effect of post-processing
techniques that reduce image noise
while preserving the underlying edges
associated with true anatomy or
pathology
By comparing the changes in robustness of
the CTTA metrics across the two scanners, we
observe that post-processing techniques that
reduce image noise while preserving the
underlying anatomical edges, for example
iDose levels (here 6 levels) on the Philips
scanner and Mild/Strong (here 2 levels) levels
on the Toshiba scanner, produce significant
differences in CTTA robustness compared to
the base setting (Fig. 3). Stronger noise
reduction techniques were associated
with a significant reduction in reliability
on the Philips scanner; however, the
opposite was observed on the Toshiba
scanner. In both cases, no noise reduction
techniques were used in the base setting.
Robustness assessment of the texture metrics due to
changes in reconstruction filters: iDose levels on the Philips
scanner (a) and changes in noise correction levels
(Mild or Strong) on the Toshiba scanner (b).
Reproducibility of traditional radiomic features #3
Radiomics of CT Features May Be
Nonreproducible and Redundant: Influence of CT
Acquisition Parameters
Roberto Berenguer, María del Rosario Pastor-Juan, Jesús Canales-Vázquez, Miguel
Castro-García, María Victoria Villas, Francisco Mansilla Legorburo, Sebastià Sabater
Radiology (24 April 2018)
https://doi.org/10.1148/radiol.2018172361
Materials and Methods Two phantoms were used to test radiomic
feature (RF) reproducibility by using test-retest analysis, by changing the CT
acquisition parameters (hereafter, intra-CT analysis), and by comparing five
different scanners with the same CT parameters (hereafter, inter-CT analysis).
Reproducible RFs were selected by using the concordance correlation
coefficient (as a measure of the agreement between variables) and the
coefficient of variation (defined as the ratio of the standard deviation to the
mean). Redundant features were grouped by using hierarchical cluster
analysis.
Conclusion Many RFs were redundant and nonreproducible. If all the
CT parameters are fixed except field of view, tube voltage, and milliamperage,
then the information provided by the analyzed RFs can be summarized in
only 10 RFs (each representing a cluster) because of redundancy.
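Both selection criteria used in the study, the concordance correlation coefficient (Lin's CCC) and the coefficient of variation, are simple to compute; a minimal sketch with toy test-retest feature values:

```python
import numpy as np

def concordance_cc(x, y):
    """Lin's concordance correlation coefficient between test and
    retest measurements of one radiomic feature."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx, my = x.mean(), y.mean()
    cov = ((x - mx) * (y - my)).mean()
    return 2.0 * cov / (x.var() + y.var() + (mx - my) ** 2)

def coeff_variation(x):
    """Coefficient of variation: SD divided by the mean."""
    x = np.asarray(x, float)
    return x.std() / x.mean()

feature = np.array([1.0, 2.0, 3.0, 4.0])  # toy test values
retest = feature + 1.0                    # systematic offset on retest
print(concordance_cc(feature, feature))   # perfect agreement -> 1.0
print(concordance_cc(feature, retest))    # offset lowers the CCC
```

Unlike the Pearson correlation, the CCC penalizes systematic offsets between test and retest, which is why it is preferred for agreement studies.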
Graph shows cluster
dendrogram and representative
radiomics features (RFs). Red
boxes differentiate 10 extracted
clusters, which were selected by
height. Representative RFs of
each cluster were selected
based on highest concordance
correlation coefficient value of
test-retest analysis.
Reproducibility of traditional radiomic features #4
Reproducibility test of radiomics using
network analysis and Wasserstein K-means
algorithm
Jung Hun Oh, Aditya P. Apte, Evangelia Katsoulakis, Nadeem Riaz, Vaios
Hatzoglou, Yao Yu, Jonathan E. Leeman, Usman Mahmood, Maryam
Pouryahya, Aditi Iyer, Amita Shukla-Dave, Allen R. Tannenbaum, Nancy Y. Lee,
Joseph O. Deasy
https://doi.org/10.1101/773168 (19 Sept 2019)
To construct robust and validated radiomic predictive models, a
reliable method that can identify reproducible radiomic features,
robust to varying image acquisition methods and other scanner
parameters, must first be developed and rigorously validated. We
further propose a novel Wasserstein K-means
algorithm coupled with the optimal mass transport (OMT)
theory to cluster samples.
Despite such great progress in radiomics in recent years, the
development of computational techniques to identify repeatable and
reproducible radiomic features remains challenging and relatively
underdeveloped. This has led many radiomic models built on one
dataset to fail in subsequent external validation on
independent data [Virginia et al. 2018]. One likely reason is the
susceptibility of radiomic features to image reconstruction and
acquisition parameters. Since radiomic features are computed via
multiple tasks, including image acquisition, segmentation, and feature
extraction, the selection of parameters in each step may
affect the stability of the computed features. As such, prior to
model building, the development of radiomic features with high
repeatability and reproducibility, as well as tools that can identify
such features, is urgently needed in the field of radiomics.
CT Labels
“ICH CT Labels” e.g. hematoma (primary injury), PHE (secondary injury)
Airton Leonardo de Oliveira Manoel (Feb 2020)
PHE – peri-hematoma edema
https://doi.org/10.1186/s13054-020-2749-2
Intraventricular extension of hemorrhage (IVH) might change ventricle shape,
making segmentation rather tricky, especially if you have trained your brain
models on non-pathological brains. Slice example from the CROMIS study at UCL.
Imaging features are time-dependent (from hours to long-term outcomes) #1
https://doi.org/10.1212/WNL.0b013e3182343387
https://doi.org/10.2176/nmc.ra.2016-0327
Advances in CT for prediction of hematoma expansion in acute intracerebral
hemorrhage Thien J Huynh, Sean P Symons and Richard I Aviv
Division of Neuroradiology, Department of Medical Imaging, Sunnybrook Health Sciences and University of Toronto
https://www.openaccessjournals.com/articles/advances-in-ct-for-prediction-of-hematoma-expansion-in-acute-intracerebral-he
morrhage.html
Perihematomal Edema After Spontaneous
Intracerebral Hemorrhage (2019)
https://doi.org/10.1161/STROKEAHA.119.024965
A) Example of hematoma and perihematoma edema regions of interest (ROIs). The ROIs were drawn on the
noncontrast computed tomography (CT) and transferred to perfusion maps. (B) Maps of cerebral blood flow
(CBF), cerebral blood volume (CBV), and time to peak of the impulse response curve (TMAX) from an ICH
ADAPT study patient randomized to a target systolic BP <150 mm Hg. https://doi.org/10.1038/jcbfm.2015.36
Imaging features are time-dependent (from hours to long-term outcomes) #2
Intracerebral hemorrhage (ICH) growth predicts mortality
and functional outcome. We hypothesized that irregular
hematoma shape and density heterogeneity,
reflecting active, multifocal bleeding or a variable bleeding
time course, would predict ICH growth.
https://doi.org/10.1161/STROKEAHA.108.536888
A, Shape (left) and density (right) categorical scales
and (B) examples of homogeneous, regular ICH (left)
and heterogeneous, irregular ICH (right).
Absolute (A) and relative
(B) perihematomal edema for
decompressive craniotomy
treatment and control groups,
and corrected absolute (C)
and corrected relative (D)
perihematomal edema for the
treatment and control groups.
https://doi.org/10.1371/journal.pone.0149169
Example of a CT scan demonstrating delineation of the region of PHE (outlined in green) and ICH (outlined
in red). The oedema extension distance (EED) is the difference between the radius (r_e) of a sphere (shown
in green) equal to the combined volume of PHE and ICH and the radius of a sphere (shown in red) equal to
the volume of the ICH alone (r_h).
Oedema extension distance in intracerebral haemorrhage: Association with baseline characteristics and long-term outcome
http://dx.doi.org/10.1177/2396987319848203
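The EED definition in the caption reduces to sphere radii from volumes, r = (3V / 4π)^(1/3); a minimal sketch (the 38 ml ICH matches the earlier caption, the 20 ml PHE volume is an assumption for the example):

```python
import math

def sphere_radius_cm(volume_ml):
    """Radius (cm) of a sphere of the given volume (1 ml = 1 cm^3):
    V = (4/3) * pi * r^3  =>  r = (3V / (4 * pi))^(1/3)."""
    return (3.0 * volume_ml / (4.0 * math.pi)) ** (1.0 / 3.0)

def eed_cm(ich_ml, phe_ml):
    """Oedema extension distance r_e - r_h: radius of a sphere with
    the combined PHE+ICH volume minus that of the ICH alone."""
    return sphere_radius_cm(ich_ml + phe_ml) - sphere_radius_cm(ich_ml)

print(round(eed_cm(38.0, 20.0), 2))  # illustrative volumes in ml
```

By construction the EED normalizes edema burden for hematoma size, which is why it is proposed as a less volume-dependent marker than absolute PHE volume.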
Imaging features are time-dependent (from hours to long-term outcomes) #3
Intraventricular Hemorrhage Growth:
Definition, Prevalence and Association with
Hematoma Expansion and Prognosis
Qi Li et al. (Neurocritical Care (2020))
https://doi.org/10.1007/s12028-020-00958-8
The objective of this study is to propose a definition of
intraventricular hemorrhage (IVH) growth and to
investigate whether IVH growth is associated with ICH
expansion and functional outcome. IVH growth is not
uncommon and independently predicts poor outcome
in ICH patients. It may serve as a promising therapeutic
target for intervention.
Illustration of IVH growth on
noncontrast CT.
a Baseline CT scan reveals a
putaminal hematoma without
concurrent intraventricular
hemorrhage.
b Follow-up CT scan performed
11 h later shows enlarged
hematoma and intraventricular
extension of parenchymal
hemorrhage.
c Admission CT scan shows a
basal ganglia hemorrhage with
ventricular extension of
hematoma.
d Follow-up CT scan performed
24 h after baseline CT scan
reveals the significant increase in
ventricular hematoma
volume. CT computed
tomography, IVH intraventricular
hemorrhage
Distribution of modified Rankin scale in patients with or
without IVH growth. The ordinal analysis showed a
significant unfavorable shift in the distribution of scores on the
modified Rankin scale with IVH growth (pooled odds ratio for a
shift to a higher modified Rankin score).
Segmentation Labels? WM/GM contrast a bit low in CT compared to MRI
White Matter and Gray Matter Segmentation in 4D Computed
Tomography
Rashindra Manniesing , Marcel T. H. Oei, Luuk J. Oostveen, Jaime Melendez , Ewoud J. Smit, Bram Platel , Clara I. Sánchez,
Frederick J. A. Meijer , Mathias Prokop & Bram van Ginneken.
Sci Rep 7, 119 (2017) https://doi.org/10.1038/s41598-017-00239-z - Cited by 7
Segmentation Labels? WM/GM supervise with MRI?
Whole Brain Segmentation and Labeling
from CT Using Synthetic MR Images
Can Zhao, Aaron Carass, Junghoon Lee, Yufan He, Jerry L. Prince
International Workshop on Machine Learning in Medical Imaging
MLMI 2017: Machine Learning in Medical Imaging pp 291-298
https://doi.org/10.1007/978-3-319-67389-9_34
To achieve whole-brain segmentation—i.e., classifying tissues within and
immediately around the brain as gray matter (GM), white matter (WM), and
cerebrospinal fluid—magnetic resonance (MR) imaging is nearly always used.
However, there are many clinical scenarios where computed tomography
(CT) is the only modality that is acquired and yet whole brain segmentation
(and labeling) is desired. This is a very challenging task, primarily because CT
has poor soft tissue contrast; very few segmentation methods have been
reported to date and there are no reports on automatic labeling. This
paper presents a whole brain segmentation and labeling method for non-
contrast CT images that first uses a fully convolutional network (FCN) to
synthesize an MR image from a CT image and then uses the synthetic MR
image in a standard pipeline for whole brain segmentation and labeling.
In summary, we have used a modified U-net to synthesize T1-w images
from CT, and then directly segmented the synthetic T1-w using either MALP-
EM or a multi-atlas label fusion scheme. Our results show that using
synthetic MR can significantly improve the segmentation over
using the CT image directly. This is the first paper to provide GM
anatomical labels on a CT neuroimage. Also, despite previous assertions that
CT-to-MR synthesis is impossible with CNNs, we show that it is not only
possible but it can be done with sufficient quality to open up new clinical and
scientific opportunities in neuroimaging.
For one subject, we show the (a) input CT image, the (b) output synthetic T1-w, and
the (c) ground truth T1-w image. (d) is the dynamic range of (a). Shown
in (e) and (f) are the MALP-EM segmentations of the synthetic and ground truth T1-w
images, respectively.
Useful in general to have CT/MRI pairs?
Brain MRI with Quantitative
Susceptibility Mapping:
Relationship to CT
Attenuation Values
https://doi.org/10.1148/radiol.2019182934
To assess the relationship
among metal concentration, CT
attenuation values, and magnetic
susceptibility in paramagnetic
and diamagnetic phantoms, and
the relationship between CT
attenuation values and
susceptibility in brain structures
that have paramagnetic or
diamagnetic properties.
CT Segmentation Labels vs MRI labels
Loss Switching In segmentation tasks, the dice score is often reported as the performance metric.
A loss function that directly correlates with the dice score is the weighted dice loss. Based on our
empirical observation, the network trained with only weighted dice loss was unable to escape local
optimum and did not converge. Also, empirically it was seen that the stability of the model, in terms
of convergence, decreased as the number of classes and class imbalance increased. We
found that weighted cross-entropy loss, on the other hand, did not get stuck in any local optima and
learned reasonably good segmentations. As the model’s performance with regard to dice score
flattened out, we switched from weighted cross entropy to weighted dice loss, after which
the model’s performance further increased by 3-4 % in terms of average dice score. This loss
switching mechanism, therefore, is found to be useful to further improve the performance of the
model.
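The loss-switching scheme above can be sketched in NumPy as follows (function names and array layout are ours; the paper trains a full network, here only the two weighted losses and the switch are shown):

```python
import numpy as np

def softmax(logits, axis=1):
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def weighted_dice_loss(probs, onehot, class_weights, eps=1e-6):
    # probs, onehot: (N, C, H, W); class_weights: (C,)
    dims = (0, 2, 3)
    intersect = (probs * onehot).sum(axis=dims)
    denom = probs.sum(axis=dims) + onehot.sum(axis=dims)
    dice = (2.0 * intersect + eps) / (denom + eps)
    return float((class_weights * (1.0 - dice)).sum() / class_weights.sum())

def weighted_cross_entropy(probs, onehot, class_weights, eps=1e-12):
    pixel_ce = -(onehot * np.log(probs + eps)).sum(axis=1)               # (N, H, W)
    pixel_w = (onehot * class_weights[None, :, None, None]).sum(axis=1)  # weight of true class
    return float((pixel_w * pixel_ce).sum() / pixel_w.sum())

def training_loss(logits, onehot, class_weights, dice_plateaued):
    """Weighted cross-entropy until the dice score flattens out, then
    weighted dice loss -- the switching scheme described in the text."""
    probs = softmax(logits, axis=1)
    if dice_plateaued:
        return weighted_dice_loss(probs, onehot, class_weights)
    return weighted_cross_entropy(probs, onehot, class_weights)
```

In practice `dice_plateaued` would be set by monitoring the validation dice curve across epochs.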
On brain atlas choice and automatic segmentation methods: a
comparison of MAPER & FreeSurfer using three atlas
databases https://doi.org/10.1038/s41598-020-57951-6
DARTS: DenseUnet-based Automatic Rapid Tool for brain
Segmentation Aakash Kaku, Chaitra V. Hegde, Jeffrey Huang, Sohae Chung,
Xiuyuan Wang, Matthew Young, Alireza Radmanesh, Yvonne W. Lui, Narges Razavian
(Submitted on 13 Nov 2019 https://arxiv.org/abs/1911.05567
Weak labels for CT Segmentation
Extracting 2D weak labels from volume labels
using multiple instance learning in CT
hemorrhage detection
Samuel W. Remedios, Zihao Wu, Camilo Bermudez, Cailey I. Kerley, Snehashis Roy, Mayur B. Patel, John A. Butman, Bennett A. Landman, Dzung L.
Pham (Submitted on 13 Nov 2019) https://arxiv.org/abs/1911.05650
https://github.com/sremedios/multiple_instance_learning
Multiple instance learning (MIL) is a supervised learning methodology that aims to
allow models to learn instance class labels from bag class labels, where a bag is
defined to contain multiple instances. MIL is gaining traction for learning from weak
labels but has not been widely applied to 3D medical imaging.
MIL is well-suited to clinical CT acquisitions since (1) the highly anisotropic voxels
hinder application of traditional 3D networks and (2) patch-based networks have
limited ability to learn whole volume labels. In this work, we apply MIL with a deep
convolutional neural network to identify whether clinical CT head image
volumes possess one or more large hemorrhages (> 20 cm³), resulting in a
learned 2D model without the need for 2D slice annotations.
Individual image volumes are considered separate bags, and the slices in
each volume are instances. Such a framework sets the stage for incorporating
information obtained in clinical reports to help train a 2D segmentation approach.
Within this context, we evaluate the data requirements to enable generalization of MIL
by varying the amount of training data. Our results show that a training size of at least
400 patient image volumes was needed to achieve accurate per-slice
hemorrhage detection.
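The bag/instance framing above can be sketched as follows (a NumPy illustration with our own function names; the standard MIL assumption is that a volume is positive iff at least one slice is positive, so slice scores are aggregated with a max):

```python
import numpy as np

def mil_volume_prediction(slice_probs):
    """Bag-level (volume) prediction from instance-level (slice)
    probabilities: max over slices, so one positive slice suffices."""
    return float(np.max(slice_probs))

def mil_bag_loss(slice_probs, volume_label, eps=1e-7):
    """Binary cross-entropy between the aggregated prediction and the
    volume-level label, the only label available during training."""
    p = min(max(mil_volume_prediction(slice_probs), eps), 1.0 - eps)
    return float(-(volume_label * np.log(p) + (1 - volume_label) * np.log(1.0 - p)))
```

Backpropagating this bag loss through the max pushes gradient onto the highest-scoring slice, which is how the 2D model learns per-slice detection without slice annotations.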
Weak Label → Dense modeling
Improving RetinaNet for CT Lesion
Detection with Dense Masks from Weak
RECIST Labels
Martin Zlocha, Qi Dou, and Ben Glocker
https://arxiv.org/pdf/1906.02283v1.pdf
https://github.com/fizyr/keras-retinanet
https://github.com/martinzlocha/anchor-optimization
Accurate, automated lesion detection in Computed
Tomography (CT) is an important yet challenging task
due to the large variation of lesion types, sizes, locations
and appearances. Recent work on CT lesion detection
employs two-stage region proposal based methods
trained with centroid or bounding-box annotations. We
propose a highly accurate and efficient one-stage lesion
detector, by re-designing a RetinaNet to meet the
particular challenges in medical imaging. Specifically, we
optimize the anchor configurations using a differential
evolution search algorithm
Interestingly, we could show that by task-specific
optimization of an out-of-the-box detector we already
achieve results superior to the best reported in the
literature. Exploitation of clinically available RECIST
annotations bears great promise as large amounts of
such training data should be available in many hospitals.
With a sensitivity of about 91% at 4 FPs per image, our
system may reach clinical readiness. Future work will
focus on new applications such as whole-body MRI in
oncology.
Segmentation Labels? Synthetic CT from MRI
Hybrid Generative Adversarial
Networks for Deep MR to CT Synthesis
Using Unpaired Data
Guodong Zeng and Guoyan Zheng (MICCAI 2019)
https://doi.org/10.1007/978-3-030-32251-9_83
2D cycle-consistent Generative Adversarial Networks (2D-
cGAN) have been explored before for generating synthetic CTs
from MR images, but the results are unsatisfactory due to spatial
inconsistency. There have been attempts to develop a 3D cycle
GAN (3D-cGAN) for image translation, but its training requires a
large amount of data, which may not always be available.
In this paper, we introduce two novel mechanisms to address the
above-mentioned problems. First, we introduce a hybrid GAN
(hGAN) consisting of a 3D generator network and a 2D
discriminator network for deep MR to CT synthesis using
unpaired data. We use 3D fully convolutional networks to form the
generator, which can better model the 3D spatial information and
thus could solve the discontinuity problem across slices.
Second, we take the results generated from the 2D-cGAN
as weak labels, which will be used together with an adversarial
training strategy to encourage the generator’s 3D output to look
like a stack of real CT slices as much as possible.
Segmentation Labels Vascular segmentation
Robust Segmentation of the Full Cerebral
Vasculature in 4D CT of Suspected Stroke
Patients
Midas Meijs, Ajay Patel, Sil C. van de Leemput, Mathias Prokop, Ewoud
J. van Dijk, Frank-Erik de Leeuw, Frederick J. A. Meijer, Bram van
Ginneken & Rashindra Manniesing
Scientific Reports volume 7, Article number: 15622 (2017)
https://doi.org/10.1038/s41598-017-15617-w
A robust method is presented for the segmentation of the full
cerebral vasculature in 4-dimensional (4D) computed
tomography (CT).
Temporal information, in combination with contrast
agent, is important for vessel segmentation as is reflected by
the WTV feature. The added value of 4D CT with improved
evaluation of intracranial hemodynamics comes at a cost, as a
4D CT protocol is associated with a higher radiation
dose. Although 4D CT imaging is not common practice,
applications of 4D CT are expanding. We expect 4D CT to
become a single acquisition for stroke workup as it
contains both noncontrast CT and CTA information.
These modalities might be reconstructed from a 4D CT
acquisition, resulting in a reduction of acquisitions and radiation
dose. In addition, studies suggest that 4D CT can be
acquired at half the dose of standard clinical protocol,
further reducing the radiation dose for the patient.
Coronal view of a temporal
maximum intensity projection
visualizing part of the middle
cerebral artery including the M1,
M2 and M3 segments. Intensity
differences from proximal to
distal in a nonaffected vessel
can reach up to 450 HU and
higher. Vessel occlusions,
vessel wall calcifications,
collateral flow, clip and stent
artifacts have a large influence
on the continuity of intensity
values along the vessel.
Examples of difficulties encountered in vessel segmentation. From left to right: Skull base
region, arteries and veins surrounded by hyperdense bony structures in their course through
the skull base, which renders difficulties in separating them from each other; patient with
coils placed at the anterior communicating artery; patient with ventricular shunt causing a
linear artifact in the left cerebral hemisphere.
CTA Segmentation Example with multi-task learning
Deep Distance Transform for Tubular Structure
Segmentation in CT Scans
Yan Wang, Xu Wei, Fengze Liu, Jieneng Chen, Yuyin Zhou, Wei Shen, Elliot K. Fishman, Alan
L. Yuille (Submitted on 6 Dec 2019)
https://arxiv.org/abs/1912.03383
Tubular structure segmentation in medical images, e.g.,
segmenting vessels in CT scans, serves as a vital step in the
use of computers to aid in screening early stages of related
diseases. But automatic tubular structure segmentation in CT
scans is a challenging problem, due to issues such as poor
contrast, noise and complicated background.
A tubular structure usually has a cylinder-like shape which
can be well represented by its skeleton and cross-sectional
radii (scales). Inspired by this, we propose a geometry-aware
tubular structure segmentation method, Deep Distance
Transform (DDT), which combines intuitions from the
classical distance transform for skeletonization and modern
deep segmentation networks. DDT first learns a multi-task
network to predict a segmentation mask for a tubular
structure and a distance map.
Each value in the map represents the distance from each tubular
structure voxel to the tubular structure surface. Then the
segmentation mask is refined by leveraging the shape prior
reconstructed from the distance map.
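The distance-map regression target described above can be illustrated with a brute-force NumPy sketch (function name ours; on real CT volumes one would call scipy.ndimage.distance_transform_edt instead):

```python
import numpy as np

def distance_map_target(mask):
    """DDT regression target: for every foreground (tubular-structure)
    voxel, the Euclidean distance to the nearest background voxel;
    zero on the background. Brute force for clarity only."""
    fg = np.argwhere(mask > 0)
    bg = np.argwhere(mask == 0)
    out = np.zeros(mask.shape, dtype=float)
    for p in fg:
        out[tuple(p)] = np.sqrt(((bg - p) ** 2).sum(axis=1)).min()
    return out
```

For a tube, the ridge of this map traces the skeleton and its values give the local radius, which is exactly the geometric prior DDT exploits.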
Segmentation Labels? 4D for vessels, and multi-frame reconstruction?
Multiclass Brain Tissue Segmentation in 4D CT Using
Convolutional Neural Networks
Sil C. Van De Leemput, Midas Meijs, Ajay Patel, Frederick J. A. Meijer, Bram Van
Ginneken, Rashindra Manniesing
IEEE Access ( Volume: 7, 11 April 2019 )
https://doi.org/10.1109/ACCESS.2019.2910348
4D CT imaging has a great potential for use in stroke workup. A fully
convolutional neural network (CNN) for 3D multiclass segmentation in 4D CT
is presented, which can be trained end-to-end from sparse 2D annotations.
The CNN was trained and validated on 42 4D CT acquisitions of the brain of
patients with suspicion of acute ischemic stroke. White matter, gray matter,
cerebrospinal fluid, and vessels were annotated by two trained observers.
The dataset used for the evaluation
consisted exclusively of normal
appearing brain tissues without
pathology or foreign objects, which are
seen in everyday clinical practice. The
data was collected as such to focus on
testing the feasibility of segmentation of
WM/GM/CSF and vessels in 4D CT
using deep learning, which is traditionally
the domain of MR imaging. This implies
that the method likely must be trained on
cases with pathology or foreign objects
and at least be evaluated on such cases,
before it can be used in practice.
However, we argue that our method
provides a valuable first step towards this
goal.
Example axial cross section for the derived images of a single 4D CT image
used for annotation. Left: the temporal average for WM, GM, and CSF
segmentation. Right: the temporal variance for vessel segmentation.
Three cross sections (axial, coronal, sagittal) of an exemplar 4D CT case.
Blue areas were selected for annotation by the observers, other areas were not
annotated. Brain mask from skull stripping.
Segmentation Labels? Musculoskeletal CT segmentation #1
Pixel-Level Deep Segmentation: Artificial Intelligence Quantifies
Muscle on Computed Tomography for Body Morphometric
Analysis
Hyunkwang Lee & Fabian M. Troschel & Shahein Tajmir & Georg Fuchs & Julia
Mario & Florian J. Fintelmann & Synho Do
Department of Radiology, Massachusetts General Hospital
J Digit Imaging (2017) http://doi.org/10.1007/s10278-017-9988-z
The muscle segmentation AI can be enhanced further by using the original 12-bit
image resolution with 4096 gray levels which could enable the network to learn
other significant determinants which could be missed in the lower resolution.
In addition, an exciting target would be adipose tissue segmentation. Adipose
tissue segmentation is relatively straightforward since fat can be thresholded within a
unique HU range [−190 to −30]. Prior studies proposed creating an outer muscle
boundary to segment HU thresholded adipose tissue into visceral adipose
tissue (VAT) and subcutaneous adipose tissue (SAT).
However, precise boundary generation is dependent on accurate muscle
segmentation. By combining our muscle segmentation network with a subsequent
adipose tissue thresholding system, we could quickly and accurately provide VAT
and SAT values in addition to muscle CSA. Visceral adipose tissue has been
implicated in cardiovascular outcomes and metabolic syndrome, and accurate fat
segmentation would increase the utility of our system beyond cancer
prognostication. Ultimately, our system should be extended to wholebody
volumetric analysis rather than axial CSA, providing rapid and accurate
characterization of body morphometric parameters.
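The thresholding pipeline sketched in the passage (fat in the −190 to −30 HU range, then VAT/SAT split by a muscle boundary) is straightforward in NumPy; function names and the boundary-mask input are our illustration:

```python
import numpy as np

FAT_HU_RANGE = (-190, -30)  # adipose-tissue HU window cited in the text

def adipose_mask(hu_image):
    """Threshold a CT image (in HU) to an adipose-tissue mask."""
    lo, hi = FAT_HU_RANGE
    return (hu_image >= lo) & (hu_image <= hi)

def split_vat_sat(hu_image, inner_boundary_mask):
    """Split thresholded fat into visceral (inside the outer muscle
    boundary) and subcutaneous (outside) compartments, given a boundary
    mask produced e.g. by the muscle segmentation network."""
    fat = adipose_mask(hu_image)
    vat = fat & inner_boundary_mask
    sat = fat & ~inner_boundary_mask
    return vat, sat
```

As the passage notes, the split is only as good as the muscle boundary: errors in the segmentation propagate directly into the VAT/SAT areas.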
Segmentation Labels? Musculoskeletal CT segmentation #2
Automated Muscle Segmentation from Clinical CT using Bayesian
U-Net for Personalization of a Musculoskeletal Model
Yuta Hiasa, Yoshito Otake, Masaki Takao, Takeshi Ogawa, Nobuhiko Sugano,
and Yoshinobu Sato
https://arxiv.org/abs/1907.08915 (21 July 2019)
We propose a method for automatic segmentation of individual
muscles from a clinical CT. The method uses Bayesian
convolutional neural networks with the U-Net architecture, using
Monte Carlo dropout that infers an uncertainty metric in addition
to the segmentation label.
We evaluated validity of the uncertainty metric in the multi-class
organ segmentation problem and demonstrated a
correlation between the pixels with high uncertainty and
the segmentation failure. One application of the uncertainty
metric in active learning is demonstrated, and the proposed
query pixel selection method considerably reduced the manual
annotation cost for expanding the training data set. The proposed
method allows an accurate patient-specific analysis of
individual muscle shapes in a clinical routine. This would open
up various applications including personalization of biomechanical
simulation and quantitative evaluation of muscle atrophy.
Phantoms
for Head CT
CT/MRI/PET Phantom from Bristol for Alzheimer Neuroimaging
Creation of an anthropomorphic CT head
phantom for verification of image
segmentation Medical Physics (11 March 2020)
https://doi.org/10.1002/mp.14127
Robin B. Holmes Ian S. Negus Sophie J. Wiltshire Gareth C. Thorne Peter Young
The Alzheimer’s Disease Neuroimaging Initiative
Department of Medical Physics and Bioengineering, University Hospitals Bristol NHS
Foundation Trust, Bristol, BS28HW United Kingdom
“Accuracy of CT segmentation will depend, to
some extent, on the ability of CT images to accurately
depict the structures of the head. This in turn will depend
on the scanner used and the exposure and
reconstruction factors selected. The delineation of soft
tissue structures will depend on material contrast,
edge resolution and image noise, which are in turn
affected by the peak tube potential (kVp), filtration, tube
current (mA), rotation time, reconstructed slice width
and the reconstruction algorithm, including iterative
methods and any other post-acquisition image
processing.
The limitation of the phantoms presented in these
(previous) studies is that they do not allow for
complex nested structures with multiple
material properties, as would be required to simulate
the brain. ... The effects of neuroimaging on clinical
confidence is not an area that has been investigated
rigorously, and the effects of analyses even less so
(e.g. Motara et al. 2017; Boelaarts et al. 2016). The
literature appears to concentrate more on novel methods
than on demonstrating the usefulness of existing ones.”
This work aims to use 3D printing to create a realistic anthropomorphic phantom representing the CT properties of a normal
human brain and skull. Properly developed, this type of phantom will allow the optimization and validation of CT
segmentation across different scanners and disease states. If sufficient realism can be attained with the phantom,
imaging the resulting phantom on different scanners and using different acquisition parameters will enable the
validation of the entire processing chain in the proposed clinical implementation of CT-VBM. ... may well be possible to use
phantoms to measure parameters that could be used as exclusion criteria in the clinical use of CT analyses, thereby increasing
sensitivity, specificity and clinical confidence. It would be relatively straightforward to create multiple phantoms of the
same subject with progressive atrophy; the atrophy could be simulated from a ‘base’ scan or by the assessment of
multiple patient scans from the ADNI database
3DP brain (left) and the completed phantom
after coating with plaster of Paris (right)
Comparison of the source MRI (column 1) and phantom
scan C (120kV, 300mAs) for scanner 1 (column 2) and
scanner 2 (column 3) with an 80kV acquisition on scanner
2 (column 4). The three rows depict different slices at
different levels in the head/phantom. As the printer was
only capable of printing 3 different types of plastic
no non-brain structures – such as the eyes or skull – were
printed. CT scans have 60HU subtraction and are
displayed with a window level of 30HU, window width
90HU. Representative ROIs used for determination of the
mean HU for each tissue type are shown in red.
see also “Physical imaging phantoms for simulation of tumor heterogeneity in PET, CT, and MRI” https://doi.org/10.1002/mp.14045
CT artifacts to simulate for intracerebral hemorrhage (ICH) analysis
Starburst/streak
artifact from dense materials
(metal, teeth)
Make two phantoms (one with
metal encased, and the other
without)? Or have insertable
dense materials?
http://www.neuroradiologycases.com/2011/
10/streak-artifacts.html
Motion artifacts
Have a motor moving the phantom so
you would know exactly the “blur kernel”;
would you benefit from fiducials on
phantom? Metal motor itself causing
artifacts to the image?
https://www.openaccessjournals.com/articles/ct-artifacts-causes-a
nd-reduction-techniques.html
Calcifications
Useful especially for dual-
energy CT simulation and
‘virtual noncalcium image’
https://doi.org/10.1093/neuros/nyaa029
ICH (i.e. blood)
How realistic can you make
this? Play with infill
density/pattern to allow
injection of blood-like material
to the phantom? ICH shape
very random
see e.g. Chinda et al. 2018
http://dx.doi.org/10.1136/bmjopen-2017-020260
Beam hardening i.e.
attenuation of signal in a “skull
pocket” -> phantom would benefit
from bone-like encasing.
e.g. http://doi.org/10.13140/RG.2.1.2575.3122
see eg. Raslau et al. 2016 https://doi.org/10.3174/ng.2160146
CT Extra the texture “radiomics story” and with fully deep end-to-end networks?
Reliability of CT-based texture features: Phantom
study
Bino A. Varghese Darryl Hwang Steven Y. Cen Joshua Levy Derek Liu Christopher
Lau Marielena Rivas Bhushan Desai David J. Goodenough Vinay A. Duddalwar
Journal of Applied Clinical Medical Physics (20 June 2019)
https://doi.org/10.1002/acm2.12666 - Cited by 1
Objective: To determine the intra-, inter- and test-retest variability of
CT-based texture analysis (CTTA) metrics.
Results: As expected, the robustness, repeatability and
reproducibility of CTTA metrics are variably sensitive to various
scanner (Philips Brilliance 64 CT, Toshiba Aquilion Prime 160 CT) and
scanning parameters. Entropy of Fast Fourier Transform-based
texture metrics was overall most reliable across the two
scanners and scanning conditions. Post-processing techniques
that reduce image noise while preserving the underlying edges
associated with true anatomy or pathology bring about significant
differences in radiomic reliability compared to when they were
not used.
(Left) Texture phantom comprising three texture patterns. (Middle) Phantom placement for image
acquisition. (Right) Cross section of texture phantom patterns. (1), (2) and (3) are 3D printed ABS plastic
with fill levels 10%, 20%, and 40%, respectively. (Bk) is a homogenous ABS material. (The window level is
−500 HU with a width of 1600 HU).
3.4 Effect of post-processing techniques that reduce
image noise while preserving the underlying edges
associated with true anatomy or pathology
By comparing the changes in robustness of the CTTA
metrics across the two scanners, we observe that
post-processing techniques that reduce image noise while
preserving the underlying anatomical edges, for
example iDose levels (here 6 levels) on the Philips
scanner and Mild/Strong (here 2 levels) on the
Toshiba scanner, produce a significant difference in CTTA
robustness compared to the base setting (Fig. 3).
Stronger noise reduction techniques were
associated with a significant reduction in
reliability on the Philips scanner; however, the
opposite was observed on the Toshiba scanner.
In both cases, no noise reduction techniques were used
in the base setting.
CT Phantom Study for deep learning based reconstruction
Deep Learning Reconstruction at CT: Phantom Study
of the Image Characteristics Toru Higaki et al. Academic
Radiology Volume 27, Issue 1, January 2020, Pages 82-87
https://doi.org/10.1016/j.acra.2019.09.008
Noise, commonly encountered on computed tomography (CT)
images, can impact diagnostic accuracy. To reduce the image
noise, we developed a deep-learning reconstruction (DLR)
method that integrates deep convolutional neural networks into image
reconstruction. In this phantom study, we compared the image noise
characteristics, spatial resolution, and task-based detectability on
DLR images and images reconstructed with other state-of-the art
techniques.
On images reconstructed with DLR, the noise was lower than
on images subjected to other reconstructions, especially at low
radiation dose settings. Noise power spectrum measurements also
showed that the noise amplitude was lower, especially for low-
frequency components, on DLR images. Based on the MTF,
spatial resolution was higher on model-based iterative reconstruction
image than DLR image, however, for lower-contrast objects, the MTF
on DLR images was comparable to images reconstructed with other
methods. The machine observer study showed that at reduced
radiation-dose settings, DLR yielded the best detectability.
Phantom images scanned at 2.5 mGy. The image noise is lowest on the DLR
image, the texture is preserved, and the object boundary is sharper than
on the other images.
Dual Energy CT Nice for CT as well with the calcium separation
Optimising dual-
energy CT scan
parameters for
virtual non-
calcium imaging
of the bone
marrow: a
phantom study
https://doi.org/10.1186/s41747-019-0125-2
Effects of Patient Size and Radiation Dose on Iodine Quantification in
Dual-Source Dual-Energy CT https://doi.org/10.1016/j.acra.2019.12.027
Figure 1. A cross-section CT image of the medium-sized
phantom with eight iodine inserts. The number above each
insert indicates its iodine concentration in mg/ml.
Figure 6. The 80 kVp images from the DECT scan of a 32-cm diameter CTDI
phantom with different combinations of effective mAs and rotation time: (a) 53 mAs
and 0.5 s, (b) 106 mAs, 1.0s, (c) 106 mAs, 0.5 s, and (d) 530 mAs, 0.5 s. A narrow
window of 200 HU is used to show the bias in the CT number. Four circular ROIs of 1.6
cm diameter are shown in panel (d), at distances of 3.4, 6.7, 10, and 13.3 cm from the
center.
Remember that CT has non-medical uses as well and you can have a look of that
literature if you are interested
Measuring Identification and Quantification Errors in Spectral CT Material
Decomposition https://doi.org/10.3390/app8030467
(a) Spectroscopic phantom with three 6 mm diameter
hydroxyapatite calibration rods (54.3, 211.7 and 808.5 mg/mL) and 6
mm diameter vials of gadolinium (1, 2, 4, 8 mg/mL), oil (canola oil) and
distilled water; (b) CT image of the phantom.
CT-specific
preprocessing
before the more
“general computer
vision techniques”
“The Tech Stack”
from here
CT Volumes can be both anisotropic and isotropic
Brain atlas fusion from high-thickness diagnostic magnetic
resonance images by learning-based super-resolution Zhang et al.
(2017) https://doi.org/10.1016/j.patcog.2016.09.019 - Cited by 12
ANISOTROPIC VOLUME
“Staircased” volume due to
low z-resolution
ISOTROPIC VOLUME
A lot smoother volume
reconstruction
(Illustration from Reddit: a “Lego corgi” next to a real corgi, the same “as a dog”, rendered at different resolutions.)
Well, in practice CT volumes are almost always anisotropic, and they are resampled to be isotropic.
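The resampling step is typically a single interpolation call; a sketch using SciPy (the 1 mm target and function name are our choices):

```python
import numpy as np
from scipy import ndimage

def resample_isotropic(volume, spacing_mm, target_mm=1.0, order=1):
    """Resample an anisotropic CT volume (e.g. 0.5 x 0.5 x 5 mm voxels) to
    isotropic voxels of `target_mm` using spline interpolation of the
    given order (order=1 is trilinear)."""
    zoom_factors = [s / target_mm for s in spacing_mm]
    return ndimage.zoom(volume, zoom_factors, order=order)
```

Note that interpolation cannot recover detail lost to thick slices; it only smooths the staircasing.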
Staircasing Example When z-resolution is too coarse
Co-registration of BOLD activation area on 3D brain image (Courtesy
Siemens) http://mriquestions.com/registrationnormalization.html
UCL Data
https://doi.org/10.1016/j.jneumeth.2016.03.001
Get rid of background and “non-brain”
Cushion
contours
Plastic
“helmet”
Head mask Brain mask
8-bit mapping of the “int13” input (1 sign bit + 12-bit intensity):
clip to the [−1024, 3071] HU range, keep −100 to 100 HU linear (so
nothing in the brain-relevant range is compressed or lost), and use
the remaining 55 of the 256 values for the outside values that are
not as relevant for the brain.
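The slide's 8-bit mapping can be sketched like this (the exact split of the 55 outside codes, 27 below −100 HU and 28 above 100 HU, is our assumption, not stated on the slide):

```python
import numpy as np

def hu_to_uint8(hu):
    """Map clipped [-1024, 3071] HU to 8 bits: the brain-relevant
    [-100, 100] HU range stays linear, one code per HU (codes 27..227),
    while the 55 remaining codes compress everything outside."""
    hu = np.clip(np.asarray(hu, dtype=float), -1024, 3071)
    out = np.empty(hu.shape, dtype=np.uint8)
    lo = hu < -100
    mid = (hu >= -100) & (hu <= 100)
    hi = hu > 100
    out[lo] = np.round((hu[lo] + 1024) / 924.0 * 26).astype(np.uint8)        # codes 0..26
    out[mid] = np.round(hu[mid] + 127).astype(np.uint8)                      # codes 27..227
    out[hi] = np.round(228 + (hu[hi] - 101) / 2970.0 * 27).astype(np.uint8)  # codes 228..255
    return out
```

The design choice is the same as a display window: spend most of the dynamic range where the diagnostically relevant contrast lives.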
CT Preprocessing Clip HU units, use NifTI, and avoid bias field
Recommendations for
Processing Head CT Data
John Muschelli (2019)
https://doi.org/10.3389/fninf.2019.00061
Department of Biostatistics, Johns Hopkins Bloomberg School of
Public Health, Baltimore, MD, United States
Many different general 3D medical imaging formats exist, such as ANALYZE, NIfTI, NRRD, and MNC.
We recommend the NIfTI format (e.g. https://github.com/rordenlab/dcm2niix), as it can be read
by nearly all medical imaging platforms, has been widely used, has a format standard, can be stored
in a compressed format, and is how much of the data is released online.
Once converted to NIfTI format, one should ensure the scale of the data. Most CT data is
between −1024 and 3071 Hounsfield Units (HU). Values less than −1024 HU are commonly
found due to areas of the image outside the field of view that were not actually imaged. One
first processing step would be to Winsorize the data (clip the values) to the [−1024, 3071]
range. After this step, the header elements scl_slope and scl_inter elements of the NIfTI
image should be set to 1 and 0, respectively, to ensure no data rescaling is done in other software.
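The Winsorization step itself is a one-liner; a sketch (the commented nibabel lines assume nibabel is installed and follow the recommendation above):

```python
import numpy as np

def winsorize_hu(data, lo=-1024, hi=3071):
    """Winsorize CT intensities to the valid HU range; values below
    -1024 HU typically come from regions outside the field of view."""
    return np.clip(data, lo, hi)

# With nibabel (our assumption), the full recommended step would be:
#   img = nib.load("head_ct.nii.gz")
#   out = nib.Nifti1Image(winsorize_hu(img.get_fdata()), img.affine, img.header)
#   out.header["scl_slope"], out.header["scl_inter"] = 1, 0  # prevent rescaling
#   nib.save(out, "head_ct_clipped.nii.gz")
```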
Though HU is the standard format used in CT analysis, negative HU values may cause issues with
standard imaging pipelines built for MRI, which typically have positive values. Rorden (CITE)
proposed a lossless transformation, called Cormack units, which have a minimum value of 0.
The goal of the transformation is to increase the range of the data that is usually of interest, from
−100 to 100 HU and is implemented in the Clinical toolbox. Most analyses are done using
HU, however.
Though CT data has no coil or assumed bias field as in MRI, one can
test whether spatially harmonizing the data with one of these
correction procedures improves the performance of a method. We do not
recommend this procedure generally, as it may reduce contrast between
areas of interest, such as hemorrhages in the brain, but it has been
used to improve segmentation (Cauley et al., 2018). We would like to
discuss potential methods and CT-specific issues.
http://neurovascularmedicine.com/imagingct.php
https://www.slideshare.net/drtarungoyal/basic-principle-of-ct-and-ct-generations-122053336
Optimizing the HU window instead of using the full HU range #1
Practical Window Setting Optimization for
Medical Image Deep Learning
Hyunkwang Lee, Myeongchan Kim, Synho Do
Harvard / Mass General
(Submitted on 3 Dec 2018)
https://arxiv.org/abs/1812.00572v1
https://github.com/suryachintu/RSNA-Intracranial-Hemorrhage-Detection
https://github.com/MGH-LMIC/windows_optimization
Keras
The deep learning community has to date neglected window
display settings - a key feature of clinical CT interpretation and
opportunity for additional optimization. Here we propose a window
setting optimization (WSO) module that is fully trainable with
convolutional neural networks (CNNs) to find optimal window
settings for clinical performance.
Our approach was inspired by the method commonly used by
practicing radiologists to interpret CT images by adjusting window
settings to increase the visualization of certain pathologies. Our
approach provides optimal window ranges to enhance the
conspicuity of abnormalities, and was used to enable
performance enhancement for intracranial hemorrhage and urinary
stone detection.
On each task, the WSO model outperformed models trained
over the full range of Hounsfield unit values in CT images,
as well as images windowed with pre-defined settings. The WSO
module can be readily applied to any analysis of CT images, and can
be further generalized to tasks on other medical imaging modalities.
Our WSO models can be further optimized by investigating the effects of the number of input
image channels and the upper-bound value U on the performance of the target application. Additionally, we stress that the
WSO-based approach described here is not specific to abnormality classification on CT images,
but rather generalizable to various image interpretation tasks on a variety of medical imaging
modalities.
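For reference, the fixed, pre-defined windowing that WSO is compared against can be sketched in a few lines of NumPy. The level/width values below are common textbook choices (brain, subdural, bone), not taken from the paper:

```python
import numpy as np

def window_hu(volume_hu, level=40.0, width=80.0):
    """Clip a CT volume in HU to a display window and rescale to [0, 1].
    level=40, width=80 is a typical brain window."""
    lo, hi = level - width / 2.0, level + width / 2.0
    return (np.clip(volume_hu, lo, hi) - lo) / (hi - lo)

# Stacking several fixed windows as input channels, a common alternative
# to learning the window settings end-to-end.
vol = np.random.uniform(-1024.0, 3071.0, size=(64, 64)).astype(np.float32)
channels = np.stack([
    window_hu(vol, 40, 80),      # brain
    window_hu(vol, 80, 200),     # subdural
    window_hu(vol, 600, 2800),   # bone
], axis=-1)
```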
Optimizing the HU window instead of using the full HU range #2
CT window trainable neural network for
improving intracranial hemorrhage detection by
combining multiple settings
Manohar Karki et al.
CAIDE Systems Inc., Lowell, MA, USA
(20 May 2020)
https://doi.org/10.1016/j.artmed.2020.101850
●This method gives a novel approach where a deep convolutional neural
network (DCNN) is trained in conjunction with a CT window
estimator module in an end-to-end manner for better predictions in
diagnostic radiology.
●A learnable module for approximating the window settings for Computed
Tomography (CT) images is proposed to be trained in a distant supervised
manner without prior knowledge of best window settings values by
simultaneously training a lesion classifier.
●Based on the learned module, several candidate window settings are
automatically identified; the raw CT data are scaled at each setting, and
separate lesion classification models are trained on each.
10-bit (or 11-bit) mapping, and you can actually display it?
The display has a 2000:1 contrast ratio.
Product pages: Coronis Fusion 6MP (MDCC-6530) and
Coronis Fusion 4MP (MDCC-4430)
https://www.medgadget.com/2019/11/barcos-flagship-multimodality-diagnostic-monito
r-gets-an-upgrade.html
Should HDR Displays
Follow the Perceptual
Quantizer (PQ) Curve?
[discussion started as an email
thread in the HDR work group
of the International Committee
of Display Metrology (ICDM)]
https://www.displaydaily.com/article/displ
ay-daily/should-hdr-displays-follow-the-p
q-curve
You can assume your HUs to be properly calibrated?
Automatic deep learning-based normalization of breast
dynamic contrast-enhanced magnetic resonance
images
Jun Zhang, Ashirbani Saha, Brian J. Soher, Maciej A. Mazurowski
Department of Radiology, Duke University
(5 Jul 2018)
https://arxiv.org/abs/1807.02152
To develop an automatic image normalization algorithm
for intensity correction of images from breast dynamic
contrast-enhanced magnetic resonance imaging (DCE-MRI)
acquired by different MRI scanners with various imaging
parameters, using only image information.
DCE-MR images of 460 subjects with breast cancer acquired
by different scanners were used in this study. Each subject
had one T1-weighted pre-contrast image and three T1-
weighted post-contrast images available. Our
normalization algorithm operated under the assumption that
the same type of tissue in different patients should be
represented by the same voxel value.
The proposed image normalization strategy based on tissue
segmentation can perform intensity correction fully
automatically, without the knowledge of the scanner
parameters.
And handled by the device manufacturer? Would there still be room for post-processing?
CT Preprocessing Defacing (De-Identification)
Recommendations for Processing Head CT Data
John Muschelli (2019) https://doi.org/10.3389/fninf.2019.00061
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health, Baltimore, MD, United States
As part of the Health Insurance Portability and Accountability Act (HIPAA) in the United States, under the “Safe Harbor”
method, releasing data requires the removal of a number of protected health information (PHI) identifiers
(Centers for Medicare & Medicaid Services, 1996). For head CT images, a notable identifier is “Full-face photographs and
any comparable images”. Head CT images have the potential for 3D reconstructions, which likely fall under this PHI
category and present an issue for re-identification of participants (Schimke and Hale, 2015). Thus, removing areas of the
face, called defacing, may be necessary for releasing data. If parts of the face and nasal cavities are the target of
the imaging, then defacing may be an issue. As ears may be a future identifying biometric marker, and dental records may
be used for identification, these areas may be desirable to remove (Cadavid et al., 2009; Mosher, 2010).
The obvious method for image defacing is to perform the brain extraction described above. If we consider defacing to be
removing parts of the face while preserving the rest of the image as much as possible, this solution is not sufficient.
Additional options for defacing exist such as the MRI Deface software (https://www.nitrc.org/projects/mri_deface/),
which is packaged in the FreeSurfer software and can be run using the mri_deface function from the freesurfer R
package (Bischoff-Grethe et al., 2007; Fischl, 2012). We have found this method does not work well out of the box on
head CT data, including when a large amount of the neck is imaged.
Registration methods involve registering images to the CT and applying the transformation of a mask of the
removal areas (such as the face). Examples of this implementation in Python modules for defacing
are pydeface (https://github.com/poldracklab/pydeface/tree/master/pydeface) and mridefacer (
https://github.com/mih/mridefacer). These methods work since registration from MRI to CT tends to
perform adequately, usually with a cross-modality cost function such as mutual information. Other
estimation methods such as the Quickshear Defacing method rely on finding the face by its relative
placement compared to a modality-agnostic brain mask (Schimke and Hale, 2011). The fslr R package
implements both the methods of pydeface and Quickshear. The ichseg R package also has a
function ct_biometric_mask that tries to remove the face and ears based on registration to a CT template
(described below). Overall, removing potential biometric markers from imaging data should be considered
when releasing data and a number of methods exist, but do not guarantee complete de-identification and
may not work directly with CT without modification.
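To make the idea concrete, here is a deliberately naive sketch of defacing in the spirit of Quickshear: zero out the image block that lies anterior to and below the brain mask. It assumes an RAS-like orientation (axis 1 posterior-to-anterior, axis 2 inferior-to-superior) and is an illustration only, not a replacement for the validated tools above, which compute a shear plane rather than a box:

```python
import numpy as np

def naive_deface(volume, brain_mask):
    """Zero the anterior-inferior block outside the brain, where the face sits.

    A toy stand-in for registration- or plane-based defacing (pydeface,
    Quickshear); assumes axis 1 runs posterior->anterior and axis 2
    inferior->superior.
    """
    # Front-most (anterior) slice index containing brain.
    ant = np.where(brain_mask.any(axis=(0, 2)))[0].max()
    # Bottom-most (inferior) slice index containing brain.
    inf = np.where(brain_mask.any(axis=(0, 1)))[0].min()
    out = volume.copy()
    out[:, ant:, :inf] = 0  # anterior to the brain AND below its lower edge
    return out
```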
https://slideplayer.com/slide/12844720/
https://neurostars.org/t/sharing-data-on-openneuro-without-consent-form-but-consent-by-the-ethics-committee/1593
Brain Extraction Tools (BETs): nothing really good available for CT?
i.e. skull stripping → more options available for MRI
Validated Automatic Brain Extraction of
Head CT Images John Muschelli et al. (2015)
https://dx.doi.org/10.1016%2Fj.neuroimage.2015.03.074
https://rdrr.io/github/muschellij2/ichseg/man/CT_Skull_Strip_robust.html R
https://johnmuschelli.com/neuroc/ss_ct/index.html
Department of Biostatistics, Johns Hopkins Bloomberg School of Public Health,
Baltimore, MD, United States
Aim: To systematically analyze and validate the performance of FSL's brain extraction tool
(BET) on head CT images of patients with intracranial hemorrhage. This was done by
comparing the manual gold standard with the results of several versions of automatic brain
extraction and by estimating the reliability of automated segmentation of longitudinal scans.
The effects of the choice of BET parameters and data smoothing are studied and reported. BET
performs well at brain extraction on thresholded, 1 mm³ smoothed CT images with a
fractional intensity (FI) of 0.01 or 0.1. Smoothing before applying BET is an important
step not previously discussed in the literature.
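The thresholding-plus-smoothing recipe validated by Muschelli et al. can be sketched as follows. The function name and the exact smoothing kernel are illustrative; BET itself (with FI 0.01 or 0.1) would then be run on the result:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def preprocess_for_bet(ct_hu, sigma_vox=1.0):
    """Threshold a head CT to the 0-100 HU brain range, then smooth.

    Voxels outside [0, 100] HU (air, bone, contrast) are zeroed; a small
    Gaussian smooth (sigma in voxels, assuming ~1 mm voxels) stabilizes
    BET's surface estimation, as reported by Muschelli et al. (2015).
    """
    masked = np.where((ct_hu >= 0) & (ct_hu <= 100), ct_hu, 0.0)
    return gaussian_filter(masked, sigma=sigma_vox)
```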
Automated brain extraction from head CT and CTA
images using convex optimization with shape
propagation Mohamed Najmi et al. (2019)
https://doi.org/10.1016/j.cmpb.2019.04.030
https://github.com/WuChanada/StripSkullCT Matlab
Robust brain extraction tool for CT head images
Zeynettin Akkus, Petro Kostandy, Kenneth A.Philbrick, Bradley J.Erickson et al. (7 June 2020)
https://doi.org/10.1016/j.neucom.2018.12.085 - Cited by 2
https://github.com/aqqush/CT_BET Keras Python
CT Preprocessing MNI Space
Normalization to spatial coordinates “registration problem”
Classification of damaged tissue in stroke CTs. A
representative stroke CT scan (A) is normalized to MNI space
(B) and spatially smoothed (C). Next, the resulting image is
compared to a group of control CTs by means of the Crawford–
Howell t-test. The resulting t-score map is converted to a probability
map, which is then overlaid onto the image itself (D). By thresholding
this probability map at a given significance level, the lesioned
regions can be delineated. The lesion map in MNI space can be
transformed back to individual subject space (E), so that it
can be compared with a lesion map manually delineated by an
operator (F) on the original CT image.
http://doi.org/10.1016/j.nicl.2014.03.009 -
Cited by 64
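The voxelwise Crawford–Howell t-test used in that pipeline compares one patient's value against a small control sample; a sketch of the formula, with scipy used only for the p-value:

```python
import numpy as np
from scipy import stats

def crawford_howell_t(case, controls):
    """Crawford-Howell t-test of a single case against n controls:
    t = (x - mean) / (sd * sqrt((n + 1) / n)), with df = n - 1.
    Works elementwise, so `case` can be a whole normalized CT volume."""
    controls = np.asarray(controls, dtype=float)
    n = controls.shape[0]
    mean = controls.mean(axis=0)
    sd = controls.std(axis=0, ddof=1)
    t = (case - mean) / (sd * np.sqrt((n + 1) / n))
    p = 2.0 * stats.t.sf(np.abs(t), df=n - 1)  # two-sided p-value
    return t, p
```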
Human Brain in Standard MNI Space (2017)
Jürgen Mai, Milan Majtanik
The Talairach coordinate of a point in the MNI
space: how to interpret it WilkinChau and
Anthony R.McIntosh (2005)
https://doi.org/10.1016/j.neuroimage.2004.12.007
“The two most widely used spaces in
the neuroscience community are the Talairach space and the
Montreal Neurological Institute (MNI) space. The Talairach
coordinate system has become the standard reference for
reporting the brain locations in scientific publication, even when
the data have been spatially transformed into different
brain templates (e.g., MNI space). “
CT Preprocessing Space Transform Optimization
As with every signal-processing step, you can always do better; each method has its own pros and cons
https://www.slideserve.com/shaina/group-analyses-in-fmri
http://www.diedrichsenlab.org/imaging/propatlas.htm
Cited by 660
Advanced Normalisation Tools (ANTs)
http://www.mrmikehart.com/tutorials.html
Transcranial brain atlas
http://doi.org/10.1126/sciadv.aar6904
Spatial Normalization - an overview
https://www.sciencedirect.com/topics/medicine-and-
dentistry/spatial-normalization
MAR
Metal Artifact
Reduction
getting rid of the
metal / bone (dense
material) artifacts
Deep-MAR
Fast Enhanced CT Metal Artifact Reduction using Data
Domain Deep Learning Muhammad Usman Ghani, W. Clem Karl
https://arxiv.org/abs/1904.04691v3 (2019)
Filtered back projection (FBP) is the most widely used method
for image reconstruction in X-ray computed tomography (CT)
scanners, and can produce excellent images in many cases.
However, the presence of dense materials, such as metals, can
strongly attenuate or even completely block X-rays, producing
severe streaking artifacts in the FBP reconstruction. These
metal artifacts can greatly limit subsequent object delineation and
information extraction from the images, restricting their diagnostic
value.
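A useful mental model for why sinogram-domain methods exist is the classical linear-interpolation MAR baseline, which the deep approaches below improve on: detector bins inside the metal trace are treated as missing and filled in per projection angle. A minimal sketch:

```python
import numpy as np

def li_mar(sinogram, metal_trace):
    """Linear-interpolation MAR: for each projection angle, replace the
    detector bins flagged in metal_trace with values interpolated from
    the nearest unaffected bins, then reconstruct (e.g. with FBP) as usual."""
    out = np.array(sinogram, dtype=float)
    bins = np.arange(sinogram.shape[1])
    for i in range(sinogram.shape[0]):  # rows = projection angles
        bad = metal_trace[i].astype(bool)
        if bad.any() and (~bad).any():
            out[i, bad] = np.interp(bins[bad], bins[~bad], out[i, ~bad])
    return out
```

The "secondary artifacts" mentioned in the abstracts come precisely from the inconsistency this per-angle interpolation introduces between angles, which is what joint sinogram/image training (DuDoNet) targets.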
DuDoNet Joint use of sinogram and image domains
DuDoNet: Dual Domain Network for CT Metal Artifact Reduction
Wei-An Lin, Haofu Liao, Cheng Peng, Xiaohang Sun, Jingdan Zhang, Jiebo
Luo, Rama Chellappa, Shaohua Kevin Zhou (2019)
http://openaccess.thecvf.com/content_CVPR_2019/html/Lin_DuDoNet_Dual_Domain_Ne
twork_for_CT_Metal_Artifact_Reduction_CVPR_2019_paper.html
Computed tomography (CT) is an imaging modality widely used for
medical diagnosis and treatment. CT images are often corrupted by
undesirable artifacts when metallic implants are carried by patients, which
creates the problem of metal artifact reduction (MAR).
Existing methods for reducing the artifacts due to metallic implants are
inadequate for two main reasons. First, metal artifacts are structured and
non-local so that simple image domain enhancement approaches would
not suffice. Second, the MAR approaches which attempt to reduce metal
artifacts in the X-ray projection (sinogram) domain inevitably lead to
severe secondary artifact due to sinogram inconsistency.
To overcome these difficulties, we propose an end-to-end trainable Dual
Domain Network (DuDoNet) to simultaneously restore sinogram
consistency and enhance CT images. The linkage between the
sinogram and image domains is a novel Radon inversion layer
that allows the gradients to back-propagate from the image domain to the
sinogram domain during training. Extensive experiments show that our
method achieves significant improvements over other single domain MAR
approaches. To the best of our knowledge, it is the first end-to-end dual-
domain network for MAR.
DuDoNet++ Joint use of sinogram and image domains
DuDoNet++: Encoding mask projection to reduce CT metal
artifacts Yuanyuan Lyu, Wei-An Lin, Jingjing Lu, S. Kevin Zhou
(Submitted on 2 Jan 2020 (v1), last revised 18 Jan 2020)
https://arxiv.org/abs/2001.00340
CT metal artifact reduction (MAR) is a notoriously challenging task
because the artifacts are structured and non-local in the image
domain. However, they are inherently local in the sinogram domain.
DuDoNet is the state-of-the-art MAR algorithm which exploits the
latter characteristic by learning to reduce artifacts in the
sinogram and image domain jointly. By design, DuDoNet treats
the metal-affected regions in sinogram as missing and replaces them
with the surrogate data generated by a neural network.
Since fine-grained details within the metal-affected regions are
completely ignored, the artifact-reduced CT images by DuDoNet
tend to be over-smoothed and distorted. In this work, we investigate
the issue by theoretical derivation. We propose to address the
problem by (1) retaining the metal-affected regions in sinogram and
(2) replacing the binarized metal trace with the metal mask projection
such that the geometry information of metal implants is encoded.
Extensive experiments on simulated datasets and expert evaluations
on clinical images demonstrate that our network called DuDoNet++
yields anatomically more precise artifact-reduced images
than DuDoNet, especially when the metallic objects are large.
Unsupervised Approach ADN with good performance
Artifact Disentanglement Network for
Unsupervised Metal Artifact Reduction
Haofu Liao, Wei-An Lin, Jianbo Yuan, S. Kevin Zhou, Jiebo
Luo (Submitted on 5 Jun 2019)
https://arxiv.org/abs/1906.01806v5
https://github.com/liaohaofu/adn
PyTorch
Current deep neural network based approaches to
computed tomography (CT) metal artifact reduction (MAR)
are supervised methods which rely heavily on synthesized
data for training. However, as synthesized data may not
perfectly simulate the underlying physical mechanisms of
CT imaging, the supervised methods often generalize
poorly to clinical applications. To address this problem, we
propose, to the best of our knowledge, the first
unsupervised learning approach to MAR. Specifically,
we introduce a novel artifact disentanglement network that
enables different forms of generations and regularizations
between the artifact-affected and artifact-free image
domains to support unsupervised learning. Extensive
experiments show that our method significantly
outperforms the existing unsupervised models for image-
to-image translation problems, and achieves
comparable performance to existing supervised models on
a synthesized dataset. When applied to clinical datasets,
our method achieves considerable improvements
over the supervised models.
Unsupervised Improvement over ADN?
Three-dimensional Generative Adversarial Nets
for Unsupervised Metal Artifact Reduction
Megumi Nakao, Keiho Imanishi, Nobuhiro Ueda, Yuichiro
Imai, Tadaaki Kirita, Tetsuya Matsuda
(Submitted on 19 Nov 2019)
https://arxiv.org/abs/1911.08105
In this paper, we introduce metal artifact reduction methods
based on an unsupervised volume-to-volume
translation learned from clinical CT images. We construct
three-dimensional adversarial nets with a regularized loss
function designed for metal artifacts from multiple
dental fillings. The results of experiments using 915 CT
volumes from real patients demonstrate that the proposed
framework has an outstanding capacity to reduce strong
artifacts and to recover underlying missing voxels, while
preserving the anatomical features of soft tissues and tooth
structures from the original images.
Using paired artifact-free MRI for CT MAR
Combining multimodal information for Metal
Artefact Reduction: An unsupervised deep
learning framework
Marta B.M. Ranzini, Irme Groothuis, Kerstin Kläser, M. Jorge Cardoso,
Johann Henckel, Sébastien Ourselin, Alister Hart, Marc Modat
[Submitted on 20 Apr 2020]
https://arxiv.org/abs/2004.09321
Metal artefact reduction (MAR) techniques aim at
removing metal-induced noise from clinical images. In
Computed Tomography (CT), supervised deep learning
approaches have been shown effective but limited in
generalisability, as they mostly rely on synthetic data. In
Magnetic Resonance Imaging (MRI) instead, no method has
yet been introduced to correct the susceptibility
artefact, still present even in MAR-specific acquisitions.
In this work, we hypothesise that a multimodal approach
to MAR would improve both CT and MRI. Given their
different artefact appearance, their complementary
information can compensate for the corrupted signal in
either modality. We thus propose an unsupervised deep
learning method for multimodal MAR. We introduce the use
of Locally Normalised Cross Correlation as a loss
term to encourage the fusion of multimodal information.
Experiments show that our approach favours a smoother
correction in the CT, while promoting signal recovery in the
MRI.
Unsupervised Approach jointly with other tasks
Joint Unsupervised Learning for the Vertebra
Segmentation, Artifact Reduction and Modality
Translation of CBCT Images
Yuanyuan Lyu, Haofu Liao, Heqin Zhu, S. Kevin Zhou
(Submitted on 2 Jan 2020 (v1), last revised 18 Jan 2020)
https://arxiv.org/abs/2001.00339
We investigate the unsupervised learning of the vertebra
segmentation, artifact reduction and modality translation of
CBCT images. To this end, we formulate this problem under a
unified framework that jointly addresses these three
tasks and intensively leverages the knowledge sharing. The
unsupervised learning of this framework is enabled by 1) a
novel shape-aware artifact disentanglement network that
supports different forms of image synthesis and vertebra
segmentation and 2) a deliberate fusion of knowledge from
an independent CT dataset. Specifically, the proposed
framework takes a random pair of CBCT and CT images as the
input, and manipulates the synthesis and segmentation via
different combinations of the decodings of the disentangled
latent codes. Then, by discovering various forms of
consistencies between the synthesized images and
segmentation maps, the learning is achieved via self-learning from
the given CBCT and CT images, obviating the need for
paired (i.e., anatomically identical) ground-truth data.
Mandible segmentation to help MAR?
Recurrent convolutional neural networks for
mandible segmentation from computed tomography
Bingjiang Qiu, Jiapan Guo, Joep Kraeima, Haye H. Glas, Ronald
J. H. Borra, Max J. H. Witjes, Peter M. A. van Ooijen (Submitted
on 13 Mar 2020) https://arxiv.org/abs/2003.06486
Recently, accurate mandible segmentation in CT scans
based on deep learning methods has attracted much attention.
However, there still exist two major challenges, namely, metal
artifacts among mandibles and large variations in
shape or size among individuals. To address these two
challenges, we propose a recurrent segmentation
convolutional neural network (RSegCNN) that embeds
segmentation convolutional neural network (SegCNN) into the
recurrent neural network (RNN) for robust and accurate
segmentation of the mandible. Such a design of the system
takes into account the similarity and continuity of the mandible
shapes captured in adjacent image slices in CT scans. The
RSegCNN infers the mandible information based on the
recurrent structure with the embedded encoder-decoder
segmentation (SegCNN) components. The recurrent
structure guides the system to exploit relevant and important
information from adjacent slices, while the SegCNN
component focuses on the mandible shapes from a single CT
slice.
CT Noise
Modeling and
Denoising
Background
Noise Review #1
A review on CT image noise and its
denoising
Manoj Diwakar, Manoj Kumar
Biomedical Signal Processing and Control (April 2018)
https://doi.org/10.1016/j.bspc.2018.01.010
The process of CT image reconstruction depends on
many physical measurements, such as radiation dose and
software/hardware. Due to statistical uncertainty in all these
physical measurements, noise is inevitably
introduced into CT images. Therefore, edge-preserving
denoising methods are required to enhance the quality of CT
images. However, there is a tradeoff between noise reduction
and the preservation of actual medical relevant contents.
Reducing the noise without losing the important features of the
image such as edges, corners and other sharp structures, is a
challenging task.
Nevertheless, various techniques have been presented to
suppress the noise from CT-scanned images. Each
technique has its own assumptions, merits and
limitations. This paper contains a survey of some significant
work in the area of CT image denoising. Often, researchers
face difficulty understanding the noise in CT images
and selecting an appropriate denoising method that is
specific to their purpose. Hence, a brief introduction about CT
imaging, the characteristics of noise in CT images and the
popular methods of CT image denoising are presented here.
The merits and drawbacks of CT image denoising methods are
also discussed.
Major factors affecting the quality of CT images:
●Blurring
  1) How the equipment is operated.
  2) Appropriate protocol factor values.
  3) Blurring of the image due to patient movement.
  4) Fluctuation of CT number between pixels in the image for a scan of uniform material.
  5) Some filter algorithms, or badly chosen filter parameters (used to reduce noise), blur the image.
●Field of view (FOV)
●Artifacts
  ●Beam hardening
  ●Metal artifact
  ●Patient motion
  ●Software / hardware based artifacts
●Visual noise
To reconstruct a good quality CT image, the CT scanner has
two important characteristics:
(1) Geometric efficiency: when some X-rays transmitted
through the patient never reach the active detector area,
geometric efficiency is reduced.
(2) Absorption efficiency: when X-rays that do reach the
active detectors are not absorbed, and hence not recorded,
absorption efficiency is reduced.
Therefore, the relationship between noise and
radiation dose in CT scanner must be analyzed.
●Detector
●Collimators
●Scan range
●Tube current
●Scan (rotation) time
●Slice thickness
●Peak kilovoltage (kVp)
(1) By understanding the radiation dose and improving the dose
efficiency of CT systems, low-dose CT images can be improved.
(2) In the second approach, CT image quality can be improved by
developing algorithms to reduce the noise in CT images; these
algorithms can in turn be used to reduce the radiation dose.
Generally, this process of noise suppression is known as image
denoising.
Noise Review #2: Noise Sources
https://doi.org/10.1016/j.bspc.2018.01.010
Random noise: arises from the detection of a finite number of X-
ray quanta in the projection and looks like a fluctuation in image density.
The resulting change in image density is unpredictable and random,
hence the name random noise.
Statistical noise: X-ray energy is transmitted in the form of
individual chunks of energy called quanta, and a finite number
of these quanta are detected by the X-ray detector. The number of
detected quanta may differ from one measurement to another because of
statistical fluctuation. Statistical noise in CT images may thus appear
because of fluctuations in detecting a finite number of X-ray quanta.
Statistical noise may also be called quantum noise. As more quanta are
detected in each measurement, the relative accuracy of each
measurement is improved. The only way to reduce the effects of
statistical noise is to increase the number of detected X-ray quanta.
Normally, this is achieved by increasing the number of transmitted X-rays
through an increase in X-ray dose.
Electronic noise: Analog signals are received by electric
circuits (analog circuits). This process of receiving analog
signals may be affected by some noise, which
is referred to as electronic noise. The latest CT scanners are well designed
to reduce electronic noise.
Roundoff errors: Analog signals are converted into digital signals
through signal-processing steps and then sent to a digital computer for
CT image reconstruction. Digital circuits handle the discrete signals,
and due to the limited number of bits available to store them,
mathematical computation is not possible without rounding. This
limitation is referred to as roundoff error.
Generally, noise in reconstructed CT images is introduced for two main reasons:
first, a continuously varying error due to electrical noise or roundoff errors, which can be
modeled as simple additive noise; and second, error due to
random variations in the detected X-ray intensity.
To differentiate tissues (soft and hard), CT numbers are defined by using Hounsfield
unit (HU) [60] for CT image reconstruction. Hounsfield unit (HU) scale is displayed in Fig. 3,
where some CT numbers are defined. The CT number for a given tissue is determined by the
X-ray linear attenuation coefficient (LAC). Linearity is the ability of the CT image to
assign the correct Hounsfield unit (HU) to a given tissue. A good linearity is essential for
quantitative analysis of CT images.
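The CT-number definition referred to here is worth writing out; the water attenuation value below is an illustrative ~70 keV figure, not from the paper:

```python
def to_hu(mu, mu_water=0.19):
    """CT number in Hounsfield units from a linear attenuation coefficient
    (cm^-1): HU = 1000 * (mu - mu_water) / mu_water. By definition water
    maps to 0 HU and air (mu ~ 0) to -1000 HU."""
    return 1000.0 * (mu - mu_water) / mu_water
```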
The distribution of noise in a CT image can be
derived by estimating the noise variance through
reconstruction algorithms. The noise can be accurately
characterized using the Poisson distribution, but for multi-
detector CT (MDCT) scanners the noise distribution
is more accurately characterized by the Gaussian
distribution. The literature [51,57,121,117] also confirms
that the noise in CT images is generally
additive white Gaussian noise.
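This Poisson-in-projections, roughly-Gaussian-in-image behavior can be simulated directly; synthetic low-dose corruption of this kind is what robustness studies such as Hooper et al. (below) rely on. The incident photon count i0 is a free knob standing in for tube current:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_low_dose_noise(sinogram, i0=1e4):
    """Simulate photon-starved acquisition on a line-integral sinogram:
    expected transmitted counts follow Beer-Lambert, the measurement is
    Poisson-distributed around them, and the log-transform recovers a
    noisy sinogram. Lower i0 mimics a lower-dose (lower mAs) scan."""
    expected = i0 * np.exp(-np.asarray(sinogram, dtype=float))
    counts = np.maximum(rng.poisson(expected), 1)  # avoid log(0)
    return -np.log(counts / i0)
```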
Noise Review #3: Denoising method comparison
https://doi.org/10.1016/j.bspc.2018.01.010
[25] H. Chen, Y. Zhang, M.K. Kalra, F. Lin, P. Liao, J. Zhou, G. Wang,
Low-Dose CT with a Residual Encoder–Decoder
Convolutional Neural Network (RED-CNN), 2017 arXiv
preprint arXiv:1702.00288. https://arxiv.org/abs/1702.00288 -
Cited by 224
[54] L. Gondara, Medical image denoising using
convolutional denoising autoencoders., in: 2016 IEEE 16th
International Conference on Data Mining Workshops (ICDMW),
IEEE, 2016, pp. 241–246.
https://doi.org/10.1109/ICDMW.2016.0041 - Cited by 76
[67] E. Kang, J. Min, J.C. Ye, A Deep Convolutional Neural
Network Using Directional Wavelets for Low-Dose X-
Ray CT Reconstruction, 2016 arXiv preprint arXiv:1610.09736.
https://www.ncbi.nlm.nih.gov/pubmed/29027238
CT Noise in Practice
Assessing Robustness to Noise: Low-Cost Head CT Triage
Sarah M. Hooper, Jared A. Dunnmon, Matthew P. Lungren, Sanjiv Sam Gambhir,
Christopher Ré, Adam S. Wang, Bhavik N. Patel
Stanford University
17 Mar 2020 https://arxiv.org/abs/2003.07977
In this work we use simulations to study noise
from low-cost scanners, which enables
systematic evaluation over large datasets without
increasing labeling demand. However, studying
variations in acquisition protocol using synthetic
data is relevant when considering model
deployment in any healthcare system.
Different institutions often have differing
acquisition protocols, with noise levels adjusted to
suit the needs of their healthcare practitioners.
However, robustness tests over acquisition
protocol and noise level are rarely
reported. Thus, the line of work presented in this
study is relevant for model testing prior to
deployment within any healthcare system. Finally,
learning directly in sinogram space instead of
reconstructed image space is an interesting future
study that may also be pursued with synthetic
data.
Low-Dose CT
Reduce patient dose, at the cost of some image quality
Poisson noise in CT: low-dose CT (low photon counts)
Island Sign: An Imaging Predictor for Early
Hematoma Expansion and Poor Outcome
in Patients With Intracerebral Hemorrhage
Qi Li, Qing-Jun Liu, Wen-Song Yang, Xing-Chen Wang, Li-Bo Zhao, Xin Xiong, Rui Li, Du Cao,
Dan Zhu, Xiao Wei, and Peng Xie
Stroke. 2017;48:3019–3025 10 Oct 2017
https://doi.org/10.1161/STROKEAHA.117.017985
Poisson noise is due to the statistical error of low photon counts
and results in random, thin, bright and dark streaks that appear
preferentially in the direction of greatest attenuation (Figure 2).
With increased noise, high-contrast objects, such as bone, may
still be visible, but low-contrast soft-tissue boundaries may
be obscured.
Poisson noise can be decreased by increasing the mAs.
Modern scanners can perform tube current modulation, selectively
increasing the dose when acquiring a projection with high attenuation.
They also typically use bowtie filters, which provide a higher dose
towards the center of the field of view compared with the periphery.
There is a tradeoff between noise and resolution, so noise can also be
reduced by increasing the slice thickness, using a softer reconstruction
kernel (soft-tissue kernel instead of bone kernel) or blurring the image.
Noise can also be reduced by moving the arms out of the scanned
volume for an abdominal CT. If the arms cannot be moved out of the
scanned volume, placing them on top of the abdomen should reduce
noise relative to placing them at the sides. Similarly, large breasts
should be constrained in the front of the thorax rather than on both sides
in thoracic and cardiac CT. This is because the noise increases rapidly as
the photon counts approach zero, which means that the maximum
attenuation has a larger effect on the noise than the average attenuation.
Iterative methods require faster computer chips, and have
only recently become available for clinical use. One iterative
method, model-based iterative reconstruction (MBIR;
GE Healthcare, WI, USA) [5,6], received US FDA approval in
September 2011 [101]. MBIR substantially reduces image
noise and improves image quality, thus allowing scans to be
acquired at lower radiation doses (Figure 3) [2]. Furthermore,
owing to the tradeoff between noise and resolution,
these methods will also probably be important for reducing
noise in higher resolution images.
Dose reduction vs Image Quality
Vendor free basics of radiation dose
reduction techniques for CT
Takeshi Kubo (2019) European Journal of Radiology
https://doi.org/10.1016/j.ejrad.2018.11.002
●Automatic exposure control and iterative reconstruction methods
play a significant role in the CT radiation dose reduction.
●The validity of dose reduction can be evaluated with objective and
subjective image quality, and diagnostic accuracy.
●Realizing the reference dose level for common CT imaging
protocols is necessary to avoid overdose in the CT examinations.
●Efforts need to be made to decrease low-yield CT
examinations. Clinical decision support is expected to play a
significant role in leading to more meaningful use of CT
examinations.
Tube current and image quality. CT Images of an anthropomorphic phantom
obtained with (a) 125 mAs and (b) 55 mAs at the level of lung bases. Standard
deviations of Hounsfield unit in the region of interest are 14.5 and 19.3 in the
image (a) and (b), respectively. Streak artifacts originating from the
thoracic vertebra are seen as black linear structures and more readily perceptible in
the image (b). The image acquired with the lower radiation dose (b, 55 mAs) has more
noise and streak artifacts than the one with the higher radiation dose (a, 125 mAs).
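As a quick consistency check, quantum noise scales roughly as 1/sqrt(mAs), so the caption's phantom measurements can be compared against that rule of thumb:

```python
import math

# Quantum noise scales roughly as 1/sqrt(tube current-time product).
# Predict the SD at 55 mAs from the SD measured at 125 mAs.
sd_125 = 14.5                       # HU, measured at 125 mAs
predicted_sd_55 = sd_125 * math.sqrt(125 / 55)
print(round(predicted_sd_55, 1))    # -> 21.9
```

The simple rule predicts ~21.9 HU against the measured 19.3 HU: the same ballpark, with the gap attributable to non-quantum noise sources and reconstruction effects.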
Tube current adjustment by automatic exposure control system.
Modification of X-ray energy
profile (a) X-ray energy profile at
140kVp (solid line) and 80 kVp
(dashed line). (b, c) Modification of
energy profile with an extra X-ray
filter. Energy profile at 100 kVp
without a filter (b) and at 100 kVp with
an additional filter (c). Low-energy X-rays
are mostly removed by the
additional filter.
Low-dose CT of course benefits from better restoration
SUPER Learning: A Supervised-Unsupervised
Framework for Low-Dose CT Image Reconstruction
Zhipeng Li, Siqi Ye, Yong Long, Saiprasad Ravishankar
(Submitted on 26 Oct 2019) https://arxiv.org/abs/1910.12024
Recent years have witnessed growing interest in machine learning-based
models and techniques for low-dose X-ray CT (LDCT) imaging tasks. The
methods can typically be categorized into supervised learning methods and
unsupervised or model-based learning methods. Supervised learning
methods have recently shown success in image restoration tasks. However,
they often rely on large training sets. Model-based learning methods
such as dictionary or transform learning do not require large or paired
training sets and often have good generalization properties, since they learn
general properties of CT image sets.
Recent works have shown the promising reconstruction performance of
methods such as PWLS-ULTRA that rely on clustering the underlying
(reconstructed) image patches into a learned union of transforms. In this paper,
we propose a new Supervised-UnsuPERvised (SUPER)
reconstruction framework for LDCT image reconstruction that combines the
benefits of supervised learning methods and (unsupervised) transform learning-
based methods such as PWLS-ULTRA that involve highly image-adaptive
clustering. The SUPER model consists of several layers, each of which
includes a deep network learned in a supervised manner and an
unsupervised iterative method that involves image-adaptive
components. The SUPER reconstruction algorithms are learned in a greedy
manner from training data. The proposed SUPER learning methods
dramatically outperform both the constituent supervised learning-based
networks and iterative algorithms for LDCT, and use much fewer iterations in the
iterative reconstruction modules.
Dual-energy/detector CT “sort of CT HDR” #1
Dual energy computed tomography for the head
Norihito Naruto, Toshihide Itoh, Kyo Noguchi
Japanese Journal of Radiology February 2018, Volume 36, Issue 2, pp 69–80
https://doi.org/10.1007/s11604-017-0701-4 - Cited by 2
Dual energy CT (DECT) is a promising technology that provides better
diagnostic accuracy in several brain diseases. DECT can generate various types
of CT images from a single acquisition data set at high kV and low kV based on
material decomposition algorithms. The two-material decomposition
algorithm can separate bone/calcification from iodine accurately. The three-
material decomposition algorithm can generate a virtual non-contrast image,
which helps to identify conditions such as brain hemorrhage. A virtual
monochromatic image has the potential to eliminate metal artifacts by
reducing beam-hardening effects.
DECT also enables exploration of advanced imaging to make diagnosis easier.
One such novel application of DECT is the X-Map, which helps to
visualize ischemic stroke in the brain without using iodine contrast medium.
The X-Map uses a modified 3MD algorithm. A
motivation of this application is to visualize an
ischemic change of the brain parenchyma by
detecting an increase in water content in
a voxel. To identify a small change in water
content, the 3MD algorithm had a lipid-specific
slope of 2.0 applied in order to suppress the
small difference between gray matter and white
matter, which is mainly the difference in the lipid
content in gray and white matter. As shown in
the diagram, the nominal values of gray matter
and white matter are 33 HU at Sn150 kV and 42
HU at 80 kV, and 29 HU at Sn150 kV and 34 HU
at 80 kV, respectively. The lipid-specific slope
between the nominal point of gray matter and
white matter is 2.0 using the third generation
DSCT (SOMATOM Force; Siemens
Healthcare, Forchheim, Germany)
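The nominal HU pairs quoted above reproduce the lipid-specific slope of 2.0 directly; the arithmetic can be checked in two lines:

```python
# Nominal attenuation values quoted above, as (HU at Sn150 kV, HU at 80 kV):
gray_matter = (33, 42)
white_matter = (29, 34)

# Slope in the dual-energy scatter plane: delta(80 kV) / delta(Sn150 kV)
slope = (gray_matter[1] - white_matter[1]) / (gray_matter[0] - white_matter[0])
print(slope)  # -> 2.0, the lipid-specific slope used by the X-Map algorithm
```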
A patient with acute ischemic stroke 3 h after onset. A simulated standard CT image (a) obtained 3 h
after the ischemic stroke onset shows no definite early ischemic change, although the left
frontoparietal operculum may show questionable hypo-density. The X-Map (b) clearly shows the
ischemic lesion in the left middle cerebral artery territory. The diffusion-weighted image (c) also shows a
definite acute ischemic lesion in the left MCA territory
The two-material decomposition (2MD) is the
algorithm that generates several dual energy (DE)
images. The 2MD algorithm (a) can distinguish
one material from other materials such as bone and
iodine using a separation line. This algorithm has
been used for the DE direct bone removal
application. The three-material decomposition
(3MD) algorithm (b) can extract the iodine
component from contrast-enhanced tissues. All
voxels are projected along the iodine-specific slope
to the line connecting fat and soft-tissue. This
algorithm has been used for DE brain hemorrhage
application
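The idea behind two-material decomposition can be sketched as a 2x2 linear solve: a voxel's HU at two tube voltages is modeled as a mix of two basis materials. The basis attenuation values below are illustrative placeholders, not vendor calibration data.

```python
import numpy as np

# Two-material decomposition sketch. Rows = tube voltages, columns =
# basis materials; entries are illustrative per-unit-fraction HU values.
basis = np.array([[1800.0, 2600.0],   # 80 kV:  [calcium-like, iodine-like]
                  [ 900.0, 1000.0]])  # 140 kV: [calcium-like, iodine-like]

measured = np.array([440.0, 190.0])   # voxel HU at 80 and 140 kV

# Solve measured = basis @ fractions for the material fractions.
fractions = np.linalg.solve(basis, measured)
print(np.round(fractions, 3))  # -> [0.1 0.1] for this toy voxel
```

The separation works because the two materials' attenuation changes with energy at different rates; when a fourth material is present (as discussed later for calcium), the 2x2 system no longer has enough information.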
Dual-energy/detector CT “sort of CT HDR” #2
Technical limitations of dual-energy CT in
neuroradiology: 30-month institutional
experience and review of literature
Julien Dinkel, Omid Khalilzadeh, Catherine M Phan, Ajit H Goenka, Albert J Yoo, Joshua A Hirsch, Rajiv
Gupta | Journal of NeuroInterventional Surgery 2015;7:596-602.
http://dx.doi.org/10.1136/neurintsurg-2014-011241
Although dual-energy CT (DECT) appears to be a promising option,
its limitations in neuroimaging have not been systematically studied.
Knowledge of these limitations is essential before DECT can be considered as a
standard modality in neuroradiology. In this study, a retrospective analysis was
performed to analyze failure modes and limitations of DECT in neuroradiology. To
further illustrate potential limitations of DECT, clinical analysis was supplemented
with an in vitro dilution experiment using cylinders containing predetermined
concentrations of heparinized swine blood, normal saline, and iodine.
There is a chronic infarct in the right middle cerebral artery territory with diffuse mineralization in this
region (circled). A single-energy image (A) and virtual non-contrast image (B) show hyperdensity
(mean of 58 HU) surrounding infarction of the right basal ganglia and adjacent internal capsule. There is
trace corresponding hyperdensity on the iodine overlay image (C). This finding, by itself, may represent
mineralization or a combination of iodine and hemorrhage. Hard-plaque removal software
(D) cannot identify this region of faint, diffuse mineralization.
Single-energy image (A) with beam-hardening artifacts from clips on
a right middle cerebral artery aneurysm. An iodine overlay image (C)
is particularly impaired by the metallic artifact. The virtual non-
contrast image (B) is less affected by the metallic artifact.
A proposed algorithm for
assessing
intraparenchymal
calcification using dual-
energy CT processing.
The original 80 and 140 kV
images are decomposed
into two alternate base-
pairs: brain parenchyma
and calcium.
A hyperdensity
disappearing on the brain
overlay can be regarded
as a calcification. ICH,
intracranial hemorrhage.
Two types of hyperattenuation seen on a mixed image (A, D) obtained by
dual-energy CT in a patient who underwent recanalization therapy. Contrast
staining (oval) in the right basal ganglia is also depicted in the iodine overlay
image (C) but not in the virtual non-contrast (VNC) image (B). A faint focal
mineralization is seen on the left lentiform nucleus (arrow). The iodine-specific
material decomposition algorithm cannot identify this fourth material which is
seen on both VNC (B) and iodine overlay image (C). After postprocessing using
the brain mineralization application, this hyperdensity disappears on the brain
overlay (E), confirming a calcification. Note that both iodine content and
calcifications are seen on the ‘calcium overlay’ (F).
Dual-energy/detector CT “sort of CT HDR” #3
Characteristic images of the CT brain protocol from the single-layer detector CT (SLCT; Brilliance iCT, Philips Healthcare)
and dual-layer detector CT (DLCT; IQon spectral CT, Philips Healthcare). The contrast between the grey and white
matter is clear in both images. In the SLCT image, a drain is visible. The window level and width for both images is 40/80.
Van Ommen et al. (January 2019)
Dose of CT protocols acquired in clinical routine using a dual-layer detector CT scanner: A preliminary report
http://doi.org/10.1016/j.ejrad.2019.01.011
Veronica Fransson’s Master’s thesis (2019):
Iodine Quantification Using Dual Energy
Computed Tomography and Applications in
Brain Imaging
http://lup.lub.lu.se/luur/download?func=downloadFile&recordOId=8995820&fileOId=8995821
A Review of the Applications of Dual-Energy CT in Acute Neuroimaging
https://doi.org/10.1177%2F0846537120904347
Dual-energy CT is a powerful tool for supplementing standard CT in acute
neuroimaging. Many clinical applications have been demonstrated to
represent added value, notably for improved diagnoses and diagnostic
confidence in head and spinal trauma, cerebral ischemia and hemorrhage, and
angiography. Emerging iodine quantification methods have potential to guide
medical, surgical, and interventional therapy and prognostication in stroke,
aneurysmal hemorrhage, and traumatic contusions. As the technology of DECT
continues to evolve, these tools promise maturation and expansion of their role in
emergent neurological presentations.
In three-material decomposition, if a fourth (or more) material, such as calcium, is
present at a certain concentration in a voxel, DECT cannot separate the constituent
materials and will misclassify them, which may present challenges in separating
calcification from enhancement or hemorrhage. Iodine concentrations that are
too low may be unquantifiable or undetectable, and concentrations that are too high
may prevent complete iodine subtraction. The limitation of a relatively narrow field of
view (25-36.5 cm, depending on scanner generation) is of lesser importance in
neuroradiology, as the brain and spine, when centered in the field of view, should be
adequately covered.
Using Dual-Energy CT to Identify Small Foci of
Hemorrhage in the Emergency Setting
https://doi.org/10.1148/radiol.2019192258
Dual-Energy CT should better distinguish calcium from hematoma
Dual-Energy Head CT Enables
Accurate Distinction of
Intraparenchymal Hemorrhage from
Calcification in Emergency
Department Patients Ranliang Hu, Laleh
Daftari Besheli, Joseph Young, Markus Wu,
Stuart Pomerantz, Michael H. Lev, Rajiv Gupta
https://doi.org/10.1148/radiol.2015150877
To evaluate the ability of dual-energy (DE) computed
tomography (CT) to differentiate calcification from
acute hemorrhage in the emergency department
setting.
In this institutional review board-approved study, all unenhanced
DE head CT examinations that were performed in the emergency
department in November and December 2014 were
retrospectively reviewed. Simulated 120-kVp single-energy
CT images were derived from the DE CT acquisition via
postprocessing. Patients with at least one focus of
intraparenchymal hyperattenuation on single-energy CT images
were included, and DE material decomposition postprocessing
was performed. Each focal hyperattenuation was analyzed on the
basis of the virtual noncalcium and calcium overlay
images and classified as calcification or hemorrhage.
Sensitivity, specificity, and accuracy were calculated for single-
energy and DE CT by using a common reference standard
established by relevant prior and follow-up imaging and clinical
information.
DE CT by using material decomposition enables
accurate differentiation between calcification and
hemorrhage in patients presenting for emergency
head imaging and can be especially useful in
problem-solving complex cases that are difficult
to determine based on conventional CT
appearance alone.
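The study's headline metrics are straightforward to compute from a confusion table; the counts below are made up for illustration, not the paper's data:

```python
# Sensitivity, specificity, and accuracy from a confusion table, as used
# to compare single-energy vs dual-energy readings against the reference
# standard. Counts are illustrative only.
tp, fn = 28, 2   # hemorrhage correctly called / missed
tn, fp = 60, 5   # calcification correctly called / over-called

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
accuracy = (tp + tn) / (tp + fn + tn + fp)
print(round(sensitivity, 3), round(specificity, 3), round(accuracy, 3))
# -> 0.933 0.923 0.926
```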
Multi-energy CT
Uniqueness criteria in multi-energy CT
Guillaume Bal, Fatma Terzioglu (Submitted on 6 Jan 2020)
https://arxiv.org/abs/2001.06095
Multi-Energy Computed Tomography (ME-CT) is a
medical imaging modality aiming to reconstruct the spatial
density of materials from the attenuation properties of probing
x-rays. For each line in two- or three-dimensional space, ME-
CT measurements may be written as a nonlinear mapping
from the integrals of the unknown densities of a finite number
of materials along said line to an equal or larger number of
energy-weighted integrals corresponding to different x-ray
source energy spectra.
ME-CT reconstructions may thus be decomposed as a two-
step process:
1)Reconstruct line integrals of the material densities from the
available energy measurements; and
2)Reconstruct densities from their line integrals.
Step (2) is the standard linear x-ray CT problem, whose
invertibility is well known, so this paper focuses on step (1).
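Step (1) can be sketched for a single ray: the energy-weighted measurements are a nonlinear function of the material line integrals, invertible here by Newton's method. The spectra and attenuation values below are illustrative, not measured data.

```python
import numpy as np

# Recover the two material line integrals a = (a_1, a_2) of one ray from
# two energy-weighted log measurements (step (1) of ME-CT), via Newton.
E_mu = np.array([[0.22, 0.60],   # mu(E) for [material 1, material 2] at 50 keV
                 [0.18, 0.40],   # at 80 keV
                 [0.16, 0.30]])  # at 120 keV
S = np.array([[0.6, 0.3, 0.1],   # normalized source spectrum, low-kV scan
              [0.1, 0.3, 0.6]])  # high-kV scan

def forward(a):
    """Nonlinear energy-weighted log measurements for line integrals a."""
    trans = np.exp(-E_mu @ a)            # transmission at each energy
    return -np.log(S @ trans)

a_true = np.array([2.0, 0.5])
m_meas = forward(a_true)

a = np.zeros(2)                          # Newton iteration from zero
for _ in range(20):
    trans = np.exp(-E_mu @ a)
    w = S * trans / (S @ trans)[:, None] # spectral weights per measurement
    J = w @ E_mu                         # Jacobian d m_j / d a_i
    a -= np.linalg.solve(J, forward(a) - m_meas)

print(np.round(a, 6))  # recovers a_true = [2.0, 0.5]
```

With the line integrals recovered per ray, step (2) reduces to ordinary linear CT reconstruction per material.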
Low-dose Multi-energy CT
Joint Reconstruction in Low Dose Multi-Energy CT
Jussi Toivanen, Alexander Meaney, Samuli Siltanen, Ville Kolehmainen
(Submitted on 11 Apr 2019 (v1), last revised 13 Feb 2020 (this version, v3))
https://arxiv.org/abs/1904.05671
Multi-energy CT takes advantage of the non-linearly varying attenuation
properties of elemental media with respect to energy, enabling more precise
material identification than single-energy CT. The increased precision comes with
the cost of a higher radiation dose. A straightforward way to lower the dose is to
reduce the number of projections per energy, but this makes tomographic
reconstruction more ill-posed.
In this paper, we propose how this problem can be overcome with a combination of
a regularization method that promotes structural similarity between images at
different energies and a suitably selected low-dose data acquisition protocol using
non-overlapping projections. The performance of various joint regularization
models is assessed with both simulated and experimental data, using the novel low-
dose data acquisition protocol. Three of the models are well-established, namely the
joint total variation, the linear parallel level sets and the spectral smoothness
promoting regularization models.
Furthermore, one new joint regularization model is introduced for multi-
energy CT: a regularization based on the structure function from the
structural similarity index. The findings show that joint regularization
outperforms individual channel-by-channel reconstruction. Furthermore, the
proposed combination of joint reconstruction and non-overlapping projection
geometry enables significant reduction of radiation dose.
G Poludniowski, G Landry, F DeBlois, P M Evans, and F Verhaegen.
SpekCalc: a program to calculate photon spectra from tungsten anode
x-ray tubes. Physics in Medicine and Biology, 54:N433—-N438, 2009.
https://doi.org/10.1088/0031-9155/54/19/N01
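Among the joint regularizers named above, joint total variation is the simplest to sketch: gradient magnitudes are pooled across energy channels under one square root, so edges that coincide across channels are penalized less than misaligned ones. A minimal numpy sketch:

```python
import numpy as np

def joint_tv(x):
    """Joint total variation of a multi-energy image stack x of shape (C, H, W).

    Finite-difference gradients are pooled across energy channels under
    a single square root, rewarding structurally similar channels.
    """
    dx = np.diff(x, axis=2)[:, :-1, :]   # horizontal differences (C, H-1, W-1)
    dy = np.diff(x, axis=1)[:, :, :-1]   # vertical differences   (C, H-1, W-1)
    return np.sum(np.sqrt((dx ** 2 + dy ** 2).sum(axis=0)))

# Aligned edges across channels cost less than misaligned ones:
a = np.zeros((2, 8, 8)); a[:, :, 4:] = 1.0            # same edge, both channels
b = np.zeros((2, 8, 8)); b[0, :, 4:] = 1.0; b[1, 4:, :] = 1.0
print(joint_tv(a), joint_tv(b))  # joint TV favors the aligned stack
```

Channel-by-channel TV would score both stacks identically; the cross-channel coupling is exactly what encourages structural similarity between energies.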
3D Few-view CT Reconstruction
Deep Encoder-decoder Adversarial
Reconstruction (DEAR) Network for 3D CT from
Few-view Data
Huidong Xie, Hongming Shan, Ge Wang (Submitted on 13 Nov 2019)
https://arxiv.org/abs/1911.05880
In this paper, we propose a deep encoder-decoder adversarial reconstruction (DEAR) network
for 3D CT image reconstruction from few-view data. Since the artifacts caused by few-view
reconstruction appear in 3D instead of 2D geometry, a 3D deep network has a great potential for
improving the image quality in a data-driven fashion. More specifically, our proposed DEAR-3D
network aims at reconstructing 3D volume directly from clinical 3D spiral cone-beam image
data. DEAR-3D utilizes 3D convolutional layers to extract 3D information from multiple adjacent
slices in a generative adversarial network (GAN) framework. Different from reconstructing 2D
images from 3D input data, DEAR-3D directly reconstructs a 3D volume, with faithful texture and
image details; DEAR is validated on a publicly available abdominal CT dataset prepared and
authorized by Mayo Clinic. Compared with other 2D deep-learning methods, the proposed
DEAR-3D network can utilize 3D information to produce promising reconstruction results
Few-view CT may be implemented as a mechanically stationary scanner in the future
[Cramer et al. 2018] for health-care and other utilities. Current commercial CT scanners use one or
two x-ray sources mounted on a rotating gantry, and take hundreds of projections around a patient.
The rotating mechanism is not only massive but also power-consuming. Hence, current commercial
CT scanners are inaccessible outside hospitals and imaging centers, due to their size,
weight, and cost. Designing a stationary gantry with multiple miniature x-ray sources is an
interesting approach to resolve this issue [Cramer et al. 2018].
CT
Registration
Traditional
Background
Multimodal Spatial Normalization Example #1
Image processing steps for three methods of spatial normalization and measuring regional SUV. (a) Skull-stripping of original CT image, (b) spatial normalization of skull-stripped CT to skull-stripped CT
template, (c) applying transformation parameter normalizing CT image for spatial normalization of PET image, (d) skull-stripping of original MR image, (e) spatial normalization of skull-stripped MR image to skull-stripped
MR template, (f) coregistration of PET image to MR image, (g) applying transformation parameter normalizing MR image for spatial normalization of PET image, (h) spatial normalization of PET image with MNI PET
template, (i) measuring regional SUV with modified AAL VOI template, (j) acquisition of FSVOI with FreeSurfer, and (k) measuring regional SUV by using FSVOI overlaid on PET image coregistered to MR. AAL = automated
anatomical labeling, FSVOI = FreeSurfer-generated volume of interest, MNI = Montreal Neurological Institute, PET = positron emission tomography, SUV = standardized uptake value, VOI = volume of interest
A Computed
Tomography-Based
Spatial Normalization for
the Analysis of [18F]
Fluorodeoxyglucose
Positron Emission
Tomography of the Brain
Korean J Radiol. 2014 Nov-
Dec;15(6):862-870.
https://doi.org/10.3348/kjr.20
14.15.6.862
Multimodal Spatial Normalization Example #2
Spatial normalization of CT images to MNI space; ICBM 152 nonlinear 2009
template (BIC, The McConnell Brain Imaging Centre).
Images: http://fbcover.us/mni-template/
MRI Spatial Normalization Example
Spatial registration for functional near-infrared spectroscopy: From
channel position on the scalp to cortical location in individual and
group analyses (NeuroImage 2013)
https://doi.org/10.1016/j.neuroimage.2013.07.025
Probabilistic registration of single-subject data without MRI. (A) Positions for channels and reference
points in real-world (RW) space are measured using a 3D-digitizer. The minimum number of reference
points is four, as in this case, where Nz (nasion), Cz, and left and right preauricular points (AL and AR) are
used. Alternatively, whole or selected 10/20 positions may be used. (B) The reference points in RW are
affine-transformed to the corresponding reference points in each entry in reference to the MRI database in
MNI space. (C) Channels of the scalp are projected onto the cortical surface of the reference brains. (D)
The cortically projected channel positions are integrated to yield the most likely coordinates (average:
centers of spheres) and variability (composite standard deviation: radii of spheres) in MNI space.
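The affine step described in (B) amounts to fitting a linear map plus translation from the measured reference points to their MNI counterparts, which is a least-squares problem once coordinates are homogenized. The landmark coordinates below are made up for illustration.

```python
import numpy as np

# Least-squares affine fit from paired landmarks (e.g. Nz, Cz, AL, AR
# measured in real-world space vs. the same landmarks in MNI space).
src = np.array([[ 0.0, 9.5,  0.0],   # nasion
                [ 0.0, 0.0, 11.0],   # Cz
                [-7.5, 0.0,  0.0],   # left preauricular
                [ 7.5, 0.0,  0.0]])  # right preauricular
A_true = np.diag([1.1, 1.0, 0.95])   # illustrative scaling
t_true = np.array([2.0, -1.0, 0.5])  # illustrative translation
dst = src @ A_true.T + t_true        # corresponding points in target space

# Solve dst = [src, 1] @ M for the 4x3 affine parameter matrix M.
src_h = np.hstack([src, np.ones((4, 1))])
M, *_ = np.linalg.lstsq(src_h, dst, rcond=None)

print(np.round(src_h @ M - dst, 6))  # residuals ~ 0 with exact pairs
```

With only the minimum four non-coplanar points the fit is exact; with more points (e.g. whole 10/20 positions) the same call returns the least-squares affine.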
ANT package and SyN as the “SOTA”
Evaluation of 14 nonlinear deformation
algorithms applied to human brain MRI
registration
Arno Klein et al. (2009)
NeuroImage Volume 46, Issue 3, 1 July 2009, Pages 786-802
https://doi.org/10.1016/j.neuroimage.2008.12.037 - Cited by 1776
More than 45,000 registrations between 80 manually labeled brains
were performed by algorithms including: AIR, ANIMAL, ART,
Diffeomorphic Demons, FNIRT, IRTK, JRD-fluid, ROMEO, SICLE, SyN,
and four different SPM5 algorithms (“SPM2-type” and regular
Normalization, Unified Segmentation, and the DARTEL Toolbox). All of
these registrations were preceded by linear registration between the
same image pairs using FLIRT.
One of the most significant findings of this study is that the relative
performances of the registration methods under comparison appear to
be little affected by the choice of subject population, labeling
protocol, and type of overlap measure. This is important because
it suggests that the findings are generalizable to new subject
populations that are labeled or evaluated using different labeling
protocols. Furthermore, we ranked the 14 methods according to
three completely independent analyses (permutation tests, one-way
ANOVA tests, and indifference-zone ranking) and derived three almost
identical top rankings of the methods. ART, SyN, IRTK, and
SPM's DARTEL Toolbox gave the best results according to
overlap and distance measures, with ART and SyN delivering the
most consistently high accuracy across subjects and label
sets. Updates will be published on the website
http://www.mindboggle.info/papers/
Blaiotta et al. (2018): “Advanced normalisation Tools (ANTs) package, through
the web site http://stnava.github.io/ANTs/. Indeed, the symmetric diffeomorphic
registration framework implemented in ANTs has established itself as the state-of-
the-art of medical image nonlinear spatial normalisation (Klein et al., 2009).”
Image Registration
Diffeomorphisms: SyN; Independent Evaluation: Klein, Murphy; Template
Construction (2004)(2010); Similarity Metrics; Multivariate registration;
Multiple modality analysis and statistical bias
How about missing data?
Diffeomorphic registration with intensity
transformation and missing data:
Application to 3D digital pathology of
Alzheimer’s disease
Daniel Tward • Timothy Brown • Yusuke Kageyama • Jaymin Patel
• Zhipeng Hou • Susumu Mori • Marilyn Albert • Juan Troncoso •
Michael Miller bioRxiv preprint first posted online Dec. 11, 2018; doi:
http://dx.doi.org/10.1101/494005
This paper examines the problem of diffeomorphic image mapping in
the presence of differing image intensity profiles and missing data.
Our motivation comes from the problem of aligning 3D brain MRI with 100
micron isotropic resolution, to histology sections with 1 micron in plane
resolution. Multiple stains, as well as damaged, folded, or missing tissue are
common in this situation. We overcome these challenges by introducing
two new concepts. Cross modality image matching is achieved by
jointly estimating polynomial transformations of the atlas intensity, together
with pose and deformation parameters. Missing data is accommodated
via a multiple atlas selection procedure where several atlases may be of
homogeneous intensity and correspond to “background” or “artifact”.
The two concepts are combined within an Expectation
Maximization algorithm, where atlas selection posteriors and
deformation parameters are updated iteratively, and polynomial
coefficients are computed in closed form. We show results for 3D
reconstruction of digital pathology and MRI in standard atlas coordinates. In
conjunction with convolutional neural networks, we quantify the 3D density
distribution of tauopathy throughout the medial temporal lobe of an
Alzheimer’s disease postmortem specimen.
Diffusion Tensor Imaging registration pipeline example
Improving spatial normalization of brain
diffusion MRI to measure longitudinal
changes of tissue microstructure in human
cortex and white matter
Florencia Jacobacci, Jorge Jovicich, Gonzalo Lerner, Edson Amaro Jr, Jorge
Armony, Julien Doyon, Valeria Della-Maggiore
Universidad de Buenos Aires
https://doi.org/10.1101/590521 (March 28, 2019)
https://github.com/florjaco/DWIReproducibleNormalization
Scalar diffusion tensor imaging (DTI) measures, such as
fractional anisotropy (FA) and mean diffusivity (MD),
are increasingly being used to evaluate longitudinal changes
in brain tissue microstructure. In this study, we aimed at
optimizing the normalization approach of longitudinal
DTI data in humans to improve registration in gray
matter and reduce artifacts associated with
multisession registrations. For this purpose, we examined the
impact of different normalization features on the across-
session test-retest reproducibility error of FA and MD maps
from multiple scanning sessions.
We found that a normalization approach using ANTs as the
registration algorithm, MNI152 T1 template as the target
image, FA as the moving image, and an intermediate FA
template yielded the highest test-retest reproducibility in
registering longitudinal DTI maps for both gray matter and
white matter. Our optimized normalization pipeline opens a
window to quantify longitudinal changes in
microstructure at the cortical level.
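The scalar maps being normalized here are simple functions of the diffusion tensor's eigenvalues; the standard FA and MD formulas can be sketched directly (the eigenvalues below are illustrative):

```python
import numpy as np

def fa_md(evals):
    """Fractional anisotropy and mean diffusivity from the three
    eigenvalues of a diffusion tensor."""
    l = np.asarray(evals, dtype=float)
    md = l.mean()
    fa = np.sqrt(1.5 * np.sum((l - md) ** 2) / np.sum(l ** 2))
    return fa, md

# Illustrative eigenvalues (units of 1e-3 mm^2/s):
print(fa_md([1.7, 0.3, 0.3]))   # elongated tensor: high FA (white-matter-like)
print(fa_md([0.8, 0.8, 0.8]))   # isotropic tensor: FA ~ 0
```

Because FA is low in gray matter, registering FA maps there is hard, which motivates the intermediate-template pipeline described above.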
CT
Image Quality
Regulatory and
technical definitions
Technical Image Quality validated by radiologists
Validation of algorithmic CT image
quality metrics with preferences of
radiologists
Yuan Cheng, Ehsan Abadi, Taylor Brunton Smith, Francesco Ria,
Mathias Meyer, Daniele Marin, Ehsan Samei
https://doi.org/10.1002/mp.13795 (29 August 2019)
Automated assessment of perceptual image quality on
clinical Computed Tomography (CT) data by computer algorithms
has the potential to greatly facilitate data-driven monitoring and
optimization of CT image acquisition protocols. The application of
these techniques in clinical operation requires the knowledge of
how the output of the computer algorithms corresponds
to clinical expectations. This study addressed the need to
validate algorithmic image quality measurements on clinical CT
images with preferences of radiologists and determine the
clinically acceptable range of algorithmic measurements for
abdominal CT examinations.
Algorithmic measurements of image quality metrics
(organ HU, noise magnitude, and clarity) were performed on a
clinical CT image dataset with supplemental measures of noise
power spectrum from phantom images using techniques
developed previously. The algorithmic measurements were
compared to clinical expectations of image quality in an observer
study with seven radiologists.
The observer study results indicated that these algorithms can
robustly assess the perceptual quality of clinical CT
images in an automated fashion. Clinically acceptable ranges
of algorithmic measurements were determined. The
correspondence of these image quality assessment algorithms to
clinical expectations paves the way toward establishing diagnostic
reference levels in terms of clinically acceptable perceptual image
quality and data-driven optimization of CT image
acquisition protocols.
Image Quality (and resolution) task-specific
sometimes blurry+pixelated volumes can get you somewhere?
The Effect of Image Resolution on
Deep Learning in Radiography
Carl F. Sabottke, Bradley M. Spieler. Radiology: Artificial
Intelligence (Jan 22, 2020)
https://doi.org/10.1148/ryai.2019190015
Tracking convolutional neural network performance as a
function of image resolution allows insight into how the
relative subtlety of different radiology findings can affect the
success of deep learning in diagnostic radiology
applications.
Maximum AUCs were achieved at image resolutions
between 256 × 256 and 448 × 448 pixels for binary
decision networks targeting emphysema, cardiomegaly,
hernias, edema, effusions, atelectasis, masses, and nodules.
When comparing performance between networks that
utilize lower resolution (64 × 64 pixels) versus higher
(320 × 320 pixels) resolution inputs, emphysema,
cardiomegaly, hernia, and pulmonary nodule detection had
the highest fractional improvements in AUC at higher image
resolutions.
Increasing image resolution for CNN training often has a
trade-off with the maximum possible batch size, yet
optimal selection of image resolution has the potential for
further increasing neural network performance for various
radiology-based machine learning tasks. Furthermore,
identifying diagnosis-specific tasks that require
relatively higher image resolution can potentially
provide insight into the relative difficulty of identifying
different radiology findings.
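The resolution-vs-batch-size trade-off mentioned above is essentially a fixed memory budget divided by a quadratically growing input size. A back-of-envelope sketch (the pixel budget is made up; real usage depends on architecture and framework):

```python
# Activation memory grows roughly quadratically with input side length,
# so at fixed GPU memory the usable batch size shrinks accordingly.
def max_batch(side, budget_px=64 * 64 * 512):
    """Largest batch whose input pixels fit a fixed pixel budget."""
    return budget_px // (side * side)

for side in (64, 128, 256, 320, 448):
    print(side, max_batch(side))
```

Going from 64 x 64 to 448 x 448 inputs cuts the feasible batch by roughly 50x under this budget, which is why resolution must be chosen per task rather than maximized blindly.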
Regulatory Image Quality
Achieving CT Regulatory Compliance: A
Comprehensive and Continuous Quality
Improvement Approach
Matthew E. Zygmont, Rebecca Neill, Shalmali Dharmadhikari, Phuong-Anh T.
Duong Current Problems in Diagnostic Radiology
Available online 12 February 2020
https://doi.org/10.1067/j.cpradiol.2020.01.013
Computed tomography (CT) represents one of the largest sources of
radiation exposure to the public in the United States. Regulatory
requirements now mandate dose tracking for all exams and
investigation of dose events that exceed set dose thresholds.
Radiology practices are tasked with ensuring quality control and
optimizing patient CT exam doses while maintaining diagnostic
efficacy. Meeting regulatory requirements necessitates the
development of an effective quality program in CT.
This review provides a template for accreditation compliant
quality control and CT dose optimization. The following paper
summarizes a large health system approach for establishing a quality
program in CT and discusses successes, challenges, and future
needs.
Protocol management was one of the most time intensive
components of our CT quality program. Central protocol
management with cross platform compatibility would allow for
efficient standardization and would have great impact especially in
large organizations. Modular protocol design from
manufacturers is another missing piece in the optimization
process. Having recursive protocol modules would greatly alleviate
the burden of making parameter changes to core imaging units. For
example, our routine head protocol is a standalone exam, but also
exists in combination protocols for CT angiography of the head and
neck, perfusion imaging, and trauma exams.
CT
Registration
Deep Learning
https://arxiv.org/pdf/1903.03545.pdf
https://paperswithcode.com/task/diffeomorphic-medical-image-registration
Conditional variational autoencoder for diffeomorphic registration #1
Learning a Probabilistic Model for Diffeomorphic
Registration Julian Krebs ; Hervé Delingette; Boris Mailhé; Nicholas Ayache; Tommaso
Mansi
Université Côte d’Azur, Inria / Siemens Healthineers, Digital Services, Digital Technology and Innovation, Princeton, NJ, USA
IEEE Transactions on Medical Imaging ( Volume: 38 , Issue: 9 , Sept. 2019 ) https://doi.org/10.1109/TMI.2019.2897112
Medical image registration is one of the key processing steps for biomedical image analysis such as cancer
diagnosis. Recently, deep learning-based supervised and unsupervised image registration methods have been
extensively studied due to their excellent performance and ultra-fast computation time
compared to classical approaches.
In this paper, we present a novel unsupervised medical image registration method that trains deep neural
network for deformable registration of 3D volumes using a cycle-consistency.
To guarantee the topology preservation between the deformed and fixed images, we here adopt the cycle
consistency constraint between the original moving image and its re-deformed image. That is, the deformed
volumes are given as the inputs to the networks again by switching their order to impose the cycle consistency.
This constraint ensures that the shape of deformed images successively returns to the original shape.
Thanks to the cycle consistency, the proposed deep neural networks can take diverse pairs of image data
with severe deformation for accurate registration. Experimental results using multiphase liver CT
images demonstrate that our method provides very precise 3D image registration within a few seconds,
resulting in more accurate cancer size estimation.
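The cycle-consistency constraint described above can be sketched outside any particular framework: warp the moving image forward, warp the result back, and penalize the difference from the original. A minimal 2D NumPy/SciPy illustration (my own simplification, not the paper's 3D network implementation):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def warp(image, flow):
    """Warp a 2D image by a dense displacement field flow of shape (2, H, W)."""
    h, w = image.shape
    grid = np.mgrid[0:h, 0:w].astype(float)
    # sample the image at x + u(x), with bilinear interpolation
    return map_coordinates(image, grid + flow, order=1, mode='nearest')

def cycle_consistency_loss(moving, flow_fwd, flow_bwd):
    """Deform moving forward, deform the result back, compare to the original."""
    restored = warp(warp(moving, flow_fwd), flow_bwd)
    return float(np.mean((restored - moving) ** 2))

# sanity check: identity deformations incur zero cycle loss
img = np.random.default_rng(0).random((32, 32))
zero = np.zeros((2, 32, 32))
loss = cycle_consistency_loss(img, zero, zero)
```

In training, `flow_fwd` and `flow_bwd` would come from the registration network evaluated on the image pair in both orders, and this term would be added to the image-similarity loss.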
The number of trainable parameters in the network was ~420k. The framework has been implemented in Tensorflow using Keras. Training took ~24 hours and testing a single registration case took 0.32 s on an NVIDIA GTX TITAN X GPU.
Conditional variational autoencoder for diffeomorphic registration #2
Learning a Probabilistic Model for Diffeomorphic
Registration Julian Krebs ; Hervé Delingette; Boris Mailhé; Nicholas Ayache; Tommaso
Mansi
Université Côte d’Azur, Inria / Siemens Healthineers, Digital Services, Digital Technology and Innovation, Princeton, NJ, USA
IEEE Transactions on Medical Imaging ( Volume: 38 , Issue: 9 , Sept. 2019 ) https://doi.org/10.1109/TMI.2019.2897112 -
Cited by 13
26. J. Fan, X. Cao, P.-T. Yap, D. Shen, Birnet: Brain image registration using dual-supervised fully
convolutional networks, 2018. https://arxiv.org/abs/1802.04692.
27. A. V. Dalca, G. Balakrishnan, J. Guttag, M. R. Sabuncu, "Unsupervised learning for fast probabilistic
diffeomorphic registration", Proc. Int. Conf. Med. Image Comput. Comput.-Assist. Intervent., pp. 729-738,
2018. https://arxiv.org/abs/1805.04605 - See next slide →
29. Y. Hu et al., "Weakly-supervised convolutional neural networks for multimodal image registration",
Med. Image Anal., vol. 49, pp. 1-13, Oct. 2018. https://arxiv.org/abs/1807.03361
Unsupervised Probabilistic+diffeomorphic tweak of VoxelMorph
Unsupervised Learning of Probabilistic
Diffeomorphic Registration for Images and
Surfaces Adrian V. Dalca, Guha Balakrishnan, John Guttag,
Mert R. Sabuncu (Submitted on 8 Mar 2019 (v1), last revised 23 Jul 2019
(this version, v2)) https://arxiv.org/abs/1903.03545
https://github.com/voxelmorph/voxelmorph
Paperswithcode Diffeomorphic Medical Image Registration
Classical deformable registration techniques achieve impressive results
and offer a rigorous theoretical treatment, but are computationally
intensive since they solve an optimization problem for each image pair.
Recently, learning-based methods have facilitated fast registration by
learning spatial deformation functions. However, these approaches use
restricted deformation models, require supervised labels, or do not
guarantee a diffeomorphic (topology-preserving) registration.
Furthermore, learning-based registration tools have not been
derived from a probabilistic framework that can offer uncertainty
estimates.
In this paper, we build a connection between classical and
learning-based methods. We present a probabilistic generative
model and derive an unsupervised learning-based inference algorithm
that uses insights from classical registration methods and makes use of
recent developments in convolutional neural networks (CNNs). We
demonstrate our method on a 3D brain registration task for both
images and anatomical surfaces, and provide extensive empirical
analyses. Our principled approach results in state of the art
accuracy and very fast runtimes, while providing diffeomorphic
guarantees.
Our algorithm can infer the registration of new image
pairs in under a second. Compared to traditional
methods, our approach is significantly faster,
and compared to recent learning based methods,
our method offers diffeomorphic guarantees.
We demonstrate that the surface extension to our
model can help improve registration while
preserving properties such as low runtime and
diffeomorphisms. Furthermore, several conclusions
shown in recent papers apply to our method. For
example, when only given very limited
training data, deformation from VoxelMorph can
still be used as initialization to a classical
method, enabling faster convergence (Balakrishnan et al., 2019).
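The diffeomorphic guarantee in this line of work typically comes from predicting a stationary velocity field and integrating it numerically, commonly via scaling and squaring. A minimal 2D sketch of that integration step (my simplification; the actual VoxelMorph implementation differs in details):

```python
import numpy as np
from scipy.ndimage import map_coordinates

def compose(disp_a, disp_b):
    """Compose displacement fields: result(x) = disp_b(x) + disp_a(x + disp_b(x))."""
    h, w = disp_a.shape[1:]
    grid = np.mgrid[0:h, 0:w].astype(float)
    coords = grid + disp_b
    warped_a = np.stack([map_coordinates(disp_a[i], coords, order=1, mode='nearest')
                         for i in range(2)])
    return disp_b + warped_a

def scaling_and_squaring(velocity, steps=6):
    """Integrate a stationary velocity field into a displacement, phi = exp(v)."""
    disp = velocity / (2 ** steps)      # start from a near-identity step
    for _ in range(steps):
        disp = compose(disp, disp)      # phi_{2t} = phi_t o phi_t
    return disp

# a constant velocity field integrates to the same constant translation
v = 0.1 * np.ones((2, 16, 16))
phi = scaling_and_squaring(v)
```

Composing a near-identity map with itself repeatedly is what keeps the final transform invertible, which is the topology-preservation property the abstract refers to.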
Not that many annotated training samples required?
Few Labeled Atlases are Necessary for
Deep-Learning-Based Segmentation Hyeon
Woo Lee, Mert R. Sabuncu, Adrian V. Dalca (Submitted on 13 Aug 2019
(v1), last revised 15 Aug 2019 (this version, v3))
https://arxiv.org/abs/1908.04466
We tackle biomedical image segmentation in the scenario of only
a few labeled brain MR images. This is an important and challenging
task in medical applications, where manual annotations are time-
consuming. Classical multi-atlas based anatomical segmentation
methods use image registration to warp segments from labeled images
onto a new scan. These approaches have traditionally required
significant runtime, but recent learning-based registration
methods promise substantial runtime improvement.
In a different paradigm, supervised learning-based segmentation
strategies have gained popularity. These methods have consistently
used relatively large sets of labeled training data, and their behavior in the
regime of a few labeled images has not been thoroughly evaluated. In
this work, we provide two important results for anatomical
segmentation in the scenario where few labeled images are available.
First, we propose a straightforward implementation of an efficient semi-supervised
learning-based registration method, which we
showcase in a multi-atlas segmentation framework. Second, through a
thorough empirical study, we evaluate the performance of a supervised
segmentation approach, where the training images are augmented
via random deformations. Surprisingly, we find that in both
paradigms, accurate segmentation is generally possible even
in the context of few labeled images.
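The random-deformation augmentation evaluated in the paper can be approximated with the classic smoothed-noise recipe below; the parameter names and values are illustrative assumptions, not the authors':

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def random_elastic_deform(image, alpha=10.0, sigma=4.0, seed=None):
    """Warp an image with a smoothed random displacement field.

    alpha scales the deformation magnitude, sigma its spatial smoothness.
    """
    rng = np.random.default_rng(seed)
    h, w = image.shape
    # smoothed white noise -> spatially correlated displacements
    dy = gaussian_filter(rng.standard_normal((h, w)), sigma) * alpha
    dx = gaussian_filter(rng.standard_normal((h, w)), sigma) * alpha
    gy, gx = np.mgrid[0:h, 0:w].astype(float)
    return map_coordinates(image, [gy + dy, gx + dx], order=1, mode='reflect')

# each call with a new seed yields a new plausible anatomy variant
img = np.zeros((64, 64)); img[20:40, 20:40] = 1.0
aug = random_elastic_deform(img, seed=0)
```

Applying the same sampled field to both the image and its label map yields a new labeled training pair, which is how a few labeled atlases can be stretched into a larger training set.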
Metric learning approach for diffeomorphic transformation
Metric Learning for Image Registration
Marc Niethammer, Roland Kwitt, François-Xavier Vialard (2019)
https://arxiv.org/abs/1904.09524 / CVPR 2019
https://github.com/uncbiag/registration
Image registration is a key technique in medical image analysis to estimate
deformations between image pairs. A good deformation model is important for
high-quality estimates. However, most existing approaches use ad-hoc
deformation models chosen for mathematical convenience rather than to
capture observed data variation. Recent deep learning approaches learn
deformation models directly from data.
However, they provide limited control over the spatial regularity of
transformations. Instead of learning the entire registration approach, we learn a
spatially-adaptive regularizer within a registration model. This allows
controlling the desired level of regularity and preserving structural
properties of a registration model.
For example, diffeomorphic transformations can be attained. Our approach is a
radical departure from existing deep learning approaches to image registration
by embedding a deep learning model in an optimization-based
registration algorithm to parameterize and data-adapt the
registration model itself.
Much experimental and theoretical work remains. More sophisticated CNN
models should be explored; the method should be adapted for fast end-to-end
regression; more general parameterizations of regularizers should be studied
(e.g., allowing sliding), and the approach should be developed for LDDMM.
One/Few-shot learning for image registration as well
One Shot Learning for Deformable Medical
Image Registration and Periodic Motion
Tracking Tobias Fechter, Dimos Baltas (11 Jul 2019)
https://arxiv.org/abs/1907.04641
Deformable image registration is a very important field of research in medical imaging. Recently
multiple deep learning approaches were published in this area showing promising results. However,
drawbacks of deep learning methods are the need for a large amount of training
datasets and their inability to register unseen images different from the training datasets. One
shot learning comes without the need of large training datasets and has already been proven to
be applicable to 3D data.
In this work we present a one-shot registration approach for periodic motion tracking in
3D and 4D datasets. When applied to a 3D dataset, the algorithm calculates the inverse of a
registration vector field simultaneously. For registration we employed a U-Net combined with a
coarse to fine approach and a differential spatial transformer module. The algorithm
was thoroughly tested with multiple 4D and 3D datasets publicly available. The results show that
the presented approach is able to track periodic motion and to yield a competitive registration
accuracy. Possible applications are the use as a stand-alone algorithm for 3D and 4D
motion tracking or in the beginning of studies until enough datasets for a separate training phase
are available.
Inpainting with registration
Synthesis and Inpainting-Based MR-
CT Registration for Image-Guided
Thermal Ablation of Liver Tumors
Dongming Wei, Sahar Ahmad, Jiayu Huo, Wen Peng, Yunhao Ge,
Zhong Xue, Pew-Thian Yap, Wentao Li, Dinggang Shen, Qian Wang
[Submitted on 30 Jul 2019] https://arxiv.org/abs/1907.13020
In this paper, we propose a fast MR-CT image registration
method to overlay a pre-procedural MR (pMR)
image onto an intra-procedural CT (iCT) image for
guiding the thermal ablation of liver tumors. By first using a
Cycle-GAN model with mutual information constraint to
generate a synthesized CT (sCT) image from the
corresponding pMR, pre-procedural MR-CT image
registration is carried out through traditional mono-
modality CT-CT image registration.
At the intra-procedural stage, a partial-convolution-
based network is first used to inpaint the probe and its
artifacts in the iCT image. Then, an unsupervised
registration network is used to efficiently align the pre-
procedural CT (pCT) with the inpainted iCT (inpCT)
image.
The final transformation from pMR to iCT is obtained
by combining the two estimated
transformations, i.e., (1) from the pMR image space to
the pCT image space (through sCT) and (2) from the pCT
image space to the iCT image space (through inpCT).
Registration with Segmentation jointly
Deep Learning-Based Concurrent Brain
Registration and Tumor Segmentation
Théo Estienne et al. (2020) Front. Comput. Neurosci., 20 March 2020 |
https://doi.org/10.3389/fncom.2020.00017
https://github.com/TheoEst/joint_registration_tumor_segmentation
Keras
In this paper, we propose a novel, efficient, and multi-task algorithm that
addresses the problems of image registration and brain tumor
segmentation jointly. Our method exploits the dependencies between
these tasks through a natural coupling of their interdependencies during
inference. In particular, the similarity constraints are relaxed within the
tumor regions using an efficient and relatively simple formulation. We
evaluated the performance of our formulation both quantitatively and
qualitatively for registration and segmentation problems on two publicly
available datasets (BraTS 2018 and OASIS 3), reporting competitive results
with other recent state-of-the-art methods.
Registration with Segmentation and Synthesis
JSSR: A Joint Synthesis, Segmentation, and Registration
System for 3D Multi-Modal Image Alignment of Large-
scale Pathological CT Scans
Fengze Liu, Jingzheng Cai, Yuankai Huo, Chi-Tung Cheng, Ashwin Raju, Dakai Jin, Jing Xiao, Alan Yuille, Le
Lu, ChienHung Liao, Adam P Harrison
[Submitted on 25 May 2020]https://arxiv.org/abs/2005.12209
Multi-modal image registration is a challenging problem yet important clinical
task in many real applications and scenarios. For medical imaging based diagnosis,
deformable registration among different image modalities is often required in order to
provide complementary visual information, as the first step. During the registration, the
semantic information is the key to match homologous points and pixels.
Nevertheless, many conventional registration methods are incapable of capturing
the high-level semantic anatomical dense correspondences.
In this work, we propose a novel multi-task learning system, JSSR, based on an end-
to-end 3D convolutional neural network that is composed of a generator, a
register and a segmentor, for the tasks of synthesis, registration and segmentation,
respectively.
This system is optimized to satisfy the implicit constraints between different tasks
unsupervisedly. It first synthesizes the source domain images into the target
domain, then an intra-modal registration is applied on the synthesized images and
target images. Then we can get the semantic segmentation by applying segmentors
on the synthesized images and target images, which are aligned by the same
deformation field generated by the registers. The supervision from another fully-
annotated dataset is used to regularize the segmentors.
Follow https://paperswithcode.com/
Papers with Code for state-of-the-art
CT Denoising
Deep Learning
https://github.com/SSinyu/CT-Denoising-Review
Plenty of deep learning attempts
Deep Learning for Low-Dose CT Denoising
Maryam Gholizadeh-Ansari, Javad Alirezaie, Paul Babyn (Submitted
on 25 Feb 2019) https://arxiv.org/abs/1902.10127
In this paper, we propose a deep neural network that uses dilated convolutions with different dilation rates instead of standard convolution, helping to capture more contextual information in fewer layers. Also, we have employed residual learning by creating shortcut connections to transmit image information from the early layers to later ones. To further improve the performance of the network, we have introduced a non-trainable edge detection layer that extracts edges in horizontal, vertical, and diagonal directions.
Finally, we demonstrate that optimizing the network by a combination of mean-square error loss and perceptual loss preserves many structural details in the CT image. This objective function does not suffer from the over-smoothing and blurring effects caused by per-pixel loss, nor from the grid-like artifacts resulting from perceptual loss. The experiments show that each modification to the network improves the outcome while only minimally changing the complexity of the network.
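One plausible realization of such a non-trainable edge-detection layer (the paper's exact kernels may differ) uses fixed Sobel-style filters for the four directions:

```python
import numpy as np
from scipy.signal import convolve2d

# fixed (non-trainable) kernels: horizontal, vertical, and two diagonal edges
K_H = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)  # y-gradient
K_V = K_H.T                                                        # x-gradient
K_D1 = np.array([[0, 1, 2], [-1, 0, 1], [-2, -1, 0]], dtype=float)
K_D2 = np.fliplr(K_D1) * -1.0

def edge_maps(image):
    """Stack the four fixed edge responses, e.g. as extra input channels."""
    return np.stack([convolve2d(image, k, mode='same', boundary='symm')
                     for k in (K_H, K_V, K_D1, K_D2)])

img = np.zeros((8, 8)); img[:, 4:] = 1.0   # a vertical step edge
maps = edge_maps(img)
```

Because the kernels are constants, the layer adds edge information to the loss or features without adding trainable parameters.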
Few-view CT Reconstruction to reduce radiation dose
Dual Network Architecture for Few-view CT --
Trained on ImageNet Data and Transferred for
Medical Imaging
Huidong Xie, Hongming Shan, Wenxiang Cong, Xiaohua Zhang, Shaohua Liu, Ruola
Ning, Ge Wang (12 Sept 2019)
https://arxiv.org/abs/1907.01262
Few-view CT image reconstruction is an important topic to reduce
the radiation dose. Recently, data-driven algorithms have shown
great potential to solve the few-view CT problem. In this paper, we
develop a dual network architecture (DNA) for reconstructing
images directly from sinograms. In the proposed DNA method, a point-
based fully-connected layer learns the backprojection process
requesting significantly less memory than the prior arts do.
This paper is not the first work for reconstructing images directly from raw data, but previously proposed methods require a significantly greater amount of GPU memory for training. It is underlined that our proposed method solves the memory issue by learning the reconstruction process with the point-wise fully-connected layer and other proper network ingredients. Also, by passing only a single point into the fully-connected layer, the proposed method can truly learn the backprojection process. In our study, the DNA network demonstrates superior performance and generalizability. In future work, we will validate the proposed method on images up to dimension 512 × 512 or even 1024 × 1024.
Wasserstein GANs for low-dose CT denoising
Low Dose CT Image Denoising Using a
Generative Adversarial Network with
Wasserstein Distance and Perceptual Loss
Qingsong Yang et al. (2018)
Rensselaer Polytechnic Institute, Troy, NY
https://dx.doi.org/10.1109%2FTMI.2018.2827462 - Cited by 139
Over the past years, various low-dose CT methods have produced impressive results.
However, most of the algorithms developed for this application, including the recently
popularized deep learning techniques, aim for minimizing the mean-squared-error (MSE)
between a denoised CT image and the ground truth under generic penalties. Although
the peak signal-to-noise ratio (PSNR) is improved, MSE- or weighted-MSE-based
methods can compromise the visibility of important structural details after
aggressive denoising.
This paper introduces a new CT image denoising method based on the generative
adversarial network (GAN) with Wasserstein distance and perceptual similarity. The
Wasserstein distance is a key concept of the optimal transport theory, and promises
to improve the performance of GAN. The perceptual loss suppresses noise by
comparing the perceptual features of a denoised output against those of the ground truth
in an established feature space, while the GAN focuses more on migrating the data
noise distribution from strong to weak statistically. Therefore, our proposed method
transfers our knowledge of visual perception to the image denoising task
and is capable of not only reducing the image noise level but also trying to keep the critical
information at the same time. Promising results have been obtained in our experiments
with clinical CT images.
In the future, we plan to incorporate the WGAN-VGG network with more
complicated generators such as the networks reported in [
Chen et al. 2017, Kang et al. 2016] and extend these networks for image
reconstruction from raw data by making a neural network counterpart of
the FBP process.
Sinogram pre-filtration and image post-processing are computationally efficient compared to iterative
reconstruction. Noise characteristic was well modeled in the sinogram domain for sinogram-domain filtration.
However, sinogram data of commercial scanners are not readily available to users, and these
methods may suffer from resolution loss and edge blurring. Sinogram data need to be carefully
processed, otherwise artifacts may be induced in the reconstructed images. Differently from sinogram
denoising, image post-processing directly operates on an image. Many efforts were made in the image
domain to reduce LDCT noise and suppress artifacts.
Despite the impressive denoising results with these innovative deep learning network structures, they fall into a category of an
end-to-end network that typically uses the mean squared error (MSE) between the network output and the ground truth as
the loss function. As revealed by the recent work [Johnson et al. 2016; Ledig et al. 2016], this per-pixel MSE is often
associated with over-smoothed edges and loss of details. As an algorithm tries to minimize per-pixel MSE, it overlooks
subtle image textures/signatures critical for human perception. It is reasonable to assume that CT images distribute over some
manifolds. From that point of view, the MSE based approach tends to take the mean of high-resolution patches using
the Euclidean distance rather than the geodesic distance. Therefore, in addition to the blurring effect, artifacts are also
possible such as non-uniform biases.
Zoomed ROI of the red rectangle in Fig. 7 demonstrates the two
attenuation liver lesions in the red and blue circles. The display
window is [−160, 240]HU.
Attention with GANs
Visual Attention Network for Low-Dose CT
Wenchao Du ; Hu Chen ; Peixi Liao ; Hongyu Yang ; Ge Wang ; Yi Zhang | IEEE
Signal Processing Letters ( Volume: 26 , Issue: 8 , Aug. 2019 )
https://doi.org/10.1109/LSP.2019.2922851
Noise and artifacts are intrinsic to low-dose computed tomography (LDCT) data acquisition, and will significantly affect the imaging performance. Perfect noise removal and image restoration is intractable in the context of LDCT due to the statistical and the technical uncertainties. In this letter, we apply the generative adversarial network (GAN) framework with a visual attention mechanism to deal with this problem in a data-driven/machine learning fashion.
Our main idea is to inject visual attention knowledge into the learning process of the GAN to provide a powerful prior of the noise distribution. By doing this, both the generator and discriminator networks are empowered with visual attention information so that they will not only pay special attention to noisy regions and surrounding structures but also explicitly assess the local consistency of the recovered regions. Our experiments qualitatively and quantitatively demonstrate the effectiveness of the proposed method with clinical CT images.
Cycle-consistent adversarial denoising for CT
Cycle-consistent adversarial denoising
network for multiphase coronary CT
angiography
Eunhee Kang, Hyun Jung Koo, Dong Hyun Yang, Joon Bum Seo, Jong
Chul Ye. Medical Physics (2018) https://doi.org/10.1002/mp.13284
We propose an unsupervised learning technique that can remove the noise of the CT images in the low-dose phases by learning from the CT images in the routine-dose phases. Although a supervised learning approach is not applicable due to the differences in the underlying heart structure in the two phases, the images in the two phases are closely related, so we propose a cycle-consistent adversarial denoising network to learn the mapping between the low- and high-dose cardiac phases.
Experimental results showed that the proposed method effectively reduces the noise in the low-dose CT image while preserving detailed texture and edge information. Moreover, thanks to the cyclic consistency and identity loss, the proposed network does not create any artificial features that are not present in the input images. Visual grading and quality evaluation also confirm that the proposed method provides significant improvement in diagnostic quality.
The proposed network can learn the image distributions from the routine-dose cardiac phases, which is a big advantage over the existing supervised learning networks that need exactly matched low- and routine-dose CT images. Considering the effectiveness and practicability of the proposed method, we believe that the proposed method can be applied to many other CT acquisition protocols.
Example of multiphase coronary CTA acquisition protocol. Low-dose acquisition is performed in phases 1 and 2, whereas routine-dose acquisition is performed in phases 3–10.
Denoising
Insights
Outside CTs
Image Denoising Not necessarily needing noise-free ground truth
Noise2Noise: Learning Image Restoration
without Clean Data Jaakko Lehtinen, Jacob Munkberg, Jon Hasselgren,
Samuli Laine, Tero Karras, Miika Aittala, Timo Aila
NVIDIA; Aalto University; MIT CSAIL
(Submitted on 12 Mar 2018)
https://arxiv.org/abs/1803.04189 https://github.com/NVlabs/noise2noise
We apply basic statistical reasoning to signal reconstruction by
machine learning -- learning to map corrupted observations to clean
signals -- with a simple and powerful conclusion: it is possible to learn
to restore images by only looking at corrupted examples, at
performance at and sometimes exceeding training using clean data,
without explicit image priors or likelihood models of the corruption. In
practice, we show that a single model learns photographic noise
removal, denoising synthetic Monte Carlo images, and
reconstruction of undersampled MRI scans -- all corrupted by
different processes -- based on noisy data only.
That clean data is not necessary for denoising is not a new observation:
indeed, consider, for instance, the classic BM3D algorithm that draws on
self-similar patches within a single noisy image. We show that the
previously-demonstrated high restoration performance of deep neural
networks can likewise be achieved entirely without clean data, all based on
the same general-purpose deep convolutional model. This points
the way to significant benefits in many applications by removing the need for
potentially strenuous collection of clean data.
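The statistical core of Noise2Noise, that the MSE-optimal predictor trained against noisy targets converges to the clean signal when the noise is zero-mean, can be checked in a few lines of NumPy (a toy illustration, not the paper's network):

```python
import numpy as np

rng = np.random.default_rng(0)
clean = np.sin(np.linspace(0, 2 * np.pi, 100))

# many independent noisy realizations of the same underlying clean signal
noisy_targets = clean + rng.normal(0.0, 0.5, size=(2000, 100))

# the per-pixel MSE minimizer over noisy targets is their mean, which
# approaches the clean signal because the noise is zero-mean
estimate = noisy_targets.mean(axis=0)
max_err = float(np.abs(estimate - clean).max())
```

A network trained with MSE on noisy input/noisy target pairs exploits the same averaging effect, which is why it never needs to see a clean image.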
Finnish Center for Artificial Intelligence FCAI
Published on Nov 19, 2018
https://youtu.be/dcV0OfxjrPQ
As a sanity check though, it would be nice to have some clean “multiple frame averaged” ground truths.
[DnCNN] Beyond a Gaussian Denoiser: Residual Learning of Deep
CNN for Image Denoising https://arxiv.org/abs/1608.03981 This was introduced above
already
Noise2Noise: Learning Image Restoration without Clean Data
https://arxiv.org/abs/1803.04189 This was introduced above already
For benchmarking deep learning methods, unlike previous work
[Abdelhamed et al. 2018]
that directly tests with the pre-trained models, we re-train these models with the
same network architecture and similar hyper-parameters on the FMD dataset
from scratch. Specifically, we compare two representative models, one of
which requires ground truth (DnCNN) and the other does not
(Noise2Noise).
The benchmark results show that deep learning denoising models
trained on our FMD dataset outperforms other methods by a large
margin across all imaging modalities and noise levels.
A Poisson-Gaussian Denoising Dataset with Real Fluorescence Microscopy Images
Yide Zhang, Yinhao Zhu, Evan Nichols, Qingfei Wang, Siyuan Zhang, Cody Smith, Scott Howard
University of Notre Dame
(Submitted on 26 Dec 2018 (v1), last revised 5 Apr 2019)
https://arxiv.org/abs/1812.10366 - http://tinyurl.com/y6mwqcjs - https://github.com/bmmi/denoising-fluorescence
Shape Priors for ICH You can probably forget about it?
Haematoma “goes where it can” model as anomaly? But you probably want to co-segment the haematoma with some more regular shapes?
Automation of CT-based haemorrhagic stroke assessment for
improved clinical outcomes: study protocol and design
Betty Chinda, George Medvedev, William Siu, Martin Ester, Ali Arab, Tao Gu,
Sylvain Moreno, Ryan C N D’Arcy, Xiaowei Song
BMJ Open | Neurology | Protocol
http://dx.doi.org/10.1136/bmjopen-2017-020260 (2018)
Haemorrhagic stroke is of significant healthcare concern due to its association with high
mortality and lasting impact on the survivors’ quality of life. Treatment decisions
and clinical outcomes depend strongly on the size, spread and location of
the haematoma. Non-contrast CT (NCCT) is the primary neuroimaging modality for
haematoma assessment in haemorrhagic stroke diagnosis. Current procedures do not
allow convenient NCCT-based haemorrhage volume calculation in clinical settings, while
research-based approaches are yet to be tested for clinical utility; there is a
demonstrated need for developing effective solutions. The project under review
investigates the development of an automatic NCCT-based haematoma
computation tool in support of accurate quantification of haematoma volumes.
CT scans showing different shapes of haematoma. The regions of hyperintensities
(bright) indicate the bleeding. Left panel shows it in an elliptical shape. The volume of the
haematoma can be estimated using the ABC/2 method. The red arrow indicates the ‘A’
dimension, while the green arrow is the ‘B’ dimension. Right panel shows the haematoma in a
non-elliptical (irregular) shape that has encroached into the lateral ventricles. The ABC/2 method
cannot be applied to this case.
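The ABC/2 estimate described in the caption reduces to a one-line formula; a small helper (parameter names are mine) with the conventional unit conversion:

```python
def abc_over_2(a_mm, b_mm, c_mm):
    """ABC/2 ellipsoid approximation of haematoma volume.

    a_mm: largest haemorrhage diameter on the axial slice with the biggest bleed
    b_mm: diameter perpendicular to A on the same slice
    c_mm: vertical extent (number of slices with haemorrhage x slice thickness)
    Returns volume in millilitres (1 mL = 1000 mm^3).
    """
    return a_mm * b_mm * c_mm / 2.0 / 1000.0

volume_ml = abc_over_2(40.0, 30.0, 25.0)   # a 40 x 30 x 25 mm bleed -> 15.0 mL
```

The ellipsoid assumption is exactly what breaks down for the irregular, ventricle-encroaching haematoma in the right panel, which is the motivation for automated volumetry.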
An example showing haematoma with
no clear bleed-parenchyma boundary;
the volume of which cannot be
correctly calculated using existing
automation software and
demonstrating the need for improved
algorithms.
A screenshot of the Quantomo software being used for comparison in validity testing. The top toolbar shows options for selection and estimation of the haematoma; the left toolbar shows the measurement panel where the total volume is displayed. The most accurate way of estimating the volume is to go slice by slice in 2D, which can be time-consuming, whereas the 3D estimate tends to misclassify normal tissues surrounding the haematoma.
Image restoration constraining with shape priors
Anatomically Constrained Neural Networks (ACNN):
Application to Cardiac Image Enhancement and Segmentation
Ozan Oktay, Enzo Ferrante, Konstantinos Kamnitsas, Mattias Heinrich, Wenjia Bai, Jose Caballero, Stuart Cook, Antonio de Marvao, Timothy
Dawes, Declan O’Regan, Bernhard Kainz, Ben Glocker, and Daniel Rueckert
Biomedical Image Analysis Group, Imperial College London; MRC Clinical Sciences Centre (CSC), London
(5 Dec 2017) https://arxiv.org/abs/1705.08302 - Cited by 95
Incorporation of prior knowledge about organ
shape and location is key to improve performance of
image analysis approaches. In particular, priors can be
useful in cases where images are corrupted and
contain artefacts due to limitations in image
acquisition. The highly constrained nature of anatomical
objects can be well captured with learning based
techniques. However, in most recent and promising
techniques such as CNN based segmentation it is not
obvious how to incorporate such prior knowledge.
The new framework encourages models to follow
the global anatomical properties of the underlying
anatomy (e.g. shape, label structure) via learnt non-
linear representations of the shape. We show that the
proposed approach can be easily adapted to different
analysis tasks (e.g. image enhancement ,
segmentation) and improve the prediction accuracy of
the state-of-the-art models
Transformers as a way of getting the shape prior in?
TETRIS: Template Transformer Networks
for Image Segmentation
Matthew Chung Hai Lee, Kersten Petersen, Nick Pawlowski, Ben
Glocker, Michiel Schaap
Biomedical Image Analysis Group, Imperial College London / HeartFlow
10 Apr 2019 (modified: 11 Jun 2019) MIDL 2019
https://openreview.net/forum?id=r1lKJlSiK4 - Cited by 3
http://wp.doc.ic.ac.uk/bglocker/project/semantic-imaging/
In this paper we introduce and compare different approaches for
incorporating shape prior information into neural network
based image segmentation. Specifically, we introduce the concept
of template transformer networks (TeTrIS) where a shape
template is deformed to match the underlying structure of interest
through an end-to-end trained spatial transformer network. This has
the advantage of explicitly enforcing shape priors and is free of
discretisation artefacts by providing a soft partial volume
segmentation. We also introduce a simple yet effective way of
incorporating priors in state-of-the-art pixel-wise binary
classification methods such as fully convolutional networks and
U-net. Here, the template shape is given as an additional input
channel, incorporating this information significantly reduces false
positives. We report results on sub-voxel segmentation of
coronary lumen structures in cardiac computed tomography
showing the benefit of incorporating priors in neural network based
image segmentation.
Anatomical Shape prior for partially labeled segmentation
Prior-aware Neural Network for
Partially-Supervised Multi-Organ
Segmentation
Yuyin Zhou, Zhe Li, Song Bai, Chong Wang, Xinlei
Chen, Mei Han, Elliot Fishman, Alan Yuille
(Submitted on 12 Apr 2019)
https://arxiv.org/abs/1904.06346
As data annotation requires massive human labor
from experienced radiologists, it is common that training
data are partially labeled, e.g., pancreas datasets only
have the pancreas labeled while leaving the rest marked
as background. However, these background labels can
be misleading in multi-organ segmentation since the
"background" usually contains some other organs of
interest. To address the background ambiguity in these
partially-labeled datasets, we propose Prior-aware
Neural Network (PaNN) via explicitly incorporating
anatomical priors on abdominal organ sizes, guiding
the training process with domain-specific
knowledge. More specifically, PaNN assumes that the
average organ size distributions in the abdomen should
approximate their empirical distributions, a prior
statistics obtained from the fully-labeled
dataset.
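One plain rendering of PaNN's size prior (an assumption about the exact form; the paper's formulation differs in details) is a KL penalty between the empirical organ-size proportions and the average predicted label distribution:

```python
import numpy as np

def size_prior_loss(pred_probs, prior):
    """KL(prior || average predicted label distribution).

    pred_probs: (N, K) per-voxel softmax outputs over K labels
    prior: (K,) empirical organ-size proportions from the fully-labeled data
    """
    avg = np.clip(pred_probs.mean(axis=0), 1e-8, None)
    return float(np.sum(prior * np.log(prior / avg)))

# predictions whose average matches the prior incur (near-)zero penalty
prior = np.array([0.7, 0.2, 0.1])
preds = np.tile(prior, (1000, 1))
loss = size_prior_loss(preds, prior)
```

Added to the partial segmentation loss, a term like this discourages the network from dumping unlabeled organs into "background", since doing so distorts the predicted size proportions.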
Multi-task learning with shape priors
Shape-Aware Complementary-Task
Learning for Multi-Organ
Segmentation
Fernando Navarro, Suprosanna Shit, Ivan Ezhov, Johannes
Paetzold, Andrei Gafita, Jan Peeken, Stephanie Combs, Bjoern
Menze (Submitted on 14 Aug 2019)
https://arxiv.org/abs/1908.05099v1
https://github.com/JunMa11/SegWithDistMap
Multi-organ segmentation in whole-body
computed tomography (CT) is a constant
pre-processing step which finds its
application in organ-specific image retrieval,
radiotherapy planning, and interventional
image analysis. We address this problem
from an organ-specific shape-prior
learning perspective. We introduce the
idea of complementary-task learning
to enforce shape-prior leveraging the
existing target labels.
We propose two complementary-tasks
namely i) distance map regression and
ii) contour map detection to explicitly
encode the geometric properties of each
organ. We evaluate the proposed solution on
the public VISCERAL dataset containing CT
scans of multiple organs.
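Both complementary targets can be derived directly from an existing label mask, so no extra annotation is needed. A small numpy sketch (naive L1 chamfer transform and 4-neighbour contour; illustration only, not the authors' implementation):

```python
import numpy as np

def contour_map(mask):
    # a foreground pixel lies on the contour if any 4-neighbour is background
    p = np.pad(mask, 1, constant_values=0)
    interior = (p[1:-1, 1:-1] & p[:-2, 1:-1] & p[2:, 1:-1]
                & p[1:-1, :-2] & p[1:-1, 2:])
    return mask & ~interior

def distance_map(mask):
    # two-pass chamfer transform: L1 distance of each pixel to the
    # nearest foreground pixel (regression target for the distance task)
    d = np.where(mask, 0.0, np.inf)
    h, w = d.shape
    for i in range(h):
        for j in range(w):
            if i: d[i, j] = min(d[i, j], d[i - 1, j] + 1)
            if j: d[i, j] = min(d[i, j], d[i, j - 1] + 1)
    for i in range(h - 1, -1, -1):
        for j in range(w - 1, -1, -1):
            if i < h - 1: d[i, j] = min(d[i, j], d[i + 1, j] + 1)
            if j < w - 1: d[i, j] = min(d[i, j], d[i, j + 1] + 1)
    return d
```

The network then regresses `distance_map` and detects `contour_map` as auxiliary heads next to the main segmentation output.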
Flagging problematic volumes/slices like with clinical referrals?
An Alarm System For Segmentation
Algorithm Based On Shape Model
Fengze Liu, Yingda Xia, Dong Yang, Alan Yuille, Daguang Xu
(Submitted on 26 Mar 2019) https://arxiv.org/abs/1903.10645
We build an alarm system that will set off alerts when the
segmentation result is possibly unsatisfactory, assuming no
corresponding ground truth mask is provided. One plausible solution
is to project the segmentation results into a low dimensional feature
space; then learn classifiers/regressors to predict their qualities.
Motivated by this, in this paper, we learn a feature space using the
shape information which is a strong prior shared among
different datasets and robust to the appearance variation of input
data.The shape feature is captured using a Variational Auto-Encoder
(VAE) network that trained with only the ground truth masks.
During testing, the segmentation results with bad shapes shall
not fit the shape prior well, resulting in large loss values. Thus,
the VAE is able to evaluate the quality of segmentation result on
unseen data, without using ground truth. Finally, we learn a regressor
in the one-dimensional feature space to predict the qualities of
segmentation results. Our alarm system is evaluated on several
recent state-of-art segmentation algorithms for 3D medical
segmentation tasks.
Visualization on NIH CT data for
pancreas segmentation. The Dice
between GT and prediction is 47.06
(real Dice), while the Dice between
the prediction and its VAE
reconstruction is 47.25 (fake Dice).
Our method uses the fake Dice to
predict the real Dice, which
is usually unknown at the inference phase
of real applications. This case shows
how the two Dice scores are related
to each other. In contrast, the
uncertainty used in existing
approaches is mainly distributed on the
boundary of the predicted mask, which
makes it vague information when
detecting failure cases.
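The proxy metric itself is easy to state in code; `reconstruct` below is a stand-in for the trained shape VAE (a badly-shaped prediction reconstructs poorly, so the fake Dice drops):

```python
import numpy as np

def dice(a, b, eps=1e-8):
    # standard Dice overlap between two binary masks
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def fake_dice(pred, reconstruct):
    # `reconstruct` stands in for the trained shape VAE: it maps a
    # predicted mask to its reconstruction from the learned shape space
    return dice(pred, reconstruct(pred))
```

A regressor trained on (fake Dice, real Dice) pairs then raises the alarm when the predicted real Dice falls below a threshold.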
Image restoration jointly with segmentation and automatic labelling?
CT Image Enhancement Using Stacked Generative Adversarial Networks and Transfer Learning for Lesion
Segmentation Improvement
Youbao Tang, Jinzheng Cai, Le Lu, Adam P. Harrison, Ke Yan, Jing Xiao, Lin Yang, Ronald M. Summers
(Submitted on 18 Jul 2018) https://arxiv.org/abs/1807.07144
Automated lesion segmentation from
computed tomography (CT) is an important
and challenging task in medical image analysis.
While many advancements have been made,
there is room for continued
improvements.
One hurdle is that CT images can exhibit high
noise and low contrast, particularly in lower
dosages. To address this, we focus on a
preprocessing method for CT images that uses
stacked generative adversarial networks
(SGAN) approach. The first GAN reduces the
noise in the CT image and the second GAN
generates a higher resolution image with
enhanced boundaries and high contrast.
To make up for the absence of high quality CT
images, we detail how to synthesize a large
number of low- and high-quality natural
images and use transfer learning with
progressively larger amounts of CT images.
Three examples of CT image enhancement results using
different methods on original images: input, BM3D,
DnCNN, single GAN, our denoising GAN, and our SGAN.
Joint Deep Denoising and Segmentation
DenoiSeg: Joint Denoising and Segmentation
Tim-Oliver Buchholz, Mangal Prakash, Alexander Krull, Florian Jug
[Submitted on 6 May 2020]
https://arxiv.org/abs/2005.02987
https://github.com/juglab/DenoiSeg Tensorflow
https://pypi.org/project/denoiseg/
Here we propose DenoiSeg, a new
method that can be trained end-to-end on
only a few annotated ground truth
segmentations. We achieve this by
extending Noise2Void, a self-
supervised denoising scheme that can be
trained on noisy images alone, to also predict
dense 3-class segmentations.
We reason that the success of our proposed
method originates from the fact that similar
“skills” are required for denoising and
segmentation. The network becomes a
denoising expert by seeing all available raw
data, while co-learning to segment, even if
only a few segmentation labels are available.
This hypothesis is additionally fueled by our
observation that the best segmentation
results on high quality (very low noise) raw
data are obtained when moderate amounts
of synthetic noise are added.
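The joint objective combines a Noise2Void-style self-supervised denoising term with a 3-class cross-entropy applied only at annotated pixels. A hedged numpy sketch (names and weighting are ours, not from the DenoiSeg code):

```python
import numpy as np

def denoiseg_loss(pred_denoised, pred_seg, noisy, seg_onehot,
                  annotated, blindspot, alpha=0.5, eps=1e-12):
    """pred_seg/seg_onehot: (3, H, W) for {foreground, background, border};
    annotated: (H, W) mask, 1 where segmentation labels exist;
    blindspot: (H, W) mask, 1 at the Noise2Void masked-out pixels."""
    # self-supervised denoising term: predict the held-out noisy values
    denoise = np.sum(blindspot * (pred_denoised - noisy) ** 2) \
        / max(blindspot.sum(), 1)
    # supervised 3-class cross-entropy, only on the few labeled pixels
    ce = -(seg_onehot * np.log(np.clip(pred_seg, eps, 1.0))).sum(axis=0)
    seg = np.sum(annotated * ce) / max(annotated.sum(), 1)
    return (1 - alpha) * denoise + alpha * seg
```

Because the denoising term needs no labels, every raw image contributes to training even when `annotated` is almost everywhere zero.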
Or even without the segmentation target?
Segmentation-Aware Image Denoising without
Knowing True Segmentation
Sicheng Wang, Bihan Wen, Junru Wu, Dacheng Tao, Zhangyang Wang
(Submitted on 22 May 2019)
https://arxiv.org/abs/1905.08965
Several recent works discussed application-driven image restoration neural networks,
which are capable of not only removing noise in images but also preserving
their semantic-aware details, making them suitable for various high-level
computer vision tasks as the pre-processing step. However, such approaches
require extra annotations for their high-level vision tasks, in order to train the joint
pipeline using hybrid losses. The availability of those annotations is yet often limited
to a few image sets, potentially restricting the general applicability of these methods to
denoising more unseen and unannotated images.
Motivated by that, we propose a segmentation-aware image denoising model
dubbed U-SAID, based on a novel unsupervised approach with a pixel-wise
uncertainty loss. U-SAID does not need any ground-truth segmentation
map, and thus can be applied to any image dataset. It generates denoised images with
comparable or even better quality, and the denoised results show stronger
robustness for subsequent semantic segmentation tasks, when compared
to either its supervised counterpart or classical "application-agnostic" denoisers.
Moreover, we demonstrate the superior generalizability of U-SAID in three-folds, by
plugging its "universal" denoiser without fine-tuning: (1) denoising unseen types
of images; (2) denoising as pre-processing for segmenting unseen noisy
images; and (3) denoising for unseen high-level tasks.
Deblurring
”Deconvolution”
“data-driven
sharpening
of the image”
Image Deblurring for CT: Bone “leaks” to surrounding tissue
Weighted deblurring for bone? Maybe it is intuitively easier to sharpen the bone/brain interfaces? In other words, both your image
and your labels are probability distributions, with point estimates describing reality at some accuracy
Strictly speaking, you cannot
really assume that pixels / voxels
are independent measurements
of that “receptive field”. A real-world
PSF “smears” the signal
http://doi.org/10.1155/2015/450341
PET quantification: strategies for partial
volume correction V. Bettinardi, I. Castiglioni, E. De
Bernardi & M. C. Gilardi Clinical and Translational
Imaging volume 2, pages199–218(2014)
https://doi.org/10.1007/s40336-014-0066-y
https://doi.org/10.1109/NSSMIC.2011.6153678
“Partial-volume effect and a partial-volume correction for the
NanoPET/CT™ preclinical PET/CT scanner”
Diagram of partial volume effect. (A) Pixel
computed tomography (CT) value with thick
slice. (B) Pixel CT value with thin slice. The
partial volume effect can be defined as the loss
of apparent activity in small objects or regions
because of the limited resolution of the imaging
system
https://doi.org/10.3341/jkos.2016.57.11.1671
http://doi.org/10.2967/jnumed.106.035576
deconvolving with PSF
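As a concrete non-learned baseline for PSF deconvolution, the classical Richardson-Lucy iteration is easy to sketch; a minimal 1-D numpy illustration with a known, normalized PSF (not tied to any of the cited implementations):

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=30, eps=1e-12):
    # classic iterative deconvolution under a Poisson noise model:
    # estimate <- estimate * correlate(psf, observed / (psf * estimate))
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / (blurred + eps)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode='same')
    return estimate
```

For a point source smeared by a short PSF, the iteration progressively re-concentrates the signal, which is exactly the partial-volume "de-smearing" discussed above.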
CT Super-resolution with U-Net
Computed tomography super-
resolution using deep convolutional
neural network
Junyoung Park et al. (2018)
https://doi.org/10.1088/1361-6560/aacdd4
The objective of this study is to develop a
convolutional neural network (CNN) for
computed tomography (CT) image super-
resolution. The network learns an end-to-end
mapping between low (thick-slice thickness)
and high (thin-slice thickness) resolution
images using the modified U-Net. To verify the
proposed method, we train and test the CNN
using axially averaged data of existing thin-
slice CT images as input and their middle slice
as the label.
The extraction and expansion paths of the
network, with a large receptive field, effectively
captured the high-resolution features.
Although this work mainly focused on
resolution improvement, the Z-axis averaging
plus super-resolution strategy was also useful
for reducing noise.
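The training-pair construction described above (average adjacent thin slices to simulate a thick slice, label with the middle thin slice) is simple to reproduce; a numpy sketch with a name of our choosing:

```python
import numpy as np

def make_sr_pairs(volume, n_avg=3):
    """volume: (S, H, W) stack of thin CT slices.
    Input  = average of n_avg adjacent thin slices (simulated thick slice);
    label  = the middle thin slice, as in the training scheme above."""
    inputs, labels = [], []
    for s in range(volume.shape[0] - n_avg + 1):
        inputs.append(volume[s:s + n_avg].mean(axis=0))
        labels.append(volume[s + n_avg // 2])
    return np.stack(inputs), np.stack(labels)
```

This gives paired low/high axial-resolution data from existing thin-slice scans, with no extra acquisition.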
Not too many CT super-resolution networks
CT Super-resolution
GAN Constrained by
the Identical,
Residual, and Cycle
Learning
Ensemble(GAN-
CIRCLE)
Chenyu You, Guang Li, Yi Zhang,
Xiaoliu Zhang, Hongming Shan,
Shenghong Ju, Zhen Zhao, Zhuiyang
Zhang, Wenxiang Cong, Michael W.
Vannier, Punam K. Saha, Ge Wang
(Submitted on 10 Aug 2018)
https://arxiv.org/abs/1808.04256
In this paper, we present a semi-
supervised deep learning
approach to accurately recover
high-resolution (HR) CT images
from low-resolution (LR)
counterparts. Specifically, with the
generative adversarial network
(GAN) as the building block, we
enforce the cycle-consistency in
terms of the Wasserstein distance
to establish a nonlinear end-to-end
mapping from noisy LR input
images to denoised and deblurred
HR outputs. We also include the
joint constraints in the loss function
to facilitate structural preservation.
To make further progress, we may also undertake
efforts to add more constraints such as the sinogram
consistence and the low-dimensional manifold
constraint to decipher the relationship between
noise, blurry appearances of images and the ground
truth, and even develop an adaptive and/or task-
specific loss function.
Synthetic X-Ray
A Deep Learning-Based Scatter Correction of
Simulated X-ray Images
Heesin Lee and Joonwhoan Lee (2019)
https://doi.org/10.3390/electronics8090944
X-ray scattering significantly limits image quality.
Conventional strategies for scatter reduction based on
physical equipment or measurements inevitably
increase the dose to improve the image quality. In
addition, scatter reduction based on a computational
algorithm could take a large amount of time. We propose
a deep learning-based scatter correction method,
which adopts a convolutional neural network (CNN) for
restoration of degraded images.
Because it is hard to obtain real data from an X-ray
imaging system for training the network, Monte Carlo
(MC) simulation was performed to generate the
training data. For simulating X-ray images of a human
chest, a cone beam CT (CBCT) was designed and
modeled as an example. Then, pairs of simulated images,
which correspond to scattered and scatter-free images,
respectively, were obtained from the model with different
doses. The scatter components, calculated by taking the
differences of the pairs, were used as targets to train the
weight parameters of the CNN.
Image Deblurring for CT with GANs?
Three dimensional blind image deconvolution for
fluorescence microscopy using generative
adversarial networks
Soonam Lee, Shuo Han, Paul Salama, Kenneth W. Dunn, Edward J. Delp
Purdue University / Indiana University
(Submitted on 19 Apr 2019) https://arxiv.org/abs/1904.09974
Due to image blurring image deconvolution is often used for
studying biological structures in fluorescence microscopy.
Fluorescence microscopy image volumes inherently suffer from
intensity inhomogeneity, blur, and are corrupted by various types of
noise which exacerbate image quality at deeper tissue depth.
Therefore, quantitative analysis of fluorescence microscopy in deeper
tissue still remains a challenge. This paper presents a three
dimensional blind image deconvolution method for fluorescence
microscopy using 3-way spatially constrained cycle-
consistent adversarial networks (CycleGAN). The restored
volumes of the proposed deconvolution method and other well-known
deconvolution methods, denoising methods, and an inhomogeneity
correction method are visually and numerically evaluated.
Using the 3-Way SpCycleGAN, we can successfully restore the blurred
and noisy volume to good quality volume so that deeper volume can
be used for the biological research. Future work will include
developing a 3D segmentation technique using our proposed
deconvolution method as a preprocessing step.
Input vs. SpCycleGAN-restored volumes, shown in xy and xz views.
A lot of ideas to steal from (optical) microscopy
A new deep learning method
for image deblurring in optical
microscopic systems
Huangxuan Zhao et al. (2019)
http://doi.org/10.1002/jbio.201960147
In this paper, we present a deep-
learning-based deblurring method
that is fast and applicable to optical
microscopic imaging systems. We
tested the robustness of proposed
deblurring method on the publicly
available data, simulated data and
experimental data (including 2D
optical microscopic data and 3D
photoacoustic microscopic data),
which all showed much improved
deblurred results compared to
deconvolution. We compared our
results against several existing
deconvolution methods.
In addition, our method could also
replace traditional deconvolution
algorithms and become an
algorithm of choice in various
biomedical imaging systems.
CycleGANs for deblurring can be done for unpaired data
CycleGAN with a Blur Kernel for
Deconvolution Microscopy:
Optimal Transport Geometry
Sungjun Lim et al. (2019)
https://arxiv.org/abs/1908.09414
In this paper, we present a novel
unsupervised cycle-consistent
generative adversarial network
(cycleGAN) with a linear blur
kernel, which can be used for both
blind- and non-blind image
deconvolution. In contrast to the
conventional cycleGAN approaches
that require two generators, the
proposed cycleGAN approach needs
only a single generator, which
significantly improves the robustness of
network training. We show that the
proposed architecture is indeed a dual
formulation of an optimal
transport problem that uses a
special form of penalized least squares
as transport cost. Experimental results
using simulated and real experimental
data confirm the efficacy of the
algorithm.
Inspiration from Natural Images
LSD2 - Joint Denoising and Deblurring of
Short and Long Exposure Images with
Convolutional Neural Networks
Janne Mustaniemi, Juho Kannala, Jiri Matas, Simo Särkkä, Janne Heikkilä
(23 Nov 2018)
https://arxiv.org/abs/1811.09485
The paper addresses the problem of acquiring
high-quality photographs with handheld
smartphone cameras in low-light imaging
conditions. We propose an approach based on
capturing pairs of short and long exposure
images in rapid succession and fusing
them into a single high-quality photograph. Unlike
existing methods, we take advantage of both
images simultaneously and perform a joint
denoising and deblurring using a
convolutional neural network. The network is
trained using a combination of real and
simulated data. To that end, we introduce a
novel approach for generating realistic short-long
exposure image pairs. The evaluation shows that
the method produces good images in extremely
challenging conditions and outperforms existing
denoising and deblurring methods. Furthermore,
it enables exposure fusion even in the
presence of motion blur.
Deblurring Plug’n’Play framework for existing networks
Deep Plug-and-Play Super-Resolution
for Arbitrary Blur Kernels
Kai Zhang, Wangmeng Zuo, Lei Zhang (Submitted on 29 Mar 2019)
https://arxiv.org/abs/1903.12529
https://github.com/cszn/DPSR
PyTorch
While deep neural networks (DNN) based single image
super-resolution (SISR) methods are rapidly gaining
popularity, they are mainly designed for the widely-
used bicubic degradation, and there still remains the
fundamental challenge for them to super-resolve low-
resolution (LR) image with arbitrary blur kernels. In the
meanwhile, plug-and-play image restoration has been
recognized with high flexibility due to its modular structure for
easy plug-in of denoiser priors. In this paper, we propose a
principled formulation and framework (DPSR) by extending
bicubic degradation based deep SISR with the help of plug-
and-play framework to handle LR images with arbitrary blur
kernels. Specifically, we design a new SISR degradation
model so as to take advantage of existing blind deblurring
methods for blur kernel estimation. To optimize the new
degradation induced energy function, we then derive a plug-
and-play algorithm via variable splitting technique, which
allows us to plug any super-resolver prior rather than the
denoiser prior as a modular part.
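The plug-and-play splitting can be illustrated in 1-D numpy: alternate a closed-form Fourier data step with a plugged-in denoiser. Here a trivial smoothing filter stands in for the learned prior; this sketches the framework under a circular-convolution assumption, not the released DPSR code:

```python
import numpy as np

def toy_denoiser(z):
    # placeholder for any off-the-shelf denoiser prior (e.g. a CNN)
    k = np.array([0.25, 0.5, 0.25])
    return np.convolve(z, k, mode='same')

def pnp_deblur(y, kernel, n_iter=10, rho=0.1):
    # half-quadratic splitting: the x-step fits the (circular) blur model
    # in closed form via FFT, the z-step applies the plugged-in denoiser
    n = y.size
    K = np.fft.fft(kernel, n)
    Y = np.fft.fft(y)
    z = y.copy()
    for _ in range(n_iter):
        # x-step: argmin ||k*x - y||^2 + rho*||x - z||^2
        X = (np.conj(K) * Y + rho * np.fft.fft(z)) / (np.abs(K) ** 2 + rho)
        z = toy_denoiser(np.real(np.fft.ifft(X)))
    return z
```

The modularity is the point: swapping `toy_denoiser` for a stronger prior changes the restoration quality without touching the data-fit step.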
Edge-Aware
Smoothing
Insights
Outside CT
Could be used in a multi-task
setting for segmentation, but
maybe not the most useful
with deep segmentation
networks. Should help some
simple old-school algorithms
Image Smoothing while keeping edges → Edge-Aware Smoothing
In theory, image restoration
tries to restore the “original
image” under the degradation.
In contrast, edge-preserving
smoothing can be seen as
simplifying enhancement
technique that made “old
school” algorithms perform
better.
e.g. Liis Lindvere et al. (2013): ”Prior to
segmentation, the data were subjected to edge-
preserving 3D anisotropic diffusion filtering
(Perona and Malik, 1990)”
Popular algorithms include
anisotropic diffusion, bilateral
and trilateral filters, the guided filter,
and L0 gradient minimization.
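For reference, a direct (slow, but explicit) numpy implementation of the bilateral filter, the prototypical edge-aware smoother; parameter names are ours:

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=3):
    # each neighbour is weighted by spatial closeness AND intensity
    # similarity, so smoothing stops at strong edges
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    pad = np.pad(img, radius, mode='reflect')
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-(patch - img[i, j]) ** 2 / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out
```

With a small `sigma_r`, pixels across a strong edge get near-zero weight, so a step edge survives while low-amplitude texture is flattened.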
Quick-and-dirty Matlab test with three
methods on non-denoised input.
The bilateral filter here does not
actually preserve edges, and the guide
(the input image itself) makes the
smoothing take place in the
background.
Image smoothing via L0
gradient minimization
https://doi.org/10.1145/2024156.2024208
https://youtu.be/jliea54nNFM?t=119
Deep Texture and Structure Aware
Filtering Network for Image Smoothing
Kaiyue Lu, Shaodi You, Nick Barnes; The
European Conference on Computer Vision
(ECCV), 2018, pp. 217-233
http://openaccess.thecvf.com/content_ECC
V_2018/html/Kaiyue_Lu_Deep_Texture_and_
ECCV_2018_paper.html
Image Smoothing texture bias possible for vasculature?
Not likely?
ImageNet-trained CNNs are biased towards
texture; increasing shape bias improves
accuracy and robustness
Robert Geirhos, Patricia Rubisch, Claudio Michaelis, Matthias Bethge, Felix A.
Wichmann, Wieland Brendel (Submitted on 29 Nov 2018)
https://arxiv.org/abs/1811.12231
Some recent studies suggest a more important role of image
textures. We here put these conflicting hypotheses to a
quantitative test by evaluating CNNs and human observers on
images with a texture-shape cue conflict. We show that
ImageNet-trained CNNs are strongly biased towards recognising
textures rather than shapes, which is in stark contrast to human
behavioural evidence and reveals fundamentally different
classification strategies.
INPUT
DENOISED &
EDGE-AWARE SMOOTHING
This should be easier to segment given
that no significant data was thrown
away (hence the end-to-end constraint of
the image restoration block), and as a
side effect the user could obtain a denoised
version for visualization
RESIDUAL NOISE &
“TEXTURE” (misc artifacts)
Image Smoothing texture bias possible for vasculature?
or could there be?
Is Texture Predictive for Age and
Sex in Brain MRI?
Nick Pawlowski, Ben Glocker
Biomedical Image Analysis Group, Imperial College London, UK
15 Apr 2019 (modified: 11 Jun 2019) MIDL 2019 Conference
https://arxiv.org/abs/1907.10961
Deep learning builds the foundation for many medical
image analysis tasks where neural networks are often
designed to have a large receptive field to incorporate
long spatial dependencies. Recent work has shown that
large receptive fields are not always necessary for
computer vision tasks on natural images. Recently
introduced BagNets (Brendel and Bethge, 2019) have
shown that on natural images, neural networks can perform
complex classification tasks by only interpreting
texture information rather than global structure.
BagNets interpret a neural network as a bag-of-features
classifier that is composed of a localised feature
extractor and a classifier that acts on the average bag-
encoding.
We explore whether this translates to certain medical
imaging tasks such as age and sex prediction from T1-
weighted brain MRI scans.
We have generalised the concept of BagNets to the
setting of 3D images and general regression tasks. We
have shown that a BagNet with a receptive field of
(9 mm)³ yields surprisingly accurate predictions of age
and sex from T1-weighted MRI scans. However, we find
that localised predictions of age and sex do not
yield easily interpretable insights into the
workings of the neural network which will be subject of
future work. Further, we believe that more accurate
localised predictions could lead to advanced
clinical insights similar to (Becker et al., 2018;
Cole et al., 2018).
Image Smoothing with Image Restoration?
In theory, an additional “deep intermediate target” could help the final segmentation result, as
you want your network “to pop out” the vasculature, without the texture, from the
background.
In practice, think about how to either obtain the intermediate target in such a way that you do not throw any details
away (see Xu et al. 2015), or employ a Noise2Noise type of network for edge-aware smoothing as well. And check the
use of bilateral kernels in deep learning (see e.g.
Barron and Poole 2015; Jampani et al. 2016; Gharbi et al. 2017; Su et al. 2019).
The proposal of Su et al. 2019 seems like a good starting point if you are into making this happen?
RAW After IMAGE RESTORATION Edge-Aware IMAGE SMOOTHING
Unsupervised Image Smoothing for “ground truth”?
Image smoothing via unsupervised learning
Qingnan Fan, Jiaolong Yang, David Wipf, Baoquan Chen, Xin Tong
Shandong University, Beijing Film Academy; Microsoft Research Asia; Peking University
(Submitted on 7 Nov 2018) https://arxiv.org/abs/1811.02804 |
https://github.com/fqnchina/ImageSmoothing
In this paper, we present a unified unsupervised (label-free) learning
framework that facilitates generating flexible and high-quality smoothing
effects by directly learning from data using deep convolutional neural
networks (CNNs). The heart of the design is the training signal as a novel
energy function that includes an edge-preserving regularizer which
helps maintain important yet potentially vulnerable image structures,
and a spatially-adaptive Lp flattening criterion which imposes
different forms of regularization onto different image regions for better
smoothing quality.
We implement a diverse set of image smoothing solutions employing
the unified framework targeting various applications such as, image
abstraction, pencil sketching, detail enhancement, texture removal and
content-aware image manipulation, and obtain results comparable with or
better than previous methods.
We have also shown that training a deep neural network on a large corpus
of raw images without ground truth labels can adequately solve the
underlying minimization problem and generate impressive results.
Moreover, the end-to-end mapping from a single input image to its
corresponding smoothed counterpart by the neural network can be
computed efficiently on both GPU and CPU, and the experiments have
shown that our method runs orders of magnitude faster than traditional
methods. We foresee a wide range of applications that can benefit from our
new pipeline.
Elimination of low-amplitude details while maintaining high-contrast edges using our method and representative traditional methods L0
and SGF. L0 regularization has a strong flattening effect. However, the side effect is that some spurious edges arise in local regions with
smooth gradations, such as those on the cloud. SGF is dedicated to elimination of fine-scale high-contrast details while preserving large-
scale salient structures. However, semantically-meaningful information such as the architecture and flagpole can be over-smoothed. In
contrast, our result exhibits a more appropriate, targeted balance between color flattening and salient edge preservation.
We also demonstrate the binary edge map B detected by our heuristic detection
method, which shows consistent image structure with our style image. Note that
binary edge maps are only used in the objective function for training; they are not
used in the test stage and are presented here only for comparison purpose
‘CT
Normalization’
Across different
scanners
More papers published on MRI normalization but some also for CT
Normalization of multicenter
CT radiomics by a generative
adversarial network method
Yajun Li, Guoqiang Han, Xiaomei Wu, Zhenhui Li,
Ke Zhao, Zhiping Zhang, Zaiyi Liu and Changhong
Liang Physics in Medicine & Biology (25 March 2020)
https://doi.org/10.1088/1361-6560/ab8319
To reduce the variability of radiomics
features caused by computed tomography (CT)
imaging protocols through using a generative
adversarial network (GAN) method. Material and
Methods: In this study, we defined a set of images
acquired with a certain imaging protocol as a
domain, and a total of 4 domains (A, B, C, and T
[target]) from 3 different scanners were
included.
Finally, to investigate whether our proposed
method could facilitate multicenter radiomics
analysis, we built the lasso classifier to
distinguish short-term from long-term survivors
based on a certain group
Our proposed GAN-based normalization method
could reduce the variability of radiomics features
caused by different CT imaging protocols and
facilitate multicenter radiomics analysis.
CT
Segmentation
Traditional
Background
Probabilistic Segmentation from 2005
Unified segmentation
John Ashburner and Karl J. Friston
NeuroImage Volume 26, Issue 3, 1 July 2005,
Pages 839-851
https://doi.org/10.1016/j.neuroimage.2005.02.018
A probabilistic framework is presented that enables
image registration, tissue classification, and
bias correction to be combined within the same
generative model. A derivation of a log-likelihood
objective function for the unified model is provided.
The model is based on a mixture of Gaussians
and is extended to incorporate a smooth intensity
variation and nonlinear registration with tissue
probability maps. A strategy for optimising the model
parameters is described, along with the requisite
partial derivatives of the objective function.
The hierarchical modelling scheme could be
extended in order to generate tissue probability
maps and other priors using data from many subjects.
This would involve a very large model, whereby many
images of different subjects are simultaneously
processed within the same hierarchical framework.
Strategies for creating average (in both shape and
intensity) brain atlases are currently being devised
(Ashburner et al., 2000; Avants and Gee, 2004; Joshi et al., 2004).
Such approaches could be refined in order to produce
average-shaped tissue probability maps and other
data for use as priors.
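The mixture-of-Gaussians core of unified segmentation (stripped of the bias field, registration, and tissue probability maps) reduces to a small EM loop over voxel intensities, where the responsibilities play the role of soft tissue memberships. A two-class numpy sketch:

```python
import numpy as np

def fit_gmm(x, n_iter=50):
    # two-class 1-D Gaussian mixture fitted by EM; responsibilities r
    # are the soft tissue-class memberships of each voxel intensity
    mu = np.array([x.min(), x.max()], dtype=float)
    var = np.full(2, x.var() + 1e-6)
    pi = np.array([0.5, 0.5])
    for _ in range(n_iter):
        # E-step: posterior probability of each class given each voxel
        lik = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) \
            / np.sqrt(2 * np.pi * var)
        r = lik / lik.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means and variances
        nk = r.sum(axis=0)
        pi = nk / nk.sum()
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return mu, var, pi, r
```

The full unified model replaces the flat mixing weights `pi` with spatially varying tissue probability maps and folds in bias correction and warping, all optimised under one objective.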
The tissue probability maps for grey matter, white matter,
CSF, and “other”.
Results from applying the method to the BrainWeb data.
The first column shows the tissue probability maps for grey
and white matter. The first row of columns two, three, and
four show the 100% RF BrainWeb T1, T2, and PD
images after they are warped to match the tissue
probability maps (by inverting the spatial transform). Below
the warped BrainWeb images are the corresponding
segmented grey and white matter.
This figure shows the
underlying generative model
for the BrainWeb simulated T1,
T2, and PD images with 100%
intensity nonuniformity. The
BrainWeb images are shown
on the left. The right hand
column shows data simulated
using the estimated generative
model parameters for the
corresponding BrainWeb
images.
Our current implementation uses a low-dimensional
approach, which parameterises the deformations by a
linear combination of about a thousand cosine
transform bases (Ashburner and Friston, 1999). This is
not an especially precise way of encoding
deformations, but it can model the variability of overall
brain shape. Evaluations have shown that this simple
model can achieve a registration accuracy
comparable to other fully automated methods with
many more parameters (Hellier et al., 2001;
Hellier et al., 2002).
Follow-up with Ashburner et al. (2018)
#1
Generative diffeomorphic modelling of
large MRI data sets for probabilistic
template construction
Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner
NeuroImage Volume 166, 1 February 2018, Pages 117-134
https://doi.org/10.1016/j.neuroimage.2017.10.060
One of the main challenges, which is encountered in all neuroimaging studies,
originates from the difficulty of mapping between different anatomical
shapes. In particular, a fundamental problem arises from having to ensure that
this mapping operation preserves topological properties and that it
provides, not only anatomical, but also functional overlap between distinct
instances of the same anatomical object (Brett et al., 2002).
This explains the rapid development of the discipline known as
computational anatomy (Grenander and Miller, 1998), which aims to provide
mathematically sound tools and algorithmic solutions to model high-
dimensional anatomical shapes, with the ultimate goal of encoding, or
accounting for, their variability.
In this paper we propose a general modelling scheme and a training algorithm,
which, given a large cross-sectional data set of MR scans, can learn a set of
average-shaped tissue probability maps, either in an unsupervised or
semi-supervised manner. This is achieved by building a hierarchical
generative model of MR data, where image intensities are captured using
multivariate Gaussian mixture models, after diffeomorphic warping
(Ashburner and Friston, 2011, Joshi et al., 2004) of a set of unknown probabilistic
templates, which act as anatomical priors. In addition, intensity
inhomogeneity artefacts are explicitly represented in our model, meaning that
the input data does not need to be bias corrected prior to model fitting.
●We present a generative modelling framework to process large MRI
data sets.
●The proposed framework can serve to learn average-shaped tissue
probability maps and empirical intensity priors.
●We explore semi-supervised learning and variational inference
schemes.
●The method is validated against state-of-the-art tools using publicly
available data.
To the best of our knowledge, the particular mathematical
formulation that we adopt to combine such modelling
techniques has never been adopted before. The
resulting approach enables processing simultaneously a
large number of MR scans in a groupwise fashion and
particularly it allows the tasks of image segmentation,
image registration, bias correction and atlas construction
to be solved by optimising a single objective function,
with one iterative algorithm. This is in contrast to a
commonly adopted approach to mathematical
modelling, which involves a pipeline of multiple
model fitting strategies that solve sub-problems
sequentially, without taking into account their circular
dependencies.
Follow-up with Ashburner et al. (2018)
#2
Generative diffeomorphic modelling of large MRI
data sets for probabilistic template construction
Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner
NeuroImage Volume 166, 1 February 2018, Pages 117-134
https://doi.org/10.1016/j.neuroimage.2017.10.060
OASIS data set. The first data set consists of thirty five T1-weighted MR
scans from the OASIS (Open Access Series of Imaging Studies) database (
Marcus et al., 2007). The data is freely available from the web site
http://www.oasis-brains.org, where details on the population demographics
and acquisition protocols are also reported. Additionally, the selected thirty five
subjects are the same ones that were used within the 2012 MICCAI Multi-Atlas
Labeling Challenge (Landman and Warfield, 2012).
Balgrist data set. The second data set consists of brain and cervical cord
scans of twenty healthy adults, acquired at University Hospital Balgrist with a
3T scanner (Siemens Magnetom Verio). Magnetisation-prepared rapid
acquisition gradient echo (MPRAGE) sequences, at 1 mm isotropic resolution,
were used to obtain T1-weighted data, while PD-weighted images of the same
subjects were acquired with a multi-echo 3D fast low-angle shot (FLASH)
sequence, within a whole-brain multi-parameter mapping protocol (
Weiskopf et al., 2013, Helms et al., 2008).
IXI data set. The third and last data set comprises twenty five T1-, T2- and
PD-weighted scans of healthy adults from the freely available IXI brain
database, which were acquired at Guy's Hospital, in London, on a 1.5T system
(Philips Medical Systems Gyroscan Intera). Additional information regarding
the demographics of the population, as well as the acquisition protocols, can
be found at http://brain-development.org/ixi-dataset.
Tissue probability maps obtained by applying the presented groupwise generative model to
a multispectral data set comprising head and neck scans of eighty healthy adults, from three
different databases.
Follow-up with Ashburner et al. (2018)
#3
Generative diffeomorphic modelling of large MRI
data sets for probabilistic template construction
Claudia Blaiotta, Patrick Freund, M. Jorge Cardoso, John Ashburner
NeuroImage Volume 166, 1 February 2018, Pages 117-134
https://doi.org/10.1016/j.neuroimage.2017.10.060
The accuracy of the algorithm presented here is compared to that achieved by the
groupwise image registration method described in Avants et al. (2010), whose
implementation is publicly available, as part of the Advanced Normalisation Tools
(ANTs) package, through the web site http://stnava.github.io/ANTs/. Indeed, the
symmetric diffeomorphic registration framework implemented in ANTs has
established itself as the state-of-the-art of medical image nonlinear
spatial normalisation (Klein et al., 2009).
Brain segmentation accuracy of
the presented method in
comparison to SPM12 image
segmentation algorithm. Boxplots
indicate the distributions of Dice
score coefficients, with overlaid
scatter plots of the estimated
scores. Red stars denote outliers.
Modelling unseen data
Further validation experiments were performed to quantify the
accuracy of the framework described in this paper to model unseen
data, that is to say data that was not included in the atlas generation
process.
In particular, we evaluated registration accuracy using data from the
Internet Brain Segmentation Repository (IBSR), which is provided by
the Centre for Morphometric Analysis at Massachusetts General
Hospital (http://www.cma.mgh.harvard.edu/ibsr/). Experiments to
assess bias correction and segmentation accuracy were instead
performed on synthetic T1-weighted brain MR scans from the
Brainweb database (http://brainweb.bic.mni.mcgill.ca/), which were
simulated using a healthy anatomical model under different noise and
bias conditions.
Dice scores between the
estimated and ground
truth segmentations for
brain white matter and
brain gray matter, under
different noise and bias
conditions, for synthetic
T1-weighted data.
Ctseg as head CT pipeline from Duke University
A Method to Estimate Brain Volume from Head CT
Images and Application to Detect Brain Atrophy in
Alzheimer Disease
V. Adduru, S.A. Baum, C. Zhang, M. Helguera, R. Zand, M.
Lichtenstein, C.J. Griessenauer and A.M. Michael
American Journal of Neuroradiology February 2020, 41
(2) 224-230; DOI: https://doi.org/10.3174/ajnr.A6402
https://github.com/NuroAI/CTSeg
We present an automated head CT segmentation
method (CTseg) to estimate total brain volume and total
intracranial volume. CTseg adapts a widely used brain MR
imaging segmentation method from the Statistical
Parametric Mapping toolbox using a CT-based
template for initial registration. CTseg was tested and
validated using head CT images from a clinical archive.
In current clinical practice, brain atrophy is assessed by
inaccurate and subjective “eyeballing” of CT images.
Manual segmentation of head CT images is prohibitively
arduous and time-consuming. CTseg can potentially help
clinicians to automatically measure total brain volume and
detect and track atrophy in neurodegenerative diseases.
In addition, CTseg can be applied to large clinical archives
for a variety of research studies.
CTSeg pipeline for intracranial space and brain parenchyma segmentation from head CT images.
Within parentheses is the 3D coordinate space of the image. MNI indicates Montreal Neurological
Institute.
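As a rough illustration of what such a pipeline estimates (this is not CTSeg's actual method, which registers to a CT template in MNI space and uses SPM tissue priors), a naive HU-threshold sketch of total brain volume:

```python
import numpy as np

def estimate_brain_volume_mm3(hu, voxel_dims_mm, lo=0.0, hi=80.0):
    """Rough brain-parenchyma volume from a head-CT HU array.

    Simplistic stand-in for CTSeg: voxels in an assumed soft-tissue
    HU range [lo, hi] are counted as brain. `hu` is a 3D array of
    Hounsfield units; `voxel_dims_mm` are the voxel spacings in mm.
    """
    mask = (hu >= lo) & (hu <= hi)
    voxel_vol = float(np.prod(voxel_dims_mm))  # mm^3 per voxel
    return mask.sum() * voxel_vol

# Toy volume: 10x10x10 voxels of 1 mm^3, half "brain" (HU 30), half air (HU -1000)
vol = np.full((10, 10, 10), -1000.0)
vol[:5] = 30.0
print(estimate_brain_volume_mm3(vol, (1.0, 1.0, 1.0)))  # 500.0
```

Pure thresholding cannot separate brain from other soft tissue, which is why CTSeg's registration-plus-priors approach is needed in practice.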
CT
Segmentation
and Detection
Deep Learning
A badly written review, but it probably lists relevant papers
Automatic Neuroimage Processing and
Analysis in Stroke – A Systematic Review
Roger M. Sarmento et al. (2019)
IEEE Reviews in Biomedical Engineering (23 August 2019)
https://doi.org/10.1109/RBME.2019.2934500
There are some points that require greater attention, such as low
sensitivity, optimisation of the algorithms, reduction of false positives,
and improvement of identification and segmentation for lesions of
different sizes and shapes. There is also a need to improve the
classification of different stroke types and subtypes.
Another important challenge to overcome is the lack of studies aimed at
identifying and classifying stroke in its subtypes: intracerebral
hemorrhage, subarachnoid hemorrhage, and brain ischemia due to
thrombosis, embolism, or systemic hypoperfusion. There is also no
record of work focusing on the detection and segmentation of the
penumbra zone, a region that presents a high probability of recovery if
identified and medicated quickly and correctly.
Moreover, transient ischemic attack (TIA) does not receive the focus it
merits from the researchers. Although it is a transient and reversible
alteration it can be a warning sign of an imminent ischemic stroke. In
many cases doctors are not able to distinguish a stroke from a TIA
before the symptoms appear. Neuroimaging such as CT and MRI is not
designed for this type of accident, but a type of MRI called diffusion-
weighted imaging (DWI) can show areas of brain tissue that are not
working and thus help to diagnose TIA. A potential research direction would
be the location of the TIA, the affected area and the severity of the
accident.
DeepSymNet
Combining symmetric and standard deep convolutional
representations for detecting brain hemorrhage
Arko Barman; Victor Lopez-Rivera; Songmi Lee; Farhaan S. Vahidy; James Z. Fan;
Sean I. Savitz; Sunil A. Sheth; Luca Giancardo (16 March 2020)
https://doi.org/10.1117/12.2549384
https://doi.org/10.3389/fnins.2019.01053
https://www.uth.edu/news/story.htm?id=5b8f2ad1-e3dd-4ad0-aca3-c845d7364953
We compare and contrast symmetry-aware, symmetry-naive feature
representations and their combination for the detection of Brain
hemorrhage (BH) using CT imaging. One of the proposed
architectures, e-DeepSymNet, achieves AUC 0.99 [0.97–1.00] for BH
detection. An analysis of the activation values shows that the
symmetry-aware and symmetry-naive representations offer
complementary information, with the symmetry-aware representation
contributing 20% towards the final predictions.
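The intuition behind symmetry-aware features can be sketched as follows (illustrative only; DeepSymNet learns such representations end-to-end rather than using a hand-crafted difference map):

```python
import numpy as np

def symmetry_difference(slice2d):
    """Simple hand-crafted symmetry-aware feature: the absolute
    difference between an axial slice and its left-right mirror.
    Hemorrhage is usually unilateral, so asymmetric regions light up
    while symmetric anatomy cancels out."""
    mirrored = slice2d[:, ::-1]  # flip across the (assumed) midline
    return np.abs(slice2d - mirrored)

img = np.zeros((4, 4))
img[1, 0] = 1.0          # unilateral "lesion"
diff = symmetry_difference(img)
print(diff[1, 0], diff[1, 3])  # 1.0 1.0 (asymmetry appears on both sides of the mirror)
print(diff.sum())              # 2.0
```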
Qure validation dataset available from The Lancet paper
Deep learning algorithms for detection of critical
findings in head CT scans: a retrospective study
Sasank Chilamkurthy, Rohit Ghosh, Swetha Tanamala, Mustafa Biviji,
Norbert G Campeau, Vasantha Kumar Venugopal, Vidur Mahajan, Pooja Rao,
Prashant Warier
The Lancet
Volume 392, Issue 10162, 1–7 December 2018, Pages 2388-2396
https://doi.org/10.1016/S0140-6736(18)31645-3
We retrospectively collected a dataset containing 313 318
head CT scans together with their clinical reports from
around 20 centres in India between Jan 1, 2011, and June 1,
2017.
We describe the development and validation of fully
automated deep learning algorithms that are trained to
detect abnormalities requiring urgent attention on non-
contrast head CT scans. The trained algorithms detect five
types of intracranial haemorrhage (namely,
intraparenchymal, intraventricular, subdural, extradural, and
subarachnoid) and calvarial (cranial vault) fractures. The
algorithms also detect mass effect and midline shift, both
used as indicators of severity of the brain injury.
The algorithms produced good results for normal scans
without bleed, scans with medium to large sized
intraparenchymal and extra-axial haemorrhages,
haemorrhages with fractures, and in predicting midline shift.
There was room for improvement for small-sized
intraparenchymal, intraventricular haemorrhages
and haemorrhages close to the skull base. In this study,
we did not separate chronic and acute haemorrhages. This
approach resulted in occasional prediction of scans with
infarcts and prominent cerebrospinal fluid spaces as
intracranial haemorrhages. However, the false positive rates of
the algorithms should not impede its usability as a triaging tool.
Deep Learning for ICH Segmentation
Precise diagnosis of intracranial
hemorrhage and subtypes using a three-
dimensional joint convolutional and
recurrent neural network
Hai Ye, Feng Gao, Youbing Yin, Danfeng Guo, Pengfei Zhao, Yi Lu, Xin Wang,
Junjie Bai, Kunlin Cao, Qi Song, Heye Zhang, Wei Chen, Xuejun Guo, Jun Xia
European Radiology (2019) 29:6191–6201
https://doi.org/10.1007/s00330-019-06163-2
It took our algorithm less than 30 s on average to process a 3D CT scan. For the
two-type classification task (predicting bleeding or not), our algorithm achieved
excellent values (≥ 0.98) across all reporting metrics on the subject level.
The proposed method was able to accurately detect ICH and its subtypes with
fast speed, suggesting its potential for assisting radiologists and physicians in
their clinical diagnosis workflow.
Deep Learning for ICH Segmentation: Review of Studies
Intracranial Hemorrhage Segmentation Using Deep Convolutional Model (18 Oct 2019) https://arxiv.org/pdf/1910.08643.pdf
Deep Learning for ICH Segmentation
Intracranial Hemorrhage Segmentation
Using Deep Convolutional Model
Murtadha D. Hssayeni, Muayad S. Croock, Aymen Al-Ani, Hassan Falah Al-
khafaji, Zakaria A. Yahya, and Behnaz Ghoraani (18 Oct 2019)
https://arxiv.org/pdf/1910.08643.pdf
https://alpha.physionet.org/content/ct-ich/1.0.0/
We developed a deep FCN, called U-Net, to segment the ICH regions from the
CT scans in a fully automated manner. The method achieved a Dice coefficient
of 0.31 for the ICH segmentation based on 5-fold cross-validation.
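For reference, the reported Dice coefficient is the standard overlap measure between predicted and ground-truth masks:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    """Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|).
    `eps` avoids division by zero when both masks are empty."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0, 0]])
b = np.array([[1, 0, 1, 0]])
print(round(dice(a, b), 2))  # 0.5  (1 overlapping pixel, 2 + 2 total)
```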
Data Description
The dataset is released in JPG (and soon NIfTI) format at PhysioNet
(http://alpha.physionet.org/content/ct-ich/1.0.0/).
A dataset of 82 CT scans was collected, including 36 scans for patients
diagnosed with intracranial hemorrhage with the following types:
Intraventricular, Intraparenchymal, Subarachnoid, Epidural and Subdural. Each
CT scan for each patient includes about 30 slices with 5 mm slice-thickness.
The mean and std of patients' age were 27.8 and 19.5, respectively. 46 of the
patients were males and 36 of them were females. Each slice of the non-
contrast CT scans was reviewed by two radiologists, who recorded the
hemorrhage types if a hemorrhage occurred or if a fracture occurred. The
radiologists also delineated the ICH regions in each slice; there was a
consensus between the radiologists. The radiologists did not have access to
the clinical history of the patients, and used a down-sampled version of the CT scan.
During data collection, syngo by Siemens Medical Solutions was first used to
read the CT DICOM files and save two videos (avi format) using brain and bone
windows, respectively. Second, a custom tool was implemented in Matlab and
used to read the avi files, record the radiologist annotations, delineate
each hemorrhage region and save it as a white region on a black 650×650 image
(jpg format). Gray-scale 650×650 images (jpg format) of each CT slice were also
saved for both windows (brain and bone).
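The brain/bone window conversion described above can be sketched as below; the level/width values are typical radiology defaults, assumed here rather than taken from the syngo software used in the study:

```python
import numpy as np

def window(hu, level, width):
    """Map HU values to [0, 255] with a linear window (level/width),
    clipping outside the window, as a radiology workstation does."""
    lo, hi = level - width / 2.0, level + width / 2.0
    out = np.clip(hu, lo, hi)
    return ((out - lo) / (hi - lo) * 255.0).astype(np.uint8)

hu = np.array([-1000.0, 0.0, 40.0, 80.0, 1000.0])
print(window(hu, level=40, width=80))     # brain window: [  0   0 127 255 255]
print(window(hu, level=480, width=2500))  # a typical bone window
```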
Kaggle Challenges eventually for all types of data
RSNA Intracranial Hemorrhage Detection
Identify acute intracranial hemorrhage and its subtypes
$25,000 Prize Money. Radiological Society of North America
https://www.kaggle.com/c/rsna-intracranial-hemorrhage-detection/data
petteriTeikari/RSNA_kaggle_CT_wrangle
https://www.kaggle.com/anjum48/reconstructing-3d-volumes-from-metadata
Kaggle Challenge how the data was annotated
Construction of a Machine Learning Dataset through
Collaboration: The RSNA 2019 Brain CT Hemorrhage
Challenge
Adam E. Flanders , Luciano M. Prevedello, George Shih, Safwan S. Halabi, Jayashree Kalpathy-Cramer, Robyn Ball, John T. Mongan, Anouk Stein,
Felipe C. Kitamura, Matthew P. Lungren, Gagandeep Choudhary, Lesley Cala, Luiz Coelho, Monique Mogensen, Fanny Morón, Elka Miller, Ichiro
Ikuta, Vahe Zohrabian, Olivia McDonnell, Christie Lincoln, Lubdha Shah, David Joyner, Amit Agarwal, Ryan K. Lee, Jaya Nath, For the RSNA-ASNR
2019 Brain Hemorrhage CT Annotators
https://doi.org/10.1148/ryai.2020190211
The amount of volunteer labor required to compile, curate, and annotate a large
complex dataset of this type was substantial. A work commitment from our volunteer
force was set at no more than 10 hours of aggregate effort per annotator, recognizing
that there would be a wide range in performance per individual. An examination could be
accurately reviewed and labeled in a minute or less. On the basis of these estimates, it was
projected that the 60 annotators could potentially evaluate and effectively label 36,000
examinations at a rate of one per minute for a maximum of 10 hours of effort. This
provided a buffer of 11,000 potential annotations.
Even though the use case was limited to hemorrhage labels alone, it took thousands of
radiologist-hours to produce a final working dataset in the stipulated time period. To optimally
mitigate against misclassification in the training data, the training, validation, and test datasets
should have employed multiple reviewers. The size of the final dataset and the narrow time
frame to deliver it prohibited multiple evaluations for all of the available examinations. The auditing
mechanism employed for training new annotators showed that the most common error
produced was under-labeling of data, namely tagging an entire examination with a single
image label. Raising awareness of this error early in the process before the annotators began
working on the actual data helped to reduce the frequency of this error and improve consistency of
the single evaluations.
As this is a public dataset, it is available for further enhancement and use including the
possibility of adding multiple readers for all studies, performance of detailed segmentations,
performance of federated learning on the separate datasets, and evaluation of the
examinations for disease entities beyond hemorrhage.
Kaggle Challenge Competition Entry example
Intracranial Hemorrhage Classification using CNN
Hyun Joo Lee, Department of Mechanical Engineering,
Stanford University (CS230 Fall 2019)
http://cs230.stanford.edu/projects_fall_2019/reports/26248009.pdf
In this study, multi-class classification is conducted
to diagnose intracranial hemorrhages and its five
subtypes: intraparenchymal, intraventricular,
subarachnoid, subdural, epidural. Transfer
learning is applied based on ResNet-50 and
linear windowing is compared with sigmoid
windowing in its performance.
Due to the high imbalance in the number of
examples available, an undersampling approach
was taken to provide a better balanced training
dataset. As a result, the combination of sigmoid
windowing and combining three windows of
interest showed the highest F1 score.
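A sketch of the two windowing variants compared in the report; the sigmoid slope factor of 4 is an assumed choice, not taken from the project write-up:

```python
import numpy as np

def linear_window(hu, level=40, width=80):
    """Linear windowing: rescale [level - width/2, level + width/2]
    to [0, 1] and clip everything outside."""
    lo = level - width / 2.0
    return np.clip((hu - lo) / width, 0.0, 1.0)

def sigmoid_window(hu, level=40, width=80):
    """Sigmoid windowing: a smooth alternative with no hard clipping,
    so intensities just outside the window retain some contrast
    instead of saturating (slope factor 4 is an assumption)."""
    return 1.0 / (1.0 + np.exp(-4.0 * (hu - level) / width))

hu = np.array([-60.0, 40.0, 140.0])
print(linear_window(hu))   # [0.  0.5 1. ]
print(sigmoid_window(hu))  # ~[0.007 0.5 0.993]
```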
Small datasets get detailed annotations
Expert-level detection of acute intracranial
hemorrhage on head computed tomography using
deep learning
Weicheng Kuo, Christian Häne, Pratik Mukherjee, Jitendra
Malik, and Esther L. Yuh PNAS October 21, 2019
https://doi.org/10.1073/pnas.1908021116
We trained a fully convolutional neural network with 4,396
head CT scans performed at the University of California at
San Francisco and affiliated hospitals and compared the
algorithm’s performance to that of 4 American Board of
Radiology (ABR) certified radiologists on an independent
test set of 200 randomly selected head CT scans.
https://www.ucsf.edu/news/2019/10/415681/ai-rivals-expert-radiologists-detecting-brain-hemorrhages
But the training images used by the researchers were
packed with information, because each small
abnormality was manually delineated at the pixel
level. The richness of this data – along with other steps
that prevented the model from misinterpreting
random variations or “noise” as meaningful –
created an extremely accurate algorithm.
A deep learning algorithm recognizes abnormal CT scans of the head in
neurological emergencies in 1 second. The algorithm also classifies the
pathological subtype of each abnormality: red - subarachnoid hemorrhage,
purple - contusion, green - subdural hemorrhage.
Five cases judged negative by at least 2 of 4
radiologists, but positive for acute
hemorrhage by both the algorithm and the
gold standard.
3D CNNs for segmentation
3D Deep Neural Network Segmentation of
Intracerebral Hemorrhage: Development and
Validation for Clinical Trials
Matthew Sharrock, W. Andrew Mould, Hasan Ali, Meghan Hildreth, Daniel F Hanley, John Muschelli
https://www.medrxiv.org/content/10.1101/2020.03.05.20031823v1
https://github.com/msharrock/deepbleed
Using an automated pipeline and 2D and 3D deep neural networks, we
show that we can quickly and accurately estimate ICH volume
with high agreement with time-consuming manual segmentation. The
training and validation datasets include significant heterogeneity in terms
of pathology, such as the presence of intraventricular (IVH) or
subdural hemorrhages (SDH) as well as variable image acquisition
parameters. We show that deep neural networks trained with an
appropriate anatomic context in the network receptive field, can
effectively perform ICH segmentation, but those without enough context
will overestimate hemorrhage along the skull and around
calcifications in the ventricular system.
The natural history of ICH includes intraventricular extension of blood, particularly for
hemorrhages close to the ventricles and the success of segmentation in this context
has not previously been accounted for in segmentation studies based on either MRI
or CT. This is a clear example of the need to understand the natural history of the
underlying neuropathology as well as to account for the variability in acquisition
when developing models for the clinical context, tasks that are frequently
overlooked. This is especially so in the realm of DNNs where models with millions of
parameters can be finely tuned to aspects of a curated dataset from a single
institution that are not applicable externally. In our view, when decisions regarding
potential therapeutic intervention are to be made, they should be informed by
metrics and models validated in a prospective clinical trial on multicenter
data designed with a full understanding of the underlying pathology.
Standard U-Net with DenseCRF
ICHNet: Intracerebral Hemorrhage (ICH)
Segmentation Using Deep Learning
Mobarakol Islam
NUS Graduate School for Integrative Sciences and Engineering (NGS)National University of Singapore, Parita
Sanghani, Angela An Qi See, Michael Lucas James, Nicolas
Kon Kam King, Hongliang Ren
International MICCAI Brainlesion Workshop BrainLes 2018: Brainlesion: Glioma,
Multiple Sclerosis, Stroke and Traumatic Brain Injuries
https://doi.org/10.1007/978-3-030-11723-8_46
ICHNet, evolves by integrating dilated
convolution neural network (CNN) with
hypercolumn features where a modest number
of pixels are sampled and corresponding
features from multiple layers are concatenated.
Due to freedom of sampling pixels rather than
image patch, this model trains within the brain
region and ignores the CT background
padding. This boosts the convergence time
and accuracy by learning only healthy and
defected brain tissues. To overcome the class
imbalance problem, we sample an equal
number of pixels from each class. We also
incorporate 3D conditional random field
(3D CRF, deepmedic/dense3dCrf) to smoothen the
predicted segmentation as a post-processing
step. ICHNet demonstrates 87.6% Dice
accuracy in hemorrhage segmentation, that is
comparable to radiologists.
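The class-balanced pixel-sampling idea can be sketched as follows (a simplified version with replacement; not the authors' code, and the helper name is made up for illustration):

```python
import numpy as np

def sample_balanced_pixels(label_map, n_per_class, rng=None):
    """Return (row, col) indices with the same number of pixels drawn
    from each class present in `label_map`, countering class imbalance
    the way ICHNet's equal-per-class sampling is described."""
    rng = rng or np.random.default_rng(0)
    coords = []
    for cls in np.unique(label_map):
        rows, cols = np.nonzero(label_map == cls)
        pick = rng.integers(0, len(rows), size=n_per_class)  # with replacement
        coords.append(np.stack([rows[pick], cols[pick]], axis=1))
    return np.concatenate(coords)

labels = np.zeros((8, 8), dtype=int)
labels[0, 0] = 1                       # rare "hemorrhage" class: 1 of 64 pixels
pts = sample_balanced_pixels(labels, n_per_class=10)
print(len(pts))                        # 20: 10 background + 10 hemorrhage
```

Sampling pixels rather than patches also makes it easy to restrict sampling to the brain region and skip the CT background padding, as the slide notes.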
“Sharper boundary”-tweaks also for ICH
Ψ-Net: Focusing on the border
areas of intracerebral hemorrhage
on CT images
Zhuo Kuang, Xianbo Deng, Li Yu, Hongkui Wang, Tiansong Li,
Shengwei Wang. Computer Methods and Programs in
Biomedicine (Available online 14 May 2020)
https://doi.org/10.1016/j.cmpb.2020.105546
Highlights
●A CNN-based architecture is proposed for ICH segmentation on CT
images. It consists of a novel model, named Ψ-Net, and a multi-level
training strategy.
●With the help of two attention blocks, Ψ-Net can, firstly, suppress
irrelevant information and, secondly, capture spatial contextual
information to fine-tune the border areas of the ICH.
●The multi-level training strategy includes two levels of tasks:
classification of the whole slice and pixel-wise segmentation. This
structure speeds up convergence and alleviates the vanishing-gradient
and class-imbalance problems.
●Compared to previous works on ICH segmentation, our method takes
less time to train and obtains more accurate and robust performance.
See also the multi-task “Dice+Hausdorff” papers,
e.g. Caliva et al. 2019, Karimi et al. 2019
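Papers of this kind combine a region-overlap term with a second task or boundary term. A minimal sketch of a two-level objective in the spirit of Ψ-Net's multi-level strategy (soft Dice plus a slice-level classification term; the 0.5 weight is an assumption, not a value from the paper):

```python
import numpy as np

def multilevel_loss(seg_prob, seg_gt, cls_prob, alpha=0.5, eps=1e-7):
    """Two-level objective sketch: a soft-Dice segmentation term plus
    a slice-level binary cross-entropy term, where the slice label
    ("contains hemorrhage") is derived from the segmentation mask.
    `alpha` balances the two terms."""
    inter = (seg_prob * seg_gt).sum()
    dice_loss = 1.0 - (2.0 * inter + eps) / (seg_prob.sum() + seg_gt.sum() + eps)
    cls_gt = float(seg_gt.any())                       # derived slice label
    bce = -(cls_gt * np.log(cls_prob + eps)
            + (1 - cls_gt) * np.log(1 - cls_prob + eps))
    return alpha * dice_loss + (1 - alpha) * bce

gt = np.zeros((4, 4)); gt[1, 1] = 1.0
perfect = multilevel_loss(gt.copy(), gt, cls_prob=0.99)
print(perfect < 0.01)  # True: near-perfect prediction gives near-zero loss
```

The slice-level term gives a learning signal even when per-pixel gradients are weak, which is how such strategies mitigate class imbalance.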
TBI segmentation very similar to ICH segmentation
Multiclass semantic segmentation and quantification of traumatic brain
injury lesions on head CT using deep learning: an algorithm development
and multicentre validation study
Miguel Monteiro*, Virginia F J Newcombe*, Francois Mathieu, Krishma Adatia, Konstantinos Kamnitsas,
Enzo Ferrante, Tilak Das, Daniel Whitehouse, Daniel Rueckert, David K Menon†, Ben Glocker
Funding European Union 7th Framework Programme, Hannelore Kohl Stiftung, OneMind, NeuroTrauma Sciences, Integra
Neurosciences, European Research Council Horizon 2020
Lancet Digital Health 2020 https://doi.org/10.1016/S2589-7500(20)30085-6
CT is the most common imaging modality in traumatic brain injury (TBI).
However, its conventional use requires expert clinical interpretation and does
not provide detailed quantitative outputs, which may have prognostic
importance. We aimed to use deep learning to reliably and efficiently
quantify and detect different lesion types.
We show the ability of a CNN to separately segment, quantify, and detect
multiclass haemorrhagic lesions and perilesional oedema. These
volumetric lesion estimates allow clinically relevant quantification of lesion
burden and progression, with potential applications for personalised treatment
strategies and clinical research in TBI.
Future work needs to focus on the optimal incorporation of such algorithms
into clinical practice, which must be accompanied by a rigorous
assessment of performance, strengths, and weaknesses. Such algorithms will
find clear research applications, and, if adequately validated, may be used to
help facilitate radiology workflows by flagging scans that require urgent
attention, aid reporting in resource-constrained environments, and detect
pathoanatomically relevant features for prognostication and a better
understanding of lesion progression.
Perihematomal edema segmentation
Fully Automated Segmentation Algorithm
for Perihematomal Edema Volumetry
After Spontaneous Intracerebral
Hemorrhage
Natasha Ironside, Ching-Jen Chen, Simukayi Mutasa, Justin L. Sim, Dale Ding,
Saurabh Marfatiah, David Roh, Sugoto Mukherjee, Karen C. Johnston, Andrew
M. Southerland, Stephan A. Mayer, Angela Lignelli, Edward Sander Connolly
2 Feb 2020 Stroke. 2020;51:815–823
https://doi.org/10.1161/STROKEAHA.119.026764
Perihematomal edema (PHE) is a promising surrogate
marker of secondary brain injury in patients with
spontaneous intracerebral hemorrhage, but it can be
challenging to accurately and rapidly quantify. The
aims of this study are to derive and internally validate a fully
automated segmentation algorithm for volumetric analysis of
PHE.
Inpatient computed tomography scans of 400
consecutive adults with spontaneous, supratentorial
intracerebral hemorrhage enrolled in the Intracerebral
Hemorrhage Outcomes Project (2009–2018) were
separated into training (n=360) and test (n=40) datasets.
The fully automated segmentation algorithm accurately
quantified PHE volumes from computed tomography scans
of supratentorial intracerebral hemorrhage patients
with high fidelity and greater efficiency compared with manual
and semiautomated segmentation methods. External
validation of fully automated segmentation for assessment of
PHE is warranted.
Examples of
perihematomal edema
(PHE) segmentation in
the test dataset.
Column A shows the
input axial, noncontrast
computed
tomography slice.
Column B shows the
corresponding
manual PHE
segmentation (blue
line).
Column C shows the
corresponding semi-
automated PHE
segmentation (red
line).
Column D shows the
corresponding fully
automated PHE
segmentation (green
line).
In the end: an end-to-end system
for the upstream
restoration/segmentation
with downstream tasks
such as prognosis and
prescriptive treatment
In practice, there are not many
end-to-end networks even for
prognosis, probably due to the
lack of such open-sourced
datasets
“Simultaneous” Classification and Segmentation
JCS: An Explainable COVID-19 Diagnosis System
Prognosis models
mostly outside the scope
of this presentation
but here a small teaser for “the
actual” analysis of the imaging
features with non-imaging features
Best to look inspiration from modeling of other pathologies as not much specifically on ICH
A Wide and Deep Neural Network
for Survival Analysis from
Anatomical Shape and Tabular
Clinical Data
Sebastian Pölsterl, Ignacio Sarasua, Benjamín Gutiérrez-Becker, and
Christian Wachinger (9 Sept 2019)
https://arxiv.org/abs/1909.03890
Feature-Guided Deep Radiomics
for Glioblastoma Patient Survival
Prediction
Zeina A. Shboul, Mahbubul Alam, Lasitha Vidyaratne, Linmin Pei,
Mohamed I. Elbakary and Khan M. Iftekharuddin Front. Neurosci., 20
September 2019 | https://doi.org/10.3389/fnins.2019.00966
Deep learning survival analysis
enhances the value of hybrid
PET/CT for long-term
cardiovascular event prediction
L E Juarez-Orozco, J W Benjamins, T Maaniitty, A Saraste, P Van Der
Harst, J Knuuti European Heart Journal, Volume 40, Issue Supplement_1,
October 2019, ehz748.0177,
https://doi.org/10.1093/eurheartj/ehz748.0177
Deep Recurrent Survival Analysis
Kan Ren et al. (2019)
https://doi.org/10.1609/aaai.v33i01.33014798
Use of radiomics for the prediction of local control of brain metastases after stereotactic radiosurgery
https://doi.org/10.1093/neuonc/noaa007 (20 January 2020) by Andrei Mouraviev et al.
https://towardsdatascience.com/deep-learning-for-survival-analysis-fdd1505293c9
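Several of the cited deep survival models build on the Cox partial likelihood. A minimal numpy sketch of that loss (assuming no tied event times; DeepSurv-style models simply produce `risk` from a neural network):

```python
import numpy as np

def cox_partial_nll(risk, time, event):
    """Negative log partial likelihood used by DeepSurv-style models.
    `risk` are model outputs (log hazards), `time` survival times,
    `event` is 1 if the event was observed and 0 if censored."""
    order = np.argsort(-time)                      # descending time
    risk, event = risk[order], event[order]
    log_cumsum = np.log(np.cumsum(np.exp(risk)))   # log-sum over each risk set
    return -np.sum((risk - log_cumsum) * event) / max(event.sum(), 1)

risk = np.array([2.0, 0.0, -2.0])   # highest predicted risk fails first
time = np.array([1.0, 2.0, 3.0])
event = np.array([1, 1, 0])         # last subject is censored
print(round(cox_partial_nll(risk, time, event), 3))  # 0.135
```

Ordering by descending time makes each cumulative sum exactly the risk set of subjects still at risk at that event time.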
Prescriptive models
mostly outside the scope
of this presentation
how to treat the patient based on
the features measured from the
patient, i.e. “precision medicine”
Reinforcement learning and Control models
Is Deep Reinforcement Learning Ready for
Practical Applications in Healthcare? A
Sensitivity Analysis of Duel-DDQN for Sepsis
Treatment
MingYu Lu, Zachary Shahn, Daby Sow, Finale Doshi-Velez, Li-wei H. Lehman
MIT; IBM Research, NYC; Harvard University
[Submitted on 8 May 2020]
https://arxiv.org/abs/2005.04301
In this work, we perform a sensitivity analysis on a state-of-the-art RL
algorithm (Dueling Double Deep Q-Networks) applied to
hemodynamic stabilization treatment strategies for septic
patients in the ICU
●Treatment History: Excluding treatment history leads to
aggressive treatment policies.
●Time bin durations: Longer time bins result in more aggressive
policies.
●Rewards: Long-term objectives lead to more aggressive and less
stable policies
●Embedding model: High sensitivity to architecture
●Random Restarts: DRL policies have many local optima
●Subgroup Analysis: Grouping by Sequential Organ Failure
Assessment (SOFA) score finds DQN agents are
underaggressive in high-risk patients and overaggressive
in low-risk patients
https://photos.app.goo.gl/pptobiD22E9osiWf6
Finale Doshi-Velez @ NeurIPS Machine Learning for Healh 2018 (ML4H)
Associate Professor of Computer Science, Harvard Paulson School of Engineering and Applied Sciences (SEAS)
Deep Reinforcement Learning in Medicine
Anders Jonsson Kidney Dis 2019;5:18–22 https://doi.org/10.1159/000492670
Deep Reinforcement Learning and Simulation as a Path Toward
Precision Medicine
Brenden K. Petersen, Jiachen Yang, Will S. Grathwohl, Chase Cockrell, Claudio Santiago, Gary An, and
Daniel M. Faissol 6 Jun 2019 https://doi.org/10.1089/cmb.2018.0168
Deep Reinforcement Learning for Dynamic Treatment Regimes
on Medical Registry Data
Ying Liu, Brent Logan, Ning Liu, Zhiyuan Xu, Jian Tang, and Yanzhi Wang
Healthc Inform. 2017 Aug; 2017: 380–385. doi: 10.1109/ICHI.2017.45
Dynamic Treatment Recommendation with unclear targets
Supervised Reinforcement Learning with Recurrent Neural
Network for Dynamic Treatment Recommendation
Lu Wang, Wei Zhang, Xiaofeng He, Hongyuan Zha
KDD '18 Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery
& Data Mining https://doi.org/10.1145/3219819.3219961
The data-driven research on treatment recommendation involves two main branches: supervised
learning (SL) and reinforcement learning (RL) for prescription. SL based prescription tries to
minimize the difference between the recommended prescriptions and indicator signal which
denotes doctor prescriptions. Several pattern-based methods generate
recommendations by utilizing the similarity of patients
[Hu et al. 2016, Sun et al. 2016], but they struggle to directly learn the
relation between patients and medications. Recently, some deep models
achieve significant improvements by learning a nonlinear mapping from
multiple diseases to multiple drug categories
[Bajor and Lasko 2017, Wang et al. 2018, Wang et al. 2017].
Unfortunately, a key concern for these SL based models still remains
unresolved, i.e., the ground
truth of “good” treatment strategy being unclear in the medical literature [Marik 2015]. More
importantly, the original goal of clinical decision-making also considers the outcome of
matching the indicator signal.
The above issues can be addressed by reinforcement learning for dynamic treatment regime
(DTR) [Murphy 2003, Robins 1986]. DTR is a sequence of tailored treatments according to the dynamic
states of patients, which conforms to the clinical practice. As a real example shown in Figure 1, treatments
for the patient vary dynamically over time with the accruing observations. The optimal DTR is
determined by maximizing the evaluation signal which indicates the long-term outcome of patients, due to the
delayed effect of the current treatment and the influence of future treatment choices [
Chakraborty and Moodie 2013]. With the desired properties of dealing with delayed reward and
inferring optimal policy based on non-optimal prescription behaviors, a set of reinforcement learning
methods have been adapted to generate optimal DTR for life-threatening diseases, such as schizophrenia,
non-small cell lung cancer, and sepsis [e.g. Nemati et al. 2016]. Recently, some studies employ deep RL to
solve the DTR problem based on large scale EHRs
[Peng et al. 2019, Raghu et al. 2017, Weng et al. 2016]. Nevertheless, these
methods may recommend treatments that are obviously different from doctors’ prescriptions due to the lack
of the supervision from doctors, which may cause high risk [Shen et al. 2013] in clinical practice. In
addition, the existing methods are challenging for analyzing multiple diseases and the complex medication
space.
In fact, the evaluation signal and indicator signal play complementary roles:
the indicator signal provides a baseline of effectiveness, while the evaluation
signal helps optimize the policy. Imitation learning (e.g. Finn et al. 2016) uses the
indicator signal to estimate a reward function for training robots by assuming the
indicator signal is optimal, which does not hold in clinical reality. Supervised
actor-critic (e.g. Zhu et al. 2017) uses the indicator signal to pre-train a
“guardian” and then combines “actor” output and “guardian” output to send
low-risk actions for robots. However, the two types of signals are trained
separately and cannot learn from each other. Inspired by these studies, we
propose a novel deep architecture to generate recommendations for
more general DTR involving multiple diseases and medications, called
Supervised Reinforcement Learning with Recurrent
Neural Network (SRL-RNN). The main novelty of SRL-RNN is to
combine the evaluation signal and indicator signal at the same time to learn an
integrated policy. More specifically, SRL-RNN consists of an off-policy actor-
critic framework to learn complex relations among medications, diseases, and
individual characteristics. The “actor” in the framework is not only influenced by
the evaluation signal like traditional RL but also adjusted by the doctors’
behaviors to ensure safe actions. An RNN is further adopted to capture the
temporal dependences in patients' longitudinal records for the
POMDP setting. Note that treatment and prescription are used
interchangeably in this paper.
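The way SRL-RNN mixes the evaluation signal (critic value) with the indicator signal (doctor's prescription) can be illustrated with a minimal loss sketch. This is not the authors' implementation; the tensors and the weighting factor `epsilon` are illustrative assumptions:

```python
import numpy as np

def combined_actor_loss(pred_action, doctor_action, q_value, epsilon=0.5):
    """Mix an RL objective (ascend the critic's Q-value) with a supervised
    imitation objective (match the doctor's prescription).
    epsilon trades off the indicator signal against the evaluation signal."""
    rl_loss = -q_value                                       # evaluation signal
    sup_loss = np.mean((pred_action - doctor_action) ** 2)   # indicator signal
    return (1 - epsilon) * rl_loss + epsilon * sup_loss
```

Setting `epsilon=0` recovers a pure actor-critic update; `epsilon=1` recovers pure behavior cloning of the doctor.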
Precision Medicine as Control Problem
Precision medicine as a control problem: Using
simulation and deep reinforcement learning to discover
adaptive, personalized multi-cytokine therapy for sepsis
Brenden K. Petersen, Jiachen Yang, Will S. Grathwohl, Chase Cockrell, Claudio
Santiago, Gary An, Daniel M. Faissol (Submitted on 8 Feb 2018)
https://arxiv.org/abs/1802.10440 - Cited by 8
In this study, we attempt to discover an effective cytokine mediation treatment
strategy for sepsis using a previously developed agent-based model that
simulates the innate immune response to infection: the Innate
Immune Response agent-based model (IIRABM) . Previous
attempts at reducing mortality with multi-cytokine mediation using the IIRABM
have failed to reduce mortality across all patient parameterizations and motivated
us to investigate whether adaptive, personalized multi-cytokine
mediation can control the trajectory of sepsis and lower patient
mortality. We used the IIRABM to compute a treatment policy in which
systemic patient measurements are used in a feedback loop to inform future
treatment.
Using deep reinforcement learning, we identified a policy that achieves 0%
mortality on the patient parameterization on which it was trained. More
importantly, this policy also achieves 0.8% mortality over 500 randomly selected
patient parameterizations with baseline mortalities ranging from 1–99% (with an
average of 49%) spanning the entire clinically plausible parameter space of the
IIRABM. These results suggest that adaptive, personalized multi-cytokine
mediation therapy could be a promising approach for treating sepsis. We
hope that this work motivates researchers to consider such an approach as part
of future clinical trials. To the best of our knowledge, this work is the first to
consider adaptive, personalized multi-cytokine mediation therapy for sepsis, and
is the first to exploit deep reinforcement learning on a biological
simulation.
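The closed loop described above (systemic measurements feeding back into future treatment) can be sketched abstractly. The `toy_sim` dynamics and the proportional `policy` below are illustrative stand-ins, not the IIRABM or the learned deep RL policy:

```python
import random

def feedback_treatment_loop(simulate_step, policy, init_state, horizon=10):
    """Closed-loop treatment: at each step the policy observes the current
    systemic measurement and chooses the next (cytokine) dose."""
    state, trajectory = init_state, []
    for _ in range(horizon):
        dose = policy(state)
        state = simulate_step(state, dose)
        trajectory.append((dose, state))
    return trajectory

# Toy stand-in for a simulator: 'state' is a scalar inflammation level that
# the dose partially suppresses, plus a small random perturbation.
def toy_sim(state, dose):
    return max(0.0, state - dose + random.uniform(0.0, 0.2))

random.seed(0)                      # reproducibility of the toy run
policy = lambda s: 0.5 * s          # proportional mediation (illustrative only)
traj = feedback_treatment_loop(toy_sim, policy, init_state=5.0)
```

In the actual study this policy is replaced by a deep network trained with reinforcement learning, and `toy_sim` by the agent-based IIRABM.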
From a health-economics perspective, sepsis seems to present the most compelling problem for hospitals
Surface Extraction
and Parcellation
Hard to do “MRI-level”
parcellations, but we
might want to visualize
at least the volumes as
mesh or NURBS
Surface (mesh or NURBS) from volumetric data
FastSurfer - A fast and accurate deep learning
based neuroimaging pipeline
Leonie Henschel et al.
German Center for Neurodegenerative Diseases (DZNE), Bonn, Germany
https://arxiv.org/abs/1910.03866 (9 Oct 2019)
To this end, we introduce an advanced deep learning architecture
capable of whole brain segmentation into 95 classes in
under 1 minute, mimicking FreeSurfer’s anatomical
segmentation and cortical parcellation. The network architecture
incorporates local and global competition via competitive dense
blocks and competitive skip pathways, as well as multi-slice
information aggregation that specifically tailor network
performance towards accurate segmentation of both
cortical and sub-cortical structures.
Further, we perform fast cortical surface reconstruction and
thickness analysis by introducing a spectral spherical
embedding and by directly mapping the cortical labels from the
image to the surface. This approach provides a full FreeSurfer
alternative for volumetric analysis (within 1 minute) and
surface-based thickness analysis (within only around
1h run time). For sustainability of this approach we perform
extensive validation: we assert high segmentation accuracy on
several unseen datasets, measure generalizability and
demonstrate increased test-retest reliability, and increased
sensitivity to disease effects relative to traditional FreeSurfer.
Mesh e.g. Deep Marching Cubes / DeepSDF
Deep Marching Cubes: Learning Explicit Surface
Representations
Yiyi Liao, Simon Donné, Andreas Geiger (2018)
http://www.cvlibs.net/publications/Liao2018CVPR.pdf - Cited by 42
https://github.com/yiyiliao/deep_marching_cubes
Marching cubes: A high resolution
3D surface construction algorithm
(1987) WE Lorensen, HE Cline
doi: 10.1145/37401.37422
Cited by 14,986 articles
In future work, we plan to adapt
our method to higher
resolution outputs using
octree techniques
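The classic marching cubes algorithm cited above is available in common libraries. A minimal sketch with scikit-image (assuming `skimage` is installed) extracts a triangle mesh from a volumetric occupancy grid, which is the same volume-to-surface step needed to visualize segmented hematoma volumes:

```python
import numpy as np
from skimage import measure

# Build a 64^3 volume containing a sphere (voxels inside have value 1).
x, y, z = np.mgrid[-1:1:64j, -1:1:64j, -1:1:64j]
volume = (x**2 + y**2 + z**2 < 0.5**2).astype(float)

# Classic marching cubes: extract a triangle mesh at the 0.5 iso-level.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)
# verts: (N, 3) vertex coordinates; faces: (M, 3) triangle vertex indices
```

The resulting `verts`/`faces` arrays can be exported (e.g. as OBJ or STL) for Unity, Unreal, or WebGL viewers discussed later in this section.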
Curriculum DeepSDF
Yueqi Duan, Haidong Zhu, He Wang, Li Yi, Ram Nevatia, Leonidas J. Guibas (March 2020)
https://arxiv.org/abs/2003.08593
https://github.com/haidongz-usc/Curriculum-DeepSDF PyTorch
Mesh
→
Unreal/Unity/WebGL, etc. if you are into visualization
Helping brain surgeons practice with real-time
simulation August 30, 2019 by Sébastien Lozé
https://www.unrealengine.com/en-US/spotlights/helping-brain-surgeons-practice-with-real-time-simulation
In their 2018 paper Enhancement Techniques for Human Anatomy Visualization, Hirofumi
Seo and Takeo Igarashi state that “Human anatomy is so complex that just visualizing it in
traditional ways is insufficient for easy understanding…” To address this problem, Seo has
proposed a practical approach to brain surgery using real-time rendering with
Unreal Engine.
Now Seo and his team have taken this concept a step further with their 2019 paper
Real-Time Virtual Brain Aneurysm Clipping Surgery, where they demonstrate an
application prototype for viewing and manipulating a CG representation of a
patient’s brain in real time.
The software prototype, made possible with a grant (Grant Number JP18he1602001) from
Japan Agency for Medical Research and Development (AMED), helps surgeons visualize a patient’s
unique brain structure before, during, and after an operation.
Brain Browser is a free, open-source 3D brain atlas built on WebGL technologies;
it uses Three.js to provide 3D/layered brain visualization. Reviewed in
medevel.com
Blender .blend files can be placed in the Assets folder of a Unity project
https://forum.unity.com/threads/holes-in-mesh-on-import-from-blender.248126/
Interaction between Volume Rendered 3D Texture and Mesh Objects
https://forum.unity.com/threads/interaction-between-volume-rendered-3d-texture-and-mesh-objects.451345/
Easy then to visualize on computer / VR / MR / AR
OCTOBER 14, 2017 BY ANDIJAKL
Visualizing MRI & CT Scans in Mixed Reality / VR / AR, Part 4:
Segmenting the Brain
https://www.andreasjakl.com/visualizing-mri-ct-scans-in-mixed-reality-vr-ar-part-4-segmenting-the-brain/
Combining 3D scans and MRI data
http://www.neuro-memento-mori.com/combining-3d-scans-and-mri-data/
VR software may bring
MRI segmentation into
the future
Matt O'Connor July 30,
2018
Advanced Visualization
https://www.healthimaging.com/topics/advanced-visualization/vr-software-mri-segmentation-future
Nextmed: Automatic
Imaging
Segmentation, 3D
Reconstruction, and
3D Model
Visualization Platform
Using Augmented and
Virtual Reality (2020)
http://doi.org/10.3390/s20102962
NURBS e.g. DeepSplines
BézierGAN: Automatic Generation of
Smooth Curves from Interpretable Low-Dimensional Parameters
Wei Chen, Mark Fuge
University of Maryland - work was supported by The Defense Advanced Research Projects Agency
(DARPA-16-63-YFAFP-059) via the Young Faculty Award (YFA) Program
https://arxiv.org/abs/1808.08871
Many real-world objects are designed with smooth curves, especially in the
aerospace and ship domains, where aerodynamic shapes (e.g., airfoils) and
hydrodynamic shapes (e.g., hulls) are designed. However, the process of selecting
the desired design is complicated, especially for engineering applications where
strict requirements are imposed. For example, in aerodynamic or hydrodynamic
shape optimization, generally three main components for finding the desired
design are: (1) a shape synthesis method (e.g., B-spline or NURBS
parameterization), (2) a simulator that computes the performance metric of any
given shape, and (3) an optimization algorithm (e.g., genetic algorithm) to
select the design parameters that result in the best performance [1, 2]. To facilitate
the design process of those objects, we propose a deep learning based
generative adversarial networks (GAN) model that can synthesize smooth
curves. The model maps a low-dimensional latent representation to a sequence
of discrete points sampled from a rational Bézier curve.
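Sampling discrete points from a rational Bézier curve, as the BézierGAN decoder does, is straightforward. A generic NumPy sketch (not the paper's code):

```python
import numpy as np
from math import comb

def rational_bezier(control_pts, weights, n_samples=100):
    """Sample points on a degree-k rational Bézier curve.
    control_pts: (k+1, d) array; weights: (k+1,) positive weights."""
    P = np.asarray(control_pts, float)
    w = np.asarray(weights, float)
    k = len(P) - 1
    t = np.linspace(0.0, 1.0, n_samples)
    # Bernstein basis, shape (n_samples, k+1)
    B = np.stack([comb(k, i) * t**i * (1 - t)**(k - i) for i in range(k + 1)],
                 axis=1)
    num = (B * w) @ P                        # weighted control-point blend
    den = (B * w).sum(axis=1, keepdims=True)
    return num / den

pts = rational_bezier([[0, 0], [1, 2], [2, 0]], [1.0, 2.0, 1.0])
```

Uniform weights reduce this to an ordinary Bézier curve; unequal weights pull the curve toward the heavier control points, which is what makes the representation expressive enough for airfoil-like shapes.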
DeepSpline: Data-Driven reconstruction of
Parametric Curves and Surfaces
Jun Gao, Chengcheng Tang, Vignesh Ganapathi-Subramanian, Jiahui Huang, Hao Su, Leonidas J. Guibas
University of Toronto;
Vector Institute; Tsinghua University; Stanford University; UC San Diego
(Submitted on 12 Jan 2019) https://arxiv.org/abs/1901.03781
Reconstruction of geometry based on different input modes, such as images or point clouds, has been
instrumental in the development of computer aided design and computer graphics. Optimal
implementations of these applications have traditionally involved the use of spline-based
representations at their core. Most such methods attempt to solve optimization problems that minimize
an output-target mismatch. However, these optimization techniques require an initialization that is close
enough, as they are local methods by nature. We propose a deep learning architecture that adapts to
perform spline fitting tasks accordingly, providing complementary results to the aforementioned
traditional methods.
To tackle challenges in the 2D case, such as multiple splines with intersections, we use a
hierarchical Recurrent Neural Network (RNN) [Krause et al. 2017], trained with ground-truth
labels, to predict a variable number of spline curves, each with an undetermined number of control points.
In the 3D case, we reconstruct surfaces of revolution and extrusion without self-intersection
through an unsupervised learning approach that circumvents the requirement for ground-truth
labels. We use the Chamfer distance to measure the distance between the predicted point cloud and target
point cloud. This architecture is generalizable, since predicting other kinds of surfaces (like surfaces of
sweeping or NURBS), would require only a change of this individual layer, with the rest of the model
remaining the same.
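The Chamfer distance used above as the training objective fits in a few lines of NumPy. This is a brute-force sketch; real pipelines use KD-trees or batched GPU kernels for large clouds:

```python
import numpy as np

def chamfer_distance(A, B):
    """Symmetric Chamfer distance between point clouds A (n, d) and B (m, d):
    mean nearest-neighbor squared distance, summed over both directions."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # (n, m) pairwise
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()
```

Identical clouds score 0, and the measure needs no point-to-point correspondence, which is why it suits unsupervised surface fitting.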
Making the Brains physical with 3D Printing
Making data matter: Voxel printing for the digital
fabrication of data across scales and domains
Christoph Bader et al.
The Mediated Matter Group, Media Lab, Massachusetts Institute of Technology, Cambridge, MA
https://doi.org/10.1126/sciadv.aas8652 (30 May 2018)
We present a multimaterial voxel-printing method that
enables the physical visualization of data sets commonly
associated with scientific imaging. Leveraging voxel-based
control of multimaterial three-dimensional (3D) printing, our
method enables additive manufacturing of discontinuous data
types such as point cloud data, curve and graph data, image-based
data, and volumetric data. By converting data sets into
dithered material deposition descriptions, through
modifications to rasterization processes, we demonstrate that
data sets frequently visualized on screen can be converted into
physical, materially heterogeneous objects.
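The "dithered material deposition" idea rests on classic error-diffusion dithering. A 2D Floyd–Steinberg sketch illustrates the principle; the voxel-printing pipeline generalizes it to multiple materials in 3D:

```python
import numpy as np

def floyd_steinberg(img):
    """Binary error-diffusion dithering of a grayscale slice in [0, 1].
    Quantization error at each pixel is pushed onto unvisited neighbors,
    so local material density tracks the original gray level."""
    img = np.asarray(img, float).copy()
    h, w = img.shape
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            old = img[y, x]
            new = 1.0 if old >= 0.5 else 0.0
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                img[y, x + 1] += err * 7 / 16
            if y + 1 < h and x > 0:
                img[y + 1, x - 1] += err * 3 / 16
            if y + 1 < h:
                img[y + 1, x] += err * 5 / 16
            if y + 1 < h and x + 1 < w:
                img[y + 1, x + 1] += err * 1 / 16
    return out
```

The output is binary per voxel (deposit material or not), yet the spatial average preserves the continuous input values, which is what lets a two-material printer render grayscale volumetric data.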
Representative 3D-printed models of image-based data. (A) In vitro reconstructed living human lung
tissue on a microfluidic device, observed through confocal microscopy (29). The cilia, responsible for transporting
airway secretions and mucus-trapped particles and pathogens, are colored orange. Goblet cells, responsible for
mucus production, are colored cyan. (B) Biopsy from a mouse hippocampus, observed via confocal expansion
microscopy (proExM) (30). The 3D print visualizes neuronal cell bodies, axons, and dendrites.
(H) White matter tractography data of the human brain, created with the
3D Slicer medical image processing platform (37), visualizing bundles
of axons, which connect different regions of the brain. The original data
were acquired through diffusion-weighted (DWI) MRI.
Getting the
software tools
To clinical use,
e.g.
Detection/Segmentation
→
clinical prognosis (mortality
and functional outcome
prediction)
Five FDA-approved software packages existed as of May 2020
Neuroimaging of Intracerebral Hemorrhage
Rima S Rindler, Jason W Allen, Jack W Barrow, Gustavo Pradilla,
Daniel L Barrow
Neurosurgery, Volume 86, Issue 5, May 2020, Pages E414–E423,
https://doi.org/10.1093/neuros/nyaa029
Intracerebral hemorrhage (ICH) accounts for 10% to 20% of
strokes worldwide and is associated with high morbidity and
mortality rates. Neuroimaging is indispensable for rapid
diagnosis of ICH and identification of the underlying etiology,
thus facilitating triage and appropriate treatment of patients.
The most common neuroimaging modalities include
noncontrast computed tomography (CT), CT angiography
(CTA), digital subtraction angiography, and magnetic
resonance imaging (MRI). The strengths and disadvantages of
each modality will be reviewed.
Novel technologies such as dual-energy CT/CTA, rapid
MRI techniques, near-infrared spectroscopy (NIRS)*, and
automated ICH detection hold promise for faster pre- and in-
hospital ICH diagnosis that may impact patient management.
* The depth of near-infrared light penetration limits detection
of deep hemorrhages, and the size, type, and location of intracranial
hemorrhages cannot be determined with accuracy. Bilateral ICH may
be missed given that NIRS depends upon the differential light
absorbance between contralateral head locations. Patients with
traumatic brain injury may also have scalp hematomas that produce
false-positive results. Finally, variations in hair, scalp, and skull
thickness introduce additional barriers to ICH detection.
Automated ICH Detection
Rapid advancements in machine learning techniques have prompted a number of studies to
evaluate automated ICH detection algorithms for identifying both intra- and extra-axial ICH with
varying sensitivities (81% [Majumdar et al. 2018]; area under the curve 0.846 [Arbabshirani et al. 2018]
to 0.90 [Chilamkurthy et al. 2018]) and specificities (92%) [Ye et al. 2019, Ojeda et al. 2019].
FDA-approved programs are listed in the Table (A Bar, MS et al, unpublished data, September 2018).
Automated algorithms that detect critical findings would facilitate triage of cases awaiting
interpretation, especially in underserved areas, thereby improving workflow and patient outcomes
[Chilamkurthy et al. 2018]. Utilizing a machine learning algorithm to detect ICH reduces the time to
diagnosis by 96% [Arbabshirani et al. 2018].
However, barriers have prevented widespread adoption of these techniques, including limited
inter-institutional generalizability of algorithms that were trained on limited, occasionally
single-site datasets. Furthermore, ultimate accountability for errors generated using a machine
learning algorithm remains to be determined.
AIDOC FDA-approved ‘CT software’ #1
The utility of deep learning: evaluation of a
convolutional neural network for detection of
intracranial bleeds on non-contrast head computed
tomography studies
P. Ojeda; M. Zawaideh; M. Mossa-Basha; D. Haynor
Proceedings Volume 10949, Medical Imaging 2019: Image Processing;
109493J (2019) https://doi.org/10.1117/12.2513167
The algorithm was tested on 7112 non-contrast head
CTs acquired during 2016–2017 from two large urban
academic and trauma centers. Ground truth labels
were assigned to the test data per PACS query and
prior reports by expert neuroradiologists. No
scans from these two hospitals had been used during
the algorithm training process and Aidoc staff were at all
times blinded to the ground truth labels.
Model output was reviewed by three radiologists
and manual error analysis performed on
discordant findings. Specificity was 99%, sensitivity
was 95%, and overall accuracy was 98%. In summary, we
report promising results of a scalable and clinically
pragmatic deep learning model tested on a large
set of real-world data from high-volume medical centers.
This model holds promise for assisting clinicians in the
identification and prioritization of exams suspicious for
ICH, facilitating both the diagnosis and treatment of an
emergent and life-threatening condition.
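As a quick consistency check on such reports, accuracy ties sensitivity and specificity together through prevalence (acc = p·sens + (1−p)·spec), so the three reported figures imply the positive fraction of the test set. A back-of-envelope sketch, subject to rounding in the published numbers:

```python
def implied_prevalence(sensitivity, specificity, accuracy):
    """Solve acc = p*sens + (1-p)*spec for the positive-class prevalence p."""
    return (specificity - accuracy) / (specificity - sensitivity)

# Reported Aidoc test-set figures: sensitivity 95%, specificity 99%, accuracy 98%
p = implied_prevalence(0.95, 0.99, 0.98)
print(round(p, 2))  # → 0.25
```

That is, the reported numbers are mutually consistent if roughly a quarter of the 7112 test scans were ICH-positive, a plausible rate for a flagged-exam cohort from high-volume trauma centers.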
AIDOC FDA-approved ‘CT software’ #2
Analysis of head CT scans flagged by deep
learning software for acute intracranial
hemorrhage
Daniel T. Ginat
Department of Radiology, Section of Neuroradiology, University of Chicago
Neuroradiology volume 62, pages 335–340 (2020)
https://doi.org/10.1007/s00234-019-02330-w
To analyze the implementation of deep learning software for
the detection and worklist prioritization of acute intracranial
hemorrhage on non-contrast head CT (NCCT) in various
clinical settings at an academic medical center.
This study reveals that the performance of the deep learning
software [Aidoc (Tel Aviv, Israel)] for acute intracranial
hemorrhage detection varies depending upon the patient visit
location. Furthermore, a substantial portion of flagged cases
were follow-up exams, the majority of which were inpatient
exams. These findings can help optimize the artificial
intelligence-driven clinical workflow.
This study has several limitations. The clinical impact of
the software, in terms of the significance of flagged cases
with pathology not related to ICH, reduction of the
turnaround time, a survey of radiologists regarding their
personal perspectives regarding the software
implementation, and whether there was improved
patient outcome were not a part of this study, but can be
addressed in future studies. Nevertheless, this study identified
potential deficiencies in the current software version, such as
not accounting for patient visit location and whether there are
prior head CTs. Such information could provide important
clinical context to improve the overall algorithm accuracy,
thereby flagging cases in a more useful manner.