ARTIFICIAL INTELLIGENCE AND RADIOLOGY PRESENTATION


About This Presentation

A PowerPoint presentation on artificial intelligence and radiology, covering an introduction, definitions of terms, early visionaries, applications of AI, benefits, challenges, and predictions.


Slide Content

ARTIFICIAL INTELLIGENCE AND RADIOLOGY DR. RAJ MANDAVIA

INTRODUCTION Artificial intelligence (AI): the field of computer science that involves the simulation of intelligent behavior by computers. AI is used to predict, automate, augment, and optimize tasks historically done by humans.

Early Visionaries Alan Turing: Brilliant British computer scientist and codebreaker during World War II. Considered by many to be the “father” of artificial intelligence. In 1951, he addressed what later became known as artificial intelligence. A computer, he postulated, could pass the Turing test if a human being could not determine the difference in a text conversation between other humans and the computer. If the computer passed the test, it would, he said, show evidence of “thinking”. The Turing test has since become shorthand for any AI that can convince a person that they are seeing or interacting with a real person. At the time, Turing could not carry out his proposed test on an actual computer because there were no computers powerful enough to run it. An annual award is given in his name as the highest distinction in computer science.

Early Visionaries John McCarthy: Assistant professor of mathematics at Dartmouth College in New Hampshire. In 1955, he coined the term artificial intelligence in a proposal for a 1956 summer workshop at the college to brainstorm thinking machines. The conference, attended by mathematicians, computer scientists, and cognitive psychologists, is widely considered to be the founding event of artificial intelligence.

Glossary of terms Algorithm: A sequential list of mathematical formulas or programming commands that facilitate a computer’s ability to solve problems. Artificial intelligence (AI): The field of computer science that involves the simulation of intelligent behavior by computers. Its major subfields are machine learning and deep learning. Artificial neural networks: A computer system inspired by the human brain that typically contains at least one input layer, which sends weighted inputs to a series of hidden layers, and an output layer at the end. Backpropagation: A method of training neural networks in which the network’s initial output is compared to the desired output, and the connection weights are then adjusted until the difference between outputs is minimal.
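To make the backpropagation entry concrete, here is a minimal sketch in Python with NumPy, assuming a tiny 2-input, 4-hidden-unit network learning the XOR function by gradient descent; the task, architecture, learning rate, and iteration count are illustrative choices, not anything from the presentation.

```python
# Minimal backpropagation sketch: a tiny network learns XOR.
# Illustrative only; real systems use frameworks such as PyTorch.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # desired outputs

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)   # input -> hidden layer
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)   # hidden -> output layer

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(20000):
    # Forward pass: compute the network's current output.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: compare the output to the desired output and
    # push the error back through each layer (chain rule).
    d_out = (out - y) * out * (1 - out)
    d_hid = (d_out @ W2.T) * h * (1 - h)
    # Adjust weights and biases a little to shrink the output error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_hid
    b1 -= lr * d_hid.sum(axis=0)

print(out.round(2).ravel())  # approaches [0, 1, 1, 0]
```

Each pass compares the actual output to the desired output and nudges the weights so the difference shrinks, which is exactly the loop the glossary entry describes.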

Glossary of terms Black box learning: AI performs a great deal of complex math, especially in the hidden layers of artificial neural networks. The computations often can’t be understood by humans, but the system still yields useful information. When this happens, it is called black box learning. Convolutional neural network (CNN): A network specifically designed to process images. Each CNN layer contains many filters, and each filter is a small matrix of weights, similar to a general neural network’s weights. The filters are repeatedly applied to image pixels, allowing them to recognize repeating patterns; CNNs are ideal for image analysis because images are composed of repeating patterns. Deep learning: Artificial intelligence carried out by neural networks that have multiple hidden layers. It can be supervised or unsupervised. Feature extraction: Data derived from the input data that may be in the form of edges, lines, points, blobs, or texture, among others.
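As a small illustration of the CNN entry, the sketch below slides one 3x3 filter (a small matrix of weights) across a toy image; the image, the edge-detecting kernel values, and the sizes are invented for demonstration, and a real CNN would learn many such filters per layer.

```python
# One convolutional filter applied across a toy 6x6 "image".
import numpy as np

image = np.zeros((6, 6))
image[:, 3:] = 1.0  # a vertical edge between columns 2 and 3

kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)  # responds to vertical edges

out = np.zeros((4, 4))  # valid positions for a 3x3 filter on a 6x6 image
for i in range(4):
    for j in range(4):
        patch = image[i:i+3, j:j+3]
        out[i, j] = np.sum(patch * kernel)  # weighted sum = filter response

print(out)  # strong responses where the filter crosses the edge
```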

Glossary of terms Layers: An arrangement of a group of nodes in an artificial neural network that process a set of input features and produce an outcome. Machine learning: Used interchangeably with artificial intelligence by many. Machine learning is the process by which AI uses algorithms to perform artificial intelligence functions. Model: An abstract representation of what an artificial neural network has learned from the training dataset during the training process. Natural language processing: The ability of an advanced neural network to interpret human language; used in applications such as translation services and voice assistants like Alexa and Siri. Neurons: In AI, used interchangeably with the term nodes.

Glossary of terms Nodes: Computational units that have one or more weighted input connections, a transfer function that combines the inputs in some way, and an output connection. Nodes are organized into layers to form a network. Overfitting: A common problem in machine learning; learning a function that perfectly explains the training data that the model learned from but doesn’t generalize well to unseen test data. Overfitting happens when a model overlearns from the training data to the point that it learns idiosyncrasies that aren’t inherent representations of the input. Pooling: The process of reducing a matrix generated by a convolutional layer to a smaller matrix, in part to reduce errors introduced by changes in the positioning of new data. Sensitivity: A measure of how often a test returns a positive result for people who do have the condition being tested for (true positive).
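The pooling entry can be illustrated with 2x2 max pooling, one common pooling choice; the 4x4 feature map below is made up for the example.

```python
# 2x2 max pooling: a 4x4 feature map from a convolutional layer is
# reduced to 2x2 by keeping each block's maximum, which makes the
# output less sensitive to small shifts in position.
import numpy as np

feature_map = np.array([[1, 3, 2, 0],
                        [4, 6, 1, 1],
                        [0, 2, 9, 5],
                        [1, 1, 3, 7]], dtype=float)

pooled = np.zeros((2, 2))
for i in range(2):
    for j in range(2):
        block = feature_map[2*i:2*i+2, 2*j:2*j+2]
        pooled[i, j] = block.max()  # keep only the strongest response

print(pooled)  # [[6. 2.] [2. 9.]]
```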

Glossary of terms Specificity: A measure of a test’s ability to correctly generate a negative result for people who don’t have the disease being tested for (true negative). A low-specificity test used for screening would yield a large number of people without the disease who would be subjected to further evaluation. Supervised learning: Computer learning by labeled examples. Supervised learning requires that the data used to train the algorithm are already labeled with correct answers and that the algorithm’s possible outputs are already known. Training dataset: The set of examples used initially to teach the network – that is, to train the algorithm. It may contain training examples assigned with correct labels. The network sees and learns from this data.
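Sensitivity and specificity, defined in the last two entries, reduce to simple ratios over the four possible outcomes of a test; the counts below are invented for illustration.

```python
# Sensitivity and specificity from a confusion matrix (made-up counts).
tp, fn = 90, 10    # 100 patients WITH the condition: 90 flagged, 10 missed
tn, fp = 800, 200  # 1000 patients WITHOUT it: 800 cleared, 200 false alarms

sensitivity = tp / (tp + fn)  # true-positive rate: 0.90
specificity = tn / (tn + fp)  # true-negative rate: 0.80

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```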

Glossary of terms Testing dataset: The gold standard used to evaluate the model. It is used only once the model is completely trained (using the training and validation sets). This dataset is meant to replicate the real world and has never previously been seen by the system. Unsupervised learning: The computer is fed data and learns on its own by automatically finding patterns and relationships inside that dataset. Validation dataset: A dataset that is used to fine-tune, adjust weights, and select the best model during the training process. Weights: The connection strength between units, or nodes, in a neural network. These weights can be adjusted in a process called learning.
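The training, validation, and testing datasets described in this glossary are typically carved out of a single pool of labeled examples; the sketch below uses a plain shuffled split, with the 70/15/15 proportions being an illustrative convention rather than anything stated in the presentation.

```python
# Splitting one labeled pool into the three dataset roles.
import numpy as np

rng = np.random.default_rng(42)
indices = rng.permutation(1000)   # 1000 labeled examples, shuffled

train = indices[:700]          # teaches the network (weights are fit here)
validation = indices[700:850]  # tunes settings and selects the best model
test = indices[850:]           # touched once, at the very end, to estimate
                               # performance on truly unseen data
print(len(train), len(validation), len(test))  # 700 150 150
```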

COMPUTING POWER The lack of computer processing power, along with difficulty accessing appropriate amounts of training data, affected the early progress of AI. Artificial intelligence requires a vast amount of computational power to process its data. AI would not be possible without a quantum leap in computer processing power. At the time of the Dartmouth Conference in 1956, the most advanced computer was built by IBM. It occupied an entire room, stored its data on magnetic tape, and received its instructions on paper punch cards.

COMPUTING POWER In 2020, the iPhone 12, which, by comparison, fits in the average person’s pocket, could perform 11 trillion operations per second, 55,000,000 times more than the 1956 IBM machine. And in 2018, what was then the world’s fastest supercomputer could perform a computation in 1 second that would have taken the “old” IBM almost 32,000 years to complete. A major part of that increase in computational speed came from the realization that the hardware known as graphics processing units (GPUs) could greatly accelerate processing because of their ability to quickly manage large blocks of data simultaneously.
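The quoted figures can be sanity-checked with back-of-the-envelope arithmetic; the short script below only rearranges the numbers given in the slide.

```python
# Back-of-the-envelope check of the computing-power figures quoted above.
iphone_ops = 11e12                 # iPhone 12: ~11 trillion ops/second
ibm_ops = iphone_ops / 55_000_000  # "55,000,000 times more" implies this
print(f"{ibm_ops:,.0f} ops/s")     # ~200,000 ops/second for the 1956 IBM

# Cross-check with the supercomputer claim: 1 second of its work equals
# almost 32,000 years for the old IBM.
seconds_in_32k_years = 32_000 * 365.25 * 24 * 3600
print(f"{ibm_ops * seconds_in_32k_years:.2e} ops")  # ~2e17 operations
```

The two claims are mutually consistent: both imply the 1956 machine ran on the order of 200,000 operations per second, and the 2018 supercomputer on the order of 2x10^17 operations per second.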

COMPUTER-ASSISTED DETECTION/DIAGNOSIS IN MEDICINE During the early development of computer applications for diagnosis in medicine, researchers were trying to develop systems that were totally automated in reaching a diagnosis, which proved unrealistic at the time. One of the first uses of computer analysis introduced into medicine occurred outside of radiology: the first attempts to automate ECG analysis, in the 1970s. It is interesting to note that more than 50 years after the introduction of automated ECG analysis, expert panels still recommend that all computer-based electrocardiographic reports be overread by a physician.

COMPUTER-ASSISTED DETECTION/DIAGNOSIS IN MEDICINE Computer-assisted detection (CADe) found its largest early use in radiology in mammography. In 1998, the first commercial CAD system for mammography was approved by the US Food and Drug Administration. Other relatively early CAD programs also focused on recognizing vertebral compression fractures on lateral radiographs and intracranial aneurysms on MRA. Some of these programs depended primarily on pattern-recognition software designed to decrease any potential oversights in observation. They did this by applying an electronic overlay that marked anything the CAD determined should have further evaluation after radiologists had made their initial reading and before their final reading.

COMPUTER-AIDED SIMPLE TRIAGE (CAST) A combination of computer-aided diagnosis (CAD) and simple triage and rapid treatment (START). CAST executes a fully automatic initial interpretation of a study – a focused, preliminary evaluation. Studies analyzed by the system are automatically classified into predefined classes, for example, possible pulmonary embolism, possible pneumothorax, etc. CAST is particularly applicable in emergency diagnostic imaging to assist in triaging patients who might need immediate attention for potentially life-threatening emergencies. Although the primary goal of traditional CAD is assisting the diagnostic accuracy of a human reader, CAST may address reading sequence prioritization.

CAST APPLICATIONS Pulmonary embolism (PE): These programs are intended to assist radiologists in identifying suspected pulmonary emboli and flag such studies to accelerate workflow triage by conveying suspected positive findings on chest CT angiograms for pulmonary embolism. Aortic dissection: Such applications analyze chest and abdominal CT angiograms to assist in detection. Stroke: Applications have been designed to analyze contrast-enhanced CT images of the brain and send a notification if a suspected large-vessel occlusion has been identified. There are also applications that assess non-contrast CT brain scans for areas of hypodensity and for hyperdense vessels and outline them, which is also helpful in stroke detection.

CAST APPLICATIONS Fractures: Applications exist that attempt to identify and mark areas they find suspicious for a fracture in the skeletal part being examined by conventional radiography. Pneumoperitoneum: Programs that detect free intraabdominal air on oral contrast-enhanced CT scans to prioritize and triage those studies have been developed. Pneumothorax: There are algorithms that assist in the detection and localization of a suspected pneumothorax and elevate these studies for priority reading in the worklist. Intracranial hemorrhage: Noncontrast head CT scans can be analyzed for features that suggest acute intracranial hemorrhage for prioritization and triage.
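As a hedged sketch of the reading-sequence prioritization idea behind CAST, the Python snippet below moves algorithm-flagged studies to the front of a reading worklist using a priority queue; the study names, flags, and two-level priority scheme are hypothetical, not taken from any actual product.

```python
# CAST-style worklist triage: flagged studies jump the reading queue.
import heapq

# Heap entries are (priority, arrival_order, study_name, flag);
# a lower priority number is read first, with ties broken by arrival.
worklist = []
studies = [
    ("CT head, routine follow-up", None),
    ("CTA chest", "possible pulmonary embolism"),
    ("Abdominal CT, routine", None),
    ("CT head", "possible intracranial hemorrhage"),
]
for order, (name, flag) in enumerate(studies):
    priority = 0 if flag else 1  # flagged studies go to the front
    heapq.heappush(worklist, (priority, order, name, flag))

while worklist:
    _, _, name, flag = heapq.heappop(worklist)
    print(name, "--", flag or "routine queue")
```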

OTHER AI APPLICATIONS IN RADIOLOGY Screening mammography: Mammography is the area of radiology with the greatest use of CAD and AI, according to a survey conducted by the American College of Radiology. There are many applications of AI available for use in digital mammography, digital breast tomosynthesis, and breast ultrasound. They have been shown to assist radiologists in finding and classifying abnormalities such as masses, distortion, asymmetry, and calcifications. Pulmonary embolus: Deep learning systems can serve as a second reader for the immediate interpretation and prioritization of positive studies. A deep learning model has also been developed that can flag patients with a high clot burden or right ventricular strain, calling attention to those patients who might have a worse prognosis.

OTHER AI APPLICATIONS IN RADIOLOGY Recognizing intracranial hemorrhage: AI has been shown to accurately detect the existence and type of intracranial hemorrhage on noncontrast CT scans of the head with relatively high sensitivity and specificity. It has the potential to assist in the correct diagnosis of intracranial hemorrhage in a small number of patients originally thought to be negative. Compression fractures: Algorithms exist that can be applied to a CT study of the chest or abdomen to detect vertebral compression fractures. COVID-19 detection: Using a chest CT scan as input, AI programs are available that analyze the scan and highlight areas with abnormal pulmonary patterns known to be associated with COVID-19.

OTHER AI APPLICATIONS IN RADIOLOGY Bone age: There is an application that can automatically analyze a radiograph of a child’s hand and calculate a bone age, as well as the standard deviation from normal. Lung Nodule Benignity vs. Malignancy: Programs exist that assist in detecting, classifying, and tracking the growth of pulmonary nodules.

OTHER AI APPLICATIONS IN RADIOLOGY Clinical Decision Support Systems (CDSS): Traditional clinical decision support systems are designed to aid clinical decision making by matching the characteristics of a patient against a computerized clinical knowledge base so as to offer recommendations to the clinician. In radiology, one of the main uses of clinical decision support is guiding the ordering physician toward the most appropriate imaging study, if any, for each patient based on age, gender, past medical history, and symptoms. These systems draw their recommendations from a computerized database of appropriate-use criteria developed by experts in the field, such as the American College of Radiology’s ACR Appropriateness Criteria™.
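A toy illustration of how a rule-based decision-support lookup of this kind might work is sketched below; the rules and recommendations are invented placeholders, not actual ACR Appropriateness Criteria.

```python
# Toy rule-based imaging decision-support lookup.
# All rules below are invented placeholders for illustration only.
from typing import NamedTuple

class Patient(NamedTuple):
    age: int
    symptom: str

# (symptom, minimum age) -> recommended first imaging step (placeholder)
RULES = {
    ("uncomplicated low back pain", 0): "no imaging initially (placeholder)",
    ("new-onset seizure", 18): "MRI brain (placeholder)",
}

def recommend(p: Patient) -> str:
    """Match patient characteristics against the knowledge base."""
    for (symptom, min_age), study in RULES.items():
        if p.symptom == symptom and p.age >= min_age:
            return study
    return "no matching rule; consult the full criteria"

print(recommend(Patient(age=60, symptom="uncomplicated low back pain")))
```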

OTHER AI APPLICATIONS IN RADIOLOGY Detecting polyps on CT colonography: CAD is available for assisting the radiologist in identifying colorectal polyps on CT colonography. Coronary CT angiography: There is CAD for the automatic detection of significant stenosis (i.e., causing more than 50% narrowing) of the coronary arteries on CT angiography studies. Nuclear medicine: CAD systems exist for the diagnosis of bone metastases in whole-body scans and coronary artery disease in myocardial perfusion images. Liver fat: There are applications that characterize liver tissue by providing quantitative measures of liver fat, fibrosis, and inflammation.

BENEFITS Reduce errors: Studies have shown that computer-assisted diagnosis combined with human interpretation can make small but significant improvements in the overall accuracy in certain areas of imaging. Shorten study to report time: Reading sequence prioritization may reduce the time from study completion to interpretation and thus the time to make subsequent clinical decisions in certain scenarios where time may be of the essence. New substance for human insight: Artificial intelligence models may provide new information that we can use to develop distinctly human insights into improving patient care and newer algorithms with even greater utility.

CHALLENGES Extensive and accurate data: Artificial intelligence systems generally require a great deal of data for training. The models are only as effective as the data fed to them. It can take a long time to gather a sufficient amount of data and for humans to parse it in a way the computer will find useful. Well-annotated large medical datasets are needed, as many of the most noteworthy accomplishments of deep learning are based on very large amounts of data. Building such datasets in medicine is costly, requires an enormous workload from experts, and may also pose privacy and ethics issues. As in other areas of AI, the goal of large medical datasets is to minimize overfitting and increase generalizability.

CHALLENGES Exaggerated claims of “accuracy”: Sometimes companies make the exaggerated claim that a new test is “almost 100% accurate” in diagnosing a certain disease. That level of diagnostic accuracy is suspiciously high for any problem using AI. When AI models leave the development stage and start making real-world predictions, their performance nearly always worsens. That is why independent validation is essential before adopting any new, high-impact AI system. Task specificity: A common characteristic of most current AI tools is that they can address only a limited number of specific tasks at any given time, a shortcoming of any form of narrow intelligence. In a hypothetical example, a human examining a single chest x-ray can identify a lung nodule while at the same time recognizing that there are 11 pairs of ribs, that the left atrium is enlarged, that there are eggshell calcifications of lymph nodes, and that the stomach bubble is displaced medially. A comprehensive AI system capable of detecting that many discordant abnormalities has yet to be developed.

CHALLENGES Reduction in human skills over time: In 2013, the US Federal Aviation Administration issued a safety alert for airline pilots to use autopilot less because, they said, “continuous use of autoflight (autopilot) systems could lead to degradation of the pilot’s ability to quickly recover the aircraft from an undesired state” and “unfortunately, continuous use of those systems does not reinforce a pilot’s knowledge and skills in manual flight operations.” In 2019, the Boeing 737 MAX was grounded worldwide after a malfunctioning computer-based flight control system caused two new aircraft to crash in Indonesia and Ethiopia, killing all 346 people on board. It was alleged that part of the issue was that pilot training on the new system may have been inadequate. There is a concern that, over time, the proliferation of AI-driven diagnosis will leave fewer and fewer human experts in a position to teach or validate computer models.

CHALLENGES Automatic acceptance of AI-generated diagnoses, especially in emergent situations: Radiologists, like other physicians, commonly ask for a second opinion from fellow radiologists after viewing an image in order to seek additional advice. AI-generated diagnoses, especially those generated automatically by reading prioritization applications, differ from this approach by offering that advice unsolicited, before the physician has had an opportunity to make their own judgment. In one experimental study, conducted outside the actual care of patients, physicians accepted computer-generated advice in many instances even when the investigators had purposely manipulated some of it to be inaccurate, and those with less expertise to begin with were more likely to follow the inaccurate advice than a group with greater expertise. That propensity to follow incorrect advice was called clinical susceptibility, and it carries implications for the acceptance of automated interpretations, especially if there is an accompanying decrease in human expertise.

CHALLENGES False positives and false negatives: The more false positives, the lower any test’s specificity. If the false-positive rate is high, the test is not very specific, which reduces the acceptance of the CAD system because the user has to reexamine all of the wrong hits. In CAST systems, where flagged studies are automatically moved to the top of the queue, the false-positive rate has to be very low if the reading-sequence optimization is to be trusted (see the sketch after this slide). Black box problem: As deep learning networks with more and more layers perform seemingly opaque computations to reach an actionable output, will humans lose their ability to judge why and how a decision is made?
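To see why the false-positive rate matters so much for trust in triage, consider the back-of-the-envelope arithmetic below; the study volume, prevalence, sensitivity, and specificity values are all illustrative.

```python
# How specificity drives the false-alarm burden in a triage queue.
# All numbers are illustrative assumptions.
daily_studies = 500
prevalence = 0.02   # 1 in 50 studies truly positive
sensitivity = 0.95
specificity = 0.90

true_pos = daily_studies * prevalence * sensitivity               # ~9.5
false_pos = daily_studies * (1 - prevalence) * (1 - specificity)  # ~49
print(f"flagged: ~{true_pos + false_pos:.0f}/day, "
      f"of which ~{false_pos:.0f} are false alarms")
```

Even at 90% specificity, false alarms outnumber true hits by roughly five to one in this scenario, which is why CAST prioritization demands a very low false-positive rate.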

PREDICTIONS Artificial intelligence has assisted, and will continue to assist, humans in their endeavor to achieve the utmost accuracy and safety in medical diagnosis and treatment. Futuristic predictions have been and will continue to be made, fascinating many and frightening others. The literature abounds with wrong predictions about AI made by some very smart people: Allen Newell, a leading researcher in computer science and a Turing Award winner, said in 1958: “Within 10 years, a digital computer will be the world’s chess champion.” He was eventually proved right: an IBM computer called Deep Blue did beat the reigning world chess champion, Garry Kasparov. However, that impressive, but programmatically narrow, accomplishment occurred nearly 40 years after Newell’s prediction.

PREDICTIONS Herbert A. Simon, a Nobel Prize recipient, said in 1965: “Machines will be capable, within 20 years, of doing any work a man can do.” Marvin Minsky, cofounder of the Massachusetts Institute of Technology’s AI Laboratory and another Turing Award recipient, said in 1970: “In from three to eight years, we will have a machine with the intelligence of an average human being.” It has been decades since he made that prediction, and, though there may be some debate about what constitutes the intelligence of an average human being, we aren’t anywhere close.

PREDICTIONS Roy Amara was a cofounder of the Institute for the Future in Silicon Valley and is known for a saying, appropriately called Amara’s law: “We tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.” How many years, decades, or even centuries constitute the short run versus the long run is neither easy to predict nor likely to be guessed correctly.

THANK YOU