RAC- PPT .pptx


Slide Content

Advancements and Challenges in Deep Learning for Brain Tumour Detection
Ph.D. Research Proposal Presentation
By Mr. Farooqhussain Mohammad
Roll No.: 23038105
Enrollment No.:
Session: 2024-25
Under the Supervision of Dr. Amit Kumar Dewangan, Assistant Professor
To the Research Advisory Committee
DEPARTMENT OF INFORMATION TECHNOLOGY, SCHOOL OF STUDIES OF ENGINEERING & TECHNOLOGY, GURU GHASIDAS VISHWAVIDYALAYA, BILASPUR (C.G.)-495009
Feb 2025

Outline: Introduction, Literature Review, Research Gap Identified, Motivation, Objectives, Proposed Methodology, Expected Outcome, References

Introduction
Brain tumours are abnormal growths of cells within the brain. They can be classified as either benign (non-cancerous) or malignant (cancerous). Malignant brain tumours are often referred to as brain cancer and can spread to other parts of the brain or spinal cord.
(Slide figure: brain tumour images.)

Types of Brain Tumors
Primary Brain Tumors: Originate within the brain itself or in its immediate surroundings (such as the meninges, pituitary gland, or pineal gland). These include gliomas, meningiomas, and pituitary adenomas.
Secondary (Metastatic) Brain Tumors: Tumors that spread to the brain from cancers elsewhere in the body, such as lung, breast, or colon cancers.

Example input dataset with different MRI modalities and corresponding ground truth segmentation map

Modern Imaging Techniques

Types of Medical Image Format
ANALYZE (1980, BIR): multidimensional data; stored as two binary files, image (.img) and header (.hdr).
MINC (1992, MNI): flexible framework for defining custom data; three major subgroups: info, dimension, image.
DICOM (1993, ACR and NEMA): header and data stored in a single file (.dcm); contains rich metadata.
NIfTI (2000, NIH): header and data stored in a single file (.nii); versatile, supports multi-dimensional (3D and 4D) data.
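For illustration, a minimal sketch of reading the two most widely used formats above, assuming the nibabel and pydicom Python packages (nibabel also handles ANALYZE and MINC); the file names are placeholders, not data from this proposal:

```python
# Minimal sketch of loading NIfTI and DICOM data.
# Assumes nibabel and pydicom are installed; paths are placeholders.
import nibabel as nib   # NIfTI / ANALYZE / MINC reader
import pydicom          # DICOM reader

# NIfTI: header and voxel data live in a single .nii (or .nii.gz) file.
nii_img = nib.load("subject01_t1.nii.gz")
volume = nii_img.get_fdata()          # 3D (or 4D) NumPy array
print(volume.shape, nii_img.affine)   # voxel grid and scanner-space affine

# DICOM: typically one .dcm file per slice, each carrying rich metadata.
ds = pydicom.dcmread("slice_0001.dcm")
print(ds.Modality)                    # e.g. "MR" for an MRI slice
slice_pixels = ds.pixel_array         # 2D NumPy array for this slice
```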

Types of Medical Image Format

Techniques for Medical Image Processing Using Deep Learning
1. Self-Supervised and Semi-Supervised Learning: Labeled medical data is scarce, so these methods leverage unlabeled data. Methods such as SimCLR and BYOL are used to pre-train on large unlabeled datasets and are applied in semi-supervised segmentation.
2. Multi-Modal and Cross-Modal Learning: Combine different data sources (e.g., MRI, CT, pathology images, and clinical data) for better predictions. Examples include CLIP-like models adapted for text-image relationships in radiology reports, and cross-modality GANs that translate between imaging modalities (e.g., PET to CT).
3. Vision Transformers (ViTs) and Hybrid Models: ViTs have surpassed CNNs in many medical imaging tasks by capturing long-range dependencies. Hybrid approaches that combine CNNs with transformers (e.g., TransUNet) improve segmentation, with applications such as lesion detection, tumor segmentation, and multi-organ analysis (see the sketch after this list).
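As referenced in point 3, a minimal, illustrative sketch of a hybrid CNN + Transformer segmentation network in PyTorch; the layer sizes and depth are arbitrary assumptions, not the TransUNet architecture or the model proposed in this work:

```python
# Hybrid sketch: a small CNN extracts local features, a Transformer
# encoder adds global context, and a 1x1 head predicts a per-pixel mask.
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    def __init__(self, in_channels=1, embed_dim=64, num_classes=2):
        super().__init__()
        # CNN stem: local feature extraction, downsamples by 4.
        self.cnn = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: long-range dependencies across patch tokens.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=4,
                                           batch_first=True)
        self.transformer = nn.TransformerEncoder(layer, num_layers=2)
        # Segmentation head + upsampling back to the input resolution.
        self.head = nn.Conv2d(embed_dim, num_classes, kernel_size=1)
        self.up = nn.Upsample(scale_factor=4, mode="bilinear",
                              align_corners=False)

    def forward(self, x):
        feats = self.cnn(x)                        # (B, C, H/4, W/4)
        b, c, h, w = feats.shape
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H*W/16, C)
        tokens = self.transformer(tokens)
        feats = tokens.transpose(1, 2).reshape(b, c, h, w)
        return self.up(self.head(feats))           # (B, classes, H, W)

# Smoke test on a dummy 128x128 single-channel MRI slice.
logits = HybridCNNTransformer()(torch.randn(1, 1, 128, 128))
print(logits.shape)  # torch.Size([1, 2, 128, 128])
```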

[3] Lamba, K., Rani, S., et al.: Supervised learning algorithms such as Support Vector Machines (SVM) have been integrated with deep learning to improve accuracy, using VGG16 (Visual Geometry Group, 16 layers) with transfer learning; presents a comprehensive and well-structured approach to brain tumor detection and classification.
[6] Feature fusion (concatenation) combines feature vectors from multiple pre-trained deep learning models, improving classification performance and maximizing the transfer of knowledge (see the sketch after this list).
[1] Yim, M. S., Kim, Y. H., et al.: Deep learning-driven macroscopic AI segmentation model for brain tumor detection via digital pathology.
[7] Blind watermarking approach with watermark detection using neural networks; high robustness and authentication, but runtime complexity is very high.
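As noted for [6], a hedged sketch of feature fusion by concatenation; the choice of backbones (VGG16 and ResNet18 from torchvision) and the simple linear classifier are illustrative assumptions, not the models used in that work:

```python
# Feature fusion sketch: descriptors from two frozen pre-trained
# backbones are concatenated into one vector before classification.
import torch
import torch.nn as nn
from torchvision import models

vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT)       # downloads weights on first use
resnet = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Keep only the convolutional feature extractors, frozen for transfer learning.
vgg_features = nn.Sequential(vgg.features, nn.AdaptiveAvgPool2d(1), nn.Flatten())
res_features = nn.Sequential(*list(resnet.children())[:-1], nn.Flatten())
for p in list(vgg_features.parameters()) + list(res_features.parameters()):
    p.requires_grad = False

x = torch.randn(4, 3, 224, 224)            # a batch of (3-channel) MRI slices
fused = torch.cat([vgg_features(x),        # 512-dim VGG16 descriptor
                   res_features(x)], 1)    # 512-dim ResNet18 descriptor
classifier = nn.Linear(fused.shape[1], 4)  # e.g. 4 hypothetical tumour classes
print(classifier(fused).shape)             # torch.Size([4, 4])
```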

[8] [9] [10]

Research Gap Identified
1. Generalization to Diverse Populations: Most existing models are trained on limited datasets that may not represent the diversity of patient populations. There is a need for models that can generalize well across different demographic and clinical settings.
2. Interpretability and Explainability: Deep learning models are often considered "black boxes," making it difficult for clinicians to understand and trust their predictions. Research is needed to develop methods that enhance the interpretability and explainability of these models.
3. Integration with Clinical Workflows: There is a lack of studies on how deep learning models can be seamlessly integrated into existing clinical workflows and diagnostic tools. Research should focus on practical implementation and real-world applications.
4. Ethical and Privacy Concerns: Handling patient data in deep learning research raises ethical and privacy issues. More research is needed to address these concerns and to develop secure methods for data sharing and model training.

Problem Statement
Brain tumors pose a significant global health challenge due to their high mortality and morbidity rates. Early and accurate detection of brain tumors is crucial for improving patient outcomes. Traditional imaging techniques, while effective, often face limitations such as noise, resolution issues, and imbalanced datasets. Deep learning has emerged as a transformative technology in medical imaging, offering the potential to enhance the accuracy and efficiency of brain tumor detection. However, the integration of deep learning techniques into clinical practice is fraught with challenges, including the need for large annotated datasets, computational resources, and model interpretability.

Objectives of the Proposed Work
1. Develop a novel hybrid model for accurate and robust brain tumor detection in MRI images.
2. Explore techniques to optimize the training process, improve model interpretability, and develop robust methods for handling diverse and noisy data.
3. Study pre-trained deep learning models and machine learning techniques that extract informative features effectively, reduce dimensionality, and classify brain tumors with high accuracy (a small sketch follows below).
4. Integrate the proposed model into remote patient monitoring systems, which could revolutionize healthcare by enabling early detection, remote monitoring, and personalized treatment plans.
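A minimal sketch of the feature-extraction, dimensionality-reduction, and classification pipeline in objective 3, using scikit-learn; the feature dimensions, class count, and random data are placeholders, not results from this proposal:

```python
# Deep features (e.g. fused backbone descriptors) reduced with PCA
# and classified with an SVM.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
deep_features = rng.normal(size=(200, 1024))  # 200 scans, 1024-dim features
labels = rng.integers(0, 4, size=200)         # 4 hypothetical tumour classes

clf = make_pipeline(StandardScaler(), PCA(n_components=64), SVC(kernel="rbf"))
clf.fit(deep_features, labels)
print(clf.score(deep_features, labels))       # training accuracy of the sketch
```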

Flow diagram of Proposed Work

Proposed Methodology
Collect multi-modal MRI data and preprocess it, then combine CNNs for local feature extraction with Vision Transformers for global context, using attention-based fusion for accurate tumor segmentation and classification. Apply attention maps for visual explanations, and validate using the Dice coefficient (a minimal sketch of this metric follows below). Finally, integrate the model into cloud-based patient monitoring systems for early detection, enabling remote diagnosis and personalized treatment plans.
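A short sketch of the Dice-coefficient validation step mentioned above, for binary tumour masks; the example masks are placeholders:

```python
# Dice = 2|A ∩ B| / (|A| + |B|) for a predicted and a ground-truth mask.
import torch

def dice_coefficient(pred: torch.Tensor, target: torch.Tensor,
                     eps: float = 1e-6) -> torch.Tensor:
    """Dice score for binary masks of identical shape (values 0/1)."""
    pred = pred.float().flatten()
    target = target.float().flatten()
    intersection = (pred * target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Perfect overlap gives 1.0, disjoint masks give ~0.0.
mask = torch.tensor([[0, 1], [1, 1]])
print(dice_coefficient(mask, mask).item())      # 1.0
print(dice_coefficient(mask, 1 - mask).item())  # ~0.0
```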

Expected Outcome

REFERENCES
1. Yim, M. S., Kim, Y. H., Bark, H. S., Oh, S. J., Maeng, I., Shim, J. K., Chang, J. H., Kang, S. G., Yoo, B. C., Kwon, J. G., Byun, J., Yeo, W. H., Jung, S. H., Ryu, H. C., Kim, S. H., Choi, H. J., & Ji, Y. bin. (2024). Deep learning-driven macroscopic AI segmentation model for brain tumor detection via digital pathology: Foundations for terahertz imaging-based AI diagnostics. Heliyon, 10(22). https://doi.org/10.1016/j.heliyon.2024.e40452
2. Pande, Y., & Chaki, J. (2025). Brain tumor detection across diverse MR images: An automated triple-module approach integrating reduced fused deep features and machine learning. Results in Engineering, 25. https://doi.org/10.1016/j.rineng.2024.103832
3. Lamba, K., Rani, S., Anand, M., & Maguluri, L. P. (2024). An integrated deep learning and supervised learning approach for early detection of brain tumor using magnetic resonance imaging. Healthcare Analytics, 5. https://doi.org/10.1016/j.health.2024.100336

[8] S. Bakas et al., "Advancing The Cancer Genome Atlas glioma MRI collections with expert segmentation labels and radiomic features," Scientific Data, vol. 4, no. 1, 2017.
[9] B. H. Menze et al., "The multimodal brain tumor image segmentation benchmark (BraTS)," IEEE Transactions on Medical Imaging, vol. 34, no. 10, pp. 1993–2024, 2014.

THANK YOU