BTZ PPT.pptx: image segmentation and image fusion
About This Presentation
image segmentation and image fusion
Size: 580.76 KB
Language: en
Added: Jun 16, 2024
Slides: 36 pages
Slide Content
BAHIR DAR INSTITUTE OF TECHNOLOGY
FACULTY OF ELECTRICAL AND COMPUTER ENGINEERING
DEPARTMENT OF BIOMEDICAL ENGINEERING
Advanced Biomedical Image Analysis: Image Fusion and Image Registration
Group members:
1. Zemene Yohannes (BDU1603087)
2. Tirusew Engidaw (BDU1603086)
3. Bezawit Kassie (BDU1603085)
Submitted to: Dr. Selamawit
Submission date: 04/17/2024
Image fusion Image fusion is a process of combining two or more images of the same scene obtained from different sensors or modalities to create a single composite image that contains more information than any of the individual input images. The goal of image fusion is to enhance the overall quality, details, and interpretability of the fused image compared to the original inputs.
Registration of CT with PET scanning
Image fusion based on wavelet decomposition
Image fusion using MATLAB Simulink
Fusion Techniques There are various techniques for combining the information from the input images. These techniques can be categorized into spatial domain methods, transform domain methods, and deep learning-based methods. Spatial domain methods directly operate on the pixel values of the input images. Common techniques include averaging, weighted averaging, maximum or minimum selection, and decision-based methods.
Cont. Transform domain methods involve transforming the input images into a different domain (e.g., the frequency domain), where fusion is performed before transforming back to the spatial domain. Techniques such as the wavelet transform, the Fourier transform, and sparse representation are used.
Cont. Deep learning-based methods leverage neural networks to learn the fusion process directly from the input images. These methods often offer state-of-the-art performance but require large amounts of training data and computational resources. Applications of image fusion: Remote sensing: Combining data from multiple sensors to improve land cover classification, change detection, and environmental monitoring.
Cont. Medical imaging: Integrating information from different imaging modalities (e.g., MRI, CT, PET) for improved diagnosis and treatment planning. Surveillance: Enhancing the quality of surveillance imagery for object detection, tracking, and recognition. Computer vision: Merging visual information from multiple cameras or viewpoints for scene understanding, augmented reality, and 3D reconstruction.
Pixel-level Fusion In pixel-level fusion, the fusion process is performed directly on the pixel values of the input images. Simple averaging: This method computes the average pixel value at each location across the input images. It is straightforward but may not preserve all details or handle variations in image quality effectively.
Weighted averaging: Here, each pixel is assigned a weight based on factors such as image quality, relevance of information, or importance. Weighted averaging allows for more flexibility in preserving important details.
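A minimal NumPy sketch of both rules, assuming two grayscale images of the same size; the example arrays and the 0.7/0.3 weights are placeholders:

```python
import numpy as np

def fuse_average(img_a, img_b):
    # Simple averaging: mean pixel value at each location.
    return (img_a.astype(np.float64) + img_b.astype(np.float64)) / 2.0

def fuse_weighted(img_a, img_b, w_a=0.7, w_b=0.3):
    # Weighted averaging: weights chosen per image (e.g., by quality);
    # they should sum to 1 so the intensity range is preserved.
    return w_a * img_a.astype(np.float64) + w_b * img_b.astype(np.float64)

# Placeholder inputs: two random 8-bit grayscale images.
a = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
b = np.random.randint(0, 256, (128, 128), dtype=np.uint8)
fused = fuse_weighted(a, b).clip(0, 255).astype(np.uint8)
```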
Transform Domain Fusion Transform domain fusion involves transforming images into a different domain (e.g., the frequency domain) before fusion. Wavelet transform: This method decomposes images into different frequency sub-bands using wavelet transforms. Fusion is then performed in the wavelet domain, allowing for selective combination of details from different frequency components.
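A sketch of wavelet-domain fusion built on the PyWavelets library, assuming two grayscale images of equal size. The db2 wavelet, the two decomposition levels, and the fusion rule (average the approximation band, keep the larger-magnitude detail coefficients) are all common but arbitrary choices:

```python
import numpy as np
import pywt  # PyWavelets

def wavelet_fuse(img_a, img_b, wavelet="db2", level=2):
    ca = pywt.wavedec2(img_a.astype(np.float64), wavelet, level=level)
    cb = pywt.wavedec2(img_b.astype(np.float64), wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]  # approximation band: average
    for bands_a, bands_b in zip(ca[1:], cb[1:]):
        # Detail bands (horizontal, vertical, diagonal): keep the
        # coefficient with the larger magnitude, which favors edges.
        fused.append(tuple(
            np.where(np.abs(x) >= np.abs(y), x, y)
            for x, y in zip(bands_a, bands_b)))
    return pywt.waverec2(fused, wavelet)
```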
Discrete cosine transform (DCT): Similar to the wavelet transform, the DCT decomposes images into frequency components. Fusion in the DCT domain can also be used for image compression and enhancement. Principal component analysis (PCA): PCA can be applied to extract principal components from images, which can then be fused to create a composite image with enhanced features.
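One common PCA fusion variant derives the weights for the weighted-averaging rule from the principal eigenvector of the two images' 2x2 covariance matrix; a sketch, assuming grayscale inputs of equal size:

```python
import numpy as np

def pca_fusion_weights(img_a, img_b):
    data = np.stack([img_a.ravel(), img_b.ravel()]).astype(np.float64)
    cov = np.cov(data)                    # 2x2 covariance of the two images
    vals, vecs = np.linalg.eigh(cov)      # eigh: suited to symmetric matrices
    v = np.abs(vecs[:, np.argmax(vals)])  # dominant eigenvector
    w = v / v.sum()                       # normalize so weights sum to 1
    return w[0], w[1]                     # plug into weighted averaging
```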
Feature-level Fusion Feature-level fusion focuses on extracting and combining relevant features from input images. Feature extraction: Features such as edges, textures, colors, or key points (e.g., corners, blobs) are extracted from input images using techniques like edge detection, texture analysis, or feature descriptors (e.g., SIFT, SURF).
Feature combination: Extracted features from different images are combined based on their relevance or importance using fusion rules (e.g., maximum, minimum, average) to create a fused feature representation.
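A sketch of feature-level fusion using Sobel gradient magnitude as the extracted edge feature and a maximum rule to combine the feature maps; descriptor-based features (SIFT, SURF) would follow the same extract-then-combine pattern:

```python
import numpy as np
from scipy import ndimage

def fuse_edge_features(img_a, img_b):
    def edge_magnitude(img):
        gx = ndimage.sobel(img.astype(np.float64), axis=1)  # horizontal gradient
        gy = ndimage.sobel(img.astype(np.float64), axis=0)  # vertical gradient
        return np.hypot(gx, gy)
    # Maximum rule: keep the stronger edge response at each pixel.
    return np.maximum(edge_magnitude(img_a), edge_magnitude(img_b))
```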
Decision-level Fusion Decision-level fusion involves combining decisions or outputs from image processing algorithms applied to input images. Example: In object detection, decision-level fusion may combine detection results (e.g., bounding boxes, classifications) from multiple image processing algorithms or detectors to improve overall detection performance.
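A sketch of one possible decision-level rule for the detection example: outputs from two detectors are merged, and boxes that overlap (by intersection-over-union) are averaged and keep the higher score. The (box, score) representation and the 0.5 threshold are illustrative assumptions:

```python
def iou(a, b):
    # Boxes as (x1, y1, x2, y2); intersection-over-union in [0, 1].
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def fuse_detections(dets_a, dets_b, thresh=0.5):
    # Each detection is a (box, score) pair from one detector.
    fused, used = [], set()
    for box_a, score_a in dets_a:
        match = next((j for j, (box_b, _) in enumerate(dets_b)
                      if j not in used and iou(box_a, box_b) >= thresh), None)
        if match is None:
            fused.append((box_a, score_a))  # only detector A fired here
        else:
            used.add(match)
            box_b, score_b = dets_b[match]
            merged = tuple((p + q) / 2 for p, q in zip(box_a, box_b))
            fused.append((merged, max(score_a, score_b)))
    fused.extend(d for j, d in enumerate(dets_b) if j not in used)
    return fused
```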
Image Registration Geometric (and photometric) alignment of one image with another; the images may be of the same or different types (MR, CT, …). Image registration is the process of aligning and overlaying two or more images of the same scene taken at different times.
Cont. Image registration is the process of calculating spatial transforms that align a set of images to a common observational frame of reference. Registration is a key step in any image analysis or understanding task where different sources of data must be combined.
Cont. The goal of image registration is to spatially align images so that corresponding features in the images coincide, enabling meaningful comparison, analysis, and fusion. During the registration process, one difficulty becomes evident: establishing correspondences between the images, known as the matching problem, which is also the most time-consuming step of the algorithm's execution.
Image registration involves several steps, including feature detection and matching, transformation estimation (e.g., affine, rigid, or non-rigid), and resampling or warping of images to align them properly.
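For example, a rigid transformation has only three parameters (a rotation angle and a 2D translation); a small sketch of applying it to an N x 2 array of point coordinates:

```python
import numpy as np

def rigid_transform(points, theta, tx, ty):
    # Rotate by theta (radians) about the origin, then translate.
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return points @ R.T + np.array([tx, ty])
```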
Cont. Image registration is essential in various fields such as:
Medical imaging (aligning images for diagnosis and treatment planning)
Remote sensing (aligning images for change detection and analysis)
Computer vision (aligning images for object recognition and tracking)
Astronomy (aligning images for celestial object analysis)
X- ray 2D-3D Registration
Different image registration results
Intensity-Based Registration Intensity-based registration methods rely on the pixel intensity values of the images to estimate the transformation parameters that align the images. Cross-correlation: This method measures the similarity between corresponding image patches or regions by computing the cross-correlation coefficient. The transformation parameters are then adjusted to maximize the cross-correlation.
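A brute-force sketch of cross-correlation registration for pure translation: every integer shift within a window is scored by the normalized cross-correlation coefficient and the best is kept. np.roll wraps pixels around the border, which is tolerable for small shifts in a sketch:

```python
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation coefficient of two equal-size images.
    a = a - a.mean()
    b = b - b.mean()
    return (a * b).sum() / (np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12)

def best_translation(fixed, moving, max_shift=10):
    # Exhaustive search over integer shifts; returns (dy, dx) and score.
    best, best_shift = -np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            score = ncc(fixed, shifted)
            if score > best:
                best, best_shift = score, (dy, dx)
    return best_shift, best
```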
Mutual information: Mutual information measures the statistical dependence between pixel intensities in the images. Registration based on mutual information aims to maximize the mutual information between the images by optimizing the transformation parameters. Normalized mutual information (NMI): NMI is a variation of mutual information that normalizes the mutual information value to account for differences in image intensities and dynamic ranges.
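A sketch of mutual information computed from the joint intensity histogram of two overlapping images (the 32-bin choice is arbitrary); a registration loop would adjust the transform parameters to maximize this value:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()            # joint probability distribution
    px = pxy.sum(axis=1, keepdims=True)  # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)  # marginal of image B
    nz = pxy > 0                         # avoid log(0) terms
    return (pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum()
```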
Feature-Based Registration Feature-based registration techniques involve extracting distinctive features from images and matching these features to estimate the transformation parameters. Key point detection and matching: Key points (e.g., corners, blobs) are detected in images using feature detectors (e.g., Harris corner detector, SIFT, SURF). Corresponding key points are then matched between images, and the transformation is estimated based on the matched key points.
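A sketch of this pipeline with OpenCV: ORB key points are detected and matched with a brute-force Hamming matcher, a homography is estimated with RANSAC, and the moving image is warped onto the fixed one. The 1000-feature budget and 5.0-pixel RANSAC threshold are illustrative defaults:

```python
import cv2
import numpy as np

def register_orb(fixed, moving):
    orb = cv2.ORB_create(1000)
    kp_f, des_f = orb.detectAndCompute(fixed, None)
    kp_m, des_m = orb.detectAndCompute(moving, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_m, des_f), key=lambda m: m.distance)
    src = np.float32([kp_m[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_f[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)  # robust to outliers
    h, w = fixed.shape[:2]
    return cv2.warpPerspective(moving, H, (w, h))  # resample onto fixed grid
```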
Line or edge-based registration: This method detects and matches lines or edges in images to estimate geometric transformations such as affine or projective transformations. Region-based registration: Regions of interest or regions with distinctive textures are identified and matched between images to perform registration.
Hierarchical Registration Hierarchical registration methods utilize a multi-resolution approach to handle large disparities or complex deformations between images. Pyramid-based registration: Images are downsampled into multiple levels or scales, forming an image pyramid. Registration is performed hierarchically from coarse to fine scales, allowing for robust registration at different levels of detail.
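A coarse-to-fine sketch that reuses the hypothetical best_translation() helper from the cross-correlation example above: the shift is estimated on a heavily downsampled pair first, then refined at each finer level:

```python
import numpy as np

def pyramid_register(fixed, moving, levels=3, search=4):
    dy, dx = 0, 0  # accumulated full-resolution shift
    for lvl in range(levels - 1, -1, -1):
        step = 2 ** lvl
        f = fixed[::step, ::step]  # crude downsampling by decimation
        m = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)[::step, ::step]
        (ddy, ddx), _ = best_translation(f, m, max_shift=search)
        dy += ddy * step  # residual shift scaled back to full resolution
        dx += ddx * step
    return dy, dx
```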
Multi-resolution wavelet registration: Similar to pyramid-based methods, wavelet transforms are applied to images to decompose them into different frequency components. Registration is then performed at multiple resolutions using the wavelet coefficients.
Deformable Registration Deformable registration techniques are used to handle non-linear deformations or distortions between images. B-spline registration: B-splines are used to model smooth deformations between images. Control points are placed on a grid, and the deformation field is adjusted to align images based on the control points.
Cont. Free-form deformation (FFD): FFD allows for more flexible deformations by defining a deformation grid that can be locally adjusted to match image features.
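A rough numerical sketch of an FFD-style warp: a coarse grid of control-point displacements is smoothly upsampled into a dense deformation field (SciPy's cubic spline interpolation stands in for true B-spline basis functions here), and the image is resampled along the deformed coordinates:

```python
import numpy as np
from scipy import ndimage

def warp_with_control_grid(image, grid_dx, grid_dy):
    # grid_dx, grid_dy: coarse (e.g., 4x4) x/y displacement control grids.
    h, w = image.shape
    zoom = (h / grid_dx.shape[0], w / grid_dx.shape[1])
    dx = ndimage.zoom(grid_dx, zoom, order=3)  # dense x-displacement field
    dy = ndimage.zoom(grid_dy, zoom, order=3)  # dense y-displacement field
    yy, xx = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")
    # Sample the image at the displaced coordinates (bilinear).
    return ndimage.map_coordinates(image, [yy + dy, xx + dx], order=1)
```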
Optical Flow-Based Registration Optical flow methods estimate the motion field between images by tracking the displacement of image features over time. Dense optical flow: This method estimates the motion vectors for every pixel in the image, providing a dense motion field for registration. Sparse optical flow: Only selected key points or features are tracked to estimate the motion field, making sparse optical flow more computationally efficient but less detailed than dense optical flow.
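A sketch of dense optical-flow registration using OpenCV's Farneback method: a per-pixel motion field from the fixed image to the moving image is estimated, then used to warp the moving image into alignment. The Farneback parameters shown are typical defaults, not tuned values:

```python
import cv2
import numpy as np

def flow_register(fixed, moving):
    # Dense flow such that fixed[y, x] ~ moving[y + fy, x + fx].
    flow = cv2.calcOpticalFlowFarneback(
        fixed, moving, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    h, w = fixed.shape
    xx, yy = np.meshgrid(np.arange(w), np.arange(h))
    map_x = (xx + flow[..., 0]).astype(np.float32)
    map_y = (yy + flow[..., 1]).astype(np.float32)
    # Pull moving-image pixels back onto the fixed image's grid.
    return cv2.remap(moving, map_x, map_y, cv2.INTER_LINEAR)
```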
Both image fusion and image registration are fundamental techniques in image processing and computer vision, playing crucial roles in enhancing image quality, extracting valuable information, and enabling advanced analysis and interpretation of images.