Lecture 06 - image processingcourse1.pptx


About This Presentation

Image processing course.


Slide Content

Digital Image Processing, Lecture 6: Image Segmentation

Segmentation. Image segmentation is the process of partitioning a digital image into multiple regions. Segmentation accuracy determines the eventual success or failure of subsequent image processing tasks such as pattern recognition and computer-aided diagnosis (CAD) systems. For this reason, considerable care should be taken with the segmentation procedure.

Region/Segment. What is a region or segment? It is an aggregation of pixels. Properties such as gray level, color, texture, or shape are used to group pixels into regions having a particular meaning. In the mathematical sense, the segmentation of an image I (viewed as a set of pixels) is the partition of I into n disjoint sets R1, R2, ..., Rn, called segments or regions, such that R1 ∪ R2 ∪ ... ∪ Rn = I and Ri ∩ Rj = ∅ for i ≠ j. Image segmentation is based on one of two gray-level intensity properties: gray-level discontinuity (e.g. edge detection) or gray-level similarity (e.g. thresholding).

Detection of Discontinuity. The most common way to detect discontinuities in an image is to run a spatial mask of a specific size and set of coefficients over the image. At each position, the mask coefficients are multiplied by the underlying image pixels and the sum of products is computed. Discontinuity detection can target an isolated point, a line, or an edge.

Discontinuity: Point Detection. An isolated point is a point (a small aggregate of pixels) whose intensity differs significantly from its homogeneous background. Isolated-point detection uses a mask with equal coefficients everywhere except at the center, where the coefficient has a larger value of opposite sign. The center of the mask is moved from pixel to pixel across the image; at each location, the sum of products of the mask coefficients and the underlying gray levels is computed. An isolated point is detected wherever this sum exceeds a given threshold.
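A minimal NumPy/SciPy sketch of this test is given below; the function name, the 3x3 Laplacian-style mask, and the threshold T are illustrative choices, not taken from the slides.

```python
import numpy as np
from scipy.ndimage import convolve

def detect_points(image, T):
    """Flag isolated points whose mask response exceeds the threshold T."""
    # Equal coefficients everywhere except the center, which carries a
    # larger value of opposite sign (a Laplacian-style 3x3 mask).
    mask = np.array([[-1, -1, -1],
                     [-1,  8, -1],
                     [-1, -1, -1]], dtype=float)
    response = convolve(image.astype(float), mask, mode='nearest')
    return np.abs(response) > T
```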

Discontinuity: Point Detection

Discontinuity: Line Detection. For line detection the same idea is applied, but with different masks that can detect horizontal, vertical, or oblique (slope m) lines. The preferred direction in each mask receives a different weight (a larger value of opposite sign to the others). The main disadvantage of this method is that only strong responses (lines differing greatly from the background) can be detected.
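A sketch of this idea, assuming the four standard 3x3 line masks (horizontal, vertical, and the two diagonals); the dictionary keys, function name, and threshold T are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# The preferred direction in each mask carries the larger weight (2),
# opposite in sign to the remaining coefficients (-1).
LINE_MASKS = {
    'horizontal': np.array([[-1, -1, -1],
                            [ 2,  2,  2],
                            [-1, -1, -1]], dtype=float),
    'vertical':   np.array([[-1,  2, -1],
                            [-1,  2, -1],
                            [-1,  2, -1]], dtype=float),
    '+45':        np.array([[-1, -1,  2],
                            [-1,  2, -1],
                            [ 2, -1, -1]], dtype=float),
    '-45':        np.array([[ 2, -1, -1],
                            [-1,  2, -1],
                            [-1, -1,  2]], dtype=float),
}

def detect_lines(image, direction, T):
    """Binary map of pixels whose response to the chosen line mask exceeds T."""
    response = convolve(image.astype(float), LINE_MASKS[direction], mode='nearest')
    return np.abs(response) > T
```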

Discontinuity: Line Detection

Discontinuity: Edge Detection. Edge detection is the most common approach for detecting meaningful discontinuities in gray level. An edge is the boundary between two regions with relatively distinct gray-level properties. An edge can be ideal or blurred; blurred edges require more sophisticated algorithms to detect.

Edge Detection

Edge detection. Goal: identify sudden changes (discontinuities) in an image. Intuitively, most of the semantic and shape information in an image can be encoded in its edges, which are more compact than pixels. The ideal result resembles an artist's line drawing (though the artist also uses object-level knowledge).

Characterizing edges. An edge is a place of rapid change in the image intensity function. Plotting the intensity along a horizontal scanline and taking its first derivative shows that edges correspond to extrema of the derivative.

Discontinuity: Edge Detection. If the regions are sufficiently homogeneous, edge detection can be performed with a simple derivative operator: the intensity variation is examined and its first derivative is used to detect the presence of an edge. For non-homogeneous regions, a technique based on the gradient should be used, since the maximum rate of change of image intensity occurs in the direction of the gradient.

Image gradient. The gradient of an image is the vector of partial derivatives, ∇f = [∂f/∂x, ∂f/∂y]; it points in the direction of most rapid increase in intensity. The gradient direction is given by θ = tan⁻¹((∂f/∂y)/(∂f/∂x)); the edge direction is perpendicular to it. The edge strength is given by the gradient magnitude ‖∇f‖ = sqrt((∂f/∂x)² + (∂f/∂y)²).


Discontinuity: Edge Detection. The magnitude and direction of the gradient are computed at each pixel: G = sqrt(Gx² + Gy²) and α = tan⁻¹(Gy/Gx), where G is the magnitude of the gradient, α is its direction, and Gx, Gy are the partial derivatives. The edge is then found by linking points along the computed direction.
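A minimal sketch of this per-pixel computation, assuming Sobel masks for the partial derivatives Gx and Gy; the function name is illustrative.

```python
import numpy as np
from scipy.ndimage import sobel

def gradient_magnitude_direction(image):
    """Per-pixel gradient magnitude G and direction alpha (in radians)."""
    img = image.astype(float)
    gx = sobel(img, axis=1)        # Gx: derivative along the columns (x)
    gy = sobel(img, axis=0)        # Gy: derivative along the rows (y)
    G = np.hypot(gx, gy)           # G = sqrt(Gx^2 + Gy^2)
    alpha = np.arctan2(gy, gx)     # alpha = atan2(Gy, Gx)
    return G, alpha
```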

Finite difference filters Other approximations of derivative filters exist:
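For reference, the kernels most commonly listed as alternative derivative approximations are written out below, on the assumption that the slide's figure shows the usual Roberts, Prewitt, and Sobel masks.

```python
import numpy as np

# x-derivative (vertical-edge) kernels; transposing Prewitt/Sobel gives
# the y-derivative versions.
PREWITT_X = np.array([[-1, 0, 1],
                      [-1, 0, 1],
                      [-1, 0, 1]], dtype=float)
SOBEL_X   = np.array([[-1, 0, 1],
                      [-2, 0, 2],
                      [-1, 0, 1]], dtype=float)

# Roberts cross uses a pair of 2x2 kernels sensitive to the two diagonals.
ROBERTS_1 = np.array([[1, 0],
                      [0, -1]], dtype=float)
ROBERTS_2 = np.array([[0, 1],
                      [-1, 0]], dtype=float)
```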

Discontinuity: Edge Detection. Edge detection algorithms face several problems: noise, edge breaks due to illumination problems, and any other effect that causes a discontinuity.

Effects of noise. Consider a single row or column of the image; plotting intensity as a function of position gives a 1-D signal. With noise present, where is the edge?

Effects of noise. Finite difference filters respond strongly to noise: image noise produces pixels that look very different from their neighbors, and generally the larger the noise, the stronger the response. What can be done? Smoothing the image should help, by forcing pixels that differ from their neighbors (i.e. likely noise pixels) to look more like them.

Solution: smooth first. Convolve the noisy signal f with a smoothing kernel g; to find edges, look for peaks in the derivative of the smoothed signal, d/dx (f * g).

Derivative theorem of convolution. Differentiation is itself a convolution, and convolution is associative: d/dx (f * g) = f * (d/dx g). This saves one operation: the signal can be convolved directly with the derivative of the smoothing kernel.
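A small 1-D demonstration of this identity, with an assumed noisy step signal and Gaussian support; all names and parameter values here are illustrative.

```python
import numpy as np

def gaussian(x, sigma):
    return np.exp(-x**2 / (2 * sigma**2))

def gaussian_derivative(x, sigma):
    return -x / sigma**2 * np.exp(-x**2 / (2 * sigma**2))

# A noisy 1-D step signal f and a Gaussian kernel g on a small support.
rng = np.random.default_rng(0)
f = np.concatenate([np.zeros(100), np.ones(100)]) + 0.1 * rng.standard_normal(200)
x = np.arange(-15, 16)
sigma = 3.0

# Two operations: smooth with g, then differentiate the result, d/dx (f * g).
smoothed = np.convolve(f, gaussian(x, sigma), mode='same')
two_step = np.gradient(smoothed)

# One operation: convolve directly with d/dx g (derivative theorem).
one_step = np.convolve(f, gaussian_derivative(x, sigma), mode='same')

# Both responses peak at the step (index near 100), up to scale and boundary effects.
print(np.argmax(np.abs(two_step)), np.argmax(np.abs(one_step)))
```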

Designing an edge detector. Criteria for an "optimal" edge detector:
Good detection: the detector must minimize the probability of false positives (spurious edges caused by noise) as well as false negatives (missing real edges).
Good localization: the detected edges must be as close as possible to the true edges.
Single response: the detector must return only one point for each true edge point, i.e. minimize the number of local maxima around the true edge.

Canny edge detector. This is probably the most widely used edge detector in computer vision. Theoretical model: step edges corrupted by additive Gaussian noise. Canny showed that the first derivative of the Gaussian closely approximates the operator that optimizes the product of signal-to-noise ratio and localization.

Canny edge detector, step by step:
1. Filter (smooth) the image with a derivative of Gaussian.
2. Find the magnitude and orientation of the gradient.
3. Apply non-maximum suppression to the gradient magnitude image, thinning multi-pixel-wide "ridges" down to single-pixel width.
4. Use double thresholding and connectivity analysis to detect and link edges: define two thresholds, low and high; use the high threshold to start edge curves and the low threshold to continue them.
MATLAB: edge(image, 'canny')
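The slide gives the MATLAB call; an equivalent sketch using OpenCV in Python is shown below, where the file name and the two hysteresis thresholds are placeholders.

```python
import cv2

# Read as grayscale; cv2.Canny expects an 8-bit, single-channel image.
img = cv2.imread('input.png', cv2.IMREAD_GRAYSCALE)

# Smooth first (OpenCV's Canny does not blur internally), then apply Canny
# with low/high hysteresis thresholds; the values here are illustrative.
blurred = cv2.GaussianBlur(img, (5, 5), 1.4)
edges = cv2.Canny(blurred, 50, 150)

cv2.imwrite('edges.png', edges)
```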

Example (Lena image): norm of the gradient, thresholding, then thinning (non-maximum suppression).

Non-maximum suppression. At a pixel q, we have a maximum if its gradient magnitude is larger than the values at both p and r, the two neighbors along the gradient direction; since p and r generally fall between pixels, their values are obtained by interpolation.
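A simplified sketch of non-maximum suppression: instead of interpolating the values at p and r, each pixel is compared with its two neighbors along a quantized gradient direction. The quantization is a simplification for illustration, not the slide's exact method.

```python
import numpy as np

def non_max_suppression(G, alpha):
    """Keep only pixels that are local maxima of G along the gradient direction.

    alpha is the gradient angle in radians, quantized here to 0/45/90/135 degrees.
    """
    H, W = G.shape
    out = np.zeros_like(G)
    angle = np.rad2deg(alpha) % 180
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            a = angle[i, j]
            if a < 22.5 or a >= 157.5:          # roughly 0 deg: compare left/right
                p, r = G[i, j - 1], G[i, j + 1]
            elif a < 67.5:                      # roughly 45 deg
                p, r = G[i - 1, j + 1], G[i + 1, j - 1]
            elif a < 112.5:                     # roughly 90 deg: compare up/down
                p, r = G[i - 1, j], G[i + 1, j]
            else:                               # roughly 135 deg
                p, r = G[i - 1, j - 1], G[i + 1, j + 1]
            if G[i, j] >= p and G[i, j] >= r:
                out[i, j] = G[i, j]
    return out
```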

Edge linking. Assume the marked point is an edge point. We then construct the tangent to the edge curve (which is normal to the gradient at that point) and use it to predict the next edge points (here either r or s).

Discontinuity: Edge Linking. Edge linking becomes an important step after edge detection. How are edges linked? First define a small window (3x3 or 5x5) whose center is a pixel already labeled as an edge, (x0, y0). The other pixels inside the window are then checked by computing the gradient at each. A pixel (x, y) is considered a linked edge point if
|G(x, y) - G(x0, y0)| < E and |Θ(x, y) - Θ(x0, y0)| < A,
where G is the gradient magnitude, Θ is the gradient angle, and E and A are magnitude and angle thresholds, respectively.
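A sketch of this local linking rule, assuming the magnitude G and angle Θ images have already been computed (e.g. with the Sobel-based gradient above) and that E and A are user-chosen thresholds; the function name and window handling are illustrative.

```python
import numpy as np

def link_edges(edge_mask, G, theta, E, A, window=3):
    """Mark pixels inside a window around each edge pixel (x0, y0) as linked
    when |G(x,y) - G(x0,y0)| < E and |theta(x,y) - theta(x0,y0)| < A."""
    half = window // 2
    linked = np.zeros_like(edge_mask, dtype=bool)
    H, W = G.shape
    ys, xs = np.nonzero(edge_mask)
    for y0, x0 in zip(ys, xs):
        y1, y2 = max(0, y0 - half), min(H, y0 + half + 1)
        x1, x2 = max(0, x0 - half), min(W, x0 + half + 1)
        mag_ok = np.abs(G[y1:y2, x1:x2] - G[y0, x0]) < E
        # Note: no wrap-around handling for the angle difference; fine for a sketch.
        ang_ok = np.abs(theta[y1:y2, x1:x2] - theta[y0, x0]) < A
        linked[y1:y2, x1:x2] |= mag_ok & ang_ok
    return linked
```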

Effect of σ (Gaussian kernel spread/size). Comparing the original image with Canny output at different values of σ shows that the choice of σ depends on the desired behavior: a large σ detects large-scale edges, while a small σ detects fine features.

Edge detection is just the beginning… (comparison of the original image, a human segmentation, and the gradient magnitude).

Basic Edge Detection by Using the First-Order Derivative


Advanced Techniques for Edge Detection: the Marr-Hildreth edge detector


Marr-Hildreth Algorithm:
1. Filter the input image with an n×n Gaussian lowpass filter, where n is the smallest odd integer greater than or equal to 6σ.
2. Compute the Laplacian of the image resulting from step 1.
3. Find the zero crossings of the image from step 2.
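A compact sketch of these three steps, using SciPy's Laplacian-of-Gaussian filter for steps 1 and 2 and a simple sign-change test for the zero crossings of step 3; sigma and the slope threshold are illustrative parameters.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def marr_hildreth(image, sigma=2.0, thresh=0.0):
    """Marr-Hildreth edges: Gaussian smoothing + Laplacian, then zero crossings."""
    # Steps 1-2: Laplacian of Gaussian (SciPy fuses the Gaussian lowpass and
    # the Laplacian; the effective kernel support is on the order of 6*sigma).
    log = gaussian_laplace(image.astype(float), sigma)

    # Step 3: zero crossings, i.e. pixels where the LoG response changes sign
    # horizontally or vertically with a sufficiently large jump.
    sign = np.sign(log)
    edges = np.zeros(image.shape, dtype=bool)
    cross_h = (sign[:, :-1] * sign[:, 1:] < 0) & (np.abs(log[:, :-1] - log[:, 1:]) > thresh)
    cross_v = (sign[:-1, :] * sign[1:, :] < 0) & (np.abs(log[:-1, :] - log[1:, :]) > thresh)
    edges[:, :-1] |= cross_h
    edges[:-1, :] |= cross_v
    return edges
```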

Edge Linking Using the Hough Transform. The Hough transform is an effective method for solving linear segmentation problems. Consider the following problem: given a set of n points in a plane, we want to determine which points lie together on a straight line. Solution 1 (brute force): take a pair of points and form the straight line through them, then count how many of the remaining points lie close to this line. Repeat this for all possible pairs of points, each time checking the remaining points against the proposed line. The line with the maximum number of points close to it is the best fit.

Edge Linking Using the Hough Transform. What is the problem with Solution 1? A large number of computations: for n points there are n(n-1)/2 possible pairs to form candidate lines, and for each pair the remaining (n-2) points must be checked against the proposed line. The total number of comparisons is therefore about n(n-1)(n-2)/2, i.e. on the order of n³, which is computationally expensive.

Edge Linking Using the Hough Transform. Hough proposed the following model. The equation of a straight line is y = ax + b, where a and b are the line parameters. For any fixed point (x1, y1), the set of all lines passing through it satisfies b = -a·x1 + y1, a linear equation in a and b that can be plotted in another plane called the parameter space. Thus each image point (x, y) is transformed into a straight line in the parameter space. If the two parameter-space lines representing the points (x1, y1) and (x2, y2) intersect at values (a, b), then this pair (a, b) gives the parameters of the straight line joining the two points in the original space. When all n points are transformed into the parameter space, we simply look for the values of (a, b) where the densest cluster of intersections occurs; these values of a and b define the required line, and the points that voted for them are linked to form it.
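A minimal accumulator sketch in the (a, b) parameter space as described above; note that practical implementations usually use the rho-theta normal form instead, since the slope a is unbounded for near-vertical lines. All ranges, bin counts, and the example points are illustrative.

```python
import numpy as np

def hough_lines_ab(points, a_range=(-5, 5), b_range=(-200, 200), n_a=101, n_b=401):
    """Accumulate votes in (a, b) parameter space for the line y = a*x + b.

    Each point (x, y) votes along the line b = y - a*x in parameter space;
    the accumulator cell with the most votes gives the best-fitting line.
    """
    a_vals = np.linspace(*a_range, n_a)
    b_vals = np.linspace(*b_range, n_b)
    acc = np.zeros((n_a, n_b), dtype=int)
    for x, y in points:
        b = y - a_vals * x   # one vote per candidate slope
        b_idx = np.round((b - b_range[0]) / (b_range[1] - b_range[0]) * (n_b - 1)).astype(int)
        ok = (b_idx >= 0) & (b_idx < n_b)
        acc[np.arange(n_a)[ok], b_idx[ok]] += 1
    i, j = np.unravel_index(np.argmax(acc), acc.shape)
    return a_vals[i], b_vals[j]

# Example: points near y = 2x + 10 plus two outliers.
pts = [(x, 2 * x + 10) for x in range(0, 50, 5)] + [(7, 90), (30, 3)]
print(hough_lines_ab(pts))   # approximately (2.0, 10.0)
```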

Hough Transform

Hough Transform. All parameter-space lines passing through this crowded area are then transformed back to obtain the original points that lie on the linked line with parameters (a, b).