IMAGE ENHANCEMENT Image enhancement refers to the class of image processing operations whose goal is to produce an output digital image that is more suitable for visual examination by a human observer. Image enhancement techniques can be divided into two broad categories: Spatial domain methods, which operate directly on the pixels of an image, and Transform (or frequency) domain methods, which operate on and modify the Fourier transform of an image. Unfortunately, there is no general theory for determining what `good' image enhancement means when it comes to human perception. If it looks good, it is good!
Image Enhancement in Spatial Domain The term spatial domain refers to the image plane itself, and image processing methods in this domain are based on direct manipulation of the pixel values of an image. Spatial domain image enhancement methods are denoted by the expression g(x, y) = T[f(x, y)], where f(x, y) is the input image, g(x, y) is the processed (output) image, and T is an operator on f, defined over some neighborhood of (x, y).
Image Enhancement in Spatial Domain The smallest possible size of the neighborhood is 1 x 1. In this case g depends only on the value of f at a single point, and T becomes an intensity (or gray-level) transformation function of the form s = T(r), where r and s denote the intensity of f and g, respectively, at any point (x, y).
Some Basic Intensity Transformations Functions
Image Negative The negative of an image with gray levels in the range [0, L - 1] is defined as s = L - 1 - r, where L is the maximum number of intensity levels possible in the image and 0 <= r <= L - 1. For a color image, negation is applied individually to the three gray-scale planes, and the three negated planes are then combined to form the output image. This type of processing is particularly suitable for enhancing white or gray detail embedded in dark regions of an image, especially when the black areas are dominant in size.
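The negative transformation s = L - 1 - r can be sketched in a few lines of NumPy (a hypothetical tiny 8-bit image is used here for illustration):

```python
import numpy as np

# Hypothetical 8-bit grayscale image (L = 256 intensity levels).
L = 256
img = np.array([[0, 64], [128, 255]], dtype=np.uint8)

# Negative: s = L - 1 - r, applied element-wise.
# Cast up before subtracting to avoid unsigned-integer wrap-around.
negative = (L - 1 - img.astype(np.int32)).astype(np.uint8)
```

Black (0) maps to white (255) and vice versa, so detail hidden in dark regions becomes bright.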
Example- Image Negation Original Tissue image showing a small lesion (or tumor) Negative Image: gives a better vision to analyze the image
Log Transformation The general form of the log transform is s = c log(1 + r), where c is a constant and it is presumed that r >= 0.
Log Transformation It maps a narrow range of low intensity values in the input into a wider range of output levels, while a wider range of higher gray levels is mapped into a narrower range of output levels. A transformation of this type is used to expand the values of dark pixels in an image while compressing the higher-level values. The log transform thus compresses the dynamic range of images with large variations in pixel values.
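A minimal sketch of the log transform in NumPy, with c chosen (a common convention, assumed here) so that the maximum input maps to the top of an 8-bit range:

```python
import numpy as np

img = np.array([[0, 1, 100, 255]], dtype=np.float64)

# s = c * log(1 + r); choose c so the maximum input maps to 255.
c = 255 / np.log(1 + img.max())
s = c * np.log(1 + img)
```

Note how the dark value r = 1 is pushed up by a factor of roughly 30, while the bright end of the range is compressed.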
Power Law (Gamma) Transformation The power-law transformation has the basic form s = c * r^gamma, where c and gamma are positive constants. Plots of s versus r for various values of gamma are shown here:
Power Law (Gamma) Transformation The earlier diagram can be divided into three categories: gamma < 1 maps a narrow range of dark input values to a wider range of (high) output values; gamma = 1 is the identity transformation; gamma > 1 maps a wider range of input values to a narrow range of output values. A variety of devices used for image capture, printing, and display respond according to a power law. For example, CRT-based devices have gamma values varying between 1.8 and 2.5.
Gamma Correction We know that for gamma = 1, the actual image is displayed on the screen. Monitors with gamma > 1 tend to produce images that are darker than the actual ones. So, in order to display images that are close in appearance to the originals, we simply pre-process them by applying the transformation s = c * r^(1/gamma). This is called gamma correction. Gamma correction is important for displaying an image accurately, as required for further analysis and decision making. It is also required for accurate reproduction of colors, because a display's power-law response changes not only the intensity values but also the ratios of red to green to blue in a color image.
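The cancellation at the heart of gamma correction can be checked numerically. In this sketch a display gamma of 2.2 is assumed (a typical CRT-like value from the slide above), with intensities normalized to [0, 1] and c = 1:

```python
import numpy as np

gamma = 2.2                               # assumed display gamma
img = np.array([[0.0, 0.25, 0.5, 1.0]])   # intensities normalized to [0, 1]

# Pre-process with s = r**(1/gamma) so that the display's r**gamma
# response cancels out and the viewer sees the original intensities.
corrected = img ** (1.0 / gamma)
displayed = corrected ** gamma            # what the monitor effectively shows
```

The corrected image is brighter than the original, exactly compensating for the monitor's darkening response.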
Contrast Stretching It is a process that expands the range of intensity levels in an image so that it spans the full intensity range of the display device. Statistically, contrast is obtained by computing the variance of the intensity values in an image; low-contrast images have a low variance. During acquisition, low contrast may result from improper or poor illumination, wrong camera settings, lack of dynamic range in the image sensor, etc.
Contrast Stretching It is a process that expands the range of intensity levels in an image so that it spans the full intensity range of the display device. General equation of contrast stretching: s = (r - a) * (d - c) / (b - a) + c, such that the pixels in the intensity range [a, b] are mapped to the new intensity range [c, d]. Two Variants: Range Normalization; Intensity or Gray-Level Slicing.
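Range normalization follows directly from the equation above. In this sketch a hypothetical low-contrast image is stretched from its own [min, max] range onto the full 8-bit display range [0, 255]:

```python
import numpy as np

img = np.array([[90, 100], [110, 120]], dtype=np.float64)  # low contrast

# Map [a, b] (the image's own min/max here) linearly onto [c, d] = [0, 255].
a, b = img.min(), img.max()
c, d = 0.0, 255.0
stretched = (img - a) * (d - c) / (b - a) + c
```

The output now spans the full display range, so the variance (and hence the contrast) is greatly increased.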
Intensity or Gray-Level Slicing Used to highlight a specific range of intensities in an image, such as water masses in satellite imagery, or flaws in X-ray images. Two possible ways of implementation are: (1) assign a high value to all gray levels in the range of interest and a low value to all other gray levels; (2) brighten (or darken) the desired range of intensities but leave all other intensity levels unchanged.
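Both implementations can be sketched with a NumPy `where` over a hypothetical range of interest [100, 180]:

```python
import numpy as np

img = np.array([10, 120, 150, 200], dtype=np.uint8)
lo, hi = 100, 180   # hypothetical range of interest

# Approach 1: binary slicing -- high value inside the range, low elsewhere.
binary = np.where((img >= lo) & (img <= hi), 255, 0).astype(np.uint8)

# Approach 2: brighten the range of interest, leave other levels unchanged.
brightened = np.where((img >= lo) & (img <= hi), 255, img).astype(np.uint8)
```

The first approach discards everything outside the range; the second preserves the surrounding context.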
Bit Plane Slicing Till now we were observing the effect of manipulating intensity-level ranges on the overall appearance of an image. Bit plane slicing, on the other hand, highlights the contribution made by specific bits of the pixels in an image.
Bit Plane Slicing The four higher-order bit planes (especially planes 7 and 8) contain a significant amount of the image information, i.e., structural and edge information. The lower-order planes contribute more subtle intensity details, i.e., texture information and fine details (slight variations in the intensity values of the background). Decomposing an image into its bit planes is useful for analyzing the relative importance of each bit, a process that aids in image compression, in which fewer than all planes are used to reconstruct the image.
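Extracting a bit plane is a shift and a mask; reconstruction from only the high-order planes can be sketched as follows (hypothetical pixel values, planes numbered 0..7 from the least significant bit):

```python
import numpy as np

img = np.array([[0b10110101, 0b01001010]], dtype=np.uint8)  # 181, 74

# Extract bit plane k (k = 0 is the least significant bit).
def bit_plane(image, k):
    return (image >> k) & 1

# Reconstruct using only the four highest-order planes (4..7):
# lower-order detail is lost, but the gross structure survives.
recon = sum((bit_plane(img, k).astype(np.uint16) << k) for k in range(4, 8))
```

Dropping the four low-order planes here changes 181 to 176 and 74 to 64, i.e., small perturbations relative to the full intensity range.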
Bit Plane Slicing: Data Hiding
Bit Plane Slicing: Tamper Detection
Histogram Processing The histogram of a digital image is given by h(r_k) = n_k, where r_k is the k-th intensity level and n_k is the number of pixels in the image with intensity r_k. The normalized histogram is given by p(r_k) = n_k / MN, where M and N are the image dimensions and p(r_k) is an estimate of the probability of occurrence of intensity level r_k in the image.
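Both definitions above can be sketched directly with NumPy's `bincount` on a tiny hypothetical image with L = 4 levels:

```python
import numpy as np

img = np.array([[0, 1], [1, 3]], dtype=np.uint8)   # tiny 2x2 image
L = 4
M, N = img.shape

# Unnormalized histogram: h(r_k) = n_k.
h = np.bincount(img.ravel(), minlength=L)

# Normalized histogram: p(r_k) = n_k / (M * N), an estimate of the
# probability of occurrence of intensity r_k.
p = h / (M * N)
```

As expected for a probability estimate, the normalized histogram sums to 1.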
Analysis of Histogram-I Dark Image
Analysis of Histogram-II Bright Image
Analysis of Histogram-III Low Contrast Image
Analysis of Histogram-IV High Contrast Image
Histogram Equalization The objective of histogram equalization is to generate an image whose pixels: Tend to occupy the entire range of possible intensity levels; Tend to be (almost) uniformly distributed, with very few vertical lines being much higher than the others. Such an image will have: High contrast; A large variety of gray tones; A great deal of gray-level detail.
Histogram Equalization The transformation function for histogram equalization is of the form s = T(r), 0 <= r <= L - 1, such that: T(r) is a strictly monotonically increasing function in the interval [0, L - 1]; and 0 <= T(r) <= L - 1 for 0 <= r <= L - 1. In the discrete case, the gray values in the histogram-equalized image are given by s_k = (L - 1) * sum_{j=0}^{k} p_r(r_j), k = 0, 1, ..., L - 1. Since in practice we deal with integer intensity values, we are forced to round all results to their nearest integer intensity values; therefore, strict monotonicity is not always satisfied.
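The discrete equalization rule s_k = (L - 1) * cumulative sum of p_r can be sketched end to end (a small hypothetical image with L = 8 levels, skewed toward dark values):

```python
import numpy as np

L = 8                                      # small intensity range for clarity
img = np.array([[0, 0, 1], [1, 1, 7]])    # skewed toward dark values

# s_k = (L - 1) * sum_{j<=k} p_r(r_j): scaled cumulative distribution,
# rounded to the nearest integer level.
h = np.bincount(img.ravel(), minlength=L)
p = h / img.size
cdf = np.cumsum(p)
s = np.round((L - 1) * cdf).astype(np.uint8)

equalized = s[img]                         # look up the new level per pixel
```

The dark levels 0 and 1 are spread out to 2 and 6, stretching the occupied intensities across the available range.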
Neighborhood Operations Neighborhood operations simply operate on a larger neighborhood of pixels than point operations. Neighborhoods are mostly a rectangle around a central pixel. Any size of rectangle and any shape of filter are possible.
Simple Neighborhood Operations Some simple neighborhood operations include: Min: set the pixel value to the minimum in the neighbourhood; Max: set the pixel value to the maximum in the neighbourhood; Median: the median value of a set of numbers is the midpoint value in that set (e.g., from the set [1, 7, 15, 18, 24], 15 is the median). Sometimes the median works better than the average.
Simple Neighborhood Operations Example Original image (intensity values):
123 127 128 119 115 130
140 145 148 153 167 172
133 154 183 192 194 191
194 199 207 210 198 195
164 170 175 162 173 151
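Using the example values above, the min, max, and median over one 3x3 neighbourhood can be sketched as:

```python
import numpy as np

img = np.array([[123, 127, 128, 119, 115, 130],
                [140, 145, 148, 153, 167, 172],
                [133, 154, 183, 192, 194, 191],
                [194, 199, 207, 210, 198, 195],
                [164, 170, 175, 162, 173, 151]])

# 3x3 neighbourhood centred on the pixel at row 2, col 3 (value 192):
window = img[1:4, 2:5].ravel()

# The three simple neighbourhood operations at that pixel:
lo = window.min()
hi = window.max()
median = int(np.median(window))   # midpoint of the nine sorted values
```

Sorting the nine values 148, 153, 167, 183, 192, 194, 198, 207, 210 puts 192 in the middle, so the median output at this pixel is 192.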
The Spatial Filtering Process Consider a 3x3 filter with coefficients r, s, t, u, v, w, x, y, z (v at the centre) and a 3x3 neighbourhood of original image pixels a, b, c, d, e, f, g, h, i (e at the centre). The processed value of the centre pixel is e_processed = v*e + r*a + s*b + t*c + u*d + w*f + x*g + y*h + z*i. The above is repeated for every pixel in the original image to generate the filtered image.
Spatial Filtering: Equation Form Filtering can be written in equation form as g(x, y) = sum_{s=-a}^{a} sum_{t=-b}^{b} w(s, t) f(x + s, y + t), where w is a filter of size (2a + 1) x (2b + 1). Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Smoothing Spatial Filters One of the simplest spatial filtering operations we can perform is a smoothing operation: simply average all of the pixels in a neighborhood around a central value. Especially useful in removing noise from images; also useful for highlighting gross detail. Simple 3x3 averaging filter (all coefficients 1/9):
1/9 1/9 1/9
1/9 1/9 1/9
1/9 1/9 1/9
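The noise-suppressing effect of the averaging filter can be sketched at a single pixel (a hypothetical neighbourhood with one noisy value):

```python
import numpy as np

img = np.array([[10, 10, 10],
                [10, 100, 10],   # 100 is an isolated noisy pixel
                [10, 10, 10]], dtype=np.float64)

kernel = np.full((3, 3), 1 / 9)  # simple 3x3 averaging filter

# Filter response at the centre pixel: sum of element-wise products.
center = np.sum(kernel * img)
```

The noisy value 100 is pulled down to 20, much closer to its surroundings, at the cost of some blurring.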
Weighted Smoothing Filters More effective smoothing filters can be generated by giving different pixels in the neighborhood different weights in the averaging function. Pixels closer to the central pixel are more important. Often referred to as weighted averaging. Weighted averaging filter:
1/16 2/16 1/16
2/16 4/16 2/16
1/16 2/16 1/16
Averaging Filter Vs. Median Filter Example Filtering is often used to remove noise from images Sometimes a median filter works better than an averaging filter Original Image With Noise Image After Averaging Filter Image After Median Filter Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Strange Things Happen At The Edges! At the edges of an image we are missing the pixels needed to form a complete neighbourhood.
Strange Things Happen At The Edges! (cont…) There are a few approaches to dealing with missing edge pixels: Omit missing pixels (only works with some filters; can add extra code and slow down processing); Pad the image (typically with either all white or all black pixels); Replicate border pixels; Truncate the image; Allow pixels to wrap around the image (can cause some strange image artefacts).
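Three of these border strategies map directly onto NumPy's `pad` modes, sketched here on a tiny hypothetical image:

```python
import numpy as np

img = np.array([[1, 2], [3, 4]])

# Pad one pixel on every side with three different strategies:
zero_pad  = np.pad(img, 1, mode="constant", constant_values=0)  # all black
replicate = np.pad(img, 1, mode="edge")                          # copy border
wrap      = np.pad(img, 1, mode="wrap")                          # wrap around
```

The corner values show the difference: zero padding inserts 0, replication copies the nearest corner pixel, and wrapping pulls in the value from the opposite corner.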
Strange Things Happen At The Edges! (cont…) Original Image Filtered Image: Zero Padding Filtered Image: Replicate Edge Pixels Filtered Image: Wrap Around Edge Pixels Images taken from Gonzalez & Woods, Digital Image Processing (2002)
Correlation & Convolution The filtering we have been talking about so far is referred to as correlation, with the filter itself referred to as the correlation kernel. Convolution is a similar operation, with just one subtle difference: the kernel is rotated by 180 degrees before being applied. For the 3x3 filter r, s, t, u, v, w, x, y, z and original image pixels a, b, c, d, e, f, g, h, i this gives e_processed = v*e + z*a + y*b + x*c + w*d + u*f + t*g + s*h + r*i. For symmetric filters it makes no difference.
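The difference only shows up for an asymmetric kernel, which this sketch uses deliberately (hypothetical values, single-pixel response):

```python
import numpy as np

img = np.array([[1, 2, 3],
                [4, 5, 6],
                [7, 8, 9]], dtype=np.float64)
kernel = np.array([[1, 0, 0],
                   [0, 0, 0],
                   [0, 0, 2]], dtype=np.float64)  # deliberately asymmetric

# Correlation: slide the kernel as-is over the neighbourhood.
corr = np.sum(kernel * img)

# Convolution: rotate the kernel 180 degrees first (flip both axes).
conv = np.sum(np.flip(kernel) * img)
```

Correlation pairs the top-left coefficient with the top-left pixel (1*1 + 2*9 = 19), while convolution pairs the rotated kernel with the same pixels (2*1 + 1*9 = 11). For a symmetric kernel the two sums would be identical.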
Image Smoothing Example The image on the right is an original image of size 500x500 pixels. The images in the subsequent slides show it after filtering with averaging filters of increasing sizes: 3, 5, 9, 15 and 35. Notice how detail begins to disappear.
Another Smoothing Example By smoothing the original image we get rid of much of the finer detail, leaving only the gross features for thresholding. Images taken from Gonzalez & Woods, Digital Image Processing (2002) Original Image | Smoothed Image | Thresholded Image
Sharpening Filters
Sharpening Filters The objective is to sharpen the image by enhancing/highlighting the discontinuities in the image. Discontinuities in an image are the points where there is an abrupt change in intensity among surrounding pixels. Forms of discontinuities: Isolated points; Intensity ramps; Intensity steps. The pictorial representation of these will be shown in the upcoming slides.
Behavior of Derivatives at Image Discontinuities The behavior of derivatives during the transitions into and out of these discontinuities help us to detect image features (like lines, edges, etc.) and noise.
Sharpening Filters Image smoothing is analogous to spatial integration; image sharpening is analogous to spatial differentiation; spatial differentiation measures the rate of change of intensity.
Sharpening Filters The strength (magnitude) of the response of a derivative operator is proportional to the degree of intensity change (or discontinuity): the greater the change in intensity, the higher the magnitude of the derivative; the smaller the intensity change, the lower the value of the derivative. In constant areas (with no intensity change), the derivative is zero.
Sharpening Filters The derivative of a function is given by df/dx = lim_{dx -> 0} [f(x + dx) - f(x)] / dx. For digital images we take dx = 1, because the minimum spacing between two adjacent pixels is 1. Image sharpening is performed by derivative operators: 1st-order derivative: df/dx = f(x + 1) - f(x).
Sharpening Filters Image sharpening is performed by derivative operators: 2nd-order derivative: d2f/dx2 = f(x + 1) + f(x - 1) - 2f(x).
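Both discrete derivatives can be sketched on a 1-D intensity profile containing a ramp followed by a constant run (hypothetical values):

```python
import numpy as np

# Intensity profile: constant, then a ramp (indices 1..5), then constant.
f = np.array([5, 5, 6, 7, 8, 9, 9, 9], dtype=np.float64)

# First derivative:  f(x + 1) - f(x)
d1 = f[1:] - f[:-1]

# Second derivative: f(x + 1) + f(x - 1) - 2 f(x)
d2 = f[2:] + f[:-2] - 2 * f[1:-1]
```

Note the behaviour promised in the next slides: the first derivative is nonzero all along the ramp, while the second derivative responds only at the ramp's onset (+1) and end (-1), and is zero everywhere else.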
Intensity Profile Example
Behavior of Derivatives at Image Discontinuities
Discontinuity                  | 1st-Order Derivative | 2nd-Order Derivative
Constant regions               | Zero                 | Zero
Onset of ramp or step          | Non-zero             | Non-zero
End of ramp or step            | No condition defined | Non-zero
Along ramps (constant slope)   | Non-zero             | Zero
Edge Models From left to right, models (ideal representations) of a step, a ramp, and a roof edge , and their corresponding intensity profiles.
Points to Remember… Edges in digital images are often ramp-like transitions in intensity, in which case the 1st-order derivative of the image results in thick edges, because the derivative is nonzero all along a ramp. On the other hand, the 2nd-order derivative produces a double-edge effect (a positive and a negative response) separated by zeros at ramp and step transitions. From this, we conclude that the second derivative enhances fine detail much better than the first derivative, a property that is ideally suited for sharpening images.
Points to Remember… The sign of 2 nd order derivative can be used to determine whether transition into an edge is from light to dark (negative to positive) or dark to light (positive to negative).
The Laplacian Filter An implementation of the 2-D, second-order derivative. It is an isotropic filter, whose response is independent of the direction of the discontinuities in the image to which it is applied. In other words, isotropic filters are rotation invariant, in the sense that rotating the image and then applying the filter gives the same result as applying the filter to the image first and then rotating the result.
The Laplacian Filter The basic Laplacian mask,
0  1  0
1 -4  1
0  1  0
gives an isotropic result for rotations in increments of 90 degrees. The same holds for its sign-reversed version:
 0 -1  0
-1  4 -1
 0 -1  0
The Laplacian Filter Other variants of the Laplacian filter include the mask
1  1  1
1 -8  1
1  1  1
which gives an isotropic result for rotations in increments of 45 degrees, and its sign-reversed version:
-1 -1 -1
-1  8 -1
-1 -1 -1
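The basic Laplacian mask's behaviour can be sketched at a single pixel: a strong response on an isolated bright point, and exactly zero over a constant region (hypothetical values):

```python
import numpy as np

img = np.array([[10, 10, 10],
                [10, 50, 10],    # isolated bright point
                [10, 10, 10]], dtype=np.float64)

laplacian = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=np.float64)

# Response at the centre pixel: large in magnitude for the bright point...
response = np.sum(laplacian * img)

# ...and exactly zero over a constant region.
flat_response = np.sum(laplacian * np.full((3, 3), 10.0))
```

The response 4*10 - 4*50 = -160 is negative because the centre coefficient of this mask is negative, which matters for the subtraction step described below.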
The Laplacian Filter Original Image Laplacian Filtered Image Laplacian Filtered Image Scaled for Display
But That Is Not Very Enhanced! The result of Laplacian filtering is not an enhanced image by itself. We have to do more work to get our final image: subtract the Laplacian result from the original image (for a mask with a negative centre coefficient) to generate the final sharpened, enhanced image.
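The subtraction step can be sketched at one pixel (hypothetical values; the mask used has a negative centre coefficient, so the Laplacian is subtracted):

```python
import numpy as np

img = np.array([[10., 10, 10, 10],
                [10, 10, 10, 10],
                [10, 10, 50, 10],   # a bright detail at row 2, col 2
                [10, 10, 10, 10]])

lap = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], dtype=np.float64)

# Laplacian response at the bright pixel (3x3 neighbourhood around it):
resp = np.sum(lap * img[1:4, 1:4])

# Sharpened value: g = f - laplacian(f) for this mask, so the bright
# detail is boosted further above its surroundings.
sharpened = img[2, 2] - resp
```

The bright pixel jumps from 50 to 210, while pixels in constant regions (where the Laplacian is zero) are left unchanged.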
Laplacian Image Enhancement Example of Double Edge Effect in Laplacian
Laplacian Image Enhancement Example of Double Edge Effect in Laplacian with mask having positive center coefficient
Laplacian Image Enhancement Example of Double Edge Effect in Laplacian with mask having negative center coefficient
The Gradient Filter An implementation of 2-D, first-order derivatives. The gradient grad(f) = [g_x, g_y]^T = [df/dx, df/dy]^T is a vector quantity having both magnitude and direction. It points in the direction of the greatest rate of change of f at location (x, y). The magnitude (length) of the gradient vector, denoted M(x, y) = sqrt(g_x^2 + g_y^2), signifies the rate of intensity change in the direction of the gradient vector.
The Gradient Filter The components of the gradient vector are derivatives, so they are linear operators. However, the magnitude of the vector is not linear, because of the squaring and square-root operations. The partial derivatives themselves are not rotation invariant (isotropic), but the magnitude of the gradient vector is. However, as in the case of the Laplacian, the isotropic properties of the discrete gradient are preserved only for a limited number of rotational increments that depend on the filter masks used to approximate the derivatives.
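A minimal sketch of the gradient components and magnitude, using simple forward differences (rather than Sobel-style masks) on a hypothetical image with a single vertical edge:

```python
import numpy as np

img = np.array([[10, 10, 20, 20],
                [10, 10, 20, 20],
                [10, 10, 20, 20]], dtype=np.float64)

# Forward differences approximating the gradient components.
gx = img[:, 1:] - img[:, :-1]    # horizontal intensity change
gy = img[1:, :] - img[:-1, :]    # vertical intensity change

# Gradient magnitude at a pixel sitting on the vertical edge:
M = np.sqrt(gx[1, 1] ** 2 + gy[1, 1] ** 2)
```

On this edge g_x = 10 and g_y = 0, so the gradient points horizontally, across the edge, and its magnitude equals the step height.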