image processing image representation1.pptx

SamruddhiChillure1 10 views 65 slides Sep 16, 2024

About This Presentation

Advanced Computer Vision, Unit 1


Slide Content

Image representation in computer vision refers to the process of converting an image into a numerical or symbolic form that can be easily processed by a computer. Images are typically represented as a collection of pixels, where each pixel corresponds to a specific color or intensity value.

Mathematical model used to describe images and signals: a signal is a function of some variable with physical meaning (temperature, pressure distribution, distance from the observer, etc.). It can be one-dimensional (depending on time), two-dimensional (depending on two coordinates in the plane), three-dimensional (describing a volumetric object in space, etc.), or higher-dimensional. A scalar function may be sufficient to describe a monochromatic image; image processing uses vector functions to represent, for example, color images consisting of three component colors.
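The scalar vs. vector distinction above maps directly onto array shapes. A minimal sketch with NumPy (the array shapes and values are illustrative, not from the slides): a monochromatic image is a scalar function sampled into a 2D array, while a color image is a vector function sampled into a 3D array with one component per channel.

```python
import numpy as np

# A monochromatic image: one scalar brightness value per (row, col) sample.
gray = np.zeros((4, 6), dtype=np.uint8)   # 4 rows, 6 columns
gray[1, 2] = 200                          # brightness at point (1, 2)

# A color image: a vector function, three component values per sample (R, G, B).
color = np.zeros((4, 6, 3), dtype=np.uint8)
color[1, 2] = (255, 0, 0)                 # pure red at point (1, 2)

print(gray.shape)    # (4, 6)    -> scalar-valued: f(x, y)
print(color.shape)   # (4, 6, 3) -> vector-valued: f(x, y) = (r, g, b)
```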

Functions may be 1) continuous: both domain and range are continuous; or 2) discrete (digital): both domain and range are discrete. The image on the human retina or a TV sensor can be modelled as a function of two variables f(x, y), where x and y are coordinates in the plane, or perhaps of three variables f(x, y, t), where t is time.

Images are acquired in many ways; color is now the norm, although we present algorithms for monochromatic images. Examples include: 1) cameras operating in the infrared part of the electromagnetic spectrum (for night surveillance); 2) other bands of the electromagnetic spectrum; 3) terahertz imaging; 4) imaging outside the EM spectrum (light), which is also common in the medical domain, where data sets are generated through A) magnetic resonance (MR), B) computed tomography (CT), C) ultrasound, etc.

All of these operations generate huge amounts of data requiring analysis and understanding, and with increasing frequency these arrays have three or more dimensions.

Continuous image function: in a gray-scale image, the function values correspond to brightness at each image point. The function value can express other physical quantities as well (temperature, pressure distribution, distance from the observer, etc.). Brightness integrates different optical quantities. The image on the human eye or a TV camera sensor is 2D; we can call these intensity images (bearing information about brightness). A 2D image is the result of projecting a 3D scene: the 2D intensity image results from a perspective projection of the 3D scene (through a pinhole camera). This nonlinear perspective projection is often approximated by a linear parallel (orthographic) projection.
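The two projections mentioned above can be written in a few lines. A hedged sketch (coordinate conventions and the focal length f=1.0 are assumptions for illustration): pinhole perspective divides by depth Z, which makes it nonlinear, while the orthographic approximation simply drops Z.

```python
def perspective(X, Y, Z, f=1.0):
    # Pinhole-camera projection: nonlinear, because image
    # coordinates depend on the depth Z of the scene point.
    return f * X / Z, f * Y / Z

def orthographic(X, Y, Z):
    # Parallel (orthographic) projection: depth is dropped; a linear
    # approximation valid when depth variation within the scene is
    # small relative to the distance from the camera.
    return X, Y

print(perspective(2.0, 1.0, 4.0))   # (0.5, 0.25): farther points project smaller
print(perspective(2.0, 1.0, 8.0))   # (0.25, 0.125)
print(orthographic(2.0, 1.0, 8.0))  # (2.0, 1.0): independent of depth
```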

Image digitization: an image to be processed by computer must be represented using a discrete data structure, e.g. a matrix. An image captured by a sensor, expressed as a continuous function f(x, y), is sampled into a matrix with M rows and N columns. Image quantization assigns an integer value to each continuous sample: the range of the continuous image function f(x, y) is split into k intervals. (The Nyquist criterion requires that the sampling frequency be at least twice the highest frequency contained in the signal, or information about the signal will be lost.) The finer the sampling (i.e. the larger M and N) and the quantization (the larger k), the better the approximation of the continuous image function f(x, y) that is achieved.
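Sampling into an M x N matrix and quantizing into k levels can be sketched directly from the description above. This is a minimal illustration, not the slides' implementation; the choice of a [0, 1) function range and the toy ramp image are assumptions.

```python
import numpy as np

def digitize(f, M, N, k):
    """Sample a continuous image function f(x, y) on [0,1]x[0,1] into an
    M x N matrix, then quantize its [0, 1) range into k integer levels."""
    xs = np.linspace(0.0, 1.0, M)
    ys = np.linspace(0.0, 1.0, N)
    samples = np.array([[f(x, y) for y in ys] for x in xs])  # sampling
    return np.clip((samples * k).astype(int), 0, k - 1)      # quantization

# A smooth continuous "image": brightness ramps up with x and y.
img = digitize(lambda x, y: 0.5 * (x + y), M=4, N=4, k=8)
print(img)
```

Larger M, N, and k approximate the continuous ramp more closely, exactly as the text states.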

Image sampling poses two questions: 1) the sampling period must be determined (the distance between two neighboring sampling points in the image); 2) the geometric arrangement of the sampling points (the sampling grid) must be set.

Sampling

Low-level processing: uses very little knowledge about the content of images. Low-level methods often include image compression and pre-processing methods such as noise filtering, edge extraction, and image sharpening. Its input, an image captured by a TV camera, is 2D in nature.
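Edge extraction, one of the low-level methods named above, can be illustrated with simple finite differences (a minimal stand-in for real gradient operators such as Sobel; the test image is an assumption for illustration).

```python
import numpy as np

def edge_magnitude(img):
    """Approximate horizontal and vertical gradients with finite
    differences, then combine them into an edge-strength map."""
    gx = np.zeros_like(img, dtype=float)
    gy = np.zeros_like(img, dtype=float)
    gx[:, 1:] = np.diff(img, axis=1)   # brightness change along columns
    gy[1:, :] = np.diff(img, axis=0)   # brightness change along rows
    return np.hypot(gx, gy)            # gradient magnitude per pixel

# Test image: dark left half, bright right half -> one vertical edge.
img = np.zeros((5, 6))
img[:, 3:] = 1.0
edges = edge_magnitude(img)
print(edges[2])   # nonzero only at the column where brightness jumps
```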

In digital image processing and computer vision, image segmentation is the process of partitioning a digital image into multiple image segments, also known as image regions or image objects (sets of pixels).
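The simplest instance of such a partition into segments is brightness thresholding. A hedged sketch (the threshold value and the toy image are assumptions; real segmentation methods are far more elaborate): every pixel is assigned to one of two segments depending on whether it exceeds a threshold.

```python
import numpy as np

def threshold_segment(img, t):
    """Partition pixels into two segments (labels 0 and 1) by a
    brightness threshold - a minimal form of image segmentation."""
    return (img >= t).astype(int)

img = np.array([[ 10,  20, 200],
                [ 30, 220, 210],
                [ 15,  25, 205]])
labels = threshold_segment(img, t=128)
print(labels)
# [[0 0 1]
#  [0 1 1]
#  [0 0 1]]
```

Each connected run of equal labels corresponds to one image region (set of pixels) in the sense of the definition above.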

High-level processing: is based on knowledge, goals, and plans for how to achieve those goals, and artificial intelligence methods are widely applicable. High-level computer vision tries to imitate human cognition and the ability to make decisions according to the information contained in the image. High-level vision begins with some formal model of the world; the reality perceived in the form of digitized images is then compared to the model.

A match is attempted, and when differences emerge, partial matches (or sub-goals) are sought to overcome the mismatches; the computer switches to low-level image processing to find the information needed to update the model. This process is repeated iteratively, and 'understanding' an image thereby becomes a cooperation between top-down and bottom-up processes. A feedback loop is introduced in which high-level partial results create tasks for low-level image processing, and the iterative image-understanding process should eventually converge to the global goal. Computer vision is expected to solve very complex tasks, the goal being to obtain results similar to those provided by biological systems.
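The feedback loop described above can be caricatured in a few lines. This is a purely hypothetical toy, not the slides' method: the "model" is a single predicted brightness value, the "low-level processing" is a mean measurement, and a mismatch feeds the measurement back into the model until prediction and reality converge.

```python
def understand(image, model, tol=1.0, max_iters=20):
    """Toy sketch of the top-down / bottom-up loop: compare a model
    prediction to a low-level measurement, and on a mismatch update
    the model; repeat until the two converge."""
    for _ in range(max_iters):
        prediction = model                    # top-down: model's expectation
        measured = sum(image) / len(image)    # bottom-up: low-level measurement
        if abs(prediction - measured) < tol:  # match achieved: global goal
            return model
        model = (model + measured) / 2        # feedback: update the model
    return model

print(understand([100, 120, 110], model=0.0))  # converges near 110
```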

Continued: low-level and high-level processing