Introduction
• Remote sensing is an exploratory science: it provides images of areas in a fast and cost-efficient manner, and attempts to show "what is happening right now" in a study area.
• Remote: observation is done at a distance, without physical contact with the object of interest.
• Sensing: detection of energy, such as light or another form of electromagnetic energy.
Definition
• "The science and art of obtaining information about an object, area, or phenomenon through the analysis of data acquired by a device that is not in contact with the object, area, or phenomenon under investigation." (L&K, 1994)
• The term remote sensing means the sensing of the Earth's surface from space by making use of the properties of electromagnetic waves emitted or reflected by the sensed objects, for the purpose of improving natural resources management, land use and the protection of the environment. (UN, 1999)
History
• 1609 - Galileo introduces the telescope to astronomy
• 1827 - first photograph
• 1858 - first aerial photograph, from a hot-air balloon
• 1861-1865 - balloon photography used in the American Civil War
• 1888 - 'rocket' cameras
• 1903 - pigeon-mounted camera patented
• 1906 - photograph from a kite
• 1908 - first photographs from an airplane
• 1909 - Dresden International Photographic Exhibition
• 1914-1918 - World War I
• 1914-1945 - plane-mounted cameras in WWI and WWII
1858 - First aerial (balloon) photograph: a picture of Paris. [Figure: actual pigeon-camera pictures]
San Francisco from a kite, 1906
History (cont.)
• 1957 - Sputnik-1
• 1960 - first meteorological satellite, TIROS-1, launched
• 1967 - NASA 'Earth Resources Technology Satellite' programme
• 1972 - ERTS-1 (Landsat-1) launched
• 1970-1980 - rapid advances in digital image processing
• 1986 - SPOT, the French Earth observation satellite
• 1980s - development of hyperspectral sensors
• 1990s - global remote sensing systems
Why Remote Sensing?
• Systematic data collection
• Information about the three dimensions of real objects
• Repeatability
• Global coverage
• Sometimes the only solution for otherwise inaccessible areas
• Multipurpose information
Remote Sensing Process
• Energy source or illumination (A) - the first requirement for remote sensing is an energy source that illuminates or provides electromagnetic energy to the target of interest.
• Radiation and the atmosphere (B) - as the energy travels from its source to the target, it interacts with the atmosphere it passes through. This interaction may take place a second time as the energy travels from the target to the sensor.
• Interaction with the target (C) - once the energy makes its way to the target through the atmosphere, it interacts with the target depending on the properties of both the target and the radiation.
• Recording of energy by the sensor (D) - after the energy has been scattered by, or emitted from, the target, a sensor (remote - not in contact with the target) is required to collect and record the electromagnetic radiation.
• Transmission, reception and processing (E) - the energy recorded by the sensor has to be transmitted, often in electronic form, to a receiving and processing station where the data are processed into an image (hardcopy and/or digital).
• Interpretation and analysis (F) - the processed image is interpreted, visually and/or digitally, to extract information about the target that was illuminated.
• Application (G) - the final element of the remote sensing process is achieved when we apply the information extracted from the imagery about the target in order to better understand it, reveal new information, or assist in solving a particular problem.
Applications
• Land use and land cover
• Geology and soils
• Agriculture
• Forestry
• Water and snow resources
• Urban and regional planning
• Wildlife ecology
• Archaeology
• Environmental assessment
• Natural disasters
• Ocean and weather monitoring
Remote Sensing Sensors
Passive sensors - a passive system records energy reflected or emitted by a target illuminated by the Sun, e.g. normal photography and most optical satellite sensors.
Active sensors - an active system illuminates the target with its own energy and measures the reflection, e.g. radar sensors.
Passive remote sensing
Does not employ any external source of energy. Measures either radiation reflected from the Sun (can be operated only during daytime) or radiation emitted from the surface (day/night operation). Suffers from the variable illumination conditions of the Sun and the influence of atmospheric conditions.
Active remote sensing
Has its own source of energy. Active sensors emit a controlled beam of energy to the surface and measure the amount of energy reflected back to the sensor. Controlled illumination signal; day/night operation.
Passive sensors: collect electromagnetic radiation in the visible and infrared parts of the spectrum:
• Aerial photographs
• Low resolution: Landsat, SPOT, IRS
• High resolution: QuickBird, IKONOS
Active sensors: generate their own radiation:
• Airborne RADAR
• Spaceborne RADAR: ERS-1/2, Radarsat
• LiDAR (laser scanner)
Fundamental Physics of Remote Sensing
Remote sensing relies on the measurement of electromagnetic (EM) energy. The most important source of EM energy is the Sun. Some sensors detect energy emitted by the Earth itself or provide their own energy (radar). All matter reflects, absorbs, transmits and emits EM radiation in a unique way; this behaviour is called its spectral characteristics. Two characteristics of electromagnetic radiation are particularly important for understanding remote sensing: wavelength and frequency.
The wavelength is the length of one wave cycle, which can be measured as the distance between successive wave crests. Wavelength is usually represented by the Greek letter lambda (λ) and is measured in metres (m) or some fraction of a metre such as nanometres (nm), micrometres (µm) or centimetres (cm). Frequency refers to the number of wave cycles passing a fixed point per unit of time. Frequency is normally measured in hertz (Hz), equivalent to one cycle per second, and various multiples of hertz.
Wavelength and frequency are related by the following formula:
ν = c / λ
where ν = frequency, λ = wavelength, and c = speed of light = 3.00 × 10^8 m/s.
The two are therefore inversely related: the shorter the wavelength, the higher the frequency; the longer the wavelength, the lower the frequency.
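As a quick numerical check of ν = c/λ, here is a minimal Python sketch (the red-light wavelength used in the example is just an illustrative choice):

```python
# Minimal sketch: frequency from wavelength, nu = c / lambda
C = 3.00e8  # speed of light in m/s

def frequency_hz(wavelength_m: float) -> float:
    """Return the frequency in Hz for a wavelength given in metres."""
    return C / wavelength_m

# Example: red light at 0.65 micrometres (0.65e-6 m)
print(frequency_hz(0.65e-6))   # ~4.6e14 Hz
```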
Electromagnetic spectrum (EMS)
• The electromagnetic spectrum (EMS) is the array of all EM radiation, which travels at the velocity of light and is characterised by its wavelength and frequency.
• The electromagnetic spectrum ranges from the shorter wavelengths (including gamma rays and X-rays) to the longer wavelengths (including microwaves and broadcast radio waves).
• Several regions of the electromagnetic spectrum are useful for remote sensing.
[Figure: the electromagnetic spectrum plotted against wavelength - cosmic rays, gamma rays, UV, the visible spectrum (blue, green, red, roughly 0.4-0.7 µm), infrared (together forming the optical range), then microwaves, TV/radio and electric power]
Ultraviolet Range
• This radiation lies just beyond the violet portion of the visible wavelengths.
• Some Earth surface materials, primarily rocks and minerals, emit visible light when illuminated by UV radiation.
Visible Range
• The light that our eyes - our "remote sensors" - can detect is the visible part of the spectrum.
• The visible wavelengths cover a range from approximately 0.4 to 0.7 µm. The longest visible wavelength is red and the shortest is violet. This is the only portion of the spectrum we can associate with the concept of colour.
Violet: 0.4-0.446 µm, Blue: 0.446-0.500 µm
Green: 0.500-0.578 µm, Yellow: 0.578-0.592 µm
Orange: 0.592-0.620 µm, Red: 0.620-0.7 µm
► Visible blue band (0.45-0.52 µm)
► Visible green band (0.52-0.60 µm)
► Visible red band (0.63-0.69 µm)
► Panchromatic band (0.50-0.90 µm)
Panchromatic band (0.50-0.90 µm)
► Wide range of sensitivity
► Visible to near IR
► Higher spatial resolution
► Can be combined with other multispectral bands
Visible blue band (0.45-0.52 µm)
► Greatest water penetration
► Greatest atmospheric scattering
► Greatest absorption
► Used for: water depth, water characteristics, detection of subsurface features, soil and vegetation discrimination
Visible green band (0.52-0.60 µm)
► Vegetation discrimination
► Urban infrastructure
► Less affected by atmospheric scattering
► Sediment and chlorophyll concentration
Visible red band (0.63-0.69 µm)
► Chlorophyll absorption band of healthy green vegetation
► Vegetation type and plant condition
► Least affected by atmospheric scattering
► Less water penetration but good near-surface information, i.e. water quality, sediment and chlorophyll
Infrared Range
o The infrared region spans approximately 0.7 µm to 100 µm - more than 100 times as wide as the visible portion.
o It is divided into two categories based on their radiation properties: the reflected IR and the emitted or thermal IR.
o The reflected IR region is used for remote sensing purposes in ways very similar to radiation in the visible portion.
o The reflected IR covers wavelengths from approximately 0.7 µm to 3.0 µm.
o The thermal IR region is quite different from the visible and reflected IR portions, as this energy is essentially the radiation emitted from the Earth's surface in the form of heat.
o The thermal IR covers wavelengths from approximately 3.0 µm to 100 µm.
Microwave Range
• The portion of the spectrum of more recent interest to remote sensing is the microwave region, from about 1 mm to 1 m.
• This covers the longest wavelengths used for remote sensing. The shorter wavelengths have properties similar to the thermal infrared region, while the longer wavelengths approach the wavelengths used for radio broadcasts.
• Because of their long wavelengths compared to the visible and infrared, microwaves have special properties that are important for remote sensing.
• Longer-wavelength microwave radiation can penetrate through cloud, fog, haze, etc., as the longer wavelengths are not susceptible to the atmospheric scattering which affects shorter optical wavelengths.
Interaction with the target
• There are three forms of interaction that can take place when energy strikes a target:
• Absorption (A): radiation is absorbed by the target
• Transmission (T): radiation passes through the target
• Reflection (R): radiation "bounces" off the target and is redirected
• The proportions of each interaction depend on the wavelength of the energy and on the material and condition of the feature.
[Figure: interactions in the atmosphere and at the surface - the Sun illuminates the Earth; radiation is scattered (selective scattering gives bluish optical images, non-selective scattering gives white clouds), absorbed and transmitted by the atmosphere; the surface reflects and thermally emits radiation, the atmosphere itself also emits, and the reflected and emitted radiation is recorded by the remote sensing instrument]
Atmospheric Scattering
• This occurs when particles or gaseous molecules present in the atmosphere cause the EM waves to be redirected from their original path.
• Rayleigh scattering: the size of the atmospheric particles is much smaller than the wavelength of the incoming radiation.
• Mie scattering: the size of the atmospheric particles is about the same as the wavelength of the incoming radiation.
• Non-selective scattering: the size of the atmospheric particles is larger than the wavelength of the incoming radiation.
Atmospheric windows
• An atmospheric window is that portion of the electromagnetic spectrum that can be transmitted through the atmosphere without significant distortion or absorption. Light in certain wavelength regions can penetrate the atmosphere well; these regions are called atmospheric windows.
• In other words, those areas of the spectrum that are not severely influenced by atmospheric absorption, and thus are useful to remote sensors, are called atmospheric windows.
Spectral Reflectance
• Spectral reflectance (ρ) is the ratio of reflected energy to incident energy as a function of wavelength.
• The reflectance characteristics of the Earth's surface features are expressed by the spectral reflectance, which is given by:
ρ = (R / I) × 100
where ρ = spectral reflectance at a particular wavelength, R = energy of that wavelength reflected from the object, and I = energy of that wavelength incident upon the object.
Spectral Reflec tance Curve for vegetation
Factors affecting the spectral signature of vegetation
• Pigment absorption (visible light absorbed for photosynthesis)
• Physiological structure (NIR reflectance)
• Leaf moisture content (major absorption at 1.4, 1.9 and 2.7 µm; minor at 0.96 and 1.1 µm)
• Soil background: dark soil is better distinguishable from vegetation in the NIR
• Senescent vegetation: due to ageing and crop ripening, reflectance rises in the blue and red wavelengths
• Angular elevation of sun and sensor, and canopy geometry: the reflectance of a rough tree canopy depends on the solar angle
• Phenological (seasonal) canopy changes: e.g. for grasslands, reflectance in the red is maximised in autumn and minimised in spring, while NIR reflectance is maximised in summer and minimised in winter
Spectral Reflectance Curve for Soil
Factors affecting the spectral signature of soil
• Moisture content
• Texture
• Structure
Spectral Reflectance Curve for Water: a) ocean water, b) turbid water, c) water with chlorophyll
Factors affecting the spectral signature of water
• The majority of the incident radiant flux is either absorbed or transmitted. Visible: little is absorbed, less than 5% is reflected and the rest is transmitted. NIR: mostly absorbed.
• Depth of water
• Suspended material within the water
• Surface roughness
Remote Sensing Observation Platforms
o Sensor - the device that actually gathers the remotely sensed data
o Platform - the device to which the sensor is attached
● The vehicles or carriers for remote sensing are called platforms. Typical platforms are satellites and aircraft, but balloons and kites have also been used. Based on altitude above the Earth's surface, platforms may be classified as:
1) Ground-borne
2) Airborne
3) Spaceborne
Airborne and spaceborne platforms have been in use for remote sensing of Earth resources; ground-based remote sensing systems for Earth resources studies are mainly used for collecting ground truth or for laboratory simulation studies.
Orbits
The path followed by the satellite is called its orbit; orbits may be polar, inclined or equatorial.
Path: an orbit is the course of motion taken by the satellite in space, and the ground trace of the orbit is called a 'Path'.
Row: the lines joining the corresponding scene centres of different paths, which are parallel to the equator, are called 'Rows'.
Satellite orbital characteristics
Altitude
• The distance (in km) from the satellite to the mean surface level of the Earth.
Inclination angle
• The angle (in degrees) between the orbit plane and the equator.
Period
• The time (in minutes) required to complete one full orbit. A polar satellite orbiting at an altitude of 800 km has a period of roughly 100 minutes.
Repeat cycle
• The time (in days) between two successive identical orbits.
Perigee and Apogee
• Perigee: the point in the orbit where an Earth satellite is closest to the Earth.
• Apogee: the point in the orbit where an Earth satellite is farthest from the Earth.
Swath
As the satellite revolves around the Earth, the sensor sees a certain portion of the Earth's surface; this area is known as the swath.
Pass
• Near-polar satellites travel northward on one side of the Earth (ascending pass) and towards the South Pole on the second half of the orbit (descending pass).
• The ascending pass is on the shadowed side, while the descending pass is on the sunlit side.
• Optical sensors image the surface on a descending pass, while active sensors, and sensors of emitted thermal and microwave radiation, can also image the surface on an ascending pass.
[Figure: ascending and descending passes, showing the equator, the ascending and descending nodes, the inclination angle, the South Pole and the ground track]
Types of Satellite Orbits
Geostationary Orbits
These satellites, also known as geosynchronous satellites, orbit the Earth at an altitude of around 36,000 km above the equator and make one revolution in 24 hours, synchronous with the Earth's rotation. Such a platform views the same place continuously, giving near-hemispheric coverage of the same area day and night. Spatial coverage is limited to roughly 70°N to 70°S latitude. These orbits are mainly used for communication and meteorological applications, e.g. GOES (USA), METEOR (USSR), GMS (Japan).
Uses of geostationary orbits
• Weather satellites: GOES, METEOSAT, INSAT
• Communication: telephone and television relay satellites
Polar orbiting satellites (sun-synchronous satellites)
• These are Earth satellites whose orbital plane is near-polar and whose altitude (mostly 800-900 km) is such that the satellite passes over all places on Earth having the same latitude at the same local time, twice in each orbit (once ascending, once descending). Through these satellites the entire globe is covered on a regular basis, giving repetitive coverage on a periodic basis. Almost all remote sensing resources satellites may be grouped in this category: LANDSAT, SPOT, IRS.
Resolution
Resolution is defined as the ability of the system to render information at the smallest discretely separable quantity, in terms of distance (spatial), wavelength band of EMR (spectral), time (temporal) and radiation quantity (radiometric).
Spatial Resolution
Spatial resolution is the projection of a detector element or a slit onto the ground. In other words, a scanner's spatial resolution is the ground segment sensed at any instant. It is also called the ground resolution element:
Ground resolution = H × IFOV
where H is the sensor altitude above the ground and IFOV is the instantaneous field of view (in radians).
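A minimal sketch of the ground resolution relation; the altitude and IFOV values below are illustrative, chosen to roughly match the Landsat MSS case:

```python
# Minimal sketch: ground resolution element = H * IFOV
# H is the sensor altitude above ground (m), IFOV the instantaneous field of view (radians).
def ground_resolution_m(altitude_m: float, ifov_rad: float) -> float:
    return altitude_m * ifov_rad

# Illustrative values, roughly Landsat MSS: H ~ 919 km, IFOV ~ 0.086 mrad -> ~79 m pixel
print(ground_resolution_m(919_000, 0.086e-3))
```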
The spatial resolution at which data are acquired has two effects: it determines the ability to identify various features and to quantify their extent.
Spectral Resolution
Spectral resolution describes the ability of the sensor to define fine wavelength intervals, i.e. to sample the spatially segmented image in different spectral intervals, thereby allowing the spectral irradiance of the image to be determined.
Landsat TM imagery:
band 1 = blue, band 2 = green, band 3 = red, band 4 = near-IR, band 5 = mid-IR, band 6 = thermal IR, band 7 = mid-IR
[Figure: a single-band grey-scale display (3,3,3) compared with multi-band colour composites 4-5-3, 4-3-2 and 1-2-3]
Radiometric Resolution
• This is the measure of the ability of the sensor to differentiate the smallest change in spectral reflectance between various targets. The digitisation is referred to as quantisation and is expressed as n binary bits; thus 7-bit digitisation implies 2^7 = 128 discrete levels (0-127).
[Figure: the same scene shown at high and at low radiometric resolution]
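A trivial sketch of the quantisation relation, number of levels = 2^n:

```python
# Minimal sketch: number of discrete grey levels for n-bit quantisation
def quantisation_levels(n_bits: int) -> int:
    return 2 ** n_bits

print(quantisation_levels(7))   # 128 levels (0-127)
print(quantisation_levels(8))   # 256 levels (0-255)
```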
Temporal Resolution
Temporal resolution is also called the revisit period (repetitivity) of the satellite. It is the capability of the satellite to image the exact same area, at the same viewing angle, at different periods of time.
Types of Ground Tracking
Many electronic remote sensors acquire data using a scanning system. A scanning system used to collect data over a variety of different wavelength ranges is called a multispectral scanner (MSS) and is the most commonly used scanning system. There are two main modes of scanning:
• Whisk broom / across-track
• Push broom / along-track
Whisk broom / across-track
• Using a rotating or oscillating mirror, such systems scan the terrain along scan lines that are at right angles to the flight line. This allows the scanner to repeatedly measure the energy from one side of the aircraft to the other, over an angular field of view of some 90° to 120°.
Push broom / along-track
• Push broom (along-track) scanners record multispectral image data along a swath beneath an aircraft. They similarly use the forward motion of the aircraft to build up a two-dimensional image by recording successive scan lines that are oriented at right angles to the flight direction.
PHOTOGRAMMETRY
The science of quantitative analysis of measurements from photographs.
Photos - light; gramma - to draw; metron - to measure.
Photogrammetry is defined as the art, science and technology of obtaining reliable information about physical objects and the environment through processes of recording, measuring and interpreting photographic images and patterns of recorded radiant electromagnetic energy and other phenomena. As implied by its name, the science originally consisted of analysing photographs.
WHY PHOTOGRAMMETRY?
• Very precise
• Time-effective
• Cost-effective
• Based on well-established and tested algorithms
• Less manual effort
• More geographic fidelity
• Corrects all sorts of distortions
• Provides a reasonable geometric modelling alternative when little is known about the geometric nature of the image data
• Provides an integrated solution for multiple images or photographs simultaneously
• Achieves a reasonable accuracy without a great number of GCPs
• Creates a three-dimensional stereo model and allows extraction of elevation information
STEREOSCOPIC COVERAGE
Types of photogrammetry
• Terrestrial
• Aerial
• Satellite
BRANCHES OF PHOTOGRAMMETRY
Analog photogrammetry
In analog photogrammetry, optical or mechanical instruments were used to reconstruct three-dimensional geometry from two overlapping photographs. The main product during this phase was topographic maps.
Analytical photogrammetry
o The computer replaces some expensive optical and mechanical components.
o The resulting devices were analog/digital hybrids.
o Analytical aerotriangulation, analytical plotters and orthophoto projectors were the main developments during this phase.
o Outputs of analytical photogrammetry can be topographic maps, but can also be digital products, such as digital maps and DEMs.
Digital photogrammetry
• Digital photogrammetry is applied to digital images that are stored and processed on a computer.
• Digital photogrammetry is sometimes called softcopy photogrammetry.
• The output products are in digital form, such as digital maps, DEMs and digital orthophotos saved on computer storage media.
Aerial Photography Factors
• Type of aircraft
• Camera / focal length
• Type of film
• Shape of project area / terrain
• Location of existing control
• Ground conditions
• Atmospheric conditions
• Restricted areas
Standard Aerial Film Types
Black and white
► Kodak Double-X Aerographic 2405: black-and-white medium-to-high-speed film, the standard film for mapping and charting
► Agfa Aviphot Pan 150: fine-grain film for high altitudes
Colour
► Kodak Aerocolor Negative 2445: colour negative, high-speed film for mapping and reconnaissance
Infrared
► Kodak Aerochrome Infrared 2443: false-colour reversal film with high dimensional stability, used for vegetation surveys, camouflage detection and earth resources
Overlap between photos
60% to 80% forward lap and 30% side lap.
[Figure: overlap region of a stereo pair with 60% overlap]
Types of photographs
• Vertical - view straight down, depression angle 85° to 90°.
• Low oblique - side view, horizon is not visible, depression angle typically 20°-85°.
• High oblique - side view, horizon is visible, depression angle typically less than 20°.
Geometry of an aerial photograph
Aerial triangulation (AT). The process of establishing a mathematical relationship between images, the camera or sensor model, and the ground.
Air base. The distance between two image exposure stations.
Average flying height. The distance between the camera position at the time of exposure and the average ground elevation. Average flying height can be determined by multiplying the focal length by the image scale factor.
Base-height ratio (b/h). The ratio between the air base (the distance between the two exposure stations of the overlapping images) and the average flying height of the camera.
Block of photographs. Formed by the combined exposures of a flight. For example, a traditional frame-camera block might consist of a number of parallel strips with a sidelap of 20-30% and an overlap of 60%.
Bundle. The unit of photogrammetric triangulation after each point measured in an image is connected with the perspective centre by a straight light ray. There is one bundle of light rays for each image.
Bundle block adjustment. A mathematical technique (triangulation) that determines the position and orientation of each image as they existed at the time of image capture, determines the ground coordinates measured on overlap areas of multiple images, and minimises the error associated with the imagery, the image measurements and the GCPs. This is essentially a simultaneous triangulation performed on all observations.
Calibration certificate/report. In aerial photography, the manufacturer of the camera specifies the interior orientation in the form of a certificate or report. Information includes focal length, principal point offset, radial lens distortion data and fiducial mark coordinates.
Check point. An additional ground point used to independently verify the degree of accuracy of a triangulation.
Control point. A point with known coordinates in a coordinate system, expressed in the units (e.g. metres, feet, pixels, film units) of the specified coordinate system.
Control point extension. The process of converting tie points to control points. This technique requires the manual measurement of ground points on photos of overlapping areas. The ground coordinates associated with the GCPs are then determined using photogrammetric techniques.
Exposure station. During image acquisition, each point in the flight path at which the camera exposes the film.
Eye-base to height ratio. The eye-base is the distance between a person's eyes. The height is the distance between the eyes and the image datum. When two images of a stereopair are adjusted in the X and Y directions, the eye-base to height ratio is also changed. Change the X and Y positions to compensate for parallax in the images.
Fiducial. Four or eight reference markers fixed on the frame of an aerial metric camera and visible in each exposure. Fiducials are used to compute the transformation from pixel coordinates to image coordinates.
Fiducial centre. The centre of an aerial photo; the intersection point of lines constructed to connect opposite fiducials.
Focal length. The distance between the optical centre of the lens and the point where the optical axis intersects the image plane. The focal length of each camera is determined in a laboratory environment.
Focal plane. The plane of the film or scanner used in obtaining an aerial photo.
Ground control point (GCP). An easily identifiable point for which the ground coordinates in the map coordinate system are known.
Nadir. The area on the ground directly beneath a scanner's detectors.
Nadir point. The intersection of the focal axis and the image plane.
Parallax. The apparent angular displacement of an object as seen in an aerial photograph with respect to a point of reference or coordinate system.
Perspective centre. The optical centre of a camera lens. 1. A point in the image coordinate system defined by the x and y coordinates of the principal point and the focal length of the sensor. 2. After triangulation, a point in the ground coordinate system that defines the sensor's position relative to the ground.
Principal point. The point in the image plane onto which the perspective centre is projected.
Photographic Scale
Before a photograph can be used as a map supplement or substitute, it is necessary to know its scale. On a map, the scale is printed as a representative fraction that expresses the ratio of map distance to ground distance, for example:
RF = MD / GD
On a photograph, the scale is also expressed as a ratio, but it is the ratio of photo distance (PD) to ground distance, for example:
RF = PD / GD
• The approximate or average scale (RF) of a vertical aerial photograph is determined by either of two methods: the comparison method or the focal length / flight altitude method. In the comparison method, the scale of a vertical aerial photograph is determined by comparing the measured distance between two points on the photograph with the measured ground distance between the same two points. The ground distance is determined by actual measurement on the ground or by using the scale of a map of the same area. The points selected on the photograph must be identifiable on the ground or on a map of the same area, and should be spaced in such a manner that a line connecting them passes through or nearly through the centre of the photograph.
scale = f ÷ H, where H is the flying height above the terrain
scale = photo distance ÷ ground distance
Example
• A camera equipped with a 152 mm focal-length lens is used to take a vertical photograph from a flying height of 2780 m above mean sea level. If the terrain is flat and located at an elevation of 500 m, what is the scale of the photograph?
Focal length f = 0.152 m (152 mm / 1000)
H = 2780 m (flying height above mean sea level)
h = 500 m (ground elevation)
Scale = f / (H − h) = 0.152 / (2780 − 500) = 0.152 / 2280 ≈ 0.0000667 ≈ 1:15,000
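The same computation as a minimal Python sketch (the function name is just for illustration):

```python
# Minimal sketch: scale of a vertical aerial photograph over flat terrain,
# scale = f / (H - h), with f, H and h all in metres.
def photo_scale(focal_length_m: float, flying_height_m: float, terrain_elev_m: float) -> float:
    return focal_length_m / (flying_height_m - terrain_elev_m)

s = photo_scale(0.152, 2780.0, 500.0)
print(f"Scale = 1:{round(1 / s)}")   # Scale = 1:15000
```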
Class Work
1) A camera equipped with a 206 mm focal-length lens is used to take a vertical photograph from a flying height of 5200 m above mean sea level. If the terrain is flat and located at an elevation of 560 m, what is the scale of the photograph?
2) A vertical photograph was taken at a flying height of 5000 m above sea level using a camera with a 152 mm focal length.
A) Determine the photo scale at points A and B, which lie at elevations of 1200 m and 1960 m.
B) What ground distance corresponds to a 30 mm photo distance measured at each of these elevations?
Answers to 2): at point A the scale is 1:25,000 and a 30 mm photo distance corresponds to a ground distance of 750 m; at point B the scale is 1:20,000 and 30 mm corresponds to 600 m.
Image Displacement
On a planimetric map, all features/details are shown in their correct horizontal position at a certain scale. This is not so in the case of aerial photographs, due to image displacement or distortion. A disturbance of the principles of geometry is called displacement/distortion.
Sources of Distortion and Displacement
The main sources of displacement and distortion are optical or photographic deficiencies (film and paper shrinkage, lens aberrations, filter aberrations, failure of the film-flattening mechanism in the camera focal plane, shutter malfunction), image motion, atmospheric refraction of light rays, curvature of the earth, tilt, and topography or relief.
Relief Displacement
• A shift or displacement in the photographic position of an image caused by the relief of the object. The amount of relief displacement is directly correlated with the height or depth of the object and with the distance of the object from the nadir.
• This displacement is inversely correlated with the flying altitude of the aircraft above the datum and with the focal length used.
• Higher-altitude photography will therefore produce less relief displacement than lower-altitude photography.
• Even though relief displacement constitutes a source of error in measuring horizontal distances on vertical aerial photographs, it is not necessarily a nuisance: because of relief displacement, we can determine the height of objects (or differences in elevation between objects) and see in three dimensions by viewing stereoscopic pairs of vertical aerial photographs.
Displacement due to Tilt
• A photograph is considered tilted when the angle between the perpendicular projection through the centre of the lens and the plumb line is greater than 3°.
• An aircraft that is not perfectly horizontal causes a rotation of the camera about the x-axis or about the y-axis.
• Rotation about the x-axis causes lateral tilt, or y-tilt, which is due to the aircraft being wing-up/wing-down and displaces the nadir point along the y-axis.
• Rotation about the y-axis causes longitudinal tilt, x-tilt or list, which is due to the nose of the aircraft being up or down, causing the nadir point to be displaced along the x-axis.
• Along the axis of tilt there is no displacement relative to an equivalent untilted photograph, as this is the line where a tilted photograph and an equivalent vertical photograph would match and intersect one another.
Image Parallax
• The term parallax refers to the relative movement of objects due to the movement of the observer.
• The most obvious evidence of this phenomenon is the apparent shift of nearby objects relative to distant objects when travelling at speed in a vehicle.
• Parallax is often used to detect when an object is correctly focused in a theodolite or level telescope by moving the eye from side to side: any movement between the object and the crosshairs indicates that the focus is incorrect.
• For a constant flying height (or constant camera-to-datum distance) and tilt-free photographs, the parallax will be parallel to the air base.
• Parallax will be smaller for more distant objects, or, in the case of aerial photography, smaller for points at lower heights.
• The perception of depth in stereo photographs depends on this parallax, as neighbouring points at different heights will exhibit different parallaxes.
• As the parallax shifts apply to all points, a continuous 3D impression of the object is given by the stereo photographs.
• To obtain stereoscopic coverage of an entire area, the minimum overlap for any photography is 50%.
A mathematical definition of parallax is:
p = x − x′
where x = left conjugate image x-coordinate and x′ = right conjugate image x-coordinate.
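A tiny sketch of this definition; the coordinate values are made up purely for illustration:

```python
# Minimal sketch: x-parallax of a point from its conjugate image x-coordinates, p = x - x'
def parallax_mm(x_left_mm: float, x_right_mm: float) -> float:
    return x_left_mm - x_right_mm

print(parallax_mm(53.4, -38.3))   # ~91.7 mm of x-parallax for this illustrative point
```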
Microwave Remote Sensing
• Analysing the information collected by sensors that operate in the microwave portion of the electromagnetic spectrum is called microwave remote sensing. Wavelength: 1 mm to 1 m.
• The capability to penetrate through precipitation or into a surface layer increases with longer wavelengths.
• Radars operating at wavelengths greater than 2 cm are not significantly affected by cloud cover; however, rain does become a factor at wavelengths shorter than 4 cm.
Microwave Remote Sensing
Radar Bands Commonly Used For Sensing
Band | Wavelength (cm) | Frequency (GHz, 10^9 cycles/sec)
Ka   | 0.75-1.1        | 26.5-40
K    | 1.1-1.67        | 18-26.5
Ku   | 1.67-2.4        | 12.5-18
X    | 2.4-3.8         | 8-12.5
C    | 3.8-7.5         | 4-8
S    | 7.5-15          | 2-4
L    | 15-30           | 1-2
P    | 30-100          | 0.3-1
• Ka, K and Ku bands: very short wavelengths used in early airborne radar systems but uncommon today.
• X-band: used extensively on airborne systems for military reconnaissance and terrain mapping.
• C-band: common on many airborne research systems (CCRS Convair-580 and NASA AirSAR) and spaceborne systems (including ERS-1 and 2 and RADARSAT).
• S-band: used on board the Russian ALMAZ satellite.
• L-band: used onboard the American SEASAT and the Japanese JERS-1 and ALOS PALSAR satellites, and on NASA airborne systems.
• P-band: the longest radar wavelengths, used on NASA experimental airborne research systems.
Advantages
• Time independent.
• Weather independent (some areas of the Earth are persistently cloud covered).
• Penetrates through clouds and, to a high degree, through rain.
• Penetrates vegetation, dry soil and dry snow.
• Sensitive to moisture in soil, vegetation and snow.
• Ability to collect data far away from the flight path.
Disadvantages
• Large antennas are required.
• Antennas are heavy and have large power requirements.
• Interpretation of microwave images is difficult.
• Geometry is problematic in undulating terrain.
Passive microwave sensors
• Detect the naturally emitted microwave energy within their field of view. This emitted energy is related to the temperature and moisture properties of the emitting object or surface.
• Passive microwave sensors are typically radiometers.
• Applications: snow cover mapping, flood mapping, soil moisture mapping.
Active microwave sensors
• Provide their own source of microwave radiation to illuminate the target.
• Divided into two categories: imaging (RADAR) and non-imaging (altimeters and scatterometers).
• Non-imaging microwave sensors are profiling devices which take measurements in one linear dimension, as opposed to the two-dimensional representation of imaging sensors.
• An active sensor transmits a microwave (radio) signal towards the target and detects the backscattered portion of the signal.
Errors in Remote Sensing Images
• Remote sensing data in raw form, as received from imaging sensors mounted on satellites, contain flaws or deficiencies.
• The correction of deficiencies and removal of flaws present in the data is termed pre-processing.
• Image pre-processing can be classified into three functional categories:
• Radiometric corrections
• Atmospheric corrections
• Geometric corrections
Radiometric errors
An error that influences the radiance or radiometric value of a scene element (pixel); it changes the value (digital number, DN) stored in an image.
• System errors - minimised by cosmetic corrections
• Atmospheric errors - minimised by atmospheric corrections
Geometric errors
An error related to the spatial location of pixels; it changes the position of a DN value. Minimised by geometric correction.
Radiometric errors and their corrections
• Causes of radiometric errors:
• Sensor failures or system noise affect the recorded values
• The signal travels through the atmosphere, and the atmosphere affects the signal
• Sun illumination influences the radiometric values
• Seasonal changes affect the radiometric values
• Terrain influences the radiance
Internal errors:
• Introduced by the remote sensing system itself.
• Generally systematic and may be identified.
• Corrected based on prelaunch or in-flight measurements.
External errors:
• Introduced by phenomena that vary in nature through space and time.
• Sources are the atmosphere, terrain elevation, etc.
Radiometric error sources
• System-induced errors caused by mechanical, electrical or communication failures
• Atmosphere-induced errors caused by the interaction of EM radiation with atmospheric constituents
Random Bad Pixels (Shot Noise)
• Sometimes an individual detector does not record spectral data for an individual pixel. When this occurs randomly, it is called a bad pixel.
• When there are numerous random bad pixels within the scene, it is called shot noise because it appears as if the image was shot by a shotgun.
• Normally these bad pixels contain values of 0 or 255 (in 8-bit data) in one or more of the bands.
Random Bad Pixels (correction)
• Locate each bad pixel in the band k dataset.
• A simple thresholding algorithm makes a pass through the dataset and flags any pixel (BV i,j,k) having a brightness value of zero (assuming values of 0 represent shot noise and not a real land cover such as water).
• Once identified, evaluate the eight pixels surrounding the flagged pixel, as sketched below.
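A minimal sketch of this correction; the usual fix, assumed here, is to replace each flagged pixel with the mean of its eight neighbours:

```python
# Minimal sketch: replace shot-noise pixels (value 0) with the mean of their 8 neighbours.
# Assumes a single band held as a 2-D NumPy array of brightness values.
import numpy as np

def fix_shot_noise(band: np.ndarray, bad_value: int = 0) -> np.ndarray:
    out = band.astype(float).copy()
    rows, cols = band.shape
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            if band[i, j] == bad_value:
                window = band[i - 1:i + 2, j - 1:j + 2].astype(float)
                neighbours = window.sum() - float(band[i, j])   # sum of the 8 surrounding pixels
                out[i, j] = neighbours / 8.0
    return out.round().astype(band.dtype)
```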
Dropped lines
• Although detectors onboard orbiting satellites are well tested and calibrated before launch, an entire line containing no spectral information may be produced if an individual detector in a scanning system (e.g. Landsat MSS or Landsat 7 ETM+) fails to function properly.
• If a detector in a linear array (e.g. SPOT XS, IRS, QuickBird) fails to function, this can result in an entire column of data with no spectral information. Such defects are due to errors in the scanning or sampling equipment, in the transmission or recording of image data, or in the reproduction of CCTs.
• The bad line or column is commonly called a line or column drop-out and is seen as a horizontal black (pixel value 0) or white (pixel value 255) line on the image.
Dropped lines (correction)
• The correction is a cosmetic operation, since no data are available for the missing line.
• It is based on the spatial autocorrelation of continuous physical phenomena (neighbouring values tend to be similar).
Methods for dropped-line correction:
1. Replacement (line above or below)
2. Average of the line above and the line below (sketched below)
3. Replacement based on correlation between bands
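A minimal sketch of method 2, assuming the row index of the dropped line is already known:

```python
# Minimal sketch: replace a dropped line with the average of the lines above and below it.
import numpy as np

def fix_dropped_line(band: np.ndarray, bad_row: int) -> np.ndarray:
    out = band.copy()
    above = band[bad_row - 1, :].astype(float)
    below = band[bad_row + 1, :].astype(float)
    out[bad_row, :] = ((above + below) / 2.0).round().astype(band.dtype)
    return out
```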
Striping
• Horizontal or vertical (raw data) or skewed (processed data) banding.
• A visible banding pattern over the whole image.
• Caused by changed characteristics of the sensor detectors.
Striping (correction)
• To improve the visual appearance.
• To represent equal ground-leaving photon flux with the same DN.
Methods for striping correction:
1. Use calibration data - no assumptions.
2. Parametric histogram matching - assumes an equal area per class for each sensor, a linear sensor model and a normal (Gaussian) distribution of the DN values.
3. Non-parametric histogram matching - assumes an equal area per class for each sensor.
Atmosphere-induced errors
HAZE
• Scattered light reaching the sensor from the atmosphere.
• An additive effect, reducing contrast.
SUN ANGLE
• A time/seasonal effect changing the atmospheric path.
• A multiplicative effect.
SKYLIGHT
• Scattered light reaching the sensor after being reflected from the Earth's surface.
• A multiplicative effect.
Haze Correction: dark-object subtraction method
• Assumption: the infrared bands are not affected by haze.
• Identify dark objects: clear water and shadow zones with zero reflectance in the infrared bands.
• Identify the DN values at the shorter-wavelength bands for the same pixel positions; these DNs are entirely due to haze.
• Subtract the minimum of the DN values related to the dark objects of a particular band from all the pixel values of that band.
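A simplified sketch of the dark-object idea; this version just takes the per-band minimum DN as the haze estimate, whereas the lecture's variant first locates the dark objects in the infrared bands:

```python
# Minimal sketch: simplified dark-object (haze) subtraction for a (bands, rows, cols) image.
import numpy as np

def dark_object_subtraction(image: np.ndarray) -> np.ndarray:
    corrected = image.astype(float).copy()
    for b in range(image.shape[0]):
        haze = image[b].min()                  # darkest DN in the band taken as the haze offset
        corrected[b] = np.clip(image[b].astype(float) - haze, 0, None)
    return corrected
```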
Effects of Sun Illumination
• The position of the sun relative to the earth changes depending on the time of day and the day of the year.
• Solar elevation angle: time- and location-dependent.
• In the northern hemisphere the solar elevation angle is smaller in winter than in summer.
• The solar zenith angle is equal to 90 degrees minus the solar elevation angle.
• Irradiance varies with the seasonal changes in solar elevation angle and the changing distance between the earth and the sun.
Geometric Errors
These distortions may be due to several factors, such as:
(i) the rotation of the Earth,
(ii) the motion of the scanning system,
(iii) the motion of the platform,
(iv) the platform altitude and attitude,
(v) the curvature of the Earth.
• Geometric distortions should be removed so that the geometric representation of the satellite imagery is as close as possible to the real world. Geometric distortions are:
- Systematic
- Non-systematic
• Systematic distortions are predictable in nature and can be accounted for by accurate modelling of the sensor and platform motion and of the geometric relationship of the platform with the Earth.
• Non-systematic distortions, or random errors, cannot be modelled and corrected in this way.
SYSTEMATIC ERRORS
- Scan skew
- Mirror scan velocity
- Panoramic distortion
- Platform velocity
- Earth rotation
- Earth curvature
Scan skew
• Caused by the forward motion of the platform during the time required for each mirror sweep. The ground swath is not normal to the ground track but is slightly skewed, producing cross-scan geometric distortion. The magnitude of the correction is 0.082 km for MSS.
Panoramic distortion
• For scanners used on spaceborne and airborne remote sensing platforms, the instantaneous field of view (IFOV) is constant. As a result, the effective pixel size on the ground is larger at the extremities of the scan line than at the nadir, producing along-scan distortion.
Mirror scan velocity
• The MSS mirror scanning rate is usually not constant across a given scan, producing along-scan geometric distortion. The magnitude of the correction is 0.37 km for MSS.
Platform velocity
• If the speed of the platform changes, the ground track covered by successive mirror scans changes, producing along-track scale distortion.
Earth rotation
• The earth rotates from west to east while the satellite moves from north to south, so successive scan lines are shifted westward on the ground.
Earth curvature
• Aircraft scanning mechanisms, because of their low altitude and small absolute swath width, are not affected by earth curvature.
• Neither are space systems like IRS, Landsat and SPOT, because of the narrowness of their swath. However, wide-swath spaceborne imaging systems are affected.
• e.g. NOAA, with a wide swath of 2700 km, is affected by it: at the edges of the swath, the area of the earth's surface viewed at a given angular IFOV is larger than it would be if the curvature of the earth were ignored.
NON-SYSTEMATIC ERRORS
Attitude
• One of the sensor system axes is usually maintained normal to the earth's surface; departures from this introduce geometric distortion.
Platform altitude
• If the platform departs from its normal altitude, changes in scale occur.
Digital Image
• An image is a pictorial representation of an object or a scene; it may be analog or digital.
What is a digital image?
• Produced by electro-optical sensors.
• Composed of tiny equal areas, or picture elements, abbreviated as pixels (or pels), arranged in a rectangular array.
• With each pixel is associated a number known as the digital number (DN), brightness value (BV) or grey level, which is a record of the variation in radiant energy in discrete form.
• An object reflecting more energy records a higher number for itself on the digital image, and vice versa.
• Digital images of an area are captured in different spectral ranges (bands) by sensors onboard a remote sensing satellite.
• A pixel is referred to by its column, row and band number.
Digital Image Data Formats
Commonly used formats:
1) Band Sequential (BSQ)
2) Band Interleaved by Line (BIL)
3) Band Interleaved by Pixel (BIP)
• Each of these formats is usually preceded in the file by "header" and/or "trailer" information, which consists of ancillary data about the date, altitude of the sensor, attitude, sun angle, and so on.
• Such information is useful when geometrically or radiometrically correcting the data.
Band Sequential Format (BSQ)
• Data for a single band for the entire scene are written as one file.
• That is, for each band there is a separate file.
Band Interleaved by Line (BIL)
• Data for all the bands are written line by line in the same file.
• That is: line 1 band 1, line 1 band 2, line 1 band 3, line 2 band 1, line 2 band 2, line 2 band 3, line 3 band 1, line 3 band 2, line 3 band 3, etc.
Band Interleaved by Pixel (BIP)
• Data for the pixels in all bands are written together.
• That is: pixel 1 band 1, pixel 1 band 2, pixel 1 band 3, pixel 2 band 1, pixel 2 band 2, pixel 2 band 3, pixel 3 band 1, pixel 3 band 2, pixel 3 band 3, etc.
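A small sketch showing how the three orderings lay out the same data, for a made-up image of 2 rows × 3 columns × 2 bands:

```python
# Minimal sketch: the same 2-band, 2-row x 3-column image flattened in BSQ, BIL and BIP order.
import numpy as np

image = np.arange(12).reshape(2, 2, 3)          # axes: (band, row, col)

bsq = image.ravel()                             # all of band 1, then all of band 2
bil = image.transpose(1, 0, 2).ravel()          # row 1 band 1, row 1 band 2, row 2 band 1, ...
bip = image.transpose(1, 2, 0).ravel()          # pixel 1 all bands, pixel 2 all bands, ...

print(bsq)   # [ 0  1  2  3  4  5  6  7  8  9 10 11]
print(bil)   # [ 0  1  2  6  7  8  3  4  5  9 10 11]
print(bip)   # [ 0  6  1  7  2  8  3  9  4 10  5 11]
```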
Advantages / Disadvantages
BSQ
• If one wanted the area in the centre of a scene in four bands, it would be necessary to read into this location in four separate files to extract the desired information. Many researchers nevertheless like this format because it is not necessary to read "serially" past unwanted information if certain bands are of no value.
BIL/BIP
• A useful format if all the bands are to be used in the analysis. If some bands are not of interest, the format is inefficient, since it is necessary to read serially past all the unwanted data.
Digital image processing
• Digital image processing can be defined as the computer manipulation of the digital values contained in an image for the purposes of image correction, image enhancement and feature extraction.
• A digital image processing system consists of the computer hardware and image processing software necessary to analyse digital image data.
DIGITAL IMAGE PROCESSING SYSTEM FUNCTIONS
Data acquisition / restoration
• Compensates for data errors, noise and geometric distortions introduced in the images during acquisition and recording, i.e. pre-processing (radiometric and geometric).
Image enhancement
• Alters the visual impact of the image on the interpreter to improve the information content.
Information extraction
• Utilises the decision-making capability of computers to recognise and classify pixels on the basis of their signatures; includes hyperspectral image analysis.
Others
• Photogrammetric information extraction, metadata and image/map lineage documentation, image and map cartographic composition, geographic information systems (GIS), integrated image processing and GIS, utilities.
Major Commercial Digital Image Processing Systems
• ERDAS IMAGINE
• Leica Photogrammetry Suite
• ENVI
• IDRISI
• ER Mapper
• PCI Geomatica
• eCognition
• MATLAB
• Intergraph
RECTIFICATION
• Rectification is the process of geometrically correcting an image so that it can be represented on a planar surface and conform to other images or to a map, i.e. it is the process by which the geometry of an image is made planimetric.
• It is necessary when accurate area, distance and direction measurements are required to be made from the imagery.
• It is achieved by transforming the data from one grid system into another grid system using a geometric transformation.
• In other words, it is the process of establishing a mathematical relationship between the addresses of pixels in an image and the corresponding coordinates of those pixels on another image, a map, or the ground.
• Two basic operations must be performed to geometrically rectify a remotely sensed image to a map coordinate system:
1. Spatial interpolation: the geometric relationship between the input pixel location (row and column) and the associated map coordinates of the same point (x, y) is identified.
• This establishes the geometric coordinate transformation parameters that must be applied to rectify the original input image (x, y) to its proper position in the rectified output image (X, Y).
• It involves selecting ground control points (GCPs) and fitting polynomial equations using the least-squares technique.
• A GROUND CONTROL POINT (GCP) is a location on the surface of the Earth (e.g. a road intersection) that can be identified on the imagery and located accurately on a map.
• There are two distinct sets of coordinates associated with each GCP: source or image coordinates, specified in i rows and j columns, and reference or map coordinates (e.g. x, y measured in degrees of latitude and longitude, or in metres in a Universal Transverse Mercator projection).
• The paired coordinates (i, j and x, y) from many GCPs can be modelled to derive the geometric transformation coefficients.
• These coefficients may be used to geometrically rectify the remote sensor data to a standard datum and map projection.
• Accurate GCPs are essential for accurate rectification.
• A sufficiently large number of GCPs should be selected.
• Well-dispersed GCPs result in a more reliable rectification.
• GCPs for large-scale imagery: road intersections, airport runways, towers, buildings, etc.
• For small-scale imagery: larger features like urban areas or geological features can be used.
• NOTE: landmarks that can vary (like lakes, other water bodies, vegetation, etc.) should not be used.
• GCPs should be spread across the image.
• A minimum number is required, depending on the type of transformation.
2. Intensity interpolation:
• The pixel brightness value must be determined.
• A pixel in the rectified image often requires a value from the input pixel grid that does not fall neatly on a row and column coordinate.
• For this reason, a resampling mechanism is used to determine the pixel brightness value.
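A minimal sketch of the simplest resampling choice, nearest-neighbour; it assumes the inverse geometric transform has already mapped an output pixel back to a (possibly fractional) input row and column:

```python
# Minimal sketch: nearest-neighbour intensity interpolation for one output pixel.
import numpy as np

def nearest_neighbour(in_band: np.ndarray, in_row: float, in_col: float):
    r = int(round(in_row))               # snap the fractional input coordinates
    c = int(round(in_col))               # to the closest whole pixel
    return in_band[r, c]                 # brightness value of that input pixel
```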
Image Enhancement
• Image enhancement techniques improve the quality of an image as perceived by a human. These techniques are useful because many satellite images, when examined on a colour display, give inadequate information for image interpretation.
• Enhancement is the modification of an image to alter its impact on the viewer; enhancements are used to make visual interpretation and understanding of imagery easier.
• It is the process of making an image more interpretable for a particular application, to accentuate certain image features for subsequent analysis or for image display.
• It is useful because many satellite images give inadequate information for image interpretation. The contrast stretch, density slicing, edge enhancement and spatial filtering are the more commonly used techniques.
• Image enhancement is attempted after the image has been corrected for geometric and radiometric distortions.
RADIOMETRIC ENHANCEMENT
Modification of the brightness values of each pixel in an image data set independently (point operations).
SPECTRAL ENHANCEMENT
Enhancing images by transforming the values of each pixel on a multiband basis.
SPATIAL ENHANCEMENT
Modification of pixel values based on the values of surrounding pixels (local operations).
Contrast
Contrast generally refers to the difference in luminance or grey-level values in an image and is an important characteristic. It can be defined as the ratio of the maximum intensity to the minimum intensity over an image.
Reasons for the low contrast of image data:
I. The individual objects and the background that make up the scene have a low contrast ratio.
II. Scattering of electromagnetic energy by the atmosphere can reduce the contrast of a scene.
III. The remote sensing system may lack sufficient sensitivity to detect and record the contrast of the terrain.
Contrast Enhancement
• Expands the original input values to make use of the total range of sensitivity of the display device. Contrast enhancement techniques expand the range of brightness values in an image so that the image can be efficiently displayed in a manner desired by the analyst. The density values in a scene are literally pulled farther apart, that is, expanded over a greater range.
• The effect is to increase the visual contrast between two areas of different uniform densities. This enables the analyst to discriminate easily between areas that initially had only a small difference in density.
Linear Contrast Enhancement
This is the simplest contrast stretch algorithm. The grey values in the original image and the modified image follow a linear relation in this algorithm. A density number in the low range of the original histogram is assigned to extreme black, and a value at the high end is assigned to extreme white. The remaining pixel values are distributed linearly between these extremes.
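A minimal sketch of a linear (min-max) stretch of a single band to the 0-255 display range:

```python
# Minimal sketch: linear (min-max) contrast stretch of one band to 8-bit display values.
import numpy as np

def linear_stretch(band: np.ndarray) -> np.ndarray:
    lo, hi = float(band.min()), float(band.max())
    stretched = (band.astype(float) - lo) / (hi - lo) * 255.0   # map [lo, hi] onto [0, 255]
    return stretched.round().astype(np.uint8)
```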
Non-Linear Contrast Enhancement
• In these methods, the input and output data values follow a non-linear transformation. The general form of the non-linear contrast enhancement is defined by y = f(x), where x is the input data value and y is the output data value.
• The non-linear contrast enhancement techniques have been found to be useful for enhancing the colour contrast between nearly similar classes and the subclasses of a main class.
Histogram Equalisation
o This is another non-linear contrast enhancement technique. In this technique, the histogram of the original image is redistributed to produce a uniform population density. This is obtained by grouping certain adjacent grey values; thus the number of grey levels in the enhanced image is less than the number of grey levels in the original image.
• In this technique, the histogram of the original image is redistributed to produce a uniform population density.
• Image analysts must be aware that while histogram equalisation often provides an image with the most contrast of any enhancement technique, it may hide much-needed information.
• If one is trying to bring out information about data in terrain shadows, or if there are clouds in the data, histogram equalisation may not be appropriate.
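A minimal sketch of histogram equalisation for an 8-bit band, using the cumulative histogram as a lookup table:

```python
# Minimal sketch: histogram equalisation of an 8-bit (uint8) band.
import numpy as np

def histogram_equalise(band: np.ndarray) -> np.ndarray:
    hist, _ = np.histogram(band.ravel(), bins=256, range=(0, 256))
    cdf = hist.cumsum()
    cdf = cdf / cdf[-1]                        # normalised cumulative histogram, 0-1
    lut = np.round(cdf * 255).astype(np.uint8) # lookup table mapping old DN -> new DN
    return lut[band]                           # apply the table to every pixel
```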
Spatial Filtering
• Spatial filtering is the process of dividing the image into its constituent spatial frequencies and selectively altering certain spatial frequencies to emphasise some image features.
• Spatial frequency is defined as the number of changes in brightness value per unit distance for any particular part of an image. If there are very few changes in brightness value over a given area in an image, this is referred to as a low-frequency area. Conversely, if the brightness values change dramatically over short distances, this is an area of high frequency.
• It is the process of suppressing (de-emphasising) certain frequencies and passing (emphasising) others.
• This technique increases the analyst's ability to discriminate detail.
• It is a local operation, i.e. a pixel value is modified based on the values surrounding it.
• Used for enhancing certain features, removal of noise, smoothing of the image and improving the ability to discriminate detail.
The three types of spatial filters used in remote sensor data processing are low-pass filters, band-pass filters and high-pass filters.
Spatial Convolution Filtering
A linear spatial filter is a filter for which the brightness value (BV i,j) at location i, j in the output image is a function of some weighted average (linear combination) of the brightness values located in a particular spatial pattern around the i, j location in the input image. This process of evaluating the weighted neighbouring pixel values is called two-dimensional convolution filtering.
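A minimal sketch of such a convolution, using an equal-weight 3x3 kernel (i.e. a low-pass mean filter); SciPy's ndimage convolution is assumed to be available:

```python
# Minimal sketch: 3x3 low-pass (mean) convolution filter.
import numpy as np
from scipy.ndimage import convolve

def mean_filter(band: np.ndarray) -> np.ndarray:
    kernel = np.ones((3, 3)) / 9.0                     # equal weights -> local average
    return convolve(band.astype(float), kernel, mode="nearest")
```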
Filter Types
Low-pass filters
• Block high-frequency details.
• Have a smoothing effect on images.
• Used for removal of noise, e.g. "salt and pepper" noise.
• Blur the image, especially at edges.
High-pass filters
• Preserve high frequencies and remove slowly varying components.
• Emphasise fine details.
• Used for edge detection and enhancement.
• Edges: locations where the transition from one category to another occurs.
Low-pass filters
• Mean filter
• Median filter
• Mode filter
• Minimum filter
• Maximum filter
• Olympic filter
High-pass filtering
- Linear
• The output brightness value is a function of a linear combination of BVs located in a particular spatial pattern around the i, j location in the input image.
- Non-linear
• Uses non-linear combinations of pixels.
• Edge detection: the background is lost.
• Edge enhancement: delineates edges and makes the shapes and details more prominent; the background is not lost.
Density Slicing Density slicing is the process in which the pixel values are sliced into different ranges and, for each range, a single value or colour is assigned in the output image. It is also known as level slicing. Density slicing is a digital data interpretation method used in the analysis of remotely sensed imagery to enhance the information gathered from an individual brightness band. Density slicing is done by dividing the range of brightnesses in a single band into intervals, then assigning each interval to a colour.
Density slicing may thus be used to introduce colour to a single-band image. Density slicing is useful for enhancing images, particularly if the pixel values lie within a narrow range. It enhances the contrast between different ranges of the pixel values.
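A minimal density (level) slicing sketch, assuming a single 8-bit band and hypothetical slice boundaries and colours:

```python
import numpy as np

def density_slice(band, breaks, colours):
    """Assign one colour per brightness interval (level slicing).
    `breaks` are the interval boundaries, `colours` the RGB triplet
    for each resulting slice (len(colours) == len(breaks) + 1)."""
    slice_index = np.digitize(band, breaks)          # 0..len(breaks)
    palette = np.asarray(colours, dtype=np.uint8)
    return palette[slice_index]                      # (rows, cols, 3) colour image

band = np.random.randint(0, 256, size=(50, 50))
# Hypothetical slices: dark, medium and bright pixels mapped to blue, green, red
coloured = density_slice(band, breaks=[85, 170],
                         colours=[(0, 0, 255), (0, 255, 0), (255, 0, 0)])
print(coloured.shape)  # (50, 50, 3)
```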
Principal Component Analysis • Multispectral image data are usually strongly correlated from one band to the other. The level of a given picture element in one band can, to some extent, be predicted from the level of that same pixel in another band. • Principal component analysis is a pre-processing transformation that creates new, uncorrelated images from the original correlated bands. This is accomplished by a linear transformation of variables that corresponds to a rotation and translation of the original coordinate system.
• Principal component analysis operates on all bands together. Thus, it alleviates the difficulty of selecting appropriate bands associated with the band ratioing operation. Principal components describe the data more efficiently than the original band reflectance values. The first principal component accounts for the maximum portion of the variance in the data set, often as high as 98%. Subsequent principal components account for successively smaller portions of the remaining variance.
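A minimal sketch of principal component analysis applied to a stack of correlated bands, using NumPy's eigendecomposition of the band-to-band covariance matrix; the synthetic 4-band stack is illustrative.

```python
import numpy as np

def principal_components(bands):
    """Compute principal component images from a (n_bands, rows, cols) stack."""
    n_bands, rows, cols = bands.shape
    X = bands.reshape(n_bands, -1).astype(float)        # one row per band
    X -= X.mean(axis=1, keepdims=True)                  # remove band means
    cov = np.cov(X)                                      # band-to-band covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)               # symmetric matrix -> eigh
    order = np.argsort(eigvals)[::-1]                    # sort by decreasing variance
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    pcs = eigvecs.T @ X                                   # rotate into PC space
    explained = eigvals / eigvals.sum()
    return pcs.reshape(n_bands, rows, cols), explained

# Synthetic, strongly correlated 4-band image
base = np.random.rand(100, 100)
stack = np.stack([base + 0.05 * np.random.rand(100, 100) for _ in range(4)])
pcs, explained = principal_components(stack)
print(np.round(explained, 3))   # PC1 typically carries most of the variance
```

The eigenvectors define the rotation of the original band axes; the fraction of variance explained by PC1 corresponds to the "as high as 98%" figure quoted above.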
The Normalized Difference Vegetation Index (NDVI) • The Normalized Difference Vegetation Index (NDVI) is a numerical indicator that uses the visible and near-infrared bands of the electromagnetic spectrum, and is adopted to analyze remote sensing measurements and assess whether the target being observed contains live green vegetation or not. • NDVI gives a measure of the vegetative cover on the land surface over wide areas. Dense vegetation shows up very strongly in the imagery, and areas with little or no vegetation are also clearly identified. NDVI also identifies water and ice.
• Generally, healthy vegetation will absorb most of the visible light that falls on it and reflect a large portion of the near-infrared light. Unhealthy or sparse vegetation reflects more visible light and less near-infrared light. Bare soils, on the other hand, reflect moderately in both the red and infrared portions of the electromagnetic spectrum. • The NDVI algorithm subtracts the red reflectance values from the near-infrared and divides the result by the sum of the near-infrared and red bands. • The method, developed by NASA, is known as the Normalized Difference Vegetation Index (NDVI) and is given by the equation (NIR - RED) / (NIR + RED).
NDVI = (NIR - RED) / (NIR + RED), where RED and NIR correspond to channels 1 and 2 respectively. By normalizing the difference in this way, the values can be scaled between -1 and +1. This also reduces the influence of atmospheric absorption. Water typically has an NDVI value less than 0, bare soils between 0 and 0.1, and vegetation over 0.1.
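A small NDVI computation sketch in NumPy, using reflectance values similar to those in the table that follows; the zero-denominator guard is an added safeguard, not part of the original formula.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI = (NIR - RED) / (NIR + RED), guarding against division by zero."""
    red = np.asarray(red, dtype=float)
    nir = np.asarray(nir, dtype=float)
    denom = nir + red
    out = np.zeros_like(denom)
    np.divide(nir - red, denom, out=out, where=denom != 0)
    return out

# Reflectances similar to the table below: dense vegetation, dry bare soil, water
red = np.array([0.10, 0.269, 0.022])
nir = np.array([0.50, 0.283, 0.013])
print(np.round(ndvi(red, nir), 3))   # approx. [0.667, 0.025, -0.257]
```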
Typical NDVI values:
Cover Type          RED     NIR     NDVI
Dense vegetation    0.1     0.5     0.7
Dry bare soil       0.269   0.283   0.025
Clouds              0.227   0.228   0.002
Snow and ice        0.375   0.342   -0.046
Water               0.022   0.013   -0.257
IMAGE CLASSIFICATION • The overall objective of image classification is to automatically categorize all pixels in an image into land cover classes or themes. • Normally, multispectral data are used to perform the classification, and the spectral pattern present within the data for each pixel is used as the numerical basis for categorization. • That is, different feature types manifest different combinations of DNs based on their inherent spectral reflectance and emittance properties.
• The term classifier refers loosely to a computer program that implements a specific procedure for image classification. • Over the years scientists have devised many classification strategies. From these alternatives the analyst must select the classifier that will best accomplish a specific task. • At present it is not possible to state that a given classifier is "best" for all situations, because the characteristics of each image and the circumstances of each study vary so greatly. Therefore, it is essential that the analyst understand the alternative strategies for image classification.
• The traditional methods of classification mainly follow two approaches: unsupervised and supervised. Unsupervised Classification • Unsupervised classifiers do not utilize training data as the basis for classification. Rather, this family of classifiers involves algorithms that examine the unknown pixels in an image and aggregate them into a number of classes based on the natural groupings or clusters present in the image values. • The classes that result from unsupervised classification are spectral classes; because they are based solely on the natural groupings in the image values, the identity of the spectral classes will not be initially known.
• There are numerous clustering algorithms that can be used to determine the natural spectral groupings present in a data set. One common form of clustering, called the "K-means" approach, also known as ISODATA (Iterative Self-Organizing Data Analysis Technique), accepts from the analyst the number of clusters to be located in the data. Advantages • 1. No extensive prior knowledge of the region is required. • 2. The opportunity for human error is minimized. • 3. Unique classes are recognized as distinct units.
Disadvantages and limitations • Unsupervised classification identifies spectrally homogeneous classes within the data; these classes do not necessarily correspond to the informational categories that are of interest to the analyst. As a result, the analyst is faced with the problem of matching the spectral classes generated by the classification to the informational classes that are required by the ultimate user of the information. • Spectral properties of specific information classes will change over time (on a seasonal basis, as well as from year to year). As a result, relationships between informational classes and spectral classes are not constant, and relationships defined for one image can seldom be extended to others.
Supervised Classification • Supervised classification can be defined as the process of using samples of known identity to classify pixels of unknown identity. Samples of known identity are those pixels located within training areas. Pixels located within these areas form the training samples used to guide the classification algorithm in assigning specific spectral values to the appropriate informational class. • The basic steps involved in a typical supervised classification procedure are: 1. The training stage 2. Feature selection 3. Selection of an appropriate classification algorithm 4. Post-classification smoothing 5. Accuracy assessment
Training Data • Training fields are areas of known identity delineated on the digital image, usually by specifying the corner points of a rectangular or polygonal area using line and column numbers within the coordinate system of the digital image. The analyst must, of course, know the correct class for each area. KEY CHARACTERISTICS OF TRAINING AREAS • Shape • Location • Number • Placement • Uniformity
• Various supervised classification algorithms may be used to assign an unknown pixel to one of a number of classes. The choice of a particular classifier or decision rule depends on the nature of the input data and the desired output. Among the most frequently used classification algorithms are the parallelepiped, minimum distance, and maximum likelihood decision rules. 1. Parallelepiped Classification • It is a very simple supervised classifier. Here the training data in each image band are used to define a box (parallelepiped) for each class from the maximum and minimum pixel values in that band.
• Although the parallelepiped classifier is fast and computationally simple, it is not the most accurate and is not the most widely used. It leaves many unclassified pixels and can also have overlap between training-class boxes. The data values of the candidate pixel are compared to the upper and lower limits. • [Pixels inside the rectangle (defined in standard deviations) are assigned the value of that class signature. • Pixels outside the rectangle (defined by standard deviations) are assigned a value of zero (NULL). • Disadvantages: poor accuracy, and the potential for a large number of pixels classified as NULL. • Advantages: a speedy algorithm useful for quick results.]
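A minimal sketch of the parallelepiped decision rule, assuming hypothetical two-band class signatures (mean ± k standard deviations per band); pixels outside every box receive the NULL value 0.

```python
import numpy as np

def parallelepiped_classify(pixels, class_stats, k=1.0):
    """Assign each pixel to the first class whose box (mean +/- k*std per band)
    contains it; pixels falling outside every box get 0 (NULL)."""
    labels = np.zeros(len(pixels), dtype=int)
    for class_id, (mean, std) in class_stats.items():
        low, high = mean - k * std, mean + k * std
        inside = np.all((pixels >= low) & (pixels <= high), axis=1)
        labels = np.where((labels == 0) & inside, class_id, labels)
    return labels

# Hypothetical two-band signatures derived from training areas
class_stats = {
    1: (np.array([60.0, 40.0]), np.array([5.0, 5.0])),      # e.g. water
    2: (np.array([120.0, 180.0]), np.array([10.0, 12.0])),  # e.g. vegetation
}
pixels = np.array([[62, 43], [118, 175], [200, 90]])
print(parallelepiped_classify(pixels, class_stats, k=2.0))  # -> [1 2 0]
```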
[Figure: parallelepiped decision regions plotted in two-band feature space (brightness values 64-255).]
2. Minimum Distance to Means Classification • It is based on the minimum distance decision rule, which calculates the spectral distance between the measurement vector of the candidate pixel and the mean vector of each class. It then assigns the candidate pixel to the class having the minimum spectral distance. • [Every pixel is assigned to a category based on its distance from the class means. • Standard deviation is not taken into account. • Disadvantages: generally produces poorer classification results than maximum likelihood classifiers. • Advantages: useful when a quick examination of a classification result is required.]
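A minimal sketch of the minimum distance to means rule, with hypothetical two-band class means:

```python
import numpy as np

def minimum_distance_classify(pixels, class_means):
    """Assign each pixel to the class whose mean vector is closest
    in Euclidean (spectral) distance."""
    means = np.stack(list(class_means.values()))             # (n_classes, n_bands)
    ids = np.array(list(class_means.keys()))
    # Distance of every pixel to every class mean
    dists = np.linalg.norm(pixels[:, None, :] - means[None, :, :], axis=2)
    return ids[np.argmin(dists, axis=1)]

# Hypothetical class means in two bands
class_means = {1: np.array([60.0, 40.0]),     # e.g. water
               2: np.array([120.0, 180.0])}   # e.g. vegetation
pixels = np.array([[70.0, 55.0], [115.0, 160.0]])
print(minimum_distance_classify(pixels, class_means))  # -> [1 2]
```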
[Figure: minimum distance to means classification in two-band feature space (brightness values 64-255).]
3. Maximum Likelihood Classification • This classification uses the training data to estimate the means and variances of the classes, which are then used to estimate probabilities; it also considers the variability of brightness values in each class. • This classifier is based on Bayesian probability theory. It is one of the most powerful classification methods when accurate training data are provided, and one of the most widely used algorithms. • [Pixels inside a stated threshold (standard deviation ellipsoid) are assigned the value of that class signature. • Pixels outside a stated threshold (standard deviation ellipsoid) are assigned a value of zero (NULL).
Disadvantages: much slower than the minimum distance or parallelepiped classification algorithms, and the potential for a large number of NULL pixels. Advantages: more "accurate" results (depending on the quality of the ground truth, and whether or not the class has a normal distribution).]
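A minimal sketch of the maximum likelihood rule for the Gaussian case with equal priors, using hypothetical two-band class means and covariance matrices estimated from training data:

```python
import numpy as np

def maximum_likelihood_classify(pixels, class_stats):
    """Assign each pixel to the class with the highest Gaussian log-likelihood,
    using per-class mean vectors and covariance matrices from training data."""
    pixels = np.asarray(pixels, dtype=float)
    scores, ids = [], []
    for class_id, (mean, cov) in class_stats.items():
        inv = np.linalg.inv(cov)
        diff = pixels - mean
        mahal = np.einsum("ij,jk,ik->i", diff, inv, diff)    # squared Mahalanobis distance
        log_like = -0.5 * (mahal + np.log(np.linalg.det(cov)))
        scores.append(log_like)
        ids.append(class_id)
    scores = np.stack(scores, axis=1)                         # (n_pixels, n_classes)
    return np.array(ids)[np.argmax(scores, axis=1)]

# Hypothetical two-band training statistics (equal priors assumed)
class_stats = {
    1: (np.array([60.0, 40.0]),   np.array([[25.0, 5.0], [5.0, 25.0]])),
    2: (np.array([120.0, 180.0]), np.array([[100.0, 20.0], [20.0, 150.0]])),
}
pixels = np.array([[65.0, 45.0], [110.0, 170.0]])
print(maximum_likelihood_classify(pixels, class_stats))   # -> [1 2]
```

The covariance term is what makes this rule slower than the parallelepiped and minimum distance rules but, with good training data, usually more accurate.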
[Figure: maximum likelihood classification in two-band feature space, showing class standard deviation ellipsoids (brightness values 64-255).]
Unsupervised Classification • Unknown classes • No classification errors introduced through training • Classes based on spectral properties • Clusters may be unidentifiable or unexpected • Posterior cluster identification is time consuming and tedious. Supervised Classification • Pre-defined classes • Classification errors can be identified • Classes based on information categories • Selecting sufficient training data is time consuming and tedious • Defined classes may not match the natural spectral classes.
Fuzzy Classifications Soft or fuzzy classifiers - a pixel does not belong fully to one class but has different degrees of membership in several classes. The mixed-pixel problem is more pronounced in lower-resolution data. In fuzzy classification, or pixel unmixing, the proportions of the land cover classes within a mixed pixel are calculated. For example, a vegetation classification might include a pixel with grades of 0.68 for the class "forest", 0.29 for "street" and 0.03 for "grass".
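A minimal pixel-unmixing sketch using non-negative least squares against hypothetical endmember (pure class) spectra; the band values are invented, but the target proportions match the forest/street/grass example above.

```python
import numpy as np
from scipy.optimize import nnls

def unmix_pixel(pixel, endmembers):
    """Estimate class proportions for one mixed pixel by constrained
    least squares against pure-class (endmember) spectra."""
    # endmembers: (n_bands, n_classes) matrix of pure spectra
    fractions, _ = nnls(endmembers, pixel)        # non-negative fractions
    return fractions / fractions.sum()            # normalize so proportions sum to 1

# Hypothetical endmember spectra in 3 bands for forest, street and grass (columns)
endmembers = np.array([[0.05, 0.30, 0.10],
                       [0.40, 0.25, 0.45],
                       [0.08, 0.28, 0.35]])
mixed_pixel = 0.68 * endmembers[:, 0] + 0.29 * endmembers[:, 1] + 0.03 * endmembers[:, 2]
print(np.round(unmix_pixel(mixed_pixel, endmembers), 2))   # approx. [0.68 0.29 0.03]
```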
[Figure: hard vs. fuzzy classification of a brightness range: up to 30 -> water, 30-60 -> forested wetland, 60-90 -> upland forest.]
ACCURACY ASSESSMENT Classification Accuracy Assessment • Quantitatively assessing classification accuracy requires the collection of some in situ data or a priori knowledge about some parts of the terrain, which can then be compared with the remote sensing derived classification map. • Thus, to assess classification accuracy it is necessary to compare two classification maps: 1) the remote sensing derived map, and 2) the assumed true map (which in fact may contain some error). The assumed true map may be derived from in situ investigation or, quite often, from the interpretation of remotely sensed data obtained at a larger scale or higher resolution.
Classification Error Matrix • One of the most common means of expressing classification accuracy is the preparation of a classification error matrix, sometimes called a confusion or contingency table. Error matrices compare, on a category-by-category basis, the relationship between known reference data (ground truth) and the corresponding results of an automated classification. • Such matrices are square, with the number of rows and columns equal to the number of categories whose classification accuracy is being assessed. The table below shows an error matrix that an image analyst has prepared to determine how well a classification has categorized a representative subset of pixels used in the training process of a supervised classification. This matrix stems from classifying the sampled training-set pixels and listing the known cover types used for training (columns) versus the pixels actually classified into each land cover category by the classifier (rows).
Classification vs. ground truth (summary of the error matrix; 11 land use / land cover classes):

Class   Ground-truth total   Classified total   Correct   PA (%)    UA (%)
DF      12                   13                 10        83.33     76.92
OF      19                   18                 15        78.95     83.33
TC      32                   32                 29        90.63     90.63
SC      8                    9                  8         100.0     88.89
DC      5                    5                  4         80.0      80.0
BA      4                    4                  4         100.0     100.0
GS      2                    2                  1         50.0      50.0
SN      3                    3                  3         100.0     100.0
WB      5                    5                  5         100.0     100.0
WE      1                    1                  1         100.0     100.0
BU      5                    4                  4         80.0      100.0
Total   96                   96                 84

Overall accuracy = 84/96 = 87.5%; Kappa = 0.85 (PA = producer's accuracy, UA = user's accuracy).
Kappa Coefficient • Discrete multivariate techniques have been used to statistically evaluate the accuracy of remote sensing derived maps and error matrices since 1983 and are widely adopted. These techniques are appropriate because remotely sensed data are discrete rather than continuous, and are binomially or multinomially distributed rather than normally distributed. • Kappa analysis is a discrete multivariate technique for accuracy assessment. Kappa analysis yields a Khat statistic that is a measure of agreement, or accuracy. The Khat statistic is computed as

Khat = ( N * Σ x_ii - Σ (x_i+ * x_+i) ) / ( N² - Σ (x_i+ * x_+i) ), with the sums taken over i = 1 ... r,

where r is the number of rows in the matrix, x_ii is the number of observations in row i and column i, x_i+ and x_+i are the marginal totals for row i and column i respectively, and N is the total number of observations.
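A minimal sketch that computes overall accuracy and the Khat statistic from an error matrix; the small 3-class matrix is invented for illustration and is not the matrix tabulated above.

```python
import numpy as np

def accuracy_and_kappa(error_matrix):
    """Overall accuracy and Khat from a square error matrix
    (rows = classified, columns = reference / ground truth)."""
    x = np.asarray(error_matrix, dtype=float)
    n = x.sum()                          # N: total observations
    diagonal = np.trace(x)               # sum of x_ii
    row_totals = x.sum(axis=1)           # x_i+
    col_totals = x.sum(axis=0)           # x_+i
    chance = (row_totals * col_totals).sum()
    overall = diagonal / n
    khat = (n * diagonal - chance) / (n ** 2 - chance)
    return overall, khat

# Small hypothetical 3-class error matrix
matrix = [[35, 2, 3],
          [4, 30, 1],
          [1, 2, 22]]
overall, khat = accuracy_and_kappa(matrix)
print(round(overall, 3), round(khat, 3))
```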
IMAGE FUSION Data Fusion Data fusion is a process dealing with data and information from multiple sources to achieve refined/improved information for decision-making. Often, in the case of data fusion, not only remote sensing images are fused but further ancillary data (e.g. topographic maps, GPS coordinates, geophysical information, etc.) also contribute to the resulting image. Image Fusion Image fusion is the combination of two or more different images to form a new image by using a certain algorithm.
• Image fusion is the process of combining images from different sources, and related information from associated datasets, to produce a resultant image that retains the most desirable information of each of the input images. Why image fusion? Most sensors operate in two modes: multispectral mode and panchromatic mode. Usually the multispectral mode has a better spectral resolution than the panchromatic mode, while for most satellite sensors the panchromatic mode has a better spatial resolution than the multispectral mode. To combine the advantages of the spatial and spectral resolutions of two different sensors, image fusion techniques are applied.
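One simple and widely known fusion approach is the Brovey transform (not named in these notes, but a common pan-sharpening choice); the sketch below applies it to synthetic data, assuming the multispectral bands have already been resampled to the panchromatic grid.

```python
import numpy as np

def brovey_fusion(ms_bands, pan):
    """Brovey-style fusion: each (already resampled) multispectral band is
    scaled by the ratio of the panchromatic band to the band sum, injecting
    the finer spatial detail of the PAN image into the MS bands."""
    ms = ms_bands.astype(float)
    band_sum = ms.sum(axis=0)
    band_sum[band_sum == 0] = 1.0                 # avoid division by zero
    return ms * (pan.astype(float) / band_sum)

# Synthetic stand-ins: 3 multispectral bands resampled to the PAN grid
ms = np.random.randint(0, 256, size=(3, 128, 128))
pan = np.random.randint(0, 256, size=(128, 128))
fused = brovey_fusion(ms, pan)
print(fused.shape)   # (3, 128, 128): sharpened multispectral bands
```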
[Figure: a coarse-resolution multispectral (XS) image and a fine-resolution panchromatic (PAN) image are fused to produce a high-resolution XS image.]
Aim and Use of Image Fusion Sharpening of images • improvement of spatial resolution • preservation of the spectral resolution of interest. Enhance features not visible in either of the single datasets alone • improved interpretation results, due to the different physical characteristics of the data. Change detection. Substitute missing information in one image with signals from another sensor image (e.g. clouds in NIR and shadows in SAR).
Image Interpretation • Image interpretation is defined as the act of examining images to identify objects and judge their significance. An interpreter studies remotely sensed data and attempts, through a logical process, to detect, identify, measure and evaluate the significance of environmental and cultural objects, patterns and spatial relationships. It is an information extraction process. Methods: - on hard copy, - on digital image. • Why is image interpretation difficult compared to everyday visual interpretation? - loss of the sense of depth - different viewing perspective - different scale
ELEMENTS OF IMAGE INTERPRETATION
Shape General form/structure of objects: regular/irregular. Numerous components of the environment can be identified with reasonable certainty merely by their shape. This is true of both natural features and man-made objects.
Size o FUNCTION OF SCALE: relative size is important. o In many cases, the length, breadth, height, area and/or volume of an object can be significant in identifying surface features (e.g. different tree species). The approximate size of many objects can be judged by comparison with familiar features (e.g. roads) in the same scene.
Tone RELATIVE BRIGHTNESS OR COLOUR. We have seen how different objects emit or reflect different wavelengths and intensities of radiant energy. Such differences may be recorded as variations of picture tone, colour or density, which enable discrimination of many spatial variables - for example, different crop types on land, or water bodies of contrasting depths or temperatures at sea. The terms 'light', 'medium' and 'dark' are used to describe variations in tone.
Pattern Spatial arrangement of visibly discernible objects. Repetitive patterns of both natural and cultural features are quite common, which is fortunate because much image interpretation is aimed at the mapping and analysis of relatively complex features rather than the more basic units of which they may be composed. Such features include agricultural complexes (e.g. farms and orchards) and terrain features (e.g. alluvial river valleys and coastal plains).
Texture Arrangement and frequency of tonal variation (closely associated with tone), e.g. smooth vs. irregular. The same tone but different textures are possible! Texture is an important image characteristic, closely associated with tone in the sense that it is a quality that permits two areas of the same overall tone to be differentiated on the basis of micro-tonal patterns. Common image textures include smooth, rippled, mottled, lineated and irregular. Unfortunately, texture analysis tends to be rather subjective, since different interpreters may use the same terms in slightly different ways. Texture is rarely the only criterion of identification or correlation employed in interpretation. More often it is invoked as the basis for a subdivision of categories already established using more fundamental criteria. For example, two rock units may have the same tone but different textures.
Site / Association Site • Relationship with other recognizable features in proximity to the target of interest. • At an advanced stage in image interpretation, the location of an object with respect to terrain features or other objects may be helpful in refining its identification and classification. For example, some tree species are found more commonly in one topographic situation than in others, while in industrial areas the association of several clustered, identifiable structures may help us determine the precise nature of the local enterprise. For example, the combination of one or two tall chimneys, a large central building, conveyors, cooling towers and solid fuel piles points to the correct identification of a thermal power station.
Shadow Hidden profiles may be revealed in silhouette (e.g. the shapes of buildings or the forms of field boundaries). Shadows are especially useful in geomorphological studies, where micro-relief features may be easier to detect under conditions of low-angle solar illumination than when the sun is high in the sky. Unfortunately, deep shadows in areas of complex detail may obscure significant features, e.g. the volume and distribution of traffic on a city street.
Resolution o Capability to distinguish two closely spaced objects. o The resolution of a sensor system may be defined as its capability to discriminate two closely spaced objects from each other. More than most other picture characteristics, resolution depends on aspects of the remote sensing system itself, including its nature, design and performance, as well as the ambient conditions during the sensing programme and the subsequent processing of the acquired data. An interpreter must have knowledge of the resolution of the various remote sensing data products.
Global Positioning System (GPS) • In 1973 the U.S. DOD decided to establish, develop, test, acquire, and deploy a spaceborne Global Positioning System (GPS), resulting in the NAVSTAR GPS (NAVigation Satellite Timing And Ranging Global Positioning System). • It is an all-weather, space-based navigation system developed by the U.S. DOD to satisfy the requirement for military forces to accurately determine their position, velocity, and time in a common reference system, anywhere on or near the Earth on a continuous basis.
• The Global Positioning System (GPS) is a space-based satellite navigation system that provides location and time information in all weather conditions, anywhere on or near the Earth where there is a clear line of sight to four or more GPS satellites. • GPS was developed by the US Department of Defense to simplify accurate navigation. • GPS uses satellites and computers to compute positions anywhere on Earth. • 24 active satellites in space provide the geographical position information. • Determining a 3-dimensional position (latitude, longitude and altitude) accurately requires information from at least 4 satellites.
GPS General Characteristics • Developed by the US DOD • Provides: - accurate navigation (10 - 20 m) - worldwide coverage - 24-hour access - a common coordinate system • Designed to replace existing navigation systems • Accessible by civil and military users
• Development costs estimated at ~$12 billion • Annual operating cost ~$400 million • 3 segments: - Space: satellites - User: receivers - Control: monitor & control stations • Prime Space Segment contractor: Rockwell International (now Lockheed Martin) • Coordinate reference: WGS-84 • Operated by US Air Force Space Command (AFSC) - Mission control center operations at Schriever (formerly Falcon) AFB, Colorado Springs
GPS System Components Space Segment - NAVSTAR: NAVigation Satellite Timing And Ranging - 24 satellites (up to 30) at 20,200 km altitude. Control Segment - 1 master station - 5 monitoring stations. User Segment - receivers that receive the satellite signals.
Space Segment • 12-hour orbits - each satellite in view for 4-5 hours • Designed to last 7.5 years • Different classifications - Block 1, 2, 2A, 2R & 2F • 24 satellites - 4 satellites in each of 6 orbital planes inclined at 55 degrees • 20,200 km above the Earth
Control Segment (Monitor and Control) Master Control Station at Colorado Springs; monitor stations and ground antennas at Hawaii, Ascension Island, Diego Garcia and Kwajalein.
• Master Control Station - responsible for collecting tracking data from the monitoring stations and calculating satellite orbits and clock parameters • 5 Monitoring Stations - responsible for measuring pseudorange data; this orbital tracking network is used to determine the broadcast ephemeris and satellite clock modeling • Ground Control Stations - responsible for uploading information to the SVs (space vehicles)
GPS Segments [Figure: world map of the GPS segments - the Space Segment (satellite constellation); the Control Segment with the Master Control Station at Colorado Springs, an Alternate Master Control Station at Vandenberg AFB, ground antennas and monitor stations at sites including Hawaii, Cape Canaveral, Ascension, Diego Garcia, Kwajalein, Fairbanks, England, Bahrain, Ecuador, Argentina, South Africa, South Korea, Tahiti and New Zealand (several operated as National Geospatial-Intelligence Agency (NGA) tracking stations); and the User Segment.]
GPS ERRORS • Atmospheric Refraction 1. Ionospheric refraction - depends on: - the Sun's activity, - ionospheric thickness and the proportion/concentration of ionized particles, - the season, - the actual path of the signal (i.e. the relative position of satellite and receiver). - Pseudorange errors vary from around 15 m at zenithal incidence to as much as 45 m for low-incidence signals. 2. Tropospheric refraction - the delay depends on temperature, pressure, humidity, and the elevation of the satellite.
• Atmospheric delay • Satellite mask angle • Sources of signal interference • Multipath error • Signal obstruction - when something blocks the GPS signal: - areas of great elevation differences - canyons - mountain obstruction - urban environments - indoors • Human error • Noise, bias, and blunder
Applications • Topo and locations • Mapping • Monitoring • Volumes • Photo control • Construction control and stakeout • Boundaries • Seismic stakeout • Profiles • Establishing portable control stations (sharing with Total Stations) • Agriculture - slope staking • Tracking of people and vehicles • Plate movements • Sports (boating, hiking, ...) • Archeology • Public transport • Emergency services