CG_MODULE2: Fill-Area Primitives and Polygon Fill Areas



Fill-Area Primitives
•Besides points, lines, and curves, another useful
construct for describing components of a picture
is an area that is filled with some solid color or
pattern – a fill area or filled area.
•Most curved surfaces can be approximated with a
set of polygon patches.
•When lighting effects and surface shading
procedures are applied, an approximated curved
surface can be displayed quite realistically.
•Approximating a curved surface with polygon
facets – surface tessellation.

Polygon Fill Areas
•Polygon – plane figure specified by a set of
three or more coordinate positions called
vertices that are connected in a sequence by
straight line segments called edges or sides.
•A polygon must have all its vertices within a
single plane, and its edges must not cross.

Polygon Classification
•Interior angle of a polygon is an angle inside the
polygon boundary that is formed by two adjacent
edges.
•If all interior angles of a polygon are less than or equal
to 180˚, the polygon is convex; otherwise, it is concave.

•Some graphics packages including OpenGL,
require all fill polygons to be convex.
•Implementations of fill algorithms and other
graphics routines are more complicated for
concave polygons, so it is generally more
efficient to split a concave polygon into a set
of convex polygons before processing.
•Concave polygon splitting is often not
included in a graphics library.

Identifying Concave Polygon
•A concave polygon has at least one interior angle
greater than 180˚.
•The extension of some edges of a concave
polygon will intersect other edges.
•Some pair of interior points will produce a
line segment that intersects the polygon
boundary.
•If some vertices are on one side of the
extension line and some vertices are on the
other side, the polygon is concave.

Figure: (a) concave polygon; (b) convex polygon.

Splitting Concave Polygons
•Once we have identified a concave polygon,
we can split it into a set of convex polygons.
•With the vector method for splitting a concave
polygon, we first need to form the edge vectors.
•Given two consecutive vertex positions, Vk and Vk+1,
we define the edge vector between them as
Ek = Vk+1 − Vk

•Next we calculate the cross products of
successive edge vectors in order around the
polygon perimeter.
•If the z component of some cross products is
positive while the other cross products have a
negative z component, the polygon is concave.
Otherwise the polygon is convex.
•We can apply the vector method by
processing edge vectors in a counter clockwise
order.

•If any cross product has a negative z component,
the polygon is concave, and we split it along
the line of the first edge vector in the cross-
product pair.
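
•As an illustration, the following C sketch (hypothetical helper names; vertices assumed to be listed counterclockwise) detects concavity by checking the z components of the edge-vector cross products described above.

typedef struct { double x, y; } Point2;

/* z component of the cross product Ek x Ek+1 for vertices Vk, Vk+1, Vk+2 */
static double crossZ (Point2 vk, Point2 vk1, Point2 vk2)
{
    double e1x = vk1.x - vk.x,  e1y = vk1.y - vk.y;    /* Ek   = Vk+1 - Vk   */
    double e2x = vk2.x - vk1.x, e2y = vk2.y - vk1.y;   /* Ek+1 = Vk+2 - Vk+1 */
    return e1x * e2y - e1y * e2x;
}

/* Returns 1 if the polygon is concave: with a counterclockwise vertex list,
   a negative z component signals a concave vertex (splitting would occur
   along the line of the first edge vector in that cross-product pair). */
int isConcave (const Point2 *v, int n)
{
    for (int k = 0; k < n; k++)
        if (crossZ (v[k], v[(k + 1) % n], v[(k + 2) % n]) < 0.0)
            return 1;
    return 0;
}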

Demonstrate vector method for splitting concave
polygons

•We can split a concave polygon using a
rotational method.
•Proceeding counterclockwise around the
polygon edges, we shift the position of the
polygon so that each vertex Vk in turn is at the
coordinate origin.
•Then, we rotate the polygon about the origin
in a clockwise direction so that the next vertex
Vk+1 is on the x axis.
•If the following vertex, Vk+2, is below the x axis,
the polygon is concave.

•We then split the polygon along the x axis to form
two new polygons and we repeat the concave test
for each of the two new polygons.
•The steps above are repeated until we have tested all
vertices in the polygon list.

Splitting a Convex Polygon into a set of
Triangles
•Once we have the vertex list for a convex polygon, we
could transform it into a set of triangles.
•This can be accomplished by first defining any
sequence of three consecutive vertices to be a new
polygon (triangle).
•The middle triangle vertex is then deleted from the
original vertex list.
•The same procedure is applied to this modified vertex
list to strip off another triangle.
•We continue this procedure until the original polygon is
reduced to just three vertices, which define the last
triangle in the set.
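
•A minimal sketch of this procedure (hypothetical names; assumes a convex polygon with at least three vertices) strips off one triangle at a time by deleting the middle vertex of each consecutive triple:

#include <stdio.h>

typedef struct { double x, y; } Point2;

void triangulateConvex (Point2 *v, int n)
{
    while (n > 3) {
        /* Triangle from the first three consecutive vertices */
        printf ("triangle: (%g,%g) (%g,%g) (%g,%g)\n",
                v[0].x, v[0].y, v[1].x, v[1].y, v[2].x, v[2].y);
        for (int i = 1; i < n - 1; i++)   /* delete the middle vertex v[1] */
            v[i] = v[i + 1];
        n--;
    }
    /* The remaining three vertices define the last triangle in the set */
    printf ("triangle: (%g,%g) (%g,%g) (%g,%g)\n",
            v[0].x, v[0].y, v[1].x, v[1].y, v[2].x, v[2].y);
}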

Inside-Outside Tests
•Various graphics processes often need to identify
interior regions of objects.
•Identifying the interior of a simple object, such as
a convex polygon, a circle, or a sphere, is generally a
straightforward process.
•For complex objects with intersecting edges it is
not always clear which region of the xy plane we
should call interior and which regions we should
designate as exterior to the object boundaries.

Odd-Even Rule
•Also called the odd-parity rule or the even-odd rule.
•Conceptually, we draw a line from any position P to a distant
point outside the coordinate extents of the object, choosing a
path that does not intersect any vertices.
•If the number of object edges crossed by this line is odd,
P is an interior point; otherwise, P is an exterior point.

Non Zero Winding Number
•Initialize the winding number to zero.
•Imagine a line drawn from any position P to a distant
point beyond the coordinate extents of the object.
•The line we choose must not pass through any
endpoint coordinates.
•Count the number of object line segments that cross
the reference line in each direction.
•Add one to the winding number, every time we
intersect a segment that crosses from right to left.
•Subtract one every time we intersect a segment that
crosses from left to right.

•The final value of the winding number, determines the
relative position of P.
•If the winding number is non zero, P is considered to
be an interior point.
•Otherwise P is considered as an exterior point.
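
•The following sketch (hypothetical names) implements the nonzero winding-number test using a horizontal reference line from P toward +x; upward edge crossings add one to the winding number and downward crossings subtract one.

typedef struct { double x, y; } Point2;

/* > 0 if c is left of the directed line a -> b, < 0 if right, 0 if on the line */
static double isLeft (Point2 a, Point2 b, Point2 c)
{
    return (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
}

/* Winding number of polygon v[0..n-1] around point p.
   Nonzero means p is an interior point; zero means exterior. */
int windingNumber (Point2 p, const Point2 *v, int n)
{
    int wn = 0;
    for (int k = 0; k < n; k++) {
        Point2 a = v[k], b = v[(k + 1) % n];
        if (a.y <= p.y) {
            if (b.y > p.y && isLeft (a, b, p) > 0)    /* upward crossing: +1   */
                wn++;
        } else {
            if (b.y <= p.y && isLeft (a, b, p) < 0)   /* downward crossing: -1 */
                wn--;
        }
    }
    return wn;
}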

Polygon Tables
•The objects in a scene are described as sets of
polygon surface facets.
•The description of each object includes
coordinate information specifying the
geometry for the polygon facets and other
surface parameters such as color,
transparency and light reflection properties.
•As information for each polygon is input, the
data are placed into tables.

•The polygon tables can be organized into two
groups : geometric tables and attribute tables.
•Geometric data tables contain vertex
coordinates and parameters to identify the
spatial orientation of polygon surfaces.
•Attribute information for an object includes
parameters specifying the degree of
transparency of the object and its surface
reflectivity and texture characteristics.

•Geometric data for the objects in a scene are
arranged in three tables:
–Vertex table: stores the coordinate values of each vertex.
–Edge table: contains pointers back into the vertex
table to identify the vertices for each polygon edge.
–Surface-facet table: contains pointers back into
the edge table to identify the edges for each
polygon facet.
•The edge table can be expanded to include
forward pointers into the surface-facet table.

•Listing the geometric data in the three tables
provides convenient reference to the
individual coordinates for each object.
•Object can be displayed efficiently by using
data from the edge table to identify polygon
boundaries.
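
•A minimal data-layout sketch (hypothetical field names and sizes) of the three geometric tables, using array indices as the pointers:

typedef struct { double x, y, z; } Vertex;         /* vertex table entry        */

typedef struct {
    int v1, v2;          /* pointers (indices) into the vertex table            */
    int facet;           /* optional forward pointer into the facet table       */
} Edge;                                            /* edge table entry          */

typedef struct {
    int edges[8];        /* pointers (indices) into the edge table              */
    int numEdges;
} SurfaceFacet;                                    /* surface-facet table entry */

/* Example: one triangular facet built from vertices V1, V2, V3 */
Vertex       vertexTable[] = { {0,0,0}, {1,0,0}, {1,1,0} };
Edge         edgeTable[]   = { {0,1,0}, {1,2,0}, {2,0,0} };
SurfaceFacet facetTable[]  = { { {0,1,2}, 3 } };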

OpenGL Polygon Fill-Area Functions

•OpenGL provides a special rectangle function that
directly accepts vertex specifications in the xy
plane.
•The following routine can be more efficient than
generating a fill rectangle using glVertex
specifications
glRect*(x1,y1,x2,y2);
•One corner of this rectangle at coordinate
position (x1,y1), and the opposite corner of the
rectangle is at position (x2,y2).
•The rectangle is displayed with edges parallel to
the xy coordinate axes.

•As an example the following statement defines the
square
glRecti(200,100,50, 250);
•If we put the coordinate values for this rectangle
into arrays, we can generate the same square
with the following code:
int vertex1[] = {200, 100};
int vertex2[] = {50, 250};
glRectiv (vertex1, vertex2);

Plane Equations
•Each polygon in a scene is contained within a plane of
infinite extent
•The general equation of a plane is
Ax + B y + C z + D = 0
Where, (x, y, z) is any point on the plane, and the
coefficients A, B, C, and D (called plane parameters) are
constants describing the spatial properties of the plane
•The values of A, B, C, and D can be obtained by solving a set
of three plane equations using the coordinate values of three
noncollinear points in the plane; for this we can select three
successive convex-polygon vertices, (x1, y1, z1), (x2, y2, z2),
and (x3, y3, z3), taken in counterclockwise order

•Solve the following set of simultaneous linear plane
equations for the ratios A/D, B/D, and C/D:
(A/D) xk + (B/D) yk + (C/D) zk = −1,  k = 1, 2, 3
•The solution to this set of equations can be obtained in
determinant form, using Cramer’s rule.
•Expanding the determinants, we can write the calculations
for the plane coefficients in the form
A = y1 (z2 − z3) + y2 (z3 − z1) + y3 (z1 − z2)
B = z1 (x2 − x3) + z2 (x3 − x1) + z3 (x1 − x2)
C = x1 (y2 − y3) + x2 (y3 − y1) + x3 (y1 − y2)
D = −x1 (y2 z3 − y3 z2) − x2 (y3 z1 − y1 z3) − x3 (y1 z2 − y2 z1)

•It is possible that the coordinates defining a polygon facet
may not be contained within a single plane
•We can solve this problem by dividing the facet into a set
of triangles; or we could find an approximating plane for
the vertex list

•One method for obtaining an approximating plane is to divide
the vertex list into subsets, where each subset contains three
vertices, and calculate plane parameters A, B, C, D for each
subset.
•The approximating plane parameters are then obtained as the
average value for each of the calculated plane parameters.

Front and Back Polygon Faces
•As we are usually dealing with polygon surfaces that
enclose an object interior, we need to distinguish between
the two sides of each surface.
•The side of a polygon that faces into the object interior is
called the back face, and the visible, or outward, side is the
front face.
•Any point that is not on the plane and that is visible to the
front face of a polygon surface section is said to be in front
of (or outside) the plane, and, thus, outside the object.
•And any point that is visible to the back face of the polygon
is behind (or inside) the plane.

•Plane equations can be used to identify the position of
spatial points relative to the polygon facets of an object.
•For any point (x, y, z) not on a plane with parameters A, B,
C, D, we have Ax + By + Cz + D ≠ 0
•Thus, we can identify the point as either behind or in front
of a polygon surface contained within that plane according
to the sign (negative or positive) of Ax + By + Cz + D:
–if Ax + By + Cz + D < 0, the point (x, y, z) is behind the plane
–if Ax + By + Cz + D > 0, the point (x, y, z) is in front of the plane
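
•A small sketch of this sign test (hypothetical names), returning whether a point lies in front of, behind, or on the plane:

/* Classify a point against the plane Ax + By + Cz + D = 0.
   Returns +1 (in front of / outside), -1 (behind / inside), or 0 (on the plane). */
int classifyPoint (double A, double B, double C, double D,
                   double x, double y, double z)
{
    double value = A * x + B * y + C * z + D;
    if (value > 0.0) return  1;
    if (value < 0.0) return -1;
    return 0;
}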

Fill-Area Attributes
•Two basic procedures for filling an area on
raster systems, once the definition of the fill
region has been mapped to pixel coordinates.
•One procedure first determines the overlap
intervals for scan lines that cross the area.
•Then the pixel positions along these overlap
intervals are set to the fill color.

•Another method for area filling is to start from
a given interior position and “paint” outward,
pixel- by-pixel, from this point until we
encounter specified boundary conditions.
•Scan line method is usually applied to simple
shapes.
•Fill algorithms that use a starting interior point
are useful for filling areas with more complex
boundaries.

Fill Styles
•A basic fill-area attribute provided by a general graphics
library is the display of the interior.
•We can display a region with a single color, a specified
fill pattern, or in a hollow style by showing only the
boundary of the region.

•Tiling – the process of filling an area with a
rectangular pattern.
•Tiling pattern – the rectangular fill pattern.
•Hatch fill – a predefined fill pattern available in
the system.

•A reference point (xp,yp) for the starting position of a
fill pattern could be set at any convenient position
inside or outside the fill region.
•For instance, the reference point could be set on the
polygon vertex.
•Or the reference point could be chosen as the lower
left corner of the bounding rectangle.
•To simplify selection of the reference coordinate, some
packages always use the coordinate origin of the
display window as the pattern start position.
•This simplifies the tiling operations when each
element of a pattern is to be mapped to a single pixel.

Color Blended Fill Regions
•It is also possible to combine a fill pattern with
background colors in various ways.
•A pattern could be combined with background
colors using a transparency factor that
determines how much of the background
should be mixed with the object color.
•Or we could use simple logical or replace
operations.

•Some fill methods using blended colors have
been referred to as soft-fill or tint-fill algorithms.
•A linear soft-fill algorithm repaints an area that was
originally painted by merging a foreground color
F with a single background color B, where F ≠ B.
•The current RGB color P of each pixel within the
area to be refilled is some linear combination of F
and B:
P = t F + (1 − t) B   (1)
where the transparency factor t has a value between 0 and 1 for each pixel.
•For values of t less than 0.5, the background color
contributes more to the interior color of the
region.

•The vector equation (1) holds for each RGB
component of the colors, with
P = (PR, PG, PB),  F = (FR, FG, FB),  B = (BR, BG, BB)   (2)
•We can calculate the value of parameter t
using one of the RGB color components as
t = (Pk − Bk) / (Fk − Bk)   (3)
where k = R, G, or B and Fk ≠ Bk
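
•The following sketch (hypothetical names; colors as floating-point RGB values, and it assumes FR ≠ BR) recovers t from the red component of a repainted pixel and then applies Equation (1) with a new foreground color:

typedef struct { float r, g, b; } Color;

Color softFillPixel (Color P, Color F, Color B, Color newF)
{
    /* Solve t = (Pk - Bk) / (Fk - Bk) using the red component;
       any component with Fk != Bk would do. */
    float t = (P.r - B.r) / (F.r - B.r);

    Color out;                       /* repaint: out = t*newF + (1 - t)*B */
    out.r = t * newF.r + (1.0f - t) * B.r;
    out.g = t * newF.g + (1.0f - t) * B.g;
    out.b = t * newF.b + (1.0f - t) * B.b;
    return out;
}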

•When two background colors B1 and B2 are
mixed with foreground color F, the resulting
pixel color P is
P = t0 F + t1 B1 + (1 − t0 − t1) B2
where the sum of the color-term coefficients
t0, t1, and (1 − t0 − t1) must be equal to 1.

General Scan-Line Polygon Fill
Algorithm
•Determine the intersection positions of the
boundaries of the fill region with the screen
scan lines.
•Then the fill colors are applied to each section
of a scan line that lies within the interior of
the fill region.
•This approach identifies the same interior regions as the
odd-even rule.

•For each scan line that crosses the polygon,
the edge intersections are sorted from left to
right
•Then the pixel positions between, and
including, each intersection pair are set to the
specified fill color.
•If a pattern fill is to be applied to the polygon,
then the color for each pixel along a scan line
is determined from its overlap position with
the fill pattern.

•Whenever a scan line passes through a vertex, it
intersects two polygon edges at that point.
•In some cases, this can result in an odd number of
boundary intersections for a scan line.
•Figure shows two scan lines that cross a polygon fill
area and intersect a vertex.
•Scan line y` intersects an even number of edges, and
the two pairs of intersection points along this scan line
correctly identify the interior pixel spans.
•But scan line y intersects five polygon edges.
•To identify the interior pixels for scan line y, we must
count the vertex intersection as only one point.
•Thus, as we process scan lines we need to distinguish
between these cases.

•We can detect the topological difference
between scan line y and scan line y` by noting the
position of the intersecting edges relative to the
scan line.
•For scan line y, the two edges sharing an
intersection vertex are on opposite sides of the
scan line.
•But for scan line y`, the two intersecting edges
are both above the scan line.
•Thus, a vertex that has adjoining edges on
opposite sides of an intersecting scan line should
be counted as just one boundary intersection
point.

•We can identify these vertices by tracing
around the polygon boundary in either
clockwise or counter clockwise order.
•Observe the relative changes in vertex y
coordinates as we move from one edge to the
next.
•If three end point y values of two consecutive
edges monotonically increase or decrease,
count the shared vertex as single intersection
point for the scan line passing through that
vertex.

•We can check to determine whether an edge and
the next nonhorizontal edge have increasing or
decreasing endpoint y values.
•If so, the lower edge can be shortened to ensure
that only one intersection point is generated.
•When the endpoint y coordinates of two edges
are increasing, the y value of the upper end point
for the current edge is decreased by 1.
•When the endpoint y values are decreasing,
decrease the y coordinate of the upper end point
of the edge following the current edge.

•The slope of the edge can be expressed in terms of
the scan-line intersection coordinates:
m = (yk+1 − yk) / (xk+1 − xk)   (1)
•Since the change in y coordinates between two scan
lines is simply
yk+1 − yk = 1   (2)

•The x intersection value xk+1 on the upper scan line
can be determined from the x intersection value
xk on the preceding scan line as
xk+1 = xk + 1/m   (3)
•Each successive x intercept can thus be calculated
by adding the inverse of the slope and rounding
to the nearest integer.
•The slope m is the ratio of two integers:
m = Δy/Δx
•Incremental calculations of x intercepts along an
edge for successive scan lines can be expressed as
xk+1 = xk + Δx/Δy   (4)

•Using the above equation, we can perform
integer evaluation of the x intercepts by
initializing a counter to 0.
•Then increment the counter by the value of Δx
each time we move up to a new scan line.
•Whenever the counter value becomes equal
to or greater than Δy, increment the x
intersection value by 1 and decrease the
counter by the value Δy.

•Suppose m = 7/3
•Initially, set counter to 0, and increment to 3
(which is Δx).
•When we move to next scan line, increment
counter by Δx
• When counter is equal or greater than 7 (which
is Δy), increment the x-intercept (in other words,
the x-intercept for this scan line is one more than
the previous scan line), and decrement counter
by 7.
•Continue determining the scan-line intersections
in this way until we reach the upper endpoint of
the edge.
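
•A sketch of this counter-based, integer stepping (hypothetical names; assumes an edge with positive Δx and Δy, processed from its lower endpoint upward):

#include <stdio.h>

/* Report the x intercept on each scan line from yStart up to yEnd for an edge
   with slope m = dy/dx, starting from xStart (the intercept at yStart). */
void stepEdge (int xStart, int yStart, int yEnd, int dx, int dy)
{
    int x = xStart;
    int counter = 0;
    for (int y = yStart; y <= yEnd; y++) {
        printf ("scan line %d: x intercept = %d\n", y, x);
        counter += dx;            /* add dx each time we move up to a new scan line  */
        while (counter >= dy) {   /* intercept advances whenever counter reaches dy  */
            x++;
            counter -= dy;
        }
    }
}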

•To efficiently perform a polygon fill, store the
polygon boundary in a sorted edge table that
contains all the information necessary to process
the scan lines efficiently.
•Proceeding around the edges in either clockwise
or counter clockwise order, we can use a bucket
sort to store the edges, sorted on the smallest y
value of each edge, in the correct scan-line
positions.
•Only non horizontal edges are entered into the
sorted edge table.
•As edges are processed, we can shorten certain
edges to resolve the vertex-intersection question.

•Each entry in the table for a particular scan
line contains the maximum y value for that
edge, the x-intercept value (at the lower
vertex) for the edge, and the inverse slope of the
edge.
•For each scan line, the edges are in sorted
order from left to right.
•We process the scan lines from the bottom of
the polygon to its top producing an active
edge list for each scan line crossing the
polygon boundaries.

OpenGL Fill Area Attribute Functions
•Fill area routines are available for convex
polygons only.
•We generate displays of filled convex polygons
in four steps:
–Define a fill pattern
–Invoke the polygon fill-routine
–Activate the polygon fill feature of OpenGL
–Describe the polygons to be filled

•A polygon fill pattern is displayed up to and
including the polygon edges.
•There are no boundary lines around the fill
region unless we specifically add them to the
display.

OpenGL Fill-Pattern Function
•By default, a convex polygon is displayed as a
solid color region, using the current color setting.
•To fill a polygon with a pattern in OpenGL, we use
bit mask.
•A value of 1 in the mask indicates that the
corresponding pixel is to be set to the current
color.
•A value of 0 leaves the value of that frame buffer
position unchanged.

•The bits must be specified starting with the
bottom row of the pattern, and continuing up
to the top most row of the pattern.
•Once we have set a mask, we can establish it
as the current fill pattern with the function
glPolygonStipple(fill pattern);
•Next, we need to enable the fill routine
glEnable (GL_POLYGON_STIPPLE);
•We turn off pattern filling with
glDisable (GL_POLYGON_STIPPLE);
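
•For example, the following sketch defines a simple 32 × 32 checkerboard-style bit mask (an assumed pattern, specified bottom row first) and installs it as the current fill pattern:

#include <GL/gl.h>

GLubyte fillPattern[128];                 /* 32 rows x 32 bits = 128 bytes */

void initFillPattern (void)
{
    for (int row = 0; row < 32; row++)
        for (int byte = 0; byte < 4; byte++)
            fillPattern[row * 4 + byte] = (row % 2) ? 0xAA : 0x55;

    glPolygonStipple (fillPattern);       /* establish the mask as the current fill pattern */
    glEnable (GL_POLYGON_STIPPLE);        /* enable pattern filling                         */
}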

OpenGL Texture and Interpolation
Pattern
•Another method for filling polygons is to use texture
patterns.
•Produce fill patterns that simulate the surface
appearance of wood, brick or some other material.
•We can obtain interpolation coloring of a polygon
interior.
•To do this, assign different colors to polygon vertices.
•Interpolation fill of a polygon interior is used to
produce realistic displays of shaded surfaces under
various lighting conditions.

•The following code assigns either a blue, red or green color to each of the
three vertices of a triangle.
•Polygon fill is then a linear interpolation of the colors at the vertices.



glBegin (GL_TRIANGLES);
    glColor3f (0.0, 0.0, 1.0);
    glVertex2i (50, 50);
    glColor3f (0.0, 1.0, 0.0);
    glVertex2i (150, 50);
    glColor3f (1.0, 0.0, 0.0);
    glVertex2i (75, 150);
glEnd ( );

OpenGL Wire Frame Methods
•We can also choose to show only polygon
edges.
•This produces a wire-frame or hollow display
of the polygon.
•We could also display a polygon by only
plotting a set of points at the vertex positions.
•These options are selected with the function
glPolygonMode (face, displayMode);

•We use parameter face to designate which
face of the polygon we want to show as edges
only or vertices only.
•This parameter can be assigned
GL_FRONT, GL_BACK, or GL_FRONT_AND_BACK.
•If we want only the polygon edges displayed
for our selection, we assign the constant
GL_LINE to parameter displayMode.
•Another option is GL_FILL, which is the
default display mode.

•Another option is to display a polygon with
both an interior fill and different color or
pattern for its edges (or for its vertices).
•This is accomplished by specifying the polygon
twice:
–one with parameter displayMode set to GL_FILL
–again with displayMode set to GL_LINE (or
GL_POINT).
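
•A sketch of this two-pass approach (drawMyPolygon is a hypothetical routine that issues the polygon’s vertices):

#include <GL/gl.h>

void drawMyPolygon (void);    /* hypothetical routine that issues the vertex list */

void fillWithBorder (void)
{
    glColor3f (0.0, 0.0, 1.0);                    /* interior color             */
    glPolygonMode (GL_FRONT_AND_BACK, GL_FILL);
    drawMyPolygon ();

    glColor3f (1.0, 1.0, 1.0);                    /* edge color                 */
    glPolygonMode (GL_FRONT_AND_BACK, GL_LINE);
    drawMyPolygon ();                             /* same polygon, edges only   */
}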

•OpenGL provides a mechanism that allows us to
eliminate selected edges from a wire frame
display.
•Each polygon vertex is stored with a one bit flag
that indicates whether or not that vertex is
connected to the next vertex by a boundary edge.
•Set the bit flag to off and the edge following that
vertex will not be displayed.
•We can set the flag for an edge with the following
function
glEdgeFlag(flag);

•To indicate that a vertex does not precede a
boundary edge, assign the OpenGL constant
GL_FALSE to parameter flag.
•This applies to all subsequently specified
vertices until the next call to glEdgeFlag is
made.
•The OpenGL constant GL_TRUE turns the edge
flag back on again, which is the default.
•Place the glEdgeFlag function between glBegin/glEnd
pairs.

Geometric
Transformations

GEOMETRIC TRANSFORMATIONS
•Operations that are applied to the geometric
description of an object to change its position,
orientation,or size.
•Also referred to as modeling transformations.
•Can be used to describe how objects might
move around in a scene during an animation
sequence or simply to view them from
another angle.

Basic Two-Dimensional Geometric
Transformations
•The geometric-transformation functions -
translation, rotation, and scaling.
•Other useful transformation routines that are
sometimes included in a package are
reflection and shearing operations.

Two-Dimensional Translation
•A translation repositions an object by adding offsets
tx and ty to its coordinates so as to generate a new
coordinate position:
x′ = x + tx,  y′ = y + ty
•A translation is applied to an object that is
defined with multiple coordinate positions by
relocating all the coordinate positions by the
same displacement along parallel paths.

•A straight-line segment is translated by
applying Equation 3 to each of the two line
endpoints and redrawing the line between the
new endpoint positions.
•A polygon is translated by adding a translation
vector to the coordinate position of each
vertex and then regenerate the polygon using
the new set of vertex coordinates.
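
•A minimal sketch of polygon translation (hypothetical names):

typedef struct { double x, y; } Point2;

void translatePolygon (Point2 *verts, int nVerts, double tx, double ty)
{
    for (int k = 0; k < nVerts; k++) {
        verts[k].x += tx;    /* x' = x + tx */
        verts[k].y += ty;    /* y' = y + ty */
    }
}

•The polygon is then regenerated from the updated vertex list.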

Two-Dimensional Rotation
•A rotation transformation of an object - by
specifying a rotation axis and a rotation angle.
•All points of the object are then transformed
to new positions by rotating the points
through the specified angle about the rotation
axis.
•A two-dimensional rotation of an object is
obtained by repositioning the object along a
circular path in the xy plane.

Two-Dimensional Rotation
•With a point at polar position (r, φ), the transformed
coordinates in terms of angles θ and φ reduce to the
rotation equations about the coordinate origin:
x′ = x cos θ − y sin θ,  y′ = x sin θ + y cos θ

Rotation About an Arbitrary Pivot Position
•Rotating a point about an arbitrary pivot position (xr, yr) gives
x′ = xr + (x − xr) cos θ − (y − yr) sin θ
y′ = yr + (x − xr) sin θ + (y − yr) cos θ

•A straight-line segment is rotated by applying
Equations 9 to each of the two line endpoints
and redrawing the line between the new
endpoint positions.
•A polygon is rotated by displacing each vertex
using the specified rotation angle and then
regenerating the polygon using the new
vertices.
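
•A minimal sketch of polygon rotation about an arbitrary pivot point (hypothetical names; theta in radians):

#include <math.h>

typedef struct { double x, y; } Point2;

void rotatePolygon (Point2 *verts, int nVerts, Point2 pivot, double theta)
{
    double c = cos (theta), s = sin (theta);
    for (int k = 0; k < nVerts; k++) {
        double dx = verts[k].x - pivot.x;
        double dy = verts[k].y - pivot.y;
        verts[k].x = pivot.x + dx * c - dy * s;   /* x' = xr + (x-xr)cosθ - (y-yr)sinθ */
        verts[k].y = pivot.y + dx * s + dy * c;   /* y' = yr + (x-xr)sinθ + (y-yr)cosθ */
    }
}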

Two-Dimensional Scaling
•To alter the size of an object, we apply a
scaling transformation.
•A simple two dimensional scaling operation is
performed by multiplying object positions (x,
y) by scaling factors sx and sy to produce the
transformed coordinates.

Two-Dimensional Scaling
•Scaling relative to the coordinate origin:
x′ = x · sx,  y′ = y · sy

•Any positive values can be assigned for scaling
factors.
•Values less than 1 reduce the size of the
objects.
•Values greater than 1 produce enlargements.
•Specifying a value of 1 for both sx and sy leaves
the size of objects unchanged.

•When sx and sy are assigned the same value, a
uniform scaling is produced, which maintains
relative object proportions.
•Unequal values for sx and sy result in a
differential scaling that is often used in design
applications, where pictures are constructed
from a few basic shapes that can be adjusted
by scaling and positioning transformations

•Objects transformed with Equation 11 are
both scaled and repositioned.
•Scaling factors with absolute values less than 1
move objects closer to the coordinate origin,
while absolute values greater than 1 move
coordinate positions farther from the origin.
•Figure 7 illustrates scaling of a line by
assigning the value 0.5 to both sx and sy in
Equation 11.
•Both the line length and the distance from the
origin are reduced by a factor of 1/2 .

Scaling with Fixed Point

Matrix Representations and
Homogeneous Coordinates
•Many graphics applications involve sequences of
geometric transformations.
•The viewing transformations involve sequences of
translations and rotations to take us from the
original scene specification to the display on an
output device.
•Here we consider how the matrix representations discussed in the
previous sections can be reformulated so that
such transformation sequences can be processed
efficiently.

•Each of the three basic two-dimensional
transformations can be expressed in the general
matrix form P` = M1 · P +M2 with coordinate
positions P` and P represented as column vectors.
•Matrix M1 is a 2 × 2 array containing
multiplicative factors, and M2 is a two-element
column matrix containing translational terms.
•For translation, M1 is the identity matrix.
•For rotation or scaling, M2 contains the
translational terms associated with the pivot
point or scaling fixed point.

•To produce a sequence of transformations with these
equations, such as scaling followed by rotation and
then translation - calculate the transformed
coordinates one step at a time.
•First, coordinate positions are scaled, then these scaled
coordinates are rotated, and finally, the rotated
coordinates are translated.
•A more efficient approach, however, is to combine the
transformations so that the final coordinate positions
are obtained directly from the initial coordinates,
without calculating intermediate coordinate values.

Homogeneous Coordinates
•Multiplicative and translational terms for a 2D
geometric transformation can be combined into a
single matrix if we expand the representations to 3 ×
3 matrices.
•We can use the third column of a transformation
matrix for the translation terms, and all
transformation equations can be expressed as matrix
multiplications.

•A standard technique - expand each 2D
coordinate-position representation (x, y) to a
three-element representation (xh, yh, h), called
homogeneous coordinates, where the
homogeneous parameter h is a nonzero value
such that x = xh/h , y = yh/h.
• Set h = 1, Each two-dimensional position is
then represented with homogeneous
coordinates (x, y, 1).

Two-Dimensional Translation Matrix
•Using a homogeneous-coordinate approach,
we can represent the equations for a two-
dimensional translation of a coordinate
position using the following matrix
multiplication:
[x′ y′ 1]ᵀ = [[1 0 tx], [0 1 ty], [0 0 1]] · [x y 1]ᵀ

Two-Dimensional Rotation Matrix
•Two-dimensional rotation transformation
equations about the coordinate origin can
be expressed in the matrix form P′ = R(θ) · P, with
R(θ) = [[cos θ  −sin θ  0], [sin θ  cos θ  0], [0 0 1]]

Two-Dimensional Scaling Matrix
•A scaling transformation relative to the
coordinate origin can now be expressed as
the matrix multiplication P′ = S(sx, sy) · P, with
S(sx, sy) = [[sx 0 0], [0 sy 0], [0 0 1]]
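
•A sketch of these 3 × 3 homogeneous-coordinate matrices and their composition (hypothetical names):

#include <math.h>
#include <string.h>

typedef double Matrix3[3][3];

void setIdentity (Matrix3 m)
{
    memset (m, 0, sizeof (Matrix3));
    m[0][0] = m[1][1] = m[2][2] = 1.0;
}

void setTranslation (Matrix3 m, double tx, double ty)
{
    setIdentity (m);
    m[0][2] = tx;
    m[1][2] = ty;
}

void setRotation (Matrix3 m, double theta)         /* rotation about the origin      */
{
    setIdentity (m);
    m[0][0] = cos (theta);  m[0][1] = -sin (theta);
    m[1][0] = sin (theta);  m[1][1] =  cos (theta);
}

void setScaling (Matrix3 m, double sx, double sy)  /* scaling relative to the origin */
{
    setIdentity (m);
    m[0][0] = sx;
    m[1][1] = sy;
}

/* result = a · b : apply b first, then a */
void multiply (Matrix3 result, Matrix3 a, Matrix3 b)
{
    Matrix3 tmp;
    for (int i = 0; i < 3; i++)
        for (int j = 0; j < 3; j++) {
            tmp[i][j] = 0.0;
            for (int k = 0; k < 3; k++)
                tmp[i][j] += a[i][k] * b[k][j];
        }
    memcpy (result, tmp, sizeof (Matrix3));
}

•Composing, for example, a scaling followed by a rotation and then a translation then becomes a sequence of multiply calls, so final coordinates can be obtained without computing intermediate positions.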

Inverse Transformations
•For translation, we obtain the inverse matrix by
negating the translation distances.
•An inverse rotation is accomplished by replacing
the rotation angle by its negative.
•We form the inverse matrix for any scaling
transformation by replacing the scaling parameters
with their reciprocals.

Two-Dimensional Composite
Transformations
•Using matrix representations, we can set up a
sequence of transformations as a composite
transformation matrix by calculating the product of
the individual transformations.
•Forming products of transformation matrices is often
referred to as a concatenation, or composition, of
matrices.

Composite Two-Dimensional
Translations
•If two successive translation vectors (t1x, t1y) and
(t2x, t2y) are applied to a two-dimensional
coordinate position P, the final transformed location
P′ is calculated as
P′ = T(t2x, t2y) · {T(t1x, t1y) · P} = {T(t2x, t2y) · T(t1x, t1y)} · P
•The composite transformation matrix for this
sequence of translations is
T(t2x, t2y) · T(t1x, t1y) = T(t1x + t2x, t1y + t2y)
which demonstrates that two successive translations are additive.

Composite Two-Dimensional
Rotations
•Two successive rotations applied to a point P
produce the transformed position
P′ = R(θ2) · {R(θ1) · P} = {R(θ2) · R(θ1)} · P
•By multiplying the two rotation matrices, we can
verify that two successive rotations are additive:
R(θ2) · R(θ1) = R(θ1 + θ2)

Composite Two-Dimensional Scalings
•Concatenating transformation matrices for two
successive scaling operations in two dimensions
produces the following composite scaling matrix:
S(s2x, s2y) · S(s1x, s1y) = S(s1x · s2x, s1y · s2y)

General Two-Dimensional Pivot-Point
Rotation
•we can generate a two-dimensional rotation about
any other pivot point (xr , yr ) by performing the
following sequence of translate-rotate-translate
operations:
1. Translate the object so that the pivot-point
position is moved to the coordinate origin.
2. Rotate the object about the coordinate origin.
3. Translate the object so that the pivot point is
returned to its original position.

•The composite transformation matrix for this
sequence is obtained with the concatenation
R(xr, yr, θ) = T(xr, yr) · R(θ) · T(−xr, −yr)
where T(−xr, −yr) = T⁻¹(xr, yr).

General Two-Dimensional Fixed-Point
Scaling
1. Translate the object so that the fixed point coincides
with the coordinate origin
2. Scale the object with respect to the coordinate origin.
3. Use the inverse of the translation in step (1) to return
the object to its original position.

Matrix Concatenation Properties
•Multiplication of matrices is associative.
•For any three matrices M1,M2 and M3:
M3. M2.M1 = (M3.M2) . M1 = M3.(M2.M1)
•Depending upon the order in which the
transformations are specified, we can
construct a composite matrix either by
multiplying from left to right (premultiplying)
or by multiplying from right to left (postmultiplying).

•Some graphics systems post-multiply matrices,
so that this transformation sequence
would have to be invoked in the reverse order.
•The last transformation invoked (which is M1
for this example ) is the first to be applied.
•The first transformation that is called (M3 in
this case) is the last to be applied.

•Transformation products, may not be commutative.
•The matrix product M2·M1 is not equal to M1 ·M2, in
general.
•If we want to translate and rotate an object, we must be
careful about the order in which the composite matrix is
evaluated (Figure 13).
•For some special cases, such as a sequence of
transformations that are all of the same kind, the
multiplication of transformation matrices is commutative.
•Example: two successive rotations, two successive translations,
or two successive scalings could be performed in either
order and the final position would be the same.

General 2D Composite Transformations
and Computational Efficiency
•A two-dimensional transformation, representing any
combination of translations, rotations, and scalings, can be
expressed as
[[rsxx  rsxy  trsx], [rsyx  rsyy  trsy], [0  0  1]]

•The four elements rsjk are the multiplicative rotation-
scaling terms in the transformation, which involve only
rotation angles and scaling factors.
•Elements trsx and trsy are the translational terms,
containing combinations of translation distances, pivot-
point and fixed-point coordinates, rotation angles, and
scaling parameters.

Other Two-Dimensional
Transformations
•A transformation that produces a mirror image of an
object is called a reflection.
•For a two-dimensional reflection, this image is
generated relative to an axis of reflection by rotating
the object 180◦ about the reflection axis.

•A reflection about the line y = 0 (the x axis) retains x values,
but “flips” the y values of coordinate positions. The matrix for
this transformation is
[[1 0 0], [0 −1 0], [0 0 1]]

•A reflection about the line x = 0 (the y axis) flips x
coordinates while keeping y coordinates the same.
The matrix for this transformation is
[[−1 0 0], [0 1 0], [0 0 1]]

•An example of reflection about the coordinate origin is
shown in Figure 18; its matrix is
[[−1 0 0], [0 −1 0], [0 0 1]]
which is the same as the rotation matrix R(θ) with θ = 180◦.

•Reflection about the origin can be generalized to any
reflection point in the xy plane (Figure 19).
•This reflection is the same as a 180◦ rotation in the
xy plane about the reflection point.

•If we choose the reflection axis as the diagonal line
y = x, the reflection matrix is
[[0 1 0], [1 0 0], [0 0 1]]

Shear
•A transformation that distorts the shape of an object
such that the transformed shape appears as if the
object were composed of internal layers that had been
caused to slide over each other is called a shear.
•Two common shearing transformations are those that
shift coordinate x values and those that shift y values.
•An x-direction shear relative to the x axis is produced
with the transformation matrix
[[1 shx 0], [0 1 0], [0 0 1]]
which transforms coordinate positions as
x′ = x + shx · y,  y′ = y

•We can generate x-direction shears relative to
other reference lines (y = yref) with
[[1 shx −shx · yref], [0 1 0], [0 0 1]]
and coordinate positions are transformed as
x′ = x + shx (y − yref),  y′ = y
•A y-direction shear relative to the line x = xref
is generated with the transformation matrix
[[1 0 0], [shy 1 −shy · xref], [0 0 1]]
which generates the transformed coordinate values
x′ = x,  y′ = y + shy (x − xref)

Raster Methods for Geometric
Transformations
•Raster systems store picture information as
color patterns in the frame buffer.
•Therefore, some simple object
transformations can be carried out rapidly by
manipulating an array of pixel values.
•Few arithmetic operations are needed, so the
pixel transformations are particularly efficient.

•Functions that manipulate rectangular pixel
arrays - raster operations.
• Moving a block of pixel values from one
position to another - block transfer, a bitblt, or
a pixblt.
•Routines for performing some raster
operations are usually available in a graphics
package.

•Rotations in 90-degree increments are
accomplished easily by rearranging the
elements of a pixel array.
•We can rotate a two-dimensional object or
pattern 90◦ counterclockwise by reversing the
pixel values in each row of the array, then
interchanging rows and columns.

•A 180◦ rotation is obtained by reversing the
order of the elements in each row of the array,
then reversing the order of the rows.
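
•A sketch of the 90◦ counterclockwise case for an N × N block of pixel values (illustrative size; reverse each row, then interchange rows and columns):

#define N 4   /* illustrative block size */

void rotateBlock90CCW (int src[N][N], int dst[N][N])
{
    int reversed[N][N];

    /* reverse the pixel values in each row */
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            reversed[r][c] = src[r][N - 1 - c];

    /* interchange rows and columns (transpose) */
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++)
            dst[r][c] = reversed[c][r];
}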

•For array rotations that are not multiples of
90◦, we need to do some extra processing.
•Each destination pixel area is mapped onto
the rotated array and the amount of overlap
with the rotated pixel areas is calculated.
• A color for a destination pixel can then be
computed by averaging the colors of the
overlapped source pixels, weighted by their
percentage of area overlap.

•We can use similar methods to scale a block of
pixels.
•Pixel areas in the original block are scaled,
using specified values for sx and sy, and then
mapped onto a set of destination pixels.
•The color of each destination pixel is then
assigned according to its area of overlap with
the scaled pixel areas.

OpenGL Raster Transformations
•A translation of a rectangular array of pixel-color
values from one buffer area to another can be
accomplished in OpenGL as the following copy
operation:
glCopyPixels (xmin, ymin, width, height,
GL_COLOR);
•The first four parameters in this function give the
location and dimensions of the pixel block, and
the OpenGL symbolic constant GL_COLOR
specifies that color values are to be copied.

•A source buffer for the glCopyPixels function
is chosen with the glReadBuffer routine, and a
destination buffer is selected with the
glDrawBuffer routine.

•We can rotate a block of pixel-color values in
90-degree increments by first saving the block
in an array, then rearranging the elements of
the array and placing it back in the refresh
buffer.
•A block of RGB color values in a buffer can be
saved in an array with the function
glReadPixels (xmin, ymin, width, height,
GL_RGB,GL_UNSIGNED_BYTE, colorArray);

•Then we put the rotated array back in the
buffer with
glDrawPixels (width, height, GL_RGB,
GL_UNSIGNED_BYTE, colorArray);

•A two-dimensional scaling transformation can be
performed as a raster operation in OpenGL by
specifying scaling factors and then invoking either
glCopyPixels or glDrawPixels.
•For the raster operations, we set the scaling
factors with glPixelZoom (sx, sy);
•where parameters sx and sy can be assigned any
nonzero floating-point values.
•Positive values greater than 1.0 increase the size
of an element in the source array,
•Positive values less than 1.0 decrease element
size
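
•For example, the following sketch (illustrative scale factors) copies a pixel block while enlarging it by a factor of 2 in x and 3 in y:

#include <GL/gl.h>

void scalePixelBlock (GLint xmin, GLint ymin, GLsizei width, GLsizei height)
{
    glPixelZoom (2.0, 3.0);                              /* raster scale factors        */
    glCopyPixels (xmin, ymin, width, height, GL_COLOR);  /* copy and scale the block    */
    glPixelZoom (1.0, 1.0);                              /* restore the default scaling */
}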

Two-Dimensional Viewing

The Two-Dimensional Viewing Pipeline
•A section of a two-dimensional scene that is selected for display is called a
clipping window
•Sometimes the clipping window is alluded to as the world window or the viewing
window
• Graphics packages allow us also to control the placement within the display
window using another “window” called the viewport
•Objects inside the clipping window are mapped to the viewport, and it is the
viewport that is then positioned within the display window

•By changing the position of a viewport, we can view objects at different positions
on the display area of an output device
•Multiple viewports can be used to display different sections of a scene at
different screen positions
•Usually, clipping windows and viewports are rectangles in standard position, with
the rectangle edges parallel to the coordinate axes

•The mapping of a two-dimensional, world-coordinate scene description to device
coordinates is called a two-dimensional viewing transformation
•Sometimes this transformation is simply referred to as the window-to-viewport
transformation or the windowing transformation

•Once a world-coordinate scene has been constructed, we could setup a separate
two-dimensional, viewing coordinate reference frame for specifying the clipping
window
•But the clipping window is often just defined in world coordinates, so viewing
coordinates for two-dimensional applications are the same as world coordinates
•To make the viewing process independent of the requirements of any output
device, graphics systems convert object descriptions to normalized coordinates
and apply the clipping routines
•At the final step of the viewing transformation, the contents of the viewport are
transferred to positions within the display window
•Clipping is usually performed in normalized coordinates
•This allows us to reduce computations by first concatenating the various
transformation matrices

OpenGL Two-Dimensional Viewing Functions
•OpenGL Projection Mode
•glMatrixMode (GL_PROJECTION);
•This designates the projection matrix as the current matrix, which is originally set
to the identity matrix
•glLoadIdentity ( );

GLU Clipping-Window Function
•To define a two-dimensional clipping window, we can use the GLU function:
•gluOrtho2D (xwmin, xwmax, ywmin, ywmax);
•This function specifies an orthogonal projection for mapping the scene to the
screen.
•Objects outside the normalized square (and outside the clipping window) are
eliminated from the scene to be displayed.

OpenGL Viewport Function
•We specify the viewport parameters with the OpenGL function
•glViewport (xvmin, yvmin, vpWidth, vpHeight);
•Where all parameter values are given in integer screen coordinates relative to the
display window.
•Parameters xvmin and yvmin specify the position of the lower left corner of the
viewport relative to the lower-left corner of the display window, and the pixel
width and height of the viewport are set with parameters vpWidth and vpHeight

•Multiple viewports can be created in OpenGL for a variety of
applications
•We can obtain the parameters for the currently active viewport using
the query function
•glGetIntegerv (GL_VIEWPORT, vpArray);
•where vpArray is a single-subscript, four-element array
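
•A short setup sketch (illustrative coordinate values) combining the projection, clipping-window, and viewport calls above:

#include <GL/gl.h>
#include <GL/glu.h>

void init2DView (void)
{
    glMatrixMode (GL_PROJECTION);          /* projection matrix is the current matrix         */
    glLoadIdentity ();
    gluOrtho2D (0.0, 200.0, 0.0, 150.0);   /* clipping window: 0 <= xw <= 200, 0 <= yw <= 150 */

    glViewport (50, 50, 400, 300);         /* viewport: lower-left (50,50), 400 x 300 pixels  */
}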

Creating a GLUT Display Window
•Initialize GLUT with the following function:
•glutInit (&argc, argv);
•We have three functions in GLUT for defining a display window and
choosing its dimensions and position:
•glutInitWindowPosition (xTopLeft, yTopLeft);
•glutInitWindowSize (dwWidth, dwHeight);
•glutCreateWindow ("Title of Display Window");

Setting the GLUT Display-Window Mode and Color
•Various display-window parameters are selected with the GLUT function
•glutInitDisplayMode (mode);
•A background color for the display window is chosen in RGB mode with the
OpenGL routine
•glClearColor (red, green, blue, alpha);
•In color-index mode, we set the display-window color with
•glClearIndex(index); where parameter index is assigned an integer value
corresponding to a position within the color table.
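
•A minimal example (illustrative window size, position, and title) combining these GLUT initialization calls:

#include <GL/glut.h>

void displayFcn (void)
{
    glClear (GL_COLOR_BUFFER_BIT);   /* clear the display window     */
    /* ... drawing routines go here ... */
    glFlush ();
}

int main (int argc, char **argv)
{
    glutInit (&argc, argv);
    glutInitDisplayMode (GLUT_SINGLE | GLUT_RGB);
    glutInitWindowPosition (50, 100);
    glutInitWindowSize (400, 300);
    glutCreateWindow ("An Example OpenGL Program");

    glClearColor (1.0, 1.0, 1.0, 0.0);   /* white display-window background */
    glutDisplayFunc (displayFcn);
    glutMainLoop ();
    return 0;
}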

GLUT Display-Window Identifier
•Multiple display windows can be created for an application, and each
is assigned a positive-integer display-window identifier
•windowID = glutCreateWindow ("A Display Window");

Current GLUT Display Window
•When we specify any display-window operation, it is applied to the
current display window, which is either the last display window that
we created or the one we select with the following command:
•glutSetWindow (windowID);
•We can query the system to determine which window is the current
display window: currentWindowID = glutGetWindow ( );

Relocating and Resizing a GLUT Display Window
•We can reset the screen location for the current display window with
glutPositionWindow (xNewTopLeft, yNewTopLeft);
•Similarly, the following function resets the size of the current display
window: glutReshapeWindow (dwNewWidth, dwNewHeight);
•With the following command, we can expand the current display
window to fill the screen: glutFullScreen ( );
•We can adjust for a change in display-window dimensions using the
statement glutReshapeFunc (winReshapeFcn);

Managing Multiple GLUT Display Windows
•We use the following routine to convert the current display window
to an icon in the form of a small picture or symbol representing the
window: glutIconifyWindow ( );
•The label on this icon will be the same name that we assigned to the
window, but we can change this with the following command:
glutSetIconTitle ("Icon Name");
•We also can change the name of the display window with a similar
command:
•glutSetWindowTitle ("New Window Name");

•We can choose any display window to be in front of all other windows
by first designating it as the current window, and then issuing the
“pop-window” command:
•glutSetWindow (window ID); glutPopWindow ( );
•In a similar way, we can “push” the current display window to the
back so that it is behind all other display windows. This sequence of
operations is
•glutSetWindow (windowID); glutPushWindow ( );
•We can also take the current window off the screen with
•glutHideWindow ( );
•In addition, we can return a “hidden” display window, or one that has
been converted to an icon, by designating it as the current display
window and then invoking the function
•glutShowWindow ( );

•Within a selected display window, we can set up any number of
second- level display windows, called subwindows.
•This provides a means for partitioning display windows into different
display sections.
•We create a subwindow with the following function:
•glutCreateSubWindow (windowID, xBottomLeft, yBottomLeft, width,
height);
•Parameter windowID identifies the display window in which we want
to set up the subwindow.
•With the remaining parameters, we specify its size and the placement
of the lower-left corner of the sub window relative to the lower left
corner of the display window.

Viewing Graphics Objects in a GLUT Display Window
•glutDisplayFunc (pictureDescrip);
•The argument is a routine that describes what is to be displayed in
the current window.
•The following function is used to indicate that the contents of the
current display window should be renewed:
•glutPostRedisplay ( );

Executing the Application Program
•We need to issue the final GLUT command that signals execution of
the program:
•glutMainLoop ( );
•At this time, display windows and their graphic contents are sent to
the screen.

Other GLUT Functions
•Some times it is convenient to designate a function that is to be
executed when there are no other events for the system to process.
•We can do that with glutIdleFunc (function);
•We can use the following function to query the system about some of
the current state parameters: glutGet (stateParam);
•This function returns an integer value corresponding to the symbolic
constant we select for its argument.
•Example: we can retrieve the current display-window width with
GLUT_WINDOW_WIDTH.