3D Display Methods in Computer Graphics (For DIU)

466 views 19 slides Dec 05, 2019

About This Presentation

This is a simple PPT made especially for the DIU students. I know you folks are lazy, so enjoy!


Slide Content

3D Display Methods in Computer Graphics. Submitted by: Arafat Ahmed Tanzeer, ID: 162-15-7895

What are 3D display methods in computer graphics? 3D computer graphics (in contrast to 2D computer graphics) are graphics that use a three-dimensional representation of geometric data, stored in the computer for the purposes of performing calculations and rendering 2D images. Such images may be stored for later display or viewed in real time.

What we are going to talk about: Parallel Projection, Perspective Projection, and Depth Cueing.

Parallel Projection: A parallel projection is a projection of an object in three-dimensional space onto a fixed plane, known as the projection plane or image plane, where the rays (known as lines of sight or projection lines) are parallel to each other. In parallel projection, the z coordinate is discarded, and parallel lines from each vertex on the object are extended until they intersect the view plane. We connect the projected vertices by line segments that correspond to connections on the original object. As shown in the next slide, a parallel projection preserves the relative proportions of objects but does not produce realistic views.

Some points about Parallel Projection : • Project points on the object surface along parallel lines onto the display plane. • Parallel lines are still parallel after projection. • Used in engineering and architectural drawings. • Views maintain the relative proportions of the object.
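The "discard the z coordinate" step described above can be sketched in a few lines (a minimal illustration; the function name and the cube data are made up for this example):

```python
def parallel_project(vertices):
    """Orthographic (parallel) projection onto the z = 0 plane:
    simply drop the z coordinate of each 3D vertex."""
    return [(x, y) for (x, y, z) in vertices]

# A unit cube: its front face (z = 0) and back face (z = 1) project
# onto the same square, so proportions are preserved but depth is lost.
cube = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
        (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
print(parallel_project(cube))
```

Because every projector is parallel, two edges that are parallel in 3D stay parallel in the projected image, which is why this form is preferred for engineering drawings.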

Perspective Projection : The perspective projection, on the other hand, produces realistic views but does not preserve relative proportions. In perspective projection, the lines of projection are not parallel. Instead, they all converge at a single point called the 'center of projection' or 'projection reference point'. The perspective projection is perhaps the projection technique most familiar to us, since it is the way images are formed by the eye or by a camera lens on photographic film.

Projection reference point : Distances and angles are not preserved, and parallel lines do not remain parallel. Instead, they all converge at a single point called the center of projection or projection reference point. There are 3 types of perspective projections: • One-point perspective projection is simple to draw. • Two-point perspective projection gives a better impression of depth. • Three-point perspective projection is the most difficult to draw.

Some points about Perspective Projection : The perspective projection conveys depth information by making distant objects smaller than near ones. This is the way that our eyes and a camera lens form images, so the displays are more realistic. The disadvantage is that if objects have only limited depth variation, the image may not provide adequate depth information and ambiguity appears.
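The "distant objects appear smaller" behaviour comes from dividing by depth. A minimal sketch with the projection reference point at the origin and the view plane at z = d (the focal distance d and the function name are assumptions for illustration):

```python
def perspective_project(vertices, d=1.0):
    """Perspective projection onto the plane z = d with the projection
    reference point at the origin: x' = d*x/z, y' = d*y/z.
    Larger z (farther from the viewer) shrinks the point toward the center."""
    return [(d * x / z, d * y / z) for (x, y, z) in vertices]

# The same 3D offset at two different depths:
near = perspective_project([(1, 1, 2)])   # nearer point projects larger
far = perspective_project([(1, 1, 4)])    # farther point projects smaller
print(near, far)
```

The division by z is exactly what makes parallel lines converge toward the projection reference point, and it is also why a scene with little depth variation looks almost identical to its parallel projection.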

Depth Cueing : Depth cueing is implemented by having objects blend into the background color with increasing distance from the viewer; the range of distances over which this blending occurs can be controlled. To create a realistic image, depth information is important so that we can easily identify, for a particular viewing direction, which is the front and which is the back of the displayed objects. The depth of an object can be represented by the intensity of the image: the parts of the objects closest to the viewing position are displayed with the highest intensities, and objects farther away are displayed with decreasing intensities. This effect is known as 'depth cueing'.

Some points about Depth Cueing : • Used to easily identify the front and back of displayed objects. • Depth information can be included using various methods. • A simple method is to vary the intensity of objects according to their distance from the viewing position, e.g. lines closest to the viewing position are displayed with higher intensities and lines farther away are displayed with lower intensities.
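The intensity-variation method above can be sketched as a linear fade toward the background over a chosen depth range (the linear formula and the range limits are illustrative assumptions, not from the slides):

```python
def depth_cue(intensity, depth, d_min=0.0, d_max=10.0):
    """Attenuate an intensity in [0, 1] linearly as depth grows from
    d_min (displayed at full intensity) to d_max (faded to background 0)."""
    t = (depth - d_min) / (d_max - d_min)
    t = min(max(t, 0.0), 1.0)        # clamp so depths outside the range are safe
    return intensity * (1.0 - t)

print(depth_cue(1.0, 0.0))    # nearest line: full intensity
print(depth_cue(1.0, 5.0))    # halfway: dimmed
print(depth_cue(1.0, 10.0))   # farthest: blended into the background
```

In practice the same blend can be applied to each RGB component so that distant objects fade toward the background color rather than toward black.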

Visible line and surface identification : When we view a picture containing non-transparent objects and surfaces, we cannot see the objects that lie behind the objects closer to the eye. We must remove these hidden surfaces to get a realistic screen image. The identification and removal of these surfaces is called the hidden-surface problem.

Depth Buffer (Z-Buffer) Method : It is an image-space approach. The basic idea is to test the Z-depth of each surface to determine the closest visible surface, so that closer polygons override farther ones. Two buffers, named the frame buffer and the depth buffer, are used. The depth buffer stores a depth value for each (x, y) position as surfaces are processed, with 0 ≤ depth ≤ 1. The frame buffer stores the intensity (color) value at each (x, y) position.
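The per-pixel test that drives the two buffers can be sketched as follows (a minimal version; the buffer sizes and the color format are assumptions for illustration):

```python
WIDTH, HEIGHT = 4, 4
depth_buffer = [[1.0] * WIDTH for _ in range(HEIGHT)]         # init to max depth
frame_buffer = [[(0, 0, 0)] * WIDTH for _ in range(HEIGHT)]   # background color

def plot(x, y, depth, color):
    """Z-buffer test: keep the fragment only if it is closer
    (smaller depth) than whatever is already stored at (x, y)."""
    if depth < depth_buffer[y][x]:
        depth_buffer[y][x] = depth
        frame_buffer[y][x] = color

plot(1, 1, 0.8, (255, 0, 0))   # red surface, far
plot(1, 1, 0.3, (0, 0, 255))   # blue surface, nearer: overrides red
plot(1, 1, 0.5, (0, 255, 0))   # green surface, behind blue: rejected
print(frame_buffer[1][1])      # the nearest (blue) surface wins
```

Note that the result is independent of the order in which surfaces are processed, which is the main practical appeal of the method.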

Scan-Line Method : • The Edge Table contains the coordinate endpoints of each line in the scene, the inverse slope of each line, and pointers into the polygon table to connect edges to surfaces. • The Polygon Table contains the plane coefficients, surface material properties, other surface data, and possibly pointers back into the edge table.
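One possible layout for the two tables, matching the fields the slide lists (the class and field names are illustrative assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Polygon:
    plane_coeffs: tuple                        # (A, B, C, D) of Ax + By + Cz + D = 0
    material: str                              # surface material properties
    edges: list = field(default_factory=list)  # optional pointers back to edges

@dataclass
class Edge:
    start: tuple                               # (x, y) coordinate endpoint
    end: tuple
    inv_slope: float                           # 1/m, used to step x per scan line
    polygon: Polygon                           # pointer connecting edge to surface

p = Polygon(plane_coeffs=(0, 0, 1, -5), material="matte")
e = Edge(start=(0, 0), end=(4, 8), inv_slope=0.5, polygon=p)
p.edges.append(e)
print(e.polygon.material)
```

The inverse slope is stored so that, moving from one scan line to the next, each edge's x intersection can be updated by a single addition instead of a full line-intersection calculation.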

Area-Subdivision Method : Each surface is classified relative to an area of the screen as one of: • Surrounding surface: one that completely encloses the area. • Overlapping surface: one that is partly inside and partly outside the area. • Inside surface: one that is completely inside the area. • Outside surface: one that is completely outside the area.
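The four classifications can be sketched with axis-aligned rectangles (a simplified 2D version; real implementations test the surface's actual projected extent against the screen area):

```python
def classify(surface, area):
    """Classify a surface rectangle against a screen area rectangle,
    each given as (xmin, ymin, xmax, ymax)."""
    sx0, sy0, sx1, sy1 = surface
    ax0, ay0, ax1, ay1 = area
    if sx0 <= ax0 and sy0 <= ay0 and sx1 >= ax1 and sy1 >= ay1:
        return "surrounding"   # surface completely encloses the area
    if sx1 <= ax0 or sx0 >= ax1 or sy1 <= ay0 or sy0 >= ay1:
        return "outside"       # no overlap with the area at all
    if sx0 >= ax0 and sy0 >= ay0 and sx1 <= ax1 and sy1 <= ay1:
        return "inside"        # surface completely inside the area
    return "overlapping"       # partly inside, partly outside

print(classify((0, 0, 10, 10), (2, 2, 4, 4)))  # surrounding
print(classify((3, 3, 5, 5), (2, 2, 8, 8)))    # inside
```

The method then subdivides any area whose visibility is still ambiguous (e.g. several overlapping surfaces) and re-classifies, stopping when an area is trivially resolved or reaches pixel size.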

A-Buffer Method : The A-buffer expands on the depth-buffer method to allow transparency. The key data structure in the A-buffer is the accumulation buffer. Each position in the A-buffer has two fields: • Depth field: stores a positive or negative real number. • Intensity field: stores surface-intensity information or a pointer value. If depth >= 0, the number stored at that position is the depth of a single surface overlapping the corresponding pixel area; the intensity field then stores the RGB components of the surface color at that point and the percent of pixel coverage.

If depth < 0, it indicates multiple surface contributions to the pixel intensity; the intensity field then stores a pointer to a linked list of surface data. Each surface entry in the A-buffer includes: • RGB intensity components • Opacity parameter • Depth • Percent of area coverage • Surface identifier
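The two-field cell and its linked list of surface data can be sketched like this (the class and field names are assumptions based on the slide's list):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class SurfaceData:
    rgb: tuple                   # RGB intensity components
    opacity: float               # opacity parameter
    depth: float
    coverage: float              # percent of pixel area covered
    surface_id: int
    next: Optional["SurfaceData"] = None   # next node in the linked list

@dataclass
class ABufferCell:
    depth: float                 # >= 0: single surface; < 0: multiple surfaces
    intensity: object            # RGB tuple, or pointer to a SurfaceData list

# Single opaque surface covering this pixel:
single = ABufferCell(depth=0.4, intensity=(200, 80, 80))

# Two semi-transparent surfaces contribute: the depth field goes negative
# and the intensity field points at a linked list of contributions.
back = SurfaceData((0, 0, 255), 0.5, 0.7, 1.0, surface_id=2)
front = SurfaceData((255, 0, 0), 0.5, 0.3, 1.0, surface_id=1, next=back)
multi = ABufferCell(depth=-1.0, intensity=front)
print(multi.intensity.next.surface_id)
```

At display time the renderer walks the list front to back, compositing each surface's color weighted by its opacity and coverage, which is what plain Z-buffering cannot do.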

Surface Rendering : Surface rendering involves setting the surface intensity of objects according to the lighting conditions in the scene and according to assigned surface characteristics. The lighting conditions specify the intensity and positions of light sources and the general background illumination required for the scene. The surface characteristics of objects specify the degree of transparency and the smoothness or roughness of the surface. Surface-rendering methods are usually combined with perspective projection and visible-surface identification to generate a high degree of realism in a displayed scene.

Surface Rendering : Set the surface intensity of objects according to: • Lighting conditions in the scene • Assigned surface characteristics. Lighting specifications include the intensity and positions of light sources and the general background illumination required for a scene. Surface properties include the degree of transparency and how rough or smooth the surfaces are.
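As a minimal illustration of combining lighting conditions with surface characteristics, here is a Lambertian diffuse term plus ambient background illumination (the specific shading model is an assumption for this sketch; the slides do not name one):

```python
import math

def shade(normal, light_dir, ambient=0.1, diffuse_coeff=0.8):
    """Surface intensity = ambient background illumination
    + diffuse_coeff * max(0, N . L), with N and L normalized.
    diffuse_coeff models the surface characteristic (roughness/reflectance)."""
    nx, ny, nz = normal
    lx, ly, lz = light_dir
    n_len = math.sqrt(nx*nx + ny*ny + nz*nz)
    l_len = math.sqrt(lx*lx + ly*ly + lz*lz)
    ndotl = (nx*lx + ny*ly + nz*lz) / (n_len * l_len)
    return ambient + diffuse_coeff * max(0.0, ndotl)

print(shade((0, 0, 1), (0, 0, 1)))   # light head-on: brightest
print(shade((0, 0, 1), (1, 0, 0)))   # grazing light: ambient term only
```

Evaluating such a model per surface, after visible-surface identification has decided which surface owns each pixel, is what produces the shaded, realistic images the slide describes.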