Table of Contents
- Time Table
- Syllabus
- Lesson Plan
- Copy of Assignment with Solution
- Copy of Question Bank with Solutions
- Question Papers (Sessional with Solution and Final Exam)
- Copy of Notes
- Performance Sheet (Marks Detail of Sessional Exams)
DEPARTMENT: COMPUTER SCIENCE & ENGINEERING
LESSON PLAN
YEAR: 2019-20    SEM: V
SUBJECT: COMPUTER GRAPHICS    CODE: CSE
FACULTY NAME: SHASHANK SINGH
NOTE:
- Lecture No. will follow the lecture schedule, covering the topic given under that Lecture No.
- Description will briefly include the aspects to be covered and the type of instruction, such as numericals, sketches, etc.
- Date will be filled in on the day the lecture is conducted and signed by the faculty.
- The lesson plan is initially checked and approved by the HOD. It will be maintained with the attendance register.
- The HOD will check progress on a weekly basis.
- Course completion will be assessed as the ratio of lectures held to lectures planned.
CAMBRIDGE INSTITUTE OF POLYTECHNIC,
ANGARA, RANCHI
Lesson Plan — Lecture No. | Contents | Date of Lecture | Signature (Faculty / HOD)
- UNIT I: Introduction to computer graphics & graphics systems. Why is computer graphics used? Definition of computer graphics.
- Types of computer graphics: generative graphics, image analysis, and cognitive graphics. What is interactive computer graphics? Applications of computer graphics.
- Image Files: An image is made up of a rectangular grid of pixels. It has a definite height and a definite width, counted in pixels. The pixels that represent an image are structured as a grid (columns and rows), where every single pixel holds numbers that describe its color and brightness. Image Processing.
- Lookup Table: A lookup table is initialized with default values. Its size depends on the number of colors that the graphics system can display simultaneously.
- Color Models: A color model is an organized system used for creating a wide variety of colors from a small collection of primary colors.
- Additive Color Models: Additive color models are those color models which use light for displaying colors. In these color models, colors are seen as the result of transmitted light.
- Subtractive Color Models: Subtractive color models are those color models which use printing inks for displaying colors. In these color models, colors are seen as the result of reflected light. RGB Color Model: The RGB color model is an additive color model, which uses three primary colors: red (R), green (G), and blue (B). These primary colors can be combined to create secondary colors such as yellow, cyan, and magenta. Computers can display approximately 16.7 million colors using this color model. In the RGB model, every channel is measured with a value that ranges from 0 to 255, where 0 means no light and 255 means maximum intensity. This (0 to 255) range is the limit of what one byte of computer memory can store, so in the RGB model 3 bytes (24 bits) of information are needed to define the three primary channels. A large percentage of the visible band of colors can be represented by mixing red, green, and blue light in various intensities and proportions. RGB colors are also called additive colors, as they combine to create white; in other words, adding all colors creates white, that is, all visible wavelengths are transmitted back to the eye. Additive colors are used for monitors, video, and lighting.
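The 3-bytes-per-color idea above can be shown in a minimal Python sketch (the function names are illustrative, not from these notes): each channel occupies one byte, so the three channels pack into a single 24-bit value.

```python
def pack_rgb(r, g, b):
    """Pack three 0-255 channel values into one 24-bit integer."""
    for c in (r, g, b):
        if not 0 <= c <= 255:
            raise ValueError("each channel must fit in one byte (0-255)")
    return (r << 16) | (g << 8) | b

def unpack_rgb(color):
    """Recover the (r, g, b) channels from a packed 24-bit value."""
    return (color >> 16) & 0xFF, (color >> 8) & 0xFF, color & 0xFF

# White is all channels at maximum intensity; 2**24 distinct values exist.
assert pack_rgb(255, 255, 255) == 0xFFFFFF
assert unpack_rgb(pack_rgb(10, 20, 30)) == (10, 20, 30)
```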
- CMYK Color Model: The cyan (C), magenta (M), yellow (Y), and black (K) (CMYK) model is a subtractive color model, which is mainly used in color print production. This color model is also called the complementary RGB model. In this color model, inks of four colors, cyan (C), magenta (M), yellow (Y), and black (K), are used for defining colors. Every color has some ink amount, measured as a percentage from 0 to 100, where a value of 100 signifies the application of ink at full intensity. The CMYK color model describes colors using a subtractive process, which closely matches the working principles of the printer; its color space is therefore called subtractive. Inks of cyan, magenta, yellow, and black are applied to a white surface to subtract some color from that white surface, and this subtraction finally produces the final color. This model is also generally called the CMY model. The subtraction of all colors by the CMY combination at full intensity should render black. Nevertheless, because of impurities that exist in the printing inks, mixing cyan, magenta, and yellow at full intensity yields a muddy dark color rather than true black, which is why a separate black (K) ink is used.
- Hard Copy: Hard copy refers to output produced in a permanent physical form, such as paper or microfilm. Output in the form of hard copy is permanent and stable. Generally, paper is the most widely used form of hard copy.
- Soft Copy: Soft copy refers to the electronic version of an output, which resides in computer memory or on disk and is displayed on a screen. It is not a permanent form of output like the hard copy. Output in the form of soft copy can be in audio, visual, or graphical form. Printers and plotters are the most widely used hard copy devices; the graphics stored on a computer's disk drive are printed using printers. Monitors and flat-panel display devices are the most commonly used soft copy display devices. Monitors are commonly used as the output device for computer graphics. Every monitor performs the same basic function: allowing the user to view the images generated by the computer.
- Random Scan Display: Random scan displays refer to direct-view storage tube displays and calligraphic displays. Random scan displays are also known as line-drawing displays. Cathode Ray Tube (CRT) model and technique.
- Raster Scan Display Systems: A raster scan display system uses the rectangular pattern of image capture and reconstruction found in televisions. Most raster displays have some specialized hardware to assist in scan-converting output primitives into the pixmap and to perform raster operations such as moving, copying, and modifying pixels or blocks of pixels. This hardware is called a graphics display processor. The fundamental difference between display systems is how much the display processor does versus how much must be done by the graphics subroutine package executing on the general-purpose CPU that drives the raster display.
- Printers: toner-based printers, liquid inkjet printers, solid ink printers, dye-sublimation printers, inkless printers, UV printers.
- Projectors: LCD projectors, DLP projectors.
- Aspect Ratio: The aspect ratio of an image is the ratio of the width of the image to its height. The aspect ratio is usually written as two numbers separated by a colon.
- Measuring Color-depth and Computer Memory: Color-depth determines how much memory each pixel requires; together with the resolution it fixes the total memory a frame needs. The three common color depths are:
- 8-bit (256 colors)
- 16-bit (65 thousand colors)
- 24/32-bit (16.8 million colors)
- Refresh Rate: The term "refresh rate" refers to how often the display screen is updated and redrawn. For a stable, flicker-free picture, at least 70 refreshes per second is recommended (with every "refresh", the picture on the monitor is redrawn).
- Elements of Picture Quality: The megapixel count is one aspect that plays an important role in the quality of a camera and the photos it produces. In addition, the camera sensor and the optical quality of the lens play an important role in image quality. Apart from the sensor and lens, other elements that determine the quality of photos are:
- Appropriate lighting on a subject.
- Proper focus
- Taking a photo at high resolution
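The color-depth figures above translate directly into memory requirements. A small Python sketch (the function name is illustrative) computes the bytes one frame needs from the resolution and the bits per pixel:

```python
def framebuffer_bytes(width, height, bits_per_pixel):
    """Memory needed to store one full frame, in bytes."""
    return width * height * bits_per_pixel // 8

# A 1024x768 display at 24-bit color depth needs 2,359,296 bytes (~2.25 MB).
print(framebuffer_bytes(1024, 768, 24))
```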
- Active graphics devices: In interactive computer graphics, the user has some control over the picture, i.e., the user can make changes to the produced image. One example is the ping-pong game. Passive graphics devices: In non-interactive computer graphics, the picture is produced on the monitor and the user does not have any control over the image, i.e., the user cannot make any change to the rendered image. One example is the titles shown on TV.
18. UNIT II
Points: A point has a position in space. The only characteristic that distinguishes one point from another is its position. Line: In computer graphics, a line refers to a line segment, which is a part of a straight line that extends indefinitely in opposite directions. A line is defined by two endpoints and is represented by the line equation y = mx + b, where m is the slope and b is the y-intercept of the line. DDA Algorithm: The Digital Differential Analyzer (DDA) algorithm is one of the incremental scan-conversion methods. At each step, it calculates the next point using the results obtained in the previous step.
- DDA CONTD.
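The incremental idea behind DDA can be sketched in a few lines of Python (function name illustrative): step one unit along the major axis and add the fixed increment along the other axis each time, rounding to the nearest pixel.

```python
def dda_line(x1, y1, x2, y2):
    """Scan-convert a line with DDA: step one unit along the major axis
    and increment the other coordinate by the slope each step."""
    dx, dy = x2 - x1, y2 - y1
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(x1, y1)]          # degenerate line: a single point
    x_inc, y_inc = dx / steps, dy / steps
    x, y = float(x1), float(y1)
    points = []
    for _ in range(steps + 1):
        points.append((round(x), round(y)))
        x += x_inc
        y += y_inc
    return points

print(dda_line(0, 0, 5, 2))  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```

Note that DDA accumulates floating-point increments, which is exactly the cost Bresenham's algorithm (next lecture) avoids.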
- Bresenham’s Line Algorithm: Bresenham’s line algorithm is one of the highly efficient incremental methods for scan-converting lines. It determines which points in an n-dimensional raster should be plotted in order to form a close approximation to a straight line between two given points. It is an accurate and efficient raster line-generating algorithm that uses only incremental integer calculations.
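A sketch of the integer-only formulation (the error-term variant that handles all octants; names are illustrative): the next pixel is chosen with nothing but integer additions and comparisons.

```python
def bresenham_line(x1, y1, x2, y2):
    """Bresenham's algorithm: pick each next pixel using only integer
    adds and comparisons on an error term, no floating point."""
    dx, dy = abs(x2 - x1), abs(y2 - y1)
    sx = 1 if x2 >= x1 else -1      # step direction along x
    sy = 1 if y2 >= y1 else -1      # step direction along y
    err = dx - dy
    points = []
    x, y = x1, y1
    while True:
        points.append((x, y))
        if x == x2 and y == y2:
            break
        e2 = 2 * err
        if e2 > -dy:                # error says: advance in x
            err -= dy
            x += sx
        if e2 < dx:                 # error says: advance in y
            err += dx
            y += sy
    return points

print(bresenham_line(0, 0, 5, 2))  # [(0, 0), (1, 0), (2, 1), (3, 1), (4, 2), (5, 2)]
```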
- Circle: A circle is a symmetrical figure, so any circle-generating algorithm may plot eight points for each value that it calculates. Midpoint Circle Drawing Algorithm: The midpoint circle drawing algorithm also uses the eight-way symmetry of the circle. An implicit representation of a function is used for the comparable elements of Bresenham’s midpoint algorithm for circles. The function is: F(x, y) = x² + y² − R² = 0.
- Midpoint Circle Drawing Algorithm Contd.
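A Python sketch of one common integer formulation (names illustrative; the initial decision value 1 − r and the increments follow from evaluating F(x, y) = x² + y² − r² at successive midpoints): one octant is walked and each computed point is mirrored into the other seven.

```python
def midpoint_circle(cx, cy, r):
    """Midpoint circle algorithm: walk one octant using the decision
    variable from F(x, y) = x^2 + y^2 - r^2, mirroring each point
    into the other seven octants."""
    points = set()
    x, y = 0, r
    d = 1 - r                       # decision value at the first midpoint
    while x <= y:
        for px, py in ((x, y), (y, x), (-x, y), (-y, x),
                       (x, -y), (y, -x), (-x, -y), (-y, -x)):
            points.add((cx + px, cy + py))
        if d < 0:                   # midpoint inside the circle: keep y
            d += 2 * x + 3
        else:                       # midpoint outside: step y inward
            d += 2 * (x - y) + 5
            y -= 1
        x += 1
    return points
```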
- Boundary-fill Algorithm: Starting from an interior seed pixel, neighboring pixels are examined and, if they are neither the boundary color nor already the fill color, those pixels are also displayed in fill color.
- The 4-connected approach has a problem: sometimes it does not fill a corner pixel, because it checks only the four adjacent positions of the given pixel.
- Algorithm for boundary fill (4-connected): boundary_fill4(x, y, fill, boundary)
- Initialize the boundary of the region, and the variable fill with the fill color.
- Read the interior pixel at (x, y) into a variable called current: current = getpixel(x, y)
- If current is not equal to boundary and current is not equal to fill, then:
  setpixel(x, y, fill)
  boundary_fill4(x+1, y, fill, boundary)
  boundary_fill4(x-1, y, fill, boundary)
  boundary_fill4(x, y+1, fill, boundary)
  boundary_fill4(x, y-1, fill, boundary)
- End.
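The 4-connected boundary-fill steps can be sketched in Python, assuming the image is simply a 2-D list of color values (the grid layout and function name are illustrative):

```python
def boundary_fill4(grid, x, y, fill, boundary):
    """4-connected boundary fill: recolor pixels recursively until the
    boundary color (or an already-filled pixel) is reached."""
    if not (0 <= y < len(grid) and 0 <= x < len(grid[0])):
        return                      # outside the image
    current = grid[y][x]
    if current != boundary and current != fill:
        grid[y][x] = fill
        boundary_fill4(grid, x + 1, y, fill, boundary)
        boundary_fill4(grid, x - 1, y, fill, boundary)
        boundary_fill4(grid, x, y + 1, fill, boundary)
        boundary_fill4(grid, x, y - 1, fill, boundary)

# A 5x5 region whose border is color 1; fill the interior with color 2.
grid = [[1] * 5] + [[1, 0, 0, 0, 1] for _ in range(3)] + [[1] * 5]
boundary_fill4(grid, 2, 2, fill=2, boundary=1)
```

For large regions the recursion can get deep; production fillers usually use an explicit stack or span filling instead.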
- Flood-fill Algorithm: With this algorithm, we can recolor an area that is not defined within a single color boundary. We paint such areas by replacing a given interior color instead of searching for a boundary color value; this approach is termed the flood-fill algorithm. The stack storage requirement of this procedure can be reduced by filling pixel spans. How does it work? The problem is pretty simple and usually follows these steps:
- Take the position of the starting point.
- Decide whether you want to go in 4 directions ( N, S, W, E ) or 8 directions ( N, S, W, E, NW, NE, SW, SE ).
- Choose a replacement color and a target color.
- Travel in those directions.
- If the tile you land on is a target, replace it with the chosen color.
- Repeat steps 4 and 5 until you’ve been everywhere within the boundaries. Algorithm for flood fill (4-connected):
  floodfill4(x, y, fillcolor, oldcolor)
  begin
    if getpixel(x, y) = oldcolor then
    begin
      setpixel(x, y, fillcolor)
      floodfill4(x+1, y, fillcolor, oldcolor)
      floodfill4(x-1, y, fillcolor, oldcolor)
      floodfill4(x, y+1, fillcolor, oldcolor)
      floodfill4(x, y-1, fillcolor, oldcolor)
    end
  end
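The recursive pseudocode can overflow the call stack on large regions; a minimal Python sketch (names illustrative) replaces the recursion with an explicit queue:

```python
from collections import deque

def flood_fill4(grid, x, y, fill_color):
    """Iterative 4-connected flood fill: replace the connected region
    of the starting pixel's color with fill_color."""
    old_color = grid[y][x]
    if old_color == fill_color:
        return                      # nothing to do; avoids an endless loop
    queue = deque([(x, y)])
    while queue:
        px, py = queue.popleft()
        if (0 <= py < len(grid) and 0 <= px < len(grid[0])
                and grid[py][px] == old_color):
            grid[py][px] = fill_color
            queue.extend([(px + 1, py), (px - 1, py),
                          (px, py + 1), (px, py - 1)])

grid = [[0, 0, 1],
        [0, 1, 1],
        [1, 1, 0]]
flood_fill4(grid, 0, 0, 5)   # recolors only the region connected to (0, 0)
```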
29. UNIT III
Basic transformations Changes in orientation, size, and shape are accomplished with geometric transformations that alter the coordinate descriptions of objects. The basic geometric transformations are translation, rotation, and scaling. Other transformations that are often applied to objects include reflection and shear.
1. Translation: A translation is applied to an object by repositioning it along a straight-line path from one coordinate location to another. We translate a two-dimensional point by adding translation distances tx and ty to the original coordinate position (x, y) to move the point to a new position (x', y'). The translation distance pair (tx, ty) is called a translation vector or shift vector. 2. Scaling: A scaling transformation alters the size of an object. For polygons, this operation can be carried out by multiplying the coordinate values (x, y) of each vertex by scaling factors sx and sy to produce the transformed coordinates (x', y'): x' = x·sx, y' = y·sy. The scaling factor sx scales objects in the x direction, while sy scales in the y direction.
- 3. Rotation: A two-dimensional rotation is applied to an object by repositioning it along a circular path in the xy plane. To generate a rotation, we specify a rotation angle θ and the position (x, y) of the rotation point (or pivot point) about which the object is to be rotated. Positive values of the rotation angle define counterclockwise rotations; negative values rotate objects in the clockwise direction. This transformation can also be described as a rotation about a rotation axis that is perpendicular to the xy plane and passes through the pivot point. We first determine the transformation equations for rotating a point position P when the pivot point is at the coordinate origin: x' = x cos θ − y sin θ, y' = x sin θ + y cos θ.
- 4. Matrix Representation and Homogeneous Coordinates: Many graphics applications involve sequences of geometric transformations. An animation, for example, might require an object to be translated and rotated at each increment of the motion. In design and picture-construction applications, we perform translations, rotations, and scaling to fit the picture components into their proper positions. Each of the basic transformations can be expressed in the general matrix form P' = M1·P + M2, with coordinate positions P and P' represented as column vectors. Matrix M1 is a 2×2 array containing multiplicative factors, and M2 is a two-element column matrix containing translational terms.
- For translation, M1 is the identity matrix.
- For rotation, M2 contains the translational terms associated with the pivot point. For scaling, M2 contains the translational terms associated with the fixed point.
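With homogeneous coordinates, all three basic transformations become 3×3 matrix multiplies, so sequences compose into a single matrix. A self-contained Python sketch (plain nested lists; function names illustrative), including a pivot-point rotation built as translate–rotate–translate:

```python
import math

def mat_mul(a, b):
    """Multiply two 3x3 matrices (row-major nested lists)."""
    return [[sum(a[i][k] * b[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def translate(tx, ty):
    return [[1, 0, tx], [0, 1, ty], [0, 0, 1]]

def scale(sx, sy):
    return [[sx, 0, 0], [0, sy, 0], [0, 0, 1]]

def rotate(theta):
    c, s = math.cos(theta), math.sin(theta)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(m, x, y):
    """Transform the point (x, y), represented homogeneously as (x, y, 1)."""
    p = [x, y, 1]
    x2, y2, w = (sum(m[i][k] * p[k] for k in range(3)) for i in range(3))
    return x2 / w, y2 / w

# Rotate about pivot (1, 1): translate pivot to origin, rotate, translate back.
m = mat_mul(translate(1, 1), mat_mul(rotate(math.pi / 2), translate(-1, -1)))
x, y = apply(m, 2, 1)   # 90 degrees CCW about (1, 1) sends (2, 1) to (1, 2)
```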
- We carry out the viewing transformation in several steps, as indicated below.
- First, we construct the scene in world coordinates using the output primitives and attributes.
- Next, to obtain a particular orientation for the window, we can set up a two-dimensional viewing coordinate system in the world-coordinate plane, and define a window in the viewing-coordinate system.
- The viewing coordinate reference frame is used to provide a method for setting up arbitrary orientations for rectangular windows. Once the viewing reference frame is established, we can transform descriptions in world coordinates to viewing coordinates.
- We then define a viewport in normalized coordinates (in the range from 0 to 1) and map the viewing-coordinate description of the scene to normalized coordinates.
- At the final step, all parts of the picture that lie outside the viewport are clipped, and the contents of the viewport are transferred to device coordinates.
- Viewing Coordinate Reference Frame: This coordinate system provides the reference frame for specifying the world-coordinate window. First, a viewing-coordinate origin is selected at some world position: P0 = (x0, y0). Then we need to establish the orientation, or rotation, of this reference frame. One way to do this is to specify a world vector V that defines the viewing-up direction; vector V is called the view up vector. Given V, we can calculate the components of unit vectors v = (vx, vy) and u = (ux, uy) for the viewing yv and xv axes, respectively. These unit vectors are used to form the first and second rows of the rotation matrix R that aligns the viewing xv-yv axes with the world xw-yw axes. We obtain the matrix for converting world-coordinate positions to viewing coordinates as a two-step composite transformation:
- First, we translate the viewing origin to the world origin,
- Then we rotate to align the two coordinate reference frames.
- Window-To-Viewport Coordinate Transformation: Once object descriptions have been transferred to the viewing reference frame, we choose the window extents in viewing coordinates and select the viewport limits in normalized coordinates. Object descriptions are then transferred to normalized device coordinates. We do this using a transformation that maintains the same relative placement of objects in the viewport as they have in the window.
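"Same relative placement" means a point keeps the same fractional position along each axis. A small Python sketch of the mapping (names illustrative):

```python
def window_to_viewport(xw, yw, window, viewport):
    """Map a point from window (world) extents to viewport extents,
    preserving its fractional position along each axis."""
    xwmin, ywmin, xwmax, ywmax = window
    xvmin, yvmin, xvmax, yvmax = viewport
    sx = (xvmax - xvmin) / (xwmax - xwmin)   # x scale factor
    sy = (yvmax - yvmin) / (ywmax - ywmin)   # y scale factor
    xv = xvmin + (xw - xwmin) * sx
    yv = yvmin + (yw - ywmin) * sy
    return xv, yv

# The window centre maps to the viewport centre.
print(window_to_viewport(5, 5, (0, 0, 10, 10), (0.0, 0.0, 1.0, 1.0)))  # (0.5, 0.5)
```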
- Clipping: In computer graphics, our screen acts as a 2-D coordinate system. It is not necessary that each and every point can be viewed on our viewing pane (i.e., our computer screen); we can view only points that lie in the range between (0, 0) and (Xmax, Ymax). So clipping is a procedure that identifies those portions of a picture that are either inside or outside of our viewing pane. Point Clipping Algorithm:
- Get the minimum and maximum coordinates of the viewing pane.
- Get the coordinates for a point.
- Check whether given input lies between minimum and maximum coordinate of viewing pane.
- If yes, display the point, which lies inside the region; otherwise discard it.
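The four steps above reduce to two range checks per point; a minimal Python sketch (names illustrative):

```python
def clip_point(x, y, xmin, ymin, xmax, ymax):
    """Point clipping: keep the point only if it lies inside the pane."""
    return xmin <= x <= xmax and ymin <= y <= ymax

# Only points inside the (0,0)-(10,10) pane survive.
visible = [(x, y) for x, y in [(1, 1), (5, 12), (-2, 3)]
           if clip_point(x, y, 0, 0, 10, 10)]
```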
- To clip a line, we need to consider only its endpoints. Three cases can be considered:
- If both endpoints of a line lie inside a clip rectangle, the entire line lies inside the clip rectangle and can be accepted.
- If one endpoint lies inside and one outside, the line intersects the clip rectangle and we must compute the intersection point.
- If both endpoints are outside the clip rectangle, the line may still intersect the clip rectangle, and we need to perform further calculations to determine whether there are intersections and, if so, where they occur.
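The three endpoint cases can be classified cheaply with region outcodes (this is the Cohen–Sutherland outcode test, which the notes describe but do not name; function names are illustrative): trivial accept when both codes are zero, trivial reject when the codes share a bit, and intersection computation otherwise.

```python
# Region outcodes: one bit each for left, right, bottom, top of the clip window.
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    code = 0
    if x < xmin: code |= LEFT
    elif x > xmax: code |= RIGHT
    if y < ymin: code |= BOTTOM
    elif y > ymax: code |= TOP
    return code

def classify_line(x1, y1, x2, y2, xmin, ymin, xmax, ymax):
    """Return which of the three endpoint cases applies to a line."""
    c1 = outcode(x1, y1, xmin, ymin, xmax, ymax)
    c2 = outcode(x2, y2, xmin, ymin, xmax, ymax)
    if c1 == 0 and c2 == 0:
        return "accept"             # both endpoints inside
    if c1 & c2:
        return "reject"             # both outside on the same side
    return "needs intersection"     # may cross; compute intersections

print(classify_line(1, 1, 4, 4, 0, 0, 10, 10))    # accept
print(classify_line(-5, 1, -1, 4, 0, 0, 10, 10))  # reject (both left)
```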