The document discusses line drawing algorithms for computer graphics. It begins with an overview of graphics hardware and cathode ray tube (CRT) displays. It then discusses the problem of converting lines defined by endpoints into pixels on a display. Two algorithms are described: a simple solution that calculates y-coordinates for each x-coordinate, and the digital differential analyzer (DDA) algorithm, which incrementally calculates the next point along the line. The DDA improves on the simple solution by avoiding multiplications but still has issues with rounding errors and floating point operations.
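The incremental idea behind the DDA, including the floating-point increments and per-step rounding the summary criticizes, can be sketched as follows (a minimal Python illustration, not the document's own code):

```python
def dda_line(x0, y0, x1, y1):
    """Rasterize a line with the DDA: step one unit along the major
    axis and accumulate the fractional change on the minor axis."""
    dx, dy = x1 - x0, y1 - y0
    steps = max(abs(dx), abs(dy))
    if steps == 0:
        return [(round(x0), round(y0))]  # degenerate single-point line
    x_inc, y_inc = dx / steps, dy / steps  # floating-point increments
    x, y = float(x0), float(y0)
    pixels = []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))  # rounding at every step
        x += x_inc
        y += y_inc
    return pixels
```

The repeated `round` calls on accumulated floats are exactly the rounding and floating-point cost that Bresenham's integer-only algorithm later removes.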
IJCER (www.ijceronline.com) International Journal of computational Engineerin... (ijceronline)
The document discusses a digital video watermarking technique using discrete cosine transform (DCT) and perceptual analysis. It proposes embedding a binary watermark in the DCT domain of video frames. A mathematical model is developed to insert a visible watermark into video frames in the DCT domain while considering characteristics of the human visual system to minimize perceptual quality impact. Experimental results show a watermarked video frame with the watermark logo embedded at different positions. The technique aims to provide copyright protection for digital video applications.
The document discusses DCT/IDCT concepts and applications. It provides an introduction to DCT and IDCT, explaining that they are used widely in video and audio compression. It describes the DCT and IDCT functions and how they work to transform signals between spatial and frequency domains. Examples of one-dimensional and two-dimensional DCT/IDCT equations are also given. Finally, common applications of DCT/IDCT compression techniques are listed, such as in DVD players, cable TV, graphics cards, and medical imaging systems.
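The one-dimensional DCT/IDCT pair mentioned above can be written directly from its defining equations; a naive O(N²) Python sketch with orthonormal scaling (real codecs use fast factorizations instead):

```python
import math

def dct_1d(x):
    """DCT-II: X[k] = c(k) * sum_n x[n] * cos(pi*(2n+1)*k / (2N)),
    with c(0) = sqrt(1/N) and c(k) = sqrt(2/N) for k > 0."""
    N = len(x)
    out = []
    for k in range(N):
        c = math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
        s = sum(x[n] * math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                for n in range(N))
        out.append(c * s)
    return out

def idct_1d(X):
    """Inverse (DCT-III with matching scaling): recovers x from X."""
    N = len(X)
    out = []
    for n in range(N):
        s = math.sqrt(1.0 / N) * X[0]
        s += sum(math.sqrt(2.0 / N) * X[k] *
                 math.cos(math.pi * (2 * n + 1) * k / (2 * N))
                 for k in range(1, N))
        out.append(s)
    return out
```

A constant signal lands entirely in the DC coefficient `X[0]`, which is why the DCT concentrates energy so well on smooth image blocks; the 2-D version used in codecs applies this transform to rows and then columns.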
The discrete cosine transform (DCT) is a widely used tool in image and video compression applications. Recently, high-throughput DCT designs have been adopted to meet the requirements of real-time applications.
Operating shifting and addition in parallel, an error-compensated adder-tree (ECAT) is proposed to deal with truncation errors and to achieve a low-error, high-throughput DCT design. The work presents a distributed-arithmetic (DA) based DCT core built around the proposed ECAT, which operates shifting and addition in parallel by unrolling all the words required to be computed, while an error-compensating circuit alleviates the truncation error for high accuracy. Based on the low-error ECAT, the DA precision in this work is chosen to be 9 bits instead of the 12 bits used in previous works. Therefore, the hardware cost is reduced and the speed is improved.
Hardware Implementation of Genetic Algorithm Based Digital Colour Image Water... (IDES Editor)
This document describes a hardware implementation of a genetic algorithm based digital color image watermarking system. The system embeds a watermark image into the luminance channel (Y channel) of a host color image after converting the image from RGB to YUV color space. A genetic algorithm is used to determine optimal intensity values in the host image for embedding the watermark image bits invisibly. The proposed design is implemented as a custom integrated circuit for real-time watermarking of images as they are captured by a digital camera. Synthesis results show that the design can operate at 5ns clock speed and consumes a maximum power of 73.84mW when implemented on an Altera Cyclone II FPGA.
The International Journal of Engineering and Science (The IJES) (theijes)
This document presents a robust watermarking technique that uses discrete wavelet transform and singular value decomposition. The technique embeds a watermark image into the singular values of the host image's blue channel after decomposing it using discrete wavelet transform. The proposed method was shown to be robust against various signal processing attacks and geometric attacks while maintaining good visual quality. Experimental results on a sample image demonstrated the technique's effectiveness against attacks like blurring, noise addition, rotation and gamma correction. The technique was concluded to provide copyright protection of digital images using properties of both discrete wavelet transform and singular value decomposition.
Performance Analysis of Digital Watermarking Of Video in the Spatial Domain (paperpublications3)
Abstract: In this paper, we suggest a spatial-domain method for digital video watermarking with both visible and invisible watermarks. The methods are used for copyright protection as well as proof of ownership. We first extract the frames from the video and then use the spatial-domain characteristics of each frame, working directly on its pixel values according to the watermark, and calculate different parameters.
Keywords: Digital video watermarking, copyright protection, spatial domain watermarking, Least Significant Bit substitution.
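The Least Significant Bit substitution named in the keywords can be sketched in a few lines; a minimal Python illustration on a flat list of 8-bit pixel values (the function names are ours, not the paper's):

```python
def embed_lsb(pixels, bits):
    """Invisible spatial-domain watermark: overwrite the least
    significant bit of each 8-bit pixel value with one watermark bit."""
    assert len(bits) <= len(pixels)
    out = list(pixels)
    for i, b in enumerate(bits):
        out[i] = (out[i] & ~1) | b  # clear bit 0, then set it to b
    return out

def extract_lsb(pixels, n):
    """Blind extraction: read back the LSB of the first n pixels."""
    return [p & 1 for p in pixels[:n]]
```

Each pixel changes by at most 1 out of 255 levels, which is why LSB embedding is imperceptible but also fragile under lossy compression.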
This document discusses different techniques for digital image watermarking, including in the spatial and frequency domains. It provides an overview of watermarking concepts and applications. It then describes two watermarking algorithms - one that embeds watermarks in the spatial domain by modifying pixel intensities in selected image blocks, and another that embeds watermarks in the wavelet domain by modifying selected wavelet coefficients. Both algorithms are described step-by-step and include watermark insertion and extraction procedures. Results are provided showing the performance of the algorithms under different attacks in terms of normalized cross-correlation between the original and extracted watermarks.
This document summarizes a technique called CADU (collaborative adaptive down-sampling and upconversion) to improve image compression at low bit rates. The technique adaptively decreases high frequency information by directionally prefiltering an image before uniform downsampling. This allows the downsampled image to be conventionally compressed while avoiding aliasing artifacts. At the decoder, the low-resolution image is decompressed and then upconverted to the original resolution using constrained least squares restoration with an autoregressive model. Experimental results show CADU outperforms JPEG2000 in PSNR and visual quality at low to medium bit rates. The technique suggests oversampling wastes resources and could hurt quality given tight bit budgets.
IJERA (International Journal of Engineering Research and Applications) is an international online, ... peer-reviewed journal. For more details or to submit your article, please visit www.ijera.com
This document summarizes a master's thesis on guided transcoding techniques for reducing bitrates in adaptive video streaming using the H.265/HEVC video coding standard. It introduces fundamental concepts in image and video coding such as prediction, frequency transformation, quantization, and adaptive streaming methods. The thesis evaluates two guided transcoding methods called pruning and deflation that aim to store and transmit less video data while maintaining high quality, and compares their performance on various test sequences.
The document proposes two new algorithms - the New Backlight Dimming Algorithm (NBDA) and the New Image Enhancement Algorithm (NIEA) - to simultaneously reduce LCD backlight power consumption and enhance image contrast using content-based histogram analysis.
The NBDA analyzes image histograms to select the appropriate LCD backlight current level to reduce power. The NIEA then enhances image contrast to compensate for brightness changes from dimming the backlight, maintaining image quality.
Experimental results on an FPGA platform show the algorithms can on average reduce power consumption by 47% while improving the image enhancement ratio by 6.8%, assessed using PSNR and SSIM metrics, allowing viewers to perceive little change in image quality despite the reduced backlight power.
The document discusses JPEG 2000 software licensing. It notes that the authors' JPEG 2000 software package was originally intended for internal research and as a reference for the JPEG 2000 Part 10 standard, but that they have received numerous requests from companies and academics. It raises questions about how to balance non-commercial and commercial use policies for the code and how to provide access to it while potentially creating revenue.
Wavelet analysis involves representing a signal as a sum of wavelet functions of varying location and scale. Wavelet transforms allow for efficient video compression by removing spatial and temporal redundancies. Without compression, transmitting uncompressed video would require huge storage and bandwidth. Using wavelet compression, a day of video could be stored using the same space as an uncompressed minute. The discrete wavelet transform decomposes a signal into different frequency subbands, making it suitable for scalable and tolerant video compression standards like JPEG2000. Wavelet compression provides better quality at low bit rates compared to DCT techniques like JPEG.
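The subband decomposition described above can be illustrated with one level of the Haar wavelet, the simplest DWT. This sketch uses an averaging/differencing convention for readability (an orthonormal implementation would scale by √2 instead):

```python
def haar_dwt_1d(signal):
    """One level of the Haar DWT: split the signal into a low-pass
    (average) and a high-pass (difference) subband at half the length."""
    assert len(signal) % 2 == 0
    half = len(signal) // 2
    low = [(signal[2 * i] + signal[2 * i + 1]) / 2 for i in range(half)]
    high = [(signal[2 * i] - signal[2 * i + 1]) / 2 for i in range(half)]
    return low, high

def haar_idwt_1d(low, high):
    """Perfect reconstruction: interleave sums and differences."""
    out = []
    for a, d in zip(low, high):
        out.extend([a + d, a - d])
    return out
```

Repeating the split on the low-pass subband yields the multi-level frequency decomposition that scalable standards such as JPEG2000 exploit.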
Spandana image processing and compression techniques (7840228) (indianspandana)
This document discusses image processing and compression techniques. It begins by stating the objectives of image processing as sharpening images, minimizing degradation, and reducing the amount of memory needed to store images through compression. It then provides introductions and definitions related to digital image processing. The key stages of digital image processing are described as image acquisition, enhancement, restoration, morphological processing, segmentation, object recognition, representation and description, color image processing, and compression. Image compression aims to reduce the amount of data required to represent a digital image by removing redundant data. The goals and flow of image compression are explained. Finally, different compression techniques such as lossless compression using run-length encoding and lossy compression are outlined.
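The run-length encoding mentioned as a lossless technique can be sketched in a few lines of Python (symbol values here are illustrative pixel intensities):

```python
def rle_encode(data):
    """Lossless run-length encoding: collapse runs of repeated
    symbols into (symbol, count) pairs."""
    runs = []
    for s in data:
        if runs and runs[-1][0] == s:
            runs[-1][1] += 1  # extend the current run
        else:
            runs.append([s, 1])  # start a new run
    return [(s, c) for s, c in runs]

def rle_decode(runs):
    """Exact inverse: expand each (symbol, count) pair."""
    return [s for s, c in runs for _ in range(c)]
```

RLE only pays off when the data actually contains long runs, which is why it is typically applied after a decorrelating step such as the zig-zag scan of quantized DCT coefficients.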
Digital Image Processing is an introduction to the topic that covers the definition of digital images and digital image processing. It provides a brief history of the field and examples of applications like medical imaging, satellite imagery analysis, and industrial inspection. The document concludes with an overview of the key stages in digital image processing like image acquisition, enhancement, and representation.
The document proposes a new video watermarking algorithm using the dual-tree complex wavelet transform (DTCWT). The DTCWT offers advantages like shift invariance and directional selectivity. The algorithm embeds a watermark by adding its coefficients to high frequency DTCWT coefficients of video frames. Masks are used to hide the watermark perceptually. Experimental results show the proposed method is robust to geometric distortions, lossy compression, and a joint attack, outperforming comparable DWT-based methods. It is suitable for playback control due to its robustness and simple implementation.
DWT Based Audio Watermarking Schemes: A Comparative Study (ijcisjournal)
The main problem encountered during multimedia transmission is its protection against illegal distribution and copying. One possible solution is digital watermarking. Digital audio watermarking is the technique of embedding watermark content into an audio signal to protect the owner's copyright. In this paper, we use three wavelet transforms, namely the Discrete Wavelet Transform (DWT), the Double Density DWT (DDDWT) and the Dual Tree DWT (DTDWT), for audio watermarking, and a performance analysis of each transform is presented. The key idea of the basic algorithm is to segment the audio signal into two parts, one for synchronization-code insertion and the other for watermark embedding. Initially, the binary watermark image is scrambled using a chaotic technique to provide secrecy. By using Quantization Index Modulation (QIM), the method works as a blind technique. The comparative analysis of the three methods is made by conducting robustness and imperceptibility tests on five benchmark audio signals.
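The QIM embedding mentioned in the abstract can be sketched as follows; the quantization step `delta` and the scalar-sample setting are illustrative assumptions, not the paper's exact configuration:

```python
def qim_embed(sample, bit, delta=0.1):
    """Quantization Index Modulation: quantize the sample onto one of
    two interleaved lattices (offset by delta/2) according to the bit."""
    offset = 0.0 if bit == 0 else delta / 2.0
    return round((sample - offset) / delta) * delta + offset

def qim_extract(sample, delta=0.1):
    """Blind detection: pick the bit whose lattice point is nearer.
    No original signal is needed, which is what makes QIM blind."""
    d0 = abs(sample - qim_embed(sample, 0, delta))
    d1 = abs(sample - qim_embed(sample, 1, delta))
    return 0 if d0 <= d1 else 1
```

The embedding distortion is bounded by `delta/2` per sample, and detection tolerates any perturbation smaller than `delta/4`, the classic QIM robustness/imperceptibility trade-off.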
The document describes a video watermarking scheme based on discrete wavelet transform (DWT) and principal component analysis (PCA) for copyright protection. The scheme embeds a binary logo watermark into video frames by applying DWT to decompose frames into sub-bands, then applying block-based PCA on sub-blocks of low and high frequency sub-bands. The watermark is embedded into the principal components of the sub-blocks. Algorithms are provided for applying DWT, PCA transforms, and embedding and extracting the watermark. The scheme aims to provide imperceptibility, robustness against attacks, and ownership protection for digital video content.
Digital video watermarking using modified LSB and DCT technique (eSAT Publishing House)
IJRET: International Journal of Research in Engineering and Technology is an international peer-reviewed, online journal published by eSAT Publishing House for the enhancement of research in various disciplines of Engineering and Technology. The aim and scope of the journal is to provide an academic medium and an important reference for the advancement and dissemination of research results that support high-level learning, teaching and research in the fields of Engineering and Technology. We bring together scientists, academicians, field engineers, scholars and students of related fields of Engineering and Technology.
The document discusses digital image processing and provides an overview of key concepts. It defines digital and analog images and explains how digital images are represented by pixels. It outlines fundamental steps in digital image processing like image acquisition, enhancement, restoration, morphological processing, segmentation, representation, compression and object recognition. It also discusses applications in areas like remote sensing, medical imaging, film and video effects.
A Dual Tree Complex Wavelet Transform Construction and Its Application to Ima... (CSCJournals)
This paper discusses the application of complex discrete wavelet transform (CDWT) which has significant advantages over real wavelet transform for certain signal processing problems. CDWT is a form of discrete wavelet transform, which generates complex coefficients by using a dual tree of wavelet filters to obtain their real and imaginary parts. The paper is divided into three sections. The first section deals with the disadvantage of Discrete Wavelet Transform (DWT) and method to overcome it. The second section of the paper is devoted to the theoretical analysis of complex wavelet transform and the last section deals with its verification using the simulated images.
Image compression using MATLAB project report (kgaurav113)
The document discusses JPEG image compression and its implementation in MATLAB. It describes the steps taken to encode and decode grayscale images using the JPEG baseline standard in MATLAB. These include dividing images into 8x8 blocks, applying the discrete cosine transform, quantizing the results, and entropy encoding the data. Encoding compression ratios and processing times are compared between classic and fast DCT approaches. The project also examines how quantization coefficients affect the restored image quality.
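The quantization step described above, where the loss is actually introduced, can be sketched on a flattened block; the quantization-table values below are illustrative, not the full JPEG luminance table:

```python
def quantize_block(coeffs, qtable):
    """JPEG-style quantization: divide each DCT coefficient by its
    table entry and round to an integer (the lossy step)."""
    return [round(c / q) for c, q in zip(coeffs, qtable)]

def dequantize_block(qcoeffs, qtable):
    """Decoder side: multiply back; the rounding error is permanent."""
    return [c * q for c, q in zip(qcoeffs, qtable)]
```

Larger table entries (used for high frequencies, where the eye is less sensitive) force more coefficients to zero, which is what the entropy coder then compresses so effectively and what scaling the table trades against restored-image quality.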
Presented at the Digital Initiatives and Nearby History Institute, Terre Haute, IN, July 19, 2006, and the Indiana Library Federation Annual Conference, Indianapolis, IN, April 12, 2006.
International Journal of Computational Engineering Research (IJCER) (ijceronline)
The document describes a Simulink model for splitting real-time video/images into four blocks. The model takes in an RGB video, splits it into four 128x128 blocks using a submatrix block, resizes the blocks back to the original resolution, and displays each block on a separate screen. The model is built from Simulink blocks such as resize, color space conversion, matrix concatenation, and submatrix selection to split, process, and display the video/image in real time across multiple screens.
This document presents a new algorithm for progressive medical image coding using binary wavelet transforms (BWT). It divides grayscale medical images into binary bit-planes and applies a three-level BWT to each bit-plane. It then encodes each BWT bit-plane using quadtree-based partitioning to exploit the energy concentration in high-frequency subbands. Experiments on ultrasound, MRI and CT images show it provides significant improvements in bitrate for required quality compared to existing progressive image coding methods.
This document discusses raster graphics and color models in computer graphics. It describes how polygons can be filled using scanline and boundary fill algorithms. It also explains how colors can be represented using color models based on the electromagnetic spectrum and human color perception. Different color models use primary colors to produce a gamut of other colors.
The document discusses computer graphics algorithms for rasterizing lines and circles and for filling polygons. It begins by explaining Bresenham's line drawing algorithm, including how it uses only integer calculations to incrementally determine pixel positions. It then covers a simple circle drawing technique using the circle equation, before introducing the more accurate midpoint circle algorithm. Finally, it briefly mentions polygon fill algorithms and provides an overview of raster graphics techniques.
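The integer-only decision variable at the heart of Bresenham's algorithm can be sketched as follows (the general all-octant error form, a minimal illustration rather than the document's own listing):

```python
def bresenham_line(x0, y0, x1, y1):
    """Bresenham's algorithm: an integer error term decides when to
    step on the minor axis. No floats, no rounding."""
    dx, dy = abs(x1 - x0), abs(y1 - y0)
    sx = 1 if x0 < x1 else -1  # step direction on each axis
    sy = 1 if y0 < y1 else -1
    err = dx - dy
    pixels = []
    while True:
        pixels.append((x0, y0))
        if x0 == x1 and y0 == y1:
            break
        e2 = 2 * err
        if e2 > -dy:   # error says: advance in x
            err -= dy
            x0 += sx
        if e2 < dx:    # error says: advance in y
            err += dx
            y0 += sy
    return pixels
```

Compare this with the DDA sketch earlier: the same pixels are selected, but every quantity here stays an integer, which is the algorithm's whole point.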
The document discusses the Sutherland-Hodgman polygon clipping algorithm. It clips a polygon by considering it against each boundary edge of the window in turn, passing the polygon's vertices to clipping procedures for the left, right, bottom and top edges. At each stage, it generates a new set of vertices for the clipped polygon, which is passed to the next stage. Four cases are considered, depending on whether the current and previous vertices lie inside or outside the window boundary, and the resulting vertices and intersection points are stored in the output vertex list. Once all vertices are clipped against one boundary, the result is passed to the next boundary for further clipping until the fully clipped polygon is produced.
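The per-boundary pipeline described above can be sketched for an axis-aligned window; a minimal Python illustration (function names are ours, and convex-window behavior is assumed):

```python
def suth_hodg_clip(polygon, xmin, ymin, xmax, ymax):
    """Sutherland-Hodgman: clip the subject polygon against each window
    boundary in turn; each pass emits a new vertex list."""
    def clip_edge(pts, inside, intersect):
        out = []
        for i, cur in enumerate(pts):
            prev = pts[i - 1]  # wraps to the last vertex when i == 0
            if inside(cur):
                if not inside(prev):       # out -> in: add crossing, then cur
                    out.append(intersect(prev, cur))
                out.append(cur)
            elif inside(prev):             # in -> out: add crossing only
                out.append(intersect(prev, cur))
        return out

    def ix_x(p, q, x):  # intersection with a vertical boundary at x
        t = (x - p[0]) / (q[0] - p[0])
        return (x, p[1] + t * (q[1] - p[1]))

    def ix_y(p, q, y):  # intersection with a horizontal boundary at y
        t = (y - p[1]) / (q[1] - p[1])
        return (p[0] + t * (q[0] - p[0]), y)

    pts = list(polygon)
    pts = clip_edge(pts, lambda p: p[0] >= xmin, lambda p, q: ix_x(p, q, xmin))
    pts = clip_edge(pts, lambda p: p[0] <= xmax, lambda p, q: ix_x(p, q, xmax))
    pts = clip_edge(pts, lambda p: p[1] >= ymin, lambda p, q: ix_y(p, q, ymin))
    pts = clip_edge(pts, lambda p: p[1] <= ymax, lambda p, q: ix_y(p, q, ymax))
    return pts
```

Each of the four passes implements exactly the four cases the summary lists; the output of one boundary's pass becomes the input polygon for the next.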
This document summarizes a lecture on computer graphics. It discusses algorithms for 2D scan conversion including drawing lines, curves, and filled polygons. It explains how to draw a line using Bresenham's algorithm and considers factors like line thickness, joins, and pixel selection. For curves, it describes approximating them with line segments of variable length. When drawing filled polygons, it covers topics like equality removal and algorithms for determining which pixels are inside the polygon.
The document discusses computer graphics viewing in 2D. It covers windowing concepts, clipping lines and areas to a window using algorithms like Cohen-Sutherland for lines and Sutherland-Hodgman for areas. When displaying a scene, objects must be clipped to only show those within the window using efficient algorithms to avoid drawing parts outside the window.
Extended Context/Extended Media - Class 02 (Bryan Chung)
This document discusses different types of artistic mediums and notations, including analog vs digital, autographic vs allographic art, visual art, musical art, performing art, and instruction art. It provides examples of artistic works that use notation as a key component, such as musical scores, theater scripts, Yoko Ono's instruction paintings, Sol LeWitt's conceptual diagrams, and software code. It explores whether notation can be considered a form of art and discusses how new digital and computational tools have expanded creative possibilities for drawing and generative art.
The document summarizes a study conducted by students at Johnson & Wales University on how other students at the university record their assignments. The study found that while students bring various technological devices to class, they still prefer using paper planners to record assignments. The study hypothesized that students' preference for analog options over technology designed for scheduling relates to their ability to trust existing scheduling technologies. The study involved surveying both full-time and continuing education students about their behaviors, attitudes, and demographics regarding different scheduling methods. The results found that most students still prefer paper planners, so creating a digital calendar may not be beneficial. The study recommends repeating the survey with different student populations and developing a line of paper agendas to sell.
Curve clipping involves using polygon clipping to test if a curved object's bounding rectangle overlaps a clipping window. If there is no overlap, the object is discarded. If there is overlap, the simultaneous curve and boundary equations are solved to find intersection points. Special cases like circles are considered, such as discarding a circle if its center is outside the clipping window plus/minus the radius. Bezier and spline curves can also be clipped by approximating them as polylines or using their convex hull properties.
Clipping is a technique that identifies parts of an image that are inside or outside a defined clipping region or window. There are different types of clipping including point, line, polygon, curve, and text clipping. The Cohen-Sutherland algorithm is commonly used for line clipping. It assigns 4-bit codes to line endpoints to determine if they are fully inside, outside, or intersect the clipping window boundary. Intersecting line segments are then subdivided and clipped. Midpoint subdivision is another algorithm that divides partially visible lines at their midpoint into shorter segments.
The document discusses analog to digital conversion. It begins by explaining the difference between analog and digital signals. It then provides examples of applications that require analog to digital conversion like microphones and thermocouples. The document discusses the two main steps in analog to digital conversion - quantization, which breaks down the analog value into discrete states, and encoding, which assigns a digital value to each state. It also discusses factors that affect accuracy like resolution and sampling rate. Finally, it describes several types of analog to digital converters like flash ADCs, sigma-delta ADCs, dual slope ADCs, and successive approximation ADCs.
Cohen-sutherland & liang-basky line clipping algorithmShilpa Hait
The document describes two line clipping algorithms: Cohen-Sutherland and Liang-Barsky. Cohen-Sutherland assigns region codes to line endpoints and checks for complete visibility, invisibility, or partial visibility. It then finds intersection points if the line is partially visible. Liang-Barsky uses the parametric line equation and clipping window inequalities to determine intersection points u1 and u2, clipping the line between those points if u1 < u2. Liang-Barsky is generally more efficient as it requires only one division to update u1 and u2, while Cohen-Sutherland may repeatedly calculate unnecessary intersections.
The midpoint circle algorithm is similar to Bresenham's circle algorithm and uses the midpoint between pixels to determine whether the pixel is inside or outside a circle. It defines a decision parameter pi based on the midpoint and updates pi by integer amounts at each step to determine the next pixel along the circle. The initial value of pi is set to 5/4 - r when r is an integer to determine the first pixel.
Clipping is a process that extracts portions of data or scenes inside a specified clipping region. It uses endpoint codes, which assign a 4-bit code to line endpoints to indicate if they are inside or outside the clipping window. One algorithm is the Cohen-Sutherland algorithm which uses these endpoint codes to test if lines are completely inside, completely outside, or intersect the clipping window. Another is the Mid-Point Subdivision algorithm which avoids directly calculating line-window intersections by performing a binary search via dividing lines at their midpoint.
This document discusses coordinate systems and mapping between world coordinates and screen coordinates in OpenGL. It explains that:
1) Objects are defined using world coordinates, while screens use pixel coordinates, so OpenGL maps between these spaces.
2) The world window defines the region of the world coordinates that will be drawn, and the viewport defines the screen region it will be drawn to.
3) OpenGL uses a linear transformation to map world to screen coordinates, defined by scaling and translation constants A, B, C, and D that are calculated based on the world window and viewport sizes.
Clipping algorithms identify portions of an image that are inside or outside a specified clipping region. They are used to extract a defined scene for viewing, identify visible surfaces, and perform other drawing and display operations. Common types of clipping include point, line, polygon, and curve clipping. Algorithms like Cohen-Sutherland and mid-point subdivision use codes and binary subdivision to efficiently determine which image portions are visible and should be displayed.
The 12 Principles of Animation were developed by Disney animators to make animation more realistic and appealing. The principles include squash and stretch, anticipation, staging, straight ahead and pose-to-pose, follow through and overlapping action, slow in and slow out, arcs, secondary action, timing, exaggeration, solid drawing, and appeal. Understanding and applying these principles can help animators design scenes that effectively illustrate the principles in action.
This document discusses principles of animation and how they can be applied to computer animation. It covers traditional animation techniques like squash and stretch, timing, anticipation, staging, follow through, and exaggeration. These principles are important for producing good computer animation. The document also discusses how animation can facilitate learning by corresponding to the structure of internal representations, as per the congruence principle. Research shows animation can convey concepts of change and processes that are difficult to represent statically, like circulatory systems or electronic circuits. However, animation must be evaluated compared to non-changing graphics, as its benefit is adding the dimension of change over time.
This document discusses techniques for modeling curves and surfaces in computer graphics. It introduces three common representations of curves and surfaces: explicit, implicit, and parametric forms. It focuses on parametric polynomial forms, specifically discussing cubic polynomial curves, Hermite curves, Bezier curves, B-splines, and NURBS. It also covers rendering curves and surfaces by evaluating polynomials, recursive subdivision of Bezier polynomials, and ray casting for implicit surfaces like quadrics. Finally, it discusses mesh subdivision techniques like Catmull-Clark and Loop subdivision for generating smooth surfaces.
Notes 2D-Transformation Unit 2 Computer graphicsNANDINI SHARMA
Notes of 2D Transformation including Translation, Rotation, Scaling, Reflection, Shearing with solved problem.
Clipping algorithm like cohen-sutherland-hodgeman, midpoint-subdivision with solved problem.
The document discusses different types of digital radiography technologies including computed radiography which uses photostimulable phosphor plates, indirect digital radiography using a scintillator and photodiode array, and direct digital radiography using photoconductive materials. It covers the processes of image acquisition, processing, display, and archiving for digital radiography systems. Key differences between direct and indirect digital radiography technologies are also outlined.
Computer graphics involves two main pipelines - the geometry pipeline and imaging pipeline. The geometry pipeline involves modeling, transformations, and hidden surface elimination to represent 3D objects. The imaging pipeline involves rasterization, texture mapping, and composition to produce 2D images from the 3D representations. Key algorithms in computer graphics include rasterization to convert 3D objects to pixels, texture mapping to add surface detail, and shading models like Gouraud and Phong shading to add lighting effects. The course covers basic graphics concepts and definitions as well as the graphics pipeline through examples of processing a 3D scene.
This document outlines a course on Computer Graphics and Visualization (CSE304). It provides details on the subject teacher, textbook, schedule, assessments, topics to be covered in the course's 6 units, and expected learning outcomes. Students will learn about 2D and 3D computer graphics tools and techniques, apply algorithms for transformations and projections, and explore visibility, shading, curves, and object representation. Assessments include tests, a mandatory mini project in OpenGL, and a mid-term and end-term exam. Upon completing the course, students will have skills in various areas of computer graphics.
2. Contents
Graphics hardware
The problem of scan conversion
Considerations
Line equations
Scan converting algorithms
– A very simple solution
– The DDA algorithm
Conclusion
3. Graphics Hardware
It’s worth taking a little look at how graphics hardware works before we go any further
How do things end up on the screen?
Images taken from Hearn & Baker, “Computer Graphics, C Version”
4. Architecture Of A Graphics System
[Block diagram: CPU and System Memory on the System Bus, feeding a display system made up of a Display Processor, Display Processor Memory, Frame Buffer, Video Controller and Monitor]
5. Output Devices
There are a range of output devices currently available:
– Printers/plotters
– Cathode ray tube displays
– Plasma displays
– LCD displays
– 3 dimensional viewers
– Virtual/augmented reality headsets
We will look briefly at some of the more common display devices
6. Basic Cathode Ray Tube (CRT)
Fire an electron beam at a phosphor-coated screen
7. Raster Scan Systems
Draw one line at a time
8. Colour CRT
An electron gun for each colour – red, green and blue
9. Plasma-Panel Displays
Applying voltages to crossing pairs of conductors causes the gas (usually a mixture including neon) to break down into a glowing plasma of electrons and ions
10. Liquid Crystal Displays
Light passing through the liquid crystal is twisted so it gets through the polarizer
A voltage is applied using the crisscrossing conductors to stop the twisting and turn pixels off
11. The Problem Of Scan Conversion
A line segment in a scene is defined by the coordinate positions of the line end-points
[Figure: a line segment from (2, 2) to (7, 5) in the xy-plane]
12. The Problem (cont…)
But what happens when we try to draw this on a pixel based display?
How do we choose which pixels to turn on?
13. Considerations
Considerations to keep in mind:
– The line has to look good
• Avoid jaggies
– It has to be lightning fast!
• How many lines need to be drawn in a typical scene?
• This is going to come back to bite us again and again
14. Line Equations
Let’s quickly review the equations involved in drawing lines
Slope-intercept line equation:
    y = m·x + b
where:
    m = (yend − y0) / (xend − x0)
    b = y0 − m·x0
[Figure: a line from (x0, y0) to (xend, yend)]
15. Lines & Slopes
The slope of a line (m) is defined by its start and end coordinates
The diagram below shows some examples of lines and their slopes
[Figure: a fan of lines with slopes m = 0, ±1/3, ±1/2, ±1, ±2, ±4 and m = ∞]
16. A Very Simple Solution
We could simply work out the corresponding y coordinate for each unit x coordinate
Let’s consider the following example:
[Figure: the line from (2, 2) to (7, 5) in the xy-plane]
17. A Very Simple Solution (cont…)
[Figure: an empty pixel grid, x from 0 to 7 and y from 0 to 5]
18. A Very Simple Solution (cont…)
First work out m and b:
    m = (5 − 2) / (7 − 2) = 3/5
    b = 2 − (3/5)·2 = 4/5
Now for each x value work out the y value:
    y(3) = (3/5)·3 + 4/5 = 2 3/5        y(4) = (3/5)·4 + 4/5 = 3 1/5
    y(5) = (3/5)·5 + 4/5 = 3 4/5        y(6) = (3/5)·6 + 4/5 = 4 2/5
19. A Very Simple Solution (cont…)
Now just round off the results and turn on these pixels to draw our line:
    y(3) = 2 3/5 ≈ 3
    y(4) = 3 1/5 ≈ 3
    y(5) = 3 4/5 ≈ 4
    y(6) = 4 2/5 ≈ 4
[Figure: the rounded pixels plotted on the grid]
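The simple solution can be sketched in a few lines of Python — a minimal illustration (the helper name naive_line is made up), showing the per-pixel multiplication and rounding:

```python
def naive_line(x0, y0, x_end, y_end):
    """Scan convert a line with |slope| <= 1 by evaluating
    y = m*x + b at every unit x step, then rounding."""
    m = (y_end - y0) / (x_end - x0)    # slope
    b = y0 - m * x0                    # y-intercept
    pixels = []
    for x in range(x0, x_end + 1):
        y = m * x + b                  # a multiplication per pixel...
        pixels.append((x, round(y)))   # ...plus a rounding operation
    return pixels

# The example line from (2, 2) to (7, 5):
print(naive_line(2, 2, 7, 5))
# → [(2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5)]
```

The output matches the rounded values worked out above.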
20. A Very Simple Solution (cont…)
However, this approach is just way too slow
In particular look out for:
– The equation y = mx + b requires the multiplication of m by x
– Rounding off the resulting y coordinates
We need a faster solution
21. A Quick Note About Slopes
In the previous example we chose to solve the slope-intercept line equation to give us the y coordinate for each unit x coordinate
What if we had done it the other way around?
So this gives us:
    x = (y − b) / m
where:
    m = (yend − y0) / (xend − x0)  and  b = y0 − m·x0
22. A Quick Note About Slopes (cont…)
Leaving out the details this gives us:
    x(3) = 3 2/3 ≈ 4        x(4) = 5 1/3 ≈ 5
We can see easily that this line doesn’t look very good!
We choose which way to work out the line pixels based on the slope of the line
[Figure: the line drawn from these pixels, with gaps at x = 3 and x = 6]
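The gappy result can be reproduced with a quick sketch (hypothetical helper name): solving x = (y − b)/m at each unit y step of this shallow line skips the columns x = 3 and x = 6 entirely.

```python
def x_per_unit_y(x0, y0, x_end, y_end):
    """Step y by 1 and solve x = (y - b)/m -- the 'wrong way
    around' for a line with |slope| < 1."""
    m = (y_end - y0) / (x_end - x0)
    b = y0 - m * x0
    return [(round((y - b) / m), y) for y in range(y0, y_end + 1)]

print(x_per_unit_y(2, 2, 7, 5))
# → [(2, 2), (4, 3), (5, 4), (7, 5)] -- no pixel in columns 3 and 6
```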
23. A Quick Note About Slopes (cont…)
If the slope of a line is between -1 and 1 then we work out the y coordinates for a line based on its unit x coordinates
Otherwise we do the opposite – x coordinates are computed based on unit y coordinates
[Figure: the fan of lines with slopes m = 0, ±1/3, ±1/2, ±1, ±2, ±4 and m = ∞]
24. A Quick Note About Slopes (cont…)
[Figure: an empty pixel grid, x from 0 to 7 and y from 0 to 5]
25. The DDA Algorithm
The digital differential analyzer (DDA) algorithm takes an incremental approach in order to speed up scan conversion
Simply calculate yk+1 based on yk
(The original differential analyzer was a physical machine developed by Vannevar Bush at MIT in the 1930’s in order to solve ordinary differential equations.)
26. The DDA Algorithm (cont…)
Consider the list of points that we determined for the line in our previous example:
    (2, 2), (3, 2 3/5), (4, 3 1/5), (5, 3 4/5), (6, 4 2/5), (7, 5)
Notice that as the x coordinates go up by one, the y coordinates simply go up by the slope of the line
This is the key insight in the DDA algorithm
27. The DDA Algorithm (cont…)
When the slope of the line is between -1 and 1, begin at the first point in the line and, by incrementing the x coordinate by 1, calculate the corresponding y coordinates as follows:
    yk+1 = yk + m
When the slope is outside these limits, increment the y coordinate by 1 and calculate the corresponding x coordinates as follows:
    xk+1 = xk + 1/m
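A common way to code the DDA folds the two slope cases into one loop by always stepping along the axis that changes most. This is a sketch of that standard formulation, not the slides’ exact pseudocode:

```python
def dda(x0, y0, x_end, y_end):
    """DDA scan conversion: unit steps along the major axis,
    incremental additions (no per-pixel multiplication)."""
    dx, dy = x_end - x0, y_end - y0
    steps = max(abs(dx), abs(dy))        # unit steps on the major axis
    if steps == 0:
        return [(x0, y0)]                # degenerate: a single point
    x_inc = dx / steps                   # one of these is ±1, the other
    y_inc = dy / steps                   # is the slope m (or 1/m)
    x, y, pixels = float(x0), float(y0), []
    for _ in range(steps + 1):
        pixels.append((round(x), round(y)))
        x += x_inc                       # additions only -- this is the
        y += y_inc                       # incremental speed-up
    return pixels

print(dda(2, 2, 7, 5))   # gentle slope: step in x, add m to y
# → [(2, 2), (3, 3), (4, 3), (5, 4), (6, 4), (7, 5)]
print(dda(3, 2, 2, 7))   # steep slope: step in y, add 1/m to x
# → [(3, 2), (3, 3), (3, 4), (2, 5), (2, 6), (2, 7)]
```

The first call reproduces the pixels chosen by the simple solution for the (2, 2)–(7, 5) line, but without a multiplication per pixel.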
28. The DDA Algorithm (cont…)
Again the values calculated by the equations used by the DDA algorithm must be rounded to match pixel values
[Figure: two cases – for |m| ≤ 1, step from (xk, round(yk)) to (xk+1, round(yk + m)); for |m| > 1, step from (round(xk), yk) to (round(xk + 1/m), yk+1)]
29. DDA Algorithm Example
Let’s try out the following examples:
[Figure: a line from (2, 2) to (7, 5), and a line from (3, 2) to (2, 7)]
31. The DDA Algorithm Summary
The DDA algorithm is much faster than our previous attempt
– In particular, there are no longer any multiplications involved
However, there are still two big issues:
– Accumulation of round-off errors can make the pixelated line drift away from what was intended
– The rounding operations and floating point arithmetic involved are time consuming
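The round-off issue is easy to see in isolation: repeatedly adding an inexact slope, as the DDA does, drifts away from the directly evaluated product. A toy measurement (slope 0.1 chosen arbitrarily because it has no exact binary representation):

```python
m = 0.1                   # inexact in binary floating point
y = 0.0
for _ in range(1000):     # 1000 unit steps in x
    y += m                # incremental, DDA-style update
exact = m * 1000          # direct evaluation

print(y == exact)         # → False: the running sum has drifted
print(abs(y - exact))     # tiny here, but it grows with line length
```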
32. Conclusion
In this lecture we took a very brief look at how graphics hardware works
Drawing lines to pixel based displays is time consuming so we need good ways to do it
The DDA algorithm is pretty good – but we can do better
Next time we’ll look at the Bresenham line algorithm and how to draw circles, fill polygons and perform anti-aliasing