
UNIT I

Chapter No
1 Introduction
1.1 Definition of Computer Graphics
1.2 Area of Computer Graphics
1.3 Design and Drawing
2 Graphic Devices
2.1 Cathode Ray Tube
2.2 Direct View Storage Tube
2.3 Light Pen
3 C Graphics Basics
3.1 Graphics Programming
3.2 Initializing the Graphics
3.3 C Graphical Functions
3.4 Simple Program
1 Introduction

Computer graphics, as used in computer science, encompasses graphical depictions of digital
information. These representations are used in a variety of applications, including business
presentation graphics, Computer-Aided Design (CAD), Geographic Information Systems (GIS),
Global Positioning Systems (GPS), and image processing. In addition, computer graphics can be
used to design alarm, access control, and video surveillance systems.

In short, computer graphics are visual representations produced by computers. Because many
augmented reality apps superimpose visual images on the real world, it is important to
understand how this imagery is produced. Computer graphics applications include interactive films,
augmented reality user interfaces, architectural visualization, and experimental simulations.
The field is built on a variety of technologies and tools, including graphics APIs
(e.g., OpenGL, DirectX) and applications such as Blender, Maya, and AutoCAD.

1.1 Definition of Computer Graphics

Definition 1: Computer graphics is a branch of computer science that concentrates on creating,
modifying, and rendering visual images using computers. Images, animations, and special effects
are created using computational approaches and algorithms.

Definition 2: Computer graphics, as the name implies, is the process of making images with
software programs and techniques. Creating effective computer graphics requires knowledge of
the basic principles of design and of software features such as texture mapping, shadows,
color, and other visual effects.

Definition 3: Computer graphics is a technological method of creating, organizing, and
modifying visual content, such as photographs, illustrations, and 3D models, using
specialized techniques, equipment, and applications.

Definition 4: Computer graphics is the study of techniques and tools for representing visual
information mathematically and algorithmically in order to simulate real or
imagined worlds.

1.2 Areas / Applications of Computer Graphics

Computer graphics encompasses a wide range of topics and applications, each focused on a
distinct component of visual image creation, processing, and interpretation.

1.2.1 Modeling: It focuses on constructing and depicting objects or scenes in 2D and 3D.
It comprises geometric modeling (e.g., forms, curves, and surfaces) as well as procedural
modeling.
Types of Modeling
2D modeling
 Flat objects or scenes are represented on a plane (x and y dimensions).
 Typically used for illustrations, diagrams, and vector graphics.
 Tools include Adobe Illustrator and CorelDRAW.
3D Modeling
 Represents objects in three dimensions (x, y, and z), which permits rotation and
perspective.
 Applications include games, animations, simulations, and CAD.
 Tools include Blender, Maya, 3ds Max, and AutoCAD.

1.2.2 Rendering: It deals with creating lifelike or stylized 2D images from 3D models. It
includes shading, illumination, ray tracing, rasterization, and global illumination.
Types of Rendering
Real-Time Rendering
 Focused on speed, producing frames fast enough for interactive applications such as video
games or simulations.
 This is accomplished using Graphics Processing Units (GPUs) and techniques such as
rasterization. For example, in a game, frames are rendered at 60 frames per second (fps).
Offline Rendering
 Focused on achieving high-quality, lifelike results.
 It takes longer because it computes complicated lighting and shading effects.
 Used in film, animation, and visual effects. As an example, consider rendering a CGI
scene for a film.

1.2.3 Animation: Animation is the technique of creating the appearance of motion by presenting
a series of images, or frames, that change over time. It brings static objects or characters to
life and is used in a variety of settings, including entertainment, education, simulation, and
advertising. It concentrates on simulating motion and dynamics over time, and involves
keyframe animation, motion capture, character creation, and physics-based simulation.
Types of Animation
2D Animation: It creates movement in a 2D space. Traditional animation is drawn frame by
frame; digital 2D animation creates vector-based or raster-based animations using software.
Examples include Tom and Jerry cartoons.
3D Animation: It creates movement in a 3D space. Modeling, rigging (skeleton setup), and
animating characters and objects are all required. Examples include movies like Toy Story and
Frozen.
Stop-Motion Animation: Physical objects are moved gradually and shot frame by frame.
Examples are Wallace & Gromit and Coraline.
Motion Graphics: It concentrates on animated visual components such as text, shapes, and logos.
It is used extensively in commercial and promotional videos. Examples include kinetic
typography animations.
Cel Animation: The traditional approach of drawing each frame on transparent sheets
(cels). It was used in early animated films such as Snow White and the Seven Dwarfs.
Rotoscoping: Traces live-action video frame by frame to produce realistic animations. It is used
in movies to create realistic character movements.
Procedural Animation: Algorithms automatically generate animations such as physics
simulations and particle effects. For example, games can simulate fabric, water, or fire this way.
Cutout Animation: Flat characters and objects (cutouts) are moved incrementally. Examples
include South Park.

1.2.4 Image Processing: It involves the manipulation and improvement of digital images.
Noise removal, image filtering, edge detection, and color adjustment are all common
applications.
Types of Image Processing
 Analog image processing operates on physical pictures such as photographs or
prints. Images can be filtered or enhanced using optical equipment.
 Digital image processing is the alteration of digital images using algorithms,
executed on computers or specialized hardware (for example, GPUs).

Visualization: Visualization is the process of displaying knowledge, ideas, or data in a
graphical or visual form to improve comprehension, communication, and decision-making. It
converts complicated information into visuals that humans can understand, such as
diagrams, images, and animations. It concentrates on translating data into visual forms to
facilitate comprehension and analysis. It comprises scientific visualization (e.g., medical scans)
as well as data visualization (e.g., charts, infographics).
Types of Visualization
Data Visualization: Structured data is represented graphically. Examples include bar charts, line
graphs, scatter plots, and histograms.
Scientific Visualization: It concentrates on showing scientific data such as simulations,
physical events, and medical imaging. Examples include 3D representations of atoms, weather
patterns, and MRI scans.
Information Visualization: Text, networks, and relationships are examples of abstract or
unstructured data that can be visually represented. Examples include mind maps, tree diagrams,
and network graphs.
3D Visualization: It uses 3D models to depict real objects or phenomena. Examples
include models of buildings, product prototypes, and medical reconstructions.
Interactive Visualization: Users can interact with the visual components to explore data
dynamically. Examples include interactive dashboards and VR experiences.
Geospatial visualization: It provides geographical or spatial data. Examples include maps,
heatmaps, and GIS (Geographic Information System) visualizations.

1.2.5 User Interface Design (UI): UI design is the process of creating the structure, visual
components, and interactive behavior of the applications, apps, and websites with which users
interact. The purpose of UI design is to produce an intuitive, effective, and visually appealing
interface that improves the User Experience (UX). It creates graphical interfaces that enable
users to engage with software; examples include menus, controls, icons, and visual
feedback mechanisms. UX refers to the total experience a user has when engaging with a system,
which includes functionality, ease of use, and satisfaction.
The Basic Concepts of UI Design
 Clarity: The interface should be simple to use, with clear labels, icons, and actions. To avoid
confusion, essential components should be emphasized or prominently shown.
 Consistency: Consistent visual appearance, layout, and behavior across the application.
Use consistent colors, typefaces, buttons, and icons so that users know what to expect.
 Simplicity: Keep interfaces simple and intuitive by removing extraneous components. Use white space
wisely to reduce visual clutter.
 Feedback: Users should acquire visual or audio indications when interacting with items, such
as buttons and loading indicators. Feedback informs users that their decisions are being
processed.
 Accessibility: Design should be inclusive to individuals with impairments, such as
colorblindness or limited vision. Features such as font scaling, high contrast options, and
accessibility features should be available.
 Responsiveness: The UI ought to adjust and deliver a consistent experience across all screen
sizes. Check that the layout adjusts well for different devices (responsive design).
 Efficiency: Reduce the number of steps necessary to execute tasks. Users can navigate more
quickly with features like as autocomplete, shortcuts, and predictive text.
 Hierarchy: Create a visual hierarchy that directs users' focus to the most critical items. Size,
color, and position can be used to emphasize key components such as buttons, headings, or
icons.

1.2.6 Virtual Reality (VR) and Augmented Reality (AR): VR is an entirely immersive virtual
environment that mimics the physical world. Users interact with a virtual 3D world using
specialized hardware. It aims to create immersive 3D experiences. AR superimposes digital content
onto the actual world, enhancing it by overlaying digital media (images,
sounds, or data) in real time. Mixed Reality (MR) blends aspects of both VR and AR to
create settings in which physical and digital items interact in real time. For example, MR
enables users to interact with simulated items placed in their real-world surroundings.

1.2.7 Computer-Aided Design (CAD): CAD is the use of computer software and technology
to develop, modify, evaluate, or optimize designs for a variety of engineering, architectural,
and manufacturing applications; examples include AutoCAD and SolidWorks.
Professionals use CAD to create precise and adaptable drawings in both 2D and 3D,
which reduces development time and improves design quality.
Types of CAD
2D CAD focuses on making flat, two-dimensional designs. Applications include architectural
floor plans and circuit schematics. Examples: AutoCAD (2D mode) and LibreCAD.
3D CAD enables users to construct realistic 3D objects. Applications include mechanical
elements, product prototypes, and architectural models. Examples include SolidWorks, CATIA,
and Autodesk Fusion 360.
Parametric CAD uses parameters (dimensions, constraints) to define design elements. For
example, changing one dimension updates every dependent aspect of the model. Example software: Creo
and SolidWorks.
Surface Modeling: It concentrates on generating smooth edges for aesthetics or aerodynamic
reasons. Applications include automotive design and consumer electronics. Examples include
Rhino and Autodesk Alias.
Solid Modeling: It generates models with solid geometry and volume. Applications include
engineering and manufacturing. Examples include SolidWorks and Autodesk Inventor.
Assembly Modeling: It combines several components to create a full system or product.
Applications include the design of machinery and complex systems.

1.3 Game Development

Game development includes the entire process of creating a game, from planning
and design through programming, testing, and final release. Bringing a game to life
requires collaboration across multiple specialist groups, including designers, programmers,
artists, and sound engineers. Game development can be difficult, demanding a mix of technical
knowledge, creativity, and strategic planning. It normally follows a set process, but the
specifics differ depending on the game type (mobile, console, PC, VR/AR). It includes
creating interactive environments and characters for video games, and combines storytelling,
art, design, and physics simulation.

1.3.1 Key Phases in Game Development

(i) Conceptualization/Pre-production
 The generation of ideas involves brainstorming game concepts, such as genre, target
audience, and gaming mechanics.
 The Game Design Document (GDD) describes the game's overall vision, the mechanics,
individuals, narrative, levels, and technical needs.
 A prototype is a miniature, working version of a game used for testing fundamental
gameplay mechanics
(ii) Design: Define game mechanics, including the controls, goals, player interactions, and
levels.
 Story and Narrative: Develop a tale, characters, and dialogue for games such as RPGs
and adventures.
 Art and Aesthetics: Developing characters, environments, and assets such as
2D or 3D art, textures, and animations.
 Audio Design: Creating the game's soundtrack, sound effects, and voice
performances.
(iii) Development/Production
 Programming entails developing code for game controls, interfaces for users, artificial
intelligence (AI), physics, and interactivity. The platform and tools that are employed
will determine the development environment.
 Engine Selection: Choosing or developing a game engine (such as Unreal Engine, Unity,
or Godot) that serves as the foundation for the game's graphics, physics, and other
components.
 Art creation includes 3D models, textures, visual effects (VFX), and user
interface elements.
 Level Design: Creating distinct gaming levels or environments, including the positioning
of objects, challenges, and enemy characters.

(iv) Testing and Quality Assurance (QA)


 Playtesting involves both internal and external testing to find bugs, balance concerns, and
possibilities for improvement in the game.
 Bug fixing involves resolving any flaws, crashes, or undesired behaviors in the game.
 Performance optimization ensures the game performs well on target systems.
 Polishing involves improving game aspects including visuals, music, UI, and controls
depending on feedback.
(v) Release and Post-production
 Launch: Making the game available to the public via channels such as Steam,
PlayStation, Xbox, or mobile app stores.
 Marketing involves creating promotional materials such as trailers, posters, online
content, and influencer collaborations to increase the game's visibility.
 Post-launch updates will include patches, downloadable content (DLC), bug fixes, and
additional features depending on user input and reviews.
 Community Engagement: Communicating with the player base via forums, social media,
and upgrades to maintain long-term interest in the game.

1.3.2 Components of Game Development

(i) Game Engines: A game engine is software that provides tools for game production,
such as graphics rendering, physics simulation, sound, and input processing.

 Unity is a popular game engine, especially for mobile, indie, and 2D games. It uses C#
for scripting.
 The Unreal Engine is well-known for its complex physics and high-quality 3D graphics.
It employs C++ for programming and includes Blueprints, a visual scripting language.
 Godot is an open-source gaming engine that supports both 2D and 3D game production.
It makes use of two scripting languages: GDScript and C#.
 CryEngine creates photorealistic worlds for high-end games such as Crysis.

(ii) Programming Languages

 C++ is a high-performance programming language used by game engines such as
Unreal Engine.
 C# is commonly used in Unity to script gameplay mechanics, AI, and interaction.
 Python is commonly used for scripting and tool creation, particularly in prototyping
and small independent games.
 JavaScript is commonly used for browser-based web games.

(iii) Art and Animation

 2D art refers to flat graphics used in 2D games, such as creature designs, backgrounds,
and interfaces. Commonly used tools include Adobe Photoshop, Illustrator, and Aseprite.
 3D Art: Creating models for characters, environments, and props in games. Blender, Autodesk Maya,
and ZBrush are popular 3D modeling applications.
 Animation involves creating motion for characters, surroundings, and objects. Maya,
Blender, Spine, and Unity's animation tools are among the available tools.
 Textures add color and surface complexity to 3D models.
(iv) Sound and Music

 Sound Effects: Used for footsteps, gunfire, explosions, and environmental
sounds. Audacity, Adobe Audition, and Logic Pro are some examples of software tools.
 Music is composed to match the game's tone and ambiance. Composers frequently
collaborate with designers to fit the game's emotive beats.
 Voice Acting: In games with speech, performers record lines for their respective
characters. During the post-production phase, dialogue and lip sync are synchronized.

(v) Game Design

 Gameplay mechanics refer to the rules and procedures that control how players interact
with the game world, including combat, puzzles, and leveling.
 The User Interface (UI) includes menus, buttons, and HUD elements for player
interaction with the game.
 User Experience (UX) refers to the game's overall flow, including navigation,
difficulties, and engagement.
 Level Design: Creating enjoyable and difficult game stages for players.

(vi) Game Development Platforms

 PC games run on Windows, macOS, and Linux and are typically available on sites like
Steam or Epic Games Store.
 Console games are optimized for PlayStation, Xbox, and Nintendo Switch systems
 Mobile games are built for smartphones and tablets, including iOS (Apple App Store)
and Android (Google Play Store).
 Web games are made using HTML5, JavaScript, and WebGL.

1.3.3 Graphics Hardware Development.

Graphics Hardware Development is the process of creating and improving the physical hardware
used in computer systems to render and process visual data. This comprises GPUs, video
cards, memory, and display technologies. Graphics hardware is essential in many industries, such as
gaming, film production, virtual reality (VR), augmented reality (AR), and scientific
visualization. The creation of these hardware components is a highly specialized field that
requires a mix of physics, engineering, and computer science. It focuses on the design and
optimization of hardware, such as GPUs (Graphics Processing Units), to enable efficient
rendering and computation.

a) Key Components of Graphics Hardware

(i) Graphics Processing Unit (GPU)


 The GPU is the fundamental component of current graphics hardware. It is a highly
parallel processor built primarily to perform the complicated and repeated computations
required to generate images and videos.
 GPUs are specialized for 3D rendering, texture mapping, and shader processing. They
excel at processing in parallel, which allows them to manage thousands of processes at
once.
 GPU Types:
o Discrete GPUs: High-performance GPUs on dedicated graphics
cards (e.g., NVIDIA GeForce and AMD Radeon).
o Integrated GPUs (e.g., Intel HD Graphics) provide lower performance but are
built into more affordable computers.
(ii) Graphics Card
 A graphics card is a hardware unit that incorporates the GPU, VRAM, and other
components like power connectors, cooling, and display interfaces.
 Video memory (VRAM) stores graphics data such as textures, frame buffers, and shaders
for quick access during rendering.
 Graphics cards have complicated cooling mechanisms to prevent overheating from high-
performance graphics workloads. This can include fans, heat sinks, and even liquid
cooling.
(iii) Memory and Bandwidth
 Video RAM (VRAM) stores graphics elements, including textures and frame buffers, for
rendering purposes. The more VRAM, the more capable the graphics card is of
handling high-resolution textures and complex scenes.
 Memory bandwidth refers to the speed at which data can be read from and written to VRAM.
Higher memory bandwidth increases the GPU's ability to access data quickly, which is
critical for generating high-quality images.
(iv) Shaders and Shader Cores
 Shaders are small programs that run on the GPU and control different stages of the
graphics pipeline, including lighting, texture mapping, and post-processing effects.
 Shader cores are the individual processing units in the GPU that run these shaders. Modern
GPUs have hundreds of shader cores, which enable simultaneous computation of graphical
tasks.
(v) Rasterization and Ray Tracing
 Rasterization is the process of converting 3D models into 2D images by calculating pixel color
and intensity. It has served as the dominant rendering approach for real-time graphics.
 Ray tracing creates photorealistic images by tracing the paths of light rays as they
interact with surfaces. While more computationally demanding, ray tracing produces very
realistic reflections, shadows, and illumination effects. GPUs with ray tracing capability
are becoming more widespread in high-end graphics cards.
(vi) Graphics Pipeline
Graphics hardware processes images in a pipeline, with distinct steps of rendering performed
sequentially. The stages include:
 Vertex processing involves altering the geometry of 3D objects.
 Clipping is the process of removing elements of an object that are not visible to the
viewer.
 Rasterization is the process of mapping a three-dimensional scene to a two-dimensional
screen.
 Fragment (Pixel) Processing: Identifying the color and other characteristics of each pixel.
 Output: The final image is displayed on screen.

b) Graphics Hardware Development Process

1. Architecture Design

 The GPU architecture specifies the internal arrangement and interactions of several
hardware components, including shader cores, memory, and cache.
 Designers strive to optimize design for tasks such as display speed, power efficiency,
and the capacity to run numerous operations concurrently.

2. Fabrication and Semiconductor Manufacturing

 The GPU is shipped for manufacture after the architecture has been designed.
Semiconductor manufacturers like TSMC and Samsung use sophisticated lithography
techniques to create small circuits on silicon chips.
 Process Nodes: The size of the GPU's transistors is essential for performance and
energy efficiency. Modern GPUs are frequently built using process nodes as small as
7 nm or 5 nm.

3. Optimization and Power Efficiency

 Hardware experts tune GPUs for efficiency per watt. This is crucial because GPUs in
consoles, workstations, and data centers must balance speed with power consumption.
 Modern GPUs feature dynamic frequency scaling, which adjusts the clock speed
based on demand to save power while not in use.

4. Testing and Validation

 GPUs are rigorously tested before release to ensure they meet performance
expectations. This contains load and power consumption tests, as well as performance
benchmarks.
 Driver Development: Specialized drivers are designed to optimize GPU performance
with operating systems and software. These drivers allow the GPU to interact with
APIs such as DirectX or Vulkan for rendering graphics.

c) Trends in Graphics Hardware Development

1. Ray Tracing
 Ray tracing is becoming a regular feature on high-end GPUs. NVIDIA and AMD have
provided real-time ray tracing abilities with hardware that supports it (for example, the
NVIDIA RTX series).
 Real-time ray tracing is a cutting-edge technology that improves image realism by
simulating light rays to produce more realistic reflections, refractions, and shadows.

2. AI-Powered Graphics

 AI and machine learning are being incorporated into graphics hardware for real-time
image augmentation. Examples include DLSS (Deep Learning Super Sampling), which
utilizes AI to upscale images and boost frame rates without losing visual quality.
 Modern GPUs include NVIDIA's Tensor Cores, which accelerate AI workloads such as deep
learning computations.

3. Virtual Reality (VR) and Augmented Reality (AR)

 As VR and AR technologies advance, there is a greater need for GPUs capable of


low-latency rendering, high-resolution displays, and smooth frame rates to enhance
immersive experiences.
 VR graphics hardware sometimes includes specialized capabilities such as foveated
rendering, which adjusts image quality based on where the user is looking, boosting
performance.

4. Cloud Gaming

 Cloud gaming services like Google Stadia and NVIDIA GeForce Now use remote
servers with powerful GPUs to stream games to gamers, eliminating the need for
expensive local hardware.
 Virtual GPUs (vGPUs) enable efficient use of GPU capabilities in cloud data centers,
allowing several users to utilize the same hardware.

5. Energy Efficiency and Cooling Solutions

 High-end GPUs generate more heat due to their higher performance. Advanced
cooling technologies, such as liquid cooling and thermal design improvements,
are used to keep the hardware operating at safe temperatures.
 Energy-effective GPUs are being developed to deliver excellent performance with
minimal power usage.

6. Multi-GPU Configurations

 GPU scaling is a practice in high-performance systems that uses numerous GPUs to improve
graphics performance.
 NVIDIA SLI and AMD CrossFire technologies let two or more GPUs operate together for
demanding workloads, while this method is becoming less prevalent as single-GPU
performance improves.

d) Major Graphics Hardware Manufacturers

1. NVIDIA

 The company is well-known for its GeForce and RTX GPUs, designed for gamers,
creators, and professionals. NVIDIA is also a frontrunner in AI-powered graphics, thanks
to its Tensor Cores.
 The Quadro series is suitable for professional workstations in design, simulation, and
rendering applications.

2. AMD

 It competes with NVIDIA in the gaming and professional areas with its Radeon and
Radeon Pro graphics cards. AMD also makes APUs (Accelerated Processing Units),
which combine a CPU and a GPU into a single chip.
 AMD's RDNA architecture is widely employed in gaming and workstation GPUs
because of its superior performance.

3. Intel

 Intel is known for its CPUs, but has recently expanded into discrete GPUs with the
Intel Arc series. Intel GPUs are designed to compete in both the gaming and AI
markets.

4. ARM

 ARM develops Mali graphics processors for smartphones, embedded systems, and
applications that require little power. ARM-based GPUs are commonly seen in
phones and tablets.

e) Challenges in Graphics Hardware Development

(i) Thermal Management: As GPU performance improves, so does heat output. Effective
cooling is critical, particularly in small or portable devices where cooling options are
limited.
(ii) Cost of Development: Developing advanced GPU technology is costly and time-
consuming, requiring extensive research and testing.
(iii) Balancing performance and power efficiency is a fundamental problem in GPU
development, particularly for mobile and embedded systems.
(iv) Industry Competition: The GPU industry is fiercely competitive, with major
manufacturers like NVIDIA, AMD, and Intel vying for dominance through novel
features and performance increases.
1.3.4 Computer Vision

Computer vision is an interdisciplinary field of artificial intelligence (AI) and computer science
that seeks to enable machines to interpret and comprehend the visual world. It entails
creating algorithms and models that let computers process, evaluate, and extract significant information
from digital images or videos, similar to how humans perceive visual stimuli.
The purpose of computer vision is to automate functions that the human visual system can
perform, such as object identification, face recognition, motion tracking, and scene
comprehension. It is widely employed in many areas, including healthcare, automotive, robotics,
entertainment, and security.

a) Key Areas of Computer Vision

(i) Image Processing

Computer vision relies heavily on image processing techniques, which modify
and enhance images to make them better suited for analysis. This can include:
 Noise reduction: removing unwanted variations in an image caused by sensor or transmission
problems.
 Edge detection: identifying the borders of objects in a picture using techniques
such as Canny or Sobel.
 Enhancement: improving image quality by adjusting brightness, contrast, and sharpness.

(ii) Object Detection and Recognition

Object detection is the process of locating and recognizing objects within an image. This
includes identifying objects' positions (bounding boxes) as well as categorizing them (e.g.,
person, car, dog).
Algorithms Used:

 Haar Cascades is a machine-learning-based technique for object detection.
 Convolutional Neural Networks (CNNs) are deep learning networks that detect
and classify objects in photos.
 YOLO (You Only Look Once) and SSD (Single Shot Multibox Detector) are real-
time object detection methods.

(iii) Image Classification

Image classification assigns complete images or specific portions of an image to
predefined categories. The purpose is to decide which class an item belongs to (e.g., dog
vs. cat).

Deep Learning: CNNs are often used for image classification, with the model learning
features from large datasets and categorizing new images.

(iv) Semantic Segmentation


Semantic segmentation classifies every pixel in an image into a certain class or category. This
strategy is very beneficial for tasks like scene comprehension or detecting objects in complex
scenes. Semantic segmentation, for example, aids in the detection of road lanes, people,
automobiles, and other objects for self-driving cars.

(v) Instance Segmentation

Instance segmentation is related to semantic segmentation, but goes one step further by
distinguishing between individual objects of the same class. For example, it can detect and
segment multiple cars in an image, even if they are the same type.

(vi) Face Detection and Recognition

 Face Detection locates faces in photos or videos, whereas Face Recognition identifies or
verifies a person based on facial features. This field uses algorithms such as Haar Cascades,
Histogram of Oriented Gradients (HOG), and deep learning-based face recognition models
(e.g., FaceNet, DeepFace).
 Common uses include security (monitoring), biometrics (unlocking devices), and social
media (tagging people in images).

(vii) Optical Character Recognition (OCR)

 OCR extracts text from pictures and scanned documents. OCR systems examine the
visual structure of text, identify the characters, and transform them to machine-readable
formats.
 Common applications include document digitization, automatic number plate recognition
(ANPR), and extracting text from images.

(viii) Motion and Activity Recognition

 Motion Recognition detects and tracks moving objects in video streams.
 Activity Recognition identifies actions or behaviors in video footage, such as jogging,
walking, or performing specific tasks. It is employed in surveillance, human-computer
interaction (HCI), sports analytics, and healthcare.

(ix) Depth Perception and 3D Vision

 Depth perception is a system's capacity to estimate an object's distance from the camera or
viewer.
 3D Vision reconstructs 3D models from 2D images. This is critical for applications such
as robotics, autonomous cars, and augmented reality (AR).
 Stereo Vision, LiDAR (Light Detection and Ranging), and depth cameras (e.g.,
Microsoft Kinect) can measure depth and build 3D models.

(x) Scene Understanding


 Scene Understanding analyzes images to identify relationships between objects and their
context in the environment.
 For example, recognizing objects in a kitchen scene, such as plates, glasses, and stoves,
along with their spatial relationships.
 Scene comprehension makes sense of a scene by using techniques such as object
detection, segmentation, and contextual reasoning.

b) Techniques and Approaches in Computer Vision

1. Traditional Computer Vision Methods

 Edge detection is the process of identifying boundaries between regions in an image.
 Corner Detection: Identifying points in an image with significant intensity changes (used
for feature matching).
 Histogram equalization improves visual contrast by distributing intensity more evenly.

2. Machine Learning and Deep Learning

 Classical machine learning algorithms, such as support vector machines and k-nearest
neighbors, can be used with feature extraction approaches to tackle computer vision
tasks.
 Deep learning, namely Convolutional Neural Networks (CNNs), has become the
preferred method for computer vision problems. CNNs learn hierarchical features from
raw visual data, improving performance for tasks such as object detection, recognition,
and segmentation.
 Transfer Learning involves fine-tuning a pre-trained deep learning model on a smaller
dataset to perform specific tasks, such as image classification with VGG16 or ResNet
architectures.

3. Feature Extraction

 Traditional computer vision algorithms, such as SIFT, SURF, and HOG, rely on
hand-crafted features.
 Deep learning reduces the need for manual feature engineering by automatically
learning relevant features.

4. Convolutional Neural Networks (CNNs)

 CNNs are the foundation of contemporary computer vision applications. They are made
up of convolutional, pooling, and fully connected layers that automatically learn spatial
feature hierarchies.
 Convolutional Layers filter the input to extract features such as edges and textures, and
more abstract concepts in deeper layers.
 Pooling Layers reduce the spatial size of feature maps while keeping vital characteristics.
 Fully Connected Layers are used at the network's end to classify or predict based on
learned features.
5. Generative Models

 GANs (Generative Adversarial Networks) generate new images, such as realistic faces,
and can enhance image resolution. They consist of two neural networks (a generator and a
discriminator) that are trained in competition with each other.
 Autoencoders are used for unsupervised learning, including denoising images,
compression, and anomaly detection.

c) Applications of Computer Vision

1. Autonomous Vehicles

Computer vision helps self-driving cars perceive their surroundings. It is used to detect
obstacles, recognize traffic signs, track pedestrians, and detect lanes.

2. Healthcare

 Medical imaging uses computer vision to analyze images such as X-rays, CT scans, and
MRIs for diagnosing illnesses, identifying tumors, and segmenting organs.
 Computer vision can also support remote patient monitoring by analyzing video or
image data to detect signs of disease or abnormalities.

3. Security and Surveillance

Computer vision applications include face recognition in security systems, automatic
license plate recognition (ALPR), and motion detection in surveillance cameras.
4. Retail and E-Commerce

 Retailers utilize computer vision to recognize and monitor products on shelves, allowing
for automated stock control and self-checkout systems.
 Augmented Reality (AR) enhances the shopping experience by allowing shoppers to
visualize products in their surroundings, such as trying on clothes virtually or previewing
furniture in a room.

5. Robotics

Robots employ computer vision to navigate, avoid obstacles, and manipulate objects. In
manufacturing, vision-guided robots can pick and place parts, assemble products, and
perform quality control tasks.

6. Agriculture
Precision agriculture uses computer vision systems to monitor crops, detect weeds, and
analyze plant health. Drones equipped with cameras and computer vision algorithms are
capable of surveying large farms.

7. Sports Analytics

Computer vision is utilized in sports to assess player motions, monitor the ball during
games, and predict outcomes or strategy using visual data.

8. Entertainment

In video editing, computer vision is used to automate scene-change detection, object
tracking, and effects rendering. Motion capture is used in filmmaking and games to record
and simulate human movement from visual data.

1.3.5 Real-time graphics

Real-time graphics is the creation and presentation of images or visuals that are generated,
processed, and displayed in real time, usually at a rate that allows seamless, continuous
display. The defining feature of real-time graphics is that frames are produced quickly enough
to be presented without noticeable delay or lag, enabling interactive applications and experiences.
Real-time graphics are used in a variety of fields, including games, virtual reality (VR),
augmented reality (AR), simulations, and interactive multimedia applications. The system
generates visual content in real-time, typically at 30 or 60 frames per second (FPS) or greater,
depending on the application.

Applications of Real-Time Graphics

1. Video Games

 Video games use real-time graphics to create dynamic and immersive settings. Whether it's
a fast-paced shooter or an open-world role-playing game, the ability to generate images
in real time is critical for creating engaging, interactive experiences.
 Game engines like Unreal Engine and Unity offer strong tools for creating high-quality
real-time graphics.

2. Virtual Reality (VR) and Augmented Reality (AR)

 Real-time rendering is crucial for generating immersive VR settings and allowing users to
interact with the scene in real time. Latency and frame rate are crucial for delivering a
seamless and believable experience that avoids motion sickness.
 Augmented reality (AR) uses real-time images to seamlessly integrate virtual and physical
aspects.

3. Simulations and Training


Real-time graphics are used in simulators for training purposes, including aircraft, military,
and medical simulations. The capacity to create realistic, interactive settings is critical to
developing effective teaching tools.

4. Interactive Media and Visualization

 Real-time graphics are utilized in applications such as virtual tours, architectural
visualization, and interactive storytelling, allowing viewers to engage with visual content in
real time.
 Data Visualization uses real-time visuals to present and track dynamic datasets in dashboards
and monitoring systems.

5. Automotive Industry (Advanced Driver Assistance Systems)

Real-time graphics are used in autonomous cars and driver assistance systems to interpret
and display visual information from cameras, LiDAR, and other sensors, detecting road
signs, pedestrians, and other automobiles in real time.

6. Live Broadcasting and Entertainment

 Live sports broadcasters employ real-time graphics to incorporate data-driven overlays
(e.g. scoreboards, player statistics) and visual effects into the live feed.
 Real-time rendering is commonly used in interactive discussions and broadcasts to
generate dynamic images.

1.3 Design and Drawing

Graphic design and drawing are two connected but distinct disciplines that create visual
content for communication, expression, and problem solving. While drawing traditionally uses
hand tools to make images on paper or canvas, graphic design combines traditional and digital
media to create graphics for multiple modes of communication. These graphics can include
logos, illustrations, advertisements, and websites, as well as posters and pamphlets.

Graphic design is the art and method of developing and executing ideas and experiences using
visual and textual material. Text, images, colors, and layout are used to produce visuals that
effectively transmit a message or idea.

1.3.1 Key Principles of Graphic Design

 Contrast creates visual intrigue and directs viewers' attention. It can be accomplished through
variations in color, size, form, or typography. Examples include light lettering on a dark
background or large bold headlines with smaller body text.
 Balance refers to how visual elements are distributed in a design. It might be symmetrical
(evenly balanced) or asymmetrical (uneven but weight-balanced).
Examples include a centered logo with equally scattered parts and a dynamic layout with
asymmetrical alignment.
 Alignment: Proper alignment connects each element in the design, producing an orderly and
coherent look. For example, aligning text to the left or right edge of an image or graphic
improves clarity.
 Hierarchy directs the viewer's attention to key elements such as headlines, subheadings,
and body content. For example, use a larger font size for titles and a smaller one for
supporting material.
 Repetition reinforces visual elements and promotes unity in design.
Repeating colors, shapes, or patterns establishes consistency.
 Proximity: Group related elements together to aid viewer comprehension. For example,
grouping related text and photos in a brochure.
 White space, often called negative space, refers to vacant areas in a design. White space
promotes readability and visual appeal while also allowing the design to "breathe."

1.3.2 Types of Graphic Design

a) Branding and Logo Design: Branding creates a visual identity for a firm or product,
including logos, color schemes, typography, and design language. Logo design involves
producing a unique symbol or wordmark to represent a business or product.
b) Web Design: Designing the arrangement, structure, and presentation of websites. It
focuses on user experience (UX) and user interface (UI) design, making sure the site is
visually appealing, easy to navigate, and responsive across all devices. Design elements
include wireframes, typography, color palettes, icons, and responsive layouts.
c) Advertising Design: Graphic designers develop ads for print (magazines, billboards) and
digital (banners, social media ads) channels. The goal is to capture the viewer's
attention while effectively communicating a commercial message.
d) Packaging Design: This area involves creating product packaging, such as labels, boxes,
and containers. The package should be functional, visually appealing, and reflective of
the brand's identity.
e) Editorial Design: Creating layouts for newspapers, magazines, books, and other
media. It entails arranging text, graphics, and other materials in an eye-catching and easy-
to-read fashion.
f) Motion Graphics: The development of animated graphics, videos, and visual effects.
It appears in advertising, explainer videos, and title sequences for movies
and television shows.
g) UI/UX Design

 UI (User Interface) Design involves the visual layout of interactive elements such as
buttons, sliders, and icons in websites or apps.
 UX (User Experience) Design aims to provide a user-friendly and gratifying experience.

1.3.3 Drawing
Drawing is the process of producing marks on a surface (such as paper, canvas, or screen) to
produce visual representations, which is frequently used for artistic expression or sketching ideas
for design projects.

a) Types of Drawing

(i) Sketching: A rapid, rough drawing to record ideas, proportions, or layouts. Sketches are
typically used as preparatory works for more finished drawings or designs.
Tools include pencils, charcoal, and digital drawing tablets.
(ii) Figure Drawing: Focuses on drawing the human body, whether in motion or in a fixed pose.
Artists study anatomy, proportions, and posture to create accurate depictions of the body.
Often practiced from life models or photographs.
(iii) Illustration: Drawings or paintings used to clarify or enhance a concept, product, or story.
This style of drawing is used in a variety of media, including novels, comics, and
advertisements. Tools include pencils, ink pens, markers, and digital tools such as Photoshop
or Procreate.
(iv) Portrait Drawing: A focused form of drawing that captures the likeness, personality, and
mood of a subject, usually a person.
Techniques include accurately portraying facial features, expressions, and proportions.
(v) Landscape Drawing: Drawing natural settings like mountains, woods, oceans, and cities.
Artists focus on perspective, light, and environmental aspects. Tools include graphite,
charcoal, colored pencils, and pastels.
(vi) Still Life Drawing: Focuses on inanimate objects such as fruits, flowers, and everyday
items. This practice improves observation skills and the ability to precisely render textures,
shadows, and reflections.

b) Drawing Tools
(i) Traditional tools:
 Pencils are used for drawing, shading, and detailing. Different grades of pencils (for
example, 2B, 4H) provide varying degrees of hardness and darkness.
 Charcoal is ideal for creating expressive and dramatic artwork with its rich, dark lines
and shading.
 Pens and ink are commonly used for intricate sketches and illustrations, typically
alongside pencil drawings.
 Watercolors are used to add color and texture, creating transparent effects.
 Pastels are soft drawing tools used to blend and create brilliant colors.

(ii) Digital Tools:

 Wacom tablets and iPads with Apple Pencil provide digital drawing and illustration.
These tools mimic the sensation of hand drawing while providing the freedom of digital
editing.
 Procreate is a popular iOS application for sketching, drawing, and painting.
 Clip Studio Paint is a versatile software for drawing used by illustrators and comic artists.
 Adobe Fresco is a versatile drawing and painting tool that uses both raster and vector
brushes.
c) Relationship Between Graphic Design and Drawing

While drawing focuses on creative thinking and unstructured expression, graphic design
combines drawing skills with other components such as typography, colors, and layout to
produce images that convey a message or serve a specific purpose. Many graphic designers start
with sketches or drawings to conceptualize their ideas before improving them digitally.

For example

 A logo designer may sketch numerous logo concepts before vectorizing them in software
such as Adobe Illustrator.
 Web designers may use pencil-and-paper wireframes to outline website layouts before
producing UI/UX designs in tools such as Sketch or Figma.

Both disciplines involve creativity, visual problem-solving abilities, and technical skill,
and they frequently complement one another, particularly in digital art, advertising,
and product packaging.

1.4 Graphic Devices

Graphic devices are hardware and software tools used to create, manipulate, display, or interact
with graphics and visual content. These devices are essential for designing graphics, digital art,
video editing, gaming, and numerous other multimedia applications.

1.4.1 Cathode Ray tube

A cathode-ray tube (CRT) is a vacuum tube that displays a trace on a fluorescent screen when an
electron beam is deflected by applied electric or magnetic fields. The cathode ray tube converts
an electrical signal into a visual display. Cathode rays, or streams of electrons, are simple to
produce: electrons orbit every atom and flow from one atom to the next as an electric current.

A cathode ray tube uses an electric field to accelerate electrons from one end to the other. When
the electrons strike the far end of the tube, they give up the energy they carry due to their
speed, which is converted into other forms such as heat; a small amount of the energy is
converted into X-rays.

The cathode ray tube (CRT), invented in 1897 by German physicist Karl Ferdinand Braun, is a
sealed glass envelope housing an electron gun (a source of electrons) and a fluorescent screen,
typically with internal or external mechanisms to accelerate and deflect the electrons. When the
electrons strike the fluorescent screen, light is emitted. The electron beam is deflected and
modulated so that an image is displayed on the screen. The image may depict electrical
waveforms (oscilloscope), pictures (television, computer monitor), echoes of radar-detected
aircraft, and so forth. Color CRTs typically use three electron beams (red, green, and blue) to
display moving images in natural colors.
1.4.2 Direct View Storage Tube

The Direct View Storage Tube (DVST) is an early display technology used in computers and
graphics systems, particularly from the 1950s to the early 1970s. It is a type of electronic
display that records images directly on the screen. The DVST represented a significant
technological advance at the time, since it combined graphic display and storage in a
single unit.

The Direct View Storage Tube uses an electron beam to draw and store images on a
phosphorescent surface. Here is a breakdown of how it operates:

 Electron Beam: The tube works similarly to a CRT, with an electron gun firing electrons at a
phosphor-coated screen.
 Image Drawing: Electromagnetic fields steer the electron beam to scan and "draw" images on
the phosphor surface. The surface emits light in response to the electrons, producing a
visible image.
 Image Persistence: Unlike a conventional CRT, the written image is held on a storage
surface, so it remains visible without continuous refreshing; low-energy flood electrons keep
the stored phosphor image illuminated until the whole screen is erased.
 The system may "write" and "read" images to the screen for processing. The saved image
could be viewed and manipulated in a variety of ways, which was a novel feature for the
time.

a) Applications of Direct View Storage Tube

The DVST was utilized in several early computer graphics systems, mostly for science and
military applications. Some of its applications include:
 Early computers and graphics workstations used DVSTs to show graphical data. One of the
most noteworthy systems was MIT's Whirlwind computer, which used a DVST for dynamic
real-time graphics in the 1950s
 The DVST's capacity to store and display images led to its employment in military and
aerospace applications such as radar or sonar data storage and graphical simulation
 Researchers in fields including physics, biology, and engineering used DVSTs to display
complex visual data such as particle tracks and graphs.

b) Advantages of the DVST

 The DVST's storage capability was a crucial advantage over ordinary CRTs, enabling a
persistent display without continuous refresh.
 Interactive Display: This technology enabled users to manipulate and observe graphics in real
time.
 DVSTs save cost and space by combining storage and display capabilities into a single
device, eliminating the requirement for separate media for graphical data (e.g. punch cards or
tapes).

c) Disadvantages and Limitations

 Resolution: DVSTs had lower resolution than modern displays, and image persistence
was not always reliable, since images faded over time.
 Early DVSTs had poor color and detail capabilities compared to modern monitors and
graphics systems.
 DVSTs were unsuitable for commercial applications due to their size and high cost. They
were mostly used for specialized purposes such as defense and scientific research.

1.4.3 Light Pen

A light pen is an input device that detects light emitted by a computer screen. It was one
of the first methods for interacting with on-screen graphics and was used in graphic design,
drawing, and precise screen manipulation.

a) Working of Light Pen


The light pen uses a photoelectric sensor and is used together with a CRT display or another
light-emitting screen.

 Light Detection: The light pen has a photoelectric (light-sensitive) sensor at its tip. When
the user touches the screen with the light pen, the sensor detects the light emitted by the
CRT display.
 Position Sensing: CRT displays use an electron beam that continuously refreshes the screen.
The light pen detects the instant the beam passes the spot the pen is pointing at; the screen
coordinates (x, y) are determined by correlating that instant with the beam's known scan
position.
 Input: The pen delivers positional information to the computer, enabling user interaction such
as choosing objects, drawing, and modifying graphics on the screen.
 Feedback: Depending on the software and system setup, users receive immediate feedback on
the display as they move the light pen, similar to a mouse or stylus today.

b) Applications of Light Pen

 Early graphic design and drawing apps utilized light pens to create digital artwork. Artists
could sketch directly on the screen, just as they would with a pen on paper. For example, in
CAD (Computer Aided Design) software, the light pen was used to pick objects and draw
lines.
 Light pens were utilized for straightforward data entry, including selecting options from
graphical menus and marking precise locations on charts or maps.
 Early interactive systems, such as interactive whiteboards and instructional software, utilized
light pens to allow users to choose and modify elements on the screen.
 Early arcade and console games used light pens to fire targets and interact with game
interfaces.
 Medical Imaging: Light pens were utilized to interact with digitized images and radiographic
data on computer screens.

c) Advantages of Light Pen

 Direct engagement: The light pen's direct engagement with the screen is more intuitive than
using a mouse or keyboard.
 The light pen provides exceptional precision while sketching or selecting things on a screen,
especially for fine detail.
 Speed: Light pens can be faster than mice for certain operations including selecting, drawing,
and tracing.

d) Disadvantages of Light Pen

 Limited to CRT displays: The light pen was built for CRT monitors, which are no longer
commonly used in computing. Modern displays, such as LCD or LED panels, do not provide
the same level of engagement as CRT screens with light pens.
 Fatiguing to use over time: Using a light pen requires fine hand-eye coordination, which
can become exhausting or uncomfortable after extended use.
 Limited Precision: The light pen is not as exact as contemporary instruments or digital
tablets, especially for intricate drawing or design work.
 Limited Software Support: Light pen technology is not as widely supported as other input
devices such as mice or touchscreens in software applications.

1.5 C Graphics Basics


C graphics is the use of the C programming language to generate graphical output such as
pictures, graphs, animations, and games. C graphics can be used for a range of tasks,
including drawing shapes, designing user interfaces, and building games. Note that standard C
has no built-in graphics facilities, so developers rely on external libraries for graphical
tasks.

1.5.1 Graphics Programming

The most widely used libraries for C graphics include:

 Graphics.h (part of Turbo C and early compilers)


 OpenGL (for 3D graphics)
 SDL (Simple DirectMedia Layer)
 SFML (Simple and Fast Multimedia Library)
 GTK (for GUI applications)
 Qt (for creating cross-platform applications)

1.5.2 Initializing the Graphics

Setting Up Graphics in C (Using Graphics.h)

 Installation: Turbo C and Borland C++ (legacy IDEs) ship with the graphics.h library. With
a modern IDE (such as Code::Blocks or Dev-C++), you may need to install a compatible port
(e.g., WinBGIm) or use another library such as SDL or OpenGL.
 Initializing Graphics Mode: Before any drawing can occur in C, the graphics mode must be
initialized. This is accomplished via the initgraph() function.

1.5.3 C Graphical Functions

The basic Graphics Functions in C (Graphics.h) are as follows

 Drawing a Line
line(x1, y1, x2, y2);
In the above function (x1, y1) and (x2, y2) are the coordinates of the starting and ending points
of the line.

 Drawing a Circle
circle(x, y, radius);

In the above function, (x, y) is the center of the circle and radius is its radius.

 Drawing a Rectangle
rectangle(x1, y1, x2, y2);
In the above function (x1, y1) and (x2, y2) are the top-left and bottom-right coordinates of the
rectangle.

(i) Program demonstrating basic drawing functions in C
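The demonstration program for this section can be sketched as below. It assumes a Turbo C/Borland-style environment (or the WinBGIm port of graphics.h on modern compilers); the BGI driver path passed to initgraph() is an example and must be adjusted to match your installation:

```c
#include <graphics.h>
#include <conio.h>

int main(void)
{
    /* DETECT asks initgraph() to autodetect the graphics driver */
    int gd = DETECT, gm;

    /* Third argument is the BGI driver directory -- adjust for your setup */
    initgraph(&gd, &gm, "C:\\TURBOC3\\BGI");

    line(100, 100, 200, 100);        /* line from (100,100) to (200,100)            */
    circle(150, 200, 50);            /* circle centred at (150,200), radius 50      */
    rectangle(250, 100, 400, 200);   /* top-left (250,100), bottom-right (400,200)  */

    getch();        /* wait for a key press so the drawing stays visible */
    closegraph();   /* restore text mode and release graphics resources  */
    return 0;
}
```

On a correct setup this opens a graphics window, draws the three primitives, and exits after a key press.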
