UNIT-1: Computer Graphics
Chapter No
1 Introduction
1.1 Definition of Computer Graphics
1.2 Area of Computer Graphics
1.3 Design and Drawing
2 Graphic Devices
2.1 Cathode Ray Tube
2.2 Direct View Storage Tube
2.3 Light Pen
3 C Graphics Basics
3.1 Graphics Programming
3.2 Initializing the Graphics
3.3 C Graphical Functions
3.4 Simple Program
1 Introduction
In short, computer graphics are visual representations produced by computers. Because many
augmented reality apps superimpose visual images on the real world, it is critical to
understand where this imagery comes from. Computer graphics applications include interactive
films, augmented reality user interfaces, architectural visualization, and experimental simulations.
The field builds on a variety of technologies and tools, including graphics programming
interfaces (e.g., OpenGL, DirectX) and applications such as Blender, Maya, and AutoCAD.
Definition: Computer graphics, as the name implies, is the process of creating images with
software programs and techniques. To create successful computer graphics, you need to know
the basic principles of design and how to use graphics software to apply texture,
shading, shadows, color, and other features.
1.2 Areas of Computer Graphics
Computer graphics encompasses a wide range of topics and applications, each focused on a
distinct aspect of creating, processing, and interpreting visual images.
1.2.1 Modeling: It focuses on constructing and representing objects or scenes in 2D and 3D.
It comprises geometric modeling (e.g., shapes, curves, and surfaces) as well as procedural
modeling.
Types of Modeling
2D Modeling
Flat objects or scenes are represented on a plane (x and y dimensions).
Typically used for illustrations, diagrams, and vector graphics.
Tools include Adobe Illustrator and CorelDRAW.
3D Modeling
Represents objects in three dimensions (x, y, and z), which permits rotation and
perspective.
Applications include games, animations, simulations, and CAD.
Tools include Blender, Maya, 3ds Max, and AutoCAD.
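The rotation mentioned above can be made concrete with a little linear algebra. The sketch below (illustrative Python, not taken from any particular modeling tool) rotates a 3D point about the z-axis using the standard rotation matrix:

```python
import math

def rotate_z(point, angle_deg):
    """Rotate a 3D point (x, y, z) about the z-axis by angle_deg degrees."""
    x, y, z = point
    a = math.radians(angle_deg)
    # Standard 2D rotation applied to the x-y plane; z is unchanged.
    return (x * math.cos(a) - y * math.sin(a),
            x * math.sin(a) + y * math.cos(a),
            z)

# A point on the x-axis rotated 90 degrees lands on the y-axis.
print(rotate_z((1.0, 0.0, 0.0), 90))
```

Modeling packages apply the same idea, usually as 4x4 matrices so that rotation, scaling, and translation can be combined in one operation.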
1.2.2 Rendering: It deals with producing lifelike or stylized 2D images from 3D models. It
includes shading, illumination, ray tracing, rasterization, and global illumination.
Types of Rendering
Real Time Rendering
Focused on speed, creating frames sufficiently fast for interactive applications such as video
games or simulations.
This was accomplished utilizing Graphics Processing Units (GPUs) and techniques such as
rasterization. For example, in a game, frames are rendered at 60 frames per second (fps).
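The 60 fps figure implies a hard time budget for every frame. A quick back-of-the-envelope calculation (plain Python, purely illustrative):

```python
def frame_budget_ms(fps):
    """Milliseconds available to produce one frame at the given frame rate."""
    return 1000.0 / fps

# At 60 fps the renderer has roughly 16.7 ms per frame; at 30 fps, 33.3 ms.
for fps in (30, 60, 144):
    print(f"{fps} fps -> {frame_budget_ms(fps):.2f} ms per frame")
```

Everything the engine does in a frame (simulation, culling, rasterization, post-processing) must fit inside that budget, which is why real-time rendering trades image quality for speed.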
Offline Rendering
Concentrated on achieving high-quality, lifelike results.
It takes longer to calculate complicated lighting and shading techniques.
Used in film, animation, and visual effects. As an example, consider rendering a CGI
scene for a film.
1.2.3 Animation: Animation is the technique of creating the appearance of motion by presenting
a series of images, or frames, that evolve over time. It brings static objects or characters to
life and is used in a variety of settings, including entertainment, education, simulation, and
advertising. It concentrates on mimicking motion and dynamics over time, and involves
keyframe animation, motion capture, character rigging, and physics-based simulations.
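Keyframe animation boils down to interpolating values between stored poses. A minimal sketch in Python (linear interpolation only; real animation systems also use easing curves and splines):

```python
def lerp(a, b, t):
    """Linearly interpolate between a and b for t in [0, 1]."""
    return a + (b - a) * t

def value_at(keyframes, t):
    """keyframes: time-sorted list of (time, value) pairs.
    Returns the in-between value at time t."""
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            return lerp(v0, v1, (t - t0) / (t1 - t0))
    return keyframes[-1][1]  # hold the last pose after the final keyframe

# An object moving from x=0 at t=0 to x=100 at t=1:
frames = [(0.0, 0.0), (1.0, 100.0)]
print(value_at(frames, 0.25))  # -> 25.0
```

The animator specifies only the keyframes; the computer generates every in-between frame the same way for position, rotation, color, or any other animatable property.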
Types of Animation
2D Animation: It creates movement in a 2D space. Traditional animation is drawn frame
by frame; digital 2D animation uses software to create vector-based or raster-based
animations. Examples include Tom and Jerry cartoons.
3D Animation: It creates movement in a 3D space. It requires modeling, rigging (skeleton setup),
and animating characters and objects. Examples include movies like Toy Story and
Frozen.
Stop Motion Animation: Physical objects are moved in small increments and photographed frame
by frame. Examples are Wallace & Gromit and Coraline.
Motion Graphics: It concentrates on animated visual elements such as text, shapes, and logos.
It is used extensively in commercial and promotional videos. Examples include kinetic
typographic animations.
Cel Animation: The conventional approach of drawing each frame on transparent sheets
(cels). It was used in early animated films, such as Snow White and the Seven Dwarfs.
Rotoscoping: Traces live-action video frame by frame to produce realistic animations. It is used
in movies to create realistic character movements.
Procedural Animation : Algorithms are used to automatically generate animations such as
physics simulations and particle effects. For example, in games, you can simulate fabric, water,
or fire.
Cutout Animation: Flat characters and objects (cutouts) are moved incrementally between shots.
Examples include South Park.
1.2.4 Image Processing: It involves the manipulation and enhancement of digital images.
Noise removal, image filtering, edge detection, and color adjustment are all common
applications.
Types of Image Processing
Analog image processing works on physical pictures such as photographs or
prints; images can be filtered or enhanced using optical equipment.
Digital image processing alters digital images using algorithms,
executed on computers or specialized hardware (for example, GPUs).
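As a tiny illustration of what altering digital images with algorithms means in practice, here is a per-pixel brightness adjustment on an 8-bit grayscale image, written in plain Python (real pipelines would use libraries such as OpenCV or NumPy):

```python
def adjust_brightness(pixels, delta):
    """Add delta to every 8-bit pixel value, clamping the result to [0, 255]."""
    return [[max(0, min(255, p + delta)) for p in row] for row in pixels]

image = [[10, 250],
         [128, 0]]
print(adjust_brightness(image, 20))  # -> [[30, 255], [148, 20]]
```

Note the clamping: pixel values must stay in the valid 0..255 range, which is why the bright pixel saturates at 255 instead of overflowing.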
1.2.5 User Interface Design (UI): UI design is the process of creating the structure, visual
components, and interactive behavior of applications, apps, and websites with which users
interact. The purpose of UI design is to produce an intuitive, effective, and visually appealing
interface that improves the User Experience (UX). It creates graphical interfaces that enable
users to engage with software; examples include menus, controls, icons, and visual
feedback. UX refers to the total experience a user has when engaging with a system,
including functionality, ease of use, and satisfaction.
The Basic Concepts of UI Design
Clarity: The interface should be simple to use, with clear labels, icons, and actions. To avoid
confusion, essential components should be emphasized or prominently shown.
Consistency: Maintain a consistent visual appearance, layout, and behavior across the application.
Use consistent colors, typefaces, buttons, and icons so that users understand what to expect.
Simplicity: Keep interfaces simple and intuitive by removing extraneous components. Use white
space wisely to reduce visual clutter.
Feedback: Users should acquire visual or audio indications when interacting with items, such
as buttons and loading indicators. Feedback informs users that their decisions are being
processed.
Accessibility: Design should be inclusive to individuals with impairments, such as
colorblindness or limited vision. Features such as font scaling, high contrast options, and
accessibility features should be available.
Responsiveness: The UI ought to adjust and deliver a consistent experience across all screen
sizes. Check that the layout adjusts well for different devices (responsive design).
Efficiency: Reduce the number of steps necessary to execute tasks. Users can navigate more
quickly with features such as autocomplete, shortcuts, and predictive text.
Hierarchy: Create a visual hierarchy that directs users' focus to the most critical items. Size,
color, and placement can be used to highlight critical components such as buttons, headlines, or
icons.
1.2.6 Virtual Reality (VR) and Augmented Reality (AR): VR is an entirely immersive virtual
environment that mimics the physical world. Users interact with a virtual 3D world using
specialized hardware; the aim is to create immersive 3D experiences. AR superimposes digital
content onto the actual world, enhancing it with digital media (images,
sounds, or data) in real time. Mixed Reality (MR) blends aspects of both VR and AR to
generate settings in which physical and digital items interact in real time. For example, MR
enables users to interact with simulated items placed in their real-world surroundings.
1.2.7 Computer-Aided Design (CAD): CAD is the use of software and technology to develop,
alter, evaluate, or optimize designs for a variety of engineering, architectural, and
manufacturing applications; AutoCAD and SolidWorks are typical examples.
Professionals use CAD to create exact and adaptable drawings in both 2D and 3D,
which reduces development time and improves design quality.
Types of CAD
2D CAD focuses on making flat, two-dimensional designs. Applications include architectural
floor plans and circuit schematics. Examples: AutoCAD (2D mode) and LibreCAD.
3D CAD enables users to construct realistic 3D objects. Applications include mechanical
elements, product prototypes, and architectural models. Examples include SolidWorks, CATIA,
and Autodesk Fusion 360.
Parametric CAD uses parameters (dimensions, constraints) to define design elements. For
example, changing one dimension updates every dependent aspect of the model. Example software:
Creo and SolidWorks.
Surface Modeling: It concentrates on generating smooth surfaces for aesthetic or aerodynamic
reasons. Applications include automotive design and consumer electronics. Examples include
Rhino and Autodesk Alias.
Solid Modeling: It generates models with solid geometry and volume. Applications include
engineering and manufacturing. Examples include SolidWorks and Autodesk Inventor.
Assembly Modeling: It combines several components to create a full system or product.
Applications include the design of machinery and complex systems.
1.2.8 Game Development: Game development includes the entire process of creating games, from
planning and designing to programming, testing, and final release. Bringing a game to life
requires collaboration across multiple specialist groups, including designers, programmers,
artists, and sound engineers. Game development can be difficult, requiring a mix of technical
knowledge, creativity, and strategic planning. It normally follows a set approach, but the
specifics differ depending on the game type (mobile, console, PC, VR/AR). It involves
creating interactive environments and characters for video games, and combines storytelling,
art, animation, and physics simulation.
(i) Conceptualization/Pre-production
The generation of ideas involves brainstorming game concepts, such as genre, target
audience, and gaming mechanics.
The Game Design Document (GDD) describes the game's overall vision, mechanics,
characters, narrative, levels, and technical requirements.
A prototype is a miniature, working version of a game used for testing fundamental
gameplay mechanics.
(ii) Design: Define game mechanics, including the controls, goals, player interactions, and
levels.
Story and Narrative: Develop a tale, characters, and dialogue for games such as RPGs
and adventures.
Art and Aesthetics: Developing characters, environments, and assets such as
2D or 3D artwork, textures, and animations.
Audio Design: Creating the game's soundtrack, sound effects, and voice
performances.
(iii) Development/Production
Programming entails developing code for game controls, interfaces for users, artificial
intelligence (AI), physics, and interactivity. The platform and tools that are employed
will determine the development environment.
Engine Selection: Choosing or developing a game engine (such as Unreal Engine, Unity,
or Godot) that serves as the foundation for the game's graphics, physics, and other
components.
Art creation includes models in 3D, textures, visuals, visual effects (VFX), and user
interface elements.
Level Design: Creating distinct gaming levels or environments, including the positioning
of objects, challenges, and enemy characters.
(i) Game Engines: A game engine is software that provides tools for game production,
such as graphics rendering, physics simulation, sound, and input processing.
Unity is a popular game engine, especially for mobile, indie, and 2D games. It uses C#
for scripting.
The Unreal Engine is well-known for its complex physics and high-quality 3D graphics.
It employs C++ for programming and includes Blueprints, a visual scripting language.
Godot is an open-source game engine that supports both 2D and 3D game production.
It uses two scripting languages: GDScript and C#.
CryEngine creates photorealistic worlds for high-end games such as Crysis.
2D Art: Flat graphics used in 2D games, such as character designs, backgrounds,
and interfaces. Commonly used tools include Adobe Photoshop, Illustrator, and Aseprite.
3D Art: Create models for people, settings, and props in games. Blender, Autodesk Maya,
and ZBrush are popular 3D modeling applications.
Animation involves creating motion for characters, surroundings, and objects. Maya,
Blender, Spine, and Unity's animation tools are among the available tools.
Textures add color and surface complexity to 3D models.
(iv) Sound and Music
Sound Effects: Used for footsteps, gunfire, explosions, and environmental
sounds. Audacity, Adobe Audition, and Logic Pro are examples of software tools.
Music is composed to match the game's tone and ambiance. Composers frequently
collaborate with designers to fit the game's emotive beats.
Voice Acting: In games with speech, performers record lines for their respective
characters. During the post-production phase, dialogue and lip sync are synchronized.
Gameplay mechanics refer to the rules and procedures that control how players interact
with the game world, including combat, puzzles, and leveling.
The User Interface (UI) includes menus, buttons, and HUD elements for player
interaction with the game.
User Experience (UX) refers to the game's overall flow, including navigation,
difficulties, and engagement.
Level Design: Creating enjoyable and difficult game stages for players.
PC games run on Windows, macOS, and Linux and are typically available on sites like
Steam or Epic Games Store.
Console games are optimized for PlayStation, Xbox, and Nintendo Switch systems.
Mobile games are built for smartphones and tablets, including iOS (Apple App Store)
and Android (Google Play Store).
Web games are made using HTML5, JavaScript, and WebGL.
1.2.9 Graphics Hardware Development: This is the process of creating and improving the physical
hardware that computer systems use to render and process visual data. This comprises GPUs, video
cards, memory, and display technologies. Graphics hardware is essential in many industries, such
as gaming, film production, virtual reality (VR), augmented reality (AR), and scientific
visualization. The creation of these hardware components is a highly specialized field that
requires a mix of physics, engineering, and computer science. It focuses on the design and
optimization of hardware, such as GPUs (Graphics Processing Units), to enable efficient
rendering and computing.
1. Architecture Design
The GPU architecture specifies the internal arrangement and interactions of several
hardware components, including shader cores, memory, and cache.
Designers strive to optimize design for tasks such as display speed, power efficiency,
and the capacity to run numerous operations concurrently.
The GPU is shipped for manufacture after the architecture has been designed.
Semiconductor manufacturers like TSMC and Samsung use sophisticated lithography
techniques to create small circuits on silicon chips.
Process Nodes: The size of the GPU's transistors is essential for effectiveness and
energy efficiency. Modern GPUs are frequently built using process nodes as tiny as
7nm or 5nm.
Hardware experts tune GPUs for efficiency per watt. This is crucial because GPUs in
consoles, workstations, and data centers must balance speed with power consumption.
Modern GPUs feature dynamic frequency scaling, which adjusts the clock speed
based on demand to save power while not in use.
GPUs are rigorously tested before release to ensure they meet performance
expectations. This includes load and power-consumption tests, as well as performance
benchmarks.
Driver Development: Specialized drivers are designed to optimize GPU performance
with operating systems and software. These drivers allow the GPU to interact with
APIs such as DirectX or Vulkan for rendering graphics.
1. Ray Tracing
Ray tracing is becoming a regular feature on high-end GPUs. NVIDIA and AMD have
provided real-time ray tracing abilities with hardware that supports it (for example, the
NVIDIA RTX series).
Real-time ray tracing is a cutting-edge technology that improves image realism by
replicating light rays to produce more realistic reflections, refractions, and shadows.
2. AI-Powered Graphics
AI and machine learning are being incorporated into graphics hardware for real-time
image augmentation. Examples include DLSS (Deep Learning Super Sampling), which
utilizes AI to upscale images and boost frame rates without losing visual quality.
Modern NVIDIA GPUs include Tensor Cores, which accelerate AI workloads such as deep
learning computations.
4. Cloud Gaming
Cloud gaming services like Google Stadia and NVIDIA GeForce Now use remote
servers with powerful GPUs to stream games to gamers, eliminating the need for
expensive local hardware.
Virtual GPUs (vGPUs) enable efficient use of GPU capabilities in cloud data centers,
allowing several users to utilize the same hardware.
5. Thermal and Power Efficiency
High-end GPUs generate more heat because of their higher performance. Advanced
cooling technologies, such as liquid cooling and thermal design improvements,
are used to keep the hardware operating at safe temperatures.
Energy-effective GPUs are being developed to deliver excellent performance with
minimal power usage.
6. Multi-GPU Configurations
GPU scaling is a practice in high-performance systems that uses numerous GPUs to improve
graphics performance.
NVIDIA SLI and AMD CrossFire technologies let two or more GPUs operate together on
demanding workloads, though multi-GPU setups are becoming less prevalent as single-GPU
performance improves.
1. NVIDIA
The company is well-known for its GeForce and RTX GPUs, designed for gamers,
creators, and professionals. NVIDIA is also a frontrunner in AI-powered graphics, thanks
to its Tensor Cores.
The Quadro series targets professional workstations for design, simulation, and
rendering applications.
2. AMD
It competes with NVIDIA in the gaming and professional areas with its Radeon and
Radeon Pro graphics cards. AMD also makes APUs (Accelerated Processing Units),
which combine a CPU and a GPU into a single chip.
AMD's RDNA architecture is widely employed in gaming and workstation GPUs
because of its superior performance.
3. Intel
Intel is known for its CPUs, but has recently expanded into discrete GPUs with the
Intel Arc series. Intel GPUs are designed to compete in both the gaming and AI
markets.
4. ARM
ARM develops Mali graphics processors for smartphones, embedded systems, and
applications that require little power. ARM-based GPUs are commonly seen in
phones and tablets.
Challenges in Graphics Hardware Development
(i) Thermal Management: As GPU performance improves, so does heat output. Effective
cooling is critical, particularly in small or portable devices where cooling options are
limited.
(ii) Cost of Development: Developing advanced GPU technology is costly and time-
consuming, requiring extensive research and testing.
(iii) Balancing performance and power efficiency is a fundamental problem in GPU
development, particularly for mobile and embedded systems.
(iv) Industry Competition: The GPU industry is fiercely competitive, with major
manufacturers like NVIDIA, AMD, and Intel vying for dominance through novel
features and performance increases.
1.2.10 Computer Vision
Computer vision is an interdisciplinary field of artificial intelligence (AI) and computer science
that seeks to enable machines to interpret and comprehend the visual world. It entails
creating algorithms and models that let computers process, evaluate, and extract meaningful
information from digital images or videos, similar to how humans perceive visual stimuli.
The purpose of computer vision is to automate functions that the human visual system can
perform, such as item identification, face recognition, motion tracking, and scene
comprehension. It is widely employed in many areas, including healthcare, automotive systems,
robotics, recreation, and security.
Computer vision relies heavily on image processing techniques, which modify and
improve images to make them better suited for analysis. This can include:
Noise reduction involves removing image artifacts caused by sensor noise or transmission
errors.
Edge detection involves identifying the borders of objects in a picture using techniques
such as Canny or Sobel.
Improving image quality with adjustments to brightness, contrast, and sharpness.
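Edge detection with the Sobel operator, as mentioned above, convolves the image with two 3x3 gradient kernels. A self-contained sketch in plain Python (illustrative only; production code would use OpenCV's Sobel or similar):

```python
# Sobel kernels for horizontal (GX) and vertical (GY) intensity gradients.
GX = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
GY = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(img, x, y):
    """Gradient magnitude at interior pixel (x, y) of a 2D grayscale image."""
    gx = gy = 0
    for j in range(3):
        for i in range(3):
            p = img[y + j - 1][x + i - 1]
            gx += GX[j][i] * p
            gy += GY[j][i] * p
    return (gx * gx + gy * gy) ** 0.5

# A sharp vertical edge: dark left half, bright right half.
image = [[0, 0, 255, 255]] * 4
print(sobel_magnitude(image, 1, 1))  # strong response at the boundary
```

Pixels in flat regions produce a magnitude near zero, while pixels on an intensity boundary produce a large magnitude; thresholding that magnitude yields an edge map.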
Object detection is the process of locating and recognizing items within a picture. This
includes identifying objects' positions (bounding boxes) as well as categorizing them (e.g.,
person, car, dog).
Algorithms Used:
Deep Learning: CNNs are often used for image classification, with the model learning
features from large datasets and categorizing new images.
Instance segmentation is related to semantic segmentation, but goes one step further by
discriminating between items of the same class. For example, it can detect and segment multiple
cars in an image, even if they are the same type.
Face Detection locates faces in photos or videos, whereas Face Recognition validates
identities based on facial traits. This field uses algorithms such as Haar Cascades,
Histogram of Oriented Gradients (HOG), and deep learning-based face recognition
(e.g., FaceNet, DeepFace).
Common uses include security (monitoring), biometrics (unlocking devices), and social
media (tagging people in images).
OCR extracts text from pictures and scanned documents. OCR systems examine the
visual structure of text, identify the characters, and transform them to machine-readable
formats.
Common applications include document digitization, automatic number-plate recognition
(ANPR), and extracting text from images.
Motion Recognition detects and tracks objects that move in video streams.
Activity Recognition identifies actions or behaviors in video footage, such as jogging,
walking, or executing specific tasks. It is employed in surveillance, human-computer
interaction, sports analytics, and healthcare.
Depth perception is a system's capacity to assess an object's distance from the camera or
viewer.
3D Vision: Reconstructs 3D models from 2D images. This is critical for applications such
as robotics, autonomous cars, and augmented reality (AR).
Stereo Vision, LiDAR (Light Detection and Ranging), and Depth Cameras (e.g.,
Microsoft Kinect) can measure depth and build 3D models.
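Stereo depth estimation reduces to a simple relation under the pinhole-camera model: depth Z = f * B / d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity in pixels. A sketch with made-up numbers (the rig parameters below are hypothetical):

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: Z = f * B / d.
    focal_px: focal length in pixels; baseline_m: camera separation in meters;
    disparity_px: pixel shift of the same point between the two images."""
    return focal_px * baseline_m / disparity_px

# Hypothetical rig: 700 px focal length, 12 cm baseline, 14 px disparity.
print(depth_from_disparity(700.0, 0.12, 14.0), "meters")  # ~6.0 meters
```

Note the inverse relationship: nearby objects shift more between the two views (large disparity, small depth), while distant objects barely shift at all.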
1. Classical Machine Learning
Classical machine learning algorithms, such as support vector machines and k-nearest
neighbors, can be used with feature extraction approaches to tackle computer vision
tasks.
2. Deep Learning
Deep learning, namely Convolutional Neural Networks (CNNs), has become the
preferred method for computer vision problems. CNNs learn hierarchical features from
raw visual data, improving performance for tasks such as object detection, recognition,
and segmentation.
Transfer Learning involves fine-tuning a pre-trained deep learning model on a smaller
dataset to perform specific tasks, such as image classification with VGG16 or ResNet
architectures.
3. Feature Extraction
Traditional computer vision algorithms, such as SIFT, SURF, and HOG, rely on
manually created features.
Deep learning reduces the requirement for human feature extraction by automatically
learning important features.
4. Convolutional Neural Networks (CNNs)
CNNs are the foundation for contemporary computer vision applications. They are made
up of convolutional, pooling, and fully connected layers that automatically learn spatial
feature hierarchies.
Convolutional Layers: Filter the input to extract features such as edges and textures, and
more abstract concepts in deeper layers.
Pooling Layers: Reduce the spatial size of feature maps while keeping vital characteristics.
Fully Connected Layers: Used at the network's end to classify or predict based on
learned features.
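The pooling step described above can be shown in a few lines. This sketch implements 2x2 max pooling on a small feature map in plain Python (frameworks like PyTorch or TensorFlow provide this as a built-in layer):

```python
def max_pool_2x2(feature_map):
    """Downsample a 2D feature map by keeping the max of each 2x2 block."""
    h, w = len(feature_map), len(feature_map[0])
    return [[max(feature_map[y][x],     feature_map[y][x + 1],
                 feature_map[y + 1][x], feature_map[y + 1][x + 1])
             for x in range(0, w, 2)]
            for y in range(0, h, 2)]

fm = [[1, 3, 2, 4],
      [5, 6, 1, 0],
      [7, 2, 9, 8],
      [4, 1, 3, 2]]
print(max_pool_2x2(fm))  # -> [[6, 4], [7, 9]]
```

Each 2x2 block collapses to its strongest activation, halving the width and height while preserving the most salient features.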
5. Generative Models
GANs generate new images, including realistic faces and enhanced resolution. They are
made up of two neural networks (a generator and a discriminator) that are trained to
compete.
Autoencoders are used for unsupervised learning, including denoising images,
compression, and identifying anomalies.
1. Autonomous Vehicles
Computer vision helps self-driving cars perceive their surroundings. It is used to detect
barriers, recognize traffic signs, track pedestrians, and detect lane changes.
2. Healthcare
Medical imaging uses computer vision to analyze pictures such as X-rays, CT scans, and
MRIs for diagnosis of illnesses, tumor identification, and organ segmentation.
Computer vision can also support remote patient monitoring by analyzing video or
image data to detect indicators of disease or abnormalities.
3. Security and Surveillance
Computer vision applications include face recognition in security systems, automatic
license-plate recognition (ALPR), and motion detection in surveillance cameras.
4. Retail and E-Commerce
Retailers utilize computer vision to recognize and monitor products on shelves, allowing
for automated stock control and self-checkout systems.
Augmented Reality (AR) enhances the buying experience by allowing shoppers to
envision things in their surroundings, such as putting on clothes or examining furniture in
a room.
5. Robotics
Robots employ computer vision to navigate, avoid obstacles, and manipulate items. In
manufacturing, vision-guided robots can pick and place things, as well as assemble and
conduct quality control duties.
6. Agriculture
Precision agriculture uses computer vision systems to monitor crops, detect weeds, and
analyze plant health. Drones coupled with cameras and computer vision algorithms are
capable of surveying big farms.
7. Sports Analytics
Computer vision is utilized in sports to assess player motions, monitor the ball during
games, and predict outcomes or strategy using visual data.
8. Entertainment
In video editing, computer vision is used to automate scene changes, object tracking,
and effects rendering. Motion capture is used in filmmaking and games to record and
simulate human movement from visual data.
1.2.11 Real-Time Graphics: Real-time graphics is the generation and presentation of images that
are formed, processed, and displayed in real time, usually at a rate that allows for seamless,
continuous display. The main feature of real-time graphics is that frames are created quickly
enough to be presented without noticeable delay or lag, enabling interactive applications and
experiences.
Real-time graphics are used in a variety of fields, including games, virtual reality (VR),
augmented reality (AR), simulations, and interactive multimedia applications. The system
generates visual content in real-time, typically at 30 or 60 frames per second (FPS) or greater,
depending on the application.
1. Video Games
Video games use real-time visuals to create dynamic and immersive settings. Whether it is
a fast-paced shooter or an open-world role-playing game, the ability to generate images
in real time is critical for dynamic and engaging experiences.
Game engines like Unreal Engine and Unity offer strong tools for creating high-quality
real-time graphics.
2. Virtual Reality (VR)
Real-time rendering is crucial for generating immersive VR settings and allowing users to
interact with the scene in real time. Latency and frame rate are crucial for delivering a
seamless and believable experience that avoids motion sickness.
3. Augmented Reality (AR)
Augmented reality uses real-time images to seamlessly integrate virtual and physical
elements.
4. Automotive Systems
Real-time graphics are used in autonomous cars and driver-assistance systems to interpret
and display visual information from cameras, LiDAR, and other sensors, detecting road
signs, pedestrians, and other vehicles in real time.
1.3 Design and Drawing
Graphic design and drawing are two connected but distinct disciplines that create visual
content for communication, expression, and problem solving. While drawing traditionally uses
hand tools to make images on paper or canvas, graphic design combines traditional and digital
media to create graphics for multiple modes of communication. These graphics can include
logos, illustrations, advertisements, and websites, as well as posters and pamphlets.
1.3.1 Graphic Design
Graphic design is the art and practice of developing and executing ideas and experiences using
visual and textual content. Text, images, colors, and layout are used to produce visuals that
effectively transmit a message or idea.
Contrast creates visual interest and directs viewers' attention. It can be achieved through
variations in color, size, shape, or typography. Examples include light lettering on a dark
background or large bold headlines with smaller body text.
Balance refers to how visual elements are distributed in a design. It might be symmetrical
(evenly balanced) or asymmetrical (uneven but weight-balanced).
Examples include a centered logo with equally scattered parts and a dynamic layout with
asymmetrical alignment.
Alignment: Proper alignment connects each element in the design, producing an orderly and
coherent look. For example, aligning text to the left or right edge of an image or graphic
improves clarity.
Hierarchy: Use hierarchy to direct the viewer's attention to key parts such as headlines,
subheadings, and body content. For example, use a larger font size for titles and a smaller
size for supporting material.
Repetition reinforces visual elements and promotes unity in design. Repeating colors,
shapes, or patterns establishes consistency.
Proximity: Group related elements together to facilitate viewer comprehension, for example
by grouping relevant text and photos in a brochure.
White space, often called negative space, refers to vacant areas in a design. White space
promotes readability and visual appeal while also allowing the design to "breathe."
a) Branding and Logo Design: Branding creates a visual identity for a firm or product,
including logos, color schemes, typography, and design language.
Logo design involves producing a unique symbol or wordmark to represent a business or
product.
b) Web Design: Designing the layout, structure, and presentation of websites. It
focuses on user experience (UX) and user interface (UI) design, ensuring the site is
visually appealing, easy to navigate, and responsive across all devices.
Design elements include wireframes, typography, color palettes, icons, and responsive
layouts.
c) Advertising Design: Graphic designers develop ads for print (magazines, billboards) and
digital (banners, social media ads) channels. The goal is to capture the viewer's
attention while effectively communicating a commercial message.
d) Packaging Design: This area involves creating product packaging, such as labels, boxes,
and containers. The package should be functional, visually appealing, and reflect the
brand's identity.
e) Editorial Design: Creating layouts for newspapers, magazines, books, and other
media, arranging text, graphics, and other materials in an eye-catching and easy-
to-read fashion.
f) Motion Graphics: Motion graphics is the development of animated graphics, videos, and
visual effects. It appears in advertising, explainer videos, and title sequences for movies
and television shows.
g) UI/UX Design
UI (User Interface) Design involves the visual layout of interactive elements such as
buttons, sliders, and icons in websites or apps.
UX (User Experience) Design aims to provide a user-friendly and gratifying experience.
1.3.3 Drawing
Drawing is the process of producing marks on a surface (such as paper, canvas, or screen) to
produce visual representations, which is frequently used for artistic expression or sketching ideas
for design projects.
a) Types of Drawing
a) Sketching: A rapid, crude drawing to record ideas, proportions, or layouts. Sketches are
typically used as preparatory works for more complex drawings or ideas.
Tools include pencils, charcoal, and digital drawing tablets.
b) Figure Drawing: Focuses on drawing the human body, whether in motion or fixed stance.
Artists study anatomy, dimensions, and posture to create accurate depictions of the body.
Often practiced with life drawings or photographs.
c) Illustrations: Drawings or paintings used to clarify or enhance a concept, product, or story.
This style of drawing is used in a variety of media, including novels, comics, and
advertisements. Tools include pencils, ink pens, markers, and digital tools such as Photoshop
or Procreate.
d) Portrait Drawing: A focused form of drawing that captures the likeness, personality, and
mood of a subject, usually a person.
Techniques include accurately portraying facial features, expressions, and proportions.
e) Landscape Drawing: Drawing natural settings such as mountains, woods, oceans, and cities.
Artists focus on perspective, light, and environmental aspects. Tools include graphite,
charcoal, colored pencils, and pastels.
f) Still Life Drawing: Focuses on inanimate objects such as fruits, flowers, and everyday goods.
This practice improves observation skills and the ability to precisely render textures,
shadows, and reflections.
b) Drawing Tools
(i) Traditional tools:
Pencils are used for drawing, shading, and detailing. Different grades of pencils (for
example, 2B, 4H) provide varying degrees of hardness and darkness.
Charcoal is ideal for creating expressive and dramatic artwork with its rich, dark lines
and shading.
Pens and ink are commonly used for intricate sketches and illustrations, typically
alongside pencil drawings.
Watercolors are used to add color and texture, creating transparent effects.
Pastels are soft drawing tools used to combine and create brilliant colors.
(ii) Digital tools:
Wacom tablets and iPads with Apple Pencil provide digital drawing and illustration.
These tools mimic the sensation of hand drawing while providing the freedom of digital
editing.
Procreate is a popular iOS app for sketching, drawing, and painting.
Clip Studio Paint is a versatile drawing program used by illustrators and comic artists.
Adobe Fresco is a drawing and painting tool that supports both raster and vector
brushes.
c) Relationship Between Graphic Design and Drawing
While drawing focuses on creative thinking and unstructured expression, graphic design
combines drawing skills with other components such as typography, colors, and layout to
produce images that convey a message or serve a specific purpose. Many graphic designers start
with sketches or drawings to conceptualize their ideas before improving them digitally.
For example:
A logo designer may sketch numerous logo concepts before vectorizing them in software
such as Adobe Illustrator.
Web designers may use pencil-and-paper wireframes to outline website layouts before
producing UI/UX designs in tools such as Sketch or Figma.
1.4 Graphic Devices
Graphic devices are hardware and software tools used to create, manipulate, display, or interact
with graphics and visual content. These devices are essential for designing graphics, digital art,
video editing, gaming, and numerous other multimedia applications.
1.4.1 Cathode Ray Tube
A cathode-ray tube (CRT) is a vacuum tube that displays a trace on a fluorescent screen when an
electron beam is deflected by applied electric or magnetic fields. The cathode ray tube converts
an electrical signal into a visual image. Cathode rays, or streams of electrons, are simple to
produce: electrons orbit every atom and can be made to flow from one atom to the next by an
electric current.
A cathode ray tube uses an electric field to accelerate electrons from one end to the other. When
the electrons reach the far end of the tube, they release all of the energy they carried due to their
speed, which is converted to other forms such as heat. A modest amount of energy is converted
into X-rays. The cathode ray tube (CRT), created in 1897 by German physicist Karl Ferdinand
Braun, is a sealed glass envelope housing an electron gun (a source of electrons) and a
fluorescent screen, typically with internal or external mechanisms to accelerate and deflect the
electron beam. Electrons striking the fluorescent screen produce light. The beam is deflected
and modulated so that an image can be displayed on the screen. The image may depict electrical
waveforms (oscilloscope), pictures (television, computer monitor), echoes of radar-detected
aircraft, and so forth. A monochrome CRT uses a single electron beam; color CRTs combine
three beams (red, green, and blue) to display moving images in natural color.
1.4.2 Direct View Storage Tube
Direct View Storage Tube (DVST) is an early type of display technology used in computers
and graphics systems, particularly from the 1950s to the early 1970s. It is a type of electronic
display that records images directly on the screen. The DVST represented a significant
technological leap at the time, since it combined graphic display and storage in a single unit.
The Direct View Storage Tube uses an electron beam to draw and store images on a
phosphorescent surface. Here is a breakdown of how it operates:
Electron Beam: The tube works similarly to a CRT, with an electron gun firing electrons at a
phosphor-coated screen.
Image Drawing: Electromagnetic fields drive electron beams to scan and "draw" images on
phosphor surfaces. This surface would generate light in reaction to the electrons, producing a
visual image.
Storage Mechanism: Unlike a conventional refresh CRT, the DVST does not redraw the image
continuously. A storage grid behind the screen holds the written pattern as a charge, and a
steady flood of low-energy electrons keeps the stored image glowing until the screen is
deliberately erased.
The system may "write" and "read" images to the screen for processing. The saved image
could be viewed and manipulated in a variety of ways, which was a novel feature for the
time.
The DVST was utilized in several early computer graphics systems, mostly for science and
military applications. Some of its applications include:
Early computers and graphics workstations used DVSTs to show graphical data. One of the
most noteworthy systems was MIT's Whirlwind computer, which used a DVST for dynamic
real-time graphics in the 1950s.
The DVST's capacity to store and display images led to its employment in military and
aerospace applications such as radar or sonar data storage and graphical simulation.
Researchers in fields including physics, biology, and engineering used DVSTs to display
complex visual data such as particle tracks and graphs.
The DVST's storage capability was a crucial advantage over conventional refresh CRTs,
enabling flicker-free, persistent display of complex images.
Interactive Display: This technology enabled users to manipulate and observe graphics in real
time.
DVSTs saved cost and space by combining storage and display in a single device,
eliminating the need for separate media (e.g., punch cards or tapes) to hold graphical
data.
• Resolution: DVSTs had lower resolution than modern displays. Image persistence was also
not always reliable, since stored images faded over time.
• Early DVSTs had poor color and detail capabilities, compared to modern monitors and
graphic systems.
• DVSTs were unsuitable for commercial applications due to their size and high cost. They
were mostly used for specialized purposes such as defense and scientific research.
1.4.3 Light Pen
A light pen is a type of input device that detects light generated by a computer screen. It was one
of the first methods for interacting with on-screen graphics and was used in graphic design,
drawing, and precise screen manipulation.
Light Detection: The light pen has a photoelectric (light-sensitive) sensor at its tip. When the
user touches the screen with the light pen, the sensor detects the light emitted by the CRT
display.
Position Sensing: A CRT display constantly refreshes the screen with a scanning electron
beam. The light pen detects the instant the beam passes the point on the screen the pen is
touching. Because the beam's position at every moment of the refresh cycle is known, the
screen coordinates (x, y) can be computed from the time at which the pen's sensor fires.
Input: The pen delivers positional information to the computer, enabling user interaction such
as choosing objects, drawing, and modifying graphics on the screen.
Feedback: Depending on the software and system setup, users received immediate visual
feedback on the display as they moved the light pen, similar to a mouse or stylus today.
Early graphic design and drawing apps utilized light pens to create digital artwork. Artists
could sketch directly on the screen, just as they would with a pen on paper. For example, in
CAD (Computer Aided Design) software, the light pen was used to pick objects and draw
lines.
Light pens were utilized for straightforward data entry, including selecting options from
graphical menus and marking precise locations on charts or maps.
Early interactive systems, such as interactive whiteboards and instructional software, utilized
light pens to allow users to choose and modify elements on the screen.
Early arcade and console games used light pens to fire targets and interact with game
interfaces.
Medical Imaging: Light pens were utilized to interact with digitized images and radiographic
data on computer screens.
Direct interaction: Pointing the pen directly at the screen is more intuitive than using a
mouse or keyboard.
The light pen provides exceptional precision while sketching or selecting things on a screen,
especially for fine detail.
Speed: Light pens can be faster than mice for certain operations including selecting, drawing,
and tracing.
Limited to CRT displays: The light pen was designed for CRT monitors, which are no longer
common in computing. Modern LCD and LED panels do not scan the screen with an electron
beam, so the timing-based detection that a light pen relies on does not work with them.
Fatiguing over time: Using a light pen requires holding the arm up to the screen and fine
hand-eye coordination, which can become tiring or uncomfortable after extended use.
Limited Precision: The light pen is not as precise as modern styluses or digital tablets,
especially for intricate drawing or design work.
Limited Software Support: Light pen technology is not as widely supported as other input
devices such as mice or touchscreens in software applications.
1.5 C Graphics Basics
Installation: Use Turbo C or Borland C++ (older IDEs) to include the graphics.h library. If
you are using a modern IDE (such as Code::Blocks or Dev-C++), you may need to install a
compatible graphics library, or use another library such as SDL or OpenGL.
Initializing Graphics Mode: Before any drawing can occur in C, the graphics mode must be
initialized. This is accomplished via the initgraph() function.
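A minimal Turbo C-style skeleton might look as follows. Note this is a sketch for the legacy Borland environment only: graphics.h will not compile on modern toolchains without a port such as WinBGIm, and the BGI driver path "C:\TC\BGI" is an assumed typical installation path.

```
#include <graphics.h>   /* Borland BGI graphics library (legacy) */
#include <conio.h>      /* getch() */

int main(void)
{
    /* DETECT asks initgraph() to autodetect a suitable graphics driver. */
    int gd = DETECT, gm;

    /* Third argument: directory containing the BGI driver files
     * (assumed typical Turbo C path; adjust for your installation). */
    initgraph(&gd, &gm, "C:\\TC\\BGI");

    /* ... drawing calls such as line(), circle(), rectangle() go here ... */

    getch();        /* wait for a key press so the drawing stays visible */
    closegraph();   /* unload the driver and restore text mode */
    return 0;
}
```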
Drawing a Line
line(x1, y1, x2, y2);
In the above function, (x1, y1) and (x2, y2) are the coordinates of the starting and ending points
of the line.
Drawing a Circle
circle(x, y, radius);
In the above function, (x, y) is the center of the circle and radius is the radius of the circle.
Drawing a Rectangle
rectangle(x1, y1, x2, y2);
In the above function, (x1, y1) and (x2, y2) are the top-left and bottom-right coordinates of the
rectangle.