
PRINTED CIRCUIT BOARD

A printed circuit board (PCB), also known as a printed wiring board (PWB), connects
electronic components in a circuit. It features a laminated structure of conductive and
insulating layers. Conductive layers, typically made of copper, are etched with patterns of
traces, planes, and other features similar to wires on a flat surface. These layers are laminated
onto and/or between non-conductive substrates.

Component Mounting and Connections

Electrical components are attached to conductive pads on the PCB's outer layers, typically
through soldering. This process ensures both electrical connection and mechanical fastening.
Vias, or plated-through holes, allow for interconnections between different layers.

Advantages of PCBs

PCBs are essential in nearly all electronic products. While older methods like wire wrap and
point-to-point construction are rarely used now, PCBs are preferred for their ability to
automate manufacturing and assembly. Electronic design automation software aids in the
layout process, making mass production cheaper and faster. PCBs enable simultaneous
mounting and wiring of components, and multiple PCBs can be fabricated at once, requiring
only a single layout design.

Types of PCBs

 Single-Sided: One copper layer.
 Double-Sided: Two copper layers, one on each side of a substrate.
 Multi-Layer: Multiple layers of copper and substrate, allowing higher component density. Inner layers save surface space for components.

The popularity of multi-layer PCBs increased with the adoption of surface mount technology,
though they complicate repair and analysis.

Market Growth

In 2014, the global market for bare PCBs was over $60.2 billion and is projected to reach $79
billion by 2024.

Basic Structure

A basic PCB comprises an insulating material sheet and a copper foil layer laminated to the
substrate. Chemical etching divides the copper into conductive lines called tracks or circuit
traces, connection pads, vias for inter-layer connections, and solid conductive areas for
shielding. These tracks act as fixed wires, insulated by air and the board's substrate material.
The PCB surface may have a coating called solder resist or solder mask to protect the copper
from corrosion and prevent solder shorts.

Etching Process
The pattern to be etched into each copper layer is known as the "artwork." Etching typically
involves using photoresist, which is coated onto the PCB and exposed to light in the artwork's
pattern. The resist material shields the copper during etching. After etching, the board is
cleaned. This process allows for mass reproduction of PCB designs, akin to duplicating
photographs from film negatives.

Substrate Materials

The most common insulating substrate is FR-4 glass epoxy. Another option is cotton paper
impregnated with phenolic resin, often tan or brown.

Terminology

When a PCB has no components installed, it is sometimes called a printed wiring board
(PWB) or etched wiring board, though these terms are less common today. A PCB with
installed components is known as a printed circuit assembly (PCA), PCB assembly (PCBA),
or informally, a "card." The IPC preferred term for an assembled board is circuit card
assembly (CCA), and for an assembled backplane, it is a backplane assembly.

Legend Printing

A PCB may feature a legend identifying components, test points, or other information.
Initially done with silkscreen printing, today's methods use finer quality printing techniques.
The legend typically does not affect the PCB's functionality.

Layer Structure

PCBs can have multiple copper layers, usually arranged in pairs. The number of layers and
their interconnections, such as vias and plated-through holes (PTHs), indicate the board's
complexity. More layers provide better routing options and signal integrity but increase
manufacturing time and cost.

Vias and Complexity

The choice of vias impacts board size, signal routing, and reliability but also adds to
production complexity and expense. Simple two-layer boards, with copper on both sides
(external layers), are easier to produce. Multi-layer boards add internal copper and insulation
layers, offering advanced routing options and better performance for signal integrity, high-
frequency signaling, and power supply decoupling.

Two-Layer and Four-Layer Boards

Two-layer boards are the simplest, featuring copper on both sides. Four-layer boards, the next
step up, offer significantly more routing options with internal layers often used for ground or
power planes, enhancing signal integrity, reducing electromagnetic interference (EMI), and
improving power decoupling.

Multi-Layer Construction
In multi-layer PCBs, materials are laminated in an alternating sandwich structure: copper,
substrate, copper, substrate, etc. Each copper plane is etched, and internal vias are plated-
through before lamination. Only the outer layers need coating, as inner copper layers are
protected by adjacent substrate layers.

Component Mounting

Through-Hole Components: These are mounted by inserting their wire leads through holes
in the PCB and soldering them to traces on the opposite side. This method, once common, is
now used primarily for large components like electrolytic capacitors and connectors.
Through-hole technology requires precise drilling of holes, adding to manufacturing costs
and limiting routing area on multi-layer boards.

Surface Mount Components: These components have small metal tabs or end caps that are
soldered directly to the copper traces on the same side of the board. Surface mounting is
widely used for smaller components such as transistors, diodes, IC chips, resistors, and
capacitors. It allows for higher circuit density and is more cost-effective, as it supports
automation and reduces labor costs.

Evolution of Mounting Techniques

Through-Hole Technology: Initially, PCBs used through-hole technology, with components soldered on one or both sides. This method requires manual or wave soldering and is more expensive due to the need for many drilled holes, which also reduce available routing space.

Surface Mount Technology (SMT): Emerging in the 1960s and gaining widespread use by
the 1990s, SMT involves soldering components directly onto the PCB surface. This technique
allows for smaller components, higher circuit density, and the use of both sides of the board.
SMT is ideal for automation, significantly increasing production rates and reducing costs.
Surface mount components can be as small as one-quarter to one-tenth the size and weight of
their through-hole counterparts.

Benefits and Drawbacks

Benefits of SMT:

 Smaller and lighter components.
 Higher circuit densities.
 Reduced labor costs due to automation.
 Higher production rates.

Drawbacks of SMT:

 Certain large components still require through-hole mounting.
 Some wire-ended components are cheaper than their SMT equivalents.

Historical Context
Surface mounting became popular in the 1980s and 1990s, leading to a shift away from
through-hole technology. Components were redesigned for SMT, making them smaller and
easier to place on both sides of the PCB, which facilitated higher circuit densities and smaller
PCB assemblies.

Electrical Properties of PCB Traces

Trace Characteristics: Each trace on a PCB is a flat, narrow strip of copper foil left after etching. The resistance of a trace is determined by its width, thickness, and length, and it must be low enough to carry the required current. Power and ground traces are typically wider than signal traces.
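As a rough illustration of how width, thickness, and length set trace resistance, here is a minimal sketch; the copper resistivity is a standard figure, but the trace dimensions are illustrative values, not taken from the text:

```python
# DC resistance of a rectangular copper trace: R = rho * L / (w * t).
RHO_CU = 1.68e-8  # resistivity of copper at room temperature, ohm*m

def trace_resistance(length_m, width_m, thickness_m):
    """Resistance in ohms of a trace with the given dimensions (meters)."""
    return RHO_CU * length_m / (width_m * thickness_m)

# A 10 cm trace, 0.25 mm wide, in 1 oz copper (~35 um thick):
r = trace_resistance(0.10, 0.25e-3, 35e-6)
print(f"{r * 1000:.0f} milliohms")
```

Doubling the width halves the resistance, which is one reason power and ground traces are made wider than signal traces.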

Ground Planes and Shielding: In multi-layer PCBs, an entire layer can be solid copper, acting as a ground plane for shielding and power return. This design helps reduce noise and improves overall performance.

Microwave and RF Circuits: For microwave circuits, traces are laid out as transmission lines in planar forms like stripline or microstrip, with precise dimensions to ensure consistent impedance. In radio-frequency (RF) and fast-switching circuits, the inductance and capacitance of PCB traces become significant and are usually undesirable. However, these properties can be deliberately used in designing distributed-element filters, antennas, and fuses, eliminating the need for additional components.
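To see how "precise dimensions" fix the impedance of a microstrip trace, here is a sketch using one widely quoted closed-form approximation (the IPC-2141 microstrip formula); the FR-4 geometry below is a hypothetical example:

```python
import math

def microstrip_z0(er, h, w, t):
    """Approximate characteristic impedance (ohms) of a surface microstrip.
    er: substrate dielectric constant; h: dielectric height; w: trace width;
    t: trace thickness (h, w, t in the same units)."""
    return (87.0 / math.sqrt(er + 1.41)) * math.log(5.98 * h / (0.8 * w + t))

# FR-4 (er ~ 4.4), 0.2 mm dielectric, 0.35 mm wide trace, 35 um copper:
z0 = microstrip_z0(4.4, 0.2, 0.35, 0.035)
print(f"{z0:.1f} ohms")  # close to the common 50-ohm target
```

The approximation is only considered reliable for moderate width-to-height ratios; controlled-impedance boards are normally verified with a field solver or by the fabricator.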

High-Density Interconnects (HDI): HDI PCBs feature very fine tracks and vias, with widths or diameters under 152 micrometers, allowing for greater circuit density and complexity.

PCB Laminates

Manufacturing Process: Laminates are created by curing layers of cloth or paper with
thermoset resin under heat and pressure, forming uniform pieces that can be up to 4 by 8 feet
in size. Various cloth weaves, thicknesses, and resin percentages achieve the desired
thickness and dielectric properties.

Material Composition: The type designation of a laminate (e.g., FR-4, CEM-1, G-10) is determined by the cloth or fiber material, resin material, and cloth-to-resin ratio. Key characteristics include fire retardancy, dielectric constant (εr), loss tangent (tan δ), tensile and shear strength, glass transition temperature (Tg), and Z-axis expansion coefficient.

Dielectrics and Pre-Pregs: Different dielectrics are used based on circuit requirements.
Common materials include:

 FR-4: Woven glass and epoxy (most common)
 FR-1, CEM-1, CEM-3: Various cotton paper and epoxy combinations
 Teflon (PTFE): Polytetrafluoroethylene for specialized needs
 G-10, FR-5: Woven glass and epoxy with enhanced properties

Pre-preg materials in the PCB industry vary widely, each with specific applications and
properties.

Thermal Expansion: Particularly important in high-precision technologies like ball grid arrays (BGA) and naked die applications, glass fiber offers excellent dimensional stability, minimizing thermal expansion.

Copper-Clad Laminate: The base material with unetched copper is known as copper-clad
laminate.

Importance of Material Homogeneity

As PCB features shrink and frequencies increase, even minor nonhomogeneities (e.g., uneven
fiberglass distribution, thickness variations, resin bubbles) become significant. These
imperfections can cause local variations in the dielectric constant, affecting performance.

Common Laminate Types

 FR-2: Phenolic cotton paper
 FR-3: Cotton paper and epoxy
 FR-4: Woven glass and epoxy (most used)
 FR-5: Woven glass and epoxy with enhanced properties
 FR-6: Matte glass and polyester
 G-10: Woven glass and epoxy
 CEM-1 to CEM-5: Various combinations of cotton paper, non-woven glass, and
epoxy/polyester

Standard Thickness

Standard laminate thicknesses are specified in ANSI/IPC-D-275, accommodating various design and application needs.

PCB Substrate Materials

Composition: PCB substrates are typically dielectric composite materials consisting of an epoxy resin matrix reinforced with woven or non-woven glass fibers, or sometimes paper.
Fillers, such as ceramics, may be added to the resin to enhance properties like dielectric
constant.

Reinforcement Types:

 Woven Reinforcements: These are cheaper but may introduce local variations in
electrical parameters due to the high dielectric constant of glass and the non-uniform
resin/glass ratio.
 Non-Woven Reinforcements: These are more expensive but better suited for RF and analog applications due to their uniform structure and lower dielectric constant.

Key Parameters:

1. Thermomechanical Properties:
o Glass Transition Temperature (Tg): The temperature at which the resin
softens, leading to increased thermal expansion. High Tg materials are
essential for thick boards to prevent mechanical overload and failure.
o Tensile Strength and Shear Strength: These indicate the board's ability to
withstand mechanical stresses.
o Thermal Expansion: The expansion of the material under heat, which can
stress the plated-through holes and other components.
2. Electrical Properties:
o Dielectric Constant (εr): Determines signal propagation speed and affects
phase distortion in wideband applications. It typically decreases with
increasing frequency.
o Dielectric Breakdown Voltage: The maximum voltage the material can
withstand before failing.
o Tracking Resistance: The material's resistance to high voltage discharges
along its surface.
o Loss Tangent (tan δ): Indicates how much electromagnetic energy is
absorbed by the board material. Low-loss materials are crucial for high-
frequency applications but are more expensive.
3. Other Properties:
o Moisture Absorption: The tendency of the material to absorb water, which
can degrade electrical properties and cause physical damage like cracking or
delamination during soldering. FR-4 has low moisture absorption (0.15%),
while Teflon has very low absorption (0.01%).
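The link between dielectric constant and signal propagation speed noted above can be made concrete with a short sketch; the εr values are typical figures, not from any datasheet, and a real microstrip is faster because part of its field travels in air:

```python
# Signal speed in a homogeneous dielectric: v = c / sqrt(er).
C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_speed(er):
    """Propagation speed in m/s for a given relative permittivity."""
    return C / er ** 0.5

for name, er in [("FR-4", 4.4), ("PTFE", 2.1)]:
    v = propagation_speed(er)
    print(f"{name}: {v / 1e8:.2f} x 10^8 m/s ({v * 1e-6:.0f} mm/ns)")
```

A lower εr, as in PTFE, means faster propagation and less phase distortion across a wide band, which is why low-permittivity materials are favored at high frequencies.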

Material Choices:

 FR-4: The most common material, offering a good balance of properties for most
applications.
 Teflon (PTFE): Used for specialized high-frequency applications due to its low
moisture absorption and stable dielectric properties.
 Polyimides and Cyanate Esters: High-performance materials but with higher
moisture absorption, which can degrade performance under humid conditions.

Considerations for High-Frequency Applications

As PCB features become smaller and operating frequencies increase, the uniformity of the
substrate material becomes critical. Variations such as uneven fiberglass distribution or resin
bubbles can significantly impact electrical performance. The choice of materials and their
properties must be carefully matched to the application's requirements to ensure optimal
performance and reliability.

PCB Manufacturing Process

Fabrication Data and CAM:


1. Input Fabrication Data: Generated by computer-aided design (CAD) and
component information.
2. Verification: Ensures data accuracy.
3. Compensation: Adjustments for manufacturing deviations (e.g., scaling for
lamination distortions).
4. Panelization: Arranging multiple PCBs on a single panel for efficient manufacturing.
5. Output Digital Tools: Producing copper patterns, drill files, and inspection protocols.

Historical Design Methods: Initially, PCBs were designed manually using a photomask on a
mylar sheet, often at a larger scale. Component pin pads were placed, and traces were routed
using self-adhesive tape and dry transfers for common footprints. These designs were
photolithographically transferred onto photoresist-coated copper-clad boards.

Modern Design Steps:

1. Schematic Capture: Using an electronic design automation (EDA) tool.
2. Card Dimensions and Template: Based on required circuitry and enclosure.
3. Component and Heat Sink Placement: Determining optimal positions.
4. Layer Stack Determination: Deciding the number of layers (from one to tens),
ground and power planes. Ground planes act as AC signal grounds, while power
planes provide DC power.
5. Line Impedance Calculation: Considering dielectric layer thickness, copper
thickness, trace width, and separation for differential signals. Techniques like
microstrip, stripline, or dual stripline are used.
6. Component Placement: Considering thermal management and geometry. Vias and
lands are marked.
7. Signal Tracing: Routing signal traces. EDA tools automatically create clearances and
connections in power and ground planes.
8. Fabrication Data Output: Generating a set of Gerber files, a drill file, and a pick-
and-place file for manufacturing.

Key Considerations

Copper Thickness and Trace Design: Each trace is a narrow part of the remaining copper
foil after etching. Its resistance is determined by width, thickness, and length. Power and
ground traces may be wider than signal traces. Multi-layer boards may use entire layers as
ground planes for shielding and power return.

Electrical Properties:

 Impedance Control: Essential for high-frequency applications, achieved through precise trace dimensions.
 Distributed-Element Circuits: Inductance and capacitance of PCB conductors can
be utilized in circuit design for elements like filters and antennas.

Thermal and Mechanical Considerations:

 Thermal Expansion: Particularly important for high-reliability applications. Materials like FR-4 offer low expansion, minimizing stress on components and vias.
 Moisture Absorption: Materials like Teflon have very low moisture absorption,
while others like polyimides have higher absorption, affecting performance under
humid conditions.

Laminate Types and Properties

FR-4: The most common material, offering a good balance of electrical and mechanical properties.

High-Frequency Materials: Teflon and other specialized materials are used for applications requiring stable dielectric properties at high frequencies.

Fabrication Challenges

High Density Interconnects (HDI): HDI PCBs have finer traces and vias, requiring
advanced manufacturing techniques to maintain precision and reliability.

Design Automation: Modern PCBs leverage advanced EDA tools to automate many design
steps, reducing errors and increasing efficiency.

This concise overview covers the key aspects of PCB design and manufacturing, from initial
data input through modern design techniques and considerations for material properties and
electrical performance.

A graphics card, often referred to informally by the name of its main chip, the GPU (graphics processing unit), is hardware designed to enhance a computer's video memory and display quality, elevating its performance and enabling demanding tasks like gaming and video editing. It plays a pivotal role in rendering lifelike visuals on a display device, such as a monitor, by generating a feed of graphics output.

The GPU, often referred to as the brain of the graphics card, is the primary component
responsible for creating the stunning visuals we see on our screens. It comes in various
models, each with its own level of power and capabilities.

Graphics cards are typically computer expansion cards, housed in the form of printed circuit
boards (expansion boards) that are inserted into expansion slots on the motherboard.
However, some graphics cards may come in dedicated enclosures, connecting to the
computer via docking stations or cables, known as external GPUs (eGPUs).

One of the distinguishing features of a graphics card is its ability to offload graphics
processing tasks from the central processing unit (CPU), thereby enhancing overall system
performance. This allows for smoother gameplay, faster video rendering, and improved
graphical fidelity.

In addition to display output, modern graphics cards can also be utilized for additional
processing tasks, reducing the workload on the CPU. Platforms like OpenCL and CUDA
enable the use of graphics cards for general-purpose computing applications, such as AI
training, cryptocurrency mining, and molecular simulation.

Overall, graphics cards are preferred over integrated graphics for their superior performance
and ability to handle demanding graphical tasks effectively. They are essential components
for anyone seeking to maximize their computer's graphical capabilities and overall
performance.

Types of Graphics Card


1. Integrated – Graphics built into the motherboard are known as integrated graphics. They are used in most laptops and cannot be easily upgraded.
2. Discrete – A discrete graphics card is a separate piece of hardware added to the motherboard as an extra component. Most people do not need one for everyday PC work: basic tasks like creating files, office work, watching movies, or listening to songs do not require a dedicated card. Users who play high-resolution games or edit video, however, do need a discrete graphics card.
Components of Graphics Card
 GPU: Like the CPU, the GPU is a processing chip; it is the core of the card.
 Memory: Also referred to as VRAM, this is memory allocated specifically to support graphics operations.
 Interface: The majority of modern GPUs use a PCI Express connector, located along the card's bottom edge.
 Heat Sink: To dissipate the heat that accumulates during use, every GPU has a heat sink, usually paired with fans.
 Power Connectors: Modern GPUs require six- or eight-pin power connectors; in some cases, two or three are needed.
 Outputs: Cards provide a variety of video outputs, frequently including HDMI, DisplayPort, DVI, or VGA.
 BIOS: The BIOS stores the card's basic setup data, such as voltages, memory settings, and other parameters, retaining this information when the computer is turned off.
Features of Graphics Card
 Memory: A graphics card carries its own memory, ranging from 128 MB to 2 GB. A card with more memory is generally preferable: more RAM allows higher resolutions, more colors on screen, and better special effects.
 Multiple Screen Support: Most new video cards can connect two monitors to one card. This feature is important for video editing, and hardcore gamers also appreciate the extra screen real estate. You can either show two separate desktops or span a single desktop across both monitors.
 Gaming and Video Editing: A discrete graphics card is not only for gamers; users of high-end video editing software also benefit, as a high-quality graphics card reduces image rendering time and provides a high-definition working environment.
 Connection – The graphics card is connected to the monitor through one of several ports, but the port must be present on both the monitor and the graphics card. Common ports used to connect a graphics card to a monitor include:
 VGA
 HDMI
 DVI

 Some motherboards have more than one expansion slot, so more than one graphics card can be added to improve performance. Many laptops nowadays come with an integrated graphics card built in.
Manufacturers of Graphics Card
The two main manufacturers of discrete graphics card are:
 NVIDIA
 AMD

Integrated graphics serve as a viable alternative to dedicated graphics cards, with implementations integrated into the motherboard, CPU, or system-on-chip (SoC). Motherboard-based setups are often referred to as "on-board video", offering a cost-effective, compact, and energy-efficient solution.

While integrated graphics may offer advantages in terms of affordability, simplicity, and
energy consumption, they typically deliver lower performance compared to dedicated
graphics cards. This is because the graphics processing unit (GPU) in integrated graphics
shares system resources with the CPU, leading to potential performance limitations.

In contrast, dedicated graphics cards boast separate components such as dedicated random
access memory (RAM), cooling systems, and power regulators. This allows them to offload
graphics processing tasks from the CPU and system RAM, potentially improving overall
system performance, particularly in graphics-intensive applications like gaming, 3D
animation, and video editing.

Both AMD and Intel have introduced CPUs and motherboard chipsets supporting integrated
GPUs. AMD markets CPUs with integrated graphics as Accelerated Processing Units
(APUs), while Intel promotes similar technology under the "Intel Graphics Technology"
branding. These integrated solutions offer a balance between performance and affordability,
catering to a wide range of computing needs.

As graphics cards have advanced in processing power, their hunger for electrical power has
grown accordingly. Today's high-performance models demand substantial energy, with
examples like the GeForce Titan RTX boasting a thermal design power (TDP) of 280 watts,
while the GeForce RTX 2080 Ti Founder's Edition can consume an average of 300 watts
during video game testing. Despite strides in CPU and power supply efficiency, graphics
cards remain among the most power-intensive components in computers.

To meet these demands, modern graphics cards often feature six-pin (75 W) or eight-pin (150
W) power sockets directly connecting to the power supply. However, cooling these power-
hungry cards poses a challenge, especially in systems with multiple graphics cards requiring
power supplies exceeding 750 watts. Effective heat extraction becomes crucial in such setups.
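The connector wattages above imply a simple power budget per card. A minimal sketch, using the standard 75 W PCIe slot allocation; the connector mix in the example is hypothetical:

```python
# Power available to a card: 75 W from the PCIe slot plus aux connectors.
CONNECTOR_WATTS = {"6pin": 75, "8pin": 150}
SLOT_WATTS = 75

def card_power_budget(aux_connectors):
    """Total deliverable power in watts for the given list of aux connectors."""
    return SLOT_WATTS + sum(CONNECTOR_WATTS[c] for c in aux_connectors)

print(card_power_budget(["8pin", "8pin"]))  # 375
```

A card with two eight-pin sockets can therefore draw up to 375 W within specification, which is why the high-draw models described below need multiple connectors.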

The latest Nvidia GeForce RTX 30 series, using Ampere architecture, has pushed power
draw to new heights. Custom variants like the "Hall of Fame" RTX 3090 have been recorded
peaking at a staggering 630 watts, while standard RTX 3090 models can reach up to 450
watts. Even the RTX 3080 and 3070 draw significant power, up to 350 watts in the former's
case. Founders Edition cards employ a "dual axial flow through" cooler design, efficiently
expelling heat through fans positioned above and below the card.

Graphics cards come in various sizes to accommodate different computer builds. Some are
designated as "low profile," fitting smaller enclosures. These profiles primarily vary in
height, with low-profile cards occupying less than a standard PCIe slot's height. Length and
thickness also differ, with high-end models often spanning two or three expansion slots and
some, like the RTX 4090, exceeding 300mm in length. Opting for a lower profile card is
advisable when space is limited, although larger computer cases like mid-towers or full
towers can mitigate clearance issues.

MULTICARD SCALING

Multicard scaling, a feature offered by some graphics cards, allows users to distribute
graphics processing across multiple cards. This can be achieved through the PCIe bus on the
motherboard or, more commonly, via a data bridge. Typically, the linked cards must be of the
same model, as most low-end cards lack support for this feature. AMD and Nvidia offer their
proprietary scaling methods, with AMD's CrossFireX and Nvidia's SLI (since replaced by
NVLink in the Turing generation). It's important to note that cards from different
manufacturers or architectures cannot be combined for multicard scaling.

When using multiple cards, memory size compatibility is crucial. If cards have different
memory sizes, the system will utilize the lowest value, disregarding higher values. Presently,
consumer-grade setups can utilize up to four cards, necessitating a large motherboard with the
appropriate configuration. Nvidia's GeForce GTX 590 graphics card, for example, can be
configured in a four-card setup. To ensure optimal performance, users should use cards with
similar performance metrics.
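The memory-matching rule described above amounts to taking the minimum across the linked cards, as in this small sketch (the capacities are illustrative):

```python
def effective_memory_gb(card_memories_gb):
    """With mismatched cards, the usable per-card memory is the smallest
    value in the group; extra capacity on larger cards goes unused."""
    return min(card_memories_gb)

print(effective_memory_gb([11, 8, 11, 11]))  # 8
```

Here the three 11 GB cards each leave 3 GB unused, which is one reason matched cards are recommended.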

Motherboards certified for multicard configurations include models like the ASUS Maximus
3 Extreme and Gigabyte GA EX58 Extreme. Adequate power supply is essential, especially
for four-card setups, which may require a 1000+ watt supply. Effective thermal management
is crucial for powerful graphics cards, requiring well-ventilated chassis and robust cooling
solutions such as air or water cooling. Larger configurations often employ advanced cooling
methods like immersion cooling to prevent thermal throttling.

While SLI and Crossfire setups have become less common due to limited game support and
affordability constraints, they remain prevalent in specialized applications like
supercomputers, workstations for video rendering and 3D rendering, visual effects,
simulations, and AI training. Graphics drivers play a critical role in supporting multicard
setups, with specific driver versions tailored for different operating systems. Additionally,
certain operating systems or software packages offer programming APIs for applications to
perform 3D rendering efficiently.

Parts

A Radeon HD 7970 with the main heatsink removed, showing the major components of the card. The large, tilted silver object is the GPU die, which is surrounded by RAM chips, which are covered in extruded aluminum heatsinks. Power delivery circuitry is mounted next to the RAM, near the right side of the card.

A modern graphics card consists of a printed circuit board on which the components are mounted. These include:

Graphics processing unit

A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a
specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the
building of images in a frame buffer intended for output to a display. Because of the large degree
of programmable computational complexity for such a task, a modern graphics card is also a
computer unto itself.

A half-height graphics card

A heat sink is a vital component found on most modern graphics cards, serving to efficiently
dissipate the heat generated by the graphics processing unit (GPU). This heat sink is designed
to evenly spread out the heat across its surface and throughout the unit itself. Often, a fan is
mounted on the heat sink to facilitate cooling, ensuring that the GPU operates within optimal
temperature ranges. However, not all graphics cards feature heat sinks; some utilize
alternative cooling methods such as liquid cooling systems or water blocks. In the early days
of graphics cards, particularly in the 1980s and early 1990s, heat production was minimal,
and heat sinks were not necessary.

Effective thermal management is essential for modern graphics cards, especially those with
high-performance GPUs. Alongside heat sinks, advanced cooling solutions may incorporate
heat pipes, typically made of copper, to enhance thermal conductivity and heat dissipation.
These thermal solutions help prevent overheating and maintain stable performance during
intensive graphics processing tasks.

The video BIOS, or firmware, plays a crucial role in the operation of a graphics card. It
contains essential programming for initializing and controlling various aspects of the card,
including memory configuration, operating speeds, and voltage settings. However, modern
video BIOSes primarily focus on basic functions like identifying the card and initializing
display modes, while more advanced features such as video scaling and pixel processing are
handled by software drivers.

Graphics cards boast varying memory capacities, ranging from 2 to 24 GB, with some high-
end models offering up to 32 GB. This memory, often referred to as VRAM (Video Random
Access Memory), serves as dedicated storage for screen images, textures, vertex buffers, and
shader programs. Over the years, memory technology has evolved from DDR to advanced
versions like GDDR6, with corresponding increases in memory bandwidth and speed.
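The bandwidth increases mentioned here follow directly from bus width and data rate. A sketch; the 256-bit, 14 GT/s figures are a representative GDDR6 example, not tied to any specific card:

```python
def memory_bandwidth_gb_s(bus_width_bits, data_rate_gtps):
    """Peak bandwidth in GB/s: bus width in bytes times transfers per second
    (data_rate_gtps is the effective rate in gigatransfers per second)."""
    return bus_width_bits / 8 * data_rate_gtps

print(memory_bandwidth_gb_s(256, 14.0))  # 448.0
```

Widening the bus or raising the effective transfer rate scales peak bandwidth linearly, which is how each GDDR generation improved on the last.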

The RAMDAC (Random-Access-Memory Digital-to-Analog Converter) is another important component found in older graphics cards. It converts digital signals from the GPU into analog
signals for displays with analog inputs, such as CRT monitors. However, with the widespread
adoption of digital displays, the need for a separate RAMDAC has diminished, as most
modern displays utilize digital connections like HDMI or DisplayPort. As a result, newer
graphics cards often integrate the functionality of the RAMDAC directly into the GPU,
eliminating the need for a standalone component.

A Radeon HD 5850 with a DisplayPort, HDMI and two DVI ports

Output interfaces

Video-in video-out (VIVO) for S-Video (TV-out), Digital Visual Interface (DVI) for high-definition television (HDTV), and DE-15 for Video Graphics Array (VGA)
The most common connection systems between the graphics card and the computer display are:
Video Graphics Array (VGA) (DE-15)[edit]

Video Graphics Array (DE-15)


Main article: Video Graphics Array

Also known as D-sub, VGA is an analog standard adopted in the late 1980s and designed for CRT displays; the connector itself is also called the VGA connector. Today, the VGA analog interface is still used for high-definition video resolutions, including 1080p and higher. Problems with this standard include electrical noise, image distortion, and sampling error in evaluating pixels. While the VGA
transmission bandwidth is high enough to support even higher resolution playback, the picture
quality can degrade depending on cable quality and length. The extent of quality difference
depends on the individual's eyesight and the display; when using a DVI or HDMI connection,
especially on larger sized LCD/LED monitors or TVs, quality degradation, if present, is
prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface, if Image
Constraint Token (ICT) is not enabled on the Blu-ray disc.
Digital Visual Interface (DVI)

Digital Visual Interface (DVI-I)


Main article: Digital Visual Interface

Digital Visual Interface is a digital-based standard designed for displays such as flat-panel
displays (LCDs, plasma screens, wide high-definition television displays) and video projectors.
There were also some rare high-end CRT monitors that used DVI. DVI avoids image distortion and electrical noise by mapping each pixel from the computer to a display pixel, using the display's native resolution. It is worth noting that most manufacturers include a DVI-I connector, allowing (via a simple adapter) standard RGB signal output to an older CRT or LCD monitor with VGA input.
Video-in video-out (VIVO) for S-Video, composite video and component video

VIVO connector
Main article: Video-in video-out

These connectors are included to allow connection with televisions, DVD players, video
recorders and video game consoles. They often come in two 10-pin mini-DIN
connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-Video in and out plus composite video in and out) or 6 connectors (S-Video in and out, component YPbPr out, and composite in and out).
High-Definition Multimedia Interface (HDMI)

High-Definition Multimedia Interface


Main article: HDMI

HDMI is a compact audio/video interface for transferring uncompressed video data and
compressed/uncompressed digital audio data from an HDMI-compliant device ("the source
device") to a compatible digital audio device, computer monitor, video projector, or digital
television.[52] HDMI is a digital replacement for existing analog video standards. HDMI
supports copy protection through HDCP.
DisplayPort

DisplayPort
Main article: DisplayPort

DisplayPort is a digital display interface developed by the Video Electronics Standards Association (VESA). The interface is primarily used to connect a video source to a display device such as a computer monitor, though it can also be used to transmit audio, USB, and other forms of data.[53] The VESA specification is royalty-free. VESA designed it to replace VGA, DVI, and LVDS. Backward compatibility with VGA and DVI through adapter dongles enables consumers to use DisplayPort-equipped video sources without replacing existing display devices. Although DisplayPort offers much of the same functionality as HDMI with greater throughput, it is expected to complement that interface, not replace it.[54][55]
Motherboard interfaces
Main articles: Bus (computing) and Expansion card

ATI Graphics Solution Rev 3 from 1985/1986, supporting Hercules graphics. As can be seen from the PCB, the layout was done in 1985, whereas the marking on the central chip CW16800-A says "8639", meaning that chip was manufactured in week 39, 1986. This card uses the ISA 8-bit (XT) interface.

The chronological progression of connection systems between graphics cards and motherboards highlights the evolution of computer hardware and interconnectivity standards:
1. S-100 bus (1974): Introduced as part of the Altair 8800, the S-100 bus was the first
industry-standard bus for microcomputers.
2. ISA (1981): The Industry Standard Architecture, introduced by IBM, became
dominant in the 1980s. It was an 8- or 16-bit bus clocked at 8 MHz.
3. NuBus (1984): Used in Macintosh II computers, NuBus was a 32-bit bus with an
average bandwidth of 10 to 20 MB/s.
4. MCA (1987): IBM's Micro Channel Architecture, introduced in 1987, was a 32-bit
bus clocked at 10 MHz.
5. EISA (1988): Released to compete with IBM's MCA, the Extended Industry Standard
Architecture was compatible with ISA and operated as a 32-bit bus clocked at 8.33
MHz.
6. VLB (VESA Local Bus) (1992): An extension of ISA, VLB was a 32-bit bus clocked
at 33 MHz, providing faster data transfer.
7. PCI (Peripheral Component Interconnect) (1993): Replaced earlier bus standards,
including EISA, ISA, MCA, and VLB. PCI offered dynamic connectivity between
devices and operated as a 32-bit bus clocked at 33 MHz.
8. UPA (UltraPort Architecture) (1995): Introduced by Sun Microsystems, UPA was a
64-bit bus clocked at 67 or 83 MHz.
9. USB (Universal Serial Bus) (1996): Initially used for miscellaneous devices, USB
saw the introduction of USB displays and display adapters.
10. AGP (Accelerated Graphics Port) (1997): Dedicated to graphics, AGP was a 32-bit
bus clocked at 66 MHz, providing faster data transfer for graphics-intensive
applications.
11. PCI-X (PCI eXtended) (1998): An extension of the PCI bus, PCI-X increased the
bus width to 64 bits and clock frequency to up to 133 MHz, offering enhanced
performance.
12. PCI Express (PCIe) (2004): A point-to-point interface, PCIe offered significantly
faster data transfer rates compared to AGP. It became the standard for modern
graphics cards, providing improved performance and scalability.

This chronological overview demonstrates the steady advancement in graphics card


connectivity, driven by the demand for faster data transfer speeds and more efficient
communication between components.
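The peak theoretical throughput of the parallel buses listed above follows directly from bus width and clock rate: bytes per transfer times transfers per second. A minimal sketch (assuming one transfer per clock, which ignores protocol overhead and the multi-pumped transfers of later AGP modes, so real-world figures are lower):

```python
def peak_bandwidth_mb_s(width_bits: int, clock_mhz: float) -> float:
    """Peak theoretical bandwidth of a parallel bus: one transfer per clock."""
    return width_bits / 8 * clock_mhz  # bytes per transfer * MHz = MB/s

# Classic parallel buses from the list above (width in bits, clock in MHz)
buses = {
    "ISA (16-bit)": (16, 8),
    "VLB": (32, 33),
    "PCI": (32, 33),
    "AGP 1x": (32, 66),
    "PCI-X": (64, 133),
}

for name, (width, clock) in buses.items():
    print(f"{name}: {peak_bandwidth_mb_s(width, clock):.0f} MB/s")
```

The jump from PCI's roughly 132 MB/s to PCI-X's roughly 1 GB/s illustrates why each new bus generation was adopted for graphics.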
The central processing unit (CPU), often referred to simply as the processor, serves as the
primary computing engine in a computer system. Its intricate electronic circuitry is
responsible for executing the instructions of a computer program, encompassing tasks
ranging from arithmetic and logical operations to controlling input/output operations. This
pivotal role distinguishes the CPU from external components like main memory and
specialized coprocessors such as graphics processing units (GPUs).

While the form, design, and implementation of CPUs have evolved significantly over time,
their fundamental operation has remained largely consistent. Key components of a CPU
include the arithmetic–logic unit (ALU), responsible for performing arithmetic and logical
operations, processor registers for supplying operands to the ALU and storing operation
results, and a control unit that coordinates fetching, decoding, and executing instructions by
managing the ALU, registers, and other components. Modern CPUs allocate substantial
semiconductor area to features like caches and instruction-level parallelism to enhance
performance, along with supporting various CPU modes to accommodate operating systems
and virtualization.

Most contemporary CPUs are integrated onto microprocessor chips, with some chips
featuring multiple CPUs, known as multi-core processors. These individual physical CPUs,
or processor cores, may also support multithreading at the CPU level to further enhance
performance.

Furthermore, an integrated circuit (IC) containing a CPU may incorporate additional


components such as memory, peripheral interfaces, and more, forming microcontrollers or
systems on a chip (SoCs). This integration enhances efficiency and compactness in
computing devices.

A central processing unit (CPU) made by Intel: an Intel Core i9-14900K

Inside a central processing unit: the integrated circuit of Intel's Xeon 3060, first manufactured in 2006

Operation
The instruction cycle of a CPU is a fundamental process that drives the execution of
computer programs. Let's delve deeper into each stage:

1. Fetch: In this stage, the CPU retrieves the next instruction from memory based on the
address stored in the program counter (PC). The PC keeps track of the memory
address of the current instruction being executed. The fetched instruction is then
loaded into the instruction register (IR) within the CPU.
2. Decode: Once the instruction is fetched, the CPU decodes it to determine what
operation it needs to perform. This involves interpreting the opcode (operation code)
and any operands associated with the instruction. The decoding process prepares the
CPU for the next stage, where the actual operation will be executed.
3. Execute: In this stage, the CPU carries out the operation specified by the decoded
instruction. This may involve performing arithmetic or logical calculations, accessing
data from memory, or transferring data between different registers within the CPU.
The execution stage produces results or changes the state of the CPU and the system
as a whole.
4. Write Back: Some CPUs have an additional stage called "write back," where the
results of the executed instruction are written back to memory or stored in registers.
This stage completes the instruction cycle and prepares the CPU to fetch the next
instruction.

Throughout the instruction cycle, the program counter is updated to point to the next
instruction to be fetched, ensuring that the CPU continues to execute instructions in sequence.
Additionally, modern CPUs may employ optimizations such as pipelining, out-of-order
execution, and speculative execution to improve performance by overlapping the execution of
multiple instructions. These techniques further enhance the efficiency and throughput of the
CPU.
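The four stages above can be sketched as a toy interpreter. This is purely an illustration, not any real instruction set: the opcodes, the accumulator register, and the two-field encoding are invented for the example.

```python
# Toy CPU: invented (opcode, operand) instruction pairs, for illustration only.
LOAD, ADD, STORE, HALT = range(4)

def run(program, memory):
    pc, acc = 0, 0                       # program counter and accumulator
    while True:
        opcode, operand = program[pc]    # fetch the instruction at the PC
        pc += 1                          # PC now points at the next instruction
        if opcode == LOAD:               # decode + execute: load from memory
            acc = memory[operand]
        elif opcode == ADD:              # execute: arithmetic via the "ALU"
            acc += memory[operand]
        elif opcode == STORE:            # write back: store result to memory
            memory[operand] = acc
        elif opcode == HALT:
            return memory

mem = {0: 7, 1: 5, 2: 0}
run([(LOAD, 0), (ADD, 1), (STORE, 2), (HALT, 0)], mem)
print(mem[2])  # 7 + 5
```

Here the tuple unpacking plays the role of decoding, the if/elif chain the role of the control unit, and the final STORE the write-back stage.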

Fetch
Fetch involves retrieving an instruction (which is represented by a number or sequence of
numbers) from program memory. The instruction's location (address) in program memory is
determined by the program counter (PC; called the "instruction pointer" in Intel x86
microprocessors), which stores a number that identifies the address of the next instruction to be
fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so
that it will contain the address of the next instruction in the sequence. [d] Often, the instruction to
be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting
for the instruction to be returned. This issue is largely addressed in modern processors by
caches and pipeline architectures (see below).
Decode
Further information: Instruction set architecture § Instruction encoding

In the decode stage, the instruction fetched from memory is interpreted: decoder circuitry within the CPU separates the instruction into its opcode (operation code) and any associated operands, and converts them into control signals that configure the appropriate parts of the processor. The subsequent execute stage then carries out the operation specified by the decoded instruction, with various components within the CPU working together to perform arithmetic or logical operations, access memory, or manipulate data according to the instruction's requirements.

Here's a more detailed breakdown of the operations performed during execution:

1. Arithmetic Operations: If the instruction involves arithmetic operations such as addition, subtraction, multiplication, or division, the arithmetic logic unit (ALU) is responsible for executing these operations. The ALU performs the necessary calculations on the operands provided by the instruction.
2. Logical Operations: Instructions that involve logical operations like AND, OR,
NOT, or bitwise operations are executed by the ALU as well. These operations
manipulate binary data at the bit level according to the specified logic.
3. Memory Access: If the instruction requires accessing data from memory, the memory
management unit (MMU) coordinates the retrieval of data from the appropriate
memory location. This may involve fetching data from RAM, cache, or other storage
devices.
4. Control Flow Operations: Instructions that control the flow of program execution,
such as conditional branches or jumps, are executed by the control unit. The control
unit modifies the program counter (PC) to redirect the flow of execution based on the
outcome of the operation.
5. Data Movement: Instructions that involve moving data between registers, memory
locations, or I/O devices are executed by the data movement unit. This unit ensures
that data is transferred accurately and efficiently according to the instruction's
specifications.

During the execute stage, the CPU asserts control signals based on the decoded instruction, directing various components within the CPU to perform the necessary operations. Once the execution is complete, the CPU proceeds to the next stage of the instruction cycle or prepares to fetch the next instruction from memory.

Execute

Following the fetch and decode stages, the CPU proceeds to the execute step, where it
performs the actual operation specified by the instruction. This step can involve a single
action or a sequence of actions, depending on the CPU architecture. During each action,
control signals are activated or deactivated to enable various CPU components to execute the
operation. These actions are typically synchronized with clock pulses.

For instance, when executing an addition instruction, the CPU activates the registers holding
the operands and the relevant components of the arithmetic logic unit (ALU) responsible for
addition. As the clock pulse occurs, the operands are transferred from the source registers to
the ALU, where the addition operation takes place. The result, the sum, emerges at the output
of the ALU.

Subsequent clock pulses may activate additional components to store the output, such as
writing the sum to a register or main memory. If the result exceeds the capacity of the ALU's
output, triggering an arithmetic overflow, an overflow flag is set, impacting subsequent
operations. This orchestrated sequence of actions ensures the proper execution of instructions
and the handling of their outcomes within the CPU.

Structure and implementation


See also: Processor design

Block diagram of a basic uniprocessor-CPU computer. Black lines indicate data flow, whereas red lines indicate control flow; arrows indicate flow directions.

At the core of a CPU lies its instruction set, defining a set of fundamental operations it can
execute. These operations encompass tasks like arithmetic calculations, comparisons, and
program control. Each operation is encoded into a unique bit pattern, termed the machine
language opcode. During execution, the CPU interprets this opcode, typically through a
binary decoder, to generate control signals that dictate its behavior.

A machine language instruction comprises the opcode alongside optional bits specifying
operation arguments, such as operands for arithmetic operations. As complexity increases, a
machine language program emerges—a sequence of instructions the CPU processes. These
instructions reside in memory, fetched by the CPU as needed for execution.

Within the CPU's processor lies the arithmetic–logic unit (ALU), a combinational logic
circuit responsible for executing mathematical and logical operations specified by the
instructions. When processing an instruction, the CPU retrieves it from memory, utilizes the
ALU to perform the operation, and then stores the result back into memory.

Beyond basic arithmetic and logic, the instruction set encompasses a range of operations.
These include loading and storing data in memory, directing program flow through branching
operations, and handling floating-point arithmetic through the CPU's dedicated floating-point
unit (FPU). Together, these instructions facilitate the diverse array of tasks executed by a
CPU.
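An instruction word can be pictured as an opcode packed together with operand fields. The layout below, a 16-bit word holding a 4-bit opcode and two 6-bit register numbers, is invented for illustration; real instruction sets define their own formats.

```python
# Hypothetical 16-bit format: [4-bit opcode][6-bit src reg][6-bit dst reg]
def encode(opcode: int, src: int, dst: int) -> int:
    assert opcode < 16 and src < 64 and dst < 64
    return (opcode << 12) | (src << 6) | dst

def decode(word: int):
    """Split a 16-bit word back into its opcode and operand fields."""
    return (word >> 12) & 0xF, (word >> 6) & 0x3F, word & 0x3F

word = encode(0b0011, 5, 9)  # e.g. "ADD r5 -> r9" in this made-up format
print(decode(word))          # recovers (3, 5, 9)
```

The `decode` function is a software stand-in for the binary decoder circuitry described above.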

Control unit
Main article: Control unit
The control unit (CU) is a component of the CPU that directs the operation of the processor. It
tells the computer's memory, arithmetic and logic unit and input and output devices how to
respond to the instructions that have been sent to the processor.
It directs the operation of the other units by providing timing and control signals. Most computer
resources are managed by the CU. It directs the flow of data between the CPU and the other
devices. John von Neumann included the control unit as part of the von Neumann architecture. In
modern computer designs, the control unit is typically an internal part of the CPU with its overall
role and operation unchanged since its introduction.[69]
Arithmetic logic unit
Main article: Arithmetic logic unit

Symbolic representation of an ALU and its input and output signals
The arithmetic logic unit (ALU) is a digital circuit within the processor that performs integer
arithmetic and bitwise logic operations. The inputs to the ALU are the data words to be operated
on (called operands), status information from previous operations, and a code from the control
unit indicating which operation to perform. Depending on the instruction being executed, the
operands may come from internal CPU registers, external memory, or constants generated by
the ALU itself.
When all input signals have settled and propagated through the ALU circuitry, the result of the
performed operation appears at the ALU's outputs. The result consists of both a data word,
which may be stored in a register or memory, and status information that is typically stored in a
special, internal CPU register reserved for this purpose.
Modern CPUs typically contain more than one ALU to improve performance.
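The ALU interface described above, operands and an operation code in, a result word and status information out, can be modeled in a few lines. The operation names and the 8-bit word width are arbitrary choices for the example:

```python
def alu(op: str, a: int, b: int, width: int = 8):
    """Tiny ALU model: returns the result word plus status flags."""
    mask = (1 << width) - 1
    if op == "ADD":
        raw = a + b
    elif op == "SUB":
        raw = a - b
    elif op == "AND":
        raw = a & b
    elif op == "OR":
        raw = a | b
    else:
        raise ValueError(op)
    result = raw & mask                  # truncate to the data-word width
    flags = {
        "zero": result == 0,             # result was zero
        "carry": raw != result,          # result did not fit in the word
    }
    return result, flags

print(alu("ADD", 200, 100))  # 300 wraps to 44 in 8 bits; carry flag set
```

The flags dictionary plays the role of the special internal status register mentioned above.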
Address generation unit
Main article: Address generation unit

The Address Generation Unit (AGU), also referred to as the Address Computation Unit
(ACU), is a crucial component within a CPU, responsible for swiftly calculating memory
addresses essential for accessing main memory. By conducting these calculations in parallel
with other CPU tasks, the AGU optimizes performance by minimizing the number of CPU
cycles required for executing various instructions, thereby enhancing overall efficiency.

Here's a succinct breakdown of its significance:

1. Address Calculations: The AGU performs arithmetic operations to determine memory addresses swiftly, particularly during tasks such as accessing array elements.
2. Performance Optimization: By swiftly completing address calculations within a
single CPU cycle, the AGU significantly boosts execution speed.
3. Specialized Instructions: Some CPU architectures include instructions tailored to
leverage the AGU's capabilities, enabling quicker execution of memory-related tasks.
4. Multiple AGUs: Advanced CPU designs may incorporate multiple AGUs, allowing
for parallel execution of address-calculation operations and enhancing memory
subsystem bandwidth.
5. Architecture Dependency: AGU capabilities vary depending on CPU architecture,
with some supporting a broader range of operations for efficient memory access.

In essence, the AGU plays a pivotal role in CPU performance by efficiently handling address
calculations, minimizing memory access overhead, and facilitating the swift execution of
instructions.
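The kind of calculation an AGU performs, deriving the address of an array element from a base address, an index, and an element size, looks like this. This is a simplified software model; a real AGU performs it in dedicated hardware, typically within a single cycle:

```python
def element_address(base: int, index: int, elem_size: int) -> int:
    """base + index * size: the classic AGU-style address calculation."""
    return base + index * elem_size

# Address of the element at index 10 in an array of 4-byte integers at 0x1000
print(hex(element_address(0x1000, 10, 4)))
```

Offloading this multiply-and-add to a dedicated unit frees the ALU for the program's own arithmetic.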

Memory management unit (MMU)


Main article: Memory management unit
Many microprocessors (in smartphones and desktop, laptop, server computers) have a memory
management unit, translating logical addresses into physical RAM addresses, providing memory
protection and paging abilities, useful for virtual memory. Simpler processors,
especially microcontrollers, usually don't include an MMU.
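The logical-to-physical translation an MMU performs with paging can be sketched with a single-level page table. The 4 KiB page size and the table contents are assumptions for the example; real MMUs use multi-level tables and TLB caching:

```python
PAGE_SIZE = 4096  # 4 KiB pages (a common size, assumed here)

# Hypothetical page table: virtual page number -> physical frame number
page_table = {0: 5, 1: 2, 2: 7}

def translate(virtual_addr: int) -> int:
    vpn, offset = divmod(virtual_addr, PAGE_SIZE)  # split address into page and offset
    if vpn not in page_table:
        raise RuntimeError("page fault")           # the OS would handle this
    return page_table[vpn] * PAGE_SIZE + offset    # same offset, new frame

print(hex(translate(0x1234)))  # virtual page 1, offset 0x234 -> frame 2
```

The unmapped-page branch is what gives the operating system its hook for implementing virtual memory and memory protection.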
Cache
A CPU cache is a vital hardware component used by the central processing unit (CPU) to
speed up data access from the main memory. It stores frequently accessed data closer to the
processor core, reducing access time. Modern CPUs have multiple cache levels, including
instruction and data caches, organized hierarchically (L1, L2, L3, L4).

Key Points:

1. Cache Levels:
o L1 Cache: Closest to the CPU core, split into L1d (data) and L1i
(instructions).
o L2 Cache: Acts as a repository for L1 caches, each core has a dedicated L2
cache.
o L3 Cache: Shared among all cores, larger than L2.
o L4 Cache: Less common, often implemented on DRAM.
2. Characteristics:
o Speed and Proximity: Caches are faster and closer to the CPU than main
memory.
o Optimization: Each level is optimized differently.
o Splitting: Modern CPUs split the L1 cache for efficiency.
3. Evolution and Implementation:
o Early CPUs: Had single-level caches without splitting.
o Current CPUs: Almost all have multi-level caches, with split L1 and shared
L3.
o Integration: Multiple cache levels integrated on a single chip.
4. Specialized Caches:
o TLB: Part of MMU, crucial for virtual memory management.
5. Sizing:
o Power of Two: Cache sizes are typically in powers of two.
o Exceptions: Some designs have non-standard sizes.

Summary:

CPU caches enhance data access efficiency in modern processors. They consist of multiple
levels with specific roles, optimizing performance. The evolution from single to multi-level
caches reflects ongoing advancements in CPU design to meet performance demands.
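The mechanics of a cache lookup can be illustrated with a minimal direct-mapped model: a block address is split into an index (which line it maps to) and a tag (which block occupies that line), and an access hits only when the stored tag matches. The 4-line size is arbitrary for the example, and real caches store data alongside the tags:

```python
NUM_LINES = 4                  # tiny direct-mapped cache: 4 lines (arbitrary)
cache = [None] * NUM_LINES     # each line holds its stored tag (no data, for brevity)
hits = misses = 0

def access(block_addr: int):
    global hits, misses
    index = block_addr % NUM_LINES    # which line this block maps to
    tag = block_addr // NUM_LINES     # identifies the block within that line
    if cache[index] == tag:
        hits += 1
    else:
        misses += 1
        cache[index] = tag            # fill the line on a miss (evicting the old tag)

for addr in [0, 1, 0, 4, 0]:          # blocks 0 and 4 collide on line 0
    access(addr)
print(hits, misses)
```

The collision between blocks 0 and 4 shows why associativity and multiple cache levels matter for real workloads.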

Clock rate
Main article: Clock rate

Most CPUs operate synchronously, relying on a clock signal to regulate sequential operations. This clock signal, generated by an external oscillator circuit, provides a consistent rhythm of pulses, determining the CPU's execution rate. Essentially, faster clock pulses allow the CPU to process more instructions per second.

Synchronous Operation

 Clock Signal: The clock signal's period is set longer than the maximum signal
propagation time within the CPU, ensuring reliable data movement.
 Architecture: This approach simplifies CPU design by synchronizing data movement
with clock signal edges.
 Inefficiencies: Slower components dictate overall CPU speed, leading to
inefficiencies as some sections are faster.
 Challenges: High clock rates complicate signal synchronization and increase energy
consumption and heat dissipation.
 Techniques: Clock gating deactivates unnecessary components to reduce power
consumption. However, its complexities limit usage in mainstream designs. The IBM
PowerPC-based Xenon CPU in the Xbox 360 demonstrates effective clock gating.

Asynchronous (Clockless) CPUs

In contrast to synchronous CPUs, clockless CPUs operate without a central clock signal,
relying on asynchronous operations.

 Advantages: Reduced power consumption and improved performance.
 Challenges: Design complexity and limited widespread adoption.
 Examples: Notable designs include the ARM-compliant AMULET and the MIPS
R3000-compatible MiniMIPS.

Hybrid Designs

Some CPUs integrate asynchronous elements with synchronous components.

 Asynchronous ALUs: Used alongside superscalar pipelining to enhance arithmetic performance.
 Power Efficiency: Asynchronous designs are more power-efficient and have better
thermal properties, making them suitable for embedded computing applications.

Summary

 Synchronous CPUs: Depend on a clock signal for sequential operations, facing challenges with high clock rates and power consumption.
 Asynchronous CPUs: Operate without a central clock, offering potential benefits in
power and performance but are complex to design.
 Hybrid Designs: Combine asynchronous and synchronous elements, aiming to
balance performance and efficiency.

Voltage regulator module


Main article: Voltage regulator module
Many modern CPUs have a die-integrated power managing module which regulates on-demand
voltage supply to the CPU circuitry allowing it to keep balance between performance and power
consumption.
Integer range
Every CPU represents numerical values in a specific way. For example, some early digital
computers represented numbers as familiar decimal (base 10) numeral system values, and
others have employed more unusual representations such as ternary (base three). Nearly all
modern CPUs represent numbers in binary form, with each digit being represented by some two-
valued physical quantity such as a "high" or "low" voltage.[g]

A six-bit word containing the binary encoded representation of decimal value 40. Most modern CPUs employ word sizes that are a power of two, for example 8, 16, 32 or 64 bits.
In binary CPUs, the word size, also known as bit width, data path width, or integer precision,
determines the number of bits processed in one operation. For instance, an 8-bit CPU handles
integers represented by eight bits, covering a range of 256 discrete values. The integer size
also dictates the memory locations directly addressable by the CPU. For instance, with a 32-
bit memory address, a CPU can access 2^32 memory locations. Some CPUs utilize
mechanisms like bank switching to extend memory addressing, overcoming limitations.
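The arithmetic behind these ranges is simply powers of two:

```python
def value_range(bits: int) -> int:
    """Number of distinct values an n-bit word can represent."""
    return 2 ** bits

print(value_range(8))   # an 8-bit CPU handles 256 discrete integer values
print(value_range(32))  # a 32-bit address reaches 4,294,967,296 locations (4 GiB)
```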

CPUs with larger word sizes entail more complex circuitry, making them physically larger,
costlier, and more power-hungry. Despite the availability of CPUs with larger word sizes
(e.g., 16, 32, 64, or even 128 bits), smaller 4- or 8-bit microcontrollers are popular in modern
applications due to their compact size, lower cost, and power efficiency. However, for higher
performance needs, the benefits of larger word sizes may outweigh these drawbacks.

Some CPUs feature internal data paths shorter than the word size to reduce size and cost. For
example, although the IBM System/360 instruction set was 32-bit, models like the Model 30
and Model 40 had 8-bit data paths, requiring four cycles for a 32-bit add. Similarly, the
Motorola 68000 series had 16-bit data paths, necessitating two cycles for a 32-bit add.
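A narrow data path handles wide operands in multiple passes, carrying between chunks, much as the 8-bit System/360 models performed a 32-bit add in four cycles. A sketch of that multi-cycle scheme:

```python
def wide_add(a: int, b: int, chunk_bits: int = 8, chunks: int = 4) -> int:
    """Add two 32-bit numbers 8 bits at a time, propagating the carry."""
    mask = (1 << chunk_bits) - 1
    result, carry = 0, 0
    for i in range(chunks):                       # one "cycle" per 8-bit chunk
        shift = i * chunk_bits
        s = ((a >> shift) & mask) + ((b >> shift) & mask) + carry
        result |= (s & mask) << shift             # place this chunk of the sum
        carry = s >> chunk_bits                   # carry into the next chunk
    return result

print(hex(wide_add(0x12345678, 0x0F0F0F0F)))
```

The single loop iteration per chunk is the software analogue of each extra machine cycle the narrow hardware needed.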

To balance advantages of lower and higher bit lengths, many instruction sets adopt different
widths for integer and floating-point data. For instance, the IBM System/360 supported 64-bit
floating-point values within a primarily 32-bit instruction set, enhancing floating-point
accuracy and range. Later CPU designs often employ mixed bit widths, especially for
general-purpose processors needing a blend of integer and floating-point capabilities to meet
diverse computational requirements.

Parallelism
Main article: Parallel computing

Model of a subscalar
CPU, in which it takes fifteen clock cycles to complete three instructions
The description of the basic operation of a CPU offered in the previous section describes the
simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates
on and executes one instruction on one or two pieces of data at a time, that is less than
one instruction per clock cycle (IPC < 1).
This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction
is executed at a time, the entire CPU must wait for that instruction to complete before proceeding
to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take
more than one clock cycle to complete execution. Even adding a second execution unit (see
below) does not improve performance much; rather than one pathway being hung up, now two
pathways are hung up and the number of unused transistors is increased. This design, wherein
the CPU's execution resources can operate on only one instruction at a time, can only possibly
reach scalar performance (one instruction per clock cycle, IPC = 1). However, the performance is
nearly always subscalar (less than one instruction per clock cycle, IPC < 1).
Attempts to achieve scalar and better performance have resulted in a variety of design
methodologies that cause the CPU to behave less linearly and more in parallel. When referring to
parallelism in CPUs, two terms are generally used to classify these design techniques:
 instruction-level parallelism (ILP), which seeks to increase the rate at which
instructions are executed within a CPU (that is, to increase the use of on-die
execution resources);
 task-level parallelism (TLP), which aims to increase the number of threads or processes that a CPU can execute simultaneously.
Each methodology differs both in the ways in which they are implemented, as well as the relative
effectiveness they afford in increasing the CPU's performance for an application. [i]
Instruction-level parallelism
Main article: Instruction-level parallelism

Basic five-stage pipeline. In the best-case scenario, this pipeline can sustain a completion rate of one instruction per clock cycle.
One of the simplest methods for increased parallelism is to begin the first steps of instruction
fetching and decoding before the prior instruction finishes executing. This is a technique known
as instruction pipelining, and is used in almost all modern general-purpose CPUs. Pipelining allows multiple instructions to be in flight at once by breaking the execution pathway into discrete stages. This separation can be compared to an assembly line, in which an instruction is made more complete at each stage until it exits the execution pipeline and is retired.
Pipelining does, however, introduce the possibility for a situation where the result of the previous
operation is needed to complete the next operation; a condition often termed data dependency
conflict. Therefore, pipelined processors must check for these sorts of conditions and delay a
portion of the pipeline if necessary. A pipelined processor can become very nearly scalar,
inhibited only by pipeline stalls (an instruction spending more than one clock cycle in a stage).
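Pipeline timing, including the cost of a stall, follows a simple pattern: the first instruction occupies every stage once, each subsequent instruction retires one cycle later, and each hazard bubble adds a cycle. A sketch under those idealized assumptions:

```python
STAGES = 5  # e.g. fetch, decode, execute, memory access, write-back

def cycles(n_instructions: int, stalls: int = 0) -> int:
    """Total cycles for n instructions in an ideal pipeline plus stall bubbles."""
    # First instruction takes STAGES cycles to drain through the pipeline;
    # each later one completes 1 cycle after the previous, unless a hazard
    # inserts stall (bubble) cycles.
    return STAGES + (n_instructions - 1) + stalls

print(cycles(3))            # 3 independent instructions: 7 cycles
print(cycles(3, stalls=2))  # a data dependency adding 2 bubbles: 9 cycles
```

Compare this with the subscalar model pictured below, where the same three instructions take fifteen cycles because no stages overlap.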

A simple superscalar pipeline. By fetching and dispatching two instructions at a time, a maximum of two instructions per clock cycle can be completed.
Improving CPU Performance with Superscalar Designs
In the ever-evolving landscape of computer architecture, innovations like instruction
pipelining have significantly reduced idle time within CPU components. Among these
advancements, superscalar designs stand out. A superscalar CPU boasts a long
instruction pipeline and multiple identical execution units—such as load–store units,
arithmetic–logic units, floating-point units, and address generation units.
Here’s how superscalar architectures work:
1. Instruction Dispatch and Parallel Execution:
o Instructions are read and passed to a dispatcher.
o The dispatcher evaluates whether instructions can be executed in
parallel (simultaneously).
o If feasible, the instructions are dispatched to execution units, resulting
in simultaneous execution.
o The number of instructions completed in a cycle depends on how many
instructions can be dispatched concurrently.
2. Challenges in Superscalar Design:
o The heart of a superscalar design lies in creating an efficient
dispatcher.
o The dispatcher must quickly determine parallelizability and dispatch
instructions to keep execution units busy.
o Filling the instruction pipeline optimally is crucial, necessitating
substantial CPU cache.
o Hazard-avoidance techniques—such as branch prediction, speculative
execution, register renaming, out-of-order execution, and transactional
memory—are vital for sustained performance.
3. Branch Prediction and Speculative Execution:
o Predicting conditional instruction paths minimizes pipeline waits.
o Speculative execution executes code portions that may not be needed
after a conditional operation.
o Out-of-order execution rearranges instruction execution to reduce data
dependency delays.
4. Single Instruction Stream, Multiple Data Stream (SIMD):
o In cases where large amounts of similar data need processing (e.g.,
video creation or photo editing), modern processors can selectively
disable pipeline stages.
o When executing the same instruction repeatedly, the CPU skips fetch
and decode phases, significantly boosting performance.
5. Trade-offs and Floating-Point Units:
o While most modern CPUs incorporate some degree of superscalar
design, there are trade-offs.
o The Intel P5 Pentium, for instance, had integer superscalar ALUs but
lacked floating-point superscalar capabilities.
o Intel’s P6 architecture addressed this by adding superscalar features to
its floating-point unit.
6. Software Interface and the Role of ISA:
o Recent emphasis has shifted from hardware to software interfaces
(instruction set architecture, or ISA) for high-ILP (instruction-level
parallelism) computers.
o Strategies like very long instruction words (VLIW) implicitly encode ILP
in software, simplifying CPU design.

In summary, superscalar architectures empower CPUs to execute instructions beyond the
traditional one-per-clock-cycle limit. As technology advances, the delicate dance between
hardware and software continues, shaping the future of computing performance.
Task-level parallelism[edit]

Parallel Computing and Multiprocessing

Parallel computing, a field of research, executes multiple threads or processes
simultaneously. In Flynn's taxonomy, this approach is termed Multiple Instruction Stream,
Multiple Data Stream (MIMD).

Multiprocessing Technologies
Multiprocessing (MP) is a technology that allows multiple CPUs to share a coherent view of
memory. Symmetric Multiprocessing (SMP) enables CPUs to cooperate on the same
program, ensuring an up-to-date memory view. Non-Uniform Memory Access (NUMA) and
directory-based coherence protocols expand CPU cooperation beyond SMP limitations.

Chip-Level Multiprocessing

Chip-Level Multiprocessing (CMP) integrates multiple processors and interconnects onto a
single chip, forming multi-core processors.

Multithreading Advancements

Multithreading (MT) enables finer-grain parallelism within a single program. Unlike
multiprocessing, where entire CPUs are replicated, only specific components within a CPU
are duplicated to support MT, making it more cost-effective. However, MT requires more
significant software changes than multiprocessing, because its hardware support is more
visible to applications and operating systems.

Types of Multithreading

Temporal Multithreading switches to another thread when one is stalled waiting for data,
optimizing CPU usage. Simultaneous Multithreading executes instructions from multiple
threads in parallel within a single CPU clock cycle.
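As a software analogy for temporal multithreading, the sketch below overlaps several artificial stalls (sleeps standing in for long memory waits), so total wall time is close to one stall rather than their sum. This is illustrative only: OS threads are not hardware threads, but the latency-hiding principle is the same.

```python
# Software analogy for temporal multithreading: while one thread stalls
# (sleeping as a stand-in for a memory wait), others make progress.
import threading
import time

def stalled_task(delay, results, i):
    time.sleep(delay)          # stand-in for a long-latency stall
    results[i] = delay

def run_overlapped(delays):
    """Run all stalled tasks concurrently; return (wall time, results)."""
    results = [None] * len(delays)
    threads = [threading.Thread(target=stalled_task, args=(d, results, i))
               for i, d in enumerate(delays)]
    start = time.perf_counter()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return time.perf_counter() - start, results

# Serial execution would take ~0.8 s; overlapping the stalls takes ~0.2 s.
elapsed, results = run_overlapped([0.2, 0.2, 0.2, 0.2])
```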

Shift in CPU Design Focus

Historically, CPU design focused on achieving high Instruction-Level Parallelism (ILP)
through techniques like pipelining, caches, and superscalar execution. However, this
approach faced limitations due to increasing CPU power dissipation and the growing
disparity between CPU and memory operating frequencies.

Transition to Throughput Computing

CPU designers shifted focus to throughput computing, emphasizing the aggregate
performance of multiple programs over single-threaded performance. This led to the
proliferation of multi-core processor designs that rely less on aggressive superscalar
techniques.

Examples of Multiprocessing Designs

Recent processor families, including the x86-64 Opteron, Athlon 64 X2, SPARC UltraSPARC
T1, and IBM POWER4 and POWER5, feature chip-level multiprocessing. Video game
console CPUs, such as the Xbox 360's triple-core PowerPC and the PlayStation 3's Cell
microprocessor with its multiple specialized cores, also employ multiprocessing designs.

Data parallelism[edit]
Main articles: Vector processor and SIMD

Data Parallelism in Processors

Data parallelism, an increasingly important paradigm in computing, contrasts with traditional
scalar processing. Scalar processors handle one piece of data per instruction, while vector
processors deal with multiple pieces of data per instruction. This distinction is framed in
Flynn's taxonomy as Single Instruction Stream, Single Data Stream (SISD) for scalar
processors and Single Instruction Stream, Multiple Data Stream (SIMD) for vector
processors.

Advantages of Vector Processors

Vector processors excel in tasks requiring the same operation on large data sets, such as sums
or dot products. This makes them ideal for multimedia applications (images, video, and
sound) and scientific and engineering computations. Unlike scalar processors that must fetch,
decode, and execute each instruction for every data value, vector processors perform a single
operation on a large data set with one instruction, greatly enhancing efficiency in data-
intensive tasks.
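The efficiency argument can be sketched with a toy cost model. The fetch/decode/execute costs below are invented for illustration and are not real cycle counts; the point is that a vector instruction pays its fetch and decode overhead once for the whole data set.

```python
# Toy cost model contrasting scalar and vector (SIMD) execution of an
# element-wise add. Costs are illustrative: each instruction pays one
# fetch + one decode, and each element pays one execute.

FETCH, DECODE, EXECUTE = 1, 1, 1

def scalar_add(a, b):
    """One instruction per element: fetch/decode overhead every iteration."""
    out, cost = [], 0
    for x, y in zip(a, b):
        cost += FETCH + DECODE + EXECUTE
        out.append(x + y)
    return out, cost

def vector_add(a, b):
    """One vector instruction: fetch/decode once, execute per element."""
    cost = FETCH + DECODE + EXECUTE * len(a)
    return [x + y for x, y in zip(a, b)], cost

a, b = list(range(8)), list(range(8))
s_out, s_cost = scalar_add(a, b)   # 8 * (1+1+1) = 24 cost units
v_out, v_cost = vector_add(a, b)   # 1 + 1 + 8   = 10 cost units
```

Both paths produce the same result; only the instruction overhead differs, and the gap widens as the data set grows.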

Evolution and Adoption of SIMD

Early vector processors like the Cray-1 were used mainly in scientific research and
cryptography. As digital multimedia emerged, the need for SIMD in general-purpose
processors grew. Following the inclusion of floating-point units in general-purpose
processors, SIMD execution units began to appear. Early SIMD implementations, such as
HP's Multimedia Acceleration eXtensions (MAX) and Intel's MMX, were integer-only,
limiting their effectiveness for floating-point-intensive applications.

Modern SIMD Specifications

Developers refined early SIMD designs, leading to modern SIMD specifications associated
with specific instruction set architectures (ISAs). Notable examples include:

 Intel's Streaming SIMD Extensions (SSE): Enhanced performance for multimedia
and scientific applications by supporting floating-point operations.
 PowerPC's AltiVec (VMX): Improved vector processing capabilities, significantly
benefiting applications requiring extensive data parallelism.

Summary

The shift from scalar to vector processing, driven by the need for efficient data parallelism,
has significantly impacted processor design. Modern SIMD implementations in general-
purpose processors optimize performance for a wide range of applications, from multimedia
to scientific computations, by enabling efficient parallel processing of large data sets.

Hardware performance counter[edit]
Main article: Hardware performance counter
Many modern architectures (including embedded ones) include hardware performance
counters (HPCs), which enable low-level (instruction-level) collection, benchmarking, debugging,
or analysis of metrics from running software.[81][82] HPCs may also be used to discover and analyze
unusual or suspicious activity of the software, such as return-oriented programming (ROP)
or sigreturn-oriented programming (SROP) exploits etc.[83] This is usually done by software-
security teams to assess and find malicious binary programs.[84]
Many major vendors (such as IBM, Intel, AMD, and Arm) provide software interfaces (usually
written in C/C++) that can be used to collect data from the CPU's registers in order to get
metrics.[85] Operating system vendors also provide software such as perf (Linux) to
record, benchmark, or trace CPU events for running kernels and applications.
Hardware counters provide a low-overhead method for collecting comprehensive performance
metrics related to a CPU's core elements (functional units, caches, main memory, etc.) – a
significant advantage over software profilers.[86] Additionally, they generally eliminate the need to
modify the underlying source code of a program.[87][88] Because hardware designs differ between
architectures, the specific types and interpretations of hardware counters will also change.

Privileged modes[edit]
Most modern CPUs have privileged modes to support operating systems and virtualization.
Cloud computing can use virtualization to provide virtual central processing units[89] (vCPUs)
for separate users.[90]
A host is the virtual equivalent of a physical machine, on which a virtual system is operating.[91]
When there are several physical machines operating in tandem and managed as a whole, the
grouped computing and memory resources form a cluster. In some systems, it is possible to
dynamically add machines to and remove them from a cluster. Resources available at the host
and cluster level can be partitioned into resource pools with fine granularity.

Performance[edit]
Further information: Computer performance and Benchmark (computing)

Processor Performance Factors

The performance or speed of a processor depends on various factors, primarily the clock rate
(measured in hertz) and instructions per clock (IPC). Together, these determine the
instructions per second (IPS) the CPU can execute. However, reported IPS values often
reflect "peak" rates on artificial sequences, not realistic workloads. Real-world applications
involve a mix of instructions, some taking longer to execute, affecting overall performance.
Additionally, the efficiency of the memory hierarchy significantly impacts processor
performance, an aspect not fully captured by IPS.
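As a worked example with illustrative numbers, IPS is the product of clock rate and IPC, so a slower-clocked CPU with higher IPC can outperform a faster-clocked one:

```python
# Instructions per second (IPS) = clock rate (Hz) x instructions per
# clock (IPC). The figures below are invented for illustration.

def instructions_per_second(clock_hz, ipc):
    return clock_hz * ipc

# A 3 GHz CPU averaging 2 instructions per clock outperforms a 4 GHz CPU
# averaging 1 instruction per clock, despite its lower clock rate.
fast_clock = instructions_per_second(4e9, 1.0)  # 4 billion IPS
high_ipc   = instructions_per_second(3e9, 2.0)  # 6 billion IPS
```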

Benchmarks for Real-World Performance

To address the limitations of IPS, standardized tests or "benchmarks" like SPECint have been
developed. These benchmarks aim to measure the actual effective performance of processors
in commonly used applications, providing a more accurate representation of real-world
performance.

Multi-Core Processors

Multi-core processors increase processing performance by integrating multiple cores into a
single chip. Ideally, a dual-core processor would be nearly twice as powerful as a single-core
processor, but in practice, the gain is about 50% due to software inefficiencies. Increasing the
number of cores allows the processor to handle more tasks simultaneously, enhancing its
capability to manage asynchronous events and interrupts. Each core can be thought of as a
separate floor in a processing plant, handling different tasks or working together on a single
task if necessary.
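The roughly 50% dual-core gain can be framed with Amdahl's law (a standard model, not taken from this text): if about two thirds of a program's work parallelizes, two cores give about a 1.5x speedup, and even a very large number of cores cannot push far past 3x.

```python
# Amdahl's law: speedup on n cores is limited by the serial fraction.
# speedup = 1 / ((1 - p) + p / n), where p is the parallelizable fraction.

def amdahl_speedup(parallel_fraction, cores):
    return 1.0 / ((1.0 - parallel_fraction) + parallel_fraction / cores)

dual = amdahl_speedup(2 / 3, 2)     # ~1.5x, matching the ~50% gain above
many = amdahl_speedup(2 / 3, 1024)  # ~3x: the serial third dominates
```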

Inter-Core Communication

The increase in processing speed with additional cores is not directly proportional because
cores need to communicate through specific channels, consuming some of the available
processing power. This inter-core communication adds complexity and limits the overall
performance gain from additional cores.

Modern CPU Capabilities

Modern CPUs have features like simultaneous multithreading and uncore, which share CPU
resources to increase utilization. These capabilities make monitoring performance levels and
hardware usage more complex. To address this, some CPUs include additional hardware
logic for monitoring usage, providing counters accessible to software. An example is Intel's
Performance Counter Monitor technology.

Summary

Processor performance is influenced by clock rate, IPC, and the efficiency of the memory
hierarchy. Benchmarks like SPECint provide a more accurate measure of real-world
performance. Multi-core processors enhance the ability to run multiple tasks simultaneously,
though performance gains are limited by inter-core communication. Modern CPUs
incorporate advanced features to improve resource utilization, necessitating sophisticated
monitoring tools.
Different Parts of CPU


The CPU consists of three major units:
1. Memory or Storage Unit
2. Control Unit
3. ALU (Arithmetic Logic Unit)
Let us discuss each of these major components:

Memory or Storage Unit

The memory or storage unit is crucial for storing instructions, data, and intermediate results
necessary for processing. It ensures that the CPU has quick access to the data and instructions
it needs to execute tasks efficiently. This unit is also referred to as the internal storage unit,
main memory, primary storage, or Random Access Memory (RAM).

Key Functions:

1. Data Storage: Stores instructions, data, and intermediate results required for
processing.
2. Intermediate Storage: Temporarily holds data during task execution.
3. Final Storage: Stores final processing results before they are outputted.
4. Data Transfer: Manages data transmission between the memory unit and other
components.

Types of Memory:

1. Primary Memory: RAM provides fast, temporary storage directly accessible by the
CPU.
2. Secondary Memory: Includes hard drives and SSDs, offering larger, long-term
storage but slower access.

Control Unit

The control unit directs the operations of all parts of the computer, ensuring that instructions
are fetched, decoded, and executed correctly. It does not process data but orchestrates the
entire processing sequence.

Key Functions:

1. Data Control and Transfer: Manages data and instruction flow between computer
parts.
2. Unit Management: Oversees operations of all units within the computer.
3. Instruction Fetch and Decode: Retrieves and decodes instructions from memory.
4. Device Communication: Manages data transfer between the CPU and input/output
devices.
5. Flow Control: Maintains orderly information flow across the processor.

ALU (Arithmetic Logic Unit)

The Arithmetic Logic Unit (ALU) performs arithmetic and logical operations. It consists of
two sections, each specializing in different types of operations.

Arithmetic Section:

1. Basic Operations: Performs addition, subtraction, multiplication, and division.
2. Complex Calculations: Handles complex calculations using basic operations.

Logic Section:

1. Logical Operations: Executes selection, comparison, matching, and merging
operations.

Additional Notes:

1. Multiple ALUs: Modern CPUs may have multiple ALUs for increased processing
power.
2. Timer Functions: ALUs can manage timers to coordinate system operations
efficiently.
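The two sections of the ALU can be sketched as a dispatch from an opcode to an arithmetic or logic operation. This is a minimal sketch; the opcode names are invented for illustration.

```python
# Minimal ALU sketch: dispatch an opcode to an arithmetic or logic
# operation on two integer operands (opcode names are illustrative).
import operator

ALU_OPS = {
    # arithmetic section
    "ADD": operator.add,
    "SUB": operator.sub,
    "MUL": operator.mul,
    # logic section
    "AND": operator.and_,
    "OR":  operator.or_,
    "CMP": lambda a, b: (a > b) - (a < b),  # comparison: -1, 0, or 1
}

def alu(opcode, a, b):
    return ALU_OPS[opcode](a, b)
```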

Summary

 Memory Unit: Stores and manages data and instructions, impacting computer speed
and performance.
 Control Unit: Directs operations, ensuring efficient data flow and execution of
instructions.
 ALU: Performs essential arithmetic and logical operations for various tasks.
Understanding these units' roles and functions highlights how a computer processes
information and executes tasks efficiently.

What Does a CPU Do?

The main function of a computer processor is to execute instructions
and produce an output. Fetch, decode, and execute are the
fundamental steps of the CPU's instruction cycle.
 Fetch: First, the CPU gets the instruction: the
binary-encoded instruction is passed from RAM to the CPU.
 Decode: Once the instruction enters the CPU, the
instruction decoder translates it into signals that tell
the other units what to do.
 Execute: After the decode step, the instruction is
carried out, for example by the ALU (Arithmetic Logic
Unit) for arithmetic operations.
 Store: After the execute step, the result is written
back to a register or to memory.
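The fetch-decode-execute-store cycle above can be sketched as a toy interpreter. The "machine code" format, the opcodes, and the single accumulator register are invented for this illustration; real CPUs decode binary instruction words, not text.

```python
# Toy illustration of the fetch-decode-execute-store cycle.

def run(program):
    registers = {"ACC": 0}  # single accumulator register
    memory = {}             # results stored by name
    pc = 0                  # program counter
    while pc < len(program):
        instruction = program[pc]          # fetch the next instruction
        op, operand = instruction.split()  # decode it into op + operand
        if op == "LOAD":                   # execute...
            registers["ACC"] = int(operand)
        elif op == "ADD":
            registers["ACC"] += int(operand)
        elif op == "STORE":                # ...and store the result
            memory[operand] = registers["ACC"]
        pc += 1
    return memory

result = run(["LOAD 5", "ADD 3", "STORE x"])  # leaves memory {"x": 8}
```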
Types of CPU
There are three common types of CPU:
 Single-Core CPU: The oldest type of computer CPU is the
single-core CPU, widely used in the 1970s. These CPUs
have only one core to perform operations, so they can
process only one operation at a time. A single-core CPU
is not well suited to multitasking.
 Dual-Core CPU: Dual-core CPUs contain a single
integrated circuit with two cores. Each core has its own
cache and controller, and these work together as a
single unit, so dual-core CPUs can work faster than
single-core processors.
 Quad-Core CPU: A quad-core CPU contains four
independent cores within a single integrated circuit (IC)
or chip. These cores read and execute program
instructions independently. A quad-core CPU increases
overall speed for programs that can use multiple cores,
delivering higher performance without raising the
overall clock speed.

A camera is an instrument used to capture and store images and videos, either digitally via an
electronic image sensor, or chemically via a light-sensitive material such as photographic film. As
a pivotal technology in the fields of photography and videography, cameras have played a
significant role in the progression of visual arts, media, entertainment, surveillance, and scientific
research. The invention of the camera dates back to the 19th century and has since evolved with
advancements in technology, leading to a vast array of types and models in the 21st century.
Cameras function through a combination of multiple mechanical components and principles.
These include exposure control, which regulates the amount of light reaching the sensor or film;
the lens, which focuses the light; the viewfinder, which allows the user to preview the scene; and
the film or sensor, which captures the image.
Several types of cameras exist, each suited to specific uses and offering unique
capabilities. Single-lens reflex (SLR) cameras provide real-time, exact imaging through the
lens. Large-format and medium-format cameras offer higher image resolution and are often used
in professional and artistic photography. Compact cameras, known for their portability and
simplicity, are popular in consumer photography. Rangefinder cameras, with separate viewing
and imaging systems, were historically widely used in photojournalism. Motion picture
cameras are specialized for filming cinematic content, while digital cameras, which became
prevalent in the late 20th and early 21st century, use electronic sensors to capture and store
images.
The rapid development of smartphone camera technology in the 21st century has blurred the
lines between dedicated cameras and multifunctional devices, profoundly influencing how society
creates, shares, and consumes visual content.

Mechanics[edit]

Basic elements of a modern digital single-lens reflex (SLR) still camera
Most cameras capture light from the visible spectrum, while specialized cameras capture other
portions of the electromagnetic spectrum, such as infrared.[7]: vii
All cameras use the same basic design: light enters an enclosed box through a converging or
convex lens and an image is recorded on a light-sensitive medium.[8] A shutter mechanism
controls the length of time that light enters the camera.[9]: 1182–1183
Most cameras also have a viewfinder, which shows the scene to be recorded, along with means
to adjust various combinations of focus, aperture and shutter speed.[10]: 4
Exposure control[edit]
Main article: Exposure (photography)
Aperture[edit]
Main article: Aperture
Different apertures of a lens
Light enters the camera through an aperture, an opening whose size is adjusted by
overlapping plates that form an iris diaphragm.[11][12][13] Typically located in the lens,[14] this opening
can be widened or narrowed to alter the amount of light that strikes the film or sensor.[11] The size
of the aperture can be set manually, by rotating a ring on the lens or adjusting a dial, or
automatically based on readings from an internal light meter.[11]
As the aperture is adjusted, the opening expands and contracts in increments called f-stops.[a][11]
The smaller the f-stop number, the more light is allowed to enter the lens, increasing the exposure.
Typically, f-stops range from f/1.4 to f/32[b] in standard increments: 1.4, 2, 2.8, 4, 5.6, 8, 11, 16,
22, and 32.[15] The light entering the camera is halved with each increasing increment.[14]
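This halving follows from the aperture area scaling with 1/N² for f-number N: each full stop multiplies N by √2, which halves the light. A short calculation confirms it (the standard f-stop values are rounded, so the ratios are only approximately 2).

```python
# Light passing the aperture scales with the area of the opening,
# i.e. with 1/N^2 for f-number N.
import math

def relative_light(f_number):
    return 1.0 / f_number ** 2

stops = [1.4, 2, 2.8, 4, 5.6, 8, 11, 16, 22, 32]
ratios = [relative_light(a) / relative_light(b)
          for a, b in zip(stops, stops[1:])]
# Each ratio is close to 2: every step down the scale halves the light.

# With an exact sqrt(2) step the ratio is exactly 2:
exact = relative_light(8) / relative_light(8 * math.sqrt(2))
```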

The distance range in which objects appear clear and sharp, called depth of field, can be
adjusted by many cameras. This allows a photographer to control which objects appear in
focus, and which do not.
The wider opening at lower f-stops narrows the range of focus so the background is blurry while
the foreground is in focus. This depth of field increases as the aperture closes. A narrow aperture
results in a high depth of field, meaning that objects at many different distances from the camera
will appear to be in focus.[16] What is acceptably in focus is determined by the circle of confusion,
the photographic technique, the equipment in use and the degree of magnification expected of
the final image.[17]
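One standard way to quantify this is the hyperfocal distance H = f²/(N·c) + f, the focus distance beyond which everything to infinity is acceptably sharp. This is a textbook approximation, not taken from this text: f is the focal length, N the f-number, and c the circle of confusion (0.03 mm is a common full-frame value).

```python
# Hyperfocal distance: H = f^2 / (N * c) + f, all lengths in millimeters.
# Stopping down (larger N) shortens H, i.e. deepens the depth of field.

def hyperfocal_mm(focal_length, f_number, coc=0.03):
    return focal_length ** 2 / (f_number * coc) + focal_length

wide_open    = hyperfocal_mm(50, 2.8)  # ~29.8 m: shallow depth of field
stopped_down = hyperfocal_mm(50, 16)   # ~5.3 m: much deeper depth of field
```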
Shutter[edit]
Main article: Shutter (photography)
The shutter, along with the aperture, is one of two ways to control the amount of light entering the
camera. The shutter determines the duration that the light-sensitive surface is exposed to light.
The shutter opens, light enters the camera and exposes the film or sensor to light, and then the
shutter closes.[14][18]
There are two types of mechanical shutters: the leaf-type shutter and the focal-plane shutter. The
leaf-type uses a circular iris diaphragm maintained under spring tension inside or just behind the
lens that rapidly opens and closes when the shutter is released.[15]
A focal-plane shutter. In this shutter, the metal shutter
blades travel vertically.
More commonly, a focal-plane shutter is used.[14] This shutter operates close to the film plane and
employs metal plates or cloth curtains with an opening that passes across the light-sensitive
surface. The curtains or plates have an opening that is pulled across the film plane during
exposure. The focal-plane shutter is typically used in single-lens reflex (SLR) cameras, since
covering the film (rather than blocking the light passing through the lens) allows the photographer
to view the image through the lens at all times, except during the exposure itself. Covering the
film also facilitates removing the lens from a loaded camera, as many SLRs have
interchangeable lenses.[11][15]
A digital camera may use a mechanical or electronic shutter, the latter of which is common in
smartphone cameras. Electronic shutters either record data from the entire sensor
simultaneously (a global shutter) or record the data line by line across the sensor (a rolling
shutter).[11] In movie cameras, a rotary shutter opens and closes in sync with the advancement of
each frame of film.[11][19]
The duration for which the shutter is open is called the shutter speed or exposure time. Typical
exposure times can range from one second to 1/1,000 of a second, though longer and shorter
durations are not uncommon. In the early stages of photography, exposures were often several
minutes long. These long exposure times often resulted in blurry images, as a single object is
recorded in multiple places across a single image for the duration of the exposure. To prevent
this, shorter exposure times can be used. Very short exposure times can capture fast-moving
action and eliminate motion blur.[20][15][11][14] However, shorter exposure times require more light to
produce a properly exposed image, so shortening the exposure time is not always possible.
Like aperture settings, exposure times increment in powers of two. The two settings determine
the exposure value (EV), a measure of how much light is recorded during the exposure. There is
a direct relationship between the exposure times and aperture settings so that if the exposure
time is lengthened one step, but the aperture opening is also narrowed one step, then the
amount of light that contacts the film or sensor is the same.[14]
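This reciprocity can be checked numerically with the standard exposure-value formula EV = log₂(N²/t), for f-number N and exposure time t in seconds (a standard photographic formula, not taken from this text):

```python
# Exposure value: EV = log2(N^2 / t). Opening the aperture one stop
# (dividing N by sqrt(2)) while halving the exposure time leaves EV --
# and the amount of light reaching the film or sensor -- unchanged.
import math

def exposure_value(f_number, exposure_time):
    return math.log2(f_number ** 2 / exposure_time)

ev_a = exposure_value(8, 1 / 125)                 # f/8 at 1/125 s
ev_b = exposure_value(8 / math.sqrt(2), 1 / 250)  # one stop wider, twice as fast
# ev_a == ev_b: the two settings admit the same amount of light.
```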
Light meter[edit]
Main article: Light meter
A handheld digital light meter showing an exposure of
1/200th at an aperture of f/11, at ISO 100. The light sensor is on top, under the white
diffusing hemisphere.
In most modern cameras, the amount of light entering the camera is measured using a built-in
light meter or exposure meter.[c] Taken through the lens (called TTL metering), these readings
are taken using a panel of light-sensitive semiconductors.[12] They are used to calculate optimal
exposure settings. These settings are typically determined automatically as the reading is used
by the camera's microprocessor. The reading from the light meter is incorporated with aperture
settings, exposure times, and film or sensor sensitivity to calculate the optimal exposure. [d]
Light meters typically average the light in a scene to 18% middle gray. More advanced cameras
are more nuanced in their metering—weighing the center of the frame more heavily (center-
weighted metering), considering the differences in light across the image (matrix metering), or
allowing the photographer to take a light reading at a specific point within the image (spot
metering).[16][20][21][11]
Lens[edit]
Main articles: Camera lens and Photographic lens design
A camera lens is an assembly of multiple optical elements, typically made from high-quality
glass.[22] Its primary function is to focus light onto a camera's film or digital sensor, thereby
producing an image.[12] This process significantly influences image quality, the overall appearance
of the photo, and which parts of the scene are brought into focus. [22]
A camera lens is constructed from a series of lens elements, small pieces of glass arranged to
form an image accurately on the light-sensitive surface. Each element is designed to
reduce optical aberrations, or distortions, such as chromatic aberration (a failure of the lens to
focus all colors at the same point), vignetting (darkening of image corners),
and distortion (bending or warping of the image). The degree of these distortions can vary
depending on the subject of the photo.[22]
The focal length of the lens, measured in millimeters, plays a critical role as it determines how
much of the scene the camera can capture and how large the objects appear. Wide-angle lenses
provide a broad view of the scene, while telephoto lenses capture a narrower view but magnify
the objects. The focal length also influences the ease of taking clear pictures handheld, with
longer lengths making it more challenging to avoid blur from small camera movements. [22]
Two primary types of lenses include zoom and prime lenses. A zoom lens allows for changing its
focal length within a certain range, providing the convenience of adjusting the scene capture
without moving the camera or changing the lens. A prime lens, in contrast, has a fixed focal
length. While less flexible, prime lenses often provide superior image quality, are typically lighter,
and perform better in low light.[22]
Focus involves adjusting the lens elements to sharpen the image of the subject at various
distances.[23] The focus is adjusted through the focus ring on the lens, which moves the lens
elements closer or further from the sensor. Autofocus is a feature included in many lenses, which
uses a motor within the lens to adjust the focus quickly and precisely based on the lens's
detection of contrast or phase differences. This feature can be enabled or disabled using
switches on the lens body.[12]
Advanced lenses may include mechanical image stabilization systems that move lens elements
or the image sensor itself to counteract camera shake, especially beneficial in low-light
conditions or at slow shutter speeds.[22] Lens hoods, filters, and caps are accessories used
alongside a lens to enhance image quality, protect the lens, or achieve specific effects. [12]
Viewfinder[edit]
The camera's viewfinder provides a real-time approximation of what will be captured by the
sensor or film. It assists photographers in aligning, focusing, and adjusting the composition,
lighting, and exposure of their shots, enhancing the accuracy of the final image. [14]
Viewfinders fall into two primary categories: optical and electronic. Optical viewfinders, commonly
found in Single-Lens Reflex (SLR) cameras, use a system of mirrors or prisms to reflect light
from the lens to the viewfinder, providing a clear, real-time view of the scene. Electronic
viewfinders, typical in mirrorless cameras, project an electronic image onto a small display,
offering a wider range of information such as live exposure previews and histograms, albeit at the
cost of potential lag and higher battery consumption.[11] Specialized viewfinder systems exist for
specific applications, like subminiature cameras for spying or underwater photography.[18]
Parallax error, resulting from misalignment between the viewfinder and lens axes, can cause
inaccurate representations of the subject's position. While negligible with distant subjects, this
error becomes prominent with closer ones. Some viewfinders incorporate parallax-compensating
devices to mitigate that issue.[15]
Film and sensor[edit]
Image capture in a camera occurs when light strikes a light-sensitive surface: photographic
film or a digital sensor.[18] Housed within the camera body, the film or sensor records the light's
pattern when the shutter is briefly opened to allow light to pass during the exposure. [16]
Loading film into a film camera is a manual process. The film, typically housed in a cartridge, is
loaded into a designated slot in the camera. One end of the film strip, the film leader, is manually
threaded onto a take-up spool. Once the back of the camera is closed, the film advance lever or
knob is used to ensure the film is correctly placed. The photographer then winds the film, either
manually or automatically depending on the camera, to position a blank portion of the film in the
path of the light. Each time a photo is taken, the film advance mechanism moves the exposed
film out of the way, bringing a new, unexposed section of film into position for the next shot. [16]
The film must be advanced after each shot to prevent double exposure — where the same
section of film is exposed to light twice, resulting in overlapped images. Once all frames on the
film roll have been exposed, the film is rewound back into the cartridge, ready to be removed
from the camera for developing.[18]
In digital cameras, sensors typically comprise Charge-Coupled Devices (CCDs) or
Complementary Metal-Oxide-Semiconductor (CMOS) chips, both of which convert incoming light
into electrical charges to form digital images.[11] CCD sensors, though power-intensive, are
recognized for their excellent light sensitivity and image quality. Conversely, CMOS sensors offer
individual pixel readouts, leading to less power consumption and faster frame rates, with their
image quality having improved significantly over time.
Digital cameras convert light into electronic data that can be directly processed and stored. The
volume of data generated is dictated by the sensor's size and properties, necessitating storage
media such as Compact Flash, Memory Sticks, and SD (Secure Digital) cards.[18] Modern digital
cameras typically feature a built-in monitor for immediate image review and adjustments. [11] Digital
images are also more readily handled and manipulated by computers, offering a significant
advantage in terms of flexibility and post-processing potential over traditional film. [18]
Camera accessories[edit]
Flash[edit]
A flash provides a short burst of bright light during exposure and is a commonly used artificial
light source in photography. Most modern flash systems use a battery-powered high-voltage
discharge through a gas-filled tube to generate bright light for a very short time (1/1,000 of a
second or less).[e][21]
Many flash units measure the light reflected from the flash to help determine the appropriate
duration of the flash. When the flash is attached directly to the camera—typically in a slot at the
top of the camera (the flash shoe or hot shoe) or through a cable—activating the shutter on the
camera triggers the flash, and the camera's internal light meter can help determine the duration
of the flash.[21][16]
Additional flash equipment can include a light diffuser, mount and stand, reflector, soft box,
trigger and cord.
Other accessories[edit]
Accessories for cameras are mainly used for care, protection, special effects, and functions.

 Lens hood: used on the end of a lens to block the sun or other light source to prevent
glare and lens flare (see also matte box).
 Lens cap: covers and protects the camera lens when not in use.
 Lens adapter: allows the use of lenses other than those for which the camera was
designed.
 Filter: allows artificial colors or changes light density.
 Lens extension tube: allows close focus in macro photography.
 Care and protection: including camera case and cover, maintenance tools, and
screen protector.
 Camera monitor: provides an off-camera view of the composition with a brighter and
more colorful screen, and typically exposes more advanced tools such as framing
guides, focus peaking, zebra stripes, waveform monitors (oftentimes as an "RGB
parade"), vectorscopes and false color to highlight areas of the image critical to the
photographer.
 Tripod: primarily used for keeping the camera steady while recording video, doing a
long exposure, and time-lapse photography.
 Microscope adapter: used to connect a camera to a microscope to photograph what
the microscope is examining.
 Cable release: used to remotely control the shutter using a remote shutter button that
can be connected to the camera via a cable. It can be used to lock the shutter open
for the desired period, and it is also commonly used to prevent the camera shake
from pressing the built-in camera shutter button.
 Dew shield: prevents moisture build-up on the lens.
 UV filter: can protect the front element of a lens from scratches, cracks, smudges,
dirt, dust, and moisture while keeping a minimum impact on image quality.
 Battery and sometimes a charger.
Large format cameras use special equipment that includes a magnifier loupe, viewfinder, angle
finder, and focusing rail/truck. Some professional SLRs can be fitted with
interchangeable finders for eye-level or waist-level focusing, focusing screens, eyecups, data
backs, motor drives for film transport, or external battery packs.
Primary types
Single-lens reflex (SLR) camera
Main article: Single-lens reflex camera
Further information: Instant return mirror
Nikon D200 digital camera
In photography, the single-lens reflex camera (SLR) is provided with a mirror to redirect light from
the lens to the viewfinder prior to releasing the shutter for composing and focusing an image.
When the shutter is released, the mirror swings up and away, allowing the exposure of
the photographic medium, and instantly returns after the exposure is finished. No SLR camera
before 1954 had this feature, although the mirror on some early SLR cameras was entirely
operated by the force exerted on the shutter release and only returned when the finger pressure
was released.[24][25] The Asahiflex II, released by Japanese company Asahi (Pentax) in 1954, was
the world's first SLR camera with an instant return mirror.[26]
In the single-lens reflex camera, the photographer sees the scene through the camera lens. This
avoids the problem of parallax which occurs when the viewfinder or viewing lens is separated
from the taking lens. Single-lens reflex cameras have been made in several formats including
sheet film 5×7" and 4×5", and roll film 220/120 taking 8, 10, 12, or 16 photographs on a 120 roll
and twice that number on a 220 roll. These correspond to 6×9, 6×7, 6×6, and 6×4.5 respectively (all
dimensions in cm). Notable manufacturers of large format and roll film SLR cameras
include Bronica, Graflex, Hasselblad, Seagull, Mamiya, and Pentax. However, the most common
SLR format has been 35 mm, which subsequently migrated to digital SLR cameras
using almost identically sized bodies and sometimes the same lens systems.
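The relationship between frame size and exposures per roll can be sketched in a few lines of Python. This is a minimal illustration of the counts given above for 120 film; 220 film, being twice as long, simply doubles them:

```python
# Exposures per 120 roll for common medium-format frame sizes (cm),
# as listed above; 220 film is twice the length, so counts double.
FRAMES_PER_120_ROLL = {
    "6x9": 8,
    "6x7": 10,
    "6x6": 12,
    "6x4.5": 16,
}

def exposures(frame_size: str, film_type: int = 120) -> int:
    """Return the number of exposures for a frame size on 120 or 220 film."""
    base = FRAMES_PER_120_ROLL[frame_size]
    return base * 2 if film_type == 220 else base

print(exposures("6x6", 120))  # 12 frames on a 120 roll
print(exposures("6x6", 220))  # 24 frames on a 220 roll
```

The smaller the frame, the more exposures fit on a roll of fixed length, which is why 6×4.5 yields twice as many shots as 6×9.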
Almost all SLR cameras use a front-surfaced mirror in the optical path to direct the light from the
lens via a viewing screen and pentaprism to the eyepiece. At the time of exposure, the mirror is
flipped up out of the light path before the shutter opens. Some early cameras experimented with
other methods of providing through-the-lens viewing, including the use of a semi-
transparent pellicle as in the Canon Pellix[27] and others with a small periscope such as in
the Corfield Periflex series.[28]
Large-format camera
Main article: View camera
The large-format camera, taking sheet film, is a direct successor of the early plate cameras and
remained in use for high-quality photography and technical, architectural, and industrial
photography. There are three common types: the view camera, with its monorail and field
camera variants, and the press camera. They have extensible bellows with the lens and shutter
mounted on a lens plate at the front. Backs taking roll film and later digital backs are available in
addition to the standard dark slide back. These cameras have a wide range of movements
allowing very close control of focus and perspective. Composition and focusing are done on view
cameras by viewing a ground-glass screen which is replaced by the film to make the exposure;
they are suitable for static subjects only and are slow to use.
Plate camera
19th-century studio camera with bellows for focusing
Main article: Photographic plate
The earliest cameras produced in significant numbers were plate cameras, using sensitized
glass plates. Light entered a lens mounted on a lens board which was separated from the plate
by extendible bellows. There were simple box cameras for glass plates but also single-lens reflex
cameras with interchangeable lenses and even for color photography (Autochrome Lumière).
Many of these cameras had controls to raise, lower, and tilt the lens forwards or backward to
control perspective.
Focusing of these plate cameras was by the use of a ground glass screen at the point of focus.
Because lens design only allowed rather small aperture lenses, the image on the ground glass
screen was faint and most photographers had a dark cloth to cover their heads to allow focusing
and composition to be carried out more quickly. When focus and composition were satisfactory,
the ground glass screen was removed, and a sensitized plate was put in its place protected by
a dark slide. To make the exposure, the dark slide was carefully slid out, the shutter
opened and then closed, and the dark slide replaced.
Glass plates were later replaced by sheet film held in a dark slide; adapter sleeves were
made to allow sheet film to be used in plate holders. In addition to the ground glass, a simple
optical viewfinder was often fitted.
Medium-format camera
Main article: Medium format
Hasselblad medium format camera
Medium-format cameras have a film size between the large-format cameras and smaller 35 mm
cameras.[29] Typically these systems use 120 or 220 roll film.[30] The most common image sizes
are 6×4.5 cm, 6×6 cm and 6×7 cm; the older 6×9 cm is rarely used. The designs of this kind of
camera show greater variation than their larger brethren, ranging from monorail systems through
the classic Hasselblad model with separate backs, to smaller rangefinder cameras. There are
even compact amateur cameras available in this format.
Twin-lens reflex camera
Main article: Twin-lens reflex camera
Twin-lens reflex cameras used a pair of nearly identical lenses: one to form the image and one
as a viewfinder.[31] The lenses were arranged with the viewing lens immediately above the taking
lens. The viewing lens projects an image onto a viewing screen which can be seen from above.
Some manufacturers such as Mamiya also provided a reflex head to attach to the viewing screen
to allow the camera to be held to the eye when in use. The advantage of a TLR was that it could
be easily focused using the viewing screen and that under most circumstances the view seen on
the viewing screen was identical to that recorded on film. At close distances, however, parallax
errors were encountered, and some cameras also included an indicator to show what part of the
composition would be excluded.
Some TLRs had interchangeable lenses, but as these had to be paired lenses, they were
relatively heavy and did not provide the range of focal lengths that the SLR could support. Most
TLRs used 120 or 220 film; some used the smaller 127 film.
Compact cameras
Instant camera
Main article: Instant camera
Instant camera
After exposure, each photograph is drawn through pinch rollers inside the instant camera,
which spread the developer paste contained in the paper 'sandwich' across the image. After
about a minute, the cover sheet is peeled away, leaving a single original positive image
in a fixed format. Some systems could also produce an instant negative,
from which copies could be made in a photo lab. The ultimate development was Polaroid's SX-
70 system, in which a pack of ten shots, motor driven, could be made without
having to remove any cover sheets from the picture. There were instant cameras for a variety of
formats, as well as adapters for instant film use in medium- and large-format cameras.
Subminiature camera
Main article: Subminiature camera
Subminiature spy camera
Subminiature cameras were first produced in the twentieth century and use film significantly
smaller than 35 mm. The expensive 8×11 mm Minox, the only type of camera produced by the
company from 1937 to 1976, became very widely known and was often used for espionage (the
Minox company later also produced larger cameras). Later inexpensive subminiatures were
made for general use, some using rewound 16 mm cine film. Image quality with these small film
sizes was limited.
Folding camera
Main article: Folding camera
Folding camera
The introduction of roll film enabled the existing designs for plate cameras to be made much
smaller, with the baseplate hinged so that it could be folded up, compressing the bellows.
These designs were very compact, and small models were dubbed vest pocket cameras. Folding
roll film cameras were preceded by folding plate cameras, which were more compact than other designs.
Box camera
Main article: Box camera
Kodak box camera
Box cameras were introduced as budget-level cameras and had few if any controls. The original
box Brownie models had a small reflex viewfinder mounted on the top of the camera and had no
aperture or focusing controls and just a simple shutter. Later models such as the Brownie 127
had larger direct view optical viewfinders together with a curved film path to reduce the impact of
deficiencies in the lens.
Rangefinder camera
Main article: Rangefinder camera
Rangefinder camera, Leica c. 1936
As camera lens technology developed and wide aperture lenses became more
common, rangefinder cameras were introduced to make focusing more precise. Early
rangefinders had two separate viewfinder windows, one of which was linked to the focusing
mechanism and moved right or left as the focusing ring was turned. The two separate images were
brought together on a ground glass viewing screen. When vertical lines in the object being
photographed met exactly in the combined image, the object was in focus. A normal composition
viewfinder was also provided. Later the viewfinder and rangefinder were combined. Many
rangefinder cameras had interchangeable lenses, each lens requiring its own rangefinder and
viewfinder linkages.
Rangefinder cameras were produced in half- and full-frame 35 mm and roll film (medium format).
Motion picture cameras
Main article: Movie camera
Further information: Digital movie camera
A movie camera or a video camera operates similarly to a still camera, except it records a series
of static images in rapid succession, commonly at a rate of 24 frames per second. When the
images are combined and displayed in order, the illusion of motion is achieved. [32]: 4
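The arithmetic behind this illusion is simple: the number of still images recorded is just the frame rate multiplied by the clip duration. A minimal sketch:

```python
def total_frames(duration_seconds: float, fps: int = 24) -> int:
    """Number of still images captured for a clip at a given frame rate."""
    return round(duration_seconds * fps)

# A 10-second shot at the standard cinema rate of 24 frames per second:
print(total_frames(10))  # 240 frames
```

At 24 fps, even a short feature film accumulates well over a hundred thousand individual frames.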
Cameras that capture many images in sequence are known as movie cameras or as cine
cameras in Europe; those designed for single images are still cameras. However, these
categories overlap as still cameras are often used to capture moving images in special
effects work and many modern cameras can quickly switch between still and motion recording
modes.
A ciné camera or movie camera takes a rapid sequence of photographs on an image sensor or
strips of film. In contrast to a still camera, which captures a single snapshot at a time, the ciné
camera takes a series of images, each called a frame, through the use of an intermittent
mechanism.
The frames are later played back in a ciné projector at a specific speed, called the frame
rate (number of frames per second). While viewing, a person's visual system merges the separate
pictures to create the illusion of motion. The first ciné camera was built around 1888 and by 1890
several types were being manufactured. The standard film size for ciné cameras was quickly
established as 35mm film and this remained in use until the transition to digital cinematography.
Other professional standard formats include 70 mm film and 16 mm film whilst amateur
filmmakers used 9.5 mm film, 8 mm film, or Standard 8 and Super 8 before the move into digital
format.
The size and complexity of ciné cameras vary greatly depending on the uses required of the
camera. Some professional equipment is very large and too heavy to be handheld whilst some
amateur cameras were designed to be very small and light for single-handed operation.
Professional video camera
Arri Alexa, a digital movie camera
Main article: Professional video camera
Further information: Video camera
A professional video camera (often called a television camera even though its use has spread
beyond television) is a high-end device for creating electronic moving images (as opposed to a
movie camera, which earlier recorded the images on film). Originally developed for use in television
studios, they are now also used for music videos, direct-to-video movies, and corporate,
educational, and wedding videos.
These cameras earlier used vacuum tubes and later electronic image sensors.
Camcorders
Sony HDR-HC1E, an HDV camcorder
Main article: Camcorders
A camcorder is an electronic device combining a video camera and a video recorder. Although
marketing materials may use the colloquial term "camcorder", the name on the package and
manual is often "video camera recorder". Most devices capable of recording video are camera
phones and digital cameras primarily intended for still pictures; the term "camcorder" is used to
describe a portable, self-contained device, with video capture and recording its primary function.
Digital camera
Main article: Digital camera
Disassembled digital camera
Further information: Digital image, Digital imaging, Digital photography, Digital single-lens reflex
camera, and Digital video
A digital camera (or digicam) is a camera that encodes digital images and videos and stores
them for later reproduction.[33] They typically use semiconductor image sensors.[34] Most cameras
sold today are digital,[35] and they are incorporated into many devices ranging from mobile phones
(called camera phones) to vehicles.
Digital and film cameras share an optical system, typically using a lens of variable aperture to
focus light onto an image pickup device.[36] The aperture and shutter admit the correct amount of
light to the imager, just as with film but the image pickup device is electronic rather than
chemical. However, unlike film cameras, digital cameras can display images on a screen
immediately after being captured or recorded, and store and delete images from memory. Most
digital cameras can also record moving videos with sound. Some digital cameras
can crop and stitch pictures and perform other elementary image editing.
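The "correct amount of light" admitted by the aperture and shutter is commonly summarized by the exposure value, EV = log2(N²/t), where N is the f-number and t the shutter time in seconds. A short sketch of this standard formula (the sample values below are illustrative, not from the text):

```python
import math

def exposure_value(f_number: float, shutter_seconds: float) -> float:
    """Exposure value: EV = log2(N^2 / t), conventionally quoted at ISO 100."""
    return math.log2(f_number ** 2 / shutter_seconds)

# A typical bright-daylight exposure: f/8 at 1/125 s is roughly EV 13.
print(round(exposure_value(8, 1 / 125)))  # 13
```

Any aperture/shutter pair with the same EV admits the same total light, which is why a camera can trade a wider aperture for a faster shutter without changing the exposure.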
Consumers adopted digital cameras in the 1990s. Professional video cameras transitioned to
digital around the 2000s–2010s. Finally, movie cameras transitioned to digital in the 2010s.
The first camera using digital electronics to capture and store images was developed by Kodak
engineer Steven Sasson in 1975. He used a charge-coupled device (CCD) provided by Fairchild
Semiconductor, which provided only 0.01 megapixels to capture images. Sasson combined the
CCD device with movie camera parts to create a digital camera that saved black and white
images onto a cassette tape.[37]: 442 The images were then read from the cassette and viewed on a
TV monitor.[38]: 225 Later, cassette tapes were replaced by flash memory.
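To put that 0.01-megapixel figure in perspective, a small sketch can convert a megapixel count back into pixel dimensions (assuming a square sensor, which matches Sasson's 100×100-pixel CCD):

```python
import math

def square_resolution(megapixels: float) -> tuple:
    """Approximate (width, height) of a square sensor with the given megapixels."""
    side = round(math.sqrt(megapixels * 1_000_000))
    return (side, side)

print(square_resolution(0.01))  # (100, 100): Sasson's 1975 prototype
print(square_resolution(24.0))  # (4899, 4899): order of a modern full-frame sensor
```

A modern 24-megapixel sensor thus records over two thousand times as many pixels as the 1975 prototype.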
In 1986, Japanese company Nikon introduced an analog-recording electronic single-lens reflex
camera, the Nikon SVC.[39]
Sony Alpha 1, a full-frame mirrorless digital camera
The first full-frame digital SLR cameras were developed in Japan from around 2000 to 2002: the
MZ-D by Pentax,[40] the N Digital by Contax's Japanese R6D team,[41] and the EOS-1Ds by Canon.[42]
Gradually in the 2000s, the full-frame DSLR became the dominant camera type for
professional photography.
On most digital cameras a display, often a liquid crystal display (LCD), permits the user to view
the scene to be recorded and settings such as ISO speed, exposure, and shutter speed.[10]: 6–7 [43]: 12
Camera phone
Smartphone with built-in camera
Main article: Camera phone
Further information: Front-facing camera and Selfie
In 2000, Sharp introduced the world's first digital camera phone, the J-SH04 J-Phone, in Japan.[44]
By the mid-2000s, higher-end cell phones had an integrated digital camera, and by the
beginning of the 2010s, almost all smartphones had an integrated digital camera.