Printed Circuit Board
A printed circuit board (PCB), also known as a printed wiring board (PWB), connects
electronic components in a circuit. It features a laminated structure of conductive and
insulating layers. Conductive layers, typically made of copper, are etched with patterns of
traces, planes, and other features similar to wires on a flat surface. These layers are laminated
onto and/or between non-conductive substrates.
Electrical components are attached to conductive pads on the PCB's outer layers, typically
through soldering. This process ensures both electrical connection and mechanical fastening.
Vias, or plated-through holes, allow for interconnections between different layers.
Advantages of PCBs
PCBs are essential in nearly all electronic products. While older methods like wire wrap and
point-to-point construction are rarely used now, PCBs are preferred for their ability to
automate manufacturing and assembly. Electronic design automation software aids in the
layout process, making mass production cheaper and faster. PCBs enable simultaneous
mounting and wiring of components, and multiple PCBs can be fabricated at once, requiring
only a single layout design.
Types of PCBs
The popularity of multi-layer PCBs increased with the adoption of surface mount technology,
though they complicate repair and analysis.
Market Growth
In 2014, the global market for bare PCBs was over $60.2 billion and is projected to reach $79
billion by 2024.
Basic Structure
A basic PCB comprises an insulating material sheet and a copper foil layer laminated to the
substrate. Chemical etching divides the copper into conductive lines called tracks or circuit
traces, connection pads, vias for inter-layer connections, and solid conductive areas for
shielding. These tracks act as fixed wires, insulated by air and the board's substrate material.
The PCB surface may have a coating called solder resist or solder mask to protect the copper
from corrosion and prevent solder shorts.
Etching Process
The pattern to be etched into each copper layer is known as the "artwork." Etching typically
involves using photoresist, which is coated onto the PCB and exposed to light in the artwork's
pattern. The resist material shields the copper during etching. After etching, the board is
cleaned. This process allows for mass reproduction of PCB designs, akin to duplicating
photographs from film negatives.
Substrate Materials
The most common insulating substrate is FR-4 glass epoxy. Another option is cotton paper
impregnated with phenolic resin, often tan or brown.
Terminology
When a PCB has no components installed, it is sometimes called a printed wiring board
(PWB) or etched wiring board, though these terms are less common today. A PCB with
installed components is known as a printed circuit assembly (PCA), PCB assembly (PCBA),
or informally, a "card." The IPC preferred term for an assembled board is circuit card
assembly (CCA), and for an assembled backplane, it is a backplane assembly.
Legend Printing
A PCB may feature a legend identifying components, test points, or other information.
Initially done with silkscreen printing, today's methods use finer quality printing techniques.
The legend typically does not affect the PCB's functionality.
Layer Structure
PCBs can have multiple copper layers, usually arranged in pairs. The number of layers and
their interconnections, such as vias and plated-through holes (PTHs), indicate the board's
complexity. More layers provide better routing options and signal integrity but increase
manufacturing time and cost.
The choice of vias impacts board size, signal routing, and reliability but also adds to
production complexity and expense. Simple two-layer boards, with copper on both sides
(external layers), are easier to produce. Multi-layer boards add internal copper and insulation
layers, offering advanced routing options and better performance for signal integrity, high-
frequency signaling, and power supply decoupling.
Four-layer boards, the next step up, offer significantly more routing options; their internal
layers are often used as ground or power planes, enhancing signal integrity, reducing
electromagnetic interference (EMI), and improving power decoupling.
Multi-Layer Construction
In multi-layer PCBs, materials are laminated in an alternating sandwich structure: copper,
substrate, copper, substrate, etc. Each copper plane is etched, and internal vias are plated-
through before lamination. Only the outer layers need coating, as inner copper layers are
protected by adjacent substrate layers.
Component Mounting
Through-Hole Components: These are mounted by inserting their wire leads through holes
in the PCB and soldering them to traces on the opposite side. This method, once common, is
now used primarily for large components like electrolytic capacitors and connectors.
Through-hole technology requires precise drilling of holes, adding to manufacturing costs
and limiting routing area on multi-layer boards.
Surface Mount Components: These components have small metal tabs or end caps that are
soldered directly to the copper traces on the same side of the board. Surface mounting is
widely used for smaller components such as transistors, diodes, IC chips, resistors, and
capacitors. It allows for higher circuit density and is more cost-effective, as it supports
automation and reduces labor costs.
Surface Mount Technology (SMT): Emerging in the 1960s and gaining widespread use by
the 1990s, SMT involves soldering components directly onto the PCB surface. This technique
allows for smaller components, higher circuit density, and the use of both sides of the board.
SMT is ideal for automation, significantly increasing production rates and reducing costs.
Surface mount components can be as small as one-quarter to one-tenth the size and weight of
their through-hole counterparts.
Benefits of SMT: smaller and lighter components, higher circuit density, use of both sides of
the board, and lower cost through automated assembly.
Drawbacks of SMT: components and solder joints are harder to inspect, rework, and repair by
hand, and large or mechanically stressed parts remain better served by through-hole mounting.
Historical Context
Surface mounting became popular in the 1980s and 1990s, leading to a shift away from
through-hole technology. Components were redesigned for SMT, making them smaller and
easier to place on both sides of the PCB, which facilitated higher circuit densities and smaller
PCB assemblies.
Trace Characteristics: Each trace on a PCB is a flat, narrow strip of copper foil left after etching.
The resistance of a trace is determined by its width, thickness, and length, and it must be low enough
to carry the required current. Power and ground traces are typically wider than signal traces.
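As a rough illustration of how those three dimensions set trace resistance, here is a minimal sketch; the dimensions and the 35 µm (roughly 1 oz/ft²) foil thickness are illustrative assumptions, not values from the text:

```python
# Sketch: DC resistance of a rectangular copper trace, R = rho * L / (W * T).
# Values are illustrative; 1 oz/ft^2 copper foil is roughly 35 micrometres thick.

RHO_COPPER = 1.68e-8  # ohm-metres, resistivity of copper at ~20 degC

def trace_resistance(length_m: float, width_m: float, thickness_m: float) -> float:
    """Return the DC resistance (ohms) of a flat rectangular trace."""
    return RHO_COPPER * length_m / (width_m * thickness_m)

# A 10 cm signal trace, 0.25 mm wide, in 1 oz copper:
r_signal = trace_resistance(0.10, 0.25e-3, 35e-6)
# The same length as a 2 mm power trace -- wider, so lower resistance:
r_power = trace_resistance(0.10, 2.0e-3, 35e-6)
print(f"signal trace: {r_signal*1e3:.0f} mOhm, power trace: {r_power*1e3:.0f} mOhm")
```

The wider power trace shows roughly an eight-fold drop in resistance, which is why power and ground traces are made wider.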
Ground Planes and Shielding: In multi-layer PCBs, an entire layer can be solid copper, acting as a
ground plane for shielding and power return. This design helps reduce noise and improves overall
performance.
Microwave and RF Circuits: For microwave circuits, traces are laid out as transmission lines in
planar forms such as stripline or microstrip, with precise dimensions to ensure consistent impedance.
In radio-frequency (RF) and fast-switching circuits, the inductance and capacitance of PCB traces
become significant and are usually undesirable. However, these properties can be deliberately
exploited in designing distributed-element filters, antennas, and fuses, eliminating the need for
additional components.
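To make the impedance-control point concrete, the sketch below uses one widely quoted approximation for surface microstrip impedance (the IPC-2141 formula); the geometry and εr values are illustrative assumptions, not recommendations:

```python
import math

# Sketch: characteristic impedance of a surface microstrip using the common
# IPC-2141 approximation: Z0 = 87/sqrt(er + 1.41) * ln(5.98*h / (0.8*w + t)).
# Dimensions below are illustrative, not taken from the text above.

def microstrip_z0(er: float, h: float, w: float, t: float) -> float:
    """er: substrate dielectric constant; h: dielectric height; w: trace width;
    t: trace thickness (all lengths in the same unit)."""
    return 87.0 / math.sqrt(er + 1.41) * math.log(5.98 * h / (0.8 * w + t))

# FR-4 (er ~ 4.4), 0.36 mm dielectric, 0.6 mm wide trace, 35 um copper:
print(f"Z0 ~ {microstrip_z0(4.4, 0.36, 0.60, 0.035):.0f} ohm")  # ~52 ohm
```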
High-Density Interconnects (HDI): HDI PCBs feature very fine tracks and vias, with widths or
diameters under 152 micrometers, allowing for greater circuit density and complexity.
PCB Laminates
Manufacturing Process: Laminates are created by curing layers of cloth or paper with
thermoset resin under heat and pressure, forming uniform pieces that can be up to 4 by 8 feet
in size. Various cloth weaves, thicknesses, and resin percentages achieve the desired
thickness and dielectric properties.
Material Composition: The type designation of a laminate (e.g., FR-4, CEM-1, G-10) is
determined by the cloth or fiber material, resin material, and cloth-to-resin ratio. Key
characteristics include fire retardancy, dielectric constant (εr), loss tangent (tan δ), tensile
and shear strength, glass transition temperature (Tg), and Z-axis expansion coefficient.
Dielectrics and Pre-Pregs: Different dielectrics are selected to suit different circuit
requirements; common choices include FR-4 glass epoxy, polyimide, and PTFE (Teflon), discussed
under Material Choices below.
Pre-preg materials in the PCB industry vary widely, each with specific applications and
properties.
Copper-Clad Laminate: The base material with unetched copper is known as copper-clad
laminate.
As PCB features shrink and frequencies increase, even minor nonhomogeneities (e.g., uneven
fiberglass distribution, thickness variations, resin bubbles) become significant. These
imperfections can cause local variations in the dielectric constant, affecting performance.
Standard Thickness
Reinforcement Types:
Woven Reinforcements: These are cheaper but may introduce local variations in
electrical parameters due to the high dielectric constant of glass and the non-uniform
resin/glass ratio.
Non-Woven Reinforcements: These are more expensive but better suited for RF and
analog applications due to their uniform structure and lower dielectric constant.
Key Parameters:
1. Thermomechanical Properties:
o Glass Transition Temperature (Tg): The temperature at which the resin
softens, leading to increased thermal expansion. High Tg materials are
essential for thick boards to prevent mechanical overload and failure.
o Tensile Strength and Shear Strength: These indicate the board's ability to
withstand mechanical stresses.
o Thermal Expansion: The expansion of the material under heat, which can
stress the plated-through holes and other components.
2. Electrical Properties:
o Dielectric Constant (εr): Determines signal propagation speed and affects
phase distortion in wideband applications. It typically decreases with
increasing frequency (a quick calculation follows this list).
o Dielectric Breakdown Voltage: The maximum voltage the material can
withstand before failing.
o Tracking Resistance: The material's resistance to high voltage discharges
along its surface.
o Loss Tangent (tan δ): Indicates how much electromagnetic energy is
absorbed by the board material. Low-loss materials are crucial for high-
frequency applications but are more expensive.
3. Other Properties:
o Moisture Absorption: The tendency of the material to absorb water, which
can degrade electrical properties and cause physical damage like cracking or
delamination during soldering. FR-4 has low moisture absorption (0.15%),
while Teflon has very low absorption (0.01%).
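As noted in the dielectric-constant item above, signal speed in the substrate follows v = c/√εr. A minimal sketch; the εr values are typical ballpark figures, not vendor specifications:

```python
# Sketch: signal propagation speed in a PCB dielectric, v = c / sqrt(er).
# The er values are typical ballpark figures, not exact specifications.

C = 299_792_458.0  # speed of light in vacuum, m/s

def propagation_delay_ps_per_cm(er: float) -> float:
    """Return signal delay in picoseconds per centimetre of trace."""
    v = C / er ** 0.5           # propagation speed in the dielectric, m/s
    return 0.01 / v * 1e12      # delay over 1 cm, in ps

for name, er in [("FR-4", 4.4), ("PTFE (Teflon)", 2.1)]:
    print(f"{name}: er={er}, ~{propagation_delay_ps_per_cm(er):.0f} ps/cm")
# FR-4: ~70 ps/cm; PTFE: ~48 ps/cm -- lower er means faster signals.
```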
Material Choices:
FR-4: The most common material, offering a good balance of properties for most
applications.
Teflon (PTFE): Used for specialized high-frequency applications due to its low
moisture absorption and stable dielectric properties.
Polyimides and Cyanate Esters: High-performance materials but with higher
moisture absorption, which can degrade performance under humid conditions.
As PCB features become smaller and operating frequencies increase, the uniformity of the
substrate material becomes critical. Variations such as uneven fiberglass distribution or resin
bubbles can significantly impact electrical performance. The choice of materials and their
properties must be carefully matched to the application's requirements to ensure optimal
performance and reliability.
Historical Design Methods: Initially, PCBs were designed manually using a photomask on a
mylar sheet, often at a larger scale. Component pin pads were placed, and traces were routed
using self-adhesive tape and dry transfers for common footprints. These designs were
photolithographically transferred onto photoresist-coated copper-clad boards.
Key Considerations
Copper Thickness and Trace Design: Each trace is a narrow part of the remaining copper
foil after etching. Its resistance is determined by width, thickness, and length. Power and
ground traces may be wider than signal traces. Multi-layer boards may use entire layers as
ground planes for shielding and power return.
Electrical Properties:
FR-4: The most common material, offering a good balance of electrical and mechanical
properties.
High-Frequency Materials: Teflon and other specialized materials are used for applications
requiring stable dielectric properties at high frequencies.
Fabrication Challenges
High Density Interconnects (HDI): HDI PCBs have finer traces and vias, requiring
advanced manufacturing techniques to maintain precision and reliability.
Design Automation: Modern PCBs leverage advanced EDA tools to automate many design
steps, reducing errors and increasing efficiency.
Graphics Card
A graphics card, also known as a video card or display adapter, is hardware designed to
enhance a computer's video memory and display quality, elevating its performance and
enabling demanding tasks like gaming and video editing. It plays a pivotal role in rendering
lifelike visuals by generating a feed of graphics output to a display device, such as a
monitor.
The GPU, often referred to as the brain of the graphics card, is the primary component
responsible for creating the stunning visuals we see on our screens. It comes in various
models, each with its own level of power and capabilities.
Graphics cards are typically computer expansion cards, housed in the form of printed circuit
boards (expansion boards) that are inserted into expansion slots on the motherboard.
However, some graphics cards may come in dedicated enclosures, connecting to the
computer via docking stations or cables, known as external GPUs (eGPUs).
One of the distinguishing features of a graphics card is its ability to offload graphics
processing tasks from the central processing unit (CPU), thereby enhancing overall system
performance. This allows for smoother gameplay, faster video rendering, and improved
graphical fidelity.
In addition to display output, modern graphics cards can also be utilized for additional
processing tasks, reducing the workload on the CPU. Platforms like OpenCL and CUDA
enable the use of graphics cards for general-purpose computing applications, such as AI
training, cryptocurrency mining, and molecular simulation.
Overall, graphics cards are preferred over integrated graphics for their superior performance
and ability to handle demanding graphical tasks effectively. They are essential components
for anyone seeking to maximize their computer's graphical capabilities and overall
performance.
While integrated graphics may offer advantages in terms of affordability, simplicity, and
energy consumption, they typically deliver lower performance compared to dedicated
graphics cards. This is because the graphics processing unit (GPU) in integrated graphics
shares system resources with the CPU, leading to potential performance limitations.
In contrast, dedicated graphics cards boast separate components such as dedicated random
access memory (RAM), cooling systems, and power regulators. This allows them to offload
graphics processing tasks from the CPU and system RAM, potentially improving overall
system performance, particularly in graphics-intensive applications like gaming, 3D
animation, and video editing.
Both AMD and Intel have introduced CPUs and motherboard chipsets supporting integrated
GPUs. AMD markets CPUs with integrated graphics as Accelerated Processing Units
(APUs), while Intel promotes similar technology under the "Intel Graphics Technology"
branding. These integrated solutions offer a balance between performance and affordability,
catering to a wide range of computing needs.
As graphics cards have advanced in processing power, their hunger for electrical power has
grown accordingly. Today's high-performance models demand substantial energy, with
examples like the GeForce Titan RTX boasting a thermal design power (TDP) of 280 watts,
while the GeForce RTX 2080 Ti Founder's Edition can consume an average of 300 watts
during video game testing. Despite strides in CPU and power supply efficiency, graphics
cards remain among the most power-intensive components in computers.
To meet these demands, modern graphics cards often feature six-pin (75 W) or eight-pin (150
W) power sockets directly connecting to the power supply. However, cooling these power-
hungry cards poses a challenge, especially in systems with multiple graphics cards requiring
power supplies exceeding 750 watts. Effective heat extraction becomes crucial in such setups.
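A minimal sketch of the resulting power budget, combining the connector ratings above with the 75 W a PCIe slot itself can supply (the slot figure is a standard rating, not from the text):

```python
# Sketch: power available to a graphics card from the PCIe slot plus auxiliary
# connectors. The slot supplies up to 75 W (PCIe specification); the 6-pin
# (75 W) and 8-pin (150 W) ratings are the ones given in the text above.

CONNECTOR_WATTS = {"slot": 75, "6-pin": 75, "8-pin": 150}

def available_power(connectors: list[str]) -> int:
    """Total rated power (W) for a card fed by the given auxiliary connectors."""
    return CONNECTOR_WATTS["slot"] + sum(CONNECTOR_WATTS[c] for c in connectors)

# A card with two 8-pin sockets can draw up to 375 W within the ratings:
print(available_power(["8-pin", "8-pin"]))  # 375
```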
The latest Nvidia GeForce RTX 30 series, using Ampere architecture, has pushed power
draw to new heights. Custom variants like the "Hall of Fame" RTX 3090 have been recorded
peaking at a staggering 630 watts, while standard RTX 3090 models can reach up to 450
watts. Even the RTX 3080 and 3070 draw significant power, up to 350 watts in the former's
case. Founders Edition cards employ a "dual axial flow through" cooler design, efficiently
expelling heat through fans positioned above and below the card.
Graphics cards come in various sizes to accommodate different computer builds. Some are
designated as "low profile," fitting smaller enclosures. These profiles primarily vary in
height, with low-profile cards occupying less than a standard PCIe slot's height. Length and
thickness also differ, with high-end models often spanning two or three expansion slots and
some, like the RTX 4090, exceeding 300mm in length. Opting for a lower profile card is
advisable when space is limited, although larger computer cases like mid-towers or full
towers can mitigate clearance issues.
Multicard scaling
Multicard scaling, a feature offered by some graphics cards, allows users to distribute
graphics processing across multiple cards. This can be achieved through the PCIe bus on the
motherboard or, more commonly, via a data bridge. Typically, the linked cards must be of the
same model, as most low-end cards lack support for this feature. AMD and Nvidia offer their
proprietary scaling methods, with AMD's CrossFireX and Nvidia's SLI (since replaced by
NVLink in the Turing generation). It's important to note that cards from different
manufacturers or architectures cannot be combined for multicard scaling.
When using multiple cards, memory size compatibility is crucial. If cards have different
memory sizes, the system will utilize the lowest value, disregarding higher values. Presently,
consumer-grade setups can utilize up to four cards, necessitating a large motherboard with the
appropriate configuration. Nvidia's GeForce GTX 590 graphics card, for example, can be
configured in a four-card setup. To ensure optimal performance, users should use cards with
similar performance metrics.
Motherboards certified for multicard configurations include models like the ASUS Maximus
3 Extreme and Gigabyte GA EX58 Extreme. Adequate power supply is essential, especially
for four-card setups, which may require a 1000+ watt supply. Effective thermal management
is crucial for powerful graphics cards, requiring well-ventilated chassis and robust cooling
solutions such as air or water cooling. Larger configurations often employ advanced cooling
methods like immersion cooling to prevent thermal throttling.
While SLI and Crossfire setups have become less common due to limited game support and
affordability constraints, they remain prevalent in specialized applications like
supercomputers, workstations for video rendering and 3D rendering, visual effects,
simulations, and AI training. Graphics drivers play a critical role in supporting multicard
setups, with specific driver versions tailored for different operating systems. Additionally,
certain operating systems or software packages offer programming APIs for applications to
perform 3D rendering efficiently.
Parts
A graphics processing unit (GPU), also occasionally called visual processing unit (VPU), is a
specialized electronic circuit designed to rapidly manipulate and alter memory to accelerate the
building of images in a frame buffer intended for output to a display. Because of the large degree
of programmable computational complexity for such a task, a modern graphics card is also a
computer unto itself.
A heat sink is a vital component found on most modern graphics cards, serving to efficiently
dissipate the heat generated by the graphics processing unit (GPU). This heat sink is designed
to evenly spread out the heat across its surface and throughout the unit itself. Often, a fan is
mounted on the heat sink to facilitate cooling, ensuring that the GPU operates within optimal
temperature ranges. However, not all graphics cards feature heat sinks; some utilize
alternative cooling methods such as liquid cooling systems or water blocks. In the early days
of graphics cards, particularly in the 1980s and early 1990s, heat production was minimal,
and heat sinks were not necessary.
Effective thermal management is essential for modern graphics cards, especially those with
high-performance GPUs. Alongside heat sinks, advanced cooling solutions may incorporate
heat pipes, typically made of copper, to enhance thermal conductivity and heat dissipation.
These thermal solutions help prevent overheating and maintain stable performance during
intensive graphics processing tasks.
The video BIOS, or firmware, plays a crucial role in the operation of a graphics card. It
contains essential programming for initializing and controlling various aspects of the card,
including memory configuration, operating speeds, and voltage settings. However, modern
video BIOSes primarily focus on basic functions like identifying the card and initializing
display modes, while more advanced features such as video scaling and pixel processing are
handled by software drivers.
Graphics cards boast varying memory capacities, ranging from 2 to 24 GB, with some high-
end models offering up to 32 GB. This memory, often referred to as VRAM (Video Random
Access Memory), serves as dedicated storage for screen images, textures, vertex buffers, and
shader programs. Over the years, memory technology has evolved from DDR to advanced
versions like GDDR6, with corresponding increases in memory bandwidth and speed.
Output interfaces
Video Graphics Array (VGA)
Also known as D-sub, VGA is an analog standard adopted in the late 1980s and designed for CRT
displays; the connector itself is often called a VGA connector. Today, the VGA analog interface is
used for high-definition video resolutions, including 1080p and higher. Some problems of this
standard are electrical noise, image distortion, and sampling error in evaluating pixels. While the VGA
transmission bandwidth is high enough to support even higher resolution playback, the picture
quality can degrade depending on cable quality and length. The extent of quality difference
depends on the individual's eyesight and the display; when using a DVI or HDMI connection,
especially on larger sized LCD/LED monitors or TVs, quality degradation, if present, is
prominently visible. Blu-ray playback at 1080p is possible via the VGA analog interface, if Image
Constraint Token (ICT) is not enabled on the Blu-ray disc.
Digital Visual Interface (DVI)
Digital Visual Interface is a digital-based standard designed for displays such as flat-panel
displays (LCDs, plasma screens, wide high-definition television displays) and video projectors.
There were also some rare high-end CRT monitors that use DVI. It avoids image distortion and
electrical noise, corresponding each pixel from the computer to a display pixel, using its native
resolution. It is worth noting that most manufacturers include a DVI-I connector, allowing (via
simple adapter) standard RGB signal output to an old CRT or LCD monitor with VGA input.
Video-in video-out (VIVO) for S-Video, composite video and component video
These connectors are included to allow connection with televisions, DVD players, video
recorders and video game consoles. They often come in two 10-pin mini-DIN
connector variations, and the VIVO splitter cable generally comes with either 4 connectors (S-
Video in and out plus composite video in and out), or 6 connectors (S-Video in and
out, component YPbPr out and composite in and out).
High-Definition Multimedia Interface (HDMI)
HDMI is a compact audio/video interface for transferring uncompressed video data and
compressed/uncompressed digital audio data from an HDMI-compliant device ("the source
device") to a compatible digital audio device, computer monitor, video projector, or digital
television.[52] HDMI is a digital replacement for existing analog video standards. HDMI
supports copy protection through HDCP.
DisplayPort
Central Processing Unit (CPU)
While the form, design, and implementation of CPUs have evolved significantly over time,
their fundamental operation has remained largely consistent. Key components of a CPU
include the arithmetic–logic unit (ALU), responsible for performing arithmetic and logical
operations, processor registers for supplying operands to the ALU and storing operation
results, and a control unit that coordinates fetching, decoding, and executing instructions by
managing the ALU, registers, and other components. Modern CPUs allocate substantial
semiconductor area to features like caches and instruction-level parallelism to enhance
performance, along with supporting various CPU modes to accommodate operating systems
and virtualization.
Most contemporary CPUs are integrated onto microprocessor chips, with some chips
featuring multiple CPUs, known as multi-core processors. These individual physical CPUs,
or processor cores, may also support multithreading at the CPU level to further enhance
performance.
Operation
The instruction cycle of a CPU is a fundamental process that drives the execution of
computer programs. Let's delve deeper into each stage:
1. Fetch: In this stage, the CPU retrieves the next instruction from memory based on the
address stored in the program counter (PC). The PC keeps track of the memory
address of the current instruction being executed. The fetched instruction is then
loaded into the instruction register (IR) within the CPU.
2. Decode: Once the instruction is fetched, the CPU decodes it to determine what
operation it needs to perform. This involves interpreting the opcode (operation code)
and any operands associated with the instruction. The decoding process prepares the
CPU for the next stage, where the actual operation will be executed.
3. Execute: In this stage, the CPU carries out the operation specified by the decoded
instruction. This may involve performing arithmetic or logical calculations, accessing
data from memory, or transferring data between different registers within the CPU.
The execution stage produces results or changes the state of the CPU and the system
as a whole.
4. Write Back: Some CPUs have an additional stage called "write back," where the
results of the executed instruction are written back to memory or stored in registers.
This stage completes the instruction cycle and prepares the CPU to fetch the next
instruction.
Throughout the instruction cycle, the program counter is updated to point to the next
instruction to be fetched, ensuring that the CPU continues to execute instructions in sequence.
Additionally, modern CPUs may employ optimizations such as pipelining, out-of-order
execution, and speculative execution to improve performance by overlapping the execution of
multiple instructions. These techniques further enhance the efficiency and throughput of the
CPU.
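To make the cycle concrete, here is a minimal sketch of a fetch-decode-execute loop. The three-field instruction format and the tiny LOAD/ADD/STORE/HALT instruction set are invented for illustration; real instruction sets are far richer:

```python
# Sketch: a toy fetch-decode-execute loop. The instruction format and the
# LOAD/ADD/STORE/HALT instruction set are invented for illustration only.

def run(program, memory):
    regs = [0] * 4   # processor registers
    pc = 0           # program counter
    while True:
        opcode, a, b = program[pc]      # fetch: read the instruction at PC
        pc += 1                          # ...and advance the PC past it
        # decode + execute:
        if opcode == "LOAD":             # regs[a] <- memory[b]
            regs[a] = memory[b]
        elif opcode == "ADD":            # regs[a] <- regs[a] + regs[b]
            regs[a] = regs[a] + regs[b]
        elif opcode == "STORE":          # memory[b] <- regs[a] (write back)
            memory[b] = regs[a]
        elif opcode == "HALT":
            return memory

memory = {0: 2, 1: 3, 2: 0}
program = [("LOAD", 0, 0), ("LOAD", 1, 1), ("ADD", 0, 1),
           ("STORE", 0, 2), ("HALT", 0, 0)]
print(run(program, memory)[2])  # 5
```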
Fetch
Fetch involves retrieving an instruction (which is represented by a number or sequence of
numbers) from program memory. The instruction's location (address) in program memory is
determined by the program counter (PC; called the "instruction pointer" in Intel x86
microprocessors), which stores a number that identifies the address of the next instruction to be
fetched. After an instruction is fetched, the PC is incremented by the length of the instruction so
that it will contain the address of the next instruction in the sequence. [d] Often, the instruction to
be fetched must be retrieved from relatively slow memory, causing the CPU to stall while waiting
for the instruction to be returned. This issue is largely addressed in modern processors by
caches and pipeline architectures (see below).
Decode
In the decode step, the instruction fetched from memory is converted into signals that control
other parts of the CPU. How the instruction is interpreted is defined by the CPU's instruction
set architecture (ISA). Typically, one field of bits within the instruction, the opcode,
indicates which operation is to be performed, while the remaining fields supply information the
operation requires, such as the operands. Those operands may be given as constant values or as
the locations of values: registers or memory addresses, as determined by the addressing mode.
In some designs a hardwired binary decoder circuit decodes the instruction directly; in others
a microprogram translates the instruction into sets of internal control signals that are
applied in sequence over multiple clock pulses.
Execute
Following the fetch and decode stages, the CPU proceeds to the execute step, where it
performs the actual operation specified by the instruction. This step can involve a single
action or a sequence of actions, depending on the CPU architecture. During each action,
control signals are activated or deactivated to enable various CPU components to execute the
operation. These actions are typically synchronized with clock pulses.
For instance, when executing an addition instruction, the CPU activates the registers holding
the operands and the relevant components of the arithmetic logic unit (ALU) responsible for
addition. As the clock pulse occurs, the operands are transferred from the source registers to
the ALU, where the addition operation takes place. The result, the sum, emerges at the output
of the ALU.
Subsequent clock pulses may activate additional components to store the output, such as
writing the sum to a register or main memory. If the result exceeds the capacity of the ALU's
output, triggering an arithmetic overflow, an overflow flag is set, impacting subsequent
operations. This orchestrated sequence of actions ensures the proper execution of instructions
and the handling of their outcomes within the CPU.
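A minimal sketch of that add-with-flags behavior, using an assumed 8-bit register width purely for illustration:

```python
# Sketch: an 8-bit ALU addition with carry and overflow flags. The 8-bit width
# is illustrative; the same idea applies at any register width.

def alu_add8(a: int, b: int) -> tuple[int, bool, bool]:
    """Add two 8-bit values. Returns (result, carry_out, signed_overflow)."""
    raw = a + b
    result = raw & 0xFF                     # keep only the low 8 bits
    carry = raw > 0xFF                      # unsigned result did not fit
    # Signed overflow: both operands share a sign bit that the result flips.
    overflow = ((a ^ raw) & (b ^ raw) & 0x80) != 0
    return result, carry, overflow

print(alu_add8(100, 100))   # (200, False, True)  -- signed overflow
print(alu_add8(200, 100))   # (44, True, False)   -- carry out
```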
At the core of a CPU lies its instruction set, defining a set of fundamental operations it can
execute. These operations encompass tasks like arithmetic calculations, comparisons, and
program control. Each operation is encoded into a unique bit pattern, termed the machine
language opcode. During execution, the CPU interprets this opcode, typically through a
binary decoder, to generate control signals that dictate its behavior.
A machine language instruction comprises the opcode alongside optional bits specifying
operation arguments, such as operands for arithmetic operations. As complexity increases, a
machine language program emerges—a sequence of instructions the CPU processes. These
instructions reside in memory, fetched by the CPU as needed for execution.
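A sketch of how an opcode and operand fields might be packed into one machine word; the 16-bit layout and field widths here are invented, not taken from any real ISA:

```python
# Sketch: encoding a machine instruction as opcode + operand bit fields.
# The 16-bit layout (4-bit opcode, two 6-bit operands) is invented here.

OPCODES = {"ADD": 0x1, "SUB": 0x2, "LOAD": 0x3}

def encode(mnemonic: str, op_a: int, op_b: int) -> int:
    """Pack into a 16-bit word: [opcode:4][a:6][b:6]."""
    return (OPCODES[mnemonic] << 12) | ((op_a & 0x3F) << 6) | (op_b & 0x3F)

def decode(word: int) -> tuple[int, int, int]:
    """Unpack the same fields; this mirrors what a binary decoder does."""
    return word >> 12, (word >> 6) & 0x3F, word & 0x3F

word = encode("ADD", 3, 7)
print(f"{word:016b}", decode(word))  # 0001000011000111 (1, 3, 7)
```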
Within the CPU lies the arithmetic–logic unit (ALU), a combinational logic circuit responsible
for executing the mathematical and logical operations specified by the instructions. When
processing an instruction, the CPU retrieves it from memory, uses the ALU to perform the
operation, and then stores the result back into memory.
Beyond basic arithmetic and logic, the instruction set encompasses a range of operations.
These include loading and storing data in memory, directing program flow through branching
operations, and handling floating-point arithmetic through the CPU's dedicated floating-point
unit (FPU). Together, these instructions facilitate the diverse array of tasks executed by a
CPU.
Control unit
The control unit (CU) is a component of the CPU that directs the operation of the processor. It
tells the computer's memory, arithmetic and logic unit and input and output devices how to
respond to the instructions that have been sent to the processor.
It directs the operation of the other units by providing timing and control signals. Most computer
resources are managed by the CU. It directs the flow of data between the CPU and the other
devices. John von Neumann included the control unit as part of the von Neumann architecture. In
modern computer designs, the control unit is typically an internal part of the CPU with its overall
role and operation unchanged since its introduction.[69]
Address generation unit
The Address Generation Unit (AGU), also referred to as the Address Computation Unit
(ACU), is a crucial component within a CPU, responsible for swiftly calculating memory
addresses essential for accessing main memory. By conducting these calculations in parallel
with other CPU tasks, the AGU optimizes performance by minimizing the number of CPU
cycles required for executing various instructions, thereby enhancing overall efficiency.
In essence, the AGU plays a pivotal role in CPU performance by efficiently handling address
calculations, minimizing memory access overhead, and facilitating the swift execution of
instructions.
Cache
Key Points:
1. Cache Levels:
o L1 Cache: Closest to the CPU core, split into L1d (data) and L1i
(instructions).
o L2 Cache: Acts as a repository for L1 caches, each core has a dedicated L2
cache.
o L3 Cache: Shared among all cores, larger than L2.
o L4 Cache: Less common, often implemented on DRAM.
2. Characteristics:
o Speed and Proximity: Caches are faster and closer to the CPU than main
memory.
o Optimization: Each level is optimized differently.
o Splitting: Modern CPUs split the L1 cache for efficiency.
3. Evolution and Implementation:
o Early CPUs: Had single-level caches without splitting.
o Current CPUs: Almost all have multi-level caches, with split L1 and shared
L3.
o Integration: Multiple cache levels integrated on a single chip.
4. Specialized Caches:
o TLB: Part of MMU, crucial for virtual memory management.
5. Sizing:
o Power of Two: Cache sizes are typically in powers of two.
o Exceptions: Some designs have non-standard sizes.
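A minimal sketch of how such a hierarchy is usually reasoned about: average memory access time (AMAT) folds each level's latency and hit rate into a single figure. The latencies and hit rates below are illustrative ballpark numbers, not measurements of any particular processor:

```python
# Sketch: average memory access time (AMAT) across a cache hierarchy.
# Latencies (in CPU cycles) and hit rates are illustrative assumptions.

def amat(levels: list[tuple[float, float]], memory_cycles: float) -> float:
    """levels: (access_latency_cycles, hit_rate) pairs from L1 outward.
    AMAT = L1_latency + L1_miss * (L2_latency + L2_miss * (... + memory))."""
    total = memory_cycles
    for latency, hit_rate in reversed(levels):
        total = latency + (1.0 - hit_rate) * total
    return total

hierarchy = [(4, 0.95), (12, 0.80), (40, 0.60)]  # L1, L2, L3
print(f"AMAT ~ {amat(hierarchy, 200):.1f} cycles")  # ~5.8 cycles
```

Even with a 200-cycle main memory, high hit rates in the inner levels keep the average access close to the L1 latency, which is the whole point of the hierarchy.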
Summary:
CPU caches enhance data access efficiency in modern processors. They consist of multiple
levels with specific roles, optimizing performance. The evolution from single to multi-level
caches reflects ongoing advancements in CPU design to meet performance demands.
Clock rate
Synchronous Operation
Clock Signal: The clock signal's period is set longer than the maximum signal
propagation time within the CPU, ensuring reliable data movement (see the
sketch after this list).
Architecture: This approach simplifies CPU design by synchronizing data movement
with clock signal edges.
Inefficiencies: Slower components dictate overall CPU speed, leading to
inefficiencies as some sections are faster.
Challenges: High clock rates complicate signal synchronization and increase energy
consumption and heat dissipation.
Techniques: Clock gating deactivates unnecessary components to reduce power
consumption. However, its complexities limit usage in mainstream designs. The IBM
PowerPC-based Xenon CPU in the Xbox 360 demonstrates effective clock gating.
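As referenced in the Clock Signal item, the critical-path delay bounds the clock rate. A minimal sketch, with invented delay and margin figures:

```python
# Sketch: the clock period must exceed the worst-case signal propagation time
# through the slowest logic path. The delay and margin are invented figures.

def max_clock_hz(worst_path_delay_s: float, margin: float = 0.1) -> float:
    """Highest clock rate given the critical-path delay plus a safety margin."""
    period = worst_path_delay_s * (1.0 + margin)
    return 1.0 / period

# A 250 ps critical path limits the clock to roughly 3.6 GHz:
print(f"{max_clock_hz(250e-12) / 1e9:.1f} GHz")
```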
In contrast to synchronous CPUs, clockless CPUs operate without a central clock signal,
relying on asynchronous operations.
Hybrid Designs
Some designs take a middle path, pairing a synchronous core with asynchronous elements such as
asynchronous ALUs to capture part of the power and performance benefit of clockless operation.
Word size
CPUs with larger word sizes entail more complex circuitry, making them physically larger,
costlier, and more power-hungry. Despite the availability of CPUs with larger word sizes
(e.g., 16, 32, 64, or even 128 bits), smaller 4- or 8-bit microcontrollers are popular in modern
applications due to their compact size, lower cost, and power efficiency. However, for higher
performance needs, the benefits of larger word sizes may outweigh these drawbacks.
Some CPUs feature internal data paths shorter than the word size to reduce size and cost. For
example, although the IBM System/360 instruction set was 32-bit, models like the Model 30
and Model 40 had 8-bit data paths, requiring four cycles for a 32-bit add. Similarly, the
Motorola 68000 series had 16-bit data paths, necessitating two cycles for a 32-bit add.
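A sketch of the idea behind those multi-cycle adds: a 32-bit sum computed one byte at a time over a narrow data path, with the carry handed from pass to pass (the widths are chosen for illustration):

```python
# Sketch: a 32-bit addition carried out one byte at a time, as on a CPU whose
# internal data path is only 8 bits wide -- four passes, each propagating a
# carry into the next, mirroring the four-cycle add described above.

def add32_via_8bit(a: int, b: int) -> int:
    result, carry = 0, 0
    for byte in range(4):                       # one "cycle" per byte
        a8 = (a >> (8 * byte)) & 0xFF
        b8 = (b >> (8 * byte)) & 0xFF
        s = a8 + b8 + carry
        result |= (s & 0xFF) << (8 * byte)
        carry = s >> 8                          # carry into the next byte
    return result                               # final carry out is dropped

print(add32_via_8bit(0x0001FFFF, 0x00000001) == 0x00020000)  # True
```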
To balance advantages of lower and higher bit lengths, many instruction sets adopt different
widths for integer and floating-point data. For instance, the IBM System/360 supported 64-bit
floating-point values within a primarily 32-bit instruction set, enhancing floating-point
accuracy and range. Later CPU designs often employ mixed bit widths, especially for
general-purpose processors needing a blend of integer and floating-point capabilities to meet
diverse computational requirements.
Parallelism
(Figure: model of a subscalar CPU, in which it takes fifteen clock cycles to complete three instructions.)
The description of the basic operation of a CPU offered in the previous section describes the
simplest form that a CPU can take. This type of CPU, usually referred to as subscalar, operates
on and executes one instruction on one or two pieces of data at a time; that is, it completes
less than one instruction per clock cycle (IPC < 1).
This process gives rise to an inherent inefficiency in subscalar CPUs. Since only one instruction
is executed at a time, the entire CPU must wait for that instruction to complete before proceeding
to the next instruction. As a result, the subscalar CPU gets "hung up" on instructions which take
more than one clock cycle to complete execution. Even adding a second execution unit (see
below) does not improve performance much; rather than one pathway being hung up, now two
pathways are hung up and the number of unused transistors is increased. This design, wherein
the CPU's execution resources can operate on only one instruction at a time, can only possibly
reach scalar performance (one instruction per clock cycle, IPC = 1). However, the performance is
nearly always subscalar (less than one instruction per clock cycle, IPC < 1).
Attempts to achieve scalar and better performance have resulted in a variety of design
methodologies that cause the CPU to behave less linearly and more in parallel. When referring to
parallelism in CPUs, two terms are generally used to classify these design techniques:
instruction-level parallelism (ILP), which seeks to increase the rate at which
instructions are executed within a CPU (that is, to increase the use of on-die
execution resources);
task-level parallelism (TLP), which aims to increase the number
of threads or processes that a CPU can execute simultaneously.
Each methodology differs both in the ways in which they are implemented, as well as the relative
effectiveness they afford in increasing the CPU's performance for an application. [i]
Task-level parallelism
Multiprocessing Technologies
Multiprocessing (MP) is a technology that allows multiple CPUs to share a coherent view of
memory. Symmetric Multiprocessing (SMP) enables CPUs to cooperate on the same
program, ensuring an up-to-date memory view. Non-Uniform Memory Access (NUMA) and
directory-based coherence protocols expand CPU cooperation beyond SMP limitations.
Types of Multithreading
Temporal Multithreading switches to another thread when one is stalled waiting for data,
optimizing CPU usage. Simultaneous Multithreading executes instructions from multiple
threads in parallel within a single CPU clock cycle.
Chip-Level Multiprocessing
Recent processor families, including the x86-64 Opteron and Athlon 64 X2, the SPARC
UltraSPARC T1, and IBM POWER4 and POWER5, feature chip-level multiprocessing. Video game
console CPUs, such as the Xbox 360's triple-core PowerPC and the PlayStation 3's 7-core Cell
microprocessor, also employ multiprocessing designs.
Data parallelism
Vector processors excel in tasks requiring the same operation on large data sets, such as sums
or dot products. This makes them ideal for multimedia applications (images, video, and
sound) and scientific and engineering computations. Unlike scalar processors that must fetch,
decode, and execute each instruction for every data value, vector processors perform a single
operation on a large data set with one instruction, greatly enhancing efficiency in data-
intensive tasks.
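A minimal sketch of the scalar-versus-vector contrast, using NumPy's vectorized dot product as a stand-in for a SIMD unit (NumPy is an assumed dependency here):

```python
import numpy as np

# Sketch: scalar vs. vector execution of the same dot product. The NumPy call
# stands in for a vector/SIMD unit applying one operation to a whole data set;
# the loop mirrors a scalar CPU handling one element per instruction.

def dot_scalar(a, b):
    total = 0.0
    for x, y in zip(a, b):   # one fetch/decode/execute per element
        total += x * y
    return total

a = np.arange(100_000, dtype=np.float64)
b = np.ones_like(a)

assert dot_scalar(a, b) == float(np.dot(a, b))  # same result, one vector op
```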
Early vector processors like the Cray-1 were used mainly in scientific research and
cryptography. As digital multimedia emerged, the need for SIMD in general-purpose
processors grew. Following the inclusion of floating-point units in general-purpose
processors, SIMD execution units began to appear. Early SIMD implementations, such as
HP's Multimedia Acceleration eXtensions (MAX) and Intel's MMX, were integer-only,
limiting their effectiveness for floating-point-intensive applications.
Developers refined early SIMD designs, leading to modern SIMD specifications associated
with specific instruction set architectures (ISAs); notable examples include Intel's SSE and
AVX, PowerPC's AltiVec, and ARM's NEON.
Summary
The shift from scalar to vector processing, driven by the need for efficient data parallelism,
has significantly impacted processor design. Modern SIMD implementations in general-
purpose processors optimize performance for a wide range of applications, from multimedia
to scientific computations, by enabling efficient parallel processing of large data sets.
Privileged modes
Most modern CPUs have privileged modes to support operating systems and virtualization.
Cloud computing can use virtualization to provide virtual central processing units[89] (vCPUs)
for separate users.[90]
A host is the virtual equivalent of a physical machine, on which a virtual system is operating.[91]
When there are several physical machines operating in tandem and managed as a whole, the
grouped computing and memory resources form a cluster. In some systems, it is possible to
dynamically add resources to and remove them from a cluster. Resources available at the host
and cluster level can be partitioned into resource pools with fine granularity.
Performance
The performance or speed of a processor depends on various factors, primarily the clock rate
(measured in hertz) and instructions per clock (IPC). Together, these determine the
instructions per second (IPS) the CPU can execute. However, reported IPS values often
reflect "peak" rates on artificial sequences, not realistic workloads. Real-world applications
involve a mix of instructions, some taking longer to execute, affecting overall performance.
Additionally, the efficiency of the memory hierarchy significantly impacts processor
performance, an aspect not fully captured by IPS.
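As a back-of-the-envelope sketch of the relationship, instructions per second is simply clock rate times IPC; the clock and IPC figures below are illustrative, not measurements:

```python
# Sketch: headline instruction throughput from clock rate and IPC. "Peak" IPC
# is optimistic; real workloads achieve less, as the text notes.

def instructions_per_second(clock_hz: float, ipc: float) -> float:
    return clock_hz * ipc

peak = instructions_per_second(3.5e9, 4.0)      # marketing-style peak rate
typical = instructions_per_second(3.5e9, 1.2)   # more realistic sustained IPC
print(f"peak: {peak/1e9:.0f} GIPS, typical: {typical/1e9:.1f} GIPS")
```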
To address the limitations of IPS, standardized tests or "benchmarks" like SPECint have been
developed. These benchmarks aim to measure the actual effective performance of processors
in commonly used applications, providing a more accurate representation of real-world
performance.
Multi-Core Processors
The increase in processing speed with additional cores is not directly proportional because
cores need to communicate through specific channels, consuming some of the available
processing power. This inter-core communication adds complexity and limits the overall
performance gain from additional cores.
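A deliberately crude sketch of that sublinear scaling, charging each additional core a fixed communication overhead; the 5% overhead figure is invented for illustration:

```python
# Sketch: why core count doesn't scale linearly. A crude model: each added
# core contributes its throughput minus a fixed communication overhead.

def effective_speedup(cores: int, comm_overhead: float = 0.05) -> float:
    """Speedup over one core when inter-core communication consumes a fixed
    fraction of each additional core's capacity."""
    return 1.0 + (cores - 1) * (1.0 - comm_overhead)

for n in (1, 2, 4, 8):
    print(f"{n} cores -> {effective_speedup(n):.2f}x")  # 1.00, 1.95, 3.85, 7.65
```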
Modern CPUs have features like simultaneous multithreading and uncore, which share CPU
resources to increase utilization. These capabilities make monitoring performance levels and
hardware usage more complex. To address this, some CPUs include additional hardware
logic for monitoring usage, providing counters accessible to software. An example is Intel's
Performance Counter Monitor technology.
Summary
Processor performance is influenced by clock rate, IPC, and the efficiency of the memory
hierarchy. Benchmarks like SPECint provide a more accurate measure of real-world
performance. Multi-core processors enhance the ability to run multiple tasks simultaneously,
though performance gains are limited by inter-core communication. Modern CPUs
incorporate advanced features to improve resource utilization, necessitating sophisticated
monitoring tools.
The memory or storage unit is crucial for storing instructions, data, and intermediate results
necessary for processing. It ensures that the CPU has quick access to the data and instructions
it needs to execute tasks efficiently. This unit is also referred to as the internal storage unit,
main memory, primary storage, or Random Access Memory (RAM).
Key Functions:
1. Data Storage: Stores instructions, data, and intermediate results required for
processing.
2. Intermediate Storage: Temporarily holds data during task execution.
3. Final Storage: Stores final processing results before they are outputted.
4. Data Transfer: Manages data transmission between the memory unit and other
components.
Types of Memory:
1. Primary Memory: RAM provides fast, temporary storage directly accessible by the
CPU.
2. Secondary Memory: Includes hard drives and SSDs, offering larger, long-term
storage but slower access.
Control Unit
The control unit directs the operations of all parts of the computer, ensuring that instructions
are fetched, decoded, and executed correctly. It does not process data but orchestrates the
entire processing sequence.
Key Functions:
1. Data Control and Transfer: Manages data and instruction flow between computer
parts.
2. Unit Management: Oversees operations of all units within the computer.
3. Instruction Fetch and Decode: Retrieves and decodes instructions from memory.
4. Device Communication: Manages data transfer between the CPU and input/output
devices.
5. Flow Control: Maintains orderly information flow across the processor.
Arithmetic Logic Unit (ALU)
The Arithmetic Logic Unit (ALU) performs arithmetic and logical operations. It consists of
two sections, each specializing in different types of operations.
Arithmetic Section: Performs arithmetic operations such as addition, subtraction,
multiplication, and division.
Logic Section: Performs logical operations such as comparing, selecting, matching, and
merging data.
Additional Notes:
1. Multiple ALUs: Modern CPUs may have multiple ALUs for increased processing
power.
2. Timer Functions: ALUs can manage timers to coordinate system operations
efficiently.
Summary
Memory Unit: Stores and manages data and instructions, impacting computer speed
and performance.
Control Unit: Directs operations, ensuring efficient data flow and execution of
instructions.
ALU: Performs essential arithmetic and logical operations for various tasks.
Understanding these units' roles and functions highlights how a computer processes
information and executes tasks efficiently.
A camera is an instrument used to capture and store images and videos, either digitally via an
electronic image sensor, or chemically via a light-sensitive material such as photographic film. As
a pivotal technology in the fields of photography and videography, cameras have played a
significant role in the progression of visual arts, media, entertainment, surveillance, and scientific
research. The invention of the camera dates back to the 19th century and has since evolved with
advancements in technology, leading to a vast array of types and models in the 21st century.
Cameras function through a combination of multiple mechanical components and principles.
These include exposure control, which regulates the amount of light reaching the sensor or film;
the lens, which focuses the light; the viewfinder, which allows the user to preview the scene; and
the film or sensor, which captures the image.
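A small sketch of the arithmetic behind exposure control: the standard exposure value, EV = log2(N²/t), ties the aperture's f-number N and the shutter time t together; the settings below are illustrative:

```python
import math

# Sketch: the standard exposure value relating aperture (f-number N) and
# shutter time t: EV = log2(N^2 / t). Settings with equal EV admit the same
# amount of light, which is the trade-off exposure control manages.

def exposure_value(f_number: float, shutter_s: float) -> float:
    return math.log2(f_number ** 2 / shutter_s)

print(round(exposure_value(8.0, 1 / 125), 1))   # f/8 at 1/125 s   -> EV 13.0
print(round(exposure_value(5.6, 1 / 250), 1))   # f/5.6 at 1/250 s -> ~EV 12.9
```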
Several types of cameras exist, each suited to specific uses and offering unique
capabilities. Single-lens reflex (SLR) cameras provide real-time, exact imaging through the
lens. Large-format and medium-format cameras offer higher image resolution and are often used
in professional and artistic photography. Compact cameras, known for their portability and
simplicity, are popular in consumer photography. Rangefinder cameras, with separate viewing
and imaging systems, were historically widely used in photojournalism. Motion picture
cameras are specialized for filming cinematic content, while digital cameras, which became
prevalent in the late 20th and early 21st century, use electronic sensors to capture and store
images.
The rapid development of smartphone camera technology in the 21st century has blurred the
lines between dedicated cameras and multifunctional devices, profoundly influencing how society
creates, shares, and consumes visual content.
Accessories
Lens hood: used on the end of a lens to block the sun or other light source to prevent
glare and lens flare (see also matte box).
Lens cap: covers and protects the camera lens when not in use.
Lens adapter: allows the use of lenses other than those for which the camera was
designed.
Filter: used to add color effects or change the density of light reaching the film or sensor.
Lens extension tube: allows close focus in macro photography.
Care and protection: including camera case and cover, maintenance tools, and
screen protector.
Camera monitor: provides an off-camera view of the composition with a brighter and
more colorful screen, and typically exposes more advanced tools such as framing
guides, focus peaking, zebra stripes, waveform monitors (oftentimes as an "RGB
parade"), vectorscopes and false color to highlight areas of the image critical to the
photographer.
Tripod: primarily used for keeping the camera steady while recording video, doing a
long exposure, and time-lapse photography.
Microscope adapter: used to connect a camera to a microscope to photograph what
the microscope is examining.
Cable release: used to remotely trigger the shutter via a remote shutter button
connected to the camera by a cable. It can be used to lock the shutter open
for the desired period, and it is also commonly used to avoid the camera shake
caused by pressing the built-in shutter button.
Dew shield: prevents moisture build-up on the lens.
UV filter: can protect the front element of a lens from scratches, cracks, smudges,
dirt, dust, and moisture while keeping a minimum impact on image quality.
Battery and sometimes a charger.
Large format cameras use special equipment that includes a magnifier loupe, view finder, angle
finder, and focusing rail/truck. Some professional SLRs can be provided with
interchangeable finders for eye-level or waist-level focusing, focusing screens, eyecup, data
backs, motor-drives for film transportation or external battery packs.
Primary types
Single-lens reflex (SLR) camera
Instant camera
After exposure, each photograph is drawn through pinch rollers inside the instant camera, which
spread the developer paste contained in the paper 'sandwich' over the image. After about a
minute, the cover sheet is peeled away, leaving a single original positive image in a fixed
format. With some systems it was also possible to create an instant image negative, from which
copies could then be made in the photo lab. The ultimate development was Polaroid's SX-70
system, in which a motor-driven row of ten shots could be taken without having to remove any
cover sheets from the pictures. Instant cameras were made for a variety of formats, and
adapters existed for using instant film in medium- and large-format cameras.
Subminiature camera
Subminiature cameras were first produced in the twentieth century and use film significantly
smaller than 35mm. The expensive 8×11mm Minox, the only type of camera produced by the
company from 1937 to 1976, became very widely known and was often used for espionage (the
Minox company later also produced larger cameras). Later inexpensive subminiatures were
made for general use, some using rewound 16 mm cine film. Image quality with these small film
sizes was limited.
Folding camera
The introduction of films enabled the existing designs for plate cameras to be made much
smaller and for the baseplate to be hinged so that it could be folded up, compressing the bellows.
These designs were very compact and small models were dubbed vest pocket cameras. Folding
roll film cameras were preceded by folding plate cameras, more compact than other designs.
Box camera