Atieno Renee Loice

Ci/00074/018
CCS419: Advanced Computer Architecture

ADVANCEMENT IN COMPUTER PERFORMANCE

Synopsis
Much of the improvement in computer performance comes from decades of miniaturization
of computer components, which, until recently, doubled the number of transistors on computer
chips every 2 years. Unfortunately, semiconductor miniaturization is running out of steam as a
viable way to grow computer performance. Nevertheless, opportunities for growth in computing
performance will still be available, especially at the “Top” of the computing technology stack:
software, algorithms, and hardware architecture. Software can be made more efficient by
performance engineering: restructuring software to make it run faster. Performance engineering
can remove inefficiencies in programs, known as software bloat, arising from traditional
software-development strategies that aim to minimize an application’s development time rather
than the time it takes to run. Performance engineering can also tailor software to the hardware on
which it runs, for example, to take advantage of parallel processors and vector units. Algorithms
offer more-efficient ways to solve problems. Indeed, since the late 1970s, the time to solve the
maximum-flow problem improved nearly as much from algorithmic advances
as from hardware speedups. But progress on a given algorithmic problem occurs unevenly
and sporadically and must ultimately face diminishing returns. As such, we see the biggest
benefits coming from algorithms for new problem domains (e.g., machine learning) and from
developing new theoretical machine models that better reflect emerging hardware. Hardware
architectures can be streamlined—for instance, through processor simplification, where a
complex processing core is replaced with a simpler core that requires fewer transistors. The
freed-up transistor budget can then be redeployed in other ways—for example, by increasing the
number of processor cores running in parallel, which can lead to large efficiency gains for
problems that can exploit parallelism.

Introduction
This report surveys methods that can continue to improve computing performance now that
semiconductor miniaturization is running out of steam as a viable way to grow computer
performance.

Methodology
GPU logic integrated into laptop microprocessors
I obtained, from WikiChip (57), annotated die photos for Intel microprocessors with GPUs integrated on
die, which began in 2010 with Sandy Bridge. I measured the area in each annotated photo dedicated to the
GPU and calculated the ratio of this area to the total area of the chip. Intel's quad-core chips had
approximately the following percentage devoted to the GPU: Sandy Bridge (18%), Ivy Bridge (33%),
Haswell (32%), Skylake (40 to 60%, depending on version), Kaby Lake (37%), and Coffee Lake (36%).
Annotated die photos for Intel microarchitectures newer than Coffee Lake were not available and were
therefore not included in the study. I did not find enough information about modern AMD processors
to include them in this study.
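The ratio calculation described above can be sketched in a few lines of Python. The area figures below are hypothetical stand-ins for measurements taken from annotated die photos; only the ratio matters, so the units are arbitrary.

```python
# Minimal sketch of the die-area ratio calculation.
# The measurements below are hypothetical, chosen to reproduce the
# approximate percentages reported for Sandy Bridge and Ivy Bridge.

def gpu_area_fraction(gpu_area_mm2: float, total_area_mm2: float) -> float:
    """Fraction of the total die area devoted to the integrated GPU."""
    if total_area_mm2 <= 0:
        raise ValueError("total die area must be positive")
    return gpu_area_mm2 / total_area_mm2

# Hypothetical (GPU area, total area) measurements in mm^2:
measurements = {
    "Sandy Bridge": (39.0, 216.0),   # ~18%
    "Ivy Bridge":   (52.8, 160.0),   # ~33%
}

for uarch, (gpu, total) in measurements.items():
    print(f"{uarch}: {gpu_area_fraction(gpu, total):.0%} of die devoted to GPU")
```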

Results
Performance gains will need to come from technologies at the top of the computing stack:
software, algorithms, and hardware architecture.
Software
Software development in the Moore era has generally focused on minimizing the time it takes to
develop an application, rather than the time it takes to run that application once it is deployed.
This strategy has led to enormous inefficiencies in programs, often called software bloat. In
addition, much existing software fails to take advantage of architectural features of chips, such as
parallel processors and vector units. In the post-Moore era, software performance engineering—
restructuring software to make it run faster—can help applications run more quickly by
removing bloat and by tailoring software to specific features of the hardware architecture.
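As a small illustration (not taken from the paper), the sketch below shows performance engineering on a hypothetical stop-word filter: the same task is restructured, without changing its output, by replacing repeated scans of a bloated list with hash-set lookups.

```python
# Performance engineering illustrated: restructure code to run faster
# while producing the same result. All names here are hypothetical.
import timeit

STOPWORDS_LIST = ["the", "a", "an", "and", "or", "of", "to"] * 100  # bloated list
STOPWORDS_SET = set(STOPWORDS_LIST)  # same contents, O(1) membership tests

words = ["plenty", "of", "room", "at", "the", "top"] * 1000

def filter_slow(ws):
    # Scans the entire stop list for every word.
    return [w for w in ws if w not in STOPWORDS_LIST]

def filter_fast(ws):
    # Same output, restructured to use constant-time set lookups.
    return [w for w in ws if w not in STOPWORDS_SET]

assert filter_slow(words) == filter_fast(words)  # identical behavior
slow = timeit.timeit(lambda: filter_slow(words), number=10)
fast = timeit.timeit(lambda: filter_fast(words), number=10)
print(f"restructured version is roughly {slow / fast:.0f}x faster")
```

The same principle scales up to tailoring software to parallel processors and vector units: the program's observable behavior is preserved while its use of the machine improves.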

Algorithm

Algorithmic advances have already made many contributions to performance growth and will
continue to do so in the future. A major goal is to solve a problem with less computational work,
for example, by using Strassen's algorithm (15) for matrix multiplication, which needs
asymptotically fewer than the n³ scalar multiplications of the schoolbook method. For some problems,
the gains can be much more impressive: the President's Council of Advisors on Science and
Technology concluded in 2010 that "performance gains due to improvements in algorithms have
vastly exceeded even the dramatic performance gains due to increased processor
speed." Because algorithm design requires human ingenuity, however, it is hard to anticipate
advances.
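To make the Strassen example concrete, here is a minimal pure-Python sketch of the algorithm for n × n matrices where n is a power of 2. It performs 7 recursive multiplications per level instead of 8, which is where the asymptotic saving comes from; a production implementation would fall back to the ordinary method below a cutoff size.

```python
# Minimal sketch of Strassen's algorithm for n x n matrices, n a power of 2.
# Matrices are lists of row lists.

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def mat_sub(A, B):
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def split(M):
    """Split M into four quadrants."""
    n = len(M) // 2
    return ([r[:n] for r in M[:n]], [r[n:] for r in M[:n]],
            [r[:n] for r in M[n:]], [r[n:] for r in M[n:]])

def strassen(A, B):
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    A11, A12, A21, A22 = split(A)
    B11, B12, B21, B22 = split(B)
    # Seven products instead of the naive eight:
    M1 = strassen(mat_add(A11, A22), mat_add(B11, B22))
    M2 = strassen(mat_add(A21, A22), B11)
    M3 = strassen(A11, mat_sub(B12, B22))
    M4 = strassen(A22, mat_sub(B21, B11))
    M5 = strassen(mat_add(A11, A12), B22)
    M6 = strassen(mat_sub(A21, A11), mat_add(B11, B12))
    M7 = strassen(mat_sub(A12, A22), mat_add(B21, B22))
    C11 = mat_add(mat_sub(mat_add(M1, M4), M5), M7)
    C12 = mat_add(M3, M5)
    C21 = mat_add(M2, M4)
    C22 = mat_add(mat_sub(mat_add(M1, M3), M2), M6)
    top = [r11 + r12 for r11, r12 in zip(C11, C12)]
    bottom = [r21 + r22 for r21, r22 in zip(C21, C22)]
    return top + bottom

print(strassen([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # → [[19, 22], [43, 50]]
```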

Hardware
Historically, computer architects used more and more transistors to make serial computations
run faster, vastly increasing the complexity of processing cores, even though gains in
performance suffered from diminishing returns over time (38). We argue that in the post-Moore
era, architects will need to adopt the opposite strategy and focus on hardware streamlining:
implementing hardware functions using fewer transistors and less silicon area. As we shall see,
the primary advantage of hardware streamlining comes from providing additional chip area for
more circuitry to operate in parallel. Thus, the greatest benefit accrues to applications that have
ample parallelism. Indeed, the performance of hardware for applications without much
parallelism has already stagnated. But there is plenty of parallelism in many emerging
application domains, such as machine learning, graphics, video and image processing, sensory
computing, and signal processing. Computer architects should be able to design streamlined
architectures to provide increasing performance for these and other domains for many years after
Moore’s law ends.
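The trade-off described above can be sketched with a toy model (my own illustration, not from the paper): one complex core that runs serial code faster is compared against many simple cores built from the same transistor budget, for a workload whose parallelizable fraction is p, in the spirit of Amdahl's law. The speedup factor and core count below are assumed values.

```python
# Toy model of hardware streamlining: one complex core vs. many simple cores.
# Times are normalized so the whole workload takes 1 unit on one simple core.

def time_complex_core(s: float) -> float:
    # One big core runs everything s times faster than a simple core.
    return 1.0 / s

def time_simple_cores(p: float, n: int) -> float:
    # n simple cores: the parallel fraction p spreads across n cores,
    # while the serial fraction (1 - p) runs on a single simple core.
    return (1.0 - p) + p / n

# Assumed budget: one 2x-faster complex core vs. 16 simple cores.
for p in (0.5, 0.9, 0.99):
    t_big = time_complex_core(s=2.0)
    t_many = time_simple_cores(p, n=16)
    winner = "simple cores" if t_many < t_big else "complex core"
    print(f"parallel fraction {p:.2f}: {winner} win")
```

Under these assumptions, the complex core wins only when the workload is mostly serial; once most of the work parallelizes, the streamlined design pulls far ahead, matching the argument above.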
Conclusion
As miniaturization wanes, the silicon-fabrication improvements at the Bottom will no longer
provide the predictable, broad-based gains in computer performance that society has enjoyed
for more than 50 years. Performance engineering of software, development of new algorithms, and
hardware streamlining at the Top can continue to make computer applications faster in the post-
Moore era, rivaling the gains accrued over many years by Moore's law. Unlike the historical
gains at the Bottom, however, the gains at the Top will be opportunistic, uneven, sporadic, and
subject to diminishing returns as problems become better explored. But even where opportunities
exist, it may be hard to exploit them if the necessary modifications to a component require
compatibility with other components. Big components can allow their owners to capture the
economic advantages of performance gains at the Top while minimizing external disruptions.

REFERENCES
C. E. Leiserson et al., "There's plenty of room at the Top: What will drive computer performance after Moore's law?" Science 368, eaam9744 (2020). https://science.sciencemag.org/content/368/6495/eaam9744#BIBL
