HPC in Abstract
This is the era of High Performance Computing, abbreviated as HPC. HPC evolved in response to the ever-increasing demand for processing speed. It is a hybrid technology that continuously changes its colours, so it is hard to define in static words. If we nevertheless persist in defining it, the effort leads to a definition such as the following:

“HPC is a branch of computer science that concentrates on combining the power of computing devices, networking infrastructure and storage systems, using parallel hardware, networking and software technology together with human expertise, to gain optimized and sustained performance for solving scientific, engineering, big-data and research problems, including grand challenges such as computer modelling, simulation and analysis of data.”

HPC always focuses on high performance, which is achieved through parallel processing systems, parallel programming algorithms and high-bandwidth, low-latency networks, incorporating computational and process-administration techniques of parallelism.

For the past several decades, HPC systems were used only for crunching numbers in scientific research, engineering and big-data problems. Now they are open to everyone: today HPC has almost become an essential part of everyday business operations as well, helping businesses become more competitive and profitable.

As far as the history of HPC is concerned, it started in the 1960s with scalar processors having an SISD architecture (the first highly superscalar HPC systems). The next decade of HPC, the 1970s, brought vector processors with an SIMD architecture. The 1980s were marked by massively parallel processors with an MIMD architecture: networked processors, each with its own memory and operating system. In the 1990s the term ‘cluster’ was introduced for connected standalone computers; these were mostly developed and designed by private communities from commodity networking hardware. The grid was also introduced in the same decade, founded on the collaboration of geographically distributed organizations and processing nodes serving end users. Today the cloud is one of the best sources of HPC, offering massively scalable and dynamic services such as networking, processing, data storage and applications, all provided over the Internet. Today’s fastest HPC systems are listed at top500.org, of which the first five are shown in Table 1.

As mentioned earlier, HPC is a hybrid technology that covers a wide range of technologies, for example:

Hardware technologies, which include computer architectures, both new-generation processor architectures (IBM Cell BE, Nvidia Tesla GPU, Intel Larrabee microarchitecture and Intel Nehalem microarchitecture) and traditional processor architectures (single processors, SIMD, vector computers, MPP, SMP, distributed systems, constellations, clusters, grids) with CPUs, memory, GPUs, multicore processors, ARM processors, even DSPs and FPGAs;

Networking technologies (network topology, data-transfer bandwidth and latency, network protocols, and network connections including InfiniBand, Ethernet and proprietary interconnects with fault tolerance);

Software technologies, which include suitable compilers for a given architecture, programming models (MPI, SHMEM, PGAS etc.) and algorithms for parallel and distributed systems, middleware for bridging applications (open-source, freeware, commercial) and the operating system, parallel problem-solving methodologies (data parallelism, pipelining, functional parallelism), and programming languages (C/C++, Python, Fortran etc., with parallel platforms and technologies such as OpenMP, MPI, UPC [Unified Parallel C], OpenCL, CUDA, OpenCV, etc.).

If we engage with the concept of HPC, we cannot avoid the concepts of Moore’s law, the von Neumann architecture, Flynn’s taxonomy (SISD, SIMD, MISD, MIMD), topologies of computers, processors and memory communication networks, data concurrency and correctness (data races, atomic operations, deadlocks, livelocks etc.), memory management (shared memory, distributed memory, semaphores and hybrid environments), partitioning, data dependency, synchronization, and the limits and cost measurement of parallel programming (Speedup, Efficiency,
About the Authors
Hemprasad Y Badgujar received his B.E. in Computer Engineering (University of Pune) in 2010 and his M.Tech. in Information Technology (SGGSIE&T Nanded, Govt. Autonomous) in 2013. Presently he is working at NDMVP's Karmaveer Adv. Baburao Ganpatrao Thakare College of Engineering, Nashik. His areas of research interest are High Performance Computing, Parallel & GPU Computing, Computer and Network Security, Computer Vision, and User Authentication & Identification Systems.
Prof. Dr. R C Thool received his B.E. and M.E. in Electronics from SGGS Institute of Engineering and Technology, Nanded (Marathwada University, Aurangabad) in 1986 and 1991 respectively. He obtained his Ph.D. in Electronics
and Computer Science from SRTMU, Nanded and is presently working as Professor in Department of Information
Technology at SGGS Institute of Engineering and Technology, Nanded. His areas of research interest are Computer Vision, Image Processing and Pattern Recognition, Networking and Security, High Performance Computing, and Parallel & GPU Computing. He has published more than 50 papers in national and international journals and conferences. He is a member of IEEE, ISTE and ASABE (USA), a member of the University Co-ordination Committee of Nvidia, and an expert on AICTE and NBA committees.