L1.1 HPC Environment
Computing Environment
Contents
• Network switch
• Storage system
FLOPS = nodes × (cores/node) × (cycles/second) × (FLOPs/cycle)
• The 3rd term, cycles per second, is known as the clock frequency, typically 2~3 GHz.
• The 4th term, FLOPs per cycle, is how many floating-point operations are done in one clock cycle.
Typical values for Intel Xeon CPUs are:
--- Sandy Bridge and Ivy Bridge: 8 DP FLOPs/cycle, 16 SP FLOPs/cycle (used in the sketch below).
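As a quick illustration, the following C sketch plugs numbers into the formula above. The cluster size and clock rate are made-up example values, not a real machine's specs; only the 8 DP FLOPs/cycle figure comes from the Sandy Bridge numbers above.

/* A minimal sketch: theoretical peak FLOPS from the formula above.
 * All machine parameters here are hypothetical example values.
 * Compile with: gcc peak_flops.c -o peak_flops */
#include <stdio.h>

int main(void) {
    double nodes           = 100;    /* hypothetical cluster size      */
    double cores_per_node  = 16;     /* cores per node (assumed)       */
    double clock_hz        = 2.5e9;  /* 2.5 GHz clock frequency        */
    double flops_per_cycle = 8;      /* Sandy Bridge, double precision */

    double peak = nodes * cores_per_node * clock_hz * flops_per_cycle;
    printf("Theoretical peak: %.2f TFLOPS\n", peak / 1e12);
    return 0;
}

For these example values the program prints 32.00 TFLOPS, i.e. 100 × 16 × 2.5e9 × 8 = 3.2e13 floating-point operations per second.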
Computer power grows rapidly
• iPhone 4 vs. 1985 Cray-2 supercomputer
• Rapid growth of the power of the top-500 supercomputers (logarithmic y-axis, in GFLOPS)
Top 500 Supercomputers
• The list of June 2017
Statistics of the Top 500
HPC user environment
• Operating systems: Linux (Red Hat/CentOS, Ubuntu, etc.), Unix.
• Login: ssh, gsissh.
• File transfer: secure copy (scp), GridFTP (Globus).
• Job schedulers: Slurm, PBS, SGE, LoadLeveler.
• Software management: module.
• Compilers: Intel, GNU, PGI.
• MPI implementations: OpenMPI, MPICH, MVAPICH, Intel MPI (a minimal MPI example follows this list).
• Debugging and profiling tools: TotalView, TAU, DDT, VTune.
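To make the tool chain concrete, here is a minimal MPI "hello world" in C. It is a generic sketch with no cluster-specific assumptions: it should build with any of the implementations above via their mpicc wrapper and run under mpirun.

/* Minimal MPI "hello world" in C.
 * Compile: mpicc hello_mpi.c -o hello_mpi
 * Run:     mpirun -np 4 ./hello_mpi */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, size;
    MPI_Init(&argc, &argv);                /* start the MPI runtime   */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* this process's id       */
    MPI_Comm_size(MPI_COMM_WORLD, &size);  /* total process count     */
    printf("Hello from rank %d of %d\n", rank, size);
    MPI_Finalize();                        /* shut down cleanly       */
    return 0;
}

Each of the 4 launched processes prints its own rank, e.g. "Hello from rank 0 of 4".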
Amdahl's law
• The speedup of a parallel program is S(p) = 1 / (α + (1 − α)/p), where
p: number of processors/cores,
α: fraction of the program that is serial.
• Because of the serial fraction, the speedup saturates at 1/α no matter how many processors are added (a numeric sketch follows).
• The figure is from: https://en.wikipedia.org/wiki/Parallel_computing
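The small C sketch below evaluates this formula for an assumed serial fraction of α = 10% (an illustrative value, not taken from any particular program) and shows the saturation near 1/α = 10.

/* Evaluate Amdahl's law, S(p) = 1 / (alpha + (1 - alpha)/p),
 * for an assumed serial fraction alpha = 0.10.
 * Compile: gcc amdahl.c -o amdahl */
#include <stdio.h>

int main(void) {
    double alpha  = 0.10;                 /* assumed serial fraction */
    int procs[]   = {1, 2, 4, 8, 16, 64, 1024};
    int n         = sizeof procs / sizeof procs[0];

    for (int i = 0; i < n; i++) {
        double s = 1.0 / (alpha + (1.0 - alpha) / procs[i]);
        printf("p = %4d  ->  speedup = %.2f\n", procs[i], s);
    }
    return 0;
}

Even at p = 1024 the speedup is only about 9.91: the 10% serial part caps the speedup at 10 regardless of core count.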
Distributed or shared memory systems
Figures are from the book Using OpenMP: Portable Shared Memory Parallel Programming
An example: weather science
• Serial weather model
• Shared-memory weather model (for several cores within one node; see the OpenMP sketch after this list)
• Distributed-memory weather model (for many nodes within one cluster)
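As a toy illustration of the shared-memory model, here is a minimal OpenMP loop in C that updates a one-dimensional grid in place. The grid, the "temperature" update, and all constants are made-up placeholders for the per-cell work a real weather code would do.

/* Toy sketch of the shared-memory model: the threads of one node all
 * update the same temp[] array directly. Every value is illustrative.
 * Compile: gcc -fopenmp grid_omp.c -o grid_omp */
#include <stdio.h>
#include <omp.h>

#define N 1000000   /* number of grid cells (arbitrary) */

static double temp[N];

int main(void) {
    /* each thread handles a chunk of the shared array */
    #pragma omp parallel for
    for (int i = 0; i < N; i++)
        temp[i] = 15.0 + 0.001 * i;   /* placeholder per-cell update */

    printf("updated %d cells with up to %d threads\n",
           N, omp_get_max_threads());
    return 0;
}

In a distributed-memory version of the same loop, each node would instead own a slice of the grid and exchange boundary cells with its neighbors via MPI messages.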
Nationwide HPC resources: XSEDE
• XSEDE (eXtreme Science and Engineering Discovery Environment) is a virtual system that provides compute resources for scientists and researchers from all over the US.
• Its mission is to facilitate research collaboration among institutions, enhance research
productivity, provide remote data transfer, and enable remote instrumentation.
• A combination of supercomputers at many institutions across the US.
• XSEDE provides regular HPC training and workshops:
--- online training: https://www.xsede.org/web/xup/online-training
--- monthly workshops: https://www.xsede.org/web/xup/course-calendar
XSEDE resources (1)-(3)
BU Shared Computing Cluster (SCC)
• A Linux cluster with over 580 nodes, 11,000 processors, and 252 GPUs; currently over 3 petabytes of disk.
• Located in Holyoke, MA at the Massachusetts Green
High Performance Computing Center (MGHPCC), a
collaboration between 5 major universities and the
Commonwealth of Massachusetts.
• Went into production in June 2013 for Research Computing; continues to be updated and expanded.
• Webpage: http://www.bu.edu/tech/support/research/computing-resources/scc/