
Parallel and Scalable Machine Learning

Introduction to Machine Learning Algorithms

Prof. Dr.-Ing. Morris Riedel

Associate Professor
School of Engineering and Natural Sciences, University of Iceland, Reykjavik, Iceland
Research Group Leader, Juelich Supercomputing Centre, Forschungszentrum Juelich, Germany

@MorrisRiedel


LECTURE 1

Parallel and Scalable Machine Learning by HPC


February 17, 2020
Juelich Supercomputing Centre, Germany
Outline of the Training Course
1. Parallel & Scalable Machine Learning driven by HPC
2. Introduction to Machine Learning Fundamentals
3. Supervised Learning with a Simple Learning Model
4. Artificial Neural Networks (ANNs)
5. Introduction to Statistical Learning Theory
6. Validation and Regularization
7. Pattern Recognition Systems
8. Parallel and Distributed Training of ANN
9. Supervised Learning with Deep Learning
10. Unsupervised Learning – Clustering
11. Clustering with HPC
12. Introduction to Deep Reinforcement Learning



Outline

 Machine Learning driven by HPC


 Welcome & Course Goals, Content & Timeline
 Machine Learning Models & Learning Approaches
 Machine Learning Prerequisites & Challenges
 Innovative Deep Learning (DL) Techniques & Short Examples
 Relationships to High Performance Computing (HPC) & Big Data

 High Performance Computing Technology Foundations


 Understanding HPC Technologies & Research & Resource Availability
 Pan-European HPC Infrastructure PRACE & Training Portal
 Multi-Core vs. Many-Core Technologies
 DEEP Series of Projects & Modular Supercomputing Architecture (MSA)
 Access to Hands-On Training Systems & JURECA HPC System at JSC



Machine Learning driven by HPC



Welcome @ Juelich Supercomputing Centre (JSC) of Forschungszentrum Juelich

 Selected Facts
  One of the EU's largest interdisciplinary research centres (~5,000 employees)
  Special expertise
  Physics
  Material sciences
  Nanotechnology
  Neuroscience & medicine
  Information technology (HPC, Big Data, Quantum Computing, Clouds, etc.)
  Artificial Intelligence (AI)

[24] Helmholtz Association Web Page, [25] Juelich Supercomputing Centre



Welcome @ Forschungszentrum Juelich (FZJ) of the Helmholtz Association

 Foundation: 12 December 1956
 Shareholders: 90 % Federal Republic of Germany, 10 % North Rhine-Westphalia
 11 institutes & 2 project management organizations
 Revenue: 609.3 million euros in total (40 % external funding)
 5,914 employees, including 2,165 scientists, 536 doctoral researchers, and 323 trainees and students on placement
 867 visiting scientists from 65 countries



Helmholtz AI @ FZJ & JSC

 Helmholtz AI Central Unit


 Helmholtz Zentrum Muenchen (HMGU)
 Other Helmholtz AI Local Units
 Energy: Karlsruhe Institute of Technology (KIT)
 Earth & Environment: Helmholtz-Zentrum Geesthacht Centre for Materials and Coastal Research (HZG)
 Aeronautics, Space & Transport: German Aerospace Centre (DLR)
 Matter: Helmholtz-Zentrum Dresden-Rossendorf (HZDR)
 Information: Helmholtz AI Local @
Forschungszentrum Juelich
 Young Investigator Group @ INM-1
 High Level Support Team (HLST) @ JSC
[20] Helmholtz AI Web page
Artificial Intelligence @ Juelich Supercomputing Centre

 (Organigram of AI-related activities at JSC:)
  Research Groups, e.g. the High Productivity Data Processing research group
  Domain-specific Simulation Labs (SDLs) & Data Life Cycle Labs serving communities (e.g. remote sensing & health)
  Cross-Sectional Teams, e.g. the Cross-Sectional Team Deep Learning
  PADC & exascale co-design, e.g. the DEEP-EST EU project
  Modular supercomputer facilities JURECA & JUWELS
  Industry Relations Team


University of Iceland

 Selected Facts
 Ranked among the top 200
universities in the world
(by Times Higher Education)
 ~2,900 students at the School of Engineering and Natural Sciences (SENS)
 Long collaboration with
Forschungszentrum Juelich
 ~350 MS students
 ~150 doctoral students.
 Many foreign &
Erasmus students
 English courses
[19] University of Iceland Web page



Call to Action: Follow Us on Twitter! Stay Informed & Share Impressions!



Course Goals

 Goals: Fundamentals, Practical Skills, Advancements, Technology & Community Building

 Join our JOint International Machine Learning Laboratory (JOIML)



Terminology & Differences between AI, ML & DL

 Artificial Intelligence (AI)
  A wide area of techniques and tools that enable computers to mimic human behaviour (+ robotics)

 Machine Learning (ML)
  Learning from data without explicitly being programmed with common programming languages

 Deep Learning (DL)
  Systems with the ability to learn underlying features in data using large neural networks

 (Side figure illustrates classification, clustering & regression tasks)


Machine Learning Models – Short Overview

 Classification: groups of data exist; new data is classified to existing groups
 Clustering: no groups of data exist; groups are created from data close to each other
 Regression: identify a line with a certain slope describing the data

 Machine learning methods can be roughly categorized into classification, clustering, or regression, augmented with various techniques for data exploration, selection, or reduction; despite the momentum of deep learning, traditional machine learning algorithms are still widely relevant today, as the sketch below illustrates
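The following minimal sketch shows all three categories side by side, assuming scikit-learn is installed; the toy data and the model choices (SVC, KMeans, LinearRegression) are illustrative assumptions, not the course's reference implementations:

    import numpy as np
    from sklearn.svm import SVC
    from sklearn.cluster import KMeans
    from sklearn.linear_model import LinearRegression

    X = np.array([[1.0], [2.0], [3.0], [10.0], [11.0], [12.0]])

    # Classification: groups (labels) exist; new data is assigned to them
    y_class = np.array([0, 0, 0, 1, 1, 1])
    clf = SVC(kernel='linear').fit(X, y_class)
    print(clf.predict([[2.5], [10.5]]))

    # Clustering: no labels exist; groups are created from nearby data
    km = KMeans(n_clusters=2, n_init=10).fit(X)
    print(km.labels_)

    # Regression: identify a line (slope & intercept) describing the data
    y_real = np.array([2.1, 4.2, 5.9, 20.3, 22.1, 23.8])
    reg = LinearRegression().fit(X, y_real)
    print(reg.coef_, reg.intercept_)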



Learning Approaches – What does ‘Learning from Data‘ Mean?

 The basic meaning of learning is ‘to use a set of observations to uncover an underlying process‘
 The three different learning approaches are supervised, unsupervised, and reinforcement learning

 Supervised Learning
  The majority of methods in this course follow this approach
  Example: credit card approval based on previous customer applications
 Unsupervised Learning
  Often applied before other learning approaches  higher-level data representation
  Example: coin recognition in a vending machine based on weight and size
 Reinforcement Learning
  The typical ‘human way‘ of learning
  Example: a toddler tries to touch a hot cup of tea (again and again)

[14] Image sources: Species Iris Group of North America Database, www.signa.org
[15] A.C. Cheng et al., ‘InstaNAS: Instance-aware Neural Architecture Search’, 2018

 Day 3 offers details about unsupervised learning with examples & also a short introduction to deep reinforcement learning
Machine Learning Prerequisites & Challenges

 Prerequisites for ‘Learning from Data‘
 1. Some pattern exists
 2. No exact mathematical formula
 3. Data exists

 Idea ‘Learning from Data‘
  Shared with a wide variety of other disciplines, e.g. signal processing, data mining, applied statistics, data science, computing, etc.

 Challenges
  Data is often complex
  Learning from data requires processing time  Clouds
  Training machine learning models needs processing time

 Machine learning is a very broad subject and goes from very abstract theory to extreme practice (‘rules of thumb’)


Learning Approaches – Supervised Learning

 Each observation of the predictor measurement(s) has an associated response measurement:
  Input x
  Output y
  Data (x_1, y_1), ..., (x_N, y_N)
  (the output guides the learning process as a ‘supervisor‘)
 Goal: fit a model that relates the response to the predictors
  Prediction: aims at accurately predicting the response for future observations
  Inference: aims at better understanding the relationship between the response and the predictors

 Supervised learning approaches fit a model that relates the response to the predictors
 Supervised learning approaches are used in classification algorithms such as SVMs
 Supervised learning works with data = [input, correct output]

 Lecture 2 offers an example of using supervised learning with a known dataset & simple learning model to understand basic concepts
Simple Application Example: Classification of a Flower
(what type of flower is this?)

 Groups of data exist (e.g. flowers of type ‘IRIS Setosa‘ vs. flowers of type ‘IRIS Virginica‘)
 New data is classified to existing groups

[14] Image sources: Species Iris Group of North America Database, www.signa.org

 Lecture 2 offers an example of using supervised learning with a known dataset & simple learning model to understand basic concepts
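As a concrete preview of this task, here is a minimal sketch, assuming scikit-learn and its bundled copy of the Iris dataset [16]; the linear kernel and the 70/30 split are illustrative choices, not the setup used in Lecture 2:

    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.svm import SVC

    # 150 flowers, 4 measurements each (sepal/petal length & width), 3 species
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

    # Supervised learning: data = [input, correct output]
    clf = SVC(kernel='linear').fit(X_train, y_train)
    print('test accuracy:', clf.score(X_test, y_test))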
A Simple Linear Learning Model – The Perceptron

 Human analogy in learning


 Human brain consists of nerve cells called neurons
 Human brain learns by changing the strength of neuron connections (wi)
upon repeated stimulation by the same impulse (aka a ‘training phase‘)
 Training a perceptron model means adapting the weights wi
 Done until they fit input-output relationships of the given ‘training data‘
[18] F. Rosenblatt, 1957

(training data: $(\mathbf{x}_1, y_1), \ldots, (\mathbf{x}_N, y_N)$)

 The perceptron combines the signal of the $d$ input features with the learned weights and applies an activation function that returns +1 or -1:

 $h(\mathbf{x}) = \mathrm{sign}\left( \sum_{i=1}^{d} w_i x_i + w_0 \right)$

 (the threshold is represented by $w_0$, modelled as a bias term with a fixed input $x_0 = +1$)
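A minimal sketch of the perceptron learning algorithm this slide implies, assuming NumPy; the toy data, the update loop, and the iteration cap are illustrative rather than the lecture's reference code:

    import numpy as np

    # Toy training data: two linearly separable groups with labels +1 / -1
    X = np.array([[2.0, 3.0], [1.0, 2.0], [6.0, 5.0], [7.0, 8.0]])
    y = np.array([-1, -1, 1, 1])
    X = np.hstack([np.ones((len(X), 1)), X])  # prepend x_0 = +1 for the bias term w_0

    w = np.zeros(X.shape[1])                  # weights w_i, adapted during training
    for _ in range(100):                      # 'training phase': repeat until data fits
        wrong = [(xi, yi) for xi, yi in zip(X, y) if np.sign(xi @ w) != yi]
        if not wrong:
            break                             # all input-output relationships fit
        xi, yi = wrong[0]
        w = w + yi * xi                       # perceptron update rule
    print('learned weights:', w)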

 Lecture 2 offers an example of using supervised learning with a known dataset & simple learning model to understand basic concepts
From Simple Perceptron to Innovative Deep Learning Techniques

 [1] M. Riedel, ‘Deep Learning - Using a Convolutional Neural Network‘, Invited YouTube Lecture, six lectures, University of Ghent, 2017
 [2] M. Riedel et al., ‘Introduction to Deep Learning Models‘, JSC Tutorial, three days, JSC, 2019
 [3] H. Lee et al., ‘Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations’

 Cross-Sectional Team Deep Learning
  Provide deep learning tools that work with HPC machines (e.g. Python/Keras/TensorFlow)
  Advance deep learning applications and research on HPC prototypes (e.g. DEEP-EST, SMITH, etc.)
  Engage with industry (industrial relations team) & support SMEs (e.g. Soccerwatch, ON4OFF)
  Offer tutorials & application enabling support for commercial & scientific users (e.g. YouTube)
  Cooperate in an artificial intelligence network across the Helmholtz Association (e.g. Helmholtz AI)

 Day 2 offers a more detailed introduction to Deep Learning Techniques with examples and Convolutional Neural Networks (CNNs)
Deep Learning Technique Example – Convolutional Neural Networks (CNNs)

 Innovation via specific layers and architecture types

[5] A. Rosebrock

[4] Neural Network 3D Simulation

 Day 2 offers a more detailed introduction to Deep Learning Techniques with examples and Convolutional Neural Networks (CNNs)
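To make ‘specific layers‘ concrete, here is a minimal sketch of a small CNN, assuming the Keras API [8] on top of TensorFlow [9]; the layer sizes and the 28x28 single-channel input shape are illustrative assumptions, not the architecture covered on Day 2:

    from tensorflow import keras
    from tensorflow.keras import layers

    # Convolution + pooling layers learn hierarchical features;
    # the final dense layer performs the classification
    model = keras.Sequential([
        layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
        layers.MaxPooling2D((2, 2)),
        layers.Conv2D(64, (3, 3), activation='relu'),
        layers.MaxPooling2D((2, 2)),
        layers.Flatten(),
        layers.Dense(10, activation='softmax'),  # e.g. 10 output classes
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.summary()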
Complex Relationships: ML & DL vs. HPC/Clouds & Big Data

 (Figure: model performance / accuracy vs. dataset volume  ‘Big Data‘ [6] www.big-data.tips)
  Traditional learning models (e.g. SVMs, Random Forests; tools such as MatLab, statistical computing with R, scikit-learn, Weka, Octave) work well on ‘small datasets‘, where manual feature engineering changes the ordering
  Small neural networks, medium deep learning networks, and large deep learning networks increasingly outperform them as dataset volume grows
  The training time of large deep learning networks drives the need for High Performance Computing & Cloud computing



Understanding Deep Learning Momentum & Startup Example

 1952 Stochastic Gradient Descent: solving optimization problems
 1958 Perceptron Learning Model: learning weights
 1985 ‘Backpropagation of Error‘ approach in learning: artificial neural networks
 1995 Deep Convolutional Neural Networks: significant improvements in image analysis

 Enablers of today's momentum
  Big Data: large datasets, easy access, more storage for less cost
  Hardware: more memory, Graphical Processing Units (GPUs) [7], HPC & parallel systems
  Software: scalable data science tools, new learning models, open source & free software packages (e.g. Keras [8], TensorFlow [9])

 The combination drives impact in AI & HPC in industry & science
 Start-up example of my research group: soccerwatch.tv [10]
[11] C. Bodenstein & M. Riedel et al., Automated Soccer Scene Tracking using Deep Neural Networks



Impacts of Deep Learning Techniques for Different Types of Data

 Using Long Short-Term Memory (LSTM) networks with electric power production time series data
 Using Deep Learning to enable automatic camera tracking of soccer games

[11] C. Bodenstein, M. Goetz and M. Riedel et al., NIC Symposium, 2016
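A minimal sketch of how an LSTM can be applied to such a univariate time series, again assuming Keras; the synthetic series, the window length of 24 steps, and the layer width are illustrative assumptions:

    import numpy as np
    from tensorflow import keras
    from tensorflow.keras import layers

    window = 24                                  # e.g. 24 hourly power readings per sample
    series = np.sin(np.linspace(0, 100, 1000))   # synthetic stand-in for production data

    # Turn the series into (samples, window, 1) inputs that predict the next value
    X = np.stack([series[i:i + window] for i in range(len(series) - window)])[..., None]
    y = series[window:]

    model = keras.Sequential([
        layers.LSTM(32, input_shape=(window, 1)),  # recurrent layer keeps temporal context
        layers.Dense(1),                           # predict the next time step
    ])
    model.compile(optimizer='adam', loss='mse')
    model.fit(X, y, epochs=2, batch_size=32, verbose=0)
    print(model.predict(X[:1]))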



[YouTube Lectures] More Machine Learning Fundamentals

[21] Morris Riedel, ‘Introduction to Machine Learning Algorithms‘,


Invited YouTube Lecture, six lectures, University of Ghent, 2017



High Performance Computing Technology Foundations



High Performance Computing (HPC) vs. High Throughput Computing (HTC)

 High Performance Computing (HPC) is based on computing resources that enable the efficient use of parallel computing techniques through specific support with dedicated hardware such as high performance CPU/core interconnections (network connection very important & costly)

 High Throughput Computing (HTC) is based on commonly available computing resources such as commodity PCs and small clusters that enable the execution of ‘farming jobs’ without providing a high performance interconnection between the CPUs/cores (network connection less important)

 This course is using HPC resources while the general techniques and algorithms can also work on HTC (e.g. Apache Spark, etc.)
Partnership for Advanced Computing in Europe (PRACE)

 Basic Facts
 HPC-driven infrastructure
 An international not-for-profit association under Belgian law (with its seat in Brussels)
 Has 25 members and
2 observers
 Governed by the PRACE
Council in which each
member has a seat
 Daily management
of the association is
delegated to the Board
of Directors

[26] PRACE



PRACE as Persistent pan-European HPC Infrastructure

Mission:
enabling world-class science through
large scale simulations

Offering:
HPC resources on leading edge
capability systems

Resource award:
through a single and fair pan-
European peer review process for
open research


[26] PRACE



PRACE Advanced Training Centers (PATC) & Portal

 Selected facts & pointers to more information about HPC techniques & methods
 More than 10 000 people trained by 6 PRACE Advanced Training Centers (PATC) and other events
 Training portal consists of valuable material in all fields related to HPC & supercomputing
 Easy search
function to
find materials
of past events
 Material of this
training will be
also available
after the event

[27] PRACE Training Portal



High Performance Computing & Data Sciences getting more intertwined

 Floating Point Operations per second (FLOPS or FLOP/s)
  1 GigaFlop/s = 10^9 FLOPS
  1 TeraFlop/s = 10^12 FLOPS
  1 PetaFlop/s = 10^15 FLOPS
  1 ExaFlop/s = 10^18 FLOPS

 Milestones
  ~1984: 1,000,000 FLOP/s (© photograph by Rama, Wikimedia Commons)
  ~2009: 1,000,000,000,000,000 FLOP/s on ~295,000 cores (JUGENE)
  ~2013: >5,900,000,000,000,000 FLOP/s on ~500,000 cores (JUQUEEN)  end of service in 2018


HPC Building Blocks - Multi-Core CPUs

 Significant advances in CPUs (or microprocessor chips)
  Multi-core architecture with dual, quad, six, or n processing cores
  Processing cores are all on one chip

 Multi-core CPU chip architecture
  Hierarchy of caches (on/off chip)
  L1 cache is private to each core; on-chip
  L2 cache is shared; on-chip
  L3 cache or Dynamic Random Access Memory (DRAM); off-chip [28] Distributed & Cloud Computing Book

 Clock rates for single processors increased from 10 MHz (Intel 286) to 4 GHz (Pentium 4) in 30 years
 Clock rate increases beyond ~5 GHz unfortunately reached a limit due to power constraints / heat
 Multi-core CPU chips have dual, quad, six, or n processing cores on one chip and use cache hierarchies; a minimal example of exploiting them follows below
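A minimal sketch of exploiting several cores from Python, assuming only the standard library's multiprocessing module; the toy workload merely stands in for a real per-core computation:

    from multiprocessing import Pool, cpu_count

    def heavy_work(n):
        # CPU-bound toy task standing in for real per-core work
        return sum(i * i for i in range(n))

    if __name__ == '__main__':
        tasks = [10_000_000] * cpu_count()
        with Pool() as pool:                       # one worker process per core by default
            results = pool.map(heavy_work, tasks)  # tasks execute in parallel across cores
        print(len(results), 'tasks done on', cpu_count(), 'cores')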



HPC Building Blocks - Many-core GPGPUs

 Use of very many simple cores
  High throughput computing-oriented architecture
  Use massive parallelism by executing a lot of concurrent threads slowly
  Handle an ever increasing amount of multiple instruction threads
  CPUs instead typically execute a single long thread as fast as possible [28] Distributed & Cloud Computing Book

 Many-core GPUs are used in large clusters and within massively parallel supercomputers today
 The Graphics Processing Unit (GPU) is great for data parallelism and task parallelism
 Compared to multi-core CPUs, GPUs consist of a many-core architecture with hundreds to even thousands of very simple cores executing threads rather slowly
 This approach is named General-Purpose Computing on GPUs (GPGPU); different programming models emerge, as the sketch below indicates
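A minimal sketch of offloading a data-parallel operation to a GPU from Python, assuming TensorFlow [9] with GPU support is installed; on a machine without a GPU the same code simply runs on the CPU:

    import tensorflow as tf

    gpus = tf.config.list_physical_devices('GPU')
    print('GPUs visible:', gpus)

    # A large matrix multiplication is data-parallel: thousands of GPU threads
    # each compute independent output elements concurrently
    with tf.device('/GPU:0' if gpus else '/CPU:0'):
        a = tf.random.normal((4096, 4096))
        b = tf.random.normal((4096, 4096))
        c = tf.matmul(a, b)
    print(c.shape)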



 HPC Roadmap & Key Vendors @ JSC
  IBM Power 4+ JUMP (2004), 9 TFlop/s
  IBM Power 6 JUMP, 9 TFlop/s
  IBM Blue Gene/L JUBL, 45 TFlop/s
  IBM Blue Gene/P JUGENE, 1 PFlop/s
  JUROPA, 200 TFlop/s & HPC-FF, 100 TFlop/s
  IBM Blue Gene/Q JUQUEEN (2012), 5.9 PFlop/s
  JURECA Cluster (2015), 2.2 PFlop/s
  JURECA Booster (2017), 5 PFlop/s
  JUWELS Cluster Module (2018), 12 PFlop/s
  JUWELS Scalable Module (2019/20), 50+ PFlop/s
  File servers (GPFS, Lustre) & Hierarchical Storage Server
  (two tracks: General Purpose Cluster & Highly Scalable modules)


Tutorial Machine: JURECA Cluster/Booster at JSC



DEEP Series of Projects – Modular Supercomputing Architecture Research

[12] DEEP Projects Web Page

 3 EU Exascale projects: DEEP, DEEP-ER, DEEP-EST
  27 partners, coordinated by JSC
  EU funding: 30 M€, JSC part > 5.3 M€
  Nov 2011 – Dec 2020
 Strong collaboration with our industry partners Intel, Extoll & Megware
 Juelich Supercomputing Centre implements the DEEP projects' designs in its HPC infrastructure


Application Co-Design for Machine & Deep Learning in HPC

 The Modular Supercomputing Architecture (MSA) enables a flexible HPC system design co-designed by the needs of different application workloads [12] DEEP Projects Web Page



Innovative HPC Hardware via Machine/Deep Learning Co-Design

 Explore more scalability compared to NVIDIA NVLink/NVSwitch ‘Islands‘
 Explore Network Attached Memory (NAM)

 The Modular Supercomputing Architecture (MSA) enables a flexible HPC system design co-designed by the needs of different application workloads
[13] E. Erlingsson, M. Riedel et al., IEEE MIPRO Conference, 2018



Test JURECA Access



Test JURECA Access – Important Tutorial Setup Steps

 In case we use Jupyter JSC (https://jupyter-jsc.fz-juelich.de/)
  Start a JupyterLab
  Open a new Terminal and enter the following commands (as explained at https://jupyter-jsc.fz-juelich.de/hub/static/files/projects.html ):
    wget --no-check-certificate https://jupyter-jsc.fz-juelich.de/static/files/symlinks.sh
    bash ~/symlinks.sh
  Create a new folder where you can store all your personal items (practicals and new files):
    mkdir /p/project/training2001/$USER
  Copy your own copy of the practicals:
    cd /p/project/training2001/$USER
    cp -R /p/project/training2001/practicals .

 Backup: in case we use an SSH client (MobaXterm)
  Open the terminal
  Create a new folder where you can store all your personal items (practicals and new files):
    mkdir /p/project/training2001/$USER
  Copy your own copy of the practicals:
    cd /p/project/training2001/$USER
    cp -R /p/project/training2001/practicals .

 Additional information: how to start an interactive session on the terminal with the GPUs of JURECA, load the modules, activate the virtualenv, and run Python scripts
  Navigate to your practical folder:
    cd /p/project/training2001/$USER/practicals
  Start an interactive session (note the reservations: 17/02 ‘prace_1_gpu’, 18/02 ‘prace_2_gpu’, 19/02 ‘prace_3_gpu’):
    salloc --gres=gpu:1 --partition=gpus --nodes=1 --account=training2001 --time=01:00:00 --reservation=prace_1_gpu
  Run the pre-made script which loads the necessary modules and activates the Python virtual environment:
    . run_venv_jupyter_terminal.sh
  Done; now you can run your Python scripts by using:
    srun python name_function.py



SSH Clients – Putty for Windows

 Example: Putty SSH Client for Windows


 Not recommended; better to install MobaXterm



MobaXterm SSH Client

[22] MobaXterm SSH Client



SSH Keys – Use Private/Public Key Pair to Access DEEP HPC System Example

 Remember to use your private SSH key to connect to the DEEP system
 The corresponding public SSH key is already uploaded on the HPC system (remote host) per username(!)
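A minimal sketch of what this looks like on the command line; the key file name, user name, and host name below are placeholders for illustration, not the actual DEEP login node:

    # Point the SSH client at your private key; the matching public key
    # is already installed on the remote host for your username
    ssh -i ~/.ssh/id_rsa_deep myuser@deep-login.example.org

    # Alternatively, configure it once in ~/.ssh/config:
    #   Host deep
    #       HostName deep-login.example.org
    #       User myuser
    #       IdentityFile ~/.ssh/id_rsa_deep
    # and then simply run: ssh deep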



[Video] PRACE – Introduction to Supercomputing

[23] PRACE – Introduction to Supercomputing



Lecture Bibliography



Lecture Bibliography (1)

 [1] Morris Riedel, ‘Deep Learning - Using a Convolutional Neural Network‘, Invited YouTube Lecture, six lectures & exercises, University of Ghent, 2017, Online:
https://www.youtube.com/watch?v=gOL1_YIosYk&list=PLrmNhuZo9sgZUdaZ-f6OHK2yFW1kTS2qF
 [2] M. Riedel et al., ‘Introduction to Deep Learning Models‘, JSC Tutorial, three days, JSC, 2019, Online:
http://www.morrisriedel.de/introduction-to-deep-learning-models
 [3] H. Lee et al., ‘Convolutional Deep Belief Networks for Scalable Unsupervised Learning of Hierarchical Representations’, Online:
http://doi.acm.org/10.1145/1553374.1553453
 [4] YouTube Video, ‘Neural Network 3D Simulation‘, Online:
https://www.youtube.com/watch?v=3JQ3hYko51Y
 [5] A. Rosebrock, ‘Get off the deep learning bandwagon and get some perspective‘, Online:
http://www.pyimagesearch.com/2014/06/09/get-deep-learning-bandwagon-get-perspective/
 [6] Big Data Tips – Big Data Mining & Machine Learning, Online:
http://www.big-data.tips/
 [7] NVIDIA Web Page, Online:
https://www.nvidia.com/en-us/
 [8] Keras Python High-Level Deep Learning Library, Online:
https://keras.io/
 [9] TensorFlow Python Low-Level Deep learning Library, Online:
https://www.tensorflow.org/
 [10] Deep Learning start-up example from Germany, Online:
https://soccerwatch.tv/
 [11] C. Bodenstein, M. Goetz, M. Riedel, ‘Automated Soccer Scene Tracking using Deep Neural Networks’, Poster IAS Symposium, Online:
https://www.researchgate.net/publication/328997974_Automated_Soccer_Scene_Tracking_Using_Deep_Neural_Networks



Lecture Bibliography (2)

 [12] DEEP Series Projects Web Page, Online:


http://www.deep-projects.eu/
 [13] E. Erlingsson, G. Cavallaro, A. Galonska, M. Riedel, H. Neukirchen, ‘Modular Supercomputing Design Supporting Machine Learning Applications‘, in
conference proceedings of the 41st IEEE MIPRO 2018, May 21-25, 2018, Opatija, Croatia, Online:
https://www.researchgate.net/publication/326708137_Modular_supercomputing_design_supporting_machine_learning_applications
 [14] Species Iris Group of North America Database, Online:
http://www.signa.org
 [15] Cheng, A.C, Lin, C.H., Juan, D.C., InstaNAS: Instance-aware Neural Architecture Search, Online:
https://arxiv.org/abs/1811.10201
 [16] UCI Machine Learning Repository Iris Dataset, Online:
https://archive.ics.uci.edu/ml/datasets/Iris
 [17] Wikipedia ‘Sepal‘, Online:
https://en.wikipedia.org/wiki/Sepal
 [18] F. Rosenblatt, ‘The Perceptron--a perceiving and recognizing automaton’,
Report 85-460-1, Cornell Aeronautical Laboratory, 1957, Online:
https://blogs.umass.edu/brain-wars/files/2016/03/rosenblatt-1957.pdf
 [19] University of Iceland, School of Engineering and Natural Sciences, Online:
https://english.hi.is/school_of_engineering_and_natural_sciences
 [20] Helmholtz AI Web Page, Online:
https://www.helmholtz.ai/
 [21] Morris Riedel, ‘Introduction to Machine Learning Algorithms‘, Invited YouTube Lecture, six lectures, University of Ghent, 2017, Online:
https://www.youtube.com/watch?v=KgiuUZ3WeP8&list=PLrmNhuZo9sgbcWtMGN0i6G9HEvh08JG0J



Lecture Bibliography (3)

 [22] MobaXterm SSH Client, Online:


https://mobaxterm.mobatek.net/
 [23] PRACE – Introduction to Supercomputing, Online:
https://www.youtube.com/watch?v=D94FJx9vxFA
 [24] Helmholtz Association Web Page, Online:
https://www.helmholtz.de/en/
 [25] Juelich Supercomputing Centre, Online:
https://www.fz-juelich.de/ias/jsc/EN/Home/home_node.html
 [26] Partnership for Advanced Computing in Europe (PRACE), Online:
http://www.prace-ri.eu/
 [27] PRACE Training Portal, Online:
http://www.training.prace-ri.eu/material/index.html
 [28] K. Hwang, G. C. Fox, J. J. Dongarra, ‘Distributed and Cloud Computing’, Book, Online:
http://store.elsevier.com/product.jsp?locale=en_EU&isbn=9780128002049



Acknowledgements



Acknowledgements – High Productivity Data Processing Research Group

 PD Dr. G. Cavallaro (finished PhD in 2016)
 Senior PhD Student A.S. Memon (finishing in Winter 2019)
 Senior PhD Student M.S. Memon (finished PhD in Spring 2019)
 PhD Student E. Erlingsson (mid-term in Spring 2019)
 PhD Student S. Bakarat (started in Spring 2019)
 PhD Student R. Sedona (started in Spring 2019)

 Dr. M. Goetz (finished PhD in 2018; now KIT)
 MSc M. Richerzhagen (thesis completed; now other division)
 MSc P. Glock (thesis completed; now INM-1)
 MSc C. Bodenstein (now Soccerwatch.tv, the deep learning startup)
 MSc Student G.S. Guðmundsson (Landsverkjun)

 This research group has received funding from the European Union's Horizon 2020 research and innovation programme under grant agreement No 763558 (DEEP-EST EU Project)


