High Performance Computing

  • From www.udacity.com
  • Self-paced
  • Free Access
  • 24 Sequences
  • Introductory Level


Course details


The course topics are centered on three different ideas or extensions to the usual serial RAM model you encounter in CS 101. Recall that a serial RAM assumes a sequential (serial) processor connected to a main memory.

Unit 1: The work-span or dynamic multithreading model
In this model, multiple processors are connected to the main memory. Since they can all "see" the same memory, the processors can coordinate and communicate via reads and writes to that "shared" memory. Sub-topics include:
  • Intro to the basic algorithmic model
  • Intro to OpenMP, a practical programming model
  • Comparison-based sorting algorithms
  • Scans and linked-list algorithms
  • Tree algorithms
  • Graph algorithms, e.g., breadth-first search

Unit 2: Distributed memory or network models
In this model, there is not one serial RAM but many serial RAMs connected by a network. Each serial RAM's memory is private, invisible to the others; consequently, the processors must coordinate and communicate by sending and receiving messages. Sub-topics include:
  • The basic algorithmic model
  • Intro to the Message Passing Interface (MPI), a practical programming model
  • Reasoning about the effects of network topology
  • Dense linear algebra
  • Sorting
  • Sparse graph algorithms
  • Graph partitioning

Unit 3: Two-level memory or I/O models
In this model, we return to a serial RAM, but instead of only a processor connected to a main memory, there is a smaller but faster scratchpad memory in between the two. The algorithmic question is how to use the scratchpad effectively, in order to minimize costly data transfers from main memory. Sub-topics include:
  • Basic models
  • Efficiency metrics, including "emerging" metrics like energy and power
  • I/O-aware algorithms
  • Cache-oblivious algorithms




  • Rich Vuduc - Rich Vuduc is an associate professor in the School of Computational Science and Engineering (CSE) at Georgia Tech. His research is in the area of high-performance computing. This year, Professor Vuduc is also serving as both the Associate Chair of Academic Affairs in the School of CSE and as the Director of CSE Programs. Research: The HPC Garage [@hpcgarage]. Professor Vuduc's lab is developing automated tools and techniques to tune, analyze, and debug software for parallel machines, including emerging high-end multi/manycore architectures and accelerators. They focus on applying these methods to CSE applications, which include computer-based simulation of natural and engineered systems and data analysis.


The Georgia Institute of Technology, also known as Georgia Tech or GT, is a co-educational public research university located in Atlanta, Georgia, USA. It is part of the wider University System of Georgia network. Georgia Tech has offices in Savannah (Georgia, USA), Metz (France), Athlone (Ireland), Shanghai (China), and Singapore.

Georgia Tech's reputation is built on its engineering and computer science programmes, which are among the best in the world. The range of courses on offer is complemented by programmes in the sciences, architecture, humanities and management.


Udacity is a for-profit educational organization founded by Sebastian Thrun, David Stavens, and Mike Sokolsky offering massive open online courses (MOOCs). According to Thrun, the origin of the name Udacity comes from the company's desire to be "audacious for you, the student". While it originally focused on offering university-style courses, it now focuses more on vocational courses for professionals.
