Unit name: High Performance Computing
Unit code: COMS35101
Credit points: 10
Level of study: H/6
Teaching block(s): Teaching Block 2 (weeks 13 - 24)
Unit director: Professor McIntosh-Smith
Open unit status: Not open
Pre-requisites: COMS12200 or EENG34040 or COMSM1302
Co-requisites: None
School/department: Department of Computer Science
Faculty: Faculty of Engineering
Description including Unit Aims
The aim of this unit is to introduce and explore technologies relating to high performance, high throughput, and high availability computing, and to offer practical hands-on use of and experience with said technologies. Students completing the unit should have had an opportunity to integrate
content from other units in the programme, for example implementing high performance parallel versions of algorithms from COMS21103 or COMS21202 based on theory introduced in COMS22101. The syllabus will include:
- Algorithmic models (the view from Berkeley).
- Computational models (PRAM, Flynn's taxonomy).
- Communication models (interconnects, message passing).
- Memory models (NUMA, COMA).
- Single-computer technologies (vector computing via SSE, multi-core computing via OpenMP, stream computing via CUDA/OpenCL).
- Multi-computer technologies (cluster computing via MPI, cloud/grid computing via Hadoop/MapReduce).
- Other approaches (batch processing via Condor, distributed computing via BOINC, distributed and redundant file systems, e.g., RAID/GFS, load balancing, check-pointing).
- Design and implementation of parallel algorithms and libraries.
Intended Learning Outcomes
On successful completion of this unit, students will be able to:
- Understand state-of-the-art high performance computing technologies, and select the right one for a given task;
- Utilise said technologies through appropriate programming interfaces (e.g., specialist languages, additions to standard languages or via libraries or compiler assistance);
- Analyse, implement, debug and profile high performance algorithms as realised in software.
Specific learning outcomes will be tackled through focused coursework activities, including:
- Mastering shared memory multi-core parallelisation through approaches such as OpenMP and Ct.
- Message passing parallel programming through APIs such as MPI.
- Many-core parallel programming through stream languages such as OpenCL and CUDA.
Teaching Information
Roughly two-thirds of teaching is in lecture format, and one-third in laboratory or problem-class format.
Assessment Information
Assessment for the unit is 100% via coursework assignments based on hands-on use of high performance computing platforms (e.g., BlueCrystal phase 1 or similar). The assignments will turn the theory developed in this and previous units into practical experience.
Reading and References
- D.A. Patterson and J.L. Hennessy. Computer Organization and Design: The Hardware/Software Interface. Morgan Kaufmann, ISBN: 1-558-60604-1. Price: £49.95.
- A. Grama, G. Karypis, V. Kumar and A. Gupta. Introduction to Parallel Computing (2nd Edition). Addison Wesley, ISBN: 0201648652. Price: £52.99.
- B. Chapman, G. Jost and R. van der Pas. Using OpenMP: Portable Shared Memory Parallel Programming. MIT Press, ISBN: 0262533022. Price: £25.95.
- P. Pacheco. Parallel Programming with MPI. Morgan Kaufmann, ISBN: 1558603395. Price: £43.70.