Previous Offerings
In fall 2004, Dr. Gary Howell taught CSC 783 (MA 783), Parallel Algorithms &
Scientific Computing, Tu-Th 4:00-5:20, Withers 328.
The main course content was practical parallel computing using message passing,
along with an introduction to shared-memory computing using OpenMP.
Students wrote, tested, and analyzed the performance of parallel codes
on the Linux blade center.
A common interest in the class was efficient serial and parallel
matrix-vector multiplication in both the dense and sparse cases.
A list of references for the course, including some on-line tutorials, is
available at [References].
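As a rough illustration of the matrix-vector topic above (a generic sketch, not
code from the course), the serial sparse case is often written against
compressed sparse row (CSR) storage; the indirect indexing into x is what makes
its performance so sensitive to cache behavior, the theme of Lecture 6
described below. The array names here are illustrative only.

#include <stdio.h>

/* Illustrative sketch: y = A*x with A stored in compressed sparse row
 * (CSR) form.  row_ptr[i] .. row_ptr[i+1]-1 index the nonzeros of row i;
 * the indirect access x[col_ind[k]] is the cache-unfriendly part. */
void csr_matvec(int n, const int *row_ptr, const int *col_ind,
                const double *val, const double *x, double *y)
{
    for (int i = 0; i < n; i++) {
        double sum = 0.0;
        for (int k = row_ptr[i]; k < row_ptr[i + 1]; k++)
            sum += val[k] * x[col_ind[k]];
        y[i] = sum;
    }
}

int main(void)
{
    /* 3x3 example: [2 0 1; 0 3 0; 4 0 5] times x = (1,1,1) */
    int    row_ptr[] = {0, 2, 3, 5};
    int    col_ind[] = {0, 2, 1, 0, 2};
    double val[]     = {2, 1, 3, 4, 5};
    double x[] = {1, 1, 1}, y[3];

    csr_matvec(3, row_ptr, col_ind, val, x, y);
    printf("y = %g %g %g\n", y[0], y[1], y[2]);   /* prints 3 3 9 */
    return 0;
}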
The first lecture includes a course syllabus and synopsis of grading policy,
as well as discussion of computer architecture in relation to efficient
codes [Lecture 1]. Lecture 2 introduces some
profilers and timers [Lecture 2]. Lecture
3 is on in-cache floating point optimization [Lecture 3].
Lecture 4 gives pointers on in-cache floating-point performance on the Xeon
processor and discusses out-of-cache performance [Lecture 4]. Lecture 5 gives
more specifics on Xeon performance [Lecture 5]. The 6th lecture gives some
examples of performance of sparse matrix-vector multiplies and how this is
influenced by cache memory [Lecture 6]. Lectures 7, 8, 9, and 10 introduce
the shared-memory standard API OpenMP [Lecture 7],
[Lecture 8], [Lecture 9],
[Lecture 10]. A revised version of Lecture
6, showing how to read sample matrices in Harwell-Boeing format, is available
as [Read HB]. The file [Intro
to MPI] is a discussion of the standard MPI library. The file
[Lecture10-b] explores the "race-condition"
example of Lecture 10 in a bit more detail.
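The race-condition discussion in Lectures 10 and 10-b concerns lost updates to
shared data. The fragment below is a generic OpenMP illustration of that idea,
not the lecture's own example: it contrasts an unsynchronized update of a
shared accumulator with the reduction clause that avoids the race.

#include <stdio.h>

/* Compile with OpenMP enabled, e.g. "gcc -fopenmp race.c". */
int main(void)
{
    const int n = 1000000;
    double racy = 0.0, safe = 0.0;

    /* Race: every thread performs a read-modify-write on the shared
     * variable 'racy', so updates from different threads can be lost. */
    #pragma omp parallel for
    for (int i = 0; i < n; i++)
        racy += 1.0;

    /* Fix: a reduction gives each thread a private partial sum that is
     * combined once at the end of the loop. */
    #pragma omp parallel for reduction(+:safe)
    for (int i = 0; i < n; i++)
        safe += 1.0;

    printf("racy = %.0f (may be less than %d), safe = %.0f\n",
           racy, n, safe);
    return 0;
}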
- July 6 - Debugging on the NCSU HPC machines
Tuesday, July 6 -- 9 AM to noon
Location: 331 Daniels
- Lecture Notes [Debuggers.html]
- Lab Note [Getting a Totalview GUI]
- June 29 - Introduction to MPI
Tuesday, June 29 -- 9 AM to noon
Location: 331 Daniels
- Lecture Notes [SimpleMPI.html]
- Sample MPI programs [pachec.tar.gz]
- June 22 - Using the NCSU HPC machines
Tuesday, June 22 -- 9 AM to noon
Location: 331 Daniels
- Lecture Notes [HTML]
- Sample SCALAPACK calls [scali.tar.gz]
- March 04 - MPI Course Schedule
- Monday, March 8th 1:30pm - 3:30pm
Location: 331 Daniels
"Introduction to MPI" - part 1
- Monday, March 8th 3:30pm - 5:00pm
Location: 331 Daniels
"MPI Workshop"
- Tuesday, March 9th 1:30pm - 3:30pm
Location: 331 Daniels
"Introduction to MPI" - part 2
- Tuesday, March 9th 3:30pm - 5:00pm
Location: 331 Daniels
"MPI Workshop"
- Wednesday, March 10th 1:30pm - 3:30pm
Location: 331 Daniels
"Introduction to MPI" - part 3
- Wednesday, March 10th 3:30pm - 5:00pm
Location: 331 Daniels
"MPI Workshop"
- Thursday, March 11th 2:00pm - 3:30pm
Location: 331 Daniels
"MPI Workshop"
- 29-Oct-03
ITD offered Message Passing Interface (MPI)
training during the week of 3-7 November.
An "Overview of MPI" was held on Monday,
November 3rd from 11am - noon and again on Friday,
November 7th from 1-2pm; the same material
was presented at each session.
A 3-part "Introduction to MPI" was offered
November 4th-6th from 1:30-3:00pm each day.
The material for the 3-day training was
divided approximately as:
- Tuesday - Basic MPI
[e.g. init, send, receive, finalize,
compiling, executing; a minimal sketch follows this list]
- Wednesday - Collective Communications,
Topologies
- Thursday - MPI2
[e.g. non-blocking and persistent point-to-point
communications, one-sided communication,
MPI I/O]
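As a minimal sketch of the Tuesday material (init, send, receive, finalize,
compiling, executing), and not the workshop's own example, the following C
program sends one integer from rank 1 to rank 0. The compiler wrapper and
launcher names depend on the local MPI installation.

#include <stdio.h>
#include <mpi.h>

/* Minimal MPI sketch: compile with e.g. "mpicc hello.c" and run with
 * "mpirun -np 2 ./a.out" (names vary by installation). */
int main(int argc, char *argv[])
{
    int rank, size;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (rank == 1) {
        int msg = 42;
        MPI_Send(&msg, 1, MPI_INT, 0, 0, MPI_COMM_WORLD);
    } else if (rank == 0) {
        int msg;
        MPI_Recv(&msg, 1, MPI_INT, 1, 0, MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        printf("rank 0 of %d received %d from rank 1\n", size, msg);
    }

    MPI_Finalize();
    return 0;
}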
- 23-Oct-03
An Introduction to MPI course was planned
for the first week of November, expected to
run for a couple of hours on about three
afternoons that week; the final locations and
times appear in the 29-Oct-03 entry above.