{"cells":[{"cell_type":"markdown","metadata":{},"source":["
This is an org-mode document. Org-mode is a markup language that allows\n","you to write text documents containing executable code blocks.
"]},{"cell_type":"markdown","metadata":{},"source":["Org-mode can do much more than this and you can find out more here.
"]},{"cell_type":"markdown","metadata":{},"source":["Org-mode works in emacs (the best editor in the world).
"]},{"cell_type":"markdown","metadata":{},"source":["You can do two things with code blocks (besides simply reading\n","them):
"]},{"cell_type":"markdown","metadata":{},"source":["execute: this can be done using the C-c C-c
(Ctrl-c\n","Ctrl-c) keybinding. The stdout of the code will be appended below the\n","code block itself
tangle: tangling a code block means creating a separate file that\n","contains the code which can thus be compiled and run as usual. If you\n","hit C-c C-v C-t
(this calls the emacs function\n","org-babel-tangle
) all the code blocks in this file will be\n","tangled. If, instead, you only want to tangle a single code block, go\n","to that block and hit C-u C-c C-v C-t
Once you have tangled a code block, you can compile and run it like\n","any other code. For the code blocks of this document to work, you have\n","to use the following compile command:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["gcc -fopenmp -I. aux.c code_block.c -o code_block"]},{"cell_type":"markdown","metadata":{},"source":["And then you can run it by launching the code_block
\n","executable
Set an environment variable: in order to set an\n","environment variable within emacs you have to hit\n","M-x setenv
then write the name of the variable, hit enter,\n","write its value and hit enter again
Refresh images in an org-mode document: if\n","images don't show up, use the command M-x\n"," org-redisplay-inline-images
Show this document as a presentation: to show\n","this document as a sequence of slides, you must install the emacs\n","org-tree-slide
package. Then open the document and execute\n","the command M-x\n"," org-tree-slide-mode
. You can move forward and backward using the\n",">
and <
keys. The setup code at the\n","bottom of this file should automagically install this package upon\n","opening
Export an org-mode document: Org-mode lets you\n","export org-mode documents in various formats such as pdf or html. Just\n","hit C-c C-e
and follow the instructions.
When you open this document, the code block at the bottom (in the\n","Setup section) is supposed to be executed automatically to set up your\n","environment. If this does not happen (as may be the case with older emacs\n","versions), just go there and hit C-c C-c
to execute it\n","manually. Also, the two code blocks named auxc
and\n","auxh
are supposed to be tangled automatically. If this does\n","not happen, do it manually.
Roughly speaking, parallel computers can be classified into two\n","types:
"]},{"cell_type":"markdown","metadata":{},"source":["Shared memory: all the CPUs share one (logical)\n","memory, i.e., all processes can access the same addressing space\n","regardless of the CPU they are running on. This makes it simple to\n","communicate data from one process to another
Distributed memory: the computer is actually\n","formed of multiple nodes, each having one or more CPUs and its own\n","memory. Nodes are connected through a network. A process running on one\n","node can only access data in its local memory; therefore, if it needs\n","data that is on another node, a message must be exchanged through the\n","network
One \"logical memory\" does not necessarily mean that only one physical\n","memory exists. If multiple memory modules exist, access to data may\n","be non-uniform: such machines are called NUMA (Non-Uniform Memory\n","Access) machines
"]},{"cell_type":"markdown","metadata":{},"source":["Both types of shared-memory architectures can be programmed using the\n","same tools and technologies. When performance is a concern, though,\n","special care must be taken for NUMA machines (we will not cover in this\n","course)
"]},{"cell_type":"markdown","metadata":{},"source":["Until the early 00's all processors had only one core (in fact we did\n","not use the word \"core\" at all). Then why have multicore processors\n","become ubiquitous? Energy consumption is the reason why:
"]},{"cell_type":"markdown","metadata":{},"source":["P = CV2f
"]},{"cell_type":"markdown","metadata":{},"source":["however there is a linear dependence between f and V, therefore P grows as the cube of $f$!!!
"]},{"cell_type":"markdown","metadata":{},"source":["Because the performance of single-core processors could only be\n","improved by increasing the frequency, this trend became\n","unsustainable.
"]},{"cell_type":"markdown","metadata":{},"source":["Multicore processors design relies on Thread Level\n","Parallelism to improve performance. This means that more\n","transistors are used to assemble multiple execution units (i.e., cores)\n","on a single chip. This improves performance with only a linear increase\n","in the energy consumption: the capacitance C grows because of the increased\n","number of transistors.
"]},{"cell_type":"markdown","metadata":{},"source":["In fact, it is also possible to produce faster processors which\n","consume less energy!!! Consider a quad-core processor with frequency\n","0.6f: it will be 2.4 times\n","faster and consume roughly 15\\% less energy
"]},{"cell_type":"markdown","metadata":{},"source":["Multicore computer: what does it look like?
"]},{"cell_type":"markdown","metadata":{},"source":["The hwloc
library is designed to retrieve all the\n","details of the architecture. For example, on my computer, I can run the\n","lstopo
program from hwloc
to retrieve the\n","architecture:
Many options exist for shared-memory parallel programming, but they are\n","not all simple, portable, or efficient.
"]},{"cell_type":"markdown","metadata":{},"source":["Examples:
"]},{"cell_type":"markdown","metadata":{},"source":["pThreads (POSIX Threads): difficult to use and\n","debug, not fully portable
Intel TBB/OneAPI: proprietary
Cilk: limited support and portability
OpenMP: extremely portable, efficient,\n","relatively easy to use; huge community and support
OpenMP (Open specifications for MultiProcessing) is\n","an Application Program Interface (API) to explicitly direct\n","multi-threaded, shared memory parallelism.
"]},{"cell_type":"markdown","metadata":{},"source":["First standard 1.0 was published in 1997
Latest standard is 5.2 published in November 2021
\n","Many resources at https://www.openmp.org
The OpenMP standard is developed by an advisory board that includes\n","many members from academia (UTK, LBNL, ANL, NASA,…) and industry (Intel,\n","AMD, NEC, Fujitsu, NVIDIA,…)
"]},{"cell_type":"markdown","metadata":{},"source":["This course does not cover the whole OpenMP standard. The OpenMP\n","manual is over 600 pages as of today (v5.2)
"]},{"cell_type":"markdown","metadata":{},"source":["Only a subset of constructs and clauses will be presented.
"]},{"cell_type":"markdown","metadata":{},"source":["Tons of tutorials can be found online but better be used with\n","moderation.
"]},{"cell_type":"markdown","metadata":{},"source":["OpenMP is based on a fork-join execution model:
"]},{"cell_type":"markdown","metadata":{},"source":["Execution is started by a single thread called master\n","thread
when a parallel region is encountered, the master thread spawns a\n","set of threads
the set of instructions enclosed in a parallel region is\n","executed
at the end of the parallel region all the threads synchronize and\n","terminate leaving only the master
The master is a member of the team and has\n","thread number 0
Starting from the beginning of the region, the code is duplicated\n","and all threads will execute that code.
There is an implied barrier at the end of a\n","parallel section.
If any thread terminates within a parallel region, all threads in\n","the team will terminate.
Just a simple hello world with multiple threads:
"]},{"cell_type":"markdown","metadata":{},"source":["start with serial execution
open a parallel region where each thread prints its own id and the\n","total number of threads, obtained with
\n","Just a simple hello world with multiple threads:
"]},{"cell_type":"markdown","metadata":{},"source":["start with serial execution
open a parallel region where:
\n","omp_get_thread_num()
and\n","omp_get_num_threads()
How many threads do we have in the parallel regions of a code? The\n","number of threads depends on:
"]},{"cell_type":"markdown","metadata":{},"source":["Evaluation of the if
clause (one or many)
Setting of the num_threads
clause
Use of the omp_set_num_threads()
library\n","function
Setting of the OMP_NUM_THREADS
environment\n","variable
Implementation default - usually the number of CPUs on a node,\n","though it could be dynamic
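"]},{"cell_type":"markdown","metadata":{},"source":["For instance, here is a minimal sketch showing two of these mechanisms;\n","the num_threads clause takes precedence over the\n","omp_set_num_threads() call:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","\n","int main(int argc, char **argv) {\n","  omp_set_num_threads(4);             /* library call: request 4 threads */\n","  #pragma omp parallel\n","  if (omp_get_thread_num() == 0)\n","    printf(\"first region: %d threads\\n\", omp_get_num_threads());\n","\n","  #pragma omp parallel num_threads(2) /* the clause overrides the call */\n","  if (omp_get_thread_num() == 0)\n","    printf(\"second region: %d threads\\n\", omp_get_num_threads());\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["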
Complete example
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeHere is a minor variant of the hello world program…with a bug
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeMost variables are shared by default
Global variables include: file-scope and static variables (C/C++);\n","COMMON blocks and SAVE variables (Fortran)
\n","Private variables include: loop index variables; stack variables in\n","functions called from parallel regions; Fortran: automatic variables\n","within a statement block
The OpenMP Data Scope Attribute Clauses are used to explicitly\n","define how variables should be scoped. They include:
\n","private
firstprivate
shared
default
reduction
private(list)
: a new object of the same type is\n","created for each thread (uninitialized!)
firstprivate(list)
: Listed variables are initialized\n","according to the value of their original objects prior to entry into the\n","parallel or work-sharing construct.
lastprivate(list)
: The value copied back into the\n","original variable object is obtained from the last (sequentially)\n","iteration or section of the enclosing construct.
shared(list)
: only one object exists in memory and\n","all the threads access it
default(shared|private|none)
: sets the default\n","scoping
reduction(operator:list)
: performs a reduction on\n","the variables that appear in its list.
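"]},{"cell_type":"markdown","metadata":{},"source":["As a quick illustration of firstprivate and lastprivate, here is a\n","minimal sketch (the values are arbitrary):
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","\n","int main(int argc, char **argv) {\n","  int i, x = 10, y = 0;\n","  #pragma omp parallel for firstprivate(x) lastprivate(y)\n","  for (i = 0; i < 8; i++) {\n","    /* each thread starts with its own copy of x, initialized to 10 */\n","    y = x + i; /* y is copied back from the last iteration (i == 7) */\n","  }\n","  printf(\"y = %d\\n\", y); /* prints 17 */\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["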
Let's fix the bug: by declaring iam
private, each thread\n","will have its own copy of this variable
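"]},{"cell_type":"markdown","metadata":{},"source":["A sketch of the fixed program:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","\n","int main(int argc, char **argv) {\n","  int iam;\n","  #pragma omp parallel private(iam)\n","  {\n","    iam = omp_get_thread_num(); /* each thread writes its own copy */\n","    printf(\"Hello world from thread %d of %d\\n\",\n","           iam, omp_get_num_threads());\n","  }\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["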
The interest of parallel programming is not to execute the same\n","workload multiple times but to distribute the workload to the available\n","processes so that execution time can be reduced. This implies that\n","multiple instructions will be executed concurrently\n","(or, equivalently, in parallel).
"]},{"cell_type":"markdown","metadata":{},"source":["Two successive statements S1 and S2 can be executed concurrently if\n","they are independent. According to the\n","Bernstein conditions there exist three types of\n","dependencies:
"]},{"cell_type":"markdown","metadata":{},"source":["Read-After-Write or true\n","dependency or flow dependency: if\n","Input(S2)
overlaps with Output(S1)
Write-After-Read or\n","anti-dependency: if Output(S2)
overlaps\n","with Input(S1)
Write-After-Write or output\n","dependency: if Output(S2)
overlaps with\n","Output(S1)
Example. Are these two statements independent?
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeWhat kind of dependency is there? RAW. Here is a more convoluted\n","example
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeExample. Are these two statements independent?
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeWhat kind of dependency is there? WAR. Note that WAR dependencies can\n","be sometimes removed!
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeNow the second and third statement have become independent. Here is a\n","more convoluted example
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeExample. Are these two statements independent?
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeWhat kind of dependency is there? WAW. Here is a more convoluted\n","example
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeThe master
directive identifies a code block which is\n","only executed by the master thread
The single
directive identifies a code block which is\n","only executed by one (any) thread
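"]},{"cell_type":"markdown","metadata":{},"source":["A minimal sketch showing the two directives side by side:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","\n","int main(int argc, char **argv) {\n","  #pragma omp parallel\n","  {\n","    #pragma omp master\n","    printf(\"master: executed by thread %d\\n\", omp_get_thread_num());\n","    #pragma omp single\n","    printf(\"single: executed by thread %d\\n\", omp_get_thread_num());\n","  }\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["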
One obvious difference between single
and\n","master
is that with master
only the thread\n","with id 0 can execute the code block. This has a risk: you have to make\n","sure that the master thread passes by that code block otherwise it will\n","never be executed.
Can you spot any other difference from executing the two code blocks\n","above? There is an implied barrier at the end of the\n","single
block. It can be removed using the\n","nowait
clause
In the code below, all the iterations in the loop are\n","independent. This means that they can be executed\n","concurrently. However, the code below is wrong because\n","it does not produce the same result as the sequential version
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeOpenMP provides a construct that automatically parallelizes loops by\n","executing chunks of iterations concurrently. Note that the loop index\n","i
is implicitly private
.
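"]},{"cell_type":"markdown","metadata":{},"source":["A minimal sketch of the corrected loop using the for construct:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#define N 8\n","\n","int main(int argc, char **argv) {\n","  int i, x[N];\n","  for (i = 0; i < N; i++) x[i] = 0;\n","  #pragma omp parallel for /* iterations are split among the threads */\n","  for (i = 0; i < N; i++)\n","    x[i] = x[i] + 1;\n","  for (i = 0; i < N; i++) printf(\"%d \", x[i]);\n","  printf(\"\\n\");\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["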
The schedule
clause in the for
construct\n","specifies how the iterations of the loop are assigned to threads:
static
: loop iterations are divided into pieces of\n","size chunk and then statically assigned to threads in a round-robin\n","fashion
dynamic
: loop iterations are divided into pieces of\n","size chunk, and dynamically scheduled among the threads; when a thread\n","finishes one chunk, it is dynamically assigned another
guided
: for a chunk size of 1, the size of each\n","chunk is proportional to the number of unassigned iterations divided by\n","the number of threads, decreasing to 1. For a chunk size with value k\n","(greater than 1), the size of each chunk is determined in the same way\n","with the restriction that the chunks do not contain fewer than k\n","iterations
runtime
: The scheduling decision is deferred until\n","runtime and controlled through the environment variable OMP_SCHEDULE
Let's see how schedule
works:
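"]},{"cell_type":"markdown","metadata":{},"source":["A minimal sketch (try changing the schedule kind and the chunk size):
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","#define N 16\n","\n","int main(int argc, char **argv) {\n","  int i;\n","  #pragma omp parallel for schedule(static, 2) num_threads(4)\n","  for (i = 0; i < N; i++)\n","    printf(\"iteration %2d done by thread %d\\n\", i, omp_get_thread_num());\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["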
The collapse
clause allows for combining multiple loops\n","into a single one and for parallelizing its iterations. The number of\n","loops to collapse can be provided as an option
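"]},{"cell_type":"markdown","metadata":{},"source":["For instance, in this sketch collapse(2) merges the i and j loops into\n","a single iteration space of size N*M which is then split among the\n","threads:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","#define N 4\n","#define M 4\n","\n","int main(int argc, char **argv) {\n","  int i, j;\n","  #pragma omp parallel for collapse(2)\n","  for (i = 0; i < N; i++)\n","    for (j = 0; j < M; j++)\n","      printf(\"(%d,%d) done by thread %d\\n\", i, j, omp_get_thread_num());\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["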
A barrier is simply a waiting point: all threads must wait for all\n","the others to reach a barrier point before moving on. Example
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeImproper use of barriers can cause deadlocks: if not\n","all threads pass by the barrier, those who do will be waiting\n","forever…
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeThe critical
directive identifies a code block which is\n","executed in mutual exclusion by all threads, i.e., one\n","at a time.
Critical sections can have names. The name argument is used to\n","identify the critical construct. For any critical construct for which\n","name is not specified, the effect is as if an identical (unspecified)\n","name was specified. Two or more threads can never be simultaneously\n","inside critical regions that share the same name!
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeThe atomic construct ensures that a specific storage location is\n","accessed atomically so that possible simultaneous reads and writes by\n","multiple threads do not result in indeterminate values. Five types of\n","atomic constructs exist: read
, write
,\n","update
, capture
and compare
read
: atomically read a memory location, i.e.,\n","x
can not change while being readwrite
: atomically write a memory location
update
: atomically update (i.e. read-modify-write) a\n","memory location
So what's the interest of atomic? take this example: we could\n","certainly use critical
to protect the update of\n","x[]
but this would prevent calls to\n","compute_one
to be executed concurrently. With\n","atomic
only the update of x[]
is\n","serialized.
"]},{"cell_type":"markdown","metadata":{},"source":["Assume this simple code that computes the sum of all the elements of\n","an array
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includesum
. We could actually use a critical section or\n","an atomic update but we would loose all performance."]},{"cell_type":"markdown","metadata":{},"source":["Reductions allow us to take advantage of\n","associativity and commutativity of some operators (+ in this case):
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeFor the C
language, the predefined reduction operators are:\n","\n","| Operator | Initializer | Combiner |\n","|---|---|---|\n","| + | omp_priv = 0 | omp_out += omp_in |\n","| * | omp_priv = 1 | omp_out *= omp_in |\n","| & | omp_priv = ~0 | omp_out &= omp_in |\n","| \\| | omp_priv = 0 | omp_out \\|= omp_in |\n","| ^ | omp_priv = 0 | omp_out ^= omp_in |\n","| && | omp_priv = 1 | omp_out = omp_in && omp_out |\n","| \\|\\| | omp_priv = 0 | omp_out = omp_in \\|\\| omp_out |\n","| max | omp_priv = minval | omp_out = max(omp_in, omp_out) |\n","| min | omp_priv = maxval | omp_out = min(omp_in, omp_out) |
The OpenMP task
construct simply identifies a block of\n","code which is ready to be executed and whose execution is\n","deferred. Once the task is created, it can be executed\n","by any thread, at any time. This means that we can not\n","make any assumptions on when a task is executed and by which thread and\n","in which order all the created tasks are executed.
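"]},{"cell_type":"markdown","metadata":{},"source":["A first sketch: one thread creates a few tasks, and any thread may\n","execute them:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","\n","int main(int argc, char **argv) {\n","  #pragma omp parallel\n","  {\n","    #pragma omp master\n","    {\n","      int i;\n","      for (i = 0; i < 4; i++) {\n","        #pragma omp task\n","        printf(\"task %d executed by thread %d\\n\", i, omp_get_thread_num());\n","      }\n","    }\n","  }\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["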
Why do we need the master
construct in the code\n","above?
A slightly more complex example, with a bug:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeWhat went wrong?
"]},{"cell_type":"markdown","metadata":{},"source":["The value of shared variables accessed within a task might change\n","between the creation of the task and its actual execution. Some clauses\n","can be used to define the scope of variables within tasks:
"]},{"cell_type":"markdown","metadata":{},"source":["shared(x)
means that when the task is executed x is\n","the same variable (the same memory location) as when the task was\n","created
firstprivate(x)
means that x is private to the task,\n","i.e., when the task is created, a brand new variable x is created as\n","well and its value is set to be the same as the value of x in the\n","enclosing context at the moment when the task is created. This new copy\n","is destroyed when the task is finished
private(x)
means that x is private to the task,\n","i.e., when the task is created, a brand new variable x is created as\n","well. This new copy is destroyed when the task is finished
If a variable is private
in the parallel region it is\n","implicitly firstprivate
in the included tasks
A slightly more complex example, with a bugfix:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeCreating and handling tasks has a cost. Therefore, it is not always\n","worth creating a task, for example, if the task has only little work to\n","do. The if
clause can be used to choose whether to create a\n","task or immediately run the code block
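"]},{"cell_type":"markdown","metadata":{},"source":["For instance (a sketch; the threshold is arbitrary). When the condition\n","is false, the code block is executed immediately, undeferred, by the\n","creating thread:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","\n","int main(int argc, char **argv) {\n","  int i;\n","  #pragma omp parallel\n","  #pragma omp master\n","  for (i = 0; i < 8; i++) {\n","    #pragma omp task firstprivate(i) if(i > 3)\n","    printf(\"iteration %d run by thread %d\\n\", i, omp_get_thread_num());\n","  }\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["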
So how can we be sure that some tasks are actually executed? The\n","taskwait
directive ensures that all the previously\n","submitted tasks have been executed. Note that this does not include\n","descendants, i.e., tasks that have been generated by other tasks.
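"]},{"cell_type":"markdown","metadata":{},"source":["A sketch:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","\n","int main(int argc, char **argv) {\n","  #pragma omp parallel\n","  #pragma omp master\n","  {\n","    #pragma omp task\n","    printf(\"task A\\n\");\n","    #pragma omp task\n","    printf(\"task B\\n\");\n","    #pragma omp taskwait /* tasks A and B are done past this point */\n","    printf(\"after taskwait\\n\");\n","  }\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["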
It is possible to define an execution order by specifying task\n","dependencies. This is done through the\n","depend
clause and the Bernstein conditions:
The in
dependence-type. The generated task will be a\n","dependent task of all previously generated sibling tasks that reference\n","at least one of the list items in an out
or\n","inout
dependence-type list.
The out
and inout
dependence-types. The\n","generated task will be a dependent task of all previously generated\n","sibling tasks that reference at least one of the list items in an\n","in
, out
, or inout
dependence-type\n","list.
Example:
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeCan you draw the dependency graph?
"]},{"cell_type":"markdown","metadata":{},"source":["Assuming only two threads are available and all functions take one\n","second, the following two schedulings are possible.
"]},{"cell_type":"markdown","metadata":{},"source":["The priority
clause can be used to give the OpenMP\n","scheduler a hint on the importance of a task
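"]},{"cell_type":"markdown","metadata":{},"source":["For instance (a sketch; larger values hint at higher priority, and the\n","hint only takes effect if OMP_MAX_TASK_PRIORITY is set accordingly):
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#include <stdio.h>\n","#include <omp.h>\n","\n","int main(int argc, char **argv) {\n","  int i;\n","  #pragma omp parallel\n","  #pragma omp master\n","  for (i = 0; i < 8; i++) {\n","    #pragma omp task firstprivate(i) priority(i)\n","    printf(\"task %d\\n\", i); /* tasks with larger i are hinted to run first */\n","  }\n","  return 0;\n","}"]},{"cell_type":"markdown","metadata":{},"source":["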
When using pointers to specify dependencies, you should dereference\n","it to make sure the dependence is inferred from the pointed data rather\n","than the pointer variable.
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeA lock is a data of type omp_lock_t
which can be used to\n","prevent simultaneous access to shared resources according to the\n","schema
acquire (or set or lock) the lock
access data
release (or unset or unlock) the lock
Acquisition of the lock is exclusive in the sense that only one\n","thread can hold the lock at a given time. A lock can be in one of the\n","following states:
"]},{"cell_type":"markdown","metadata":{},"source":["uninitialized: the lock is not active and cannot\n","be acquired/released by any thread;
unlocked: the lock has been initialized and can\n","be acquired by any thread;
locked: the lock has been acquired by one thread\n","and cannot be acquired by any other thread until the owner releases\n","it.
Transitions through states can be achieved with the following\n","routines
"]},{"cell_type":"markdown","metadata":{},"source":["omp_init_lock
: initializes a lock
omp_destroy_lock
: uninitializes a lock
omp_set_lock
: waits until a lock is available, and\n","then sets it
omp_unset_lock
: unsets a lock
omp_test_lock
: tests a lock, and sets it if it is\n","available
Example
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeExample with test lock
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeDo not create a new loop just for the purpose of parallelizing it
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeTasks have a relatively high overhead and should be\n","used with caution.
"]},{"cell_type":"markdown","metadata":{},"source":["Critical sections serialize the work of processes.
"]},{"cell_type":"code","execution_count":null,"metadata":{},"outputs":[],"source":["//%cflags:-fopenmp\n","#includeStupid to say but there is no parallelism if you don't put a parallel\n","section somewhere. Just don't forget.
"]},{"cell_type":"markdown","metadata":{},"source":["Conversely, if you can merge multiple parallel sections into a single\n","one, it might be good to do so in order to reduce overhead
"]},{"cell_type":"markdown","metadata":{},"source":["