The SEMPARIS seminar server hosts all seminar announcements in the Paris area, across every field of physics and mathematics. It also lets you receive a selection of your choice by e-mail. Finally, it archives the presentations in PDF or PowerPoint format.
Upcoming seminars
Monday 16 September 2024, 14:15 at IPHT, Salle Claude Itzykson, Bât. 774 | COURS (Cours) | physics |
Abstract: | Many numerical computations in physics require applying the same operations repeatedly to large arrays, a type of task likely to gain substantial speed when parallelized. Traditionally, parallelism has been achieved by increasing the number of CPUs in a computer, or the number of computing cores in each CPU. Graphics Processing Units (GPUs) may be viewed as a specialized type of processor with thousands of computing cores. Thanks to a combination of hardware and software developments, their use has expanded from computer graphics into general-purpose computing. Besides commercial software written to take advantage of the computing power of GPUs, several tools exist for developing programs that offload parts of a computation to one or more GPUs. Among these programming frameworks, CUDA (Compute Unified Device Architecture) is one of the most widely employed. CUDA is an extension of the C language, but it can also be used from C++, Fortran, Python, and many other programming languages. The goal of this course is to present the concepts of CUDA programming from the ground up, in order to write simple programs that use GPUs to accelerate critical sections of code. Topics: architecture of a GPU (computing units, memory, interplay with the host computer); what types of tasks GPUs are good for; memory operations (allocation, data transfer to and from a GPU); GPU shared memory; CUDA kernels; streams and synchronization between dependent tasks; reduction operations on a GPU; available libraries: cuFFT, cuBLAS, cuSPARSE, cuRAND.
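The workflow the abstract describes (device allocation, host-to-device transfer, kernel launch, transfer back) can be sketched in a minimal CUDA C program. This is an illustrative example only, not material from the course; all names in it are our own:

```cuda
#include <cstdio>
#include <cstdlib>
#include <cuda_runtime.h>

// Kernel: each thread adds one pair of array elements.
__global__ void vecAdd(const float *a, const float *b, float *c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

int main() {
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);

    // Host arrays.
    float *ha = (float *)malloc(bytes);
    float *hb = (float *)malloc(bytes);
    float *hc = (float *)malloc(bytes);
    for (int i = 0; i < n; ++i) { ha[i] = 1.0f; hb[i] = 2.0f; }

    // Device allocation and host-to-device transfer.
    float *da, *db, *dc;
    cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
    cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    vecAdd<<<blocks, threads>>>(da, db, dc, n);

    // Device-to-host transfer of the result.
    cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
    printf("c[0] = %f\n", hc[0]);

    cudaFree(da); cudaFree(db); cudaFree(dc);
    free(ha); free(hb); free(hc);
    return 0;
}
```

Compiled with `nvcc` and run on a machine with an NVIDIA GPU, this adds two million-element vectors on the device; the block/grid arithmetic in the launch line is the standard pattern the course's "CUDA kernels" topic covers.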
Monday 23 September 2024, 14:15 at IPHT, Salle Claude Itzykson, Bât. 774 | COURS (Cours) | physics |
Abstract: | (same abstract as the 16 September session of this course)
Thursday 26 September 2024, 17:00 at UFR-PHYS-SU, Amphi 25, Campus Pierre-et-Marie-Curie, Jussieu | CPMC (Colloquium Pierre et Marie Curie) | astro-ph|cond-mat|gr-qc|hep-ex|hep-lat|hep-ph|hep-th|physics|quant-ph |
Abstract: | TBA
Monday 30 September 2024, 14:15 at IPHT, Salle Claude Itzykson, Bât. 774 | COURS (Cours) | physics |
Abstract: | (same abstract as the 16 September session of this course)
Monday 7 October 2024, 14:15 at IPHT, Salle Claude Itzykson, Bât. 774 | COURS (Cours) | physics |
Abstract: | (same abstract as the 16 September session of this course)
Thursday 28 November 2024, 17:00 at UFR-PHYS-SU, Amphi 25, Campus Pierre-et-Marie-Curie, Jussieu | CPMC (Colloquium Pierre et Marie Curie) | astro-ph|cond-mat|gr-qc|hep-ex|hep-lat|hep-ph|hep-th|physics|quant-ph |
Abstract: | TBA
Thursday 30 January 2025, 17:00 at UFR-PHYS-SU, Amphi 25, Campus Pierre-et-Marie-Curie, Jussieu | CPMC (Colloquium Pierre et Marie Curie) | astro-ph|cond-mat|gr-qc|hep-ex|hep-lat|hep-ph|hep-th|physics|quant-ph |
Abstract: | TBA
Thursday 27 March 2025, 17:00 at UFR-PHYS-SU, Amphi 25, Campus Pierre-et-Marie-Curie, Jussieu | CPMC (Colloquium Pierre et Marie Curie) | astro-ph|cond-mat|gr-qc|hep-ex|hep-lat|hep-ph|hep-th|physics|quant-ph |
Abstract: | TBA
Thursday 22 May 2025, 17:00 at UFR-PHYS-SU, Amphi 25, Campus Pierre-et-Marie-Curie, Jussieu | CPMC (Colloquium Pierre et Marie Curie) | astro-ph|cond-mat|gr-qc|hep-ex|hep-lat|hep-ph|hep-th|physics|quant-ph |
Abstract: | TBA
You are invited to subscribe to the SEMPARIS mailing lists in order to receive the announcements by e-mail.