Knowledge Base Resources
Contributed by cyberinfrastructure professionals (researchers, research computing facilitators, research software engineers and HPC system administrators), these resources are shared through the ConnectCI community platform. Add resources you find helpful!
HPC University
A comprehensive list of training resources from HPC University. HPCU is a virtual organization whose primary goal is to provide a cohesive, persistent, and sustainable online environment for sharing educational and training materials across a continuum of high performance computing environments, spanning desktop computing to the highest-end facilities offered by HPC centers.
ACCESS HPC Workshop Series
Monthly workshops on a variety of HPC topics, sponsored by ACCESS and organized by the Pittsburgh Supercomputing Center (PSC). Each workshop is telecast to multiple satellite sites, and workshop materials are archived.
NCSA HPC Training Moodle
Self-paced tutorials on high-end computing topics such as parallel computing, multi-core performance, and performance tools. Other related topics include 'Cybersecurity for End Users' and 'Developing Webinar Training.' Some of the tutorials also offer digital badges. Many of these tutorials were previously offered on CI-Tutor. A list of open-access training courses is provided below.
Parallel Computing on High-Performance Systems
Profiling Python Applications
Using an HPC Cluster for Scientific Applications
Debugging Serial and Parallel Codes
Introduction to MPI
Introduction to OpenMP
Introduction to Visualization
Introduction to Performance Tools
Multilevel Parallel Programming
Introduction to Multi-core Performance
Using the Lustre File System
Cornell Virtual Workshop
Cornell Virtual Workshop is a comprehensive training resource for high performance computing topics. The Cornell University Center for Advanced Computing (CAC) is a leader in the development and deployment of Web-based training programs. The Cornell Virtual Workshop learning platform is designed to enhance the computational science skills of researchers, accelerate the adoption of new and emerging technologies, and broaden the participation of underrepresented groups in science and engineering. Over 350,000 unique visitors have accessed Cornell Virtual Workshop training on programming languages, parallel computing, code improvement, and data analysis. The platform supports learning communities around the world, with code examples from national systems such as Frontera, Stampede2, and Jetstream2.
Mechanism and Implementation of Various MPI Libraries
A detailed explanation of the communication routines and management methods of different MPI libraries, along with several exercises designed to familiarize users with the MPI build process.
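For context on the build process covered there, a typical MPI compile-and-run cycle looks like the following sketch (assuming an MPI implementation such as Open MPI or MPICH is already available, for example via module load; the file name hello_mpi.c and the rank count are illustrative):
# Compile an MPI program with the compiler wrapper shipped by the MPI library
mpicc -O2 -o hello_mpi hello_mpi.c
# Launch it across 4 ranks; the launcher (mpirun, mpiexec, or srun) depends on
# the MPI library and the cluster's scheduler
mpirun -np 4 ./hello_mpi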
Setting up PyFR flow solver on clusters
These instructions were executed on the FASTER and Grace cluster computing facilities at Texas A&M University. However, the process can be applied to other clusters with similar environments. For local installation, please refer to the PyFR documentation.
Please note that these instructions were valid at the time of writing; depending on when you run them, the module versions may need to be updated.
1. Loading Modules
The first step involves loading pre-installed software libraries required for PyFR. Execute the following commands in your terminal to load these modules:
module load foss/2022b
module load libffi/3.4.4
module load OpenSSL/1.1.1k
module load METIS/5.1.0
module load HDF5/1.13.1
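If you want to confirm that the modules loaded without conflicts, the module system can list the currently active environment:
# Show the modules that are currently loaded
module list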
2. Python Installation from Source
Choose a location for the Python 3.11.1 installation, preferably a .local directory, and point the $LOCAL variable used below at it. Then navigate to the directory containing the Python 3.11.1 source code (here referred to via $INSTALL) and configure and install Python:
cd $INSTALL/Python-3.11.1/
./configure --prefix=$LOCAL --enable-shared --with-system-ffi --with-openssl=/sw/eb/sw/OpenSSL/1.1.1k-GCCcore-11.2.0/ PKG_CONFIG_PATH=$LOCAL/pkgconfig LDFLAGS=/usr/lib64/libffi.so.6.0.2
make clean; make -j20; make install;
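Because Python is configured with --enable-shared, the dynamic loader needs to find the freshly built libpython at run time. One common approach is to prepend the installation prefix to your environment (a sketch assuming $LOCAL is the prefix chosen above; adjust to your shell setup):
# Make the new interpreter and its shared library visible
export PATH=$LOCAL/bin:$PATH
export LD_LIBRARY_PATH=$LOCAL/lib:$LD_LIBRARY_PATH
# Sanity check that the intended interpreter is picked up
python3.11 --version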
3. Virtual Environment Setup
A virtual environment allows you to isolate Python packages for this project from others on your system. Create and activate a virtual environment using:
pip3.11 install virtualenv
python3.11 -m venv pyfr-venv
. pyfr-venv/bin/activate
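Before installing anything into the environment, it can be worth verifying that the active interpreter and pip really live inside pyfr-venv (the paths reported by these commands should point into the pyfr-venv directory):
# Confirm the virtual environment's interpreter and pip are being used
which python3
python3 -m pip --version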
4. Install PyFR Dependencies
Several Python packages are required for PyFR. Installing pyfr with pip and then uninstalling it pulls in its dependencies while leaving the way clear for the source installation in the next step. Install the packages using the following commands:
pip3 install --upgrade pip
pip3 install --no-cache-dir wheel
pip3 install --no-cache-dir botorch pandas matplotlib pyfr
pip3 uninstall -y pyfr
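If you want to confirm that the dependencies were installed into the virtual environment and that the pip-installed pyfr was removed again, a quick listing helps (the grep pattern is just an example):
# List the key packages; pyfr should no longer appear after the uninstall
pip3 list | grep -i -E 'botorch|pandas|matplotlib|pyfr'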
5. Install PyFR from Source
Finally, navigate to the directory containing the PyFR source code, and then install PyFR:
cd /scratch/user/sambit98/github/PyFR/
python3 setup.py develop
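To confirm that the develop-mode installation is the one visible from the virtual environment, a minimal check is:
# The reported path should point into the PyFR source tree
python3 -c "import pyfr; print(pyfr.__file__)"
pyfr --help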
Congratulations! You've successfully set up PyFR on the FASTER and Grace cluster computing facilities. You should now be able to use PyFR for your computational fluid dynamics simulations.
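From here, a typical PyFR workflow converts a mesh into PyFR's native format, optionally partitions it for multiple MPI ranks, and runs the solver. The sketch below is a generic illustration rather than part of the cluster setup; mesh.msh, mesh.pyfrm, config.ini, the backend, and the rank count are placeholders, so consult the PyFR documentation and your cluster's batch system for the exact invocation:
# Convert a Gmsh mesh into PyFR's native format
pyfr import mesh.msh mesh.pyfrm
# Split the mesh into 4 partitions for a 4-rank MPI run
pyfr partition 4 mesh.pyfrm .
# Run the solver on 4 MPI ranks with the OpenMP (CPU) backend
mpirun -np 4 pyfr run -b openmp -p mesh.pyfrm config.ini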
Benchmarking with a cross-platform open-source flow solver, PyFR
What is PyFR and how does it solve fluid flow problems?
PyFR is an open-source Computational Fluid Dynamics (CFD) solver that is based on Python and employs the high-order Flux Reconstruction technique. It effectively solves fluid flow problems by utilizing streaming architectures, making it suitable for complex fluid dynamics simulations.
How does PyFR achieve scalability on clusters with CPUs and GPUs?
PyFR achieves scalability by leveraging distributed memory parallelism through the Message Passing Interface (MPI). It implements persistent, non-blocking MPI requests using point-to-point (P2P) communication and organizes kernel calls to enable local computations while exchanging ghost states. This design approach allows PyFR to efficiently operate on clusters with heterogeneous architectures, combining CPUs and GPUs.
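In practice, this means the same solver invocation scales out simply by adding MPI ranks and pointing each rank at an accelerator backend; a schematic multi-GPU launch might look like the following (rank count, backend, and file names are illustrative, and the exact launcher depends on your cluster):
# 8 MPI ranks, each driving one GPU through the CUDA backend
mpirun -np 8 pyfr run -b cuda -p mesh.pyfrm config.ini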
Why is PyFR valuable for benchmarking clusters?
PyFR's exceptional performance has been recognized by its selection as a finalist for the ACM Gordon Bell Prize for High-Performance Computing. It demonstrates strong scaling on unstructured grids by effectively utilizing low-latency inter-GPU communication, and it has been benchmarked with up to 18,000 NVIDIA K20X GPUs on Titan, showcasing its efficiency in handling large-scale simulations.
AHPCC Documentation
This link points to the documentation website for using AHPCC.
MPI Resources
A workshop for beginner and intermediate MPI students that includes helpful exercises, plus the Open MPI documentation.