Knowledge Base Resources
Contributed by cyberinfrastructure professionals (researchers, research computing facilitators, research software engineers, and HPC system administrators), these resources are shared through the ConnectCI community platform. Add resources you find helpful!
ACCESS HPC Workshop Series
Monthly workshops on a variety of HPC topics, sponsored by ACCESS and organized by the Pittsburgh Supercomputing Center (PSC). Each workshop is telecast to multiple satellite sites, and workshop materials are archived.
Introduction to GPU/Parallel Programming using OpenACC
An introduction to the basics of OpenACC, a directive-based programming model for offloading loops and code regions in C, C++, and Fortran to GPUs and other accelerators.
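For orientation, here is a minimal sketch of what OpenACC code looks like (our own illustration, not material from the tutorial): a single directive offloads a vector-addition loop to an accelerator. The array names and problem size are arbitrary.

```c
/* Illustrative OpenACC example: vector addition offloaded to an accelerator.
 * Compile with an OpenACC-capable compiler, e.g. nvc -acc or gcc -fopenacc. */
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    const int n = 1 << 20;
    float *a = malloc(n * sizeof *a);
    float *b = malloc(n * sizeof *b);
    float *c = malloc(n * sizeof *c);

    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    /* Copy a and b to the device, run the loop there, copy c back. */
    #pragma acc parallel loop copyin(a[0:n], b[0:n]) copyout(c[0:n])
    for (int i = 0; i < n; i++)
        c[i] = a[i] + b[i];

    printf("c[0] = %f\n", c[0]);
    free(a); free(b); free(c);
    return 0;
}
```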
Introduction to Parallel Programming for GPUs with CUDA
This tutorial provides a comprehensive introduction to CUDA programming, focusing on essential concepts such as CUDA thread hierarchy, data parallel programming, host-device heterogeneous programming model, CUDA kernel syntax, GPU memory hierarchy, and memory optimization techniques like global memory coalescing and shared memory bank conflicts. Aimed at researchers, students, and practitioners, the tutorial equips participants with the skills needed to leverage GPU acceleration for scalable computation, particularly in the context of AI.
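To make a few of those concepts concrete, here is a minimal CUDA C sketch (an illustration of the general pattern, not material from the tutorial) showing kernel syntax, the block/thread hierarchy, and coalesced global-memory access; the grid and block sizes are arbitrary choices.

```c
/* Minimal CUDA C illustration of kernel syntax and thread indexing.
 * Compile with: nvcc vecadd.cu -o vecadd */
#include <cuda_runtime.h>
#include <stdio.h>

__global__ void vecAdd(const float *a, const float *b, float *c, int n)
{
    /* Global thread index built from the block/thread hierarchy. */
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        c[i] = a[i] + b[i];  /* adjacent threads touch adjacent elements: coalesced access */
}

int main(void)
{
    const int n = 1 << 20;
    size_t bytes = n * sizeof(float);
    float *a, *b, *c;

    /* Unified (managed) memory keeps the host-device bookkeeping short. */
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; i++) { a[i] = 1.0f; b[i] = 2.0f; }

    int block = 256;
    int grid = (n + block - 1) / block;
    vecAdd<<<grid, block>>>(a, b, c, n);
    cudaDeviceSynchronize();

    printf("c[0] = %f\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return 0;
}
```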
MATLAB with other Programming Languages
MATLAB is a widely used tool for data analysis and other computational work. This tutorial walks through using MATLAB together with other programming languages, including C, C++, Fortran, Java, and Python.
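As a small, hedged illustration of the C route (not taken from the tutorial; the function name is made up), a MEX file lets MATLAB call compiled C code directly:

```c
/* Hypothetical MEX example: element-wise doubling of a vector, callable from
 * MATLAB as y = double_it(x) after building with: mex double_it.c */
#include "mex.h"

void mexFunction(int nlhs, mxArray *plhs[], int nrhs, const mxArray *prhs[])
{
    if (nrhs != 1 || !mxIsDouble(prhs[0]))
        mexErrMsgTxt("Expected one double-precision input vector.");

    mwSize n = mxGetNumberOfElements(prhs[0]);
    const double *x = mxGetPr(prhs[0]);

    plhs[0] = mxCreateDoubleMatrix(1, n, mxREAL);
    double *y = mxGetPr(plhs[0]);

    for (mwSize i = 0; i < n; i++)
        y[i] = 2.0 * x[i];  /* the actual computation happens in C */
}
```

After building it in MATLAB with mex double_it.c, calling double_it(1:5) returns the input vector with every element doubled.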
PetIGA, an open-source code for isogeometric analysis
This documentation provides an overview of PetIGA, an open-source framework for solving multiphysics problems with isogeometric analysis. It includes simple tutorials and examples to help users get started with the framework and apply it to real-world problems in continuum mechanics, including solid and fluid mechanics.
OpenMP Tutorial
OpenMP (Open Multi-Processing) is an API that supports multi-platform shared-memory multiprocessing programming in C, C++, and Fortran on many platforms, instruction-set architectures and operating systems, including Solaris, AIX, FreeBSD, HP-UX, Linux, macOS, and Windows. It consists of a set of compiler directives, library routines, and environment variables that influence run-time behavior.
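As a minimal sketch of those pieces in C (our own example, not from the tutorial): one compiler directive parallelizes a loop, a library routine reports the thread count, and the OMP_NUM_THREADS environment variable influences run-time behavior.

```c
/* Minimal OpenMP example: parallel loop with a reduction.
 * Compile with: gcc -fopenmp sum.c -o sum
 * Thread count can be set at run time, e.g. OMP_NUM_THREADS=8 ./sum */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    const int n = 1000000;
    double sum = 0.0;

    /* Compiler directive: distribute iterations across threads,
     * combining the per-thread partial sums at the end. */
    #pragma omp parallel for reduction(+:sum)
    for (int i = 0; i < n; i++)
        sum += 1.0 / (i + 1);

    printf("threads available: %d, sum = %f\n", omp_get_max_threads(), sum);
    return 0;
}
```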
C Programming
"These notes are part of the UW Experimental College course on Introductory C Programming. They are based on notes prepared (beginning in Spring, 1995) to supplement the book The C Programming Language, by Brian Kernighan and Dennis Ritchie, or K&R as the book and its authors are affectionately known. (The second edition was published in 1988 by Prentice-Hall, ISBN 0-13-110362-8.) These notes are now (as of Winter, 1995-6) intended to be stand-alone, although the sections are still cross-referenced to those of K&R, for the reader who wants to pursue a more in-depth exposition." C is a low-level programming language that provides a deep understanding of how a computer's memory and hardware work. This knowledge can be valuable when optimizing apps for performance or when dealing with resource-constrained environments.C is often used as the foundation for creating cross-platform libraries and frameworks. Learning C can allow you to develop libraries that can be used across different platforms, including iOS, Android, and desktop environments.
CUDA Toolkit Documentation
If you are working with GPUs in HPC, the NVIDIA CUDA Toolkit is essential. The CUDA Toolkit documentation, including programming guides and API references, is available at the linked website.
Introduction to OpenMP
OpenMP (Open Multi-Processing) is an API designed to simplify the integration of parallelism in software development, particularly for applications running on multi-core processors and shared-memory systems. This resource explains what OpenMP is and how to work with it. OpenMP provides a straightforward way to express parallelism in code through pragma directives, making it easy to create parallel regions, parallelize loops, and define critical sections. Its key benefits are ease of use, automatic thread management, and portability across compilers and platforms. For application development, especially mobile or desktop applications, OpenMP can improve performance by exploiting modern multi-core processors: parallelizing computationally intensive tasks such as image processing, data analysis, or simulation lets applications run faster and more efficiently, providing a smoother user experience while taking full advantage of the available hardware. OpenMP's scalability also allows applications to adapt to different hardware configurations, making it a valuable tool for developers aiming to optimize their software for a range of devices and platforms.
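As a brief sketch of the constructs mentioned above (our own example, with arbitrary names), the following C program opens a parallel region and uses a critical section to serialize updates to a shared counter:

```c
/* Sketch of a parallel region with a critical section.
 * Compile with: gcc -fopenmp region.c -o region */
#include <omp.h>
#include <stdio.h>

int main(void)
{
    int counter = 0;

    /* Parallel region: the enclosed block runs once per thread. */
    #pragma omp parallel
    {
        int id = omp_get_thread_num();

        /* Critical section: only one thread updates the shared counter at a time. */
        #pragma omp critical
        {
            counter++;
            printf("hello from thread %d\n", id);
        }
    }

    printf("threads that checked in: %d\n", counter);
    return 0;
}
```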