Knowledge Base Resources
Contributed by cyberinfrastructure professionals (researchers, research computing facilitators, research software engineers and HPC system administrators), these resources are shared through the ConnectCI community platform. Add resources you find helpful!
Chameleon
Chameleon is an NSF-funded testbed system for Computer Science experimentation. It is designed to be deeply reconfigurable, with a wide variety of capabilities for research in systems, networking, distributed and cluster computing, and security.
Slurm User Group Mailing List
ACCESS Support Portal
MATLAB bioinformatics toolbox
Bioinformatics Toolbox provides algorithms and apps for Next Generation Sequencing (NGS), microarray analysis, mass spectrometry, and gene ontology. Using toolbox functions, you can read genomic and proteomic data from standard file formats such as SAM, FASTA, CEL, and CDF, as well as from online databases such as the NCBI Gene Expression Omnibus and GenBank.
TensorFlow for Deep Neural Networks
TensorFlow is a powerful framework for deep learning, developed by Google. This resource covers TensorFlow's Python package, which is easy to use and can be used to train very powerful models.
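As a rough sketch of the kind of workflow the package supports (not code from the linked resource), the example below trains a small classifier on the MNIST digits dataset with TensorFlow's Keras API:

import tensorflow as tf

# Load the MNIST handwritten-digit dataset bundled with Keras.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixel values to [0, 1]

# A small fully connected network for 10-class digit classification.
model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

model.fit(x_train, y_train, epochs=5)
print(model.evaluate(x_test, y_test))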
Info about the retirement of the R GIS packages rgdal, rgeos, and maptools in 2023
The R GIS packages "rgdal", "rgeos", and "maptools" are set to be archived and no longer supported by the end of 2023. Many other R GIS packages are built on top of these packages, including "sp" and "raster". The package recommended as a replacement for "sp" is "sf", and the replacement for "raster" is "terra". Below are links to published articles regarding this transition, along with links to the documentation for the recommended replacement packages "sf" and "terra".
Master’s in Cybersecurity Degree Essentials
Offers comprehensive information on various master's degree options in cybersecurity, including program details, admission requirements, and career opportunities, helping students make informed decisions about pursuing an advanced degree in cybersecurity.
Campus Champions Home Page
Campus Champions foster a dynamic environment for a diverse community of research computing and data professionals sharing knowledge and experience in digital research infrastructure.
Jetstream2 Status
Jetstream2 makes cutting-edge high-performance computing and software easy to use for your research regardless of your project's scale, even if you have limited experience with supercomputing systems. Cloud-based and on-demand, the 24/7 system includes discipline-specific apps. You can even create virtual machines that look and feel like your lab workstation or home machine, with thousands of times the computing power.
MATLAB with other Programming Languages
MATLAB is a widely used tool for data analysis and other computational work. This tutorial walks through using MATLAB with other programming languages, including C, C++, Fortran, Java, and Python.
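As one hedged illustration of this kind of interoperability (a minimal sketch, not code from the tutorial), the MATLAB Engine API for Python lets a Python script start a MATLAB session and call MATLAB functions; it assumes a local MATLAB installation with the matlab.engine package available:

import matlab.engine

# Start a MATLAB session from Python (requires a local MATLAB installation).
eng = matlab.engine.start_matlab()

# Call built-in MATLAB functions directly from Python.
print(eng.sqrt(16.0))   # 4.0

# MATLAB matrices are returned as matlab.double objects.
print(eng.magic(3))

# Shut down the MATLAB session when finished.
eng.quit()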
HPCwire
HPCwire is a prominent news and information source for the HPC community. Their website offers articles, analysis, and reports on HPC technologies, applications, and industry trends.
Thrust resources
Thrust is a C++ template library included with CUDA that handles GPU parallelization for you. The Thrust tutorial is great for beginners, and the documentation is helpful for anyone using Thrust.
Trusted CI Resources Page
Very helpful list of external resources from Trusted CI
Practical Machine Learning with Python
This video series provides a holistic understanding of machine learning, covering the theory, application, and inner workings of supervised, unsupervised, and deep learning algorithms. It covers topics such as linear regression, K-Nearest Neighbors, Support Vector Machines (SVM), flat clustering, hierarchical clustering, and neural networks. The series explains the high-level intuition behind each algorithm and how it is meant to work, then applies the algorithms in code on real-world data sets using libraries such as scikit-learn.
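As a small, hedged illustration of the scikit-learn workflow the series walks through (not code from the series itself), the sketch below fits a K-Nearest Neighbors classifier to the bundled iris dataset:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

# Load a small example dataset that ships with scikit-learn.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# Fit a K-Nearest Neighbors classifier and report held-out accuracy.
clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))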
Long Tales of Science: A podcast about women in HPC
A series of interviews with women in the HPC community
Probabilistic Semantic Data Association for Collaborative Human-Robot Sensing
Humans cannot always be treated as oracles for collaborative sensing. Robots thus need to maintain beliefs over unknown world states when receiving semantic data from humans, as well as account for possible discrepancies between human-provided data and these beliefs. To this end, this paper introduces the problem of semantic data association (SDA) in relation to conventional data association problems for sensor fusion. It then develops a novel probabilistic semantic data association (PSDA) algorithm to rigorously address SDA in general settings. Simulations of a multi-object search task show that PSDA enables robust collaborative state estimation under a wide range of conditions.
Expanse Home Page
Expanse at SDSC is a cluster designed by Dell and SDSC that delivers 5.16 peak petaflops and offers Composable Systems and Cloud Bursting.
Examples of Thrust code for GPU Parallelization
Some examples of Thrust code. To compile them, download the CUDA compiler from NVIDIA; this code was tested with CUDA 9.2 but is likely compatible with other versions. Before compiling, change the extension from thrust_ex.txt to thrust_ex.cu. Any device (GPU) code that is run through a Thrust transform is automatically parallelized on the GPU; host (CPU) code is not. Thrust code can also be compiled to run on a CPU for practice.
Weka
Weka is a collection of machine learning algorithms for data mining tasks. It contains tools for data preparation, classification, regression, clustering, association rules mining, and visualization.
Why 'N How: Martinos Center for Biomedical Imaging
The Why & How seminar series is designed to introduce research assistants, graduate students, and postdoctoral and clinical fellows – really, anyone who is interested – to the many tools used in medical imaging. These include software tools and most of the major imaging modalities wielded by investigators (MRI, PET, EEG, MEG, optical, TMS and others). As the name of the series suggests, the talks cover both the reasons researchers might need a particular tool and the nuts and bolts of how to apply it. Videos of the overview talks are available to watch.
RMACC Systems Administrator Workshop Slides
A compilation of the slides from this year's RMACC Sys Admin Workshop.
RMACC Sys Admin Workshop Schedule:
Tuesday
12:00 PM Sign-in
1:00 PM Introductions
1:30 PM Lightning Talk - HPC Survival guide
2:00 PM Node Management - Scott Serr
2:30 PM Lightning Talk - Warewulf
3:00 PM Urgent HPC - Coltran Hophan-Nichols and Alexander Salois
Wednesday
9:00 AM Breakfast
10:00 AM Round table Sites - BYU, INL, UMT, ASU, MSU
11:00 AM Open OnDemand setup - Dean Anderson
11:30 AM Lightning talk - Long term hardware support
12:00 PM Lunch
1:00 PM HPC Security - Matt Bidwell
2:00 PM Lightning talk- Security
2:30 PM ACCESS resources - Couso
3:00 PM Easybuild tutorial - Alexander Salois
3:30 PM General Q & A
Thursday
9:00 AM Breakfast
10:00 AM Lightning Talk- Containers and Virtual Machines
11:00 AM University of Montana - Hellgate Site Tour
11:30 AM Closing Remarks
Ask.CI Q&A Platform for Research Computing
Open Storage Network
The Open Storage Network, a national resource available through the XSEDE resource allocation system, is a high-quality, sustainable, distributed storage cloud for the research community.
Gaussian 16
Gaussian 16 is a computational chemistry package that is used in predicting molecular properties and understanding molecular behavior at a quantum mechanical level.
DAGMan for orchestrating complex workflows on HTC resources (High Throughput Computing)
DAGMan (Directed Acyclic Graph Manager) is a meta-scheduler for HTCondor. It manages dependencies between jobs at a higher level than the HTCondor Scheduler.
It is a workflow management system developed by the High-Throughput Computing (HTC) community, specifically for managing large-scale scientific computations and data analysis tasks. It enables users to define complex workflows as directed acyclic graphs (DAGs). In a DAG, nodes represent individual computational tasks, and the directed edges represent dependencies between the tasks. DAGMan manages the execution of these tasks and ensures that they are executed in the correct order based on their dependencies.
The primary purpose of DAGMan is to simplify the management of large-scale computations that consist of numerous interdependent tasks. By defining the dependencies between tasks in a DAG, users can easily express the order of execution and allow DAGMan to handle the scheduling and coordination of the tasks. This simplifies the development and execution of complex scientific workflows, making it easier to manage and track the progress of computations.
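As a hedged illustration of the idea (the file and submit-description names here are hypothetical, not taken from DAGMan's documentation), a minimal DAG description file for a three-node workflow might look like this:

# diamond.dag - each JOB line names a node and its HTCondor submit file.
JOB  A  a.sub
JOB  B  b.sub
JOB  C  c.sub

# B and C depend on A: DAGMan releases them only after A completes successfully.
PARENT A CHILD B C

Submitting such a file with condor_submit_dag hands the ordering problem to DAGMan: it queues A first and sends B and C to the HTCondor scheduler once A finishes.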