
Neocortex

Neocortex is a highly innovative resource that targets the acceleration of AI-powered scientific discovery by greatly shortening the time required for deep learning training and by fostering tighter integration of deep learning with scientific workflows. The Cerebras WSE (Wafer Scale Engine) accelerators in Neocortex facilitate the development of more efficient algorithms for artificial intelligence and graph analytics.
 

Associated Resources

Neocortex is a highly innovative advanced computing system ideal for foundation and large language models. Built around promising specialized hardware technologies, it is designed to vastly accelerate large deep learning (DL) models and high-performance computing (HPC) research in pursuit of science, discovery, and societal good. Neocortex features two Cerebras CS-2 systems, served by an HPE Superdome Flex HPC server and the Bridges-2 filesystems. Each CS-2 system features a Cerebras WSE-2 (Wafer Scale Engine 2), the largest chip ever built, with 850,000 Sparse Linear Algebra Compute cores, 40 GB of on-chip SRAM, 20 PB/s aggregate memory bandwidth, and 220 Pb/s interconnect bandwidth. The HPE Superdome Flex (SDFlex) features 32 Intel Xeon Platinum 8280L CPUs with 28 cores (56 threads) each, 2.70-4.0 GHz, 38.5 MB cache, 24 TiB RAM, aggregate memory bandwidth of 4.5 TB/s, and 204.6 TB aggregate local storage capacity with 150 GB/s read bandwidth. The SDFlex can provide 1.2 Tb/s to each CS-2 system and 1.6 Tb/s from the Bridges-2 filesystems. Jobs are submitted via Slurm. The CS-2 systems can run customized TensorFlow and PyTorch containers, as well as programs written using the Cerebras SDK or the WSE Field Equation API.
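As a rough illustration of what these aggregate figures mean per core, the back-of-the-envelope arithmetic below divides the WSE-2's on-chip SRAM and memory bandwidth across its 850,000 cores. This is a minimal sketch using only the numbers quoted above; the per-core values are derived estimates, not published specifications.

    # Illustrative per-core figures for the Cerebras WSE-2, derived from
    # the aggregate numbers quoted above (estimates only).
    CORES = 850_000             # Sparse Linear Algebra Compute cores
    SRAM_BYTES = 40e9           # 40 GB on-chip SRAM
    MEM_BW_BYTES_PER_S = 20e15  # 20 PB/s aggregate memory bandwidth

    sram_per_core_kb = SRAM_BYTES / CORES / 1e3
    bw_per_core_gb_s = MEM_BW_BYTES_PER_S / CORES / 1e9

    print(f"SRAM per core:      ~{sram_per_core_kb:.0f} KB")    # ~47 KB
    print(f"Bandwidth per core: ~{bw_per_core_gb_s:.1f} GB/s")  # ~23.5 GB/s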


Neocortex is a highly innovative resource that targets the acceleration of AI-powered scientific discovery by vastly shortening the time required for deep learning training, fostering tighter integration of deep learning with scientific workflows, and providing revolutionary new hardware for the development of more efficient algorithms for artificial intelligence and high-performance computing.

The HPE Superdome Flex (SDFlex) features 32 Intel Xeon Platinum 8280L CPUs with 28 cores (56 threads) each, 2.70-4.0 GHz, 38.5 MB cache, 24 TiB RAM, aggregate memory bandwidth of 4.5 TB/s, and 204.6 TB aggregate local storage capacity with 150 GB/s read bandwidth. The SDFlex can provide 1.2 Tb/s to each CS-2 system and 1.6 Tb/s from the Bridges-2 filesystems.

SDFlex service units (SUs) are calculated as chassis-hours. Each chassis has 112 CPU cores, so one SDFlex SU is equivalent to 112 core-hours.
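For example, a job that runs on 224 cores (two chassis) for 3 wall-clock hours would be charged 6 SUs. A minimal sketch of this conversion is below; it assumes, for illustration, that usage is rounded up to whole chassis, which is an assumption here rather than a statement of the actual PSC/ACCESS accounting policy.

    import math

    CORES_PER_CHASSIS = 112  # one SDFlex chassis = 112 CPU cores = 1 SU per hour

    def sdflex_sus(cores: int, hours: float) -> float:
        """Estimate SDFlex SUs (chassis-hours) for a job.

        Assumption (not from the source text): partial chassis are rounded
        up to whole chassis; check the allocation policy for actual rules.
        """
        chassis = math.ceil(cores / CORES_PER_CHASSIS)
        return chassis * hours

    print(sdflex_sus(224, 3))  # 224 cores (2 chassis) for 3 hours -> 6.0 SUs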


Coordinators