
Computing Systems and Services

Campus Champion Allocation

A Campus Champion is an employee of, or affiliated with, a college, university or other institution engaged in research, whose role includes helping their institution's researchers, educators and scholars (faculty, postdocs, graduate students, undergraduates and professionals) with computing-intensive and data-intensive research, education, scholarship and creative activity. This includes, but is not limited to, helping them use advanced digital capabilities to improve, grow and accelerate that work.

Details and description

Type

HPC

Access

Allocation awarded by your Campus Champion

Allocation period

Open continuously

Hardware/storage

Allows access to the XSEDE ecosystem


Delta Illinois

Delta is a computing and data resource that balances cutting-edge graphics processor and CPU architectures to shape the future of advanced research computing. Made possible by the National Science Foundation, Delta is the most performant GPU-computing resource in NSF's portfolio. University of Illinois researchers can receive allocations on Delta.

Details and description

Type

HPC

Access

Allocation awarded by NCSA

Allocation period

Available continuously

Hardware/storage

    • 124 CPU nodes
    • 100 quad A100 GPU nodes
    • 100 quad A40 GPU nodes
    • 5 eight-way A100 GPU nodes
    • 1 MI100 GPU node
    • 8 utility nodes providing login access, data transfer capability and other services
    • 100 Gb/s HPE Slingshot network fabric
    • 7 PB of disk-based Lustre storage
    • 3 PB of flash-based storage for data-intensive workloads

Delta ACCESS

Delta is a computing and data resource that balances cutting-edge graphics processor and CPU architectures to shape the future of advanced research computing. Made possible by the National Science Foundation, Delta is the most performant GPU-computing resource in NSF's portfolio. Most Delta allocations are awarded through ACCESS.

Details and description

Type

HPC

Access

Allocation awarded by ACCESS

Allocation period

ACCESS quarterly allocation
(Select tiers are available continuously)

Hardware/storage

    • 124 CPU nodes
    • 100 quad A100 GPU nodes
    • 100 quad A40 GPU nodes
    • 5 eight-way A100 GPU nodes
    • 1 MI100 GPU node
    • 8 utility nodes providing login access, data transfer capability and other services
    • 100 Gb/s HPE Slingshot network fabric
    • 7 PB of disk-based Lustre storage
    • 3 PB of flash-based storage for data-intensive workloads

Granite

Granite is NCSA's Tape Archive system, closely integrated with Taiga, providing users with a place to store and easily access longer-term archive datasets. Access to the tape system is available directly via tools such as scp, Globus and S3. Data written to Granite is replicated to two tapes for mirrored protection in case of tape failure. Granite can be used for storing infrequently accessed data, disaster recovery, archival datasets and similar workloads.
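
For illustration, a minimal sketch of S3-style access from Python using boto3 appears below. The endpoint URL, bucket name, credentials and object keys are placeholders rather than actual Granite values; consult the Granite documentation for the real S3 endpoint and credential setup.

    # Minimal sketch of S3-style access to a tape-backed archive such as Granite.
    # The endpoint URL, bucket, keys and credentials below are placeholders.
    import boto3

    s3 = boto3.client(
        "s3",
        endpoint_url="https://granite-s3.example.ncsa.illinois.edu",  # placeholder endpoint
        aws_access_key_id="YOUR_ACCESS_KEY",
        aws_secret_access_key="YOUR_SECRET_KEY",
    )

    # Upload a local archive to the tape-backed bucket.
    s3.upload_file("results_2024.tar.gz", "my-archive-bucket", "project/results_2024.tar.gz")

    # Later, stage the object back from tape to local disk.
    s3.download_file("my-archive-bucket", "project/results_2024.tar.gz", "results_2024.tar.gz")

Because objects ultimately reside on tape, retrievals can take noticeably longer than from a disk-backed object store.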

Details and description

Type

Storage

Access

    • Internal Rate: $16/TB/Year
    • External Rate: Contact Support

Allocation period

Open continuously

Hardware/storage

    • 19-frame Spectra TFinity library
    • 40 PB of replicated capacity on TS1140 (JAG 7) media
    • Managed by Versity's ScoutFS/ScoutAM products

HAL Cluster

HAL is an efficient, purpose-built system for distributed deep learning. It combines NVIDIA GPUs, a high-speed interconnect and high-performance NVMe SSD-based storage to provide a reliable and robust platform for developing and training deep neural networks. The system is funded by the NSF MRI program to provide University of Illinois researchers with a computational resource for machine learning. University of Illinois researchers can obtain an allocation on HAL.
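
As a rough illustration of the distributed training workloads HAL targets, below is a minimal PyTorch DistributedDataParallel sketch. The model, data and launch command are toy placeholders; HAL's actual software modules, scheduler and recommended launch method may differ, so treat this as a sketch rather than HAL-specific guidance.

    # Minimal multi-GPU training sketch using PyTorch DistributedDataParallel (DDP).
    # Example launch (one process per GPU on a 4-GPU node): torchrun --nproc_per_node=4 train_ddp.py
    # The model and data are toy placeholders, not a HAL-specific recipe.
    import os
    import torch
    import torch.distributed as dist
    import torch.nn as nn
    from torch.nn.parallel import DistributedDataParallel as DDP

    def main():
        # torchrun sets RANK, LOCAL_RANK and WORLD_SIZE in the environment.
        dist.init_process_group(backend="nccl")
        local_rank = int(os.environ["LOCAL_RANK"])
        torch.cuda.set_device(local_rank)
        device = f"cuda:{local_rank}"

        model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10)).to(device)
        model = DDP(model, device_ids=[local_rank])
        optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
        loss_fn = nn.CrossEntropyLoss()

        for step in range(100):
            # Random stand-in batch; replace with a real DataLoader + DistributedSampler.
            x = torch.randn(64, 1024, device=device)
            y = torch.randint(0, 10, (64,), device=device)
            optimizer.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()   # gradients are all-reduced across GPUs here
            optimizer.step()

        dist.destroy_process_group()

    if __name__ == "__main__":
        main()

With the torchrun invocation shown in the comment, one training process would be started per GPU on a four-GPU node.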

Details and description

Type

HPC-AI

Access

Allocation awarded by CAII

Allocation period

On rolling basis, for the duration of the project

Hardware

    • 16 IBM AC922 nodes
      • IBM 8335-GTH AC922 server
        • 2x 20-core IBM POWER9 CPU @ 2.4 GHz
        • 256 GB DDR4
        • 4x NVIDIA V100 GPUs
          • 16 GB HBM2 each
      • 2-port EDR 100 Gb/s IB ConnectX-5 adapter
    • EDR IB interconnect
    • 1 IBM 9006-22P storage node
      • 72 TB hardware RAID array
      • NFS
    • 3 DDN GS400NVE flash arrays
      • 360 TB usable, NVMe SSD-based storage
      • Spectrum Scale file system
    • 1 NVIDIA DGX A100 node
      • 2x 64-core AMD Rome 7742 CPU @ 3.4 GHz
      • 1 TB DDR4
      • 8x NVIDIA A100 GPUs
        • 40 GB HBM2 each
      • 15 TB NVMe storage

Highly Optimized Logical Learning Instrument (HOLL-I)

HOLL-I is a new service at NCSA that offers public access to extreme-scale machine learning capability, complementing other NCSA resources such as Delta and HAL. Leveraging the power of a Cerebras CS-2 Wafer Scale Engine, with access to NCSA's shared project storage on Taiga, HOLL-I can perform large machine-learning tasks in short order. HOLL-I's unique architecture offers higher-speed processing than anything currently available on campus.

Details and description

Type

Extreme Scale Machine Learning

Access

Allocations through NCSA-managed service fees (the NCSA Director has a discretionary allocation)

Allocation

Service fees cover usage for the upcoming quarter and are charged based on actual usage

Hardware

    • 1 CS-2 accelerator
    • 9 Dell R6515 job-prep and I/O nodes
      • 64 AMD EPYC cores
      • 256 GB DDR4 RAM
      • 4 TB local NVMe
    • 1 Arista 100 GbE switch with 64 ports

Storage

    • 192 TB local ZFS storage for scratch
    • Access to Taiga, NCSA's shared Lustre project space
      • Taiga storage for project data pursuant to Taiga guidelines
    • 40 GB RAM on the accelerator, loading/dumping at 120 GB/s

Hydro

Hydro is a Unix-based cluster computing resource designed with a focus on supporting research and development related to national security and preparedness, as well as research in other domains. Hydro is made available by the New Frontiers Initiative (NFI). It combines NVIDIA GPUs, a high-speed interconnect and high-performance NVMe SSD-based storage to provide a reliable and robust platform. Hydro is available to allocated NFI projects and Illinois Computes projects.

Details and description

Type

HPC

Access

Allocations through NFI and Illinois Computes projects.

Hardware

2 login nodes and 42 compute nodes: 384 GB of memory per node, 40 Gb/s WAN bandwidth

    • 2 login nodes and 33 compute nodes: Dell PowerEdge R720 with dual-socket Intel Xeon E5-2690 CPUs (8-core, Sandy Bridge)
    • 7 compute nodes: Dell PowerEdge R7525 with dual-socket AMD EPYC 7452 CPUs (32-core, Rome)
    • 2 compute nodes: Dell PowerEdge R7525 with dual-socket AMD EPYC 7453 CPUs (28-core, Milan)

Illinois Campus Cluster

The Illinois Campus Cluster provides access to computing and data storage resources and frees you from the hassle of administering your own compute cluster. Any individual, research team or campus unit can invest in compute nodes or storage disks, or pay a fee for on-demand use of compute cycles or storage space. Hardware and storage can be customized to the specific needs of individual research teams. The nodes listed below are what NCSA is able to allocate, though the full system is much larger.

Details and description

Type

HPC

Access

Cost to purchase nodes, storage, or usage on-demand

Allocation period

Open continuously

Hardware/storage

    • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (Intel Xeon E5-2670 v2 CPUs), Tesla K40M GPU
    • 8 nodes with: 64 GB memory, InfiniBand interconnect, 20 cores (Intel Xeon E5-2670 v2 CPUs), no GPU
    • 4 nodes with: 256 GB memory, InfiniBand interconnect, 24 cores (Intel Xeon E5-2690 v3 CPUs), no GPU

Illinois HTC Program

The High Throughput Computing (HTC) Pilot program is a collaborative, volunteer effort among Research IT, Engineering IT Shared Services and NCSA. The computing systems that comprise the HTC Pilot resource are retired compute nodes from the Illinois Campus Cluster Program (ICCP) or otherwise-idle workstations in Linux workstation labs.

Details and description

Type

High Throughput Computing (HTC)

Access

Allocation awarded by University of Illinois Urbana campus

Allocation period

Open continuously

Hardware/storage

300 compute nodes, each with 12 cores (Intel Xeon X5650 @ 2.67 GHz) and 24 GB of RAM. Of those, ~2 have 48 GB of RAM and ~1 has 96 GB of RAM.


Nightingale

Nightingale is a high-performance compute cluster for sensitive data. It accommodates projects requiring extra security, such as compliance with HIPAA and CUI policies. It is available for a fee to University of Illinois faculty, staff, students and their collaborators through desktop access and encrypted laptop access. NCSA experts manage the complex requirements surrounding sensitive data, taking the burden off the user so they can focus on their research.

Details and description

Type

HPC for sensitive data

Access

Cost varies by resource request. See Nightingale Overview and Costs for more detail

Batch Computing

    • 16 dual 64-core AMD systems with 1 TB of RAM
    • 2 dual-A100 compute nodes with 32-core AMDs and 512 GB of RAM

Interactive Compute Nodes

    • 4 interactive compute/login nodes with dual 64-core AMDs and 512 GB of RAM
    • 6 interactive nodes with 1 A100, dual 32-core AMDs and 256 GB of RAM
    • 5 interactive nodes with 1 A40, dual 32-core AMDs and 512 GB of RAM

Allocation period

Open continuously

Storage

    • 880 TB of high-speed parallel Lustre-based storage

Radiant

Radiant is a new private cloud-computing service operated by NCSA for the benefit of NCSA and UIUC faculty and staff.  Customers can purchase VMs, computing time in cores, storage of various types and public IPs for use with their VMs.

Details and description

Type

HPC

Access

Cost varies by the Radiant resource requested

Allocation period

Open continuously

Hardware/storage

    • 140 nodes
    • 3,360 cores
    • 35 TB of memory
    • 25 GbE/100 GbE backing network
    • 185 TB usable flash capacity
    • Access to NCSA's 10+ PB (and growing) center-wide storage infrastructure/archive

Research IT Software Collaborative Services

Hands-on programming support for performance analysis, software optimization, efficient use of accelerators, I/O optimization, data analytics, visualization, and use of research computing resources by science gateways and workflows.

Details and description

Type

Support

Access

Allocation awarded by campus Research IT

Allocation period

Open continuously


Taiga

Taiga is NCSA's global file system, able to integrate with all non-HIPAA environments in the National Petascale Computing Facility. Built with Scalable Storage Units (SSUs) specified by NCSA engineers with DDN, it provides a center-wide, single-namespace file system available across multiple platforms at NCSA. This allows researchers to access their data on multiple systems simultaneously, improving their ability to run science pipelines across batch, cloud and container resources. Taiga is also well integrated with the Granite Tape Archive, allowing users to readily stage data out to their tape allocation for long-term, cold storage.
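
As an illustration, staging a directory from Taiga out to a Granite tape allocation with the Globus Python SDK might look like the sketch below. The collection UUIDs, paths and access token are placeholders, not the real Taiga/Granite endpoint IDs; obtain the actual values from NCSA documentation and globus.org.

    # Sketch of staging a directory from a Taiga-backed Globus collection to a
    # Granite-backed collection using the Globus Python SDK. The UUIDs, paths
    # and token below are placeholders only.
    import globus_sdk

    TRANSFER_TOKEN = "..."                                       # Globus transfer access token
    TAIGA_COLLECTION = "00000000-0000-0000-0000-000000000000"    # placeholder UUID
    GRANITE_COLLECTION = "11111111-1111-1111-1111-111111111111"  # placeholder UUID

    tc = globus_sdk.TransferClient(
        authorizer=globus_sdk.AccessTokenAuthorizer(TRANSFER_TOKEN)
    )

    tdata = globus_sdk.TransferData(
        tc, TAIGA_COLLECTION, GRANITE_COLLECTION, label="stage results to tape"
    )
    tdata.add_item("/taiga/nsf/myproject/results/", "/archive/myproject/results/", recursive=True)

    task = tc.submit_transfer(tdata)
    print("Submitted Globus transfer, task id:", task["task_id"])

Transfer status can then be monitored in the Globus web app or polled with tc.get_task().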

Details and description

Type

Storage

Access

    • Internal Rate: $30.48/TB/Year
    • External Rate: Contact Support

Allocation period

Open continuously

Hardware/storage

    • 19 PB of hybrid NVMe/HDD storage based on four Taiga SSUs
    • Backed by HDR InfiniBand
    • Running DDN's Lustre ExaScaler appliance

vForge

vForge is a high-performance batch computing cluster built on NCSA’s Radiant cloud computing environment and Taiga center-wide storage system. vForge provides both CPU and GPU nodes and can be dynamically scaled to meet changing computational demands.

Details and description

Normal Queue

    • 8 nodes
    • Intel® Xeon® E5 v3 (Haswell) processors
    • 24-core/48-thread vCPUs
    • 192 GB memory

GPU Queue

    • 1 node
    • 1 A100-SXM4-80GB
    • 10 vCPU
    • 256 GB memory

Note: vForge is a cloud-based cluster and resources can change dynamically based on demand.

