Laguna Resource Overview

Last updated September 12, 2024

Laguna is a state-of-the-art system with 1 shared login node, 16 compute nodes, and 8 GPU nodes available to researchers.

Laguna is a shared resource, so limits are in place on the size and duration of jobs to ensure that everyone has a chance to run them. For details on the limits, see Running Jobs.

0.0.1 Partitions and compute nodes

There are two Slurm partitions available on Laguna, each with a separate job queue. These are general-use partitions available to all researchers. The table below describes the intended purpose of each partition (an example batch script follows the table):

Partition   Purpose
compute     Serial and parallel jobs (single node or multiple nodes)
gpu         Jobs requiring GPU nodes
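
As an illustration, the following is a minimal sketch of a batch script that targets the gpu partition. The job name, resource amounts, time limit, and GRES string are assumptions for the example rather than site-mandated values; see Running Jobs for the actual limits.

    #!/bin/bash
    #SBATCH --job-name=example     # hypothetical job name
    #SBATCH --partition=gpu        # one of: compute, gpu
    #SBATCH --nodes=1
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=16     # illustrative CPU count
    #SBATCH --gres=gpu:1           # request 1 of the node's 2 GPUs (assumed GRES name)
    #SBATCH --time=01:00:00        # wall-clock limit, subject to partition limits

    ./my_gpu_program               # hypothetical executable

Submit the script with sbatch and monitor it with squeue -u $USER. To use the compute partition instead, change the partition name and drop the --gres line.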

Each partition has a different mix of compute nodes. The table below describes the available nodes by partition. Each node typically has two sockets, each with one multi-core processor, and the same number of cores per processor. In this table, the CPUs/node column counts logical CPUs, where 1 logical CPU = 1 core = 1 thread.

Partition   CPU model   CPU frequency   CPUs/node   GPU model   GPUs/node   Memory/node   Nodes
compute     epyc-9554   3.75 GHz        128         --          --          365 GB        16
gpu         epyc-9354   3.25 GHz        64          L40S        2           735 GB        8
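
If in doubt, you can also query Slurm for the live node configuration rather than relying on this table. The sketch below uses standard sinfo format fields (partition, node count, CPUs per node, memory per node, and generic resources); the exact output layout depends on the site's Slurm configuration.

    sinfo --partition=compute,gpu --format="%P %D %c %m %G"
    # %P = partition, %D = node count, %c = CPUs per node,
    # %m = memory per node (MB), %G = generic resources (GPUs)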

There are a few commands you can use for more detailed node information. The lscpu command reports CPU details, and on nodes with GPUs the nvidia-smi command and its various options report GPU details. After module load gcc/13.3.0 hwloc, use the lstopo command to view a node’s topology.
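
For example, one way to inspect a GPU node interactively is sketched below. The srun options are illustrative assumptions, and the module versions are those mentioned above.

    # start an interactive shell on a GPU node (options are an assumed example)
    srun --partition=gpu --gres=gpu:1 --time=00:30:00 --pty bash

    lscpu                          # CPU model, socket/core/thread counts
    nvidia-smi                     # GPU model, memory, driver version
    module load gcc/13.3.0 hwloc
    lstopo                         # node topology (sockets, cores, caches, GPUs)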

0.0.2 GPU specifications

The following is a summary table for the GPU specifications:

GPU model:                             L40S
Partitions:                            gpu
Architecture:                          ada
Memory:                                48 GB
Memory bandwidth:                      864 GB/s
Base clock speed:                      1110 MHz
CUDA cores:                            18,176
Tensor cores:                          568
Single precision (FP32) performance:   91.6 TFLOPS
Double precision (FP64) performance:   1.43 TFLOPS
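
If you want to confirm these figures on a node you have been allocated, nvidia-smi can report them directly. The query below uses standard field names; the fields available depend on the installed driver, and the reported clock is the maximum (boost) SM clock rather than the base clock listed above.

    nvidia-smi --query-gpu=name,memory.total,clocks.max.sm --format=csv
    # reports GPU model, total memory, and maximum SM clock for each GPU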