
Newton HPC Systems

The Newton HPC Program operates several computing systems with different capabilities and characteristics. All Newton compute clusters are accessible through the login node login.newton.utk.edu, and all share the same operating system, software environment, and job queue system.
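For example, a standard SSH client reaches the login node as shown below; the username is a placeholder for your own Newton account name. Jobs for any of the clusters are then submitted from that node through the shared queue system.

    ssh username@login.newton.utk.edu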

Sigma Cluster

The Sigma cluster is the largest and most powerful Newton computational resource: a 108-node Lenovo NeXtScale cluster with Intel Haswell CPUs connected by FDR InfiniBand. The cluster is rated at a peak performance of 112 TFLOPS.

Monster

Monster is a shared-memory SMP system with Intel Broadwell CPUs, 1 TB of RAM, and support for 48 CPU threads. It is designed to facilitate jobs that require a very large in-RAM data set in a single shared memory space.
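As an illustrative sketch (not site-specific code), the C/OpenMP program below relies on exactly this property: every thread reads and writes one large in-RAM array through a single shared address space, with no message passing between nodes. The array size and compiler invocation are hypothetical examples.

    #include <stdio.h>
    #include <stdlib.h>
    #include <omp.h>

    int main(void) {
        /* Hypothetical sizing: one array held entirely in RAM.
           A 1 TB node can scale this far beyond what fits on a
           typical 24-128 GB cluster node. */
        size_t n = 1UL << 30;                   /* 2^30 doubles = 8 GB */
        double *data = malloc(n * sizeof *data);
        if (data == NULL) {
            perror("malloc");
            return 1;
        }

        double sum = 0.0;
        /* Every thread works on a slice of the same array directly;
           nothing is copied or sent over an interconnect. */
        #pragma omp parallel for reduction(+:sum)
        for (size_t i = 0; i < n; i++) {
            data[i] = (double)i;
            sum += data[i];
        }

        printf("sum = %.0f using up to %d threads\n",
               sum, omp_get_max_threads());
        free(data);
        return 0;
    }

Built with a command such as gcc -fopenmp, the same binary can use as many threads as the node offers (for example, OMP_NUM_THREADS=48 on Monster).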

Rho Cluster

Rho is a GPGPU cluster with 48 compute nodes, each hosting an Nvidia Tesla M2090 GPGPU accelerator card. It has an accelerated peak performance of 80 TFLOPS.

Chi Cluster

Chi is a 1728 CPU-core AMD compute cluster based on QDR InfiniBand.

Phi Cluster

Phi is an 864 CPU-core Intel compute cluster based on QDR InfiniBand.

Cluster Summary

Cluster  CPU Model                         Nodes  Cores/Node  RAM/Node  Total Cores  Total RAM  Interconnect                GPGPU               Status
Sigma    Intel Xeon E5-2680v3              108    24          128 GB    2592         13824 GB   FDR InfiniBand (56 Gbit/s)  -                   Online
Rho      Intel Xeon E5-2670                48     16          32 GB     768          1536 GB    QDR InfiniBand              Nvidia Tesla M2090  Online
Chi      AMD Opteron 6180 SE               36     48          96 GB     1728         3456 GB    QDR InfiniBand              -                   Offline
Phi      Intel Xeon X5660                  72     12          24 GB     864          1728 GB    QDR InfiniBand              -                   Online
Monster  Intel Xeon E5-2687W v4 (3.0 GHz)  1      48          1 TB      48           1 TB       Ethernet                    -                   Online