
The Newton HPC Program

The Newton HPC Program is a joint initiative among the Office of Research, the Office of Information Technology (OIT), and the departments of the University of Tennessee to establish and support a high-performance research computing environment, relieving researchers of the labor-intensive tasks associated with building and managing large computing installations. The program offers a flexible computing framework to facilitate work in a wide range of research areas, and the support staff leverages this standard framework to provide effective and efficient support for computationally intensive research.

The Newton cluster

The Newton high-performance compute cluster consists of over 300 Linux compute nodes with 4,200 x86_64 processors and 8,000 GBytes of RAM; 48 of the compute nodes feature Tesla GPU compute accelerators. The cluster uses InfiniBand networking to provide high-bandwidth (up to 56 Gbit/s), low-latency message passing between cluster nodes during parallel computations. The Newton Program also operates multiple high-memory compute nodes that provide 128 GBytes of RAM for calculations on large in-memory datasets. The cluster has a theoretical peak performance of 30 Tflop/s. All computing infrastructure is housed in a data center that is actively monitored 24 hours a day and managed by a team of professional system administrators.
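
To illustrate the kind of inter-node message passing the InfiniBand fabric carries, the sketch below is a minimal MPI program in C in which rank 0 sends an integer to rank 1. It is a generic example, not Newton-specific code: the compiler wrapper (mpicc) and launcher (mpirun) names are assumptions about the site's MPI installation.

    #include <mpi.h>
    #include <stdio.h>

    /* Minimal point-to-point message passing: rank 0 sends one
     * integer to rank 1, which prints it. Build with an MPI wrapper
     * such as mpicc and launch with, e.g., mpirun -np 2 ./a.out
     * (names are illustrative; consult the cluster documentation). */
    int main(int argc, char **argv)
    {
        int rank, value = 42;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            /* Send one int from rank 0 to rank 1 over the interconnect. */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }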

Newton storage resources consist of 150 TBytes of high-performance Lustre storage for use by computational jobs. All mass storage on the cluster is backed up nightly to a storage system that is housed in a geographically separate data center, and historical snapshots of the backup data are made available to users for data recovery purposes.

The Newton computing infrastructure is managed using a custom-designed system that supports a high degree of automation in management and monitoring while remaining flexible enough to accommodate new technologies and computational techniques. The system also automatically documents and accounts for system configuration and changes. The Newton Program uses the Grid Engine batch-queue system to allocate cluster processing units to users' computational jobs. Grid Engine supports fine-grained resource controls that allow the program to make service-level guarantees on job turnaround, job throughput, and resource reservations for high-priority projects.
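
For context, a Grid Engine job is normally described by a short submission script whose #$ lines carry scheduler directives, and handed to the queue with qsub. The script below is a generic sketch, not Newton's actual configuration: the job name, parallel environment name (mpi), slot count, and runtime limit are all assumptions, since the queues and resource names configured on Newton may differ.

    #!/bin/bash
    # Generic Grid Engine submission script; all names and limits
    # below are illustrative, not Newton's actual configuration.
    #$ -N example_job
    #$ -cwd
    #$ -pe mpi 16
    #$ -l h_rt=01:00:00

    # Grid Engine sets $NSLOTS to the number of slots granted,
    # so the launch command scales with the request above.
    mpirun -np $NSLOTS ./my_program

Submitting such a script with qsub places the job in the queue, where the resource controls described above determine when and where it runs.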

All researchers affiliated with the University of Tennessee are eligible for accounts on the Newton systems. A basic account allows use of computing and storage resources when they are not in use by higher-priority members of the Newton Program. Through a direct buy-in process, researchers may gain higher priority in using system resources. The program uses buy-in funds to improve infrastructure and computing capacity, and allocates priority use of the systems in proportion to each buy-in researcher's financial contribution. In addition, computational resources are available for direct allocation to faculty members by the Office of Research.