


PSFC Partition on the Engaging Cluster at MGHPCC

Information and forms regarding the PSFC Partition on the Engaging Cluster at MGHPCC

To run jobs on the PSFC partition of the Engaging cluster, you must first request access by submitting a form at the following link:

Apply for an account on the PSFC Partition.

That link will require you to enter your PSFC username and password.

Logging into the Engaging Cluster

Go to:

This is the Engaging OnDemand interface. It requires MIT authentication, either with your MIT username and password or with your MIT certificate. Once you are authenticated, you will be logged into OnDemand.

After that, to access the dedicated PSFC partition and submit jobs, open the Clusters menu and select "Engaging Shell Access". This will log you into the PSFC Engaging login node. (If you prefer, you can skip Engaging OnDemand and simply ssh directly into that node.)
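Once you have shell access, jobs are submitted through the Slurm scheduler. The sketch below is a minimal batch script; the partition name `sched_mit_psfc` is an assumption here, so check the actual partition names with `sinfo` on the login node.

```shell
#!/bin/bash
# Minimal Slurm batch script sketch for the PSFC partition.
# NOTE: the partition name below is an assumption; run `sinfo`
# on the login node to confirm the real partition names.
#SBATCH --job-name=test_job
#SBATCH --partition=sched_mit_psfc   # assumed PSFC partition name
#SBATCH --nodes=1
#SBATCH --ntasks-per-node=32         # one task per core on a 32-core node
#SBATCH --time=00:10:00
#SBATCH --output=test_job_%j.out

srun hostname
```

Save this as, for example, `job.sh` and submit it with `sbatch job.sh`; check its status with `squeue -u $USER`.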


MIT's Engaging Cluster Documentation


About the Massachusetts Green High Performance Computing Center (MGHPCC)


The new PSFC computational cluster consists of a 100-node compute subsystem integrated into the “Engaging Cluster,” which is located at the Massachusetts Green High Performance Computing Center (MGHPCC) in Holyoke, Massachusetts.


The PSFC subsystem is operated as part of the “Engaging Cluster” and also has access to the 2.5 Petabyte Lustre parallel file system of the Engaging Cluster.


This 100-node subsystem is connected by a high-speed, non-blocking FDR InfiniBand fabric. Each InfiniBand lane carries 14 Gb/s with a latency of 0.7 microseconds. With four lanes, the effective node-to-node bandwidth is 56 Gb/s, or about 6.4 GB/s for user applications. Because the network is non-blocking, each node has immediate access to every other node as well as to the parallel file system.
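The link bandwidth above follows from the per-lane arithmetic; a quick sanity check at the shell:

```shell
# FDR InfiniBand lane arithmetic: 4 lanes at 14 Gb/s each.
lanes=4
lane_rate_gbps=14
link_rate_gbps=$((lanes * lane_rate_gbps))   # 56 Gb/s node-to-node
echo "link rate: ${link_rate_gbps} Gb/s"
# 56 Gb/s / 8 = 7 GB/s raw; the ~6.4 GB/s figure quoted above is the
# effective rate seen by applications after protocol overhead.
echo "raw byte rate: $((link_rate_gbps / 8)) GB/s"
```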


Each compute node in the subsystem is configured as follows:

Processors:   2 × Intel Xeon E5 (Haswell-EP), 2.1 GHz, 16 cores each, 32 cores total
Memory:       128 GB DDR4 (default allocation: 4 GB per core)
Local Disk:   1.0 TB
The full subsystem totals 3,200 cores and 12.8 TB of memory.
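The subsystem totals follow directly from the per-node figures:

```shell
# Subsystem totals from the per-node configuration above.
nodes=100
cores_per_node=$((2 * 16))            # two 16-core processors per node
mem_per_node_gb=128
echo "total cores:  $((nodes * cores_per_node))"        # 3200 cores
echo "total memory: $((nodes * mem_per_node_gb)) GB"    # 12800 GB = 12.8 TB
```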

The individual compute nodes are very similar to the compute nodes in the “Cori – Phase 1” system at NERSC.


The following note is from Chris Hill, MIT's representative to the MGHPCC and a founding member of the Engaging platform: "It's called engaging1 (eo for short) because one of the goals is to develop more interactive and dynamic approaches to computational sciences. This is a concept some of us refer to as 'engaging supercomputing' and/or computational science 2.0. The cluster consists of several head (login) nodes, hundreds to thousands of compute nodes, and a very large central Lustre storage system."


Thanks go to Dr. John Wright for preparing most of the information on this site regarding the PSFC partition on the Engaging Cluster.


Accessing the PSFC Engaging Cluster

A brief introduction to the PSFC Cluster

Basic Usage and Commands

PSFC Cluster FAQs

Cluster Demos

PSFC Engaging Cluster Nodes Status

Help Desk for the PSFC Partition on the Engaging Cluster, email:


For publications and reports making significant use of the PSFC partition on the Engaging cluster, please add this text (or a suitable equivalent) to your acknowledgements:

"The simulations presented in this paper were performed on the MIT-PSFC partition of the Engaging cluster at the MGHPCC facility, which was funded by DoE grant number DE-FG02-91-ER54109."



77 Massachusetts Avenue, NW16, Cambridge, MA 02139


Massachusetts Institute of Technology