System Configuration

Overview

El Gato comprises 136 compute nodes, one login node, and one administrative node. The nodes are connected by a Mellanox FDR InfiniBand network with a fully non-blocking fat-tree topology.

70 nodes are configured with NVIDIA GPUs, 20 nodes with Intel Xeon Phi coprocessors, and 46 nodes are CPU-only Intel Ivy Bridge nodes.

Information on the node configuration follows.   

GPU Node Configuration

  • Dual Socket IBM iDataPlex M5 node
  • 2x Intel Ivy Bridge E5-2650 v2 2.6 GHz 8-core processors (16 cores total per node)
  • 256 GB 1800 MHz RAM
  • 2x NVIDIA Kepler K20X Graphics Processing Units
  • Mellanox ConnectX-3 FDR InfiniBand Interconnect

Non-Accelerated Node Configuration

  • Dual Socket IBM iDataPlex M5 node
  • 2x Intel Ivy Bridge E5-2650 v2 2.6 GHz 8-core processors (16 cores total per node)
  • 64 GB 1800 MHz RAM
  • Mellanox ConnectX-3 FDR InfiniBand Interconnect

Phi Node Configuration

  • Dual Socket IBM iDataPlex M5 node
  • 2x Intel Ivy Bridge E5-2650 v2 2.6 GHz 8-core processors (16 cores total per node)
  • 256 GB 1800 MHz RAM
  • Intel Xeon Phi coprocessors
  • Mellanox ConnectX-3 FDR InfiniBand Interconnect

Bash Shell Required

Owing to idiosyncrasies of the Platform LSF scheduling system, a bash shell is required to use El Gato. If your HPC account's shell is set to tcsh or a similar shell, you will need to change it to bash for El Gato to work properly (use the uchsh command). If you have an essential need for another shell, please contact us.
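
A minimal sketch of checking and changing your login shell follows; the exact behavior of uchsh (assumed here to prompt interactively for the new shell) may differ, so follow whatever prompts it actually presents.

    # Show the login shell currently set for your account
    echo $SHELL

    # If the output is not /bin/bash, switch to bash with uchsh
    # (assumed to prompt for the new shell; log out and back in
    # for the change to take effect)
    uchsh

    # After logging back in, confirm the change
    echo $SHELL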

File System and Quotas

  • To check your disk usage and quota, use the uquota command.
  • The filesystems on El Gato are not directly connected to the filesystems on the other HPC computers. There is relatively fast network connectivity between El Gato and the other HPC systems, and you can use ssh and scp to move files (see the example after this list).
  • The $HOME directories are located on the DDN server in the /gsfs2 disk and are exported to the front end and to all of the compute nodes over the FDR InfiniBand network. Home directory quotas are set to 6 GB.
  • Group-specific directories in /gsfs1/rsgrps are available to users associated with the NSF MRI project that funded El Gato, users who have received approved "standard" accounts, and users who have purchased disk space specifically on the new system (this is in addition to any space purchased on the other HPC DDN system). Your access to these directories will be listed by uquota. If you believe you should have access to a /gsfs1/rsgrps directory but do not, please contact el-gato-support.
  • We will provide temporary disk space on /gsfs1/xdisk, as on the other HPC systems. For more information, see the xdisk documentation.
  • Users may make use of the local ~1 TB hard disks on the nodes by writing to /localscratch. However, files there may be removed at any time without notice; use this space at your own risk.
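
As a rough sketch of the commands described above, the following checks quotas, copies data from another HPC system, and stages files on the node-local scratch disk. The remote hostname and file paths are placeholders, not actual system names.

    # Check disk usage and quotas, and list the /gsfs1/rsgrps
    # directories you have access to
    uquota

    # El Gato's filesystems are separate from the other HPC systems,
    # so copy data over the network (hostname and paths are placeholders)
    scp -r username@other-hpc-system:/path/to/data ~/data

    # Stage temporary files on a node's local ~1 TB disk; anything in
    # /localscratch may be removed at any time without notice
    mkdir -p /localscratch/$USER
    cp ~/data/input.dat /localscratch/$USER/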