System Architecture and Hardware Specifications


HPC Frontends

  • Login nodes are the computers through which users interact with the HPC cluster.
  • Primarily used for writing HPC batch submit scripts and downloading input files available on the web (see the example after this list).
  • Reachable from anywhere on the Internet.
  • Each HPC cluster has its own separate frontend node:
  1. TUX Frontend for Intel Nodes
    • IBM X3650 (8 x Intel® Xeon® CPU 5405 @ 2 GHz)
    • 24 GB of RAM
    • 2 x 10 Gbps Ethernet and 4 x 1 Gbps Ethernet
  2. DUKE Frontend for AMD Nodes
    • Supermicro AS-1022GG-TF (32 x AMD Opteron™ Processor 6272 @ 1.4 GHz)
    • 112 GB of RAM
    • 2 x 10 Gbps Ethernet, 2 x 1 Gbps Ethernet, and 1 x 40 Gbps InfiniBand
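  For illustration, a first session on a frontend might look like the following; the hostname and URL are hypothetical placeholders, since the actual frontend addresses are not listed in this document.

    # Connect to a frontend from anywhere on the Internet
    # (hostname is a hypothetical placeholder; use the address given
    # by the facility administrators)
    ssh username@tux.hpc.example.edu

    # Download an input file available on the web
    wget https://example.org/datasets/input.dat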

Compute Nodes

  • Run batch jobs and interactive tasks for GUI-dependent applications
  • Compile user applications
  • Accessible to a user only while that user has a job running on the node
  • Total number of CPU cores: 2944 (48 nodes x 48 cores + 10 nodes x 64 cores)
  • Each HPC cluster has its own separate pool of nodes (a sample batch script follows the node listing):
  1. TUX Nodes: tux-[01-48]
    • Total number of nodes: 48
    • Superserver 2027PR (48 x Intel® Xeon® CPU E5-2697 v2 @ 2.70 GHz)
    • 256 GB of RAM
    • 2 x 10 Gbps Ethernet
  2. DUKE Nodes: duke-[01-10]
    • Total number of nodes: 10
    • Supermicro SBA-7142G-T4 (64 x AMD Opteron™ Processor 6378 @ 2.4 GHz)
    • 132 GB of RAM
    • 1 x 10 Gbps pass-through InfiniBand and 1 x 40 Gbps InfiniBand
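  As a sketch of how batch jobs reach these nodes, the script below assumes a SLURM scheduler; the scheduler, partition name, and module name are placeholders, not confirmed details of this facility.

    #!/bin/bash
    #SBATCH --job-name=sample_job
    #SBATCH --partition=tux        # hypothetical partition for the TUX nodes
    #SBATCH --nodes=2
    #SBATCH --ntasks-per-node=48   # one task per core on a TUX node
    #SBATCH --time=01:00:00

    module load openmpi            # hypothetical module name
    srun ./my_mpi_program input.dat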

Network File System (/home, /opt/hpcc)

  • Total space: 44 TB
  • Each user is limited to a 100 GB quota
  • Recommended for storing application source code, user program binaries, and small amounts of active data
  • Highly available and backed up regularly
  • Low-throughput storage; not intended as a work area for running jobs (use the scratch directories instead)
  1. /home
    • User $HOME (/home) directories, network mounted to the compute nodes and frontend nodes
  2. /opt/hpcc
    • Contains user and system application binaries
    • Network mounted storage to the compute and front-end nodes
    • Applications in /opt/hpcc are accessible by all users of the HPC facility
    • Only the administrators can install files in /opt/hpcc
    • Up-to-date stable versions of common HPC applications (e.g., compilers, MPI libraries)
    • Applications can be executed manually or through the module command, as sketched below
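  Both invocation styles are sketched here; the module and application names are hypothetical examples, not a listing of what is actually installed in /opt/hpcc.

    # List the application environments published under /opt/hpcc
    module avail

    # Load an application through the module command, then run it
    module load gcc                   # hypothetical module name
    gcc --version

    # Or execute a binary manually by its full path
    /opt/hpcc/gcc/bin/gcc --version   # hypothetical path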

Scratch Directories

  • Total space: 582 TB (291 TB per scratch directory)
  • User $SCRATCH1 (/scratch1) and $SCRATCH2 (/scratch2) directories, network mounted to the compute nodes and frontend nodes
  • Content is automatically purged every 4 weeks
  • Not backed up and not recoverable once purged
  • Quota per user is 10 TB and 1,000,000 files in each scratch directory (a typical staging workflow is sketched below)
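  Given the purge policy and the lack of backups, a common pattern is to stage data into scratch only for the duration of a job and copy results back to /home afterward; the file and program names below are hypothetical.

    # Stage input data from backed-up $HOME into fast, purgeable scratch
    WORKDIR=$SCRATCH1/$USER/myjob
    mkdir -p "$WORKDIR"
    cp "$HOME/input.dat" "$WORKDIR/"

    # Run the job from the scratch directory
    cd "$WORKDIR" && ./my_program input.dat > output.dat

    # Copy results back to /home before the 4-week purge
    cp output.dat "$HOME/results/"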



Figure: HPC-Archi.jpg (HPC system architecture diagram)