Storage

A durable repository is set up to store researchers' environmental and geospatial data, whether for short-term or long-term use. The CoARE Project offers multiple storage options to accommodate the varied storage requirements of facility users:


iRODS

  • iRODS is a data management platform designed to help organizations handle massive amounts of data by consolidating heterogeneous storage resources under a single unified namespace and providing seamless, interactive access to their content.
  • Facility for long-term data storage, retention, and distribution.
  • Recommended use is for collaboration and data sharing.
  • The platform can also be used to archive job outputs produced by the HPC system (a minimal upload sketch follows this list).
  • Complete instructions on how to use iRODS can be found on this Wiki.
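
For illustration only, the sketch below uploads a finished job output to an iRODS collection using the python-irodsclient library. The host, zone, account, and paths are placeholders, and the target collection is assumed to already exist; follow the Wiki referenced above for the facility's actual connection details and supported workflow.

```python
from irods.session import iRODSSession

# Placeholder connection details; substitute the values documented on the CoARE Wiki.
with iRODSSession(host="irods.example.gov.ph", port=1247,
                  user="jdelacruz", password="********",
                  zone="exampleZone") as session:
    # Archive a finished HPC job output into an existing collection.
    local_file = "/home/jdelacruz/results/run01_output.dat"              # placeholder
    irods_path = "/exampleZone/home/jdelacruz/archive/run01_output.dat"  # placeholder
    session.data_objects.put(local_file, irods_path)

    # List the collection to confirm the upload (and to browse shared files).
    coll = session.collections.get("/exampleZone/home/jdelacruz/archive")
    for obj in coll.data_objects:
        print(obj.name, obj.size)
```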

Magnetic Tape Data Storage

  • Recommended storage for archiving data on a long-term basis (1-20 years)
  • Data will be stored on magnetic tape
  • Capacity: 2.5 TB native, 6.25 TB compressed (assuming a 2.5:1 compression ratio)
  • Once data are stored on the tapes, users have the option to keep the tapes or store them on the DOST-ASTI premises
  • The cost of purchasing the magnetic tapes will be shouldered by the users

HPC Storage

HPC storage is allocated by default to every HPC/HPC-GPU user's account.

  • /home
    • User $HOME (/home) directories, network-mounted to the compute and front-end nodes
    • Intended storage for inactive data
  • /opt/hpcc
    • Contains user and system application binaries
    • Network-mounted storage accessible to the compute and front-end nodes
    • Applications in /opt/hpcc are accessible to all users of the HPC facility
    • Only administrators can install files in /opt/hpcc
    • Provides up-to-date stable versions of common HPC applications (e.g. compilers, MPI binaries)
    • Applications can be executed manually or loaded through the module command
  • Scratch Directories (/scratch1 & /scratch2)
    • Intended for heavy I/O workloads only (e.g. active runs, MPI jobs); a minimal staging sketch follows this list
    • User $SCRATCH1 (/scratch1) and $SCRATCH2 (/scratch2) directories, network-mounted to the compute and front-end nodes
    • Content is automatically purged every 4 weeks
    • Not backed up and non-recoverable
    • Quota for each user is 10 TB per directory
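
As a rough illustration of the intended scratch workflow (the paths and file names below are placeholders, not CoARE-specific tooling): do the heavy I/O in a scratch directory, then copy only the results worth keeping back to /home before the automatic purge.

```python
import os
import shutil

# $HOME and $SCRATCH1 are described above; the run and file names are placeholders.
home = os.environ["HOME"]
scratch = os.environ.get("SCRATCH1", "/scratch1")

run_dir = os.path.join(scratch, "my_run")   # heavy I/O happens here, not in /home
os.makedirs(run_dir, exist_ok=True)

# ... run or submit the job with its working directory set to run_dir ...

# /scratch1 and /scratch2 are purged every 4 weeks and are not backed up,
# so copy the results worth keeping back to $HOME once the run finishes.
result = os.path.join(run_dir, "output.dat")
if os.path.exists(result):
    shutil.copy2(result, os.path.join(home, "my_run_output.dat"))
```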

Science Cloud Storage

  • Storage attached to the Virtual Machines (VMs) provided to users
  • Intended storage for users' files