
MonARCH

Welcome to the MonARCH documentation. Please select from the options at the left to get started.

MonARCH (Monash Advanced Research Computing Hybrid) is the next-generation HPC/HTC Cluster, designed from the ground up to address the emergent and future needs of the Monash HPC community.

A key feature of MonARCH is that it is provisioned through R@CMon, the Research Cloud @ Monash facility. Through the use of advanced cloud technology, MonARCH is able to configure and grow dynamically. As with any HPC cluster, MonARCH presents a single point of access through which computational researchers run calculations on its constituent servers.
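
As a minimal sketch of that single point of access: users typically connect to a login node over SSH before submitting work. The hostname below is an assumption for illustration; confirm the current address in your account or connection instructions.

    # Connect to the cluster login node over SSH.
    # The hostname is an assumption; check the connection
    # instructions for the current address.
    ssh your-username@monarch.erc.monash.edu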

MonARCH aims to develop continually over time. Currently, it consists of the following servers (a sample job script targeting these node types follows the list):

  • mi* nodes are 36 core Xeon-Gold-6150 @ 2.70 GHz servers with 158893 MB usable memory.
  • hc* nodes are 24 core Xeon-E5-2680-v3 @ 2.50 GHz servers with 100550 MB usable memory.
  • hs* nodes are 16 core Xeon-E5-2667-v3 @ 3.20 GHz servers with 100550 MB usable memory.
  • gp* nodes are 28 core Xeon-E5-2680-v4 @ 2.40 GHz servers with 241660 MB usable memory. Each server has two P100 GPU cards.
  • mk* nodes are 48 core Xeon-Platinum-8260 @ 2.40 GHz servers with 342000 MB usable memory.
  • ge* bare-metal nodes are 24 core Xeon-E5-2680-v3 @ 3.30 GHz servers with 257669 MB usable memory. Each server has eight K80 GPU processors (four boards with two K80 chips each).
  • gf* nodes are 24 core Xeon-E5-2680-v3 @ 2.50 GHz servers with 235980 MB usable memory. Each server has four K80 GPU processors (two boards with two K80 chips each).
  • hm00 is a single high-memory node: a 36 core Xeon-Gold-6150 @ 2.70 GHz server with 1.4 TB usable memory.

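The batch script below is a minimal sketch of how jobs are typically directed at node types like those above, assuming the Slurm scheduler used on Monash HPC systems. The partition and gres names shown are illustrative assumptions, not confirmed by this page; check the scheduler documentation for the exact names.

    #!/bin/bash
    # Minimal Slurm batch script sketch. Partition and gres names are
    # assumptions for illustration only; consult the MonARCH scheduler docs.
    #SBATCH --job-name=example
    #SBATCH --ntasks=1
    #SBATCH --cpus-per-task=4
    #SBATCH --mem=16G
    #SBATCH --time=01:00:00
    #SBATCH --partition=comp            # assumed CPU partition name
    # To request a GPU node (e.g. a gp* server with P100 cards), instead use:
    # #SBATCH --partition=gpu           # assumed GPU partition name
    # #SBATCH --gres=gpu:P100:1         # assumed gres label for one P100

    srun ./my_program                   # hypothetical executable
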
For data storage, we have deployed a parallel file system service using Intel Enterprise Lustre, providing over 300 TB of usable storage with room for future expansion.
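
To see the capacity and current usage of the Lustre filesystem from a login node, the standard Lustre client command lfs df can be used; this sketch assumes the Lustre client tools are available on your PATH.

    # Report capacity and usage of mounted Lustre filesystems
    # in human-readable units (lfs df is a standard Lustre client command).
    lfs df -h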

The MonARCH service is operated by the Monash HPC team, with continuing technical and operational support from the Monash Cloud team and the eSolutions Servers-and-Storage and Networks teams.

Acknowledgement

If you have found MonARCH useful for your research, we would be grateful if you acknowledged us with text along the lines of:

This research was supported in part by the Monash eResearch Centre and eSolutions-Research Support Services through the use of the MonARCH HPC Cluster.