MonARCH
After over nine years of service, the MonARCH HPC cluster will be retired. Moving forward, HPC workloads currently running on MonARCH will be served by M3.
In the coming weeks, our eResearch HPC team will be reaching out to you about how your current MonARCH project(s) will be migrated to the M3 HPC system. In the meantime, please continue running your current workloads on MonARCH as usual.
For any concerns or queries, please feel free to email us.
For information on training, please see our new training page.
MonARCH is Retiring in July 2025
We cordially invite all current and prospective MonARCH users to apply for an appropriate allocation on M3. There are two options:
- larger MonARCH projects requiring 3 TB or more of storage should apply for a Monash M3 allocation; or
- smaller MonARCH projects requiring less than 500 GB of storage and involving one or two project members should apply for a startup (PEACH) allocation.
PEACH (Personal Easy Allocation of Compute on HPC) is a new offering on M3 for small, personal-sized HPC projects. A PEACH project may be led by a PhD student and is granted a small allocation of storage space. Many existing MonARCH projects fall into this category.
This table shows the differences between a PEACH project and a Monash M3 project:
| | PEACH project | Monash M3 project |
|---|---|---|
| Project Leader | Can be a student | Academic researcher, lab head, group leader |
| # of members | Ideal for one or two | Research group / lab |
| Accessible partitions | Dedicated peach partition | All other M3 partitions |
| CPU limits per user | 100 CPU cores | Monash M3 standard |
| Strudel Desktops | Yes (CPU only, no GPUs) | Yes, with GPUs |
| GPUs | No GPU access | Up to four GPUs per user |
| Project Space (initial) | 50 GB of protected space | Monash M3 standard: https://docs.erc.monash.edu/M3/Files/StorageQuotas |
| Scratch Space (initial) | 100 GB of /scratch2 space per project | See the link above |
| Per-user scratch space | None (unlike MonARCH); scratch storage is allocated to the project | None; the same as any other M3 project |
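For example, a PEACH project submits its jobs to the dedicated peach partition listed above. A minimal Slurm batch script might look like the sketch below; only the partition name comes from the table, while the account ID xy12 and the module name are illustrative placeholders.

```bash
#!/bin/bash
# Minimal Slurm batch script for a PEACH project (sketch only).
# "peach" is the dedicated partition from the table above; the
# account "xy12" and the module name are illustrative placeholders.
#SBATCH --job-name=peach-test
#SBATCH --account=xy12
#SBATCH --partition=peach
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4      # well within the 100-core per-user limit
#SBATCH --mem=8G
#SBATCH --time=01:00:00

module load python             # load software through the module system
srun python my_analysis.py
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`.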
Applying for your M3 allocation
Note that your current MonARCH project (including all its data) will not be automatically carried over to M3.
You need to apply for a new M3 project, under either the PEACH allocation or a Monash M3 allocation.
For details on how to apply, please see: https://docs.erc.monash.edu/M3/GettingStarted/CreateM3Project
The PEACH allocation is appropriate if you are a PhD student working on your own project and do not require large storage space or GPUs. If you need more than the PEACH allocation of 50 GB of protected space and 100 GB of /scratch2 space, ask your research supervisor to join and co-lead your project, and then request an increase of your allocation as required. Please see: https://docs.erc.monash.edu/M3/Files/StorageQuotas
For larger MonARCH projects, such as those with several project members or tens of TBs of data, please ask your academic supervisor to apply for a Monash M3 allocation. This will provide the standard M3 allocation of 500 GB of protected space and 3 TB of /scratch2 space. More quota may be requested as needed; please see: https://docs.erc.monash.edu/M3/Files/StorageQuotas
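Before requesting extra quota, it helps to know how much space your project actually uses. The commands below are a sketch: the paths and the group name xy12 are illustrative placeholders, and `lfs quota` applies only where the file system is Lustre-based.

```bash
# Check total usage of your project spaces (paths are placeholders).
du -sh /projects/xy12            # protected project space
du -sh /scratch2/xy12            # project scratch space

# On a Lustre file system you can also query the group quota directly
# (replace xy12 with your project/group ID):
lfs quota -h -g xy12 /scratch2
```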
After submitting your M3 project request, we will notify you once your application is approved and the project is ready to use. Your M3 project will have a new project ID, different from that of your existing MonARCH project.
You will need to copy important data from MonARCH into your new M3 project space.
We will remind you closer to the cutover that you must copy all files to the new cluster. If you need assistance in transferring your data, please indicate this in the M3 application form.
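A common way to copy data between clusters is rsync over SSH, which can resume interrupted transfers. The sketch below uses illustrative host names and paths; substitute your own project IDs and the login addresses provided in the migration instructions.

```bash
# Run from a MonARCH login node: push a data directory into your new
# M3 project space. Host name and paths are illustrative placeholders.
rsync -avh --progress \
    /home/username/important_data/ \
    username@m3.massive.org.au:/projects/xy12/important_data/

# Re-running the same command resumes an interrupted transfer; rsync
# only copies files that are new or have changed since the last run.
```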
If your current MonARCH allocation is bigger than the one we will grant on M3, please advise us:
- whether you need the data on M3; or
- whether this data can be cleaned up and/or archived.
We recommend that all projects apply for a vault allocation for archiving valuable data. Please see: https://docs.erc.monash.edu/RDS/StorageProducts/VaultStorage for how to secure an allocation.
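For data that only needs to be retained rather than kept online, one approach is to bundle it into compressed archives before moving it to your vault allocation. This is a sketch with placeholder paths; the actual vault destination and transfer method depend on your allocation (see the link above).

```bash
# Bundle a finished dataset into a single compressed archive
# (paths are placeholders).
tar -czvf results_2024.tar.gz /scratch2/xy12/results_2024/

# Verify that the archive reads back cleanly before deleting originals.
tar -tzf results_2024.tar.gz > /dev/null && echo "archive OK"
```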
Contact mcc-help@monash.edu if you need assistance with data migration.
[To be deprecated] Documentation on the current MonARCH System
Welcome to the MonARCH documentation. Please select from the options at the left to get started.
MonARCH (Monash Advanced Research Computing Hybrid) is a next-generation HPC/HTC cluster, designed from the ground up to address the emergent and future needs of the Monash HPC community.
Through the use of advanced cloud technology, MonARCH is able to configure and grow dynamically. As with any HPC cluster, MonARCH presents a single point-of-access to computational researchers to run calculations on its constituent servers.
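In practice, that single point of access is the cluster login node, reached over SSH. A minimal sketch follows; the host name is illustrative, so use the address from your account details.

```bash
# Log in to the cluster's single point of access (host name illustrative).
ssh username@monarch.erc.monash.edu

# From the login node, work runs on the constituent servers via the
# Slurm scheduler rather than directly on the login node:
sbatch my_job.sh        # queue a batch script
squeue -u $USER         # check your queued and running jobs
```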
MonARCH aims to continually develop over time. Currently, it consists of the following servers:
| Name | CPU | Number of Cores / Server | Usable Memory / Server | Notes |
|---|---|---|---|---|
| mi* | Xeon-Gold 6150 @ 2.70GHz | 36 | 158893 MB | |
| hi* | Xeon-Gold 6150 @ 2.70GHz | 27 | 131000 MB | Same hardware as mi* nodes, but with fewer cores and less memory in the VM |
| ga* | Xeon-Gold 6330 @ 3.10GHz | 56 | 754178 MB | Each server has two A100 GPU devices |
| gd* | Xeon-Gold 6448Y @ 4.10GHz | 64 | 774551 MB | Each server has two A40 GPU devices |
| hm00 | Xeon-Gold 6150 @ 2.70GHz | 26 | 1419500 MB | Specialist high-memory (~1.4 TB) machine. Please contact support to get access |
| md* | Xeon-Gold 5220R @ 2.20GHz | 48 | 735000 MB | Bare-metal MonARCH nodes |
| ms* | Xeon-Gold 6338 @ 2.00GHz | 64 | 505700 MB | The most recent MonARCH nodes |
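You can inspect these nodes yourself from the cluster with standard Slurm commands; the node name below is illustrative.

```bash
# List all nodes with their core counts, memory, and current state.
sinfo -N -l

# Show the full configuration of a single node (name is illustrative).
scontrol show node mi01
```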
For data storage, we have deployed a parallel file system service using Intel Enterprise Lustre, providing over 300 TB of usable storage with room for future expansion.
The MonARCH service is operated by the Monash HPC team, with continuing technical and operational support from the Monash Cloud team and the eSolutions Servers-and-Storage and Networks teams.
Acknowledgement and Citation
If you have found MonARCH useful for your research, we would be very grateful if you acknowledged us. Find out how to acknowledge MonARCH in your publications.