M3
Notice of upcoming two-day scheduled maintenance - from 1 April 2025 0800 AEDT
Please be advised that M3 will be undergoing a two-day scheduled maintenance starting Tuesday, 1 April 2025 from 8:00 AM AEDT.
During this two-day maintenance period, we will be conducting these activities:
- Day 1: updating the system software underpinning the /fs04 Lustre file system, and applying routine security and OS updates to the login and compute nodes; and
- Day 2: cutting over the file system from /scratch to /scratch2 - note that your access to /scratch will cease from 1 April onwards.
Throughout the two-day maintenance period, M3 will not be accessible via ssh, scp, Globus or rsync. No jobs will run during this time, and pending jobs will remain in the queue. You will not be able to launch any M3 desktop or interactive sessions.
Upon successful migration, your data previously on /scratch/PROJID will now be accessible either under /scratch2/PROJID or under /scratch2/PROJID/oldscratch.
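If you would like to confirm where your files ended up after the cutover, a quick check such as the following should be enough; PROJID is a placeholder for your project ID, as above.

```bash
# Replace PROJID with your project ID.
ls /scratch2/PROJID              # migrated data may sit directly here...
ls /scratch2/PROJID/oldscratch   # ...or under the oldscratch subdirectory
```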
Due to the large volume or high number (e.g. millions) of files in some /scratch folders, their migration may not finish by the end of the second day. We nevertheless aim to release the cluster for use by the end of the two-day maintenance. If migration of your /scratch folder/s has not completed by then, we will notify you directly so that you know why the corresponding /scratch2 folder for your project is not yet accessible.
Please contact us at help@massive.org.au if you have any concerns.
2/Feb/2025: An update on the upcoming file system migration
As previously advised, /scratch will be decommissioned, and we have commenced work to migrate /scratch to /scratch2.
We have updated the user_info command so that it now reports project usage and quotas for both scratch spaces.
There is no action needed at your end at this time.
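If you would nonetheless like to check your own footprint on both scratch spaces, the updated user_info command reports it directly, and standard tools can be used as a cross-check. The paths below assume your project directory is named after your project ID.

```bash
# Report project usage and quotas for both scratch spaces
user_info

# Independently check how much data sits in your project directories
# (replace PROJID with your project ID; /scratch2/PROJID may not exist
# until your migration has started, and du can take a while on large folders)
du -sh /scratch/PROJID /scratch2/PROJID
```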
18/Dec/2024: We are still in the process of porting our old M3 documentation from https://old-docs.massive.org.au/. As part of this process, we have aimed to improve our M3 docs by removing outdated content, restructuring, and rewriting some pages.
For a near-replica of the old docs, please see https://docs.erc.monash.edu/old-M3/.
If you identify any content that is missing from these new docs, or otherwise have any feedback about these docs, please let us know! In the meantime, you may still find what you're looking for in our old docs.
Welcome to the M3 user documentation! You can explore all of our pages in the left sidebar. If you don't see this sidebar, click on the triple bar ≡ in the top-left to reveal it.
What is M3?
M3 is a High-Performance Computing (HPC) cluster, and is the third stage of MASSIVE. M3 allows researchers to process large amounts of complex data by parallelising their workloads across many computers. Since 2010, MASSIVE has played a key role in driving discoveries across many disciplines including biomedical sciences, materials research, engineering and geosciences.
What hardware does M3 have?
M3 is made up of a large number of (mostly Intel) CPUs and NVIDIA GPUs connected by fast Mellanox (NVIDIA) InfiniBand interconnects. The CPUs are quite powerful on their own, but M3's real benefit is that your workload can be split across many CPUs at once, allowing parallel workloads to be executed much more quickly.
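Jobs on M3 run through a queue rather than directly on the login nodes. As a rough sketch only, the script below shows what a small multi-core job request might look like, assuming the scheduler is Slurm; the resource values and program name are placeholders, and the Getting Started guide has the authoritative details.

```bash
#!/bin/bash
# Minimal multi-core job sketch (assumes Slurm; all values are placeholders).
#SBATCH --job-name=parallel-example
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=8      # request 8 CPU cores on a single node
#SBATCH --mem=16G
#SBATCH --time=01:00:00

# my_parallel_program stands in for your own multi-threaded program
srun ./my_parallel_program
```

A script like this would typically be submitted with sbatch and monitored with squeue.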
Is M3 right for me?
If you are a Monash researcher who needs to process large amounts of data more quickly than is possible on your own computer, then M3 can speed up your work. If you only have a relatively light workload, particularly one that does not rely on GPUs, then MonARCH is effectively a smaller version of M3 that may be more suitable for you.
How can I use M3?
If you're interested in using M3, please see our Getting Started guide. Your usage of M3 is subject to the MASSIVE Terms of Use.
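Once you have an account, access is typically over SSH from a terminal. The hostname below is the commonly cited M3 login address, but it is shown here only as an illustration; please confirm the exact connection details in the Getting Started guide.

```bash
# Log in to M3 (replace username with your own username;
# confirm the login hostname in the Getting Started guide)
ssh username@m3.massive.org.au

# Copy a local file to your M3 home directory
scp results.csv username@m3.massive.org.au:~/
```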