Choosing the Right Compute Platform

Monash eResearch provides access to a variety of services and platforms to support the research community. Whether you are running massive data processing pipelines, hosting a web application or scaling up to national supercomputing infrastructure, this guide will help you find the right fit for your workload.

Quick Reference Summary

M3 HPC
  • Best for: Data processing; GPU workloads; standard HPC via managed Linux. Supports research partnership models and co-investment.
  • Key considerations: Queue-based (SLURM); no user root privileges.

Nectar
  • Best for: Creating virtual machines (VMs); deploying web-accessible services. Offers user (root) privileges, customisable memory and CPU, and support for parallel processing using small VM clusters.
  • Key considerations: Users must self-manage and administer their VMs; Nectar projects must be renewed every 12 months.

NCI (partnership)
  • Best for: Multi-node workloads; extreme scaling; managed Linux.
  • Key considerations: Queue-based; complements the national merit allocation scheme (NCMAS); quarterly request and allocation scheme.

Pawsey (partnership)
  • Best for: Multi-node workloads; extreme scaling; managed Linux.
  • Key considerations: Short-term projects only; long-term projects require approval via the national merit allocation scheme (NCMAS); quarterly request and allocation scheme.

DUG
  • Best for: Compute on demand; GPU workloads; standard HPC; managed Linux.
  • Key considerations: Fee-based service.

MAVERIC
  • Best for: Large memory support; distributed GPU jobs; designed primarily for AI training; supports sensitive data.
  • Key considerations: Linux-based; container-only; isolated system; ARM-based CPUs.

Detailed Platform Breakdown

M3 HPC

M3 HPC is the primary high-performance computing (HPC) cluster for Monash researchers. It is designed to handle a wide variety of computational workloads.

  • Why use it: Excellent for large-scale data processing, AI/Machine Learning tasks and heavy GPU workloads. It comes pre-loaded with hundreds of scientific software modules.
  • Things to consider: M3 HPC is a shared environment managed by a job scheduler (a queue) that ensures all researchers have equal access to the service. User accounts do not have administrator (root) privileges, so system-level packages cannot be installed directly; additional packages can be requested, and virtual environments can be used.
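As a rough illustration, work on a SLURM-managed cluster such as M3 is described in a batch script and submitted to the queue. The resource values and module name below are placeholders for the sketch, not M3-specific recommendations:

```shell
#!/bin/bash
# Sketch of a SLURM batch script; resources and module names are illustrative.
#SBATCH --job-name=demo           # name shown in the queue
#SBATCH --ntasks=1                # one task
#SBATCH --cpus-per-task=4         # four CPU cores
#SBATCH --mem=16G                 # 16 GB of RAM
#SBATCH --time=02:00:00           # two-hour wall-time limit

# Load a pre-installed software module if the module system is available
# (the exact module name varies between clusters).
command -v module >/dev/null && module load python/3.10

echo "Running on $(hostname)"
```

The script would be submitted with `sbatch job.sh`, and its position in the queue monitored with `squeue`.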

Nectar Research Cloud

Nectar provides cloud-based virtual machines (VMs) for Australian researchers.

  • Why use it: Nectar is a national project that provides researchers with hosting and compute infrastructure. It is ideal for hosting web-accessible services (e.g. databases, web portals or interactive dashboards) and for workloads that require full control over a virtual machine, including administrator (root) access.
  • Things to consider: With great power comes great responsibility. Users are entirely responsible for the system setup, administration, security patching and ongoing maintenance of their VMs. Limited support is available for setting up and configuring VMs.
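Because VM owners handle their own setup and patching, much of the initial configuration can be automated at launch time. The fragment below is a hedged sketch of a cloud-init user-data file for an Ubuntu-based image; the package and service names are assumptions, so the Nectar documentation should be consulted for specifics:

```yaml
#cloud-config
# Illustrative first-boot configuration for an Ubuntu-based VM,
# supplied as "user data" when the instance is launched.
package_update: true          # refresh the package index on first boot
package_upgrade: true         # apply pending security patches
packages:
  - unattended-upgrades       # keep security updates automatic
  - ufw                       # simple host firewall
runcmd:
  - ufw allow OpenSSH         # keep SSH reachable before enabling the firewall
  - ufw --force enable
```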

NCI (National Computational Infrastructure)

The NCI is Australia's national high-performance computing facility. Access to the NCI is provided through a strategic partnership between Monash University and the NCI facility.

  • Why use it: The NCI is best suited for highly parallelised, multi-node compute workloads that exceed the capacity of local clusters, and it complements the National Computational Merit Allocation Scheme (NCMAS).
  • Things to consider: Like M3 HPC, the NCI is a strict, queue-based HPC environment. Unlike M3 HPC, projects are awarded a quarterly allocation through an application process. Code should be highly optimised for multi-node scaling to get the most out of the resource. The NCI offers a large number of CPU-only nodes as well as a smaller number of NVIDIA GPU nodes.

Pawsey Supercomputing Research Centre

Pawsey offers world-class supercomputing infrastructure and is based in Western Australia.

  • Why use it: The Monash/Pawsey partnership is designed to jump-start new projects and allows researchers to test code scalability. It acts as an ideal stepping stone before applying for large-scale NCMAS allocations.
  • Things to consider: The Pawsey access scheme is intended for initial project phases and is not designed to support long-term or ongoing compute needs. Unlike M3 HPC, projects are awarded a quarterly allocation via an application process. Pawsey offers a large number of CPU-only nodes and a smaller number of AMD GPU nodes.

DUG (Compute on Demand)

DUG is a privately operated commercial provider of fully managed Linux HPC infrastructure.

  • Why use it: DUG offers rapid access to flexible "compute on demand" without the administrative overhead of managing the systems directly.
  • Things to consider: Unlike the merit-based or university-funded resources above, DUG incurs direct financial costs. Research groups and departments must have funding available to cover usage costs.