Welcome to the M3 user guide
Warning
These are our old docs! Please see our new docs at https://docs.erc.monash.edu/M3/.
These old docs were roughly converted from our old format, so this copy is not identical to our previous docs. You can still find the original old docs at https://old-docs.massive.org.au/.
You may notice some formatting and structural issues with these old docs. We will not resolve these; this page exists purely for backwards compatibility, to ensure old URLs do not die.
Help and Support
Using M3
- About M3
- System configuration
- Partitions
- Requesting an account
- Overview of the process
- Log in to the HPC ID system
- Create an M3 account
- Create or join an M3 project
- Set your M3 account password
- Log in to M3
- M3 Accounts for External Collaborators
- Overview of the process
- Requesting help on M3
- Software issues
- System issues
- Contacting us
- Help Desk
- Drop-in Sessions
- Postal Address
- Connecting to M3
- Connecting to M3 via ssh
- Linux and OS X Users
- Windows Users
- Connecting to M3 via Strudel2
- Troubleshooting Common Issues Using Strudel
- I can login to Strudel and launch a desktop but cannot connect. I click the connect button and get an error saying “failed to connect to server”
- Strudel Web fails to connect to the desktop or Strudel2 crashes when trying to connect to desktop
- I can connect to a Desktop, but the display looks messed up, or I can see the display, but cannot interact with the Desktop!
- I’m unable to type certain letters in Strudel web desktop
- File Systems on M3
- What to put on each file system?
- Home directory (~10GB)
- Project directory (Shared with everyone on your project)
- Scratch directory (Shared with everyone on your project)
- Disk Quotas
- Default Quotas for New Projects
- Scratch Usage Policies
- System Backups and File Recovery
- File Recovery Procedure
- Information for Desktop Users
- Thumbnails Generating Too Much Data
- Remember to empty your trash folder
- Already over quota?
- Storage outside of M3
- Instructions to access Market and Vault shares on M3
- Instructions to access Vault with SFTP and Rsync
- Copying files to and from M3
- GUI Tool - Windows, Mac OS, and Linux Users
- FileZilla
- WinSCP - Windows
- Globus
- Getting Started
- Personal Globus Endpoint
- Transferring Data
- Globus - Command Line Interface
- Globus - Platform as a Service
- Globus Jupyter Notebooks
- Data Portals
- Command Line Interface - Linux and OS X Users
- rsync
- Software on M3
- Modules
- Requesting an install
- Containers and Docker on M3
- Python and conda on M3
- Software for the Monash Bioinformatics Platform (MBP)
- Related pages
- Installed software modules
- Common software issues
- Running Python on M3
- Running Anaconda on M3
- Python and conda on M3: Frequently Asked Questions (FAQ)
- Running Miniconda on M3
- Using PyCharm with your Virtual environment
- Running Python Virtual Environments on M3
- Running Python and conda on M3
- Conda on M3 (Rocky 9)
- CryoSPARC
- Software Deprecation: Java
- Running jobs on M3
- Partitions on M3
- Compute partitions
- GPU partitions
- Restricted partitions
- Checking the status of M3
- The STATUS field explained
- Slurm Accounts
- Default accounts
- Setting the account for a job
- Questions about Slurm accounts
- Getting started with job submission scripts
- Running Simple Batch Jobs
- An example Slurm job script (see the sketch after this list)
- Cancelling jobs
- MPI on M3
- On CentOS 7
- Software
- Load the MPI compiler
- Compile the software
- Running the software
- On Rocky 9
- UCX_NET_DEVICES
- Running Multi-threading Jobs
- An example Slurm Multi-threading job script
- Running Interactive Jobs
- Submitting an Interactive Job
- How long do interactive jobs last?
- Reconnecting to/Disconnecting from an Active Interactive Job
- Running GPU Jobs
- Running GPU Batch Jobs
- Compiling your own CUDA or OpenCL codes for use on M3
- Running Array Jobs
- An example Slurm array job script
- QoS (Quality of Service)
- How to run jobs with QoS
- Explanation
- Features & Constraints
- An example of using constraint in the job script
- Features Available
- Checking job status
- Method 1: show_job
- Method 2: Slurm commands
- Project Allocation
- Project Space
- Questions about allocation
- Diagnosing problems with jobs
- QOSMaxGRESPerUser
- Pending jobs
- CPU, Memory, and Desktop Job Limits
- Job Submission rejected
- GPUs on M3
- Starter Guide: GPUs on M3
- GPUs on HPC (remote) vs. Laptop/Workstation (local)
- How do I choose a GPU?
- Questions to ask yourself when choosing a GPU
- GPU Look-Up Tables
- How do I choose a GPU?
- Look-Up Tables
- Frequently Asked Questions About GPUs on M3
- How long will I wait for a GPU?
- Why am I waiting so long for a GPU?
- How do I know if my job is using the GPU?
- I’m in the Machine Learning Community, do you have any specific advice for me?
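For readers landing on "An example Slurm job script" above, here is a minimal sketch of the kind of batch script that page describes. This is a hedged illustration, not M3's official template: the account code, partition name, module version, and script name are all placeholders; check `module avail` and "Partitions on M3" for the values that apply to your project.

```bash
#!/bin/bash
#SBATCH --job-name=example-job
#SBATCH --account=ab12               # placeholder: your project's Slurm account code
#SBATCH --partition=comp             # placeholder: pick a partition from "Partitions on M3"
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=4
#SBATCH --mem=16G
#SBATCH --time=02:00:00
#SBATCH --output=%x-%j.out           # job name (%x) and job ID (%j) in the log file name

module purge
module load python/3.10              # placeholder module; list real ones with `module avail`

srun python my_script.py             # my_script.py is a stand-in for your own workload
```

Submit it with `sbatch job.sh`, watch it with `squeue -u $USER` (or `show_job`, as covered under "Checking job status"), and cancel it with `scancel <jobid>`.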
Communities
- MX2 Eiger
- The Big Picture
- Authorising MASSIVE to download your data (CAP Leaders only)
- Creating research projects (research lab leaders only)
- Requesting access to M3
- Getting started on M3
- Connecting to M3
- Accessing your MX data
- Reprocessing your MX data
- Recommended Settings
- Known Issues
- Cannot open or read filename.tmp error
- Need help?
- Machine Learning
- Software
- Reference datasets
- GPUs on M3
- Training
- Community Engagement
- Quick guide for checkpointing
- Why checkpointing?
- Neuroimaging
- Using Slurm to submit a simple FSL job (see the sketch after this list)
- Background
- Data and scripts
- Cryo EM
- Relion
- The Graphical Interface
- Using the Queue
- Motion Correction
- 2D/3D Classification & Refinement
- Particle Polishing
- Access to CryoSPARC on M3
- Cryo EM Pre-Processing Tool
- Topaz script
- Cryo EM Benchmarking and Optimisation
- Bioinformatics
- Requesting an account on M3
- The Genomics partition
- Getting started with the Bioinformatics module
- Importing the Bioinformatics module environment
- Installing additional software with Bioconda
- Pipelines and workflow managers on M3
- Running NextFlow on M3
- FAQ
- DGX
- Hardware
- How to access the DGX hardware
- What jobs are suitable?
- How do you demonstrate a suitable project/job?
- Data Collections
- Machine learning
- ImageNet 2012 (ILSVRC2012)
- ImageNet 2015 Object Detection Data (ILSVRC2015 DET)
- International Skin Imaging Collaboration 2019 (ISIC 2019)
- NIH Chest X-ray Dataset (NIH CXR-14)
- Stanford Natural Language Inference (SNLI) Corpus
- COCO (Common Objects in Context) 2017
- AlphaFold
- AlphaFold v2 - AlphaFold-Multimer release
- Neuroimaging
- Human Connectome Project Dataset (HCP): HCP-1200
- Lifespan Human Connectome Project Development
- Lifespan Human Connectome Project Aging
- Baby Connectome Project
- Human Connectome Project for Early Psychosis
- Developing Human Connectome Project (dHCP)
- Brain Genomics Superstruct Project (GSP)
- Nathan Kline Institute Rockland Sample (NKI-RS): Neuroimaging Release
- Genomes
- BlastDB
- Requesting a data collection
- XNAT
- Create an account at Monash-XNAT
- Request to create a project at Monash-XNAT
- Mirroring data from Alfred-XNAT to Monash-XNAT
- Pulling Data from Monash-XNAT to MASSIVE M3
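The Neuroimaging entry "Using Slurm to submit a simple FSL job" above is the kind of workflow a concrete script clarifies, so here is a minimal sketch under stated assumptions: the module name `fsl` and the input/output filenames are placeholders, and `bet`'s fractional-intensity threshold of 0.5 is simply its common default.

```bash
#!/bin/bash
#SBATCH --job-name=fsl-bet
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=1
#SBATCH --mem=8G
#SBATCH --time=01:00:00

module load fsl                      # assumed module name; confirm with `module avail fsl`

# FSL's brain extraction tool (bet): strip the skull from a T1-weighted image.
# Input and output filenames are placeholders for your own data.
bet sub-01_T1w.nii.gz sub-01_T1w_brain.nii.gz -f 0.5
```

The same pattern (load a module, run one tool per job) carries over to the other community software listed above; array jobs, covered under "Running Array Jobs", are the usual way to fan this out across many subjects.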
FAQs
- MASSIVE M3 on Rocky Linux 2024
- What is the plan?
- What has not changed
- Important Considerations with Strudel2
- How to access the Rocky Linux front-facing nodes?
- How do I submit batch jobs to Rocky Linux nodes?
- What modules are available?
- Avizo/Amira on Rocky
- How do I request software to be installed on Rocky Linux?
- Very long JupyterLab queue times
- sbatch: error: Batch job submission failed: Requested node configuration is not available
- Frequently asked questions
- Accounts
- Jobs
- M3, desktops and Strudel
- About Field of Research (FOR) and Socio-Economic Objective (SEO) codes
- Miscellaneous
- My HOME directory is full!
- How do I connect to M3 if I can’t use a Strudel desktop?
- The user_info command
- The ncdu command
- Common causes of your disk filling up (see the sketch after this list)
- Large Log Files: ~/.vnc/
- Conda environments and packages: ~/.conda/
- Cache Folders
- Large data files
- Other disk quota issues
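Since the last few FAQ entries walk through diagnosing a full home directory, here is a short shell sketch stringing together the commands and paths those entries name. `user_info` and `ncdu` are the tools the FAQ itself refers to; treat the cleanup lines as suggestions and verify what you are deleting first.

```bash
# Step 1: see your quota and usage (user_info is the summary command named above)
user_info

# Step 2: find what is taking the space, interactively
ncdu ~

# Step 3: check the usual culprits named in the FAQ above
du -sh ~/.vnc ~/.conda ~/.cache 2>/dev/null

# Step 4: reclaim space (verify before deleting!)
rm ~/.vnc/*.log                      # stale desktop session logs
conda clean --all                    # cached conda packages and tarballs
```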