Introduction and agenda
|
|
About NCI Gadi
|
Gadi is a much larger HPC than Artemis
Access is merit-based or paid for
Log in to Gadi via ssh to gadi.nci.org.au
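
A minimal login sketch (the username abc123 is a placeholder; substitute your own NCI username):

```shell
# Log in to Gadi with your NCI username
ssh abc123@gadi.nci.org.au

# Optional convenience: add an alias to ~/.ssh/config so `ssh gadi` works
# Host gadi
#     HostName gadi.nci.org.au
#     User abc123
```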
|
Accounting
|
Service units (SU) are the unit of compute charge on Gadi, and projects have a finite allocation, usually quoted in KSU (1 KSU = 1000 SU)
nci_account -P <project> shows SU usage and remaining allocation for a project
lquota shows disk and inode quota and usage per project
nci-files-report -g <project> -f <filesystem> breaks down disk and inode usage by user
Unused KSU are forfeited and can count against future allocation applications
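
The accounting commands above, sketched with a hypothetical project code xy00 and the scratch filesystem (these commands exist only on Gadi):

```shell
# SU usage and remaining allocation for project xy00
nci_account -P xy00

# Disk and inode quotas and usage for your projects
lquota

# Per-user disk and inode usage on /scratch for project xy00
nci-files-report -g xy00 -f scratch
```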
|
PBS jobs and Gadi-specific commands
|
Your Artemis PBS scripts should port to Gadi with only a few small changes
Gadi has some custom commands and directives
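
A sketch of a Gadi PBS script showing the Gadi-specific directives; the project code xy00, module version, and script name are placeholders, so check `module avail` and your own project details:

```shell
#!/bin/bash
#PBS -P xy00                              # project to charge SU to (placeholder)
#PBS -q normal                            # Gadi queue name
#PBS -l ncpus=4
#PBS -l mem=16GB
#PBS -l walltime=02:00:00
#PBS -l storage=gdata/xy00+scratch/xy00   # Gadi-specific: filesystems the job needs
#PBS -l wd                                # Gadi-specific: start in the submission directory

module load python3/3.11.0                # example module; confirm with `module avail`
python3 my_script.py
```

Note that, unlike Artemis, Gadi requires an explicit `-l storage` directive for every filesystem the job will touch.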
|
Data transfer
|
Gadi’s data transfer queue is copyq
Gadi’s data mover node is gadi-dm.nci.org.au
Use Gadi copyq or Artemis dtq depending on your data transfer requirements
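
A transfer sketch using the data mover node; the username, project code, and paths are placeholders:

```shell
# Interactive transfer via the dedicated data mover node (small/medium transfers)
rsync -avP results/ abc123@gadi-dm.nci.org.au:/g/data/xy00/results/

# Large or long-running transfers belong in a copyq job instead, e.g.:
# qsub -q copyq -P xy00 -l ncpus=1,mem=4GB,walltime=04:00:00,storage=gdata/xy00 transfer.sh
```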
|
Software
|
Unlike our Artemis support, NCI will only rarely install software globally upon request, but they will assist you with local installations
You can make your locally installed software compatible with module commands by following the steps covered earlier
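
A minimal sketch of a private modulefile, assuming a hypothetical tool installed under your home directory (all paths and names are placeholders):

```shell
# Create a private modulefile directory
mkdir -p ~/modules/mytool

# Write a minimal Tcl modulefile that puts the tool on PATH
cat > ~/modules/mytool/1.0 <<'EOF'
#%Module1.0
prepend-path PATH /home/123/abc123/apps/mytool/1.0/bin
EOF

# On Gadi, register the directory and load the module:
#   module use ~/modules
#   module load mytool/1.0
```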
|
Optimisation
|
NCI monitors job performance, and users with consistently underperforming jobs will be singled out!
Time spent benchmarking and optimising key jobs in your workflow can save you considerable walltime and KSU
|
Example parallel job
|
As Gadi does not support PBS job arrays, we recommend OpenMPI with nci-parallel to distribute parallel tasks across CPUs and nodes
The included examples can hopefully be adapted to your own work
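
A sketch of an nci-parallel job; the project code, module versions, and resource requests are placeholders, and option names should be confirmed with `nci-parallel --help` on Gadi:

```shell
#!/bin/bash
#PBS -P xy00
#PBS -q normal
#PBS -l ncpus=96
#PBS -l mem=380GB
#PBS -l walltime=04:00:00
#PBS -l storage=scratch/xy00
#PBS -l wd

module load openmpi/4.1.4          # versions are examples; check `module avail`
module load nci-parallel/1.0.0a

# cmds.txt lists one task command per line, e.g. "./process_sample.sh sample01";
# nci-parallel distributes these over the MPI ranks
mpirun -np $PBS_NCPUS nci-parallel --input-file cmds.txt --timeout 4000
```

This pattern replaces an Artemis job array: instead of one array index per task, each line of cmds.txt is one task.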
|