General
The cluster is open to all approved university researchers, and each application must be sponsored by a principal investigator (PI), a faculty member of the university. The PI also needs to apply for an account if they would like to access the cluster themselves. Please refer to the X-GPU cluster website for what resources are available and how to apply for an account.
You can access xgpu.ust.hk, the login node of the cluster, with Secure Shell (SSH) on campus. On the Windows platform, you can use a free SSH client such as PuTTY. You need to be on the campus network or HKUST Wi-Fi (SSID: eduroam) to access the login node. If you are off campus, you can access the login node via VPN (Virtual Private Network), or send us your desktop's IP address so that we can grant it access.
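For example, from a terminal on a campus machine, a typical login looks like this (your_username is only a placeholder for your own cluster account name):
ssh your_username@xgpu.ust.hk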
You can use an SFTP (SSH File Transfer Protocol) client such as FileZilla on Windows to transfer files.
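On macOS/Linux (or with command-line tools on Windows), the standard sftp and scp commands also work; the file and remote path below are only illustrative:
sftp your_username@xgpu.ust.hk
scp mydata.tar.gz your_username@xgpu.ust.hk:/home/your_username/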
A compute node provides processors, memory and local disk as resources. Resource allocation limits in the cluster are based on CPU cores only, and the usage quota is group based, shared among the members of a PI group. For your usage quota, please refer to the Cluster Resource Limits page; the details depend on which PI group you belong to.
The default disk quota for each user is 100GB for /home, plus a 10TB scratch disk shared among the members of a group (for details of the storage system, please refer to the Cluster Storage page).
To check the disk usage of your scratch file system ($HOME/xgpu-scratch), use the command
To check the disk usage of your home file system (/home), use the command quota.
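As a generic illustration (the cluster's recommended commands may differ), you can check scratch usage and your home quota from the login node as follows:
du -sh $HOME/xgpu-scratch    # summarize total space used under the scratch directory
quota -s                     # report /home quota usage in human-readable units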
You can compile and develop your application on the login node, xgpu.ust.hk. To run it, you have to submit a job through SLURM, the resource management and job scheduling system of the cluster. Details can be found on the Job Scheduling System page.
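For example, a batch job script (see the Slurm section below for what it should contain) is submitted with sbatch; the script name here is only a placeholder:
sbatch my_gpu_job.sh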
Slurm
You can use the command squeue -u $USER to check your job status. The ST (job state) field in the output shows the job status; typical states are R (running), PD (pending) and CG (completing).
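The default squeue output looks roughly like the following; the job ID, partition, job name and node name shown are illustrative only:
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
  12345       gpu   my_job username  R    1:02:03      1 xgpu-01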
The group limits for a PI group in a partition are as follows:
- GrpJobs: the total number of jobs allowed to run at any given time.
- GrpNodes: the total number of nodes allowed to be in use at any given time.
- GrpSubmitJobs: the total number of jobs, running and waiting, allowed in the system at any given time.
- Maximum WallTime: the maximum run time limit for jobs in the partition.
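You can usually inspect partition-level settings, such as the maximum wall time, with scontrol; the partition name below is an assumption:
scontrol show partition gpu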
A possible scenario is that one or more higher-priority jobs exist in the partition. For example, suppose only one node is idle when a job asking for 2 nodes is submitted; that job is pending. If another job asking for 1 node is submitted later, it will also be pending, because a higher-priority pending job (the earlier job asking for 2 nodes) is ahead of it. Priority is related to submission time.
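In squeue output this situation shows up in the NODELIST(REASON) column: the highest-priority pending job typically shows (Resources) while later jobs show (Priority). A sketch of what you might see (job IDs and names are illustrative):
  JOBID PARTITION     NAME     USER ST       TIME  NODES NODELIST(REASON)
  10001       gpu  big_job    userA PD       0:00      2 (Resources)
  10002       gpu tiny_job    userB PD       0:00      1 (Priority)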
For SLURM job submission, you need to explicitly declare the total number of GPU devices to use in the script using a line as follows:
#SBATCH -N number_of_nodes -n number_of_CPU_cores --gres=gpu:number_of_GPU_devices
For example, the following line declares to use 4 CPU cores and 2 GPU devices in 1 node:
#SBATCH -N 1 -n 4 --gres=gpu:2
If you do not declare the number of GPU devices to use with the "--gres" option, SLURM will NOT allocate any GPU device for the job, and the application will NOT be able to find any GPU device available. For a more detailed sample job submission script, you may refer to the example on this page.
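As a minimal sketch of a complete submission script (the partition name, module name and application command are assumptions; replace them with what your group actually uses):
#!/bin/bash
#SBATCH -J my_gpu_job               # job name
#SBATCH -p gpu                      # partition (assumed name)
#SBATCH -N 1 -n 4 --gres=gpu:2      # 1 node, 4 CPU cores, 2 GPU devices
#SBATCH -t 24:00:00                 # wall time limit

module load cuda                    # load a CUDA environment module (name assumed)
python my_training_script.py        # replace with your own application
Save it as, e.g., my_gpu_job.sh and submit it with sbatch my_gpu_job.sh.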
Software
In general, you can install software in your own /home/$USER directory or in the /scratch/PI/<pi_group> directory. Please note that you are responsible for the licenses and copyright of the software you install in the cluster. You should also adhere to ITSC’s Acceptable Use Policy.
- Method 1: Use Anaconda
e.g., after you have created the virtual environment and activated it, install tensorflow-gpu in the environment:
module load anaconda3 ; conda install -c anaconda tensorflow-gpu
- Method 2: Use Singularity
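As a sketch of how Singularity is commonly used for GPU workloads (the container image is only an example, and the module name on this cluster is an assumption):
module load singularity                                      # load the Singularity module (name assumed)
singularity pull docker://tensorflow/tensorflow:latest-gpu   # build a local .sif image from Docker Hub
singularity exec --nv tensorflow_latest-gpu.sif python my_script.py   # --nv makes the host GPUs visible in the container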