Software

System Software

Cluster management software: Rocks Cluster Software 6.2 based on the CentOS 6.6 64-bit operating system
Cluster file system: Lustre 2.8.0
Resource management and job scheduling system: Simple Linux Utility for Resource Management (SLURM) 14.11.11
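Jobs run on the compute nodes through SLURM. As a minimal sketch, a batch script of the following form can be submitted with sbatch; the job name, resource requests and executable are placeholders to adjust:
  #!/bin/bash
  #SBATCH -J my_job         # job name
  #SBATCH -N 1              # number of nodes
  #SBATCH -n 1              # number of tasks
  #SBATCH -t 01:00:00       # wall-time limit (HH:MM:SS)
  ./your_application
Submit the script with sbatch and check its status with squeue.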

Application Software

  • Anaconda v5.2 with Python 2.7
    • Software location - /opt/anaconda2
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/anaconda2.sh
  • Anaconda v5.2 with Python 3.6
    • Software location - /opt/anaconda3
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/anaconda3.sh
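    • As a usage sketch, after sourcing the setup script you can create and activate a private conda environment in your home directory (the environment name myenv and the packages below are examples; the same applies to the Python 2.7 installation):
      conda create --name myenv numpy scipy
      source activate myenv
      python --version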
  • cmake v3.16.4
    • The system default version is v2.8.12.2
    • Software location – /usr/local/cmake
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/cmake-3.16.4.sh
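    • As an example, a typical out-of-source build with the newer cmake (the project layout below is a placeholder):
      cmake --version               # should report 3.16.4 after sourcing the setup script
      mkdir build && cd build
      cmake ..                      # configure the project located in the parent directory
      make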
      
  • CUDA Toolkit v9.2
    • Software location – /usr/local/cuda
    • The binary path /usr/local/cuda/bin and the library path /usr/local/cuda/lib64 are available as the system default.
    • Note: the software is only available on compute nodes equipped with GPU coprocessors, where the CUDA environment is set up by default. For GPU application development with CUDA on a GPU device, please email hpcadmin@ust.hk to apply for direct access to one of those nodes.
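    • As an example, a CUDA source file can be compiled on a GPU node with nvcc from the default path (file names are placeholders):
      nvcc --version                # should report release 9.2
      nvcc -O2 -o my_app my_app.cu
      ./my_app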
  • CUDA Toolkit v8.0
    • The system default version is v9.2.
    • Software location – /usr/local/cuda
    • Note: if you need to use this older version for your application, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/cuda-8.0.sh
  • CUDA Toolkit v7.5
    • The system default version is v9.2.
    • Software location – /usr/local/cuda-7.5
    • Note: if you need to use this older version for your application, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/cuda-7.5.sh
  • cuDNN (CUDA Deep Neural Network library) v7.1
    • The library is intended for use with CUDA Toolkit v9.2.
    • The include and library paths are /usr/local/cuda/include and /usr/local/cuda/lib64 respectively.
    • Note: the software is only available in compute nodes equipped with GPU coprocessors.
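    • A typical compile-and-link line for a cuDNN program on a GPU node looks like the following (file names are placeholders):
      nvcc -I/usr/local/cuda/include -L/usr/local/cuda/lib64 -lcudnn -o my_app my_app.cu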
  • cuDNN (CUDA Deep Neural Network library) v5.1
    • The library is intended for use with CUDA Toolkit v8.0 and v7.5.
    • The include and library paths are /usr/local/cuda/include and /usr/local/cuda/lib64 respectively.
    • Note: the software is only available in compute nodes equipped with GPU coprocessors.
  • CUDPP (CUDA Data Parallel Primitives Library) v2.2
    • Software location – /usr/local/cudpp
    • The include and library path are /usr/local/cudpp/include and /usr/local/cudpp/lib respectively.
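    • A typical compile-and-link line against the library on a GPU node looks like the following (file names are placeholders):
      nvcc -I/usr/local/cudpp/include -L/usr/local/cudpp/lib -lcudpp -o my_app my_app.cu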
  • Gaussian 09 Rev D.01
    • Software location – /usr/local/g09
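    • As a minimal sketch, assuming the standard Gaussian profile script layout under the install location above (input and output file names are placeholders):
      export g09root=/usr/local                  # assumed from the software location above
      source $g09root/g09/bsd/g09.profile        # standard Gaussian environment script
      g09 < my_input.com > my_output.log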
  • GCC (GNU Compiler Collection) v4.9.2
    • The system default version is v4.4.7
    • Software location – /opt/rh/devtoolset-3
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/gcc-g++-4.9.2.sh
      
  • GSL (GNU Scientific Library) v2.1
    • Software location – /usr/local/gsl
    • Run /usr/local/gsl/bin/gsl-config to identify the compile and link flags for your build. If you use the dynamic library, set LD_LIBRARY_PATH appropriately when you run your application, as in the sketch below.
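    • A minimal sketch, assuming the shared library lives under /usr/local/gsl/lib (check the -L path printed by gsl-config; file names are placeholders):
      gcc -o my_prog my_prog.c $(/usr/local/gsl/bin/gsl-config --cflags --libs)
      export LD_LIBRARY_PATH=/usr/local/gsl/lib:$LD_LIBRARY_PATH
      ./my_prog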
  • HDF5 v1.8.5.patch1-9 (CentOS 6 rpm)
  • Intel Parallel Studio XE 2018 Update3
    • Software location – /opt/intel/psxe2018U3
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/intel_psxe2018U3.sh
    • Note: for MPI applications submitted with a SLURM script, specify the InfiniBand network interconnect as below before running the application; a sample batch script follows.
      export I_MPI_FABRICS=ofa
      mpirun ./your_application
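    • A minimal batch script along these lines should work; the node and task counts are placeholders to adjust:
      #!/bin/bash
      #SBATCH -N 2
      #SBATCH -n 48
      #SBATCH -t 01:00:00
      source /usr/local/setup/intel_psxe2018U3.sh
      export I_MPI_FABRICS=ofa
      mpirun ./your_application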
  • Intel Parallel Studio XE 2016 Update3
    • Software location – /opt/intel
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/intel_psxe2016U3.sh
    • Note: for MPI applications submitted with a SLURM script, specify the InfiniBand network interconnect as below before running the application.
      export I_MPI_FABRICS=ofa
      mpirun ./your_application
  • JDK (Java Standard Edition Development Kit) v8u181
    • Software location – /usr/java/latest/
  • Matlab R2017a
    • Software location – /usr/local/matlab
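    • For non-interactive use, Matlab can be run in batch mode, for example (the script name is a placeholder):
      /usr/local/matlab/bin/matlab -nodisplay -nosplash -r "my_script; exit"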
  • MVAPICH2 v2.2rc1
    • Software location – /usr/local/mvapich2-2.2rc1-gcc/
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/mvapich2-2.2rc1-gcc.sh
    • Note: to run an MPI application with SLURM, use srun as the process manager, as below; a sample batch script follows.
      srun --mpi=pmi2 ./your_application
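    • A minimal batch script along these lines should work; the node and task counts are placeholders to adjust:
      #!/bin/bash
      #SBATCH -N 2
      #SBATCH -n 48
      #SBATCH -t 01:00:00
      source /usr/local/setup/mvapich2-2.2rc1-gcc.sh
      srun --mpi=pmi2 ./your_application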
  • NCAR Graphics v6.1.0
    • Software location – /usr/local/ncarg
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/ncarg-setup.sh
  • NetCDF v4.1.1 (CentOS 6 rpm)
  • OpenFOAM v4.1
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/openmpi-2.0.0.sh
      source /usr/local/setup/openfoam-4.1.sh
      
    • To visualize the results in ParaView, enable X forwarding when you SSH into the cluster, provided your desktop runs an X server (e.g. on Linux):
      ssh -X your_username@hpc2.ust.hk
      Note: since the login node is shared by all users of the cluster, only lightweight visualization tasks should be performed there. For intensive visualization, please download the results and view them on your own workstation.
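    • As a quick sketch, a case can be copied from the bundled tutorials and run with the standard OpenFOAM utilities (the exact tutorial path may differ between versions):
      source /usr/local/setup/openmpi-2.0.0.sh
      source /usr/local/setup/openfoam-4.1.sh
      mkdir -p $FOAM_RUN
      cp -r $FOAM_TUTORIALS/incompressible/icoFoam/cavity/cavity $FOAM_RUN
      cd $FOAM_RUN/cavity
      blockMesh                     # generate the mesh
      icoFoam                       # run the solver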
  • Open MPI v2.0.0
    • Software location – /usr/local/openmpi-2.0.0
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/openmpi-2.0.0.sh
  • PGI Cluster Development Kit v15.10
    • Software location – /usr/local/pgicdk-15.10
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/pgicdk-15.10.sh
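    • As an example, source the setup script and compile with the PGI compilers (file names are placeholders):
      pgcc -fast -o my_app my_app.c
      pgfortran -fast -o my_app my_app.f90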
  • Python v2.7.8
    • The system default version is v2.6.6
    • Software location – /opt/rh/python27
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/python-2.7.8.sh
    • Pip v9.0.1, Numpy v1.7.1 and Scipy v0.12.1 are available; additional packages can be installed per user as below.
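    • Example of a per-user package installation (the package name is a placeholder):
      pip install --user my_package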
  • Python v3.3.2
    • The system default version is v2.6.6
    • Software location – /opt/rh/python33
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/python-3.3.2.sh
    • Pip v9.0.1, Numpy v1.7.1 and Scipy v0.12.1 are available.
  • R v3.5.0
    • R and Rscript can be invoked directly from the system default path, for example as below.
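    • Example of running an R script non-interactively (the script name is a placeholder):
      Rscript my_script.R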
  • SAS v9.4
    • Software location – /usr/local/SAS
    • To access it, add the following to your ~/.bash_profile and log in again for the change to take effect.
      source /usr/local/setup/sas94.sh
    • Sample SLURM script:
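      A script along these lines should work; the resource requests and program name are placeholders to adjust:
      #!/bin/bash
      #SBATCH -N 1
      #SBATCH -n 1
      #SBATCH -t 01:00:00
      source /usr/local/setup/sas94.sh
      sas my_program.sas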
  • Singularity v3.6
    • Software location - /usr/bin/singularity
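    • Typical usage is to pull a container image and run a command inside it, for example (the image and command are placeholders):
      singularity pull docker://ubuntu:18.04
      singularity exec ubuntu_18.04.sif cat /etc/os-release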