System Software
Cluster management software: OpenHPC 1.3 based on the CentOS 7.7 64-bit operating system
Cluster file system for Scratch: BeeGFS 7.1.4
Resource management and job scheduling system: Simple Linux Utility for Resource Management (SLURM) 18.08.8
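For illustration, a minimal SLURM batch script might look like the sketch below; the job name, partition name, and resource limits are placeholders, so check the HPC3 SLURM documentation for the actual partition names and limits:

```bash
#!/bin/bash
#SBATCH --job-name=hello        # name shown in the queue
#SBATCH --partition=general     # placeholder partition name; use an actual HPC3 partition
#SBATCH --nodes=1               # number of nodes requested
#SBATCH --ntasks=1              # number of tasks (processes)
#SBATCH --time=00:10:00         # wall-clock limit (HH:MM:SS)

# Commands executed on the allocated node
hostname
```

Submit the script with `sbatch job.sh` and monitor it with `squeue -u $USER`.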
Application Software
Modules
HPC3 uses Lmod to manage installations for most application software. With the modules system, users can set up their shell environment to access applications, making it easier to run and compile software. The modules system also allows multiple versions of the same software to coexist, and decouples applications from the version and dependencies of the underlying OS.
When you log in to the cluster, you start with a default, bare-bones environment in which minimal software is available. The module system is used to manage the user environment and to activate software packages on demand. To use software installed on HPC3, the corresponding software module must first be loaded. When a module is loaded, the system sets or modifies the corresponding user environment variables to enable access to the software package provided by that module. For example, the $PATH environment variable might be updated so that the appropriate executables for that package can be used.
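For illustration, the effect of loading a module on $PATH can be seen in a session like the following; the installation path shown is an assumption, and the actual location on HPC3 may differ:

```bash
# Before loading, the R executables are not on the search path
$ which Rscript
/usr/bin/which: no Rscript in (/usr/local/bin:/usr/bin:/bin)

# Loading the module updates $PATH (and related variables)
$ module load R
$ which Rscript
/opt/ohpc/pub/libs/gnu8/R/3.6.1/bin/Rscript   # illustrative path only
```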
Module Usage
The most common module commands are listed in the following table for reference.
Module command | Description |
---|---|
module list | List loaded modules in current environment |
module avail | List available software |
module spider R | Search for particular software, e.g., R |
module whatis R | Display information about a particular module, e.g., R |
module load R | Load a particular module, e.g., R |
module load anaconda3/2021.05 | Load a particular module with a specific version, e.g., anaconda3 version 2021.05 |
module unload R | Unload a particular module, e.g., R |
module swap gnu8 intel | Swap modules, e.g., replace the default GNU 8 compiler with the Intel compiler |
module purge | Remove all modules |
Table 1. Common module commands
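A typical interactive session combining these commands might look like the following (output omitted; module names follow Table 2 below):

```bash
module avail                    # list all software available on the cluster
module spider R                 # search for modules providing R
module load R                   # load the default version of R
module load anaconda3/2021.05   # load a specific version of anaconda3
module list                     # confirm which modules are now loaded
module swap gnu8 intel          # replace the GNU 8 compiler with the Intel compiler
module purge                    # remove all loaded modules
```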
Module List
The following software is available as loadable modules:
Module name | Version | Description |
---|---|---|
EasyBuild | 3.9.4 | Build and installation framework |
anaconda3 | 2020.02 | Package manager for Python/R |
anaconda3 | 2021.05 | Default version, Package manager for Python/R |
autotools | | Default loaded, Developer utilities |
charliecloud | 0.11 | Lightweight user-defined software stacks for high-performance computing |
cmake | 3.15.4 | Open-source, cross-platform family of tools designed to build, test and package software |
cuda | 10.2 | CUDA toolkit for NVIDIA GPU, only usable on GPU nodes |
cuda | 11.2 | Default version, CUDA toolkit for NVIDIA GPU, only usable on GPU nodes |
gnu | 5.4.0 | GNU compilers |
gnu8 | 8.3.0 | Default loaded, GNU compilers |
gnuplot | 5.2 | Portable command-line driven graphing utility |
gromacs | 2021.4 | Molecular dynamics package mainly designed for simulations of proteins, lipids, and nucleic acids |
hwloc | 2.1.0 | Portable abstraction of the hierarchical topology of modern architectures |
intel | 19.1.1.217 | Intel Parallel Studio XE 2019 |
llvm5 | 5.0.1 | Compiler infrastructure LLVM |
mathematica | 12.2.0 | Symbolic mathematical computation program |
matlab | R2019b | MATLAB |
matlab | R2020a | MATLAB |
matlab | R2020b | Default version, MATLAB |
nvhpc | 21.9 | NVIDIA HPC Software Development Kit |
ohpc | | Default loaded, OpenHPC module to load autotools, prun, gnu8 and openmpi3 |
openmpi3 | 3.1.4 | Default loaded, An open source Message Passing Interface implementation |
papi | 5.7.0 | Performance Application Programming Interface |
pmix | 2.2.2 | Process Management Interface |
prun | 1.3 | Default loaded, job launch utility for multiple MPI families |
R | 3.6.1 | A programming language for statistical computing |
singularity | 3.4.1 | Container platform |
subversion | 1.13.0 | Open source version control system |
valgrind | 3.15.0 | An instrumentation framework for building dynamic analysis tools |
Table 2. Available modules on HPC3
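Since gnu8, openmpi3, and prun are loaded by default via the ohpc module, a simple MPI program can be compiled and launched roughly as in the sketch below; hello.c is a placeholder source file, and prun is expected to be invoked inside a SLURM job allocation:

```bash
module list                   # verify gnu8, openmpi3 and prun are loaded
mpicc -O2 -o hello hello.c    # compile an MPI program with the GNU toolchain
prun ./hello                  # launch it; prun selects the launcher for the loaded MPI family
```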
Software Installation
In general, if the software listed above does not meet their needs, users can install software in their own home directory or in their group's shared directory. Please note that users are responsible for the licenses and copyrights of any software they install on the cluster. Users should also adhere to ITSC's Acceptable Use Policy.
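For a typical autotools-based package, a local installation might look like the sketch below; the package name and version are placeholders, and $HOME/apps is just one possible install prefix:

```bash
# Unpack and build a typical autotools-based package (names are placeholders)
tar xzf mypackage-1.0.tar.gz
cd mypackage-1.0
./configure --prefix=$HOME/apps/mypackage-1.0   # install under the home directory
make
make install

# Make the locally installed binaries visible to the shell
export PATH=$HOME/apps/mypackage-1.0/bin:$PATH
```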