Install
Get the packages
The packages are available at this link. If you encounter problems downloading them, contact the Developers.
From version 2.0, TBPLaS is released as two separate packages, tbplas-cpp and tbplas-py, which contain the C++ and Python APIs, respectively. tbplas-py was originally the tbplas package and inherits its version number, i.e., its first release has the version number 2.0. In contrast, tbplas-cpp is a new package and is versioned from zero, so its first release has the version number 0.0. Both packages need to be installed for full functionality. Pre-compiled installers are also available: for tbplas-py the wheel installer is cross-platform, while for tbplas-cpp the pre-compiled installer is Windows-only. For optimal performance and full functionality, building tbplas-cpp natively from source code is recommended. If you prefer a native build, proceed with the following sections; otherwise, go ahead to Pre-compiled installers.
Prerequisites
Operating system
TBPLaS has been developed and tested on Linux and should work on most Linux distributions. Since macOS is also a Unix-like operating system, installation may be possible if the required packages have been properly installed. On Windows, it is recommended to use a virtual machine, e.g., VirtualBox or VMware; WSL is also an alternative. If you insist on a native build on Windows, try MSYS2.
Compilers
TBPLaS requires a C++ compiler supporting the C++17 standard and the OpenMP 4.0 specification. The following compilers have been tested:
GCC 7.5.0
Intel oneAPI 2023.1.0
AMD AOCL 5.0.0 (Clang 17.0.6)
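As a quick sanity check (illustrative; assumes g++ is on PATH), the following snippet compiles a trivial program with -std=c++17 and -fopenmp and prints the compiler's _OPENMP date macro, which should be at least 201307 (the OpenMP 4.0 release date):

```shell
# Illustrative check: confirm the compiler accepts -std=c++17 and -fopenmp.
# Skips silently if g++ is not installed.
if command -v g++ >/dev/null 2>&1; then
    echo 'int main() { return 0; }' > /tmp/omp_check.cpp
    g++ -std=c++17 -fopenmp -o /tmp/omp_check /tmp/omp_check.cpp && echo "C++17 + OpenMP: OK"
    # The _OPENMP macro encodes the supported OpenMP release date,
    # e.g., 201511 (OpenMP 4.5) for GCC 7.5.
    echo | g++ -fopenmp -dM -E -x c++ - | grep '_OPENMP'
fi
```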
Compatibility Notice
MSVC (the Microsoft Visual C++ compiler) does not fully support OpenMP 4.0. On Windows, we recommend GCC from the UCRT64 environment of MSYS2; in fact, we use that environment to build the pre-compiled installers.
Builder
CMake>=3.15 is required to build TBPLaS. We have tested version 3.29.
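You can verify the installed CMake version against the >=3.15 requirement (illustrative check):

```shell
# Print the CMake version; the first line looks like "cmake version 3.29.0".
# Skips silently if cmake is not installed.
if command -v cmake >/dev/null 2>&1; then
    cmake --version | head -n 1
fi
```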
Parallel / GPU computation framework (optional)
GPU computation support requires either the NVIDIA CUDA toolkit or the NVIDIA HPC SDK. Parallelization over k-points and random samples requires MPI. The following versions have been tested:
NVIDIA CUDA toolkit 12.4, 12.8
NVIDIA HPC SDK 23.5
MPICH 4.1.2
Intel oneAPI 2023.1.0
Math libraries (optional)
TBPLaS utilizes the Eigen library for dense linear algebra calculations, which can use BLAS and LAPACK as backends under the hood. We have also implemented interfaces to call BLAS and LAPACK directly. Vendor-provided sparse matrix libraries supporting the CSR format can be utilized for optimal performance in TBPM calculations, while searching for eigenvalues within a specific energy range requires FEAST>=4.0. The following versions have been tested:
Netlib LAPACK 3.12.0
OpenBLAS 0.3.28
Intel oneAPI 2023.1.0
AOCL 5.0.0
FEAST 4.0
I/O (optional)
Binary I/O requires the HDF5 library with its C++ interface. If it is not available, TBPLaS will use the default text-based I/O. We have tested version 1.14.2.
Python environment
TBPLaS requires Python>=3.7. In addition to the Python interpreter, development headers are also required, as well as the NumPy, SciPy, Matplotlib, Cython, setuptools>=40.8.0 and build>=1.2.2 packages. The LAMMPS and DeepH-pack interfaces require ASE and h5py as optional dependencies. The following versions have been tested:
Python 3.12.9
NumPy 1.26.3, 2.2.4
SciPy 1.11.4, 1.15.2
Matplotlib 3.8.0, 3.10.0
Cython 3.0.6, 3.0.11
setuptools 40.8.0
build 1.2.2
ASE 3.24.0
h5py 3.12.1
The most convenient way to install these packages is to use the Anaconda or WinPython distribution. After installation, first create a virtual environment, then install the packages with the package manager, e.g., conda or pip.
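As a hedged sketch with conda, the environment could be prepared as follows. The environment name tbplas and Python 3.12 match the tested versions listed above; the package list is taken from the prerequisites, with lower bounds only where the guide states them:

```shell
# Package list from the prerequisites section above.
pkgs='numpy scipy matplotlib cython setuptools>=40.8.0 build>=1.2.2'
if command -v conda >/dev/null 2>&1; then
    conda create -y -n tbplas python=3.12
    # 'conda activate' requires an initialized shell; run 'conda init bash'
    # once beforehand if activation fails.
    conda activate tbplas
    pip install $pkgs
    pip install ase h5py   # optional: for the LAMMPS and DeepH-pack interfaces
fi
```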
Setting up the environment variables
Throughout this installation guide, we assume that all dependencies, EXCEPT the Python environment, are compiled from source code and installed into the $HOME/tbplas_dev directory. This is typically achieved by specifying the installation destination with the --prefix or CMAKE_INSTALL_PREFIX option. Note that some dependencies have their own prefix options, or do not provide such an option at all; handling these cases is beyond the scope of this guide. Users are recommended to consult the installation guides of those packages to install them correctly into $HOME/tbplas_dev.
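As an illustration of the two common prefix patterns (the package name below is a placeholder, not a real dependency), the snippet only constructs the command lines so you can adapt them to each package:

```shell
# Placeholder destination; substitute the actual package name and version.
dest=$HOME/tbplas_dev/some-package-1.0

# Autotools-based packages:
echo "./configure --prefix=$dest && make -j && make install"

# CMake-based packages:
echo "cmake -B build -DCMAKE_INSTALL_PREFIX=$dest && cmake --build build -j && cmake --install build"
```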
After the dependencies have been installed, especially from source code, some environment variables need to be configured so that the compiler and CMake can find the headers and libraries. We offer a bash script, tbplas-cpp-VERSION_CPP/tools/init.sh (replace VERSION_CPP with the actual version number), for configuring the environment variables. It can be installed by:
# Unpack the source code, replace VERSION_CPP with the actual version number
tar -xf tbplas-cpp-VERSION_CPP.tar.bz2
# Install to tbplas_dev directory
cp tbplas-cpp-VERSION_CPP/tools/init.sh $HOME/tbplas_dev
source $HOME/tbplas_dev/init.sh
# Update shell settings
echo "source $HOME/tbplas_dev/init.sh" >> $HOME/.bashrc
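The script defines helper functions such as set_env, set_mod and reset_env, which are used below. As a rough, hypothetical sketch of what a prepend-style helper may do (the actual implementations in init.sh may differ):

```shell
# Hypothetical sketch only: prepend a value to a colon-separated variable.
# The real set_env in init.sh may behave differently.
set_env() {
    local op=$1 var=$2 val=$3
    if [ "$op" = "add" ]; then
        eval "export $var=\"$val\${$var:+:\$$var}\""
    fi
}

set_env add CMAKE_PREFIX_PATH $HOME/tbplas_dev/hdf5-1.14.2
echo "$CMAKE_PREFIX_PATH"
```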
Suppose we are going to build TBPLaS with OpenBLAS 0.3.28, HDF5 1.14.2 and FEAST 4.0. The dependencies have been built from source code and installed into the $HOME/tbplas_dev directory, and the Python virtual environment we have prepared for the installation is named tbplas. Then the following bash commands will set up the relevant environment variables:
# HDF5
dest=$HOME/tbplas_dev/hdf5-1.14.2
set_mod add pkg $dest
set_env add CMAKE_PREFIX_PATH $dest
# OpenBLAS
dest=$HOME/tbplas_dev/openblas-0.3.28
set_mod add pkg $dest
set_env add CMAKE_PREFIX_PATH $dest
set_env add CMAKE_MODULE_PATH $dest/lib/cmake/openblas
unset dest
# FEAST
reset_env add FEASTROOT $HOME/tbplas_dev/FEAST/4.0
# Python environment
conda activate tbplas
Add the settings to $HOME/.bashrc if you want them to be permanently effective.
If the dependencies have been installed from a software repository, their paths are probably already included in the environment variables. In that case, skip the settings for HDF5, OpenBLAS and FEAST.
Some dependencies may have their own instructions on setting up the environment variables, e.g.,
Intel oneAPI, AOCC, AOCL, CUDA toolkit and HPC SDK. Check the official installation guides of these
dependencies for more details.
Installation
The two packages tbplas-cpp and tbplas-py can be installed independently of each other, decoupling the C++ and Python APIs. The aim of this design is to have a unified Python frontend that can be switched dynamically between C++ backends with different features, e.g., one with CUDA and the other with MPI support.
tbplas-cpp
To compile tbplas-cpp, create the build directory and change to it:
cd tbplas-cpp-VERSION_CPP
test -d build && rm -rf build
mkdir build && cd build
Then invoke CMake with the following options to configure the build:
cmake .. \
-DCMAKE_INSTALL_PREFIX=$HOME/tbplas_dev/tbplas-cpp-VERSION_CPP \
-DCMAKE_C_COMPILER=gcc \
-DCMAKE_CXX_COMPILER=g++ \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=on \
-DBUILD_EXAMPLES=on \
-DBUILD_TESTS=off \
-DBUILD_PYTHON_INTERFACE=on \
-DWITH_OPENMP=on \
-DWITH_MPI=off \
-DWITH_CUDA=off \
-DWITH_FEAST=off \
-DWITH_HDF5=off \
-DEIGEN_BACKEND=default \
-DDIAG_BACKEND=default \
-DTBPM_BACKEND=default
Interpretation of the options:
CMAKE_INSTALL_PREFIX: installation destination
CMAKE_C_COMPILER: C compiler
CMAKE_CXX_COMPILER: C++ compiler
BUILD_EXAMPLES: whether to build the example programs
WITH_OPENMP: whether to enable OpenMP-based parallelization
WITH_MPI: whether to enable MPI-based parallelization
WITH_CUDA: whether to enable GPU computation based on CUDA
WITH_FEAST: whether to enable the interface to the FEAST library
EIGEN_BACKEND: math library for general linear algebra operations
DIAG_BACKEND: math library for Hamiltonian diagonalization
TBPM_BACKEND: math library for time propagation
For example, setting WITH_MPI to on will enable MPI-based parallelization, while setting DIAG_BACKEND to openblas will utilize OpenBLAS for diagonalization-based calculations. For more details, see Miscellaneous. Customize the options according to your needs; note that some of them are mutually exclusive. If the configuration succeeds, proceed with the compilation:
make -j
The example programs will be produced in the bin subdirectory of the build directory, and the libraries and extensions in the lib subdirectory. Finally, install the files to CMAKE_INSTALL_PREFIX by:
make install
And set up the environment variables by:
reset_env add TBPLAS_CPP_INSTALL_PATH $HOME/tbplas_dev/tbplas-cpp-VERSION_CPP
reset_env add TBPLAS_CORE_PATH $TBPLAS_CPP_INSTALL_PATH/lib64
The first line defines the installation directory of tbplas-cpp, and the second line sets the location of the extensions. Add the settings to $HOME/.bashrc if you want them to be permanently effective.
GPU version
To build tbplas-cpp with GPU support, set the WITH_CUDA option to on when configuring the build, and add the CUDA_ARCHITECTURES option to the configuration options. Here is an example:
cmake .. \
-DCMAKE_INSTALL_PREFIX=$HOME/tbplas_dev/tbplas-cpp-VERSION_CPP \
-DCMAKE_C_COMPILER=gcc \
-DCMAKE_CXX_COMPILER=g++ \
-DCMAKE_BUILD_TYPE=Release \
-DBUILD_SHARED_LIBS=on \
-DBUILD_EXAMPLES=on \
-DBUILD_TESTS=off \
-DBUILD_PYTHON_INTERFACE=on \
-DWITH_OPENMP=on \
-DWITH_MPI=off \
-DWITH_CUDA=on \
-DCUDA_ARCHITECTURES=61 \
-DWITH_FEAST=off \
-DWITH_HDF5=off \
-DEIGEN_BACKEND=default \
-DDIAG_BACKEND=default \
-DTBPM_BACKEND=default
The CUDA_ARCHITECTURES option controls the target of code generation via the CUDA_NVCC_FLAGS flag:
set(_cuda_arch "compute_${CUDA_ARCHITECTURES}")
set(_cuda_code "sm_${CUDA_ARCHITECTURES}")
set(CUDA_NVCC_FLAGS " ${CUDA_NVCC_FLAGS} --compile -Xcompiler=-fPIC; -std=c++17 -gencode arch=${_cuda_arch},code=${_cuda_code}")
You must set the correct architecture of your GPU device, otherwise tbplas-cpp will not work. For better compatibility, gcc and g++ are recommended as the C/C++ compilers.
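To find the right value, you can query the compute capability of your device (this assumes a reasonably recent nvidia-smi that supports the compute_cap query field); e.g., an output of 6.1 corresponds to -DCUDA_ARCHITECTURES=61:

```shell
# Query the GPU compute capability; skips silently if no NVIDIA driver is present.
if command -v nvidia-smi >/dev/null 2>&1; then
    nvidia-smi --query-gpu=compute_cap --format=csv,noheader
fi
```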
tbplas-py
The installation of tbplas-py is much simpler. Unpack the source code and run pip by:
# Replace VERSION_PY with the actual version number
tar -xf tbplas_py-VERSION_PY.tar.bz2
cd tbplas_py-VERSION_PY
pip install .
which will install tbplas-py into the tbplas virtual environment.
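As an optional smoke test before running the full test suite (the importable module name is assumed here to be tbplas, inherited from the original package; adjust if it differs):

```shell
# Check the interpreter version required by TBPLaS and try importing the package.
# Skips silently if 'python' is not on PATH.
if command -v python >/dev/null 2>&1; then
python - <<'EOF'
import sys
assert sys.version_info >= (3, 7), "TBPLaS requires Python>=3.7"
try:
    import tbplas  # module name assumed; inherited from the original 'tbplas' package
    print("tbplas", getattr(tbplas, "__version__", "(version unknown)"))
except ImportError:
    print("tbplas not importable; check that the virtual environment is active")
EOF
fi
```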
Testing
Finally, run the test suite under tbplas_py-VERSION_PY/tests by:
cd tests
./run_tests.sh
The test suite demonstrates the calculation of band structures, density of states and many other capabilities. If everything goes well, the installation is successful.
Pre-compiled installers
tbplas-cpp
A pre-compiled installer for tbplas-cpp is available on Windows. To install the package, unpack the installer to a path without spaces or non-ASCII characters, e.g., D:\tbplas-cpp-VERSION_CPP-win64, then create a new environment variable as follows:
1. Open PowerShell and invoke the registry editor by typing regedit.exe.
2. Navigate to Computer\HKEY_CURRENT_USER\Environment.
3. Right-click and create a new string value TBPLAS_CORE_PATH with its data being the installation path, e.g., D:\tbplas-cpp-VERSION_CPP-win64.
4. Reboot for the environment variables to take effect.
tbplas-py
Open PowerShell and change to the directory containing the installer, then install the package by:
pip install .\tbplas_py-VERSION_PY-py3-none-any.whl