This page discusses the platforms on which the xSDK 1.0.0 release has been tested and contains general instructions for building, as well as more specific instructions for select high-end computing systems. See also details about obtaining the xSDK.
As more information becomes available for building the xSDK 1.0.0 release on different platforms, that information will be posted here. Check back for updates.
xSDK 1.0.0 general build instructions
1. After cloning the Spack git repo, set up the Spack environment
# For bash users
$ export SPACK_ROOT=/path/to/spack
$ . $SPACK_ROOT/share/spack/setup-env.sh

# For tcsh or csh users (note you must set SPACK_ROOT)
$ setenv SPACK_ROOT /path/to/spack
$ source $SPACK_ROOT/share/spack/setup-env.csh
1.1 Make sure proxy settings are set, if needed.
If a web proxy is required for internet access on the install machine, set the proxy environment variables appropriately. Otherwise, Spack will fail to fetch the packages you need.
# For bash users
$ export http_proxy=<your proxy URL>
$ export https_proxy=<your proxy URL>

# For tcsh or csh users
$ setenv http_proxy <your proxy URL>
$ setenv https_proxy <your proxy URL>
2. Set up Spack compilers
spack compiler find
Spack compiler configuration is stored in $HOME/.spack/$UNAME/compilers.yaml and can be checked with
spack compiler list
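For reference, a compiler entry in compilers.yaml generally looks like the following (the compiler version and paths below are illustrative, not taken from a specific xSDK test system):

# illustrative compilers.yaml entry (versions and paths are examples only)
compilers:
- compiler:
    spec: gcc@11.4.0
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    operating_system: ubuntu22.04
    target: x86_64
    modules: []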
3. Edit/update the packages.yaml file to specify any system/build tools needed for the xSDK installation.
Although Spack can install the required build tools, it can be convenient to use tools that are already installed on the system. Such preinstalled packages can be made known to Spack in the $HOME/.spack/packages.yaml config file. The following is an example from a Linux build.
packages:
  mpich:
    buildable: false
    externals:
    - spec: mpich@4.0.1%gcc@9.4.0
      prefix: /software/mpich-4.0.1
  python:
    buildable: false
    externals:
    - spec: python@3.8.10%gcc@9.4.0
      prefix: /usr
  perl:
    buildable: false
    externals:
    - spec: perl@5.30.0
      prefix: /usr
  all:
    providers:
      mpi: [mpich]
      blas: [netlib-lapack]
      lapack: [netlib-lapack]
4. Install xSDK
After the edit, xSDK packages and external dependencies can be installed with a single command:
spack install xsdk@1.0.0
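Before starting a long build, it can be helpful to preview the concretized spec that Spack will build, and to verify the installation once the build completes (standard Spack commands, shown here as a suggestion):

# preview the full dependency tree and install status
spack spec -I xsdk@1.0.0
# after the build completes, confirm the installation
spack find -v xsdk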
Note: One can install xSDK packages with CUDA enabled on NVIDIA GPUs:
spack install xsdk@1.0.0+cuda cuda_arch=70   # V100; use cuda_arch=80 for A100
or with ROCm enabled on AMD GPUs:
spack install xsdk@1.0.0+rocm amdgpu_target=gfx90a   # MI-250
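If a particular compiler should be used for a GPU-enabled build, it can be pinned on the same command line (the compiler version below is only an example):

spack install xsdk@1.0.0+cuda cuda_arch=80 %gcc@11.4.0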
5. (Optional) Install xSDK with modules.
Optionally, one can install xSDK packages as modules.
spack config add "modules:default:enable:[tcl]"
spack install lmod
. $(spack location -i lmod)/lmod/lmod/init/bash
. $SPACK_ROOT/share/spack/setup-env.sh
spack install xsdk@1.0.0
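If any packages were installed before module support was enabled, their module files can be regenerated with (a standard Spack command, offered here as a suggestion):

spack module tcl refresh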
6. Load xSDK module and its sub-modules.
Now you can load the xSDK environment using Spack's load command:
spack load xsdk
Then, module avail generates output such as the following:
alquimia/1.1.0-gcc-11.4.0-t4taayg amrex/23.08-gcc-11.4.0-oaz6h7f arborx/1.4.1-gcc-11.4.0-vsmx4nk arpack-ng/3.9.0-gcc-11.4.0-z63cq52 autoconf-archive/2023.02.20-gcc-11.4.0-tnyi7fq autoconf/2.69-gcc-11.4.0-fprh6tn automake/1.16.5-gcc-11.4.0-p4diycd bc/1.07.1-gcc-11.4.0-zakobqi berkeley-db/18.1.40-gcc-11.4.0-6tpzx4t bison/3.8.2-gcc-11.4.0-53ue7o2 blaspp/2023.08.25-gcc-11.4.0-u4f7aqc blt/0.4.1-gcc-11.4.0-b52wm6u boost/1.79.0-gcc-11.4.0-is4go5n butterflypack/2.4.0-gcc-11.4.0-xvil2nl bzip2/1.0.8-gcc-11.4.0-wghhz36 ca-certificates-mozilla/2023-05-30-gcc-11.4.0-64miwvk camp/0.2.3-gcc-11.4.0-kcrqgbk cmake/3.27.7-gcc-11.4.0-k6g4bs2 curl/8.4.0-gcc-11.4.0-ntljfxr datatransferkit/3.1.1-gcc-11.4.0-uqvervg dealii/9.5.1-gcc-11.4.0-za4j34j diffutils/3.9-gcc-11.4.0-s24ftpl ed/1.4-gcc-11.4.0-uprgqkw eigen/3.4.0-gcc-11.4.0-nypjypl exago/1.6.0-gcc-11.4.0-oxebbgl expat/2.5.0-gcc-11.4.0-op4y6jn fftw/3.3.10-gcc-11.4.0-jhdoqmb findutils/4.9.0-gcc-11.4.0-v6biqy5 gdbm/1.23-gcc-11.4.0-h4t7ujf gettext/0.22.3-gcc-11.4.0-jmnp3va ginkgo/1.7.0-gcc-11.4.0-qqpadt2 git/2.42.0-gcc-11.4.0-qk4bhbw gmake/4.4.1-gcc-11.4.0-nrjojvj gmp/6.2.1-gcc-11.4.0-n3duejd gsl/2.7.1-gcc-11.4.0-shxsboz hdf5/1.14.3-gcc-11.4.0-gf5c3yg heffte/2.4.0-gcc-11.4.0-szuskhw hiop/1.0.0-gcc-11.4.0-th7fgnp hwloc/2.9.1-gcc-11.4.0-5njunzh hypre/2.30.0-gcc-11.4.0-v2o5b2y intel-tbb/2021.9.0-gcc-11.4.0-srd2l4g kokkos/4.1.00-gcc-11.4.0-3ld2ztw krb5/1.20.1-gcc-11.4.0-y6xjdzn lapackpp/2023.08.25-gcc-11.4.0-grwa447 libbsd/0.11.7-gcc-11.4.0-r6ipwk7 libedit/3.1-20210216-gcc-11.4.0-zpgqim4 libevent/2.1.12-gcc-11.4.0-5m3ldrh libffi/3.4.4-gcc-11.4.0-bz5uwkj libiconv/1.17-gcc-11.4.0-galknam libidn2/2.3.4-gcc-11.4.0-panjmu5 libmd/1.0.4-gcc-11.4.0-jueqoy2 libpciaccess/0.17-gcc-11.4.0-ee6x35a libsigsegv/2.14-gcc-11.4.0-owlp5qn libtool/2.4.7-gcc-11.4.0-wtoa66g libunistring/1.1-gcc-11.4.0-5smktkf libxcrypt/4.4.35-gcc-11.4.0-hhcboff libxml2/2.10.3-gcc-11.4.0-cdq2y26 libyaml/0.2.5-gcc-11.4.0-xly53rq lmod/8.7.24-gcc-11.4.0-46ct5d3 lua-luafilesystem/1.8.0-gcc-11.4.0-pszhdzp lua-luaposix/36.1-gcc-11.4.0-fcjbyrh lua/5.4.4-gcc-11.4.0-ta6duim m4/1.4.19-gcc-11.4.0-op4zsj7 metis/5.1.0-gcc-11.4.0-6f6rut5 mfem/4.6.0-gcc-11.4.0-qz7ale6 mpfr/4.2.0-gcc-11.4.0-h4gbhhz muparser/2.3.4-gcc-11.4.0-szw3yzj ncurses/6.4-gcc-11.4.0-t5pt5jc netlib-scalapack/2.2.0-gcc-11.4.0-z4yivrf nghttp2/1.57.0-gcc-11.4.0-vn5hf5n ninja/1.11.1-gcc-11.4.0-oth2e7t numactl/2.0.14-gcc-11.4.0-qbbzdxc omega-h/scorec.10.6.0-gcc-11.4.0-zfnroia openblas/0.3.24-gcc-11.4.0-3fnxjsy openmpi/4.1.6-gcc-11.4.0-pypzzgz openssh/9.5p1-gcc-11.4.0-bdww7mj openssl/3.1.3-gcc-11.4.0-2o57lid p4est/2.8-gcc-11.4.0-opu4cib parmetis/4.0.3-gcc-11.4.0-bmsnxg2 pcre2/10.42-gcc-11.4.0-6adl2ha perl/5.38.0-gcc-11.4.0-aphk4lw petsc/3.20.1-gcc-11.4.0-jlk6cwv pflotran/5.0.0-gcc-11.4.0-nt4wcuu phist/1.12.0-gcc-11.4.0-32hc2cb pigz/2.7-gcc-11.4.0-ssnwycb pkgconf/1.9.5-gcc-11.4.0-34ij4th plasma/23.8.2-gcc-11.4.0-36sgq32 pmix/5.0.1-gcc-11.4.0-jaa7anf precice/2.5.0-gcc-11.4.0-pv2yp6h pumi/2.2.8-gcc-11.4.0-lhprzwy py-calver/2022.6.26-gcc-11.4.0-brnykb3 py-cython/0.29.36-gcc-11.4.0-zttirjx py-editables/0.3-gcc-11.4.0-2dzksky py-flit-core/3.9.0-gcc-11.4.0-pzofb2j py-hatch-vcs/0.3.0-gcc-11.4.0-ejmp765 py-hatchling/1.18.0-gcc-11.4.0-yb67s2w py-iniconfig/2.0.0-gcc-11.4.0-3vtlvm3 py-libensemble/1.0.0-gcc-11.4.0-gbrnc3y py-mpi4py/3.1.4-gcc-11.4.0-5fj5evr py-numpy/1.26.2-gcc-11.4.0-qos5sc7 py-packaging/23.1-gcc-11.4.0-avng6ze py-pathspec/0.11.1-gcc-11.4.0-2ea3wc6 py-petsc4py/3.20.1-gcc-11.4.0-rf3irrx py-pip/23.1.2-gcc-11.4.0-b6k5rze py-pluggy/1.0.0-gcc-11.4.0-5algpib 
py-psutil/5.9.5-gcc-11.4.0-mfjczor py-pydantic/1.10.9-gcc-11.4.0-gy2es42 py-pyproject-metadata/0.7.1-gcc-11.4.0-3xxc3ey py-pytest/7.3.2-gcc-11.4.0-iwynvpl py-pyyaml/6.0-gcc-11.4.0-dlm4j2v py-setuptools-scm/7.1.0-gcc-11.4.0-y5itsbr py-setuptools/68.0.0-gcc-11.4.0-yggpklq py-tomli/2.0.1-gcc-11.4.0-u3qq256 py-trove-classifiers/2023.8.7-gcc-11.4.0-ipkdjij py-typing-extensions/4.8.0-gcc-11.4.0-jbsluhs py-wheel/0.41.2-gcc-11.4.0-agx4ui4 python/3.11.6-gcc-11.4.0-kulhm7c raja/0.14.0-gcc-11.4.0-tn6fkq5 re2c/2.2-gcc-11.4.0-mxktyqz readline/8.2-gcc-11.4.0-kruiaae sed/4.9-gcc-11.4.0-spjw4l5 slate/2023.08.25-gcc-11.4.0-il2esca slepc/3.20.0-gcc-11.4.0-hrpafky sqlite/3.43.2-gcc-11.4.0-k63ovzn strumpack/7.2.0-gcc-11.4.0-c6rqjtr suite-sparse/5.13.0-gcc-11.4.0-zyagkto sundials/6.6.2-gcc-11.4.0-wscetkt superlu-dist/8.2.0-gcc-11.4.0-ogaafxy tar/1.34-gcc-11.4.0-mswkhxi tasmanian/8.0-gcc-11.4.0-ofj4o43 tcl/8.6.12-gcc-11.4.0-kon5jbm texinfo/7.0.3-gcc-11.4.0-rti6rr2 trilinos/14.4.0-gcc-11.4.0-m4fkkmf umpire/6.0.0-gcc-11.4.0-37kxecu unzip/6.0-gcc-11.4.0-hko3zj5 util-linux-uuid/2.38.1-gcc-11.4.0-rrhwe6c util-macros/1.19.3-gcc-11.4.0-ckzs647 xsdk/1.0.0-gcc-11.4.0-q2x4vdg xz/5.4.1-gcc-11.4.0-ivttcv6 zfp/1.0.0-gcc-11.4.0-ntiuxox zlib-ng/2.1.4-gcc-11.4.0-7onvw4d zstd/1.5.5-gcc-11.4.0-4fyj7pw
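Individual packages from this listing can also be loaded directly through the module system, for example:

module load petsc/3.20.1-gcc-11.4.0-jlk6cwv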
xSDK 1.0.0 platform testing
xSDK 1.0.0 has been regularly built and verified on various workstation configurations, including:
- linux-fedora39-aarch64 / clang@17.0.1
- linux-fedora39-aarch64 / gcc@13.2.1
- linux-fedora39-skylake / oneapi@2023.2.0
- linux-fedora39-skylake / oneapi@2023.2.0 [+sycl]
- linux-fedora39-skylake / clang@17.0.0
- linux-fedora39-skylake / gcc@13.2.1
- linux-ubuntu20.04-skylake / oneapi@2022.2.0
- linux-ubuntu20.04-skylake / gcc@9.4.0
- linux-ubuntu22.04-zen3 / gcc@11.4.0 [+cuda]
- linux-ubuntu22.04-zen4 / gcc@11.4.0
- linux-ubuntu22.04-zen4 / gcc@11.4.0 [+rocm]
- linux-rocky9-cascadelake / oneapi@2023.1.0
- linux-rocky9-cascadelake / gcc@11.3.1
- linux-rocky9-cascadelake / gcc@11.3.1 [+cuda]
- linux-rocky9-skylake_avx512 / intel@19.1.1.217
xSDK packages are tested on key machines at DOE computing facilities: ALCF, NERSC, OLCF, and LLNL.
- ALCF: Polaris: an HPE Apollo 6500 system with AMD EPYC Milan CPUs and NVIDIA A100 GPUs
- build packages on the front-end node with:
spack install -j 64 --no-cache --fresh xsdk@1.0.0 +cuda cuda_arch=80
- Relevant spack config files for this build are at:
linux-sles15-zen3 / gcc@11.2.0
- NERSC: Perlmutter: an HPE Cray EX with AMD EPYC Milan CPUs and NVIDIA A100 GPUs
- build packages on the compile/front-end node with:
spack install xsdk@1.0.0 +cuda cuda_arch=80 ^dealii~threads %gcc@11.2.0
- Relevant .spack config files for this build are at:
linux-sles15-zen3 / gcc@11.2.0
- OLCF: Frontier: an HPE Cray EX with 3rd-gen AMD EPYC CPUs and AMD MI-250X GPUs
- Obtain the spack.yaml for Frontier GNU environment
- Building with GNU compilers on the frontend with ROCm enabled:
- spack env create xsdk_frontier spack.yaml
- spack env activate -p xsdk_frontier
- spack install
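- For orientation, a Spack environment file for such a build generally takes the following shape (a minimal sketch only; the spack.yaml provided for Frontier should be used, since it also carries the machine-specific compiler and package settings):

# minimal spack.yaml sketch (illustrative only; use the provided Frontier spack.yaml)
spack:
  specs:
  - xsdk@1.0.0 +rocm amdgpu_target=gfx90a
  view: false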
- OLCF: Summit: a supercomputer housed at Oak Ridge National Laboratory, featuring nodes with dual-socket IBM POWER9 processors, each socket connected with NVLink to 3 NVIDIA Volta V100 GPUs, for a total of 2 processors and 6 GPUs per node. The nodes run Red Hat 7 Linux with IBM, GNU, and PGI compilers.
- Building with GCC 11.2 is possible, but login-node job limits allow only 16 GiB of main memory per user, and some xSDK packages fail to build under this limit due to out-of-memory errors from the compiler.
- To deal with this issue, the build must take place inside IBM's queuing/scheduling system, which is based on LSF (Load Sharing Facility). A sample submission file is provided (an illustrative sketch of such a file is shown at the end of this entry) and should be submitted with
bsub xsdk100.lsf
- Build xSDK (on compute nodes via bsub) with:
spack env create xsdk_summit spack.yaml
spack env activate -p xsdk_summit
Specs available in spack.yaml:
- xsdk%gcc@11.2.0~dealii~precice~exago~hiop
- xsdk%gcc@11.2.0~dealii~precice~exago~hiop+cuda cuda_arch==70 ^cuda@11.7.1
- Relevant .spack config files for this build are at:
linux-rhel7-power9le / gcc@11.2.0
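- For reference, such a submission file generally takes the following shape (an illustrative sketch only; the project name and walltime are placeholders, and the provided xsdk100.lsf should be preferred):

#!/bin/bash
#BSUB -P <project>     # project allocation (placeholder)
#BSUB -W 6:00          # requested walltime
#BSUB -nnodes 1        # one Summit node
#BSUB -J xsdk-build
# set up Spack and build inside the previously created environment
. $SPACK_ROOT/share/spack/setup-env.sh
spack env activate xsdk_summit
spack install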
- LLNL: Tioga: HPE Cray with AMD Trento CPUs, AMD MI-250X GPUs
- Building with GNU compilers on the frontend with ROCm enabled:
./bin/spack install -j 64 --no-cache --fresh xsdk@1.0.0 +rocm amdgpu_target=gfx90a ^dealii~threads
- Relevant spack config files for this build are at:
linux-rhel8-zen3 / gcc@12.1.0