This page discusses the platforms on which the xSDK 0.6.0 release has been tested and contains general instructions for building, as well as more specific instructions for select high-end computing systems. See also details about obtaining the xSDK.
As more information becomes available for building the xSDK 0.6.0 release on different platforms, it will be posted here. Check back for updates.
xSDK 0.6.0 general build instructions
1. After cloning the Spack git repo, set up the Spack environment
# For bash users
$ export SPACK_ROOT=/path/to/spack
$ . $SPACK_ROOT/share/spack/setup-env.sh

# For tcsh or csh users (note you must set SPACK_ROOT)
$ setenv SPACK_ROOT /path/to/spack
$ source $SPACK_ROOT/share/spack/setup-env.csh
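To confirm that the environment is active, one can, for example, check that the spack command resolves and reports a version:

$ spack --version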
1.1 Make sure proxy settings are set, if needed.
If a web proxy is required for internet access on the install machine, set up the proxy settings appropriately. Otherwise, Spack will fail to “fetch” the packages you want to install.
# For bash users
$ export http_proxy=<your proxy URL>
$ export https_proxy=<your proxy URL>

# For tcsh or csh users
$ setenv http_proxy <your proxy URL>
$ setenv https_proxy <your proxy URL>
2. Set up Spack compilers
spack compiler find
Spack compiler configuration is stored in $HOME/.spack/$UNAME/compilers.yaml (where $UNAME is the platform name, e.g. linux) and can be checked with
spack compiler list
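For reference, each detected compiler appears as an entry like the following in compilers.yaml. This is an illustrative sketch only; the compiler version, paths, OS name, and target will differ on your machine.

compilers:
- compiler:
    # example entry (illustrative values)
    spec: gcc@10.2.1
    paths:
      cc: /usr/bin/gcc
      cxx: /usr/bin/g++
      f77: /usr/bin/gfortran
      fc: /usr/bin/gfortran
    operating_system: fedora32
    target: x86_64
    modules: []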
3. Edit/update the packages.yaml file to specify any system/build tools needed for the xSDK installation.
Although Spack can install required build tools, it can be convenient to use tools that are already installed on the system. Such preinstalled packages can be specified to Spack in the $HOME/.spack/packages.yaml config file. The following is an example from a Linux/Intel/KNL build.
packages:
  perl:
    externals:
    - spec: perl@5.16.3
      prefix: /usr
  python:
    externals:
    - spec: python@3.6.6
      prefix: /usr
  py-numpy:
    externals:
    - spec: py-numpy@1.12.1
      prefix: /usr
  py-setuptools:
    externals:
    - spec: py-setuptools@39.2.0
      prefix: /usr
  intel-mpi:
    externals:
    - spec: intel-mpi@18.0.2
      prefix: /homes/intel/18u2
  intel-mkl:
    externals:
    - spec: intel-mkl@18.0.2
      prefix: /homes/intel/18u2
  all:
    providers:
      mpi: [intel-mpi]
      blas: [intel-mkl]
      lapack: [intel-mkl]
    compiler: [intel@18.0.2]
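Note that recent Spack versions can also auto-detect some preinstalled build tools and add them to packages.yaml. Coverage varies by package and Spack version, so verify the resulting entries before relying on them.

$ spack external find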
4. Install xSDK
After the edit, xSDK packages and external dependencies can be installed with a single command:
spack install xsdk
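If desired, the release version can be pinned explicitly in the spec (this page describes the 0.6.0 release):

$ spack install xsdk@0.6.0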
Note: One can also install the xSDK packages with CUDA enabled:
spack install xsdk+cuda
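When targeting a specific GPU, the CUDA architecture can additionally be pinned on the relevant dependency, as done in the Summit and Lassen instructions below; cuda_arch=70 corresponds to Volta V100 and is illustrative.

$ spack install xsdk+cuda ^magma cuda_arch=70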
5. Install environment modules.
Optionally, one can install the environment-modules package to access the xSDK packages as modules.
spack install environment-modules
After installation, the modules can be enabled with the following commands:

# For bash users
$ source `spack location -i environment-modules`/init/bash

# For tcsh or csh users
$ source `spack location -i environment-modules`/init/tcsh
6. Load the xSDK module and its sub-modules.
Now you can load the xSDK environment. Try Spack’s load command with the -r (resolve all dependencies) option:
spack load -r xsdk
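To cross-check what was installed and loaded, one can also query Spack directly; -l shows the package hashes and -d the dependency tree.

$ spack find -ld xsdk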
Then, module avail generates the following output, for example:
alquimia-xsdk-0.6.0-gcc-10.2.1-zoykb4h amrex-20.10-gcc-10.2.1-spae3q4 arpack-ng-3.7.0-gcc-10.2.1-uwxuq54 autoconf-2.69-gcc-10.2.1-wmf6sye automake-1.16.2-gcc-10.2.1-urbak7q berkeley-db-18.1.40-gcc-10.2.1-ka75van blaspp-2020.10.02-gcc-10.2.1-j67awbk boost-1.74.0-gcc-10.2.1-ewlq2hr butterflypack-1.2.1-gcc-10.2.1-kdq222t bzip2-1.0.8-gcc-10.2.1-3hyx7me cmake-3.18.4-gcc-10.2.1-25oxd4z datatransferkit-3.1-rc2-gcc-10.2.1-qflzrfh dealii-9.2.0-gcc-10.2.1-aceshes diffutils-3.7-gcc-10.2.1-prtgmtr eigen-3.3.8-gcc-10.2.1-kxm4w2k environment-modules-4.6.0-gcc-10.2.1-6vtpt33 expat-2.2.10-gcc-10.2.1-yvmle3m fftw-3.3.8-gcc-10.2.1-5q24fo3 gdbm-1.18.1-gcc-10.2.1-avmbuxe gettext-0.21-gcc-10.2.1-kuhzeen ginkgo-1.3.0-gcc-10.2.1-6fkb6ce glm-0.9.7.1-gcc-10.2.1-d2j77md gsl-2.5-gcc-10.2.1-uhcodkb hdf5-1.10.7-gcc-10.2.1-ac5c2oh heffte-2.0.0-gcc-10.2.1-swtm5a7 hwloc-1.11.11-gcc-10.2.1-kjw75cw hypre-2.20.0-gcc-10.2.1-ibcn6eu intel-tbb-2020.3-gcc-10.2.1-bus7arz lapackpp-2020.10.02-gcc-10.2.1-rc5xg5r libbsd-0.10.0-gcc-10.2.1-f4ah5cp libffi-3.3-gcc-10.2.1-26dgjqu libiconv-1.16-gcc-10.2.1-u7h4b32 libpciaccess-0.16-gcc-10.2.1-u6rwpiz libsigsegv-2.12-gcc-10.2.1-wqetmq4 libtool-2.4.6-gcc-10.2.1-xei4ehl libuuid-1.0.3-gcc-10.2.1-rfjy7ek libxml2-2.9.10-gcc-10.2.1-uoxbnat m4-1.4.18-gcc-10.2.1-izahzmo matio-1.5.17-gcc-10.2.1-4otq3jh metis-5.1.0-gcc-10.2.1-adi64dl mfem-4.2.0-gcc-10.2.1-ciy3w32 muparser-2.2.6.1-gcc-10.2.1-qngsbis nanoflann-1.2.3-gcc-10.2.1-5c5bhk2 ncurses-6.2-gcc-10.2.1-fgq3hjm netcdf-c-4.7.4-gcc-10.2.1-xh4dqwy netlib-lapack-3.8.0-gcc-10.2.1-3mpzker netlib-scalapack-2.1.0-gcc-10.2.1-tktatam ninja-1.10.1-gcc-10.2.1-t5k3ch5 numactl-2.0.14-gcc-10.2.1-y3cmmhw oce-0.18.3-gcc-10.2.1-7lqp5pb omega-h-9.32.5-gcc-10.2.1-mthb7cw openmpi-3.1.6-gcc-10.2.1-47665tf openssl-1.1.1h-gcc-10.2.1-rjpl3sn p4est-2.2-gcc-10.2.1-gpv7yu4 parmetis-4.0.3-gcc-10.2.1-sauu5ho perl-5.32.0-gcc-10.2.1-elb6iz7 petsc-3.14.1-gcc-10.2.1-cimoa6x pflotran-xsdk-0.6.0-gcc-10.2.1-k7k5hjn phist-1.9.3-gcc-10.2.1-4vbcupe pkgconf-1.7.3-gcc-10.2.1-fvghehr plasma-20.9.20-gcc-10.2.1-ttlh47a precice-2.1.1-gcc-10.2.1-gc73fnz pumi-2.2.5-gcc-10.2.1-ct3nlby py-cython-0.29.21-gcc-10.2.1-qwlfifg py-libensemble-0.7.1-gcc-10.2.1-mfpvhhk py-mpi4py-3.0.3-gcc-10.2.1-c2o7ier py-numpy-1.19.4-gcc-10.2.1-uaqric7 py-petsc4py-3.14.0-gcc-10.2.1-txns6r6 py-psutil-5.7.2-gcc-10.2.1-ea57hkx py-setuptools-50.3.2-gcc-10.2.1-qk77g7z python-3.8.6-gcc-10.2.1-ng753ur readline-8.0-gcc-10.2.1-kekzgsc slate-2020.10.00-gcc-10.2.1-4wsgrjv slepc-3.14.0-gcc-10.2.1-5enhmb5 sqlite-3.33.0-gcc-10.2.1-czvszaf strumpack-5.0.0-gcc-10.2.1-hmtdyvt suite-sparse-5.7.2-gcc-10.2.1-orkvad7 sundials-5.5.0-gcc-10.2.1-ygkbszu superlu-dist-6.4.0-gcc-10.2.1-sjb6oa5 tar-1.32-gcc-10.2.1-btctfje tasmanian-7.3-gcc-10.2.1-mjarath tcl-8.6.10-gcc-10.2.1-277guz6 trilinos-13.0.1-gcc-10.2.1-eebl7kk util-macros-1.19.1-gcc-10.2.1-nrix7mj xsdk-0.6.0-gcc-10.2.1-fokazi4 xz-5.2.5-gcc-10.2.1-t5elnub zfp-0.5.5-gcc-10.2.1-kftjwbr zlib-1.2.11-gcc-10.2.1-7blb2jz
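Individual packages can then be loaded by their full module name, for example using the PETSc entry from the listing above (the hash suffix is machine-specific):

$ module load petsc-3.14.1-gcc-10.2.1-cimoa6x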
xSDK 0.6.0 platform testing
xSDK 0.6.0 has been regularly tested and fixed on various workstation configurations, including:
- darwin-catalina-haswell / apple-clang@12.0.0
- linux-centos7-mic_knl / intel@18.0.2
- linux-centos7-skylake_avx512 / gcc@7.4.0
- linux-centos7-skylake_avx512 / intel@19.0.3.199
- linux-fedora31-skylake / clang@9.0.1
- linux-fedora32-graviton / gcc@10.2.1
- linux-fedora32-ivybridge / gcc@10.2.1
In collaboration with ALCF, NERSC, and OLCF, xSDK packages are tested on key machines at these DOE computing facilities.
- ALCF: Theta: Cray XC40 with Intel compilers [in KNL mode]
- Theta front-end nodes use Xeon processors, while the compute nodes use KNL processors. Because of this difference, builds on the compile/front-end nodes are usually done in cross-compile mode, which does not work well with all xSDK packages. Hence the xSDK is built on Theta compute nodes.
- Build the packages on the compute node by allocating a sufficiently long single-node job (if possible, say 24h) and running the following script:
#!/bin/sh -x
module remove darshan
module remove xalt
module load cce
export HTTPS_PROXY=theta-proxy.tmi.alcf.anl.gov:3128
export https_proxy=theta-proxy.tmi.alcf.anl.gov:3128
export HTTP_PROXY=theta-proxy.tmi.alcf.anl.gov:3128
export http_proxy=theta-proxy.tmi.alcf.anl.gov:3128
aprun -cc none -n 1 python3 /home/balay/spack-xsdk-knl/bin/spack install -j16 xsdk target=mic_knl ^boost@1.70.0
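For reference, on a Cobalt-scheduled machine such as Theta, a script like this could be submitted roughly as follows; the project name, walltime, and script name are placeholders, so check the facility documentation for the exact flags.

$ qsub -A <project> -n 1 -t 24:00:00 --mode script ./build-xsdk.sh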
- Relevant .spack config files for this build are at:
cray-cnl7-mic_knl / intel@19.1.0.166
- NERSC: Cori: Cray XC40 with Intel compilers [in Haswell mode]
- build packages on the compile/front-end node with:
spack install xsdk ^petsc+batch ^boost@1.70.0 ^dealii~oce cflags=-L/opt/cray/pe/atp/2.1.3/libApp cxxflags=-L/opt/cray/pe/atp/2.1.3/libApp
- Relevant .spack config files for this build are at:
cray-cnl7-haswell / intel@19.0.3.199
- OLCF: Summit: a supercomputer featuring nodes with two IBM POWER9 sockets and 6 NVIDIA Volta V100 GPUs connected with NVLink, running RedHat 7 Linux with IBM, GNU, and PGI compilers.
- Building with GCC is possible for both GCC 7 and 8, but limits on login-node jobs make only 16 GiB of main memory available to a single user, and some xSDK packages fail to build because of it. This can be circumvented by submitting a job to the batch queue. An example LSF submission file looks like this:
#!/bin/bash
#BSUB -P <project_code>
#BSUB -W 2:00
#BSUB -nnodes 1
#BSUB -alloc_flags smt4
#BSUB -J xsdk
#BSUB -o xsdk06o.%J
#BSUB -e xsdk06e.%J

projroot=/ccs/proj/<project_code>/$USER
SPACK_ROOT=${projroot}/spack
export SPACK_ROOT
PATH=${PATH}:${SPACK_ROOT}/bin
export PATH

module unload darshan-runtime
module unload spectrum-mpi
module unload xalt
module unload xl
module load gcc/7.4.0
module load spectrum-mpi/10.3.1.2-20200121

cd $SPACK_ROOT
# build xSDK without CUDA support
spack install xsdk~precice
# build xSDK with CUDA support
spack install xsdk+cuda~trilinos~libensemble~precice~dealii ^openblas@0.3.5 ^cuda@10.1.243 ^magma cuda_arch=70
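Assuming the file above is saved as, say, xsdk_build.lsf (the name is arbitrary), it can be submitted and monitored with the standard LSF commands:

$ bsub xsdk_build.lsf
$ bjobs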
- Building the xSDK with IBM XL is possible, but many constituent packages have issues with the XL compiler suite and must be disabled in the Spack invocation:
spack install xsdk%xl~trilinos~butterflypack~dealii~omega-h~phist~precice~libensemble~strumpack~slepc~tasmanian ^netlib-lapack
- Relevant .spack config files for this build are at:
linux-rhel7-power9le / gcc@7.4.0
- LLNL: Lassen: IBM POWER9 with IBM and GNU compilers
- Building with the IBM compilers and with GCC 7 and 8 is possible, but as on Summit, login-node job limits make only 16 GiB available to a single user, and some xSDK packages fail to build because of it. This can be circumvented by submitting a job to the batch queue.
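For example, an interactive single-node allocation might be obtained with LC’s lalloc wrapper for LSF before running the commands below; the node count and walltime (in minutes) here are illustrative.

$ lalloc 1 -W 240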
- Build xSDK on a Lassen compute node without CUDA enabled:
spack install xsdk~dealii ^spectrum-mpi ^openblas@0.3.5
- Build xSDK on a Lassen compute node with CUDA enabled:
spack install xsdk+cuda~dealii ^spectrum-mpi ^openblas@0.3.5 ^magma cuda_arch=70 ^cmake@3.18.2
- Relevant .spack config files for this build are at:
linux-rhel7-power9le / gcc@7.3.1