Installing xSDK 0.5.0

This page lists the platforms on which the xSDK 0.5.0 release has been tested and provides general build instructions, as well as more specific instructions for select high-end computing systems. See also the details about obtaining the xSDK.

As more information becomes available for building the xSDK 0.5.0 release on different platforms, it will be posted here. Check back for updates.

xSDK 0.5.0 general build instructions

1. After cloning the Spack git repository, set up the Spack environment
# For bash users
$ export SPACK_ROOT=/path/to/spack
$ . $SPACK_ROOT/share/spack/setup-env.sh

# For tcsh or csh users (note you must set SPACK_ROOT)
$ setenv SPACK_ROOT /path/to/spack
$ source $SPACK_ROOT/share/spack/setup-env.csh
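Step 1 assumes that Spack has already been cloned. If it has not been, a minimal sketch of that step is shown below; the destination path is only an example, and you may also want to check out a specific release branch or tag.

# Clone Spack into the location referenced by SPACK_ROOT (path is an example)
$ git clone https://github.com/spack/spack.git /path/to/spack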
1.1 Set up proxy settings, if needed.

If a web proxy is required for internet access on the install machine, set the proxy environment variables appropriately. Otherwise, Spack will fail to fetch the packages you want to install.

# For bash users
$ export http_proxy=<your proxy URL>
$ export https_proxy=<your proxy URL>

# For tcsh or csh users
$ setenv http_proxy <your proxy URL>
$ setenv https_proxy <your proxy URL>
2. Set up Spack compilers
spack compiler find

The Spack compiler configuration is stored in $HOME/.spack/<platform>/compilers.yaml (for example, $HOME/.spack/linux/compilers.yaml) and can be checked with

spack compiler list
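If a compiler is installed in a non-default location, you can point spack compiler find at its directory explicitly; the path below is hypothetical.

# Register a compiler installed outside the default search paths (path is an example)
$ spack compiler find /opt/gcc-9.2.1/bin

# Verify that the new compiler shows up
$ spack compiler list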
3. Edit/update the packages.yaml file to specify any system/build tools needed for the xSDK installation.

Although Spack can install the required build tools, it can be convenient to use tools that are already installed on the system. On macOS, in addition to Xcode, we have used some packages from Homebrew. Such preinstalled packages can be specified to Spack in the $HOME/.spack/packages.yaml configuration file. The following is an example for macOS:

# -------------------------------------------------------------------------
# This file controls default concretization preferences for Spack.
#
# Settings here are versioned with Spack and are intended to provide
# sensible defaults out of the box. Spack maintainers should edit this
# file to keep it current.
#
# Users can override these settings by editing the following files.
#
# Per-spack-instance settings (overrides defaults):
#   $SPACK_ROOT/etc/spack/packages.yaml
#
# Per-user settings (overrides default and site settings):
#   ~/.spack/packages.yaml
# -------------------------------------------------------------------------
packages:
  autoconf:
    paths:
      autoconf@2.69: /usr/local
    buildable: False
  automake:
    paths:
      automake@1.15.1: /usr/local
    buildable: False
  libtool:
    paths:
      libtool@2.4.6: /usr/local
    buildable: False
  m4:
    paths:
      m4@1.4.18: /usr/local
    buildable: False
  cmake:
    paths:
      cmake@3.9.3: /usr/local
    buildable: False
  pkg-config:
    paths:
      pkg-config@1.3.9: /usr
    buildable: False
  python:
    paths:
      python@2.7.13: /usr
    buildable: False
  all:
    providers:
      mpi: [mpich]
    compiler: [clang@9.0.0-apple]
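Before installing, you can optionally preview how Spack will concretize the xsdk spec and confirm that the external packages listed in packages.yaml (for example, cmake and python) are picked up rather than rebuilt.

# Show the fully concretized xsdk spec without installing anything
$ spack spec xsdk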
4. Install xSDK

After editing packages.yaml, the xSDK packages and their external dependencies can be installed with a single command:

 spack install xsdk
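A few optional variations of this command use standard Spack spec syntax; the compiler version below is only an example, and the +cuda and -j options also appear in the site-specific instructions later on this page.

# Build with a specific compiler (version shown is an example)
$ spack install xsdk %gcc@9.2.1

# Enable CUDA support in the packages that provide it
$ spack install xsdk+cuda

# Limit the number of parallel build jobs
$ spack install -j8 xsdk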
5. Install environment modules.

Optionally, one can install the environment-modules package to access the xSDK packages as modules.

 spack install environment-modules

After installation, the module command can be enabled with one of the following commands.

# For bash users
$ source  `spack location -i environment-modules`/init/bash
# For tcsh or csh users
$ source  `spack location -i environment-modules`/init/tcsh
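Once the module command is initialized, you can list the module files generated by Spack; the exact module names depend on your installation (module avail prints to stderr, hence the redirection).

# List the generated module files and pick out the xsdk entry
$ module avail 2>&1 | grep xsdk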
6. Load the xSDK module and its sub-modules.

Now you can load the xSDK environment. Use Spack's load command with the -r option to also load all dependencies:

spack load -r xsdk

Then module list shows output such as the following:

Currently Loaded Modulefiles:
 1) mpich-3.3.1-gcc-9.2.1-vlr4olk 
 2) numactl-2.0.12-gcc-9.2.1-7iruphx 
 3) zlib-1.2.11-gcc-9.2.1-f4ghtmi 
 4) hdf5-1.10.5-gcc-9.2.1-z4sim6l 
 5) openblas-0.3.7-gcc-9.2.1-kdem7d7 
 6) metis-5.1.0-gcc-9.2.1-w3psdd6 
 7) parmetis-4.0.3-gcc-9.2.1-2ix5swm 
 8) superlu-dist-6.1.1-gcc-9.2.1-hhoted2 
 9) hypre-2.18.2-gcc-9.2.1-hcwnxf3 
10) bzip2-1.0.8-gcc-9.2.1-5ypuoif 
11) boost-1.70.0-gcc-9.2.1-bdw77ev 
12) glm-0.9.7.1-gcc-9.2.1-g4p6bby 
13) matio-1.5.13-gcc-9.2.1-pr53e6y 
14) netcdf-c-4.7.2-gcc-9.2.1-gddudyy 
15) trilinos-12.18.1-gcc-9.2.1-daopyls 
16) petsc-3.12.1-gcc-9.2.1-5wyj7ys 
17) pflotran-xsdk-0.5.0-gcc-9.2.1-rlnl6uw 
18) alquimia-xsdk-0.5.0-gcc-9.2.1-eburogx 
19) amrex-19.08-gcc-9.2.1-73ao5c7 
20) arpack-ng-3.7.0-gcc-9.2.1-3lhyzwn 
21) netlib-scalapack-2.0.2-gcc-9.2.1-biyu4fj 
22) butterflypack-1.1.0-gcc-9.2.1-nyzgadn 
23) adol-c-develop-gcc-9.2.1-woat6nv 
24) gsl-2.5-gcc-9.2.1-jcojksx 
25) intel-tbb-2019.4-gcc-9.2.1-vx5f7b5 
26) muparser-2.2.6.1-gcc-9.2.1-or665xh 
27) nanoflann-1.2.3-gcc-9.2.1-r53uf54 
28) oce-0.18.3-gcc-9.2.1-z3eqs6z 
29) p4est-2.2-gcc-9.2.1-nceihyv 
30) slepc-3.12.0-gcc-9.2.1-6fj6itb 
31) suite-sparse-5.3.0-gcc-9.2.1-hvzd3ij 
32) dealii-9.1.1-gcc-9.2.1-wrxc3bl 
33) ginkgo-1.1.0-gcc-9.2.1-6kbkqyq 
34) sundials-5.0.0-gcc-9.2.1-jh2wooz 
35) mfem-4.0.1-xsdk-gcc-9.2.1-de2m3ht 
36) omega-h-9.29.0-gcc-9.2.1-m5yql3j 
37) phist-1.8.0-gcc-9.2.1-zawopoy 
38) plasma-19.8.1-gcc-9.2.1-x5fsmdm 
39) eigen-3.3.7-gcc-9.2.1-2ffrn5s 
40) libiconv-1.16-gcc-9.2.1-fetha6s 
41) xz-5.2.4-gcc-9.2.1-453rtbj 
42) libxml2-2.9.9-gcc-9.2.1-gkze6pv 
43) precice-1.6.1-gcc-9.2.1-6bqxkx3 
44) pumi-2.2.1-gcc-9.2.1-5kwnbpn 
45) py-numpy-1.17.2-gcc-9.2.1-72ai42y 
46) python-3.7.4-gcc-9.2.1-5m6odl7 
47) py-mpi4py-3.0.3-gcc-9.2.1-tn72t3a 
48) py-petsc4py-3.12.0-gcc-9.2.1-2yr6fwh 
49) py-libensemble-0.5.2-gcc-9.2.1-2y6w4fa 
50) strumpack-3.3.0-gcc-9.2.1-3dqkcwf 
51) tasmanian-7.0-gcc-9.2.1-kdpehwk 
52) xsdk-0.5.0-gcc-9.2.1-pkcmcs3
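As a quick, optional check that the loaded modules are in effect, you can verify that the MPI compiler wrapper from the mpich module shown above is now on your PATH.

# Confirm the mpich compiler wrapper comes from the loaded module
$ which mpicc
$ mpicc -show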

xSDK 0.5.0 platform testing

xSDK 0.5.0 has been tested regularly, and updated as needed, on a variety of workstations and other systems.

In collaboration with ALCF, NERSC, and OLCF, xSDK packages are tested on key machines at these DOE computing facilities.

  • ALCF: Theta: Cray XC40 with Intel compilers [in KNL mode]
    • Theta front-end nodes use Xeon processors, while the compute nodes use KNL processors. Because of this difference, builds on the front-end nodes are usually done in cross-compile mode, which does not work well with all xSDK packages. Hence the xSDK is built on the Theta compute nodes.
    • Build the packages on a compute node by allocating a sufficiently long single-node job (if possible, say 24 hours) and running the following script:
      #!/bin/bash -x
      module remove darshan
      module remove xalt
      module load cce
      module load intel/19.0.5.281
      module load gcc/7.3.0
      
      export HTTPS_PROXY=theta-proxy.tmi.alcf.anl.gov:3128
      export https_proxy=theta-proxy.tmi.alcf.anl.gov:3128
      export HTTP_PROXY=theta-proxy.tmi.alcf.anl.gov:3128
      export http_proxy=theta-proxy.tmi.alcf.anl.gov:3128
      
      aprun -cc none -n 1 python /home/balay/spack/bin/spack install -j16 xsdk ^dealii~adol-c~p4est
    • Relevant .spack config files for this build are at:
      cray-cnl6-mic_knl / intel@19.0.5.281
  • NERSC: Cori: Cray XC40 with Intel compilers [in Haswell mode]
    • Build packages on the compile/front-end node with:
      module remove altd darshan
      spack install -j8 xsdk
    • Relevant .spack config files for this build are at:
      cray-cnl7-haswell / intel@19.0.3.199
  • OLCF: Summit: IBM POWER9 with IBM, GNU, and PGI compilers
    • Building with GCC is possible for both GCC 7 and GCC 8, but jobs on the login node are limited to 16 GiB of memory per user, and some xSDK packages fail to build under this limit. This can be circumvented by submitting the build as a batch job. An example LSF submission script looks like this (a sketch of how to submit it appears after this list):
      #!/bin/bash
      #BSUB -P <project_code>
      #BSUB -W 2:00
      #BSUB -nnodes 1
      #BSUB -alloc_flags smt4
      #BSUB -J xsdk-cuda
      #BSUB -o xsdk-cuda.%J
      #BSUB -e xsdk-cuda.%J
      
      SPACK_ROOT=/ccs/proj/csc326/luszczek/spack
      export SPACK_ROOT
      PATH=${PATH}:${SPACK_ROOT}/bin
      export PATH
      
      module unload xalt
      module load gcc/8.1.1
      module load spectrum-mpi/10.3.0.1-20190611
      
      cd $SPACK_ROOT
      spack install xsdk # add +cuda to build packages with CUDA support
      date
    • Building the xSDK with IBM XL is possible, but many constituent packages have issues with the XL compiler suite and must be disabled in the Spack invocation:
      spack install xsdk%xl~trilinos~butterflypack~dealii~omega-h~phist~precice~libensemble~strumpack~slepc~tasmanian^netlib-lapack
    • The relevant configuration files that reside in $HOME/.spack are available in the summit directory.
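To run the Summit GCC build shown above through the batch system, the LSF script can be submitted with bsub; the file name used here is hypothetical.

# Submit the batch build script and check its status
$ bsub < xsdk-build.lsf
$ bjobs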