Adastra quick start
How to ask for help
You can always ask your questions at svp@cines.fr. We recommend that you follow the “asking for help” tips listed in this document.
Registering an account
We recommend that you follow this document. In case you want to submit a new research project instead of simply joining an already existing one, things can get more involved and we recommend that you contact your Comité Thématique (CT) representative.
Connecting to Adastra
We will refer to <login> as the username used to log on to Adastra.
If you are running Windows, launch an SSH client (such as PuTTY) and connect to the Adastra address: adastra.cines.fr. In PuTTY’s settings, under Connection/SSH/X11, check the Enable X11 forwarding box.
If you are on Linux, open a terminal and connect to Adastra like so:
$ ssh -X <login>@adastra.cines.fr
Your terminal’s working directory now points to your home directory, and the commands you enter will execute on a login node. On Adastra, you compile your code and submit tasks to the hundreds of compute nodes from these login nodes. Attaching a user to multiple projects (DARI allocations) is handled through the concept of login unique.
More details in:
Transferring sources to Adastra
If possible, use something like Gitlab or Github.
If the IP address of your Git repository is not authorized by Adastra’s firewall, you may ask svp@cines.fr to fix that for you. Otherwise:
Under Windows: use an SFTP solution such as FileZilla.
Under Unix-like systems, you can use FileZilla or the following command:
me@lab:~$ scp -r local_dir <login>@adastra.cines.fr:~/my_adastra_path
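For repeated or resumable transfers, rsync can be more convenient than scp, since a second run only copies the files that changed. A sketch, using the same placeholder login and path as above:

```shell
# Synchronize local_dir to Adastra; -a preserves permissions and timestamps,
# -v is verbose, -z compresses data in transit. Re-running only sends changes.
rsync -avz local_dir <login>@adastra.cines.fr:~/my_adastra_path
```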
More details in:
The module environment
Adastra uses modules to expose software. If you want to use prebuilt simulation software, please refer to these documents: Using a spack product, Simulation softwares and CINES Spack module catalog.
If you want to compile your own code, you might benefit from reading these documents: CrayPE basics, Libraries & toolchains.
A typical environment for the GENOA (CPU) partition looks like this:
$ module purge
$ # A CrayPE environment version
$ module load cpe/23.12
$ # An architecture
$ module load craype-x86-genoa
$ # A compiler to target the architecture
$ module load PrgEnv-cray
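You can check that the environment was set up as intended; `module list` prints the currently loaded modules (the exact output varies with the CrayPE version):

```shell
# Show what is currently loaded; the cpe, craype-x86-genoa and PrgEnv-cray
# modules from the sequence above should appear in the list.
module list
```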
Compiling code
For CPU compilations, we recommend PrgEnv-gnu.
For GPU compilations, we recommend PrgEnv-amd, or potentially PrgEnv-gnu with rocm. For OpenACC on GPU you should use PrgEnv-cray.
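With CrayPE, switching between programming environments is typically done with `module swap` rather than unloading everything by hand. A sketch, assuming the PrgEnv-cray environment shown above is already loaded:

```shell
# Replace the Cray compiler environment with the GNU one; the compiler
# wrappers (cc, CC, ftn) then drive gcc/g++/gfortran instead.
module swap PrgEnv-cray PrgEnv-gnu
```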
Assuming the environment above (for the GENOA partition), you would compile a hello world program like so:
#include <mpi.h>
#include <iostream>
int main(int argc, char ** argv) {
::MPI_Init(&argc, &argv);
std::cout << "Hello world\n";
::MPI_Finalize();
return 0;
}
$ CC -O3 -Wall hello_world.cpp -o hello_world
Note
The MPI library is implicitly linked.
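If you want to confirm which MPI library the CC wrapper linked against, you can inspect the resulting binary; a sketch, assuming dynamic linking:

```shell
# List the shared libraries of the freshly built binary and keep the MPI
# entries; an MPI library pulled in by the wrapper should appear here.
ldd ./hello_world | grep -i mpi
```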
An example compilation using CMake is given in CMake examples.
More details in:
Running a compute task
Assuming the build above, you could submit a task onto the GENOA partition, using SLURM:
$ srun --account=<account_to_charge> \
--job-name="hello_world" \
--constraint=GENOA \
--nodes=1 \
--exclusive \
--time=1:00:00 \
--ntasks-per-node=24 \
--cpus-per-task=8 \
-- ./hello_world
srun: job 548214 queued and waiting for resources
srun: job 548214 has been allocated resources
Hello world
Hello world
Hello world
Hello world
...
Note
Here we use srun as a shortcut for allocating resources and submitting work. This is but an example, not the recommended way; in practice you should use proper batch scripts.
Warning
The scratch storage area is regularly purged of old files.
More details in:
Submitting a task
When running real-world cases, we prefer using the concept of a scheduled job. It consists of specifying, in a script, a list of statements to execute at a later time, when the compute resources become available. The srun command above would be expressed in a file named job.sh like so:
#!/bin/bash
#SBATCH --account=<account_to_charge>
#SBATCH --job-name="hello_world"
#SBATCH --constraint=GENOA
#SBATCH --nodes=1
#SBATCH --exclusive
#SBATCH --time=1:00:00
srun --ntasks-per-node=24 --cpus-per-task=8 -- ./hello_world
You can schedule the task for later execution like so:
$ sbatch job.sh
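sbatch prints the ID of the scheduled job. You can then follow it with standard SLURM commands (replace <job_id> with the ID printed by sbatch):

```shell
# Check the state of your pending and running jobs.
squeue -u $USER
# Cancel a job if needed.
scancel <job_id>
```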