Getting started with Puhti
This is a quick start guide for Puhti users. It assumes that you have previously used CSC cluster resources such as Taito or Sisu; if not, you can start by looking at the overview of CSC supercomputers.
Go to my.csc.fi to apply for access to Puhti, or to view your projects and their project numbers
if you already have access. The same project information can also be checked from the command line on Puhti.
Connecting to Puhti
Connect using a normal SSH client:
$ ssh <csc_username>@puhti.csc.fi
There is also a beta web interface available at www.puhti.csc.fi, where you can log in with your CSC user name. In this interface you can manage files, launch interactive applications, and list jobs, quotas and project statuses. It can also be used for graphical applications; alternatively, NoMachine is available.
CSC uses the Lmod module system.
Modules are set up in a hierarchical fashion, meaning that you need to load a compiler before the MPI and other library modules become visible.
The system comes with two compiler families installed: Intel and GCC. Both the 18 and 19 versions of the Intel compiler are installed, and GCC is available in versions 9.1, 8.3 and 7.4. The PGI compiler 19.7 is available for building GPU applications.
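For example, loading a compiler first makes the matching MPI and library modules visible; the exact module names and versions below are assumptions, so check module avail on Puhti:

$ module load gcc/9.1.0      # load a compiler first (version string may differ)
$ module avail               # MPI and library modules for this compiler are now listed
$ module load hpcx-mpi       # the recommended MPI implementation
$ module list                # show what is currently loaded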
High performance libraries
Puhti has several high performance libraries installed; see more information about libraries.
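To find a specific library and see which compiler or MPI modules it requires, Lmod's search command can be used; fftw is only an example name here and may not match the modules actually installed:

$ module spider fftw         # search all module trees for the library
$ module spider fftw/3.3.8   # a specific (hypothetical) version shows which modules to load first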
Currently the system has two MPI implementations installed. We recommend testing hpcx-mpi first: it comes from the network vendor and is based on OpenMPI.
You will need to have the MPI module loaded when submitting your jobs.
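In practice this means loading the MPI module in the shell from which you submit; the module names and my_job.sh below are placeholders:

$ module load gcc/9.1.0 hpcx-mpi    # have the compiler and MPI modules loaded in the submitting shell
$ module list                       # confirm they appear before calling sbatch or srun
$ sbatch my_job.sh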
More information about specific applications can be found here.
Python is available through the python-env module, which replaces the system python command with Python 3.7. This Anaconda-based environment has many regularly used packages installed by default.
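A minimal check after loading the module; numpy is assumed to be among the preinstalled packages and is used here only as an example:

$ module load python-env
$ python --version                                     # should now report Python 3.7.x
$ python -c "import numpy; print(numpy.__version__)"   # works only if numpy is preinstalled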
Puhti uses the Slurm batch job system.
A description of the different Slurm partitions can be found here. Note that the GPU partitions are available from the normal login nodes.
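You can also list the partitions and their limits directly on a login node:

$ sinfo --summarize          # one-line overview per partition
$ sinfo -p test              # details for a single partition; the name "test" is an assumption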
Very important change!!
You have to specify your billing project in your batch script with the --account=<project>
flag. Failing to do so will cause your job to be held with the reason “AssocMaxJobsLimit”.
The flag is also required when running srun directly.
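A minimal sketch of a batch script, assuming the standard Slurm --account flag and placeholder values for the project number, partition, module versions and program name:

#!/bin/bash
#SBATCH --account=project_2001234    # your billing project (placeholder number)
#SBATCH --partition=test             # partition name is an assumption, see the partition list
#SBATCH --time=00:10:00              # run time limit hh:mm:ss
#SBATCH --ntasks=4                   # number of MPI tasks

module load gcc/9.1.0 hpcx-mpi       # module names/versions may differ, check module avail

srun ./my_mpi_program                # my_mpi_program is a placeholder executable

Submit it with sbatch my_job.sh; a plain srun invocation on the command line needs --account=<project> as well.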
- Login nodes can access the Internet
- Compute nodes cannot access the Internet
The project based shared storage can be found under /scratch/<project>.
Note that this folder is shared by all users in a project. This folder is not meant for long term data storage:
files that have not been used for 90 days will be automatically removed. The default quota for this folder is 1 TB. There is also a persistent project based
storage with a default quota of 50 GB. It is located under
/projappl/<project>. Each user can store up to 10 GB of data in their home directory ($HOME).
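A quick way to check these locations from the command line; the project number below is a placeholder:

$ echo $HOME                           # personal home directory, 10 GB quota
$ ls /scratch/project_2001234          # shared scratch, files unused for 90 days are removed
$ ls /projappl/project_2001234         # persistent project storage, 50 GB default quota
$ du -sh /scratch/project_2001234/*    # see how much space each item uses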
Linux basics Tutorial for CSC
If you are new to the Linux command line or to using supercomputers, please consult this tutorial section!