NAMD

NAMD is a parallel molecular dynamics code designed for high-performance simulation of large biomolecular systems. The software is developed and distributed by the Theoretical and Computational Biophysics Group at the Beckman Institute of the University of Illinois.

Available

The following versions are available:

  • Puhti: 2.14, 2.14-cuda, 3.0alpha11-cuda
  • Mahti: 2.14

License

CSC has obtained a computing center license, which allows usage for non-commercial research. For commercial use, contact namd@ks.uiuc.edu. See also the acknowledgement requirements under References below.

Usage

NAMD can be run either with CPUs or with a GPU + CPUs.

Performance considerations

Tests show that reserving one core per task for communication is beneficial when running on multiple nodes:

(( namd_threads = SLURM_CPUS_PER_TASK - 1 ))

This is also recommended by the NAMD manual. Please test with your input.

Make sure --ntasks-per-node multiplied by --cpus-per-task equals 40 (Puhti) or 128 (Mahti), i.e. all cores in a node. Try different ratios and select the optimal one.
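
As a quick check that a candidate ratio fills a node, the arithmetic can be scripted. This is only a sketch using the node sizes given above, not part of the batch scripts further down:

# derive --cpus-per-task from a chosen --ntasks-per-node so that the
# product fills a node (40 cores on Puhti, 128 on Mahti)
cores_per_node=128            # Mahti; use 40 on Puhti
ntasks_per_node=16            # candidate value to test
(( cpus_per_task = cores_per_node / ntasks_per_node ))
echo "--ntasks-per-node=${ntasks_per_node} --cpus-per-task=${cpus_per_task}"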

The data below shows the ApoA1 benchmark (92k atoms, 2 fs timestep) on Mahti with ns/day as a function of allocated nodes and varying the number of namd_threads as set in the Mahti script below.

[Figure: NAMD scaling on Mahti, ns/day vs. number of nodes for different namd_threads values]

The data also shows the following:

  • Optimal settings depend on the amount of resources in addition to system and run parameters.
    • For this system, as the amount of resources is increased, the optimal performance shifts from more threads per task (15) towards fewer threads per task (3).
  • 1 GPU (+ 10 CPU cores) on Puhti gives a performance that is comparable to running on two full Mahti nodes. However, note that using more resources to get results faster is also more expensive in terms of consumed billing units. To avoid wasting resources, ensure that your job actually benefits from increasing the number of cores. You should get at least a 1.5-fold speedup when doubling the amount of resources.
  • To test your own system, run e.g. 500 steps of dynamics and search for the Benchmark time: line in the output (see the example below).
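
NAMD prints the benchmark timings near the start of a run. One way to pick them out of the log, assuming the output file name apoa1.out used in the scripts below, is:

grep "Benchmark time" apoa1.out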

NAMD 3.0 Alpha

An alpha-version of NAMD 3.0 is available on Puhti as namd/3.0alpha11-cuda. This module offers 2-3 times better GPU performance than namd/2.14-cuda, e.g. 156 ns/day vs. 55 ns/day for the ApoA1 system. However, as with all alpha versions, please check your results carefully.
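
A minimal GPU batch sketch for the alpha module is shown below. It assumes that the NAMD 3.0 alpha executable is named namd3 (as in the upstream alpha builds) and otherwise mirrors the namd/2.14-cuda GPU script further down; verify the executable name provided by the module before use.

#!/bin/bash
#SBATCH --account=<project>
#SBATCH --partition=gputest
#SBATCH --time=0:10:00
#SBATCH --ntasks=1
#SBATCH --cpus-per-task=10
#SBATCH --gres=gpu:v100:1

module load namd/3.0alpha11-cuda

# assumption: the NAMD 3.0 alpha binary is called namd3
srun namd3 +ppn ${SLURM_CPUS_PER_TASK} +setcpuaffinity +devices ${GPU_DEVICE_ORDINAL} apoa1.namd > apoa1_namd3.out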

Batch script example for Puhti

The script below requests 5 tasks per node and 8 threads per task on two full Puhti nodes (80 cores). One thread per task is reserved for communication.

#!/bin/bash 
#SBATCH --account=<project>
#SBATCH --partition=test
#SBATCH --time=0:10:00
#SBATCH --nodes=2             
#SBATCH --ntasks-per-node=5   # test to find the optimum number
#SBATCH --cpus-per-task=8     # 40/(ntasks-per-node)

module purge
module load gcc/11.3.0
module load openmpi/4.1.4
module load namd/2.14

# leave one core per process for communication
(( namd_threads = SLURM_CPUS_PER_TASK - 1 ))

srun namd2 +ppn ${namd_threads} apoa1.namd > apoa1.out

# while NAMD suggests using 1 thread per task for communication (as above)
# all cores for computing can be tested with:
# srun namd2 +ppn ${SLURM_CPUS_PER_TASK} apoa1.namd > apoa1.out

Batch script example for Puhti using GPU

Note that NAMD runs most efficiently on a single GPU, and this is usually more cost-efficient than running on multiple CPU-only nodes.

#!/bin/bash 
#SBATCH --account=<project>
#SBATCH --partition=gputest
#SBATCH --time=0:10:00
#SBATCH --ntasks=1     
#SBATCH --cpus-per-task=10  
#SBATCH --gres=gpu:v100:1

module load namd/2.14-cuda

srun namd2 +ppn ${SLURM_CPUS_PER_TASK} +setcpuaffinity +devices ${GPU_DEVICE_ORDINAL} apoa1.namd > apoa1.out

Batch script example for Mahti

The script below requests 16 tasks per node and 8 threads per task on two full Mahti nodes (256 cores). One thread per task is reserved for communication.

#!/bin/bash
#SBATCH --account=<project>
#SBATCH --partition=test
#SBATCH --time=0:10:00 
#SBATCH --nodes=2
#SBATCH --ntasks-per-node=16  # test to find the optimum number
#SBATCH --cpus-per-task=8     # 128/(ntasks-per-node)

module purge
module load gcc/11.2.0
module load openmpi/4.1.2
module load namd/2.14

# leave one core per process for communication
(( namd_threads = SLURM_CPUS_PER_TASK - 1 ))

srun namd2 +ppn ${namd_threads} apoa1.namd > apoa1.out

Submit batch jobs with:

sbatch namd_job.bash

References

The NAMD License Agreement specifies that any reports or published results obtained with NAMD shall acknowledge its use and credit the developers as:

NAMD was developed by the Theoretical and Computational Biophysics Group in the Beckman Institute for Advanced Science and Technology at the University of Illinois at Urbana-Champaign.

Also, any published work which utilizes NAMD shall include the following reference:

James C. Phillips, David J. Hardy, Julio D. C. Maia, John E. Stone, João V. Ribeiro, Rafael C. Bernardi, Ronak Buch, Giacomo Fiorin, Jérôme Hénin, Wei Jiang, Ryan McGreevy, Marcelo C. R. Melo, Brian K. Radak, Robert D. Skeel, Abhishek Singharoy, Yi Wang, Benoît Roux, Aleksei Aksimentiev, Zaida Luthey-Schulten, Laxmikant V. Kalé, Klaus Schulten, Christophe Chipot, and Emad Tajkhorshid. Scalable molecular dynamics on CPU and GPU architectures with NAMD. Journal of Chemical Physics, 153:044130, 2020. https://doi.org/10.1063/5.0014475

In addition, electronic documents should include a direct link to the official NAMD page.

More information