Building and Running Hello World Fortran
This page describes how to build and run an MPI Hello World program written in Fortran90.
Source Code
The Fortran90 MPI Hello World source code is here:
http://people.sc.fsu.edu/~jburkardt/f_src/hello_mpi/hello_mpi.f90
DRP Cluster
The following commands build the Fortran90 MPI Hello World program, then submit the executable to the job scheduler for execution on the DRP cluster. All of the commands listed are run on the DRP front-end nodes (drpfen01 or drpfen02).
The CCI file system is described here.
Load the MPI module
See Modules for instructions on loading MPI compiler wrappers.
For this example, the MVAPICH2 version 2.0 module, built with GCC 6.3, is loaded:
module load gcc
module load mvapich2
Build
mpif90 hello_mpi.f90 -o hello_mpi
Submit the Job
Create a batch script named submitMvapich.sh that launches the hello_mpi executable.
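A minimal sketch of such a script, assuming srun is used as the MPI launcher (the script on the original page may instead have built a temporary hostfile for MVAPICH2's mpirun_rsh, which would explain the `rm /tmp/hosts.*` trace in the sample output below):

```shell
#!/bin/bash
# Launch the MPI executable under Slurm.
# srun picks up the process count from the -n option passed to sbatch,
# so the count does not need to be repeated here.
srun ./hello_mpi
```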
Give submitMvapich.sh executable permissions:
chmod +x submitMvapich.sh
Submit the job to the Slurm job scheduler, requesting 4 processes:
sbatch -n 4 -t 5 ./submitMvapich.sh
For more info on the job scheduler see Slurm.
Monitor the Job
The status of the job can be checked with the following command:
squeue -l
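A couple of common variants of this command may also be useful (these are standard Slurm client commands; JOBID is a placeholder):

```shell
# List only your own jobs rather than the whole queue
squeue -u "$USER"

# Show full details for one job; replace JOBID with the ID printed by sbatch
scontrol show job JOBID
```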
View Output
By default, Slurm writes stderr and stdout to a file named slurm-<jobid>.out, where <jobid> is the job ID reported by sbatch. For this example, that file contains output similar to the following:
Process 2 says "Hello, world!"
HELLO_MPI - Master process:
FORTRAN90/MPI version
An MPI test program.
The number of processes is 4
Process 0 says "Hello, world!"
HELLO_MPI - Master process:
Normal end of execution: "Goodbye, world!".
Elapsed wall clock time = 0.116825E-03 seconds.
Process 1 says "Hello, world!"
Process 3 says "Hello, world!"
+ rm /tmp/hosts.1070892