...
Model for 1D/2D wave propagation, sediment transport and morphological changes
...
Summary
All information about the XBeach model can be found on the website http://www.xbeach.org. This page only describes internal Deltares procedures. Below you will find instructions to run XBeach on the Deltares H4 cluster.
Compiling and running XBeach MPI on the h4 cluster
Step 1: Compiling the program
- Set the Intel 11, 32-bit compiler by executing the following command lines:
...
- Go to the directory where your source code is available
- Update the source code with command:
Code Block |
---|
svn update |
- Clean up your directory by typing:
Code Block |
---|
make distclean |
- In here build a Makefile with the command:
Code Block |
---|
FC=gfortran44 ./configure |
- Type ./configure --help for detailed options; e.g. to build an MPI executable you can use one of the following commands:
Code Block |
---|
FC=gfortran44 ./configure --with-mpi
FC=gfortran44 MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90 ./configure --with-mpi
|
- You can also use the gfortran44 compiler to build a Makefile with netcdf output:
Code Block |
---|
FC=gfortran44 PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig ./configure --with-netcdf |
- Or both:
Code Block |
---|
FC=gfortran44 MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90 PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig ./configure --with-netcdf --with-mpi |
- Build your XBeach executable by running your Makefile and typing:
...
- If there are no errors, you now have your executable.
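Putting Step 1 together, a typical build session for an MPI executable with netcdf output might look like the following. This is a sketch only: the source directory path is a placeholder, and the compiler and library paths are the ones quoted in the examples above, so these commands will only work on the h4 cluster itself.

```shell
# Sketch of a full build session on the h4 cluster (paths as in the examples above)
cd /path/to/your/xbeach/source        # placeholder: wherever your checkout lives
svn update                            # bring the source up to date
make distclean                        # remove stale build products
FC=gfortran44 \
  MPIFC=/opt/mpich2-1.0.8-gcc44-x86_64/bin/mpif90 \
  PKG_CONFIG_PATH=/opt/netcdf-4.1.1/gfortran/lib/pkgconfig \
  ./configure --with-netcdf --with-mpi
make                                  # builds the xbeach executable
```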
Step 2: Run XBeach MPI
- Put your run directory with the XBeach input files somewhere accessible for the h4-cluster (e.g. the P-drive)
...
Code Block |
---|
### ********************************************************************
### ********************************************************************
### ** **
### ** Example shell script to run XBeach executable in parallel **
### ** with MPICH2 via SGE on linux cluster. **
### ** c 2009 Deltares **
### ** author: Menno Genseberger **
### ** Changes: Leroy van Logchem 24 Nov 2010 **
### ** -- Use tight integrated mpich2 PE. Requires secret file: **
### ** ~/.mpd.conf **
### ** secretword=mys3cret **
### ** **
### ********************************************************************
### ********************************************************************
### The next line specifies the shell "/bin/sh" to be used for the execution
### of this script.
#!/bin/sh
### The "-cwd" requests execution of this script from the current
### working directory; without this, the job would be started from the
### user's home directory.
#$ -cwd
### The name of this SGE job is explicitly set to another name;
### otherwise the name of the SGE script itself would be used. The name
### of the job also determines how the job's output files will be called.
#$ -N XB_ZandMotor
### The next phrase asks for a "parallel environment" called "distrib",
### to be run with 3 slots (for instance 3 cores).
### The parallel environment name is specific to the H3/H4 linux clusters
### (it is for instance "mpi" on DAS-2/DAS-3 linux clusters).
#$ -pe distrib 3
### Start SGE.
. /opt/sge/InitSGE
### Code compiled with Intel 11.0 compiler.
export LD_LIBRARY_PATH=/opt/intel/Compiler/11.0/081/lib/ia32:$LD_LIBRARY_PATH
### Specific setting for H3/H4 linuxclusters, needed for MPICH2
### commands (mpdboot, mpirun, mpiexec, mpdallexit etc.).
export PATH="/opt/mpich2/bin:${PATH}"
xbeach_bin_dir=/u/thiel/checkouts/trunk
cp $xbeach_bin_dir/xbeach xbeach.usedexe
### Some general information available via SGE. Note that NHOSTS can be
### smaller than NSLOTS when a host provides more than one slot.
echo ----------------------------------------------------------------------
echo Parallel run of XBeach with MPICH2 on H4 linuxcluster.
echo SGE_O_WORKDIR: $SGE_O_WORKDIR
echo HOSTNAME : $HOSTNAME
echo NHOSTS : $NHOSTS
echo NQUEUES : $NQUEUES
echo NSLOTS : $NSLOTS
echo PE_HOSTFILE : $PE_HOSTFILE
echo Contents of auto generated machinefile:
cat $TMPDIR/machines
echo ----------------------------------------------------------------------
### General, start XBeach in parallel by means of mpirun.
mpirun -np $NSLOTS $xbeach_bin_dir/xbeach >> output_xbeach_mpi 2>&1
### General for MPICH2, finish your MPICH2 communication network.
mpdallexit
|
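The `>> output_xbeach_mpi 2>&1` part of the mpirun line above appends both standard output and standard error to a single log file, so nothing from the run is lost. A small self-contained illustration (the log file name is simply the one used in the script; the echoed text is made up):

```shell
# Append both stdout and stderr of a command to one log file,
# the same redirection the job script uses for the mpirun line
cd "$(mktemp -d)"
{ echo "normal output"; echo "an error message" >&2; } >> output_xbeach_mpi 2>&1
cat output_xbeach_mpi
```

Both lines end up in output_xbeach_mpi, in the order they were written.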
...
- Put your shell script (<name>.sh) in the simulation directory
- In PuTTY, go to your simulation directory on the h4
- Submit your job to the h4-cluster with:
Code Block |
---|
qsub <name>.sh |
Troubleshooting
Of course life is not always straightforward on a Linux cluster. Don't feel stupid when you type "qstat" and your name is not on the list. Stay calm and try not to cry; instead, go to your simulation directory and check the .o file. It will tell you the error. If the error is about mpd, please contact Jamie (8176) or Jaap (8363); if not, contact ICT.
Note for Jaap and Jamie: type "mpd &" and follow the instructions:
Code Block |
---|
cd $HOME
touch .mpd.conf
chmod 600 .mpd.conf |
Now edit the .mpd.conf file and add the text:
Code Block |
---|
MPD_SECRETWORD=secret
|
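mpd checks that the secret file is readable by its owner only, which is why the `chmod 600` step above matters. A quick way to verify the permissions, sketched in a throwaway temporary directory (the secret word is a placeholder, as above):

```shell
# Create a .mpd.conf with owner-only permissions and verify the mode
cd "$(mktemp -d)"
touch .mpd.conf
chmod 600 .mpd.conf
echo "MPD_SECRETWORD=secret" > .mpd.conf
stat -c '%a' .mpd.conf     # GNU stat prints the octal mode: 600
```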
...
Compile XBeach parallel version for use on Linux/cluster
- On your Windows PC, start Exceed > Exceed XDMCP Broadcast (or the icon on your desktop).
- Choose 'devux.wldelft.nl'.
- Under Sessions, choose 'GNOME'.
- Use your Deltares user name and password to log in.
- Start a Terminal session (Applications > System Tools > Terminal).
- Make a directory "checkouts":
Code Block |
---|
mkdir ~/checkouts |
- Check out the latest and greatest version of XBeach (enter your Deltares password if asked for):
Code Block |
---|
svn co https://repos.deltares.nl/repos/XBeach/trunk ~/checkouts/XBeach |
If you already have the local repository, but want to update it, use:
Code Block |
---|
svn update |
- Go to the XBeach directory:
Code Block |
---|
cd ~/checkouts/XBeach |
- Run:
Code Block |
---|
FC=ifort ./configure |
- Run:
Code Block |
---|
make |
- Make sure version 10 of the Intel Fortran compiler is used (instead of version 8):
Code Block |
---|
. /opt/intel/fc/10/bin/ifortvars.sh |
- Delete all files not needed for compiling, to get rid of stale build products that could interfere:
Code Block |
---|
make clean |
- Compile the parallel version:
Code Block |
---|
PATH=/opt/mpich2-1.0.7/bin:$PATH USEMPI=yes ./configure && make |
- (optional) Copy the executable to your personal bin-folder:
Code Block |
---|
cd ~/bin
cp ~/checkouts/bin/xbeach.mpi . |
Compiled version
Compiled Linux versions of XBeach (both the MPI and serial version) can be found in Arend's bulletin box:
Code Block |
---|
/BULLETIN/pool_ad/xbeach_linux/ # or /u/pool_ad/BULLETIN/xbeach_linux/ |
...
Run XBeach parallel version on h3 cluster
To run a parallel XBeach job on the h3 cluster (from now on 'h3'), you need 3 things:
- A parallel version of XBeach somewhere on the u-drive (preferably in /u/username/bin)
- A job (shell) file you feed to the cluster
- A directory on your part of the u-drive with the simulation-data (params.txt, bathy.dep, etc)
Logging on to cluster
Windows
The easiest way to log on to h3, is using the program PuTTY, which can be found on the Desktop of your Deltares PC. The first time you connect to h3, you need to supply some basic parameters, which you can save for later use (with 'Save'). In the Dialog Box, under Host Name, fill in: h3.wldelft.nl; you don't need to touch the other options (leave the Protocol to SSH). Optionally, save the information as e.g. 'h3'. The first time you connect to h3, you'll probably see a message about the server's host key. Click Yes to accept. Log in with your Deltares user name and password.
Linux
To connect from e.g. the development server (devux) to h3, open a terminal session and type
Code Block |
---|
ssh h3 |
to connect to h3. Your user name is passed along automatically, so you only need to enter your password.
Obtain latest version of XBeach executable
There are 2 ways to obtain the latest (or any) version of the parallel XBeach executable:
- Compile it yourself (see the compilation instructions above)
- Copy it from Arend Pool's BULLETIN:
Code Block cd ~/bin cp /u/pool_ad/BULLETIN/xbeach_linux/current/xbeach.mpi .
Obtain the XBeach MPI job file
There are also 2 ways to obtain the job file to run xbeach.mpi on h3:
- Copy it from Arend Pool's BULLETIN:
Code Block |
---|
mkdir ~/simulations   # can be skipped if the directory already exists
cd ~/simulations
cp /u/pool_ad/BULLETIN/xbeach_linux/xbeach.sh . |
In the above instructions it is assumed that you place the job file in the directory ~/simulations. If you want to place it somewhere else, feel free to do so and change the instructions accordingly.
- Create the shell file yourself in a location you prefer. The file should contain the following code:
Code Block |
---|
#!/bin/sh
. /opt/sge/InitSGE
export PATH="/opt/mpich2/bin:$PATH"
echo "numslots: $DELTAQ_NumSlots"
echo "nodes: $DELTAQ_NodeList"
echo $DELTAQ_NodeList | tr ' ' '\n' | sed 's/.wldelft.nl//' > machines
echo "Machines file:"
cat machines
mpdboot -1 -n $DELTAQ_NumSlots -f machines
mpirun -np $DELTAQ_NumSlots ~/bin/xbeach.mpi
mpdallexit |
The second-to-last line contains the path to xbeach.mpi; edit it if you have placed the executable somewhere else.
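The tr/sed line in the job file turns the space-separated $DELTAQ_NodeList into a machines file with one short host name per line. It can be demonstrated with a made-up node list (the host names below are hypothetical):

```shell
# Turn a space-separated node list into one short host name per line,
# as the job file does when writing the machines file
DELTAQ_NodeList="c001.wldelft.nl c002.wldelft.nl c003.wldelft.nl"
echo $DELTAQ_NodeList | tr ' ' '\n' | sed 's/.wldelft.nl//'
```

This prints c001, c002 and c003 on separate lines, which is the format mpdboot expects in its machines file.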
Run the parallel job
Make sure you have placed your simulation (directory) somewhere on a shared location (u-drive or p-drive) and go ('cd') there (for example):
Code Block |
---|
cd /u/username/simulations/simulation1 # or
cd /p/project/simulations/simulation1 |
Finally, submit your job to the grid engine (h3) with the following command:
Code Block |
---|
qsub -pe spread N /path-to-job-file/xbeach.sh |
where N is the number of nodes you want to use and path-to-job-file the path to xbeach.sh.
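For example, to run on 4 nodes with the job file placed in ~/simulations as suggested above (the path is an assumption; adjust it to wherever you put xbeach.sh):

```shell
qsub -pe spread 4 ~/simulations/xbeach.sh
```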
More info
...