Core Facility for Computational Modeling

Users log in to the head node of Polaris at polaris.cllrnet.ca. Access is available only through ssh, a secure shell that protects users' privacy and prevents unauthorized access. This means that traditional telnet, rsh and rlogin access is not possible. SSH is available on most recent UNIX systems, such as Linux, Solaris and HP-UX, and is accessed by typing the following command:

    ssh username@polaris.cllrnet.ca
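
If you log in frequently, an entry in the ~/.ssh/config file on your local machine can shorten the command. This is only a convenience sketch; the host alias and the username "jsmith" are placeholders, so substitute your own details.

    # Run on your LOCAL machine, not on Polaris ("jsmith" is a placeholder username)
    printf 'Host polaris\n    HostName polaris.cllrnet.ca\n    User jsmith\n' >> ~/.ssh/config
    # Afterwards "ssh polaris" is equivalent to "ssh jsmith@polaris.cllrnet.ca"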

Users not familiar with UNIX will find a brief introduction to the UNIX shell here.

For security reasons, standard FTP services are not available on Polaris. Instead, you can use the sftp and scp protocols, which are secure and function similarly to the more familiar UNIX ftp and rcp commands. For example, to copy the contents of a local directory "work" to the home directory of your Polaris account, you would type:

    scp -r work/* username@polaris.cllrnet.ca:~/
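
sftp works in much the same way, as an interactive session. The session below is only a sketch; "jsmith" and the file names are placeholders.

    # Start an interactive transfer session
    sftp jsmith@polaris.cllrnet.ca
    # At the sftp> prompt, typical commands are:
    #   put input.dat       # upload a file to Polaris
    #   get output.dat      # download a file from Polaris
    #   quit                # end the session

    # Copying a directory back with scp works in the reverse direction:
    scp -r jsmith@polaris.cllrnet.ca:~/work ./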
news: Polaris currently boasts a 175 GB RAID-5
disk server, which is capable of storing data
redundantly across multiple hard disks. It also uses
uninterruptable power supplies to maintain power
during brief outages. Finally, we strive to maintain
maximum network security by using only secure services,
firewalling, enforcing good password habits. The bad news: No system is perfectly safe.
While the RAID system does afford users some protection against data loss, user data are not backed up. Therefore, users must work under the assumption that data are always susceptible to catastrophic loss due to circumstances beyond our control (e.g., fire, flood, viruses and other malicious activity). Individual users are urged to back up their data to their home systems on a regular basis. Likewise, be aware that files and folders that are accidentally deleted from the disk are not recoverable unless they have been backed up.
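
One simple approach, run from your own machine rather than from Polaris, is to pull your directories home with scp, or with rsync over ssh if rsync is installed at both ends. The username and directory names below are placeholders.

    # Run on your LOCAL machine
    # One-off copy of a project directory:
    scp -r jsmith@polaris.cllrnet.ca:~/myproject ~/polaris-backup/

    # Incremental backups (only changed files are transferred):
    rsync -avz -e ssh jsmith@polaris.cllrnet.ca:myproject/ ~/polaris-backup/myproject/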

In terms of disk usage, please be aware that we reserve the right to limit disk space on a per-user basis. We also appreciate your cooperation in removing old files once you have offloaded them to your local system.
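
To see how much space your files occupy before offloading and deleting them, the standard du command is enough:

    du -sk ~/* | sort -n    # per-directory usage in kilobytes, largest last
    du -sk ~                # total usage of your home directory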

The bottom line: Access is provided to users in the hope that it will be useful, but without any warranty; without even the implied warranty of merchantability or fitness for a particular purpose.

Each node has a 20 MB disk drive that can be used for temporary scratch space. This space is located in /tmp/ on each node and is local to that node: the contents of /tmp/ differ from drive to drive. Please note, however, that each node is a dual-processor machine, and both processors have access to the same /tmp/ space.
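
A common pattern in a job script is to stage input into the node's /tmp/, compute there, and copy the results back to your home directory before the job ends. The sketch below assumes illustrative directory and program names.

    #!/bin/sh
    # Use node-local scratch space (all paths are placeholders)
    SCRATCH=/tmp/$USER.$$              # per-user, per-job scratch directory
    mkdir -p $SCRATCH
    cp ~/myjob/input.dat $SCRATCH/     # stage input onto the node's local disk
    cd $SCRATCH
    ~/myjob/mycode input.dat > output.dat
    cp output.dat ~/myjob/             # copy results back to your home directory
    cd /
    rm -rf $SCRATCH                    # clean up /tmp/ on the node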

Please don't use your user account on Polaris for receiving email. Also be sure you are forwarding email to your regular account by including a .forward file in your home directory that lists your regular email address.
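
The .forward file is a single line containing the address to forward to; the address below is a placeholder.

    # Run on Polaris; replace the address with your real one
    echo "your.name@home.institution.ca" > ~/.forward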

Currently, the primary package for parallel code is MPI. The MPI compilers are mpicc and mpiCC (C and C++, respectively) and mpif90 and mpif77 (FORTRAN 90 and 77). These work similarly to their non-MPI equivalents.
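
For example, compiling an MPI program looks just like an ordinary compiler invocation; the source and program names below are placeholders.

    # C source
    mpicc -O2 -o hello_mpi hello_mpi.c

    # FORTRAN 90 source
    mpif90 -O2 -o hello_mpi hello_mpi.f90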

Other packages such as PVM can be installed as required, resources permitting.

All processes must be submitted to the batch scheduler, which is responsible for assigning individual nodes to each process. Running processes directly from the command line or through mpirun executes them on the head node (polaris) only, which is not permitted.

Users should use the 'cllrnet' queue, which is used for both serial and parallel jobs. Queueing priority is currently on a first-come, first-served basis.

Polaris uses the Sun Grid Engine (SGE) scheduler. A complete user's guide is available here. For the impatient, the relevant commands are qsub, qalter, qhold, qrls, qmon, qrsh and qstat.
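
A few everyday invocations, with placeholder job IDs, file names and usernames:

    qsub myjob.sh         # submit a job script ("myjob.sh" is a placeholder)
    qstat                 # list queued and running jobs
    qstat -u jsmith       # show only your own jobs ("jsmith" is a placeholder)
    qhold 1234            # hold a queued job (1234 is a placeholder job ID)
    qrls 1234             # release the held job
    qmon &                # graphical interface (requires X forwarding)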

Interactive single-process jobs can be submitted using the qrsh command, which allocates a single processor and opens a shell on it. However, if no processors are currently available, you must wait until one can be allocated before the shell is opened.
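
For example (the -q option shown here simply illustrates requesting the cllrnet queue explicitly):

    qrsh                  # request an interactive shell on a compute node
    qrsh -q cllrnet       # the same, explicitly naming the queue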

The qsub command can be used to submit a single process to be run. This is best done by creating a script that executes the desired command and using qsub to submit the script. An example script is found here.
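
A minimal serial job script might look like the sketch below; the job name, queue request, program path and file names are illustrative rather than prescribed.

    #!/bin/sh
    #$ -N myjob            # job name (placeholder)
    #$ -q cllrnet          # request the cllrnet queue
    #$ -cwd                # run in the directory the job was submitted from
    #$ -S /bin/sh          # shell used to interpret the script

    ~/myjob/mycode input.dat > output.dat

    # Submit with:  qsub serial.sh   (placeholder file name)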

Parallel jobs are submitted similarly, using the qsub command and a script containing an mpirun command. An example script is found here.
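
A parallel script follows the same pattern but requests a parallel environment and launches the program through mpirun. The parallel environment name "mpi" and the machine-file location below are assumptions that depend on the local SGE/MPI configuration; check the SGE user's guide linked above for the exact form used on Polaris.

    #!/bin/sh
    #$ -N mympijob         # job name (placeholder)
    #$ -q cllrnet          # request the cllrnet queue
    #$ -cwd
    #$ -pe mpi 8           # request 8 slots from a parallel environment assumed to be named "mpi"

    # $NSLOTS is set by SGE; the machines file location is typical of SGE's MPI
    # integration but is configuration-dependent.
    mpirun -np $NSLOTS -machinefile $TMPDIR/machines ./hello_mpi

    # Submit with:  qsub parallel.sh   (placeholder file name)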

An introduction to MPI that includes programs to get you started is available here.