object --+
         |
         +-- Manager

Base class to launch simulations remotely on computers with queuing systems.

In essence, the manager logs into the remote machine via ssh and runs the job
there. Derive a class from :class:`Manager`, override the attributes, and
implement a specialized :meth:`Manager.qsub` method if needed.

ssh_ must be set up (via `~/.ssh/config`_) to allow access via a command line
such as ::

   ssh <hostname> <command> ...

Typically you want something such as ::

   host <hostname>
        hostname <hostname>.fqdn.org
        user <remote_user>

in ``~/.ssh/config`` and you should also set up public-key authentication so
that you do not have to type your password all the time.
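
A minimal sketch of such a derived class (hedged: the import path of
:class:`Manager` and all concrete values below are assumptions; only the
attribute names come from the list further down):

.. code-block:: python

   # Sketch of a derived Manager for a hypothetical cluster.
   # The attribute names (_hostname, _scratchdir, _qscript, _walltime) are the
   # ones documented below; the concrete values are placeholders and the import
   # path of Manager is an assumption.
   from gromacs.manager import Manager   # import path assumed

   class MyCluster(Manager):
       _hostname = "mycluster.fqdn.org"   # must match a host entry in ~/.ssh/config
       _scratchdir = "/scratch/myuser"    # scratch directory on the remote machine
       _qscript = "mycluster.sge"         # template submission script (placeholder name)
       _walltime = 24.0                   # hours; mdrun is stopped via -maxh at 99%

Before using such a class, ``ssh mycluster.fqdn.org true`` should succeed
without a password prompt.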
Class Variables:

_hostname = None
    hostname of the super computer (**required**)

_scratchdir = None
    scratch directory on *hostname* (**required**)

_qscript = None
    name of the template submission script appropriate for the queuing system
    on :attr:`Manager._hostname`; can be a path to a local file, a template
    stored in :data:`gromacs.config.qscriptdir`, or a key for
    :data:`gromacs.config.templates` (**required**)

_walltime = None
    maximum run time of the script in hours; the queuing system script
    :attr:`Manager._qscript` is supposed to stop :program:`mdrun` after 99% of
    this time via the ``-maxh`` option. A value of ``None`` or ``inf``
    indicates no limit.

log_RE = re.compile(...)
    regular expression used by :meth:`Manager.get_status` to parse the logfile
    from :program:`mdrun`.

Set up the manager.

:Arguments:
    *statedir*
        directory component under the remote scratch dir (should be different
        for different jobs) [basename(CWD)]
    *prefix*
        identifier for job names [MD]

scp *dirname* to the host.

:Arguments: *dirname* to be transferred
:Returns: return code from scp

scp *filename* to the host into *dirname*.

:Arguments: *filename* and *dirname* to be transferred
:Returns: return code from scp

``scp -r`` *dirname* from the host into *targetdir*.

:Arguments: *dirname* and *targetdir*
:Returns: return code from scp

Find *checkfile* locally if possible.

If *checkfile* is not found in *dirname* then it is transferred from the
remote host. If needed, the trajectories are concatenated using
:meth:`Manager.cat`.

:Returns: local path of *checkfile*

Concatenate parts of a run in *dirname*.

Always uses :func:`gromacs.cbook.cat` with *resolve_multi* = 'guess'.

.. Note:: The default is to immediately delete the original files
   (*cleanup* = ``True``).

:Keywords:
    *dirname*
        directory to work in
    *prefix*
        prefix (deffnm) of the files [md]
    *cleanup* : boolean
        if ``True``, remove all used files [``True``]
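
A hedged usage sketch (``m`` stands for an instance of a derived
:class:`Manager` class and ``"MD"`` for a run directory; both are assumptions):

.. code-block:: python

   # Concatenate the parts of the run in the "MD" directory; by default the
   # original part files are deleted afterwards (cleanup=True).
   m.cat("MD", prefix="md", cleanup=True)

   # keep the original part files instead of deleting them
   m.cat("MD", prefix="md", cleanup=False)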

Submit the job remotely on the host.

This is the most primitive implementation: it just runs the commands ::

   cd remotedir && qsub qscript

on :attr:`Manager._hostname`. *remotedir* is *dirname* under
:attr:`Manager._scratchdir` and *qscript* defaults to the queuing system
script that was produced from the template :attr:`Manager._qscript`.
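
For example (a sketch; ``m`` is assumed to be an instance of a derived
:class:`Manager` class and the directory name is a placeholder):

.. code-block:: python

   # Submit the queuing-system script in the remote copy of "MD", i.e.
   # effectively: ssh <hostname> 'cd <scratchdir>/<statedir>/MD && qsub <qscript>'
   m.qsub("MD")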

Check the status of the remote job by looking into the logfile.

Reports on the status of the job and extracts the performance in ns/d if
available (which is saved in :attr:`Manager.performance`).

:Arguments:
    - *dirname*
    - *logfilename* can be a shell glob pattern [md*.log]
    - *silent* = True/False; True suppresses log.info messages

:Returns: ``True`` if the job is done, ``False`` if it is still running,
          ``None`` if no log file was found to look at.

.. Note:: Also returns ``False`` if the connection failed.

.. Warning:: This is an important but somewhat **fragile** method. It
   needs to be improved to be more robust.
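
A simple polling loop built on the return values documented above (a sketch;
``m`` and the directory name are assumptions):

.. code-block:: python

   import time

   # get_status() returns True when mdrun is done, False while it is still
   # running (or when the connection failed), and None when no logfile was
   # found yet.
   while not m.get_status("MD", logfilename="md*.log", silent=True):
       time.sleep(300)                                  # re-poll every 5 minutes
   print("performance: %r ns/d" % m.performance)        # set by get_status() if available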

Calculate how many dependent (chained) jobs are required.

Uses *performance* in ns/d (gathered from :meth:`get_status`) and the job's
maximum *walltime* (in hours) from the class unless provided as keywords.

   n = ceil(runtime/(performance*0.99*walltime))

:Keywords:
    *runtime*
        length of the run in ns
    *performance*
        ns/d with the given setup
    *walltime*
        maximum run length of the script (using 99% of it), in h

:Returns: *n* or 1 if the walltime is unlimited
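
A worked version of this estimate in plain Python (the Manager method
implementing it is not named on this page, and the conversion of *walltime*
from hours to days is an assumption needed to keep the units of *performance*
in ns/d consistent):

.. code-block:: python

   import math

   def n_chained_jobs(runtime, performance, walltime):
       """n = ceil(runtime / (performance * 0.99 * walltime)), walltime in hours."""
       if walltime is None or math.isinf(walltime):
           return 1                                 # unlimited walltime: one job suffices
       usable_days = 0.99 * walltime / 24.0         # usable fraction of the walltime, in days
       return int(math.ceil(runtime / (performance * usable_days)))

   # e.g. a 100 ns run at 8 ns/d with a 24 h walltime: ceil(100 / 7.92) = 13 chained jobs
   print(n_chained_jobs(runtime=100.0, performance=8.0, walltime=24.0))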

Wait until the job associated with *dirname* is done.

Super-primitive: uses a simple ``while ... sleep`` loop with a delay of
*seconds* between polls.

:Arguments:
    *dirname*
        look for log files under the remote directory corresponding to *dirname*
    *seconds*
        delay in seconds between polls

Set up a position-restraints run and transfer it to the host.

*kwargs* are passed to :func:`gromacs.setup.MD_restrained`.

Set up a production run and transfer it to the host.
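
Putting the pieces together, a typical remote-run workflow might look like the
sketch below. Only ``qsub``, ``get_status`` and ``cat`` are named explicitly on
this page; ``setup_posres``, ``setup_MD``, ``waitfor``, ``get`` and the
directory names are hypothetical stand-ins for the operations described above:

.. code-block:: python

   # Hedged end-to-end sketch; names other than qsub/get_status/cat are hypothetical.
   m = MyCluster(statedir="ProjectA", prefix="MD")   # constructor arguments as documented above

   m.setup_posres()          # hypothetical: set up restrained run and transfer to host
   m.qsub("MD_POSRES")       # submit it remotely (directory name is a guess)
   m.waitfor("MD_POSRES")    # hypothetical: poll get_status() until the job is done

   m.setup_MD()              # hypothetical: set up the production run and transfer it
   m.qsub("MD")
   m.waitfor("MD")
   m.get("MD", ".")          # hypothetical: ``scp -r`` the results back into the CWD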