hpc05.connect module

hpc05.connect.connect_ipcluster(n, profile='pbs', hostname='hpc05', username=None, password=None, culler=True, culler_args=None, env_path=None, local=True, timeout=300, folder=None, client_kwargs=None)[source]

Connect to an ipcluster on the cluster headnode.

Parameters
  • n (int) – Number of engines to be started.

  • profile (str, default 'pbs') – Profile name of IPython profile.

  • hostname (str) – Hostname of the machine where the ipcluster runs. If connecting via the headnode, use socket.gethostname() or set local=True.

  • username (str) – Username to log into hostname. If not provided, it is looked up in your ssh config.

  • password (str) – Password for ssh username@hostname.

  • culler (bool) – Controls starting of the culler. Default: True.

  • culler_args (str) – Extra arguments passed to the culler, e.g. '--timeout=200'.

  • env_path (str, default: None) – Path of the Python environment, ‘/path/to/ENV/’ if Python is in /path/to/ENV/bin/python. Examples ‘~/miniconda3/envs/dev/’, ‘miniconda3/envs/dev’, ‘~/miniconda3’. Defaults to the environment that is sourced in bashrc or bash_profile.

  • local (bool, default: True) – Connect to the client locally or over ssh. Set it to False if a connection over ssh is needed.

  • timeout (int) – Maximum time to wait until all the engines are connected.

  • folder (str, optional) – Folder that is added to the path of the engines, e.g. “~/Work/my_current_project”.

  • client_kwargs (dict) – Keyword arguments that are passed to hpc05.Client().

Returns

  • client (ipyparallel.Client object) – An IPyparallel client.

  • dview (ipyparallel.client.view.DirectView object) – Direct view, equivalent to client[:].

  • lview (ipyparallel.client.view.LoadBalancedView) – Load-balanced view, equivalent to client.load_balanced_view().

hpc05.connect.kill_ipcluster(name=None)[source]

Kill your ipcluster and clean up the files.

This should do the same as the following bash function (recommended: add it to your bash_profile / bashrc):

```bash
del() {
    qselect -u $USER | xargs qdel
    rm -f .hpc05.hpc ipengine* ipcontroller* pbs_*
    pkill -f hpc05_culler 2> /dev/null
    pkill -f ipcluster 2> /dev/null
    pkill -f ipengine 2> /dev/null
    pkill -f ipyparallel.controller 2> /dev/null
    pkill -f ipyparallel.engines 2> /dev/null
}
```

hpc05.connect.kill_remote_ipcluster(hostname='hpc05', username=None, password=None, env_path=None)[source]

Kill your remote ipcluster and clean up the files.

This should do the same as the following bash function (recommended: add it to your bash_profile / bashrc):

```bash
del() {
    qselect -u $USER | xargs qdel
    rm -f .hpc05.hpc ipengine* ipcontroller* pbs_*
    pkill -f hpc05_culler 2> /dev/null
    pkill -f ipcluster 2> /dev/null
    pkill -f ipengine 2> /dev/null
    pkill -f ipyparallel.controller 2> /dev/null
    pkill -f ipyparallel.engines 2> /dev/null
}
```

hpc05.connect.start_and_connect(n, profile='pbs', hostname='hpc05', culler=True, culler_args=None, env_path=None, local=True, timeout=300, folder=None, client_kwargs=None, kill_old_ipcluster=True)[source]

Start an ipcluster locally and connect to it.

Parameters
  • n (int) – Number of engines to be started.

  • profile (str, default 'pbs') – Profile name of IPython profile.

  • hostname (str) – Hostname of the machine where the ipcluster runs. If connecting via the headnode, use socket.gethostname() or set local=True.

  • culler (bool) – Controls starting of the culler. Default: True.

  • culler_args (str) – Extra arguments passed to the culler, e.g. '--timeout=200'.

  • env_path (str, default: None) – Path of the Python environment, ‘/path/to/ENV/’ if Python is in /path/to/ENV/bin/python. Examples ‘~/miniconda3/envs/dev/’, ‘miniconda3/envs/dev’, ‘~/miniconda3’. Defaults to the environment that is sourced in bashrc or bash_profile.

  • local (bool, default: True) – Connect to the client locally or over ssh. Set it to False if a connection over ssh is needed.

  • timeout (int) – Maximum time to wait until all the engines are connected.

  • folder (str, optional) – Folder that is added to the path of the engines, e.g. “~/Work/my_current_project”.

  • client_kwargs (dict) – Keyword arguments that are passed to hpc05.Client().

  • kill_old_ipcluster (bool) – If True, cleans up any old ipcluster instances and kills your jobs in qstat or squeue.

Returns

  • client (ipyparallel.Client object) – An IPyparallel client.

  • dview (ipyparallel.client.view.DirectView object) – Direct view, equivalent to client[:].

  • lview (ipyparallel.client.view.LoadBalancedView) – Load-balanced view, equivalent to client.load_balanced_view().

hpc05.connect.start_ipcluster(n, profile, env_path=None, timeout=300)[source]

Start an ipcluster locally.

Parameters
  • n (int) – Number of engines to be started.

  • profile (str, default 'pbs') – Profile name of IPython profile.

  • env_path (str, default=None) – Path of the Python environment, ‘/path/to/ENV/’ if Python is in /path/to/ENV/bin/python. Examples ‘~/miniconda3/envs/dev/’, ‘miniconda3/envs/dev’, ‘~/miniconda3’. Defaults to the environment that is sourced in bashrc or bash_profile.

  • timeout (int) – Time limit after which the connection attempt is cancelled.

Return type

None

hpc05.connect.start_remote_and_connect(n, profile='pbs', hostname='hpc05', username=None, password=None, culler=True, culler_args=None, env_path=None, timeout=300, folder=None, client_kwargs=None, kill_old_ipcluster=True)[source]

Start a remote ipcluster on hostname and connect to it.

Parameters
  • n (int) – Number of engines to be started.

  • profile (str, default 'pbs') – Profile name of IPython profile.

  • hostname (str) – Hostname of the machine where the ipcluster runs. If connecting via the headnode, use socket.gethostname() or set local=True.

  • username (str) – Username to log into hostname. If not provided, it is looked up in your ssh config.

  • password (str) – Password for ssh username@hostname.

  • culler (bool) – Controls starting of the culler. Default: True.

  • culler_args (str) – Extra arguments passed to the culler, e.g. '--timeout=200'.

  • env_path (str, default: None) – Path of the Python environment, ‘/path/to/ENV/’ if Python is in /path/to/ENV/bin/python. Examples ‘~/miniconda3/envs/dev/’, ‘miniconda3/envs/dev’, ‘~/miniconda3’. Defaults to the environment that is sourced in bashrc or bash_profile.

  • timeout (int) – Maximum time to wait until all the engines are connected.

  • folder (str, optional) – Folder that is added to the path of the engines, e.g. “~/Work/my_current_project”.

  • client_kwargs (dict) – Keyword arguments that are passed to hpc05.Client().

  • kill_old_ipcluster (bool) – If True, cleans up any old ipcluster instances and kills your jobs in qstat or squeue.

Returns

  • client (ipyparallel.Client object) – An IPyparallel client.

  • dview (ipyparallel.client.view.DirectView object) – Direct view, equivalent to client[:].

  • lview (ipyparallel.client.view.LoadBalancedView) – Load-balanced view, equivalent to client.load_balanced_view().

hpc05.connect.start_remote_ipcluster(n, profile='pbs', hostname='hpc05', username=None, password=None, env_path=None, timeout=300)[source]

Start an ipcluster over ssh on hostname and wait until it has successfully started.

Parameters
  • n (int) – Number of engines to be started.

  • profile (str, default 'pbs') – Profile name of IPython profile.

  • hostname (str) – Hostname of machine where the ipcluster runs.

  • username (str) – Username to log into hostname. If not provided, it is looked up in your ssh config.

  • password (str) – Password for ssh username@hostname.

  • env_path (str, default=None) – Path of the Python environment, ‘/path/to/ENV/’ if Python is in /path/to/ENV/bin/python. Examples ‘~/miniconda3/envs/dev/’, ‘miniconda3/envs/dev’, ‘~/miniconda3’. Defaults to the environment that is sourced in bashrc or bash_profile.

  • timeout (int) – Maximum time to wait until all the engines are connected.

Return type

None

hpc05.connect.wait_for_succesful_start(log_file, timeout=300)[source]
hpc05.connect.watch_file(fname)[source]
hpc05.connect.watch_stdout(stdout)[source]