
JupyterHub SLURM Spawner with specific spawn page

Project description

jupyterhub_moss: JupyterHub MOdular Slurm Spawner

jupyterhub_moss is a Python package that provides:

  • a JupyterHub Slurm Spawner, based on batchspawner, configured by describing the Slurm partitions available to users;
  • an associated spawn page, generated from that partition description and organized in a Simple and an Advanced tab.

Install

pip install jupyterhub_moss

Usage

Partition settings

To use jupyterhub_moss, you first need a working JupyterHub instance. jupyterhub_moss then needs to be imported in your JupyterHub configuration file (usually named jupyterhub_config.py):

import batchspawner
import jupyterhub_moss

c = get_config()

# ...your config 

# Init JupyterHub configuration to use this spawner
jupyterhub_moss.set_config(c)

Once jupyterhub_moss is set up, you can define the partitions available on Slurm by setting c.MOSlurmSpawner.partitions in the same file:

# ...

# Partition descriptions
c.MOSlurmSpawner.partitions = {
    "partition_1": {  # Partition name     # (See description of fields below for more info)
        "architecture": "x86_86",          # Nodes architecture
        "description": "Partition 1",      # Displayed description
        "gpu": None,                       # --gres= template to use for requesting GPUs
        "max_ngpus": 0,                    # Maximum number of GPUs per node
        "max_nprocs": 28,                  # Maximum number of CPUs per node
        "max_runtime": 12*3600,            # Maximum time limit in seconds (Must be at least 1hour)
        "simple": True,                    # True to show in Simple tab
        "venv": "/jupyter_env_path/bin/",  # Path to Python environment bin/ used to start jupyter on the Slurm nodes 
    },
    "partition_2": {
        "architecture": "ppc64le",
        "description": "Partition 2",
        "gpu": "gpu:V100-SXM2-32GB:{}",
        "max_ngpus": 2,
        "max_nprocs": 128,
        "max_runtime": 1*3600,
        "simple": True,
        "venv": "/path/to/jupyter/env/for/partition_2/bin/",
    },
    "partition_3": {
        "architecture": "x86_86",
        "description": "Partition 3",
        "gpu": None,
        "max_ngpus": 0,
        "max_nprocs": 28,
        "max_runtime": 12*3600,
        "simple": False,
        "venv": "/path/to/jupyter/env/for/partition_3/bin/",
    },
}

Field descriptions

  • architecture: The architecture of the partition. This is only cosmetic and will be used to generate subtitles in the spawn page.
  • description: The description of the partition. This is only cosmetic and will be used to generate subtitles in the spawn page.
  • gpu: A template string used to request GPU resources through --gres. The template must therefore include a {} placeholder that will be replaced by the number of requested GPUs and follow the format expected by --gres (see the short example after this list). If no GPU is available for this partition, set it to None.
  • max_ngpus: The maximum number of GPUs that can be requested for this partition. The spawn page uses this to set appropriate bounds for the user inputs. If no GPU is available for this partition, set it to 0.
  • max_nprocs: The maximum number of processors that can be requested for this partition. The spawn page uses this to set appropriate bounds for the user inputs.
  • max_runtime: The maximum job runtime for this partition, in seconds. It must be at least 1 hour, as the Simple tab only displays buttons for runtimes of 1 hour or more.
  • simple: Whether the partition is available in the Simple tab. The generated spawn page is organized in two tabs: a Simple tab with minimal settings, which is enough for most users, and an Advanced tab where almost all Slurm job settings can be set. A partition can be hidden from the Simple tab by setting simple to False.
  • venv: Path to the Python environment bin/ used to start jupyter on the Slurm nodes. jupyterhub_moss expects that a virtual environment is used to start jupyter. The path of this venv is set per partition, so it can differ from one partition to another. If there is only one venv, simply set the same path for all partitions.
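For illustration, here is how such a gpu template would be expanded into a --gres option for partition_2 (a minimal sketch of the expected formatting, not the spawner's internal code):

# Sketch only: expanding the gpu template of partition_2 into a --gres value.
gpu_template = "gpu:V100-SXM2-32GB:{}"
ngpus = 2                             # number of GPUs requested by the user
gres = gpu_template.format(ngpus)     # "gpu:V100-SXM2-32GB:2"
print(f"--gres={gres}")               # --gres=gpu:V100-SXM2-32GB:2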

Spawn page

The spawn page (available at /hub/spawn) will be generated according to the partition settings. For example, this is the spawn page generated for the partition settings above:

This spawn page is organized in two tabs: a Simple and an Advanced tab. On the Simple tab, the user can choose between the partitions set with simple: True (partition_1 and partition_2 in this case), take the minimum, half or the maximum number of cores, and choose the job duration. The available resources are checked using sinfo and displayed in the table below. Clicking the Start button requests the job.
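For reference, the kind of sinfo query used to fill this table can be illustrated as follows (an approximation; the exact command and parsing done by jupyterhub_moss may differ):

import subprocess

# Approximate illustration: ask sinfo for CPU usage (allocated/idle/other/total)
# and generic resources (GPUs) of a given partition, without a header line.
result = subprocess.run(
    ["sinfo", "-h", "-p", "partition_1", "-o", "%C %G"],
    capture_output=True, text=True, check=True,
)
print(result.stdout.strip())  # e.g. "12/16/0/28 (null)"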

The spawn page adapts to the chosen partition. This is the page when selecting partition_2:

As the maximum number of cores is different, the CPUs row changes accordingly. Also, since gpu is set for partition_2, a new button row appears to enable GPU requests.

The Advanced tab allows finer control on the requested resources.

The user can select any partition (partition_3 is added in this case) and the table of available resources reflects this. The user can also choose the number of CPUs (up to max_nprocs), the number of GPUs (up to max_ngpus) and has finer control over the job duration (up to max_runtime).

Spawn through URL

It is also possible to pass the spawning options as query arguments to the spawn URL: https://<server:port>/hub/spawn. For example, https://<server:port>/hub/spawn?partition=partition_1&nprocs=4 will directly spawn a Jupyter server on partition_1 with 4 cores allocated.

The following query argument is required:

  • partition: The name of the SLURM partition to use.

The following optional query arguments are available:

  • exclusive: Set to true for exclusive node usage (--exclusive)
  • jupyterlab: Set to true to start with JupyterLab
  • ngpus: Number of GPUs, requested through --gres using the partition's gpu template
  • nnodes: Number of nodes (--nodes)
  • nprocs: Number of CPUs per task (--cpus-per-task)
  • ntasks: Number of tasks per node (--ntasks-per-node)
  • options: Extra SLURM options
  • output: Set to true to save logs to slurm-*.out files.
  • reservation: SLURM reservation name (--reservation)
  • runtime: Job duration as hh:mm:ss (--time)
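
As an aside, such URLs can be built programmatically; here is a minimal sketch using only the Python standard library (the server address is a placeholder):

from urllib.parse import urlencode

# Minimal sketch: assemble a spawn URL with pre-selected options.
base = "https://<server:port>/hub/spawn"   # placeholder server address
params = {
    "partition": "partition_2",
    "nprocs": 8,
    "ngpus": 1,
    "runtime": "02:00:00",
    "jupyterlab": "true",
}
print(f"{base}?{urlencode(params)}")
# https://<server:port>/hub/spawn?partition=partition_2&nprocs=8&ngpus=1&runtime=02%3A00%3A00&jupyterlab=true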

Development

See CONTRIBUTING.md.

Credits:

We would like to acknowledge the following resources that served as a basis for this project and thank their authors:

