Deploy Dask on job queuing systems like PBS or SLURM
dask-jobqueue deploys Dask on batch-style job schedulers such as PBS and SLURM.
Example
from dask_jobqueue import PBSCluster
cluster = PBSCluster(processes=6, threads=4, memory="16GB")  # 6 processes x 4 threads per job
cluster.start_workers(10)  # submit 10 worker jobs to the queue

from dask.distributed import Client
client = Client(cluster)  # connect a client to the cluster's scheduler
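Under the hood, each call to start_workers submits a batch job to the scheduler that launches dask-worker processes, which then connect back to the Dask scheduler. The sketch below shows the general shape of such a PBS job script; the directive values and the SCHEDULER_ADDRESS placeholder are illustrative assumptions, not the exact script dask-jobqueue generates.

```
#!/usr/bin/env bash
#PBS -N dask-worker
#PBS -l select=1:ncpus=24:mem=16GB   # resource request; values here are illustrative
#PBS -l walltime=01:00:00

# Launch worker processes that connect back to the Dask scheduler.
# SCHEDULER_ADDRESS is a placeholder for the scheduler's actual address.
dask-worker $SCHEDULER_ADDRESS --nprocs 6 --nthreads 4 --memory-limit 16GB
```

Matching the --nprocs and --nthreads values to the resources requested from the scheduler keeps each job fully utilized without oversubscribing its node.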
Adaptivity
dask-jobqueue can also adapt the cluster size dynamically based on current load, scaling up when computations demand it and scaling back down to free resources when the cluster is idle.
cluster.adapt()
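The same adaptive mechanism is available on any Dask cluster object, so it can be tried without a batch scheduler. The sketch below uses LocalCluster as a stand-in for PBSCluster purely so the example runs anywhere; the minimum/maximum bounds are options of Dask's adaptive scaling and may vary by version.

```python
# LocalCluster stands in for PBSCluster here so the example runs anywhere.
from dask.distributed import Client, LocalCluster

cluster = LocalCluster(n_workers=0, processes=False)  # start with zero workers
cluster.adapt(minimum=0, maximum=2)  # scale between 0 and 2 workers with load

client = Client(cluster)
total = client.submit(sum, range(10)).result()  # submitting work triggers a scale-up
print(total)

client.close()
cluster.close()
```

With bounds set this way, an idle cluster costs nothing, and pending work causes workers to be requested automatically up to the maximum.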
History
This package came out of the Pangeo collaboration and was copy-pasted from a live repository at this commit. Unfortunately, development history was not preserved.
Original developers include the following: