Dask Cluster objects in Saturn Cloud
dask-saturn
Python library for interacting with Dask clusters in Saturn Cloud.
Dask-Saturn mimics the API of Dask-Kubernetes, but allows the user to interact with clusters created within Saturn Cloud.
Start cluster
From within a Jupyter notebook, you can start a cluster:
from dask_saturn import SaturnCluster
cluster = SaturnCluster()
cluster
By default this will start a dask cluster with the same settings that you have already set in the Saturn UI or in a prior notebook.
To start the cluster with a certain number of workers, use the n_workers option. Similarly, you can set the scheduler_size, worker_size, and worker_is_spot options.
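For example, a minimal sketch of passing these settings when creating the cluster (the worker count and instance sizes below are illustrative assumptions; use sizes available in your Saturn Cloud account):
from dask_saturn import SaturnCluster

cluster = SaturnCluster(
    n_workers=3,              # hypothetical worker count
    scheduler_size="medium",  # hypothetical scheduler instance size
    worker_size="xlarge",     # hypothetical worker instance size
    worker_is_spot=False,     # use on-demand rather than spot instances
)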
Note: If the cluster is already running then you can't change the settings. Attempting to do so will raise a warning.
Use the autoclose option to set up a cluster that is tied to the client kernel. This functions like a regular dask LocalCluster: when your Jupyter kernel dies or is restarted, the dask cluster will close.
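For example, a minimal sketch, assuming autoclose is passed as a keyword argument when creating the cluster:
from dask_saturn import SaturnCluster

# The dask cluster will close when this Jupyter kernel dies or is restarted
cluster = SaturnCluster(autoclose=True)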
Adjust number of workers
Once you have a cluster, you can interact with it via the Jupyter widget or by using the scale and adapt methods.
For example, to manually scale up to 20 workers:
cluster.scale(20)
To create an adaptive cluster that controls its own scaling:
cluster.adapt(minimum=1, maximum=20)
Interact with client
To submit tasks to the cluster, you sometimes need access to the Client object. Instantiate this with the cluster as the only argument:
from distributed import Client
client = Client(cluster)
client
Close cluster
To terminate all resources associated with a cluster, use the close method:
cluster.close()
Change settings
To update the settings (such as n_workers, worker_size, worker_is_spot, nthreads) on an existing cluster, use the reset method:
cluster.reset(n_workers=3)
You can also call this without instantiating the cluster first:
cluster = SaturnCluster.reset(n_workers=3)
Sync files to workers
When working with distributed dask clusters, the workers don't have access to the same file system as your client, so you will see files on your Jupyter server that aren't available on the workers. To move files to the workers, use the RegisterFiles plugin and call sync_files on any path that you want to update on the workers.
For instance, if you have a file structure like:
/home/jovyan/project/
|---- utils/
| |---- __init__.py
| |---- hello.py
|
|---- Untitled.ipynb
where hello.py contains:
# utils/hello.py
def greet():
    return "Hello"
If the code in hello.py changes or you add new files to utils, you'll want to push those changes to the workers. After setting up the SaturnCluster and the Client, register the RegisterFiles plugin with the workers. Then, every time you make changes to the files in utils, run sync_files. The worker plugin makes sure that any new worker that comes up will have any files that you have synced.
from dask_saturn import RegisterFiles, sync_files
client.register_worker_plugin(RegisterFiles())
sync_files(client, "utils")
# If a python script has changed, restart the workers so they will see the changes
client.restart()
# import the function and tell the workers to run it
from utils.hello import greet
client.run(greet)
TIP: You can always check the state of the filesystem on your workers by running:
import os
client.run(os.listdir)
Development
Create/update a dask-saturn conda environment:
make conda-update
Set environment variables to run dask-saturn with a local atlas server:
export BASE_URL=http://dev.localtest.me:8888/
export SATURN_TOKEN=<JUPYTER_SERVER_SATURN_TOKEN>