# Apache Mesos backend for Dask scheduling library
[![Build Status](http://52.0.47.203:8000/api/badges/lensacom/dask.mesos/status.svg)](http://52.0.47.203:8000/lensacom/dask.mesos)
[![Join the chat at https://gitter.im/lensacom/dask.mesos](https://badges.gitter.im/lensacom/dask.mesos.svg)](https://gitter.im/lensacom/dask.mesos?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
[Apache Mesos](http://mesos.apache.org/) backend for the [Dask](https://github.com/dask/dask) scheduling library.
Run distributed Python dask workflows on your Mesos cluster.
## Notable Features
- Run tasks distributed across the cluster, in Docker containers
- Specify resource requirements per task
- Bin packing for optimized resource utilization
## Installation
**Prerequisites:** [satyr](https://github.com/lensacom/satyr), [dask](https://github.com/dask/dask.git), [toolz](https://pypi-hypernode.com/pypi/toolz). All of them are installed with the following command:

`pip install dask.mesos`, or use the [lensa/dask.mesos](https://hub.docker.com/r/lensa/dask.mesos/) Docker image.
Configuration (see the sketch after this list):
- `MESOS_MASTER=zk://127.0.0.1:2181/mesos`
- `ZOOKEEPER_HOST=127.0.0.1:2181`
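These are most easily exported in the shell before launching your workflow. Assuming they are plain environment variables read from the process environment (an assumption based on their form, not stated explicitly above), they can also be set from Python before the executor is created:

```python
import os

# Hedged sketch: assuming dask.mesos/satyr read MESOS_MASTER and
# ZOOKEEPER_HOST from the process environment. The addresses below are
# placeholders for your own cluster.
os.environ['MESOS_MASTER'] = 'zk://127.0.0.1:2181/mesos'
os.environ['ZOOKEEPER_HOST'] = '127.0.0.1:2181'
```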
## Example
```python
from __future__ import absolute_import, division, print_function

from dask_mesos import mesos, MesosExecutor


@mesos(cpus=0.1, mem=64)
def add(x, y):
    """Run add on mesos with specified resources"""
    return x + y


@mesos(cpus=0.3, mem=128, image='lensa/dask.mesos')
def mul(x, y):
    """Run mul on mesos in specified docker image"""
    return x * y


with MesosExecutor(name='dask') as executor:
    # This context handles the Mesos scheduler's lifecycle.
    a, b = 23, 89
    alot = add(a, b)
    gigalot = mul(alot, add(10, 2))
    gigalot.compute(get=executor.get)  # (a + b) * (10 + 2)
    executor.compute([alot, gigalot])  # list of futures
```
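Since `executor.get` is passed to `compute(get=...)` above, it follows dask's standard scheduler interface, so ordinary dask collections can presumably be driven by it as well. The sketch below is an illustration under that assumption; `square` is a placeholder function, and undecorated functions are assumed to fall back to satyr's default resource values.

```python
import dask.bag as db

from dask_mesos import MesosExecutor


def square(x):
    # Undecorated function: assumed to run with satyr's default
    # cpu/mem settings (see the configuration notes below).
    return x ** 2


with MesosExecutor(name='dask') as executor:
    bag = db.from_sequence(range(100), npartitions=4)
    total = bag.map(square).sum()
    print(total.compute(get=executor.get))  # sum of squares, scheduled on Mesos
```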
## Configuring dask.mesos Tasks
You can configure your Mesos tasks in the decorator; currently the following options are available:
* **cpus**: The number of CPUs to allocate for the task (fractional values are allowed).
* **mem**: Memory in MB to reserve for the task.
* **disk**: The amount of disk space to reserve for the task.
* **image**: A Docker image name. If not set, the Mesos containerizer is used.

Both `mem` and `cpus` default to low values set in _satyr_, so you're encouraged to override them here. More options, such as constraints and other resources, are on the way.
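Putting the options together, a decorated task might look like the sketch below (the resource values and the function itself are illustrative placeholders, not part of the library):

```python
from dask_mesos import mesos


# Hedged sketch combining the options listed above; values are placeholders.
@mesos(cpus=0.5, mem=256, disk=512, image='lensa/dask.mesos')
def process(x):
    """Runs inside the lensa/dask.mesos Docker image with 0.5 CPU,
    256 MB of memory and 512 units of disk (MB assumed) reserved."""
    return x * 2
```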