A robust implementation of concurrent.futures.ProcessPoolExecutor
# Reusable Process Pool Executor [![Build Status](https://travis-ci.org/tomMoral/loky.svg?branch=master)](https://travis-ci.org/tomMoral/loky) [![Build status](https://ci.appveyor.com/api/projects/status/oifqilb5sb0p7fdp/branch/master?svg=true)](https://ci.appveyor.com/project/tomMoral/loky/branch/master) [![codecov](https://codecov.io/gh/tomMoral/loky/branch/master/graph/badge.svg)](https://codecov.io/gh/tomMoral/loky)
### Goal
The aim of this project is to provide a robust, cross-platform and
cross-version implementation of the `ProcessPoolExecutor` class of
`concurrent.futures`. It notably features:
* __Deadlock free implementation__: one of the major concerns with the
standard `multiprocessing` and `concurrent.futures` libraries is their
limited ability to handle crashes of worker processes in the
`Pool/Executor`. This library intends to fix those possible deadlocks and
send back meaningful errors.
* __Consistent spawn behavior__: All processes are started using
fork/exec on POSIX systems. This ensures safer interactions with
third party libraries.
* __Reusable executor__: a strategy to avoid respawning a complete
executor for every call. A singleton executor instance can be reused (and
dynamically resized if necessary) across consecutive calls to limit
spawning and shutdown overhead. The worker processes can be shut down
automatically after a configurable idling timeout to free system
resources.
* __Transparent cloudpickle integration__: to call interactively
defined functions and lambda expressions in parallel. It is also
possible to register a custom pickler implementation to handle
inter-process communications.
* __No need for ``if __name__ == "__main__":`` in scripts__: thanks
to the use of ``cloudpickle`` to call functions defined in the
``__main__`` module, it is not required to protect the code calling
parallel functions under Windows.
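
The cloudpickle point above can be illustrated with the standard library alone: plain `pickle`, which the stock `ProcessPoolExecutor` relies on, serializes functions by reference and therefore cannot handle a lambda, which is why interactively defined functions normally cannot be dispatched to worker processes. A minimal sketch:

```python
import pickle

# Plain pickle stores functions by module + qualified name, so an
# anonymous lambda has no importable reference and cannot be pickled.
try:
    pickle.dumps(lambda x: x + 1)
    picklable = True
except (pickle.PicklingError, AttributeError, TypeError):
    picklable = False

print("lambda picklable with plain pickle:", picklable)
```

Replacing the serializer with `cloudpickle`, which serializes the function body itself, is what lets loky ship such callables to its workers.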
### Usage
```python
import os
from time import sleep
from loky import get_reusable_executor
def say_hello(k):
    pid = os.getpid()
    print("Hello from {} with arg {}".format(pid, k))
    sleep(.01)
    return pid

# Create an executor with 4 worker processes that will
# automatically shut down after idling for 2s
executor = get_reusable_executor(max_workers=4, timeout=2)
res = executor.submit(say_hello, 1)
print("Got results:", res.result())
results = executor.map(say_hello, range(50))
n_workers = len(set(results))
print("Number of used processes:", n_workers)
assert n_workers == 4
```
### Acknowledgement
This work is supported by the Center for Data Science, funded by the IDEX
Paris-Saclay, ANR-11-IDEX-0003-02