Dask indexed gzip
##################
|pypi-version| |travis| |coveralls|
An implementation compatible with the `dask read_text`_ interface
that can chunk a gzipped text file into several partitions,
thanks to an index provided by `indexed_gzip`_.
This is useful when your data resides in a big gzipped file,
yet you want to leverage dask's parallelism capabilities.
Sample session
---------------
.. initialization

::

    >>> import os
    >>> import dask_igzip
    >>> data_path = os.path.join(os.path.dirname(dask_igzip.__file__), "..", "test", "data")
::

    >>> source = os.path.join(data_path, "sample.txt.gz")
    >>> # 3 lines per chunk (obviously this is for demoing)
    >>> bag = dask_igzip.read_text(source, chunk_size=3, encoding="utf-8")
    >>> lines = bag.take(4, npartitions=2)
    >>> print("".join(lines).strip())
    a first sentence
    a second sentence
    a third sentence
    a fourth sentence
    >>> bag.str.upper().str.strip().compute()[8]
    'LINE 9'
Why?
----
Dask `read_text` creates a single partition if you provide it with a gzip file.
This limitation comes from the fact that
there is no way to split a gzip file in a predictable yet coherent way.
This project provides an implementation where the gzip stream is indexed,
then line positions are indexed as well,
so that the text can be read in chunks (thus enabling parallelism).
On first run, the indexes are saved to disk, so that subsequent runs are fast.
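The approach above can be sketched with the standard library alone: build an index of decompressed line-start offsets, group them into fixed-size chunks, and let each "partition" seek to its chunk and read only its slice. This is a minimal illustration of the idea, not dask_igzip's actual implementation; the data and the ``read_chunk`` helper are made up for the example, and note that stdlib ``gzip`` seeks by re-decompressing from the start, which is exactly the cost that indexed_gzip's index avoids.

```python
import gzip
import io

# Toy data standing in for a gzipped text file (assumption: not the real sample.txt.gz).
raw = b"".join(b"line %d\n" % i for i in range(10))
gz = gzip.compress(raw)

# 1. Line index: decompressed byte offset of each line start (plus end of file).
offsets = [0]
for line in raw.splitlines(keepends=True):
    offsets.append(offsets[-1] + len(line))

# 2. Chunk the line index, e.g. 3 lines per chunk (like chunk_size=3).
chunk_size = 3
chunks = [
    (offsets[i], offsets[min(i + chunk_size, len(offsets) - 1)])
    for i in range(0, len(offsets) - 1, chunk_size)
]

# 3. Each chunk is read independently: seek to its start offset, read its slice.
#    With indexed_gzip the seek is cheap; stdlib gzip re-decompresses up to `start`.
def read_chunk(start, end):
    with gzip.open(io.BytesIO(gz), "rb") as f:
        f.seek(start)
        return f.read(end - start).decode("utf-8")

parts = [read_chunk(s, e) for s, e in chunks]
assert "".join(parts) == raw.decode("utf-8")  # chunks reassemble the whole file
```

Because every chunk carries its own (start, end) byte range, the reads are independent of one another, which is what lets dask schedule them in parallel.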
.. _`indexed_gzip`: https://github.com/pauldmccarthy/indexed_gzip
.. _`dask read_text`: https://dask.pydata.org/en/latest/bag-creation.html#db-read-text
.. |pypi-version| image:: https://img.shields.io/pypi/v/dask-igzip.svg
:target: https://pypi-hypernode.com/pypi/dask-igzip
:alt: Latest PyPI version
.. |travis| image:: http://img.shields.io/travis/jurismarches/dask_igzip/master.svg?style=flat
:target: https://travis-ci.org/jurismarches/dask_igzip
.. |coveralls| image:: http://img.shields.io/coveralls/jurismarches/dask_igzip/master.svg?style=flat
:target: https://coveralls.io/r/jurismarches/dask_igzip
Changelog
#########
The format is based on `Keep a Changelog`_
and this project tries to adhere to `Semantic Versioning`_.
.. _`Keep a Changelog`: http://keepachangelog.com/en/1.0.0/
.. _`Semantic Versioning`: http://semver.org/spec/v2.0.0.html
0.2.0 - 2018-06-20
==================
New
---
- `read_text` now accepts a `limit` parameter to cap the total number of lines read
Changed
-------
- incompatible format change for the lines index
0.1.0 - 2018-06-19
==================
New
---
- initial release
- 100% code coverage
Hashes for dask-igzip-0.2.0.linux-x86_64.tar.gz
-----------------------------------------------

=========== ================================================================
Algorithm   Hash digest
=========== ================================================================
SHA256      cd3f3ddc5ed99ce77f6cefdf2b79e91463b2a41434461b24e1855566b15357df
MD5         2128d6bb82eae5d269a5e9233060fed9
BLAKE2b-256 75b156377248714f959b72016566e93128099166231402207e18d71851aa24f1
=========== ================================================================
Hashes for dask_igzip-0.2.0-py3-none-any.whl
--------------------------------------------

=========== ================================================================
Algorithm   Hash digest
=========== ================================================================
SHA256      b313cd8ad0b13062b6b73fd3728e07649708197768c95e8ad7ecbe560a73e57a
MD5         fd9fd28f734f49b3e4ce3832a62008a5
BLAKE2b-256 7923303e351b201218424e1dd016d0a1d11d327d09657993057458ba20814eed
=========== ================================================================