Dask + Delta Table

Project description

Dask-DeltaTable

Reading and writing Delta Lake tables with the Dask engine.

Installation

dask-deltatable is available on PyPI:

pip install dask-deltatable

And from conda-forge:

conda install -c conda-forge dask-deltatable

Features:

  1. Read the parquet files from Delta Lake and parallelize with Dask
  2. Write Dask dataframes to Delta Lake (limited support)
  3. Supports multiple filesystems (s3, azurefs, gcsfs)
  4. Subset of Delta Lake features:
    • Time Travel
    • Schema evolution
    • Parquet filters (see the sketch after this list)
      • row filter
      • partition filter
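
For example, a row or partition filter might be pushed down at read time. The sketch below is not confirmed API for this release: the filter keyword follows the PyArrow-style [(column, op, value)] form used by earlier releases, and the year column is hypothetical. Filtering the returned Dask DataFrame always works regardless.

import dask_deltatable as ddt

# Hypothetical: ask the reader to apply a partition/row filter while scanning
# the Parquet files. The `filter` keyword and the `year` column are assumptions.
df = ddt.read_deltalake("delta_path", filter=[("year", "==", 2021)])

# Filtering the resulting Dask DataFrame works in any case.
df = df[df["year"] == 2021]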

Not supported

  1. Writing to Delta Lake is still in development, so only limited support is available.
  2. The optimize API, which runs a bin-packing operation on a Delta table.

Reading from Delta Lake

import dask_deltatable as ddt

# read delta table
df = ddt.read_deltalake("delta_path")

# with specific version
df = ddt.read_deltalake("delta_path", version=3)

# with specific datetime
df = ddt.read_deltalake("delta_path", datetime="2018-12-19T16:39:57-08:00")

df is a Dask DataFrame that you can work with in the same way you normally would. See the Dask DataFrame documentation for available operations.
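
For instance, in a minimal sketch (the value and category column names are hypothetical, used only for illustration):

import dask_deltatable as ddt

df = ddt.read_deltalake("delta_path")

# Any standard Dask DataFrame operation applies; `value` and `category`
# are hypothetical column names.
result = df[df["value"] > 0].groupby("category")["value"].mean().compute()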

Accessing remote file systems

To read from S3, Azure, GCS, and other remote filesystems, ensure that the credentials are properly configured, either in environment variables or in config files. For AWS you may need ~/.aws/credentials; for gcsfs, GOOGLE_APPLICATION_CREDENTIALS. Refer to your cloud provider's documentation to configure these.

ddt.read_deltalake("s3://bucket_name/delta_path", version=3)

Accessing AWS Glue catalog

dask-deltatable can connect to the AWS Glue catalog to read a Delta table. The method looks for the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables and, if those are not available, falls back to ~/.aws/credentials.

Example:

ddt.read_deltalake(catalog="glue", database_name="science", table_name="physics")

Writing to Delta Lake

To write a Dask DataFrame to Delta Lake, use the to_deltalake method.

import dask.dataframe as dd
import dask_deltatable as ddt

df = dd.read_csv("s3://bucket_name/data.csv")
# do some processing on the dataframe...
ddt.to_deltalake("s3://bucket_name/delta_path", df)

Writing to Delta Lake is still in development, so be aware that some features may not work.
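
For example, a partitioned append might look like the following sketch. The partition_by and mode parameters and the date column are assumptions carried over from the underlying deltalake writer, not guarantees of this release.

import dask.dataframe as dd
import dask_deltatable as ddt

df = dd.read_csv("s3://bucket_name/data.csv")

# Hypothetical: partition the table by a column and append to an existing
# table. `partition_by`, `mode`, and the `date` column are assumptions.
ddt.to_deltalake("s3://bucket_name/delta_path", df, partition_by=["date"], mode="append")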

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

dask_deltatable-0.3.3.tar.gz (22.8 kB)

Uploaded Source

Built Distribution

dask_deltatable-0.3.3-py3-none-any.whl (18.3 kB)

Uploaded Python 3

File details

Details for the file dask_deltatable-0.3.3.tar.gz.

File metadata

  • Download URL: dask_deltatable-0.3.3.tar.gz
  • Upload date:
  • Size: 22.8 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.9.20

File hashes

Hashes for dask_deltatable-0.3.3.tar.gz

  • SHA256: a3c3d931d8f1b66d916af083014900c19d56be20e0c3f248669c27c0b1b9ab56
  • MD5: d9d7b64d6ef26193ecea360515d79591
  • BLAKE2b-256: 80d3bcf3de1b7a1856e187a0acf570b2b71ae0233ad3d75580d9dc8c76dc4e70

File details

Details for the file dask_deltatable-0.3.3-py3-none-any.whl.

File metadata

File hashes

Hashes for dask_deltatable-0.3.3-py3-none-any.whl

  • SHA256: 7582e33838c61307f6b8cba1eb74c1b340a786caa388c9457e7495c1ffe0bdfd
  • MD5: 3589b4b3e78f95c1796c59c9ff7d393f
  • BLAKE2b-256: aba81d85c6dbfcdfecdd6308b9ac22e779b5ef7c2a2f44ca101e45022ad06cfa
