
SQL query layer for Dask

SQL + Python

dask-sql is a distributed SQL query engine in Python. It allows you to query and transform your data using a mixture of common SQL operations and Python code, and to scale the computation up easily when you need it.

  • Combine the power of Python and SQL: load your data with Python, transform it with SQL, enhance it with Python, and query it with SQL - or the other way round. With dask-sql you can mix the well-known Python dataframe API of pandas and Dask with common SQL operations, to process your data in exactly the way that is easiest for you.
  • Infinite Scaling: using the power of the great Dask ecosystem, your computations can scale as you need - from your laptop to your super cluster - without changing a single line of SQL code. From k8s to cloud deployments, from batch systems to YARN - if Dask supports it, so will dask-sql.
  • Your data - your queries: use Python user-defined functions (UDFs) in SQL without any performance penalty, and extend your SQL queries with the vast number of Python libraries, e.g. for machine learning, complicated input formats, or complex statistics (see the sketch after this list).
  • Easy to install and maintain: dask-sql is just a pip/conda install away (or a docker run if you prefer).
  • Use SQL from wherever you like: dask-sql integrates with your Jupyter notebook, your normal Python module, or can be used as a standalone SQL server from any BI tool. It even integrates natively with Apache Hue.
  • GPU Support: dask-sql supports running SQL queries on CUDA-enabled GPUs by utilizing RAPIDS libraries like cuDF, enabling accelerated compute for SQL.
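
For instance, mixing a Python UDF into SQL can look roughly like the following. This is a minimal sketch: the table, column, and CSV file are hypothetical, and the register_function call follows the API described in the dask-sql documentation.

import numpy as np
import dask.dataframe as dd
from dask_sql import Context

c = Context()
c.create_table("my_data", dd.read_csv("my_data.csv"))  # hypothetical input file

# A plain Python function that we register as a scalar SQL function.
# The parameter list and return type tell dask-sql how to map SQL types.
def my_exp(x):
    return np.exp(x)

c.register_function(my_exp, "my_exp", [("x", np.float64)], np.float64)

# The UDF can now be called like any built-in SQL function
result = c.sql("SELECT my_exp(x) FROM my_data")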

Read more in the documentation.

Example

For this example, we load some data from disk and query it with a SQL command from our Python code. Any pandas or Dask dataframe can be used as input, and dask-sql understands a wide range of formats (CSV, Parquet, JSON, ...) and locations (S3, HDFS, GCS, ...).

import dask.dataframe as dd
from dask_sql import Context

# Create a context to hold the registered tables
c = Context()

# Load the data and register it in the context
# This gives the table a name that we can use in queries
df = dd.read_csv("...")
c.create_table("my_data", df)

# Now execute a SQL query. By default the result is a lazy Dask dataframe;
# return_futures=False computes it and returns the data itself.
result = c.sql("""
    SELECT
        my_data.name,
        SUM(my_data.x)
    FROM
        my_data
    GROUP BY
        my_data.name
""", return_futures=False)

# Show the result
print(result)
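
Because the computed result is an ordinary dataframe, you can hand it straight back to the context and keep querying it with SQL. A minimal continuation of the example above; the table name totals is made up:

# Register the aggregated result as a new table and query it again
c.create_table("totals", result)
print(c.sql("SELECT * FROM totals LIMIT 5", return_futures=False))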

Quickstart

Have a look into the documentation or start the example notebook on binder.

dask-sql is still under development and does not yet understand all SQL commands, but it covers a large fraction of them. We are actively looking for feedback, improvements, and contributors!

Installation

dask-sql can be installed via conda (preferred) or pip, or set up in a development environment.

With conda

Create a new conda environment or use an existing one:

conda create -n dask-sql
conda activate dask-sql

Install the package from the conda-forge channel:

conda install dask-sql -c conda-forge

With pip

You can install the package with

pip install dask-sql

For development

If you want the newest (unreleased) dask-sql version, or if you plan to do development on dask-sql, you can also install the package from source.

git clone https://github.com/dask-contrib/dask-sql.git

Create a new conda environment and install the development dependencies:

conda env create -f continuous_integration/environment-3.9-dev.yaml

It is not recommended to use pip instead of conda for the environment setup.

After that, you can install the package in development mode

pip install -e ".[dev]"

The Rust DataFusion bindings are built as part of the pip install. Note that if changes are made to the Rust source in src/, another build must be run to recompile the bindings.

This repository uses pre-commit hooks. To install them, call:

pre-commit install

Testing

You can run the tests (after installation) with

pytest tests

GPU-specific tests require additional dependencies specified in continuous_integration/gpuci/environment.yaml. These can be added to the development environment by running

conda env update -n dask-sql -f continuous_integration/gpuci/environment.yaml

GPU-specific tests can then be run with

pytest tests -m gpu --rungpu

SQL Server

dask-sql comes with a small test implementation of a SQL server. Instead of rebuilding a full ODBC driver, we reuse the Presto wire protocol. So far this is only the start of the development and is missing important concepts such as authentication.

You can test the Presto server by running (after installation)

dask-sql-server

or by using the published docker image

docker run --rm -it -p 8080:8080 nbraun/dask-sql

in one terminal. This will spin up a server on port 8080 (by default) that looks like a normal Presto database to any Presto client.

You can test this, for example, with the default Presto client:

presto --server localhost:8080

Now you can fire off simple SQL queries (as no data is loaded by default):

=> SELECT 1 + 1;
 EXPR$0
--------
    2
(1 row)
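
You can also connect from Python code, for example with the third-party presto-python-client package. This is a hedged sketch: the package is a generic Presto client, not part of dask-sql, and the user, catalog, and schema values below are placeholders:

import prestodb

# Connect to the dask-sql server, which speaks the Presto wire protocol
conn = prestodb.dbapi.connect(
    host="localhost",
    port=8080,
    user="anyone",      # dask-sql does not check credentials (no authentication yet)
    catalog="catalog",  # placeholder value
    schema="schema",    # placeholder value
)

cur = conn.cursor()
cur.execute("SELECT 1 + 1")
print(cur.fetchall())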

You can find more information in the documentation.

CLI

You can also run the CLI dask-sql for testing out SQL commands quickly:

dask-sql --load-test-data --startup

(dask-sql) > SELECT * FROM timeseries LIMIT 10;

How does it work?

At the core, dask-sql does two things:

  • translate the SQL query into a relational algebra using DataFusion, represented as a logical query plan - similar to many other SQL engines (Hive, Flink, ...)
  • convert this description of the query into Dask API calls (and execute them), returning a Dask dataframe.

For the first step, Arrow DataFusion needs to know about the columns and types of the Dask dataframes; therefore, some Rust code that stores this information for Dask dataframes is defined in dask_planner. After the translation to a relational algebra is done (using DaskSQLContext.logical_relational_algebra), the Python methods defined in dask_sql.physical turn it into a physical Dask execution plan by converting each piece of the relational algebra one by one.
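
To inspect the first step in isolation, you can ask a Context for the relational algebra it derives for a query, before any Dask computation happens. A minimal sketch using Context.explain; the table and CSV file are hypothetical:

import dask.dataframe as dd
from dask_sql import Context

c = Context()
c.create_table("my_data", dd.read_csv("my_data.csv"))  # hypothetical input file

# Print the logical query plan produced by DataFusion,
# before it is converted into Dask API calls
print(c.explain("SELECT name, SUM(x) FROM my_data GROUP BY name"))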
