SQL query layer for Dask
Project description
dask-sql is a distributed SQL query engine in Python. It allows you to query and transform your data using a mixture of common SQL operations and Python code, and also to scale up the calculation easily if you need it.
- Combine the power of Python and SQL: load your data with Python, transform it with SQL, enhance it with Python and query it with SQL - or the other way round. With dask-sql you can mix the well-known Python dataframe API of pandas and Dask with common SQL operations, to process your data in exactly the way that is easiest for you.
- Infinite scaling: using the power of the great Dask ecosystem, your computations can scale as you need it - from your laptop to your super cluster - without changing a single line of SQL code. From k8s to cloud deployments, from batch systems to YARN - if Dask supports it, so will dask-sql.
- Your data - your queries: use Python user-defined functions (UDFs) in SQL without any performance drawback and extend your SQL queries with the large number of Python libraries, e.g. machine learning, different complicated input formats, complex statistics (a short UDF sketch follows this list).
- Easy to install and maintain: dask-sql is just a pip/conda install away (or a docker run if you prefer).
- Use SQL from wherever you like: dask-sql integrates with your Jupyter notebook, your normal Python module or can be used as a standalone SQL server from any BI tool. It even integrates natively with Apache Hue.
- GPU support: dask-sql supports running SQL queries on CUDA-enabled GPUs by utilizing RAPIDS libraries like cuDF, enabling accelerated compute for SQL.
Read more in the documentation.
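As a rough sketch of the UDF support mentioned above (the function name my_func, the toy table and the types are only illustrative; see the documentation on register_function for the exact API in your version):
import numpy as np
import pandas as pd
from dask_sql import Context

# A tiny illustrative table (any pandas or Dask dataframe works)
df = pd.DataFrame({"x": [1.0, 2.0, 3.0]})
c = Context()
c.create_table("my_data", df)

# A plain Python function that will be callable from SQL
def my_func(x):
    return x ** 2

# Register it as a scalar SQL function with its parameter and return types
c.register_function(my_func, "my_func", [("x", np.float64)], np.float64)
result = c.sql("SELECT my_func(x) AS x_squared FROM my_data")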
Example
For this example, we use some data loaded from disk and query it with a SQL command from our Python code.
Any pandas or Dask dataframe can be used as input, and dask-sql understands a large number of formats (csv, parquet, json, ...) and locations (s3, hdfs, gcs, ...).
import dask.dataframe as dd
from dask_sql import Context
# Create a context to hold the registered tables
c = Context()
# Load the data and register it in the context
# This will give the table a name that we can use in queries
df = dd.read_csv("...")
c.create_table("my_data", df)
# Now execute a SQL query. The result is again a dask dataframe.
result = c.sql("""
SELECT
my_data.name,
SUM(my_data.x)
FROM
my_data
GROUP BY
my_data.name
""", return_futures=False)
# Show the result
print(result)
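The query above computes its result eagerly because of return_futures=False. As a small sketch (assuming the same context c and registered table my_data from above), leaving return_futures at its default keeps the result as a lazy dask dataframe that you compute yourself:
# With the default return_futures=True, the result is a lazy dask dataframe
lazy_result = c.sql("SELECT my_data.name, SUM(my_data.x) FROM my_data GROUP BY my_data.name")
# Nothing is computed until you ask for it explicitly
print(lazy_result.compute())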
Quickstart
Have a look at the documentation or start the example notebook on Binder.
dask-sql is currently under development and does not yet understand all SQL commands (but a large fraction of them). We are actively looking for feedback, improvements and contributors!
Installation
dask-sql can be installed via conda (preferred) or pip - or in a development environment.
With conda
Create a new conda environment or use your already present environment:
conda create -n dask-sql
conda activate dask-sql
Install the package from the conda-forge channel:
conda install dask-sql -c conda-forge
With pip
You can install the package with
pip install dask-sql
For development
If you want the newest (unreleased) dask-sql version or if you plan to do development on dask-sql, you can also install the package from source.
git clone https://github.com/dask-contrib/dask-sql.git
Create a new conda environment and install the development environment:
conda env create -f continuous_integration/environment-3.9-dev.yaml
It is not recommended to use pip instead of conda for the environment setup.
After that, you can install the package in development mode:
pip install -e ".[dev]"
The Rust DataFusion bindings are built as part of the pip install.
Note that if changes are made to the Rust source in src/, another build must be run to recompile the bindings.
This repository uses pre-commit hooks. To install them, call
pre-commit install
Testing
You can run the tests (after installation) with
pytest tests
GPU-specific tests require additional dependencies specified in continuous_integration/gpuci/environment.yaml.
These can be added to the development environment by running
conda env update -n dask-sql -f continuous_integration/gpuci/environment.yaml
And GPU-specific tests can be run with
pytest tests -m gpu --rungpu
SQL Server
dask-sql comes with a small test implementation for a SQL server.
Instead of rebuilding a full ODBC driver, we re-use the presto wire protocol.
It is - so far - only a start of the development and is missing important concepts, such as authentication.
You can test the SQL presto server by running (after installation)
dask-sql-server
or by using the created docker image
docker run --rm -it -p 8080:8080 nbraun/dask-sql
in one terminal. This will spin up a server on port 8080 (by default) that looks similar to a normal presto database to any presto client.
You can test this for example with the default presto client:
presto --server localhost:8080
Now you can fire simple SQL queries (as no data is loaded by default):
=> SELECT 1 + 1;
EXPR$0
--------
2
(1 row)
You can find more information in the documentation.
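If you prefer to start the server from your own Python code (for example to preload tables into the context first), a minimal sketch could look like the following; it assumes the run_server helper in dask_sql.server.app and an illustrative parquet path, so check the documentation for the exact import path and arguments in your version:
from dask_sql import Context
from dask_sql.server.app import run_server  # assumed import path

# Register some data before exposing it over the presto wire protocol
c = Context()
c.create_table("my_data", "path/to/data.parquet")  # illustrative path

# Serve the context on port 8080 so any presto client can query "my_data"
run_server(context=c, host="0.0.0.0", port=8080)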
CLI
You can also run the CLI dask-sql for testing out SQL commands quickly:
dask-sql --load-test-data --startup
(dask-sql) > SELECT * FROM timeseries LIMIT 10;
How does it work?
At the core, dask-sql does two things:
- translate the SQL query using DataFusion into a relational algebra, which is represented as a logical query plan - similar to many other SQL engines (Hive, Flink, ...)
- convert this description of the query into dask API calls (and execute them) - returning a dask dataframe.
For the first step, Arrow DataFusion needs to know about the columns and types of the dask dataframes; therefore, some Rust code to store this information for dask dataframes is defined in dask_planner.
After the translation to a relational algebra is done (using DaskSQLContext.logical_relational_algebra), the Python methods defined in dask_sql.physical turn this into a physical dask execution plan by converting each piece of the relational algebra one-by-one.
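To look at the first step in isolation, you can ask the context for the relational algebra of a query without executing it. A minimal sketch, assuming the Context.explain method and the my_data table registered in the example above:
# Print the logical plan (relational algebra) produced for the query
print(c.explain("SELECT my_data.name, SUM(my_data.x) FROM my_data GROUP BY my_data.name"))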
Project details
Download files
Download the file for your platform.
Source Distribution
Built Distributions
File details
Details for the file dask_sql-2023.11.0rc1.tar.gz.
File metadata
- Download URL: dask_sql-2023.11.0rc1.tar.gz
- Upload date:
- Size: 192.3 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | f6cddc38fa4a7c61ef226470901972a5a6098b953a55421a67fbd142fdf6ba5b
MD5 | f16b13dcc5e6e3915290e09ac512e171
BLAKE2b-256 | f6475ee99a920b470f38c22a1123fe777299bdd8574ae0daba37351eb0fd1f08
File details
Details for the file dask_sql-2023.11.0rc1-cp38-abi3-win_amd64.whl.
File metadata
- Download URL: dask_sql-2023.11.0rc1-cp38-abi3-win_amd64.whl
- Upload date:
- Size: 16.5 MB
- Tags: CPython 3.8+, Windows x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.11
File hashes
Algorithm | Hash digest
---|---
SHA256 | 10cb5b693c1d5870b6884d95d53848be201cf01c9c6f092e49058c02a9c9f6f6
MD5 | 0da4daebea9f962ac382ef0e0d531e50
BLAKE2b-256 | e7f1c129b5c5f9748279deb4f9459bfdf74113aacea231dcc48f88e001ab6dae
File details
Details for the file dask_sql-2023.11.0rc1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl.
File metadata
- Download URL: dask_sql-2023.11.0rc1-cp38-abi3-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
- Upload date:
- Size: 18.1 MB
- Tags: CPython 3.8+, manylinux: glibc 2.17+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | e4720a08ce0efc6b711b5ae03a67331859efdae2e3725c3d5e534cd70b1c46ba
MD5 | 9e14ed209712f153d6e75e4b03fc1bf0
BLAKE2b-256 | 0631565a7e5e6451c49f3ad991a0952c8947283573fc6bcc3c94dd34d10b43f6
File details
Details for the file dask_sql-2023.11.0rc1-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl.
File metadata
- Download URL: dask_sql-2023.11.0rc1-cp38-abi3-manylinux_2_17_aarch64.manylinux2014_aarch64.whl
- Upload date:
- Size: 18.4 MB
- Tags: CPython 3.8+, manylinux: glibc 2.17+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | b33b03dc11298ed6c151f95d16ecc9a5f2fe43f7db83782884ec8b79ce793035
MD5 | 86f44301bfca08f6b85c1067dabb68c4
BLAKE2b-256 | 3dc68e90d24c7f5c20a02e0141fee4c339b5dc40f269858a5124957eefa0e5c3
File details
Details for the file dask_sql-2023.11.0rc1-cp38-abi3-macosx_11_0_arm64.whl.
File metadata
- Download URL: dask_sql-2023.11.0rc1-cp38-abi3-macosx_11_0_arm64.whl
- Upload date:
- Size: 16.5 MB
- Tags: CPython 3.8+, macOS 11.0+ ARM64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | e76169eb82a05d88ee77475092731c623191d7d878f0f8563b2b8df5fda56b85
MD5 | 0b42635aba7aca55cc2afa72ee9a0cda
BLAKE2b-256 | f5332253fd122386ccac2ee29e48a5f7bdd0259915049a96b519147e48c1bdbb
File details
Details for the file dask_sql-2023.11.0rc1-cp38-abi3-macosx_10_7_x86_64.whl.
File metadata
- Download URL: dask_sql-2023.11.0rc1-cp38-abi3-macosx_10_7_x86_64.whl
- Upload date:
- Size: 17.8 MB
- Tags: CPython 3.8+, macOS 10.7+ x86-64
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.2 CPython/3.10.13
File hashes
Algorithm | Hash digest
---|---
SHA256 | 57c6b64b9b6bece73d0864ccc2c855d086533f18db5d6c0dce540d07675f14b3
MD5 | 77c39dcee249b3c54fdf20d4fe9cea5c
BLAKE2b-256 | cde07739397d85ceadd86a93ab2fbb5465e0fe436b8aeed88f0389411e85448a