dask-sql

dask-sql adds a SQL query layer on top of dask. This allows you to query and transform your dask dataframes using common SQL operations.

The queries will run as normal dask operations, which can be distributed within your dask cluster. The goal of this project is therefore similar to what Spark SQL, Hive, or Drill are for the Hadoop world - but with far fewer features (so far...). Some ideas for this project come from the great blazingSQL project.

Read more in the documentation.

You can try out dask-sql quickly by using the Docker command

docker run --rm -it -p 8080:8080 nbraun/dask-sql

See the information on the SQL server at the end of this page.


NOTE

dask-sql is currently under development and does not yet understand all SQL commands. We are actively looking for feedback, improvements, and contributors!


Example

We use the timeseries random data from dask.datasets as an example:

from dask_sql import Context
from dask.datasets import timeseries

# Create a context to hold the registered tables
c = Context()

# If you have a cluster of dask workers,
# initialize it now

# Load the data and register it in the context
# This will give the table a name
df = timeseries()
c.create_table("timeseries", df)

# Now execute an SQL query. The result is a dask dataframe
# The query looks for the id with the highest x for each name
# (this is just random test data, but you could think of looking
# for outliers in the sensor data)
result = c.sql("""
    SELECT
        lhs.name,
        lhs.id,
        lhs.x
    FROM
        timeseries AS lhs
    JOIN
        (
            SELECT
                name AS max_name,
                MAX(x) AS max_x
            FROM timeseries
            GROUP BY name
        ) AS rhs
    ON
        lhs.name = rhs.max_name AND
        lhs.x = rhs.max_x
""")

# Show the result...
print(result.compute())

# ... or use it for any other dask calculation
# (just an example, could also be done via SQL)
print(result.x.mean().compute())
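
Besides dask dataframes, create_table can also register other input types. A minimal sketch, assuming your dask-sql version accepts plain pandas dataframes (which it wraps into dask dataframes internally):

import pandas as pd
from dask_sql import Context

c = Context()

# Register a plain pandas dataframe; dask-sql wraps it
# into a single-partition dask dataframe internally
pdf = pd.DataFrame({"name": ["a", "b"], "x": [1.0, 2.0]})
c.create_table("pandas_data", pdf)

print(c.sql("SELECT name, x FROM pandas_data").compute())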

You can also run the CLI dask-sql for testing out SQL commands quickly:

dask-sql --load-test-data --startup

(dask-sql) > SELECT * FROM timeseries LIMIT 10;

Installation

dask-sql can be installed via conda (preferred) or pip - or in a development environment.

With conda

Create a new conda environment or use an existing one:

conda create -n dask-sql
conda activate dask-sql

Install the package from the conda-forge channel:

conda install dask-sql -c conda-forge

With pip

dask-sql needs Java for parsing the SQL queries. Make sure you have a working Java installation with version >= 8.

To test if you have Java properly installed and set up, run

$ java -version
openjdk version "1.8.0_152-release"
OpenJDK Runtime Environment (build 1.8.0_152-release-1056-b12)
OpenJDK 64-Bit Server VM (build 25.152-b12, mixed mode)

After installing Java, you can install the package with

pip install dask-sql

For development

If you want the newest (unreleased) dask-sql version or if you plan to do development on dask-sql, you can also install the package from source.

git clone https://github.com/nils-braun/dask-sql.git

Create a new conda environment and install the development dependencies:

conda create -n dask-sql --file conda.txt -c conda-forge

Using pip instead of conda is not recommended. If you need to, however, make sure you have Java (JDK >= 8) and Maven installed and correctly set up before continuing. Have a look at conda.txt for the rest of the development environment.

After that, you can install the package in development mode

pip install -e ".[dev]"

To compile the Java classes (at the beginning or after changes), run

python setup.py java

Testing

You can run the tests (after installation) with

pytest tests

How does it work?

At the core, dask-sql does two things:

  • translate the SQL query using Apache Calcite into relational algebra, which is represented as a tree of Java objects - similar to many other SQL engines (Hive, Flink, ...)
  • convert this description of the query from Java objects into dask API calls (and execute them), returning a dask dataframe

For the first step, Apache Calcite needs to know about the columns and types of the dask dataframes; therefore, some Java classes that store this information for dask dataframes are defined in planner. After the translation to relational algebra is done (using RelationalAlgebraGenerator.getRelationalAlgebra), the Python methods defined in dask_sql.physical turn it into a physical dask execution plan by converting each piece of the relational algebra one by one.
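
If you want to inspect the intermediate relational algebra yourself, you can print the query plan. A minimal sketch, assuming the Context.explain method available in current dask-sql versions:

from dask_sql import Context
from dask.datasets import timeseries

c = Context()
c.create_table("timeseries", timeseries())

# Print the relational algebra tree that Apache Calcite
# generates before it is translated into dask operations
print(c.explain("SELECT name, SUM(x) FROM timeseries GROUP BY name"))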

SQL Server

dask-sql comes with a small test implementation of a SQL server. Instead of rebuilding a full ODBC driver, we re-use the presto wire protocol. It is, so far, only a start of the development and is missing important concepts such as authentication.

You can test the SQL presto server by running (after installation)

dask-sql-server

or by using the created docker image

docker run --rm -it -p 8080:8080 nbraun/dask-sql

in one terminal. This will spin up a server on port 8080 (by default) that looks like a normal presto database to any presto client.

You can test this, for example, with the default presto client:

presto --server localhost:8080

Now you can fire simple SQL queries (as no data is loaded by default):

=> SELECT 1 + 1;
 EXPR$0
--------
    2
(1 row)
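
You can also query the server from Python with a presto client library. A minimal sketch using the presto-python-client package (prestodb); the user value is arbitrary, as the server does not implement authentication yet:

import prestodb

# Connect to the running dask-sql presto server
conn = prestodb.dbapi.connect(
    host="localhost",
    port=8080,
    user="test",  # arbitrary - the server has no authentication yet
)

cursor = conn.cursor()
cursor.execute("SELECT 1 + 1")
print(cursor.fetchall())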

You can find more information in the documentation.
