
Treasure Data Driver for Python


pytd


Quickly read/write your data directly from/to the Presto query engine and Plazma primary storage

Unlike the other official Treasure Data API libraries for Python, td-client-python and pandas-td, pytd gives direct access to the back-end query and storage engines. This seamless connection lets your Python code read and write large volumes of data in less time, making your day-to-day data analytics work more efficient and productive.

Project milestones

This project has been actively developed based on the milestones.

Installation

pip install pytd

Usage

Set your API key and endpoint in the environment variables TD_API_KEY and TD_API_SERVER, respectively, and create a client instance:

import pytd

client = pytd.Client(database='sample_datasets')
# or, hard-code your API key, endpoint, and/or query engine:
# >>> pytd.Client(apikey='1/XXX', endpoint='https://api.treasuredata.com/', database='sample_datasets', engine='presto')

Issue a Presto query and retrieve the result:

client.query('select symbol, count(1) as cnt from nasdaq group by 1 order by 1')
# {'columns': ['symbol', 'cnt'], 'data': [['AAIT', 590], ['AAL', 82], ['AAME', 9252], ..., ['ZUMZ', 2364]]}
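
Since the result is a plain dict with 'columns' and 'data' keys, it is straightforward to turn it into a pandas.DataFrame; a minimal sketch reusing the query above:

import pandas as pd

res = client.query('select symbol, count(1) as cnt from nasdaq group by 1 order by 1')
# build a DataFrame from the 'data' rows and 'columns' labels returned by pytd
df = pd.DataFrame(res['data'], columns=res['columns'])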

In case of Hive:

client = pytd.Client(database='sample_datasets', engine='hive')
client.query('select hivemall_version()')
# {'columns': ['_c0'], 'data': [['0.6.0-SNAPSHOT-201901-r01']]} (as of Feb, 2019)

Once you install the package with the PySpark dependencies, any data represented as a pandas.DataFrame can be written directly to TD via td-spark:

pip install pytd[spark]

import pandas as pd

df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 10]})
client.load_table_from_dataframe(df, 'takuti.foo', if_exists='overwrite')

If you want to use an existing td-spark JAR file, create a SparkWriter with the td_spark_path option.

writer = pytd.writer.SparkWriter(apikey='1/XXX', endpoint='https://api.treasuredata.com/', td_spark_path='/path/to/td-spark-assembly.jar')
client = pytd.Client(database='sample_datasets', writer=writer)
client.load_table_from_dataframe(df, 'mydb.bar', if_exists='overwrite')

DB-API

pytd implements the Python Database API Specification v2.0 (PEP 249) with the help of prestodb/presto-python-client.

Connect to the API first:

from pytd.dbapi import connect

conn = connect(pytd.Client(database='sample_datasets'))
# or, connect with Hive:
# >>> conn = connect(pytd.Client(database='sample_datasets', engine='hive'))

The Cursor defined by the specification allows us to flexibly fetch query results from a custom function:

def query(sql, connection):
    cur = connection.cursor()
    cur.execute(sql)
    rows = cur.fetchall()
    columns = [desc[0] for desc in cur.description]
    return {'data': rows, 'columns': columns}

query('select symbol, count(1) as cnt from nasdaq group by 1 order by 1', conn)

Below is an example of generator-based iterative retrieval, just like pandas.DataFrame.iterrows:

def iterrows(sql, connection):
    cur = connection.cursor()
    cur.execute(sql)
    index = 0
    columns = None
    while True:
        row = cur.fetchone()
        if row is None:
            break
        if columns is None:
            columns = [desc[0] for desc in cur.description]
        yield index, dict(zip(columns, row))
        index += 1

for index, row in iterrows('select symbol, count(1) as cnt from nasdaq group by 1 order by 1', conn):
    print(index, row)
# 0 {'cnt': 590, 'symbol': 'AAIT'}
# 1 {'cnt': 82, 'symbol': 'AAL'}
# 2 {'cnt': 9252, 'symbol': 'AAME'}
# 3 {'cnt': 253, 'symbol': 'AAOI'}
# 4 {'cnt': 5980, 'symbol': 'AAON'}
# ...
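
Besides fetchone, the DB-API cursor also provides fetchmany, which is convenient when you want to process a large result set in fixed-size chunks. A minimal sketch (the chunk size of 1,000 is an arbitrary choice for illustration):

def iterchunks(sql, connection, chunksize=1000):
    cur = connection.cursor()
    cur.execute(sql)
    columns = None
    while True:
        rows = cur.fetchmany(chunksize)  # fetch up to `chunksize` rows at a time
        if not rows:
            break
        if columns is None:
            columns = [desc[0] for desc in cur.description]
        yield [dict(zip(columns, row)) for row in rows]

for chunk in iterchunks('select symbol, count(1) as cnt from nasdaq group by 1 order by 1', conn):
    print(len(chunk))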

How to replace pandas-td

pytd offers pandas-td-compatible functions that provide the same functionality in a more efficient way. If you are still using pandas-td, we recommend switching to pytd as follows.

First, install the package from PyPI:

pip install pytd
# or, `pip install pytd[spark]` if you wish to use `to_td`

Next, make the following modifications to the import statements.

Before:

import pandas_td as td
In [1]: %load_ext pandas_td.ipython

After:

import pytd.pandas_td as td
In [1]: %load_ext pytd.pandas_td.ipython

Consequently, all pandas_td code should keep running correctly with pytd. Report an issue from here if you notice any incompatible behavior.
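
For reference, a typical pandas-td-style read keeps working unchanged after the import swap. The following is a sketch assuming your code uses the compatible create_engine and read_td functions, as in the original pandas-td API:

import pytd.pandas_td as td

con = td.connect()  # picks up TD_API_KEY and TD_API_SERVER from the environment
engine = td.create_engine('presto:sample_datasets', con=con)
df = td.read_td('select symbol, count(1) as cnt from nasdaq group by 1 order by 1', engine)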

Use existing td-spark-assembly.jar file

If you want to use an existing td-spark JAR file, create a SparkWriter with the td_spark_path option. You can pass the writer to the connect() function.

import pytd
import pytd.pandas_td as td
import pandas as pd
apikey = '1/XXX'
endpoint = 'https://api.treasuredata.com/'

writer = pytd.writer.SparkWriter(apikey=apikey, endpoint=endpoint, td_spark_path='/path/to/td-spark-assembly.jar')
con = td.connect(apikey=apikey, endpoint=endpoint, writer=writer)

df = pd.DataFrame(data={'col1': [1, 2], 'col2': [3, 10]})
td.to_td(df, 'mydb.buzz', con, if_exists='replace', index=False)

For developers

We use black and isort as formatters, and flake8 as a linter. Our CI checks the format with them.

Note that black requires Python 3.6+ while pytd supports 3.5+, so you need Python 3.6+ for development.

We highly recommend introducing pre-commit to ensure your commits follow the required format.

You can install pre-commit as follows:

pip install pre-commit
pre-commit install

Now, black, isort, and flake8 will run each time you commit changes. You can skip these checks with git commit --no-verify.

If you want to check the code format manually, you can install the tools as follows:

pip install black isort flake8

Then, you can run those tools manually:

black pytd
flake8 pytd
isort

You can run the formatter, linter, and tests using nox as follows:

pip install nox # only needed the first time
nox
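
If you are curious about what nox runs, the sessions live in the repository's noxfile.py. Below is an illustrative sketch only, not the project's actual configuration; it simply wires the same tools into nox sessions:

# noxfile.py -- illustrative sketch, not the project's actual configuration
import nox

@nox.session
def lint(session):
    session.install('black', 'isort', 'flake8')
    session.run('black', '--check', 'pytd')   # fail if formatting differs
    session.run('flake8', 'pytd')             # lint

@nox.session
def tests(session):
    session.install('.', 'pytest')
    session.run('pytest')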
