
SQLAlchemy dialect for BigQuery

Project description


Quick Start

In order to use this library, you first need to go through the following steps:

  1. Select or create a Cloud Platform project.

  2. [Optional] Enable billing for your project.

  3. Enable the BigQuery Storage API.

  4. Set up Authentication.
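
Once these steps are done, you can sanity-check the setup with the underlying BigQuery client before involving SQLAlchemy (a minimal sketch; it assumes application default credentials and google-cloud-bigquery, which this dialect depends on):

from google.cloud import bigquery

# Uses application default credentials and the inferred project.
client = bigquery.Client()
print(list(client.query('SELECT 1').result()))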

Installation

Install this library in a virtualenv using pip. virtualenv is a tool to create isolated Python environments. The basic problem it addresses is one of dependencies and versions, and indirectly permissions.

With virtualenv, it’s possible to install this library without needing system install permissions, and without clashing with the installed system dependencies.

Supported Python Versions

Python >= 3.6

Unsupported Python Versions

Python <= 3.5.

Mac/Linux

pip install virtualenv
virtualenv <your-env>
source <your-env>/bin/activate
<your-env>/bin/pip install sqlalchemy-bigquery

Windows

pip install virtualenv
virtualenv <your-env>
<your-env>\Scripts\activate
<your-env>\Scripts\pip.exe install sqlalchemy-bigquery

Usage

SQLAlchemy

from sqlalchemy import *
from sqlalchemy.engine import create_engine
from sqlalchemy.schema import *

# Connect, reflect an existing table, and count its rows.
engine = create_engine('bigquery://project')
table = Table('dataset.table', MetaData(bind=engine), autoload=True)
print(select([func.count('*')], from_obj=table).scalar())

Project

The project in bigquery://project is used to instantiate the BigQuery client with that project ID. To infer the project from the environment, use bigquery:// without a project.
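
For example (the project ID here is a placeholder):

from sqlalchemy.engine import create_engine

# Explicit project ID.
engine = create_engine('bigquery://my-project')

# Project inferred from the environment (e.g. GOOGLE_CLOUD_PROJECT or
# the active gcloud configuration).
engine = create_engine('bigquery://')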

Authentication

Follow the Google Cloud library guide for authentication. Alternatively, you can provide the path to a service account JSON file in create_engine():

engine = create_engine('bigquery://', credentials_path='/path/to/keyfile.json')
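
If you already have the key contents in memory, you can instead pass them as a mapping via credentials_info, which the connection-string section below lists among the supported parameters (a sketch; the key path is a placeholder):

import json
from sqlalchemy.engine import create_engine

# Load the service account key and pass its parsed contents directly.
with open('/path/to/keyfile.json') as f:
    info = json.load(f)

engine = create_engine('bigquery://', credentials_info=info)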

Location

To specify location of your datasets pass location to create_engine():

engine = create_engine('bigquery://project', location="asia-northeast1")

Table names

To query tables from non-default projects or datasets, use the following format for the SQLAlchemy schema name: [project.]dataset, e.g.:

metadata = MetaData(bind=engine)

# If neither the dataset nor the project is the default
sample_table_1 = Table('natality', metadata, schema='bigquery-public-data.samples')
# If only the dataset is not the default
sample_table_2 = Table('natality', metadata, schema='bigquery-public-data')

Batch size

By default, arraysize is set to 5000. arraysize controls the batch size used when fetching query results. To change it, pass arraysize to create_engine():

engine = create_engine('bigquery://project', arraysize=1000)

Page size for dataset.list_tables

By default, list_tables_page_size is set to 1000. It sets max_results for the dataset.list_tables operation. To change it, pass list_tables_page_size to create_engine():

engine = create_engine('bigquery://project', list_tables_page_size=100)

Adding a Default Dataset

If you want to have the Client use a default dataset, specify it as the “database” portion of the connection string.

engine = create_engine('bigquery://project/dataset')

When using a default dataset, don’t include the dataset name in the table name, e.g.:

table = Table('table_name', metadata)

Note that specifying a default dataset doesn’t restrict execution of queries to that particular dataset when using raw queries, e.g.:

# Set default dataset to dataset_a
engine = create_engine('bigquery://project/dataset_a')

# This will still execute and return rows from dataset_b
engine.execute('SELECT * FROM dataset_b.table').fetchall()

Connection String Parameters

There are many situations where you can’t call create_engine directly, such as when using tools like Flask-SQLAlchemy. For situations like these, or for situations where you want the Client to have a default_query_job_config, you can pass many arguments in the query string of the connection URI.

The credentials_path, credentials_info, credentials_base64, location, arraysize, and list_tables_page_size parameters are used by this library; the rest are used to create a QueryJobConfig.

Note that if you want to use query strings, it is more reliable to use three slashes: 'bigquery:///?a=b' will work reliably, but 'bigquery://?a=b' might be interpreted as having a “database” of ?a=b, depending on the system parsing the connection string.
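
For example, with Flask-SQLAlchemy the engine is built from a configured URI rather than a direct create_engine call (a minimal sketch assuming the flask and flask_sqlalchemy packages; the path and location are placeholders):

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
# All engine options must travel in the query string; note the three slashes.
app.config['SQLALCHEMY_DATABASE_URI'] = (
    'bigquery:///?credentials_path=/some/path/to.json&location=some-location'
)
db = SQLAlchemy(app)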

Here are examples of all the supported arguments. Any not present are either for legacy SQL (which isn’t supported by this library), or are too complex and are not implemented.

engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_path=/some/path/to.json' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)

In cases where you wish to include the full credentials in the connection URI, you can base64-encode the credentials JSON file and supply the encoded string to the credentials_base64 parameter.

engine = create_engine(
    'bigquery://some-project/some-dataset' '?'
    'credentials_base64=eyJrZXkiOiJ2YWx1ZSJ9Cg==' '&'
    'location=some-location' '&'
    'arraysize=1000' '&'
    'list_tables_page_size=100' '&'
    'clustering_fields=a,b,c' '&'
    'create_disposition=CREATE_IF_NEEDED' '&'
    'destination=different-project.different-dataset.table' '&'
    'destination_encryption_configuration=some-configuration' '&'
    'dry_run=true' '&'
    'labels=a:b,c:d' '&'
    'maximum_bytes_billed=1000' '&'
    'priority=INTERACTIVE' '&'
    'schema_update_options=ALLOW_FIELD_ADDITION,ALLOW_FIELD_RELAXATION' '&'
    'use_query_cache=true' '&'
    'write_disposition=WRITE_APPEND'
)

To create the base64-encoded string, you can use the command-line tool base64, or openssl base64, or python -m base64.
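
The same encoding can be done in Python itself (a sketch; the key path is a placeholder):

import base64

# Read the service account key and print the value for credentials_base64.
with open('/path/to/keyfile.json', 'rb') as f:
    print(base64.b64encode(f.read()).decode('ascii'))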

Alternatively, you can use an online generator such as https://www.base64encode.org to paste your credentials JSON file to be encoded.

Creating tables

To add metadata to a table:

table = Table('mytable', ..., bigquery_description='my table description', bigquery_friendly_name='my table friendly name')

To add metadata to a column:

Column('mycolumn', doc='my column description')
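
Putting both together in a complete definition (a sketch; the connection URI, table, and column names are illustrative):

from sqlalchemy import Column, Integer, MetaData, String, Table
from sqlalchemy.engine import create_engine

engine = create_engine('bigquery://project/dataset')
metadata = MetaData()

mytable = Table(
    'mytable',
    metadata,
    # doc becomes the column description in BigQuery.
    Column('id', Integer, doc='my column description'),
    Column('name', String),
    bigquery_description='my table description',
    bigquery_friendly_name='my table friendly name',
)

metadata.create_all(engine)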

Threading and Multiprocessing

Because this client uses the grpc library, it’s safe to share instances across threads.

In multiprocessing scenarios, the best practice is to create client instances after the invocation of os.fork by multiprocessing.pool.Pool or multiprocessing.Process.
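
A minimal sketch of that pattern, with engine creation inside the worker, i.e. after the fork (the project and query are illustrative):

import multiprocessing

from sqlalchemy.engine import create_engine

def worker(_):
    # Create the engine, and therefore the BigQuery client, in the child
    # process rather than inheriting one from the parent.
    engine = create_engine('bigquery://project')
    return engine.execute('SELECT 1').scalar()

if __name__ == '__main__':
    with multiprocessing.Pool(processes=2) as pool:
        print(pool.map(worker, [0, 1]))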

