
Databricks Connect Client


Databricks Connect

Databricks Connect is a Python library to run PySpark DataFrame queries on a remote Spark cluster. Databricks Connect leverages the power of Spark Connect. An application using Databricks Connect runs locally, and when the results of a DataFrame query need to be evaluated, the query is run on a configured Databricks cluster.

The following is a simple Python program that uses Databricks Connect to print out a number range. The number range query is executed on the Databricks cluster.

from databricks.connect import DatabricksSession

# Create a Spark session backed by the configured Databricks cluster.
session = DatabricksSession.builder.getOrCreate()

# Build a DataFrame of ids 1..9 and print it; the query runs remotely.
df = session.range(1, 10)
df.show()
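
Since session.range(1, 10) excludes the upper bound, df.show() prints the ids 1 through 9, formatted roughly as follows:

+---+
| id|
+---+
|  1|
|  2|
...
|  9|
+---+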

Specifying Connection Parameters

DatabricksSession offers a few ways to specify the Databricks workspace, cluster, and user credentials, collectively referred to in the rest of this document as connection parameters. The specified credentials are used to execute DataFrame queries on the cluster, so the user must have both cluster access permissions and appropriate data access permissions.

NOTE: Currently, Databricks Connect only supports authentication via personal access tokens. Other authentication mechanisms are coming soon.

When DatabricksSession is initialized with no additional parameters, as shown below, the connection parameters are picked up from the environment.

session = DatabricksSession.builder.getOrCreate()

First, the SPARK_REMOTE environment variable is used, if it is configured. When set, it must contain a Spark Connect connection string. Read more about Spark Connect connection strings.

SPARK_REMOTE="sc://<databricks workspace url>:443/;token=<bearer token>;x-databricks-cluster-id=<cluster id>"
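
For example, here is a minimal sketch that sets SPARK_REMOTE from within Python before creating the session; the placeholder values are illustrative and must be replaced with real workspace details.

import os
from databricks.connect import DatabricksSession

# SPARK_REMOTE is read when the session is created, so it must be
# set beforehand. The placeholder values below are illustrative.
os.environ["SPARK_REMOTE"] = (
    "sc://<databricks workspace url>:443/;token=<bearer token>;"
    "x-databricks-cluster-id=<cluster id>"
)

session = DatabricksSession.builder.getOrCreate()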

If this environment variable is not configured, Databricks Connect will then look for connection parameters using the Databricks SDK.

The Databricks SDK reads these values from two locations: first from environment variables, and then, for any parameters not configured via environment variables, from the 'DEFAULT' profile in the .databrickscfg configuration file, if one is set up. Details on the environment variables and the configuration file can be found in the Databricks SDK documentation.

Similar to the authentication parameters, the Databricks SDK reads the cluster identifier from the DATABRICKS_CLUSTER_ID environment variable or from the cluster_id entry in the configuration file.
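
As an illustrative sketch, a 'DEFAULT' profile in .databrickscfg that carries all of these values might look like the following; the host, token, and cluster ID are placeholders.

[DEFAULT]
host       = https://<databricks workspace url>
token      = <bearer token>
cluster_id = <cluster id>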

When the defaults should not be used, the Databricks Connect session can be initialized explicitly with a Config object from the Databricks SDK. In the example below, the session is configured to use the foo-user profile from the configuration file. Read more on profiles in configuration files in the Databricks SDK.

from databricks.sdk.core import Config
from databricks.connect import DatabricksSession

config = Config(
    profile="foo-user",
    # ...
)

session = DatabricksSession.builder.sdkConfig(config).getOrCreate()
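
A Config object can also carry the connection parameters directly instead of referencing a profile. The following is a sketch that assumes the SDK's Config accepts host, token, and cluster_id fields; the values are placeholders.

from databricks.sdk.core import Config
from databricks.connect import DatabricksSession

# Sketch: pass connection parameters directly on the Config object.
# Assumes Config accepts host, token, and cluster_id fields; the
# values below are placeholders.
config = Config(
    host="<databricks workspace url>",
    token="<bearer token>",
    cluster_id="<cluster id>",
)

session = DatabricksSession.builder.sdkConfig(config).getOrCreate()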

Connection parameters can also be specified directly in code.

session = DatabricksSession.builder.remote(
    host="<databricks workspace url>",
    cluster_id="<databricks cluster id>",
    token="<bearer token>"
).getOrCreate()

The Spark Connect connection string can also be specified directly in code.

session = DatabricksSession.builder\
    .remote("sc://<databricks workspace url>:443/;token=<bearer token>;x-databricks-cluster-id=<cluster id>")\
    .getOrCreate()

In summary, connection parameters are collected in the following order. Evaluation stops at the first step that yields a complete set of connection parameters.

  1. Specified directly using remote(), either as a connection string or as keyword arguments.
  2. Specified via the Databricks SDK using sdkConfig().
  3. Specified in the SPARK_REMOTE environment variable.
  4. Specified via the Databricks SDK's default authentication.
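
Whichever mechanism supplies the parameters, a quick way to verify the connection is to run a trivial query. A minimal sketch:

from databricks.connect import DatabricksSession

session = DatabricksSession.builder.getOrCreate()
# If the connection parameters were resolved correctly, this query
# runs on the configured Databricks cluster and prints ids 0 through 4.
session.range(5).show()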
