Microsoft Azure File DataLake Storage Client Library for Python

Project description

Azure DataLake service client library for Python

Overview

This preview package for Python includes ADLS Gen2 specific API support made available in Storage SDK. This includes:

  1. New directory-level operations (Create, Rename, Delete) for hierarchical namespace enabled (HNS) storage accounts. For HNS enabled accounts, the rename/move operations are atomic.
  2. Permission related operations (Get/Set ACLs) for hierarchical namespace enabled (HNS) accounts. A short sketch of both is shown below.
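
As an illustrative sketch of these directory and ACL operations (the connection string, file system, directory names, and permission string are placeholders, not values from this documentation):

from azure.storage.filedatalake import DataLakeDirectoryClient

directory = DataLakeDirectoryClient.from_connection_string(
    "my_connection_string", file_system_name="myfilesystem", directory_name="mydirectory")
directory.create_directory()

# Get and set POSIX access control (ACLs) on the directory
acl_props = directory.get_access_control()
print(acl_props['acl'])
directory.set_access_control(permissions='rwxr-x---')

# Rename/move the directory (atomic on HNS-enabled accounts);
# the new name has the form "{filesystem}/{full directory path}"
renamed = directory.rename_directory("myfilesystem/renamed-directory")
renamed.delete_directory()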

Source code | Package (PyPi) | Package (Conda) | API reference documentation | Product documentation | Samples

Getting started

Prerequisites

Install the package

Install the Azure DataLake Storage client library for Python with pip:

pip install azure-storage-file-datalake --pre

Create a storage account

If you wish to create a new storage account, you can use the Azure Portal, Azure PowerShell, or Azure CLI:

# Create a new resource group to hold the storage account -
# if using an existing resource group, skip this step
az group create --name my-resource-group --location westus2

# Install the extension 'Storage-Preview'
az extension add --name storage-preview

# Create the storage account
az storage account create --name my-storage-account-name --resource-group my-resource-group --sku Standard_LRS --kind StorageV2 --hierarchical-namespace true

Authenticate the client

Interaction with DataLake Storage starts with an instance of the DataLakeServiceClient class. You need an existing storage account, its URL, and a credential to instantiate the client object.

Get credentials

To authenticate the client you have a few options:

  1. Use a SAS token string
  2. Use an account shared access key
  3. Use a token credential from azure.identity

Alternatively, you can authenticate with a storage connection string using the from_connection_string method. See example: Client creation with a connection string.

You can omit the credential if your account URL already has a SAS token.
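
For example, each of these credential types can be passed directly to the client constructor (a minimal sketch; the account name, SAS token, and account key values are placeholders):

from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

account_url = "https://<my-storage-account-name>.dfs.core.windows.net/"

# Option 1: SAS token string (it can also simply be appended to the account URL)
service = DataLakeServiceClient(account_url=account_url, credential="<my-sas-token>")

# Option 2: account shared access key
service = DataLakeServiceClient(account_url=account_url, credential="<my-account-access-key>")

# Option 3: token credential from azure.identity, e.g. DefaultAzureCredential
service = DataLakeServiceClient(account_url=account_url, credential=DefaultAzureCredential())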

Create client

Once you have your account URL and credentials ready, you can create the DataLakeServiceClient:

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient(account_url="https://<my-storage-account-name>.dfs.core.windows.net/", credential=credential)

Key concepts

DataLake storage offers four types of resources:

  • The storage account
  • A file system in the storage account
  • A directory under the file system
  • A file in the file system or under a directory

Async Clients

This library includes a complete async API supported on Python 3.5+. To use it, you must first install an async transport, such as aiohttp. See azure-core documentation for more information.

Async clients and credentials should be closed when they're no longer needed. These objects are async context managers and define async close methods.
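
As a minimal sketch (the connection string is a placeholder and aiohttp must be installed), the async client can be used as an async context manager so it is closed automatically:

import asyncio
from azure.storage.filedatalake.aio import DataLakeServiceClient

async def list_file_systems():
    # The async context manager closes the client when the block exits
    async with DataLakeServiceClient.from_connection_string("my_connection_string") as service:
        async for file_system in service.list_file_systems():
            print(file_system.name)

asyncio.run(list_file_systems())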

Clients

The DataLake Storage SDK provides five different clients to interact with the DataLake Service:

  1. DataLakeServiceClient - this client interacts with the DataLake Service at the account level. It provides operations to retrieve and configure the account properties as well as list, create, and delete file systems within the account. For operations relating to a specific file system, directory or file, clients for those entities can also be retrieved using the get_file_client, get_directory_client or get_file_system_client functions.
  2. FileSystemClient - this client represents interaction with a specific file system, even if that file system does not exist yet. It provides operations to create, delete, or configure file systems, to list the paths under the file system, and to upload or delete a file or directory in the file system. For operations relating to a specific file, the client can also be retrieved using the get_file_client function. For operations relating to a specific directory, the client can be retrieved using the get_directory_client function (see the sketch after this list).
  3. DataLakeDirectoryClient - this client represents interaction with a specific directory, even if that directory does not exist yet. It provides operations to create, delete, or rename the directory and to get and set its properties.
  4. DataLakeFileClient - this client represents interaction with a specific file, even if that file does not exist yet. It provides file operations to append data, flush data, and to create, read, and delete the file.
  5. DataLakeLeaseClient - this client represents lease interactions with a FileSystemClient, DataLakeDirectoryClient or DataLakeFileClient. It provides operations to acquire, renew, release, change, and break leases on the resources.
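
As a minimal sketch of how these clients relate (the connection string and resource names are placeholders), a specific file client can be reached from the account-level client:

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient.from_connection_string("my_connection_string")

# Navigate from the account-level client down to a specific directory and file
file_system = service.get_file_system_client("myfilesystem")
directory = file_system.get_directory_client("mydirectory")
file = directory.get_file_client("myfile")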

Examples

The following sections provide several code snippets covering some of the most common Storage DataLake tasks, including:

  • Client creation with a connection string
  • Uploading a file
  • Downloading a file
  • Enumerating paths

Client creation with a connection string

Create the DataLakeServiceClient using the connection string to your Azure Storage account.

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient.from_connection_string(conn_str="my_connection_string")

Uploading a file

Upload a file to your file system.

from azure.storage.filedatalake import DataLakeFileClient

data = b"abc"
file = DataLakeFileClient.from_connection_string("my_connection_string",
                                                 file_system_name="myfilesystem", file_path="myfile")
file.create_file()
file.append_data(data, offset=0, length=len(data))
file.flush_data(len(data))

Downloading a file

Download a file from your file system.

from azure.storage.filedatalake import DataLakeFileClient

file = DataLakeFileClient.from_connection_string("my_connection_string",
                                                 file_system_name="myfilesystem", file_path="myfile")

with open("./BlockDestination.txt", "wb") as my_file:
    download = file.download_file()
    download.readinto(my_file)

Enumerating paths

List the paths in your file system.

from azure.storage.filedatalake import FileSystemClient

file_system = FileSystemClient.from_connection_string("my_connection_string", file_system_name="myfilesystem")

paths = file_system.get_paths()
for path in paths:
    print(path.name + '\n')

Optional Configuration

Optional keyword arguments that can be passed in at the client and per-operation level.

Retry Policy configuration

Use the following keyword arguments when instantiating a client to configure the retry policy:

  • retry_total (int): Total number of retries to allow. Takes precedence over other counts. Pass in retry_total=0 if you do not want to retry on requests. Defaults to 10.
  • retry_connect (int): How many connection-related errors to retry on. Defaults to 3.
  • retry_read (int): How many times to retry on read errors. Defaults to 3.
  • retry_status (int): How many times to retry on bad status codes. Defaults to 3.
  • retry_to_secondary (bool): Whether the request should be retried to secondary, if able. This should only be enabled if RA-GRS accounts are used and potentially stale data can be handled. Defaults to False.
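
For example, these retry options can be combined when constructing a client (a sketch; the connection string and counts are placeholders):

from azure.storage.filedatalake import DataLakeServiceClient

service = DataLakeServiceClient.from_connection_string(
    "my_connection_string",
    retry_total=5,             # overall retry budget
    retry_connect=2,           # connection-related errors
    retry_read=2,              # read errors
    retry_status=2,            # bad status codes
    retry_to_secondary=False,  # only enable for RA-GRS accounts
)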

Other client / per-operation configuration

Other optional configuration keyword arguments that can be specified on the client or per-operation.

Client keyword arguments:

  • connection_timeout (int): The number of seconds the client will wait to establish a connection to the server. Defaults to 20 seconds.
  • read_timeout (int): The number of seconds the client will wait, between consecutive read operations, for a response from the server. This is a socket level timeout and is not affected by overall data size. Client-side read timeouts will be automatically retried. Defaults to 60 seconds.
  • transport (Any): User-provided transport to send the HTTP request.

Per-operation keyword arguments:

  • raw_response_hook (callable): The given callback uses the response returned from the service.
  • raw_request_hook (callable): The given callback uses the request before it is sent to the service.
  • client_request_id (str): Optional user specified identification of the request.
  • user_agent (str): Appends the custom value to the user-agent header to be sent with the request.
  • logging_enable (bool): Enables logging at the DEBUG level. Defaults to False. Can also be passed in at the client level to enable it for all requests.
  • logging_body (bool): Enables logging the request and response body. Defaults to False. Can also be passed in at the client level to enable it for all requests.
  • headers (dict): Pass in custom headers as key, value pairs. E.g. headers={'CustomValue': value}
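
As a brief sketch of how these are combined (the connection string and values are placeholders), client-level and per-operation keyword arguments might look like this:

from azure.storage.filedatalake import DataLakeServiceClient

# Client-level keyword arguments apply to every request made by this client
service = DataLakeServiceClient.from_connection_string(
    "my_connection_string",
    connection_timeout=30,
    read_timeout=120,
)

# Per-operation keyword arguments apply to a single call
file_systems = list(service.list_file_systems(
    user_agent="my-application/1.0",
    logging_enable=True,
))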

Troubleshooting

General

DataLake Storage clients raise exceptions defined in Azure Core.

This list can be used for reference to catch thrown exceptions. To get the specific error code of the exception, use the error_code attribute, i.e., exception.error_code.
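
As an illustrative sketch (the connection string and file names are placeholders), a service error can be caught and inspected like this:

from azure.core.exceptions import HttpResponseError, ResourceNotFoundError
from azure.storage.filedatalake import DataLakeFileClient

file = DataLakeFileClient.from_connection_string("my_connection_string",
                                                 file_system_name="myfilesystem", file_path="myfile")

try:
    file.get_file_properties()
except ResourceNotFoundError as error:
    print("File not found:", error.error_code)
except HttpResponseError as error:
    # error_code carries the service-specific error code
    print("Request failed:", error.error_code)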

Logging

This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.

Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable argument:

import sys
import logging
from azure.storage.filedatalake import DataLakeServiceClient

# Create a logger for the 'azure.storage' namespace (covers azure.storage.filedatalake)
logger = logging.getLogger('azure.storage')
logger.setLevel(logging.DEBUG)

# Configure a console output
handler = logging.StreamHandler(stream=sys.stdout)
logger.addHandler(handler)

# This client will log detailed information about its HTTP sessions, at DEBUG level
service_client = DataLakeServiceClient.from_connection_string("your_connection_string", logging_enable=True)

Similarly, logging_enable can enable detailed logging for a single operation, even when it isn't enabled for the client:

service_client.list_file_systems(logging_enable=True)

Next steps

More sample code

Get started with our Azure DataLake samples.

Several DataLake Storage Python SDK samples are available to you in the SDK's GitHub repository. These samples provide example code for additional scenarios commonly encountered while working with DataLake Storage:

  • datalake_samples_access_control.py - Examples for common DataLake Storage tasks:

    • Set up a file system
    • Create a directory
    • Set/Get access control for the directory
    • Create files under the directory
    • Set/Get access control for each file
    • Delete file system
  • datalake_samples_upload_download.py - Examples for common DataLake Storage tasks:

    • Set up a file system
    • Create file
    • Append data to the file
    • Flush data to the file
    • Download the uploaded data
    • Delete file system

Additional documentation

See the Table for ADLS Gen1 to ADLS Gen2 API Mapping. For more extensive REST documentation on Data Lake Storage Gen2, see the Data Lake Storage Gen2 documentation on docs.microsoft.com.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

azure-storage-file-datalake-12.15.0b1.tar.gz (271.6 kB)

Built Distribution

azure_storage_file_datalake-12.15.0b1-py3-none-any.whl

File details

Details for the file azure-storage-file-datalake-12.15.0b1.tar.gz.

File metadata

File hashes

Hashes for azure-storage-file-datalake-12.15.0b1.tar.gz
Algorithm Hash digest
SHA256 137b448f2189f4f6b9a2159534eaa80f7b003e172723190352238241c9751f71
MD5 8dd25f1f87fc4f97c34d7965fee1a0b0
BLAKE2b-256 915640e6f5bec6bda53f9edd5679bf467a1a3f97e504081a98229b7ef23ee085

See more details on using hashes here.

File details

Details for the file azure_storage_file_datalake-12.15.0b1-py3-none-any.whl.

File metadata

File hashes

Hashes for azure_storage_file_datalake-12.15.0b1-py3-none-any.whl
Algorithm Hash digest
SHA256 1c29d6275d8ad969c30ef11abd6e5926cf146d2072ccccab644b89a5f126ce83
MD5 f2f084a7fc81b4270a0055cde7a2174a
BLAKE2b-256 e6af42c0c987bcca38ed6626489bc44759a8d92e30bf8abfc8acecdd47dc4180

See more details on using hashes here.
