Microsoft Azure Event Hubs checkpointer implementation with Blob Storage Client Library for Python

Project description

Azure EventHubs Checkpoint Store client library for Python using Storage Blobs

Azure EventHubs Checkpoint Store is used for storing checkpoints while processing events from Azure Event Hubs. This package works as a plug-in to EventHubConsumerClient. It uses Azure Storage Blobs as the persistent store for maintaining checkpoints and partition ownership information.

Please note that this is an async library. For the sync version of the Azure EventHubs Checkpoint Store client library, please refer to azure-eventhub-checkpointstoreblob.

Source code | Package (PyPI) | API reference documentation | Azure Eventhubs documentation | Azure Storage documentation

Getting started

Prerequisites

  • Python 3.5.3 or later.

  • Microsoft Azure Subscription: To use Azure services, including Azure Event Hubs, you'll need a subscription. If you do not have an existing Azure account, you may sign up for a free trial or use your MSDN subscriber benefits when you create an account.

  • Event Hubs namespace with an Event Hub: To interact with Azure Event Hubs, you'll also need to have a namespace and Event Hub available. If you are not familiar with creating Azure resources, you may wish to follow the step-by-step guide for creating an Event Hub using the Azure portal. There, you can also find detailed instructions for using the Azure CLI, Azure PowerShell, or Azure Resource Manager (ARM) templates to create an Event Hub.

  • Azure Storage Account: You'll need to have an Azure Storage Account and create an Azure Blob Storage block container to store the checkpoint data in blobs. You may follow the guide on creating an Azure Block Blob Storage Account.

Install the package

$ pip install azure-eventhub-checkpointstoreblob-aio

Key concepts

Checkpointing

Checkpointing is a process by which readers mark or commit their position within a partition's event sequence. Checkpointing is the responsibility of the consumer and occurs on a per-partition basis within a consumer group: for each consumer group, each partition reader must keep track of its current position in the event stream and can inform the service when it considers the data stream complete. If a reader disconnects from a partition, when it reconnects it begins reading at the checkpoint previously submitted by the last reader of that partition in that consumer group. When the reader connects, it passes the offset to the Event Hub to specify the location at which to start reading. In this way, checkpointing both marks events as "complete" for downstream applications and provides resiliency when a failover occurs between readers running on different machines. It is also possible to return to older data by specifying a lower offset. Through this mechanism, checkpointing enables both failover resiliency and event stream replay.

Offsets & sequence numbers

Both the offset and the sequence number refer to the position of an event within a partition; you can think of them as a client-side cursor. The offset is a byte numbering of the event. Within a partition, each event includes an offset, a sequence number, and the timestamp of when it was enqueued. The offset or sequence number enables an event consumer (reader) to specify a point in the event stream from which to begin reading events; you can also specify a timestamp so that you receive only events enqueued after that timestamp. Consumers are responsible for storing their own offset values outside of the Event Hubs service.
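This position metadata is exposed on each received event. The handler below is a minimal sketch that collects these attributes (offset, sequence_number, and enqueued_time are attributes of azure.eventhub.EventData) instead of processing the event:

```python
# Sketch: inspect an event's position metadata inside the handler.
async def on_event(partition_context, event):
    position = {
        "offset": event.offset,                    # byte offset within the partition
        "sequence_number": event.sequence_number,  # increases monotonically per partition
        "enqueued_time": event.enqueued_time,      # UTC time the service accepted the event
    }
    # A consumer could persist `position` itself instead of
    # relying on a checkpoint store.
    return position
```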

Examples

Create an EventHubConsumerClient

The easiest way to create an EventHubConsumerClient is to use a connection string.

from azure.eventhub.aio import EventHubConsumerClient
eventhub_client = EventHubConsumerClient.from_connection_string(
    "my_eventhub_namespace_connection_string",
    "my_consumer_group",
    eventhub_name="my_eventhub"
)

For other ways of creating an EventHubConsumerClient, refer to the EventHubs library documentation for more details.

Consume events using a BlobCheckpointStore to do checkpoint

import asyncio

from azure.eventhub.aio import EventHubConsumerClient
from azure.eventhub.extensions.checkpointstoreblobaio import BlobCheckpointStore

connection_str = '<< CONNECTION STRING FOR THE EVENT HUBS NAMESPACE >>'
consumer_group = '<< CONSUMER GROUP >>'
eventhub_name = '<< NAME OF THE EVENT HUB >>'
storage_connection_str = '<< CONNECTION STRING OF THE STORAGE >>'
container_name = '<< STORAGE CONTAINER NAME >>'

async def on_event(partition_context, event):
    # Put your code here.
    await partition_context.update_checkpoint(event)  # Or update_checkpoint every N events for better performance.

async def main():
    checkpoint_store = BlobCheckpointStore.from_connection_string(
        storage_connection_str,
        container_name
    )
    client = EventHubConsumerClient.from_connection_string(
        connection_str,
        consumer_group,
        eventhub_name=eventhub_name,
        checkpoint_store=checkpoint_store,
    )

    async with client:
        await client.receive(on_event)

if __name__ == '__main__':
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())

Use BlobCheckpointStore with a different version of Azure Storage Service API

Some environments have different versions of Azure Storage Service API. BlobCheckpointStore by default uses the Storage Service API version 2019-07-07. To use it against a different version, specify api_version when you create the BlobCheckpointStore object.

Troubleshooting

General

Enabling logging is helpful for troubleshooting.

Logging

  • Enable azure.eventhub.extensions.checkpointstoreblobaio logger to collect traces from the library.
  • Enable azure.eventhub logger to collect traces from the main azure-eventhub library.
  • Enable azure.eventhub.extensions.checkpointstoreblobaio._vendor.storage logger to collect traces from the vendored Azure Storage Blob library.
  • Enable uamqp logger to collect traces from the underlying uAMQP library.
  • Enable AMQP frame level trace by setting logging_enable=True when creating the client.
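For example, a standard-library logging setup along these lines enables the loggers listed above (the DEBUG level and handler format are illustrative choices):

```python
import logging
import sys

# Route library traces to stderr. DEBUG produces verbose
# per-operation traces; use INFO for less noise.
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(
    logging.Formatter("%(asctime)s %(name)s %(levelname)s %(message)s")
)

for name in (
    "azure.eventhub",
    "azure.eventhub.extensions.checkpointstoreblobaio",
    "uamqp",
):
    logger = logging.getLogger(name)
    logger.setLevel(logging.DEBUG)
    logger.addHandler(handler)
```

AMQP frame-level trace is additionally enabled by passing logging_enable=True when creating the client.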

Next steps

Examples

Documentation

Reference documentation is available here

Provide Feedback

If you encounter any bugs or have suggestions, please file an issue in the Issues section of the project.

Contributing

This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit https://cla.microsoft.com.

When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.

This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.

Release History

1.1.2 (2021-01-11)

Bug fixes

  • Fixed a bug where BlobCheckpointStore.list_ownership and BlobCheckpointStore.list_checkpoints raised a KeyError due to reading the empty metadata of a parent node when working with Data Lake-enabled Blob Storage.

1.1.1 (2020-09-08)

Bug fixes

  • Fixed a bug that could gradually slow down retrieving checkpoint data from the storage blob when the storage account has "File share soft delete" enabled. #12836

1.1.0 (2020-03-09)

New features

  • Param api_version of BlobCheckpointStore now supports older versions of Azure Storage Service API.

1.0.0 (2020-01-13)

Stable release. No new features or API changes.

1.0.0b6 (2019-12-04)

Breaking changes

  • Renamed BlobPartitionManager to BlobCheckpointStore.
  • Constructor of BlobCheckpointStore has been updated to take the storage container details directly rather than an instance of ContainerClient.
  • A from_connection_string constructor has been added for Blob Storage connection strings.
  • Module blobstoragepmaio is now internal, all imports should be directly from azure.eventhub.extensions.checkpointstoreblobaio.
  • BlobCheckpointStore now has a close() function for shutting down an HTTP connection pool, additionally the object can be used in a context manager to manage the connection.

1.0.0b5 (2019-11-04)

New features

  • Added method list_checkpoints, which lists all checkpoints under a given Event Hubs namespace, Event Hub name, and consumer group.

1.0.0b4 (2019-10-09)

This release has trivial internal changes only. No feature changes.

1.0.0b1 (2019-09-10)

New features

  • Added BlobPartitionManager, which uses Azure Blob Storage block blobs to store EventProcessor checkpoint data.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

Built Distribution

File details

Details for the file azure-eventhub-checkpointstoreblob-aio-1.1.2.zip.

File metadata

  • Download URL: azure-eventhub-checkpointstoreblob-aio-1.1.2.zip
  • Upload date:
  • Size: 312.3 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.3.0 pkginfo/1.6.1 requests/2.25.1 setuptools/49.2.1 requests-toolbelt/0.9.1 tqdm/4.56.0 CPython/3.9.1

File hashes

Hashes for azure-eventhub-checkpointstoreblob-aio-1.1.2.zip:

  • SHA256: bbab4d0b5bfcf94e80808b2590c7a05a435a188606da4802c15835d37d21c332
  • MD5: fdb8df2644d336a6f4b8aeb0b56e376e
  • BLAKE2b-256: ae692756ba657e058c0f5c5519935ab14656864ab03f610fca2be6f0a204642b

See more details on using hashes here.

File details

Details for the file azure_eventhub_checkpointstoreblob_aio-1.1.2-py3-none-any.whl.

File metadata

File hashes

Hashes for azure_eventhub_checkpointstoreblob_aio-1.1.2-py3-none-any.whl:

  • SHA256: d59054a54cc5b5e299c19d35f289fe83970b6c2ea62f8a6d1816f0277e2628ae
  • MD5: 32609bdcac9c889dc921345e3f15822b
  • BLAKE2b-256: df2f0c9553dea518cb23a549bcb36024d1dd7dc9e663ce88793ff762ef5392f8

See more details on using hashes here.
