Low-level, multiprocessing-based AWS Kinesis producer & consumer library

Project description


The official Kinesis Python library requires the use of Amazon’s “MultiLangDaemon”, a Java executable that operates by piping messages over STDIN/STDOUT.

ಠ_ಠ

While a single implementation of the client library makes sense from a maintenance standpoint for the team responsible for the KCL, requiring the JRE to be installed and accepting the overhead of every message passing through both Java and Python is not desirable for teams working in environments without Java.

This is a pure-Python implementation of Kinesis producer and consumer classes that leverages Python’s multiprocessing module to spawn a process per shard, sending the messages back to the main process via a Queue. It depends only on boto3 (the AWS SDK), offspring (subprocess management) and six (py2/py3 compatibility).
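
In rough terms the fan-in looks like this. This is an illustrative sketch of the pattern, not the library's actual internals; the shard IDs and record payloads are simulated:

import multiprocessing

def shard_reader(shard_id, queue):
    # In the real library this loop would call GetRecords for a single shard;
    # here we just simulate a few records per shard.
    for seq in range(3):
        queue.put({'shard': shard_id, 'data': 'record-{0}'.format(seq)})

if __name__ == '__main__':
    queue = multiprocessing.Queue()
    workers = [
        multiprocessing.Process(target=shard_reader, args=(shard_id, queue))
        for shard_id in ('shardId-000000000000', 'shardId-000000000001')
    ]
    for worker in workers:
        worker.start()
    for _ in range(6):  # 2 shards x 3 simulated records each
        print(queue.get())
    for worker in workers:
        worker.join()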

The library also includes a DynamoDB state backend that allows for multi-instance consumption of multiple shards, and stores the checkpoint data so that you can resume where you left off in a stream following restarts or crashes.

Overview

All of the functionality is wrapped in two classes: KinesisConsumer and KinesisProducer.

Consumer

The consumer works by launching a process per shard in the stream and then implementing the Python iterator protocol.

from kinesis.consumer import KinesisConsumer

consumer = KinesisConsumer(stream_name='my-stream')
for message in consumer:
    print "Received message: {0}".format(message)

Messages received from each of the shard processes are passed back to the main process through a Python Queue where they are yielded for processing. Messages are not strictly ordered, but this is a property of Kinesis and not this implementation.
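
Assuming each yielded message is the raw record dict from boto3’s GetRecords call (so it carries Data, SequenceNumber and PartitionKey keys; this shape is an assumption, not something the README guarantees), decoding a payload looks like this:

from kinesis.consumer import KinesisConsumer

consumer = KinesisConsumer(stream_name='my-stream')
for message in consumer:
    # Assumption: message is a boto3 GetRecords record dict; 'Data' holds the
    # raw bytes that were put on the stream.
    payload = message['Data'].decode('utf-8')
    print("{0}: {1}".format(message['SequenceNumber'], payload))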

Locking, Checkpointing & Multi-instance consumption

When deploying an application with multiple instances, DynamoDB can be leveraged to coordinate which instance is responsible for which shard, as it is not desirable to have each instance process all records.

With or without multiple nodes, it is also desirable to checkpoint the stream as you process records so that you can pick up where you left off if you restart the consumer.

A “state” backend that leverages DynamoDB allows consumers to coordinate which node is responsible for which shards and where in the stream we are currently reading from.

from kinesis.consumer import KinesisConsumer
from kinesis.state import DynamoDB

consumer = KinesisConsumer(stream_name='my-stream', state=DynamoDB(table_name='my-kinesis-state'))
for message in consumer:
    print "Received message: {0}".format(message)

The DynamoDB table must already exist and must have a HASH key named shard, of type S (string).
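
If the table does not exist yet, a minimal boto3 call that satisfies those requirements looks like this (a sketch; the throughput values are illustrative):

import boto3

dynamodb = boto3.client('dynamodb')
dynamodb.create_table(
    TableName='my-kinesis-state',
    # The state backend requires a HASH key named 'shard' of type S (string).
    KeySchema=[{'AttributeName': 'shard', 'KeyType': 'HASH'}],
    AttributeDefinitions=[{'AttributeName': 'shard', 'AttributeType': 'S'}],
    ProvisionedThroughput={'ReadCapacityUnits': 1, 'WriteCapacityUnits': 1},
)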

Producer

The producer works by launching a single process for accumulation and publishing to the stream.

from kinesis.producer import KinesisProducer

producer = KinesisProducer(stream_name='my-stream')
producer.put('Hello World from Python')

By default the producer flushes its buffer after 500ms, or once the 1 MB maximum record size is reached, whichever occurs first. You can change the buffer time when you instantiate the producer via the buffer_time kwarg, specified in seconds. For example, if your primary concern is budget and not performance you could accumulate over a 60 second duration:

producer = KinesisProducer(stream_name='my-stream', buffer_time=60)

The background process takes precautions to ensure that any accumulated messages are flushed to the stream at shutdown, via signal handlers and the Python atexit module, but it is not fully durable: if you send kill -9 to the producer process, any accumulated messages will be lost.
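
The general pattern looks roughly like this. This is an illustrative sketch, not the library’s internal code; flush_buffer is a hypothetical stand-in for draining the accumulation buffer:

import atexit
import signal
import sys

_flushed = False

def flush_buffer():
    # Hypothetical stand-in: drain any accumulated records to the stream.
    # Guarded so the atexit hook and a signal handler don't both flush.
    global _flushed
    if not _flushed:
        _flushed = True
        print('flushing accumulated records')

def _handle_signal(signum, frame):
    flush_buffer()
    sys.exit(0)

atexit.register(flush_buffer)                  # normal interpreter exit
signal.signal(signal.SIGTERM, _handle_signal)  # kill <pid>
signal.signal(signal.SIGINT, _handle_signal)   # Ctrl-C
# SIGKILL (kill -9) cannot be caught, so anything still buffered is lost.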

AWS Permissions

By default the producer, consumer & state classes all use the default boto3 credentials chain. If you wish to alter this, you can instantiate your own boto3.Session object and pass it in via the boto3_session keyword argument of KinesisProducer, KinesisConsumer or DynamoDB.
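
For example, to consume with credentials from a named profile (the profile and region values here are placeholders):

import boto3
from kinesis.consumer import KinesisConsumer

# Build a session against a specific profile and region, then hand it to the
# consumer instead of relying on the default credentials chain.
session = boto3.Session(profile_name='my-profile', region_name='us-east-1')
consumer = KinesisConsumer(stream_name='my-stream', boto3_session=session)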
