Fast, easy to use client for EventStore

Photon-pump is a fast, user-friendly client for Eventstore.

It emphasises a modular design, hidden behind an interface that’s written for humans.

Installation

Photon-pump is available on the cheese shop (PyPI).

pip install photon-pump

You will need to install lib-protobuf 3.2.0 or above.

Documentation is available on Read the Docs.

Basic Usage

Working with connections

Usually you will want to interact with photon pump via the photonpump.Client class. The photonpump.Client is a full-duplex client that can handle many requests and responses in parallel. It is recommended that you create a single connection per application.

First you will need to create a connection:

>>> import asyncio
>>> from photonpump import connect
>>>
>>> loop = asyncio.get_event_loop()
>>>
>>> async with connect(loop=loop) as c:
>>>     await c.ping()

The photonpump.connect function returns an async context manager so that the connection will be automatically closed when you are finished. Alternatively you can create a client and manage its lifetime yourself.

>>> import asyncio
>>> from photonpump import connect
>>>
>>> loop = asyncio.get_event_loop()
>>>
>>> client = connect(loop=loop)
>>> await client.connect()
>>> await client.ping()
>>> await client.close()
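
Because the client is full-duplex, a single connection can serve many requests in parallel. As a minimal sketch (built only from the ping call shown above, plus asyncio.gather from the standard library):

>>> import asyncio
>>> from photonpump import connect
>>>
>>> async def ping_many(loop):
>>>     async with connect(loop=loop) as c:
>>>         # All three pings are multiplexed over the one TCP connection;
>>>         # the client matches each response to its request.
>>>         await asyncio.gather(c.ping(), c.ping(), c.ping())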

Reading and Writing single events

A connection can be used for both reading and writing events. You can publish a single event with the photonpump.Client.publish_event method:

>>> # When publishing events, you must provide the stream name.
>>> stream = 'ponies'
>>> event_type = 'PonyJumped'
>>>
>>> result = await conn.publish_event(stream, event_type, body={
>>>     'Pony': 'Derpy Hooves',
>>>     'Height': 10,
>>>     'Distance': 13
>>>     })

We can fetch a single event with the complementary photonpump.Client.get_event method if we know its event number and the stream where it was published:

>>> event_number = result.last_event_number
>>> event = await conn.get_event(stream, event_number)

Assuming that your event was published as json, you can load the body with the photonpump.messages.Event.json method:

>>> data = event.json()
>>> assert data['Pony'] == 'Derpy Hooves'

Putting these pieces together, here is a set of small, self-contained examples:

import asyncio

import photonpump
from photonpump.messages import NewEvent, StreamDirection


async def write_an_event():
    async with photonpump.connect() as conn:
        await conn.publish_event('pony_stream', 'pony.jumped', body={
            'name': 'Applejack',
            'height_m': 0.6
        })


async def read_an_event(conn):
    event = await conn.get_event('pony_stream', 1)
    print(event)


async def write_two_events(conn):
    await conn.publish('pony_stream', [
        NewEvent('pony.jumped', body={
            'name': 'Rainbow Colossus',
            'height_m': 0.6
        }),
        NewEvent('pony.jumped', body={
            'name': 'Sunshine Carnivore',
            'height_m': 1.12
        })
    ])


async def read_two_events(conn):
    events = await conn.get('pony_stream', max_count=2, from_event=0)
    print(events[0])


# read_two_events, backwards: walk the stream in reverse.
async def stneve_owt_daer(conn):
    events = await conn.get('pony_stream', direction=StreamDirection.backward, max_count=2)
    print(events[0])


async def ticker(delay):
    i = 0
    while True:
        yield NewEvent('tick', body={'tick': i})
        i += 1
        await asyncio.sleep(delay)


async def write_an_infinite_number_of_events(conn):
    await conn.publish('ticker_stream', ticker(1000))


async def read_an_infinite_number_of_events(conn):
    async for event in conn.iter('ticker_stream'):
        print(event)

Reading and Writing in Batches

We can read and write several events in one request using the photonpump.Client.get and photonpump.Client.publish methods of our photonpump.Client. The photonpump.messages.NewEvent function is a helper for constructing events.

>>> stream = 'more_ponies'
>>> events = [
>>>     NewEvent('PonyJumped',
>>>              data={
>>>                 'Pony': 'Peculiar Hooves',
>>>                 'Height': 9,
>>>                 'Distance': 13
>>>              }),
>>>     NewEvent('PonyJumped',
>>>              data={
>>>                 'Pony': 'Sparkly Hooves',
>>>                 'Height': 12,
>>>                 'Distance': 12
>>>              }),
>>>     NewEvent('PonyJumped',
>>>              data={
>>>                 'Pony': 'Sparkly Hooves',
>>>                 'Height': 11,
>>>                 'Distance': 14
>>>              })]
>>>
>>> await conn.publish(stream, events)

We can get events from a stream in slices by setting the from_event_number and max_count arguments. We can read events from either the front or back of the stream.

>>> from photonpump.messages import StreamDirection
>>>
>>> all_events = await conn.get(stream)
>>> assert len(all_events) == 3
>>>
>>> first_event = (await conn.get(stream, max_count=1))[0].json()
>>> assert first_event['Pony'] == 'Peculiar Hooves'
>>>
>>> second_event = (await conn.get(stream, max_count=1, from_event_number=1))[0].json()
>>> assert second_event['Pony'] == 'Sparkly Hooves'
>>>
>>> reversed_events = await conn.get(stream, direction=StreamDirection.backward)
>>> assert len(reversed_events) == 3
>>> assert reversed_events[2].json() == first_event

Reading with Asynchronous Generators

We can page through a stream manually by using the from_event_number argument of photonpump.Client.get, but it’s simpler to use the photonpump.Client.iter method, which returns an asynchronous generator. By default, iter will read from the beginning to the end of a stream, and then stop. As with get, you can set the photonpump.messages.StreamDirection, or use from_event to control the result:

>>> async for event in conn.iter(stream):
>>>     print(event)
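
As a sketch of those options (the event number 2 here is an arbitrary placeholder), you can combine direction and from_event to walk a stream backwards from a known point:

>>> async for event in conn.iter(stream, direction=StreamDirection.backward, from_event=2):
>>>     print(event)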

This extends to asynchronous comprehensions:

>>> async def feet_to_metres(jumps):
>>>     async for jump in jumps:
>>>         data = jump.json()
>>>         data['Height'] = data['Height'] * 0.3048
>>>         data['Distance'] = data['Distance'] * 0.3048
>>>         yield data
>>>
>>> jumps = (event async for event in conn.iter('ponies')
>>>          if event.type == 'PonyJumped')
>>> async for jump in feet_to_metres(jumps):
>>>     print(jump)

Persistent Subscriptions

Sometimes we want to watch a stream continuously and be notified when a new event occurs. Eventstore supports volatile and persistent subscriptions for this use case.

A persistent subscription stores its state on the server. When your application restarts, you can connect to the subscription again and continue where you left off. Multiple clients can connect to the same persistent subscription to support competing consumer scenarios. To support these features, persistent subscriptions have to run against the master node of an Eventstore cluster.

Firstly, we need to create the subscription, using photonpump.Client.create_subscription:

>>> async def create_subscription(subscription_name, stream_name, conn):
>>>     await conn.create_subscription(subscription_name, stream_name)

Once we have a subscription, we can connect to it with photonpump.Client.connect_subscription to begin receiving events. A persistent subscription exposes an events property, which acts like an asynchronous iterator.

>>> async def read_events_from_subscription(subscription_name, stream_name, conn):
>>>     subscription = await conn.connect_subscription(subscription_name, stream_name)
>>>     async for event in subscription.events:
>>>         print(event)
>>>         await subscription.ack(event)

Eventstore will send each event to one consumer at a time. When you have handled the event, you must acknowledge receipt. Eventstore will resend messages that are unacknowledged.
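
Because the subscription state lives on the server, several workers can share one subscription and compete for events. A minimal sketch of two competing consumers, built only from the connect_subscription, events, and ack calls shown above (the subscription and stream names are placeholders):

>>> async def worker(name, conn):
>>>     # Both workers attach to the same persistent subscription;
>>>     # Eventstore sends each event to one consumer at a time.
>>>     subscription = await conn.connect_subscription('pony_subscription', 'pony_stream')
>>>     async for event in subscription.events:
>>>         print(name, event)
>>>         # Unacknowledged events will be resent, so ack when done.
>>>         await subscription.ack(event)
>>>
>>> async def run_competing_consumers(conn):
>>>     await asyncio.gather(worker('one', conn), worker('two', conn))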

Volatile Subscriptions

In a Volatile Subscription, state is stored by the client. When your application restarts, you must re-subscribe to the stream. There is no support in Eventstore for competing consumers to a volatile subscription. Volatile subscriptions can run against any node in a cluster.

Volatile subscriptions do not support event acknowledgement.

>>> async def subscribe_to_stream(stream, conn):
>>>     subscription = await conn.subscribe_to(stream)
>>>     async for event in subscription.events:
>>>         print(event)

High-Availability Scenarios

Eventstore supports an HA-cluster deployment topology. In this scenario, Eventstore runs a master node and multiple slaves. Some operations, particularly persistent subscriptions and projections, are handled only by the master node. To connect to an HA-cluster and automatically find the master node, photonpump supports cluster discovery.

The cluster discovery interrogates Eventstore gossip to find the active master. You can provide the IP address of a machine in the cluster, or a DNS name that resolves to some members of the cluster, and photonpump will discover the others.

>>> async def connect_to_cluster(hostname_or_ip, port=2113):
>>>     async with connect(discovery_host=hostname_or_ip, discovery_port=port) as c:
>>>         await c.ping()

If you provide both a host and discovery_host, photonpump will prefer discovery.
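
For instance, in this hypothetical sketch the client would use gossip discovery and ignore the static address:

>>> async def prefer_discovery():
>>>     # discovery_host takes precedence over host when both are given.
>>>     async with connect(host='127.0.0.1', discovery_host='eventstore.internal') as c:
>>>         await c.ping()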

Debugging

If you want to step through code that uses photonpump, it’s helpful to be aware that Event Store’s TCP API (which photonpump uses) makes use of a ‘heartbeat’ to ensure that connections are not left open. This means that if you’re sitting at a debugger (e.g. pdb) prompt – and therefore not running the event loop for tens of seconds at a time – you’ll find that you get disconnected. To prevent that, you can run Event Store with its heartbeat timeouts set to high values, e.g. via a custom Dockerfile.

Development

We use make to manage the common development tasks. Check Makefile for all available options. The most important commands are:

make init

Installs requirements.txt (you’ll need a virtualenv)

make eventstore_docker

Starts eventstore in docker

make all_tests

Runs all of the tests in your virtualenv (requires a running eventstore instance on localhost:1113)

make tox

Runs the tests against all supported Python versions

Changelog

## [0.7.2] - 2019-01-29

Fixed: Iterators restart at the last processed event number when the connection drops.

Refactor: MessageReader returns a TcpCommand in the header rather than an int.

Chore: Removed unused dependencies.

## [0.7.1] - 2019-01-29

Fixed: Volatile subscriptions fail to restart when the connection is recreated.

## [0.7.0] - 2019-01-29

Fixed: Volatile subscriptions fail to yield all events for a projection. This was caused by a confusion between the linked event and the original event.

### Breaking Changes
  • Event.original_event is now Event.received_event because the original name was unclear.

  • Event.event_number is now equal to the value of received_event.event_number, not the value of the linked event number.

## [0.6.0.1] - 2019-01-03

Added automagic deployment to PyPI with Travis and Versioneer.

## [0.6.0] - 2018-12-21

Added a batch size parameter to the subscribe_to method.

## [0.6.0-alpha-5] - 2018-11-09

Fixed: The CreatePersistentSubscription command was never cleaned up after success.

## [0.6.0-alpha-4] - 2018-10-05

Fixed: We now handle deleted messages correctly.

## [0.6.0-alpha-2] - 2018-09-17

Discovery now supports “selectors” to control how we pick a node from gossip.

## [0.6.0-alpha-1] - 2018-09-14

Added support for catch-up subscriptions.

## [0.5] - 2018-04-27

### Breaking changes
  • Dropped the ConnectionContextManager class.

  • The “Connection” class is now “Client” and acts as a context manager in its own right.

  • Rewrote the connection module completely.

  • PersistentSubscriptions no longer use a maxsize parameter when creating a streaming iterator. This is a workaround for https://github.com/madedotcom/photon-pump/issues/49

## [0.4] - 2018-04-27

### Fixes
  • Added cluster discovery for HA scenarios.

## [0.3] - 2018-04-11

### Fixes
  • iter properly supports iterating a stream in reverse.

### Breaking change
  • published_event reversed the order of type and stream.

[0.7.2]: https://github.com/madedotcom/photon-pump/compare/v0.7.1..v0.7.2
[0.7.1]: https://github.com/madedotcom/photon-pump/compare/v0.7.0..v0.7.1
[0.7.0]: https://github.com/madedotcom/photon-pump/compare/v0.6.0.1..v0.7.0
[0.6.0.1]: https://github.com/madedotcom/photon-pump/compare/v0.6.0..v0.6.0.1
[0.6.0]: https://github.com/madedotcom/photon-pump/compare/v0.6.0-alpha-5..v0.6.0
[0.6.0-alpha-5]: https://github.com/madedotcom/photon-pump/compare/v0.6.0-alpha-4..v0.6.0-alpha-5
[0.6.0-alpha-4]: https://github.com/madedotcom/photon-pump/compare/v0.6.0-alpha-2..v0.6.0-alpha-4
[0.6.0-alpha-2]: https://github.com/madedotcom/photon-pump/compare/v0.6.0-alpha-1..v0.6.0-alpha-2
[0.6.0-alpha-1]: https://github.com/madedotcom/photon-pump/compare/v0.5.0..v0.6.0-alpha-1
[0.5]: https://github.com/madedotcom/photon-pump/compare/v0.4.0..v0.5.0
[0.4]: https://github.com/madedotcom/photon-pump/compare/v0.3.0..v0.4.0
[0.3]: https://github.com/madedotcom/photon-pump/compare/v0.2.5..v0.3
[0.2.5]: https://github.com/madedotcom/photon-pump/compare/v0.2.4..v0.2.5
