SQS Workers

How can I use it?

Unless you are part of the Doist development team, you most likely don't need it. It's an opinionated tool, built out of our own internal needs, and probably provides little value to outside developers.

Queue processors are in abundance (see http://queues.io/ for examples), and there is no shortage of SQS queue processors on PyPI, so please don't pin your hopes on this particular implementation.

Got it, but how can I start using it anyway?

Install the package with

pip install sqs-workers

Configure your boto3 library to provide access credentials for your installation with something like this:

aws configure

Don't forget to set your preferred AWS region.

Then you will start managing two systems (most likely, from the same codebase): one of them adds messages to the queue, and the other executes them.

from sqs_workers import SQSEnv, create_standard_queue

# This environment will use AWS credentials from the ~/.aws/ directory
sqs = SQSEnv()

# Create a new queue.
# Note that you can use the AWS web interface for the same action as well; it
# provides more options. You only need to do this once.
create_standard_queue(sqs, "emails")

# Get the queue object
queue = sqs.queue("emails")

# Register a queue processor
@queue.processor("send_email")
def send_email(to, subject, body):
    print(f"Sending email {subject} to {to}")

Then there are two ways of adding tasks to the queue. Classic (aka "explicit"):

queue.add_job("send_email", to="user@example.com", subject="Hello world", body="hello world")

And the "Celery way" (we mimic the Celery API to some extent)

send_email.delay(to="user@example.com", subject="Hello world", body="hello world")

To process the queue, you have to run workers manually. Create a new file which contains the definition of the sqs object and registers all processors (most likely, by importing the necessary modules from your project), and then start processing the queue:

from sqs_workers import SQSEnv
sqs = SQSEnv()
...
sqs.queue('emails').process_queue()

In production, we usually don't handle multiple queues in the same process, but for the development environment it's easier to handle all the queues at once with

sqs.process_queues()

Serialization

There are two serializers: JSON and pickle.
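Which serializer fits depends on what you pass as job kwargs. The standalone sketch below (plain standard library, not the sqs-workers API) illustrates the trade-off: JSON only handles basic types, while pickle round-trips richer Python objects.

import json
import pickle
from datetime import datetime

# Job kwargs made of basic types survive both serializers.
kwargs = {"to": "user@example.com", "subject": "Hello world"}
assert json.loads(json.dumps(kwargs)) == kwargs
assert pickle.loads(pickle.dumps(kwargs)) == kwargs

# Richer Python objects only survive pickle; json.dumps raises TypeError here.
rich_kwargs = {"to": "user@example.com", "send_at": datetime(2019, 9, 20)}
assert pickle.loads(pickle.dumps(rich_kwargs)) == rich_kwargs
try:
    json.dumps(rich_kwargs)
except TypeError:
    print("datetime is not JSON-serializable; use pickle or convert it first")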

Baked tasks

You can create so-called "baked async tasks": entities which, besides the task itself, contain the arguments to call the task with.

Think of baked tasks as a lightweight version of Celery signatures:

task = send_email.bake(to='user@example.com', subject='Hello world', body='hello world')
task.delay()

Is the same as

send_email.delay(to='user@example.com', subject='Hello world', body='hello world')

Batching

If you have many tasks to enqueue, it may be more efficient to use batching when adding them:

# Classic ("explicit") API
with queue.add_batch():
    queue.add_job("send_email", to="user1@example.com", subject="Hello world", body="hello world")
    queue.add_job("send_email", to="user2@example.com", subject="Hello world", body="hello world")

# "Celery way"
with send_email.batch():
    send_email.delay(to="user1@example.com", subject="Hello world", body="hello world")
    send_email.delay(to="user2@example.com", subject="Hello world", body="hello world")

When batching is enabled:

  • tasks are added to SQS in batches of 10, reducing the number of AWS operations
  • it is not possible to get the task MessageId, as it is not known until the batch is sent
  • care has to be taken about message size, as SQS limits both the individual message size and the maximum total payload size to 256 KB (see the size-check sketch below)
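A rough pre-flight check on the caller's side can catch oversized payloads before they hit a batch. This is a standalone sketch (assert_fits_sqs is a hypothetical helper, not part of sqs-workers), and the JSON estimate ignores the library's own serialization overhead and message attributes:

import json

SQS_MAX_PAYLOAD_BYTES = 256 * 1024  # SQS limit for a message and for a whole batch

def assert_fits_sqs(**job_kwargs):
    # Rough estimate of the serialized size of the job kwargs.
    size = len(json.dumps(job_kwargs).encode("utf-8"))
    if size > SQS_MAX_PAYLOAD_BYTES:
        raise ValueError(f"Job payload is {size} bytes, over the 256 KB SQS limit")

assert_fits_sqs(to="user@example.com", subject="Hello world", body="hello world")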

Synchronous task execution

In Celery, you can run any task synchronously if you just call it as a function with arguments. Our AsyncTask raises a RuntimeError for this case.

send_email(to='user@example.com', subject='Hello world', body='hello world')
...
RuntimeError: Async task email.send_email called synchronously (probably,
by mistake). Use either AsyncTask.run(...) to run the task synchronously
or AsyncTask.delay(...) to add it to the queue.

If you want to run a task synchronously, use the run() method of the task.

send_email.run(to='user@example.com', subject='Hello world', body='hello world')

FIFO queues

FIFO queues can be created with create_fifo_queue and must have a name that ends with ".fifo".

from sqs_workers import SQSEnv, create_fifo_queue
sqs = SQSEnv()

create_fifo_queue(sqs, 'emails_dead.fifo')
create_fifo_queue(sqs, 'emails.fifo',
    redrive_policy=sqs.redrive_policy('emails_dead.fifo', 3)
)

Unless the content_based_deduplication flag is set, every message has to be sent with a _deduplication_id attribute. By default, all messages belong to the same message group, "default", but you can change it with _group_id.

sqs.queue("emails.fifo").add_job(
    'send_email', _deduplication_id=subject, _group_id=email, **kwargs)

More about FIFO queues can be found in the AWS documentation.

Exception processing

If task processing ends with an exception, the error is logged and the task is returned to the queue after a while. The exact behavior is defined by the queue settings.
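For example, pairing a backoff policy with a dead-letter queue (both described below) bounds how many times a failing task is retried. A minimal sketch using only APIs shown elsewhere in this README:

from sqs_workers import SQSEnv, create_standard_queue

sqs = SQSEnv()
create_standard_queue(sqs, "emails_dead")
create_standard_queue(
    sqs, "emails", redrive_policy=sqs.redrive_policy("emails_dead", 3)
)
queue = sqs.queue("emails")

@queue.processor("send_email")
def send_email(to, subject, body):
    # Any exception here is logged; the message becomes visible again later
    # and, after 3 + 1 failed attempts, is moved to the "emails_dead" queue.
    raise RuntimeError("SMTP server unavailable")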

Custom processors

You can define your own processor if you need to perform some specific actions before or after executing a particular task.

Example of a custom processor:

from sqs_workers import SQSEnv
from sqs_workers.processors import Processor

class CustomProcessor(Processor):
    def process(self, job_kwargs):
        print(f'Processing {self.queue_name}.{self.job_name} with {job_kwargs}')
        super(CustomProcessor, self).process(job_kwargs)

sqs = SQSEnv(processor_maker=CustomProcessor)

Working with contexts

Context is implicitly passed to the worker via the job message and can be used for logging or profiling purposes.

Usage example.

queue = sqs.queue("q1")

@queue.processor('hello_world', pass_context=True)
def hello_world(username=None, context=None):
    print(f'Hello {username} from {context["remote_addr"]}')

with sqs.context(remote_addr='127.0.0.1'):
    hello_world.delay(username='Foo')

queue.process_batch()

Alternatively, you can set a context like this:

sqs.context['remote_addr'] = '127.0.0.1'
hello_world.delay(username='Foo')

And then, when the context needs to be cleared:

sqs.context.clear()

In a web application, you usually call it at the end of processing the web request.
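For example, with Flask you could populate the context when a request starts and clear it in a teardown handler. The app wiring below is a hypothetical sketch; only sqs.context comes from sqs-workers:

from flask import Flask, request
from sqs_workers import SQSEnv

app = Flask(__name__)
sqs = SQSEnv()

@app.before_request
def set_sqs_context():
    # Every task enqueued while handling this request carries the client address.
    sqs.context["remote_addr"] = request.remote_addr

@app.teardown_request
def clear_sqs_context(exc=None):
    sqs.context.clear()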

Automatic applying of the context for all tasks

Instead of dealing with the context inside every processing function, you can handle it in processors by subclassing them:

import os
from sqs_workers import SQSEnv
from sqs_workers.processors import Processor

class CustomProcessor(Processor):
    def process(self, job_kwargs, job_context):
        os.environ['REMOTE_ADDR'] = job_context['remote_addr']
        super(CustomProcessor, self).process(job_kwargs, job_context)
        os.environ.pop('REMOTE_ADDR', None)

sqs = SQSEnv(
    processor_maker=CustomProcessor,
)

Raw queues

Raw queues can have only one processor, which should be a function accepting the message as its only argument.

Raw queues are helpful for processing messages added to SQS from external sources, such as CloudWatch events.

You start the processor the same way, creating a new standard queue if needed.

from sqs_workers import SQSEnv, create_standard_queue
sqs = SQSEnv()
create_standard_queue(sqs, 'cron')

Then you get the queue, providing a queue_maker parameter to create a queue of the necessary type, and define a processor for it:

from sqs_workers import RawQueue

cron = sqs.queue('cron', RawQueue)

@cron.raw_processor()
def processor(message):
    print(message.body)

Then start processing your queue as usual:

cron.process_queue()

You can also send raw messages to the queue, but this is probably less useful:

cron.add_raw_job("Hello world")

Processing Messages from CloudWatch

By default, the message body sent by the CloudWatch scheduler has the following JSON structure:

{
  "version": "0",
  "id": "a9a10406-9a1f-0ddc-51ae-08db551fac42",
  "detail-type": "Scheduled Event",
  "source": "aws.events",
  "account": "NNNNNNNNNN",
  "time": "2019-09-20T09:19:56Z",
  "region": "eu-west-1",
  "resources": ["arn:aws:events:eu-west-1:NNNNNNNNNN:rule/Playground"],
  "detail": {}
}

Headers of the message:

{
    'SenderId': 'AIDAJ2E....',
    'ApproximateFirstReceiveTimestamp': '1568971264367',
    'ApproximateReceiveCount': '1',
    'SentTimestamp': '1568971244845',
}

You can pass any valid JSON as a message, though, and it will be passed as-is to the message body. Something like this:

{ "message": "Hello world" }

Dead-letter queues and redrive

When creating a queue, you can set a fallback dead-letter queue and a redrive policy, which can look like this:

from sqs_workers import SQSEnv, create_standard_queue
sqs = SQSEnv()

create_standard_queue(sqs, 'emails_dead')
create_standard_queue(sqs, 'emails',
    redrive_policy=sqs.redrive_policy('emails_dead', 3)
)

It means "move the message to the email_deadletters queue after four (3 + 1) failed attempts to send it to the recipient."

Backoff policies

You can define the backoff policy for the entire environment or for a specific queue:

queue = sqs.queue("emails", backoff_policy=DEFAULT_BACKOFF)

@queue.processor('send_email')
def send_email(to, subject, body):
    print(f"Sending email {subject} to {to}")

The default policy is exponential backoff. We recommend setting both a backoff policy and a dead-letter queue to limit the maximum number of execution attempts.

Alternatively, you can set the backoff to IMMEDIATE_RETURN to re-execute the failed task immediately.

queue = sqs.queue("emails", backoff_policy=IMMEDIATE_RETURN)

@queue.processor('send_email')
def send_email(to, subject, body):
    print(f"Sending email {subject} to {to}")

Shutdown policies

When launching the queue processor with process_queue(), you can optionally set when it should stop.

The following shutdown policies are supported:

  • IdleShutdown(idle_seconds): return from the function when no new tasks have been sent for a specific period.

  • MaxTasksShutdown(max_tasks): return from the function after processing at least max_tasks jobs. It can be helpful for preventing memory leaks.

The default policy is NeverShutdown. It's also possible to combine the two previous policies with OrShutdown or AndShutdown, or to create custom classes for specific behavior.

Example of stopping processing the queue after 5 minutes of inactivity:

from sqs_workers import SQSEnv
from sqs_workers.shutdown_policies import IdleShutdown

sqs = SQSEnv()
sqs.queue("foo").process_queue(shutdown_policy=IdleShutdown(idle_seconds=300))

Processing a dead-letter queue by pushing back failed messages

The most common way to process a dead-letter queue is to fix the main bug causing messages to appear there in the first place, and then to re-process these messages.

With sqs-workers, this can be done by pushing all the messages from a dead-letter queue back to the main one. While processing the dead-letter queue, the processor takes every message and pushes it back to the upstream queue with a hard-coded delay of 1 second.

Usage example:

>>> from sqs_workers import SQSEnv
>>> from sqs_workers.shutdown_policies import IdleShutdown
>>> from sqs_workers.deadletter_queue import DeadLetterQueue
>>> env = SQSEnv()
>>> foo = env.queue("foo")
>>> foo_dead = env.queue("foo_dead", DeadLetterQueue.maker(foo))
>>> foo_dead.process_queue(shutdown_policy=IdleShutdown(10))

This code takes all the messages in the foo_dead queue and pushes them back to the foo queue. Then it waits 10 seconds to ensure no new messages appear, and quits.

Processing a dead-letter queue by executing tasks in-place

Instead of pushing the tasks back to the main queue, you can execute them in place. For this, you need to copy the queue processors from the main queue to the dead-letter one.

Usage example:

>>> env = SQSEnv()
>>> foo = env.queue("foo")
>>> foo_dead = env.queue("foo_dead")
>>> foo.copy_processors(foo_dead)
>>>
>>> from sqs_workers.shutdown_policies import IdleShutdown
>>> foo_dead.process_queue(shutdown_policy=IdleShutdown(10))

This code takes all the messages in the foo_dead queue and executes them with processors from the "foo" queue. Then it waits 10 seconds to ensure no new messages appear, and quits.

Using in unit tests with MemorySession

There is a special MemorySession, which can be used as a quick and dirty replacement for real queues in unit tests. If you have a function create_task which adds some tasks to the queue and you want to test how it works, you can technically write your tests like this:

from sqs_workers import SQSEnv
env = SQSEnv()

def test_task_creation_side_effects():
    create_task()
    env.queue('foo').process_batch()
    ...

The problem is that your test starts depending on AWS (or localstack) infrastructure, which you don't always need. Instead, you can pass a MemorySession to your SQSEnv instance.

from sqs_workers import SQSEnv, MemorySession
env = SQSEnv(MemorySession())

Please note that MemorySession has some serious limitations and may not fit your use case well. Namely, when you work with MemorySession:

  • Redrive policy doesn't work
  • There is no difference between standard and FIFO queues
  • FIFO queues don't support content-based deduplication
  • Delayed tasks are executed inefficiently: the task is taken from the queue, and if its time hasn't come yet, it is put back.
  • The API can return slightly different results
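Putting it together, a test against MemorySession can register a processor, enqueue a task, and process the batch without touching AWS. A sketch using the APIs shown above (whether the queue has to be created explicitly with MemorySession is an assumption here):

from sqs_workers import SQSEnv, MemorySession, create_standard_queue

env = SQSEnv(MemorySession())
create_standard_queue(env, "emails")
queue = env.queue("emails")

sent_emails = []

@queue.processor("send_email")
def send_email(to, subject, body):
    sent_emails.append((to, subject))

def test_send_email():
    send_email.delay(to="user@example.com", subject="Hello world", body="hello world")
    queue.process_batch()
    assert sent_emails == [("user@example.com", "Hello world")]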

Contributing

Any help is welcome!

Please feel free to report any issue you may have.

If you want to contribute with code, please open a Pull Request. A few things to keep in mind:

  • sqs-workers is released under the MIT license
  • please use pre-commit to ensure some formatting rules and basic consistency checks are applied before each Git commit
  • please add tests for your changes!

Testing

We use pytest to run unit tests, and tox to run them for all supported Python versions.

If you just run pytest or tox, all tests will be run against AWS, localstack, and MemorySession. You can select or skip backends with the pytest -k flag, for instance -k localstack or -k 'not aws'.

Running tests with pytest requires installing two additional dependencies:

pip install pytest localstack-client

Testing with AWS

Make sure you have all dependencies installed and the boto3 client configured (see above), and then run

pytest -k aws

Alternatively, to test all supported versions, run

tox -- -k aws

Testing with localstack

Localstack tests should run faster than testing against AWS, and besides, they work offline.

Run ElasticMQ and make sure that the SQS endpoint is available at localhost:4566:

docker run -p 4566:9324 --rm -it softwaremill/elasticmq-native

Then run

pytest -k localstack

or

tox -- -k localstack

Testing with MemorySession

MemorySession should be even faster, but it has all the limitations documented above. Still, it can be useful for testing some logic changes.

Simply run

pytest -k memory

or

tox -- -k memory

Releasing new versions

  • Bump version in sqs_workers/__version__.py
  • Update the CHANGELOG
  • Commit the changes with a commit message "Version X.X.X"
  • Push the changes to GitHub and PyPI with a single command make upload

Why does it depend on werkzeug? 😱

The only reason is werkzeug.utils.validate_arguments, which we love, and we are too lazy to move it into this codebase.
