A message queue written around PostgreSQL.
Project description
psycopg2_mq is a message queue implemented on top of PostgreSQL, SQLAlchemy, and psycopg2.
Currently the library provides only the low-level constructs that can be used to build a multithreaded worker system. It is broken into two components:
psycopg2_mq.MQWorker - a reusable worker object that manages a single-threaded worker which accepts jobs and executes them. An application should create one worker per thread. It supports an API for thread-safe graceful shutdown.
psycopg2_mq.MQSource - a source object providing a client-side API for invoking jobs and querying their states.
Data Model
Queues
Workers run jobs defined in queues. Currently each queue will run jobs concurrently, while a future version may support serial execution on a per-queue basis. Each registered queue should contain an execute_job(job) method.
Jobs
The execute_job method of a queue is passed a Job object containing the following attributes:
id
queue
method
args
cursor
As a convenience, there is an extend(**kw) method which can be used to add extra attributes to the object. This is useful in individual queues to define a contract between a queue and its methods.
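As a hedged sketch of that pattern (the dispatch-by-method style and the started_at attribute are illustrative assumptions, not part of psycopg2_mq), a queue might use extend like this:

from datetime import datetime, timezone

class ReportQueue:
    def execute_job(self, job):
        # Attach an extra attribute that this queue's methods can rely on.
        # started_at is purely illustrative, not a built-in Job attribute.
        job.extend(started_at=datetime.now(timezone.utc))
        # Dispatch to the method named on the job, e.g. 'daily_summary'.
        return getattr(self, job.method)(job)

    def daily_summary(self, job):
        return f'summary for {job.args["date"]} started at {job.started_at}'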
Cursors
A Job can be scheduled with a cursor_key. There can be only one pending job and one running job for any cursor. New jobs scheduled while another one is pending will be ignored, and the existing pending job is returned instead.
A job.cursor dict containing the cursor data is provided to the worker, and it is saved back to the database when the job is completed. This effectively gives jobs some persistent, shared state and serializes all jobs over a given cursor.
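For illustration only, the sketch below assumes cursor_key is accepted as a keyword argument by MQSource.call and that a CounterQueue is registered under the hypothetical 'counter' queue name; mq is an MQSource as constructed in the Example Source section below.

class CounterQueue:
    def execute_job(self, job):
        # job.cursor persists across jobs scheduled with the same cursor_key,
        # so this counter carries over from one run to the next.
        count = job.cursor.get('count', 0) + 1
        job.cursor['count'] = count
        return f'run number {count}'

# Client side: only one job per cursor may be pending, so calls made while
# a job is already pending return that pending job instead of a new one.
job = mq.call('counter', 'tick', {}, cursor_key='counter.example')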
Scheduled Jobs
A Job can be scheduled to run in the future by providing a datetime object to the when argument. This, combined with a cursor_key, provides a simple throttle on how frequently a job runs. For example, schedule a job to run in 30 seconds with a cursor_key, and any jobs scheduled on that cursor in the meantime will be dropped. The assumption here is that the job's arguments are constant and any data needed to continue execution lives in the cursor or another table.
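A minimal sketch of that throttle, assuming when and cursor_key are keyword arguments to MQSource.call and reusing the mq object from the Example Source section below:

from datetime import datetime, timedelta

# Run at most once per 30 seconds for this cursor: any call made while the
# scheduled job is still pending is dropped and the pending job is returned.
job = mq.call(
    'echo',
    'hello',
    {'name': 'Andy'},
    when=datetime.utcnow() + timedelta(seconds=30),
    cursor_key='echo.hello',
)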
Example Worker
from psycopg2_mq import (
    MQWorker,
    make_default_model,
)
from sqlalchemy import (
    MetaData,
    create_engine,
)
import sys


class EchoQueue:
    def execute_job(self, job):
        return f'hello, {job.args["name"]} from method="{job.method}"'


if __name__ == '__main__':
    engine = create_engine(sys.argv[1])
    metadata = MetaData()
    model = make_default_model(metadata)
    worker = MQWorker(
        engine=engine,
        queues={
            'echo': EchoQueue(),
        },
        model=model,
    )
    worker.run()
Example Source
from psycopg2_mq import (
    MQSource,
    make_default_model,
)
from sqlalchemy import (
    MetaData,
    create_engine,
)
from sqlalchemy.orm import sessionmaker
import sys

engine = create_engine(sys.argv[1])
metadata = MetaData()
model = make_default_model(metadata)
session_factory = sessionmaker()
session_factory.configure(bind=engine)
dbsession = session_factory()

with dbsession.begin():
    mq = MQSource(
        dbsession=dbsession,
        model=model,
    )
    job = mq.call('echo', 'hello', {'name': 'Andy'})
    print(f'queued job={job.id}')
0.4 (2019-10-28)
Add a worker column to the Job model to track what worker is handling a job.
Add an optional name argument to MQWorker to name the worker - the value will be recorded in each job.
Add a threads argument (default=1) to MQWorker to support handling multiple jobs from the same worker instance instead of making a worker per thread.
Add a capture_signals argument (default=True) to MQWorker which will capture SIGTERM, SIGINT, and SIGUSR1. The first two trigger a graceful shutdown: the process stops accepting new jobs while finishing any active jobs. The latter dumps a JSON summary of the worker's current status to stderr.
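For example, a single worker process handling several jobs at once might be configured like the sketch below; the name value is illustrative, and engine, model, and EchoQueue are as in the Example Worker section above.

worker = MQWorker(
    engine=engine,
    queues={'echo': EchoQueue()},
    model=model,
    name='worker-1',       # recorded in each job's worker column
    threads=4,             # handle up to 4 jobs concurrently in one process
    capture_signals=True,  # SIGTERM/SIGINT: graceful shutdown, SIGUSR1: status dump
)
worker.run()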
0.3.3 (2019-10-23)
Only save a cursor update if the job is completed successfully.
0.3.2 (2019-10-22)
Mark lost jobs during timeouts instead of only when a worker starts, in order to catch them earlier.
0.3.1 (2019-10-17)
When attempting to schedule a job with a cursor and a scheduled_time earlier than a pending job on the same cursor, the job will be updated to run at the earlier time.
When attempting to schedule a job with a cursor and a pending job already exists on the same cursor, a conflict_resolver function may be supplied to MQSource.call to update the job properties, merging the arguments however the user wishes.
0.3 (2019-10-15)
Add a new column cursor_snapshot to the Job model which will contain the value of the cursor when the job begins.
0.2 (2019-10-09)
Add cursor support for jobs. This requires a schema migration to add a cursor_key column, a new JobCursor model, and some new indices.
0.1.6 (2019-10-07)
Support passing custom kwargs to the job in psycopg2_mq.MQSource.call to allow custom columns on the job table.
0.1.5 (2019-05-17)
Fix a regression when serializing errors with strings or cycles.
0.1.4 (2019-05-09)
More safely serialize exception objects when jobs fail.
0.1.3 (2018-09-04)
Rename the thread to contain the job id while it’s handling a job.
0.1.2 (2018-09-04)
Rename Job.params to Job.args.
0.1.1 (2018-09-04)
Make psycopg2 an optional dependency in order to allow apps to depend on psycopg2-binary if they wish.
0.1 (2018-09-04)
Initial release.