Easy distributed locking using PostgreSQL Advisory Locks.
Introduction
PALs makes it easy to use PostgreSQL Advisory Locks for distributed application-level locking.
Do not confuse this type of locking with PostgreSQL's table or row locking. It's not the same thing.
Distributed application-level locking can also be implemented with Redis, Memcached, ZeroMQ, and others. But for those who are already using PostgreSQL, setting up and managing another service is unnecessary.
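For context, PostgreSQL advisory locks are keyed by a 64-bit integer, not a string, so a library like PALs has to map an application name plus a lock name onto an integer key. A minimal sketch of one such mapping (the function name and hashing scheme here are illustrative assumptions, not PALs' actual implementation):

```python
import hashlib

def lock_key(app_name: str, lock_name: str) -> int:
    """Map an app + lock name onto a signed 64-bit integer suitable for
    pg_advisory_lock(key bigint).  Illustrative sketch only; PALs' real
    hashing scheme may differ."""
    digest = hashlib.sha256(f'{app_name}.{lock_name}'.encode()).digest()
    # Take the first 8 bytes and interpret them as a signed 64-bit int,
    # since PostgreSQL's bigint is signed.
    return int.from_bytes(digest[:8], 'big', signed=True)

key = lock_key('my-app-name', 'my-lock')
assert -2**63 <= key < 2**63
```

The key point is that the mapping must be deterministic, so every process that asks for the same app/lock name pair contends on the same advisory lock key.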
Usage
import pals

# Think of the Locker instance as a Lock factory.
locker = pals.Locker('my-app-name', 'postgresql://user:pass@server/dbname')

lock1 = locker.lock('my-lock')
lock2 = locker.lock('my-lock')

# The first acquire works
assert lock1.acquire() is True

# The non-blocking version should fail immediately
assert lock2.acquire(blocking=False) is False

# The blocking version will retry and eventually fail
acquired, retries = lock2.acquire(return_retries=True)
assert acquired is False
assert retries > 4

# You can set the retry parameters yourself if you don't like our defaults.
lock2.acquire(retry_delay=100, retry_timeout=300)

# They can also be set on the lock instance
lock3 = locker.lock('my-lock', retry_delay=100, retry_timeout=300)

# Release the lock
lock1.release()
# Recommended usage pattern:
if not lock1.acquire():
    # Remember to check to make sure you got your lock
    return
try:
    ...  # do my work here
finally:
    lock1.release()
# But easier, and more recommended, is to use the lock as a context manager:
with lock1:
    assert lock2.acquire() is False

# Outside the context manager the lock should have been released and we can get it now
assert lock2.acquire()

# The context manager version will throw an exception if it fails to acquire the lock.
# This pattern was chosen because it feels semantically wrong to have to check whether
# the lock was actually acquired inside the context manager.  If the code inside runs,
# the lock was acquired.
try:
    with lock1:
        # We won't get here because lock2 acquired the lock just above
        pass
except pals.AcquireFailure:
    pass
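The blocking/retry behaviour shown above can be pictured without a database. Below is a sketch using an in-process threading.Lock as a stand-in for the advisory lock; the parameter names mirror the README (retry_delay and retry_timeout in milliseconds), but the implementation is an assumption for illustration, not PALs' actual code:

```python
import threading
import time

def acquire_with_retries(lock, blocking=True, retry_delay=100,
                         retry_timeout=300, return_retries=False):
    """Retry lock.acquire(blocking=False) every retry_delay ms until
    retry_timeout ms have elapsed.  Stand-in sketch, not PALs itself."""
    retries = 0
    deadline = time.monotonic() + retry_timeout / 1000.0
    while True:
        got = lock.acquire(blocking=False)
        if got or not blocking or time.monotonic() >= deadline:
            return (got, retries) if return_retries else got
        retries += 1
        time.sleep(retry_delay / 1000.0)

inner = threading.Lock()
inner.acquire()  # someone else already holds the lock

# The blocking version retries until the timeout, then gives up.
got, retries = acquire_with_retries(inner, retry_delay=10, retry_timeout=50,
                                    return_retries=True)
assert got is False
assert retries >= 1

inner.release()
# Once the holder releases, a non-blocking attempt succeeds immediately.
assert acquire_with_retries(inner, blocking=False) is True
```

The important property, which PALs shares, is that a failed acquire returns cleanly rather than raising, so callers can decide whether to bail out or retry later.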
Running Tests Locally
Setup Database Connection
We have provided a docker-compose file, but you don’t have to use it:
$ docker-compose up -d
$ export PALS_DB_URL=postgresql://postgres:password@localhost:54321/postgres
You can also put the environment variable in a .env file and pipenv will pick it up.
Run the Tests
With tox:
$ tox
Or, manually:
$ pipenv install --dev
$ pipenv shell
$ pytest pals/tests.py
Lock Releasing & Expiration
Unlike locking systems built on cache services like Memcached and Redis, whose keys can be expired by the service, PostgreSQL has no facility for expiring an advisory lock. If a client holds a lock and then sleeps or hangs for minutes, hours, or days, no other client will be able to get that lock until the holder releases it. This actually seems like a good thing to us: if a lock is acquired, it should be kept until released.
But what about accidental failures to release the lock? What if:

1. A developer calls lock.acquire() but doesn't later call lock.release()?
2. Code inside a lock accidentally throws an exception (and .release() is never called)?
3. The process running the application crashes, or the process' server dies?
PALs helps with #1 and #2 above in a few different ways:

- Locks work as context managers. Use them as much as possible to guarantee a lock is released.
- Locks release their lock when garbage collected.
- PALs uses a dedicated SQLAlchemy connection pool. When a connection is returned to the pool, either because .close() is called or due to garbage collection of the connection, PALs issues a pg_advisory_unlock_all(). It should therefore be impossible for an idle connection in the pool to still be holding a lock.
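The garbage-collection fallback can be sketched in plain Python: release in __del__ and suppress any exception, since exceptions must not escape a finalizer. The class below is a hypothetical illustration using threading.Lock, not the PALs API:

```python
import threading

class GuardedLock:
    """Sketch of GC-based cleanup: best-effort release in __del__,
    with exceptions suppressed.  Hypothetical class, not PALs' Lock."""

    def __init__(self, lock):
        self._lock = lock
        self.acquired = False

    def acquire(self):
        self.acquired = self._lock.acquire(blocking=False)
        return self.acquired

    def release(self):
        if self.acquired:
            self._lock.release()
            self.acquired = False

    def __del__(self):
        # Last-ditch cleanup; never let an exception escape a finalizer.
        try:
            self.release()
        except Exception:
            pass

shared = threading.Lock()
g = GuardedLock(shared)
assert g.acquire() is True

del g  # in CPython the refcount hits zero and __del__ releases the lock
assert shared.acquire(blocking=False) is True
shared.release()
```

This is only a safety net: relying on garbage collection for release timing is unpredictable (especially off CPython), which is why the context-manager pattern is the recommended one.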
Regarding #3 above, pg_advisory_unlock_all() is implicitly invoked by PostgreSQL whenever a connection (a.k.a. session) ends, even if the client disconnects ungracefully. So if a process crashes or otherwise disappears, PostgreSQL should notice and remove all locks held by that connection/session.
The possibility could exist that PostgreSQL does not detect a connection has closed and keeps a lock open indefinitely. However, in manual testing using scripts/hang.py, no way was found to end the Python process without PostgreSQL detecting it.
Changelog
0.1.0 released 2019-02-22
- Use lock_timeout setting to expire blocking calls (d0216ce)
- fix tox (1b0ffe2)
- rename to PALs (95d5a3c)
- improve readme (e8dd6f2)
- move tests file to better location (a153af5)
- add flake8 dep (3909c95)
- fix tests so they work locally too (7102294)
- get circleci working (28f16d2)
- suppress exceptions in Lock __del__ (e29c1ce)
- Add hang.py script (3372ef0)
- fix packaging stuff, update readme (cebd976)
- initial commit (871b877)