
Provides job scheduling capabilities to RQ (Redis Queue)

Project description

RQ Scheduler

RQ Scheduler is a small package that adds job scheduling capabilities to RQ, a Redis based Python queuing library.

Build status: https://travis-ci.org/rq/rq-scheduler

Support RQ Scheduler

If you find rq-scheduler useful, please consider supporting its development via Tidelift.

Requirements

Installation

You can install RQ Scheduler via pip:

pip install rq-scheduler

Or you can download the latest stable package from PyPI.

Usage

Scheduling a job involves two different things:

  1. Putting a job in the scheduler

  2. Running a scheduler that will move scheduled jobs into queues when the time comes

Scheduling a Job

There are two ways you can schedule a job. The first is using RQ Scheduler’s enqueue_at:

from redis import Redis
from rq import Queue
from rq_scheduler import Scheduler
from datetime import datetime

scheduler = Scheduler(connection=Redis()) # Get a scheduler for the "default" queue
scheduler = Scheduler('foo', connection=Redis()) # Get a scheduler for the "foo" queue

# You can also instantiate a Scheduler using an RQ Queue
queue = Queue('bar', connection=Redis())
scheduler = Scheduler(queue=queue, connection=queue.connection)

# Puts a job into the scheduler. The API is similar to RQ except that it
# takes a datetime object as first argument. So for example to schedule a
# job to run on Jan 1st 2020 we do:
scheduler.enqueue_at(datetime(2020, 1, 1), func) # Date time should be in UTC

# Here's another example scheduling a job to run at a specific date and time (in UTC),
# complete with args and kwargs.
scheduler.enqueue_at(datetime(2020, 1, 1, 3, 4), func, foo, bar=baz)

# You can also choose the queue class used to enqueue jobs by passing its
# dotted path to the scheduler
scheduler = Scheduler('foo', queue_class="rq.Queue")
scheduler.enqueue_at(datetime(2020, 1, 1), func) # The job will be enqueued in the queue named "foo" using the queue class "rq.Queue"

The second way is using enqueue_in. Instead of taking a datetime object, this method expects a timedelta and schedules the job to run X seconds/minutes/hours/days/weeks later. For example, if we want to monitor how popular a tweet is a few times during the course of the day, we could do something like this:

from datetime import timedelta

# Schedule a job to run 10 minutes, 1 hour and 1 day later
scheduler.enqueue_in(timedelta(minutes=10), count_retweets, tweet_id)
scheduler.enqueue_in(timedelta(hours=1), count_retweets, tweet_id)
scheduler.enqueue_in(timedelta(days=1), count_retweets, tweet_id)

IMPORTANT: You should always use UTC datetime when working with RQ Scheduler.
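
If your application works with local times, one way to stay on UTC is to convert before scheduling. The snippet below is a minimal sketch, not part of the library's API, and assumes (as in the examples above) that enqueue_at receives naive datetimes expressed in UTC:

from datetime import datetime, timezone

# Hypothetical example: the job should run at 09:00 local time on Jan 1st 2020
local_run_time = datetime(2020, 1, 1, 9, 0).astimezone()  # attach the local timezone

# Convert to UTC and drop tzinfo, matching the naive UTC datetimes used above
utc_run_time = local_run_time.astimezone(timezone.utc).replace(tzinfo=None)

scheduler.enqueue_at(utc_run_time, func)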

Periodic & Repeated Jobs

As of version 0.3, RQ Scheduler also supports creating periodic and repeated jobs. You can do this via the schedule method. Note that this feature needs RQ >= 0.3.1.

This is how you do it:

scheduler.schedule(
    scheduled_time=datetime.utcnow(), # Time for first execution, in UTC timezone
    func=func,                     # Function to be queued
    args=[arg1, arg2],             # Arguments passed into function when executed
    kwargs={'foo': 'bar'},         # Keyword arguments passed into function when executed
    interval=60,                   # Time before the function is called again, in seconds
    repeat=10,                     # Repeat this number of times (None means repeat forever)
    meta={'foo': 'bar'}            # Arbitrary pickleable data on the job itself
)

IMPORTANT NOTE: If you set up a repeated job, you must make sure that you either do not set a result_ttl value or you set a value larger than the interval. Otherwise, the entry with the job details will expire and the job will not get re-scheduled.
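
For example, a job repeated every 60 seconds should keep its scheduler entry around longer than that. A minimal sketch following the schedule call shown above (func is a placeholder for your own function):

scheduler.schedule(
    scheduled_time=datetime.utcnow(),  # First execution, in UTC
    func=func,
    interval=60,                       # Run every 60 seconds
    repeat=None,                       # Repeat forever
    result_ttl=300                     # Keep the entry well beyond the 60-second interval
)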

Cron Jobs

As of version 0.6.0, RQ Scheduler also supports creating cron jobs, which you can use to run jobs periodically at fixed times, dates or intervals (for more info, see https://en.wikipedia.org/wiki/Cron). You can do this via the cron method.

This is how you do it:

scheduler.cron(
    cron_string,                # A cron string (e.g. "0 0 * * 0")
    func=func,                  # Function to be queued
    args=[arg1, arg2],          # Arguments passed into function when executed
    kwargs={'foo': 'bar'},      # Keyword arguments passed into function when executed
    repeat=10,                  # Repeat this number of times (None means repeat forever)
    result_ttl=300,             # Specify how long (in seconds) successful jobs and their results are kept. Defaults to -1 (forever)
    ttl=200,                    # Specifies the maximum queued time (in seconds) before it's discarded. Defaults to None (infinite TTL)
    queue_name=queue_name,      # Queue in which the job should be put
    meta={'foo': 'bar'},        # Arbitrary pickleable data on the job itself
    use_local_timezone=False    # Interpret hours in the local timezone
)
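
For instance, to run a job every day at 02:30 (UTC, unless use_local_timezone is set), a minimal sketch using the parameters shown above could look like this (cleanup_db is a hypothetical function of your own):

scheduler.cron(
    "30 2 * * *",            # Every day at 02:30
    func=cleanup_db,
    repeat=None,             # Repeat forever
    queue_name='default'
)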

Retrieving scheduled jobs

Sometimes you need to know which jobs have already been scheduled. You can get a list of scheduled jobs with the get_jobs method:

list_of_job_instances = scheduler.get_jobs()

In its simplest form (as seen in the example above), this method returns a list of all job instances that are currently scheduled for execution.

Additionally the method takes two optional keyword arguments until and with_times. The first one specifies up to which point in time scheduled jobs should be returned. It can be given as either a datetime / timedelta instance or an integer denoting the number of seconds since epoch (1970-01-01 00:00:00). The second argument is a boolean that determines whether the scheduled execution time should be returned along with the job instances.

Example

# get all jobs until 2012-11-30 10:00:00
list_of_job_instances = scheduler.get_jobs(until=datetime(2012, 11, 30, 10))

# get all jobs for the next hour
list_of_job_instances = scheduler.get_jobs(until=timedelta(hours=1))

# get all jobs with execution times
jobs_and_times = scheduler.get_jobs(with_times=True)
# returns a list of tuples:
# [(<rq.job.Job object at 0x123456789>, datetime.datetime(2012, 11, 25, 12, 30)), ...]

Checking if a job is scheduled

You can check whether a specific job instance or job id is scheduled for execution using the familiar Python in operator:

if job_instance in scheduler:
    # Do something
# or
if job_id in scheduler:
    # Do something

Canceling a job

To cancel a job, simply pass a Job or a job id to scheduler.cancel:

scheduler.cancel(job)

Note that this method returns None whether the specified job was found or not.
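
Because cancel accepts either a job instance or a job id, it combines naturally with get_jobs. A sketch that cancels every scheduled instance of one function, assuming RQ job instances expose a func_name attribute with the function's dotted path (myapp.tasks.count_retweets is a hypothetical example):

for job in scheduler.get_jobs():
    if job.func_name == 'myapp.tasks.count_retweets':  # hypothetical dotted path to your function
        scheduler.cancel(job)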

Running the scheduler

RQ Scheduler comes with a script, rqscheduler, that runs a scheduler process. The process polls Redis once every minute and moves scheduled jobs to the relevant queues when they need to be executed:

# This runs a scheduler process using the default Redis connection
rqscheduler

If you want to use a different Redis server, you could also do:

rqscheduler --host localhost --port 6379 --db 0

The script accepts these arguments:

  • -H or --host: Redis server to connect to

  • -p or --port: port to connect to

  • -d or --db: Redis db to use

  • -P or --password: password to connect to Redis

  • -b or --burst: runs in burst mode (enqueue scheduled jobs whose execution time is in the past and quit)

  • -i INTERVAL or --interval INTERVAL: How often the scheduler checks for new jobs to add to the queue (in seconds, can be floating-point for more precision).

  • -j or --job-class: specify custom job class for rq to use (python module.Class)

  • -q or --queue-class: specify custom queue class for rq to use (python module.Class)

The arguments pull default values from environment variables with the same names but with a prefix of RQ_REDIS_.
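
If you prefer not to use the rqscheduler script, the scheduler can also be driven from your own Python process. A hedged sketch, assuming your installed version of Scheduler accepts an interval keyword and exposes a blocking run() method (check before relying on either):

from redis import Redis
from rq_scheduler import Scheduler

scheduler = Scheduler(connection=Redis(), interval=5)  # Assumed: poll Redis every 5 seconds
scheduler.run()  # Assumed: blocks and moves due jobs onto their queues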

Running the Scheduler as a Service on Ubuntu

Create a systemd unit file at /etc/systemd/system/rqscheduler.service:

[Unit]
Description=RQScheduler
After=network.target

[Service]
ExecStart=/home/<<User>>/.virtualenvs/<<YourVirtualEnv>>/bin/python \
    /home/<<User>>/.virtualenvs/<<YourVirtualEnv>>/lib/<<YourPythonVersion>>/site-packages/rq_scheduler/scripts/rqscheduler.py

[Install]
WantedBy=multi-user.target

You will also want to add any command line arguments to the ExecStart line if your Redis server is not on localhost and the settings are not provided via environment variables.

Start, check the status of, and enable the service:

sudo systemctl start rqscheduler.service
sudo systemctl status rqscheduler.service
sudo systemctl enable rqscheduler.service

Running Multiple Schedulers

Multiple instances of rq-scheduler can be run simultaneously. This allows for:

  • Reliability (no single point of failure)

  • Failover (scheduler instances automatically retry to acquire the lock and schedule jobs)

  • Running the scheduler on multiple server instances to make deployments identical and easier

Multiple schedulers can be run in any way you want. Typically you’ll only want to run one scheduler per server/instance.

rqscheduler -i 5

# another shell/systemd service or ideally another server
rqscheduler -i 5

# different parameters can be provided to different schedulers
rqscheduler -i 10

Practical example:

  • scheduler_a is running on ec2_instance_a

  • If scheduler_a crashes or ec2_instance_a goes down, then our tasks won’t be scheduled at all

  • Instead we can simply run 2 schedulers. Another scheduler called scheduler_b can be run on ec2_instance_b

  • Now both scheduler_a and scheduler_b will periodically check and schedule the jobs

  • If one fails, the other still works

You can read more about running multiple schedulers in issues #212 and #195.

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

rq-scheduler-0.13.1.tar.gz (16.6 kB)

Uploaded Source

Built Distribution

rq_scheduler-0.13.1-py2.py3-none-any.whl (13.9 kB)

Uploaded Python 2 Python 3

File details

Details for the file rq-scheduler-0.13.1.tar.gz.

File metadata

  • Download URL: rq-scheduler-0.13.1.tar.gz
  • Upload date:
  • Size: 16.6 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.28.1 setuptools/46.0.0 requests-toolbelt/0.9.1 tqdm/4.51.0 CPython/3.8.3

File hashes

Hashes for rq-scheduler-0.13.1.tar.gz:
  • SHA256: 89d6a18f215536362b22c0548db7dbb8678bc520c18dc18a82fd0bb2b91695ce
  • MD5: 85ffd5923dce576c83f3f92aa072a81f
  • BLAKE2b-256: a08a1a434c0e1b3a63e2f02d4013059c4b1c9821a424397ecb3d88514e07c60e


File details

Details for the file rq_scheduler-0.13.1-py2.py3-none-any.whl.

File metadata

  • Download URL: rq_scheduler-0.13.1-py2.py3-none-any.whl
  • Upload date:
  • Size: 13.9 kB
  • Tags: Python 2, Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.2.0 pkginfo/1.6.1 requests/2.28.1 setuptools/46.0.0 requests-toolbelt/0.9.1 tqdm/4.51.0 CPython/3.8.3

File hashes

Hashes for rq_scheduler-0.13.1-py2.py3-none-any.whl:
  • SHA256: c2b19c3aedfc7de4d405183c98aa327506e423bf4cdc556af55aaab9bbe5d1a1
  • MD5: 467ac5c05825a96ee21d4e6c82d9a28b
  • BLAKE2b-256: df2727876b5b285552030134494406adb93eceb3d0e1cac6e75d49ff1f16af56

