A client for Scrapyd

Scrapyd-client is a client for Scrapyd. It provides:

Command line tools:

  • scrapyd-deploy, to deploy your project to a Scrapyd server

  • scrapyd-client, to interact with your project once deployed

Python client:

  • ScrapydClient, to interact with Scrapyd within your Python code

scrapyd-deploy

Deploying your project to a Scrapyd server typically involves two steps:

  1. Eggifying your project. You’ll need to install setuptools for this. See Egg Caveats below.

  2. Uploading the egg to the Scrapyd server through the addversion.json endpoint.

The scrapyd-deploy tool automates the process of building the egg and pushing it to the target Scrapyd server.
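
Under the hood, the upload step is a plain HTTP POST to Scrapyd’s addversion.json endpoint. A minimal sketch of that step using the requests library (the egg filename, project name, and version are placeholders):

import requests

# POST the egg to Scrapyd's addversion.json endpoint -- this is the step
# scrapyd-deploy automates for you.
with open("myproject-1287453519.egg", "rb") as egg:
    response = requests.post(
        "http://localhost:6800/addversion.json",
        data={"project": "myproject", "version": "1287453519"},
        files={"egg": egg},
    )
print(response.json())  # e.g. {"status": "ok", "spiders": [...]}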

Including Static Files

If the egg needs to include static (non-Python) files, edit the setup.py file in your project. Otherwise, you can skip this step.

If you don’t have a setup.py file, create one with:

scrapyd-deploy --build-egg=/dev/null

Then, set the package_data keyword argument in the setup() function call in the setup.py file. Paths in package_data are relative to the package directory. Example (note: projectname would be your project’s name):

from setuptools import setup, find_packages

setup(
    name         = 'projectname',
    version      = '1.0',
    packages     = find_packages(),
    entry_points = {'scrapy': ['settings = projectname.settings']},
    package_data = {'projectname': ['path/to/*.json']}
)
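
To check that the static files actually end up in the egg, you can build one locally and list its contents (myproject.egg is a placeholder output path):

scrapyd-deploy --build-egg=myproject.egg
python -c "import zipfile; print(zipfile.ZipFile('myproject.egg').namelist())"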

Deploying a Project

First, cd into your project’s root. You can then deploy your project with:

scrapyd-deploy <target> -p <project>

This will eggify your project and upload it to the target. If you have a setup.py file in your project, it will be used; otherwise, one will be created automatically.

If successful, you should see a JSON response similar to the following:

Deploying myproject-1287453519 to http://localhost:6800/addversion.json
Server response (200):
{"status": "ok", "spiders": ["spider1", "spider2"]}

To save yourself from having to specify the target and project, you can set the defaults in the Scrapy configuration file.

Versioning

By default, scrapyd-deploy uses the current timestamp for generating the project version, as shown above. However, you can pass a custom version using --version:

scrapyd-deploy <target> -p <project> --version <version>

The version must be comparable with LooseVersion. Scrapyd will use the greatest version, unless one is specified explicitly.
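
As an illustration, LooseVersion (from the deprecated distutils.version module) compares versions component-wise:

from distutils.version import LooseVersion

# Numeric components compare as numbers, so both of these hold:
assert LooseVersion("1.10") > LooseVersion("1.9")
assert LooseVersion("1287453519") < LooseVersion("1400000000")  # timestamp versions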

If you use Mercurial or Git, you can use HG or GIT respectively as the argument supplied to --version to use the current revision as the version. You can save yourself from having to specify the version parameter by adding it to your target’s entry in scrapy.cfg:

[deploy]
...
version = HG

Local Settings

You may want to keep certain settings local and not have them deployed to Scrapyd. To accomplish this, create a local_settings.py file at the root of your project, where your scrapy.cfg file resides, and add the following to your project’s settings:

try:
    from local_settings import *
except ImportError:
    pass

scrapyd-deploy doesn’t deploy anything outside of the project module, so the local_settings.py file won’t be deployed.
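
For example, a local_settings.py might hold machine-specific overrides; the settings below are only illustrative:

# local_settings.py -- lives next to scrapy.cfg, so it is never deployed
HTTPCACHE_ENABLED = True   # cache responses during local development
DOWNLOAD_DELAY = 5.0       # crawl more gently while testing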

Egg Caveats

Some things to keep in mind when building eggs for your Scrapy project:

  • Make sure no local development settings are included in the egg when you build it. The find_packages function may pick up your custom settings. In most cases you want to upload the egg with the default project settings.

  • Avoid using __file__ in your project code, as it doesn’t play well with eggs. Consider using pkgutil.get_data. For example, instead of:

    import os

    path = os.path.dirname(os.path.realpath(__file__))  # BAD: __file__ points inside the egg archive
    data = open(os.path.join(path, "tools", "json", "test.json"), "rb").read()

    Use:

    import pkgutil

    # Works even when the package is zipped inside an egg; returns bytes.
    data = pkgutil.get_data("projectname", "tools/json/test.json")
  • Be careful when writing to disk in your project, as Scrapyd will most likely be running under a different user, which may not have write access to certain directories. If you can, avoid writing to disk, and always use tempfile for temporary files (see the sketch after this list).
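
A minimal sketch of the tempfile approach (the item data is a placeholder):

import json
import tempfile

items = [{"title": "example"}]  # placeholder scraped data

# NamedTemporaryFile picks a writable location regardless of which user
# runs Scrapyd; delete=False keeps the file after the with-block closes it.
with tempfile.NamedTemporaryFile(mode="w", suffix=".json", delete=False) as fp:
    json.dump(items, fp)
    print("wrote", fp.name)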

Including dependencies

If your project has additional dependencies, you can either install them on the Scrapyd server or include them in the project’s egg, in two steps:

  • Create a requirements.txt file at the root of the project

  • Use the --include-dependencies option when building or deploying your project:

    scrapyd-deploy --include-dependencies
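
The requirements.txt file uses the standard pip format; the packages below are placeholders for your project’s actual dependencies:

# requirements.txt
beautifulsoup4==4.12.3
python-dateutil>=2.8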

scrapyd-client

For a reference on each subcommand, invoke scrapyd-client <subcommand> --help.

Where filtering with wildcards is possible, it is facilitated with fnmatch. The --project option can be omitted if a project is found in a scrapy.cfg file.

deploy

This is a wrapper around scrapyd-deploy.

projects

Lists all projects of a Scrapyd instance:

# lists all projects on the default target
scrapyd-client projects
# lists all projects from a custom URL
scrapyd-client -t http://scrapyd.example.net projects

schedule

Schedules one or more spiders to be executed:

# schedules any spider
scrapyd-client schedule
# schedules all spiders from the 'knowledge' project
scrapyd-client schedule -p knowledge \*
# schedules any spider from any project whose name ends with '_daily'
scrapyd-client schedule -p \* \*_daily

spiders

Lists spiders of one or more projects:

# lists all spiders
scrapyd-client spiders
# lists all spiders from the 'knowledge' project
scrapyd-client spiders -p knowledge

ScrapydClient

Interact with Scrapyd within your Python code.

from scrapyd_client import ScrapydClient

client = ScrapydClient()

for project in client.projects():
    print(client.jobs(project=project))

Scrapy configuration file

Targets

You can define a Scrapyd target in your project’s scrapy.cfg file. Example:

[deploy]
url = http://scrapyd.example.com/api/scrapyd
username = scrapy
password = secret
project = projectname

You can now deploy your project without the <target> argument or -p <project> option:

scrapyd-deploy

If you have multiple targets, add the target name in the section name. Example:

[deploy:targetname]
url = http://scrapyd.example.com/api/scrapyd

[deploy:another]
url = http://other.example.com/api/scrapyd

If you are working with CD frameworks, you do not need to commit your secrets to your repository. Instead, you can use environment variable expansion, like so:

[deploy]
url = $SCRAPYD_URL
username = $SCRAPYD_USERNAME
password = $SCRAPYD_PASSWORD

or using this syntax:

[deploy]
url = ${SCRAPYD_URL}
username = ${SCRAPYD_USERNAME}
password = ${SCRAPYD_PASSWORD}
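
For example, a CI job can export the variables before deploying (the values shown are placeholders):

export SCRAPYD_URL=http://scrapyd.example.com/api/scrapyd
export SCRAPYD_USERNAME=scrapy
export SCRAPYD_PASSWORD=secret
scrapyd-deploy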

To deploy to one target, run:

scrapyd-deploy targetname -p <project>

To deploy to all targets, use the -a option:

scrapyd-deploy -a -p <project>

To list all available targets, use the -l option:

scrapyd-deploy -l

To list all available projects on one target, use the -L option:

scrapyd-deploy -L example

While your target needs to be defined with its URL in scrapy.cfg, you can use netrc for username and password, like so:

machine scrapyd.example.com
    username scrapy
    password secret

History

1.2.3 (2023-01-30)

  • feat: Add scrapyd-client --username and --password options. (@mxdev88)

  • feat: Expand environment variables in the scrapy.cfg file. (@mxdev88)

  • feat: Add ScrapydClient: a python client to interact with Scrapyd. (@mxdev88)

  • Add support for Python 3.10, 3.11. (@Laerte)

1.2.2 (2022-05-03)

  • fix: Fix FileNotFoundError when using scrapyd-deploy --deploy-all-targets.

1.2.1 (2022-05-02)

  • feat: Add scrapyd-deploy --include-dependencies option to install project dependencies from a requirements.txt file. (@mxdev88)

  • fix: Remove temporary directories created by scrapyd-deploy --deploy-all-targets.

  • chore: Address deprecation warnings.

  • chore: Add dependency on urllib3.

1.2.0 (2021-10-01)

  • Add support for Scrapy 2.5.

  • Add support for Python 3.7, 3.8, 3.9, PyPy3.7.

  • Drop support for Python 2.7, 3.4, 3.5.

  • Remove scrapyd_client.utils.get_config, which was a compatibility wrapper for Python 2.7.

1.2.0a1 (2017-08-24)

  • Install scrapyd-deploy as a console script.

  • New scrapyd-client CLI with deploy, projects, spiders, and schedule subcommands.

1.1.0 (2017-02-10)

  • New -a option to deploy to all targets.

  • Fix returncode on egg deploy error.

  • Add Python 3 support.

  • Drop Python 2.6 support.

1.0.1 (2015-04-09)

  • Initial release.
