An Apache Airflow provider package built by Astronomer to integrate with Ray.

Project description

Ray provider

📚 Docs   |   🚀 Getting Started   |   💬 Slack (#airflow-ray)   |   🔥 Contribute

Orchestrate your Ray jobs using Apache Airflow®, combining Airflow's workflow management with Ray's distributed computing capabilities.

Benefits of using this provider include:

  • Integration: Incorporate Ray jobs into Airflow DAGs for unified workflow management.
  • Distributed computing: Use Ray's distributed capabilities within Airflow pipelines for scalable ETL, LLM fine-tuning, and other compute-intensive workloads.
  • Monitoring: Track Ray job progress through Airflow's user interface.
  • Dependency management: Define and manage dependencies between Ray jobs and other tasks in DAGs.
  • Resource allocation: Run Ray jobs alongside other task types within a single pipeline.

Quickstart

Check out the Getting Started guide in our docs. Sample DAGs are available at example_dags/.
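
The provider is published on PyPI under the distribution name shown in the "Download files" section below, so installation is a single pip command:

```bash
pip install astro-provider-ray
```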

Sample DAGs

Example 1: Using @ray.task for the job lifecycle

The example below shows how to use the @ray.task decorator to manage the full lifecycle of a Ray cluster: setup, job execution, and teardown.

This approach is ideal for jobs that require a dedicated, short-lived cluster, as it optimizes resource usage by cleaning up once the task completes.

https://github.com/astronomer/astro-provider-ray/blob/bd6d847818be08fae78bc1e4c9bf3334adb1d2ee/example_dags/ray_taskflow_example.py#L1-L57
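
For orientation, here is a minimal sketch of the taskflow pattern. The import path, the config keys, and the `ray_conn` connection id are assumptions for illustration; the linked ray_taskflow_example.py is the authoritative version.

```python
from datetime import datetime

from airflow.decorators import dag
from ray_provider.decorators.ray import ray  # assumed import path

# Illustrative task config: an Airflow connection to the Ray cluster plus
# the job's runtime environment. Exact keys may differ; see the provider docs.
RAY_TASK_CONFIG = {
    "conn_id": "ray_conn",  # hypothetical Airflow connection id
    "runtime_env": {"pip": ["numpy"]},
    "num_cpus": 1,
}


@dag(start_date=datetime(2024, 1, 1), schedule=None, catchup=False)
def ray_taskflow_example():
    @ray.task(config=RAY_TASK_CONFIG)
    def square(value: int) -> int:
        # The function body runs as a Ray job; the provider handles cluster
        # setup, job submission, and teardown around this function.
        return value * value

    square(4)


ray_taskflow_example()
```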

Example 2: Using SetupRayCluster, SubmitRayJob & DeleteRayCluster

This example shows how to use separate operators for cluster setup, job submission, and teardown, providing more granular control over the process.

Separating these steps enables more complex workflows, such as submitting several jobs to one shared Ray cluster.

Key Points:

  • Uses SetupRayCluster, SubmitRayJob, and DeleteRayCluster operators separately.
  • Allows for multiple jobs to be submitted to the same cluster before deletion.
  • Demonstrates how to pass cluster information between tasks using XCom.

This method is ideal for scenarios where you need fine-grained control over the cluster lifecycle, such as running multiple jobs on the same cluster or keeping the cluster alive across several tasks.

https://github.com/astronomer/astro-provider-ray/blob/bd6d847818be08fae78bc1e4c9bf3334adb1d2ee/example_dags/setup-teardown.py#L1-L44
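
A hedged sketch of this pattern follows. The import path and the operator parameters (connection id, cluster YAML, entrypoint) are assumptions for illustration; the linked setup-teardown.py is the authoritative version.

```python
from datetime import datetime

from airflow import DAG
from ray_provider.operators.ray import (  # assumed import path
    DeleteRayCluster,
    SetupRayCluster,
    SubmitRayJob,
)

with DAG(
    dag_id="ray_setup_teardown_example",
    start_date=datetime(2024, 1, 1),
    schedule=None,
    catchup=False,
):
    # Parameter names are illustrative; "ray_conn" and "ray.yaml" are a
    # hypothetical connection id and cluster spec.
    setup = SetupRayCluster(
        task_id="setup_ray_cluster",
        conn_id="ray_conn",
        ray_cluster_yaml="ray.yaml",
    )

    # Several SubmitRayJob tasks could sit between setup and teardown, all
    # targeting the cluster created above; the provider passes cluster
    # details between tasks via XCom.
    submit = SubmitRayJob(
        task_id="submit_ray_job",
        conn_id="ray_conn",
        entrypoint="python script.py",  # hypothetical job entrypoint
        runtime_env={"working_dir": "."},
    )

    teardown = DeleteRayCluster(
        task_id="delete_ray_cluster",
        conn_id="ray_conn",
        ray_cluster_yaml="ray.yaml",
    )

    setup >> submit >> teardown
```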

Getting Involved

| Platform | Purpose | Est. response time |
|---|---|---|
| Discussion Forum | General inquiries and discussions | < 3 days |
| GitHub Issues | Bug reports and feature requests | < 1-2 days |
| Slack | Quick questions and real-time chat | 12 hrs |

Changelog

We follow Semantic Versioning for releases. Check CHANGELOG.rst for the latest changes.

Contributing Guide

All contributions, bug reports, bug fixes, documentation improvements, and enhancements are welcome.

A detailed overview on how to contribute can be found in the Contributing Guide.

License

Apache 2.0 License

Download files

Download the file for your platform. If you're not sure which to choose, see the Python Packaging User Guide on installing packages.

Source Distribution

astro_provider_ray-0.2.1.tar.gz (19.9 kB)


Built Distribution

astro_provider_ray-0.2.1-py3-none-any.whl (22.2 kB)


File details

Details for the file astro_provider_ray-0.2.1.tar.gz.

File metadata

  • Download URL: astro_provider_ray-0.2.1.tar.gz
  • Size: 19.9 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.12.5

File hashes

Hashes for astro_provider_ray-0.2.1.tar.gz

| Algorithm | Hash digest |
|---|---|
| SHA256 | 52bd6b58292776e35848cbdf56645e090ca92c4cdf84c0c0770d29978682f38b |
| MD5 | db96a1412505ba62f784bcdb92e09afa |
| BLAKE2b-256 | e343cc0ef6c44ccfee11e8bca19747f5347681004e72145677a0fef69b1aaad6 |

See the pip documentation for more details on using hashes.
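
For example, a downloaded sdist can be checked against the SHA256 digest above using only the standard library (the local file name below assumes the archive was saved to the current directory):

```python
import hashlib

EXPECTED_SHA256 = "52bd6b58292776e35848cbdf56645e090ca92c4cdf84c0c0770d29978682f38b"

# Assumes the sdist was downloaded to the current directory.
with open("astro_provider_ray-0.2.1.tar.gz", "rb") as f:
    digest = hashlib.sha256(f.read()).hexdigest()

assert digest == EXPECTED_SHA256, f"hash mismatch: {digest}"
print("sha256 OK")
```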

File details

Details for the file astro_provider_ray-0.2.1-py3-none-any.whl.

File metadata

  • Download URL: astro_provider_ray-0.2.1-py3-none-any.whl
  • Size: 22.2 kB
  • Tags: Python 3

File hashes

Hashes for astro_provider_ray-0.2.1-py3-none-any.whl

| Algorithm | Hash digest |
|---|---|
| SHA256 | 5e1871f86a1ff7315fb2019b1b9d976d283f3e826eddb8847714903ff8e791e4 |
| MD5 | 0e0858a9f066cc96b5b37f2dfcecef99 |
| BLAKE2b-256 | c871807a3e121416210d221101dbd57ee4b5ab1ba2abcf658a5cf19a4359a5d1 |

See the pip documentation for more details on using hashes.
