PyTriton - Flask/FastAPI-like interface to simplify Triton's deployment in Python environments.

Project description

PyTriton - a Flask/FastAPI-like framework designed to streamline the use of NVIDIA’s Triton Inference Server.

For comprehensive guidance on how to deploy your models, optimize performance, and explore the API, delve into the extensive resources found in our documentation.

Features at a Glance

The distinct capabilities of PyTriton are summarized in the feature matrix:

  • Native Python support: You can create any Python function and expose it as an HTTP/gRPC API.

  • Framework-agnostic: You can run any Python code with any framework of your choice, such as PyTorch, TensorFlow, or JAX.

  • Performance optimization: You can benefit from dynamic batching, response cache, model pipelining, clusters, and GPU/CPU inference.

  • Decorators: You can use batching decorators to handle batching and other pre-processing tasks for your inference function.

  • Easy installation and setup: You can use a simple and familiar interface based on Flask/FastAPI for easy installation and setup.

  • Model clients: You can access high-level model clients for HTTP/gRPC requests with configurable options and both synchronous and asynchronous APIs.

  • Streaming (alpha): You can stream partial responses from a model by serving it in a decoupled mode (see the sketch after this list).
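For instance, here is a minimal sketch of the streaming feature. It assumes decoupled mode is enabled through the model configuration (ModelConfig(decoupled=True)) and that the inference callable may yield partial responses as a generator; the model name Streamer and the step loop are illustrative, not part of the documented API.

# Hedged sketch of streaming (alpha): assumes ModelConfig(decoupled=True)
# switches the model to decoupled mode and that a generator-style inference
# callable may yield a sequence of partial responses.
import numpy as np

from pytriton.decorators import batch
from pytriton.model_config import ModelConfig, Tensor
from pytriton.triton import Triton

@batch
def stream_fn(data):
    # Yield partial results instead of returning one final answer.
    for step in range(1, 4):
        yield {"result": data * np.float32(step)}

triton = Triton()
triton.bind(
    model_name="Streamer",
    infer_func=stream_fn,
    inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,))],
    outputs=[Tensor(name="result", dtype=np.float32, shape=(-1,))],
    config=ModelConfig(decoupled=True),  # decoupled mode enables streaming
)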

Learn more about PyTriton’s architecture.

Prerequisites

Before proceeding with the installation of PyTriton, ensure your system meets the following criteria:

  • Operating System: Compatible with glibc version 2.35 or higher.
      - Primarily tested on Ubuntu 22.04.
      - Other supported OSes include Debian 11+, Rocky Linux 9+, and Red Hat UBI 9+.
      - Use ldd --version to verify your glibc version (or use the Python sketch after this list).

  • Python: Version 3.8 or newer.

  • pip: Version 20.3 or newer.

  • libpython: Ensure libpython3.*.so is installed, corresponding to your Python version.
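
If you prefer checking these prerequisites from Python, the standard library can report both versions (a convenience sketch; ldd --version from the list above works just as well):

import platform
import sys

# Compare against PyTriton's requirements: Python >= 3.8, glibc >= 2.35.
libc_name, libc_version = platform.libc_ver()
print(f"Python: {sys.version.split()[0]}")
print(f"C library: {libc_name} {libc_version}")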

Install

PyTriton can be installed from pypi.org by running the following command:

pip install nvidia-pytriton

Important: The Triton Inference Server binary is installed as part of the PyTriton package.

Discover more about PyTriton’s installation procedures, including Docker usage, prerequisites, and insights into building binaries from source to match your specific Triton server versions.

Quick Start

The quick start presents how to run a Python model in the Triton Inference Server without the need to change the current working environment. In this example, we are using a simple Linear model.

The infer_fn is a function that takes a data tensor and returns a list with a single output tensor. The @batch decorator from the batching decorators is used to handle batching for the model.

import numpy as np
from pytriton.decorators import batch

@batch
def infer_fn(data):
    result = data * np.array([[-1]], dtype=np.float32)  # Process inputs and produce result
    return [result]

In the next step, you can create the binding between the inference callable and the Triton Inference Server using the bind method from PyTriton. This method takes the model name, the inference callable, the input and output tensors, and an optional model configuration object.

from pytriton.model_config import Tensor
from pytriton.triton import Triton
triton = Triton()
triton.bind(
    model_name="Linear",
    infer_func=infer_fn,
    inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,)),],
    outputs=[Tensor(name="result", dtype=np.float32, shape=(-1,)),],
)
triton.run()
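
The optional model configuration object mentioned above controls server-side behavior such as batching. Below is a hedged sketch that rebinds the same inference callable with dynamic batching enabled; the model name LinearBatched is illustrative, and the DynamicBatcher settings are assumptions mirroring Triton's dynamic batching options.

# Illustrative rebinding with an explicit configuration: allow batches of up
# to 128 samples and let Triton briefly delay requests to form larger batches.
from pytriton.model_config import DynamicBatcher, ModelConfig

triton.bind(
    model_name="LinearBatched",
    infer_func=infer_fn,
    inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,))],
    outputs=[Tensor(name="result", dtype=np.float32, shape=(-1,))],
    config=ModelConfig(
        max_batch_size=128,
        batcher=DynamicBatcher(max_queue_delay_microseconds=100),
    ),
)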

Finally, you can send an inference query to the model using the ModelClient class. The infer_sample method takes the input data as a numpy array and returns the outputs as a dictionary of numpy arrays keyed by output name. You can learn more about the ModelClient class in the clients section.

from pytriton.client import ModelClient

client = ModelClient("localhost", "Linear")
data = np.array([1, 2, ], dtype=np.float32)
print(client.infer_sample(data=data))
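
ModelClient also exposes an infer_batch method for sending a whole batch in one request; a short sketch, assuming the input simply gains a leading batch dimension:

# Batch of two samples; the leading dimension is the batch dimension.
batch_data = np.array([[1, 2], [3, 4]], dtype=np.float32)
print(client.infer_batch(data=batch_data))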

After the inference is done, you can stop the Triton Inference Server and close the client:

client.close()
triton.stop()

The output of the inference should be:

{'result': array([-1., -2.], dtype=float32)}
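
The model clients feature above also mentions an asynchronous API. A minimal sketch, assuming pytriton.client provides an AsyncioModelClient whose infer_sample mirrors the synchronous call:

import asyncio

import numpy as np
from pytriton.client import AsyncioModelClient

async def main():
    client = AsyncioModelClient("localhost", "Linear")
    # Awaitable counterpart of ModelClient.infer_sample (assumed API).
    result = await client.infer_sample(data=np.array([1, 2], dtype=np.float32))
    print(result)
    await client.close()

asyncio.run(main())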

For the full example, including defining the model and binding it to the Triton server, check out our detailed Quick Start instructions. Get your model up and running, explore how to serve it, and learn how to invoke it from client applications.

The full example code can be found in examples/linear_random_pytorch.
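
As a variation, the whole quick start can be written with context managers so that the server and the client are shut down automatically; a sketch assuming both Triton and ModelClient support the with statement:

import numpy as np

from pytriton.client import ModelClient
from pytriton.decorators import batch
from pytriton.model_config import Tensor
from pytriton.triton import Triton

@batch
def infer_fn(data):
    # Same Linear model as above: negate every input element.
    return [data * np.array([[-1]], dtype=np.float32)]

with Triton() as triton:
    triton.bind(
        model_name="Linear",
        infer_func=infer_fn,
        inputs=[Tensor(name="data", dtype=np.float32, shape=(-1,))],
        outputs=[Tensor(name="result", dtype=np.float32, shape=(-1,))],
    )
    triton.run()
    with ModelClient("localhost", "Linear") as client:
        print(client.infer_sample(data=np.array([1, 2], dtype=np.float32)))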

Examples

The examples page showcases various use cases of serving models using PyTriton. It includes simple examples of running models in PyTorch, TensorFlow 2, JAX, and plain Python. In addition, more advanced scenarios are covered, such as online learning, multi-node models, and deployment on Kubernetes using PyTriton. Each example is accompanied by instructions on how to build and run it. Discover more about utilizing PyTriton by exploring our examples.

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distributions

  • nvidia_pytriton-0.5.6-py3-none-manylinux_2_35_x86_64.whl (40.5 MB): Python 3, manylinux (glibc 2.35+), x86-64

  • nvidia_pytriton-0.5.6-py3-none-manylinux_2_35_aarch64.whl (39.1 MB): Python 3, manylinux (glibc 2.35+), ARM64

File details

Hashes for nvidia_pytriton-0.5.6-py3-none-manylinux_2_35_x86_64.whl:

  SHA256       6403e65c2bbab0ab2fe2b737ad612e2b88f3edf20d41aadd1d544ffb309a701c
  MD5          78f81cfab943656bf91dbe7bcbf2e818
  BLAKE2b-256  85e65c2d20816bfa23cdf903f7a8cf5a30103f47dac722a3d356ea9710831bb3

Hashes for nvidia_pytriton-0.5.6-py3-none-manylinux_2_35_aarch64.whl:

  SHA256       53a6d7f0aba00366284e528d5cfaf807ed12ade389e4c96e070f52bc4dec9ea5
  MD5          dce3071cee37e1b34b45f91b4b1d7d1f
  BLAKE2b-256  d7d84658a84f814087fcfb01da7aeb9f13e5e30fe28eca8c7243dcbc4ddfcb9a
