Accelerate PyTorch models with ONNX Runtime OpenVINO EP

Project description

The torch-ort-inference package uses the standard PyTorch APIs to accelerate PyTorch models with the ONNX Runtime OpenVINO Execution Provider (EP).
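In practice the package wraps an existing torch.nn.Module and leaves the rest of the inference code unchanged. The minimal sketch below assumes the ORTInferenceModule wrapper exposed in the torch_ort namespace, and uses a torchvision ResNet-50 with random weights purely for illustration:

import torch
import torchvision
from torch_ort import ORTInferenceModule

# Wrap an ordinary torch.nn.Module; subsequent forward calls are
# dispatched to ONNX Runtime with the OpenVINO execution provider.
model = torchvision.models.resnet50()  # random weights are fine for this demo
model.eval()
model = ORTInferenceModule(model)

# Inference code is unchanged from plain PyTorch.
inputs = torch.randn(1, 3, 224, 224)
with torch.no_grad():
    outputs = model(inputs)
print(outputs.shape)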

Dependencies

The torch-ort-inference package depends on the onnxruntime-openvino package.
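Installing the package with pip should therefore pull in onnxruntime-openvino automatically. Judging by the wheel filename listed below, the distribution name on PyPI is torch-ort-infer:

pip install torch-ort-infer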

Post-installation step

Once torch-ort-inference is installed, there is a post-installation step:

python -m torch_ort.configure

Download files

Download the file for your platform.

Source Distributions

No source distribution files are available for this release.

Built Distribution

torch_ort_infer-0.0.1-py3-none-any.whl (9.7 kB, uploaded for Python 3)

File details

Details for the file torch_ort_infer-0.0.1-py3-none-any.whl.

File hashes

Hashes for torch_ort_infer-0.0.1-py3-none-any.whl:

Algorithm     Hash digest
SHA256        bdf07ac71cbc7413b15aca66400b0f652c257b6ec347834dc11d2543c82b317f
MD5           6033868d26d923a2ee1a1ae7a51d7ef1
BLAKE2b-256   dee197ebbf93639e513330434a0f97abd405a1ba00887f5a0ff898241ec7bc17

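To verify a downloaded wheel against the published digests before installing, a short standard-library check is enough; the wheel path below assumes the file sits in the current directory:

import hashlib

# Published SHA256 digest from the table above.
EXPECTED_SHA256 = "bdf07ac71cbc7413b15aca66400b0f652c257b6ec347834dc11d2543c82b317f"

def sha256_of(path):
    # Read in chunks so large files do not have to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

wheel = "torch_ort_infer-0.0.1-py3-none-any.whl"
if sha256_of(wheel) == EXPECTED_SHA256:
    print("SHA256 matches the published digest")
else:
    print("SHA256 mismatch; do not install this file")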
