Accelerate PyTorch models with ONNX Runtime OpenVINO EP
Project description
The torch-ort-inference package accelerates inference for PyTorch models through the familiar PyTorch APIs, backed by the ONNX Runtime OpenVINO Execution Provider (EP).
Dependencies
The torch-ort-inference package depends on the onnxruntime-openvino package.
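Installing the package from PyPI should pull in this dependency automatically. Judging from the wheel listed below, the distribution name is torch-ort-infer:

pip install torch-ort-infer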
Post-installation step
Once torch-ort-inference is installed, there is a post-installation step:
python -m torch_ort.configure
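After configuration, a wrapped model runs inference through the OpenVINO EP. The following is a minimal sketch, assuming the ORTInferenceModule wrapper that the project's repository documents; the torchvision model and input shape are illustrative only:

import torch
import torchvision
from torch_ort import ORTInferenceModule  # assumed wrapper API from the project repo

# Any PyTorch model in eval mode can be wrapped; resnet18 is only an example.
model = torchvision.models.resnet18().eval()

# Wrapping routes inference calls through ONNX Runtime's OpenVINO EP.
model = ORTInferenceModule(model)

with torch.no_grad():
    output = model(torch.randn(1, 3, 224, 224))

print(output.shape)  # torch.Size([1, 1000])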
Download files
Download the file for your platform.
Source Distributions
No source distribution files are available for this release.
Built Distribution
torch_ort_infer-0.0.1-py3-none-any.whl (9.7 kB, Python 3)
File details
Details for the file torch_ort_infer-0.0.1-py3-none-any.whl.
File metadata
- Download URL: torch_ort_infer-0.0.1-py3-none-any.whl
- Upload date:
- Size: 9.7 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/4.0.1 CPython/3.9.7
File hashes
Algorithm | Hash digest
---|---
SHA256 | bdf07ac71cbc7413b15aca66400b0f652c257b6ec347834dc11d2543c82b317f
MD5 | 6033868d26d923a2ee1a1ae7a51d7ef1
BLAKE2b-256 | dee197ebbf93639e513330434a0f97abd405a1ba00887f5a0ff898241ec7bc17
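To check that a downloaded wheel is intact, you can recompute a digest locally and compare it with the table above. A minimal sketch using Python's standard hashlib module, assuming the wheel sits in the current directory:

import hashlib

# Expected SHA256 digest, copied from the table above.
EXPECTED_SHA256 = "bdf07ac71cbc7413b15aca66400b0f652c257b6ec347834dc11d2543c82b317f"

with open("torch_ort_infer-0.0.1-py3-none-any.whl", "rb") as f:
    actual = hashlib.sha256(f.read()).hexdigest()

print("OK" if actual == EXPECTED_SHA256 else "hash mismatch")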