A high-performance deep learning inference library
Project description
NVIDIA TensorRT is an SDK that facilitates high-performance machine learning inference. It is designed to work in a complementary fashion with training frameworks such as TensorFlow, PyTorch, and MXNet. It focuses specifically on running an already-trained network quickly and efficiently on NVIDIA hardware.
IMPORTANT: This is a special release of TensorRT designed to work only with TensorRT-LLM. Please refrain from upgrading to this version if you are not using TensorRT-LLM.
To install, please execute the following:
pip install tensorrt --extra-index-url https://pypi.nvidia.com
Or add the index URL to the (space-separated) PIP_EXTRA_INDEX_URL environment variable:
export PIP_EXTRA_INDEX_URL='https://pypi.nvidia.com'
pip install tensorrt
When the extra index URL does not contain https://pypi.nvidia.com, a nested pip install will run with the proper extra index URL hard-coded.
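After installation, a quick import check confirms that the wheel and its native libraries resolved correctly. This is a minimal sketch assuming the standard `tensorrt` module name (matching the `pip install tensorrt` command above); the lean runtime wheel described on this page may instead be imported as `tensorrt_lean`, so adjust the import if needed.

```python
# Minimal post-install sanity check.
# Assumption: the module is named `tensorrt` (as installed by `pip install tensorrt`);
# the lean runtime wheel may expose `tensorrt_lean` instead.
import tensorrt as trt

# Report the version exposed by the Python bindings.
print("TensorRT version:", trt.__version__)

# Instantiating a Runtime forces the native libraries to load,
# which surfaces missing CUDA dependencies early.
runtime = trt.Runtime(trt.Logger(trt.Logger.WARNING))
print("Runtime initialized")
```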
Project details
Download files
Download the file for your platform.
Source Distribution
File details
Details for the file tensorrt_lean-cu12-10.6.0.post1.tar.gz.
File metadata
- Download URL: tensorrt_lean-cu12-10.6.0.post1.tar.gz
- Upload date:
- Size: 18.4 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/5.0.0 CPython/3.10.12
File hashes
Algorithm | Hash digest
---|---
SHA256 | 567fc0781d9ba6f18664c9bc55e99309dba944f2fe02c5dd0387e6aa5c93f8e7
MD5 | db9860d5bec9061c0df109bae28b7960
BLAKE2b-256 | 7e0f4a01e58126c300fa3703045d09c9303cbc0b3f27b5f86d4726c60430bb74
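If the source distribution is downloaded manually, its integrity can be checked against the SHA256 digest listed above. The sketch below assumes the archive was saved under the filename shown in the file details; adjust the path to wherever the file actually lives.

```python
# Verify a downloaded sdist against the SHA256 digest published above.
import hashlib

# Digest copied from the file hashes table on this page.
EXPECTED_SHA256 = "567fc0781d9ba6f18664c9bc55e99309dba944f2fe02c5dd0387e6aa5c93f8e7"

def sha256_of(path: str) -> str:
    """Hash the file in chunks so large archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Filename taken from the file details section; adjust the path as needed.
actual = sha256_of("tensorrt_lean-cu12-10.6.0.post1.tar.gz")
print("match" if actual == EXPECTED_SHA256 else "MISMATCH")
```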