NVIDIA cuTENSOR
Project description
cuTENSOR is a high-performance CUDA library for tensor primitives.
Key Features
- Extensive mixed-precision support:
  - FP64 inputs with FP32 compute.
  - FP32 inputs with FP16, BF16, or TF32 compute.
  - Complex-times-real operations.
  - Conjugate (without transpose) support.
- Support for up to 64-dimensional tensors.
- Arbitrary data layouts.
- Trivially serializable data structures.
- Main computational routines:
  - Direct (i.e., transpose-free) tensor contractions (see the sketch after this list).
  - Tensor reductions (including partial reductions).
  - Element-wise tensor operations:
    - Support for various activation functions.
    - Arbitrary tensor permutations.
    - Conversion between different data types.
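To illustrate the direct contraction routine listed above, here is a minimal, untested sketch of a single-precision contraction C[m,n] = alpha * sum_k A[m,k] * B[k,n] + beta * C[m,n] using the cuTENSOR 1.x C API. The extents, mode labels, and the CHECK macro are illustrative choices rather than part of the library; the documentation linked below is the authoritative reference for the API.

```c
/*
 * Minimal sketch (untested): C[m,n] = alpha * sum_k A[m,k] * B[k,n] + beta * C[m,n]
 * in FP32 via the cuTENSOR 1.x C API. Compile and link with -lcudart -lcutensor.
 */
#include <stdio.h>
#include <stdlib.h>
#include <stdint.h>
#include <cuda_runtime.h>
#include <cutensor.h>

#define CHECK(call)                                                             \
    do {                                                                        \
        cutensorStatus_t s = (call);                                            \
        if (s != CUTENSOR_STATUS_SUCCESS) {                                     \
            fprintf(stderr, "cuTENSOR error: %s\n", cutensorGetErrorString(s)); \
            exit(EXIT_FAILURE);                                                 \
        }                                                                       \
    } while (0)

int main(void) {
    /* Mode labels per tensor; 'k' is the contracted mode. */
    int32_t modeA[] = {'m', 'k'};
    int32_t modeB[] = {'k', 'n'};
    int32_t modeC[] = {'m', 'n'};
    int64_t extentA[] = {64, 32};  /* m, k */
    int64_t extentB[] = {32, 48};  /* k, n */
    int64_t extentC[] = {64, 48};  /* m, n */

    float *A, *B, *C;
    cudaMalloc((void **)&A, 64 * 32 * sizeof(float));
    cudaMalloc((void **)&B, 32 * 48 * sizeof(float));
    cudaMalloc((void **)&C, 64 * 48 * sizeof(float));
    /* A real application would initialize A, B (and C if beta != 0) here. */

    cutensorHandle_t handle;
    CHECK(cutensorInit(&handle));

    /* Tensor descriptors: FP32 data, dense packed layout (NULL strides). */
    cutensorTensorDescriptor_t descA, descB, descC;
    CHECK(cutensorInitTensorDescriptor(&handle, &descA, 2, extentA, NULL,
                                       CUDA_R_32F, CUTENSOR_OP_IDENTITY));
    CHECK(cutensorInitTensorDescriptor(&handle, &descB, 2, extentB, NULL,
                                       CUDA_R_32F, CUTENSOR_OP_IDENTITY));
    CHECK(cutensorInitTensorDescriptor(&handle, &descC, 2, extentC, NULL,
                                       CUDA_R_32F, CUTENSOR_OP_IDENTITY));

    uint32_t alignA, alignB, alignC;
    CHECK(cutensorGetAlignmentRequirement(&handle, A, &descA, &alignA));
    CHECK(cutensorGetAlignmentRequirement(&handle, B, &descB, &alignB));
    CHECK(cutensorGetAlignmentRequirement(&handle, C, &descC, &alignC));

    /* Describe D = alpha * A * B + beta * C, with D aliasing C. */
    cutensorContractionDescriptor_t desc;
    CHECK(cutensorInitContractionDescriptor(&handle, &desc,
                                            &descA, modeA, alignA,
                                            &descB, modeB, alignB,
                                            &descC, modeC, alignC,
                                            &descC, modeC, alignC,
                                            CUTENSOR_COMPUTE_32F));

    cutensorContractionFind_t find;
    CHECK(cutensorInitContractionFind(&handle, &find, CUTENSOR_ALGO_DEFAULT));

    /* No workspace for brevity; supplying one usually unlocks faster kernels. */
    cutensorContractionPlan_t plan;
    CHECK(cutensorInitContractionPlan(&handle, &plan, &desc, &find, 0));

    float alpha = 1.0f, beta = 0.0f;
    CHECK(cutensorContraction(&handle, &plan, &alpha, A, B, &beta, C, C,
                              NULL, 0, /* stream = */ 0));
    cudaDeviceSynchronize();

    cudaFree(A); cudaFree(B); cudaFree(C);
    return 0;
}
```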
Documentation
Please refer to https://docs.nvidia.com/cuda/cutensor/index.html for the cuTENSOR documentation.
Installation
The cuTENSOR wheel can be installed as follows:
pip install cutensor-cuXX
where XX is the CUDA major version (currently CUDA 11 is supported). The package cutensor (without the -cuXX suffix) is considered deprecated.
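As an optional post-install sanity check, a small native program can link against the libcutensor shipped in the wheel and print its version. The sketch below assumes the wheel's include and library directories are passed to the compiler and that libcutensor.so is discoverable by the dynamic linker at run time (for example via LD_LIBRARY_PATH); the exact paths depend on your Python environment and are deliberately not spelled out here.

```c
/* Hedged sketch: report the version of the cuTENSOR library that gets loaded.
 * Build roughly as:
 *   gcc check_cutensor.c -I<cutensor/CUDA include dirs>
 *       -L<dir containing libcutensor.so> -lcutensor -o check_cutensor
 */
#include <stdio.h>
#include <cutensor.h>

int main(void) {
    /* cutensorGetVersion() returns the library version as a single integer. */
    printf("cuTENSOR version: %zu\n", (size_t)cutensorGetVersion());
    return 0;
}
```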
Download files
Built Distributions
Hashes for cutensor_cu11-1.6.1-py3-none-manylinux2014_x86_64.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | e2714bc62793d0df4ff5811a72f59faf3fc18ac5ec08aa1950ddaec99a011947 |
| MD5 | c09776d32d46c3fd44394fc427e753e7 |
| BLAKE2b-256 | 9375ac0348d3228c61758eb80a86b55ee855f87e4bf9be1e72cb4f369d2e433a |
Hashes for cutensor_cu11-1.6.1-py3-none-manylinux2014_aarch64.whl
| Algorithm | Hash digest |
|---|---|
| SHA256 | 393eb7d6d9ac5aa927b6b3c329f5cdbe8a80afd8522d20521e1c306c54339aa0 |
| MD5 | 50f90ff162530d30325eb82f1721385c |
| BLAKE2b-256 | cf587e89d6e362e13af354f14d9a09fa4564fe062d1fcf370685debe56309914 |