
PyTorch domain library for recommendation systems


TorchRec (Beta Release)

Docs

TorchRec is a PyTorch domain library built to provide common sparsity & parallelism primitives needed for large-scale recommender systems (RecSys). It allows authors to train models with large embedding tables sharded across many GPUs.

TorchRec contains:

  • Parallelism primitives that enable easy authoring of large, performant multi-device/multi-node models using hybrid data-parallelism/model-parallelism.
  • The TorchRec sharder can shard embedding tables with different sharding strategies including data-parallel, table-wise, row-wise, table-wise-row-wise, and column-wise sharding.
  • The TorchRec planner can automatically generate optimized sharding plans for models.
  • Pipelined training that overlaps dataloading device transfer (copy to GPU), inter-device communications (input_dist), and computation (forward, backward) for increased performance.
  • Optimized kernels for RecSys powered by FBGEMM.
  • Quantization support for reduced precision training and inference.
  • Common modules for RecSys (a minimal usage sketch follows this list).
  • Production-proven model architectures for RecSys.
  • RecSys datasets (Criteo click logs and MovieLens).
  • Examples of end-to-end training, such as the DLRM event prediction model trained on the Criteo click logs dataset.
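
As a quick taste of the common modules, here is a minimal sketch that builds an EmbeddingBagCollection over two toy tables and feeds it a KeyedJaggedTensor, TorchRec's jagged batch format for variable-length sparse features. The table names, sizes, and feature names below are invented for illustration.

    import torch
    from torchrec.modules.embedding_configs import EmbeddingBagConfig
    from torchrec.modules.embedding_modules import EmbeddingBagCollection
    from torchrec.sparse.jagged_tensor import KeyedJaggedTensor

    # Two toy tables; names, sizes, and feature names are illustrative only.
    ebc = EmbeddingBagCollection(
        tables=[
            EmbeddingBagConfig(name="t1", embedding_dim=8, num_embeddings=100, feature_names=["f1"]),
            EmbeddingBagConfig(name="t2", embedding_dim=8, num_embeddings=100, feature_names=["f2"]),
        ],
        device=torch.device("cpu"),
    )

    # A KeyedJaggedTensor carries variable-length sparse ids per feature:
    #         batch 0   batch 1   batch 2
    # "f1"    [0, 1]    []        [2]
    # "f2"    [3]       [4]       [5, 6, 7]
    features = KeyedJaggedTensor(
        keys=["f1", "f2"],
        values=torch.tensor([0, 1, 2, 3, 4, 5, 6, 7]),
        lengths=torch.tensor([2, 0, 1, 1, 1, 3]),
    )

    pooled = ebc(features)     # KeyedTensor of pooled (summed) embeddings
    print(pooled["f1"].shape)  # torch.Size([3, 8]): one row per batch element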

Installation

TorchRec requires Python >= 3.7 and CUDA >= 11.0 (CUDA is highly recommended for performance but not required). The example below shows how to install with CUDA 11.3. This setup assumes you have conda installed.

Binaries

Experimental binaries for Linux on Python 3.7, 3.8, and 3.9 can be installed via pip wheels:

CUDA

conda install pytorch cudatoolkit=11.3 -c pytorch-nightly
pip install torchrec-nightly

CPU Only

conda install pytorch cpuonly -c pytorch-nightly
pip install torchrec-nightly-cpu

Colab example: introduction + install

See our Colab notebook for an introduction to TorchRec, which includes a runnable installation. - Tutorial Source - Open in Google Colab

From Source

We are currently iterating on the setup experience. For now, we provide manual instructions on how to build from source. The example below shows how to install with CUDA 11.3. This setup assumes you have conda installed.

  1. Install PyTorch. See the PyTorch documentation.

    conda install pytorch cudatoolkit=11.3 -c pytorch-nightly
    
  2. Install Requirements

    pip install -r requirements.txt
    
  3. Next, install FBGEMM_GPU from source (included in the third_party folder of torchrec) by following the directions here. Installing FBGEMM_GPU is optional, but using FBGEMM with CUDA will be much faster. For CUDA 11.3 and the SM80 (Ampere) architecture, the following instructions can be used:

    export CUB_DIR=/usr/local/cuda-11.3/include/cub
    export CUDA_BIN_PATH=/usr/local/cuda-11.3/
    export CUDACXX=/usr/local/cuda-11.3/bin/nvcc
    python setup.py install -DTORCH_CUDA_ARCH_LIST="7.0;8.0"
    

    The last line of the code block above (python setup.py install ...), which builds fbgemm_gpu manually, can be skipped if you do not need custom build flags for fbgemm_gpu; in that case, skip straight to the next step.

  4. Download and install TorchRec.

    git clone --recursive https://github.com/facebookresearch/torchrec
    
    # cd to the directory where torchrec's setup.py is located. Then run one of the below:
    cd torchrec
    python setup.py install develop --skip_fbgemm  # If you manually installed fbgemm_gpu in the previous step.
    python setup.py install develop                # Otherwise. This will run the fbgemm_gpu install step for you behind the scenes.
    python setup.py install develop --cpu_only     # For a CPU only installation of FBGEMM
    
  5. Test the installation.

    torchx run -s local_cwd --script test_installation.py
    

    See TorchX for more information on launching distributed and remote jobs.

  6. If you want to run a more complex example, please take a look at the torchrec DLRM example.
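
    Before moving on to DLRM, the sketch below shows the basic shape of the DistributedModelParallel API: wrapping an EmbeddingBagCollection so that the planner shards its tables across the process group. It is a minimal illustration, assuming a single-process gloo group on CPU just so it runs anywhere; real multi-GPU jobs would be launched via torchx or torch.distributed with NCCL, and the table and feature names are invented.

    import os
    import torch
    import torch.distributed as dist
    from torchrec.distributed.model_parallel import DistributedModelParallel
    from torchrec.modules.embedding_configs import EmbeddingBagConfig
    from torchrec.modules.embedding_modules import EmbeddingBagCollection

    # Single-process gloo group on CPU, purely so the sketch runs anywhere;
    # real jobs are launched with torchx / torch.distributed using NCCL on GPUs.
    os.environ.setdefault("MASTER_ADDR", "localhost")
    os.environ.setdefault("MASTER_PORT", "29500")
    dist.init_process_group(backend="gloo", rank=0, world_size=1)

    # An illustrative single-table model; name and sizes are invented.
    ebc = EmbeddingBagCollection(
        tables=[
            EmbeddingBagConfig(name="t1", embedding_dim=16, num_embeddings=1000, feature_names=["f1"]),
        ],
        device=torch.device("cpu"),
    )

    # DistributedModelParallel asks the planner for a sharding plan
    # (table-wise, row-wise, column-wise, ...) and wraps the module so that
    # forward/backward handle the input_dist communication automatically.
    model = DistributedModelParallel(ebc, device=torch.device("cpu"))
    print(model.plan)  # the sharding plan the planner generated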

License

TorchRec is BSD licensed, as found in the LICENSE file.
