
A native-PyTorch library for large-scale LLM training

Project description

torchtitan


torchtitan is still in pre-release!

torchtitan is in a pre-release state and under extensive development.

torchtitan is a native PyTorch reference architecture showcasing some of the latest PyTorch techniques for large-scale model training.

  • Designed to be easy to understand, use, and extend for different training purposes.
  • Minimal changes to the model code when applying 1D, 2D, or (soon) 3D Parallel.
  • Modular components instead of monolithic codebase.
  • Get started in minutes, not hours!

Please note: torchtitan is a proof of concept for large-scale LLM training using native PyTorch. It is (and will continue to be) a repo to showcase PyTorch's latest distributed training features in a clean, minimal codebase. torchtitan is complementary to, not a replacement for, the great large-scale LLM training codebases such as Megatron, Megablocks, LLM Foundry, DeepSpeed, etc. Instead, we hope that the features showcased in torchtitan will be quickly adopted by these codebases. torchtitan is unlikely to ever grow a large community around it.

Pre-Release Updates:

(4/16/2024): torchtitan is now public, but in a pre-release state and under development. Currently we showcase pre-training Llama2 models (LLMs) of various sizes from scratch.

Key features available:
1 - FSDP2 (per-parameter sharding)
2 - Tensor Parallel (FSDP + Tensor Parallel)
3 - Selective layer and operator activation checkpointing
4 - Distributed checkpointing (async pending)
5 - 3 pre-configured datasets (47K - 144M)
6 - GPU usage, MFU, tokens per second, and other metrics reported and displayed via TensorBoard
7 - Optional fused RMSNorm, learning rate scheduler, meta init, and more
8 - All options easily configured via TOML files

Features coming soon:

1 - Async checkpointing
2 - FP8 support
3 - Context Parallel
4 - 3D (Pipeline Parallel)
5 - torch.compile support

Installation

Install PyTorch from source or install the latest PyTorch nightly, then install the requirements:

pip install -r requirements.txt

Install additional dev requirements if you want to contribute to the repo:

pip install -r dev-requirements.txt

Run the Llama debug model locally to verify that the setup is correct:

./run_llama_train.sh
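The script trains a small debug model by default. If you want to point it at a different training config, the repo's run script reads a config path from the environment; the variable name and file path below are assumptions based on the repo layout, not guarantees of this release:

```shell
# Hypothetical: select a specific training config for the run script.
# CONFIG_FILE and the train_configs/ path are assumed, not documented here.
CONFIG_FILE="./train_configs/debug_model.toml" ./run_llama_train.sh
```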

TensorBoard

To visualize TensorBoard metrics of models trained on a remote server via a local web browser:

  1. Make sure the metrics.enable_tensorboard option is set to true in the model training config (either in a .toml file or via the CLI).

  2. Set up SSH tunneling by running the following from your local CLI:

ssh -L 6006:127.0.0.1:6006 [username]@[hostname]

  3. In the SSH session on the remote server, go to the torchtitan repo and start the TensorBoard backend:

tensorboard --logdir=./outputs/tb

  4. In your local web browser, go to the URL TensorBoard provides, or to http://localhost:6006/.
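For step 1, the relevant options live in a metrics section of the training .toml file. A minimal sketch; apart from enable_tensorboard, the key names (log_freq, save_tb_folder) are assumptions, and the save folder should line up with the --logdir passed to TensorBoard:

```toml
[metrics]
log_freq = 10               # assumed key: how often (in steps) to log metrics
enable_tensorboard = true   # the option named in step 1
save_tb_folder = "tb"       # assumed key: yields ./outputs/tb for --logdir
```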

Multi-Node Training

For training on ParallelCluster/Slurm-type configurations, you can use the multinode_trainer.slurm file to submit your sbatch job.
Note that you will need to adjust the number of nodes and the GPU count to match your cluster configuration.
To adjust total nodes:

#SBATCH --ntasks=2
#SBATCH --nodes=2

should both be set to your total node count. Then update the srun launch parameters to match:

srun torchrun --nnodes 2

where nnodes is your total node count, matching the sbatch node count above.

To adjust GPU count per node:

If your GPU count per node is not 8, adjust:

--nproc_per_node

in the torchrun command and

#SBATCH --gpus-per-task

in the SBATCH command section.
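Putting the node and GPU adjustments together, a cluster of, say, 4 nodes with 4 GPUs each would change the relevant lines roughly as follows (a sketch; all other #SBATCH options and torchrun arguments in multinode_trainer.slurm are left as-is):

```shell
# Sketch for 4 nodes x 4 GPUs per node; adjust to your cluster.
#SBATCH --ntasks=4          # total node count
#SBATCH --nodes=4           # total node count
#SBATCH --gpus-per-task=4   # GPUs per node

srun torchrun --nnodes 4 --nproc_per_node 4
```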



Download files

Download the file for your platform.

Source Distribution

torchtitan-0.0.2.tar.gz (5.2 kB)

Uploaded Source

Built Distribution

torchtitan-0.0.2-py3-none-any.whl (4.9 kB)

Uploaded Python 3

File details

Details for the file torchtitan-0.0.2.tar.gz.

File metadata

  • Download URL: torchtitan-0.0.2.tar.gz
  • Upload date:
  • Size: 5.2 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.13

File hashes

Hashes for torchtitan-0.0.2.tar.gz:

  • SHA256: 14981ecfa3ac1fc6ce220c6700cf17d85f9b1c61cbae0d498211be84db8db5d2
  • MD5: 0e7e09226c84c7014224b314472afd60
  • BLAKE2b-256: fb7383d7c481a9ee1d97d44da7cc3b0dee4c09b04c9fe99cf055ee47f9e18aae


File details

Details for the file torchtitan-0.0.2-py3-none-any.whl.

File metadata

  • Download URL: torchtitan-0.0.2-py3-none-any.whl
  • Upload date:
  • Size: 4.9 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.0.0 CPython/3.10.13

File hashes

Hashes for torchtitan-0.0.2-py3-none-any.whl:

  • SHA256: 6316af7599d3d0b2b541d7018305abb40c6cf5cda41d5cda451391996ff06f6f
  • MD5: aa0541aa4b01b726cb0de3be84eb3eb2
  • BLAKE2b-256: c696007977f62a02259e3cff0400cd43a9cf6d357a4d851232cf29951048bf43

