
NVIDIA's Launcher for TAO Toolkit.

Project description

TAO Toolkit Quick Start Guide

This page provides a quick start guide for installing and running TAO Toolkit.

Requirements

Hardware

The following system configuration is recommended to achieve reasonable training performance with TAO Toolkit and the supported models:

  • 32 GB system RAM
  • 32 GB of GPU RAM
  • 8 core CPU
  • 1 NVIDIA GPU
  • 100 GB of SSD space

TAO Toolkit is supported on A100, V100 and RTX 30x0 GPUs.

Software Requirements

Software                   Version
Ubuntu LTS                 18.04
python                     >=3.6.9
docker-ce                  >19.03.5
docker-API                 1.40
nvidia-container-toolkit   >1.3.0-1
nvidia-container-runtime   3.4.0-1
nvidia-docker2             2.5.0-1
nvidia-driver              >465
python-pip                 >21.06
python-dev                 -
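A quick way to check several of these versions on an existing host is sketched below; the guards let it run even on a machine that is missing a component, and the exact output format will vary by installation:

```shell
# Check a few of the software prerequisites from the table above.
python3 --version                                   # want >=3.6.9
command -v docker >/dev/null && docker --version \
    || echo "docker-ce not installed"
command -v nvidia-smi >/dev/null \
    && nvidia-smi --query-gpu=driver_version --format=csv,noheader \
    || echo "nvidia driver not installed"
```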

Installing the Pre-requisites

The tao-launcher is strictly a Python3-only package, capable of running on Python 3.6.9 or 3.7.

  1. Install docker-ce by following the official instructions.

    Once you have installed docker-ce, follow the post-installation steps to ensure that docker can be run without sudo.
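As a quick sanity check after the post-installation steps, the following sketch (assuming a bash shell) reports whether your user is already in the docker group, which is what those steps grant:

```shell
# Verify sudo-less docker access: membership in the "docker" group
if id -nG | grep -qw docker; then
    echo "user is in the docker group"
else
    echo "not yet: run 'sudo usermod -aG docker \$USER', then log out and back in"
fi
```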

  2. Install nvidia-container-toolkit by following the install-guide.

  3. Get an NGC account and API key:

    a. Go to NGC and click the TAO Toolkit container in the Catalog tab. This message is displayed: "Sign in to access the PULL feature of this repository."
    b. Enter your email address and click Next, or click Create an Account.
    c. Choose your organization when prompted for Organization/Team.
    d. Click Sign In.

  4. Log in to the NGC docker registry (nvcr.io) using the command docker login nvcr.io and enter the following credentials:

      a. Username: $oauthtoken
      b. Password: YOUR_NGC_API_KEY
    

    where YOUR_NGC_API_KEY corresponds to the key you generated from step 3.
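For scripted setups, the same login can be done non-interactively. The sketch below assumes the key has been exported in an environment variable named NGC_API_KEY (a name chosen here for illustration):

```shell
# Non-interactive NGC registry login. The username is the literal string
# $oauthtoken, so it is single-quoted to prevent shell expansion.
if command -v docker >/dev/null; then
    echo "$NGC_API_KEY" | docker login nvcr.io --username '$oauthtoken' --password-stdin
else
    echo "docker is not installed"
fi
```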

DeepStream 6.0, the NVIDIA SDK for IVA inference, is also recommended.

Installing TAO Toolkit

TAO Toolkit is a Python pip package that is hosted on the NVIDIA PyIndex. The package uses the Docker REST API under the hood to interact with the NGC Docker registry to pull and instantiate the underlying docker containers. You must have an NGC account and an API key associated with your account. See the Installing the Pre-requisites section for details on creating an NGC account and obtaining an API key.
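To illustrate the mechanism (not the launcher's actual code), the sketch below pings the Docker Engine REST API over its default unix socket; the socket path and API version 1.40 are assumptions taken from the requirements table above:

```shell
# Ping the Docker Engine REST API over its default unix socket, the same
# transport the launcher relies on under the hood.
if [ -S /var/run/docker.sock ]; then
    curl --silent --unix-socket /var/run/docker.sock http://localhost/v1.40/_ping
    echo    # the _ping endpoint answers "OK" when the daemon is up
else
    echo "Docker daemon socket not found"
fi
```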

  1. Create a new virtualenv using virtualenvwrapper.

    You may follow the instructions in this link to set up a Python virtualenv using a virtualenvwrapper.

    Once you have followed the instructions to install virtualenv and virtualenvwrapper, set the Python version in the virtualenv. This can be done in either of the following ways:

    • Defining the environment variable VIRTUALENVWRAPPER_PYTHON. This variable should point to the path where the python3 binary is installed on your local machine. You can also add this line to your .bashrc or .bash_profile to set your Python virtualenv by default.

      export VIRTUALENVWRAPPER_PYTHON=/usr/bin/python3
      
    • Setting the path to the python3 binary when creating your virtualenv with the virtualenvwrapper command

      mkvirtualenv launcher -p /path/to/your/python3
      

    Once you have activated the virtualenv, the command prompt should show the name of your virtual environment:

    (launcher) py-3.6.9 desktop:
    

    When you are done with your session, you may deactivate your virtualenv using the deactivate command:

    deactivate
    

    You may re-activate this virtualenv using the workon command.

    workon launcher
    
  2. Install the TAO Launcher Python package called nvidia-tlt.

    pip3 install nvidia-tlt
    

    If you installed an older version of the nvidia-tlt launcher, you may upgrade to the latest version by running the following command.

    pip3 install --upgrade nvidia-tlt
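You can confirm which launcher version ended up installed with pip itself; the guard keeps the check harmless if the package is absent:

```shell
# Report the installed launcher version, if any
pip3 show nvidia-tlt 2>/dev/null | grep "^Version" \
    || echo "nvidia-tlt is not installed in this environment"
```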
    
  3. Invoke the entrypoints using the tlt command.

    tlt --help
    

    The sample output of the above command is:

    usage: tlt [-h]
             {list,stop,info,augment,bpnet,classification,detectnet_v2,dssd,emotionnet,faster_rcnn,fpenet,gazenet,gesturenet,
             heartratenet,intent_slot_classification,lprnet,mask_rcnn,punctuation_and_capitalization,question_answering,
             retinanet,speech_to_text,ssd,text_classification,converter,token_classification,unet,yolo_v3,yolo_v4,yolo_v4_tiny}
             ...
    
    Launcher for TAO
    
    optional arguments:
    -h, --help            show this help message and exit
    
    tasks:
          {list,stop,info,augment,bpnet,classification,detectnet_v2,dssd,emotionnet,faster_rcnn,fpenet,gazenet,gesturenet,heartratenet
         ,intent_slot_classification,lprnet,mask_rcnn,punctuation_and_capitalization,question_answering,retinanet,speech_to_text,
         ssd,text_classification,converter,token_classification,unet,yolo_v3,yolo_v4,yolo_v4_tiny}  
    

    Note that under tasks you can see all the launcher-invokable tasks. The following tasks help with handling commands launched through the TAO Launcher:

    • list
    • stop
    • info

    If you install the TAO Toolkit Launcher to your host machine's native python3, as opposed to the recommended route of using a virtual environment, you may get an error saying that the tlt binary wasn't found. This is because the path to the tlt binary installed by pip wasn't added to the PATH environment variable on your machine. In this case, run the following command:

    export PATH=$PATH:~/.local/bin
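To make that change persist in future shells (assuming bash is your login shell), append the same line to your .bashrc:

```shell
# Persist the PATH change for future bash sessions
echo 'export PATH=$PATH:~/.local/bin' >> ~/.bashrc
```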
    

Running the TAO Toolkit

Information about the TAO Launcher CLI and details on using it to run TAO supported tasks are captured in the TAO Toolkit Launcher section of the TAO Toolkit User Guide.

Use the examples

Example Jupyter notebooks for all the tasks that are supported in TAO Toolkit are available in NGC resources. TAO Toolkit provides sample workflows for Computer Vision and Conversational AI.

Computer Vision

All the samples for the supported computer vision tasks are hosted on NGC under TAO Computer Vision Samples. To run the available examples, download this sample resource by using the following commands.

wget --content-disposition https://api.ngc.nvidia.com/v2/resources/nvidia/tao/cv_samples/versions/v1.3.0/zip -O cv_samples_v1.3.0.zip
unzip -u cv_samples_v1.3.0.zip -d ./cv_samples_v1.3.0 && rm -rf cv_samples_v1.3.0.zip && cd ./cv_samples_v1.3.0

Conversational AI

The TAO Conversational AI package provides several end-to-end sample workflows for training conversational AI models using TAO Toolkit and subsequently deploying them to Riva. You can find these samples at:

Conversational AI Task            Jupyter Notebooks
Speech to Text                    Speech to Text Notebook
Speech to Text Citrinet           Speech to Text Citrinet Notebook
Question Answering                Question Answering Notebook
Text Classification               Text Classification Notebook
Token Classification              Token Classification Notebook
Punctuation and Capitalization    Punctuation Capitalization Notebook
Intent and Slot Classification    Intent Slot Classification Notebook
NGram Language Model              NGram Language Model Notebook
Text to Speech                    Text to Speech Notebook

You can download these resources by using the NGC CLI command available on the NGC resource page. Once you have downloaded the respective tutorial resource, you may instantiate the Jupyter notebook server.

pip3 install jupyter
jupyter notebook --ip 0.0.0.0 --allow-root --port 8888

Copy and paste the link this command produces into your browser to access the notebook. The /workspace/examples folder contains a demo notebook. If port 8888 is unavailable, feel free to host the notebook on any free port.
