
Transformers Model Optimization Tool of ONNXRuntime

Transformer Model Optimization Tool Overview

ONNX Runtime automatically applies most optimizations while loading a transformer model. Some of the latest optimizations that have not yet been integrated into ONNX Runtime are available in this tool, which tunes models for best performance.

This tool can help in the following scenarios:

  • The model is exported by tf2onnx or keras2onnx, and ONNX Runtime does not yet have graph optimizations for those exporters.
  • You want to convert the model to float16 to boost performance through mixed precision on GPUs with Tensor Cores (like V100 or T4).
  • The model has inputs with dynamic axes, which blocks some optimizations from being applied in ONNX Runtime because they depend on shape inference.
  • You want to disable or enable certain fusions to evaluate their impact on performance or accuracy.

Installation

First, install the onnxruntime or onnxruntime-gpu package for CPU or GPU inference respectively. To use onnxruntime-gpu, you also need to install CUDA and cuDNN and add their bin directories to the PATH environment variable.
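For example, one of the following commands installs the runtime package, depending on your target device:

pip install onnxruntime
pip install onnxruntime-gpu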

This tool can be installed using pip as follows:

pip install onnxruntime-tools

In your Python code, you can use it as follows:

from onnxruntime_tools import optimizer
optimized_model = optimizer.optimize_model("gpt2.onnx", model_type='gpt2', num_heads=12, hidden_size=768)
optimized_model.convert_model_float32_to_float16()
optimized_model.save_model_to_file("gpt2_fp16.onnx")

You can also use a command like the following to optimize a model:

python -m onnxruntime_tools.optimizer_cli --input gpt2.onnx --output gpt2_opt.onnx --model_type gpt2

If you want to use the latest scripts, you can get the script files from the ONNX Runtime GitHub repository, then run them as follows:

python optimizer.py --input gpt2.onnx --output gpt2_opt.onnx --model_type gpt2

Export a transformer model to ONNX

PyTorch can export models to ONNX. The tf2onnx and keras2onnx tools can be used to convert models trained with TensorFlow. Hugging Face Transformers has a notebook that shows an example of exporting a pretrained model to ONNX. For keras2onnx, please refer to its example script. For tf2onnx, please refer to its BERT tutorial.
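As a rough illustration, a pretrained Hugging Face BERT model can be exported with torch.onnx.export like the sketch below. The model name, sequence length, and input/axis names are illustrative assumptions, not requirements of this tool:

# Minimal sketch: export a pretrained Hugging Face BERT model to ONNX with PyTorch.
# The model name, dummy shapes, and input/axis names below are illustrative.
import torch
from transformers import BertModel

# torchscript=True makes the model return plain tuples, which is friendly to tracing.
model = BertModel.from_pretrained("bert-base-cased", torchscript=True)
model.eval()

batch_size, sequence_length = 1, 128
input_ids = torch.ones(batch_size, sequence_length, dtype=torch.int64)
attention_mask = torch.ones(batch_size, sequence_length, dtype=torch.int64)
token_type_ids = torch.zeros(batch_size, sequence_length, dtype=torch.int64)

torch.onnx.export(
    model,
    (input_ids, attention_mask, token_type_ids),
    "bert_base.onnx",
    opset_version=11,
    input_names=["input_ids", "attention_mask", "token_type_ids"],
    output_names=["last_hidden_state", "pooler_output"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "sequence"},
        "attention_mask": {0: "batch", 1: "sequence"},
        "token_type_ids": {0: "batch", 1: "sequence"},
    },
)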

Model Optimizer

Example of using the script optimizer.py to optimize a BERT-large model to run on a V100 GPU:

python -m onnxruntime_tools.optimizer_cli --input bert_large.onnx --output bert_large_fp16.onnx --num_heads 16 --hidden_size 1024 --float16
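The same optimization can also be done from Python using the API shown earlier; the file names here are illustrative:

from onnxruntime_tools import optimizer
optimized_model = optimizer.optimize_model("bert_large.onnx", model_type='bert', num_heads=16, hidden_size=1024)
optimized_model.convert_model_float32_to_float16()
optimized_model.save_model_to_file("bert_large_fp16.onnx")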

Options

Some options of optimizer.py are described below; an example command combining several of them follows the list:

  • input: input model path
  • output: output model path
  • model_type: (default: bert) There are 4 model types: bert (BERT exported by PyTorch), gpt2 (GPT-2 exported by PyTorch), bert_tf (BERT exported by tf2onnx), and bert_keras (BERT exported by keras2onnx).
  • num_heads: (default: 12) Number of attention heads. BERT-base and BERT-large have 12 and 16 attention heads respectively.
  • hidden_size: (default: 768) BERT-base and BERT-large have hidden sizes of 768 and 1024 respectively.
  • input_int32: (optional) An exported model usually uses int64 tensors as inputs. If this flag is specified, int32 tensors will be used instead, which avoids unnecessary Cast nodes and can improve performance.
  • float16: (optional) By default, the model uses float32 for computation. If this flag is specified, half-precision floats will be used instead. This option is recommended for NVIDIA GPUs with Tensor Cores, like V100 and T4. For older GPUs, float32 is likely faster.
  • verbose: (optional) Print verbose information when this flag is specified.
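For example, the following command combines several of these options to optimize a BERT model exported by keras2onnx (the model file names are illustrative):

python -m onnxruntime_tools.optimizer_cli --input bert_keras.onnx --output bert_keras_opt.onnx --model_type bert_keras --num_heads 12 --hidden_size 768 --input_int32 --verbose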

Supported Models

Right now, this tool assumes the input model has 3 inputs: input IDs, segment IDs, and attention mask. A model with fewer or additional inputs might not be fully optimized.

Most optimizations require an exact match of a subgraph. Any layout change in the subgraph might prevent some optimizations from being applied. Note that different versions of training or export tools might lead to different graph layouts.

Here is a list of models from Hugging Face Transformers that have been tested using this tool:

  • BertForSequenceClassification as in transformers example exported by PyTorch 1.2-1.4 using opset version 10 or 11.
  • BertForQuestionAnswering as in transformers example exported by PyTorch 1.2-1.4 using opset version 10 or 11.
  • TFBertForSequenceClassification as in transformers example exported by keras2onnx installed from its master source.
  • TFBertForQuestionAnswering exported by keras2onnx installed from its master source.
  • GPT2Model exported by PyTorch 1.4 using opset version 10 or 11.
  • GPT2LMHeadModel exported by PyTorch 1.4 using opset version 10 or 11.

If your model is not in the list, the optimized model might not work. You are welcome to update the scripts to support new models.

For GPT-2 models, the current optimization does not support past state (either inputs or outputs). You need to disable it in transformers by setting enable_cache=False during export.

Benchmark

The benchmark script requires PyTorch to be installed.

You can run the benchmark script to see the inference speed of OnnxRuntime. Here is an example of running the benchmark on the pretrained model bert-base-cased on GPU:

python -m onnxruntime_tools.transformers.benchmark -g -m bert-base-cased -o -v -b 0
python -m onnxruntime_tools.transformers.benchmark -g -m bert-base-cased -o
python -m onnxruntime_tools.transformers.benchmark -g -m bert-base-cased -e torch
python -m onnxruntime_tools.transformers.benchmark -g -m bert-base-cased -e torchscript

The first command will generate ONNX models (both before and after optimizations), but will not run performance tests since the batch size is set to 0. The other three commands will run performance tests on three engines: OnnxRuntime, PyTorch, and PyTorch+TorchScript.

If you remove the -o parameter, the optimizer is not used in the benchmark.

If your GPU (like V100 or T4) has Tensor Cores, you can append --fp16 to the above commands to enable mixed precision using float16.

If you want to benchmark on CPU, you can remove the -g option from the commands.

Note that our current benchmark of the GPT-2 model disables the past state in inputs and outputs.

By default, the ONNX model has only one input (input_ids). You can use the -i parameter to test models with more inputs. For example, adding "-i 3" to the command line tests a BERT model with 3 inputs (input_ids, token_type_ids, and attention_mask). The performance result might be different. This option only supports OnnxRuntime right now.
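For example, building on the earlier commands:

python -m onnxruntime_tools.transformers.benchmark -g -m bert-base-cased -o -i 3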

Model Verification

If your model has three inputs (like input_ids, token_type_ids, and attention_mask), the script compare_bert_results.py can be used for a quick verification. The tool generates some fake input data and compares results from the original and optimized models. If the outputs are all close, it is safe to use the optimized model.

Example of verifying models optimized for CPU:

python -m onnxruntime_tools.transformers.compare_bert_results --baseline_model original_model.onnx --optimized_model optimized_model_cpu.onnx --batch_size 1 --sequence_length 128 --samples 100

For GPU, please append --use_gpu to the command.
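If you want to do a similar spot check by hand, the sketch below compares two models directly with onnxruntime and NumPy. The file names, input names, shapes, and tolerance are illustrative assumptions, and the bundled script above is more thorough:

# Minimal hand-rolled comparison, assuming both models expose the three standard BERT inputs.
import numpy as np
import onnxruntime

batch_size, sequence_length = 1, 128
inputs = {
    "input_ids": np.random.randint(0, 30522, size=(batch_size, sequence_length), dtype=np.int64),
    "token_type_ids": np.zeros((batch_size, sequence_length), dtype=np.int64),
    "attention_mask": np.ones((batch_size, sequence_length), dtype=np.int64),
}

baseline = onnxruntime.InferenceSession("original_model.onnx")
optimized = onnxruntime.InferenceSession("optimized_model_cpu.onnx")

# Run both models on the same random input and check that each output is close.
for expected, actual in zip(baseline.run(None, inputs), optimized.run(None, inputs)):
    print("all close:", np.allclose(expected, actual, atol=1e-4))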

Performance Test

The script bert_perf_test.py can be used to check model inference performance. Below is an example:

python -m onnxruntime_tools.transformers.bert_perf_test --model optimized_model_cpu.onnx --batch_size 1 --sequence_length 128 --samples 100 --test_times 10 --inclusive

For GPU, please append --use_gpu to the command.

After the test is finished, a file like perf_results_CPU_B1_S128_<date_time>.txt or perf_results_GPU_B1_S128_<date_time>.txt will be written to the model directory.
