Convert tokenizers into OpenVINO models

Project description

OpenVINO Tokenizers

NOTE: This version is pre-release software and has not undergone full release validation or qualification. No support is offered on pre-release software and APIs/behavior are subject to change. It should NOT be incorporated into any production software/solution and instead should be used only for early testing and integration while awaiting a final release version of this software.

OpenVINO Tokenizers adds text processing operations to OpenVINO.

Features

  • Perform tokenization and detokenization without third-party dependencies
  • Convert a HuggingFace tokenizer into OpenVINO tokenizer and detokenizer models
  • Combine OpenVINO models into a single model
  • Add a greedy decoding pipeline to a text generation model

Installation

(Recommended) Create and activate a virtual environment:

python3 -m venv venv
source venv/bin/activate
 # or
conda create --name openvino_tokenizers
conda activate openvino_tokenizers

Minimal Installation

Use minimal installation when you have a converted OpenVINO tokenizer:

pip install openvino-tokenizers
 # or
conda install -c conda-forge openvino openvino-tokenizers
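
To verify the installation, importing the package should succeed and register the tokenizer operations with OpenVINO. A minimal smoke test, assuming nothing beyond the package itself:

import openvino_tokenizers  # registers tokenizer operations with OpenVINO on import
from openvino import Core

core = Core()  # converted (de)tokenizer models can now be read and compiled
print("openvino-tokenizers is ready")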

Convert Tokenizers Installation

If you want to convert HuggingFace tokenizers into OpenVINO tokenizers:

pip install openvino-tokenizers[transformers]
 # or
conda install -c conda-forge openvino openvino-tokenizers && pip install transformers[sentencepiece] tiktoken

Install Pre-release Version

Use openvino-tokenizers[transformers] to install tokenizers conversion dependencies.

pip install --pre -U openvino openvino-tokenizers --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly

Build and Install from Source

Using OpenVINO PyPI package

The openvino-tokenizers build depends on the openvino package, which will be automatically installed from PyPI during the build process. To use unreleased versions, install the openvino package from the nightly distribution channel with --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly:

git clone https://github.com/openvinotoolkit/openvino_tokenizers.git
cd openvino_tokenizers
pip install . --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly

This command is the equivalent of the minimal installation. Install the tokenizers conversion dependencies if needed:

pip install transformers[sentencepiece] tiktoken

:warning: The latest commit of OpenVINO Tokenizers might rely on features that are not present in the released OpenVINO version. Use a nightly build of OpenVINO or build OpenVINO Tokenizers from a release branch if you have issues with the build process.

Using OpenVINO archive

Install the OpenVINO archive distribution. Use --no-deps to avoid installing OpenVINO from PyPI into your current environment; --extra-index-url is needed only to resolve build dependencies.

source path/to/installed/openvino/setupvars.sh
git clone https://github.com/openvinotoolkit/openvino_tokenizers.git
cd openvino_tokenizers
pip install --no-deps . --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly

This command is the equivalent of the minimal installation. Install the tokenizers conversion dependencies if needed:

pip install transformers[sentencepiece] tiktoken

:warning: The latest commit of OpenVINO Tokenizers might rely on features that are not present in the released OpenVINO version. Use a nightly build of OpenVINO or build OpenVINO Tokenizers from a release branch if you have issues with the build process.

Build and install for development

Using OpenVINO PyPI package

git clone https://github.com/openvinotoolkit/openvino_tokenizers.git
cd openvino_tokenizers
pip install -e .[all] --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
# verify installation by running tests
cd tests/
pytest .

Using OpenVINO archive

Install the OpenVINO archive distribution. Use --no-deps to avoid installing OpenVINO from PyPI into your current environment; --extra-index-url is needed only to resolve build dependencies.

source path/to/installed/openvino/setupvars.sh
git clone https://github.com/openvinotoolkit/openvino_tokenizers.git
cd openvino_tokenizers
pip install -e .[all] --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
# verify installation by running tests
cd tests/
pytest .

C++ Installation

You can use converted tokenizers in C++ pipelines with prebuilt binaries.

  1. Download the OpenVINO archive distribution for your OS from here and extract the archive.
  2. Download the OpenVINO Tokenizers prebuilt libraries from here. To ensure compatibility, the first three numbers of the OpenVINO Tokenizers version should match your OpenVINO version, and the archive should match your OS.
  3. Extract the OpenVINO Tokenizers archive into the OpenVINO installation directory. The OpenVINO Tokenizers archive keeps its structure aligned with the OpenVINO archive:
    • Windows: <openvino_dir>\runtime\bin\intel64\Release\
    • MacOS_x86: <openvino_dir>/runtime/lib/intel64/Release
    • MacOS_arm64: <openvino_dir>/runtime/lib/arm64/Release/
    • Linux_x86: <openvino_dir>/runtime/lib/intel64/
    • Linux_arm64: <openvino_dir>/runtime/lib/aarch64/

After that, you can add the binary extension in your code with:

  • core.add_extension("openvino_tokenizers.dll") for Windows
  • core.add_extension("libopenvino_tokenizers.dylib") for MacOS
  • core.add_extension("libopenvino_tokenizers.so") for Linux

and then read/compile converted (de)tokenizer models. If you use version 2023.3.0.0, the binary extension file is called (lib)user_ov_extension.(dll/dylib/so).
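
For reference, the same add_extension call is available in the Python API if you prefer to load the extension manually instead of importing openvino_tokenizers. A minimal sketch, assuming a Linux library name and an already converted tokenizer model:

from openvino import Core

core = Core()
# the library name and path depend on your platform, see the list above
core.add_extension("libopenvino_tokenizers.so")
# converted (de)tokenizer models can now be read and compiled
compiled_tokenizer = core.compile_model("openvino_tokenizer.xml", "CPU")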

Reducing the ICU Data Size

By default, all available ICU locales are supported, which significantly increases the package size. To reduce the size of the ICU libraries included in your final package, follow these steps:

  1. Use the ICU Data Configuration File:

    • This file specifies which features and locales to include in a custom data bundle. You can find more information here.
  2. Set the ICU Data Filter File as an Environment Variable:

    • On Unix-like systems (Linux, macOS): Set the ICU_DATA_FILTER_FILE environment variable to the path of your configuration file (filters.json):

      export ICU_DATA_FILTER_FILE="filters.json"
      
    • On Windows: Set the ICU_DATA_FILTER_FILE environment variable using the Command Prompt or PowerShell:

      Command Prompt:

      set ICU_DATA_FILTER_FILE=filters.json
      

      PowerShell:

      $env:ICU_DATA_FILTER_FILE="filters.json"
      
  3. Create a Configuration File:

    • An example configuration file (filters.json) might look like this:
    {
      "localeFilter": {
        "filterType": "language",
        "includelist": [
          "en"
        ]
      }
    }
    
  4. Configure OpenVINO Tokenizers:

    • When building OpenVINO tokenizers, set the following CMake option during the project configuration:
    -DBUILD_FAST_TOKENIZERS=ON
    
    • Example for a pip installation path:
    ICU_DATA_FILTER_FILE=</path/to/filters.json> pip install git+https://github.com/openvinotoolkit/openvino_tokenizers.git --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly --config-settings=override=cmake.options.BUILD_FAST_TOKENIZERS=ON
    

By following these instructions, you can effectively reduce the size of the ICU libraries in your final package.

Build OpenVINO Tokenizers without FastTokenizer Library

If a tokenizer doesn't use CaseFold, UnicodeNormalization or Wordpiece operations, you can drastically reduce the package binary size by building OpenVINO Tokenizers without the FastTokenizer dependency, using this flag:

-DENABLE_FAST_TOKENIZERS=OFF

This option can also help with building for a platform that is not supported by FastTokenizer, for example Android x86_64.

Example for a pip installation path:

pip install git+https://github.com/openvinotoolkit/openvino_tokenizers.git --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly --config-settings=override=cmake.options.ENABLE_FAST_TOKENIZERS=OFF

Usage

:warning: OpenVINO Tokenizers models can be inferred on a CPU device only.

Convert HuggingFace tokenizer

OpenVINO Tokenizers ships with a CLI tool that can convert tokenizers from the Huggingface Hub or Huggingface tokenizers saved on disk:

convert_tokenizer codellama/CodeLlama-7b-hf --with-detokenizer -o output_dir
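
If the conversion succeeds, output_dir should contain openvino_tokenizer.xml and openvino_detokenizer.xml (plus the corresponding .bin files); these load like any other OpenVINO model, as the examples below show.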

There is also a convert_tokenizer function that can convert a tokenizer Python object.

import numpy as np
from transformers import AutoTokenizer
from openvino import compile_model, save_model
from openvino_tokenizers import convert_tokenizer

hf_tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
ov_tokenizer = convert_tokenizer(hf_tokenizer)

compiled_tokenizer = compile_model(ov_tokenizer)
text_input = ["Test string"]

hf_output = hf_tokenizer(text_input, return_tensors="np")
ov_output = compiled_tokenizer(text_input)

for output_name in hf_output:
    print(f"OpenVINO {output_name} = {ov_output[output_name]}")
    print(f"HuggingFace {output_name} = {hf_output[output_name]}")
# OpenVINO input_ids = [[ 101 3231 5164  102]]
# HuggingFace input_ids = [[ 101 3231 5164  102]]
# OpenVINO token_type_ids = [[0 0 0 0]]
# HuggingFace token_type_ids = [[0 0 0 0]]
# OpenVINO attention_mask = [[1 1 1 1]]
# HuggingFace attention_mask = [[1 1 1 1]]

# save tokenizer for later use
save_model(ov_tokenizer, "openvino_tokenizer.xml")

loaded_tokenizer = compile_model("openvino_tokenizer.xml")
loaded_ov_output = loaded_tokenizer(text_input)
for output_name in hf_output:
    assert np.all(loaded_ov_output[output_name] == ov_output[output_name])

Connect Tokenizer to a Model

To convert and infer the original model, install torch or torch-cpu into the virtual environment.

from transformers import AutoTokenizer, AutoModelForSequenceClassification
from openvino import compile_model, convert_model
from openvino_tokenizers import convert_tokenizer, connect_models

checkpoint = "mrm8488/bert-tiny-finetuned-sms-spam-detection"
hf_tokenizer = AutoTokenizer.from_pretrained(checkpoint)
hf_model = AutoModelForSequenceClassification.from_pretrained(checkpoint)

text_input = ["Free money!!!"]
hf_input = hf_tokenizer(text_input, return_tensors="pt")
hf_output = hf_model(**hf_input)

ov_tokenizer = convert_tokenizer(hf_tokenizer)
ov_model = convert_model(hf_model, example_input=hf_input.data)
combined_model = connect_models(ov_tokenizer, ov_model)
compiled_combined_model = compile_model(combined_model)

openvino_output = compiled_combined_model(text_input)

print(f"OpenVINO logits: {openvino_output['logits']}")
# OpenVINO logits: [[ 1.2007061 -1.4698029]]
print(f"HuggingFace logits {hf_output.logits}")
# HuggingFace logits tensor([[ 1.2007, -1.4698]], grad_fn=<AddmmBackward0>)
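
The combined model accepts raw strings as input, so the tokenizer and the model can be deployed as a single artifact; if needed, save it with save_model(combined_model, "combined_model.xml") (the file name here is only an example).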

Use Extension With Converted (De)Tokenizer or Model With (De)Tokenizer

Importing openvino_tokenizers adds all tokenizer-related operations to OpenVINO, after which you can work with saved tokenizers and detokenizers.

import numpy as np
import openvino_tokenizers
from openvino import Core

core = Core()

# detokenizer from codellama sentencepiece model
compiled_detokenizer = core.compile_model("detokenizer.xml")

token_ids = np.random.randint(100, 1000, size=(3, 5))
openvino_output = compiled_detokenizer(token_ids)

print(openvino_output["string_output"])
# ['sc�ouition�', 'intvenord hasient', 'g shouldwer M more']

Text generation pipeline

import numpy as np
from openvino import compile_model, convert_model
from openvino_tokenizers import add_greedy_decoding, convert_tokenizer
from transformers import AutoModelForCausalLM, AutoTokenizer


model_checkpoint = "JackFram/llama-68m"
hf_tokenizer = AutoTokenizer.from_pretrained(model_checkpoint)
hf_model = AutoModelForCausalLM.from_pretrained(model_checkpoint, use_cache=False)

# convert hf tokenizer
text_input = ["Quick brown fox jumped "]
ov_tokenizer, ov_detokenizer = convert_tokenizer(hf_tokenizer, with_detokenizer=True)
compiled_tokenizer = compile_model(ov_tokenizer)

# transform input text into tokens
ov_input = compiled_tokenizer(text_input)
hf_input = hf_tokenizer(text_input, return_tensors="pt")

# convert Pytorch model to OpenVINO IR and add greedy decoding pipeline to it
ov_model = convert_model(hf_model, example_input=hf_input.data)
ov_model_with_greedy_decoding = add_greedy_decoding(ov_model)
compiled_model = compile_model(ov_model_with_greedy_decoding)

# generate new tokens
new_tokens_size = 10
prompt_size = ov_input["input_ids"].shape[-1]
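# pad every model input with zeros to reserve room for the tokens to be generated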
input_dict = {
    output.any_name: np.hstack([tensor, np.zeros(shape=(1, new_tokens_size), dtype=np.int_)])
    for output, tensor in ov_input.items()
}
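# run the model step by step, writing each newly predicted token and its attention-mask bit back into the inputs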
for idx in range(prompt_size, prompt_size + new_tokens_size):
    output = compiled_model(input_dict)["token_ids"]
    input_dict["input_ids"][:, idx] = output[:, idx - 1]
    input_dict["attention_mask"][:, idx] = 1
ov_token_ids = input_dict["input_ids"]

hf_token_ids = hf_model.generate(
    **hf_input,
    min_new_tokens=new_tokens_size,
    max_new_tokens=new_tokens_size,
    temperature=0,  # greedy decoding
)

# decode model output
compiled_detokenizer = compile_model(ov_detokenizer)
ov_output = compiled_detokenizer(ov_token_ids)["string_output"]
hf_output = hf_tokenizer.batch_decode(hf_token_ids, skip_special_tokens=True)
print(f"OpenVINO output string: `{ov_output}`")
# OpenVINO output string: `['Quick brown fox was walking through the forest. He was looking for something']`
print(f"HuggingFace output string: `{hf_output}`")
# HuggingFace output string: `['Quick brown fox was walking through the forest. He was looking for something']`

TensorFlow Text Integration

OpenVINO Tokenizers includes converters for certain TensorFlow Text operations. Currently, only the MUSE model is supported. Here is an example of model conversion and inference:

import numpy as np
import tensorflow_hub as hub
import tensorflow_text  # register tf text ops
from openvino import convert_model, compile_model
import openvino_tokenizers  # register ov tokenizer ops and translators


sentences = ["dog",  "I cuccioli sono carini.", "私は犬と一緒にビーチを散歩するのが好きです"]
tf_embed = hub.load(
    "https://www.kaggle.com/models/google/universal-sentence-encoder/frameworks/"
    "TensorFlow2/variations/multilingual/versions/2"
)
# convert model that uses Sentencepiece tokenizer op from TF Text
ov_model = convert_model(tf_embed)
ov_embed = compile_model(ov_model, "CPU")

ov_result = ov_embed(sentences)[ov_embed.output()]
tf_result = tf_embed(sentences)

assert np.all(np.isclose(ov_result, tf_result, atol=1e-4))

RWKV Tokenizer

from urllib.request import urlopen

from openvino import compile_model
from openvino_tokenizers import build_rwkv_tokenizer


rwkv_vocab_url = (
    "https://raw.githubusercontent.com/BlinkDL/ChatRWKV/main/tokenizer/rwkv_vocab_v20230424.txt"
)

with urlopen(rwkv_vocab_url) as vocab_file:
    vocab = map(bytes.decode, vocab_file)
    tokenizer, detokenizer = build_rwkv_tokenizer(vocab)

tokenizer, detokenizer = compile_model(tokenizer), compile_model(detokenizer)

print(tokenized := tokenizer(["Test string"])["input_ids"])  # [[24235 47429]]
print(detokenizer(tokenized)["string_output"])  # ['Test string']

Supported Tokenizer Types

Huggingface Tokenizer Type  Tokenizer Model Type  Tokenizer  Detokenizer
Fast                        WordPiece             ✅          ✅
                            BPE                   ✅          ✅
                            Unigram               ❌          ❌
Legacy                      SentencePiece .model  ✅          ✅
Custom                      tiktoken              ✅          ✅
RWKV                        Trie                  ✅          ✅

Test Results

This report is autogenerated and includes tokenizer and detokenizer tests. The Output Matched, % column shows the percentage of test strings for which the results of OpenVINO and Huggingface Tokenizers are the same. To update the report, run pytest --update_readme tokenizers_test.py in the tests directory, as shown below:
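
cd tests/
pytest --update_readme tokenizers_test.py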

Output Match by Tokenizer Type

Tokenizer Type Output Matched, % Number of Tests
BPE 95.57 5932
SentencePiece 88.23 6534
Tiktoken 99.19 494
WordPiece 99.10 1327

Output Match by Model

Tokenizer Type Model Output Matched, % Number of Tests
BPE EleutherAI/gpt-j-6b 95.29 255
BPE EleutherAI/gpt-neo-125m 95.29 255
BPE EleutherAI/gpt-neox-20b 95.82 239
BPE EleutherAI/pythia-12b-deduped 95.82 239
BPE KoboldAI/fairseq-dense-13B 96.65 239
BPE NousResearch/Meta-Llama-3-8B-Instruct 100.00 241
BPE Salesforce/codegen-16B-multi 96.08 255
BPE Xenova/gpt-4o 100.00 255
BPE ai-forever/rugpt3large_based_on_gpt2 94.51 255
BPE bigscience/bloom 97.49 239
BPE databricks/dolly-v2-3b 95.82 239
BPE deepseek-ai/deepseek-coder-6.7b-instruct 100.00 257
BPE facebook/bart-large-mnli 95.29 255
BPE facebook/galactica-120b 95.82 239
BPE facebook/opt-66b 96.65 239
BPE gpt2 95.29 255
BPE laion/CLIP-ViT-bigG-14-laion2B-39B-b160k 75.29 255
BPE microsoft/deberta-base 96.65 239
BPE roberta-base 95.29 255
BPE sentence-transformers/all-roberta-large-v1 95.29 255
BPE stabilityai/stablecode-completion-alpha-3b-4k 95.82 239
BPE stabilityai/stablelm-2-1_6b 100.00 239
BPE stabilityai/stablelm-tuned-alpha-7b 95.82 239
BPE tiiuae/falcon-7b 94.51 255
SentencePiece NousResearch/Llama-2-13b-hf 96.65 239
SentencePiece NousResearch/Llama-2-13b-hf_legacy 100.00 239
SentencePiece NousResearch/Llama-2-13b-hf_sp_backend 100.00 239
SentencePiece THUDM/chatglm2-6b_legacy 100.00 153
SentencePiece THUDM/chatglm3-6b_legacy 50.97 155
SentencePiece camembert-base 52.30 239
SentencePiece camembert-base_legacy 76.15 239
SentencePiece codellama/CodeLlama-7b-hf 96.65 239
SentencePiece codellama/CodeLlama-7b-hf_legacy 96.65 239
SentencePiece codellama/CodeLlama-7b-hf_sp_backend 94.98 239
SentencePiece facebook/musicgen-small 84.52 239
SentencePiece facebook/musicgen-small_legacy 79.92 239
SentencePiece microsoft/Phi-3-mini-128k-instruct 95.85 241
SentencePiece microsoft/Phi-3-mini-128k-instruct_legacy 95.85 241
SentencePiece microsoft/Phi-3-mini-128k-instruct_sp_backend 94.19 241
SentencePiece microsoft/deberta-v3-base 96.65 239
SentencePiece microsoft/deberta-v3-base_legacy 100.00 239
SentencePiece mlx-community/quantized-gemma-7b-it 99.17 241
SentencePiece mlx-community/quantized-gemma-7b-it_legacy 99.17 241
SentencePiece mlx-community/quantized-gemma-7b-it_sp_backend 100.00 241
SentencePiece rinna/bilingual-gpt-neox-4b 80.75 239
SentencePiece rinna/bilingual-gpt-neox-4b_legacy 86.61 239
SentencePiece t5-base 85.77 239
SentencePiece t5-base_legacy 81.17 239
SentencePiece xlm-roberta-base 96.23 239
SentencePiece xlm-roberta-base_legacy 96.23 239
SentencePiece xlnet-base-cased 65.27 239
SentencePiece xlnet-base-cased_legacy 59.41 239
Tiktoken Qwen/Qwen-14B-Chat 100.00 255
Tiktoken THUDM/glm-4-9b 98.33 239
WordPiece ProsusAI/finbert 100.00 107
WordPiece bert-base-multilingual-cased 100.00 107
WordPiece bert-base-uncased 100.00 107
WordPiece cointegrated/rubert-tiny2 100.00 107
WordPiece distilbert-base-uncased-finetuned-sst-2-english 100.00 107
WordPiece google/electra-base-discriminator 100.00 107
WordPiece google/mobilebert-uncased 100.00 91
WordPiece jhgan/ko-sbert-sts 100.00 107
WordPiece prajjwal1/bert-mini 100.00 91
WordPiece rajiv003/ernie-finetuned-qqp 100.00 91
WordPiece rasa/LaBSE 88.79 107
WordPiece sentence-transformers/all-MiniLM-L6-v2 100.00 107
WordPiece squeezebert/squeezebert-uncased 100.00 91

Recreating Tokenizers From Tests

For some tokenizers, you need to select specific conversion settings so that their output matches the Huggingface tokenizers more closely (see the sketch after this list):

  • THUDM/chatglm2-6b detokenizer always skips special tokens. Use skip_special_tokens=True during conversion
  • THUDM/chatglm3-6b detokenizer doesn't skip special tokens. Use skip_special_tokens=False during conversion
  • All tested tiktoken based detokenizers leave extra spaces. Use clean_up_tokenization_spaces=False during conversion
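
For example, a conversion that applies one of these settings might look like the following sketch; it assumes convert_tokenizer accepts the flags above as keyword arguments and uses one of the tiktoken models from the table:

from transformers import AutoTokenizer
from openvino_tokenizers import convert_tokenizer

hf_tokenizer = AutoTokenizer.from_pretrained("Qwen/Qwen-14B-Chat", trust_remote_code=True)
ov_tokenizer, ov_detokenizer = convert_tokenizer(
    hf_tokenizer,
    with_detokenizer=True,
    # tiktoken-based detokenizers leave extra spaces unless cleanup is disabled
    clean_up_tokenization_spaces=False,
)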

Project details


Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distributions

No source distribution files are available for this release. See the tutorial on generating distribution archives.

Built Distributions

openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-win_amd64.whl (14.2 MB)

Uploaded Python 3 Windows x86-64

openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-manylinux_2_31_aarch64.whl (14.0 MB)

Uploaded Python 3 manylinux: glibc 2.31+ ARM64

openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-macosx_11_0_arm64.whl (13.8 MB)

Uploaded Python 3 macOS 11.0+ ARM64

openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-macosx_10_15_x86_64.whl (13.9 MB)

Uploaded Python 3 macOS 10.15+ x86-64

File details

Details for the file openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-win_amd64.whl.

File metadata

File hashes

Hashes for openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-win_amd64.whl
Algorithm Hash digest
SHA256 f0b92fb83d43106e6fe777137efd1074997ad21b3b018d69013df2b132afe5b9
MD5 e8175a0873fe628b3f598df42d223cf8
BLAKE2b-256 38f444115fe3fd507cdb2216da30c28c7caa1ba4ed6fa169f436b20d0b2055db

See more details on using hashes here.

File details

Details for the file openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-manylinux_2_31_aarch64.whl.

File metadata

File hashes

Hashes for openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-manylinux_2_31_aarch64.whl
Algorithm Hash digest
SHA256 9bd8a2d5232e7ef2272217dee5b726ef6261d259b5ba47d37b7dab2236cd33bb
MD5 202e8c324a46e9deb6be545e3acfdb4f
BLAKE2b-256 f95c29c191bda48ea611d95d16dcd96c5385dbdaaff30c094f011d79ee831150

See more details on using hashes here.

File details

Details for the file openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-manylinux2014_x86_64.whl.

File metadata

File hashes

Hashes for openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-manylinux2014_x86_64.whl
Algorithm Hash digest
SHA256 130810ff6d32a234b7c84a1108879179219e5586019a3a0d3a6cebdb5b8866ee
MD5 73b54f1b71ff846e5c7bcc47e671803a
BLAKE2b-256 78b3091b02991330a04dd22e0337cb68f63ed642ce0575c6df458f9a95443867

See more details on using hashes here.

File details

Details for the file openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-macosx_11_0_arm64.whl.

File metadata

File hashes

Hashes for openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-macosx_11_0_arm64.whl
Algorithm Hash digest
SHA256 5863c40ede925c43a81853c6e8589ed1150aaf290f68fa03a1b323fba256349d
MD5 803e6a8ceb65944c7b31c6680f9f71a9
BLAKE2b-256 d880f5e66ec1e51dc29a0d94af5179b6d44fdc25c9872ef73832c6bf0723f770

See more details on using hashes here.

File details

Details for the file openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-macosx_10_15_x86_64.whl.

File metadata

File hashes

Hashes for openvino_tokenizers-2024.4.1.0.dev20240926-py3-none-macosx_10_15_x86_64.whl
Algorithm Hash digest
SHA256 b0dc14703518d12313d30e9887b5df40e91bfbc04d53101269bf7f114917414e
MD5 f01718f5187e07b4dd23527f10396fe7
BLAKE2b-256 3d26413b1ecb97a7bfa9e417d223ee2b68a85926935427f627826f3e6f1a2d52

See more details on using hashes here.
