Library of the most popular Generative AI model pipelines, optimized execution methods, and samples

OpenVINO™ GenAI Library

OpenVINO™ GenAI is a flavor of OpenVINO™, aiming to simplify running inference of generative AI models. It hides the complexity of the generation process and minimizes the amount of code required.

Install OpenVINO™ GenAI

NOTE: Please make sure that you follow the version compatibility rules; refer to the OpenVINO™ GenAI Dependencies section for more information.

The OpenVINO™ GenAI flavor is available for installation via Archive and PyPI distributions. To install OpenVINO™ GenAI, refer to the Install Guide.

To build OpenVINO™ GenAI library from source, refer to the Build Instructions.

OpenVINO™ GenAI Dependencies

OpenVINO™ GenAI depends on OpenVINO and OpenVINO Tokenizers.

When installing OpenVINO™ GenAI from PyPI, matching versions of OpenVINO and OpenVINO Tokenizers are installed (e.g. openvino==2024.3.0 and openvino-tokenizers==2024.3.0.0 are installed for openvino-genai==2024.3.0). If you update one of the dependency packages (e.g. pip install openvino --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly), the versions may become incompatible due to ABI differences, and running OpenVINO GenAI can fail with errors such as ImportError: libopenvino.so.2430: cannot open shared object file: No such file or directory. With package versions in the format <MAJOR>.<MINOR>.<PATCH>.<REVISION>, only the <REVISION> part may vary while preserving ABI compatibility; changing the <MAJOR>, <MINOR>, or <PATCH> parts might break the ABI.
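
For example, to install a matching set explicitly (using the versions named above; substitute the release you need):

python -m pip install openvino==2024.3.0 openvino-tokenizers==2024.3.0.0 openvino-genai==2024.3.0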

GenAI, Tokenizers, and OpenVINO wheels for Linux on PyPI are compiled with _GLIBCXX_USE_CXX11_ABI=0 to cover a wider range of platforms. In contrast, C++ archive distributions for Ubuntu are compiled with _GLIBCXX_USE_CXX11_ABI=1. It is not possible to mix different Application Binary Interfaces (ABIs) because doing so results in a link error. This incompatibility prevents the use of, for example, OpenVINO from C++ archive distributions alongside GenAI from PyPI.

If you want to try OpenVINO GenAI with dependency versions other than those in the prebuilt packages (archives or Python wheels), build the OpenVINO GenAI library from source.

Usage

Prerequisites

  1. Installed OpenVINO™ GenAI

    To use OpenVINO GenAI with models that are already in OpenVINO format, no additional Python dependencies are needed. To convert models with optimum-cli and to run the examples, install the dependencies listed in ./samples/requirements.txt:

    # (Optional) Clone OpenVINO GenAI repository if it does not exist
    git clone --recursive https://github.com/openvinotoolkit/openvino.genai.git
    cd openvino.genai
    # Install python dependencies
    python -m pip install ./thirdparty/openvino_tokenizers/[transformers] --pre --extra-index-url https://storage.openvinotoolkit.org/simple/wheels/nightly
    python -m pip install --upgrade-strategy eager -r ./samples/requirements.txt
    
  2. A model in OpenVINO IR format

    Download and convert a model with optimum-cli:

    optimum-cli export openvino --model "TinyLlama/TinyLlama-1.1B-Chat-v1.0" --trust-remote-code "TinyLlama-1.1B-Chat-v1.0"
    

LLMPipeline is the main object used for decoding. You can construct it straight away from the folder with the converted model. It will automatically load the main model, tokenizer, detokenizer and default generation configuration.

Python

A simple example:

import openvino_genai as ov_genai
pipe = ov_genai.LLMPipeline(models_path, "CPU")
print(pipe.generate("The Sun is yellow because", max_new_tokens=100))

Calling generate with custom generation config parameters, e.g. config for grouped beam search:

import openvino_genai as ov_genai
pipe = ov_genai.LLMPipeline(models_path, "CPU")

result = pipe.generate("The Sun is yellow because", max_new_tokens=100, num_beam_groups=3, num_beams=15, diversity_penalty=1.5)
print(result)

output:

'it is made up of carbon atoms. The carbon atoms are arranged in a linear pattern, which gives the yellow color. The arrangement of carbon atoms in'

A simple chat in Python:

import openvino_genai as ov_genai
pipe = ov_genai.LLMPipeline(models_path)

config = {'max_new_tokens': 100, 'num_beam_groups': 3, 'num_beams': 15, 'diversity_penalty': 1.5}
pipe.set_generation_config(config)

pipe.start_chat()
while True:
    print('question:')
    prompt = input()
    if prompt == 'Stop!':
        break
    print(pipe(prompt, max_new_tokens=200))
pipe.finish_chat()

A test that compares results with Hugging Face outputs is available in the repository.
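
A minimal sketch of such a comparison (illustrative, not the repository's test itself; it assumes the model was exported without weight compression, since quantization can change the continuation):

from transformers import AutoModelForCausalLM, AutoTokenizer
import openvino_genai as ov_genai

prompt = "The Sun is yellow because"
model_id = "TinyLlama/TinyLlama-1.1B-Chat-v1.0"

# Hugging Face reference, greedy decoding
hf_tokenizer = AutoTokenizer.from_pretrained(model_id)
hf_model = AutoModelForCausalLM.from_pretrained(model_id)
inputs = hf_tokenizer(prompt, return_tensors="pt")
hf_ids = hf_model.generate(**inputs, max_new_tokens=50, do_sample=False)
hf_text = hf_tokenizer.decode(hf_ids[0, inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# OpenVINO GenAI, greedy decoding is the default generation config
pipe = ov_genai.LLMPipeline("TinyLlama-1.1B-Chat-v1.0", "CPU")
ov_text = pipe.generate(prompt, max_new_tokens=50)

# For the same non-quantized weights the greedy continuations should match
assert hf_text == ov_text, f"HF: {hf_text!r}\nOV: {ov_text!r}"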

C++

A simple example:

#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>

int main(int argc, char* argv[]) {
    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");
    std::cout << pipe.generate("The Sun is yellow because", ov::genai::max_new_tokens(256));
}

Using group beam search decoding:

#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>

int main(int argc, char* argv[]) {
    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");

    ov::genai::GenerationConfig config;
    config.max_new_tokens = 256;
    config.num_beam_groups = 3;
    config.num_beams = 15;
    config.diversity_penalty = 1.0f;

    std::cout << pipe.generate("The Sun is yellow because", config);
}

A simple chat in C++ using grouped beam search decoding:

#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>

int main(int argc, char* argv[]) {
    std::string prompt;

    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");

    ov::genai::GenerationConfig config;
    config.max_new_tokens = 100;
    config.num_beam_groups = 3;
    config.num_beams = 15;
    config.diversity_penalty = 1.0f;

    pipe.start_chat();
    for (;;) {
        std::cout << "question:\n";
        std::getline(std::cin, prompt);
        if (prompt == "Stop!")
            break;

        std::cout << "answer:\n";
        auto answer = pipe(prompt, config);
        std::cout << answer << std::endl;
    }
    pipe.finish_chat();
}

Streaming example with lambda function:

#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>

int main(int argc, char* argv[]) {
    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");

    auto streamer = [](std::string word) {
        std::cout << word << std::flush;
        // The returned flag indicates whether generation should be stopped.
        // false means continue generation.
        return false;
    };
    std::cout << pipe.generate("The Sun is yellow because", ov::genai::streamer(streamer), ov::genai::max_new_tokens(200));
}

Streaming with a custom class:

C++ template for a streamer.

#include "openvino/genai/streamer_base.hpp"
#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>

class CustomStreamer: public ov::genai::StreamerBase {
public:
    bool put(int64_t token) override {
        // Custom decoding/tokens processing logic.

        // Return a flag indicating whether generation should be stopped; returning true stops generation.
        return false;
    }

    void end() override {
        // Custom finalization logic.
    }
};

int main(int argc, char* argv[]) {
    auto custom_streamer = std::make_shared<CustomStreamer>();

    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");
    std::cout << pipe.generate("The Sun is yellow because", ov::genai::max_new_tokens(15), ov::genai::streamer(custom_streamer));
}

Python template for a streamer.

import openvino_genai as ov_genai

class CustomStreamer(ov_genai.StreamerBase):
    def __init__(self):
        super().__init__()
        # Initialization logic.

    def put(self, token_id) -> bool:
        # Custom decoding/tokens processing logic.

        # Return a flag indicating whether generation should be stopped; True stops generation.
        return False

    def end(self):
        # Custom finalization logic.
        pass

pipe = ov_genai.LLMPipeline(models_path, "CPU")
custom_streamer = CustomStreamer()

pipe.generate("The Sun is yellow because", max_new_tokens=15, streamer=custom_streamer)

For a fully implemented iterable CustomStreamer, refer to the multinomial_causal_lm sample; a condensed sketch of the idea is shown below.
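
The sketch below (illustrative names, not the sample's exact code) runs generation on a background thread while the main thread iterates over decoded text chunks; the real sample additionally handles cases this sketch glosses over, such as partially decoded characters:

import queue
import threading
import openvino_genai as ov_genai

class IterableStreamer(ov_genai.StreamerBase):
    def __init__(self, tokenizer):
        super().__init__()
        self.tokenizer = tokenizer
        self.tokens = []
        self.printed_len = 0
        self.text_queue = queue.Queue()

    def __iter__(self):
        return self

    def __next__(self):
        chunk = self.text_queue.get()  # blocks until a chunk or the None sentinel arrives
        if chunk is None:
            raise StopIteration
        return chunk

    def put(self, token_id) -> bool:
        self.tokens.append(token_id)
        text = self.tokenizer.decode(self.tokens)  # re-decode the whole sequence (fine for a sketch)
        # Emit only the newly decoded tail and skip incomplete characters.
        if len(text) > self.printed_len and not text.endswith('\ufffd'):
            self.text_queue.put(text[self.printed_len:])
            self.printed_len = len(text)
        return False  # False means: do not stop generation

    def end(self):
        self.text_queue.put(None)  # sentinel that terminates the iterator

pipe = ov_genai.LLMPipeline(models_path, "CPU")
streamer = IterableStreamer(pipe.get_tokenizer())

worker = threading.Thread(
    target=lambda: pipe.generate("The Sun is yellow because", max_new_tokens=100, streamer=streamer)
)
worker.start()
for chunk in streamer:
    print(chunk, end='', flush=True)
worker.join()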

Continuous batching with LLMPipeline:

To activate continuous batching, pass the additional ov::genai::scheduler_config property to the LLMPipeline configuration. This property holds a SchedulerConfig struct.

#include "openvino/genai/llm_pipeline.hpp"

int main(int argc, char* argv[]) {
    ov::genai::SchedulerConfig scheduler_config;
    // fill other fields in scheduler_config with custom data if required
    scheduler_config.cache_size = 1;    // minimal possible KV cache size in GB, adjust as required

    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU", ov::genai::scheduler_config(scheduler_config));
}
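
A Python counterpart might look as follows; this is a sketch that assumes the Python binding exposes SchedulerConfig and accepts it as the scheduler_config keyword (check the API of the installed version):

import openvino_genai as ov_genai

scheduler_config = ov_genai.SchedulerConfig()
scheduler_config.cache_size = 1  # KV cache size in GB, adjust as required

# scheduler_config keyword assumed here; see help(ov_genai.LLMPipeline) for the installed package
pipe = ov_genai.LLMPipeline(models_path, "CPU", scheduler_config=scheduler_config)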

Performance Metrics

openvino_genai.PerfMetrics (referred to as PerfMetrics for simplicity) is a structure that holds performance metrics for each generate call. PerfMetrics holds fields with mean and standard deviation values for the following metrics:

  • Time To the First Token (TTFT), ms
  • Time per Output Token (TPOT), ms/token
  • Generate total duration, ms
  • Tokenization duration, ms
  • Detokenization duration, ms
  • Throughput, tokens/s

and:

  • Load time, ms
  • Number of generated tokens
  • Number of tokens in the input prompt

Performance metrics are stored in the perf_metrics field of either DecodedResults or EncodedResults. In addition to the fields mentioned above, PerfMetrics has a member raw_metrics of type openvino_genai.RawPerfMetrics (referred to as RawPerfMetrics for simplicity) that contains raw values for the durations of each batch of new token generation, tokenization durations, detokenization durations, and more. These raw metrics are accessible if you wish to calculate your own statistical values such as median or percentiles. However, since mean and standard deviation values are usually sufficient, we will focus on PerfMetrics.

Python

import openvino_genai as ov_genai
pipe = ov_genai.LLMPipeline(models_path, "CPU")
result = pipe.generate(["The Sun is yellow because"], max_new_tokens=20)
perf_metrics = result.perf_metrics

print(f'Generate duration: {perf_metrics.get_generate_duration().mean:.2f}')
print(f'TTFT: {perf_metrics.get_ttft().mean:.2f} ms')
print(f'TPOT: {perf_metrics.get_tpot().mean:.2f} ms/token')
print(f'Throughput: {perf_metrics.get_throughput().mean:.2f} tokens/s')
#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>

int main(int argc, char* argv[]) {
    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");
    auto result = pipe.generate("The Sun is yellow because", ov::genai::max_new_tokens(20));
    auto perf_metrics = result.perf_metrics;

    std::cout << std::fixed << std::setprecision(2);
    std::cout << "Generate duration: " << perf_metrics.get_generate_duration().mean << " ms" << std::endl;
    std::cout << "TTFT: " << metrics.get_ttft().mean  << " ms" << std::endl;
    std::cout << "TPOT: " << metrics.get_tpot().mean  << " ms/token " << std::endl;
    std::cout << "Throughput: " << metrics.get_throughput().mean  << " tokens/s" << std::endl;
}

output:

Generate duration: 76.28 ms
TTFT: 42.58 ms
TPOT: 3.80 ms/token

Note: If the input prompt is just a string, the generate function returns only a string without perf_metrics. To obtain perf_metrics, provide the prompt as a list with at least one element or call generate with encoded inputs.
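
A minimal illustration of the two call forms:

import openvino_genai as ov_genai

pipe = ov_genai.LLMPipeline(models_path, "CPU")

text_only = pipe.generate("The Sun is yellow because", max_new_tokens=20)    # plain string, no perf_metrics
results = pipe.generate(["The Sun is yellow because"], max_new_tokens=20)    # DecodedResults with perf_metrics
print(f'TTFT: {results.perf_metrics.get_ttft().mean:.2f} ms')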

Accumulating metrics

Several perf_metrics objects can be added to each other. In that case, their raw_metrics are concatenated and the mean/std values are recalculated. This accumulates statistics from several generate() calls.

#include "openvino/genai/llm_pipeline.hpp"
#include <iostream>

int main(int argc, char* argv[]) {
    std::string models_path = argv[1];
    ov::genai::LLMPipeline pipe(models_path, "CPU");
    auto result_1 = pipe.generate("The Sun is yellow because", ov::genai::max_new_tokens(20));
    auto result_2 = pipe.generate("The Sun is yellow because", ov::genai::max_new_tokens(20));
    auto perf_metrics = result_1.perf_metrics + result_2.perf_metrics

    std::cout << std::fixed << std::setprecision(2);
    std::cout << "Generate duration: " << perf_metrics.get_generate_duration().mean << " ms" << std::endl;
    std::cout << "TTFT: " << metrics.get_ttft().mean  << " ms" << std::endl;
    std::cout << "TPOT: " << metrics.get_tpot().mean  << " ms/token " << std::endl;
    std::cout << "Throughput: " << metrics.get_throughput().mean  << " tokens/s" << std::endl;
}

Python

import openvino_genai as ov_genai
pipe = ov_genai.LLMPipeline(models_path, "CPU")
res_1 = pipe.generate(["The Sun is yellow because"], max_new_tokens=20)
res_2 = pipe.generate(["Why Sky is blue because"], max_new_tokens=20)
perf_metrics = res_1.perf_metrics + res_2.perf_metrics

print(f'Generate duration: {perf_metrics.get_generate_duration().mean:.2f}')
print(f'TTFT: {perf_metrics.get_ttft().mean:.2f} ms')
print(f'TPOT: {perf_metrics.get_tpot().mean:.2f} ms/token')
print(f'Throughput: {perf_metrics.get_throughput().mean:.2f} tokens/s')

Using raw performance metrics

In addition to mean and standard deviation values, the perf_metrics object has a raw_metrics field. This field stores raw data, including:

  • Timestamps for each batch of generated tokens
  • Batch sizes for each timestamp
  • Tokenization durations
  • Detokenization durations
  • Other relevant metrics

These metrics can be used for more fine-grained analysis, such as calculating exact median values, percentiles, and so on. Below are a few examples of how to use raw metrics.

Getting timestamps for each generated token:

import openvino_genai as ov_genai
pipe = ov_genai.LLMPipeline(models_path, "CPU")
result = pipe.generate(["The Sun is yellow because"], max_new_tokens=20)
perf_metrics = result.perf_metrics
raw_metrics = perf_metrics.raw_metrics

print(f'Generate duration: {perf_metrics.get_generate_duration().mean:.2f}')
print(f'Throughput: {perf_metrics.get_throughput().mean:.2f} tokens/s')
print(f'Timestamps: {" ms, ".join(f"{i:.2f}" for i in raw_metrics.m_new_token_times)}')

Getting pure inference time without tokenization and detokenization duration:

import openvino_genai as ov_genai
import numpy as np
pipe = ov_genai.LLMPipeline(models_path, "CPU")
result = pipe.generate(["The Sun is yellow because"], max_new_tokens=20)
perf_metrics = result.perf_metrics
print(f'Generate duration: {perf_metrics.get_generate_duration().mean:.2f} ms')

raw_metrics = perf_metrics.raw_metrics
generate_duration = np.array(raw_metrics.generate_durations)
tok_detok_duration = np.array(raw_metrics.tokenization_durations) + np.array(raw_metrics.detokenization_durations)
pure_inference_duration = np.sum(generate_duration - tok_detok_duration) / 1000 # in milliseconds
print(f'Pure Inference duration: {pure_inference_duration:.2f} ms')

Example of using raw metrics to calculate median value of generate duration:

import openvino_genai as ov_genai
import numpy as np
pipe = ov_genai.LLMPipeline(models_path, "CPU")
result = pipe.generate(["The Sun is yellow because"], max_new_tokens=20)
perf_metrics = result.perf_metrics
raw_metrics = perf_metrics.raw_metrics

print(f'Generate duration: {perf_metrics.get_generate_duration().mean:.2f}')
print(f'Throughput: {perf_metrics.get_throughput().mean:.2f} tokens/s')
durations = np.array(raw_metrics.m_new_token_times[1:]) - np.array(raw_metrics.m_new_token_times[:-1])
print(f'Median from token to token duration: {np.median(durations):.2f} ms')

For more examples of how metrics are used, please refer to the Python benchmark_genai.py and C++ benchmark_genai samples.

How It Works

For information on how OpenVINO™ GenAI works, refer to the How It Works Section.

Supported Models

For a list of supported models, refer to the Supported Models Section.
