
OpenAI Instrumentation Package

Project description

OpenTelemetry Instrumentation for OpenAI

An OpenTelemetry instrumentation for the openai client library.

This instrumentation currently supports only the Chat Completions API.

We currently support the following features:

  • Sync and async chat completions
  • Streaming support
  • Function calling with tools
  • Client-side metrics
  • The 1.27.0 GenAI semantic conventions

Installation

pip install elastic-opentelemetry-instrumentation-openai

Usage

This instrumentation supports zero-code auto-instrumentation:

opentelemetry-instrument python use_openai.py

# Record message content in GenAI events, exported as log events.
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true opentelemetry-instrument python use_openai.py

# Record message content in GenAI events, exported as span events.
OTEL_INSTRUMENTATION_GENAI_CAPTURE_MESSAGE_CONTENT=true ELASTIC_OTEL_GENAI_EVENTS=span opentelemetry-instrument python use_openai.py

Or manual instrumentation:

import openai
from opentelemetry.instrumentation.openai import OpenAIInstrumentor

OpenAIInstrumentor().instrument()

# assumes that at least the OPENAI_API_KEY environment variable is set
client = openai.Client()

messages = [
    {
        "role": "user",
        "content": "Answer in up to 3 words: Which ocean contains the Canary Islands?",
    }
]

chat_completion = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
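Conceptually, instrument() wraps the client's request methods so every call is recorded as a span. The sketch below illustrates that wrap-and-record pattern only; it is not the package's actual implementation, which emits real OpenTelemetry spans and metrics, and all names in it are hypothetical:

```python
import functools
import time

RECORDED_SPANS = []  # stand-in for an OpenTelemetry span exporter

def record_span(operation_name):
    # Illustrative decorator: wrap a callable, time it, and record one
    # "span" per call, whether the call succeeds or raises.
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                RECORDED_SPANS.append({
                    "name": operation_name,
                    "duration_s": time.perf_counter() - start,
                })
        return wrapper
    return decorator

@record_span("chat gpt-4o-mini")
def fake_completion(messages):
    # Stand-in for client.chat.completions.create(...)
    return {"choices": [{"message": {"content": "Atlantic Ocean"}}]}
```

The real instrumentation applies this kind of wrapping to the openai client for you, which is why no application code changes are needed beyond the instrument() call.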

Instrumentation-specific environment variable configuration

  • ELASTIC_OTEL_GENAI_EVENTS (default: log): when set to span, exports GenAI events as span events instead of log events.
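A minimal sketch of how such a variable can be read and validated (a hypothetical helper, not the package's internal logic):

```python
import os

def genai_events_destination():
    # Hypothetical helper: returns "span" or "log" when
    # ELASTIC_OTEL_GENAI_EVENTS holds a recognized value; None means
    # "unset or unrecognized, fall back to the package default".
    value = os.environ.get("ELASTIC_OTEL_GENAI_EVENTS", "").strip().lower()
    return value if value in ("span", "log") else None
```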

Elastic specific semantic conventions

  • A new embeddings value for gen_ai.operation.name
  • A new gen_ai.request.encoding_format attribute with OpenAI-specific values (float, base64)
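For illustration, a span for an embeddings request using these conventions might carry attributes like the following (the values shown are examples, not output captured from the instrumentation):

```python
# Example attribute set for a hypothetical embeddings span.
attributes = {
    "gen_ai.system": "openai",                  # standard GenAI attribute
    "gen_ai.operation.name": "embeddings",      # Elastic-specific value
    "gen_ai.request.encoding_format": "float",  # OpenAI-specific: "float" or "base64"
}
```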

Development

We use pytest to execute tests written with the standard library unittest framework.

Test dependencies need to be installed before running:

python3 -m venv .venv
source .venv/bin/activate
pip install -r dev-requirements.txt

pytest

To run integration tests doing real requests:

OPENAI_API_KEY=unused pytest --integration-tests

Refreshing HTTP payloads

We use VCR.py to record HTTP responses from LLMs and replay them in tests without calling the LLM. Refreshing HTTP payloads may be needed in these cases:

  • Adding a new unit test
  • Extending a unit test with functionality that requires an up-to-date HTTP response

Integration tests default to using Ollama to avoid cost and leaking sensitive information. However, unit test recordings should use the authoritative OpenAI platform unless the test targets a specific portability corner case.

To refresh a test, delete its cassette file in tests/cassettes and make sure you have environment variables set for recordings, detailed later.

If writing a new test, start with the test logic and no assertions. If extending an existing unit test rather than writing a new one, remove the corresponding recorded response from tests/cassettes instead.

Then run pytest as normal. It will execute a request against the LLM and record it. Update the test with the correct assertions until it passes. Subsequent pytest runs will use the recorded response without querying the LLM.

OpenAI Environment Variables

Azure OpenAI Environment Variables

Azure differs from OpenAI primarily in that the model is implicit in the URL: Azure ignores the model parameter set by the OpenAI SDK. The implication is that one endpoint cannot serve both chat and embeddings at the same time, so we need separate environment variables for chat and embeddings. In either case, the DEPLOYMENT_URL is the "Endpoint Target URI" and the API_KEY is the Endpoint Key for the corresponding deployment in https://oai.azure.com/resource/deployments
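A sketch of how per-operation endpoints might be wired up from the environment. The variable names and helper below are illustrative only, not names defined by this package:

```python
import os

def azure_endpoint_config(operation):
    # Because an Azure deployment URL implies the model, chat and
    # embeddings each need their own endpoint and key. The environment
    # variable names here are hypothetical examples.
    prefix = {"chat": "AZURE_CHAT", "embeddings": "AZURE_EMBEDDINGS"}[operation]
    return {
        "deployment_url": os.environ[prefix + "_DEPLOYMENT_URL"],
        "api_key": os.environ[prefix + "_API_KEY"],
    }
```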

License

This software is licensed under the Apache License, version 2.0 ("Apache-2.0").


File details

Details for the file elastic_opentelemetry_instrumentation_openai-0.4.0.tar.gz.


File hashes

Hashes for elastic_opentelemetry_instrumentation_openai-0.4.0.tar.gz
Algorithm Hash digest
SHA256 d96c03aec761e84e7b9ebda3b4e4021e37bd6e33bd5b5ffdf168ed3498b35301
MD5 b2216f117091c78ebc89a13951d9ec57
BLAKE2b-256 570fbae4209e1386cf6e396d0ed7b68525213b2e9905b6549711d7806d4b7965


Provenance

The following attestation bundles were made for elastic_opentelemetry_instrumentation_openai-0.4.0.tar.gz:

Publisher: release-openai.yml on elastic/elastic-otel-python-instrumentations


File details

Details for the file elastic_opentelemetry_instrumentation_openai-0.4.0-py3-none-any.whl.


File hashes

Hashes for elastic_opentelemetry_instrumentation_openai-0.4.0-py3-none-any.whl
Algorithm Hash digest
SHA256 8de0f0b8f739250f26e83266b8b09e5d6188ed94c330b85a6c65fffb291fa951
MD5 d557fb412f8f4b87b78b213f004f27b3
BLAKE2b-256 0954bcc3db4aea8669a65a57e7ddb254134f03f7c6e4b8e581efd9b5ed7340ce


Provenance

The following attestation bundles were made for elastic_opentelemetry_instrumentation_openai-0.4.0-py3-none-any.whl:

Publisher: release-openai.yml on elastic/elastic-otel-python-instrumentations

