Microsoft Azure Evaluation Library for Python
Project description
Azure AI Evaluation client library for Python
We are excited to introduce the public preview of the Azure AI Evaluation SDK.
Source code | Package (PyPI) | API reference documentation | Product documentation | Samples
This package has been tested with Python 3.8, 3.9, 3.10, 3.11, and 3.12.
For a more complete set of Azure libraries, see https://aka.ms/azsdk/python/all
Getting started
Prerequisites
- Python 3.8 or later is required to use this package.
Install the package
Install the Azure AI Evaluation library for Python with pip:
pip install azure-ai-evaluation
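The release notes below also mention a "remote" extra (renamed from "pf-azure" in 1.0.0b4) for the remote tracking feature; if you need that feature, installing with the extra would look like this:
pip install azure-ai-evaluation[remote]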
Key concepts
Evaluators are custom or prebuilt classes or functions that are designed to measure the quality of the outputs from language models.
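As a quick illustration (not part of the SDK), a custom evaluator can be any callable that accepts a row's fields as keyword arguments and returns a dictionary of metrics. The hypothetical AnswerLengthEvaluator below is a minimal sketch of a class-based evaluator, mirroring the function-based response_length evaluator used in the examples that follow:
# Illustrative only: a hypothetical class-based custom evaluator.
# Any callable that accepts the row's fields as keyword arguments and
# returns a dict of metric name -> value can be used as an evaluator.
class AnswerLengthEvaluator:
    def __init__(self, max_length: int = 200):
        self.max_length = max_length

    def __call__(self, *, response: str, **kwargs):
        return {
            "answer_length": len(response),
            "within_limit": len(response) <= self.max_length,
        }


answer_length_eval = AnswerLengthEvaluator(max_length=100)
print(answer_length_eval(response="The Alpine Explorer Tent is the most waterproof."))
# {'answer_length': 48, 'within_limit': True}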
Examples
Evaluators
Users can create evaluator runs on the local machine as shown in the example below:
import os
from pprint import pprint

from azure.ai.evaluation import evaluate, RelevanceEvaluator, ViolenceEvaluator


def response_length(response, **kwargs):
    return {"value": len(response)}


if __name__ == "__main__":
    # Built-in evaluators
    # Initialize Azure OpenAI Model Configuration
    model_config = {
        "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
        "api_key": os.environ.get("AZURE_OPENAI_KEY"),
        "azure_deployment": os.environ.get("AZURE_OPENAI_DEPLOYMENT"),
    }

    # Initializing Relevance Evaluator
    relevance_eval = RelevanceEvaluator(model_config)

    # Running Relevance Evaluator on a single input row
    relevance_score = relevance_eval(
        response="The Alpine Explorer Tent is the most waterproof.",
        context="From our product list, the Alpine Explorer Tent is the most waterproof."
        " The Adventure Dining Table has higher weight.",
        query="Which tent is the most waterproof?",
    )
    pprint(relevance_score)
    # {'gpt_relevance': 5.0}

    # Content Safety Evaluator
    # Initialize Project Scope
    azure_ai_project = {
        "subscription_id": <subscription_id>,
        "resource_group_name": <resource_group_name>,
        "project_name": <project_name>,
    }

    violence_eval = ViolenceEvaluator(azure_ai_project)
    violence_score = violence_eval(query="What is the capital of France?", response="Paris.")
    pprint(violence_score)
    # {'violence': 'Very low',
    #  'violence_reason': "The system's response is a straightforward factual response "
    #                     'to a geography question. There is no violent content or '
    #                     'language present.',
    #  'violence_score': 0}

    # Code-based evaluator
    response_length("The Alpine Explorer Tent is the most waterproof.")
    # {'value': 48}

    # Using multiple evaluators together with the `evaluate` API
    result = evaluate(
        data="evaluate_test_data.jsonl",
        evaluators={
            "response_length": response_length,
            "violence": violence_eval,
        },
    )
    pprint(result)
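The data argument points to a JSON Lines file in which each row supplies the fields the selected evaluators expect (here query and response, which map to same-named columns by default). As an illustrative sketch only, evaluate_test_data.jsonl could contain rows like:
{"query": "Which tent is the most waterproof?", "response": "The Alpine Explorer Tent is the most waterproof."}
{"query": "What is the capital of France?", "response": "Paris."}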
Simulator
Simulators allow users to generate synthetic data using their application. The simulator expects the user to provide a callback method that invokes their AI application.
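The callback receives the running conversation in the OpenAI chat protocol format and must append its reply in the same format. As a minimal, illustrative skeleton (the hypothetical echo_callback below returns a canned reply instead of calling a real application; the full prompty-backed version is shown further down):
from typing import Any, Dict, List, Optional


async def echo_callback(
    messages: Dict[str, List[Dict]],
    stream: bool = False,
    session_state: Any = None,
    context: Optional[Dict[str, Any]] = None,
) -> dict:
    # Read the latest user message and append a canned assistant reply.
    query = messages["messages"][-1]["content"]
    messages["messages"].append(
        {"content": f"You asked: {query}", "role": "assistant", "context": None}
    )
    return {
        "messages": messages["messages"],
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }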
Simulating with a Prompty
---
name: ApplicationPrompty
description: Simulates an application
model:
  api: chat
  parameters:
    temperature: 0.0
    top_p: 1.0
    presence_penalty: 0
    frequency_penalty: 0
    response_format:
      type: text

inputs:
  conversation_history:
    type: dict

---
system:
You are a helpful assistant and you're helping with the user's query. Keep the conversation engaging and interesting.

Output with a string that continues the conversation, responding to the latest message from the user, given the conversation history:
{{ conversation_history }}
Application code:
import asyncio
import json
import os
from typing import Any, Dict, List, Optional

import wikipedia
from azure.ai.evaluation.simulator import Simulator
from azure.identity import DefaultAzureCredential
from promptflow.client import load_flow

# Set up the model configuration without api_key, using DefaultAzureCredential
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "azure_deployment": os.environ.get("AZURE_DEPLOYMENT"),
    # Not providing a key makes the SDK pick up `DefaultAzureCredential`.
    # To use a key instead, add "api_key": "<your API key>"
}

# Use Wikipedia to get some text for the simulation
wiki_search_term = "Leonardo da Vinci"
wiki_title = wikipedia.search(wiki_search_term)[0]
wiki_page = wikipedia.page(wiki_title)
text = wiki_page.summary[:1000]


def method_to_invoke_application_prompty(query: str, messages_list: List[Dict], context: Optional[Dict]):
    try:
        current_dir = os.path.dirname(__file__)
        prompty_path = os.path.join(current_dir, "application.prompty")
        _flow = load_flow(
            source=prompty_path,
            model=model_config,
            credential=DefaultAzureCredential(),
        )
        response = _flow(
            query=query,
            context=context,
            conversation_history=messages_list,
        )
        return response
    except Exception as e:
        print(f"Something went wrong invoking the prompty: {e}")
        return "something went wrong"


async def callback(
    messages: Dict[str, List[Dict]],
    stream: bool = False,
    session_state: Any = None,  # noqa: ANN401
    context: Optional[Dict[str, Any]] = None,
) -> dict:
    messages_list = messages["messages"]
    # Get the last message from the user
    latest_message = messages_list[-1]
    query = latest_message["content"]
    # Call your endpoint or AI application here
    response = method_to_invoke_application_prompty(query, messages_list, context)
    # Format the response to follow the OpenAI chat protocol format
    formatted_response = {
        "content": response,
        "role": "assistant",
        "context": {
            "citations": None,
        },
    }
    messages["messages"].append(formatted_response)
    return {"messages": messages["messages"], "stream": stream, "session_state": session_state, "context": context}


async def main():
    simulator = Simulator(model_config=model_config)
    outputs = await simulator(
        target=callback,
        text=text,
        num_queries=2,
        max_conversation_turns=4,
        user_persona=[
            f"I am a student and I want to learn more about {wiki_search_term}",
            f"I am a teacher and I want to teach my students about {wiki_search_term}",
        ],
    )
    print(json.dumps(outputs, indent=2))


if __name__ == "__main__":
    # Ensure that the following environment variables are set in your environment:
    # AZURE_OPENAI_ENDPOINT and AZURE_DEPLOYMENT
    # Example:
    # os.environ["AZURE_OPENAI_ENDPOINT"] = "https://your-endpoint.openai.azure.com/"
    # os.environ["AZURE_DEPLOYMENT"] = "your-deployment-name"
    asyncio.run(main())
    print("done!")
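The outputs are JSON-serializable (they are printed with json.dumps above), so you may want to persist them for later evaluation or inspection. A minimal sketch with a hypothetical helper name and an output path of your choosing, which could be called from main() after the print:
import json


def save_outputs(outputs, path: str = "simulator_output.json") -> None:
    # Write the simulated conversations to disk for later evaluation or inspection.
    with open(path, "w") as f:
        json.dump(outputs, f, indent=2)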
Adversarial Simulator
import asyncio
from typing import Any, Dict, List, Optional

from azure.ai.evaluation.simulator import AdversarialSimulator, AdversarialScenario
from azure.identity import DefaultAzureCredential

azure_ai_project = {
    "subscription_id": <subscription_id>,
    "resource_group_name": <resource_group_name>,
    "project_name": <project_name>,
}


async def callback(
    messages: List[Dict],
    stream: bool = False,
    session_state: Any = None,
    context: Optional[Dict[str, Any]] = None,
) -> dict:
    messages_list = messages["messages"]
    # Get the last message from the user
    latest_message = messages_list[-1]
    query = latest_message["content"]
    context = None
    if "file_content" in messages["template_parameters"]:
        query += messages["template_parameters"]["file_content"]
    # The next few lines show how to use AsyncAzureOpenAI's chat.completions
    # to respond to the simulator. Replace this with a call to your model/endpoint/application.
    # Make sure you pass the `query` and format the response as shown below.
    from openai import AsyncAzureOpenAI

    oai_client = AsyncAzureOpenAI(
        api_key=<api_key>,
        azure_endpoint=<endpoint>,
        api_version="2023-12-01-preview",
    )
    try:
        response_from_oai_chat_completions = await oai_client.chat.completions.create(
            messages=[{"content": query, "role": "user"}], model="gpt-4", max_tokens=300
        )
    except Exception as e:
        print(f"Error: {e}")
        # To continue the conversation, return the messages; otherwise you can fail the adversarial simulation with an exception.
        message = {
            "content": "Something went wrong. Check the exception e for more details.",
            "role": "assistant",
            "context": None,
        }
        messages["messages"].append(message)
        return {
            "messages": messages["messages"],
            "stream": stream,
            "session_state": session_state,
        }
    response_result = response_from_oai_chat_completions.choices[0].message.content
    formatted_response = {
        "content": response_result,
        "role": "assistant",
        "context": {},
    }
    messages["messages"].append(formatted_response)
    return {
        "messages": messages["messages"],
        "stream": stream,
        "session_state": session_state,
        "context": context,
    }
Adversarial QA
scenario = AdversarialScenario.ADVERSARIAL_QA
simulator = AdversarialSimulator(azure_ai_project=azure_ai_project, credential=DefaultAzureCredential())

outputs = asyncio.run(
    simulator(
        scenario=scenario,
        max_conversation_turns=1,
        max_simulation_results=3,
        target=callback,
    )
)

print(outputs.to_eval_qa_json_lines())
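The JSON lines returned by to_eval_qa_json_lines() can be written to a file and reused with the evaluate API shown earlier; the exact column names in each line are defined by the SDK, so inspect a row before wiring up evaluators or a column_mapping. A minimal sketch with a file name of your choosing:
# Persist the simulated Q&A pairs so they can be passed as `data` to `evaluate`.
with open("adversarial_qa_outputs.jsonl", "w") as f:
    f.write(outputs.to_eval_qa_json_lines())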
Direct Attack Simulator
from azure.ai.evaluation.simulator import DirectAttackSimulator

scenario = AdversarialScenario.ADVERSARIAL_QA
simulator = DirectAttackSimulator(azure_ai_project=azure_ai_project, credential=DefaultAzureCredential())

outputs = asyncio.run(
    simulator(
        scenario=scenario,
        max_conversation_turns=1,
        max_simulation_results=2,
        target=callback,
    )
)

print(outputs)
Troubleshooting
General
Azure AI Evaluation clients raise exceptions defined in Azure Core.
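For example, a call that reaches the service can be wrapped in a try/except on the Azure Core base exception. A minimal sketch, reusing the violence_eval instance from the example above:
from azure.core.exceptions import AzureError

try:
    violence_score = violence_eval(query="What is the capital of France?", response="Paris.")
except AzureError as e:
    # AzureError is the base class for exceptions raised by Azure SDK clients.
    print(f"Evaluation failed: {e}")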
Logging
This library uses the standard logging library for logging. Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.
Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on a client with the logging_enable argument.
See full SDK logging documentation with examples here.
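A minimal sketch of turning on verbose output with the standard logging module (pass logging_enable where a client supports it, per the note above):
import logging
import sys

# Emit DEBUG-level logs from the Azure SDK libraries to stdout.
azure_logger = logging.getLogger("azure")
azure_logger.setLevel(logging.DEBUG)
azure_logger.addHandler(logging.StreamHandler(stream=sys.stdout))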
Next steps
- View our samples.
- View our documentation
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.
When you submit a pull request, a CLA-bot will automatically determine whether you need to provide a CLA and decorate the PR appropriately (e.g., label, comment). Simply follow the instructions provided by the bot. You will only need to do this once across all repos using our CLA.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Release History
1.0.0b4 (2024-10-16)
Breaking Changes
- Removed the numpy dependency. All NaN values returned by the SDK have been changed from numpy.nan to math.nan.
- credential is now required to be passed in for all content safety evaluators and ProtectedMaterialsEvaluator. DefaultAzureCredential will no longer be chosen if a credential is not passed.
- Changed package extra name from "pf-azure" to "remote".
Bugs Fixed
- Adversarial Conversation simulations would fail with Forbidden. Added logic to re-fetch the token in the exponential retry logic to retrieve the RAI Service response.
Other Changes
- Enhanced the error message to provide clearer instructions when required packages for the remote tracking feature are missing.
1.0.0b3 (2024-10-01)
Features Added
- Added a type field to AzureOpenAIModelConfiguration and OpenAIModelConfiguration
- The following evaluators now support conversation as an alternative input to their usual single-turn inputs:
  - ViolenceEvaluator
  - SexualEvaluator
  - SelfHarmEvaluator
  - HateUnfairnessEvaluator
  - ProtectedMaterialEvaluator
  - IndirectAttackEvaluator
  - CoherenceEvaluator
  - RelevanceEvaluator
  - FluencyEvaluator
  - GroundednessEvaluator
- Surfaced RetrievalScoreEvaluator, formerly an internal part of ChatEvaluator, as a standalone conversation-only evaluator.
Breaking Changes
- Removed ContentSafetyChatEvaluator and ChatEvaluator
- The evaluator_config parameter of evaluate now maps evaluator names to a dictionary EvaluatorConfig, which is a TypedDict. The column_mapping between data or target and evaluator field names should now be specified inside this new dictionary:
Before:
evaluate(
    ...,
    evaluator_config={
        "hate_unfairness": {
            "query": "${data.question}",
            "response": "${data.answer}",
        }
    },
    ...
)
After:
evaluate(
    ...,
    evaluator_config={
        "hate_unfairness": {
            "column_mapping": {
                "query": "${data.question}",
                "response": "${data.answer}",
            }
        }
    },
    ...
)
- Simulator now requires a model configuration to call the prompty instead of an Azure AI project scope. This enables the use of the simulator with Entra ID-based auth. Before:
azure_ai_project = {
    "subscription_id": os.environ.get("AZURE_SUBSCRIPTION_ID"),
    "resource_group_name": os.environ.get("RESOURCE_GROUP"),
    "project_name": os.environ.get("PROJECT_NAME"),
}
sim = Simulator(azure_ai_project=azure_ai_project, credential=DefaultAzureCredential())
After:
model_config = {
    "azure_endpoint": os.environ.get("AZURE_OPENAI_ENDPOINT"),
    "azure_deployment": os.environ.get("AZURE_DEPLOYMENT"),
}
sim = Simulator(model_config=model_config)
If api_key is not included in the model_config, the prompty runtime in promptflow-core will pick up DefaultAzureCredential.
Bugs Fixed
- Fixed an issue where Entra ID authentication was not working with AzureOpenAIModelConfiguration
1.0.0b2 (2024-09-24)
Breaking Changes
- data and evaluators are now required keywords in evaluate.
1.0.0b1 (2024-09-20)
Breaking Changes
- The synthetic namespace has been renamed to simulator, and sub-namespaces under this module have been removed.
- The evaluate and evaluators namespaces have been removed, and everything previously exposed in those modules has been added to the root namespace azure.ai.evaluation.
- The parameter name project_scope in content safety evaluators has been renamed to azure_ai_project for consistency with the evaluate API and simulators.
- Model configuration classes are now of type TypedDict and are exposed in the azure.ai.evaluation module instead of coming from promptflow.core.
- Updated the parameter names question and answer in built-in evaluators to the more generic terms query and response.
Features Added
- First preview
- This package is a port of promptflow-evals. New features will be added only to this package moving forward.
- Added a TypedDict for AzureAIProject that allows for better intellisense and type checking when passing in project information
File details
Details for the file azure_ai_evaluation-1.0.0b4.tar.gz.
File metadata
- Download URL: azure_ai_evaluation-1.0.0b4.tar.gz
- Upload date:
- Size: 154.8 kB
- Tags: Source
- Uploaded using Trusted Publishing? No
- Uploaded via: RestSharp/106.13.0.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 460c19c55ab1b002ca05991e6711006292f05e3c52b9e4c851aed59aec795024
MD5 | 6aa1e2542781f093c4a64df1404ad0c1
BLAKE2b-256 | 194c9813ca6f1f226b592f3e07d8bf6ece341d17d3103def83f09aa41dcb9972
File details
Details for the file azure_ai_evaluation-1.0.0b4-py3-none-any.whl.
File metadata
- Download URL: azure_ai_evaluation-1.0.0b4-py3-none-any.whl
- Upload date:
- Size: 159.4 kB
- Tags: Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: RestSharp/106.13.0.0
File hashes
Algorithm | Hash digest
---|---
SHA256 | 49206ec1f0a68166e040180ffed32b4005be4f451cd1aea3fdac23be9468eeae
MD5 | 727a4514d3fe442a9557c1ddd5fb2a81
BLAKE2b-256 | e70d07441f8dec920364f5c6fe747a5ed74cc4489c340b3cacbc5963d348f71c