Microsoft Azure Metrics Advisor Client Library for Python
Azure Metrics Advisor client library for Python
Metrics Advisor is a scalable real-time time series monitoring, alerting, and root cause analysis platform. Use Metrics Advisor to:
- Analyze multi-dimensional data from multiple data sources
- Identify and correlate anomalies
- Configure and fine-tune the anomaly detection model used on your data
- Diagnose anomalies and help with root cause analysis
Source code | Package (PyPI) | API reference documentation | Product documentation | Samples
Getting started
Install the package
Install the Azure Metrics Advisor client library for Python with pip:
pip install azure-ai-metricsadvisor
Prerequisites
- Python 2.7, or 3.6 or later, is required to use this package.
- You need an Azure subscription and a Metrics Advisor service to use this package.
Authenticate the client
You will need two keys to authenticate the client:
- The subscription key to your Metrics Advisor resource. You can find this in the Keys and Endpoint section of your resource in the Azure portal.
- The API key for your Metrics Advisor instance. You can find this in the web portal for Metrics Advisor, in API keys on the left navigation menu.
We can use the keys to create a new MetricsAdvisorClient or MetricsAdvisorAdministrationClient.
import os
from azure.ai.metricsadvisor import (
    MetricsAdvisorKeyCredential,
    MetricsAdvisorClient,
    MetricsAdvisorAdministrationClient,
)
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
client = MetricsAdvisorClient(service_endpoint,
    MetricsAdvisorKeyCredential(subscription_key, api_key))
admin_client = MetricsAdvisorAdministrationClient(service_endpoint,
    MetricsAdvisorKeyCredential(subscription_key, api_key))
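If your Metrics Advisor resource is set up for Azure Active Directory access, the clients can also be constructed with a token credential instead of the key pair. The following is a minimal sketch under that assumption; it presumes the azure-identity package is installed and that your AAD identity has been granted access to the resource.
import os
from azure.identity import DefaultAzureCredential
from azure.ai.metricsadvisor import MetricsAdvisorClient

service_endpoint = os.getenv("ENDPOINT")

# DefaultAzureCredential tries environment variables, managed identity,
# and developer sign-in (e.g. Azure CLI) in turn.
client = MetricsAdvisorClient(service_endpoint, DefaultAzureCredential())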
Key concepts
MetricsAdvisorClient
MetricsAdvisorClient helps with:
- listing incidents
- listing root causes of incidents
- retrieving original time series data and time series data enriched by the service.
- listing alerts
- adding feedback to tune your model (a short sketch follows this list)
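As a quick illustration of the feedback workflow from the last item, the sketch below marks a time window on one series as not anomalous. The metric id, dimension key, and time range are placeholders; adjust them to your own data.
import datetime
import os
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorClient
from azure.ai.metricsadvisor.models import AnomalyFeedback

service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
metric_id = os.getenv("METRIC_ID")

client = MetricsAdvisorClient(service_endpoint,
    MetricsAdvisorKeyCredential(subscription_key, api_key))

# Tell the service these points are not anomalies so detection on this
# series can be tuned accordingly.
feedback = AnomalyFeedback(
    metric_id=metric_id,
    dimension_key={"city": "Los Angeles"},  # placeholder dimension key
    start_time=datetime.datetime(2020, 8, 5),
    end_time=datetime.datetime(2020, 8, 7),
    value="NotAnomaly"
)
client.add_feedback(feedback)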
MetricsAdvisorAdministrationClient
MetricsAdvisorAdministrationClient allows you to:
- manage data feeds
- manage anomaly detection configurations
- manage anomaly alerting configurations
- manage hooks
DataFeed
A DataFeed is what Metrics Advisor ingests from your data source, such as Cosmos DB or a SQL server. A data feed contains rows of:
- timestamps
- zero or more dimensions
- one or more measures
Metric
A DataFeedMetric is a quantifiable measure that is used to monitor and assess the status of a specific business process. It can be a combination of multiple time series values divided into dimensions. For example, a web health metric might contain dimensions for user count and the en-us market.
AnomalyDetectionConfiguration
AnomalyDetectionConfiguration is required for every time series, and determines whether a point in the time series is an anomaly.
Anomaly & Incident
After a detection configuration is applied to metrics, AnomalyIncidents are generated whenever any series within it has a DataPointAnomaly.
Alert
You can configure which anomalies should trigger an AnomalyAlert. You can set multiple alerts with different settings. For example, you could create an alert for anomalies with lower business impact, and another for anomalies with higher impact.
Notification Hook
Metrics Advisor lets you create and subscribe to real-time alerts. These alerts are sent over the internet, using a notification hook like EmailNotificationHook or WebNotificationHook.
Examples
- Add a data feed from a sample or data source
- Check ingestion status
- Configure anomaly detection configuration
- Configure alert configuration
- Query anomaly detection results
- Query incidents
- Query root causes
- Add hooks for receiving anomaly alerts
Add a data feed from a sample or data source
Metrics Advisor supports connecting different types of data sources. Here is a sample to ingest data from SQL Server.
import os
import datetime
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorAdministrationClient
from azure.ai.metricsadvisor.models import (
SqlServerDataFeedSource,
DataFeedSchema,
DataFeedMetric,
DataFeedDimension,
DataFeedRollupSettings,
DataFeedMissingDataPointFillSettings
)
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
sql_server_connection_string = os.getenv("SQL_SERVER_CONNECTION_STRING")
query = os.getenv("SQL_SERVER_QUERY")
client = MetricsAdvisorAdministrationClient(
service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
data_feed = client.create_data_feed(
name="My data feed",
source=SqlServerDataFeedSource(
connection_string=sql_server_connection_string,
query=query,
),
granularity="Daily",
schema=DataFeedSchema(
metrics=[
DataFeedMetric(name="cost", display_name="Cost"),
DataFeedMetric(name="revenue", display_name="Revenue")
],
dimensions=[
DataFeedDimension(name="category", display_name="Category"),
DataFeedDimension(name="city", display_name="City")
],
timestamp_column="Timestamp"
),
ingestion_settings=datetime.datetime(2019, 10, 1),
data_feed_description="cost/revenue data feed",
rollup_settings=DataFeedRollupSettings(
rollup_type="AutoRollup",
rollup_method="Sum",
rollup_identification_value="__CUSTOM_SUM__"
),
missing_data_point_fill_settings=DataFeedMissingDataPointFillSettings(
fill_type="SmartFilling"
),
access_mode="Private"
)
print("Data feed created, id: {}".format(data_feed.id))
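To verify the feed was created, or to discover feeds created earlier, the same administration client can enumerate data feeds. A minimal sketch reusing the client from the snippet above:
# List the data feeds visible to this account and print a short summary.
for feed in client.list_data_feeds():
    print("Data feed name: {}".format(feed.name))
    print("Data feed id: {}\n".format(feed.id))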
Check ingestion status
After we start the data ingestion, we can check the ingestion status.
import os
import datetime
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorAdministrationClient
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
data_feed_id = os.getenv("DATA_FEED_ID")
client = MetricsAdvisorAdministrationClient(service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
ingestion_status = client.list_data_feed_ingestion_status(
data_feed_id,
datetime.datetime(2020, 9, 20),
datetime.datetime(2020, 9, 25)
)
for status in ingestion_status:
    print("Timestamp: {}".format(status.timestamp))
    print("Status: {}".format(status.status))
    print("Message: {}\n".format(status.message))
Configure anomaly detection configuration
While a default detection configuration is automatically applied to each metric, we can tune the detection modes used on our data by creating a customized anomaly detection configuration.
import os
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorAdministrationClient
from azure.ai.metricsadvisor.models import (
ChangeThresholdCondition,
HardThresholdCondition,
SmartDetectionCondition,
SuppressCondition,
MetricDetectionCondition,
)
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
metric_id = os.getenv("METRIC_ID")
client = MetricsAdvisorAdministrationClient(
service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
change_threshold_condition = ChangeThresholdCondition(
anomaly_detector_direction="Both",
change_percentage=20,
shift_point=10,
within_range=True,
suppress_condition=SuppressCondition(
min_number=5,
min_ratio=2
)
)
hard_threshold_condition = HardThresholdCondition(
anomaly_detector_direction="Up",
upper_bound=100,
suppress_condition=SuppressCondition(
min_number=2,
min_ratio=2
)
)
smart_detection_condition = SmartDetectionCondition(
anomaly_detector_direction="Up",
sensitivity=10,
suppress_condition=SuppressCondition(
min_number=2,
min_ratio=2
)
)
detection_config = client.create_detection_configuration(
name="my_detection_config",
metric_id=metric_id,
description="anomaly detection config for metric",
whole_series_detection_condition=MetricDetectionCondition(
condition_operator="OR",
change_threshold_condition=change_threshold_condition,
hard_threshold_condition=hard_threshold_condition,
smart_detection_condition=smart_detection_condition
)
)
print("Detection configuration created, id: {}".format(detection_config.id))
Configure alert configuration
Then let's configure the conditions under which an alert should be triggered.
import os
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorAdministrationClient
from azure.ai.metricsadvisor.models import (
MetricAlertConfiguration,
MetricAnomalyAlertScope,
TopNGroupScope,
MetricAnomalyAlertConditions,
SeverityCondition,
MetricBoundaryCondition,
MetricAnomalyAlertSnoozeCondition,
)
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
anomaly_detection_configuration_id = os.getenv("DETECTION_CONFIGURATION_ID")
hook_id = os.getenv("HOOK_ID")
client = MetricsAdvisorAdministrationClient(
service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
alert_config = client.create_alert_configuration(
name="my alert config",
description="alert config description",
cross_metrics_operator="AND",
metric_alert_configurations=[
MetricAlertConfiguration(
detection_configuration_id=anomaly_detection_configuration_id,
alert_scope=MetricAnomalyAlertScope(
scope_type="WholeSeries"
),
alert_conditions=MetricAnomalyAlertConditions(
severity_condition=SeverityCondition(
min_alert_severity="Low",
max_alert_severity="High"
)
)
),
MetricAlertConfiguration(
detection_configuration_id=anomaly_detection_configuration_id,
alert_scope=MetricAnomalyAlertScope(
scope_type="TopN",
top_n_group_in_scope=TopNGroupScope(
top=10,
period=5,
min_top_count=5
)
),
alert_conditions=MetricAnomalyAlertConditions(
metric_boundary_condition=MetricBoundaryCondition(
direction="Up",
upper=50
)
),
alert_snooze_condition=MetricAnomalyAlertSnoozeCondition(
auto_snooze=2,
snooze_scope="Metric",
only_for_successive=True
)
),
],
hook_ids=[hook_id]
)
print("Alert configuration created, id: {}".format(alert_config.id))
Query anomaly detection results
We can query the alerts and anomalies.
import os
import datetime
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorClient
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
alert_config_id = os.getenv("ALERT_CONFIG_ID")
alert_id = os.getenv("ALERT_ID")
client = MetricsAdvisorClient(service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
results = client.list_alerts(
alert_configuration_id=alert_config_id,
start_time=datetime.datetime(2020, 1, 1),
end_time=datetime.datetime(2020, 9, 9),
time_mode="AnomalyTime",
)
for result in results:
    print("Alert id: {}".format(result.id))
    print("Create time: {}".format(result.created_time))
results = client.list_anomalies(
alert_configuration_id=alert_config_id,
alert_id=alert_id,
)
for result in results:
    print("Create time: {}".format(result.created_time))
    print("Severity: {}".format(result.severity))
    print("Status: {}".format(result.status))
Query incidents
We can query the incidents for a detection configuration.
import os
import datetime
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorClient
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
anomaly_detection_configuration_id = os.getenv("DETECTION_CONFIGURATION_ID")
client = MetricsAdvisorClient(service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
results = client.list_incidents(
detection_configuration_id=anomaly_detection_configuration_id,
start_time=datetime.datetime(2020, 1, 1),
end_time=datetime.datetime(2020, 9, 9),
)
for result in results:
    print("Metric id: {}".format(result.metric_id))
    print("Incident ID: {}".format(result.id))
    print("Severity: {}".format(result.severity))
    print("Status: {}".format(result.status))
Query root causes
We can also query the root causes of an incident.
import os
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorClient
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
anomaly_detection_configuration_id = os.getenv("DETECTION_CONFIGURATION_ID")
incident_id = os.getenv("INCIDENT_ID")
client = MetricsAdvisorClient(service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
results = client.list_incident_root_causes(
detection_configuration_id=anomaly_detection_configuration_id,
incident_id=incident_id,
)
for result in results:
    print("Score: {}".format(result.score))
    print("Description: {}".format(result.description))
Add hooks for receiving anomaly alerts
We can add hooks so that when an alert is triggered, we get a callback.
import os
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential, MetricsAdvisorAdministrationClient
from azure.ai.metricsadvisor.models import EmailNotificationHook
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
client = MetricsAdvisorAdministrationClient(service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key))
hook = client.create_hook(
hook=EmailNotificationHook(
name="email hook",
description="my email hook",
emails_to_alert=["alertme@alertme.com"],
external_link="https://docs.microsoft.com/en-us/azure/cognitive-services/metrics-advisor/how-tos/alerts"
)
)
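Alerts can also be pushed to a web endpoint with a WebNotificationHook. Here is a minimal sketch reusing the administration client above; the endpoint URL is a placeholder for a service you host that accepts the alert payload.
from azure.ai.metricsadvisor.models import WebNotificationHook

web_hook = client.create_hook(
    hook=WebNotificationHook(
        name="web hook",
        description="my web hook",
        endpoint="https://www.example.com/handler",  # placeholder endpoint
        external_link="https://docs.microsoft.com/en-us/azure/cognitive-services/metrics-advisor/how-tos/alerts"
    )
)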
Async APIs
This library includes a complete async API supported on Python 3.6+. To use it, you must first install an async transport, such as aiohttp. See azure-core documentation for more information.
import os
from azure.ai.metricsadvisor import MetricsAdvisorKeyCredential
from azure.ai.metricsadvisor.aio import MetricsAdvisorClient, MetricsAdvisorAdministrationClient
service_endpoint = os.getenv("ENDPOINT")
subscription_key = os.getenv("SUBSCRIPTION_KEY")
api_key = os.getenv("API_KEY")
client = MetricsAdvisorClient(
service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
admin_client = MetricsAdvisorAdministrationClient(
service_endpoint,
MetricsAdvisorKeyCredential(subscription_key, api_key)
)
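The async clients mirror the synchronous API: single-request methods are awaited, list methods return async iterators, and the clients can be used as async context managers. A minimal sketch reusing the async client above; the alert configuration id comes from an environment variable, as in the synchronous example.
import asyncio
import datetime

alert_config_id = os.getenv("ALERT_CONFIG_ID")

async def query_alerts():
    # Use the client as an async context manager so the transport is closed cleanly.
    async with client:
        results = client.list_alerts(
            alert_configuration_id=alert_config_id,
            start_time=datetime.datetime(2020, 1, 1),
            end_time=datetime.datetime(2020, 9, 9),
            time_mode="AnomalyTime",
        )
        async for result in results:
            print("Alert id: {}".format(result.id))

asyncio.run(query_alerts())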
Troubleshooting
General
The Azure Metrics Advisor clients will raise exceptions defined in Azure Core.
Logging
This library uses the standard logging library for logging.
Basic information about HTTP sessions (URLs, headers, etc.) is logged at INFO level.
Detailed DEBUG level logging, including request/response bodies and unredacted headers, can be enabled on the client or per-operation with the logging_enable keyword argument.
See full SDK logging documentation with examples here.
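For example, the following sketch enables DEBUG logging to stdout for a single client, using the standard azure-core logging pattern; it assumes the endpoint and key variables from the authentication snippet above.
import logging
import sys

# Send the Azure SDK loggers to stdout at DEBUG level.
logger = logging.getLogger("azure")
logger.setLevel(logging.DEBUG)
logger.addHandler(logging.StreamHandler(stream=sys.stdout))

# logging_enable=True opts this client in to detailed request/response logging.
client = MetricsAdvisorClient(
    service_endpoint,
    MetricsAdvisorKeyCredential(subscription_key, api_key),
    logging_enable=True
)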
Next steps
More sample code
For more details see the samples README.
Contributing
This project welcomes contributions and suggestions. Most contributions require you to agree to a Contributor License Agreement (CLA) declaring that you have the right to, and actually do, grant us the rights to use your contribution. For details, visit cla.microsoft.com.
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
Release History
1.0.0 (2021-07-06)
Breaking Changes
- Changed
  - DetectionConditionsOperator -> DetectionConditionOperator
  - cross_conditions_operator -> condition_operator
  - AnomalyAlert.created_on -> AnomalyAlert.created_time
  - AnomalyAlert.modified_on -> AnomalyAlert.modified_time
  - Anomaly.created_on -> Anomaly.created_time
  - admin_emails has been renamed to admins in NotificationHook
  - admin_emails has been renamed to admins in DataFeedOptions
  - viewer_emails has been renamed to viewers in DataFeedOptions
1.0.0b4 (2021-06-07)
New Features
- Added AzureLogAnalyticsDataFeedSource and AzureEventHubsDataFeedSource
- Update methods now return the updated object
- Added DatasourceCredentials and DatasourceCredential operations
- Added authentication type support for data feed
Breaking Changes
- Delete methods now take the id as a positional-only argument
- update_subscription_key and update_api_key are merged into one method, update_key
- Removed DataFeedOptions and moved all its properties to the DataFeed model
- Deprecated: HttpRequestDataFeed, ElasticsearchDataFeed
- Renamed
  - AzureApplicationInsightsDataFeed -> AzureApplicationInsightsDataFeedSource
  - AzureBlobDataFeed -> AzureBlobDataFeedSource
  - AzureCosmosDBDataFeed -> AzureCosmosDbDataFeedSource
  - AzureDataExplorerDataFeed -> AzureDataExplorerDataFeedSource
  - AzureTableDataFeed -> AzureTableDataFeedSource
  - InfluxDBDataFeed -> InfluxDbDataFeedSource
  - MySqlDataFeed -> MySqlDataFeedSource
  - PostgreSqlDataFeed -> PostgreSqlDataFeedSource
  - SQLServerDataFeed -> SqlServerDataFeedSource
  - MongoDBDataFeed -> MongoDbDataFeedSource
  - AzureDataLakeStorageGen2DataFeed -> AzureDataLakeStorageGen2DataFeedSource
Dependency Updates
- Bump msrest requirement from 0.6.12 to 0.6.21
1.0.0b3 (2021-02-09)
New Features
- AAD authentication support (#15922)
- MetricsAdvisorKeyCredential support for rotating the subscription and api keys to update long-lived clients
Breaking Changes
- list_dimension_values has been renamed to list_anomaly_dimension_values
- Update methods now return None
- Updated DataFeed.metric_ids to be a dict rather than a list
Hotfixes
- Bump six requirement from 1.6 to 1.11.0
1.0.0b2 (2020-11-10)
Breaking Changes
- create_hook now takes as input an EmailHook or WebHook
- Anomaly has been renamed to DataPointAnomaly
- Incident has been renamed to AnomalyIncident
- IncidentPropertyIncidentStatus has been renamed to AnomalyIncidentStatus
- Alert has been renamed to AnomalyAlert
- Severity has been renamed to AnomalySeverity
- Metric has been renamed to DataFeedMetric
- Dimension has been renamed to DataFeedDimension
- EmailHook has been renamed to EmailNotificationHook
- WebHook has been renamed to WebNotificationHook
- Hook has been renamed to NotificationHook
- TimeMode has been renamed to AlertQueryTimeMode
- admins has been renamed to admin_emails on NotificationHook
- admins has been renamed to admin_emails on DataFeedOptions
- viewers has been renamed to viewer_emails on DataFeedOptions
- timestamp_list has been renamed to timestamps on MetricSeriesData
- value_list has been renamed to values on MetricSeriesData
- SeriesResult has been renamed to MetricEnrichedSeriesData
- create_anomaly_alert_configuration has been renamed to create_alert_configuration
- get_anomaly_alert_configuration has been renamed to get_alert_configuration
- delete_anomaly_alert_configuration has been renamed to delete_alert_configuration
- update_anomaly_alert_configuration has been renamed to update_alert_configuration
- list_anomaly_alert_configurations has been renamed to list_alert_configurations
- create_metric_anomaly_detection_configuration has been renamed to create_detection_configuration
- get_metric_anomaly_detection_configuration has been renamed to get_detection_configuration
- delete_metric_anomaly_detection_configuration has been renamed to delete_detection_configuration
- update_metric_anomaly_detection_configuration has been renamed to update_detection_configuration
- list_metric_anomaly_detection_configurations has been renamed to list_detection_configurations
- list_feedbacks has been renamed to list_feedback
- list_alerts_for_alert_configuration has been renamed to list_alerts
- list_anomalies_for_alert & list_anomalies_for_detection_configuration have been grouped into list_anomalies
- list_dimension_values_for_detection_configuration has been renamed to list_dimension_values
- list_incidents_for_alert & list_incidents_for_detection_configuration have been grouped into list_incidents
New Features
- __repr__ added to all models
1.0.0b1 (2020-10-07)
First preview release