
Performance metrics for Pyramid using StatsD

Project description

Performance metrics for Pyramid using StatsD. The project aims to provide ways to instrument a Pyramid application in the least intrusive way.

Installation

Install with pip, e.g. within a virtualenv:

$ pip install pyramid_metrics

Setup

Once pyramid_metrics is installed, use the config.include mechanism to include it in your Pyramid project’s configuration. In your Pyramid project’s __init__.py:

config = Configurator(.....)
config.include('pyramid_metrics')
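
For context, a minimal sketch of a complete __init__.py using the standard Pyramid application entry point; only the config.include('pyramid_metrics') line comes from this documentation, the rest is the usual Pyramid boilerplate:

from pyramid.config import Configurator

def main(global_config, **settings):
    # Standard Pyramid WSGI entry point
    config = Configurator(settings=settings)
    config.include('pyramid_metrics')  # enable pyramid_metrics
    config.scan()
    return config.make_wsgi_app()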

Alternatively, you can use the pyramid.includes configuration value in your .ini file:

[app:myapp]
pyramid.includes = pyramid_metrics

Usage

Configuration of pyramid_metrics (the values shown are the defaults):

[app:myapp]
metrics.host = localhost
metrics.port = 8125

metrics.prefix = application.stage

metrics.route_performance = true
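
If you configure the application in code rather than through an .ini file, the same settings can be passed to the Configurator. This is a hedged sketch: only the setting names come from the documentation above, and the values shown are illustrative:

from pyramid.config import Configurator

config = Configurator(settings={
    'metrics.host': 'localhost',            # StatsD server host
    'metrics.port': '8125',                 # StatsD UDP port
    'metrics.prefix': 'myapp.production',   # prefix for metric keys (illustrative value)
    'metrics.route_performance': 'true',    # enable per-route timing
})
config.include('pyramid_metrics')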

Route performance

If enabled, the route performance feature times request processing. Because the StatsD Timer metric type is pre-aggregated, it provides information on latency, rate, and total count. Each measurement is sent twice: once per route and once globally.

The key name is composed of the route name, the HTTP method, and the outcome (the HTTP status code, or ‘exc’ when an exception was raised).

  • Global key request.<HTTP_METHOD>.<STATUS_CODE_OR_EXC>

  • Per route key route.<ROUTE_NAME>.request.<HTTP_METHOD>.<STATUS_CODE_OR_EXC>
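
For example, assuming a route named home matched by a GET request that returns a 200 response, the timer keys sent would be (illustrative, derived from the key templates above):

request.GET.200
route.home.request.GET.200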

API

Counter

StatsD type: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#counting

# Increment a counter named cache.hit by 1
request.metrics.incr('cache.hit')

# Increment by N
request.metrics.incr('cache.hit.read.total', count=len(cacheresult))

# Stat names can be composed from list or tuple
request.metrics.incr(('cache', cache_action))

Gauge

StatsD type: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#gauges

# Set the number of SQL connections to 8
request.metrics.gauge('sql.connections', 8)

# Increase the value of the metrics by some amount
request.metrics.gauge('network.egress', 34118, delta=True)

Timer

StatsD type: https://github.com/etsy/statsd/blob/master/docs/metric_types.md#timing

# Simple timing
time_in_ms = requests.get('http://example.net').elapsed.total_seconds() * 1000
request.metrics.timing('net.example.responsetime', time_in_ms)

# Using the time marker mechanism
request.metrics.marker_start('something_slow')
httpclient.get('http://example.net')
request.metrics.marker_stop('something_slow')

# Measure different outcome
request.metrics.marker_start('something_slow')
try:
    httpclient.get('http://example.net').raise_for_status()
except Exception:
    # Send measure to key 'something_slow.error'
    request.metrics.marker_stop('something_slow', suffix='error')
else:
    # Send measure to key 'something_slow.ok'
    request.metrics.marker_stop('something_slow', suffix='ok')

# Using the context manager
with request.metrics.timer(['longprocess', processname]):
    run_longprocess(processname)
    # On exit, sends the measure to 'longprocess.foobar'
    # (or 'longprocess.foobar.exc' on exception), assuming processname == 'foobar'

Currently implemented

  • Collection utility as a request method

  • Ability to send metrics per Pyramid route

  • Simple time marker mechanism

  • Simple counter

  • Context manager for Timing metric type

TODO

  • Full StatsD metric types

  • Extensions for automatic instrumentation (SQLAlchemy, MongoDB, Requests…)

  • Whitelist/blacklist of metrics

  • Time allocation per subsystem (using the time marker mechanism)

Considerations

  • The general error policy is to always fail safe: pyramid_metrics should NEVER break your application.

  • DNS resolution is performed at configuration time to avoid recurring lookup latency.

Development

Run tests

Tests are run with nose; all test dependencies are listed in requirements-test.txt.

$ pip install -r requirements-test.txt
...

$ nosetests
...

Run tests with tox

$ pip install tox
...

$ tox          # Run on Python 2.7 and Python 3.4
...

$ tox -e py34  # Run on Python 3.4 only

Contributors

  • Pior Bastida (@pior)

  • Philippe Gauthier (@deuxpi)

  • Hadrien David (@hadrien)

  • Jay R. Wren (@jrwren)

