aio-request
This library simplifies interaction between microservices:
- Allows sending requests using various strategies
- Propagates request deadlines and priorities
- Exposes client/server metrics
Example:
import aiohttp
import aio_request

async with aiohttp.ClientSession() as client_session:
    client = aio_request.setup(
        transport=aio_request.AioHttpTransport(client_session),
        endpoint="http://endpoint:8080/",
    )
    response_ctx = client.request(
        aio_request.get("thing"),
        deadline=aio_request.Deadline.from_timeout(5)
    )
    async with response_ctx as response:
        pass  # process response here
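What is done inside the context is up to the caller. A minimal sketch of response handling instead of pass, assuming Response exposes is_json and an awaitable json() (both are mentioned in the changelog below):

async with response_ctx as response:
    if response.is_json:  # content type is JSON compatible
        payload = await response.json()
    else:
        payload = None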
Request strategies
The following strategies are supported:
- Single attempt. Only one attempt is sent.
- Sequential. Attempts are sent one after another with delays between them (a sketch follows the example below).
- Parallel. Attempts are started one after another with delays between them, but run concurrently: an earlier attempt is not awaited before the next one starts.
The number of attempts and the delays between them are configurable.
Example:
import aiohttp
import aio_request

async with aiohttp.ClientSession() as client_session:
    client = aio_request.setup(
        transport=aio_request.AioHttpTransport(client_session),
        endpoint="http://endpoint:8080/",
    )
    response_ctx = client.request(
        aio_request.get("thing"),
        deadline=aio_request.Deadline.from_timeout(5),
        strategy=aio_request.parallel_strategy(
            attempts_count=3,
            delays_provider=aio_request.linear_delays(min_delay_seconds=0.1, delay_multiplier=0.1)
        )
    )
    async with response_ctx as response:
        pass  # process response here
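The sequential strategy is configured the same way; a minimal sketch, assuming sequential_strategy accepts the same attempts_count and delays_provider parameters as parallel_strategy:

response_ctx = client.request(  # client from the setup above
    aio_request.get("thing"),
    deadline=aio_request.Deadline.from_timeout(5),
    strategy=aio_request.sequential_strategy(
        attempts_count=3,
        delays_provider=aio_request.linear_delays(min_delay_seconds=0.1, delay_multiplier=0.1)
    )
)
async with response_ctx as response:
    pass  # process response here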
Deadline & priority propagation
To enable deadline and priority propagation on the server side, a middleware should be configured:
import aiohttp.web
import aio_request
app = aiohttp.web.Application(middlewares=[aio_request.aiohttp_middleware_factory()])
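As an illustration (the handler below is hypothetical, not part of the library): once the middleware is installed, downstream calls made through an aio_request client while handling a request reuse the deadline and priority taken from the incoming request.

async def get_thing(request: aiohttp.web.Request) -> aiohttp.web.Response:
    # downstream aio_request calls made here inherit the propagated deadline/priority
    return aiohttp.web.json_response({"thing": "value"})

app.router.add_get("/thing", get_thing)  # app from the snippet above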
Expose client/server metrics
To enable client metrics, a metrics provider should be passed to the transport:
import aiohttp
import aio_request

async with aiohttp.ClientSession() as client_session:
    client = aio_request.setup(
        transport=aio_request.AioHttpTransport(
            client_session,
            metrics_provider=aio_request.PROMETHEUS_METRICS_PROVIDER
        ),
        endpoint="http://endpoint:8080/",
    )
This is an example of how to do it with aiohttp and Prometheus.
To enable server metrics, a metrics provider should be passed to the middleware:
import aiohttp.web
import aio_request

app = aiohttp.web.Application(
    middlewares=[
        aio_request.aiohttp_middleware_factory(
            metrics_provider=aio_request.PROMETHEUS_METRICS_PROVIDER
        )
    ]
)
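The providers only record the metrics; an endpoint for Prometheus to scrape still has to be exposed. A minimal sketch, assuming PROMETHEUS_METRICS_PROVIDER registers its collectors in prometheus_client's default registry:

import prometheus_client

# serve the default registry on a separate port so Prometheus can scrape it
prometheus_client.start_http_server(9090)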
Circuit breaker
To stop sending requests to a failing endpoint for a while, pass a circuit breaker to setup_v2:
import aiohttp
import aio_request

async with aiohttp.ClientSession() as client_session:
    client = aio_request.setup_v2(
        transport=aio_request.AioHttpTransport(client_session),
        endpoint="http://endpoint:8080/",
        circuit_breaker=aio_request.DefaultCircuitBreaker[str, int](
            break_duration=1.0,
            sampling_duration=1.0,
            minimum_throughput=2,
            failure_threshold=0.5,
        ),
    )
If at least minimum_throughput (2) requests are made within the sampling_duration (1 second), the circuit breaker opens when the ratio of failed requests to total requests reaches the failure_threshold (50%). For example, 3 requests within a second with 2 failures give a failure ratio of about 67%, so the circuit opens for break_duration (1 second).
CHANGELOG
v0.1.21 (2022-01-05)
- Content type should be None in Response.json()
v0.1.20 (2022-01-05)
v0.1.19 (2021-11-01)
v0.1.18 (2021-09-08)
v0.1.17 (2021-09-01)
v0.1.16 (2021-09-01)
v0.1.15 (2021-09-01)
v0.1.14 (2021-08-18)
v0.1.13 (2021-08-15)
v0.1.12 (2021-07-21)
v0.1.11 (2021-07-21)
- Fix Request.update_headers, add Request.extend_headers #59
v0.1.10 (2021-07-20)
- Add Response.is_json property to check whether content-type is json compatible #58
- Tracing support #54
- Configuration of a new pipeline