
⚙️ Server Thread


Launch a WSGI or ASGI Application in a background thread with werkzeug or uvicorn.

This library was created for localtileserver: it provides the basis for launching an image tile server in a background thread so data can be visualized in Jupyter notebooks.

While this may not be a widely applicable library, it is useful for a few Python packages I have created that require a background service.

🚀 Usage

Use the ServerThread with any WSGI or ASGI Application.

Start by creating an application (this can be a Flask app or a simple one like the example below):

# Create some WSGI Application
from werkzeug import Request, Response

@Request.application
def app(request):
    return Response("howdy", 200)

Then launch the app with the ServerThread class:

import requests
from server_thread import ServerThread

# Launch app in a background thread
server = ServerThread(app)

# Perform requests against the server without blocking
requests.get(f"http://{server.host}:{server.port}/").raise_for_status()
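Under the hood, the idea is a WSGI server running on a daemon thread. Below is a minimal sketch of that pattern using only the standard library's wsgiref, not the actual server-thread implementation (which uses werkzeug or uvicorn), to illustrate what launching and stopping a background server involves:

```python
import threading
from urllib.request import urlopen
from wsgiref.simple_server import make_server


def app(environ, start_response):
    # The simplest possible WSGI application.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"howdy"]


# Bind to port 0 so the OS picks a free port.
server = make_server("127.0.0.1", 0, app)
host, port = server.server_address

# Serve on a daemon thread so it never blocks interpreter exit.
thread = threading.Thread(target=server.serve_forever, daemon=True)
thread.start()

# Requests against the server do not block the main thread's work.
body = urlopen(f"http://{host}:{port}/").read()
print(body.decode())  # howdy

# Stop the server and join the thread when done.
server.shutdown()
thread.join()
```

ServerThread wraps this kind of lifecycle (start, host/port discovery, shutdown) behind a single class so callers do not manage the thread themselves.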

⬇️ Installation

Get started with server-thread to create applications that require a WSGI or ASGI application running in the background.

🐍 Installing with conda

Conda makes managing server-thread's dependencies across platforms easy, and it is the recommended installation method:

conda install -c conda-forge server-thread

🎡 Installing with pip

If you prefer pip, then you can install from PyPI: https://pypi-hypernode.com/project/server-thread/

pip install server-thread

💭 Feedback

Please share your thoughts and questions on the Discussions board. If you would like to report any bugs or make feature requests, please open an issue.

If filing a bug report, please share a scooby Report:

import server_thread
print(server_thread.Report())

🚀 Examples

Minimal examples for using server-thread with common micro-frameworks.

💨 FastAPI

import requests
from fastapi import FastAPI
from server_thread import ServerThread

app = FastAPI()


@app.get("/")
def root():
    return {"message": "Howdy!"}


server = ServerThread(app)
requests.get(f"http://{server.host}:{server.port}/").json()
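An ASGI application need not come from a framework; per the tagline above, ASGI apps are served via uvicorn, so any callable with the raw ASGI signature should be accepted the same way as the FastAPI app. A minimal sketch of that callable shape, exercised here directly with stub receive/send coroutines rather than a running server:

```python
import asyncio


# A minimal raw ASGI application: the callable shape uvicorn serves.
async def app(scope, receive, send):
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({"type": "http.response.body", "body": b"Howdy!"})


# Drive the app directly with stub callables to show the message flow.
async def main():
    sent = []

    async def receive():
        return {"type": "http.request", "body": b"", "more_body": False}

    async def send(message):
        sent.append(message)

    await app({"type": "http", "method": "GET", "path": "/"}, receive, send)
    return sent


messages = asyncio.run(main())
print(messages[1]["body"])  # b'Howdy!'
```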

⚗️ Flask

import requests
from flask import Flask
from server_thread import ServerThread

app = Flask("testapp")


@app.route("/")
def howdy():
    return {"message": "Howdy!"}


server = ServerThread(app)
requests.get(f"http://{server.host}:{server.port}/").json()

