TorchServe is a tool for serving neural network models for inference
Project description
TorchServe (PyTorch model server) is a flexible and easy-to-use tool for serving deep learning models exported from PyTorch.
Use the TorchServe CLI, or one of the pre-configured Docker images, to start a service that exposes HTTP endpoints for handling model inference requests.
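For example, a typical workflow packages a trained model into a model archive (.mar) with torch-model-archiver, starts the server, and sends a prediction request over HTTP. The following is a minimal sketch: the model name mymodel, the file names, and the choice of the built-in image_classifier handler are placeholders for your own model.

# Package a trained model into a .mar archive (paths are placeholders).
torch-model-archiver --model-name mymodel --version 1.0 \
    --serialized-file mymodel.pth --handler image_classifier \
    --export-path model_store

# Start the server and register the archive from the model store.
torchserve --start --model-store model_store --models mymodel=mymodel.mar

# Send an inference request to the default inference endpoint (port 8080).
curl http://127.0.0.1:8080/predictions/mymodel -T kitten.jpg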
Installation
Full installation instructions are in the project repo: https://github.com/pytorch/serve/blob/master/README.md
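For a quick start, the server and the model packaging tool (published as a separate package) can be installed from PyPI:

# Install TorchServe and the model archiver.
pip install torchserve torch-model-archiver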
Source code
You can check out the latest source code as follows:
git clone https://github.com/pytorch/serve.git
Citation
If you use TorchServe in a publication or project, please cite it: https://github.com/pytorch/serve
Download files
Download the file for your platform. If you're not sure which to choose, learn more about installing packages.
Source Distributions
None are listed for this release.
Built Distribution
torchserve-0.2.2-py2.py3-none-any.whl (5.0 MB)
File details
Details for the file torchserve-0.2.2-py2.py3-none-any.whl.
File metadata
- Download URL: torchserve-0.2.2-py2.py3-none-any.whl
- Upload date:
- Size: 5.0 MB
- Tags: Python 2, Python 3
- Uploaded using Trusted Publishing? No
- Uploaded via: twine/3.1.1 pkginfo/1.5.0.1 requests/2.24.0 setuptools/47.3.1.post20200622 requests-toolbelt/0.9.1 tqdm/4.47.0 CPython/3.8.1
File hashes
Algorithm | Hash digest
---|---
SHA256 | 6242cb83b8539c74afc10da5336197154fac085ada2e234fbbd810bd55e4d7d6
MD5 | 1198077f08c553e95bd05657574c65d2
BLAKE2b-256 | 5761222b84dc042bfc5152de30da09adb802827c48dad6c71f71b79041340c21
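To check a downloaded file against this table, recompute its digest locally and compare the output; a minimal sketch using the GNU coreutils sha256sum tool, assuming the wheel is in the current directory:

# Recompute the SHA256 digest and compare it with the SHA256 row above.
sha256sum torchserve-0.2.2-py2.py3-none-any.whl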