
LLM unified service


Modelz LLM


Modelz LLM is an inference server that lets you run open source large language models (LLMs), such as FastChat, LLaMA, and ChatGLM, in local or cloud-based environments behind an OpenAI compatible API.

Features

  • OpenAI compatible API: Modelz LLM provides an OpenAI compatible API for LLMs, which means you can use the OpenAI python SDK or LangChain to interact with the model.
  • Self-hosted: Modelz LLM can be easily deployed on either local or cloud-based environments.
  • Open source LLMs: Modelz LLM supports open source LLMs, such as FastChat, LLaMA, and ChatGLM.
  • Cloud native: We provide Docker images for different LLMs, which can be easily deployed on Kubernetes or other cloud-based environments (e.g. Modelz).

Quick Start

Install

pip install modelz-llm
# or install from source
pip install "modelz-llm[gpu] @ git+https://github.com/tensorchord/modelz-llm.git"

Run the self-hosted API server

First, start the self-hosted API server:

modelz-llm -m bigscience/bloomz-560m --device cpu

Currently, we support the following models:

| Model Name | Hugging Face Model | Docker Image | Recommended GPU |
| --- | --- | --- | --- |
| FastChat T5 | lmsys/fastchat-t5-3b-v1.0 | modelzai/llm-fastchat-t5-3b | Nvidia L4 (24 GB) |
| Vicuna 7B Delta V1.1 | lmsys/vicuna-7b-delta-v1.1 | modelzai/llm-vicuna-7b | Nvidia A100 (40 GB) |
| LLaMA 7B | decapoda-research/llama-7b-hf | modelzai/llm-llama-7b | Nvidia A100 (40 GB) |
| ChatGLM 6B INT4 | THUDM/chatglm-6b-int4 | modelzai/llm-chatglm-6b-int4 | Nvidia T4 (16 GB) |
| ChatGLM 6B | THUDM/chatglm-6b | modelzai/llm-chatglm-6b | Nvidia L4 (24 GB) |
| Bloomz 560M | bigscience/bloomz-560m | modelzai/llm-bloomz-560m | CPU |
| Bloomz 1.7B | bigscience/bloomz-1b7 | - | CPU |
| Bloomz 3B | bigscience/bloomz-3b | - | Nvidia L4 (24 GB) |
| Bloomz 7.1B | bigscience/bloomz-7b1 | - | Nvidia A100 (40 GB) |

Use the OpenAI Python SDK

Then you can use the OpenAI Python SDK to interact with the model:

import openai

openai.api_base = "http://localhost:8000"
openai.api_key = "any"

# Create a chat completion (pre-1.0 OpenAI SDK interface)
chat_completion = openai.ChatCompletion.create(
    model="any",
    messages=[{"role": "user", "content": "Hello world"}],
)
print(chat_completion.choices[0].message.content)
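Under the hood, the SDK simply POSTs an OpenAI-style JSON body to the /chat/completions endpoint; a minimal sketch of that payload (field names follow the OpenAI chat API, and the values are placeholders):

```python
import json

# OpenAI-style chat completion request body. The model field is a
# placeholder: the server serves whichever model it was started with,
# so "any" is accepted, as in the SDK example above.
payload = {
    "model": "any",
    "messages": [{"role": "user", "content": "Hello world"}],
}

body = json.dumps(payload)
print(body)
```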

Integrate with LangChain

You can also integrate Modelz LLM with LangChain:

import openai
from langchain.llms import OpenAI

openai.api_base = "http://localhost:8000"
openai.api_key = "any"

llm = OpenAI()
result = llm.generate(prompts=["Could you please recommend some movies?"])
print(result.generations[0][0].text)
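As an alternative to assigning openai.api_base and openai.api_key in code, the pre-1.0 OpenAI SDK (and therefore LangChain's OpenAI wrapper, which builds on it) also reads the OPENAI_API_BASE and OPENAI_API_KEY environment variables; a minimal sketch:

```python
import os

# Equivalent to setting openai.api_base / openai.api_key in code:
# the OpenAI SDK reads these variables when it is imported, and
# LangChain's OpenAI wrapper inherits the same configuration.
os.environ["OPENAI_API_BASE"] = "http://localhost:8000"
os.environ["OPENAI_API_KEY"] = "any"
```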

Deploy on Modelz

You can also deploy Modelz LLM directly on Modelz.

Supported APIs

Modelz LLM supports the following APIs for interacting with open source large language models:

  • /completions
  • /chat/completions
  • /embeddings
  • /engines/<any>/embeddings
  • /v1/completions
  • /v1/chat/completions
  • /v1/embeddings
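These endpoints speak plain HTTP with OpenAI-style JSON bodies, so no SDK is required; a minimal sketch that only constructs the request with the standard library (actually sending it assumes the Quick Start server is running on localhost:8000):

```python
import json
import urllib.request

# OpenAI-style embeddings request against the self-hosted server;
# the input text is a placeholder.
req = urllib.request.Request(
    "http://localhost:8000/v1/embeddings",
    data=json.dumps({"model": "any", "input": "Hello world"}).encode(),
    headers={"Content-Type": "application/json"},
    method="POST",
)

# urllib.request.urlopen(req) would return the JSON embedding response
# once the server is up; here we only inspect the constructed request.
print(req.full_url, req.method)
```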

Acknowledgements

  • FastChat for the prompt generation logic.
  • Mosec for the inference engine.

Download files

Download the file for your platform.

Source Distribution

modelz-llm-23.6.13.tar.gz (20.7 kB)

Uploaded Source

Built Distribution

modelz_llm-23.6.13-py3-none-any.whl (12.5 kB)

Uploaded Python 3

File details

Details for the file modelz-llm-23.6.13.tar.gz.

File metadata

  • Download URL: modelz-llm-23.6.13.tar.gz
  • Upload date:
  • Size: 20.7 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.11.4

File hashes

Hashes for modelz-llm-23.6.13.tar.gz

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 6634fc3d3b5088156b828874c2838d9b7ed5ae73c042368fd1f0a674d317b007 |
| MD5 | 763565724e08d99e46f77e5cd837da7e |
| BLAKE2b-256 | f7eb59551c64286f091ec64ffe10ae06f8369509991f6b3a70e2475fdb769415 |


File details

Details for the file modelz_llm-23.6.13-py3-none-any.whl.

File metadata

File hashes

Hashes for modelz_llm-23.6.13-py3-none-any.whl

| Algorithm | Hash digest |
| --- | --- |
| SHA256 | 15bd193f802d80e58a574f518c4813fb48665773e46473e20a54fab64ea6527c |
| MD5 | adad19516024195809294da1b597c45c |
| BLAKE2b-256 | 98955820abe595f3a5c305ddeda3ce18071396a8f984259e682ee9693984620c |

