MicroLlama

The smallest possible LLM API. Build a question-and-answer interface to your own content in a few minutes. Uses OpenAI embeddings, gpt-3.5-turbo and FAISS, via LangChain.
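Under the hood this is the standard LangChain retrieval pattern: embed your documents, index the vectors with FAISS, and answer each question by retrieving the closest chunks and passing them to the chat model as context. A minimal sketch of that flow, not MicroLlama's actual code, using LangChain import paths from the 0.0.x releases (they may have moved in newer versions):

from langchain.chains import RetrievalQA
from langchain.chat_models import ChatOpenAI
from langchain.embeddings import OpenAIEmbeddings
from langchain.vectorstores import FAISS

# Embed two toy documents and build an in-memory FAISS index.
texts = [
    "MicroLlama is a tiny question-answering API.",
    "It stores document vectors in a FAISS index.",
]
store = FAISS.from_texts(texts, OpenAIEmbeddings())

# Answer a question by retrieving similar chunks and handing them to the model.
qa = RetrievalQA.from_chain_type(
    llm=ChatOpenAI(model_name="gpt-3.5-turbo"),
    retriever=store.as_retriever(),
)
print(qa.run("Where are the vectors stored?"))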

Usage

  1. Combine your source documents into a single JSON file called source.json. It should look like this:
[
    {
        "source": "Reference to the source of your content. Typically a title.",
        "url": "URL for your source. This key is optional.",
        "content": "Your content as a single string. If there's a title or summary, put these first, separated by new lines."
    }, 
    ...
]

See example.source.json for an example.
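If your content lives in a folder of plain-text files, a short script can assemble source.json for you. A sketch, assuming a hypothetical docs/ directory of .txt files whose first line is a title:

import json
from pathlib import Path

entries = []
for path in sorted(Path("docs").glob("*.txt")):  # hypothetical content folder
    text = path.read_text()
    title = text.splitlines()[0] if text else path.stem
    entries.append({"source": title, "content": text})  # "url" is optional

Path("source.json").write_text(json.dumps(entries, indent=4))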

  2. Install MicroLlama into a virtual environment:
pip install microllama
  3. Get an OpenAI API key and add it to the environment, e.g. export OPENAI_API_KEY=sk-etc. Note that indexing and querying require OpenAI credits, which aren't free.

  4. Run your server with microllama. If a vector search index doesn't exist, it'll be created from your source.json and stored.

  5. Query your documents at /api/ask?your question (see the sketch after this list).

  6. MicroLlama includes an optional web front-end, which is generated with microllama make-front-end. This command creates a single index.html file, which you can edit. It's served at /.
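Once the server is up you can hit the ask endpoint from Python. A sketch using only the standard library, assuming the server listens on localhost:8080 (a guess; check microllama's startup output for the real address):

from urllib.parse import quote
from urllib.request import urlopen

BASE = "http://localhost:8080"  # hypothetical address; replace with yours

# The question is URL-encoded straight into the query string, as in step 5.
question = "What is MicroLlama?"
with urlopen(f"{BASE}/api/ask?{quote(question)}") as resp:
    print(resp.read().decode())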

Deploying your API

Create a Dockerfile with microllama make-dockerfile. Then:

On Fly.io

Sign up for a Fly.io account and install flyctl. Then:

fly launch # answer no to Postgres, Redis and deploying now 
fly secrets set OPENAI_API_KEY=sk-etc 
fly deploy

On Google Cloud Run

gcloud run deploy --source . --set-env-vars="OPENAI_API_KEY=sk-etc"

For Cloud Run and other serverless platforms you should generate the FAISS index at container build time, to reduce startup time. See the two commented lines in Dockerfile.

Based on

TODO

  • Use a splitter that generates more meaningful fragments, e.g. text_splitter = SpacyTextSplitter(chunk_size=700, chunk_overlap=200, separator=" ")
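A sketch of how that splitter would slot in, assuming LangChain's SpacyTextSplitter plus a spaCy model are installed (example.txt is a hypothetical input file):

# Requires: pip install spacy && python -m spacy download en_core_web_sm
from langchain.text_splitter import SpacyTextSplitter

text_splitter = SpacyTextSplitter(chunk_size=700, chunk_overlap=200, separator=" ")

# spaCy's sentence segmentation keeps each fragment semantically coherent,
# unlike fixed-size character chunks that can cut mid-sentence.
chunks = text_splitter.split_text(open("example.txt").read())
for chunk in chunks:
    print(len(chunk), chunk[:60])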
