
Automatically generate the OpenAI tool JSON Schema, parse tool calls, and construct results for the chat model.

Project description

LLM FOO

Overview

LLM FOO is a cutting-edge project blending the art of Kung Fu with the science of Large Language Models... or, more prosaically, it automatically generates the OpenAI tool JSON Schema, parses tool calls, and constructs results for the chat model.

But hey, I hope this will become a set of small, useful LLM helper functions that make building things easier, because the current bleeding-edge APIs are a bit of a mess and I think we can do better.

Installation

pip install llmfoo

Usage

Here's a quick example of how to use LLM FOO:

  1. Add the @tool annotation. (You need OPENAI_API_KEY in your environment and the ability to call the gpt-4-1106-preview model.)
  2. llmfoo generates the JSON schema into YOURFILE.tool.json with GPT-4 Turbo - "Never send a machine to do a human's job" .. like, who wants to write boilerplate docs for machines???
  3. Annotated functions gain helpers:
    • openai_schema returns the schema (you can edit the JSON if you're not happy with what the machines did)
    • openai_tool_call makes the tool call and returns the result in chat API message format
    • openai_tool_output makes the tool call and returns the result in assistant API tool output format
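For reference, here is roughly what the generated adder entry in YOURFILE.tool.json might look like. This is a sketch of the OpenAI tool-schema format; the actual descriptions are whatever the model writes, not these exact strings:

```json
{
  "type": "function",
  "function": {
    "name": "adder",
    "description": "Add two integers and return their sum.",
    "parameters": {
      "type": "object",
      "properties": {
        "x": {"type": "integer", "description": "First addend."},
        "y": {"type": "integer", "description": "Second addend."}
      },
      "required": ["x", "y"]
    }
  }
}
```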
from time import sleep

from openai import OpenAI

from llmfoo.functions import tool


@tool
def adder(x: int, y: int) -> int:
    return x + y


@tool
def multiplier(x: int, y: int) -> int:
    return x * y


client = OpenAI()


def test_chat_completion_with_adder():
    number1 = 3267182746
    number2 = 798472847
    messages = [
        {
            "role": "user",
            "content": f"What is {number1} + {number2}?"
        }
    ]
    response = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    messages.append(response.choices[0].message)
    messages.append(adder.openai_tool_call(response.choices[0].message.tool_calls[0]))
    response2 = client.chat.completions.create(
        model="gpt-4-1106-preview",
        messages=messages,
        tools=[adder.openai_schema]
    )
    assert str(adder(number1, number2)) in response2.choices[0].message.content.replace(",", "")


def test_assistant_with_multiplier():
    number1 = 1238763428176
    number2 = 172388743612
    assistant = client.beta.assistants.create(
        name="The Calc Machina",
        instructions="You are a calculator with a funny pirate accent.",
        tools=[multiplier.openai_schema],
        model="gpt-4-1106-preview"
    )
    thread = client.beta.threads.create(messages=[
        {
            "role": "user",
            "content": f"What is {number1} * {number2}?"
        }
    ])
    run = client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant.id
    )
    while True:
        run_state = client.beta.threads.runs.retrieve(
            run_id=run.id,
            thread_id=thread.id,
        )
        # Keep polling while the run is queued, running, or waiting on a tool call.
        if run_state.status not in ['queued', 'in_progress', 'requires_action']:
            break
        if run_state.status == 'requires_action':
            tool_call = run_state.required_action.submit_tool_outputs.tool_calls[0]
            run = client.beta.threads.runs.submit_tool_outputs(
                thread_id=thread.id,
                run_id=run.id,
                tool_outputs=[
                    multiplier.openai_tool_output(tool_call)
                ]
            )
            sleep(1)
        sleep(0.1)
    messages = client.beta.threads.messages.list(thread_id=thread.id)
    assert str(multiplier(number1, number2)) in messages.data[0].content[0].text.value.replace(",", "")
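To make the two helper formats concrete, these are the message shapes the helpers are described as returning above. The field names follow OpenAI's chat and assistants APIs; the tool_call_id value here is invented for illustration:

```python
# Hypothetical example values: the id is made up, and the numbers come
# from the chat-completion test above.

# adder.openai_tool_call(tool_call) returns a chat API "tool" message:
chat_tool_message = {
    "role": "tool",
    "tool_call_id": "call_abc123",  # copied from the model's tool_call
    "content": "4065655593",        # str(adder(3267182746, 798472847))
}

# multiplier.openai_tool_output(tool_call) returns an assistants API
# tool output for runs.submit_tool_outputs:
assistant_tool_output = {
    "tool_call_id": "call_abc123",
    "output": "4065655593",
}
```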

Contributing

Interested in contributing? We'd love your help to make this project better! The underlying APIs are still changing and the system is very much a first version.

License

This project is licensed under the MIT License.

Acknowledgements

  • Thanks to all the contributors and maintainers.
  • Special thanks to the Kung Fu masters such as Bruce Lee who inspired this project.
