# Think

Create programs that think, using LLMs.
Think is a Python package for creating thinking programs.
It provides simple but powerful primitives for composable and robust integration of Large Language Models (LLMs) into your Python programs.
Think supports OpenAI and Anthropic models.
## Examples
Ask a question:
```python
import asyncio

from think import LLM, ask

llm = LLM.from_url("anthropic:///claude-3-haiku-20240307")

async def haiku(topic):
    return await ask(llm, "Write a haiku about {{ topic }}", topic=topic)

print(asyncio.run(haiku("computers")))
```
Get answers as structured data:
```python
import asyncio

from think import LLM, LLMQuery

llm = LLM.from_url("openai:///gpt-4o-mini")

class CityInfo(LLMQuery):
    """
    Give me basic information about {{ city }}.
    """
    name: str
    country: str
    population: int
    latitude: float
    longitude: float

async def city_info(city):
    return await CityInfo.run(llm, city=city)

info = asyncio.run(city_info("Paris"))
print(f"{info.name} is a city in {info.country} with {info.population} inhabitants.")
```
Integrate AI with custom tools:
```python
import asyncio
from datetime import date

from think import LLM
from think.llm.chat import Chat

llm = LLM.from_url("openai:///gpt-4o-mini")

def current_date() -> str:
    """
    Get the current date.

    :returns: current date in YYYY-MM-DD format
    """
    return date.today().isoformat()

async def days_to_xmas() -> str:
    chat = Chat("How many days are left until Christmas?")
    return await llm(chat, tools=[current_date])

print(asyncio.run(days_to_xmas()))
```
Use vision (with models that support it):
```python
import asyncio

from think import LLM
from think.llm.chat import Chat

llm = LLM.from_url("openai:///gpt-4o-mini")

async def describe_image(path):
    # Read the raw image bytes and attach them to the user message
    with open(path, "rb") as f:
        image_data = f.read()
    chat = Chat().user("Describe the image in detail", images=[image_data])
    return await llm(chat)

print(asyncio.run(describe_image("path/to/image.jpg")))
```
## Quickstart
Install via pip:

```
pip install think-llm
```

Note that the package name is `think-llm`, not `think`.
You can set up your LLM credentials via environment variables, for example:
```shell
export OPENAI_API_KEY=<your-openai-key>
export ANTHROPIC_API_KEY=<your-anthropic-key>
```
Or pass them directly in the model URL:
```python
from think import LLM

llm = LLM.from_url(f"openai://{YOUR_OPENAI_KEY}@/gpt-4o-mini")
```
In practice, you might want to store the entire model URL in an environment variable and just call `LLM.from_url(os.environ["LLM_URL"])`.
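For example, a minimal sketch of that pattern (the `LLM_URL` variable name is just a convention, not something the library requires):

```python
import os

from think import LLM

# e.g. export LLM_URL="openai:///gpt-4o-mini"
llm = LLM.from_url(os.environ["LLM_URL"])
```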
## Model URL
Think uses a URL-like format to specify the model to use. The format is:

```
provider://[api-key@]server/model-name
```

- `provider` is the model provider, e.g. `openai` or `anthropic`
- `api-key` is the API key for the model provider (optional if set via environment)
- `server` is the server to use; this is useful for local LLMs, while for OpenAI and Anthropic it should be empty to use their default base URL
- `model-name` is the name of the model to use
Using the URL format allows you to easily switch between models and providers without changing your code, or to use multiple models in the same program without hardcoding anything.
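For example, these are the model URLs used in the snippets above, plus the inline-key variant from the Quickstart (with a placeholder key):

```
anthropic:///claude-3-haiku-20240307
openai:///gpt-4o-mini
openai://<your-openai-key>@/gpt-4o-mini
```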
## Roadmap
Features and capabilities that are planned for the near future:
- documentation
- support for other LLM APIs via LiteLLM or similar
- support for local LLMs via HuggingFace
- more examples
If you want to help with any of these, look at the open issues, join the conversation, and submit a PR; please read the Contributing section below first.
## Contributing
Contributions are welcome!
To ensure that your contribution is accepted, please follow these guidelines:
- open an issue to discuss your idea before you start working on it, or if there's already an issue for your idea, join the conversation there and explain how you plan to implement it
- make sure that your code is well documented (docstrings, type annotations, comments, etc.) and tested (test coverage should only go up)
- make sure that your code is formatted and type-checked with `ruff` (default settings)
## Copyright
Copyright (C) 2023-2024 Senko Rasic and Think contributors. You may use and/or distribute this project under the terms of the MIT license. See the LICENSE file for more details.