Probabilistic Generative Model Programming
Outlines
Build reliable workflows based on interactions with generative models.
Prompting • Controlled generation • Agents • Sampling • Parallel execution • Examples
Outlines allows you to control and diagnose interactions with LLMs more effectively. Modern language models are powerful and versatile, but the way they interface with existing systems can be very brittle, their outputs can be unreliable, and complex workflows (agents) can introduce a lot of error-prone code duplication. Outlines provides robust prompting primitives that separate the prompting from the execution logic and lead to simple implementations of few-shot generation, ReAct, meta-prompting, agents, etc. Outlines helps developers control text generation and produce predictable outputs that make the interaction with user code more robust. Its sampling-first approach allows one to diagnose issues with model-generated output more easily, and to implement more robust generation methods such as self-consistency or DiVeRSe.
Outlines is designed as a library that integrates well with the broader Python environment. Generation can be interleaved with control flow or custom function calls, and prompts can be imported from other modules or libraries.
Features
- Simple and powerful prompting primitives based on the Jinja templating engine
- Interleave completions with loops, conditionals, and custom Python functions
- Caching of generations
- Integration with OpenAI and Hugging Face models
- Controlled generation, including multiple choice, type constraints, and dynamic stopping
- Sampling of multiple sequences
- Vectorized execution
Installation
Outlines is available on PyPI:
pip install outlines
Prompting
Writing prompts by concatenating strings in pure Python quickly becomes cumbersome: the prompt-building logic gets entangled with the rest of the program, and the structure of the rendered prompt is obfuscated. Outlines makes it easier to write and manage prompts by encapsulating templates inside "template functions".
These functions make it possible to neatly separate the prompt logic from the general program logic; they can be imported from other modules and libraries.
Template functions require no superfluous abstraction; they use the Jinja2 templating engine to help build complex prompts in a concise manner:
import outlines.text as text
import outlines.models as models

examples = [
    ("The food was disgusting", "Negative"),
    ("We had a fantastic night", "Positive"),
    ("Recommended", "Positive"),
    ("The waiter was rude", "Negative"),
]

@text.prompt
def labelling(to_label, examples):
    """You are a sentiment-labelling assistant.

    {% for example in examples %}
    {{ example[0] }} // {{ example[1] }}
    {% endfor %}
    {{ to_label }} //
    """

model = models.text_completion.openai("text-davinci-003")
prompt = labelling("Just awesome", examples)
answer = model(prompt)
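For illustration, calling the template function renders the few-shot prompt below. The exact whitespace handling depends on how Outlines trims the template, so treat this output as approximate:

print(labelling("Just awesome", examples))
# You are a sentiment-labelling assistant.
# The food was disgusting // Negative
# We had a fantastic night // Positive
# Recommended // Positive
# The waiter was rude // Negative
# Just awesome //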
Chaining with loops and conditionals (example)
Outlines comes with very few abstractions, and is designed to blend into existing code and integrate with the rest of the ecosystem.
reviews = ["Just awesome", "Avoid", "Will come back"]

def send_notification(review):
    """This function sends a notification with the review's content."""
    ...

for review in reviews:
    prompt = labelling(review, examples)
    answer = model(prompt)
    if answer == "Positive":
        send_notification(review)
Agents (example)
Outlines makes building agents like AutoGPT, BabyAGI, ViperGPT or Transformers Agent easier by removing boilerplate prompting code.
Tools
We can teach language models to call external functions to get additional information or perform tasks by encoding the functions' descriptions in the prompt. To avoid duplicating information between the function definition and the description passed to the prompt, we define custom Jinja filters that can extract the function's name, description, signature, and source:
from typing import Callable, List

import outlines.text as text

def google_search(query: str):
    """Google Search"""
    pass

def wikipedia_search(query: str):
    """Wikipedia Search"""
    pass

@text.prompt
def agent(tools: List[Callable]):
    """AVAILABLE COMMANDS:

    {% for tool in tools %}
    TOOL
    {{ tool | name }}, {{ tool | description }}, args: {{ tool | signature }}
    {{ tool | source }}
    {% endfor %}
    """

prompt = agent([google_search, wikipedia_search])
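For illustration, the rendered prompt should look roughly like the following; the exact formatting produced by each filter is an assumption here:

print(prompt)
# AVAILABLE COMMANDS:
#
# TOOL
# google_search, Google Search, args: (query: str)
# def google_search(query: str):
#     """Google Search"""
#     pass
#
# TOOL
# wikipedia_search, Wikipedia Search, args: (query: str)
# def wikipedia_search(query: str):
#     """Wikipedia Search"""
#     pass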
Response models
We can instruct models to return their output in a pre-defined format, often JSON. To avoid duplicating information between the function definition and the description passed to the prompt, we define a custom Jinja filter that can extract the expected response's schema:
from pydantic import BaseModel

import outlines.text as text

class Joke(BaseModel):
    joke: str
    explanation: str

@text.prompt
def joke_ppt(response_model):
    """Tell a joke and explain why the joke is funny.

    RESPONSE FORMAT:
    {{ response_model | schema }}
    """

joke_ppt(Joke)
# Tell a joke and explain why the joke is funny.
#
# RESPONSE FORMAT:
# {
#     "joke": "The joke",
#     "explanation": "The explanation of why the joke is funny"
# }
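The completion can then be validated against the same Pydantic model. A minimal sketch, assuming the completion returns well-formed JSON (model is the OpenAI model defined in the prompting section):

answer = model(joke_ppt(Joke))
joke = Joke.parse_raw(answer)  # raises pydantic.ValidationError if the output does not match the schema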
Controlled generation
The first step towards reliability of systems that include large language models is to ensure that there is a well-defined interface between their output and user-defined code. Outlines provides ways to control the generation of language models to make their output more predictable.
You can stop the generation after a given sequence has been found:
answer = model("Tell me a one-sentence joke.", stop_at=["."])
You can reduce the completion to a choice between multiple possibilities:
prompt = labelling("Just awesome", examples)
answer = model(prompt, is_in=["Positive", "Negative"])
You can require the generated sequence to be an int or a float:
import outlines.models as models
model = models.text_completion.hf("sshleifer/tiny-gpt2")
answer = model("2 + 2 = ", type="int")
print(answer)
# 4
model = models.text_completion.hf("sshleifer/tiny-gpt2")
answer = model("1.7 + 3.2 = ", type="float")
print(answer)
# 4.9
Sampling (uncertainty, simulation-based inference)
Outlines is strictly sampling-based, and focuses on methods such as self-consistency, adaptive consistency, DiVeRSe, Tree of Thoughts, lattice sampling, etc. Several samples can be obtained using the num_samples keyword argument:
import outlines.models as models
model = models.text_completion.hf("sshleifer/tiny-gpt2")
answer = model("2 + 2 = ", num_samples=5)
print(answer)
# [4, 5, 4, 4, 4]
The focus on sampling allows us to explore different ideas, such as using the diversity of answers to evaluate the model's uncertainty, or simulation-based inference to optimize the prompt.
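For instance, here is a minimal self-consistency sketch: sample several answers and keep the most frequent one by majority vote. The model and prompt are reused from the example above; the voting logic is plain Python:

from collections import Counter

# Sample five answers and treat the most frequent one as the final answer
answers = model("2 + 2 = ", num_samples=5)
final_answer, count = Counter(answers).most_common(1)[0]
print(final_answer)
# 4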
Vectorization and parallel execution
You can pass prompts as a nested list (or NumPy array) to Outlines models:
import outlines.models as models

model = models.text_completion.openai("text-davinci-003")

prompts = [
    ["Translate 'Hello' into Italian", "Translate 'Hello' into French"],
    ["Translate 'Hello' into Spanish", "Translate 'Hello' into German"],
]
answers = model(prompts)
print(answers.shape)
# (2, 2)
Outlines also provides an outlines.vectorize decorator that vectorizes any function. If the function is async, the requests will be run concurrently:
import aiohttp
import outlines

@outlines.vectorize
async def wikipedia_search(query):
    # Query the Wikipedia API for the introductory extract of the page
    url = f"https://en.wikipedia.org/w/api.php?format=json&action=query&prop=extracts&exintro&explaintext&redirects=1&titles={query}&origin=*"
    async with aiohttp.ClientSession() as session:
        async with session.get(url) as response:
            return await response.text()

results = wikipedia_search([["Cat", "Dog"], ["Bird", "Horse"]])
print(results.shape)
# (2, 2)
This feature allows you to run multiple workflows in parallel: for instance, to evaluate a workflow on several different inputs while iterating on it (and avoid overfitting it to a single example), or to run production workflows over many inputs at once.
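As a sketch, the labelling workflow from the prompting section could itself be vectorized and run over a batch of reviews. This assumes outlines.vectorize also handles plain synchronous functions, per the description above; labelling, examples, and model are reused from the earlier examples:

import outlines

@outlines.vectorize
def label_review(review):
    # Build the few-shot prompt and query the model for a single review;
    # vectorization maps this function over the whole array of reviews
    prompt = labelling(review, examples)
    return model(prompt)

answers = label_review(["Just awesome", "Avoid", "Will come back"])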
Contributing
What contributions?
We currently only accept bug fixes and documentation contributions. If you have a feature request, please start a new discussion. The issue tracker is only intended for actionable items.
How to contribute?
Run pip install -e .[test] or conda env create -f environment.yml. To build the documentation you will also need to run pip install -r requirements-doc.txt.
Before pushing your code to the repository, please run pre-commit run --all-files and pytest to make sure that the code is formatted correctly and that the tests pass.
Do not hesitate to open a draft PR before your contribution is ready, especially if you have questions and/or need feedback.