
Probabilistic Generative Model Programming


Outlines

Build reliable workflows based on interactions with generative models.


Outlines allows you to control and diagnose interactions with LLMs more effectively. Modern language models are powerful and versatile, but the way they interface with existing systems can be very brittle, their outputs can be unreliable, and complex workflows (agents) can introduce a lot of error-prone code duplication.

Outlines provides robust prompting primitives that separate prompting from execution logic, leading to simple implementations of few-shot generation, ReAct, meta-prompting, agents, etc. It helps developers control text generation and produce predictable outputs that make interaction with user code more robust. Its sampling-first approach makes it easier to diagnose issues with model-generated output and to implement robust generation methods such as self-consistency or DiVeRSe.

Outlines is designed as a library that integrates well with the broader Python environment. Generation can be interleaved with control flow or custom function calls, and prompts can be imported from other modules or libraries.

Features

  • Simple and powerful prompting primitives based on the Jinja templating engine
  • Interleaving of completions with loops, conditionals, and custom Python functions
  • Caching of generations
  • Integration with OpenAI and Hugging Face models
  • Controlled generation, including multiple choice, type constraints, and dynamic stopping
  • Sampling of multiple sequences

Installation

Outlines is available on PyPI:

pip install outlines

Prompting

Writing prompts by concatenating strings in pure Python quickly becomes cumbersome: the prompt-building logic gets entangled with the rest of the program, and the structure of the rendered prompt is obfuscated. Outlines makes it easier to write and manage prompts by encapsulating templates inside "template functions".

These functions make it possible to neatly separate the prompt logic from the general program logic; they can be imported from other modules and libraries.

Template functions require no superfluous abstraction; they rely on the Jinja2 templating engine to help build complex prompts in a concise manner:

import outlines.text as text
import outlines.models as models


examples = [
    ("The food was disgusting", "Negative"),
    ("We had a fantastic night", "Positive"),
    ("Recommended", "Positive"),
    ("The waiter was rude", "Negative")
]

@text.prompt
def labelling(to_label, examples):
    """You are a sentiment-labelling assistant.

    {% for example in examples %}
    {{ example[0] }} // {{ example[1] }}
    {% endfor %}
    {{ to_label }} //
    """

model = models.text_completion.openai("text-davinci-003")
prompt = labelling("Just awesome", examples)
answer = model(prompt)
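
For illustration, the call above renders the few-shot examples into a single prompt string. A rough stdlib-only sketch of what the template function produces (the real library renders this with Jinja2; this mimics it with plain string formatting):

```python
# Stdlib-only sketch of the few-shot prompt rendered by the template above.
# This is an illustration, not Outlines' actual rendering code.
def render_labelling(to_label, examples):
    lines = ["You are a sentiment-labelling assistant.", ""]
    for text_, label in examples:
        lines.append(f"{text_} // {label}")
    lines.append(f"{to_label} //")
    return "\n".join(lines)

examples = [
    ("The food was disgusting", "Negative"),
    ("We had a fantastic night", "Positive"),
]

print(render_labelling("Just awesome", examples))
# You are a sentiment-labelling assistant.
#
# The food was disgusting // Negative
# We had a fantastic night // Positive
# Just awesome //
```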

Chaining with loops and conditionals (example)

Outlines comes with very few abstractions, and is designed to blend into existing code and integrate with the rest of the ecosystem.

reviews = ["Just awesome", "Avoid", "Will come back"]

def send_notification(review):
    """This function sends a notification with the review's content."""
    ...

for review in reviews:
    prompt = labelling(review, examples)
    answer = model(prompt)
    if answer == "Positive":
        send_notification(review)

Agents (example)

Outlines makes building agents like AutoGPT, BabyAGI, ViperGPT or Transformers Agent easier by removing boilerplate prompting code.

Tools

We can teach language models to call external functions to get additional information or perform tasks by encoding the functions' descriptions in the prompt. To avoid duplicating information between the function definition and the description passed to the prompt, we define custom Jinja filters that can extract the function's name, description, signature and source:

from typing import Callable, List
import outlines.text as text


def google_search(query: str):
    """Google Search"""
    pass


def wikipedia_search(query: str):
    """Wikipedia Search"""
    pass


@text.prompt
def agent(tools: List[Callable]):
    """AVAILABLE COMMANDS:

    {% for tool in tools %}
    TOOL
    {{ tool | name }}, {{ tool | description }}, args: {{ tool | signature }}
    {{ tool | source }}
    {% endfor %}
    """


prompt = agent([google_search, wikipedia_search])
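
These filters can be understood in terms of the standard inspect module. A hedged sketch of what the name, description and signature filters might extract (plausible stand-ins, not Outlines' actual implementation):

```python
import inspect

def google_search(query: str):
    """Google Search"""

# Plausible stand-ins for the custom Jinja filters; the library's
# real filters may differ in detail.
def name(fn):
    return fn.__name__

def description(fn):
    return inspect.getdoc(fn)

def signature(fn):
    return str(inspect.signature(fn))

print(name(google_search))         # google_search
print(description(google_search))  # Google Search
print(signature(google_search))    # (query: str)
```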

Response models

We can instruct models to return their output in a pre-defined format, often JSON. To avoid duplicating information between the function definition and the description passed to the prompt, we define a custom Jinja filter that can extract the expected response's schema:

from pydantic import BaseModel
import outlines.text as text


class Joke(BaseModel):
    joke: str
    explanation: str


@text.prompt
def joke_ppt(response_model):
    """Tell a joke and explain why the joke is funny.

    RESPONSE FORMAT:
    {{ response_model | schema }}
    """


joke_ppt(Joke)
# Tell a joke and explain why the joke is funny.
#
# RESPONSE FORMAT:
# {
#    "joke": "The joke",
#    "explanation": "The explanation of why the joke is funny"
# }
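
The schema filter can be pictured as walking the model's fields and emitting a JSON skeleton. A stdlib-only sketch using a dataclass in place of pydantic (the field descriptions and the schema helper here are hypothetical, for illustration only):

```python
import json
from dataclasses import dataclass, field, fields

@dataclass
class Joke:
    # Descriptions are carried in field metadata for this sketch;
    # a pydantic model would use Field(description=...) instead.
    joke: str = field(metadata={"description": "The joke"})
    explanation: str = field(
        metadata={"description": "The explanation of why the joke is funny"}
    )

def schema(model):
    """Render a JSON skeleton mapping each field to its description."""
    return json.dumps(
        {f.name: f.metadata["description"] for f in fields(model)},
        indent=2,
    )

print(schema(Joke))
# {
#   "joke": "The joke",
#   "explanation": "The explanation of why the joke is funny"
# }
```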

Controlled generation

The first step towards reliability of systems that include large language models is to ensure that there is a well-defined interface between their output and user-defined code. Outlines provides ways to control the generation of language models to make their output more predictable.

You can stop the generation after a given sequence has been found:

answer = model("Tell me a one-sentence joke.", stop_at=["."])

You can reduce the completion to a choice between multiple possibilities:

prompt = labelling("Just awesome", examples)
answer = model(prompt, is_in=["Positive", "Negative"])

You can require the generated sequence to be an int or a float:

import outlines.models as models


model = models.text_completion.hf("sshleifer/tiny-gpt2")
answer = model("2 + 2 = ", type="int")
print(answer)
# 4

model = models.text_completion.hf("sshleifer/tiny-gpt2")
answer = model("1.7 + 3.2 = ", type="float")
print(answer)
# 4.9
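
Under the hood these constraints shape generation itself, but their effect on the interface can be pictured as validation of the final output. A stdlib sketch of the three contracts above (not the library's mechanism, which constrains tokens during sampling; the helper names are hypothetical):

```python
def stop_at(text, stops):
    """Truncate text at the first occurrence of any stop sequence (inclusive)."""
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            text = text[: idx + len(stop)]
    return text

def is_in(text, choices):
    """Check that the completion is one of the allowed choices."""
    if text not in choices:
        raise ValueError(f"{text!r} is not one of {choices}")
    return text

def typed(text, type_):
    """Parse the completion as the requested type."""
    return {"int": int, "float": float}[type_](text)

print(stop_at("A joke. And more text", ["."]))      # A joke.
print(is_in("Positive", ["Positive", "Negative"]))  # Positive
print(typed("4", "int"), typed("4.9", "float"))     # 4 4.9
```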

Sampling (uncertainty, simulation-based inference)

Outlines is strictly sampling-based, and focuses on methods such as self-consistency, adaptive consistency, DiVeRSe, Tree of Thoughts, lattice sampling, etc. Several samples can be obtained with the num_samples keyword argument:

import outlines.models as models


model = models.text_completion.hf("sshleifer/tiny-gpt2")
answer = model("2 + 2 = ", num_samples=5)
print(answer)
# [4, 5, 4, 4, 4]
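For instance, self-consistency samples several completions and keeps the majority answer. A minimal sketch with collections.Counter, using the sample list from the output above:

```python
from collections import Counter

def self_consistency(samples):
    """Majority vote over multiple sampled answers."""
    return Counter(samples).most_common(1)[0][0]

answers = [4, 5, 4, 4, 4]  # e.g. five samples of "2 + 2 = "
print(self_consistency(answers))
# 4
```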

The focus on sampling allows us to explore different ideas, such as using the diversity of answers to evaluate the model's uncertainty, or simulation-based inference to optimize the prompt.

Contributing

What contributions?

We currently only accept bug fixes and documentation contributions. If you have a feature request, please start a new discussion. The issue tracker is only intended for actionable items.

How to contribute?

Run pip install -e .[test] or conda env create -f environment.yml. To build the documentation you will also need to run pip install -r requirements-doc.txt.

Before pushing your code to the repository, please run pre-commit run --all-files and pytest to make sure that the code is formatted correctly and that the tests pass.

Do not hesitate to open a draft PR before your contribution is ready, especially if you have questions and/or need feedback.
