
Interactive Composition Explorer 🧊

ICE is a Python library and trace visualizer for language model programs.

Screenshot

Execution trace visualized in ICE

Features

  • Run language model recipes in different modes: humans, human+LM, LM (a minimal example follows this list)
  • Inspect the execution traces in your browser for debugging
  • Define and use new language model agents, e.g. chain-of-thought agents
  • Run recipes quickly by parallelizing language model calls
  • Reuse component recipes such as question-answering, ranking, and verification
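
As a rough illustration of what a recipe looks like, here is a minimal sketch modeled on the ICE Primer. Names such as recipe.main, and running the file directly with Python, are taken from the Primer and should be treated as assumptions that may differ between ICE versions:

    # hello.py -- a minimal recipe sketch in the style of the ICE Primer.
    # recipe.main is taken from the Primer; treat the exact API as an assumption.
    from ice.recipe import recipe

    async def say_hello() -> str:
        # A trivial recipe: a single step with no agent calls.
        return "Hello, world!"

    recipe.main(say_hello)

You would run this with something like python hello.py and, with the ICE server running (see Getting started below), inspect the resulting execution trace in your browser.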

ICE is pre-1.0

:warning: The ICE interface is under active development, and the API may change at any point. This includes removing functionality, renaming methods, splitting ICE into multiple projects, and other similarly disruptive changes. Use at your own risk.

Requirements

ICE requires Python 3.10. If you only have other Python versions installed, we recommend using pyenv to install Python 3.10 and manage multiple Python versions.

If you use Windows, you'll need to run ICE inside WSL (Windows Subsystem for Linux).

Getting started

  1. As part of general good Python practice, consider first creating and activating a virtual environment to avoid installing ICE 'globally'. For example:

    python3.10 -m venv venv
    source venv/bin/activate
    
  2. Install ICE:

    pip install ought-ice
    
  3. Set required secrets in ~/.ought-ice/.env. See .env.example for the format.

  4. Start ICE in its own terminal and leave it running:

    python -m ice.server
    
  5. To learn more, go through the Primer.

Developing ICE

  1. If you want to make changes to ICE itself, clone the repository, install it in editable mode, and start the UI development server:

    python3.10 -m venv venv
    source venv/bin/activate
    pip install --upgrade pip
    pip install -e '.[dev]' --config-settings editable_mode=compat
    npm --prefix ui ci
    npm --prefix ui run dev
    

Terminology

  • Recipes are decompositions of a task into subtasks.

    The meaning of a recipe is: If a human executed these steps and did a good job at each workspace in isolation, the overall answer would be good. This decomposition may be informed by what we think ML can do at this point, but the recipe itself (as an abstraction) doesn’t know about specific agents.

  • Agents perform atomic subtasks of predefined shapes, like completion, scoring, or classification.

    Agents don't know which recipe is calling them, and they don't maintain state between subtasks. Agents generally attempt every subtask they're asked to do (however badly), but some will not have implementations for certain task types.

  • The mode in which a recipe runs is a global setting that can affect every agent call, for instance whether subtasks are routed to humans or to language models. Recipes can also run with RecipeSettings, which can map a task type to a specific agent_name, overriding which agent is used for that specific type of task. A sketch of how recipes and agents fit together follows this list.
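
To make the recipe/agent split concrete, here is a small sketch in the style of the ICE Primer: the recipe owns the decomposition, while the agent only sees an atomic completion subtask. Names such as recipe.agent(), the complete(prompt=..., stop=...) signature, and the --mode flag follow the Primer and should be treated as assumptions that may have changed:

    # qa.py -- sketch of a recipe delegating one atomic subtask to an agent.
    # recipe.agent() and .complete(...) follow the ICE Primer; the exact
    # signatures are assumptions and may differ between versions.
    from ice.recipe import recipe

    def make_qa_prompt(question: str) -> str:
        # The decomposition (here: a single Q&A step) lives in the recipe.
        return f'Answer the following question:\n\nQuestion: "{question}"\nAnswer: "'

    async def answer(question: str = "What is 2 + 2?") -> str:
        prompt = make_qa_prompt(question)
        # The agent performs the atomic completion; it knows nothing about
        # the recipe that is calling it.
        return await recipe.agent().complete(prompt=prompt, stop='"')

    recipe.main(answer)

The mode is then chosen at run time rather than inside the recipe, e.g. something like python qa.py --mode machine (the exact flag values are also an assumption), and the execution trace can be inspected in the browser while the ICE server is running.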

Additional resources

  1. Join the ICE Slack channel to collaborate with other people composing language model tasks. You can also use it to ask questions about using ICE.

  2. Watch the recording of Ought's Lab Meeting to understand the high-level goals for ICE, how it interacts with Ought's other work, and how it contributes to alignment research.

  3. Read the ICE announcement post for another introduction.

Contributions

ICE is an open-source project by Ought. We're an applied ML lab building the AI research assistant Elicit.

We welcome community contributions:

  • If you're a developer, you can dive into the codebase and help us fix bugs, improve code quality and performance, or add new features.
  • If you're a language model researcher, you can help us add new agents or improve existing ones, and refine or create new recipes and recipe components.

For larger contributions, open an issue for discussion before submitting a PR.

And for even larger contributions, join us - we're hiring!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

ought-ice-0.4.0.tar.gz (1.5 MB)

Uploaded Source

Built Distribution

ought_ice-0.4.0-py3-none-any.whl (769.6 kB)

Uploaded Python 3

File details

Details for the file ought-ice-0.4.0.tar.gz.

File metadata

  • Download URL: ought-ice-0.4.0.tar.gz
  • Upload date:
  • Size: 1.5 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.4

File hashes

Hashes for ought-ice-0.4.0.tar.gz

  • SHA256: b0f941ff7cc3e1200a25cc9c2b883c76df15dfae0686cf7a22eca9c22e0373d3
  • MD5: dffb273541910bebf51750a5a25b07fa
  • BLAKE2b-256: 93fc807d71578881bc8f1403f8459c5aac826cd389f8e44c5e96d60d21b0a4bb

See more details on using hashes here.

File details

Details for the file ought_ice-0.4.0-py3-none-any.whl.

File metadata

  • Download URL: ought_ice-0.4.0-py3-none-any.whl
  • Upload date:
  • Size: 769.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/4.0.1 CPython/3.10.4

File hashes

Hashes for ought_ice-0.4.0-py3-none-any.whl

  • SHA256: e70fd33465b15686f1a4f654e2e20df34a1e6566191c689718f49dbbef1bdfc1
  • MD5: f8730ac6f1df13e1db32a2283cfc3731
  • BLAKE2b-256: 1b272358d3d41e66a68587035a0311d7398b7c49889db2b4bdfefeefa76e9334

See more details on using hashes here.
