🏟 Chat Arena

Multi-Agent Language Game Environments for LLMs


Chat Arena is a Python library designed to facilitate communication and collaboration between multiple large language models (LLMs). It provides the following features:

  • Language Game Environments: it provides a framework for creating multi-agent language game environments, and a set of general-purpose language-driven environments.
  • Infrastructure for Multi-LLM Interaction: it allows you to quickly create multiple LLM-powered player agents, and enables seamless communication between them.
  • User-friendly Interfaces: it provides both a Web browser UI and a command-line interface (CLI) to develop (prompt engineer) your LLM players to succeed in the environment.

ChatArena Architecture

Getting Started


Installation

Requirements:

  • Python >= 3.7
  • OpenAI API key (optional, for using GPT-3.5-turbo or GPT-4 as an LLM agent)

Install with pip:

pip install chatarena

or install from source:

git clone https://github.com/chatarena/chatarena
cd chatarena
pip install .

To use an OpenAI model (e.g. GPT-3.5-turbo or GPT-4) as an LLM agent, set your OpenAI API key:

export OPENAI_API_KEY="your_api_key_here"
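
If you prefer to set the key from inside Python (for example in a notebook) rather than in your shell, a minimal sketch using only the standard library is shown below; it assumes the OpenAI backend reads OPENAI_API_KEY from the environment when the backend is created, and the key value is of course a placeholder.

import os

# Placeholder key: set this before constructing any OpenAIChat backend
os.environ["OPENAI_API_KEY"] = "your_api_key_here"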

Launch the Demo Locally

The quickest way to see Chat Arena in action is via the demo Web UI. To launch the demo on your local machine, first clone the repository and install it from source (see the instructions above), then run the following command in the root directory of the repository:

gradio app.py

This will launch a demo server for Chat Arena, which you can access at http://127.0.0.1:7861/ in your browser.

Basic Usage

Key Concepts

  • Player: an agent that participates in the game environment and interacts with the other players. A player can be a human or a large language model (LLM), and is defined by its name, its backend, and its role (see the sketch after this list for an example with both kinds of backend).
    • Backend: a Python class that defines how a player produces its responses, e.g. by querying an LLM API or by asking a human for input. A backend is defined by its name, its type, and its parameters.
  • Environment: a Python class that defines the rules and state of a game. An environment is defined by its name, its type, and its parameters.
    • Moderator: a special player that drives the game: it controls state transitions and decides when the game ends. A moderator is defined by its name, its type, and its parameters.
  • Arena: a utility class that ties players and an environment together into a runnable game. An arena is defined by its name, its type, and its parameters.
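
To make these concepts concrete, the sketch below builds one LLM-backed player and one human-backed player. It assumes the library exposes a Human backend next to OpenAIChat (the import path and class name are assumptions; check chatarena.backends in your installed version), so treat it as a sketch rather than the canonical API.

from chatarena.agent import Player
from chatarena.backends import OpenAIChat
from chatarena.backends import Human  # assumed export; the human backend may live elsewhere

# An LLM-backed player: responses come from the OpenAI chat API
llm_player = Player(name="Alice", backend=OpenAIChat(),
                    role_desc="You are a debater arguing for ...")

# A human-backed player: responses are typed in by a person (e.g. via the CLI)
human_player = Player(name="Bob", backend=Human(),
                      role_desc="You are a debater arguing against ...")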

Step 1: Define Multiple Players with LLM Backend

from chatarena.agent import Player
from chatarena.backends import OpenAIChat

# Describe the environment (which is shared by all players)
environment_description = "It is in a university classroom ..."

# A "Professor" player
player1 = Player(name="Professor", backend=OpenAIChat(),
                 role_desc="You are a professor in ...",
                 global_prompt=environment_description)
# A "Student" player
player2 = Player(name="Student", backend=OpenAIChat(),
                 role_desc="You are a student who is interested in ...",
                 global_prompt=environment_description)
# A "Teaching Assistant" player
player3 = Player(name="Teaching assistant", backend=OpenAIChat(),
                 role_desc="You are a teaching assistant of module ...",
                 global_prompt=environment_description)

Step 2: Create a Language Game Environment

Next, create a language game environment and add the players to it. Here we use the built-in Conversation environment:

from chatarena.environments.conversation import Conversation

env = Conversation(player_names=[p.name for p in [player1, player2, player3]])

Step 3: Run the Language Game using Arena

Arena is a utility class to help you run language games.

from chatarena.arena import Arena

arena = Arena(players=[player1, player2, player3],
              environment=env, global_prompt=environment_description)
# Run the game for 10 steps
arena.run(num_steps=10)

# Alternatively, you can run your own main loop
for _ in range(10):
    arena.step()
    # Your code goes here ...
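
If you want your custom loop to stop as soon as the game is over instead of always running a fixed number of steps, something like the sketch below should work; it assumes the arena exposes its environment and that the environment implements is_terminal(), which is not confirmed by this README.

# Sketch: step until the environment signals termination (at most 10 steps)
for _ in range(10):
    arena.step()
    if arena.environment.is_terminal():  # assumed attribute and method names
        break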

You can easily save your gameplay history to a file:

arena.save_history(path=...)

and save your game config to a file:

arena.save_config(path=...)

Other Utilities

Load an Arena from a config file (here we use examples/nlp-classroom-3players.json from this repository as an example):

arena = Arena.from_config("examples/nlp-classroom-3players.json")
arena.run(num_steps=10)

Run the game in an interactive CLI:

arena.launch_cli()

Advanced Usage

ModeratedConversation: an LLM-driven Environment

We support a more advanced environment called ModeratedConversation that allows you to control the game dynamics using an LLM. The moderator is a special player that controls the game state transitions and determines when the game ends. For example, you can define a moderator that tracks the board state of a board game and ends the game when a player wins. You can try out our Tic-tac-toe and Rock-paper-scissors games to get a sense of how it works:

# Tic-tac-toe example
Arena.from_config("examples/tic-tac-toe.json").launch_cli()

# Rock-paper-scissors example
Arena.from_config("examples/rock-paper-scissors.json").launch_cli()

Creating your Custom Environment

You can define your own environment by extending the Environment class. We provide a tutorial that demonstrates how to define a custom environment, using our Chameleon environment as an example; a minimal skeleton is also sketched below.
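
To give a flavor of what such a subclass can look like, here is a schematic sketch. The import paths, the base-class constructor, and the exact set of methods to override (reset, get_next_player, get_observation, check_action, step, is_terminal, print) are assumptions modeled on the built-in environments and may differ between versions, so consult the tutorial and the Chameleon source for the authoritative interface.

from typing import List

# Assumed import locations, mirroring the built-in environments
from chatarena.environments.base import Environment, TimeStep
from chatarena.message import Message, MessagePool


class EchoChamber(Environment):
    """A toy environment: players speak in round-robin order for a fixed number of turns."""

    type_name = "echo_chamber"  # hypothetical identifier for config files

    def __init__(self, player_names: List[str], max_turns: int = 6, **kwargs):
        super().__init__(player_names=player_names, **kwargs)
        self.max_turns = max_turns
        self.message_pool = MessagePool()  # stores the conversation history
        self._turn = 0

    def reset(self) -> TimeStep:
        self._turn = 0
        self.message_pool.reset()
        # Return an initial timestep with an empty observation
        return TimeStep(observation=[],
                        reward={name: 0 for name in self.player_names},
                        terminal=False)

    def get_next_player(self) -> str:
        # Round-robin turn order
        return self.player_names[self._turn % len(self.player_names)]

    def get_observation(self, player_name=None) -> List[Message]:
        # Every player sees the full history in this toy example
        return self.message_pool.get_visible_messages(player_name, turn=self._turn)

    def check_action(self, action: str, player_name: str) -> bool:
        return bool(action.strip())  # reject empty messages

    def step(self, player_name: str, action: str) -> TimeStep:
        # Record the action as a message visible to all players, then advance the turn
        self.message_pool.append_message(
            Message(agent_name=player_name, content=action, turn=self._turn)
        )
        self._turn += 1
        return TimeStep(observation=self.get_observation(player_name),
                        reward={name: 0 for name in self.player_names},
                        terminal=self.is_terminal())

    def is_terminal(self) -> bool:
        return self._turn >= self.max_turns

    def print(self):
        self.message_pool.print()  # assumed helper; replace with your own logging if absent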

Contributing

We welcome contributions to improve and extend Chat Arena. Please follow these steps to contribute:

  1. Fork the repository.
  2. Create a new branch for your feature or bugfix.
  3. Commit your changes to the new branch.
  4. Create a pull request describing your changes.
  5. We will review your pull request and provide feedback or merge your changes.

Please ensure your code follows the existing style and structure.

Contact

If you have any questions or suggestions, feel free to open an issue or submit a pull request. You can also follow the lead developer on Twitter to get the latest updates.

Happy chatting!

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

chatarena-0.1.5.tar.gz (26.4 kB)


Built Distribution

chatarena-0.1.5-py3-none-any.whl (30.8 kB)


File details

Details for the file chatarena-0.1.5.tar.gz.

File metadata

  • Download URL: chatarena-0.1.5.tar.gz
  • Upload date:
  • Size: 26.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.9.6 readme-renderer/37.3 requests/2.28.2 requests-toolbelt/0.10.1 urllib3/1.26.15 tqdm/4.65.0 importlib-metadata/6.1.0 keyring/23.13.1 rfc3986/1.5.0 colorama/0.4.6 CPython/3.8.16

File hashes

Hashes for chatarena-0.1.5.tar.gz

  • SHA256: bb9809e2deddde36f4b80737c2589fdd1589436e0540e8ded81b7dbc6e198b0a
  • MD5: 05c7bb42da1f2515604ddd9b2ea01157
  • BLAKE2b-256: f92bc83eb83f44a2fc588cdc7c78fc270f336a934ec01854792504878dbc9df4


File details

Details for the file chatarena-0.1.5-py3-none-any.whl.

File metadata

  • Download URL: chatarena-0.1.5-py3-none-any.whl
  • Upload date:
  • Size: 30.8 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/3.8.0 pkginfo/1.9.6 readme-renderer/37.3 requests/2.28.2 requests-toolbelt/0.10.1 urllib3/1.26.15 tqdm/4.65.0 importlib-metadata/6.1.0 keyring/23.13.1 rfc3986/1.5.0 colorama/0.4.6 CPython/3.8.16

File hashes

Hashes for chatarena-0.1.5-py3-none-any.whl

  • SHA256: e4f42a436a69de9ccaafb0bdcabd3431f5276f14030f8b6ba2a0e408b4c5a637
  • MD5: 35590479ee2eabe38cc0f105d97b8ce3
  • BLAKE2b-256: bef83b460507d697a33346f0f4c54c4d10f72c4daf22c07dac8dba41cf6d8b91

