A standard API for Multi-Objective Multi-Agent Decision making and a diverse set of reference environments.

Project description

Project Status: Active – The project has reached a stable, usable state and is being actively developed.

MOMAland is an open source Python library for developing and comparing multi-objective multi-agent reinforcement learning algorithms by providing a standard API to communicate between learning algorithms and environments, as well as a standard set of environments compliant with that API. Essentially, the environments follow the standard PettingZoo APIs, but return vectorized rewards as numpy arrays instead of scalar values.

The documentation website is at https://momaland.farama.org/, and we have a public discord server (which we also use to coordinate development work) that you can join here.

Environments

MOMAland includes environments taken from the MOMARL literature, as well as multi-objective versions of classical environments, such as SISL or Butterfly. The full list of environments is available at https://momaland.farama.org/environments/all-envs/.

Installation

To install MOMAland, use:

pip install momaland

This does not include dependencies for all components of MOMAland (not everything is required for the basic usage, and some can be problematic to install on certain systems).

  • pip install "momaland[testing]" to install dependencies for API testing.
  • pip install "momaland[learning]" to install dependencies for the supplied learning algorithms.
  • pip install "momaland[all]" for all dependencies for all components.
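To check that the base installation works, here is a minimal sketch (it reuses the momultiwalker_stability_v0 environment from the API example below; any other MOMAland environment module would do):

from momaland.envs.momultiwalker_stability import momultiwalker_stability_v0

env = momultiwalker_stability_v0.env()
env.reset(seed=0)
print(env.agents)  # list of agent names, e.g. ['walker_0', 'walker_1', 'walker_2']
env.close()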

API

Similar to PettingZoo, the MOMAland API models environments as simple Python env classes. Creating environment instances and interacting with them is very simple; here is an example using the "momultiwalker_stability_v0" environment:

from momaland.envs.momultiwalker_stability import momultiwalker_stability_v0 as _env
from momaland.utils.aec_wrappers import LinearizeReward  # wrapper used below to scalarize rewards
import numpy as np

# .env() function will return an AEC environment, as per PZ standard
env = _env.env(render_mode="human")

env.reset(seed=42)
for agent in env.agent_iter():
    # vec_reward is a numpy array
    observation, vec_reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample() # this is where you would insert your policy

    env.step(action)
env.close()

# Optionally, you can scalarize the vector reward with weights, turning the problem
# into single-objective multi-agent RL (i.e., standard PettingZoo).
# Each agent can be assigned different weights over its objectives.
weights = {
    "walker_0": np.array([0.7, 0.3]),
    "walker_1": np.array([0.5, 0.5]),
    "walker_2": np.array([0.2, 0.8]),
}
env = LinearizeReward(env, weights)
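
After wrapping, the interaction loop stays the same as above; the only difference is that env.last() now returns a scalar reward (the weighted combination of the agent's objectives) instead of a numpy array. A minimal sketch of running the wrapped environment:

# Same AEC loop as before, but on the linearized (single-objective) environment
env.reset(seed=42)
for agent in env.agent_iter():
    # reward is now a single float instead of a numpy array
    observation, reward, termination, truncation, info = env.last()

    if termination or truncation:
        action = None
    else:
        action = env.action_space(agent).sample()

    env.step(action)
env.close()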

For details on multi-objective multi-agent RL definitions, see Multi-Objective Multi-Agent Decision Making: A Utility-based Analysis and Survey.

You can also check out more examples in this Colab notebook: MOMAland Demo in Colab

Learning Algorithms

We provide a set of learning algorithms that are compatible with the MOMAland environments. The learning algorithms are implemented in the learning/ directory. To keep everything as self-contained as possible, each algorithm is implemented in a single file (following CleanRL's philosophy).

Nevertheless, we reuse tools provided by other libraries, like multi-objective evaluations and performance indicators from MORL-Baselines.
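
As a self-contained illustration of what such multi-objective evaluation involves (this is a sketch, not MORL-Baselines code), here is how the non-dominated (Pareto) set of a batch of vector returns can be extracted:

import numpy as np

def non_dominated(points):
    """Return the rows of `points` that no other row Pareto-dominates (higher is better)."""
    points = np.asarray(points, dtype=float)
    keep = []
    for i, p in enumerate(points):
        # q dominates p if q >= p on every objective and q > p on at least one
        dominators = np.all(points >= p, axis=1) & np.any(points > p, axis=1)
        if not dominators.any():
            keep.append(i)
    return points[keep]

# Example: vector returns of three joint policies on a 2-objective task
evals = np.array([[1.0, 3.0], [2.0, 2.0], [1.5, 1.5]])
print(non_dominated(evals))  # [[1. 3.] [2. 2.]] -- [1.5, 1.5] is dominated by [2., 2.]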

Here is a list of algorithms that are currently implemented:

Name                                 | Single/Multi-policy | Reward     | Utility             | Observation space | Action space
MOMAPPO (OLS) (continuous, discrete) | Multi               | Team       | Team / Linear       | Any               | Any
Scalarized IQL                       | Single              | Individual | Individual / Linear | Discrete          | Discrete
Centralization wrapper               | Any                 | Team       | Team / Any          | Discrete          | Discrete
Linearization wrapper                | Single              | Any        | Individual / Linear | Any               | Any
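
The "Linear" utilities in the table are weighted sums of the objectives, which matches what the LinearizeReward wrapper in the API example is meant to do. A minimal sketch of this scalarization for a single agent at a single step:

import numpy as np

weights = np.array([0.7, 0.3])       # the agent's preference over its two objectives
vec_reward = np.array([1.0, -0.5])   # vector reward received at one step
scalar_reward = float(np.dot(weights, vec_reward))  # 0.7*1.0 + 0.3*(-0.5) = 0.55
print(scalar_reward)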

Environment Versioning

MOMAland keeps strict versioning for reproducibility reasons. All environments end in a suffix like "_v0". When changes are made to environments that might impact learning results, the number is increased by one to prevent potential confusion.

Development Roadmap

We have a roadmap for future development available here.

Project Maintainers

Project Manager: Florian Felten (@ffelten)

Maintenance for this project is also contributed by the broader Farama team: farama.org/team.

Citing

If you use this repository in your research, please cite:

@inproceedings{TODO}

Development

Set up pre-commit

Clone the repo and run pre-commit install to set up the pre-commit hooks.

Download files

Download the file for your platform. If you're not sure which to choose, learn more about installing packages.

Source Distribution

momaland-0.1.1.tar.gz (111.4 kB)

Uploaded Source

Built Distribution

momaland-0.1.1-py3-none-any.whl (150.6 kB)

Uploaded Python 3

File details

Details for the file momaland-0.1.1.tar.gz.

File metadata

  • Download URL: momaland-0.1.1.tar.gz
  • Upload date:
  • Size: 111.4 kB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for momaland-0.1.1.tar.gz
Algorithm   | Hash digest
SHA256      | ffd61796d5cfbac21125b6fe1d156b816c44d404d5d072e3accdcf7c45c18b5a
MD5         | 085908d219b6be5a2c0fdc3603318fc9
BLAKE2b-256 | 21d8a0e61753e96a9017b9dd8fef29014bdda21a74a9bf4167d8ac01788f86ad

See more details on using hashes here.

File details

Details for the file momaland-0.1.1-py3-none-any.whl.

File metadata

  • Download URL: momaland-0.1.1-py3-none-any.whl
  • Upload date:
  • Size: 150.6 kB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.0 CPython/3.12.4

File hashes

Hashes for momaland-0.1.1-py3-none-any.whl
Algorithm   | Hash digest
SHA256      | 288f27fb72e2db67bc4aec9d250d5ddb2526667f52b4c825ccc4387355eb3cc9
MD5         | 7ec67e581c4f2dfe0f015d81de70d8eb
BLAKE2b-256 | 93235813f20cd417187ce167da1ab4efd69e1afc9286ec8ffd662d1fdfdb7524

See more details on using hashes here.
