Lightweight self-hosted span annotation tool

factgenie

Annotate LLM outputs with a lightweight, self-hosted web application 🌈

📢 Changelog

  • [1.0.0] - 2024-11-13: The first official release 🎉

👉️ How can factgenie help you?

Outputs from large language models (LLMs) may contain errors: semantic, factual, and lexical.

With factgenie, you can have the error spans annotated:

  • From LLMs through an API.
  • From humans through a crowdsourcing service.

Factgenie can provide you with:

  1. A user-friendly website for collecting annotations from human crowdworkers.
  2. API calls for collecting equivalent annotations from LLM-based evaluators.
  3. An interface for visualizing the data and inspecting the annotated outputs.

What factgenie does not help with is collecting the data (we assume you already have it), launching the crowdsourcing campaign (for that, you need a service such as Prolific.com), or running the LLM evaluators (for that, you need a local framework such as Ollama, or a proprietary API).
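The error spans mentioned above are, at their core, labeled character ranges over a model output. As a minimal sketch in plain Python (the field names and schema here are illustrative assumptions, not factgenie's actual data format):

```python
# A hypothetical span annotation: character offsets into the model output,
# an error category, and a free-text note from the annotator.
output_text = "The Lakers won 102-95, with LeBron James scoring 48 points."

annotation = {
    "start": 49,        # offset where the error span begins
    "end": 58,          # offset where it ends (exclusive)
    "type": "factual",  # e.g. semantic / factual / lexical, as above
    "note": "box score lists 31 points, not 48",
}

# The annotated span can be recovered by slicing the output text:
span = output_text[annotation["start"]:annotation["end"]]  # "48 points"
```

Both human crowdworkers (via the web interface) and LLM-based evaluators (via an API) ultimately produce records of this kind, which is what makes the two sources of annotations equivalent and comparable.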

🏃 Quickstart

Make sure you have Python >=3.9 installed.

If you want to quickly try out factgenie, you can install the package from PyPI:

pip install factgenie

However, the recommended approach is to install factgenie as an editable package:

git clone https://github.com/ufal/factgenie.git
cd factgenie
pip install -e ".[dev,deploy]"

This approach allows you to manually modify the configuration files, write your own data classes, and access the generated files.

After installing factgenie, use the following command to run the server on your local computer:

factgenie run --host 127.0.0.1 --port 8890

More information on how to set up factgenie is available on the GitHub wiki.

💡 Usage guide

See the following wiki pages that will guide you through various use cases of factgenie:

  • 🔧 Setup: How to install factgenie.
  • 🗂️ Data Management: How to manage datasets and model outputs.
  • 🤖 LLM Annotations: How to annotate outputs using LLMs.
  • 👥 Crowdsourcing Annotations: How to annotate outputs using human crowdworkers.
  • ✍️ Generating Outputs: How to generate outputs using LLMs.
  • 📊 Analyzing Annotations: How to obtain statistics on collected annotations.
  • 💻 Command Line Interface: How to use the factgenie command-line interface.
  • 🌱 Contributing: How to contribute to factgenie.
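Once annotations are collected, the statistics mentioned under "Analyzing Annotations" boil down to aggregating the collected spans. A rough sketch of such an aggregation (the record schema is an assumption for illustration, not factgenie's export format):

```python
from collections import Counter

# Hypothetical collected annotations; "type" stands in for whatever
# error taxonomy the annotation campaign defines.
annotations = [
    {"type": "factual"},
    {"type": "factual"},
    {"type": "semantic"},
    {"type": "lexical"},
]

# Count how many error spans fall into each category.
counts = Counter(a["type"] for a in annotations)
print(counts.most_common())  # [('factual', 2), ('semantic', 1), ('lexical', 1)]
```

The same aggregation can be computed separately per annotator source (human vs. LLM) to compare how the two kinds of evaluators distribute their error labels.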

🔥 Tutorials

We also provide step-by-step walkthroughs showing how to employ factgenie on the dataset from the Shared Task in Evaluating Semantic Accuracy:

  • 🏀 #1 Importing a custom dataset: Loading the basketball statistics and model-generated basketball reports into the web interface.
  • 💬 #2 Generating outputs: Using Llama 3.1 with Ollama for generating basketball reports.
  • 📊 #3 Customizing data visualization: Manually creating a custom dataset class for better data visualization.
  • 🤖 #4 Annotating outputs with an LLM: Using GPT-4o for annotating errors in the basketball reports.
  • 👨‍💼 #5 Annotating outputs with human annotators: Using human annotators for annotating errors in the basketball reports.

💬 Cite us

Our paper was published at INLG 2024 System Demonstrations!

You can also find the paper on arXiv.

To cite us, please use the following BibTeX entry:

@inproceedings{kasner2024factgenie,
    title = "factgenie: A Framework for Span-based Evaluation of Generated Texts",
    author = "Kasner, Zden{\v{e}}k  and
      Platek, Ondrej  and
      Schmidtova, Patricia  and
      Balloccu, Simone  and
      Dusek, Ondrej",
    editor = "Mahamood, Saad  and
      Minh, Nguyen Le  and
      Ippolito, Daphne",
    booktitle = "Proceedings of the 17th International Natural Language Generation Conference: System Demonstrations",
    year = "2024",
    address = "Tokyo, Japan",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.inlg-demos.5",
    pages = "13--15",
}

Acknowledgements

This work was co-funded by the European Union (ERC, NG-NLG, 101039303).

Download files

Download the file for your platform.

Source Distribution

factgenie-1.0.0.tar.gz (3.1 MB)

Uploaded Source

Built Distribution

factgenie-1.0.0-py3-none-any.whl (3.2 MB)

Uploaded Python 3

File details

Details for the file factgenie-1.0.0.tar.gz.

File metadata

  • Download URL: factgenie-1.0.0.tar.gz
  • Upload date:
  • Size: 3.1 MB
  • Tags: Source
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for factgenie-1.0.0.tar.gz:

  • SHA256: 2e4eb8f134f02f69241752f4a9cd5283082bd2d3939c4a245b2bbfb9044aaef6
  • MD5: 31e6ae43103738e699985bf0edd3510e
  • BLAKE2b-256: 8d27649b090f278e6746ffaf39c54aabf9a6cace59cafff9969e47938de89c50

File details

Details for the file factgenie-1.0.0-py3-none-any.whl.

File metadata

  • Download URL: factgenie-1.0.0-py3-none-any.whl
  • Upload date:
  • Size: 3.2 MB
  • Tags: Python 3
  • Uploaded using Trusted Publishing? No
  • Uploaded via: twine/5.1.1 CPython/3.10.12

File hashes

Hashes for factgenie-1.0.0-py3-none-any.whl:

  • SHA256: 8b7d4f9a09e5134d8709da301296e38e7f4c061e54f248f01cb989484832ae8a
  • MD5: 65cacd7140348ee5c6a0e09a0ae11d10
  • BLAKE2b-256: 96234ecf96fa6eabff907b76540f3603d1cfaf5a3147bfe34378d9a177d4c03e
