Chat with local AI assistants or roleplay using ollama

plaitime

This is a simple, lean program for chatting with a local LLM, powered by the excellent ollama library. It has a rewind feature that allows you to explore alternative paths.

PRs with improvements (visual, technical, ...) are very welcome.

I made this to experiment with different models and prompts. Use cases include:

  • Chatting with a generic AI assistant, just like in ChatGPT
  • Letting the LLM impersonate characters to chat with
  • Setting up the LLM for pen-and-paper roleplay

Installation

pip install plaitime

You need to pull the models with ollama, for example:

ollama pull llama3.2

See the excellent ollama online documentation for a list of available models.

Usage

You can create "characters" that use different models and prompts. Each character has its own message history, which is persisted unless you turn that off. Use the combo-box to switch characters.
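
The idea of a character bundling a model, a system prompt, and its own persisted history can be sketched like this. Note that this is an illustrative sketch, not plaitime's actual storage format or API; all names here are made up:

```python
import json
from dataclasses import asdict, dataclass, field
from pathlib import Path

@dataclass
class Character:
    """Illustrative sketch: a character bundles a model, a system
    prompt, and its own message history."""
    name: str
    model: str = "llama3.2"
    system_prompt: str = "You are a helpful assistant."
    messages: list = field(default_factory=list)
    save_history: bool = True  # set False to discard history on save

    def save(self, directory: Path) -> Path:
        data = asdict(self)
        if not self.save_history:
            data["messages"] = []  # persistence turned off
        path = directory / f"{self.name}.json"
        path.write_text(json.dumps(data, indent=2))
        return path

    @classmethod
    def load(cls, path: Path) -> "Character":
        return cls(**json.loads(path.read_text()))
```

Switching characters then just means loading a different file, each with its own model and history.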

Chatting

Pressing Enter sends the text to the LLM. The first time you do this, the LLM may need some time to load, so be patient; subsequent responses will be faster.

You can interrupt the generation or rewind the chat by pressing Escape. This gives you the opportunity to edit your previous message, e.g. to keep the LLM from moving in an undesired direction. Another way to correct the LLM is to change the system prompt of the model. The model always sees the current system prompt for its next generation, so you can change its factual memory or personality in the middle of the conversation.
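
Conceptually, rewinding just truncates the message list so the conversation can branch off in a new direction. A hypothetical sketch (not plaitime's internals):

```python
def rewind(messages: list[dict], steps: int = 1) -> list[dict]:
    """Drop the last `steps` user/assistant exchange pairs so the
    conversation can be continued differently from an earlier point."""
    if steps <= 0:
        return messages
    return messages[:-2 * steps] if 2 * steps <= len(messages) else []

history = [
    {"role": "user", "content": "Tell me a story."},
    {"role": "assistant", "content": "Once upon a time..."},
    {"role": "user", "content": "Make it darker."},
    {"role": "assistant", "content": "The night was grim..."},
]
history = rewind(history)  # drops the last exchange
```

After the rewind, sending a different message explores an alternative path from the same starting point.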

Changing model or system prompt

To create a new character, use the "New character" button. To change the model or system prompt of an existing character, use the "Configure" button. The system prompt is special because it guides the overall behavior of the LLM. Use it to make the LLM adhere to a given response format, to configure its personality, or to provide permanent facts. In short, the system prompt can be regarded as static memory for the LLM.

The LLM will never forget its system prompt, because the program knows the size of the context window for each model and clips the conversation so that it fits into the context window without losing the system prompt. This is important because otherwise the model would forget how to behave.
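
The clipping described above can be sketched as follows. Both the 4-characters-per-token estimate and the function names are illustrative assumptions, not plaitime's actual implementation:

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (illustrative only).
    return max(1, len(text) // 4)

def clip_conversation(system_prompt: str, messages: list[dict],
                      context_window: int) -> list[dict]:
    """Drop the oldest messages until the system prompt plus the
    remaining conversation fit into the model's context window."""
    budget = context_window - estimate_tokens(system_prompt)
    kept, used = [], 0
    for msg in reversed(messages):  # keep the most recent messages
        cost = estimate_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    kept.reverse()
    return [{"role": "system", "content": system_prompt}] + kept
```

The system prompt is always placed first, so only old conversation turns are sacrificed when space runs out.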

For some LLMs you can set the temperature: a high temperature makes the model more creative, while a low temperature makes it more predictable.
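
With the ollama Python library, temperature is passed via the `options` argument of `ollama.chat`. A sketch, assuming a running ollama server (the helper function is illustrative, not part of plaitime):

```python
def chat_options(temperature: float) -> dict:
    """Build the options mapping for an ollama chat request.
    High temperature -> more creative, low -> more predictable."""
    return {"temperature": temperature}

if __name__ == "__main__":
    import ollama  # requires a running ollama server

    response = ollama.chat(
        model="llama3.2",
        messages=[{"role": "user", "content": "Invent a riddle."}],
        options=chat_options(1.2),  # high temperature: creative
    )
    print(response["message"]["content"])
```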

Limitations and workarounds

Because of the finite context window, the LLM will eventually forget details from earlier in the conversation and become inconsistent. A workaround is to let the LLM periodically produce a story summary, which you usually need to edit to fill in details that the LLM missed. You can do that with the "summary" button (which may take a while to complete).
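
The workaround can be sketched as follows, with a stand-in `generate` callable where plaitime would call the LLM (all names here are illustrative):

```python
def summarize_and_compact(messages: list[dict], generate) -> list[dict]:
    """Ask the model to summarize the story, then replace the long
    conversation with a single summary message."""
    transcript = "\n".join(f"{m['role']}: {m['content']}" for m in messages)
    summary = generate("Summarize the story so far:\n" + transcript)
    # Edit the summary by hand afterwards to restore details the model missed.
    return [{"role": "user", "content": "Story so far: " + summary}]
```

The compacted history frees up context-window space while keeping the plot available to the model.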

Contributing

See the issues on GitHub and feel free to contribute!

Note, however, that I want to keep plaitime simple and hackable.

