# oterm

the text-based terminal client for Ollama.
## Features

- Intuitive and simple terminal UI; no need to run servers or frontends, just type `oterm` in your terminal.
- Multiple persistent chat sessions, stored together with system prompt & parameter customizations in SQLite.
- Can use any of the models you have pulled in Ollama, or your own custom models.
- Allows for easy customization of the model's system prompt and parameters.
## Installation

Using `brew` for macOS:

```bash
brew tap ggozad/formulas
brew install ggozad/formulas/oterm
```

Using `yay` (or any AUR helper) for Arch Linux:

```bash
yay -S oterm
```

Using `pip`:

```bash
pip install oterm
```
## Using

In order to use `oterm` you will need to have the Ollama server running. By default it expects to find the Ollama API running on `http://127.0.0.1:11434`. If you are running Ollama inside Docker or on a different host/port, use the `OLLAMA_HOST` environment variable to customize the host/port. Alternatively, you can use `OLLAMA_URL` to specify the full http(s) URL. Setting `OTERM_VERIFY_SSL` to `False` will disable SSL verification.

```bash
OLLAMA_URL=http://host:port/api
```

To start `oterm` simply run:

```bash
oterm
```
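For example, to point `oterm` at an Ollama server on a different machine (the hostname below is illustrative):

```shell
# Ollama reachable on another host/port (host:port form)
export OLLAMA_HOST=my-server:11434

# Or give the full URL, e.g. behind HTTPS with a self-signed certificate:
export OLLAMA_URL=https://my-server:11434/api
export OTERM_VERIFY_SSL=False

oterm
```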
## Commands

By pressing `Ctrl+p` you can access the command palette, from where you can perform most of the chat actions. The following commands are available:

- `New chat` - create a new chat session.
- `Edit chat parameters` - edit the current chat session (change system prompt, parameters or format).
- `Rename chat` - rename the current chat session.
- `Export chat` - export the current chat session as markdown.
- `Delete chat` - delete the current chat session.
- `Regenerate last Ollama message` - regenerate the last message from Ollama (this will override the `seed` for the specific message with a random one). Useful if you want to change the system prompt or parameters, or just want to try again.
## Keyboard shortcuts

The following keyboard shortcuts are supported:

- `Ctrl+t` - toggle between dark/light theme
- `Ctrl+q` - quit
- `Ctrl+l` - switch to multiline input mode
- `Ctrl+i` - select an image to include with the next message
- `↑` - navigate through the history of previous prompts
- `Ctrl+Tab` - open the next chat
- `Ctrl+Shift+Tab` - open the previous chat

In multiline mode, you can press `Enter` to send the message, or `Shift+Enter` to add a new line at the cursor.

While Ollama is inferring the next message, you can press `Esc` to cancel the inference.

Note that some of the shortcuts may not work in certain contexts, for example pressing `↑` while the prompt is in multiline mode.
## Tools

Since version `0.6.0`, `oterm` supports integration with tools. Tools are special "functions" that can provide external information to the LLM model that it does not otherwise have access to.

The following tools are currently supported:

- `date_time` - provides the current date and time in ISO format.
- `current_location` - provides the current location of the user (longitude, latitude, city, region, country). Uses ipinfo.io to determine the location.
- `current_weather` - provides the current weather in the user's location. Uses OpenWeatherMap to determine the weather.
- `shell` - allows you to run shell commands and use the output as input to the model. Obviously this can be dangerous, so use with caution.

The tooling API in Ollama does not currently support streaming. When using tools, you will have to wait for the tools & model to finish before you see the response.

Note that tools integration is experimental and may change in the future. I particularly welcome contributions for new tools, but please bear in mind that any additional requirements in terms of dependencies or paid-for API usage should be kept to a minimum.
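To illustrate the idea, here is a minimal sketch of what a tool such as `date_time` amounts to: a JSON definition in the shape Ollama's chat API accepts for tool calling, plus a plain function whose return value is fed back to the model. This is not oterm's actual source; how oterm registers its tools internally may differ.

```python
from datetime import datetime

# Tool definition in Ollama's tool-calling JSON format (illustrative):
# the model sees the name, description and parameter schema, and can
# request a call to this "function" during a chat.
date_time_tool = {
    "type": "function",
    "function": {
        "name": "date_time",
        "description": "Provides the current date and time in ISO format.",
        "parameters": {"type": "object", "properties": {}, "required": []},
    },
}

def date_time() -> str:
    # The tool's implementation: runs locally when the model requests it,
    # and its return value is injected back into the conversation.
    return datetime.now().isoformat()
```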
## Copy / Paste

It is difficult to properly support copy/paste in terminal applications. You can copy blocks to your clipboard as follows:

- Clicking a message will copy it to the clipboard.
- Clicking a code block will copy only the code block to the clipboard.

For most terminals there exists a key modifier you can use to click and drag to manually select text. For example:

- `iTerm` - the `Option` key.
- `Gnome Terminal` - the `Shift` key.
- `Windows Terminal` - the `Shift` key.
## Customizing models

When creating a new chat, you may not only select the model, but also customize the `system` instruction, the `tools` used, as well as the `parameters` (such as context length, seed, temperature etc.) passed to the model. For a list of all supported parameters, refer to the Ollama documentation. Checking the `JSON output` checkbox will force the model to reply in JSON format. Please note that `oterm` will not (yet) pull models for you; use `ollama` to do that. All the models you have pulled or created will be available to `oterm`.

You can also "edit" the chat to change the system prompt, parameters or format. Note that the model cannot be changed once the chat has started.
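As a sketch, the parameters you can customize per chat are of the kind Ollama's API accepts as request "options". The option names below are from Ollama's documentation; the values are only examples:

```python
# Illustrative per-chat model parameters (Ollama "options"; values are examples).
parameters = {
    "num_ctx": 4096,      # context length in tokens
    "seed": 42,           # fixed seed for reproducible generations
    "temperature": 0.7,   # higher = more varied output, lower = more deterministic
    "top_p": 0.9,         # nucleus sampling cutoff
}
```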
## Chat session storage

All your chat sessions are stored locally in a SQLite database. You can customize the directory where the database is stored by setting the `OTERM_DATA_DIR` environment variable.

You can find the location of the database by running `oterm --db`.
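For example, to keep the database in a custom directory (the path below is illustrative):

```shell
# Store oterm's sqlite database under a directory of your choosing
export OTERM_DATA_DIR=~/.local/share/oterm

# Print where the database actually lives
oterm --db
```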
## Screenshots
## License

This project is licensed under the MIT License.