
A prompt programming language


banks


Banks is the linguist professor who will help you generate meaningful LLM prompts using a template language that makes sense. If you're still using f-strings for the job, keep reading.

Docs are still a work in progress, so for the time being this README is the best resource for Banks users; that said, they are available here


Table of Contents

  • Installation
  • Examples
  • License

Installation

pip install banks

Examples

Generate a blog writing prompt

Given a generic template to instruct an LLM to generate a blog article, we use Banks to generate the actual prompt on our topic of choice, "retrogame computing":

from banks import Prompt


p = Prompt("Write a 500-word blog post on {{ topic }}.\n\nBlog post:")
topic = "retrogame computing"
print(p.text({"topic": topic}))

This will print the following text, which can be pasted directly into ChatGPT:

Write a 500-word blog post on retrogame computing.

Blog post:
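
A Prompt object can be rendered any number of times with different data, so the same template is easy to reuse. A minimal sketch using only the API shown above:

from banks import Prompt


# Reuse one Prompt for several topics: each call to text() renders
# the template with the data passed in.
p = Prompt("Write a 500-word blog post on {{ topic }}.\n\nBlog post:")
for topic in ("retrogame computing", "climate change"):
    print(p.text({"topic": topic}))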

Generate a summarizer prompt

Instead of hardcoding the content to summarize in the prompt itself, we can generate it starting from a generic one:

from banks import Prompt


prompt_template = """
Summarize the following documents:
{% for document in documents %}
{{ document }}
{% endfor %}
Summary:
"""

# In a real-world scenario, these would be loaded as external resources from files or network
documents = [
    "A first paragraph talking about AI",
    "A second paragraph talking about climate change",
    "A third paragraph talking about retrogaming"
]

p = Prompt(prompt_template)
print(p.text({"documents": documents}))

The resulting prompt:

Summarize the following documents:

A first paragraph talking about AI

A second paragraph talking about climate change

A third paragraph talking about retrogaming

Summary:
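
The blank lines between the documents in the output come from the newlines around the {% for %} tags. Since Banks templates are Jinja templates, standard Jinja whitespace control should remove them; here is a minimal sketch (assuming Banks keeps Jinja's default whitespace behavior):

from banks import Prompt


# The "-" in {%- ... %} strips the whitespace (including the newline)
# right before the tag, so each document renders on its own line with
# no blank line above it.
prompt_template = """
Summarize the following documents:
{%- for document in documents %}
{{ document }}
{%- endfor %}
Summary:
"""

p = Prompt(prompt_template)
print(p.text({"documents": ["A first paragraph talking about AI"]}))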

Lemmatize text while processing a template

Banks comes with predefined filters you can use to process data before generating the prompt. Say you want to run a lemmatizer on a document before summarizing it: first, install simplemma:

pip install simplemma

then you can use the lemmatize filter in your templates like this:

from banks import Prompt


prompt_template = """
Summarize the following document:
{{ document | lemmatize }}
Summary:
"""

p = Prompt(prompt_template)
print(p.text({"document": "The cats are running"}))

The output would be:

Summarize the following document:
the cat be run
Summary:
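
Since Banks templates are regular Jinja templates, lemmatize should compose with Jinja's built-in filters. A small sketch that lower-cases the text before lemmatizing it (assuming lemmatize chains like any other Jinja filter):

from banks import Prompt


# Filters run left to right: Jinja's built-in `lower` first, then
# Banks' `lemmatize`. The chaining behavior is assumed here, not
# taken from Banks' docs.
prompt_template = """
Summarize the following document:
{{ document | lower | lemmatize }}
Summary:
"""

p = Prompt(prompt_template)
print(p.text({"document": "The Cats Are Running"}))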

Use an LLM to generate text while rendering a prompt

Sometimes it might be useful to ask another LLM to generate examples for a few-shot prompt. Provided you have a valid OpenAI API key stored in an env var called OPENAI_API_KEY, you can ask Banks to do something like this (note that we can annotate the prompt using comments: anything within {# ... #} will be removed from the final prompt):

from banks import Prompt


prompt_template = """
Generate a tweet about the topic {{ topic }} with a positive sentiment.

{#
    This is for illustration purposes only, there are better and cheaper ways
    to generate examples for a few-shots prompt.
#}
Examples:
{% for number in range(3) %}
- {% generate "write a tweet with positive sentiment" "gpt-3.5-turbo" %}
{% endfor %}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))

The output would be something similar to the following:

Generate a tweet about the topic climate change with a positive sentiment.


Examples:

- "Feeling grateful for the amazing capabilities of #GPT3.5Turbo! It's making my work so much easier and efficient. Thank you, technology!" #positivity #innovation

- "Feeling grateful for all the opportunities that come my way! With #GPT3.5Turbo, I am able to accomplish tasks faster and more efficiently. #positivity #productivity"

- "Feeling grateful for all the wonderful opportunities and experiences that life has to offer! #positivity #gratitude #blessed #gpt3.5turbo"

If you paste Banks' output into ChatGPT, you would get something like this:

Climate change is a pressing global issue, but together we can create positive change! Let's embrace renewable energy, protect our planet, and build a sustainable future for generations to come. 🌍💚 #ClimateAction #PositiveFuture

The generate extension uses LiteLLM under the hood, and provided you have the proper environment variables set, you can use any model from the supported model providers.
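
For instance, switching providers should just be a matter of changing the model string passed to generate. A minimal sketch, assuming an Anthropic key stored in ANTHROPIC_API_KEY and a model identifier in LiteLLM's naming scheme (both the variable name and the model string are LiteLLM conventions, not something Banks defines):

from banks import Prompt


# "claude-3-haiku-20240307" follows LiteLLM's model naming and is an
# assumption here; any model supported by LiteLLM should work, given
# the matching provider credentials in the environment.
prompt_template = """
Examples:
{% for number in range(3) %}
- {% generate "write a tweet with positive sentiment" "claude-3-haiku-20240307" %}
{% endfor %}
"""

p = Prompt(prompt_template)
print(p.text({}))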

Go meta: create a prompt and generate its response

We can leverage Jinja's macro system to generate a prompt, send the result to OpenAI and get a response. Let's bring back the blog writing example:

from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}

{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}

Blog post:
{%- endcall -%}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))

The snippet above won't print the prompt; instead, it will generate the prompt text

Write a 500-word blog post on climate change

Blog post:

and will send it to OpenAI using the generate extension, eventually returning its response:

Climate change is a phenomenon that has been gaining attention in recent years...
...

Go meta(meta): process an LLM response

When generating a response from a prompt template, we can take a step further and post-process the LLM response by assigning it to a variable and applying filters to it:

from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}

{%- set prompt_result %}
{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}

Blog post:
{%- endcall -%}
{%- endset %}

{# nothing is returned at this point: the variable 'prompt_result' contains the result #}

{# let's use the prompt_result variable now #}
{{ prompt_result | upper }}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))

The final answer from the LLM will be printed, this time all in uppercase.
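
The same pattern works with any Jinja filter, or a chain of them. As a sketch, here Jinja's built-in wordcount filter reports the length of the captured response instead of echoing it:

from banks import Prompt

prompt_template = """
{% from "banks_macros.jinja" import run_prompt with context %}

{%- set prompt_result %}
{%- call run_prompt() -%}
Write a 500-word blog post on {{ topic }}

Blog post:
{%- endcall -%}
{%- endset %}

{# `wordcount` is a Jinja built-in counting the words in the response #}
The post is {{ prompt_result | wordcount }} words long.
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))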

Reuse templates from files

We can get the same result as the previous example by loading the prompt template from a file instead of hardcoding it in the Python code. For convenience, Banks comes with a few default templates distributed with the package. We can load one of those templates like this:

from banks import Prompt


p = Prompt.from_template("blog.jinja")
topic = "retrogame computing"
print(p.text({"topic": topic}))

License

banks is distributed under the terms of the MIT license.
