banks
LLM prompt language based on Jinja. Banks provides tools and functions to build prompt text and chat messages from generic blueprints. It allows attaching metadata to prompts to ease their management, and versioning is a first-class citizen. Banks provides ways to store prompts on disk along with their metadata.
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (13.7%) to scientific vocabulary
Keywords
Keywords from Contributors
Repository
LLM prompt language based on Jinja. Banks provides tools and functions to build prompt text and chat messages from generic blueprints. It allows attaching metadata to prompts to ease their management, and versioning is a first-class citizen. Banks provides ways to store prompts on disk along with their metadata.
Basic Info
- Host: GitHub
- Owner: masci
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://masci.github.io/banks/
- Size: 1020 KB
Statistics
- Stars: 114
- Watchers: 3
- Forks: 18
- Open Issues: 1
- Releases: 24
Topics
Metadata Files
README.md
banks
Banks is the linguist professor who will help you generate meaningful
LLM prompts using a template language that makes sense. If you're still using f-strings for the job, keep reading.
Docs are available here: https://masci.github.io/banks/

Table of Contents
- banks
- Installation
- Features
- Cookbook
- Examples
- :point_right: Render a prompt template as chat messages
- :point_right: Add images to the prompt for vision models
- :point_right: Use an LLM to generate text while rendering a prompt
- :point_right: Function calling directly from the prompt
- :point_right: Use prompt caching from Anthropic
- Reuse templates from registries
- Async support
- Contributing
- License
Installation
```console
pip install banks

# install optional deps: litellm, redis
pip install "banks[all]"
```
Features
Prompts are instrumental for the success of any LLM application, and Banks focuses on specific areas of their lifecycle:
- :orange_book: Templating: Banks provides tools and functions to build prompt text and chat messages from generic blueprints.
- :tickets: Versioning and metadata: Banks supports attaching metadata to prompts to ease their management, and versioning is a first-class citizen (see the sketch after this list).
- :file_cabinet: Management: Banks provides ways to store prompts on disk along with their metadata.
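As a loose illustration of the versioning and metadata side, here is a minimal sketch. Note that the `name`, `version`, and `metadata` keyword arguments are assumptions inferred from the prompt versioning cookbook, not verified API; check the docs before relying on them.

```py
from banks import Prompt

# Hypothetical usage sketch: the keyword arguments below are assumptions,
# not confirmed API. The idea is that a prompt carries an identity and
# free-form metadata alongside its template text.
p = Prompt(
    "Write a short post about {{ topic }}.",
    name="post_writer",
    version="1.0",
    metadata={"author": "example"},
)
print(p.text({"topic": "prompt templating"}))
```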
Cookbook
- :blue_book: In-prompt chat completion
- :blue_book: Prompt caching with Anthropic
- :blue_book: Prompt versioning
Examples
For a more extensive set of code examples, see the documentation page.
:point_right: Render a prompt template as chat messages
You'll find yourself feeding an LLM a list of chat messages instead of plain text more often than not. Banks will help you remove the boilerplate by defining the messages already at the prompt level.
```py
from banks import Prompt

prompt_template = """
{% chat role="system" %}
You are a {{ persona }}.
{% endchat %}

{% chat role="user" %}
Hello, how are you?
{% endchat %}
"""

p = Prompt(prompt_template)
print(p.chat_messages({"persona": "helpful assistant"}))
```

Output:

```
[
  ChatMessage(role='system', content=[
    ContentBlock(type='text', cache_control=None, text='You are a helpful assistant.',
                 image_url=None, input_audio=None)
  ], tool_call_id=None, name=None),
  ChatMessage(role='user', content=[
    ContentBlock(type='text', cache_control=None, text='Hello, how are you?',
                 image_url=None, input_audio=None)
  ], tool_call_id=None, name=None)
]
```
:point_right: Add images to the prompt for vision models
If you're working with a multimodal model, you can include images directly in the prompt, and Banks will take care of uploading them when rendering the chat messages:
```py
import litellm

from banks import Prompt

prompt_template = """
{% chat role="user" %}
Guess where is this place.
{{ picture | image }}
{%- endchat %}
"""

pic_url = (
    "https://upload.wikimedia.org/wikipedia/commons/thumb/4/4d/Corciano_Mar_30_2024_01.jpg/1079px-Corciano_Mar_30_2024_01.jpg"
)
# Alternatively, load the image from disk
# pic_url = "/Users/massi/Downloads/Corciano_Mar_30_2024_01.jpg"

p = Prompt(prompt_template)
as_dict = [msg.model_dump(exclude_none=True) for msg in p.chat_messages({"picture": pic_url})]
r = litellm.completion(model="gpt-4-vision-preview", messages=as_dict)

print(r.choices[0].message.content)
```
:point_right: Use an LLM to generate text while rendering a prompt
Sometimes it might be useful to ask another LLM to generate examples for you in a few-shot prompt. Provided you have a valid OpenAI API key stored in an env var called OPENAI_API_KEY, you can ask Banks to do something like this (note we can annotate the prompt using comments: anything within {# ... #} will be removed from the final prompt):
```py
from banks import Prompt

prompt_template = """
{% set examples %}
{% completion model="gpt-3.5-turbo-0125" %}
{% chat role="system" %}You are a helpful assistant{% endchat %}
{% chat role="user" %}Generate a bullet list of 3 tweets with a positive sentiment.{% endchat %}
{% endcompletion %}
{% endset %}

{# output the response content #}
Generate a tweet about the topic {{ topic }} with a positive sentiment.
Examples:
{{ examples }}
"""

p = Prompt(prompt_template)
print(p.text({"topic": "climate change"}))
```
The output would be something similar to the following:
```txt
Generate a tweet about the topic climate change with a positive sentiment.
Examples:
- "Feeling grateful for the sunshine today! 🌞 #thankful #blessed"
- "Just had a great workout and feeling so energized! 💪 #fitness #healthyliving"
- "Spent the day with loved ones and my heart is so full. 💕 #familytime #grateful"
```
> [!IMPORTANT]
> The `completion` extension uses LiteLLM under the hood, and provided you have the proper environment variables set, you can use any model from the supported model providers.

> [!NOTE]
> Banks uses a cache to avoid generating text again for the same template with the same context. By default the cache is in-memory but it can be customized.
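As a loose sketch of what customizing that cache could look like: the `render_cache` keyword and the get/set/clear interface below are assumptions rather than verified banks API, so treat this as an illustration of the idea and check the docs for the real protocol.

```py
from banks import Prompt

# Hypothetical cache object: the class name and method signatures here are
# assumptions. A real implementation would need to match whatever protocol
# banks expects for its render cache.
class NoOpCache:
    def get(self, *args, **kwargs):
        return None  # always miss, forcing a fresh completion

    def set(self, *args, **kwargs):
        pass  # never store anything

    def clear(self):
        pass

p = Prompt("Tell me about {{ topic }}", render_cache=NoOpCache())
```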
:point_right: Function calling directly from the prompt
Banks provides a `tool` filter that can be used to convert a callable passed to a prompt into an LLM function call.
Docstrings are used to describe the tool and its arguments, and during prompt rendering Banks will perform all the LLM
roundtrips needed in case the model wants to use a tool within a {% completion %} block. For example:
```py
import platform

from banks import Prompt


def get_laptop_info():
    """Get information about the user laptop.

    For example, it returns the operating system and version, along with hardware and network specs.
    """
    return str(platform.uname())


p = Prompt("""
{% set response %}
{% completion model="gpt-3.5-turbo-0125" %}
{% chat role="user" %}{{ query }}{% endchat %}
{{ get_laptop_info | tool }}
{% endcompletion %}
{% endset %}

{# the variable 'response' contains the result #}
{{ response }}
""")

print(p.text({"query": "Can you guess the name of my laptop?", "get_laptop_info": get_laptop_info}))
```

Output:

```
Based on the information provided, the name of your laptop is likely "MacGiver."
```
:point_right: Use prompt caching from Anthropic
Several inference providers support prompt caching to save time and costs, and Anthropic in particular offers fine-grained control over the parts of the prompt that we want to cache. With Banks this is as simple as using a template filter:
```py
from banks import Prompt

prompt_template = """
{% chat role="user" %}
Analyze this book:

{# Only this part of the chat message (the book content) will be cached #}
{{ book | cache_control("ephemeral") }}

What is the title of this book? Only output the title.
{% endchat %}
"""

p = Prompt(prompt_template)
print(p.chat_messages({"book": "This is a short book!"}))
```

Output:

```
[
  ChatMessage(role='user', content=[
    ContentBlock(type='text', text='Analyze this book:\n\n'),
    ContentBlock(type='text', cache_control=CacheControl(type='ephemeral'), text='This is a short book!'),
    ContentBlock(type='text', text='\n\nWhat is the title of this book? Only output the title.\n')
  ])
]
```
The output of `p.chat_messages()` can be fed to the Anthropic client directly.
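For instance, reusing the LiteLLM serialization pattern shown in the vision example earlier (a sketch, assuming an Anthropic API key is configured in the environment; the model name is an illustrative placeholder):

```py
import litellm

# Serialize the rendered chat messages, including the cache_control block,
# and send them through LiteLLM. Reuses `p` from the example above.
as_dict = [msg.model_dump(exclude_none=True) for msg in p.chat_messages({"book": "This is a short book!"})]
r = litellm.completion(model="claude-3-5-sonnet-20240620", messages=as_dict)
print(r.choices[0].message.content)
```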
Reuse templates from registries
We can get the same result as the previous example by loading the prompt template from a registry
instead of hardcoding it into the Python code. For convenience, Banks comes with a few registry types
you can use to store your templates. For example, the DirectoryTemplateRegistry can load templates
from a directory in the file system. Suppose you have a folder called templates in the current path,
and the folder contains a file called blog.jinja. You can load the prompt template like this:
```py
from pathlib import Path

from banks.registries import DirectoryTemplateRegistry

# Point the registry at the "templates" folder mentioned above
registry = DirectoryTemplateRegistry(Path("templates"))
prompt = registry.get(name="blog")

print(prompt.text({"topic": "retrogame computing"}))
```
Async support
To run banks within an asyncio loop you have to do two things:
1. set the environment variable BANKS_ASYNC_ENABLED=true.
2. use the AsyncPrompt class, whose rendering methods (like text) are awaitable.
Example:

```python
# Requires BANKS_ASYNC_ENABLED=true in the environment (see step 1 above)
import asyncio

from banks import AsyncPrompt


async def main():
    p = AsyncPrompt("Write a blog article about the topic {{ topic }}")
    result = await p.text({"topic": "AI frameworks"})
    print(result)


asyncio.run(main())
```
Contributing
Contributions are very welcome; the CONTRIBUTING.md file contains all the details about how to contribute.
License
banks is distributed under the terms of the MIT license.
Owner
- Name: Massimiliano Pippi
- Login: masci
- Kind: user
- Location: Italy
- Company: LlamaIndex
- Website: https://dev.pippi.im
- Repositories: 72
- Profile: https://github.com/masci
- Bio: ex @datadog @elastic @arduino
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software, please cite it using these metadata."
title: "Banks: the linguist professor who will help you generate meaningful Prompts"
date-released: 2023-06-12
url: "https://github.com/masci/banks"
authors:
  - family-names: Pippi
    given-names: Massimiliano
```
GitHub Events
Total
- Create event: 27
- Issues event: 23
- Release event: 11
- Watch event: 48
- Delete event: 16
- Issue comment event: 34
- Push event: 136
- Pull request review comment event: 5
- Pull request review event: 15
- Pull request event: 52
- Fork event: 14
Last Year
- Create event: 27
- Issues event: 23
- Release event: 11
- Watch event: 48
- Delete event: 16
- Issue comment event: 34
- Push event: 136
- Pull request review comment event: 5
- Pull request review event: 15
- Pull request event: 52
- Fork event: 14
Committers
Last synced: 10 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Massimiliano Pippi | m****i@g****m | 189 |
| Mayank Jobanputra | m****a@g****m | 3 |
| Logan | l****h@l****m | 3 |
| Stefano Fiorucci | s****i@g****m | 2 |
| Krystof Olik | 4****a | 1 |
| Fabian Affolter | m****l@f****h | 1 |
| Clelia (Astra) Bertelli | 1****t | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 14
- Total pull requests: 60
- Average time to close issues: 3 days
- Average time to close pull requests: about 19 hours
- Total issue authors: 9
- Total pull request authors: 7
- Average comments per issue: 0.93
- Average comments per pull request: 0.72
- Merged pull requests: 59
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 13
- Pull requests: 51
- Average time to close issues: 3 days
- Average time to close pull requests: about 13 hours
- Issue authors: 8
- Pull request authors: 6
- Average comments per issue: 0.85
- Average comments per pull request: 0.78
- Merged pull requests: 50
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- logan-markewich (5)
- anakin87 (2)
- neerajprad (1)
- TuanaCelik (1)
- alex-stoica (1)
- nicpottier (1)
- MrSpejn (1)
- HaveF (1)
- AstraBert (1)
Pull Request Authors
- masci (64)
- logan-markewich (6)
- mayankjobanputra (5)
- anakin87 (4)
- fabaff (2)
- AstraBert (2)
- ArmykOliva (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 2,266,420 last-month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 1
- Total versions: 29
- Total maintainers: 1
pypi.org: banks
A prompt programming language
- Documentation: https://github.com/masci/banks#readme
- License: mit
- Latest release: 2.2.0 (published 8 months ago)