fastlmi
A fast framework (in both performance and development time) for creating language model interfaces - a more generic term for tools built for AIs like ChatGPT plugins.
Science Score: 57.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found CITATION.cff file
- ✓ codemeta.json file: found codemeta.json file
- ✓ .zenodo.json file: found .zenodo.json file
- ✓ DOI references: found 1 DOI reference(s) in README
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.0%) to scientific vocabulary
Keywords
Repository
Basic Info
Statistics
- Stars: 7
- Watchers: 1
- Forks: 2
- Open Issues: 0
- Releases: 3
Topics
Metadata Files
README.md
FastLMI
FastLMI is a modern, fast (both in performance and development time) framework for creating LMIs, based on the beloved FastAPI library.
What is an LMI?
and why not just call it an API?
LMI stands for "Language Model Interface" -- it's a catch-all term for the tools given to AI agents, and for how we define those tools (the interfaces). An LMI is not an application programming interface: rather than shipping docs for human developers to write apps against, LMIs come bundled with instructions for the AI agents themselves to interface with them.
FastLMI is more than just a library for making ChatGPT plugins -- we believe the LMI ecosystem has the potential to be and do much more without relying on a single authority to curate and provide "good" plugins. FastLMI is designed to be ecosystem-agnostic with adapters for popular ecosystems, such as ChatGPT/OpenAI plugins.
Cite Us
Doing academic research on language models and their ability to use tools with the FastLMI library? Cite us with this BibTeX entry!
```bibtex
@software{Zhu_FastLMI_2023,
    author = {Zhu, Andrew},
    doi = {10.5281/zenodo.7925999},
    month = may,
    title = {{FastLMI}},
    url = {https://github.com/zhudotexe/fastlmi},
    version = {0.2.0},
    year = {2023}
}
```
Requirements
Python 3.8+
Installation
```shell
$ pip install fastlmi
```
Just as with FastAPI, you will need an ASGI server to run the app, such as Uvicorn.
```shell
$ pip install uvicorn
# or `pip install "uvicorn[standard]"` for Cython-based extras
```
Example (OpenAI/ChatGPT Plugin)
NOTE: As of v0.2.0 FastLMI includes the AI Plugin (OpenAI, LangChain) interface as a default. This may change to an extension-based system in the future as the library develops.
To show off just how easy it is to create a plugin, let's make a ChatGPT plugin that gives it the ability to roll dice in the d20 format (AIs playing D&D, anyone?).
Example Requirements
First, you'll need to install the d20 library:
```shell
$ pip install d20
```
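If you haven't used d20 before, here is a quick sketch of the two attributes our plugin will surface: `result`, the annotated roll string, and `total`, the numeric value. The printed strings are illustrative, not exact library output:

```python
import d20

roll = d20.roll("4d6kh3")
print(roll.result)  # annotated roll string, e.g. "4d6kh3 (4, 4, 2, 5) = 13"
print(roll.total)   # numeric total, e.g. 13
```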
Create it
Then, create a `main.py` file.
```python
import d20  # pip install d20
from fastlmi import FastLMI, utils
from pydantic import BaseModel

app = FastLMI(
    title="Dice Roller",
    name_for_model="DiceRoller",
    description="A simple plugin to roll dice.",
    description_for_model=(
        "DiceRoller can roll dice in XdY format and do math.\n"
        "Some dice examples are:\n"
        "4d6kh3 :: highest 3 of 4 6-sided dice\n"
        "2d6ro<3 :: roll 2d6s, then reroll any 1s or 2s once\n"
        "8d6mi2 :: roll 8d6s, with each die having a minimum roll of 2\n"
        "(1d4 + 1, 3, 2d6kl1)kh1 :: the highest of 1d4+1, 3, and the lower of 2 d6s\n"
        "Normal math operations are also supported."
    ),
    contact_email="foo@example.com",
    legal_url="https://example.com/legal",
)

# use this when developing localhost plugins to allow the browser to make the local request
utils.cors_allow_openai(app)


class DiceRequest(BaseModel):
    dice: str  # the dice to roll


class DiceResponse(BaseModel):
    result: str  # the resulting dice string
    total: int  # the total numerical result of the roll (rounded down to nearest integer)


@app.post("/roll")
def roll(dice: DiceRequest) -> DiceResponse:
    """Roll the given dice and return a detailed result."""
    result = d20.roll(dice.dice)
    return DiceResponse(result=result.result, total=result.total)
```
*(this example script is also available at `examples/diceroller.py`!)*
Run it
... and run it with:
```shell
$ uvicorn main:app

INFO:     Uvicorn running on http://127.0.0.1:8000 (Press CTRL+C to quit)
INFO:     Started server process [53532]
INFO:     Waiting for application startup.
INFO:     Application startup complete.
```
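Before involving any AI agent, you can sanity-check the route with an ordinary HTTP client. Here's a minimal sketch using the `requests` library (an assumption of this sketch, not a FastLMI dependency), run while the server above is listening:

```python
import requests  # pip install requests

resp = requests.post("http://127.0.0.1:8000/roll", json={"dice": "4d6kh3"})
resp.raise_for_status()
data = resp.json()  # shaped like our DiceResponse model: {"result": ..., "total": ...}
print(data["result"], data["total"])
```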
Register it
Finally, we need to tell ChatGPT about the new plugin. In the ChatGPT interface, select the "Plugins" model, then head to the Plugin Store -> Develop your own plugin.
Here, type in the address of your plugin. By default, it's localhost:8000.

Click "Find manifest file," and you should see your plugin appear. FastLMI automatically handles generating all the plugin metadata needed by OpenAI!

Chat away
To use your new plugin, select it from the list of plugins when starting a new chat:

and start chatting. Congratulations! 🎉 You've just created a brand-new ChatGPT plugin - and we're excited to see what else you'll make!

Example (LangChain Tool)
LangChain supports interfacing with AI plugins! If you've followed the steps above (at least up through "Run it"), you can also expose your new LMI to a LangChain agent.
This example assumes that you have LangChain installed, and that your LMI is running at http://localhost:8000.
```python
from langchain.agents import AgentType
from langchain.agents import initialize_agent, load_tools
from langchain.chat_models import ChatOpenAI
from langchain.tools import AIPluginTool

tool = AIPluginTool.from_plugin_url("http://localhost:8000/.well-known/ai-plugin.json")
llm = ChatOpenAI(temperature=0)
tools = load_tools(["requests_all"])
tools += [tool]

agent_chain = initialize_agent(tools, llm, agent=AgentType.CHAT_ZERO_SHOT_REACT_DESCRIPTION, verbose=True)
agent_chain.run("Can you roll me stats for a D&D character?")
```
Authentication
As of v0.2.0, FastLMI has built-in support for service-level auth, where the AI agent or LMI driver sends an authorization token of your choosing as a header with each request.
Enabling authentication is easy! First, define the authentication scheme you wish to use - this will be a subclass of `LMIAuth`. For example, to provide service-level auth, you can use the built-in `LMIServiceAuth` scheme:
```python
from fastlmi import Depends, FastLMI
from fastlmi.auth import LMIServiceAuth

auth = LMIServiceAuth(
    access_tokens=["your_secret_token_here"],
    verification_tokens={"openai": "verification_token_generated_in_the_ChatGPT_UI"},
)
```
This auth scheme allows defining a set of allowed access tokens (if one wanted to, for example, have a different token for each plugin service). Then, when you define your app, all you have to do is add 2 parameters:
```python
app = FastLMI(..., auth=auth, dependencies=[Depends(auth)])
```
Tada! 🎉 Your LMI now tells consumers that it uses the `service_http` auth scheme, and will validate that each request to one of your defined routes provides a valid Bearer token.

A complete example is in `examples/ai_plugin_auth.py`. You can read more about OpenAI's service-level auth here.
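From the consumer's side, service-level auth is just a Bearer header on every request. A minimal sketch against the dice roller, reusing the hypothetical token from the snippet above:

```python
import requests

resp = requests.post(
    "http://127.0.0.1:8000/roll",
    json={"dice": "1d20"},
    headers={"Authorization": "Bearer your_secret_token_here"},
)
print(resp.status_code)  # 200 with a valid token; an auth error without one
```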
Route-Level Auth
If you want to require auth only for certain routes, you can also define the auth dependency at the route level:
```diff
- app = FastLMI(..., auth=auth, dependencies=[Depends(auth)])
+ app = FastLMI(..., auth=auth)

  ...

- @app.post("/hello")
+ @app.post("/hello", dependencies=[Depends(auth)])
  def hello():
      ...
```
Read More
Being based on FastAPI, FastLMI can take full advantage of its superpowers. Check out the FastAPI documentation for more!
Todo
- scopes
- script to check for missing docs, over limits, etc
- logging
- configure ecosystems
Owner
- Name: Andrew Zhu
- Login: zhudotexe
- Kind: user
- Location: Philadelphia, PA
- Company: University of Pennsylvania
- Website: https://zhu.codes
- Repositories: 88
- Profile: https://github.com/zhudotexe
PhD @ UPenn || there once was a girl from purdue / who kept a young cat in a pew / she taught it to speak / alphabetical Greek / but it never got farther than μ
Citation (CITATION.cff)
```yaml
cff-version: 1.2.0
message: "If you use this software in academic research, please cite it as below."
authors:
  - family-names: "Zhu"
    given-names: "Andrew"
    orcid: "https://orcid.org/0000-0002-6664-3215"
title: "FastLMI"
version: 0.2.0
doi: 10.5281/zenodo.7925999
date-released: 2023-05-11
url: "https://github.com/zhudotexe/fastlmi"
```
Committers
Last synced: 7 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Andrew Zhu | me@a****m | 15 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 1
- Total pull requests: 0
- Average time to close issues: about 1 hour
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 0
- Average comments per issue: 2.0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 0
- Pull requests: 0
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 0
- Pull request authors: 0
- Average comments per issue: 0
- Average comments per pull request: 0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- JacobFV (1)
Pull Request Authors
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 1
- Total downloads: 14 last-month (pypi)
- Total dependent packages: 0
- Total dependent repositories: 0
- Total versions: 3
- Total maintainers: 1
pypi.org: fastlmi
A FastAPI-based framework for quickly building and iterating on language model interfaces.
- Homepage: https://github.com/zhudotexe/fastlmi
- Documentation: https://fastlmi.readthedocs.io/
- License: MIT License
- Latest release: 0.2.0 (published almost 3 years ago)