open-text-embeddings

Open Source Text Embedding Models with OpenAI Compatible API

https://github.com/rag-wtf/open-text-embeddings

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (11.6%) to scientific vocabulary

Keywords

aws-lambda embeddings fastapi huggingface langchain langchain-python openai opensource python sentence-transformers serverless-framework text-embeddings universal-sentence-encoder
Last synced: 6 months ago

Repository

Open Source Text Embedding Models with OpenAI Compatible API

Basic Info
Statistics
  • Stars: 160
  • Watchers: 4
  • Forks: 22
  • Open Issues: 4
  • Releases: 5
Topics
aws-lambda embeddings fastapi huggingface langchain langchain-python openai opensource python sentence-transformers serverless-framework text-embeddings universal-sentence-encoder
Created over 2 years ago · Last pushed over 1 year ago
Metadata Files
Readme Changelog Contributing License Citation

README.md

open-text-embeddings

PyPI Open in Colab Publish Python Package

Many open source projects provide OpenAI API-compatible completions and chat/completions endpoints, but do not support the embeddings endpoint.

The goal of this project is to create an OpenAI API-compatible version of the embeddings endpoint, which serves open source sentence-transformers models and other models supported by LangChain's HuggingFaceEmbeddings, HuggingFaceInstructEmbeddings, and HuggingFaceBgeEmbeddings classes.

ℹ️ Supported Text Embeddings Models

Below is a compilation of open-source models that are tested via the embeddings endpoint:

The models listed above have been tested and verified. It is worth noting that all sentence-transformers models are expected to work seamlessly with the endpoint.

It may not be immediately apparent that the BAAI/bge-* and intfloat/e5-* series of models can yield different embeddings for the same input value, depending on how it is sent to the embeddings endpoint. Consider the following examples:

Example 1:

```json
{ "input": "The food was delicious and the waiter..." }
```

Example 2:

```json
{ "input": ["The food was delicious and the waiter..."] }
```

This discrepancy arises because the BAAI/bge-* and intfloat/e5-* series of models require the addition of specific prefix text to the input value before creating embeddings to achieve optimal performance. In the first example, where the input is of type str, it is assumed that the embeddings will be used for queries. Conversely, in the second example, where the input is of type List[str], it is assumed that you will store the embeddings in a vector database. Adhering to these guidelines is essential to ensure the intended functionality and optimal performance of the models.
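The prefixing behavior described above can be sketched as follows. This is an illustrative reimplementation, not the project's actual code: the `apply_e5_prefix` helper is hypothetical, and the `query: ` / `passage: ` prefix strings are assumptions based on the intfloat/e5-* model cards.

```python
from typing import List, Union

def apply_e5_prefix(input_value: Union[str, List[str]]) -> List[str]:
    """Return prefixed texts ready for embedding (illustrative sketch)."""
    if isinstance(input_value, str):
        # A plain string is assumed to be a search query.
        return ["query: " + input_value]
    # A list of strings is assumed to be passages destined for a vector store.
    return ["passage: " + text for text in input_value]

print(apply_e5_prefix("The food was delicious and the waiter..."))
# → ['query: The food was delicious and the waiter...']
```

Because the prefix differs, the two request shapes shown in the examples above produce different embeddings for the same underlying text.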

🔍 Demo

Try out open-text-embeddings in your browser:

Open in Colab

🖥️ On-Premise Deployment

Standalone FastAPI Server

To run the embeddings endpoint locally as a standalone FastAPI server, follow these steps:

  1. Install the dependencies by executing the following commands:

```bash
pip install --no-cache-dir open-text-embeddings[server]
```

  2. Download the desired model using the following command, for example intfloat/e5-large-v2:

```bash
./download.sh intfloat/e5-large-v2
```

  3. Run the server with the desired model using the following command; normalized embeddings are enabled by default:

```bash
MODEL=intfloat/e5-large-v2 python -m open.text.embeddings.server
```

Set NORMALIZE_EMBEDDINGS to 0 or False if the model doesn't support normalized embeddings, for example:

```bash
MODEL=intfloat/e5-large-v2 NORMALIZE_EMBEDDINGS=0 python -m open.text.embeddings.server
```
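For context, normalizing an embedding means scaling the vector to unit length (L2 norm), so cosine similarity between two embeddings reduces to a dot product. A minimal pure-Python sketch of the operation the flag controls (illustrative, not the project's implementation):

```python
import math

def l2_normalize(vec):
    """Scale a vector to unit length; leave zero vectors unchanged."""
    norm = math.sqrt(sum(x * x for x in vec))
    return vec if norm == 0 else [x / norm for x in vec]

print(l2_normalize([3.0, 4.0]))  # → [0.6, 0.8]
```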

If a GPU is detected in the runtime environment, the server will automatically run in cuda mode. However, you have the flexibility to set the DEVICE environment variable to choose between cpu and cuda. Here's an example of how to run the server with your desired configuration:

```bash
MODEL=intfloat/e5-large-v2 DEVICE=cpu python -m open.text.embeddings.server
```

This setup allows you to seamlessly switch between CPU and GPU modes, giving you control over the server's performance based on your specific requirements.
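The device selection described above might look roughly like the sketch below. This is an assumption about the behavior, not the project's code; the `resolve_device` helper is hypothetical, and the fallback assumes PyTorch-style CUDA detection, guarded so the sketch still runs without torch installed.

```python
import os

def resolve_device() -> str:
    """Pick the embedding device: honor the DEVICE env var if set,
    otherwise fall back to cuda when available, cpu otherwise."""
    device = os.environ.get("DEVICE")
    if device:
        return device
    try:
        import torch  # optional dependency in this sketch
        return "cuda" if torch.cuda.is_available() else "cpu"
    except ImportError:
        return "cpu"

os.environ["DEVICE"] = "cpu"
print(resolve_device())  # → cpu
```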

You can enable verbose logging by setting VERBOSE to 1, for example:

```bash
MODEL=intfloat/e5-large-v2 VERBOSE=1 python -m open.text.embeddings.server
```

  4. You will see the following text in your console once the server has started:

```bash
INFO: Started server process [19705]
INFO: Waiting for application startup.
INFO: Application startup complete.
INFO: Uvicorn running on http://0.0.0.0:8000 (Press CTRL+C to quit)
```
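Once the server is up, the endpoint can be exercised with any OpenAI-style client. A stdlib-only sketch that builds the request without sending it; the `/v1/embeddings` path follows the OpenAI API shape, and the base URL matches the default address shown in the startup log above:

```python
import json
import urllib.request

def embeddings_request(base_url: str, text: str) -> urllib.request.Request:
    """Build an OpenAI-style embeddings request (not sent here)."""
    payload = json.dumps({"input": text}).encode("utf-8")
    return urllib.request.Request(
        base_url + "/v1/embeddings",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = embeddings_request("http://localhost:8000", "hello world")
print(req.full_url)  # → http://localhost:8000/v1/embeddings

# To actually send it (requires the server to be running):
# with urllib.request.urlopen(req) as resp:
#     body = json.loads(resp.read())
#     print(len(body["data"][0]["embedding"]))
```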

☁️ Cloud Deployment

To deploy the embeddings endpoint to a cloud platform using GitHub Actions, fork the repo first, then follow the steps for your target platform:

AWS Lambda Function

  1. Add your AWS credentials (AWS_KEY and AWS_SECRET) to the repository secrets. You can do this by navigating to https://github.com/your-username/open-text-embeddings/settings/secrets/actions.

  2. Manually trigger the Deploy Dev or Remove Dev GitHub Actions to deploy or remove the AWS Lambda Function.

Modal

  1. Add your Modal credentials (MODAL_TOKEN_ID, MODAL_TOKEN_SECRET and MODAL_USERNAME) to the repository secrets. You can do this by navigating to https://github.com/your-username/open-text-embeddings/settings/secrets/actions.

  2. Manually trigger the Deploy Modal GitHub Actions to deploy the Modal web endpoints.

🧪 Testing the Embeddings Endpoint

To test the embeddings endpoint, the repository includes an embeddings.ipynb notebook with a LangChain-compatible OpenAIEmbeddings class.

To get started:

  1. Install the dependencies by executing the following command:

```bash
pip install --no-cache-dir open-text-embeddings openai
```

  2. Execute the cells in the notebook to test the embeddings endpoint.

🧑‍💼 Contributing

Contributions are welcome! Please check out the issues on the repository, and feel free to open a pull request. For more information, please see the contributing guidelines.

Thank you very much for the following contributions:

  • Vokturz contributed #2: support for CPU/GPU choice and initialization before starting the app.
  • jayxuz contributed #5: improved OpenAI API compatibility, better support for previous versions of Python (starting from v3.7), better defaults, and bug fixes.

📔 License

This project is licensed under the terms of the MIT license.

🗒️ Citation

If you utilize this repository, please consider citing it with:

```bibtex
@misc{open-text-embeddings,
  author = {Lim Chee Kin},
  title = {open-text-embeddings: Open Source Text Embedding Models with OpenAI API-Compatible Endpoint},
  year = {2023},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/limcheekin/open-text-embeddings}},
}
```

Owner

  • Name: rag.wtf
  • Login: rag-wtf
  • Kind: organization

Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
  - family-names: "Lim"
    given-names: "Chee Kin"
title: "open-text-embeddings"
date-released: 2023-10-10
url: "https://github.com/limcheekin/open-text-embeddings"

GitHub Events

Total
  • Watch event: 1
Last Year
  • Watch event: 1

Issues and Pull Requests

Last synced: 6 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0