rag-chunking-evaluation

Assess the effectiveness of chunking strategies in RAG systems via a custom evaluation framework.

https://github.com/leo310/rag-chunking-evaluation

Science Score: 44.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
    Found CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Committers with academic emails
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (9.1%) to scientific vocabulary

Keywords

chunking evaluation-framework retrieval retrieval-augmented-generation
Last synced: 6 months ago

Repository

Assess the effectiveness of chunking strategies in RAG systems via a custom evaluation framework.

Basic Info
  • Host: GitHub
  • Owner: Leo310
  • License: MIT
  • Language: Jupyter Notebook
  • Default Branch: main
  • Size: 4.44 MB
Statistics
  • Stars: 1
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
chunking evaluation-framework retrieval retrieval-augmented-generation
Created over 1 year ago · Last pushed over 1 year ago
Metadata Files
Readme License Citation

README.md

RAG Chunking Evaluation

This repository contains code and datasets for evaluating chunking strategies in Retrieval-Augmented Generation (RAG) systems. The project includes various benchmarks, data loaders, and utility functions to facilitate the evaluation process.

Setup

  1. Clone this repository
  2. Create a virtual environment:

```sh
python -m venv venv
source venv/bin/activate  # On Windows use `venv\Scripts\activate`
```

  3. Install dependencies:

```sh
pip install -r requirements.txt
```

  4. Set up environment variables: copy .env.example to .env and fill in the required values.
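The exact variable names live in .env.example; the keys below are plausible guesses inferred from the dependency list (openai, anthropic, langsmith), not confirmed from the repository:

```sh
# Hypothetical .env contents — check .env.example for the actual names.
OPENAI_API_KEY=sk-...        # assumed: used by the openai / langchain dependencies
ANTHROPIC_API_KEY=sk-ant-... # assumed: used by the anthropic dependency
LANGCHAIN_API_KEY=ls-...     # assumed: used by langsmith tracing
```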

Usage

Follow the instructions in the my_benchmark notebook to run the proposed chunking evaluation framework. The specific chunking strategies under evaluation are detailed in the chunking_strategies notebook.

Each step in the evaluation pipeline generates intermediate results, which are saved in the data directory for later review and loading.

The experimental directory includes tests for other benchmarks and evaluation frameworks, such as Ragas, Trulens, and Multi-Hop-RAG.
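To make the idea of a "chunking strategy" and its evaluation concrete, here is a minimal, dependency-free sketch — not the repository's actual code — of a fixed-size chunker with overlap and a simple retrieval hit-rate metric (the function names `chunk_fixed` and `hit_rate` are illustrative):

```python
def chunk_fixed(text: str, chunk_size: int = 200, overlap: int = 50) -> list[str]:
    """Split text into fixed-size character chunks with a sliding overlap."""
    if overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk_size")
    step = chunk_size - overlap
    chunks = []
    for start in range(0, len(text), step):
        chunk = text[start:start + chunk_size]
        if chunk:
            chunks.append(chunk)
    return chunks


def hit_rate(chunks: list[str], answers: list[str]) -> float:
    """Fraction of answer strings found verbatim in at least one chunk.

    A crude stand-in for the retrieval metrics a real framework
    (e.g. Ragas or Trulens, as tested in the experimental directory)
    would compute.
    """
    if not answers:
        return 0.0
    hits = sum(any(a in c for c in chunks) for a in answers)
    return hits / len(answers)
```

Sweeping `chunk_size` and `overlap` and comparing `hit_rate` across settings is, in miniature, the kind of comparison the notebooks perform with real embedding-based retrieval.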

Owner

  • Name: Leo
  • Login: Leo310
  • Kind: user
  • Location: Berlin
  • Company: SAP SE


Citation (CITATION.cff)

cff-version: 1.2.0
message: "If you use this software, please cite it as below."
authors:
- family-names: "Heininger"
  given-names: "Leonard"
  orcid: "https://orcid.org/0000-0000-0000-0000"
title: "rag-chunking-evaluation"
version: 1.0.0
doi: 10.5281/zenodo.1234
date-released: 2024-08-09
url: "https://github.com/Leo310/rag-chunking-evaluation"

GitHub Events

Total
  • Push event: 1
Last Year
  • Push event: 1

Committers

Last synced: 11 months ago

All Time
  • Total Commits: 17
  • Total Committers: 1
  • Avg Commits per committer: 17.0
  • Development Distribution Score (DDS): 0.0
Past Year
  • Commits: 17
  • Committers: 1
  • Avg Commits per committer: 17.0
  • Development Distribution Score (DDS): 0.0
Top Committers
  • Leo310 (l****0@g****m): 17 commits

Issues and Pull Requests

Last synced: 11 months ago

All Time
  • Total issues: 0
  • Total pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Total issue authors: 0
  • Total pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0
Past Year
  • Issues: 0
  • Pull requests: 0
  • Average time to close issues: N/A
  • Average time to close pull requests: N/A
  • Issue authors: 0
  • Pull request authors: 0
  • Average comments per issue: 0
  • Average comments per pull request: 0
  • Merged pull requests: 0
  • Bot issues: 0
  • Bot pull requests: 0

Dependencies

requirements.txt pypi
  • anthropic *
  • chromadb *
  • huggingface *
  • langchain *
  • langchain_benchmarks *
  • langchain_experimental *
  • langchainhub *
  • langsmith *
  • nest_asyncio *
  • openai *
  • pandas *
  • pyarrow *
  • python-dotenv *
  • ragas *
  • sentence_transformers *
  • tiktoken *
  • tqdm *
  • trulens *