Science Score: 54.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: Found CITATION.cff file
- ✓ codemeta.json file: Found codemeta.json file
- ✓ .zenodo.json file: Found .zenodo.json file
- ○ DOI references
- ○ Academic publication links
- ✓ Committers with academic emails: 1 of 15 committers (6.7%) from academic institutions
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: Low similarity (12.0%) to scientific vocabulary
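The exact weighting behind the 54.0% figure is not published in this report, but a minimal sketch of how indicators like those above could be combined into a single percentage is shown below. The indicator names and the equal weighting are assumptions for illustration, so the result will not reproduce 54.0% exactly.

```python
# Illustrative only: combine presence/strength indicators into one percentage.
# The equal weighting below is an assumption, not the report's actual formula.
indicators = {
    "citation_cff": 1.0,            # CITATION.cff found
    "codemeta_json": 1.0,           # codemeta.json found
    "zenodo_json": 1.0,             # .zenodo.json found
    "doi_references": 0.0,          # none found
    "publication_links": 0.0,       # none found
    "academic_committers": 0.067,   # 1 of 15 committers (6.7%)
    "institutional_owner": 0.0,     # not detected
    "joss_metadata": 0.0,           # none found
    "vocab_similarity": 0.12,       # 12.0% similarity to scientific vocabulary
}

score = 100 * sum(indicators.values()) / len(indicators)
print(f"Science score (illustrative): {score:.1f}%")
```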
Keywords
Repository
Build, evaluate and observe LLM apps
Basic Info
- Host: GitHub
- Owner: aiplanethub
- License: apache-2.0
- Language: Jupyter Notebook
- Default Branch: main
- Homepage: https://beyondllm.aiplanet.com/
- Size: 2.64 MB
Statistics
- Stars: 288
- Watchers: 7
- Forks: 44
- Open Issues: 11
- Releases: 6
Topics
Metadata Files
README.md
BeyondLLM
Build - Rapid Experiment - Evaluate - Observability
Beyond LLM offers an all-in-one toolkit for experimenting with, evaluating, and deploying Retrieval-Augmented Generation (RAG) systems. It simplifies the process with automated integration, customizable evaluation metrics, and support for various Large Language Models (LLMs) tailored to specific needs, with the aim of reducing LLM hallucination risks and enhancing reliability.
👉 Join our Discord community! Try out a quick demo on Google Colab.
Quick install
```bash
pip install beyondllm
```
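To confirm which version was installed, a minimal check is shown below; it only uses the standard library, and the printed version is just an example.

```python
# Optional sanity check after installation; importlib.metadata is part of the
# standard library on the Python versions BeyondLLM supports (3.9 to 3.11).
from importlib.metadata import version

print(version("beyondllm"))  # e.g. "0.2.3"
```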
Quickstart Guide - Chat with YouTube Video
In this quick start guide, we'll demonstrate how to create a Chat with YouTube Video RAG application using Beyond LLM in less than 8 lines of code. These 8 lines cover:
- Getting a custom data source
- Retrieving documents
- Generating LLM responses
- Evaluating embeddings
- Evaluating LLM responses
Approach-1: Using Default LLM and Embeddings
Build customised RAG in less than 5 lines of code using Beyond LLM.
```python
from beyondllm import source, retrieve, generator
import os

os.environ['GOOGLE_API_KEY'] = "Your Google API Key:"

data = source.fit("https://www.youtube.com/watch?v=oJJyTztI6g", dtype="youtube", chunk_size=512, chunk_overlap=50)
retriever = retrieve.auto_retriever(data, type="normal", top_k=3)
pipeline = generator.Generate(question="what tool is video mentioning about?", retriever=retriever)

print(pipeline.call())
```
Approach-2: With Custom LLM and Embeddings
Beyond LLM supports various embeddings and LLMs, two of the most important components in Retrieval-Augmented Generation.
```python
from beyondllm import source, retrieve, embeddings, llms, generator
import os
from getpass import getpass

os.environ['OPENAI_API_KEY'] = getpass("Your OpenAI API Key:")

data = source.fit("https://www.youtube.com/watch?v=oJJyTztI6g", dtype="youtube", chunk_size=1024, chunk_overlap=0)
embed_model = embeddings.OpenAIEmbeddings()
retriever = retrieve.auto_retriever(data, embed_model, type="normal", top_k=4)
llm = llms.ChatOpenAIModel()
pipeline = generator.Generate(question="what tool is video mentioning about?", retriever=retriever, llm=llm)

print(pipeline.call())                  # AI response
print(retriever.evaluate(llm=llm))      # evaluate embeddings
print(pipeline.get_rag_triad_evals())   # evaluate LLM response
```
Output
```bash
The tool mentioned in the context is called Jupiter, which is an AI Guru designed to simplify the learning of complex data science topics. Users can access Jupiter by logging into AI Planet, accessing any course for free, and then requesting explanations of topics from Jupiter in various styles, such as in the form of a movie plot. Jupiter aims to make AI education more accessible and interactive for everyone.

Hit_rate: 1.0
MRR: 1.0

Context relevancy Score: 8.0
Answer relevancy Score: 7.0
Groundness score: 7.666666666666667
```
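For context, Hit_rate and MRR are standard retrieval metrics. The sketch below shows the conventional way they are computed for a single query; it only illustrates what the numbers mean and is not the exact implementation behind retriever.evaluate().

```python
# Conventional hit rate and MRR for one query: `ranked_ids` are the retrieved
# chunk ids in rank order, `relevant_id` is the chunk that actually answers
# the query. This mirrors the standard metric definitions, not BeyondLLM's internals.
def hit_rate_and_mrr(ranked_ids, relevant_id):
    if relevant_id not in ranked_ids:
        return 0.0, 0.0
    rank = ranked_ids.index(relevant_id) + 1  # 1-based rank of the first hit
    return 1.0, 1.0 / rank

# The relevant chunk is retrieved at rank 1, so both metrics are 1.0,
# matching the output above.
print(hit_rate_and_mrr(["chunk_a", "chunk_b", "chunk_c"], "chunk_a"))  # (1.0, 1.0)
```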
Observability
Observability helps you keep track of closed-source model usage, including latency and cost monitoring. BeyondLLM provides an Observer that currently monitors OpenAI LLM performance.
```python
from beyondllm import source, retrieve, generator, llms, embeddings
from beyondllm.observe import Observer
import os

os.environ['OPENAI_API_KEY'] = 'sk-****'

# Start the observer before running any pipelines so calls are tracked
Observe = Observer()
Observe.run()

llm = llms.ChatOpenAIModel()
embed_model = embeddings.OpenAIEmbeddings()

data = source.fit("https://www.youtube.com/watch?v=oJJyTztI6g", dtype="youtube", chunk_size=512, chunk_overlap=50)
retriever = retrieve.auto_retriever(data, embed_model, type="normal", top_k=4)

pipeline = generator.Generate(question="what tool is video mentioning about?", retriever=retriever, llm=llm)
pipeline = generator.Generate(question="What is the tool used for?", retriever=retriever, llm=llm)
pipeline = generator.Generate(question="How can i use the tool for my own use?", retriever=retriever, llm=llm)
```
Documentation
See beyondllm.aiplanet.com for complete documentation.
Contribution guidelines
Beyond LLM thrives in the rapidly evolving landscape of open-source projects. We wholeheartedly welcome contributions in various capacities, be it through innovative features, enhanced infrastructure, or refined documentation.
See Contributing guide for more information on contributing to the BeyondLLM library.
Acknowledgements
and the entire OpenSource community.
License
The contents of this repository are licensed under the Apache License, version 2.0.
Get in Touch
You can schedule a 1:1 meeting with our team to get started with GenAI Stack, OpenAGI, AI Planet Open Source LLMs (Buddhi, effi and Panda Coder) and Beyond LLM. Schedule the call here: https://calendly.com/jaintarun
Owner
- Name: AI Planet
- Login: aiplanethub
- Kind: organization
- Website: https://aiplanet.com/
- Twitter: aiplanethub
- Repositories: 137
- Profile: https://github.com/aiplanethub
Ecosystem educating and building AI for everyone!
Citation (CITATION.cff)
cff-version: 1.2.0
title: 'BeyondLLM by AI Planet'
message: >-
If you use this software, please cite it using this
metadata.
type: software
authors:
- given-names: Tarun
family-names: Jain
email: tarun@aiplanet.com
- given-names: Adithya
family-names: Hegde
email: adithya@aiplanet.com
- given-names: Muhammad
family-names: Taha
email: muhammad@aiplanet.com
repository-code: 'https://github.com/aiplanethub/beyondllm'
url: 'https://aiplanet.com/'
license: Apache-2.0
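As an illustration of using this metadata programmatically, the snippet below parses CITATION.cff with PyYAML (already listed among the dependencies) and prints a plain citation line. The output format is an assumption for demonstration, not a project-provided helper.

```python
# Build a simple one-line citation from CITATION.cff. The file is plain YAML,
# so PyYAML is sufficient; the citation format itself is only illustrative.
import yaml

with open("CITATION.cff") as fh:
    cff = yaml.safe_load(fh)

authors = ", ".join(
    f"{a['given-names']} {a['family-names']}" for a in cff["authors"]
)
print(f"{authors}. {cff['title']}. {cff['repository-code']}")
# Tarun Jain, Adithya Hegde, Muhammad Taha. BeyondLLM by AI Planet.
# https://github.com/aiplanethub/beyondllm
```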
GitHub Events
Total
- Issues event: 2
- Watch event: 35
- Issue comment event: 39
- Push event: 3
- Pull request review comment event: 10
- Pull request review event: 5
- Pull request event: 11
- Fork event: 12
- Create event: 1
Last Year
- Issues event: 2
- Watch event: 35
- Issue comment event: 39
- Push event: 3
- Pull request review comment event: 10
- Pull request review event: 5
- Pull request event: 11
- Fork event: 12
- Create event: 1
Committers
Last synced: 6 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| Tarun Jain | t****n@a****m | 31 |
| Tarun Jain | 1****I | 17 |
| Adithya Hegde | h****9@g****m | 14 |
| lucifertrj | t****4@c****n | 7 |
| Muhammad Taha | m****d@a****m | 7 |
| Jaimin Godhani | 1****1 | 5 |
| shivaya-aiplanet | s****a@a****m | 5 |
| ARYA CHAKRABORTY | a****2@g****m | 3 |
| Arinjay Wyawhare | 7****e | 2 |
| ShreehariVaasishta | s****1@g****m | 2 |
| Ikko Eltociear Ashimine | e****r@g****m | 1 |
| PEDDIREDDY MADHAVI | 9****y | 1 |
| Ritwick Bhargav | r****0@g****m | 1 |
| arya-aiplanet | 1****t | 1 |
| deepak-aiplanet | 1****t | 1 |
Committer Domains (Top 20 + Academic)
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 11
- Total pull requests: 75
- Average time to close issues: 21 days
- Average time to close pull requests: 6 days
- Total issue authors: 7
- Total pull request authors: 18
- Average comments per issue: 4.91
- Average comments per pull request: 0.43
- Merged pull requests: 55
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 4
- Pull requests: 18
- Average time to close issues: 6 days
- Average time to close pull requests: 7 days
- Issue authors: 4
- Pull request authors: 10
- Average comments per issue: 6.5
- Average comments per pull request: 0.83
- Merged pull requests: 10
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- tarun-aiplanet (4)
- alvinrach (1)
- jaintarunAI (1)
- farzad528 (1)
- sk5268 (1)
- adithya-aiplanet (1)
- shikhardadhich (1)
Pull Request Authors
- adithya-aiplanet (25)
- taha-aiplanet (19)
- Jai0401 (10)
- jaintarunAI (9)
- arya-aiplanet (7)
- shivaya-aiplanet (7)
- tarun-aiplanet (6)
- lucifertrj (6)
- jaywyawhare (6)
- adityasingh-0803 (4)
- deepak-aiplanet (2)
- eltociear (2)
- madhavi-peddireddy (2)
- Anush008 (2)
- ritwickbhargav80 (2)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: pypi, 58 last month
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 8
- Total maintainers: 1
proxy.golang.org: github.com/aiplanethub/beyondllm
- Documentation: https://pkg.go.dev/github.com/aiplanethub/beyondllm#section-documentation
- License: apache-2.0
- Latest release: v0.2.3 (published over 1 year ago)
Rankings
pypi.org: beyondllm
Beyond LLM is a toolkit to build, experiment with, evaluate, and observe RAG pipelines.
- Documentation: https://beyondllm.readthedocs.io/
- License: apache-2.0
- Latest release: 0.2.3 (published over 1 year ago)
Rankings
Maintainers (1)
Dependencies
- actions/checkout v2 composite
- actions/setup-python v2 composite
- aiohttp 3.9.5
- aiosignal 1.3.1
- annotated-types 0.6.0
- anyio 4.3.0
- async-timeout 4.0.3
- attrs 23.2.0
- beautifulsoup4 4.12.3
- cachetools 5.3.3
- certifi 2024.2.2
- charset-normalizer 3.3.2
- click 8.1.7
- colorama 0.4.6
- dataclasses-json 0.6.4
- deprecated 1.2.14
- dirtyjson 1.0.8
- distro 1.9.0
- exceptiongroup 1.2.1
- frozenlist 1.4.1
- fsspec 2024.3.1
- google-ai-generativelanguage 0.4.0
- google-api-core 2.18.0
- google-auth 2.29.0
- google-generativeai 0.4.1
- googleapis-common-protos 1.63.0
- greenlet 3.0.3
- grpcio 1.62.2
- grpcio-status 1.62.2
- h11 0.14.0
- httpcore 1.0.5
- httpx 0.27.0
- idna 3.7
- joblib 1.4.0
- llama-index 0.10.27
- llama-index-agent-openai 0.2.2
- llama-index-cli 0.1.12
- llama-index-core 0.10.30
- llama-index-embeddings-gemini 0.1.6
- llama-index-embeddings-openai 0.1.7
- llama-index-indices-managed-llama-cloud 0.1.5
- llama-index-legacy 0.9.48
- llama-index-llms-openai 0.1.15
- llama-index-multi-modal-llms-openai 0.1.5
- llama-index-program-openai 0.1.5
- llama-index-question-gen-openai 0.1.3
- llama-index-readers-file 0.1.19
- llama-index-readers-llama-parse 0.1.4
- llama-parse 0.4.1
- llamaindex-py-client 0.1.18
- marshmallow 3.21.1
- multidict 6.0.5
- mypy-extensions 1.0.0
- nest-asyncio 1.6.0
- networkx 3.2.1
- nltk 3.8.1
- numpy 1.26.4
- openai 1.20.0
- packaging 24.0
- pandas 2.0.3
- pillow 10.3.0
- proto-plus 1.23.0
- protobuf 4.25.3
- pyasn1 0.6.0
- pyasn1-modules 0.4.0
- pydantic 2.7.0
- pydantic-core 2.18.1
- pypdf 4.2.0
- pysbd 0.3.4
- python-dateutil 2.9.0.post0
- pytz 2024.1
- pyyaml 6.0.1
- regex 2024.4.16
- requests 2.31.0
- rsa 4.9
- six 1.16.0
- sniffio 1.3.1
- soupsieve 2.5
- sqlalchemy 2.0.29
- striprtf 0.0.26
- tenacity 8.2.3
- tiktoken 0.6.0
- tqdm 4.66.2
- typing-extensions 4.11.0
- typing-inspect 0.9.0
- tzdata 2024.1
- urllib3 2.2.1
- wrapt 1.16.0
- yarl 1.9.4
- llama-index 0.10.27
- llama-index-embeddings-gemini 0.1.6
- nltk 3.8.1
- numpy 1.26.4
- openai 1.20.0
- pandas 2.0.3
- pydantic 2.7.0
- pypdf 4.2.0
- pysbd 0.3.4
- python ~3.9 | ~3.10 | ~3.11
- pyyaml 6.0.1
- regex 2024.4.16
- sqlalchemy 2.0.29
- tiktoken 0.6.0
- aiohttp ==3.9.5
- aiosignal ==1.3.1
- annotated-types ==0.6.0
- anyio ==4.3.0
- async-timeout ==4.0.3
- attrs ==23.2.0
- beautifulsoup4 ==4.12.3
- cachetools ==5.3.3
- certifi ==2024.2.2
- charset-normalizer ==3.3.2
- click ==8.1.7
- colorama ==0.4.6
- dataclasses-json ==0.6.4
- deprecated ==1.2.14
- dirtyjson ==1.0.8
- distro ==1.9.0
- exceptiongroup ==1.2.1
- frozenlist ==1.4.1
- fsspec ==2024.3.1
- google-ai-generativelanguage ==0.4.0
- google-api-core ==2.18.0
- google-auth ==2.29.0
- google-generativeai ==0.4.1
- googleapis-common-protos ==1.63.0
- greenlet ==3.0.3
- grpcio ==1.62.2
- grpcio-status ==1.62.2
- h11 ==0.14.0
- httpcore ==1.0.5
- httpx ==0.27.0
- idna ==3.7
- joblib ==1.4.0
- llama-index ==0.10.27
- llama-index-agent-openai ==0.2.2
- llama-index-cli ==0.1.12
- llama-index-core ==0.10.30
- llama-index-embeddings-gemini ==0.1.6
- llama-index-embeddings-openai ==0.1.7
- llama-index-indices-managed-llama-cloud ==0.1.5
- llama-index-legacy ==0.9.48
- llama-index-llms-openai ==0.1.15
- llama-index-multi-modal-llms-openai ==0.1.5
- llama-index-program-openai ==0.1.5
- llama-index-question-gen-openai ==0.1.3
- llama-index-readers-file ==0.1.19
- llama-index-readers-llama-parse ==0.1.4
- llama-parse ==0.4.1
- llamaindex-py-client ==0.1.18
- marshmallow ==3.21.1
- multidict ==6.0.5
- mypy-extensions ==1.0.0
- nest-asyncio ==1.6.0
- networkx ==3.2.1
- nltk ==3.8.1
- numpy ==1.26.4
- openai ==1.20.0
- packaging ==24.0
- pandas ==2.0.3
- pillow ==10.3.0
- proto-plus ==1.23.0
- protobuf ==4.25.3
- pyasn1 ==0.6.0
- pyasn1-modules ==0.4.0
- pydantic ==2.7.0
- pydantic-core ==2.18.1
- pypdf ==4.2.0
- pysbd ==0.3.4
- python-dateutil ==2.9.0.post0
- pytz ==2024.1
- pyyaml ==6.0.1
- regex ==2024.4.16
- requests ==2.31.0
- rsa ==4.9
- six ==1.16.0
- sniffio ==1.3.1
- soupsieve ==2.5
- sqlalchemy ==2.0.29
- striprtf ==0.0.26
- tenacity ==8.2.3
- tiktoken ==0.6.0
- tqdm ==4.66.2
- typing-extensions ==4.11.0
- typing-inspect ==0.9.0
- tzdata ==2024.1
- urllib3 ==2.2.1
- wrapt ==1.16.0
- yarl ==1.9.4