private-ai
Private-AI is an innovative AI project designed for asking questions about your documents using powerful Large Language Models (LLMs). The unique feature? It works offline, ensuring 100% privacy with no data leaving your environment.
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references: not found
- ○ Academic publication links: not found
- ○ Academic email domains: not found
- ○ Institutional organization owner: not found
- ○ JOSS paper metadata: not found
- ○ Scientific vocabulary similarity: low similarity (14.7%) to scientific vocabulary
Keywords
Repository
Basic Info
- Host: GitHub
- Owner: AryanVBW
- License: apache-2.0
- Language: Python
- Default Branch: main
- Homepage: https://aryanvbw.github.io/Private-Ai/
- Size: 11.8 MB
Statistics
- Stars: 18
- Watchers: 1
- Forks: 8
- Open Issues: 16
- Releases: 1
Topics
Metadata Files
README.md
🚀 Welcome to Private-AI!
Private-AI is an innovative AI project designed for asking questions about your documents using powerful Large Language Models (LLMs). The unique feature? It works offline, ensuring 100% privacy with no data leaving your environment.
🌐 What does Private-AI offer?
High-level API: Abstracts the complexity of a Retrieval Augmented Generation (RAG) pipeline. Handles document ingestion, chat, and completions.
Low-level API: For advanced users to implement custom pipelines. Includes features like embeddings generation and contextual chunks retrieval.
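The split between the two API levels can be illustrated with a minimal, self-contained sketch of a RAG pipeline. This is an illustrative toy, not Private-AI's actual implementation: the bag-of-words `embed` function, the chunking scheme, and the `ToyRAG` class are all assumptions standing in for the real embedding model and vector store.

```python
from collections import Counter
import math

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector.
    # A real pipeline would use a sentence-embedding model instead.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class ToyRAG:
    """Mirrors the high-level API: ingest documents, then retrieve context."""

    def __init__(self) -> None:
        self.chunks: list[tuple[str, Counter]] = []

    def ingest(self, document: str, chunk_size: int = 8) -> None:
        # Split the document into fixed-size word chunks and embed each one
        # (the low-level API exposes this embeddings-generation step directly).
        words = document.split()
        for i in range(0, len(words), chunk_size):
            chunk = " ".join(words[i:i + chunk_size])
            self.chunks.append((chunk, embed(chunk)))

    def retrieve(self, query: str, k: int = 1) -> list[str]:
        # Rank chunks by similarity to the query (contextual chunks retrieval).
        q = embed(query)
        ranked = sorted(self.chunks, key=lambda c: cosine(q, c[1]), reverse=True)
        return [chunk for chunk, _ in ranked[:k]]

rag = ToyRAG()
rag.ingest("Private-AI runs offline. No data leaves your environment. "
           "It exposes high-level and low-level APIs.")
context = rag.retrieve("does any data leave my environment?")
```

In the real pipeline the retrieved context is then passed to the LLM to ground its completion; here `context` simply holds the best-matching chunk.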
🌟 Why Private-AI?
Privacy is the key motivator! Private-AI addresses concerns in data-sensitive domains like healthcare and legal, ensuring your data stays under your control.
🤖 Installation
Private-Ai Installation Guide
- Install Python 3.11 (or 3.12)
- Using apt (Debian-based Linux such as Kali, Ubuntu, etc.):

```bash
sudo apt-get install python3.11
sudo apt install python3.11-venv
```

- Using pyenv:

```bash
pyenv install 3.11
pyenv local 3.11
```
- Install Poetry for dependency management.
```bash
sudo apt install python3-poetry
sudo apt install python3-pytest
```

### Installation without GPU

- Git clone the Private-Ai repository and run the setup:

```bash
git clone https://github.com/AryanVBW/Private-Ai && cd Private-Ai && \
  python3.11 -m venv .venv && source .venv/bin/activate && \
  pip install --upgrade pip poetry && poetry install --with ui,local && ./scripts/setup
python3.11 -m private_gpt
```
Running Private-Ai:
- To run it again later, just go to the Private-Ai directory and run the following command:

```bash
make run
```
👍👍All Done 👍👍
For GPU utilization and customization, follow the steps below:
- For Private-Ai to run fully locally, GPU acceleration is required (CPU execution is possible, but very slow).
### Clone the repository

- Git clone the Private-Ai repository:

```bash
git clone https://github.com/AryanVBW/Private-Ai
cd Private-Ai
```

### Dependencies installation

- Install make (OSX: `brew install make`, Windows: `choco install make`).
- Install dependencies:

```bash
poetry install --with ui
```

### Local LLM setup

- Install extra dependencies for local execution:

```bash
poetry install --with local
```

- Use the setup script to download the embedding and LLM models:

```bash
poetry run python scripts/setup
```

### Finalize

- Install Private-Ai:

```bash
make
```
### Verification and run

- Run `make run` or `poetry run python -m private_gpt`.
- Open http://localhost:8001 to see the Gradio UI with a mock LLM echoing input.
Customization:
- Customize low-level parameters in `private_gpt/components/llm/llm_component.py`.
- Configure LLM options in `settings.yaml`.
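Since `pyyaml` is already among the project's dependencies, the options in `settings.yaml` can also be read programmatically. The keys in the fragment below (`llm.mode`, `max_new_tokens`, `ui.enabled`) are hypothetical examples for illustration, not the project's actual settings schema.

```python
import yaml  # pyyaml, already listed in the project's dependencies

# Hypothetical settings fragment; the real settings.yaml keys may differ.
raw = """
llm:
  mode: local
  max_new_tokens: 256
ui:
  enabled: true
"""

# safe_load parses YAML into plain Python dicts/lists without executing tags.
settings = yaml.safe_load(raw)
mode = settings["llm"]["mode"]
```

`yaml.safe_load` is preferred over `yaml.load` here because it refuses arbitrary object construction from untrusted files.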
GPU Support:
- OSX: Build llama.cpp with Metal support:

```bash
CMAKE_ARGS="-DLLAMA_METAL=on" pip install --force-reinstall --no-cache-dir llama-cpp-python
```

- Windows NVIDIA GPU: Install VS2022, the CUDA toolkit, and run:

```powershell
$env:CMAKE_ARGS='-DLLAMA_CUBLAS=on'; poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```

- Linux NVIDIA GPU and Windows WSL: Install the CUDA toolkit and run:

```bash
CMAKE_ARGS='-DLLAMA_CUBLAS=on' poetry run pip install --force-reinstall --no-cache-dir llama-cpp-python
```
Troubleshooting:
- Check GPU support and dependencies for your platform.
- For C++ compiler issues, follow troubleshooting steps.
Note: If you run into issues, retry the installation in verbose mode with `-vvv`.
Troubleshooting the C++ compiler:
- Windows 10/11: Install Visual Studio 2022 and MinGW.
- OSX: Ensure Xcode is installed, or install clang/gcc with Homebrew.
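Before rebuilding `llama-cpp-python`, it can save time to check whether a C++ compiler is actually on the PATH. This is a small convenience sketch; the candidate list below is an assumption covering the toolchains named above (MSVC's `cl`, clang, gcc), and `find_cxx_compiler` is a name invented here, not part of the project.

```python
import shutil

def find_cxx_compiler() -> "str | None":
    # Candidates cover the MSVC, clang, and gcc toolchains mentioned above.
    for candidate in ("cl", "clang++", "g++"):
        if shutil.which(candidate):
            return candidate
    return None  # no known C++ compiler found on PATH

compiler = find_cxx_compiler()
```

If this returns `None`, install one of the toolchains listed in the troubleshooting steps before retrying the build.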
🧩 Architecture Highlights:
FastAPI-Based API: Follows the OpenAI API standard, making it easy to integrate.
LlamaIndex Integration: Leverages LlamaIndex for the RAG pipeline, providing flexibility and extensibility.
Present and Future: Evolving into a gateway for generative AI models and primitives. Stay tuned for exciting new features!
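Because the API follows the OpenAI standard, an OpenAI-style request body should be usable against the local server. The sketch below only builds and serializes such a payload without sending it; the `/v1/chat/completions` path, the port (taken from the README's http://localhost:8001 default), and the model name are assumptions, not confirmed endpoints of this project.

```python
import json

# Assumed local base URL, based on the README's http://localhost:8001 default.
BASE_URL = "http://localhost:8001"
endpoint = f"{BASE_URL}/v1/chat/completions"  # OpenAI-style path (assumed)

payload = {
    "model": "private-ai-local",  # placeholder model name, not a real identifier
    "messages": [
        {"role": "user", "content": "Summarize my ingested documents."}
    ],
    "stream": False,
}

body = json.dumps(payload)
# A real call would POST `body` to `endpoint` with Content-Type: application/json,
# e.g. via urllib.request or an OpenAI client pointed at BASE_URL.
```

This is what makes the FastAPI layer easy to integrate: existing OpenAI-compatible tooling can be repointed at the local server instead of a remote service.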
💡 How to Contribute?
Contributions are welcome! Check the ProjectBoard for ideas. Ensure code quality with formatting and typing checks (run `make check`).
🤗Supporters:
Supported by Qdrant, Fern, and LlamaIndex. Influenced by projects like LangChain, GPT4All, LlamaCpp, Chroma, and SentenceTransformers.
👏 Thank you for contributing to the future of private and powerful AI with Private-AI! 📝 License: Apache-2.0
Copyright Notice
This is a modified version of PrivateGPT. All rights and licenses belong to the PrivateGPT team.
© 2023 PrivateGPT Developers. All rights reserved.
Owner
- Name: ARYAN
- Login: AryanVBW
- Kind: user
- Company: Aryanvbw.tech
- Website: Aryanvbw.tech
- Repositories: 1
- Profile: https://github.com/AryanVBW
Citation (CITATION.cff)
# This CITATION.cff file was generated with cffinit.
# Visit https://bit.ly/cffinit to generate yours today!
cff-version: 1.2.0
title: PrivateGPT
message: >-
If you use this software, please cite it using the
metadata from this file.
type: software
authors:
- given-names: Iván
family-names: Martínez Toro
email: ivanmartit@gmail.com
orcid: 'https://orcid.org/0009-0004-5065-2311'
- family-names: Gallego Vico
given-names: Daniel
email: danielgallegovico@gmail.com
orcid: 'https://orcid.org/0009-0006-8582-4384'
- given-names: Pablo
family-names: Orgaz
email: pabloogc+gh@gmail.com
orcid: 'https://orcid.org/0009-0008-0080-1437'
repository-code: 'https://github.com/imartinez/privateGPT'
license: Apache-2.0
date-released: '2023-05-02'
GitHub Events
Total
- Watch event: 5
- Issue comment event: 1
- Fork event: 1
Last Year
- Watch event: 5
- Issue comment event: 1
- Fork event: 1
Issues and Pull Requests
Last synced: over 1 year ago
All Time
- Total issues: 1
- Total pull requests: 15
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Total issue authors: 1
- Total pull request authors: 4
- Average comments per issue: 2.0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 9
Past Year
- Issues: 1
- Pull requests: 15
- Average time to close issues: N/A
- Average time to close pull requests: N/A
- Issue authors: 1
- Pull request authors: 4
- Average comments per issue: 2.0
- Average comments per pull request: 0.0
- Merged pull requests: 0
- Bot issues: 0
- Bot pull requests: 9
Top Authors
Issue Authors
- gokusan92 (1)
Pull Request Authors
- dependabot[bot] (12)
- AryanVBW (4)
- TEch1Shop (1)
- imgbot[bot] (1)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- actions/setup-python v4 composite
- snok/install-poetry v1 composite
- actions/checkout v4 composite
- docker/build-push-action v5 composite
- docker/login-action v3 composite
- docker/metadata-action v5 composite
- actions/checkout v4 composite
- actions/checkout v4 composite
- actions/github-script v4 composite
- actions/setup-node v4 composite
- actions/checkout v4 composite
- actions/setup-node v3 composite
- google-github-actions/release-please-action v3 composite
- actions/stale v8 composite
- ./.github/workflows/actions/install_dependencies * composite
- actions/checkout v3 composite
- actions/upload-artifact v3 composite
- actions/checkout v2 composite
- 206 dependencies
- boto3 ^1.28.56
- chromadb ^0.4.13
- fastapi ^0.103.1
- injector ^0.21.0
- llama-index 0.9.3
- pypdf ^3.16.2
- python >=3.11,<3.12
- python-multipart ^0.0.6
- pyyaml ^6.0.1
- qdrant-client ^1.6.9
- watchdog ^3.0.0
