https://github.com/faisalhakimi22/automated-customer-support-chatbot

A smart AI-powered chatbot designed to automate customer support, combining Rasa's conversation management with OpenAI's GPT models. The chatbot understands queries, provides relevant responses, and can be deployed across multiple platforms.


Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file
    Found codemeta.json file
  • .zenodo.json file
    Found .zenodo.json file
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity
    Low similarity (10.4%) to scientific vocabulary
Last synced: 5 months ago

Repository

A smart AI-powered chatbot designed to automate customer support, combining Rasa's conversation management with OpenAI's GPT models. The chatbot understands queries, provides relevant responses, and can be deployed across multiple platforms.

Basic Info
  • Host: GitHub
  • Owner: Faisalhakimi22
  • Language: Python
  • Default Branch: main
  • Homepage:
  • Size: 25.1 MB
Statistics
  • Stars: 0
  • Watchers: 1
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Created 12 months ago · Last pushed 8 months ago
Metadata Files
Readme

README.md

Automated Customer Support Chatbot

Badges: Rasa · Llama.cpp · LangChain · FastAPI · Streamlit · MIT License

A next-generation hybrid customer support chatbot combining the power of Rasa 3, LangChain, Llama.cpp, and FastAPI, with a sleek Streamlit chat UI. Deliver both structured and generative answers, leveraging local LLMs and Retrieval Augmented Generation (RAG), for exceptional customer experiences.


Features

  • Hybrid Intelligence: Rasa's robust dialogue + LLM-powered generative answers
  • Local LLaMA Backend: Fast, private, and cost-effective (supports unsloth.Q8_0.gguf and more)
  • RAG Support: LangChain + ChromaDB for document-grounded answers
  • FastAPI Microservice: Scalable LLM API for custom actions
  • Modern Streamlit UI: Chat-style, responsive, and cloud-ready
  • Flexible Deployment: Run locally, on your server, or deploy the UI to Streamlit Cloud
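
The RAG idea behind the "document-grounded answers" feature can be sketched in plain Python. This is a toy illustration only: keyword-overlap retrieval stands in for the project's ChromaDB embedding search, and the prompt is handed to the LLM step not shown here.

```python
# Toy sketch of the Retrieval Augmented Generation flow:
# retrieve the most relevant document chunks, then build a
# grounded prompt for the LLM. The real project uses LangChain
# + ChromaDB; naive word overlap stands in for embeddings here.

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Return the k docs sharing the most words with the query."""
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_prompt(query: str, context: list[str]) -> str:
    """Assemble a document-grounded prompt for the local LLM."""
    ctx = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 5 business days.",
    "Support hours are 9am to 5pm on weekdays.",
    "Shipping is free for orders over $50.",
]
print(build_prompt("How long do refunds take?", retrieve("How long do refunds take?", docs, k=1)))
```

Swapping the overlap score for vector similarity over embedded chunks is exactly what the LangChain + ChromaDB pieces do at scale.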

Architecture

```mermaid
flowchart TD
    A["User"] --> B["Streamlit Chat UI"]
    B --> C["Rasa 3 Server"]
    C -- "Custom Action (action_ask_gpt)" --> D["FastAPI LLM API"]
    D --> E["Llama.cpp LLM"]
    D --> F["LangChain RAG (ChromaDB, Docs)"]
```


Project Structure

```text
Automated-Customer-Support-Chatbot/
  actions/            # Custom Rasa actions (calls LLM API)
  data/               # Rasa NLU, stories, rules
  models/             # Trained Rasa models, LLaMA GGUF files
  results/            # Output, logs, etc.
  app.py              # Streamlit frontend
  llm_api.py          # FastAPI LLM+RAG backend
  start_chatbot.py    # Script to launch all services locally
  requirements.txt    # Python dependencies
  README.md           # This file
  ...                 # Other configs and scripts
```


How It Works

  1. User chats via the Streamlit web UI
  2. Streamlit sends messages to the Rasa 3 server (REST API)
  3. Rasa handles intent/entity recognition and dialogue
  4. For open-ended/knowledge queries, Rasa triggers action_ask_gpt:
    • Calls the FastAPI LLM API
    • API uses LangChain to retrieve context (RAG) and generate a response with Llama.cpp
  5. The answer flows back to the user via Rasa and Streamlit
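
On the wire, step 4 boils down to an HTTP call from the custom action to the LLM service. The sketch below shows that call with only the standard library; the `/generate` path and the `{"query": ...}` / `{"answer": ...}` schema are assumptions — check `llm_api.py` for the real contract, and note the real project wraps this in a `rasa_sdk` custom action class.

```python
# Sketch of what a custom action like action_ask_gpt does on the
# wire: POST the user's question to the FastAPI LLM service and
# return its generated answer. Endpoint path and JSON schema are
# assumed, not taken from the repository.
import json
import urllib.request

LLM_API_URL = "http://localhost:8000/generate"  # assumed default

def build_payload(question: str) -> bytes:
    """JSON body sent to the LLM API (assumed schema)."""
    return json.dumps({"query": question}).encode("utf-8")

def ask_llm(question: str, url: str = LLM_API_URL) -> str:
    req = urllib.request.Request(
        url,
        data=build_payload(question),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read())["answer"]
```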

Tech Stack

| Backend         | LLM & RAG | Frontend  | Deployment         |
|-----------------|-----------|-----------|--------------------|
| Rasa 3          | Llama.cpp | Streamlit | Local/Cloud        |
| FastAPI         | LangChain |           | Streamlit Cloud    |
| Python 3.8–3.10 | ChromaDB  |           | Docker (optional)  |


Quickstart

1. Clone the Repository

```sh
git clone https://github.com/your-username/Automated-Customer-Support-Chatbot.git
cd Automated-Customer-Support-Chatbot
```

2. Install Dependencies

```sh
pip install -r requirements.txt
```

3. Set Up Environment Variables

Create a `.env` file:

```env
OPENAI_API_KEY=your_openai_key
GGUF_MODEL_PATH=path_to_your_llama_model.gguf
```
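
A minimal stdlib-only loader shows what reading that file amounts to; the project itself may use `python-dotenv` or similar instead, and this sketch skips quoting and export rules.

```python
# Minimal .env loader (standard library only), mirroring the
# basic behavior of python-dotenv: parse KEY=VALUE lines into
# os.environ, skipping blanks and comments.
import os

def load_env(path: str = ".env") -> None:
    """Parse KEY=VALUE lines into os.environ (no quoting rules)."""
    with open(path) as fh:
        for line in fh:
            line = line.strip()
            if line and not line.startswith("#") and "=" in line:
                key, _, value = line.partition("=")
                os.environ.setdefault(key.strip(), value.strip())
```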

4. Train and Start Rasa

```sh
rasa train
rasa run --enable-api
```

5. Start the LLM API

```sh
python llm_api.py
```

6. Start the Streamlit Frontend

```sh
streamlit run app.py
```

Or use start_chatbot.py to launch all services together (locally).
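
A launcher like that typically just spawns the three services as subprocesses. The sketch below is an assumption about how `start_chatbot.py` might work, not its actual contents; ports and flags may differ.

```python
# Hypothetical sketch of a start_chatbot.py-style launcher:
# spawn the Rasa server, the LLM API, and the Streamlit UI as
# child processes and hand back their Popen handles.
import subprocess

SERVICES = [
    ["rasa", "run", "--enable-api", "--port", "5005"],
    ["python", "llm_api.py"],
    ["streamlit", "run", "app.py"],
]

def launch(commands=SERVICES):
    """Spawn each service; caller is responsible for .terminate()."""
    return [subprocess.Popen(cmd) for cmd in commands]
```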


Deploying on Streamlit Cloud

  • Only the Streamlit frontend is deployed on Streamlit Cloud.
  • Rasa and the LLM API must run on a separate server (local, VM, or cloud).
  • Set the Rasa server URL in your Streamlit Cloud environment variables.

Customization

  • Add Documents: Place PDFs, Markdown, or text files in the docs/ folder for RAG.
  • NLU & Stories: Edit data/nlu.yml, data/stories.yml, and data/rules.yml for intents and flows.
  • Custom Actions: Extend logic in actions/.
  • Model: Swap out LLaMA models by changing GGUF_MODEL_PATH.
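
For instance, a rule in `data/rules.yml` can route an open-ended intent to the LLM-backed action; the intent name below is hypothetical and would need a matching entry in `data/nlu.yml` and the domain:

```yaml
version: "3.1"
rules:
  - rule: Answer open-ended questions with the LLM
    steps:
      - intent: ask_knowledge_question   # hypothetical intent name
      - action: action_ask_gpt
```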

Troubleshooting

  • See TROUBLESHOOTING.md for common issues and solutions.
  • Key tips:
    • Ensure Rasa and LLM API are running and accessible
    • Check environment variables and API keys
    • Review logs for errors

License

MIT License


Made with ❤️ by Faisal Hakimi

Owner

  • Name: Faisal Hakimi
  • Login: Faisalhakimi22
  • Kind: user
  • Location: Pakistan

Computer Science | Aspiring Data Analyst | AI Enthusiast | Machine Learning

GitHub Events

Total
  • Push event: 17
  • Create event: 2
Last Year
  • Push event: 17
  • Create event: 2