https://github.com/akaiko1/langchain_examples
Practical, minimal examples for building with LangChain
Science Score: 26.0%
This score indicates how likely this project is to be science-related, based on the following indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity (low similarity: 12.0%)
Keywords
Repository
Practical, minimal examples for building with LangChain
Basic Info
Statistics
- Stars: 0
- Watchers: 0
- Forks: 0
- Open Issues: 0
- Releases: 0
Topics
Metadata Files
README.md
LangChain Examples
Practical, minimal examples for building with LangChain and friends (LangGraph, FAISS, Ollama, etc.). Start locally, then adapt to your stack.
Contents
- Main Applications of LangChain
- RAG Demo (Ollama + FAISS)
- Workflows Demo (Map-Reduce, LCEL)
- Troubleshooting
Main Applications
- RAG (Retrieval-Augmented Generation): Answer questions over your docs, wikis, tickets, and codebases using chunking, embeddings, retrievers, re-ranking, and citations.
- Multi-step Workflows: Summarize, extract, translate, and classify at scale using deterministic chains and map-reduce patterns.
- Tool-Using Agents: Safely call APIs, databases, search, and internal tools with plan→act loops (often built with LangGraph for reliability).
- Structured Extraction: Produce typed JSON or fill schemas from semi-structured text via output parsers and validation.
- Conversational AI with Memory: Build chat experiences that remember context and can take actions through function/tool calling.
- Code & Data Assistants: Repo Q&A, refactoring helpers, SQL generation over warehouses/lakes, and “chat with your data.”
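The structured-extraction pattern above can be illustrated even without LangChain: ask the model for JSON matching a schema, then parse and validate before trusting it. A minimal sketch using only the standard library (the `Ticket` schema, `parse_ticket`, and the stubbed LLM reply are hypothetical stand-ins; a real pipeline would use an output parser and a live model call):

```python
import json
from dataclasses import dataclass

@dataclass
class Ticket:
    """Target schema for structured extraction (illustrative)."""
    customer: str
    priority: str

def parse_ticket(raw: str) -> Ticket:
    """Validate the model's JSON output against the Ticket schema."""
    data = json.loads(raw)
    missing = {"customer", "priority"} - data.keys()
    if missing:
        raise ValueError(f"missing fields: {missing}")
    return Ticket(customer=data["customer"], priority=data["priority"])

# A real pipeline would call the LLM here; we stub its JSON reply.
fake_llm_output = '{"customer": "ACME", "priority": "high"}'
ticket = parse_ticket(fake_llm_output)
print(ticket.priority)  # high
```

Validating before use is the whole point of the pattern: a malformed or incomplete model reply fails loudly at the parse step instead of corrupting downstream data.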
RAG Demo (Ollama + FAISS)
A minimal Retrieval-Augmented Generation example lives in RAG/ and lets you ask questions over local .md/.txt files using a Gemma model served by Ollama.
Prerequisites
- Install Ollama and pull the models:

  ```shell
  ollama pull gemma3:1b
  ollama pull nomic-embed-text
  ```

- Python 3.10+
Quickstart
- Create a virtual environment and install dependencies:

  ```shell
  python -m venv .venv && source .venv/bin/activate
  pip install -r RAG/requirements.txt
  ```

- Ingest the sample docs and build the FAISS index:

  ```shell
  python RAG/ingest.py
  ```

- Ask questions:

  ```shell
  python RAG/query.py "What are the main applications of LangChain?"
  ```

- Or run interactively (streams tokens):

  ```shell
  python RAG/app.py
  ```

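The core of the ingest step is splitting documents into overlapping chunks before embedding them. A plain-Python sketch of that idea (the `split_text` helper and sizes are illustrative; the real script would use `langchain-text-splitters`, which also respects sentence and paragraph boundaries):

```python
def split_text(text: str, chunk_size: int = 500, chunk_overlap: int = 50) -> list[str]:
    """Split text into fixed-size chunks that overlap, so context spanning
    a chunk boundary still appears intact in at least one chunk."""
    if chunk_overlap >= chunk_size:
        raise ValueError("overlap must be smaller than chunk size")
    step = chunk_size - chunk_overlap
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += step
    return chunks

doc = "".join(str(i % 10) for i in range(1200))
chunks = split_text(doc, chunk_size=500, chunk_overlap=50)
print(len(chunks))  # 3
```

Each chunk would then be embedded (e.g. with `nomic-embed-text`) and stored in the FAISS index.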
Configuration
- Models: `LLM_MODEL` (default `gemma3:1b`), `EMBED_MODEL` (default `nomic-embed-text`).
- Paths: `INDEX_DIR`, `DATA_DIR` (default to subfolders of `RAG/`).
- Ollama URL: set `OLLAMA_BASE_URL` if not `http://127.0.0.1:11434`.
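A minimal sketch of how these settings can be read with fallbacks to the documented defaults (in the real scripts, python-dotenv would load a `.env` file before these lookups):

```python
import os

# Defaults mirror the README; override via environment variables
# (or a .env file loaded with python-dotenv).
LLM_MODEL = os.getenv("LLM_MODEL", "gemma3:1b")
EMBED_MODEL = os.getenv("EMBED_MODEL", "nomic-embed-text")
OLLAMA_BASE_URL = os.getenv("OLLAMA_BASE_URL", "http://127.0.0.1:11434")

print(LLM_MODEL, EMBED_MODEL, OLLAMA_BASE_URL)
```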
Project Structure
```text
RAG/
  README.md         # RAG-specific docs
  requirements.txt  # LangChain, FAISS, dotenv
  ingest.py         # Build local FAISS index from data/
  query.py          # Query the index with Gemma via Ollama
  app.py            # Simple interactive CLI
  data/             # Sample .md/.txt files
  index/            # Generated FAISS index (gitignored)
```
Workflows Demo (Map-Reduce, LCEL)
Deterministic multi-step pipelines for summarization, structured extraction, translation, and classification using a local Gemma model via Ollama.
Prerequisites
- Install Ollama and pull the model:

  ```shell
  ollama pull gemma3:1b
  ```

- Python 3.10+
Quickstart (async)
- Create venv and install deps:
  ```shell
  python -m venv .venv && source .venv/bin/activate
  pip install -r Workflows/requirements.txt
  ```

- Try the examples with the included sample:

  ```shell
  python Workflows/summarize.py Workflows/data/multistep_sample.txt --concurrency 4
  python Workflows/extract.py Workflows/data/multistep_sample.txt --concurrency 4
  python Workflows/translate.py Workflows/data/multistep_sample.txt --lang es --concurrency 4
  python Workflows/classify.py Workflows/data/multistep_sample.txt --labels tutorial reference tips --concurrency 4
  ```

Configuration
- Models: `LLM_MODEL` (default `gemma3:1b`); set `OLLAMA_BASE_URL` for non-default hosts.
- Scripts support `--concurrency` to control async map parallelism.
- Adjust chunking via `--chunk_size`/`--chunk_overlap` where available.
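The map-reduce pattern these scripts rely on — process chunks concurrently, then combine the partial results — can be sketched with asyncio. Here `fake_summarize` is a hypothetical stand-in for the real Ollama call, and the semaphore plays the role of the `--concurrency` flag:

```python
import asyncio

async def fake_summarize(chunk: str) -> str:
    """Stand-in for an LLM call; a real script would await the model here."""
    await asyncio.sleep(0)  # simulate I/O latency
    return chunk[:20]

async def map_reduce(chunks: list[str], concurrency: int = 4) -> str:
    sem = asyncio.Semaphore(concurrency)  # bounds parallel model calls

    async def mapper(chunk: str) -> str:
        async with sem:
            return await fake_summarize(chunk)

    # Map step: summarize all chunks concurrently, up to the limit.
    partials = await asyncio.gather(*(mapper(c) for c in chunks))
    # Reduce step: combine partials (a real pipeline might re-summarize them).
    return " ".join(partials)

result = asyncio.run(map_reduce(["alpha " * 10, "beta " * 10], concurrency=2))
print(result)
```

The same shape applies to extraction, translation, and classification: only the per-chunk coroutine changes.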
Troubleshooting
- Ollama not reachable: ensure the daemon is running; set `OLLAMA_BASE_URL` if needed.
- No docs indexed: add `.md`/`.txt` files to `RAG/data/` and rerun `python RAG/ingest.py`.
- Import errors: verify you're in the venv and ran `pip install -r RAG/requirements.txt`.
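A quick way to check the first point programmatically: Ollama answers plain HTTP at its base URL when the daemon is up. A small sketch (the `ollama_reachable` helper is illustrative, not part of this repo):

```python
import urllib.request
import urllib.error

def ollama_reachable(base_url: str = "http://127.0.0.1:11434",
                     timeout: float = 2.0) -> bool:
    """Return True if an HTTP server (e.g. the Ollama daemon) answers at base_url."""
    try:
        with urllib.request.urlopen(base_url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

print(ollama_reachable())
```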
References
- LangChain Docs: https://python.langchain.com/
- LangGraph: https://langgraph.readthedocs.io/
- Ollama: https://ollama.ai/
Owner
- Name: Akaiko
- Login: Akaiko1
- Kind: user
- Repositories: 1
- Profile: https://github.com/Akaiko1
GitHub Events
Total
- Push event: 1
- Gollum event: 2
- Create event: 2
Last Year
- Push event: 1
- Gollum event: 2
- Create event: 2
Dependencies
RAG/requirements.txt:
- faiss-cpu ==1.8.0.post1
- langchain ==0.2.14
- langchain-community ==0.2.12
- langchain-ollama >=0.1.0
- langchain-text-splitters ==0.2.2
- pydantic >=2.7.0
- python-dotenv >=1.0.1

Workflows/requirements.txt:
- langchain ==0.2.14
- langchain-community ==0.2.12
- langchain-ollama >=0.1.0
- langchain-text-splitters ==0.2.2
- pydantic >=2.7.0
- python-dotenv >=1.0.1