https://github.com/akaiko1/langchain_examples

Practical, minimal examples for building with LangChain

Science Score: 26.0%

This score indicates how likely this project is to be science-related based on various indicators:

  • CITATION.cff file
  • codemeta.json file: found
  • .zenodo.json file: found
  • DOI references
  • Academic publication links
  • Academic email domains
  • Institutional organization owner
  • JOSS paper metadata
  • Scientific vocabulary similarity: low (12.0%)

Keywords

langchain llm llms ollama rag
Last synced: 6 months ago

Repository

Practical, minimal examples for building with LangChain

Basic Info
  • Host: GitHub
  • Owner: Akaiko1
  • Language: Python
  • Default Branch: master
  • Homepage:
  • Size: 18.6 KB
Statistics
  • Stars: 0
  • Watchers: 0
  • Forks: 0
  • Open Issues: 0
  • Releases: 0
Topics
langchain llm llms ollama rag
Created 6 months ago · Last pushed 6 months ago
Metadata Files
  • Readme: README.md

LangChain Examples

Practical, minimal examples for building with LangChain and friends (LangGraph, FAISS, Ollama, etc.). Start locally, then adapt to your stack.

Badges: Python 3.10+ · LangChain 0.2.x · Ollama

Contents

  • Main Applications of LangChain
  • RAG Demo (Ollama + FAISS)
  • Workflows Demo (Map-Reduce, LCEL)
  • Troubleshooting

Main Applications

  • RAG (Retrieval-Augmented Generation): Answer questions over your docs, wikis, tickets, and codebases using chunking, embeddings, retrievers, re-ranking, and citations.
  • Multi-step Workflows: Summarize, extract, translate, and classify at scale using deterministic chains and map-reduce patterns.
  • Tool-Using Agents: Safely call APIs, databases, search, and internal tools with plan→act loops (often built with LangGraph for reliability).
  • Structured Extraction: Produce typed JSON or fill schemas from semi-structured text via output parsers and validation.
  • Conversational AI with Memory: Build chat experiences that remember context and can take actions through function/tool calling.
  • Code & Data Assistants: Repo Q&A, refactoring helpers, SQL generation over warehouses/lakes, and “chat with your data.”
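Structured extraction in particular depends on validating model output before it is used downstream. A minimal sketch of that idea using only the standard library (the `Ticket` schema and its fields are illustrative, not from this repo, which uses pydantic and LangChain output parsers):

```python
import json
from dataclasses import dataclass

@dataclass
class Ticket:
    title: str
    priority: str

def parse_ticket(raw: str) -> Ticket:
    # Validate the model's JSON output before use; json.loads raises on
    # malformed output, and the key lookups raise on missing fields.
    data = json.loads(raw)
    return Ticket(title=str(data["title"]), priority=str(data["priority"]))

print(parse_ticket('{"title": "Login broken", "priority": "high"}'))
```

In practice a failed parse would trigger a retry or a re-prompt rather than a crash.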

RAG Demo (Ollama + FAISS)

A minimal Retrieval-Augmented Generation example lives in RAG/ and lets you ask questions over local .md/.txt files using a Gemma model served by Ollama.

Prerequisites

  • Install Ollama and pull models:
    • ollama pull gemma3:1b
    • ollama pull nomic-embed-text
  • Python 3.10+

Quickstart

  • Create a virtual environment and install deps:
    • python -m venv .venv && source .venv/bin/activate
    • pip install -r RAG/requirements.txt
  • Ingest sample docs and build the FAISS index:
    • python RAG/ingest.py
  • Ask questions:
    • python RAG/query.py "What are the main applications of LangChain?"
    • or interactive: python RAG/app.py (streams tokens)
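Under the hood, ingestion embeds each document and querying retrieves by vector similarity. A toy sketch of the retrieval step with hand-made 3-dimensional vectors (real embeddings come from `nomic-embed-text`, and FAISS replaces this linear scan):

```python
import math

def cosine(a, b):
    # Cosine similarity between two vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "index": document id -> embedding vector.
index = {"intro.md": [1.0, 0.1, 0.0], "faq.txt": [0.0, 1.0, 0.2]}
query = [0.9, 0.2, 0.0]

# Retrieve the document whose embedding is closest to the query.
best = max(index, key=lambda doc: cosine(query, index[doc]))
print(best)  # intro.md
```

The retrieved chunks are then passed to the LLM as context for answering the question.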

Configuration

  • Models: LLM_MODEL (default gemma3:1b), EMBED_MODEL (default nomic-embed-text).
  • Paths: INDEX_DIR, DATA_DIR (default to subfolders of RAG/).
  • Ollama URL: OLLAMA_BASE_URL if not http://127.0.0.1:11434.
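These settings follow the usual environment-variable-with-default pattern; a sketch of how they can be read (variable names match the list above, defaults assumed from it):

```python
import os

# Fall back to the documented defaults when the variables are unset.
LLM_MODEL = os.environ.get("LLM_MODEL", "gemma3:1b")
EMBED_MODEL = os.environ.get("EMBED_MODEL", "nomic-embed-text")
OLLAMA_BASE_URL = os.environ.get("OLLAMA_BASE_URL", "http://127.0.0.1:11434")

print(LLM_MODEL, EMBED_MODEL, OLLAMA_BASE_URL)
```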

Project Structure

RAG/
  README.md          # RAG-specific docs
  requirements.txt   # LangChain, FAISS, dotenv
  ingest.py          # Build local FAISS index from data/
  query.py           # Query the index with Gemma via Ollama
  app.py             # Simple interactive CLI
  data/              # Sample .md/.txt files
  index/             # Generated FAISS index (gitignored)

Workflows Demo (Map-Reduce, LCEL)

Deterministic multi-step pipelines for summarization, structured extraction, translation, and classification using a local Gemma model via Ollama.

Prerequisites

  • ollama pull gemma3:1b
  • Python 3.10+

Quickstart (async)

  • Create venv and install deps:
    • python -m venv .venv && source .venv/bin/activate
    • pip install -r Workflows/requirements.txt
  • Try examples with the included sample:
    • python Workflows/summarize.py Workflows/data/multistep_sample.txt --concurrency 4
    • python Workflows/extract.py Workflows/data/multistep_sample.txt --concurrency 4
    • python Workflows/translate.py Workflows/data/multistep_sample.txt --lang es --concurrency 4
    • python Workflows/classify.py Workflows/data/multistep_sample.txt --labels tutorial reference tips --concurrency 4
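The map-reduce pattern behind these scripts can be sketched with plain `asyncio`: chunks are processed concurrently under a semaphore (the `--concurrency` knob), then the partial results are combined. The `summarize_chunk` stub stands in for a real Ollama call:

```python
import asyncio

async def summarize_chunk(chunk: str) -> str:
    # Placeholder for an LLM call via Ollama; here we just take the
    # first sentence as a fake "summary".
    await asyncio.sleep(0)
    return chunk.split(".")[0]

async def map_reduce(chunks: list[str], concurrency: int = 4) -> str:
    sem = asyncio.Semaphore(concurrency)  # bounds parallel LLM calls

    async def bounded(chunk: str) -> str:
        async with sem:
            return await summarize_chunk(chunk)

    # Map: summarize chunks concurrently; Reduce: join the partials.
    partials = await asyncio.gather(*(bounded(c) for c in chunks))
    return " / ".join(partials)

result = asyncio.run(
    map_reduce(["First point. Detail.", "Second point. More."], concurrency=2)
)
print(result)  # First point / Second point
```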

Configuration

  • LLM_MODEL (default gemma3:1b), OLLAMA_BASE_URL for non-default hosts.
  • Scripts support --concurrency to control async map parallelism.
  • Adjust chunking via --chunk_size/--chunk_overlap where available.
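The effect of `--chunk_size`/`--chunk_overlap` can be illustrated with a naive fixed-window splitter (the actual scripts use LangChain's text splitters, which respect sentence and paragraph boundaries):

```python
def split_text(text: str, chunk_size: int = 20, chunk_overlap: int = 5):
    # Each chunk starts chunk_size - chunk_overlap characters after the
    # previous one, so consecutive chunks share chunk_overlap characters.
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

chunks = split_text("a" * 50, chunk_size=20, chunk_overlap=5)
print(len(chunks), [len(c) for c in chunks])  # 4 [20, 20, 20, 5]
```

Larger overlap reduces the chance that a fact is cut in half at a chunk boundary, at the cost of more tokens per run.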

Troubleshooting

  • Ollama not reachable: ensure the daemon is running; set OLLAMA_BASE_URL.
  • No docs indexed: add .md/.txt files to RAG/data/ and rerun python RAG/ingest.py.
  • Import errors: verify you’re in the venv and ran pip install -r RAG/requirements.txt.

Owner

  • Name: Akaiko
  • Login: Akaiko1
  • Kind: user

GitHub Events

Total
  • Push event: 1
  • Gollum event: 2
  • Create event: 2
Last Year
  • Push event: 1
  • Gollum event: 2
  • Create event: 2

Dependencies

RAG/requirements.txt pypi
  • faiss-cpu ==1.8.0.post1
  • langchain ==0.2.14
  • langchain-community ==0.2.12
  • langchain-ollama >=0.1.0
  • langchain-text-splitters ==0.2.2
  • pydantic >=2.7.0
  • python-dotenv >=1.0.1
Workflows/requirements.txt pypi
  • langchain ==0.2.14
  • langchain-community ==0.2.12
  • langchain-ollama >=0.1.0
  • langchain-text-splitters ==0.2.2
  • pydantic >=2.7.0
  • python-dotenv >=1.0.1