https://github.com/pgalko/bambooai
A Python library powered by Language Models (LLMs) for conversational data discovery and analysis.
Science Score: 26.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file (found)
- ✓ .zenodo.json file (found)
- ○ DOI references
- ○ Academic publication links
- ○ Committers with academic emails
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (16.9%) to scientific vocabulary
Keywords
Repository
A Python library powered by Language Models (LLMs) for conversational data discovery and analysis.
Basic Info
Statistics
- Stars: 667
- Watchers: 16
- Forks: 66
- Open Issues: 12
- Releases: 63
Topics
Metadata Files
README.md
BambooAI

https://bambooai.org
BambooAI is an open-source library that enables natural language-based data analysis using Large Language Models (LLMs). It works with local datasets and can also fetch data from external sources and APIs.
Table of Contents
- Overview
- Features
- Demo Videos
- Installation
- Quick Start
- How It Works
- Configuration
- Auxiliary Datasets
- Dataframe Ontology (Semantic Memory)
- Vector DB (Episodic Memory)
- Usage Examples
- Web Application Setup
- Model Support
- Environment Variables
- Logging
- Performance Comparison
- Contributing
Overview
BambooAI is an experimental tool that makes data analysis more accessible by allowing users to interact with their data through natural language conversations. It's designed to:
- Process natural language queries about datasets
- Generate and execute Python code for analysis and visualization
- Help users derive insights without extensive coding knowledge
- Augment capabilities of data analysts at all levels
- Streamline data analysis workflows
Features
- Natural language interface for data analysis
- Web UI and Jupyter notebook support
- Support for local and external datasets
- Integration with internet searches and external APIs
- User feedback during streaming
- Optional planning agent for complex tasks
- Integration of custom ontologies
- Code generation for data analysis and visualization
- Self-healing/error correction
- Custom code edits and code execution
- Knowledge base integration via vector database
- Workflow saving and follow-ups
- In-context and multimodal queries
Demo Videos
Machine Learning Example (Jupyter Notebook)
A demonstration of creating a machine learning model to predict Titanic passenger survival:
https://github.com/user-attachments/assets/59ef810c-80d8-4ef1-8edf-82ba64178b85
Sports Data Analysis (Web UI)
Example of various sports data analysis queries:
https://github.com/user-attachments/assets/7b9c9cd6-56e3-46ee-a6c6-c32324a0c5ef
Installation
```bash
pip install bambooai
```
Or alternatively clone the repo and install the requirements:
```bash
git clone https://github.com/pgalko/BambooAI.git
pip install -r requirements.txt
```
Quick Start
Try it out on a basic example in Google Colab:
Basic Example
1. Install BambooAI:
```bash
pip install bambooai
```
2. Configure environment:
```bash
cp .env.example .env
# Edit .env with your settings
```
3. Configure agents/models:
```bash
cp LLM_CONFIG_sample.json LLM_CONFIG.json
# Edit LLM_CONFIG.json with your desired combination of agents, models and parameters
```
4. Run:
```python
import pandas as pd
from bambooai import BambooAI
import plotly.io as pio

pio.renderers.default = 'jupyterlab'

df = pd.read_csv('titanic.csv')
bamboo = BambooAI(df=df, planning=True, vector_db=False, search_tool=True)
bamboo.pd_agent_converse()
```
How It Works
BambooAI operates through six key steps (a simplified sketch follows the step descriptions):
Initiation
- Launches with a user question, or prompts for one
- Continues in a conversation loop until exit
Task Routing
- Classifies questions using LLM
- Routes to appropriate handler (text response or code generation)
User Feedback
- If the instruction is vague or unclear, the model will pause and ask the user for feedback
- If the model encounters any ambiguity during the solving process, it will pause and ask for direction, offering a few options
Dynamic Prompt Build
- Evaluates data requirements
- Asks for feedback or uses tools if more context is needed
- Formulates analysis plan
- Performs semantic search for similar questions
- Generates code using selected LLM
Debugging and Execution
- Executes generated code
- Handles errors with LLM-based correction
- Retries until successful or limit reached
Results and Knowledge Base
- Ranks answers for quality
- Stores high-quality solutions in vector database
- Presents formatted results or visualizations
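To make the loop concrete, here is a heavily simplified, runnable sketch of the six steps. All helpers are illustrative stubs standing in for LLM-backed agents; none of this is BambooAI's actual internal API.

```python
# Illustrative stubs only -- each would be an LLM-backed agent in BambooAI.
MAX_RETRIES = 3        # retry limit for error correction (step 5)
RANK_THRESHOLD = 6     # minimum rank for knowledge-base storage (step 6)

def classify(query):                 # step 2: task routing
    return "code"                    # stub: route everything to code generation

def build_plan(query):               # steps 3-4: feedback + dynamic prompt build
    return f"plan for: {query}"

def generate_code(plan):             # step 4: code generation
    return "result = 42"

def execute(code):                   # step 5: execute the generated snippet
    env = {}
    try:
        exec(code, env)
        return True, env.get("result")
    except Exception as exc:
        return False, str(exc)

def correct(code, error):            # step 5: LLM-based error correction (stub)
    return code

def review(result):                  # step 6: rank the answer quality (stub)
    return 7

knowledge_base = []                  # step 6: stand-in for the vector DB

def converse(query):                 # step 1: initiation (single turn)
    if classify(query) == "text":
        return f"text answer to: {query}"
    plan = build_plan(query)
    code = generate_code(plan)
    for _ in range(MAX_RETRIES):
        ok, result = execute(code)
        if ok:
            break
        code = correct(code, result)
    if review(result) > RANK_THRESHOLD:
        knowledge_base.append({"plan": plan, "code": code})
    return result

print(converse("What is the mean heart rate?"))  # -> 42
```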
Flow Chart

Configuration
Parameters
BambooAI accepts the following initialization parameters:
```python
bamboo = BambooAI(
    df=None,                     # DataFrame to analyze
    auxiliary_datasets=None,     # List of paths to auxiliary datasets
    max_conversations=4,         # Number of conversation pairs to keep in memory
    search_tool=False,           # Enable internet search capability
    planning=False,              # Enable planning agent for complex tasks
    webui=False,                 # Run as web application
    vector_db=False,             # Enable vector database for knowledge storage
    df_ontology=False,           # Use custom dataframe ontology
    exploratory=True,            # Enable expert selection for query handling
    custom_prompt_file=None      # Enable the use of custom/modified prompt templates
)
```
Detailed Parameter Descriptions:

- `df` (pd.DataFrame, optional): Input dataframe for analysis. If not provided, BambooAI will attempt to source data from the internet or auxiliary datasets.
- `auxiliary_datasets` (list, default=None): List of paths to auxiliary datasets. These complement the main dataframe, are incorporated into the solution as needed, and are pulled when the code executes.
- `max_conversations` (int, default=4): Number of user-assistant conversation pairs to maintain in context. Affects context window and token usage.
- `search_tool` (bool, default=False): Enables internet search capabilities. Requires appropriate API keys when enabled.
- `planning` (bool, default=False): Enables the Planning agent for complex tasks. Breaks down tasks into manageable steps and improves solution quality for complex queries.
- `webui` (bool, default=False): Runs BambooAI as a web application, using a Flask API for the web interface.
- `vector_db` (bool, default=False): Enables the vector database for knowledge storage and semantic search. Stores high-quality solutions for future reference. Requires a Pinecone API key. Supports two embeddings models: `text-embedding-3-small` (OpenAI) and `all-MiniLM-L6-v2` (HF).
- `df_ontology` (str, default=None): Uses a custom dataframe ontology for improved understanding. Requires an OWL ontology as a `.ttl` file; the parameter takes the path to the TTL file. Significantly improves solution quality.
- `exploratory` (bool, default=True): Enables expert selection for query handling, choosing between Research Specialist and Data Analyst roles.
- `custom_prompt_file` (str, default=None): Enables users to provide custom prompt templates. Requires the path to the YAML file containing the templates.
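As an illustration of how these parameters combine, here is one plausible full-featured initialization; the file paths are placeholders, not files shipped with the library:

```python
import pandas as pd
from bambooai import BambooAI

df = pd.read_csv('activities.csv')        # placeholder main dataset

bamboo = BambooAI(
    df=df,
    auxiliary_datasets=['wellness.csv'],  # placeholder supporting data
    max_conversations=6,                  # larger context, higher token usage
    search_tool=True,                     # requires search API keys in .env
    planning=True,                        # enable the Planning agent
    vector_db=True,                       # requires PINECONE_API_KEY
    df_ontology='ontology.ttl',           # path to an OWL ontology (Turtle)
)
bamboo.pd_agent_converse()
```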
Agent and Model Configuration
BambooAI uses a multi-agent system where different specialized agents handle specific aspects of the data analysis process. Each agent can be configured to use a different LLM model and parameters based on its specific requirements.
Configuration Structure
The LLM configuration is stored in LLM_CONFIG.json. Here's the complete configuration structure:
```json
{
"agent_configs": [
{"agent": "Expert Selector", "details": {"model": "gpt-4.1", "provider":"openai","max_tokens": 2000, "temperature": 0}},
{"agent": "Analyst Selector", "details": {"model": "claude-3-7-sonnet-20250219", "provider":"anthropic","max_tokens": 2000, "temperature": 0}},
{"agent": "Theorist", "details": {"model": "gemini-2.5-pro-preview-03-25", "provider":"gemini","max_tokens": 4000, "temperature": 0}},
{"agent": "Dataframe Inspector", "details": {"model": "gemini-2.0-flash", "provider":"gemini","max_tokens": 8000, "temperature": 0}},
{"agent": "Planner", "details": {"model": "gemini-2.5-pro-preview-03-25", "provider":"gemini","max_tokens": 8000, "temperature": 0}},
{"agent": "Code Generator", "details": {"model": "claude-3-5-sonnet-20241022", "provider":"anthropic","max_tokens": 8000, "temperature": 0}},
{"agent": "Error Corrector", "details": {"model": "claude-3-5-sonnet-20241022", "provider":"anthropic","max_tokens": 8000, "temperature": 0}},
{"agent": "Reviewer", "details": {"model": "gemini-2.5-pro-preview-03-25", "provider":"gemini","max_tokens": 8000, "temperature": 0}},
{"agent": "Solution Summarizer", "details": {"model": "gemini-2.5-flash-preview-04-17", "provider":"gemini","max_tokens": 4000, "temperature": 0}},
{"agent": "Google Search Executor", "details": {"model": "gemini-2.5-flash-preview-04-17", "provider":"gemini","max_tokens": 4000, "temperature": 0}},
{"agent": "Google Search Summarizer", "details": {"model": "gemini-2.5-flash-preview-04-17", "provider":"gemini","max_tokens": 4000, "temperature": 0}}
],
"model_properties": {
"gpt-4o": {"capability":"base","multimodal":"true", "templ_formating":"text", "prompt_tokens": 0.0025, "completion_tokens": 0.010},
"gpt-4.1": {"capability":"base","multimodal":"true", "templ_formating":"text", "prompt_tokens": 0.002, "completion_tokens": 0.008},
"gpt-4o-mini": {"capability":"base", "multimodal":"true","templ_formating":"text", "prompt_tokens": 0.00015, "completion_tokens": 0.0006},
"gpt-4.1-mini": {"capability":"base", "multimodal":"true","templ_formating":"text", "prompt_tokens": 0.0004, "completion_tokens": 0.0016},
"o1-mini": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.003, "completion_tokens": 0.012},
"o3-mini": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.0011, "completion_tokens": 0.0044},
"o1": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.015, "completion_tokens": 0.06},
"gemini-2.0-flash": {"capability":"base", "multimodal":"true","templ_formating":"text", "prompt_tokens": 0.0001, "completion_tokens": 0.0004},
"gemini-2.5-flash-preview-04-17": {"capability":"reasoning", "multimodal":"true","templ_formating":"text", "prompt_tokens": 0.00015, "completion_tokens": 0.0035},
"gemini-2.0-flash-thinking-exp-01-21": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.0, "completion_tokens": 0.0},
"gemini-2.5-pro-exp-03-25": {"capability":"reasoning", "multimodal":"true","templ_formating":"text", "prompt_tokens": 0.0, "completion_tokens": 0.0},
"gemini-2.5-pro-preview-03-25": {"capability":"reasoning", "multimodal":"true","templ_formating":"text", "prompt_tokens": 0.00125, "completion_tokens": 0.01},
"claude-3-5-haiku-20241022": {"capability":"base", "multimodal":"true","templ_formating":"xml", "prompt_tokens": 0.0008, "completion_tokens": 0.004},
"claude-3-5-sonnet-20241022": {"capability":"base", "multimodal":"true","templ_formating":"xml", "prompt_tokens": 0.003, "completion_tokens": 0.015},
"claude-3-7-sonnet-20250219": {"capability":"base", "multimodal":"true","templ_formating":"xml", "prompt_tokens": 0.003, "completion_tokens": 0.015},
"open-mixtral-8x7b": {"capability":"base", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.0007, "completion_tokens": 0.0007},
"mistral-small-latest": {"capability":"base", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.001, "completion_tokens": 0.003},
"codestral-latest": {"capability":"base", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.001, "completion_tokens": 0.003},
"open-mixtral-8x22b": {"capability":"base", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.002, "completion_tokens": 0.006},
"mistral-large-2407": {"capability":"base", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.003, "completion_tokens": 0.009},
"deepseek-chat": {"capability":"base", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.00014, "completion_tokens": 0.00028},
"deepseek-reasoner": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.00055, "completion_tokens": 0.00219},
"/mnt/c/Users/pgalk/vllm/models/DeepSeek-R1-Distill-Qwen-14B": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.00, "completion_tokens": 0.00},
"deepseek-r1-distill-llama-70b": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.00, "completion_tokens": 0.00},
"deepseek-r1:32b": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.00, "completion_tokens": 0.00},
"deepseek-ai/deepseek-r1": {"capability":"reasoning", "multimodal":"false","templ_formating":"text", "prompt_tokens": 0.00, "completion_tokens": 0.00}
}
}
```
The LLM_CONFIG.json configuration file needs to be located in the BambooAI working directory, e.g. /Users/palogalko/AI_Experiments/Bamboo_AI/web_app/LLM_CONFIG.json, and all API keys for the specified models need to be present in the .env file, also located in the working directory.
The above combination of agents/models is the most performant according to our tests as of 22 Apr 2025 using sports and performance datasets. I would strongly encourage you to experiment with these settings to see what combination best suits your particular use case.
Agent Roles
- Expert Selector: Determines the best expert type for handling the query
- Analyst Selector: Selects specific analysis approach
- Theorist: Provides theoretical background and methodology
- Dataframe Inspector: Analyzes and understands data structure. (Requires ontology file)
- Planner: Creates step-by-step analysis plans
- Code Generator: Writes Python code for analysis
- Error Corrector: Debugs and fixes code issues
- Reviewer: Evaluates solution quality and adjusts the plans accordingly
- Solution Summarizer: Creates concise result summaries
- Google Search Executor: Optimizes and executes search queries
- Google Search Summarizer: Synthesizes search results
Configuration Fields

- `agent_configs`: Agents configuration
  - `agent`: The type of agent
  - `details`:
    - `model`: Model identifier
    - `provider`: Service provider (openai, anthropic, gemini, etc.)
    - `max_tokens`: Maximum tokens for completion
    - `temperature`: Creativity parameter (0-1)
- `model_properties`: Model properties
  - `capability`: Base or Reasoning model
  - `multimodal`: Multimodal or text only
  - `templ_formating`: Prompt formatting, XML or Text
  - `prompt_tokens`: Cost of input (per 1K tokens)
  - `completion_tokens`: Cost of output (per 1K tokens)
If you assign a model to an agent in agent_configs, make sure that the model is defined in model_properties; a quick consistency check is sketched below.
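For example, a short sanity-check script along the following lines can catch such mismatches before a run (a sketch, assuming LLM_CONFIG.json sits in the working directory as described above):

```python
import json

# Load the agent/model configuration from the working directory
with open("LLM_CONFIG.json") as f:
    config = json.load(f)

defined_models = set(config["model_properties"])
for agent in config["agent_configs"]:
    model = agent["details"]["model"]
    if model not in defined_models:
        print(f"Agent '{agent['agent']}' references undefined model '{model}'")
    else:
        # Illustrate the per-1K-token pricing fields: estimated cost of a
        # call with a 2K-token prompt and a 1K-token completion
        props = config["model_properties"][model]
        cost = 2 * props["prompt_tokens"] + 1 * props["completion_tokens"]
        print(f"{agent['agent']}: ~${cost:.4f} per 2K-in/1K-out call")
```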
Example Alternative Configurations

Using Ollama:
```json
{
  "agent": "Planner",
  "details": {
    "model": "llama3:70b",
    "provider": "ollama",
    "max_tokens": 2000,
    "temperature": 0
  }
}
```
Using VLLM:
```json
{
  "agent": "Code Generator",
  "details": {
    "model": "/path/to/model/DeepSeek-R1-Distill-14B",
    "provider": "vllm",
    "max_tokens": 2000,
    "temperature": 0
  }
}
```
Auxiliary Datasets
BambooAI supports working with multiple datasets simultaneously, allowing for more comprehensive and contextual analysis. The auxiliary datasets feature enables you to reference and incorporate additional data sources alongside your primary dataset.
When you ask questions that might benefit from auxiliary data, BambooAI will:
- Analyze which datasets contain relevant information
- Load only the necessary datasets
- Join or cross-reference the data as needed
- Generate and execute code that properly handles the multi-dataset operations
How to Use
```python
from bambooai import BambooAI
import pandas as pd

# Load primary dataset
main_df = pd.read_csv('main_data.csv')

# Specify paths to auxiliary datasets
auxiliary_paths = [
    'path/to/supporting_data_1.csv',
    'path/to/supporting_data_2.parquet',
    'path/to/reference_data.csv'
]

# Initialize BambooAI with auxiliary datasets
bamboo = BambooAI(
    df=main_df,
    auxiliary_datasets=auxiliary_paths,
)
```
Dataframe Ontology (Semantic Memory)
BambooAI supports custom ontologies to ground the agents within the specific domain of interest.
How to Use
```python
from bambooai import BambooAI
import pandas as pd

# Initialize with ontology file path
bamboo = BambooAI(
    df=your_dataframe,
    df_ontology="path/to/ontology.ttl"
)
```
What It Does
The ontology file defines your data structure using RDF/OWL notation, including:
- Object properties (relationships)
- Data properties (attributes)
- Classes (data types)
- Individuals (specific instances)
This helps BambooAI understand complex data relationships and generate more accurate code.
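For a sense of what such a file contains, here is a minimal hypothetical Turtle ontology (two classes, a data property, and an object property), parsed with rdflib purely to check that it is well-formed; rdflib is not a BambooAI requirement:

```python
from rdflib import Graph  # pip install rdflib (used here only for validation)

ttl = """
@prefix :     <http://example.org/fitness#> .
@prefix owl:  <http://www.w3.org/2002/07/owl#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .

:Activity    a owl:Class .                    # class (data type)
:Athlete     a owl:Class .
:heartRate   a owl:DatatypeProperty ;         # data property (attribute)
             rdfs:domain :Activity ; rdfs:range xsd:integer .
:performedBy a owl:ObjectProperty ;           # object property (relationship)
             rdfs:domain :Activity ; rdfs:range :Athlete .
"""

g = Graph()
g.parse(data=ttl, format="turtle")
print(len(g), "triples parsed")  # confirms the ontology is well-formed
```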
Vector DB (Episodic Memory)
BambooAI supports integration with a vector database. The main purpose is to allow storage and retrieval of successful analyses, allowing the system to evolve and learn over time.
How to Use
```python
from bambooai import BambooAI
import pandas as pd

# Initialize with vector DB enabled
bamboo = BambooAI(
    df=your_dataframe,
    vector_db=True
)
```
Requires an account with [Pinecone (free)](https://app.pinecone.io/), and the API key stored in the `.env`:
```
PINECONE_API_KEY=<YOUR API KEY HERE>
```
What It Does
Upon successful analysis completion, the user has the ability to rank and store the solution.
- The intent of highly ranked solutions (>6) will be vectorised using the selected model and stored in the Pinecone vector DB together with the solution metadata
- Metadata: Data Model, Plan, Code, Rank
- When a new task arrives, the system will query the vector index and retrieve the closest match that is above the similarity threshold (0.8)
- The saved solutions serve as a reference for subsequent similar tasks, guiding the relevant agents through the solving process
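Conceptually, the retrieval step behaves like the sketch below: cosine similarity between the new task's intent vector and the stored vectors, with the 0.8 cut-off. This illustrates the mechanism only; it is not BambooAI's actual Pinecone code.

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Stored solutions: (intent vector, metadata) pairs, as described above
store = [
    (np.array([0.9, 0.1, 0.0]), {"plan": "percentile analysis", "rank": 8}),
    (np.array([0.0, 1.0, 0.2]), {"plan": "trend chart", "rank": 7}),
]

def retrieve(query_vec, threshold=0.8):
    best_vec, best_meta = max(store, key=lambda item: cosine(query_vec, item[0]))
    return best_meta if cosine(query_vec, best_vec) >= threshold else None

print(retrieve(np.array([1.0, 0.15, 0.0])))  # close match -> first entry's metadata
```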
Usage Examples
Interactive Mode (Jupyter Notebook or CLI)
```python
import pandas as pd
from bambooai import BambooAI
import plotly.io as pio

pio.renderers.default = 'jupyterlab'

df = pd.read_csv('training_activity_data.csv')
aux_data = [
    'path/to/wellness_data.csv',
    'path/to/nutrition_data.parquet',
]

bamboo = BambooAI(df=df, auxiliary_datasets=aux_data, search_tool=True, planning=True)
bamboo.pd_agent_converse()
```
Single Query Mode (Jupyter Notebook or CLI)
```python
bamboo.pd_agent_converse("Calculate 30, 50, 75 and 90 percentiles of the heart rate column")
```
Web Application Setup
Web UI screenshot (Interactive Workflow Map):
Option 1: Using Docker (Recommended)
BambooAI can be easily deployed using Docker, which provides a consistent environment regardless of your operating system or local setup.
For detailed Docker setup and usage instructions, please refer to our Docker Setup Wiki.
The Docker approach offers several advantages:
- No need to manage Python dependencies locally
- Consistent environment across different machines
- Easy configuration through volume mounting
- Support for both repository-based and standalone deployments
- Sandboxed code execution for enhanced security
Prerequisites:
- Docker installed on your system
- Docker Compose installed on your system
Option 2: Using pip package

1. Install BambooAI:
```bash
pip install bambooai
```
2. Download the web_app folder from the repository
3. Configure environment:
```bash
cp .env.example <path_to_web_app>/.env
# Edit .env with your settings
```
4. Configure LLM agents, models and parameters:
```bash
cp LLM_CONFIG_sample.json <path_to_web_app>/LLM_CONFIG.json
```
- Edit LLM_CONFIG.json in the web_app directory
- Configure each agent with the desired model:
```json
{
  "agent_configs": [
    {
      "agent": "Code Generator",
      "details": {
        "model": "your-preferred-model",
        "provider": "provider-name",
        "max_tokens": 4000,
        "temperature": 0
      }
    }
  ]
}
```
- If no configuration is provided, execution will fail and an error message will be displayed.
5. Run the application:
```bash
cd <path_to_web_app>
python app.py
```
Option 3: Using complete repository

1. Clone repository:
```bash
git clone https://github.com/pgalko/BambooAI.git
cd BambooAI
```
2. Install dependencies:
```bash
pip install -r requirements.txt
```
3. Configure environment:
```bash
cp .env.example web_app/.env
# Edit .env with your settings
```
4. Configure LLM agents, models and parameters:
```bash
cp LLM_CONFIG_sample.json web_app/LLM_CONFIG.json
```
- Edit web_app/LLM_CONFIG.json in the web_app directory
- Configure each agent with the desired model:
```json
{
  "agent_configs": [
    {
      "agent": "Code Generator",
      "details": {
        "model": "your-preferred-model",
        "provider": "provider-name",
        "max_tokens": 4000,
        "temperature": 0
      }
    }
  ]
}
```
- If no configuration is provided, execution will fail and an error message will be displayed.
5. Run the application:
```bash
cd web_app
python app.py
```
Access the web interface at http://localhost:5000 (5001 if using Docker)
Model Support
API-based Models
- OpenAI
- Google (Gemini)
- Anthropic
- Groq
- Mistral
- DeepSeek
- OpenRouter
Local Models
- Ollama (all models)
- VLLM (all models)
- Various local models
Environment Variables
Required variables in .env:
Model API Keys. Specify the API keys for the models you want to use and have access to.

- `<VENDOR_NAME>_API_KEY`: API keys for selected providers
- `GEMINI_API_KEY`: This needs to be set if you want to use the native Gemini web search tool (Grounding). You can alternatively use Selenium, however it is much slower and not as tightly integrated.

Search and VectorDB API Keys (Optional)

- `PINECONE_API_KEY`: Optional, for the vector database
- `SERPER_API_KEY`: Required for Selenium search

Remote API Endpoints (Optional)

- `REMOTE_OLLAMA`: Optional URL for a remote Ollama server
- `REMOTE_VLLM`: Optional URL for a remote VLLM server

Application Configuration

- `FLASK_SECRET`: Used to sign the session cookie for the web app
- `WEB_SEARCH_MODE`: 'google_ai' to use the Gemini native search tool, or 'selenium' to use the Selenium web driver
- `SELENIUM_WEBDRIVER_PATH`: Path to your Selenium WebDriver. Required if you are using the 'selenium' web search mode.
- `EXECUTION_MODE`: 'local' to run the code executor locally, or 'api' to run the code executor on a remote server or container
- `EXECUTOR_API_BASE_URL`: URL of the remote code executor API. Required if you are using the 'api' execution mode, e.g. http://192.168.1.201:5000
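A quick way to confirm your environment is set up, sketched in Python; which variables belong in the required list depends entirely on the providers and features you enable:

```python
import os

required = ["OPENAI_API_KEY"]   # example: when using OpenAI models
optional = ["GEMINI_API_KEY", "PINECONE_API_KEY", "SERPER_API_KEY",
            "REMOTE_OLLAMA", "REMOTE_VLLM", "FLASK_SECRET"]

missing = [name for name in required if not os.getenv(name)]
if missing:
    raise SystemExit(f"Missing required environment variables: {missing}")

for name in optional:
    print(f"{name}: {'set' if os.getenv(name) else 'not set'}")
```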
Logging
The log for each Run/Thread is stored in logs/bambooai_run_log.json. The file gets overwritten when a new Thread starts.
Consolidated logs are stored in logs/bambooai_consolidated_log.json with a 5 MB size limit and 3-file rotation. Logged information includes:
- Chain ID
- LLM call details (agent, timestamp, model, prompt, response)
- Token usage and costs
- Performance metrics
- Summary statistics per model
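Both log files are JSON, so they can be inspected programmatically. A minimal sketch follows; the record structure may vary between BambooAI versions, so it only prints the top-level shape:

```python
import json

with open("logs/bambooai_consolidated_log.json") as f:
    log = json.load(f)

# Print the top-level structure without assuming specific field names
if isinstance(log, dict):
    print("Top-level keys:", list(log.keys()))
else:
    print(f"{len(log)} log records")
```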
Performance Comparison
For detailed evaluation report, see: Objective Assessment Report
Contributing
Contributions are welcome via pull requests. Focus on maintaining code readability and conciseness.
This project is indexed with DeepWiki by Cognition Labs, providing developers with:
- AI-generated comprehensive documentation
- Interactive code exploration
- Context-aware development guidance
- Visualization of project workflows
Access the project's full interactive documentation: DeepWiki pgalko/BambooAI
Notes
- Supports multiple model providers and local execution
- Exercise caution with code execution
- Monitor token usage
- Development is ongoing
Contact
palo@bambooai.io
Todo
- Future improvements planned
Owner
- Name: pgalko
- Login: pgalko
- Kind: user
- Location: Melbourne, Australia
- Website: https://www.athletedata.net
- Twitter: pgalko
- Repositories: 3
- Profile: https://github.com/pgalko
If knowledge is power, knowing what we don’t know is wisdom
GitHub Events
Total
- Create event: 19
- Commit comment event: 1
- Release event: 15
- Issues event: 8
- Watch event: 202
- Delete event: 9
- Issue comment event: 24
- Push event: 91
- Pull request review event: 1
- Pull request event: 42
- Gollum event: 8
- Fork event: 24
Last Year
- Create event: 19
- Commit comment event: 1
- Release event: 15
- Issues event: 8
- Watch event: 202
- Delete event: 9
- Issue comment event: 24
- Push event: 91
- Pull request review event: 1
- Pull request event: 42
- Gollum event: 8
- Fork event: 24
Committers
Last synced: 9 months ago
Top Committers
| Name | Email | Commits |
|---|---|---|
| pgalko | 3****o | 310 |
| Pablo | p****r@h****m | 3 |
| Ranu Yulianto | r****o@g****m | 2 |
| Murtuza Chawala | c****u@g****m | 1 |
| Ikko Eltociear Ashimine | e****r@g****m | 1 |
Issues and Pull Requests
Last synced: 6 months ago
All Time
- Total issues: 17
- Total pull requests: 42
- Average time to close issues: about 1 month
- Average time to close pull requests: about 11 hours
- Total issue authors: 14
- Total pull request authors: 6
- Average comments per issue: 2.29
- Average comments per pull request: 0.5
- Merged pull requests: 27
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 7
- Pull requests: 37
- Average time to close issues: 3 months
- Average time to close pull requests: about 9 hours
- Issue authors: 7
- Pull request authors: 3
- Average comments per issue: 1.29
- Average comments per pull request: 0.43
- Merged pull requests: 22
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- Murtuza-Chawala (3)
- pnmartinez (2)
- dineshbikaner (1)
- mohitpawar473 (1)
- metalshanked (1)
- HuntZhaozq (1)
- Muhammad-Ahsan-Rasheed (1)
- JecintaMulongo (1)
- kalyanm2305 (1)
- AbhijitManepatil (1)
- PratikRules (1)
- blakkd (1)
- BACMiao (1)
- LeoRigasaki (1)
Pull Request Authors
- rnYulianto (21)
- pgalko (13)
- pnmartinez (6)
- AartGoossens (3)
- eltociear (1)
- Murtuza-Chawala (1)
Top Labels
Issue Labels
Pull Request Labels
Packages
- Total packages: 2
- Total downloads: unknown
- Total dependent packages: 0 (may contain duplicates)
- Total dependent repositories: 0 (may contain duplicates)
- Total versions: 102
proxy.golang.org: github.com/pgalko/bambooai
- Documentation: https://pkg.go.dev/github.com/pgalko/bambooai#section-documentation
- License: mit
- Latest release: v0.4.25 (published 7 months ago)
Rankings
proxy.golang.org: github.com/pgalko/BambooAI
- Documentation: https://pkg.go.dev/github.com/pgalko/BambooAI#section-documentation
- License: mit
- Latest release: v0.4.25 (published 7 months ago)
Rankings
Dependencies
- newspaper3k *
- openai *
- pandas *
- termcolor *
- tiktoken *