https://github.com/bytedance/trae-agent
Trae Agent is an LLM-based agent for general purpose software engineering tasks.
Science Score: 36.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ○ CITATION.cff file
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ✓ Academic publication links: links to arxiv.org
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (16.4%) to scientific vocabulary
Keywords
Repository
Trae Agent is an LLM-based agent for general purpose software engineering tasks.
Basic Info
- Host: GitHub
- Owner: bytedance
- License: mit
- Language: Python
- Default Branch: main
- Homepage: https://www.trae.ai/
- Size: 2.79 MB
Statistics
- Stars: 9,314
- Watchers: 51
- Forks: 944
- Open Issues: 81
- Releases: 0
Topics
Metadata Files
README.md
Trae Agent
Trae Agent is an LLM-based agent for general purpose software engineering tasks. It provides a powerful CLI interface that can understand natural language instructions and execute complex software engineering workflows using various tools and LLM providers.
For technical details, please refer to our technical report.
Project Status: The project is under active development. Please refer to docs/roadmap.md and CONTRIBUTING.md if you would like to help improve Trae Agent.
Difference with Other CLI Agents: Trae Agent offers a transparent, modular architecture that researchers and developers can easily modify, extend, and analyze, making it an ideal platform for studying AI agent architectures, conducting ablation studies, and developing novel agent capabilities. This research-friendly design enables the academic and open-source communities to contribute to and build upon the foundational agent framework, fostering innovation in the rapidly evolving field of AI agents.
✨ Features
- 🌊 Lakeview: Provides short, concise summaries of agent steps
- 🤖 Multi-LLM Support: Works with OpenAI, Anthropic, Doubao, Azure, OpenRouter, Ollama and Google Gemini APIs
- 🛠️ Rich Tool Ecosystem: File editing, bash execution, sequential thinking, and more
- 🎯 Interactive Mode: Conversational interface for iterative development
- 📊 Trajectory Recording: Detailed logging of all agent actions for debugging and analysis
- ⚙️ Flexible Configuration: YAML-based configuration with environment variable support
- 🚀 Easy Installation: Simple pip-based installation
🚀 Installation
Requirements
- UV (https://docs.astral.sh/uv/)
- API key for your chosen provider (OpenAI, Anthropic, Google Gemini, OpenRouter, etc.)
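If UV is not yet installed, the standalone installer from the UV documentation is one option; a minimal sketch (check https://docs.astral.sh/uv/ for the recommended method on your platform):
```bash
# Install UV via the standalone installer (see the UV docs for alternatives such as pipx or Homebrew)
curl -LsSf https://astral.sh/uv/install.sh | sh
```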
Setup
```bash
git clone https://github.com/bytedance/trae-agent.git
cd trae-agent
uv sync --all-extras
source .venv/bin/activate
```
⚙️ Configuration
YAML Configuration (Recommended)
Copy the example configuration file:
```bash
cp trae_config.yaml.example trae_config.yaml
```
Edit trae_config.yaml with your API credentials and preferences:
```yaml
agents:
  trae_agent:
    enable_lakeview: true
    model: trae_agent_model  # the model configuration name for Trae Agent
    max_steps: 200  # max number of agent steps
    tools:  # tools used with Trae Agent
      - bash
      - str_replace_based_edit_tool
      - sequentialthinking
      - task_done

model_providers:  # model providers configuration
  anthropic:
    api_key: your_anthropic_api_key
    provider: anthropic
  openai:
    api_key: your_openai_api_key
    provider: openai

models:
  trae_agent_model:
    model_provider: anthropic
    model: claude-sonnet-4-20250514
    max_tokens: 4096
    temperature: 0.5
```
Note: The trae_config.yaml file is ignored by git to protect your API keys.
Using Base URL
In some cases you may need a custom base URL for the API. Add a base_url field after provider, as in the following example:
```yaml
openai:
  api_key: your_openrouter_api_key
  provider: openai
  base_url: https://openrouter.ai/api/v1
```
Note: For field formatting, use spaces only. Tabs (\t) are not allowed.
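A quick way to check the config file for stray tabs before running the agent; a sketch assuming GNU grep (the -P flag) and the file name created above:
```bash
# Print any lines of trae_config.yaml that contain a tab character, with line numbers
grep -nP '\t' trae_config.yaml && echo "Tabs found: replace them with spaces" || echo "No tabs found"
```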
Environment Variables (Alternative)
You can also configure API keys using environment variables and store them in the .env file:
```bash
export OPENAI_API_KEY="your-openai-api-key"
export OPENAI_BASE_URL="your-openai-base-url"
export ANTHROPIC_API_KEY="your-anthropic-api-key"
export ANTHROPIC_BASE_URL="your-anthropic-base-url"
export GOOGLE_API_KEY="your-google-api-key"
export GOOGLE_BASE_URL="your-google-base-url"
export OPENROUTER_API_KEY="your-openrouter-api-key"
export OPENROUTER_BASE_URL="https://openrouter.ai/api/v1"
export DOUBAO_API_KEY="your-doubao-api-key"
export DOUBAO_BASE_URL="https://ark.cn-beijing.volces.com/api/v3/"
```
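Because python-dotenv is a project dependency, the same values can also live in a .env file at the project root rather than being exported in your shell; a minimal sketch (key names match the variables above, values are placeholders):
```bash
# .env (keep this file out of version control so keys are never committed)
OPENAI_API_KEY=your-openai-api-key
ANTHROPIC_API_KEY=your-anthropic-api-key
OPENROUTER_BASE_URL=https://openrouter.ai/api/v1
```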
MCP Services (Optional)
To enable Model Context Protocol (MCP) services, add an mcp_servers section to your configuration:
```yaml
mcp_servers:
  playwright:
    command: npx
    args:
      - "@playwright/mcp@0.0.27"
```
Configuration Priority: Command-line arguments > Configuration file > Environment variables > Default values
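For example, because command-line arguments rank highest, a --provider/--model pair passed to trae-cli run overrides the default model in trae_config.yaml for that single invocation (a sketch using flags shown later in this README; the task text is illustrative):
```bash
# trae_config.yaml may default to Anthropic, but this run uses OpenAI instead
trae-cli run "Summarize the failing tests" --provider openai --model gpt-4o
```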
Legacy JSON Configuration: If using the older JSON format, see docs/legacy_config.md. We recommend migrating to YAML.
📖 Usage
Basic Commands
```bash
# Simple task execution
trae-cli run "Create a hello world Python script"

# Check configuration
trae-cli show-config

# Interactive mode
trae-cli interactive
```
Provider-Specific Examples
```bash
# OpenAI
trae-cli run "Fix the bug in main.py" --provider openai --model gpt-4o

# Anthropic
trae-cli run "Add unit tests" --provider anthropic --model claude-sonnet-4-20250514

# Google Gemini
trae-cli run "Optimize this algorithm" --provider google --model gemini-2.5-flash

# OpenRouter (access to multiple providers)
trae-cli run "Review this code" --provider openrouter --model "anthropic/claude-3-5-sonnet"
trae-cli run "Generate documentation" --provider openrouter --model "openai/gpt-4o"

# Doubao
trae-cli run "Refactor the database module" --provider doubao --model doubao-seed-1.6

# Ollama (local models)
trae-cli run "Comment this code" --provider ollama --model qwen3
```
Advanced Options
```bash
# Custom working directory
trae-cli run "Add tests for utils module" --working-dir /path/to/project

# Save execution trajectory
trae-cli run "Debug authentication" --trajectory-file debug_session.json

# Force patch generation
trae-cli run "Update API endpoints" --must-patch

# Interactive mode with custom settings
trae-cli interactive --provider openai --model gpt-4o --max-steps 30
```
Docker Mode Commands
Preparation
Important: Make sure Docker is installed and configured in your environment.
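A quick way to confirm the Docker daemon is reachable before using the Docker-mode flags (standard Docker CLI commands, not specific to Trae Agent):
```bash
# Both commands should succeed without errors if Docker is installed and the daemon is running
docker --version
docker info > /dev/null && echo "Docker daemon is reachable"
```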
Usage
```bash
# Specify a Docker image to run the task in a new container
trae-cli run "Add tests for utils module" --docker-image python:3.11

# Specify a Docker image to run the task in a new container and mount the directory
trae-cli run "write a script to print helloworld" --docker-image python:3.12 --working-dir test_workdir/

# Attach to an existing Docker container by ID (--working-dir is invalid with --docker-container-id)
trae-cli run "Update API endpoints" --docker-container-id 91998a56056c

# Specify an absolute path to a Dockerfile to build an environment
trae-cli run "Debug authentication" --dockerfile-path test_workspace/Dockerfile

# Specify a path to a local Docker image file (tar archive) to load
trae-cli run "Fix the bug in main.py" --docker-image-file test_workspace/trae_agent_custom.tar

# Remove the Docker container after finishing the task (the container is kept by default)
trae-cli run "Add tests for utils module" --docker-image python:3.11 --docker-keep false
```
Interactive Mode Commands
In interactive mode, you can use:
- Type any task description to execute it
- status - Show agent information
- help - Show available commands
- clear - Clear the screen
- exit or quit - End the session
🛠️ Advanced Features
Available Tools
Trae Agent provides a comprehensive toolkit for software engineering tasks including file editing, bash execution, structured thinking, and task completion. For detailed information about all available tools and their capabilities, see docs/tools.md.
Trajectory Recording
Trae Agent automatically records detailed execution trajectories for debugging and analysis:
```bash
# Auto-generated trajectory file
trae-cli run "Debug the authentication module"
# Saves to: trajectories/trajectory_YYYYMMDD_HHMMSS.json

# Custom trajectory file
trae-cli run "Optimize database queries" --trajectory-file optimization_debug.json
```
Trajectory files contain LLM interactions, agent steps, tool usage, and execution metadata. For more details, see docs/TRAJECTORY_RECORDING.md.
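Since trajectory files are plain JSON, they can be inspected with standard tooling; a sketch (the file name assumes the --trajectory-file example above, and the exact schema is described in docs/TRAJECTORY_RECORDING.md):
```bash
# Pretty-print a recorded trajectory and skim the first lines
python -m json.tool optimization_debug.json | head -n 40
```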
🔧 Development
Contributing
For contribution guidelines, please refer to CONTRIBUTING.md.
Troubleshooting
Import Errors:
```bash
PYTHONPATH=. trae-cli run "your task"
```
API Key Issues:
```bash
# Verify API keys
echo $OPENAI_API_KEY
trae-cli show-config
```
Command Not Found:
```bash
uv run trae-cli run "your task"
```
Permission Errors:
```bash
chmod +x /path/to/your/project
```
📄 License
This project is licensed under the MIT License - see the LICENSE file for details.
✍️ Citation
```bibtex
@article{traeresearchteam2025traeagent,
  title={Trae Agent: An LLM-based Agent for Software Engineering with Test-time Scaling},
  author={Trae Research Team and Pengfei Gao and Zhao Tian and Xiangxin Meng and Xinchen Wang and Ruida Hu and Yuanan Xiao and Yizhou Liu and Zhao Zhang and Junjie Chen and Cuiyun Gao and Yun Lin and Yingfei Xiong and Chao Peng and Xia Liu},
  year={2025},
  eprint={2507.23370},
  archivePrefix={arXiv},
  primaryClass={cs.SE},
  url={https://arxiv.org/abs/2507.23370},
}
```
🙏 Acknowledgments
We thank Anthropic for building the anthropic-quickstart project that served as a valuable reference for the tool ecosystem.
Owner
- Name: Bytedance Inc.
- Login: bytedance
- Kind: organization
- Location: Singapore
- Website: https://opensource.bytedance.com
- Twitter: ByteDanceOSS
- Repositories: 255
- Profile: https://github.com/bytedance
Issues and Pull Requests
Last synced: 5 months ago
All Time
- Total issues: 76
- Total pull requests: 120
- Average time to close issues: 2 days
- Average time to close pull requests: 1 day
- Total issue authors: 62
- Total pull request authors: 43
- Average comments per issue: 0.8
- Average comments per pull request: 0.69
- Merged pull requests: 57
- Bot issues: 0
- Bot pull requests: 0
Past Year
- Issues: 76
- Pull requests: 120
- Average time to close issues: 2 days
- Average time to close pull requests: 1 day
- Issue authors: 62
- Pull request authors: 43
- Average comments per issue: 0.8
- Average comments per pull request: 0.69
- Merged pull requests: 57
- Bot issues: 0
- Bot pull requests: 0
Top Authors
Issue Authors
- rupaut98 (5)
- JasonHonKL (4)
- parthasdey2304 (2)
- daicidemeigui (2)
- chao-peng (2)
- zzy2002 (2)
- zhimin-z (2)
- 975523482 (2)
- suvaidkhan (2)
- Arthur-WWW (1)
- owenzhao (1)
- yotta-coder (1)
- vrajpat3ll (1)
- kuatroka (1)
- OkGeneraL (1)
Pull Request Authors
- JasonHonKL (31)
- chao-peng (16)
- rupaut98 (9)
- Momena-akhtar (6)
- trae-agent (4)
- luke396 (4)
- yaohaowei0914 (3)
- lingyaochu (3)
- zanghu (3)
- renweilong7 (3)
- sirix-v (2)
- TbhT (2)
- liangyuanpeng (2)
- L-Qun (2)
- KayanoLiam (2)
Top Labels
Issue Labels
Pull Request Labels
Dependencies
- anthropic >=0.54.0
- click >=8.0.0
- openai >=1.86.0
- pydantic >=2.0.0
- python-dotenv >=1.0.0
- rich >=13.0.0
- typing-extensions >=4.0.0
- annotated-types 0.7.0
- anthropic 0.54.0
- anyio 4.9.0
- certifi 2025.4.26
- click 8.2.1
- colorama 0.4.6
- coverage 7.9.0
- distro 1.9.0
- h11 0.16.0
- httpcore 1.0.9
- httpx 0.28.1
- idna 3.10
- iniconfig 2.1.0
- jiter 0.10.0
- markdown-it-py 3.0.0
- mdurl 0.1.2
- openai 1.86.0
- packaging 25.0
- pluggy 1.6.0
- pydantic 2.11.5
- pydantic-core 2.33.2
- pygments 2.19.1
- pytest 8.4.0
- pytest-asyncio 1.0.0
- pytest-cov 6.2.0
- pytest-mock 3.14.1
- python-dotenv 1.1.0
- rich 14.0.0
- sniffio 1.3.1
- tqdm 4.67.1
- trae-agent 0.1.0
- typing-extensions 4.14.0
- typing-inspection 0.4.1