simpleagent
This is a tutorial project. It implements an LLM agent with several tools, without relying on any agent framework such as LangChain.
Science Score: 44.0%
This score indicates how likely this project is to be science-related based on various indicators:
- ✓ CITATION.cff file: found
- ✓ codemeta.json file: found
- ✓ .zenodo.json file: found
- ○ DOI references
- ○ Academic publication links
- ○ Academic email domains
- ○ Institutional organization owner
- ○ JOSS paper metadata
- ○ Scientific vocabulary similarity: low similarity (15.0%) to scientific vocabulary
Repository
This is a tutorial project. It implements an LLM agent with several tools, without relying on any agent framework such as LangChain.
Basic Info
- Host: GitHub
- Owner: penghanli
- License: bsd-3-clause
- Language: Python
- Default Branch: main
- Size: 28.3 KB
Statistics
- Stars: 1
- Watchers: 1
- Forks: 0
- Open Issues: 0
- Releases: 0
Metadata Files
README.md
SimpleAgent
Introduction
This is a tutorial project for new researchers of LLMs, especially those who have background knowledge of LLMs and LLM agents but lack coding experience. The project implements an LLM agent with several tools, without any agent framework such as LangChain. Additionally, to reduce the difficulty of implementation and customization, the agent is divided into four parts: an LLM core, a system prompt, a memory cache, and a tool caller. All elements are built in a simple way that minimizes dependencies on other libraries.
You can debug main.py to see all the running details of the LLM agent, including the influence of system prompts, history management, the communication protocol between the LLM and the agent tools, and how to create an agent tool.
I believe you will gain a better understanding of LLM agent implementation details after reproducing this project.
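The four-part decomposition described above can be sketched roughly as follows. All class and method names here are illustrative assumptions, not the repository's actual API:

```python
# Minimal sketch of the four-component split: LLM core, system prompt,
# memory cache, and tool caller. All names are hypothetical.

class LLMCore:
    """Wraps a single language model (API-based or locally deployed)."""
    def generate(self, prompt: str) -> str:
        # Placeholder: a real core would call an API or a local model here.
        return "FINISH: (model output for) " + prompt

class MemoryCache:
    """Stores the conversation history as (role, text) pairs."""
    def __init__(self):
        self.history = []
    def add(self, role: str, text: str):
        self.history.append((role, text))
    def as_prompt(self) -> str:
        return "\n".join(f"{role}: {text}" for role, text in self.history)

class ToolCaller:
    """Maps tool names to plain Python callables."""
    def __init__(self, tools: dict):
        self.tools = tools
    def call(self, name: str, arg: str) -> str:
        return self.tools[name](arg)

class SimpleAgent:
    """Ties the components together: system prompt + memory + core + tools."""
    def __init__(self, system_prompt: str, core: LLMCore, tools: ToolCaller):
        self.memory = MemoryCache()
        self.memory.add("system", system_prompt)
        self.core = core
        self.tools = tools
```

Keeping each part behind such a small interface is what makes the core, prompt, memory, and tools independently swappable.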
Customization
Because this project implements an LLM agent without any agent library, you can customize every element of the agent as you wish.
- You can change the LLM core. The llmcore package contains four LLMs: two called through an API and two deployed locally. You can add any model you want; using another language model is easy.
- You can adjust the system prompt to adapt the agent to your tasks, and modify the communication protocol between the LLM and the tools. systemtemplates.py is a good example for reference.
- Custom tools are also supported. You only need to follow the communication protocol in the system prompt, and the agent will automatically decide when to use the tools you provide.
- You can also change the history-management method to keep the agent focused on the information you care about, or to reduce token and computational consumption.
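As an illustration of adding a custom tool, here is a minimal hypothetical dispatcher. The real communication protocol is whatever the project's system prompt defines; this sketch simply assumes the model emits lines of the form `TOOL: <name> | <argument>`:

```python
# Hypothetical tool dispatch. The actual protocol lives in the system
# prompt; the "TOOL: name | arg" format below is an assumption.
import re

def word_count(text: str) -> str:
    """A trivial custom tool: count words in the given text."""
    return str(len(text.split()))

TOOLS = {"word_count": word_count}

def dispatch(model_output: str) -> str:
    """Parse a (hypothetical) tool-call line and run the matching tool."""
    m = re.match(r"TOOL:\s*(\w+)\s*\|\s*(.*)", model_output)
    if not m:
        return model_output          # no tool call: pass the text through
    name, arg = m.group(1), m.group(2)
    return TOOLS[name](arg)

print(dispatch("TOOL: word_count | the quick brown fox"))  # → 4
```

To add a tool under this scheme, you register a callable in the table and describe its calling convention in the system prompt; the model then decides when to emit the matching call line.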
Benchmark
All tests used GPT-4o as the LLM core.
Execution Accuracy
This metric was tested on 110 questions covering normal questions, weather search, Linux operations, web shopping, MySQL database manipulation, user interaction, hybrid tasks, complicated tasks, and ~~unsafe tasks~~ (GPT-4o shows a good refusal rate on these unsafe questions and commands).
You can download the database at this link; it offers several different structures, so you can choose the one that suits you best.
On these questions, the GPT-4o-based agent performs well: it successfully answers all simple questions and executes all simple tasks. Only two hybrid tasks (success rate 18/20) and three complicated tasks (success rate 7/10) failed to execute. However, the agent has an automatic error-correction function: when it encounters a fatal error, it automatically adds auxiliary prompts to help itself complete the task. And most of the time, rerunning the agent resolves logical errors, thanks to GPT-4o's strong reasoning ability.
Token Consumption
The agent consumes about 5,000 tokens per step, and a task takes 2-10 steps depending on its complexity.
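Taking these figures at face value, the implied per-task budget is easy to estimate:

```python
# Rough per-task token budget implied by the numbers above.
tokens_per_step = 5_000
min_steps, max_steps = 2, 10

lo = tokens_per_step * min_steps   # lower bound
hi = tokens_per_step * max_steps   # upper bound
print(f"{lo:,} to {hi:,} tokens per task")  # 10,000 to 50,000 tokens per task
```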
Quick Start
I strongly recommend using a code editor with a debugger to inspect all the running details of the LLM agent.
Before you run the agent, install all required libraries with `pip install -r requirements.txt`.
Then set your LLM API key (GLM, GPT, or any other model you added) and the tool config, and debug main.py.
If you still want to run the agent from the terminal, you can use a command like `python main.py --llm_name gpt-4o-api --system_instruction system_template --api_key your_api_key --weather_api_key your_weather_api_key --task "what is the weather in Singapore today?"` (note the quotes around the task, since it contains spaces).
Agent Design
When the user raises a task, the agent performs the following steps to complete it:
- Merge the system prompt into the history manager.
- Add the question or task to the history manager.
- Use the history to generate the agent trajectory and feed it into the LLM.
- The LLM outputs either a tool call or a decision that the task is finished, together with the final outcome.
- If the LLM decides to use a tool in its output message, the tool caller selects the relevant tool and executes it with the given arguments.
- Keep reasoning and using tools, updating the history and trajectory, until the task is solved.
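The loop above can be sketched in a few lines. All names here are hypothetical (the actual control flow lives in main.py), and the stub model exists only so the loop terminates:

```python
# Sketch of the agent loop described above. Step numbers in comments
# refer to the bullet list; the "TOOL: name | arg" format is an assumption.

def run_agent(task, model, tools, system_prompt, max_steps=10):
    history = [("system", system_prompt), ("user", task)]        # steps 1-2
    for _ in range(max_steps):
        trajectory = "\n".join(f"{r}: {t}" for r, t in history)  # step 3
        output = model(trajectory)                               # step 4
        if output.startswith("TOOL:"):                           # step 5
            name, arg = output[5:].split("|", 1)
            result = tools[name.strip()](arg.strip())
            history.append(("tool", result))                     # step 6
        else:
            return output        # the model declared the task finished
    return "step limit reached"

# Stub model: call the echo tool once, then finish with its result.
def stub_model(trajectory):
    if "tool: " in trajectory:
        return "done: " + trajectory.rsplit("tool: ", 1)[1]
    return "TOOL: echo | hello"

print(run_agent("say hello", stub_model, {"echo": lambda s: s}, "Be helpful."))
# → done: hello
```

Bounding the loop with `max_steps` matches the observed 2-10 steps per task and prevents a confused model from looping forever.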
Owner
- Name: phl
- Login: penghanli
- Kind: user
- Company: University of Electronic Science and Technology of China (电子科技大学)
- Repositories: 2
- Profile: https://github.com/penghanli
Citation (CITATION.cff)
cff-version: 1.2.0
message: "If you use SimpleAgent framework, please cite it as below."
authors:
  - family-names: "Peng"
    given-names: "Hanli"
    orcid: "https://orcid.org/0009-0001-9086-8473"
title: "SimpleAgent"
version: 1.0.0
date-released: 2025-02-17
url: "https://github.com/penghanli/SimpleAgent"
GitHub Events
Total
- Watch event: 3
- Push event: 3
- Create event: 4
Last Year
- Watch event: 3
- Push event: 3
- Create event: 4